https://arxiv.org/abs/2209.05976
Local boundedness for $p$-Laplacian with degenerate coefficients
We study local boundedness for subsolutions of nonlinear nonuniformly elliptic equations whose prototype is given by $\nabla \cdot (\lambda |\nabla u|^{p-2}\nabla u)=0$, where the variable coefficient $0\leq\lambda$ and its inverse $\lambda^{-1}$ are allowed to be unbounded. Assuming certain integrability conditions on $\lambda$ and $\lambda^{-1}$ depending on $p$ and the dimension, we show local boundedness. Moreover, we provide counterexamples to regularity showing that the integrability conditions are optimal for every $p>1$.
\section{Introduction} In this note, we study local boundedness of weak (sub)solutions of non-uniformly elliptic quasi-linear equations of the form \begin{equation}\label{eq} \nabla \cdot a(x,\nabla u)=0\qquad\mbox{in $\Omega$}, \end{equation} where $\Omega\subset\R^d$ with $d\geq2$ and $a:\Omega\times\R^d\to\R^d$ is a Caratheodory function. The main examples that we have in mind are $p$-Laplace type operators with variable coefficients, that is, there exist $p>1$ and $A:\Omega\to\R^{d\times d}$ such that $a(x,\xi)=A(x)|\xi|^{p-2}\xi$ for all $x\in\Omega$ and $\xi\in\R^d$. In order to measure the ellipticity of $a$, we introduce for fixed $p>1$ \begin{equation}\label{def1} \lambda(x) := \inf_{\xi \in \R^d \setminus \{ 0 \}} \frac{a(x,\xi)\cdot \xi}{|\xi|^p},\qquad \mu(x) := \sup_{\xi \in \R^d \setminus \{ 0 \}} \frac{|a(x,\xi)|^p}{(a(x,\xi)\cdot \xi)^{p-1}}, \end{equation} and suppose that $\lambda$ and $\mu$ are nonnegative. In the uniformly elliptic setting, that is, when there exist $0<m\leq M<\infty$ such that $m\leq \lambda\leq\mu\leq M$ in $\Omega$, solutions to \eqref{eq} are locally bounded, H\"older continuous and even satisfy a Harnack inequality, see e.g.\ the classical results of Ladyzhenskaya \& Ural'tseva, Serrin and Trudinger \cite{LU68,T67}. \smallskip In this contribution, we are interested in a nonuniformly elliptic setting and assume that $\lambda^{-1}\in L^t(\Omega)$ and $\mu\in L^s(\Omega)$ for some integrability exponents $s$ and $t$. In \cite{BS19a}, we studied this in the case of linear nonuniformly elliptic equations, that is, $a(x,\xi)=A(x)\xi$ corresponding to the case $p=2$, and showed local boundedness and a Harnack inequality for weak solutions of \eqref{eq} provided $\frac1s+\frac1t<\frac2{d-1}$. The results of \cite{BS19a} improved classical findings of Trudinger \cite{T71,T73} (see also \cite{MS68}) from the 1970s and are optimal in view of counterexamples constructed by Franchi et al.\ in \cite{FSS98}. 
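To illustrate the quantities in \eqref{def1}, consider the prototype $a(x,\xi)=\lambda(x)|\xi|^{p-2}\xi$ with a nonnegative weight $\lambda$. Since $a(x,\xi)\cdot\xi=\lambda(x)|\xi|^p$ and $|a(x,\xi)|=\lambda(x)|\xi|^{p-1}$, a direct computation gives \begin{equation*} \inf_{\xi \in \R^d \setminus \{ 0 \}}\frac{a(x,\xi)\cdot\xi}{|\xi|^p}=\lambda(x) \qquad\mbox{and}\qquad \sup_{\xi \in \R^d \setminus \{ 0 \}}\frac{|a(x,\xi)|^p}{(a(x,\xi)\cdot\xi)^{p-1}} =\sup_{\xi \in \R^d \setminus \{ 0 \}}\frac{\lambda(x)^p|\xi|^{p(p-1)}}{\lambda(x)^{p-1}|\xi|^{p(p-1)}}=\lambda(x), \end{equation*} so that for the weighted $p$-Laplacian both ellipticity functions coincide with the weight, $\lambda=\mu$, and the integrability assumptions below reduce to $\lambda\in L^s(\Omega)$ and $\lambda^{-1}\in L^t(\Omega)$.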
In this manuscript we extend these results to the more general situation of quasilinear elliptic equations with $p$-growth as described above. More precisely, we show \begin{theorem}\label{T:1} Let $d \ge 2, p >1$, and let $\Omega \subset \R^d$. Moreover, let $s\in[1,\infty]$ and $t\in(1/(p-1),\infty]$ satisfy \begin{equation}\label{pqcond} \frac 1s + \frac 1t < \frac p{d-1}. \end{equation} Let $a : \Omega \times \R^d \to \R^d$ be a Caratheodory function with $a(\cdot,0)\equiv0$ such that $\lambda$ and $\mu$ defined in~\eqref{def1} satisfy $\mu \in L^s(\Omega)$ and $\frac 1\lambda \in L^t(\Omega)$. Then any weak subsolution of \eqref{eq} is locally bounded from above in $\Omega$. \end{theorem} \begin{remark} Note that Theorem~\ref{T:1}, restricted to the case $p=2$, recovers the local boundedness part of \cite[Theorem~1.1]{BS19a}. \end{remark} \begin{remark} In \cite{CMM18}, Cupini, Marcellini and Mascolo studied local boundedness of local minimizers of nonuniformly elliptic variational integrals of the form $\int_\Omega f(x,\nabla v)\,dx$ where $f$ satisfies \begin{equation}\label{growth:nonuniformpq} \lambda(x)|\xi|^p\leq f(x,\xi)\leq \mu(x)+\mu(x)|\xi|^q\qquad\mbox{with $\lambda^{-1}\in L^t(\Omega)$ and $\mu\in L^s(\Omega)$}. \end{equation} They proved local boundedness under the relation $\frac1{p t}+\frac1{q s}+\frac1p-\frac1q<\frac1d$ (see also \cite{BCM20} for related results). Considering the specific case $f(x,\xi)=\lambda(x)|\xi|^p$, the result of \cite{CMM18} implies local boundedness of solutions to $\nabla \cdot(\lambda(x)|\nabla u|^{p-2}\nabla u)=0$ provided $\lambda^{-1}\in L^t(\Omega)$ and $\lambda\in L^s(\Omega)$ with $\frac1{s}+\frac1{t}<\frac{p}d$, which is more restrictive than assumption \eqref{pqcond} in Theorem~\ref{T:1}. 
It would be interesting to investigate if the methods of the present paper can be combined with the ones of \cite{CMM18} to obtain local boundedness for minimizers of functionals satisfying \eqref{growth:nonuniformpq} assuming $\frac1{p t}+\frac1{q s}+\frac1p-\frac1q<\frac1{d-1}$. Note that in the specific case $s=t=\infty$, this follows from \cite{HS19}. \end{remark} The proof of Theorem~\ref{T:1} is presented in Section~\ref{sec:positive} and follows a variation of the well-known Moser iteration method. The main new ingredient compared to earlier works \cite{T71,CMM18} lies in an optimized choice of certain cut-off functions -- an idea that we first used in \cite{BS19a} for linear nonuniformly elliptic equations (see also \cite{AD,BS4,zhang} for recent applications to linear parabolic equations). As mentioned above, an example constructed in \cite{FSS98} shows that condition \eqref{pqcond} is optimal for the conclusion of Theorem~\ref{T:1} in the case $p=2$. In the second main result of this paper, we show -- building on the construction of \cite{FSS98} -- that condition \eqref{pqcond} is optimal for the conclusion of Theorem~\ref{T:1} for all $p\in(1,\infty)$. More precisely, we have \begin{theorem}\label{T:negative} Let $d\geq3$ and $1+\frac{1}{d-2}<p<\infty$ and let $s\geq1$ and $t>\frac{1}{p-1}$ be such that $\frac1s+\frac1t\geq \frac{p}{d-1}$ and $\frac{1}{1+1/t}p<d-1$. Then there exists $\lambda:B(0,1)\to(0,\infty)$ satisfying $\lambda\in L^s(B_1)$ and $\lambda^{-1}\in L^t(B_1)$ and an unbounded weak subsolution of \begin{equation}\label{eq:negative} -\nabla \cdot (\lambda |\nabla v|^{p-2}\nabla v)=0 \end{equation} in $B(0,1)$. Moreover, the same conclusion is valid for $d\geq3$, $1<p\leq 1+\frac{1}{d-2}$ and $s\geq1$ and $t>\frac{1}{p-1}$ satisfying the strict inequalities $\frac1s+\frac1t> \frac{p}{d-1}$ and $\frac{t}{t+1}p<d-1$. 
\end{theorem} In particular, we see that condition \eqref{pqcond} is sharp on the scale of Lebesgue-integrability for the conclusion of Theorem~\ref{T:1}. We note that in the particularly interesting case $p=2$ and $d=3$ the construction in Theorem~\ref{T:negative} fails in the critical case $\frac1s+\frac1t=\frac{p}{d-1}$, see \cite{AD} for counterexamples to local boundedness for related problems in $d=3$. Let us now briefly discuss a similar but different instance of non-uniform ellipticity which is one of the many areas within the Calculus of Variations where G.\ Mingione made significant contributions. Consider variational integrals \begin{equation}\label{eq:int} \int_\Omega F(x,\nabla u)\,dx, \end{equation} where the integrand $F$ satisfies $(p,q)$ growth conditions of the form \begin{equation}\label{growth:pq} |\xi|^p\lesssim F(x,\xi)\lesssim 1+|\xi|^q\qquad1<p\leq q<\infty, \end{equation} which were first systematically studied by Marcellini in \cite{Mar89,Mar91}; see also the recent reviews \cite{Min06,MR21}. The focal point in the regularity theory for those functionals is to obtain Lipschitz-bounds on the minimizer. Indeed, once boundedness of $|\nabla u|$ is proven, the unbalanced growth in \eqref{growth:pq} becomes irrelevant, and there is a huge literature dedicated to Lipschitz estimates under various assumptions on $F$, see e.g.\ the interior estimates \cite{BM20,BS19c,BS22,ELM02} in the autonomous case, \cite{BCM18,CM15,CMMP21,DM19,DM21,DM22,EMM19,HO21} in the non-autonomous case, \cite{BDMS,DP22} for Lipschitz-bounds at the boundary, and also examples where the regularity of minimizers fails \cite{BDS20,G87,Mar91,ELM04,FMM04}. Finally, we explain a link between functionals with $(p,q)$-growth and (linear) equations with unbounded coefficients. Consider the autonomous case $F(x,\xi)=F(\xi)$ and let $u\in W^{1,p}(\Omega)$ be a local minimizer of \eqref{eq:int}. 
Linearizing the corresponding Euler--Lagrange equation yields (formally) $$ \nabla \cdot D^2F(\nabla u)\nabla \partial_i u=0. $$ Assuming $(p,q)$-growth with $p=2$ of the form $|\zeta|^2\lesssim D^2F(\xi)\zeta\cdot\zeta\lesssim (1+|\xi|)^{q-2}|\zeta|^2$ implies that $|D^2F(\nabla u)|\in L_{\rm loc}^{\frac{2}{q-2}}(\Omega)$. Hence condition \eqref{pqcond} with $p=2$ yields local boundedness of $\partial_i u$ if $\frac{q-2}{2}<\frac{2}{d-1}$, which is the currently best known general bound ensuring Lipschitz-continuity of local minimizers of \eqref{eq:int} -- this reasoning was made rigorous in \cite{BS19c} for $p\geq2$ (see also \cite{BS22} for the case $p\in(1,\infty)$). \section{Local boundedness, proof of Theorem~\ref{T:1}}\label{sec:positive} Before we prove Theorem~\ref{T:1}, we introduce the notion of solution that we consider here. \begin{definition}\label{def:solution} Fix a domain $\Omega\subset\R^d$ and a Caratheodory function $a:\Omega\times\R^d\to\R^{d}$ such that for a fixed $p\in(1,\infty)$ the functions $\lambda,\mu\geq0$ given in \eqref{def1} satisfy $\frac1\lambda\in L^{\frac{1}{p-1}}(\Omega)$ and $\mu\in L^1(\Omega)$. The spaces $H_0^{1,p}(\Omega,a)$ and $H^{1,p}(\Omega,a)$ are respectively defined as the completion of $C_c^{1}(\Omega)$ and $C^{1}(\Omega)$ with respect to the norm $\|\cdot\|_{H^{1,p}(\Omega,a)}$, where \begin{equation*} \|u\|_{H^{1,p}(\Omega,a)}:=\biggl(\int_\Omega \lambda |\nabla u|^p+\mu |u|^p\,dx\biggr)^\frac1p. \end{equation*} We call $u$ a weak solution (subsolution, supersolution) of \eqref{eq} in $\Omega$ if and only if $u\in H^{1,p}(\Omega,a)$ and \begin{equation}\label{def:harmonic} \forall \phi\in H_0^{1,p}(\Omega,a),\, \phi\geq0:\qquad \mathcal A(u,\phi)=0\quad (\leq0,\geq0),\quad\mbox{where}\quad\mathcal A(u,\phi):=\int_\Omega a(x,\nabla u) \cdot \nabla \phi\,dx. 
\end{equation} Moreover, we call $u$ a local weak solution of \eqref{eq} in $\Omega$ if and only if $u$ is a weak solution of \eqref{eq} in $\Omega'$ for every bounded open set $\Omega'\Subset\Omega$. Throughout the paper, we call a solution (subsolution, supersolution) of \eqref{eq} in $\Omega$ $a$-harmonic ($a$-subharmonic, $a$-superharmonic) in $\Omega$. \end{definition} The above definitions generalize the concepts of weak solutions and the spaces $H^1(\Omega,a)$ and $H^1_0(\Omega,a)$ discussed by Trudinger \cite{T71,T73} in the linear case, that is $a(x,\xi)=A(x)\xi$. We stress that the condition $\lambda^{-1}\in L^\frac1{p-1}(\Omega)$ and H\"older inequality imply $$ \|\nabla u\|_{L^1(\Omega)}\leq \|\lambda^{-1}\|_{L^\frac1{p-1}(\Omega)}^\frac1p\biggl(\int_{\Omega}\lambda |\nabla u|^p\biggr)^\frac1p\leq\|\lambda^{-1}\|_{L^\frac1{p-1}(\Omega)}^\frac1p\|u\|_{H^{1,p}(\Omega,a)} $$ and thus we have that $H^{1,p}(\Omega,a)\subset W^{1,1}(\Omega)$, where we use that by the same computation as above it holds $\|u\|_{L^1(\Omega)}\leq\|\mu^{-1}\|_{L^\frac1{p-1}(\Omega)}^\frac1p\|u\|_{H^{1,p}(\Omega,a)}$ and that by definition we have $\lambda\leq\mu$. From this, we also deduce that the elements of $H^{1,p}(\Omega,a)$ are strongly differentiable in the sense of \cite{GT}. In particular, this implies that a chain rule holds in the following sense. \begin{remark} Let $g:\R\to\R$ be uniformly Lipschitz-continuous with $g(0)=0$ and consider the composition $F:=g(u)$. Then, $u\in H_0^{1,p}(\Omega,a)$ (or $\in H^{1,p}(\Omega,a)$) implies $F \in H_0^{1,p}(\Omega,a)$ (or $\in H^{1,p}(\Omega,a)$), and it holds $\nabla F=g'(u)\nabla u$ a.e.\ (see e.g.\ \cite[Lemma 1.3]{T73}). In particular, if $u$ satisfies $u\in H_0^{1,p}(\Omega,a)$ (or $\in H^{1,p}(\Omega,a)$) then also the truncations \begin{equation*} u_+:=\max\{u,0\};\quad u_{-}:=-\min\{u,0\} \end{equation*} satisfy $u_+,u_-\in H_0^{1,p}(\Omega,a)$ (or $\in H^{1,p}(\Omega,a)$). 
\end{remark} Now we come to the local boundedness from above for weak subsolutions of \eqref{eq}. \begin{theorem}\label{T:est} Let $d \ge 3$, $\Omega\subset \R^d$ and $p\in(1,\infty)$. Moreover, let $s\in[1,\infty]$ and $t\in(\frac1{p-1},\infty]$ satisfy \eqref{pqcond}. Let $a : \Omega \times \R^d \to \R^d$ be a Caratheodory function with $a(\cdot,0)\equiv0$ such that $\lambda$ and $\mu$ defined in~\eqref{def1} satisfy $\mu \in L^s(\Omega)$ and $\frac 1\lambda \in L^t(\Omega)$. Then, there exists $c=c(d,p,s,t)\in[1,\infty)$ such that for any weak subsolution $u$ of \eqref{eq} and for any ball $B_{R}\subset \Omega$ it holds \begin{equation*} \sup_{B_{R/2}} u \le c \Lambda(B_R)^{\frac1p\frac1\delta} \|u_+\|_{\underline W^{1,\frac{1}{1+1/t}p}(B_R)}, \end{equation*} where $\Lambda(S) := \bigl( \fint_S \mu^{s} \bigr)^{1/s} \bigl( \fint_S \lambda^{-t} \bigr)^{1/t}$, $\|v\|_{\underline W^{1,\gamma}(B_r)}:=r^{-\frac{d}\gamma}\|v\|_{L^\gamma(B_r)}+r^{1-\frac{d}\gamma}\|\nabla v\|_{L^\gamma(B_r)}$ for all $\gamma\geq1$ and $r>0$; and $\delta := \frac1{s_*}-(\frac1p+\frac1{pt})>0$ (see Lemma~\ref{lm1} for the definition of $s_*$). Moreover, in the case $1+\frac1t<\frac{p}{d-1}$, there exists $c=c(d,p,t)\in[1,\infty)$ such that % \begin{equation*} \sup_{B_{R/2}} u \le c\|u_+\|_{\underline W^{1,\frac{1}{1+1/t}p}(B_R)}. \end{equation*} \end{theorem} In the two-dimensional case, we have the following \begin{proposition}\label{P:2d} Let $\Omega\subset \R^2$ and $p\in(1,\infty)$. Let $a : \Omega \times \R^2 \to \R^2$ be a Caratheodory function with $a(\cdot,0)\equiv0$ such that $\lambda$ and $\mu$ defined in~\eqref{def1} satisfy $\mu \in L^1(\Omega)$ and $\frac 1\lambda \in L^\frac1{p-1}(\Omega)$. Then, there exists $c=c(d,p)\in[1,\infty)$ such that for any weak subsolution $u$ of \eqref{eq} and for any ball $B_{R}\subset \Omega$ it holds % \begin{equation*} \sup_{B_{R/2}} u \le c\|u_+\|_{\underline W^{1,1}(B_R)}. 
\end{equation*} \end{proposition} Before we prove Theorem~\ref{T:est} and Proposition~\ref{P:2d}, we show that they imply the claim of Theorem~\ref{T:1}. \begin{proof}[Proof of Theorem~\ref{T:1}] In view of Theorem~\ref{T:est} and Proposition~\ref{P:2d} it remains to show that for any weak subsolution $u$ of \eqref{eq} and for any ball $B_{R}\subset \Omega$ it holds $\|u_+\|_{\underline W^{1,\frac{t}{t+1}p}(B_R)}<\infty$. This is a consequence of H\"older inequality and the concept of weak subsolution, see Definition~\ref{def:solution}. Indeed, we have $$ \biggl(\int_{B_R}(|u|+|\nabla u|)^\frac{tp}{t+1}\biggr)^\frac{t+1}t\leq \biggl(\int_{B_R}\lambda^{-t}\biggr)^\frac1t\int_{B_R}\lambda(|u|+|\nabla u|)^p<\infty, $$ where the right-hand side is finite since $u\in H^{1,p}(\Omega,a)$ (note that $\lambda\leq \mu$ by definition). \end{proof} For the proof of Theorem~\ref{T:est}, we need a final bit of preparation, namely the following optimization lemma. \begin{lemma}[Radial optimization]\label{lm1} Let $d \ge 3$, $p > 1$, $s > 1$, and let $s_* := \max\{1,\big( \frac 1p \bigl( 1 - \frac 1s\bigr) + \frac 1{d-1} \big)^{-1}\}$. For $\frac 12 \le \rho < \sigma \le 2$, let $v \in W^{1,s_*}(B_\sigma)$ and $\mu \in L^s(B_\sigma)$, $\mu \ge 0$, be such that $\mu |v|^p \in L^1(B_\sigma)$. Then there exists $c=c(d,p,s)$ such that \begin{equation}\nonumber J(\rho,\sigma,v) := \inf \biggl\{ \int_{B_\sigma} \mu |v|^p |\nabla \eta|^p \dx : \eta \in C^1_0(B_\sigma), \eta \ge 0, \eta = 1 \textrm{ in } B_\rho \biggr\} \end{equation} satisfies \begin{equation}\nonumber J(\rho,\sigma,v) \le c(\sigma-\rho)^{-\frac{pd}{d-1}} \|\mu\|_{L^s(B_\sigma \setminus B_\rho)} \bigl( \|\nabla v\|^p_{L^{s_*}(B_\sigma \setminus B_\rho)} + \rho^{-p} \|v\|^p_{L^{s_*}(B_\sigma \setminus B_\rho)} \bigr). \end{equation} \end{lemma} Lemma~\ref{lm1} generalizes \cite[Lemma~2.1]{BS19a} from $p=2$ to $p>1$ and we provide a proof in the appendix. 
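To indicate the mechanism behind Lemma~\ref{lm1}, we record the elementary one-dimensional computation on which the optimized choice of cut-off functions rests (a sketch; the actual proof in the appendix combines it with estimates on spheres): for a measurable weight $w>0$ on $(\rho,\sigma)$ with $w^{-\frac{1}{p-1}}\in L^1((\rho,\sigma))$ it holds \begin{equation*} \inf\biggl\{\int_\rho^\sigma w|\tilde\eta'|^p\,dr\,:\,\tilde\eta(\rho)=1,\ \tilde\eta(\sigma)=0\biggr\}=\biggl(\int_\rho^\sigma w^{-\frac{1}{p-1}}\,dr\biggr)^{-(p-1)}. \end{equation*} Indeed, for any admissible $\tilde\eta$, H\"older inequality yields $1=|\int_\rho^\sigma \tilde\eta'\,dr|\leq\bigl(\int_\rho^\sigma w|\tilde\eta'|^p\,dr\bigr)^{\frac1p}\bigl(\int_\rho^\sigma w^{-\frac1{p-1}}\,dr\bigr)^{\frac{p-1}p}$, with equality for the profile $\tilde\eta'\propto-w^{-\frac1{p-1}}$. Choosing the radial profile of the cut-off adapted to the weight in this way, instead of the standard linear profile, is what produces the exponent $\frac{pd}{d-1}$ of $(\sigma-\rho)^{-1}$ in Lemma~\ref{lm1}.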
\begin{proof}[Proof of Theorem~\ref{T:est}]\PfStart{pf1} By standard scaling and translation arguments, it suffices to suppose that $B_1\Subset \Omega$ and to show that $u$ is bounded from above in $B_\frac{1}2$. Hence, we suppose from now on that $B_1\Subset \Omega$. In Steps~1--4 below, we consider the case $s>1$. We first derive a suitable Caccioppoli-type inequality for powers of $u_+$ (Step~1) and perform a Moser-type iteration (Steps~2--4). In Step~5, we consider the case $1+\frac1t<\frac{p}{d-1}$ which includes the case $s=1$. \PfStep{pf1}{pf1s1} Caccioppoli inequality. \\ Assuming $B \subset \Omega$, for any cut-off function $\eta \in \mathcal C^1_0(B)$, $\eta \ge 0$ and any $\beta \ge 1$, there holds \begin{equation}\label{Caccioppoli} \int \eta^p \lambda(x) u_+^{\beta -1} |\nabla u_+|^p \le \biggl( \frac p \beta \biggr)^{p} \int u_+^{p+\beta-1} \mu(x) |\nabla \eta|^p. \end{equation} For $\beta \ge 1$, we use the weak formulation~\eqref{def:harmonic} with $\phi:=\eta^p u_+^{\beta}$: \footnote{Rigorously, we are a priori not allowed to test with $u^\beta$. Instead, for $N \ge 1$ one should replace $u^\beta$ by the affine function $\beta N^{\beta-1}u - (\beta-1)N^\beta$ on the set $\{u \ge N\}$, obtain the conclusion by testing the weak formulation with this modified function, and subsequently send $N \to \infty$ -- for details, see~\cite[Page 460]{BS19a}.} \begin{equation*} \int a(x,\nabla u) \cdot \nabla (\eta^p u_+^\beta ) \le 0. \end{equation*} We have $\int (a(x,\nabla u) - a(x,\nabla u_+))\cdot \nabla(\eta^p u_+^\beta) = 0$, so that we may replace $u$ with $u_+$ inside $a(x,\cdot)$. Applying the Leibniz rule, we get from the previous display \begin{equation}\label{eq01} \beta \int \eta^p u^{\beta -1} a(x,\nabla u) \cdot \nabla u \le - \int p \eta^{p-1} u^{\beta} a(x,\nabla u) \cdot \nabla \eta, \end{equation} where to simplify the notation for the rest of this proof we write $u$ instead of $u_+$. 
Using the definition of $\mu$ in~\eqref{def1} in the form $|a(x,\xi)| \le \mu(x)^{\frac 1p} (a(x,\xi)\cdot \xi)^{\frac{p-1}p}$ for any $\xi \in \R^d$ (in fact we use \eqref{def1} for $\xi\neq0$, and for $\xi=0$ the inequality follows from the assumption $a(x,0)=0$), we can bound the r.h.s. in the last math display from above by \begin{align*} p \int \eta^{p-1} u^{\beta} \mu(x)^{\frac 1p} (a(x,\nabla u)\cdot \nabla u)^{\frac{p-1}{p}} |\nabla \eta| &= p \int u^{\beta-(\beta-1)\frac{p-1}{p}} \mu(x)^{\frac 1p} |\nabla \eta| (\eta^p u^{\beta -1} a(x,\nabla u)\cdot \nabla u)^{\frac{p-1}{p}} \\ &\le p \biggl( \int u^{p+\beta-1} \mu(x) |\nabla \eta|^p \biggr)^{\frac 1p} \biggl( \int \eta^p u^{\beta-1} a(x,\nabla u)\cdot \nabla u \biggr)^{\frac{p-1}{p}}, \end{align*} where in the second step we applied H\"older inequality with exponents $p$ and $\frac{p}{p-1}$, respectively. Observe that the last term on the r.h.s. appears on the l.h.s. in~\eqref{eq01}, so that after absorbing it we get from~\eqref{eq01} \begin{equation*} \beta \biggl( \int \eta^p u^{\beta -1} a(x,\nabla u) \cdot \nabla u\biggr)^{\frac 1p} \le p \biggl( \int u^{p+\beta-1} \mu(x) |\nabla \eta|^p \biggr)^{\frac 1p}, \end{equation*} which after taking the $p$-th power turns into \begin{equation*} \int \eta^p u^{\beta -1} a(x,\nabla u) \cdot \nabla u \le \biggl( \frac p \beta \biggr)^{p} \int u^{p+\beta-1} \mu(x) |\nabla \eta|^p. \end{equation*} By the definition of $\lambda$ in~\eqref{def1} in the form $\lambda(x)|\xi|^p \le a(x,\xi)\cdot \xi$ for any $\xi \in \R^d$, one has $\lambda(x) |\nabla u|^p \le a(x,\nabla u)\cdot \nabla u$, thus implying the claimed Caccioppoli inequality \eqref{Caccioppoli}. \PfStep{pf1}{pf1s2} Improvement of integrability. 
\\ We claim that there exists $c=c(d,p,s)\in[1,\infty)$ such that for $\frac 12 \le \rho < \sigma \le 1$ and $\alpha \ge 1$ it holds \begin{equation}\label{eq04} \|\nabla(u^\alpha)\|_{L^{\frac{pt}{t+1}}(B_\rho)} \le c (\sigma-\rho)^{-\frac{d}{d-1}} \Lambda(B_\sigma)^\frac{1}{p} \|u^\alpha\|_{W^{1,s_*}(B_\sigma \setminus B_\rho)}. \end{equation} Let $\eta \in \mathcal C^1_0(B_\sigma)$, $\eta \ge 0$, with $\eta = 1$ in $B_\rho$. First, we rewrite the Caccioppoli inequality \eqref{Caccioppoli} from Step~\ref{pf1s1} as an inequality for $u^{1 + \frac{\beta-1}{p}}$: \begin{equation}\label{eq02} \left( \frac{p}{p+\beta-1} \right)^p \int \eta^p \lambda(x) |\nabla (u^{1+\frac{\beta-1}{p}})|^p \le \biggl( \frac p \beta \biggr)^{p} \int \mu(x) (u^{1+\frac{\beta-1}{p}})^p |\nabla \eta|^p. \end{equation} Calling $v:=u^{1 + \frac{\beta-1}{p}}$, we can estimate the r.h.s. with the help of Lemma~\ref{lm1}, yielding \begin{equation*} \int \eta^p \lambda(x) |\nabla v|^p \le c\left( \frac{p+\beta-1}{\beta} \right)^p (\sigma-\rho)^{-\frac{pd}{d-1}} \|\mu\|_{L^s(B_\sigma \setminus B_\rho)} \bigl( \|\nabla v\|^p_{L^{s_*}(B_\sigma \setminus B_\rho)} + \rho^{-p} \|v\|^p_{L^{s_*}(B_\sigma \setminus B_\rho)} \bigr). \end{equation*} Using H\"older inequality with exponents $(\frac{t+1}{t},t+1)$ and the fact that $\eta = 1$ in $B_\rho$, we see that \begin{equation*} \|\nabla v\|_{L^{\frac{pt}{t+1}}(B_\rho)}^p \le \|\lambda^{-1}\|_{L^t(B_\rho)} \|\lambda |\nabla v|^p \|_{L^1(B_\rho)} \le \|\lambda^{-1}\|_{L^t(B_\rho)} \int \eta^p \lambda(x) |\nabla v|^p. 
\end{equation*} Using that $\frac 12 \le \rho \le \sigma \le 1$, combining the two previous relations yields \begin{equation*} \|\nabla v\|_{L^{\frac{pt}{t+1}}(B_\rho)}^p \le c\left( \frac{p+\beta-1}{\beta} \right)^p (\sigma-\rho)^{-\frac{pd}{d-1}} \Lambda(B_\sigma) \|v\|^p_{W^{1,s_*}(B_\sigma \setminus B_\rho)}, \end{equation*} which after taking the $p$-th root turns into \begin{equation*} \|\nabla(u^\alpha)\|_{L^{\frac{pt}{t+1}}(B_\rho)} \le c (\sigma-\rho)^{-\frac{d}{d-1}} \Lambda(B_\sigma)^\frac{1}{p} \|u^\alpha\|_{W^{1,s_*}(B_\sigma \setminus B_\rho)}, \end{equation*} with $\alpha := 1+\frac{\beta-1}{p}$. \PfStep{pf1}{pf1s3} One-step improvement.\\ First, we note that \eqref{pqcond} and $t>\frac1{p-1}$ imply $\delta:= \frac{1}{s_*} - \frac1p(1+\frac{1}{t})>0$. In particular it holds $s_*<\frac{t p}{t+1}$. We claim that there exists $c=c(d,s,t,p)$ such that for $\frac 12 \le \rho < \sigma \le 1$ there holds \begin{equation}\label{eq05} \|u^{\chi\alpha}\|_{W^{1,s_*}(B_\rho)}^{\frac{1}{\chi \alpha}} \le \biggl( \frac{ c\Lambda(B_\sigma)^\frac{1}{p} }{(\sigma-\rho)^{\frac{d}{d-1}} } \biggr)^{\frac{1}{\chi \alpha}} \|u^\alpha\|_{W^{1,s_*}(B_\sigma)}^\frac{1}{\alpha}, \end{equation} where $\chi:=1+\delta>1$. Using H\"older inequality with exponent $\frac{pt}{(t+1)s_*}>1$ and its dual exponent $\frac{pt}{pt-(t+1)s_*}=\frac1{\delta s_*}$ we get \begin{align*} \biggl( \int_{B_\rho} |\nabla(u^{(1+\delta)\alpha})|^{s_*} \biggr)^{\frac1{s_*}} &= (1+\delta)\alpha \biggl( \int_{B_\rho} |\nabla u|^{s_*} u^{(\alpha-1)s_*} u^{\alpha \delta s_*} \biggr)^{\frac1{s_*}} = (1+\delta) \biggl( \int_{B_\rho} |\nabla(u^{\alpha})|^{s_*} u^{\alpha \delta s_*} \biggr)^{\frac1{s_*}} \\ &\le (1+\delta) \biggl( \int_{B_\rho} |\nabla(u^\alpha)|^{\frac{pt}{t+1}} \biggr)^{\frac{t+1}{pt}} \biggl( \int_{B_\rho} u^\alpha \biggr)^{\delta}. 
\end{align*} Combining the above estimate with \eqref{eq04} from Step~\ref{pf1s2}, we get (recall $\chi = 1+\delta$) \begin{equation*} \|\nabla(u^{\chi\alpha})\|_{L^{s_*}(B_\rho)} \le c (\sigma-\rho)^{-\frac{d}{d-1}} \Lambda(B_\sigma)^\frac{1}{p} \|u^\alpha\|_{W^{1,s_*}(B_\sigma)}^\chi, \end{equation*} where we absorbed the factor $\chi = 1 + \delta < \frac{d}{d-1}$ into $c$. In order to have the full $W^{1,s_*}(B_\rho)$-norm also on the l.h.s., using $s_* \ge 1$ as well as $\chi < \frac{d}{d-1}$ we can apply the Sobolev inequality to the effect that \begin{equation*} \| u^{\chi\alpha} \|_{L^{s_*}(B_\rho)} \le c \| u^{\alpha} \|_{W^{1,s_*}(B_\rho)}^\chi, \end{equation*} thus obtaining the claim. \PfStep{pf1}{pf1s4} Iteration.\\ We iterate the outcome of Step~\ref{pf1s3}. For $\bar \alpha \ge 1$ and $n \in \mathbb{N}$ let $\alpha_n := \bar \alpha \chi^{n-1}$, $\rho_n := \frac 12 + \frac{1}{2^{n+1}}$, $\sigma_n := \rho_n + \frac{1}{2^{n+1}} = \rho_{n-1}$. Then~\eqref{eq05} from Step~\ref{pf1s3} with $\alpha := \alpha_n$ has the form \begin{equation*} \|u^{\alpha_{n+1}}\|_{W^{1,s_*}(B_{\rho_n})}^{\frac{1}{\alpha_{n+1}}} \le ( c\Lambda(B_1)^\frac{1}{p}4^n)^{\frac{1}{\bar\alpha \chi^n}} \|u^{\alpha_n}\|_{W^{1,s_*}(B_{\rho_{n-1}})}^\frac{1}{\alpha_n}. \end{equation*} Using that $L^p$ approximates $L^\infty$ as $p \to \infty$, we see that \begin{align}\label{est:sup:general} \|u\|_{L^\infty(B_{1/2})} &\le \biggl( \prod_{n=1}^\infty ( c\Lambda(B_1)^\frac{1}{p}4^n)^{\frac{1}{\bar\alpha \chi^n}} \biggr) \|u^{\bar\alpha}\|_{W^{1,s_*}(B_1)}^\frac{1}{\bar\alpha}\notag \\ &\le c\Lambda(B_1)^{\frac{1}{p\bar \alpha}\frac{1}{\chi-1}} \|u^{\bar\alpha}\|_{W^{1,s_*}(B_1)}^\frac{1}{\bar\alpha}, \end{align} which for $\bar \alpha = 1$ yields the desired claim, where we use $\chi=1+\delta$ and $s_*\leq \frac{tp}{t+1}$. \PfStep{pf1}{pf1s5} The remaining case $1+\frac1t<\frac{p}{d-1}$. 
Using Fubini's theorem, we can choose a generic radius $r_0 \in (\frac 12,1)$ such that \begin{equation*} \|u_+\|_{W^{1,\frac{pt}{t+1}}(S_{r_0})}^\frac{pt}{t+1} \le 2\|u_+\|_{W^{1,\frac{pt}{t+1}}(B_1)}^\frac{pt}{t+1}. \end{equation*} We test the weak formulation of $-\nabla \cdot a(x,\nabla u)\leq0$, see \eqref{def:harmonic}, with the non-negative test function $\phi := (u_+-\sup_{S_{r_0}} u_+)_+$, which obviously vanishes on $S_{r_0}$ and can therefore be trivially extended by zero to the whole domain $\Omega$. This yields \begin{equation*} 0 \stackrel{\eqref{def:harmonic}}\ge \int_{B_{r_0}} a(x,\nabla u)\cdot \nabla \phi = \int_{B_{r_0}} a(x,\nabla \phi)\cdot \nabla \phi \stackrel{\eqref{def1}}\ge \int_{B_{r_0}} \lambda(x) |\nabla \phi|^p. \end{equation*} In particular, we see that $\nabla \phi = 0$ a.e. in $B_{r_0}$, hence $\phi \equiv 0$ and thus \begin{equation*} \|u_+\|_{L^\infty(B_{\frac 12})} \le \|u_+\|_{L^\infty(B_{r_0})} \le \sup_{S_{r_0}} u_+. \end{equation*} Using that $\frac{pt}{t+1} > d-1$, which follows from $1+\frac 1t < \frac p{d-1}$, we have by Sobolev embedding that $\sup_{S_{r_0}} u_+ \le c\|u_+\|_{W^{1,\frac{pt}{t+1}}(S_{r_0})}$ for some $c=c(d,p,t)>0$, which by the above choice of $r_0$ completes the claim. \end{proof} \begin{proof}[Proof of Proposition~\ref{P:2d}] This follows exactly as in Step~5 of the proof of Theorem~\ref{T:est} using that for $d=2$ it holds $\sup_{S_{r_0}} u_+ \le c\|u_+\|_{W^{1,1}(S_{r_0})}$. \end{proof} We close this section by deriving from Theorem~\ref{T:est} in the case $s>1$ an $L^\infty$--$L^\gamma$ estimate. \begin{corollary}\label{C:est} Let $d \ge 2$, $\Omega\subset \R^d$ and $p\in(1,\infty)$. Moreover, let $s\in(1,\infty]$ and $t\in(\frac1{p-1},\infty]$ satisfy \eqref{pqcond}. Let $a : \Omega \times \R^d \to \R^d$ be a Caratheodory function with $a(\cdot,0)\equiv0$ such that $\lambda$ and $\mu$ defined in~\eqref{def1} satisfy $\mu \in L^s(\Omega)$ and $\frac 1\lambda \in L^t(\Omega)$. 
Then, for any weak subsolution $u$ of \eqref{eq} and any $\gamma>0$, there exists $c=c(\gamma,d,p,s,t) \in [1,\infty)$ such that for any ball $B_{R} \subset \Omega$ \begin{equation*} \sup_{B_{R/2}} u \le c \Lambda(B_R)^{\frac1\gamma\frac{s}{s-1}(1+\frac1\delta)} \biggl(\fint_{B_R}u_+^\gamma\biggr)^\frac{1}{\gamma}. \end{equation*} \end{corollary} \begin{proof}\PfStart{pf2} Without loss of generality we consider $R=1$ and suppose that $B_1\Subset\Omega$. Caccioppoli inequality \eqref{eq02} with $\beta=1+p(\alpha-1)$ for $\alpha\geq1$ and $\eta\in C_c^1(B_1)$ with $\eta=1$ on $B_\frac12$ and $|\nabla \eta|\leq 2$ and H\"older inequality yield \begin{align*} \| \nabla (u_+^\alpha)\|_{L^\frac{pt}{t+1}(B_{1/2})}^p\leq&\|\lambda^{-1}\|_{L^t(B_1)}\int_{B_1}\eta^p\lambda|\nabla (u_+^\alpha)|^p\leq (2p)^p\|\lambda^{-1}\|_{L^t(B_1)}\int_{B_1}\mu u_+^{\alpha p}\\ \leq& (2p)^p\|\lambda^{-1}\|_{L^t(B_1)}\|\mu\|_{L^s(B_1)}\|u_+^\alpha\|_{L^{\frac{s}{s-1}p}(B_1)}^p. \end{align*} The above inequality combined with $\frac{tp}{t+1}\leq p\leq \frac{sp}{s-1}$ implies $\|u_+^\alpha\|_{W^{1,\frac{tp}{t+1}}(B_{1/2})}^\frac1\alpha \le c \Lambda(B_1)^\frac{1}{\alpha p} \| u_+ \|_{L^{\frac{\alpha ps}{s-1}}(B_1)}$ (note that $1\leq\Lambda(B_r)$) for some $c=c(d,p)\in[1,\infty)$. Hence, we have in combination with \eqref{est:sup:general} that \begin{equation}\label{eq06} \|u_+\|_{L^\infty(B_{1/4})} \le c \Lambda(B_1)^{\frac{1}{ \alpha p} (1+\frac1{\delta})} \|u_+\|_{L^{\frac{\alpha ps}{s-1}}(B_1)}, \end{equation} where $c=c(\alpha, d,p,t,s)\in[1,\infty)$. From estimate \eqref{eq06} the claim follows by routine arguments and we only sketch the idea (see \cite[Proof of Theorem 3.3, Step 2]{BS19a} for precise arguments in the case $p=2$). 
By scaling and translation, we deduce from \eqref{eq06} that for all $\rho>0$ and $x\in B_1$ such that $B_\rho(x)\subset B_1$ it holds for $\alpha\geq1$ \begin{equation*} \|u_+\|_{L^\infty(B_{\rho/4}(x))} \le c \Lambda(B_\rho(x))^{\frac{1}{\alpha p} (1+\frac1{\delta})} \rho^{-\frac{d}{\alpha p}(1-\frac1s)}\|u_+\|_{L^{\frac{\alpha ps}{s-1}}(B_\rho(x))}, \end{equation*} where $c$ is as in \eqref{eq06}. Combining the above estimate with a simple covering argument, we obtain that there exists $c=c(\alpha,d,p,s,t)\in[1,\infty)$ such that for all $\theta\in(0,1)$ and $r\in(0,1]$ it holds $$ \|u_+\|_{L^\infty(B_{\theta r})} \le c \Lambda(B_r)^{\frac{1}{\alpha p} (1+\frac1{\delta})} (1-\theta)^{-\kappa}r^{-d\frac{s-1}{\alpha ps}}\|u_+\|_{L^{\frac{\alpha ps}{s-1}}(B_r)}, $$ where $\kappa:=\frac{d}{\alpha p}((\frac1t+\frac1s)(1+\frac1{\delta})+1-\frac1s)$, which is the claim for all $\gamma \geq\frac{ps}{s-1}$ (by choosing $\alpha=\frac{s-1}{ps}\gamma$). The claim for $\gamma\in(0,\frac{ps}{s-1})$ follows by a standard interpolation and iteration argument, see e.g.\ the textbook reference \cite[p.\ 75]{HL} in the uniformly elliptic case or, as mentioned above, \cite[Proof of Theorem 3.3, Step 2]{BS19a} for a closely related setting. \end{proof} \section{Counterexample, proof of Theorem~\ref{T:negative}} \begin{proof}[Proof of Theorem~\ref{T:negative}]\PfStart{pf3} The following construction is very much inspired by a construction in \cite{FSS98} in the linear case, that is $p=2$, and $d=4$ (which was already extended to $d\geq3$ in \cite{Schwarzmannthesis}). Let $d\geq3$. Throughout the proof, we set $$ x=(x_1,\dots,x_d)=(x_1,x')\quad\mbox{and}\quad |x'|=\sqrt{\sum_{j=2}^dx_j^2}. 
$$ For any $p\in(1,\infty)$ and $\theta\in[0,1]$, we define $\lambda_\theta(x):=\omega_\theta(|x'|)$ where $\omega_\theta:(0,1)\to\R_+$ is defined as \begin{equation}\label{def:omegatheta} \omega_\theta(r)=\begin{cases} (i+1)^{(p-1)\theta} 4^{-pi\theta}&\mbox{when $r\in[\frac12 4^{-i},4^{-i})$},\\ ((i+1)^{-(p-1)}4^{pi})^{1-\theta}&\mbox{when $r\in[\frac14 4^{-i},\frac12 4^{-i})$} \end{cases} \end{equation} for $i\in\mathbb N$. We will construct an explicit subsolution to $-\nabla \cdot (\lambda_\theta |\nabla v|^{p-2}\nabla v)=0$, which is of the form \begin{equation}\label{def:v} v(x)=e^{\alpha x_1}\phi(|x'|) \end{equation} where $\alpha=\alpha(d,p)>0$ is a parameter and $\phi:(0,1)\to\R$ is defined by \begin{equation}\label{def:phi} \phi(r)=\begin{cases} i+\frac{\eta_i}{2^Q-1}((4^ir)^{-Q}-1)&\mbox{when $r\in[\frac12 4^{-i},4^{-i})$},\\ (i+1)-(1-\eta_i)(4^{i+1}r-1)^2&\mbox{when $r\in[\frac14 4^{-i},\frac12 4^{-i})$} \end{cases},\quad\mbox{with}\quad Q=\begin{cases}\max\{d-3,1\}&\mbox{if $p\geq2$}\\ \frac{d-2}{p-1}-1&\mbox{if $1<p<2$}\end{cases} \end{equation} where $\eta_i\in[0,1]$ will be specified below. Note that $Q>0$ and $\phi$ is continuous by definition. We choose $\eta_i\in(0,1)$ such that the flux $\lambda_\theta|\nabla v|^{p-2}\nabla v$ is continuous at $|x'|=\frac12 4^{-i}$ for every $i\in\mathbb N$. More precisely, we set $\eta_i$ to be the largest constant (in $[0,1]$) satisfying \begin{equation}\label{def:etai} F_i(\eta_i)=0, \end{equation} where $F_i:(0,1]\to\R$ is given by \begin{align*} F_i(\eta):=&\sqrt{(\alpha (i+\eta)4^{-i})^2+(C_Q\eta)^2}^{p-2}C_Q\eta \\ &- \sqrt{(\alpha(i+\eta)(i+1)^{-1})^2+(8(1-\eta)4^{i}(i+1)^{-1})^2}^{p-2} 8(1-\eta)4^{2i}(i+1)^{-1} \end{align*} with \begin{equation*} C_Q=Q\frac{2^{Q+1}}{2^Q-1}. 
\end{equation*} Note that $\eta_i$ is well-defined since $F_i$ is continuous with $$ \lim_{\eta\to0}F_i(\eta)=- \sqrt{(\alpha i)^2+(2\cdot 4^{i+1})^2}^{p-2} 8\cdot4^{2i}(i+1)^{-(p-1)}<0 $$ and $$ \lim_{\eta\to1}F_i(\eta)=\sqrt{(\alpha (i+1)4^{-i})^2+C_Q^2}^{p-2}C_Q>0. $$ The definition of $\eta_i$ is rather implicit, and we now provide some explicit bounds on $\eta_i$ which will be useful for later computations. We distinguish two cases. For $p\geq2$ and $\alpha\geq C_Q$, we have that \begin{equation}\label{eta:loweboundgeq} \exists j=j(d,p)\geq2\mbox{ such that $\forall i\geq j$:}\quad\eta_i \geq 1-8^{-1} (4^{p-2} C_Q) 4^{-2i}(i+1)=:\underline \eta_i. \end{equation} Indeed, let $j=j(d,p)\geq2$ be such that $\underline \eta_i\in(0,1)$ for all $i\geq j$. By definition of $\eta_i$, it suffices to show that $F_i(\underline \eta_i)\leq0$ for $i\geq j$. We have \begin{align*} F_i(\underline \eta_i)\leq&\sqrt{(\alpha (i+1)4^{-i})^2+C_Q^2}^{p-2}C_Q - \sqrt{(\alpha i/(i+1))^2}^{p-2} (4^{p-2} C_Q)\\ =&\sqrt{((i+1)4^{-i})^2+(C_Q/\alpha)^2}^{p-2}\alpha^{p-2}C_Q - \alpha^{p-2}\sqrt{(i/(i+1))^2}^{p-2} (4^{p-2} C_Q)\\ \leq&\alpha^{p-2}(2^{p-2}C_Q-2^{-(p-2)}(4^{p-2} C_Q))=0, \end{align*} where for the last inequality we used $(i+1)4^{-i}\leq 1$ and $i/(i+1)\geq\frac12$ for $i\geq1$ and $\alpha\geq C_Q$. In the case $p\in(1,2)$, we have for $\alpha\geq 2^\frac{2-p}{p-1}C_Q$ that \begin{equation}\label{eta:loweboundleq} \exists j=j(\alpha,d,p)\geq2\mbox{ such that $\forall i\geq j$:}\quad\eta_i \geq 1-8^{-1} \alpha 4^{-2i}(i+1)=:\overline \eta_i. \end{equation} Indeed, this follows as above from \begin{align*} F_i(\overline \eta_i)\leq&C_Q^{p-1} - \sqrt{\alpha^2+(\alpha 4^{-i})^2}^{p-2} \alpha\leq C_Q^{p-1}-\alpha^{p-1}2^{p-2}\leq0. 
\end{align*} \PfStep{pf3}{pf3s1} We show that for every $\alpha\geq\max\{1,2^{\frac{2-p}{p-1}}\}C_Q$, the function $v$ defined in \eqref{def:v} has finite energy, that is $\int_{B_1}\lambda_\theta (|v|^p+|\nabla v|^p)<\infty$ provided $(1-\theta)p<d-1$. We show first $\int_{B_1}\lambda_\theta |v|^p<\infty$. For this, we observe that $0\leq\phi(r)\leq\log(4/r)$ for all $r\in(0,1)$. Indeed, $\phi\geq0$ is clear from the definition \eqref{def:phi} and for $r\in[\frac14 4^{-i},4^{-i})$, we have $$\phi(r)\leq i+1=\log_4(4^{i+1})\leq \log_4(\tfrac{4}r)\leq \log(\tfrac{4}r).$$ Similarly, we get \begin{equation}\label{estabove:omegatheta} \omega_\theta(r)\leq \begin{cases} ((2r)^p\log(4/r)^{p-1})^\theta&\mbox{when $r\in[\frac12 4^{-i},4^{-i})$},\\ (r^p\log(2/r)^{p-1})^{-(1-\theta)}&\mbox{when $r\in[\frac14 4^{-i},\frac12 4^{-i})$} \end{cases}. \end{equation} Hence, there exists $C=C(\alpha,d,p)>0$ such that \begin{align*} \int_{B_1}\lambda_\theta v^p\,dx\leq C\int_0^1r^{-(1-\theta)p}\log(2/r)^{p-(1-\theta)(p-1)}r^{d-2}\,dr<\infty, \end{align*} where the last integral is finite since $(1-\theta)p< d-1$. Next, we show $\int_{B_1}\lambda_\theta |\nabla v|^p<\infty$. For this, we compute the gradient of $v$: \begin{equation}\label{normnablav} \nabla v=\begin{pmatrix}\alpha \phi \\ \phi' \frac{x'}{|x'|}\end{pmatrix}e^{\alpha x_1}\quad\mbox{and}\quad|\nabla v|=\sqrt{\alpha^2 \phi^2+\phi'^2}e^{\alpha x_1}. \end{equation} Moreover, we compute \begin{equation}\label{phiprime} \phi'(r)=\begin{cases} -Q\frac{\eta_i}{2^Q-1}(4^ir)^{-Q}r^{-1}&\mbox{when $r\in(\frac12 4^{-i},4^{-i})$},\\ -2(1-\eta_i)4^{i+1}(4^{i+1}r-1)&\mbox{when $r\in(\frac14 4^{-i},\frac12 4^{-i})$} \end{cases} \end{equation} and for later usage \begin{equation}\label{phiprime2} \phi''(r)=\begin{cases} Q(Q+1)\frac{\eta_i}{2^Q-1}(4^ir)^{-Q}r^{-2}&\mbox{when $r\in(\frac12 4^{-i},4^{-i})$},\\ -2(1-\eta_i)4^{2(i+1)}&\mbox{when $r\in(\frac14 4^{-i},\frac12 4^{-i})$} \end{cases}.
\end{equation} From \eqref{eta:loweboundgeq} and \eqref{eta:loweboundleq}, we obtain that there exists $C=C(\alpha,d,p)>0$ such that $0\leq 1-\eta_i\leq C 4^{-2i}(i+1)$ for $i\geq j(\alpha,d,p)$ and thus, in combination with \eqref{phiprime}, there exists $C=C(\alpha,d,p)>0$ such that \begin{equation*} |\phi'(r)|\leq C\begin{cases} r^{-1}&\mbox{when $r\in(\frac12 4^{-i},4^{-i})$},\\ \log(2/r)r&\mbox{when $r\in(\frac14 4^{-i},\frac12 4^{-i})$} \end{cases} \end{equation*} for all $i\geq j$. Hence, we find $C=C(\alpha,d,p)>0$ such that \begin{align*} \int_{B_1}\lambda_\theta |\nabla v|^p\leq C+C\int_0^1 \biggl((r^p\log(2/r)^{p-1})^\theta r^{-p}+(r^p\log(2/r)^{p-1})^{-(1-\theta)}(\log(2/r)r)^p\biggr)r^{d-2}\,dr<\infty, \end{align*} where we use again $(1-\theta)p<d-1$. Finally, it is easy to check that the sequence $(v_k)_k$ defined by $v_k(x)=e^{\alpha x_1}\phi_k(|x'|)$ with $\phi_k(r)=\phi(r)$ if $r>4^{-k}$ and $\phi_k(r)=k$ if $r\leq 4^{-k}$ is a sequence of Lipschitz functions satisfying $\int_{B_1}\lambda_\theta(|v-v_k|^p+|\nabla v-\nabla v_k|^p)\to0$ as $k\to\infty$, and a straightforward regularization shows that $v$ lies in $H^{1,p}(B_1,a)$ with $a(x,\xi):=\lambda_\theta(x)|\xi|^{p-2}\xi$. \PfStep{pf3}{pf3s2} We claim that there exists $\alpha_0=\alpha_0(d,p)\geq1$ such that for every $\alpha\geq\alpha_0$ there exists $\rho=\rho(\alpha,d,p)\in(0,1]$ such that $v$ defined in \eqref{def:v} is a weak subsolution in $\{x\in B_1\,:\,\delta<|x'|<\rho\}$ for all $\delta>0$. For this, we observe first that by \eqref{normnablav} the nonlinear strain $|\nabla v|^{p-2}\nabla v$ of $v$ is given by \begin{equation}\label{normnablav1} |\nabla v|^{p-2}\nabla v=\sqrt{\alpha^2\phi^2+\phi'^2}^{p-2}\begin{pmatrix}\alpha \phi \\ \phi' \frac{x'}{|x'|}\end{pmatrix}e^{\alpha (p-1) x_1}.
\end{equation} Introducing the notation $M_{2i}=B_1\cap \{\frac12 4^{-i}<|x'|<4^{-i}\}$ and $M_{2i+1}=B_1\cap \{\frac14 4^{-i}<|x'|<\frac12 4^{-i}\}$, we obtain with the help of integration by parts \begin{align*} \int_{B_1} \lambda_\theta|\nabla v|^{p-2}\nabla v\cdot \nabla\varphi=&\sum_{i\in\mathbb N}\int_{M_i}\omega_\theta|\nabla v|^{p-2}\nabla v\cdot \nabla\varphi\\ =&\sum_{i\in\mathbb N}-\int_{M_i}\omega_\theta\nabla \cdot (|\nabla v|^{p-2}\nabla v)\varphi+\int_{\partial M_i}\omega_\theta |\nabla v|^{p-2}\nabla v\cdot \nu \varphi\\ =&\sum_{i\in\mathbb N}-\int_{M_i}\omega_\theta\nabla \cdot (|\nabla v|^{p-2}\nabla v)\varphi+\int_{\partial M_i}\omega_\theta\sqrt{\alpha^2 \phi^2+\phi'^2}^{p-2}\phi' e^{(p-1)\alpha x_1}\varphi, \end{align*} where $\nu$ denotes the outer unit normal to $M_i$, that is, $\nu=(0,x'/|x'|)$. Hence, it suffices to show that there exists $\alpha_0>0$ such that for all $\alpha\geq\alpha_0$ there exists $j=j(\alpha,d,p)\geq2$ such that \begin{enumerate}[(i)] \item $v$ satisfies $\nabla \cdot (|\nabla v|^{p-2}\nabla v)\geq0$ in the classical sense in each shell $M_i$ for all $i\geq j$; \item the flux has only nonnegative jumps at the interfaces, that is, \begin{align*} (\omega_\theta\sqrt{\alpha^2 \phi^2+\phi'^2}^{p-2}\phi')(\gamma_-):=&\lim_{r\to \gamma\atop r<\gamma}(\omega_\theta \sqrt{\alpha^2 \phi^2+\phi'^2}^{p-2}\phi')(r)\\ \leq&\lim_{r\to \gamma\atop r>\gamma}(\omega_\theta \sqrt{\alpha^2 \phi^2+\phi'^2}^{p-2}\phi')(r)=:(\omega_\theta \sqrt{\alpha^2 \phi^2+\phi'^2}^{p-2}\phi')(\gamma_+) \end{align*} for all $\gamma\in\bigcup_{i\in\mathbb N, i\geq j}\{4^{-i}\}\cup\{\frac12 4^{-i}\}$. \end{enumerate} \substep{2.1} Argument for (i). Let $\alpha\geq1$ be such that \begin{equation}\label{ass:alpha} \alpha\geq \alpha_0(p,d):=\max\biggl\{1,C_Q,2^\frac{2-p}{p-1}C_Q,2^p\sqrt{C_Q(1+\frac{d-2}{p-1})},8\frac{d-1}{p-1}\biggr\} \end{equation} and let $j=j(\alpha,d,p)\geq2$ be such that the estimates \eqref{eta:loweboundgeq} and \eqref{eta:loweboundleq} are valid.
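The implicit definition of $\eta_i$ via \eqref{def:etai} lends itself to a numerical sanity check. The following sketch (not part of the proof; the parameter values $p=2$, $d=4$, $\alpha=10$, hence $Q=1$ and $C_Q=4$, are illustrative choices of ours) locates the root of $F_i$ by bisection, confirming the sign change on $(0,1)$ and the lower bound \eqref{eta:loweboundgeq}:

```python
import math

# Sample parameters (our choice): p = 2, d = 4, alpha = 10,
# so Q = max(d-3, 1) = 1 and C_Q = Q * 2^(Q+1) / (2^Q - 1) = 4.
p, alpha = 2.0, 10.0
CQ = 4.0

def F(i, eta):
    """F_i(eta): the rescaled flux mismatch at the interface r = 4^(-i)/2."""
    t1 = math.hypot(alpha * (i + eta) * 4.0 ** (-i), CQ * eta) ** (p - 2) * CQ * eta
    t2 = (math.hypot(alpha * (i + eta) / (i + 1),
                     8 * (1 - eta) * 4.0 ** i / (i + 1)) ** (p - 2)
          * 8 * (1 - eta) * 4.0 ** (2 * i) / (i + 1))
    return t1 - t2

def eta_root(i, lo=1e-12, hi=1.0 - 1e-12):
    """Bisection for the zero of F_i on (0,1); the sign change shows eta_i exists."""
    assert F(i, lo) < 0 < F(i, hi)  # negative near 0, positive near 1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(i, mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

etas = {i: eta_root(i) for i in range(2, 6)}
```

For $p=2$ the root can be cross-checked against the linear flux-matching equation $C_Q\eta=8(1-\eta)4^{2i}(i+1)^{-1}$, and $\eta_i$ is indeed close to $1$.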
We show that $v$ with $\alpha$ as above satisfies $\nabla \cdot (|\nabla v|^{p-2}\nabla v)\geq0$ in the classical sense in each shell $M_i$ for all $i\geq j$. With the help of \eqref{normnablav1}, we compute on $M_i$ \begin{align}\label{eq:plaplacev} &\nabla \cdot (|\nabla v|^{p-2}\nabla v)\notag\\ =&\biggl(\alpha^2(p-1)\sqrt{\alpha^2\phi^2+\phi'^2}^{p-2}\phi+(p-2)\sqrt{\alpha^2\phi^2+\phi'^2}^{p-4}|\phi'|^2(\alpha^2\phi+\phi'')\notag\\ &\quad+\sqrt{\alpha^2\phi^2+\phi'^2}^{p-2}(\phi''+(d-2)\frac{\phi'}{|x'|})\biggr)e^{\alpha(p-1) x_1}\notag\\ =&\sqrt{\alpha^2\phi^2+\phi'^2}^{p-4}\biggl(\alpha^2(p-1)(\alpha^2 \phi^2+\phi'^2)\phi+(p-2)\phi'^2(\alpha^2\phi+\phi'')\notag\\ &\qquad\qquad\qquad+(\alpha^2 \phi^2+\phi'^2)(\phi''+(d-2)\frac{\phi'}{|x'|})\biggr)e^{\alpha(p-1) x_1}. \end{align} We show that $v$ is a classical subsolution in $M_{2i+1}$. Note that $\phi>0$ and $\phi',\phi''<0$ on $(\frac14 4^{-i},\frac12 4^{-i})$. We consider first the case $p\geq2$. From $\phi>0$, $\phi''<0$ and $\phi'^2\leq \alpha^2\phi^2+\phi'^2$, we deduce $$ (p-2)\phi'^2(\alpha^2\phi+\phi'')\geq (p-2)(\alpha^2\phi^2+\phi'^2)\phi'' $$ and in combination with \eqref{eq:plaplacev} that \begin{align*} \nabla \cdot (|\nabla v|^{p-2}\nabla v)\geq&\sqrt{\alpha^2\phi^2+\phi'^2}^{p-2}(\alpha^2(p-1)\phi+(p-1)\phi''+(d-2)\frac{\phi'}{|x'|})e^{\alpha(p-1) x_1}. \end{align*} Hence, $\nabla \cdot (|\nabla v|^{p-2}\nabla v)\geq0$ on $M_{2i+1}$ is equivalent to \begin{align*} \alpha^2(p-1)\phi(r)+(p-1)\phi''(r)+(d-2)\frac{\phi'(r)}{r}\geq0\quad \mbox{for all $r\in(\frac14 4^{-i},\frac12 4^{-i})$}, \end{align*} which is by \eqref{def:phi}, \eqref{phiprime} and \eqref{phiprime2} valid if and only if $$ \alpha^2(p-1)\left(i+1-(1-\eta_i)(4^{i+1}r-1)^2\right)-2(1-\eta_i)4^{i+1}((p-1)4^{i+1}+r^{-1}(d-2)(4^{i+1}r-1))\geq 0 $$ for all $r\in(\frac14 4^{-i},\frac12 4^{-i})$.
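The reduced condition above can also be probed numerically. The sketch below (ours, not part of the proof) hardcodes $p=2$, $d=4$, $\alpha=10$, for which $\eta_i$ solves a linear equation, and samples the factor $\alpha^2(p-1)\phi+(p-1)\phi''+(d-2)\phi'/r$ on the quadratic shells:

```python
# Sample parameters (our choice): p = 2, d = 4, alpha = 10 (so Q = 1, C_Q = 4).
alpha, CQ = 10.0, 4.0

def eta(i):
    # For p = 2 the flux condition F_i(eta) = 0 is linear:
    # CQ * eta = 8 * (1 - eta) * 4^(2i) / (i + 1).
    B = 8 * 4.0 ** (2 * i) / (i + 1)
    return B / (CQ + B)

def quad_factor(i, r):
    """alpha^2(p-1)phi + (p-1)phi'' + (d-2)phi'/r on (4^(-i)/4, 4^(-i)/2), p=2, d=4."""
    e = eta(i)
    phi = (i + 1) - (1 - e) * (4.0 ** (i + 1) * r - 1) ** 2
    dphi = -2 * (1 - e) * 4.0 ** (i + 1) * (4.0 ** (i + 1) * r - 1)
    ddphi = -2 * (1 - e) * 4.0 ** (2 * (i + 1))
    return alpha ** 2 * phi + ddphi + 2 * dphi / r

# sample interior points r in (4^(-i)/4, 4^(-i)/2) for several shells
samples = [quad_factor(i, (0.25 + 0.025 * s) * 4.0 ** (-i))
           for i in range(2, 10) for s in range(1, 10)]
```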
With the help of $\eta_i\in[0,1]$, we estimate \begin{align*} & \alpha^2(p-1)\left((i+1)-(1-\eta_i)(4^{i+1}r-1)^2\right)-2(1-\eta_i)4^{i+1}((p-1)4^{i+1}+r^{-1}(d-2)(4^{i+1}r-1))\\ \geq&\alpha^2(p-1)i-2(1-\eta_i)4^{2(i+1)}(p-1+d-2). \end{align*} The lower bound $\eta_i\geq\underline \eta_i$, see \eqref{eta:loweboundgeq}, implies $1-\eta_i\leq 1-\underline \eta_i\leq 8^{-1} (4^{p-2} C_Q) 4^{-2i}(i+1)$ and thus \begin{align}\label{subsolution:quadraticphase:1} \alpha^2(p-1)i-2(1-\eta_i)4^{2(i+1)}(p-1+d-2)\geq&\alpha^2(p-1)i-4^{p-1} C_Q(i+1) (p+d-3)\geq0, \end{align} where the last inequality is valid since $(i+1)/i\leq2$ for $i\geq1$ and $\alpha^2\geq2\cdot4^{p-1}C_Q(1+\frac{d-2}{p-1})$ (which is ensured by $\alpha\geq\alpha_0$, see \eqref{ass:alpha}). \medskip Next, we consider the case $p\in(1,2)$. We deduce from \eqref{eq:plaplacev} with $p-2<0$ and $\phi>0$, $\phi',\phi''<0$ that \begin{align}\label{subsolution:quadraticphasepleq2} &\nabla \cdot (|\nabla v|^{p-2}\nabla v)\notag\\ \geq&\sqrt{\alpha^2\phi^2+\phi'^2}^{p-4}\left((\alpha^2 \phi^2+\phi'^2)\left(\alpha^2(p-1)\phi+\phi''+(d-2)\frac{\phi'}{|x'|}\right)-(2-p)\phi'^2\phi\right)e^{\alpha(p-1) x_1}. \end{align} Similar computations as above yield for all $r\in(\frac14 4^{-i},\frac12 4^{-i})$ and $p\in(1,2)$ \begin{eqnarray*} \alpha^2(p-1)\phi(r)+\phi''(r)+(d-2)\frac{\phi'(r)}{r}&\geq& \alpha^2(p-1)(i+\eta_i)-2(1-\eta_i)4^{2(i+1)}(d-1)\\ &\stackrel{\eqref{eta:loweboundleq}}{\geq}&\alpha^2(p-1)i-4\alpha(i+1)(d-1)\geq1 \end{eqnarray*} where the last inequality is valid for all $i\geq2$ and $\alpha\geq 8\frac{d-1}{p-1}$ (see \eqref{ass:alpha}).
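The final step in \eqref{subsolution:quadraticphase:1} is elementary arithmetic: at the threshold value $\alpha^2=2\cdot4^{p-1}C_Q(1+\frac{d-2}{p-1})$, the left-hand side equals $4^{p-1}C_Q(p+d-3)(i-1)\geq0$. A check in exact rational arithmetic (our sketch, for integer $p\geq2$):

```python
from fractions import Fraction as Fr

def lhs(p, d, i):
    """alpha^2 (p-1) i - 4^(p-1) C_Q (i+1)(p+d-3) at the threshold
    alpha^2 = 2 * 4^(p-1) * C_Q * (1 + (d-2)/(p-1)), for integer p >= 2."""
    Q = max(d - 3, 1)                         # p >= 2 case
    CQ = Fr(Q * 2 ** (Q + 1), 2 ** Q - 1)     # C_Q = Q * 2^(Q+1) / (2^Q - 1)
    alpha2 = 2 * 4 ** (p - 1) * CQ * (1 + Fr(d - 2, p - 1))
    return alpha2 * (p - 1) * i - 4 ** (p - 1) * CQ * (i + 1) * (p + d - 3)

def closed_form(p, d, i):
    """The same quantity, simplified: 4^(p-1) * C_Q * (p+d-3) * (i-1)."""
    Q = max(d - 3, 1)
    CQ = Fr(Q * 2 ** (Q + 1), 2 ** Q - 1)
    return 4 ** (p - 1) * CQ * (p + d - 3) * (i - 1)
```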
Inserting this into \eqref{subsolution:quadraticphasepleq2}, we obtain (using $2-p\leq1$) \begin{eqnarray*} \nabla \cdot (|\nabla v|^{p-2}\nabla v)&\geq&\sqrt{\alpha^2\phi^2+\phi'^2}^{p-4}\left(\alpha^2 \phi^2-\phi'^2\phi\right)e^{\alpha(p-1) x_1}\\ &\stackrel{\eqref{phiprime},\eqref{eta:loweboundleq}}\geq&\sqrt{\alpha^2\phi^2+\phi'^2}^{p-4}\phi\left(\alpha^2 \phi- (\alpha(i+1)4^{-i})^2\right)e^{\alpha(p-1) x_1} \geq0, \end{eqnarray*} where we use in the last inequality that $4^{-2i}(i+1)^2\leq1$ and $\phi\geq1$ on $(\frac14 4^{-i},\frac12 4^{-i})$ with $i\geq1$. \bigskip Now, we show that $v$ is a classical subsolution in $M_{2i}$. In view of \eqref{eq:plaplacev} it suffices to show that for all $r\in(\frac12 4^{-i},4^{-i})$ it holds \begin{equation}\label{need:logphase} \alpha^4(p-1)\phi^3(r)+\alpha^2(2p-3)\phi(r)\phi'^2(r)+\phi'^2((p-1)\phi''(r)+\frac{d-2}r\phi'(r))+\alpha^2\phi^2(r)(\phi''(r)+\frac{d-2}r\phi'(r))\geq0. \end{equation} For $p\geq\frac32$, we obviously have $$ \alpha^4(p-1)\phi^3(r)+\alpha^2(2p-3)\phi(r)\phi'^2(r)\geq0\qquad\mbox{for all $r\in(\frac12 4^{-i},4^{-i})$}. $$ Let us first consider $p\geq2$. In the case $d\geq4$, the choice of $\phi$ ensures $$ \forall r\in(\frac12 4^{-i},4^{-i}):\quad\phi''(r)+\frac{d-2}r\phi'(r)=0\quad\mbox{and}\quad (p-1)\phi''(r)+\frac{d-2}r\phi'(r)=(p-2)\phi''(r)\geq0 $$ and similarly for $d=3$ that $\phi''(r)+\frac{d-2}r\phi'(r)=\frac12 \phi''(r)\geq0$ and $(p-1)\phi''(r)+\frac{d-2}r\phi'(r)\geq0$. Altogether, we have that \eqref{need:logphase} is valid for all $r\in(\frac12 4^{-i},4^{-i})$ provided $p\geq2$. Next, we consider the case $p\in(1,2)$. The choice of $\phi$ ensures $$ \forall r\in(\frac12 4^{-i},4^{-i}):\quad(p-1)\phi''(r)+\frac{d-2}r\phi'(r)=0\quad\mbox{and}\quad \phi''(r)+\frac{d-2}r\phi'(r)=(2-p)\phi''(r)\geq0.
$$ Using the above two identities, we see that \eqref{need:logphase} is equivalent to $$ \alpha^4(p-1)\phi^3(r)+\alpha^2(2p-3)\phi(r)\phi'^2(r)+\alpha^2\phi^2(r)(2-p)\phi''(r)\geq0 $$ and thus it suffices to show $$ \alpha^2(2p-3)\phi\phi'^2+\alpha^2\phi^2(2-p)\phi''\geq0. $$ For $p\in[\frac{3}{2},2)$ the above inequality directly follows from $\phi,\phi''\geq0$ and it is left to consider $p\in(1,\frac{3}2)$ in which case the above inequality is equivalent to $$ \frac{3-2p}{2-p}\frac{\phi'^2}{\phi''}\leq \phi. $$ The above inequality is valid on $(\frac12 4^{-i},4^{-i})$ provided $i\geq 2$. Indeed, this follows from $\phi\geq i$ on $(\frac12 4^{-i},4^{-i})$ and $$ \frac{3-2p}{2-p}\frac{\phi'^2}{\phi''}\leq \frac{3-2p}{2-p}\frac{Q}{Q+1}\frac{\eta_i}{2^Q-1}2^Q\leq 2\frac{Q}{Q+1}\leq 2. $$ \substep{2.2} Argument for (ii). Let $\alpha\geq1$ and $j=j(\alpha,d,p)\geq2$ be as in Substep~2.1. In view of \eqref{normnablav}, we need to show that for all $\gamma\in\bigcup_{i\in\mathbb N,i\geq j}\{4^{-i}\}\cup\{\frac12 4^{-i}\}$ it holds \begin{equation}\label{ineq:jumpcondition} (\omega_\theta \sqrt{\alpha^2 \phi^2+\phi'^2}^{p-2}\phi')(\gamma_+)\geq(\omega_\theta \sqrt{\alpha^2 \phi^2+\phi'^2}^{p-2}\phi')(\gamma_-). \end{equation} For $\gamma\in \bigcup_{i\in\mathbb N}\{4^{-i}\}$, we directly observe that $$ (\omega_\theta \sqrt{\alpha^2 \phi^2+\phi'^2}^{p-2}\phi')(\gamma_+)=0>(\omega_\theta \sqrt{\alpha^2 \phi^2+\phi'^2}^{p-2}\phi')(\gamma_-). $$ Moreover, the definition of $\eta_i$ via \eqref{def:etai} ensures that \eqref{ineq:jumpcondition} holds as an equality for all $\gamma\in \bigcup_{i\in\mathbb N, i\geq j}\{\frac12 4^{-i}\}$, which finishes the argument. \PfStep{pf3}{pf3s3} Let $1<p<\infty$ and $\theta\in[0,1]$ be such that $(1-\theta)p<d-1$. Let $\alpha\geq\alpha_0$ and $\rho=\rho(\alpha,d,p)\in(0,1)$ be as in Step~2. We show that $v$ is a weak subsolution on $\Omega_\rho:=B_1\cap \{|x'|<\rho\}$. We follow reasoning similar to that in \cite{FSS98}.
For $k\in\mathbb N$, let $\psi_k\in C^1(\R;[0,1])$ be a cut-off function satisfying $$\psi_k=0\quad\mbox{on $[0,\frac12 4^{-k}]$},\quad \psi_k\equiv1\quad\mbox{on $[4^{-k},1]$},\quad \|\psi'_k\|_{L^\infty(0,1)}\leq 4^{k+1}$$ and we define $\varphi_k\in C^1(B_1)$ by $\varphi_k(x)=\psi_k(|x'|)$. For every $\eta\in C_c^1(\Omega_\rho)$ with $\eta\geq0$, we have \begin{align}\label{est:step3:cutoff2} \int_{\Omega_\rho}\lambda_\theta |\nabla v|^{p-2}\nabla v\cdot \nabla \eta\,dx=&\int_{\Omega_\rho}\lambda_\theta |\nabla v|^{p-2}\nabla v\cdot (\nabla ((1-\varphi_k)\eta)+\nabla (\varphi_k \eta))\,dx\notag\\ \leq&\int_{\Omega_\rho}\lambda_\theta |\nabla v|^{p-2}\nabla v\cdot \nabla ((1-\varphi_k)\eta)\,dx, \end{align} where we use that $0\leq \varphi_k \eta\in C_c^1(\Omega_\rho\setminus \Omega_{4^{-k-1}})$ and that by Step~2 $v$ is a subsolution on $\Omega_\rho\setminus \Omega_{\delta}$ for every $\delta\in(0,\rho)$. It remains to show that the integral on the right-hand side in \eqref{est:step3:cutoff2} vanishes as $k\to\infty$. Note that $0\leq 1-\varphi_k\leq1$ and $1-\varphi_k\equiv0$ on $\Omega_\rho\setminus \Omega_{4^{-k}}$. Hence, with the help of the product rule, we obtain \begin{align*} \biggl|\int_{\Omega_\rho}\lambda_\theta |\nabla v|^{p-2}\nabla v\cdot \nabla ((1-\varphi_k)\eta)\,dx\biggr|\leq\int_{\Omega_{4^{-k}}}\lambda_\theta |\nabla v|^{p-1}|\nabla \eta|\,dx+\int_{\Omega_{\rho}}\eta\lambda_\theta |\nabla v|^{p-2}|\nabla v\cdot \nabla \varphi_k|\,dx. \end{align*} By dominated convergence, the first term on the right-hand side converges to zero as $k$ tends to $\infty$ (recall that we showed in Step~1 that $\lambda_\theta |\nabla v|^p\in L^1(B_1)$). To estimate the remaining integral, we use $|\nabla v\cdot \nabla \varphi_k|=|\phi'||\nabla \varphi_k|e^{\alpha x_1}\leq C4^{k+1}|\phi'|$ for some $C=C(\alpha)>0$ on the set $\{|x'|\in(\frac12 4^{-k},4^{-k})\}$ and $\nabla v\cdot \nabla \varphi_k=0$ otherwise.
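Combining this with \eqref{def:omegatheta}, the resulting boundary-layer term admits a closed-form bound which vanishes as $k\to\infty$; a numerical sketch (sample parameters ours: $p=2$, $d=4$, $\theta=\frac12$, not part of the proof):

```python
# Sample parameters (our choice): p = 2, d = 4, theta = 1/2.
p, d, theta = 2.0, 4.0, 0.5

def layer_bound(k):
    """4^(k+1) * (k+1)^(theta(p-1)) * integral over (4^(-k)/2, 4^(-k)) of
    r^(-(p-1) + p*theta + d - 2) dr, evaluated with the exact antiderivative."""
    e = -(p - 1) + p * theta + d - 2          # exponent of r in the integrand
    a, b = 0.5 * 4.0 ** (-k), 4.0 ** (-k)
    integral = (b ** (e + 1) - a ** (e + 1)) / (e + 1)
    return 4.0 ** (k + 1) * (k + 1) ** (theta * (p - 1)) * integral

vals = [layer_bound(k) for k in range(1, 25)]
```

The sequence decays like $\sqrt{k}\,4^{-2k}$, consistent with $d-p(1-\theta)>1$.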
Hence, we have that $|\nabla v|^{p-2}|\nabla v\cdot \nabla \varphi_k|\leq C4^{k+1}|x'|^{-(p-1)}$ on $\{|x'|\in(\frac12 4^{-k},4^{-k})\}$ and thus we obtain (using $\lambda_\theta\leq(k+1)^{\theta(p-1)}(2|x'|)^{p\theta}$ on $\{|x'|\in(\frac12 4^{-k},4^{-k})\}$, see \eqref{def:omegatheta}) \begin{align*} &\int_{\Omega_{\rho}}\eta\lambda_\theta |\nabla v|^{p-2}|\nabla v\cdot \nabla \varphi_k|\,dx\\ \leq& C\|\eta\|_{L^\infty(B_1)}4^{k+1}(k+1)^{\theta(p-1)}\int_{\frac12 4^{-k}}^{4^{-k}}r^{-(p-1)}r^{p\theta} r^{d-2}\,dr\\ =&C\|\eta\|_{L^\infty(B_1)}4^{k+1}(k+1)^{\theta(p-1)}\frac1{d-p(1-\theta)}4^{-k(d-p(1-\theta))}\biggl(1-2^{-(d-p(1-\theta))}\biggr)\stackrel{k\to\infty}\to0, \end{align*} where we use the assumption $p(1-\theta)<d-1$ and thus $d-p(1-\theta)>1$. \PfStep{pf3}{pf3s4} Conclusion. \substep{4.1} We consider the case $1+\frac1{d-2}<p<\infty$. Let $s>1$ and $t>\frac1{p-1}$ be such that $\frac1s+\frac1t=\frac{p}{d-1}$ and $\frac{t}{t+1}p<d-1$. We claim that there exist $0\leq \lambda\in L^s(B_1)$ with $\lambda^{-1}\in L^t(B_1)$ and an unbounded weak subsolution to \eqref{eq:negative}. We set $\theta=\frac1t \frac{d-1}p$ and observe that $\frac1s+\frac1t=\frac{p}{d-1}$ implies $\theta\in[0,1]$ and $1-\theta=\frac1s\frac{d-1}p$. Moreover, the restriction $\frac{t}{t+1}p<d-1$ in the form $p<(1+\frac1t)(d-1)$ ensures $$ (1-\theta)p=(1-\frac1t\frac{d-1}p)p=p-\frac1t(d-1)<d-1. $$ Hence, in view of Steps~1--3, the function $v$ defined in \eqref{def:v} with $\alpha=\alpha_0=\alpha_0(p,d)\geq1$ is an unbounded weak subsolution to $$ -\nabla \cdot (\lambda_\theta |\nabla v|^{p-2}\nabla v)=0\qquad\mbox{in $B(0,\rho)$ with $\rho=\rho(d,p)\in(0,1]$,} $$ where $\lambda_\theta(x)=\omega_\theta(|x'|)$, \textit{cf.} \eqref{def:omegatheta}.
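The exponent bookkeeping in Substep~4.1 can be verified in exact arithmetic. The following sketch (sampled values ours) checks that the choice $\theta=\frac1t\frac{d-1}p$ indeed satisfies $\theta\in[0,1]$ and $(1-\theta)p<d-1$ under the stated restrictions on $s$ and $t$:

```python
from fractions import Fraction as Fr

def theta_of(p, d, t):
    """theta = (d-1)/(p*t), the choice made in Substep 4.1."""
    return Fr(d - 1) / (p * t)

def checks(p, d, t):
    """Given 1/s + 1/t = p/(d-1) (which determines s) and t with
    t > 1/(p-1) and t*p/(t+1) < d-1, verify: s > 1, theta in [0,1],
    and (1-theta)*p < d-1. Returns a tuple of booleans."""
    s = 1 / (Fr(p) / (d - 1) - 1 / Fr(t))   # forces 1/s + 1/t = p/(d-1)
    th = theta_of(p, d, t)
    return s > 1, 0 <= th <= 1, (1 - th) * p < d - 1
```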
Appealing to \eqref{estabove:omegatheta}, we have that there exists $C=C(d,p)>0$ such that \begin{align*} \|\lambda_\theta\|_{L^s(B_1)}\leq& C\biggl(\int_0^1(r^{-p}\log(2/r)^{-(p-1)})^\frac{d-1}pr^{d-2}\,dr\biggr)^{\frac{1}{s}}\\ =&C\biggl(\int_0^1r^{-1}\log(2/r)^{-(1-\frac1p)(d-1)}\,dr\biggr)^{\frac{1}{s}}<\infty, \end{align*} where we use that $p>1+\frac1{d-2}$ implies $(1-\frac1p)(d-1)>1$. Similarly, we have \begin{align*} \|\lambda_\theta^{-1}\|_{L^t(B_1)}\leq C\biggl(\int_0^1r^{-1}\log(2/r)^{-(1-\frac1p)(d-1)}\,dr\biggr)^{\frac{1}{t}}<\infty. \end{align*} Finally, we observe that by a simple scaling argument, namely considering $\tilde v(x)=v(x/\rho)$ and $\lambda(x):=\lambda_\theta(x/\rho)$, we find that $\tilde v$ is a weak subsolution to \eqref{eq:negative} in $B_1$ and $\lambda$ satisfies $\lambda\in L^s(B_1)$ and $\lambda^{-1}\in L^t(B_1)$. \substep{4.2} We consider $1<p\leq 1+\frac{1}{d-2}$. Let $s$ and $t$ be as in the statement of the theorem. Clearly, we find $\overline s>s$ and $\overline t>t$ such that $\frac1{\overline s}+\frac1{\overline t}=\frac{p}{d-1}$. Hence, for $\lambda_\theta$ with $\theta=\frac1{\overline t} \frac{d-1}p$, we obtain, as in Substep~4.1, an unbounded subsolution. It remains to check that $\lambda_\theta \in L^s(B_1)$ and $\lambda_\theta^{-1}\in L^t(B_1)$. By construction, we have $1-\theta=\frac1{\overline s} \frac{d-1}p$ and thus \begin{align*} \|\lambda_\theta\|_{L^s(B_1)}\leq& C\biggl(\int_0^1(r^{-p}\log(2/r)^{-(p-1)})^{\frac{d-1}p\frac{s}{\overline s}}r^{d-2}\,dr\biggr)^{\frac{1}{s}}\\ =&C\biggl(\int_0^1r^{-(d-1)\frac{s}{\overline s}+d-2}\log(2/r)^{-(1-\frac1p)(d-1)\frac{s}{\overline s}}\,dr\biggr)^{\frac{1}{s}}<\infty, \end{align*} where we use $s/\overline s<1$ and thus $-(d-1)\frac{s}{\overline s}+d-2>-1$. A similar argument shows $\lambda_\theta^{-1}\in L^t(B_1)$, which finishes the argument. \end{proof}
https://arxiv.org/abs/2110.08670
Upper Bounds on Resolvent Degree via Sylvester's Obliteration Algorithm
For each $n$, let RD$(n)$ denote the minimum $d$ for which there exists a formula for the general polynomial of degree $n$ in algebraic functions of at most $d$ variables. In this paper, we recover an algorithm of Sylvester for determining non-zero solutions of systems of homogeneous polynomials, which we present from a modern algebro-geometric perspective. We then use this geometric algorithm to determine improved thresholds for upper bounds on RD$(n)$.
\section{Introduction}\label{sec:Introduction} A classical problem in mathematics is to determine the roots of a general degree $n$ polynomial in one variable in terms of its coefficients. Modern work on this problem centers around resolvent degree, an invariant whose ideas permeate classical work, but which was not formally defined until the independent definitions of Brauer in \cite{Brauer1975} and Arnol'd and Shimura in \cite{ArnoldShimura1976}. Farb and Wolfson greatly expanded the context of resolvent degree in \cite{FarbWolfson2019}. As in \cite{Wolfson2021}, we denote the resolvent degree of the general degree $n$ polynomial by $\RD(n)$. In \cite{Dixmier1993}, Dixmier noted that ``Every reduction of $\RD(n)$ would be serious progress,'' and Wolfson provided new upper bounds on $\RD(n)$ in \cite{Wolfson2021} by constructing a ``bounding function'' $F(m)$ such that $\RD(n) \leq n-m$ for $n \geq F(m)$. The current best upper bounds on $\RD(n)$ are found in \cite{Sutherland2021C}, where Sutherland constructs an improved bounding function $G(m)$ and shows that $\lim\limits_{m \rightarrow \infty} \frac{F(m)}{G(m)} = \infty$. In this paper, we recover an algorithm from \cite{Sylvester1887} (henceforth referred to as the ``obliteration algorithm'') for solving systems of equations using polynomials of minimal degree. An additional modern description of Sylvester's work and its relevance to resolvent degree is given in \cite{Heberle2021}. Here we present the algorithm primarily from an algebro-geometric viewpoint using the language of ``polar cones'' (cf.\ \cite{Sutherland2021C}). We then use the obliteration algorithm to determine the following new upper bounds on resolvent degree: \begin{theorem}\label{thm:Upper Bounds on Resolvent Degree} \textbf{(Upper Bounds on Resolvent Degree)} \begin{enumerate} \item For $n \geq 5,250,199$, $\RD(n) \leq n-13$. \item For each $14 \leq m \leq 17$ and $n > \frac{(m-1)!}{120}$, $\RD(n) \leq n-m$.
\item For $n \geq 381,918,437,071,508,901$, $\RD(n) \leq n-22$. \item For each $23 \leq m \leq 25$ and $n > \frac{(m-1)!}{720}$, $\RD(n) \leq n-m$. \end{enumerate} The above result is found as Theorem \ref{thm:Bounds from the Geometric Obliteration Algorithm} in Section \ref{sec:Upper Bounds on Resolvent Degree} and leads to the construction of a new bounding function $G'(m)$ such that $\RD(n) \leq n-m$ for $n \geq G'(m)$ and $G'(m) \leq G(m)$ in Corollary \ref{cor:The New Bounding Function}. \end{theorem} \paragraph{Historical Remarks} Sutherland uses two distinct methods to construct $G(m)$ in \cite{Sutherland2021C}. For general $m$, Sutherland uses a result of Debarre and Manivel (found in \cite{DebarreManivel1998}) to improve on Wolfson's construction in \cite{Wolfson2021}. For small $m$, Sutherland uses iterated polar cone methods which build upon the methods of \cite{Wiman1927}, \cite{Chebotarev1954}, and \cite{Segre1945} (note, however, that Wiman and Chebotarev do not use the language of polars at all and Segre refers only to individual polars). An application of Sylvester's obliteration algorithm to certain small $m$ cases is considered in \cite{Heberle2021}. By combining Sylvester's obliteration algorithm with the other methods described above, the authors believe they have exhausted the classical methods related to the theory of Tschirnhaus transformations; implications of this are discussed in Subsection \ref{subsec:Remaining Questions}. \paragraph{Outline of the Paper} In Section 2, we recall the relevant background on resolvent degree, polar cones, and Tschirnhaus transformations. In Section 3, we present a modern, geometric version of the obliteration algorithm and related phenomena, as well as a summary of Sylvester's original work. In Section 4, we apply the geometric obliteration algorithm to obtain upper bounds on resolvent degree.
In Section 5, we discuss Python implementations of the geometric obliteration algorithm used for computations relevant to Theorem \ref{thm:Bounds from the Geometric Obliteration Algorithm}. \paragraph{Conventions} \begin{enumerate} \item We restrict to fields $K$ which are finitely generated $\mathbb C$-algebras. One could instead fix an arbitrary algebraically closed field $F$ of characteristic zero (in lieu of $\mathbb C$) and the statements (relative to $F$) would hold. \item We follow the conventions of \cite{Harris2010} for algebraic varieties. In particular, a projective (respectively, affine) variety is defined to be a closed algebraic set in $\mathbb P_K^r$ (respectively, $\mathbb A_K^r$). When we say variety without a specific modifier, we mean a quasi-projective variety. Note that we do not assume that varieties are irreducible. \item Given $a,b \in \mathbb Z_{\geq 0}$, we set $[a,b] = \left\{ x \in \mathbb Z \ | \ a \leq x \leq b \right\}$. \item Given a collection of homogeneous polynomials $S = \left\{ f_1,\dotsc,f_s \right\} \subseteq K[x_0,\dotsc,x_r]$, we write $\mathbb V(f_1,\dotsc,f_s)$ (and occasionally $\mathbb V(S)$) for the subvariety of $\mathbb P_K^r$ determined by the conditions $f_1 = \cdots = f_s = 0$. \item Given a subvariety $V \subseteq \mathbb P_K^r$, we write $V(K)$ for the set of $K$-rational points of $V$. \item Given points $P_0,\dotsc,P_\ell \in \mathbb P^r(K)$, we write $\Lambda(P_0,\dotsc,P_\ell)$ for the linear subvariety of $\mathbb P_K^r$ that they determine. Additionally, we refer to a linear subvariety $\Lambda \subseteq \mathbb P_K^r$ of dimension $k \geq 3$ as a $k$-plane. We refer to linear subvarieties of dimension 1 (respectively, 2) as lines (respectively, planes). \item We use the notation $K_n$ to mean $\mathbb C(a_1,\dotsc,a_n)$, a purely transcendental extension of $\mathbb C$ with transcendence basis $a_1,\dotsc,a_n$.
\end{enumerate} Note that for generic choices of $f_1,\dotsc,f_s$, $\mathbb V(f_1,\dotsc,f_s)$ is a complete intersection. However, there are examples of such choices which are not complete intersections, such as the twisted cubic curve. Following the convention of \cite{Sutherland2021C}, we refer to a subvariety $\mathbb V(f_1,\dotsc,f_s)$ as an intersection of hypersurfaces. Given an intersection of hypersurfaces $V$, we say it is of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ if $V$ has multi-degree \begin{equation*} ( \underbrace{d,\dotsc,d}_{\ell_d \text{ many}}, \underbrace{d-1,\dotsc,d-1}_{\ell_{d-1} \text{ many}}, \dotsc, \underbrace{2,\dotsc,2}_{\ell_2 \text{ many}}, \underbrace{1,\dotsc,1}_{\ell_1 \text{ many}} ). \end{equation*} \noindent If $\ell_j=0$ for any $1 \leq j \leq d-1$, the corresponding column may be omitted from the presentation. For example, an intersection of two cubic hypersurfaces is of type $\left[ \begin{matrix} 3 \\ 2 \end{matrix} \right]$. When $d \geq 2$ and each $\ell_j=1$, we say $V$ is of type $(1,\dotsc,d)$. Finally, we say a system of equations $S$ is of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ exactly when the corresponding intersection of hypersurfaces $\mathbb V(S)$ is of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$. \paragraph{Acknowledgements} The authors thank David Ishii Smyth and Jesse Wolfson for their support. The second author thanks Joshua Jordan for helpful conversations. \section{Resolvent Degree, Polar Cones, and Tschirnhaus Transformations}\label{sec:Resolvent Degree and Polar Cones} \subsection{Resolvent Degree}\label{subsec:Resolvent Degree} We refer the reader to \cite{FarbWolfson2019} for general definitions of resolvent degree, a summary of its history, and additional context. 
We follow the notation of \cite{Sutherland2021C} and remark that Definitions 2.13 and 2.14 of \cite{Sutherland2021C} recall the definitions of resolvent degree for finite extensions of $\mathbb C$-fields and for generically finite, dominant, rational maps of $\mathbb C$-varieties. In Lemma \ref{lem:Properties of Resolvent Degree}, we summarize several basic results which will be used frequently (and often without explicit reference). Item 1 is the field-theoretic version of Lemma 2.7 of \cite{FarbWolfson2019} and follows immediately from the definition of resolvent degree. Items 2 and 3 are algebraic versions of Lemma 2.9 of \cite{FarbWolfson2019} and can be found explicitly as Lemma 2.18 and Proposition 2.19 of \cite{Sutherland2021C}. Note that items 2 and 3 follow directly from the primitive element theorem. \begin{lemma}\label{lem:Properties of Resolvent Degree} \textbf{(Properties of Resolvent Degree)} \begin{enumerate} \item Let $E_0 \hookrightarrow E_1 \hookrightarrow \cdots \hookrightarrow E_\ell$ be a tower of field extensions. Then, \begin{align*} \RD(E_\ell/E_0) = \max\left\{ \RD(E_j/E_{j-1}) \ | \ j \in [1,\ell] \right\}. \end{align*} \item Let $K'/K$ be a degree $d$ field extension. Then, $\RD(K'/K) \leq \RD(d)$. \item Let $V \subseteq \mathbb P_K^r$ be a degree $d$ subvariety. Then, there is an extension $K'/K$ with $\RD(K'/K) \leq \RD(d)$ over which we can determine a $K'$-rational point of $V$. \end{enumerate} \end{lemma} \noindent As a consequence of item 3, we say that we can determine a point of a degree $d$ subvariety $V$ by solving a degree $d$ polynomial. \subsection{Polar Cones and $k$-Polar Points}\label{subsec:Polar Cones and k-Polar Points} The original theory of polars for hypersurfaces is classical and a classical reference is \cite{Bertini1923}; a modern reference on polars is \cite{Dolgachev2012}.
We now recall the key definitions and results of \cite{Sutherland2021C}; we use the same notation and begin with the definition of the polars. \begin{definition}\label{def:Polars and Polar Cones} \textbf{(Polars and Polar Cones)} \\ Let $f \in K[x_0,\dotsc,x_r]$ be a homogeneous polynomial of degree $d$ and $P \in \mathbb P^r(K)$. Observe that the set \begin{equation*} I_j^* := \Hom_\catname{Set}\left( [1,j] , [0,r] \right) \end{equation*} \noindent indexes the (ordered) $j^{th}$ partial derivatives of $f$ for each $j \in [0,d]$. We also use the shorthand \begin{equation*} \partial_0^{j_0} \cdots \partial_\ell^{j_\ell} = \frac{\partial^{j_0+\cdots+j_\ell}}{\partial x_0^{j_0} \cdots \partial x_\ell^{j_\ell}}. \end{equation*} \noindent For each $j \in [0,d]$, the \textbf{$j^{th}$ polar of $f$ at $P$} is the homogeneous polynomial \begin{equation}\label{eqn:jth polar of a polynomial} t(j,f,P) := \sum\limits_{\iota \in I_{d-j}^*} \left( \partial_0^{\left| \iota^{-1}(0) \right|} \cdots \partial_r^{\left| \iota^{-1}(r) \right|} f \right) \biggr\rvert_{P} x_0^{\left| \iota^{-1}(0) \right|} \cdots x_r^{\left| \iota^{-1}(r) \right|}, \end{equation} \noindent which is of degree $d-j$. Next, consider the hypersurface $H = \mathbb V(f)$. The \textbf{$j^{th}$ polar of $H$ at $P$} is \begin{align*} T(j,f,P) := \mathbb V(t(j,f,P)) \subseteq \mathbb P_K^r. \end{align*} \noindent Finally, the \textbf{(first) polar cone of $H$ at $P$} is \begin{equation*} \mathcal C(H;P) := \bigcap\limits_{j=0}^{d-1} T(j,f,P). \end{equation*} \end{definition} Note that $T(0,f,P)=H$ for all $P$ and $T(d,f,P) = \mathbb P_K^r$ if $P \in H(K)$. If $H$ is smooth at $P$, then $T(d-1,f,P)$ is the tangent hyperplane of $H$ at $P$. Our interest in polars stems from our interest in polar cones, which are themselves motivated by the following classical result. 
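To make the definition concrete, the following computational sketch (ours, using SymPy; the plane cubic $f$ and the point $P$ are illustrative choices) evaluates \eqref{eqn:jth polar of a polynomial} directly, and checks that $t(0,f,P)=d!\,f$, that $t(d-1,f,P)$ is the tangent form, and that every polar passes through $P$ whenever $P\in H(K)$:

```python
import itertools
import sympy as sp

X = sp.symbols('x0 x1 x2')

def polar(j, f, P, gens=X):
    """t(j, f, P): the j-th polar of a homogeneous f, computed as the sum
    over ordered (d-j)-th partial derivatives of f at P times monomials."""
    d = sp.Poly(f, *gens).total_degree()
    t = sp.Integer(0)
    for iota in itertools.product(range(len(gens)), repeat=d - j):
        df, mono = f, sp.Integer(1)
        for k in iota:
            df = sp.diff(df, gens[k])
            mono *= gens[k]
        t += df.subs(dict(zip(gens, P))) * mono
    return sp.expand(t)

# Example (ours): a plane cubic and a rational point on it.
f = X[0]**3 + X[1]**3 - 2*X[2]**3
P = (1, 1, 1)   # f(P) = 0, so P lies on H = V(f)
```

Note that $t(0,f,P)$ recovers $f$ only up to the nonzero constant $d!$, which does not change the hypersurface $T(0,f,P)=H$.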
\begin{lemma}\label{lem:Bertini's Lemma for Hypersurfaces} \textbf{(Bertini's Lemma for Hypersurfaces)} \\ Let $H \subseteq \mathbb P_K^r$ be a hypersurface and $P \in H(K)$. Then, $\mathcal C(H;P) \subseteq H$ is a cone with vertex $P$. \end{lemma} \noindent In particular, for any point $Q \in \mathcal C(H;P) \setminus \{P\}$, the line $\Lambda(P,Q)$ lies in $H$. Observe that for an intersection of hypersurfaces $\mathbb V(f_1,\dotsc,f_s)$, a line $\Lambda$ lies on $\mathbb V(f_1,\dotsc,f_s)$ exactly when $\Lambda$ lies on each hypersurface $\mathbb V(f_j)$. \begin{definition}\label{def:Polar Cone of an Intersection of Hypersurfaces} \textbf{(Polar Cone of an Intersection of Hypersurfaces)} \\ Let $V = \mathbb V(f_1,\dotsc,f_s) \subseteq \mathbb P_K^r$ be an intersection of hypersurfaces and $P \in V(K)$. The \textbf{(first) polar cone of $V$ at $P$} is \begin{equation*} \mathcal C(V;P) := \bigcap\limits_{j=1}^s \mathcal C(\mathbb V(f_j);P). \end{equation*} \end{definition} \begin{lemma}\label{lem:Bertini's Lemma for Intersections of Hypersurfaces} \textbf{(Bertini's Lemma for Intersections of Hypersurfaces)} \\ Let $V \subseteq \mathbb P_K^r$ be an intersection of hypersurfaces and $P \in V(K)$. Then, $\mathcal C(V;P) \subseteq V$ is a cone with vertex $P$. \end{lemma} Iterating the polar cone construction yields a method for determining $k$-planes on intersections of hypersurfaces. We now recall the associated definitions. \begin{definition}\label{def:Iterated Polar Cones and k-Polar Points} \textbf{(Iterated Polar Cones and k-Polar Points)} \\ Let $V \subseteq \mathbb P_K^r$ be an intersection of hypersurfaces and $P_0 \in V(K)$. First, set $\mathcal C^1(V;P_0) := \mathcal C(V;P_0)$. 
Given additional points $P_1,\dotsc,P_{k-1} \in V(K)$ such that \begin{equation*} P_{\ell} \in \mathcal C^{\ell}(V;P_0,\dotsc,P_{\ell-1}) \setminus \Lambda(P_0,\dotsc,P_{\ell-1}) \end{equation*} \noindent for $\ell \in [1,k-1]$, the \textbf{$k^{th}$ polar cone of $V$ at $P_0,\dotsc,P_{k-1}$} is \begin{equation*} \mathcal C^k(V;P_0,\dotsc,P_{k-1}) := \mathcal C\left( \mathcal C^{k-1}(V;P_0,\dotsc,P_{k-2});P_{k-1} \right). \end{equation*} \noindent We refer to an ordered collection of such points $(P_0,\dotsc,P_k)$, where additionally $P_k \in \mathcal C^k(V;P_0,\dotsc,P_{k-1}) \setminus \Lambda(P_0,\dotsc,P_{k-1})$, as a \textbf{$k$-polar point} of $V$. If the points $P_0,\dotsc,P_{k-1}$ have already been chosen, we refer to $\mathcal C^k(V;P_0,\dotsc,P_{k-1})$ as \emph{the} $k^{th}$ polar cone of $V$. In the event that such points exist, but have not been explicitly chosen, we refer to \emph{a} $k^{th}$ polar cone of $V$. Additionally, it is sometimes useful to refer to $V$ itself as a zeroth polar cone of $V$ (at any of its rational points). \end{definition} By noting that iterated polar cones are nested, i.e. \begin{equation*} \mathcal C^k(V;P_0,\dotsc,P_{k-1}) \subseteq \mathcal C^{k-1}(V;P_0,\dotsc,P_{k-2}) \subseteq \cdots \subseteq \mathcal C^2(V;P_0,P_1) \subseteq \mathcal C(V;P_0) \subseteq V \end{equation*} \noindent and that the points $P_0,\dotsc,P_k$ defining a $k$-polar point $(P_0,\dotsc,P_k)$ are in general position, we arrive at the $k$-plane analogue of Lemma \ref{lem:Bertini's Lemma for Intersections of Hypersurfaces}. \begin{lemma}\label{lem:Polar Point Lemma} \textbf{(Polar Point Lemma)} \\ Let $V \subseteq \mathbb P_K^r$ be an intersection of hypersurfaces and let $(P_0,\dotsc,P_k)$ be a $k$-polar point of $V$. Then, $\Lambda(P_0,\dotsc,P_k) \subseteq V$ is a $k$-plane. \end{lemma} \subsection{Tschirnhaus Transformations} We use the notation and conventions of \cite{Sutherland2021C} for Tschirnhaus transformations and refer the reader there for details.
Note also that Wolfson provides a more complete history of Tschirnhaus transformations in \cite{Wolfson2021}. Let $K_n = \mathbb C(a_1,\dotsc,a_n)$ be a purely transcendental extension of $\mathbb C$ with transcendence basis $a_1,\dotsc,a_n$. \begin{definition}\label{def:General Polynomials} \textbf{(General Polynomials)} \\ The \textbf{general polynomial of degree $n$} is the polynomial \begin{equation*} \phi_n(z) = z^n + a_1z^{n-1} + \cdots + a_{n-1}z + a_n \in K_n[z]. \end{equation*} \end{definition} \begin{definition}\label{def:Tschirnhaus Transformations} \textbf{(Tschirnhaus Transformations)} \\ A \textbf{Tschirnhaus transformation} of the general degree $n$ polynomial is an isomorphism of $K_n$-fields \begin{equation*} \Upsilon:K_n[z]/(\phi_n(z)) \rightarrow K_n[z]/(\psi(z)), \end{equation*} \noindent where $\psi(z) = z^n + b_1z^{n-1} + \cdots + b_{n-1}z + b_n$. We say that $\Upsilon$ has type $(j_1,\dotsc,j_k)$ if $b_{j_1} = \cdots = b_{j_k} = 0$. \end{definition} As per Remark 3.3 of \cite{Sutherland2021C}, the space of all Tschirnhaus transformations of the general degree $n$ polynomial (up to re-scaling) is \begin{equation*} \mathcal T_{K_n}^n := \mathbb P_{K_n}^{n-1} \setminus [1:0:\cdots:0] \subseteq \mathbb P_{K_n}^{n-1}. \end{equation*} \noindent Note that each $b_j$ in Definition \ref{def:Tschirnhaus Transformations} is a homogeneous polynomial of degree $j$ in the homogeneous coordinates on $\mathbb P_{K_n}^{n-1}$ (that is, in the coefficients of the Tschirnhaus transformation), with coefficients in $K_n$. \begin{definition}\label{def:Tschirnhaus Complete Intersections} \textbf{(Tschirnhaus Complete Intersections)} \\ Fix $n \in \mathbb Z_{\geq 1}$. For any $m \in [1,n-1]$, the \textbf{$m^{th}$ extended Tschirnhaus hypersurface} is \begin{equation*} \tau_m := \mathbb V(b_m) \subseteq \mathbb P_{K_n}^{n-1}, \end{equation*} \noindent and the \textbf{$m^{th}$ extended Tschirnhaus complete intersection} is \begin{equation*} \tau_{1,\dotsc,m} := \bigcap\limits_{j=1}^m \tau_j \subseteq \mathbb P_{K_n}^{n-1}.
\end{equation*} \noindent Additionally, the \textbf{$m^{th}$ Tschirnhaus hypersurface} is \begin{equation*} \tau_m^\circ := \tau_m \cap \mathcal T_{K_n}^n = \tau_m \setminus \left\{ [1:0:\cdots:0] \right\}, \end{equation*} \noindent and the \textbf{$m^{th}$ Tschirnhaus complete intersection} is \begin{equation*} \tau_{1,\dotsc,m}^\circ := \tau_{1,\dotsc,m} \cap \mathcal T_{K_n}^n = \tau_{1,\dotsc,m} \setminus \left\{ [1:0:\cdots:0] \right\}. \end{equation*} \end{definition} \begin{remark}\label{rem:Strategy for Upper Bounds on RD(n)} \textbf{(Strategy for Upper Bounds on $\RD(n)$)}\\ If we can determine a $K'$-rational point of $\tau_{1,\dotsc,m-1}^\circ$ over an extension $K'/K_n$ of sufficiently small resolvent degree, then we can conclude that $\RD(n) \leq n-m$. Notice that if we can determine an $(m-d-1)$-plane $\Lambda \subseteq \tau_{1,\dotsc,d}^\circ$ over an extension $L/K_n$ of low resolvent degree, then we need only further pass to an extension $K'/L$ with $\RD(K'/L) \leq \RD\left( \frac{(m-1)!}{d!} \right)$, by Lemma \ref{lem:Properties of Resolvent Degree}. \end{remark} Lemma \ref{lem:Polar Point Lemma} yields that every $k$-polar point determines a $k$-plane, hence Remark \ref{rem:Strategy for Upper Bounds on RD(n)} yields that it will suffice to determine $k$-polar points on the Tschirnhaus complete intersections $\tau_{1,\dotsc,d}^\circ$. \section{The Obliteration Algorithms}\label{sec:The Obliteration Algorithms} In \cite{Sylvester1887}, Sylvester gives an algorithm to determine an upper bound on the number of variables required to determine a non-trivial solution for a system of homogeneous polynomials of given degrees by solving polynomials of the same, or lower, degrees.
The algorithm centers on Sylvester's ``formula of obliteration'' (see Corollary \ref{cor:Geometric Formula of Obliteration} and Proposition \ref{prop:Sylvester's Formula of Obliteration}), and thus we refer to the method as the ``obliteration algorithm.'' In Subsection \ref{subsec:The Geometric Obliteration Algorithm}, we give a modern description of the obliteration algorithm via geometry (in terms of varieties, rational points, and polar cones). In Subsection \ref{subsec:Sylvester's Obliteration Algorithm}, we describe the obliteration algorithm in terms of systems of homogeneous polynomials and explain Sylvester's classical language. \subsection{The Geometric Obliteration Algorithm}\label{subsec:The Geometric Obliteration Algorithm} In this subsection, we give a geometric construction of Sylvester's obliteration algorithm. More specifically, given an intersection of hypersurfaces $V \subseteq \mathbb P_K^r$, we give a bound on the ambient dimension required to be able to determine a point of $V$ over an extension $K'/K$ of bounded resolvent degree. Note that this bound depends only on the type of $V$. \begin{definition}\label{def:Minimal Dimension Bound} \textbf{(Minimal Dimension Bound)}\\ The \textbf{minimal dimension bound of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$}, denoted $r(d;\ell_d,\dotsc,\ell_1)$, is the minimal $r' \in \mathbb Z_{\geq 1} \cup \{\infty\}$ such that whenever $r \geq r'$, we can determine a point of any intersection of hypersurfaces of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ in $\mathbb P_K^r$ over an extension $K'/K$ determined by solving polynomials of degree at most $d$. Given a complete intersection $V$ of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$, we set $r(V) := r(d;\ell_d,\dotsc,\ell_1)$.
\end{definition} \begin{remark}\label{rem:Finiteness of the Minimal Dimension Bound} \textbf{(Finiteness of the Minimal Dimension Bound)}\\ The main goal of this section is to establish an upper bound on $r(d;\ell_d,\dotsc,\ell_1)$. More specifically, we introduce a recursive, combinatorial bound $g(d;\ell_d,\dotsc,\ell_1)$ in Definition \ref{def:Geometric Dimension Bound} which we will show satisfies \begin{equation}\label{eqn:Fundamental Inequality} r(d;\ell_d,\dotsc,\ell_1) \leq g(d;\ell_d,\dotsc,\ell_1). \end{equation} \noindent The proof of inequality (\ref{eqn:Fundamental Inequality}) is exactly the geometric version of the obliteration algorithm. \end{remark} We now give Definition \ref{def:Geometric Dimension Bound} and note that the underlying geometric intuition is explained in Lemma \ref{lem:The Reduction Lemma} and Remark \ref{rem:Geometric Insight for the Reduction Lemma}. \begin{definition}\label{def:Geometric Dimension Bound} \textbf{(Geometric Dimension Bound)}\\ The \textbf{geometric dimension bound of type $\left[ \begin{matrix} 1 \\ \ell_1 \end{matrix} \right]$} is $g(1;\ell_1) := \ell_1$. Similarly, the geometric dimension bound \textbf{of type $\left[ \begin{matrix} 2 &1\\ 1 &\ell_1 \end{matrix} \right]$} is $g(2;1,\ell_1) := 1+\ell_1$ and the geometric dimension bound \textbf{of type $\left[ \begin{matrix} 2 &1\\ \ell_2 &\ell_1 \end{matrix} \right]$} with $\ell_2 \geq 2$ is \begin{equation*} g(2;\ell_2,\ell_1) := g(2;\ell_2-1,\ell_2+\ell_1). \end{equation*} \noindent For $d \geq 3$, the geometric dimension bound \textbf{of type $\left[ \begin{matrix} d &d-1 &\cdots &2 &1\\ 1 &\ell_{d-1} &\cdots &\ell_2 &\ell_1 \end{matrix} \right]$} is \begin{equation*} g(d;1,\ell_{d-1},\dotsc,\ell_2,\ell_1) := g\left( d-1;\ell_{d-1},(\ell_{d-1}+\ell_{d-2}),\dotsc,\sum\limits_{j=2}^{d-1} \ell_j, \left( \sum\limits_{j=1}^{d-1} \ell_j \right) + 1 \right).
\end{equation*} \noindent For $d \geq 3$ and $\ell_d \geq 2$, the geometric dimension bound \textbf{of type $\left[ \begin{matrix} d &d-1 &\cdots &2 &1\\ \ell_d &\ell_{d-1} &\cdots &\ell_2 &\ell_1 \end{matrix} \right]$} is \begin{equation*} g(d;\ell_d,\ell_{d-1},\dotsc,\ell_2,\ell_1) := g\left( d;\ell_d-1,(\ell_d+\ell_{d-1})-1,\dotsc,\left( \sum\limits_{j=2}^d \ell_j \right) - 1, \sum\limits_{j=1}^{d} \ell_j \right). \end{equation*} \noindent Finally, given an intersection of hypersurfaces $V$ of type $\left[ \begin{matrix} d &d-1 &\cdots &2 &1\\ \ell_d &\ell_{d-1} &\cdots &\ell_2 &\ell_1 \end{matrix} \right]$, we set \begin{equation*} g(V) := g(d;\ell_d,\dotsc,\ell_1). \end{equation*} \end{definition} \begin{remark}\label{rem:Hyperplane Identities} \textbf{(Hyperplane Identities)}\\ The definitions of both the minimal and geometric dimension bounds admit a ``hyperplane identity,'' which we use without explicit reference: \begin{align*}\label{eqn:Hyperplane Identity} 1+r(d;\ell_d,\dotsc,\ell_2,\ell_1) &= r(d;\ell_d,\dotsc,\ell_2,\ell_1+1),\\ 1+g(d;\ell_d,\dotsc,\ell_2,\ell_1) &= g(d;\ell_d,\dotsc,\ell_2,\ell_1+1). \end{align*} \end{remark} We next state Lemma \ref{lem:The Reduction Lemma}, which is the technical underpinning of the geometric obliteration algorithm and which specializes to give the geometric version of Sylvester's formula of reduction. \begin{lemma}\label{lem:The Reduction Lemma} \textbf{(The Reduction Lemma)}\\ Let $V$ be an intersection of hypersurfaces of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ which is not a hyperplane.
Take $V_d$ to be a degree $d$ hypersurface and $V^\text{red}$ to be an intersection of hypersurfaces of type \begin{equation*} \left[ \begin{matrix} d &\cdots &1\\ \ell_d-1 &\cdots &\ell_1 \end{matrix} \right] \end{equation*} \noindent if $\ell_d \geq 2$ and of type \begin{equation*} \left[ \begin{matrix} d-1 &\cdots &1\\ \ell_{d-1} &\cdots &\ell_1 \end{matrix} \right] \end{equation*} \noindent \noindent if $\ell_d=1$, such that $V = V^\text{red} \cap V_d$. Let $P \in V^\text{red}(K)$ and take $H$ to be a hyperplane which does not contain $P$. Then, \begin{equation*} g(V) = g(H \cap \mathcal C(V^\text{red};P)) = g(\mathcal C(V^\text{red};P))+1. \end{equation*} \end{lemma} \begin{remark}\label{rem:Geometric Insight for the Reduction Lemma} \textbf{(Geometric Insight for the Reduction Lemma)}\\ The proof of Lemma \ref{lem:The Reduction Lemma} will follow immediately from Definition \ref{def:Geometric Dimension Bound}, but we wish to first address the geometric reasoning underlying the lemma. Suppose our goal is to determine a point $Q$ of $V$ over an extension of bounded resolvent degree. Observe that if we can determine a line $\Lambda \subseteq V^\text{red}$, then we need only solve a degree $d$ polynomial to determine a point of $V$. As $V^\text{red}$ is $V$ with $V_d$ removed, it is already ``less difficult'' to determine the point $P \in V^\text{red}(K)$ given by assumption (i.e. $g(V) \geq g(V^\text{red})$). Additionally, we can determine a line $\Lambda \subseteq V^\text{red}$ by determining a point $P' \not= P$ of $\mathcal C(V^\text{red};P)$. As $H$ is taken to be a hyperplane which does not contain $P$, it suffices to determine any point of $\mathcal C(V^\text{red};P) \cap H$, which is also ``less difficult'' as $\mathcal C(V^\text{red};P)$ is defined by fewer top degree hypersurfaces. \end{remark} \begin{proof} \textbf{(Proof of Lemma \ref{lem:The Reduction Lemma})}\\ First, consider when $\ell_d \geq 2$. 
From Definition \ref{def:Polar Cone of an Intersection of Hypersurfaces}, observe that $\mathcal C(V^\text{red};P)$ has type \begin{equation*} \left[ \begin{matrix} d &\cdots &1\\ \ell_{d}-1 &\cdots &\left( \sum\limits_{j=1}^d \ell_j \right) -1\end{matrix} \right]. \end{equation*} \noindent From Definition \ref{def:Geometric Dimension Bound}, it follows that \begin{align*} g(V) &= g(d;\ell_d,\dotsc,\ell_1)\\ &= g\left( d;\ell_d-1,(\ell_d+\ell_{d-1})-1,\dotsc,\left( \sum\limits_{j=2}^d \ell_j \right) -1, \sum\limits_{j=1}^d \ell_j \right)\\ &= g\left( \mathcal C(V^\text{red};P) \right) + 1\\ &= g\left( H \cap \mathcal C(V^\text{red};P) \right). \end{align*} \noindent Similarly, when $\ell_d=1$, we have \begin{align*} g(V) &= g(d;\ell_d,\dotsc,\ell_1)\\ &= g\left( d-1;\ell_{d-1},(\ell_{d-1}+\ell_{d-2}),\dotsc, \sum\limits_{j=2}^{d-1} \ell_j, \left( \sum\limits_{j=1}^{d-1} \ell_j \right) + 1 \right)\\ &= g\left( \mathcal C(V^\text{red};P) \right) + 1\\ &= g\left( H \cap \mathcal C(V^\text{red};P) \right). \end{align*} \end{proof} As in Lemma \ref{lem:The Reduction Lemma}, we will frequently want to split an intersection of hypersurfaces $V$ into parts analogous to $V^\text{red}$ and $V_d$, and so we introduce the following terminology and notation. \begin{definition}\label{def:Reduction and Complement} \textbf{(Reduction and Complement)}\\ Given an intersection of hypersurfaces $V$ of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ with $\ell_d \geq 2$, a \textbf{reduction} of $V$ is an intersection of hypersurfaces $V^{\text{red}}$ of type $\left[ \begin{matrix} d &d-1 &\cdots &2 &1\\ \ell_d-1 &\ell_{d-1} &\cdots &\ell_2 &\ell_1 \end{matrix} \right]$ such that $V = V^\text{red} \cap V_d$ for some degree $d$ hypersurface $V_d$, which we refer to as a \textbf{complement of $V^\text{red}$ for $V$}.
When $V$ is an intersection of hypersurfaces of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ with $\ell_d = 1$, a reduction of $V$ is an intersection of hypersurfaces $V^{\text{red}}$ of type $\left[ \begin{matrix} d-1 &\cdots &1\\ \ell_{d-1} &\cdots &\ell_1 \end{matrix} \right]$ such that $V = V^\text{red} \cap V_d$ for some degree $d$ hypersurface $V_d$, which we refer to as a complement of $V^\text{red}$ for $V$. \end{definition} With Lemma \ref{lem:The Reduction Lemma} and Definition \ref{def:Reduction and Complement} in place, we now state the geometric version of Sylvester's ``formula of reduction.'' \begin{corollary}\label{cor:Geometric Formula of Reduction} \textbf{(Geometric Formula of Reduction)}\\ Let $W$ be a complete intersection of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$. Then, for any $P_0 \in W(K)$, any reduction $\mathcal C(W;P_0)^\text{red}$, and any $P_1 \in \mathcal C(W;P_0)^\text{red}(K)$, we have \begin{align*} g\left( \mathcal C(W;P_0) \right) &= g\left( \mathcal C\left( \mathcal C(W;P_0)^\text{red};P_1 \right) \right) + 1. \end{align*} \end{corollary} \begin{proof} This follows immediately as a special case of Lemma \ref{lem:The Reduction Lemma} applied to $V=\mathcal C(W;P_0)$. \end{proof} We will soon want to successively iterate Lemma \ref{lem:The Reduction Lemma} so that we can eliminate the hypersurfaces of largest degree from any intersection of hypersurfaces by introducing many hypersurfaces of strictly lower degree in Proposition \ref{prop:The Obliteration Proposition}. \begin{definition}\label{def:Sylvester Reductions} \textbf{(Sylvester Reductions)}\\ Let $V$ be an intersection of hypersurfaces of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ with $d \geq 2$ and which is not a hypersurface.
A \textbf{first partial Sylvester reduction of $V$} is \begin{equation*} V^\text{Syl}(d;1) := \mathcal C(V^\text{red};P_0), \end{equation*} \noindent where $V^\text{red}$ is any reduction of $V$ and $P_0 \in V^\text{red}(K)$. Proceeding inductively, for any $k \in [2,\ell_d]$, a \textbf{$k^{th}$ partial Sylvester reduction of $V$} is \begin{equation*} V^\text{Syl}(d;k) := \mathcal C\left( H_{k-1} \cap V^\text{Syl}(d;k-1);P_{k-1} \right) = H_{k-1} \cap \mathcal C\left( V^\text{Syl}(d;k-1);P_{k-1} \right), \end{equation*} \noindent where $H_{k-1}$ is a hyperplane which does not contain $P_{k-2}$ and $P_{k-1} \in \left( H_{k-1} \cap V^\text{Syl}(d;k-1) \right)(K)$. When $d \geq 2$, a \textbf{first Sylvester reduction of $V$} is \begin{equation*} V_1^\text{Syl} := V^\text{Syl}(d;\ell_d). \end{equation*} \noindent For each $k \in [2,d-1]$, let $\lambda_{d-k+1}$ be the number of degree $d-k+1$ hypersurfaces defining a $(k-1)^{st}$ Sylvester reduction $V_{k-1}^\text{Syl}$. Then, a \textbf{$k^{th}$ Sylvester reduction of $V$} is \begin{equation*} V_k^\text{Syl} := \left( V_{k-1}^\text{Syl} \right)^\text{Syl}(d-k+1;\lambda_{d-k+1}). \end{equation*} \end{definition} \begin{prop}\label{prop:The Obliteration Proposition} \textbf{(The Obliteration Proposition)}\\ Let $V$ be an intersection of hypersurfaces of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ with $d \geq 2$ which is not a hypersurface. Then, \begin{equation*} g(V) = g\left( V_1^\text{Syl} \right) \end{equation*} \noindent for any first Sylvester reduction $V_1^\text{Syl}$ of $V$. \end{prop} \begin{proof} From Lemma \ref{lem:The Reduction Lemma} and Definition \ref{def:Sylvester Reductions}, it follows immediately that \begin{equation*} g\left( V^\text{Syl}(d;k) \right) = g\left( V^\text{Syl}(d;k+1) \right) \end{equation*} \noindent for each $k \in [1,\ell_d-1]$.
Consequently, applying Lemma \ref{lem:The Reduction Lemma} to $V$ and its partial Sylvester reductions yields \begin{equation*} g(V) = g\left( V^\text{Syl}(d;1) \right) = \cdots = g\left( V^\text{Syl}(d;\ell_d-1) \right) = g\left( V^\text{Syl}(d;\ell_d) \right) = g\left( V_1^\text{Syl} \right). \end{equation*} \end{proof} \begin{remark}\label{rem:Geometric Dimension Bound via Obliteration} \textbf{(Geometric Dimension Bound via Obliteration)}\\ From the definition of the $k^{th}$ Sylvester reductions, we can iteratively apply Proposition \ref{prop:The Obliteration Proposition} to observe that \begin{align*} g(V) = g\left( V_1^\text{Syl} \right) = \cdots = g\left( V_{d-2}^\text{Syl} \right) = g\left( V_{d-1}^\text{Syl} \right), \end{align*} \noindent which provides the most succinct description of the central argument of the geometric obliteration algorithm. \end{remark} We now arrive at the geometric version of Sylvester's ``formula of obliteration'' as a specialization of Proposition \ref{prop:The Obliteration Proposition}. \begin{corollary}\label{cor:Geometric Formula of Obliteration} \textbf{(Geometric Formula of Obliteration)}\\ Let $W$ be an intersection of hypersurfaces of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ with $d \geq 2$. For any $P_0 \in W(K)$ and any Sylvester reduction $\mathcal C(W;P_0)_1^\text{Syl}$, we have \begin{equation}\label{eqn:Geometric Formula of Obliteration} g( \mathcal C(W;P_0) ) = g\left( \mathcal C(W;P_0)_1^\text{Syl} \right). \end{equation} \end{corollary} \begin{proof} This follows immediately as a special case of Proposition \ref{prop:The Obliteration Proposition} with $V = \mathcal C(W;P_0)$. 
\end{proof} \begin{remark}\label{rem:Explicit Numerics of the Formula of Obliteration} \textbf{(Explicit Numerics of the Formula of Obliteration)}\\ Sylvester's formula of obliteration (Proposition \ref{prop:Sylvester's Formula of Obliteration}) is given numerically and, for notational reasons, he chooses to write the statement in terms of ``linear solutions'' (see Definition \ref{def:Dimensional Solutions}) of $\mathcal C(W;P_0)^\text{Syl}(d;\ell_d-1)$ instead of $g\left( \mathcal C(W;P_0)_1^\text{Syl} \right)$. For this reason, we delay the discussion of numerics of the formula of obliteration to Subsection \ref{subsec:Sylvester's Obliteration Algorithm}. \end{remark} As we have established the reduction lemma and the obliteration proposition, which we used to recover Sylvester's formula of reduction and formula of obliteration, we proceed to prove inequality (\ref{eqn:Fundamental Inequality}). \begin{prop} \label{prop:Minimal vs. Geometric Dimension Bound} \textbf{(Minimal vs. Geometric Dimension Bound)}\\ For every type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ of an intersection of hypersurfaces, $r(d;\ell_d,\dotsc,\ell_1) \leq g(d;\ell_d,\dotsc,\ell_1) < \infty$. \end{prop} \begin{proof} \textbf{(The Geometric Obliteration Algorithm)}\\ We proceed by induction on $d$. First, observe that when $d=1$, it is immediate that \begin{equation*} r(1;\ell_1) = \ell_1 = g(1;\ell_1). \end{equation*} \noindent We additionally consider the case $d=2$ before considering the general case. For the $d=2$ case, we proceed via induction on $\ell_2$. When $\ell_2=1$, $\deg(V)=2$ and thus we can determine a point of $V$ by solving a quadratic polynomial when \begin{align*} \dim\left( V \right) \geq r - (\ell_1+1) \geq 0. \end{align*} \noindent It follows that \begin{align*} r(2;1,\ell_1) = \ell_1+1 = g(2;1,\ell_1). \end{align*} \noindent Now, consider the case where $\ell_2 \geq 2$ is arbitrary.
Our inductive hypothesis yields $r(2;\ell_2-1,\lambda_1) \leq g(2;\ell_2-1,\lambda_1)$ for any $\lambda_1 \geq 0$. Let $V^\text{red}$ be a reduction of $V$ with complement $V_2$. As $V^\text{red}$ is of type $\left[ \begin{matrix} 2 &1\\ \ell_2-1 &\ell_1 \end{matrix} \right]$, we can determine a point $P_0$ of $V^\text{red}$ over an iterated quadratic extension whenever $r \geq g(V^\text{red})$. Let $H$ be a hyperplane which does not contain $P_0$. Note that $H \cap \mathcal C(V^\text{red};P_0)$ is of type $\left[ \begin{matrix} 2 &1\\ \ell_2-1 &\ell_2+\ell_1 \end{matrix} \right]$ and so we can similarly determine a point $P_1$ of $H \cap \mathcal C(V^\text{red};P_0)$ over an iterated quadratic extension whenever $r \geq g\left( \mathcal C(V^\text{red};P_0) \right)+1$. From Lemma \ref{lem:Bertini's Lemma for Intersections of Hypersurfaces}, we have that \begin{equation*} \Lambda(P_0,P_1) \subseteq \mathcal C(V^\text{red};P_0) \subseteq V^\text{red}. \end{equation*} \noindent Thus, we can determine a point of $\Lambda(P_0,P_1) \cap V_2 \subseteq V$ over an additional quadratic extension. From Lemma \ref{lem:The Reduction Lemma}, it follows that \begin{equation*} r(2;\ell_2,\ell_1) \leq \max\left\{ g(2;\ell_2-1,\ell_1), g(2;\ell_2-1,\ell_1+\ell_2) \right\} = g(2;\ell_2-1,\ell_1+\ell_2) = g(2;\ell_2,\ell_1). \end{equation*} Now, let us return to our induction on $d$ and consider the case of general $d \geq 2$. Our inductive hypothesis for $d$ yields that $r(d-1;\lambda_{d-1},\dotsc,\lambda_1) \leq g(d-1;\lambda_{d-1},\dotsc,\lambda_1)$ for any $\lambda_{d-1} \geq 1$ and $\lambda_j \geq 0$ for all $j \in [1,d-2]$. We proceed by induction on $\ell_d$. Let $V^\text{red}$ be a reduction of $V$ with complement $V_d$. When $\ell_d=1$, the inductive hypothesis on $d$ yields that we can determine a point $P_0$ of $V^\text{red}$ by solving polynomials of degree at most $d-1$ when $r \geq g(V^\text{red})$.
Letting $H$ denote a hyperplane which does not contain $P_0$, we can similarly determine a point $P_1$ of $H \cap \mathcal C(V^\text{red};P_0)$ by solving polynomials of degree at most $d-1$ when $r \geq g\left( \mathcal C(V^\text{red};P_0) \right) + 1$. It follows that \begin{equation*} \Lambda(P_0,P_1) \subseteq \mathcal C(V^\text{red};P_0) \subseteq V^\text{red}, \end{equation*} \noindent and so we can determine a point of $\Lambda(P_0,P_1) \cap V_d \subseteq V$ by solving a degree $d$ polynomial. As a result, \begin{equation*} r(d;1,\ell_{d-1},\dotsc,\ell_1) \leq \max\left\{ g(V^\text{red}), g\left( \mathcal C(V^\text{red};P_0) \right) + 1 \right\} = g\left( \mathcal C(V^\text{red};P_0) \right) + 1 = g(V) = g(d;1,\ell_{d-1},\dotsc,\ell_1). \end{equation*} Next, we consider the case of arbitrary $\ell_d \geq 2$. Our inductive hypothesis for $\ell_d$ yields that $r(d;\ell_d-1,\lambda_{d-1},\dotsc,\lambda_1) \leq g(d;\ell_d-1,\lambda_{d-1},\dotsc,\lambda_1)$ for all $\lambda_j \geq 0$, $j \in [1,d-1]$. As a result, we can determine a point $P_0$ of $V^\text{red}$ by solving polynomials of degree at most $d$ when $r \geq g(V^\text{red})$. Taking $H$ to be a hyperplane which does not contain $P_0$, we can determine a point $P_1$ of $H \cap \mathcal C(V^\text{red};P_0)$ by solving polynomials of degree at most $d$ when $r \geq g\left( \mathcal C(V^\text{red};P_0) \right) + 1$. Therefore, \begin{equation*} \Lambda(P_0,P_1) \subseteq \mathcal C(V^\text{red};P_0) \subseteq V^\text{red}, \end{equation*} \noindent and we can determine a point of $\Lambda(P_0,P_1) \cap V_d \subseteq V$ by solving an additional degree $d$ polynomial. Consequently, \begin{equation*} r(d;\ell_d,\dotsc,\ell_1) \leq \max\left\{ g(V^\text{red}), g\left( \mathcal C(V^\text{red};P_0) \right) + 1 \right\} = g\left( \mathcal C(V^\text{red};P_0) \right) + 1 = g(V) = g(d;\ell_d,\dotsc,\ell_1).
\end{equation*} Finally, we note that the polar cone construction introduces only finitely many hypersurfaces, all of which are of strictly smaller degree. Consequently, iterating Lemma \ref{lem:The Reduction Lemma} yields that $g(d;\ell_d,\dotsc,\ell_1)$ is finite for every type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$. \end{proof} \subsection{Sylvester's Obliteration Algorithm}\label{subsec:Sylvester's Obliteration Algorithm} In \cite{Sylvester1887}, Sylvester writes \begin{quote} ``In the following memoir I propose to present \emph{Hamilton's} process under what appears to me to be a clearer and more easily intelligible form, to extend his numerical results and to establish the principles of a more general method than that to which he has confined himself.'' \end{quote} \noindent We now propose to serve the analogous role for Sylvester that Sylvester served for Hamilton. Note that \cite{Sylvester1887} begins with ``a somewhat more extended statement of the Law of Inertia (Tr\"{a}gheitsgesetz) for quadratic forms'' and provides a brief history of the theory of Tschirnhaus transformations, both of which we omit here. Sylvester's law of inertia is well-known (see \cite{Ostrowski1959}) and not necessary for our purposes; we refer the reader to \cite{Wolfson2021} for a more complete history of Tschirnhaus transformations. Throughout this subsection, we consider a system $S = \left\{ f_1,\dotsc,f_s \right\}$ of homogeneous polynomials. Given a solution $P_0$ of $S$, the ``first emanant'' of $S$ at $P_0$ is \begin{equation*} S(1;P_0) := \left\{ t(\ell,f_j,P_0) \ | \ j \in [1,s], \ell \in [0,\deg(f_j)-1] \right\}, \end{equation*} \noindent where $t(\ell,f_j,P_0)$ is as in equation (\ref{eqn:jth polar of a polynomial}) of Definition \ref{def:Polars and Polar Cones}.
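To make the definition concrete, consider a toy example (the quadric here is our own choice, not one of Sylvester's): take $S = \left\{ x_0x_3 - x_1x_2 \right\}$ with solution $P_0 = [1:0:0:0]$. Then, up to nonzero scalars, \begin{equation*} S(1;P_0) = \left\{ x_0x_3 - x_1x_2, \ x_3 \right\}, \end{equation*} \noindent since $t(1,x_0x_3 - x_1x_2,P_0) = x_3$ and $t(0,x_0x_3 - x_1x_2,P_0) = 2(x_0x_3 - x_1x_2)$. The solutions of $S(1;P_0)$ form the two lines $\mathbb V(x_1,x_3) \cup \mathbb V(x_2,x_3)$ through $P_0$, each of which lies on the quadric $\mathbb V(x_0x_3 - x_1x_2)$.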
Given a solution $P_1$ of $S(1;P_0)$, Sylvester's sub-lemma states that any linear combination $\lambda_0 P_0 + \lambda_1 P_1$ (what he calls an ``alliance'' of $P_0$ and $P_1$) is a solution of $S(1;P_0)$, where $[\lambda_0:\lambda_1] \in \mathbb P^1(K)$. Consequently, Sylvester says that $P_0$ and $P_1$ define a ``linear solution'' of $S(1;P_0)$ (and thus also of $S$, since $S \subseteq S(1;P_0)$). Note that the geometric version of Sylvester's sub-lemma is Lemma \ref{lem:Bertini's Lemma for Intersections of Hypersurfaces}. The core algebraic computation reduces to the case of hypersurfaces; see Lemma 2.8 of \cite{Sutherland2021C}. Additionally, just as Sutherland constructs iterated polar cones in \cite{Sutherland2021C}, Sylvester analogously introduces ``$k^{th}$ emanants'' and his lemma is the analogue of the polar point lemma (Lemma \ref{lem:Polar Point Lemma}). His proof follows from iterating the sub-lemma. Sylvester now focuses on linear solutions of systems of equations. First, he introduces ``completed emanants'' to ensure that $P_1$ is distinct from $P_0$ (and thus $P_0$ and $P_1$ determine a genuine linear solution). More specifically, a completed emanant is a system of equations $T = S(1;P_0) \cup \left\{ g \right\}$, where $g$ is a homogeneous linear polynomial such that $g(P_0) \not= 0$. Next, let $S$ be of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$. Sylvester introduces the notation $[d;\ell_d,\dotsc,\ell_1]$ to denote the number of variables necessary to determine a linear solution of $S$, i.e. \begin{equation*} [d;\ell_d,\dotsc,\ell_1] = r\left( \mathcal C(\mathbb V(S);P_0) \right)+1, \end{equation*} \noindent for any $P_0 \in \mathbb V(S)(K)$. It follows that Sylvester's formula of reduction is \begin{equation*} [d;\ell_d,\dotsc,\ell_1] \leq \left[d;\ell_d-1,\ell_d+\ell_{d-1},\dotsc,\sum\limits_{j=2}^d \ell_j, \sum\limits_{j=1}^d \ell_j \right] + 1, \end{equation*} \noindent when $\ell_d \geq 2$.
When $\ell_d=1$, let $d'$ be the largest $j \leq d-1$ such that $\ell_j$ is non-zero. Then, Sylvester's formula of reduction is \begin{equation*} [d;\ell_d,\dotsc,\ell_1] \leq \left[d';\ell_{d'},\ell_{d'}+\ell_{d'-1},\dotsc,\sum\limits_{j=2}^{d'} \ell_j, \sum\limits_{j=1}^{d'} \ell_j \right] + 1. \end{equation*} \noindent Sylvester then claims his formula of obliteration without proof. We state his formula of obliteration and provide a proof, for the sake of completeness. \begin{prop}\label{prop:Sylvester's Formula of Obliteration} \textbf{(Sylvester's Formula of Obliteration)}\\ Let $S$ be a system of homogeneous polynomials of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$ with $d \geq 2$ and $\ell_d \geq 2$. Then, \begin{align*} [d;\ell_d,\dotsc,\ell_1] &\leq [d-1;\lambda_{d-1}, \lambda_{d-2},\dotsc,\lambda_2,\lambda_1] + \ell_d,\\ &= [d-1;\lambda_{d-1}, \lambda_{d-2},\dotsc,\lambda_2,\lambda_1+\ell_d], \end{align*} \noindent where \begin{equation*} \lambda_{d-j} = \binom{\ell_d+j-1}{j} \frac{j\ell_d+1}{j+1} + \sum\limits_{\nu=0}^{j-1} \binom{\ell_d+\nu-1}{\nu} \ell_{d-j+\nu}. \end{equation*} \end{prop} \begin{proof} It is straightforward to see that iteratively applying Sylvester's formula of reduction allows us to reduce to a system of equations of degree at most $d-1$. For the explicit numerics, we give a proof via induction on $\ell_d$. Note that to determine a linear solution of $S$, it suffices to determine a point solution of a completed emanant $T_0$ of $S$ at some point solution $P_0$. Additionally, we note that the type of $T_0$ is \begin{equation*} \left[ \begin{matrix} d &d-1 &\cdots &2 &1\\ \ell_d &\ell_d+\ell_{d-1} &\cdots &\sum\limits_{j=2}^d \ell_j &\left( \sum\limits_{j=1}^d \ell_j \right) + 1 \end{matrix} \right]. \end{equation*} Now, suppose that $\ell_d=1$.
We can determine a point solution $P_1$ of $T_0$ by determining a linear solution of the subsystem $T_0'$, which is of type \begin{equation*} \left[ \begin{matrix} d-1 &\cdots &2 &1\\ 1+\ell_{d-1} &\cdots &1+\sum\limits_{j=2}^{d-1} \ell_j &\left( 1+\sum\limits_{j=1}^{d-1} \ell_j \right) + 1 \end{matrix} \right]. \end{equation*} \noindent Furthermore, we see that \begin{align*} \lambda_{d-j} = \binom{1+j-1}{j} \frac{j(1)+1}{j+1} + \sum\limits_{\nu=0}^{j-1} \binom{1+\nu-1}{\nu} \ell_{d-j+\nu} = 1 + \sum\limits_{\nu=0}^{j-1} \ell_{d-j+\nu} = 1 + \sum\limits_{\mu=d-j}^{d-1} \ell_\mu, \end{align*} \noindent so the claim holds when $\ell_d=1$. Now, consider the case where $\ell_d \geq 2$ is arbitrary. To determine a point solution of $T_0$, it suffices to determine a linear solution of a subsystem $T_0'$, which is of type \begin{equation*} \left[ \begin{matrix} d &d-1 &\cdots &2 &1\\ \ell_d-1 &\ell_d+\ell_{d-1} &\cdots &\sum\limits_{j=2}^d \ell_j &\left( \sum\limits_{j=1}^d \ell_j \right) + 1 \end{matrix} \right] . \end{equation*} \noindent Thus, \begin{align*} [d;\ell_d,\dotsc,\ell_1] \leq \left[d;\ell_d-1,(\ell_d+\ell_{d-1}),\dotsc,\left( \sum\limits_{j=2}^d \ell_j \right), \left( \sum\limits_{j=1}^d \ell_j \right) + 1\right]. \end{align*} \noindent By induction, however, we have that \begin{equation*} \left[d;\ell_d-1,(\ell_d+\ell_{d-1}),\dotsc,\left( \sum\limits_{j=2}^d \ell_j \right), \left( \sum\limits_{j=1}^d \ell_j \right) + 1\right] \leq [d-1;\theta_{d-1},\dotsc,\theta_1+\ell_d], \end{equation*} \noindent where \begin{align*} \theta_{d-j} &= \binom{(\ell_d-1)+j-1}{j} \frac{j(\ell_d-1)+1}{j+1} + \sum\limits_{\nu=0}^{j-1} \binom{(\ell_d-1)+\nu-1}{\nu} \left( \sum\limits_{\mu=\nu}^j \ell_{d-j+\mu}\right), \\ &= \binom{\ell_d+j-2}{j} \frac{j\ell_d-j+1}{j+1} + \sum\limits_{\nu=0}^{j-1} \binom{\ell_d+\nu-2}{\nu} \left( \sum\limits_{\mu=\nu}^j \ell_{d-j+\mu}\right).
\end{align*} \noindent Note that for each $\mu' \in [0,j-1]$, there are exactly $\mu'+1$ summands containing $\ell_{d-j+\mu'}$, namely \begin{equation*} \binom{\ell_d-2}{0} \ell_{d-j+\mu'}, \binom{\ell_d-1}{1} \ell_{d-j+\mu'}, \dotsc, \binom{\ell_d + \mu'-2}{\mu'}\ell_{d-j+\mu'}. \end{equation*} \noindent Additionally, there are exactly $j$ summands containing $\ell_d$, namely \begin{equation*} \binom{\ell_d-2}{0} \ell_d, \binom{\ell_d-1}{1} \ell_d, \dotsc, \binom{\ell_d + j-3}{j-1}\ell_d. \end{equation*} \noindent As a result, \begin{align*} \theta_{d-j} &= \binom{\ell_d+j-2}{j} \frac{j\ell_d-j+1}{j+1} + \sum\limits_{\nu'=0}^{j-1} \binom{\ell_d+\nu'-2}{\nu'}\ell_d + \sum\limits_{\mu_1=0}^{j-1} \left( \sum\limits_{\mu_2=0}^{\mu_1} \binom{\ell_d+\mu_2-2}{\mu_2} \right) \ell_{d-j+\mu_1},\\ &= \binom{\ell_d+j-2}{j} \frac{j\ell_d-j+1}{j+1} + \binom{\ell_d+j-2}{j-1}\ell_d + \sum\limits_{\mu_1=0}^{j-1} \binom{\ell_d+\mu_1-1}{\mu_1} \ell_{d-j+\mu_1}. \end{align*} \noindent Next, we see that \begin{equation*} \binom{\ell_d+j-2}{j} \frac{j\ell_d-j+1}{j+1} = \binom{\ell_d+j-2}{j} \frac{j\ell_d+1}{j+1} - \binom{\ell_d+j-2}{j} \frac{j}{j+1}, \end{equation*} \noindent and \begin{equation*} \binom{\ell_d+j-2}{j-1}\ell_d = \binom{\ell_d+j-2}{j-1} \frac{j\ell_d+1}{j+1} + \binom{\ell_d+j-2}{j-1} \frac{\ell_d-1}{j+1}. \end{equation*} \noindent Noting that $\binom{\ell_d+j-2}{j} + \binom{\ell_d+j-2}{j-1} = \binom{\ell_d+j-1}{j}$, it follows that \begin{equation*} \theta_{d-j} = \binom{\ell_d+j-1}{j}\frac{j\ell_d+1}{j+1} + \binom{\ell_d+j-2}{j-1} \frac{\ell_d-1}{j+1} - \binom{\ell_d+j-2}{j} \frac{j}{j+1} + \sum\limits_{\mu_1=0}^{j-1} \binom{\ell_d+\mu_1-1}{\mu_1} \ell_{d-j+\mu_1}.
\end{equation*} \noindent However, \begin{align*} \binom{\ell_d+j-2}{j-1} \frac{\ell_d-1}{j+1} - \binom{\ell_d+j-2}{j} \frac{j}{j+1} &= \frac{(\ell_d+j-2)!(\ell_d-1)}{(j-1)!(\ell_d-1)!(j+1)} - \frac{(\ell_d+j-2)!j}{j!(\ell_d-2)!(j+1)},\\ &= \frac{(\ell_d+j-2)!}{(j-1)!(\ell_d-2)!(j+1)} - \frac{(\ell_d+j-2)!}{(j-1)!(\ell_d-2)!(j+1)},\\ &= 0, \end{align*} \noindent and thus \begin{equation*} \theta_{d-j} = \binom{\ell_d+j-1}{j}\frac{j\ell_d+1}{j+1} + \sum\limits_{\mu_1=0}^{j-1} \binom{\ell_d+\mu_1-1}{\mu_1} \ell_{d-j+\mu_1} = \lambda_{d-j}, \end{equation*} \noindent which proves the claim. \end{proof} Sylvester then applies his formula of obliteration to the question of determining non-zero solutions of equations which define the Tschirnhaus complete intersections $\tau_{1,\dotsc,m-1}$, including his Triangle of Obliteration. We omit his discussion here as the bounds he obtains are superseded by the bounds of \cite{Brauer1975}, \cite{Wolfson2021}, \cite{Sutherland2021C}, and the next section. \section{Upper Bounds on Resolvent Degree}\label{sec:Upper Bounds on Resolvent Degree} \subsection{Previous Bounds}\label{subsec:Previous Bounds} The current upper bounds on $\RD(n)$ were determined by Sutherland in \cite{Sutherland2021C}, which improved upon those of Wolfson \cite{Wolfson2021}. The general framework used by both Sutherland (with polar cones) and Wolfson (without polar cones) for constructing their respective bounding functions $G(m)$ and $F(m)$ was outlined in Remark \ref{rem:Strategy for Upper Bounds on RD(n)}. We define Sutherland's $G(m)$ below, but first we highlight the function's key properties (and recall that property 1, which both $F(m)$ and $G(m)$ share, is why we refer to $F(m)$ and $G(m)$ as bounding functions).
\begin{theorem}\label{thm:Sutherland2021C} \textbf{(Theorem 1.3 of \cite{Sutherland2021C})}\\ The function $G(m)$ of Definition 3.26 of \cite{Sutherland2021C} has the following properties: \begin{enumerate} \item For each $m \geq 1$ and $n \geq G(m)$, $\RD(n) \leq n-m$. \item For each $d \geq 4$, $G(2d^2+7d+6) \leq \frac{(2d^2+7d+5)!}{d!}$. In particular, for $d \geq 4$ and $n \geq \frac{(2d^2+7d+5)!}{d!}$, \begin{equation*} \RD(n) \leq n-2d^2-7d-6. \end{equation*} \item For each $m \geq 1$, $G(m) \leq F(m)$ with equality only when $m \in \left\{ 1,2,3,4,5,15,16 \right\}$ and \begin{align*} \lim\limits_{m \rightarrow \infty} \frac{F(m)}{G(m)} = \infty. \end{align*} \end{enumerate} \end{theorem} We will now numerically define $G(m)$ (which will require two additional functions) and then summarize the construction of $G(m)$. We refer the reader to \cite{Sutherland2021C} for the full construction of $G(m)$ and proofs of the statements in Theorem \ref{thm:Sutherland2021C}. \begin{definition}\label{def:The Function G(m)} \textbf{(The Function $G(m)$)}\\ We first define $\vartheta:\mathbb Z_{\geq 3} \times \mathbb Z_{\geq 1} \rightarrow \mathbb Z_{\geq 1}$ so that $\vartheta(d,k)$ is the minimal $r \in \mathbb Z_{\geq 1}$ such that \begin{equation*} (k+1)(r-k) - \sum\limits_{j=2}^d \binom{k+j}{j} \geq 0. \end{equation*} \noindent Explicitly, we have \begin{align*} \vartheta(d,k) = k + \left\lceil \frac{1}{k+1}\left( \binom{k+d+1}{d} - (k+2) \right) \right\rceil. \end{align*} \noindent Next, we define $\varphi:\mathbb Z_{\geq 15} \times \mathbb Z_{\geq 1} \rightarrow \mathbb Z_{\geq 1}$ by \begin{equation*} \varphi(d,k) = \max\left\{ \frac{(d+k)!}{d!}, \binom{\vartheta(d,k)+d+1}{d} - (\vartheta(d,k)+1)^2 - (\vartheta(d,k)+d) \right\}. \end{equation*} \noindent Finally, we define $G:\mathbb Z_{\geq 1} \rightarrow \mathbb Z_{\geq 1}$.
For $m \in [1,14]$, we define $G(m)$ by \begin{center} \begin{tabular}{|c|ccccc|ccccc|} \hline $m$ &1 &2 &3 &4 &5 &6 &7 &8 &9 &10\\ \hline $G(m)$ &2 &3 &4 &5 &9 &21 &109 &325 &1681 &15121\\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{|c|cccc|} \hline $m$ &11 &12 &13 &14 \\ \hline $G(m)$ &151,201 &1,663,201 &19,958,401 &259,459,201 \\ \hline \end{tabular} \end{center} \noindent and for $m \geq 15$ by \begin{align*} G(m) = 1 + \min\left\{ \varphi(d,m-d-1) \ | \ 4 \leq d \leq m-1 \right\}. \end{align*} \end{definition} The values of $G(m)$ for $m \in [1,5]$ are classical and described in \cite{Wolfson2021}. In \cite{Chebotarev1954}, Chebotarev gave an argument that $\RD(n) \leq n-6$ for $n \geq 21$; however, his argument had a gap, which was fixed by Theorem 3.7 of \cite{Sutherland2021C}. More specifically, Chebotarev (like Wiman before him in \cite{Wiman1927}) assumed certain intersections of hypersurfaces were generic without proof. For $m \in [6,14]$, Sutherland determined $k$-polar points on extended Tschirnhaus complete intersections $\tau_{1,\dotsc,d}^\circ$. However, the degrees of iterated polar cones grow exponentially and this method could not be further extended (see Remark 3.19 of \cite{Sutherland2021C}). For general $m$, Sutherland was able to improve on the bounds of Wolfson by using Theorem 2.1 of \cite{DebarreManivel1998} to minimize the ambient dimension required for Wolfson's algorithm. \subsection{New Bounds}\label{subsec:New Bounds} We will now improve on $G(m)$ for $m \in [13,17] \cup [22,25]$. For $m \in [7,16]$, $G(m)$ is obtained by determining an $(m-5)$-plane on $\tau_{1,2,3,4}^\circ$. Additionally, for $m \in [17,24]$, $G(m)$ is obtained by determining an $(m-6)$-plane on $\tau_{1,2,3,4,5}^\circ$. Finally, for $m \in [25,33]$, $G(m)$ is obtained by determining an $(m-7)$-plane on $\tau_{1,2,3,4,5,6}^\circ$.
Our improvements will come from determining an $(m-6)$-plane on $\tau_{1,2,3,4,5}^\circ$ for $m \in [13,17]$ and from determining an $(m-7)$-plane on $\tau_{1,2,3,4,5,6}^\circ$ for $m \in [22,25]$. Note that in each of these cases, one can apply the geometric obliteration algorithm to obtain improved bounds. However, we will use a slight modification which allows for a minor optimization. \begin{remark}\label{rem:A Modification of the Geometric Obliteration Algorithm} \textbf{(A Modification of the Geometric Obliteration Algorithm)}\\ Let $V \subseteq \mathbb P_K^r$ be an intersection of hypersurfaces of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$. Recall that successive uses of Proposition \ref{prop:The Obliteration Proposition} yield that \begin{equation*} g(V) = g\left( V_1^\text{Syl} \right) = \cdots = g\left( V_{d-3}^\text{Syl} \right) = g\left( V_{d-2}^\text{Syl} \right), \end{equation*} \noindent and that $V_{d-2}^\text{Syl}$ is an intersection of type $\left[ \begin{matrix} 2 &1\\ \lambda_2 &\lambda_1 \end{matrix} \right]$. In the spirit of the obliteration algorithm, we could indeed continue to apply Lemma \ref{lem:The Reduction Lemma} until there is a single quadric left, at which point we need only solve a final quadratic polynomial. However, we also note that $\deg\left( V_{d-2}^\text{Syl} \right)$ is $2^{\lambda_2}$ and thus we can determine a point of $W_V$ by solving a polynomial of degree $2^{\lambda_2}$ whenever $r \geq \lambda_2 + \lambda_1$. Consequently, we obtain a slight improvement in the forthcoming bounds on $\RD(n)$ by reducing only to a $j^{th}$ partial Sylvester reduction of $V_{d-2}^\text{Syl}$ for some $j < \lambda_2$, rather than reducing all the way to a single quadric.
\end{remark} \begin{definition}\label{def:Optimal Reduction of Tschirnhaus Complete Intersection} \textbf{(Optimal Reduction of Tschirnhaus Complete Intersection)}\\ For each $d \geq 3$ and $m \geq d+2$, consider \begin{equation*} \left( \mathcal C^{m-d-1}(\tau_{1,\dotsc,d};P_0,\dotsc,P_{m-d-2}) \right)_{d-2}^\text{Syl}, \end{equation*} \noindent a $(d-2)^{nd}$ Sylvester reduction of an $(m-d-1)^{st}$ polar cone of $\tau_{1,\dotsc,d}$, which is of type $\left[ \begin{matrix} 2 &1\\ \lambda_2 &\lambda_1 \end{matrix} \right]$. For each $j \in [0,\lambda_2-1]$ (where the $0^{th}$ partial Sylvester reduction is the system itself), note that a $j^{th}$ partial Sylvester reduction $\left( \left( \mathcal C^{m-d-1}(\tau_{1,\dotsc,d};P_0,\dotsc,P_{m-d-2}) \right)_{d-2}^\text{Syl} \right)^\text{Syl}(2;j)$ of $\left( \mathcal C^{m-d-1}(\tau_{1,\dotsc,d};P_0,\dotsc,P_{m-d-2}) \right)_{d-2}^\text{Syl}$ has type $\left[ \begin{matrix} 2 &1\\ \lambda_2 - j &\lambda_1 + \sum\limits_{\nu=\lambda_2-j}^{\lambda_2-1} \nu \end{matrix} \right]$. Observe that \begin{equation*} \deg\left( \left( \left( \mathcal C^{m-d-1}(\tau_{1,\dotsc,d};P_0,\dotsc,P_{m-d-2}) \right)_{d-2}^\text{Syl} \right)^\text{Syl}(2;j) \right) = 2^{\lambda_2-j}. \end{equation*} \noindent For each such $j$, set \begin{equation*} \xi(m,d;j) := \max\left\{ (m-d+1)+\left( \lambda_2-j \right) + \left( \lambda_1 + \sum\limits_{\nu=\lambda_2-j}^{\lambda_2-1} \nu \right), 2^{\lambda_2-j}+1 \right\}. \end{equation*} \noindent The \textbf{optimal reduction bound of $\tau_{1,\dotsc,d}$ for $m$} is \begin{align*} \Xi(m,d) := \min\left\{ \xi(m,d;j) \ | \ j \in [0,\lambda_2-1] \right\}. \end{align*} \end{definition} In particular, $\Xi(m,d)$ is defined exactly so that for $n \geq \Xi(m,d)$, we can determine an $(m-d-1)^{th}$ polar point of $\tau_{1,\dotsc,d}^\circ$ in $\mathbb P_{K_n}^{n-1}$ over an extension $K'/K_n$ with $\RD(K'/K_n) \leq \RD(\Xi(m,d))$.
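To make the role of $j$ concrete, the following Python sketch (our own illustration, not code from the appendix) transcribes $\xi$ and $\Xi$; here $\lambda_2$ and $\lambda_1$ are passed as inputs, whereas in the definition they arise from the $(d-2)^{nd}$ Sylvester reduction above.

```python
def xi(m, d, j, lam2, lam1):
    # xi(m, d; j): the maximum of the ambient-dimension requirement for the
    # j-th partial Sylvester reduction, of type [2, 1; lam2 - j, linear],
    # and the degree requirement 2^(lam2 - j) + 1.
    linear = lam1 + sum(range(lam2 - j, lam2))   # lam1 + sum_{nu=lam2-j}^{lam2-1} nu
    ambient = (m - d + 1) + (lam2 - j) + linear
    degree = 2 ** (lam2 - j) + 1
    return max(ambient, degree)

def Xi(m, d, lam2, lam1):
    # Optimal reduction bound: minimize over the partial reductions j in [0, lam2-1].
    return min(xi(m, d, j, lam2, lam1) for j in range(lam2))
```

Minimizing over $j$ trades the ambient-dimension requirement (which grows as quadrics are traded for hyperplanes) against the degree requirement $2^{\lambda_2-j}+1$ (which shrinks as quadrics are removed).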
\begin{remark}\label{rem:Xi(m,d) is Non-Decreasing in m} \textbf{($\Xi(m,d)$ is Non-Decreasing in $m$)}\\ Note that $\Xi(m,d)$ is non-decreasing in $m$ for fixed $d$. This can be seen geometrically from the fact that if $(P_0,\dotsc,P_{m-d-1})$ is an $(m-d-1)^{st}$ polar point of $\tau_{1,\dotsc,d}$, then $(P_0,\dotsc,P_{m-d-2})$ must be an $(m-d-2)^{nd}$ polar point of $\tau_{1,\dotsc,d}$, so $\Xi(m,d) \geq \Xi(m-1,d)$. \end{remark} We are now ready to state and prove the main theorem. \begin{theorem}\label{thm:Bounds from the Geometric Obliteration Algorithm} \textbf{(Bounds from the Geometric Obliteration Algorithm)} \begin{enumerate} \item For $n \geq 5,250,198$, $\RD(n) \leq n-13$. \item For each $m \in [14,17]$ and $n > \frac{(m-1)!}{120}$, $\RD(n) \leq n-m$. \item For $n \geq 381,918,437,071,508,900$, $\RD(n) \leq n-22$. \item For each $m \in [23,25]$ and $n > \frac{(m-1)!}{720}$, $\RD(n) \leq n-m$. \end{enumerate} \end{theorem} \begin{proof} We continue to use the notation established in Definition \ref{def:Optimal Reduction of Tschirnhaus Complete Intersection}. For each $m \in [13,17]$, we set \begin{equation*} G'(m) = \max\left\{ \Xi(m,5), \frac{(m-1)!}{120}+1 \right\}, \end{equation*} \noindent and for each $m \in [22,25]$, we set \begin{equation*} G'(m) = \max\left\{ \Xi(m,6), \frac{(m-1)!}{720}+1 \right\}. \end{equation*} \noindent In each case, it suffices to show the claim when $n=G'(m)$. Further, note that $G'(m) = \Xi(m,5)$ exactly when $m=13$ and $G'(m)=\Xi(m,6)$ exactly when $m=22$. Recall that the space of Tschirnhaus transformations up to re-scaling is $\mathbb P_{K_{G'(m)}}^{G'(m)-1}$. Let us first consider the case of $m \in [13,17]$ and let $H \subseteq \mathbb P_{K_n}^{G'(m)-1}$ be a hyperplane which does not contain $[1:0:\cdots:0]$. Note that $H \cong \mathbb P_{K_n}^{G'(m)-2}$ and $H \cap \tau_{1,\dotsc,5} = H \cap \tau_{1,\dotsc,5}^\circ$.
Since $\Xi(m,5) \geq \Xi(m-1,5)$, we can assume that we have an $(m-7)$-polar point $(P_0,\dotsc,P_{m-7})$ of $H \cap \tau_{1,\dotsc,5}^\circ$. Consider the minimal $j$ such that $\Xi(m,5) = \xi(m,5;j)$. By definition of $\xi(m,5;j)$, we have that \begin{equation*} \dim\left( \left( \left( \mathcal C^{m-6}(H \cap \tau_{1,\dotsc,5}^\circ;P_0,\dotsc,P_{m-7}) \right)_3^\text{Syl} \right)^\text{Syl}(2;j) \right) \geq m-6, \end{equation*} \noindent and thus \begin{equation*} \dim\left( \left( \left( \mathcal C^{m-6}(H \cap \tau_{1,\dotsc,5}^\circ;P_0,\dotsc,P_{m-7}) \right)_{3}^\text{Syl} \right)^\text{Syl}(2;j) \cap \Lambda(P_0,\dotsc,P_{m-7}) \right) \geq (m-6) - (m-7) \geq 1. \end{equation*} \noindent As a result, we can determine an $(m-6)$-polar point $(P_0,\dotsc,P_{m-6})$ of $\tau_{1,\dotsc,5}^\circ$ by solving a polynomial of degree at most $\Xi(m,5)$. From Lemma \ref{lem:Polar Point Lemma}, $\Lambda = \Lambda(P_0,\dotsc,P_{m-6}) \subseteq \tau_{1,\dotsc,5}^\circ$ is an $(m-6)$-plane. We can then determine a point of $\Lambda \cap \tau_{1,\dotsc,m-1}^\circ$ by solving a polynomial of degree $\frac{(m-1)!}{120}$. We now consider the similar case of $m \in [22,25]$. Let $H \subseteq \mathbb P_{K_n}^{G'(m)-1}$ be a hyperplane which does not contain $[1:0:\cdots:0]$. Note that $H \cong \mathbb P_{K_n}^{G'(m)-2}$ and $H \cap \tau_{1,\dotsc,6} = H \cap \tau_{1,\dotsc,6}^\circ$. Since $\Xi(m,6) \geq \Xi(m-1,6)$, we can assume that we have an $(m-8)$-polar point $(P_0,\dotsc,P_{m-8})$ of $H \cap \tau_{1,\dotsc,6}^\circ$. Consider the minimal $j$ such that $\Xi(m,6) = \xi(m,6;j)$.
Observe that \begin{equation*} \dim\left( \left( \left( \mathcal C^{m-7}(H \cap \tau_{1,\dotsc,6}^\circ;P_0,\dotsc,P_{m-8}) \right)_4^\text{Syl} \right)^\text{Syl}(2;j) \right) \geq m-7, \end{equation*} \noindent and so \begin{equation*} \dim\left( \left( \left( \mathcal C^{m-7}(H \cap \tau_{1,\dotsc,6}^\circ;P_0,\dotsc,P_{m-8}) \right)_{4}^\text{Syl} \right)^\text{Syl}(2;j) \cap \Lambda(P_0,\dotsc,P_{m-8}) \right) \geq (m-7)-(m-8) \geq 1. \end{equation*} It follows that we can determine an $(m-7)$-polar point $(P_0,\dotsc,P_{m-7})$ of $\tau_{1,\dotsc,6}^\circ$ by solving a polynomial of degree at most $\Xi(m,6)$. Lemma \ref{lem:Polar Point Lemma} yields that $\Lambda = \Lambda(P_0,\dotsc,P_{m-7}) \subseteq \tau_{1,\dotsc,6}^\circ$ is an $(m-7)$-plane. Consequently, we can determine a point of $\Lambda \cap \tau_{1,\dotsc,m-1}^\circ$ by solving a polynomial of degree $\frac{(m-1)!}{720}$. In the following tables, we note the values of $\Xi(m,5)$ and $\frac{(m-1)!}{120}+1$ for $m \in [13,17]$ and the approximate values of $\Xi(m,6)$ and $\frac{(m-1)!}{720}+1$ for $m \in [22,25]$. The exact values of $\Xi(m,5)$ for $m \in [13,17]$ and of $\Xi(m,6)$ for $m \in [22,25]$ were computed using Algorithm \ref{alg:The Geometric Obliteration Algorithm Applied to Tschirnhaus Complete Intersections}, which can be found in Subsection \ref{subsec:Appendix D - The Geometric Obliteration Algorithm Applied to Polar Cones of Tschirnhaus Complete Intersections}.
\begin{center} \begin{tabular}{cc} \begin{tabular}{|c|c|c|} \hline $m$ &$\Xi(m,5)$ &$\frac{(m-1)!}{120}+1$ \\ \hline 13 &5,250,198 &3,991,681 \\ \hline 14 &12,253,482 &51,891,841 \\ \hline 15 &26,357,165 &726,485,761 \\ \hline 16 &53,008,668 &10,897,286,401 \\ \hline 17 &100,769,994 &174,356,582,401 \\ \hline \end{tabular} & \begin{tabular}{|c|c|c|} \hline $m$ &$\Xi(m,6)$ &$\frac{(m-1)!}{720}+1$\\ \hline 22 &$\sim 3.819 \times 10^{17}$ &$\sim 7.096 \times 10^{16}$ \\ \hline 23 &$\sim 9.526 \times 10^{17}$ &$\sim 1.561 \times 10^{18}$ \\ \hline 24 &$\sim 2.262 \times 10^{18}$ &$\sim 3.591 \times 10^{19}$ \\ \hline 25 &$\sim 5.137 \times 10^{18}$ &$\sim 8.617 \times 10^{20}$ \\ \hline \end{tabular} \end{tabular} \end{center} \end{proof} \subsection{Obstruction to Further Bounds via the Geometric Obliteration Algorithm}\label{subsec:Obstruction to Further Bounds via The Geometric Obliteration Algorithm} Unfortunately, the proof strategy of Theorem \ref{thm:Bounds from the Geometric Obliteration Algorithm} does not yield further bounds on $\RD(n)$. Recall that for $m \geq 15$, $G(m)$ is defined by \begin{equation*} G(m) = 1 + \min\left\{ \varphi(d,m-d-1) \ | \ d \in [4,m-1] \right\}, \end{equation*} \noindent where \begin{equation*} \varphi(d,k) = \max\left\{ \frac{(d+k)!}{d!}, \binom{\vartheta(d,k)+d+1}{d} - ( \vartheta(d,k)+1 )^2 - (\vartheta(d,k)+d) \right\}. \end{equation*} \noindent For each $d$, the values of $m$ for which $G(m) = 1 + \varphi(d,m-d-1)$ form a set of consecutive integers. Equivalently, there are positive integers $m_d$ and $m_d'$ such that $G(m) = 1 + \varphi(d,m-d-1)$ if and only if $m \in \left[ m_d,m_d' \right]$; see Lemma 3.33 of \cite{Sutherland2021C} for details.
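The two descriptions of $\vartheta(d,k)$ in Definition \ref{def:The Function G(m)} — the defining minimality condition and the explicit ceiling formula — agree via the hockey-stick identity $\sum_{j=0}^{d} \binom{k+j}{j} = \binom{k+d+1}{d}$, which gives $\sum_{j=2}^{d} \binom{k+j}{j} = \binom{k+d+1}{d} - (k+2)$. The following Python sketch is our own verification aid (not code from \cite{Sutherland2021C}) comparing the two:

```python
from math import comb

def theta_minimal(d, k):
    # Minimal r >= 1 with (k+1)(r-k) - sum_{j=2}^d C(k+j, j) >= 0.
    s = sum(comb(k + j, j) for j in range(2, d + 1))
    r = 1
    while (k + 1) * (r - k) < s:
        r += 1
    return r

def theta_explicit(d, k):
    # The closed form: k + ceil((C(k+d+1, d) - (k+2)) / (k+1)).
    return k + -(-(comb(k + d + 1, d) - (k + 2)) // (k + 1))
```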
Similarly, we briefly introduce the notation \begin{equation*} \varrho(d,k) = \max\left\{ \Xi(d+k+1,d), \frac{(d+k)!}{d!}+1 \right\} \end{equation*} \noindent for $d \geq 4$ and $k \geq 1$, as well as \begin{equation*} H(m) = \min\left\{ \varrho(d,m-d-1) \ | \ d \in [4,m-1] \right\} \end{equation*} \noindent for $m \geq 13$. For fixed $d$, note that $\Xi(m,d)$ grows polynomially in $m$, whereas $\frac{(d+k)!}{d!} = \frac{(m-1)!}{d!}$ grows factorially. It follows that for each $d$, there are positive integers $M_d$ and $M_d'$ such that $H(m) = \varrho(d,m-d-1)$ if and only if $m \in [M_d,M_d']$. In the following table, we compare the values $m_d$ and $M_d$ for $d=5,6,7,8$. \begin{center} \begin{tabular}{|c|c|c|} \hline $d$ &$m_d$ &$M_d$\\ \hline $5$ &17 &13 \\ \hline $6$ &25 &22 \\ \hline $7$ &34 &41 \\ \hline $8$ &44 &78 \\ \hline \end{tabular} \end{center} This provides further evidence (along with Remark 3.19 of \cite{Sutherland2021C}) that iterated polar cone methods are most effective for intersections of hypersurfaces of small types. Next, we determine an explicit lower bound on $\Xi(m,d)$. \begin{lemma}\label{lem:Lower Approximation} \textbf{(Lower Approximation)}\\ Let $V \subseteq \mathbb P_K^r$ be an intersection of hypersurfaces of type $\left[ \begin{matrix} d\\ \ell_d \end{matrix} \right]$ with $d \geq 3$ and $\ell_d \geq 2$. Denote the type of a $(d-2)^{nd}$ Sylvester reduction $V_{d-2}^\text{Syl}$ by $\left[ \begin{matrix} 2 &1\\ \lambda_2 &\lambda_1 \end{matrix} \right]$. Then, \begin{align*} \lambda_1 \geq \lambda_2 \geq \left\lceil 2^{5-2d} \left( \ell_d-1 \right)^{2d-4} \right\rceil. \end{align*} \end{lemma} \begin{proof} Note that the number of degree $d-1$ hypersurfaces of $V_1^\text{Syl}$ is \begin{equation*} \theta_{d-1} = \sum\limits_{j=1}^{\ell_d-1} (\ell_d-j) = \frac{1}{2}(\ell_d-1)\ell_d \geq \left\lceil \frac{1}{2}(\ell_d-1)^2 \right\rceil.
\end{equation*} \noindent The same argument yields that the number of degree $d-2$ hypersurfaces of $V_2^\text{Syl}$ is \begin{equation*} \theta_{d-2} \geq \left\lceil \frac{1}{2}\left\lceil \frac{1}{2}(\ell_d-1)^2 \right\rceil^2 \right\rceil \geq \left\lceil 2^{-3} (\ell_d-1)^4 \right\rceil. \end{equation*} \noindent Proceeding similarly, we see that \begin{equation*} \lambda_2 = \theta_2 \geq \left\lceil 2^{5-2d} \left( \ell_d-1 \right)^{2d-4} \right\rceil. \end{equation*} \noindent Finally, note that $\lambda_1 \geq \lambda_2$ follows immediately from the polar cone construction. \end{proof} \begin{corollary}\label{cor:Lower Bound for Xi(m,d)} \textbf{(Lower Bound for $\Xi(m,d)$)}\\ Let $d \geq 4$ and $m \geq d+2$. Then, \begin{equation*} \Xi(m,d) \geq \left\lceil 4 \left( \frac{m-d-1}{2} \right)^{2d-4} \right\rceil. \end{equation*} \end{corollary} \begin{proof} First, Proposition 2.26 of \cite{Sutherland2021C} yields that an $(m-d-1)^{th}$ polar cone of $\tau_{1,\dotsc,d}$ is of type \begin{equation*} \left[ \begin{matrix} d &d-1 &\cdots &2 &1\\ 1 &\binom{m-d}{1} &\cdots &\binom{m-3}{d-2} &\binom{m-2}{d-1} \end{matrix} \right]. \end{equation*} \noindent Thus, the number of degree $d-1$ hypersurfaces of $V = \left( \tau_{1,\dotsc,d} \right)_1^\text{Syl}$ is $m-d$. Let $\xi(m,d;j)$ and $\Xi(m,d)$ be as in Definition \ref{def:Optimal Reduction of Tschirnhaus Complete Intersection}. It follows from Lemma \ref{lem:Lower Approximation} that \begin{equation*} \lambda_1 \geq \lambda_2 \geq \left\lceil 2^{5-2d}(m-d-1)^{2d-4} \right\rceil.
\end{equation*} \noindent Moreover, for each $j$, \begin{equation*} \xi(m,d;j) \geq \lambda_1 + \lambda_2 \geq \left\lceil 2^{5-2d}(m-d-1)^{2d-4} \right\rceil + \left\lceil 2^{5-2d}(m-d-1)^{2d-4} \right\rceil \geq \left\lceil 4 \left( \frac{m-d-1}{2} \right)^{2d-4} \right\rceil, \end{equation*} \noindent and thus it follows that \begin{equation*} \Xi(m,d) = \min\left\{ \xi(m,d;j) \ | \ 0 \leq j \leq \lambda_2-1 \right\} \geq \left\lceil 4 \left( \frac{m-d-1}{2} \right)^{2d-4} \right\rceil. \end{equation*} \end{proof} While we do not provide a full comparison here, we note that the key obstruction to obtaining further bounds on $\RD(n)$ using the methods of Theorem \ref{thm:Bounds from the Geometric Obliteration Algorithm} is that $\Xi(m,d)$ has a lower bound which grows exponentially in $d$ and that $m-d-1$ grows much more quickly than $d$ (for example, $m-d-1 \geq 19$ for $m \geq 26$). Having indicated the obstruction to obtaining further upper bounds on $\RD(n)$ using these methods, we now combine Theorems \ref{thm:Sutherland2021C} and \ref{thm:Bounds from the Geometric Obliteration Algorithm} to immediately construct a new bounding function with the same key properties of $G(m)$. \begin{corollary}\label{cor:The New Bounding Function} \textbf{(The New Bounding Function)}\\ Let $G':\mathbb Z_{\geq 2} \rightarrow \mathbb Z_{\geq 1}$ be the function with \begin{equation*} G'(m) = \max\left\{ \Xi(m,5), \frac{(m-1)!}{120}+1 \right\}, \end{equation*} \noindent for $m \in [13,17]$, with \begin{equation*} G'(m) = \max\left\{ \Xi(m,6), \frac{(m-1)!}{720}+1 \right\}, \end{equation*} \noindent for $m \in [22,25]$, and with $G'(m) = G(m)$ for $m \not\in [13,17] \cup [22,25]$. Then, $G'(m)$ has the following properties: \begin{enumerate} \item For each $m \geq 1$ and $n \geq G'(m)$, $\RD(n) \leq n-m$. \item For each $d \geq 4$, $G'(2d^2+7d+6) \leq \frac{(2d^2+7d+5)!}{d!}$.
In particular, for $d \geq 4$ and $n \geq \frac{(2d^2+7d+5)!}{d!}$, \begin{align*} \RD(n) \leq n - 2d^2-7d-6. \end{align*} \end{enumerate} \end{corollary} \subsection{Remaining Questions}\label{subsec:Remaining Questions} To the best of the authors' knowledge, the bounding function $G'(m)$ of Corollary \ref{cor:The New Bounding Function} exhausts the techniques and methods for determining upper bounds on resolvent degree from the classical literature (including \cite{Bring1786}, \cite{Chebotarev1954}, \cite{Hamilton1836}, \cite{Hilbert1927}, \cite{Segre1945}, \cite{Sylvester1887}, \cite{SylvesterHammond1887}, \cite{SylvesterHammond1888}, \cite{Tschirnhaus1683}, and \cite{Wiman1927}), as well as the modern insights from \cite{Brauer1975}, \cite{Sutherland2021C}, and \cite{Wolfson2021}. The bounding functions of Brauer, Hamilton, Sutherland, Sylvester, and Wolfson are constructed by determining points on the Tschirnhaus complete intersections $\tau_{1,\dotsc,m-1}^\circ$ over extensions of bounded resolvent degree. However, there are solutions of the quintic, the sextic, and the septic which use alternative constructions of Tschirnhaus transformations (see \cite{Klein1884} and \cite{Klein1905} for the respective original works, or \cite{Morrice1956} and \cite{Sutherland2019} for the respective English translations). We believe it would be insightful to understand whether one can reduce the general question of determining $\RD(n)$ to the more specific question of determining points on the Tschirnhaus complete intersections $\tau_{1,\dotsc,m-1}^\circ$. \begin{question}\label{question:Optimal Formulas via Tschirnhaus Complete Intersections} \textbf{(Optimal Formulas via Tschirnhaus Complete Intersections)}\\ For every $n$, let $m_n$ be such that $\RD(n) \leq n-m_n$. Is there a formula for the general degree $n$ polynomial obtained by determining a point of $\tau_{1,\dotsc,m_n-1}^\circ$ over an extension $K'/K_n$ of bounded resolvent degree? 
\end{question} For general $m$, the definition of $G'(m)=G(m)$ uses the combinatorial condition of Theorem 2.1 of \cite{DebarreManivel1998} to guarantee the existence of $k$-planes on the $\tau_{1,\dotsc,d}^\circ$ and then uses the dimension of the relevant moduli space (see Subsection 3.3 of \cite{Sutherland2021C} for details). Notably, this combinatorial condition is non-constructive and relies only on the type of $\tau_{1,\dotsc,d}^\circ$. One might hope that such formulas could be determined using constructive methods and one approach may be to leverage the specific geometry of the $\tau_{1,\dotsc,d}^\circ$ (e.g., using more information than its type). \begin{question}\label{question:RD Bounds via Explicit Constructions of k-Planes} \textbf{(RD Bounds via Explicit Constructions of $k$-Planes)}\\ Is there a bounding function $\mathfrak G(m)$ with $\mathfrak G(m) \leq G'(m)$ which arises from an explicit construction of $k$-planes on the $\tau_{1,\dotsc,d}^\circ$? If so, is it possible to determine the bounding function $\mathfrak G(m)$ such that \begin{equation*} \lim\limits_{m \rightarrow \infty} \frac{G(m)}{\mathfrak G(m)} = \lim\limits_{m \rightarrow \infty} \frac{G'(m)}{\mathfrak G(m)} = \infty? \end{equation*} \end{question} Theorem \ref{thm:Bounds from the Geometric Obliteration Algorithm} was proved using a consequence of the geometric obliteration algorithm, namely that $r(V) \leq g(V)$ for any intersection of hypersurfaces $V$. \begin{question}\label{question:Minimal Dimension Bound vs. Geometric Dimension Bound} \textbf{(Minimal Dimension Bound vs. Geometric Dimension Bound)}\\ For which intersections of hypersurfaces $V$ is the inequality $r(V) \leq g(V)$ strict? Are there classical examples of types of intersections of hypersurfaces where the inequality is not strict? \end{question} Let us now briefly consider a cubic hypersurface $H = \mathbb V(f) \subseteq \mathbb P_K^r$. 
When $r=3$ and $H$ is smooth, the Cayley-Salmon theorem yields that $H$ contains exactly 27 lines. The resolvent degree of determining a line on $H$ is bounded above by the dimension of the moduli space of smooth cubic surfaces, which is \begin{equation*} \dim\left( \mathcal M^\circ(3;3) \right) = \binom{3+3}{3} - (3+1)^2 = 4. \end{equation*} \noindent That $H$ has exactly 27 lines is consistent with Theorem 2.1 of \cite{DebarreManivel1998}, which states that the Fano variety of lines of a cubic surface in $\mathbb P_K^3$ is non-empty and has dimension 0. In particular, when $r=3$, most points $P \in H(K)$ do not lie on a line of $H$ over an algebraic closure $\overline{K}$. When $r=4$, however, any polar cone $\mathcal C(H;P)$ has dimension at least one and thus every point $P \in H(K)$ lies on at least one line $\Lambda = \Lambda(P,Q) \subseteq H$ over an algebraic closure $\overline{K}$. To determine such a point $Q$ directly, we must solve a polynomial of degree $6 = 3! = \deg(\mathcal C(H;P))$. Hence, we can determine a line through any point $P$ over an extension $K'/K$ with $\RD(K'/K) \leq \RD(6) \leq 2$. Additionally, observe that \begin{align*} g( \mathcal C(H;P) ) = g(3;1,1,1) = g(2;1,3) = 5. \end{align*} \noindent Thus, when $r \geq 5$, we can determine a point $Q \in \mathcal C(H;P) \setminus \{P\}$ over an extension determined by solving at most cubic polynomials (i.e., over a solvable extension). Now, let $V \subseteq \mathbb P_K^r$ be an intersection of hypersurfaces of type $\left[ \begin{matrix} d &\cdots &1\\ \ell_d &\cdots &\ell_1 \end{matrix} \right]$. For each $k \geq 1$, take $s_k(V)$ to be the minimal $s$ such that \begin{align*} (k+1)(s-k) - \sum\limits_{j=1}^d \ell_j \binom{k+j}{j} \geq 0. \end{align*} One implication of Theorem 2.1 of \cite{DebarreManivel1998} is that $V$ contains a $k$-plane for all $r \geq s_k(V)$.
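Like $\vartheta(d,k)$, the threshold $s_k(V)$ can be written with a single ceiling. A short Python sketch (our own illustration; the type is encoded as a list $[\ell_d,\dotsc,\ell_1]$, as in Appendix A):

```python
from math import comb

def s_k(k, ells):
    # Minimal s with (k+1)(s-k) - sum_j ell_j * C(k+j, j) >= 0,
    # for a type given as ells = [ell_d, ..., ell_1].
    d = len(ells)
    total = sum(ells[d - j] * comb(k + j, j) for j in range(1, d + 1))
    return k + -(-total // (k + 1))   # k + ceil(total / (k + 1))
```

For a single cubic hypersurface (type $[3;1,0,0]$) and $k=1$ this gives $s_1 = 3$, consistent with the discussion above: the Fano variety of lines of a cubic surface in $\mathbb P_K^3$ is already non-empty.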
We expect $s_k(V)$ to be the minimal ambient dimension required for $V$ to contain a $k$-plane; however, we expect the resolvent degree of determining such a $k$-plane to be large. Conversely, we expect $r\left( \mathcal C^k(V;P_0,\dotsc,P_{k-1}) \right)+k$, the ambient dimension required to determine a $k$-polar point over an extension $K'/K$ of small resolvent degree $(\RD(K'/K) \leq \RD(d))$, to be large. \begin{question}\label{question:Minimizing Ambient Dimension vs. Minimizing RD of Extensions} \textbf{(Minimizing Ambient Dimension vs. Minimizing RD of Extensions)}\\ Let $V$ be an intersection of hypersurfaces. How do $g\left( \mathcal C^k(V;P_0,\dotsc,P_{k-1}) \right)+k$, $r\left( \mathcal C^k(V;P_0,\dotsc,P_{k-1}) \right)+k$, and $s_k(V)$ compare? \end{question} \newpage \section{Python Implementations of the Obliteration Algorithm and Related Phenomena} In Subsection \ref{subsec:Appendix A - The Geometric Obliteration Algorithm}, we provide an implementation (Algorithm \ref{alg:The Geometric Obliteration Algorithm}) of the geometric obliteration algorithm in Python. In Subsection \ref{subsec:Appendix B - Lemmata for Appendix C}, we prove several lemmata which make the computations for the proof of Theorem \ref{thm:Bounds from the Geometric Obliteration Algorithm} feasible. Algorithm \ref{alg:The Geometric Obliteration Algorithm with Computational Improvements} in Subsection \ref{subsec:Appendix C - The Geometric Obliteration Algorithm with Computational Improvements} takes the same input and provides the same output as Algorithm \ref{alg:The Geometric Obliteration Algorithm}, but uses the lemmata of Subsection \ref{subsec:Appendix B - Lemmata for Appendix C} to decrease computation time. 
Finally, Algorithm \ref{alg:The Geometric Obliteration Algorithm Applied to Tschirnhaus Complete Intersections} in Subsection \ref{subsec:Appendix D - The Geometric Obliteration Algorithm Applied to Polar Cones of Tschirnhaus Complete Intersections} computes the information necessary for Theorem \ref{thm:Bounds from the Geometric Obliteration Algorithm}. \subsection{Appendix A: The Geometric Obliteration Algorithm}\label{subsec:Appendix A - The Geometric Obliteration Algorithm} \begin{algorithm} \begin{algorithmname}\label{alg:The Geometric Obliteration Algorithm} \textbf{(The Geometric Obliteration Algorithm)} \end{algorithmname} \vspace{-12pt} \hrulefill \vspace{-12pt} \begin{itemize} \item Input: An intersection of hypersurfaces $V$ of type $\left[ \begin{matrix} d &d-1 &\cdots &2 &1\\ \ell_d &\ell_{d-1} &\cdots &\ell_2 &\ell_1 \end{matrix} \right]$ with $d \geq 2$, encoded as the list $\text{DegreeList} = [\ell_d, \ell_{d-1}, \dotsc, \ell_2, \ell_1]$. \item Output: The geometric dimension bound $g(d;\ell_d,\dotsc,\ell_1)$. \end{itemize} \vspace{-12pt} \hrulefill \begin{algorithmic}[1] \Statex The function \textproc{ComputePolarCone} inputs a list which contains the type of an intersection of hypersurfaces $W$. It then returns a list which contains the type of a polar cone $\mathcal C(W;P)$. In particular, recall that for each $d' < d$, each hypersurface $H$ with $\deg(H) \geq d'$ defining $W$ contributes exactly one degree $d'$ hypersurface defining $\mathcal C(W;P)$ and each hypersurface defining $\mathcal C(W;P)$ arises in this manner.
\Statex \Function{ComputePolarCone}{List}: \State counter = List[0] \State ReturnList = [counter] \For{index \textbf{in} \ \textbf{range}(1,\textbf{len}(List)):} \State counter += List[index] \State ReturnList.append(counter) \EndFor \State \Return ReturnList \EndFunction \Statex \Statex The function \textproc{ObliterateLargestDegreeHypersurfaces} inputs a list which contains the type of an intersection of hypersurfaces $W$ whose largest degree hypersurface has degree $d \geq 3$. It identifies the number of hypersurfaces of largest degree and proceeds to iteratively remove a hypersurface $H$ of largest degree and compute a polar cone of the remaining intersection of hypersurfaces $W'$ (with an additional hyperplane included). \Statex \Statex Note that an additional hyperplane is added each time to avoid repeated polar cone points, i.e. if $P$ was the cone point of the previous polar cone, we pass to a hyperplane which does not contain $P$ to ensure that the cone point $Q$ of the next polar cone satisfies $Q \not= P$. Also, the polar cone of a hyperplane at any point is just the hyperplane itself, so to compute the combinatorics, it suffices to add one after computing the polar cone instead of doing it beforehand. \Statex \Statex As taking the polar cone of a hypersurface $H$ introduces only hypersurfaces of strictly smaller degree, this process terminates and \textproc{ObliterateLargestDegreeHypersurfaces} returns a list whose data is the multi-degree of an intersection of hypersurfaces $V'$ whose largest degree hypersurface has degree $d-1$.
\algstore{bkbreak} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{bkbreak} \Function{ObliterateLargestDegreeHypersurfaces}{List}: \While{List[0] $>$ 0:} \State List[0] -= 1 \State TempList = \textproc{ComputePolarCone}(List) \State List = TempList \State List[\textbf{len}(List)-1] += 1 \EndWhile \State ReturnList = [] \For{index \textbf{in} \ \textbf{range}(1,\textbf{len}(List)):} \State ReturnList.append(List[index]) \EndFor \State \Return ReturnList \EndFunction \Statex \Statex The function \textproc{ObliterateQuadricsViaLoops} works similarly to \textproc{ObliterateLargestDegreeHypersurfaces}, but the input is the multi-degree of an intersection of hypersurfaces of type $\left[ \begin{matrix} 2 &1\\ \ell_2 &\ell_1 \end{matrix} \right]$ and the loop ends with a single quadric remaining instead of zero quadrics remaining. \Statex \Function{ObliterateQuadricsViaLoops}{List}: \While{List[0] $>$ 1:} \State List[0] -= 1 \State TempList = \textproc{ComputePolarCone}(List) \State List = TempList \State List[\textbf{len}(List)-1] += 1 \EndWhile \State \Return [List[0],List[1]] \EndFunction \Statex \Statex The procedure \textproc{Main} inputs the multi-degree of an intersection of hypersurfaces $V$ as the list DegreeList and proceeds to successively ``obliterate'' the hypersurfaces of largest degree. The final steps of the procedure reduce to a list of the form $[1,\alpha]$, corresponding to the requisite intersection of a single quadric and $\alpha$ hyperplanes, and return the bound $1+\alpha$. 
\Statex \Procedure{Main}{DegreeList}: \For{index \textbf{in} \ \textbf{range}(1,\textbf{len}(DegreeList)-1):} \State TempDegreeList = \textproc{ObliterateLargestDegreeHypersurfaces}(DegreeList) \State DegreeList = TempDegreeList \EndFor \State FinalList = \textproc{ObliterateQuadricsViaLoops}(DegreeList) \State Sum = FinalList[0] + FinalList[1] \State \Return Sum \EndProcedure \end{algorithmic} \end{algorithm} \newpage \subsection{Appendix B: Lemmata for Computational Improvements}\label{subsec:Appendix B - Lemmata for Appendix C} In this subsection, we give explicit numerics for Proposition \ref{prop:The Obliteration Proposition} when $d = 2,3,4$. \begin{lemma}\label{lem:Obliterating Quadrics} \textbf{(Obliterating Quadrics)}\\ Consider an intersection of hypersurfaces $V$ of type $\left[ \begin{matrix} 2 &1\\ \ell_2 &\ell_1 \end{matrix} \right]$. Then, \begin{equation*} g(V) = 1 + \ell_1 + \frac{1}{2}(\ell_2-1)(\ell_2+2). \end{equation*} \end{lemma} \begin{proof} First, observe that $V^\text{Syl}(2;1)$ has type \begin{equation*} \left[ \begin{matrix} 2 &1\\ \ell_2-1 &\ell_1 + \ell_2 \end{matrix} \right], \end{equation*} \noindent by Definition \ref{def:Sylvester Reductions}. Similarly, $V^\text{Syl}(2;2)$ has type \begin{equation*} \left[ \begin{matrix} 2 &1\\ \ell_2-2 &\ell_1+\ell_2+\ell_2-1 \end{matrix} \right]. \end{equation*} \noindent Proceeding in this manner yields that $V^\text{Syl}(2;\ell_2-1)$ has type \begin{align*} \left[ \begin{matrix} 2 &1\\ 1 &\ell_1 + \sum\limits_{j=1}^{\ell_2-1} (\ell_2-j+1) \end{matrix} \right], \end{align*} \noindent and we note that \begin{equation*} \sum\limits_{j=1}^{\ell_2-1} (\ell_2-j+1) = \frac{1}{2}\left( \ell_2-1 \right) \left( \ell_2+2 \right). \end{equation*} \noindent From Lemma \ref{lem:The Reduction Lemma} and Definition \ref{def:Sylvester Reductions}, we see that \begin{equation*} g(V) = g\left( V^\text{Syl}(2;\ell_2-1) \right) = 1 + \ell_1 + \frac{1}{2}\left( \ell_2-1 \right) \left( \ell_2+2 \right). 
\end{equation*} \end{proof} \begin{lemma}\label{lem:Obliterating Cubics} \textbf{(Obliterating Cubics)}\\ Consider an intersection of hypersurfaces $V$ of type $\left[ \begin{matrix} 3 &2 &1\\ \ell_3 &\ell_2 &\ell_1 \end{matrix} \right]$. Then, $V_1^\text{Syl}$ is of type $\left[ \begin{matrix} 2 &1\\ \beta_3 &\alpha_3 \end{matrix} \right]$, where \begin{align*} \beta_3 &= \ell_2 + \frac{1}{2}(\ell_3-1)\ell_3,\\ \alpha_3 &= \ell_1 + \ell_2\ell_3 + \frac{1}{2}\ell_3(\ell_3+1) + \frac{1}{6}\ell_3\left( 2\ell_3^2-3\ell_3+1 \right). \end{align*} \end{lemma} \begin{proof} An argument analogous to the proof of Lemma \ref{lem:Obliterating Quadrics} yields that \begin{align*} \beta_3 &= \ell_2 + \sum\limits_{j=1}^{\ell_3} (\ell_3-j) = \ell_2 + \frac{1}{2}(\ell_3-1)\ell_3. \end{align*} \noindent Next, observe that $V^\text{Syl}(3;j)$ has type \begin{equation*} \left[ \begin{matrix} 3 &2 &1\\ \ell_3-j &\ell_2 + \sum\limits_{k=1}^j (\ell_3-k) &\lambda_j \end{matrix} \right]. \end{equation*} \noindent Consequently, \begin{align*} \lambda_{j+1} = \lambda_j + \left( \ell_3-j-1 \right) + \left( \ell_2 + \sum\limits_{k=1}^j (\ell_3-k) \right) + 1. \end{align*} \noindent Combined with the initial condition $\lambda_0 = \ell_1$, we obtain that \begin{align*} \alpha_3 &= \ell_1 + \left( \sum\limits_{j_1=1}^{\ell_3} (\ell_3-j_1+1) \right) + \left( \sum\limits_{j_2=1}^{\ell_3} \ell_2 + \sum\limits_{j_3=2}^{\ell_3} \sum\limits_{j_4=1}^{j_3-1} (\ell_3-j_4) \right),\\ &= \ell_1 + \frac{1}{2}\ell_3(\ell_3+1) + \left( \ell_2\ell_3 + \sum\limits_{j_3=2}^{\ell_3} \sum\limits_{j_4=1}^{j_3-1} (\ell_3-j_4) \right),\\ &= \ell_1 + \ell_2\ell_3 + \frac{1}{2}\ell_3(\ell_3+1) + \sum\limits_{j_3=2}^{\ell_3} \sum\limits_{j_4=1}^{j_3-1} (\ell_3-j_4),\\ &= \ell_1 + \ell_2\ell_3 + \frac{1}{2}\ell_3(\ell_3+1) + \frac{1}{6}\ell_3\left( 2\ell_3^2-3\ell_3+1 \right). 
\end{align*} \end{proof} \begin{lemma}\label{lem:Obliterating Quartics} \textbf{(Obliterating Quartics)}\\ Consider an intersection of hypersurfaces $V \subseteq \mathbb P_K^r$ of type $\left[ \begin{matrix} 4 &3 &2 &1\\ \ell_4 &\ell_3 &\ell_2 &\ell_1 \end{matrix} \right]$. Then, $V_1^\text{Syl}$ is of type $\left[ \begin{matrix} 3 &2 &1\\ \gamma_4 &\beta_4 &\alpha_4 \end{matrix} \right]$, where \begin{align*} \gamma_4 &= \ell_3 + \frac{1}{2}(\ell_4-1)\ell_4,\\ \beta_4 &= \ell_2 + \ell_3\ell_4 + \frac{1}{2}(\ell_4-1)\ell_4 + \frac{1}{6}\ell_4\left( 2\ell_4^2-3\ell_4+1 \right),\\ \alpha_4 &= \ell_1 + \ell_4\left( \ell_2+\ell_3+\frac{1}{2}(\ell_4+1) \right) + \ell_4\left( \frac{1}{2}\ell_3(\ell_4-1) + \frac{1}{3}\left( 2\ell_4^2-3\ell_4+1 \right) \right)\\ & + \frac{1}{24} (\ell_4-2)(\ell_4-1)\ell_4(3\ell_4-1). \end{align*} \end{lemma} \begin{proof} The proofs of Lemmata \ref{lem:Obliterating Quadrics} and \ref{lem:Obliterating Cubics} generalize to determine $\gamma_4$ and $\beta_4$ in a straightforward manner. It remains to determine $\alpha_4$. Note that $V^\text{Syl}(4;j)$ has type \begin{equation*} \left[ \begin{matrix} 4 &3 &2 &1\\ \ell_4-j &\ell_3 + \sum\limits_{k_1=1}^j (\ell_4-k_1) &\ell_2 + \left( \sum\limits_{k_2=1}^j \ell_4 - k_2 \right) + \sum\limits_{k_3=1}^j \left( \ell_3 + \sum\limits_{k_4=1}^{k_3-1} (\ell_4-k_4) \right) &\lambda_j \end{matrix} \right]. \end{equation*} \noindent As a result, \begin{align*} \lambda_{j+1} = \lambda_j + (\ell_4-j-1) + \left( \ell_3 + \sum\limits_{k=1}^j (\ell_4-k) \right) + \left( \ell_2 + \left( \sum\limits_{k_1=1}^j \ell_4 - k_1 \right) + \sum\limits_{k_2=1}^j \left( \ell_3 + \sum\limits_{k_3=1}^{k_2-1} (\ell_4-k_3) \right) \right) + 1. 
\end{align*} \noindent Given the initial condition $\lambda_0 = \ell_1$, it follows that \begin{align*} \alpha_4 &= \ell_1 + \left( \sum\limits_{j_1=1}^{\ell_4} \ell_4-j_1+1 \right) + \left( \sum\limits_{j_2=1}^{\ell_4} \ell_3 + \sum\limits_{j_3=2}^{\ell_4} \sum\limits_{j_4=1}^{j_3-1} (\ell_4-j_4) \right)\\ &+ \left( \sum\limits_{j_5=1}^{\ell_4} \ell_2 + \sum\limits_{j_6=2}^{\ell_4} \sum\limits_{j_7=1}^{j_6-1} (\ell_4-j_7) + \sum\limits_{j_8=2}^{\ell_4} \sum\limits_{j_9=1}^{j_8-1} \ell_3 + \sum\limits_{j_{10}=3}^{\ell_4} \sum\limits_{j_{11}=2}^{j_{10}-1} \sum\limits_{j_{12}=1}^{j_{11}-1} (\ell_4-j_{12}) \right),\\ \\ &= \ell_1 + \left( \frac{1}{2}\ell_4(\ell_4+1) \right) + \left( \ell_3\ell_4 + \frac{1}{6}\ell_4\left( 2\ell_4^2-3\ell_4+1 \right) \right)\\ &+ \left( \ell_2\ell_4 + \frac{1}{6}\ell_4\left( 2\ell_4^2-3\ell_4+1 \right) + \frac{1}{2}(\ell_4-1)\ell_4\ell_3 + \sum\limits_{j_{10}=3}^{\ell_4} \sum\limits_{j_{11}=2}^{j_{10}-1} \sum\limits_{j_{12}=1}^{j_{11}-1} (\ell_4-j_{12}) \right),\\ \\ &= \ell_1 + \ell_4\left( \ell_2+\ell_3+\frac{1}{2}(\ell_4+1) \right) + \ell_4\left( \frac{1}{2}\ell_3(\ell_4-1) + \frac{1}{3}\left( 2\ell_4^2-3\ell_4+1 \right) \right) + \sum\limits_{j_{10}=3}^{\ell_4} \sum\limits_{j_{11}=2}^{j_{10}-1} \sum\limits_{j_{12}=1}^{j_{11}-1} (\ell_4-j_{12}),\\ &= \ell_1 + \ell_4\left( \ell_2+\ell_3+\frac{1}{2}(\ell_4+1) \right) + \ell_4\left( \frac{1}{2}\ell_3(\ell_4-1) + \frac{1}{3}\left( 2\ell_4^2-3\ell_4+1 \right) \right) + \frac{1}{24} (\ell_4-2)(\ell_4-1)\ell_4(3\ell_4-1). 
\end{align*} \end{proof} \newpage \subsection{Appendix C: The Geometric Obliteration Algorithm with Computational Improvements}\label{subsec:Appendix C - The Geometric Obliteration Algorithm with Computational Improvements} \begin{algorithm} \begin{algorithmname}\label{alg:The Geometric Obliteration Algorithm with Computational Improvements} \textbf{(The Geometric Obliteration Algorithm with Computational Improvements)} \end{algorithmname} \vspace{-12pt} \hrulefill \vspace{-12pt} \begin{itemize} \item Input: An intersection of hypersurfaces $V$ of type $\left[ \begin{matrix} d &d-1 &\cdots &2 &1\\ \ell_d &\ell_{d-1} &\cdots &\ell_2 &\ell_1 \end{matrix} \right]$ with $d \geq 2$, encoded as the list $\text{DegreeList} = [\ell_d, \ell_{d-1}, \dotsc, \ell_2, \ell_1]$. \item Output: The geometric dimension bound $g(d;\ell_d,\dotsc,\ell_1)$. \end{itemize} \vspace{-12pt} \hrulefill \begin{algorithmic}[1] \Statex We will use the same functions \textproc{ComputePolarCone} and \textproc{ObliterateLargestDegreeHypersurfaces} which were originally defined in Algorithm \ref{alg:The Geometric Obliteration Algorithm}. \Statex \Statex We now implement Lemma \ref{lem:Obliterating Quartics} (respectively, Lemmata \ref{lem:Obliterating Cubics} and \ref{lem:Obliterating Quadrics}) via the following three functions. 
\Statex \Function{ObliterateQuartics}{List}: \State a = List[0] \State b = List[1] \State c = List[2] \State d = List[3] \State gammafour = b + (1/2)*(a-1)*a \State betafour = c + a*b + (1/2)*(a-1)*a + (1/6)*(a-1)*a*(2*a-1) \State alphafour = d + a*(b+c+(1/2)*(a+1)) + a*((1/2)*b*(a-1)+(1/3)*((2*(a**2))-(3*a)+1)) \State \hspace{0.7in} + (1/24)*(a-2)*(a-1)*a*(3*a-1) \State \Return [gammafour,betafour,alphafour] \EndFunction \Statex \Function{ObliterateCubics}{List}: \State a = List[0] \State b = List[1] \State c = List[2] \State betathree = b + (1/2)*(a-1)*a \State alphathree = c + a*b + (1/2)*a*(a+1) + (1/6)*a*((2*(a**2))-(3*a)+1) \State \Return [betathree,alphathree] \EndFunction \Statex \Function{ObliterateQuadrics}{List}: \State a = List[0] \State b = List[1] \State alphatwo = b + (1/2)*(a-1)*(a+2) \State \Return [1,alphatwo] \EndFunction \algstore{bkbreak} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{bkbreak} \Statex The \textproc{Main} procedure works very similarly to its counterpart in Algorithm \ref{alg:The Geometric Obliteration Algorithm}, with the only differences being the use of specialized functions to obliterate quartic, cubic, and quadric hypersurfaces. 
\Statex \Procedure{Main}{DegreeList}: \If{\textbf{len}(DegreeList) == 2:} \State FinalDegreeList = \textproc{ObliterateQuadrics}(DegreeList) \State Sum = FinalDegreeList[0] + FinalDegreeList[1] \State \Return Sum \ElsIf{\textbf{len}(DegreeList) == 3:} \State TempDegreeList = \textproc{ObliterateCubics}(DegreeList) \State DegreeList = TempDegreeList \State TempDegreeList = \textproc{ObliterateQuadrics}(DegreeList) \State FinalDegreeList = TempDegreeList \State Sum = FinalDegreeList[0] + FinalDegreeList[1] \State \Return Sum \ElsIf{\textbf{len}(DegreeList) == 4:} \State TempDegreeList = \textproc{ObliterateQuartics}(DegreeList) \State DegreeList = TempDegreeList \State TempDegreeList = \textproc{ObliterateCubics}(DegreeList) \State DegreeList = TempDegreeList \State TempDegreeList = \textproc{ObliterateQuadrics}(DegreeList) \State FinalDegreeList = TempDegreeList \State Sum = FinalDegreeList[0] + FinalDegreeList[1] \State \Return Sum \Else: \For{index \textbf{in} \ \textbf{range}(1,\textbf{len}(DegreeList)-3):} \State TempDegreeList = \textproc{ObliterateLargestDegreeHypersurfaces}(DegreeList) \State DegreeList = TempDegreeList \EndFor \State TempDegreeList = \textproc{ObliterateQuartics}(DegreeList) \State DegreeList = TempDegreeList \State TempDegreeList = \textproc{ObliterateCubics}(DegreeList) \State DegreeList = TempDegreeList \State TempDegreeList = \textproc{ObliterateQuadrics}(DegreeList) \State FinalDegreeList = TempDegreeList \State Sum = FinalDegreeList[0] + FinalDegreeList[1] \State \Return Sum \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \newpage \subsection{Appendix D: The Geometric Obliteration Algorithm for $\mathcal C^{m-d-1}(\tau_{1,\dotsc,d};P_0,\dotsc,P_{m-d-1})$}\label{subsec:Appendix D - The Geometric Obliteration Algorithm Applied to Polar Cones of Tschirnhaus Complete Intersections} \begin{algorithm} \begin{algorithmname}\label{alg:The Geometric Obliteration Algorithm Applied to Tschirnhaus Complete Intersections} \textbf{(The 
Geometric Obliteration Algorithm for $\mathcal C^{m-d-1}(\tau_{1,\dotsc,d};P_0,\dotsc,P_{m-d-1})$)} \end{algorithmname} \vspace{-12pt} \hrulefill \vspace{-12pt} \begin{itemize} \item Imported Packages: scipy.special, math \item Input: A positive integer $d$ and another positive integer $m \geq d+2$. \item Output: The optimal reduction bound of $\tau_{1,\dotsc,d}$ for $m$, $\Xi(m,d)$. \end{itemize} \vspace{-12pt} \hrulefill \begin{algorithmic}[1] \Statex We will use the same functions \textproc{ComputePolarCone} and \textproc{ObliterateLargestDegreeHypersurfaces} which were originally defined in Algorithm \ref{alg:The Geometric Obliteration Algorithm}, as well as the functions \textproc{ObliterateQuartics} and \textproc{ObliterateCubics} which were originally defined in Algorithm \ref{alg:The Geometric Obliteration Algorithm with Computational Improvements}. \Statex \Statex We first implement a closed form for the type of an $(m-d-1)^{st}$ polar cone of $\tau_{1,\dotsc,d}$, which is Proposition 2.26 of \cite{Sutherland2021C}. \Statex \Function{PolarConeOfTschirnhausType}{Type,Level}: \State ReturnList = [1] \For{counter \textbf{in} \ \textbf{range}(1,Type):} \State NewTerm = scipy.special.comb((Level+counter), counter, exact=True) \State ReturnList.append(NewTerm) \EndFor \State \Return ReturnList \EndFunction \algstore{bkbreak} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{bkbreak} \Statex This function takes the type of an $(m-d-1)^{st}$ polar cone of $\tau_{1,\dotsc,d}$ as an input and outputs $\Xi(m,d)$. 
\Statex \Function{ObliterateAMinimalNumberOfQuadrics}{List}: \State a = List[0] \State b = List[1] \State Dimension = b + (1/2)*(a**2 + a - 2) \State NumberOfQuadrics = 1 \State DimensionList = [Dimension] \While{ 2**NumberOfQuadrics $<$ Dimension:} \State NumberOfQuadrics += 1 \State Dimension = NumberOfQuadrics \State \hspace{0.7in} + (1/2)*(a**2 + a - NumberOfQuadrics**2 - NumberOfQuadrics) \State DimensionList.append(Dimension) \EndWhile \State MaxList1 = [2**(NumberOfQuadrics-1)+1, DimensionList[NumberOfQuadrics-2]+m-d+1] \State MaxList2 = [2**NumberOfQuadrics+1, DimensionList[NumberOfQuadrics-1]+m-d+1] \State Max1 = max(MaxList1[0], MaxList1[1]) \State Max2 = max(MaxList2[0], MaxList2[1]) \If{ Max2 $<$ Max1:} \If{ MaxList2[1] $<$ MaxList2[0]: } \State \Return MaxList2[0] \Else: \State \Return MaxList2[1] \EndIf \Else: \If{ MaxList1[1] $<$ MaxList1[0]: } \State \Return MaxList1[0] \Else: \State \Return MaxList1[1] \EndIf \EndIf \EndFunction \algstore{bkbreak} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{bkbreak} \Statex The \textproc{Main} procedure functions similarly to its counterpart in Algorithm \ref{alg:The Geometric Obliteration Algorithm with Computational Improvements}. The two differences are that the degree list is computed from $m$ and $d$ and that \textproc{ObliterateAMinimalNumberOfQuadrics} is used instead of \textproc{ObliterateQuadrics}. 
\Statex \Procedure{Main}{m,d}: \State PolarConeLevel = m-d-1 \State DegreeList = \textproc{PolarConeOfTschirnhausType}(d,PolarConeLevel) \If{\textbf{len}(DegreeList) == 2:} \State \Return \textproc{ObliterateAMinimalNumberOfQuadrics}(DegreeList) \ElsIf{\textbf{len}(DegreeList) == 3:} \State TempDegreeList = \textproc{ObliterateCubics}(DegreeList) \State DegreeList = TempDegreeList \State \Return \textproc{ObliterateAMinimalNumberOfQuadrics}(DegreeList) \ElsIf{\textbf{len}(DegreeList) == 4:} \State TempDegreeList = \textproc{ObliterateQuartics}(DegreeList) \State DegreeList = TempDegreeList \State TempDegreeList = \textproc{ObliterateCubics}(DegreeList) \State DegreeList = TempDegreeList \State \Return \textproc{ObliterateAMinimalNumberOfQuadrics}(DegreeList) \Else: \For{index \textbf{in} \ \textbf{range}(1,\textbf{len}(DegreeList)-3):} \State TempDegreeList = \textproc{ObliterateLargestDegreeHypersurfaces}(DegreeList) \State DegreeList = TempDegreeList \EndFor \State TempDegreeList = \textproc{ObliterateQuartics}(DegreeList) \State DegreeList = TempDegreeList \State TempDegreeList = \textproc{ObliterateCubics}(DegreeList) \State DegreeList = TempDegreeList \State \Return \textproc{ObliterateAMinimalNumberOfQuadrics}(DegreeList) \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \newpage
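For reference, the pseudocode of Algorithm \ref{alg:The Geometric Obliteration Algorithm} translates directly into ordinary Python. The sketch below (function names ours) implements the obliteration loop and cross-checks it against the closed forms of Lemmata \ref{lem:Obliterating Quadrics} and \ref{lem:Obliterating Cubics}.

```python
def compute_polar_cone(degrees):
    """Type of a polar cone C(W;P): the prefix sums of the type of W,
    since each hypersurface of degree > d' contributes exactly one
    degree-d' hypersurface to the polar cone."""
    out, running = [], 0
    for count in degrees:
        running += count
        out.append(running)
    return out

def obliterate_largest_degree(degrees):
    """Remove the top-degree hypersurfaces one at a time, replacing the
    remainder by a polar cone and adding one hyperplane per step (the
    hyperplane avoiding the previous cone point)."""
    degrees = list(degrees)
    while degrees[0] > 0:
        degrees[0] -= 1
        degrees = compute_polar_cone(degrees)
        degrees[-1] += 1
    return degrees[1:]  # drop the exhausted top degree

def obliterate_quadrics_via_loops(degrees):
    """Same loop for type [2,1; l2,l1], stopping at a single quadric."""
    degrees = list(degrees)
    while degrees[0] > 1:
        degrees[0] -= 1
        degrees = compute_polar_cone(degrees)
        degrees[-1] += 1
    return degrees  # [1, alpha]

def geometric_dimension_bound(degrees):
    """g(d; l_d, ..., l_1) as computed by the Main procedure."""
    degrees = list(degrees)
    while len(degrees) > 2:
        degrees = obliterate_largest_degree(degrees)
    final = obliterate_quadrics_via_loops(degrees)
    return final[0] + final[1]

def obliterate_cubics_closed(l3, l2, l1):
    """Closed form of Lemma 'Obliterating Cubics' for the type of V_1^Syl."""
    beta = l2 + (l3 - 1) * l3 // 2
    alpha = l1 + l2 * l3 + l3 * (l3 + 1) // 2 \
            + l3 * (2 * l3 * l3 - 3 * l3 + 1) // 6
    return [beta, alpha]
```

As a sanity check, `geometric_dimension_bound([2, 0])` returns $3$, agreeing with Lemma \ref{lem:Obliterating Quadrics} ($1 + 0 + \tfrac{1}{2}(\ell_2-1)(\ell_2+2) = 3$ for $\ell_2 = 2$), and the loop-based reduction agrees with the cubic closed form on small inputs.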
https://arxiv.org/abs/1408.1262
Theta rank, levelness, and matroid minors
The Theta rank of a finite point configuration $V$ is the maximal degree necessary for a sum-of-squares representation of a non-negative linear function on $V$. This is an important invariant for polynomial optimization that is in general hard to determine. We study the Theta rank and levelness, a related discrete-geometric invariant, for matroid base configurations. It is shown that the class of matroids with bounded Theta rank or levelness is closed under taking minors. This allows for a characterization of matroids with bounded Theta rank or levelness in terms of forbidden minors. We give the complete (finite) list of excluded minors for Theta-$1$ matroids which generalizes the well-known series-parallel graphs. Moreover, the class of Theta-$1$ matroids can be characterized in terms of the degree of generation of the vanishing ideal and in terms of the psd rank for the associated matroid base polytope. We further give a finite list of excluded minors for $k$-level graphs and matroids and we investigate the graphs of Theta rank $2$.
\section{Introduction} \label{sec:intro} Let $V$ be a configuration of finitely many points in $\R^n$. A linear function $\ell(\x) = \delta - \langle c, \x \rangle$ which is non-negative on $V$ is called $\mathbf{k}$\Defn{-sos} with respect to $V$ if there exist polynomials $h_1,\ldots,h_s\in \mathbb{R}[x_1,\ldots , x_n]$ such that $\deg h_i\leq k$ and \begin{equation}\label{eqn:k-sos} \ell(v) \ = \ h_1^2(v) + h_2^2(v) + \cdots + h_s^2(v) \end{equation} for all $v \in V$. The \Defn{Theta rank} $\Th(V)$ of $V$ is the smallest $k \ge 0$ such that every non-negative linear function is $k$-sos with respect to $V$. The Theta rank was introduced in~\cite{GPT} as a measure for the `complexity' of linear optimization over $V$ using tools from polynomial optimization. If $V$ is given as the solutions to a system of polynomial equations, then the size of a semidefinite program for the (exact) optimization of a linear function over $V$ is of order $O(n^{\Th(V)})$. For many practical applications, for example in combinatorial optimization, an algebraic description of $V$ is readily available and the semidefinite programming approach is the method of choice. Clearly, situations with high Theta rank render the approach impractical. We are interested in \[ \VTheta_{k} \ := \ \{ V \text{ point configuration\,} : \Th(V) \le k \}. \] As $V$ is finite and $\ell(\x)$ non-negative on $V$, we may interpolate $\sqrt{\ell(\x)}$ over $V$ by a single polynomial which shows that $\Th(V) \le |V|-1$. This, however, is a rather crude estimate as the $0/1$-cube $V = \{0,1\}^n$ has Theta rank $1$. Let $\ell(\x)$ be a non-negative linear function. The subconfiguration $V^\prime = \{ v \in V : \ell(v)=0\}$ is called a face of $V$ with supporting hyperplane $H = \{ \x \in \R^n :\ell(\x) = 0\}$. If $V^\prime \neq V$ is inclusion-maximal, then $V^\prime$ is called a \Defn{facet} and $H$ (and equivalently $\ell(\x)$) \Defn{facet-defining}. 
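The claim above that the $0/1$-cube has Theta rank $1$ can be made concrete: each facet-defining function $x_i$ (respectively $1-x_i$) agrees on the cube with the square of a degree-$1$ polynomial, since $t = t^2$ for $t \in \{0,1\}$, so the representation \eqref{eqn:k-sos} holds with a single square of degree $1$. A quick Python check, for illustration only:

```python
from itertools import product

# Facet-defining linear functions of the 0/1-cube: x_i >= 0 and 1 - x_i >= 0.
# Each is 1-sos on the cube, because t = t**2 for t in {0, 1}.
n = 4
V = list(product([0, 1], repeat=n))
for v in V:
    for i in range(n):
        assert v[i] == v[i] ** 2            # ell = x_i,     h_1 = x_i
        assert 1 - v[i] == (1 - v[i]) ** 2  # ell = 1 - x_i, h_1 = 1 - x_i
```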
If $V$ is a full-dimensional point configuration then $H$ and $\ell(\x)$, up to positive scaling, are unique. It follows from basic convexity that $\Th(V)$ is the smallest $k$ such that all facet-defining linear functions $\ell(\x)$ are $k$-sos. A point configuration $V$ is \Defn{$k$-level} if for every facet-defining hyperplane $H$ there are $k$ parallel hyperplanes $H = H_1,H_2,\dots,H_k$ with \[ V \ \subseteq \ H_1 \cup H_2 \cup \cdots \cup H_k. \] Equivalently, $V$ is $k$-level if every facet-defining linear function $\ell(\x)$ takes at most $k$ distinct values on $V$. We say that a facet $F$ is $k$-level if its facet-defining linear function $\ell(\x)$ takes exactly $k$ distinct values on $V$. The \Defn{levelness} $\Lev(V)$ of $V$ is the smallest $k$ such that $V$ is $k$-level. It is easy to see that $\Th(V) \le \Lev(V) - 1$. Hence, the class $\VLevels_k$ of all $k$-level point configurations is a subclass of $\VTheta_{k-1}$. A main result of~\cite{GPT} is the following characterization of $\VTheta_1$. \begin{thm}[{\cite[Thm.~4.2]{GPT}}]\label{thm:GPT} Let $V$ be a finite point configuration. Then $V$ has Theta rank $1$ if and only if $V$ is $2$-level. \end{thm} For $k \ge 2$, it can be shown that $\VLevels_k\subsetneq \VTheta_{k-1}$. The (convex) polytopes $P = \conv(V)$ for $2$-level point configurations are very interesting. They arise in the study of extremal centrally-symmetric polytopes~\cite{SWZ} as well as in statistics under the name of \emph{compressed} polytopes~\cite{Sullivant}. Every $2$-level polytope is affinely isomorphic to a $0/1$-polytope which gives them a combinatorial character. Nevertheless we lack a genuine understanding of such a family. In this paper, we study the subclasses $\MTheta_k \subset \VTheta_k$ of point configurations coming from the bases of \emph{matroids}. We recall the notion of matroids and the associated geometric objects in Section~\ref{sec:matroids_pts}. 
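Levelness is straightforward to evaluate once the facet-defining functionals are known: one counts the distinct values each functional takes on $V$. A small Python sketch (with the facets supplied by hand, since enumerating them would itself require a convex-hull computation; all names ours):

```python
from itertools import product

def levelness(V, facet_functionals):
    # Lev(V): the largest number of distinct values that a
    # facet-defining functional takes on V.
    return max(len({f(v) for v in V}) for f in facet_functionals)

# The 0/1-cube is 2-level: each facet functional x_i (resp. 1 - x_i)
# takes only the two values 0 and 1 on the cube.
n = 3
cube = list(product([0, 1], repeat=n))
cube_facets = [lambda v, i=i: v[i] for i in range(n)] \
            + [lambda v, i=i: 1 - v[i] for i in range(n)]
assert levelness(cube, cube_facets) == 2

# The segment V = {0, 1, 2} in R^1 has facets x >= 0 and 2 - x >= 0,
# each taking three distinct values, so V is 3-level but not 2-level.
segment = [(0,), (1,), (2,)]
segment_facets = [lambda v: v[0], lambda v: 2 - v[0]]
assert levelness(segment, segment_facets) == 3
```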
In particular, we show that the classes $\MTheta_k$ are closed under taking \emph{minors}. This, in principle, allows for a characterization of $\MTheta_k$ in the form of forbidden sub-structures. In Section~\ref{sec:2level}, we focus on the class $\MTheta_1$ of matroids of Theta rank $1$ or, equivalently, $2$-level matroids. Our first main result is the following. \begin{thm}\label{thm:main} Let $\MM = (E,\BB)$ be a matroid and $V_\MM \subset \R^E$ the corresponding base configuration. The following are equivalent: \begin{enumerate}[\rm (i)] \item $V_\MM$ has Theta rank $1$ or, equivalently, is $2$-level; \item $\MM$ has no minor isomorphic to $\MM(K_4)$, $\mathcal{W}^3$, $Q_6$, or $P_6$; \item $\MM$ can be constructed from uniform matroids by taking direct sums or $2$-sums; \item The vanishing ideal $I(V_\MM)$ is generated in degrees $\le 2$; \item The base polytope $P_\MM$ has minimal psd rank. \end{enumerate} \end{thm} Part (ii) yields a complete and, in particular, finite list of excluded minors whereas (iii) gives a synthetic description of this class of matroids. Parts (iv) and (v) are proven in Section~\ref{sect:gen_psd}. The former states that $2$-level matroids are precisely those matroids $\MM$ for which the base configuration $V_\MM$ is cut out by quadrics (Theorem~\ref{thm:main2}). This contrasts with the situation for general point configurations (Example~\ref{ex:gen2}). The psd rank of a polytope $P$ is the smallest `size' of a spectrahedron that linearly projects to $P$. The psd rank was studied in~\cite{GPT2,GRT} and it was shown that the psd rank $\Psd(P)$ is at least $\dim P + 1$. Part (v) shows that the $2$-level matroids are exactly those matroids for which the psd rank of the base polytope $P_\MM = \conv(V_\MM)$ is minimal. Again, this is in strong contrast to the psd rank of general polytopes. In Section~\ref{sec:higher} we give a complete list of excluded minors for $k$-level graphs (Theorem~\ref{thm:excludminorsgraphs}). 
The classes of $3$-level and $4$-level graphs appear in works of Halin~(see~\cite[Ch.~6]{Diestel}) and Oxley~\cite{Oxley5wheel}. In particular, the wheel with $5$ spokes $W_5$ is shown to have Theta rank $3$. Combined with results of Oxley~\cite{Oxley5wheel}, this yields a finite list of candidates for a complete characterization of Theta-$2$ graphs. Whereas the list of forbidden minors for graphs is always finite, this is generally not true for matroids. In Section~\ref{sec:klevel} we show that $k$-levelness of matroids is characterized by finitely many excluded minors and we conjecture this to be true for matroids of Theta rank~$k$. \textbf{Acknowledgements.} We would like to thank Philipp Rostalski and Frank Vallentin for helpful discussions regarding computations and we thank Bernd Sturmfels for his interest in the project. \section{Point configurations and matroids} \label{sec:matroids_pts} In this section we study properties of Theta rank and levelness related to the geometry of the point configuration. In particular, we investigate the behavior of these invariants under taking sub-configurations. We recall basic notions from matroid theory and associated point configurations and polytopes. \subsection{Theta rank, levelness, and face-hereditary properties} The definitions of levelness and Theta rank make only reference to the affine hull of the configuration $V$ and thus neither depend on the embedding nor on a choice of coordinates. To have it on record we note the following basic property. \begin{prop}\label{prop:TH-aff}% The levelness and the Theta rank of a point configuration are invariant under affine transformations. \end{prop} That this does not hold for (admissible) projective transformations is clear for the levelness; for the Theta rank it follows from Theorem~\ref{thm:GPT}. \begin{prop}\label{prop:TH-product}% Let $V_1 \subset \R^{d_1}$ and $V_2 \subset \R^{d_2}$ be point configurations. 
Then the Theta rank satisfies $\Th(V_1 \times V_2) = \max(\Th(V_1),\Th(V_2))$. The same is true for $\Lev(V_1 \times V_2)$. \end{prop} \begin{proof} \newcommand\y{\mathbf{y}} A linear function $\ell(\x,\y)$ is facet defining for $V_1 \times V_2$ if and only if $\ell(\x,0)$ is facet defining for $V_1$ or $\ell(0,\y)$ is facet defining for $V_2$. Thus any representation~\eqref{eqn:k-sos} lifts to $\R[\x,\y]$. \end{proof} The Theta rank as well as the levelness of a point configuration are not monotone with respect to taking subconfigurations as can be seen by removing a single point from $\{0,1\}^d$. However, it turns out that monotonicity holds for subconfigurations induced by supporting hyperplanes. Let us call a collection $\mathcal{P}$ of point configurations \Defn{face-hereditary} if it is closed under taking faces. That is, $V \cap H \in \mathcal{P}$ for any $V \in \mathcal{P}$ and supporting hyperplane $H$ for $V$. \begin{lemma}\label{lem:face-hed} The classes $\VTheta_{ k}$ and $\VLevels_{ k}$ are face-hereditary. \end{lemma} \begin{proof} Let $V \subset \R^d$ be a full-dimensional point configuration and $H = \{ p \in \R^d : g(p) = 0\}$ a supporting hyperplane such that the affine hull of $V^\prime := V \cap H$ has codimension $1$. Let $\ell(\x)$ be facet-defining for $V^\prime$. Observe that $\ell(\x)$ and $\ell_\delta(\x) := \ell(\x)+ \delta g(\x)$ give the same linear function on $V^\prime$ for all $\delta$. For \[ \delta \ = \ \max \bigl\{\tfrac{-\ell(v)}{g(v)} : v \in V \setminus V^\prime\bigr\} \] $\ell_\delta(\x)$ is non-negative on $V$. Hence any representation~\eqref{eqn:k-sos} of $\ell_\delta$ over $V$ yields a representation for $\ell$ over $V^\prime$. Moreover, the levelness of $\ell_\delta(\x)$ gives an upper bound on the levelness of $\ell(\x)$. \end{proof} It is interesting to note that these properties are not hereditary with respect to arbitrary hyperplanes. 
Indeed, consider the point configuration \[ V \ = \ (\{0,1\}^n \times \{-1,0,1\} ) \setminus \{\0\}. \] It can easily be seen that $\Th(V) = \Lev(V) - 1 = 2$. The hyperplane $H = \{ \x \in \R^{n+1} : x_{n+1} = 0\}$ is not supporting and $V^\prime = V \cap H = \{0,1\}^n \setminus \{\0\}$. The linear function $\ell(\x) = x_1 + \cdots + x_n -1$ is facet-defining for $V^\prime$ with $n$ levels. As for the Theta rank, any representation~\eqref{eqn:k-sos} yields a polynomial $f(\x) = \ell(\x) - \sum_i h_i^2(\x)$ of degree $2k$ that vanishes on $V^\prime$ and $f(\0)= -1-\sum_i h_i^2(0) <0$. For $n > 4$, the following proposition ensures that $\Th(V^\prime) \ge 3$. \begin{prop}\label{prop:cube_gen} Let $V^\prime = \{0,1\}^n \setminus \{\0\}$ and $f(\x)$ a polynomial vanishing on $V^\prime$ and $f(\0)\neq 0$. Then $\deg f \ge n$. \end{prop} \begin{proof} For a monomial $\x^\alpha$, let $\tau = \{ i : \alpha_i > 0 \}$ be its support. Over the set of $0/1$-points it follows that $\x^\alpha$ and $\x^\tau := \prod_{i \in \tau} x_i$ represent the same function. Hence, we can assume that $f$ is of the form $f(\x) \ = \ \sum_{\tau \subseteq [n]} c_\tau \x^\tau$ for some $c_\tau \in \R$, $\tau \subseteq [n]$. Moreover, $c_{\emptyset}=f(0)\neq 0$ and without loss of generality we can assume $c_{\emptyset}=1$. Any point $v \in V^\prime$ is of the form $v = \1_\sigma$ for some $\emptyset \neq \sigma \subseteq [n]$ and we calculate \[ 0 \ = \ f(v) \ =\sum_{\emptyset \subseteq \tau \subseteq \sigma} c_\tau. \] It follows that $c_\tau$ satisfies the defining conditions of the \emph{M\"obius function} of the Boolean lattice and hence equals $c_\tau=(-1)^{|\tau |}$ for all $\tau \subseteq [n]$. In particular, $c_{[n]} \neq 0$, which finishes the proof. \end{proof} \subsection{Matroids and basis configurations} We now introduce the combinatorial point configurations that are our main object of study. 
Matroids and their combinatorial theory are a vast subject and we refer the reader to the book by Oxley~\cite{Oxley} for further information. \begin{defi}\label{dfn:matroid} A \Defn{matroid} of rank $k$ is a pair $\MM = (E,\mathcal{B})$ consisting of a finite ground set $E$ and a collection of bases $\emptyset\neq \mathcal{B}\subseteq \binom{E}{k}$ satisfying the basis exchange axiom: for $B_1,B_2 \in \BB$ and $x \in B_1 \setminus B_2$ there is $y\in B_2\setminus B_1$ such that $(B_1 \setminus x)\cup y \in \BB$. \end{defi} A set $I \subseteq E$ is \Defn{independent} if $I \subseteq B$ for some $B \in \BB$. The \Defn{rank} of a subset $X \subseteq E$, denoted by $\rk_{\MM}(X)$, is the cardinality of the largest independent subset contained in $X$. The \Defn{circuits} of $\MM$ are the inclusion-minimal dependent subsets. An element $e$ is called a \Defn{loop} if $\{e\}$ is a circuit. We say that $e,f \in E$ are \Defn{parallel} if $\{e,f\}$ is a circuit. A \Defn{parallel class} $H \subseteq E$ is the equivalence class of elements parallel to each other. The class $H$ is \Defn{non-trivial} if $|H| > 1$. A matroid is \Defn{simple} if it does not contain loops or parallel elements. A \Defn{flat} of a matroid is a set $F\subseteq E$ such that $\rk(F)<\rk(F\cup e)$ for all $e\in E\setminus F$. A particular class of matroids that we will consider is that of \Defn{graphic matroids}. To a graph $G = (V,E)$ we associate the matroid $\MM(G) = (E, \BB)$. The bases are exactly the maximal spanning forests of $G$. The running example for this section is the following. \begin{ex}\label{ex:graph} Let $G$ be the graph \begin{center} \includegraphics[scale=.8]{trianglewithdoubleedge.pdf} \end{center} The graphic matroid $\MM = \MM(G)$ has ground set $E = \{1,2,3,4\}$, $\rk(\MM)=2$, and bases \[ \BB(G) \ = \ \{ 12, 13, 14, 23, 24 \}. \] \end{ex} The \Defn{dual matroid} $\MM^*$ of the matroid $\MM=(E,\BB)$ is the matroid defined by the pair $(E,\BB^*)$ where $\BB^*=\{E\setminus B \; : \; B\in \BB \} $. 
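For small ground sets, Definition \ref{dfn:matroid} can be checked by brute force. The following Python sketch (all names ours) encodes a matroid by its list of bases and verifies the exchange axiom and the rank function for the matroid of Example \ref{ex:graph}:

```python
def is_matroid(bases):
    # Brute-force check of the basis exchange axiom.
    bases = [frozenset(B) for B in bases]
    for B1 in bases:
        for B2 in bases:
            for x in B1 - B2:
                if not any((B1 - {x}) | {y} in bases for y in B2 - B1):
                    return False
    return bool(bases)

def rank(bases, X):
    # rk(X): size of the largest independent subset of X,
    # attained as max |B & X| over the bases B.
    X = set(X)
    return max(len(set(B) & X) for B in bases)

# Running example: triangle with a doubled edge, E = {1,2,3,4},
# bases 12, 13, 14, 23, 24 (the parallel edges 3 and 4 form a circuit).
B = [{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}]
assert is_matroid(B)
assert rank(B, {1, 2, 3, 4}) == 2
assert rank(B, {3, 4}) == 1              # {3,4} is a circuit
assert not is_matroid([{1, 2}, {3, 4}])  # exchange axiom fails here
```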
A \Defn{coloop} of $\MM$ is an element which is a loop of $\MM^*$. Equivalently, it is an element which appears in every basis of $\MM$.

If $e \in E$ is not a coloop, we define the \Defn{deletion} as $\MM\setminus e := (E \setminus e, \{ B \in \BB : e \not\in B\})$. If $e$ is a coloop, then the bases of $\MM\setminus e$ are $\{ B \setminus e : B \in \BB\}$. Dually, if $e \in E$ is not a loop, we define the \Defn{contraction} as $\MM/e := (E \setminus e, \{ B\setminus e : e \in B \in \BB\})$. These operations can be extended to subsets $X \subseteq E$ and we write $\MM\setminus X$ and $\MM/ X$, respectively. We also define the \Defn{restriction} of $\MM$ to a subset $X \subseteq E$ as $\MM|_X := \MM\setminus(E \setminus X)$. Note that $(\MM \setminus X)^* = \MM^* / X$. A \Defn{minor} of $\MM$ is a matroid obtained from $\MM$ by a sequence of deletion and contraction operations. The subclass of graphic matroids is closed under taking minors but not under taking duals. To each matroid we associate a point configuration representing the set of bases. For a fixed ground set $E$ let us write $\Char{X} \in \{0,1\}^E$ for the characteristic vector of $X \subseteq E$. \begin{defi}\label{dfn:base_conf} Let $\MM = (E,\BB)$ be a matroid. The \Defn{base configuration} of $\MM$ is the point configuration \[ V_{\MM} \ := \ \{ \Char{B} : B \in \BB\} \ \subset \ \R^E. \] The \Defn{base polytope} of $\MM$ is $P_\MM := \conv(V_\MM)$. \end{defi} The dual $\MM^*$ is obtained by taking the complements of bases. The corresponding base configuration is thus \begin{equation}\label{eqn:V-dual} V_{\MM^*} \ = \ \1 - V_\MM. \end{equation} In particular, $V_\MM$ and $V_{\MM^*}$ are related by an affine transformation. Observe that $V_\MM$ is not a full-dimensional point configuration. Indeed, $V_\MM$ is contained in the hyperplane $\sum_{e \in E} x_e = \rk(E)$.
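On the level of bases, deletion and contraction are one-line operations; a minimal sketch, using the bases of the running example from Example~\ref{ex:graph}:

```python
E = [1, 2, 3, 4]
bases = [frozenset(s) for s in ({1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4})]

# Characteristic vectors: the base configuration V_M sits on sum_e x_e = rk(E) = 2.
V = {tuple(1 if e in B else 0 for e in E) for B in bases}

def delete(bases, e):
    """Bases of M \\ e for a non-coloop e: the bases avoiding e."""
    return [B for B in bases if e not in B]

def contract(bases, e):
    """Bases of M / e for a non-loop e: the bases through e, with e removed."""
    return [B - {e} for B in bases if e in B]
```

Deleting $4$ leaves the bases of $U_{3,2}$ on $\{1,2,3\}$, and contracting $1$ gives the bases of $U_{3,1}$ on $\{2,3,4\}$, illustrating both operations at once.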
In order to determine the dimension of $V_\MM$ we need to consider the relations among elements of $E$: $e_1,e_2\in E$ are related if there exists a circuit of $\MM$ containing both. This is an equivalence relation and the equivalence classes are called the \Defn{connected components} of $\MM$. Let us write $c(\MM)$ for the number of connected components. The matroid $\MM$ is \Defn{connected} if $c(\MM)=1$. Let $\MM_1$ and $\MM_2$ be matroids with disjoint ground sets $E_1$ and $E_2$. The collection \[ \BB \ := \ \{ B_1\cup B_2 : B_1\in \BB(\MM_1), B_2\in \BB(\MM_2)\} \] is the set of bases of a matroid on $E_1 \cup E_2$, called the \Defn{direct sum} of $\MM_1$ and $\MM_2$ and denoted by $\MM_1 \oplus \MM_2$. The corresponding base configuration is exactly the Cartesian product \begin{equation}\label{eqn:direct_sum} V_{\MM_1 \oplus \MM_2} \ = \ V_{\MM_1} \times V_{\MM_2}. \end{equation} If $E_1,\dots,E_r \subseteq E$ are the connected components of $\MM$, then $\MM = \bigoplus_i \MM|_{E_i}$. Thus, showing that $\dim V_{\MM} = |E|-1$ if $\MM$ is connected proves the following. \begin{prop}\label{prop:V_dim} The smallest affine subspace containing $V_\MM$ is of dimension $|E|-c(\MM)$. \end{prop} For a subset $X \subseteq E$ let us write $\ell_X(\x) = \sum_{e \in X} x_e$. For $A \subseteq E$ we then have $\ell_X(\Char{A}) = |A \cap X|$. Hence $\rk_{\MM}(X) = \max_{v \in P_\MM}\ell_X(v)$. For $X \subseteq E$ we define the supporting hyperplane \[ H_\MM(X) \ := \ \{ \x \in \R^E : \ell_X(\x) = \rk_\MM(X)\}. \] The corresponding faces of $V_\MM$ (or equivalently of $P_\MM$) are easy to describe. \begin{prop}[\cite{Edmonds70}]\label{prop:X-faces} For a matroid $\MM = (E,\BB)$ and a subset $X \subset E$, we have \[ V_\MM \cap H_\MM(X) \ = \ V_{\MM|_X \oplus \MM/X} \ = \ V_{\MM|_X} \times V_{\MM/X}. \] \end{prop} Let us illustrate this on our running example.
\begin{ex}[continued] The graph given in Example~\ref{ex:graph} yields a connected matroid on $4$ elements and hence a $3$-dimensional base configuration. The corresponding base polytope is the following: \begin{center} \includegraphics[scale=1]{examplematroidbasepol.pdf} \end{center} The $5$ bases correspond to the vertices of $P_\MM$. We considered the subset $\{ 3,4 \}$, whose associated face $\MG|_{\{3,4\}}\times \MG/\{3,4 \}$ is the quadrilateral facet of the polytope, and the subset $\{1,2\}$, whose associated face $\MG|_{\{1,2\}}\times \MG/\{1,2 \}$ is the vertex $(1,1,0,0)$. \end{ex} We define the following families of matroids: \begin{align*} \MLevels_{ k} &\ := \ \{ \MM \text{ matroid} : \Lev(V_\MM) \le k \}, \text{and}\\ \MTheta_{ k} &\ := \ \{ \MM \text{ matroid} : \Th(V_\MM) \le k \}. \end{align*} We will say that a matroid $\MM$ is of Theta rank or level $k$ if the corresponding base configuration $V_\MM$ is. Now combining Proposition~\ref{prop:X-faces} with Lemma~\ref{lem:face-hed} proves the main theorem of this section. \begin{thm}\label{thm:minor_closed} The classes $\MTheta_{ k}$ and $\MLevels_{ k}$ are closed under taking minors. \end{thm} \begin{proof} Using Proposition~\ref{prop:X-faces} repeatedly on one-element sets shows that for every minor $\NN$ of $\MM$ there is a supporting hyperplane such that $V_\MM \cap H$ is affinely isomorphic to $V_\NN$. Lemma~\ref{lem:face-hed} assures us that $\Th(V_\NN) \le \Th(V_\MM)$. \end{proof} Let us analogously define the classes $\GTheta_{ k}$ and $\GLevels_{ k}$ of graphic matroids of Theta rank and levelness bounded by $k$. These are also closed under taking minors and the Robertson--Seymour theorem~\cite{RobertsonSeymour} asserts that there is a finite list of excluded minors characterizing each class. In the remainder of the section we will recall the facet-defining hyperplanes of $V_\MM$, which will also show that \emph{all} faces of $V_\MM$ correspond to direct sums of minors.
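Proposition~\ref{prop:X-faces} can be verified directly on the running example; the subset choices below repeat those of the example, and the bases are again taken from Example~\ref{ex:graph}.

```python
bases = [frozenset(s) for s in ({1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4})]

def face_of(X):
    """Vertices of V_M on H_M(X): the bases maximizing |B ∩ X| (= rk(X))."""
    rk_X = max(len(B & X) for B in bases)
    return [B for B in bases if len(B & X) == rk_X]

X = frozenset({3, 4})
face = face_of(X)
# Each basis on the face splits as a basis of M|_X together with one of M/X.
restriction = {B & X for B in face}   # bases of M|_X
contraction = {B - X for B in face}   # bases of M/X
product = {R | C for R in restriction for C in contraction}
```

For $X = \{3,4\}$ the face has the four vertices of the quadrilateral facet and indeed equals the product of the two smaller base configurations, while $X = \{1,2\}$ yields the single vertex $(1,1,0,0)$.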
The facial structure of $V_\MM$ has been of interest originally in combinatorial optimization~\cite{Edmonds70} (see also~\cite[Ch.~40]{Schrijver}) and later in geometric combinatorics and tropical geometry~\cite{Ardila,Sturmfels,Kim}. \begin{thm} Let $\MM = (E,\BB)$ be a connected matroid. For every facet $U \subset V_\MM$ there is a unique $ \emptyset \neq S \subset E$ such that $U = V_\MM \cap H_\MM(S)$. Conversely, a subset $ \emptyset \neq S \subset E$ gives rise to a facet if and only if one of the following holds: \begin{enumerate}[\rm (i)] \item $S$ is a flat such that $\MM|_S$ as well as $\MM/S$ are connected; or \item $S = E \setminus e$ for some $e \in E$ such that $\MM|_S$ as well as $\MM/S$ are connected. \end{enumerate} \end{thm} In~\cite{Sturmfels} the subsets $S$ in (i) were called \Defn{flacets} and we stick to this name. In our study of the Theta rank and the levelness of base configurations, the following proposition shows that we only need to consider flacets. For brevity, a $k$-level flacet refers to a flacet whose corresponding facet is $k$-level. \begin{prop}\label{prop:only_flacets} Let $\MM$ be a connected matroid and $S = E \setminus e$. Then $\ell_S(\x)$ takes $2$ values on $V_\MM$ and hence is $1$-sos. \end{prop} \begin{proof} Let $r$ be the rank of $\MM$. Restricted to the affine hull of $V_\MM$, the functions $\ell_S(\x)$ and $r-x_e$ induce the same linear function. As $V_\MM$ is a $0/1$-configuration, it follows that $\ell_S(\x)$ takes the $2$ values $r$ and $r-1$ on $V_\MM$. \end{proof} \begin{ex} The facets of the running example are four triangles and one square. The four triangles correspond to the two sets $\{1,2,4 \}$, $\{1,2,3\}$ of cardinality $|E|-1$ and the two flacets $\{2\}$, $\{1\}$, while the square corresponds to the flacet $\{3,4\}$. We already described the square facet in the previous example. In the picture we highlight two triangular facets, the first one (green) corresponding to the flacet $\{1\}$, the second one (red) to the set $\{1,2,4\}$.
\begin{center} \includegraphics[scale=1]{examplematroidbasepol2.pdf} \end{center} \end{ex} A seemingly trivial but useful class of matroids is given by the \Defn{uniform matroids} $U_{n,k}$ for $0 \le k \le n$, defined on the ground set $E = \{ 1,\dots, n\}$ with bases $\BB(U_{n,k}) = \{ B \subseteq E : |B| = k \}$. \begin{prop}\label{prop:TH-uniform} Uniform matroids are $2$-level and hence have Theta rank $1$. \end{prop} \begin{proof} The base polytope of $U_{n,k}$ is also known as the \emph{$(n,k)$-hypersimplex} and is given by \[ P_{U_{n,k}} \ = \ \conv \{ \1_B : B \subseteq E, |B| = k\} \ = \ \Bigl\{ \x \in \R^E : 0 \le x_e \le 1, \sum_e x_e = k \Bigr\}. \] The facet-defining linear functions are among the functions $\{ \pm \ell_{\{e\}}(\x) = \pm x_e : e \in E \}$, which can take only two different values on $0/1$-points. \end{proof} \section{$2$-level matroids} \label{sec:2level} In this section we investigate the excluded minors for the class of $2$-level matroids and, by Theorem~\ref{thm:GPT}, equivalently the matroids of Theta rank $1$. In this case we can give the complete and in particular finite list of forbidden minors. We start by showing that we can exclude matroids with few elements and of small rank. \begin{prop}\label{prop:smallmatroids} Let $\MM=(E,\BB)$ be a matroid. If $\rk(\MM) \le 2$ or $|E| \le 5$, then $\MM$ is $2$-level. \end{prop} \begin{proof} The case $\rk(\MM)=1$ is trivial since there is no proper flacet. On the other hand, if $\rk(\MM)=2$, the proper flacets necessarily have rank $1$. The linear function $\ell_F(\x)$ for any such flacet $F$ only takes values in $\{0,1\}$ and thus is $2$-level. By~\eqref{eqn:V-dual} and Proposition~\ref{prop:TH-aff}, $\MM$ and $\MM^*$ have the same Theta rank and levelness. If $|E| \le 5$, then either $\MM$ or $\MM^*$ is of rank $\le 2$. \end{proof} A first example of a matroid of levelness $\ge 3$ is given by the graphic matroid associated to the complete graph $K_4$.
\begin{prop}\label{prop:K4} The graphic matroid $\MM(K_4)$ is $3$-level. \end{prop} \begin{proof} Let $F = \{1,2,3\}$ be the flat corresponding to the labelled example shown below. Both the contraction of $F$ and the restriction to $F$ are connected (or biconnected on the level of graphs) and thus $F$ is a flacet with $\ell_F(\x) = x_1+x_2+x_3$. The spanning trees $B_1 = \{1,5,6\}$ and $B_2 = \{4,5,6\}$ satisfy $|F \cap B_2| < |F \cap B_1| < \rk(F)$, which shows that $\MM(K_4)$ is at least $3$-level. To see that $\MM(K_4)$ is at most $3$-level we notice that every proper flacet $F$ has rank at most $\rk(\MM(K_4)) - 1 = 2$ and hence $\ell_F(\x)$ can take at most three different values. \end{proof} Before analyzing other matroids we quickly recall a \Defn{geometric representation} of certain matroids of rank $3$: the idea is to draw a diagram in the plane whose points correspond to the elements of the ground set. Any subset of $3$ elements constitutes a basis unless it is contained in a depicted line. \begin{ex} Let us consider the graph $K_4$ and its geometric representation as a matroid: \begin{center} \includegraphics[scale=0.6]{k4graph.pdf} \qquad \qquad \qquad \includegraphics[scale=0.7]{k4geomrepr.pdf} \end{center} Thus the geometric representation consists only of the four lines associated to the $3$-circuits of $K_4$. \end{ex} Starting from the geometric representation of $\MM(K_4)$ we define three new matroids by removing one, two, or three lines of the representation, and we call them $\mathcal{W}^3$, $Q_6$, and $P_6$, respectively. None of these matroids is graphic, but we can easily draw their geometric representations: \begin{center} \includegraphics[scale=0.6]{geomrepresentations.pdf} \end{center} \begin{prop}\label{prop:minor-list} The matroids $\mathcal{W}^3$, $Q_6$, and $P_6$ are $3$-level. \end{prop} \begin{proof} Let $\MM$ be any of the three given matroids and consider $F=\{ 1,2,3 \}$.
It is easy to check that $\MM|_F \cong U_{3,2}$ and $\MM/F \cong U_{3,1}$, which shows that $F$ is a flacet. The vertices of the matroid polytope associated to the bases $\{4,5,6\},\{1,4,6\},\{ 1,2,6 \}$ lie on distinct hyperplanes parallel to $H_\MM(F) = \{ \ell_F(\x) = \rk_\MM(F) \}$. Therefore the matroids are at least $3$-level. Since $\rk(\MM)=3$, we can use the same argument as in the proof of Proposition~\ref{prop:K4}. \end{proof} The list of excluded minors for $\MLevels_{2}$ so far includes $\MM(K_4)$, $\mathcal{W}^3$, $Q_6$, and $P_6$. To show that this list is complete, we will approach the problem from the constructive side and consider how to synthesize $2$-level matroids. We already saw that $\MLevels_{2}$ is closed under taking direct sums. We will now consider three more operations that retain levelness. Let $\MM_1 = (E_1, \BB_1)$ and $\MM_2 = (E_2, \BB_2)$ be matroids such that $\{p\} = E_1 \cap E_2$. We call $p$ a \Defn{base point}. If $p$ is not a coloop of both matroids, then we define the \Defn{series connection} $\SeriesConn(\MM_1,\MM_2)$ with respect to $p$ as the matroid on ground set $E_1 \cup E_2$ and with bases \[ \BB \ = \ \{ B_1 \cup B_2 : B_1 \in \BB_1, B_2 \in \BB_2, B_1 \cap B_2 = \emptyset \}. \] We also define the \Defn{parallel connection} with respect to $p$ as the matroid $\SeriesConn(\MM_1^*,\MM_2^*)^*$ provided $p$ is not a loop of both matroids. Notice that $\SeriesConn(\MM_1,\MM_2)$ contains both $\MM_1$ and $\MM_2$ as minors. The operations of series and parallel connection, introduced by Brylawski~\cite{Brylawski}, are inspired by the well-known series and parallel operations on graphs. The following example illustrates the construction in the graphic case. \begin{ex} Let us consider again the two graphic matroids $U_{3,2}$ and $ \MM(K_4)$.
Their series connection is the following graph: \begin{center} \includegraphics[scale=0.7]{seriesconn.pdf} \end{center} \end{ex} An extensive treatment of these two operations is given in~\cite[Sect.~7.1]{Oxley}. We focus here on the geometric properties from which many combinatorial consequences can be deduced. For the following result, we write $E_1 \uplus E_2 = (E_1 \cup E_2 \cup \{p_1,p_2\}) \setminus \{p\}$ for the \emph{disjoint union} of $E_1$ and $E_2$. \begin{lemma}\label{lem:P_series} Let $\MM_1 = (E_1,\BB_1)$ and $\MM_2 = (E_2,\BB_2)$ be matroids with $\{p\} = E_1 \cap E_2$ not a coloop of both. Then the base polytope $P_\SeriesConn$ of the series connection $\SeriesConn = \SeriesConn(\MM_1,\MM_2)$ is linearly isomorphic to \[ (P_{\MM_1} \times P_{\MM_2}) \cap \{ \x \in \R^{E_1\uplus E_2} : x_{p_1} + x_{p_2} \le 1 \}. \] \end{lemma} \begin{proof} It is clear that the base configuration $V_\SeriesConn$ is isomorphic to \[ V^\prime \ = \ (V_{\MM_1} \times V_{\MM_2}) \cap \{ \x \in \R^{E_1\uplus E_2} : x_{p_1} + x_{p_2} \le 1 \} \] under the linear map $\pi : \R^{E_1 \uplus E_2} \rightarrow \R^{E_1 \cup E_2}$ given by $\pi(\1_{p_1}) = \pi(\1_{p_2}) = \1_{p}$ and $\pi(\1_{e}) = \1_e$ otherwise. Indeed, let $r_i = \rk(\MM_i)$; then a linear inverse is given by $s : \R^{E_1 \cup E_2} \rightarrow \R^{E_1 \uplus E_2}$ with $s(\x)_{p_i} = r_i - \ell_{E_i}(\x)$ for $i=1,2$ and the identity otherwise. It is therefore sufficient to show that the vertices of \[ P^\prime \ = \ (P_{\MM_1} \times P_{\MM_2}) \cap \{ \x \in \R^{E_1\uplus E_2} : x_{p_1} + x_{p_2} \le 1 \} \] are exactly the points in $V^\prime$. Clearly $V^\prime$ is a subset of the vertices and any additional vertex of $P^\prime$ would be the intersection of the relative interior of an edge of $P_{\MM_1} \times P_{\MM_2}$ with the hyperplane $H = \{ \x : x_{p_1} + x_{p_2} = 1 \}$. However, every edge of $P_{\MM_1} \times P_{\MM_2}$ is parallel to some $\1_e - \1_f$ for $e,f \in E_1$ or $e,f \in E_2$.
Thus every edge of $P_{\MM_1} \times P_{\MM_2}$ can meet $H$ only in one of its endpoints, which proves the claim. \end{proof} It is interesting to note that the operation that relates $P_{\MM_1}$ and $P_{\MM_2}$ to $P_{\SeriesConn(\MM_1,\MM_2)}$ is exactly a \emph{subdirect product} in the sense of McMullen~\cite{mcmullen}. From the description of $P_{\SeriesConn(\MM_1,\MM_2)}$ we instantly get information about the Theta rank and levelness of the series and parallel connection. \begin{cor}\label{cor:Series_level} Let $\SeriesConn = \SeriesConn(\MM_1,\MM_2)$ be the series connection of matroids $\MM_1$ and $\MM_2$. Then \[ \Th(\SeriesConn) \ = \ \max( \Th(\MM_1), \Th(\MM_2) ). \] The same holds true for the parallel connection as well as for the levelness. \end{cor} \begin{proof} Lemma~\ref{lem:P_series} shows that the facet-defining linear functions of $P_\SeriesConn$ are among those of $P_{\MM_1} \times P_{\MM_2}$ and $\ell(\x) = x_{p_1} + x_{p_2}$. However, by the characterization of the bases of $\SeriesConn$, $\ell(\x)$ can take only values in $\{0,1\}$. Hence, $\Th(V_\SeriesConn) \ = \ \Th(V_{\MM_1}\times V_{\MM_2})$ and Proposition~\ref{prop:TH-product} finishes the proof. \end{proof} \begin{cor}\label{cor:Series_closed} The classes $\MTheta_{ k}$ and $\MLevels_{ k}$ are closed under taking series and parallel connections. \end{cor} The most important operation that we will need is derived from the series connection. Let $\MM_1 = (E_1,\BB_1)$ and $\MM_2 = (E_2,\BB_2)$ be matroids with $E_1 \cap E_2 = \{p\}$. If $p$ is neither a loop nor a coloop of $\MM_1$ or $\MM_2$, then we define the \Defn{$2$-sum} \[ \MM_1 \oplus_2 \MM_2 \ := \ \SeriesConn(\MM_1,\MM_2) / p. \] This is the matroid on the ground set $E = (E_1 \cup E_2) \setminus p$ and with bases \[ \BB \ := \ \{ B_1\cup B_2\setminus p : B_1\in \BB_1, B_2\in \BB_2, p\in B_1 \triangle B_2 \} \] where $B_1 \triangle B_2$ is the symmetric difference.
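The basis description of the $2$-sum is easy to implement. As an illustration (with hypothetical ground sets), the $2$-sum of two copies of $U_{3,2}$ along a common base point is $U_{4,3}$, matching the graphic picture of two triangles glued along an edge that is then removed, which yields a $4$-cycle.

```python
from itertools import combinations

def uniform(ground, k):
    """Bases of the uniform matroid on the given ground set."""
    return {frozenset(B) for B in combinations(ground, k)}

def two_sum(bases1, bases2, p):
    """Bases of M1 ⊕_2 M2: the sets (B1 ∪ B2) − p with p ∈ B1 △ B2."""
    return {(B1 | B2) - {p}
            for B1 in bases1 for B2 in bases2
            if p in (B1 ^ B2)}       # p lies in exactly one of B1, B2

# Two triangles U_{3,2} sharing the base point p.
M1 = uniform({'a', 'b', 'p'}, 2)
M2 = uniform({'p', 'c', 'd'}, 2)
S = two_sum(M1, M2, 'p')
```

Since the two ground sets intersect only in $p$, the condition $p \in B_1 \triangle B_2$ automatically forces $B_1 \cap B_2 = \emptyset$, so no extra disjointness check is needed.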
The $2$-sum is an associative operation on matroids and leads, in analogy with the direct sum, to the notion of $3$-connectivity: a connected matroid $\MM$ is \Defn{3-connected} if and only if it cannot be written as a $2$-sum of two matroids, each with fewer elements than $\MM$. \begin{ex} Let us consider the $2$-sum $U_{3,2}\oplus_2 \MM(K_4)$: both matroids are graphic, so we can illustrate the operation for the corresponding graphs. \begin{center} \includegraphics[scale=0.7]{2-sum.pdf} \end{center} To perform the $2$-sum we select an element of each matroid; in the picture it looks as if we also need to orient the chosen elements. This is the case only because we are drawing an embedding of a graphic matroid; in fact, the structure given by the vertices is forgotten when we pass to the matroid. Whitney's $2$-Isomorphism Theorem \cite[Thm.~5.3.1]{Oxley} clarifies that the matroid structure does not depend on the orientation chosen for these elements. \end{ex} We will need the following two properties of $2$-sums. \begin{lemma}[{\cite[Lem.~2.3]{Chaourar}}]\label{lem:3-uniform} Let $\MM$ be a $3$-connected matroid having no minor isomorphic to any of $\MM(K_4)$, $\mathcal{W}^{3}$, $Q_6$, $P_6$. Then $\MM$ is uniform. \end{lemma} \begin{lemma}[{\cite[Thm.~8.3.1]{Oxley}}]\label{lem:not3con} Every matroid that is not $3$-connected can be constructed from $3$-connected proper minors of itself by a sequence of direct sums and $2$-sums. \end{lemma} We can finally give a complete characterization of the class $\MLevels_{2} = \MTheta_{ 1}$. \begin{thm}\label{thm:main1} For a matroid $\MM$ the following are equivalent. \begin{enumerate}[\rm (i)] \item $\MM$ has Theta rank $1$. \item $\MM$ is $2$-level. \item $\MM$ has no minor isomorphic to $\MM(K_4)$, $\mathcal{W}^3$, $Q_6$, or $P_6$. \item $\MM$ can be constructed from uniform matroids by taking direct or $2$-sums.
\end{enumerate} \end{thm} \begin{proof} (i) $\Rightarrow$ (ii) is just Theorem~\ref{thm:GPT}. (ii) $\Rightarrow$ (iii) follows from Theorem~\ref{thm:minor_closed} together with Propositions~\ref{prop:K4} and~\ref{prop:minor-list}. Let $\MM$ be a matroid satisfying (iii). If $\MM$ is $3$-connected, then $\MM$ is uniform by Lemma~\ref{lem:3-uniform}. If $\MM$ is not $3$-connected, then by Lemma~\ref{lem:not3con} it can be constructed from $3$-connected proper minors by direct sums and $2$-sums; these minors again satisfy (iii) and are therefore uniform by Lemma~\ref{lem:3-uniform}. Hence $\MM$ satisfies (iv). Finally, uniform matroids have Theta rank $1$ by Proposition~\ref{prop:TH-uniform}, and Theta rank $\le k$ is retained by direct sums (Proposition~\ref{prop:TH-product}), by series connections (Corollary~\ref{cor:Series_level}) and hence, since the $2$-sum is a minor of the series connection, also by $2$-sums (Theorem~\ref{thm:minor_closed}). \end{proof} \begin{ex}\label{ex:serpargraphs} If we look at the family of $2$-level graphic matroids, the only excluded minor is $\MM(K_4)$: since minors of graphic matroids are graphic, a graphic matroid never has $\mathcal{W}^3$, $Q_6$, or $P_6$ as a minor. The class of graphs which do not contain $K_4$ as a minor is the well-known class of \Defn{series-parallel graphs} $\GSeriesPar$. The theorem implies $\GLevels_{ 2}=\GSeriesPar$. \end{ex} There are other point configurations that are naturally associated to a matroid $\MM$, most notably the configuration $D_\MM = \{ \Char{X} : X \subseteq E \text{ dependent} \}$. For \emph{binary} matroids, the associated polytope (up to translation and scaling) is called the \emph{cycle polytope}. The practical relevance stems from the situation where $\MM = \MM(G)^*$ for some graph $G$. In this case, $D_\MM$ represents the collection of cuts in $G$, which are important in combinatorial optimization. The Theta rank of $D_\MM$ has been studied in~\cite{GLPT}. In particular, the paper gives a characterization of binary matroids with $\Th(D_\MM)=1$ in terms of forbidden minors with some additional conditions on the cocircuits. The situation is slightly different as the Theta rank of circuit configurations is monotone with respect to deletion minors but not necessarily with respect to contraction minors. The characterization of $2$-level cut polytopes has also been obtained by Sullivant~\cite{Sullivant}.
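The $3$-levelness of $\MM(K_4)$ from Proposition~\ref{prop:K4} can be confirmed by brute force; the edge labelling below is a hypothetical one consistent with that proof, with the triangle $\{1,2,3\}$ playing the role of the flacet $F$.

```python
from itertools import combinations

# Hypothetical labelling of K_4 on vertices {0,1,2,3}:
# triangle F: 1=(0,1), 2=(1,2), 3=(0,2); spokes: 4=(0,3), 5=(1,3), 6=(2,3).
edges = {1: (0, 1), 2: (1, 2), 3: (0, 2), 4: (0, 3), 5: (1, 3), 6: (2, 3)}

def spanning_trees():
    """Three acyclic edges on four vertices form a spanning tree."""
    trees = []
    for B in combinations(edges, 3):
        parent = list(range(4))      # union-find over the vertices
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        acyclic = True
        for e in B:
            ru, rv = find(edges[e][0]), find(edges[e][1])
            if ru == rv:             # a cycle among the chosen edges
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            trees.append(frozenset(B))
    return trees

F = {1, 2, 3}
# ℓ_F takes the value |B ∩ F| at the vertex 1_B of the base polytope.
levels = {len(B & F) for B in spanning_trees()}
```

The enumeration finds the $16$ spanning trees of $K_4$ and three distinct values of $\ell_F$, confirming that this flacet is $3$-level.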
\section{Generation and psd rank} \label{sect:gen_psd} In this section we study two further face-hereditary properties of point configurations that are intimately related to Theta-$1$ configurations. \subsection{Degree of generation} For a point configuration $V \subset \R^d$, the \Defn{vanishing ideal} of $V$ is \[ I(V) \ := \ \{ f(\x) \in \R[x_1,\dots,x_d] : f(v) = 0 \text{ for all } v \in V \}. \] We say that $V$ is of \Defn{degree $\le k$} if the ideal $I(V)$ has some set of generators of degree $\le k$. We write $\Gen(V) = k$ for the maximal degree in any minimal generating set for $I(V)$. We define \[ \VGen_{ k} \ := \ \{ V \text{ point configuration} : \Gen(V) \le k \}. \] It is clear that $\Gen(V)$ is an affine invariant and, since all point configurations are finite, we get \begin{prop} The class $\VGen_{ k}$ is face-hereditary. \end{prop} \begin{proof} Let $H = \{ p : \ell(p) = 0 \}$ be a supporting hyperplane for $V$. The vanishing ideal of $V^\prime = V \cap H$ is the ideal generated by $I(V)$ and $\ell(\x)$. Since $\ell(\x)$ is linear, it follows that $\Gen(V^\prime) \le \Gen(V)$. \end{proof} The relation to point configurations of Theta rank $1$ is given by the following proposition, which is implicit in~\cite{GPT}. \begin{prop}\label{prop:gen2} If $V \subset \R^d$ is a point configuration of Theta rank $1$, then $\Gen(V) \le 2$. \end{prop} \begin{proof} From Theorem~\ref{thm:GPT} we infer that the points of $V$ are in convex position and the polytope $P=\conv(V)$ is $2$-level. We may assume that the configuration is spanning and hence, up to affine equivalence, the polytope is given by \[ P \ = \ \left\{ p \in \R^d : \begin{array}{r@{\ }c@{\ }l@{\ }l} 0 \le & p_i & \le 1 &\text{ for } i = 1,\dots,d \\ \delta_j^- \le & \ell_j(p) & \le \delta_j^+ &\text{ for } j=1,\dots,n \end{array} \right\} \] for unique linear functions $\ell_j(\x)$ and $\delta_j^- < \delta_j^+$. In particular, $V \subset \{0,1\}^d$.
We claim that $I(V)$ is generated by the quadrics \[ x_i(x_i-1) \quad \text{for } 1 \le i \le d, \qquad (\ell_j(\x) - \delta_j^-)(\ell_j(\x)-\delta^+_j) \quad \text{for } 1 \le j \le n. \] The vanishing locus $U$ is a smooth and real subset of $\{0,1\}^d$. Thus, the polynomials generate a real radical ideal. Now, every vertex $v \in V \subseteq \{0,1\}^d$ satisfies $\ell_j(v) = \delta_j^{\pm}$. Hence $V \subseteq U$. Conversely, every $u \in U$ is a vertex of $P$ and hence $U \subseteq V$. \end{proof} The following example illustrates the fact that degree of generation is invariant under projective transformations while Theta rank is not. \begin{ex}\label{ex:gen2} To see that generation in degrees $\le 2$ is necessary for Theta rank $1$ but not sufficient, consider the planar point configuration $V = \{ (1,0),(0,1),(2,0),(0,2)\}$. The configuration is clearly not $2$-level and hence not Theta~$1$; however, the vanishing ideal $I(V)$ is generated by $x_1x_2$ and $(x_1+x_2-1)(x_1+x_2-2)$, which implies $\Gen(V)\le 2$. \begin{center} \includegraphics[scale=0.7]{gendegree2example.pdf} \end{center} \end{ex} The vanishing ideals of base configurations are easy to write down explicitly. \begin{prop}\label{prop:matroid_ideal} Let $\MM = (E,\BB)$ be a matroid of rank $r$. The vanishing ideal for $V_\MM$ is generated by \[ x_e^2-x_e \text{ for all } e \in E, \quad \ell_E(\x) -r, \quad \x^C \text{ for all circuits } C \subset E. \] \end{prop} \begin{proof} Any solution to the first two sets of equations is of the form $\1_B$ for some $B \subseteq E$ with $|B| = r$. For the last set of equations, we note that $(\1_B)^C = 0$ for all circuits $C$ if and only if $B$ does not contain a circuit. This is equivalent to $B \in \BB$. Arguments similar to those used in the proof of Proposition~\ref{prop:gen2} show that the polynomials generate a real radical ideal. \end{proof} Let us write $\MGen_{ k}$ for the class of matroids $\MM$ with $\Gen(V_\MM)\le k$.
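For the running example one can confirm Proposition~\ref{prop:matroid_ideal} by solving the generating system over $\{0,1\}^E$ directly; the circuits $\{3,4\}$, $\{1,2,3\}$, $\{1,2,4\}$ are read off from the graph of Example~\ref{ex:graph} (under our hypothetical labelling).

```python
from itertools import product

E = [1, 2, 3, 4]
r = 2
circuits = [{3, 4}, {1, 2, 3}, {1, 2, 4}]        # running example

solutions = set()
for x in product((0, 1), repeat=len(E)):          # x_e^2 - x_e = 0 forces 0/1
    support = {e for e, xe in zip(E, x) if xe}
    if sum(x) != r:                               # the relation ℓ_E(x) - r = 0
        continue
    if any(C <= support for C in circuits):       # x^C = 0 for every circuit C
        continue
    solutions.add(frozenset(support))
```

The common zeros of the three families of generators are exactly the characteristic vectors of the five bases, as the proposition asserts.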
The previous proposition is a little misleading in that it suggests a direct connection between the size of circuits and the degree of generation. This is not quite true. Indeed, let $G = K_4 \setminus e$ be the complete graph on $4$ vertices minus an edge. Then $\MM(G)$ has a circuit of cardinality $4$ but $\MM(G) \in \MLevels_{ 2} \subseteq \MGen_{ 2}$ by Theorem~\ref{thm:main1} and Proposition~\ref{prop:gen2}. The main result of this section is that for base configurations the condition of Proposition~\ref{prop:gen2} is also sufficient. \begin{thm}\label{thm:main2} Let $\MM$ be a matroid. Then $V_\MM$ is Theta $1$ if and only if $\Gen(V_\MM) \le 2$. \end{thm} \begin{proof} From Proposition~\ref{prop:gen2} we already know that $\MTheta_{1} \subseteq \MGen_{2}$. Now, if $\MM \in \MGen_{ 2} \setminus \MTheta_{ 1}$, then $\MM$ has a minor isomorphic to $\MM(K_4)$, $P_6$, $Q_6$, or $\mathcal{W}^3$. Since $\MGen_{ 2}$ is closed under taking minors, the following proposition yields a contradiction. \end{proof} \begin{prop} $\MM(K_4)$, $\mathcal{W}^3$, $Q_6$, and $P_6$ are not in $\MGen_{ 2}$. \end{prop} \begin{proof} For a point configuration $V \subset \R^n$, let $I \subset \R[x_1,\dots,x_n]$ be its vanishing ideal. If $I$ is generated in degrees $\le k$, then so is any Gr\"obner basis of $I$ with respect to a degree-compatible term order. The claim can now be verified by, for example, using the software \emph{Macaulay2}~\cite{M2}. \end{proof} \newcommand\PSD{\mathcal{S}} \subsection{Psd rank and minimality} Let $\PSD^m \subset \R^{m \times m}$ be the vector space of symmetric $m \times m$ matrices. The \Defn{psd cone} is the closed convex cone $ \PSD^m_+ \ = \ \{ A \in \PSD^m : A \text{ positive semidefinite} \}. $ \begin{defi} A polytope $P \subset \R^d$ has a \Defn{psd-lift} of size $m$ if there is a linear subspace $L \subset \PSD^m$ and a linear projection $\pi : \PSD^m \rightarrow \R^d$ such that $ P = \pi(\PSD^m_+ \cap L)$.
The \Defn{psd rank} $\Psd(P)$ is the size of a smallest psd-lift. \end{defi} Psd-lifts together with lifts for more general cones were introduced by Gouveia, Parrilo, and Thomas~\cite{GPT2} as a natural generalization of \emph{polyhedral lifts} or \emph{extended formulations}. Let us define $\VPsd_{ k}$ as the class of point configurations $V$ in convex position such that $\conv(V)$ has a psd-lift of size $\le k$. In~\cite{GRT} it was shown that for a $d$-dimensional polytope $P$ the psd rank is always $\ge d+1$. A polytope $P$ is called \Defn{psd-minimal} if $\Psd(P) = \dim P +1$. We write $\VPmin$ for the class of psd-minimal (convex position) point configurations. \begin{prop}\label{prop:PSD_closed} The classes $\VPsd_{ k}$ and $\VPmin$ are face-hereditary. \end{prop} \begin{proof} Let $V \in \VPsd_{ k}$ and let $(L,\pi)$ be a psd-lift of $P=\conv(V)$ of size $m \le k$. For a supporting hyperplane $H$ we observe that $(L \cap \pi^{-1}(H), \pi)$ is a psd-lift of $P \cap H$ of size $m$. Let $P$ be psd-minimal and let $F = P \cap H$ be a face of dimension $\dim F = \dim P -1$. If $F$ is not psd-minimal, then by~\cite[Prop.~3.8]{GRT}, $\Psd(P) \ge \Psd(F)+1 > \dim F + 2 = \dim P + 1$, contradicting the psd-minimality of $P$. \end{proof} A characterization of psd-minimal polytopes in small dimensions was obtained in~\cite{GRT} and, in particular, the following relation was shown. \begin{prop}\label{prop:psd} Let $V$ be a point configuration in convex position. If $\Th(V) = 1$, then $P = \conv(V)$ is psd-minimal. \end{prop} In~\cite{GRT} an example of a psd-minimal polytope that is not $2$-level is given, showing that the condition above is sufficient but not necessary. The main result of this section is that the situation is much better for base configurations. \begin{thm}\label{thm:main3} Let $\MM$ be a matroid. The base polytope $P_\MM = \conv(V_\MM)$ is psd-minimal if and only if $\Th(\MM) = 1$.
\end{thm} In light of Proposition~\ref{prop:psd} it remains to show that there is no matroid $\MM$ with psd-minimal base polytope and $\Th(\MM) > 1$. Since $\VPmin$ is face-hereditary, it is sufficient to show that the excluded minors $\MM(K_4)$, $\mathcal{W}^3$, $Q_6$, and $P_6$ are not psd-minimal. In order to do so, we need to recall the connection to slack matrices and Hadamard square roots developed in~\cite{GRT}. For a more complete picture of the relations, in particular to cone factorizations, we refer to the papers~\cite{GPT2,GRT}. Let $P$ be a polytope with vertices $v_1,\dots,v_t$ and facet-defining linear functions $\ell_j(\x) = \beta_j - \langle a_j, \x\rangle$ for $j = 1,\dots,f$. The \Defn{slack matrix} of $P$ is the non-negative matrix $S_P\in\mathbb{R}^{t\times f}$ with \[ (S_P)_{ij}=\beta_j-\langle a_j,v_i\rangle \] for $i=1,\dots,t$ and $j=1,\dots,f$. A \Defn{Hadamard square root} of $S_P$ is a matrix $H \in \R^{t \times f}$ such that $(S_P)_{ij} = H^2_{ij}$ for all $i,j$. Moreover, we define $\Hrk S_P$ as the smallest rank among all Hadamard square roots. The following is the main connection between Hadamard square roots and the psd-rank. \begin{thm}[{\cite[Thm.~3.5]{GRT}}] A polytope $P$ is psd-minimal if and only if $\Hrk(S_P) = \dim P + 1$. \end{thm} Thus, we will complete the proof of Theorem~\ref{thm:main3} by showing that every Hadamard square root of the slack matrix of each excluded minor of $\MTheta_1$ has rank $\ge 7$; since the corresponding base polytopes are $5$-dimensional, this rules out psd-minimality. We start with a technical result. \begin{prop}\label{prop:tech} The matrix \[ A_0 \ = \ \begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 0& 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix} \] has $\Hrk A_0 = 4$. \end{prop} \begin{proof} Every Hadamard square root of $A_0$ is of the form \[ H \ = \ \begin{pmatrix} 0 & y_1 & y_2 & y_3 \\ y_4 & 0 & y_5 & y_6 \\ y_7 & y_8 & 0 & y_9 \\ y_{10} & y_{11} & y_{12} & 0 \end{pmatrix} \] with $y_i^2=1$, $i=1,\dots,12$.
Claiming that $\Hrk A_0 = 4$ is equivalent to the claim that every Hadamard square root $H$ is non-singular. Using the computer algebra software \emph{Macaulay2}~\cite{M2} it can be checked that the ideal \[ I \ = \ \langle y_1^2-1,\dots,y_{12}^2-1, \det H \rangle \ \subseteq \ \mathbb{C}[y_1,\dots,y_{12}] \] contains $1$, which excludes the existence of a rank-deficient Hadamard square root. \end{proof} \begin{prop} Let $P = P_\MM$ be the base polytope for $\MM \in \{ \MM(K_4), \mathcal{W}^3, Q_6, P_6\}$. Then $\Hrk (S_P) \geq 7$. \end{prop} \begin{proof} We explicitly give the argument for $\MM = \MM(K_4)$ and $P = P_\MM$. The same argument, with the same choice of bases and flacets, works for the other matroids. It will be sufficient to find a $7 \times 7$-submatrix $A$ of $S_P$ with $\Hrk(A) \ge 7$. Consider the following collection of bases and flacets of $\MM$: \[ \begin{array}{l@{ \ \ = \ \ }l@{\hspace{1cm}}l@{ \ \ = \ \ }l} B_1 & \{1,2,4\} & F_1 & \{ 1 \}\\ B_2 & \{1,2,5\} & F_2 & \{ 2 \}\\ B_3 & \{1,2,6\} & F_3 & \{ 3 \}\\ B_4 & \{1,3,6\} & F_4 & \{ 4 \}\\ B_5 & \{1,4,6\} & F_5 & \{ 5 \}\\ B_6 & \{1,5,6\} & F_6 & \{ 6 \}\\ B_7 & \{2,4,6\} & F_7 & \{ 3,4,6 \}\\ \end{array} \] and the induced submatrix of $S_P$ \[ A \ = \ \bordermatrix{ %
~& {\scriptstyle \{1\}} & {\scriptstyle \{2\}} & {\scriptstyle \{3\}} & {\scriptstyle \{4\}} & {\scriptstyle \{5\}} & {\scriptstyle \{6\}} & {\scriptstyle \{3,4,6\}}\cr %
{\scriptstyle \{1,2,4\}}& 0 & 0 & 1 & 0 & 1 & 1 & 1 \cr
{\scriptstyle \{1,2,5\}}& 0 & 0 & 1 & 1 & 0 & 1 & 2 \cr
{\scriptstyle \{1,2,6\}}& 0 & 0 & 1 & 1 & 1 & 0 & 1 \cr
{\scriptstyle \{1,3,6\}}& 0 & 1 & 0 & 1 & 1 & 0 & 0 \cr
{\scriptstyle \{1,4,6\}}& 0 & 1 & 1 & 0 & 1 & 0 & 0 \cr
{\scriptstyle \{1,5,6\}}& 0 & 1 & 1 & 1 & 0 & 0 & 1 \cr
{\scriptstyle \{2,4,6\}}& 1 & 0 & 1 & 0 & 1 & 0 & 0 \cr
} \] Then $\Hrk(A) = 7$ if and only if \newcommand\HL[1]{\fcolorbox{red}{white}{$#1$}} \[ \begin{vmatrix} 0 & 0 & \pm 1 & 0 & \pm 1 & \pm 1 & \pm 1 \\ 0 & 0 & \pm
1 & \pm 1 & 0 & \pm 1 & \pm \sqrt{2} \\ 0 & 0 & \pm 1 & \pm 1 & \pm 1 & 0 & \pm 1 \\ 0 & \pm 1 & 0 & \pm 1 & \pm 1 & 0 & 0 \\ 0 & \pm 1 & \pm 1 & 0 & \pm 1 & 0 & 0 \\ 0 & \pm 1 & \pm 1 & \pm 1 & 0 & 0 & \pm 1 \\ \fcolorbox{red}{white}{$\pm 1$} & 0 & \pm 1 & 0 & \pm 1 & 0 & 0 \\ \end{vmatrix} \ = \ \pm \begin{vmatrix} 0 & \pm 1 & 0 & \pm 1 & \pm 1 & \pm 1 \\ 0 & \pm 1 & \pm 1 & 0 & \pm 1 & \HL{\pm\sqrt{2}} \\ 0 & \pm 1 & \pm 1 & \pm 1 & 0 & \pm 1 \\ \pm 1 & 0 & \pm 1 & \pm 1 & 0 & 0 \\ \pm 1 & \pm 1 & 0 & \pm 1 & 0 & 0 \\ \pm 1 & \pm 1 & \pm 1 & 0 & 0 & \pm 1 \end{vmatrix} \ \neq \ 0 . \] The last determinant is of the form $a + \sqrt{2} \cdot b$ for some integers $a,b$. To check that this determinant is nonzero, we can check that $b$ is nonzero. By Laplace expansion, this is the case if \[ \begin{vmatrix} 0 & \pm 1 & 0 & \pm 1 & \HL{\pm 1} \\ 0 & \pm 1 & \pm 1 & \pm 1 & 0 \\ \pm 1 & 0 & \pm 1 & \pm 1 & 0 \\ \pm 1 & \pm 1 & 0 & \pm 1 & 0 \\ \pm 1 & \pm 1 & \pm 1 & 0 & 0 \end{vmatrix} \ = \ \pm \begin{vmatrix} 0 & \pm 1 & \pm 1 & \pm 1 \\ \pm 1 & 0 & \pm 1 & \pm 1 \\ \pm 1 & \pm 1 & 0 & \pm 1 \\ \pm 1 & \pm 1 & \pm 1 & 0 \end{vmatrix} \ \neq \ 0. \] The latter is exactly the claim that the matrix $A_0$ of Proposition~\ref{prop:tech} has $\Hrk(A_0)=4$. \end{proof} \section{Higher level graphs}\label{sec:higher} In this section we study the class $\GLevels_k$ of $k$-level graphs for arbitrary $k$. The Robertson-Seymour theorem assures that the list of forbidden minors characterizing $\GLevels_k$ is finite and we give an explicit description in the next subsection. In Section~\ref{ssec:3level}, we focus on the class of $3$-level graphs which is characterized by exactly one forbidden minor, the wheel $W_4$ with $4$ spokes. The class of $W_4$-minor-free graphs was studied by Halin and we recover its building blocks from levelness considerations. In Section~\ref{ssec:4level} we focus on the class of graphs with Theta rank $2$. 
Forbidden minors for this class can be obtained from the structure of $4$-level graphs. \subsection{Excluded minors for $k$-level graphs} A consequence of Theorem~\ref{thm:main1} is that a graph $G$ is $2$-level if and only if $G$ does not have $K_4$ as a minor. In order to give a characterization of $k$-level graphs in terms of forbidden minors, we first need to view $K_4$ from a different angle. \begin{defi}\label{dfn:cone} The \Defn{cone} over a graph $G = (V,E)$ with apex $w \not\in V$ is the graph \[ \cone(G) = (V \cup \{w\},E \cup \{ wv : v \in V\} ). \] \end{defi} Let us denote by $C_n$ the \Defn{$n$-cycle}. Thus, we can view $K_4$ as the cone over $C_3$. As in the previous section, we only need to consider graphic matroids $\MM(G)$ that are connected. In terms of graph theory these correspond exactly to biconnected graphs. For a flacet $F$ let us denote by $V_F \subseteq V$ the vertices covered by $F$. \begin{prop}\label{prop:V_closed} Let $G = (V,E)$ be a biconnected graph and $F \subset E$ a flacet with $|E \backslash F| \ge 2$. Then $G|_F$ is a vertex-induced subgraph. \end{prop} \begin{proof} By contradiction, suppose that $e\in E \backslash F$ is an edge with both endpoints in $V_F$. Since $F$ is a flacet, $G/F$ is biconnected, but it contains $e$ as a loop; a biconnected graph with at least two edges has no loops, which contradicts $|E \backslash F| \ge 2$. \end{proof} The definition of flacets requires the graph $G/F$ to be biconnected. This, in turn, implies that $G|_{E\backslash F}$ is connected. Let us write $C(F): = \{ uv \in E: u \in V_F, v \not\in V_F\}$ for the induced \Defn{cut}. Moreover, let us write $\oF := E \setminus (F \cup C(F))$. The next result allows us to find minors $G'$ of $G$ with $\Lev(G') = \Lev(G)$. \begin{lemma}\label{lem:contract_oF} Let $G$ be a biconnected graph and $F$ a $k$-level flacet. Then $F$ is a $k$-level flacet of the graph $G/\oF$. \end{lemma} \begin{proof} Let $H = G/\overline{F}$.
It follows from the definition of flacets that $G|_{\oF}$ is connected and thus $H/F = G/(F \cup \oF)=U_{|C(F)|,1}$ is biconnected. Moreover, $H|_F = G|_F$ is biconnected and therefore $F$ is a flacet of $H$. Concerning the levelness of $F$, observe that it cannot exceed $k$ in $H$. Let $T_1 \subset E$ be a spanning tree whose restriction to the connected graph $G|_{E \setminus F}$ is again a spanning tree. In particular, $|T_1 \cap F|$ is minimal among all spanning trees. It now suffices to show that there is a sequence of spanning trees $T_1,T_2,\dots,T_k \subset E$ with $|T_i \cap F| = |T_1 \cap F| + i-1$ for all $i=1,\dots,k$ and such that $T_i \cap \oF = T_j \cap \oF$ for all $i,j$. The contractions $T_i / \oF$ then show that $F$ is at least $k$-level for $H$. If $T_i \cap F$ is not a spanning tree for $G|_F$, then pick $e \in F \setminus T_i$ such that $e$ connects two connected components of $(V_F,T_i \cap F)$. Since $T_i$ is a spanning tree, there is a cycle in $T_i \cup e$ that uses at least one cut edge $f \in C(F) \cap T_i$. Hence $T_{i+1} = (T_i \setminus f) \cup e$ is the new spanning tree with the desired properties. \end{proof} The contraction of $\oF$ in $G$ gives a graph with vertices $V_F \cup \{w\}$, where $w$ results from the contraction of $\oF$.
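The spanning-tree bookkeeping in the proof above is easy to replicate by brute force on a small example. The following sketch (vertex and edge labels are our own, not from the paper) enumerates the spanning trees of $K_4$, viewed as the cone over $C_3$ with apex $w$, and records the distinct intersection sizes $|T \cap F|$ for the flacet $F$ given by the edges of the triangle $G - w$; exactly the three values $0,1,2$ occur, so $F$ is a $3$-level flacet.

```python
from itertools import combinations

# K_4 on vertices 0..3; apex w = 3, flacet F = edges of the triangle on {0,1,2}.
edges = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
F = {(0, 1), (0, 2), (1, 2)}

def is_spanning_tree(T, n=4):
    """n-1 edges on n vertices form a spanning tree iff they are acyclic."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in T:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # cycle found
        parent[ru] = rv
    return True

# Levelness of F = number of distinct values |T ∩ F| over spanning trees T.
values = {len(F & set(T)) for T in combinations(edges, 3) if is_spanning_tree(T)}
print(sorted(values))  # → [0, 1, 2]
```

The maximal value $2$ equals the rank of the triangle in $\MM(K_4)$, and the three distinct values match $\deg(w)=3$.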
Since $|E_w|=\deg(w)$ and every spanning tree contains at least one edge of $E_w$, there are at most $\deg(w)$ spanning trees with pairwise different sizes of intersection with $F$, thus $k\leq \deg(w)$. Moreover, $G$ is simple, thus there exists a spanning tree $T_1$ such that $E_w\subseteq T_1$. Applying the same reasoning as in the proof of Lemma \ref{lem:contract_oF}, we obtain a sequence of spanning trees with the desired properties. Finally, we observe that $T_1\cap F$ has $\deg(w)$ connected components, thus the sequence consists of at least $\deg(w)$ trees, proving that $\deg(w)\leq k$. \end{proof} It follows from Proposition \ref{prop:vertexdegreelevelness} that the cone over a biconnected graph on $k$ vertices has a $k$-level flacet. The next result gives a strong converse to this observation. A graph $G$ is called \Defn{minimally biconnected} if $G\setminus e$ is not biconnected for all $e \in E$. For more background on this class of graphs we refer to \cite{Plummer} and \cite{dirac}. \begin{prop}\label{prop:pyramidminor} Let $G$ be a simple, biconnected graph with a vertex $w$ such that the set of edges $F$ not incident to $w$ is a flacet. If $F$ is $k$-level, then $G$ has a minor $\cone(H)$ where $H$ is a minimally biconnected graph on $k$ vertices. \end{prop} \begin{proof} Let $m = |V_F|$. By Proposition \ref{prop:vertexdegreelevelness}, $\deg(w)=k$ and thus $m\geq k$. By removing edges if necessary, we can assume that $G|_F$ is minimally biconnected. By a result of Tutte (see~\cite[Thm.~4.3.1]{Oxley}) the contraction of any edge of $F$ leaves a biconnected graph. Contract an edge such that at most one endpoint is adjacent to $w$. The new edge set $F^\prime$ is still a $k$-level flacet. By iterating these deletion-contraction steps, we obtain a cone over $F^\prime$ with apex $w$.
\end{proof} \begin{thm}\label{thm:excludminorsgraphs} A graph $G$ is $k$-level if and only if $G$ has no minor $\cone(H)$ where $H$ is a minimally biconnected graph on $k+1$ vertices. \end{thm} \begin{proof} Let $G=(V,E)$ be a graph and $F \subset E$ an $m$-level flacet such that $m > k$. By Lemma~\ref{lem:contract_oF}, we may assume that $F$ is the set of edges not incident to some $w \in V$. By Proposition~\ref{prop:pyramidminor}, we may also assume that $G|_F$ is minimally biconnected on $m$ vertices. Now, $G|_F$ contains a minor $H$ that is minimally biconnected on $k+1$ vertices and hence $G$ contains $\cone(H)$ as a minor. \end{proof} \subsection{The class of $3$-level graphs}\label{ssec:3level} According to Theorem~\ref{thm:excludminorsgraphs}, the excluded minors for $\GLevels_3$ are cones over minimally biconnected graphs on $4$ vertices. The only minimally biconnected graph on $4$ vertices is the $4$-cycle and hence the excluded minor is $W_4 = \cone(C_4)$, the wheel with $4$ spokes. In general, let us write $W_n = \cone(C_n)$ for the \Defn{$n$-wheel}, which is an $n$-level graph. The family of $W_4$-minor-free graphs was considered by R.~Halin (see~\cite[Ch.~6]{Diestel}). In this section, we will rediscover the \emph{building blocks} for this class. We start with the observation that by Lemma~\ref{lem:not3con} and Corollary~\ref{cor:Series_level}, we may restrict to $3$-connected, simple graphs. Recall that a graph $G$ (and its matroid) is \Defn{$k$-connected} if the removal of any $k-1$ vertices leaves $G$ connected. Also, a graph is \Defn{$k$-regular} if every vertex is incident to exactly $k$ edges. \begin{prop}\label{prop:3level_3conn} A $3$-level, $3$-connected simple graph is $3$-regular. \end{prop} \begin{proof} A graph $G$ with a vertex of degree at most $2$ cannot be $3$-connected. If there is a vertex $w$ of degree at least $4$, then $G-w$ is biconnected.
It follows that the set of edges $F$ not incident to $w$ forms a flacet and Proposition~\ref{prop:vertexdegreelevelness} yields the claim. \end{proof} The following well-known result (see~\cite[Thm~8.8.4]{Oxley}) puts strong restrictions on minimally $3$-connected matroids. An \Defn{$n$-whirl} is the matroid of the $n$-wheel $W_n = \cone(C_n)$ with the additional basis being the rim of the wheel $B = E(C_n)$. \begin{thm}[Tutte's wheels and whirls theorem]\label{thm:Tutte} Let $\MM = (E,\BB)$ be a $3$-connected matroid. Then the following are equivalent: \begin{enumerate}[\rm (i)] \item For all $e \in E$ neither $\MM \backslash e$ nor $\MM / e$ is $3$-connected; \item $\MM$ is an $n$-whirl or an $n$-wheel, for some $n$. \end{enumerate} \end{thm} We will come back to whirls in the next section. For now, we note that the only minimally $3$-connected graphs are the wheels. Moreover, note that every $3$-regular simple graph must have an even number of vertices ($3|V(G)|=2|E(G)|$). \begin{lemma}\label{lem:3con_6vert} Let $G$ be a $3$-connected $3$-regular simple graph with at least $6$ vertices. Then $G$ is at least $4$-level. \end{lemma} \begin{proof} By assumption $G$ cannot be a wheel. By Theorem \ref{thm:Tutte}, there must be an edge $e$ such that $G \setminus e$ or $G / e$ is $3$-connected. Now, $G\setminus e$ has a degree-$2$ vertex for all $e \in E$ and hence is not $3$-connected. On the other hand, $G / e$ is $3$-connected and the removal of multiple edges does not alter $3$-connectivity. This rules out all the cases where $G/e$ has multiple edges, because there would be a vertex of degree $2$ (not counting multiple edges). The only possibility is that $G/e$ is a simple $3$-connected graph with a vertex of degree $4$. By Proposition~\ref{prop:vertexdegreelevelness}, we conclude that $G/e$ (and consequently $G$) is at least $4$-level. \end{proof} \begin{cor}\label{cor:k4only3level} $K_4$ is the only $3$-level, $3$-connected simple graph.
\end{cor} The following gives a complete characterization of $3$-level graphs. \begin{thm}\label{thm:level3} For a graph $G$ the following are equivalent. \begin{enumerate}[\rm (i)] \item $G$ has no minor isomorphic to $W_4$; \item $G$ is $3$-level; \item $G$ can be constructed from the cycles $C_2$, $C_3$, the dual $C_3^*$, and $K_4$ by taking direct or $2$-sums. \end{enumerate} \end{thm} \begin{proof} (i) $\Leftrightarrow$ (ii) is Theorem~\ref{thm:excludminorsgraphs} together with the fact that $C_4$ is the unique minimally biconnected graph on $4$ vertices. (ii) $\Rightarrow$ (iii) follows from Corollary \ref{cor:k4only3level}. (iii) $\Rightarrow$ (ii) follows from Corollary~\ref{cor:Series_level} and~\eqref{eqn:direct_sum}. \end{proof} By inspecting the building blocks for $2$-level (Example~\ref{ex:serpargraphs}) and $3$-level graphs, it is tempting to think that the building blocks of $k$-level graphs are given by the building blocks and the forbidden minors of $\GLevels_{k-1}$. This turns out to be false even for $\GLevels_{4}$. Indeed, $\Lev(K_5)=4$, and $K_5$ cannot be obtained by a sequence of direct sums and $2$-sums of $C_2$, $C_3$, $C_3^*$, $K_4=W_3$, and $W_4$. \subsection{$4$-level and Theta-$2$ graphs}\label{ssec:4level} A further hope one could nourish is that $3$-level graphs coincide with the graphs of Theta rank $2$. This would be the case if and only if $\Th(W_4) = 3$. The only $k$-level flacet $F$ of $W_n$ with $k > 3$ is given by the rim of the wheel $F = E(C_n)$. To find a sum-of-squares representation of $\ell_F(\x)$ for the basis configuration $V_{\MM(W_n)}$ of $W_n$, we may project onto the coordinates of $F$, which yields the configuration of \emph{forests} of $C_n$. Now, every subset of $E(C_n)$ is independent except for the complete cycle $I = E(C_n)$. Hence the configuration of forests is given by $\{0,1\}^n \setminus \{\1\}$ and the linear function in question is $\ell(\x) = n-1 - \sum_i x_i$.
For $n=4$, \[ 18 \ell(\x) \ = \ 2 (\ell(\x)(\ell(\x)-4))^2 + (\ell(\x)(\ell(\x)-1))^2 \quad\text{ for all } \x \in \{0,1\}^4, \x \neq \1 \] gives a sum-of-squares representation~\eqref{eqn:k-sos} of degree $\le 2$. We may now pull back the $2$-sos representation to $\ell_F(\x)$ which shows that $W_4$ is Theta-$2$. Towards a list of excluded minors for $\GTheta_2$, we focus on the class of $4$-level graphs. Using Theorem \ref{thm:excludminorsgraphs} we easily find the two excluded minors for $\GLevels_{4}$: \begin{center} \includegraphics[scale=3.5]{excludedminor4lev.pdf} \end{center} The first graph is the $5$-wheel $W_5$; the second graph is the cone over $K_{2,3}$ and is called $A_3\backslash x$ in~\cite{Oxley5wheel}. The next result states that this is the right class to study. \begin{prop}\label{prop:W5_theta3} The wheel $W_5$ has Theta rank $3$. \end{prop} \begin{proof} Let $F = E(C_5)$ be the edges of the rim of the wheel which is a flat of rank $4$. This is the unique flacet of levelness $5$ and it is sufficient to show that $4 - \ell_F(\x)$ is not $2$-sos with respect to the spanning trees $V = V_{\MM(W_5)}$ of $W_5$. Arguing by contradiction, let us suppose that there are polynomials $h_1(\x),\dots,h_m(\x)$ of degree $\le 2$ such that \[ f(\x) \ := \ 4 - \ell_F(\x) - h_1(\x)^2 - \cdots - h_m(\x)^2 \] is identically zero on $V$. Consider the point $p = \1_F$. This is not a basis of $\MM(W_5)$ and a polynomial separating $p$ from $V$ is given by $f$. That is, by construction $f$ is a polynomial that vanishes on $V$ and $f(p) \le -1 \neq 0$. Now we may compute a degree-compatible Gr\"obner basis of the vanishing ideal $I = I(V)$ using \emph{Macaulay2}~\cite{M2}. Evaluating the elements of the Gr\"obner basis at $p$ shows that the only polynomials not vanishing at $p$ are of degree $5$. As $\deg(f) \le 4$ by construction, this yields a contradiction.
\end{proof} The proof suggests an interesting connection to Tutte's wheels and whirls theorem (Theorem~\ref{thm:Tutte}): For $n = 4$ it states that the vanishing ideal of the $n$-wheel $I(W_n)$ is generated by $I(\mathcal{W}^n)$ and a unique polynomial of degree $n$. This should be viewed in relation to Proposition~\ref{prop:cube_gen}: Projecting $V_{\mathcal{W}^n}$ and $V_{W_n}$ onto the coordinates of $F = E(C_n)$ yields $\{0,1\}^n$ and $\{0,1\}^n \setminus \1$, respectively. Oxley~\cite{Oxley5wheel} determined that the class of $3$-connected graphs not having $W_5$ as a minor consists of $17$ individual graphs and $4$ infinite families. The graph $A_3 \backslash x$ is clearly among these graphs and is a minor of the $4$ infinite families as well as three further ones. This proves the following result. \begin{thm} Every $4$-level graph is obtained by direct and $2$-sums of $C_2$, $C_3$, $C_3^*$, and the following $14$ graphs\\ \centerline{ \includegraphics[scale=1.8]{constructorsLev4.pdf}. } \qed \end{thm} As $A_3 \backslash x$ is Theta-$2$, a complete list of excluded minors has to be extracted from the $17$ graphs plus $4$ families in~\cite{Oxley5wheel}. As a last remark, we note that the Theta-$1$ graphs are given by the series-parallel graphs. The property of being Theta-$2$, however, is independent of planarity. \begin{prop} The graphs $K_5$ and $K_{3,3}$ have Theta rank $2$. \end{prop} \begin{proof} For both cases we use the idea that for a given flacet $F \subseteq E$, we may project the basis configuration $V$ onto the coordinates given by $F$ and find a $2$-sos representation of the linear function $\rk(F) - \sum_i x_i$. For the graph $K_{3,3}$, the only flacets of levelness $>3$ are given by $4$-cycles. Projecting onto these coordinates yields $\{0,1\}^4 \setminus \1$ which is a point configuration of Theta rank $2$. For the complete graph $K_5$, we note that the only flacets $F$ of levelness $>3$ are given by the edges of an embedded $K_4$.
For such a flacet, we may equivalently consider $\ell_{E \setminus F}(\x)-1 \ge 0$. Projecting onto $E \setminus F$ yields $\{0,1\}^4 \setminus \0$, which is again a point configuration of Theta rank $2$. \end{proof} \section{Excluded minors for $k$-level matroids}\label{sec:klevel} The cone construction (Definition~\ref{dfn:cone}) employed in the previous section to show the existence of finitely many excluded minors for $\GLevels_k$ cannot be extended to general matroids. Indeed, whereas any two trees on $n$ vertices have the same matroid, the matroids of their cones typically do not coincide. Moreover, there is no Robertson-Seymour theorem for general matroids: Minor-closed classes of matroids are generally not characterized by finitely many excluded minors. In this section we show that $k$-level matroids can be characterized in finite terms and we describe the class explicitly. A matroid $\MM$ is called \Defn{minimally $\boldsymbol k$-level} if $\Lev(\MM)=k$ and $\Lev(\NN)<\Lev(\MM)$ for every proper minor $\NN$ of $\MM$. It is clear that excluded minors for $\MLevels_k$ are given by the minimally $l$-level matroids for $l > k$. The main result of this section is the following. \begin{thm}\label{thm:finiteexclminorsklevel} Excluded minors for the class of $(k-1)$-level matroids are given by the minimally $k$-level matroids. In particular, the list of excluded minors for $\MLevels_{k-1}$ is finite. \end{thm} Let us formalize a notion that we already saw in the proof of Lemma~\ref{lem:contract_oF}: A \Defn{$\boldsymbol k$-sequence of bases} for a flacet $F$ is a collection of bases $B_1,\ldots , B_k \in \BB(\MM)$ such that \begin{itemize}[$\circ$] \item $|F\cap B_1|$ is minimal among all bases of $\MM$, \item $|F\cap B_{i+1}|=|F\cap B_i|+1$, for $1\leq i < k$, \item $F\cap B_i\subset F\cap B_{i+1}$, for $1\leq i \leq k{-}1$, and \item $|F\cap B_k|=\rk_{\MM}(F)$. \end{itemize} It is straightforward to verify that a flacet $F$ is $k$-level if and only if $F$ has a $k$-sequence of bases.
Indeed, starting with a basis $B_1$ such that $|F \cap B_1|$ is minimal, one iteratively obtains $B_{i+1}$ from $B_i$ by a basis exchange with some $e_i \in F \setminus B_i$. We can also make a more refined choice. \begin{lemma}\label{lem:ksequencewithe} Let $\MM$ be a connected matroid and $F$ a $k$-level flacet. For any $e\in \cF := E(\MM) \setminus F$, there exists a $k$-sequence of bases $B_1,\ldots , B_k$ such that $e\in B_i$ for $i=1,\ldots, k$. \end{lemma} \begin{proof} Since $\MM$ is connected, $e$ is not a loop and we can find a basis $B_1$ such that $|F \cap B_1|$ is minimal and $e \in B_1$. For $1 \le i < k$, we have $|F \cap B_i| < \rk(F)$. So, there is an $e_i \in F \setminus B_i$ such that $(F \cap B_i) \cup \{e_i\}$ is independent. Let $C_i \subseteq B_i \cup \{e_i\}$ be the fundamental circuit containing $e_i$. Since $F$ is a flat and $C_i$ is not a circuit in $F$, there is $f_i \in C_i \setminus (F \cup \{e\})$ and we define $B_{i+1} = (B_i \setminus f_i) \cup e_i$. \end{proof} Since $\MM_1\oplus_2 \MM_2$ contains both $\MM_1$ and $\MM_2$ as minors, it follows from Lemma~\ref{lem:not3con} and Corollary~\ref{cor:Series_level} that every minimally $k$-level matroid is $3$-connected. \begin{prop}\label{prop:minimal_dualisminimalconnected} Let $\MM$ be a minimally $k$-level matroid and $F$ a $k$-level flacet of $\MM$. Then $(\MM/ F)^*$ is a minimally connected matroid. \end{prop} \begin{proof} Suppose $(\MM/ F)^*$ is not minimally connected. Then there exists an element $e\in \cF$ such that the deletion $(\MM/ F)^*\setminus e$ is a connected matroid. Since a matroid is connected if and only if its dual is, we infer that $(\MM/ F)/ e$ is connected. Since $\MM$ is minimally $k$-level, it is $3$-connected. By Lemma \ref{lem:ksequencewithe} we can construct a $k$-sequence of bases for $F$ such that all bases contain $e$. Then $B_1\setminus e, \ldots , B_k\setminus e$ is a $k$-sequence of bases for $F$ with respect to the matroid $\MM/e$. We only need to check that $F$ is a flacet of $\MM/e$.
If $C$ is a circuit containing $e$ and some elements of $F$, it must contain at least a second element $e'\in \cF$ because $F$ is a flat. In addition, there must be at least a third element $e''\in \cF$, otherwise $e'$ would be a loop of $(\MM/ F)/e$, which is connected by hypothesis. This shows that $F$ is a flat of $\MM/e$. Moreover, $(\MM/e)/F\cong (\MM/F)/e$ and $(\MM/e)|_F\cong \MM|_F$ are connected. Thus $F$ is a $k$-level flacet of $\MM/e$, contradicting the $k$-level minimality of $\MM$. \end{proof} Similar to the case of graphs, the following proposition states that $\cF = E(\MM) \setminus F$ is independent for a $k$-level flacet of a minimally $k$-level matroid. \begin{prop}\label{prop:minimalklevel_rankoftilde} Let $\MM$ be a minimally $k$-level matroid and $F$ a $k$-level flacet of $\MM$. Then $\rk(\cF)=|\cF|$. \end{prop} \begin{proof} By contradiction, suppose $\rk(\cF)<|\cF|$. Consider a $k$-sequence of bases $B_1,\ldots , B_k$ for $F$. Because of the assumption $\rk(\cF)<|\cF|$, we can pick an element $e\in \cF\setminus B_1$. By Proposition \ref{prop:minimal_dualisminimalconnected}, $(\MM/F)/e$ is not connected. Since $F$ is a flacet, $\MM/F$ is connected and, by~\cite[Thm.~4.3.1]{Oxley}, $(\MM/F)\setminus e$ is connected. Now $F$ is a flat of $\MM\setminus e$ and both $(\MM\setminus e)|_F\cong \MM|_F$ and $(\MM\setminus e)/F\cong (\MM/F)\setminus e$ are connected. Hence, $F$ is a flacet of the matroid $\MM\setminus e$. The bases $B_1, \ldots, B_k$ are also bases for $\MM\setminus e$ and form a $k$-sequence for the flacet $F$. Thus $\MM\setminus e$ is a $k$-level minor of $\MM$, contradicting the $k$-level minimality of $\MM$. \end{proof} \begin{prop}\label{prop:reductionofF} Let $\MM$ be a minimally $k$-level matroid and $F$ a $k$-level flacet of $\MM$. Then $\MM|_F$ is a minimally connected matroid. \end{prop} \begin{proof} Suppose that $(\MM|_F)\setminus e$ is connected for some $e\in F$. Then $\Fme=F\setminus e$ is a flat of $\MM\setminus e$. 
We show that $\Fme$ is a $k$-level flacet of $\MM\setminus e$. The matroid $(\MM\setminus e)|_{\Fme}\cong (\MM|_F)\setminus e$ is connected by hypothesis. Note that $\MM / \Fme$ has $e$ as a loop and hence $(\MM\setminus e)/\Fme\cong \MM/F$. Thus, $(\MM\setminus e)/\Fme$ is also connected, which shows that $\Fme$ is a flacet. Lastly, we show that there is a $k$-sequence of bases of $\MM\setminus e$ for $\Fme$. Since $\MM|_F$ is connected, it has a basis that avoids $e$. We can complete this to a basis $B_k$ of $\MM$. Now for $f \in \cF$, $B_k \cup f$ contains a circuit and by Proposition~\ref{prop:minimalklevel_rankoftilde} this circuit is not entirely contained in $\cF$. Hence, we can define a basis $B_{k-1} := (B_k \setminus f') \cup f$ for some $f' \in B_k \cap F$ in this circuit. Continuing this way yields a $k$-sequence of bases $B_1,\dots,B_k$ for $F$ that avoids $e$ and hence is a $k$-sequence of bases for $\Fme$ in $\MM \setminus e$. This contradicts the $k$-level minimality of $\MM$. \end{proof} \begin{prop}\label{prop:rankofFminklevel} Let $\MM$ be a minimally $k$-level matroid and $F$ a $k$-level flacet of $\MM$. Then $\rk(F)=k-1$. \end{prop} \begin{proof} Since a $k$-sequence of bases increases the intersection with $F$ exactly $k-1$ times and ends at $|F\cap B_k|=\rk(F)$, we always have $\rk(F)\geq k-1$. Suppose that $\rk(F)>k{-}1$. Consider a $k$-sequence $B_1, \dots , B_k$ for $F$: by definition $|F\cap B_k|=\rk(F)>k{-}1$ and thus $|F\cap B_1|>0$. Equivalently, there is an element $e\in F$ such that $e\in B_i$ for $i=1,\ldots , k$. We prove that the matroid $\MM/e$ is $k$-level with respect to the flacet $\Fme=F\setminus e$. $(\MM/e)|_{\Fme}\cong (\MM|_F)/e$ is connected because $\MM|_F$ is minimally connected by Proposition \ref{prop:reductionofF}, and $(\MM/e)/\Fme\cong \MM/ F$ is connected because $F$ is a flacet of $\MM$. Finally, $B_1\setminus e, \dots , B_k\setminus e$ are bases of $\MM/e$ and form a $k$-sequence for the flacet $\Fme$, contradicting the $k$-level minimality of $\MM$. \end{proof} We can finally show that the excluded minors of $\MLevels_k$ are given by the minimally $(k+1)$-level matroids.
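The notion of a $k$-sequence of bases can be checked computationally on the running wheel example. The sketch below (vertex and edge labels are ours) exhibits an explicit $4$-sequence of bases for the rim flacet $F = E(C_4)$ of $W_4$, verifying all four defining conditions against the full list of spanning trees; this matches $\deg(w)=4$ for the hub $w$ via Proposition~\ref{prop:vertexdegreelevelness}.

```python
from itertools import combinations

# Wheel W_4 = cone over C_4: hub 0, rim vertices 1..4 (labels are ours).
spokes = [(0, 1), (0, 2), (0, 3), (0, 4)]
rim = [(1, 2), (2, 3), (3, 4), (4, 1)]
edges = spokes + rim
F = set(rim)  # the rim flacet; its rank in M(W_4) is 3

def is_spanning_tree(T, n=5):
    """n-1 edges on n vertices form a spanning tree iff they are acyclic."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in T:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # cycle found
        parent[ru] = rv
    return True

# Candidate 4-sequence: trade one spoke for one rim edge at each step.
B = [
    {(0, 1), (0, 2), (0, 3), (0, 4)},  # |F ∩ B_1| = 0
    {(1, 2), (0, 2), (0, 3), (0, 4)},  # |F ∩ B_2| = 1
    {(1, 2), (2, 3), (0, 3), (0, 4)},  # |F ∩ B_3| = 2
    {(1, 2), (2, 3), (3, 4), (0, 4)},  # |F ∩ B_4| = 3 = rk(F)
]
trees = [set(T) for T in combinations(edges, 4) if is_spanning_tree(T)]
assert all(Bi in trees for Bi in B)                         # all four are bases
assert len(F & B[0]) == min(len(F & T) for T in trees)      # minimality
assert all(len(F & B[i+1]) == len(F & B[i]) + 1 for i in range(3))
assert all(F & B[i] <= F & B[i+1] for i in range(3))        # nested chain
assert len(F & B[3]) == 3                                   # = rk(F)
print("the rim of W_4 admits a 4-sequence of bases")
```

Each step removes a spoke and inserts a rim edge, exactly as in the basis-exchange argument of Lemma~\ref{lem:ksequencewithe}.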
\begin{prop}\label{prop:minorofminklevel} Every minimally $(k+1)$-level matroid has a $k$-level minor. \end{prop} \begin{proof} Let $\MM$ be a minimally $(k+1)$-level matroid and $F$ a $(k+1)$-level flacet. Choose a $(k{+}1)$-sequence $B_0,\dots , B_{k}$ for $F$. Pick an element $f\in F \setminus B_0$ such that $f \in B_i$ for $i=1,\dots, k$. Applying the same reasoning as in the proof of Proposition~\ref{prop:rankofFminklevel}, we infer that $\Fme:=F\setminus f$ is a flacet of $\MM/f$. Moreover, $B_1\setminus f, \ldots ,B_k\setminus f$ is a $k$-sequence of bases which shows that $\MM/f$ is $k$-level. \end{proof} To complete the proof of Theorem~\ref{thm:finiteexclminorsklevel}, we show that for fixed $k$, the size of the ground set of a minimally $k$-level matroid is bounded. This trivially implies that there are only finitely many minimally $k$-level matroids. To bound the size of the ground set of a minimally $k$-level matroid $\MM$, we choose one of its $k$-level flacets $F$ and bound separately the size of $F$ and the size of its complement $\cF=E(\MM)\setminus F$. We quote two useful facts from Oxley's book. \begin{prop}{\cite[Prop.~4.3.11]{Oxley}}\label{prop:finitenessofF} Let $\MM$ be a minimally connected matroid of rank $r$ where $r\geq 3$. Then $|E(\MM)|\leq 2r{-}2$. Moreover, equality holds if and only if $\MM\cong M(K_{2,r-1})$. \end{prop} \begin{prop}{\cite[Ch.~4, Ex. 10 (d)]{Oxley}} \label{prop:finitenessofG} Let $\MM$ be a matroid for which $\MM^*$ is minimally connected. Then either $\MM\cong U_{n,1}$ for some $n\geq 3$ or $\MM$ has at least $\rk(\MM){+}1$ non-trivial parallel classes. \end{prop} \begin{proof}[Proof of Theorem \ref{thm:finiteexclminorsklevel}] In light of Theorem~\ref{thm:main1}, we only need to consider $k \ge 3$. Let $\MM$ be a minimally $k$-level matroid. Any $k$-level flacet $F$ of $\MM$ is of rank $k-1$ by Proposition \ref{prop:rankofFminklevel}; by Proposition \ref{prop:reductionofF}, $\MM|_F$ is minimally connected.
If $\rk(F) = 2$, then Proposition~\ref{prop:finitenessofG} implies that $\MM|_F \cong U_{3,2}$. For $\rk(F) \ge 3$, by Proposition~\ref{prop:finitenessofF}, $F$ has at most $2(k-1)-2=2k-4$ elements. It remains to bound the number of elements in $\cF$. Set \[ T \ := \ \{e \in \cF \; : \; \exists \, C \text{ circuit of $\MM$ with $e\in C$ and $|C\cap \cF|=2$} \}. \] That is, every $e \in T$ is in a non-trivial parallel class in $\MM / F$. The number of non-trivial parallel classes is bounded from above by $\frac{|T|}{2}$. Set $S:=\cF \setminus T$. Define $h:=\rk(\MM){-}\rk(F){-}1$, so that $\rk(\MM/F)=h+1$. By Proposition~\ref{prop:minimal_dualisminimalconnected}, $(\MM/F)^*$ is a minimally connected matroid on at least $3$ elements (since for $k \ge 3$ this implies $|\cF|\geq 4$). By Proposition~\ref{prop:finitenessofG} there are two possibilities: If $\MM/F \cong U_{|\cF|,1}$, then $\rk(\MM)=k$ and $|\cF|\leq k$ because of Proposition~\ref{prop:minimalklevel_rankoftilde}. It follows that $|E(\MM)|=|F|{+}|\cF|\leq 2k{-}4{+}k=3k{-}4$. If $k=2$, then $|\cF| \le 3$. On the other hand, if $\rk(\MM/F)=h{+}1>1$, then $\MM/F$ has at least $h + 2$ non-trivial parallel classes. Hence we obtain $|T|\geq 2h + 4$. Moreover, $\cF$ has exactly $\rk(F) + h + 1=k + h$ elements and this fact yields $|T|\leq k + h$. Together this gives \[ 2h{+}4\leq k{+}h \quad \Longrightarrow \quad h\leq k{-}4. \] It is immediate that $|\cF|=k{+}h\leq 2k{-}4$ and finally $|E(\MM)|=|F| + |\cF|\leq 2k - 4 {+} 2k - 4=4k - 8.$ \end{proof} The result of this section does not rule out that matroids of Theta rank~$k$ have infinitely many excluded minors, and we did not manage to extend our techniques to Theta rank. However, we conjecture that the class $\MTheta_k$ of matroids of Theta rank~$k$ is described by finitely many excluded minors. \bibliographystyle{myamsalpha}
https://arxiv.org/abs/1501.00430
The Unimodality Conjecture for cubical polytopes
Although the Unimodality Conjecture holds for certain classes of cubical polytopes (e.g. cubes, capped cubical polytopes, neighborly cubical polytopes), it fails for cubical polytopes in general. A 12-dimensional cubical polytope with non-unimodal face vector is constructed by using capping operations over a neighborly cubical polytope with $2^{131}$ vertices. For cubical polytopes, the Unimodality Conjecture is proved for dimensions less than 11. The first one-third of the face vector of a cubical polytope is increasing and its last one-third is decreasing in any dimension.
\section{Introduction} The vector $\textbf{a}=(a_0,\ldots, a_{d-1})$ is called \textit{unimodal} if for some (not necessarily unique) index $i$, $(a_0,\ldots, a_i)$ is non-decreasing and $(a_i,\ldots, a_{d-1})$ is non-increasing. If that is the case, we say that the unimodal vector $\textbf{a}$ \textit{peaks} at $i$. We say that the vector $\textbf{a}$ \textit{dips} at $i$ if $a_{j}>a_i<a_{k}$ for some $0\leq j<i<k\leq d-1$. Clearly, the vector $\textbf{a}$ is unimodal if and only if it does not dip anywhere. The question of unimodality of the members of certain classes of vectors has been of long-standing interest in algebra, combinatorics, graph theory and geometry (see e.g. \cite{B1,sta}). By \textit{$f$-vector} (\textit{face vector}) we mean the vector $(f_0,\ldots,f_{d-1})$, where $f_i$ is the number of $i$-dimensional proper faces of a $d$-polytope. The unimodality of face vectors of polytopes has been extensively studied (see e.g. \cite{bj1,eck,maj,SZ1}). In 1961 (according to Bj\"orner \cite{bj1}), Motzkin conjectured that the $f$-vector of any polytope is unimodal. The Unimodality Conjecture for polytopes was also stated by Welsh \cite{wel}. Danzer already showed in 1964 (see \cite[Section 2.1]{zi1}) that the conjecture cannot stand in its full generality, still leaving open the question: which natural classes of polytopes have unimodal $f$-vectors? Examples of simplicial polytopes with non-unimodal face vectors were first published by Bj\"orner \cite{bj2}. Bj\"orner's original counterexamples were $24$-dimensional, but subsequently $20$-dimensional counterexamples were constructed by Bj\"orner \cite{bj3} and independently by Lee \cite{bil,lee}. It was shown by Eckhoff \cite{eck} that, in fact, this is the smallest dimension in which simplicial counterexamples can be found.
In Section \ref{non}, we construct a $12$-dimensional cubical polytope with non-unimodal face vector, and in Section \ref{small} we show that there are no cubical counterexamples in dimensions less than $11$. Bj\"orner conjectured a partial unimodality property for polytopes: the face vectors of polytopes increase on the first quarter and decrease on the last quarter. Bj\"orner proved in \cite{bj1} that this conjecture holds for simplicial polytopes; moreover, the face vectors of simplicial polytopes increase up to the middle and decrease on the last quarter. In Section \ref{par}, we prove a similar statement for cubical polytopes: their face vectors can dip only in the middle one-third part. \section{Cubes and cubical polytopes} The \textit{$d$-cube} (denoted by $C^d$) is a polytope combinatorially equivalent to the unit cube $[0, 1]^d$. It is a well-known fact that the face vector of the $d$-cube is unimodal and peaks at $\lfloor \frac{d}{3} \rfloor$ (in addition, it also peaks at $\lfloor \frac{d+1}{3} \rfloor$). This fact can also be expressed as follows: let $j\in \{0,1,2\}$ be such that $d\equiv j \mod 3$; then the $f$-vector of the $d$-cube peaks at $\frac{d-j}{3}$. If $j=2$, then it also peaks at $\frac{d-j}{3}+1$. The $f$-vector of the $d$-cube has no more peaks. The $f$-vector of the $d$-cube is strictly increasing up to $\lfloor \frac{d}{3} \rfloor$ and strictly decreasing from $\lfloor \frac{d+1}{3} \rfloor$ on. That is, \begin{equation}\label{szig} f_0(C^d)<\cdots <f_{\lfloor \frac{d}{3} \rfloor}(C^d)\hspace{3mm} \text{and}\hspace{3mm} f_{\lfloor \frac{d+1}{3}\rfloor}(C^d)>\cdots > f_{d-1}(C^d) \end{equation} A $d$-polytope is called \textit{cubical} provided all its facets are combinatorially equivalent to $(d-1)$-cubes (in other words, all its proper faces are cubes). The following important combinatorial invariant of a cubical polytope was introduced by Adin \cite{adi}.
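The monotonicity pattern (\ref{szig}) is easy to confirm computationally. The following sketch (in Python; the helper name is ours) checks it for small $d$ using only the standard count $f_k(C^d)=2^{d-k}\binom{d}{k}$:

```python
from math import comb

def cube_f_vector(d):
    # f_k(C^d) = 2^(d-k) * C(d, k) for k = 0, ..., d-1
    return [2 ** (d - k) * comb(d, k) for k in range(d)]

for d in range(2, 40):
    f = cube_f_vector(d)
    # strictly increasing up to floor(d/3) ...
    assert all(f[k] < f[k + 1] for k in range(d // 3))
    # ... and strictly decreasing from floor((d+1)/3) on
    assert all(f[k] > f[k + 1] for k in range((d + 1) // 3, d - 1))
    # the maximum is attained at floor(d/3), and also at floor((d+1)/3)
    # (the two indices differ exactly when d = 2 mod 3)
    assert f[d // 3] == max(f) == f[(d + 1) // 3]
```

The ratio $f_{k+1}/f_k=(d-k)/(2(k+1))$ drops below $1$ precisely after $k=(d-2)/3$, which is what the assertions confirm.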
Let $P$ be a cubical $d$-polytope with $f$-vector $\textbf{f}=(f_0,\ldots,f_{d-1})$ and let $H$ be the $d\times d$ matrix given by \begin{equation}\label{fhH} H(i,j)=2^{-j}\binom{d-i-1}{d-j-1}, \hspace{4mm} \text{for} \hspace{3mm} 0\leq i,j \leq d-1 \end{equation} Define the \textit{short cubical h-vector} $\textbf{h}^{(sc)}=(h_0^{(sc)},\ldots,h_{d-1}^{(sc)})$ of $P$ by $\textbf{h}^{(sc)}=\textbf{f}\cdot H^{-1}$. Equivalently, the face vector of $P$ can be expressed as \begin{equation}\label{fh} \textbf{f}=\textbf{h}^{(sc)}\cdot H \end{equation} \begin{lemma}\label{egy} Let $d$ be a positive integer and let $H$ be the matrix defined in \text{\textnormal{(\ref{fhH})}}. \vspace{1.5mm}Then \begin{enumerate} \item[(\textit{i})] $H(i,i)<H(i,i+1)<\cdots <H(i, \lfloor\frac{d+2i}{3}\rfloor-1)\vspace{2mm}\leq H(i, \lfloor\frac{d+2i}{3}\rfloor)>\cdots >H(i,d-1)$ \\for $0\leq i \vspace{1.5mm}\leq d-1$ \item[(\textit{ii})]$H(i,j)=H(i+1,j)+2H(i+1,j+1)$ for $0\leq i,j \leq d-2$ \end{enumerate} \end{lemma} \begin{proof}Denote the $i^{th}$ row of $H$ by $H(i,*)$. Note that for all $0\leq i \leq d-1$, $H(i,*)$ can be viewed as the concatenation of two vectors: the null vector (with $i$ components) followed by $2^{1-d}\cdot \textbf{f}(C^{d-i-1})$, where $\textbf{f}(C^{d-i-1})$ is the face vector (supplemented with the last component $f_{d-i-1}=1$) of a $(d-i-1)$-dimensional cube. Therefore, the inequalities of $(i)$ follow from (\ref{szig}). Thus, $H(i,*)$ is unimodal and peaks at $\lfloor \frac{d-i-1+1}{3} \rfloor+i=\lfloor \frac{d-i}{3} \rfloor+i=\lfloor \frac{d+2i}{3} \rfloor$ and also at $\lfloor \frac{d-i-1}{3} \rfloor+i=\lfloor \frac{d+2i-1}{3} \rfloor$ for all $0\leq i \leq d-1$. The recursion of $(ii)$ follows from Pascal's rule, $\binom{n}{k}= \binom{n-1}{k-1}+\binom{n-1}{k}$.
\end{proof} \begin{remark}\label{rem} Alternatively, we can separate three cases as follows: let $j\in \{0,1,2\}$ be such that $d-i\equiv j \mod 3$; then $H(i,*)$ peaks at $\frac{d+2i-j}{3}$. If $j=0$, then it also peaks at $\frac{d+2i-j}{3}-1$. The vector $H(i,*)$ has no more peaks. \end{remark} % % % The following lemma can be verified through a case-by-case analysis. The proof is based on the inequalities $(i)$ and the recursion $(ii)$ of Lemma \ref{egy}, together with some elementary properties of binomial coefficients; we omit the details. Alternatively, it can be proved by adapting the methods (with some necessary modifications) applied by Bj\"orner in the proofs of Lemma 6 and Lemma 7 of \cite{bj1}. % \begin{lemma}\label{bi} Let $d$ be a positive integer and $0\leq i\leq k\leq d-1$. Let $H$ be the matrix defined in \text{\textnormal{(\ref{fhH})}}. Let $a_j=H(i,j)+H(k,j)$ for $0\leq j \leq d-1$. Then $$a_0<\cdots <a_{\lfloor\frac{d+2i}{3}\rfloor-1}\vspace{0mm}\leq a_{ \lfloor\frac{d+2i}{3}\rfloor}>\cdots >a_{d-1}$$ \end{lemma} % The following important lemma is needed to prove Theorem \ref{thp} and Theorem \ref{kis}. Statement $(ii)$ is a brief formulation of the Cubical Dehn--Sommerville Equations. \begin{lemma}[Adin \cite{adi}, Lemma 1, Corollary 10]\label{ad1} Let $P$ be a cubical $d$-polytope. Then \begin{enumerate} \item[(\textit{i})] all the components of $\textbf{h}^{(sc)}(P)$ are positive integers, \item[(\textit{ii})] $\textbf{h}^{(sc)}(P)$ is symmetric: $h_i^{(sc)}=h_{d-i-1}^{(sc)}$, $(0\leq i \leq d-1),$ \item[(\textit{iii})] $\textbf{h}^{(sc)}(P)$ is unimodal.
\end{enumerate} \vspace{-1mm}\end{lemma} \section{Capped and neighborly cubical polytopes}\label{cnc} There is a cubical analogue of the simplicial stacking operation, the so-called \textit{capping operation}, described by Jockusch \cite{joco} as follows: let $Q$ be a cubical $d$-polytope; then the polytope $P$ is called a \textit{capped polytope over} $Q$ if there is a $d$-cube $C$ such that $P = Q \cup C$ and $Q \cap C$ is a facet of both $Q$ and $C$. Roughly speaking, $P$ is obtained by gluing a cube onto a facet of $Q$. A polytope $P$ is said to be an $n$\textit{-fold capped polytope over} $Q$ if there is a sequence $P_0, P_1,\ldots,P_n$ ($1\leq n$) of polytopes such that for $i=0,\ldots,n-1$ \begin{enumerate} \item[(\textit{i})] $P_{i+1}$ is a capped polytope over $P_i$, \item[(\textit{ii})] $P_0=Q$ and $P_n=P$. \end{enumerate} For $n=0$, the $n$-fold capped polytope over $Q$ is $Q$ itself. A polytope is said to be $n$\textit{-fold capped} (or simply \textit{capped}) if it is an $n$-fold capped polytope over a cube. Capped polytopes are the cubical analogues of the (simplicial) stacked polytopes. Since the capping operation destroys a cubical facet while creating $2d-1$ new ones, it increases the component $f_k$ of the face vector by $2^{d-k}\binom{d}{k}-2^{d-k-1}\binom{d-1}{k}$ if $0\leq k \leq d-2$ and by $2(d-1)$ if $k=d-1$. Hence, if $P$ is an $n$-fold capped polytope over $Q$, then we have \begin{equation}\label{cap} f_k(P)=\left\{ \begin{array}{ll} f_k(Q)+n\Big(2^{d-k}\binom{d}{k}-2^{d-k-1}\binom{d-1}{k}\Big) & \hspace{3mm}\vspace{0mm}\textrm{if $0\leq k \leq d-2$} \\ f_k(Q)+2n(d-1) & \hspace{3mm}\textrm{if $k=d-1$} \end{array} \right. \end{equation} By using (\ref{cap}) and the fact that the face vector of the $d$-cube is unimodal and peaks at $\lfloor \frac{d+1}{3} \rfloor$, it is not difficult to show that the face vector of a capped $d$-polytope is also unimodal and also peaks at $\lfloor \frac{d+1}{3} \rfloor$.
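The increments in (\ref{cap}) can be tabulated directly. The sketch below (helper name ours) confirms, for $d=12$, the values that will matter in Section \ref{non}, and that the increment vector peaks at $\lfloor \frac{d+1}{3} \rfloor=4$:

```python
from math import comb

def capping_increment(d):
    # increase of f_k under one capping, as in (4) above:
    # 2^(d-k) C(d,k) - 2^(d-k-1) C(d-1,k) for k <= d-2, and 2(d-1) for k = d-1
    delta = [2 ** (d - k) * comb(d, k) - 2 ** (d - k - 1) * comb(d - 1, k)
             for k in range(d - 1)]
    delta.append(2 * (d - 1))  # 2d-1 new facets are created, one is destroyed
    return delta

delta = capping_increment(12)
assert delta[11] == 22                  # 2(d-1) for the facets
assert delta.index(max(delta)) == 4     # the increment vector peaks at floor(13/3) = 4
assert delta[4] == 84480 and delta[5] == 71808 and delta[6] == 44352
```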
The \textit{$k$-skeleton} of a $d$-polytope is the union of its $k$-dimensional faces. A cubical $d$-polytope (with $2^n$ vertices for some $n\geq d$) is called \textit{neighborly cubical} provided its $(\lfloor \frac{d}{2} \rfloor -1)$-skeleton is combinatorially equivalent to the $(\lfloor \frac{d}{2} \rfloor -1)$-skeleton of a cube. The concept of neighborly cubical polytopes was introduced by Babson, Billera and Chan \cite{bill}. Neighborly cubical polytopes can be considered as the cubical analogues of the (simplicial) cyclic polytopes. It was proved in \cite{maj1} that the Unimodality Conjecture holds for neighborly cubical polytopes. The $f$-vector of a neighborly cubical polytope is determined by its dimension and its number of vertices, and it is given by (see \cite{maj1}) \begin{displaymath} f_k=\left\{ \begin{array}{ll} 2^{n-k}\displaystyle\sum_{i=0}^{\frac{d-2}{2}} \textstyle \Big(\binom{d-i-1}{k-i}+\binom{i}{k-d+i+1}\Big)\binom{n-d+i}{i} & \textrm{if $d$ is even} \\ 2^{n-k}\Bigg(\displaystyle\sum_{i=0}^{\frac{d-3}{2}} \textstyle \Big(\binom{d-i-1}{k-i}+\binom{i}{k-d+i+1}\Big)\binom{n-d+i}{i}+\displaystyle\sum_{j=0}^{n-d}2^{-j}\textstyle\binom{\frac{d-1}{2}}{d-k-1} \binom{n-\frac{d+3}{2}-j}{n-d-j}\Bigg) & \textrm{if $d$ is odd} \end{array} \right. \end{displaymath} By using this explicit formula, it can be shown that the $f$-vector of a neighborly cubical polytope peaks approximately at $\lfloor \frac{2d}{3} \rfloor$ if $n$ is large enough.\vspace{2mm} \section{Partial unimodality for cubical polytopes}\label{par} In this section, we show that the maximal component of the face vector of a cubical polytope can occur only in the middle one-third part (i.e.\ between $\lfloor \frac{d}{3} \rfloor$ and $\lfloor \frac{2d}{3} \rfloor$); furthermore, the violation of the Unimodality Conjecture is possible only in this part.
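The peak locations promised by Lemma \ref{egy} and Lemma \ref{bi}, which drive the results of this section, can be verified numerically over exact rationals; a sketch (helper name ours):

```python
from fractions import Fraction
from math import comb

def H_row(d, i):
    # H(i, j) = 2^(-j) * C(d-i-1, d-j-1), computed exactly over the rationals
    return [Fraction(comb(d - i - 1, d - j - 1), 2 ** j) for j in range(d)]

for d in (10, 11, 12):
    H = [H_row(d, i) for i in range(d)]
    # recursion (ii): H(i, j) = H(i+1, j) + 2 H(i+1, j+1)
    for i in range(d - 1):
        for j in range(d - 1):
            assert H[i][j] == H[i + 1][j] + 2 * H[i + 1][j + 1]
    # inequalities (i): row i attains its maximum at floor((d + 2i)/3)
    for i in range(d):
        assert H[i][(d + 2 * i) // 3] == max(H[i])
    # the sums H(i,*) + H(d-1-i,*) also peak at floor((d + 2i)/3),
    # which always lies between floor(d/3) and floor(2d/3)
    for i in range((d - 1) // 2 + 1):
        b = [x + y for x, y in zip(H[i], H[d - 1 - i])] if 2 * i != d - 1 else H[i]
        p = (d + 2 * i) // 3
        assert b[p] == max(b) and d // 3 <= p <= (2 * d) // 3
```

Using `Fraction` avoids any floating-point ambiguity in the comparisons of the dyadic entries $2^{-j}\binom{d-i-1}{d-j-1}$.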
\begin{theorem}[partial unimodality for cubical polytopes]\label{thp} Let $P$ be a cubical $d$-polytope with face vector $(f_0,\ldots,f_{d-1})$. Then \begin{enumerate} \item[(\textit{i})]$f_0<\cdots <f_{\lfloor \frac{d}{3} \rfloor-1}\leq f_{\lfloor \frac{d}{3} \vspace{1mm}\rfloor}$ \item[(\textit{ii})] $f_{\lfloor \frac{2d}{3} \rfloor}>\cdots>f_{d-1}$ \end{enumerate} \end{theorem} \begin{proof}Let $H$ be the matrix defined in \text{\textnormal{(\ref{fhH})}}. For $0\leq i\leq {d-1}$, denote the $i^{th}$ row of $H$ by $H(i,*)$. For $0\leq i\leq \lfloor \frac{d-1}{2}\rfloor$, define the vectors $\textbf{b}^i$ by \begin{displaymath} \textbf{b}^i=\left\{ \begin{array}{ll} H(i,*)+H(d-i-1,*) & \hspace{3mm}\textrm{if $2i\neq d-1$} \vspace{2mm}\\ H(i,*) & \hspace{3mm}\textrm{if $2i=d-1$} \end{array} \right. \end{displaymath} Using this notation and the symmetry of the short cubical $h$-vector (see $(ii)$ of Lemma \ref{ad1}), the relation (\ref{fh}) can be rewritten as $$\textbf{f}(P)=\sum^{\lfloor\frac{d-1}{2}\rfloor}_{i=0}h^{(sc)}_i\textbf{b}^i$$ It follows from Lemma \ref{bi} that $\textbf{b}^i$ is unimodal and peaks at $\lfloor \frac{d+2i}{3} \rfloor$ for all $0\leq i \leq \lfloor\frac{d-1}{2} \rfloor$. Furthermore, we have $$\textbf{b}^i_i<\cdots <\textbf{b}^i_{\lfloor\frac{d+2i}{3}\rfloor-1}\leq \textbf{b}^i_{ \lfloor\frac{d+2i}{3}\rfloor}>\cdots >\textbf{b}^i_{d-1}$$ Therefore, all the vectors $\textbf{b}^i$ peak between $\lfloor \frac{d+2\cdot 0}{3} \rfloor=\lfloor \frac{d}{3} \rfloor$ and $\lfloor \frac{d+2\lfloor \frac{d}{2} \rfloor}{3} \rfloor=\lfloor \frac{2d}{3} \rfloor$, and $$\textbf{b}^i_0\leq\cdots \leq\textbf{b}^i_{\lfloor\frac{d}{3}\rfloor-1}\leq \textbf{b}^i_{ \lfloor\frac{d}{3}\rfloor} \text{ and }\textbf{b}^i_{ \lfloor\frac{2d}{3}\rfloor}>\cdots >\textbf{b}^i_{d-1}$$ for all $0\leq i\leq \lfloor \frac{d-1}{2}\rfloor$.
Furthermore, $\textbf{b}^i_0<\cdots <\textbf{b}^i_{\lfloor\frac{d}{3}\rfloor-1}\leq \textbf{b}^i_{ \lfloor\frac{d}{3}\rfloor}$ for $i=0$. Consequently, $\textbf{f}(P)$ has the stated property, since $\textbf{f}(P)$ is a linear combination of the vectors $\textbf{b}^i$ with positive integer coefficients (see $(i)$ of Lemma \ref{ad1}). \end{proof} \begin{remark}\label{r2} In fact, we can say a little more about $\textbf{f}(P)$. Namely, it is easily checked that if $d\equiv j \mod 6 $ for some $j\in \{0,2,3\}$, then $\textbf{b}^{\lfloor \frac{d}{2} \rfloor}$ peaks at $\lfloor \frac{2d}{3} \rfloor-1$. Consequently, the sequence $(f_{\lfloor \frac{2d}{3} \rfloor-1},f_{\lfloor \frac{2d}{3} \rfloor},\ldots,f_{d-1})$ is strictly decreasing if the dimension of $P$ is congruent to $0$, $2$ or $3$ modulo $6$.\end{remark} \section{Non-unimodal $f$-vectors}\label{non} According to Theorem \ref{thp}, capped polytopes and neighborly cubical polytopes are extremal among all cubical polytopes, in the sense that the maximal components of their $f$-vectors occur as far from the middle as possible. Since the peaks of the $f$-vectors of a capped and a neighborly cubical polytope are situated as far away from each other as possible, it is natural to use these two families when constructing counterexamples to the Unimodality Conjecture for cubical polytopes. In fact, most of the non-cubical counterexamples were also based on this idea. For instance, Danzer used stacked polytopes (with peaks at $\lfloor \frac{d}{2} \rfloor$) and crosspolytopes (with peaks at $\lfloor \frac{2d}{3} \rfloor$) for his first counterexamples (according to Ziegler \cite[Section 2.1]{zi1}). According to Theorem \ref{thp} and Remark \ref{r2}, for all cubical $12$-polytopes, $$f_0<\cdots <f_3\leq f_4 \hspace{2mm} \text{and} \hspace{2mm} f_7>\cdots >f_{11}$$ Therefore, the possible positions for a dip are $5$ and $6$.
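The construction described in the rest of this section can be verified numerically, assuming the even-dimensional $f$-vector formula of Section \ref{cnc} and the increments of (\ref{cap}); we read the capping count $1.841\cdot 10^{42}$ as the exact integer $1841\cdot 10^{39}$, which reproduces the vertex count of the theorem below. A sketch:

```python
from math import comb

def c(n, k):                      # binomial that vanishes outside 0 <= k <= n
    return comb(n, k) if 0 <= k <= n else 0

d, n = 12, 131
# f-vector of the neighborly cubical 12-polytope with 2^131 vertices (even-d formula)
ncp = [2 ** (n - k) * sum((c(d - i - 1, k - i) + c(i, k - d + i + 1)) * c(n - d + i, i)
                          for i in range(d // 2))
       for k in range(d)]
assert ncp[0] == 2 ** n           # 2^131 vertices, as required

# per-capping increase of each f_k
delta = [2 ** (d - k) * comb(d, k) - 2 ** (d - k - 1) * comb(d - 1, k)
         for k in range(d - 1)] + [2 * (d - 1)]

# after N cappings the f-vector is ncp + N * delta; a dip f_4 > f_5 < f_6 needs
#   (f_5 - f_4)/(delta_4 - delta_5)  <  N  <  (f_6 - f_5)/(delta_5 - delta_6)
lo = (ncp[5] - ncp[4]) // (delta[4] - delta[5]) + 1
hi = (ncp[6] - ncp[5]) // (delta[5] - delta[6])
assert lo <= hi                   # the window of admissible capping counts is non-empty

N = 1841 * 10 ** 39               # the 1.841 * 10^42 cappings used below
f = [ncp[k] + N * delta[k] for k in range(d)]
assert f[4] > f[5] < f[6]         # the face vector dips at 5
assert str(f[0]).startswith("3770370722")   # about 3.770370722 * 10^45 vertices
```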
The starting point of our construction is a neighborly cubical $12$-polytope, to which we apply the capping operation. Due to (\ref{cap}), each capping adds a vector with peak at $\lfloor \frac{d}{3} \rfloor$ to a vector whose peak is at $\lfloor \frac{2d}{3} \rfloor$. Joswig and Ziegler proved in \cite{jos} that there exists a $d$-dimensional neighborly cubical polytope with $2^n$ vertices for any $n\geq d\geq 2$; they constructed neighborly cubical polytopes as linear projections of cubes. For the base of our counterexample, we choose a neighborly cubical $12$-polytope with $2^{131}$ vertices, to which we apply the capping operation $1.841\cdot 10^{42}$ times. We then obtain a cubical polytope with non-unimodal $f$-vector. By using (\ref{cap}) and the formula for the $f$-vector of neighborly cubical polytopes (see Section \ref{cnc}), it can be computed that the $f$-vector of this polytope indeed dips at $5$. \begin{theorem} There exists a $12$-dimensional cubical polytope with $3.770370722\cdot 10^{45}$ vertices for which $f_4 >f_5 < f_6$. Consequently, the Unimodality Conjecture fails for cubical polytopes in general. \end{theorem} As a matter of historical interest, we mention that Lee's simplicial counterexample was one of the first applications of the sufficiency part of the famous $g$-theorem (see Billera and Lee, Corollary 2 in \cite{bil}). \section{Unimodality for small dimensional cubical polytopes}\label{small} Using the relations between the $f$-vectors and the $g$-vectors of simplicial polytopes, Eckhoff \cite{eck} has shown that the $f$-vector of a simplicial polytope is unimodal if its dimension is less than $20$. We prove the following statement in a similar way. \begin{theorem}\label{kis} The $f$-vectors of cubical $d$-polytopes are unimodal for all $d\leq 10$. \end{theorem} \begin{proof} First let $P$ be a cubical $10$-polytope with $f$-vector $\textbf{f}=(f_0,\ldots,f_9)$.
It follows from Theorem \ref{thp} that $f_0<f_1< f_2\leq f_3$ and $f_6>\ldots> f_9$. Hence, $\textbf{f}$ can possibly dip only at $4$ or $5$. Using Lemma \ref{ad1} and the relation $\textbf{f}=\textbf{h}^{(sc)}\cdot H$, one can show that each of the assumptions $$f_3>f_4<f_5,\hspace{1mm} f_3>f_4<f_6, \hspace{1mm}f_3>f_5<f_6 \hspace{1mm}\text{ and }\hspace{1mm} f_4>f_5<f_6$$ leads to a contradiction. Consequently, the face vector of $P$ is indeed unimodal. For $d<10$, similar reasoning completes the proof. \end{proof} The method of the above proof does not lead to an analogous result in the case $d=11$. Although we could construct a symmetric unimodal vector $\textbf{v}$ (whose components are positive integers) such that $\textbf{v}\cdot H$ is non-unimodal, we could not guarantee that there exists an $11$-polytope whose short cubical $h$-vector equals $\textbf{v}$. Since a complete combinatorial characterization of cubical polytopes is not known, the following question remains open: \begin{question}Is there any cubical $11$-polytope with non-unimodal face vector? \end{question} To violate unimodality in dimension $d=11$, the candidate playing the role of the $h$-vector must have an ``outlandish shape'': its middle component must be relatively large compared to the other components. Hence, in view of the characterization of simplicial polytopes, one may believe that the answer to the above question is negative.
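Both halves of this discussion can be explored computationally. If, as in the proof sketch above, only the properties of Lemma \ref{ad1} are used, then for $d=10$ every positive, symmetric, unimodal integer vector must yield a unimodal product with $H$; for $d=11$, a symmetric unimodal vector with a sufficiently large middle entry already produces a non-unimodal product. A sketch (the particular $d=11$ vector is our choice, not necessarily the one alluded to above):

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import comb

def H_matrix(d):
    return [[Fraction(comb(d - i - 1, d - j - 1), 2 ** j) for j in range(d)]
            for i in range(d)]

def is_unimodal(v):
    i = 0
    while i + 1 < len(v) and v[i] <= v[i + 1]:
        i += 1
    while i + 1 < len(v) and v[i] >= v[i + 1]:
        i += 1
    return i == len(v) - 1

def f_from_h(h, H):
    d = len(h)
    return [sum(h[i] * H[i][j] for i in range(d)) for j in range(d)]

# d = 10: every positive symmetric unimodal h gives a unimodal product
# (small exhaustive check; a symmetric unimodal vector is nondecreasing up to the middle)
H10 = H_matrix(10)
for half in combinations_with_replacement(range(1, 5), 5):   # h_0 <= ... <= h_4
    h = list(half) + list(half)[::-1]                        # symmetric, unimodal
    assert is_unimodal(f_from_h(h, H10))

# d = 11: a symmetric unimodal positive integer vector whose product with H is
# NOT unimodal -- a large middle entry suffices here
H11 = H_matrix(11)
v = [1, 1, 1, 1, 1, 180, 1, 1, 1, 1, 1]
f = f_from_h(v, H11)
assert f[4] > f[5] < f[6]        # the product dips at 5
assert not is_unimodal(f)
```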
https://arxiv.org/abs/1501.00430
The Unimodality Conjecture for cubical polytopes
https://arxiv.org/abs/2205.03487
Twist monomials of binary delta-matroids
Recently, we introduced the twist polynomials of delta-matroids, gave a characterization of even normal binary delta-matroids whose twist polynomials have only one term, and posed a problem: what happens for odd binary delta-matroids? In this paper, we show that the twist polynomial of a normal binary delta-matroid has only one term if and only if each connected component of the intersection graph of the delta-matroid is either a complete graph of odd order or a single vertex with a loop.
\section{Introduction} The partial dual with respect to a subset $A$ of edges of a ribbon graph $G$ was introduced by Chmutov \cite{CG} in connection with the Jones-Kauffman and Bollob\'{a}s-Riordan polynomials. In 2020, Gross, Mansour and Tucker \cite{GMT} introduced the partial duality polynomial of a ribbon graph, the generating function enumerating partial duals by Euler genus, and proposed the following conjecture. \begin{conjecture}[\cite{GMT}]\label{con1} There is no orientable ribbon graph having a non-constant partial duality polynomial with only one non-zero coefficient. \end{conjecture} In \cite{QYXJ}, we found an infinite family of counterexamples to this conjecture; essentially, these are the only counterexamples \cite{SCFV, QYXJ2}. Chmutov and Vignes-Tourneret \cite{SCFV} also remarked that it would be interesting to know whether the partial duality polynomial and the related conjectures make sense for general delta-matroids. In \cite{QYXJ3}, we showed that partial duality polynomials have delta-matroid analogues: we introduced the twist polynomials of delta-matroids and discussed their basic properties. We gave a characterization of even normal binary delta-matroids with one-term twist polynomials and posed the following problem: \begin{problem}[\cite{QYXJ3}] What would happen for odd binary delta-matroids with only one-term twist polynomials? \end{problem} In this paper we answer this problem; the main result is the following characterization of normal binary delta-matroids whose twist polynomials have only one term. ~ \noindent {\bf Main Theorem.} Let $D=(E, \mathcal{F})$ be a normal binary delta-matroid. Then $^{\partial}w_{D}(z)=mz^k$ if and only if each connected component of $G_{D}$ is either a complete graph of odd order or a single vertex with a loop. \section{Preliminaries} We give a brief review of delta-matroids and related terminology, and refer the reader to \cite{AB1, CISR, CMNR, JO} for further details.
A \emph{set system} is a pair $D=(E, \mathcal{F})$, where $E$ (or $E(D)$) is a finite set, called the \emph{ground set}, and $\mathcal{F}$ (or $\mathcal{F}(D)$) is a collection of subsets of $E$, called \emph{feasible sets}. $D$ is \emph{proper} if $\mathcal{F}\neq \emptyset$, \emph{trivial} if $E=\emptyset$, and \emph{normal} if $\mathcal{F}$ contains the empty set. $D$ is said to be \emph{even} if the cardinalities of the sets in $\mathcal{F}$ all have the same parity; otherwise, we call $D$ \emph{odd}. Bouchet \cite{AB1} introduced delta-matroids as follows. \begin{definition}[\cite{AB1}] A \emph{delta-matroid} is a proper set system $D=(E, \mathcal{F})$ which satisfies the symmetric exchange axiom: for all triples $(X, Y, u)$ with $X, Y \in \mathcal{F}$ and $u\in X\Delta Y$, there is a $v\in X\Delta Y$ (possibly $v=u$) such that $X\Delta \{u, v\}\in \mathcal{F}$. \end{definition} Here and below $\Delta$ denotes the symmetric difference operation on pairs of sets, $$X\Delta Y:=(X\cup Y)\backslash (X\cap Y),$$ and $|A|$ denotes the cardinality of a finite set $A$. Let $D=(E, \mathcal{F})$ be a delta-matroid. If $|F_{1}|=|F_{2}|$ for all $F_{1}, F_{2}\in \mathcal{F}$, then $D$ is said to be a \emph{matroid}, and we refer to $\mathcal{F}$ as its set of \emph{bases}. If a delta-matroid forms a matroid $M$, then we usually denote $M$ by $(E, \mathcal{B})$. The \emph{rank} of $M$, written $r(M)$, is equal to $|B|$ for any $B\in\mathcal{B}(M)$. For a delta-matroid $D=(E, \mathcal{F})$, let $\mathcal{F}_{max}(D)$ and $\mathcal{F}_{min}(D)$ be the collections of sets in $\mathcal{F}(D)$ that have the maximum and minimum cardinality among sets in $\mathcal{F}(D)$, respectively. Bouchet \cite{AB2} showed that $D_{max}:=(E, \mathcal{F}_{max})$ and $D_{min}:=(E, \mathcal{F}_{min})$ are both matroids. $D_{min}$ is called the \emph{lower matroid}, and $D_{max}$ is called the \emph{upper matroid}.
The \emph{width} of $D$, denoted by $w(D)$, is defined by $$w(D):=r(D_{max})-r(D_{min}).$$ We observe that $w(D)=r(D_{max})$ for a normal delta-matroid $D$. In 1987, Bouchet \cite{AB1} introduced a fundamental operation on delta-matroids called the twist. Given a delta-matroid $D=(E, \mathcal{F})$ and a subset $A$ of $E$, the \emph{twist} of $D$ with respect to $A$, denoted by $D*A$, is given by $$(E, \{A\Delta X: X\in \mathcal{F}\}).$$ The \emph{dual} of $D$, written $D^{*}$, is equal to $D*E$. Observe that the twist of a delta-matroid is a delta-matroid \cite{AB1}. \begin{definition}[\cite{QYXJ3}] The \emph{twist polynomial} of a delta-matroid $D=(E, \mathcal{F})$ is the generating function $$^{\partial}w_{D}(z):=\sum_{A\subseteq E}z^{w(D*A)}$$ that enumerates all twists of $D$ by width. \end{definition} In particular, a one-term twist polynomial is called a \emph{twist monomial}. Observe that analyzing the twist polynomials of all delta-matroids is equivalent to analyzing those of normal delta-matroids \cite{QYXJ3}; consequently, it suffices to consider normal delta-matroids. \begin{definition}[\cite{CMNR}] For delta-matroids $D=(E, \mathcal{F})$ and $\widetilde{D}=(\widetilde{E}, \widetilde{\mathcal{F}})$ with $E\cap \widetilde{E}=\emptyset$, the \emph{direct sum} of $D$ and $\widetilde{D}$, written $D\oplus \widetilde{D}$, is the delta-matroid defined as $$D\oplus \widetilde{D}:=(E\cup \widetilde{E}, \{F\cup \widetilde{F}: F\in \mathcal{F}~\text{and}~\widetilde{F}\in \widetilde{\mathcal{F}}\}).$$ \end{definition} A delta-matroid is \emph{disconnected} if it can be written as $D\oplus \widetilde{D}$ for some non-trivial delta-matroids $D$ and $\widetilde{D}$, and \emph{connected} otherwise. Let $D=(E, \mathcal{F})$ be a delta-matroid. An element $e\in E$ is a \emph{coloop} if for each $F\in \mathcal{F}$ we have $e\in F$, and it is a \emph{loop} if for any $F\in \mathcal{F}$ we have $e\notin F$.
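The definitions above are easy to experiment with on small ground sets. The following brute-force sketch (feasible sets encoded as Python \texttt{frozenset}s; helper names ours) computes twist polynomials as coefficient tables:

```python
from itertools import chain, combinations
from collections import Counter

def subsets(E):
    E = list(E)
    return chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))

def width(F):
    sizes = [len(X) for X in F]
    return max(sizes) - min(sizes)

def twist_polynomial(E, F):
    # Counter {k: m} encoding the polynomial sum of m * z^k over all A subset of E;
    # X ^ A is the symmetric difference of frozensets
    poly = Counter()
    for A in subsets(E):
        A = frozenset(A)
        poly[width({X ^ A for X in F})] += 1
    return poly

# D = ({1}, {emptyset, {1}}): every twist has width 1, so the polynomial is 2z
assert twist_polynomial({1}, {frozenset(), frozenset({1})}) == Counter({1: 2})

# D = ({1,2}, {emptyset, {1,2}}): the polynomial 2z^2 + 2 is not a monomial
assert twist_polynomial({1, 2}, {frozenset(), frozenset({1, 2})}) == Counter({2: 2, 0: 2})
```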
\begin{definition}[\cite{CMNR}] Let $D=(E, \mathcal{F})$ be a delta-matroid. Take $e\in E$. Then \begin{description} \item[(1)] $e$ is a \emph{ribbon loop} if $e$ is a loop in $D_{min}$; \item[(2)] A ribbon loop $e$ is \emph{non-orientable} if $e$ is a ribbon loop in $D*e$ and is \emph{orientable} otherwise. \end{description} \end{definition} Let $D=(E, \mathcal{F})$ be a delta-matroid and $e\in E$. Then $D$ \emph{delete} by $e$, denoted $D\backslash e$, is defined as $D\backslash e:=(E\backslash e, \mathcal{F}')$, where \[\mathcal{F}':=\left\{\begin{array}{ll} \{F: F\in \mathcal{F}, F\subseteq E\backslash e\}, & \text{if $e$ is not a coloop,}\\ \{F\backslash e: F\in \mathcal{F}\}, & \text{if $e$ is a coloop}. \end{array}\right.\] $D$ \emph{contract} by $e$, denoted $D/ e$, is defined as $D/ e:=(E\backslash e, \mathcal{F}'')$, where \[\mathcal{F}'':=\left\{\begin{array}{ll} \{F\backslash e: F\in \mathcal{F}, e\in F\}, & \text{if $e$ is not a loop,}\\ \mathcal{F}, & \text{if $e$ is a loop}. \end{array}\right.\] Note that $D^{*}/e=(D\setminus e)^{*}$ \cite{CMNR}. Bouchet \cite{AB1} has shown that the order in which deletions are performed does not matter. Let $D=(E, \mathcal{F})$ be a delta-matroid and $A\subseteq E$. We define $D\setminus A$ as the result of deleting every element of $A$ in any order. The complement of $A\subseteq E$ is $A^c := E\setminus A$. The \emph{restriction} of $D$ to $A$, written $D|_{A}$, is the set system $D\setminus A^{c}$. Throughout the paper, we will often omit the set brackets in the case of a single element set. For example, we write $D*e$ instead of $D*\{e\}$, or $D|_{e}$ instead of $D|_{\{e\}}$. For a finite set $E$, let $C$ be a symmetric $|E|\times|E|$ matrix over $GF(2)$, with rows and columns indexed, in the same order, by the elements of $E$. Let $C[A]$ be the principal submatrix of $C$ induced by the set $A\subseteq E$. 
We define the set system $D(C)=(E, \mathcal{F})$ with $$\mathcal{F}:=\{A\subseteq E: C[A] \mbox{ is non-singular}\}.$$ By convention $C[\emptyset]$ is non-singular. Bouchet \cite{AB4} showed that $D(C)$ is a normal delta-matroid. A delta-matroid is said to be \emph{binary} if it has a twist that is isomorphic to $D(C)$ for some symmetric matrix $C$ over $GF(2)$. In particular, if $D=(E, \mathcal{F})$ is a normal binary delta-matroid, then there exists a unique symmetric $|E|\times|E|$ matrix $C$ over $GF(2)$, whose rows and columns are labelled (in the same order) by the set $E$ such that $D=D(C)$. In fact, the matrix $C$ can be constructed as follows \cite{AB3, Moff}: \begin{description} \item [(1)] Set $C_{v, v}=1$ if and only if $\{v\}\in \mathcal{F}$. This determines the diagonal entries of $C$; \item [(2)] Set $C_{u,v}=1$ if and only if $\{u\}, \{v\}\in \mathcal{F}$ but $\{u, v\}\notin \mathcal{F}$, or $\{u, v\}\in \mathcal{F}$ but $\{u\}$ and $\{v\}$ are not both in $\mathcal{F}$. Then the feasible sets of size two determine the off-diagonal entries of $C$. \end{description} Observe that the construction above gives a way to define the intersection graph of any normal binary delta-matroid $D$ with respect to the unique matrix $C$ of $D$. The \emph{intersection graph} \cite{Moff} $G_{D}$ of a normal binary delta-matroid $D$ is the graph with the vertex set $E$ and in which two vertices $u$ and $v$ of $G_{D}$ are adjacent if and only if $C_{u, v}=1$ and there is a loop at $v$ if and only if $C_{v, v}=1$. Note that $D$ is connected if and only if $G_{D}$ is connected. \section{The Proof of the Main Theorem} \begin{proposition}[\cite{QYXJ3}]\label{pro 2} Let $D=(E, \mathcal{F})$ and $\widetilde{D}=(\widetilde{E}, \widetilde{\mathcal{F}})$ be two delta-matroids and $A\subseteq E$. 
Then \begin{description} \item [(1)] $^{\partial}w_{D}(z)=~^{\partial}w_{D*A}(z);$ \item [(2)] $^{\partial}w_{D\oplus \widetilde{D}}(z)=~^{\partial}w_{D}(z)~^{\partial}w_{\widetilde{D}}(z).$ \end{description} \end{proposition} \begin{lemma}[\cite{CMNR}]\label{lem 2} Let $D=(E, \mathcal{F})$ be a delta-matroid with $r(D_{min})=r$ and suppose that $e$ is a non-orientable ribbon loop of $D$. Then a subset $F$ of $E-e$ is a basis of $D_{min}$ if and only if $F\cup e$ is a feasible set of $D$ with cardinality $r+1$. \end{lemma} \begin{lemma}\label{lem 6} Let $D=(E, \mathcal{F})$ be a delta-matroid. If $e$ is a non-orientable ribbon loop and $f$ is a non-ribbon loop of $D$, then $f$ is a non-ribbon loop of $D/e$. \end{lemma} \begin{proof} Since $e$ is a ribbon loop of $D$, it follows that $e$ is a loop in $D_{min}$. Then for any $F\in \mathcal{F}_{min}(D)$, $e\notin F$. Furthermore, since $f$ is a non-ribbon loop of $D$, there exists $F'\in \mathcal{F}_{min}(D)$ such that $f\in F'$ and $e\notin F'$. Thus $F'\cup e\in \mathcal{F}(D)$ by Lemma \ref{lem 2}. We observe that $F'\in \mathcal{F}_{min}(D/e)$. Thus $f$ is not a loop in $(D/e)_{min}$ and hence $f$ is a non-ribbon loop of $D/e$. \end{proof} \begin{lemma}[\cite{QYXJ3}]\label{lem 5} Let $D=(E, \mathcal{F})$ be a normal delta-matroid and $A\subseteq E$. Then \[w(D*A)=w(D|_{A})+w(D|_{A^{c}}).\] \end{lemma} \begin{lemma} \label{lem 4} Let $D=(E, \mathcal{F})$ be a normal delta-matroid, and let $e\in E$ with $w(D)=w(D*e)$. \begin{description} \item[(1)] If $e$ is an orientable ribbon loop of $D$, then $e$ is a non-ribbon loop of $D^{*}$. \item[(2)] If $e$ is a non-orientable ribbon loop of $D$, then $e$ is a non-orientable ribbon loop of $D^{*}$. \end{description} \end{lemma} \begin{proof} {\bf (1)} Since $e$ is an orientable ribbon loop, we have $D|_{e}=(\{e\}, \{\emptyset\})$. Then $w(D|_{e})=0$. Note that $$w(D*e)=w(D|_{e})+w(D\backslash e)$$ by Lemma \ref{lem 5}. 
Since $w(D)=w(D*e)$, it follows that $w(D\backslash e)=w(D)$. Thus $r((D\backslash e)_{max})=r(D_{max})$ and hence there exists $F\in \mathcal{F}_{max}(D)$ such that $e\notin F$. Then $F^{c}\in \mathcal{F}_{min}(D^*)$ and $e\in F^{c}$. Therefore $e$ is a non-ribbon loop of $D^{*}$. ~ \noindent {\bf (2)} Since $e$ is a non-orientable ribbon loop, it follows that $D|_{e}=(\{e\}, \{\emptyset, \{e\}\})$. Then $w(D|_{e})=1$ and hence $w(D\backslash e)=w(D)-1$. Thus $$r((D\backslash e)_{max})=r(D_{max})-1.$$ Then for any $X\in \mathcal{F}_{max}(D)$, $e\in X$, and there exists $Y\in \mathcal{F}(D)$ such that $|Y|=r(D_{max})-1$ and $e\notin Y$. This means that for any $X'\in \mathcal{F}_{min}(D^*)$, $e\notin X'$, and there exists $Y'\in \mathcal{F}(D^*)$ such that $|Y'|=r({D^*}_{min})+1$ and $e\in Y'$. Thus $e$ is a ribbon loop of both $D^{*}$ and $D^{*}*e$. Hence $e$ is a non-orientable ribbon loop of $D^{*}$. \end{proof} \begin{theorem}\label{the 1} Let $D=(E, \mathcal{F})$ be a connected odd normal binary delta-matroid. Then $^{\partial}w_{D}(z)=mz^k$ if and only if $D=(\{1\}, \{\emptyset, \{1\}\})$. \end{theorem} \begin{proof} If $D=(\{1\}, \{\emptyset, \{1\}\})$, then $^{\partial}w_{D}(z)=2z$, which verifies the sufficiency. For necessity, we claim that $|E|=1$. Suppose not. Then we consider two claims as follows. \begin{description} \item[Claim 1.] For any $e, f\in E$, $D|_{\{e, f\}}\neq (\{e, f\}, \{\varnothing, \{e\}, \{e, f\}\})$. Suppose that Claim 1 is not true. Then there exist $e, f\in E$ such that $$D|_{\{e, f\}}= (\{e, f\}, \{\varnothing, \{e\}, \{e, f\}\}).$$ It is easy to verify that $e$ is a non-orientable ribbon loop and $f$ is an orientable ribbon loop of $D$. Since $^{\partial}w_{D}(z)=mz^k$, it follows that $$w(D)=w(D*e)=w(D*f).$$ Then $e$ is a non-orientable ribbon loop and $f$ is a non-ribbon loop of $D^{*}$ by Lemma \ref{lem 4}. Thus $f$ is a non-ribbon loop of $D^{*}/e$ by Lemma \ref{lem 6}.
Note that $D^{*}/e=(D\setminus e)^{*}$ and hence $f$ is a non-ribbon loop of $(D\setminus e)^{*}$. Then there exists $F\in \mathcal{F}_{min}((D\setminus e)^{*})$ such that $f\in F$. Let $F':=(E\backslash e)\backslash F$; then $F'\in \mathcal{F}_{max}(D\setminus e)$ and $e, f\notin F'$. Therefore $F'\in \mathcal{F}(D)$ by the definition of $D\backslash e$. For any $X\in \mathcal{F}_{max}(D)$, we observe that $e\in X$. Otherwise, $$r((D*e)_{max})=r(D_{max})+1.$$ Since $\{e\}\in \mathcal{F}(D)$, we have $\emptyset\in \mathcal{F}(D*e)$, that is, $r((D*e)_{min})=0$. Then $w(D*e)=w(D)+1$, which contradicts $^{\partial}w_{D}(z)=mz^k$. Thus $$r((D\backslash e)_{max})\leq r(D_{max})-1.$$ Furthermore, we observe that there exists $Y\in \mathcal{F}(D)$ such that $e\notin Y$ and $|Y|=r(D_{max})-1$. Otherwise, $$r((D*e)_{max})=r(D_{max})-1.$$ Then $w(D*e)=w(D)-1$, which also contradicts $^{\partial}w_{D}(z)=mz^k$. Thus $Y\in \mathcal{F}(D\backslash e)$ and hence $$r((D\backslash e)_{max})=r(D_{max})-1.$$ We obtain $|F'|=r(D_{max})-1$. Since $\emptyset, F'\cup \{e, f\}\in \mathcal{F}(D*\{e, f\})$, it follows that $$w(D*\{e, f\})\geq |F'\cup \{e, f\}|=r(D_{max})+1=w(D)+1,$$ which contradicts $^{\partial}w_{D}(z)=mz^k$. Hence Claim 1 is proved. \item[Claim 2.] For any $e, f\in E$, $D|_{\{e, f\}}\neq (\{e, f\}, \{\varnothing, \{e\}, \{f\}\})$. Suppose that Claim 2 is not true. Then there exist $e, f\in E$ such that $$D|_{\{e, f\}}= (\{e, f\}, \{\varnothing, \{e\}, \{f\}\}).$$ It is easily seen that $$D*e|_{\{e, f\}}=(\{e, f\}, \{\varnothing, \{e\}, \{e, f\}\}).$$ Note that $^{\partial}w_{D*e}(z)=~^{\partial}w_{D}(z)=mz^k$ by Proposition \ref{pro 2} (1), which contradicts Claim 1. Then Claim 2 follows. \end{description} Since $D$ is an odd normal binary delta-matroid, we know that $D=D(C)$ for some symmetric matrix $C$ over $GF(2)$ and there exists a non-orientable ribbon loop $e$.
As $D$ is connected and $|E|\geq 2$, there exists $f\in E$ such that \[C[\{e, f\}] = \bordermatrix{ & e & f \cr e & 1 & 1 \cr f & 1 & 0 \cr }\] or \[C[\{e, f\}] = \bordermatrix{ & e & f \cr e & 1 & 1 \cr f & 1 & 1 \cr }.\] Then $$D|_{\{e, f\}}= (\{e, f\}, \{\varnothing, \{e\}, \{e, f\}\})$$ or $$D|_{\{e, f\}}= (\{e, f\}, \{\varnothing, \{e\}, \{f\}\}),$$ which contradicts Claim 1 or Claim 2, respectively. Thus $|E|=1$. Since $D$ is an odd delta-matroid, it follows that $D=(\{1\}, \{\emptyset, \{1\}\})$. \end{proof} \begin{corollary}\label{cor 1} Let $D=(E, \mathcal{F})$ be a connected odd normal binary delta-matroid. Then $^{\partial}w_{D}(z)=mz^k$ if and only if $G_{D}$ is a single vertex with a loop. \end{corollary} \begin{proof} Since the intersection graph of $D=(\{1\}, \{\emptyset, \{1\}\})$ is a single vertex with a loop, the result follows immediately from Theorem \ref{the 1}. \end{proof} \begin{proposition}[\cite{QYXJ3}]\label{main-3} Let $D=(E, \mathcal{F})$ be a connected even normal binary delta-matroid. Then $^{\partial}w_{D}(z)=mz^k$ if and only if $G_{D}$ is a complete graph of odd order. \end{proposition} \begin{remark} The proof of the Main Theorem is straightforward by Propositions \ref{pro 2} (2), \ref{main-3} and Corollary \ref{cor 1}. \end{remark} \section*{Acknowledgements} This work is supported by NSFC (Nos. 12171402, 12101600) and the Fundamental Research Funds for the Central Universities (Nos. 20720190062, 2021QN1037).
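As a computational sanity check of the statements above (our own illustration, not part of the paper), delta-matroids can be handled directly with feasible sets stored as frozensets. The twist $D*A$ replaces every feasible set $F$ by $F\triangle A$, the width is $w(D)=r(D_{max})-r(D_{min})$, and the twist polynomial is assumed to be $^{\partial}w_{D}(z)=\sum_{A\subseteq E} z^{w(D*A)}$, as in the earlier work cited above.

```python
# Sketch (ours, not from the paper): delta-matroids as (E, F) with feasible
# sets stored as frozensets.  The twist polynomial is assumed to be the sum
# over all subsets A of E of z^{w(D*A)}.
from itertools import combinations

def twist(F, A):
    """D*A: symmetric-difference every feasible set with A."""
    return {fs ^ A for fs in F}

def width(F):
    """w(D) = r(D_max) - r(D_min)."""
    sizes = [len(fs) for fs in F]
    return max(sizes) - min(sizes)

def twist_widths(E, F):
    """Sorted exponents of the twist polynomial, one per subset A of E."""
    subsets = (frozenset(c) for i in range(len(E) + 1)
               for c in combinations(E, i))
    return sorted(width(twist(F, A)) for A in subsets)
```

For $D=(\{1\},\{\emptyset,\{1\}\})$ this yields the exponent multiset $[1,1]$, i.e. $^{\partial}w_{D}(z)=2z$, matching the sufficiency part of Theorem \ref{the 1}; for the configuration excluded by Claim 2 it yields $2z+2z^{2}$, which is indeed not a monomial.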
https://arxiv.org/abs/2205.03487
Twist monomials of binary delta-matroids
Recently, we introduced the twist polynomials of delta-matroids and gave a characterization of even normal binary delta-matroids whose twist polynomials have only one term, posing the problem of what happens for odd binary delta-matroids. In this paper, we show that the twist polynomial of a normal binary delta-matroid has only one term if and only if each connected component of the intersection graph of the delta-matroid is either a complete graph of odd order or a single vertex with a loop.
https://arxiv.org/abs/1410.8753
Refined Upper Bounds on Stopping Redundancy of Binary Linear Codes
The $l$-th stopping redundancy $\rho_l(\mathcal C)$ of the binary $[n, k, d]$ code $\mathcal C$, $1 \le l \le d$, is defined as the minimum number of rows in the parity-check matrix of $\mathcal C$, such that the smallest stopping set is of size at least $l$. The stopping redundancy $\rho(\mathcal C)$ is defined as $\rho_d(\mathcal C)$. In this work, we improve on the probabilistic analysis of stopping redundancy, proposed by Han, Siegel and Vardy, which yields the best bounds known today. In our approach, we judiciously select the first few rows in the parity-check matrix, and then continue with the probabilistic method. By using similar techniques, we improve also on the best known bounds on $\rho_l(\mathcal C)$, for $1 \le l \le d$. Our approach is compared to the existing methods by numerical computations.
\section{Introduction} \emph{Stopping sets} are a known cause of failures of message-passing decoders when applied to binary linear codes on a binary erasure channel~\cite{di2002finite}. Small stopping sets are especially harmful, as they have a higher probability of causing damage. Stopping sets, however, are determined by the selection of a parity-check matrix of the code, rather than by the code itself. The size of the smallest stopping set is called the \emph{stopping distance} of the corresponding parity-check matrix. It is observed in~\cite{santhi2004effect} that by adding redundant rows to the parity-check matrix, small stopping sets can be eliminated, i.e., the resulting matrix does not contain stopping sets of small size. On the other hand, an increased number of redundant rows in the parity-check matrix leads to growth in the decoding complexity. Therefore, the trade-off between the size of the smallest stopping set and the number of rows in the parity-check matrix is of significant interest. More specifically, let $\code$ be a binary linear $[n, k, d]$ code, and let $H$ be a parity-check matrix for this code. Denote $[n] \triangleq \{ 1,2,\dotsc,n \}$. Let $\cS \subseteq [n]$ be a set of columns of $H$. Denote by $H_{\cS}$ the submatrix of $H$ composed of the columns of $H$ indexed by $\cS$. \begin{define} The set $\cS$ is a \emph{stopping set} in $H$ if $H_{\cS}$ contains no row of Hamming weight one. \end{define} \begin{define}[\hspace{-0.6ex} \cite{schwartz-vardy2006}] The stopping redundancy of $\code$, $\rho(\code)$, is the smallest number of rows in any parity-check matrix of $\code$, such that the corresponding stopping distance is $d$.
\label{def:stopping-redundancy} \end{define} Bounds on the stopping redundancy of binary linear codes were studied in a number of works over the years~\cite{schwartz-vardy2006, weber2005stopping, etzion2006stopping, han2007improved, hollmann2007parity, olgica2008permutation, han2008improved, zumbraegel2012pseudocodeword}. Algorithms for finding small stopping sets were proposed in~\cite{rosnes2009efficient, karimi2013efficient}. For general binary linear codes, the best known bounds on the stopping redundancy were derived by using the probabilistic method in~\cite{han2008improved}. In this work, we improve on the analysis therein. In particular, we observe that the number of stopping sets eliminated by a random codeword of the dual code is not optimal in the general case. In our approach, we judiciously select the first few rows in the parity-check matrix, in such a way that these rows eliminate more small stopping sets than randomly chosen nonzero codewords of the dual code. In particular, we pick dual codewords of minimum weight. If the number of such codewords is small (for example, 1 or 2), then we can provide good estimates on the number of eliminated stopping sets. After that, we proceed with the probabilistic method, similarly to~\cite{han2008improved}. \section{General Theorem} \label{sec:gen-thm} Throughout the remaining sections, if not explicitly stated otherwise, we consider a binary linear $[n, k, d]$ code \code. As was shown in~\cite[Theorem~3]{schwartz-vardy2006}, if $d \le 3$ then \textit{any} parity-check matrix $H$ for \code has stopping distance $d$, i.e., $\rho(\code) = n-k$. Hence we only consider the case $d \ge 4$ (and, therefore, $r \triangleq n-k \ge 2$). The dual code of \code is denoted by \dual; its dimension and minimum distance are $r$ and $d^\perp$, respectively. We use \dualz as a shorthand for $\dual \setminus \{ \cw 0 \}$. We call any subset of $[n]$ of cardinality $i$ an \emph{$i$-set}.
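The two definitions above can be checked by brute force. The following sketch (our illustration, not from the paper) treats $H$ as a list of 0/1 rows with 0-based column indices, and uses the parity-check matrix of the $[3,1,3]$ repetition code as a toy example.

```python
# Brute-force illustration of the stopping-set definitions (our sketch).
from itertools import combinations

def is_stopping_set(H, S):
    """S is a stopping set in H iff H restricted to S has no row of weight 1."""
    return all(sum(row[j] for j in S) != 1 for row in H)

def stopping_distance(H):
    """Size of the smallest nonempty stopping set of H (exhaustive search)."""
    n = len(H[0])
    for size in range(1, n + 1):
        if any(is_stopping_set(H, S) for S in combinations(range(n), size)):
            return size
    return None  # no nonempty stopping set

# Parity-check matrix of the [3,1,3] repetition code; stopping distance is d = 3.
H = [[1, 1, 0],
     [0, 1, 1]]
```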
The set of all $i$-sets is denoted by $\mathfrak I_i$: \[ \mathfrak I_i = \{ \mathcal S \subseteq [n] : |\mathcal S| = i\} \; . \] We also use the notation $\mathfrak{I} = \bigcup_{i=3}^{d-1} \mathfrak{I}_i$. We do not consider the $i$-sets of sizes $1$ and $2$. Indeed, if $d \ge 4$ then no parity-check matrix has an all-zero column or two identical columns, which implies that there are no stopping sets of sizes $1$ and $2$. We say that a row vector $\cw h \in \mathbb F_2^n$ \emph{covers} the $i$-set $\mathcal{S}$ if the projection of $\cw h$ on the coordinates indexed by $\mathcal{S}$ has Hamming weight $1$. We also say that the $t \times n$ matrix $(\cw h_1\tr,\cw h_2\tr,\dotsc,\cw h_t\tr)\tr$ over $\ff_2$ \emph{covers} $\mathcal{S}$ if any of its rows covers $\mathcal{S}$. If an $i$-set is covered, then no stopping set can occur in the corresponding coordinates. Thus, by covering all the $i$-sets, $i=3,4,\dotsc,d-1$, we obtain a matrix with no stopping sets of size less than~$d$. The following lemma is implicitly stated in~\cite{han2008improved}. \begin{lemma} \label{lm:b-lemma} Let $r \ge 3$ and $d$ be two positive integers, and $b$ be a real number, such that $1 \le b \le r-2$, and $(r-1)(d-1) \le 2^{d-1}$. Then, for any $x<2^r$, \[ b - \left( \frac{2^r-2^{r-b}}{2^r-x} \right) \le b \left( 1 - \frac{(d-1) \cdot 2^{r-d+1}}{2^r-x} \right) \; . \] \end{lemma} We omit the proof of Lemma~\ref{lm:b-lemma}. Next, we formulate a general theorem, which is the main result of this paper. It includes Theorem 7 in~\cite{han2008improved} as a special case, and its proof uses similar ideas. \begin{theorem} \label{thm:general-upper-bound} Assume that there exists a matrix whose rows $\cw h_1, \cw h_2, \dotsc, \cw h_\tau$, $\tau \ge 0$, are linearly independent codewords in \dualz. For $i=3,4,\dotsc,d-1$, let $\mathfrak U_i$, $|\mathfrak U_i| \le u_i$, be the set of $i$-sets not covered by this matrix. Assume also that $(r-1)(d-1) \le 2^{d-1}$.
Then \begin{equation} \rho(\code) \le \tau + \min_{t \ge r} \left\{ t + \kappa_t \right\}, \end{equation} where \begin{eqnarray*} \kappa_t & = & \min \left\{k \in \Nat : Q_k(\lfloor \mathcal D_t \rfloor) = 0\right\} \; , \\ Q_k(x) & = & P_k(P_{k-1}(\ldots P_1(x) \ldots)) \; , \\ P_j(x) & = & \left\lfloor x \left( 1 - \frac{(d-1) \cdot 2^{r-d+1}}{2^r-(\tau+t+j)} \right) \right\rfloor \; , \\ \mathcal D_t & = & \sum_{i=3}^{d-1} u_i \prod_{j=\tau+1}^{\tau+t}\left(1 - \frac{i \cdot 2^{r-i}}{2^r-j}\right) \\ && \qquad + \; \frac{1}{2^{t-r}}\left(1+\frac{2/3}{2^{t-r+1}-1}\right) \; . \end{eqnarray*} \end{theorem} \begin{IEEEproof} Let $H$ be a matrix with rows in $\dualz$. Such an $H$ is not necessarily a parity-check matrix, since its rank can be less than $r$. Define $\delta(H)$ as follows: \begin{align*} \delta(H) \triangleq \Big| \{\mathcal{S} \in \mathfrak{I} ~|~ \mathcal{S} &\mbox{ is not covered by } H \} \Big| + (r - \rank H) . \end{align*} Here $\delta(H) = 0$ means that $\rank H = r$ and all the $i$-sets, $i=3,4,\dotsc,d-1$, are covered. Such an $H$ is a parity-check matrix of $\code$; since $d \ge 4$, all the $1$-sets and $2$-sets are covered automatically, and hence its stopping distance is at least $d$. In the sequel, we construct a matrix $H$ such that $\delta(H) = 0$. We prove this theorem in two steps. First, we show the existence of a $(\tau+t) \times n$ matrix with rows in \dualz and bounded $\delta$. Second, we show that $\delta$ has to decrease after adding one carefully selected additional row to it. Therefore, after adding enough rows, we obtain a parity-check matrix $H$ with $\delta(H) = 0$. Hereafter, we use $H_{i_1,i_2,\dotsc,i_s}$ as a shorthand for the matrix with rows $\cw h_{i_1}, \cw h_{i_2}, \dotsc, \cw h_{i_s}$. \textit{Step 1}. Let $\cw h_{\tau+1}, \cw h_{\tau+2}, \dotsc, \cw h_{\tau+t}$ be $t$ rows drawn uniformly at random without repetitions from $\dualz \setminus \{\cw{h}_1, \cw{h}_2, \dotsc, \cw{h}_\tau\}$.
Denote by $\xi$ the number of sets in $\mathfrak I$ that are not covered by $H_{1,2,\dotsc,\tau+t}$. This $\xi$ is an integer discrete random variable. Denote by $\functor I \{\cdot\}$ the indicator function, which takes the value $1$ if its argument is true, and $0$ otherwise. Then, $\xi$ can be written as follows. \begin{align*} \xi &= \sum_{\mathcal S \in \mathfrak I} \functor{I}\{\mathcal{S} \mbox{ is not covered by } H_{1,2,\dotsc,\tau+t}\} \\ &= \sum_{i=3}^{d-1}\sum_{\mathcal{S} \in \mathfrak U_i} \functor{I}\{\mathcal{S} \mbox{ is not covered by } H_{\tau+1,\tau+2,\dotsc,\tau+t}\} \; . \end{align*} Then, the expected value of $\xi$ is \begin{multline} \label{eq:E_xi_for_pasting} \sum_{i=3}^{d-1} \sum_{\mathcal S \in \mathfrak U_i} \prob \left\{ \mathcal S \mbox{ is not covered by }H_{\tau+1,\tau+2,\dotsc,\tau+t} \right\} \; . \end{multline} To find the probabilities in~(\ref{eq:E_xi_for_pasting}), recall (cf.~\cite[p.~139]{macwilliams1977theory}) that the $2^r \times n$ matrix consisting of all codewords of \dual is an orthogonal array of strength $d-1$. This means that for any $i=3,4, \dotsc, d-1$, the projection of this matrix on any $i$-set $\mathcal{S}$ contains every vector of length $i$ exactly $2^{r-i}$ times. There are exactly $i \cdot 2^{r-i}$ codewords in $\dualz$ that cover $\mathcal{S}$. Therefore, \begin{multline} \prob \left\{ \mathcal{S} \mbox{ is not covered by } H_{\tau+1,\tau+2,\dotsc,\tau+t} \right\} \\ = \left. \binom{(2^r - \tau - 1) - i \cdot 2^{r-i}}{t} \right/ {\binom{2^r - \tau - 1}{t}} \\ = \prod_{j=\tau+1}^{\tau+t} \left( 1 - \frac{i \cdot 2^{r-i}}{2^r-j} \right) \; . \label{eq:prob-set-covered} \end{multline} In the numerator we have the number of possible choices of $\cw h_{\tau+1}, \cw h_{\tau+2}, \dotsc, \cw h_{\tau+t}$ that do not cover $\mathcal S$, and in the denominator the total number of such choices.
By substituting expression~(\ref{eq:prob-set-covered}) into~(\ref{eq:E_xi_for_pasting}) we have that the expected value of $\xi$ is bounded from above by: \begin{equation} \label{eq:E_xi_bound} \EE\{\xi\} \le \sum_{i=3}^{d-1} u_i \prod_{j=\tau+1}^{\tau+t} \left( 1-\frac{i \cdot 2^{r-i}}{2^r-j} \right) \; . \end{equation} Next, it was shown in \cite[Lemma 6]{han2008improved} that if we draw uniformly at random $s$ codewords from \dual, $s \ge r$, then the matrix constructed from these codewords has expected rank at least \begin{equation*} r - \frac{1}{2^{s-r}}\left( 1+\frac{2/3}{2^{s-r+1}-1} \right) \; . \end{equation*} It is easy to see that if we draw $\cw h_{\tau+1}, \cw h_{\tau+2}, \dotsc, \cw h_{\tau+t}$ uniformly at random from $\dualz \setminus \{\cw{h}_1, \cw{h}_2, \dotsc, \cw{h}_\tau\}$, and then construct the matrix $H_{1,2,\dotsc,\tau+t}$, then the expected value of its rank deficiency is bounded from above: \begin{eqnarray} \EE \{ \eta \} & = & r - \EE \{ \rank H_{1,2,\dotsc,\tau+t} \} \nonumber \\ & \le & \frac{1}{2^{t-r}}\left( 1+\frac{2/3}{2^{t-r+1}-1} \right) \; . \label{eq:E_eta_bound} \end{eqnarray} By summing up (\ref{eq:E_xi_bound}) and (\ref{eq:E_eta_bound}), we obtain that \begin{multline*} \EE \{ \delta(H_{1,2,\dotsc,\tau+t}) \} \; \le \; \sum_{i=3}^{d-1} u_i \prod_{j=\tau+1}^{\tau+t} \left( 1-\frac{i \cdot 2^{r-i}}{2^r-j} \right) \\ + \frac{1}{2^{t-r}}\left( 1+\frac{2/3}{2^{t-r+1}-1} \right) \; . \end{multline*} Since $\delta(H_{1,2,\dotsc,\tau+t})$ is an integer discrete random variable, there is a realisation of it such that \begin{multline*} \delta(H_{1,2,\dotsc,\tau+t}) \le \Bigg\lfloor \sum_{i=3}^{d-1} u_i \prod_{j=\tau+1}^{\tau+t} \left( 1-\frac{i \cdot 2^{r-i}}{2^r-j} \right) \\ + \frac{1}{2^{t-r}}\left( 1+\frac{2/3}{2^{t-r+1}-1} \right) \Bigg\rfloor \; . \end{multline*} \textit{Step 2}. At this point we consider $\cw h_1, \cw h_2, \dotsc, \cw h_{\tau+t}$ as non-random and fixed. In particular, $\xi$ and $\eta$ are non-random. 
Let $\mathfrak U \subset \mathfrak I$ be the set of all $i$-sets ($3 \le i \le d-1$) not covered by $H_{1,2,\dotsc,\tau+t}$. Add one more new row $\cw h_{\tau+t+1}$, which is randomly chosen from $\dualz \setminus \{ \cw h_1, \cw h_2, \dotsc, \cw h_{\tau+t} \}$. Analogously to $\xi$ and $\eta$ for $H_{1,2,\dotsc,\tau+t}$, we define discrete random variables $\xi'$ and $\eta'$ for $H_{1,2,\dotsc,\tau+t+1}$. Then, \begin{eqnarray*} \EE \{ \xi' \} & = & \sum_{\mathcal S \in \mathfrak U} \prob \{ \mathcal S \mbox{ is not covered by } H_{1,2,\dotsc,\tau+t+1}\} \\ & \le & |\mathfrak U| \cdot \max_{\mathcal S \in \mathfrak U} \prob \{ \mathcal S \mbox{ is not covered by } \cw h_{\tau+t+1} \} \\ & = & \xi \cdot \max_{\mathcal S \in \mathfrak U} \left( 1 - \frac{|\mathcal S| \cdot 2^{r-|\mathcal S|}}{2^r - (\tau+t+1)} \right) \\ & \le & \xi \left( 1 - \frac{(d-1) \cdot 2^{r-d+1}}{2^r - (\tau+t+1)} \right) \; . \end{eqnarray*} Adding one row to any matrix could either leave its rank unchanged or increase it by one. Therefore, if $\eta \ge 1$ then\footnote{Note that the case $\eta \ge 1$ is possible only for $r \ge 3$.} we have that either $\eta' = \eta$ or $\eta' = \eta-1$. To calculate the probabilities of these events, we note that any $l$ linearly independent rows in \dualz span in total $2^l$ codewords (including $\cw 0$). Then \begin{equation*} \prob \{ \eta' = \eta \} = \frac{2^{r-\eta} - (\tau+t+1)}{2^r - (\tau+t+1)} = 1 - \prob \{ \eta' = \eta-1 \} \; , \end{equation*} and, therefore, \begin{eqnarray*} \EE \{ \eta' \} &=& \eta - \left( \frac{2^r - 2^{r-\eta}}{2^r - (\tau+t+1)} \right) \; . \end{eqnarray*} Next, apply Lemma \ref{lm:b-lemma} with $b = \eta$ and $x = \tau+t+1$. Indeed, $\eta \ge 1$ and $\eta \le r-2$ because $H_{1,2,\dotsc,\tau+t}$ consists of at least two different non-zero codewords. Additionally, $\tau+t+1 < 2^r$ since $2^r-1$ is the maximum number of rows in any parity-check matrix for \code. 
Therefore, \begin{equation} \EE \{ \eta' \} \le \eta \left( 1 - \frac{(d-1) \cdot 2^{r-d+1}}{2^r - (\tau+t+1)} \right) \; . \label{eq:eta-prime} \end{equation} Inequality~(\ref{eq:eta-prime}) also holds when $\eta = 0$ (which includes the case $r=2$), because in that case $\eta' = 0$ as well. Altogether we have \begin{multline*} \EE \{ \delta(H_{1,2,\dotsc,\tau+t+1}) \} = \EE\{ \xi' \} + \EE\{ \eta' \} \\ \le \delta(H_{1,2,\dotsc,\tau+t}) \left( 1 - \frac{(d-1) \cdot 2^{r-d+1}}{2^r - (\tau+t+1)} \right) \; . \end{multline*} Therefore, there exists $\cw h_{\tau+t+1}$ such that $\delta(H_{1,2,\dotsc,\tau+t+1}) \le P_1(\delta(H_{1,2,\dotsc,\tau+t})) \le P_1(\lfloor \mathcal D_t \rfloor)$. We iterate this process of adding rows one-by-one, and after $k$ steps obtain the $(\tau+t+k) \times n$ matrix $H_{1,2,\dotsc,\tau+t+k}$ with $\delta(H_{1,2,\dotsc,\tau+t+k}) \le Q_k(\lfloor \mathcal D_t \rfloor)$. Iterations should be stopped when $Q_k(\lfloor \mathcal D_t \rfloor) = 0$. \end{IEEEproof} \section{Important Special Cases} \label{sec:special-cases} Theorem~\ref{thm:general-upper-bound} gives a general family of bounds on the stopping redundancy. It remains to determine how to choose particular $\tau$ and $\cw h_1, \cw h_2, \dotsc, \cw h_\tau$ so as to obtain good concrete bounds. In this section, we study specific selections of these parameters. The first and simplest choice is to take $\tau = 1$ and $\cw h_1$ to be a fixed codeword of minimum weight $d^\perp$ in $\dual$. \begin{cor} \label{cor:1-row} The upper bound in Theorem \ref{thm:general-upper-bound} holds for $\tau=1$ and \[ u_i = \binom{n}{i} - d^\perp \binom{n-d^\perp}{i-1} \; \mbox{ for } i = 3, 4, \dotsc, d-1 \; . \] \end{cor} \begin{IEEEproof} A matrix consisting of a single codeword of weight $d^\perp$ covers exactly $d^\perp \binom{n-d^\perp}{i-1}$ $i$-sets for each $i = 3,4,\dotsc,d-1$.
We apply Theorem \ref{thm:general-upper-bound} with $\tau=1$ and $u_i = \binom{n}{i} - d^\perp \binom{n-d^\perp}{i-1}$, which yields the result stated in the corollary. \end{IEEEproof} Next, take $\tau = 2$ and consider two different codewords of weight $d^\perp$. \begin{cor} \label{cor:2-rows} If there are at least two different codewords $\cw h_1, \cw h_2 \in \dual$ of weight $d^\perp$, then the upper bound in Theorem~\ref{thm:general-upper-bound} holds for $\tau=2$, where \begin{eqnarray*} && u_i \; = \; \binom{n}{i} - \mathfrak{M}(n, d^\perp, i) \; , \\ &&\mathfrak{M}(n, d^\perp, i) \triangleq 2 d^\perp \binom{n-d^\perp}{i-1} \; - \; \max_{0 \le \Delta \le \lfloor d^\perp / 2 \rfloor} \Bigg\{ \Delta \cdot \\ && \quad \binom{n-2 d^\perp+\Delta}{i-1} \; + \; (\Delta - d^\perp)^2 \binom{n-2 d^\perp+\Delta}{i-2} \Bigg\} \; . \\ \end{eqnarray*} \end{cor} \begin{IEEEproof} Consider two different codewords in \dualz of weight $d^\perp$. They are shown in Figure \ref{fig:2-codewords}, where grey and white colors denote the regions of ones and zeroes, respectively. Let $\Delta$ be the number of codeword positions, where both of the codewords have ones. Obviously $0 \le \Delta \le \lfloor d^\perp / 2 \rfloor$. \begin{figure} \centering \begin{tikzpicture} \draw (0,0) rectangle (8,.5); \filldraw[fill=gray] (0,0) rectangle (3,.5); \draw (0,0) rectangle (8,-.5); \filldraw[fill=gray] (0,0) rectangle (1,-.5); \filldraw[fill=gray] (3,0) rectangle (5,-.5); \path (0,-1) -- (1,-1) node[midway,above]{$\Delta$}; \path (1,-1) -- (3,-1) node[midway,above]{$d^\perp - \Delta$}; \path (3,-1) -- (5,-1) node[midway,above]{$d^\perp - \Delta$}; \path (5,-1) -- (8,-1) node[midway,above]{$n-2 d^\perp + \Delta$}; \end{tikzpicture} \caption{Two codewords of weight $d^\perp$} \label{fig:2-codewords} \end{figure} Each of the codewords covers exactly $d^\perp \binom{n-d^\perp}{i-1}$ $i$-sets. 
To calculate the total number of $i$-sets covered by these two codewords, we need to subtract those $i$-sets that have been counted twice. They are of two kinds: \begin{itemize} \item Covered by the same pattern of size $i$ in $\cw h_1$ and $\cw h_2$. They have one position in the area of length $\Delta$ and all the other positions in the area of length $n-2 d^\perp + \Delta$. There are $\Delta \binom{n-2 d^\perp+\Delta}{i-1}$ such $i$-sets. \item Covered by different patterns of size $i$ (at the same positions) in $\cw h_1$ and $\cw h_2$. They have one position in each of the areas of length $d^\perp-\Delta$ and the remaining $i-2$ positions in the area of length $n-2 d^\perp + \Delta$. There are $(\Delta - d^\perp)^2 \binom{n-2 d^\perp+\Delta}{i-2}$ such $i$-sets. \end{itemize} Therefore, these two codewords together cover the following number of $i$-sets: \begin{multline*} 2 d^\perp \binom{n-d^\perp}{i-1} - \Delta \binom{n-2 d^\perp+\Delta}{i-1} \\ - (\Delta - d^\perp)^2 \binom{n-2 d^\perp+\Delta}{i-2} \; . \end{multline*} This is at least $\mathfrak{M}(n, d^\perp, i)$. We can now apply Theorem~\ref{thm:general-upper-bound} with $\tau=2$ and $u_i = \binom{n}{i} - \mathfrak{M}(n, d^\perp, i)$. \end{IEEEproof} It might be possible to further improve the bound in Corollary~\ref{cor:2-rows} by judiciously selecting three or more codewords in $\dual$, for example by taking three (or more) dual codewords of weight $d^\perp$. However, in that case it becomes more difficult to obtain good analytical estimates on $u_i$. Alternatively, it is also possible to choose some specific $\cw h_1, \cw h_2, \dotsc, \cw h_\tau$ and to compute all $u_i$ directly by computer. In that case, tighter bounds can be obtained. In the sequel, we refer to this method as the \emph{hybrid method}. \section{Stopping Redundancy Hierarchy} Consider a binary $[n, k, d]$ code $\code$.
In Definition~\ref{def:stopping-redundancy} it is required that the stopping distance of the code defined by the parity-check matrix $H$ is $d$. However, a weaker requirement on the parity-check matrix of the code can be imposed. In this section, as suggested in~\cite{olgica2008permutation}, we require that the stopping distance of the code is at least $l$, for some $1 \le l \le d$. In that case, the number of rows in the parity-check matrix can be smaller than the stopping redundancy of the code. \begin{define}[\hspace{-0.6ex} {\cite[Definition 2.4]{olgica2008permutation}}] For $l \le d$, the $l$-th stopping redundancy of \code is the smallest nonnegative integer $\rho_l(\code)$ such that there exists a (possibly redundant) parity-check matrix $H$ of \code with $\rho_l(\code)$ rows and stopping distance at least $l$. The ordered set of integers $\left(\rho_1(\code), \rho_2(\code), \dotsc, \rho_d(\code) \right)$ is called the \emph{stopping redundancy hierarchy} of \code. \end{define} Note that the (conventional) stopping redundancy $\rho(\code)$ is equal to $\rho_d(\code)$. For codes with minimum distance $d \ge 4$, no two columns of the parity-check matrix are identical, nor is any column equal to the all-zero vector. Therefore, $\rho_1(\code) = \rho_2(\code) = \rho_3(\code) = n-k$. Consequently, only $\rho_l(\code)$ for $l >3$ is of interest. In \cite{olgica2008permutation}, the stopping redundancy hierarchy of binary linear codes is studied, and several upper bounds are obtained. In the sequel, we apply the ideas of the previous sections to the stopping redundancy hierarchy. We formulate a generalised version of Corollary~\ref{cor:2-rows}. \begin{theorem} \label{thm:hier-2-rows} If $\dual$ contains at least two codewords of minimum weight $d^\perp$, then for $4 \le l \le d$, \begin{equation*} \rho_l(\code) \le 2 + \min_{t \ge r} \left\{ t + \kappa_t^{(1)} \right\} + (r-l+1) \; .
\end{equation*} Moreover, if $(r-1)(l-1) \le 2^{l-1}$ then \begin{equation*} \rho_l(\code) \le 2 + \min_{t \ge r} \left\{ t + \kappa_t^{(2)} \right\} \; , \end{equation*} where \begin{eqnarray*} \kappa_t^{(i)} & = & \min \left\{k \in \Nat : Q_k(\lfloor \mathcal D_t^{(i)} \rfloor) = 0\right\} , \; i = 1, 2 \; , \\ Q_k(x) & = & P_k(P_{k-1}(\ldots P_1(x) \ldots)), \\ P_j(x) & = & \left\lfloor x \left( 1 - \frac{(l-1)2^{r-l+1}}{2^r-(2+t+j)} \right) \right\rfloor \; , \\ \mathcal D_t^{(1)} & = & \sum_{i=3}^{l-1} u_i \prod_{j=3}^{t+2}\left(1 - \frac{i \cdot 2^{r-i}}{2^r-j}\right) \; , \\ \mathcal D_t^{(2)} & = & \mathcal D_t^{(1)} +\frac{1}{2^{t-r}}\left(1+\frac{2/3}{2^{t-r+1}-1}\right) \; , \\ u_i &= & \binom{n}{i} - \mathfrak{M}(n, d^\perp, i) \; . \end{eqnarray*} \end{theorem} \begin{IEEEproof} The case when $(r-1)(l-1) \le 2^{l-1}$ is analogous to the proof of Theorem~\ref{thm:general-upper-bound}, with the values of $\tau$ and $u_i$ as in Corollary~\ref{cor:2-rows}. That proof, however, cannot be applied for small values of $l$ if the condition $(r-1)(l-1) \le 2^{l-1}$ does not hold. We note that this condition is required in the proof only to guarantee the uniform decrease of $\xi$ and $\eta$. Therefore, the argument for the decrease of $\xi$ in the proof of Theorem~\ref{thm:general-upper-bound} can be applied as is. After that, we have to ensure that the constructed matrix is of the required rank $r$. Note that since we have covered all the $i$-sets for $i=1,2,\dotsc, l-1$, the rank of the matrix is at least $l-1$. Hence, by adjoining at most $r-(l-1)$ rows, we finally obtain the required parity-check matrix. \end{IEEEproof} We note that tighter bounds on the stopping redundancy hierarchy could be obtained by using the hybrid method, discussed in the last paragraph of Section~\ref{sec:special-cases}.
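The hybrid method mentioned above amounts to counting the uncovered $i$-sets $u_i$ by computer. A brute-force sketch (ours, with hypothetical small parameters rather than a real code) which also confirms the closed-form covering counts behind Corollaries \ref{cor:1-row} and \ref{cor:2-rows}:

```python
# Count uncovered i-sets directly, and compare with the closed-form counts
# of Corollaries 1 and 2 on a toy pair of weight-3 words (our example).
from itertools import combinations
from math import comb

def covered(rows, S):
    """An i-set S is covered if some row has exactly one 1 inside S."""
    return any(sum(row[j] for j in S) == 1 for row in rows)

def u(rows, n, i):
    """u_i of the hybrid method: number of i-sets not covered by the rows."""
    return sum(1 for S in combinations(range(n), i) if not covered(rows, S))

# Toy parameters: n = 8, two weight-3 words overlapping in Delta = 1 position.
n, w, Delta, i = 8, 3, 1, 3
h1 = [1, 1, 1, 0, 0, 0, 0, 0]
h2 = [1, 0, 0, 1, 1, 0, 0, 0]

cov_one = comb(n, i) - u([h1], n, i)        # i-sets covered by h1 alone
cov_two = comb(n, i) - u([h1, h2], n, i)    # i-sets covered by h1 or h2
formula_two = (2 * w * comb(n - w, i - 1)
               - Delta * comb(n - 2 * w + Delta, i - 1)
               - (Delta - w) ** 2 * comb(n - 2 * w + Delta, i - 2))
```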
\section{Numerical Experiments} In this section, we compare the bounds on the stopping redundancy obtained in \cite{schwartz-vardy2006}, \cite{han2007improved}, \cite{han2008improved} with our results. We consider two codes: the extended $[24,12,8]$ binary Golay code and the extended $[48,24,12]$ binary Quadratic Residue (QR) code. Both of them are known to be self-dual (cf.\,\cite{houghten2003qr}). The extended $[24,12,8]$ binary Golay code is arguably one of the most remarkable binary block codes. It is often used as a benchmark in studies of code structure and decoding algorithms. The code is self-dual, and therefore $d^\perp = 8$. Moreover, it is known \cite[p.\,67]{macwilliams1977theory} that there are $759$ codewords of the minimum weight. An example of a (conventional) parity-check matrix of the code is shown in Table \ref{tbl:Golay-matrix}, where the blank spaces denote zeroes. In \cite{schwartz-vardy2006}, a greedy (lexicographic) computer search was used. It was found that the actual stopping redundancy of the extended $[24,12,8]$ binary Golay code is at most $34$.
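The numbers reported below for the new bounds are obtained by evaluating Theorem \ref{thm:general-upper-bound} numerically. The following is a sketch of that computation (our code, using the $u_i$ of Corollary \ref{cor:1-row} with $\tau=1$; it is not guaranteed to reproduce the tables digit-for-digit).

```python
# Evaluate the bound of Theorem 2 with the u_i of Corollary 1 (tau = 1).
# A numerical sketch of how the table entries are computed (our code).
from math import comb, floor, prod

def upper_bound(n, k, d, d_perp, tau=1, t_max=300):
    r = n - k
    u = {i: comb(n, i) - d_perp * comb(n - d_perp, i - 1) for i in range(3, d)}
    best = None
    for t in range(r, t_max):
        # D_t from Theorem 2
        D = sum(u[i] * prod(1 - i * 2 ** (r - i) / (2 ** r - j)
                            for j in range(tau + 1, tau + t + 1))
                for i in range(3, d))
        D += (1 + (2 / 3) / (2 ** (t - r + 1) - 1)) / 2 ** (t - r)
        # kappa_t: iterate P_j until the floor reaches zero
        x, kappa = floor(D), 0
        while x > 0:
            kappa += 1
            x = floor(x * (1 - (d - 1) * 2 ** (r - d + 1)
                           / (2 ** r - (tau + t + kappa))))
        cand = tau + t + kappa
        best = cand if best is None or cand < best else best
    return best
```

For the parameters of the $[24,12,8]$ Golay code, `upper_bound(24, 12, 8, 8)` computes the kind of number appearing in the $\tau=1$ row of Table \ref{tbl:bounds}.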
\begin{table} \caption{Parity-check matrix of the extended $[24,12,8]$ Golay code} \label{tbl:Golay-matrix} \[ \left( \begin{smallmatrix} 1 & 1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & ~ & 1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ 1 & ~ & ~ & 1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & ~ & ~ & ~ & 1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & ~ & ~ & ~ & ~ & 1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\ 1 & ~ & ~ & ~ & ~ & ~ & 1 & ~ & ~ & ~ & ~ & ~ & ~ & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 \\ 1 & ~ & ~ & ~ & ~ & ~ & ~ & 1 & ~ & ~ & ~ & ~ & ~ & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 \\ 1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 1 & ~ & ~ & ~ & ~ & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 \\ 1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 1 & ~ & ~ & ~ & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 1 & ~ & ~ & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 \\ 1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 1 & ~ & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \end{smallmatrix} \right) \] \end{table} It is known \cite[p.\,604]{macwilliams1977theory} that there are $17296$ codewords of the minimum weight in the extended $[48,24,12]$ binary Quadratic Residue (QR) code. The comparison of the upper bounds on the stopping redundancy is given in Table \ref{tbl:bounds}. 
\begin{table} \caption[Upper bounds]{Upper bounds on the stopping redundancy} \label{tbl:bounds} \centering \begin{tabular}{lcc} \hline\hline ~ & [24, 12, 8] Golay & [48, 24, 12] QR \\ \hline \cite[Thm 4]{schwartz-vardy2006} & {2509} & {4540385} \\ \cite[Thm 1]{han2008improved} & 198 & {3655} \\ \cite[Thm 3]{han2008improved} & 194 & {3655} \\ \cite[Thm 4]{han2008improved} & 187 & {3577} \\ \cite[Thm 7]{han2008improved} & 182 & {3564} \\ \hline Corollary \ref{cor:1-row} ($\tau=1$) & 180 & {3538} \\ Corollary \ref{cor:2-rows} ($\tau=2$) & 177 & {3515} \\ \hline\hline \end{tabular} \end{table} We also compare the bounds on the stopping redundancy hierarchy derived in the previous section with the results for general codes obtained in \cite{olgica2008permutation} (the bounds for cyclic codes therein are not applicable because neither of the codes is cyclic). The numerical results are presented in Table \ref{tbl:hier-Golay-24} and Table \ref{tbl:hier-QR-48}. \begin{table} \caption{Bounds on the stopping redundancy hierarchy, $\rho_l$, for the extended $[24,12,8]$ Golay code} \label{tbl:hier-Golay-24} \centering \begin{tabular}{lcccc} \hline\hline $l$ & \cite[Thm 3.8]{olgica2008permutation} & \cite[Thm 3.11]{olgica2008permutation} & \cite[Thm 3.12]{olgica2008permutation} & Thm \ref{thm:hier-2-rows} \\ \hline 4 & 26 & 78 & --- & 25 \\ 5 & --- & 298 & --- & 36 \\ 6 & --- & 793 & 385 & 59 \\ 7 & --- & 1585 & --- & 103 \\ 8 & --- & 2509 & --- & 177 \\ \hline\hline \end{tabular} \end{table} \begin{table} \caption{Bounds on the stopping redundancy hierarchy, $\rho_l$, for the extended $[48,24,12]$ QR code} \label{tbl:hier-QR-48} \centering \begin{tabular}{lccc} \hline\hline $l$ & \cite[Thm 3.8]{olgica2008permutation} & \cite[Thm 3.11]{olgica2008permutation} & Thm \ref{thm:hier-2-rows} \\ \hline 4 & 42 & 300 & 47 \\ 5 & 62 & 2 324 & 58 \\ 6 & 105 & 12 950 & 92 \\ 7 & --- & 55 454 & 158 \\ 8 & --- & 190 050 & 287 \\ 9 & --- & 536 154 & 514 \\ 10 & --- & 1 271 625 & 978 \\ 11 & --- & 2 579
129 & 1856 \\ 12 & --- & 4 540 385 & 3515 \\ \hline\hline \end{tabular} \end{table} \medskip Next, we use the hybrid method, mentioned in the last paragraph of Section~\ref{sec:special-cases}. We take the first $\tau$ rows of the conventional parity-check matrix of the extended $[24,12,8]$ Golay code (Table~\ref{tbl:Golay-matrix}), for $1 \le \tau \le 12$, compute all $u_i$, and apply techniques similar to Theorem~\ref{thm:general-upper-bound} and Theorem~\ref{thm:hier-2-rows}. Numerical results are presented in Table~\ref{tbl:hier-hybrid-Golay-24}. \begin{table} \caption{{Bounds on the stopping redundancy hierarchy, $\rho_l$, derived by the hybrid method for the extended $[24,12,8]$ Golay code}} \label{tbl:hier-hybrid-Golay-24} \centering \begin{tabular}{c|ccccc} \hline\hline $\rho_l$ & $l = 4$ & $l = 5$ & $l = 6$ & $l = 7$ & $l = 8$ \\ \hline $\tau=1$ & 24 & 36 & 61 & 105 & 180 \\ $\tau=2$ & 24 & 36 & 59 & 103 & 177 \\ $\tau=3$ & 25 & 35 & 58 & 102 & 175 \\ $\tau=4$ & 25 & 34 & 57 & 100 & 174 \\ $\tau=5$ & 26 & 33 & 56 & 99 & 172 \\ $\tau=6$ & 27 & 33 & 56 & 98 & 171 \\ $\tau=7$ & 28 & 33 & 55 & 98 & 170 \\ $\tau=8$ & 29 & 33 & 55 & 97 & 169 \\ $\tau=9$ & 30 & 33 & 55 & 96 & 168 \\ $\tau=10$ & 31 & 33 & 55 & 96 & 167 \\ $\tau=11$ & 32 & 34 & 55 & 96 & 167 \\ $\tau=12$ & 33 & 35 & 56 & 97 & 168 \\ \hline\hline \end{tabular} \end{table} \section{Acknowledgment} The authors wish to thank {\O{}}yvind Ytrehus for helpful discussions. \bibliographystyle{IEEEtran}
https://arxiv.org/abs/1312.3437
On the growth of a Coxeter group
For a Coxeter system $(W,S)$ let $a_n^{(W,S)}$ be the cardinality of the sphere of radius $n$ in the Cayley graph of $W$ with respect to the standard generating set $S$. It is shown that if $(W,S)\preceq(W',S')$, then $a_n^{(W,S)}\leq a_n^{(W',S')}$ for all $n\in \mathbb{N}_0$, where $\preceq$ is a suitable partial order on Coxeter systems (cf. Thm. A). It is proven that there exists a constant $\tau= 1.13\dots$ such that for any non-affine, non-spherical Coxeter system $(W,S)$ the growth rate $\omega(W,S)=\limsup \sqrt[n]{a_n}$ satisfies $\omega(W,S)\geq \tau$ (cf. Thm. B). The constant $\tau$ is a Perron number of degree $127$ over $\mathbb{Q}$. For a Coxeter group $W$ the Coxeter generating set is not unique (up to $W$-conjugacy), but there is a standard procedure, the diagram twisting (cf. [BMMN02]), which allows one to pass from one Coxeter generating set $S$ to another Coxeter generating set $\mu(S)$. A generalisation of the diagram twisting is introduced, the mutation, and it is proven that Poincaré series are invariant under mutations (cf. Thm. C).
\section*{Introduction} The growth of finitely generated groups has been the subject of intensive investigations (cf.~\cite{grigorchuk--bppg,grigorchuk--dgfggtim}, \cite{grigorchuk-delaharpe--prgesgt}, \cite{delaharpe--tggt}) and led to ground-breaking results, e.g., M.~Gromov showed that a finitely generated group has polynomial growth if, and only if, it is virtually nilpotent (cf.~\cite{gromov--gpgem}). For a group $G$ generated by a finite symmetric set $X\subseteq G$ not containing the identity $1\in G$, the growth rate\footnote{The growth rate is often called \emph{exponential growth rate}.} is defined by $\omega(G,X)=\limsup_n \sqrt[n]{a_n}$, where $a_n$ is the number of elements in $G$ which can be written as a product of $n$ elements in $X$ but which cannot be written as a product of fewer than $n$ elements in $X$. If $G$ is of subexponential growth, i.e., polynomial or intermediate growth, then $\omega(G,X)\leq 1$. The set of isomorphism classes of Coxeter systems admits a partial order $\cleq$, and the corresponding monotonicity result for growth sequences is proven. \begin{thm*}[\ref{thm:an}] Let $(W,S)$ and $(W',S')$ be Coxeter systems. If $(W,S)\cleq (W',S')$ then $a_n^{(W,S)}\leq a_n^{(W',S')}$ for all $n\in \NN_0$. \end{thm*} Spherical and affine Coxeter systems have, respectively, growth rate zero and one. One of the main results of this paper can be stated as follows. \begin{thm*}[\ref{thm:tau}] Let $(W,S)$ be a non-affine, non-spherical Coxeter system. Then its growth rate satisfies $\omega(W,S)\geq \tau$, where $\tau = 1.13\dots$ is an algebraic integer of degree $127$ over $\QQ$, which is also a Perron number with minimal polynomial $m_\tau(t)$ given in \S\ref{s:tau}. Moreover, $\tau=\omega(W,S)$, where $(W,S)$ is the hyperbolic Coxeter system $E_{10}$. \end{thm*} A remarkable coincidence occurs (cf.~Rem.~\ref{rem:notes-tau}). 
Besides having the smallest minimal growth rate among Coxeter systems, $E_{10}$ is also known to minimise a certain function $\lambda_\rho$, which reflects, in the hyperbolic case, the metric properties of the orbifold defined by Tits' representation $\rho$ (cf.~\cite{mcmullen--cgsnhm}). For a group $G$ with a finite symmetric generating set $X\subseteq G\setminus \{1\}$ one defines the growth series by $p_{(G,X)}(t)=\sum_{n\in \NN_0} a_n t^n \in \ZZ\llbracket t\rrbracket$; thus $\omega(G,X)$ coincides with the inverse of the radius of convergence of $p_{(G,X)}(t)$, considered as a power series over $\CC$. For a Coxeter system $(W,S)$ the growth series is also called the \emph{Poincar\'e series} of $(W,S)$. In \S\ref{s:rig-growth} we define the new notion of a \emph{mutation} $\mu(M,X,Y,\sigma)$ of a Coxeter matrix $M$, which induces an equivalence relation $\sim$ on Coxeter systems. Mutations generalise diagram twisting (cf.~\cite{brady-etal--rcgag}), but in general they do not preserve the isomorphism class of the group. Nevertheless, the Poincar\'e series is invariant under mutations of the Coxeter matrix. \begin{thm*}[\ref{thm:mu}]Let $(W,S)$ and $(W',S')$ be Coxeter systems satisfying $(W,S)\sim (W',S')$. Then $p_{(W,S)}(t)=p_{(W',S')}(t)$. \end{thm*} Thus, mutations provide a tool to produce finitely many non-isomorphic Coxeter groups with the same growth series. It is an open problem whether there exist infinitely many groups with the same growth series (cf.~\cite[Ch. 1, Pbl.s 1--2]{mann--hgg}). \subsection*{Acknowledgements} I wish to thank J.~Parkinson, P.~Spiga and Th.~Weigel for helpful discussions, and M.~Bucher, T.~Smirnova-Nagnibeda and A.~Talambutsa for the stimulating conversations during the conference ``Geometric and analytic group theory'' in Ventotene, Italy. \noindent{}I would also like to thank R. Howlett for providing a copy of \cite{howlett-muehlherr--icgwdnpr}, and the anonymous referees for helpful comments. 
\section{Growth of finitely generated groups}\label{s:growth} Let $G$ be a finitely generated group, and let $X=X^{-1}\subseteq G\setminus\{1\}$ be a finite, symmetric set of generators. The \emph{length} of $g\in G$ with respect to $X$ is the minimal $n$ such that $g=x_1x_2\dots x_n$ with $x_i\in X$; the \emph{length function} will be denoted by $\ell_{(G,X)}\colon G\to \NN_0$. It has a natural interpretation in terms of the metric on the Cayley graph $\Cay(G,X)$. For $n\in \NN_0$, the ball in $\Cay(G,X)$ centred around $1_G$ with radius $n$ will be denoted by $B^{(G,X)}_n=\{g\in G\mid \ell_{(G,X)}(g)\leq n\}$, the corresponding sphere by $A^{(G,X)}_n=\{g\in G\mid \ell_{(G,X)}(g)= n\}$. Their sizes are $a^{(G,X)}_n=|A^{(G,X)}_n|$ and $b^{(G,X)}_n=|B^{(G,X)}_n|$. The central objects under investigation are the growth series \[p_{(G,X)}(t)=\sum_{n\in \NN_0} a_n^{(G,X)} t^n\in \ZZ\llbracket t\rrbracket,\] and the growth rate $\omega(G,X) =\limsup_{n\to\infty}\sqrt[n]{a_n^{(G,X)}}$. Note that $G$ has \emph{exponential growth} if $\omega(G,X)>1$ for some (and hence any) generating set $X$. The present paper only deals with finitely generated linear groups $G$. Therefore, $G$ has \emph{polynomial growth} with respect to some (and hence any) generating system $X$ if $\omega(G,X)\leq 1$ (cf.~\cite[Cor. 5]{tits--fslg}). The \emph{minimal growth rate} $\omega(G)$ is the infimum of $\omega(G,X)$, as $X$ runs over all finite, symmetric generating sets of $G$. \section{Coxeter groups}\label{s:cox-gps} Standard references for Coxeter groups include \cite{bourbaki--gal46,humphreys--rgcg}. \subsection{Coxeter systems}\label{ss:cox-syst} Let $S$ be a finite set, and let $M$ be an $(S\times S)$-matrix such that $m_{s,s}=1$, and $m_{s,r}=m_{r,s}\in \ZZ_{\geq 2}\cup\{\infty\}$ for all $s,r\in S$, $s\neq r$. Then $M$ is a \emph{Coxeter matrix} over $S$. 
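The sphere sizes $a_n^{(G,X)}$ of \S\ref{s:growth} can be enumerated directly for small examples by a breadth-first search on $\Cay(G,X)$. The following Python sketch (our illustration, not part of the paper; the function name \texttt{sphere\_sizes} is ours) does this for the symmetric group $S_n$, the Coxeter group of type $A_{n-1}$ generated by the adjacent transpositions, and recovers the coefficients of its Poincar\'e polynomial $[2]_t[3]_t\cdots[n]_t$.

```python
def sphere_sizes(n):
    """Sizes a_k of the spheres in Cay(S_n, adjacent transpositions),
    computed by breadth-first search from the identity."""
    def apply(perm, i):
        # multiply by the adjacent transposition s_i = (i, i+1)
        p = list(perm)
        p[i], p[i + 1] = p[i + 1], p[i]
        return tuple(p)

    identity = tuple(range(n))
    seen = {identity}
    sphere = [identity]   # current sphere A_k, starting with A_0 = {1}
    sizes = [1]
    while sphere:
        nxt = []
        for g in sphere:
            for i in range(n - 1):
                h = apply(g, i)
                if h not in seen:
                    seen.add(h)
                    nxt.append(h)
        if nxt:
            sizes.append(len(nxt))
        sphere = nxt
    return sizes

print(sphere_sizes(4))
```

For $n=4$ this returns $[1, 3, 5, 6, 5, 3, 1]$, the coefficient sequence of $(1+t)(1+t+t^2)(1+t+t^2+t^3)$, summing to $|S_4|=24$.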
The \emph{Coxeter system} associated with a Coxeter matrix $M$ over $S$ is the pair $(W,S)$ where \begin{equation}\label{eq:cox-pres} W=W(M)=\langle S\mid (sr)^{m_{s,r}} \,\text{ if }m_{s,r}<\infty\rangle. \end{equation} The Coxeter matrix $M$ (or, equivalently, the presentation \eqref{eq:cox-pres}) is often encoded in the Coxeter graph $\Gamma(M)$ (cf.~\cite[Ch.~IV \no 1.9]{bourbaki--gal46}). Either datum is called the \emph{type} of $(W,S)$. If $I\subseteq S$ let $W_I=\langle I\rangle \leq W$. The \emph{parabolic subsystem} $(W_I,I)$ is a Coxeter system in its own right, with Coxeter matrix $M_I=(m_{s,r})_{s,r\in I}$. Its Coxeter graph is the graph induced from $\Gamma$ by the vertices in $I$, and \begin{equation}\label{eq:l} \ell_{(W_I,I)}=\ell_{(W,S)}|_{W_I}. \end{equation} The finite set $\ca{F}=\ca{F}(W,S)=\{I\subseteq S\mid |W_I|<\infty\}$ is called the set of \emph{spherical residues}. A \emph{Coxeter-isomorphism} $\phi\colon (W,S)\to (W',S')$ of Coxeter systems of types $M$ and $M'$ respectively, is a bijection $\phi\colon S\to S'$ such that $m'_{\phi(s),\phi(r)}=m_{s,r}$ for all $s,r\in S$. Every Coxeter group $W$ is linear via Tits' reflection representation $\rho\colon W\to \operatorname{GL}(\RR^S)$ (cf.~\cite[Ch.~V, \S4]{bourbaki--gal46}). The representation $\rho$ is determined by the symmetric matrix\footnote{For short, we put $\frac\pi\infty=0$.} $B=B_M=\left(-\cos\frac{\pi}{m_{s,r}}\right)_{s,r\in S}$, and the signature of $B$ induces the following tetrachotomy on irreducible Coxeter systems. \begin{renum} \item If $B$ is positive definite, then $(W,S)$ is \emph{spherical}, \item if $B$ is positive semidefinite with $0$ a simple eigenvalue, then $(W,S)$ is \emph{affine}, \item if $B$ has $|S|-1$ positive and $1$ negative eigenvalue, then $(W,S)$ is \emph{hyperbolic}\footnote{There are several non-compatible notions of hyperbolicity, cf.~\cite[Note 6.9]{davis--gtcg}. 
In the present work ``hyperbolic'' coincides with Bourbaki's notion (cf.~\cite[Ch. V, \S4, Ex.13]{bourbaki--gal46}).}, or \item none of the above conditions applies. \end{renum} The irreducible Coxeter system $(W,S)$ is spherical if, and only if, $W$ is a finite group. The classification of spherical and affine systems is classical (cf.~\cite[Ch. VI]{bourbaki--gal46}). For a characterisation of hyperbolic Coxeter systems see \S\ref{ss:min}. \subsection{The word problem}\label{ss:word-pbl} If $S$ is a finite set, let $S^\ast$ be the free monoid\footnote{Words in $S^\ast$ are denoted in boldface: $\bo{w} =s_1 s_2 \dots s_n\in S^\ast$. A subword $\bo{w}'$ of $\bo{w}$ is either the empty word $\boldsymbol{1}$ or a word of the form $\bo{w}'= s_i s_{i+1} \dots s_k$ for $1\leq i\leq k\leq n$.} over $S$, equipped with the natural $\NN_0$-grading $\deg\colon S^\ast\to \NN_0$, $\deg(s)=1$ for all $s\in S$, and the ShortLex total order with respect to some total order on $S$ (cf.~\cite[\S2.5]{epstein-etal--wpg}). For $s,t\in S$ and $m\in \NN_0$ let $[s,t,m]\in S^\ast$ be the word \[ [s,t,m]=\begin{cases} (st)^{m/2}&\text{ if }2\mid m,\\ (st)^{\frac{m-1}{2}}s&\text{ if }2\nmid m. \end{cases} \] Let $M$ be a Coxeter matrix over $S$. The \emph{$M$-operations} (or \emph{$M$-moves}) on $S^\ast$ are modifications of words of the following types: \begin{equation}\label{eq:Mmoves} \begin{array}{cl} M^{(1)}\colon & \bo{v} (ss)\bo{u} \mapsto \bo{v}\bo{u},\\ M^{(2)}\colon & \bo{v} [s,r,m_{s,r}]\bo{u} \mapsto \bo{v} [r,s,m_{s,r}]\bo{u}, \quad\text{ if }m_{s,r}<\infty. \end{array} \end{equation} Let $(W,S)$ be the Coxeter system of type $M$, and let $\pi_M\colon S^\ast\to W(M)$ be the canonical projection (of monoids). Then, for all $\bo{w}\in S^\ast$, \begin{equation}\label{eq:length-deg} \deg(\bo{w})\geq \ell_M(\pi_M(\bo{w})). \end{equation} A word $\bo{w}\in S^\ast$ is called \emph{reduced} for $(W,S)$ if equality holds in \eqref{eq:length-deg}. 
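The $M$-operations \eqref{eq:Mmoves} can be turned into a (very naive) reduction procedure: close a given word under $M^{(2)}$-moves and delete a square $ss$ whenever one appears; by Tits' solution of the word problem, recalled in Theorem~\ref{thm:word-pbl} below, this procedure stops exactly at the reduced words. The following Python sketch is our own illustration (exponential in the word length, intended only for tiny examples); the encoding of the Coxeter matrix as a dictionary of its finite off-diagonal entries, with both orderings of each pair present and infinite entries omitted, is an assumption of ours.

```python
def alt_word(s, r, m):
    """The alternating word [s, r, m] = s r s r ... with m letters."""
    return tuple(s if i % 2 == 0 else r for i in range(m))

def braid_neighbours(w, M):
    """All words obtained from w by a single M^(2)-move."""
    for (s, r), m in M.items():
        pat, rep = alt_word(s, r, m), alt_word(r, s, m)
        for i in range(len(w) - m + 1):
            if w[i:i + m] == pat:
                yield w[:i] + rep + w[i + m:]

def reduce_word(word, M):
    """Close `word` under M^(2)-moves; whenever a square ss appears,
    delete it (an M^(1)-move) and start over with the shorter word.
    Terminates: the braid closure of words of fixed length is finite,
    and each M^(1)-move strictly decreases the degree."""
    word = tuple(word)
    while True:
        closure, frontier, shorter = {word}, [word], None
        while frontier and shorter is None:
            nxt = []
            for w in frontier:
                i = next((i for i in range(len(w) - 1)
                          if w[i] == w[i + 1]), None)
                if i is not None:
                    shorter = w[:i] + w[i + 2:]   # M^(1): delete ss
                    break
                for v in braid_neighbours(w, M):  # M^(2): braid moves
                    if v not in closure:
                        closure.add(v)
                        nxt.append(v)
            frontier = nxt
        if shorter is None:
            return word   # no M-move lowers the degree: the word is reduced
        word = shorter

# Type A_2, i.e. m_{a,b} = 3.
M = {('a', 'b'): 3, ('b', 'a'): 3}
print(reduce_word('abab', M))     # the element ba has length 2
print(reduce_word('ababab', M))   # (ab)^3 is a relator
```

In type $A_2$ the word $abab$ reduces to $ba$, and $(ab)^3$ reduces to the empty word, in accordance with the defining relations.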
If $w\in W(M)$, there is a unique ShortLex-minimal element $\sigma_M(w)\in S^\ast$ such that $\pi_M\sigma_M(w)=w$. Thus, $\sigma_M\colon W(M)\to S^\ast$ is a section of $\pi_M$, with the additional property\footnote{Actually, any section of $\pi_M$ with property \eqref{eq:length-section} would suffice for the purposes of this paper.} that \begin{equation}\label{eq:length-section} \deg(\sigma_M(w))=\ell_M(w). \end{equation} A word $\bo{w}\in S^\ast$ is called \emph{$M$-reduced} if its degree cannot be decreased by applying any finite sequence of $M$-operations. If two words $\bo{w},\bo{w}'$ are connected by a sequence of $M$-moves, then they represent the same element in $W(M)$: \begin{equation}\label{eq:piM} \pi_M(\bo{w})=\pi_M(\bo{w}'), \end{equation} and hence reduced words are $M$-reduced. Moreover, Tits solved the word problem as follows. \begin{thm}[{\cite{tits--pmgc}, \cite[Ch.~IV, \S1, Ex.13]{bourbaki--gal46}}]\label{thm:word-pbl} Let $(W,S)$ be the Coxeter system with Coxeter matrix $M$. \begin{renum} \item A word in $S^\ast$ is reduced for $(W,S)$ if, and only if, it is $M$-reduced. \item\label{wp:li2} If $\bo{w},\bo{w}'\in S^\ast$ are reduced words which represent the same element $\pi_M(\bo{w})=\pi_M(\bo{w}')\in W$, then there is a sequence of $M$-operations taking $\bo{w}$ to $\bo{w}'$, and this sequence entirely consists of $M^{(2)}$-operations. \end{renum} \end{thm} Following \cite[\S\S3.3--3.4]{bjorner-brenti--ccg}, let $\ca{R}_M(w)=\{\bo{w}\in S^\ast\mid \pi_M(\bo{w})=w,\text{ and }\deg(\bo{w})=\ell_M(w)\}$ be the set of the reduced words in $S^\ast$ representing $w\in W(M)$. \begin{cor}\label{cor:misc-R} For $\bo{w},\bo{w}'\in S^\ast$, and $w\in W(M)$ the following hold. \begin{renum} \item\label{li:sigma} $\sigma_M(w)\in \ca{R}_M(w)$. \item\label{li:closure} If $\bo{w}\in \ca{R}_M(w)$, and there exists a sequence $\bo{w} \stackrel{M}{\longmapsto} \bo{w}'$ of $M$-moves taking $\bo{w}$ to $\bo{w}'$, then $\bo{w}'\in \ca{R}_M(w)$. 
\item\label{li:7} If $\pi_M(\bo{w})=\pi_M(\bo{w}')$ and $\bo{w}'$ is reduced, then there exists a sequence of $M$-moves \[\bo{w} \longmapsto \bo{w}'.\] \end{renum} \end{cor} \subsection{Poincar\'e series}\label{s:poincare-ser} Coxeter systems are pairs consisting of a finitely generated group $W$ and a finite, symmetric generating set $S$, and therefore the machinery described in \S\ref{s:growth} applies. In the context of Coxeter systems the growth series is also known as the \emph{Poincar\'e series} $p_{\WS}(t)$ of $(W,S)$. If $(W,S)$ is spherical then $p_{\WS}(t)$ is a polynomial, which can be explicitly computed in terms of the degrees of the polynomial invariants of $(W,S)$, simply known as the \emph{degrees} of $(W,S)$ (cf.~\cite{solomon--ofcg}, \cite[Ch.~3]{humphreys--rgcg}). For arbitrary Coxeter systems, the Poincar\'e series can be computed using the following property. \begin{pro}[{\cite{steinberg--elag}}]\label{pro:ps} Let $(W,S)$ be a Coxeter system with Poincar\'e series $p_{\WS}(t)$. Then \begin{equation}\label{eq:sph-steinberg} \frac{1}{p_{(W,S)}(t^{-1})}=\sum_{I\in \ca{F}}\frac{(-1)^{|I|}}{p_{\WI}(t)}, \end{equation} where $\ca{F}=\ca{F}(W,S)$. In particular, the Poincar\'e series $p_{\WS}(t)$ is a rational function. \end{pro} It is often possible to focus only on irreducible systems. \begin{lem} Let $(W_1,S_1)$ and $(W_2,S_2)$ be Coxeter systems, and let $(W,S)=(W_1\times W_2,S_1\sqcup S_2)$ be their product. Then $\omega(W,S)=\max\{\omega{(W_1,S_1)}, \omega{(W_2,S_2)} \}$. \end{lem} \begin{proof} The factorisation $p_{(W,S)}(t)= p_{(W_1,S_1)}(t)\cdot p_{(W_2,S_2)}(t)$ holds (cf.~\cite[Ch. IV, n.\!\textsuperscript{os}~1.8--1.9]{bourbaki--gal46}). Since Poincar\'e series are series with non-negative coefficients and with degree-zero coefficient equal to one, then $\omega(W,S)\geq \max\{\omega(W_1,S_1), \omega(W_2,S_2)\}$. 
On the other hand, the product $p(t)$ of two rational functions $p_1(t)$ and $p_2(t)$ is holomorphic \emph{at least} on the smaller of the open disks centred at zero of radii $\rho_1, \rho_2$ on which the two factors are holomorphic: thus \[\omega(W,S)=\frac1\rho \leq \frac1{\min\{\rho_1,\rho_2\}}= \max\{\omega(W_1,S_1),\omega(W_2,S_2)\}.\qedhere\] \end{proof} \section{The partial order $\cleq$ on the class of Coxeter systems}\label{s:orders} The core of the proof of Theorem~\ref{thm:tau} is the reduction to a finite set of elementary verifications. The tools which provide this reduction are the partial order $\cleq$ over the set of (Coxeter-isomorphism classes of) Coxeter systems, the corresponding monotonicity results, and the finiteness of the set of minimal non-affine, non-spherical Coxeter systems. Let $(W,S)$ and $(W',S')$ be Coxeter systems with Coxeter matrices $M$, $M'$ respectively. Define $(W,S)\cleq(W',S')$ whenever there exists an injective map $\phi\colon S\to S'$ such that $m_{s,r}\leq m'_{\phi(s),\phi(r)}$ for all $s,r\in S$ (cf.~\cite[\S6]{mcmullen--cgsnhm}). In particular, if $(W,S)$ and $(W',S')$ are Coxeter-isomorphic (cf.~\S\ref{ss:cox-syst}) then $(W,S)\cleq(W',S')$ and $(W',S')\cleq(W,S)$. Therefore the preorder $\cleq$ descends to a partial order on the set of Coxeter-isomorphism classes of Coxeter systems. With a mild abuse of notation we will avoid the distinction between a Coxeter system and its Coxeter-isomorphism class. \subsection{Monotonicity properties} The partial order $\cleq$ has the following important property. \begin{thmx}\label{thm:an} Let $(W,S)$ and $(W',S')$ be Coxeter systems with Coxeter matrices $M$ and $M'$, respectively. Let $a_k=a_k^{(W,S)}$ and $a'_k=a_k^{(W',S')}$ be the growth sequences with respect to the Coxeter generating set. If $(W,S)\cleq (W',S')$ then $a_k\leq a'_k$ for all $k\in \NN_0$. \end{thmx} \begin{proof} Let $\phi \colon S\to S'$ be an injective map realising the relation $(W,S)\cleq (W',S')$. 
Let $S''=\operatorname{im}\phi\subseteq S'$, let $W''=\langle S''\rangle\leq W'$, and let $(W'',S'')$ be the corresponding parabolic subsystem of $(W',S')$. Let $\psi\colon S\to S''$ be given by $\psi(s)=\phi(s)$ for all $s\in S$. Therefore $\phi=\iota\circ\psi$, where $\iota$ is the inclusion $S''\subseteq S'$, and hence one has $(W,S)\cleq (W'',S'')\cleq (W',S')$. Let $a''_k=a_k^{(W'',S'')}$. Since $(W'',S'')$ is a parabolic subsystem of $(W',S')$, one has $\ell_{(W'',S'')}=\ell_{(W',S')}|_{W''}$, by \eqref{eq:l}. Hence $A_k^{(W'',S'')}\subseteq A_k^{(W',S')}$, and then \begin{equation}\label{eq:aiiai} a''_k\leq a'_k\quad\text{ for all }k\in \NN_0. \end{equation} We will now prove that $a_k\leq a''_k$ for all $k$. Let $M''$ be the Coxeter matrix of $(W'',S'')$, and let $N=(m''_{\psi(s),\psi(r)})_{s,r\in S}$. Since $\psi$ is a bijection, $(W(N),S)$ is Coxeter-isomorphic to $(W'',S'')$, and in particular it has growth sequence $a_k^{(W(N),S)}=a_k''$. Let $B_k$ and $B^N_k$ be the balls of radius $k$ in $\Cay(W,S)$ and $\Cay(W(N),S)$, respectively. By hypothesis $m_{s,r}\leq n_{s,r}$ for all $s,r\in S$; suppose that $N\neq M$. Without loss of generality, we may assume that there exists a unique $2$-subset $\{s_0,r_0\}\subseteq S$ such that $m_{s_0,r_0}<n_{s_0,r_0}$: the general case follows by changing one entry of the Coxeter matrix at a time and composing the resulting injections. Let $m=m_{s_0,r_0}$ and $n=n_{s_0,r_0}$. \noindent\textbf{Claim.} For all $k$, the map $\eta_k= \left.\pi_N \sigma_M\right|_{B_k} \colon B_k\to B_k^N$, where $\sigma_M$ and $\pi_N$ are defined as in \S\ref{ss:word-pbl}, is well defined and injective. \noindent\textit{Proof of the claim.} First, notice that $\pi_N\sigma_M(B_k)\subseteq B_k^N$, since $\deg(\sigma_M(w))=\ell_M(w)$ by \eqref{eq:length-section} and $\ell_N(\pi_N(\sigma_M(w)))\leq \ell_M(w)$ by \eqref{eq:length-deg}. Hence $\eta_k$ is well defined. Suppose $v,v'\in B_k$ satisfy $\eta_k(v)=\eta_k(v')$ and let $w=\eta_k(v)\in B_k^N$. 
Thus \[\pi_N\sigma_M(v)=\pi_N\sigma_M(v')=w=\pi_N\sigma_N(w).\] Then, by Cor.~\ref{cor:misc-R}, \ref{li:7}, there exist sequences of $N$-moves \begin{equation}\label{eq:Nmoves} \sigma_M(v)\longmapsto \sigma_N(w) \longmapsfrom \sigma_M(v'). \end{equation} Consider first the sequence $\sigma_M(v)\longmapsto \sigma_N(w)$ on the left, and write it as a concatenation of elementary $N$-moves \begin{equation}\label{eq:Nmove-seq} \sigma_M(v)=\bo{u}_0\stackrel{\nu_0}{\longmapsto}\bo{u}_1 \stackrel{\nu_1}{\longmapsto}\bo{u}_2 {\longmapsto}\dots {\longmapsto}\bo{u}_r \stackrel{\nu_r}{\longmapsto}\bo{u}_{r+1}=\sigma_N(w). \end{equation} Assume, by contradiction, that there exists some $t$ for which $\nu_t$ is the $N^{(2)}$-move \begin{equation}\label{eq:badmove} \nu_t\colon \bo{u}_t =\bo{u}'[s_0,r_0,n]\bo{u}'' \longmapsto \bo{u}'[r_0,s_0,n]\bo{u}''=\bo{u}_{t+1}, \end{equation} hence $n<\infty$, by \eqref{eq:Mmoves}. Let $t_0$ be the minimum of such $t$'s. Thus, the sequence of moves $\nu_{t_0-1}\circ \nu_{t_0-2}\circ\dots\circ \nu_1\circ\nu_0$ is a sequence of $M$-moves transforming $\sigma_M(v)\in \ca{R}_M(v)$ into $\bo{u}_{t_0}$. Hence, by Cor.~\ref{cor:misc-R},~\ref{li:sigma}--\ref{li:closure}, $\bo{u}_{t_0}\in \ca{R}_M(v)$. Since $n>m$ the word $\bo{u}_{t_0}$ has a subword of the form $[s_0,r_0,m+1]$. Therefore one may apply the $M$-moves \[\begin{split} \bo{u}_{t_0}&=\bo{u}'[s_0,r_0,m+1]\bo{u}''=\bo{u}'s_0[r_0,s_0,m]\bo{u}'' \stackrel{M^{(2)}}{\longmapsto} \bo{u}'s_0[s_0,r_0,m]\bo{u}'' =\\ &=\bo{u}'s_0s_0[r_0,s_0,m-1]\bo{u}'' \stackrel{M^{(1)}}{\longmapsto} \bo{u}'[r_0,s_0,m-1]\bo{u}''=\bo{u}''', \end{split}\] and hence $\deg(\bo{u}_{t_0})> \deg\bo{u}'''$, contradicting the hypothesis that $\bo{u}_{t_0}$ is ($M$-)reduced. This gives the desired contradiction and, thus, no $N^{(2)}$-move of the form \eqref{eq:badmove} can occur. Since all the remaining $N$-moves are also $M$-moves, the sequence \eqref{eq:Nmove-seq} only consists of $M$-moves. 
An analogous argument applies to the sequence $\sigma_M(v')\longmapsto \sigma_N(w)$. Hence, the sequences in \eqref{eq:Nmoves} entirely consist of $M$-moves, and by \eqref{eq:piM} \[v=\pi_M\sigma_M(v)=\pi_M\sigma_N(w)=\pi_M \sigma_M(v')=v',\] which proves the injectivity of $\eta_k$. \smallskip Let now $v\in A_k^{(W,S)}\subseteq B_k$. Then $\deg(\sigma_M(v))=k$ and the previous argument shows that $\sigma_M(v)$ is also $N$-reduced, therefore $\ell_N(\eta_k(v))=\ell_N(\pi_N\sigma_M(v))=k$. It follows that the maps $\theta_k=\eta_k|_{A_k}\colon A_k^{(W,S)}\to A^{(W(N),S)}_k$ are well defined injections, and hence \begin{equation}\label{eq:aaii} a_k\leq a_k^{(W(N),S)}=a''_k\quad\text{ for all }k\in \NN_0. \end{equation} This, together with \eqref{eq:aiiai}, completes the proof. \end{proof} Theorem~\ref{thm:an} has the following immediate consequence. \begin{cor}\label{cor:omega} If $(W,S)$ and $(W',S')$ are Coxeter systems such that $(W,S)\cleq(W',S')$, then \[\omega(W,S)\leq \omega(W',S').\] \end{cor} \subsection{Minimal non-spherical, non-affine Coxeter systems}\label{ss:min} Let $\ensuremath{\mathpzc{X}}$ be the set of (Coxeter-isomorphism classes of) non-affine, non-spherical, irreducible Coxeter systems, and let $\ensuremath{\mathpzc{M}} =\min_{\cleq} \ensuremath{\mathpzc{X}}$ be the set of $\cleq$-minimal elements of $\ensuremath{\mathpzc{X}}$. It is well known that hyperbolic Coxeter systems are characterised as those systems such that every proper irreducible parabolic subsystem is either of spherical or affine type (cf.~\cite[Ch.~V, \S4, Ex.~13]{bourbaki--gal46}). By minimality, $\ensuremath{\mathpzc{M}}$ consists of hyperbolic Coxeter systems, which are classified in an infinite family of rank-three systems, and $72$ exceptions of rank $|S|\geq 4$ (cf.~\cite[\S\S6.8--6.9]{humphreys--rgcg}). 
The infinite family consists of the $\langle a,b,c\rangle$-triangle groups with $\frac1a+\frac1b+\frac1c<1$, and among those only the $\langle2,3,7\rangle$, $\langle3,3,4\rangle$ and $\langle2,4,5\rangle$-triangle groups are $\cleq$-minimal. Among the $72$ exceptions, $35$ are in $\ensuremath{\mathpzc{M}}$. Therefore, \begin{pro}[{\cite[Thm.~6.6, Tbl. 5]{mcmullen--cgsnhm}}]\label{pro:mcmullen} $|\ensuremath{\mathpzc{M}}|=38$. \end{pro} \section{The minimal growth rate of Coxeter groups}\label{s:tau} Following the notation of \cite{gross-etal--cfcp}, let $E_{10}$ be the Coxeter system with Coxeter graph \[ \raisebox{4pt}{$\Gamma(E_{10})=$}\quad\Diagrambox{27pt}{6pt}{\TTEeightL{}}\,\,\,. \] \begin{thmx}\label{thm:tau} If $(W,S)$ is a non-spherical, non-affine Coxeter system, then its growth rate satisfies \[ \omega(W,S)\geq \tau=1.138078743\dots, \] where $\tau$ is the growth rate of the hyperbolic Coxeter system $E_{10}$. In particular, $\tau$ is the inverse of the smallest positive real root of the denominator of the Poincar\'e series $p_{E_{10}}(t)$ of the Coxeter system $E_{10}$. 
Moreover, $\tau$ is an algebraic integer of degree $127$ over $\QQ$, with minimal polynomial \[\begin{split} m_\tau(t)=&\,t^{127} - t^{125} - t^{120} + t^{118} - t^{116} - t^{115} + t^{109} + t^{106} + t^{103} + t^{102} + 2t^{101} + t^{100} + t^{97} \\& + t^{96} + t^{91} - t^{90} - 2t^{89} - t^{88} - t^{87} - t^{86} - t^{85} - 2t^{84} - 2t^{83} - t^{82} - 2t^{81} - 3t^{80} - t^{79}\\& - t^{78} - 2t^{77} - t^{76} - t^{75} - t^{74} - t^{72} - t^{71} + t^{70} + t^{69} + 2t^{67} + 2t^{66} + t^{65} + 2t^{64} + 2t^{63}\\& + 2t^{62} + 3t^{61} + 2t^{60} + 2t^{59} + 3t^{58} + 3t^{57} + 2t^{56} + 2t^{55} + 2t^{54} + t^{53} + 2t^{52} + 2t^{51}\\& + t^{46} - t^{45} - 2t^{44} - t^{43} - t^{42} - 2t^{41} - 2t^{40} - 2t^{39} - 2t^{38} - 2t^{37} - 2t^{36} - 2t^{35} - t^{34}\\& - 2t^{33} - 3t^{32} - t^{31} - t^{29} - t^{28} - t^{27} + t^{25} + t^{22} + t^{21} + t^{20} + t^{19} + t^{18} + t^{17} + t^{16}\\& + t^{15} + t^{14} + t^{13} + t^{12} - t - 1. \end{split}\] The number $\tau$ is a Perron number, i.e., an algebraic integer whose modulus strictly exceeds the moduli of its algebraic conjugates (cf.~\cite{lind--eftms,lind--etmsrcai}). \end{thmx} \begin{proof} By monotonicity of the function $\omega$ with respect to $\cleq$ (cf.~Cor.~\ref{cor:omega}) and by Prop.~\ref{pro:mcmullen}, it suffices to compute $\omega(W,S)$ for finitely many $(W,S)$. Moreover, $p_{(W,S)}(t)$ is a power series with non-negative coefficients, and also a rational function, by Prop.~\ref{pro:ps}. Thus, $\omega(W,S)$ is the inverse of the minimal, positive real root of the denominator of $p_{(W,S)}(t)$. \end{proof} Theorem~\ref{thm:tau} can be stated in terms of a gap in the set \[\Omega=\{\omega(W,S) \mid (W,S) \text{ Coxeter system}\,\}\subseteq \{0,1\}\cup \RR_{\geq \tau}.\] \begin{remark}\label{rem:notes-tau} \begin{renum} \item The direct verifications for the $38$ relevant Coxeter systems were performed (cf.~\cite{terragni--dhcs}) with the help of the computational algebra system Magma. 
The code is available at \url{https://sites.google.com/site/tomterragni/research/computations}. \item The denominator of $p_{E_{10}}(t)$ is $(t-1)m_{\tau^{-1}}(t)$. \item In many cases $\omega(W,S)$ is an algebraic integer, and also a Perron number. It is known that every Perron number $\lambda$ is realised as the Perron--Frobenius eigenvalue of an aperiodic, non-negative integral matrix $P_\lambda$ (cf.~\cite[Thm.~1]{lind--etmsrcai}). Lind's proof is constructive; however, the algorithm given in the proof may produce a Perron--Frobenius matrix of non-minimal size. It would be interesting to find a minimal-sized Perron--Frobenius matrix for $\tau$. \item The Poincar\'e series of (all but one) exceptional hyperbolic Coxeter systems are also listed in \cite{chapovalov-leites-stekolshchik--pshcgfvfd}. In the same paper, some radii of convergence are computed. \item It is quite surprising that $\tau$ is not realised as the growth rate of any of the small-rank Coxeter systems, but is instead associated with the Coxeter system $E_{10}$. However, the growth rate of one of the $\cleq$-minimal rank-three hyperbolic Coxeter groups, namely the one with Coxeter system $\langle 2,3,7\rangle$, is Lehmer's number $\lambda_{\text{Lehmer}}=1.17\dots$ (cf.~\cite{hironaka--wiln}), and an interesting coincidence occurs. Let \[\lambda_\rho(W,S)= \inf\left( \{\lambda_\rho (w) \mid w\in W\}\cap \RR_{>1}\right),\] where $\lambda_\rho(w)$ is the spectral radius of the matrix $\rho(w)$, and $\rho$ is Tits' reflection representation. The number $\lambda_\rho(W,S)$ represents a universal bound for eigenvalues of elements in Coxeter groups. Moreover, if $(W,S)$ is hyperbolic, then $\log\lambda_\rho(W,S)$ is interpreted as a lower bound for the length of non-degenerate, closed hyperbolic geodesics in the orbifold $\HH^{|S|-1}/W$. 
McMullen proved that \[\inf_{(W,S)}\lambda_\rho (W,S)=\lambda_{\text{Lehmer}},\] the infimum being taken as $(W,S)$ runs through the non-affine, non-spherical Coxeter systems (cf.~\cite{mcmullen--cgsnhm}). The infimum is actually a minimum, and it is attained \emph{exactly} for the Coxeter system $E_{10}$. It would be interesting to understand this phenomenon. \end{renum} \end{remark} \section{Rigidity and growth}\label{s:rig-growth} It is well known that there exist non-Coxeter-isomorphic Coxeter systems for which the groups are abstractly isomorphic. For a discussion of the isomorphism problem for Coxeter groups, see \cite{charney-davis--wcsdcg,muehlherr--ipcg,bahls--ipcg}, and references therein. \subsection{Coxeter generating systems}\label{ss:cox-gensyst} Let $G$ be a group generated by a finite set of involutions $R\subseteq G$. Then $M(R)=\left(\ord(sr)\right)_{s,r\in R}$ is a Coxeter matrix. Let $(W,R)$ be the Coxeter system with Coxeter matrix $M(R)$. The identity on $R$ induces a surjective homomorphism of groups $j_R\colon W\to G$. Moreover, when $j_R$ is an isomorphism, $G$ is a \emph{Coxeter group with Coxeter generating system $R$}. If $(W,S)$ is a Coxeter system and $\sigma$ is either an inner automorphism or the automorphism of $W$ induced by a Coxeter automorphism of $(W,S)$, then $\sigma(S)$ is another Coxeter generating system, and $(W,\sigma(S))$ is Coxeter-isomorphic to $(W,S)$. In general, any inner-by-Coxeter automorphism preserves the Coxeter-isomorphism type. An automorphism which is not inner-by-Coxeter will be called \emph{exotic}. \subsection{Isomorphisms of Coxeter groups}\label{ss:iso} A major problem in the theory of Coxeter groups is to find all possible Coxeter generating systems of a given Coxeter group $W$. If, for any two Coxeter generating sets $R,S$ of $W$, the Coxeter systems $(W,S)$ and $(W,R)$ are Coxeter-isomorphic, then $W$ is called \emph{rigid}. 
It is well known that there exist non-rigid Coxeter groups, e.g., for $n,m$ odd there are exotic isomorphisms \begin{equation}\label{eq:exceptional-iso} W(I_2(2m))\simeq W(I_2(m)\times A_1),\quad\text{ and }\quad W(B_n)\simeq W(D_n\times A_1). \end{equation} There are standard procedures which realise exotic isomorphisms between Coxeter systems, e.g., Brady \emph{et al.} introduced the \emph{diagram twisting} (cf.~\cite[\S4]{brady-etal--rcgag} and \S\ref{s:rig-growth}), and Howlett and M\"uhlherr introduced a construction, the \emph{elementary reductions}, which deal with exotic isomorphisms $(W,S)\to (W,R)$ for which the set of reflections $S^W$ is different from $R^W$ (cf.~\cite{howlett-muehlherr--icgwdnpr}). Reductions generalise the exotic isomorphisms \eqref{eq:exceptional-iso}. Several classes of Coxeter groups are known to be rigid or rigid up to diagram twisting. For instance, if any of the following conditions is satisfied for a Coxeter generating system $S$ of $W$, then $W$ is rigid up to diagram twisting (cf.~\cite{brady-etal--rcgag, bahls--ipcg,muehlherr--ipcg}). \begin{renum} \item $(W,S)$ is right-angled, i.e., $m_{s,r}\in \{2,\infty\}$ for all $s,r\in S$, $s\neq r$; \item $(W,S)$ is infinite and $m_{s,r}<\infty$ for all $s,r\in S$; \item $(W,S)$ can act faithfully, properly and cocompactly on a contractible manifold; \item $(W,S)$ is skew-angled, i.e., $m_{s,r}\neq 2$ for all $s,r\in S$; \item $\Gamma_\infty(W,S)$ is a tree, where $\Gamma_\infty$ is the variant of the Coxeter graph defined in \cite[Ch.~IV, \S1, Ex. 11]{bourbaki--gal46}. 
\end{renum} \subsection{Mutations of Coxeter groups} \begin{dfn}\label{dfn:twist} Let $M$ be a Coxeter matrix over $S$, and suppose that there exists a partition $S=X\sqcup Y \sqcup T \sqcup Z$ and a Coxeter-automorphism $\sigma$ of the subsystem $(W_X,X)$ satisfying \begin{renum} \item\label{li:TY} $m_{t,y}=\infty$ for all $t\in T$ and $y\in Y$, \item\label{li:ZY} $m_{z,y}<\infty$ for all $z\in Z$ and $y\in Y$, and \item\label{li:ZX} for all $z\in Z$ and $x\in X$ one has $m_{z,\sigma(x)}=m_{z,x}$. \end{renum} Then, the $4$-tuple $(M,X,Y,\sigma)$ is called \emph{mutable}. Associated with a mutable tuple $(M,X,Y,\sigma)$ there is a Coxeter matrix $\mu(M,X,Y,\sigma)=(n_{r,s})_{r,s\in S}$, its \emph{mutation}, given by \begin{equation}\label{eq:mutation} n_{s,r}=n_{r,s}=\begin{cases} m_{\sigma(r),s}& \text{ if } r\in X, s\in Y,\\ m_{\sigma(r),\sigma(s)}& \text{ if } r,s\in X,\\ m_{r,s}& \text{ otherwise.} \end{cases}\end{equation} If $(M,X,Y,\sigma)$ is mutable, then $(\mu(M,X,Y,\sigma), X,Y, \sigma^{-1})$ is mutable and it is called the \emph{inverse mutable} $4$-tuple since $\mu(\mu(M,X,Y,\sigma), X,Y, \sigma^{-1})=M$. The relation ``$N$ is a mutation of $M$'' is symmetric, and therefore its transitive closure is an equivalence relation $\sim$ on Coxeter systems. \end{dfn} \begin{remark}\begin{renum} \item The partition associated with a mutable tuple $(M,X,Y,\sigma)$ is determined by $X$, $Y$ together with conditions \ref{li:TY}--\ref{li:ZY}, and therefore $T,Z$ may be omitted from the notation. \item Many Coxeter matrices $M$ only admit trivially mutable tuples, i.e., tuples with $\sigma=\operatorname{id}_X$. Even when a non-trivial tuple exists, it may happen that the associated mutation is Coxeter-isomorphic to $M$. If this is not the case, $(M,X,Y,\sigma)$ is called \emph{effective}. \item The operation of mutation is a generalisation of the diagram twisting (cf.~\cite{brady-etal--rcgag}). 
Diagram twists are mutations satisfying the additional conditions (a) $W_X$ is finite, (b) $\sigma(x)=x^{w_0(X)}$ is the conjugation by the longest element of $W_X$, and (c) $m_{z,x}=2$ for all $z\in Z$ and $x\in X$. Effective diagram twists determine exotic isomorphisms of Coxeter groups. \end{renum} \end{remark} \begin{thmx}\label{thm:mu} Let $(W,S)$ be a Coxeter system with Coxeter matrix $M$, and let $(M,X,Y,\sigma)$ be a mutable tuple for $(W,S)$. Let $N= \mu(M,X,Y,\sigma)$, and let $(W',S')$ be the Coxeter system with Coxeter matrix $N$. Then there is a bijection $\hbox to 1.5ex{\hrulefill}^\sharp\colon \ca{F}=\ca{F}(W,S)\to \ca{F}(W',S')=\ca{F}'$, such that $(W_I,I)$ is Coxeter-isomorphic to $(W'_{I^\sharp},I^\sharp)$ for all $I\in \ca{F}$. Moreover, if $(W,S) \sim (W',S')$ then \begin{equation}\label{eq:pmutation} p_{(W,S)}(t)=p_{(W',S')}(t). \end{equation} \end{thmx} \begin{proof} Let $S=X \sqcup Y \sqcup Z \sqcup T $ decompose as in Def.~\ref{dfn:twist}, and let $I\in\ca{F}$. Since every edge of a spherical graph must have a finite label, either \begin{itemize} \item[(a)] $I\subseteq X\sqcup T \sqcup Z$, or \item[(b)] $I\subseteq X\sqcup Y\sqcup Z$ and $I\cap Y\neq \emptyset$. \end{itemize} Suppose first that (a) holds, and define $I^\sharp=\{r^\sharp=r\mid r\in I\}$. By \eqref{eq:mutation}, for $r^\sharp, s^\sharp\in I^\sharp$ one has \[n_{r^\sharp,s^\sharp}=n_{r,s}=\begin{cases} m_{\sigma(r),\sigma(s)} &\text{ if } r,s\in X,\\ m_{r,s} &\text{ if } r\in X, s\not\in X,\\ m_{r,s} &\text{ if } r,s\not\in X. \end{cases}\] Since $\sigma$ is a Coxeter-automorphism of $(W_X,X)$, we have $m_{\sigma(r),\sigma(s)}=m_{r,s}$ for all $r,s\in X$, so that $n_{r^\sharp,s^\sharp}=m_{r,s}$ in every case.
\medskip Suppose now that (b) holds, and define $I^\sharp=\{r^\sharp \mid r\in I\}$, where now \begin{equation}\label{eq:sharp} r^\sharp=\begin{cases} \sigma^{-1}(r) &\text{ if } r\in X,\\ r &\text{ if } r\not\in X.\end{cases} \end{equation} Then, for $r^\sharp, s^\sharp\in I^\sharp$, by \eqref{eq:mutation}, \eqref{eq:sharp} and Def.~\ref{dfn:twist},~\ref{li:ZX}, one has \[n_{r^\sharp,s^\sharp}=\begin{cases} m_{\sigma(r^\sharp),\sigma(s^\sharp)}=m_{r,s} &\text{ if } r^\sharp,s^\sharp\in X ,\\ m_{\sigma(r^\sharp),s^\sharp}=m_{r,s} &\text{ if } r^\sharp\in X, s^\sharp\in Y ,\\ m_{r^\sharp,s^\sharp}=m_{\sigma^{-1}(r),s}=m_{r,s} &\text{ if } r^\sharp\in X, s^\sharp\in Z,\\ m_{r^\sharp,s^\sharp}=m_{r,s} &\text{ if } r,s\not\in X. \end{cases}\] Hence, $N_{I^\sharp}$ and $M_I$ determine Coxeter-isomorphic systems. It follows that $I^\sharp\in \ca{F}'$ and that (a) holds for $I^\sharp$ if, and only if, (a) holds for $I$. Thus, the map $I\mapsto I^\sharp$ preserves the Coxeter-isomorphism type, and it is invertible (its inverse being the $\sharp$-map associated to the inverse mutable tuple). The identity \eqref{eq:pmutation} then follows from Steinberg's formula \eqref{eq:sph-steinberg}. \end{proof} \begin{cor}\label{cor:mgr-cox} Suppose that $W$ is rigid up to diagram twisting, and let $S,R$ be Coxeter generating systems for $W$ (cf.~\S\ref{ss:cox-gensyst}). Then \[p_{(W,S)}(t)=p_{(W,R)}(t)\quad \text{ and }\quad\omega(W,S)=\omega(W,R).\] Let $p_{W,\textup{Cox}}(t)$ and $\omega_{\textup{Cox}}(W)$ be these common values. \end{cor} Theorem \ref{thm:mu} implies that effective mutations which are not diagram twists can be regarded as procedures to produce non-isomorphic (and \emph{a fortiori}, non Coxeter-isomorphic) Coxeter systems with the same Poincar\'e series.
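For readers who wish to experiment, the rule \eqref{eq:mutation} is straightforward to implement. The following Python sketch (our illustration, written with $0$-based indices; it is not part of the original text) applies a mutation to a Coxeter matrix and checks that the inverse mutable tuple recovers $M$, using the rank-seven example discussed next.

```python
import math

INF = math.inf  # encodes the label m_{r,s} = infinity

def mutate(M, X, Y, sigma):
    """Mutation of a symmetric Coxeter matrix M (list of lists) for index
    sets X, Y and a permutation sigma of X (given as a dict), following the
    defining rule: n_{r,s} = m_{sigma(r),sigma(s)} if r,s in X,
    n_{r,s} = m_{sigma(r),s} if r in X and s in Y, and
    n_{r,s} = m_{r,s} otherwise."""
    f = lambda i: sigma.get(i, i)
    n = len(M)
    N = [[0] * n for _ in range(n)]
    for r in range(n):
        for s in range(n):
            if r in X and s in X:
                N[r][s] = M[f(r)][f(s)]
            elif r in X and s in Y:
                N[r][s] = M[f(r)][s]
            elif s in X and r in Y:
                N[r][s] = M[f(s)][r]   # symmetric counterpart of the X-Y case
            else:
                N[r][s] = M[r][s]
    return N

# Rank-seven example: X = {s1,...,s4}, Y = {s5}, sigma the 3-cycle (s1 s2 s3).
M = [[1, 3, 3, 2, 3, 4, 2],
     [3, 1, 3, 2, 3, 4, 2],
     [3, 3, 1, 2, 2, 4, 3],
     [2, 2, 2, 1, 3, 3, 2],
     [3, 3, 2, 3, 1, 2, INF],
     [4, 4, 4, 3, 2, 1, 3],
     [2, 2, 3, 2, INF, 3, 1]]
X, Y = {0, 1, 2, 3}, {4}
sigma = {0: 1, 1: 2, 2: 0}
sigma_inv = {v: k for k, v in sigma.items()}

N = mutate(M, X, Y, sigma)
assert N != M                           # the mutation is non-trivial
assert mutate(N, X, Y, sigma_inv) == M  # the inverse mutable tuple recovers M
```

Only the entries between $X$ and $Y$ change here, since $\sigma$ is a Coxeter-automorphism of $(W_X,X)$ and condition \ref{li:ZX} fixes the entries towards $Z$.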
\begin{exa}Consider the rank-seven Coxeter system $(W,S)$ with Coxeter matrix \[M=\begin{pmatrix} 1 & 3 & 3 & 2 & 3 & 4 & 2\\ 3 & 1 & 3 & 2 & 3 & 4 & 2\\ 3 & 3 & 1 & 2 & 2 & 4 & 3\\ 2 & 2 & 2 & 1 & 3 & 3 & 2\\ 3 & 3 & 2 & 3 & 1 & 2 & \infty\\ 4 & 4 & 4 & 3 & 2 & 1 & 3\\ 2 & 2 & 3 & 2 & \infty & 3 & 1 \end{pmatrix}. \] Let $X =\{s_1,s_2,s_3,s_4\}$, $Y=\{s_5\}$, $Z=\{s_6\}$, $T=\{s_7\}$, and let $\sigma=(1,2,3)$. Then $(M,X,Y,\sigma)$ is mutable, with mutation displayed in~Fig.~\ref{fig:mut}. Moreover, $N=\mu(M,X,Y,\sigma)$ is a proper mutation, i.e., $N$ is not obtained from $M$ by diagram twisting. \begin{figure}[htbp]\begin{center} \begin{tikzpicture}[auto,inner sep=0.5mm, vertex/.style={circle,draw=black,minimum size=4mm}] \node[vertex] (1) at (3,4) {$\scriptstyle s_1$}; \node[vertex] (2) at (4,3) {$\scriptstyle s_2$}; \node[vertex] (3) at (5,4) {$\scriptstyle s_3$}; \node[vertex] (4) at (1.5,4) {$\scriptstyle s_4$}; \node[vertex] (5) at (1,2) {$\scriptstyle s_5$}; \node[vertex] (6) at (4,1) {$\scriptstyle s_6$}; \node[vertex] (7) at (6,2) {$\scriptstyle s_7$}; % \draw (1) to (2); \draw (1) to (3); \draw (1) to (5); \draw (1) to node [pos=0.2,swap] {$\scriptstyle 4$} (6); % \draw (2) to (3); \draw (2) to (5); \draw (2) to node [pos=0.3] {$\scriptstyle 4$} (6); % \draw (3) to node[above,right] {$\scriptstyle 4$} (6); \draw (3) to (7); % \draw (4) to (5); \draw (4) to (6); % \draw (5) to node [pos=0.25,swap] {$\scriptstyle \infty$} (7); % \draw (6) to (7); % \node at (4,3.6) {$\scriptstyle\sigma$}; \draw[ <-] (3.8,3.8) arc [start angle=135, end angle=405, x radius=3mm, y radius=2mm]; \draw [->](6.5,2.5) -- node {$\mu$} (8,2.5); % \begin{scope}[xshift=7.5cm] \node[vertex] (1) at (3,4) {$\scriptstyle s_2$}; \node[vertex] (2) at (4,3) {$\scriptstyle s_3$}; \node[vertex] (3) at (5,4) {$\scriptstyle s_1$}; \node[vertex] (4) at (1.5,4) {$\scriptstyle s_4$}; \node[vertex] (5) at (1,2) {$\scriptstyle s_5$}; \node[vertex] (6) at (4,1) {$\scriptstyle s_6$}; \node[vertex] (7) 
at (6,2) {$\scriptstyle s_7$}; % \draw (1) to (2); \draw (1) to (3); \draw (1) to (5); \draw (1) to node [pos=0.2,swap] {$\scriptstyle 4$} (6); % \draw (2) to (3); \draw (2) to (5); \draw (2) to node [pos=0.3] {$\scriptstyle 4$} (6); % \draw (3) to node[above,right] {$\scriptstyle 4$} (6); \draw[bend left] (2) to (7); % \draw (4) to (5); \draw (4) to (6); % \draw (5) to node [pos=0.25,swap] {$\scriptstyle \infty$} (7); % \draw (6) to (7); \end{scope} \end{tikzpicture}\end{center} \caption{A proper mutation.}\label{fig:mut} \end{figure} \end{exa} \subsection{A conjecture}\label{ss:conj} Consider the group $\PGL(2,\ZZ)\simeq (C_2\times C_2)\ast_{C_2}S_3$. It is well known that $\PGL(2,\ZZ)\simeq W$, where $(W,S)$ is the Coxeter system $\langle 2,3,\infty\rangle$ with Coxeter graph $\,\CoxGrHCI{}{\infty}\,$. Hence the minimal growth rate satisfies $\omega(\PGL(2,\ZZ))\leq \omega(W,S)=\alpha$, where $\alpha$ is the \emph{plastic number}, with minimal polynomial $m_\alpha(t)=t^3-t-1$. The converse inequality was proved by Bucher and Talambutsa (cf.~\cite[\S6]{bucher-talambutsa--egrfap}). Therefore, the following problem seems to be of some interest. \begin{conjx}\label{conj} Let $W$ be a Coxeter group rigid up to diagram twisting, and let $\omega_{\textup{Cox}}(W)$ be defined as in Cor.~\ref{cor:mgr-cox}. Then $\omega(W)=\omega_{\textup{Cox}}(W)$. \end{conjx} \begin{remark} \begin{renum} \item If $W$ is a product of spherical and affine irreducible Coxeter systems, its Poincar\'e series depends on the chosen generating set. However, the minimal growth rate and the growth rate coincide, $\omega(W)=\omega(W,S)$, and their common value is either $0$ or $1$, depending only on whether the group is finite. \item The rigidity hypothesis in Conj.~\ref{conj} cannot be relaxed since, in general, elementary reductions do not preserve the growth rate, as the following example shows.
Let \[M=\begin{pmatrix} 1 & 3 & 2 & 3 & \infty\\ 3 & 1 & 2 & 2 & 2\\ 2 & 2 & 1 & 3 & 2\\ 3 & 2 & 3 & 1 & 4\\ \infty & 2 & 2 & 4 & 1 \end{pmatrix},\quad\quad \Gamma(M)=\,\, \raisebox{-24pt}{\begin{tikzpicture}[auto,inner sep=0.5mm, vertex/.style={circle,draw=black,minimum size=4mm}] \node[vertex] (1) at (0,0) {$\scriptstyle s_1$}; \node[vertex] (2) at (0,1.5) {$\scriptstyle s_2$}; \node[vertex] (3) at (3,0) {$\scriptstyle s_3$}; \node[vertex] (4) at (1.5,0) {$\scriptstyle s_4$}; \node[vertex] (5) at (1.5,1.5) {$\scriptstyle s_5$}; % \draw (1) to (2); \draw (1) to (4); \draw (1) to node {$\scriptstyle \infty$} (5); \draw (3) to (4); \draw (4) to node[above,right] {$\scriptstyle 4$} (5); \end{tikzpicture}}. \] Then $s_5$ is a pseudo-transposition, corresponding to the parabolic subsystem of type $B_3$ generated by $J=\{s_3,s_4,s_5\}$. Let $r_i=s_i$ for $i \in \{1,\dots,4\}$, let $r_5=s_5s_4s_5$ and let $r_6=w_0(J)=s_3s_4s_3s_5s_4s_3s_5s_4s_5$ be the longest element of the parabolic subsystem $(W_J,J)$. Then, $R=\{r_i\mid i\in \{1,\dots,6\}\,\}$ is a Coxeter generating system for $W(M)$ (cf.~\cite{howlett-muehlherr--icgwdnpr}). 
Its Coxeter matrix $M'=M(R)$ is \[M'=\begin{pmatrix} 1 & 3 & 2 & 3 & \infty & \infty\\ 3 & 1 & 2 & 2 & 2 & 2\\ 2 & 2 & 1 & 3 & 3 & 2\\ 3 & 2 & 3 & 1 & 2 & 2\\ \infty & 2 & 3 & 2 & 1 & 2\\ \infty & 2 & 2 & 2 & 2 & 1 \end{pmatrix},\quad\quad \Gamma(M')=\,\, \raisebox{-24pt}{\begin{tikzpicture}[auto,inner sep=0.5mm, vertex/.style={circle,draw=black,minimum size=4mm}] \node[vertex] (1) at (1.5,0) {$\scriptstyle r_1$}; \node[vertex] (2) at (1.5,1.5) {$\scriptstyle r_2$}; \node[vertex] (3) at (4.5,0) {$\scriptstyle r_3$}; \node[vertex] (4) at (3,0) {$\scriptstyle r_4$}; \node[vertex] (5) at (3,1.5) {$\scriptstyle r_5$}; \node[vertex] (6) at (0,0) {$\scriptstyle r_6$}; % \draw (1) to (2); \draw (1) to (4); \draw (1) to node {$\scriptstyle \infty$} (5); \draw (1) to node [swap] {$\scriptstyle \infty$} (6); \draw (3) to (4); \draw (3) to (5); \end{tikzpicture}}. \] By direct computation one sees that $\omega(W,S)= 2.24167\dots$, while $\omega(W,R)=2.61578\dots$. \end{renum} \end{remark} \bibliographystyle{amsalpha}
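As an independent numerical check on the discussion of $\PGL(2,\ZZ)$ in \S\ref{ss:conj}, the following sketch (ours, assuming the SymPy library; not part of the original text) evaluates Steinberg's formula \eqref{eq:sph-steinberg} for $\langle 2,3,\infty\rangle$, whose spherical subsets are the empty set, the three singletons, and the two pairs carrying a finite label, and recovers the plastic number $\alpha$ with $m_\alpha(t)=t^3-t-1$.

```python
import sympy as sp

t = sp.symbols('t')

def dihedral(m):
    """Poincare polynomial of the finite dihedral group I_2(m) of order 2m."""
    return (1 + t) * sum(t**k for k in range(m))

# Spherical subsets of <2,3,oo> (labels m12 = 3, m13 = 2, m23 = oo):
rhs = (1                      # the empty subset
       - 3 / (1 + t)          # three singletons, each with W_I(t) = 1 + t
       + 1 / dihedral(3)      # {s1, s2}, type I_2(3) = A_2
       + 1 / dihedral(2))     # {s1, s3}, type A_1 x A_1

# Steinberg's formula for infinite W: 1 / p(1/t) = sum over spherical I
# of (-1)^|I| / p_I(t).
p = sp.cancel(1 / rhs.subs(t, 1 / t))

num, den = sp.fraction(p)
# The growth rate is the reciprocal of the smallest positive zero of den.
t0 = min(r for r in sp.nroots(den) if r.is_real and r > 0)
alpha = 1 / t0
print(sp.expand(den))   # up to sign: t**3 + t**2 - 1
print(alpha)            # approx 1.32472, the plastic number
```

Clearing denominators gives $p(t)=(t+1)^2(t^2+t+1)/(1-t^2-t^3)$, so the reciprocal of the smallest positive pole indeed satisfies $\alpha^3-\alpha-1=0$.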
https://arxiv.org/abs/1606.01809
Syzygy bundles and the weak Lefschetz property of almost complete intersections
Deciding the presence of the weak Lefschetz property often is a challenging problem. In this work an in-depth study is carried out in the case of Artinian monomial ideals with four generators in three variables. We use a connection to lozenge tilings to describe semistability of the syzygy bundle of such an ideal, to determine its generic splitting type, and to decide the presence of the weak Lefschetz property. We provide results in both characteristic zero and positive characteristic.
\section{Introduction} \label{sec:intro} The \emph{weak Lefschetz property} for a standard graded Artinian algebra $A$ over a field $K$ is a natural property. It says that there is a linear form $\ell \in A$ such that the multiplication map $\times \ell : [A]_i \rightarrow [A]_{i+1}$ has maximal rank for all $i$ (i.e., it is injective or surjective). Its presence implies, for example, restrictions on the Hilbert function and graded Betti numbers of the algebra (see \cite{HMNW, MN}). Recent studies have connected the weak Lefschetz property to many other questions (see, e.g., \cite{BMMNZ,BK-p,GIV, MMO, MMMNW, MN-lower, NS1,St-faces}). Thus, a great variety of tools from representation theory, topology, vector bundle theory, hyperplane arrangements, plane partitions, splines, differential geometry, among others, has been used to decide the presence of the weak Lefschetz property (see, e.g., \cite{BMMNZ2, BK, CGJL, HSS, HMMNWW, KRV, KV, LZ, MMN-2012,MN-survey, Stanley-1980}). An important aspect has also been the role of the characteristic of $K$. Any Artinian quotient of a polynomial ring in at most two variables has the weak Lefschetz property regardless of the characteristic of $K$ (see \cite{MZ0} and \cite[Proposition 2.7]{CN-small-type}). This is far from true for quotients of rings with three or more variables. Here we consider quotients $R/I$, where $R = K[x,y,z]$ and $I$ is a monomial ideal containing a power of $x, y$, and $z$. If $I$ has only three generators, then $R/I$ has the weak Lefschetz property, provided the base field has characteristic zero (see \cite{Stanley-1980, ikeda, Wa, BTK}). We focus on the case where $I$ has four minimal generators, extending previous work in \cite{BK, CN-IJM, MMN-2011}. To this end we use a combinatorial approach developed in \cite{CN-resolutions, CN-small-type} that involves lozenge tilings, perfect matchings, and families of non-intersecting lattice paths. Some of our results have already been used in \cite{MMMNW}.
In Section~\ref{sec:trireg}, we recall the connection between monomial ideals in three variables and so-called triangular regions. We use it to establish sufficient and necessary conditions for a balanced triangular subregion to be tileable (see Corollary~\ref{cor:pp-tileable}). In Section~\ref{sec:alg}, we show that the tileability of a triangular subregion $T_d (I)$ is related to the semistability of the syzygy bundle of the ideal $I$ (see Theorem~\ref{thm:tileable-semistable}). We further recall the relation between lozenge tilings of triangular regions and the weak Lefschetz property. All the results up to this point are true for arbitrary Artinian monomial ideals of $R$. In Section~\ref{sec:amaci} we consider exclusively Artinian monomial ideals with four minimal generators. Our results on the weak Lefschetz property of $R/I$ are summarized in Theorem~\ref{thm:amaci-wlp}. In particular, they provide further evidence for a conjecture in \cite{MMN-2011}, which concerns the case where $R/I$ is a level algebra. Furthermore, we determine the generic splitting type of the syzygy bundle of $I$ in all cases but one (see Propositions~\ref{pro:st-nss} and \ref{pro:split-type-semist}). In the remaining case we show that determining the generic splitting type is equivalent to deciding whether $R/I$ has the weak Lefschetz property (see Theorem~\ref{thm:equiv}). This result is independent of the characteristic. \section{Triangular regions}\label{sec:trireg} Besides introducing notation, we recall needed facts from the combinatorial approach to Lefschetz properties developed in \cite{CN-resolutions, CN-small-type}. We also establish a new criterion for tileability by lozenges. Let $R = K[x,y,z]$ be a standard graded polynomial ring over a field $K$, i.e., $\deg{x} = \deg{y} = \deg{z} = 1$. Unless specified otherwise, $K$ is always an arbitrary field. All $R$-modules in this paper are assumed to be finitely generated and graded. 
Let $A = R/I = \oplus_{j \ge 0} [A]_j$ be a graded quotient of $R$. The \emph{Hilbert function} of $A$ is the function $h_A: \ZZ \to \ZZ$ given by $h_A(j) = \dim_K [A]_j$. The \emph{socle} of $A$, denoted $\soc{A}$, is the annihilator of $\mathfrak{m} = (x, y, z)$, the homogeneous maximal ideal of $R$, that is, $\soc{A} = \{a \in A \st a \cdot \mathfrak{m} = 0\}$. \subsection{Triangular regions}\label{sub:tri}~ Let $I$ be a monomial ideal of $R$. As $R/I$ is standard graded, the monomials of $R$ of degree $d \in \ZZ$ that are \emph{not} in $I$ form a $K$-basis of $[R/I]_d$. Let $d \geq 1$ be an integer. Consider an equilateral triangle of side length $d$ that is composed of $\binom{d}{2}$ downward-pointing ($\dntri$) and $\binom{d+1}{2}$ upward-pointing ($\uptri$) equilateral unit triangles. We label the downward- and upward-pointing unit triangles by the monomials in $[R]_{d-2}$ and $[R]_{d-1}$, respectively, as follows: place $x^{d-1}$ at the top, $y^{d-1}$ at the bottom-left, and $z^{d-1}$ at the bottom-right, and continue labeling such that, for each pair of an upward- and a downward-pointing triangle that share an edge, the label of the upward-pointing triangle is obtained from the label of the downward-pointing triangle by multiplying with a variable. The resulting labeled triangular region is the \emph{triangular region (of $R$) in degree $d$} and is denoted $\mathcal{T}_d$. See Figure~\ref{fig:triregion-R}(i) for an illustration. 
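As a quick sanity check on this construction (our illustration, not from the original text), the triangle counts $\binom{d}{2}$ and $\binom{d+1}{2}$ match the numbers of monomials of degrees $d-2$ and $d-1$ in three variables:

```python
from math import comb
from itertools import combinations_with_replacement

def num_monomials(deg, nvars=3):
    """Number of monomials of total degree deg in nvars variables."""
    return len(list(combinations_with_replacement(range(nvars), deg)))

d = 4
down = num_monomials(d - 2)   # downward triangles carry the degree d-2 labels
up   = num_monomials(d - 1)   # upward triangles carry the degree d-1 labels
assert (down, up) == (comb(d, 2), comb(d + 1, 2))
print(down, up)   # 6 10
```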
\begin{figure}[!ht] \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=1]{figs/triregion-R4}\\ \emph{(i) $\mathcal{T}_4$} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=1]{figs/triregion-RmodI}\\ \emph{(ii) $T_4(xy, y^2, z^3)$} \end{minipage} \caption{A triangular region with respect to $R$ and with respect to $R/I$.} \label{fig:triregion-R} \end{figure} Throughout this manuscript we order the monomials of $R$ with the \emph{graded reverse-lexicogra\-phic order}, that is, $x^a y^b z^c > x^p y^q z^r$ if either $a+b+c > p+q+r$ or $a+b+c = p+q+r$ and the \emph{last} non-zero entry in $(a-p, b-q, c-r)$ is \emph{negative}. For example, in degree $3$, \[ x^3 > x^2y > xy^2 > y^3 > x^2z > xyz > y^2z > xz^2 > yz^2 > z^3. \] Thus in $\mathcal{T}_4$, see Figure~\ref{fig:triregion-R}(i), the upward-pointing triangles are ordered starting at the top and moving down-left in lines parallel to the upper-left edge. We generalize this construction to quotients by monomial ideals. Let $I$ be a monomial ideal of $R$. The \emph{triangular region (of $R/I$) in degree $d$}, denoted by $T_d(I)$, is the part of $\mathcal{T}_d$ that is obtained after removing the triangles labeled by monomials in $I$. Note that the labels of the downward- and upward-pointing triangles in $T_d(I)$ form $K$-bases of $[R/I]_{d-2}$ and $[R/I]_{d-1}$, respectively. It is more convenient to illustrate such regions with the removed triangles darkly shaded instead of being removed. See Figure~\ref{fig:triregion-R}(ii) for an example. Notice that the regions missing from $\mathcal{T}_d$ in $T_d(I)$ can be viewed as a union of (possibly overlapping) upward-pointing triangles of various side lengths that include the upward- and downward-pointing triangles inside them. Each of these upward-pointing triangles corresponds to a minimal generator of $I$ that has, necessarily, degree at most $d-1$.
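The comparison rule above is easy to implement. The following Python sketch (ours, not part of the original text) sorts the exponent vectors of the degree-$3$ monomials by that rule and reproduces the displayed chain:

```python
from functools import cmp_to_key
from itertools import combinations_with_replacement

def grevlex_cmp(u, v):
    """Compare exponent vectors u, v of x^a y^b z^c; positive iff u > v in
    graded reverse-lexicographic order: larger total degree wins, and for
    equal degrees the last nonzero entry of u - v must be negative."""
    du, dv = sum(u), sum(v)
    if du != dv:
        return du - dv
    for d in reversed([a - b for a, b in zip(u, v)]):
        if d != 0:
            return -d
    return 0

def monomials(deg, nvars=3):
    """Exponent vectors of all monomials of total degree deg."""
    out = []
    for choice in combinations_with_replacement(range(nvars), deg):
        e = [0] * nvars
        for i in choice:
            e[i] += 1
        out.append(tuple(e))
    return out

deg3 = sorted(monomials(3), key=cmp_to_key(grevlex_cmp), reverse=True)
# x^3 > x^2y > xy^2 > y^3 > x^2z > xyz > y^2z > xz^2 > yz^2 > z^3
print(deg3)
```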
We can alternatively construct $T_d(I)$ from $\mathcal{T}_d$ by removing, for each minimal generator $x^a y^b z^c$ of $I$ of degree at most $d-1$, the \emph{puncture associated to $x^a y^b z^c$} which is an upward-pointing equilateral triangle of side length $d-(a+b+c)$ located $a$ triangles from the bottom, $b$ triangles from the upper-right edge, and $c$ triangles from the upper-left edge. See Figure~\ref{fig:triregion-punctures} for an example. We call $d-(a+b+c)$ the \emph{side length of the puncture associated to $x^a y^b z^c$}, regardless of possible overlaps with other punctures in $T_d (I)$. \begin{figure}[!ht] \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=1]{figs/triregion-punctures}\\ \emph{(i) $T_{d}(x^a y^b z^c)$} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=1]{figs/triregion-punctures-ex}\\ \emph{(ii) $T_{10}(xy^3z^2)$} \end{minipage} \caption{$T_d(I)$ as constructed by removing punctures.} \label{fig:triregion-punctures} \end{figure} We say that two punctures \emph{overlap} if they share at least an edge. Two punctures are said to be \emph{touching} if they share precisely a vertex. \subsection{Tilings with lozenges}\label{sub:tiling}~ A \emph{lozenge} is a union of two unit equilateral triangles glued together along a shared edge, i.e., a rhombus with unit side lengths and angles of $60^{\circ}$ and $120^{\circ}$. Lozenges are also called calissons and diamonds in the literature. See Figure~\ref{fig:triregion-intro}. 
\begin{figure}[!ht] \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=1]{figs/triregion-gcd-1} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=1]{figs/triregion-tiling} \end{minipage} \caption{A triangular region $T \subset \mathcal{T}_8$ together with one of its $13$ tilings.} \label{fig:triregion-intro} \end{figure} Fix a positive integer $d$ and consider the triangular region $\mathcal{T}_d$ as a union of unit triangles. Thus a \emph{subregion} $T \subset \mathcal{T}_d$ is a subset of such triangles. We retain their labels. As above, we say that a subregion $T$ is \emph{$\dntri$-heavy}, \emph{$\uptri$-heavy}, or \emph{balanced} if it contains more downward-pointing than upward-pointing triangles, fewer, or equally many, respectively. A subregion is \emph{tileable} if either it is empty or there exists a tiling of the region by lozenges such that every triangle is part of exactly one lozenge. Since a lozenge in $\mathcal{T}_d$ is the union of a downward-pointing and an upward-pointing triangle, and every triangle is part of exactly one lozenge, a tileable subregion is necessarily balanced. Let $T \subset \mathcal{T}_d$ be any subregion. Given a monomial $x^a y^b z^c$ with degree less than $d$, the \emph{monomial subregion} of $T$ associated to $x^a y^b z^c$ is the part of $T$ contained in the triangle $a$ units from the bottom edge, $b$ units from the upper-right edge, and $c$ units from the upper-left edge. In other words, this monomial subregion consists of the triangles that lie both in $T$ and in the puncture associated to the monomial $x^a y^b z^c$. See Figure~\ref{fig:triregion-subregion} for an example.
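Balancedness of a region $T_d(I)$ can be tested by counting the surviving monomial labels. The following sketch (our illustration, not part of the original text) reproduces the counts for the region of Figure~\ref{fig:triregion-intro} and for an ideal considered later in this section:

```python
from itertools import combinations_with_replacement

def monomials(deg, nvars=3):
    """Exponent vectors of all monomials of total degree deg."""
    out = []
    for choice in combinations_with_replacement(range(nvars), deg):
        e = [0] * nvars
        for i in choice:
            e[i] += 1
        out.append(tuple(e))
    return out

def in_ideal(mon, gens):
    """Monomial ideal membership: divisibility by some generator."""
    return any(all(m >= g for m, g in zip(mon, gen)) for gen in gens)

def triangle_counts(gens, d):
    """(#upward, #downward) unit triangles of T_d(I): the labels are the
    monomials of degree d-1, resp. d-2, lying outside I."""
    up = sum(1 for m in monomials(d - 1) if not in_ideal(m, gens))
    down = sum(1 for m in monomials(d - 2) if not in_ideal(m, gens))
    return up, down

# I = (x^5, y^5, z^5, xyz) in degree 6 gives a balanced region:
print(triangle_counts([(5, 0, 0), (0, 5, 0), (0, 0, 5), (1, 1, 1)], 6))

# the region T_8(x^7, y^7, z^6, xy^4z^2, x^3yz^2, x^4yz) of
# Figure fig:triregion-intro is balanced as well:
gens8 = [(7, 0, 0), (0, 7, 0), (0, 0, 6), (1, 4, 2), (3, 1, 2), (4, 1, 1)]
print(triangle_counts(gens8, 8))
```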
\begin{figure}[!ht] \includegraphics[scale=1]{figs/triregion-subregion} \caption{The monomial subregion of $T_{8}(x^7, y^7, z^6, x y^4 z^2, x^3 y z^2, x^4 y z)$ (see Figure~\ref{fig:triregion-intro}) associated to $x y^2 z$.} \label{fig:triregion-subregion} \end{figure} Replacing a tileable monomial subregion by a puncture of the same size does not alter tileability. \begin{lemma}{\cite[Lemma 2.1]{CN-resolutions}}\label{lem:replace-tileable} Let $T \subset \mathcal{T}_d$ be any subregion. If a monomial subregion $U$ of $T$ is tileable, then $T$ is tileable if and only if $T \setminus U$ is tileable. Moreover, each tiling of $T$ is obtained by combining a tiling of $T \setminus U$ and a tiling of $U$. \end{lemma} Let $U \subset \mathcal{T}_d$ be a monomial subregion, and let $T, T' \subset \mathcal{T}_d$ be any subregions such that $T \setminus U = T' \setminus U$. If $T \cap U$ and $T' \cap U$ are both tileable, then $T$ is tileable if and only if $T'$ is, by Lemma \ref{lem:replace-tileable}. In other words, replacing a tileable monomial subregion of a triangular region by a tileable monomial subregion of the same size does not affect tileability. \begin{theorem}{\cite[Theorem 2.2]{CN-resolutions}}\label{thm:tileable} Let $T = T_d(I)$ be a balanced triangular region, where $I \subset R$ is any monomial ideal. Then $T$ is tileable if and only if $T$ has no $\dntri$-heavy monomial subregions. \end{theorem} Let $I$ be a monomial ideal of $R$ whose punctures in $\mathcal{T}_d$ (corresponding to the minimal generators of $I$ having degree less than $d$) have side lengths that sum to $m$. Then we define the \emph{over-puncturing coefficient} of $I$ in degree $d$ to be $\mo_d (I) = m - d$. If $\mo_d (I) < 0$, $\mo_d (I) = 0$, or $\mo_d (I) > 0$, then we call $I$ \emph{under-punctured}, \emph{perfectly-punctured}, or \emph{over-punctured} in degree $d$, respectively. Let now $T = T_d(I)$ be a triangular region with punctures whose side lengths sum to $m$. 
Then we define similarly the \emph{over-puncturing coefficient} of $T$ to be $\mo_d (T) = m - d$. If $\mo_d (T) < 0$, $\mo_d (T) = 0$, or $\mo_d (T) > 0$, then we call $T$ \emph{under-punctured}, \emph{perfectly-punctured}, or \emph{over-punctured}, respectively. Observe that different monomial ideals can determine the same triangular region of $\mathcal{T}_d$. Consider, for example, $I_1 = (x^5, y^5, z^5, xyz^2, xy^2z, x^2yz)$ and $I_2 = (x^5, y^5, z^5, xyz)$. Then $T_6 (I_1) = T_6 (I_2)$, and $\mo_6(I_1) = 3$ but $\mo_6(I_2) = 0$. However, given a triangular region $T = T_d (I)$, there is a unique largest ideal $J$ that is generated by monomials whose degrees are bounded above by $d-1$ and that satisfies $T = T_d (J)$. We call $J = J(T)$ the \emph{monomial ideal of the triangular region $T$}. Note that $\mo_d (T) = \mo_d (J(T)) \leq \mo_d (I)$, and equality holds if and only if the ideals $I$ and $J(T)$ are the same in all degrees less than $d$. \begin{remark} \label{rem:puncture coeff} If a monomial subregion $T$ of $\mathcal{T}_d$ has no overlapping punctures, then $\mo_d (T)$ is equal to the number of downward-pointing unit triangles in $T$ minus the number of upward-pointing unit triangles in $T$. \end{remark} Perfectly-punctured regions admit a numerical tileability criterion. \begin{corollary} \label{cor:pp-tileable} Let $T = T_d(I)$ be a triangular region. Then any two of the following conditions imply the third: \begin{enumerate} \item $T$ is perfectly-punctured; \item $T$ has no over-punctured monomial subregions; and \item $T$ is tileable. \end{enumerate} \end{corollary} \begin{proof} Suppose $T$ is tileable. Then $T$ has no $\dntri$-heavy monomial subregions by Theorem \ref{thm:tileable}. Thus, no monomial subregion of $T$ is over-punctured if and only if no punctures of $T$ overlap. Hence (ii) implies (i) by Remark \ref{rem:puncture coeff} because $T$ is balanced.
For the converse it is enough to show: If some punctures of $T$ overlap, then $T$ is over-punctured. Indeed, if no punctures overlap, then $T$ is perfectly punctured because $T$ is balanced. So assume two punctures of $T$ overlap. Then the smallest monomial subregion $U$ of $T$ containing these two punctures does not overlap with any other puncture of $T$ and is uniquely tileable. Hence $T \setminus U$ is tileable by Lemma \ref{lem:replace-tileable}, and thus $0 \leq \mo_d (T \setminus U) < \mo_d (T)$, as desired. If $T$ is non-tileable, then $T$ has a $\dntri$-heavy monomial subregion. Since every $\dntri$-heavy monomial subregion is also over-punctured, it follows that $T$ has an over-punctured monomial subregion. \end{proof} Any subregion $T \subset \mathcal{T}_d$ can be associated to a bipartite planar graph $G$ that is an induced subgraph of a honeycomb graph (see \cite{CN-resolutions}). We are interested in the bi-adjacency matrix $Z(T)$ of $G$. This is a zero-one matrix whose determinant enumerates signed lozenge tilings (see \cite[Theorem 3.5]{CN-resolutions}). If $T = T_d(I)$ for some monomial ideal $I$, then $Z(T)$ admits an alternative description. Indeed, consider the multiplication map $\times(x+y+z): [R/I]_{d-2} \rightarrow [R/I]_{d-1}$. Let $M(d)$ be the matrix of this linear map with respect to the monomial bases of $[R/I]_{d-2}$ and $[R/I]_{d-1}$ in reverse-lexicographic order. Then the transpose of $M(d)$ is the bi-adjacency matrix $Z(T_d (I))$ (see \cite[Proposition 4.5]{CN-small-type}). Here we need only a special case of these results. \begin{proposition} \label{prop:pm-det} Assume $T = T_d (I) \subset \mathcal{T}_d$ is a non-empty balanced subregion. If $\det Z(T) \in \ZZ$ is not zero, then $T$ is tileable. \end{proposition} \begin{proof} Balancedness of $T$ is equivalent to $\dim_K [R/I]_{d-2} = \dim_K [R/I]_{d-1}$. It follows that $Z(T)$ is a square matrix by \cite[Proposition 4.5]{CN-small-type}.
Now \cite[Theorem 3.5]{CN-resolutions} gives the assertion. \end{proof} We conclude this section with a criterion that guarantees non-vanishing of $\det Z(T)$. To this end we recursively define a puncture of $T \subset \mathcal{T}_d$ to be a \emph{non-floating} puncture if it touches the boundary of $ \mathcal{T}_d$ or if it overlaps or touches a non-floating puncture of $T$. Otherwise we call a puncture a \emph{floating} puncture. For example, the region $T$ in Figure \ref{fig:triregion-intro} has three non-floating punctures (in the corners) and three floating punctures, two of them are overlapping and have side length two. \begin{proposition}{\cite[Corollary 4.7]{CN-resolutions}} \label{prop:same-sign} Let $T$ be a tileable triangular region, and suppose all floating punctures of $T$ have an even side length. Then $\per{Z(T)} = |\det{Z(T)}| \neq 0$. \end{proposition} \section{Combinatorial interpretations of some algebraic properties}\label{sec:alg} In this section, we use the connection to triangular regions to reinterpret some algebraic properties. \subsection{Stability of syzygy bundles} \label{sub:syz}~ Throughout this subsection, we assume the characteristic of $K$ is zero. Let $I$ be an Artinian ideal of $S = K[x_1, \ldots, x_n]$ that is minimally generated by forms $f_1, \ldots, f_m$. The \emph{syzygy module} of $I$ is the graded module $\syz{I}$ that fits into the exact sequence \[ 0 \rightarrow \syz{I} \rightarrow \bigoplus_{i=1}^{m}S(-\deg f_i) \rightarrow I \rightarrow 0. \] Its sheafification $\widetilde\syz{I}$ is a vector bundle on $\PP^{n-1}$, called the \emph{syzygy bundle} of $I$. It has rank $m-1$. Semistability is an important property of a vector bundle. Let $E$ be a vector bundle on projective space. The \emph{slope} of $E$ is defined as $\mu(E) := \frac{c_1(E)}{rk(E)}$. Furthermore, $E$ is said to be \emph{semistable} if the inequality $\mu(F) \leq \mu(E)$ holds for every coherent subsheaf $F \subset E$. 
If the inequality is always strict, then $E$ is said to be \emph{stable}. Brenner established a beautiful characterization of the semistability of syzygy bundles of monomial ideals. Since we only consider monomial ideals in this work, the following may be taken as the definition of (semi)stability herein. \begin{theorem}{\cite[Proposition~2.2 \& Corollary~6.4]{Br}} \label{thm:stable-syz} Let $I$ be an Artinian ideal in $K[x_1, \ldots, x_n]$ that is minimally generated by monomials $g_1, \ldots, g_m$, where $K$ is a field of characteristic zero. Then $I$ has a semistable syzygy bundle if and only if, for every proper subset $J$ of $\{1, \ldots, m\}$ with at least two elements, the inequality \[ \frac{d_J - \displaystyle\sum_{j \in J} \deg g_j}{|J|-1} \leq \frac{-\displaystyle\sum_{i=1}^{m} \deg g_i}{m-1} \] holds, where $d_J$ is the degree of the greatest common divisor of the $g_j$ with $j \in J$. Further, $I$ has a stable syzygy bundle if and only if the above inequality is always strict. \end{theorem} We use Brenner's criterion to rephrase (semi)stability in the case of a monomial ideal of $K[x,y,z]$ in terms of the over-puncturing coefficients of ideals. Note, in particular, that $\mo_d (I)= \sum_{i=1}^{m} (d - \deg g_i) - d$. \begin{corollary} \label{cor:T-stable} Let $I$ be an Artinian ideal in $R = K[x,y,z]$ that is minimally generated by monomials $g_1, \ldots, g_m$ of degree at most $d$. For every proper subset $J$ of $\{1, \ldots, m\}$ with at least two elements, let $I_J$ be the monomial ideal that is generated by $\{ g_j / g_J \st j \in J\}$, where $g_J = \gcd\{g_j \st j \in J \}$ has degree $d_J$. Then $I$ has a semistable syzygy bundle if and only if, for every proper subset $J$ of $\{1, \ldots, m\}$ with at least two elements, the inequality \[ \frac{\mo_{d-d_J} (I_J)}{|J| - 1} \leq \frac{\mo_{d} (I)}{m-1} \] holds. Furthermore, $I$ has a stable syzygy bundle if and only if the above inequality is always strict.
\end{corollary} \begin{proof} Since $\mo_{d-d_J} (I_J)= d (|J| -1) + d_J - \sum_{j \in J} \deg g_j$, this follows immediately from Theorem~\ref{thm:stable-syz}. \end{proof} In order to apply this result, we slightly extend the concept of a triangular region $T_d (I)$. Label the vertices in $\mathcal{T}_d$ by monomials of degree $d$ such that the label of each unit triangle is the greatest common divisor of its vertex labels. Then a minimal monomial generator of $I$ with degree $d$ corresponds to a vertex of $\mathcal{T}_d$ that is removed in $T_d (I)$. We consider this removed vertex as a puncture of side length zero. Observe that this is in line with our general definition of the side length of a puncture. Using Corollary~\ref{cor:pp-tileable}, we see that semistability is strongly related to tileability of a region. \begin{theorem} \label{thm:tileable-semistable} Let $I$ be an Artinian ideal in $R = K[x,y,z]$ generated by monomials whose degrees are bounded above by $d$, and let $T = T_d(I)$. If $T$ is non-empty, then any two of the following conditions imply the third: \begin{enumerate} \item $I$ is perfectly-punctured; \item $T$ is tileable; and \item $\widetilde\syz{I}$ is semistable. \end{enumerate} \end{theorem} \begin{proof} Assume $I$ is perfectly-punctured, that is, $\mo_d(I) = 0$. We will show that $T$ is tileable if and only if $\widetilde\syz{I}$ is semistable. If $T$ is tileable, then $\mo_d(T) = 0$, which implies $J(T) = I$. Hence no punctures of $T$ overlap. This further implies $I_A = J(T_{I_A})$ for any subset $A$ of the generators of $I$. Thus $\mo_{d-d_A}(I_A) = \mo_{d-d_A}(T_{I_A}) \leq 0$, since no punctures overlap and no subregion is over-punctured, by Corollary~\ref{cor:pp-tileable}. Hence, $\widetilde{\syz} I$ is semistable by Corollary~\ref{cor:T-stable}. If $\widetilde{\syz} I$ is semistable, then $\mo_{d-d_A}(I_A) \le 0$ holds for any subset $A$ of the generators of $I$. This implies, in particular, that no punctures of $T$ overlap.
Hence $I = J(T)$ and so $\mo_d(T) = \mo_d(I) = 0$. Furthermore, since no pair of punctures overlap, having no over-punctured regions is the same as having no $\dntri$-heavy regions (see Remark \ref{rem:puncture coeff}). Thus, by Corollary~\ref{cor:pp-tileable}, $T$ is tileable. Now assume $I$ is not perfectly-punctured, but $T$ is tileable. We have to show that $\widetilde\syz{I}$ is not semistable. Arguing as in the proof of Corollary \ref{cor:pp-tileable}, we conclude that $T$ is over-punctured and must have overlapping punctures. Consider two such overlapping punctures of $T$. Then the smallest monomial subregion $U$ containing these two punctures does not overlap with any other puncture of $T$ with positive side length. Hence $T' = T \setminus U$ is tileable and $0 \leq \mo_d (T') < \mo_d (I)$. If $T'$ is still over-punctured, then we repeat the above replacement procedure until we get a perfectly-punctured monomial subregion of $T$. Abusing notation slightly, denote this region by $T'$. Let $J$ be the largest monomial ideal containing $I$ and with generators whose degrees are bounded above by $d$ such that $T' = T_d (J)$. Observe that $\mo_d (J) = \mo_{d} (T') = 0$. Notice that a single replacement step above amounts to replacing the triangular region associated to an ideal $I'$ by the region associated to the ideal $(I', f)$, where $f$ is a greatest common divisor of the minimal generators of $I'$ that correspond to two overlapping punctures. These generators have degrees less than $d$. Assume now that $T'$ is empty. Then $I$ has two relatively prime minimal generators, say $g_1, g_2$, whose corresponding punctures overlap and are not both contained in a proper monomial subregion of $\mathcal{T}_d$. Since $I$ is Artinian it has $m \ge 3$ minimal generators. Moreover, all minimal generators of $I$ other than $g_1$ and $g_2$ have degree $d$. It follows that $\mo_d((g_1, g_2)) = \mo_d (I)$. Since $m > 2$, Corollary~\ref{cor:T-stable} shows that $\widetilde{\syz}{I}$ is not semistable.
It remains to consider the case where $T'$ is not empty, i.e., $J$ is a proper ideal of $R$. Let $g_1, \ldots, g_m$ and $f_1, \ldots, f_n$ be the minimal monomial generators of $I$ and $J$, respectively. Partition the generating set of $I$ into $F_j = \{ g_i \st f_j \mbox{~divides~} g_i\}$. Notice that $f_j = \gcd\{F_j\}$. In particular, $n > 1$ as $J$ is a proper Artinian ideal. Set $\mo_j = \mo_{d - \deg f_j} ((\frac{F_j}{f_j})) = \sum_{g \in F_j} (d - \deg{g}) - (d - \deg{f_j})$. Observe $\mo_j \geq 0$ as the subregion of $T_d (I)$ associated to $f_j$ is tileable, hence not under-punctured. Moreover, \begin{equation*} \begin{split} \mo_d (J) = \sum_{j = 1}^{n}(d - \deg{f_j}) - d & = \sum_{j=1}^{n} \left( \sum_{g \in F_j} (d - \deg{g}) - \mo_j \right) - d \\ & = \sum_{j=1}^{n} \sum_{g \in F_j} (d - \deg{g}) - d - \sum_{j=1}^{n} \mo_j \\ & = \mo_d (I) - \sum_{j=1}^{n} \mo_j. \end{split} \end{equation*} As $\mo_d (J) = 0$, we conclude that $\mo_d (I) = \sum_{j=1}^{n} \mo_j$ and, in particular, $\mo_d (I) \geq \mo_j$ for each $j$. Assume $m \cdot \mo_j < \# F_j \cdot \mo_d (I)$ for all $j$. Then $m \sum_{j=1}^{n}\mo_j < \mo_d (I) \sum_{j=1}^{n} \# F_j = \mo_d (I) \cdot m$. But this implies $m \cdot \mo_d (I) < m \cdot \mo_d (I)$, which is absurd. Hence, there is some $k$ such that $m \cdot \mo_k \geq \# F_k \cdot \mo_d (I)$. Since $\mo_d (I) \geq \mo_k$ it follows that $\frac{\mo_k}{\# F_k-1} > \frac{\mo_d (I)}{m-1}$. Indeed, this is immediate if $\mo_d (I) > \mo_k$. If $\mo_d (I) = \mo_k$, then it is also true because $\# F_k < m$. Now Corollary~\ref{cor:T-stable} gives that $\widetilde\syz{I}$ is not semistable. \end{proof} We get the following criterion when focusing solely on the triangular region. Recall that $J(T)$ denotes the monomial ideal of a triangular region $T$ as introduced above Remark~\ref{rem:puncture coeff}.
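Before turning to that criterion, we note that the test in Corollary~\ref{cor:T-stable} is entirely mechanical: every quantity involved is an elementary function of the exponent vectors of the generators. The following minimal sketch (in Python; the helper names are ours, and the brute force over all proper subsets is practical only for small numbers of generators) illustrates the computation.

```python
from itertools import combinations
from fractions import Fraction

def deg(g):
    # total degree of a monomial given as an exponent triple
    return sum(g)

def gcd_mon(mons):
    # gcd of monomials = componentwise minimum of exponent vectors
    return tuple(min(es) for es in zip(*mons))

def syzygy_bundle_type(gens, d):
    # Brute-force check of the inequality in Corollary cor:T-stable;
    # `gens` lists the minimal monomial generators as exponent triples,
    # and d bounds their degrees.  Returns (semistable, stable).
    m = len(gens)
    mo_I = sum(d - deg(g) for g in gens) - d                  # mo_d(I)
    rhs = Fraction(mo_I, m - 1)
    semistable = stable = True
    for k in range(2, m):                 # proper subsets with >= 2 elements
        for J in combinations(gens, k):
            dJ = deg(gcd_mon(J))
            mo_J = d * (k - 1) + dJ - sum(deg(g) for g in J)  # mo_{d-dJ}(I_J)
            lhs = Fraction(mo_J, k - 1)
            semistable = semistable and lhs <= rhs
            stable = stable and lhs < rhs
    return semistable, stable
```

For instance, with $d = 3$ it reports the syzygy bundle of $(x^2, y^2, z^2, xy, xz, yz)$ as stable and that of $(x^2, y^2, z^2, xy, xz)$ as semistable but not stable.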
\begin{corollary} \label{cor:semistability-by-region} Let $I$ be an Artinian ideal in $R = K[x,y,z]$ generated by monomials whose degrees are bounded above by $d$, and let $T = T_d(I)$. Assume $T$ is non-empty and tileable. \begin{enumerate} \item If $I \neq I + J(T)$, then $\widetilde\syz{I}$ is not semistable. \item $\widetilde\syz(I + J(T))$ is semistable if and only if $T$ is perfectly-punctured. \end{enumerate} \end{corollary} \begin{proof} Note that $I \ne I + J(T)$ implies $\mo_d (I+J(T)) < \mo_d (I)$. Since $T$ is balanced, we get $0 \leq \mo_d (T) = \mo_d (J(T)) = \mo_d (I + J(T))$. Hence Theorem~\ref{thm:tileable-semistable} gives our assertions. \end{proof} For stability, we obtain the following result. \begin{proposition} \label{pro:pp-stable} Let $I$ be an Artinian ideal in $R = K[x,y,z]$ generated by monomials whose degrees are bounded above by $d$. If $T = T_d(I)$ is non-empty, tileable, and perfectly-punctured, then $\widetilde\syz(I + J(T))$ is stable if and only if every proper monomial subregion of $T$ is under-punctured. \end{proposition} \begin{proof} We may assume $I = I + J(T)$. As $T$ is perfectly-punctured, we have that $\mo_d (I) = \mo_d (T) = 0$. In particular, no punctures of $T$ overlap. Using Corollary~\ref{cor:T-stable}, we see that $\widetilde\syz{I}$ is stable if and only if $\mo_{d-d_J} (T_{d-d_J}(I_J)) < 0$ for all proper subsets $J$ of the set of minimal generators of $I$. This is equivalent to every proper monomial subregion of $T$ being under-punctured (see Remark~\ref{rem:puncture coeff}). \end{proof} By the preceding theorem and proposition, we have an understanding of semistability and stability for perfectly-punctured triangular regions. However, when a region is over-punctured and non-tileable more information is needed to infer semistability. \begin{example} \label{exa:stability} There are monomial ideals with stable syzygy bundles whose corresponding triangular regions are over-punctured and non-tileable. 
See Figure~\ref{fig:nss-examples}(i) for a specific example. \begin{figure}[!ht] \begin{minipage}[b]{0.30\linewidth} \centering \includegraphics[scale=1]{figs/nss-example-1}\\ \emph{(i) $T_3(x^2, y^2, z^2, xy, xz, yz)$} \end{minipage} \begin{minipage}[b]{0.28\linewidth} \centering \includegraphics[scale=1]{figs/nss-example-2}\\ \emph{(ii) $T_3(x^2, y^2, z^2, xy, xz)$} \end{minipage} \begin{minipage}[b]{0.40\linewidth} \centering \includegraphics[scale=1]{figs/nss-example-3}\\ \emph{(iii) $T_4(x^3, y^3, z^3, xyz, x^2y, x^2z)$} \end{minipage} \caption{Over-punctured, non-tileable regions and various levels of stability.} \label{fig:nss-examples} \end{figure} Moreover, the ideal $(x^2, y^2, z^2, xy, xz)$ has a semistable, but non-stable syzygy bundle (the monomial subregion associated to $x$ breaks stability), and the ideal $(x^3, y^3, z^3, xyz, x^2y, x^2z)$ has a non-semistable syzygy bundle (the monomial subregion associated to $x^2$ breaks semistability). Both of their triangular regions, see Figures~\ref{fig:nss-examples}(ii) and (iii), respectively, are over-punctured and non-tileable. \end{example}~ \subsection{The weak Lefschetz property}\label{sub:wlp}~ We recall some results that help decide the presence of the weak Lefschetz property. In fact, one needs only check near a ``peak'' of the Hilbert function. \begin{proposition}{\cite[Proposition 2.3]{CN-small-type}} \label{pro:wlp} Let $A \neq 0$ be an Artinian standard graded $K$-algebra, and let $\ell$ be a general linear form. Suppose $A$ has no non-zero socle elements of degree less than $d-2$ for some integer $d \ge 0$. Then $A$ has the weak Lefschetz property, provided one of the following conditions is satisfied: \begin{itemize} \item[(i)] $\times \ell: [A]_{d-2} \rightarrow [A]_{d-1}$ is injective and $\times \ell: [A]_{d-1} \rightarrow [A]_{d}$ is surjective. \item[(ii)] $\times \ell: [A]_{d-2} \rightarrow [A]_{d-1}$ is bijective. 
\end{itemize} \end{proposition} Moreover, for monomial algebras, it is enough to decide whether the sum of the variables is a Lefschetz element. \begin{proposition}{\cite[Proposition~2.2]{MMN-2011}} \label{pro:mono} Let $A = R/I$ be a monomial Artinian $K$-algebra, where $K$ is an infinite field. For any integer $d$, the following conditions are equivalent: \begin{enumerate} \item The multiplication map $\times L: [A]_{d-1} \to [A]_d$ has maximal rank, where $L \in R$ is a general linear form. \item The multiplication map $\times (x + y + z): [A]_{d-1} \to [A]_d$ has maximal rank. \end{enumerate} \end{proposition} As pointed out above Proposition \ref{prop:pm-det}, for a monomial ideal $I \subset K[x, y, z]$, the bi-adjacency matrix $Z(T_d (I))$ can be described using multiplication by $\ell = x+y+z$. We thus get the following criterion for the presence of the weak Lefschetz property, where we consider the entries of $Z(T_d (I))$ as elements of the base field $K$. \begin{corollary}{\cite[Corollary 4.7]{CN-small-type}}\label{cor:wlp-biadj} Let $I$ be an Artinian monomial ideal in $R = K[x,y,z]$. Then $R/I$ has the weak Lefschetz property if and only if, for each positive integer $d$, the matrix $Z(T_d(I))$ has maximal rank. \end{corollary} This can be used to infer the weak Lefschetz property in sufficiently large characteristic from its presence in characteristic zero. \begin{proposition}{\cite[Proposition 7.9]{CN-small-type}}\label{pro:char-0-to-p} Let $R/I$ be any Artinian monomial algebra such that $R/I$ has the weak Lefschetz property in characteristic zero. If $I$ contains the powers $x^a, y^b, z^c$, then $R/I$ has the weak Lefschetz property in positive characteristic whenever $\charf K > 3^{\frac{1}{2}\binom{\frac{1}{2} (a+b+c) + 2}{2}}$. \end{proposition} \section{Artinian monomial almost complete intersections} \label{sec:amaci} This section presents an in-depth discussion of Artinian monomial ideals of $R$ with exactly four minimal generators. 
They are called Artinian monomial almost complete intersections. These ideals have been discussed, for example, in \cite{BK} and~\cite[Section~6]{MMN-2011}. In particular, we will answer some of the questions posed in \cite{MMN-2011}. Besides addressing the weak Lefschetz property, we discuss the splitting types of the syzygy bundles of these ideals. Particular attention is paid to the case of positive characteristic. Some of our results are used in \cite{MMMNW} for studying ideals with the Rees property. Each Artinian ideal of $K[x,y,z]$ with exactly four monomial minimal generators is of the form \[ I_{a,b,c,\alpha,\beta,\gamma} = (x^a, y^b, z^c, x^\alpha y^\beta z^\gamma), \] where $0 \leq \alpha < a$, $0 \leq \beta < b,$ and $0 \leq \gamma < c$, such that at most one of $\alpha$, $\beta$, and $\gamma$ is zero. If one of $\alpha$, $\beta$, and $\gamma$ is zero, then $R/I_{a,b,c,\alpha,\beta,\gamma}$ has type two. In this case, the presence of the weak Lefschetz property has already been described in~\cite{CN-small-type}. Thus, throughout this section we assume that the integers $\alpha, \beta$, and $\gamma$ are all positive; this forces $R/I_{a,b,c,\alpha,\beta,\gamma}$ to have Cohen-Macaulay type three. More precisely: \begin{proposition}{\cite[Proposition 6.1]{MMN-2011}} \label{pro:amaci-props} Let $I = I_{a,b,c,\alpha,\beta,\gamma}$ be defined as above. Then $R/I$ has three minimal socle generators. They have degrees $\alpha + b + c - 3$, $a + \beta + c - 3$, and $a + b + \gamma - 3$. In particular, $R/I$ is level if and only if $a - \alpha = b - \beta = c - \gamma$. \end{proposition}~ \subsection{Presence of the weak Lefschetz property}~\par\label{subsec:aci-wlp} Brenner made Theorem \ref{thm:stable-syz} more explicit in the situation at hand. \begin{proposition}{\cite[Corollary~7.3]{Br}} \label{pro:amaci-semistable} Let $I = I_{a,b,c,\alpha,\beta,\gamma}$ be defined as above, and suppose $K$ is a field of characteristic zero.
Set $d = \frac{1}{3}(a+b+c+\alpha + \beta + \gamma)$. Then $I$ has a semistable syzygy bundle if and only if the following three conditions are satisfied: \begin{enumerate} \item $\max\{a, b, c, \alpha + \beta + \gamma\} \leq d$; \item $\min\{\alpha + \beta + c, \alpha + b + \gamma, a + \beta + \gamma\} \geq d$; and \item $\min\{a+b, a+c, b+c\} \geq d$. \end{enumerate} \end{proposition} Furthermore, Brenner and Kaid showed that, for almost complete intersections, nonsemistability implies the weak Lefschetz property in characteristic zero. \begin{proposition}{\cite[Corollary~3.3]{BK}} \label{pro:amaci-nss-wlp} Let $K$ be a field of characteristic zero. Then $I_{a,b,c,\alpha,\beta,\gamma}$ has the weak Lefschetz property if its syzygy bundle is not semistable. \end{proposition} The conclusion of this result is not necessarily true in positive characteristic. \begin{example} \label{exa:amaci-nss} Let $I = I_{5,5,3,1,1,2}$, so that $d = \frac{17}{3}$. Then the syzygy bundle of $I$ is not semistable, as $\alpha + \beta + c = 5 < d$. However, the triangular region $T_6(I)$ is balanced and $\det{Z(T_6(I))} = 5$. Hence, $R/I$ does not have the weak Lefschetz property if and only if the characteristic of $K$ is $5$. \end{example} The following example illustrates that the assumption on the number of minimal generators cannot be dropped in Proposition~\ref{pro:amaci-semistable}. \begin{example} \label{exa:amaci-nss-2} Consider the ideal $J = (x^5, y^5, z^5, xy^2z, xyz^2)$ with five minimal generators. Then Corollary~\ref{cor:T-stable} gives that the syzygy bundle of $J$ is not semistable. Notice that $T_6(J)$ is balanced. However, $\det{Z(T_6(J))} = 0$, and so $R/J$ never has the weak Lefschetz property, regardless of the characteristic of $K$. \end{example} The number $d$ in Proposition~\ref{pro:amaci-semistable} is not assumed to be an integer. In fact, if it is not, then the algebra has the weak Lefschetz property.
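Since $d$ may fail to be an integer, the three conditions of Proposition~\ref{pro:amaci-semistable} are best checked with exact rational arithmetic. A minimal sketch (in Python; the function name is ours):

```python
from fractions import Fraction

def amaci_semistable(a, b, c, al, be, ga):
    # The three conditions of Proposition pro:amaci-semistable for
    # I = (x^a, y^b, z^c, x^al y^be z^ga); exact rational arithmetic,
    # since d = (a+b+c+al+be+ga)/3 need not be an integer.
    d = Fraction(a + b + c + al + be + ga, 3)
    return (max(a, b, c, al + be + ga) <= d
            and min(al + be + c, al + b + ga, a + be + ga) >= d
            and min(a + b, a + c, b + c) >= d)
```

For the ideal $I_{5,5,3,1,1,2}$ of Example~\ref{exa:amaci-nss} it returns \texttt{False}, as condition (ii) fails there.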
\begin{proposition}{\cite[Theorem~6.2]{MMN-2011}} \label{pro:amaci-not-3} Let $K$ be a field of characteristic zero. Then $I_{a,b,c,\alpha,\beta,\gamma}$ has the weak Lefschetz property if $a+b+c+\alpha + \beta + \gamma \not\equiv 0 \pmod{3}$. \end{proposition} Again, the conclusion of this result may fail in positive characteristic. Indeed, for the ideal $I_{5,5,3,1,1,2}$ in Example~\ref{exa:amaci-nss} we get $d = \frac{17}{3}$, but it does not have the weak Lefschetz property in characteristic $5$. The following result addresses the weak Lefschetz property in the cases that are left out by Propositions~\ref{pro:amaci-nss-wlp} and~\ref{pro:amaci-not-3}. Its first part extends \cite[Lemma~7.1]{MMN-2011} from level to arbitrary monomial almost complete intersections. Observe that balanced triangular regions correspond to an equality of the Hilbert function in two consecutive degrees, dubbed ``twin-peaks'' in \cite{MMN-2011}. \begin{proposition} \label{pro:amaci-balanced} Let $I = I_{a,b,c,\alpha,\beta,\gamma}$, and assume $d = \frac{1}{3}(a+b+c+\alpha+\beta+\gamma)$ is an integer. If the syzygy bundle of $I$ is semistable, then $T_d(I)$ is perfectly-punctured and balanced. Moreover, in this case $R/I$ has the weak Lefschetz property if and only if $\det{Z(T_d(I))}$ is not zero in $K$. \end{proposition} \begin{proof} Note that condition (i) in Proposition~\ref{pro:amaci-semistable} says that $T_d(I)$ has punctures of nonnegative side lengths $d-a, d-b, d-c$, and $d-(\alpha + \beta + \gamma)$. Furthermore, conditions (ii) and (iii) therein are equivalent to the fact that the degree of the least common multiple of any two of the minimal generators of $I$ is at least $d$, i.e., the punctures of $T_d(I)$ do not overlap. Using the assumption that $d$ is an integer, it follows that $T_d(I)$ is perfectly-punctured, and thus balanced. Since the punctures of $T_d(I)$ do not overlap, the punctures of $T_{d-1}(I)$ neither overlap nor touch.
Thus we conclude that the degrees of the socle generators of $R/I$ are at least $d-2$. Hence, Corollary~\ref{cor:wlp-biadj} and Proposition~\ref{pro:wlp} together give that $R/I$ has the weak Lefschetz property if and only if $\det{Z(T_d(I))}$ is not zero in $K$. \end{proof} In the situation of Proposition~\ref{pro:amaci-balanced}, the fact that $R/I$ has the weak Lefschetz property implies that $T_d(I)$ is tileable by Proposition~\ref{prop:pm-det}. Tileability remains true even if $R/I$ fails to have the weak Lefschetz property. \begin{proposition} \label{pro:amaci-ss-tileable} Let $I = I_{a,b,c,\alpha,\beta,\gamma}$. If $R/I$ fails to have the weak Lefschetz property in characteristic zero, then $d = \frac{1}{3}(a+b+c+\alpha+\beta+\gamma)$ is an integer and $T_d(I)$ is tileable. \end{proposition} \begin{proof} By Propositions~\ref{pro:amaci-nss-wlp} and~\ref{pro:amaci-not-3}, we know that the syzygy bundle of $I$ is semistable and $d = \frac{1}{3}(a+b+c+\alpha+\beta+\gamma)$ is an integer. Hence by Proposition~\ref{pro:amaci-balanced}, $T_d(I)$ is perfectly-punctured. Now we conclude by Theorem~\ref{thm:tileable-semistable}. \end{proof} Before we analyze the presence of the weak Lefschetz property, we need to recall a special type of puncture that has been previously studied by Ciucu, Eisenk\"olbl, Krattenthaler, and Zare~\cite{CEKZ}. \begin{remark}\label{rem:axes-central} The central puncture is \emph{axes-central} if, for each of the three corner punctures, it is (approximately) equidistant from that puncture and the opposite wall. More specifically, suppose $A = d-a$, $B = d-b$, $C = d-c$, and $M = d-(\alpha + \beta + \gamma)$. There are two cases to consider: \begin{enumerate} \item If $A$, $B$, and $C$ have the same parity, then the region is of the form \[ T_{A+B+C+M}(x^{B+C+M}, y^{A+C+M}, z^{A+B+M}, x^{\frac{1}{2}(B+C)} y^{\frac{1}{2}(A+C)} z^{\frac{1}{2}(A+B)}).
\] \item If $A$ and $B$ differ in parity from $C$, then the region is of the form \[ T_{A+B+C+M}(x^{B+C+M}, y^{A+C+M}, z^{A+B+M}, x^{\frac{1}{2}(B+C+1)} y^{\frac{1}{2}(A+C-1)} z^{\frac{1}{2}(A+B)}). \] \end{enumerate} \begin{figure}[!ht] \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=1.0]{figs/axis-central-parity-same}\\ \emph{(i) The parity of $C$ agrees with $A$ and $B$.} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=1.0]{figs/axis-central-parity-diff}\\ \emph{(ii) The parity of $C$ differs from $A$ and $B$.} \end{minipage} \caption{The two prototypical figures with axes-central punctures.} \label{fig:axes-central} \end{figure} The explicit signed enumerations for these regions can be found in~\cite[Theorems~1, 2, 4, \& 5]{CEKZ}. However, the desired consequence for our use is that the signed enumeration is nonzero if and only if not all of $A$, $B$, and $C$ are odd. Moreover, if it is nonzero, then the largest prime divisor of the enumeration is bounded above by $d - 1 = A+B+C+M-1$. \end{remark} Now, we can decide the presence of the weak Lefschetz property in almost all cases. \begin{theorem} \label{thm:amaci-wlp} Let $I = I_{a,b,c,\alpha,\beta,\gamma} = (x^a, y^b, z^c, x^\alpha y^\beta z^\gamma)$ be an Artinian ideal with four minimal generators such that $\alpha$, $\beta$, and $\gamma$ are all positive. Assume the base field $K$ has characteristic zero, and consider the following conditions: \begin{enumerate} \item $\max\{a, b, c, \alpha + \beta + \gamma\} \leq d$; \item $\min\{\alpha + \beta + c, \alpha + b + \gamma, a + \beta + \gamma\} \geq d$; \item $\min\{a+b, a+c, b+c\} \geq d$; and \item $d = \frac{1}{3}(a+b+c+\alpha + \beta + \gamma)$ is an integer. \end{enumerate} Then the following statements hold: \begin{itemize} \item[(a)] If one of the conditions (i) - (iv) is not satisfied, then $R/I$ has the weak Lefschetz property. \item[(b)] Assume all the conditions (i) - (iv) are satisfied. 
Then: \begin{itemize} \item[(1)] The multiplication map $\times (x+y+z): [R/I]_{j-2} \to [R/I]_{j-1}$ has maximal rank whenever $j \neq d$. \item[(2)] The algebra $R/I$ has the weak Lefschetz property if one of the following conditions is satisfied: \begin{itemize} \item[(I)] Condition (ii) is an equality. \item[(II)] $a+b+c+\alpha+\beta + \gamma$ is divisible by 6. \item[(III)] $c = \frac{1}{2}(a+b+\alpha+\beta+\gamma)$. \item[(IV)] The region $T_d(I)$ has an axes-central puncture (see Remark~\ref{rem:axes-central}) and one of $d-a, d-b, d-c$, and $d-(\alpha+\beta+\gamma)$ is not odd. \item[(V)] $a = b$, $\alpha = \beta$, and $c$ or $\gamma$ is even. \end{itemize} \item[(3)] The algebra $R/I$ fails to have the weak Lefschetz property if one of the following conditions is satisfied: \begin{itemize} \item[(IV')] The region $T_d(I)$ has an axes-central puncture (see Remark~\ref{rem:axes-central}) and all of $d-a, d-b, d-c$, and $d-(\alpha+\beta+\gamma)$ are odd; or \item[(V')] $a = b$, $\alpha = \beta$, and both $c$ and $\gamma$ are odd. \end{itemize} \end{itemize} \end{itemize} \end{theorem} \begin{proof} Assertion (a) follows from Propositions~\ref{pro:amaci-semistable}, \ref{pro:amaci-nss-wlp}, and~\ref{pro:amaci-not-3}. Consider now the claims in part (b). Then Proposition~\ref{pro:amaci-balanced} gives that $R/I$ has the weak Lefschetz property if and only if $\det Z(T_d(I))$ is not zero. The assumptions in (b) guarantee that the punctures of $T = T_d (I)$ do not overlap and the degrees of the socle generators of $R/I$ are at least $d-2$. Then condition (I) implies that the puncture to the generator $x^\alpha y^\beta z^\gamma$ touches another puncture, whereas condition (II) says that this puncture has an even side length. In either case, $R/I$ has the weak Lefschetz property by Proposition~\ref{prop:same-sign}. The proof of (b)(1) uses the Grauert-M\"ulich splitting theorem. We complete this part below Proposition~\ref{pro:split-type-semist}. 
The remaining assertions all follow from results in~\cite{CN-mirror-symmetry} and~\cite{CN-small-type}, when combined with Proposition~\ref{pro:amaci-balanced}: (III). The condition $c = \frac{1}{2}(a+b+\alpha+\beta+\gamma)$ is equivalent to $d-c = 0$. After taking into account all lozenges forced by the puncture to $x^{\alpha} y^{\beta} z^{\gamma}$, the remaining subregion of $T_d (I)$ is a hexagon, and so $\det Z (T_d (I)) \neq 0$ (see, e.g., Proposition~\ref{prop:same-sign}). (IV) and (IV'). Use~\cite[Theorems~1, 2, 4, \& 5]{CEKZ}, as mentioned in Remark~\ref{rem:axes-central}. (V) and (V'). Use the results in~\cite{CN-mirror-symmetry}. \end{proof} Notice that Theorem~\ref{thm:amaci-wlp}(b)(1) says that, for monomial almost complete intersections, the multiplication map can fail to have maximal rank in at most one degree. \begin{remark}\label{rem:q-and-a} \begin{enumerate} \item Theorem~\ref{thm:amaci-wlp} can be extended to fields of sufficiently large positive characteristic by using Proposition~\ref{pro:char-0-to-p}. This lower bound on the characteristic can be improved whenever one knows the determinant of $Z(T_d(I))$. \item Question~8.2(2c) in \cite{MMN-2011} asked if there exist non-level almost complete intersections which never have the weak Lefschetz property. The almost complete intersection $I = I_{3,5,5,1,2,2} = (x^3, y^5, z^5, xy^2z^2)$ is not level and never has the weak Lefschetz property, regardless of field characteristic, as $\det{Z(T_6(I))} = 0$. \end{enumerate} \end{remark}~ \subsection{Level almost complete intersections}\label{sub:level}~\par In the previous subsection, we considered one way of centralizing the inner puncture of a triangular region associated to a monomial almost complete intersection. We called such punctures ``axes-central.'' In this section, we consider another method of centralizing the inner puncture of such a triangular region. It turns out this method of centralization is equivalent to the algebra being level.
Consider the ideal $I = I_{a,b,c,\alpha,\beta,\gamma}$ as above. Let $d$ be an integer and assume that $T = T_d(I)$ has one floating puncture. We say the inner puncture of $T$ is a \emph{gravity-central} puncture\index{puncture!gravity-central} if the vertices of the puncture are each the same distance from the puncture opposite to it (see Figure~\ref{fig:gravity-central}). \begin{figure}[!ht] \includegraphics[scale=1]{figs/gravity-central} \caption{A prototypical figure with a gravity-central puncture.} \label{fig:gravity-central} \end{figure} \begin{lemma} Let $I = I_{a,b,c,\alpha,\beta,\gamma}$. Then $T_d(I)$ has a gravity-central puncture if and only if $R/I$ is a level algebra. \end{lemma} \begin{proof} The defining property for the distances is $(d-b) + (d-c) - \alpha = (d-a) + (d-c) - \beta = (d-a) + (d-b) - \gamma$. This is equivalent to the condition in Proposition~\ref{pro:amaci-props} that $R/I$ is level, i.e., $a - \alpha = b - \beta = c - \gamma$. \end{proof} Level almost complete intersections were studied extensively in~\cite[Sections~6 and~7]{MMN-2011}. In particular, Migliore, Mir\'o-Roig, and the second author proposed a conjectured characterization for the presence of the weak Lefschetz property for such algebras. We recall it here, though we present it in a different, but equivalent, form to better elucidate the reasoning behind it. \begin{conjecture}{\cite[Conjecture~6.8]{MMN-2011}} \label{conj:level-wlp} Let $I = I_{\alpha+t,\beta+t, \gamma+t, \alpha,\beta,\gamma}$ be an ideal of $R = K[x,y,z]$, where $K$ has characteristic zero, $0 < \alpha \leq \beta \leq \gamma \leq 2(\alpha+\beta)$, $t \geq \frac{1}{3}(\alpha+\beta+\gamma)$, and $\alpha + \beta + \gamma$ is divisible by three. If $(\alpha,\beta,\gamma,t)$ is not $(2,9,13,9)$ or $(3,7,14,9)$, then $R/I$ fails to have the weak Lefschetz property if and only if $t$ is even, $\alpha + \beta + \gamma$ is odd, and $\alpha = \beta$ or $\beta = \gamma$.
Furthermore, $R/I$ fails to have the weak Lefschetz property in the two exceptional cases. \end{conjecture} The necessity part of this conjecture was proven in \cite[Corollary~7.4]{MMN-2011} by showing that $R/I$ does not have the weak Lefschetz property if $t$ is even, $\alpha + \beta + \gamma$ is odd, and $\alpha = \beta$ or $\beta = \gamma$. This result is covered by Theorem~\ref{thm:amaci-wlp}(b)(3)(V') because the region is mirror symmetric. It remained open to establish the presence of the weak Lefschetz property. Theorem~\ref{thm:amaci-wlp} does this in many new cases. \begin{proposition} \label{pro:level-wlp} Consider the ideal $I = I_{\alpha+t,\beta+t, \gamma+t, \alpha,\beta,\gamma}$ as given in Conjecture~\ref{conj:level-wlp}. Then $R/I$ has the weak Lefschetz property if one of the following conditions is satisfied: \begin{enumerate} \item $t$ and $\alpha + \beta + \gamma$ have the same parity; or \item $t$ is odd and $\alpha = \beta = \gamma$ is even. \end{enumerate} \end{proposition} \begin{proof} We apply Theorem~\ref{thm:amaci-wlp} with $d = t + \frac{2}{3} (\alpha + \beta + \gamma)$. Then the side length of the inner puncture of $T_d(I)$ is $t - \frac{1}{3} (\alpha + \beta + \gamma)$. Hence (i) follows from Theorem~\ref{thm:amaci-wlp}(b)(II). Claim (ii) is a consequence of Theorem~\ref{thm:amaci-wlp}(b)(IV) as the given condition implies the inner puncture is axes-central. \end{proof} \begin{remark} Conjecture~\ref{conj:level-wlp} remains open in two cases, both of which are conjectured to have the weak Lefschetz property: \begin{enumerate} \item $t$ even, $\alpha + \beta + \gamma$ is odd, and $\alpha < \beta < \gamma$; and \item $t$ odd, $\alpha + \beta + \gamma$ is even, and $\alpha < \beta$ or $\beta < \gamma$.
\end{enumerate} \end{remark} Notice that $T = T_d(I_{a,b,c,\alpha,\beta,\gamma})$ is simultaneously axes- and gravity-central precisely if either $a = b = c$ and $\alpha = \beta = \gamma$, or $a = b+2 = c+1$ and $\alpha=\beta+2=\gamma+1$. In the former case, the weak Lefschetz property in characteristic zero is completely characterized below, strengthening \cite[Corollary~7.6]{MMN-2011}. \begin{corollary} Let $I = I_{a, a, a, \alpha, \alpha, \alpha} = (x^a, y^a, z^a, x^{\alpha} y^{\alpha} z^{\alpha})$, where $a > \alpha$. Then $R/I$ fails to have the weak Lefschetz property in characteristic zero if and only if $\alpha$ and $a$ are odd and $a \geq 2 \alpha + 1$. \end{corollary} \begin{proof} If $a < 2 \alpha$, then $R/I$ has the weak Lefschetz property by Theorem~\ref{thm:amaci-wlp}(a). Assume now $a \geq 2 \alpha$. Then $R/I$ fails the weak Lefschetz property if $\alpha$ and $a$ are odd by \cite[Corollary~7.6]{MMN-2011} (or Theorem~\ref{thm:amaci-wlp}(b)(3)(V')). Otherwise, $R/I$ has this property by Proposition~\ref{pro:level-wlp}. \end{proof} For $a \geq 2 \alpha$, the triangular region $T_{a+\alpha} (I)$ was considered by Krattenthaler in \cite{Kr-06}. He described a bijection between cyclically symmetric lozenge tilings of the region and descending plane partitions with specific conditions. ~\subsection{Splitting type and regularity}~\par The generic splitting type of a vector bundle on projective space is an important invariant. However, its computation is often challenging. In this section we consider the splitting type of the syzygy bundles of monomial almost complete intersections in $R$. These are rank three bundles on the projective plane. For the remainder of this section we assume $K$ is an infinite field. Let $I = I_{a,b,c,\alpha,\beta,\gamma}$ be as above.
Recall from Section~\ref{sub:syz} that the syzygy module $\syz{I}$ of $I$ is defined by the exact sequence \begin{equation*} 0 \longrightarrow \syz{I} \longrightarrow R(-\alpha-\beta-\gamma) \oplus R(-a) \oplus R(-b) \oplus R(-c) \longrightarrow I \longrightarrow 0, \end{equation*} and the syzygy bundle $\widetilde\syz{I}$ on $\PP^2$ of $I$ is the sheafification of $\syz{I}$. Its restriction to any line $H$ of $\PP^2$ splits as $\SO_H(p) \oplus \SO_H(q) \oplus \SO_H(r)$. The triple $(p, q, r)$ depends on the choice of the line $H$, but is the same for all general lines. This latter triple is called the \emph{generic splitting type} of $\widetilde\syz{I}$. Since $I$ is a monomial ideal, Proposition~\ref{pro:mono} implies that the generic splitting type $(p, q, r)$ can be determined if we restrict to the line defined by $\ell = x+ y + z$. For computing the generic splitting type of $\widetilde\syz{I}$, we use the observation that $R/(I, \ell) \cong S/J$, where $S = K[x,y]$, and $J = (x^a, y^b, (x+y)^c, x^\alpha y^\beta (x+y)^\gamma)$. Define an $S$-module $\syz{J}$ by the exact sequence \begin{equation} \label{eqn:syz-J} 0 \longrightarrow \syz{J} \longrightarrow S(-\alpha-\beta-\gamma) \oplus S(-a) \oplus S(-b) \oplus S(-c) \longrightarrow J \longrightarrow 0 \end{equation} using the, possibly non-minimal, set of generators $\{x^a, y^b, (x+y)^c, x^\alpha y^\beta (x+y)^\gamma\}$ of $J$. Then $\syz{J} \cong S(p) \oplus S(q) \oplus S(r)$, where $(p, q, r)$ is the generic splitting type of the vector bundle $\widetilde\syz{I}$. The Castelnuovo-Mumford regularity of the ideal $J$ is $\reg{J}= 1 + \reg S/J$. For later use we record the following facts. \begin{remark} \label{rem:splitting-type} Adopt the above notation. Then the following statements hold: \begin{enumerate} \item Using, for example, the Sequence~(\ref{eqn:syz-J}), one gets $-(p + q + r) = a+b+c+\alpha+\beta+\gamma$. 
\item If any of the generators of $J$ is extraneous, then the degree of that generator is one of $-p$, $-q$, or $-r$. \item As the regularity of $J$ is determined by the Betti numbers of $S/J$, we obtain that $\reg{J} + 1 = \max\{-p,-q,-r\}$ if the Sequence~(\ref{eqn:syz-J}) is a minimal free resolution of $J$. \end{enumerate} \end{remark} Before moving on, we prove a technical but useful lemma. \begin{lemma} \label{lem:reg-2AMACI} Let $S = K[x,y]$, where $K$ is a field of characteristic zero. Consider the ideal $\fa = (x^a, y^b, x^\alpha y^\beta (x+y)^\gamma)$ of $S$, and assume that the given generating set is minimal. Then $\reg{\fa}$ is \[ -1 + \max \left \{a+ \beta, b+\alpha, \min \left \{a+b, a+ \beta + \gamma, b+ \alpha + \gamma, \left\lceil \frac{1}{2}(a+b+ \alpha + \beta + \gamma)\right\rceil \right \} \right\}. \] \end{lemma} \begin{proof} We proceed in three steps. First, considering the minimal free resolution of the ideal $(x^a, y^b, x^\alpha y^\beta)$, we conclude \[ \reg (x^a, y^b, x^\alpha y^\beta) = -1 + \max \{a+ \beta, b+\alpha\}. \] Second, the algebra $S/(x^a, y^b)$ has the strong Lefschetz property in characteristic zero (see, e.g., \cite[Proposition~4.4]{HMNW}). Thus, the Hilbert function of $S/(x^a, y^b, (x+y)^\gamma)$ is \[ \dim_K{[S/(x^a, y^b, (x+y)^\gamma)]_j} = \max\{0, \dim_K{[S/(x^a, y^b)]_j} - \dim_K{[S/(x^a,y^b)]_{j-\gamma}}\}. \] By analyzing when the difference becomes non-positive, we get that \begin{equation}\label{eq:reg-restr-ci} \reg (x^a, y^b, (x+y)^\gamma) = -1 + \min \left \{a+b, a+ \gamma, b+\gamma, \left\lceil \frac{1}{2}(a+b+\gamma)\right\rceil \right \}. \end{equation} Third, notice that \[ (x^a, y^b, x^\alpha y^\beta (x+y)^\gamma):x^\alpha y^\beta = (x^{a-\alpha}, y^{b-\beta}, (x+y)^\gamma). 
\] Hence, multiplication by $x^\alpha y^\beta$ induces the short exact sequence \[ 0 \rightarrow [S/(x^{a-\alpha}, y^{b-\beta}, (x+y)^\gamma)](-\alpha-\beta) \stackrel{\times x^\alpha y^\beta}{\longrightarrow} S/\fa \rightarrow S/(x^a, y^b, x^\alpha y^\beta) \rightarrow 0. \] It implies \[ \reg{\fa} = \max\{\alpha + \beta + \reg{(x^{a-\alpha}, y^{b-\beta}, (x+y)^\gamma)}, \reg{(x^a, y^b, x^\alpha y^\beta)}\}. \] Using the first two steps, the claim follows. \end{proof} Recall that Proposition~\ref{pro:amaci-semistable} gives a characterization of the semistability of the syzygy bundle $\widetilde\syz{I_{a,b,c,\alpha,\beta,\gamma}}$, using only the parameters $a$, $b$, $c$, $\alpha$, $\beta$, and $\gamma$. We determine the splitting type of $\widetilde\syz{I_{a,b,c,\alpha,\beta,\gamma}}$ for the nonsemistable and the semistable cases separately. \subsubsection{Nonsemistable syzygy bundle} We first consider the case when the syzygy bundle is not semistable, and therein we distinguish four cases. It turns out that in three cases, at least one of the generators of the ideal $J$ is extraneous. \begin{proposition} \label{pro:st-nss} Consider the ideal $I = I_{a,b,c,\alpha,\beta,\gamma} = (x^a, y^b, z^c, x^\alpha y^\beta z^\gamma)$ with four minimal generators. Assume that the base field $K$ has characteristic zero and, without loss of generality, that $a \leq b \leq c$. Set $d := \frac{1}{3}(a+b+c+\alpha+\beta+\gamma)$, and denote by $(p, q, r)$ the generic splitting type of $\widetilde\syz{I}$. Assume that $\widetilde\syz{I}$ is not semistable. Then: \begin{enumerate} \item If $\min \{\alpha + \beta + \gamma, c\} \geq a+b -1$, then \[ (p, q, r) = (-c, -\alpha - \beta - \gamma, -a-b). \] \item Assume $\min \{\alpha + \beta + \gamma, c\} \leq a+b -2$ and \[ \frac{1}{2}(a+b+ c) \leq \min \left \{a+ \beta + \gamma, b+ \alpha + \gamma, c + \beta + \gamma, \frac{1}{2}(a+b+ \alpha + \beta + \gamma) \right \}.
\] Then \[ (p, q, r) = (-\alpha-\beta-\gamma, - \left\lceil \frac{1}{2}(a+b+c) \right\rceil, - \left\lfloor \frac{1}{2}(a+b+c) \right\rfloor). \] \item Assume $\min \{\alpha + \beta + \gamma, c\} \leq a+b -2$ and \[ \frac{1}{2}(a+b+ \alpha + \beta + \gamma) \leq \min \left \{a+ \beta + \gamma, b+ \alpha + \gamma, c + \beta + \gamma, \frac{1}{2}(a+b+ c) \right \}. \] Then \[ (p, q, r) = (-c, q, -a-b-\alpha-\beta-\gamma+q), \] where $- q = \min \left \{a+ \beta + \gamma, b+ \alpha + \gamma, \left\lceil \frac{1}{2}(a+b+ \alpha + \beta + \gamma)\right\rceil \right \}$. \item Assume $\min \{\alpha + \beta + \gamma, c\} \leq a+b -2$ and \begin{equation*} \begin{split} -s = \min \left \{a+ \beta + \gamma, b+ \alpha + \gamma, c + \beta + \gamma \right \} < \hspace*{5cm} \\ \min \left \{ \frac{1}{2}(a+b+ \alpha + \beta + \gamma), \frac{1}{2}(a+b+ c) \right \}. \end{split} \end{equation*} Then \begin{equation*} (p, q, r) = \left ( \left\lfloor \frac{1}{2}(-3d-s) \right\rfloor, \left\lceil \frac{1}{2}(-3 d - s) \right\rceil, s \right). \end{equation*} \end{enumerate} \end{proposition} \begin{proof} Set \begin{equation*} \mu = \min \left \{a+b, a+ \beta + \gamma, b+ \alpha + \gamma, c + \beta + \gamma, \frac{1}{2}(a+b+ \alpha + \beta + \gamma), \frac{1}{2}(a+b+ c) \right \}. \end{equation*} Using $a \leq b \leq c$, \cite[Theorem 6.3]{Br} implies that the maximal slope of a subsheaf of $\widetilde\syz{I}$ is $-\mu$. Since $\widetilde\syz{I}$ is not semistable, we have $\mu < d$ (see Proposition~\ref{pro:amaci-semistable}). Moreover, the generic splitting type of $\widetilde\syz{I}$ is determined by the minimal free resolution of $J = (x^a, y^b, (x+y)^c, x^\alpha y^\beta (x+y)^\gamma)$ as a module over $S = K[x, y]$. We combine both approaches to determine the generic splitting type. Since $\reg (x^a, y^b) = a+b-1$, all polynomials in $S$ whose degree is at least $a+b-1$ are contained in $(x^a, y^b)$. 
Hence, $J = (x^a, y^b)$ if $\min \{\alpha + \beta + \gamma, c\} \geq a+b -1$, and the claim in case (i) follows by Remark~\ref{rem:splitting-type}. For the remainder of the proof, assume $\min \{\alpha + \beta + \gamma, c\} \leq a+b -2$. Then $a+b > \frac{1}{2}(a+b+ c)$, and thus $\mu \neq a+b$. In case (ii), it follows that $\mu = \frac{1}{2}(a+b+ c)$ and $c \leq \alpha + \beta + \gamma$, and thus $c \leq a+b-2$. Using Equation \eqref{eq:reg-restr-ci}, we conclude that \[ \reg (x^a, y^b, (x+y)^c) = -1 + \min \left \{a+b, \left\lceil \frac{1}{2}(a+b+c)\right\rceil \right \} = -1 + \left\lceil \frac{1}{2}(a+b+c)\right\rceil. \] Observe now that $d > \mu = \frac{1}{2}(a+b+ c)$ is equivalent to $\alpha + \beta + \gamma > \frac{1}{2}(a+b+ c)$. This implies $\alpha + \beta + \gamma > \reg (x^a, y^b, (x+y)^c)$, and thus $J = (x^a, y^b, (x+y)^c)$. Using Remark~\ref{rem:splitting-type} again, we get the generic splitting type of $\widetilde\syz{I}$ as claimed in (ii). Consider now case (iii). Then $d > \mu = \frac{1}{2}(a+b+ \alpha + \beta + \gamma)$, which gives $c > \frac{1}{2}(a+b+ \alpha + \beta + \gamma)$. The second assumption in this case also implies $\frac{1}{2}(a+b+ \alpha + \beta + \gamma) \leq a + \beta + \gamma$, which is equivalent to $b+\alpha \leq a + \beta + \gamma$ and also to $b + \alpha \leq \frac{1}{2}(a+b+ \alpha + \beta + \gamma)$. Similarly, we have that $\frac{1}{2}(a+b+ \alpha + \beta + \gamma) \leq b+\alpha + \gamma$, which is equivalent to $a + \beta \leq b+ \alpha + \gamma$ and also to $a + \beta \le \frac{1}{2}(a+b+ \alpha + \beta + \gamma)$. It follows that \[ \max \{a+\beta, b+ \alpha \} \leq \min \left \{a+ \beta + \gamma, b+ \alpha + \gamma, \frac{1}{2}(a+b+ \alpha + \beta + \gamma) \right \}. 
\] Hence Lemma~\ref{lem:reg-2AMACI} yields \begin{equation*} \begin{split} \reg (x^a, y^b, x^\alpha y^\beta (x+y)^\gamma) = \hspace*{9.7cm} \\ -1 + \min \left \{a+ \beta + \gamma, b+ \alpha + \gamma, \left\lceil \frac{1}{2}(a+b+ \alpha + \beta + \gamma)\right\rceil \right \} < c. \end{split} \end{equation*} This shows that $(x+y)^c \in (x^a, y^b, x^\alpha y^\beta (x+y)^\gamma) = J$. Setting $- q = 1 + \reg J$, Remark~\ref{rem:splitting-type} provides the generic splitting type in case (iii). Finally consider case (iv). Then $\mu = -s$, and $\mu$ is equal to the degree of the least common multiple of two of the minimal generators of $I$. In fact, $-\mu = s$ is the slope of the syzygy bundle ${\mathcal O}_{\PP^2}(s)$ of the ideal generated by these two generators. Thus, the Harder-Narasimhan filtration (see \cite[Definition~1.3.2]{HM}) gives an exact sequence \[ 0 \to {\mathcal O}_{\PP^2}(s) \to \widetilde\syz{I} \to {\mathcal E} \to 0, \] where ${\mathcal E}$ is a semistable torsion-free sheaf on $\PP^2$ of rank two and first Chern class $-a-b-c - \alpha - \beta - \gamma -s = -3d -s$. Its bidual ${\mathcal E}^{**}$ is a stable vector bundle. Thus, by the theorem of Grauert and M\"ulich (see \cite{GM} or \cite[Corollary 1 of Theorem 2.1.4]{OSS}), its generic splitting type is $( \left\lfloor \frac{1}{2}(-3d-s) \right\rfloor, \left\lceil \frac{1}{2}(-3 d - s) \right\rceil)$. Now the claim follows by restricting the above sequence to a general line of $\PP^2$. \end{proof} We have seen that the ideal $J = (x^a, y^b, (x+y)^c, x^\alpha y^\beta (x+y)^\gamma)$ has at most three minimal generators in the cases (i) - (iii) of the above proposition. In the fourth case, the associated ideal $J \subset S$ may be minimally generated by four polynomials. \begin{example} \label{exa:st-nss-4mingen} Consider the ideal \[ I = I_{4,5,5,3,1,1} = (x^4, y^5, z^5, x^3yz). \] Then the corresponding ideal $J$ is minimally generated by $x^4, y^5, (x+y)^5$, and $x^3y(x+y)$. 
The syzygy bundle $\widetilde\syz{I}$ is not semistable, and its generic splitting type is $(-7, -6, -6)$ by Proposition~\ref{pro:st-nss}(iv). \end{example} \subsubsection{Semistable syzygy bundle} Order the entries of the generic splitting type $(p,q,r)$ of the semistable syzygy bundle $\widetilde\syz{I}$ such that $p \leq q \leq r$. In this case, the splitting type determines the presence of the weak Lefschetz property if the characteristic of $K$ is zero (see \cite[Theorem 2.2]{BK}). The following result is slightly more precise. \begin{proposition}\label{pro:split-type-semist} Let $K$ be a field of characteristic zero, and assume the ideal $I = I_{a,b,c,\alpha,\beta,\gamma}$ has a semistable syzygy bundle. Set $k = \left\lfloor \frac{1}{3}(a+b+c+\alpha+\beta+\gamma) \right\rfloor$. Then the generic splitting type of $\widetilde\syz{I}$ is \begin{equation*} (p, q, r) = \begin{cases} (-k-1,-k,-k) & \text{if } a+b+c+\alpha+\beta+\gamma = 3k+1;\\ (-k-1,-k-1,-k) & \text{if } a+b+c+\alpha+\beta+\gamma = 3k+2; \\ (-k,-k,-k) & \text{if } a+b+c+\alpha+\beta+\gamma = 3k \text{ and} \\ & \text{$R/I$ has the weak Lefschetz property}; \\ (-k-1,-k,-k+1) & \text{if } a+b+c+\alpha+\beta+\gamma = 3k \text{ and} \\ & \text{$R/I$ fails to have the weak Lefschetz property}. \end{cases} \end{equation*} \end{proposition} \begin{proof} The Grauert-M\"ulich theorem \cite{GM} gives that $r - q$ and $q - p$ are both nonnegative and at most 1. Moreover, $p, q$, and $r$ satisfy $a+b+c+\alpha+\beta+\gamma = -(p+q+r)$ (see Remark~\ref{rem:splitting-type}(i)). This gives the result if $k \neq d = \frac{1}{3}(a+b+c+\alpha+\beta+\gamma)$. It remains to consider the case when $k = d$. Then $(-k,-k,-k)$ and $(-k-1,-k,-k+1)$ are the only possible generic splitting types. By Proposition~\ref{pro:amaci-semistable}(i), the minimal generators of the ideal $J = (x^a, y^b, (x+y)^c, x^\alpha y^\beta (x+y)^\gamma)$ have degrees that are less than $d$.
Hence $\reg J = d$ if and only if the splitting type of $\widetilde\syz{I}$ is $(-d-1, -d, -d+1)$. Since $\dim_K [R/I]_{d-2} = \dim_K [R/I]_{d-1}$, using Proposition~\ref{pro:amaci-balanced}, we conclude that $\reg J \geq d$ if and only if $R/I$ does not have the weak Lefschetz property. \end{proof} We are ready to add the missing piece in the proof of Theorem~\ref{thm:amaci-wlp}. \begin{proof}[Completion of the proof of Theorem~\ref{thm:amaci-wlp}(b)(1)] \mbox{ } We have just seen that the ideal $J = (x^a, y^b, (x+y)^c, x^\alpha y^\beta (x+y)^\gamma)$ has regularity $d$ if $R/I$ fails the weak Lefschetz property. This implies that the multiplication map $\times (x+y+z): [R/I]_{j-2} \to [R/I]_{j-1}$ is surjective whenever $j > d$. Moreover, since the minimal generators of $J$ have degrees that are less than $d$, we have the exact sequence \begin{equation*} 0 \longrightarrow S(-d+1) \oplus S(-d) \oplus S(-d-1) \longrightarrow S(-\alpha-\beta-\gamma) \oplus S(-a) \oplus S(-b) \oplus S(-c) \longrightarrow J \longrightarrow 0. \end{equation*} In the above proof of Theorem~\ref{thm:amaci-wlp} we saw that the four punctures of $T_d (I)$ do not overlap and that $T_d(I)$ is balanced. Hence $T_{d-1} (I)$ has 3 more downward-pointing than upward-pointing triangles, that is, \[ \dim_K [R/I]_{d-2} = \dim_K [R/I]_{d-3} + 3. \] It follows that the multiplication map in the exact sequence \[ [R/I]_{d-3} \longrightarrow [R/I]_{d-2} \longrightarrow [S/J]_{d-2} \longrightarrow 0 \] is injective because $\dim_K [S/J]_{d-2} = 3$. Hence $\times (x+y+z): [R/I]_{j-2} \to [R/I]_{j-1}$ is injective whenever $j \leq d-1$. \end{proof} The second author would like to thank the authors of \cite{MMMNW}; it was during a conversation in the preparation of that paper that he learned about the use of the Grauert-M\"ulich theorem for an alternative way of deducing the injectivity of the map $[R/I]_{d-3} \longrightarrow [R/I]_{d-2}$ in the above argument if the characteristic of $K$ is zero.
\begin{example} \label{exa:syzygy} Consider the ideal $I_{7,7,7,3,3,3} = (x^7, y^7, z^7, x^3 y^3 z^3)$. It never has the weak Lefschetz property, by Theorem~\ref{thm:amaci-wlp}(vii). The bundle $\widetilde\syz{I_{7,7,7,3,3,3}}$ has generic splitting type $(-11, -10, -9)$. Notice that the similar ideal $I_{6,7,8,3,3,3} = (x^6, y^7, z^8, x^3 y^3 z^3)$ has the weak Lefschetz property in characteristic zero as $\det{N_{6,7,8,3,3,3}} = -1764$. The generic splitting type of $\widetilde\syz{I_{6,7,8,3,3,3}}$ is $(-10, -10, -10)$. \end{example} We summarize part of our results for the case where $I$ is associated to a tileable triangular region. In particular, if $K$ is an infinite field of arbitrary characteristic, then the splitting type can be used to determine the presence of the weak Lefschetz property. \begin{theorem} \label{thm:equiv} Let $I = I_{a,b,c,\alpha,\beta,\gamma} \subset R = K[x,y,z]$, where $K$ is an infinite field of arbitrary characteristic. Assume $I$ satisfies conditions (i)--(iv) in Theorem~\ref{thm:amaci-wlp} and $d := \frac{1}{3}(a+b+c+\alpha+\beta+\gamma)$ is an integer. Then the following conditions are equivalent: \begin{enumerate} \item The algebra $R/I$ has the weak Lefschetz property. \item The determinant of $Z(T_d(I))$ (i.e., the enumeration of signed perfect matchings of the bipartite graph $G(T_d(I))$) is not zero in $K$. \item The generic splitting type of $\widetilde\syz{I}$ is $(-d,-d,-d)$. \end{enumerate} \end{theorem} \begin{proof} Regardless of the characteristic of $K$, the arguments for Proposition~\ref{pro:amaci-balanced} show that $T_d(I)$ is balanced. Moreover, the degrees of the socle generators of $R/I$ are at least $d-2$ as shown in Theorem~\ref{thm:amaci-wlp}(b)(1). Hence, Proposition~\ref{pro:wlp} gives that $R/I$ has the weak Lefschetz property if and only if the multiplication map \[ \times (x+y+z): [R/I]_{d-2} \to [R/I]_{d-1} \] is bijective.
Now, Corollary~\ref{cor:wlp-biadj} yields the equivalence of Conditions (i) and (ii). As above, let $(p, q, r)$ be the generic splitting type of $\widetilde\syz{I}$, where $p \leq q \leq r$, and let $J \subset S$ be the ideal such that $R/(I, x+y+z) \cong S/J$. The above multiplication map is bijective if and only if $\reg J = d-1$. Since $\reg J + 1 = -r$ and $p+q+r = -3d$, it follows that $\reg J = d-1$ if and only if $(p, q, r) = (-d, -d, -d)$. Hence, conditions (i) and (iii) are equivalent. \end{proof}
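The equivalences above lend themselves to direct machine verification: each graded piece of $R/I$ has a monomial basis, and (in characteristic zero) the weak Lefschetz property is decided by the rank of multiplication by $x+y+z$ in the critical degrees $d-2 \to d-1$. The following Python sketch is an illustration, not part of the argument; all function names are ours, and it uses exact rational arithmetic with brute-force Gaussian elimination to check the two ideals of Example~\ref{exa:syzygy}.

```python
from fractions import Fraction

def in_ideal(e, a, b, c, al, be, ga):
    # x^i y^j z^k lies in I = (x^a, y^b, z^c, x^al y^be z^ga) iff it is
    # divisible by one of the four monomial generators.
    i, j, k = e
    return i >= a or j >= b or k >= c or (i >= al and j >= be and k >= ga)

def basis(deg, params):
    # Monomial basis of [R/I]_deg: degree-deg monomials outside I.
    return [(i, j, deg - i - j) for i in range(deg + 1)
            for j in range(deg + 1 - i)
            if not in_ideal((i, j, deg - i - j), *params)]

def mult_matrix(deg, params):
    # Matrix of multiplication by x+y+z from [R/I]_deg to [R/I]_{deg+1}.
    dom, cod = basis(deg, params), basis(deg + 1, params)
    index = {m: r for r, m in enumerate(cod)}
    M = [[0] * len(dom) for _ in cod]
    for col, (i, j, k) in enumerate(dom):
        for m in ((i + 1, j, k), (i, j + 1, k), (i, j, k + 1)):
            if m in index:          # monomials inside I vanish in R/I
                M[index[m]][col] = 1
    return M

def rank(M):
    # Gaussian elimination over the rationals.
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][col]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][col]:
                f = A[i][col] / A[r][col]
                A[i] = [u - f * v for u, v in zip(A[i], A[r])]
        r += 1
    return r

# I_{7,7,7,3,3,3}: d = 10; the map [R/I]_8 -> [R/I]_9 is between spaces of
# equal dimension but is not bijective, so the WLP fails.
M = mult_matrix(8, (7, 7, 7, 3, 3, 3))
print(len(M), len(M[0]), rank(M))

# I_{6,7,8,3,3,3}: the corresponding map has full rank, so the WLP holds.
N = mult_matrix(8, (6, 7, 8, 3, 3, 3))
print(len(N), len(N[0]), rank(N))
```

For $I_{7,7,7,3,3,3}$ the map acts between $36$-dimensional spaces but has a kernel, while for $I_{6,7,8,3,3,3}$ the $35 \times 35$ matrix is invertible, in accordance with the splitting types $(-11,-10,-9)$ and $(-10,-10,-10)$.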
https://arxiv.org/abs/1606.01809
Syzygy bundles and the weak Lefschetz property of almost complete intersections
Deciding the presence of the weak Lefschetz property often is a challenging problem. In this work an in-depth study is carried out in the case of Artinian monomial ideals with four generators in three variables. We use a connection to lozenge tilings to describe semistability of the syzygy bundle of such an ideal, to determine its generic splitting type, and to decide the presence of the weak Lefschetz property. We provide results in both characteristic zero and positive characteristic.
https://arxiv.org/abs/1701.06760
On the dense Preferential Attachment Graph models and their graphon induced counterpart
Letting $\mathcal{M}$ denote the space of finite measures on $\mathbb{N}$, and $\mu_\lambda\in\mathcal{M}$ denote the Poisson distribution with parameter $\lambda$, the function $W:[0,1]^2\to\mathcal{M}$ given by \[ W(x,y)=\mu_{c\log x\log y} \] is called the PAG graphon with density $c$. It is known that this is the limit, in the multigraph homomorphism sense, of the dense Preferential Attachment Graph (PAG) model with edge density $c$. This graphon can then in turn be used to generate the so-called W-random graphs in a natural way. The aim of this paper is to compare the dense PAG model with the W-random graph model obtained from the corresponding graphon. Motivated by the multigraph limit theory, we investigate the expected jumble norm distance of the two models in terms of the number of vertices $n$. We present a coupling for which the expectation can be bounded from above by $O(\log^2 n\cdot n^{-1/3})$, and provide a universal lower bound that is coupling independent, but with a worse exponent.
\section{Introduction} Preferential attachment graphs (PAGs) form a group of random growing graph models that have been studied for a long time \cite{barabasi, durrett, frieze}. The main motivation is modelling randomly evolving large real-world networks, like online and offline social networks, the internet, or biological networks (e.g.\ protein-protein interactions). The basic PAG models have been extended with various features, for example duplication steps, weighted edges, or vertices with random fitness. The study of this wide family of models has provided information about several phenomena in real-world networks (asymptotic degree distribution, clustering, relation of local and global properties, epidemic spread). The limiting behaviour of PAG models has also been investigated from various points of view, depending somewhat on the edge density along the graph sequences. For instance, in \cite{BBCS}, N.\ Berger, C.\ Borgs, J.\ T.\ Chayes and A.\ Saberi consider a sparse version of the process, with a linear number of edges compared to the number of vertices, and prove convergence in the sense of Benjamini--Schramm to a P\'olya point graph. A variation with added randomness is considered by R.\ Elwes in \cite{E1,E2}, where the preferential attachment model is amended in such a way that the number of edges added at each stage is itself a random variable, but in expectation still preserves linear growth. The limit here is the infinite Rado graph, or a multigraph variant of the same, depending on whether multiple edges are allowed during the process. At the dense end of the spectrum, C.\ Borgs, J.\ Chayes, L.\ Lov\'asz, V.\ S\'os and K.\ Vesztergombi considered in \cite{BCLSV} the case when the edge density along the sequence is essentially constant $c$ (i.e.\ the number of edges is approximately $cn^2/2$), under the convergence notion of injective graph densities.
They showed that with probability 1 the graph sequence converges to the graphon $W:[0,1]^2\to\mb{R}$ given by $W(x,y)=c\ln x\ln y$. Later, B.\ R\'ath and L.\ Szak\'acs considered in \cite{RS} convergence of a more general family of processes with respect to induced graph densities, showing that the limit object is a graphon that now takes Poisson distributions as values instead. If instead of considering induced densities, we look for homomorphism densities, the limit object can be seen to be in some sense a combination of the two previously mentioned ones: we obtain a graphon with $W(x,y)$ being a Poisson distribution with parameter $c\ln x\ln y$ (i.e., the injective density limit is the first moment of the homomorphism density limit). Hence the corresponding graphs contain multiple edges, and the original notions for limits of simple graphs cannot be used any more. The paper \cite{KKLS} by K.-K., L.\ Lov\'asz and B.\ Szegedy provides a framework for handling homomorphism densities in the context of multigraphs, and makes use of the so-called \textit{jumble-norm} to measure distance between graphons. All of the papers \cite{BCLSV,RS,KKLS} also deal with $W$-random graph sequences induced by the limit objects $W$, and show that with probability 1, the resulting graph sequence converges to $W$ in the respective densities sense. These $W$-random graph models are thus very similar to the classical graph sequences that gave rise to the limit $W$, but also exhibit some significant differences. Our goal in this paper is to compare the $c$-dense preferential attachment graph model to its $W$-random counterpart, showing that with probability 1 they are close (but not too close) in the jumble distance. 
The idea of the proof of the main result is to define a family of random graph models (see Section \ref{randomgraph}), which connects the $W$-random graph and the PAG model, and which can be coupled (see Section \ref{coupling}) so that the pairwise jumble-norm distances are easier to bound. In the discussion part (Section \ref{discussion}), we point out some features of the $W$-random version that can make it more useful in certain applications. \section{Terminology and main result} We shall start by defining the distance notion between multigraphs that we intend to use in this paper. It may be defined more generally for graphons (which essentially are weighted graphs with vertex set $[0,1]$), but that shall not be needed here, and we refer to \cite{KKLS} for more details. \begin{definition}\label{def:jumble} Let $G$ and $H$ be two (multi-)graphs on the same vertex set $[n]:=\left\{1,\ldots, n\right\}$ for some positive integer $n$. Then we define their \emph{jumble norm distance} as \[ d_{\boxtimes}(G, H)=\frac 1n\cdot\max_{\emptyset\neq S,T\subseteq [n]} \frac{1}{\sqrt{st}}\bigg|\sum_{i\in S, j\in T} U_{ij}-V_{ij}\bigg|, \] where $s=|S|$, $t=|T|$, and $U_{ij}$ and $V_{ij}$ denote the multiplicity of edge $ij$ in $G$ and $H$, respectively. \end{definition} The cut norm distance $d_\square$ used in many other papers (see e.g. \cite{BCLSV} for details) differs from this in the factor $\frac{1}{\sqrt{st}}$, which is omitted there. As such, our current distance notion magnifies the differences that occur on small sets, and we clearly have $d_\boxtimes\geq d_\square$. Also, the jumble norm distance can be considered as an $L^2$-version of the cut norm distance, since $\sqrt{st}$ corresponds to the $L^2$ norm of the characteristic function of the set $S\times T$. Next, fix a positive parameter $c>0$.
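Before moving on, we note that for very small graphs the distance of Definition~\ref{def:jumble} (with $s=|S|$, $t=|T|$) can be evaluated by brute force over all pairs of nonempty subsets. A Python sketch for illustration only (the function name is ours; the enumeration is exponential in $n$), with the two multigraphs given by their multiplicity matrices:

```python
from math import sqrt

def jumble_distance(U, V):
    # U, V: n x n edge-multiplicity matrices of two multigraphs on the
    # same vertex set.  Brute force over all nonempty subsets S, T.
    n = len(U)
    D = [[U[i][j] - V[i][j] for j in range(n)] for i in range(n)]
    best = 0.0
    for S in range(1, 1 << n):                 # bitmask encoding of S
        rows = [i for i in range(n) if S >> i & 1]
        colsum = [sum(D[i][j] for i in rows) for j in range(n)]
        for T in range(1, 1 << n):             # bitmask encoding of T
            cols = [j for j in range(n) if T >> j & 1]
            val = abs(sum(colsum[j] for j in cols))
            best = max(best, val / sqrt(len(rows) * len(cols)))
    return best / n
```

For example, if $U$ and $V$ differ in a single (non-loop) edge, the maximum is attained already on the corresponding singletons and the distance equals $1/n$, illustrating how $d_\boxtimes$ magnifies differences on small sets.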
Let $\mc{M}$ denote the space of finite measures on $\mb{N}$, and $W:[0,1]^2\to\mc{M}$ be the function given by \[ W(x,y)=\mu_{c\log x\log y}, \] where $\mu_\lambda$ denotes the Poisson distribution with parameter $\lambda$. We want to define the notion of $W$-random (multi-)graphs. The essence of the two-step randomization is as follows. We consider the set $[0,1]$ as the vertex set of the infinite graph with ``adjacency function'' $W$, and sample a random induced subgraph on $n$ vertices by choosing its vertices independently uniformly from $[0,1]$. After this first randomization, we obtain a ``graph'' on $n$ vertices where each ``edge'' is a Poisson distribution. To obtain a true multigraph, we then independently sample an edge multiplicity for each pair of vertices from the corresponding Poisson distribution. If we allow loops, this will correspond to the random graph $\mb{G}_W^{\circ}(n)$, whereas if loops are disallowed, we obtain the random graph $\mb{G}_{W}(n)$. \begin{definition} \label{def:gwn} We choose independent exponential random variables $\xi_i$ with parameter $1$ for every $1\leq i \leq n$. For $i<j$, let $Y_{ij}$ be a Poisson random variable with parameter $c\xi_i\xi_j$. For every $i$, let $Y_{ii}$ be a Poisson random variable with parameter $c\xi_i^2/2$. Assume that all $Y_{ij}$s are conditionally independent with respect to the $\xi_i$s. We put $Y_{ij}$ edges between vertices $i$ and $j$ for every $1\leq i \leq j \leq n$. This yields a random multigraph $\mathbb G_{\rm W}^{\circ}(n)$.\\ If, compared to $\mathbb G_{\rm W}^{\circ}(n)$, we erase the loops, we obtain the random multigraph $\mathbb G_{\rm W}(n)$. \end{definition} \begin{remark} Note that using exponential variables instead of the uniform $[0,1]$-valued ones is compensated by the loss of the $\log$ in the parameter. \end{remark} These are the random models we wish to compare to the below version of the PAG model.
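Definition~\ref{def:gwn} translates directly into a sampler. The following Python sketch is ours (in particular the function names, and the Knuth-style Poisson sampler, which assumes the parameters $c\xi_i\xi_j$ stay moderate):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for moderate lam only.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_GW(n, c, loops=False, rng=None):
    # Two-step randomization: first the exponential weights xi_i,
    # then conditionally independent Poisson edge multiplicities.
    rng = rng or random.Random()
    xi = [rng.expovariate(1.0) for _ in range(n)]
    Y = {}
    for i in range(n):
        for j in range(i, n):
            if i == j:
                if loops:                       # loop parameter c*xi_i^2/2
                    Y[(i, i)] = poisson(c * xi[i] ** 2 / 2, rng)
            else:                               # edge parameter c*xi_i*xi_j
                Y[(i, j)] = poisson(c * xi[i] * xi[j], rng)
    return Y                                    # multiplicities, keys i <= j
```

With `loops=True` this realizes $\mathbb G_{\rm W}^{\circ}(n)$, with `loops=False` the loopless $\mathbb G_{\rm W}(n)$.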
\begin{definition} We assign an urn to each vertex, initially with one single ball in each of them. Then we run a P\'olya urn process for $\lfloor cn^2\rfloor$ steps. That is, for $t=1, 2, \ldots, \lfloor cn^2\rfloor$, at step $t$, we choose an urn, with probabilities proportional to the number of balls inside the urn, and put a new ball into it (each random choice is conditionally independent from the previous steps, given the actual distribution of the balls). Finally, for $k=1, 2, \ldots, \big\lfloor\lfloor cn^2\rfloor/2\big\rfloor$, we add an edge between the vertices where the balls at step $t=2k-1$ and at step $t=2k$ have been placed. This yields the random multigraph $\mathbb G_{\mr{PAG}}(n)$; multiple edges and loops may occur. \end{definition} It was proved in \cite{KKLS} that with probability 1, the random graph $\mathbb G_{\rm W}(n)$ converges with respect to multigraph homomorphism densities to the original function $W$. As mentioned in the introduction, this is also the limit object obtained when looking at the random graphs $\mathbb G_{\mr{PAG}}(n)$, defined as the preferential attachment graph on $n$ vertices with $\lfloor cn^2 \rfloor$ edges. \\ Given that, as $n$ goes to infinity, the two random sequences $\mathbb G_{\mr{PAG}}(n)$ and $\mathbb G_{\rm W}(n)$ tend to the same limit, it is natural to ask how close these two sequences are as a function of $n$. Our main result is that under an appropriate coupling, we obtain a polynomial bound on the expected distance. \begin{theorem}\label{thm:main} There exists a coupling for which for every $1<\alpha<2$ there exists $K(\alpha)>0$ such that for every $n\geq 1$ we have \[\mathbb E\big(d_{\boxtimes}\big(\mathbb G_{\mr{PAG}}(n), \mathbb G_W(n)\big)\big)\leq K(\alpha)\cdot \log^2 n\cdot n^{\beta(\alpha)},\] where $\beta(\alpha):=\max\left\{\alpha-2,\frac{1-\alpha}{2},-1/2, 4-3\alpha\right\}$. With this bound, the optimal value for $\alpha$ is $5/3$, yielding $\beta(5/3)=-1/3$.
\end{theorem} In the last section, we provide a universal, coupling-independent lower bound of order $n^{-1}$. The exponents are far from each other, but the lower bound uses very little of the structure of the models, so there is room for improvement. \section{Random graph models} \label{randomgraph} We define a family of random graph models such that neighboring ones are easier to compare in the jumble norm, and the whole family connects the two models of Theorem \ref{thm:main}. In the next section we will also present possible couplings for these pairs of models, which together provide a coupling satisfying the conditions of the theorem. A positive number $c>0$ will be a common parameter of all of the models, and it will be considered fixed for the rest of the paper. Model 1 will be a realization of $\mb{G}_{\mr{PAG}}(n)$, whilst models 6 and 7 will be realizations of $\mb{G}_{\rm W}^\circ(n)$ and $\mb{G}_{\rm W}(n)$, respectively.\\ The graphs will have $n$ vertices, labeled by $1, 2, \ldots, n$. The parameter $\alpha$ will be chosen later so that the bounds are the best possible available from our approach. \subsection*{Model 1} We assign an urn to each vertex, initially with one single ball in each of them. Then we run a P\'olya urn process for $\lfloor cn^2\rfloor$ steps. That is, for $t=1, 2, \ldots, \lfloor cn^2\rfloor$, at step $t$, we choose an urn, with probabilities proportional to the number of balls inside the urn, and put a new ball into it (each random choice is conditionally independent from the previous steps, given the actual distribution of the balls). Finally, for $k=1, 2, \ldots, \big\lfloor\lfloor cn^2\rfloor/2\big\rfloor$, we add an edge between the vertices where the balls at step $t=2k-1$ and at step $t=2k$ have been placed. We obtain a random multigraph $\mathbb G_1(n)$ this way; multiple edges and loops may occur. \subsection*{Model 2} Fix $\alpha\geq 0$.
Let $r'$ be a random variable with negative binomial distribution, with parameters $n$ and $p_{\alpha}=1-e^{-\frac{1}{n^{\alpha-1}}}$ (we mean the version of negative binomial distribution with possible values $n, n+1, \ldots$). Let $r=r'-n$; this has values $0, 1, \ldots$ (this shifted version is sometimes also called negative binomial). The urn process is the same as in model $1$ (independent of $r'$), but we add edges between vertices chosen at step $t=2k-1$ and at step $t=2k$ only for $k\geq r/2$ (if $r>cn^2$, then we get the empty graph). We obtain a random multigraph $\mathbb G_2(n, \alpha)$. \subsection*{Model 3} Let $\alpha$ and $r$ be defined as in model $2$. For $t=1, 2, \ldots, r$, we run the P\'olya urn as before. Let $R_i^*$ be the proportion of the balls in urn $i$ after $r$ steps (for $i=1, \ldots, n$). For $t=r+1, \ldots, \lfloor cn^2\rfloor$, independently at each step, we put a new ball in an urn chosen randomly according to the distribution $(R_i^*)$. That is, the probability that the ball at step $t$ falls into urn $i$ is $R_i^*$, for all $t=r+1, \ldots, \lfloor cn^2\rfloor$. Finally, for $k\geq r/2$, we add an edge between the vertices chosen at step $t=2k-1$ and at step $t=2k$. (If $r>cn^2$, we mean the empty graph.) We obtain $\mathbb G_3(n, \alpha)$ this way. \subsection*{Model 4} Let $\alpha, r$ and $R_i^*$ be defined as in model $3$. If $r>cn^2$, take the empty graph. Otherwise, for every pair $1\leq i<j\leq n$, we take a random variable $Z_{ij}$ with Poisson distribution of parameter $cn^2R_i^*R_j^*$. For every $1\leq i\leq n$, we take a random variable $Z_{ii}$ with Poisson distribution of parameter $cn^2(R_i^*)^2/2$. We assume that all $Z_{ij}$s are conditionally independent of each other, given the $R_i^*$s. Finally, we put $Z_{ij}$ edges between vertices $i$ and $j$ for every pair $1\leq i\leq j \leq n$. We obtain $\mathbb G_4(n, \alpha)$ this way.
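The urn-driven models above are straightforward to simulate. For concreteness, a Python sketch of model $1$, i.e.\ of $\mathbb G_{\mr{PAG}}(n)$ (the function name is ours; the naive linear-scan urn selection costs $O(n)$ per step, so this is for small $n$ only):

```python
import random

def sample_pag(n, c, rng=None):
    # Model 1: Polya urn with one initial ball per urn, run for
    # floor(c*n^2) steps; consecutive choices are paired into edges.
    rng = rng or random.Random()
    balls = [1] * n
    total = n
    choices = []
    for _ in range(int(c * n * n)):
        x = rng.randrange(total)       # pick a ball uniformly at random:
        i = 0                          # this selects urn i with probability
        while x >= balls[i]:           # proportional to its ball count
            x -= balls[i]
            i += 1
        balls[i] += 1
        total += 1
        choices.append(i)
    edges = {}
    for k in range(len(choices) // 2): # pair steps 2k-1 and 2k into an edge
        e = tuple(sorted((choices[2 * k], choices[2 * k + 1])))
        edges[e] = edges.get(e, 0) + 1 # multiplicities; loops allowed
    return edges
```

Models $2$ and $3$ require only local changes: adding edges only for $k\geq r/2$, respectively freezing the proportions $R_i^*$ after the first $r$ steps.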
\subsection*{Model 5} Given $n$ and $\alpha$, the model is the same as model $4$, except that $r$ is no longer involved: we always proceed as in the non-empty case of model $4$. We obtain $\mathbb G_5(n, \alpha)$ this way. \subsection*{Model 6} We choose independent exponential random variables $\xi_i$ with parameter $1$ for every $1\leq i \leq n$. For $i<j$, let $Y_{ij}$ be a Poisson random variable with parameter $c\xi_i\xi_j$. For every $i$, let $Y_{ii}$ be a Poisson random variable with parameter $c\xi_i^2/2$. Assume that all $Y_{ij}$s are conditionally independent with respect to the $\xi_i$s. We put $Y_{ij}$ edges between vertices $i$ and $j$ for every $1\leq i \leq j \leq n$. We obtain a random multigraph $\mathbb G_6(n)$ this way. \subsection*{Model 7} For every $1\leq i < j \leq n$, let $Y_{ij}$ be defined as in model $6$. We add $Y_{ij}$ edges between vertices $i$ and $j$ for all these pairs, but there are no loops in this case. We obtain $\mathbb G_7(n)$ this way. \section{Couplings} \label{coupling} In order to prove Theorem \ref{thm:main}, we need to construct a particular coupling for which the distance of $\mathbb G_{\rm PAG}$ and $\mathbb G_{\rm W}$ is smaller than the upper bound. We do this through a sequence of couplings between the consecutive pairs, with respect to the order of random graph models in the previous section. It will be easy to see that the coupling of the first one (which is a realization of $\mathbb G_{\rm PAG}$) and the last one (which is a realization of $\mathbb G_{\rm W}$) can be constructed following the same order. At each step, we can simply add a finite family of random variables to the probability space independently where necessary, and use the already existing random variables in the other cases. \subsection*{Coupling of model $1$ and model $2$} These two models can be coupled easily. Take a realization of model $1$, and delete the edges corresponding to steps $2k-1$ and $2k$ for $k<r/2$.
That is, we do not add the edges in the first $r$ steps. \begin{proposition}\label{prop:12} For all $\alpha>1$ there exists $K_{1,2}>0$ such that \[\mathbb E\big(d_{\boxtimes}\big(\mathbb G_1(n, \alpha), \mathbb G_2(n, \alpha)\big)\big)\leq K_{1,2}\cdot\log n \cdot n^{\alpha-2} \qquad (n=1, 2, \ldots)\] holds in the coupling given above. \end{proposition} \subsection*{Coupling of model $2$ and model $3$} We start from a realization of model 2. Let $R_{i,t}$ be the proportion of the balls in urn $i$ after $t$ steps. Then, for $t=r+1, \ldots, \lfloor cn^2\rfloor$, conditionally on the process in model 2 up to step $t-1$, we choose a coupling of the distributions given by $(R_{i,t-1})_{i=1}^n$ and $(R_{i}^*)_{i=1}^n$ which minimizes the probability of choosing different urns and which is conditionally independent of the couplings used in the previous steps (given the evolution of the ball counts). After adding the edges, we get a realization of model 3, because the distributions are determined by $(R_i^*)_{i=1}^n$, and the steps are conditionally independent of each other (and there is no difference in the first $r$ steps). \begin{proposition}\label{prop:23} For all $\alpha>1$ there exists $K_{2,3}>0$ such that for every $n\geq 1$ we have \[\mathbb E\big(d_{\boxtimes}\big(\mathbb G_2(n, \alpha), \mathbb G_3(n, \alpha)\big)\big)\leq K_{2,3}\cdot\log^2 n\cdot \left(n^{1/2-\alpha/2}+n^{\alpha-2}\right)\] in the coupling given above. \end{proposition} \subsection*{Coupling of model $3$ and model $4$} The negative binomial random variable $r$ is common to the two models; it is chosen first. If $r>cn^2$, then both models give the empty graph, so we assume the contrary, and construct the coupling given $r$. Notice that in model $3$, since all steps are independent and use the same probability distribution, the edges are chosen independently of each other, with probabilities proportional to $2R_i^*R_j^*$ for $i\neq j$ and $(R_i^*)^2$ for loops. 
We assign independent Poisson processes to each pair of vertices. For $1\leq i<j\leq n$, the rate of the process is $2R_i^*R_j^*$ for $(i,j)$, and for $1\leq i\leq n$, the rate is $(R_i^*)^2$ for $(i,i)$. We denote by $N_s^{(ij)}$ the number of events up to time $s$ in the $(i,j)$ process ($s>0$). The sum of these processes is also a Poisson process; let $\tau$ be the time when the total number of events reaches $\lfloor(\lfloor cn^2\rfloor-r)/2\rfloor+1$. If we put $N_{\tau}^{(ij)}$ edges between $i$ and $j$ for all $1\leq i \leq j \leq n$, then we get model $3$, because the events up to time $\tau$ are distributed among the pairs of vertices independently, with probabilities proportional to the rates. On the other hand, if we put $N_{cn^2/2}^{(ij)}$ edges between $i$ and $j$, then we get model $4$, as the numbers of edges between the pairs are independent Poisson random variables with the appropriate parameters. Hence this provides a coupling of the two models. \begin{proposition}\label{prop:34} For all $\alpha>1$ there exists $K_{3,4}>0$ such that for every $n\geq 1$ we have \[\mathbb E\big(d_{\boxtimes}\big(\mathbb G_3(n, \alpha), \mathbb G_4(n, \alpha)\big)\big)\leq K_{3,4}\cdot\log n\cdot n^{\alpha-2}\] in the coupling given above. \end{proposition} \subsection*{Coupling of model $4$ and model $5$} If $r\leq cn^2$, there is no difference between the two models. Whenever $r>cn^2$, the graph $\mathbb G_4$ is the empty graph; since this event has very small probability, no careful coupling is needed there. \begin{proposition}\label{prop:45} For all $1<\alpha<2$ there exists $K_{4,5}>0$ such that for every $n\geq 1$ we have \[\mathbb E\big(d_{\boxtimes}\big(\mathbb G_4(n, \alpha), \mathbb G_5(n, \alpha)\big)\big)\leq K_{4,5}\cdot n^{-10}\] in the coupling given above. \end{proposition} \subsection*{Coupling of model $5$ and model $6$} First, we wish to couple the exponential random variables $\xi_i$ with the variables $R_i^*$ from the P\'olya urn. 
The following representation of the urn process up to step $r$ and its connection to independent exponential random variables yields a natural way to do this. In addition, this lemma will be useful when comparing models $1$ and $2$ as well. \begin{lemma}\label{lem:exp}Fix $\alpha>1$. Let $r$ be defined as in model $2$. Let $X_i^*$ be the number of balls in urn $i$ (for $1\leq i \leq n$) after $r$ steps (we continue the P\'olya urn process even if $r>cn^2$). Let $\xi_1, \ldots, \xi_n$ be independent random variables with exponential distribution of parameter $1$. We define \[C_i=\lceil \xi_i n^{\alpha-1}\rceil \qquad (i=1, \ldots, n).\] Then $(X_1^*, \ldots, X_n^*)$ and $(C_1, \ldots, C_n)$ have the same joint distribution. \end{lemma} \begin{proof}After $r$ steps, the total number of balls is $r+n$; that is, $\sum_{i=1}^n X_i^*=r+n$. As is well known, by the exchangeability of the colors chosen in the urn process, for every $s\geq 0$ and every $(k_1, \ldots, k_n)$ with $k_i\geq 1$ and $\sum_{i=1}^n k_i=n+s$ we have \[\begin{split}\mathbb P&\left(X_1^*=k_1, \ldots, X_n^*=k_n\left|\sum_{i=1}^n X_i^*=n+s\right.\right)\\&=\binom{s}{k_1-1}\binom{s-k_1+1}{k_2-1}\ldots \binom{s-k_1-\ldots-k_{n-2}+n-2}{k_{n-1}-1}\cdot\frac{(k_1-1)!\ldots (k_n-1)!}{n(n+1)\ldots(n+s-1)}\\&=\frac{s!(n-1)!}{(n+s-1)!}=\binom{n+s-1}{n-1}^{-1}.\end{split}\] On the other hand, for every $k\geq 1$ and $1\leq i \leq n$, the definition of $C_i$ implies that \begin{equation}\label{eq:ci}\mathbb P(C_i\geq k)=\mathbb P(\xi_in^{\alpha-1}> k-1)= \exp\left(-\frac{k-1}{n^{\alpha-1}}\right)=\bigg(\exp\left(-\frac{1}{n^{\alpha-1}}\right)\bigg)^{k-1}.\end{equation} Hence $C_i$ has geometric distribution of parameter $p_{\alpha}=1-e^{-\frac{1}{n^{\alpha-1}}}$ (where we mean the version with possible values $1, 2,\ldots$). The random variables $C_i$ are independent; thus $\sum_{i=1}^n C_i$ has the same negative binomial distribution as $r+n$. Hence $\sum_{i=1}^n X_i^*$ and $\sum_{i=1}^n C_i$ have the same distribution. 
In addition, the conditional distributions given the sum are also the same, because we have \begin{align*}\mathbb P(C_1=k_1, \ldots, C_n=k_n)=(1-p_{\alpha})^{k_1-1}p_{\alpha}\ldots(1-p_{\alpha})^{k_n-1}p_{\alpha}=p_{\alpha}^n(1-p_{\alpha})^{\sum_{i=1}^n k_i-n}.\end{align*} This depends only on the sum of the $k_i$s, which implies that \[\mathbb P\left(C_1=k_1, \ldots, C_n=k_n\bigg|\sum_{i=1}^n C_i=n+s\right)=\binom{n+s-1}{n-1}^{-1},\] just as we have seen in the previous case. \end{proof} Recall that $R_i^*$ is the proportion of the balls in urn $i$ after $r$ steps, and therefore the P\'olya urn model can be coupled to the family of random variables $(\xi_i)$ in such a way that \[ R_i^*=\frac{\lceil \xi_i n^{\alpha-1}\rceil}{\sum_{j=1}^n \lceil \xi_j n^{\alpha-1}\rceil}=\frac{C_i}{\sum_{k=1}^n C_k}. \] Next we couple the Poisson random variables $Y_{ij}$ and $Z_{ij}$ for each pair $1\leq i\leq j\leq n$. We exploit the fact that the sum of two independent Poisson random variables is again Poisson, with parameter the sum of the original parameters. Let $\mc{F}$ be the $\sigma$-algebra generated by the families $(\xi_i)$ and $(R_i^*)$. Conditioned on $\mc{F}$, the coupling is done so that for each pair $1\leq i< j\leq n$, we generate independent Poisson random variables $H_{ij}$ and $H^*_{ij}$ of parameter $\mu_{ij}:=\min\{c\xi_i\xi_j,\,cn^2R_i^*R_j^*\}$ and $\mu_{ij}^*:=\left|c\xi_i\xi_j-cn^2R_i^*R_j^*\right|$ respectively, and set \[ \begin{array}{lcr} Y_{ij}:=H_{ij}+\mb{I}(c\xi_i\xi_j<cn^2R_i^*R_j^*)H^*_{ij} & \mbox{ and } & Z_{ij}:=H_{ij}+\mb{I}(c\xi_i\xi_j>cn^2R_i^*R_j^*)H^*_{ij}. \end{array} \] For the variables $Y_{ii}, Z_{ii}$, the coupling is done similarly, with all parameters halved. 
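The distributional identity behind Lemma \ref{lem:exp} — that $\lceil \xi_i n^{\alpha-1}\rceil$ is geometric with parameter $p_{\alpha}$ — amounts to the equality of the two survival functions in \eqref{eq:ci}, which can be sanity-checked numerically (a hedged Python snippet; the helper names are ours):

```python
import math

def ceil_exp_survival(k, m):
    # P(ceil(xi * m) >= k) for xi ~ Exp(1) and k >= 1
    return math.exp(-(k - 1) / m)

def geometric_survival(k, p):
    # P(G >= k) for a geometric variable on {1, 2, ...} with parameter p
    return (1.0 - p) ** (k - 1)

n, alpha = 50, 1.5
m = n ** (alpha - 1.0)                 # m = n^(alpha - 1)
p_alpha = 1.0 - math.exp(-1.0 / m)
# the two survival functions agree for every k >= 1, up to rounding
for k in range(1, 200):
    assert abs(ceil_exp_survival(k, m) - geometric_survival(k, p_alpha)) < 1e-12
```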
\begin{proposition}\label{prop:56} For all $\alpha>1$ there exists $K_{5,6}>0$ such that for every $n\geq 1$ we have \[\mathbb E\big(d_{\boxtimes}\big(\mathbb G_5(n, \alpha), \mathbb G_6(n)\big)\big)\leq K_{5,6}\cdot (\log n)^{1/2}\cdot \big(n^{-1/2}+n^{4-3\alpha}\big)\] in the coupling given above. \end{proposition} \subsection*{Coupling of model $6$ and model $7$} Generate $\mathbb G_6(n)$, then delete the loops. This yields the natural coupling between $\mathbb G_6(n)$ and $\mathbb G_7(n)$. \begin{proposition}\label{prop:67} There exists $K_{6,7}>0$ such that for every $n\geq 1$ we have \[\mathbb E\big(d_{\boxtimes}\big(\mathbb G_6(n), \mathbb G_7(n)\big)\big)\leq K_{6,7}\cdot n^{-3/4}\] in the coupling given above. \end{proposition} We also conclude that this sequence of couplings can be realized in a single probability space, if we start with an appropriate family of independent random variables. Thus we have constructed a coupling of $\mathbb G_{\rm PAG}$ and $\mathbb G_{\rm W}$. \section{Proofs} \subsection*{Proof of Theorem \ref{thm:main}} The result follows from the triangle inequality and Propositions \ref{prop:12}--\ref{prop:67}.\hfill $\square$ We now turn our attention to proving the bounds connecting each pair of models. Since the jumble norm distance is not always easy to work with, we shall make use of the following lemma. \begin{lemma} \label{lem:sorosszeg} Let $G$ and $H$ be two (undirected) multigraphs on the vertex set $\{1, 2, \ldots, n\}$. Let $U_{ij}$ be the number of edges between $i$ and $j$ in $G$, and $V_{ij}$ the same quantity in $H$. Then \[d_{\boxtimes}(G, H)=\frac 1n\cdot\max_{S,T} \frac{1}{\sqrt{st}}\bigg|\sum_{i\in S, j\in T} (U_{ij}-V_{ij})\bigg|\leq \frac 1n\cdot\max_{1\leq i \leq n}\ \sum_{j=1}^n |U_{ij}-V_{ij}|,\] where the maximum is taken over all nonempty sets $S, T\subseteq\{1, 2, \ldots, n\}$, with $s=|S|$ and $t=|T|$. \end{lemma} \begin{proof}Let $\sigma_i=\sum_{j=1}^n |U_{ij}-V_{ij}|$. 
Notice that if $|S|=s$, $|T|=t$, and $s\leq t$, then \[\bigg|\sum_{i\in S, j\in T} (U_{ij}-V_{ij})\bigg|\leq \sum_{i\in S, j\in T} |U_{ij}-V_{ij}|\leq \sum_{i\in S} \sigma_i\leq s \max_{1\leq i \leq n} \sigma_i.\] Hence \[\frac{1}{\sqrt{st}}\bigg|\sum_{i\in S, j\in T} (U_{ij}-V_{ij})\bigg|\leq \frac{s\max_{1\leq i \leq n} \sigma_i}{\sqrt {st}}=\frac{\sqrt s}{\sqrt t}\max_{1\leq i \leq n} \sigma_i\leq \max_{1\leq i \leq n} \sigma_i,\] as we assumed that $s\leq t$. In the reverse case $s\geq t$, we get the same with the bound $\max_{1\leq j \leq n} \sum_{i=1}^n{|U_{ij}-V_{ij}|}$. Since $U_{ij}=U_{ji}$ and $V_{ij}=V_{ji}$, this is equal to the previous maximum. This finishes the proof. \end{proof} \subsection{Models $1$ and $2$} \subsection*{Proof of Proposition $\ref{prop:12}$} Let $U_{ij}$ be the number of edges between $i$ and $j$ in model $1$, and $V_{ij}$ the number of edges between $i$ and $j$ in model $2$. By the definition of the coupling, $U_{ij}$ can never be smaller than $V_{ij}$. If $r<cn^2$, then $U_{ij}-V_{ij}$ is the number of edges between $i$ and $j$ added in model $1$ during the first $r$ steps. Therefore $\sum_{j=1}^n |U_{ij}-V_{ij}|$ is at most the number of steps in which urn $i$ was chosen during the first $r$ steps, which is $X_i^*-1$ (cf. Lemma \ref{lem:exp}). Even if $r\geq cn^2$, the sum $\sum_{j=1}^n |U_{ij}-V_{ij}|$ cannot be larger than $cn^2/2$, since there are no more edges in model $1$. By Lemma \ref{lem:sorosszeg} and Lemma \ref{lem:exp}, we obtain \[\mathbb E\big(d_{\boxtimes}\big(\mathbb G_1(n, \alpha), \mathbb G_2(n, \alpha)\big)\big)\leq \frac 1n\,\mathbb E\big(\min\big(\max_{1\leq i \leq n} X_i^*, cn^2\big)\big)=\frac 1n\,\mathbb E\big(\min\big(\max_{1\leq i \leq n} C_i, cn^2\big)\big).\] Equation \eqref{eq:ci} implies \[\mathbb P\left(\max_{1\leq i \leq n} C_i>3\log n\cdot n^{\alpha-1}+1\right)\leq \sum_{i=1}^n \mathbb P\left( C_i>3\log n\cdot n^{\alpha-1}+1\right)\leq ne^{-3\log n}=\frac{1}{n^2}. 
\] Hence the expectation of the minimum is at most $3\log n\cdot n^{\alpha-1}$ plus a constant depending only on $c$; dividing by $n$ gives the bound of Proposition \ref{prop:12}. This finishes the proof. \hfill $\square$ \subsection{Models $2$ and $3$} The idea of the proof of Proposition \ref{prop:23} is to find the expected value of the maximum when all global random variables (like $r$) are close to their mean, and then use large deviation theorems to show that this is the case with high probability. Throughout this proof, the constant factor in the $O(\cdot)$ notation may depend only on $c$. First we fix $1\leq i \leq n$. Let $X_{i,t}$ be the number of balls in urn $i$ after $t$ steps. Recall that $X_i^*$ denotes the number of balls in urn $i$ after $r$ steps. We define the proportions similarly (recall that the initial configuration consists of one ball at each urn): \[R_{i,t}=\frac{X_{i,t}}{t+n}; \qquad R_i^*=\frac{X_i^*}{r+n}.\] We will use an application of de Finetti's theorem to the urn process $(X_{i,t})_{t\geq 1}$ (see e.g.\ Theorem 2.2.\ in \cite{pemantle}). The joint distribution of the urns chosen randomly can be represented as follows. Let $p$ be a random variable with distribution $\mathrm{Beta}(1, n-1)$ (as there is a single ball in urn $i$ at the beginning and $n-1$ balls in the other urns). Then, conditionally on $p$, generate independent Bernoulli random variables taking value $1$ with probability $p$. This has the same distribution as the indicators of the steps when a new ball is placed into urn $i$. This representation has an immediate consequence for the maximum of the proportion. \begin{lemma} \label{lem:maxrt} \begin{enumerate}[(a)] \item Let $p$ be a random variable with distribution $\mathrm{Beta}(1, n-1)$ with $n\geq 2$. 
Then we have \begin{equation}\label{eq:kisp}\mathbb P\left(p>\frac {16}n \log n\right)\leq n^{-8}.\end{equation} \item For every $1\leq i \leq n$ we have \begin{equation}\label{eq:maxrt} \mathbb P\left(\max_{n\leq t \leq cn^2} R_{i,t}>\frac{36}{n}\log n\right)\leq 2cn^{-6}.\end{equation} \end{enumerate} \end{lemma} \begin{proof} $(a)$ Using that $n-1\geq n/2$, we have \begin{equation*}\begin{split} \mathbb P\left(p>\frac {16} n \log n \right)&=\int_{16\log n/n }^1 (n-1)(1-x)^{n-2}dx=\int_0^{1-16\log n/n} (n-1)x^{n-2}dx\\&=(1-16\log n/n)^{n-1}\leq \exp(-8 \log n)=n^{-8}. \end{split}\end{equation*} $(b)$ Using the exponential Markov inequality and part $(a)$, we have \begin{align*}\mathbb P&\left(R_{i,t}>\frac{36}{n}\log n\right)\leq\mathbb P\left(R_{i,t}>\frac {36}n \log n\bigg\vert p\leq \frac{16}{n}\log n\right)+n^{-8}\\ &=\mathbb P\left(X_{i,t}>\frac {36(t+n)}n \log n\bigg\vert p\leq \frac{16}{n}\log n\right)+n^{-8}\\ &\leq \frac{\mathbb E((1+(e-1)p)^t|p\leq\frac {16} n\log n)}{\exp(\log n\cdot 36 (t+n)/n)}+n^{-8}\leq \frac{\exp((e-1)t\cdot\frac {16} n \log n)}{\exp(\log n\cdot {36}(t+n)/n)}+n^{-8}\\ &\leq \exp\left(((e-1)\cdot 16-36)\frac t n \log n\right)+n^{-8}\leq \exp(-8\log n)+n^{-8}\leq 2n^{-8},\end{align*} where we assumed that $t\geq n$. This immediately implies $(b)$, by a union bound over the at most $cn^2$ values of $t$. \end{proof} We will use the following lemma, which is based on a large deviation argument. \begin{lemma}\label{lem:binom} Fix integers $m\geq n\geq 2$. Let $p$ be a random variable with distribution $\mathrm{Beta}(1, n-1)$. Let $\eta$ be a random variable whose conditional distribution with respect to $p$ is binomial with parameters $m$ and $p$. We define \[B_m=\bigg\{\frac{3600 \log n}{m}< p< \frac{16\log n}{n}\bigg\}.\] Then there exists $K_1>0$ such that \[\mathbb P\left(\bigg\{|\eta-mp|\geq K_1\sqrt{\frac{m}{n}}\log n\bigg\}\cap B_m\right)=O(n^{-8}).\] \end{lemma} \begin{proof} We will compare the difference $|\eta-mp|$ to the variance of the binomial distribution, given $p$. 
We start with \begin{equation}\label{eq:py1}\begin{split} \mathbb P\left(\bigg\{|\eta-mp|\geq K_1\sqrt{\frac{m}{n}}\log n\bigg\}\cap B_m\right)&\leq \mathbb P\left(\bigg\{|\eta-mp|> K\sqrt{mp(1-p)\log n}\bigg\}\cap B_m\right)\\&\quad+\mathbb P\left(K\sqrt{mp(1-p)\log n}> K_1\sqrt{\frac{m}{n}}\log n\right). \end{split}\end{equation} We will choose $K=6$ but keep writing $K$ for clarity. Since $B_m$ is measurable with respect to $p$, the first term is equal to \begin{equation}\label{eq:epib}q_1=\mathbb E\left(\mathbb P\left(\bigg\{|\eta-mp|> K\sqrt{mp(1-p)\log n}\bigg\}\bigg|p\right)\cdot \mathbb I_{B_m}\right),\end{equation} where $\mathbb I_{B_m}$ denotes the indicator function of the event $B_m$. We define $k=mp-K\sqrt{mp(1-p)\log n}$ and $k'=mp+K\sqrt{mp(1-p)\log n}$; then the first event in \eqref{eq:epib} is $\{\eta/m<k/m\}\cup \{\eta/m>k'/m\}$. It is clear that $k/m<p$ and $k'/m>p$; hence we can apply large deviation arguments. Furthermore, we have $k/m>0$ on the event $B_m$, as the following calculation shows. \[p>K^2\frac{\log n}{m}\Leftrightarrow \sqrt p>K \sqrt{\frac{\log n}{m}}\Rightarrow p>K\sqrt{\frac{p(1-p)\log n}{m}}.\] We also need $k'/m<1$. That is, we have to check whether the following holds: \begin{align*}mp+K\sqrt{mp(1-p)\log n}&<m; \\ K\sqrt{mp(1-p)\log n}&<m(1-p);\\ K\sqrt{p\log n}&<\sqrt{m(1-p)}.\end{align*} Since we have $p<16\log n/n$ on $B_m$ and we assumed $m\geq n$, this holds for large enough $n$ (recall that $K=6$ does not depend on any of the parameters). Hence we can apply the relative entropy version of the Chernoff bound for binomial distributions, conditionally with respect to $p$. We obtain \begin{align*}\mathbb P(\eta/m< k/m)&\leq \mathbb E\left(\exp\left(-m D\left(\frac km \bigg\| p\right)\right)\right); \\ \mathbb P(\eta/m> k'/m)&\leq \mathbb E\left(\exp\left(-m D\left(\frac {k'}m \bigg\| p\right)\right)\right),\end{align*} where $D(a\| p)= a \log \frac ap+(1-a)\log \frac{1-a}{1-p}$. 
We need the following quantities for the calculations. \begin{align*} \frac{k}{m}&=\frac{mp-K\sqrt{mp(1-p)\log n}}{m}=p-K\sqrt{\frac{p(1-p)}{m}\log n};\\ \frac{k}{mp}&=1-K\sqrt{\frac{1-p}{mp}\log n};\\ 1-\frac km&=1-p+K\sqrt{\frac{p(1-p)}{m}\log n}; \\ \frac{1-\frac km}{1-p}&=1+K\sqrt{\frac{p}{m(1-p)}\log n}. \end{align*} It is easy to check that $x>-0.1$ implies $\log (1+x)\geq x-2x^2/3$. On the event $B_m$ we have $100K^2\cdot\frac{1-p}{mp}\log n <1$, and hence $K\sqrt{\frac{1-p}{mp}\log n}<0.1$. Therefore \begin{align*} D\left(\frac km \bigg\| p\right)&\geq \bigg(p-K\sqrt{\frac{p(1-p)\log n}{m}}\bigg)\bigg(-K\sqrt{\frac{(1-p)\log n}{pm}}-\frac{2K^2(1-p)\log n}{3pm}\bigg)\\& \quad+\bigg(1-p+K\sqrt{\frac{(1-p)p\log n}{m}}\bigg)\bigg(K\sqrt{\frac{p\log n}{(1-p)m}}-\frac{2K^2p\log n}{3(1-p)m}\bigg)\\ &=-K\sqrt{\frac{p(1-p)\log n}{m}}-\frac{2K^2(1-p)\log n}{3m}+\frac{K^2(1-p)\log n}{m}\\&\quad+\frac{2K^3}3\sqrt{\frac{(1-p)^3\log^3 n}{pm^3}}+K\sqrt{\frac{p(1-p)\log n}{m}}-\frac{2K^2p\log n}{3m}\\&\quad+\frac{K^2p\log n}{m}-\frac{2K^3}3\sqrt{\frac{p^3\log^3 n}{(1-p)m^3}}\\ &\geq\frac{K^2\log n}{3m}-\frac{2K^3p}{3}\cdot\sqrt{\frac{p\log^3 n}{(1-p)m^3}}. \end{align*} Similarly, we have \[D\left(\frac{k'}{m} \bigg\| p\right)\geq \frac{K^2\log n}{3m}-\frac{2K^3(1-p)}{3}\cdot \sqrt{\frac{(1-p)\log^3 n}{pm^3}}.\] Substituting this into the Chernoff bound, we obtain that for $q_1$ defined by equation \eqref{eq:epib} we have \begin{align*}q_1&\leq \mathbb E\left(\exp\left(-\frac 13 K^2\log n+\frac{2K^3p}{3}\cdot\sqrt{\frac{p\log^3 n}{(1-p)m}}\right)\cdot \mathbb I_{B_m}\right)\\&+\mathbb E\left(\exp\left(-\frac 13 K^2\log n+\frac{2K^3(1-p)}{3}\cdot\sqrt{\frac{(1-p)\log^3 n}{pm}}\right)\cdot \mathbb I_{B_m}\right)\end{align*} for $n$ large enough. As for the first term: \[\frac{2K^3p}{3}\cdot\sqrt{\frac{p\log^3 n}{(1-p)m}}\leq \frac{K^3(\log n)^{3/2}}{\sqrt{ n(1-16\log n/n)}}\leq \frac{1}{12}K^2 \log n,\] for $n$ large enough. 
Hence the first term is $O(n^{-8})$, as we have chosen $K=6$. In the exponent of the second term, since $pm>100K^2 \log n$ holds on $B_m$, we get \[\frac{2K^3(1-p)}{3}\cdot\sqrt{\frac{(1-p)\log^3 n}{pm}}\leq \frac{K^2}{15}\log n.\] Putting this together, we conclude that $q_1=O(n^{-8})$, which is a bound for the first term of \eqref{eq:py1}. The second term of \eqref{eq:py1} can be bounded as follows. \begin{align*}\mathbb P&\left(K\sqrt{mp(1-p)\log n}> K_1\sqrt{\frac{m}{n}}\log n\right)\leq \mathbb P\left(\sqrt p>\frac{K_1\sqrt{\log n}}{K\sqrt n}\right)\\ &=\mathbb P\left(p>\frac{K_1^2}{ K^2n}\log n\right)\leq n^{-8},\end{align*} by equation \eqref{eq:kisp}, if $K_1^2\geq 16K^2=576$. This finishes the proof. \end{proof} Next we compare the proportions at the later steps with the proportion after $r$ steps; this will give the order of the distance in the coupling. We define \[B=\bigg\{\frac{36000\log n}{n^{\alpha}}<p<\frac{16\log n}{n}\bigg\}\cap \{r>n^{\alpha}/10\}.\] \begin{proposition} \label{prop:23r}Assuming $\alpha>1$, there exist $K_2, K_3, K_4, K_5>0$ such that for every fixed $1\leq i\leq n$ the following hold. 
\begin{enumerate}[(a)] \item \[\mathbb P\left(\bigg\{|R_{i,t}-R_i^*|>K_2 \frac{\log n}{\sqrt{n^{\alpha+1}}}\bigg\}\cap B\cap \{t\geq r+n^{\alpha}\}\right)=O(n^{-8}).\] \item \[\mathbb P\left(\bigg\{\sum_{t=r}^{\lfloor cn^2\rfloor}|R_{i,t}-R_i^*|>K_3 \log n \big( n^{3/2-\alpha/2}+ n^{\alpha-1}\big)\bigg\}\cap B\right)=O(n^{-6}).\] \item \[\mathbb P\left(\bigg\{\sum_{t=r}^{\lfloor cn^2\rfloor}|R_{i,t}-R_i^*|>K_4 \log n\cdot \left( n^{3/2-\alpha/2}+n^{\alpha-1}\right)\bigg\}\cap \{r>n^{\alpha}/10\}\right)=O(n^{-6}).\] \item We define \[\Delta_i=\sum_{t=r}^{\lfloor cn^2\rfloor}\left(|R_{i,t}-R_i^*|+R_{i,t}\sum_{k=1}^n |R_{k,t+1}-R_k^*|\right).\] Then for some $K_5>0$ we have \[\mathbb P\left(\bigg\{\Delta_i>K_5 \log^2 n\cdot \big( n^{3/2-\alpha/2}+n^{\alpha-1}\big)\bigg\}\cap \{r>n^{\alpha}/10\}\right)=O(n^{-5}).\] \item For $K_5>0$ defined in $(d)$, we have \[\mathbb P\left(\Delta_i>K_5 \log^2 n\cdot \big( n^{3/2-\alpha/2}+n^{\alpha-1}\big)\right)=O(n^{-5}).\] \end{enumerate} \end{proposition} \begin{proof} We will assume that $r<cn^2$; otherwise the sums become empty, and $\Delta_i=0$. $(a)$ We will use the representation based on de Finetti's theorem together with the following decomposition. \begin{align*}|R_{i,t}-R_i^*|&=\bigg|\frac{X_{i,t}}{t+n}-\frac{X_i^*}{r+n}\bigg|=\bigg|\frac{X_{i,t}-X_i^*}{t+n}-X_i^*\cdot\frac{t-r}{(t+n)\cdot (r+n)}\bigg|\\&\leq \frac{|X_{i,t}-X_i^*-\mathbb E(X_{i,t}-X_i^*|p)|}{t+n}+\frac{|X_i^*-\mathbb E(X_i^*|p)|(t-r)}{(t+n)\cdot (r+n)}\\&\quad +\bigg|\frac{\mathbb E(X_{i,t}-X_i^*|p)}{t+n}-\frac{\mathbb E(X_i^*|p)(t-r)}{(t+n)(r+n)}\bigg|.\end{align*} According to the representation, we know that $X_{i,t}-X_i^*$ is a binomial random variable with parameters $m=t-r$ and $p$, given $p$ and $r$. We will use Lemma \ref{lem:binom} for this conditional distribution. Notice that $B\cap \{t\geq r+n^{\alpha}\}\subseteq B_m$, and $m\geq n$ in this case. 
Therefore for $K_1$ defined in Lemma \ref{lem:binom} we have \begin{align*}\mathbb P&\left(\bigg\{|X_{i,t}-X_i^*-\mathbb E(X_{i,t}-X_i^*|p)|>K_1\sqrt{\frac{(t-r)}{n}}\log n\bigg\}\cap B\cap\{t\geq r+n^{\alpha}\}\bigg\vert p,r\right)\\&=O(n^{-8}).\end{align*} It follows that \begin{equation}\label{eq:s1}\mathbb P\left(\bigg\{\frac{|X_{i,t}-X_i^*-\mathbb E(X_{i,t}-X_i^*|p)|}{t+n}>K_1\frac{1}{\sqrt{tn}}\log n\bigg\}\cap B\cap\{t\geq r+n^{\alpha}\}\right)=O(n^{-8}).\end{equation} Similarly, $X_i^*-1$ is a binomial random variable with parameters $m=r$ and $p$, given $p$ and $r$. Again, we have that $B\cap \{t\geq r+n^{\alpha}\}\subseteq B_m$. Thus Lemma \ref{lem:binom} can be applied. We get that there exists $K_1'>0$ such that \[\mathbb P\left(\bigg\{|X_i^*-\mathbb E(X_i^*|p)|>K_1'\sqrt{\frac{r}{n}}\log n\bigg\}\cap B\cap\{t\geq r+n^{\alpha}\}\bigg\vert p,r\right)=O(n^{-8}).\] This implies \[\mathbb P\left(\bigg\{\frac{|X_i^*-\mathbb E(X_i^*|p)|(t-r)}{(t+n)(r+n)}>K_1'\sqrt{\frac{r}{(r+n)^2n}}\log n\bigg\}\cap B\cap\{t\geq r+n^{\alpha}\}\bigg\vert p,r\right)=O(n^{-8}).\] In addition, using that $r>n^{\alpha}/10$ holds on the event $B$, we can write \begin{equation}\label{eq:s2}\mathbb P\left(\bigg\{\frac{|X_i^*-\mathbb E(X_i^*|p)|(t-r)}{(t+n)(r+n)}>K_1'\sqrt{\frac{10}{n^{\alpha+1}}}\log n\bigg\}\cap B\cap\{t\geq r+n^{\alpha}\}\right)=O(n^{-8}).\end{equation} Now we reformulate the third term. 
\begin{align*}S=\bigg|&\frac{\mathbb E(X_{i,t}-X_i^*|p)}{t+n}-\frac{\mathbb E(X_i^*|p)(t-r)}{(t+n)(r+n)}\bigg|=\bigg|\frac{(t-r)p}{t+n}-\frac{(1+r p)(t-r)}{(t+n)(r+n)}\bigg|\\&=\frac{t-r}{(t+n)(r+n)}\cdot |p(r+n)-(1+r p)|=\frac{t-r}{(t+n)(r+n)}|np-1|.\end{align*} By equation \eqref{eq:kisp} we obtain \begin{align*}\mathbb P\left(\bigg\{S>\frac{160\log n}{n^{\alpha}}\bigg\}\cap B\right)&\leq \mathbb P\left(\bigg\{\frac{|np-1|}{r+n}>\frac{160\log n}{n^{\alpha}}\bigg\}\cap B\right)\\&\leq \mathbb P(|np-1|>16\log n)=O(n^{-8}).\end{align*} Putting this together with equations \eqref{eq:s1} and \eqref{eq:s2}, we obtain that there exists $K_2'>0$ such that \[\mathbb P\left(\bigg\{|R_{i,t}-R_i^*|>K_2' \left(\frac{\log n}{\sqrt{tn}}+\frac{\log n}{\sqrt{n^{\alpha+1}}}+\frac{\log n}{n^{\alpha}}\right)\bigg\}\cap B\cap \{t>r+n^{\alpha}\}\right)=O(n^{-8}).\] Since $\alpha>1$ and $t>r+n^{\alpha}$, for $n$ large enough, the middle term is the largest one, and we conclude that for some $K_2>0$ \[\mathbb P\left(\bigg\{|R_{i,t}-R_i^*|>K_2 \frac{\log n}{\sqrt{n^{\alpha+1}}}\bigg\}\cap B\cap \{t>r+n^{\alpha}\}\right)=O(n^{-8}).\] This finishes the proof of $(a)$. $(b)$ It follows from part $(a)$ that \[\mathbb P\left(\bigg\{\sum_{t=\lceil r+n^{\alpha}\rceil}^{\lfloor cn^2\rfloor} |R_{i,t}-R_i^*|>cK_2\log n \cdot n^{3/2-\alpha/2}\bigg\}\cap B\right)=O(n^{-6}). \] On $B$, we have $r>n^{\alpha}/10>n$, as $\alpha>1$, for large enough $n$. By equation \eqref{eq:maxrt} we get that \[\mathbb P\left(\bigg\{ \sum_{t=r}^{\lfloor r+n^{\alpha}\rfloor} |R_{i,t}-R_i^*|>n^{\alpha}\cdot \frac{72}{n}\log n\bigg\}\cap B\right)\leq 2cn^{-6}=O(n^{-6}).\] The two equations together imply the statement. 
$(c)$ Similarly to the proof of Lemma \ref{lem:maxrt}, for every $t\geq n^{\alpha}/10$ we have \begin{align*}\mathbb P&\left(\bigg\{R_{i,t}>\frac{64000}{n^{\alpha}}\log n\bigg\}\cap \bigg\{p\leq \frac{36000\log n}{n^{\alpha}}\bigg\}\right)\\&\leq\mathbb P\left(X_{i,t}>\frac {64000(t+n)}{n^{\alpha}} \log n\bigg\vert p\leq \frac{36000}{n^{\alpha}}\log n\right)\\ &\leq \frac{\mathbb E\big((1+(e-1)p)^t|p\leq \frac{36000}{n^{\alpha}}\log n\big)}{\exp(\log n\cdot 64000 (t+n)/n^{\alpha})}\\&\leq \frac{\exp\big((e-1)t\cdot\frac {36000} {n^{\alpha}} \log n\big)}{\exp(\log n\cdot {64000}(t+n)/n^{\alpha})}\\ &\leq \exp\left(((e-1)\cdot 36000-64000)\frac t {n^{\alpha}} \log n\right)\leq n^{-8}.\end{align*} Therefore, writing \[ \ms{L}:=\left\{\sum_{t=r}^{\lfloor cn^2\rfloor}|R_{i,t}-R_i^*|>128000 cn^{2-\alpha}\log n\right\}\cap \left\{p\leq \frac{36000\log n}{n^{\alpha}}\right\}\cap \{r>n^{\alpha}/10\}, \] we have \begin{equation}\label{eq:kicsip}\mathbb P\left(\ms{L}\right)=O(n^{-6}),\end{equation} because on the event $\{r>n^{\alpha}/10\}$ we have $t>n^{\alpha}/10$ in all terms (and the inequality is valid for $R_i^*=R_{i, r}$ as well). For $K_4$ large enough (which may depend only on $c$), the condition \[\bigg\{\sum_{t=r}^{\lfloor cn^2\rfloor}|R_{i,t}-R_i^*|>K_4 \max \Big(n^{2-\alpha}\log n, \big(\log n\cdot n^{3/2-\alpha/2}+\log n\cdot n^{\alpha-1}\big)\Big)\bigg\}\cap \{r>n^{\alpha}/10\}\] implies that either the event in part $(b)$, or the event in inequality \eqref{eq:kicsip}, or $\{p>16\log n/n\}$ holds, according to the value of $p$. Notice that for $\alpha>1$ we have $2-\alpha<3/2-\alpha/2$, hence for large enough $n$ we can get rid of the maximum. Thus, combining these inequalities with part $(a)$ of Lemma \ref{lem:maxrt}, we get the statement of $(c)$. $(d)$ For the first term of $\Delta_i$, we know this statement with constant $K_4$ from part $(c)$. We may assume that $n$ is so large that $n^{\alpha}/10\geq n$ holds. 
Then we can apply Lemma \ref{lem:maxrt} to get \[\mathbb P\left(\bigg\{\max_{r\leq t\leq \lfloor cn^2\rfloor} R_{i,t}>\frac{16\log n}{n}\bigg\}\cap\{r>n^{\alpha}/10\}\right)=O(n^{-8}).\] On the other hand, if $\max_{r\leq t\leq \lfloor cn^2\rfloor} R_{i,t}\leq \frac{16\log n}{n}$ holds and the second term of $\Delta_i$ is greater than the bound in $(d)$, then \[\sum_{t=r}^{\lfloor cn^2\rfloor} \sum_{k=1}^n |R_{k,t+1}-R_k^*|>\frac{K_5}{16} \log n\cdot n\cdot \big( n^{3/2-\alpha/2}+n^{\alpha-1}\big)\] holds. By choosing $K_5=16K_4$, this implies that for some $1\leq k\leq n$ we have \[\sum_{t=r}^{\lfloor cn^2\rfloor} |R_{k,t+1}-R_k^*|>K_4 \log n\cdot \big( n^{3/2-\alpha/2}+n^{\alpha-1}\big).\] Putting this together with part $(c)$, this finishes the proof of $(d)$ (notice that $K_4$ does not depend on $i$). $(e)$ To see that $(d)$ implies $(e)$, we only have to check that \begin{equation}\label{eq:rn}\mathbb P(r\leq n^{\alpha}/10)=O(n^{-5}).\end{equation} Recall that the random variable $r'=r+n$ has negative binomial distribution with parameters $n$ and $p_{\alpha}=1-\mathrm{exp}(-n^{-\alpha+1})$. For $n$ large enough, the inequality $\mathbb P(r\leq n^{\alpha}/10)\leq \mathbb P(r'\leq n^{\alpha}/5)$ holds and we also have \begin{equation}\label{eq:palpha}\frac{1}{2n^{\alpha-1}}\leq \frac{1}{n^{\alpha-1}}-\frac 23\cdot \frac{1}{n^{2\alpha-2}}\leq p_\alpha=1-e^{-n^{-\alpha+1}}\leq\frac{1}{n^{\alpha-1}}.\end{equation} Notice that $r'$ can be expressed as the independent sum of $n$ geometric random variables supported on $\mb{N}^+$ with mean $m=1/p_{\alpha}$. Thus, we compare $r'/n$ to $n^{\alpha-1}/5$, which is less than the mean of the geometric random variables. Hence we can apply Cram\'er's theorem for $b=n^{\alpha-1}/5$. 
We obtain that, for any $\vartheta<0$, \[\mathbb P(r'\leq n^{\alpha}/5)=\mathbb P(r'\leq nb)\leq e^{-\vartheta nb}M(\vartheta)^n=\exp\big(-n(\vartheta b-\log M(\vartheta))\big),\] where \[M(\vartheta)=\frac{p_{\alpha}e^{\vartheta}}{1-(1-p_{\alpha})e^{\vartheta}}\] is the moment generating function of these geometric random variables. We choose $\vartheta$ such that $e^{\vartheta}=\frac{b-1}{b(1-p_{\alpha})}$. By inequality \eqref{eq:palpha} we have $bp_{\alpha}\leq \frac{n^{\alpha-1}}{5}\cdot\frac{1}{n^{\alpha-1}}=\frac 15$, hence $b-1<b(1-p_{\alpha})$, and thus $\vartheta<0$ for $n$ large enough. With this choice, $1-(1-p_{\alpha})e^{\vartheta}=1/b$ and $M(\vartheta)=p_{\alpha}be^{\vartheta}$, therefore \[\vartheta b-\log M(\vartheta)=\vartheta(b-1)-\log(p_{\alpha}b).\] Here $-\log(p_{\alpha}b)\geq \log 5$, while \[\vartheta=\log(1-1/b)-\log(1-p_{\alpha})\geq \log(1-1/b)\geq -\frac{1}{b-1},\] and hence $\vartheta(b-1)\geq -1$. We conclude that for $n$ large enough \[\mathbb P(r'\leq n^{\alpha}/5)\leq\exp\big(-(\log 5-1)n\big).\] Since $\log 5>1$, this implies inequality \eqref{eq:rn}. \end{proof} {\bf Proof of Proposition $\ref{prop:23}$.} If $r>cn^2$, then both models give the empty graph and the distance is $0$; we will ignore this case. For $t$ odd, let $\mathbb I_{i,t}$ be the indicator of the following event: either vertex $i$ gets different edges at steps $t$ and $t+1$ in the coupling of model 2 and model 3, or it gets an edge in exactly one of the models. For $t$ even, let $\mathbb I_{i,t}=0$. We will be interested in $Z_i=\sum_{t=r+1}^{\lfloor cn^2\rfloor} \mathbb I_{i,t}$. In addition, we define \[\mathcal G=\sigma \big(r; R_{i,t}: 1\leq i \leq n, 1\leq t \leq cn^2\big). \] Whenever $\mathbb I_{i,t}$ takes value $1$, we either choose vertex $i$ in exactly one of the models at step $t$ or $t+1$, or we choose vertex $i$ in both models, but it gets different pairs in the two models. 
Thus, by the definition of the coupling, we have that \[\mathbb E(\mathbb I_{i,t}|\mathcal G)\leq |R_{i,t}-R_i^*|+|R_{i,t+1}-R_i^*|+R_{i,t}\sum_{k=1}^n |R_{k,t+1}-R^*_k|+R_{i,t+1}\sum_{k=1}^n |R_{k,t}-R^*_k|.\] A slight modification of Proposition \ref{prop:23r} implies that for some $K_6>0$ we have \begin{equation}\label{eq:k6s}\mathbb P\left(\sum_{t=r}^{\lfloor cn^2\rfloor} \mathbb E(\mathbb I_{i,t}|\mathcal G)>K_6 \log^2 n\cdot \big(n^{3/2-\alpha/2}+n^{\alpha-1}\big)\right)=O(n^{-5}).\end{equation} To see this, note that the sum of the first two terms over odd $t$ gives the first term of $\Delta_i$ defined in part $(d)$ of Proposition \ref{prop:23r}. The third term here corresponds to the second term of $\Delta_i$ with even $t$s omitted. Finally, for the fourth term it is easy to see that the proof of Proposition \ref{prop:23r} is valid if $t+1$ is replaced by $t-1$. Let $D$ be the event in equation \eqref{eq:k6s}, and let $k_n=K_6 \log^2 n\cdot \big(n^{3/2-\alpha/2}+n^{\alpha-1}\big)$. Using that $D\in \mathcal G$ and that, given $\mathcal G$, the indicators $\mathbb I_{i,t}$ are conditionally independent (by the definition of the coupling), we obtain \begin{align*}\mathbb P(\{Z_i> 2k_n\}\cap \overline D)&\leq \mathbb P(Z_i> 2k_n|\overline{D})\leq \frac{\mathbb E(e^{Z_i}|\overline{D})}{\exp\big(2k_n\big)}=\frac{\mathbb E(\mathbb E(e^{Z_i}|\mathcal G)|\overline{D})}{\exp\big(2k_n\big)}\\ &\leq \frac{\mathbb E\Big(\prod_{t=r}^{\lfloor cn^2\rfloor} (1+(e-1)\mathbb E(\mathbb I_{i,t}|\mathcal G))\Big| \overline{D}\Big)}{\exp\big(2k_n\big)}\\ &\leq \frac{\mathbb E\Big(\exp\big((e-1)\sum_{t=r}^{\lfloor cn^2\rfloor}\mathbb E(\mathbb I_{i,t}|\mathcal G) \big)\Big|\overline D\Big)}{\exp\big( 2k_n\big)}\\&\leq \frac{\exp\big((e-1)\cdot k_n \big)}{\exp\big(2k_n\big)}\leq \exp\big((e-3)k_n\big)=O(n^{-5}).\end{align*} Putting this together with equation \eqref{eq:k6s}, we get that \begin{equation*}\mathbb P\left(\sum_{t=r}^{\lfloor cn^2\rfloor} \mathbb I_{i,t}>2K_6 \log^2 n\cdot 
\big(n^{3/2-\alpha/2}+n^{\alpha-1}\big)\right)=O(n^{-5}).\end{equation*} This immediately implies that \begin{equation*}\mathbb P\left(\max_{1\leq i \leq n} \left(\sum_{t=r}^{\lfloor cn^2\rfloor} \mathbb I_{i,t}\right)>2K_6 \log^2 n\cdot \big(n^{3/2-\alpha/2}+n^{\alpha-1}\big)\right)=O(n^{-4}).\end{equation*} The sum of the indicators is at most $cn^2$. We conclude that \begin{equation*}\mathbb E\left(\max_{1\leq i \leq n} \left(\sum_{t=r}^{\lfloor cn^2\rfloor} \mathbb I_{i,t}\right)\right)\leq 2K_6 \log^2 n\cdot \big(n^{3/2-\alpha/2}+n^{\alpha-1}\big)+O(n^{-2}).\end{equation*} Since the definitions of model 2 and model 3 coincide during the first $r-1$ steps, and we included all possible differences in the indicators, $\sum_{t=r-1}^{\lfloor cn^2\rfloor} \mathbb I_{i,t}\leq Z_i+1$ is an upper bound for $\sum_{j=1}^n |U_{ij}-V_{ij}|$, where $U_{ij}$ is the number of edges between $i$ and $j$ in model $2$, and $V_{ij}$ is the corresponding quantity in model $3$ (at the end of the whole process). By using Lemma \ref{lem:sorosszeg} we get the statement of Proposition \ref{prop:23}.\hfill $\square$ \subsection{Models $3$ and $4$} \subsection*{Proof of Proposition \ref{prop:34}} Let $U_{ij}$ be the number of edges between $i$ and $j$ in model $3$, and $V_{ij}$ be the number of edges between them in model $4$. By using the notations introduced for the coupling of the two models, we have $U_{ij}-V_{ij}=N_{\tau}^{(ij)}-N_{cn^2}^{(ij)}$. If $\tau\geq cn^2$, then all the differences are nonnegative, and all of them are nonpositive if $\tau<cn^2$. Thus \begin{equation}\label{eq:nij}\sum_{j=1}^n |U_{ij}-V_{ij}|=\bigg|\sum_{j=1}^n N_{\tau}^{(ij)}-N_{cn^2}^{(ij)}\bigg| \qquad (1\leq i \leq n).\end{equation} We will use the fact that by cumulating the independent Poisson processes assigned to the pairs of vertices we get a Poisson process with rate $2\sum_{i<j} R_i^*R_j^*+\sum_i (R_i^*)^2=1$. In addition, the types $(ij)$ of the events are independent of the moments when they occur.
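These superposition and thinning facts admit a quick numerical sanity check. The following sketch (our illustration, not part of the argument; all names are ours) verifies the static form of the thinning property: splitting a Poisson number of events independently into two types yields independent Poisson counts.

```python
import math

def pois_pmf(lam, k):
    # P(Poisson(lam) = k)
    return math.exp(-lam) * lam ** k / math.factorial(k)

def thinned_joint(s, p, a, b):
    # P(a events of type 1 and b events of type 2) when the total number
    # of events is Poisson(s) and each event is type 1 with probability p
    n = a + b
    return pois_pmf(s, n) * math.comb(n, a) * p ** a * (1 - p) ** b

# the joint law factorises into independent Poisson(sp) and Poisson(s(1-p))
s, p = 2.5, 0.3
for a in range(8):
    for b in range(8):
        lhs = thinned_joint(s, p, a, b)
        rhs = pois_pmf(s * p, a) * pois_pmf(s * (1 - p), b)
        assert abs(lhs - rhs) < 1e-12
```

The same factorisation, applied with one type per pair $(ij)$, is what makes the types of the events independent of the moments at which they occur.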
Let $N_s$ be the total number of events until time $s$; i.e.\ $N_s=\sum_{i\leq j} N_s^{(ij)}$, which has Poisson distribution with parameter $s$. Since there are $\lfloor cn^2\rfloor-r$ events in the cumulated process until $\tau$, there are $|\lfloor cn^2\rfloor-r-N_{cn^2}|$ events between $\tau$ and $cn^2$. On the other hand, independently of each other, all these events increase $\big|\sum_{j=1}^n N_{\tau}^{(ij)}-N_{cn^2}^{(ij)}\big|$ by $1$ with probability $p_i'=R_i^*(R_i^*+2\sum_{j\neq i} R_j^*)\leq 2R_i^*$. We conclude that the quantity in equation \eqref{eq:nij} has binomial distribution with parameters $|\lfloor cn^2\rfloor-r-N_{cn^2}|$ and $p_i'\leq 2R_i^*$ conditionally with respect to $N_{cn^2}$ and $(R_j^*)_{j=1}^n$. Let $F_i$ be the following event: \[F_i=\bigg\lbrace R_i^*\leq\frac{36}{n}\log n\bigg\rbrace\cap \big\lbrace r<2n^{\alpha}\big\rbrace \cap \big\lbrace |N_{cn^2}-cn^2|<n^{\alpha}\big\rbrace.\] By using the moment generating function of the binomial distribution, we obtain \begin{align*}\mathbb P&\left(\bigg\lbrace \sum_{j=1}^n |U_{ij}-V_{ij}|>128 \log n\cdot n^{\alpha-1}\bigg\rbrace\cap F_i\right)\leq \mathbb E\left(\frac{(1+(e-1)p_i')^{|\lfloor cn^2\rfloor-r-N_{cn^2}|}}{\exp(128 \log n\cdot n^{\alpha-1})}\cdot \mathbb I_{F_i}\right)\\ &\leq \exp\big((e-1)(72\log n/n)\cdot 3n^{\alpha}-128 \log n\cdot n^{\alpha-1}\big)=O(n^{-6}).\end{align*} It follows from part $(b)$ of Lemma \ref{lem:maxrt} and equation \eqref{eq:rn} that $\mathbb P(R_i^*>36\log n/n)=O(n^{-6})$. Similarly to the proof of equation \eqref{eq:rn} in part $(e)$ of Proposition \ref{prop:23r}, it can be shown that $\mathbb P(r\geq 2n^{\alpha})=O(n^{-5})$; one can use Cram\'er's large deviation theorem and the fact that the expectation of $r$ is smaller than $n^{\alpha}$. Finally, recall that $N_{cn^2}$ has Poisson distribution with parameter $cn^2$. We can think of it as the independent sum of $n^2$ Poisson random variables with parameter $c$, and apply Cram\'er's theorem.
That is, \begin{align*}\mathbb P(N_{cn^2}-cn^2>n\log n)&=\mathbb P(N_{cn^2}/n^2>c+\log n/n)\\&\leq \exp\big(-n^2(\vartheta (c+\log n/n)-\log M(\vartheta))\big),\end{align*} where $M$ is the moment generating function of $\mathrm{Poisson}(c)$, and we can choose $\vartheta$ to minimize the expression on the right hand side. By using $\log M(\vartheta)=c(e^{\vartheta}-1)$ and $\vartheta=\log(1+\log n/n)$, it follows that this probability is also $O(n^{-6})$. The same argument works for $\mathbb P(N_{cn^2}-cn^2<-n\log n)$. On the other hand, $\alpha>1$, hence $n^{\alpha}>n\log n$ for large $n$. Putting this together, we obtain that $\mathbb P(\overline{F_i})=O(n^{-6})$, and \[\mathbb P\left(\sum_{j=1}^n |U_{ij}-V_{ij}|>128 \log n\cdot n^{\alpha-1}\right)=O(n^{-6}).\] Since the total sum cannot be larger than $cn^2$, we get Proposition \ref{prop:34} similarly to the arguments in the previous section. \hfill $\square$ \subsection{Models $4$ and $5$} \subsection*{Proof of Proposition \ref{prop:45}} The expected value $\mathbb E\big(d_{\boxtimes}\big(\mathbb G_4(n, \alpha), \mathbb G_5(n)\big)\big)$ can be split according to the value of $r$ as follows. \begin{align*} \mathbb E\big(d_{\boxtimes}\big(\mathbb G_4(n, \alpha), \mathbb G_5(n)\big)\big)&= \mathbb E\big(\left.d_{\boxtimes}\big(\mathbb G_4(n, \alpha), \mathbb G_5(n)\big)\right|r>cn^2\big)\mb{P}(r>cn^2)\\&+ \mathbb E\big(\left.d_{\boxtimes}\big(\mathbb G_4(n, \alpha), \mathbb G_5(n)\big)\right|r\leq cn^2\big)\mb{P}(r\leq cn^2). \end{align*} The second term is zero by the coupling, whilst the first is \[ \mathbb E\big(\left.d_\boxtimes(0,\mathbb G_5(n))\right|r>cn^2\big)\mb{P}(r>cn^2). \] To bound this, note that we always have \begin{align*} d_\boxtimes(0,\mathbb G_5(n))\leq \frac 1n\bigg( 2\sum_{1\leq i<j\leq n}Z_{ij}+\sum_{1\leq i\leq n}Z_{ii}\bigg).
\end{align*} But by the definition of the variables $Z_{ij}$ we have \begin{align*} \mb{E}\left( \left.2\sum_{1\leq i<j\leq n}Z_{ij}+\sum_{1\leq i\leq n}Z_{ii}\right|R_1^*,R_2^*,\ldots,R_n^*\right)&= 2\sum_{1\leq i<j\leq n}cn^2R_i^*R_j^*+\sum_{1\leq i\leq n}cn^2(R_i^*)^2/2\\ &\leq cn^2\left(\sum_{1\leq i\leq n} R_i^*\right)^2=cn^2, \end{align*} whence \begin{align*} \mathbb E\big(d_{\boxtimes}\big(\mathbb G_4(n, \alpha), \mathbb G_5(n)\big)\big)&\leq cn\mb{P}(r>cn^2)\leq cn\mb{P}(r'>cn^2). \end{align*} Since $r'$ is, as noted before, the sum of $n$ independent geometric random variables of parameter $p_\alpha=1-e^{-\frac{1}{n^{\alpha-1}}}$ supported on $\mb{N}^+$, the event $\{r'>cn^2\}$ implies that at least one of them exceeds $cn$, so \begin{equation*} \mb{P}(r'>cn^2)\leq n\,\mb{P}(\mr{Geom}(p_\alpha)>cn)=n(1-p_\alpha)^{\lceil cn\rceil}\leq ne^{-\frac{cn}{n^{\alpha-1}}}=ne^{-cn^{2-\alpha}}. \end{equation*} Provided $\alpha<2$, this yields the bound $cn^2e^{-cn^{2-\alpha}}=O(n^{-10})$. \subsection{Models $5$ and $6$} To be able to bound the jumble distance, we have to deal with each of the random variables $H_{ij}^*$. Recall that $\mc{F}$ denoted the $\sigma$-algebra generated by the $\xi_i$ and $R_i^*$, $1\leq i\leq n$. By our coupling we may write for each $1\leq i\leq j\leq n$ \begin{equation*} \mb{E}\left(H_{ij}^*\right)=\mb{E}\left(\mb{E}\left(\left.H_{ij}^*\right|\xi_i,\xi_j\right)\right) =\frac{2-\delta_{ij}}{2}\mb{E}\left( cn^2\left| \frac{C_iC_j}{\left(\sum_{k=1}^n C_k\right)^2}-\frac{\xi_i\xi_j}{n^2} \right|\right). \end{equation*} \begin{lemma}\label{le:fact_mom} Provided $\alpha\geq 1/2$, we have for all non-negative integers $b\in\mb{N}_0$ \begin{equation}\label{eqn:fact_mom_exp} \mb{E}\left( (H_{ij}^*)^{(b)} \right) \leq K_b\bigg(\frac{1}{n^{b/2}}+\frac{1}{n^{(\alpha-1)b}}\bigg), \end{equation} where $k^{(b)}=k(k-1)\ldots(k-b+1)$ denotes the $b^{\mathrm{th}}$ falling factorial of $k\in\mb{N}_0$.
\end{lemma} \begin{proof} It is known that for any $b\in\mb{N}^+$ we have $\mb{E}\left(\mr{Pois}(\lambda)^{(b)}\right)=\lambda^b$. Suppose now that $b\geq 1$. By the law of total expectation, we have \begin{align*} \mb{E}\left( (H_{ij}^*)^{(b)} \right)&=\mb{E}\left(\mb{E}\left( \left. (H_{ij}^*)^{(b)} \right|\mc{F} \right)\right)= \frac{(2-\delta_{ij})^b}{2^b}\mb{E}\left( c^bn^{2b}\left| \frac{C_iC_j}{\left(\sum_{k=1}^n C_k\right)^2}-\frac{\xi_i\xi_j}{n^2} \right|^b \right)\\ &\leq2^bc^bn^{2b}(F_1+F_2), \end{align*} where \[ F_1:=\mb{E}\left(\left|\frac{\xi_i\xi_j}{(\sum_k \xi_k)^2}-\frac{\xi_i\xi_j}{n^2}\right|^b\right); \qquad F_2:=\mb{E}\left(\left|\frac{C_iC_j}{(\sum_k C_k)^2}-\frac{\xi_i\xi_j}{(\sum_k \xi_k)^2}\right|^b\right), \] and we made use of the power mean inequality in the form $(a_1+a_2)^b\leq 2^{b-1}(a_1^b+a_2^b)$. Note that we may consider $F_1$ as the error that stems from the randomization in the denominator, whilst $F_2$ captures the error that comes from the rounding $\xi_in^{\alpha-1}\to C_i$. Let us first bound $F_1$. It is known that for the i.i.d.\ exponential variables $\xi_i$, their sum $\sum \xi_k$ is independent of the ratios $\xi_i/\sum \xi_k$. Hence \begin{align*} F_1&=\mb{E}\left(\left|\frac{\xi_i\xi_j}{(\sum_k \xi_k)^2}-\frac{\xi_i\xi_j}{n^2}\right|^b\right)=\frac{1}{n^{2b}}\mb{E}\left(\frac{\xi_i^b\xi_j^b}{(\sum_k \xi_k)^{2b}}\left|n^2-\left(\sum_k \xi_k\right)^2\right|^b\right)\\ &=\frac{1}{n^{2b}}\mb{E}\left(\frac{\xi_i^b\xi_j^b}{(\sum_k \xi_k)^{2b}}\right)\mb{E}\left(\left|n^2-\left(\sum_k \xi_k\right)^2\right|^b\right). \end{align*} Also, we have $\frac{\xi_i}{\sum_k \xi_k}\sim\mr{Beta}(1,n-1)$. The first term can thus be bounded by \begin{equation*}\label{eq:f11} F_{1,1}:=\mb{E}\left(\frac{\xi_i^b\xi_j^b}{(\sum_k \xi_k)^{2b}}\right)\leq \mb{E}\left(\frac{\xi_i^{2b}}{(\sum_k \xi_k)^{2b}}\right)=\frac{(2b)!}{(n+1)(n+2)\ldots(n+2b)}.
\end{equation*} We have that given $n$ i.i.d.\ random variables with expectation $0$, and an integer $\nu\geq 2$, the $\nu^{th}$ moment of their sum is bounded by $K^2n^{\nu/2}$, with $K$ depending only on the distribution (see e.g.\ \cite{bahr, phall}). In addition, $\sum_k \xi_k\sim\mr{Gamma}(n,1)$. The second term can therefore be bounded by \[\begin{split}F_{1,2}&=\mathbb E\left(\left|n^2-\left(\sum_k \xi_k\right)^{\!2}\right|^b\,\right)=\mathbb E\left(\left|\sum_k \xi_k-n\right|^b\left(\sum_k \xi_k+n\right)^{\!b}\,\right)\\ &\leq \sqrt{\mathbb E\left(\left|\sum_k \xi_k-n\right|^{2b}\right)} \sqrt{\mathbb E\left(\left(\sum_k \xi_k+n\right)^{\!2b}\right)}\\&\leq K n^{b/2} \cdot 2^{\frac{2b-1}{2}}\sqrt{\mathbb E\left(\sum_k \xi_k\right)^{2b}+n^{2b}}\\ &\leq K2^b n^{b/2}\sqrt{\frac{(2b+n-1)!}{(n-1)!}+n^{2b}}\leq K 2^b n^{b/2} \sqrt{(n+2b)^{2b}+n^{2b}}\\&\leq K 4^b n^{b/2} \sqrt{2n^{2b}+(2b)^{2b}}.\end{split}\] Thus we obtain \[ (2c)^bn^{2b}F_1\leq K(8c)^b\cdot \frac{(2b)!\,n^{b/2}\cdot\sqrt{2n^{2b}+(2b)^{2b}}}{(n+1)(n+2)\ldots(n+2b)}. \] For a fixed $b$, this means \begin{equation}\label{eq:f1}(2c)^bn^{2b}F_1\leq \frac{K_b'}{n^{b/2}}.\end{equation} Let us now turn to the term $F_2=\mathbb E\left(\left|\frac{C_iC_j}{(\sum_k C_k)^2}-\frac{\xi_i\xi_j}{(\sum_k \xi_k)^2}\right|^b\right)$. The first idea is to get rid of the absolute value by observing that if we have random variables $v_1,v_2,v_3$ such that $v_1\geq v_2\geq v_3$ and $v_1\geq 0\geq v_3$, then for any $b\in\mb{N}^+$ we have \[ \mb{E}(|v_1|^b)+\mb{E}(|v_3|^b)\geq \mb{E}(|v_2|^b). \] The role of $v_2$ shall be played by $\frac{C_iC_j}{(\sum_k C_k)^2}-\frac{\xi_i\xi_j}{(\sum_k \xi_k)^2}$.
Using the fact that by the rounding, $n^{\alpha-1}\xi_k\leq C_k\leq n^{\alpha-1}\xi_k+1$ for each $1\leq k\leq n$, we have \[\begin{split}\frac{C_iC_j}{(\sum_k C_k)^2}-\frac{\xi_i\xi_j}{(\sum_k \xi_k)^2}&\leq \frac{(n^{\alpha-1}\xi_i+1)(n^{\alpha-1}\xi_j+1)}{n^{2\alpha-2}(\sum_k \xi_k)^2}-\frac{\xi_i\xi_j}{(\sum_k \xi_k)^2}\\ &\leq \frac{n^{\alpha-1}(\xi_i+\xi_j)+1}{n^{2\alpha-2}(\sum_k \xi_k)^2}, \end{split}\] and so we can have \[ \frac{n^{\alpha-1}(\xi_i+\xi_j)+1}{n^{2\alpha-2}(\sum_k \xi_k)^2} \] play the role of $v_1$. Applying first the power mean inequality, and using that the reciprocal of the sum $\sum \xi_k$ has inverse gamma distribution, whilst the ratio is a $\mr{Beta}(1,n-1)$ distribution independent of it, for $n$ large enough, we obtain \begin{align*} \mb{E}(|v_1|^b)&\leq 2^{b-1}\frac{1}{n^{(\alpha-1) b}}\mathbb E\left(\bigg(\frac{\xi_i+\xi_j}{(\sum_k \xi_k)^2}\bigg)^b\right)+2^{b-1}\frac{1}{n^{2(\alpha-1) b}}\mathbb E\left(\frac{1}{(\sum_k \xi_k)^{2b}}\right)\\ &\leq \frac{4^{b}}{n^{(\alpha-1) b}}\mathbb E\left(\bigg(\frac{\xi_i}{(\sum_k \xi_k)^2}\bigg)^b\right)+\frac{2^{b-1}}{n^{2(\alpha-1) b}}\mathbb E\left(\frac{1}{(\sum_k \xi_k)^{2b}}\right)\\ &\leq \frac{4^{b}}{n^{(\alpha-1) b}} \mathbb E\left(\bigg(\frac{\xi_i}{\sum_k \xi_k}\bigg)^{b}\right) \mathbb E\left(\bigg(\frac{1}{\sum_k \xi_k}\bigg)^{b}\right) +\frac{2^{b-1}}{n^{2(\alpha-1) b}}\mathbb E\left(\frac{1}{(\sum_k \xi_k)^{2b}}\right)\\ &\leq \frac{4^{b}}{n^{(\alpha-1) b}}\cdot\frac{b!}{(n+1)(n+2)\ldots(n+b)}\cdot\frac{1}{(n-1)\ldots(n-b)}\\ &\quad+\frac{2^{b-1}}{n^{2(\alpha-1) b}}\frac{1}{(n-1)\ldots(n-2b)}\leq \frac{C_b}{n^{(\alpha+1)b}}\left(1+\frac{1}{n^{(\alpha-1) b}}\right). 
\end{align*} Again by the rounding, we have the lower bound \[\begin{split}\frac{C_iC_j}{(\sum_k C_k)^2}&-\frac{\xi_i\xi_j}{(\sum_k \xi_k)^2}\geq \frac{n^{2(\alpha-1)}\xi_i\xi_j}{(n^{(\alpha-1)}\sum_k \xi_k+n)^2}-\frac{\xi_i\xi_j}{(\sum_k \xi_k)^2}\\ &= -\xi_i\xi_j\frac{(n^{(\alpha-1)}\sum_k \xi_k+n)^2-n^{2(\alpha-1)}(\sum_k \xi_k)^2}{(n^{(\alpha-1)}\sum_k \xi_k+n)^2(\sum_k \xi_k)^2}\\ &\geq-\xi_i\xi_j\frac{2(n^{(\alpha-1)}\sum_k \xi_k+n)n}{(n^{(\alpha-1)}\sum_k \xi_k+n)^2(\sum_k \xi_k)^2}.\end{split}\] Here it is clear that the last expression is negative, so let us continue without the minus sign. \begin{align*} \xi_i\xi_j\frac{2(n^{(\alpha-1)}\sum_k \xi_k+n)n}{(n^{(\alpha-1)}\sum_k \xi_k+n)^2(\sum_k \xi_k)^2}&= \xi_i\xi_j\frac{2n}{(n^{(\alpha-1)}\sum_k \xi_k+n)(\sum_k \xi_k)^2}\leq \frac{2n\xi_i\xi_j}{n^{(\alpha-1)}(\sum_k \xi_k)^3}. \end{align*} So the role of $-v_3$ will be played by \[ \frac{2\xi_i\xi_j}{n^{(\alpha-2)}(\sum_k \xi_k)^3}. \] We use that the sum is independent of the proportions, the bound on $F_{1,1}$ obtained above, the Cauchy--Schwarz inequality and the moments of the Gamma distribution: \begin{align*} \mb{E}(|v_3|^b)&\leq \frac{2^b}{n^{(\alpha-2)b}}\mathbb E\bigg(\Big(\frac{\xi_i\xi_j}{(\sum_k \xi_k)^2}\Big)^b\bigg)\mathbb E\bigg(\frac{1}{(\sum_k \xi_k)^b}\bigg)\\&\leq \frac{2^b}{n^{(\alpha-2)b}}\cdot\frac{(2b)!}{(n+1)\ldots(n+2b)}\cdot\frac{1}{(n-1)\ldots(n-b)}\leq \frac{C_b'}{n^{(\alpha+1)b}}. \end{align*} Hence $(2c)^bn^{2b}F_2\leq \frac{K''_b} {n^{(\alpha-1)b} }$, and summing up we obtain \[ \mb{E}((H_{ij}^*)^{(b)})\leq \frac{K'_b}{n^{b/2}}+\frac{K''_b}{n^{(\alpha-1) b}}\leq {K_b}\bigg(\frac{1}{n^{b/2}}+\frac{1}{n^{(\alpha-1)b}}\bigg). \qedhere\] \end{proof} \subsection*{Proof of Proposition \ref{prop:56}} Recall that in the coupling of models $5$ and $6$, the absolute value of the difference of the number of edges between $i$ and $j$ is $H_{ij}^*$.
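The pointwise sandwich $v_3\le v_2\le v_1$ constructed in the proof above can be spot-checked numerically. The sketch below is our illustration, not part of the paper; it assumes the rounding $C_k=\lceil n^{\alpha-1}\xi_k\rceil$, which satisfies the stated bounds $n^{\alpha-1}\xi_k\le C_k\le n^{\alpha-1}\xi_k+1$. It draws one sample and verifies both bounds for every pair $(i,j)$.

```python
import math
import random

random.seed(1)
n, alpha = 50, 1.5
xi = [random.expovariate(1.0) for _ in range(n)]
C = [math.ceil(n ** (alpha - 1) * x) for x in xi]   # assumed rounding
S, T = sum(xi), sum(C)

for i in range(n):
    for j in range(n):
        # v2: the quantity being bounded
        v2 = C[i] * C[j] / T ** 2 - xi[i] * xi[j] / S ** 2
        # v1: the upper bound derived from C_k <= n^(alpha-1) xi_k + 1
        v1 = (n ** (alpha - 1) * (xi[i] + xi[j]) + 1) / (n ** (2 * alpha - 2) * S ** 2)
        # v3: the lower bound derived from C_k >= n^(alpha-1) xi_k
        v3 = -2 * xi[i] * xi[j] / (n ** (alpha - 2) * S ** 3)
        assert v3 - 1e-12 <= v2 <= v1 + 1e-12
```

Of course, a single sample is no proof; it merely confirms that the algebraic manipulations above were transcribed correctly.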
By Lemma \ref{le:fact_mom} with $b=1$, for some $K_1>0$, for every fixed $i$ we have \[ \mb{E}\left(\sum_{j=1}^n H^*_{ij} \right)=n\mb{E}(H_{ij}^*)\leq K_1 \big(n^{1/2}+n^{2-\alpha}\big). \] Let now $\varrho_{ij}:=H_{ij}^*\wedge 3$, and $\sigma_i:=\sum_{j=1}^n \varrho_{ij}$. Clearly we have \[ m:=\mb{E}(\sigma_i)\leq \mb{E}\left( \sum_{j=1}^n H^*_{ij} \right)\leq K_1\big(n^{1/2}+n^{2-\alpha}\big). \] For fixed $i$, conditionally on $\mc{F}=\sigma\{ \xi_j, R_j^*; 1\leq j \leq n\}$, the random variables $\varrho_{ij}$ ($1\leq j\leq n$) are independent. Since they fall between $0$ and $3$, by the Hoeffding inequality we have \[ \mb{P}(|\sigma_i-m|\geq s|\mc{F})\leq 2\exp \left(-\frac{2s^2}{9n}\right) \] for any $s\geq 0$. Using the same constant $K_1$ as above, and choosing $s:=9\sqrt{n\log n}$, we have by the bound on $m$ that \begin{align*}\mathbb P(\sigma_i\geq (9+K_1)\sqrt {n \log n}+K_1n^{2-\alpha})&\leq \mathbb P(|\sigma_i-m|\geq 9 \sqrt {n\log n})\\&\leq 2\exp\bigg(-\frac{2\cdot 81 n\log n}{9 n}\bigg)=2n^{-18}=O(n^{-4}).\end{align*} A trivial bound then yields \[\mathbb P(\max_{1\leq i \leq n}\sigma_i\geq (9+K_1)\sqrt {n \log n}+K_1n^{2-\alpha})=O(n^{-3}).\] Since $\sigma_i\leq 3n$ always holds, we obtain \[\mathbb E(\max_{1\leq i\leq n} \sigma_i)\leq (9+K_1) {\sqrt {n\log n}}+K_1n^{2-\alpha}+O(1)\leq K' {\sqrt {\log n}}\big(n^{1/2}+n^{2-\alpha}\big).\] It is clear that $H_{ij}^*\leq \varrho_{ij}+(H_{ij}^*)^{(3)}$, since whenever $H^*_{ij}>3$, its $3$rd factorial moment is positive, and strictly larger than $H_{ij}^*$ itself. 
Therefore \[\sum_{j=1}^n H_{ij}^* \leq \sigma_i+\sum_{j=1}^n (H_{ij}^*)^{(3)}\quad \Rightarrow \quad\left(\max_{1\leq i\leq n} \sum_{j=1}^n H_{ij}^* \right)\leq \max_{1\leq i\leq n} \sigma_i+\sum_{i,j}(H_{ij}^*)^{(3)}.\] From the above, together with inequality \eqref{eqn:fact_mom_exp}: \begin{align*}\mathbb E\left(\max_{1\leq i\leq n} \sum_{j=1}^n H_{ij}^* \right)&\leq \mathbb E(\max_{1\leq i\leq n}\sigma_i)+n^2 \cdot \mathbb E\big((H_{ij}^*)^{(3)}\big)\leq \frac{K''}{2}\sqrt{\log n}\big(n^{1/2}+n^{2-\alpha}+n^{5-3\alpha}\big) \\ &\leq K''\sqrt{\log n} \big(n^{1/2}+n^{5-3\alpha}\big), \end{align*} where the last inequality follows from a weighted AM--GM inequality. Finally, Lemma \ref{lem:sorosszeg} concludes the proof. \hfill $\square$ \subsection{Models $6$ and $7$} We have that $\mb{G}_6$ and $\mb{G}_7$ coincide everywhere but the main diagonal, and it is then easy to see that \[ d_{\boxtimes}\big(\mathbb G_6(n), \mathbb G_7(n)\big)=\frac 1n\cdot \max_{1\leq i\leq n} Y_{ii}. \] \subsection*{Proof of Proposition $\ref{prop:67}$} Recall that $Y_{ii}$ has Poisson distribution with parameter $c\xi_i^2$, where $\xi_i$ has $\exp(1)$ distribution. Assume first that $\zeta>0$ is fixed, and $X\sim \mr{Pois}(\zeta)$. Then \begin{equation*} \sum_{k=y+1}^{\infty}\mathbb P(X\geq k)\leq \sum_{k=y+1}^{\infty} k \mathbb P(X= k)=\sum_{k=y+1}^{\infty} \frac{\zeta^k}{(k-1)!}e^{-\zeta} =\zeta \sum_{k=y}^{\infty} \frac{\zeta^{k}}{k!}e^{-\zeta}=\zeta \mathbb P(X\geq y). \end{equation*} We will use the factorial moments of the Poisson distribution again.
For every fixed $i$ and integers $y>b>0$, there is some $K(b)>0$ such that \begin{align*} \sum_{k=y+1}^{\infty}\mathbb P(Y_{ii}\geq k)&=\mathbb E\bigg(\mathbb E\bigg( \sum_{k=y+1}^{\infty}\mathbb P(Y_{ii}\geq k)\bigg\vert \xi_i\bigg) \bigg) \leq \mathbb E\big(c\xi_i^2\, \mathbb P(Y_{ii}^{(b)}\geq y^{(b)}\,\vert\, \xi_i)\big) \\ &\leq \mathbb E\bigg(c\xi_i^2\frac{(c\xi_i^2)^{b}}{y^{(b)}} \bigg)=\frac{K(b)}{y^{(b)}}, \end{align*} because the exponential distribution has finite moments. For an arbitrary function $f:\mb{N}^+\to\mb{N}^+$ we may apply the above inequality to obtain \begin{align*} \mathbb E\big(\max_{1\leq i \leq n} |Y_{ii}|\big)&=\sum_{k=1}^{\infty} \mathbb P\big(\max_{1\leq i \leq n} |Y_{ii}|\geq k\big)\leq f(n)+\sum_{k=f(n)+1}^{\infty }n \mathbb P(Y_{11}\geq k)\leq f(n)+n\frac{K(b)}{f(n)^{(b)}}. \end{align*} Now set $f(n):=\lceil n^{1/5}\rceil$ and $b:=4$. For $n$ large enough (such that $f(n)-3\geq f(n)/2$, and hence $f(n)^{(4)}\geq f(n)^{4}/16$), this yields \[\mathbb E\big(\max_{1\leq i \leq n} |Y_{ii}|\big)\leq \lceil n^{1/5}\rceil+16K(4)n^{1-4/5}\leq (16K(4)+2)n^{1/5}.\] Lemma \ref{lem:sorosszeg} concludes the proof. \hfill $\square$ \section{Discussion} \label{discussion} Our main theorem shows that the classical dense preferential attachment graph model yields random graphs that are close to the random graph model obtained through the PAG-graphon, the limit object in the multigraph homomorphism sense of the random sequence $\mb{G}_{\mr{PAG}}$. They are not indistinguishable though (we provide a lower bound on their distance below), and they each have their own advantages for applications. The random graphs $\mb{G}_{\mr{PAG}}$ have the advantage that the number of edges is deterministic, but, in contrast to the sparse PAG models, one cannot easily generate a growing family of graphs $\mb{G}_{\mr{PAG}}(n)$. For the graphon induced $\mb{G}_W^\circ$, the number of edges is random, though still asymptotically concentrated around the expected value.
Also, the way it is generated does not carry the preferential attachment flavour. This may be an advantage from the simulation point of view: the random variables in the model can be generated simultaneously, without the $cn^2$ steps that have to be performed one after the other in the PAG model.\\ However, it is possible to couple the elements of the sequence $\mb{G}_W^\circ(n)$ (or $\mb{G}_W(n)$) so that we obtain a growing sequence (and still keep the convergence with probability 1). Indeed, passing from $n$ to $n+1$ only means that we have to generate the random variable $\xi_{n+1}$, independently of the previous $\xi_i$-s, and then generate the appropriate Poisson random variables $Y_{j(n+1)}$ for $1\leq j\leq n+1$. This coupling shows that adding an extra vertex and extending $\mb{G}_W^\circ(n)$ to $\mb{G}_W^\circ(n+1)$ can be performed easily. It seems that this does not hold for the $\mb{G}_{\rm PAG}$ model. Unfortunately, we do not have a lower bound for the jumble norm distance of $\mb{G}_\mr{PAG}(n)$ and $\mb{G}_W(n)$ that matches the upper bound given in Theorem \ref{thm:main}. Recall that there we obtained $O(n^{-1/3}\log ^2 n)$ as an upper bound for a particular coupling. On the other hand, there is a universal lower bound of order $n^{-1}$, which holds for every coupling, and for both of the random graph models $\mb{G}_W(n)$ and $\mb{G}_W^\circ(n)$. The exponents are quite far from each other, but the arguments used for the lower bound use very little of the structure of the graphs. We present a short argument giving this lower bound for both $\mb{G}_W^\circ(n)$ and $\mb{G}_W(n)$. If we take $S=T=\{1, 2, \ldots, n\}$ in Definition \ref{def:jumble}, then we obtain a lower bound for the jumble norm distance of $\mathbb G_{\rm PAG}$ and $\mathbb G_{\rm W}$ by understanding the difference of the number of edges. The main point is that the distribution of this quantity does not depend on the coupling.
In $\mathbb G_{\rm PAG}(n)$, the number of edges is deterministic and it is equal to $\lfloor \lfloor cn^2\rfloor/2\rfloor$. We denote by $\mathcal E$ the number of edges in the $\mathbb G_{\rm W}(n)$ graph model. Let $\mathcal G$ be the $\sigma$-algebra generated by $\xi_1, \ldots, \xi_n$ (recall that the latter random variables are independent and have exponential distribution with parameter $1$). Then, conditionally with respect to $\mathcal G$, the random variable $\mathcal E$ has Poisson distribution with parameter $c\sum_{1\leq i<j\leq n} \xi_i \xi_j$. Hence $\mathbb E(\mathcal E)=cn(n-1)/2$ by the law of total expectation. In any coupling of these two models, choosing $S=T=\{1, 2, \ldots, n\}$ we have \[d_{\boxtimes}\big(\mathbb G_{\rm PAG}(n), \mathbb G_{\rm W}(n)\big)\geq \frac{1}{n^2} \mathbb E\big(\big|\lfloor \lfloor cn^2\rfloor/2\rfloor-\mathcal E\big|\big).\] Notice that \[\mathbb E\big(\big|\lfloor \lfloor cn^2\rfloor/2\rfloor-\mathcal E\big|\big)\geq \big|\mathbb E\big(\lfloor \lfloor cn^2\rfloor/2\rfloor-\mathcal E\big)\big|=\big|\lfloor \lfloor cn^2\rfloor/2\rfloor-cn(n-1)/2\big|\geq c_0 n\] for an appropriate positive number $c_0$. This holds for every coupling; therefore the exponent in Theorem \ref{thm:main} cannot be smaller than $-1$. The previous argument relies on the fact that the expected number of edges is different in the two models, due to the lack of loops in the $\mathbb G_W$ model. For the $\mb{G}_\mr{PAG}$ and the $\mathbb G_W^{\circ}$ models, although the expected numbers of edges are equal, one can prove that the jumble norm distance is still at least $\frac{1}{e^2}\sqrt{\frac c2}\cdot\frac 1n$ for every coupling. The key point is to use the formula for the central absolute moment of the Poisson distribution and see that it is at least a constant times the square root of the parameter. To see this, we have to consider the random variable $\mathcal E^{\circ}$, which is the number of edges in $\mathbb G_W^{\circ}$.
It has Poisson distribution with parameter $c\sum_{1\leq i<j\leq n}\xi_i \xi_j+\frac c2\sum_{i=1}^n \xi_i^2$ conditionally with respect to $\mathcal G$ (recall Definition \ref{def:gwn}). For the sake of simplicity, let $\eta$ be a Poisson($\lambda$) distributed random variable, and $m>0$. First notice that \[\mathbb E(|\eta-m|)\geq |\mathbb E(\eta-m)|=|\lambda-m|.\] On the other hand, by using the formula for the central absolute moment of the Poisson distribution and the well-known upper bound version of Stirling's formula, we have \[\mathbb E(|\eta-\lambda|)=2e^{-\lambda}\frac{\lambda^{\lfloor \lambda\rfloor+1}}{\lfloor\lambda\rfloor!}\geq 2e^{-\lambda}\frac{\lambda^{\lfloor \lambda\rfloor+1}}{e\cdot\lfloor \lambda\rfloor^{\lfloor \lambda\rfloor+1/2}\cdot e^{-\lfloor \lambda\rfloor}}\geq 2e^{-\lambda+\lfloor \lambda\rfloor-1}\sqrt{\lambda}\geq \frac{2}{e^2}\sqrt{\lambda}.\] Putting this together, we get \[\mathbb E(|\eta-m|)\geq \max\big(|\lambda-m|, \mathbb E(|\eta-\lambda|)-|\lambda-m|\big)\geq \max\bigg(|\lambda-m|, \frac{2}{e^2}\sqrt{\lambda}-|\lambda-m|\bigg)\geq \frac{\sqrt{\lambda}}{e^2}.\] Now we apply this to the conditional distribution of $\mathcal E^{\circ}$ with $m=\lfloor \lfloor cn^2\rfloor/2\rfloor$. We obtain \begin{align*}\mathbb E(|\mathcal E^{\circ}-m|)&=\mathbb E\big(\mathbb E(|\mathcal E^{\circ}-m|\,\vert\,\mathcal G)\big)\geq \frac{1}{e^2}\mathbb E\bigg(\sqrt{c\sum_{1\leq i<j\leq n}\xi_i \xi_j+\frac c2\sum_{i=1}^n \xi_i^2}\bigg)\\&=\frac{1}{e^2}\mathbb E\bigg(\sqrt{\frac c2}\sum_{i=1}^n \xi_i\bigg)=\frac{1}{e^2}\sqrt{\frac c2}\cdot n.\end{align*} Therefore, since $m$ is the number of edges in the PAG model, we conclude that for every coupling of $\mathbb G_{\rm PAG}$ and $\mathbb G_W^{\circ}$, we have \[d_{\boxtimes}(\mathbb G_{\rm PAG}, \mathbb G_W^{\circ})\geq \frac{1}{e^2}\sqrt{\frac c2}\cdot\frac 1n.
\] \begin{remark} In this paper we considered the jumble distance between the two random models for the dense PAG graph, as that is the more natural distance notion for multigraphs generated by unbounded graphons (in this particular case, this corresponds to the unboundedness of the parameters of the Poisson distributions). However, as each finite multigraph generated is bounded per se, one may wonder if it is possible to say anything about the cut distance between, e.g., $\mb{G}_{\mr{PAG}}$ and $\mb{G}^\circ_W$.\\ We recall that the cut distance of two graphs on the same set of $n$ vertices is defined as \[d_{\square}(G, H)=\frac 1{n^2}\cdot\max_{S,T}\bigg|\sum_{i\in S, j\in T} U_{ij}-V_{ij}\bigg|.\] It is easily seen that $d_\square\leq d_\boxtimes$, hence the upper bounds given for the jumble distance apply a fortiori to the cut distance as well. On the other hand, the methods used in this paper do not yield stronger bounds for the cut norm distance. \end{remark} \section*{Acknowledgements} The first author was supported by the Hungarian National Research, Development and Innovation Office, NKFIH grant $\mathrm{n}^\circ$ K108615 and by the MTA R\'enyi Institute Lend\"ulet Limits of Structures Research Group. The second author has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement $\mathrm{n}^\circ$617747, and from the MTA R\'enyi Institute Lend\"ulet Limits of Structures Research Group.
{ "timestamp": "2017-01-25T02:03:59", "yymm": "1701", "arxiv_id": "1701.06760", "language": "en", "url": "https://arxiv.org/abs/1701.06760", "abstract": "Letting $\\mathcal{M}$ denote the space of finite measures on $\\mathbb{N}$, and $\\mu_\\lambda\\in\\mathcal{M}$ denote the Poisson distribution with parameter $\\lambda$, the function $W:[0,1]^2\\to\\mathcal{M}$ given by \\[ W(x,y)=\\mu_{c\\log x\\log y} \\] is called the PAG graphon with density $c$. It is known that this is the limit, in the multigraph homomorphism sense, of the dense Preferential Attachment Graph (PAG) model with edge density $c$. This graphon can then in turn be used to generate the so-called W-random graphs in a natural way. The aim of this paper is to compare the dense PAG model with the W-random graph model obtained from the corresponding graphon. Motivated by the multigraph limit theory, we investigate the expected jumble norm distance of the two models in terms on the number of vertices $n$. We present a coupling for which the expectation can be bounded from above by $O(\\log^2 n\\cdot n^{-1/3})$, and provide a universal lower bound that is coupling independent, but with a worse exponent.", "subjects": "Combinatorics (math.CO); Probability (math.PR)", "title": "On the dense Preferential Attachment Graph models and their graphon induced counterpart", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.98593636965088, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.7084883526045554 }
https://arxiv.org/abs/1802.04851
KdV is wellposed in $H^{-1}$
We prove global well-posedness of the Korteweg--de Vries equation for initial data in the space $H^{-1}(R)$. This is sharp in the class of $H^{s}(R)$ spaces. Even local well-posedness was previously unknown for $s<-3/4$. The proof is based on the introduction of a new method of general applicability for the study of low-regularity well-posedness for integrable PDE, informed by the existence of commuting flows. In particular, as we will show, completely parallel arguments give a new proof of global well-posedness for KdV with periodic $H^{-1}$ data, shown previously by Kappeler and Topalov, as well as global well-posedness for the 5th order KdV equation in $L^2(R)$.Additionally, we give a new proof of the a priori local smoothing bound of Buckmaster and Koch for KdV on the line. Moreover, we upgrade this estimate to show that convergence of initial data in $H^{-1}(R)$ guarantees convergence of the resulting solutions in $L^2_\text{loc}(R\times R)$. Thus, solutions with $H^{-1}(R)$ initial data are distributional solutions.
\section{Introduction} The Korteweg--de Vries equation \begin{align}\label{KdV}\tag{KdV} \frac{d\ }{dt} q = - q''' + 6qq' \end{align} was derived in \cite{KdV1895} to explain the observation of solitary waves in a shallow channel of water. Specifically, they sought to definitively settle (to use their words) the debate over whether such solitary waves are consistent with the mathematical theory of a frictionless fluid, or whether wave fronts must necessarily steepen. The equation itself, however, appears earlier; see \cite[p. 77]{Boo}. The term \emph{solitary wave} has now been supplanted by \emph{soliton}, a name coined in \cite{KruskalZubusky} and inspired by the particle-like interactions they observed between solitary waves in their numerical simulations of \eqref{KdV}. In a series of papers, researchers at Princeton's Plasma Physics Laboratory demonstrated that equation \eqref{KdV} exhibits a wealth of novel features, including the existence of infinitely many conservation laws \cite{MR0252826} and the connection to the scattering problem for one-dimensional Schr\"odinger equations \cite{GGKM}. Nowadays, we say that \eqref{KdV} is a completely integrable system (cf. \cite{MR0303132}). Although we shall focus on mathematical matters here, \eqref{KdV} continues to be an important effective model for a diverse range of physical phenomena; see, for example, the review \cite{MR1329553} occasioned by the centenary of \cite{KdV1895}. One of the most basic mathematical questions one may ask of \eqref{KdV} is whether it is well-posed. This is the question of the existence and uniqueness of solutions, together with the requirement that the solution depends continuously on time and the initial data. As we shall discuss, this topic has attracted several generations of researchers who have successively enlarged the class of initial data for which well-posedness can be shown. 
Our principal contribution is the following: \begin{theorem}[Global well-posedness]\label{T:main} The equation \eqref{KdV} is globally well-posed for initial data in $H^{-1}({\mathbb{R}})$ or $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$ in the following sense: In each geometry, the solution map extends uniquely from Schwartz space to a jointly continuous map $\Phi:{\mathbb{R}}\times H^{-1}\to H^{-1}$. Moreover, for each initial data $q\in H^{-1}$, the orbit $\{\Phi(t,q) : t\in{\mathbb{R}}\}$ is uniformly bounded and equicontinuous in $H^{-1}$. \end{theorem} On the circle ${\mathbb{R}}/{\mathbb{Z}}$, Schwartz space is coincident with $C^\infty({\mathbb{R}}/{\mathbb{Z}})$; on the line, it is comprised of those $C^\infty({\mathbb{R}})$ functions that decay (along with their derivatives) faster than any polynomial as $|x|\to\infty$. For the definition of $H^{-1}({\mathbb{R}})$ and $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$, see subsection~\ref{S:1.1}; informally, they are comprised of those tempered distributions that are derivatives of $L^2$ functions. For the precise definition of equicontinuity in the $H^s$ setting, see \eqref{E:equi1}. The fact that Schwartz-space initial data leads to unique global solutions to \eqref{KdV} that remain in Schwartz class has been known for some time; see, for example, \cite{MR0385355,MR0759907,Sj,MR0410135,MR0261183}. Indeed, in this class, the solution map is known not only to be continuous, but infinitely differentiable in both variables. We shall rely on this result in what follows. In the case of \eqref{KdV} posed on the torus (or equivalently for periodic initial data), Theorem~\ref{T:main} reproduces the principal results of \cite{MR2267286}. Note that because the circle is compact, uniform boundedness and equicontinuity of the orbit is equivalent to it being pre-compact. In the line case, one does not expect orbits to be pre-compact; both solitons and radiation preclude tightness from holding globally in time. 
The papers \cite{MR2830706,MR2927357} show that well-posedness cannot persist (in either geometry) in $H^{s}$ for any $s<-1$. In this sense, Theorem~\ref{T:main} is sharp. On the other hand, one may consider well-posedness at higher regularity $s>-1$. Existence and uniqueness are immediate from the case $s=-1$; the key is to demonstrate that continuous dependence remains valid in this stronger topology. In this paper, we will settle the cases left open by prior work, namely, the well-posedness of \eqref{KdV} in $H^{s}({\mathbb{R}})$ with $-1\leq s< -\frac34$. (On the circle, all $s\geq -1$ were treated already in \cite{MR2267286}.) In fact, the proof of Corollary~\ref{C:2} provides a simple uniform treatment of all $-1\leq s<0$ and adapts trivially to the circle as well. The notion of solution used here (unique limits of Schwartz solutions) coincides with that in \cite{MR2267286} and is informed by several important considerations. Firstly, as the notion of a solution in the case of Schwartz initial data is firmly settled, any notion of a solution to \eqref{KdV} leading to well-posedness in $H^{-1}$ must produce solutions identical to those given by Theorem~\ref{T:main}. Secondly, for functions that are merely $C_t H^{-1}_x$ it is not possible to make sense of the nonlinearity in \eqref{KdV} as a space-time distribution in either geometry. While the local smoothing effect (see subsection~\ref{SS:ls}) provides a potential resolution of this problem in the line setting, there is no natural alternative notion of a weak solution in the circle geometry. Any methodology that purports to apply in wide generality must adopt a notion of solution that applies in wide generality. A wider notion of solution was considered in \cite{Christ'05}, namely, limits of smooth solutions in the presence of smooth asymptotically vanishing forcing. 
That paper shows (see \cite[\S2.7]{Christ'05}) that with this wider notion of solution, uniqueness cannot be guaranteed for $C_t H^{s}({\mathbb{R}}/{\mathbb{Z}})$ solutions to \eqref{KdV} already for $s<0$. From Theorem~\ref{T:main}, we see that the map $\Phi$ is continuous, as was also shown in \cite{MR2267286} for the circle case. It is natural to ask if this continuity may be expressed more quantitatively. In some sense, the answer is no: it is shown in \cite{MR2018661} that the data to solution map cannot be uniformly continuous on bounded sets when $s<-\frac34$ in the line case or when $s<-\frac12$ in the circle case. Nevertheless, the arguments presented here are sufficiently transparent that one may readily obtain information on the modulus of continuity of $q\mapsto\Phi(t,q)$. Specifically, we find that the key determiners of the modulus of continuity at an initial datum $q\in H^{-1}({\mathbb{R}})$ are the time $t$ in question and the rate at which $$ \int \frac{|\hat q(\xi)|^2\,d\xi}{\xi^2+4\kappa^2} \to 0\qtq{as} \kappa\to\infty. $$ Evidently, this integral does not converge to zero uniformly on any open set of initial data. Let us now turn our attention to a discussion of prior work proving well-posedness for \eqref{KdV}. Discussion of weak solutions (without uniqueness) is postponed until subsection~\ref{SS:ls}. Our discussion will not be exhaustive; the body of literature on \eqref{KdV} is simply immense. Nor will we insist on a strict chronology. Early work on the local and global well-posedness of \eqref{KdV} treated it as a quasi-linear hyperbolic problem. The appearance of the derivative in the nonlinearity prohibits simple contraction mapping arguments from closing. The principal methods employed were (i) compactness and uniqueness arguments (e.g. \cite{MR0454425,MR0261183}) combined with parabolic regularization, (ii) convergence of Picard iterates with (e.g. \cite{MR0312097}) or without (e.g. 
\cite{MR0407477}) parabolic regularization, and (iii) approximation by the Benjamin--Bona--Mahony (BBM) equation \cite{MR0393887,MR0385355}. The BBM equation was introduced in \cite{MR0427868}; this equation has a much more regular nonlinearity and global well-posedness was shown there by simple contraction mapping arguments. The BBM equation has the same Hamiltonian as \eqref{KdV}, namely, \begin{equation}\label{I HKdV} H_\text{KdV} (q) := \int \tfrac12 q'(x)^2 + q(x)^3\,dx; \end{equation} however, the underlying symplectic structure is different. Of all the prior approaches we know of, the one we follow here is closest in spirit to that of Bona--Smith \cite{MR0385355}, since both we and they employ the idea of approximating the full flow by another Hamiltonian evolution that is more readily controlled. Incidentally, the problem of local well-posedness of \eqref{KdV} in Schwartz space, which we shall take for granted here, is rather easier than the problems treated in the works just cited, because one may safely lose regularity in proving continuous dependence of the solution on the initial data. While multiple authors sought to obtain well-posedness in $H^{s}$ for $s$ as small as possible, these early attempts did not succeed in proving local well-posedness beyond the regime $s>3/2$. Most significantly, this does not reach the level $s=1$ at which one may upgrade local to global well-posedness by exploiting conservation of the Hamiltonian. Nevertheless, global well-posedness was obtained at this time for $s\geq 2$ by using the conserved quantities at such higher regularity discovered in \cite{MR0252826}. To progress further in this vein, the key has been to exploit the dispersive property of \eqref{KdV}. Global well-posedness for finite energy initial data on the line was first proved in \cite{MR1086966}, by utilizing local smoothing and maximal function estimates. 
The paper actually proves local well-posedness in $H^s({\mathbb{R}})$ for $s>3/4$; the global $H^1({\mathbb{R}})$ result follows trivially from this and conservation of the Hamiltonian. The next conspicuous benchmark for the well-posedness theory was the treatment of initial data with finite momentum \begin{equation}\label{I P} P(q) := \int \tfrac12 q(x)^2 \,dx, \end{equation} that is, data in $L^2$. \emph{Momentum} is the appropriate term here; this quantity is the generator of translations with respect to the standard symplectic structure. Moreover, this quantity is conserved under the KdV flow. The \emph{mass} of a wave is given by $\int q(x)\,dx$, which is also conserved, and represents the total deficit (or surplus) of water relative to $q\equiv 0$. Well-posedness of \eqref{KdV} in $L^2$ was proved both on the line and on the circle by Bourgain in \cite{MR1215780}. At the heart of this work is the use of $X^{s,b}$ spaces, which efficiently capture the dispersive nature of the equation and effectively control the deviation of the KdV dynamics from solutions to the linear equation $\partial_t q = - q'''$. After developing suitable estimates in these spaces, the proof proceeds by contraction mapping arguments; thus the solutions constructed depend analytically on the initial data. Further development and refinement of the methods employed in \cite{MR1215780} ultimately led to a proof of local well-posedness for \eqref{KdV} in $H^{s}({\mathbb{R}})$ for $s\geq -3/4$ and in $H^s({\mathbb{R}}/{\mathbb{Z}})$ for $s\geq -1/2$. Excepting the endpoints, this was proved by Kenig--Ponce--Vega in \cite{MR1329387}. For a discussion of the endpoints, see \cite{MR2018661} and \cite{MR1969209,MR2054622,MR2233689}. These ranges of $s$ are sharp if one requires the data to solution map to be uniformly continuous on bounded sets; see \cite{MR2018661}. 
These local well-posedness results were made global in time in \cite{MR1969209}, excepting the endpoint case $H^{-3/4}({\mathbb{R}})$, which was proved later in \cite{MR2531556,MR2501679}. At that time, no exact conservation laws were known that were adapted to negative regularity. To obtain such global results, these authors constructed almost conserved quantities, whose growth in time they were able to control. While it is true that the conspicuous manifestations of complete integrability of KdV played no particular role in the series of works we have just described, it is difficult to completely decouple these successes from the exact structure of the KdV equation. In the first place, many of these arguments rely on the absence of unfavorable resonances. This appears in the multilinear $X^{s,b}$ estimates and (rather more explicitly) in the construction of almost conserved quantities in \cite{MR1969209}. This is akin to the construction of Birkhoff normal form, which may fail due to resonances, but which does succeed in completely integrable systems (cf. \cite{MR0501141,MR2150385}). As we will discuss below, we now know that KdV admits exact conservation laws adapted to every regularity $s\geq -1$; this offers a rather transparent explanation for the otherwise startling success of \cite{MR1969209} in constructing almost conserved quantities. The Miura map \cite{MR0252825} implements a first iteration toward the construction of Birkhoff normal form by converting the KdV equation to the mKdV equation, which has a nonlinearity that is one degree higher. This transformation was one of the first indications that there was something peculiar about \eqref{KdV}. Moreover, a one-parameter generalization of this transformation, due to Gardner, led to the first proof of the existence of infinitely many polynomial conservation laws; see \cite{MR0252826}. The Miura map has been very popular in the study of KdV at low regularity. 
Most particularly, it allows one to work at positive regularity, where many nonlinear transformations (e.g., pointwise products) are much better behaved. The breakdown of traditional PDE techniques ultimately stems from a high-high-low frequency interaction that makes it impossible to approximate the KdV flow by a linear evolution even locally in time. This particular frequency interaction appears in many fluid models, due to the ubiquity of the advection nonlinearity $(u\cdot\nabla) u$, and is exploited crucially in the construction of solutions exhibiting energy growth. It is worth noting that among the family of monomial gKdV equations, namely, those of the form $\partial_t q = -\partial_x^3 q \pm \partial_x (q^k)$, only for the completely integrable models (i.e., $k=2,3$) does the local well-posedness threshold deviate from scaling. Indeed, the completely integrable models are \emph{less} well-posed relative to scaling than those with $k\geq 4$. Ultimately, we see that complete integrability does not completely ameliorate the severity of this nonlinearity when acting on solutions of low regularity. In this vein, we contend that the complete integrability of a system is not divorced from the class of initial data on which it is studied. The PDE $\partial_t q = \partial_x q$ posed on the line might immediately be classed as completely integrable; it even belongs to the KdV hierarchy. However, when the initial data is white noise, we see that the dynamics is mixing! On the basis of the results of this paper, we may say that the term \emph{completely integrable} continues to apply to \eqref{KdV} in the class $H^s({\mathbb{R}})$ when $s\geq -1$. We have not yet explained in what sense \eqref{KdV} can be regarded as completely integrable. The most common definition applied in finite-dimensional mechanics is that the system has sufficiently many Poisson commuting, functionally independent, conserved quantities. 
Here, sufficiently many means half the dimension of the ambient symplectic manifold. As noted earlier, the fact that \eqref{KdV} admits infinitely many independent conserved quantities was first proved in \cite{MR0252826}. We have already seen three: the mass, momentum, and energy. In the original paper, the conservation laws were presented in a microscopic form, that is, as \begin{equation}\label{micro law} \partial_t \rho(t,x) + \partial_x j(t,x) = 0, \end{equation} where the densities $\rho$ and the currents $j$ are given by particular polynomials in $q$ and its derivatives. The (macroscopic) conserved quantities are then obtained by integrating $\rho$ over the whole line or circle, as appropriate. The polynomial nature of these conserved quantities is such that, except in the case of $H^\infty$ data (i.e. all derivatives square integrable), all but finitely many of them are infinite. Moreover, it is also not immediately clear whether these constitute a sufficient number of conserved quantities to call the system completely integrable, even in Schwartz space. These concerns turn out to be unwarranted. To explain, we begin with an innovation of Lax \cite{MR0235310}, namely, the introduction of the Lax pair: Defining \begin{align*} L(t) &:= -\partial_x^2 + q(t,x) \qtq{and} P(t) := - 4 \partial_x^3 + 3\bigl(\partial_x q(t,x) + q(t,x) \partial_x \bigr) \end{align*} it is easy to verify that \begin{align*} \text{$q(t)$ solves \eqref{KdV}} \iff \frac{d\ }{dt} L(t) = [P(t),\, L(t)]. \end{align*} As $P(t)$ is always anti-self-adjoint, this shows that at each time slice, the Schr\"odinger operator with potential $q(t,x)$ is unitarily equivalent to that built from the initial data $q(0,x)$. Speaking loosely, we may say that all spectral properties of $L(t)$ are conserved under the KdV flow. One of the beauties of the Lax pair is that it works equally well in both geometries. 
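For the reader's convenience, here is a sketch of this verification: writing $P = -4\partial_x^3 + 6q\partial_x + 3q'$ and applying each commutator to a test function, one finds
\begin{align*}
[-4\partial_x^3,\, q] &= -4q''' - 12 q''\partial_x - 12 q'\partial_x^2 ,\\
[6q\partial_x + 3q',\, -\partial_x^2 + q] &= 3q''' + 6qq' + 12 q''\partial_x + 12 q'\partial_x^2 .
\end{align*}
Summing (the commutator $[-4\partial_x^3,\,-\partial_x^2]$ vanishes), all terms containing $\partial_x$ and $\partial_x^2$ cancel and $[P,L]$ reduces to multiplication by $-q'''+6qq'$; as $\frac{d\ }{dt}L(t)$ is multiplication by $\partial_t q$, the asserted equivalence follows. In particular, the spectrum of $L(t)$ is independent of $t$.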
However, once we try to speak more precisely about which spectral properties are conserved, this unity quickly dissolves. We will first discuss the periodic case where related ideas have been most successful in tackling the well-posedness problem. The Schr\"odinger operator on the circle with (periodic) potential $q$ has purely discrete spectrum. This remains true for potentials that are merely $H^{-1}$ because such perturbations of $-\partial_x^2$ are relatively compact. The Lax pair shows that these (periodic) eigenvalues are then conserved under the flow and so we obtain an infinite sequence of conserved quantities that extend to the case of very low regularity. There is a direct connection between these eigenvalues and the polynomial conservation laws mentioned earlier; see, for example, \cite[\S3]{MR0397076}. As it turns out, these eigenvalues are not the most convenient objects for further development. Rather, one should consider the spectrum of the Schr\"odinger operator associated to the $1$-periodic potential, acting on the whole line. This set is wholly determined by the periodic eigenvalues; see \cite{MR0749109}. Nevertheless, this new perspective suggests an alternate set of conserved quantities, namely, the lengths of the gaps in the spectrum. The virtue of these new quantities can be seen already in the fact that these numbers effectively capture the $H^s$ norms of the potential, at least if $s\geq0$; see \cite{MR0409965}. While such a priori bounds are useful for well-posedness questions (particularly, to extend solutions globally in time), they do not suffice. For the purposes of well-posedness, there is no better expression of complete integrability than the existence of action-angle coordinates. Such coordinates are now known to exist for \eqref{KdV} with data in $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$ and this result was decisive in the proof of global well-posedness in \cite{MR2267286}. 
A key step down this path was the discovery, firstly, that one should adopt the Dirichlet spectrum (together with the gap lengths) to form a complete set of coordinates and, secondly, that these points (which lie in the gaps) should properly be interpreted as lying on the Riemann surface obtained by gluing together two copies of the complex plane cut along the spectrum. These considerations lead to the definition of angle variables (cf. \cite{MR0427869,MR0427731}) and thence to associated actions \cite{MR0403368}. A very pedagogical account of these constructions can be found in \cite{MR1997070}; moreover, this monograph culminates in a proof that these variables define \emph{global} action-angle coordinates on each symplectic leaf in the phase-space $L^2({\mathbb{R}}/{\mathbb{Z}})$. The proof in \cite{MR2267286} of global well-posedness in $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$ required two more steps. The first, carried out in \cite{MR2179653}, was the extension of these coordinates (as a global analytic symplectic diffeomorphism) to $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$. The second was to gain adequate control of the frequencies (i.e., the time derivatives of the angles). Usually, these frequencies are computed as the derivatives of the Hamiltonian with respect to the corresponding actions. However, the Hamiltonian $H_\text{KdV}$ does not make sense as a function on $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$! Let us now turn our attention to the case of \eqref{KdV} posed on the line. We begin by describing a system of coordinates discovered already in \cite{GGKM} that linearizes the flow (at least for a suitable class of data). While not action-angle variables themselves, such variables can be readily expressed in terms of them; see \cite{MR0303132}. To the best of our knowledge, the broadest class in which the construction that follows has been successfully completed is $\mathcal L^1_1:= \{q\in L^1({\mathbb{R}}): x q(x) \in L^1({\mathbb{R}})\}$; see \cite{MR0897106,MR0792566}. 
As we will discuss later, there are compelling reasons to doubt that this construction can be taken much further without substantial new ideas. Given $q\in \mathcal L^1_1$ and $k\in{\mathbb{C}}$ with $\Im k\geq 0$, there are unique solutions $f_\pm(x;k)$ to $$ -f''(x) + q(x)f(x) = k^2 f(x) \qtq{satisfying} f_\pm(x) = e^{\pm ik x} +o(1) \quad\text{as $x\to\pm\infty$.} $$ These are known as Jost solutions and depend analytically on $k$. For $k\in{\mathbb{R}}\setminus\{0\}$, $f_+(x;\pm k)$ are linearly independent solutions to our ODE. Thus we may define connection coefficients, say $a(k)$ and $b(k)$, so that \begin{equation}\label{a and b} f_-(x;k) = a(k) f_+(x;-k) + b(k) f_+(x;k). \end{equation} Note that $a(k)$ extends analytically to the upper half-plane, since it can be expressed through the Wronskian of $f_+$ and $f_-$. There is no such extension of $b(k)$. The relation to the Wronskian also shows that $a(k)$ has zeros in the upper half-plane precisely at those points $i\kappa_n$ for which $-\kappa_n^2$ is an eigenvalue of the Schr\"odinger operator. The objects introduced so far do not uniquely characterize the potential $q$. To do so, one must also consider \emph{norming constants}, $c_n>0$, associated to each eigenvalue $-\kappa_n^2$. These describe the large-$x$ asymptotics of the $L^2$-normalized eigenfunction $\psi_n(x)$; specifically, $e^{\kappa_n x} |\psi_n(x)| \to c_n$ as $x\to+\infty$. As shown already in \cite{GGKM}, the objects just described evolve very simply under \eqref{KdV}: $a(k)$, $|b(k)|$, and the eigenvalues remain constant, while $\arg( b(k))$ and $\log(c_n)$ evolve linearly. As the forward and inverse scattering problems have been shown to be well-posed in the class $\mathcal L^1_1$, this yields a proof of well-posedness of \eqref{KdV} in this class. This is the natural analogue of the argument that has proven so successful in the circle geometry. 
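Let us make the Wronskian remarks concrete (a standard computation with the normalizations above): the Wronskian $W(f,g):=fg'-f'g$ of any two solutions of our ODE is independent of $x$; taking the Wronskian of both sides of \eqref{a and b} with $f_+(x;k)$ and evaluating as $x\to+\infty$ gives
$$
a(k) = \frac{W\bigl(f_-(\,\cdot\,;k),\, f_+(\,\cdot\,;k)\bigr)}{2ik},
$$
since $W\bigl(f_+(\,\cdot\,;-k),\,f_+(\,\cdot\,;k)\bigr)=2ik$. The right-hand side manifestly extends analytically to the upper half-plane, and it vanishes at $k=i\kappa_n$ exactly when $f_-$ and $f_+$ are proportional there, that is, when there is a solution decaying at both $\pm\infty$: an $L^2$ eigenfunction with eigenvalue $-\kappa_n^2$.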
Unfortunately, well-posedness of the forward/inverse scattering problems (as they are currently understood) begins to break down under very mild relaxations of the condition $q\in \mathcal L^1_1$. For example, the scattering data fails to determine $q$ already for potentials that are bounded and $O(x^{-2})$ at infinity, due to the presence of zero-energy eigenvalues; see \cite{MR0875319}. Relaxing our decay restrictions on $q$ to merely $O(|x|^{-1})$ at infinity gives rise to further problems: positive energy (embedded) eigenvalues may occur (cf. \cite[\S XIII.13]{MR0493421}); moreover, Jost solutions may fail to exist (without WKB correction) at every positive energy. In \cite{MR2138138}, it is shown that embedded singular continuous spectrum can occur as soon as one passes beyond $O(|x|^{-1})$ decay, even in the slightest. Moreover, potentials $q\in L^2({\mathbb{R}})$ can yield essentially arbitrary embedded singular spectrum; see \cite{MR2552106}. The appearance of such exotic spectra leads us to believe that seeking a solution to the well-posedness problem for \eqref{KdV} in $H^{-1}({\mathbb{R}})$ through the inverse scattering methodology has little chance of success at this time. In particular, we are not aware of any proposal for action-angle variables in such a scenario. This raises the following question: What other manifestation of complete integrability may hold the key to further progress on the well-posedness problem? Our answer, in this paper, is the existence of a wealth of commuting flows. As we will see, the method we propose does not completely supplant PDE techniques, but rather, like the Miura map, provides a new avenue for their application to the KdV problem. As the existence of an abundance of commuting flows is a necessary (but not sufficient) condition for a system to be considered completely integrable, the method has a good chance of being applicable to any PDE that is considered completely integrable. 
The commuting flows associated to the traditional sequence of conserved quantities (based on polynomials in $q$ and its derivatives) are not what we have in mind. Their well-posedness is at least as difficult as for \eqref{KdV} itself. Moreover, there is no sense in which they approximate the KdV flow; they are better considered as flowing in orthogonal directions. Rather, we begin our discussion with \begin{equation}\label{Intro renorm} \alpha(\kappa;q) := - \log[a(i\kappa;q)] + \tfrac{1}{2\kappa}\int q(x)\,dx, \end{equation} where $a(k;q)$ denotes the coefficient $a(k)$ from \eqref{a and b} associated to the potential $q$. As noted previously, both $a(k;q)$ and $\int q$ are conserved under the KdV flow; thus one should expect $\alpha(\kappa)$ to also be conserved whenever it is defined. Unaware that the same idea had already been implemented by Rybkin in \cite{MR2683250}, the authors together with X.~Zhang showed in \cite{KVZ} that $\alpha(\kappa;q)$ is a real-analytic function of $q\in H^{-1}({\mathbb{R}})$, provided $\kappa\geq 1 + 45\|q\|_{H^{-1}}^2$. We also gave a direct proof that it is conserved for Schwartz initial data. In these arguments, both we and Rybkin use the fact that $a(k;q)$ can also be written as a Fredholm determinant; see \eqref{O37} below. That such a determinant representation of this scattering coefficient is possible was first noticed in the setting of three-dimensional central potentials in \cite{MR0044404}. (See \cite[Proposition~5.4]{MR2154153} or \cite[Lemma~2.8]{MR2310217} for simple proofs in one dimension.) The renormalization of $\log|a(k)|$ appearing in \eqref{Intro renorm} is essential for considering $q\in H^{-1}({\mathbb{R}})$; without it, one would need to restrict to potentials that are at least conditionally integrable. Incidentally, this renormalizing term can also be predicted as the leading behaviour of the phase shift via WKB theory. 
The goal of the paper \cite{KVZ} was the construction of a variety of low-regularity conservation laws for KdV both on the line and on the circle. (NLS and complex mKdV were also treated there by the same method.) In the case of $H^{-1}({\mathbb{R}})$ bounds for KdV, our argument is essentially that of Rybkin \cite{MR2683250}, who obtained the same result. Another proof (also independent of Rybkin) can be found in \cite{MR3400442}. In the line setting, general $H^s({\mathbb{R}})$ bounds for KdV, NLS, and mKdV were obtained, independently, by Koch and Tataru \cite{KT}. For an earlier partial result, see also \cite{MR3292346}. In the circle setting, bounds of this type were obtained considerably earlier; see \cite{MR2179653}. In this paper, we will not rely on the results of \cite{MR3400442,MR2179653,KVZ,KT,MR2683250}. In fact, the proof of Theorem~\ref{T:ls conv} below relies on our development of an alternate argument, which also yields the global $H^{-1}$ bound. Specifically, we will develop a microscopic version (cf. \eqref{micro law}) of the macroscopic conservation law from \cite{KVZ,MR2683250}. A priori bounds of the type just described do not in themselves yield well-posedness. Indeed, conservation of momentum was known already to Korteweg and de~Vries, yet the corresponding well-posedness result did not appear until \cite{MR1215780}. The key obstacle is always to control differences of solutions. While individual solutions admit infinitely many conservation laws, the difference of two solutions need not have any. As discussed previously, the map $q\mapsto \alpha(\kappa;q)$ is analytic; therefore, its derivative (with respect to $q$) is represented (in the sense of \eqref{derivative}) by an analytic $H^1$-valued function of $q\in H^{-1}$. 
Thus, when we consider the Hamiltonian evolution induced by this functional, namely, $$ \frac{d\ }{dt} q(t) = \partial_x \frac{\delta \alpha}{\delta q}, $$ we see that the right-hand side is a Lipschitz function on $H^{-1}$ and so well-posedness of this equation follows by the standard ODE argument. Our ambition (and this appears to be a new idea) is to approximate \eqref{KdV} by this flow. It turns out that this is possible after one further renormalization, as we will now explain. It was observed already in \cite{MR0303132} that $\log[a(i\kappa)]$ acts as a generating function for the polynomial conserved quantities; in particular, this yields the asymptotic expansion $$ \alpha(\kappa;q) = \tfrac{1}{4\kappa^3} P(q) - \tfrac{1}{16\kappa^5} H_\text{KdV}(q) + O(\kappa^{-7}), $$ using the notations \eqref{I HKdV} and \eqref{I P}. Inspired by this, one may then postulate that the Hamiltonian $$ H_\kappa := - 16 \kappa^5 \alpha(\kappa;q) + 4 \kappa^2 P(q) $$ provides a good approximation to the KdV Hamiltonian for $\kappa$ large. More ambitiously, one may hope that the KdV flow is well approximated by the flow under $H_\kappa$. Verifying this and so deducing Theorem~\ref{T:main} occupies the central portion of the paper, namely, Sections~3--5. Several observations are in order. Firstly, while $\alpha(\kappa;q)$ is an analytic function on $H^{-1}$, the approximate Hamiltonian $H_\kappa$ is not, because momentum is not. Nevertheless, well-posedness of the resulting flow is still elementary; see Proposition~\ref{P:H kappa}. The problem of estimating the discrepancy between the $H_\kappa$ flow and the full KdV flow is much simplified by the fact that the two flows commute. Indeed, it reduces the question of such an approximation to showing that the flow induced by the difference $H_\text{KdV} - H_\kappa$ is close to the identity for $\kappa$ large and bounded time intervals. 
Naturally, one needs to show that this flow is close to the identity in the $H^{-1}$ metric; however, this follows from proximity in much weaker norms, say $H^{-3}$. The central point here is equicontinuity or, what is equivalent, tightness on the Fourier side; see Lemma~\ref{L:equi 1}. The equicontinuity of orbits under the flows of interest to us follows from the fact that all conserve $\alpha(\kappa;q)$; see Lemma~\ref{L:equi 2}. Indeed, from \eqref{alpha as I2}, we see that this functional effectively captures how much of the $H^{-1}$ norm of $q$ lives at frequencies $\xi$ with $|\xi|\gtrsim \kappa$. One further innovation informs our implementation of the program laid out above, namely, the adoption of the `good unknown' $x\mapsto\kappa - \tfrac{1}{2g(x)}$. Here $g(x):=g(x;\kappa,q)$ denotes the diagonal of the Green's function associated to the potential $q$ at energy $-\kappa^2$. For a discussion of this object, see Section~\ref{S:2}. In particular, it is shown there that the map from $q(x)$ to $\kappa - \tfrac{1}{2g(x)}$ is a real-analytic diffeomorphism, thus justifying the notion that $g(x)$ may effectively replace the traditional unknown $q(x)$. Both the diagonal Green's function and its reciprocal appear naturally in several places in our argument, including in the conserved density $\rho$ introduced in \eqref{E:rho defn} and in the dynamics associated to the Hamiltonian $H_\kappa$; see \eqref{H kappa flow q}. Although our embrace of $g(x)$ is certainly responsible for the simplicity of many of our estimates and concomitantly, for the brevity of the paper, we caution the reader that it is not in itself the key to overcoming the fundamental obstacle confounding previous investigators, namely, the problem of estimating differences between two solutions. We are not aware of any obstruction to extending the method employed here to a wide range of integrable systems, including those in the AKNS family. 
As evidence in favour of this assertion, we demonstrate in Section~\ref{S:periodic} how our method applies in the setting of KdV on the circle. In Appendix~\ref{S:A} we apply it to the next equation in the KdV hierarchy, following up on an enquiry of a referee. Regarding models in the AKNS family, we note that the functional $\alpha(\kappa,q)$ discussed in \cite{KVZ} is easily seen to have several of the favorable properties needed for our arguments, such as providing global norm control, yielding equicontinuity, and inducing a well-posed Hamiltonian flow. We do not consider what our results may imply for (real) mKdV via the Miura map. Rather, it is our hope that our method may soon be adapted to give an \emph{intrinsic} treatment of the more general \emph{complex} mKdV, which fits within the AKNS family of integrable systems. Finally, while the ideas presented here are rooted in the complete integrability of KdV, we believe they may prove fruitful beyond this realm. Specifically, we envision the $H_\kappa$ flow being used as a leading approximation for KdV-like equations in much the same way as the Airy equation, $\partial_t q = -q'''$, has been used as an approximation of KdV itself. \subsection{Local smoothing}\label{SS:ls} The local smoothing effect is observed for a wide range of dispersive equations in Euclidean space, both linear and nonlinear. The underlying physical principle is that when high-frequency components of a wave travel very quickly, they must spend little time in any fixed finite region of space. Thus, one should expect a gain in regularity locally in space on average in time. This phenomenon seems to have been first appreciated by Kato, both for linear \cite{MR0190801,MR0234314} and nonlinear \cite{MR0759907} problems. In \cite{MR0759907}, it is shown that for Schwartz solutions to \eqref{KdV} one has $$ \int_{-1}^1 \int_{-1}^1 |q'(t,x)|^2 \,dx\,dt \lesssim \| q(0) \|_{L^2}^2 + \| q(0) \|_{L^2}^6. 
$$ This is then used to prove the existence of global weak solutions to \eqref{KdV} for initial data in $L^2({\mathbb{R}})$. Prior to this, existence of global weak solutions (in either geometry) was known only for data in $H^1$; see \cite{MR0261183}. In \cite{MR3400442}, Buckmaster and Koch proved the existence of an analogous a priori local-smoothing estimate one degree lower in regularity (on both sides). This is achieved by using a Miura-type map and adapting Kato's local smoothing estimate for mKdV to the presence of a kink. This technology is then used to prove the existence of global weak/distributional solutions to \eqref{KdV} with initial data in $H^{-1}({\mathbb{R}})$. (The nonlinearity may now be interpreted distributionally, because local smoothing guarantees that $q(t,x)$ is locally square integrable in space-time.) As is usual with the construction of weak solutions, the arguments do not yield uniqueness and continuity in time is only shown with respect to the weak topology. (Continuous dependence on the initial data is hopeless without first knowing uniqueness.) For a restricted class of $H^{-1}$ initial data (namely, that in the range of the traditional Miura map), the existence of weak solutions was shown earlier in \cite{MR2189502}; see also \cite{MR0990865}. In Section~\ref{S:7} we will give a new derivation of the a priori local smoothing bound of \cite{MR3400442}. Our argument is based on the discovery of a new microscopic conservation law \eqref{E:l5.1h} adapted to regularity $H^{-1}({\mathbb{R}})$, which is then integrated against a suitably chosen weight function. It is not difficult to extend the a priori bound to the full class of solutions constructed in Theorem~\ref{T:main}. However, we are able to take the argument one step further and show the following (cf. 
Proposition~\ref{P:loc smoothing}): \begin{theorem}\label{T:ls conv} Let $q$ and $\{q_n:n\in{\mathbb{N}}\}$ be solutions to \eqref{KdV} on the line in the sense of Theorem~\ref{T:main}. If the initial data obey $q_n(0)\to q(0)$ in $H^{-1}({\mathbb{R}})$, then \begin{equation} \iint_K \bigl| q(t,x) - q_n(t,x)\bigr|^2\,dx\,dt \to 0\qtq{as} n\to\infty \end{equation} for every compact set $K\subset {\mathbb{R}}\times{\mathbb{R}}$. \end{theorem} It follows immediately from this result that the solutions we construct are indeed distributional solutions in the line case. \subsection*{Acknowledgements} R. K. was supported, in part, by NSF grant DMS-1600942 and M. V. by grant DMS-1500707. We would also like to thank the referee, whose comments and questions led to the inclusion of Appendix~\ref{S:A}. \subsection{Notation and Preliminaries}\label{S:1.1} Many of the functions considered in this paper have numerous arguments. For example, the diagonal Green's function ultimately depends on the location in space $x$, an energy parameter $\kappa$, and the wave profile $q$, which itself depends on time. We find it advantageous for readability to suppress some of these dependencies from time to time. We use prime solely to indicate derivatives in $x$; thus $f' =\partial_x f$. Our conventions for the Fourier transform are as follows: \begin{align*} \hat f(\xi) = \tfrac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-i\xi x} f(x)\,dx \qtq{so} f(x) = \tfrac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{i\xi x} \hat f(\xi)\,d\xi \end{align*} for functions on the line and \begin{align*} \hat f(\xi) = \int_0^1 e^{- i\xi x} f(x)\,dx \qtq{so} f(x) = \sum_{\xi\in 2\pi{\mathbb{Z}}} \hat f(\xi) e^{i\xi x} \end{align*} for functions on the circle ${\mathbb{R}}/{\mathbb{Z}}$. 
Concomitant with this, we define \begin{align*} \| f\|_{H^{s}({\mathbb{R}})}^2 = \int_{\mathbb{R}} |\hat f(\xi)|^2 (4+|\xi|^2)^s \,d\xi \qtq{and} \| f\|_{H^{s}({\mathbb{R}}/{\mathbb{Z}})}^2 = \sum_{\xi\in 2\pi{\mathbb{Z}}} (4+\xi^2)^s |\hat f(\xi)|^2 . \end{align*} The use of the number $4$ here rather than the more traditional $1$ has no meaningful effect on these Hilbert spaces (the norms are equivalent); however, this definition simplifies our exposition by making certain key relations exact identities. More generally, we define \begin{align*} \| f\|_{H^{s}_\kappa({\mathbb{R}})}^2 = \int_{\mathbb{R}} |\hat f(\xi)|^2 (4\kappa^2+|\xi|^2)^s \,d\xi \qtq{and} \| f\|_{H^{s}_\kappa({\mathbb{R}}/{\mathbb{Z}})}^2 = \sum_{\xi\in 2\pi{\mathbb{Z}}} (4\kappa^2+\xi^2)^s |\hat f(\xi)|^2 . \end{align*} Note that $H^{1}_\kappa$ is an algebra in either geometry. Indeed, one readily sees that $$ \| f g \|_{H^{1}_\kappa} \lesssim \| f \|_{H^{1}} \| g \|_{H^{1}_\kappa} \leq \| f \|_{H^{1}_\kappa} \| g \|_{H^{1}_\kappa} \quad\text{uniformly for $\kappa\geq 1$.} $$ By duality, this implies that $$ \| f h \|_{H^{-1}_\kappa} \lesssim \| f \|_{H^{\vphantom{+}1}_{\vphantom{\kappa}}} \| h \|_{H^{-1}_\kappa} \quad\text{uniformly for $\kappa\geq 1$.} $$ Throughout the paper, we will employ the $L^2$ pairing. This informs our identification of $H^{-1}$ and $H^1$ as dual spaces and our notation for functional derivatives: \begin{equation}\label{derivative} \frac{d\ }{ds}\biggr|_{s=0} F(q+sf) = dF\bigl|_q (f) = \int \frac{\delta F}{\delta q}(x) f(x)\,dx . \end{equation} We write ${\mathfrak{I}}_p$ for the Schatten class of compact operators whose singular values are $\ell^p$ summable. In truth, we shall use the Hilbert--Schmidt class ${\mathfrak{I}}_2$ almost exclusively. When we do use ${\mathfrak{I}}_1$, it will only be as a notation for products of Hilbert--Schmidt operators; see \eqref{I1 from I2}. 
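Returning briefly to the algebra property of $H^{1}_\kappa$ noted above, the first displayed bound admits a short verification (a sketch we include for the reader's convenience), using the exact identity $\| h \|_{H^1_\kappa}^2 = \| h' \|_{L^2}^2 + 4\kappa^2 \| h \|_{L^2}^2$ together with the embedding $\| h \|_{L^\infty} \lesssim \| h \|_{H^1}$:

```latex
\begin{align*}
\| f g \|_{H^{1}_\kappa}
  &\lesssim \| f' g \|_{L^2} + \| f g' \|_{L^2} + 2\kappa \| f g \|_{L^2} \\
  &\leq \| f' \|_{L^2}\, \| g \|_{L^\infty}
      + \| f \|_{L^\infty} \bigl( \| g' \|_{L^2} + 2\kappa\, \| g \|_{L^2} \bigr)
   \lesssim \| f \|_{H^{1}} \| g \|_{H^{1}_\kappa} .
\end{align*}
```

The final step also uses $\| g \|_{L^\infty} \lesssim \| g \|_{H^1} \leq \| g \|_{H^1_\kappa}$, which holds uniformly for $\kappa\geq 1$.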
Let us quickly recall several facts about the class ${\mathfrak{I}}_2$ that we will use repeatedly: An operator $A$ on $L^2({\mathbb{R}})$ is Hilbert--Schmidt if and only if it admits an integral kernel $a(x,y)\in L^2({\mathbb{R}}\times{\mathbb{R}})$; moreover, \begin{align*} \| A \|_{L^2\to L^2} \leq \| A \|_{{\mathfrak{I}}_2} \qtq{and} \| A \|_{{\mathfrak{I}}_2}^2 = \iint |a(x,y)|^2\,dx\,dy. \end{align*} The product of two Hilbert--Schmidt operators is trace class; moreover, \begin{align*} \tr(AB) := \iint a(x,y)b(y,x)\,dy\,dx = \tr(BA) \qtq{and} |\tr(AB)| \leq \| A \|_{{\mathfrak{I}}_2} \| B \|_{{\mathfrak{I}}_2}. \end{align*} Lastly, Hilbert--Schmidt operators form a two-sided ideal in the algebra of bounded operators; indeed, \begin{align*} \| B A C \|_{{\mathfrak{I}}_2} \leq \| B \|_{L^2\to L^2} \| A \|_{{\mathfrak{I}}_2}\| C \|_{L^2\to L^2}. \end{align*} All of this (and much more) is explained very clearly in \cite{MR2154153}. For the arguments presented here, the problem \eqref{KdV} posed on the circle is more favorably interpreted as a problem on the whole line with periodic initial data. Correspondingly, even in this case, we will be dealing primarily with operators on the whole line, albeit with periodic coefficients. When we do need to discuss operators acting on the circle ${\mathbb{R}}/{\mathbb{Z}}$ in connection with prior work, these will be distinguished by the use of calligraphic font. \section{Diagonal Green's function}\label{S:2} The goal of this section is to discuss the Green's function $G(x,y)$ associated to the whole-line Schr\"odinger operator $$ L := -\partial_x^2 + q $$ for potentials \begin{align}\label{B delta} q \in B_\delta := \{ q \in H^{-1}({\mathbb{R}}) : \| q \|_{H^{-1}({\mathbb{R}})} \leq \delta\} \end{align} and $\delta$ small. Particular attention will be paid to the diagonal $g(x):=G(x,x)$ and its reciprocal $1/g(x)$; the latter appears in the energy density associated to the key microscopic conservation law for KdV. 
Let us briefly recall one key fact associated to the Schr\"odinger operator with $q\equiv 0$: The resolvent \begin{align}\label{R resolvent} R_0(\kappa) = (-\partial^2_x + \kappa^2)^{-1} \qtq{has integral kernel} G_0(x,y;\kappa) = \tfrac{1}{2\kappa} e^{-\kappa|x-y|} \end{align} for all $\kappa>0$. \begin{prop}\label{P:sa L} Given $q\in H^{-1}({\mathbb{R}})$, there is a unique self-adjoint operator $L$ associated to the quadratic form $$ \psi \mapsto \int |\psi'(x)|^2 + q(x) |\psi(x)|^2\,dx \qtq{with domain} H^1({\mathbb{R}}). $$ It is semi-bounded. Moreover, for $\delta\leq \frac12$ and $q\in B_\delta$, the resolvent is given by the norm-convergent series \begin{align}\label{E:R series} R := (L+\kappa^2)^{-1} = \sum_{\ell=0}^\infty (-1)^\ell \sqrt{R_0} \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \sqrt{R_0} \end{align} for all $\kappa\geq 1$. \end{prop} \begin{proof} The key estimate on which all rests is the following: \begin{align}\label{R I2} \Bigl\| \sqrt{R_0}\, q\, \sqrt{R_0} \Bigr\|^2_{\text{op}} \leq \Bigl\| \sqrt{R_0}\, q\, \sqrt{R_0} \Bigr\|^2_{\mathfrak I_2({\mathbb{R}})} &= \frac1\kappa \int \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi . \end{align} For $q\in \mathcal S({\mathbb{R}})$ the Hilbert--Schmidt norm can be evaluated directly using \eqref{R resolvent}: \begin{align*} \Bigl\| \sqrt{R_0}\, q\, \sqrt{R_0} \Bigr\|^2_{\mathfrak I_2({\mathbb{R}})} &= \frac1{4\kappa^2} \iint q(x) e^{-2\kappa|x-y|}q(y)\,dx\,dy = \text{RHS\eqref{R I2}}. \end{align*} This then extends to all $q\in H^{-1}({\mathbb{R}})$ by approximation. From \eqref{R I2}, we see that \begin{align*} \int q(x)|\psi(x)|^2 \,dx \leq \kappa^{-1/2} \|q\|_{H^{-1}} \int |\psi'(x)|^2 + \kappa^2 |\psi(x)|^2\,dx \qtq{for all} \psi \in H^1({\mathbb{R}}), \end{align*} at least for all $\kappa\geq 1$. (Note that the LHS here should be interpreted via the natural pairing between $H^{-1}$, which contains $q$, and $H^1$, which contains $|\psi|^2$.) 
This estimate shows that $q$ is an infinitesimally form-bounded perturbation of the case $q\equiv 0$ and so the existence and uniqueness of $L$ follows from \cite[Theorem X.17]{MR0493420}. In view of \eqref{R I2}, the series \eqref{E:R series} converges provided we just choose $\delta <1$. \end{proof} \begin{prop}[Diffeomorphism property]\label{P:diffeo} There exists $\delta>0$ so that the following are true for all $\kappa\geq 1${\upshape:}\\ (i) For each $q\in B_\delta$, the resolvent $R$ admits a continuous integral kernel $G(x,y;\kappa,q)${\upshape;} thus, we may unambiguously define \begin{align}\label{g defn} g(x;\kappa,q):=G(x,x;\kappa,q). \end{align} (ii) The mappings \begin{align}\label{diffeos} q\mapsto g-\tfrac1{2\kappa} \qtq{and} q\mapsto \kappa-\tfrac1{2g} \end{align} are (real analytic) diffeomorphisms of $B_\delta$ into $H^1({\mathbb{R}})$.\\ (iii) If $q(x)$ is Schwartz, then so are $g(x)- \tfrac1{2\kappa}$ and $\kappa-\tfrac1{2g(x)}$. Indeed, \begin{align}\label{g stronger mapping} \|g'(x)\|_{H^{s}} \lesssim_s \| q \|_{H^{s-1}} \qtq{and} \| \langle x\rangle ^s g'(x) \|_{L^2} \lesssim_s \| \langle x\rangle ^s q \|_{H^{-1}} \end{align} for every integer $s\geq 0$. \end{prop} \begin{remark} The diffeomorphism property is necessarily restricted to a neighborhood of the origin because for $q$ large, the spectrum of $L$ may intersect $-\kappa^2$. \end{remark} \begin{proof} Initially, we ask that $\delta\leq \frac12$; later, we will add further restrictions. From \eqref{E:R series} and \eqref{R I2}, we see that \begin{align*} \Bigl\| \sqrt{\kappa^2-\partial_x^2} \bigl(R - R_0\bigr) \sqrt{\kappa^2-\partial_x^2} \Bigr\|_{{\mathfrak{I}}_2} < \infty \qtq{for all} q\in B_\delta \qtq{and all} \kappa\geq 1. \end{align*} Consequently, $G-G_0$ exists as an element of $H^1({\mathbb{R}})\otimes H^1({\mathbb{R}})$. Here we mean tensor product in the Hilbert-space sense (cf. 
\cite{MR0493419}); note that $H^1({\mathbb{R}})\otimes H^1({\mathbb{R}})$ consists of those $f\in H^1({\mathbb{R}}^2)$ for which $\partial_x\partial_yf \in L^2({\mathbb{R}}^2)$. It follows that $G(x,y;\kappa,q)$ is a continuous function of $x$ and $y$ and we may define \begin{align}\label{E:g series} g(x) = g(x;\kappa,q) = \tfrac1{2\kappa} + \sum_{\ell=1}^\infty (-1)^\ell \Bigl\langle\sqrt{R_0}\delta_x,\ \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \sqrt{R_0} \delta_x\Bigr\rangle, \end{align} where inner products are taken in $L^2({\mathbb{R}})$. This settles (i). Next we observe that by \eqref{E:R series} and \eqref{R I2}, \begin{align*} \Bigl| \int f(x) \bigl[g(x) -\tfrac1{2\kappa}\bigr]\,dx \Bigr| &\leq \sum_{\ell=1}^\infty \Bigl\| \sqrt{R_0}\, f\, \sqrt{R_0} \Bigr\|_{\mathfrak I_2({\mathbb{R}})}\Bigl\| \sqrt{R_0}\, q\, \sqrt{R_0} \Bigr\|_{\mathfrak I_2({\mathbb{R}})}^\ell \\ &\leq 2\delta\kappa^{-1} \|f\|_{H^{-1}({\mathbb{R}})} \end{align*} for any Schwartz function $f$. Thus $g-\frac1{2\kappa}\in H^1({\mathbb{R}})$; indeed, \begin{align}\label{g H1 bound} \bigl\| g - \tfrac1{2\kappa} \bigr\|_{H^1({\mathbb{R}})} \leq 2\delta\kappa^{-1}. \end{align} Moreover, this argument shows precisely the convergence of the series \eqref{E:g series} and hence that the mapping from $q\in B_\delta$ to $g-\frac1{2\kappa}\in H^{1}({\mathbb{R}})$ is real analytic. Given $f\in H^{-1}({\mathbb{R}})$, the resolvent identity implies \begin{align}\label{O35pp} \frac{d\ }{ds}\biggr|_{s=0} g(x;q+sf) = - \int G(x,y)f(y)G(y,x)\,dy . \end{align} In particular, by \eqref{R resolvent}, $$ dg\bigr|_{q\equiv 0} = - \kappa^{-1} R_0(2\kappa), $$ which is an isomorphism of $H^{-1}_\kappa$ onto $H^1_\kappa$, with condition number equal to $1$. 
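To spell out the assertion about the condition number (a two-line verification we include for convenience): on the Fourier side, $R_0(2\kappa)$ acts by multiplication by $(\xi^2+4\kappa^2)^{-1}$, and so, up to the irrelevant sign,

```latex
\begin{align*}
\bigl\| \kappa^{-1} R_0(2\kappa) f \bigr\|_{H^{1}_\kappa}^2
  = \frac{1}{\kappa^2} \int \frac{4\kappa^2+\xi^2}{(\xi^2+4\kappa^2)^2}\, |\hat f(\xi)|^2\,d\xi
  = \frac{1}{\kappa^2}\, \| f \|_{H^{-1}_\kappa}^2 .
\end{align*}
```

Thus $dg\bigr|_{q\equiv 0}$ is a constant multiple of an isometry of $H^{-1}_\kappa$ onto $H^{1}_\kappa$; in particular, its condition number is $1$. Incidentally, this is one instance where the appearance of $4$ (rather than $1$) in our definition of the norms yields an exact identity.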
Moreover, by \eqref{R I2}, \eqref{E:R series}, and duality, \begin{align}\label{inverse input} \bigl\| dg\bigr|_{q\equiv 0} - dg\bigr|_q \bigr\|_{H^{-1}_\kappa \to H^{1\vphantom{+}}_\kappa} \lesssim \kappa^{-1} \|q\|_{H^{-1}_\kappa} \lesssim \delta \Bigl\| \bigl(dg\bigr|_{q\equiv 0} \bigr)^{-1} \Bigr\|_{H^{1\vphantom{+}}_\kappa \to H^{-1}_\kappa}^{-1} . \end{align} Thus choosing $\delta$ sufficiently small, the inverse function theorem guarantees that \begin{align}\label{g is diffeo} q \mapsto g - \tfrac{1}{2\kappa} \quad\text{is a diffeomorphism of $\{ q : \|q\|_{H^{-1}_\kappa} \leq \delta\}$ into $H^{1}_\kappa$}. \end{align} Note that \eqref{inverse input} combined with the standard contraction-mapping proof of the inverse function theorem guarantees that $\delta$ can be chosen independently of $\kappa$. The claimed $H^{-1}\to H^1$ diffeomorphism property of this map then follows since $$ \|q\|_{H^{-1}_\kappa} \leq \|q\|_{H^{-1}} \qtq{and} \| f\|_{H^1_\kappa} \lesssim_\kappa \|f\|_{H^1}. $$ Choosing $\delta$ even smaller if necessary, \eqref{g H1 bound} together with the embedding $H^1\hookrightarrow L^\infty$ guarantees that $$ \tfrac1{4\kappa} \leq g(x) \leq \tfrac{3}{4\kappa} \quad\text{for all $q\in B_\delta$.} $$ Consequently, the second mapping in \eqref{diffeos} is also real-analytic. To prove that it is a diffeomorphism (for some $\kappa$-independent choice of $\delta$), we simply note that $$ f \mapsto \frac{f}{1+f} $$ is a diffeomorphism from a neighbourhood of zero in $H^1({\mathbb{R}})$ into $H^1({\mathbb{R}})$, write $$ \kappa-\tfrac1{2g} = \kappa \frac{2\kappa(g-\frac1{2\kappa} )}{1+2\kappa(g-\frac1{2\kappa} )}, $$ and use \eqref{g H1 bound} together with \eqref{g is diffeo}. We now turn our attention to part (iii). The Green's function associated to a translated potential is simply the translation of the original Green's function. 
Correspondingly, \begin{align}\label{translation identity} g(x+h;q) = g\bigl(x; q(\cdot+h)\bigr) \qquad\text{for all $h\in{\mathbb{R}}$.} \end{align} Differentiating with respect to $h$ at $h=0$ and invoking \eqref{E:g series} yields \begin{align*} \int \bigl[\partial_x^s g(x)\bigr] f(x)\,dx &\leq \sum_{\ell=1}^\infty \sum_{\sigma} \binom{s}{\sigma} \Bigl\| \sqrt{R_0}\, f\, \sqrt{R_0} \Bigr\|_{\mathfrak I_2({\mathbb{R}})} \prod_{k=1}^\ell \Bigl\| \sqrt{R_0}\, q^{(\sigma_k)}\, \sqrt{R_0} \Bigr\|_{\mathfrak I_2({\mathbb{R}})}. \end{align*} Here, the inner sum extends over multi-indices $\sigma=(\sigma_1,\ldots,\sigma_\ell)$ with $|\sigma|=s$. Maximizing over unit vectors $f\in H^{-1}$, exploiting \eqref{R I2}, and using $$ \prod_{k=1}^\ell \bigl\| q^{(\sigma_k)} \bigr\|_{H^{-1}} \leq \bigl\| q^{(s)} \bigr\|_{H^{-1}} \bigl\| q \bigr\|_{H^{-1}}^{\ell-1}, $$ which is merely an application of H\"older's inequality in Fourier variables, this yields \begin{align*} \bigl\|\partial_x^s g(x)\bigr\|_{H^{1}} &\leq \sum_{\ell=1}^\infty \ell^s \bigl\| q^{(s)} \bigr\|_{H^{-1}} \delta^{\ell-1} \lesssim_s \bigl\| q\bigr\|_{H^{s-1}}. \end{align*} Thus we have verified the first claim in \eqref{g stronger mapping}. To address the second assertion in \eqref{g stronger mapping}, we first make the following claim: For every integer $s\geq 0$, \begin{align}\label{R langle commutator} \langle x\rangle^s R_0 = \sum_{r=0}^s \sqrt{R_0}\, A_{r,s} \sqrt{R_0}\, \langle x\rangle^r \qtq{with operators} \| A_{r,s} \|_{L^2\to L^2}\lesssim_s 1. 
\end{align} This is easily verified recursively, by repeatedly using the following commutators: \begin{equation}\label{basic commutators} \begin{aligned} \bigl[ \langle x\rangle,\ R_0\bigr] = R_0 \bigl[ -\partial_x^2 +\kappa^2 ,\ \langle x\rangle \bigr] R_0 &= -R_0 \bigl(\tfrac{x}{\langle x\rangle} \partial_x + \partial_x\tfrac{x}{\langle x\rangle}\bigr) R_0 \\ \bigl[ \tfrac{x}{\langle x\rangle} \partial_x + \partial_x\tfrac{x}{\langle x\rangle},\ \langle x\rangle\bigr] &= 2\tfrac{x^2}{\langle x\rangle^2}. \end{aligned} \end{equation} In connection with \eqref{R langle commutator}, let us also pause to note that \begin{align}\label{E:weight change} \bigl\| \langle x\rangle^r q \bigr\|_{H^{-1}} \lesssim_{s} \bigl\| \langle x\rangle^s q \bigr\|_{H^{-1}} \quad\text{for any pair of integers $0\leq r\leq s$}, \end{align} since $\langle x\rangle^{-1}\in H^1({\mathbb{R}})$, which is an algebra. By applying \eqref{E:g series}, \eqref{R I2}, \eqref{R langle commutator}, and \eqref{E:weight change}, we deduce that \begin{align*} \int f(x) & \langle x\rangle^s \bigl[g(x) -\tfrac{1}{2\kappa} \bigr]\,dx \\ &\lesssim_s \sum_{\ell=1}^\infty \sum_{r=0}^s \Bigl\| \sqrt{R_0}\, f\, \sqrt{R_0} \Bigr\|_{\mathfrak I_2({\mathbb{R}})} \Bigl\| \sqrt{R_0}\, \langle x\rangle^r q\, \sqrt{R_0} \Bigr\|_{\mathfrak I_2({\mathbb{R}})} \delta^{\ell-1} \\ &\lesssim_s \bigl\| f \bigr\|_{H^{-1}} \bigl\| \langle x\rangle^s q \bigr\|_{H^{-1}}. \end{align*} Optimizing over $f\in H^{-1}({\mathbb{R}})$, it then follows that $$ \| \langle x\rangle^s g'(x) \|_{L^2({\mathbb{R}})} \lesssim_s \bigl\| \langle x\rangle^s \bigl[g(x) -\tfrac{1}{2\kappa} \bigr] \bigr\|_{H^1({\mathbb{R}})} \lesssim_s \| \langle x\rangle ^s q \|_{H^{-1}}, $$ thereby completing the proof of \eqref{g stronger mapping} and so the proof of the proposition. 
\end{proof} \begin{prop}[Elliptic PDE]\label{P:elliptic} The diagonal Green's function obeys \begin{align} g'''(x) &= 2 \bigl[q(x) g(x) \bigr]' + 2 q(x) g'(x) + 4\kappa^2 g'(x) .\label{E:l5.1a} \end{align} \end{prop} \begin{proof} By virtue of being the Green's function, \begin{align*} \bigl( -\partial_x^2 + q(x) \bigr) G(x,y) = -\kappa^2 G(x,y) + \delta(x-y) = \bigl( -\partial_y^2 + q(y) \bigr) G(x,y) \end{align*} and consequently, \begin{align*} \bigl( \partial_x + \partial_y\bigr)^3 G(x,y) &= \bigl(q'(x)+q'(y)\bigr)G(x,y) + 2\bigl(q(x)+q(y)\bigr) \bigl( \partial_x + \partial_y\bigr) G(x,y) \\ &\qquad - \bigl(q(x)-q(y)\bigr) \bigl( \partial_x - \partial_y\bigr) G(x,y) +4\kappa^2\bigl(\partial_x + \partial_y\bigr) G(x,y). \end{align*} Thus specializing to $y=x$, we deduce that $$ g'''(x) = 2 q'(x) g(x) + 4 q(x) g'(x) + 4 \kappa^2 g'(x), $$ which agrees with \eqref{E:l5.1a} after regrouping terms. \end{proof} \begin{remark} As will be discussed in the proof of Lemma~\ref{L:D 1/g}, the Green's function can be expressed in terms of two solutions $\psi_\pm(x)$ to the Sturm--Liouville equation (the Weyl solutions); see \eqref{G from psi}. In this sense, $g(x)=\psi_+(x)\psi_-(x)$ was seen to obey \eqref{E:l5.1a} already in \cite{Appell}. \end{remark} \begin{prop}[Introducing $\rho$]\label{P:Intro rho} There exists $\delta>0$ so that \begin{align}\label{E:rho defn} \rho(x;\kappa,q) := \kappa - \tfrac{1}{2g(x)} + \tfrac12\int e^{-2\kappa|x-y|} q(y)\,dy \end{align} belongs to $L^1({\mathbb{R}}) \cap H^1({\mathbb{R}})$ for all $q\in B_\delta$ and $\kappa\geq 1$. Moreover, fixing $x\in{\mathbb{R}}$, the map $q\mapsto \rho(x)$ is non-negative and convex. 
Additionally, \begin{align}\label{E:alpha defn} \alpha(\kappa;q) := \int_{\mathbb{R}} \rho(x)\,dx \end{align} defines a non-negative, real-analytic, strictly convex function of $q\in B_\delta$, and satisfies \begin{align}\label{alpha as I2} \alpha(\kappa;q) \approx \frac{1}{\kappa} \int_{\mathbb{R}} \frac{|\hat q(\xi)|^2\,d\xi}{\xi^2+4\kappa^2}, \end{align} uniformly for $q\in B_\delta$ and $\kappa\geq 1$. Lastly, \begin{align}\label{O37} \alpha(\kappa;q) = - \log\det_2\left( 1+ \sqrt{R_0}\, q \, \sqrt{R_0} \right). \end{align} \end{prop} \begin{remarks} 1. Although we shall have no use for the strict convexity of $q\mapsto\alpha(\kappa;q)$ in this paper, it does have important consequences. Most notably, by the Radon--Riesz argument, it shows that weakly continuous solutions conserving $\alpha(\kappa)$ are automatically norm-continuous. 2. As noted in the Introduction (see \eqref{Intro renorm} and subsequent discussion), the quantity $\alpha(\kappa;q)$ is essentially the logarithm of the transmission coefficient and so well-studied. Nevertheless, none of the literature we have studied contains the representation \eqref{E:alpha defn} in terms of the reciprocal of the Green's function. Rather, prior works employ an integral representation based on the logarithmic derivative of one of the Jost solutions; see \eqref{E:a from Weyl}. To the best of our knowledge, this approach originates in \cite[\S3]{MR0303132}, where it was shown to be an effective tool for deriving polynomial conservation laws and for demonstrating that these polynomial conservation laws appear as coefficients in the asymptotic expansion of the logarithm of the transmission coefficient as $\kappa\to\infty$. \end{remarks} Before turning to the proof of Proposition~\ref{P:Intro rho}, we first explain the meaning of RHS\eqref{O37} and then present two lemmas that we shall need. 
The symbol $\det_2$ denotes the renormalized Fredholm determinant introduced by Hilbert in \cite{Hilbert}; see \cite{MR2154153} for a more up-to-date exposition. In the context of Proposition~\ref{P:Intro rho}, our choice of $\delta$ guarantees that the operator $$ A = \sqrt{R_0}\, q \, \sqrt{R_0} \qtq{obeys} \| A \|_{{\mathfrak{I}}_2} < 1. $$ Consequently, it suffices for what follows to exploit only the notion of the trace of an operator (rather than determinant) thanks to the identity \begin{align}\label{det series} -\log \det_2 \bigl(1 + A \bigr) = \tr\big( A - \log(1+A) \bigr) = \sum_{\ell=2}^\infty \frac{(-1)^\ell}{\ell} \tr\bigl( A^\ell \bigr). \end{align} We shall not delve deeply into such matters here, since \eqref{O37} has no bearing on the proof of well-posedness for KdV; indeed, our only reason for verifying this identity is to make the link to the prior works \cite{KVZ,MR2683250}, which might otherwise seem unrelated. \begin{lemma}\label{L:D 1/g} There exists $\delta>0$ so that \begin{align}\label{GgG identity} \int \frac{G(x,y;\kappa,q)G(y,x;\kappa,q)}{2g(y;\kappa,q)^2}\,dy = g(x;\kappa,q) \end{align} for all $q\in B_\delta$ and all $\kappa\geq 1$. \end{lemma} \begin{remark} Augmenting the proof below with the results of \cite[\S8.3]{MR0069338} shows that \eqref{GgG identity} holds also in the case of $q\in H^{-1}({\mathbb{R}}/{\mathbb{Z}})$ and $\kappa\geq 1$ that obey \eqref{periodic smallness}. As below, one first uses analyticity to reduce to a case where one may apply ODE techniques, more specifically, to the case of small smooth periodic potentials. \end{remark} \begin{proof} We choose $\delta>0$ as needed for Proposition~\ref{P:diffeo}. In this case, both sides of \eqref{GgG identity} are analytic functions of $q$. Consequently, it suffices to prove the result under the additional hypotheses that $q$ is Schwartz and $\|q\|_{L^\infty}<1$. Techniques in Sturm--Liouville theory (cf. 
\cite[\S3.8]{MR0069338}) show that there are solutions $\psi_\pm(x)$ to \begin{align}\label{ODE} -\psi'' + q \psi = -\kappa^2 \psi \end{align} that decay (along with derivatives) exponentially as $x\to\pm\infty$ and grow exponentially as $x\to\mp\infty$. Constancy of the Wronskian guarantees that these Weyl solutions (as they are known) are unique up to scalar multiples; we (partially) normalize them by requiring the Wronskian relation \begin{equation}\label{E:Wron} \psi_+(x) \psi_-'(x) - \psi_+'(x)\psi_-(x) = 1 \end{equation} and that $\psi_\pm(x) >0$. Note that the Sturm oscillation theorem guarantees that neither solution may change sign. Using the Weyl solutions, we may write the Green's function as \begin{align}\label{G from psi} G(x,y) = \psi_+(x\vee y) \psi_-(x\wedge y). \end{align} In this way, the proof of the lemma reduces to showing that \begin{align}\label{pre FTC} \tfrac12 \int_{-\infty}^x \Bigl[\tfrac{\psi_+(x)}{\psi_+(y)}\Bigr]^2 \,dy + \tfrac12 \int_x^\infty \Bigl[\tfrac{\psi_-(x)}{\psi_-(y)}\Bigr]^2 \,dy = \psi_+(x)\psi_-(x). \end{align} However, by \eqref{E:Wron}, we have \begin{align*} \tfrac{d\ }{dy} \tfrac{\psi_-(y)}{\psi_+(y)} = \tfrac{1}{\psi_+(y)^2} \qtq{and} \tfrac{d\ }{dy} \tfrac{\psi_+(y)}{\psi_-(y)} = - \tfrac{1}{\psi_-(y)^2}. \end{align*} Thus \eqref{pre FTC} follows by the fundamental theorem of calculus and the exponential behavior of $\psi_\pm(y)$, as $|y|\to \infty$. \end{proof} \begin{remark} As mentioned above, there is an alternate integral representation of $\alpha(\kappa;q)$ introduced much earlier. The proof of Lemma~\ref{L:D 1/g} provides the requisite vocabulary to explain what that is: \begin{align}\label{E:a from Weyl} \log[a(i\kappa)] = - \int \tfrac{\psi_+'(y)}{\psi_+(y)} + \kappa \,dy = \int \tfrac{\psi_-'(y)}{\psi_-(y)} - \kappa \,dy. \end{align} Here $\psi_\pm$ represent the Weyl solutions; however, the formula applies equally well using the Jost solutions, since they differ only in normalization. 
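As a quick consistency check (an illustration we interject here), consider $q\equiv 0$: the Weyl solutions normalized by \eqref{E:Wron} may be taken to be

```latex
\begin{align*}
\psi_\pm(x) = \tfrac{1}{\sqrt{2\kappa}}\, e^{\mp\kappa x}
  \qtq{so that}
\psi_+(x) \psi_-'(x) - \psi_+'(x)\psi_-(x)
  = \tfrac{\kappa}{2\kappa} + \tfrac{\kappa}{2\kappa} = 1 ,
\end{align*}
```

and \eqref{G from psi} recovers $G_0(x,y;\kappa) = \tfrac{1}{2\kappa} e^{-\kappa|x-y|}$ from \eqref{R resolvent}, together with $g(x) = \psi_+(x)\psi_-(x) \equiv \tfrac1{2\kappa}$.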
It is in this equivalent form that the first identity appears in \cite[\S3]{MR0303132}. Averaging these two representations and invoking \eqref{E:Wron} and then \eqref{G from psi} yields \begin{align}\label{E:a from little g} \log[a(i\kappa)] = \int \tfrac{1}{2\psi_-(y)\psi_+(y)} - \kappa \,dy = \int \tfrac{1}{2g(y)} - \kappa\,dy, \end{align} which is readily seen to be equivalent to \eqref{E:alpha defn}. One easy way to distinguish these three representations is the fact that $\psi_+(y)$ depends only on the values of $q$ on the interval $[y,\infty)$, while $\psi_-(y)$ is determined by $q$ on the interval $(-\infty,y]$; on the other hand, $g(y)$ depends on the values of $q$ throughout the real line. \end{remark} The following identity will be used not only in the proof of Proposition~\ref{P:Intro rho}, but also in Section~\ref{S:3}. \begin{lemma}\label{L:G ibp} Given Schwartz functions $f$ and $q$, \begin{align*} & \int G(x,y;\kappa,q) \bigl[ - f'''(y) + 2q(y)f'(y) + 2\bigl(q(y)f(y)\bigr)'+4\kappa^2 f'(y)\bigr]G(y,x;\kappa,q)\,dy \\ &= 2 f'(x)g(x;\kappa,q) - 2f(x)g'(x;\kappa,q). \end{align*} This identity also holds if merely $f(x)-c$ is Schwartz for some constant $c$. \end{lemma} \begin{proof} The argument that follows applies equally well irrespective of the presence/absence of the constant $c$. Alternatively, as both sides of the identity are linear in $f$, the cases $f$ Schwartz and $f$ constant can be treated separately. However, when $f$ is constant the identity can be obtained more swiftly by other means; see \eqref{translation identity'}. The most elementary proof proceeds from the defining property of $G$, namely, $$ \bigl(-\partial_y^2 + q(y) + \kappa^2\bigr)G(y,x) = \bigl(-\partial_y^2 + q(y) + \kappa^2\bigr)G(x,y) = \delta(x-y) $$ and integration by parts. Nevertheless, we find the argument more palatable when presented in terms of operator identities. 
Specifically, from the operator identity \begin{align*} - f''' &= (-\partial^2+\kappa^2)f' + f' (-\partial^2+\kappa^2) - 2(-\partial^2+\kappa^2)f\partial + 2\partial f(-\partial^2+\kappa^2) - 4\kappa^2f', \end{align*} it follows that \begin{align*} - R f''' R = f'R - 2 R qf' R + Rf' - 2f\partial R - 2R[\partial,qf]R + 2 R\partial f - 4\kappa^2 Rf'R. \end{align*} Noting, for example, that $$ g'(x) = \langle\delta_x, [\partial,R]\delta_x\rangle, $$ the lemma then follows by considering the diagonal of the associated integral kernel. \end{proof} \begin{proof}[Proof of Proposition~\ref{P:Intro rho}] By \eqref{R resolvent}, $$ \tfrac12\int e^{-2\kappa|x-y|} q(y)\,dy = 2\kappa [R_0(2\kappa) q](x). $$ Combined with Proposition~\ref{P:diffeo}, this shows $\rho\in H^1({\mathbb{R}})$. Next we write $$ \rho(x) = 2\kappa^2 \bigl[g - \tfrac1{2\kappa} + \tfrac1{\kappa} R_0(2\kappa) q\bigr](x) - \tfrac{2\kappa^2}{g(x)}[g(x)-\tfrac1{2\kappa}]^2 . $$ The second summand belongs to $L^1({\mathbb{R}})$ by Proposition~\ref{P:diffeo}; thus it remains to consider the first summand. To this end, we use \eqref{E:g series} and \eqref{R I2} to obtain \begin{align} \int \bigl[g - \tfrac1{2\kappa} + \tfrac1{\kappa} R_0(2\kappa) q\bigr](x) f(x)\,dx &= \sum_{\ell=2}^\infty (-1)^\ell \tr\Bigl\{\sqrt{R_0} f \sqrt{R_0} \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \Bigr\} \notag\\ &\leq \| f \|_{L^\infty} \Bigl\|\sqrt{R_0}\Big\|_{op}^2 \Bigl\| \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr\|^2_{{\mathfrak{I}}_2} \sum_{\ell=2}^\infty \delta^{\ell-2},\label{E:L1 est} \end{align} from which we may conclude that $\rho\in L^1({\mathbb{R}})$. Note that the arguments just presented actually show that $q\mapsto\rho$ is real analytic as a mapping of $B_\delta$ into $L^1\cap H^1$. To show convexity at fixed $x$, we compute derivatives. 
As in \eqref{O35pp}, the resolvent identity guarantees that \begin{align}\label{drho} d[\rho(x)]\bigr|_q (f) = \tfrac{-1}{2g(x)^2} \int G(x,y) f(y) G(y,x) \,dy + \tfrac12 [e^{-2\kappa|\cdot|} * f ](x) \end{align} and thence \begin{align}\label{ddrho} d^2[\rho(x)]\bigr|_q (f,h) = {}&{}\tfrac{-1}{g(x)^3} \iint G(x,y) f(y) G(y,x) G(x,z) h(z) G(z,x) \,dy\,dz \\ & + \tfrac{1}{g(x)^2} \iint G(x,y) f(y) G(y,z) h(z) G(z,x) \,dy\,dz. \notag \end{align} Multiplying through by $g(x)^3>0$ we then see that the convexity of $\rho(x)$ is reduced to the assertion that $$ \bigl\langle \sqrt{R} \delta_x,\sqrt{R} \delta_x\bigr\rangle \bigl\langle \sqrt{R} \delta_x,\sqrt{R}fRf\sqrt{R}\,\sqrt{R} \delta_x\bigr\rangle - \bigl\langle \sqrt{R} \delta_x,\sqrt{R}f\sqrt{R}\, \sqrt{R} \delta_x\bigr\rangle^2 \geq 0 $$ for all $f\in H^{-1}({\mathbb{R}})$. (Here inner products are taken in $L^2({\mathbb{R}})$, which contains $\sqrt{R} \delta_x$.) The veracity of this assertion now follows immediately from the Cauchy--Schwarz inequality. Specializing \eqref{drho} to $q\equiv 0$ and substituting \eqref{R resolvent} shows \begin{align}\label{d rho 0} \frac{\delta\rho(x)}{\delta q}\biggr|_{q\equiv0} = 0. \end{align} Note also that $\rho(x)\equiv0$ when $q\equiv 0$. In this way the convexity of $q\mapsto \rho(x)$ guarantees its positivity. Let us now turn our attention to $\alpha(\kappa;q)$. In view of the preceding, we already know that this is a non-negative, convex, and real-analytic function of $q\in B_\delta$. It remains to show strict convexity, \eqref{alpha as I2}, and \eqref{O37}. As we have already noted, $\rho(x)\equiv 0$ when $q\equiv 0$. Thus \eqref{O37} holds trivially in this case. In general \eqref{O37} follows easily from \begin{align}\label{delta alpha} \frac{\delta \alpha}{\delta q} = \tfrac{1}{2\kappa} - g(x) = \frac{\delta\ }{\delta q}\Bigl[ - \log\det_2\left( 1+ \sqrt{R_0}\, q \, \sqrt{R_0} \right)\Bigr], \end{align} which we will now verify. 
From \eqref{drho} and Lemma~\ref{L:D 1/g}, \begin{align*} \frac{d\ }{ds}\biggr|_{s=0} \alpha(\kappa; q+sf) &= - \iint \frac{G(y,x)G(x,y)}{2g(x)^2} f(y)\,dx \,dy+ \tfrac1{2\kappa}\int f(y)\,dy \\ &= \int \Bigl[\tfrac{1}{2\kappa} - g(x)\Bigr] f(x)\,dx, \end{align*} at least for Schwartz functions $f$. This proves the first equality in \eqref{delta alpha}. From \eqref{det series} and \eqref{E:g series}, we have \begin{align*} \frac{d\ }{ds}\biggr|_{s=0} - \log&\det_2\left( 1+ \sqrt{R_0}\, (q+sf) \, \sqrt{R_0} \right) \\ &= \sum_{\ell=2}^\infty (-1)^\ell\tr\Bigl\{ \Bigl(\sqrt{R_0}\, q \, \sqrt{R_0} \Bigr)^{\ell-1} \sqrt{R_0}\, f \, \sqrt{R_0} \Bigr\} \\ &= \int \Bigl[\tfrac{1}{2\kappa} - g(x)\Bigr] f(x)\,dx. \end{align*} This verifies the second equality in \eqref{delta alpha} and so finishes the proof of \eqref{O37}. Toward verifying strict convexity and \eqref{alpha as I2}, let us first compute the Hessian of $\alpha(\kappa)$ at $q\equiv 0$. From \eqref{ddrho} and \eqref{R resolvent}, we have \begin{align} d^2\alpha\bigr|_{q\equiv 0} (f,f) &= -\tfrac{1}{2\kappa} \iiint e^{-2\kappa|x-y| - 2\kappa|x-z|} f(y)f(z) \,dx\,dy\,dz \notag\\ &\quad + \tfrac{1}{2\kappa} \iiint e^{-\kappa|x-y| - \kappa|y-z| - \kappa|z-x|} f(y) f(z) \,dx\,dy\,dz \label{delta2alpha}\\ &= \tfrac{1}{4\kappa^2} \iint e^{-2\kappa|y-z|} f(y) f(z) \,dy\,dz = \tfrac{1}{\kappa} \int \frac{|\hat f(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi. \notag \end{align} As $\alpha(\kappa)$ is real analytic, this immediately shows strict convexity and \eqref{alpha as I2} in some neighbourhood of $q\equiv 0$; however, to verify that the size $\delta$ of this neighbourhood may be taken independent of $\kappa$, we must adequately control the modulus of continuity of the Hessian. 
From \eqref{inverse input} and the first identity in \eqref{delta alpha}, we have $$ \Bigl|\Bigl( d^2\alpha\bigr|_{q\equiv 0} - d^2\alpha\bigr|_{q}\Bigr)(f,f)\Bigr| \lesssim \delta\kappa^{-1} \int \frac{|\hat f(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi, $$ thereby settling the matter. \end{proof} \section{Dynamics}\label{S:3} The natural Poisson structure on $\mathcal S({\mathbb{R}})$ or $C^\infty({\mathbb{R}}/{\mathbb{Z}})$ associated to the KdV equation is \begin{align}\label{3.0} \{ F, G \} = \int \frac{\delta F}{\delta q}(x) \biggl(\frac{\delta G}{\delta q}\biggr) '(x) \,dx . \end{align} This structure is degenerate: $q\mapsto \int q$ is a Casimir (i.e. Poisson commutes with everything). It is common practice to say that this is the Poisson bracket associated to the (degenerate) almost complex structure $J=\partial_x$ and the $L^2$ inner product. We shall not need such notions; however, they do suggest a very convenient notation for the time-$t$ flow under the Hamiltonian $H$: $$ q(t) = e^{t J\nabla\! H} q(0). $$ Note that under our sign conventions, $$ \frac{d\ }{dt}\ F\circ e^{t J\nabla\! H} = \{ F, H \} \circ e^{t J\nabla\! H}. $$ As two simple examples, we note that for $$ P:=\int \tfrac12 |q(x)|^2\,dx \qtq{and} H_\text{KdV} := \int \tfrac12 |q'(x)|^2 + q(x)^3 \,dx, $$ we have \begin{align}\label{trivial delta} \frac{\delta P}{\delta q}(x) = q(x) \qtq{and} \frac{\delta H_\text{KdV}}{\delta q}(x) = -q''(x) + 3q(x)^2. \end{align} Thus, the flow associated to $P$ is precisely $\partial_t q = \partial_x q$, which is to say, $P$ represents momentum (= generator of translations); the flow associated to $H_\text{KdV}$ is precisely the KdV equation. Note that $H_\text{KdV}$ and $P$ Poisson commute: $$ \{H_\text{KdV},P\}=\int \bigl(-q''(x)+3q(x)^2\bigr) q'(x)\,dx = \int \bigl(-\tfrac12 q'(x)^2 + q(x)^3\bigr)' \,dx =0. $$ This simultaneously expresses that the KdV flow conserves $P$ and that $H_\text{KdV}$ is conserved under translations. 
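For the reader's convenience, let us spell out the latter identification (a routine computation): taking $F=q(x)$ in the relation $\frac{d\ }{dt}\,F\circ e^{tJ\nabla\! H} = \{F,H\}\circ e^{tJ\nabla\! H}$ shows that the flow generated by a Hamiltonian $H$ is $\partial_t q = \partial_x \frac{\delta H}{\delta q}$. Combining this with \eqref{trivial delta},

```latex
\begin{align*}
\partial_t q = \partial_x \frac{\delta H_\text{KdV}}{\delta q}
  = \partial_x \bigl( -q'' + 3 q^2 \bigr)
  = -q''' + 6 q q' ,
\end{align*}
```

which is precisely \eqref{KdV}; in the same way, $\partial_t q = \partial_x \frac{\delta P}{\delta q} = q'$ is the generator of translations.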
Moreover, the two flows commute: $$ e^{s J\nabla\! P} \circ e^{t J\nabla\! H_\text{KdV}} = e^{t J\nabla\! H_\text{KdV}} \circ e^{s J\nabla\! P} \qtq{for all} s,t\in{\mathbb{R}}, $$ at least as mappings of Schwartz space. The claim that the KdV flow commutes with translations is without controversy; nonetheless, it is important for what follows to see that it stems precisely from the vanishing of the Poisson bracket. Fortunately, by restricting our attention to Schwartz-space solutions, we may simply apply the standard arguments from differential geometry; see, for example, \cite[\S39]{MR0997295}. We will also consider one more Hamiltonian, namely, \begin{align}\label{H kappa defn} H_\kappa := - 16 \kappa^5 \alpha(\kappa) + 2 \kappa^2 \int q(x)^2\,dx \end{align} which, formally at least, converges to $$ H_\text{KdV} := \int \tfrac12 |q'(x)|^2 + q(x)^3 \,dx $$ as $\kappa\to\infty$. In due course, we will see that $H_\kappa$ leads to a well-posed flow on $H^{-1}$ and that it Poisson commutes with both $P$ and $H_\text{KdV}$, at least as a functional on Schwartz space. For the moment, however, let us describe the evolution of the diagonal Green's function under the KdV flow. \begin{prop}\label{L:5.2} Given $\delta>0$, there is a $\delta_0>0$ so that for every Schwartz solution $q(t)$ to KdV with initial data $q(0)\in B_{\delta_0}$, we have \begin{align}\label{prop small} \sup_{t\in{\mathbb{R}}}\| q(t)\|_{H^{-1}({\mathbb{R}})} \leq \delta. 
\end{align} Moreover, for each $\kappa\geq 1$, the quantities $g(t,x)=g(x;\kappa,q(t))$, $\rho(t,x)=\rho(x;\kappa,q(t))$, and $\alpha(\kappa;q(t))$ obey \begin{gather} \tfrac{d\ }{dt}\, g(t,x) = -2 q'(t,x) g(t,x) + 2 q(t,x) g'\!(t,x) - 4\kappa^2 g'\!(t,x) \label{E:l5.1c}\\ \tfrac{d\ }{dt} \, \tfrac{1}{2g(t,x)} = \Bigl( \tfrac{q(t,x)}{g(t,x)} - \tfrac{2\kappa^2}{g(t,x)} + 4\kappa^3\Bigr)' \label{E:l5.1g} \\ \tfrac{d\ }{dt} \rho(t,x) = \Bigl(\tfrac32 \bigl[e^{-2\kappa|\cdot|}* q^2\bigr](t,x) + 2q(t,x)\bigl[ \kappa - \tfrac{1}{2g(t,x)}\bigr] - 4\kappa^2\rho(t,x) \Bigr)' \label{E:l5.1h} \\ \tfrac{d\ }{dt} \alpha(\kappa;q(t)) = 0. \label{E:l5.1z} \end{gather} \end{prop} \begin{proof} Without loss of generality, we may take $\delta$ as small as we wish. We shall require that $\delta$ meets the requirements of Propositions~\ref{P:diffeo},~\ref{P:elliptic}, and~\ref{P:Intro rho}. As an initial choice, we then set $\delta_0=\tfrac12\delta$. This guarantees that these propositions are all applicable to $q(t)$ for some open interval of times containing $t=0$. (Schwartz solutions are necessarily continuous in $H^{-1}({\mathbb{R}})$.) We will show below that equations \eqref{E:l5.1c}--\eqref{E:l5.1z} are valid on this time interval. But then, choosing $\kappa=1$ in \eqref{alpha as I2} and \eqref{E:l5.1z}, we obtain $$ \| q(t) \|_{H^{-1}} \lesssim \| q(0) \|_{H^{-1}} $$ on this interval. Thus we see that \eqref{prop small} holds globally in time, after updating our choice of $\delta_0$, if necessary. It remains to show that the stated differential equations apply to Schwartz solutions whose $H^{-1}$ norm is small enough that the results of Section~\ref{S:2} apply. We begin the proof in earnest after one minor preliminary: by taking an $h$ derivative in \eqref{translation identity} and using the resolvent identity, we have \begin{align}\label{translation identity'} g'(x; q) = - \int G(x,y) q'(y) G(y,x)\,dy.
\end{align} By the resolvent identity, then Lemma~\ref{L:G ibp}, and then \eqref{translation identity'}, \begin{align*} \frac{d\ }{dt} g(&x; q(t)) = - \int G(x,y) \bigl[ -q'''(t,y) + 6q(t,y)q'(t,y) \bigr] G(y,x)\,dy \\ &= - 2 q'(t,x)g(x; q(t)) + 2 q(t,x) g'(x; q(t)) + 4\kappa^2 \int G(x,y) q'(t,y) G(y,x)\,dy \\ &= - 2 q'(t,x)g(x; q(t)) + 2 q(t,x) g'(x; q(t)) - 4\kappa^2 g'(x;q(t)). \end{align*} This proves \eqref{E:l5.1c}. Alternately, \eqref{E:l5.1c} can be derived from the Lax pair formulation of KdV; specifically, \begin{align*} \frac{d\ }{dt} \bigl(L(t)+\kappa^2\bigr)^{-1} = \bigl[ P(t), \bigl(L(t)+\kappa^2\bigr)^{-1} \bigr] . \end{align*} We leave the details to the interested reader. Equation \eqref{E:l5.1g} follows immediately from \eqref{E:l5.1c} and the chain rule, while \eqref{E:l5.1h} is simply a combination of \eqref{E:l5.1g} and \eqref{KdV}. Lastly, \eqref{E:l5.1z} follows from integrating \eqref{E:l5.1h} in $x$ over the whole line. \end{proof} \begin{remark} Combining \eqref{E:l5.1a} and \eqref{E:l5.1c} yields \begin{align} \tfrac{d\ }{dt}\, g(x) &= \Bigl( 2g''(x) -6q(x)g(x) - 12\kappa^2g(x) + 6\kappa \Bigr)' , \label{E:l5.1c'} \end{align} from which we see that there is also a microscopic conservation law for the KdV flow associated to $g(x)$. Ultimately, however, this turns out to be a consequence of the conservation of $\alpha(\kappa)$; specifically, we have $$ \frac{d\ }{d\kappa} \alpha(\kappa) = - 2\kappa \int g(x) - \tfrac{1}{2\kappa} + \tfrac{1}{4\kappa^3} q(x) \,dx. $$ \end{remark} \begin{prop}\label{P:H kappa} Fix $\kappa\geq 1$. The Hamiltonian evolution induced by $H_\kappa$ is \begin{align}\label{H kappa flow q} \tfrac{d\ }{dt} q(x) = 16\kappa^5 g'(x;\kappa) + 4\kappa^2 q'(x). \end{align} This flow is globally well-posed for initial data in $B_\delta$, for $\delta>0$ small enough (independent of $\kappa$), and conserves $\alpha(\varkappa)$ for any $\varkappa\geq 1$. 
Moreover, in the case of Schwartz-class initial data, the solution is Schwartz-class for all time, the associated diagonal Green's function evolves according to \begin{align}\label{H kappa flow g} \tfrac{d\ }{dt} \, \tfrac{1}{2g(x;\varkappa)} &= - \tfrac{4\kappa^5}{\kappa^2-\varkappa^2} \Bigl( \tfrac{g(x;\kappa)}{g(x;\varkappa)} -\tfrac{\varkappa}{\kappa} \Bigr)' + 4\kappa^2 \Bigl( \tfrac{1}{2g(x;\varkappa)} - \varkappa \Bigr)' \quad\text{if $\varkappa\neq \kappa$}, \end{align} and the flow commutes with that of $H_\text{KdV}$. \end{prop} \begin{proof} From \eqref{delta alpha} and \eqref{trivial delta} we see that $$ \frac{\delta H_\kappa}{\delta q} = - 16 \kappa^5\bigl[\tfrac{1}{2\kappa} - g(x;\kappa,q)\bigr] + 4 \kappa^2 q(x) , $$ from which \eqref{H kappa flow q} immediately follows. Rewriting \eqref{H kappa flow q} as the integral equation $$ q(t,x) = q(0,x+4\kappa^2 t) + \int_0^t 16\kappa^5 g'\bigl(x+4\kappa^2(t-s);\kappa,q(s)\bigr) \,ds , $$ we see that local well-posedness follows by Picard iteration and the estimate $$ \bigl\| g'(x,q) - g'(x,\tilde q) \bigr\|_{H^{-1}} \lesssim \bigl\| g(x,q) - g(x,\tilde q) \bigr\|_{H^{1}} \lesssim \| q -\tilde q \|_{H^{-1}}, $$ which in turn follows from the diffeomorphism property. Global well-posedness follows from local well-posedness, once we prove that $\alpha(\varkappa)$ is conserved, since we may then use \eqref{alpha as I2} to guarantee that the solution remains small in $H^{-1}$. (This argument appeared already in Proposition~\ref{L:5.2}.) Moreover, because the problem is $H^{-1}$-locally well-posed, it suffices to verify conservation of $\alpha(\varkappa)$ just in the case of Schwartz initial data. Note that \eqref{g stronger mapping} shows that solutions with Schwartz initial data remain in Schwartz class. So let us consider a Schwartz solution $q(t)$ to \eqref{H kappa flow q} and endeavor to prove conservation of $\alpha(\varkappa)$. 
Actually, it suffices to prove \eqref{H kappa flow g}, because conservation of $\alpha(\varkappa)$ follows from this and \eqref{H kappa flow q}. By the resolvent identity and \eqref{H kappa flow q}, \begin{align*} \tfrac{d\ }{dt} \tfrac{1}{2 g(t,x;\varkappa)} ={} & \tfrac{8\kappa^5}{g(t,x;\varkappa)^2} \int G(x,y;\varkappa,q(t)) g'(t,y;\kappa) G(y,x;\varkappa,q(t))\,dy \\ & + \tfrac{2\kappa^2}{g(t,x;\varkappa)^2} \int G(x,y;\varkappa,q(t)) q'(t,y) G(y,x;\varkappa,q(t))\,dy. \end{align*} From here we substitute the following rewriting of \eqref{E:l5.1a} $$ 4(\kappa^2-\varkappa^2) g'(y;\kappa) = -\bigl[ - g'''(y;\kappa) + 2\bigl(q(y)g(y;\kappa)\bigr)' + 2 q(y)g'(y;\kappa) + 4\varkappa^2g'(y;\kappa)\bigr] $$ into the first term and use Lemma~\ref{L:G ibp}, while for the second term we employ \eqref{translation identity'}. In this way, we deduce that \begin{align*} \tfrac{d\ }{dt} \tfrac{1}{2 g(t,x;\varkappa)} &= - \tfrac{4\kappa^5}{(\kappa^2-\varkappa^2) g(t,x;\varkappa)^2} \bigl[ g'(t,x;\kappa)g(t,x;\varkappa) - g(t,x;\kappa)g'(t,x;\varkappa) \bigr] \\ &\qquad - \tfrac{2\kappa^2}{g(t,x;\varkappa)^2} g'(t,x;\varkappa), \end{align*} which agrees with \eqref{H kappa flow g}. Lastly, by \eqref{H kappa defn} and Proposition~\ref{L:5.2}, \begin{align*} \{ H_\kappa, H_\text{KdV} \} = - 16 \kappa^5 \{ \alpha(\kappa), H_\text{KdV} \} + 4 \kappa^2 \{ P, H_\text{KdV} \} = 0, \end{align*} which shows that the $H_\kappa$ and $H_\text{KdV}$ flows commute, at least as mappings on Schwartz space. \end{proof} \section{Equicontinuity}\label{S:4} Let us first recall the meaning of equicontinuity: \begin{definition} A subset $Q$ of $H^s$ is said to be \emph{equicontinuous} if \begin{gather} q(x+h) \to q(x) \quad\text{in $H^s$ as $h\to 0$, uniformly for $q\in Q$.} \label{E:equi1} \end{gather} \end{definition} This definition works in great generality. For $H^s$ spaces, it is also common to define equicontinuity as tightness of the Fourier transform. 
The two approaches are easily reconciled, as our next lemma shows. \begin{lemma}\label{L:equi 1} Fix $-\infty < \sigma < s <\infty$. Then:\\ (i) A bounded subset $Q$ of $H^s({\mathbb{R}})$ is equicontinuous in $H^s({\mathbb{R}})$ if and only if \begin{gather} \int_{|\xi|\geq \kappa} |\hat q(\xi)|^2 (\xi^2+4)^s \,d\xi \to 0 \qtq{as $\kappa\to \infty$, uniformly for $q\in Q$.} \label{E:equi2} \end{gather} (ii) A sequence $q_n$ is convergent in $H^s({\mathbb{R}})$ if and only if it is convergent in $H^\sigma({\mathbb{R}})$ and equicontinuous in $H^s({\mathbb{R}})$. \end{lemma} \begin{proof} As $Q$ is bounded and \begin{align*} \int |e^{i\xi h}-1|^2 |\hat q(\xi)|^2 (\xi^2+4)^s \,d\xi &\lesssim \kappa^2 h^2 \int |\hat q(\xi)|^2 (\xi^2+4)^s \,d\xi \\ &\qquad + \int_{|\xi|>\kappa} |\hat q(\xi)|^2 (\xi^2+4)^s \,d\xi, \end{align*} we see that \eqref{E:equi2} implies \eqref{E:equi1}. To prove the converse, we note that \begin{align*} \int |e^{i\xi h}-1|^2\, \kappa e^{-2\kappa|h|}\,dh &= \tfrac{2\xi^2}{\xi^2+4\kappa^2} \gtrsim 1 - \chi_{[-\kappa,\kappa]}(\xi) \end{align*} and hence \begin{align*} \int\, \| q(x+h) - q(x) \|_{H^s({\mathbb{R}})}^2 \kappa e^{-2\kappa|h|}\,dh \gtrsim \int_{|\xi|>\kappa} |\hat q(\xi)|^2 (\xi^2+4)^s \,d\xi. \end{align*} Let us now turn attention to (ii). As the forward implication is trivial, we need only consider sequences $q_n$ that are convergent in $H^\sigma({\mathbb{R}})$ and equicontinuous in $H^s({\mathbb{R}})$. But then writing \begin{align*} \int |\hat q_n(\xi) - \hat q_m(\xi)|^2 (\xi^2+4)^s \,d\xi &\leq (\kappa^2+4)^{s-\sigma} \int |\hat q_n(\xi) - \hat q_m(\xi)|^2 (\xi^2+4)^\sigma \,d\xi \\ &\qquad + \int_{|\xi|>\kappa} |\hat q_n(\xi) - \hat q_m(\xi)|^2 (\xi^2+4)^s \,d\xi \end{align*} and employing \eqref{E:equi2}, we see that the sequence is Cauchy in $H^{s}({\mathbb{R}})$ and so convergent there also. 
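We also note, for completeness, that the exponential integral used in the converse direction is elementary: since $|e^{i\xi h}-1|^2 = 2-2\cos(\xi h)$ and $\int \cos(\xi h)\, \kappa e^{-2\kappa|h|}\,dh = \tfrac{4\kappa^2}{\xi^2+4\kappa^2}$, we have
$$
\int |e^{i\xi h}-1|^2\, \kappa e^{-2\kappa|h|}\,dh = 2 - \tfrac{8\kappa^2}{\xi^2+4\kappa^2} = \tfrac{2\xi^2}{\xi^2+4\kappa^2}.
$$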
\end{proof} It is now easy to see that equicontinuity in $H^{-1}({\mathbb{R}})$ is readily accessible through the conserved quantity $\alpha(\kappa;q)$: \begin{lemma}\label{L:equi 2} A subset $Q$ of $B_\delta$ is equicontinuous in $H^{-1}({\mathbb{R}})$ if and only if \begin{gather} \kappa \alpha(\kappa;q) \to 0 \quad\text{as $\kappa\to \infty$, uniformly for $q\in Q$.} \label{E:equi3} \end{gather} \end{lemma} \begin{proof} By virtue of \eqref{alpha as I2}, it suffices to show that $Q$ is equicontinuous in $H^{-1}({\mathbb{R}})$ if and only if \begin{gather} \lim_{\kappa\to\infty} \ \sup_{q\in Q}\ \int_{{\mathbb{R}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2} \,d\xi = 0. \label{E:equi3'} \end{gather} That \eqref{E:equi3'} implies \eqref{E:equi2} and hence equicontinuity follows immediately from \begin{align*} \int_{|\xi|\geq \kappa} \frac{|\hat q(\xi)|^2}{\xi^2+4} \,d\xi \lesssim \int_{{\mathbb{R}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2} \,d\xi . \end{align*} On the other hand, \eqref{E:equi2} implies \eqref{E:equi3'} by virtue of the boundedness of $Q$ and \begin{align*} \int \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2} \,d\xi &\lesssim \tfrac{\varkappa^2}{\kappa^2} \int \frac{|\hat q(\xi)|^2}{\xi^2+4} \,d\xi + \int_{|\xi|>\varkappa} \frac{|\hat q(\xi)|^2 \,d\xi}{\xi^2+4}. \qedhere \end{align*} \end{proof} From the preceding lemma and the conservation of $\alpha(\kappa)$ we readily deduce the following: \begin{prop}\label{P:equi} Let $Q\subset B_\delta$ be a set of Schwartz functions that is equicontinuous in $H^{-1}({\mathbb{R}})$. Then \begin{align}\label{Q star} Q^* = \bigl\{ e^{J\nabla(t H_\text{KdV} + s H_\kappa)} q : q\in Q,\ t,s\in {\mathbb{R}},\text{ and } \kappa\geq 1 \bigr\} \end{align} is equicontinuous in $H^{-1}({\mathbb{R}})$. By virtue of this, \begin{align}\label{uniform to q} 4\kappa^3\bigl[ \tfrac1{2\kappa} - g(x;\kappa,q) \bigr] \to q \quad\text{in $H^{-1}({\mathbb{R}})$ as $\kappa\to\infty$}, \end{align} uniformly for $q\in Q^*$. 
\end{prop} \begin{proof} By Lemma~\ref{L:equi 2} and \eqref{alpha as I2}, the boundedness and equicontinuity of $Q$ guarantees that $\alpha(\kappa;q)$ is uniformly bounded on $Q$ and that $$ \lim_{\kappa\to\infty} \kappa \alpha(\kappa;q) = 0 \quad\text{uniformly for $q\in Q$.} $$ But then since $\alpha(\kappa;q)$ is conserved under these flows, we may reverse this reasoning to deduce that $Q^*$ is equicontinuous as well. Looking back to \eqref{E:L1 est}, \eqref{R I2}, and \eqref{alpha as I2}, we have $$ \kappa^3 \bigl\| \tfrac1{2\kappa} - g(\,\cdot\;;\kappa,q) - \tfrac1{\kappa} R_0(2\kappa) q\bigr\|_{L^1} \lesssim \kappa \alpha(\kappa;q), $$ which converges to zero as $\kappa\to\infty$ uniformly for $q\in Q^*$ by \eqref{E:equi3}. In this way, the proof of \eqref{uniform to q} is reduced to the simple calculation $$ \| 4\kappa^2 R_0(2\kappa)q - q \|_{H^{-1}}^2 = \int \frac{\xi^4 |\hat q(\xi)|^2}{(\xi^2+4\kappa^2)^2}\,\frac{d\xi}{\xi^2+4} \leq \int \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi $$ and \eqref{E:equi3'}. \end{proof} \section{Well-posedness}\label{S:5} \begin{theorem}\label{T:converge} Let $q_n(t)$ be a sequence of Schwartz solutions to \eqref{KdV} on the line and fix $T>0$. If $q_n(0)$ converges in $H^{-1}({\mathbb{R}})$ then so does $q_n(t)$, uniformly for $t\in[-T,T]$. \end{theorem} \begin{proof} Let us first reduce to the case $q_n(0)\in B_\delta$ for any fixed $\delta>0$, which is required in order to apply many of the results of the previous sections. This is easily handled by a simple scaling argument: if $q(t,x)$ is a Schwartz solution to \eqref{KdV}, then so is \begin{align}\label{q scaling} q_\lambda(t,x) = \lambda^2 q(\lambda^3 t, \lambda x) \end{align} for any $\lambda>0$; moreover, \begin{align}\label{H-1 scaling} \| q_\lambda(0) \|_{H^{-1}({\mathbb{R}})}^2 = \lambda \int \frac{|\hat q(0,\xi)|^2\,d\xi}{\xi^2+4\lambda^{-2}}, \end{align} which converges to zero as $\lambda\to 0$. 
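The identity \eqref{H-1 scaling} is immediate from $\widehat{q_\lambda}(0,\xi) = \lambda\, \hat q(0,\xi/\lambda)$ together with the change of variables $\xi=\lambda\eta$:
$$
\| q_\lambda(0) \|_{H^{-1}({\mathbb{R}})}^2 = \int \frac{\lambda^2\, |\hat q(0,\xi/\lambda)|^2}{\xi^2+4}\,d\xi = \lambda \int \frac{|\hat q(0,\eta)|^2\,d\eta}{\eta^2+4\lambda^{-2}} .
$$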
Although it is incidental to the current proof, let us note here that \begin{equation}\label{G scaling} \begin{gathered} G(x,y;\kappa,q_\lambda) = \lambda^{-1} G(\lambda x,\lambda y;\lambda^{-1}\kappa,q),\\ \rho(x;\kappa,q_\lambda) =\lambda\rho(\lambda x;\lambda^{-1}\kappa,q),\qtq{and} \alpha(\kappa;q_\lambda) = \alpha(\lambda^{-1}\kappa;q). \end{gathered} \end{equation} By commutativity of the flows, we have \begin{align*} q_n(t) = e^{t J\nabla(H_\text{KdV} - H_\kappa)} \circ e^{t J\nabla H_\kappa} q_n(0). \end{align*} Thus, setting $Q=\{q_n(0)\}$ and defining $Q^*$ as in \eqref{Q star}, we have \begin{align} \sup_{|t|\leq T} \| q_n(t) - q_m(t) \|_{H^{-1}} &\leq \sup_{|t|\leq T} \| e^{t J\nabla H_\kappa} q_n(0) - e^{t J\nabla H_\kappa} q_m(0) \|_{H^{-1}} \label{diff 1}\\ &\qquad + 2 \sup_{q\in Q^*} \sup_{|t|\leq T} \| e^{tJ\nabla (H_\text{KdV} - H_\kappa)} q - q \|_{H^{-1}}. \notag \end{align} Note that $Q^*$ is equicontinuous in $H^{-1}({\mathbb{R}})$; this follows from Proposition~\ref{P:equi}. For fixed $\kappa$, the first term in RHS\eqref{diff 1} converges to zero as $n,m\to\infty$ due to the well-posedness of the $H_\kappa$ flow; see Proposition~\ref{P:H kappa}. Thus, it remains to prove that \begin{align}\label{56} \lim_{\kappa\to\infty} \sup_{q\in Q^*} \ \sup_{|t|\leq T}\ \| e^{tJ\nabla (H_\text{KdV} - H_\kappa)} q - q \|_{H^{-1}} =0. \end{align} We prove \eqref{56} by considering the reciprocal of the diagonal Green's function at some fixed energy. To this end, we fix $\varkappa\geq 1$ and adopt the following notations: given $q\in Q^*$ and $\kappa\geq \varkappa +1$, $$ q(t) := e^{tJ\nabla (H_\text{KdV} - H_\kappa)} q \qtq{and} g(t,x;\varkappa) := g(x;\varkappa,q(t)). $$ Note that $q(t)\in (Q^*)^*=Q^*$ for any $t\in{\mathbb{R}}$. 
Combining \eqref{E:l5.1g} and \eqref{H kappa flow g}, we obtain \begin{align*} \tfrac{d\ }{dt} \tfrac{1}{2g(t,x;\varkappa)} &= \Bigl\{ \tfrac{1}{g(t,x;\varkappa)} \Bigl( q(t,x) + \tfrac{4\kappa^5}{\kappa^2-\varkappa^2}\bigl[g(t,x;\kappa)-\tfrac{1}{2\kappa}\bigr] - \tfrac{4\varkappa^5}{\kappa^2-\varkappa^2} \bigl[ g(t,x;\varkappa) -\tfrac{1}{2\varkappa}\bigr]\Bigr)\Bigr\}' \end{align*} and thence \begin{align*} \bigl\| \tfrac{d\ }{dt} \bigl(\varkappa - \tfrac{1}{2g(t;\varkappa)}\bigr) \bigr\|_{H^{-2}} &\lesssim \bigl\| q(t,x) + 4\kappa^3\bigl[g(t,x;\kappa)-\tfrac{1}{2\kappa}\bigr]\bigr\|_{H^{-1}} \\ &\quad \ {} + \kappa \bigl\| g(t,x;\kappa)-\tfrac{1}{2\kappa}\bigr\|_{H^{-1}} + \kappa^{-2} \bigl\| g(t,x;\varkappa)-\tfrac{1}{2\varkappa}\bigr\|_{H^{-1}} \end{align*} uniformly for $q\in Q^*$ and $\kappa\geq \varkappa+1$. (The implicit constants here depend on $\varkappa$.) But then, by the fundamental theorem of calculus and Proposition~\ref{P:equi}, \begin{align} \lim_{\kappa\to\infty}\; \sup_{q\in Q^*}\ \sup_{|t|\leq T}\ \bigl\| \tfrac{1}{2g(t;\varkappa)} - \tfrac{1}{2g(0;\varkappa)} \bigr\|_{H^{-2}} =0. \end{align} In view of Lemma~\ref{L:equi 1}(ii), we may upgrade this convergence to \begin{align}\label{60} \lim_{\kappa\to\infty} \; \sup_{q\in Q^*} \ \sup_{|t|\leq T}\ \bigl\| \tfrac{1}{2g(t;\varkappa)} - \tfrac{1}{2g(0;\varkappa)} \bigr\|_{H^{1}} =0, \end{align} due to the equicontinuity of the set $$ E:= \Bigl\{ \varkappa - \tfrac{1}{2g(x;\varkappa,q(t))} \in H^1({\mathbb{R}}) : q\in Q^* \text{ and } t\in{\mathbb{R}}\Bigr\} $$ in $H^1({\mathbb{R}})$. This property of $E$ holds because, by the diffeomorphism property and the relation \eqref{translation identity}, it is equivalent to equicontinuity of $Q^*$. Lastly, the diffeomorphism property shows that \eqref{60} implies \eqref{56} and so completes the proof of Theorem~\ref{T:converge}. \end{proof} The line case of Theorem~\ref{T:main} follows from the next corollary. We then extend this to higher values of $s$. 
\begin{corollary}\label{C:1} The KdV equation is globally well-posed in $H^{-1}({\mathbb{R}})$ in the following sense: The solution map extends (uniquely) from Schwartz space to a jointly continuous map $$ \Phi:{\mathbb{R}}\times H^{-1}({\mathbb{R}})\to H^{-1}({\mathbb{R}}). $$ In particular, $\Phi$ has the group property: $\Phi(t+s)=\Phi(t)\circ \Phi(s)$. Moreover, each orbit $\{\Phi(t,q) : t\in{\mathbb{R}}\}$ is bounded and equicontinuous in $H^{-1}({\mathbb{R}})$. Concretely, \begin{align}\label{global bound} \sup_t \| q(t) \|_{H^{-1}({\mathbb{R}})} \lesssim \| q(0) \|_{H^{-1}({\mathbb{R}})} + \| q(0) \|_{H^{-1}({\mathbb{R}})}^3 . \end{align} \end{corollary} \begin{proof} Given $q\in H^{-1}({\mathbb{R}})$, we may define $\Phi(t,q)$ by choosing some sequence of Schwartz solutions $q_n(t)$ with $q_n(0)\to q$ in $H^{-1}({\mathbb{R}})$ and then set $$ \Phi(t,q)= \lim_{n\to\infty} q_n(t). $$ By virtue of Theorem~\ref{T:converge}, this limit exists in $H^{-1}({\mathbb{R}})$, it is independent of the sequence $q_n$, and the convergence is uniform on compact intervals of time. Now consider a sequence $q_n\to q \in H^{-1}({\mathbb{R}})$ and fix $T>0$. Theorem~\ref{T:converge} guarantees that there is a sequence of Schwartz solutions $\tilde q_n$ so that $$ \sup_{|t|\leq T} \| \tilde q_n(t) - \Phi(t,q_n) \|_{H^{-1}} \to 0 \qtq{as $n\to\infty$.} $$ But then $\tilde q_n(0) \to q$ in $H^{-1}$ and so Theorem~\ref{T:converge} implies $$ \sup_{|t|\leq T} \| \tilde q_n(t) - \Phi(t,q) \|_{H^{-1}} \to 0 \qtq{as $n\to\infty$.} $$ As each $\tilde q_n(t)$ is itself $H^{-1}({\mathbb{R}})$-continuous in time, this proves joint continuity of $\Phi$. As $\Phi$ is continuous, the group property on $H^{-1}({\mathbb{R}})$ is inherited from that on Schwartz space. For small initial data, boundedness and equicontinuity of orbits follows from conservation of $\alpha(\kappa)$, \eqref{alpha as I2}, and Lemma~\ref{L:equi 2}. 
In fact, this argument shows that $$ \sup_t \| q(t) \|_{H^{-1}({\mathbb{R}})} \lesssim \| q(0) \|_{H^{-1}({\mathbb{R}})} \qtq{for} q(0)\in B_\delta $$ and $\delta>0$ sufficiently small. Equicontinuity and \eqref{global bound} for large data then follow from the scaling transformation \eqref{q scaling}. \end{proof} \begin{corollary}\label{C:2} The KdV equation is globally well-posed in $H^{s}({\mathbb{R}})$ for all $s\geq -1$. \end{corollary} \begin{proof} In view of the preceding, it suffices to prove an analogue of Theorem~\ref{T:converge} in $H^s({\mathbb{R}})$. We shall content ourselves with the treatment of $s\in(-1,0)$ here, since we may do so in a simple and uniform manner; moreover, together with Corollary~\ref{C:1}, this covers all cases not previously known, namely, $s\in[-1,-\tfrac34)$. Given Schwartz solutions $q_n(t)$ to \eqref{KdV} with $q_n(0)$ convergent in $H^{s}({\mathbb{R}})$ and $T>0$, we may apply Theorem~\ref{T:converge} to obtain convergence of $q_n(t)$ in $H^{-1}({\mathbb{R}})$, uniformly for $t\in[-T,T]$. The goal is to upgrade this to uniform convergence in $H^s({\mathbb{R}})$. In view of Lemma~\ref{L:equi 1}, this amounts to demonstrating $H^s({\mathbb{R}})$-equicontinuity of the set $\{q_n(t):n\in{\mathbb{N}}\text{ and } t\in[-T,T]\}$. To prove equicontinuity, we employ the following trick we used in \cite{KVZ}: Integrating both sides of \eqref{alpha as I2} against the measure $\kappa^{2+2s} \,d\kappa$ over the interval $[\kappa_0,\infty)$, we obtain \begin{align}\label{int alpha} \int_{\kappa_0}^\infty \alpha(\kappa;q) \kappa^{2+2s} \,d\kappa \approx \int |\hat q(\xi)|^2 (\xi^2+4\kappa_0^2)^s \,d\xi, \end{align} where the implicit constants depend only on $s$. 
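Indeed, by \eqref{alpha as I2} and the Fubini--Tonelli theorem, \eqref{int alpha} reduces to the elementary estimate
$$
\int_{\kappa_0}^\infty \frac{\kappa^{1+2s}}{\xi^2+4\kappa^2}\,d\kappa \approx \bigl(\xi^2+4\kappa_0^2\bigr)^{s} \qtq{for} s\in(-1,0),
$$
which follows by treating the regions $\kappa\leq |\xi|+\kappa_0$ and $\kappa>|\xi|+\kappa_0$ separately.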
Notice that LHS\eqref{int alpha} is conserved by the flow and so, it follows that \begin{align}\label{int alpha;} \int |\hat q_n(t,\xi)|^2 (\xi^2+4\kappa_0^2)^s \,d\xi \approx \int |\hat q_n(0,\xi)|^2 (\xi^2+4\kappa_0^2)^s \,d\xi \end{align} uniformly for $n\in\mathbb{N}$ and $t\in{\mathbb{R}}$. As the initial data $q_n(0)$ are $H^s({\mathbb{R}})$-convergent, they are $H^s({\mathbb{R}})$-equicontinuous and so RHS\eqref{int alpha;} converges to zero as $\kappa_0\to\infty$ uniformly in $n$. But then LHS\eqref{int alpha;} converges to zero as $\kappa_0\to\infty$ uniformly in $n$, thus proving equicontinuity of $\{q_n(t) : n\in\mathbb{N}\text{ and } t\in{\mathbb{R}}\}$. \end{proof} \section{The periodic case}\label{S:periodic} With the exception of Section~\ref{S:2}, very little of substance changes in carrying over the arguments presented so far to the case of KdV with initial data in $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$. Nevertheless, there are several reasons why we chose not to present both geometries simultaneously (as in our prior work \cite{KVZ}). First and foremost, we avoid the necessity of continually interrupting the principal line of reasoning to discuss minor changes (often notational) associated to the two geometries. Secondly, in the non-periodic setting, the scaling transformation \eqref{q scaling} allows us effortlessly to focus attention on small solutions, which manifests in the appearance of $\delta$ throughout our arguments thus far. To overcome the lack of scaling-invariance in the periodic case, we follow the approach we used in \cite{KVZ}. Although we still maintain that this is the best solution, we can attest that it burdens the exposition considerably. Looking to \eqref{H-1 scaling} and \eqref{G scaling}, we see that rescaling $q$ transforms the parameter $\kappa$. Correspondingly, the smallness condition for $q$ can be replaced by a relation involving $\kappa$ and $q$. 
On the other hand, many formulas become tremendously ugly if we do not employ the simplifications made possible by requiring $\kappa\geq 1$. This reasoning leads to the following \emph{coupled} conditions on $q$ and $\kappa$ that we shall impose: \begin{equation}\label{periodic smallness} \kappa\geq 1 \qtq{and} \kappa^{-1/2} \| q \|_{H^{-1}({\mathbb{R}}/{\mathbb{Z}})} \leq \delta. \end{equation} Here $\delta>0$ remains our over-arching smallness parameter, whose value will be allowed to shrink as the argument progresses. Concomitant with this, for fixed $\kappa\geq 1$, we define \begin{equation}\label{periodic B delta} B_{\delta,\kappa} := \bigl\{ q\in H^{-1}({\mathbb{R}}/{\mathbb{Z}}) : \kappa^{-1/2} \| q \|_{H^{-1}({\mathbb{R}}/{\mathbb{Z}})} \leq \delta\bigr\}. \end{equation} To make the arguments as parallel as possible, we shall insist on working with a Lax operator $$ L = -\partial_x^2 + q(x) $$ (and its resolvent) acting on $L^2({\mathbb{R}})$ with periodic coefficients and \emph{not} as an operator on $L^2({\mathbb{R}}/{\mathbb{Z}})$. This deviates from our treatment in \cite{KVZ}. In light of our convention, $L$ is no longer a relatively Hilbert--Schmidt (or even relatively compact) perturbation of the case $q\equiv 0$. As many arguments in Section~\ref{S:2} were founded on \eqref{R I2}, which is the quantitative expression of this, those arguments do not automatically carry over to the periodic case. In Lemma~\ref{L:A.1} we obtain the key substitute for \eqref{R I2}. We will then show how to use this to obtain the analogues of the results from Section~\ref{S:2} in the periodic setting. By comparison, Section~\ref{S:3} is almost devoid of estimates (and those that do appear are easily adapted). Rather, it is preoccupied with identities that hold pointwise in space and so are immune to the ambient geometry.
Once we have proved \eqref{periodic alpha} below, everything in Section~\ref{S:4} carries over by simply replacing every instance of integration with respect to $\xi$ by summation over $\xi\in2\pi{\mathbb{Z}}$. The principal difficulty in transferring the proof of Theorem~\ref{T:converge} to the circle case is the absence of the scaling symmetry \eqref{q scaling}; we have already explained how this can be avoided. The only change needed for the treatment of the remaining results in Section~\ref{S:5}, namely, Corollaries~\ref{C:1} and~\ref{C:2}, is to employ \eqref{periodic alpha} whenever the original argument calls on \eqref{alpha as I2}. Let us turn now to the central matter at hand, namely, obtaining analogues of the principal results of Section~\ref{S:2} in the periodic setting. \begin{lemma}\label{L:A.1} Fix $\psi\in C^\infty_c({\mathbb{R}})$. If $q,f\in H^{-1}({\mathbb{R}}/{\mathbb{Z}})$, then \begin{align} \bigl\| \sqrt{R_0}\,q \sqrt{R_0} \bigr\|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})}^2 &\lesssim \kappa^{-1} \sum_{\xi\in2\pi{\mathbb{Z}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2},\label{A.1.1} \\ \bigl\| \sqrt{R_0}\, f\psi R_0 q \sqrt{R_0} \bigr\|_{{\mathfrak{I}}_1(L^2({\mathbb{R}}))} &\lesssim \kappa^{-1} \| f \|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})} \| q \|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})},\label{A.1.2} \end{align} both uniformly for $\kappa\geq 1$. \end{lemma} Note that ${\mathfrak{I}}_1(L^2({\mathbb{R}}))$ denotes the ideal of trace-class operators acting on the Hilbert space $L^2({\mathbb{R}})$. Here and below, we simply use trace-class as a notational convenience for denoting operators representable as a product of Hilbert--Schmidt operators: \begin{align}\label{I1 from I2} \| B \|_{{\mathfrak{I}}_1} = \inf \bigl\{ \|B_1\|_{{\mathfrak{I}}_2}\|B_2\|_{{\mathfrak{I}}_2} : B = B_1 B_2 \bigr\} . \end{align} For a proper discussion of trace-class, including the veracity of \eqref{I1 from I2}, see \cite{MR2154153}.
Before beginning the proof of Lemma~\ref{L:A.1}, we describe one more preliminary: Given $f\in L^2({\mathbb{R}})$ and $\theta\in[0,2\pi]$, we define $$ f_\theta(x) = \sum_{\xi\in2\pi{\mathbb{Z}}} \hat f(\xi+\theta) e^{ix(\xi+\theta)}, $$ which may be regarded as a jointly square-integrable function of $x\in[0,1]$ and $\theta\in[0,2\pi]$. Indeed, $$ \int_{{\mathbb{R}}} |f(x)|^2\,dx = \int_0^{2\pi} \| f_\theta\|_{L^2([0,1])}^2\,d\theta. $$ Moreover, any (pseudo)differential operator $L$ with $1$-periodic coefficients acts fibre-wise, which is to say it commutes with multiplication by any function of $\theta$. Note that what we describe here is simply the standard direct integral representation of a periodic operator (cf. \cite[\S XIII.16]{MR0493421}). \begin{proof}[Proof of Lemma~\ref{L:A.1}] As the operator appearing in \eqref{A.1.1} is self-adjoint, it suffices to take $f\in L^2({\mathbb{R}})$ and consider \begin{align*} \langle f, \sqrt{R_0} q \sqrt{R_0} \,f\rangle_{L^2} = \int_0^{2\pi} \langle f_\theta, {\mathcal M}_\theta q {\mathcal M}_\theta f_\theta \rangle\,d\theta, \end{align*} where ${\mathcal M}_\theta :L^2([0,1])\to L^2([0,1])$ is defined via $$ {\mathcal M}_\theta : \sum_{\xi\in2\pi{\mathbb{Z}}} c_\xi e^{ix(\xi+\theta)} \mapsto \sum_{\xi\in2\pi{\mathbb{Z}}} \frac{c_\xi e^{ix(\xi+\theta)}}{\sqrt{(\xi+\theta)^2+\kappa^2}}. $$ In this way, we see that $$ \| \sqrt{R_0} q \sqrt{R_0} \|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})} = \bigl\| \| {\mathcal M}_\theta q {\mathcal M}_\theta \|_{L^2([0,1])\to L^2([0,1])} \bigr\|_{L^\infty_\theta}. $$ The estimate \eqref{A.1.1} now follows by bounding operator norms by Hilbert--Schmidt norms and the equivalence \begin{align}\label{mcM norm} \| {\mathcal M}_\theta q {\mathcal M}_\theta \|_{{\mathfrak{I}}_2(L^2([0,1]))}^2 \approx \kappa^{-1} \sum_{\xi\in2\pi{\mathbb{Z}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2}, \end{align} which is valid uniformly for $\theta\in[0,2\pi)$. We turn now to \eqref{A.1.2}. 
From \eqref{basic commutators}, we find \begin{align*} \sqrt{R_0}\, f\psi R_0 q \sqrt{R_0}\, &= \sqrt{R_0}\, f\psi\langle x\rangle R_0 \langle x\rangle^{-1} q \sqrt{R_0} + \sqrt{R_0}\, f\psi \sqrt{R_0}\, A \sqrt{R_0}\, \langle x\rangle^{-1} q \sqrt{R_0} \\ \text{with}\quad A &= \sqrt{R_0}\, \bigl(\tfrac{x}{\langle x\rangle} \partial_x + \partial_x\tfrac{x}{\langle x\rangle}\bigr) \sqrt{R_0}\,. \end{align*} Evidently, $A$ is an $L^2({\mathbb{R}})$-bounded operator. From \eqref{R I2} we obtain \begin{align*} \bigl\| \sqrt{R_0}\, \langle x\rangle^{-1} q \sqrt{R_0} \bigr\|_{{\mathfrak{I}}_2(L^2({\mathbb{R}}))} &\lesssim \kappa^{-1/2} \bigl\| \langle x\rangle^{-1} q \bigr\|_{H^{-1}_\kappa({\mathbb{R}})} \lesssim \kappa^{-1/2} \bigl\| q \bigr\|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})} \end{align*} and similarly, \begin{align*} \bigl\| \sqrt{R_0}\, f\psi \sqrt{R_0} \bigr\|_{{\mathfrak{I}}_2(L^2({\mathbb{R}}))} + \bigl\| \sqrt{R_0}\, f\psi\langle x\rangle\sqrt{R_0} \bigr\|_{{\mathfrak{I}}_2(L^2({\mathbb{R}}))} \lesssim \kappa^{-1/2} \bigl\| f \bigr\|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})}. \end{align*} Combining the preceding immediately yields \eqref{A.1.2}. \end{proof} \begin{prop}\label{P:periodic 1} Let $q\in H^{-1}({\mathbb{R}}/{\mathbb{Z}})$. There is a unique self-adjoint operator $L$ acting on $L^2({\mathbb{R}})$ associated to the semi-bounded quadratic form $$ \psi \mapsto \int_{{\mathbb{R}}} |\psi'(x)|^2 + q(x) |\psi(x)|^2\,dx . $$ Furthermore, there exists $\delta>0$, so that if $q$ and $\kappa$ obey \eqref{periodic smallness}, then the resolvent $R:=(L+\kappa^2)^{-1}$ admits a continuous integral kernel $G(x,y;\kappa,q)$ given by the uniformly convergent series \begin{align}\label{E:periodic G} G(x,y;\kappa, q) = \tfrac{1}{2\kappa} e^{-\kappa|x-y|} + \sum_{\ell=1}^\infty (-1)^\ell \Bigl\langle \sqrt{R_0}\, \delta_x, \Bigl(\!\sqrt{R_0}\,q \sqrt{R_0}\Bigr)^\ell \sqrt{R_0}\,\delta_y\Bigr\rangle. 
\end{align} \end{prop} \begin{proof} Regarding the existence and uniqueness of $L$, we note that \eqref{A.1.1} guarantees that $q$ is an infinitesimally form bounded perturbation and then apply \cite[Theorem~X.17]{MR0493420}. This is the same argument used in the proof of Proposition~\ref{P:sa L}. Using Plancherel, it is easy to check that $x\mapsto \sqrt{R_0}\delta_x$ is H\"older-continuous as a map from ${\mathbb{R}}$ to $L^2({\mathbb{R}})$. Thus, convergence of the series \eqref{E:periodic G} and continuity of the result follow whenever $\sqrt{R_0}\,q \sqrt{R_0}$ is a contraction; this in turn follows from \eqref{periodic smallness} and \eqref{A.1.1} when $\delta$ is small enough. \end{proof} We define $g(x;\kappa,q)$ and $\rho(x;\kappa,q)$ exactly as in Section~\ref{S:2}; see \eqref{g defn} and \eqref{E:rho defn}. Let us now demonstrate their basic properties: \begin{prop}\label{P:periodic 2} There exists $\delta>0$, so that the following are true for all $\kappa\geq 1:$\\ (i) The mappings \begin{align}\label{periodic diffeos} q\mapsto g-\tfrac1{2\kappa} \qtq{and} q\mapsto \kappa-\tfrac1{2g} \end{align} are (real analytic) diffeomorphisms of $B_{\delta,\kappa}$ into $H^1({\mathbb{R}}/{\mathbb{Z}})$.\\ (ii) For every $q\in B_{\delta,\kappa}$ and every integer $s\geq 0$, \begin{align}\label{periodic stronger} \|g'(x)\|_{H^{s}({\mathbb{R}}/{\mathbb{Z}})} \lesssim_s \| q \|_{H^{s-1}({\mathbb{R}}/{\mathbb{Z}})} . \end{align} (iii) For every $q\in B_{\delta,\kappa}$, $\rho(x;q)$ is non-negative and in $H^1({\mathbb{R}}/{\mathbb{Z}})$. Moreover, \begin{align}\label{periodic alpha} \alpha(\kappa,q):=\int_0^1 \rho(x)\,dx \approx \kappa^{-1} \sum_{\xi\in 2\pi{\mathbb{Z}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2}, \end{align} uniformly for $\kappa\geq 1$ and $q\in B_{\delta,\kappa}$. \end{prop} \begin{proof} First we show that $g\in H^{1}({\mathbb{R}}/{\mathbb{Z}})$. That it is periodic is self-evident from \eqref{E:periodic G}.
To estimate its norm, we pick $\psi\in C^\infty_c({\mathbb{R}})$ so that $$ \sum_{k\in{\mathbb{Z}}} \psi(x-k) \equiv 1. $$ The utility of this partition of unity for us stems from the duality relation $$ \| h \|_{H^{1}_\kappa({\mathbb{R}}/{\mathbb{Z}})} = \sup \biggl\{ \int_{\mathbb{R}} h(x) \psi(x) f(x)\,dx : f\in C^\infty({\mathbb{R}}/{\mathbb{Z}}) \text{ and }\|f\|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})}\leq 1\biggr\}. $$ Now given $f\in C^\infty({\mathbb{R}}/{\mathbb{Z}})$, from \eqref{E:periodic G} we obtain that \begin{align}\label{dual g} \int \bigl[g(x) -\tfrac1{2\kappa}\bigr] \psi(x) f(x) \,dx = \sum_{\ell=1}^\infty (-1)^\ell \tr\Bigl\{ \!\sqrt{R_0}\,f\psi \sqrt{R_0} \Bigl(\!\sqrt{R_0}\,q \sqrt{R_0}\Bigr)^\ell \Bigr\} \end{align} and thence, using \eqref{A.1.2}, \eqref{A.1.1}, and \eqref{periodic B delta}, \begin{align}\label{periodic g in H1} \bigl\|g(x) -\tfrac1{2\kappa}\bigr\|_{H^1({\mathbb{R}}/{\mathbb{Z}})} \lesssim \kappa^{-1} \|q\|_{H^{-1}({\mathbb{R}}/{\mathbb{Z}})}, \end{align} provided $\delta$ is chosen sufficiently small. Moreover, this argument shows that the first mapping in \eqref{periodic diffeos} is real-analytic by directly proving convergence of the power series. When combined with \eqref{translation identity}, the estimates just presented also lead to a proof of \eqref{periodic stronger}; for further details see the proof of Proposition~\ref{P:diffeo}. We consider now the inverse mapping. As in Section~\ref{S:2}, $$ \kappa \cdot dg\bigr|_{q\equiv 0} = - R_0(2\kappa) $$ is a unitary map of $H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})$ onto $H^1_\kappa({\mathbb{R}}/{\mathbb{Z}})$. Thus, by the inverse function theorem, $q\mapsto g-\tfrac1{2\kappa}$ is a diffeomorphism in some neighbourhood of zero. We must verify that the inverse mapping extends to the whole of $B_{\delta,\kappa}$. 
Differentiating \eqref{dual g} with respect to $q$ and applying \eqref{A.1.1} and \eqref{A.1.2} yields \begin{align}\label{periodic dg in H1} \Bigl\| dg -dg\bigr|_{q\equiv0} \Bigr\|_{H^{-1}_\kappa\to H^{\vphantom{+}1}_\kappa} \lesssim \kappa^{-3/2} \|q\|_{H^{-1}({\mathbb{R}}/{\mathbb{Z}})}, \end{align} which suffices for this task. The diffeomorphism property extends from the first map in \eqref{periodic diffeos} to the second by exactly the same argument presented in the proof of Proposition~\ref{P:diffeo}. We turn now to part (iii), focussing on \eqref{periodic alpha}. As previously, we proceed by computing derivatives, beginning with \begin{align}\label{peri red} \frac{d\ }{ds}\biggr|_{s=0} \int_0^1 \frac{dy}{2g(y;q+sf)} = \int_0^1 g(x) f(x)\,dx. \end{align} This may be proved as follows: By the resolvent identity, periodicity, and Lemma~\ref{L:D 1/g}, \begin{align*} \text{LHS\eqref{peri red}} &= \int_0^1 \int_{\mathbb{R}} \frac{G(y,x)f(x)G(x,y)}{2g(y)^2}\,dx\,dy \\ &= \sum_{k\in{\mathbb{Z}}} \int_0^1 \int_0^1 \frac{G(y,x+k)f(x+k)G(x+k,y)}{2g(y)^2}\,dx\,dy \\ &= \sum_{k\in{\mathbb{Z}}} \int_0^1 \int_0^1 \frac{G(y-k,x)f(x)G(x,y-k)}{2g(y-k)^2}\,dx\,dy \\ &= \int_0^1 \int_{\mathbb{R}} \frac{G(y,x)G(x,y)}{2g(y)^2}\,dy\,f(x)\,dx = \text{RHS\eqref{peri red}}. \end{align*} Beginning with \eqref{peri red} and using \eqref{O35pp} shows \begin{align*} d^2\alpha\bigr|_{q\equiv 0} (f,f) &= \tfrac{1}{4\kappa^2} \int_0^1 \!\! \int_{\mathbb{R}} e^{-2\kappa|y-z|} f(y) f(z) \,dy\,dz = \tfrac{1}{\kappa} \sum_{\xi\in 2\pi{\mathbb{Z}}} \frac{|\hat f(\xi)|^2}{\xi^2+4\kappa^2}. \end{align*} Relying also on \eqref{periodic dg in H1} yields \begin{align*} \Bigl| d^2\alpha(f,f) - d^2\alpha\bigr|_{q\equiv 0} (f,f)\Bigr| & \lesssim \kappa^{-3/2} \|q\|_{H^{-1}({\mathbb{R}}/{\mathbb{Z}})} \|f\|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})}^2. 
\end{align*} In this way, we see that the power series expansion of $\alpha$ as a function of $q$ is dominated by its quadratic term throughout \eqref{periodic B delta}, thus proving \eqref{periodic alpha}. \end{proof} Proposition~\ref{P:periodic 2} contains no analogue of \eqref{O37}. Unlike in the decaying case, the quantity defined in \eqref{periodic alpha} does not coincide precisely with the renormalized perturbation determinant considered in \cite{KVZ}. To describe the connection, we must introduce several new objects. Henceforth, we consider only $q\in C^\infty({\mathbb{R}}/{\mathbb{Z}})$. Once we have established suitable identities in this setting, they may be extended via analyticity to $q$ that are merely $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$. Let ${\mathcal R}_0$ denote the resolvent associated to the Laplacian on $[0,1]$ with periodic boundary conditions; concretely, $$ {\mathcal R}_0 : \sum_{\xi\in2\pi{\mathbb{Z}}} c_\xi e^{i\xi x} \mapsto \sum_{\xi\in2\pi{\mathbb{Z}}} \frac{c_\xi e^{i\xi x}}{\xi^2 + \kappa^2}. $$ Note that this coincides with ${\mathcal M}_0^2$ where ${\mathcal M}_0$ is as in the proof of \eqref{A.1.1}. Using \eqref{mcM norm}, we see that the resolvent of the operator $\mathcal{L}=-\partial_x^2+q$, acting on $L^2([0,1])$ with periodic boundary conditions, can be expanded in a convergent series $$ {\mathcal R} = {\mathcal R}_0 + \sum_{\ell=1}^\infty (-1)^\ell \sqrt{{\mathcal R}_0} \left(\sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0}\right)^\ell \sqrt{{\mathcal R}_0} $$ whenever $\kappa$ and $q$ satisfy \eqref{periodic smallness} for suitable $\delta>0$. 
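Let us note, in passing, why this series converges in operator norm. Since ${\mathcal R}_0={\mathcal M}_0^2$, the $\theta=0$ case of \eqref{mcM norm}, together with the bound of operator norms by Hilbert--Schmidt norms, gives
$$
\bigl\| \sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0} \bigr\|_{L^2([0,1])\to L^2([0,1])}^2 \lesssim \kappa^{-1} \sum_{\xi\in2\pi{\mathbb{Z}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2} \lesssim \kappa^{-1} \| q \|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})}^2,
$$
so under \eqref{periodic smallness} the $\ell$-th summand in the series is $O(\delta^\ell)$ in operator norm and the series converges geometrically, provided $\delta$ is small.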
Moreover, the kernels of these operators can be found by the method of images: \begin{align}\label{mcR0 kernel} \langle \delta_x, {\mathcal R}_0\delta_y\rangle = \sum_{k\in{\mathbb{Z}}} \langle \delta_x, R_0\delta_{y+k}\rangle = \tfrac{1}{2\kappa(1-e^{-\kappa})} \bigl[ e^{-\kappa\|x-y\|} + e^{-\kappa(1-\|x-y\|)} \bigr], \end{align} where $\|x-y\|=\dist(x-y,{\mathbb{Z}})$, and similarly, \begin{align}\label{mcR kernel} \mathcal G(x,y) := \langle \delta_x, {\mathcal R} \delta_y\rangle = \sum_{k\in{\mathbb{Z}}} G(x,y+k). \end{align} The last object we need to define is the Lyapunov exponent. Let $\psi_\pm(x;\kappa)$ be the Weyl solutions introduced earlier in this section. Due to the periodicity of $q$, we see that $x\mapsto\psi_+(x+1)$ and $x\mapsto\psi_-(x+1)$ constitute equally good Weyl solutions and so must differ from the originals by numerical constants. Noting the constancy of the Wronskian as well as the square-integrability constraint, we see that there is a $\gamma=\gamma(\kappa)>0$ so that $$ \psi_+(x+1;\kappa) = e^{-\gamma(\kappa)} \psi_+(x;\kappa) \qtq{and} \psi_-(x+1;\kappa) = e^{+\gamma(\kappa)} \psi_-(x;\kappa). $$ This quantity $\gamma$ is known as the Lyapunov exponent. Employing these relations to sum in \eqref{mcR kernel}, we deduce that \begin{align}\label{mcR kernel'} \mathcal G(x,x) = \frac{1+e^{-\gamma}}{1-e^{-\gamma}} G(x,x). 
\end{align} \begin{prop} For $q\in C^\infty({\mathbb{R}}/{\mathbb{Z}})$ and $\kappa$ satisfying \eqref{periodic smallness}, we have \begin{align}\label{Lyapunov formula} \gamma(\kappa) = \int_0^1 \frac{dx}{2g(x)}, \end{align} \begin{align}\label{periodic trace} \tr\left(\sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0} \right) = \tfrac{1+e^{-\kappa}}{1-e^{-\kappa}} \int_0^1 \bigl[\tfrac{1}{2} e^{-2\kappa|\cdot|}*q\bigr](x) \,dx = \tfrac1{2\kappa}\,\tfrac{1+e^{-\kappa}}{1-e^{-\kappa}} \int_0^1 q(x) \,dx, \end{align} which is a Casimir, and \begin{align}\label{periodic determinant} \log\det\left( 1 + \sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0} \right) = \log\bigl(e^{\gamma} - 2 + e^{-\gamma} \bigr) - \log\bigl(e^{\kappa} - 2 + e^{-\kappa} \bigr). \end{align} Here the trace and determinant are with respect to the Hilbert space $L^2({\mathbb{R}}/{\mathbb{Z}})$. \end{prop} \begin{proof} The proof of \eqref{Lyapunov formula} is very simple: combining $g(x)=\psi_+(x)\psi_-(x)$ with the Wronskian relation \eqref{E:Wron}, we have \begin{align*} \int_0^1 \frac{dx}{2g(x)} = \tfrac12 \int_0^1 \frac{d }{dx} \log\Bigl[\tfrac{\psi_-(x)}{\psi_+(x)}\Bigr]\,dx = \tfrac12 \log\Bigl[\tfrac{\psi_-(x+1)\psi_+(x)}{\psi_-(x)\psi_+(x+1)}\Bigr] = \gamma. \end{align*} By \eqref{mcR0 kernel}, we have $$ \tr\Bigl\{ \sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0} \Bigr\} = \frac{(1+e^{-\kappa})}{2\kappa(1-e^{-\kappa})} \int_0^1 q(x)\,dx, $$ while \begin{align*} \int_0^1\int_{\mathbb{R}} \tfrac{1}{2} e^{-2\kappa|x-y|} q(y)\,dy\,dx &= \sum_{k\in{\mathbb{Z}}} \int_0^1\int_0^1 \tfrac{1}{2} e^{-2\kappa|x-k-y|} q(y)\,dy\,dx \\ &= \int_0^1\int_{\mathbb{R}} \tfrac{1}{2} e^{-2\kappa|x-y|} q(y) \,dx\,dy = \tfrac{1}{2\kappa} \int_0^1 q(y)\,dy. \end{align*} This proves \eqref{periodic trace}. The identity \eqref{periodic determinant} can be readily deduced from \cite[Theorem~2.9]{MR0559928}, which is a recapitulation of venerable results of Hill and of Whittaker and Watson.
For completeness, we give an alternate proof paralleling our arguments from the rapidly decreasing case. By \eqref{Lyapunov formula}, we see that \eqref{periodic determinant} holds in the case $q\equiv 0$. Moreover, arguing as in the decaying case, we find that for any $f\in C^\infty({\mathbb{R}}/{\mathbb{Z}})$, $$ \frac{d\ }{ds}\biggr|_{s=0} \log\det\left( 1 + \sqrt{{\mathcal R}_0}\, (q+sf) \, \sqrt{{\mathcal R}_0} \right) = \int_0^1 \mathcal G(x,x) f(x)\,dx. $$ On the other hand, by \eqref{Lyapunov formula} and \eqref{peri red}, $$ \frac{d\ }{ds}\biggr|_{s=0} \log\bigl(e^{\gamma(\kappa;q+sf)} - 2 + e^{-\gamma(\kappa;q+sf)} \bigr) = \frac{e^\gamma-e^{-\gamma}}{(e^{\gamma} - 2 + e^{-\gamma})}\int_0^1 g(x) f(x)\,dx. $$ In view of \eqref{mcR kernel'}, these two derivatives agree. Thus equality in \eqref{periodic determinant} extends to all $q\in B_{\delta,\kappa}$. \end{proof} \begin{corollary} For smooth initial data, the conservation of $$ \int_0^1 \rho(x)\,dx $$ under the KdV flow, which follows from \eqref{E:l5.1h}, is equivalent to conservation of $$ -\log\det_2\left( 1 + \sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0} \right), $$ which was proved in \cite{KVZ}. \end{corollary} \section{Local smoothing}\label{S:7} Our first goal in this section is to derive a local smoothing result for $H^{-1}$-solutions to \eqref{KdV} on the line. A similar a priori estimate was obtained by Buckmaster and Koch in \cite{MR3400442} via the Miura map. \begin{lemma}[Local smoothing]\label{L:loc smoothing} There exists $\delta>0$ so that for every $H^{-1}({\mathbb{R}})$-solution $q(t)$ to \eqref{KdV}, in the sense of Corollary~\ref{C:1}, with initial data $q(0)\in B_\delta$, \begin{equation}\label{E:loc smoothing *} \sup_{t_0,x_0\in{\mathbb{R}}} \ \int_{0}^{1} \!\! \int_{0}^{1} |q(t-t_0,x-x_0)|^2 \,dx\,dt \lesssim \delta^2.
\end{equation} \end{lemma} \begin{proof} As noted already in Proposition~\ref{L:5.2}, conservation of $\alpha(\kappa=1)$ guarantees that \begin{align*} \| q \|_{L^\infty_t H^{-1}_x}^2 \lesssim \delta^2. \end{align*} This allows us to choose $\delta$ sufficiently small that all results from Section~\ref{S:3} can be applied at all times $t\in{\mathbb{R}}$. It also means that it suffices to prove \eqref{E:loc smoothing *} with $t_0=x_0=0$. Let us now fix a smooth function $\phi$ whose derivative $\phi'$ is positive and Schwartz and define $$ \psi(x) = \tfrac32 \int_{\mathbb{R}} e^{-2|x-y|} \phi'(y) \,dy, $$ which is positive everywhere. Suppose first that $q(t)$ is a Schwartz solution to \eqref{KdV}. Setting $\kappa=1$ in \eqref{E:l5.1h}, we obtain \begin{align*} \tfrac{d\ }{dt} \rho(t,x) &= \Bigl( \tfrac32 \bigl[e^{-2|\cdot|}* q(t)^2\bigr](x) + 2q(t,x)\bigl[ 1 - \tfrac{1}{2g(t,x)}\bigr] - 4\rho(t,x) \Bigr)'. \end{align*} Integrating this against $\phi(x)$ and integrating by parts yields \begin{align} \int_0^1 \! \int_{\mathbb{R}} |q(t,x)|^2 \psi(x) \,dx\,dt &= \int_{\mathbb{R}} [\rho(0,x)-\rho(1,x)] \phi(x) \,dx \notag\\ &\quad - 2 \int_0^1 \! \int_{\mathbb{R}} q(t,x)\bigl[ 1 - \tfrac{1}{2g(t,x)}\bigr] \phi'(x) \,dx\,dt \label{64}\\ &\quad + 4 \int_0^1 \! \int_{\mathbb{R}} \rho(t,x) \phi'(x) \,dx\,dt . \notag \end{align} But by the results of Section~\ref{S:3}, the right-hand side is bounded uniformly; thus \eqref{E:loc smoothing *} follows for Schwartz solutions. Next we allow $q(t)$ to be a general (not Schwartz) solution to \eqref{KdV} and suppose $q_n(t)$ is a sequence of Schwartz solutions with $q_n(0)\to q(0)$ in $H^{-1}({\mathbb{R}})$. By weak lower-semicontinuity of the $L^2$-norm and the fact that weak convergence is guaranteed by Theorem~\ref{T:converge}, \begin{equation*} \int_{0}^{1} \int_{0}^{1} |q(t,x)|^2 \,dx\,dt \leq \liminf_{n\to\infty} \int_{0}^{1} \int_0^1 |q_n(t,x)|^2 \,dx\,dt.
\end{equation*} Thus, \eqref{E:loc smoothing *} for such general solutions $q(t)$ follows from the Schwartz-class case already proven. \end{proof} Using this a priori bound as a stepping stone, we will now show that solutions whose initial data converge in $H^{-1}({\mathbb{R}})$ actually converge in the local smoothing norm as claimed in Theorem~\ref{T:ls conv}. In fact, the following proposition is strictly stronger than this theorem because of the additional uniformity in $x_0$. \begin{prop}\label{P:loc smoothing} Let $q(t)$ and $q_n(t)$ be $H^{-1}({\mathbb{R}})$-solutions to \eqref{KdV}, in the sense of Corollary~\ref{C:1}, with initial data $q_n(0)\to q(0)$ in $H^{-1}({\mathbb{R}})$. Then for every $T>0$, \begin{equation}\label{E:loc smoothing n} \lim_{n\to\infty} \ \sup_{x_0\in{\mathbb{R}}} \ \int_{-T}^T \int_0^1 |q(t,x-x_0)-q_n(t,x-x_0)|^2 \,dx\,dt = 0. \end{equation} In particular, solutions in the sense of Corollary~\ref{C:1} are distributional solutions. \end{prop} A major part of the argument leading to Proposition~\ref{P:loc smoothing} is a refinement of the proof of Lemma~\ref{L:loc smoothing}. The key improvement stems from analyzing the behavior of the various terms in \eqref{64} as $\kappa\to\infty$, rather than simply setting $\kappa=1$. We begin with the following preliminary estimates: \begin{lemma} Fix $\psi\in C^\infty_c({\mathbb{R}})$ with $\supp(\psi)\subset (0,1)$. 
There exists $\delta>0$ so that \begin{gather} \bigl\| \psi(x)\bigl[g(x)-\tfrac1{2\kappa}\bigr] + \kappa^{-1}[R_0(2\kappa)(q\psi)](x) \bigr\|_{L^2({\mathbb{R}})}^2 \lesssim \kappa^{-7} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \label{E:psi g}\\ \bigl\| \psi(x)\bigl[\kappa -\tfrac1{2g(x)}\bigr] + 2\kappa [R_0(2\kappa)(q\psi)](x) \bigr\|_{L^2({\mathbb{R}})}^2 \lesssim \kappa^{-3} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \label{E:psi 1/g}\\ \ \ \biggl| \int \rho(x) \psi(x)^2 \,dx - \tfrac{1}{2\kappa} \int \frac{|\widehat{q\psi} (\xi)|^2\,d\xi}{\xi^2+4\kappa^2} \biggr| \lesssim \kappa^{-7/2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \label{E:psi rho}\\ \biggl| \iint q(x)^2 \kappa e^{-2\kappa|x-y|} \psi(y)^2 \,dx\,dy - \int q(x)^2 \psi(x)^2 \,dx \biggr| \lesssim \int_{\mathbb{R}} \frac{|q(x)|^2\,dx}{\kappa(1+x^2)} \label{E:psi dumb} \end{gather} for every $q\in B_\delta$ and $\kappa\geq 1$. (Note that the implicit constants depend on $\psi$.) \end{lemma} \begin{proof} We begin with a commutator calculation: \begin{align*} [\psi(x),R_0] &= R_0 \bigl(-2\partial_x \psi'(x) + \psi''(x)\bigr) R_0 \\ &= R_0 \bigl(-2\partial_x\bigr) [\psi'(x),R_0] + R_0 \bigl(-2\partial_x\bigr)R_0\psi'(x) + R_0\psi''(x) R_0. \end{align*} This shows that for $\kappa\geq 1$, we can write \begin{equation}\label{psi comm bound} \begin{gathered}\relax [\psi(x),R_0] = \sqrt{R_0} A \sqrt{R_0} = \sqrt{R_0} B \sqrt{R_0} + \sqrt{R_0} C \sqrt{R_0} \psi'(x) \\ \text{with} \quad \|A\|_{L^2\to L^2} + \|C\|_{L^2\to L^2} \lesssim \kappa^{-1} \qtq{and} \|B\|_{L^2\to L^2} \lesssim \kappa^{-2}. 
\end{gathered} \end{equation} From the series \eqref{E:g series}, we have \begin{align} &\int_{\mathbb{R}}\bigl\{ \psi(x)\bigl[g(x)-\tfrac1{2\kappa}\bigr] + \kappa^{-1}[R_0(2\kappa)(q\psi)](x) \bigr\} f(x)\,dx \label{dual psi g}\\ ={}& \sum_{\ell\geq 2} (-1)^\ell \tr\Bigl\{ \sqrt{R_0}\,f\,\sqrt{R_0} \sqrt{R_0}\,\psi q\,\sqrt{R_0} \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^{\ell-1} \Bigr\} \notag\\ &\ +\sum_{\ell\geq 1} (-1)^\ell \tr\Bigl\{ \sqrt{R_0}\,f\,\sqrt{R_0} B \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \Bigr\} \notag\\ &\ +\sum_{\ell\geq 1} (-1)^\ell \tr\Bigl\{ \sqrt{R_0}\,f\,\sqrt{R_0} C \sqrt{R_0}\,\psi' q\,\sqrt{R_0} \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^{\ell-1} \Bigr\}. \notag \end{align} Using \begin{align}\label{I2 S6} \Bigl\| \sqrt{R_0}\,h\,\sqrt{R_0} \Bigr\|_{{\mathfrak{I}}_2} \lesssim \kappa^{-3/2} \| h \|_{L^2} \qtq{and} \Bigl\| \sqrt{R_0}\,q\,\sqrt{R_0} \Bigr\|_{{\mathfrak{I}}_2} \lesssim \kappa^{-1/2} \| q \|_{H^{-1}}, \end{align} which follow from \eqref{R I2}, we then deduce that \begin{align*} \bigl|\text{LHS\eqref{dual psi g}}\bigr| \lesssim \kappa^{-7/2} \|f\|_{L^2} \Bigl\{ \|\psi q\|_{L^2} + 1 + \|\psi' q\|_{L^2}\Bigr\}, \end{align*} provided, say, $\delta\leq\frac12$. This proves \eqref{E:psi g}. For future use, we also note that with the aid of \eqref{E:psi g}, one may readily show \begin{align}\label{step to rho2} \int \bigl(g(x) - \tfrac1{2\kappa}\bigr)^2\psi(x)^2\,dx + \bigl\| \kappa^{-1} R_0(2\kappa)(q\psi) \bigr\|_{L^2({\mathbb{R}})}^2 \lesssim \kappa^{-6} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] . \end{align} Next we prove \eqref{E:psi 1/g}.
This almost follows from \eqref{E:psi g}; indeed, writing \begin{align}\label{1/g expansion} \kappa -\tfrac1{2g(x)} = 2\kappa^2\bigl( g(x) - \tfrac1{2\kappa}\bigr) - \tfrac{2\kappa^2}{g(x)} \bigl( g(x) - \tfrac1{2\kappa}\bigr)^2 \end{align} and invoking \eqref{E:psi g}, we are left only to prove \begin{align}\label{psi/g left} \int \tfrac{2\kappa^2}{g(x)} \bigl( g(x) - \tfrac1{2\kappa}\bigr)^2 \psi(x)^2\,dx \lesssim \kappa^{-3} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr]. \end{align} This then follows from \eqref{step to rho2}. We begin the proof of \eqref{E:psi rho} by expanding one step further than \eqref{1/g expansion} to write \begin{equation*} \begin{gathered} \rho(x) =\sum_{i=1}^3 \rho_i(x) \qtq{with} \rho_1(x):= 2\kappa^2\bigl\{g(x) - \tfrac1{2\kappa} + \tfrac1\kappa [R_0(2\kappa)q](x)\bigr\},\\ \rho_2(x):= - 4\kappa^3\bigl(g(x) - \tfrac1{2\kappa}\bigr)^2 \qtq{and} \rho_3(x):= 4\kappa^3\bigl(g(x) - \tfrac1{2\kappa}\bigr)^3 / g(x). \end{gathered} \end{equation*} Let us begin our analysis with the contribution of $\rho_1$. From \eqref{E:g series}, we have \begin{align*} \int \rho_1(x)\psi(x)^2\,dx = 2\kappa^2 \sum_{\ell\geq 2} (-1)^\ell \tr\Bigl\{ \psi(x)^2 \sqrt{R_0}\Bigl(\sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \sqrt{R_0}\Bigr\}. \end{align*} Now for any $\ell\geq2$, we have from \eqref{I2 S6}, \eqref{psi comm bound} and its adjoint that \begin{align*} &\tr \Bigl\{ \psi(x)^2 \sqrt{R_0} \Bigl(\sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \sqrt{R_0} \Bigr\} \\ &\ \ = \tr \Bigl\{ \sqrt{R_0}\,\psi q R_0^2 \psi q\,\sqrt{R_0} \Bigl(\sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^{\ell-2} \Bigr\} + O\biggl( \kappa^{-6} \delta^{\ell-2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \biggr) . \end{align*} The error term here sums acceptably over $\ell\geq 2$.
The contribution of the first term is also acceptable provided we restrict to $\ell\geq 3$; indeed, $$ \Bigl| \tr \Bigl\{ \sqrt{R_0}\,\psi q R_0^2 \psi q\,\sqrt{R_0} \Bigl(\sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^{\ell-2} \Bigr\} \Bigr| \lesssim \kappa^{-5-\frac{\ell-2}2} \delta^{\ell-2} \int_0^1 |q(x)|^2\,dx. $$ Combining all of this, we deduce that \begin{align*} \int \rho_1(x)\psi(x)^2\,dx = 2\kappa^2 \tr \Bigl\{ R_0\psi q R_0 \psi q R_0 \Bigr\} + O\biggl( \kappa^{-7/2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \biggr) . \end{align*} We turn now to $\rho_2$. Combining \eqref{step to rho2} and \eqref{E:psi g} gives \begin{align*} \biggl| \int \bigl(g(x) - \tfrac1{2\kappa}\bigr)^2\psi(x)^2\,dx - \kappa^{-2} \bigl\| R_0(2\kappa)(q\psi) \bigr\|_{L^2({\mathbb{R}})}^2 \biggr| \lesssim \kappa^{-13/2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr]. \end{align*} Therefore, \begin{align*} \int \rho_2(x)\psi(x)^2\,dx = - 4 \kappa \bigl\| R_0(2\kappa)(q\psi) \bigr\|_{L^2({\mathbb{R}})}^2 + O\biggl( \kappa^{-7/2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \biggr) . \end{align*} We now consider $\rho_3$. From \eqref{R I2} we have \begin{align*} \Bigl\| \sqrt{R_0}\,h\,\sqrt{R_0} \Bigr\|_{{\mathfrak{I}}_2}^2 \lesssim \kappa^{-1} \| h \|_{L^1}^2 \int\frac{d\xi}{\xi^2+4\kappa^2} \lesssim \kappa^{-2} \| h \|_{L^1}^2 . \end{align*} Employing this to estimate the series \eqref{E:g series} via duality, we obtain $$ \bigl\| g - \tfrac1{2\kappa} \bigr\|_{L^\infty} \lesssim \kappa^{-3/2} \delta . $$ Combining this with \eqref{step to rho2} yields \begin{align*} \int \rho_3(x)\psi(x)^2\,dx \lesssim \kappa^{-7/2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] . \end{align*} To derive the claim \eqref{E:psi rho} by combining our results on each part of $\rho$, we need one additional identity, namely, $$ 2\kappa^2 \tr \Bigl\{ R_0\psi q R_0 \psi q R_0 \Bigr\} - 4 \kappa \bigl\| R_0(2\kappa)(q\psi) \bigr\|_{L^2({\mathbb{R}})}^2 = \tfrac{1}{2\kappa} \int \frac{|\widehat{q\psi} (\xi)|^2\,d\xi}{\xi^2+4\kappa^2} .
$$ That this equality holds follows from the same calculation we carried out in \eqref{delta2alpha}. The last estimate \eqref{E:psi dumb} is relatively trivial. As $\int \kappa e^{-2\kappa|x-y|}\,dy =1$, \begin{align*} \biggl| \psi(x)^2 - \int \kappa e^{-2\kappa|x-y|} \psi(y)^2 \,dy \biggr| \lesssim \int \kappa e^{-2\kappa|y-x|} |x-y| \,dy \lesssim \kappa^{-1}, \end{align*} which settles the case $|x|\leq 10$. For $|x|\geq 10$, we have \begin{align*} \biggl| \psi(x)^2 - \int \kappa e^{-2\kappa|x-y|} \psi(y)^2 \,dy \biggr| = \int_0^1 \kappa e^{-2\kappa|y-x|} \psi(y)^2 \,dy \lesssim \kappa e^{-\kappa|x|} , \end{align*} which offers more than enough decay in both $x$ and $\kappa$. \end{proof} \begin{lemma}\label{L:kappa ls} There is a $\delta>0$ so that the following is true: Let $Q$ be a family of Schwartz solutions to \eqref{KdV} on the line such that $\{q(0):q\in Q\}$ is an equicontinuous subset of $B_\delta$. Then, for any $\psi\in C^\infty_c({\mathbb{R}})$ and any $T>0$, we have \begin{equation}\label{E:loc smoothing hi} \lim_{\kappa\to\infty} \ \sup_{q\in Q} \ \int_{-T}^T \int \frac{\xi^2 |\widehat{q\psi} (t,\xi)|^2}{\xi^2+4\kappa^2} \,d\xi\,dt = 0. \end{equation} \end{lemma} \begin{proof} Without loss of generality, we may assume that $\supp(\psi)\subset (0,1)$. Throughout the proof, we regard $\psi$ and $T$ as fixed and implicit constants may depend on them. Multiplying \eqref{E:l5.1h} by $\kappa$ and integrating against $$ \phi(x) = \int_{-\infty}^x \psi(y)^2\,dy, $$ we obtain \begin{align} \int \kappa [\rho(T,x)-\rho(-T,x)] & \phi(x) \,dx \label{lsk1}\\ &= -\tfrac32 \int_{-T}^T \! \iint |q(t,x)|^2 \kappa e^{-2\kappa|x-y|}\psi(y)^2 \,dx\,dy\,dt \label{lsk2}\\ &\quad - 2\kappa \int_{-T}^T \! \int q(t,x)\bigl[ \kappa - \tfrac{1}{2g(t,x)}\bigr] \psi(x)^2 \,dx\,dt \label{lsk3}\\ &\quad + 4\kappa^3 \int_{-T}^T \! \int \rho(t,x) \psi(x)^2 \,dx\,dt.\label{lsk4} \end{align} We will discuss these terms one at a time.
From Proposition~\ref{P:Intro rho}, we have $$ \bigl| \text{\eqref{lsk1}} \bigr| \lesssim \kappa \alpha(\kappa;q), $$ which converges to zero as $\kappa\to\infty$ uniformly for $q\in Q$; see Lemma~\ref{L:equi 2}. Combining \eqref{E:psi dumb} and Lemma~\ref{L:loc smoothing} yields \begin{align*} \biggl| \eqref{lsk2} + \tfrac32 \int_{-T}^T \! \int |q(t,x)|^2 \psi(x)^2 \,dx\,dt \biggr| \lesssim \kappa^{-1}, \end{align*} or equivalently, by Plancherel, \begin{align*} \biggl| \eqref{lsk2} + \tfrac32 \int_{-T}^T \! \int |\widehat{\psi q}(t,\xi)|^2 \,d\xi\,dt \biggr| \lesssim \kappa^{-1}. \end{align*} From \eqref{E:psi 1/g} and Lemma~\ref{L:loc smoothing}, we have \begin{align*} \biggl| \eqref{lsk3} - \int_{-T}^T \! \iint \psi(x)q(t,x) \kappa e^{-2\kappa|x-y|} q(t,y)\psi(y) \,dx\,dy\,dt \biggr| \lesssim \kappa^{-1/2}, \end{align*} or equivalently (see \eqref{R resolvent}), \begin{align*} \biggl| \eqref{lsk3} - \int_{-T}^T \! \int \frac{4\kappa^2|\widehat{\psi q}(t,\xi)|^2}{\xi^2+4\kappa^2} \,d\xi\,dt \biggr| \lesssim \kappa^{-1/2}. \end{align*} From \eqref{E:psi rho} and Lemma~\ref{L:loc smoothing}, $$ \biggl| \eqref{lsk4} - \tfrac{1}{2} \int_{-T}^T \! \int \frac{4\kappa^2|\widehat{\psi q}(t,\xi)|^2}{\xi^2+4\kappa^2} \,d\xi\,dt \biggr| \lesssim \kappa^{-1/2}. $$ The claim \eqref{E:loc smoothing hi} now follows from recombining \eqref{lsk1}--\eqref{lsk4}. \end{proof} \begin{proof}[Proof of Proposition~\ref{P:loc smoothing}] By the same scaling argument as in Theorem~\ref{T:converge}, it suffices to prove \eqref{E:loc smoothing n} for solutions that are small in $L^\infty_t H^{-1}_x$. Moreover, we may assume that $q_n$ are Schwartz solutions, since Proposition~\ref{P:loc smoothing} in this reduced generality provides precisely the tool necessary to obtain the full version by approximation. Let us fix $\psi\in C^\infty_c({\mathbb{R}})$, not identically zero, with $\supp(\psi)\subseteq(0,1)$. 
By a simple covering argument, it suffices to show that \begin{align}\label{E:ls 1} \lim_{n\to\infty} \ \sup_{x_0\in{\mathbb{R}}} \ \int_{-T}^T \Bigl\|\bigl[q_n(t,x)-q(t,x)\bigr] \psi(x+x_0) \Bigr\|_{L^2({\mathbb{R}})}^2 \,dt = 0. \end{align} We will do this by breaking into high- and low-frequency components, using a refined local smoothing argument to handle the former, and applying Theorem~\ref{T:converge} to handle the latter. The frequency decomposition is based on the multipliers $$ m_\text{hi}(\xi) = \frac{|\xi|}{\sqrt{\xi^2+4\kappa^2}} \qtq{and} m_\text{lo}(\xi) = \sqrt{1 - m_\text{hi}(\xi)^2} = \frac{2\kappa}{\sqrt{\xi^2+4\kappa^2}}. $$ We begin with the low frequencies. For $\kappa$ fixed, Theorem~\ref{T:converge} implies \begin{align} \lim_{n\to\infty} &\ \sup_{x_0\in{\mathbb{R}}}\ \int_{-T}^T \; \Bigl\|m_\text{lo}(-i\partial_x) \Bigl(\bigl[q_n(t)-q(t)\bigr] \psi(\cdot + x_0) \Bigr)\Bigr\|_{L^2({\mathbb{R}})}^2 \,dt\notag\\ &\lesssim \kappa T \lim_{n\to\infty}\ \sup_{x_0\in{\mathbb{R}}}\ \Bigl\| \bigl[q_n(t,x)-q(t,x)\bigr] \psi(x+x_0) \Bigr\|_{L^\infty_t H^{-1}_x ([-T,T]\times{\mathbb{R}})}\label{ls low} \\ &\lesssim \kappa T \|\psi\|_{H^1({\mathbb{R}})} \lim_{n\to\infty} \bigl\| q_n(t,x)-q(t,x) \bigr\|_{L^\infty_t H^{-1}_x ([-T,T]\times{\mathbb{R}})} =0.\notag \end{align} We turn now to the high-frequency part. As the sequence $q_n(0)$ is convergent in $H^{-1}({\mathbb{R}})$, it is equicontinuous there. Proposition~\ref{P:equi} then guarantees that $\{q_n(t) : t\in{\mathbb{R}}\text{ and } n\in{\mathbb{N}}\}$ is also $H^{-1}({\mathbb{R}})$-equicontinuous. Thus Lemma~\ref{L:kappa ls} implies \begin{equation}\label{ls n high} \lim_{\kappa\to\infty} \ \sup_{n} \int_{-T}^T \; \Bigl\|m_\text{hi}(-i\partial_x) [q_n(t)\psi(\cdot+x_0)] \Bigr\|_{L^2({\mathbb{R}})}^2 \,dt = 0. 
\end{equation} Note that by Theorem~\ref{T:converge} and weak lower-semicontinuity, it then follows that \begin{equation}\label{ls q high} \lim_{\kappa\to\infty} \int_{-T}^T \; \Bigl\|m_\text{hi}(-i\partial_x) [q(t)\psi(\cdot+x_0)] \Bigr\|_{L^2({\mathbb{R}})}^2 \,dt = 0. \end{equation} We are now ready to put the pieces together. From \eqref{ls n high} and \eqref{ls q high}, we see that we can make the high-frequency contribution to LHS\eqref{E:ls 1} small, uniformly in $n$, by choosing $\kappa$ sufficiently large. But then by \eqref{ls low}, we may make the low-frequency contribution as small as we wish by choosing $n$ sufficiently large. This proves \eqref{E:ls 1} and so \eqref{E:loc smoothing n}. Lastly, integration by parts shows that Schwartz solutions are distributional solutions of the initial-value problem, which is to say \begin{align*} \int h(0,x)q(0,x)\,dx + \int_0^\infty \!&\! \int [\partial_t h](t,x) q(t,x) \,dx\,dt \\ & = \int_0^\infty \!\! \int - [\partial_x^3 h](t,x) q(t,x) + 3[\partial_x h](t,x) q(t,x)^2 \,dx\,dt \end{align*} for every $h\in C^\infty_c({\mathbb{R}}\times{\mathbb{R}})$ (as well as the analogous statement backwards in time). This now extends to $H^{-1}$ solutions via Corollary~\ref{C:1} and Proposition~\ref{P:loc smoothing}. \end{proof}
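We close this section by recording the elementary identity underlying the frequency decomposition used above: since
$$
m_\text{hi}(\xi)^2 + m_\text{lo}(\xi)^2 = \frac{\xi^2}{\xi^2+4\kappa^2} + \frac{4\kappa^2}{\xi^2+4\kappa^2} = 1,
$$
Plancherel gives
$$
\| F \|_{L^2({\mathbb{R}})}^2 = \bigl\| m_\text{hi}(-i\partial_x) F \bigr\|_{L^2({\mathbb{R}})}^2 + \bigl\| m_\text{lo}(-i\partial_x) F \bigr\|_{L^2({\mathbb{R}})}^2
$$
for any $F\in L^2({\mathbb{R}})$. Applying this with $F=[q_n(t)-q(t)]\psi(\cdot+x_0)$ and integrating in time shows that \eqref{ls low}, \eqref{ls n high}, and \eqref{ls q high} together control LHS\eqref{E:ls 1}, as claimed.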
https://arxiv.org/abs/hep-th/9603103
Quasi-Exactly Solvable Potentials on the Line and Orthogonal Polynomials
In this paper we show that a quasi-exactly solvable (normalizable or periodic) one-dimensional Hamiltonian satisfying very mild conditions defines a family of weakly orthogonal polynomials which obey a three-term recursion relation. In particular, we prove that (normalizable) exactly-solvable one-dimensional systems are characterized by the fact that their associated polynomials satisfy a two-term recursion relation. We study the properties of the family of weakly orthogonal polynomials defined by an arbitrary one-dimensional quasi-exactly solvable Hamiltonian, showing in particular that its associated Stieltjes measure is supported on a finite set. From this we deduce that the corresponding moment problem is determined, and that the $k$-th moment grows like the $k$-th power of a constant as $k$ tends to infinity. We also show that the moments satisfy a constant coefficient linear difference equation, and that this property actually characterizes weakly orthogonal polynomial systems.
After reviewing the basic theory of \qes. systems in \section{qes}, we explain in \section{rr} how to construct the weakly orthogonal polynomial system associated to each of the normal forms of a one-dimensional \qes. Hamiltonian listed in \rf{GKOnorm}, \rf{GKO}. Like the polynomial system introduced in \rf{BenDun}, this system always satisfies a three-term recursion relation, whose coefficients we explicitly compute. This allows us to prove that one-dimensional (normalizable) exactly solvable Hamiltonians are characterized by the fact that their associated polynomials satisfy a two-term recursion relation. In \section{op} we show that the polynomials associated to an arbitrary one-dimensional \qes. Hamiltonian enjoy properties completely akin to those listed above for the Hamiltonian \eq{H}. We also study in this section the properties of the moment functional defined by the family of weakly orthogonal polynomials of a \qes. Hamiltonian, giving a rigorous proof of the fact that its associated Stieltjes measure is supported on a finite set, \rf{KUW}, so that the integral \eq{innp} reduces to a finite sum. From this we deduce that the associated (Hamburger or Stieltjes) moment problem is determined, and that the $k$-th moment behaves like the $k$-th power of a constant for large $k$, illustrating this statement with an explicit example for the Hamiltonian \eq{H}. We also show that the moments satisfy a constant coefficient linear difference equation, a property which in fact characterizes weakly orthogonal polynomial systems. The paper ends (\section{conc}) with a brief review of these results, stressing the role played by weak orthogonality---as opposed to true orthogonality---in their derivation.
\Section{qes} Quasi-exactly Solvable Potentials. For the reader's convenience, we present in this Section a summary of the major results in the theory of \qes. systems that we shall need in the sequel. A one-dimensional \sc.
operator (or Hamiltonian) $H = -\partial_x^2+V(x)$ is {\it \qes.} if there exists a \fd. Lie algebra of first-order \do.s $$ \frak g = \Span\set{\xi_a(x)\partial_x+\eta_a(x)}{1\le a\le r}\equiv \Span\set{T_a(x)}{1\le a\le r} $$ such that: \item{i)}$\frak g$ leaves invariant a \fd. module of smooth functions $\frak N\subset\C^\infty(\R)$, \ie $X\cdot f\in\frak N$ for all $f\in\frak N$ and all $X\in\frak g$. In other words, $\frak g$ admits a finite-dimensional representation in terms of smooth functions. \item{ii)}$H$ is in the universal enveloping algebra of $\frak g$, \ie $H$ can be expressed as a polynomial in the generators $T_a$, $\ran ar$, of $\frak g$. A Lie algebra of first-order \do.s satisfying i) is called {\it \qes..} A Hamiltonian $H$ satisfying condition ii) above for an arbitrary (not necessarily \qes.) Lie algebra $\frak g$ is said to be {\it Lie-algebraic.} \smallskip If $H$ is \qes., it follows that the restriction of $H$ to $\frak N$ is a \fd. linear operator $\frak N\to\frak N$, and therefore the eigenfunctions of $H$ lying in $\frak N$ and their corresponding eigenvalues can be exactly computed by purely algebraic methods (diagonalizing a square matrix of order $\dim\frak N$). We shall refer to the eigenfunctions of $H$ lying in $\frak N$ as its {\it algebraic eigenfunctions} (although, of course, they need not be algebraic functions in the technical sense of the word).
The functions in $\frak N$ need not a priori satisfy any boundary conditions (like square-integrability, periodicity, vanishing at the endpoints, etc.) coming from the physics of the problem, whose mathematical purpose is to guarantee that $H$ is a self-adjoint operator. If they do, then the restriction of $H$ to $\frak N$ is self-adjoint, and therefore $H$ has exactly $\dim\frak N$ linearly independent algebraic eigenfunctions, whose corresponding $\dim\frak N$ {\it real} eigenvalues (counting multiplicities) are exactly (\ie algebraically) computable. We shall say in this case that the \qes. Hamiltonian $H$ (or the potential $V$) is {\it fully algebraic}. See \rf{GKOnorm} and \rf{GKO} for an in-depth discussion of fully algebraic potentials under the boundary condition of square-integrability on $\R$. It can be shown (\cf \rf{GKOqes}) that a \qes. \sc. operator $H$ can be expressed as a polynomial of degree at most two in the generators $T_a$, $\ran ar$, of $\frak g$. Moreover, a well known theorem, \rf{Tur}, \rf{KOlado}, \rf{GKOqes}, asserts that every \qes. Lie algebra of first-order \do.s $\frak g$ is related by a (local) change of variable $$ z=\zeta(x) \Eq{cv} $$ and a {\it gauge transformation} with gauge factor $\mu(z)>0$ to (a subalgebra of) one of the Lie algebras $\frak g^n=\h^n\oplus\R$, where $\h^n=\Span\{J_-^n,J_0^n,J_+^n\}\approx\sL{2,\R}$, $$ J_-^n = \dz,\qquad J_0^n = z\dz-\frac n2,\qquad J_+^n=z^2\dz-n\,z \Eq{jeps} $$ and $n$ is a nonnegative integer. In other words, every element $X(x)\in\frak g$ is of the form $$ X(x) = \left.\mu(z)\cdot J(z)\cdot \frac1{\mu(z)}\right|_{z=\zeta(x)},\qquad J(z)\in\frak g^n, $$ for some fixed $n$.
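As an aside, here is a small numerical illustration of our own (not part of the original paper): the realization \eq{jeps} can be encoded as $(n+1)\times(n+1)$ matrices on the monomial basis of $\CP_n$, and the $\sL{2,\R}$ commutation relations, the invariance of $\CP_n$, and the Casimir relation quoted below can then be checked mechanically.

```python
import numpy as np

n = 5                                   # any nonnegative integer ("spin")
dim = n + 1                             # dimension of the module P_n

# matrices of \eq{jeps} on the monomial basis {z^k, 0 <= k <= n} of P_n
Jm = np.zeros((dim, dim)); J0 = np.zeros((dim, dim)); Jp = np.zeros((dim, dim))
for k in range(dim):
    if k > 0:
        Jm[k-1, k] = k                  # J_- z^k = k z^{k-1}
    J0[k, k] = k - n/2                  # J_0 z^k = (k - n/2) z^k
    if k < n:
        Jp[k+1, k] = k - n              # J_+ z^k = (k - n) z^{k+1}; J_+ z^n = 0

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(J0, Jp), Jp)    # [J_0, J_+] = J_+
assert np.allclose(comm(J0, Jm), -Jm)   # [J_0, J_-] = -J_-
assert np.allclose(comm(Jp, Jm), -2*J0) # [J_+, J_-] = -2 J_0 in this realization
# Casimir relation: J_0^2 - (J_+ J_- + J_- J_+)/2 = n(n+2)/4 on P_n
Cas = J0 @ J0 - (Jp @ Jm + Jm @ Jp)/2
assert np.allclose(Cas, n*(n + 2)/4 * np.eye(dim))
```

Note that $J_+$ annihilates $z^n$, so $\CP_n$ is indeed invariant; this is what makes the restriction of a Lie-algebraic $H$ to a finite-dimensional module possible.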
This implies that the {\it gauge Hamiltonian} $$ H_{\hbox{\sevenrm gauge}}(z) = \left.\frac1{\mu(z)}\cdot H(x)\cdot \mu(z)\right|_{x=\zeta^{-1}(z)} \Eq{hgh} $$ is also a polynomial of degree at most two in the generators $J_\eps^n$, \ie (dropping the explicit $n$ dependence in the generators $J_\eps^n$) $$ -H_{\hbox{\sevenrm gauge}}=\sum_{a,b} c_{ab}\,J_a\,J_b+\sum_a c_a\,J_a + c_*, \Eq{hgj} $$ for some real constants $c_*$, $c_a$, and $c_{ab}=c_{ba}$ (the minus sign is for later convenience). The spectral problems of $H$ and $H_{\hbox{\sevenrm gauge}}$ are related in an obvious way: indeed, from \eq{hgh} it follows that if $\chi(z)$ is an eigenfunction of $H_{\hbox{\sevenrm gauge}}$ with eigenvalue $E$ then $$ \psi(x) = \left.\mu(z) \chi(z)\vphantom{_p}\right|_{z=\zeta(x)} \Eq{algwf} $$ will be an eigenfunction of $H$ with the same eigenvalue (not taking into account the boundary conditions). Since the Lie algebra $\frak g^n$ admits as invariant module the space $\CP_n$ of real polynomials of degree at most $n$ in $z$, if $H$ is fully algebraic then $H_{\hbox{\sevenrm gauge}}$ has $n+1$ linearly independent algebraic eigenfunctions lying in $\CP_n$. Hence $H$ has $n+1$ linearly independent algebraic eigenfunctions of the form \eq{algwf}, with $\chi\in\CP_n$ a polynomial of degree at most $n$.
From \eq{jeps} and \eq{hgj} it follows, \rf{GKOnorm}, that the gauge Hamiltonian is of the form\goodbreak $$ \Eeqn{ -H_{\hbox{\sevenrm gauge}} &= P(z)\,\dz^2+\left\{Q(z)-\frac{n-1}2P'(z)\right\}\,\dz\\ &\kern 1in +\left\{R-\frac n2 Q'(z)+\frac{n(n-1)}{12}P''(z)\right\}, \Eq{hgz}} $$ where $P$, $Q$ and $R$ are polynomials of degrees 4, 2 and 0, respectively, given by $$\eeqn{ P(z) = c_{++}z^4+2c_{+0}z^3+c_{00}z^2+2c_{0-}z+c_{--},\Eq P\\ Q(z) = c_+ z^2+c_0 z + c_-,\Eq Q\\ R = \frac{n(n+2)}{12}c_{00}+c_*.\Eq R } $$ Note that, due to the Casimir relation $$ J_0^2-\frac12(J_+J_-+J_-J_+) = \frac n4(n+2), $$ we have set, without loss of generality, $c_{+-}=0$. There are also explicit formulas for the change of variables \eq{cv} and gauge factor $\mu(z)$ needed to put the differential operator \eq{hgz} in \sc. form, \cf \rf{GKOnorm}. Indeed, if $P(z)>0$ on an interval $I$, then for $z\in I$ we have $$ x = \zeta^{-1}(z) = \int^z\frac{dy}{\sqrt{P(y)}},\qquad \mu(z) = P(z)^{-n/4}\exp\left\{\int^z\frac{Q(y)}{2P(y)}dy\right\} \Eq{xmu} $$ and $$ V(x) = -R+\left.\frac{-n(n+2)\left (P P'' - \fr34 P'^2\right ) - 3(n+1) \left( Q P' - 2 P Q'\right) + 3Q^2}{12P}\right|_{z=\zeta(x)}, \Eq{pot} $$ where the primes denote derivatives with respect to $z$. The canonical form \eq{hgz} of the \qes. Hamiltonian $H$ is not unique, since there is a residual symmetry group preserving the Lie algebra $\hn$, given by the adjoint action on $\hn$ of the Lie group of transformations generated by $\frak g^n=\hn\oplus\R$. More precisely, the elements of $\frak g^n$ are the infinitesimal generators of the standard $\GL{2,\R}$ action on the space $\Pn$, given by $$ p(z)\in\Pn\mapsto \hat p(w) = (\gamma w+\delta)^n p\left(\frac{\alpha w+\beta}{\gamma w + \delta}\right),\qquad \pmatrix{\alpha &\beta\cr\gamma &\delta}\in \GL{2,\R}.
\Eq{rhon} $$ We shall denote, as is customary, by $\rho_{n}$ this (irreducible) multiplier representation of $\GL{2,\R}$ on $\Pn$. Note that the action \eq{rhon} is just the composition of the projective transformation $$ z = \frac{\alpha w+\beta}{\gamma w+\delta} $$ and the gauge transformation with gauge factor $\mu(w) = (\gamma w+\delta)^n$. The adjoint action of $\GL{2,\R}$ on $\hn$ induced by \eq{rhon} is given by $$ J(z)\mapsto \Jhat(w) = (\gamma w + \delta)^n\cdot J\left(\frac{\alpha w+\beta}{\gamma w + \delta}\right)\cdot (\gamma w + \delta)^{-n}. \Eq{jt} $$ A straightforward calculation, \rf{GKOnorm}, shows that the generators of $\hn$ transform under the representation $\rho_{2,-1}$---where $\rho_{n,i}=\rho_n\otimes\det^i$, $\det:A\mapsto\det A$ being the standard determinantal representation---independently of $n$. As a consequence of all this, the transformed differential operator $$ \widehat H_{\hbox{\sevenrm gauge}} = (\gamma w + \delta)^n\cdot H_{\hbox{\sevenrm gauge}}\left(\frac{\alpha w+\beta}{\gamma w + \delta}\right)\cdot (\gamma w + \delta)^{-n} \Eq{hghat} $$ is still of the form \eq{hgz}, with $P$, $Q$ and $R$ replaced by appropriate polynomials $\Phat$, $\Qhat$ and $\Rhat$ of respective degrees 4, 2 and 0. It can be shown, \cf\rf{GKOnorm}, that $\Rhat=R$ and $$ \Phat(w) = \frac{(\gamma w+\delta)^4}{\Delta^2} P\left(\frac{\alpha w+\beta}{\gamma w + \delta}\right),\qquad \Qhat(w) = \frac{(\gamma w+\delta)^2}\Delta Q\left(\frac{\alpha w+\beta}{\gamma w + \delta}\right), \Eq{PQ} $$ with $$ \Delta=\det\pmatrix{\alpha&\beta\cr\gamma&\delta}. $$ Hence the polynomials $P$, $Q$ and $R$ determining the differential operator $H_{\hbox{\sevenrm gauge}}$ transform under the representations $\rho_{4,-2}$, $\rho_{2,-1}$ and $\rho_{0}$ of $\GL{2,\R}$.
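Returning to the canonical form \eq{hgz}: since it is a purely algebraic identity, it can be verified symbolically. The following sketch (our own check, not from the paper) expands \eq{hgj} using the realization \eq{jeps} with generic symbolic coefficients (and $c_{+-}=0$) and compares with \eq{hgz}, \eq P--\eq R.

```python
import sympy as sp

z, n = sp.symbols('z n')
cpp, cp0, c00, c0m, cmm = sp.symbols('c_pp c_p0 c_00 c_0m c_mm')
cp, c0, cm, cst = sp.symbols('c_p c_0 c_m c_ast')
f = sp.Function('f')(z)

# the realization \eq{jeps}, acting on a generic function f(z)
Jm = lambda u: sp.diff(u, z)
J0 = lambda u: z*sp.diff(u, z) - n/2*u
Jp = lambda u: z**2*sp.diff(u, z) - n*z*u

# -H_gauge from \eq{hgj}, with c_{ab} symmetric and c_{+-} = 0
mH = (cpp*Jp(Jp(f)) + cp0*(Jp(J0(f)) + J0(Jp(f))) + c00*J0(J0(f))
      + c0m*(J0(Jm(f)) + Jm(J0(f))) + cmm*Jm(Jm(f))
      + cp*Jp(f) + c0*J0(f) + cm*Jm(f) + cst*f)

# the claimed canonical form \eq{hgz} with P, Q, R from \eq P--\eq R
P = cpp*z**4 + 2*cp0*z**3 + c00*z**2 + 2*c0m*z + cmm
Q = cp*z**2 + c0*z + cm
R = n*(n + 2)/12*c00 + cst
claim = (P*sp.diff(f, z, 2) + (Q - (n - 1)/2*sp.diff(P, z))*sp.diff(f, z)
         + (R - n/2*sp.diff(Q, z) + n*(n - 1)/12*sp.diff(P, z, 2))*f)

assert sp.expand(mH - claim) == 0   # the two expressions agree identically
```

The cancellation holds identically in $n$ and in all nine coefficients, which is exactly the content of \eq{hgz}.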
Furthermore, the algebraic eigenfunctions of $H_{\hbox{\sevenrm gauge}}$ clearly transform under the representation $\rho_n$; indeed, if $\chi(z)$ is an eigenfunction of $H_{\hbox{\sevenrm gauge}}$ with eigenvalue $E$ then it follows from \eq{hghat} that $$ \hat\chi(w) = (\gamma w + \delta)^n\cdot \chi\left(\frac{\alpha w+\beta}{\gamma w + \delta}\right) \Eq{chih} $$ is an eigenfunction of $\widehat H_{\hbox{\sevenrm gauge}}$ with the same eigenvalue. In \rf{GKO} and \rf{GKOnorm}, the form-invariance of the differential operator $H_{\hbox{\sevenrm gauge}}$ under the $\GL{2,\R}$ action \eq{hghat} described above was exploited to place $H_{\hbox{\sevenrm gauge}}$ in canonical form. Indeed, it can be shown that there are ten inequivalent real normal forms for a (nonzero) fourth-degree polynomial $P$ \rf{fnote1} transforming under the representation $\rho_{4,-2}$ of $\GL{2,\R}$, each of which leads to a canonical form for $H_{\hbox{\sevenrm gauge}}$. Of these ten canonical forms, five correspond to {\it normalizable} Hamiltonians, whose algebraic eigenfunctions are square-integrable (provided the coefficients $c_{ab}$ and $c_a$ satisfy certain inequalities), and the remaining five are associated to Hamiltonians with periodic potentials. The five normal forms associated to normalizable Hamiltonians, which are characterized by the fact that $P$ has at least one multiple root on the real projective line $\rp$, are given by%
$$\ntable{ \nu(z^2+1),\\ \nu(z^2-1),\hphantom{\bigl(1-k^2(1-z^2)\bigr)}\\ \nu z^2,\\ z,\\ 1,}\Eq{normlist} $$ where $\nu>0$ is a real parameter. For example, the \qes. potential discussed in \rf{BenDun} corresponds to the fourth normalizable canonical form $P(z)=z$. The remaining normal forms, corresponding to periodic potentials, are $$ \Ntable6{ \nu(1-z^2)(1-\kappa^2 z^2),\\ \nu(1-z^2)\bigl(1-\kappa^2(1-z^2)\bigr),\\ \nu(1+z^2)\bigl(1+(1-\kappa^2) z^2\bigr),\\ \nu(1+z^2)^2,\\ \nu(1-z^2), }\Eq{perlist} $$ where $\nu>0$, $0<\kappa<1$.
\Section{rr} The Recursion Relation. Let $H=-\partial_x^2 +V(x)$ be a \qes. Hamiltonian. From the previous section, we know that there is a change of variable \eq{cv} and gauge factor $\mu(z)>0$ such that $H(x)=\left.\mu(z)\cdot H_{\hbox{\sevenrm gauge}}(z)\cdot\frac1{\mu(z)}\right|_{z=\zeta(x)}$, with $H_{\hbox{\sevenrm gauge}}$ given by \eq{hgj} (and $c_{+-}=0$). Furthermore, if $H$ is fully algebraic then it has $n+1$ algebraic eigenfunctions of the form \eq{algwf}, with $\chi(z)\in\CP_n$ an eigenfunction of $H_{\hbox{\sevenrm gauge}}$. Let $\chi_E(z)$ be an eigenfunction of $H_{\hbox{\sevenrm gauge}}$ with eigenvalue $E$ (not necessarily a polynomial in $z$). Writing $$ \chi_E(z) = \sum_{k=0}^\infty P_k(E)\chi_k(z), \Eq{chie} $$ where $$ \chi_k(z) = \frac{z^k}{k!},\qquad k\ge0, $$ and taking into account that $$ J_-\cdot\chi_k = \chi_{k-1},\qquad J_0\cdot\chi_k = \left(k-\frac n2\right)\chi_k,\qquad J_+\cdot\chi_k = (k-n)(k+1)\chi_{k+1}, \Eq{jaction} $$ \cf \eq{jeps}, we easily find that the coefficients $P_k(E)$ satisfy the following five-term recursion relation: \goodbreak $$ \Eeqn{ -c_{--}P_{k+2} &= \left[(2k-n+1)c_{0-}+c_-\right]P_{k+1}\\&\quad{} + \left[E+c_*+c_0\bigl(k-\frac n2\bigr)+c_{00}\bigl(k-\frac n2\bigr)^2\right]P_{k}\\ &\quad {}+k(k-1-n)\left[(2k-n-1)c_{+0}+c_+\right]P_{k-1}\\&\quad{} +k(k-1)(k-1-n)(k-2-n)\,c_{++}P_{k-2},\qquad k\ge 0. \Eq{grr}} $$ If $c_{--}\ne0$, the general solution of the recursion relation \eq{grr} depends on the two arbitrary functions $P_0(E)$ and $P_1(E)$. This simply reflects the fact that when $c_{--}\ne0$ the leading coefficient $P(z)$ of $H_{\hbox{\sevenrm gauge}}$ does not vanish at $z=0$ (\cf\eq P); thus, the differential equation $(H_{\hbox{\sevenrm gauge}}-E)\,\chi_E = 0$ has a regular point at the origin, and therefore it admits two linearly independent solutions \eq{chie} analytic at 0.
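The recursion \eq{grr} can be tested directly against \eq{hgj}: truncating the series \eq{chie} and applying $-H_{\hbox{\sevenrm gauge}}$, the low-order Taylor coefficients of $(-H_{\hbox{\sevenrm gauge}}+E)\chi_E$ must vanish. A sketch of our own (sample coefficient values of our choosing, with $c_{--}\ne0$):

```python
import sympy as sp
from sympy import Rational

z = sp.symbols('z')
n, K = 4, 12                                 # spin and truncation order
cpp, cp0, c00, c0m, cmm = 1, 2, -1, 3, 5     # sample values, c_{--} != 0
cp, c0, cm, cst = 2, -3, 1, 7
E = Rational(9, 2)                           # arbitrary spectral parameter

# generate P_k via \eq{grr}, solving for P_{k+2}
P = [sp.Integer(1), sp.Integer(2)]           # seeds P_0, P_1
for k in range(K):
    rhs = (((2*k - n + 1)*c0m + cm)*P[k+1]
           + (E + cst + c0*(k - Rational(n, 2))
              + c00*(k - Rational(n, 2))**2)*P[k]
           + k*(k - 1 - n)*((2*k - n - 1)*cp0 + cp)*(P[k-1] if k >= 1 else 0)
           + k*(k - 1)*(k - 1 - n)*(k - 2 - n)*cpp*(P[k-2] if k >= 2 else 0))
    P.append(-rhs/cmm)

chi = sum(P[k]*z**k/sp.factorial(k) for k in range(K + 2))   # truncated \eq{chie}

# -H_gauge from \eq{hgj}, via the realization \eq{jeps}
Jm = lambda u: sp.diff(u, z)
J0 = lambda u: z*sp.diff(u, z) - Rational(n, 2)*u
Jp = lambda u: z**2*sp.diff(u, z) - n*z*u
Lchi = (cpp*Jp(Jp(chi)) + cp0*(Jp(J0(chi)) + J0(Jp(chi))) + c00*J0(J0(chi))
        + c0m*(J0(Jm(chi)) + Jm(J0(chi))) + cmm*Jm(Jm(chi))
        + cp*Jp(chi) + c0*J0(chi) + cm*Jm(chi) + cst*chi)

# the recursion kills the coefficients of z^0, ..., z^{K-1} in (-H_gauge + E) chi
resid = sp.Poly(sp.expand(Lchi + E*chi), z)
assert all(resid.coeff_monomial(z**k) == 0 for k in range(K))
```

Only the coefficients beyond the truncation order survive, exactly as expected from a termwise comparison with \eq{grr}.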
If $P_0(E)$ and $P_1(E)$ are chosen to be polynomials in $E$, then \eq{grr} implies that all the coefficients $P_k(E)$ are polynomials in $E$. However, the general recursion relation \eq{grr} suffers from two major drawbacks. In the first place, even if we choose $P_0(E)$ and $P_1(E)$ as polynomials of degree 0 and 1 in $E$, respectively, \eq{grr} is incompatible with the desirable property that $P_k(E)$ be of degree $k$ in $E$ for all $k$, unless $c_{--}=0$. Secondly, even in this case \eq{grr} will be in general a four-term recursion relation, implying that the polynomials $P_k(E)$ may not be orthogonal with respect to any (nonzero) Stieltjes measure $d\omega(E)$. Indeed, it is well known, \rf{Chi}, \rf{Erd}, that a necessary and sufficient condition for a family of polynomials $\left\{P_k\right\}_{k=0}^\infty$ (with $\deg P_k=k$) to form an orthogonal polynomial system is that $P_k$ satisfies a {\it three\/}-term recursion relation of the form $$ P_{k}=(A_k E + B_k) P_{k-1} + C_{k} P_{k-2},\qquad k\ge1, \Eq{rr} $$ where the coefficients $A_k,B_k,C_k$ are {\it independent of $E$,} $A_k\ne0$, $C_1=0$, and $C_k\ne0$ for $k\ge2$. If the coefficient $C_k$ in \eq{rr} vanishes for some integer $k\ge2$, then this recursion relation only defines a {\it weakly} orthogonal polynomial system \rf{fnote2}. It is one of the main goals of this paper to show that both difficulties described above can always be overcome, provided (roughly speaking) that we expand the eigenfunction $\chi_E$ with respect to an appropriate variable. This will be achieved by using the non-uniqueness of $H_{\hbox{\sevenrm gauge}}$, due to the $\GL2$ symmetry described in the previous section, to place $H_{\hbox{\sevenrm gauge}}$ in a suitable canonical form. From the form of the recursion relation \eq{grr}, it follows that both difficulties described above disappear if $$ c_{--} = c_{++} = 0.
\Eq{cond} $$ Indeed, if \eq{cond} holds then \eq{grr} reduces to the three-term recursion relation $$ \seqn{ -\left[(2k-n-1)c_{0-}+c_-\right]P_{k}=\\ \kern3em\left[E+c_*+c_0\bigl(k-\frac n2-1\bigr)+c_{00}\bigl(k-\frac n2-1\bigr)^2\right]P_{k-1}\\ \kern3em{}+(k-1)(k-2-n)\left[(2k-n-3)c_{+0}+c_+\right]P_{k-2}, \qquad k\ge 1, \Eq{trr}} $$ which uniquely determines all the functions $P_k(E)$ in terms of $P_0(E)$ provided that, for all positive integer values of $k$, the coefficient on the left-hand side of \eq{trr} does not vanish. If $P_0(E)$ is taken as a constant, for instance if $P_0(E)=1$, then \eq{trr} implies that $P_k(E)$ is a polynomial of degree $k$ in $E$ for all $k\ge0$. Let us see now that we can always arrange for \eq{cond} to be satisfied, by using the action \eq{PQ} to transform $P(z)$ into a normal form $\Phat(w)$ for which \eq{cond} holds. Indeed, \eq{cond} simply states that the polynomial $P(z)$ vanishes at $z=0$ and $z=\infty$, when $z$ is allowed to vary over the complex projective line $\Bbb{CP}$. Note that we need $z$ to belong to the {\it complex} projective line at this stage so that $P$ is guaranteed to have a root, which is essential for the argument that follows. Consequently, the $\GL{2,\R}$ action described in the previous section will be replaced in what follows by a $\GL{2,\C}$ action. We can assume, first of all, that $P(z)$ is one of the normal forms listed in equations \eq{normlist} and \eq{perlist}. We must distinguish three cases, characterized by the position of the roots of $P$ in the complex projective line. Indeed, either $P$ has two different roots $z_1\ne z_2$ in $\Bbb{CP}$, or it has four coincident roots. In the first case, either one of the roots is at infinity, or both roots are finite.
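Before turning to the case analysis, we note that \eq{trr} is trivial to iterate; a short sketch of our own (with sample coefficient values chosen so that the left-hand coefficient never vanishes) confirms that taking $P_0(E)=1$ forces $\deg P_k=k$:

```python
import sympy as sp

E = sp.symbols('E')
n = 4
# sample values with c_{--} = c_{++} = 0 (our own, for illustration only)
c0m, cm, c00, c0, cp0, cp, cst = 1, 10, 2, -1, 3, 2, 5

P = [sp.Integer(1)]                                  # P_0 = 1
for k in range(1, 10):
    lhs = (2*k - n - 1)*c0m + cm                     # = 2k + 5, never zero here
    rhs = ((E + cst + c0*(k - sp.Rational(n, 2) - 1)
            + c00*(k - sp.Rational(n, 2) - 1)**2)*P[k-1]
           + (k - 1)*(k - 2 - n)*((2*k - n - 3)*cp0 + cp)
             *(P[k-2] if k >= 2 else 0))
    P.append(sp.expand(-rhs/lhs))                    # the \eq{trr} step

# each P_k is a polynomial of degree exactly k in E
assert all(sp.degree(P[k], E) == k for k in range(10))
```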
{\it Case 1:} $P$ has two different roots $z_1\ne z_2=\infty$.%
\par\noindent This case occurs when $P$ is one of the first four normalizable canonical forms \eq{normlist}, or the fifth periodic canonical form \eq{perlist}. In this case, the translation $w=z-z_1$ transforms $P(z)$ into a polynomial $\Phat(w)$ vanishing at zero and infinity. In the original $z$ coordinate, by \eq{chih} this amounts to replacing \eq{chie} by $$ \chi_E(z) = \sum_{k=0}^\infty P_k(E)\frac{(z-z_1)^k}{k!}. \Eq{alt1} $$ In other words, we expand $\chi_E(z)$ as a power series around the point $z=z_1$, which is a singular point of the linear differential equation $(H_{\hbox{\sevenrm gauge}}-E)\chi_E=0$ (if $z_1$ is a simple root of $P$, $z_1$ is actually a regular singular point, whose indicial equation is easily seen to have 0 as a root). By \eq{algwf}, in the ``physical'' coordinate $x$ \eq{alt1} becomes $$ \psi_E(x) = \mu\bigl(\zeta(x)\bigr)\, \sum_{k=0}^\infty P_k(E)\frac{(\zeta(x)-z_1)^k}{k!}. \Eq{xalt1} $$
{\it Case 2:} $P$ has two different finite roots $z_1\ne z_2$.%
\par\noindent This is the case when $P$ is one of the first four periodic normal forms \eq{perlist}. The projective transformation $w=(z-z_1)/(z-z_2)$ will again transform $P(z)$ into a polynomial $\Phat(w)$ vanishing at $w=0,\infty$. Going back to the original $z$ coordinate, by \eq{chih} we just have to replace \eq{chie} by $$ \chi_E(z) = (z-z_2)^n\,\sum_{k=0}^\infty\frac1{k!} P_k(E)\left(\frac{z-z_1}{z-z_2}\right)^k, \Eq{alt2} $$ apart from an inessential overall factor. In terms of the physical coordinate $x$, \eq{alt2} can be written as $$ \psi_E(x) = \mu\bigl(\zeta(x)\bigr)\, (\zeta(x)-z_2)^n\,\sum_{k=0}^\infty\frac1{k!} P_k(E)\left(\frac{\zeta(x)-z_1}{\zeta(x)-z_2}\right)^k. \Eq{xalt2} $$
{\it Case 3:} $P$ has a quadruple root.%
\par\noindent This corresponds to the fifth normalizable canonical form, $P=1$, which has a quadruple root at infinity.
Note that $P=1$ implies that the physical coordinate $x$ can be taken as the canonical coordinate $z$. By \eq{P}, we have $$ c_{++}=c_{+0}=c_{00}=c_{0-}=0,\qquad c_{--}=1. $$ Performing an additional translation, if necessary, we can also take without loss of generality $c_-=Q(0)=0$ (notice that $P$ is constant, and therefore does not change under translations). Thus equation \eq{grr} reduces in this case to $$ -P_{k+2} = \left[E+c_*+c_0\bigl(k-\frac n2\bigr)\right]P_{k} +k(k-1-n) c_+ P_{k-1},\qquad k\ge 0. \Eq{ghr} $$ Since $P=1$ is the fifth normalizable case of references \rf{GKOnorm}, \rf{GKO}, $c_+$ must vanish if we want $H$ to be {\it normalizable}, \ie the algebraic eigenfunctions of $H$ to be square-integrable. Therefore, in this case \eq{ghr} reduces to $$ -P_{k+2} = \left[E+c_*+c_0\bigl(k-\frac n2\bigr)\right]P_{k},\qquad k\ge 0, $$ which is equivalent to two two-term recursion relations for the even and odd coefficients $P^0_j=P_{2j}$ and $P^1_j=P_{2j+1}$, namely $$ -P^\eps_{j+1} = \left[E+c_*+c_0\bigl(2j+\eps-\frac n2\bigr)\right]P^\eps_{j},\qquad j\ge 0; \quad\eps=0,1. \Eq{reo} $$ Note that in this case the potential is $V(x)=\frac14c_0^2x^2-c_*$ (with $c_0<0$), \cf \rf{GKOnorm}. To complete the discussion of Cases 1 and 2, we still have to deal with an important technical issue; namely, we must find under what conditions the coefficient of $P_k$ in \eq{trr} never vanishes for positive integer values of $k$. Let $\Phat$ and $\Qhat$ be the transforms of $P$ and $Q$ under the projective transformation $z\mapsto w$ defined in the foregoing discussion of Cases 1 and 2; note that, by construction, $\Phat(w)$ vanishes at $w=0,\infty$. The coefficient of interest can be expressed as $$ (2k-n-1)\,\hat c_{0-}+\hat c_-,\qquad k\ge1, \Eq{ccoeff} $$ where $$ \hat c_{0-}=\frac12\Phat'(0),\qquad \hat c_-=\Qhat(0). 
$$ From \eq{PQ} it easily follows that $$ \hat c_{0-}=\frac12 P'(z_1),\qquad \hat c_-=Q(z_1) \Eq{ctrans} $$ for Case 1 ($w=z-z_1$), and $$ \hat c_{0-}=\frac{P'(z_1)}{2(z_1-z_2)},\qquad \hat c_-=\frac{Q(z_1)}{z_1-z_2} \Eq{cproj} $$ for Case 2 ($w=(z-z_1)/(z-z_2)$). We shall now distinguish three subcases:
{\it Case i.} $z_1$ is a simple real root of $P$%
\par\noindent This case occurs when $P$ is one of the canonical forms 2, 4, 6, 7, or 10. Note that in this case the mapping $z\mapsto w$ is real, and so are the coefficients $\hat c_{0-}$, $\hat c_-$. From \eq{xmu} and \eq{ctrans}--\eq{cproj} one immediately deduces the asymptotic formulas $$x \mathop{\sim}\limits_{z\to z_1} \abs{z-z_1}^{\frac12},\qquad \mu(z) \mathop{\sim}\limits_{z\to z_1} \abs{z-z_1}^{\frac14\left(\frac{\hat c_-}{\hat c_{0-}}-n\right)}, $$ where we have dropped unessential constant multiplicative factors from the right-hand side, and have taken for convenience $z_1$ as the lower limit of the integral giving $x$ in terms of $z$. We saw in the previous section that when $H$ is fully algebraic it has $n+1$ linearly independent algebraic eigenfunctions of the form \eq{algwf}, where $\chi\in\Pn$. It follows that the polynomial factor $\chi(z)$ cannot vanish at the origin for all the algebraic eigenfunctions of $H$. Hence there is at least one algebraic eigenfunction of $H$ whose asymptotic behavior at $x=0$ is given by $$ \psi(x)\mathop{\sim}\limits_{x\to0}\abs x^{\frac12\left(\frac{\hat c_-}{\hat c_{0-}}-n\right)}. $$ If {\it all} the algebraic eigenfunctions of $H$ are regular at $x=0$, then we must have $$ \frac{\hat c_-}{\hat c_{0-}}-n\ge 0. \Eq{cineq} $$ Since \eq{ccoeff} can be written as $$ 2\hat c_{0-}\left[ \frac12\left(\frac{\hat c_-}{\hat c_{0-}}-n\right) + \left(k-\frac12\right)\right], $$ it follows from \eq{cineq} that the coefficient \eq{ccoeff} cannot vanish in this case.
{\it Case ii.} $z_1$ is a simple complex root of $P$%
\par\noindent In this case $P$ is either the first or the eighth canonical form. Since $z_1$ is not real, the mapping $z\mapsto w$ is not real either, and the above asymptotic argument is not valid (the eigenfunctions of $H$ need not be regular outside the real axis). For the first canonical form \eq{normlist}, we can take $w=z-i$ and therefore $$ \hat c_{0-} = i\,\nu,\qquad \hat c_- = c_- - c_+ +i\,c_0 $$ from \eq{ctrans}. Hence the coefficient \eq{ccoeff} does not vanish in this case provided that the following conditions are satisfied: $$ c_-\ne c_+ \qquad\hbox{\rm or}\qquad \frac12\left(n+1-\frac{c_0}\nu\right)\ne 1,2,\dots. \Eq{c1} $$ It is easily checked that the choice $w=z+i$ leads exactly to the same conditions. For the eighth canonical form, we can take $w=(z-i)/(z+i)$, and therefore, from \eq{cproj} $$ \hat c_{0-} = \frac12\nu \kappa^2,\qquad \hat c_- = \frac{c_0}2+\frac i2(c_+-c_-). $$ Hence in this case the conditions for the coefficient \eq{ccoeff} not to vanish are $$ c_-\ne c_+ \qquad\hbox{\rm or}\qquad \frac12\left(n+1-\frac{c_0}{\nu\kappa^2}\right)\ne 1,2,\dots. \Eq{c8} $$ It is straightforward to check that the choice $w=(z+i)/(z-i)$ yields the same conditions, while the other natural choice $w=(\sqrt{1-\kappa^2}z\mp i)/(\sqrt{1-\kappa^2}z\pm i)$ only has the effect of replacing the first condition \eq{c8} by $c_+\ne(1-\kappa^2)c_-$.
{\it Case iii.} $z_1$ is a multiple root of $P$%
\par\noindent This case takes place when $P$ is either the third or the ninth canonical form, and in both cases \eq{ccoeff} reduces to $\hat c_-$. For the third canonical form \eq{normlist}, if $c_-\ne0$ then we take $w=z$, and therefore $\hat c_-=c_-\ne0$. If $c_-=0$, then $c_+\ne0$ if all the algebraic eigenfunctions of $H$ are square-integrable (see \rf{GKOnorm}). Hence, taking $w=1/z$, we get $\Phat(w)=\nu w^2$ and $\Qhat=-(c_++c_0 w)$, so that $\hat c_{0-}=0$ and $\hat c_- = -c_+\ne0$.
Hence the coefficient \eq{ccoeff} cannot vanish in this case. Finally, if $P$ is the ninth canonical form \eq{perlist} then $w=(z-i)/(z+i)$ and $$ \hat c_- = \frac{c_0}2+\frac i2(c_+-c_-). $$ Hence \eq{ccoeff} will not vanish if $$ c_+\ne c_-\qquad\hbox{\rm or}\qquad c_0\ne 0. \Eq{c9} $$ Note that when \eq{c9} does not hold $V$ reduces to a constant potential: $$ V = \frac{c_-^2}{4\nu}-\frac5{12} n(n+2)-c_*. $$ In summary, the previous analysis shows that the critical coefficient \eq{ccoeff} cannot vanish for any positive integer $k$ provided that $V$ is fully algebraic, that all its algebraic eigenfunctions are regular (or square-integrable, for the third normalizable canonical form \eq{normlist}), and that conditions \eq{c1}, \eq{c8}, and \eq{c9} are satisfied when $P$ is one of the normal forms 1, 8 or 9, respectively. If \eq{ccoeff} does not vanish, defining new polynomials $\Phat_k$ by $$ P_k = \cases{\displaystyle \frac{(-1)^k}{(2\hat c_{0-})^k}\, \frac{\Phat_k}{\Gamma\left(\frac{\hat c_-}{2\hat c_{0-}}+k-\frac n2+\frac12\right)},& \qquad if $\hat c_{0-}\ne0$;\cr\displaystyle \frac{(-1)^k}{\hat c_-^k}\,\Phat_k,&\qquad if $\hat c_{0-}=0$ } \Eq{phatk} $$ the recursion relation \eq{trr} can be written in the more standard form $$ \seqn{ \Phat_{k+1} = \left[E+c_*+\hat c_0\left(k-\frac n2\right)+\hat c_{00}\left(k-\frac n2\right)^2\right]\,\Phat_k\\\quad- k(k-n-1)\left[\hat c_{+0}(2k-n-1)+\hat c_+\right] \left[\hat c_{0-}(2k-n-1)+\hat c_-\right]\Phat_{k-1}, \quad k\ge0. \Eq{nrr} } $$ We have thus proved the main theorem in this section: \Th{main} Let\/ $V$ be a fully algebraic one-dimensional \qes. potential whose algebraic eigenfunctions are all regular \(or normalizable, if\/ $V$ corresponds to the third or fifth canonical forms in \eq{normlist}\). Assume, furthermore, that conditions \eq{c1}, \eq{c8} or \eq{c9} are satisfied, if\/ $V$ is obtained from the first, eighth or ninth canonical forms \eq{normlist}--\eq{perlist}, respectively.
Then\/ $V$ defines a family of weakly orthogonal polynomials $\bigl\{\Phat_k\bigr\}_{k=0}^\infty$ satisfying a three-term recursion relation \eq{nrr} \(or \eq{reo}, if\/ $V$ corresponds to the fifth canonical form\). The polynomials $\Phat_k$ are defined by \eq{phatk} and \eq{xalt1}, if\/ $V$ is associated to one of the canonical forms 1--4 or 10, or by \eq{phatk} and \eq{xalt2}, if\/ $V$ corresponds to one of the normal forms 6--9. Finally, the potential\/ $V$ associated to the fifth canonical form defines two families of weakly orthogonal polynomials $P^0_j=P_{2j}$ and $P^1_j=P_{2j+1}$ through \eq{xalt1}. We shall say that a \qes. potential $V$ is {\it exactly solvable} if it is independent of the ``spin'' parameter $n$. This implies that $V$ has $n+1$ algebraic eigenvalues and eigenfunctions for arbitrary $n\in\Nn$, so that we can algebraically compute an infinite number of eigenvalues of $V$ (leaving aside the boundary conditions). All exactly solvable normalizable one-dimensional potentials have been classified; see \rf{GKOnorm} for a complete list. The quintessential example of an exactly solvable one-dimensional potential is the harmonic oscillator potential, which corresponds to the fifth canonical form \eq{normlist}. We have seen above that in this case there are two families of orthogonal polynomials (the odd and even coefficients in \eq{xalt1}), each of which satisfies a two-term recursion relation \eq{reo}. We shall now show that, as conjectured in \rf{BenDun}, the latter property actually characterizes exactly solvable normalizable potentials: \Th{es} The weakly orthogonal polynomial system associated to an exactly solvable normalizable potential satisfies a two-term recursion relation. \Proof The proof is a simple case-by-case analysis using the classification of exactly solvable normalizable potentials given in \rf{GKOnorm}.
Indeed, for the first normalizable canonical form $P(z)=\nu(z^2+1)$ we have $w=z\mp i$, and therefore $\Phat(w) = P(w\pm i) = \nu w(w\pm 2i)$, so that $\hat c_{+0}=0$. Since $\Qhat(w) = Q(w\pm i)$, we also have $\hat c_+ = \Qhat''(0)/2 = Q''(\pm i)/2 = c_+$. But the exactly solvable potentials associated to this normal form are characterized by the vanishing of $c_+$, \rf{GKOnorm}, so that $\hat c_+=\hat c_{+0}=0$, and \eq{nrr} is a two-term recursion relation. Similarly, for the second normalizable canonical form, $P(z)=\nu(z^2-1)$ and, for instance, $w=z\mp1$. Proceeding as before we obtain that $\hat c_{+0}=0$ and $\hat c_+=c_+$. Since exactly solvable potentials are again those satisfying the condition $c_+=0$, \eq{nrr} reduces to a two-term recursion relation. The third normalizable canonical form has $P(z)=\nu z^2$, and therefore $c_{+0}=c_{0-}=0$. The exactly solvable potentials are characterized by the vanishing of one of the coefficients $c_+$ or $c_-$, but not of both simultaneously. In the former case we can take $w=z$, while in the latter $w$ is proportional to $1/z$ (see the foregoing discussion on the vanishing of the critical coefficient \eq{ccoeff}). In either case, the coefficient of $P_{k-1}$ in \eq{nrr} vanishes identically. The fourth normalizable canonical form is given by $P(z)=z$, so that $w=z$ and $\hat c_{+0}=c_{+0}=0$, and its exactly solvable potentials are defined by the condition $c_+=\hat c_+=0$, so that \eq{nrr} is two-term. Finally, for the fifth normalizable canonical form $P(z)=1$ all normalizable potentials are automatically exactly solvable (they are translates of the harmonic oscillator), and we have already seen that its associated orthogonal polynomials satisfy the two-term recursion relations \eq{reo}. \qed
\Section{op} The Orthogonal Polynomials. We shall study in this section the properties of the family of weakly orthogonal polynomials associated to a \qes.
one-dimensional Hamiltonian in the manner described in the previous section. Since, as we shall see, these properties can be established directly from the recursion relation \eq{nrr} or \eq{reo}, these polynomials have basically the same properties as those studied by Bender and Dunne in \rf{BenDun}. We have seen in the previous section that the polynomials $\Phat_k(E)$ defined by a \qes. one-dimensional Hamiltonian satisfy a three-term recursion relation of the form $$ \Phat_{k+1} = (E-b_k)\,\Phat_k-a_k\,\Phat_{k-1},\qquad k\ge0, \Eq{crr} $$ with $a_0=0$ and $$ a_{n+1} = 0. \Eq{an1} $$ For the fifth canonical form, the polynomials $P^0_k$ and $P^1_k$ also satisfy a recursion relation of the form \eq{crr}, with $a_k=0$ for all $k\ge0$. Note that the coefficients $a_k$, $b_k$ in \eq{crr} are guaranteed to be real only for the canonical forms 2--7 and 10 (for which $P$ has a real root). As remarked in the previous section, the vanishing of $a_k$ for a positive integer value of $k$ means that the polynomials $\Phat_k$ are only weakly orthogonal. In particular, many classical results, based on the fact that $a_k>0$ (or sometimes $a_k\ge0$) for $k\ge 1$, cannot be applied in our case. By Favard's theorem, \rf{Chi}, there is a {\it moment functional}, that is, a linear functional $\CL$ acting on the space $\C[E]$ of (complex) univariate polynomials, such that the polynomials $\Phat_k$ are orthogonal under $\CL$: $$ \CL(\Phat_k\,\Phat_l) = \gamma_k\,\delta_{kl},\qquad k,l\in\Nn. \Eq{orth} $$ The functional $\CL$ is unique if we impose the normalization condition $\CL(\Phat_0)=\CL(1)=1$. It is also known (Boas's theorem, \rf{Chi}) that there is a (not necessarily unique) function of bounded variation $\omega$ such that $$\CL(p)=\int_{-\infty}^{\infty}p(E)\,d\omega(E) \Eq{omega}$$ for an arbitrary polynomial $p$.
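As a numerical illustration of our own (not part of the paper), the moments of $\CL$ are determined recursively by $\CL(1)=1$ and $\CL(\Phat_k)=0$ for $k\ge1$ (a triangular system, since the $\Phat_k$ are monic), and the orthogonality relations \eq{orth} can then be checked directly from \eq{crr} for sample recursion coefficients with $a_{n+1}=0$:

```python
import math
import sympy as sp

E = sp.symbols('E')
n, K = 3, 6
# sample data for \eq{crr} (our choices): a_k > 0 for 1 <= k <= n, a_{n+1} = 0
a = [0, 2, 3, 1, 0, 4, 5, 1, 2, 3, 1, 2]
b = [1, -1, 2, 0, 1, -2, 3, 0, 1, 2, -1, 0]

Phat = [sp.Integer(1), sp.expand(E - b[0])]
for k in range(1, 2*K):
    Phat.append(sp.expand((E - b[k])*Phat[k] - a[k]*Phat[k-1]))

# moments of \CL, fixed by \CL(1) = 1 and \CL(\hat P_k) = 0 for k >= 1
mu = [sp.Integer(1)]
for k in range(1, 2*K + 1):
    c = sp.Poly(Phat[k], E).all_coeffs()[::-1]      # ascending; c[k] = 1 (monic)
    mu.append(-sum(c[j]*mu[j] for j in range(k)))

def L(p):   # the moment functional applied to a polynomial p(E)
    c = sp.Poly(p, E).all_coeffs()[::-1]
    return sum(cj*mu[j] for j, cj in enumerate(c))

# the squared norms come out as products of the a_j, and vanish beyond k = n
gamma = [sp.Integer(1)] + [sp.Integer(math.prod(a[1:k+1])) for k in range(1, K+1)]
for k in range(K + 1):
    for l in range(K + 1):
        assert L(sp.expand(Phat[k]*Phat[l])) == (gamma[k] if k == l else 0)
assert all(g == 0 for g in gamma[n+1:])
```

The computed norms anticipate the closed formula derived next from \eq{crr}.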
The coefficient $\gamma_k=\CL(\Phat_k^2)$, which therefore plays the role of the square of the norm of $\Phat_k$, can be computed by multiplying \eq{crr} by $\Phat_{k-1}$ and taking $\CL$ of both sides, obtaining $$ 0 = \gamma_k-a_k\gamma_{k-1},\qquad k\ge 1. $$ Taking into account that $\gamma_0=\CL(1)=1$ we get $$ \gamma_k = \prod_{j=1}^k a_j,\qquad k\ge1. \Eq{gamma} $$ In particular, from this formula follows one of the key properties of the weakly orthogonal polynomial system associated to a one-dimensional \qes. Hamiltonian. Namely, from \eq{an1} we have $$ \gamma_k=0,\qquad k\ge n+1, $$ so that {\it all the polynomials $\Phat_k$ with $k\ge n+1$ have zero norm.} From this formula it also follows that the ``squared norms'' $\gamma_k$ will be positive for $k\le n$ if and only if $a_k>0$ for $1\le k\le n$. It can be shown by a straightforward computation that this is always the case when $P$ is one of canonical forms 2--4 in \eq{normlist}, assuming that all the eigenfunctions of $H$ are square-integrable and that $H$ is {\it not} exactly solvable. Note also that when $H$ is normalizable (canonical forms 1--5 in \eq{normlist}) and exactly solvable then $a_k=0$ for all $k\ge0$. Hence the square norms of all the polynomials $\Phat_k$ vanish, from which, by \eq{crr}, it easily follows that $\CL=\delta(E-b_0)$. Other important properties of the polynomials $\Phat_k$ concern their zeros. Classically, \rf{Chi}, it can be shown that if $a_k>0$ for all $k\in\Nn$ then the zeros of the polynomials $\Phat_k$ satisfying a three-term recursion relation \eq{crr} are real and simple. In our case the condition $a_k>0$ for all $k\in\Nn$ can never hold on account of \eq{an1}. However, if $H$ is fully algebraic it can still be proved that all the zeros of $\Phat_{n+1}$ are real and simple. Indeed, by hypothesis $H$ is self-adjoint on the space ${\frak N}$ of functions of the form \eq{algwf}, with $\chi\in\CP_n$.
Hence $H$ has $n+1$ linearly independent algebraic eigenfunctions lying in ${\frak N}$, whose corresponding eigenvalues are real (by self-adjointness) and distinct ($H$ being a one-dimensional Sturm--Liouville operator). Let us denote by $E_0< E_1<\dots< E_n$ these $n+1$ real eigenvalues of $H$ on ${\frak N}$, and by $\psi_l(x)\equiv\psi_{E_l}(x)$ the eigenfunction corresponding to the eigenvalue $E_l$. Then \eq{algwf} and either \eq{xalt1} or \eq{xalt2} imply that $P_k(E_l)=0$, or equivalently $\Phat_k(E_l)=0$, for $k\ge n+1$ and $0\le l\le n$. In particular, since $\Phat_{n+1}$ is of degree $n+1$ and all the eigenvalues $E_l$ are different, it follows that $$ \Phat_{n+1}(E) = \prod_{l=0}^n(E-E_l), \Eq{pn1} $$ where we have used \eq{crr} and the fact that $\Phat_0=1$. In other words, {\it $\Phat_{n+1}$ has $n+1$ simple real zeros at the $n+1$ algebraic eigenvalues of $H$.} Furthermore, from the fact that $\Phat_{k}$ vanishes at $E_l$ for $k\ge n+1$ we conclude that there exist monic polynomials $Q_k$ of degree $k$ such that $$ \Phat_{k+n+1} = Q_k \Phat_{n+1},\qquad k\ge 0. \Eq{fp} $$ This is the so-called {\it factorization property} of the polynomial system $\{\Phat_k\}_{k\in\Nn}$, \cf \rf{BenDun}. Note that the vanishing of $\Phat_k(E_l)$ for all $k\ge n+1$ is consistent with the recursion relation on account of \eq{an1}. In fact, when $a_k$ is positive for $k\ge1$ and $b_k$ is real for $k\ge0$, \eq{pn1} follows directly from the recursion relation by \lm{lopos}, without using the fact that the polynomials $\Phat_k$ are associated to a fully algebraic \qes. one-dimensional Hamiltonian. The vanishing of $\Phat_k(E_l)$ for $k>n+1$ is then an immediate consequence of $\Phat_{n+1}(E_l)=0$, the recursion relation \eq{crr} and \eq{an1}.
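The factorization property can be checked numerically on a toy recursion. The coefficients below are invented for illustration, with $n=2$ so that $a_3=0$ plays the role of $a_{n+1}=0$; the resulting $\Phat_3=E^3-4E$ has the simple real roots $-2,0,2$, and every later polynomial vanishes at all of them.

```python
# Toy check of the factorization property: once a_{n+1} = 0 (here n = 2,
# a_3 = 0), all polynomials \hat P_k with k >= n+1 share the n+1 simple
# real zeros of \hat P_{n+1}. The coefficients are invented for illustration.

def horner(coeffs, x):
    """Evaluate a polynomial given by ascending coefficients at x."""
    v = 0.0
    for c in reversed(coeffs):
        v = v * x + c
    return v

a = [0, 2, 2, 0, 5]          # a_3 = 0 mimics the condition a_{n+1} = 0
b = [0, 0, 0, 0, 0]

polys = [[1], [0, 1]]        # \hat P_0 = 1, \hat P_1 = E (since b_0 = 0)
for k in range(1, 5):
    cur, prev = polys[-1], polys[-2]
    nxt = [0] + cur                          # E * \hat P_k
    for i, c in enumerate(cur):
        nxt[i] -= b[k] * c
    for i, c in enumerate(prev):
        nxt[i] -= a[k] * c
    polys.append(nxt)

# \hat P_3 = E^3 - 4E, with roots -2, 0, 2; \hat P_4 = E \hat P_3 and
# \hat P_5 = (E^2 - 5) \hat P_3 both vanish at all three roots.
roots = [-2.0, 0.0, 2.0]
vanishing = all(abs(horner(p, r)) < 1e-9 for p in polys[3:] for r in roots)
```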
From the previous equation and \eq{crr} it follows that the polynomials $Q_k$ also satisfy a three-term recursion, namely $$ Q_{k+1} = (E-b_{k+n+1}) Q_k - a_{k+n+1} Q_{k-1},\qquad k\ge0, $$ and are therefore orthogonal with respect to an appropriate moment functional $\CL_Q$ (in general different from $\CL$). It was heuristically argued in \rf{KUW} that $$ \CL = \sum_{j=0}^n \omega_j\,\delta(E-E_j) \Eq{mf} $$ on $\C[E]$, where the coefficients $\omega_j$ are defined by $$ \sum_{l=0}^n \Phat_k(E_l)\,\omega_l = \delta_{k0},\qquad k=0,1,\dots,n. \Eq{os} $$ Equivalently, the discrete Stieltjes measure $d\hat \omega(E)$ defined by the function $$ \hat \omega(E) = \sum_{j=0}^n \omega_j\,\theta(E-E_j), \Eq{wd} $$ where $\theta(t)$ is Heaviside's step function, satisfies \eq{omega}. Note that the linear system \eq{os} uniquely defines the $n+1$ constants $\omega_j$, since by \eq{alt1} or \eq{alt2} its coefficient matrix is the matrix of the change of basis from $\left\{c_k(z-z_1)^k/k!\right\}_{k=0}^n$ or $\left\{c_k(z-z_1)^k(z-z_2)^{n-k}/k!\right\}_{k=0}^n$ to $\left\{\chi_{E_l}\right\}_{l=0}^n$ in $\C\otimes\CP_n$, $c_k$ being the coefficient of $\Phat_k$ in \eq{phatk}. It is not difficult to show rigorously that \eq{orth} is satisfied. Indeed, by the uniqueness of $\CL$ this is equivalent to showing that if $\CL_0 = \sum_{j=0}^n \omega_j\,\delta(E-E_j)$ then $$ \CL_0(\Phat_k\Phat_l) = 0,\qquad k\ne l, \Eq{orthomeg} $$ and that $$ \CL_0(\Phat_0) = \CL_0(1) = 1, $$ since $\CL(\Phat_k^2)$ and $\CL_0( \Phat_k^2)$ must coincide if \eq{orthomeg} holds due to the recursion relation \eq{crr}. From the definition of $\omega_j$ we deduce that the last equation, together with \eq{orthomeg} for $k=0$ and $l=1,\dots,n$, is satisfied. Suppose now that \eq{orthomeg} holds for $k=0,1,\dots,K$ ($K\le n-1$) and $k<l\le n$.
Multiplying \eq{crr} by $\Phat_l$ and taking $\CL_0$ of both sides we obtain $$ \CL_0(\Phat_{K+1}\Phat_l) = \CL_0((E-b_K)\Phat_{K}\Phat_l)-a_K\CL_0(\Phat_{K-1}\Phat_l) = \CL_0(E\Phat_K\Phat_l) $$ if $K+1<l\le n$, by the induction hypothesis. But, using again \eq{crr}, $$ \CL_0(E\Phat_K\Phat_l) = \CL_0(\Phat_K\cdot E \Phat_l) = \CL_0(\Phat_K\Phat_{l+1})+ b_l\CL_0(\Phat_K\Phat_l) + a_l \CL_0(\Phat_K\Phat_{l-1}) = 0, $$ by the induction hypothesis (since $l>K+1$ implies $l-1>K$). Hence \eq{orthomeg} is true for $0\le k,l\le n$. Finally, \eq{orthomeg} is trivially true when $k$ or $l$ are greater than $n$ by the factorization property \eq{fp} and \eq{pn1}. We shall next show that all the coefficients $\omega_j$ are positive if $b_k$ is real for all $0\le k\le n$ and $a_k>0$ for $1\le k\le n$. (Several instances of this result were checked numerically in \rf{KUW} for the orthogonal polynomials associated to the Hamiltonian \eq{H}.) The proof is based on the following simple lemma: \Lm{lopos} If $a_k>0$ for $k=1,2,\dots, n$ and $b_k$ is real for $k=0,1,\dots,n$ then $\CL$ is {\rm positive-definite} on $\CP_{2n}$. In other words, if $p\in\CP_{2n}$ is a real polynomial of degree at most $2n$, $p\ne0$ and $p(E)\ge0$ for all $E\in\R$ then $\CL(p)>0$. \Proof A polynomial $p\in\CP_{2n}$ which is non-negative for all real values of $E$ must be of the form $q^2+r^2$, where $q,r\in\CP_n$ are real polynomials. Write $q=\sum_{k=0}^n q_k \Phat_k$; then all the coefficients $q_k$ are real, since $\Phat_k$ is a real polynomial for $0\le k\le n$ by the hypotheses. Using the orthogonality of the polynomials $\Phat_k$ we obtain $\CL(q^2) = \sum_{k=0}^n q_k^2 \gamma_k$. Similarly, if $r=\sum_{k=0}^n r_k\Phat_k$ then $\CL(r^2) = \sum_{k=0}^n r_k^2 \gamma_k$, and $\CL(p)=\sum_{k=0}^n (q_k^2+r_k^2) \gamma_k$.
Since $\gamma_k>0$ for $k=0,1,\dots,n$ by \eq{gamma} and the hypothesis on the coefficients $a_k$, it follows that $\CL(p)\ge0$, and $\CL(p)=0$ if and only if $q_k=r_k=0$ for $k=0,1,\dots,n$, that is, if $p=0$.\qed \Pr{opos} If $a_k>0$ for $k=1,2,\dots, n$ and $b_k$ is real for $k=0,1,\dots,n$ then $\omega_k>0$ for all $k=0,1,\dots,n$. \Proof Apply the previous lemma to the polynomials $\prod_{0\le j\ne k\le n}(E-E_j)^2\in\CP_{2n}$ for $k=0,1,\dots,n$.\qed Note that the hypotheses of the previous proposition are satisfied when $P$ is one of canonical forms 2, 3 or 4, provided that all the eigenfunctions of $H$ are square-integrable and that $H$ is {\it not}\/ exactly solvable. In particular, they are satisfied by the Hamiltonian \eq{H}. The (Hamburger) {\it moment problem} for the moment functional \eq{mf} associated to the weakly orthogonal polynomials defined by a \qes. one-dimensional Hamiltonian consists in determining whether there is a {\it distribution function} (\ie a non-decreasing function of bounded variation) $\omega$ such that $\CL$ can be represented by \eq{omega} for an arbitrary polynomial $p$. We have already shown that this problem has a solution \eq{wd}, since \eq{wd} is clearly non-decreasing and of bounded variation. We shall next show that this solution is unique (up to an additive constant), so that the moment problem associated to the weakly orthogonal polynomial system $\{\Phat_k\}_{k\in\Nn}$ is always determined \rf{fnote3}. Essentially, this is due to the fact that the {\it spectrum} $$ \sigma(\hat \omega) = \Set{E\in\R}{\hat \omega(E+\delta)-\hat \omega(E-\delta)>0,\;\forall\delta>0} $$ of the distribution function \eq{wd} is the finite set $\left\{E_l\right\}_{l=0}^n$ \rf{fnote4}.
According to a well known result in the classical theory of orthogonal polynomials, \rf{Chi}, a distribution function $\omega$ defines a positive-definite functional on $\C[E]$ through integration with respect to the Stieltjes measure $d\omega(E)$ if and only if the spectrum of $\omega$ is infinite. Since $\CL$ is not positive-definite ($\CL(\Phat_{n+1}^2)=\gamma_{n+1}=0$), any solution $\omega$ of \eq{omega} must have a finite spectrum, and will thus be of the form $$ \omega(E) = \sum_{k=0}^{\tilde n} \tilde \omega_k \theta(E-\Etilde_k)+C $$ for some constant $C$, up to an immaterial redefinition of $\omega$ in $\sigma(\omega)$. If $I$ is a compact interval containing $\sigma(\hat \omega)\union\sigma(\omega)$, then $$ \CL(p) = \int_I p(E)\,d\hat \omega(E) = \int_I p(E)\,d\omega(E),\qquad\forall p\in\C[E]. $$ Since $I$ is compact, a well known theorem (\cf \rf{Chi}) shows that $\hat \omega$ and $\omega$ differ by a constant at all points in which both $\hat \omega$ and $\omega$ are continuous. But this easily implies that $E_k=\Etilde_k$ and $\omega_k = \tilde \omega_k$ for $k=0,1,\dots,n=\tilde n$, whence $\omega=\hat \omega+C$, as stated. Note that the same argument shows that the moment problem is determined in any interval containing $[E_0,E_n]$; in particular, the (Stieltjes) moment problem in $[E_0,\infty)$ is also determined. In this respect, the weakly orthogonal polynomials associated to a \qes. one-dimensional Hamiltonian behave in exactly the same way as the classical orthogonal polynomials, whose moment problem is also determined, \rf{Chi}. The {\it moments} of the moment functional $\CL$ are by definition the numbers $$ \mu_k = \CL(E^k) = \int_{-\infty}^\infty E^k\,d\hat \omega(E) = \sum_{l=0}^n \omega_l\,E^k_l,\qquad k\in\Nn. \Eq{muk} $$ If the hypotheses of \pr{opos} hold, all the moments are real.
From \eq{muk} we see that the modulus of the $k$-th moment $\mu_k$ does not grow factorially as $k$ tends to infinity, as argued in \rf{BenDun}, but instead it diverges like the $k$-th power of a constant \rf{fnote5}. We shall next show that if the coefficient $a_k$ satisfies the condition $$ a_k\ne0,\qquad 1\le k\le n, \Eq{ahp} $$ which guarantees that the polynomials $\Phat_k$ have non-zero norm for $k\le n$, then {\it the moments $\mu_k$ with $k\ge n+1$ satisfy a constant coefficient difference equation of order $n+1$.} To this end, recall first of all that the bilinear form $\inner{p,q} = \CL(p\,q)$ defined by $\CL$ in $\C[E]$, when restricted to the subspace $\C\otimes\CP_l$, is represented in the basis $\{E^k\}_{0\le k\le l}$ by the symmetric matrix $(\mu_{i+j})_{0\le i,j\le l}$, whose determinant we shall denote by $\Delta_l$. On the other hand, the matrix of the bilinear form $\inner{\cdot\,,\,\cdot}$ in the basis $\{\Phat_k\}_{0\le k\le l}$ is clearly $\diag(1,\gamma_1,\dots,\gamma_l)$; therefore, by \eq{gamma} and the hypothesis on the coefficients $a_k$, we conclude that $\Delta_n\ne0$ and $$ \Delta_{k} = 0,\qquad k\ge n+1. \Eq{dz} $$ In particular, since $\Delta_n\ne0$ but $\Delta_{n+1}=0$, the last column of $\Delta_{n+1}$ must be a linear combination of the remaining columns, so that $$ \mu_k = \sum_{i=1}^{n+1} c_i\,\mu_{k-i},\qquad n+1\le k\le 2(n+1), \Eq{murr} $$ for some (in general complex) constants $c_1,\dots,c_{n+1}$. An easy induction argument using \eq{dz} then shows that the above relation is actually valid with the {\it same} constant coefficients $c_i$ for all $k\ge n+1$, as claimed. In fact, it is not hard to see that $c_i$ in \eq{murr} is minus the coefficient of $E^{n+1-i}$ in $\Phat_{n+1}$. Indeed, write $\Phat_{n+1}=E^{n+1}-p_n$, with $$ p_n = \sum_{i=1}^{n+1} \tilde c_i\,E^{n+1-i}, $$ and let $Q_k = E^k-q_{k-1}$, so that $q_{-1}=0$ and $\deg q_{k-1}\le k-1$ for $k\ge 1$.
From \eq{fp} it follows that $$ \Phat_{k} = E^{k}-E^{k-n-1} p_n - q_{k-n-2}\Phat_{n+1},\qquad k\ge n+1, $$ which by \eq{orth} implies that $$ \mu_{k} = \CL(E^{k}) = \CL(E^{k-n-1}p_n) = \sum_{i=1}^{n+1} \tilde c_i\,\mu_{k-i},\qquad n+1\le k\le 2(n+1). $$ Comparing with \eq{murr} and taking into account the linear independence of the columns of $\Delta_n$ we immediately obtain that $\tilde c_i = c_i$ for $i=1,2,\dots,n+1$, as stated. Note that the fact that the moments satisfy a constant coefficient recursion relation \eq{murr} (with $k\ge n+1$) actually characterizes weakly orthogonal polynomial systems. Indeed, \eq{murr} simply expresses the fact that the $(n+2)$-th column of $\Delta_l$ for $l\ge n+1$ is a linear combination of the first $n+1$ columns. Hence the recursion relation implies \eq{dz}, and since $\Delta_{n+1} = \prod_{j=1}^{n+1} \gamma_j$ this means that $\gamma_k=0$ for some $k\le n+1$, so that $a_k=0$ for some $k\le n+1$ by \eq{gamma}. Consider, for example, the Hamiltonian \eq{H} studied in \rf{BenDun}, which corresponds to the fourth canonical form with $$ n = J-1,\quad c_+=-16,\quad c_0=c_*=0,\quad c_-=2s+\frac12(n-1). \Eq{hcoeffs} $$ The coefficients of the corresponding recursion relation \eq{crr} are easily found to be $$ b_k=0,\quad a_k = 16k(J-k)(k+2s-1),\qquad k\ge0. \Eq{hrr} $$ Since we can take $s\ge1/2$ without loss of generality, we see that $a_k>0$ for $1\le k\le n$, so that \eq{ahp} is satisfied. Furthermore, since $b_k$ vanishes for all $k\ge0$ the polynomials $\Phat_k$ have parity $(-1)^k$, and therefore all the odd moments vanish (the corresponding moment functional is said to be {\it symmetric}). For $J=3$ (that is, $n=2$), according to the foregoing observations we know that the moments satisfy a third-order recursion relation of the form \eq{murr}, whose coefficients are minus the coefficients of $E^2$, $E$ and $1$ in $\Phat_3$. 
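The $J=3$ case can be verified numerically. The sketch below takes $s=1$ for concreteness, builds $\Phat_1,\dots,\Phat_3$ from the recursion with $b_k=0$ and $a_k=16k(J-k)(k+2s-1)$, and checks the moment recursion using the closed-form eigenvalues and weights quoted in the text.

```python
# Numerical check of the J = 3 (n = 2) example with s = 1:
# b_k = 0 and a_k = 16 k (J - k)(k + 2s - 1), so that
# \hat P_3 = E^3 - 32(4s+1) E = E^3 - 160 E.
import math

J, s = 3, 1.0
a = lambda k: 16 * k * (J - k) * (k + 2 * s - 1)

polys = [[1.0], [0.0, 1.0]]          # \hat P_0 = 1, \hat P_1 = E
for k in range(1, 3):                # build \hat P_2 and \hat P_3
    cur, prev = polys[-1], polys[-2]
    nxt = [0.0] + cur                # E * \hat P_k  (b_k = 0 here)
    for i, c in enumerate(prev):
        nxt[i] -= a(k) * c           # - a_k \hat P_{k-1}
    polys.append(nxt)
p3 = polys[3]                        # expected: E^3 - 160 E

# Algebraic eigenvalues and weights (closed forms from the text):
lam = math.sqrt(32 * (4 * s + 1))
energies = [-lam, 0.0, lam]
weights = [s / (4 * s + 1), (2 * s + 1) / (4 * s + 1), s / (4 * s + 1)]

# Moments mu_k = sum_l omega_l E_l^k; the even moments should satisfy
# mu_2 = 64 s and the first-order recursion mu_4 = 32(4s+1) mu_2.
mu = lambda k: sum(w * e ** k for w, e in zip(weights, energies))
mu2, mu4 = mu(2), mu(4)
```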
From \eq{crr} (with $\Phat_0=1$) we obtain $$ \Phat_1 = E,\qquad \Phat_2=E^2-64s,\qquad \Phat_3(E)=E^3-32(4s+1)E, \Eq{ps} $$ so that $c_1=c_3=0$---as expected, since the moment functional is symmetric---, and $c_2=32(4s+1)$. Therefore the even moments satisfy the first-order recursion relation $$ \mu_{2j} = 32(4s+1) \mu_{2j-2},\qquad j\ge2, \Eq{mu2j} $$ and since $\mu_2 = \gamma_1 = a_1 = 64s$, from \eq{mu2j} we obtain $$ \mu_{2j} = 32^{j-1}(4s+1)^{j-1}\cdot 64s,\qquad j\ge1. \Eq{mu2js} $$ Thus, in this case $\mu_{2j}$ has a pure power growth. The same result can be obtained using \eq{muk}. Indeed, from \eq{ps} we have $$ E_0=-\lambda\equiv-\sqrt{32(4s+1)},\qquad E_1=0,\qquad E_2=\lambda, $$ and therefore $$ \omega_0=\frac s{4s+1},\qquad\omega_1=\frac{2s+1}{4s+1},\qquad\omega_2=\omega_0 $$ from \eq{os} and \eq{ps}. Thus $$ \mu_k = \frac s{4s+1}\left[(-\lambda)^k+\lambda^k\right], $$ which yields $\mu_{2j+1}=0$ for $j\ge0$ and \eq{mu2js}. \Section{conc} Conclusions. We have shown in this paper how every \qes. one-dimensional Hamiltonian satisfying conditions \eq{c1}--\eq{c9} defines a weakly orthogonal polynomial system $\{\Phat_k\}_{k=0}^\infty$ through the three-term recursion relation \eq{nrr} (with initial condition $\Phat_0=1$). It is important, in this context, to emphasize the {\it weak} orthogonality of the polynomials $\Phat_k$, \ie the fact that the norm of $\Phat_k$ may vanish---and in fact {\it does} vanish for $k\ge n+1$, $n$ being the ``spin" parameter present in the Hamiltonian. As explained in \section{op}, this is an inevitable consequence of the vanishing of the coefficient of $\Phat_{k-1}$ in the recursion relation \eq{nrr} for $k=n+1$, which is made possible by the fact that the parameter $n$ is a non-negative integer. The latter fact, however, is an intrinsic property of one-dimensional \qes. (as opposed to merely Lie-algebraic) Hamiltonians; indeed, it is a key factor in the explanation of the partial integrability of a \qes. 
Hamiltonian outlined in \section{qes}. To better illustrate this point, consider the Hamiltonian \eq{H}, which is Lie-algebraic for all real values of the parameter $J$. Indeed, $H$ can be written in the form \eq{hgh}--\eq{hgj}, with $\zeta(x)=x^2/4$, $\mu(z) = e^{-4z^2}z^{s-1/4}$, $c_{++}=c_{+0}=c_{00}=c_{+-}=c_{--}=0$, $c_{0-}=1/2$, and the remaining coefficients given by \eq{hcoeffs}, where now $n$ is to be regarded as an arbitrary real parameter. When $n$ is not a non-negative integer, the generators \eq{jeps} don't leave invariant any finite-dimensional polynomial module $\CP_n$, so that $H$ is in general non-integrable---there is no special reason for $H$ to have algebraically computable eigenfunctions of the form \eq{algwf}, with $\chi$ a polynomial. However, even when $n$ is not a non-negative integer, the Lie-algebraic nature of $H$ and conditions \eq{cond} imply that the polynomials $\Phat_k$ defined by \eq{algwf}, \eq{chie} and \eq{phatk} still satisfy a three-term recursion relation \eq{crr}, with the coefficients given by \eq{hrr}. In other words, what makes $H$ \qes. is not merely the fact that its associated polynomials satisfy a three-term recursion relation \eq{crr} (which implies their orthogonality with respect to some Stieltjes measure), but the fact that the coefficient $a_k$ in this recursion relation vanishes for some positive integer value of $k$, so that the associated polynomials $\Phat_k$ can only be weakly orthogonal. As we saw in \section{op}, the Stieltjes measure with respect to which the polynomials $\Phat_k$ associated to a \qes. Hamiltonian $H$ are orthogonal is supported in the set of algebraic eigenvalues of $H$, which is a finite set. For this reason, the polynomials $\Phat_k$ are {\it discrete polynomials.} Although the classical (Hermite, Legendre, Laguerre, Tchebycheff, etc.) 
polynomials of Mathematical Physics are orthogonal with respect to a continuous measure, discrete (Charlier, Hahn, Krawtchouk, Meixner, Tchebycheff, etc.) polynomials have also been studied in the mathematical literature of orthogonal polynomials, \cf\rf{Chi}. Note that a discrete polynomial system is truly---as opposed to weakly---orthogonal if and only if the supporting set of its Stieltjes measure is infinite. Some of the discrete polynomials cited above, like the Hahn, Krawtchouk or discrete Tchebycheff polynomials, are in fact weakly orthogonal. In general, weakly orthogonal polynomials arise naturally, for instance, in the theory of approximate polynomial curve fitting, \rf{Pec}. More recently, \rf{SmiTur}, the study of second-order finite difference eigenvalue equations with infinitely many polynomial solutions has led to an interesting connection between a non-standard finite-dimensional representation of $\sL2$ and certain families of weakly orthogonal discrete polynomials (Hahn polynomials and analytically continued Hahn polynomials). Let us stress, in closing, that the present paper deals only with one-dimensional \qes. Hamiltonians. It is an interesting open problem to generalize these results to \qes. multi-dimensional systems, a possibility already considered in \rf{KUW}, where a heuristic (but inconclusive, in our opinion) argument was advanced suggesting that all \qes. systems give rise to weakly orthogonal polynomials. In the two-dimensional case, at least, the classification of \qes. Lie algebras of first-order differential operators in two variables presented in \rf{GKOqes} and \rf{GKOreal} could be used as a starting point for an analysis along the present lines. \Section{ack} Acknowledgments. It is a pleasure to thank M. A. Mart\'\i n-Delgado, who pointed out to us reference \rf{BenDun}, Gabriel \'Alvarez, for useful discussions regarding the theory of classical orthogonal polynomials, and Carlos Finkel, for providing several key references. 
The authors would also like to acknowledge the partial financial support of the DGICYT under grant no.~PB92--0197. \bigskip\bigskip \References \bye
https://arxiv.org/abs/hep-th/9603103
Quasi-Exactly Solvable Potentials on the Line and Orthogonal Polynomials
In this paper we show that a quasi-exactly solvable (normalizable or periodic) one-dimensional Hamiltonian satisfying very mild conditions defines a family of weakly orthogonal polynomials which obey a three-term recursion relation. In particular, we prove that (normalizable) exactly-solvable one-dimensional systems are characterized by the fact that their associated polynomials satisfy a two-term recursion relation. We study the properties of the family of weakly orthogonal polynomials defined by an arbitrary one-dimensional quasi-exactly solvable Hamiltonian, showing in particular that its associated Stieltjes measure is supported on a finite set. From this we deduce that the corresponding moment problem is determined, and that the $k$-th moment grows like the $k$-th power of a constant as $k$ tends to infinity. We also show that the moments satisfy a constant coefficient linear difference equation, and that this property actually characterizes weakly orthogonal polynomial systems.
https://arxiv.org/abs/1104.2882
Minimum Weight Cycles and Triangles: Equivalences and Algorithms
We consider the fundamental algorithmic problem of finding a cycle of minimum weight in a weighted graph. In particular, we show that the minimum weight cycle problem in an undirected n-node graph with edge weights in {1,...,M} or in a directed n-node graph with edge weights in {-M,...,M} and no negative cycles can be efficiently reduced to finding a minimum weight triangle in a Theta(n)-node undirected graph with weights in {1,...,O(M)}. Roughly speaking, our reductions imply the following surprising phenomenon: a minimum cycle with an arbitrary number of weighted edges can be "encoded" using only three edges within roughly the same weight interval! This resolves a longstanding open problem posed by Itai and Rodeh [SIAM J. Computing 1978 and STOC'77]. A direct consequence of our efficient reductions is a pair of O(Mn^{omega})-time algorithms using fast matrix multiplication (FMM) for finding a minimum weight cycle in both undirected graphs with integral weights from the interval [1,M] and directed graphs with integral weights from the interval [-M,M]. The latter seems to reveal a strong separation between the all pairs shortest paths (APSP) problem and the minimum weight cycle problem in directed graphs, as the fastest known APSP algorithm has a running time of O(M^{0.681}n^{2.575}) by Zwick [J. ACM 2002]. In contrast, when only combinatorial algorithms are allowed (that is, without FMM), the only known solution to minimum weight cycle is by computing APSP. Interestingly, any separation between the two problems in this case would be an amazing breakthrough: by a recent paper of Vassilevska W. and Williams [FOCS'10], any O(n^{3-eps})-time algorithm (eps>0) for minimum weight cycle immediately implies an O(n^{3-delta})-time algorithm (delta>0) for APSP.
\section{Introduction} We consider the algorithmic problem of finding a minimum weight cycle (i.e., weighted girth) in weighted directed and undirected graphs. Surprisingly, although the problem is very fundamental, the state of the art for it dates back to a seminal paper by Itai and Rodeh~\cite{Clique1}, first presented in STOC'77, that deals only with the \emph{unweighted} variant of the problem. Itai and Rodeh presented an $O(n^\omega)$-time algorithm for an $n$-node unweighted undirected graph and an $O(n^\omega \log n)$-time algorithm for an $n$-node unweighted directed graph. (Here $\omega$ is the exponent of square matrix multiplication over a ring, and $\omega<2.376$~\cite{cw90}.) In the same paper, Itai and Rodeh posed the question of whether similar results exist for weighted graphs. In this paper we provide a positive answer to this longstanding open problem by presenting $\tilde{O}(Mn^\omega)$-time algorithms for directed graphs with integral edge weights in $[-M,M]$ (and no negative cycles) and for undirected graphs with integral edge weights in $[1,M]$. Our algorithmic results are obtained using new reductions that carefully combine new algorithmic ideas and special combinatorial properties of the minimum weight cycle. More specifically, we reduce the problem to the problem of finding a minimum weight {\em triangle} in a $\Theta(n)$-node \emph{undirected} graph with weights in $\{1,\ldots,O(M)\}$. This also reveals a surprising phenomenon: a minimum cycle with an arbitrary number of weighted edges can be efficiently ``encoded'' using a cycle of only \emph{three} edges whose weights are roughly within the same interval! Moreover, our results imply a strong \emph{equivalence} between the cycle and triangle problems. \paragraph{Minimum cycle and APSP.} Recently, in FOCS'10 Vassilevska W.
and Williams~\cite{focsy} showed that the minimum weight cycle problem is equivalent to many other graph and matrix problems for which no truly subcubic ($O(n^{3-\eps})$-time for constant $\eps>0$) algorithms are known. They showed that if there is a truly subcubic algorithm for the minimum weight cycle problem, then many other problems such as the all-pairs-shortest-paths (APSP) problem also have truly subcubic algorithms. Hence, the minimum weight cycle problem has a pivotal role in understanding the complexity of many fundamental polynomial problems in a similar spirit to the role of 3SAT for NP-hard problems. It is interesting to compare the minimum cycle problem with APSP. In directed graphs, the minimum weight cycle can be computed easily by computing APSP. Given the distance $d[u,v]$ between all pairs of vertices $u,v$, the weight of the minimum cycle is $\min_{u,v} w(u,v)+d[v,u]$. Hence, we can compute the minimum weight cycle in cubic time using Floyd-Warshall's APSP algorithm~\cite{floyd,warshall} (or Pettie's $O(mn+ n^2\log \log n)$ time algorithm~\cite{Pettie04} if the graph is sparse). If the edge weights are integers in $[-M,M]$, we can use Zwick's~\cite{zwickbridge} $O(M^{0.681}n^{2.575})$ time algorithm to obtain an algorithm for minimum cycle with the same runtime. Improving Zwick's running time and in particular obtaining an $\tilde{O}(Mn^\omega)$ running time for APSP in directed graphs is one of today's frontier questions in graph algorithms. Our new $\tilde{O}(Mn^\omega)$-time algorithm for minimum cycle in directed graphs shows that it is not really necessary to compute all pairs shortest paths in order to compute the minimum weight cycle in directed graphs. This seems to reveal a strong separation between APSP and the minimum cycle problem in directed graphs. 
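For concreteness, the APSP-based reduction for directed graphs can be sketched as follows. This is a cubic Floyd-Warshall sketch, not the fast matrix-multiplication algorithms discussed here, and the function name is ours:

```python
# Minimum weight directed cycle via APSP: compute d[v][u] for all pairs
# (here with cubic Floyd-Warshall) and take min over edges (u,v) of
# w(u,v) + d[v][u]. Assumes no negative cycles.
INF = float("inf")

def min_weight_cycle_directed(n, edges):
    """edges: list of directed (u, v, weight) triples on nodes 0..n-1."""
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    w = {}
    for u, v, c in edges:
        w[(u, v)] = min(w.get((u, v), INF), c)
        d[u][v] = min(d[u][v], c)
    for k in range(n):                       # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return min((c + d[v][u] for (u, v), c in w.items()), default=INF)

# The triangle 0 -> 1 -> 2 -> 0 has weight 3, beating the 2-cycle 0 <-> 3.
girth = min_weight_cycle_directed(
    4, [(0, 1, 1), (1, 2, 1), (2, 0, 1), (0, 3, 5), (3, 0, 5)])
```

Note that this formula is sound only for directed graphs; as explained next, in undirected graphs the same expression can return twice the weight of a single edge.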
The minimum cycle problem in undirected graphs differs from the problem in directed graphs in that the reduction to APSP no longer works: an edge $(u,v)$ might also be the shortest path from $v$ to $u$, and $\min_{u,v} w(u,v)+d[v,u]$ might be $2w(u,v)$ and not the weighted girth of the graph. This represents a nontrivial hurdle. Nevertheless, in this paper we show how to overcome this hurdle and obtain an $\tilde{O}(Mn^\omega)$ time algorithm for undirected graphs with integer weights in $[1,M]$. This matches the runtime of the best APSP algorithm in such graphs: In a paper first presented at STOC'92, Seidel~\cite{Seidel} showed that APSP in undirected and unweighted $n$-node graphs can be solved in $\tilde{O}(n^\omega)$ time. In FOCS'99, Shoshan and Zwick~\cite{sz99} (following Galil and Margalit~\cite{GM97}) showed that APSP in undirected $n$-node graphs with integer edge weights in $[0,M]$ can be solved in $\tilde{O}(Mn^\omega)$ time, thus extending Seidel's running time to weighted undirected graphs. \paragraph{Our results: reductions and algorithms.} We develop our algorithms by first obtaining extremely efficient reductions from the minimum weight cycle problem to the minimum weight triangle problem which preserve the interval in which the weights lie, within a constant factor. \noindent \emph{Undirected graphs.} For undirected graphs our results are as follows. \begin{theorem} Let $G(V,E,w)$ be an undirected graph with $w:E\rightarrow \{1,\ldots, M\}$ and let $C$ be a minimum cycle in $G$. There is an $O(n^2 (\log nM) \log n)$ time deterministic algorithm that computes a cycle $\hat{C}$ and constructs $O(\log n)$ graphs $G'_1,\ldots,G'_k$ on $\Theta(n)$ nodes and edge weights in $\{1,\ldots,O(M)\}$ such that either $w(\hat{C})=w(C)$ or the minimum out of all weights of triangles in the graphs $G'_i$ is exactly $w(C)$. 
\label{thm:undir} \end{theorem} Since a minimum weight triangle in a graph with weights bounded by $O(M)$ can be found via a single distance product computation in $\tilde{O}(Mn^\omega)$ time~\cite{agm97,Yuval}, we obtain the following corollary. \begin{corollary} A minimum weight cycle in an $n$-node undirected graph with integer edge weights in $[1,M]$ can be found in $\tilde{O}(Mn^\omega)$ time. \end{corollary} \noindent \emph{Directed graphs.} Our reduction for undirected graphs relies on the fact that distances are symmetric. It is unlikely that it is possible to modify the reduction so that it works for directed graphs as well. Hence, for directed graphs new ideas are required. The reduction to minimum triangle is not as efficient; however, the resulting algorithm for minimum cycle in directed graphs has the same running time as the one for undirected graphs with nonnegative weights. Our approach for directed graphs can be combined with our approach for undirected graphs to yield an efficient algorithm also for {\em mixed} graphs, that is, graphs which contain both directed and undirected edges. The approach works, provided the weights of the mixed graph are nonnegative. When negative edge weights are allowed, a negative cycle may exist. Finding a minimum weight cycle when its weight is negative is an NP-hard problem, as it would solve the Hamiltonian cycle problem. When negative weights are allowed, the minimum cycle problem in the absence of negative cycles is in P for both directed and undirected graphs, but is NP-hard for mixed graphs~\cite{papamix}. Our techniques for directed graphs are strong enough to support negative edge weights within the same running time as when the weights are nonnegative. This is extremely interesting, as the typical way to reduce the general problem to the nonnegative weights problem involves computing node {\em potentials} (see e.g.~\cite{dirsssp}).
These potentials however typically increase the magnitude of the weights to even $\Omega(Mn)$, which would be bad if our goal is to use algorithms that have exponential dependence on the bit representation of the weights, such as $\Ot(Mn^\omega)$. We circumvent the potential approach by focusing on the general problem directly. We obtain: \begin{theorem} Let $G(V,E,w)$ be a directed graph on $n$ nodes, $w:E\rightarrow \{-M,\ldots, M\}$. In $\tilde{O}(Mn^\omega)$ time one can construct $O(\log n)$ graphs $G'_1,\ldots,G'_k$ on $\Theta(n)$ nodes and edge weights in $\{1,\ldots,O(M)\}$ so that the minimum out of all weights of triangles in the graphs $G'_i$ is exactly the weighted girth of $G$. \label{thm:dir} \end{theorem} \paragraph{Our results: equivalences.} Vassilevska W. and Williams~\cite{focsy} showed that the minimum triangle and minimum cycle problems are equivalent, under subcubic reductions. Their reduction from minimum triangle to minimum cycle only increased the number of nodes and the size of the edge weights by a constant factor. However, their reduction from minimum cycle to minimum triangle was not tight; it only proved that an $O(n^{3-\eps})$ algorithm for minimum triangle would imply an $O(n^{3-\eps/3})$ algorithm for minimum cycle. Our reductions, on the other hand, imply a much stronger equivalence between the two problems. This equivalence is especially strong for undirected graphs with integral weights from the range $[1,M]$. \begin{corollary} If there is a $T(n,M)$ time algorithm for the minimum cycle problem in undirected graphs with integral edge weights in $[1,M]$, then there is a $T(O(n),O(M))+O(n^2)$ time algorithm for the minimum triangle problem in such graphs. Conversely, if there is a $T(n,M)$ time algorithm for the minimum triangle problem in undirected graphs with integral edge weights in $[1,M]$, then there is an $O(T(O(n),O(M))\log n+ n^2\log n\log nM)$ time algorithm for the minimum cycle problem in such graphs. 
\label{cor:equiv} \end{corollary} A natural question is whether the triangle problem is special. Do similar reductions exist between minimum cycle and minimum $k$-cycle for constant $k>3$? We answer this in the affirmative. \begin{theorem} Let $k$ be any fixed constant. Let $G(V,E,w)$ be a graph on $n$ nodes, $w:E\rightarrow \{1,\ldots, M\}$. One can construct $O(\log n)$ undirected graphs $G'_1,\ldots,G'_\ell$ on $\Theta(kn)$ nodes and edge weights in $\{1,\ldots,O(M)\}$ so that the minimum out of all weights of $k$-cycles in the graphs $G'_i$ is exactly the weighted girth of $G$. Moreover, given the minimum weight $k$-cycle of the graphs $G'_i$, one can find a minimum weight cycle of $G$ in $O(n)$ additional time. If $G$ is directed, the reduction runs in $\tilde{O}(Mn^\omega)$ time, and if $G$ is undirected, it runs in $\tilde{O}(n^2\log nM)$ time. \label{thm:equiv2} \end{theorem} \paragraph{Our results: approximation.} Another approach to gain efficiency for problems with seemingly no subcubic time exact algorithms has been to develop fast approximation algorithms (see~\cite{zwickbridge,aingworth,almostshortest} in the context of shortest paths). Lundell and Lingas~\cite{ll09} gave two approximation algorithms for the girth problem: an $\tilde{O}(n^{1.5})$ time $8/3$-approximation for undirected unweighted graphs, and an $O(n^2 (\log n)\log nM)$ time $2$-approximation for undirected graphs with integer weights in the range $\{1,\ldots, M\}$. Very recently, Roditty and Tov~\cite{liamsoda} improved the approximation factor to $4/3$ for the weighted case while keeping the running time unchanged. Due to Zwick's~\cite{zwickbridge} $\tilde{O}(n^\omega/\eps \log (M/\eps))$ time $(1+\eps)$-approximation for APSP and the simple reduction from minimum weight cycle in directed graphs to APSP, the girth of a directed graph admits a $(1+\eps)$-approximation in $\tilde{O}(n^\omega/\eps \log (M/\eps))$ time.
Our reduction from Theorem~\ref{thm:undir} implies the same result for undirected graphs with nonnegative weights as well, following up on the work of Roditty and Tov from SODA'11~\cite{liamsoda}. \begin{theorem} There is an $\tilde{O}(n^\omega/\eps \log (M/\eps))$ time $(1+\eps)$-approximation algorithm for the minimum cycle problem in undirected graphs with integral weights in $[1, M]$. \end{theorem} \section{Preliminaries} Let $G(V,E,w)$ be a weighted graph, where $V$ is its set of {\em vertices} or {\em nodes} (we use these terms interchangeably), $E\subseteq V\times V$ is its set of edges, and $w:E\rightarrow \{1,\ldots,M\}$ is a weight function. The function $w(\cdot,\cdot)$ can be extended to the entire $V\times V$ by setting $w(u,v)=\infty$ for every $(u,v)\notin E$. Unless otherwise noted, $n$ refers to the number of nodes in the graph. An edge can be directed or undirected. An \emph{undirected} graph is a graph with undirected edges only and a \emph{directed} graph is a graph with directed edges only. A \emph{mixed} graph is a graph that may have both directed and undirected edges. All graphs considered in this paper are {\em simple}. A graph is simple if it does not contain self loops or multiple copies of the same edge. In a simple mixed graph, a node pair $x,y$ cannot be connected by both a directed and an undirected edge. In both directed and mixed simple graphs, two directed edges $(x,y)$ and $(y,x)$ in opposite directions are allowed since they are considered distinct. We define a cycle $C$ in a graph $G(V,E,w)$ to be an ordered set of vertices $\{v_1,v_2,\ldots, v_\ell\}$, such that $(v_i,v_{i+1})\in E$ for every $i < \ell$ and $(v_\ell,v_1)\in E$. Let $w(C)$ be the sum of the weights of the edges of $C$ and let $w_{\max}(C)$ be the weight of the heaviest edge. We denote by $d_C[v_i,v_j]$ the weight of the path that traverses the cycle from $v_i$ to $v_j$ by passing from $v_i$ to $v_{i+1}$ and so on.
In the case that $j<i$ we traverse from $v_\ell$ to $v_1$ and continue until we reach $v_j$. Let $n(C)$ denote the number of vertices/edges in $C$. A cycle $C$ is {\em simple} if no node or edge appears twice in $C$. With this definition, an undirected graph cannot have a simple cycle on $2$ nodes, whereas directed and mixed graphs can, provided the two cycle edges are in opposite directions and hence distinct. \section{Our approach}\label{s-approach} Our reductions are based on a combinatorial property of cycles in weighted directed, undirected and mixed graphs that might be of independent interest. This property is extremely useful as it shows that crucial portions of the minimum weight cycle are shortest paths. We present this property in the following lemma. \begin{lemma}[Critical edge] Let $G(V,E,w)$ be a weighted graph, where $w: E \rightarrow\mathbb{R}$, and assume that $G$ does not contain a negative cycle. Let $C=\{v_1,v_2,\ldots, v_\ell\}$ be a cycle in $G$ of weight $w(C)\geq 0$ and let $s\in C$. There exists an edge $(v_i,v_{i+1})$ on $C$ such that $\lceil w(C)/2\rceil-w(v_i,v_{i+1}) \leq d_C[s,v_i]\leq \lfloor w(C)/2\rfloor$ and $\lceil w(C)/2\rceil-w(v_i,v_{i+1}) \leq d_C[v_{i+1},s]\leq \lfloor w(C)/2\rfloor$. Furthermore, if $C$ is a minimum weight cycle in $G$ then $d[s,v_i]=d_C[s,v_i]$ and $d[v_{i+1},s]=d_C[v_{i+1},s]$. \label{lemma:middle} \end{lemma} \begin{proof} We can assume, wlog, that $s=v_1$. We traverse along $C$ from $v_1$ until we reach the first edge $(v_i,v_{i+1})$ that satisfies $d_C[v_1,v_i]\leq \lfloor w(C)/2\rfloor$ and $d_C[v_{1},v_{i}]+w(v_i,v_{i+1})\geq \lceil w(C)/2\rceil$. Since $d_C[v_1,v_1]=0\leq \lfloor w(C)/2\rfloor$, either we find an edge $(v_i,v_{i+1})$ that satisfies the requirement, where $i< \ell$, or we reach $v_\ell$ without finding such an edge.
In the latter case $d_C[v_1,v_\ell] \leq \lfloor w(C)/2\rfloor$ and since $d_C[v_{1},v_{\ell}]+w(v_\ell,v_{1})=w(C)\geq \lceil w(C)/2\rceil$ the edge $(v_\ell,v_1)$ satisfies the requirement. It follows immediately from the properties of the edge $(v_i,v_{i+1})$ that $d_C[v_{1},v_{i}]\geq \lceil w(C)/2\rceil - w(v_i,v_{i+1})$ and hence we get that $\lceil w(C)/2\rceil-w(v_i,v_{i+1}) \leq d_C[v_1,v_i]\leq \lfloor w(C)/2\rfloor$ as required. We now bound $d_C[v_{i+1},v_1]$. We know that $d_C[v_{i+1},v_1] = w(C) - (d_C[v_1,v_i]+w(v_i,v_{i+1}))$. Since $d_C[v_{1},v_{i}]+w(v_i,v_{i+1})\geq \lceil w(C)/2\rceil$ we get that $d_C[v_{i+1},v_1] \leq \lfloor w(C)/2\rfloor$. Also, since $d_C[v_1,v_i]\leq \lfloor w(C)/2\rfloor$ we get that $d_C[v_{i+1},v_1] \geq \lceil w(C)/2\rceil-w(v_i,v_{i+1})$. It remains to show that if $C$ is a minimum weight cycle, then $d[v_1,v_i]=d_C[v_1,v_i]$ and $d[v_{i+1},v_1]=d_C[v_{i+1},v_1]$. If $G$ is a directed graph, then it is straightforward to see that the minimality of $C$ implies that $d_C[u,v]=d[u,v]$ for every $u,v\in C$ and in particular $d[v_1,v_i]=d_C[v_1,v_i]$ and $d[v_{i+1},v_1]=d_C[v_{i+1},v_1]$ as required. Thus, we only need to consider the case that $G$ is an undirected graph. From the first part of the proof we know that $d_C[v_{i+1},v_{1}]\leq \lfloor w(C)/2\rfloor$. If $d[v_{i+1},v_{1}]<d_C[v_{i+1},v_{1}]$, then let $P$ be the path from $v_{i+1}$ to $v_1$ of weight $d[v_{i+1},v_{1}]$ and let $C_2$ be the portion of $C$ from $v_1$ to $v_{i+1}$. The union of $P$ and $C_2$ is a walk in $G$ whose weight is strictly less than $w(C)$. Furthermore, since $d[v_{i+1},v_{1}]<d_C[v_{i+1},v_{1}]\leq \lfloor w(C)/2\rfloor\leq w(C_2)$, $P$ and $C_2$ must differ by at least one edge and hence $P\cup C_2$ contains some simple cycle of weight less than $w(C)$, a contradiction to the minimality of $C$. The argument for showing that $d[v_1,v_i]=d_C[v_1,v_i]$ is symmetric. 
\end{proof} Lemma~\ref{lemma:middle} shows that it is possible to decompose every cycle into three portions: a single edge of weight at most $O(M)$ and two pieces whose weight differs by at most $O(M)$, and which are shortest paths if the cycle is of minimum weight. This observation is crucial for our efficient reductions. Another important piece of Lemma~\ref{lemma:middle} is that \emph{every} vertex on the cycle has a critical edge. This is especially important in the directed graph case. Armed with Lemma~\ref{lemma:middle} we can describe the general framework of our approach. Suppose that we have some way to compute a function $D:V \times V \rightarrow \mathbb{R}$ that satisfies: \begin{itemize} \item For every $u,v\in V$, $d[u,v] \leq D[u,v]$ \item There exists a vertex $v$ on the minimum cycle $C$ whose critical edge $(x,y)$ endpoints satisfy $D[v,x]=d[v,x]$ and $D[y,v]=d[y,v]$. \end{itemize} In Section~\ref{s-undirected} we show how to compute a function $D$ in $O(n^2\log n\log Mn)$ time for undirected graphs with integral weights from $[1,M]$, and in Section~\ref{s-directed} we show how to compute a function $D$ in $\tilde{O}(Mn^\omega)$ time for directed graphs with integral weights from $[-M,M]$ and no negative cycles. Now consider the following (multi-)graph $G'(V',E',w')$ where $V'=V^1\cup V^2$ and $V^1,V^2$ are disjoint copies of $V$. For every $D[a,b]$ which was computed we place an edge between $a^1\in V^1$ and $b^2\in V^2$ and (for directed graphs) also one between $a^2\in V^2$ and $b^1\in V^1$. These edges get weight $D[a,b]$ and correspond to the two large portions of the minimum cycle. Further, for every edge $(a,b)$ in $G$, we add an edge from $a^2\in V^2$ to $b^2\in V^2$ with weight $w(a,b)$, i.e. $V^2$ induces a copy of $G$; these edges correspond to the critical edge of the minimum cycle. In our reduction for directed graphs we further transform $G'$ into a simple undirected tripartite graph. Consider $v,x,y$ from the second bullet above. 
By Lemma~\ref{lemma:middle}, $D[v,x]+w(x,y)+D[y,v]=d_C(v,x)+w(x,y)+d_C(y,v)=w(C).$ Hence $G'$ will contain $\{v^1,x^2,y^2\}$ as a triangle of weight $w(C)$. Our reductions in the next two sections give transformations which ensure that every triangle in $G'$ corresponds to a simple cycle in $G$ and that $\{v^1,x^2,y^2\}$ is preserved as a triangle. Since the values $D[\cdot,\cdot]$ are upper bounds on the distances, $\{v^1,x^2,y^2\}$ is a minimum weight triangle in $G'$. The graph $G'$ however can have really large weights; $D[\cdot,\cdot]$ can be as large as $Mn$ in general. Thus our transformations also apply a weight reduction technique which reduces all edge weights to the interval $[-O(M),O(M)]$. This technique is different for undirected and directed graphs. \paragraph{Finding a minimum cycle.} Our reductions show that the minimum cycle problem can be efficiently reduced to the minimum triangle problem in a different graph with roughly the same number of nodes and weight sizes. Here we briefly discuss how one can actually find a minimum weight triangle in an $n$-node graph $G(V,E,w)$ with integral edge weights in the interval $[-M,M]$. With our reductions, this will give algorithms for the minimum cycle problem as well. Let $A$ be the $n\times n$ adjacency matrix of $G$ defined as $A[i,j]=w(i,j)$ whenever $(i,j)\in E$ and $A[i,j]=\infty$ otherwise. A well known approach to finding a minimum weight triangle mimics Itai and Rodeh's algorithm for unweighted triangle finding~\cite{Clique1}. It first computes the distance product of $A$ with itself, $(A\star A)[i,j]=\min_k A[i,k]+A[k,j]$, to find for every pair of nodes $i,j$ the minimum length of a path with at most $2$ edges between them. Then, the weight of a minimum triangle is exactly $$\min_{i,j} A[j,i]+(A\star A)[i,j].$$ Finding the actual triangle takes only $O(n)$ time after one finds $i,j$ minimizing the above expression. Thus the running time is dominated by the time required for computing $A\star A$. 
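For concreteness, this triangle-finding scheme is easy to state in code. The following Python sketch is our own illustration (the function names are ours, not part of the paper's pseudocode); it uses the naive cubic distance product in place of the fast $\tilde{O}(Mn^\omega)$ computation of~\cite{agm97,Yuval}:

```python
INF = float("inf")

def distance_product(A, B):
    """(min,+) product: (A * B)[i][j] = min_k A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def min_triangle_weight(A):
    """Weight of a minimum triangle, given the adjacency matrix A with
    A[i][j] = w(i,j) for edges and INF otherwise (INF on the diagonal,
    since the graph is simple)."""
    n = len(A)
    A2 = distance_product(A, A)  # minimum weight of a 2-edge path i -> j
    # close each 2-edge path i -> k -> j with the edge (j,i)
    return min(A[j][i] + A2[i][j] for i in range(n) for j in range(n))
```

Substituting a fast (min,+) matrix product for \texttt{distance\_product} recovers the stated $\tilde{O}(Mn^\omega)$ running time.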
The algorithm of Alon, Galil and Margalit~\cite{agm97} (following Yuval~\cite{Yuval}) does this in $\tilde{O}(Mn^\omega)$ time, whenever the entries of $A$ are integers in $[-M,M]$. Hence a minimum triangle can be found in $\tilde{O}(Mn^\omega)$ time. \section{Minimum weight cycle in undirected graphs with weights in $\{1,\ldots, M\}$}\label{s-undirected} Let $G(V,E,w)$ be an undirected graph with integral edge weights from the range $[1,M]$. In this section we show that in $\Ot(n^2\log Mn)$ time we can compute a cycle whose weight is at most twice the weight of the minimum weight cycle and a new undirected graph $G'(V',E',w')$ with integral weights from the range $[-M,M]$ whose minimum triangle, if one exists, corresponds to the minimum weight cycle in $G$, with constant probability. (To boost the probability of success, we actually create $O(\log n)$ graphs $G'$.) If $G'$ does not have a triangle then the cycle that we have computed is the minimum weight cycle of $G$. We start by presenting an $\Ot(n^2)$ time algorithm that given an integer $t$ either reports a cycle of weight at most $2t$ or computes all distances up to $t$. The computed distances are used to form the graph $G'$. \paragraph{Cycle or Distance Computation.} The algorithm works in iterations and in each iteration it repeats the same procedure from a new vertex of the graph. This procedure is a simple adaptation of Dijkstra's algorithm. The input in each iteration is a source vertex $s$ and an integer value $t$. The algorithm either reports a cycle of weight at most $2t$ or computes the distances from $s$ to every vertex that is within distance $t$ from $s$. Lingas and Lundell~\cite{ll09} used a similar approach in order to compute a $2$-approximation of the minimum weight cycle. Their algorithm, however, either returns a cycle of weight at most $2t$ or computes the distances from $s$ to every vertex that is within distance $2t$ from $s$. This small difference between the two algorithms is crucial for our needs.
The algorithm is given in Algorithm~\ref{A-min-cycle}. The running time of the algorithm is $O(n^2 \log n)$. The algorithm repeats the procedure Cycle? $n$ times, each time with a different vertex. Every run of Cycle? takes at most $O(n \log n)$ time since it stops with the first cycle it detects. In the next lemma we prove an important property of the algorithm. \begin{lemma}\label{L-bin-search} For any integer $t$, Min-Cycle$(G,t)$ either finds a cycle of weight at most $2t$, or computes all distances of length at most $t$. \end{lemma} \begin{proof} A cycle is reported when a vertex $u$ is extracted from the priority queue $Q$ and for one of its edges $(u,v)$ that is being relaxed the value of $d[v]$ before the relaxation is not infinity. As any distance and any distance estimate are at most $t$, if a cycle is reported it must be of weight at most $2t$. If a cycle is not reported, then our algorithm is almost identical to Dijkstra's algorithm. The only difference is that our algorithm relaxes an edge $(u,v)$ when $u$ is extracted from the priority queue if and only if $d[u]+w(u,v) \leq t$, while Dijkstra's algorithm relaxes all edges of $u$ with no restriction. This implies that our algorithm computes all distances that are smaller than or equal to $t$.
\end{proof} \begin{table}[t] \begin{multicols}{2} \begin{algorithm}[H]\label{A-min-cycle} \caption{Min-Cycle($G,t$)} \ForEach{$s\in V$}{$C'\gets$ Cycle?($G,s,2t$)\; \lIf{$w(C') < w(C^*)$}{$C^* \gets C'$} } \Return $C^*$ \end{algorithm} \begin{algorithm}[H]\label{A-cycle?} \caption{Cycle?($G,s,t'$)} \lForEach{$v\in V$}{$d[v] \gets \infty$\;} $d[s] = 0$\; $Q \gets \{ s \}$\; \While{$Q \neq \emptyset$} { $u \gets $ Extract-Min($Q$)\; Controlled-Relax($u,t'/2$)\; } \end{algorithm} \columnbreak \begin{algorithm}[H]\label{A-control-relax} \caption{Controlled-Relax($u,w_u$)} $(u,v)\gets $ Extract-Min($Q_u$)\; \While{$d[u]+w(u,v) \leq w_u$}{ \eIf{$d[v] \neq \infty$} {report a cycle and stop\;} {Relax($u,v$)\;} $(u,v)\gets $ Extract-Min($Q_u$)\; } \end{algorithm} \begin{figure}[H] \centering \includegraphics[height=0.6cm]{line.eps} \vspace{-9pt}\caption{In the gray area Min-Cycle reports a cycle.}\label{F-critical-value} \end{figure} \end{multicols} \end{table} \paragraph{The reduction to minimum triangle.} Our goal is to prove Theorem~\ref{thm:undir}. The main part of the proof is describing an algorithm that computes an upper bound for the minimum weight cycle and an instance $G'$ of minimum triangle, such that either the girth of the graph is exactly the upper bound, or with constant probability the minimum triangle weight in $G'$ is the girth of $G$. Below we only present $G'$ as having large weights. Later on, we find a value $t$ with which we use Lemma~\ref{L-bin-search}, so that $2t$ is a bound on the minimum cycle weight that is tight within $O(M)$. As mentioned in Section~\ref{s-approach}, this value allows us to reduce the weights of $G'$ so that they fall in the interval $[-O(M),O(M)]$. \begin{reminder}{Theorem~\ref{thm:undir}} Let $G(V,E,w)$ be an undirected graph with $w:E\rightarrow \{1,\ldots, M\}$ and let $C$ be a minimum cycle in $G$. 
There is an $O(n^2 (\log nM) \log n)$ time deterministic algorithm that computes a cycle $\hat{C}$ and constructs $O(\log n)$ graphs $G'_1,\ldots,G'_k$ on $\Theta(n)$ nodes and edge weights in $\{1,\ldots,O(M)\}$ such that either $w(\hat{C})=w(C)$ or the minimum out of all weights of triangles in the graphs $G'_i$ is exactly $w(C)$. \end{reminder} The weight of the minimum cycle is an integer value from the range $[1,nM]$. From Lemma~\ref{L-bin-search} it follows that we can use algorithm Min-Cycle to perform a binary search over this range in order to find the largest value $t\in [1,nM]$ for which Min-Cycle$(G,t)$ does not report a cycle but computes all distances of length at most $t$ (see Figure~\ref{F-critical-value}). This implies that by running Min-Cycle$(G,t+1)$ we obtain a cycle of weight at most $2t+2$. Hence, we only need to show that it is possible to detect the minimum cycle in the case that its weight $w(C)$ is $2t+1$ or less. Let us first prove some consequences of the fact that Min-Cycle$(G,t)$ does not report a cycle. \begin{lemma} Let $C=\{v_1,v_2,\ldots, v_\ell\}$ be a minimum cycle in $G(V,E,w)$. Suppose that Min-Cycle$(G,t)$ does not report a cycle. There are three \textbf{distinct} vertices $v_i,v_{i+1},v_j\in C$ such that $d_C[v_j,v_i] + w(v_i,v_{i+1}) > t$ and $d_C[v_{i+1},v_j] + w(v_i,v_{i+1}) > t$. \label{lemma:tbound} \end{lemma} \begin{proof} Let $(v_i,v_{i+1})$ be the critical edge for $v_1$ given by Lemma~\ref{lemma:middle}. Assume first that $v_1\neq v_i$ and $v_1\neq v_{i+1}$. If either $d_C[v_1,v_i] + w(v_i,v_{i+1}) \leq t$ or $d_C[v_{i+1},v_1] + w(v_i,v_{i+1}) \leq t$ then the edge $(v_i,v_{i+1})$ is relaxed. Assume that we are in the case that $d_C[v_1,v_i] + w(v_i,v_{i+1}) \leq t$. Then after $(v_i,v_{i+1})$ is relaxed, $d[v_{i+1}]\leq t$. If $d[v_{i+1}]<\infty$ before the relaxation of $(v_i,v_{i+1})$ the algorithm stops and reports a cycle.
If $d[v_{i+1}]=\infty$ before the relaxation of $(v_i,v_{i+1})$ then a cycle will be detected as well, but only when the edge $(v_{i+2},v_{i+1})$ is relaxed. This edge must be relaxed since $d_C[v_{i+1},v_1]\leq \lfloor w(C)/2\rfloor \leq t$, which implies that $v_{i+2}$ will be extracted and its edge $(v_{i+2},v_{i+1})$ will satisfy the relaxation requirement and will be relaxed. We conclude that if either $d_C[v_1,v_i] + w(v_i,v_{i+1}) \leq t$ or $d_C[v_{i+1},v_1] + w(v_i,v_{i+1}) \leq t$ then Min-Cycle$(G,t)$ must report a cycle, giving a contradiction. We now turn to the case that either $v_1 = v_i$ or $v_1 = v_{i+1}$. Assume wlog that $v_1=v_i$, that is, $(v_i,v_{i+1})=(v_1,v_2)$. From Lemma~\ref{lemma:middle} we know that $w(v_1,v_2)\geq \lceil w(C)/2 \rceil$ and $d_C[v_2,v_1]\leq \lfloor w(C)/2 \rfloor$. We also know that there is at least one additional vertex $v_\ell$ between $v_2$ and $v_1$ on the cycle $C$. We now apply Lemma~\ref{lemma:middle} to the vertex $v_\ell$. It is easy to see that in this case the edge $(v_1,v_2)$ will be the critical edge of $v_\ell$ as well. We now have three different vertices and the rest of this case is identical to the first case. \end{proof} As a first attempt, we create the new graph $G'(V',E',w')$ as follows. The vertex set $V'$ contains two copies $V^1$ and $V^2$ of $V$. For $i=1,2$, let $E^i$ be the set of edges with both endpoints in $V^i$. The set $E^1$ is empty and the set $E^2$ is $E$, that is, $(u^2,v^2)\in E^2$ if and only if $(u,v)\in E$. Let $E^{12}$ be the set of edges with one endpoint in $V^1$ and one endpoint in $V^2$. Let $u^1 \in V^1$ and $v^2\in V^2$. If the distance between $u$ and $v$ was computed by Min-Cycle$(G,t)$ then we add an edge $(u^1,v^2)$ to $E^{12}$ with weight $d[u,v]$. We show that there is a triangle in $G'(V',E',w')$ that corresponds to the minimum cycle of $G$ and has the same weight. \begin{claim}\label{C-G'-first-try} Let $C=\{v_1,v_2,\ldots, v_\ell\}$ be a minimum cycle in $G(V,E,w)$.
Assume that $w(C)\leq 2t+1$. There exists a triangle in $G'(V',E',w')$ on vertices of $C$ of weight $w(C)$. \end{claim} \begin{proof} Without loss of generality, let $v_1$ be the vertex $v_j$ from Lemma~\ref{lemma:tbound}, and let $v_i$ and $v_{i+1}$ be the other two vertices. From Lemma~\ref{lemma:tbound} we know that all three vertices are distinct and that $d_C[v_1,v_i] + w(v_i,v_{i+1}) > t$ and $d_C[v_{i+1},v_1] + w(v_i,v_{i+1}) > t$. Combining this with the fact that $C$ is a minimum cycle and $w(C)\leq 2t+1$ we get that $d[v_1,v_i]=d_C[v_1,v_i] \leq t$ and $d[v_{i+1},v_1] = d_C[v_{i+1},v_1] \leq t$. When Cycle? is run from $v_1$ it computes $d[v_1,v_i]$ and $d[v_1,v_{i+1}]$. Hence, there must be a triangle of weight $w(C)$ in $G'$ on the vertices $v_1^1$, $v_i^2$ and $v_{i+1}^2$. \end{proof} \begin{figure}[t] \centering \includegraphics[height=4.2cm]{example.eps} \caption{(a) A minimum cycle in $G$ that is transformed into a triangle in $G'$. (b) A simple path in $G$ that is transformed into a triangle in $G'$.}\label{F-G-to-G'} \end{figure} The claim above shows only one direction: if there is a minimum cycle $C$ of weight at most $2t+1$ in $G$ then there is a corresponding triangle in $G'$, of the same weight, on vertices $x^1\in V^1$ and $y^2,z^2\in V^2$ that correspond to vertices of $C$. This situation is depicted in Figure~\ref{F-G-to-G'}(a). To complete the reduction we must show that there are no false positives: triangles in $G'$ of smaller weight which do not correspond to a minimum cycle of $G$. Unfortunately, this is not the case and $G'$ may have such false triangles with smaller weight. This situation is depicted in Figure~\ref{F-G-to-G'}(b). Let $x,y,z\in V$. If there is a shortest path of length at most $t$ from $x$ to $z$ whose last edge is $(y,z)$ then there is a triangle in $G'$. To see this, notice that there are two different shortest paths, one from $x$ to $z$ and one from $x$ to $y$, both of length at most $t$.
In such a case the graph $G'$ includes the edges $(x^1,y^2)$ and $(x^1,z^2)$, and together with the edge $(y^2,z^2)$ they form a triangle. Moreover, such a triangle has the same structure as a valid triangle and might be of smaller weight; thus, a triangle detection algorithm cannot distinguish between a valid triangle and a false triangle. In what follows we first show that this is the only situation in which a false triangle is formed and then we show a construction that avoids such false triangles. In the above pathological case the only reason that the triangle $x^1,y^2,z^2$ did not correspond to a cycle was that we had two different paths $P_1$ and $P_2$ that both start in the same vertex, and the last vertex of one of these paths was the vertex right before the last vertex of the other path. In the next lemma we show that this is the {\em only} bad case. \begin{lemma} Let $x,y,z\in V$ be three distinct vertices. Let $P_1=y\rightarrow y'\rightarrow \ldots\rightarrow x$ and $P_2=x\rightarrow \ldots\rightarrow z'\rightarrow z$ be simple shortest paths between $y$ and $x$ and between $x$ and $z$, respectively. Let $y'\neq z$ and $z'\neq y$ and let $(z,y)\in E$. Then, $P_1\cup P_2\cup \{(z,y)\}$ contains a simple cycle of weight at most $w(P_1)+w(P_2)+w(z,y)$.\label{lemma:twopaths} \end{lemma} \begin{proof} Let $P^{-1}_1$ be $P_1$ with its edges reversed. Look at $P^{-1}_1$ and $P_2$. There are two options. Either one path is a subpath of the other, or there is a node $x'$ such that $x\rightarrow \ldots\rightarrow x'$ is a subpath of both, and $x'$ is followed by $q_1$ in $P^{-1}_1$ and by $q_2\neq q_1$ in $P_2$. Consider the first case. Wlog, $P_2$ is a subpath of $P^{-1}_1$ (the other inclusion is symmetric). Since $y'\neq z$, the subpath between $y$ and $z$ on $P_1$ has at least 2 edges, and adding edge $(z,y)$ produces a simple cycle of weight less than the sum of the two original path weights. Consider the second case when $x',q_1,q_2$ exist as above.
If there is some node between $x'$ and $y$ on $P^{-1}_1$ which also appears in $P_2$ after $x'$, then let $q$ be the first such node. Then no node on $P_2$ between $x'$ and $q$ appears between $x'$ and $q$ in $P^{-1}_1$. The two disjoint simple paths between $x'$ and $q$ form a simple cycle on at least $3$ nodes since $q_1\neq q_2$. The weight of this cycle is less than the sum of the two original path weights. Finally, suppose no such $q$ exists. Then the subpaths of $P^{-1}_1$ and $P_2$ between $x'$ and $y$ and $x'$ and $z$ share no vertices and hence adding edge $(z,y)$ closes a simple cycle of weight at most $w(P_1)+w(P_2)+w(z,y)$. \end{proof} Lemma~\ref{lemma:twopaths} implies that our reduction to minimum triangle will work, provided that we can ensure that for every triangle $x^1,y^2,z^2$ in $G'$, the last node $z'$ before $z$ on the shortest path from $x$ to $z$ in $G$ is distinct from $y$. To do this, we use the color-coding method from the seminal paper of Alon, Yuster and Zwick~\cite{AYZ97}. The idea is as follows. Let $\{C_1,C_2\}$ be two distinct colors and suppose we assign to every node of $G$ one of these colors independently and uniformly at random. Fix four vertices $y,y',z,z'$. The probability that $\color(y')=\color(z')=C_1$ and $\color(y)=\color(z)=C_2$ is $1/2^4$, a constant. Now we will modify $G'(V',E',w')$ from before. Recall that $V'=V^1\cup V^2$. For every node $x$ of $G$ we add a copy $x^1$ to $V^1$, so that $V^1$ is a copy of $V$. Furthermore, if $\color(x)=C_2$ we also add a copy $x^2$ to $V^2$. We now define the set of edges $E'$. Let $E^{ij}$ be the set of edges between $V^i$ and $V^j$, for $i,j\in \{1,2\}$. The edge set $E^{11}$ is empty, so $E' = E^{12} \cup E^{22}$. Let $x,z',z\in V$ be such that $(z',z)$ is the last edge of the shortest path from $x$ to $z$.
The sets $E^{12}$ and $E^{22}$ are defined as follows: $$ E^{12} = \{ (x^1,z^2) \mid \color(z')=C_1 \wedge \color(z)=C_2\} \;\;\;\;\;\;\;\;\;\;\;\; E^{22} = \{ (x^2,z^2) \mid \color(x)=C_2 \wedge \color(z)=C_2\}.$$ The weight of an edge $(x^1,z^2)\in E^{12}$ is $d[x,z]$. The weight of an edge $(x^2,z^2)\in E^{22}$ is $w(x,z)$. We now prove that $G'$ does not contain false triangles. \begin{lemma}\label{L-G'-has-no-false-triangle} If $T=\{x,y,z\}$ is a triangle in $G'$ then there exists a simple cycle $C$ in $G$ such that $\{x,y,z\}\subseteq C$ and $w(C) \leq w(T)$. \end{lemma} \begin{proof} Any triangle in $G'$ has either one vertex from $V^1$ and two vertices from $V^2$, or all three vertices from $V^2$. In the latter case the triangle is also a triangle in $G$, so we focus on the former case, that is, $T=\{x^1,y^2,z^2\}$ is a triangle in $G'$ such that $x^1\in V^1$ and $y^2,z^2\in V^2$. Let $x,y,z\in V$ be the vertices that correspond to $x^1,y^2$ and $z^2$ in $G$. Let $y'$ ($z'$) be the last vertex before $y$ ($z$) on the shortest path $P_1$ ($P_2$) between $x$ and $y$ ($z$) in $G$. The fact that $(x^1,y^2)\in E^{12}$ and $(x^1,z^2)\in E^{12}$ implies that $\color(y')=\color(z')=C_1$ and $\color(y)=\color(z)=C_2$. Hence we get that $y'\neq z$ and $z'\neq y$. Combining this with the fact that $E^{22}\subseteq E$ we get that the paths $P_1$, $P_2$ and the edge $(y,z)$ satisfy the requirements of Lemma~\ref{lemma:twopaths}, and there is a simple cycle of weight at most $w(P_1)+w(P_2)+w(y,z)=w(T)$ in $G$. \end{proof} Now that we have shown that $G'$ does not contain false triangles we prove that the minimum weight cycle in $G$ corresponds to a triangle in $G'$. (This can be viewed as proving Claim~\ref{C-G'-first-try} for the new construction of $G'$.) \begin{claim}\label{C-G'-second-try} Let $C=\{v_1,v_2,\ldots, v_\ell\}$ be a minimum cycle in $G(V,E,w)$. Assume that $w(C)\leq 2t+1$.
Then there exists a triangle in $G'(V',E',w')$ on vertices of $C$ of weight $w(C)$, with constant probability. \end{claim} \begin{proof} Without loss of generality, let $v_1$ be the vertex $v_j$ from Lemma~\ref{lemma:tbound}, and let $v_i$ and $v_{i+1}$ be the other two vertices. As in the proof of Claim~\ref{C-G'-first-try}, $d[v_1,v_i]=d_C[v_1,v_i]\leq t$ and $d[v_{i+1},v_1]=d_C[v_{i+1},v_1]\leq t$, and these values are computed by Min-Cycle$(G,t)$. The random coloring is successful when $\color(v_{i-1})=C_1$, $\color(v_i)=C_2$, $\color(v_{i+1})=C_2$ and $\color(v_{i+2})=C_1$. The probability that this happens is $1/2^4$, a constant. The triangle $\{v^1_1,v^2_i,v^2_{i+1}\}$ is in $G'$ exactly when the coloring is successful, and hence $C$ is represented by that triangle in $G'$ with constant probability. The weight of the triangle $\{v^1_1,v^2_i,v^2_{i+1}\}$ is $d[v_1,v_i]+w(v_i,v_{i+1})+d[v_{i+1}, v_1]= w(C)$. \end{proof} \paragraph{Weight reduction.} Currently, the maximum edge weight in $G'$ can be as large as $\Omega(nM)$ as the weights of edges in $E^{12}$ are distances in $G$. To complete the reduction, we show that it is possible to reweight the edges of $G'$ without changing the minimum triangle so that the edge weights will be integers from the range $[-M,M]$. The key idea is to use Lemma~\ref{lemma:tbound} in two different ways. As we previously mentioned, Lemma~\ref{lemma:tbound} implies that $d_C[v_j,v_i] \leq t$ and $d_C[v_{i+1},v_j] \leq t$. Moreover, the bounds $d_C[v_j,v_i] + w(v_i,v_{i+1}) > t$ and $d_C[v_{i+1},v_j] + w(v_i,v_{i+1}) > t$ imply that $d_C[v_j,v_i] > t - M$ and $d_C[v_{i+1},v_j] > t - M$. Thus, we can remove from $E^{12}$ every edge of weight strictly more than $t$ and every edge of weight $t-M$ or smaller with no effect on the minimum triangle in $G'$. We now decrease the weights of all the edges that were left in $E^{12}$ by $t$. The weight of every triangle in $G'$ with a node from $V^1$ has decreased by exactly $2t$.
Hence, the minimum triangle out of those with a node in $V^1$ remains the same. The weights of edges in $E^{12}$ are now integers from the interval $[-M,0]$, and the rest of the edge weights are still in $[1,M]$. If the minimum weight triangle in $G'$ now has nodes only from $V^2$, then this triangle was also the minimum weight one in $G'$ before the reweighting, and hence corresponds to a minimum weight cycle, with high probability. Otherwise, the minimum weight triangle in $G'$ has a node from $V^1$. The minimum out of these triangles was also the minimum one among the triangles with a node in $V^1$ before the reweighting. Hence it also corresponds to a minimum weight cycle, with high probability. This completes the description of our construction. \paragraph{Derandomization.} The reduction can be made deterministic, just as in the color-coding paper of Alon \etal~\cite{AYZ97}, by using a $k$-perfect hash family: a family $F=\{f_1,\ldots,f_{|F|}\}$ of hash functions from $\{1,\ldots, n\}$ to $\{1,\ldots,k\}$ so that for every $V'\subset V$ with $|V'|=k$, there exists some $i$ so that $f_i$ maps the elements of $V'$ to distinct colors. In our case, $k=2$. By enumerating through the functions of $F$, and using each $f_i$ in place of the random coloring, our reduction runs in $O(n^2(\log nM)\log n + |F| n^2)$ time, provided each $f_i$ can be evaluated in constant time. Our reduction produces $O(|F|)$ instances of minimum triangle. Schmidt and Siegel~\cite{schmidtsiegel} (following Fredman, Koml\'os and Szemer\'edi~\cite{fks84}) gave an explicit construction of a $k$-perfect family in which each function is specified using $O(k)+2\log\log n$ bits. For our case of $k=2$, the size of the family is therefore $O(\log^2 n)$. The value of each one of the hash functions on each specified element of $V$ can be evaluated in $O(1)$ time. Alon, Yuster and Zwick~\cite{AYZ97} reduced the size of the hash family to $O(\log n)$.
Using this family we can derandomize our reduction so that it runs in deterministic $O(n^2(\log nM)\log n)$ time. \section{Minimum cycle in directed graphs with weights in $\{-M,\ldots, M\}$}\label{s-directed} In this section we consider directed graphs with possibly negative weights but no negative cycles. In contrast to the situation in undirected graphs, it is relatively easy to reduce the minimum cycle problem in directed graphs to the problem of computing all pairs shortest paths. If $D$ is the distance matrix of a directed graph, then its minimum cycle has weight $\min_{(j,i)\in E} D[i,j]+w(j,i)$, where the minimum ranges over the edges of the graph. Hence, using Zwick's APSP algorithm~\cite{zwickbridge} we can compute the minimum cycle in $O(M^{0.681}n^{2.575})$ time. In this section we show that the minimum cycle problem in directed graphs can be reduced to the problem of finding a minimum triangle in an undirected graph. This also implies that the minimum weight cycle in directed graphs can be computed in $\tilde{O}(Mn^\omega)$ time. Similarly to before, our approach will be to compute upper bounds on the distances in the graph so that for some node $s$ on the minimum cycle $C$ and its critical edge $(v_i,v_{i+1})$ we obtain the exact distances $d[s,v_i]=d_C[s,v_i]$ and $d[v_{i+1},s]=d_C[v_{i+1},s]$. \paragraph{Computing cycle distances.} The Dijkstra-like approach of the previous section does not work for directed graphs; moreover, it only applies when the edge weights are nonnegative. Here we utilize a new approach that allows us to reduce the minimum cycle problem in directed graphs with integral weights in the interval $[-M,M]$ to the minimum triangle problem in undirected graphs with weights in $[-M,M]$. Our result is more general than before. However, this comes at a cost: the reduction no longer takes nearly quadratic time, but consumes $\Ot(Mn^\omega)$ time.
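The one-line reduction from minimum cycle to APSP mentioned above can be sketched as follows (representation and names are ours; the distance matrix is assumed to be exact):

```python
import math

# Sketch: minimum directed cycle weight from an exact distance matrix.
# D[i][j] is the shortest-path distance from i to j, and `edges` maps a
# directed edge (j, i) to its weight w(j, i); a cycle is a shortest
# path from i to j closed by the edge (j, i).
def min_cycle_from_apsp(D, edges):
    return min((D[i][j] + w for (j, i), w in edges.items()
                if D[i][j] < math.inf), default=math.inf)
```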
Our approach uses the fact that Lemma~\ref{lemma:middle} applies for \emph{every} vertex of a cycle, together with a result by Yuster and Zwick~\cite{yusterzwick05}, stated in Theorem~\ref{thm:yz} below, which is obtained by a clever modification of Zwick's APSP algorithm~\cite{zwickbridge}. \begin{theorem}[Yuster and Zwick '05]\label{thm:yz} Given an $n$-node directed graph with integral edge weights in the interval $[-M,M]$, in $\tilde{O}(Mn^\omega)$ time one can compute an $n\times n$ matrix $D$ such that the $i,j$ entry of the distance product $D\star D$ contains the distance from $i$ to $j$. \end{theorem} The matrix $D$ can contain entries with values as large as $\Omega(Mn)$, and so $D\star D$ is not known to be computable in truly subcubic time, even when $M$ is small. Nevertheless, the theorem applies to general graphs with positive or negative weights. It also gives an $\tilde{O}(Mn^\omega)$ time algorithm for detecting a negative cycle in a graph, and is extremely useful in computing minimum cycles. The Yuster-Zwick algorithm proceeds in stages. In each stage $\ell$, a node subset sample $B_\ell$ is maintained so that each node is in $B_\ell$ with probability at least $\min\{1,9(2/3)^\ell\ln n\}$. They prove the following lemma. \begin{lemma}[Yuster and Zwick '05] For every stage $\ell$ and any node $s\in B_\ell$ and node $v\in V$, the algorithm has estimates $D[s,v]$ and $D[v,s]$, so that if the shortest path from $s$ to $v$ has at most $(3/2)^\ell$ edges then $D[s,v]=d[s,v]$, with high probability. Similarly, if the shortest path from $v$ to $s$ has at most $(3/2)^\ell$ edges then $D[v,s]=d[v,s]$ with high probability.\label{lemma:yz} \end{lemma} The Yuster-Zwick algorithm also provides an additional matrix $\Pi$ of {\em predecessors} so that if $k=\Pi[i,j]$, then $k$ is the predecessor of $j$ on a simple path from $i$ to $j$ of weight $D[i,j]$.
Similarly, one can obtain a matrix $\Pi'$ of {\em successors} so that if $k=\Pi'[i,j]$, then $k$ is the successor of $i$ on a simple path from $i$ to $j$ of weight $D[i,j]$. Now, first use the algorithm to check whether the given graph has a negative cycle. If it does not, then let $C$ be the minimum weight cycle, $w(C)\geq 0$. Recall that $n(C)$ is the number of vertices/edges on $C$. Let $\ell$ be the minimum value so that $n(C)\leq (3/2)^\ell$. Note that then $n(C)\geq (3/2)^{\ell-1}$. The probability that a particular node $s$ of $C$ is not in $B_\ell$ is at most $1-(2/3)^\ell(9\ln n)$. The events are independent for all $s$ in $C$, and so the probability that no node of $C$ is in $B_\ell$ is at most $$(1-(2/3)^\ell (9\ln n))^{n(C)}\leq (1-(2/3)^\ell (9\ln n))^{(3/2)^{\ell-1}}\leq 1/n^6.$$ Thus the probability that some node $s$ of $C$ is in $B_\ell$ is at least $1-1/\poly(n)$; furthermore, by Lemma~\ref{lemma:yz}, with high probability the Yuster-Zwick algorithm has computed $D[s,x]=d[s,x]$ and $D[x,s]=d[x,s]$ for all $x\in C$, since the number of edges on each of the respective shortest paths is at most $n(C)\leq (3/2)^\ell$. In particular, this means that $D[s,v_i]=d[s,v_i]$ and $D[v_{i+1},s]=d[v_{i+1},s]$ for the critical edge $(v_i,v_{i+1})$ for $s$ on $C$, as Lemma~\ref{lemma:middle} applies for every vertex of $C$. Moreover, since $C$ is a minimum weight cycle with $w(C)\geq 0$, by Lemma~\ref{lemma:middle}, the paths between $s$ and $v_i$ and $v_{i+1}$ and $s$ on $C$ are shortest paths between $s$ and $v_i$ and $v_{i+1}$ and $s$, respectively. Hence, $d_C[s,v_{i}]=d[s,v_{i}]=D[s,v_{i}]$ and $d_C[v_{i+1},s]=d[v_{i+1},s]=D[v_{i+1},s]$, with high probability. \paragraph{Creating the minimum triangle instance $G'$.} $G'$ will still be undirected, but unlike the construction for undirected graphs, $G'$ will now be tripartite. The vertex set $V'$ of $G'$ has partitions $V^1,V^2,V^3$ which are all copies of $V$. The construction is as follows.
For every directed edge $(u,v)$ of $G$, add an edge from $u^2\in V^2$ to $v^3\in V^3$ with weight $w(u,v)$. Furthermore, for every two nodes $x,y$ so that $D[x,y]<\infty$, add an edge from $x^1\in V^1$ to $y^2\in V^2$ and one from $x^3\in V^3$ to $y^1\in V^1$, each with weight $D[x,y]$. Hence both the edges between $x^1\in V^1$ and $y^2\in V^2$ and the edges between $x^3\in V^3$ and $y^1\in V^1$ correspond to directed paths from $x$ to $y$; the difference is which endpoint of the path lies in $V^1$. Hence any triangle in $G'$ corresponds to a directed closed walk in $G$. However, any such closed walk must contain a simple cycle of no larger weight: if the walk is not simple, find a closest pair of copies of a node $v$ on the walk. These copies enclose a simple cycle $C''$ in $G$. Now, either $C''$ has no larger weight than the walk, or removing it from the walk produces a smaller closed walk of negative weight, and hence $G$ contains a negative cycle, which we assumed is not the case. Since $G'$ contains no false positives, the minimum triangle of $G'$ corresponds exactly to $C$, with high probability. \paragraph{Weight reduction.} As in the construction for undirected graphs, the maximum edge weight in $G'$ can be as large as $\Omega(nM)$. Here we give a different way to reduce the weights to the interval $[-M,M]$. Let $t$ be a parameter which we will be changing. Intuitively, our goal will be to set $t$ to roughly $\lfloor w(C)/2\rfloor$; for our purposes, it will be sufficient for $t$ to be at most $\lfloor w(C)/2\rfloor$. Initially, $t=Mn$. Now, check whether there is a triangle $a^1 \in V^1$, $b^2 \in V^2$, $c^3 \in V^3$ in $G'$ so that $D[a^1,b^2], D[c^3,a^1]\leq t$. We run a binary search on $t$ in the interval $[0, Mn]$, until we find the smallest $t$ such that there is such a triangle.
Each step of the search can be done using a Boolean matrix product: create a matrix $A$ which is $1$ wherever $D$ is $\leq t$ and $0$ otherwise; multiply $A$ by itself and check for a triangle closed by an edge $(b^2,c^3)$, $b^2\in V^2, c^3\in V^3$. The whole binary search takes $O(n^\omega \log (Mn))$ time. Let $\{s^1,v_{i}^2,v_{i+1}^3\}$ be the triangle in $G'$ that (with high probability) corresponds to the minimum cycle $C$ of $G$. Since $\{s^1,v_{i}^2,v_{i+1}^3\}$ is a valid triangle, and $d_C[s,v_{i}],d_C[v_{i+1},s]\leq \lfloor w(C)/2\rfloor$ by Lemma~\ref{lemma:middle}, after the completion of the binary search we have $t\leq \lfloor w(C)/2\rfloor$. Furthermore, since $C$ is a minimum cycle and by the definition of $t$, $w(C) \leq 2t+w(e)$, where $e$ is some edge in $G$, which implies that $w(C)\leq 2t+M$. Hence, $t\leq \lfloor w(C)/2\rfloor$ and $w(C)/2\leq t+M/2$. Now, take $G'$ and remove every edge $(c^3,a^1)\in V^3\times V^1$ with $D[c^3,a^1]> t+M/2$ and every $(a^1,b^2)\in V^1\times V^2$ with $D[a^1,b^2]>t+M/2$. If an edge has weight $\leq \lfloor w(C)/2\rfloor\leq t+M/2$, it is not removed. In particular, $(s^1,v_i^2)$ and $(v_{i+1}^3,s^1)$ are still edges, by Lemma~\ref{lemma:middle}. Remove every $(c^3,a^1)\in V^3\times V^1$ with $D[c^3,a^1]<t-M$ and every $(a^1,b^2)\in V^1\times V^2$ with $D[a^1,b^2]<t-M$. If an edge has weight $\geq \lfloor w(C)/2\rfloor - M\geq t-M$, then it is not removed. Hence again $(s^1,v_{i}^2)$ and $(v_{i+1}^3,s^1)$ are not removed, because their weight is at least $t-M$, as follows from Lemma~\ref{lemma:middle}. All remaining edges in $(V^3\times V^1)\cup (V^1\times V^2)$ have integral weights in $[t-M,t+M/2]$, and $C$ is still represented by the minimum triangle $\{ s^1,v_{i}^2,v_{i+1}^3\}$. Now, for every remaining edge $(a,b)\in (V^1\times V^2) \cup (V^3\times V^1)$, change its weight to $D[a,b]-t$. The weights of the edges of $G'$ are now in the interval $[-M,M]$.
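The binary search that determines $t$ might look like the sketch below (all names are ours; a naive $O(n\cdot|E|)$ scan stands in for the $O(n^\omega)$ Boolean matrix product):

```python
# Sketch of the binary search for the smallest feasible t.  D[a][b] is
# the distance estimate used for the V^1-V^2 and V^3-V^1 edges, and
# `G_edges` lists the directed edges (b, c) of G, i.e. the V^2-V^3 edges.
def exists_triangle(D, G_edges, n, t):
    # A triangle needs D[a][b] <= t and D[c][a] <= t for some edge
    # (b, c) -- checked naively here instead of via the product A * A.
    return any(D[a][b] <= t and D[c][a] <= t
               for (b, c) in G_edges for a in range(n))

def smallest_t(D, G_edges, n, M):
    lo, hi = 0, M * n   # feasibility is monotone in t
    while lo < hi:
        mid = (lo + hi) // 2
        if exists_triangle(D, G_edges, n, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo           # equals M * n if no triangle exists at all
```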
Furthermore, since the weights of all triangles have decreased by $2t$, the minimum triangle of $G'$ is still the same. This completes the construction of $G'$. \paragraph{Derandomization.} The only randomized part of our reduction is our use of Yuster and Zwick's result. Their algorithm can be derandomized, as pointed out in their paper~\cite{yusterzwick05} without affecting our use of their result. Hence, we obtain a deterministic reduction from minimum cycle in directed graphs to minimum triangle in undirected graphs which runs in $O(Mn^\omega\log n + n^\omega(\log Mn)\log n)$ time and does not increase the size of the graph or the edge weights by more than a constant factor. \section{Discussion}\label{s-discuss} We have obtained separate algorithms for minimum cycle for undirected graphs with nonnegative weights and for directed graphs with possibly negative weights. A natural question is whether one can obtain an algorithm that works for both types of graphs, or more generally for \emph{mixed} graphs: graphs with both directed and undirected edges. This turns out to be possible for mixed graphs with nonnegative weights. (The problem is NP-hard for mixed graphs with positive and negative weights, even when there are no negative cycles.) The idea for the proof of Theorem~\ref{thm:mix} below is to compute the distance estimates $D[\cdot,\cdot]$ using the approach from our reduction for directed graphs. This is possible since when the weights are nonnegative, one can reduce the shortest paths problem in undirected or mixed graphs to that in directed graphs by replacing each undirected edge $(u,v)$ by the two directed edges $(u,v)$ and $(v,u)$. Then the entire approach from the previous section applies up until the triangle instance needs to be constructed. To construct the triangle instance, we use the color-coding technique from our minimum cycle algorithm for undirected graphs, but only on the undirected edges. 
The derandomization also follows from the previous two sections. More details follow. Just as in the construction for directed graphs, for every directed edge $(u,v)$ of $G$, add an edge from $u^2\in V^2$ to $v^3\in V^3$ with weight $w(u,v)$. As in the construction for undirected graphs, randomly assign every node of $G$ one of two different colors $\{C_1,C_2\}$ independently uniformly at random. For every undirected edge $(u,v)$ of $G$, add an edge from $u^2\in V^2$ to $v^3\in V^3$ of weight $w(u,v)$ if and only if $\color(u)=\color(v)=C_2$. Consider two nodes $x,y$ so that $D[x,y]<\infty$. Let $x'=\Pi'[x,y]$ and $y'=\Pi[x,y]$ be the first node after $x$ and the last node before $y$, respectively, on the path from $x$ to $y$ with weight $D[x,y]$. If $(x,x')$ is a directed edge, then add an edge from $x^3\in V^3$ to $y^1\in V^1$, just as in the directed graph construction. Similarly, if $(y',y)$ is a directed edge, then add an edge from $x^1\in V^1$ to $y^2\in V^2$. Otherwise, if $(y',y)$ is undirected, add an edge from $x^1\in V^1$ to $y^2\in V^2$ only if $\color(y')=C_1$ and $\color(y)=C_2$. If $(x,x')$ is undirected, add an edge from $x^3\in V^3$ to $y^1\in V^1$ only if $\color(x')=C_1$ and $\color(x)=C_2$. The weights of these edges are all $D[x,y]$. By Lemma~\ref{lemma:twopaths} (which also applies to mixed graphs) every triangle in $G'$ now corresponds to a simple cycle of no larger weight. Furthermore, the minimum cycle of $G$ is represented by a triangle of the same weight in $G'$ with constant probability. This follows from the color-coding and from the fact that for some node $s$ of $C$ and its critical edge $(v_{i},v_{i+1})$, $D[s,v_{i}]=d[s,v_{i}]=d_C[s,v_{i}]$ and $D[v_{i+1},s]=d[v_{i+1},s]=d_C[v_{i+1},s]$. \begin{theorem} Let $G(V,E,w)$ be a mixed graph on $n$ nodes, $w:E\rightarrow \{1,\ldots, M\}$.
In $\tilde{O}(Mn^\omega)$ time one can construct $O(\log n)$ graphs $G'_1,\ldots,G'_k$ on $\Theta(n)$ nodes and edge weights in $\{1,\ldots,O(M)\}$ so that the minimum out of all weights of triangles in the graphs $G'_i$ is exactly the weighted girth of $G$. \label{thm:mix} \end{theorem} Shortest paths and minimum cycles in undirected graphs with positive and negative weights are of a completely different nature than the corresponding problems in directed graphs or in undirected graphs with nonnegative weights. In the absence of negative cycles, shortest paths and cycles can be computed via matching techniques (see e.g.~\cite{lawler,kortevygen,gabow83,gabow85}). However, the running times are not as good as the corresponding ones for directed graphs. For instance, APSP can be solved in $O(\min\{n^3,mn\log n\})$ time~\cite{gabow83}, whereas the corresponding problem in directed graphs can be solved in $O(\min\{n^3\log\log^3 n/\log^2 n,mn+n^2\log n\})$ time~\cite{chan07j,floyd,warshall}. None of the shortest paths algorithms for directed graphs, including the Yuster-Zwick algorithm, applies to undirected graphs when there are negative weights, as they would confuse any negative weight edge with a negative cycle. Hence our approach from Section~\ref{s-directed} would not work. Our approach from Section~\ref{s-undirected} also fails, even if we have already computed APSP in the graph. The main reason is that Lemma~\ref{lemma:twopaths} does not apply when the weights can be negative, and hence the color-coding technique cannot be applied as is. Computing minimum cycles in undirected graphs with possibly negative weights may require entirely new techniques. \paragraph{Extension to $k$-cycles.} We now show that the minimum weight cycle problem in undirected and directed graphs can be reduced to the minimum $k$-cycle problem in undirected graphs for every $k\geq 4$.
In order to obtain our reduction to minimum $k$-cycle in undirected graphs with integral edge weights in $[0,O(M)]$, it suffices to provide a reduction from minimum weight triangle in an $n$-node undirected graph with weights in $[-M,M]$ to minimum $k$-cycle in a $\Theta(n)$-node undirected graph with edge weights in $[0,O(M)]$, and then to combine this reduction with our reduction from minimum weight cycle to minimum weight triangle. This proves Theorem~\ref{thm:equiv2}. \begin{lemma} Let $k\geq 4$ be fixed. Given an $n$-node undirected graph $G$ with integral edge weights in $[-M,M]$, one can construct in $O(n^2)$ time an undirected graph $G'$ on $\Theta(n)$ nodes and integral edge weights in $[0,9M]$ so that if $G$ has at least one triangle and the minimum triangle weight is $W$, then the minimum weight $k$-cycle in $G'$ has weight $W+24M$. \end{lemma} \begin{proof} Without loss of generality, we can assume that the instance of minimum triangle is tripartite with partitions $V^1,V^2,V^3$. Remove each node $v\in V^1$ and its incident edges and replace it with a path on $k-2$ nodes $P_v=\{v_1\rightarrow v_2\rightarrow\ldots\rightarrow v_{k-2}\}$ as follows. For every original edge $(u,v)$ with $u\in V^2$, add an edge $(u,v_1)$ with weight $w(u,v)$, and for every original edge $(v,u)$ with $u\in V^3$, add an edge $(v_{k-2},u)$ with weight $w(v,u)$. Let the weights of the path edges $(v_i,v_{i+1})$ for $i\in \{1,\ldots,k-3\}$ all be $0$. Increase the weights of all edges of $G'$ which are not on paths $P_v$ by $8M$. This forms a weighted graph $G'$ on $O(kn)$ nodes and weights in $[0,9M]$. Every triangle $v\in V^1,u\in V^2,z\in V^3$ of weight $W$ in the original graph has a corresponding $k$-cycle in $G'$ of weight $W+24M$, since exactly three of the cycle edges are non-path edges. Now consider any $k$-cycle $C$ of $G'$. If $C$ contains an edge of a path $P_v$ corresponding to a node $v\in V^1$, then it must contain the entire path, since every node $v_i$ for $i\in \{2,\ldots,k-3\}$ has degree exactly $2$. Hence $C$ contains exactly $2$ other nodes which must close a cycle with $P_v$. Hence the other two nodes are from $V^2$ and $V^3$, and there is a corresponding triangle in $G$ of weight $24M$ less. If on the other hand $C$ does not contain an edge of a path $P_v$, then it has $k$ edges of weight at least $7M$, and hence $w(C)\geq 7kM\geq 28M$ for $k\geq 4$. Any triangle of $G$, however, corresponds to a cycle of weight at most $3M+24M=27M<28M\leq w(C)$. Hence the minimum weight $k$-cycle in $G'$ must correspond to a triangle in $G$, if $G$ contains a triangle. \end{proof}
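A toy sketch of the gadget (names and the brute-force check are ours; we use a shift of $8M$ per non-path edge, under which a triangle of weight $W$ becomes a $k$-cycle of weight $W+24M$):

```python
from itertools import permutations

# Sketch: replace each v in V^1 by a 0-weight path on k-2 nodes and
# raise every non-path edge by `shift`; undirected edges are frozensets.
def k_cycle_gadget(V1, V2, V3, edges, k, shift):
    w = {}
    for v in V1:                          # the 0-weight path P_v
        for i in range(1, k - 2):
            w[frozenset({(v, i), (v, i + 1)})] = 0
    for (a, b), wt in edges.items():
        if a in V2 and b in V1:           # V^2 edge enters the path
            w[frozenset({a, (b, 1)})] = wt + shift
        elif a in V1 and b in V3:         # V^3 edge leaves the path
            w[frozenset({(a, k - 2), b})] = wt + shift
        else:                             # V^2 -- V^3 edge
            w[frozenset({a, b})] = wt + shift
    return w

def min_k_cycle(w, k):
    # Brute force over all k-node cycles (fine for toy instances only).
    nodes = sorted({x for e in w for x in e}, key=str)
    best = float('inf')
    for cyc in permutations(nodes, k):
        es = [frozenset({cyc[i], cyc[(i + 1) % k]}) for i in range(k)]
        if all(e in w for e in es):
            best = min(best, sum(w[e] for e in es))
    return best
```

On a one-triangle instance with $M=3$ and triangle weight $W=2$, the brute-force minimum $4$-cycle weight is $W+24M=74$.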
https://arxiv.org/abs/1609.07631
A note on the Gaussian curvature on noncompact surfaces
We give a short proof of the following fact. Let $\Sigma$ be a connected, finitely connected, noncompact manifold without boundary. If $g$ is a complete Riemannian metric on $\Sigma$ whose Gaussian curvature $K$ is nonnegative at infinity, then $K$ must be integrable. In particular, we obtain a new short proof of the fact that if $\Sigma$ admits a complete metric whose Gaussian curvature is nonnegative and positive at one point, then $\Sigma$ is diffeomorphic to $\mathbb{R}^2$.
\section{Introduction} In~\cite{RosSto94}, J. Rosenberg and S. Stolz conjectured that a closed manifold $X$ admits a metric of positive scalar curvature when the cylinder $X\times\mathbb{R}$ admits a \emph{complete} metric of positive scalar curvature. When $X$ is one-dimensional, this conjecture corresponds to the statement that the cylinder $S^1\times\mathbb{R}$ cannot carry a complete Riemannian metric of positive scalar curvature. This fact is well-known and follows for instance from~\cite[Corollary~6.13]{GroLaw83}. One of the main aims of this note is to provide an elementary proof of this fact. More generally, we consider a class of noncompact surfaces, which are called finitely connected. Roughly speaking, they are obtained by removing a finite number of points from a compact surface. For such surfaces, the Euler characteristic is obviously well-defined. Suppose that $g$ is a complete Riemannian metric on a connected, finitely connected, noncompact surface without boundary $\Sigma$. We show that if the Gaussian curvature $K$ of $g$ is nonnegative at infinity, then $K$ must be integrable and its integral must be at most $2\pi$ times the Euler characteristic of $\Sigma$. As an application, we obtain a new proof of the fact that a connected, finitely connected, noncompact surface without boundary with nonpositive Euler characteristic cannot carry a complete metric whose Gaussian curvature is nonnegative and positive at one point. In particular, this applies to the cylinder $S^1\times\mathbb{R}$, since it is finitely connected and $\chi(S^1\times\mathbb{R})=\chi(S^1)=0$. \section{The theorem} Before stating the theorem, we review some basic notions on noncompact surfaces.
We say that a smooth surface without boundary $\Sigma$ is \emph{finitely connected} if there exists a compact surface with boundary $\Omega\subset \Sigma$ such that \begin{enumerate}[label=(\roman*)] \item the boundary of $\Omega$ is a disjoint union of closed simple curves $l_1,\ldots, l_p$; \item the open set $\Sigma\setminus \Omega$ is a disjoint union of the cylinders $C_j:=l_j\times (0,\infty)$, for $j=1,\ldots,p$. \end{enumerate} Notice that, equivalently, a noncompact surface without boundary $\Sigma$ is finitely connected if it is homeomorphic to a closed surface $\Sigma_1$ with $p$ points removed. In this case, the singular homology groups of $\Sigma$ have finite rank and the Euler characteristic $\chi(\Sigma)$ of $\Sigma$ is given by the formula \begin{equation}\label{E:Euler char. finitely connected} \chi(\Sigma)\ =\ \chi(\Sigma_1)\,-\,p\,. \end{equation} For the basic notions on finitely connected surfaces, we refer to~\cite[Section~2.1]{ShiShiTan03}. Let $(\Sigma,g)$ be a noncompact Riemannian surface. Let $K$ be the Gaussian curvature of $g$, and $dA_g$ the area element. We say that a Riemannian surface $(\Sigma,g)$ admits \emph{total curvature} if \[ \int_\Sigma K_+\,dA_g\,<\infty\quad\qquad\text{ or }\quad\qquad\int_\Sigma K_-\,dA_g\,<\infty\,, \] where $K_+:=\max\{K,0\}$ and $K_-:=\max\{-K,0\}$. In this case, the extended real number \begin{equation}\label{E:total curvature} c(\Sigma;g)\ :=\ \int_\Sigma K\,dA_g\ =\ \int_\Sigma K_+\,dA_g\,-\,\int_\Sigma K_-\,dA_g\in[-\infty,\infty] \end{equation} is called the \emph{total curvature} of $(\Sigma,g)$ (for more details on the notion of total curvature for noncompact surfaces, see~\cite[Section~2.1]{ShiShiTan03}). We say that the Gaussian curvature $K$ is \emph{nonnegative at infinity} if $K\geq 0$ outside a compact subset of $\Sigma$. Notice that in this case $K_-$ is compactly supported, so that $(\Sigma,g)$ admits total curvature $c(\Sigma;g)$ taking values in the interval $(-\infty,+\infty]$.
\begin{theorem}\label{T:main theorem} Let $(\Sigma,g)$ be a connected, finitely connected, complete Riemannian surface without boundary. If $K$ is nonnegative at infinity, then $(\Sigma,g)$ admits \emph{finite} total curvature. Moreover, we have \begin{equation}\label{E:G-B inequality} 2\pi\,\chi(\Sigma)\ \geq\ c(\Sigma;g)\,. \end{equation} \end{theorem} \begin{remark} Formula~\eqref{E:G-B inequality} can be deduced from the first part of the theorem by using the Gauss-Bonnet inequality due to Cohn-Vossen (cf.~\cite{Coh35} and~\cite[Theorem~2.2.1]{ShiShiTan03}). In our proof of Theorem~\ref{T:main theorem}, we also prove inequality~\eqref{E:G-B inequality} in the case when the Gaussian curvature is nonnegative at infinity. \end{remark} Notice that, by formula~\eqref{E:Euler char. finitely connected}, if a connected, finitely connected, noncompact surface has positive Euler characteristic, then it is homeomorphic (and hence diffeomorphic) to $\mathbb{R}^2$. Therefore, from Theorem~\ref{T:main theorem} we obtain a new proof of the following fact. \begin{corollary}\label{C:Cheeger-Gromoll} Let $\Sigma$ be a connected, finitely connected, noncompact surface. If $\Sigma$ admits a complete metric whose Gaussian curvature is nonnegative and positive at one point, then $\Sigma$ is diffeomorphic to $\mathbb{R}^2$. \end{corollary} \begin{remark} Corollary~\ref{C:Cheeger-Gromoll} is a direct consequence of the soul conjecture of Cheeger and Gromoll, proved in full generality by Perelman in~\cite{Per94}, when specialized to the two-dimensional case. \end{remark} \section{The approximation procedure} In this section we present the proof of Theorem~\ref{T:main theorem}. Let $\Sigma$ be a noncompact, connected, finitely connected surface without boundary.
Let $\Omega\subset \Sigma$ be a compact submanifold with boundary such that the boundary $\partial\Omega$ of $\Omega$ consists of $p$ copies of $S^1$ and $\Sigma\setminus\Omega=\bigsqcup_{j=1}^pC_j$, where each $C_j$ is a copy of the cylinder $S^1\times (0,\infty)$. We also assume $\Sigma$ is \emph{orientable} and pick an orientation. Let $\Sigma_h$ be the compact surface with boundary obtained by truncating the cylindrical ends of $\Sigma$ at the height $h$. This means that the boundary $\partial\Sigma_h$ of $\Sigma_h$ is the disjoint union of $p$ copies of $S^1$ and $\Sigma\setminus\Sigma_h=\bigsqcup_{j=1}^p S^1\times (h,\infty)$. The \emph{total geodesic curvature} of $\Sigma_h$ is defined by \begin{equation}\label{E:lambda} \lambda(h)\ :=\ \int_{\partial\Sigma_h}K\,, \end{equation} where the boundary $\partial\Sigma_h$ is \emph{positively} oriented with respect to the given orientation of $\Sigma$ and $K$ here denotes the \emph{geodesic curvature} of $\partial\Sigma_h$ (see~\cite[Section~4-4, Definition~10]{docarmo}). \begin{lemma}\label{L:lambda>=0} Let $(\Sigma,g)$ be a connected, finitely connected, \emph{orientable}, complete Riemannian surface without boundary. If the Gaussian curvature of $g$ is nonnegative at infinity, then $\lambda(h)$ converges to a nonnegative number $L$, as $h$ goes to infinity. \end{lemma} \begin{remark} Our proof of this lemma is based on the fact that we can choose on each cylindrical end $C_j$ suitable coordinates which simplify the components of the metric. Such coordinates were first used by S. Rosenberg in~\cite{SRos82} to provide a short proof of the Cohn-Vossen inequality. \end{remark} \subsection{Proof of Lemma~\ref{L:lambda>=0}} Since $\Sigma_h$ is a retract of $\Sigma$, using the Gauss-Bonnet theorem (see~\cite[Section~4-5]{docarmo}) on $\Sigma_h$ we obtain \begin{equation}\label{E:Gauss-Bonnet} 2\pi\,\chi (\Sigma)\ =\ 2\pi\,\chi (\Sigma_h)\ =\ c(\Sigma_h;g)\,+\,\lambda (h)\,.
\end{equation} Since $K$ is nonnegative at infinity, there exists $h_1>0$ such that $K\geq 0$ on $\Sigma\setminus\Sigma_{h}$, for all $h\geq h_1$. From~\eqref{E:Gauss-Bonnet} we deduce that the function \begin{equation}\label{E:L1} \lambda(h)\ =\ 2\pi\,\chi(\Sigma)\,-\,c(\Sigma_h;g) \end{equation} is nonincreasing on the interval $(h_1,\infty)$, so that the extended real number \begin{equation}\label{E:L3} L\ :=\ \lim_{h\rightarrow\infty}\lambda(h)\in [-\infty,+\infty) \end{equation} is well-defined. To conclude the proof, it remains to show that we must have $L\geq 0$. In order to get more information on the function $\lambda(h)$ and the number $L$, we compute a local expression for the geodesic curvature $K$. As observed in~\cite{SRos82}, we can choose coordinates $(t_j,\theta_j)$ on the cylindrical end $C_j$ in a way that \begin{enumerate}[label=(\roman*)] \item for all $P\in C_j$, the basis \[ \left\{\frac{\partial}{\partial t_j}\Big|_P \,,\ \frac{\partial}{\partial\theta_j}\Big|_P\right\} \] of the tangent space $T_PC_j\cong \mathbb{R}\oplus\mathbb{R}$ is positively oriented; \item the metric $g$, restricted to the cylindrical end $C_j$, is of the form \[ g(t_j,\theta_j)\ =\ dt_j^2\,+\,G_j(t_j,\theta_j)\,d\theta_j^2\,,\qquad\qquad (t_j,\theta_j)\in C_j\,, \] where $G_j:C_j\rightarrow (0,\infty)$ is a smooth function. \end{enumerate} For the rigorous construction of the coordinates $(t_j,\theta_j)$, we refer to~\cite[page 747]{SRos85}. With this choice, the curve $\gamma_j^h(s)=(t_j(s),\theta_j(s))=(h,s)$ parametrizes $\partial \Sigma_h\cap C_j$ with \emph{positive} orientation (see~\cite[pp.~267-268]{docarmo}). 
Moreover, by~\cite[Section~4-4, Proposition~3]{docarmo}, the geodesic curvature $K$ of $\gamma_j^h$ takes the form \begin{equation}\label{E:geodesic curvature} K\ =\ \frac{1}{2\,\sqrt{G_j}\,}\,\frac{\partial G_j}{\partial t_j}\,\frac{d\theta_j}{ds} \ =\ \frac{\partial}{\partial t_j}\,\sqrt{G_j}\,, \end{equation} since $d\theta_j/ds=1$ along $\gamma_j^h$, from which \[ \int_{\partial\Sigma_h\cap C_j}K\ =\ \int_0^{2\pi}\left(\frac{\partial}{\partial t_j}\,\sqrt{G_j}\,\right)(h,s)\,ds\ =\ \frac{d}{dh}\,\int_0^{2\pi}\sqrt{G_j(h,s)}\,ds\,. \] Hence, \begin{equation}\label{E:L2} \lambda(h)\ =\ \mu^\prime(h)\,, \end{equation} where \begin{equation}\label{E:mu} \mu(h)\ :=\ \sum_{j=1}^p\,\int_0^{2\pi}\sqrt{G_j(h,s)}\,ds\,. \end{equation} Finally, we use Equation~\eqref{E:L2} to deduce that the number $L$ defined by~\eqref{E:L3} must be nonnegative. Suppose indeed that $L<0$. Then by Equation~\eqref{E:L2} it follows that $\lim_{h\rightarrow \infty}\mu(h)=-\infty$, which is impossible, since, by the expression~\eqref{E:mu}, $\mu(h)$ is a strictly positive function. Therefore, we must have $L\geq 0$, which concludes the proof. \hfill$\square$ \subsection{Proof of Theorem~\ref{T:main theorem}} Let $\Sigma$, $g$ be as in the hypothesis of Theorem~\ref{T:main theorem}. We also assume $\Sigma$ is \emph{orientable}: the case when $\Sigma$ is nonorientable is obtained, by a standard argument, by considering the orientable double cover. From~\eqref{E:Gauss-Bonnet}, we have \[ c(\Sigma_h;g)\ =\ 2\pi\,\chi(\Sigma)\,-\,\lambda(h)\,. \] Taking the limit as $h\rightarrow\infty$ and using Lemma~\ref{L:lambda>=0}, we deduce that \begin{equation}\label{E:G-B nonnegative at infty} c(\Sigma;g)\ =\ 2\pi\,\chi(\Sigma)\,-\,L\ \leq\ 2\pi\,\chi(\Sigma)\,. \end{equation} Therefore, $c(\Sigma;g)$ is finite and satisfies the Gauss-Bonnet inequality~\eqref{E:G-B inequality}. \hfill$\square$
https://arxiv.org/abs/1708.05407
The $6\times 6$ grid is $4$-path-pairable
Let $G=P_6\Box P_6$ be the $6\times 6$ grid, the Cartesian product of two paths on six vertices. Let $T$ be a set of eight distinct vertices of $G$, called terminals, and assume that $T$ is partitioned into four terminal pairs $\{s_i,t_i\}$, $1\leq i\leq 4$. We prove that $G$ is $4$-path-pairable, that is, for every $T$ there exist in $G$ pairwise edge disjoint $s_i,t_i$-paths, $1\leq i\leq 4$.
\section{Introduction} For fixed $k$, a graph $G$ is {\it $k$-path-pairable} if for any set of $k$ disjoint pairs of vertices, $s_i,t_i$, $1\leq i\leq k$, there exist pairwise edge-disjoint $s_i,t_i$-paths in $G$. The {\it path-pairability number}, denoted $pp(G)$, is the largest $k$ such that $G$ is $k$-path-pairable. In \cite{pxp} we determine the path-pairability number of the grid graph $G(a,b)=P_{a}\Box P_{b}$, the Cartesian product of two paths on $a$ and $b$ vertices, where there is an edge between two vertices, $(i,j)$ and $(p,q)$, if and only if $|p-i|+|q-j|=1$, for $1\leq i\leq a$, $1\leq j\leq b$. \begin{theorem}[\cite{pxp}] \label{PxP} If $k=\min\{a,b\}$, then $$pp(G(a,b))=\left\{ \begin{array}{ll} k-1 & \hbox{for } k=2,3,4,\\ 3 & \hbox{for } k=5, \\ 4 & \hbox{for } k\geq 6. \end{array}\right.$$ \end{theorem} We complete the proof of the formula in Theorem \ref{PxP} by proving our main result: \begin{theorem} \label{main}$pp(G(6,6))=4$. \end{theorem} In Section \ref{proof} the proof of Theorem \ref{main} is given in two parts. In Proposition \ref{66upper} we present a pairing of ten terminals which admits no linkage in $G(6,6)$. To show that $G(6,6)$ is $4$-path-pairable, Proposition \ref{66} uses a sequence of technical lemmas. These lemmas are listed next in Section \ref{lemmas}, and they are proved separately in two notes, \cite{heavy} and \cite{escape}. \section{Technical lemmas} \label{lemmas} Let $T=\{s_1,t_1,s_2,t_2,s_3,t_3,$ $s_4,t_4\}$ be a set of eight distinct vertices of the grid $G=P_6\Box P_6$, called {\it terminals}. The set $T$ is partitioned into four terminal pairs, $\pi_i=\{s_i,t_i\}$, $1\leq i\leq 4$, to be linked in $G$ by edge disjoint paths. A (weak) {\it linkage} for $\pi_i$, $1\leq i\leq 4$, means a set of edge disjoint $s_i,t_i$-paths $P_i\subset G$. The grid $G$ partitions into four $P_3\Box P_3$ grids called {\it quadrants}. 
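For readers who wish to experiment, the grid graph $G(a,b)$ given by the adjacency rule $|p-i|+|q-j|=1$ is straightforward to build directly. The following Python sketch (illustrative only, not part of the paper) constructs $P_6\Box P_6$ and checks its basic parameters.

```python
# Build the grid graph G(a,b) = P_a box P_b: vertices (i,j) with
# 1 <= i <= a, 1 <= j <= b, and an edge between (i,j) and (p,q)
# iff |p-i| + |q-j| == 1.
def grid(a, b):
    vertices = [(i, j) for i in range(1, a + 1) for j in range(1, b + 1)]
    edges = set()
    for (i, j) in vertices:
        # Add the edge to the right neighbor and to the lower neighbor.
        for (p, q) in ((i + 1, j), (i, j + 1)):
            if p <= a and q <= b:
                edges.add(((i, j), (p, q)))
    return vertices, edges

V, E = grid(6, 6)
assert len(V) == 36            # 6*6 vertices
assert len(E) == 60            # 2 * 6 * 5 edges in the 6x6 grid
# Corner vertices have degree 2, interior vertices degree 4.
deg = {v: 0 for v in V}
for (u, w) in E:
    deg[u] += 1
    deg[w] += 1
assert deg[(1, 1)] == 2 and deg[(3, 3)] == 4
```

The same helper also produces the $P_4\Box P_6$ and $P_3\Box P_6$ subgrids that appear later in the case analysis.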
We say that a set of terminals in a quadrant $Q\subset G$ {\it escapes} from $Q$ if there are pairwise edge disjoint `mating paths' from the terminals to distinct mates (exits) located at the union of a horizontal and a vertical boundary line of $Q$. A quadrant $Q\subset G$ is called `crowded' if it contains $5$ or more terminals. Among the technical lemmas, the three pertaining to crowded quadrants were proved in \cite{heavy}. The technical lemmas for `sparse' quadrants containing at most $4$ terminals are proved in \cite{escape}. \subsection{Escaping from crowded quadrants} Let $A$ be a horizontal and let $B$ be a vertical boundary line of a quadrant $Q\subset G$, and for a subgraph $S\subseteq G$ set $\|S\|=|T\cap S|$. \begin{lemma} \label{heavy78} If $\|Q\|=7$ or $8$, then there is a linkage for two or more pairs in $Q$, and there exist edge disjoint escape paths for the unlinked terminals into distinct exit vertices in $A\cup B$.\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi \end{lemma} \begin{lemma} \label{heavy6} If $\|Q\|=6$, then there is a linkage for one or more pairs in $Q$, and there exist edge disjoint escape paths for the unlinked terminals into distinct exit vertices of $A\cup B$ such that $B\setminus A$ contains at most one exit.\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi \end{lemma} \begin{lemma} \label{heavy5} If $\|Q\|=5$ and $\{s_1,t_1\}\subset Q$, then there is an $s_1,t_1$-path $P_1\subset Q$, and the complement of $P_1$ contains edge disjoint escape paths for the three unlinked terminals into distinct exit vertices of $A\cup B$ such that $B\setminus A$ contains at most one exit.\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi \end{lemma} \subsection{Escaping from sparse quadrants} The vertices of a grid are represented as elements $(i,j)$ of a matrix arranged in rows $A(i)$ and columns $B(j)$. W.l.o.g. 
we may assume that $Q$ is the upper left quadrant of $G=P_6\Box P_6$, and thus $A=A(3)\cap Q$ and $B=B(3)\cap Q$ are the horizontal and vertical boundary lines of $Q$, respectively. For a vertex set $S\subset V(G)$ and a subgraph $H\subseteq G$, $H-S$ is interpreted as the subgraph obtained by the removal of $S$ and the incident edges from $H$; $S$ is also interpreted as the subgraph of $G$ induced by $S$; $x\in H$ simply means a vertex of $H$. Mating (or shifting) a terminal $w$ to vertex $w^\prime$, called a mate of $w$, means specifying a $w,w^\prime$-path called a {\it mating path}. Finding a linkage for two pairs is facilitated by the property of a graph being {\it weakly $2$-linked}, and by introducing the concept of a {\it frame}. A graph $H$ is weakly $2$-linked if for every $u_1,v_1,u_2,v_2\in H$, not necessarily distinct vertices, there exist edge disjoint $u_i,v_i$-paths in $H$, for $i=1,2$. A weakly $2$-linked graph must be $2$-edge-connected, but $2$-edge-connectivity is not a sufficient condition. The next lemma lists a few weakly $2$-linked subgrids (the simple proofs are omitted). \begin{lemma} \label{w2linked} The grid $P_3\Box P_k$ and the subgrid of $P_k\Box P_k$ induced by $(A(1)\cup A(2)) \cup (B(1)\cup B(2))$ are weakly $2$-linked, for $k\geq 3$. \ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi \end{lemma} We use the $3$-path-pairability of certain grids proved in \cite{pxp} (see Theorem \ref{PxP}). \begin{lemma} \label{3pp} The grid $P_4\Box P_k$ is $3$-path-pairable, for $k\geq 4$. \ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi \end{lemma} Let $C\subset G$ be a cycle and let $x$ be a fixed vertex of $C$. Take two edge disjoint paths from a member of $\pi_j$ to $x$, for $j=1$ and $2$, not using edges of $C$. Then we say that the union of $C$ and the two paths to $x$ defines a {\it frame} $[C,x]$, for $\pi_1,\pi_2$. 
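The weak $2$-linkedness asserted in Lemma \ref{w2linked} can be checked mechanically on small grids: edge disjoint $u_1,v_1$- and $u_2,v_2$-paths exist if and only if some simple $u_1,v_1$-path leaves the second pair connected in the remaining edges. A brute-force Python sketch for $P_3\Box P_3$ (illustrative only; coordinates are $0$-based):

```python
from itertools import product

# P_3 box P_3 as an adjacency dict over 0-based coordinates.
N = 3
adj = {}
for i, j in product(range(N), range(N)):
    adj[(i, j)] = [(i + di, j + dj)
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= i + di < N and 0 <= j + dj < N]

def simple_paths(u, v):
    """Yield the edge sets of all simple u,v-paths (edges as frozensets)."""
    stack = [(u, [u])]
    while stack:
        x, path = stack.pop()
        if x == v:
            yield {frozenset(e) for e in zip(path, path[1:])}
            continue
        for y in adj[x]:
            if y not in path:
                stack.append((y, path + [y]))

def connected_avoiding(u, v, banned):
    """Depth-first search from u to v using no edge in `banned`."""
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        if x == v:
            return True
        for y in adj[x]:
            if y not in seen and frozenset((x, y)) not in banned:
                seen.add(y)
                stack.append(y)
    return False

def weakly_2_linked_for(u1, v1, u2, v2):
    return any(connected_avoiding(u2, v2, P) for P in simple_paths(u1, v1))

# Spot-check a few quadruples; iterating over all 9^4 quadruples would
# check the full P_3 box P_3 case of the lemma.
assert weakly_2_linked_for((0, 0), (2, 2), (0, 2), (2, 0))
assert weakly_2_linked_for((0, 0), (0, 0), (1, 1), (1, 1))
```

The same enumeration, with a different adjacency dict, applies to the other subgrids named in the lemma, at a larger but still finite cost.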
A frame $[C,x]$, for $\pi_1,\pi_2$, helps find a linkage for the pairs $\pi_1$ and $\pi_2$; in fact, it is enough to mate the other members of the terminal pairs onto $C$ using mating paths edge disjoint from $[C,x]$ and each other. The concept of a frame facilitates `communication' between quadrants of $G$. For this purpose frames in $G$ can be built on two standard cycles $C_0, C_1\subset G$ as follows. Let $C_0$ be the innermost $4$-cycle of $G$ induced by $(A(3)\cup A(4))\cap (B(3)\cup B(4))$, and let $C_1$ be the $12$-cycle around $C_0$ induced by the neighbors of $C_0$. Given a quadrant $Q$ we usually set $x_0=Q\cap C_0$ and we denote by $x_1$ the middle vertex of the path $Q\cap C_1$. (For instance, in the upper right quadrant of $G$, $x_0=(3,4)$ and $x_1=(2,5)$.) \begin{figure}[htp] \begin{center} \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=4pt, inner sep=4pt] \tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt] \tikzstyle{Wedge} = [draw,line width=1.5pt,-,black!100] \begin{tikzpicture} \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[V] () at (\x,\y) {};} \foreach \u in {0,1,2,3,4} \foreach \v in {1,2,3,4,5} { \draw (\u,\v) -- (\u+1,\v); \draw (\u,\v-1) -- (\u,\v); } \draw (0,0) -- (5,0) -- (5,5); \draw[line width=.5pt,snake=zigzag] (2,2) -- (3,2) -- (3,3) -- (2,3) -- (2,2); \draw[Wedge,->](4,4)--(5,4) -- (5,3) -- (3.1,3); \draw[Wedge,->] (4,5) -- (3,5) -- (3,3.1) ; \draw[line width=.5pt,snake=zigzag] (2.5,4) -- (4,4) -- (4,1) -- (2.5,1); \node[T](s1) at (4,4){}; \node[txt] () at (4.3,4.3){$s_1$}; \node[T,label=above:$s_2$]() at (4,5){}; \node[T]() at (3,4){}; \node[txt] () at (2.7,4.3){$s_3$}; \node[]() at (3,3){o}; \node[txt] () at (2.7,2.7){$x_0$}; \node[txt] () at (3.7,3.7){$x_1$}; \node[txt] () at (1.65,2.5){$C_0$}; \node[txt] () at (3.5,1.5){$C_1$}; \end{tikzpicture} 
\hskip1truecm \begin{tikzpicture} \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[V] () at (\x,\y) {};} \foreach \u in {0,1,2,3,4} \foreach \v in {1,2,3,4,5} { \draw (\u,\v) -- (\u+1,\v); \draw (\u,\v-1) -- (\u,\v); } \draw (0,0) -- (5,0) -- (5,5); \draw[Wedge,->](4,4) -- (5,4) -- (5,3) -- (4.1,3); \draw[Wedge,->] (3,4) -- (3,3) -- (3.9,3) ; \draw[line width=.5pt,snake=zigzag] (2.5,4) -- (4,4) -- (4,1) -- (2.5,1); \node[T](s1) at (4,4){}; \node[txt] () at (4.3,4.3){$s_1$}; \node[T,label=above:$s_2$]() at (4,5){}; \node[T]() at (3,4){}; \node[txt] () at (2.7,4.3){$s_3$}; \node[]() at (4,3){o}; \node[txt] () at (4.3,2.7){$w$}; \node[txt] () at (3.5,1.5){$C_1$}; \end{tikzpicture} \end{center} \caption{Framing $[C_0,x_0]$ for $\pi_1,\pi_2$, and $[C_1,w]$, for $\pi_1,\pi_3$} \label{frames} \end{figure} Let $\alpha\in\{0,1\}$ be fixed, assume that there are two terminals in a quadrant $Q$ belonging to distinct pairs, say $s_1\in\pi_1$, $s_2\in \pi_2$, and let $w\in Q\cap C_\alpha$. We say that $[C_\alpha,w]$ is a {\it framing in $Q$ for $\pi_1,\pi_2$ to $C_\alpha$}, if there exist edge disjoint mating paths in $Q$ from $s_1$ and from $s_2$ to $w$, edge disjoint from $C_1$ (see examples in Fig.\ref{frames} for framing in the upper right quadrant). \begin{lemma} \label{frame} Let $s_1\in\pi_1,s_2\in\pi_2$ be two (not necessarily distinct) terminals/mates in a quadrant $Q$. (i) For any mapping $\gamma:\{s_1,s_2\}\longrightarrow\{C_0,C_1\}$, there exist edge disjoint mating paths in $Q$ from $s_j$ to vertex $s_j^\prime\in\gamma(s_j)$, $j=1,2$, not using edges of $C_1$. (ii) For any fixed $\alpha\in\{0,1\}$, there is a framing $[C_\alpha,x_\alpha]$, for $\pi_1, \pi_2$, where $x_\alpha\in C_\alpha\cap Q$ and the mating paths are in $Q$. \end{lemma} \begin{lemma} \label{12toCa} Let $s_p,s_q,s_r$ be distinct terminals in a quadrant $Q$ belonging to three distinct pairs. 
Then there is a framing in $Q$ for $\pi_p,\pi_q$ to $C_\alpha$, for some $\alpha\in\{0,1\}$, and there is an edge disjoint mating path in $Q$ from $s_r$ to $C_\beta$, where $\beta=\alpha+1\pmod 2$, and edge disjoint from $C_1$. \end{lemma} \begin{lemma} \label{Caforpq} Let $s_1,s_2,s_3$ be distinct terminals in a quadrant $Q$ (belonging to distinct pairs); let $y_0\in Q$ be a corner vertex of $Q$ with degree three in $G$, and let $z\in \{x_0,y_0\}$ be a fixed corner vertex of $Q$. Then, (i) for some $1\leq p<q\leq 3$, there is a framing in $Q$ for $\pi_p,\pi_q$ to $C_0$, and there is an edge disjoint mating path in $Q$ from the third terminal to $C_1$; (ii) for some $1\leq p<q\leq 3$, there is a framing in $Q$ for $\pi_p,\pi_q$ to $C_1$, and there is an edge disjoint mating path in $Q$ from the third terminal to $z$; \end{lemma} \begin{lemma} \label{exit} Let $A$ be a boundary line of a quadrant $Q\subset G$. Let $Q_0$ be the subgraph obtained by removing the edges of $A$ from $Q$, and let $Q_i$, $1\leq i\leq 4$, be one of the subgraphs in Fig.\ref{Qmin} obtained from $Q_0$ by edge removal and edge contraction. (i) For any $H=Q_i$, $1\leq i\leq 4$, and for any three distinct terminals of $H$ there exist edge disjoint mating paths in $H$ from the terminals into not necessarily distinct vertices in $A$. 
\begin{figure}[htp] \tikzstyle{W} = [double,line width=.5,-,black!100] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{T} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{A} = [rectangle, minimum width=1pt, draw=black,fill=white, inner sep=1.4pt] \begin{center} \begin{tikzpicture} \draw[dashed] (0.5,0.5)--(2.3,0.5)--(2.3,0.9)--(.5,0.9)--(.5,0.5); \foreach \x in {.7,1.4,2.1}\draw(\x,.7)--(\x,2.1); \foreach \y in {1.4,2.1}\draw(.7,\y)--(2.1,\y); \foreach \x in {.7,1.4,2.1}\foreach \y in {.7,1.4,2.1}\node[V] () at (\x,\y){}; \foreach \x in {.7,1.4,2.1}\node[A]()at(\x,.7){}; \node() at (.2,.7){A}; \node() at (1.4,0){$Q_0$}; \end{tikzpicture} \hskip.5cm \begin{tikzpicture} \draw (1.4,2.1)--(1.4,1.4)--(2.1,1.4) (.7,1.4)--(1.4,1.4) (.7,.7)--(.7,1.4) (1.4,.7)--(1.4,1.4) (2.1,.7)--(2.1,2.1); \foreach \x in {.7,1.4,2.1}\foreach \y in {.7,1.4}\node[V] () at (\x,\y){}; \node[T]()at(2.1,2.1){};\node[T]()at(1.4,2.1){}; \foreach \x in {.7,1.4,2.1}\node[A]()at(\x,.7){}; \node() at (1.4,0){$Q_1$}; \end{tikzpicture} \hskip.5cm \begin{tikzpicture} \draw (.7,1.4)--(.7,2.1)--(1.4,1.4) (2.1,1.4)--(.7,1.4) (.7,.7)--(.7,1.4) (1.4,.7)--(1.4,1.4) (2.1,.7)--(2.1,2.1); \foreach \x in {.7,1.4,2.1}\foreach \y in {.7,1.4}\node[V] () at (\x,\y){}; \foreach \x in {.7,2.1}\node[T] () at (\x,2.1){}; \foreach \x in {.7,1.4,2.1}\node[A]()at(\x,.7){}; \node() at (1.4,0){$Q_2$}; \end{tikzpicture} \hskip.5cm \begin{tikzpicture} \draw (.7,2.1)--(2.1,2.1); \foreach \x in {.7,1.4,2.1}\draw(\x,.7)--(\x,2.1); \foreach \x in {1.4,2.1}\foreach \y in {.7,1.4}\node[V] () at (\x,\y){}; \foreach \y in {.7,2.1}\node[V] () at (.7,\y){}; \node[T]()at(1.4,2.1){}; \node[T]()at(2.1,2.1){}; \foreach \x in {.7,1.4,2.1}\node[A]()at(\x,.7){}; \node() at (1.4,0){$Q_3$}; \end{tikzpicture} \hskip.5cm \begin{tikzpicture} \draw (.7,2.1)--(2.1,2.1); \foreach \x in {.7,1.4,2.1}\draw(\x,.7)--(\x,2.1); \foreach \x in {.7,2.1}\foreach \y in {.7,1.4}\node[V] () at (\x,\y){}; \foreach \y in 
{.7,2.1}\node[V] () at (1.4,\y){}; \node[T]()at(.7,2.1){}; \node[T]()at(2.1,2.1){}; \foreach \x in {.7,1.4,2.1}\node[A]()at(\x,.7){}; \node() at (1.4,0){$Q_4$}; \end{tikzpicture} \caption{Mating into $A$ in adjusted quadrants} \label{Qmin} \end{center} \end{figure} (ii) If $s_1,t_1,s_2$ are not necessarily distinct terminals in $Q_0$ then there is an $s_1,t_1$-path in $Q_0$ and an edge disjoint mating path from $s_2$ into a vertex of $A$. (iii) From any three distinct terminals of $Q_0$ there exist pairwise edge disjoint mating paths into three distinct vertices of $A$. Furthermore, the claim remains true if two terminals not in $A$ coincide. \end{lemma} \begin{lemma} \label{heavy4} Let $A,B$ be a horizontal and a vertical boundary line of quadrant $Q$, let $c$ be the corner vertex of $Q$ not in $A\cup B$, and let $b$ be the middle vertex of $B$ (see $Q_0$ in Fig.\ref{except}). Denote by $Q_0$ the grid obtained by removing the edges of $A$ from $Q$, and let $T$ be a set of at most four distinct terminals in $Q_0$. (i) If $T\subset Q_0-A$ and $c\notin T$, then for every terminal $s\in T$, there is a linkage in $Q_0$ to connect $s$ to $b$, and there exist edge disjoint mating paths in $Q_0$ from the remaining terminals of $T$ into not necessarily distinct vertices of $A$. 
\begin{figure}[htp] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{A} = [circle, minimum width=.5pt, draw=black, inner sep=2pt] \tikzstyle{B} = [rectangle, minimum width=1pt, draw=black,fill=white, inner sep=1.4pt] \begin{center} \begin{tikzpicture} \draw[dashed] (0.5,0.5)--(2.3,0.5)--(2.3,.9)--(.5,.9)--(.5,0.5); \foreach \y in {1.4,2.1}\draw (.7,\y)--(2.1,\y); \foreach \x in{.7,1.4,2.1}\draw (\x,.7)--(\x,2.1); \foreach \x in {.7,1.4,2.1}\foreach \y in {1.4,2.1}\node[V] () at (\x,\y){}; \foreach \x in{.7,1.4,2.1}\node[B] () at (\x,.7){}; \node() at (2.1,2.4){B}; \node() at (.3,1){A}; \node() at (1.4,0){$Q_0$}; \node[B,label=left:$c$] () at(.7,2.1){}; \node[A] () at(2.1,1.4){}; \node() at (2.35,1.4){$b$}; \end{tikzpicture} \hskip1cm \begin{tikzpicture} \foreach \y in {1.4,2.1}\draw (.7,\y)--(2.1,\y); \foreach \x in{.7,1.4,2.1}\draw (\x,.7)--(\x,2.1); \foreach \x in {.7,1.4,2.1}\foreach \y in {1.4,2.1}\node[V] () at (\x,\y){}; \foreach \x in{.7,1.4,2.1}\node[B] () at (\x,.7){}; \node[T,label=left:$s_{3}$] () at (.7,.7){}; \node[T,label=left:$s_{2}$] () at (.7,1.4){}; \node[T,label=left:$s_1$] () at (.7,2.1){}; \node() at (1.4,0){$T_1$}; \node[A] () at(2.1,1.4){}; \node() at (2.35,1.4){$b$}; \end{tikzpicture} \hskip1cm \begin{tikzpicture} \foreach \y in {1.4,2.1}\draw (.7,\y)--(2.1,\y); \foreach \x in{.7,1.4,2.1}\draw (\x,.7)--(\x,2.1); \foreach \x in {.7,1.4,2.1}\foreach \y in {1.4,2.1}\node[V] () at (\x,\y){}; \foreach \x in{.7,1.4,2.1}\node[B] () at (\x,.7){}; \node[T,label=above:$s_{2}$] () at (2.1,2.1){}; \node[T,label=above:$s_{3}$] () at (1.4,2.1){}; \node[T,label=above:$s_{4}$] () at (.7,2.1){}; \node[T] () at(.7,2.1){}; \node[T] () at(2.1,1.4){}; \node() at (1.4,0){$T_2$}; \node() at (2.4,1.4){$s_1$}; \end{tikzpicture} \caption{Projection to $A$} \label{except} \end{center} \end{figure} (ii) If $T$ is different from $T_1$ and $T_2$ in Fig.\ref{except}, then for 
$\min\{3,|T|\}$ choices of a terminal $s\in T$, there is a linkage in $Q_0$ to connect $s$ to $b$, and there exist edge disjoint mating paths in $Q_0$ from the remaining terminals of $T$ into not necessarily distinct vertices of $A$. (iii) If $T$ is one of $T_1$ and $T_2$ in Fig.\ref{except}, then the claim in (ii) above is true only for $s=s_1$ and $s=s_2$. \end{lemma} \begin{lemma} \label{boundary} Let $A,B$ be a horizontal and a vertical boundary line of a quadrant $Q$. For every $s_1,t_1,s_2,s_3\in Q$ and $\psi: \{s_2,s_3\} \longrightarrow \{A,B\}$, there is a linkage for $\pi_1$, and there exist edge disjoint mating paths in $Q$ from $s_j$, $j=2,3$, to distinct vertices $s_j^*\in \psi(s_j)$. \end{lemma} \section{Proof of Theorem \ref{main}} \label{proof} \begin{proposition} \label{66upper} The $6\times 6$ grid is not $5$-path-pairable. \end{proposition} \begin{proof} Eight terminals are located in the upper left quadrant of $G=P_6\Box P_6$ as shown in Fig.\ref{cluster}; $t_1$ and $t_5$ may be placed anywhere in $G$. We claim that there is no linkage for $\pi_i$, $1\leq i\leq 5$. Assume on the contrary that there are pairwise edge disjoint $s_i,t_i$-paths, for $1\leq i\leq 5$. Then $P_1,P_2,P_3$, and $P_5$ must leave the upper left $2\times 2$ square. 
\begin{figure}[htp] \begin{center} \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt] \tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt] \tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100] \tikzstyle{M} = [circle, draw=black!, minimum width=1pt, inner sep=3.5pt] \begin{tikzpicture} \foreach \x in {1,...,3}\draw (\x,0.2)--(\x,3); \foreach \y in {1,...,3}\draw (1,\y)--(3.8,\y); \foreach \x in {1,...,3}\foreach \y in {1,...,3}\node[B]()at(\x,\y){}; \foreach \x in {1,2}\foreach \y in {1,2,3}\node[T]()at(\x,\y){}; \foreach \y in {2,3}\node[T]()at(3,\y){}; \draw[->,line width=1pt] (2,2)--(2,1.2); \draw[->,line width=1pt] (1,1)--(1.8,1); \draw[->,line width=1pt] (1,2)--(1,1.1); \draw[->,line width=1pt] (1,1)--(1,.3); \node[label=above:$s_1$]()at(.6,2.9){}; \node[label=left:$s_2$]()at(1,2){}; \node[label=left:$s_3$]()at(1,1){}; \node[label=right:$s_4$]()at(1.9,.8){}; \node[label=right:$t_4$]()at(2.9,1.8){}; \node[label=above:$t_3$]()at(2,3){}; \node[label=above:$t_2$]()at(3,3){}; \node[label=right:$s_5$]()at(1.9,1.8){}; \node[M]()at(2,1){}; \end{tikzpicture} \end{center} \caption{Unresolvable pairings} \label{cluster} \end{figure} By symmetry, we may assume that $P_5$ starts with the edge $s_5 - (3,2)$, furthermore, either $P_1$ or $P_2$ must use the edge $(2,1) - (3,1)$. Then either $P_3$ or one of $P_1$ and $P_2$ uses the edge $(3,1) - (3,2)$. Thus a bottle-neck is formed at vertex $(3,2)$, since two paths are entering there and $P_4$ must leave it, but only two edges, $(3,2) - (3,3)$ and $(3,2) - (4,2)$ are available, a contradiction. \end{proof} \begin{proposition} \label{66} The $6\times 6$ grid is $4$-path-pairable. \end{proposition} \begin{proof} We partition the grid $G=P_6\Box P_6$ into four quadrants, named NW, NE, SW, SE according to their `orientation'. 
Given the terminal pairs $\pi_i=\{s_i,t_i\}\subset G$, $1\leq i\leq 4$, a solution consists of pairwise edge disjoint $s_i,t_i$-paths, $P_i$, for $1\leq i\leq 4$, and is referred to as a {\it linkage} for $\pi_i$, $1\leq i\leq 4$. Our procedure, described in terms of a tedious case analysis, is based on the distribution of $T=\cup_{i=1}^4\pi_i$ in the four quadrants. The distributions of the terminals with respect to the quadrants are described with a so-called {\it q-diagram} $\mathcal{D}$ defined as a (multi)graph with four nodes labeled with the four quadrants $Q_1, Q_2,Q_3, Q_4\subset G$, and containing four edges (loops and parallel edges are allowed): for each terminal pair $\{s_i,t_i\}\subset T$, $1\leq i\leq 4$, there is an edge $Q_aQ_b\in \mathcal{D}$ if and only if $s_i\in Q_a$, $t_i\in Q_b$. The proof is split into main cases A and B according to whether no quadrant contains a terminal pair, that is, the diagram $\mathcal{D}$ is loopless, or some quadrant does, that is, $\mathcal{D}$ contains a loop. \\ \noindent Case A: no quadrant of $G$ contains a terminal pair. Observe that in this case the maximum degree in the q-diagram is at most $4$. A.1: every quadrant has two terminals (the q-diagram is $2$-regular). There are four essentially different distributions, up to symmetries of the grid; see Fig.\ref{2222}. 
We may assume that $s_1,s_2\in NW$ and $s_3,s_4\in Q$, where $Q=$SE for the leftmost q-diagram and $Q=$NE for the other ones, as indicated by the blackened nodes of the q-diagrams. \begin{figure}[htp] \begin{center} \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1.5pt] \tikzstyle{Q} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=3pt] \begin{tikzpicture} \draw(.7,.7)--(.7,1.4)--(1.4,1.4)--(1.4,.7)--(.7,.7); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} {\node[Q] () at (\x,\y) {};} \node[V] () at (.7,1.4) {}; \node[V] () at (1.4,.7) {}; \node[label=left:{\small 1}]() at (.9,1.05){}; \node[label=right:{\small 2}]() at (.7,1.6){}; \end{tikzpicture} \hskip1cm \begin{tikzpicture} \draw(.7,1.4)--(1.4,.7)--(1.4,1.4)--(.7,.7)--(.7,1.4); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {}; \node[label=left:{\small 1}]() at (1,1.1){}; \node[label=right:{\small 2}]() at (.6,1.25){}; \end{tikzpicture} \hskip1cm \begin{tikzpicture} \draw(.65,.7)--(.65,1.4)(1.35,1.4)--(1.35,.7); \draw(.75,.7)--(.75,1.4)(1.45,1.4)--(1.45,.7); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {}; \node[label=left:{\small 1}]() at (.9,1.05){}; \node[label=right:{\small 2}]() at (.5,1.05){}; \end{tikzpicture} \hskip1cm \begin{tikzpicture} \draw(.65,.7)--(1.35,1.4) (1.35,.7)--(.65,1.4); \draw(.75,.7)--(1.45,1.4)(1.45,.7)--(.75,1.4); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {}; \node[label=left:{\small 1}]() at (1.15,1.1){}; \node[label=right:{\small 2}]() at (.6,1.3){}; \end{tikzpicture} \end{center} \caption{$\|Q\|=2$, for every quadrant} \label{2222} \end{figure} For each distribution we apply Lemma \ref{frame} (ii) to obtain a framing in NW 
for $\pi_1,\pi_2$ to $C_0$ and another framing in $Q$ for $\pi_3,\pi_4$ to $C_1$. Since the other two quadrants each contain two terminals, it is possible to mate $t_1,t_2$ into vertices of $C_0$ and $t_3,t_4$ into vertices of $C_1$ by using Lemma \ref{frame} (i). Then the linkage is completed along the cycles $C_0$ and $C_1$.\\ A.2: the maximum degree of the q-diagram is $3$, and there is just one node with maximum degree, let $\|NW\|=3$. Fig.\ref{A12cases} lists q-diagrams with this property. \begin{figure}[htp] \begin{center} \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1.5pt] \tikzstyle{Q} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=3pt] \begin{tikzpicture} \draw(.7,.7)--(.7,1.4)--(1.4,1.4)--(1.4,.7)--(.7,1.4); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} {\node[Q] () at (\x,\y) {};} \node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {}; \node[label=left:{\small 1}]() at (.95,1.05){}; \node[label=right:{\small 2}]() at (.6,1.05){}; \node[label=above:{\small 3}]() at (1.05,1.2){}; \node[label=right:{\small 4}]() at (1.15,1.05){}; \end{tikzpicture} \hskip.6cm \begin{tikzpicture} \draw(.7,.7)--(.7,1.4)--(1.4,1.4)--(.7,.7) (.7,1.4)--(1.4,.7); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} {\node[Q] () at (\x,\y) {};} \node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {}; \node[label=left:{\small 1}]() at (.95,1.1){}; \node[label=right:{\small 2}]() at (.55,1.2){}; \end{tikzpicture} \hskip.8cm \begin{tikzpicture} \draw (.7,1.37)--(1.4,.6) (.7,1.53)--(1.4,.755) (.7,.7)--(1.4,1.4)--(.7,1.4); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {}; \node[label=left:{\small 1}]() at (1.15,1.1){}; \node[label=right:{\small 2}]() at (.65,1.2){}; \end{tikzpicture} \hskip.6cm \begin{tikzpicture} \draw(.65,.7)--(.65,1.4) (.7,1.4)--(1.4,1.4)--(1.4,.7); \draw (.75,.7)--(.75,1.4); 
\foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {}; \node[label=left:{\small 1}]() at (.9,1.05){}; \node[label=right:{\small 2}]() at (.5,1.05){}; \end{tikzpicture} \hskip.6cm \begin{tikzpicture} \draw (.65,.7)--(.65,1.4) (.7,1.4)--(1.4,.7)--(1.4,1.4); \draw (.75,.7)--(.75,1.4); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[V] () at (.7,1.4) {}; \node[V] () at (1.4,.7) {}; \node[label=left:{\small 1}]() at (.9,1.){}; \node[label=right:{\small 2}]() at (.5,1.){}; \end{tikzpicture} \end{center} \caption{$NW$ has $3$ terminals, all other quadrants have fewer} \label{A12cases} \end{figure} Let $s_1,s_2,s_3\in NW$, let $Q=$NE for the first four q-diagrams, and let $Q=$SE for the last q-diagram (see blackened nodes in Fig.\ref{A12cases}). Applying Lemma \ref{12toCa} with quadrant NW and $p=1, q=2$, we obtain a framing in NW for $\pi_1,\pi_2$ to $C_\alpha$, for some $\alpha\in \{0,1\}$, furthermore, we obtain a mating of $s_3$ into a vertex in $C_\beta\cap NW$, where $\beta=\alpha+1 \pmod 2$. Recall that the remaining quadrants contain at most two terminals. We use Lemma \ref{frame} (ii) with quadrant $Q$, which yields a framing in $Q$ for $\pi_3,\pi_4$ to $C_\beta$. The solution is completed by mating the remaining terminals to the appropriate cycles by applying Lemma \ref{frame} (i).\\ A.3: there are two quadrants containing three terminals, let $\|NW\|=\|Q\|=3$, where $Q=$NE or SE, see Fig.\ref{twomax3}. 
\begin{figure}[htp] \begin{center} \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1.5pt] \tikzstyle{Q} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=3pt] \begin{tikzpicture \draw (.7,.7)--(1.4,.7); \draw (.7,1.33)--(1.4,1.33)(.7,1.4)--(1.4,1.4) (.7,1.47)--(1.4,1.47); \draw ; \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[label=below:{\small 4}]() at (1.05,1.3){}; \node[label=below:{\small (I)}]() at (1.05,.6){}; \end{tikzpicture} \hskip.6cm \begin{tikzpicture} \draw (.7,1.35)--(1.4,1.35) (.7,1.45)--(1.4,1.45) (1.4,.7)--(1.4,1.4) (.7,.7)--(.7,1.4); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[label=above:{\small 1}]() at (1.05,1.2){}; \node[label=above:{\small 2}]() at (1.05,.85){}; \node[label=left:{\small 3}]() at (.9,1.05){}; \node[label=right:{\small 4}]() at (1.15,1.05){}; \node[label=below:{\small (II)}]() at (1.05,.7){}; \end{tikzpicture} \hskip.6cm \begin{tikzpicture} \draw (.7,1.37)--(1.4,.6) (.7,1.53)--(1.4,.755) (1.4,.7)--(1.4,1.4) (.7,.7)--(.7,1.4); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[label=left:{\small 3}]() at (.9,1.05){}; \node[label=right:{\small 4}]() at (1.15,1.05){}; \node[label=below:{\small (III)}]() at (1.05,.6){}; \end{tikzpicture} \hskip.6cm \begin{tikzpicture} \draw (.7,1.35)--(1.4,1.35) (.7,1.45)--(1.4,1.45) (.7,1.4)--(1.4,.7)--(1.4,1.4); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[label=right:{\small 2}]() at (.7,1.25){}; \node[label=above:{\small 1}]() at (1.05,1.2){}; \node[label=below:{\small (IV)}]() at (1.05,.6){}; \node[label=right:{\small 4}]() at (1.15,1.05){}; \end{tikzpicture} \hskip.6cm \begin{tikzpicture} \draw (.7,1.35)--(1.4,1.35) (.7,1.45)--(1.4,1.45) (.7,1.4)--(1.4,.7)(1.4,1.4)--(.7,.7); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at 
(\x,\y) {};} \node[label=right:{\small 2}]() at (.7,1.25){}; \node[label=above:{\small 1}]() at (1.05,1.2){}; \node[label=below:{\small (V)}]() at (1.05,.6){}; \end{tikzpicture} \hskip.6cm \begin{tikzpicture} \draw (.7,1.37)--(1.4,.6) (.7,1.53)--(1.4,.755) (1.4,.7)--(1.4,1.4)--(.7,1.4); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[label=above:{\small 3}]() at (1.05,1.2){}; \node[label=right:{\small 4}]() at (1.15,1.05){}; \node[label=below:{\small (VI)}]() at (1.05,.6){}; \end{tikzpicture} \hskip.6cm \begin{tikzpicture} \draw (.65,1.37)--(1.35,.6) (.75,1.53)--(1.45,.755) (.72,1.42)--(1.42,.68) (.7,.7)--(1.4,1.4 ); \foreach \x in {0.7,1.4} \foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};} \node[label=left:{\small 4}]() at (1.35,.8){}; \node[label=below:{\small (VII)}]() at (1.05,.6){}; \end{tikzpicture} \end{center} \caption{$\|NW\|=\|Q\|=3$} \label{twomax3} \end{figure} Let $s_1,s_2,s_3\in NW$, and $t_1,t_2\in Q$. For the q-diagram (I) we define $G^*=G-(A(5)\cup A(6))$. We mate the terminals of $\pi_4$ lying in row $A(4)$ into $A(5)\cup A(6)$ along the columns of $G$, obtaining mates $s_4^\prime, t_4^\prime\in A(5)\cup A(6)$. Since $G^*\cong P_4\Box P_6$ is $3$-path-pairable by Lemma \ref{3pp}, there is a linkage for $\pi_1,\pi_2,\pi_3$ in $G^*$. Furthermore, there is an edge disjoint $s_4^\prime,t_4^\prime$-path in the connected subgrid $A(5)\cup A(6)$, thus completing a solution. \\ For the q-diagrams (II) and (III) let $t_4\in Q$, where $Q=$NE or SE, respectively. We apply Lemma \ref{exit} (iii) for NW and for $Q$ with horizontal boundary lines $A=A(3)\cap NW$ and $A=A(3)\cap NE$ or $A=A(4)\cap SE$, respectively. 
\begin{figure}[htp] \begin{center} \tikzstyle{A} = [circle, minimum width=.5pt, draw=black, inner sep=2pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \begin{tikzpicture} \draw[line width=1.5pt] (1,2)--(3,2); \draw[line width=1.5pt] (.5,2)--(.5,1.5)--(2,1.5)--(2,2); \draw[->,double,snake](1.5,2)--(1.5,1.1); \draw[->,double,snake](2.5,2)--(2.5,1.1); \foreach \y in {.5,1,1.5,2,2.5,3} \draw (.5,\y)--(3,\y); \foreach \x in {.5,1,1.5,2,2.5,3} \draw (\x,.5)--(\x,3); \foreach \x in {.5,1,1.5,2,2.5,3}\node[A]() at (\x,2){}; \foreach \x in {.5,1,1.5,2,2.5,3} \foreach \y in {.5,1,1.5,2,2.5,3} \node[V]() at (\x,\y){}; \node() at (4.1,2){$A(3)$}; \node() at (4.1,1.5){$A(4)$}; \node() at (0.2,2.25){$s_2^\prime$}; \node() at (2.7,2.25){$t_4^\prime$}; \node() at (1.3,2.3){$s_3^\prime$}; \node() at (.8,2.3){$s_1^\prime$}; \node() at (3.25,2.25){$t_1^\prime$}; \node() at (2.22,2.25){$t_2^\prime$}; \end{tikzpicture} \begin{tikzpicture} \draw[line width=1.5pt] (1,2)--(1,1.5)--(2,1.5); \draw[line width=1.5pt] (.5,2)--(2.5,2)--(2.5,1.5); \draw[->,double,snake](1.5,2)--(1.5,1.1); \draw[->,double,snake](3,1.5)--(3,2.4); \foreach \y in {.5,1,1.5,2,2.5,3} \draw (.5,\y)--(3,\y); \foreach \x in {.5,1,1.5,2,2.5,3} \draw (\x,.5)--(\x,3); \foreach \x in {.5,1,1.5,2,2.5,3} \foreach \y in {.5,1,1.5,2,2.5,3} \node[V]() at (\x,\y){}; \foreach \x in {.5,1,1.5}\node[A]() at (\x,2){}; \foreach \x in {2,2.5,3}\node[A]() at (\x,1.5){}; \node() at (2.7,1.3){$t_1^\prime$}; \node() at (0.2,2.25){$s_1^\prime$}; \node() at (1.3,2.3){$s_3^\prime$}; \node() at (.8,2.3){$s_2^\prime$}; \node() at (3.25,1.3){$t_4^\prime$}; \node() at (2.2,1.3){$t_2^\prime$}; \end{tikzpicture} \end{center} \caption{Cases (II) and (III)} \label{A3A4} \end{figure} Thus we obtain six distinct mates $s_1^\prime,s_2^\prime,s_3^\prime\in A(3)\cap NW$ and 
$t_1^\prime,t_2^\prime,t_4^\prime\in A(3)\cap NE$ or $A(4)\cap SE$, see the encircled vertices in Fig. \ref{A3A4}. Observe that the mating paths are edge disjoint from the $2\times 6$ grid $G^*=A(3)\cup A(4)$. The mating paths from $s_3^\prime$ and $t_4^\prime$ can be extended into the neighboring quadrants containing $t_3$ and $s_4$ along the columns of $G$ (zigzag lines in Fig.\ref{A3A4}). Furthermore, a linkage for $\pi_2$ can be completed by an $s_2^\prime, t_2^\prime$-path in $G^*$ not using edges of $A(3)$, and a linkage for $\pi_1$ can be completed by an $s_1^\prime, t_1^\prime$-path in $G^*$ not using edges of $A(4)$. \\ The solution for the q-diagrams (IV) and (V) follows a similar strategy using Lemma \ref{heavy4}. Assume that as a result of applying Lemma \ref{heavy4} twice, for quadrants NW and NE, we find a common index $\ell\in\{1,2\}$, say $\ell=1$, that satisfies the following property: there exists a path in NW from $s_1$ to $s_1^\prime=(2,3)$, and there exists a path in NE from $t_1$ to $t_1^\prime=(2,4)$, furthermore, terminals $s_2,s_3\in NW$, $t_2,t_4\in NE$ are mated into not necessarily distinct vertices $s_2^\prime,s_3^\prime,t_2^\prime,t_4^\prime\in A(3)$ using edge disjoint mating paths. Now we complete a linkage for $\pi_1$ by adding the edge $s_1^\prime t_1^\prime\in A(2)$. Since the mating paths do not use edges of $A(3)$, a linkage for $\pi_2$ can be completed by adding an $s_2^\prime, t_2^\prime$-path in $A(3)$. Next we extend the mating paths from $s_3^\prime$ and $t_4^\prime$ into $s_3^{*},t_4^{*}\in A(4)$ along the columns of $G$. Since $G^*=G-(NW\cup NE)$ contains $t_3,s_3^*,s_4,t_4^{*}$ and since, by Lemma \ref{w2linked}, $G^*\cong P_3\Box P_6$ is weakly $2$-linked, the linkage for $\pi_3,\pi_4$ can be completed in $G^*$. A common index $\ell\in\{1,2\}$ as above exists by the pigeon hole principle if one of the terminal sets in NW or NE differs from type $T_1$ in Fig.\ref{except}. 
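Claims of this kind, that a small grid admits pairwise edge disjoint paths joining given terminal pairs, can also be confirmed by machine. The following sketch (Python, not part of the proof; the grid sizes and terminal coordinates are chosen only for illustration) performs a backtracking search for a linkage:

```python
from itertools import product

def grid_edges(rows, cols):
    """Edge set of the grid P_rows x P_cols, each edge a frozenset of its endpoints."""
    edges = set()
    for r, c in product(range(rows), range(cols)):
        if r + 1 < rows:
            edges.add(frozenset({(r, c), (r + 1, c)}))
        if c + 1 < cols:
            edges.add(frozenset({(r, c), (r, c + 1)}))
    return edges

def linkage_exists(pairs, edges):
    """Backtracking search: route the pairs one by one on vertex-simple paths
    whose edge sets are pairwise disjoint.  Complete, since any linkage may be
    taken to consist of simple paths."""
    if not pairs:
        return True
    (s, t), rest = pairs[0], pairs[1:]

    def extend(v, used, seen):
        if v == t:
            return linkage_exists(rest, edges - used)
        for e in edges - used:
            if v in e:
                (w,) = e - {v}
                if w not in seen and extend(w, used | {e}, seen | {w}):
                    return True
        return False

    return extend(s, frozenset(), frozenset({s}))

# Example: in P_3 x P_6 two "crossing" corner pairs admit a linkage,
# while a single edge cannot carry two paths.
assert linkage_exists([((0, 0), (2, 5)), ((0, 5), (2, 0))], grid_edges(3, 6))
assert not linkage_exists([((0, 0), (0, 1))] * 2, grid_edges(1, 2))
```

Such a search is feasible only for small instances; it is a sanity check for particular terminal placements, not a replacement for the structural argument.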
If the terminals in both quadrants are of type $T_1$, then we have $s_1,s_2,s_3\in B(1)\cap NW$ and $t_1,t_2,t_4\in B(6)\cap NE$. Now we mate $s_1,s_2,t_1,t_2$ into the $3\times 4$ grid $G^\prime$ induced by $(NW\cup NE)\setminus (B(1)\cup B(6))$ along their rows, furthermore, we mate $s_3,t_4$ to vertices $s_3^*, t_4^*\in A(4)$ along their columns. Since $G^\prime$ is $2$-path-pairable, a linkage for $\pi_1,\pi_2$ can be completed in $G^\prime$. Since the weakly $2$-linked $G^*=G-(NW\cup NE)$ contains $s_3^*,t_3,s_4,t_4^*$, there are edge disjoint paths from $s_3^*$ to $t_3$ and from $s_4$ to $t_4^*$ completing a linkage in $G^*$ for $\pi_3$ and $\pi_4$.\\ \begin{figure}[htp] \begin{center} \tikzstyle{A} = [circle, minimum width=.5pt, draw=black, inner sep=2pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \begin{tikzpicture} \draw[line width=1.5pt] (1.5,3)--(1.5,.5)--(3,.5); \draw[line width=1.5pt] (1,3)--(1,1.5)--(3,1.5); \draw[->,double](.5,3)--(1.9,3); \draw[->,double](3,1)--(3,1.9); \foreach \y in {.5,1,1.5,2,2.5,3} \draw (.5,\y)--(3,\y); \foreach \x in {.5,1,1.5,2,2.5,3} \draw (\x,.5)--(\x,3); \foreach \x in {.5,1,1.5,2,2.5,3} \foreach \y in {.5,1,1.5,2,2.5,3} \node[V]() at (\x,\y){}; \foreach \x in {.5,1,1.5}\node[T]() at (\x,3){}; \foreach \y in {.5,1,1.5}\node[T]() at (3,\y){}; \node() at (3.3,1){$t_4$}; \node() at (3.3,1.5){$t_1$}; \node() at (3.3,.5){$t_2$}; \node() at (3.3,2.15){$t_4^\prime$}; \node() at (2.15,3.3){$s_3^\prime$}; \node() at (.5,3.3){$s_3$}; \node() at (1,3.3){$s_2$}; \node() at (1.5,3.3){$s_1$}; \end{tikzpicture} \hskip1cm \begin{tikzpicture} \draw[line width=2.2pt] (1,2)--(1,1)--(2,1) (1.5,3)--(1.5,1.5)--(3,1.5); \draw[->, line width=1.2] (1.5,3)--(1.9,3); \draw[->, line width=1.2] (2.5,1.5)--(2.5,1.9); \foreach \y in {.5,1,1.5,2,2.5,3} \draw (.5,\y)--(3,\y); 
\foreach \x in {.5,1,1.5,2,2.5,3} \draw (\x,.5)--(\x,3); \foreach \x in {.5,1,1.5,2,2.5,3} \foreach \y in {.5,1,1.5,2,2.5,3} \node[V]() at (\x,\y){}; \foreach \x in {2.5,3}\node[A]() at (\x,1.5){}; \foreach \y in {3}\node[A]() at (1.5,\y){}; \node[A]() at (2,1){}; \node[A]() at (1,2){}; \node[A]() at (2,3){}; \node[A]() at (2.5,2){}; \node() at (2.3,.75){$t_1^\prime$}; \node() at (2.75,1.27){$t_4^\prime$}; \node() at (2.75,2.2){$t_4^*$}; \node() at (3.3,1.27){$t_2^\prime$}; \node() at (1.3,3.3){$s_3^\prime$}; \node() at (2.2,3.3){$s_3^*$}; \node() at (1.3,2.78){$s_2^\prime$}; \node() at (.75,2.3){$s_1^\prime$}; \end{tikzpicture} \end{center} \caption{Case (VI)} \label{VI} \end{figure} For the diagram (VI) suppose that $s_1,s_2,s_3\in A(1)\cap NW$ and $t_1,t_2,t_4\in B(6)\cap SE$. Mate $s_3$ into $s_3^\prime\in B(4)$ along $A(1)$, and mate $t_4$ into $t_4^\prime\in A(3)$ along $B(6)$. Since $s_3^\prime,t_3,s_4,t_4^\prime\in NE$, and NE is weakly $2$-linked, a linkage can be completed in NE for $\pi_3,\pi_4$. For the pairs $\pi_1,\pi_2$ a linkage can be obtained easily by taking shortest paths through SW as shown in the left of Fig.\ref{VI}. Assume now that the terminals in one of the quadrants NW and SE are not of type $T_1$ as before. Then we apply Lemma \ref{heavy4} for NW with $A=B(3)\cap NW$ and $b=(3,2)$, and we apply Lemma \ref{heavy4} for SE with $A=A(4)\cap SE$ and $b=(5,4)$. Then by the pigeon hole principle, we obtain a common index $\ell\in\{1,2\}$, say $\ell=1$, which satisfies: there exists a path in NW from $s_1$ to $s_1^\prime=(3,2)$, and there exists a path in SE from $t_1$ to $t_1^\prime=(5,4)$, furthermore, terminals $s_2,s_3\in NW$, $t_2,t_4\in SE$ are mated into not necessarily distinct vertices $s_2^\prime,s_3^\prime\in B(3)$ and $t_2^\prime,t_4^\prime\in A(4)$ using edge disjoint mating paths. We take an $s_2^\prime,t_2^\prime$-path in $B(3)\cup A(4)$ to complete a linkage for $\pi_2$. 
Then the mating paths to $s_3^\prime,t_4^\prime$ are extended into $s_3^*,t_4^*\in NE$. Since NE is weakly $2$-linked, a linkage can be completed there for $\pi_3,\pi_4$ (see the right of Fig.\ref{VI}).\\ For q-diagram (VII) we apply Lemma \ref{Caforpq} (i) with $Q=NW$ and $y_0=(1,3)$. W.l.o.g. we assume that there is a framing in $NW$ for $\pi_1,\pi_2$ with $C_1$ and a mating of $s_3$ into $y_0$. We extend this mating path to $s_3^*=(1,4)\in NE$. Next we apply Lemma \ref{12toCa} with $Q=SE$, and $p=1, q=2$. Thus we obtain a framing in $SE$ for $\pi_1,\pi_2$ to $C_\alpha$, for some $\alpha\in\{0,1\}$. \begin{figure}[htp] \begin{center} \tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt] \tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt] \tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt] \tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100] \begin{tikzpicture} \draw[->,line width=1.2pt] (0,5) -- (0,4) -- (.9,4); \draw[->,line width=1.2pt] (1,5) -- (1,4.1); \draw[->,line width=1.2pt] (4,2) -- (3.1,2); \draw[->,line width=1.2pt] (0,3) -- (0,3) -- (2,3) -- (2,4.9); \draw[->,line width=1.2pt] (2,5) -- (2.9,5); \draw[->,line width=1.2pt] (5,2) -- (5,0) -- (3,0) -- (3,1.9); \draw[->,line width=1.2pt] (5,1)--(4.1,1); \draw[snake,line width=.5pt] (1,2) -- (3,2) -- (3,4) -- (1,4) -- (1,2) (3.1,5) -- (4,5) -- (4,1); \draw[dashed] (0,4) -- (0,0) -- (3,0) (4,0)--(4,1)--(0,1) (5,2)--(5,5)--(4,5); \draw[dashed] (0,5) -- (2,5) (3,4)--(5,4) (2,3)--(5,3) (0,2)--(1,2) (4,2)--(5,2); \draw[dashed] (1,0) -- (1,2) (2,0)--(2,3); \foreach \x in {0,1,2,3,4,5} 
\foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[T](s4) at (3,4){}; \node[txt]() at (3.35,4.3) {$s_4$}; \node[T](t4) at (2,1){}; \node[txt]() at (1.7,.7) {$t_4$}; \node[T,label=right:$t_3$](t3) at (5,1){}; \node[T](t1) at (4,2){}; \node[txt]() at (4.35,1.7) {$t_1$}; \node[txt]() at (4.3,.7) {$t_3^\prime$}; \node[T,label=right:$t_2$](t2) at (5,2){}; \node[T,label=above:$s_1$](s1) at (0,5){}; \node[T,label=above:$s_2$](s2) at (1,5){}; \node[T,label=left:$s_3$](s3) at (0,3){}; \node(s3*) at (3,5.35){$s_3^*$}; \node[M] () at (1,4) {}; \node[M] () at (3,2) {}; \node[M] () at (4,1) {}; \node[M,label=above:$y_0$] () at (2,5) {}; \node[M] () at (3,5) {}; \node[txt]() at (1.4,3.6) {$D$}; \node[txt]() at (3.6,3.6) {$P_3^*$}; \end{tikzpicture} \begin{tikzpicture} \draw[double,line width=.3pt] (1,2) -- (3,2) -- (3,4) -- (1,4) -- (1,2) (3,5) -- (4,5) -- (4,1) -- (5,1); \draw[double,line width=.3pt] (0,3) -- (2,3) -- (2,5) -- (3,5) (0,5) -- (0,4) -- (1,4) (1,5)--(1,4); \draw[double,line width=.3pt] (4,2) -- (3,2) (5,2) -- (5,0)--(3,0)--(3,2); \draw[line width=2.2pt] (4,5) -- (5,5) -- (5,3) -- (2,3) -- (2,0); \draw[line width=2.2pt] (3,5) -- (3,4) -- (5,4); \draw[line width=2.2pt] (1,1)-- (1,2) -- (0,2) -- (0,0) -- (2,0) -- (2,2); \draw[dashed] (0,5)--(2,5) (0,4)--(0,2) (5,3)--(5,2)--(4,2) (0,1)--(4,1); \draw[dashed] (0,1)--(1,1) (4,0)--(4,1) (1,0)--(1,1) (2,0)--(3,0); \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[B]()at (4,3) {}; \node[T](s4) at (3,4){}; \node[txt]() at (3.35,4.3) {$s_4$}; \node[T](t4) at (2,1){}; \node[txt]() at (1.7,.7) {$t_4$}; \node[T,label=right:$t_3$](t3) at (5,1){}; \node[T](t1) at (4,2){}; \node[txt]() at (4.35,1.7) {$t_1$}; \node[T,label=right:$t_2$](t2) at (5,2){}; \node[T,label=above:$s_1$](s1) at (0,5){}; \node[T,label=above:$s_2$](s2) at (1,5){}; \node[T,label=left:$s_3$](s3) at (0,3){}; \node[txt]() at (2.6,5.3) {$P_3$}; \node[txt]() at (2.6,3.6) {$P_2$}; \node[txt]() at (1.4,2.4) 
{$P_1$}; \end{tikzpicture} \end{center} \caption{Case (VII)} \label{VII} \end{figure} For $\alpha =1$, a linkage for $\pi_1,\pi_2$ is completed along $C_1$ and $t_3$ is mated in SE to $(4,4)\in C_0$. It remains to build a framing in $NE$ for $\pi_3,\pi_4$ with $C_0$. For this purpose we apply Lemma \ref{frame} with $s_3^*,s_4\in NE$ and mate $t_4$ in SW to $C_0$ not using edges of $C_1$. For $\alpha =0$, the solution is obtained by combining the frames as follows. Let $D$ be the $8$-cycle spanned by the neighbors of $(3,3)$ (see the left of Fig.\ref{VII}). Observe that no edges of $D$ have been used by the mating paths in the two framings. Thus a linkage $P_1,P_2$ for $\pi_1,\pi_2$ is completed around $D$. A linkage $P_3$ for $\pi_3$ can be completed by a path $P^*_3\subset A(1)\cup B(5)$ from $s_3^*$ to $t_3^\prime=(5,5)$. The right picture in Fig.\ref{VII} shows that $s_4$ and $t_4$ are not disconnected by the linkage built so far: the tree highlighted in the picture saturates all vertices of $NE\cup SW$ and is edge disjoint from $P_1\cup P_2\cup P_3$. Hence there is a linkage for $\pi_4$.\\ A.4: $\|NW\|=4$, let $s_1,s_2,s_3,s_4\in NW$. Two cases will be distinguished according to whether there is a quadrant $Q\neq$NW with three or more terminals or not. By symmetry, we may assume that $\|NE\|\geq \|SW\|$. A.4.1: $\|Q\|\geq 3$, where $Q=$NE or SE. In each case we apply Lemma \ref{heavy4} twice: for $NW$ with $A=A(3)\cap NW$, $B=B(3)\cap NW$, then for $Q$, with $A=A(3)\cap Q$, $B=B(3)\cap Q$, if $Q=$NE or with $A=B(4)\cap Q$, $B=A(4)\cap Q$, if $Q=$SE. Assume that there is a common index $\ell$, $1\leq \ell\leq 4$, resulting from the two applications of Lemma \ref{heavy4}, such that $s_\ell$ is linked to $(2,3)$ in $NW$ and $t_\ell\in NE$ is linked in $NE$ to $(2,4)$ or $t_\ell\in SE$ is linked in SE to $(4,4)$, furthermore, the remaining (five or six) terminals are mated into $A(3)\cap NW$ and into $A(3)\cap NE$ or $B(4)\cap SE$. 
First we complete a linkage for $\pi_\ell$ by the inclusion of the edge $(2,3)-(2,4)$. W.l.o.g. assume that $\ell=1$. Lemma \ref{heavy4} also implies that the mating paths leading from $s_2,s_3,s_4$ to the not necessarily distinct mates $s_2^\prime,s_3^\prime,s_4^\prime$ are not using the edges of $A(3)$, and similarly, the mating paths in $Q$ to the not necessarily distinct mates $t_i^\prime\in Q$ are not using the edges $A(3)$ or $B(4)$, for $Q=$NE or SE. \begin{figure}[htp] \begin{center} \tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt] \tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt] \tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt] \tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100] \begin{tikzpicture} \draw[double,line width=.5pt] (0,0) -- (0,2)--(5,2)--(5,0)--(0,0); \draw[double,line width=.5pt] (3,0) -- (3,2); \draw[double,line width=.5pt](4,2)--(4,0); \draw[double,line width=.5pt] (0,1) -- (5,1) (0,2) -- (5,2); \draw[double,line width=.5pt] (1,0) -- (1,2) (2,0) -- (2,2); \draw[->,line width=1.2pt] (0,5)--(0,3.1); \draw[->,line width=1.2pt] (0,3)--(0,2.1); \draw[->,line width=1.2pt] (1,5)--(2,5)--(2,3.1); \draw[->,line width=1.2pt] (2,3)--(2,2.1); \draw[->,line width=1.2pt] (5,4)--(5,3.1); \draw[->,line width=1.2pt] (5,3)--(5,2.1); \draw[line width=2.2pt] (1,3)--(1,4)--(4,4)--(4,5); \draw[line width=2.2pt] (2,3)--(3,3)--(3,4); \draw[dashed] (5,3)--(3,3)-- (3,2) (3,5)-- (3,4) (0,3)--(2,3); \draw[dashed] (0,4)--(1,4)--(1,5)--(0,5) (1,2)--(1,3) (2,5)--(5,5)--(5,4)--(4,4)--(4,2) ; \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[B]()at (4,3) {}; 
\node[M]()at (2,4) {};\node[M]()at (3,3) {}; \node[M]()at (0,3) {};\node[M]()at (5,3) {}; \node[T](s2) at (2,3){}; \node[txt]() at (.3,3.3) {$s_3^\prime$}; \node[T,label=above:$s_3$](s3) at (0,5){}; \node[T,label=above:$s_4$](s4) at (1,5){}; \node[T](s1) at (1,3){}; \node[txt](s1) at (.75,2.7) {$s_1$}; \node[txt]() at (2.3,2.7) {$s_4^\prime$}; \node[txt]() at (3.3,2.7) {$t_2^\prime$}; \node[txt]() at (4.7,3.3) {$t_4^\prime$}; \node[txt](s2) at (1.7,2.7) {$s_2$}; \node[txt](s1') at (1.7,4.3) {$s_1^\prime$}; \node[txt](t1') at (3.3,4.3) {$t_1^\prime$}; \node[T](t2) at (3,4){}; \node() at (3.3,3.75){$t_2$}; \node[T,label=above:$t_1$](t1) at (4,5){}; \node[T,label=right:$t_4$](t4) at (5,4){}; \node[T,label=below:$t_3$](t3) at (3,0){}; \node(s1*) at (2.3,1.75){$s_4^*$}; \node[M] () at (2,2) {}; \node(s2*) at (.3,1.75){$s_3^*$}; \node[M]()at (0,2) {}; \node(t1*) at (4.7,1.7){$t_4^*$}; \node[M]()at (5,2) {}; \node[txt]() at (2.6,3.4) {$P_2$}; \node[txt]() at (2.5,4.4) {$P_1$}; \end{tikzpicture} \hskip1cm \begin{tikzpicture} \draw[double,line width=.5pt] (0,0) -- (2,0) (0,1)--(2,1) (0,2)--(2,2); \draw[double,line width=.5pt] (0,0) -- (0,2) (1,0) -- (1,2) (2,0) -- (2,2); \draw[->,line width=1.2pt] (0,3)--(0,2.1); \draw[->,line width=1.2pt] (1,3)--(1,2.1); \draw[->,line width=1.2pt] (5,1)--(3.1,1); \draw[->,line width=1.2pt] (3,1)--(2.1,1); \draw[->,line width=1.2pt] (1,4)--(1,3.1); \draw[->,line width=1.2pt] (3,2)--(2.1,2); \draw[line width=2.2pt] (1,5)--(1,4)--(4,4)--(4,0)--(3,0); \draw[line width=2.2pt] (0,5)--(0,3)--(3,3)--(3,2)--(5,2); \draw[dashed] (0,5)--(5,5)--(5,0)--(4,0) (3,5)-- (5,5)-- (5,4) (1,4)--(1,5) (2,5)--(2,2); \draw[dashed] (0,4)--(0,3)--(1,3) (4,4)--(5,4) (4,5)--(4,2) (3,5)--(3,3)--(5,3) (2,0)--(3,0)--(3,2) (0,4)--(1,4); \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[B]()at (4,3) {}; \node[T](s4) at (1,4){}; \node[txt]() at (1.3,3.7) {$s_4$}; \node() at (1.3,2.7) {$s_4^\prime$}; \node() at (2.3,4.3) 
{$s_1^\prime$}; \node[T](t4) at (3,2){}; \node()at(2.75,1.75) {$t_4$}; \node[T,label=left:$s_3$](s3) at (0,3){}; \node(s3') at (.3,2.7) {$s_2^\prime$}; \node[T,label=below:$t_1$](t1) at (3,0){}; \node[T,label=right:$t_2$](t2) at (5,2){}; \node[txt](t3') at (3.35,.7) {$t_3^\prime$}; \node[txt](t1') at (4.35,1.7) {$t_1^\prime$}; \node[txt](t2') at (3.3,1.7) {$t_2^\prime$}; \node[T,label=above:$s_2$](s2) at (0,5){}; \node[T,label=above:$s_1$](s1) at (1,5){}; \node[T,label=right:$t_3$](t3) at (5,1){}; \node[M]()at (0,2) {}; \node[M]()at (1,2) {}; \node[M]()at (4,2) {}; \node[M]()at (2,2) {}; \node[M]()at (2,4) {}; \node[M]()at (3,1) {}; \node[M]()at (2,1) {}; \node(s4*) at (1.25,1.75){$s_4^*$}; \node(s3*) at (.3,1.75){$s_3^*$}; \node[M]()at (1,3) {}; \node(t3*) at (1.7,.7){$t_3^*$}; \node(t3*) at (1.758,1.7){$t_4^*$}; \node[txt]() at (2.6,2.6) {$P_2$}; \node[txt]() at (1.4,4.4) {$P_1$}; \end{tikzpicture} \end{center} \caption{$\|NW\|=4$, $\|Q\|\geq 3$} \label{NW4} \end{figure} Next a linkage for another pair $\pi_j$, $2\leq j\leq 4$ is completed along $A(3)$ (if $Q=$NE) or along $A(3)\cup B(4)$ (if $Q=$SE), where $j$ is selected as follows: $j$ is arbitrary provided all mates are distinct; if only one vertex of $A(3)$ or $B(4)$ hosts two mates in $NW$ (or in $Q$), then $j$ is an index of a mate at that vertex; if both NW and $Q$ contain repeated mates, then $j\in\{2,3,4\}$ is selected so that both $s_j^\prime$ and $t_j^\prime$ are repeated mates (such an index $j$ exists by the pigeon hole principle). W.l.o.g. let $j=2$. Finally, a linkage can be obtained by extending the (three or four) mating paths from the remaining distinct mates into neighbors in $SW\cup SE$ (if $Q=$NE) or into neighbors in SW (if $Q=$SE). Then the linkage for $\pi_3,\pi_4$ can be completed in the $3\times 6$ grid $SW\cup SE$ or in the quadrant SW, which are both weakly $2$-linked by Lemma \ref{w2linked} (Fig.\ref{NW4} shows solutions). 
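The pigeon hole step in the choice of $j$ can be isolated and checked by brute force: three mates on a side with a repetition occupy at most two distinct vertices, so at least two of the three indices are repeated on that side, and two such subsets of $\{2,3,4\}$ of size at least two must intersect. A small illustrative script (Python, not part of the proof):

```python
from itertools import product

def repeated_indices(mates):
    """Indices whose mate vertex hosts at least two of the three mates."""
    return {i for i, v in mates.items()
            if sum(1 for w in mates.values() if w == v) >= 2}

def common_index_always_exists():
    """Brute force over all placements of three mates per side on at most two
    distinct vertices 'a', 'b': a map from three indices to two vertices always
    repeats a vertex, so each repeated-index set has size >= 2, and two such
    subsets of {2, 3, 4} intersect."""
    indices = (2, 3, 4)
    for side1 in product("ab", repeat=3):
        for side2 in product("ab", repeat=3):
            s = repeated_indices(dict(zip(indices, side1)))
            t = repeated_indices(dict(zip(indices, side2)))
            if not s & t:
                return False
    return True

assert common_index_always_exists()
```

The exhaustive check covers all $2^3\times 2^3$ placements, which is exactly the situation where both sides contain repeated mates.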
Therefore a solution is obtained once a common index $\ell$ can be selected to link $\pi_\ell$ as above. By the pigeon hole principle there is a common index $\ell$ unless the terminal set in NW is of type $T_2$, and the terminal set in $Q$ is of type $T_1$ or $T_2$ in Fig.\ref{except} (ii). We handle the exceptional cases one-by-one. Let $s_1=(2,3), s_2=(1,3)$, $s_3=(1,2)$ and $s_4=(1,1)$. For $\|SE\|=4$, if the terminals in SE are located according to type $T_2$ as well, then the argument using the common index $\ell$ can be repeated by switching the role of NE and SE. Since the pattern $T_2$ is not symmetric about the diagonal, the solution above works. \begin{figure}[htp] \begin{center} \tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt] \tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt] \tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt] \tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100] \begin{tikzpicture} \foreach \x in {0,...,5} \foreach \y in {0,1,2,3,4,5} { \draw (0,\y)--(5,\y); \draw (\x,0)--(\x,5);} \draw[line width=2.2pt] (2,5)--(4,5) (1,5)--(1,4)--(3,4)--(3,5) ; \draw[line width=2.2pt] (0,5)--(0,2)--(3,2)--(3,4) (2,4)--(2,3)--(5,3)--(5,5) ; \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[M]() at (5,3){}; \node[M]() at (3,2){}; \node[M]() at (0,2){}; \node[M]() at (2,3){}; \node() at (5.3,2.7) {$t_1^\prime$}; \node() at (1.7,2.7) {$s_1^\prime$}; \node[T](s1) at (2,4){}; \node() at (1.75,3.75) {$s_1$}; \node[T,label=above:$s_4$](s4) at (0,5){}; \node[T,label=above:$s_3$](s3) at (1,5){}; \node[T,label=above:$s_2$](s2) at (2,5){}; 
\node[T,label=above:$t_2$](t2) at (4,5){}; \node[T,label=above:$t_1$](t1) at (5,5){}; \node[T,label=above:$t_3$](t3) at (3,5){}; \node[T](t4) at (3,4){}; \node() at (3.25,3.75) {$t_4$}; \node() at (3.25,1.75) {$t_4^\prime$}; \node() at (.3,1.75) {$s_4^\prime$}; \node() at (.4,2.4) {$P_4$}; \node() at (4.6,3.4) {$P_1$}; \node() at (3.6,4.6) {$P_2$}; \node() at (1.4,4.4) {$P_3$}; \end{tikzpicture} \end{center} \caption{$\|NE\|=4$} \label{diagonal2} \end{figure} For $\|NE\|=4$, we have $\{t_3,t_4\}=\{(1,4),(2,4)\}$ and $\{t_1,t_2\}=\{(1,5),(1,6)\}$. Thus there are two pairs in $A(1)$, say $\pi_2,\pi_3\subset A(1)$. Their linkage can be done using the $s_2,t_2$-path $P_2\subset A(1)$ and the $s_3,t_3$-path $P_3\subset (B(2)\cup A(2)\cup B(4))$. The remaining terminals can be mated along their distinct columns into vertices $s_1^\prime,t_1^\prime\in A(3)$ and $s_4^\prime,t_4^\prime\in A(4)$. The linkage for $\pi_1,\pi_4$ can be completed along $A(3)$ and $A(4)$, respectively (see Fig.\ref{diagonal2}). 
\begin{figure}[htp] \begin{center} \tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt] \tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt] \tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt] \tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100] \begin{tikzpicture} \foreach \x in {0,...,5} \foreach \y in {0,1,2,3,4,5} { \draw (0,\y)--(5,\y); \draw (\x,0)--(\x,5);} \draw[->,line width=1.2pt] (5,3)--(5,2.1); \draw[double,line width=.5pt] (1.7,5.6)--(5.7,5.6)-- (5.7,2.7)--(1.7,2.7)--(1.7,5.6); \draw[double,line width=.5pt] (1.3,5.6)--(-.4,5.6)--(-.4,-.7)--(5.7,-.7) --(5.7,2.35)--(1.3,2.35)--(1.3,5.6) ; \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[M](t3') at (5,2){}; \node() at (5.3,2) {$t_3^\prime$}; \node[T](s1) at (2,4){}; \node() at (2.3,3.7) {$s_1$}; \node[T,label=above:$s_4$](s4) at (0,5){}; \node[T,label=above:$s_3$](s3) at (1,5){}; \node[T,label=above:$s_2$](s2) at (2,5){}; \node[T,label=right:$t_2$](t2) at (5,4){}; \node[T,label=right:$t_1$](t1) at (5,5){}; \node[T,label=below:$t_4$](t4) at (2,0){}; \node[T,label=right:$t_3$](t3) at (5,3){}; \node[txt]() at (5.4,.4) {$G^\prime$}; \end{tikzpicture} \hskip1cm \begin{tikzpicture} \foreach \x in {0,...,5} \foreach \y in {0,1,2,3,4,5} { \draw (0,\y)--(5,\y); \draw (\x,0)--(\x,5);} \draw[double,line width=.5pt] (1.7,5.6)--(5.7,5.6)-- (5.7,-.7)--(3.7,-.7)--(3.7,3.55)--(1.7,3.55)--(1.7,5.6); \draw[double,line width=.5pt] (1.3,5.6)--(-.4,5.6)-- (-.4,-.7)--(3.3,-.7) --(3.3,3.35)--(1.3,3.35)--(1.3,5.6) ; \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[T](s1) at (2,4){}; 
\node() at (2.3,3.7) {$s_1$}; \node[T,label=above:$s_4$](s4) at (0,5){}; \node[T,label=above:$s_3$](s3) at (1,5){}; \node[T,label=above:$s_2$](s2) at (2,5){}; \node[T,label=below:$t_2$](t2) at (4,0){}; \node[T,label=below:$t_1$](t1) at (5,0){}; \node[T,label=below:$t_3$](t3) at (3,0){}; \node[T](t4) at (2,2){}; \node() at (2.3,2.3){$t_4$}; \node[txt]() at (5.4,.4) {$G^\prime$}; \end{tikzpicture} \end{center} \caption{The grid $G^\prime$ for $\|NE\|=3$ (left) and for $\|SE\|=3$ (right)} \label{Qexcept} \end{figure} For $\|NE\|=3$ we have $\{t_1,t_2\}=\{(1,6),(2,6)\}$ and $(3,6)=t_3$ or $t_4$. First we mate the terminal at $(3,6)$ to $(4,6)$. The grid $G^\prime=(SW\cup SE)\cup (B(1)\cup B(2))$ is $2$-path-pairable, thus a linkage for $\pi_3,\pi_4$ can be completed in $G^\prime$. In $G-G^\prime$, which is $2$-path-pairable as well, there is a linkage for $\pi_1,\pi_2$ (see the left of Fig. \ref{Qexcept}). For $\|SE\|=3$ we have $\{t_1,t_2\}=\{(6,5),(6,6)\}$ and $(6,4)=t_3$ or $t_4$. The grid $G^\prime=(A(1)\cup A(2))\cup (B(5)\cup B(6))\setminus (B(1)\cup B(2))$ and its complement are both $2$-path-pairable. Thus $G^\prime$ contains a linkage for $\pi_1,\pi_2$, and $G-G^\prime$ contains a linkage for $\pi_3,\pi_4$ (see the right of Fig. \ref{Qexcept}).\\ In the remaining cases we have $\|Q\|\leq 2$, for every $Q\neq$ NW. Since $2\geq \|NE\|\geq \|SW\|$, we have either $\|SE\|=2$ and $\|NE\|=\|SW\|= 1$ or $\|NE\|=2$.\\ A.4.2: $\|SE\|=2$ and $\|NE\|=\|SW\|= 1$. We apply Lemma \ref{heavy4} for NW with $A=A(3)\cap NW$ and $B=B(3)\cap NW$. There are at least two terminals that can be mated to $(2,3)$, hence by the pigeon hole principle, there is an index $1\leq \ell\leq 4$ such that $s_\ell$ is mated to $s_\ell^\prime=(2,3)$ and $t_\ell\in NE\cup SE$. W.l.o.g. we may assume that $\ell=1$, and the terminals $s_2,s_3,s_4$ are mated into $s_2^\prime,s_3^\prime,s_4^\prime\in A(3)$ by the lemma. If $t_1\in NE$, then a linkage for $\pi_1$ is completed by an $s_1^\prime,t_1$-path. 
Moreover, if $s_2^\prime,s_3^\prime,s_4^\prime$ are distinct, then the linkage for the remaining terminals can be completed in the $3$-path-pairable grid $G^*=G-(A(1)\cup A(2))$. Assume now that $s_2^\prime,s_3^\prime,s_4^\prime$ are not distinct, and let $w\in A(3)\cap NW$ be the mate of two terminals of NW, that is $s_i^\prime=s_j^\prime=w$, for some $2\leq i<j\leq 4$ (actually, one of them is a terminal, $s_i^\prime=s_i$ or $s_j^\prime=s_j$). Since $\|SW\|\leq 1$, $t_i$ or $t_j$ is a terminal in $NE\cup SE$, say $t_i\in NE\cup SE$; let $t_k$ be the third terminal in $NE\cup SE$ (that is, $t_1,t_i,t_k\in NE\cup SE$). We plan to specify a linkage for $\pi_1$ by mating $t_1$ to $s_1^\prime$, then specify a linkage for $\pi_i$ by mating $t_i$ to a vertex of $A(3)$; the remaining terminals can be mated into the weakly $2$-linked SW, where a linkage for $\pi_j,\pi_k$ can be found. The plan is easy to realize provided $t_1\in NE$. It is enough to mate $t_i\in SE$ along its column to $t_i^\prime\in A(3)$, then $s_j^\prime,s_k^\prime\in A(3)$ to $s_j^*,s_k^*\in A(4)$, and to mate $t_k\in SE$ along its row to $t_k^*\in B(3)$. Assume now that $t_1\in SE$. We introduce three auxiliary terminals in NE, let $x=(2,4)$, $x^\prime=(3,5)$, and $y=(3,4)$. There exist an $x,x^\prime$-path $X$ and an edge disjoint path $Y$ from the terminal of NE to $y$ not using edges of $A(3)$. If the terminal of NE is $t_k$, and thus $t_1,t_i\in SE$, then we extend $Y$ to $t_k^*=(4,3)$ by adding the path $y-(4,4)-t_k^*$, furthermore, we mate $t_1$ to $t_1^\prime=(4,5)$ and we mate $t_i$ to $t_i^\prime=(4,6)$ (see the left of Fig.\ref{adhoc}). If the terminal of NE is $t_i$, and thus $t_1,t_k\in SE$, let $t_i^\prime=y$ be the mate of $t_i$, we mate $t_1$ to $t_1^\prime=(4,5)$, and we mate $t_k$ to $t_k^\prime=(6,3)$. 
In each case we complete a linkage for $\pi_1$ by adding the path $X$ and the two edges $s_1^\prime x$ and $t_1^\prime x^\prime$; and we complete a linkage for $\pi_i$ by adding the $s_i^\prime, t_i^\prime$-path in $A(3)$. The unpaired terminals/mates from $A(3)\cup B(4)$ are mated into the weakly $2$-linked quadrant SW to complete a solution. An example is shown in the left of Fig.\ref{adhoc}. \begin{figure}[htp] \begin{center} \tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt] \tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt] \begin{tikzpicture} \draw[line width=2.2pt] (0,4)--(3,4)(4,3)--(4,2)--(4,1)--(3,1); \draw[line width=2.2pt] (1,3)--(5,3)--(5,2); \draw[snake](3,4)--(4,4)--(4,3) (4,5)--(3,5)--(3,3); \foreach \x in {0,1,2} \draw[double,line width=.5pt] (\x,2)--(\x,0); \foreach \y in {0,1,2} \draw[double,line width=.5pt] (0,\y)--(2,\y); \draw[->,line width=1.2pt] (3,0)-- (5,0)--(5,1.9); \draw[->,line width=1.2pt] (0,5)--(0,3.1); \draw[->,line width=1.2pt] (0,3)--(0,2.1); \draw[->,line width=1.2pt] (1,5)--(1,3.1); \draw[->,line width=1.2pt] (1,3)--(1,2.1); \draw[->,line width=1.2pt] (3,3)--(3,2)--(2.1,2); \draw[dashed] (0,5)--(3,5) (4,4)--(4,5)--(5,5)--(5,3)(4,0)--(4,1)--(5,1); \draw[dashed] (2,5)--(2,2) (2,0)--(3,0)--(3,2)--(5,2)(2,1)--(3,1)(4,4)--(5,4); \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[T](t1) at (3,1){}; \node()at(3.3,.7){$t_1$}; \node[T,label=above:$t_k$](tk) at (4,5){}; \node[T](ti') at (5,2){}; \node() at (5.3,1.75) {$t_i^\prime$}; \node[T,label=below:$t_j$](tj) at (0,0){}; \node[T,label=below:$t_i$](ti) at (3,0){}; \node[T,label=above:$s_k$](sk) at (0,5){}; 
\node[T,label=left:$s_1$](s1) at (0,4){}; \node[T,label=above:$s_j$](sj) at (1,5){}; \node[T](si) at (1,3){}; \node() at (.7,2.7) {$s_i$}; \node[M]()at (4,3) {}; \node() at (4.3,3.3) {$x^\prime$}; \node[M]()at (3,4) {}; \node() at (3.3,4.3) {$x$}; \node[M]()at (3,3) {}; \node() at (3.25,3.25) {$y$}; \node() at (.7,3.3) {$w$}; \node[M,label=left:$s_k^\prime$]()at (0,3) {}; \node[M]()at (2,4) {}; \node() at (1.7,4.3) {$s_1^\prime$}; \node[M]()at (4,2) {}; \node() at (4.3,1.7) {$t_1^\prime$}; \node[T](s3) at (1,3){}; \node()at(1.35,3.3){$s_j^\prime$}; \node[M]()at (1,2) {}; \node() at (1.35,2.3) {$s_j^*$}; \node[M]()at (2,2) {}; \node() at (2.35,2.3) {$t_k^*$}; \node[M,label=left:$s_k^*$]()at (0,2) {}; \node() at (3.65,3.65) {$X$}; \node() at (2.7,4.7) {$Y$}; \node() at (.4,.4) {$G^*$}; \end{tikzpicture} \hskip1cm \begin{tikzpicture} \foreach \x in {0,...,5} \draw[double,line width=.5pt] (\x,3)--(\x,0); \foreach \y in {0,...,3} \draw[double,line width=.5pt] (0,\y)--(5,\y); \draw[line width=2.2pt] (0,5)--(5,5)--(5,4); \draw[->,line width=1.2pt] (2,4)--(0,4)--(0,3.1); \draw[->,line width=1.2pt] (1,5)--(1,3.1); \draw[->,line width=1.2pt] (2,5)-- (2,3.1); \draw[->,line width=1.2pt] (4,5)-- (4,3.1); \draw[dashed] (0,5)--(0,4) (2,4)--(5,4)--(5,3) (3,5) -- (3,3); \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[T,label=right:$t_4$](t2) at (5,4){}; \node[T](t4) at (2,2){}; \node()at(2.3,1.7){$t_2$}; \node[T,label=below:$t_1$](t1) at (5,0){}; \node[T,label=above:$t_3$](t3) at (4,5){}; \node[T,label=above:$s_1$](s1) at (2,5){}; \node[T,label=above:$s_4$](s4) at (0,5){}; \node[T,label=above:$s_3$](s3) at (1,5){}; \node[T](s2) at (2,4){}; \node() at (1.7,4.3) {$s_2$}; \node[M]()at (2,3) {}; \node() at (1.7,3.3) {$s_1^{*}$}; \node[M]()at (1,3) {}; \node() at (.7,3.3) {$s_3^*$}; \node[M]()at (4,3) {}; \node() at (3.7,3.3) {$t_3^*$}; \node[M]()at (0,3) {}; \node() at (-.3,3.3) {$s_2^*$}; \node() at (4.6,4.6) {$P_4$}; \node() at (.4,.4) 
{$G^*$}; \end{tikzpicture} \end{center} \caption{$\|NW\|=4$} \label{adhoc} \end{figure} A.4.3: $\|NE\|=2$. The solution starts with Lemma \ref{heavy4} applied for NW with $A=A(3)\cap NW$ and $B=B(3)\cap NW$. Since $\|NW\|=4$, there are three terminals that can be mated to $b=(2,3)$ and the other ones to $A$ unless the four terminals in NW are located according to type $T_2$ in Fig.\ref{except}. First we sketch a solution for this exceptional case when $\{s_1,s_2\}=\{(1,3),(2,3)\}$ and $\{s_3,s_4\}=\{(1,1),(1,2)\}$, furthermore, the two terminals in NE are $t_3,t_4$. The solution on the right of Fig.\ref{adhoc} starts with a linkage $P_4$ for $\pi_4$ using no edges of NW other than those of $A(1)\cap NW$, and with mating the other terminals of $NW\cup NE$ into distinct vertices $s_1^*,s_2^*,s_3^*,t_3^*\in A(3)$. The linkage is completed for $\pi_1,\pi_2,\pi_3$ in the $3$-path-pairable $G^*=G-(A(1)\cup A(2))$. Therefore, we may assume that the terminals in NW are not in position $T_2$, and when we apply Lemma \ref{heavy4} for NW with $A=A(3)\cap NW$ and $B=B(3)\cap NW$, there are three terminals that can be mated to $(2,3)$. By the pigeon hole principle, there is an index $1\leq \ell\leq 4$ such that $s_\ell$ is mated to $s_\ell^\prime=(2,3)$ and $t_\ell\in NE\cup SE$. W.l.o.g. we may assume that $\ell=1$, and the terminals $s_2,s_3,s_4$ are mated into $s_2^\prime,s_3^\prime,s_4^\prime\in A(3)$ by the lemma. If $s_2^\prime,s_3^\prime,s_4^\prime$ are distinct, then we follow the solution given for the particular case above. A linkage for $\pi_1$ is specified first, then the $3$-path-pairability of $G^*=G-(A(1)\cup A(2))$ is used to obtain a linkage for the remaining pairs. 
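Each explicit solution exhibited in the figures is simply a list of paths, and its validity is mechanical to check: correct endpoints, grid adjacency along every path, and no edge used twice. A minimal checker (Python, not part of the proof; the pairs and paths below are hypothetical illustrations, not the configuration of any particular figure):

```python
def is_linkage(pairs, paths, rows=6, cols=6):
    """Check that paths[i] joins pairs[i] inside the rows x cols grid and
    that the paths are pairwise edge disjoint."""
    used = set()
    for (s, t), path in zip(pairs, paths):
        if path[0] != s or path[-1] != t:
            return False                      # wrong endpoints
        for (r1, c1), (r2, c2) in zip(path, path[1:]):
            if abs(r1 - r2) + abs(c1 - c2) != 1:
                return False                  # not a grid edge
            if not (0 <= r2 < rows and 0 <= c2 < cols):
                return False                  # leaves the grid
            e = frozenset({(r1, c1), (r2, c2)})
            if e in used:
                return False                  # edge reused
            used.add(e)
    return True

# Hypothetical example: three pairs joined along rows 0, 1 and 3 of a
# 4 x 6 grid (0-based coordinates, chosen only for illustration).
pairs = [((0, 0), (0, 5)), ((1, 0), (1, 5)), ((3, 0), (3, 5))]
paths = [[(r, c) for c in range(6)] for r in (0, 1, 3)]
assert is_linkage(pairs, paths, rows=4, cols=6)
```

Reusing the row-0 path for a second pair makes `is_linkage` fail on the repeated edge, which is exactly the edge disjointness that distinguishes a linkage from an arbitrary path system.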
\begin{figure}[htp] \begin{center} \tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt] \tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt] \tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt] \tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100] \hskip1cm \begin{tikzpicture} \foreach \x in {0,1,2} \draw[double,line width=.5pt] (\x,2)--(\x,0); \foreach \y in {0,1,2} \draw[double,line width=.5pt] (0,\y)--(2,\y); \draw[line width=2.2pt] (4,5)--(4,4)--(2,4); \draw[line width=2.2pt] (5,2)--(5,3)--(1,3); \draw[->,line width=1.2pt] (0,5)--(0,3.1); \draw[->,line width=1.2pt] (1,5)--(1,3.1); \draw[dashed] (0,5)--(5,5)--(5,4)(4,1)--(4,0)--(5,0)--(5,2); \draw[dashed] (0,4)--(2,4)--(2,2)--(5,2) (2,0)--(4,0); \draw[dashed] (3,5)--(3,0) (4,1)--(5,1) (0,3)--(1,3)(5,3)--(5,4); \draw[->,line width=1.2pt] (1,3)--(1,2.1); \draw[->,line width=1.2pt] (4,3)--(4,2.1); \draw[->,line width=1.2pt] (4,2)--(4,1)-- (2.1,1); \draw[->,line width=1.2pt] (0,3)--(0,2.1); \draw[->,line width=1.2pt] (5,4)--(4,4)--(4,3.1); \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[M]() at (4,3){}; \node()at(4.3,2.7){$t_k^\prime$}; \node[T,label=above:$t_1$](t1) at (4,5){}; \node[T](tk) at (5,4){}; \node() at (5.3,3.75) {$t_k$}; \node[T,label=below:$t_j$](tj) at (0,0){}; \node[T](ti) at (5,2){}; \node() at (5.3,1.75) {$t_i$}; \node[T,label=above:$s_k$](sk) at (0,5){}; \node[T,label=left:$s_1$](s1) at (0,4){}; \node[T,label=above:$s_j$](sj) at (1,5){}; \node[T](si) at (1,3){}; \node() at (.7,2.7) {$s_i$}; \node() at (.7,3.3) {$w$}; \node[M]()at (2,4) {}; \node() at (1.7,4.3) {$s_1^\prime$}; \node[M]()at (3,4) 
{}; \node() at (2.7,4.3) {$t_1^\prime$}; \node[M]()at (3,1) {}; \node[M]()at (4,2) {}; \node[M,label=left:$s_k^\prime$]()at (0,3) {}; \node[M]()at (5,3) {}; \node() at (5.3,2.7) {$t_i^{\prime}$}; \node[M]()at (2,1) {}; \node()at(2.3,.7){$t_k^*$}; \node[T](sj) at (1,3){}; \node() at (1.35,3.3) {$s_j^\prime$}; \node[M]()at (1,2) {}; \node() at (1.35,2.3) {$s_j^*$}; \node[M,,label=left:$s_k^*$]()at (0,2) {}; \node[txt]() at (3.6,4.4) {$P_1$}; \node[txt]() at (2.5,2.5) {$P_2$}; \node[txt]() at (.4,.4) {$G^*$}; \end{tikzpicture} \hskip.3cm \begin{tikzpicture} \draw[line width=2.2pt] (0,4)--(4,4)--(4,5); \draw[snake] (0,3)--(0,1)--(4,1)--(4,3); \foreach \x in {1,2} \draw[double,line width=.5pt] (\x,3)--(\x,0); \foreach \y in {0,3} \draw[double,line width=.5pt] (1,\y)--(2,\y); \draw[->,line width=1.2pt] (0,5)--(0,3.1); \draw[->,line width=1.2pt] (1,5)--(1,3.1); \draw[->,line width=1.2pt] (4,4)--(4,3.1); \draw[->,line width=1.2pt] (0,0)--(.9,0); \draw[dashed] (0,5)--(5,5)--(5,0) -- (2,0) (0,0)--(0,1); \draw[dashed] (4,1)--(5,1) (5,2)--(0,2)(0,3)--(1,3); \draw[dashed] (3,5)-- (3,0) (2,3) --(5,3)(4,0)--(4,1); \draw[dashed] (2,5)--(2,4)(4,4)--(5,4)(2,3)--(2,4); \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[T,label=above:$s_2$](s2) at (0,5){}; \node[T,label=left:$s_1$](s1) at (0,4){}; \node[T,label=above:$s_4$](s4) at (1,5){}; \node[T](s3) at (1,3){}; \node() at (.7,2.7) {$s_3$}; \node() at (.7,3.3) {$w$}; \node[T](t3) at (2,2){}; \node() at (2.3,1.7) {$t_3$}; \node[T,label=below:$t_4$](t4) at (0,0){}; \node[T](t2) at (4,4){}; \node() at (4.3,4.3) {$t_2$}; \node[T,label=above:$t_1$](t1) at (4,5){}; \node[M]()at (4,3) {}; \node() at (4.3,2.7) {$t_2^\prime$}; \node[M,label=left:$s_2^\prime$]()at (0,3) {}; \node() at (1.3,3.3) {$s_4^\prime$}; \node[M,label=below:$t_4^*$]()at (1,0) {}; \node[txt]() at (3.6,4.4) {$P_1$}; \node[txt]() at (1.5,.4) {$C$}; \node[txt]() at (3.4,1.4) {$X$}; \node[txt]() at (-.7,1) {$A(\ell)$}; 
\end{tikzpicture} \end{center} \caption{$\|NW\|=4$, $\|NE\|=2$} \label{Q2} \end{figure} Thus we assume that $s_2^\prime,s_3^\prime,s_4^\prime$ are not distinct. Let $w\in A(3)\cap NW$ be the mate of two terminals of NW, that is $s_i^\prime=s_j^\prime=w$, for some $2\leq i<j\leq 4$ (actually, one of them is a terminal, $s_i^\prime=s_i$ or $s_j^\prime=s_j$). If $\|SW\|\leq 1$, then $t_i$ or $t_j$ is a terminal in $NE\cup SE$, say $t_i\in NE\cup SE$. Now $t_i$ is mated to $t_i^\prime\in A(3)$ and a linkage for $\pi_i$ is completed by adding the $s_i^\prime,t_i^\prime$-path in $A(3)$. Then we mate the unlinked terminals of $NW\cup NE$ into the weakly $2$-linked quadrant SW to complete the linkage there for the remaining pairs $\pi_j,\pi_k$ (see the left of Fig.\ref{Q2}). To tackle the last subcase we may assume that $t_1,t_k\in NE$ and $t_i,t_j\in SW$. W.l.o.g. let $i=3,j=4$, that is $s_3^\prime=s_4^\prime=w\in A(3)\cap NW$ and $t_3,t_4\in SW$. Let $P_1\subset NW\cup NE$ be a linkage for $\pi_1$, let $s_2^\prime\in (A(3)\setminus\{w\})\cap NW$ and $t_2^\prime \in A(3)\cap NE$, as obtained before. There is a row $A(\ell)$, $4\leq \ell\leq 6$, containing no terminal, thus a linkage for $\pi_2$ can be completed by adding the path $Y$ from $s_2^\prime$ to $t_2^\prime$ in the union of their columns and $A(\ell)$. Define $C\subset SW$ to be the cycle bounded vertically by the two columns not containing $s_2^\prime$ and bounded horizontally by $A(3)$ and by row $A(5)$ if $\ell=6$ or by row $A(6)$ if $\ell\neq 6$. In this way we obtain a frame $[C,w]$. Since $t_3,t_4\notin X$, if $t_3$ and/or $t_4$ is not in $C$ it can be mated easily into $C$ along its row, thus the linkage for $\pi_3,\pi_4$ is obtained along the frame. An example is shown on the right of Fig.\ref{Q2}.\\ Case B: there is a quadrant containing a pair. Assume that $NW$ contains a pair and it has the largest number of terminals with this property.\\ B.1. $\|NW\|=7$ or $8$.
The strategy consists in linking two pairs in NW and mating the remaining terminals into $G-NW$ which is weakly $2$-linked by Lemma \ref{w2linked}. This plan works due to Lemma \ref{heavy78}.\\ B.2. $\|NW\|=6$. First we assume that NW consists of three pairs, say $\pi_1,\pi_2,\pi_3\subset NW$. We extend NW into the grid $H\cong P_4\Box P_4$ by including the $7$-path $L\subset (A(4)\cup B(4))$ between $(1,4)$ and $(4,1)$. By Lemma \ref{3pp}, $H$ is $3$-path-pairable; therefore, there is a linkage in $H$ for $\pi_1,\pi_2$ and $\pi_3$. Removing the edges of $H$ from $G$ leaves a connected graph, which contains a linkage for $\pi_4$. Next we assume that $NW$ contains $\pi_1, \pi_2$ and the terminals $s_3,s_4$. W.l.o.g. assume that $\|NE\|\geq \|SW\|$, and let $A=NW\cap A(3)$, $B=NW\cap B(3)$ and $x_0=(3,3)$. We apply Lemma \ref{heavy6} for $Q=NW$. We obtain a linkage for $\pi_1$ (or $\pi_2$ or both), and a mating of the remaining terminals into distinct vertices of $A\cup B$ such that $B-x_0$ contains at most one mate. We extend these mating paths ending at $(A\cup B)\setminus\{x_0\}$ into at most three vertices of $\{(4,1),(4,2),(1,4),(2,4)\}$. Observe that after this step both quadrants NE and SW contain at most three (not necessarily distinct) terminals/mates. Let $G^*=G-(A(1)\cup A(2)\cup B(1)\cup B(2))$, and let $L^*\subset (A(3)\cup B(3))$ be the $7$-path bounding $G^*$. By applying Lemma \ref{exit} (iii) twice, the terminals/mates in NE and those in SW can be mated to distinct vertices of $L^*$ without using edges in $L^*$. By Lemma \ref{3pp}, $G^*\cong P_4\Box P_4$ is $3$-path-pairable, hence the linkage for $\pi_2$ (or $\pi_1$) and $\pi_3,\pi_4$ can be completed in $G^*$.\\ B.3. $\|NW\|=5$. First we assume that NW contains $\pi_1,\pi_2$ and a terminal $s_3$. As in case B.2, we extend $NW$ into the grid $H\cong P_4\Box P_4$ by including the $7$-path $L\subset (A(4)\cup B(4))$ from $(1,4)$ to $(4,1)$. Let $s_3^*$ be any terminal-free vertex on $L$.
Since $H$ is $3$-path-pairable, there is a linkage in $H$ for the pairs $\pi_1,\pi_2$ and $\{s_3,s_3^*\}$. Next we mate $s_3^*$ and the remaining terminals of $L$ into $G-H$ using edges from $G$ to $G-H$. By Lemma \ref{w2linked}, $G-H=A(5)\cup A(6)\cup B(5)\cup B(6)$ is weakly $2$-linked thus a linkage can be completed there for the pairs $\pi_3,\pi_4$. Next we assume that $NW$ contains $\pi_1$ and the terminals $s_2,s_3,s_4$. W.l.o.g. assume that $\|NE\|\geq\|SW\|$, and apply Lemma \ref{heavy5} with $NW$ to obtain a linkage for $\pi_1$ and mates $s_2^\prime,s_3^\prime\in A(3)$, $s_4^\prime\in B(3)\cap NW$. For $\|NE\|\leq 2$ the solution is completed similarly to the one in B.2 as above. For $\|NE\|=3$ we extend the mating paths from $s_2^\prime,s_3^\prime$ to vertices $s_2^*,s_3^*\in A(4)\cap SW$ and the mating path from $s_4^\prime$ to $s_4^{*}\in B(4)\cap NE$. If $s_4^{*}\notin\{t_2,t_3,t_4\}$, then we apply Lemma \ref{boundary} to obtain a linkage for $\pi_4$ and the mating of $t_2,t_3$ to $t_2^*,t_3^*\in A(4)$. Assume that $s_4^{*}=t_i$, for some $2\leq i\leq 4$. If $s_4^*=t_4$, then a linkage is obtained for $\pi_4$, and we mate $t_2,t_3$ to $t_2^*,t_3^*\in A(4)$. W.l.o.g. let $s_4^*=t_2=w$. For $w=(3,4)$ we take a $w,t_4$-path in the weakly $2$-linked NE to complete the linkage for $\pi_4$, and we mate $t_3$ to $t_3^\prime\in A(3)\cap NE$; then we mate $t_2,t_3^\prime$ to $t_2^*,t_3^*\in A(4)$. For $w=(2,4)$ we mate $t_2$ into $(4,4)$ along $B(4)$. Then we take a $w,t_4$-path in the weakly $2$-linked $NE\setminus (3,3)$ to complete the linkage for $\pi_4$, and we mate $t_3$ to $t_3^*\in A(4)$. In each case we have $s_2^*,t_2^*,s_3^*,t_3^*\in A(4)$, thus a linkage for $\pi_2,\pi_3$ can be completed in the weakly $2$-linked halfgrid $SW\cup SE$.\\ B.4: $\|NW\|\leq 4$. Recall that NW contains a pair, say $\pi_1$, and $NW$ has the largest number of terminals with this property. B.4.1. 
If $\|NW\|=2$ or $3$, then it contains one pair, say $\pi_1$, and by the choice of NW, we have $\|NE\|\leq 3$. Applying Lemma \ref{exit} (ii) for NW and (iii) for NE, there is a linkage in NW for $\pi_1$ and the other terminals of $NW\cup NE$ can be mated into distinct vertices of $A(3)$ without using edges of $A(3)$. A linkage for $\pi_2,\pi_3,\pi_4$ can be completed in the grid $G-(A(1)\cup A(2))\cong P_4\Box P_6$ which is $3$-path-pairable by Lemma \ref{3pp}. B.4.2. Let $\|NW\|=4$. If NW contains two pairs then their linkage can be done in NW and the linkage of the other two pairs in $G-NW$, since both NW and $G-NW$ are $2$-path-pairable, by Lemma \ref{w2linked}. Thus we may assume that NW contains $\pi_1$ and terminals $s_2,s_3$. We distinguish cases where $\pi_4$ is contained in some quadrant $Q\neq NW$ or $s_4,t_4$ are in distinct quadrants. If $\pi_4\subset Q$, then we may assume, by symmetry, that $Q=$NE or SE. For $\pi_4\subset NE$, we apply Lemma \ref{boundary} twice. The pair $\pi_1$ is linked in NW and $s_2,s_3$ are mated into $A(3)\cap NW$; the pair $\pi_4$ is linked in NE and the remaining terminals are mated to $A(3)\cap NE$. Then the four (distinct) terminals/mates from $A(3)$ are mated further into $SW\cup SE$ which is weakly $2$-linked by Lemma \ref{w2linked}. Thus a linkage for $\pi_2, \pi_3$ can be completed in $SW\cup SE$. Assume next that $\pi_4\subset SE$. We apply Lemma \ref{boundary} with NW to obtain a linkage for $\pi_1$ and mates of $s_2,s_3$ through the appropriate boundary of NW into the neighboring quadrants. For $s_j$, $j=2,3$, we set $\psi(s_j)\subset A(3)$ if $t_j\in SW$, and $\psi(s_j)\subset B(3)$ if $t_j\in NE\cup SE$. Using Lemma \ref{boundary} with SE we take a linkage for $\pi_4$ and mate $t_j\in SE$ into the neighboring quadrant where $s_j$ is mated from NW. Then we obtain the linkage for $\pi_2,\pi_3$ in the weakly $2$-linked quadrants SW and/or NE.
The remaining cases, where $\pi_4$ does not belong to any quadrant are listed in Fig.\ref{1P2S}. \begin{figure}[htp]\begin{center} \tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt] \begin{tikzpicture} \draw (0,0) -- (1,0) -- (1,1) -- (0,1) -- (0,0); \draw (1.2,0) -- (2.2,0) -- (2.2,1) -- (1.2,1) -- (1.2,0); \draw (0,1.2) -- (1,1.2) -- (1,2.2) -- (0,2.2) -- (0,1.2); \draw (1.2,1.2) -- (2.2,1.2) -- (2.2,2.2) -- (1.2,2.2) -- (1.2,1.2); \node[txt](NW) at (.5, 1.9){$s_1$ $t_1$}; \node[txt]() at (.5, 1.45){$s_2$ $s_3$}; \node[txt](SW) at (.5, .7){$\emptyset$}; \node[txt](NE) at (1.7, 1.9){$s_4$}; \node[txt](SE) at (1.7, .7){$t_2$ {$t_3$}}; \node[txt]() at (1.7, .35){$t_4$}; \node[txt]() at (1.15, -.6){(I)}; \end{tikzpicture} \hskip.8truecm \begin{tikzpicture} \draw (0,0) -- (2.2,0) -- (2.2,1) -- (0,1) -- (0,0); \draw (0,1.2) -- (1,1.2) -- (1,2.2) -- (0,2.2) -- (0,1.2); \draw (1.2,1.2) -- (2.2,1.2) -- (2.2,2.2) -- (1.2,2.2) -- (1.2,1.2); \node[txt](NW) at (.5, 1.9){$s_1$ $t_1$}; \node[txt]() at (.5, 1.45){$s_2$ $s_3$}; \node[txt](NE) at (1.7, 1.9){$t_2$ $s_4$}; \node[txt](SE) at (1.1, .5){$t_3$ {$t_4$}}; \node[txt]() at (1.15, -.6){(II)}; \end{tikzpicture} \hskip.8truecm \begin{tikzpicture} \draw (0,0) -- (2.2,0) -- (2.2,1) -- (0,1) -- (0,0); \draw (0,1.2) -- (1,1.2) -- (1,2.2) -- (0,2.2) -- (0,1.2); \draw (1.2,1.2) -- (2.2,1.2) -- (2.2,2.2) -- (1.2,2.2) -- (1.2,1.2); \node[txt](NW) at (.5, 1.9){$s_1$ $t_1$}; \node[txt]() at (.5, 1.45){$s_2$ $s_3$}; \node[txt](NE) at (1.7, 1.9){$t_2$ $t_3$}; \node[txt]() at (1.7, 1.45){$s_4$}; \node[txt]() at (1.1, .5){$t_4$}; \node[txt]() at (1.15, -.6){(III)}; \end{tikzpicture} \hskip.8truecm \begin{tikzpicture} \draw (0,0) -- (1,0) -- (1,1) -- (0,1) -- (0,0); \draw (1.2,0) -- (2.2,0) -- (2.2,1) -- (1.2,1) -- (1.2,0); \draw (0,1.2) -- (1,1.2) -- (1,2.2) -- (0,2.2) -- (0,1.2); \draw (1.2,1.2) -- (2.2,1.2) -- (2.2,2.2) -- (1.2,2.2) -- (1.2,1.2); \node[txt](NW) at (.5, 1.9){$s_1$ $t_1$}; \node[txt]() at (.5, 
1.45){$s_2$ $s_3$}; \node[txt](SW) at (.5, .7){$t_4$ }; \node[txt](NE) at (1.7, 1.9){ $s_4$}; \node[txt](SE) at (1.7, .7){ $t_2$ $t_3$}; \node[txt]() at (1.15, -.6){(IV)}; \end{tikzpicture} \end{center} \caption{$\|NW\|=4$, $\pi_1\subset NW$} \label{1P2S} \end{figure} For type (I), Lemma \ref{boundary} is used for SE and for NW as above. Thus we obtain a linkage in NW for $\pi_1$, a linkage in SW for $\pi_2,\pi_3$, and a linkage in NE for $\pi_4$. For type (II), we apply Lemma \ref{boundary} with NW to find a linkage for $\pi_1$ and to mate $s_2$ to $s_2^*\in B(4)\cap NE$ and to mate $s_3$ into $s_3^*\in A(4)\cap SW$. Then Lemma \ref{exit} (ii) is used with NE to complete a linkage in NE for $\pi_2$ and to mate $s_4$ to $s_4^*\in A(4)\cap SE$. The linkage for $\pi_3,\pi_4$ can be completed in $SW\cup SE$ which is weakly $2$-linked, by Lemma \ref{w2linked}. For types (III) and (IV), the linkage for $\pi_2,\pi_3,\pi_4$ will be done by mating the terminals appropriately into $G^*=G-(A(1)\cup A(2)\cup B(1)\cup B(2))\cong P_4\Box P_4$ which is $3$-path-pairable, by Lemma \ref{3pp}.
\begin{figure}[htp]\begin{center} \tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt] \tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt] \tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt] \tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt] \tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt] \tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt] \tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt] \tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100] \begin{tikzpicture} \draw[->,line width=1.2pt] (1,2) -- (1,0) -- (1.9,0); \draw[->,line width=1.2pt] (1,5) -- (1,3.1); \draw[->,line width=1.2pt] (1,3) -- (1,2) -- (1.9,2); \draw[->,line width=1.2pt] (0,4) -- (0,2.9); \draw[->,line width=1.2pt] (0,3) -- (0,2) (0,2) -- (0,1) -- (1.9,1); \draw[->,line width=1.2pt] (3,5) -- (5,5) -- (5, 3.1); \draw[->,line width=1.2pt] (3,4) -- (3,3.1); \draw[->,line width=1.2pt] (4,4) -- (4,3.1); \draw[double,line width=.5pt] (2,0) -- (5,0) -- (5,3) --(2,3)--(2,0); \draw[double,line width=.5pt] (3,0) -- (3,3) (4,0) --(4,3); \draw[double,line width=.5pt] (2,1) -- (5,1) (2,2) --(5,2); \draw[line width=2.2pt] (0,5) -- (0,4) --(2,4); \draw[dashed] (0,5) -- (3,5) (2,4)-- (5,4) (0,3)--(2,3) (0,2)--(1,2) (0,1)--(0,0)--(1,0); \draw[dashed] (2,3) -- (2,5) (3,4)-- (3,5) (4,4)--(4,5); \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1,2,3,4,5} { \node[B] () at (\x,\y) {};} \node[T,label=above:$s_1$](s1) at (0,5){}; \node[T,label=left:$s_2$](s2) at (0,4){}; \node[T,label=above:$s_3$](s3) at (1,5){}; \node[T,label=above:$s_4$](s4) at (3,5){}; \node[T](t1) at (2,4){}; \node[txt]() at (1.7,4.3){$t_1$} ; \node[T](t3) at (4,4){}; \node[txt]() at (4.3,4.3){$t_3$} ; \node[T](t2) at (3,4){};\node[txt]() at (3.3,4.3){$t_2$} ; \node[T](t4) at (1,2){};\node[txt]() at (.7,1.7){$t_4$} ; \node[M,label=left:$s_2^\prime$](s2') at (0,3){} ; \node[txt](t2*) at 
(2.7,2.7){$t_2^*$} ; \node[M](s4*) at (5,3){} ; \node[txt] () at (2.3,.35) {$s_3^{*}$}; \node[M]() at (1,3){} ; \node[M]() at (2,1){} ; \node[M]() at (2,0){} ; \node[M]() at (2,2){} ; \node[M]() at (3,3){} ; \node[M]() at (4,3){} ; \node[txt]() at (2.35,1.4){$s_2^{*}$} ; \node[txt]() at (1.35,2.7){$s_3^\prime$} ; \node[txt]() at (2.35,2.3){$t_4^{*}$} ; \node[txt]() at (3.7,2.7){$t_3^*$} ; \node[txt]() at (4.7,2.7){$s_4^*$} ; \node[txt]() at (4.6,.4){$G^*$} ; \node[txt]() at (.4,4.4){$P_1$} ; \end{tikzpicture} \end{center} \caption{Solution for a pairing of type (III)} \label{B32} \end{figure} First we apply Lemma \ref{exit} (iii) for $NE$ to mate the terminals in $NE$ into distinct vertices of $A(3)$ with mating paths not using edges in $A(3)$. Next we use Lemma \ref{boundary} to obtain a linkage for $\pi_1$ and to mate $s_j$ into $s_j^\prime\in A(3)\cap NW$, for $j=2,3$. If $s_j^\prime\neq (3,3)$, then we extend its mating path into $A(4)\cap SW$. Applying Lemma \ref{exit} (iii) for the terminals/mates in $SW$ we obtain the mates $s_2^{*}, s_3^{*}, t_4^*\in B(3)$. (In case of type (III) it is possible that $t_4\in SE$, in which case we just take $t_4^*=t_4$.) Then the linkage for $\pi_2,\pi_3$, and $\pi_4$ can be completed, since all mating paths leading to $G^*$ are edge disjoint from $G^*$. An example is shown in Fig.\ref{B32}. \end{proof}
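Although the proof above is entirely by hand, a proposed solution for a concrete pairing can be checked mechanically: one only has to verify that the four paths are pairwise edge disjoint and join the prescribed terminals. A minimal sketch in plain Python (the helper names and the sample pairing are ours, not from the paper):

```python
# A linkage verifier for the n x n grid, sketched in plain Python.
# Vertices are (row, col) with 1 <= row, col <= n.

def grid_edges(n=6):
    """Return the edge set of the n x n grid P_n [box] P_n."""
    edges = set()
    for r in range(1, n + 1):
        for c in range(1, n + 1):
            if c < n:
                edges.add(frozenset({(r, c), (r, c + 1)}))
            if r < n:
                edges.add(frozenset({(r, c), (r + 1, c)}))
    return edges

def is_linkage(paths, pairs, edges):
    """True iff path i joins s_i to t_i, every step is a grid edge,
    and no edge is used by two paths (edge disjointness)."""
    used = set()
    for path, (s, t) in zip(paths, pairs):
        if path[0] != s or path[-1] != t:
            return False
        for u, v in zip(path, path[1:]):
            e = frozenset({u, v})
            if e not in edges or e in used:
                return False
            used.add(e)
    return True

# A deliberately easy pairing: four pairs joined along four distinct rows.
pairs = [((i, 1), (i, 6)) for i in range(1, 5)]
paths = [[(i, c) for c in range(1, 7)] for i in range(1, 5)]
print(is_linkage(paths, pairs, grid_edges()))  # True
```

The interesting pairings in the proof are, of course, those where the four paths must cross; the verifier is indifferent to how the paths were found.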
https://arxiv.org/abs/1708.05407
The $6\times 6$ grid is $4$-path-pairable
Let $G=P_6\Box P_6$ be the $6\times 6$ grid, the Cartesian product of two paths on six vertices. Let $T$ be a set of eight distinct vertices of $G$, called terminals, and assume that $T$ is partitioned into four terminal pairs $\{s_i,t_i\}$, $1\leq i\leq 4$. We prove that $G$ is $4$-path-pairable, that is, for every such $T$ there exist in $G$ pairwise edge-disjoint $s_i,t_i$-paths, $1\leq i\leq 4$.
https://arxiv.org/abs/2103.04766
The size of Betti tables of edge ideals arising from bipartite graphs
Let $\operatorname{pd}(I(G))$ and $\operatorname{reg}(I(G))$ respectively denote the projective dimension and the regularity of the edge ideal $I(G)$ of a graph $G$. For any positive integer $n$, we determine all pairs $(\operatorname{pd}(I(G)),\, \operatorname{reg}(I(G)))$ as $G$ ranges over all connected bipartite graphs on $n$ vertices.
\section{Introduction} Let $G$ be a finite simple graph with the vertex set $V(G)=\{x_1,\dots ,x_n\}$. Let $S=\Bbbk[x_1,\dots ,x_n]$ be the polynomial ring in $n$ variables over a field $\Bbbk$. The \textit{edge ideal} of $G$, denoted by $I(G)$, is the monomial ideal generated by the monomials $x_ix_j$ such that $\{x_i,x_j\}$ is an edge of $G$. Edge ideals of bipartite graphs have been studied in the literature for several purposes. Fernández-Ramos and Gimenez \cite{FG} gave a characterization of bipartite graphs whose edge ideal has regularity $3$. When $G$ is an unmixed bipartite graph, Kummini \cite{Ku} described the regularity of $I(G)$ in terms of the induced matching number of $G$ and Kimura \cite{K2} gave a combinatorial description of the projective dimension of $I(G)$ via complete bipartite subgraphs satisfying certain conditions. Van Tuyl \cite{VT} provided a formula for the regularity of the edge ideal of a sequentially Cohen-Macaulay bipartite graph in terms of the induced matching number of the graph. Jayanthan et al. \cite{JNS} computed the regularity of powers of edge ideals for several subclasses of bipartite graphs. Herzog and Hibi \cite{HH_bipartite} classified all bipartite graphs which are Cohen-Macaulay and Van Tuyl and Villarreal \cite{VTV} classified those which are shellable. In a recent article, Hà and Hibi \cite{HaHibi} considered the following problem: Given a graph $G$ on $n$ vertices, what are the possible values of $(\pd(S/I(G)),\, \reg(S/I(G)))$? They determined all such pairs when $\pd(S/I(G))$ attains its minimum possible value $2\sqrt{n}-2$ or when $\reg(S/I(G))$ attains its minimum possible value $1$. Hibi et al. \cite{HKKMVT} determined all tuples consisting of the values of depth, regularity, dimension and the degree of the $h$-polynomial of $S/I(G)$ as $G$ ranges over all Cameron-Walker graphs on $n$ vertices. Similar types of problems have been studied recently in \cite{FKVT, HKM, HKMVT, HMVT}.
In this article, we determine all pairs $(\pd(I(G)),\, \reg(I(G)))$ as $G$ ranges over all connected bipartite graphs on $n$ vertices. To state our main result precisely, for any positive integer $n$ we denote by $\bpt(n)$ the set of connected bipartite graphs on the vertices $\{x_1,\dots ,x_n\}$. We define \[\displaystyle \bpt_{\pd}^{\reg}(n)=\{(\pd(S/I(G)),\reg(S/I(G))): G \in \bpt(n)\} \] which is the set of sizes of Betti tables of $S/I(G)$ as $G$ ranges over all connected bipartite graphs on $n$ vertices. Our main result is then the following theorem: \begin{theorem}[Theorem~\ref{thm:main theorem}] Let $n\geq 4$ be an integer. Then \[\displaystyle \bpt_{\pd}^{\reg}(n)=\{(p,r)\in \mathbb{Z}^2 : 1\leq r < \Big\lfloor \frac{n}{2}\Big\rfloor, \, \Big\lceil \frac{n}{2} \Big\rceil \leq p \leq n-2 \}\cup \{(n-1,1)\} \cup A_n \] where $A_n=\emptyset$ if $n$ is even and $A_n=\{(\lceil n/2 \rceil, \lfloor n/2 \rfloor)\}$ if $n$ is odd. \end{theorem} We make use of the graph parameters (induced) matching number, co-chordal cover number and the maximum size of a minimal vertex cover to bound the regularity and the projective dimension. Along the way, we describe all the pairs $(\pd(I(G)),\, \reg(I(G)))$ as $G$ ranges over all trees on $n$ vertices. \section{Preliminaries} \subsection{Graph theory background} Given a finite simple graph $G$, we denote by $V(G)$ and $E(G)$ respectively the vertex set and the edge set of $G$. We say a vertex $u$ is a \textit{neighbor} of (or \textit{adjacent} to) another vertex $v$ if $\{u,v\}\in E(G)$. We denote by $N(u)$ the set consisting of all neighbors of $u$ in $G$. We define $N[u]$ by $N[u]=N(u)\cup \{u\}$. We call a vertex \textit{isolated} if it has no neighbors. A graph $H$ is called a \textit{subgraph} of $G$ if $V(H)\subseteq V(G)$ and $E(H)\subseteq E(G)$. A subgraph $H$ of $G$ is called an \textit{induced subgraph} if for any two vertices $u,v$ in $H$, $\{u,v\}\in E(H)$ if and only if $\{u,v\}\in E(G)$.
If $U$ is a subset of $V(G)$, we define the induced subgraph of $G$ on $U$ as the subgraph whose vertex set is $U$ and whose edge set is $\{\{x,y\} : x,y\in U \text{ and } \{x,y\} \in E(G)\}$. Moreover, for any $W\subseteq V(G)$, we denote by $G-W$ the induced subgraph of $G$ on $V(G)\setminus W$. To simplify the notation, if $W=\{x\}$ consists of a single vertex, then we write $G-x$ for $G-W$. The \textit{complement} of $G$, denoted by $G^c$, is the graph with the same vertex set as $G$ such that $\{x,y\}\in E(G^c)$ if and only if $\{x,y\}\notin E(G)$. A graph $G$ is called \textit{connected} if for every pair of vertices $x$ and $y$, there is a path in $G$ that starts at $x$ and ends at $y$. A maximal connected subgraph of $G$ is called a \textit{connected component} of $G$. We say $G$ is a \textit{forest} if $G$ contains no cycles. A connected forest is called a \textit{tree}. It is well-known that every tree on $n$ vertices has exactly $n-1$ edges. An \textit{independent set} in $G$ is a subset of vertices which contains no edge of $G$. A \textit{bipartite} graph is a graph that contains no odd cycles. The vertex set of a bipartite graph can be partitioned into two independent sets. A bipartite graph $G$ with vertex bipartition $V(G)=A\cup B$ is called a \textit{complete bipartite graph} if every vertex in $A$ is adjacent to every vertex in $B$. A graph is called \textit{chordal} if it has no induced cycles of length greater than three. A graph $G$ is called \textit{co-chordal} if $G^c$ is chordal. A \textit{matching} of $G$ is a collection of edges which are pairwise disjoint. The \textit{matching number} of $G$, denoted by $\mat(G)$, is defined by \[\mat(G)=\max\{|M|: M \text{ is a matching of } G\}.\] A matching $M=\{e_1,\dots ,e_k\}$ of $G$ is called an \textit{induced matching} of $G$ if the induced subgraph of $G$ on $\cup_{i=1}^ke_i$ has exactly $k$ edges.
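On very small graphs the matching parameters defined here can be computed by exhaustive search, directly from the definitions. A brute-force sketch in plain Python (the function names are ours; the search is exponential in the number of edges):

```python
# Brute-force computation of mat(G) and the maximum size of an induced
# matching, straight from the definitions; only feasible for tiny graphs.
from itertools import combinations

def is_matching(M):
    """Edges in M are pairwise vertex disjoint."""
    verts = [v for e in M for v in e]
    return len(verts) == len(set(verts))

def is_induced_matching(M, edges):
    """M is a matching and the subgraph induced on its vertices
    has exactly |M| edges."""
    U = {v for e in M for v in e}
    induced = [e for e in edges if set(e) <= U]
    return is_matching(M) and len(induced) == len(M)

def mat(edges):
    return max(k for k in range(len(edges) + 1)
               for M in combinations(edges, k) if is_matching(M))

def indm(edges):
    return max(k for k in range(len(edges) + 1)
               for M in combinations(edges, k)
               if is_induced_matching(M, edges))

# The path on 4 vertices has matching number 2, but its two end edges
# do not form an induced matching: the middle edge survives in the
# induced subgraph.
P4 = [(1, 2), (2, 3), (3, 4)]
print(mat(P4), indm(P4))  # 2 1
```

The gap between the two parameters on $P_4$ is exactly the gap between the regularity bounds discussed later.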
The \textit{induced matching number} of $G$, denoted by $\indm(G)$, is the maximum cardinality of an induced matching of $G$. A \textit{perfect matching} of $G$ is a matching $M$ such that each vertex of $G$ belongs to some edge in $M$. The \textit{co-chordal cover number} of $G$, denoted by $\coc(G)$, is the minimum number of co-chordal subgraphs required to cover the edges of $G$, i.e., \[\displaystyle \coc(G)=\min\{r: E(G)=\bigcup_{i=1}^rE(H_i), \text{ each }H_i \text{ is a co-chordal subgraph of } G\}.\] For any positive integer $n$, we denote $\{1,\dots ,n\}$ by $[n]$. A \textit{vertex cover} $C$ of a graph $G$ is a subset of vertices such that every edge of $G$ contains a vertex from $C$. A vertex cover is called \textit{minimal} if no proper subset of it is a vertex cover. The maximum cardinality of a minimal vertex cover of $G$ is denoted by $\tau_{\max}(G)$. \subsection{Algebra background} Let $G$ be a graph with the vertex set $V(G)=\{x_1,\dots ,x_n\}$. Let $\Bbbk$ be a field and let $S=\Bbbk[x_1,\dots ,x_n]$ be the polynomial ring in $n$ variables over $\Bbbk$. The \textit{edge ideal} of $G$, denoted by $I(G)$, is the monomial ideal defined by \[I(G)=(x_ix_j : \{x_i,x_j\} \text{ is an edge of } G).\] Let $M$ be a finitely generated graded $S$-module. Then $M$ has a minimal graded free resolution of the form \begin{equation*}\label{eq:resolution} 0 \longrightarrow \bigoplus_{j \in \mathbb{Z}} S(-j)^{b_{p,j}(M)} \longrightarrow \cdots\longrightarrow \bigoplus_{j \in \mathbb{Z}}S(-j)^{b_{0,j}(M)} \longrightarrow M \longrightarrow 0 . \end{equation*} The numbers $b_{i,j}(M)$ are called the \textit{graded Betti numbers} of $M$. 
The \textit{projective dimension} of $M$, denoted by $\pd(M)$, is defined by \[\pd(M)=\max\{i: b_{i,j}(M)\neq 0 \text{ for some } j\}.\] The \textit{(Castelnuovo-Mumford) regularity} of $M$, denoted by $\reg(M)$, is defined by \[\reg(M)=\max\{j-i: b_{i,j}(M)\neq 0 \}.\] \begin{theorem}\cite[Corollary~3.8]{BBHsurvey}\label{thm:regularity formula connected components} If $G$ is a graph with connected components $G_1,\dots ,G_r$, then \[\reg(S/I(G))=\sum_{i=1}^{r}\reg(S/I(G_i)).\] \end{theorem} The following bounds on the regularity and projective dimension of edge ideals are well-known, see for example \cite[Lemma~3.1]{DHS} and \cite[Lemma~3.2]{DS}. \begin{lemma}\cite{DHS, DS}\label{lem: key lemma} For any vertex $x$ of a graph $G$, the short exact sequence \[0 \rightarrow \frac{S}{I(G):(x)}(-1) \rightarrow \frac{S}{I(G)} \rightarrow \frac{S}{I(G)+(x)} \rightarrow 0\] gives the following bounds for the regularity and projective dimension: \begin{enumerate} \item $\reg(S/I(G))\leq \max\{\reg(S/I(G-x)), \, \reg(S/I(G-N[x]))+1\}$, \item $\pd(S/I(G))\leq \max\{\pd(S/I(G-x))+1, \, \pd(S/I(G-N[x]))+|N(x)|\}$. \end{enumerate} \end{lemma} The \textit{Stanley-Reisner ideal} of a simplicial complex $\Delta$ is the squarefree monomial ideal generated by the monomials corresponding to non-faces of $\Delta$. The following theorem of Hochster \cite{Hoc} provides a formula for the graded Betti numbers of Stanley-Reisner ideals. \begin{theorem}[Hochster’s Formula]\cite{Hoc}\label{thm:hoch} Let $I_{\Delta}$ be the Stanley-Reisner ideal of a simplicial complex $\Delta$. If $i\geq 0$ and $u$ is a squarefree monomial, then \[b_{i,u}(I_{\Delta})=\dim_\Bbbk \tilde{H}_{\deg u-i-2}(\Delta[u]; \Bbbk) \] where $\Delta[u]=\{\sigma\in \Delta: \sigma \subseteq U\}$ and $U$ consists of those vertices that correspond to the variables dividing $u$. \end{theorem} If $G$ is a graph, the \textit{independence complex} of $G$ is a simplicial complex whose faces are independent sets of $G$. 
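For a small illustration of these invariants, if $G$ is the path $x_1-x_2-x_3$, then $I(G)=(x_1x_2,\, x_2x_3)$, and the syzygy $x_3\cdot (x_1x_2)-x_1\cdot (x_2x_3)=0$ gives the minimal graded free resolution \[0 \longrightarrow S(-3) \longrightarrow S(-2)^{2} \longrightarrow S \longrightarrow S/I(G) \longrightarrow 0.\] Hence $\pd(S/I(G))=2$ and $\reg(S/I(G))=\max\{2-1,\, 3-2\}=1$. This agrees with $\tau_{\max}(G)=2$ (the minimal vertex covers are $\{x_2\}$ and $\{x_1,x_3\}$) and $\indm(G)=1$, as predicted by Theorems~\ref{thm: forest pd} and~\ref{thm: forest reg} below.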
The edge ideal $I(G)$ of $G$ is the Stanley-Reisner ideal of the independence complex of $G$. By a theorem of Terai \cite{T}, $\pd(S/I(G))$ is equal to the regularity of the Alexander dual of $I(G)$. Therefore, the projective dimension problem for edge ideals is equivalent to the regularity problem for so-called cover ideals. The next theorem can be deduced from \cite[Corollary~3.3]{MV} or \cite[Corollary~8.2.14]{HH} both of which are stated in the more general setting of monomial ideals but in dual terms. \begin{theorem}\cite{HH, MV}\label{thm: pd lower bound for any graph} For any graph $G$, $\pd(S/I(G))\geq \tau_{\max}(G)$. Moreover, the equality holds when $S/I(G)$ is sequentially Cohen-Macaulay. \end{theorem} Since forests are known to be sequentially Cohen-Macaulay (see \cite{F} or \cite{FVT}), we have an exact formula for $\pd(S/I(G))$ when $G$ is a forest. This formula was proved independently by several authors in the literature. \begin{theorem}\cite{K,Z}\label{thm: forest pd} If $G$ is a forest, then $\pd(S/I(G))=\tau_{\max}(G)$. \end{theorem} The following lower bound was also proved several times in the literature: \begin{theorem}\cite{HVT, Kat, K, Z} \label{thm: reg lower bound for any graph} For any graph $G$, $\reg(S/I(G))\geq \indm(G)$. \end{theorem} When $G$ is a forest, the regularity can be described by $\indm(G)$. \begin{theorem}\cite{HVT, K, Z}\label{thm: forest reg} If $G$ is a forest, then $\reg(S/I(G))=\indm(G)$. \end{theorem} Hà and Van Tuyl \cite{HVT} gave an upper bound for the regularity of edge ideal of any graph via the matching number of the graph: \begin{theorem}\cite{HVT}\label{thm: matching number upper bound} For any graph $G$, $\reg(S/I(G))\leq \mat(G)$. \end{theorem} Woodroofe \cite{W} improved the upper bound in the previous theorem by replacing the matching number with the co-chordal cover number: \begin{theorem}\cite{W}\label{thm: co-chord number upper bound} For any graph $G$, $\reg(S/I(G))\leq \coc(G)$. 
\end{theorem} \section{Edge Ideals of Bipartite Graphs} In the following lemma, we provide a rough estimate for the possible values of regularity and projective dimension of edge ideals of bipartite graphs. \begin{lemma}\label{lem: coarse bound} Let $G$ be a bipartite graph on $n\geq 2$ vertices which has no isolated vertices. Then \begin{enumerate} \item $\lceil n/2 \rceil \leq \pd(S/I(G)) \leq n-1$, \item $1\leq \reg(S/I(G)) \leq \lfloor n/2 \rfloor$. \end{enumerate} \end{lemma} \begin{proof} Since $I(G)$ is generated in degree two, it is clear that $\pd(S/I(G))\leq n-1$ and $\reg(S/I(G))\geq 1$. Let $V(G)=A\cup B$ be a bipartition of the vertex set of $G$. Then either $A$ or $B$ has cardinality at least $\lceil n/2 \rceil$. Since $G$ has no isolated vertices, both $A$ and $B$ are minimal vertex covers. Hence $\tau_{\max}(G)\geq \lceil n/2 \rceil$ and $\pd(S/I(G))\geq \lceil n/2 \rceil$ follows from Theorem~\ref{thm: pd lower bound for any graph}. Lastly, $\reg(S/I(G)) \leq \lfloor n/2 \rfloor$ follows from Theorem~\ref{thm: matching number upper bound} as $\mat(G)$ cannot exceed the minimum of the cardinalities of $A$ and $B$. \end{proof} We will determine when the regularity upper bound in Lemma~\ref{lem: coarse bound} can be realized by connected bipartite graphs. It turns out that when $n$ is an even integer greater than two, no connected graph attains the regularity value in the upper bound. On the other hand, when $n$ is odd, we will see that the regularity upper bound is sharp and, in that case, the projective dimension is uniquely determined. \begin{theorem}\label{thm:even case} Let $G$ be a connected graph on $n\geq 4$ vertices where $n$ is even. If the matching number of $G$ is $n/2$, then $\coc(G)< n/2$. \end{theorem} \begin{proof} Let $\{e_1,\dots ,e_{n/2}\}$ be a matching of $G$. Then it is a perfect matching. Since $G$ is connected, we may assume that there is an edge $e$ such that $e\cap e_1 \neq\emptyset$ and $e\cap e_2\neq \emptyset$.
Let $H_1$ denote the induced subgraph of $G$ on $e_1\cup e_2$. Furthermore, for each $2\leq i \leq n/2-1$ let $H_i$ be the subgraph of $G$ which consists of those edges $f\in E(G)$ with $f\cap e_{i+1}\neq \emptyset$. Then each $H_i$ is co-chordal and $E(G)=E(H_1)\cup \dots \cup E(H_{n/2-1})$. Hence $\coc(G)\leq n/2-1$. \end{proof} \begin{corollary}\label{cor: even regularity} If $G$ is a connected graph on $n\geq 4$ vertices where $n$ is even, then $\reg(S/I(G))<n/2$. \end{corollary} \begin{proof} By Theorem~\ref{thm: matching number upper bound} we have $\reg(S/I(G))\leq \mat(G)\leq n/2$. Assume for a contradiction that $\reg(S/I(G))= n/2$. Then $\mat(G)=n/2$. From Theorem~\ref{thm: co-chord number upper bound} and Theorem~\ref{thm:even case} it follows that $\reg(S/I(G))\leq \coc(G)<n/2$, a contradiction. \end{proof} We can actually classify all graphs $G$ on an even number of vertices for which $\reg(S/I(G))$ is equal to half the number of vertices: \begin{corollary}\label{cor: even disconnected regularity} Let $G$ be a graph on $n$ vertices where $n$ is even. Then $\reg(S/I(G))=n/2$ if and only if $G$ consists of $n/2$ disjoint edges. \end{corollary} \begin{proof} Let us assume that $n\geq 4$ as the statement is clear otherwise. If $G$ consists of $n/2$ disjoint edges, then $\reg(S/I(G))=n/2$ follows from Theorem~\ref{thm:regularity formula connected components}. To show the converse, let $\reg(S/I(G))=n/2$. Then by Theorem~\ref{thm: matching number upper bound} the matching number of $G$ is $n/2$ and $G$ has a perfect matching. By Corollary~\ref{cor: even regularity} the graph $G$ is disconnected. Then every connected component of $G$ has a perfect matching. Let $G_1,\dots,G_r$ with $r\geq 2$ be the connected components of $G$. Let $|V(G_i)|=2k_i$ for each $i\in [r]$. Assume for a contradiction that one of the connected components has at least two edges. We may assume that $k_1\geq 2$. Then by Corollary~\ref{cor: even regularity} we have $\reg(S/I(G_1))<k_1$.
Moreover, $\reg(S/I(G_i))\leq \mat(G_i)=k_i$ for each $i=2,\dots ,r$. Using Theorem~\ref{thm:regularity formula connected components} we get \[\reg(S/I(G))=\sum_{i=1}^{r}\reg(S/I(G_i)) < \sum_{i=1}^{r}k_i =n/2\] which is a contradiction. \end{proof} We can now investigate the regularity upper bound in Lemma~\ref{lem: coarse bound} when $n$ is an odd integer. The following theorem classifies all bipartite graphs $G$ on $n$ vertices with $\reg(S/I(G))=(n-1)/2$. \begin{theorem}\label{thm: main theorem odd case} Let $G$ be a bipartite graph on $n$ vertices where $n$ is an odd number. Then $\reg(S/I(G))=\lfloor n/2 \rfloor$ if and only if $\indm(G)=\lfloor n/2 \rfloor$. \end{theorem} \begin{proof} Let $n=2k+1$. If $\indm(G)=k$, then $\reg(S/I(G))\geq k$ by Theorem~\ref{thm: reg lower bound for any graph}, while $\reg(S/I(G))\leq \mat(G)\leq \lfloor n/2 \rfloor =k$ by Theorem~\ref{thm: matching number upper bound}; hence $\reg(S/I(G))=k$. Now, suppose that $\reg(S/I(G))=k$. Since $\mat(G)\geq \reg(S/I(G))$ by Theorem~\ref{thm: matching number upper bound}, there exists a matching $M=\{e_1,\dots ,e_k\}$ of $G$. Let $x$ be the vertex of $G$ which does not belong to any edge in $M$. We may assume that $k\geq 2$ because $\indm(G)=1$ is clear when $k=1$. We consider two cases: \textit{Case 1:} Suppose that $x$ is an isolated vertex of $G$. Then \[k=\reg(S/I(G))=\reg(S/I(G-x))\] and by Corollary~\ref{cor: even disconnected regularity} it follows that $M$ is an induced matching of $G-x$. Hence $M$ is an induced matching of $G$. \textit{Case 2:} Suppose that $\{x,y\}$ is an edge of $G$; after relabeling the matching if necessary, we may assume that $e_k=\{y,z\}$. By Lemma~\ref{lem: key lemma} \[\reg(S/I(G))\leq \max\{\reg(S/I(G-y)),\, \reg(S/I(G-N[y]))+1\}.\] Hence either $\reg(S/I(G-y))\geq k$ or $\reg(S/I(G-N[y]))\geq k-1$. Observe that $G-y$ is a bipartite graph such that one side of the bipartition has $k-1$ vertices. This implies that the matching number of $G-y$ is at most $k-1$. Therefore, $\reg(S/I(G-y))< k$ follows from Theorem~\ref{thm: matching number upper bound}. Thus we must have $\reg(S/I(G-N[y]))\geq k-1$.
Similarly, since $\mat(G-N[y])\leq k-1$, it follows from Theorem~\ref{thm: matching number upper bound} that \[\reg(S/I(G-N[y]))= k-1=\mat(G-N[y]).\] Hence $N(y)=\{x,z\}$: indeed, $\mat(G-N[y])=k-1$ forces $|V(G-N[y])|=2k-|N(y)|\geq 2(k-1)$, so $|N(y)|\leq 2$, while $x,z\in N(y)$. Corollary~\ref{cor: even disconnected regularity} implies that $\{e_1,\dots ,e_{k-1}\}$ is an induced matching of $G-N[y]$ and thus it is an induced matching of $G$. If $y$ is the only neighbor of $x$, then $\{e_1,\dots, e_{k-1}, \{x,y\}\}$ is an induced matching of $G$ and nothing is left to show. Suppose that $x$ has at least two neighbors, say $y$ and $u$. Without loss of generality, we may assume that $e_{k-1}=\{u,v\}$. By Lemma~\ref{lem: key lemma} \[\reg(S/I(G))\leq \max\{\reg(S/I(G-x)),\, \reg(S/I(G-N[x]))+1\}.\] Hence either $\reg(S/I(G-x))\geq k$ or $\reg(S/I(G-N[x]))\geq k-1$. Observe that $G-N[x]$ is a bipartite graph such that one side of the bipartition has at most $k-2$ vertices. This implies that the matching number of $G-N[x]$ is at most $k-2$. Therefore, $\reg(S/I(G-N[x]))< k-1$ by Theorem~\ref{thm: matching number upper bound}. Thus we must have $\reg(S/I(G-x))\geq k$. In fact, $\reg(S/I(G-x))=k$ because the matching number of $G-x$ is equal to $k$. Then Corollary~\ref{cor: even disconnected regularity} implies that $G-x$ consists of $k$ disjoint edges, so $M$ is an induced matching of $G-x$ and thus of $G$. \end{proof} \begin{remark} In Theorem \ref{thm: main theorem odd case} the bipartite assumption cannot be dropped. Indeed, if $G$ is a cycle graph of length $5$, then $\reg(S/I(G))=2$ but $\indm(G)=1$. \end{remark} \begin{corollary}\label{cor: odd regularity} Let $G$ be a connected bipartite graph on $n=2k+1$ vertices such that $k\geq 1$. Suppose that $\reg(S/I(G))=k$. Then $\pd(S/I(G))=k+1$. \end{corollary} \begin{proof} By Theorem \ref{thm: main theorem odd case} the induced matching number of $G$ is $k$. Let $M=\{e_1,\dots ,e_k\}$ be an induced matching of $G$. Let $x$ be the vertex of $G$ that does not belong to any edge in $M$.
Then since $G$ is connected, for every $i\in [k]$, the vertex $x$ is adjacent to exactly one endpoint of $e_i$. Hence $G$ is a tree with $\tau_{\max}(G)=k+1$ and the proof is complete because of Theorem~\ref{thm: forest pd}. \end{proof} The next result describes all connected bipartite graphs for which the projective dimension upper bound in Lemma~\ref{lem: coarse bound} can be realized. \begin{proposition}\label{prop: pd max value} Let $G$ be a connected bipartite graph on $n\geq 2$ vertices. Then $\pd(S/I(G))=n-1$ if and only if $G$ is a complete bipartite graph. Moreover, in such case, $\reg(S/I(G))=1$. \end{proposition} \begin{proof} If $G$ is a complete bipartite graph, then $\pd(S/I(G))=n-1$ and $\reg(S/I(G))=1$, as was proved in \cite{J}. Now, suppose that $\pd(S/I(G))=n-1$. Since $G$ is connected, $I(G)\neq (0)$. Then $n-2=\pd(S/I(G))-1=\pd(I(G))$. Then there exists a squarefree monomial $u$ such that $b_{n-2,u}(I(G))\neq 0$. Since $I(G)$ is generated in degree two, the degree of $u$ must be $n$. Theorem~\ref{thm:hoch} implies $\dim_\Bbbk\tilde{H}_{0}(\Delta; \Bbbk)\neq 0$ where $\Delta$ is the independence complex of $G$. Then $\Delta$ is disconnected. Let $V(G)=A\cup B$ be a bipartition of the vertex set of $G$. Since $G$ has no isolated vertices, both $A$ and $B$ are facets of $\Delta$. Assume for a contradiction that $G$ is not complete bipartite. Then there exist $a\in A$ and $b\in B$ such that $\{a,b\}\notin E(G)$. We now show that $\Delta$ is connected. First, observe that $A,\{a,b\},B$ is a chain from $A$ to $B$. Let $\sigma$ and $\tau$ be two facets of $\Delta$. Since both $A$ and $B$ are facets, we may assume that $\sigma \cap A\neq \emptyset$ and $\tau \cap A\neq \emptyset$ (if, say, $\sigma\cap A=\emptyset$, then $\sigma\subseteq B$ forces $\sigma=B$, and $B$ is connected to $A$ through the face $\{a,b\}$). Then $\sigma, A, \tau$ is a chain from $\sigma$ to $\tau$ which shows that $\Delta$ is connected, a contradiction.
\end{proof} Now that we have determined when the upper bounds in Lemma~\ref{lem: coarse bound} can be attained by connected bipartite graphs, our next goal is to show that for any integers $p$ and $r$ with $\lceil n/2 \rceil \leq p <n-1$ and $1\leq r <\lfloor n/2 \rfloor$ there exists a connected bipartite graph with $\pd(S/I(G))=p$ and $\reg(S/I(G))=r$. We will show the existence of such a graph in two steps (Theorem~\ref{thm: bipartite construction} and Theorem~\ref{thm: trees (r,p) description}) as our construction depends on whether $p+r$ exceeds $n$ or not. \begin{theorem}\label{thm: bipartite construction} Let $n, p$ and $r$ be integers with $3\leq r \leq n/2-1$ and $n-r<p<n-1$. Then there exists a connected bipartite graph $G$ on $n$ vertices such that $\reg(S/I(G))=r$ and $\pd(S/I(G))=p$. \end{theorem} \begin{proof} Since $r\leq n/2-1$ there exists an integer $t\geq 0$ such that $n=2r+2+t$. Moreover, $n-r<p$ implies that $r+t+3\leq p$. Thus, we may assume that $p=a+r+t$ for some $a\geq 3$. Note that $a\leq r$ because $p<n-1$. Let $G$ be the graph with the vertex set \[V(G)=\{u_1,\dots,u_r\}\cup \{v_1,\dots,v_r\}\cup \{x,y,z_1,\dots ,z_t\} \] and the edge set \[E(G)=\{\{u_i,v_i\}: i\in [r]\} \cup \{\{x,v_i\}: i\in[a]\}\cup \{\{y,u_i\}: i\in [r]\} \cup\{\{y,z_i\}: i\in[t]\}.\] One can easily see that $G$ is a connected bipartite graph on $n$ vertices. We will now show that $G$ possesses the properties stated in the theorem. Observe that $\{\{u_i,v_i\}: i\in [r]\}$ is an induced matching of $G$. Therefore $\reg(S/I(G))\geq \indm(G)\geq r$ by Theorem~\ref{thm: reg lower bound for any graph}. On the other hand, by Lemma~\ref{lem: key lemma} we have \[\reg(S/I(G))\leq \max\{\reg(S/I(G-x)),\, \reg(S/I(G-N[x]))+1\}.\] Observe that $G-x$ is a tree with $\indm(G-x)=r$ and $G-N[x]$ is a tree with $\indm(G-N[x])=\max\{1,r-a\}$. Therefore, by Theorem~\ref{thm: forest reg} we get \[\reg(S/I(G))\leq \max\{r,\, \max\{1, r-a\}+1\}\leq r \] as desired.
To evaluate the projective dimension, first observe that by Theorem~\ref{thm: pd lower bound for any graph} \[\pd(S/I(G))\geq \tau_{\max}(G)\geq a+r+t.\] On the other hand, by Lemma~\ref{lem: key lemma} we have \[\pd(S/I(G))\leq \max\{\pd(S/I(G-N[y]))+|N(y)|,\, \pd(S/I(G-y))+1\}.\] Moreover, $G-y$ is a forest with $\tau_{\max}(G-y)=r+1$ and $G-N[y]$ is a forest with $\tau_{\max}(G-N[y])=a$. Since $y$ has exactly $r+t$ neighbors and $a\geq 3$, it follows from Theorem~\ref{thm: forest pd} that \[\pd(S/I(G))\leq \max\{a+r+t,\, r+2\} \leq a+r+t\] which completes the proof. \end{proof} \begin{remark}\label{rk: minimality} Let $C$ be a minimal vertex cover of $G$. Then by the minimality, for every $v\in C$, there exists an edge $e$ in $G$ such that $e\cap C=\{v\}$. \end{remark} \begin{remark}\label{rk: forest max number of edges} If $H$ is a forest on $n$ vertices, then it has at most $n-1$ edges. \end{remark} \begin{lemma}\label{lem: trees lemma} Let $G$ be a tree on $n$ vertices with $\indm(G)=r$ and $\tau_{\max}(G)=p$. Then $p\leq n-r$. \end{lemma} \begin{proof} Assume for a contradiction that there exists a minimal vertex cover $C$ of cardinality at least $n-r+1$. Let $M=\{e_1,\dots ,e_r\}$ be an induced matching and let $0\leq a\leq r$ be such that, after relabeling, $\cup_{i=1}^ae_i\subseteq C$ and $e_j \not\subseteq C$ for each $j=a+1,\dots ,r$. Since $C$ contains at least $n-r+1$ vertices, it follows that $a\geq 1$. Let $U=V(G)\setminus \cup_{i=1}^re_i$. For each $i\in[a]$, let $e_i=\{x_i,y_i\}$. By Remark~\ref{rk: minimality}, for each $i\in [a]$ there exist $u_i,v_i\in U$ such that $\{x_i,u_i\}, \{y_i,v_i\}\in E(G)$ and $u_i,v_i\notin C$ (note that $u_i,v_i\in U$ because $M$ is an induced matching). We claim that $|\cup_{i=1}^a\{u_i,v_i\}|\geq a+1$. Let $H$ be the induced subgraph of $G$ on the vertices $(\cup_{i=1}^ae_i) \cup (\cup_{i=1}^a\{u_i,v_i\})$. Since $H$ is a forest, $3a \leq |E(H)|\leq |V(H)|-1$ by Remark~\ref{rk: forest max number of edges}. This proves the claim as $H$ has at least $3a+1$ vertices. Now, we conclude that $U\cap C$ has at most $n-2r-a-1$ elements.
On the other hand, $(V(G)\setminus U)\cap C$ has exactly $r+a$ elements. Thus $C$ has at most $n-r-1$ elements, a contradiction. \end{proof} Let $\tree(n)$ denote the set of all trees on the vertices $\{x_1,\dots ,x_n\}$. We define \[\displaystyle \tree_{\pd}^{\reg}(n)=\{(\pd(S/I(G)),\reg(S/I(G))): G \in \tree(n)\} \] which consists of all sizes of Betti tables of $S/I(G)$ as $G$ ranges over all trees on $n$ vertices. \begin{theorem}\label{thm: trees (r,p) description} Let $n\geq 4$ be an integer. Then \[\tree^{\reg}_{\pd}(n)=\{(p,r)\in \mathbb{Z}^2 : 1\leq r < n/2, \, \lceil n/2 \rceil \leq p \leq n-r \}. \] \end{theorem} \begin{proof} By Lemma~\ref{lem: coarse bound}, Corollary~\ref{cor: even regularity}, and Lemma~\ref{lem: trees lemma} we have \[\tree^{\reg}_{\pd}(n)\subseteq \{(p,r)\in \mathbb{Z}^2 : 1\leq r < n/2, \, \lceil n/2 \rceil \leq p \leq n-r \}. \] To show the equality, let $1\leq r < n/2$ and $\lceil n/2 \rceil \leq p \leq n-r$ be fixed. By Theorem~\ref{thm: forest pd} and Theorem~\ref{thm: forest reg} it suffices to find a tree $G$ on $n$ vertices with $\indm(G)=r$ and $\tau_{\max}(G)=p$. Since $n> 2r$ we may assume that $n=2r+a$ for some $a\geq 1$. Then since $p\leq n-r$ we obtain $p-a\leq r$. So, we may also assume that $r=p-a+b$ for some $b\geq 0$. Since $p\geq \lceil n/2 \rceil$ we get $p-r\geq 1$. Hence $a-b\geq 1$. Let $t=a-b-1$. Let $G$ be the graph on the vertex set \[V(G)=\{u_1,\dots, u_r,v_1,\dots,v_r\} \cup \{x,y_1,\dots,y_b\}\cup\{z_1,\dots ,z_t\}\] and the edge set \[E(G)=\{\{u_i,v_i\}: i\in [r]\}\cup \{\{x,v_i\}: i\in[r]\}\cup \{\{x,y_i\}:i\in [b]\} \cup \{\{v_r,z_i\}: i\in [t]\}.\] It is clear that $G$ is a tree on $n$ vertices. It is not hard to see that $\{\{u_i,v_i\}:i\in [r]\}$ is an induced matching of maximum cardinality. Moreover, $\{u_1,\dots, u_r, x, z_1,\dots, z_t\}$ is a minimal vertex cover of cardinality $p$. We will now show that $\tau_{\max}(G)=p$. Let $C$ be a minimal vertex cover of $G$.
We consider the following cases. \textit{Case 1}: Suppose that $v_r\notin C$. Then $\{u_r,z_1,\dots ,z_t,x\}\subseteq C$. Moreover, $y_i\notin C$ for every $i\in [b]$. This implies that for each $i\in [r]$, either $u_i\in C$ or $v_i\in C$, but not both. Hence $|C|=p$. \textit{Case 2}: Suppose that $v_r \in C$. Then $u_r\notin C$ and $z_i\notin C$ for each $i\in [t]$. We consider two cases: \textit{Case 2.1}: Suppose that $x\in C$. Then $y_i\notin C$ for each $i\in [b]$. This implies that for each $i\in [r]$, either $u_i\in C$ or $v_i\in C$, but not both. Hence $|C|=r+1\leq p$ as desired. \textit{Case 2.2}: Suppose that $x\notin C$. Then $C=\{v_1,\dots,v_r, y_1,\dots ,y_b\}$. Thus we get \[|C|=r+b=r+(r-p+a)=2r-p+a=2r-p+(n-2r)=n-p \leq p\] where the last inequality follows from the assumption that $\lceil n/2 \rceil \leq p$. \end{proof} For any positive integer $n$ let $\bpt(n)$ denote the set of connected bipartite graphs on the vertices $\{x_1,\dots ,x_n\}$. We define \[\displaystyle \bpt_{\pd}^{\reg}(n)=\{(\pd(S/I(G)),\reg(S/I(G))): G \in \bpt(n)\}. \] Finally, we arrive at our main result: \begin{theorem}\label{thm:main theorem} Let $n\geq 4$ be an integer. Then \[\displaystyle \bpt_{\pd}^{\reg}(n)=\{(p,r)\in \mathbb{Z}^2 : 1\leq r < \Big\lfloor \frac{n}{2}\Big\rfloor, \, \Big\lceil \frac{n}{2} \Big\rceil \leq p \leq n-2 \}\cup \{(n-1,1)\} \cup A_n \] where $A_n=\emptyset$ if $n$ is even and $A_n=\{(\lceil n/2 \rceil, \lfloor n/2 \rfloor)\}$ if $n$ is odd. \end{theorem} \begin{proof} Keeping Lemma~\ref{lem: coarse bound} in mind, first observe that the set $A_n$ is determined by Corollary~\ref{cor: even regularity} and Corollary~\ref{cor: odd regularity}. By Proposition~\ref{prop: pd max value}, $(n-1,1)$ is the only pair in $\bpt_{\pd}^{\reg}(n)$ of the form $(n-1,r)$. The rest of the proof follows from Theorem~\ref{thm: bipartite construction} and Theorem~\ref{thm: trees (r,p) description}. \end{proof}
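The tree construction in the proof above can be checked by brute force on small instances: by Theorem~\ref{thm: forest pd} and Theorem~\ref{thm: forest reg}, for trees it suffices to verify the two combinatorial invariants $\indm(G)=r$ and $\tau_{\max}(G)=p$. Below is a minimal exhaustive-search sketch in Python; all function names are ours, and edges are encoded as pairs of vertex labels.

```python
from itertools import combinations

def is_cover(C, edges):
    """True if the vertex set C covers every edge."""
    return all(u in C or v in C for (u, v) in edges)

def tau_max(vertices, edges):
    """Largest size of a minimal vertex cover, by exhaustive search."""
    best = 0
    for k in range(1, len(vertices) + 1):
        for C in map(set, combinations(vertices, k)):
            # minimal: removing any single vertex breaks the cover
            if is_cover(C, edges) and all(not is_cover(C - {v}, edges) for v in C):
                best = max(best, k)
    return best

def indm(vertices, edges):
    """Induced matching number, by exhaustive search."""
    best = 0
    for k in range(1, len(edges) + 1):
        for M in combinations(edges, k):
            covered = {w for e in M for w in e}
            induced_edges = sum(1 for (u, v) in edges if u in covered and v in covered)
            # disjoint edges whose vertex set spans no extra edges of G
            if len(covered) == 2 * k and induced_edges == k:
                best = max(best, k)
    return best

def tree_construction(n, r, p):
    """The tree from the proof: n = 2r + a, r = p - a + b, t = a - b - 1."""
    a = n - 2 * r
    b = r - p + a
    t = a - b - 1
    u = ["u%d" % i for i in range(r)]
    v = ["v%d" % i for i in range(r)]
    y = ["y%d" % i for i in range(b)]
    z = ["z%d" % i for i in range(t)]
    V = u + v + ["x"] + y + z
    E = [(u[i], v[i]) for i in range(r)] + [("x", v[i]) for i in range(r)]
    E += [("x", yi) for yi in y] + [(v[r - 1], zi) for zi in z]
    return V, E
```

For instance, for $n=7$, $r=2$, $p=4$ the construction yields a tree on $7$ vertices with $\indm=2$ and $\tau_{\max}=4$, matching the case analysis in the proof.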
https://arxiv.org/abs/2103.04766
The size of Betti tables of edge ideals arising from bipartite graphs
Let $\pd(I(G))$ and $\reg(I(G))$ respectively denote the projective dimension and the regularity of the edge ideal $I(G)$ of a graph $G$. For any positive integer $n$, we determine all pairs $(\pd(I(G)), \reg(I(G)))$ as $G$ ranges over all connected bipartite graphs on $n$ vertices.
https://arxiv.org/abs/1602.02210
Classification accuracy as a proxy for two sample testing
When data analysts train a classifier and check if its accuracy is significantly different from chance, they are implicitly performing a two-sample test. We investigate the statistical properties of this flexible approach in the high-dimensional setting. We prove two results that hold for all classifiers in any dimensions: if its true error remains $\epsilon$-better than chance for some $\epsilon>0$ as $d,n \to \infty$, then (a) the permutation-based test is consistent (has power approaching one), (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent. To get a finer understanding of the rates of consistency, we study a specialized setting of distinguishing Gaussians with mean-difference $\delta$ and common (known or unknown) covariance $\Sigma$, when $d/n \to c \in (0,\infty)$. We study variants of Fisher's linear discriminant analysis (LDA) such as ``naive Bayes'' in a nontrivial regime when $\epsilon \to 0$ (the Bayes classifier has true accuracy approaching $1/2$), and contrast their power with corresponding variants of Hotelling's test. Surprisingly, the expressions for their power match exactly in terms of $n,d,\delta,\Sigma$, and the LDA approach is only worse by a constant factor, achieving an asymptotic relative efficiency (ARE) of $1/\sqrt{\pi}$ for balanced samples. We also extend our results to high-dimensional elliptical distributions with finite kurtosis. Other results of independent interest include minimax lower bounds, and the optimality of Hotelling's test when $d=o(n)$. Simulation results validate our theory, and we present practical takeaway messages along with natural open problems.
\section{Introduction} The recent popularity of machine learning has led to the extensive teaching and use of \textit{prediction} in both theoretical and applied communities, while Neyman-Pearson style \textit{hypothesis testing} remains comparatively unfamiliar in the computer science and related ``data science'' communities. As a result, when practically faced with what is effectively a hypothesis testing problem, where statisticians would construct and study an appropriate test statistic for ``direct'' testing, data scientists often take a route involving the use of prediction (via estimating a classifier/regressor), and use that as a proxy for ``indirect'' hypothesis testing. We study one example of this phenomenon in this paper, concerning arguably the most classical testing and prediction problems -- we discuss \textbf{two sample testing} (are the two underlying distributions the same?) and \textbf{classification} (learning a classifier that separates the two distributions well, implicitly assuming they are not the same). These problems will be studied in the linear setting, when the underlying distributions are Gaussians -- this statement will become clearer when we formally define the problem setups in Section \ref{sec:back}. For now, it suffices to say that for Gaussians, the natural linear classifier is Fisher's linear discriminant (also known as Fisher's LDA). Practitioners familiar with machine learning but not the hypothesis testing literature might not recognize the testing problem, and may instead find it intuitive to perform indirect testing in the following way -- first estimate a classifier and then see if its accuracy is significantly different from chance; if it is, conclude that the distributions are different. The central question that this paper seeks to answer is ``\textbf{what price does one pay for doing testing indirectly instead of directly?}''.
As we shall detail in Section \ref{sec:back}, the notion of \textit{cost} or \textit{price} that is appropriate for the Neyman-Pearson hypothesis testing paradigm is the power achievable at a fixed false positive level $\alpha$ (in other words, the lowest possible type-2 error achievable at some prespecified target type-1 error). We would like to answer this question in a worst-case sense, relying on the minimax theory from frequentist statistics. So, more formally, we can restate our question as ``\textbf{how much power do we lose compared to the minimax power, for performing hypothesis testing indirectly after prediction?}''. This question is interesting because prediction (learning a predictor) is in some sense a harder problem than testing -- in one case we are chasing a real vector (the linear classifier) and in the other case we want a binary output (the result of the hypothesis test). We can use that estimated real vector to then do testing, but that may not be optimal (it may perform worse than a ``direct'' test). Indeed, Vapnik's advice \cite{vapnik2013nature} for solving problems with limited information is: \begin{quotation} When solving a given problem, try to avoid solving a more general problem as an intermediate step. \end{quotation} One might therefore conjecture that solving the harder intermediate problem acts as a bottleneck for the easier one. Surprisingly, our analysis shows that this possible hurdle does not occur in the problems we study, and prediction before testing does not pose a significant bottleneck. Indeed, from the perspective of minimax testing error rates, linear classification does allow us to perform two sample mean testing optimally\footnote{In this paper, by \textit{optimally} we typically mean rate-optimally, i.e. we ignore constant factors.}, at least under Gaussianity assumptions under which such a detailed analysis is possible to carry out.
Before we delve into the details, it is worth mentioning that even though this paper is a theoretical endeavor, the question was initially practically motivated. Many scientific questions are naturally posed as two sample tests -- examples abound in epidemiology and neuroscience. As a hypothetical example from the latter, consider a particular region of interest in the brain, say the hippocampus. Say we are interested in determining whether the hippocampus responds differently under two situations (say listening to loud harsh sounds vs soft smooth sounds), or for a person with a medical condition (patient) and a person without the condition (control) -- the condition could be as varied as depression, autism, Parkinson's disease, etc. Then, one collects and analyzes brain data for the same patient under the two contrasting stimuli (to study the effect of change in that stimulus), or for different normal and ill patients under the same stimulus (to study the effect of onset of disease). It is increasingly common in the field of neuroscience, see \cite{pereira2009machine}, to assess whether there is a significant difference between the two sets of data collected by learning a classifier to differentiate between them (because, for instance, practitioners may be more familiar with the problem of classification than that of two sample testing). Neuroscientists refer to this style of analysis as brain decoding or pattern discrimination, and a positive answer can be seen as preliminary evidence that the mental process of interest might occur within the portion of the brain being studied -- see \cite{olivetti2012induction} for a recent discussion of related issues. The results of our paper would then reassure the neuroscientist that their use of prediction instead of testing, even in the high dimensional setting (where dimensionality is large relative to sample size), should not reduce their power much.
At the same time, it should also serve as a warning that a constant factor loss of power might be possible, and for scientific disciplines in which data are scarce, the scientist should be wary of using prediction techniques for disguised hypothesis testing problems. \paragraph{Paper Outline.} In Section \ref{sec:back}, we formally define both problems and provide some relevant background information. In Section \ref{sec:lowertst}, we discuss lower bounds for two sample testing. In Section \ref{sec:powertst} we study the power of linear classification for two sample mean testing. In Section \ref{sec:disc}, we discuss related problem settings, before concluding in Section \ref{sec:conc}. \paragraph{Notation} Let $\mathcal N_d(\mu,\Sigma)$ refer to the $d$-variate Gaussian distribution with mean $\mu \in \mathbb R^d$ and $d \times d $ positive definite covariance matrix $\Sigma$. With a slight abuse of notation, we shall also use $\mathcal N_d(z; \mu,\Sigma)$ to denote the corresponding Gaussian pdf at a point $z$, which is given by $(2\pi)^{-d/2} \mathrm{det}(\Sigma)^{-1/2} \exp(-\tfrac1{2} (z-\mu)^T \Sigma^{-1} (z-\mu))$. $\| \cdot \|$ refers to the standard Euclidean 2-norm. Let $\mathbb{I} [ \cdot ]$ denote the standard 0-1 indicator function, $\mathbb R$ denote the reals, and $\mathbb E$ denote expectation. \section{Background }\label{sec:back} In this section, we introduce the two main topics that we touch upon in this paper -- two sample mean testing and Fisher's linear discriminant analysis (LDA). In both these problems, we will be working in the high-dimensional setting, which means the number of dimensions and points can both go to infinity simultaneously (we will think of them as being polynomially related since we avoid sparsity assumptions).
\subsection{Two Sample Mean Testing} Consider the problem of testing whether two $d$-variate Gaussian distributions $\mathbb{P}$ with density $p(x) \stackrel{\mathrm{def}}{=} \mathcal N_d(x; \mu_0, \Sigma )$ and $\mathbb{Q}$ with density $q(y) \stackrel{\mathrm{def}}{=} \mathcal N_d(y; \mu_1, \Sigma)$ are identical or not. Given $n$ i.i.d. samples $X_1,...,X_n \in \mathbb R^d$ and $Y_1,...,Y_n \in \mathbb R^d$ from $\mathbb{P}$ and $\mathbb{Q}$ respectively, we want to differentiate between the null hypothesis that they are the same and the alternate hypothesis that they are different: $$ H_0: \mathbb{P} = \mathbb{Q} \text{ vs. } H_1: \mathbb{P} \neq \mathbb{Q}. $$ Since we assume that $\mathbb{P},\mathbb{Q}$ are Gaussians with equal covariance, this boils down to $$ H_0: \mu_0 = \mu_1 \text{ vs. } H_1: \mu_0 \neq \mu_1. $$ Two-sample testing is a fundamental decision-theoretic problem, having a long history in statistics -- for example, the past century has seen the wide adoption of the $t$-statistic and its multivariate generalization due to Hotelling \cite{hotelling} to decide if two samples have different population means. The $t$-statistic was introduced in the parametric setting for univariate Gaussians, but it has been generalized to multivariate non-Gaussian settings as well. If we assume that $\Sigma$ is known, then Hotelling's t-statistic is $$ T_H = (\widehat \mu_0 - \widehat \mu_1)^T \Sigma^{-1} (\widehat \mu_0 - \widehat \mu_1) $$ \begin{remark} We assume that we have the same number of points drawn from $\mathbb{P},\mathbb{Q}$. However, if instead we had $n_1,n_2$ points respectively, all of the claimed results will still hold (almost identically) in spirit, whenever $n_1/(n_1+n_2)$ converges to a constant in $(0,1)$. This fact can be simply verified by looking at the more general expressions for power derived in \cite{bs,sd,cq}. We choose to avoid this complication since it is unnecessary for this paper's main message.
\end{remark} \subsection{Fisher's Linear Discriminant Classifier} Consider the problem of learning a classifier to differentiate between two $d$-variate Gaussian distributions $\mathbb{P}$ with density $p(x) \stackrel{\mathrm{def}}{=} \mathcal N_d(x; \mu_0, \Sigma )$ and $\mathbb{Q}$ with density $q(y) \stackrel{\mathrm{def}}{=} \mathcal N_d(y; \mu_1, \Sigma)$. In this paper, \textbf{we assume that $\Sigma$ is known} -- we briefly discuss the difficulties of unknown $\Sigma$ in Section \ref{sec:disc}. Given $n$ i.i.d. samples $X_1,...,X_n \in \mathbb R^d$ and $Y_1,...,Y_n \in \mathbb R^d$ from $\mathbb{P}$ and $\mathbb{Q}$ respectively, we want to classify a new point $Z$, i.e.\ we need to predict whether it came from $\mathbb{P}$ or $\mathbb{Q}$. Let $\mathbb{P},\mathbb{Q}$ correspond to labels 0, 1 respectively. If $\mu_0$ and $\mu_1$ are also known, then the optimal classifier is given by the Bayes rule: \begin{eqnarray*} \mathbb{I} \left[\log \frac{q(Z)}{p(Z)} > 0 \right] &=& \mathbb{I} \left[ (\mu_1 - \mu_0)^T \Sigma^{-1} \left(Z - \frac{(\mu_0 + \mu_1)}{2} \right) > 0 \right] \end{eqnarray*} We denote $\delta \stackrel{\mathrm{def}}{=} \Sigma^{-1/2} (\mu_1 - \mu_0)$ and $\mu \stackrel{\mathrm{def}}{=} \frac{(\mu_0 + \mu_1)}{2}$ so that we can write the Bayes rule as $$ \mathbb{I} \Big[ \delta^T \Sigma^{-1/2} (Z - \mu ) > 0 \Big] $$ Then, Fisher's Linear Discriminant Analysis (LDA) classification rule is given by $$ \mathrm{LDA}_{n,n}(Z) \stackrel{\mathrm{def}}{=} \mathbb{I} \Big[ \widehat \delta^T \Sigma^{-1/2} (Z - \widehat\mu ) > 0 \Big] $$ where $\widehat \delta$ and $\widehat \mu$ are the corresponding plugin empirical estimators of $\delta$ and $\mu$ using $\widehat \mu_0 \stackrel{\mathrm{def}}{=} \sum_i X_i/n$ and $\widehat \mu_1 \stackrel{\mathrm{def}}{=} \sum_i Y_i/n$ and the subscript $n,n$ is to remind the reader of the implicit dependence of $\mathrm{LDA}_{n,n}(Z)$ on the $n$ input data points from each class.
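As an illustration, the plugin rule above is straightforward to implement. The sketch below makes the simplifying assumption $\Sigma = I$ (so no whitening is needed) and uses only the Python standard library; all function names are ours, and label $1$ is assigned when the statistic favors $\mathbb{Q}$:

```python
import random

def fit_lda_identity(X, Y):
    """Plugin Fisher LDA with known covariance Sigma = I (a simplifying
    assumption): estimate the class means, the direction, and the midpoint."""
    d = len(X[0])
    mu0 = [sum(x[j] for x in X) / len(X) for j in range(d)]  # mean of class P (label 0)
    mu1 = [sum(y[j] for y in Y) / len(Y) for j in range(d)]  # mean of class Q (label 1)
    delta = [mu1[j] - mu0[j] for j in range(d)]              # estimated mean difference
    mid = [(mu0[j] + mu1[j]) / 2 for j in range(d)]          # estimated midpoint
    return delta, mid

def predict_lda(delta, mid, z):
    """Return label 1 iff delta^T (z - mid) > 0, i.e. z falls on Q's side."""
    score = sum(delta[j] * (z[j] - mid[j]) for j in range(len(z)))
    return 1 if score > 0 else 0
```

Checking whether the held-out accuracy of this rule is significantly above $1/2$ is exactly the indirect two sample test discussed below.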
It has its roots in Fisher's work \cite{fisher1936use,fisher1940precision} and was later developed further by Wald \cite{wald1944statistical} and Anderson \cite{anderson1951classification} (due to which it is also sometimes called the Anderson statistic). Define the error of LDA conditioned on the input data as: \begin{eqnarray} \hspace{-0.2in} \mathcal{E}_n &\stackrel{\mathrm{def}}{=}& (\mathcal{E}_1 + \mathcal{E}_2)/2 \label{eq:errcond}\\ \hspace{-0.2in} \text{where } \mathcal{E}_1 &\stackrel{\mathrm{def}}{=}& \Pr_{Z \sim \mathbb{P}}(\mathrm{LDA}_{n,n}(Z) = 1 ~|~ X_1^n, Y_1^n), \nonumber \\ \hspace{-0.2in} \mathcal{E}_2 &\stackrel{\mathrm{def}}{=}& \Pr_{Z \sim \mathbb{Q}}(\mathrm{LDA}_{n,n} (Z) = 0 ~|~ X_1^n, Y_1^n). \nonumber \end{eqnarray} Clearly, $\mathcal{E}_n$ is a non-constant random variable that depends on the input data. Next, define the (unconditional) error of LDA as \begin{eqnarray} \hspace{-0.2in} E_n &\stackrel{\mathrm{def}}{=}& (E_1 + E_2)/2 \label{eq:err}\\ \hspace{-0.2in} \text{where } E_1 &\stackrel{\mathrm{def}}{=}& \mathbb E_{n,n} \left[ \Pr_{Z \sim \mathbb{P}}(\mathrm{LDA}_{n,n}(Z) = 1 ~|~ X_1^n, Y_1^n) \right], \nonumber \\ \hspace{-0.2in} E_2 &\stackrel{\mathrm{def}}{=}& \mathbb E_{n,n} \left[ \Pr_{Z \sim \mathbb{Q}}(\mathrm{LDA}_{n,n} (Z) = 0 ~|~ X_1^n, Y_1^n) \right]. \nonumber \end{eqnarray} where $\mathbb E_{n,n}$ denotes the expectation with respect to the $n$ input points from each class, and $X_1^n, Y_1^n$ denote the input datasets. Note that since the input data has already been integrated out, $E_n,E_1,E_2$ do not depend on the input data and are only functions of $n,d,\|\delta\|$. One can estimate $E_n$ in a few different ways. One simple way is via sample splitting -- we form the LDA classifier using the first $n/2$ samples of each class, and estimate its test error using the remaining $n/2$ samples of each class.
We denote the sample-splitting error as $\widehat E^S$ defined as \begin{eqnarray} \widehat E^S &\stackrel{\mathrm{def}}{=}& (\widehat E^S_1 + \widehat E^S_2)/2 \label{eq:errS}\\ \text{where } \widehat E^S_1 &\stackrel{\mathrm{def}}{=}& \frac1{n/2}\sum_{i=1}^{n/2} \mathbb{I} \Big[ \mathrm{LDA}_{n/2,n/2}(X_{n/2 + i}) = 1 \Big], \nonumber \\ \widehat E^S_2 &\stackrel{\mathrm{def}}{=}& \frac1{n/2}\sum_{i=1}^{n/2} \mathbb{I} \Big[ \mathrm{LDA}_{n/2,n/2}(Y_{n/2 + i}) = 0 \Big] . \nonumber \end{eqnarray} where $\mathrm{LDA}_{n/2,n/2}$ represents the LDA classifier formed from the first $n/2$ points of each class. It is clear from the definitions that the LDA classifier will have a true accuracy significantly above half if and only if $\mu_0 \neq \mu_1$. This implies that one can actually use $\widehat E^S$ as a test statistic for two sample testing. We shall derive the power of such a test in Section \ref{sec:powertst} and compare it to the best possible power (in a minimax sense). \begin{remark} For clarity, we only deal with the case of equal sample sizes and when the prior probability of drawing a sample from $\mathbb{P}$ and $\mathbb{Q}$ is equal. When we have unbalanced prior probabilities of sampling from each class $(\alpha_0,\alpha_1) \neq (1/2,1/2)$ where $\alpha_0 + \alpha_1 =1$, one can verify that these results carry forward in the same spirit. To explain, we would observe $n_0$ and $n_1$ points in each class, where $n_0/(n_0 + n_1)$ converges to $\alpha_0 \in (0,1)$ and the overall error in Eq.\eqref{eq:err} would be $\alpha_0 E_1 + \alpha_1 E_2$, and one can generalize our results to achieve similar conclusions using the expressions of \cite{zollanvari2011analytic}, or more specifically Eq.~(3.8) in \cite{raudys2004results}.
\end{remark} \section{Lower bounds for two sample testing}\label{sec:lowertst} Before we present our analysis of the power of indirect two sample testing (via classification), we begin by understanding the fundamental minimax lower bounds for two sample testing. \cite{RamRed15c} prove that when the two random variables $X',Y' \in \mathbb R^d$ are both Gaussian with identity covariance, with a mean difference of $\delta' \in \mathbb R^d$, the minimax power (over all tests having access to $n$ points from each, that have type-1 error at most $\alpha$) for testing $\delta' = 0$ vs $\delta' \neq 0$ is given by $$ \Phi \left( - \frac{\sqrt{d}}{\sqrt{d + n \|\delta'\|^2}} z_\alpha + \frac{\|\delta'\|^2}{\sqrt{8\frac{d}{n^2} + 8 \frac{\|\delta'\|^2}{n}}} \right) + o(1) ~ $$ where $\Phi$ is the standard Gaussian CDF and $z_\alpha$ is the upper $\alpha$-quantile of the standard Gaussian distribution, i.e. $\Phi(-z_\alpha) = \alpha$ and $\Phi(z_\alpha) = 1-\alpha$. This paper treats $z_\alpha$ as a constant (for example, $z_\alpha \approx 2$ for $\alpha=0.05$). The way we translate this lower bound into our setting is as follows. Given a dataset $\{X_i,Y_i\}$ from Gaussians that have mean difference $\delta$ and common covariance $\Sigma$, we form standardized variables $X_i' \stackrel{\mathrm{def}}{=} \Sigma^{-1/2} X_i, Y_i' \stackrel{\mathrm{def}}{=} \Sigma^{-1/2} Y_i$. Then the mean difference between $X',Y'$ is $\delta' = \Sigma^{-1/2}\delta$ and $X',Y'$ also have identity covariance. Now we can apply the aforementioned lower bound of \cite{RamRed15c}. Resubstituting $\|\delta'\|_2^2 = \delta^T \Sigma^{-1} \delta$, we then get a lower bound for power given by $$ \Phi \left( - \frac{\sqrt{d}}{\sqrt{d + n \delta^T \Sigma^{-1}\delta }} z_\alpha + \frac{\delta^T \Sigma^{-1} \delta}{\sqrt{8\frac{d}{n^2} + 8 \frac{\delta^T \Sigma^{-1} \delta}{n}}} \right) + o(1) ~.
$$ For convenience of notation, we shall use $\Psi^2 := \delta^T \Sigma^{-1} \delta$ to denote our signal to noise ratio. The way to interpret the above bound is as follows. The first term inside the parentheses is not of interest for our purposes, its magnitude being bounded by the constant $z_\alpha$. The second term is what determines the rate at which the power approaches 1. When $\delta=0$, the power reduces to $\Phi(-z_\alpha) = \alpha$, and if $d,n$ are thought of as fixed, larger $\delta$ leads to larger power. The key in high dimensions, however, is how the power depends jointly on the signal to noise ratio $\Psi$, $d$ and $n$. To see this more clearly, note that in the low SNR regime when $\Psi^2 \ll d/n$, the power lower bound morally simplifies to \begin{equation} \Phi \left( -z_\alpha + \frac{n \Psi^2}{\sqrt{8d}} \right) + o(1). \label{eq:lowSNR} \end{equation} We can already see that at constant SNR, $n$ only needs to scale faster than $\sqrt{d}$ for test power to asymptotically approach unity -- this $\sqrt{d}/n$ scaling is unlike the $d/n$ scaling one typically sees in prediction problems (for prediction error or classifier recovery). Let us now see how the power of a test based on the classification accuracy of Fisher's LDA fares against the aforementioned minimax lower bounds. \section{Upper bounds for power of Fisher's LDA} \label{sec:powertst} We begin by noting that \cite{raudys2004results} give an expression for the true (unknown) classification error $E_n$ from Eq.\eqref{eq:err}: \begin{equation}\label{eq:errRaudy} E_n = \Phi\left( - \frac{\Psi}{2} \frac1{\sqrt{1+ \frac{2d}{n\Psi^2}}} \right) \end{equation} where we introduce the subscript $n$ to remind us that this expression captures the error of the classifier when $n$ points from each class were used in training. This expression was proved by the authors to be asymptotically exact in the high-dimensional setting, under the so-called Raudys-Kolmogorov double asymptotics of $n,d \to \infty$ with $n/d \to c \in (0,\infty)$.
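The two closed-form expressions above -- the minimax power lower bound and Raudys' error formula in Eq.~\eqref{eq:errRaudy} -- are easy to evaluate numerically. The following sketch (helper names are ours) also checks that the low-SNR simplification of Eq.~\eqref{eq:lowSNR} agrees with the full bound when $\Psi^2 \ll d/n$.

```python
from math import sqrt, erf

def Phi(t):
    """Standard Gaussian CDF."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def raudys_error(n, d, psi):
    """Raudys' asymptotic LDA error, Eq. (errRaudy)."""
    return Phi(-(psi / 2.0) / sqrt(1.0 + 2.0 * d / (n * psi ** 2)))

def minimax_power_lb(n, d, psi, z_alpha=2.0):
    """Minimax power lower bound (after standardizing by Sigma^{-1/2})."""
    shift = -sqrt(d) / sqrt(d + n * psi ** 2) * z_alpha
    rate = psi ** 2 / sqrt(8.0 * d / n ** 2 + 8.0 * psi ** 2 / n)
    return Phi(shift + rate)

def low_snr_power_lb(n, d, psi, z_alpha=2.0):
    """The simplification of Eq. (lowSNR), valid when psi^2 << d/n."""
    return Phi(-z_alpha + n * psi ** 2 / sqrt(8.0 * d))

# As psi -> 0 the error tends to 1/2 and the power tends to Phi(-z_alpha).
print(raudys_error(200, 100, 1e-6))
print(minimax_power_lb(200, 100, 1e-6))
# In the low-SNR regime the two lower-bound expressions nearly coincide.
print(minimax_power_lb(10**4, 10**6, 0.5), low_snr_power_lb(10**4, 10**6, 0.5))
```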
When conditioned on the data, the sample splitting error estimator $\widehat E^S$ in Eq.\eqref{eq:errS} is an unbiased estimator of $\mathcal{E}_{n/2}$, i.e. $$ \mathbb E[ \widehat E^S ~ |~ X_1^n, Y_1^n ] = \mathcal{E}_{n/2}. $$ Marginalizing out the data, we see that $$ \mathbb E [ \widehat E^S ] = \mathbb E_{n/2,n/2} [\mathcal{E}_{n/2}] = E_{n/2}. $$ Further, note that $\widehat E^S$ is a sum of indicator functions (coin flips), each of which is (conditionally) i.i.d. given the training sample. Hence, conditional on the data, we may conclude that $\widehat E^S$ is approximately normally distributed with mean $\mathcal{E}_{n/2}$ and conditional variance $\frac{\mathcal{E}_{n/2}(1-\mathcal{E}_{n/2})}{n/2}$. Since we are (morally) interested in the setting where the signal to noise ratio $\Psi$ is small enough that the problem is hard, we may restrict our thinking to the regime where $\Psi$ is near 0, $E_{n/2}$ and $\mathcal{E}_{n/2}$ are close to $1/2$, and we do not lose much accuracy when we \textit{conservatively} approximate $\mathcal{E}_{n/2}(1-\mathcal{E}_{n/2})$ by its upper bound $1/4$. Hence, \textit{un}conditional on the training data, a reasonable asymptotic approximation is given by $$ \widehat E^S \sim \mathcal N\left(E_{n/2}, \frac1{2n}\right). $$ Note that under $H_0$, we have $$\widehat E^S \sim \mathcal N\left(\frac1{2},\frac1{2n} \right).$$ Hence, we define our test statistic to be the (scaled) difference between the estimated error and half: $$ T^S := \sqrt{n/2} - \sqrt{2n} \widehat E^S ~=~ \sqrt{2n}(1/2 - \widehat E^S) $$ which is distributed as a standard Gaussian $\mathcal N(0,1)$ under the null, and as $\mathcal N(\sqrt{2n} (1/2 - E_{n/2}), 1)$ under the alternative. Hence, if $z_\alpha$ is the $1-\alpha$ quantile of the standard Gaussian distribution, i.e.
$\Pr(Z > z_\alpha) = \alpha$ for a standard Gaussian $Z$, then the corresponding test \begin{equation} \mathbb{I} \Big[ T^S > z_\alpha \Big] \label{eq:test} \end{equation} has an asymptotic type-1 error (probability of wrongly rejecting the null hypothesis when the null is true) controlled by $\alpha$, and it has power (probability of rightfully rejecting the null when the null is indeed false) given by \begin{eqnarray} \hspace{-0.2in} \Pr(T^S > z_\alpha) &=& \Pr \left( T^S - \sqrt{2n} (1/2 - E_{n/2}) > z_\alpha - \sqrt{2n} (1/2 - E_{n/2}) \right) \nonumber\\ \hspace{-0.2in} &\approx& \Phi \left( \sqrt{2n} (1/2 - E_{n/2}) - z_\alpha \right) \label{eq:power} \end{eqnarray} where the $\approx$ is due to the variance approximation of $1/4$. Hence, the power comes down to how $E_{n/2}$ behaves around $1/2$ (the value achieved when $\Psi=0$). Once more, since we are interested in the regime when $E_{n/2}$ is close to half (the testing problem is hard), we do not lose much accuracy with the following Taylor expansion: $$ \Phi(t) \approx \Phi(0) + \phi(0) t $$ where $\phi$ is the Gaussian pdf, hence $\phi(0)=1/\sqrt{2\pi}$ and $\Phi(0)=1/2$. From Eq.\eqref{eq:errRaudy} when using $n/2$ points, we get \begin{eqnarray*} E_{n/2} &=& \Phi\left( - \frac{\Psi}{2} \frac1{\sqrt{1+ \frac{4d}{n\Psi^2}}} \right) \\ &\approx& 1/2 - \frac{\Psi}{2\sqrt{2\pi}} \frac1{\sqrt{1+ \frac{4d}{n\Psi^2}}} \end{eqnarray*} and hence $$ 1/2 - E_{n/2} \approx \frac{\Psi^2}{\sqrt{8\pi}} \frac1{\sqrt{\Psi^2+ \frac{4d}{n}}}. 
$$ Substituting this back into Eq.\eqref{eq:power}, we find that the power of $T^S$ is approximately given by \begin{eqnarray*} \Phi\left( \frac{\sqrt{2n}\Psi^2}{\sqrt{8\pi}} \frac1{\sqrt{\Psi^2+ \frac{4d}{n}}} - z_\alpha \right) &=& \Phi\left( \frac{\Psi^2}{\sqrt{4\pi \frac{\Psi^2}{n} + 16\pi \frac{d}{n^2}}} - z_\alpha \right) \end{eqnarray*} Comparing this with the lower bound expression $$ \Phi \left( \frac{\Psi^2}{\sqrt{8\frac{\Psi^2}{n} + 8\frac{d}{n^2 }}} - \frac{\sqrt{d}}{\sqrt{d + n \Psi^2 }} z_\alpha \right) + o(1) , $$ we can conclude that using linear classification accuracy is essentially minimax optimal, up to small constant factors. Specifically, when $\Psi^2 \ll d/n$, i.e. we are in the low SNR regime where the errors are close to half, the asymptotic power of Fisher's LDA is given by $$ \Phi \left( \frac{n\Psi^2}{\sqrt{16\pi d}} - z_\alpha \right) $$ which is just a small constant factor worse than the minimax lower bound in Eq.\eqref{eq:lowSNR}. \section{When $\Sigma$ is unknown} For two sample testing in the fixed $d$ setting, there are strong reasons to prefer Hotelling's test. For example, it is known to be the ``uniformly most powerful'' test when $\mathbb{P},\mathbb{Q}$ are univariate Gaussians under fairly general assumptions \cite{kariya81,simaika41,anderson58,salaevskii71}. In a seminal paper, \cite{bs} proved that $T_H$ has asymptotic power tending to the (trivial) value of $\alpha$ in the high dimensional setting, when $d,n \to \infty$ with $d/n \to 1-\epsilon$ for small $\epsilon$. This is the regime where $d<n$ so $\widehat \Sigma$ is invertible, but they intuitively pointed out that with around $d$ samples, one would get a very ill-conditioned estimate of (the inverse of) the $d \times d$ matrix $\Sigma$, which has $d^2$ unknowns.
This motivated the study of alternate test statistics; for instance, the authors show that dropping $\widehat \Sigma$ from the Hotelling test statistic entirely leads to a test that does have asymptotic power tending to $1$ in the high-dimensional setting. Following that, \cite{sd} propose (in a similar spirit) the test statistic $$ T_{SD} := (\widehat \mu_0 - \widehat \mu_1)^T \mathrm{diag}(\widehat \Sigma)^{-1} (\widehat \mu_0 - \widehat \mu_1) $$ which replaces $\widehat \Sigma$ by $\mathrm{diag}(\widehat \Sigma)$ in Hotelling's statistic, and show that it also leads to high-dimensional consistency. For the classification problem, there is a similar vein of results paralleling the results of two sample testing, even though this connection has seemingly not been explicitly mentioned in either set of papers, to the best of our knowledge. For example, \cite{bickel2004some} prove that when $\Sigma$ is unknown, the error of Fisher's LDA can be terrible in the high dimensional setting. They instead consider the ``Naive Bayes'' (NB) classification rule given by $$ \text{NB}_{n,n}(Z) \stackrel{\mathrm{def}}{=} \mathbb{I} \Big[ \widehat \delta^T \mathrm{diag}(\widehat \Sigma)^{-1/2} (Z - \widehat\mu ) > 0 \Big] $$ which simply assumes that the features are independent (similarly using $\mathrm{diag}(\widehat \Sigma)$ as the estimator of $\Sigma$ in the LDA rule). They show that NB can have much better accuracy than LDA in the high dimensional setting (in the worst case). This understated connection has important implications for extending the results of our paper. Indeed, from the aforementioned examples of high dimensional consistency resulting from using $\mathrm{diag}( \widehat \Sigma)$ to replace $\widehat \Sigma$ in both two sample testing and classification, one should expect that the conclusions of this paper should also morally extend to the setting where $\Sigma$ is unknown, when both the classifier and the two-sample test use the same substitute for $\widehat \Sigma$.
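A sketch of the Naive Bayes rule above, assuming Gaussian data and using the pooled per-coordinate sample variances for $\mathrm{diag}(\widehat\Sigma)$; the helper names are ours, and this is only an illustration, not the implementation used in the cited works.

```python
import numpy as np

def naive_bayes_rule(X_tr, Y_tr):
    """Sketch of the 'Naive Bayes' rule: LDA with diag(Sigma_hat)
    substituted for Sigma_hat.  Helper names are ours, not the paper's."""
    mu0, mu1 = X_tr.mean(axis=0), Y_tr.mean(axis=0)
    delta_hat = mu1 - mu0
    mu_hat = (mu0 + mu1) / 2.0
    # pooled per-coordinate variances: the diagonal of the pooled covariance
    var = 0.5 * (X_tr.var(axis=0, ddof=1) + Y_tr.var(axis=0, ddof=1))
    w = delta_hat / np.sqrt(var)   # delta_hat^T diag(Sigma_hat)^{-1/2}
    return lambda Z: (np.atleast_2d(Z) - mu_hat) @ w > 0   # True = class 1

rng = np.random.default_rng(1)
d, n = 50, 500
delta = np.full(d, 0.3)
X, Y = rng.standard_normal((n, d)), rng.standard_normal((n, d)) + delta
clf = naive_bayes_rule(X, Y)
X_te, Y_te = rng.standard_normal((n, d)), rng.standard_normal((n, d)) + delta
err = 0.5 * (np.mean(clf(X_te)) + np.mean(~clf(Y_te)))
print(err)   # well below 1/2: the mean shift is easy to detect here
```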
\section{Permutation Testing} In practice, instead of using the kind of asymptotic approximations that we have analyzed, one often employs \textit{randomization tests}, also known as \textit{permutation tests}. For a \textit{direct} two sample test, we would do the following. \textbf{Permutation Test:} Calculate $T^*$ on the full data $X,Y$. Repeat $P$ times: \textit{Pool the samples $X,Y$ into one bag, permute the samples, split it into two parts $X^p, Y^p$. Evaluate the test statistic on each of these permuted samples, call this $T^p$. } Sort all the statistics $T^*,T^1,...,T^P$; if the rank of $T^*$ is in the right $\alpha$-quantile (i.e., $T^*$ is larger than a $1-\alpha$ fraction of the statistics), then reject the null hypothesis. We would like to note two possible ways of applying permutation testing within the classification via sample splitting framework, because of the subtleties involved. The methods below differ in the italicized text. \textbf{Method 1:} Split data into two halves, call these $X^1, Y^1$ and $X^2, Y^2$. Train the classifier on $X^1,Y^1$, call this $f^*$. Evaluate accuracy of $f^*$ on $X^2, Y^2$, call this $a^*$. Repeat $P$ times\footnote{In practice people choose $P$ in the range of 100s to 1000s.}: \textit{Pool the samples $X^2, Y^2$ into one bag, permute the samples, and then split it into two parts, $X^p, Y^p$. Evaluate the accuracy of $f^*$ on this permuted data, call this $a^p$.} Sort all the accuracies $a^*,a^1,...,a^P$; if the rank of $a^*$ is in the right $\alpha$-quantile (i.e., $a^*$ is larger than a $1-\alpha$ fraction of the accuracies), then reject the null hypothesis. \textbf{Method 2:} Split data into two halves, call these $X^1, Y^1$ and $X^2, Y^2$. Train the classifier on $X^1,Y^1$, call this $f^*$. Evaluate accuracy of $f^*$ on $X^2, Y^2$, call this $a^*$. Repeat $P$ times: \textit{Pool all samples $X^1,Y^1,X^2,Y^2$ into one bag, permute the samples, and then split it into 4 parts $X^p,Y^p,X'^p,Y'^p$.
Train a new classifier $f^p$ on the first half, evaluate it on the second half, to get accuracy $a^p$.} Sort all the accuracies $a^*,a^1,...,a^P$; if the rank of $a^*$ is in the right $\alpha$-quantile (i.e., $a^*$ is larger than a $1-\alpha$ fraction of the accuracies), then reject the null hypothesis. In our opinion, Method 2 should be preferred to Method 1, but their difference is rather subtle. The first method tests whether $f^*$ is significantly different from chance. The second method tests whether any classifier can be learned that performs significantly different from chance. In other words, for testing the null hypothesis as we have stated it, permutation testing should be wrapped around the whole procedure of calculating a test statistic, not just around the second half of such a procedure. We currently don't have a formal analysis to support this, but we expect only minor (asymptotically negligible) differences between the finite-sample performance of the studied test and the permutation variant, making the qualitative statements from our analysis carry forward to the permutation setting. \section{Related Settings}\label{sec:disc} Here we discuss how our results may be extended to a larger context. \subsection{Leave one out classification accuracy} Another natural estimator for accuracy, as an alternative to sample-splitting, is a leave-one-out estimator $\widehat E^L$, defined as \begin{eqnarray} \widehat E^L &\stackrel{\mathrm{def}}{=}& (\widehat E^L_1 + \widehat E^L_2)/2 \label{eq:errL}\\ \text{where } \widehat E^L_1 &\stackrel{\mathrm{def}}{=}& \frac1{n}\sum_{i=1}^{n} \mathbb{I}\Big[ \mathrm{LDA}_{n\backslash i,n}(X_i) = 1 \Big],\nonumber \\ \widehat E^L_2 &\stackrel{\mathrm{def}}{=}& \frac1{n}\sum_{i=1}^{n} \mathbb{I}\Big[ \mathrm{LDA}_{n,n\backslash i}(Y_i) = 0 \Big] .\nonumber \end{eqnarray} where $\mathrm{LDA}_{n\backslash i,n}$ (or $\mathrm{LDA}_{n,n\backslash i}$) denotes the LDA classifier created from all points except $X_i$ (or all points except $Y_i$).
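A direct (if naive) implementation of $\widehat E^L$ under the known-covariance LDA rule with $\Sigma = I$: holding out $X_i$ only requires refitting the mean of the first class, so each leave-one-out prediction is cheap. This is our own sketch, not code from the paper.

```python
import numpy as np

def lda_predict(mu0, mu1, z):
    """Known-covariance LDA with Sigma = I: label 1 iff delta.(z - mu) > 0."""
    return (z - (mu0 + mu1) / 2.0) @ (mu1 - mu0) > 0

def loo_error(X, Y):
    """Leave-one-out estimate E^L: when holding out X_i (resp. Y_i), only
    that class's mean is refit.  A naive O(d) refit per point, for clarity."""
    n = X.shape[0]
    mu0, mu1 = X.mean(axis=0), Y.mean(axis=0)
    e1 = e2 = 0
    for i in range(n):
        mu0_i = (n * mu0 - X[i]) / (n - 1)   # mean of X without X_i
        e1 += lda_predict(mu0_i, mu1, X[i])  # X_i misclassified as 1
        mu1_i = (n * mu1 - Y[i]) / (n - 1)   # mean of Y without Y_i
        e2 += not lda_predict(mu0, mu1_i, Y[i])
    return (e1 / n + e2 / n) / 2.0

rng = np.random.default_rng(2)
d, n = 20, 300
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, d)) + 0.5    # per-coordinate shift of 0.5
print(loo_error(X, Y))
```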
We conjecture that a test statistic based on this might save the constant factor loss in power due to sample splitting. \subsection{Resubstitution classification accuracy} Since leave-one-out estimators are computationally intensive, one might be tempted to use the training data itself to test the classifier. This resubstitution error would be defined as \begin{eqnarray} \widehat E^R &\stackrel{\mathrm{def}}{=}& (\widehat E^R_1 + \widehat E^R_2)/2 \label{eq:errR}\\ \text{where } \widehat E^R_1 &\stackrel{\mathrm{def}}{=}& \frac1{n}\sum_{i=1}^{n} \mathbb{I}\Big[ \mathrm{LDA}_{n,n}(X_i) = 1 \Big],\nonumber \\ \widehat E^R_2 &\stackrel{\mathrm{def}}{=}& \frac1{n}\sum_{i=1}^{n} \mathbb{I}\Big[ \mathrm{LDA}_{n,n}(Y_i) = 0 \Big] .\nonumber \end{eqnarray} where we first train on all the data and then test on all the data. Of course, such an estimate would be overoptimistic, and would be frowned upon as an estimate of the true accuracy $E_n$ of the classifier. However, one might wonder if the null distribution would be similarly optimistically biased, nullifying the optimistic bias of $\widehat E^R$. Our conjecture is that a test statistic based on resubstitution would actually be worse than both sample splitting and leave-one-out. \subsection{Non-linear Classification} Another natural setting is that of nonlinear classification. An examination of the test statistics used (Hotelling and its variants) shows that they are closely related to the statistics based on the kernel Maximum Mean Discrepancy of \cite{mmd12} and the kernel FDA of \cite{kfda}, when specifically instantiated with the linear kernel. Similarly, for classification, a kernelized LDA was proposed by \cite{mika1999fisher} which specializes to Fisher's LDA when the linear kernel is employed. Given the parallels observed in the other settings, one might naturally conjecture that the spirit of the results of this paper can be extended to such kernelized nonlinear settings as well.
\subsection{Neyman-Pearson Classification} We would like to mention that while a Neyman-Pearson classification framework was proposed and analyzed in \cite{scott05np}, the setting is quite different since that work considers the problem of minimizing the probability of error for one class, subject to a bound on the probability of error of the other class. We are instead interested in minimizing the probability of not detecting that the two classes are different, subject to a bound on the probability of detecting them as different when they are actually the same. Thus, we consider a different connection to testing than \cite{scott05np}, and the classification problem appears to be harder than our testing problem (inferring the label of each data point versus deciding whether all points share the same label distribution). \section{Experiments} Here, we ran a couple of simple simulations to compare the performance of the algorithm (upper bound) with the theoretical lower bound. The setup is as follows: for each of $E$ different choices of $d,n$, with $d/n = \kappa$, we draw $n$ samples from two $d$ dimensional identity-covariance Gaussians that have mean difference given by $\frac{\Psi}{\sqrt{d}} \cdot (1,1,\dots,1)$, so that the signal to noise ratio is $\Psi$. We split the sample into two parts, train the classifier on the first and test it on the second. We use the cutoff $z_\alpha$ to determine if the test in Eq.\eqref{eq:test} rejects or not. We then repeat this procedure $R$ times to determine the power, i.e. the probability of rejecting while controlling the level at $\alpha$. On all plots, higher is better. \subsection{Constant power} In the following experiment, we choose $E = 30, R = 200, z_\alpha=2, \Psi = 3/d^{1/4}, \kappa =1$ with $d,n$ varying from 20 to 600 in increments of 20.
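The Monte Carlo procedure just described can be sketched as follows (our own simplified version, not the authors' code): each repetition draws fresh data, computes $\widehat E^S$ by sample splitting, and rejects when $T^S = \sqrt{2n}(1/2 - \widehat E^S)$ exceeds $z_\alpha$. We assume identity covariance and a per-coordinate mean shift of $\Psi/\sqrt{d}$.

```python
import numpy as np

def simulated_power(d, n, psi, z_alpha=2.0, reps=200, seed=0):
    """Monte Carlo estimate of the power of the test I[T^S > z_alpha].

    A simplified sketch of the experiment described above: identity
    covariance, mean shift psi/sqrt(d) per coordinate, sample splitting.
    Parameter names are ours.
    """
    rng = np.random.default_rng(seed)
    delta = np.full(d, psi / np.sqrt(d))
    m = n // 2
    rejections = 0
    for _ in range(reps):
        X = rng.standard_normal((n, d))
        Y = rng.standard_normal((n, d)) + delta
        mu0, mu1 = X[:m].mean(axis=0), Y[:m].mean(axis=0)
        w, mu = mu1 - mu0, (mu0 + mu1) / 2.0
        E_hat = 0.5 * (np.mean((X[m:] - mu) @ w > 0)
                       + np.mean((Y[m:] - mu) @ w <= 0))
        T = np.sqrt(2 * n) * (0.5 - E_hat)   # the statistic from Eq. (test)
        rejections += T > z_alpha
    return rejections / reps

# One grid point of the constant-power experiment: kappa = 1, psi = 3/d^{1/4}.
d = n = 100
print(simulated_power(d, n, 3.0 / d ** 0.25))
```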
This setting was selected because it leads to (asymptotically) constant power according to our theoretical analysis, for both upper and lower bounds, where the theoretical minimax power should approach $$ \Phi( -2 + 9/\sqrt{8}) \approx 0.88 $$ and hence would be suitable for visualization on a graph (\cite{RamRed15c} already proved that the lower bound is asymptotically tight including all constants, so it serves as an excellent benchmark). \begin{figure}[h!] \begin{center} \includegraphics[width=0.7\textwidth]{const-power-2.png} \end{center} \end{figure} The slight bumps in the blue curve are because we use $R$ repetitions to calculate the power (the number of times the test rejected, divided by $R$). The larger the $R$ used, the smoother the estimated curve would be. However, we can already make out from the above curve that it is tracking the minimax power accurately, but is consistently slightly lower. \subsection{Increasing power} In the following experiment, the setup is almost the same as the previous experiment, except that for each $1 \leq e \leq E$, we used $\Psi = \frac{e}{10 d^{1/4}}$. In this setting, the theoretical prediction is that the power will increase from (near) zero in the first setting to (near) one in the last setting. (Once more, we chose $\Psi$ so that we can visualize the effect easily without saturation near zero or one.) \begin{figure}[h!] \begin{center} \includegraphics[width=0.7\textwidth]{inc-power.png} \end{center} \end{figure} Once more, the power of the classifier accuracy is tracking the minimax power quite accurately, while always staying below it. \section{Conclusion}\label{sec:conc} This paper gave a basic statistical analysis for the use of classification accuracy as a test statistic for two sample testing. Theoretically, we find that the classification accuracy of Fisher's LDA is rate-optimal in the minimax sense for two sample mean testing of Gaussians with known covariance.
We conjecture that such results should also hold when the covariance is unknown, and for nonlinear settings. Practically, the possible constant factor loss in power may be a worry in settings where sample sizes are low, dimensionality is high, signal to noise ratio is small, and practitioners may want to get the most from their data. \section*{Acknowledgments} The authors would like to acknowledge the NSF grant IIS-1247658 and AFOSR YIP FA9550-14-1-0285. AR also thanks Arthur Gretton, Leila Wehbe and Amit Datta for discussions during the early phase of this project.
https://arxiv.org/abs/1602.02210
Classification accuracy as a proxy for two sample testing
When data analysts train a classifier and check if its accuracy is significantly different from chance, they are implicitly performing a two-sample test. We investigate the statistical properties of this flexible approach in the high-dimensional setting. We prove two results that hold for all classifiers in any dimensions: if its true error remains $\epsilon$-better than chance for some $\epsilon>0$ as $d,n \to \infty$, then (a) the permutation-based test is consistent (has power approaching to one), (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent. To get a finer understanding of the rates of consistency, we study a specialized setting of distinguishing Gaussians with mean-difference $\delta$ and common (known or unknown) covariance $\Sigma$, when $d/n \to c \in (0,\infty)$. We study variants of Fisher's linear discriminant analysis (LDA) such as ``naive Bayes'' in a nontrivial regime when $\epsilon \to 0$ (the Bayes classifier has true accuracy approaching 1/2), and contrast their power with corresponding variants of Hotelling's test. Surprisingly, the expressions for their power match exactly in terms of $n,d,\delta,\Sigma$, and the LDA approach is only worse by a constant factor, achieving an asymptotic relative efficiency (ARE) of $1/\sqrt{\pi}$ for balanced samples. We also extend our results to high-dimensional elliptical distributions with finite kurtosis. Other results of independent interest include minimax lower bounds, and the optimality of Hotelling's test when $d=o(n)$. Simulation results validate our theory, and we present practical takeaway messages along with natural open problems.
https://arxiv.org/abs/2211.10499
Fundamental groups of reduced suspensions are locally free
In this paper, we analyze the fundamental group $\pi_1(\Sigma X,\overline{x_0})$ of the reduced suspension $\Sigma X$ where $(X,x_0)$ is an arbitrary based Hausdorff space. We show that $\pi_1(\Sigma X,\overline{x_0})$ is canonically isomorphic to a direct limit $\varinjlim_{A\in\mathscr{P}}\pi_1(\Sigma A,\overline{x_0})$ where each group $\pi_1(\Sigma A,\overline{x_0})$ is isomorphic to a finitely generated free group or the infinite earring group. A direct consequence of this characterization is that $\pi_1(\Sigma X,\overline{x_0})$ is locally free for any Hausdorff space $X$. Additionally, we show that $\Sigma X$ is simply connected if and only if $X$ is sequentially $0$-connected at $x_0$.
\section{Introduction} When a based space $(X,x_0)$ is well-pointed, i.e. the inclusion $\{x_0\}\to X$ is a cofibration, the reduced suspension $\Sigma X$ with canonical basepoint $\overline x_0$ is homotopy equivalent to the unreduced suspension $SX$ and it follows that $\pi_1(\Sigma X,\overline x_0)$ is free on the set of path components of $X$ not containing $x_0$. When $X$ is not well-pointed, the situation becomes more delicate but includes some of the most important examples in the algebraic topology of locally complicated spaces. For example: \begin{enumerate} \item If $\mathbb{E}_0=\{1,2,3,\dots\}\cup\{\infty\}$ is the one-point compactification of the natural numbers, then $\Sigma \mathbb{E}_0$ is homeomorphic to the usual ($1$-dimensional) infinite earring space $\mathbb{E}_1$ whose fundamental group is extensively studied and applied \cite{CChe,Edafreesigmaproducts}. Moreover, the iterated suspension $\Sigma^n\mathbb{E}_0$ is the $n$-dimensional earring \cite{EK00higher}, sometimes called the Barratt-Milnor Spheres \cite{BarrattMilnor}. \item If $X=\{(0,0)\}\cup\{(x,\sin(1/x))\mid 0<x<1\}$ is the topologist's sine curve, then $\Sigma X$ is the harmonic archipelago $\mathbb{H}\mathbb{A}$, introduced in \cite{BS98}, whose fundamental group has also been widely studied, cf. \cite{CHMArchipelago,Corsontriplecone,Corsonpmods,HHarchip,Hojkauniversal}. \item If $X=C\mathbb{E}_0=\mathbb{E}_0\times I/\mathbb{E}_0\times\{1\}$ is the unreduced cone over a convergent sequence (a converging fan of arcs) and the image of $(\infty,0)$ is taken to be the basepoint of $X$, this provides what is perhaps the most well-known example where $X$ is a path-connected compact metric space but $\pi_1(\Sigma X)$ is non-trivial (in fact, it is isomorphic to the archipelago group $\pi_1(\mathbb{H}\mathbb{A})$). \end{enumerate} The archipelago group $\pi_1(\mathbb{H}\mathbb{A})$ is readily shown to be a direct limit of a sequence of earring groups \cite[Prop. 1]{Hojkauniversal}.
Hence, it is natural to ask if fundamental groups of reduced suspensions can always be ``built out of'' free groups and earring groups. In this paper, we answer this question in the affirmative by establishing methods that allow us to characterize the algebraic structure of $\pi_1(\Sigma X,\overline x_0)$ in substantial generality. Independently, the authors of \cite{CorsonHojkaRed} have studied fundamental groups of reduced suspensions, restricting their attention to $X$ which are first countable at $x_0$ and focusing on the establishment of a dichotomy that allows one to decide when the groups $\pi_1(\Sigma X,\overline x_0)$ contain a copy of the archipelago group $\pi_1(\mathbb{H}\mathbb{A})$. We find little overlap with their results and methods. For our main result, we consider a based space $(X,x_0)$ and let $\mathscr{P}$ denote the set of all subsets $A\subseteq X$ containing $x_0$ for which (1) $A$ has countably many path components, (2) each path component of $A$ is a Peano continuum, and (3) the path components of $A$ cluster at $x_0$. When ordered by inclusion, $\mathscr{P}$ becomes a directed set (see \Cref{directed}). The main technical achievement of this paper is the following. \begin{theorem}\label{mainthm} Let $(X,x_0)$ be a based Hausdorff space. The inclusion maps $A\to X$, $A\in\mathscr{P}$ induce a canonical isomorphism $\psi:\varinjlim_{A\in\mathscr{P}} \pi_1(\Sigma A, \overline x_0)\to \pi_1(\Sigma X, \overline x_0)$. \end{theorem} We also show that each of the groups $\pi_1(\Sigma A, \overline x_0)$ is either isomorphic to a finitely generated free group or the infinite earring group $\pi_1(\mathbb{E}_1)$ (\Cref{loc-free}) and is therefore locally free. Since direct limits of locally free groups are locally free \cite[Lemma 24]{CHMArchipelago}, the following consequence is immediate. \begin{theorem}\label{homology} For every based Hausdorff space $(X,x_0)$, $\pi_1(\Sigma X,\overline x_0)$ is locally free.
\end{theorem} It is certainly possible to extend the results of \cite{CorsonHojkaRed} using \Cref{mainthm}; however, we avoid doing so here. Rather, we briefly mention two other consequences of our methods. In \Cref{equiv}, we give a complete characterization of the spaces $X$ for which $\Sigma X$ is simply connected. Additionally, since abelianization is a left adjoint functor, it preserves direct limits. Thus $\psi$ descends to an isomorphism $\psi':\varinjlim_{A\in\mathscr{P}} H_1(\Sigma A)\to H_1(\Sigma X)$ on singular homology. It is known that $H_1(\mathbb{E}_1)$ is isomorphic to $\mathbb{Z}^{\mathbb{N}}\oplus (\mathbb{Z}^{\mathbb{N}}/\oplus_{\mathbb{N}}\mathbb{Z})$ \cite{EKH1ofHE} and $H_1(\mathbb{H}\mathbb{A})$ is isomorphic to $\mathbb{Z}^{\mathbb{N}}/\oplus_{\mathbb{N}}\mathbb{Z}$ \cite{HHcotorsion,KarimovRepovs} but that such isomorphisms are highly non-constructive. \begin{corollary} For any based Hausdorff space $(X,x_0)$, $H_1(\Sigma X)$ is canonically isomorphic to a direct limit of groups each of which is (not canonically) isomorphic to either $\mathbb{Z}^n$ or $\mathbb{Z}^{\mathbb{N}}\oplus (\mathbb{Z}^{\mathbb{N}}/\oplus_{\mathbb{N}}\mathbb{Z})$. In particular, $H_1(\Sigma X)$ is torsion-free. \end{corollary} \section{Notation and Preliminaries} Throughout, $X$ will denote a Hausdorff topological space with basepoint $x_0$. The closed unit interval will be denoted $I$ and $I^2$ is the closed unit square. A \textit{path} in $X$ is a continuous map $\alpha:I\to X$, which we call a \textit{loop} if $\alpha(0)=\alpha(1)$. We let $\alpha\cdot\beta$ denote the usual \textit{concatenation} of paths when $\alpha(1)=\beta(0)$ and we let $\alpha^{-}(t)=\alpha(1-t)$ denote the \textit{reverse} of $\alpha$. The constant path at $x\in X$ is denoted by $c_x$. 
If $[a,b],[c,d]\subseteq I$ and $\alpha:[a,b]\to X$, $\beta:[c,d]\to X$ are maps, we write $\alpha\equiv\beta$ if $\alpha=\beta\circ h$ for some increasing homeomorphism $h: [a,b]\to [c,d]$; if $h$ is linear and if it does not create confusion, we will identify $\alpha$ and $\beta$. Note that $\equiv$ is an equivalence relation. We write $\alpha\simeq\beta$ if $\alpha$ and $\beta$ are homotopic relative to their endpoints. Certainly, if $\alpha\equiv\beta$, then $\alpha\simeq\beta$ by a homotopy with image in $\text{Im}(\alpha)$. \begin{definition} Given $x_0\in X$, we say that a countable collection $\mathscr{A}$ of subsets of $X$ \textit{clusters at} $x_0$ if for every neighborhood $U$ of $x_0$, we have $A\subseteq U$ for all but finitely many $A\in\mathscr{A}$. If $\mathscr{A}$ is indexed by $\mathbb{N}=\{1,2,3,\dots\}$, we may say that $\mathscr{A}$ \textit{converges toward} $x_0$. As special cases, we also define the following. If $C\subseteq X$ is a countable subset, we say $C$ \textit{clusters at} $x_0$ if $\mathscr{A}=\{\{c\}\mid c\in C\}$ clusters at $x_0$. Similarly, if $\mathscr{F}=\{f_j\mid j\in J\}$ is a countable collection of maps $f_j:Y_j\to X$, we say $\mathscr{F}$ \textit{clusters at} $x_0$ if $\{\text{Im}(f_j)\mid j\in J\}$ clusters at $x_0$. \end{definition} Let $(A_j,a_j)$, $j\in J$ be a countable collection of based spaces. The \textit{shrinking wedge} of this collection is the space $\widetilde{\bigvee}_{j\in J}(A_j,a_j)$ whose underlying set is the usual one-point union $\bigvee_{j\in J}(A_j,a_j)$ with canonical basepoint $a_0$ but where $U\subseteq \widetilde{\bigvee}_{j\in J}(A_j,a_j)$ is open if and only if $U\cap A_j$ is open in $A_j$ for all $j\in J$ and if $A_j\subseteq U$ for all but finitely many $j\in J$ whenever $a_0\in U$. Note that $\widetilde{\bigvee}_{j\in J}(A_j,a_j)$ may be canonically embedded as a closed subspace of the direct product $\prod_{j\in J}A_j$.
Moreover, by restricting to individual wedge-summands, we see that based maps $f:(\widetilde{\bigvee}_{j\in J}(A_j,a_j),a_0)\to (X,x_0)$ are in one-to-one correspondence with collections $\{f_j\}_{j\in J}$ of based maps $f_j:A_j\to X$ that cluster at $x_0$. Let $\mathcal{I}$ be a countable linearly ordered set and $\{\alpha_i\}_{i\in\mathcal{I}}$ be a collection of loops $\alpha_i:(I,\partial I)\to (X,x_0)$ that clusters at $x_0$. The $\mathcal{I}$-indexed concatenation of this collection is the loop $\prod_{i\in\mathcal{I}}\alpha_i$ defined as follows: let $\mathscr{U}$ be a collection of disjoint open intervals in $(0,1)$ such that (1) $\bigcup\mathscr{U}$ is dense in $[0,1]$ and (2) when $\mathscr{U}$ is equipped with the natural ordering inherited from $(0,1)$, there is an order isomorphism $\psi:\mathcal{I}\to\mathscr{U}$. Then $\prod_{i\in\mathcal{I}}\alpha_i$ maps $I\backslash\bigcup\mathscr{U}$ to $x_0$ and agrees with $\alpha_i$ on $\overline{\psi(i)}$. Up to the equivalence relation $\equiv$, the definition of $\prod_{i\in\mathcal{I}}\alpha_i$ does not depend on the choice of $\mathscr{U}$. \subsection{Reduced Suspensions} For a based space $(X,x_0)$, the quotient space \[\Sigma X=\frac{X\times [0,1]}{\{x_0\}\times [0,1]\cup X\times \{0,1\}}\] is the \textit{reduced suspension} of $(X,x_0)$. Let $q:X\times [0,1]\to \Sigma X$ denote the canonical quotient map. The canonical basepoint of $\Sigma X$, which is the image of $\beth=\{x_0\}\times [0,1]\cup X\times \{0,1\}$, will be denoted $\overline x_0$. Since $q$ maps $(X\backslash \{x_0\})\times (0,1)$ homeomorphically onto $\Sigma X\backslash \{\overline x_0\}$, we will often identify these spaces. Given any point $x\in X$, there is a canonical path $\lambda_x:I\to \Sigma X$ given by $\lambda_x(t)=q(x,t)$, called the \textit{adjoint path at} $x$; the name refers to the fact that the function $x\mapsto \lambda_x$ is the adjoint of the identity map $\Sigma X\to \Sigma X$.
Note that $\lambda_{x_0}$ is the constant path at $\overline x_0$. Let $\mathbb{E}_0=\{1,2,3,\dots\}\cup\{\infty\}$ be the one-point compactification of the natural numbers with basepoint $\infty$. This case is of particular interest since convergent sequences $\{x_n\}_{n\in\mathbb{N}}\to x_0$ in $X$ correspond uniquely to based maps $(\mathbb{E}_0,\infty)\to (X,x_0)$. We refer to $\mathbb{E}_0$ as the \textit{$0$-dimensional earring space} since $\mathbb{E}_1=\Sigma \mathbb{E}_0$ is the \textit{$1$-dimensional earring space}, i.e. $\mathbb{E}_1$ is canonically homeomorphic to the planar set $\bigcup_{n\in\mathbb{N}}C_n$ where $C_n$ is the circle of radius $1/n$ centered at $(1/n,0)$. In this example, $\lambda_{n}:I\to \mathbb{E}_1$ is the loop parameterizing the $n$-th circle. \subsection{Standard neighborhoods} Since we work in significant generality, it is necessary to describe a convenient basis of neighborhoods at $\overline x_0$. Given a pointed open cover $\mathscr{V}=\{V_x\mid x\in X\}$ of $X$, i.e. where $x\in V_x$ for all $x\in X$, and any (not necessarily continuous) function $\eta:X\to (0,1/3)$, we may define \[O(\mathscr{V},\eta)=(V_{x_0}\times [0,1])\cup \bigcup_{x\in X}V_x\times ([0,\eta(x))\cup(1-\eta(x),1]).\] We refer to a set of the form $O(\mathscr{V},\eta)$ as a \textit{standard neighborhood of} $\beth$ in $X\times I$. Hence, $(V\times I)\cup (X\times ([0,1/3)\cup (2/3,1]))$ is the largest possible standard neighborhood for a given neighborhood $V=V_{x_0}$ of $x_0$. A set of the form $O(\mathscr{V},\eta)$ is saturated with respect to $q$ and we refer to its image $q(O(\mathscr{V},\eta))$ as a \textit{standard neighborhood of }$\overline x_0$. The standard neighborhoods of $\overline x_0$ form a neighborhood base at $\overline x_0$ in $\Sigma X$.
Hence, if $f_k:Y\to X\times [0,1]$, $k\in\mathbb{N}$, is a sequence of maps such that for every standard neighborhood $O(\mathscr{V},\eta)$ of $\beth$, we have $\text{Im}(f_k)\subseteq O(\mathscr{V},\eta)$ for all but finitely many $k\in\mathbb{N}$, then $\{q\circ f_k\}_{k\in\mathbb{N}}$ is a sequence of maps $Y\to\Sigma X$ that converges toward $\overline x_0$. Finally, we establish notation for the following open cover of $O(\mathscr{V},\eta)$: let \begin{enumerate} \item $O(\mathscr{V},\eta)^{\downarrow}=(V_{x_0}\times [0,1])\cup \bigcup_{x\in X}V_x\times [0,\eta(x))$, \item $O(\mathscr{V},\eta)^{\uparrow}=(V_{x_0}\times [0,1])\cup \bigcup_{x\in X}V_x\times (1-\eta(x),1]$. \end{enumerate} \begin{remark}\label{closed} Let $Y$ be a closed subset of $X$ containing $x_0$ and $O(\mathscr{W},\zeta)$ be a standard neighborhood in $Y\times I$ with $\mathscr{W}=\{W_y\mid y\in Y\}$. We can extend $\zeta:Y\to (0,1/3)$ to $\eta:X\to (0,1/3)$ by defining $\eta(x)=1/4$ for all $x\in X\setminus Y$. If we set $\mathscr{V}=\{V_x\mid x\in X\}$ where $V_x=W_x$ when $x\in Y$ and $V_x=X\backslash Y$ if $x\in X\backslash Y$, then $O(\mathscr{V},\eta)\cap (Y\times I)=O(\mathscr{W},\zeta)$. It follows that the inclusion $Y\to X$ induces an embedding $\Sigma Y\to \Sigma X$. Hence, we may identify $\Sigma Y$ naturally as a subspace of $\Sigma X$. \end{remark} \subsection{Downward and upward-sliding homotopies} \begin{definition} Let $S\subseteq \Sigma X$ be a subset. \begin{enumerate} \item The \textit{downward hull} of $S$ in $\Sigma X$ is $\dh(S)=\{\overline x_0\}\cup\bigcup_{(x,t)\in S\backslash\{\overline x_0\}}q(\{x\}\times [0,t])$. \item The \textit{upward hull} of $S$ in $\Sigma X$ is $\mathbf{uh}(S)=\{\overline x_0\}\cup\bigcup_{(x,t)\in S\backslash\{\overline x_0\}}q(\{x\}\times [t,1])$. \end{enumerate} \end{definition} \begin{remark} Notice that if $A\subseteq B\subseteq \Sigma X$, then $\dh(A)\subseteq \dh(B)$ and $\mathbf{uh}(A)\subseteq \mathbf{uh}(B)$.
If $O(\mathscr{V},\eta)$ is a standard neighborhood of $\beth$, and $S\subseteq q(O(\mathscr{V},\eta)^{\downarrow})$, then $\dh(S)\subseteq \dh(q(O(\mathscr{V},\eta)^{\downarrow}))= q(O(\mathscr{V},\eta)^{\downarrow})$ and if $S\subseteq q(O(\mathscr{V},\eta)^{\uparrow})$, then $\mathbf{uh}(S)\subseteq \mathbf{uh}(q(O(\mathscr{V},\eta)^{\uparrow}))= q(O(\mathscr{V},\eta)^{\uparrow})$. \end{remark} \begin{definition} Suppose that $Y$ is a space and $f=(f_1,f_2):Y\to X\times I$ is a map. \begin{enumerate} \item The \textit{downward homotopy} of $f$ is the map $H_{f}^{\downarrow}:Y\times I\to X\times I$ defined by $H^{\downarrow}_{f}(y,t)=(f_1(y), f_2(y)(1-t))$; \item The \textit{upward homotopy} of $f$ is the map $H_{f}^{\uparrow}:Y\times I\to X\times I$ defined by $H^{\uparrow}_{f}(y,t)=(f_1(y), f_2(y)(1-t)+t)$. \end{enumerate} \end{definition} \begin{remark} Note that if $\text{Im}(f)\subseteq \Sigma X\backslash\{\overline x_0\}$, then $\text{Im}(q\circ H_{f}^{\downarrow})=\dh(\text{Im}(f))$ and $\text{Im}(q\circ H_{f}^{\uparrow})=\mathbf{uh}(\text{Im}(f))$. Additionally, \begin{enumerate} \item If $\text{Im}(f)\subseteq O(\mathscr{V},\eta)^{\downarrow}$, then $\text{Im}(H_{f}^{\downarrow})\subseteq O(\mathscr{V},\eta)^{\downarrow}$, \item If $\text{Im}(f)\subseteq O(\mathscr{V},\eta)^{\uparrow}$, then $\text{Im}(H_{f}^{\uparrow})\subseteq O(\mathscr{V},\eta)^{\uparrow}$. \end{enumerate} \end{remark} \begin{lemma}\label{hep} Let $K$ be a simplicial complex and $L\subseteq K$ a subcomplex. There exists a retraction $r:K\times I\to K\times\{1\}\cup L\times I$. Moreover, if $i:K\times\{1\}\cup L\times I\to K\times I$ denotes the inclusion, we may assume that $i\circ r(\sigma \times I)\subseteq \sigma\times I$ for each simplex $\sigma\subseteq K$. \end{lemma} The first part of the lemma is simply the statement that $(K,L)$ has the homotopy extension property.
The second part of the lemma is an immediate consequence of standard proofs of the first part; see \cite[Prop 0.16]{Hatcher} for example. We explicitly state the second part of the lemma because it will be useful for us to have precise control of the retraction produced. \begin{definition}\label{push-up-down} Let $(K,L)$ be a pair of simplicial complexes with $L\subseteq K$. Let $f=(f_1,f_2):K\to X\times I$ be a map, and let $H_f^{\downarrow}:K\times I\to X\times I$ be the downward homotopy of $f$. Let $r$ be the retraction of $K\times I$ onto $K\times\{1\}\cup L\times I$ from \Cref{hep}. The restriction of $H^{\downarrow}_f\circ r$ to $K\times\{0\}$ defines a map $f^{\downarrow}_L:K\to X\times I$ which we call the \textit{pushdown of $f$ rel. $L$}. Note that $f^{\downarrow}_L|_L=f|_L$ and that $f^{\downarrow}_L$ has image in $$X\times\{0\}\cup\Big\{(f_1(a),t)\mid a\in L, 0\leq t\leq f_2(a)\Big\}.$$ In a symmetric manner, if $H_{f}^{\uparrow}:K\times I\to X\times I$ is the upward homotopy of $f$ and $r$ is the retraction of $K\times I$ onto $K\times\{1\}\cup L\times I$ from \Cref{hep}, then the restriction of $H^{\uparrow}_f\circ r$ to $K\times\{0\}$ defines a map $f^{\uparrow}_L:K\to X\times I$ which we call the \textit{pushup of $f$ rel. $L$}. \end{definition} If $L$ is clear from context, we may simply write $f^{\downarrow}$ and $f^{\uparrow}$ for the pushdown and pushup of $f$ respectively. \begin{remark}\label{updownsimplices} Note that $f$ is homotopic rel. $L$ to $f_{L}^{\downarrow}$ (respectively $f_{L}^{\uparrow}$) by a homotopy with image in $\bigcup_{k\in K}\{k\}\times[0,f_2(k)]$ (respectively, by a homotopy with image in $\bigcup_{k\in K}\{k\}\times[f_2(k),1]$).
Moreover, the condition placed on $r$ from \Cref{hep} in the definition of $f_L^{\downarrow}$ (respectively $f_L^{\uparrow}$) ensures that for each simplex $\sigma\subseteq K$, $f^{\downarrow}_L(\sigma)\subseteq \bigcup_{x\in \sigma}\{x\}\times[0,f_2(x)]$ (respectively, $f^{\uparrow}_L(\sigma)\subseteq \bigcup_{x\in \sigma}\{x\}\times[f_2(x), 1]$). In particular, this means that $q\circ f^{\downarrow}_L(\sigma)\subseteq \dh(q\circ f(\sigma))$ and $q\circ f^{\uparrow}_L(\sigma)\subseteq \mathbf{uh}(q\circ f(\sigma))$ for each simplex $\sigma\subseteq K$. \end{remark} \begin{example}\label{pathexample} The first situation in which we will apply pushdowns and pushups is the case where $(K,L)=(I,\partial I)$. If $\alpha=(\alpha_1,\alpha_2):I\to (X\backslash\{x_0\})\times (0,1)$ is a path from $(a,s)$ to $(b,t)$, then $\alpha\simeq\alpha_{\partial I}^{\downarrow}$ in $X\times I$. Moreover, when viewing $\alpha$ as a path $\alpha:I\to \Sigma X$, we have \[\alpha \simeq q\circ \alpha_{\partial I}^{\downarrow}\equiv (\lambda_{a})|_{[0,s]}^{-}\cdot c_{\overline x_0}\cdot \lambda_{b}|_{[0,t]}\] by a homotopy with image in $\dh(\text{Im}(q\circ \alpha))$. Similarly, $\alpha$ is path-homotopic to \[q\circ \alpha_{\partial I}^{\uparrow}\equiv (\lambda_{a})|_{[s,1]}\cdot c_{\overline x_0}\cdot (\lambda_{b}|_{[t,1]})^{-}\] by a homotopy with image in $\mathbf{uh}(\text{Im}(q\circ \alpha))$. \end{example} \section{Deforming arbitrary loops in $\Sigma X$} \begin{definition} Let $\mathscr{C}(X,x_0)$ denote the set of countable subsets of $X$ containing $x_0$ that cluster at $x_0$. If it will not cause confusion we may write $\mathscr{C}$ instead of $\mathscr{C}(X,x_0)$. \end{definition} If $C\in \mathscr{C}$, then, since $X$ is assumed to be Hausdorff, $C$ is closed in $X$ and either finite or homeomorphic to $\mathbb{E}_0$.
Thus $\Sigma C$ may be viewed as a subspace of $\Sigma X$ (recall \Cref{closed}) and is homeomorphic to either a point, a finite wedge of circles, or $\mathbb{E}_1$. \begin{definition} We say a loop $\alpha:[a,b]\to \Sigma X$ is \textit{irreducible} if $\alpha^{-1}(\overline x_0)=\{a,b\}$. \end{definition} \begin{lemma}\label{simplelooplemma} For every irreducible loop $\alpha:I\to \Sigma X$, there exist $C\in\mathscr{C}$ and a surjective loop $\beta:I\to \Sigma C$, which is path-homotopic to $\alpha$ in $\Sigma X$. Moreover, if $\text{Im}(\alpha)$ lies in a standard neighborhood $q(O(\mathscr{V},\eta))$, then this path-homotopy may be chosen to have image in $q( O(\mathscr{V},\eta))$. \end{lemma} \begin{proof} Let $\alpha:(I,\partial I)\to (\Sigma X, \overline x_0)$ be an irreducible loop. Find a strictly increasing function $a:\mathbb{Z}\to (0,1)$, written $a(n)=a_n$, such that $a(n)\to 1$ and $a(-n)\to 0$ as $n\to \infty$. Set $\alpha_n=\alpha|_{[a_n,a_{n+1}]}$ so that $\alpha\equiv\prod_{n\in\mathbb{Z}}\alpha_n$. Since $\alpha$ is irreducible, we have $\text{Im}(\alpha_n)\subseteq (X\backslash\{x_0\})\times (0,1)$ for all $n\in\mathbb{Z}$ and thus we may write $\alpha(a_n)=(y_n,s_n)$ for $y_n\in X\backslash\{x_0\}$ and $s_n\in (0,1)$. Consider the open cover of $X\times I$ by the sets $U^{\downarrow}=X\times [0,2/3)$ and $U^{\uparrow}=X\times (1/3,1]$. Since $\alpha_n:[a_n,a_{n+1}]\to X\times I$ is continuous, we may find a partition $a_{n}=t_{(n,0)}<t_{(n,1)}<t_{(n,2)}<\cdots <t_{(n,k_n)}=a_{n+1}$ such that $\alpha_n$ maps each interval $[t_{(n,j)},t_{(n,j+1)}]$ into either $U^{\downarrow}$ or $U^{\uparrow}$. Notice that $T=\{t_{(n,j)}\mid n\in\mathbb{Z},\,0\leq j\leq k_n\}$ is a closed subset of $(0,1)$ with $\inf(T)=0$, $\sup(T)=1$, and which has the order type of $\mathbb{Z}$. Choose an order isomorphism $a':\mathbb{Z}\to T$. By replacing $a$ with $a'$, we may assume from the start that $\alpha_n$ has image in either $U^{\downarrow}$ or $U^{\uparrow}$ for all $n\in\mathbb{Z}$.
Recall the pushup and pushdown construction in \Cref{push-up-down}, particularly the case described in \Cref{pathexample}. If $\text{Im}(\alpha_n)\subseteq U^{\downarrow}$, we have the homotopy $H_{\alpha_n}^{\downarrow}:[a_n,a_{n+1}]\times I\to X\times I$ and the pushdown $\alpha_{n}^{\downarrow}:[a_n,a_{n+1}]\to X\times I$ rel. $\partial I$. Then $q\circ H_{\alpha_n}^{\downarrow}$ can be modified to a path-homotopy $G_n:[a_n,a_{n+1}]\times I\to \Sigma X$ from $\alpha_n$ to $\gamma_n=q\circ \alpha_{n}^{\downarrow}$ with image in $\dh(\text{Im}(\alpha_n))$. On the other hand, if $\text{Im}(\alpha_n)\nsubseteq U^{\downarrow}$, we have $\text{Im}(\alpha_n)\subseteq U^{\uparrow}$ and we consider the homotopy $H_{\alpha_n}^{\uparrow}:[a_n,a_{n+1}]\times I\to X\times I$ and the pushup $\alpha_{n}^{\uparrow}:[a_n,a_{n+1}]\to X\times I$ rel. $\partial I$. Then $q\circ H_{\alpha_n}^{\uparrow}$ can be modified to a path-homotopy $G_n:[a_n,a_{n+1}]\times I\to \Sigma X$ from $\alpha_n$ to $\gamma_n=q\circ \alpha_{n}^{\uparrow}$ with image in $\mathbf{uh}(\text{Im}(\alpha_n))$. Let $G:I^2\to \Sigma X$ be defined so that $G(\{0,1\}\times I)=\overline x_0$ and so that $G$ agrees with $G_n$ on $[a_n,a_{n+1}]\times I$. To verify the continuity of $G$, it suffices to check the continuity of $G$ at points in $\{0,1\}\times I$. Let $O(\mathscr{V},\eta)$ be a standard neighborhood of $\beth$ and recall that $O(\mathscr{V},\eta)\subseteq (V_{x_0}\times I)\cup (X\times ([0,1/3)\cup (2/3,1]))$. The continuity of $\alpha$ at $0$ ensures that there exists an $N\in\mathbb{N}$ such that $\alpha([0,a_{1-N}])\subseteq q(O(\mathscr{V},\eta))$. Thus $\text{Im}(\alpha_n)\subseteq O(\mathscr{V},\eta)$ for $n\leq -N$. Fix $n\leq -N$. If $\text{Im}(\alpha_n)\subseteq U^{\downarrow}$, then $\text{Im}(\alpha_n)$ lies in $(X\times [0,2/3))\cap O(\mathscr{V},\eta)\subseteq O(\mathscr{V},\eta)^{\downarrow}$.
Therefore, \[\text{Im}(G_n)=\text{Im}(q\circ H_{\alpha_n}^{\downarrow})\subseteq \dh(\text{Im}(\alpha_n))\subseteq \dh(q(O(\mathscr{V},\eta)^{\downarrow}))\subseteq q(O(\mathscr{V},\eta)).\] On the other hand, if $\text{Im}(\alpha_n)\nsubseteq U^{\downarrow}$, then $\text{Im}(\alpha_n)$ lies in $(X\times (1/3,1])\cap O(\mathscr{V},\eta)$ and we have \[\text{Im}(G_n)=\text{Im}(q\circ H_{\alpha_n}^{\uparrow})\subseteq \mathbf{uh}(\text{Im}(\alpha_n))\subseteq \mathbf{uh}(q(O(\mathscr{V},\eta)^{\uparrow}))\subseteq q(O(\mathscr{V},\eta)).\] Thus $G([0,a_{1-N}]\times I)\subseteq q(O(\mathscr{V},\eta))$, proving that $G$ is continuous at the points of $\{0\}\times I$. The symmetric argument shows that $G$ is continuous at the points of $\{1\}\times I$. Note that this analysis ensures that if $\text{Im}(\alpha)\subseteq q(O(\mathscr{V},\eta))$, then $\text{Im}(G)\subseteq q(O(\mathscr{V},\eta))$. Now $G$ gives a path-homotopy in $\Sigma X$ from $\alpha$ to $\gamma(t)=G(t,1)$ where $\gamma_n=\gamma|_{[a_n,a_{n+1}]}$. Recall from \Cref{pathexample} that since $\alpha_n$ has endpoints $(y_n,s_n)$ and $(y_{n+1},s_{n+1})$, $\gamma_n$ is a reparameterization of either $(\lambda_{y_{n}})|_{[0,s_n]}^{-}\cdot c_{\overline x_0}\cdot \lambda_{y_{n+1}}|_{[0,s_{n+1}]}$ or $(\lambda_{y_n})|_{[s_n,1]}\cdot c_{\overline x_0}\cdot (\lambda_{y_{n+1}}|_{[s_{n+1},1]})^{-}$. After deleting the middle constant subloops, this factorization of $\gamma$ may be reassociated to a $\mathbb{Z}$-indexed concatenation $\delta\equiv\prod_{n\in\mathbb{Z}}\zeta_n$, which is path-homotopic to $\gamma$ within $\text{Im}(\gamma)$, where each factor $\zeta_n$ is an inverse pair $(\lambda_{y_{n}})|_{[0,s_n]}\cdot(\lambda_{y_{n}})|_{[0,s_n]}^{-}$ or $(\lambda_{y_n})|_{[s_n,1]}^{-}\cdot(\lambda_{y_n})|_{[s_n,1]}$ or a non-trivial loop of the form $\lambda_{y_n}^{\epsilon_n}$ for $\epsilon_n\in\{\pm\}$.
Set $M=\{m\in\mathbb{Z}\mid \zeta_m\equiv\lambda_{y_m}^{\epsilon_m}\}$, viewed as a suborder of $\mathbb{Z}$ (note that $M$ is order isomorphic to either a finite linear order $\{1,2,\dots,k\}$, $\mathbb{N}$, the reverse order of $\mathbb{N}$, or $\mathbb{Z}$). By canceling the factors $\zeta_n$, $n\in\mathbb{Z}\backslash M$ (those which are inverse pairs), we see that $\gamma$ either contracts in $\text{Im}(\gamma)$ or is path-homotopic within $\text{Im}(\gamma)$ to a concatenation of the form $\beta=\prod_{m\in M}\lambda_{y_{m}}^{\epsilon_m}$. Set $C=\{x_0\}\cup\{y_{m}\mid m\in M\}$. Certainly, we have $\text{Im}(\beta)=\Sigma C$. Since the homotopy $\gamma\simeq \beta$ lies in $\text{Im}(\gamma)$, the composed homotopy $\alpha\simeq \beta$ still lies in $q(O(\mathscr{V},\eta))$ whenever $\text{Im}(\alpha)$ does. To complete the proof, we check that $C\in\mathscr{C}$, i.e. that $C$ clusters at $x_0$. Let $V$ be a neighborhood of $x_0$ in $X$ and consider the standard neighborhood $W=(V\times I)\cup (X\times ([0,1/3)\cup (2/3,1]))$ of $\beth$. The continuity of $\beta$ ensures that $\lambda_{y_m}^{\epsilon_m}(1/2)=(y_{m},1/2)\in q(W)$ for all but finitely many $m\in M$. In particular, $y_{m}\in V$ for all but finitely many $m\in M$. Thus $C$ clusters at $x_0$. \end{proof} In the next theorem, we use the irreducible loop case to prove the same result for arbitrary loops. \begin{theorem}\label{standardform} For every based loop $\alpha:I\to \Sigma X$, there exist $C\in\mathscr{C}$ and a surjective loop $\beta:I\to \Sigma C$, which is path-homotopic to $\alpha$ in $\Sigma X$. Moreover, if $\text{Im}(\alpha)$ lies in a standard neighborhood $q(O(\mathscr{V},\eta))$, then this path-homotopy may be chosen to have image in $q( O(\mathscr{V},\eta))$. \end{theorem} \begin{proof} The theorem is clear for the constant loop at $\overline x_0$. Consider a non-constant loop $\alpha:I\to \Sigma X$ based at $\overline x_0$.
Let $\mathscr{U}$ be the set of connected components of $I\backslash \alpha^{-1}(\overline x_0)$ and note that if $J\in \mathscr{U}$, then $\alpha|_{\overline{J}}$ is an irreducible loop. Moreover, the continuity of $\alpha$ ensures that if $O(\mathscr{V},\eta)$ is a standard neighborhood, then $\alpha(\overline{J})\subseteq q(O(\mathscr{V},\eta))$ for all but finitely many $J\in\mathscr{U}$. Applying \Cref{simplelooplemma} to each $\alpha|_{\overline{J}}$, we obtain $C_J\in\mathscr{C}$ and a surjective loop $\beta_{J}:\overline{J}\to \Sigma C_{J}$, which is path-homotopic to $\alpha|_{\overline{J}}$ in $\Sigma X$. Let $G_{J}:\overline{J}\times I\to \Sigma X$ be such a homotopy (constructed as in the proof of \Cref{simplelooplemma}) from $\alpha|_{\overline{J}}$ to $\beta_J$ so that if $\text{Im}(\alpha|_{\overline{J}})$ lies in a standard neighborhood $q( O(\mathscr{V},\eta))$, then $\text{Im}(G_J)\subseteq q( O(\mathscr{V},\eta))$. Define $G:I^2\to \Sigma X$ so that $G(\alpha^{-1}(\overline x_0)\times I)=\overline x_0$ and so that for every $J\in\mathscr{U}$, $G$ agrees with $G_{J}$ on $\overline{J}\times I$. Given a standard neighborhood $ O(\mathscr{V},\eta)$, we have $\alpha(\overline{J})\subseteq q(O(\mathscr{V},\eta))$ for all but finitely many $J\in\mathscr{U}$ and thus $\text{Im}(G_J)\subseteq q( O(\mathscr{V},\eta))$ for all but finitely many $J\in\mathscr{U}$. The continuity of $G$ is a direct consequence of this observation. Note that if $\text{Im}(\alpha)\subseteq q( O(\mathscr{V},\eta))$, then our choice of $G_J$ ensures that $\text{Im}(G)\subseteq q( O(\mathscr{V},\eta))$. Thus the second statement of the theorem is established. Set $\beta(t)=G(t,1)$ and $C=\bigcup_{J\in\mathscr{U}}C_J$. Since $\text{Im}(\beta_J)= \Sigma C_J$, it is clear that $\text{Im}(\beta)= \Sigma C$. It suffices to check that $C\in\mathscr{C}$, i.e. that $C$ clusters at $x_0$.
Let $V$ be a neighborhood of $x_0$ in $X$ and consider the standard neighborhood $W=(V\times I)\cup (X\times ([0,1/3)\cup (2/3,1]))$. Then $\Sigma C_J=\text{Im}(\beta_J)\subseteq \text{Im}(G_J)\subseteq q(W)$ for all but finitely many $J\in\mathscr{U}$. Thus $C_J\subseteq V$ for all but finitely many $J\in\mathscr{U}$. Since each set $C_J$ clusters at $x_0$, it follows that $C$ clusters at $x_0$. \end{proof} \begin{remark}\label{structureofbeta} We take a moment to highlight an important feature of the maps constructed in the proofs of \Cref{simplelooplemma} and \Cref{standardform}, which will be referred to later on. Starting with an arbitrary loop $\alpha$, the conclusion of \Cref{standardform} produces a set $C\in\mathscr{C}$, a loop $\beta$, and a homotopy $G$ from $\alpha$ to $\beta$. The constructions used ensure that for each component $(a,b)$ of $I\backslash \beta^{-1}(\overline x_0)$, we have $\beta|_{[a,b]}\equiv \lambda_{x}^{\epsilon}$ for some $x\in C$ and $\epsilon\in\{\pm\}$. \end{remark} \begin{remark} Another interpretation of \Cref{standardform} is the following: for each $C\in\mathscr{C}$, let $f_C:\Sigma C\to \Sigma X$ be the inclusion map. Then \[\pi_1(\Sigma X,\overline x_0)=\bigcup_{C\in\mathscr{C}}(f_C)_{\#}(\pi_1(\Sigma C,\overline x_0)),\] that is, every element of $\pi_1(\Sigma X,\overline x_0)$ lies in the homomorphic image of a finitely generated free group or of the infinite earring group, where the homomorphism is induced by an inclusion map. \end{remark} Toward a characterization of the based spaces $(X,x_0)$ for which $\Sigma X$ is simply connected, we give the following definition. \begin{definition} A space $X$ is \textit{sequentially $0$-connected} at $x_0\in X$ if for every convergent sequence $\{x_k\}_{k\in\mathbb{N}}\to x_0$, there exists a sequence $\{\alpha_k\}_{k\in\mathbb{N}}$ of paths in $X$ that converges toward $x_0$ and such that $\alpha_k(0)=x_0$ and $\alpha_k(1)=x_k$ for all $k\in\mathbb{N}$.
\end{definition} Certainly, if a space $X$ is first countable and locally path connected at $x_0$, then $X$ is sequentially $0$-connected at $x_0$. \begin{lemma} For any based space $(X,x_0)$, $\Sigma X$ is sequentially $0$-connected at $\overline x_0$. \end{lemma} \begin{proof} Suppose $\{y_k\}_{k\in\mathbb{N}}\to\overline x_0$ in $\Sigma X$. We may assume that $y_k\neq \overline x_0$ and thus $y_k=(x_k,t_k)\in (X\times I)\backslash\beth$ for all $k$. If $0<t_k\leq 1/2$, let $\beta_k\equiv \lambda_{x_k}|_{[0,t_k]}$ and if $1/2<t_k<1$, let $\beta_k\equiv \lambda_{x_k}|_{[t_k,1]}^{-}$. Then each $\beta_k$ is a path in $\Sigma X$ from $\overline x_0$ to $y_k$. We check that $\{\beta_k\}_{k\in\mathbb{N}}$ converges toward $\overline x_0$. Let $O(\mathscr{V},\eta)$ be a standard neighborhood of $\beth$ so that $q(O(\mathscr{V},\eta))$ is a basic neighborhood of $\overline x_0$ in $\Sigma X$. Then there exists $K$ such that $(x_k,t_k)\in O(\mathscr{V},\eta)$ for all $k\geq K$. Fix $k\geq K$. If $(x_k,t_k)\in V_{x_0}\times [0,1]$, then it is clear that $\text{Im}(\beta_k)\subseteq q(O(\mathscr{V},\eta))$. Otherwise, $(x_k,t_k)\in V_x\times ([0,\eta(x))\cup (1-\eta(x),1])$ for some $x\in X$. If $(x_k,t_k)\in V_x\times [0,\eta(x))$, then $\text{Im}(\beta_k)=\dh(\{q(x_k,t_k)\})\subseteq q(O(\mathscr{V},\eta))$ and if $(x_k,t_k)\in V_x\times (1-\eta(x),1]$, then $\text{Im}(\beta_k)=\mathbf{uh}(\{q(x_k,t_k)\})\subseteq q(O(\mathscr{V},\eta))$. Thus $\text{Im}(\beta_k)\subseteq q (O(\mathscr{V},\eta))$ for all $k\geq K$. \end{proof} We now show that if $X$ is sequentially $0$-connected at $x_0$, then $\Sigma X$ is simply connected in a strong sense ``at $\overline x_0$''.
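The hypothesis of sequential $0$-connectedness is genuinely needed here; the following example, which uses only the definitions above, records the basic obstruction.

```latex
% E_0 is compact, metrizable, and totally disconnected, so every path in
% E_0 is constant: its image is a connected subset of a totally
% disconnected space. Hence no sequence of paths can connect the points
% n to the basepoint at infinity. Consistently with this, Sigma E_0 = E_1
% is the 1-dimensional earring space, whose fundamental group is well
% known to be non-trivial.
\begin{example}
Let $X=\mathbb{E}_0$ with basepoint $x_0=\infty$. Since $\mathbb{E}_0$ is
totally disconnected, every path in $\mathbb{E}_0$ is constant, so
$\mathbb{E}_0$ is not sequentially $0$-connected at $\infty$. Accordingly,
$\Sigma\mathbb{E}_0=\mathbb{E}_1$ is not simply connected.
\end{example}
```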
\begin{definition} A space $X$ is \textit{sequentially $1$-connected} at $x_0\in X$ if $X$ is sequentially $0$-connected at $x_0$ and if for every sequence of loops $f_n:(S^1,\ast)\to (X,x_0)$, $n\in\mathbb{N}$, that converges toward $x_0$, there exists a sequence of maps $g_n:D^2\to X$, $n\in\mathbb{N}$, on the closed unit disk that converges toward $x_0$ and such that $g_{n}|_{S^1}=f_n$ for every $n\in\mathbb{N}$. \end{definition} \begin{proposition}\label{seq-1-connected} If $X$ is sequentially $0$-connected at $x_0$, then $\Sigma X$ is sequentially $1$-connected at $\overline x_0$. In particular, $\pi_1(\Sigma X,\overline x_0)$ is trivial. \end{proposition} \begin{proof} Suppose that $\{\alpha_n\}_{n\in\mathbb{N}}$ is a sequence of based loops in $\Sigma X$ that converges toward $\overline x_0$. By \Cref{standardform}, for each $n\in\mathbb{N}$, we may assume there is some $C_n\in\mathscr{C}$ such that $\text{Im}(\alpha_n)=\Sigma C_n$. Moreover, by the second statement of \Cref{standardform}, we may assume that the collection $\{C_n\mid n \in\mathbb{N}\}$ clusters at $x_0$. Thus if we set $C=\bigcup_{n\in\mathbb{N}}C_n$, we have that $C\in\mathscr{C}$. Since $X$ is sequentially $0$-connected at $x_0$, for each $c\in C$, there exists a path $\gamma_c$ from $x_0$ to $c$ such that the collection $\{\gamma_c\mid c\in C\}$ clusters at $x_0$. Fix an enumeration $C_n\backslash\{x_0\}= \{c_{n,1}, c_{n, 2}, \dots \}$ of the (possibly finite) set $C_n\backslash\{x_0\}$. Using the paths $\gamma_c$, we may construct paths $\delta_n:I\to X$ such that $\delta_n(0)=x_0$, $\delta_n(1/k)=c_{n,k}$ for all values of $k$ used in the enumeration of $C_n\backslash\{x_0\}$, and such that the sequence $\{\delta_n\}_{n\in\mathbb{N}}$ converges toward $x_0$.
The map $\theta_n:C_n\to I$, $\theta_n(c_{n,k})=1/k$, $\theta_n(x_0)=0$ is an embedding and gives a factorization $\alpha_n=\Sigma\delta_n\circ \Sigma\theta_n\circ\alpha'_n$ through the contractible space $\Sigma I$, in which $\alpha_n':I\to \Sigma C_n$ is the map agreeing with $\alpha_n$ under the inclusion $i_n:\Sigma C_n\to \Sigma X$. \[\xymatrix{ I \ar[d]_-{\alpha'_n} \ar[r]^-{\alpha_n} & \Sigma X \\ \Sigma C_n \ar@{^{(}->}[ur]_{i_n} \ar[r]_{\Sigma\theta_n} & \Sigma I \ar[u]_{\Sigma\delta_n} }\] Thus the loops $\alpha_n$, $n\in\mathbb{N}$, contract by a sequence of null-homotopies that converge toward $\overline x_0$. \end{proof} \section{Fundamental groups of reduced suspensions are locally free} Toward a proof of \Cref{mainthm}, we give the following definition. Recall that a \textit{Peano continuum} is a connected compact metric space that is locally connected. \begin{definition} For a space $X$ with basepoint $x_0$, let $\mathscr{P}(X,x_0)$ denote the set of all subsets $A\subseteq X$ containing $x_0$ and which satisfy: \begin{enumerate} \item $A$ has countably many path components, \item each path component of $A$ is a Peano continuum, \item the path components of $A$ cluster at $x_0$. \end{enumerate} If it will not cause confusion, we may write $\mathscr{P}$ instead of $\mathscr{P}(X,x_0)$. \end{definition} While $\mathscr{P}$ is evidently partially ordered by subset inclusion ($A\leq B$ if $A\subseteq B$), it is not immediately clear if $\mathscr{P}$ is directed with respect to this relation. We recall and establish some general topology results, which will help us verify that $\mathscr{P}$ is directed, as well as aid in the subsequent applications of $\mathscr{P}$. \begin{theorem}[Hahn-Mazurkiewicz theorem] A space $X$ is a Peano continuum if and only if it is Hausdorff and there exists a continuous surjection $[0,1]\to X$.
\end{theorem} The Hahn-Mazurkiewicz theorem implies that Peano continua are locally path connected, since quotients of locally path-connected spaces are locally path connected. Another direct consequence is the following lemma. \begin{lemma}\label{fin-union} Let $\{A_i\}_{i\leq n}$ be a finite collection of subsets of $X$, each of which is a Peano continuum. If $\bigcup_{i\leq n}A_i$ is path connected, then it is a Peano continuum. \end{lemma} We also require the following fact, which applies more generally. \begin{lemma}\label{loc-connected} Let $C_1, C_2, \dots, C_n$ be a finite collection of closed subsets of a space $X$. If $C_i$ is locally connected for each $i\leq n$, then so is the union $\bigcup_{i\leq n}C_i$. \end{lemma} \begin{proof} It is straightforward to see that the disjoint union $\coprod_{i\leq n}C_i$ is locally connected and that $\coprod_{i\leq n}C_i\to \bigcup_{i\leq n}C_i$ is a closed continuous surjection, hence a quotient map. The lemma then follows from the fact that the quotient of a locally connected space is locally connected. \end{proof} \begin{remark}\label{weakly-loc} A space $X$ is \textit{weakly locally connected at $x\in X$} if there exists a basis of connected neighborhoods at $x\in X$, though these neighborhoods are not required to be open. If a space $X$ is locally connected at a point $x$, then it is clearly also weakly locally connected at $x$; however, the converse does not always hold. While these properties are distinct at individual points, a space $X$ is weakly locally connected at all points $x\in X$ if and only if it is locally connected at all points $x\in X$ \cite[Theorem 2.5]{degrootmcdowell}. \end{remark} \begin{lemma}\label{connected} Let $\mathscr{A}=\{A_i\}_{i\in\mathbb{N}}$ be a countably infinite collection of subsets of $X$ which cluster at $x_0$, where each $A_i$ is a Peano continuum. If $\bigcup_{i\in\mathbb{N}}A_i$ is path connected, then $\bigcup_{i\in\mathbb{N}}A_i\cup \{x_0\}$ is a Peano continuum.
\end{lemma} \begin{proof} Let $A = \bigcup_{i\in\mathbb{N}}A_i\cup \{x_0\}$. We begin by showing that $A$ is compact and metrizable. Let $B_i = A_i\cup\{x_0\}\subset X$ for each $i$. Each $B_i$ is compact and metrizable since $A_i$ is, and thus $\prod_{i\in\mathbb{N}}B_i$ is compact and metrizable. As a closed subset of $\prod_{i\in\mathbb{N}}B_i$, the shrinking wedge $\widetilde{\bigvee}_{i\in\mathbb{N}} (B_i, x_0)$ is compact and metrizable as well. Finally, $A$ is a Hausdorff quotient of the compact metrizable space $\widetilde{\bigvee}_{i\in\mathbb{N}} (B_i, x_0)$, hence compact and metrizable. We have that $A$ is connected since every open set containing $x_0$ intersects $\bigcup_{i\in\mathbb{N}}A_i$, which is path connected by assumption. If $a\in A$ and $a\neq x_0$, then by taking disjoint neighborhoods $U$ and $V$ of $a$ and $x_0$ respectively and noting that $V$ contains all but finitely many $A_i\in\mathscr{A}$, we see that $U$ intersects only finitely many $A_i\in\mathscr{A}$. Let $B$ be the union of the finitely many $A_i$ that intersect $U$. Then $B$ is locally connected by \Cref{loc-connected} and thus has a basis of connected open neighborhoods at $a$. Since we may assume these neighborhoods are contained in $U$, we see that $A$ is locally connected at all points $a\neq x_0$. We now show that $A$ is weakly locally connected at $x_0$, which by \Cref{weakly-loc} will finish showing that $A$ is locally connected, and thus a Peano continuum. Let $U$ be an open neighborhood of $x_0$ in $A$. We partition $\mathscr{A}$ into three sets. Let $\mathscr{B} = \{A_i\in\mathscr{A}\mid A_i\subseteq U\}$, $\mathscr{D} = \{A_i\in \mathscr{A}\mid x_0\in A_i, A_i\nsubseteq U\}$, and $\mathscr{E} = \{A_i\in \mathscr{A}\mid x_0\notin A_i, A_i\nsubseteq U\}$. Note that $\mathscr{D}$ and $\mathscr{E}$ are finite sets. For each $D\in \mathscr{D}$, find a connected open neighborhood $U_D$ of $x_0$ in $D$ such that $U_D\subseteq U$.
Let $N\subseteq U$ be the union $$N = \bigcup_{D\in\mathscr{D}}U_D\cup\bigcup \mathscr{B}\cup\{x_0\}.$$ Though $N$ need not be open, it is a neighborhood of $x_0$ because $C=\bigcup_{D\in\mathscr{D}}(D\setminus U_D)\cup \bigcup\mathscr{E}$ is a closed subset of $A$ whose complement $V=A\backslash C$ is open in $A$ and satisfies $x_0\in V\subseteq N$. We now show that $N$ must have only finitely many connected components, in which case the connected component $W$ of $N$ containing $x_0$ is open in $N$. Toward a contradiction, suppose that $N$ has infinitely many connected components. Each connected component of $N$ distinct from $W$ is a finite union of elements of $\mathscr{B}$. Since $\bigcup_{i\in\mathbb{N}}A_i$ is path connected, each such component must intersect an element of either $\mathscr{D}$ or $\mathscr{E}$. Since both $\mathscr{D}$ and $\mathscr{E}$ are finite sets, there exists $A'\in \mathscr{D}\cup\mathscr{E}$ such that the set $F= A'\setminus\bigcup_{D\in\mathscr{D}} U_D$ intersects an infinite number of the components of $N$. However, $F$ is a closed set that does not contain $x_0$ yet has non-trivial intersection with every neighborhood of $x_0$; a contradiction. Finally, since $W$ is open in $N$, we may write $W=N\cap W'$ for a set $W'$ that is open in $A$. Then $V\cap W'$ is open in $A$ and $x_0\in V\cap W'=(V\cap N)\cap W'=V\cap W\subseteq W$. Thus $W$ is a connected neighborhood (in $A$) of $x_0$ contained in $U$. We conclude that $A$ is weakly locally connected at $x_0$. \end{proof} \begin{proposition}\label{components-are-Peano} Let $\mathscr{A}=\{A_i\}_{i\in \mathcal{I}}$ be a countable collection of subsets of $X$ which cluster at $x_0$, and where each $A_i$ is a Peano continuum. Each path component of $\bigcup_{i\in \mathcal{I}} A_i\cup\{x_0\}$ is a Peano continuum. \end{proposition} \begin{proof} If $\mathscr{A}$ is finite, then the proposition is clear from \Cref{fin-union} so we assume $\mathscr{A}$ is infinite.
Let $A=\bigcup_{i\in \mathcal{I}} A_i\cup\{x_0\}$ and let $C$ be a path component of $A$. If infinitely many elements of $\mathscr{A}$ are subsets of $C$, then \Cref{connected} implies that $C$ also contains $x_0$. So if $x_0\notin C$, then $C$ is the union of only finitely many $A_i\in \mathscr{A}$, in which case \Cref{fin-union} implies that $C$ is a Peano continuum. So now assume that $C$ is the path component of $A$ containing $x_0$. Let $\mathscr{B}=\{A_i\in\mathscr{A}\mid A_i\subseteq C\}$. Note that a path component $P$ of $\bigcup \mathscr{B}$ either contains $x_0$ or contains infinitely many $A_i$. In either case, $P\cup\{x_0\}$ is a Peano continuum (using \Cref{fin-union} or \Cref{connected}). Thus if $\{P_j\}_{j\in J}$ is the set of path components of $\bigcup\mathscr{B}$, then $\{P_j\cup\{x_0\}\}_{j\in J}$ is a collection of Peano continua that clusters at $x_0$ and whose union is $C$. Finally, applying \Cref{fin-union} (when $J$ is finite) or \Cref{connected} (when $J$ is infinite) to the collection $\{P_j\cup\{x_0\}\}_{j\in J}$, it follows that $C$ is a Peano continuum. \end{proof} Let $\leq$ be the partial order on $\mathscr{P}$ defined by subset inclusion. \begin{corollary}\label{directed} $(\mathscr{P}, \leq)$ is a directed set. \end{corollary} \begin{proof} We must show that every pair in $\mathscr{P}$ has an upper bound in $\mathscr{P}$. So let $A_1, A_2\in \mathscr{P}$ and set $A=A_1\cup A_2$. It is straightforward to see that $A$ has countably many path components, and that these path components cluster at $x_0$. Moreover, each path component of $A$ is a Peano continuum by \Cref{components-are-Peano}. Hence $A\in \mathscr{P}$. \end{proof} If $A\in\mathscr{P}$, then $A$ is closed in $X$ and, according to \Cref{closed}, $\Sigma A$ may be identified canonically as a subspace of $\Sigma X$. We implicitly make this identification in the following results.
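For concreteness, here is a minimal example of an element of $\mathscr{P}$; the particular ambient space $X=[0,1]$ with $x_0=0$ is our illustrative assumption and is not used elsewhere.

```latex
% The path components of A are {0} together with the arcs
% [1/(2n+1), 1/(2n)], n = 1, 2, 3, ...: there are countably many of them,
% each is a Peano continuum, and they cluster at 0 since
% [1/(2n+1), 1/(2n)] lies in [0, epsilon) once n > 1/(2 epsilon).
% Hence A satisfies conditions (1)-(3) and A belongs to P([0,1], 0).
\[
A \;=\; \{0\}\cup\bigcup_{n\in\mathbb{N}}
\left[\tfrac{1}{2n+1},\tfrac{1}{2n}\right]\;\subseteq\;[0,1].
\]
```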
\begin{lemma}\label{loc-free} For each $A\in\mathscr{P}$, $\pi_1(\Sigma A, \overline x_0)$ is either free on finitely many generators or isomorphic to $\pi_1(\mathbb{E}_1, b_0)$. \end{lemma} \begin{proof} Let $\{A_i\}_{i\in \mathcal{I}}$ denote the path components of $A$, where the index set $\mathcal{I}\subseteq \mathbb{N}\cup\{0\}$ contains $0$, and $A_0$ is the path component of $x_0$. Let $a_0=x_0$ and for each $i\in \mathcal{I}$ with $i\neq 0$, choose a point $a_i\in A_i$ and let $C=\{a_i\}_{i\in \mathcal{I}}$. Let $r:A\to C$ be the retraction which maps each path component $A_i$ to $a_i$. Note that, when $i\neq 0$, $A_i$ is open in $A$ and therefore $r$ is continuous on $A_i$. Next, consider a point $x\in A_0$. A basic open neighborhood of $r(x)=a_0$ in $C$ has the form $U=C\backslash \{a_i\mid i\in F\}$ for some finite set $F\subseteq \mathcal{I}\backslash \{0\}$. Now $V=\bigcup\{A_i\mid i\in \mathcal{I}\backslash F\}$ is an open neighborhood of $x$ such that $r(V)=U$. Thus $r$ is continuous at $x$. Let $\Sigma r:\Sigma A\to \Sigma C$ be the retraction induced by $r$. If $i_\#:\pi_1(\Sigma C,\overline x_0)\to \pi_1(\Sigma A,\overline x_0)$ is the homomorphism induced by inclusion, then $i_\#$ is injective since it has left inverse $(\Sigma r)_\#$. We now show that $i_\#$ is surjective. Let $[\alpha]\in \pi_1(\Sigma A,\overline x_0)$. By \Cref{standardform}, there exists $B\in\mathscr{C}(A,x_0)$ and a based loop $\beta:I\to \Sigma B$ that is path-homotopic to $\alpha$ in $\Sigma A$. Moreover, recall from \Cref{structureofbeta} that $\beta$ is constructed so that if $\mathcal{J}$ is the set of connected components of $I\backslash\beta^{-1}(\overline x_0)$, then for each $J\in \mathcal{J}$, $\beta|_{\overline{J}}\equiv \lambda_{x_J}^{\epsilon_{J}}$ for some $x_J\in B$ and $\epsilon_{J}\in\{\pm\}$. The continuity of $\beta$ ensures that $\{x_J\mid J\in\mathcal{J}\}$ clusters at $x_0$. 
For each $J\in\mathcal{J}$, we let $i_J$ denote the element of $\mathcal{I}$ such that $x_J\in A_{i_J}$. Observe that, since $\{x_J\mid J\in\mathcal{J}\}$ clusters at $x_0$, the set $\{J\in\mathcal{J}\mid i_J=n\}$ can only be infinite if $n=0$. Now, for each $J\in\mathcal{J}$, we choose a path $\gamma_J:I\to A_{i_J}$ from $x_J$ to $a_{i_J}$ as follows. If $i_J\neq 0$, then the path $\gamma_{J}$ may be chosen arbitrarily. Note that the (possibly finite) set $\{x_J\mid i_J=0\}$ clusters at $x_0$. Since $A_0$ is a Peano continuum, it is sequentially $0$-connected at $x_0$. Therefore, we may choose the set $\{\gamma_J\mid i_J=0\}$ so that it clusters at $x_0$. With all paths $\gamma_J$, $J\in\mathcal{J}$, chosen in this way, it follows that the set $\{\gamma_J\mid J\in\mathcal{J}\}$ clusters at $x_0$. For each $J\in\mathcal{J}$, let $H_J:\overline{J}\times I\to \Sigma A$ be the map defined by $H_J(s,t)=\lambda_{\gamma_J(t)}^{\epsilon_J}(s)$. Now, let $H:I^2\to \Sigma A$ be defined so that $H(\beta^{-1}(\overline x_0)\times I)=\overline x_0$ and $H(s,t)=H_J(s,t)$ when $s\in \overline{J}$. Then $H(s,0)=\beta(s)$ and $H(s,1)$ has image in $\Sigma C$. Since every neighborhood of $x_0$ in $X$ contains all but finitely many of the sets $\text{Im}(\gamma_J)$, every neighborhood of $\overline x_0$ in $\Sigma X$ contains all but finitely many of the sets $\text{Im}(H_J)$; this guarantees the continuity of $H$. Then the homotopy class of the loop $H(s,1)$ is mapped to $[\alpha]$ by $i_\#$, showing that $i_\#$ is surjective. Therefore, $i_\#$ is an isomorphism. We finish by observing that $\pi_1(\Sigma C)$ is free on finitely many generators when $C$ is finite and $\pi_1(\Sigma C)\cong \pi_1(\mathbb{E}_1)$ when $C$ is infinite. 
\end{proof} Let $\{\pi_1(\Sigma A,\overline x_0)\}_{A\in\mathscr{P}}$ be the direct system of groups indexed by $\mathscr{P}$ where the bonding maps $\phi_{AB}:\pi_1(\Sigma A, \overline x_0)\to \pi_1(\Sigma B,\overline x_0)$ for $A\leq B$ are induced by the inclusion $A\to B$. We may then consider the direct limit $\varinjlim_{A\in\mathscr{P}} \pi_1(\Sigma A, \overline x_0)$. For each $B\in \mathscr{P}$, let $\phi_B$ denote the map $\phi_B:\pi_1(\Sigma B, \overline x_0)\to \varinjlim_{A\in\mathscr{P}}\pi_1(\Sigma A,\overline x_0)$. Since for each $B\in\mathscr{P}$, the inclusion $B\to X$ induces a homomorphism $\psi_B:\pi_1(\Sigma B, \overline x_0)\to \pi_1(\Sigma X, \overline x_0)$, we obtain an induced homomorphism $$\psi:\varinjlim_{A\in\mathscr{P}} \pi_1(\Sigma A, \overline x_0)\to \pi_1(\Sigma X, \overline x_0).$$ \begin{lemma}\label{inj} Let $A_1\in\mathscr{P}$, let $\alpha:I\to \Sigma A_1$ be a loop based at $\overline x_0$, and suppose that $\alpha$ is null-homotopic in $\Sigma X$. Then there exists $A_2\in\mathscr{P}$ such that $\alpha$ is null-homotopic in $\Sigma A_2$. \end{lemma} \begin{proof} Let $\alpha:I\to \Sigma A_1$ be a loop based at $\overline x_0$ and suppose $H:I^2\to \Sigma X$ is a null-homotopy of $\alpha$. Since any second countable surface (with boundary) admits a triangulation \cite[p. 107]{Ahlfors}, let $T:K\to I^2\setminus H^{-1}(\overline x_0)$ be a triangulation of $I^2\setminus H^{-1}(\overline x_0)$. To simplify the notation, we identify $K$ with $I^2\setminus H^{-1}(\overline x_0)$ under $T$ so that a simplex $\sigma$ of $K$ is equally regarded as the closed subset $T(\sigma)$ of $I^2\setminus H^{-1}(\overline x_0)$. Let $K_m$ denote the set of $m$-simplices in the simplicial complex $K$. 
Let $U_0, U_1$ be the open sets in $K$ defined by $$U_0=H^{-1}\Big(q\big((X\setminus x_0)\times (0, 2/3)\big)\Big),\quad U_1=H^{-1}\Big(q\big((X\setminus x_0)\times (1/3, 1)\big)\Big).$$ Let $\mathscr{S}_0$ be the collection of all $\sigma\in K_2$ such that $\sigma\subseteq U_0$, and let $\mathscr{S}_1$ be the collection of all $\sigma\in K_2$ such that $\sigma\subseteq U_1$. Since $U_0$ and $U_1$ cover $K$, by a theorem of J.H.C. Whitehead \cite[Theorem 35]{JHCWhitehead}, there exists a subdivision $K'$ of $K$ such that every simplex of $K'$ is contained in either $U_0$ or $U_1$. Thus, by replacing $K$ with $K'$, we may assume that $\mathscr{S}_0\cup\mathscr{S}_1=K_2$. Let $\mathscr{B}$ denote the collection of all $\tau\in K_1$ such that there exists $\sigma\in \mathscr{S}_0$ and $\sigma'\in \mathscr{S}_1$ with $\tau=\sigma\cap\sigma'$, and let $B=\bigcup\mathscr{B}$. Since $H(B)\subset q((X\setminus x_0)\times (1/3, 2/3))$, let $B'$ be the image of $H(B)$ under the projection $p:q((X\setminus x_0)\times (1/3, 2/3))\to X\setminus x_0$. Finally, set $A_2=A_1\cup B'$. To show that $A_2\in\mathscr{P}$, it suffices to show that $B'\cup\{x_0\}\in \mathscr{P}$. First write $\mathscr{B}=\{\tau_i\}_{i\in \mathcal{I}}$, and let $t_i =p(H(\tau_i))\subseteq X$ for each $i\in \mathcal{I}$. Observe that $t_i$ is a Peano continuum for each $i\in \mathcal{I}$ by the Hahn-Mazurkiewicz theorem. If $U$ is a neighborhood of $x_0$ in $X$, let $$V=q\Big((U\times[0,1])\cup (X\times[0, 1/3)\cup (2/3, 1])\Big).$$ Since $V$ is an open neighborhood of $\overline x_0$ in $\Sigma X$, $H^{-1}(V)$ is an open set in $I^2$ which contains $H^{-1}(\overline x_0)$. Then $C=I^2\setminus H^{-1}(V)$ is a compact subset of $I^2\setminus H^{-1}(\overline x_0)$. This implies that $C$ intersects $\tau_i$ for only finitely many $i\in \mathcal{I}$ since $K$ is necessarily locally finite. Then $V$ contains $H(\tau_i)$ for all but finitely many $i\in \mathcal{I}$. 
Due to the fact that $H(\tau_i)\subseteq q(X\times (1/3, 2/3))$ for all $i\in\mathcal{I}$, it follows that $U$ contains $t_i$ for all but finitely many $i\in \mathcal{I}$. Hence we may apply \Cref{components-are-Peano} to see that $B'\cup \{x_0\}=\bigcup_{i\in \mathcal{I}}t_i\cup \{x_0\}$ is in $\mathscr{P}$. Thus $A_2\in \mathscr{P}$. We now construct a map $H':I^2\to \Sigma A_2$ such that $H'|_{\partial I^2}=H|_{\partial I^2}$, which will prove the lemma. Let $C_{0}$ be the union of all $\tau\in K_1$ such that there is exactly one $\sigma\in \mathscr{S}_0$ such that $\tau\subset \sigma$. Likewise, let $C_{1}$ be the union of all $\tau\in K_1$ such that there is exactly one $\sigma\in \mathscr{S}_1$ such that $\tau\subset \sigma$. Let $S_{0}=\bigcup \mathscr{S}_0$ and $S_{1}=\bigcup \mathscr{S}_1$. Then $(S_{0}, C_{0})$ and $(S_{1}, C_{1})$ are simplicial complex pairs. Let $f=H|_{S_{0}}$ and $g=H|_{S_{1}}$. Since $f$ and $g$ have image in $\Sigma X\setminus \overline x_0$, we may regard them as maps with codomain $X\times I$. Let $f_{C_{0}}^{\downarrow}$ be the pushdown of $f$ rel. $C_0$ as described in \Cref{push-up-down}. Similarly, let $g_{C_{1}}^{\uparrow}$ be the pushup of $g$ rel. $C_1$. We define a map $H':I^2\to \Sigma X$ by setting $H'|_{S_{0}}=q\circ f_{C_{0}}^{\downarrow}$, $H'|_{S_{1}}=q\circ g_{C_{1}}^{\uparrow}$, and $H'(H^{-1}(\overline x_0))=\overline x_0$. Note that $I\times\{0\}\subset C_0\cup C_1\cup H^{-1}(\overline x_0)$, hence $H'|_{I\times\{0\}}=H|_{I\times\{0\}}=\alpha$. By the definition of $f^{\downarrow}_{C_0}$ and $g^{\uparrow}_{C_1}$, we have that $H'$ has image in $\dh(H(C_0))\cup \mathbf{uh}(H(C_1))$. Since $C_0\cup C_1=(I\times\{0\})\cup B$, we have that $H(C_0)\cup H(C_1)= \text{Im}(\alpha)\cup H(B)$. It follows that $H'$ has image in $\Sigma A_2=\Sigma (A_1\cup B')$. All that remains is to verify that $H'$ is continuous. 
Since $f$ and $g$ agree on $B=C_{0}\cap C_{1}=S_{0}\cap S_{1}$, so too do $f^{\downarrow}_{C_{0}}$ and $g^{\uparrow}_{C_1}$. Thus $H'$ is continuous at all points in $S_{0}\cup S_{1}$. Now suppose that $y\in I^2\setminus(S_{0}\cup S_{1})=H^{-1}(\overline x_0)$. Then $H'(y)=\overline x_0$, so let $U=q(O(\mathscr{V}, \eta))$ be a standard open neighborhood of $\overline x_0$ in $\Sigma X$, and let $U^\downarrow = q(O(\mathscr{V}, \eta)^\downarrow)$ and $U^\uparrow = q(O(\mathscr{V}, \eta)^\uparrow)$. Since $H^{-1}(U)$ is an open set in $I^2$ containing $H^{-1}(\overline x_0)$, we have that $H^{-1}(U)$ contains all but finitely many $\sigma\in K_2$. So let $F$ be the finite union of all $\sigma\in K_2$ such that $\sigma$ is not contained in $H^{-1}(U)$. Then $F$ is a closed set whose complement $V=I^2\setminus F$ is an open set in $I^2$ which contains $H^{-1}(\overline x_0)$ and satisfies $\overline V\subseteq H^{-1}(U)$. Recalling \Cref{updownsimplices}, we have that $f^{\downarrow}_{C_0}(\sigma)\subset \dh(f(\sigma))$ for all $\sigma\in \mathscr{S}_0$, and $g^{\uparrow}_{C_1}(\sigma)\subset \mathbf{uh}(g(\sigma))$ for all $\sigma\in \mathscr{S}_1$. Hence $H'(V)\subseteq \dh(H(\overline V\cap S_0))\cup \mathbf{uh}(H(\overline V\cap S_1))$. Since $H(\overline V\cap S_0)\subseteq q(X\times [0, 2/3))\cap U$, we have that $H(\overline V\cap S_0)\subseteq U^{\downarrow}$. Consequently, $\dh(H(\overline V\cap S_0))\subseteq U^\downarrow\subseteq U$. By a symmetric argument, $\mathbf{uh}(H(\overline V\cap S_1))\subseteq U^\uparrow\subseteq U$. Thus, $V$ is an open neighborhood of $y$ such that $H'(V)\subset U$. Hence $H'$ is continuous. \end{proof} \begin{corollary}\label{adjoint-paths} Two adjoint paths $\lambda_x$ and $\lambda_y$ are homotopic in $\Sigma X$ if and only if $x$ and $y$ lie in the same path component of $X$. \end{corollary} \begin{proof} If $\alpha:I\to X$ is a path from $x$ to $y$, then $H:I^2\to \Sigma X$, $H(s,t)=q(\alpha(t),s)$ is a homotopy rel. 
$\partial I$ from $\lambda_x$ to $\lambda_y$. Conversely, suppose that $\lambda_x$ and $\lambda_y$ are homotopic. Then $\lambda_x\cdot\lambda_{y}^{-}$ is nullhomotopic in $\Sigma X$. \Cref{inj} implies that there exists $A\in\mathscr{P}$ such that $\lambda_x\cdot\lambda_{y}^{-}$ is nullhomotopic in $\Sigma A$. Let $\{A_i\}_{i\in \mathcal{I}}$ denote the path components of $A$, with $A_0$ being the path component of $x_0$. Let $a_0=x_0$ and for each $i\in\mathcal{I}$, choose a point $a_i\in A_i$. Let $C=\{a_i\}_{i\in\mathcal{I}}$ and let $\Sigma r:\Sigma A\to \Sigma C$ be the retraction as in the proof of \Cref{loc-free}. We have that $\Sigma r\circ\lambda_x=\lambda_{a_i}$ and $\Sigma r\circ\lambda_y=\lambda_{a_j}$ for some choice of $i,j\in\mathcal{I}$. But then $$1=[\Sigma r\circ(\lambda_x\cdot\lambda_y^-)]=[\lambda_{a_i}][\lambda_{a_j}]^{-1}$$ in $\pi_1(\Sigma C,\overline x_0)$. However, $\Sigma C$ is homeomorphic to a finite wedge of circles or $\mathbb{E}_1$ and in such a space, we must have $i=j$. Hence $x$ and $y$ are both contained in $A_i=A_j$, a path-connected subset of $X$. \end{proof} Finally, we prove our main result. \begin{proof}[Proof of \Cref{mainthm}] By \Cref{standardform}, every element of $\pi_1(\Sigma X,\overline x_0)$ is of the form $i_\#([\alpha])$ for some $[\alpha]\in \pi_1(\Sigma C, \overline x_0)$ where $C\subseteq X$ is a countable set which clusters at $x_0$ and $i:\Sigma C\to \Sigma X$ is the inclusion. Since $C\in \mathscr{C}\subseteq\mathscr{P}$, we have that $\psi$ is surjective. For injectivity, let $a\in \varinjlim_{A\in\mathscr{P}} \pi_1(\Sigma A, \overline x_0)$ and suppose that $\psi(a)=1$. There exists $B\in \mathscr{P}$ and $[\alpha]\in\pi_1(\Sigma B, \overline x_0)$ such that $\phi_B([\alpha])=a$. The fact that $\psi(a)=1$ and $\psi(a)=\psi\circ \phi_B([\alpha])=\psi_B([\alpha])$ implies that $\alpha$ is null-homotopic in $\Sigma X$. 
\Cref{inj} implies that there exists $C\in\mathscr{P}$ such that $\alpha$ is null-homotopic in $\Sigma C$, that is, $\phi_{BC}([\alpha])=1$. Hence $a=\phi_B([\alpha])=\phi_{C}\circ \phi_{BC}([\alpha])=1$. This shows that $\psi$ is injective. \end{proof} \begin{theorem}\label{equiv} For a Hausdorff space $X$, the following are equivalent: \begin{enumerate} \item $X$ is sequentially 0-connected at $x_0$, \item $\Sigma X$ is sequentially $1$-connected at $\overline x_0$, \item $\Sigma X$ is simply connected. \end{enumerate} \end{theorem} \begin{proof} The implication $(1)\Rightarrow (2)$ is \Cref{seq-1-connected}, and $(2)\Rightarrow (3)$ follows by definition. We show that $(3)\Rightarrow (1)$ holds. So suppose that $\Sigma X$ is simply connected and $\{x_n\}_{n\in\mathbb{N}}\to x_0$ in $X$. We may assume that all terms in this sequence are distinct and not equal to $x_0$. Let $B=\{x_0\}\cup\{x_n\mid n\in\mathbb{N}\}$ and consider the $\mathbb{N}$-concatenation loop $\alpha=\prod_{n\in\mathbb{N}}\lambda_{x_n}$ in $\Sigma B$. By \Cref{mainthm} and the fact that $\Sigma X$ is simply connected, there exists $A\in\mathscr{P}$ with $B\subseteq A$ such that the inclusion $\iota: B\to A$ gives $(\Sigma \iota)_{\#}([\alpha])=1$. Let $\mathscr{A}=\{A_i\mid i\in\mathcal{I}\}$ be the set of path components of $A$ where $A_0$ denotes the path component containing $x_0$. For each $i\in\mathcal{I}$ pick a point $a_i\in A_i$, in particular, selecting $a_0=x_0$. Let $C=\{a_i\mid i\in \mathcal{I}\}$ and note that $\Sigma C$ is homeomorphic to a point, a finite wedge of circles, or $\mathbb{E}_1$. Let $r:A\to C$ be the retraction mapping $A_i$ to $a_i$ and note that $x_n\in A_i$ if and only if $r(x_n)=a_i$. The induced homomorphism $(\Sigma r)_{\#}:\pi_1(\Sigma A,\overline x_0)\to\pi_1(\Sigma C,\overline x_0)$ gives $(\Sigma (r\circ \iota))_{\#}([\alpha])=1$. 
However, the loop $\Sigma (r\circ \iota)\circ \alpha= \prod_{n\in\mathbb{N}}\lambda_{r(x_n)}$ can only be null-homotopic in $\Sigma C$ if $r(x_n)=x_0$ for all $n\in\mathbb{N}$. Thus $x_n\in A_0$ for all $n\in\mathbb{N}$. Since $A_0$ is a Peano continuum, it is locally path connected and first countable, hence sequentially $0$-connected. Thus we may find a sequence of paths $\{\alpha_n\}_{n\in\mathbb{N}}$ in $A_0$ which clusters at $x_0$ and such that $\alpha_n(0)=x_0$ and $\alpha_n(1)=x_n$ for all $n\in\mathbb{N}$. We conclude that $X$ is sequentially $0$-connected at $x_0$. \end{proof} In \cite{Eda90}, K. Eda constructs a simply connected space $X$ which is locally simply connected at a point $x\in X$, yet not sequentially $1$-connected at $x$. The space constructed by Eda, as a set, is a quotient of an infinite number of copies of $C\mathbb{E}_1$, the unreduced cone over the infinite earring. The construction gives $X$ a topology which makes it not first countable at $x$. A similar construction, but instead using copies of the unreduced cone over the space $\mathbb{E}_0$, can be used to show that there exists a path connected space $X$ which is locally path connected at a point $x\in X$, yet not sequentially $0$-connected at $x$. Applying \Cref{equiv} to this space yields the following corollary. \begin{corollary} There exists a path-connected space $(X,x_0)$ which is locally path connected at its basepoint $x_0$ but such that $\Sigma X$ is not simply connected. \end{corollary}
https://arxiv.org/abs/1710.10616
$k$-Foldability of Words
We extend results regarding a combinatorial model introduced by Black, Drellich, and Tymoczko (2017+) which generalizes the folding of the RNA molecule in biology. Consider a word on the alphabet $\{A_1, \overline{A}_1, \ldots, A_m, \overline{A}_m\}$ in which $\overline{A}_i$ is called the complement of $A_i$. A word $w$ is foldable if it can be wrapped around a rooted plane tree $T$, starting at the root and working counterclockwise, such that one letter labels each half edge and the two letters labeling the same edge are complements. The tree $T$ is called $w$-valid. We define a bijection between edge-colored plane trees and words folded onto trees. This bijection is used to characterize and enumerate words for which there is only one valid tree. We follow up with a characterization of words for which there exist exactly two valid trees. In addition, we examine the set $\mathcal{R}(n,m)$ consisting of all integers $k$ for which there exists a word of length $2n$ with exactly $k$ valid trees. Black, Drellich, and Tymoczko showed that for the $n$th Catalan number $C_n$, $\{C_n,C_{n-1}\}\subset \mathcal{R}(n,1)$ but $k\not\in\mathcal{R}(n,1)$ for $C_{n-1}<k<C_n$. We describe a superset of $\mathcal{R}(n,1)$ in terms of the Catalan numbers, thereby establishing more missing intervals. We also prove $\mathcal{R}(n,1)$ contains all non-negative integers less than $n+1$.
\section{Introduction} The molecule ribonucleic acid (RNA) consists of a single strand of the four nucleotides adenine, uracil, cytosine, and guanine. In short, RNA is representable by finite sequences (or words) from the alphabet $A$, $U$, $C$, and $G$, lending itself to combinatorial study. In contrast to the double helix of DNA, the single-stranded nature of RNA often results in RNA folding onto itself as the nucleotides form bonds. As in DNA, we have the Watson-Crick pairs so that $C$ and $G$ form bonds and $A$ and $U$ form bonds. However, RNA has one more bond that may form, the wobble pair $G$ and $U$. It is worth noting that when RNA folds onto itself, not all nucleotides on a strand form bonds. Predicting the folded structure of RNA is important as the folded structure gives indication of its functionality. In this paper, we direct our attention to a generalized combinatorial model, motivated by the folding of RNA. This model was first introduced by Black, Drellich, and Tymoczko~\cite{black2015} with an initial restriction made to the Watson-Crick bonding pairs, leaving the potential $GU$ bond for future study. With this restriction, we relabel our words to use the letters $A_1$, $\o{A}_1$, $A_2$, $\o{A}_2$ where $A_1$ only bonds with $\o{A}_1$ and $A_2$ only bonds with $\o{A}_2$. Further, we do not limit ourselves to an alphabet with only four letters. In particular, fix an integer $m\geq 1$ and expand the alphabet to $A_1, \o{A}_1, A_2, \o{A}_2, \ldots, A_m, \o{A}_m$ where $A_i$ and $\o{A}_i$ are called \emph{complements} and $A_i$ may only form a bond with $\o{A}_i$ and vice versa. We say that this is an alphabet on $m$ letters and their complements. Define the length of a word $w$ to be the number of letters in the word, letting $\varepsilon$ be the word of length zero (the empty word). As in~\cite{black2015}, we assume that when a word folds onto itself, every letter is matched with exactly one other letter. 
Thus, a \emph{folding} of a word can be represented by a non-crossing perfect matching of the letters so that two matched letters are complements. Recall that the Catalan numbers enumerate the non-crossing perfect matchings on $2n$ points (see \cite{stanleyec2}, problem 6.19, part o). In our model, the underlying word restricts the allowable matching edges based on the letter corresponding to each point. However, the word $A_1 \o{A}_1A_1 \o{A}_1\ldots A_1 \o{A}_1$ admits every non-crossing perfect matching. Section~\ref{sec:prelim} contains some preliminaries and background from the work of Black, Drellich, and Tymoczko~\cite{black2015}. In Section~\ref{sec:1fold}, we define a bijection between foldings of words and edge-colored plane trees. This is used to enumerate the words which fold in precisely one way, a problem posed in \cite{black2015}. We also characterize $2$-foldable words by a decomposition in terms of $1$-foldable words. Section~\ref{sec:R(n,m)} is devoted to studying the set $\mathcal{R}$ of integers $k$ such that there is a word which folds in precisely $k$ ways. We give a superset of $\mathcal{R}$, making a strong connection with Catalan numbers. In search of the smallest value which is not found in $\mathcal{R}$, we also determine a large consecutive set of small values in $\mathcal{R}$. \section{Preliminaries}\label{sec:prelim} In addition to non-crossing perfect matchings, the Catalan numbers also enumerate plane trees. A \emph{plane tree} is a straight line drawing of a rooted tree embedded in the plane with the root above all other vertices. This induces a left-to-right ordering of the children of a vertex. To obtain an ordering on the half edges of a plane tree, start at the root and trace the perimeter counterclockwise, touching each side of an edge exactly once. Fix a word $w=w[1]w[2]\cdots w[2n]$ and let $T$ be a plane tree with $n$ edges. Following the order of the half edges, label the $i^{th}$ half edge of $T$ with $w[i]$. 
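Since a folding is exactly a non-crossing perfect matching in which matched letters are complements, the number of foldings of a word can be computed by a standard interval dynamic program. The following Python sketch is ours, not part of the paper; the encoding of $A_i$ as $+i$ and $\overline{A}_i$ as $-i$ is an assumption made for illustration.

```python
from functools import lru_cache

def count_foldings(word):
    """Count non-crossing perfect matchings of `word` in which every
    matched pair consists of a letter and its complement.

    Letters are encoded as nonzero integers: A_i as +i and its complement
    (bar A_i) as -i, so w[i] may bond with w[j] exactly when w[i] == -w[j].
    """
    w = tuple(word)

    @lru_cache(maxsize=None)
    def f(i, j):
        # Number of foldings of the subword w[i..j] (inclusive).
        if i > j:
            return 1  # the empty word folds in exactly one way
        total = 0
        # w[i] must match some complementary w[k] at odd distance; the
        # letters strictly between i and k, and those after k, then fold
        # independently, and the matching edges cannot cross.
        for k in range(i + 1, j + 1, 2):
            if w[k] == -w[i]:
                total += f(i + 1, k - 1) * f(k + 1, j)
        return total

    return f(0, len(w) - 1)

# The word A_1 a_1 A_1 A_2 a_2 a_1 of Figure 1 is 2-foldable:
print(count_foldings([1, -1, 1, 2, -2, -1]))   # 2
# (A_1 a_1)^3 admits every non-crossing perfect matching, C_3 = 5 of them:
print(count_foldings([1, -1] * 3))             # 5
```

In particular, the word $(A_1\o{A}_1)^n$ returns the $n$th Catalan number, matching the observation above that it admits every non-crossing perfect matching.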
We say that $T$ is \emph{$w$-valid} if for each edge of $T$, the two letters from $w$ which label that edge are complements. \begin{definition} A word $w$ is said to be \emph{foldable} if there is a plane tree that is $w$-valid. For integer $k\ge 0$, a word $w$ is \emph{$k$-foldable} if there are exactly $k$ plane trees that are $w$-valid. \end{definition} For example, $w=A_1\o{A}_1A_1A_2\o{A}_2\o{A}_1$ is $2$-foldable as seen in Figure~\ref{fig-prelim-basic}. The corresponding non-crossing perfect matchings are also given. Further, the word $w_n=(A_1\o{A}_1)^n$ is $C_n$-foldable as every plane tree with $n$ edges is $w_n$-valid. \begin{figure}[!ht] \begin{tikzpicture} \foreach \x in {1,2} \draw (0,\x) [fill=black] circle (0.06); \draw (-1,0)--(0,1)--(0,2); \draw [above] (0,2) node {\textit{\small{root}}}; \draw (1,0) [fill=black] circle (0.06); \draw (-1,0) [fill=black] circle (0.06); \draw (0,1)--(1,0); \draw (0,1.5) [left] node {$A_1$}; \draw (-.4,.3) node {$A_1$}; \draw (-.6,.6) [left] node {$\o{A}_1$}; \draw (.4,.3) node {$A_2$}; \draw (.6,.6) [right] node {$\o{A}_2$}; \draw (0,1.54) [right] node {$\o{A}_1$}; \end{tikzpicture} \hspace{0in} \begin{tikzpicture} \foreach \x in {1,2} \draw (\x,-.05) node {$A_1$}; \foreach \x in {3} \draw (\x,-.05) node {$A_2$}; \foreach \x in {1,3} \draw (\x+.5,0) node {$\o{A}_1$}; \foreach \x in {2} \draw (\x+.5,0) node {$\o{A}_2$}; \draw (1,0.25) to [out=60, in=120] (3.5,0.25); \foreach \x in {1,2} \draw (\x+.5,0.25) to [out=90, in =90] (\x+1,0.25); \draw [color=white](2,-1) circle (0.06); \end{tikzpicture} \hspace{.3in} \begin{tikzpicture} \foreach \x in {1} \draw (0,\x) [fill=black] circle (0.06); \draw [above] (0,1) node {\textit{\small{root}}}; \draw (-1,0)--(0,1); \draw (1,0) [fill=black] circle (0.06); \draw (-1,0) [fill=black] circle (0.06); \draw (0,1)--(1,0); \draw (-.3,.35) node {$\o{A}_1$}; \draw (-.6,.6) [left] node {${A}_1$}; \draw (.4,.3) node {$A_1$}; \draw (.6,.6) [right] node {$\o{A}_1$}; \draw (1,-1) 
[fill=black] circle (0.06); \draw (1,-1)--(1,0); \draw (1,-.5) [left] node {$A_2$}; \draw (1,-.45) [right] node {$\o{A}_2$}; \end{tikzpicture} \hspace{0in} \begin{tikzpicture} \foreach \x in {1,2} \draw (\x,-.05) node {$A_1$}; \foreach \x in {3} \draw (\x,-.05) node {$A_2$}; \foreach \x in {1,3} \draw (\x+.5,0) node {$\o{A}_1$}; \foreach \x in {2} \draw (\x+.5,0) node {$\o{A}_2$}; \draw (2,0.25) to [out=60, in=120] (3.5,0.25); \foreach \x in {1,2.5} \draw (\x,0.25) to [out=90, in =90] (\x+.5,0.25); \draw [color=white](2,-1) circle (0.06); \end{tikzpicture} \caption{The two foldings of $w=A_1\o{A}_1A_1A_2\o{A}_2\o{A}_1$ and their corresponding non-crossing perfect matchings.} \label{fig-prelim-basic} \end{figure} Black, Drellich, and Tymoczko~\cite{black2015} defined the following greedy algorithm to produce a folding of $w$. Given a word $w$ of length $n$, for each $i \leq n$ starting at $i=1$, create a matching as follows: Match $w[i]$ with $w[j]$ provided $j$ is the largest index such that $j<i$, $w[j]$ is not yet matched, and $w[j]$ is a complement of $w[i]$. If no such $j$ exists, leave $w[i]$ (temporarily) unmatched. If $w$ is foldable, this algorithm will produce a non-crossing perfect matching of $w$\cite{black2015}; the folding produced by the greedy algorithm is called the \emph{greedy folding}. The folding on the right in Figure~\ref{fig-prelim-basic} is the greedy folding. We will examine the following four sets in more detail. \begin{definition}[Black, Drellich, and Tymoczko~\cite{black2015}] Fix $n,m\in \mathbb{Z}^+$ and $k\in \mathbb{Z}^+ \cup \{0\}$. Let $\mathcal{S} = \mathcal{S}(n,m)$ be the collection of words of length $2n$ from an alphabet with $m$ letters and their complements. For $w\in \mathcal{S}$, define the following quantities: \begin{itemize} \item $\mathcal{P}(n,m)$ is the set of words in $\mathcal{S}$ that are foldable. 
\item $\mathcal{S}_k(n,m)$ is the set of words in $\mathcal{S}$ that are $k$-foldable.\footnote{Note that~\cite{black2015} uses $\mathcal{N}(n,m,k)$ instead of $\mathcal{S}_k(n,m)$.} \item $\mathcal{V}(w)$ is the set of plane trees that are $w$-valid. \item $\mathcal{R}(n,m)$ is the set of integers $k$ for which $\mathcal{S}_k(n,m)$ is non-empty. \end{itemize} \end{definition} The set $\mathcal{S}(n,m)$ can also be viewed as the length-$2n$ elements of the free group on $m$ generators. However, we are primarily interested in foldable words, which are precisely those that reduce to the identity element in the free group, so we make no further group theoretic connections. Let $w\in \mathcal{P}(n,m)$. Heitsch, Condon, and Hoos~\cite{heitsch} defined a local move to transform one plane tree in $\mathcal{V}(w)$ into another. For two trees in $\mathcal{V}(w)$, there is a move from one to the other if there is a pair of edges that can be re-paired as in Figure~\ref{fig-prelim-decomp}. This defines a directed graph $G_w$ with a vertex for each plane tree in $\mathcal{V}(w)$ and an edge from $T_1$ to $T_2$ when there is a Type~1 move from $T_1$ to $T_2$. The following were proved in~\cite{black2015}. \begin{theorem}[See Section~3 in~\cite{black2015}]\label{thm-graph} Let $w$ be a foldable word. \begin{enumerate} \item The greedy folding is a unique source of $G_w$. \item If $T_0$ is the greedy folding and $T\in \mathcal{V}(w)$, then there exists a path in $G_w$ from $T_0$ to $T$. 
\end{enumerate} \end{theorem} \begin{figure}[!ht] \centering \begin{tikzpicture} \foreach \x in {(-2,0),(0,2), (2,0)} \draw \x [fill=black] circle (0.06); \draw (-2,0)--(0,2)--(2,0); \draw [dashed] (0,2)..controls+(-.4,.8)..(0,3)..controls+(.4,-.2)..(0,2); \draw [dashed] (0,2)..controls+(-.4,-.8)..(0,1)..controls+(.4,.2)..(0,2); \draw [dashed] (-2,0)..controls+(-.75,-.25)..(-2.75,-.75)..controls+(.5,0)..(-2,0); \draw [dashed] (2,0)..controls+(.75,-.25)..(2.75,-.75)..controls+(-.5,0)..(2,0); \draw (-2.4,-.4) node {2}; \draw (2.4,-.4) node {4}; \draw (0,2.65) node {1}; \draw (0,1.35) node {3}; \draw (-1.2,1.2) node {$A$}; \draw (-.85,.7) node {$\o{A}$}; \draw (1.2,1.2) node {$\o{A}$}; \draw (.9,.65) node {${A}$}; \begin{scope}[shift={+(.5,0)}] \draw [->,>=stealth, line width=1.5pt] (2.5,1.2)--(4.5,1.2); \draw [->,>=stealth, line width=1.5pt] (4.5,.8)--(2.5,.8); \draw (3.5,1.2) node [above] {Type~1}; \draw (3.5,.8) node [below] {Type~2}; \end{scope} \begin{scope}[shift={+(1,0)}] \foreach \x in {-.5,1,2.5} \draw [fill=black] (6,\x) circle (0.06); \draw (6,-.5)--(6,2.5); \draw [dashed] (6,2.5)..controls+(-.4,.8)..(6,3.5)..controls+(.4,-.2)..(6,2.5); \draw [dashed] (6,-.5)..controls+(-.4,-.8)..(6,-1.5)..controls+(.4,.2)..(6,-.5); \draw [dashed] (6,1)..controls+(-.75,0)..(5,.5)..controls+(.5,-.1)..(6,1); \draw [dashed] (6,1)..controls+(.75,0)..(7,.5)..controls+(-.5,-.1)..(6,1); \draw (6,3.1) node {1}; \draw (6,-1.1) node {3}; \draw (6.6,.7) node {4}; \draw (5.4,.7) node {2}; \draw (6,1.79) node [right] {$\o{A}$}; \draw (6,1.75) node [left] {${A}$}; \draw (6,.15) node [right] {${A}$}; \draw (6,.19) node [left] {$\o{A}$}; \end{scope} \end{tikzpicture} \caption{Local moves between valid trees} \label{fig-prelim-decomp} \end{figure} \section{Characterization and enumeration of foldable words}\label{sec:1fold} In this section we give a bijection between foldings of words and edge-colored plane trees. 
Using this bijection we characterize both $1$-foldable and $2$-foldable words. An enumeration of $1$-foldable words is also given. \subsection{Doubled Alphabet} Fix a foldable word $w=w[1]w[2] \ldots w[2n]$. In any folding of $w$, if $w[i]$ bonds with $w[j]$, then $i$ and $j$ must have different parities because the subword $w[i+1] \cdots w[j-1]$ must be foldable and hence has an even (possibly zero) number of letters. This leads to the notion of a \emph{doubled alphabet} to reflect that $\{\text{odd } A_i,\text{ even }\o{A}_i\}$ and $\{\text{even }A_i,\text{ odd }\o{A}_i\}$ are the only possible bonds between an $A_i$ and an $\o{A}_i$. For an alphabet $\{A_1,\o{A}_1,\ldots,A_m,\o{A}_m\}$ and a word $w\in \mathcal{S}(n,m)$, define $\hat{w} \in \mathcal{S}(n,2m)$ on the doubled alphabet, $\{A_1,\o{A}_1,\ldots,A_{2m},\o{A}_{2m}\}$, as follows: \begin{itemize} \item If $w[2\ell] = A_i$, then $\hat{w}[2\ell]=\o{A}_{m+i}$. \item If $w[2\ell+1] = \o{A}_i$, then $\hat{w}[2\ell+1]=A_{m+i}$. \item Otherwise, $\hat{w}[\ell]=w[\ell]$. \end{itemize} \begin{definition} Fix $n,m\in \mathbb{Z}^+$. \begin{itemize} \item $\hat{\cS}(n,m)$ is the set of words $w \in \mathcal{S}(n,m)$ for which each letter in an odd-index position is from $\{A_1, A_2, \ldots, A_m\}$, and each letter in an even-index position is from $\{\o{A}_1, \o{A}_2, \ldots, \o{A}_m\}$. \item $\hat{\cP}(n,m) \coloneqq \hat{\cS}(n,m) \cap \mathcal{P}(n,m)$. \end{itemize} \end{definition} \begin{proposition} For $n,m\in \mathbb{Z}^+$, the map $w \mapsto \hat{w}$ defines two bijections: \[ \mathcal{S}(n,m) \longleftrightarrow \hat{\cS}(n,2m) \] and \[ \mathcal{P}(n,m) \longleftrightarrow \hat{\cP}(n,2m). \] \end{proposition} \subsection{Walks on Regular Trees} This alternation between letters from $\{A_1, A_2, \ldots, A_m\}$ and letters from $\{\o{A}_1, \o{A}_2, \ldots, \o{A}_m\}$ gives us greater ability to enumerate foldable words. To demonstrate, let us view words in $\hat{\cS}$ as walks on an infinite regular tree. 
The infinite, unrooted, $m$-regular tree $T_m$ has $m$ edges incident to every vertex, and its edges can be labeled with the set $\{A_1, A_2, \ldots, A_m\}$ so that the $m$ edges at each vertex receive distinct labels. Given a walk on $T_m$, write down the sequence of edge labels, but on even-index steps, write down the complement of the labeling letter instead of the letter. So from any fixed vertex, walks of length $2n$ are in bijection with the elements of $\hat{\cS}(n,m)$. A walk is \emph{closed} if it ends where it begins. Note that a walk is closed precisely when its corresponding word in $\hat{\cS}(n,m)$ is foldable. That is, closed walks on $T_m$ from a fixed vertex are in bijection with the elements of $\hat{\cP}(n,m)$. These were enumerated by Quenell: \begin{theorem}[Equation~(19) in \cite{quenell1994}] \label{thm-enum-closed-walks} Fix an integer $m\geq 2$ and a vertex $v$ of $T_m$. The generating function $f_m(x)$ for the number $a_n$ of length-$2n$ closed walks on $T_m$ starting at $v$ is \begin{eqnarray*} f_m(x) = \sum_{n=0}^{\infty} a_nx^n & = & \frac{2(m-1)}{m-2+m\sqrt{1 - 4(m-1)x}} \\ & = & \frac{2-m\left(1-\sqrt{1 - 4(m-1)x}\right)}{2(1-m^2x)}. \end{eqnarray*} \end{theorem} \begin{corollary} For integers $m\geq 2$ and $n \geq 1$ and a vertex $v$ of $T_m$, the number $a_n$ of length-$2n$ closed walks on $T_m$ starting at $v$ is \begin{eqnarray*} a_n & = & m^{2n} - \sum_{i=1}^n \frac{m^{1 + 2(n-i)}(m-1)^{i}}{4i-2}\binom{2i}{i}. \end{eqnarray*} \end{corollary} \begin{proof} By Newton's generalized binomial theorem, \begin{eqnarray*} \sqrt{1 - 4(m-1)x} & = & \sum_{n=0}^\infty \binom{1/2}{n}(-4(m-1)x)^n \\ & = & \sum_{n=0}^\infty \frac{ \prod_{i = 0}^{n-1} \left(\frac{1}{2} - i \right) }{n!} (-4(m-1)x)^n \\ & = & \sum_{n=0}^\infty \frac{-(m-1)^{n}}{2n-1}\binom{2n}{n} x^n. \end{eqnarray*} Substituting this back into $f_m(x)$, we get \begin{eqnarray*} f_m(x) & = & \frac{2-m\left(\sum_{n=1}^\infty \frac{(m-1)^{n}}{2n-1}\binom{2n}{n} x^n \right)}{2(1-m^2x)}. 
\end{eqnarray*} Setting $b_i = \frac{m(m-1)^{i}}{4i-2}\binom{2i}{i}$ for $i \geq 1$, we have \begin{eqnarray*} f_m(x) & = & \frac{1 - \sum_{n=1}^\infty b_n x^n}{1 - m^2x} \\ & = & 1 + (m^2 - b_1)x + (m^2(m^2 - b_1) - b_2)x^2 + \cdots \\ & = & \sum_{n=0}^\infty \left(m^{2n} - \sum_{i=1}^n b_im^{2(n-i)}\right)x^n. \end{eqnarray*} \end{proof} We can obtain asymptotics for $a_n$ using the Maple\texttrademark{} package \textbf{algolib} (version 17.0), or the saddle point method, on the generating function. \begin{corollary} \label{cor-closed-walk-asymp} For fixed $m\geq 3$ and vertex $v$ of $T_m$, the number $a_n$ of length-$2n$ closed walks on $T_m$ starting at $v$ is asymptotically \[ a_n = \frac{(4m-4)^n}{n^{3/2}} \left(\frac{m(m-1)}{\sqrt{\pi}(m-2)^2}+ O\!\left(\frac{1}{\sqrt{n}}\right) \right) . \] \end{corollary} Recall that $\mathcal{P}(n,m)$ is in bijection with $\hat{\cP}(n,2m)$, which can be enumerated by closed walks on $T_{2m}$. Thus, using Corollary~\ref{cor-closed-walk-asymp} with $2m$, we have that for fixed $m\geq 2$ the number of foldable words of length $2n$ as $n$ approaches infinity is asymptotically \begin{equation} \label{eqn-foldable-asymp} |\mathcal{P}(n,m)| = \Theta(n^{-3/2}(8m-4)^n) . \end{equation} \subsection{Labeling Plane Trees} There is a natural bijection between foldings of words and (not necessarily proper) edge-colorings of rooted plane trees which is most clearly seen by examining the foldings of $\hat{w}$ rather than $w$. More generally, we consider words in $\hat{\cP}$---that is, with alternating unbarred and barred letters---rather than in $\mathcal{P}$. Set \[\mathcal{T}(n,m) \coloneqq \{(w,T)\,\vert\, w\in\hat{\cP}(n,m),\;T\in \mathcal{V}(w)\},\] where the element $(w,T)$ is viewed as a folding of $w$ around $T$. 
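As a computational aside, the number of foldings of a fixed word---the quantity $|\mathcal{V}(w)|$---can be computed directly by a standard interval dynamic program over non-crossing matchings. The following Python sketch is ours, for illustration only (it is not part of the paper's development); it encodes a letter $A_i$ as an uppercase character and its complement $\o{A}_i$ as the corresponding lowercase character:

```python
from functools import lru_cache

def count_foldings(w):
    """Count the foldings of w, i.e. the non-crossing perfect matchings
    in which bonded positions carry complementary letters.
    Encoding (ours): 'A' is a letter, 'a' is its complement, etc."""
    def bonded(x, y):
        # complementary letters: same symbol, opposite bar (case)
        return x.lower() == y.lower() and x != y

    @lru_cache(maxsize=None)
    def f(i, j):
        # number of foldings of the subword w[i:j]
        if i >= j:
            return 1
        total = 0
        for k in range(i + 1, j, 2):  # bonded indices have opposite parity
            if bonded(w[i], w[k]):
                total += f(i + 1, k) * f(k + 1, j)
        return total

    return f(0, len(w))
```

For example, on the encoding \texttt{"AaAaAa"} of $(A\o{A})^3$ this returns $C_3 = 5$, while on \texttt{"AAaa"}, the encoding of $A^2\o{A}\,^2$, it returns $1$.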
With $[m]$ denoting the set $\{1,2,\ldots,m\}$, define \[\mathcal{C}(n,m) \coloneqq \{(c,T)\,\vert\, T\text{ is a plane tree with }n\text{ edges and } c:E(T) \to [m]\},\] so that the elements of $\mathcal{C}(n,m)$ represent edge-colored plane trees, where the coloring is not necessarily proper. \begin{theorem} \label{thm-Tnm-Cnm} For all integers $n\geq 0$ and $m\geq 1$, $|\mathcal{T}(n,m)| = |\mathcal{C}(n,m)|$. \end{theorem} \begin{proof} We will define a bijection from $\mathcal{T}(n,m)$ to $\mathcal{C}(n,m)$. Fix an arbitrary $(w,T)\in \mathcal{T}(n,m)$. Define the edge-coloring $c:E(T) \to [m]$ so that $c(e)=i$ if the half edges of $e\in E(T)$ are labeled $(A_i,\o{A}_i)$ or $(\o{A}_i,A_i)$. Figure~\ref{fig-fold-color-ex} gives an example of this mapping. \begin{figure}[!ht] \centering \begin{tikzpicture}[scale=1.2] \foreach \x in {(0,3),(-1,2),(1,2), (-2,1),(0,1),(-1,0),(1,0)} \draw \x [fill=black] circle (0.06); \draw (0,3) node[above] {\textit{\small{root}}}; \draw (1,0)--(-1,2)--(0,3)--(1,2) (-2,1)--(-1,2) (-1,0)--(0,1); \draw (-.25,2.3) node {$\o{A}_1$}; \draw (-.75,2.6) node {$A_1$}; \draw (-1.25,1.3) node {${A}_3$}; \draw (-1.75,1.6) node {$\o{A}_3$}; \draw (-.25,.3) node {$\o{A}_1$}; \draw (-.75,.6) node {$A_1$}; \draw (.4,2.3) node {${A}_2$}; \draw (.75,2.6) node {$\o{A}_2$}; \draw (-.6,1.3) node {$\o{A}_3$}; \draw (-.25,1.6) node {${A}_3$}; \draw (.4,.3) node {${A}_3$}; \draw (.75,.6) node {$\o{A}_3$}; \end{tikzpicture} \hspace{.5in} \begin{tikzpicture}[scale=1.2] \foreach \x in {(0,3),(-1,2),(1,2), (-2,1),(0,1),(-1,0),(1,0)} \draw \x [fill=black] circle (0.06); \draw (0,3) node[above] {\textit{\small{root}}}; \draw (1,0)--(-1,2)--(0,3)--(1,2) (-2,1)--(-1,2) (-1,0)--(0,1); \draw (-.65,2.6) node {$1$}; \draw (-1.65,1.6) node {$3$}; \draw (-.65,.6) node {$1$}; \draw (.65,2.6) node {$2$}; \draw (-.35,1.6) node {$3$}; \draw (.65,.6) node {$3$}; \end{tikzpicture} \caption{A folding of $\hat{w} = 
A_1\o{A}_3A_3\o{A}_3A_1\o{A}_1A_3\o{A}_3A_3\o{A}_1A_2\o{A}_2$ and the corresponding edge-coloring of the tree.} \label{fig-fold-color-ex} \end{figure} The inverse function is defined as follows. Fix $(c,T)\in \mathcal{C}(n,m)$. The color of each edge indicates the two letters that will be assigned to its half edges. It only remains to determine which letter will be assigned to which half edge. In the ordering of the half edges of $T$, each edge will have an even half edge and an odd half edge. This is because the subtree below the edge contains an even number of half edges. Assign the letter with the bar to the even half edge and the other to the odd half edge. This labeling is precisely the folding of a word $w \in \hat{\cP}(n,m)$ on $T$. Since this construction inverts the map above, that map is a bijection. \end{proof} \subsection{1-foldable classification and enumeration} The correspondence between foldings of words and edge-colored plane trees leads to an enumeration of words which are $1$-foldable. Denote by $\mathcal{T}_k(n,m)$ the set of all $(w,T) \in \mathcal{T}(n,m)$ where $w$ is $k$-foldable. Then $|\mathcal{T}_k(n,2m)| = k \cdot |\mathcal{S}_k(n,m)|$ for any $k \in \mathbb{Z}^+$ and, in particular, $|\mathcal{T}_1(n,2m)| = |\mathcal{S}_1(n,m)|$. \begin{theorem} The words in $\mathcal{S}_1(n,m)$ are in bijection with the proper $2m$-edge-colorings of plane trees with $n$ edges. \label{thm-1fold-bij} \end{theorem} \begin{proof} First recall that the graph of valid trees for a given word is connected (Theorem~\ref{thm-graph}). The bijection in Theorem~\ref{thm-Tnm-Cnm} can be used to detect available local moves from the edge-coloring. In particular, a move exists precisely when two incident edges have the same color. Therefore, elements of $\mathcal{C}(n,m)$ with a proper edge-coloring correspond exactly with elements of $\mathcal{T}_1(n,m)$. Hence, elements of $\mathcal{C}(n,2m)$ with a proper edge-coloring are in correspondence with elements of $\mathcal{S}_1(n,m)$. 
\end{proof} Using this classification, we now enumerate $1$-foldable words. \begin{lemma} \label{lma-fold-colors} Let $T$ be a plane tree with degree multiset $\{1^{\alpha_1},2^{\alpha_2},\ldots\}$ where there are $\alpha_i$ vertices with degree $i$. Then the number of proper $k$-edge-colorings of $T$ is \begin{equation} \label{eqn-fold-colorings} k\prod_{i=1}^{\infty}\left( \binom{k-1}{i-1}(i-1)!\right)^{\alpha_i}, \end{equation} where $0^0$ is understood to equal 1. \end{lemma} \begin{proof} Fix a leaf $v$. Let $u$ be the unique neighbor of $v$ and let $\deg(u)$ be the degree of $u$. Color the edge $\{v,u\}$ with one of $k$ colors. Next, choose which $\deg(u)-1$ of the remaining $k-1$ colors appear on the other edges incident to $u$ and assign them in one of $(\deg(u)-1)!$ orders, for $\binom{k-1}{\deg(u)-1}(\deg(u)-1)!$ possibilities. Visiting the vertices via a breadth-first search, each subsequent vertex contributes a similar factor to the product, since exactly one of its incident edges (the one toward $v$) will already be colored. \end{proof} Let $\Delta(T)$ be the maximum degree in $T$. Note that if $\Delta(T)>k$, expression~\eqref{eqn-fold-colorings} collapses to zero as expected since $\Delta(T)$ colors are required for a proper coloring of the edges incident to a maximum-degree vertex. \begin{lemma}[Mallows and Wacher~\cite{mallowswacher}] \label{lma-fold-treenum} Let $RPT(\alpha_1,\alpha_2,\ldots)=RPT(\mathbf{\alpha})$ be the number of plane trees with degree multiset $\{1^{\alpha_1},2^{\alpha_2},\ldots\}$. Then \begin{equation} RPT(\mathbf{\alpha}) = \frac{2}{\alpha_1}\binom{1+\alpha_2+2\alpha_3+3\alpha_4+\cdots}{\alpha_1-1,\alpha_2,\alpha_3,\ldots}. \end{equation} \end{lemma} For any sequence of non-negative integers $(\alpha_2,\alpha_3,\ldots)$ with only finitely many nonzero terms, there is a plane tree with degree multiset $\{1^{\alpha_1},2^{\alpha_2},\ldots\}$ for an appropriate choice of $\alpha_1$, the number of leaves. 
In particular, $RPT(\alpha_1, \alpha_2, \ldots)\neq 0$ if and only if \begin{equation} \label{eqn-fold-a1} \alpha_1 = 2 + \alpha_3 + 2\alpha_4 + 3\alpha_5 + 4\alpha_6 + \cdots = 2 + \sum_{i=2}^\infty (i-2)\alpha_i. \end{equation} \begin{lemma} \label{lma-deg-conds} For the multiset $\{2^{\alpha_2},3^{\alpha_3},\ldots\}$, set $\alpha_1$ as in~\eqref{eqn-fold-a1}, and let $T$ be a plane tree with the degree multiset $\{1^{\alpha_1}, 2^{\alpha_2},3^{\alpha_3},\ldots\}$. Then $\alpha_1,\alpha_2,\ldots$ satisfy the conditions \begin{enumerate} \item $\alpha_i=0$ for $i> 2m$, and \item $\displaystyle\sum_{i=1}^{2m} i\alpha_i = 2n$, \end{enumerate} if and only if $T$ has $n$ edges and can be properly edge-colored with $2m$ colors. \end{lemma} \begin{proof} For any tree $T$, the edge chromatic number $\chi'(T) = \Delta(T)$. Condition~1 is equivalent to saying $\Delta(T) \le 2m$, so $T$ is $2m$-edge-colorable. Condition~2 is equivalent to $T$ having $n$ edges. \end{proof} Lemma~\ref{lma-deg-conds} gives an explicit characterization of the degree conditions for a plane tree to have a proper edge-coloring. Having previously established a correspondence between foldings of $1$-foldable words and proper edge-colorings of plane trees, the following theorem is now clear. \begin{theorem} \label{thm-1-fold} The number of $1$-foldable words of length $2n$ on an alphabet with $m$ letters and their complements is \begin{equation} \label{eqn-1-fold} \sum \frac{2}{\alpha_1}\binom{n}{\alpha_1-1,\alpha_2,\alpha_3,\ldots,\alpha_{2m}}\cdot 2m\prod_{i=1}^{2m}\left( \binom{2m-1}{i-1}(i-1)!\right)^{\alpha_i}, \end{equation} where the sum is over all non-negative sequences $(\alpha_1, \alpha_2,\alpha_3,\ldots,\alpha_{2m})$ such that $\sum_{i=1}^{2m} i\alpha_i = 2n$ and $\alpha_1 = 2 + \sum_{i=2}^{2m}(i-2)\alpha_i$. 
\end{theorem} \begin{proof} The formula follows from Theorem~\ref{thm-1fold-bij} and Lemmas~\ref{lma-fold-colors}, \ref{lma-fold-treenum}, and~\ref{lma-deg-conds}, together with the observation that \[ n = 1 + \alpha_2 + 2 \alpha_3 + \cdots + (2m-1)\alpha_{2m}.\] \end{proof} \begin{example} When $m=1$, expression~\eqref{eqn-1-fold} is a summation with a single term, corresponding to $(\alpha_1, \alpha_2)=(2, n-1)$, and gives $2n$ words of length $2n$ which are $1$-foldable. The $2n$ words are exactly $\{A^i \o{A}\,^n A^{n-i}: i\in[n]\}$ and $\{\o{A}\,^i A^n\o{A}\,^{n-i}: i\in [n]\}$. \end{example} \begin{example} When $m=2$, each term in~\eqref{eqn-1-fold} has the following form: \begin{align*} & \frac{2}{\alpha_1}\binom{n}{\alpha_1-1,\alpha_2,\alpha_3,\alpha_4}\cdot 4\prod_{i=1}^{4}\left( \binom{3}{i-1}(i-1)!\right)^{\alpha_i} \\ &= \frac{8}{\alpha_1}\cdot\binom{n}{\alpha_1-1,\alpha_2,\alpha_3,\alpha_4}\cdot(1\cdot 0!)^{\alpha_1}\cdot(3\cdot 1!)^{\alpha_2} \cdot (3\cdot2!)^{\alpha_3} \cdot (1\cdot 3!)^{\alpha_4} \\ &= \frac{8}{3(2+\alpha_3+2\alpha_4)}\cdot\binom{n}{1+\alpha_3+2\alpha_4,n-2\alpha_3-3\alpha_4-1,\alpha_3,\alpha_4}\cdot 3^{n - \alpha_3 - 2\alpha_4} \cdot 2^{\alpha_3 + \alpha_4}, \end{align*} with $\alpha_3$ and $\alpha_4$ non-negative integers such that $n > 2\alpha_3 + 3\alpha_4$. The multinomial coefficient pulls the maximum of this term away from the boundaries; that is, as $n$ grows, the maximum is not found where one of $\{\alpha_1-1,\alpha_2,\alpha_3,\alpha_4\}$ is $o(n)$. Let us therefore assume that $\lim_{n \to \infty} \frac{\alpha_3}{n} = x$ and $\lim_{n \to \infty} \frac{\alpha_4}{n} = y$ for positive constants $x$ and $y$ with $2x + 3y < 1$. 
Applying Stirling's approximation, this term is asymptotically \begin{align*} & \frac{(1 + o(1))^n}{ (x+2y)^{n(x+2y)} \cdot (1 - 2x - 3y)^{n(1 - 2x - 3y)} \cdot x^{xn} \cdot y^{yn} } \cdot 3^{n(1 - x - 2y)} \cdot 2^{n(x + y)} \\ &= \left[ \frac{3 + o(1)}{1-2x-3y} \cdot \left(\frac{2(1-2x-3y)^2}{3x(x+2y)}\right)^{\!x} \cdot \left(\frac{2(1-2x-3y)^3}{9y(x+2y)^2}\right)^{\!y} \right]^n. \end{align*} Numerically, the base of this exponential is maximized when $(x,y) \approx (0.22103,0.07050)$, which gives a maximum term of $(8.65936223 \pm 2^{-25})^n$. Since the number of terms in the sum is polynomial in $n$ for fixed $m$, this is also an asymptotic approximation for the whole sum. Compare this to the $16^n$ length-$2n$ words on an alphabet of $m=2$ letters and their complements, of which $(12+o(1))^n$ are foldable by equation~\eqref{eqn-foldable-asymp}. \end{example} \subsection{2-foldable classification} Using the bijection with edge-colored trees, we can also classify $2$-foldable words. In particular, a foldable word is $2$-foldable if the edge-colored tree corresponding to the greedy folding has only one pair of incident edges with the same color, and the tree resulting from the corresponding Type~1 move at those edges also has only one such pair. An equivalent characterization is that the word has an $A$-decomposition, defined as follows. \begin{definition} An $A$-decomposition of a word $w$ is a list of words $u_1$, $u_2$, $u_3$, $v_1$, and $v_2$ such that \begin{enumerate} \item $w = u_1Av_1\o{A}u_2Av_2\o{A}u_3$ for some (possibly barred) letter $A$, \item the words $u_1u_3$, $u_2$, $v_1$, and $v_2$ are foldable, and \item the words $u_1u_2A\o{A}u_3$ and $Av_1v_2\o{A}$ are $1$-foldable. \end{enumerate} \end{definition} Note that we consider here any word in $\mathcal{S}$, not necessarily with an alternating bar pattern. Moreover, $A$ in condition~(1) may be a barred letter. 
In this case, $\o{A}$ signifies its unbarred complement. \begin{figure}[!ht] \centering \begin{tikzpicture} \foreach \x in {(-2,0),(0,2), (2,0)} \draw \x [fill=black] circle (0.06); \draw (-2,0)--(0,2)--(2,0); \draw [dashed] (0,2)..controls+(-.5,.8)..(0,3)..controls+(.5,-.2)..(0,2); \draw [dashed] (0,2)..controls+(-.4,-.8)..(0,1)..controls+(.4,.2)..(0,2); \draw [dashed] (-2,0)..controls+(-.75,-.25)..(-2.75,-.75)..controls+(.5,0)..(-2,0); \draw [dashed] (2,0)..controls+(.75,-.25)..(2.75,-.75)..controls+(-.5,0)..(2,0); \draw (-2.4,-.4) node {$v_1$}; \draw (2.4,-.4) node {$v_2$}; \draw (0,2.65) node {$u_1u_3$}; \draw (0,1.35) node {$u_2$}; \draw (-1.2,1.2) node {$A$}; \draw (-.85,.7) node {$\o{A}$}; \draw (1.2,1.2) node {$\o{A}$}; \draw (.9,.65) node {$A$}; \begin{scope}[shift={+(1,0)}] \foreach \x in {-.5,1,2.5} \draw [fill=black] (6,\x) circle (0.06); \draw (6,-.5)--(6,2.5); \draw [dashed] (6,2.5)..controls+(-.5,.8)..(6,3.5)..controls+(.5,-.2)..(6,2.5); \draw [dashed] (6,-.5)..controls+(-.4,-.8)..(6,-1.5)..controls+(.4,.2)..(6,-.5); \draw [dashed] (6,1)..controls+(-.75,0)..(5,.5)..controls+(.5,-.1)..(6,1); \draw [dashed] (6,1)..controls+(.75,0)..(7,.5)..controls+(-.5,-.1)..(6,1); \draw (6,3.1) node {$u_1u_3$}; \draw (6,-1.1) node {$u_2$}; \draw (6.6,.7) node {$v_2$}; \draw (5.4,.7) node {$v_1$}; \draw (6,1.79) node [right] {$\o{A}$}; \draw (6,1.75) node [left] {$A$}; \draw (6,.15) node [right] {$A$}; \draw (6,.19) node [left] {$\o{A}$}; \end{scope} \end{tikzpicture} \caption{Two valid trees of a word with an $A$-decomposition.} \label{fig-1-decomp} \end{figure} \begin{theorem} A word $w$ is $2$-foldable if and only if it has an $A$-decomposition. \end{theorem} \begin{proof} Suppose $w$ has an $A$-decomposition. Then $w = u_1Av_1\o{A}u_2Av_2\o{A}u_3$ can be folded into the two edge-colored trees using parts~(1) and~(2) of the definition of $A$-decomposition. Parts~(2) and~(3) of the definition imply that $u_1u_3$, $v_1$, $u_2$, and $v_2$ are $1$-foldable. 
By part~(3) of the definition, the only incident edges with local moves in either of these two trees are the edges labeled by $A\o{A}$ or $\o{A}A$ in Figure~\ref{fig-1-decomp}. Thus there are no other local moves, and since the state space graph is connected, these are the only two foldings of $w$. Now suppose $w$ is $2$-foldable. The two foldings correspond bijectively to two edge-colored trees, which are adjacent by local moves shown in Figure~\ref{fig-prelim-decomp}. Thus we have properties~(1) and~(2) of an $A$-decomposition, and~(3) follows from the fact that $w$ is $2$-foldable so has no other local moves. \end{proof} \section{The values in $\mathcal{R}(n,m)$} \label{sec:R(n,m)} In this section we develop a better understanding of the set $\mathcal{R}(n,m)$ of all $k$ for which there is a word in $\mathcal{S}(n,m)$ which is $k$-foldable. Black, Drellich, and Tymoczko~\cite{black2015} initiated this study with the following proposition. \begin{proposition}[\cite{black2015}] Let $C_i \coloneqq \frac{1}{i+1} \binom{2i}{i}$, the $i\textsuperscript{th}$ Catalan number. For integers $n>1$ and $m>0$, $\{C_{n-1},C_n\} \subset \mathcal{R}(n,m)$ but if $C_{n-1}<k<C_n$ then $k\not\in \mathcal{R}(n,m)$. \label{prop-top-gap} \end{proposition} Wagner~\cite{wagner2015} further investigated $\mathcal{R}(n,m)$ and established monotonicity in the following sense. \begin{proposition}[\cite{wagner2015}]\label{prop-Rnm-n-monotone} For positive integers $n$ and $m$, $\mathcal{R}(n,m) \subsetneq \mathcal{R}(n+1,m).$ \end{proposition} Note that the previous two propositions establish that $C_i\in \mathcal{R}(n,m)$ for $1\leq i\leq n$. Wagner also showed monotonicity of $\mathcal{R}(n,m)$ in $m$. \begin{proposition}[\cite{wagner2015}] For positive integers $n$ and $m$, $\mathcal{R}(n,m) \subseteq \mathcal{R}(n,m+1)$. Further, there exist $n$ and $m$ such that $\mathcal{R}(n,m) \neq \mathcal{R}(n,m+1)$. \end{proposition} We focus our attention mainly on the case $m=1$. 
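The small cases of $\mathcal{R}(n,1)$ listed below are easy to reproduce by exhaustive search: enumerate all $2^{2n}$ words over $\{A,\o{A}\}$ and collect their numbers of foldings (non-crossing perfect matchings). The Python sketch below is our own illustration, not the authors' code; it writes \texttt{a} for $\o{A}$, so that for $m=1$ two positions bond exactly when their symbols differ:

```python
from functools import lru_cache
from itertools import product

def count_foldings(w):
    """Foldings of w = non-crossing perfect matchings; for m = 1
    two positions bond exactly when their symbols differ ('A' vs 'a')."""
    @lru_cache(maxsize=None)
    def f(i, j):
        if i >= j:
            return 1
        return sum(f(i + 1, k) * f(k + 1, j)
                   for k in range(i + 1, j, 2) if w[i] != w[k])
    return f(0, len(w))

def R(n):
    """R(n,1): the set of k such that some word in S(n,1) is k-foldable."""
    return {count_foldings("".join(w)) for w in product("Aa", repeat=2 * n)}
```

For instance, this search should reproduce $\mathcal{R}(4,1)=\{0,1,2,3,4,5,14\}$ as listed below; beyond roughly $n=8$ the $2^{2n}$ enumeration becomes the bottleneck.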
To give some indication of the values in $\mathcal{R}(n,1)$, we find by direct computation: \begin{align*} \mathcal{R}(0,1) = \{1\}; \hspace{.4in} \\ \mathcal{R}(1,1) = \mathcal{R}(0,1) \cup \{& 0\}; \\ \mathcal{R}(2,1) = \mathcal{R}(1,1) \cup \{& 2\}; \\ \mathcal{R}(3,1) = \mathcal{R}(2,1) \cup \{& 5\}; \\ \mathcal{R}(4,1) = \mathcal{R}(3,1) \cup \{& 3,4,14\}; \\ \mathcal{R}(5,1) = \mathcal{R}(4,1) \cup \{& 7,10,42\}; \\ \mathcal{R}(6,1) = \mathcal{R}(5,1) \cup \{& 6,8,12,16,18,19,25,28,132\}; \\ \mathcal{R}(7,1) = \mathcal{R}(6,1) \cup \{& 9,15,20,30,40,43,52,56,70,84,429 \};\\ \mathcal{R}(8,1) = \mathcal{R}(7,1)\cup \{& 22,23,24,26,32,35,36,38,50,55,73,74,80,85,96, \\ & 106,114,115,126,157,160,174,196,210,264,1430 \}. \end{align*} Working toward a more thorough understanding of the set $\mathcal{R}(n,1)$, we first construct a superset of $\mathcal{R}(n,1)$ in Theorem~\ref{thm-Rnm-superset}, providing some structure for the values that can appear in $\mathcal{R}(n,1)$. From there, we determine intervals of integers which do not lie in the set, such as integers in the interval $[C_{n-1}+1, C_n -1]$ from Proposition~\ref{prop-top-gap}. Then, in search of the smallest value $k$ such that $k\not\in \mathcal{R}(n,1)$, we conclude by proving $\{0,1,2,\ldots, n\}\subseteq \mathcal{R}(n,1)$, and hence $\{0,1,2,\ldots, n\}\subseteq \mathcal{R}(n,m)$. \subsection{Catalan numbers and $\mathcal{R}(n,1)$} The Catalan numbers, which enumerate plane trees, are an integral part of the set $\mathcal{R}(n,m)$. As already noted, $C_t\in \mathcal{R}(n,1)$ for all $1\leq t\leq n$. In fact, $C_t$ is the number of foldings of $(A\o{A})^t$, and this is the maximum number of foldings for a word of length $2t$. Theorem~\ref{thm-Rnm-superset} establishes a superset of $\mathcal{R}(n,1)$ which highlights the fundamental nature of the Catalan numbers in the values of $\mathcal{R}(n,1)$. The following discussion and examples motivate the theorem. 
As previously mentioned, for calculating the values in $\mathcal{R}(n,1)$, it suffices to consider words in $\hat{\cP}(n,2)$, foldable words which strictly alternate between unbarred and barred letters. For readability in this case, we will use $A$ and $B$ instead of $A_1$ and $A_2$. Fix such a foldable word $w$ with $t$ entries that are $A$ and $n-t$ entries that are $B$. Without loss of generality, assume $w$ begins with $A$. For example, let $w=A\o{A}(B\o{B})^5 A\o{A} (B\o{B})^7 A\o{A} (B\o{B})^4$. Here $t = 3$ and $n - t = 5 + 7 + 4 = 16$. Now consider the maximal subwords (consecutive letters) of $w$ which consist of only the letters $B$ and $\o{B}$. We call these subwords \emph{maximal $B$-subwords.} Let $\ell_1, \ell_2, \ldots, \ell_m$ be the numbers of letters in these maximal $B$-subwords. Consequently $m \leq 2t$ and $\sum_{i=1}^{m} \ell_i = 2(n-t)$. In our present example, $w$ has $m = 3$ maximal $B$-subwords with lengths $\ell_1=10$, $\ell_2=14$, and $\ell_3=8$. Fix a non-crossing matching on the letters $A$ and $\o{A}$ in $w$. We will use the term \emph{$A$-matching} to refer to such a partial matching of $w$. Because of the alternating bar pattern in the doubled alphabet, any $A$-matching partitions the maximal $B$-subwords into groups whose concatenations have the form $(B\o{B})^s$ or $(\o{B}B)^s$, where $2s$ is the sum of the corresponding $\ell_i$ values. We will refer to these as \emph{$B$-groupings}. Thus, for each $A$-matching $\varphi$, there is at least one non-crossing perfect matching of $w$ which extends $\varphi$, since each $B$-grouping can be folded on its own. See Figure~\ref{fig-A-match-ex-1} for one possible $A$-matching on our example word $w$. With this $A$-matching, the $B$-subwords have been partitioned into $(B\o{B})^5$ with $2\cdot 5 = \ell_1$ and $(B\o{B})^{11}$ with $2\cdot 11 = \ell_2 + \ell_3$. \begin{figure}[ht!] 
\begin{tikzpicture} \draw [above] (0,0) node {$A$}; \draw [above] (.4,0) node {$\o{A}$}; \draw [dashed](.7,.2)--(2.7,.2); \draw [below] (1.7,.2) node {$(B \o{B})^5$}; \draw [above] (3,0) node {$A$}; \draw [above] (3.4,0) node {$\o{A}$}; \draw [dashed] (3.7,.2)--(5.7,.2); \draw [below] (4.7,.2) node {$(B \o{B})^7$}; \draw [above] (6,0) node {$A$}; \draw [above] (6.4,0) node {$\o{A}$}; \draw [dashed] (6.7,.2)--(8.7,.2); \draw [below] (7.7,.2) node {$(B \o{B})^4$}; \draw [line width=1.3] (0,.5) to [out=60, in=120] (3.4,.5); \draw [line width=1.3] (.4,.5) to [out=60, in=120] (3,.5); \draw [line width=1.3] (6,.5) to [out=80, in=100] (6.4,.5); \end{tikzpicture} \caption{An $A$-matching of $A\o{A}(B\o{B})^5 A\o{A} (B\o{B})^7 A\o{A} (B\o{B})^4$ which can be extended in $C_5 C_{11}$ ways.} \label{fig-A-match-ex-1} \end{figure} For a $B$-grouping of length $2s$, there are $C_s$ ways to fold the group. Thus, each $A$-matching $\varphi$ of $w$ extends to $\prod_{i=1}^j C_{s_i}$ non-crossing perfect matchings of $w$, where $j$ is the number of $B$-groupings and each $s_i$ is half of the sum of the corresponding subset of $\{\ell_1,\ldots, \ell_m\}$. For the present example, the $A$-matching in Figure~\ref{fig-A-match-ex-1} extends to $C_5 C_{11}$ non-crossing matchings of $w$. Alternatively, for the $A$-matching $\varphi' = \{w[1]w[2], w[13]w[14], w[29]w[30]\}$, there is nothing separating the maximal $B$-subwords, so $\varphi'$ extends in $C_{5+7+4} = C_{16}$ ways. The following example highlights the structure that results from an $A$-matching when there are maximal $B$-subwords of odd length. \begin{example} \label{ex-superset2} Let $w = A(\o{B} B)^4\o{B}A\o{A}(B\o{B})^6B\o{A}(B\o{B})^4 A \o{A}$. Here $\ell_1 = 9$, $\ell_2 = 13$ and $\ell_3 = 8$. Again, there are multiple non-crossing $A$-matchings, but any such matching ensures that the maximal $B$-subwords of lengths $9$ and $13$ will be free to match with each other because they are the only two of odd length. 
One non-crossing $A$-matching is $\varphi = \{w[1]w[26],w[11]w[12], w[35]w[36]\}$ and another is $\varphi' = \{w[1]w[36], w[11]w[12],w[26]w[35]\}$. Both $\varphi$ and $\varphi'$ extend in $C_{11}\cdot C_4$ ways. (See Figure~\ref{fig-A-match-ex-2}.) \end{example} \begin{figure}[ht!] \begin{tikzpicture}[scale=.8, every node/.style={scale=.8}] \draw [above] (.4,0) node {${A}$}; \draw [dashed] (.7,.2)--(2.7,.2); \draw [below] (1.7,.2) node {$(\o{B} B)^4 \o{B}$}; \draw [above] (3,0) node {$A$}; \draw [above] (3.4,0) node {$\o{A}$}; \draw [dashed](3.7,.2)--(5.7,.2); \draw [below] (4.7,.2) node {$(B \o{B})^6 B$}; \draw [above] (6,0) node {$\o{A}$}; \draw [dashed] (6.3,.2)--(8.3,.2); \draw [below] (7.3,.2) node {$(B \o{B})^4$}; \draw [line width=1.2] (.4,.5) to [out=60, in=120] (6,.5); \draw [line width=1.2] (3,.5) to [out=60, in=120] (3.4,.5); \draw [line width=1.2] (8.6,.5) to [out=80, in=100] (9,.5); \draw [above] (8.6,0) node {$A$}; \draw [above] (9,0) node {$\o{A}$}; \end{tikzpicture} \hspace{.3in} \begin{tikzpicture}[scale=.8, every node/.style={scale=.8}] \draw [above] (.4,0) node {${A}$}; \draw [dashed] (.7,.2)--(2.7,.2); \draw [below] (1.7,.2) node {$(\o{B} B)^4 \o{B}$}; \draw [above] (3,0) node {$A$}; \draw [above] (3.4,0) node {$\o{A}$}; \draw [dashed](3.7,.2)--(5.7,.2); \draw [below] (4.7,.2) node {$(B \o{B})^6 B$}; \draw [above] (6,0) node {$\o{A}$}; \draw [dashed] (6.3,.2)--(8.3,.2); \draw [below] (7.3,.2) node {$(B \o{B})^4$}; \draw [line width=1.2] (.4,.5) to [out=50, in=130] (9,.5); \draw [line width=1.2] (3,.5) to [out=60, in=120] (3.4,.5); \draw [line width=1.2] (6,.5) to [out=70, in=110] (8.6,.5); \draw [above] (8.6,0) node {$A$}; \draw [above] (9,0) node {$\o{A}$}; \end{tikzpicture} \caption{All $A$-matchings of $A(\o{B} B)^4\o{B}A\o{A}(B\o{B})^6B\o{A}(B\o{B})^4 A \o{A}$ described in Example~\ref{ex-superset2}; each can be extended in $C_{11} C_4$ ways.} \label{fig-A-match-ex-2} \end{figure} We are now ready to state a theorem that defines a superset 
for $\mathcal{R}(n,1)$ in terms of Catalan numbers. \begin{theorem}\label{thm-Rnm-superset} For a positive integer $n$, every value in $\mathcal{R}(n,1)$ can be expressed as $\sum_{r=1}^{h} p_r$, where each $p_r$ is of the form $\prod_{i = 1}^j C_{s_i}$ with $0\leq h \leq C_t$ for some $0\leq t \leq \frac{n}{2}$ and $n-t = s_1 + \cdots + s_j$ for some $1\leq j \leq t+1$. \end{theorem} \begin{proof} Again, it suffices to consider $w \in \hat{\cP}(n,2)$. Without loss of generality, assume $A$ occurs $t$ times in $w$ with $1\leq t \leq \frac{n}{2}$. Let $\ell_1,\ldots, \ell_m$ be the lengths of the maximal $B$-subwords of $w$. Fix a non-crossing $A$-matching $\varphi$ of $w$. There are at most $C_t$ such matchings. Then $\varphi$ induces a grouping of the maximal $B$-subwords and thus a partition of the multiset $\{\ell_1, \ell_2, \ldots, \ell_m\}$ into multisets $S_1, S_2, \ldots, S_j$. Let $r_i$ be the sum of the elements in $S_i$. We claim each $r_i$ is even. This follows from the fact that the number of letters between any $A$ and $\o{A}$ is even and all $A$ and $\o{A}$ letters are paired in a non-crossing perfect matching. Therefore, we can write $r_i = 2s_i$ for some $s_i \in\mathbb{N}$. Moreover, the $B$-subwords that the $A$-matching leaves to be paired together concatenate to the form $(B\o{B})^{s_i}$ or $(\o{B}B)^{s_i}$ and thus can be matched in $C_{s_i}$ ways. Since there are $t$ letters which are $A$, the arcs from the $A$-matching partition the $B$-subwords into at most $t+1$ groups with lengths $2s_1, 2s_2, \ldots, 2s_j$ where $j \leq t+1$ and $\sum_{i=1}^j s_i = n-t$. The number of ways to extend the $A$-matching to a non-crossing matching of $w$ is $\prod_{i=1}^j C_{s_i}$. \end{proof} Computing the set described in Theorem~\ref{thm-Rnm-superset}, we have the following supersets of $\mathcal{R}(n,1)$, where the ellipses indicate that all integers in that range are present in the set. 
\[ \begin{array}{l c l} \mathcal{R}(1,1) & \subseteq & \{0, 1\}; \\ \mathcal{R}(2,1) & \subseteq & \{0, 1, 2\}; \\ \mathcal{R}(3,1) & \subseteq & \{0, 1, 2, 5\}; \\ \mathcal{R}(4,1) & \subseteq & \{0, \ldots, 5, 14\}; \\ \mathcal{R}(5,1) & \subseteq & \{0, \ldots, 7, 10, 14, 42\}; \\ \mathcal{R}(6,1) & \subseteq & \{0, \ldots, 22, 25, 28, 42, 132\}; \\ \mathcal{R}(7,1) & \subseteq & \{0, \ldots, 52, 56, 57, 58, 60, 61, 70, 84, 132, 429\}; \\ \mathcal{R}(8,1) & \subseteq & \{0, \ldots, 178, 182, 183, 184, 186, 187, 196, 210, 264, 429, 1430\}. \end{array} \] The following corollary establishes the largest five values in $\mathcal{R}(n,1)$. Consequently, any integer between $C_{n-2}+C_{n-3}$ and $C_n$ is only in $\mathcal{R}(n,1)$ if it is one of these five values. \begin{corollary} \label{cor-Rn1-gaps} For $n \geq 13$, \[ \mathcal{R}(n,1) \subseteq \{0, \ldots, C_{n-2} + C_2\cdot C_{n-4}\} \cup \{C_{n-2} + C_{n-3}, C_3\cdot C_{n-3}, C_2\cdot C_{n-2}, C_{n-1}, C_n\}. \] \end{corollary} \begin{proof} For integers $n$ and $t$ with $0 \leq t \leq \floor{n/2}$, let $Y(n,t)$ be the set of all integers of the form $\prod C_{b_i}$ with $b_1 + b_2 + \cdots + b_j = n-t$ and $1 \leq j \leq t+1$. For example, $Y(n,0) = \{ C_n \}$ and \[ Y(n,1) = \left\{C_{n-1}C_0, C_{n-2}C_{1}, C_{n-3}C_{2}, \ldots, C_{\ceil{\frac{n-1}{2}}}C_{\floor{\frac{n-1}{2}}} \right\}. \] Then let $Z(n,t)$ be the set of all $h$-term sums of elements (with repetition) from $Y(n,t)$ with $0 \leq h \leq C_t$. Theorem~\ref{thm-Rnm-superset} says $\mathcal{R}(n,1) \subseteq Z(n,0) \cup \cdots \cup Z(n, \floor{n/2})$. One convenient property of Catalan numbers is $C_{n-j}C_j < C_{n-i}C_i$ for $0\leq i < j \leq n/2$. As a consequence, $C_{n-t}C_0$ is the maximum value in $Y(n,t)$ and $C_{t}(C_{n-t}C_0)$ is the maximum value in $Z(n,t)$. Now note that $Z(n,0)=Y(n,0)$ and $Z(n,1)=Y(n,1)$ which are given above. When $t=2$, $C_2(C_{n-2}C_0)=2C_{n-2}$ is the maximum value in $Z(n,2)$. 
The next three largest values in $Z(n,2)$ are \[C_{n-2}C_{0} + C_{n-3}C_{1}> C_{n-3}C_0 + C_{n-3}C_0 = 2C_{n-3} > C_{n-3}C_1 + C_2C_{n-4},\] all of which are less than $C_3C_{n-3}$. Since the largest two values in $Y(n,3)$ are $C_{n-3}C_0$ and $C_{n-4}C_1$, the maximum value in $Z(n,3)$ is $C_3C_{n-3}$ while the next largest value is $4C_{n-3}+C_{n-4}$ which is smaller than $C_{n-2} + C_2C_{n-4}$. Finally, for $4\leq t \leq n/2$, all values in $Z(n,t)$ are at most $C_4C_{n-4}$ which is less than $C_{n-2} + 2 C_{n-4}$. \end{proof} Although Theorem~\ref{thm-Rnm-superset} only defines a superset for $\mathcal{R}(n,1)$, we show that the largest gaps stated in Corollary~\ref{cor-Rn1-gaps} do hold for $\mathcal{R}(n,m)$ in general. \begin{proposition} For $n \geq 13$, \[ \mathcal{R}(n,m) \subseteq \{0, 1, \ldots, C_{n-2} + C_2\cdot C_{n-4}\} \cup \{C_{n-2} + C_{n-3}, C_3\cdot C_{n-3}, C_2\cdot C_{n-2}, C_{n-1}, C_n\}. \] \end{proposition} \begin{proof} Fix a $k$-foldable word $w \in \hat{\cP}(n,2m)$. Let $n_i$ be the number of occurrences of $A_i$ in $w$. First note that $k \leq \prod_{i=1}^{2m} C_{n_i}$. This is tight if $w=(A_1\o{A}_1)^{n_1} (A_2\o{A}_2)^{n_2} \ldots (A_{2m}\o{A}_{2m})^{n_{2m}}$. Since $C_t \geq C_i C_{t-i}$ for $0\leq i \leq t/2$, if $w$ has at least $3$ letters and their complements, then $k\leq C_1C_1C_{n-2}$ which is less than $C_{n-2}+C_2 C_{n-4}$. So if $k \geq C_{n-2}+C_2C_{n-4}$, then $w \in \hat{\cP}(n,2)$ and the result follows from Corollary~\ref{cor-Rn1-gaps}. \end{proof} Having described a nontrivial superset for $\mathcal{R}(n,m)$, we turn our attention to finding values that are contained in $\mathcal{R}(n,1)$ and hence in $\mathcal{R}(n,m)$. The next proposition verifies that the largest five values in the Corollary~\ref{cor-Rn1-gaps} superset truly are in $\mathcal{R}(n,1)$. \begin{proposition} For $n\geq 3$, \[ \{C_{n-2}+C_{n-3}, C_3C_{n-3}, C_2C_{n-2}, C_{n-1}, C_n \}\subseteq \mathcal{R}(n,1). 
\] \label{prop-large_gaps} \end{proposition} \begin{proof} The word $(A\o{A})^t (\o{A}A)^{n-t} \in \mathcal{P}(n,1)$ has the same number of non-crossing perfect matchings as the corresponding word $(A\o{A})^t (B\o{B})^{n-t} \in \hat{\cP}(n,2)$, so we have $C_tC_{n-t}\in \mathcal{R}(n,1)$ for $0\leq t \leq n$. It only remains to show that $C_{n-2}+C_{n-3}\in \mathcal{R}(n,1)$ when $n\geq 3$. Consider the words \[ w = (A\o{A})^{n -3}\,\o{A}AA\o{A}\o{A}A \quad \text{ and } \quad \hat{w}= (A\o{A})^{n -3}\,B\o{B}A\o{A}B\o{B}, \] which have the same number of non-crossing perfect matchings. There are only two $B$-matchings in $\hat{w}$. When each $B$ is matched with the $\o{B}$ that immediately follows it, there are $C_{n-2}$ ways to extend this to a non-crossing perfect matching of $\hat{w}$. The other $B$-matching can be extended in only $C_{n-3}$ ways. So $\hat{w}$, and hence $w$, has $C_{n-2}+C_{n-3}$ non-crossing perfect matchings. \end{proof} The Catalan numbers provided a way to describe the largest values in $\mathcal{R}$. Before discussing the smallest values, we use the Catalan numbers once more to establish a family of values in $\mathcal{R}$. \begin{proposition} \label{prop-jCn} For non-negative integers $j$ and $\ell$ that satisfy $2j\leq n-\ell$, \[(j+1) C_{\ell} \in \mathcal{R}(n,1).\] \end{proposition} \begin{proof} Consider the word $w_{j,\ell} = (\o{A}A)^{\ell}A^{n-\ell - j}\o{A} \,^j A^j\o{A}\,^{n-\ell-j} \in \mathcal{S}(n,1)$. Each $A$ in the prefix $(\o{A}A)^{\ell}$ must match with an $\o{A}$ also in the prefix. Otherwise, the number of $A$'s and $\o{A}$'s between the matched pair will not be equal. There are $C_{\ell}$ ways to define a non-crossing matching on $(\o{A}A)^{\ell}$. The remainder of the word $A^{n-\ell-j}\o{A}\,^j A^j \o{A}\,^{n-\ell-j}$ matches in $(j+1)$ ways since $j<n-\ell-j$. It can be folded onto a path of length $n-\ell$ rooted at one end. Then after making the only Type~2 move available, one more available move is created.
After $j$ of these moves, we will have explored the space of all foldings. Thus $w_{j,\ell}$ folds in $(j+1)C_{\ell}$ ways. \end{proof} \subsection{Small Values in $\mathcal{R}(n,1)$} In search of the smallest value which is not in $\mathcal{R}(n,1)$, we find that (aside from $3 \not\in\mathcal{R}(3,1)$) all integers $i\leq n$ are in $\mathcal{R}(n,1)$. \begin{proposition}\label{prop-small} If $n\geq 4$, then $\{0,1,\ldots,n\}\subseteq \mathcal{R}(n,1)$. \end{proposition} \begin{proof} First notice that $A^{2n}$ is not foldable and $A^n \o{A}\,^n$ is $1$-foldable for any $n$. Also, $A^{n-2}\o{A}\,^2A^2\o{A}\,^{n-2}$ is $3$-foldable when $n\geq 4$. Thus we have $\{0,1,3\} \subseteq \mathcal{R}(n,1)$ for $n \geq 4$. Let $n$ be a positive integer. Consider the word $w_\ell=\o{A}A^{\ell}\o{A}\,^{j}A^{j}\o{A}\,^{\ell}A$ where $1 \leq \ell < n$ and $j=n-1-\ell$. We will attain the even values in the interval $[2,n]$ when $j < \ell$ and the odd values in the interval $[5,n]$ when $\ell \leq j$. If $j < \ell$, that is $0\leq j\leq\frac{n-2}{2}$, we find that $2j+2\in \mathcal{R}(n,1)$. To see this, observe that $w_{\ell}$ folds around both trees in Figure~\ref{fig-small} provided $\alpha\in \{0,1,\ldots, j\}$. Thus $w_{\ell}$ is at least $(2j+2)$-foldable. From the tree on the left, there is a Type~2 move that will transform the tree into the one on the right. At the vertex of degree $4$, there is a Type~1 move (if $\alpha>0$) and a Type~2 move (if $\alpha<j$), each of which results in a tree of the same description with a different value of $\alpha$. A similar argument can be made for the tree on the right. As the graph $G_{w_{\ell}}$ of $w_{\ell}$-valid trees is connected, $w_{\ell}$ is exactly $(2j+2)$-foldable. If $\ell \leq j$, that is $1\leq \ell <\frac{n}{2}$, we find that $2\ell+3\in \mathcal{R}(n,1)$. First observe that $w_{\ell}[1]$ must bond with $w_{\ell}[2]$, $w_{\ell}[2n]$ or $w_{\ell}[2j+2]$ based on the subwords created. 
In the case that $w_{\ell}[1]$ bonds with $w_{\ell}[2n]$, the subword $A^{\ell}\o{A}\,^{j}A^{j}\o{A}\,^{\ell}$ folds in exactly $\ell+1$ ways (as on the left tree in Figure~\ref{fig-small}). In the case that $w_{\ell}[1]$ bonds with $w_{\ell}[2]$, then $w_{\ell}[2n]$ bonds with either $w_{\ell}[2n-1]$ or $w_{\ell}[2\ell+1]$; the first extends to $\ell$ different foldings of $w_{\ell}$ and the second extends in only one way. Finally, if $w_{\ell}[1]$ bonds with $w_{\ell}[2j+2]$, there is again only one way to extend this to a folding of $w_{\ell}$. Thus $w_{\ell}$ is $(2\ell+3)$-foldable. \begin{figure}[ht!] \begin{tikzpicture} \coordinate (bottom) at (0,-1); \coordinate (top) at (0,5); \coordinate (mid) at (0,2); \coordinate (left) at (-3,1); \coordinate (right) at (3,1); \foreach \x in {1,2,3,4,5} \path (bottom)--(mid) coordinate [pos=.20*\x] (b\x) {}; \foreach \x in {1,2,3,4,5} \path (mid)--(top) coordinate [pos=.20*\x] (t\x) {}; \foreach \x in {1,2,3,4} \path (left)--(mid) coordinate [pos=.25*\x] (l\x) {}; \foreach \x in {1,2,3,4} \path (right)--(mid) coordinate [pos=.25*\x] (r\x) {}; \draw (top) node[above] {\textit{\small{root}}}; \draw (bottom)--(b2); \draw (b3)--(mid); \draw (mid)-- (t1) (t2)--(top); \path (b2)--(b3) node [pos=.7] {$\vdots$}; \path (t1)--(t2) node [pos=.7] {$\vdots$}; \foreach \x in {1,2,3,4,5} \draw [fill=black] (b\x) circle (0.06); \foreach \x in {1,2,3,4,5} \draw [fill=black] (t\x) circle (0.06); \draw [fill=black] (bottom) circle (0.06); \draw (left)--(l1) (l2)--(mid)--(r2) (r1)--(right); \path (l1) -- (l2) node [midway, sloped] {$\cdots$}; \path (r1) -- (r2) node [midway, sloped] {$\cdots$}; \foreach \x in {1,2,3,4} \draw [fill=black] (l\x) circle (0.06); \foreach \x in {1,2,3,4} \draw [fill=black] (r\x) circle (0.06); \draw [fill=black] (left) circle (0.06); \draw [fill=black] (right) circle (0.06); \begin{scope}[every node/.style={scale=.7}] \foreach \x in {1,2,4,5} \path (bottom)--(mid) node [pos = .20*\x-.1, left=-.05]
{$\overline{A}$}; \foreach \x in {1,2,4,5} \path (bottom)--(mid) node [pos = .20*\x-.1, right=-.05, yshift=-1] {${A}$}; \foreach \x in {1,3,4} \path (mid)--(top) node [pos = .20*\x-.1, left=-.05, yshift=-1] {${A}$}; \foreach \x in {1,3,4} \path (mid)--(top) node [pos = .20*\x-.1, right=-.05] {$\overline{A}$}; \foreach \x in {5} \path (mid)--(top) node [pos = .20*\x-.1, left=-.05] {$\overline{A}$}; \foreach \x in {5} \path (mid)--(top) node [pos = .20*\x-.1, right=-.05, yshift=-1] {${A}$}; \foreach \x in {1,3,4} \path (left)--(mid) node [pos = .25*\x - .15, above=0] {$A$}; \foreach \x in {1,3,4} \path (left)--(mid) node [pos = .25*\x - .15, below=0] {$\overline{A}$}; \foreach \x in {1,3,4} \path (right)--(mid) node [pos = .25*\x - .15, above=0] {$\overline{A}$}; \foreach \x in {1,3,4} \path (right)--(mid) node [pos = .25*\x - .15, below=0] {${A}$}; \end{scope} \path (left)--(mid) node [pos=.95] (left1) {}; \draw [decoration={brace}, decoration={raise=2ex}, decorate] (left)-- (left1) node [midway,above=.35, sloped] {$\alpha$}; \path (right)--(mid) node [pos=.95] (right1) {}; \draw [decoration={brace}, decoration={raise=2ex}, decorate] (right1)--(right) node [midway,above=.35, sloped] {$\alpha$}; \end{tikzpicture} \hspace{.4in} \begin{tikzpicture} \coordinate (bottom) at (0,-1); \coordinate (top) at (0,5); \coordinate (mid) at (0,2); \coordinate (left) at (-3,1); \coordinate (right) at (3,1); \foreach \x in {1,2,3,4,5} \path (bottom)--(mid) coordinate [pos=.20*\x] (b\x) {}; \foreach \x in {1,2,3,4,5} \path (mid)--(top) coordinate [pos=.20*\x] (t\x) {}; \foreach \x in {1,2,3,4} \path (left)--(mid) coordinate [pos=.25*\x] (l\x) {}; \foreach \x in {1,2,3,4} \path (right)--(mid) coordinate [pos=.25*\x] (r\x) {}; \coordinate (lt) at (-.75,3.5); \coordinate (rt) at (.75,3.5); \draw (lt)--(t3)--(rt); \foreach \x in {lt,rt} \draw[fill=black] (\x) circle (0.06); \draw (0,4) node[above] {\textit{\small{root}}}; \draw (bottom)--(b2); \draw (b3)--(mid); \draw (mid)-- (t1)
(t2)--(t3); \path (b2)--(b3) node [pos=.7] {$\vdots$}; \path (t1)--(t2) node [pos=.7] {$\vdots$}; \foreach \x in {1,2,3,4,5} \draw [fill=black] (b\x) circle (0.06); \foreach \x in {1,2,3} \draw [fill=black] (t\x) circle (0.06); \draw [fill=black] (bottom) circle (0.06); \draw (left)--(l1) (l2)--(mid)--(r2) (r1)--(right); \path (l1) -- (l2) node [midway, sloped] {$\cdots$}; \path (r1) -- (r2) node [midway, sloped] {$\cdots$}; \foreach \x in {1,2,3,4} \draw [fill=black] (l\x) circle (0.06); \foreach \x in {1,2,3,4} \draw [fill=black] (r\x) circle (0.06); \draw [fill=black] (left) circle (0.06); \draw [fill=black] (right) circle (0.06); \path (left)--(mid) node [pos=.95] (left1) {}; \draw [decoration={brace}, decoration={raise=2ex}, decorate] (left)-- (left1) node [midway,above=.35, sloped] {$\alpha$}; \path (right)--(mid) node [pos=.95] (right1) {}; \draw [decoration={brace}, decoration={raise=2ex}, decorate] (right1)--(right) node [midway,above=.35, sloped] {$\alpha$}; \begin{scope}[every node/.style={scale=.7}] \path (lt)--(top) node [pos = .4, above=-.45] {$\overline{A}$}; \path (lt)--(top) node [pos = .4, below=.47] {${A}$}; \path (rt)--(top) node [pos = .4, above=-.45] {${A}$}; \path (rt)--(top) node [pos = .4, below=.47] {$\overline{A}$}; \foreach \x in {1,2,4,5} \path (bottom)--(mid) node [pos = .20*\x-.1, left=-.05] {$\overline{A}$}; \foreach \x in {1,2,4,5} \path (bottom)--(mid) node [pos = .20*\x-.1, right=-.05, yshift=-1] {${A}$}; \foreach \x in {1,3} \path (mid)--(top) node [pos = .20*\x-.1, left=-.05, yshift=-1] {${A}$}; \foreach \x in {1,3} \path (mid)--(top) node [pos = .20*\x-.1, right=-.05] {$\overline{A}$}; \foreach \x in {1,3,4} \path (left)--(mid) node [pos = .25*\x - .15, above=0] {$A$}; \foreach \x in {1,3,4} \path (left)--(mid) node [pos = .25*\x - .15, below=0] {$\overline{A}$}; \foreach \x in {1,3,4} \path (right)--(mid) node [pos = .25*\x - .15, above=0] {$\overline{A}$}; \foreach \x in {1,3,4} \path (right)--(mid) node [pos = .25*\x - .15, 
below=0] {${A}$}; \end{scope} \end{tikzpicture} \caption{The general $w_{\ell}$-valid trees for $w_{\ell}=\o{A}A^{\ell}\o{A}\,^{j}A^{j}\o{A}\,^{\ell}A$ in the proof of Proposition~\ref{prop-small}.} \label{fig-small} \end{figure} \end{proof} \section{Conclusion} There are still many questions to be answered regarding the sets $\mathcal{S}_k(n,m)$ and $\mathcal{R}(n,m)$. For example, the number of $2$-foldable words, $|\mathcal{S}_2(n,m)|$, is not known. A complete description of $\mathcal{R}(n,m)$ remains to be determined. We are particularly interested in the smallest value $k$ for which $k\not\in \mathcal{R}(n,m)$. \section*{Acknowledgements} We would like to thank Gavin King, Vaclav Kotesovec, and Derrick Stolee for many helpful conversations. All authors were supported in part by NSF-DMS grant \#1500662, ``The 2015 Rocky Mountain-Great Plains Graduate Research Workshop in Combinatorics.'' Smith was also supported in part by NSF-DMS grant \#1344199. Computations in this paper were performed with Maple\texttrademark{} 2016.2 and SageMath. Maple is a trademark of Waterloo Maple Inc. Algolib was developed by the Algorithms Project at INRIA. Sage code was executed in CoCalc by SageMath, Inc.
https://arxiv.org/abs/2206.02995
Strong cospectrality in trees
We prove that no tree contains a set of three vertices which are pairwise strongly cospectral. This answers a question raised by Godsil and Smith in 2017.
\section{Introduction}\label{intro} Let $G$ be a finite simple graph and $A$ its adjacency matrix. A continuous-time quantum walk can be defined with $G$ as its underlying graph, and in certain models where no external interference exists, all properties of the walk are determined by the spectrum of $A$. A desirable property for a quantum walk is that at certain times the quantum state input at a vertex is transferred to another --- if this occurs with probability $1$, then it is called perfect state transfer, and if it occurs with probability close to $1$, it is called pretty good state transfer. In both cases, a necessary condition is that the two vertices involved are such that their projections onto the eigenspaces of the graph are either equal or negatives of each other, in which case the vertices are called strongly cospectral. Precisely, if $a$ and $b$ are vertices of $G$ and $A = \sum_{r} \theta_r E_r$ is the spectral decomposition of $A$, then $a$ and $b$ are strongly cospectral if $E_r e_a = \pm E_r e_b$ for all $r$, where $e_a$ stands for the characteristic vector of the vertex $a$. Strongly cospectral vertices have been extensively studied in \cite{godsil2017strongly} and we do not aim to survey all results therein. However, it is enlightening to realize that if two vertices are strongly cospectral, then they are cospectral in the conventional sense of the term, meaning that the graphs obtained upon the removal of each of them have the same spectrum. Cospectral vertices have been studied for a long time, and in the context of trees, they are a key piece in Schwenk's seminal paper \cite{schwenk1973almost}. Two vertices $a$ and $b$ in $G$ are similar if there is an automorphism of the graph that maps $a$ to $b$. Such an automorphism, restricted to $G \setminus a$, yields an isomorphism $G \setminus a \simeq G\setminus b$, and if the latter isomorphism exists even when no automorphism of $G$ maps $a$ to $b$, we say that $a$ and $b$ are pseudo-similar.
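The definition of strong cospectrality above is easy to test numerically from the spectral decomposition. The following sketch is our own illustration (it assumes \texttt{numpy}; the graph $P_4$, the tolerance, and the function names are not from the paper): it builds the projections $E_r$ and checks whether $E_r e_a = \pm E_r e_b$ for every $r$.

```python
import numpy as np

def projections(A, tol=1e-8):
    # spectral decomposition A = sum_r theta_r E_r:
    # group eigenvectors by (numerically equal) eigenvalue
    vals, vecs = np.linalg.eigh(A)
    groups = []
    r = 0
    while r < len(vals):
        s = r
        while s + 1 < len(vals) and vals[s + 1] - vals[r] < tol:
            s += 1
        V = vecs[:, r:s + 1]
        groups.append(V @ V.T)   # orthogonal projection onto the eigenspace
        r = s + 1
    return groups

def strongly_cospectral(A, a, b, tol=1e-8):
    # a and b are strongly cospectral iff E_r e_a = +/- E_r e_b for every r
    n = A.shape[0]
    ea, eb = np.eye(n)[a], np.eye(n)[b]
    for E in projections(A):
        x, y = E @ ea, E @ eb
        if not (np.allclose(x, y, atol=tol) or np.allclose(x, -y, atol=tol)):
            return False
    return True

# the endpoints of a path are strongly cospectral; an endpoint and an
# interior vertex are not
P4 = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)  # path 0-1-2-3
print(strongly_cospectral(P4, 0, 3))  # True
print(strongly_cospectral(P4, 0, 1))  # False
```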
Again, these concepts have been around for some time; see for instance \cite{kimble1981pseudosimilar,godsil1983graphs}. It is immediate to note that similar and pseudo-similar vertices are cospectral, so it is natural to wonder what their connection is to the concept of strong cospectrality. It is perhaps natural to expect that similar vertices are strongly cospectral, but this is false --- for instance, no pair of vertices in $K_n$ with $n \geq 3$ is strongly cospectral. In fact, if $a$ and $b$ are strongly cospectral and there is an automorphism that fixes $a$, then it must also fix $b$. This suggests that strong cospectrality captures a sort of regularity or symmetry that must distinguish the pair of vertices from the remaining vertices of the graph. At this point one might suspect that in a given graph there cannot be a set of three or more vertices which are pairwise strongly cospectral. This suspicion is further reinforced by the fact that perfect state transfer, the quantum property inspiring the definition of strong cospectrality, is indeed monogamous (see \cite{kay2011basics}): no three vertices in a given graph can be involved in perfect state transfer with each other. Yet, there are graphs with three or more vertices pairwise strongly cospectral. The easiest examples are vertices of smallest degree in Cartesian products of paths of different lengths, as long as these graphs have simple eigenvalues. With simple eigenvalues, cospectrality is equivalent to strong cospectrality (see \cite{godsil2017strongly}), so it is enough to check that the number of closed walks of each fixed length around these vertices is the same for all of them. So why bother with trees? First, there is special interest in understanding quantum walks in trees (see for instance \cite{CoutinhoLiu2}) because trees model quantum systems which are likely cheaper and easier to build.
Unfortunately, there is no known example of perfect state transfer in trees on more than $3$ vertices, and this may well be a consequence of the fact that strong cospectrality in trees is not as common as it is for other graphs. In fact, our result is the first to display such a disparity: even though there are graphs with arbitrarily large sets of pairwise strongly cospectral vertices, no such set exists in a tree. Second, trees seem to behave differently than graphs in general when it comes to cospectrality. A famous example is the fact that almost all trees have a cospectral mate \cite{schwenk1973almost}, whereas the opposite is widely believed to be true for graphs in general (a conjecture due to W. Haemers). Our result displays one further aspect of this difference, and therefore hopefully serves as inspiration for future investigations. Third, and most importantly, the question of whether there are trees with three or more vertices pairwise strongly cospectral was asked by Godsil and Smith \cite{godsil2017strongly}. We answer it fully in the negative. In Section \ref{sec:prelim} we introduce the basic facts and notation used throughout the paper. In Section \ref{sec:result}, we state a key lemma and prove our main result. Section \ref{sec:lemma} is dedicated to the proof of the key lemma. \section{Graph spectra and polynomials}\label{sec:prelim} In this paper, we denote by $\phi^G$ the characteristic polynomial of the graph $G$ in the variable $t$. If $\theta_0,\theta_1,\dots, \theta_d$ are the distinct eigenvalues of the adjacency matrix $A$ of $G$, then we denote by $E_r$ the orthogonal projection onto the $\theta_r$-eigenspace. Two vertices $i$ and $j$ of the graph $G$ are called \textit{cospectral} if $\phi^{G\setminus i}=\phi^{G\setminus j}$.
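Cospectrality of a pair of vertices can likewise be checked directly from this definition: $\phi^{G\setminus i}=\phi^{G\setminus j}$ holds exactly when $G\setminus i$ and $G\setminus j$ have the same spectrum. A minimal numerical sketch (our own illustration, assuming \texttt{numpy}; the path example and function names are not from the paper):

```python
import numpy as np

def adjacency_path(n):
    # adjacency matrix of the path on n vertices 0-1-...-(n-1)
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    return A

def spectrum_without(A, i):
    # spectrum of G \ i: delete row and column i of the adjacency matrix
    B = np.delete(np.delete(A, i, axis=0), i, axis=1)
    return np.sort(np.linalg.eigvalsh(B))

def cospectral(A, i, j, tol=1e-9):
    # i and j are cospectral iff phi^{G\i} = phi^{G\j},
    # i.e. G \ i and G \ j have the same spectrum
    return np.allclose(spectrum_without(A, i), spectrum_without(A, j), atol=tol)

A = adjacency_path(4)          # path 0-1-2-3
print(cospectral(A, 0, 3))     # endpoints are similar, hence cospectral: True
print(cospectral(A, 0, 1))     # endpoint vs. inner vertex: False
```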
Using walk generating functions, it is possible to write the entries of $E_r$ in terms of these polynomials, as follows: \begin{equation} (E_r)_{i,i} = \frac{(t-\theta_r)\phi^{G \setminus i}}{\phi^G} \bigg|_{t = \theta_r}, \label{eq:diagonal} \end{equation} \noindent where this is well defined since $\theta_r$ is a pole of order at most $1$ of $\phi^{G\setminus i}/\phi^G$ (see \cite{GodsilAlgebraicCombinatorics,coutinho2021quantum}). From this, it follows that $i$ and $j$ are cospectral if and only if, for every $r$ in $\{0,1,\dots,d\}$, $(E_r)_{i,i}=(E_r)_{j,j}$. If, moreover, $E_r e_i = \pm E_r e_j$ for every $r$ in $\{0,1,\dots,d\}$, then $i$ and $j$ are \textit{strongly cospectral}. It is easy to verify that this is equivalent to requiring that they are cospectral and that $(E_r)_{i,i} = \pm (E_r)_{i,j}$ for every $r$ in $\{0,1,\dots,d\}$. Walk generating functions also provide the following expression \begin{equation}(E_r)_{i,j} = \frac{(t-\theta_r)\sqrt{\phi^{G \setminus i}\phi^{G \setminus j} - \phi^G \phi^{G \setminus \{i,j\}}}}{\phi^G} \bigg|_{t = \theta_r} \label{eq:offdiagonal} \end{equation} and so we obtain the following result: \begin{theorem}[Corollary 8.4 in \cite{godsil2017strongly}]\label{thm:strcospec} Vertices $i$ and $j$ of a graph $G$ are strongly cospectral if and only if $\phi^{G \setminus i} = \phi^{G \setminus j}$ and all poles of $\phi^{G\setminus \{i,j\}}/\phi^G$ are simple. \end{theorem} Note that as a consequence, if $i$ and $j$ are in distinct connected components of the graph $G$, then $i$ and $j$ are not strongly cospectral. At first sight, the expression within the square root in \eqref{eq:offdiagonal} is not clearly a perfect square, but in fact it is, as the following result shows: \begin{lemma}[Lemma 2.1 in \cite{GodsilAlgebraicCombinatorics}]\label{lem:wronskian} Let $i$ and $j$ be vertices in the graph $G$.
Then, \[\phi^{G\setminus i}\phi^{G\setminus j}-\phi^{G\setminus\{i,j\}}\phi^G=\left(\sum_{P:i\to j}\phi^{G\setminus P}\right)^2,\] where the sum is over all the paths from $i$ to $j$. \end{lemma} Our proof will require manipulating ratios of characteristic polynomials of a graph and its vertex-deleted subgraphs, and to that end, we have found it more convenient to introduce the following notation. Given a graph $G$ and a vertex $i$, set \begin{equation} \alpha_i^G =\dfrac{\phi^G}{\phi^{G\setminus i}}. \label{eq:alpha} \end{equation} We end this section by establishing a description of the behaviour of this rational function. \begin{lemma}[Theorem 1.5 in \cite{GodsilAlgebraicCombinatorics}]\label{lem:derivative} Let $G$ be a graph. Then, the derivative of $\phi^G$ is given by $(\phi^G)'=\sum_{i\in V(G)}\phi^{G\setminus i}$. \end{lemma} \begin{lemma}\label{lemma:derivative_quotient} Let $i$ be a vertex in the graph $G$. Then, $(\alpha_i^G)'(t)\geq 1$ for every $t$ that is not a pole of $\alpha_i^G$. In particular, $\alpha_i^G(t)$ has only simple zeros and poles, and is increasing and surjective on each of its branches. \end{lemma} \begin{proof} Naturally, all zeros and poles of $\alpha_i^G$ are real. By taking the derivative in \eqref{eq:alpha} and by Lemmas \ref{lem:derivative} and \ref{lem:wronskian}, it follows that \[(\alpha_i^G)'= 1+\sum_{j\in V(G\setminus i)}\left(\dfrac{\sum_{P:i\to j}\phi^{G\setminus P}}{\phi^{G\setminus i}}\right)^2.\] This implies that $(\alpha_i^G)'(t)\geq 1$ for every $t$ that is not a zero of $\phi^{G\setminus i}$. It follows by continuity that $(\alpha_i^G)'(t)\geq 1$ for every $t$ that is not a pole of $\alpha_i^G$. As a consequence, $\alpha_i^G$ is increasing and surjective in each of its branches and all of its zeros are simple. Since $\deg(\phi^G)=\deg(\phi^{G\setminus i})+1$, the number of zeros of $\alpha_i^G$ is one more than the number of poles, counted with multiplicity, of $\alpha_i^G$.
But in each branch, because $\alpha_i^G$ is increasing, there can only be one zero of $\alpha_i^G$. Putting this all together, it follows that all the poles of $\alpha_i^G$ are also simple. \end{proof} \section{Main result}\label{sec:result} The result we prove in this section is that no tree has three (or more) pairwise strongly cospectral vertices. First, we show that if such a set of pairwise strongly cospectral vertices exist, then there must be a cut vertex whose removal separates all three of them in different connected components. In fact, note that it is enough to assume they are pairwise cospectral. \begin{lemma}\label{lem:notinpath} If three vertices in a tree are pairwise cospectral, then they do not lie on a path. \end{lemma} \begin{proof} Let $T$ be a tree as in Figure~\ref{fig:same_path}, where the vertices $i$, $j$ and $k$ are on the same path. We will prove that $j$ cannot be cospectral to both $i$ and $k$. \begin{figure} \centering \includegraphics[scale=0.2]{vertices_in_same_path.png} \caption{Representation of the tree $T$ with vertices $i$, $j$ and $k$ on the same path.} \label{fig:same_path} \end{figure} Let $\theta_1(G)$ be the largest eigenvalue of the adjacency matrix of the graph $G$. As $j$ is a cut-vertex, it follows that $\theta_1(T\setminus j)=\max\{\theta_1(T_i),\theta_1(T_j),\theta_1(T_k)\}$. But since $\theta_1(H) < \theta_1(G)$ for every proper subgraph $H$ of a connected graph $G$ (as a consequence of the Perron-Frobenius Theorem, see \cite[Chapter 8]{GodsilRoyle}), we have $\theta_1(T_i) < \theta_1(T \setminus k)$; $\theta_1(T_j) < \theta_1(T \setminus i), \theta_1(T \setminus k)$; $\theta_1(T_k) < \theta_1(T \setminus i)$. Thus $\theta_1(T \setminus j) < \max\{\theta_1(T \setminus i), \, \theta_1(T \setminus k)\}$, and therefore $j$ cannot be cospectral to both $i$ and $k$. \end{proof} Next, we state a main technical lemma, which we will prove in the next section. 
\begin{lemma} \label{lem:restriction} Assume vertices $i$, $j$ and $k$ are pairwise strongly cospectral in a graph $G$, and that there exists a vertex $v$ such that $i$, $j$ and $k$ lie in distinct components of $G \setminus v$. Then, for any $\theta \in \Rds$, if $\alpha_i^{G\setminus v}(\theta)=0$, then $\alpha_j^{G\setminus v}(\theta)=\alpha_k^{G\setminus v}(\theta)\neq 0$. \end{lemma} As we now show, the situation described in this lemma can in fact never occur. \begin{theorem}\label{main_result} Let $G$ be a graph with three pairwise cospectral vertices $i$, $j$ and $k$, and assume that there is a cut-vertex $v$ such that these cospectral vertices are in distinct connected components of $G\setminus v$. Then, one of the pairs of cospectral vertices is not strongly cospectral. \end{theorem} \begin{proof} Assume, by contradiction, that the vertices $i$, $j$ and $k$ are pairwise strongly cospectral. First, note by Lemma~\ref{lemma:derivative_quotient} that if $\theta$ is a sufficiently large negative number, then $\alpha_i^{G\setminus v}(\theta)$, $\alpha_j^{G\setminus v}(\theta)$ and $\alpha_k^{G\setminus v}(\theta)$ are all negative. On the other hand, if $\theta$ is a sufficiently large positive number, then $\alpha_i^{G\setminus v}(\theta)$, $\alpha_j^{G\setminus v}(\theta)$ and $\alpha_k^{G\setminus v}(\theta)$ are all positive. Let $\tau$ be the smallest real number at which at least one of them is equal to zero, and $\lambda$ the largest real number at which at least one is equal to zero. Observe also by Lemma~\ref{lemma:derivative_quotient} that $\alpha_i^{G\setminus v}$, $\alpha_j^{G\setminus v}$ and $\alpha_k^{G\setminus v}$ are increasing and continuous in each branch. Finally, note that by Lemma~\ref{lem:restriction} it cannot happen that two terms among $\alpha_i^{G\setminus v}(\theta)$, $\alpha_j^{G\setminus v}(\theta)$ and $\alpha_k^{G\setminus v}(\theta)$ are simultaneously equal to $0$.
Thus, it must be that between $\tau$ and $\lambda$ there is at least one real number $\theta$ for which, among $\alpha_i^{G\setminus v}(\theta)$, $\alpha_j^{G\setminus v}(\theta)$ and $\alpha_k^{G\setminus v}(\theta)$, there is a negative number, a positive number and a $0$. But this contradicts Lemma~\ref{lem:restriction}. So the vertices $i$, $j$ and $k$ cannot be pairwise strongly cospectral. \end{proof} This leads to the promised result. \begin{corollary}\label{main_result_trees} There is no tree with three pairwise strongly cospectral vertices. \end{corollary} \begin{proof} By Lemma~\ref{lem:notinpath}, if these vertices existed, then there would be another vertex of the tree whose removal puts each of them in a different component. By Theorem~\ref{main_result}, this is not possible. \end{proof} It is possible to give a statement analogous to Theorem~\ref{main_result} for matching polynomials, but in this case strong cospectrality should be defined via the matching polynomial analogue of Theorem~\ref{thm:strcospec}. The proof in this case will follow similarly, with the exception that Lemmas~\ref{lem:derivative} and~\ref{lem:wronskian} are replaced by~\cite[Theorem 1.1]{GodsilAlgebraicCombinatorics} and~\cite[Lemma 4.1]{GodsilAlgebraicCombinatorics}, respectively. In the next section, we work on the proof of Lemma~\ref{lem:restriction}. \section{Proof of the key lemma} \label{sec:lemma} \subsection{Properties of the $\alpha$'s} We start this section by writing the definition of strongly cospectral vertices in terms of the $\alpha$'s defined in \eqref{eq:alpha}. \begin{lemma}\label{lemma:strongly_cospectral_cf} Let $i$ and $j$ be distinct vertices in the graph $G$. Then, $i$ and $j$ are strongly cospectral if, and only if, $\alpha_i^G=\alpha_j^{G}$ and $\alpha_i^G(\theta)=\alpha_j^{G}(\theta)\neq 0$ whenever $\alpha_i^{G\setminus j}(\theta)=0$ or $\alpha_j^{G\setminus i}(\theta)=0$.
\end{lemma} \begin{proof} Our proof proceeds via Theorem~\ref{thm:strcospec}. Observe that $\phi^{G\setminus i}=\phi^{G\setminus j}$ if, and only if, $\alpha_i^G=\alpha_j^{G}$. We claim that $\dfrac{\phi^{G\setminus \{i,j\}}}{\phi^{G}}$ has a double pole at $\theta$ if, and only if, $\alpha_i^G(\theta)$, $\alpha_j^{G}(\theta)$, $\alpha_i^{G\setminus j}(\theta)$, $\alpha_j^{G\setminus i}(\theta)$ are all equal to zero. In order to see this, simply notice that \[\dfrac{\phi^{G\setminus \{i,j\}}}{\phi^{G}} = \dfrac{1}{\alpha_i^G\alpha_j^{G\setminus i}}=\dfrac{1}{\alpha_j^{G}\alpha_i^{G\setminus j}},\] \noindent and that $\alpha_i^G$, $\alpha_j^{G}$, $\alpha_i^{G\setminus j}$ and $\alpha_j^{G\setminus i}$ have simple zeros by Lemma~\ref{lemma:derivative_quotient}. \end{proof} In what follows, in order to work with the rational functions $\alpha_i^G$ for vertex-deleted subgraphs, we make use of a technique called \textit{contraction}. This technique is inspired by the theory of continued fractions and has also been used in the context of matching polynomials~\cite{spier2020refined}. For distinct vertices $i$ and $j$ in a graph $G$, denote \[\lambda_{i j}^G := -\left(\dfrac{\sum_{P:i\to j}\phi^{G\setminus P}}{\phi^{G\setminus \{i,j\}}}\right)^2.\] From here onwards, if $q(x)$ is a rational function, then writing $q(\theta) = \infty$ simply means that $\theta$ is a pole of $q(x)$. Observe that $\lambda_{i j}^G=\lambda_{j i}^G$, and that either $\lambda_{i j}^G(\theta) = \infty$ or $\lambda_{i j}^G(\theta)\leq 0$, for every real number $\theta$. Note also that $\lambda_{i j}^G$ has double zeros and poles. It is an immediate consequence of Lemma~\ref{lem:wronskian} that \begin{equation} \alpha_i^G=\alpha_i^{G\setminus j}+\dfrac{\lambda_{i j}^G}{\alpha_j^{G\setminus i}}\text{ and } \alpha_j^{G}=\alpha_j^{G\setminus i}+\dfrac{\lambda_{i j}^G}{\alpha_i^{G\setminus j}}. \label{eq:alphaslambdas} \end{equation} \noindent In this case we have the following useful observation.
\begin{lemma}\label{lambda_zero} If $\lambda_{i j}^G(\theta)=0$, then $\alpha_i^G(\theta)=\alpha_i^{G\setminus j}(\theta)$. \end{lemma} \begin{proof} Note that $\lambda_{i j}^G$ has double zeros, while, by Lemma~\ref{lemma:derivative_quotient}, $\alpha_j^{G\setminus i}$ has simple zeros. It follows that if $\lambda_{i j}^G(\theta)=0$, then $\dfrac{\lambda_{i j}^G}{\alpha_j^{G\setminus i}}(\theta)=0$, which in turn implies $\alpha_i^G(\theta)=\alpha_i^{G\setminus j}(\theta)$. \end{proof} Our next result shows that, more generally, given $\alpha_j^{G\setminus i}(\theta)$, $\alpha_i^{G\setminus j}(\theta)$ and $\lambda_{i j}^G(\theta)\neq \infty$, we can compute $\alpha_i^G(\theta)$ and $\alpha_j^{G}(\theta)$. In the statement of the next result we use the following conventions. For every $C$ in $\Rds \cup\{\infty\}$, $\frac{0}{C}=0$ and $\infty+C=C+\infty=\infty$; if $C\neq \infty$, then $\frac{C}{\infty}=0$; if $C\notin \{0,\infty\}$, then $\frac{C}{0}=\infty$. \begin{lemma}\label{lambda_finite} If $\lambda_{i j}^G(\theta)\neq \infty$, then, assuming the above conventions, \[\alpha_i^G(\theta)=\alpha_i^{G\setminus j}(\theta)+\dfrac{\lambda_{i j}^G(\theta)}{\alpha_j^{G\setminus i}(\theta)}.\] \end{lemma} \begin{proof} If $\lambda_{i j}^G(\theta)=0$, then, by Lemma~\ref{lambda_zero}, it holds that $\alpha_i^G(\theta)=\alpha_i^{G\setminus j}(\theta)$, and because the zeros of the $\lambda$'s are double and the zeros of the $\alpha$'s are simple, the equality follows. Therefore, assume that $\lambda_{i j}^G(\theta)$ is in $(-\infty,0)$. In this case, the result follows immediately, except when $\alpha_i^{G\setminus j}(\theta)=\infty$ and $\alpha_j^{G\setminus i}(\theta)=0$. Assume we are in this last situation, then we claim that $\alpha_i^G(\theta)=\infty$. 
To see this, observe that, by Lemma~\ref{lemma:derivative_quotient}, for every $\varepsilon>0$ sufficiently small it holds that $\alpha_i^{G\setminus j}(\theta-\varepsilon)>0>\alpha_i^{G\setminus j}(\theta+\varepsilon)$ and $\alpha_j^{G\setminus i}(\theta-\varepsilon)<0<\alpha_j^{G\setminus i}(\theta+\varepsilon)$. Then, since $\alpha_i^G=\alpha_i^{G\setminus j}+ \lambda_{i j}^G/\alpha_j^{G\setminus i}$ and $\lambda_{i j}^G$ is negative, we obtain $\alpha_i^G(\theta-\varepsilon)>0>\alpha_i^G(\theta+\varepsilon)$ for every $\varepsilon>0$ sufficiently small. It follows by Lemma~\ref{lemma:derivative_quotient} that $\alpha_i^G(\theta)=\infty$. \end{proof} In case $\lambda_{i j}^G(\theta)=\infty$ we can still obtain some information. \begin{lemma}\label{lambda_infinite} If $\lambda_{i j}^G(\theta) = \infty$, then $\alpha_i^{G\setminus j}(\theta)=\alpha_j^{G\setminus i}(\theta)=\infty$. If, in addition, $\alpha_j^{G}(\theta)=\infty$, then $\alpha_i^G(\theta)=\infty$. \end{lemma} \begin{proof} First, by \eqref{eq:alphaslambdas}, we have \[\lambda_{i j}^G=\alpha_j^{G\setminus i}\left(\alpha_i^G-\alpha_i^{G\setminus j}\right)=\alpha_i^{G\setminus j}\left(\alpha_j^{G}-\alpha_j^{G\setminus i}\right).\] \noindent Now, notice that $\lambda_{i j}^G$ has double poles, while $\alpha_i^G$, $\alpha_j^{G}$, $\alpha_i^{G\setminus j}$ and $\alpha_j^{G\setminus i}$ have simple poles by Lemma~\ref{lemma:derivative_quotient}.
Thus, due to these last expressions for $\lambda_{i j}^G$, it follows that $\lambda_{i j}^G(\theta)=\infty$ can only happen if $\alpha_i^{G \setminus j}( \theta)=\alpha_j^{G\setminus i}(\theta)=\infty$. This proves the first part of the statement. For the second part of the statement, observe that, \[\alpha_i^G\alpha_j^{G\setminus i}=\dfrac{\phi^{G}}{\phi^{G\setminus \{i,j\}}}=\alpha_j^{G}\alpha_i^{G\setminus j}.\] \noindent It follows that if, in addition, $\alpha_j^{G}(\theta)=\infty$, then $\alpha_i^G\alpha_j^{G\setminus i}$ has a double pole at $\theta$, which implies that $\alpha_i^G(\theta)=\infty$, proving the second part of the statement. \end{proof} \subsection{The proof of Lemma \ref{lem:restriction}} \textit{In this subsection, we assume that the graph $G$ has the property that there exists a vertex $v$ such that the vertices $i$, $j$ and $k$ are in distinct connected components of $G\setminus v$.} Figure~\ref{fig_graph} shows a representation of a graph with this property. \begin{figure}[h] \begin{center} \includegraphics[scale=0.2]{fig_graph.png}\end{center} \caption{A representation of a graph $G$ such that the vertices $i$, $j$ and $k$ are in distinct connected components of $G\setminus v$.} \label{fig_graph} \end{figure} Our proof of Lemma~\ref{lem:restriction} will follow from the conditions imposed on $\alpha_i^{G\setminus v}$, $\alpha_j^{G\setminus v}$ and $\alpha_k^{G\setminus v}$ by the pairwise strong cospectrality of $i$, $j$ and $k$. Observe that, since $i$, $j$ and $k$ are in distinct connected components of $G\setminus v$, we have $\alpha_i^{G\setminus v}=\alpha_i^{G\setminus \{v,j\}}=\alpha_i^{G\setminus \{v,j,k\}}$ and $\lambda_{i v}^G=\lambda_{i v}^{G\setminus j}=\lambda_{i v}^{G\setminus \{j,k\}}$, together with the analogous identities obtained by exchanging the roles of $i$, $j$ and $k$. In what follows, we make heavy use of these facts without further mention.
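The quantities above are easy to experiment with by computer. The following Python sketch (an illustration we add for the reader, not part of the proofs; the helper names `phi`, `alpha` and `lam` are ours) evaluates the relevant characteristic polynomials exactly at a rational point for the star $K_{1,3}$, whose three leaves lie in distinct components once the center is deleted, and verifies \eqref{eq:alphaslambdas} together with the component identities just stated. Here $\lambda_{ij}^G$ is computed via the identity of Lemma~\ref{lem:wronskian}, $\big(\sum_{P:i\to j}\phi^{G\setminus P}\big)^2=\phi^{G\setminus i}\phi^{G\setminus j}-\phi^{G}\phi^{G\setminus\{i,j\}}$.

```python
from fractions import Fraction

def det(M):
    # cofactor expansion; exact over Fractions (fine for tiny matrices)
    n = len(M)
    if n == 0:
        return Fraction(1)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def phi(A, removed, x):
    # characteristic polynomial det(xI - A) of G minus the removed vertices,
    # evaluated at the rational point x
    keep = [v for v in range(len(A)) if v not in removed]
    M = [[(x if a == b else 0) - A[a][b] for b in keep] for a in keep]
    return det(M)

# star K_{1,3}: center v = 0, leaves 1, 2, 3 (the leaves lie in distinct
# components of G \ v, as in the setting of this subsection)
A = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
x = Fraction(7, 3)   # a generic rational evaluation point

def alpha(i, removed=()):
    # alpha_i of the subgraph G minus `removed`: phi^H / phi^{H \ i}
    return phi(A, set(removed), x) / phi(A, set(removed) | {i}, x)

def lam(i, j):
    # lambda_{ij}^G via the Wronskian identity of Lemma lem:wronskian
    num = phi(A, {i}, x) * phi(A, {j}, x) - phi(A, set(), x) * phi(A, {i, j}, x)
    return -num / phi(A, {i, j}, x) ** 2

# equation (eq:alphaslambdas)
assert alpha(1) == alpha(1, {2}) + lam(1, 2) / alpha(2, {1})
# for the star, the only path 1 -> 2 is 1-0-2, so lambda_{12} = -(x/(x^2-1))^2
assert lam(1, 2) == -(x / (x**2 - 1)) ** 2
# leaves lie in distinct components of G \ 0: deleting another leaf changes nothing
assert alpha(1, {0}) == alpha(1, {0, 2})
```

The same check runs unchanged on any adjacency matrix, as long as the chosen evaluation point avoids the (finitely many) poles involved.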
\begin{lemma}\label{lemma:lambda_equal_zero} If $\lambda_{i v}^G(\theta)=0$, then $\alpha_i^G(\theta)=\alpha_i^{G\setminus v}(\theta)=\alpha_i^{G\setminus j}(\theta)$. \end{lemma} \begin{proof} First, note that, \[\alpha_i^G=\alpha_i^{G\setminus v}+\dfrac{\lambda_{i v}^G}{\alpha_v^{G\setminus i}}\text{ and } \alpha_i^{G\setminus j}=\alpha_i^{G\setminus \{j,v\}}+\dfrac{\lambda_{i v}^G}{\alpha_v^{G\setminus \{j, i\}}}.\] \noindent As a consequence, by Lemma~\ref{lambda_zero}, $\alpha_i^G(\theta)=\alpha_i^{G\setminus v}(\theta)$ and $\alpha_i^{G\setminus j}(\theta)=\alpha_i^{G\setminus \{j,v\}}(\theta)$; moreover, since $i$ and $j$ lie in distinct components of $G\setminus v$, we have $\alpha_i^{G\setminus \{j,v\}}=\alpha_i^{G\setminus v}$, which yields the claim. \end{proof} This last result has the following crucial corollary for strongly cospectral vertices. \begin{corollary}\label{lambda_not_zero} If vertices $i$ and $j$ are strongly cospectral and $\alpha_i^{G\setminus v}(\theta)=0$, then $\lambda_{i v}^G(\theta)\neq 0$. \end{corollary} \begin{proof} Assume otherwise. Then, by Lemma~\ref{lemma:lambda_equal_zero}, both $\alpha_i^G(\theta)$ and $\alpha_i^{G\setminus j}(\theta)$ are equal to $\alpha_i^{G\setminus v}(\theta)=0$, which is impossible by Lemma~\ref{lemma:strongly_cospectral_cf}. \end{proof} The next results develop the consequences of the conclusion of Corollary~\ref{lambda_not_zero}. \begin{lemma}\label{forcing_v_in_infty} If $\alpha_i^{G\setminus v}(\theta)=0$ and $\lambda_{i v}^G(\theta)\neq 0$, then \[\alpha_v^{G}(\theta)=\alpha_v^{G\setminus j}(\theta)=\alpha_v^{G\setminus \{j,k\}}(\theta)=\infty.\] \end{lemma} \begin{proof} First, note that $\lambda_{i v}^G(\theta)\neq \infty$, because if this were not the case then, by Lemma~\ref{lambda_infinite}, we would have $\alpha_i^{G\setminus v}(\theta)=\infty$, which is impossible.
Then, note that, \begin{itemize} \item[] $\alpha_v^{G}=\alpha_v^{G\setminus i}+ \lambda_{i v}^G/\alpha_i^{G\setminus v}$, \item[] $\alpha_v^{G\setminus j}=\alpha_v^{G\setminus \{j,i\}}+\lambda_{i v}^G/\alpha_i^{G\setminus \{j,v\}},$ \item[] $\alpha_v^{G\setminus \{j,k\}}=\alpha_v^{G\setminus \{j,k,i\}}+\lambda_{i v}^G/\alpha_i^{G\setminus \{j,k,v\}}.$ \end{itemize} \noindent But we also have that $\alpha_i^{G\setminus \{j,k,v\}}(\theta)=\alpha_i^{G\setminus \{j,v\}}(\theta)=\alpha_i^{G\setminus v}(\theta)=0$. It follows from Lemma~\ref{lambda_finite} that $\alpha_v^{G}(\theta)=\alpha_v^{G\setminus j}(\theta)=\alpha_v^{G\setminus \{j,k\}}(\theta)=\infty$. \end{proof} The next lemma presents the main consequence of the conclusion of Corollary~\ref{lambda_not_zero}; further consequences will follow under the additional hypothesis that $j$ and $k$ are strongly cospectral. \begin{lemma}\label{forcing_equality} If $\alpha_i^{G\setminus v}(\theta)=0$ and $\lambda_{i v}^G(\theta)\neq 0$, then \[\alpha_j^{G}(\theta)=\alpha_j^{G\setminus v}(\theta)=\alpha_j^{G\setminus k}(\theta).\] \end{lemma} \begin{proof} If $\lambda_{j v}^G(\theta)= \infty$, then Lemma~\ref{lambda_infinite} implies $\alpha_j^{G\setminus v}(\theta)=\infty$. But by Lemma~\ref{forcing_v_in_infty} we also have $\alpha_v^{G}(\theta)=\alpha_v^{G\setminus k}(\theta)=\infty$, from which it follows, by the second part of Lemma~\ref{lambda_infinite}, that $\alpha_j^{G}(\theta)=\alpha_j^{G\setminus k}(\theta)=\infty$. It follows that $\alpha_j^{G}(\theta)=\alpha_j^{G\setminus v}(\theta)=\alpha_j^{G\setminus k}(\theta)=\infty$, as we wanted. Now, assume that $\lambda_{j v}^G(\theta)\neq \infty$.
Observe that, \[\alpha_j^{G}=\alpha_j^{G\setminus v}+\dfrac{\lambda_{j v}^G}{\alpha_v^{G\setminus j}},\quad \alpha_j^{G\setminus k}=\alpha_j^{G\setminus \{k,v\}}+\dfrac{\lambda_{j v}^G}{\alpha_v^{G\setminus \{k, j\}}}.\] \noindent But by Lemma~\ref{forcing_v_in_infty} we also have $\alpha_v^{G\setminus j}(\theta)=\alpha_v^{G\setminus \{j,k\}}(\theta)=\infty$. It then follows by Lemma~\ref{lambda_finite} that $\alpha_j^{G}(\theta)=\alpha_j^{G\setminus v}(\theta)$ and $\alpha_j^{G\setminus k}(\theta)=\alpha_j^{G\setminus \{k,v\}}(\theta)=\alpha_j^{G\setminus v}(\theta)$, which finishes the proof. \end{proof} If the vertices $j$ and $k$ are cospectral, this last result has the following corollary. \begin{corollary}\label{equality_of_alphas} If $\alpha_i^{G\setminus v}(\theta)=0$, $\lambda_{i v}^G(\theta)\neq 0$, and $j$ and $k$ are cospectral, then \[\alpha_j^{G\setminus k}(\theta)=\alpha_j^{G\setminus v}(\theta)=\alpha_j^{G}(\theta)=\alpha_k^{G}(\theta)=\alpha_k^{G\setminus v}(\theta)=\alpha_k^{G\setminus j}(\theta).\] \noindent Furthermore, if $j$ and $k$ are strongly cospectral, then this common value is different from zero. \end{corollary} \begin{proof} By Lemma~\ref{forcing_equality}, $\alpha_j^{G}(\theta)=\alpha_j^{G\setminus v}(\theta)=\alpha_j^{G\setminus k}(\theta)$ and $\alpha_k^{G}(\theta)=\alpha_k^{G\setminus v}(\theta)=\alpha_k^{G\setminus j}(\theta)$. But $j$ and $k$ are cospectral, so $\alpha_j^{G}(\theta)=\alpha_k^{G}(\theta)$. This proves the first part of the statement. Now, if $j$ and $k$ are strongly cospectral, then the common value of these quantities cannot be zero. To see this, observe that if this were not the case, then in particular $\alpha_j^{G}(\theta)=\alpha_j^{G\setminus k}(\theta)=0$, which is impossible by Lemma~\ref{lemma:strongly_cospectral_cf}. \end{proof} As a consequence, we are ready to prove the key lemma.
\setcounter{lem6}{5} \begin{lemma*} Assume that the vertices $i$, $j$ and $k$ are pairwise strongly cospectral in a graph $G$, and that there exists a vertex $v$ such that $i$, $j$ and $k$ lie in distinct connected components of $G \setminus v$. Then, for any $\theta \in \Rds$, if $\alpha_i^{G\setminus v}(\theta)=0$, then $\alpha_j^{G\setminus v}(\theta)=\alpha_k^{G\setminus v}(\theta)\neq 0$. \end{lemma*} \begin{proof} Note that, by Corollary~\ref{lambda_not_zero}, as $i$ and $j$ are strongly cospectral, it follows that $\lambda_{i v}^G(\theta)\neq 0$. But then, by Corollary~\ref{equality_of_alphas}, as $j$ and $k$ are strongly cospectral, it follows that $\alpha_j^{G\setminus v}(\theta)=\alpha_k^{G\setminus v}(\theta)\neq 0$. \end{proof} \subsection*{Acknowledgements} Emanuel Juliano acknowledges the scholarship FAPEMIG/PROBIC. The authors thank Chris Godsil for bringing up the question that motivated this paper. \bibliographystyle{plain}
https://arxiv.org/abs/1102.5065
On $(\le k)$-edges, crossings, and halving lines of geometric drawings of $K_n$
Let $P$ be a set of points in general position in the plane. Join all pairs of points in $P$ with straight line segments. The number of segment-crossings in such a drawing, denoted by $\crg(P)$, is the \emph{rectilinear crossing number} of $P$. A \emph{halving line} of $P$ is a line passing through two points of $P$ that divides the rest of the points of $P$ in (almost) half. The number of halving lines of $P$ is denoted by $h(P)$. Similarly, a $k$\emph{-edge}, $0\leq k\leq n/2-1$, is a line passing through two points of $P$ and leaving exactly $k$ points of $P$ on one side. The number of $(\le k)$-edges of $P$ is denoted by $E_{\leq k}(P)$. Let $\rcr(n)$, $h(n)$, and $E_{\leq k}(n)$ denote the minimum of $\crg(P)$, the maximum of $h(P)$, and the minimum of $E_{\leq k}(P)$, respectively, over all sets $P$ of $n$ points in general position in the plane. We show that the previously best known lower bound on $E_{\leq k}(n)$ is tight for $k<\lceil (4n-2) /9\rceil $ and improve it for all $k\geq \lceil (4n-2) /9 \rceil $. This in turn improves the lower bound on $\rcr(n)$ from $0.37968\binom{n}{4}+\Theta(n^{3})$ to $\frac{277}{729}\binom{n}{4}+\Theta(n^{3})\geq 0.37997\binom{n}{4}+\Theta(n^{3})$. We also give the exact values of $\rcr(n)$ and $h(n)$ for all $n\leq27$. Exact values were known only for $n\leq18$ and odd $n\leq21$ for the crossing number, and for $n\leq14$ and odd $n\leq21$ for halving lines.
\section{Introduction} We consider three important well-known problems in Combinatorial Geometry: the rectilinear crossing number, the maximum number of halving lines, and the minimum number of $(\leq k) $-edges of complete geometric graphs on $n$ vertices. All point sets in this paper are in the plane, finite, and in general position. Let $P$ be a finite set of points in general position in the plane. The \emph{rectilinear crossing number }of $P$, denoted by $\mathop{\rm cr}(P)$, is the number of crossings obtained when all straight line segments joining pairs of points in $P$ are drawn. (A \emph{crossing }is the intersection of two segments in their interior.) The \emph{rectilinear crossing number }of $n$ is the minimum number of crossings determined by any set of $n$ points, i.e., $\overline{\hbox{\rm cr}}(n)=\min\{ \mathop{\rm cr}(P):\vert P\vert =n\}$. The problem of determining $\overline{\hbox{\rm cr}}(n)$ for each $n$ was posed by Erd\H{o}s and Guy in the early seventies \cite{EG},\cite{G1}. This is equivalent to finding the minimum number of convex quadrilaterals determined by $n$ points, as every pair of crossing segments bijectively corresponds to the diagonals of a convex quadrilateral. A \emph{halving line} of $P$ is a line passing through two points of $P$ and dividing the rest in almost half. So when $P$ has $n$ points and $n$ is even, a halving line of $P$ leaves $n/2-1$ points of $P$ on each side; whereas when $n$ is odd, a halving line leaves $( n-3) /2$ points on one side and $(n-1)/2$ on the other. The number of halving lines of $P$ is denoted by $h(P) $. Generalizing a halving line, a $k$\emph{-edge} of $P$, with $0\leq k\leq n/2-1$, is a line through two points of $P$ leaving exactly $k$ points on one side. The number of $k$-edges of $P$ is denoted by $E_{k}( P) $. Since a halving line is a $(\lfloor n/2\rfloor -1)$-edge, we have $E_{\lfloor n/2\rfloor -1}(P) =h(P)$.
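As a concrete illustration of these definitions (a toy example we add here, not taken from the literature): for points in convex position every hull edge is a $0$-edge, every diagonal of a convex pentagon is a $1$-edge, and for $n=5$ the $1$-edges are exactly the halving lines. A short Python check with exact integer arithmetic:

```python
from itertools import combinations

# vertices of a strictly convex pentagon (integer coordinates, general position)
P = [(0, 0), (4, -1), (7, 2), (4, 6), (0, 5)]
n = len(P)

def side(u, v, w):
    # True iff w lies strictly to the left of the directed line uv
    return ((v[0]-u[0])*(w[1]-u[1]) - (v[1]-u[1])*(w[0]-u[0])) > 0

E = [0] * (n // 2)          # E[k] = number of k-edges of P
for u, v in combinations(P, 2):
    left = sum(side(u, v, w) for w in P if w not in (u, v))
    E[min(left, n - 2 - left)] += 1

# in convex position: 5 hull edges are 0-edges, 5 diagonals are 1-edges
assert E == [5, 5]
# halving lines are the (floor(n/2) - 1)-edges, so h(P) = 5 here
assert E[n // 2 - 1] == 5
```

The same routine works for any point set in general position, since only exact sign tests are used.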
Similarly, for $0\leq k \leq n/2-1$, $E_{\leq k}( P) $ and $E_{\geq k}( P) $ denote the number of $( \leq k) $-edges and $( \geq k) $-edges of $P$, respectively. That is, $E_{\leq k}( P) =\sum_{j=0}^{k}E_{j}( P) $ and $E_{\geq k}( P) =\sum_{j=k}^{\lfloor n/2\rfloor -1}E_{j}( P) =\binom{n}{2}-\sum_{j=0}^{k-1} E_{j}( P) $. Let $h(n)$ and $E_{\leq k}( n) $ be the maximum of $h(P)$ and the minimum of $E_{\leq k}( P) $, respectively, over all sets $P$ of $n$ points. A concept closely related to $k$-edges is that of \emph{$k$-sets}; a $k$-set of $P$ is a $k$-element subset $Q$ of $P$ that can be separated from $P \setminus Q$ by a straight line. Rotating this separating line clockwise until it hits a point on each side yields a $(k-1)$-edge, and it turns out that this association is bijective. Thus the number of $k$-sets of $P$ is equal to the number of $(k-1)$-edges of $P$. As a consequence, any of the results obtained here for $k$-edges can be directly translated into equivalent results for $(k+1)$-sets. Erd\H{o}s, Lov\'{a}sz, Simmons, and Straus \cite{ELSS}, \cite{L} first introduced the concepts of halving lines, $k$-sets, and $k$-edges. Since the introduction of these parameters back in the early 1970s, the determination (or estimation) of $\overline{\hbox{\rm cr}}(n)$, $h(n)$, and $E_{\le k}(n)$ has become a classical problem in combinatorial geometry. General bounds are known but exact values have only been found for small $n$. The best known general bounds for the halving lines are $\Omega(ne^{c\sqrt{\log n}})\leq h(n)\leq O( n^{4/3}) $, due to T\'{o}th \cite{T} and Dey \cite{D}, respectively. The previously best asymptotic bounds for the crossing number are \begin{equation} 0.3792\binom{n}{4}+\Theta(n^{3})\leq\overline{\hbox{\rm cr}}\left( n\right) \leq0.380488\binom{n}{4}+\Theta(n^{3}). \label{boundscr} \end{equation} The lower bound is due to Aichholzer et al. \cite{AGOR} and it follows from Inequality (\ref{lower}) as we indicate below.
The upper bound follows from a recursive construction devised by \'{A}brego and Fern\'{a}ndez-Merchant \cite{AF2}, using a suitable initial construction found by the authors in \cite{ACFLS}. The best lower bound for the minimum number of $( \leq k) $-edges is \begin{equation} E_{\leq k}\left( n\right) \geq3\binom{k+2}{2}+3\binom{k+2-\lfloor n/3\rfloor}{2}-\max\left\{ 0,(k+1-\lfloor n/3\rfloor)(n-3\lfloor n/3\rfloor)\right\} \text{,} \label{lower} \end{equation} due to Aichholzer et al. \cite{AGOR}. Further references and related problems can be found in \cite{BMP}. The last two problems are naturally related, and their connection to the first problem is shown by the following identity, independently proved by Lov\'{a}sz et al. \cite{LVWW} and \'{A}brego and Fern\'{a}ndez-Merchant \cite{AF}. For any set $P$ of $n$ points, \begin{align} \mathop{\rm cr}(P) & =3\binom{n}{4}- {\displaystyle\sum\limits_{k=0}^{\left\lfloor n/2\right\rfloor -1}} k\left( n-k-2\right) E_{k}\left( P\right) ,\text{ or equivalently} \nonumber\\ \mathop{\rm cr}(P) & = {\displaystyle\sum\limits_{k=0}^{\left\lfloor n/2\right\rfloor -2}} \left( n-2k-3\right) E_{\leq k}\left( P\right) -\frac{3}{4}\binom{n}{3}+\left( 1+\left( -1\right) ^{n+1}\right) \frac{1}{8}\binom{n}{2}. \label{crossingsvsksets} \end{align} Hence, lower bounds on $E_{\leq k}( n) $ give lower bounds on $\overline{\hbox{\rm cr}}(n)$. \bigskip The majority of our results (all non-constructive parts) are proved in the more general context of generalized configurations of points, where the points in $P$ are joined by pseudosegments rather than straight line segments.
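The first form of Identity (\ref{crossingsvsksets}) is easy to test by computer. The following Python sketch (an illustrative check we add here; the specific point set is an arbitrary choice of ours) counts $\mathop{\rm cr}(P)$ as the number of $4$-subsets in convex position and compares it with the $k$-edge side of the identity, using exact integer arithmetic throughout.

```python
from itertools import combinations
from math import comb

# nine integer points in general position: eight on the cubic y = x^3
# (no three points of this curve with positive abscissae are collinear)
# plus one extra point chosen off all connecting lines
pts = [(t, t**3) for t in range(1, 9)] + [(3, 50)]
n = len(pts)

def side(u, v, w):
    # True iff w lies strictly to the left of the directed line uv
    return ((v[0]-u[0])*(w[1]-u[1]) - (v[1]-u[1])*(w[0]-u[0])) > 0

def in_convex_position(q):
    # four points are in convex position iff none lies inside
    # the triangle spanned by the other three
    for i in range(4):
        p = q[i]
        a, b, c = (q[j] for j in range(4) if j != i)
        if (side(a, b, p) == side(a, b, c) and side(b, c, p) == side(b, c, a)
                and side(c, a, p) == side(c, a, b)):
            return False
    return True

# cr(P): every crossing comes from the diagonals of a convex quadrilateral
cr = sum(in_convex_position(q) for q in combinations(pts, 4))

# E_k(P): pairs whose connecting line leaves exactly k points on one side
E = [0] * (n // 2)
for u, v in combinations(pts, 2):
    left = sum(side(u, v, w) for w in pts if w not in (u, v))
    E[min(left, n - 2 - left)] += 1

# first form of identity (crossingsvsksets)
assert cr == 3 * comb(n, 4) - sum(k * (n - k - 2) * E[k] for k in range(n // 2))
assert sum(E) == comb(n, 2)
```

Note that each pair contributes the same value $k(n-k-2)$ whether $k$ is counted on the smaller or the larger side, so the min-side convention used above is harmless.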
Goodman and Pollack \cite{GP80} established a correspondence between the set of generalized configurations of points and what they called \emph{allowable sequences}. In Section \ref{allowableseq}, we define allowable sequences, introduce the necessary notation to state the three problems above in the context of allowable sequences, and include a summary of results for these problems in both the geometric and the allowable-sequence contexts. \begin{table}[h] \begin{center} \begin{tabular}[c]{r|rrrrrrrrrr} $n$ & $14$ & $16$ & $18$ & $20$ & $22$ & $23$ & $24$ & $25$ & $26$ & $27$\\\hline $h(n)=\widetilde{h}(n)$ & $22^{\ast}$ & $27$ & $33$ & $38$ & $44$ & $75$ & $51$ & $85$ & $57$ & $96$\\ $\overline{\hbox{\rm cr}}(n)=\widetilde{\hbox{\rm cr}}(n)$ & $324^{\ast}$ & $603^{\ast}$ & $1029^{\ast}$ & $1657$ & $2528$ & $3077$ & $3699$ & $4430$ & $5250$ & $6180$ \end{tabular} \end{center} \caption{New exact values. The $^{\ast}$ values were only known in the rectilinear case.} \label{newvalues} \end{table} The main result in this paper is Theorem \ref{main} in Section \ref{proof central theorem}, which bounds $E_{\geq k}(P)$ by a function of $E_{k-1}(P)$. This result has the following important consequences. \begin{enumerate} \item In Section \ref{halving lines}, we find exact values of $\overline{\hbox{\rm cr}}(n)$ and $h(n)$ for $n\leq27$. Exact values were only known for $n\leq18$ and odd $n\leq21$ in the case of $\overline{\hbox{\rm cr}}(n)$, and for $n\leq14$ and odd $n\leq21$ in the case of $h(n)$. (See Table \ref{newvalues}.) We also show that the same values are achieved for the more general case of the pseudolinear crossing number $\widetilde{\hbox{\rm cr}}(n)$ and the maximum number of halving pseudolines $\widetilde{h}(n)$. (See Section \ref{allowableseq} for the definitions.)
\item Theorem \ref{recursive} in Section \ref{lower bound k-sets} improves the lower bound in Inequality (\ref{lower}) for $k\geq\left\lceil (4n-11)/9\right\rceil $. It gives a recursive lower bound whose asymptotic value is given by \[ E_{\leq k}( n) \geq\binom{n}{2}-\frac{1}{9}\sqrt{1-\frac{2k+2}{n}}( 5n^{2}+19n-31) , \] as shown in Corollary \ref{explicit}. \item Theorem \ref{th: crossing} in Section \ref{lower crossings} improves the lower bound in Inequality (\ref{boundscr}) to \[ \overline{\hbox{\rm cr}}(n) \geq\frac{277}{729}\binom{n}{4}+\Theta\left( n^{3}\right) \geq0.37997\binom{n}{4}+\Theta\left( n^{3}\right) . \] \end{enumerate} In Section \ref{constructions}, and to complement item 2 above, we show that Inequality (\ref{lower}) is tight for $k<\left\lceil (4n-11)/9\right\rceil $. More precisely, we construct sets of points simultaneously achieving equality in Inequality (\ref{lower}) for all $k<\left\lceil (4n-11)/9\right\rceil $. Several results of this paper appeared (without proofs) in the conference proceedings of LAGOS'07 \cite{AFLS2, AFLS3}. \section{\label{allowableseq}Allowable sequences and generalized configurations of points} Any set $P$ of $n$ points in the plane can be encoded by a sequence of permutations of the set $[ n] =\{ 1,2,...,n\} $ as follows. Consider a directed line $l$. Orthogonally project $P$ onto $l$ and label the points of $P$ from $1$ to $n$ according to their order in $l$. In this order, the identity permutation $(1,2,...,n)$ is the first permutation of our sequence. Note that $l$ can be chosen so that none of the projections overlap. Continuously rotate $l$ counterclockwise. The order of the projections of $P$ onto $l$ changes every time two projections overlap, that is, every time a line through two points of $P$ becomes perpendicular to $l$. Each time this happens, a new permutation is recorded as part of our sequence.
After a $180^{\circ}$-rotation of $l$ we obtain a sequence of $\binom{n}{2}+1$ permutations such that the first permutation $( 1,2,...,n) $ is the identity, the last permutation $( n,n-1,...,2,1)$ is the reverse of the identity, any two consecutive permutations differ by a transposition of adjacent elements, and any pair of points (labels $1,...,n$) transposes exactly once. This sequence is known as a \emph{halfperiod of the circular sequence} associated with $P$. The \emph{circular sequence} of $P$ is then a doubly infinite sequence of permutations obtained by rotating $l$ indefinitely in both directions. As an abstract generalization of a circular sequence, a \emph{simple allowable sequence} on $[n]$ is a doubly infinite sequence $\Pi=( ...,\pi_{-1},\pi_{0},\pi_{1},...) $ of permutations of $[ n] $, such that any two consecutive permutations $\pi_{i}$ and $\pi_{i+1}$ differ by a transposition $\tau(\pi_{i})$ of neighboring elements, and such that for every $j$, $\pi_{j}$ is the reverse permutation of $\pi_{j+\binom{n}{2}}$. A \emph{halfperiod} of $\Pi$ is a sequence of $\binom{n}{2}+1$ consecutive permutations of $[ n] $. As before, any halfperiod of $\Pi$ uniquely determines $\Pi$ and all properties for halfperiods mentioned above still hold. Moreover, the halfperiod $\pi=( \pi_{i},\pi_{i+1},...,\pi_{i+\binom{n}{2}}) $ is completely determined by the transpositions $\tau(\pi_{i}),\tau(\pi_{i+1}),\ldots,\tau(\pi_{i+\binom{n}{2}-1}).$ Note that the sequence $(\ldots,\tau(\pi_{-1}), \tau(\pi_0), \tau(\pi_1),\ldots)$ is $\tbinom{n}{2}$-periodic. Thus we refer to $\pi$ interchangeably as a sequence of permutations or as a sequence of (suitable) transpositions. Allowable sequences that are the circular sequence of a set of points are called \emph{stretchable}. A \emph{pseudoline} is a curve in $\mathbb{P}^{2}$, the projective plane, whose removal does not disconnect $\mathbb{P}^{2}$.
Alternatively, a pseudoline is a simple curve in the plane that extends infinitely in both directions. A \emph{simple generalized configuration} \emph{of points} consists of a set of $\binom{n}{2}$ pseudolines and $n$ points in the plane such that each pseudoline passes through exactly two points, and any two pseudolines intersect exactly once. Circular and allowable sequences were first introduced by Goodman and Pollack \cite{GP80}. They proved that not every allowable sequence is stretchable and established a correspondence between allowable sequences and generalized configurations of points. The three problems at hand can be extended to generalized configurations of points, or equivalently, to simple allowable sequences. In this new setting, a transposition of two points in positions $k$ and $k+1$, or $n-k$ and $n-k+1$, in a simple allowable sequence $\Pi$ corresponds to a $( k-1) $-edge. We say that such a transposition is a $k$-transposition or, respectively, an $(n-k)$-transposition, and if $1\leq k\leq n/2$ all these transpositions are called $k$\emph{-critical}. Therefore $E_{k}( \Pi) $, $E_{\leq k}( \Pi) $, and $E_{\geq k}( \Pi) $ correspond to the number of $( k+1) $-critical, $( \leq k+1) $-critical, and $( \geq k+1) $-critical transpositions in any halfperiod of $\Pi$. A halving line of $\Pi$ is a $\lfloor n/2\rfloor $-transposition, and thus $h( \Pi) =E_{\lfloor n/2\rfloor -1}( \Pi) $. Identity (\ref{crossingsvsksets}), which relates the number of $k$-edges to the crossing number, was originally proved for allowable sequences. In this setting, a \emph{pseudosegment} is the segment of a pseudoline joining two points in a generalized configuration of points, and $\mathop{\rm cr}( \Pi) $ is the number of pseudosegment-crossings in the generalized configuration of points that corresponds to the allowable sequence $\Pi$. All these definitions and functions coincide with their original counterparts for $P$ when $\Pi$ is the circular sequence of $P$.
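The construction of the circular sequence described above is easy to carry out by computer. The following Python sketch (our illustration; the point set, seven points on a cubic so that all connecting-line slopes are distinct, is an arbitrary choice) builds a halfperiod, checks that consecutive permutations differ by adjacent transpositions and that the halfperiod ends in the reverse permutation, and confirms that the number of $k$-critical transpositions matches the number of $(k-1)$-edges counted directly from the point set.

```python
import math
from itertools import combinations

# seven points on the cubic y = x^3: no three are collinear and all
# connecting-line slopes are distinct, so the configuration is generic
pts = [(t, t**3) for t in range(1, 8)]
n = len(pts)

# order of the orthogonal projections onto a generic initial direction
theta0 = 0.01
order = sorted(range(n),
               key=lambda i: pts[i][0]*math.cos(theta0) + pts[i][1]*math.sin(theta0))

def swap_angle(i, j):
    # rotation angle (mod pi) at which the projections of i and j coincide,
    # i.e. when the rotating line becomes perpendicular to the line ij
    dx, dy = pts[j][0] - pts[i][0], pts[j][1] - pts[i][1]
    return (math.atan2(dy, dx) + math.pi/2 - theta0) % math.pi

perm = order[:]
crit = [0] * (n // 2)           # crit[k-1] = number of k-critical transpositions
for i, j in sorted(combinations(range(n), 2), key=lambda p: swap_angle(*p)):
    a, b = perm.index(i), perm.index(j)
    assert abs(a - b) == 1      # only adjacent elements ever transpose
    perm[a], perm[b] = perm[b], perm[a]
    p = min(a, b) + 1           # a p-transposition (1-based positions p, p+1)
    k = min(p, n - p)           # p- and (n-p)-transpositions are both k-critical
    crit[k - 1] += 1

assert perm == order[::-1]      # a halfperiod ends in the reverse permutation

# k-critical transpositions correspond to (k-1)-edges: compare a direct count
def side(u, v, w):
    return ((v[0]-u[0])*(w[1]-u[1]) - (v[1]-u[1])*(w[0]-u[0])) > 0

E = [0] * (n // 2)
for u, v in combinations(pts, 2):
    left = sum(side(u, v, w) for w in pts if w not in (u, v))
    E[min(left, n - 2 - left)] += 1
assert crit == E
```

The adjacency assertion is exactly the defining property of a circular sequence; it holds here because the chosen points have pairwise distinct connecting-line slopes.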
However, when $\overline{\hbox{\rm cr}}(n)$, $h( n) $, and $E_{\leq k}( n) $ are minimized or maximized over all allowable sequences on $[ n] $ rather than over all sets of $n$ points, the corresponding quantities may change and therefore we use the notation $\widetilde{\hbox{\rm cr}}(n)$, $\widetilde{h}(n)$, and $\widetilde{E}_{\leq k}( n) $. Because $n$-point sets correspond to the stretchable simple allowable sequences on $[n]$, it follows that $\widetilde{\hbox{\rm cr}}(n)\leq\overline{\hbox{\rm cr}}(n)$, $\widetilde{h}(n)\geq h(n)$, and $\widetilde {E}_{\leq k}( n) \leq E_{\leq k}( n) $. Tamaki and Tokuyama \cite{TT} extended Dey's upper bound to allowable sequences, showing that $\widetilde{h}(n)=O(n^{4/3})$. \'{A}brego et al. \cite{ABFLS2} proved that the lower bound for $E_{\leq k}( n)$ in Inequality (\ref{lower}) is also a lower bound on $\widetilde{E}_{\leq k}( n) $. They used this bound to extend (and even slightly improve) the corresponding lower bound on $\overline{\hbox{\rm cr}}(n)$ to $\widetilde{\hbox{\rm cr}}(n)$. Our main result, Theorem \ref{main} in Section \ref{proof central theorem}, concentrates on the central behavior of allowable sequences. We bound $E_{\geq k}( \Pi) $ by a function of $E_{k-1}( \Pi) $. As a consequence, we improve (or match) the upper bounds on $\widetilde{h}(n)$ for $n\leq27$, and thus the lower bounds on $\widetilde{\hbox{\rm cr}}(n)$ in the same range. This is enough to match the corresponding best known geometric constructions \cite{A} for $h( n) $ and $\overline{\hbox{\rm cr}}(n)$. This shows that for all $n\leq27$, $\widetilde{h}(n)=h( n) $ and $\widetilde{\hbox{\rm cr}}(n)=\overline{\hbox{\rm cr}}(n)$, whose exact values are summarized in Table \ref{newvalues}. \section{\label{proof central theorem}The Central Theorem} In this section, we present our main theorem.
Given a halfperiod $\pi=( \pi_{0},\pi_{1},\pi_{2},...,\pi_{\binom{n}{2}}) $ of an allowable sequence and an integer $1\leq k < n/2$, the $k$\emph{-center} of the permutation $\pi_{j}$, denoted by $C(k,\pi_{j}) $, is the set of elements in the middle $n-2k$ positions of $\pi_{j}$. Let $L_{0},C_{0},$ and $R_{0}$ be the set of elements in the first $k$, middle $n-2k$, and last $k$ positions, respectively, of the permutation $\pi_{0}$. Define \[ s\left( k,\pi\right) =\min\left\{ \left\vert C_{0}\cap C\left( k,\pi _{i}\right) \right\vert :0\leq i\leq\binom{n}{2}\right\} . \] Note that $s( k,\pi) \leq n-2k-1$ because at least one of the $n-2k$ elements of $C_{0}$ must leave the $k$-center. \begin{theorem} \label{main}Let $\Pi$ be an allowable sequence on $[ n] $ and $\pi$ any halfperiod of $\Pi$. If $s=s(k,\pi) $, then \[ E_{\geq k}\left( \Pi\right) \leq\left( n-2k-1\right) E_{k-1}\left( \Pi\right) -\frac{s}{2}\left( E_{k-1}\left( \Pi\right)-n+1 \right) . \] \end{theorem} \begin{proof} For presentation purposes, we divide this proof into subsections.\medskip Let $\Pi$ be an allowable sequence on $[n] $ and $\pi= (\pi_{0},\pi_{1},\pi_{2},...,\pi_{\binom{n}{2}}) $ any halfperiod of $\Pi$, $s=s( k,\pi) $, and $K=E_{k-1}( \pi) $. Suppose that $\pi_{i_{1}},\pi_{i_{2}},...,\pi_{i_{K}}$ is the subsequence of permutations in $\pi$ obtained when the $k$-critical transpositions $\tau(\pi_{i_{1}}),\tau(\pi_{i_{2}}),...,\tau(\pi_{i_{K}})$ of $\pi$ occur (in this order). For simplicity we write $\tau_j$ instead of $\tau(\pi_{i_j})$. These permutations partition $\pi$ into $K+1$ parts $B_{0}( \pi) ,$ $B_{1}( \pi) ,$ $B_{2}( \pi) ,$ $...,$ $B_{K}( \pi) $ called \emph{blocks}, where $B_{j}( \pi) =\{ \pi_{l} : i_{j}\leq l<i_{j+1}\} $ for $1\leq j\leq K-1$, $B_{0}( \pi) =\{ \pi_{l}:0\leq l<i_{1}\} $, and $B_{K}( \pi) =\{ \pi_{l}:i_{K}\leq l\leq\binom{n}{2}\} $. Denote by $p_{j}$ the point that enters the $k$-center of $\pi_{i_{j}}$ with $\tau_{j}$.
We say that a $( \geq k+1) $-critical transposition in $B_{j}( \pi) ,1\leq j\leq K,$ is an \emph{essential} transposition if it involves $p_{j}$ or if it occurs before $\tau_{1}$, and a \emph{nonessential} transposition otherwise. \begin{figure}[hp] \begin{center} \includegraphics[ height=2.911in, width=5.0825in ] {xdiagram2.eps} \caption{Classification of essential $k$-critical transpositions.} \label{fig:classification} \end{center} \end{figure} \subsubsection*{Rearrangement of $\pi$} We claim that, to bound $E_{\geq k}( \Pi) $, we can assume that all $( \geq k+1) $-critical transpositions of $\pi$ are essential transpositions. To show this, in case $\pi$ has nonessential transpositions, we modify $\pi$ so that the obtained halfperiod $\lambda$ satisfies $E_{j}( \pi) =E_{j}( \lambda) $ for all $j<k$, and thus $E_{\geq k}( \pi) =E_{\geq k}( \lambda) $; and either $\lambda$ has only essential transpositions or the last nonessential transposition of $\lambda$ occurs in an earlier permutation than the last nonessential transposition of $\pi$. Applying this procedure enough times, we end with a halfperiod $\lambda$ all of whose $( \geq k+1) $-critical transpositions are essential and such that $E_{j}( \pi) =E_{j}( \lambda) $ for all $j\leq k$, and thus $E_{\geq k}( \pi) =E_{\geq k}( \lambda) $. The halfperiod $\lambda$ is constructed as follows. Suppose $B_{j}( \pi) $ is the last block of $\pi$ that contains nonessential transpositions. Define $\lambda$ as the halfperiod that coincides with $\pi$ everywhere except for the $( \geq k+1) $-transpositions in $B_{j}( \pi) $. All nonessential transpositions in $B_{j}( \pi) $ take place right before $\tau_{j}$ in $\lambda$, and right after $\tau_{j}$ occurs, all essential transpositions in $B_{j}( \pi) $ occur consecutively in $B_{j}( \lambda) $, but possibly in a different order than in $B_{j}( \pi) $, so that the final position of $p_{j}$ is the same in $B_{j}( \pi) $ and $B_{j}( \lambda) $.
Note that in fact the last permutations of the blocks $B_{j}( \pi) $ and $B_{j}( \lambda) $ are equal. \subsubsection*{Classification of $k$-critical transpositions} From now on, we assume that $\pi$ only has essential transpositions. We classify the $k$-critical transpositions as follows (see Figure \ref{fig:classification}): $\tau_{j}$ is an \emph{arriving} transposition if $p_{j}\in C_{0}$. An arriving transposition is $m$\emph{-augmenting} if it increments the number of elements of $C_{0}$ in the $k$-center from $m-1$ to $m$, and it is \emph{neutral} otherwise. We say that $\tau_{j}$ is a \emph{returning} transposition if it is a $k$-transposition and $p_{j}\in R_{0}$, or if it is an $(n-k)$-transposition and $p_{j}\in L_{0}$. That is, $p_{j}$ is \textquotedblleft getting back\textquotedblright\ to its starting region. Similarly, $\tau_{j}$ is a \emph{departing} transposition if it is a $k$-transposition and $p_{j}\in L_{0}$, or if it is an $(n-k)$-transposition and $p_{j}\in R_{0}$. That is, $p_{j}$ is \textquotedblleft getting away\textquotedblright\ from its original region. We say that a departing transposition $\tau_{j}$ is a \emph{cutting} transposition if $\tau_{j}$ is a $k$-transposition and the next $k$-critical transposition that involves $p_{j}$ is an $(n-k)$-transposition, or if $\tau_{j}$ is an $(n-k)$-transposition and the next $k$-critical transposition that involves $p_{j}$ is a $k$-transposition. All other departing transpositions are called \emph{stalling}. Finally, we define the \emph{weight} of a $k$-critical transposition $\tau_{j}$, denoted by $w(\tau_{j})$, as the number of $( \geq k+1) $-critical transpositions in $B_{j}( \pi) $ that are not between two elements of $C_{0}$. Transpositions with weight at most $n-2k-1-s$ are called \emph{light}. All other transpositions are \emph{heavy}.
Let $A,N,R,C,S_{\text{light}}$, and $S_{\text{heavy}}$ be the number of augmenting, neutral, returning, cutting, light stalling, and heavy stalling transpositions, respectively. Then $K=A+N+R+C+S_{\text{light}}+S_{\text{heavy}% }$. \subsubsection*{Bounding $E_{\geq k}( \Pi) $} Observe that the $k$-center of all permutations in $B_{0}( \pi) $ remains unchanged. It follows that all $( \geq k+1) $-critical transpositions of $B_{0}( \pi) $ are between elements of $C_{0}$. Thus $\sum_{j=1}^{K}w( \tau_{j}) $ counts all $( \geq k+1) $-critical transpositions except those between two elements of $C_{0}$. There are $\binom{n-2k}{2}$ transpositions between elements of $C_{0}$, but each neutral transposition corresponds to a $k$-critical (not $( \geq k+1) $-critical) transposition between two elements of $C_{0}$. Thus \begin{equation} E_{\geq k}( \Pi) \leq\binom{n-2k}{2}-N+\sum\limits_{j=1}^{K}w( \tau_{j}). \label{degrees inequality} \end{equation} \subsubsection*{Bounds for the weight of a $k$-critical transposition} We bound the weight of a transposition depending on its class (departing, returning, etc.), as well as the number of transpositions within a class, if necessary. For $j \geq 1$ all $( \geq k+1) $-critical transpositions in $B_{j}( \pi) $ involve $p_{j}$ and thus $w( \tau _{j}) \leq n-2k-1.$ However, since the weight of $\tau_{j}$ does not count transpositions between two elements of $C_{0}$, and there are always at least $s$ elements of $C_{0}$ in the $k$-center, then $w( \tau _{j}) \leq n-2k-s$ whenever $\tau_{j}$ is arriving (because $p_{j}\in C_{0}$). Moreover, if $\tau_{j}$ is $m$-augmenting, then $w( \tau _{j}) \leq n-2k-m$. If $\tau_{j}$ is a returning transposition, then $p_{j}$ has already been transposed with all the elements of $C_{0}$ that are in the $k$-center of $\pi_{i_{j}}$. Since there are at least $s$ such elements, then $w( \tau_{j}) \leq n-2k-1-s$. 
Summarizing, \begin{equation} w\left( \tau_{j}\right) \leq\left\{ \begin{array} {ll}% n-2k-1,&\text{for all }\tau_{j}\text{,}\\ n-2k-s,&\text{if }\tau_{j}\text{ is neutral,}\\ n-2k-m,&\text{if }\tau_{j}\text{ is }m\text{-augmenting,}\\ n-2k-1-s,&\text{if }\tau_{j}\text{ is light stalling or returning.}% \end{array} \right. \label{bounds on weights}% \end{equation} \subsubsection*{Bounding $C$} We bound the number of cutting transpositions. Since the first (last) $k$ elements of $\pi_{0}$ are the last (first) elements of $\pi_{\binom{n}{2}}$, then the $2k$ elements not in $C_{0}$ must each participate in at least one cutting transposition. That is, $C\geq2k$. Note that, if $p\notin C_{0}$ participates in $c\geq2$ cutting transpositions, then there must be at least $c-1$ returning transpositions of $p$. In other words, there must be at least $C-2k\geq0$ returning transpositions. There are $C$ cutting transpositions and at least $n-2k-s$ arriving transpositions (at least one $m$-augmenting arriving transposition for each $s+1\leq m\leq n-2k$). Then $K-C-( n-2k-s) $ counts all other $k$-critical transpositions, including in particular all returning transpositions. Thus $K-C-( n-2k-s) \geq C-2k$, that is,% \begin{equation} 2C \leq 4k+K-n+s. \label{bounding crossing transp}% \end{equation} \subsubsection*{Augmenting and heavy stalling transpositions} We keep track of the augmenting and heavy stalling transpositions together. To do this, we consider the bipartite graph $G$ whose vertices are the augmenting and the heavy stalling transpositions. The augmenting transposition $\tau_{l}$ is adjacent in $G$ to the heavy stalling transposition $\tau_{j}$ if $j<l$, $p_{j}$ is in the $k$-center of all permutations in blocks $B_{j}$ to $B_{l}$, one of $\tau_{j}$ and $\tau_{l}$ is a $k$-transposition and the other is an $( n-k) $-transposition, and $p_{l}$ does not swap with $p_{j}$ in $B_{l}( \pi) $. We bound the degree of a vertex in $G$. 
Let $\tau_{j}$ be a heavy stalling transposition. If $p_{j}\in L_{0}$ (the case $p_{j}\in R_{0}$ is equivalent), then $\tau_{j}$ is a $k$-transposition. Because $p_{j}$ moves to the right exactly $w( \tau_{j})>n-2k-1-s$ positions within $B_{j}( \pi) $, it follows that the $k$-center right before $\tau_{j+1}$ occurs (i.e., the $k$-center of $\pi_{i_{j+1}-1}$) has at most $n-2k-1-w( \tau_{j}) <s$ points of $C_{0}$ to the right of $p_{j}$. Also, since $\tau_{j}$ is stalling, the next time that $p_{j}$ leaves the $k$-center is by a $k$-transposition $\tau_{j+a}$. This means that the $k$-center right before $\tau_{j+a}$ occurs (i.e., the $k$-center of $\pi_{i_{j+a}-1}$) has at least $s$ points of $C_{0}$ to the right of $p_{j}$. Thus, between $\tau_{j}$ and $\tau_{j+a}$ there must be at least $s-( n-2k-1-w( \tau_{j}) ) $ arriving $( n-k) $-transpositions $\tau_{l}$ such that $p_{l}$ remains to the right of $p_{j}$ in $B_{l}( \pi) $, i.e., $p_{l}$ does not swap with $p_{j}$ in $B_{l}( \pi) $. These transpositions are adjacent to $\tau_{j}$ and thus the degree of $\tau_{j}$ in $G$ is at least $w( \tau_{j}) -( n-2k-1-s) $. Hence, \[ \left\vert E(G) \right\vert \geq\sum\limits_{\tau_{j}\text{ heavy stalling}}\left( w\left( \tau_{j}\right) -\left( n-2k-1-s\right) \right) , \] where $E(G)$ is the set of edges of $G$. Let $\tau_{l}$ be an $m$-augmenting transposition. Since $p_{l}\in C_{0}$, and weights do not count transpositions between two elements of $C_{0}$, then at most $n-2k-m-w( \tau_{l}) $ points in $L_{0}\cup R_{0}$ do not swap with $p_{l}$ in $B_{l}( \pi) $. Only these points are possible $p_{j}$s such that $\tau_{j}$ is adjacent to $\tau_{l}$. Thus the degree of $\tau_{l}$ in $G$ is at most $n-2k-m-w( \tau_{l}) \leq n-2k-1-s-w( \tau_{l}) $. Note that there is at least one $m$-augmenting transposition for each $s+1\leq m\leq n-2k$. 
This is because the $k$-center of at least one permutation of $\pi$ contains exactly $s$ elements of $C_{0}$ (by definition of $s$), and the $k$-center of $\pi_{\binom{n}{2}}$ contains exactly $n-2k$ elements of $C_{0}$ (since it coincides with $C_{0}$). Hence the number of elements of $C_{0}$ in the $k$-center must eventually be incremented from $s$ to $n-2k$. For each $s+1\leq m\leq n-2k$, we use $n-2k-m-w( \tau_{l}) $ to bound the degree of \emph{one }$m$-augmenting transposition. For all other augmenting transpositions we use the bound $n-2k-1-s-w( \tau_{l}) $. Hence \begin{align*} \left\vert E\left( G\right) \right\vert & \leq\sum\limits_{\tau_{j}\text{ augmenting}}\left( \left( n-2k-1-s\right) -w\left( \tau_{j}\right) \right) -\sum\limits_{m=s+1}^{n-2k}\left( m-s-1\right) \\ & =\sum\limits_{\tau_{j}\text{ augmenting}}\left( \left( n-2k-1-s\right) -w\left( \tau_{j}\right) \right) -\binom{n-2k-s}{2}. \end{align*} The previous two inequalities imply that \begin{equation} \sum\limits_{\tau_{j}\text{ augmenting}}w\left( \tau_{j}\right) +\sum\limits_{\tau_{j}\text{ heavy stalling}}w\left( \tau_{j}\right) \leq\left( n-2k-1-s\right) \left( A+S_{\text{heavy}}\right) -\binom {n-2k-s}{2}. \label{augmenting and heavy stall} \end{equation} \subsubsection*{Final calculations} We use inequalities (\ref{bounds on weights}) and (\ref{augmenting and heavy stall}) to bound $\sum_{i=1}^{K}w( \tau_{i}) -N$. 
\begin{align*} \sum\limits_{j=1}^{K}w\left( \tau_{j}\right) -N & =\sum\limits_{\tau _{j}\text{ cutting}}w\left( \tau_{j}\right) +\sum\limits_{\tau_{j}\text{ augmenting}}w\left( \tau_{j}\right) +\sum\limits_{\tau_{j}\text{ heavy stalling}}w\left( \tau_{j}\right) \\ & +\sum\limits_{\tau_{j}\text{ light stalling}}w\left( \tau_{j}\right) +\sum\limits_{\tau_{j}\text{ returning}}w\left( \tau_{j}\right) +\sum\limits_{\tau_{j}\text{ neutral}}w\left( \tau_{j}\right) -N\\ & \leq\left( n-2k-1\right) C+\left( n-2k-1-s\right) \left( A+S_{\text{heavy}}\right) -\binom{n-2k-s}{2}\\ & +\left( n-2k-1-s\right) \left( S_{\text{light}}+R\right) +\left( n-2k-s\right) N-N\\ & \leq sC+\left( n-2k-1-s\right) K-\binom{n-2k-s}{2}. \end{align*} By Inequality (\ref{degrees inequality}), \begin{align*} E_{\geq k}\left( \Pi\right) & \leq\binom{n-2k}{2}-\binom{n-2k-s} {2}+sC+\left( n-2k-1-s\right) K\\ & =\left( n-2k-1\right) K-\frac{s}{2}\left( 2K-2n+4k+1+s-2C\right) . \end{align*} Finally, by Inequality (\ref{bounding crossing transp}), \[ E_{\geq k}\left( \Pi\right) \leq\left( n-2k-1\right) K-\frac{s}{2}\left( K-n+1 \right). \qedhere \] \end{proof} \section{\label{halving lines}New exact values for $n\leq27$} In this section, we give exact values of $h( n) $ and $\widetilde{% h}( n) $ for $n\leq27$. We start by stating a relaxed version of Theorem \ref{main}, which we use in the special case when $k=\lfloor n/2\rfloor -1$. \begin{corollary} \label{coro: maxs}Let $\Pi $ be a simple allowable sequence on $[ n] $ and $\pi $ any halfperiod of $\Pi $. If $s=s( k,\pi ) $, then \begin{equation*} E_{\geq k}\left( \Pi \right) \leq \left( n-2k-1\right) E_{k-1}\left( \Pi \right) +\binom{s}{2}\leq \left( n-2k-1\right) E_{k-1}\left( \Pi \right) + \binom{n-2k-1}{2}. \end{equation*} \end{corollary} \begin{proof} There are at least $n-2k-s$ elements of $C_{0}$ that leave the $k$-center, so there are at least $n-2k-s$ arriving transpositions. 
In addition, there are at least $ 2k$ departing transpositions, one per element not in $C_{0}$. It follows that $E_{k-1} ( \Pi) \geq2k+( n-2k-s) =n-s$. The first inequality now follows directly from Theorem \ref{main}. Finally, $ s\leq n-2k-1$ for all halfperiods of $\Pi$, which yields the second inequality. Another consequence is that $E_{k-1} ( \Pi) \geq n-s \geq 2k+1$, which is in fact the minimum possible value of $E_{k-1}$ (cf. \cite{LVWW}). \end{proof} The previous corollary implies the following result for halving lines. \begin{corollary} \label{coro: halving}If $\Pi $ is a simple allowable sequence on $[ n% ] $ and $n\geq 8$, then% \begin{equation*} h\left( \Pi \right) \leq \left\{ \begin{array}{ll} \left\lfloor \frac{1}{24}n(n+30)-3\right\rfloor &\text{ if }n\text{ is even,}\vspace{0.1in} \\ \left\lfloor \frac{1}{18}(n-3)(n+45)+\frac{1}{9}\right\rfloor &\text{ if }n \text{ is odd.} \end{array}% \right. \end{equation*} \end{corollary} \begin{proof} If $k=\lfloor n/2\rfloor -1$ in Corollary \ref{coro: maxs}, then $ E_{\geq \lfloor n/2\rfloor -1}(\Pi )=h(\Pi )$ and thus $h(\Pi )\leq (n-2\lfloor n/2\rfloor +1)E_{\lfloor n/2\rfloor -2}(\Pi )+\binom{ n-2\lfloor n/2\rfloor +1}{2}$, that is, \begin{equation*} h\left( \Pi \right) \leq \left\{ \begin{array}{ll} E_{n/2-2}\left( \Pi \right) &\text{ if }n\text{ is even,} \vspace{0.1in}\\ 2E_{\left( n-1\right) /2-2}\left( \Pi \right) +1 &\text{ if }n\text{ is odd.} \end{array} \right. \end{equation*} Moreover, because $E_{\leq \lfloor n/2\rfloor -3}(\Pi )+E_{\lfloor n/2\rfloor -2}(\Pi )+h(\Pi )=\binom{n}{2}$, it follows that \begin{equation*} h\left( \Pi \right) \leq \left\{ \begin{array}{ll} \left\lfloor \frac{1}{2}\binom{n}{2}-\frac{1}{2}E_{\leq n/2-3}\left( \Pi \right) \right\rfloor &\text{ if }n\text{ is even, } \vspace{0.1in}\\ \left\lfloor \frac{2}{3}\binom{n}{2}-\frac{2}{3}E_{\leq \left( n-1\right) /2-3}\left( \Pi \right) +\frac{1}{3}\right\rfloor &\text{ if }n\text{ is odd.} \end{array}% \right. 
\end{equation*}% The bound in Inequality (\ref{lower}) is also valid in the more general context of allowable sequences \cite{ABFLS2}. Using this bound for $E_{\leq k}(\Pi )$ when $k=\lfloor n/2\rfloor -3$, and considering all residue classes of $n$ modulo 18 with $n\geq 8$, it follows that $\lfloor \frac{1}{2}% \binom{n}{2}-\frac{1}{2}E_{\leq n/2-3}( \Pi ) \rfloor \leq \lfloor n(n+30)/24-3\rfloor $ when $n$ is even, and $\lfloor \frac{2}{3}% \binom{n}{2}-\frac{2}{3}E_{\leq ( n-1) /2-3}( \Pi ) +% \frac{1}{3}\rfloor \leq \lfloor (n-3)(n+45)/18+1/9\rfloor $ when $n$ is odd. \end{proof} Because $h(n)\leq cn^{4/3}$, the inequality in Corollary \ref{coro: halving} is only useful for small values of $n$. However, even with the current best constant $c=(31287/8192)^{1/3}<1.5721$ \cite{AA, PRTT}, our bound is better when $n$ is even in the range $8\leq n\leq 184$. The exact values of $h(n)$ were previously known only for even $n \le 14$ or odd $n\le 21$ \cite{AA, BR}. The exact values of $\overline{\hbox{\rm cr}} (n)$ were previously known only for even $n\leq 18$ or odd $n\leq 21$ \cite{AGOR}. The values in Table \ref{newvalues} correspond to the upper bounds obtained by Corollary \ref{coro: halving} when $n$ is even, $14\leq n\leq 26$, or $n$ is odd, $23\leq n\leq 27$. We also obtained new lower bounds for $\widetilde{ \hbox{\rm cr}}( n) $ in this range of values of $n$. The identity $E_{\leq \lfloor n/2\rfloor -2}( \Pi ) =\binom{n}{2} -h(\Pi )$ together with Corollary \ref{coro: halving} gives a new lower bound for $E_{\leq \lfloor n/2\rfloor -2}( \Pi ) $. Using this bound for $k=\lfloor n/2\rfloor -2$ and the bound in Inequality (\ref{lower}) for $k\leq \lfloor n/2\rfloor -3$ in Identity (\ref{crossingsvsksets}) yields the values in Table \ref{newvalues} for $\widetilde{\hbox{\rm cr}}( n) $. 
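The range comparison above reduces to elementary arithmetic and can be confirmed in a few lines of Python (a quick check, not part of the proof; the function name is ours, and the rounded constant $1.5721$ is the one quoted in the text):

```python
def halving_upper_even(n):
    # Corollary bound for even n: floor(n(n+30)/24 - 3), in exact arithmetic
    return n * (n + 30) // 24 - 3

c = 1.5721  # rounded constant from the c*n^(4/3) upper bound

# the corollary beats c*n^(4/3) precisely for even n with 8 <= n <= 184
better = [n for n in range(8, 200, 2) if halving_upper_even(n) < c * n ** (4 / 3)]
print(min(better), max(better))  # -> 8 184
```

At $n=186$ the comparison flips: the corollary gives $1671$ while $1.5721\cdot 186^{4/3}\approx 1669$.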
For example, if $n=24$ then $E_{\leq 10}(\Pi )=\binom{24}{2}-h(24)\geq 276-51=225$ and by Inequality (\ref{lower}), the vector $(E_{\leq 0}(\Pi ),E_{\leq 1}(\Pi ),E_{\leq 2}(\Pi ),\ldots ,E_{\leq 9}(\Pi ))$ is bounded below entry-wise by $ (3,9,18,30,45,63,84,108,138,174)$, so Identity (\ref{crossingsvsksets}) implies that $\widetilde{\hbox{\rm cr}}( 24) =\sum_{k=0}^{10}(21-2k)E_{\leq k}(\Pi )-\frac{3}{4}\binom{24}{3}\geq 3699$. All the bounds shown in Table \ref{newvalues} are attained by the constructions of Aichholzer et al.~\cite{A}, and thus Table \ref{newvalues} actually shows the exact values of $\widetilde{h}( n) $, $h( n) $, $% \widetilde{\hbox{\rm cr}}( n) $, and $\overline{\hbox{\rm cr}}% ( n) $ for $n$ in the specified range. For $28\leq n\leq 33$, Table \ref{smallbounds} shows the new reduced gap between the lower and upper bounds of $h( n) $ and $\widetilde{h}% ( n) $. \begin{table}[h] \begin{center}% \begin{tabular} [c]{r|rrrrrr}% $n$ & $28$ & $29$ & $30$ & $31$ & $32$ & $33$\\\hline $\overset{}{% \begin{tabular} [c]{l}% $h\left( n\right) \geq\smallskip$% \end{tabular} }$ & $63$ & $105$ & $69$ & $115$ & $73$ & $126$\\% \begin{tabular} [c]{l}% $\widetilde{h}\left( n\right) \leq\smallskip$% \end{tabular} & $64$ & $107$ & $72$ & $118$ & $79$ & $130$\\% \end{tabular} \end{center} \caption{Updated bounds for $28\leq n\leq33$}% \label{smallbounds} \end{table} \section{\label{lower bound k-sets}New lower bound for the number of $( \leq k) $-edges} In this section, we obtain a new lower bound for the number of $(\leq k)$-edges. Our emphasis is on finding the best possible asymptotic result as well as the best bounds that apply to the small values of $n$ for which the exact value is unknown. Theorem \ref{recursive} provides the precise bound that can be applied to small values of $n$, whereas Corollary \ref{explicit} gives the best asymptotic behavior. Let $m=\lceil(4n-11)/9\rceil$. 
For each $n$, define the following recursive sequence.% \begin{align*} u_{m-1} & =3\binom{m+1}{2}+3\binom{m+1-\lfloor n/3\rfloor}{2}-3\left( m-\left\lfloor \frac{n}{3}\right\rfloor \right) \left( \frac{n}% {3}-\left\lfloor \frac{n}{3}\right\rfloor \right) \text{ and}\\ u_{k} & =\left\lceil \frac{1}{n-2k-2}\left( \binom{n}{2}+(n-2k-3)u_{k-1}% \right) \right\rceil \text{ for }k\geq m\text{.}% \end{align*} The following is the new lower bound on $E_{\leq k}( n) $. It follows from Theorem \ref{main}. \begin{theorem} \label{recursive}For any $n$ and $k$ such that $m-1\leq k\leq(n-3)/2$, \[ E_{\leq k}(n)\geq u_{k}. \] \end{theorem} \begin{proof} We need the following two lemmas to estimate the growth of the sequence $u_k$ with respect to $n$ and $k$. For presentation purposes, we defer their proofs to the end of the section. \begin{lemma} \label{double ineq}For any $k$ such that $m-1\leq k\leq(n-5)/2,$ \begin{equation} 3\sqrt{1-\frac{2k+9/2}{n}}<\frac{\binom{n}{2}-u_{k}}{\binom{n}{2}-u_{m-1}} \leq3\sqrt{1-\frac{2k+2}{n}}\text{.} \label{boundingtheus} \end{equation} \end{lemma} \begin{lemma} \label{estim}For any $k$ such that $m\leq k\leq(n-5)/2$, \[ 3\sqrt{1-\frac{2k+9/2}{n}}\left( \binom{n}{2}-u_{m-1}\right) \geq\left( n-1\right) \left( n-2k-3\right) . \] \end{lemma} We prove the stronger statement $\widetilde{E}_{\leq k}( n) \geq u_{k}$. Let $\Pi$ be an allowable sequence on $[ n] $ and $\pi$ any of its halfperiods. We proceed by induction on $k$. If $k=m-1$ the result holds by Inequality (\ref{lower}), proved in the more general context of allowable sequences \cite{ABFLS2}. Assume that $k\geq m$ and $E_{\leq k-1}(\Pi)\geq u_{k-1}$. Let $s=s( k+1,\pi) $; by Theorem \ref{main}, \[ E_{\geq k+1}\left( \Pi\right) \leq\left( n-2k-3\right) E_{k}\left( \Pi\right) -\frac{s}{2}\left( E_{k}\left( \Pi\right) -\left( n-1\right) \right) . \] If $s=0$ or $E_{k}(\Pi)\geq n-1,$ then $E_{\geq k+1}(\Pi )\leq(n-2k-3)E_{k}(\Pi)$. 
Thus \[ \binom{n}{2}-E_{\leq k}(\Pi)\leq\left( n-2k-3\right) \left( E_{\leq k} (\Pi)-E_{\leq k-1}(\Pi)\right) , \] and by induction \begin{align*} E_{\leq k}(\Pi) &\geq\frac{1}{n-2k-2}\left( \binom{n}{2}+(n-2k-3)E_{\leq k-1}\left( \Pi\right) \right)\\ &\geq\frac{1}{n-2k-2}\left( \binom{n}{2}+(n-2k-3)u_{k-1} \right) , \end{align*} which implies that $E_{\leq k}(\Pi)\geq u_{k}$ by definition of $u_{k}$. Now assume $s>0$ and $E_{k}(\Pi)<n-1$. Because $E_{k}( \Pi) \geq2k+3$ (see the proof of Corollary \ref{coro: maxs}), it follows that $k\leq(n-5)/2$. By Theorem \ref{main},% \begin{align*} E_{\geq k+1}(\Pi) & \leq(n-2k-3)E_{k}(\Pi)-\frac{s}{2}\left( E_{k}% (\Pi)-\left( n-1\right) \right) \\ & =(n-2k-3-\frac{s}{2})E_{k}(\Pi)+\frac{s}{2}\left( n-1\right). \end{align*} Recall that $s=s(k+1,\pi)\leq n-2k-3$. Because $E_{k}(\Pi)<n-1$, it follows that \begin{align*} E_{\geq k+1}(\Pi) & \leq(n-2k-3-\frac{s}{2})(n-1)+\frac{s}{2}\left( n-1\right) \\ & =\left( n-1\right) \left( n-2k-3\right) \text{.}% \end{align*} Therefore% \begin{equation*} E_{\leq k}(\Pi)=\binom{n}{2}-E_{\geq k+1}(\Pi)\geq\binom{n}{2}-(n-1)\left( n-2k-3\right) \text{.}% \end{equation*} By Lemma \ref{estim},% \[ E_{\leq k}(\Pi)\geq\binom{n}{2}-3\sqrt{1-\frac{2k+9/2}{n}}\left( \binom{n}% {2}-u_{m-1}\right) , \] and by Lemma \ref{double ineq}, $E_{\leq k}(\Pi)\geq u_{k}$ for all allowable sequences $\Pi$ on $[ n] $. Therefore $E_{\leq k}(n)\geq \widetilde{E}_{\leq k}(n)\geq u_{k}$. \end{proof} \begin{corollary} \label{explicit}For any $n$ and $k$ such that $m-1\leq k\leq(n-2)/2$, \[ E_{\leq k}(n)\geq\binom{n}{2}-\frac{1}{9}\sqrt{1-\frac{2k+2}{n}}\left( 5n^{2}+19n-31\right).% \] \end{corollary} \begin{proof} Let $\Pi$ be an allowable sequence on $[n]$. If $k=\lfloor n/2\rfloor -1$, then $E_{\leq \lfloor n/2\rfloor -1}( \Pi ) =\binom{n}{2}$. 
For $k<\lfloor n/2\rfloor -1$, we have $n\geq3$, and it follows from Theorem \ref{recursive} and Lemma \ref{double ineq} that \[ E_{\leq k}(\Pi)\geq u_{k}\geq\binom{n}{2}-3\sqrt{1-\frac{2k+2}{n}}\left( \binom{n}{2}-u_{m-1}\right) .% \] Considering the possible residues of $n$ modulo $9$, it can be verified that for $n\geq3$, \[ u_{m-1} \geq \frac{17}{54} n^2-\frac{65}{54} n+\frac{31}{27}\text{ (equality if $n\equiv 3$ (mod 9))}. \] Therefore $E_{\leq k}(n)\geq\widetilde{E}_{\leq k}(n)\geq\binom{n}{2}-\frac{1} {9}\sqrt{1-\frac{2k+2}{n}}( 5n^{2}+19n-31) $. \end{proof} \subsection*{Proofs of the Lemmas} \begin{proof}[Proof of Lemma \ref{double ineq}] The integer range $[ m-1,(n-5)/2] $ is empty for $n\leq5$. Assume $n\geq6$ and proceed by induction on $k$. If $k=m-1$, then $3\sqrt{1-(2m+5/2)/n}\leq1\leq3\sqrt{1-2m/n}$ is equivalent to $\lceil ( 4n-11) /9\rceil \leq4n/9\leq\lceil ( 4n-11) /9\rceil +5/4$, which holds in general. Assume that $k\geq m$ and that (\ref{boundingtheus}) holds for $k-1$. From the definition of $u_k$ and the induction hypothesis, \begin{align*} \binom{n}{2}-u_{k} & \leq\binom{n}{2}-\frac{1}{n-2k-2}\left( \binom{n}{2}+(n-2k-3)u_{k-1}\right) \\ & =\frac{n-2k-3}{n-2k-2}\left( \binom{n}{2}-u_{k-1}\right) \leq3\left( \binom{n}{2}-u_{m-1}\right) \frac{n-2k-3}{n-2k-2}\sqrt{1-\frac{2k}{n}}, \end{align*} and $(n-2k-3)\sqrt{1-2k/n}\;/ (n-2k-2)\leq \sqrt{1-(2k+2)/n}$ because $k \leq (n-5)/2$, which proves the second inequality in (\ref{boundingtheus}). Similarly, from the definition of $u_k$ and the induction hypothesis, \begin{align*} \binom{n}{2}-u_{k} & \geq\binom{n}{2}-\frac{1}{n-2k-2}\left( \binom{n}{2}+(n-2k-3)u_{k-1}\right) -1\\ & =\frac{n-2k-3}{n-2k-2}\left( \binom{n}{2}-u_{k-1}\right) -1\geq3\left( \binom{n}{2}-u_{m-1}\right) \frac{n-2k-3}{n-2k-2}\sqrt{1-\frac{2k+5/2}{n}}-1. 
\end{align*} Hence, to prove the first inequality in (\ref{boundingtheus}), it is enough to show that $3( \binom{n}{2}-u_{m-1}) d>1,$ where \begin{equation} d=\frac{n-2k-3}{n-2k-2}\sqrt{1-\frac{2k+5/2}{n}}-\sqrt{1-\frac{2k+9/2}{n}} \label{definitiond} \end{equation} is always positive because $k\leq(n-5)/2$. First note that% \[ u_{m-1}\leq3\binom{m+1}{2}+3\binom{m+1-\lfloor n/3\rfloor}{2}\leq 3\binom{(4n+6)/9}{2}+3\binom{(n+10)/9}{2}, \] which implies that% \begin{equation} 3\left( \binom{n}{2}-u_{m-1}\right) \geq\frac{1}{9}\left( 5n^{2}% -25n+4\right) .\label{boundbinomial-u}% \end{equation} Multiplying the easily-verified inequality \[ 1>\frac{\left( n-2k-3\right) \sqrt{n-2k-5/2}+\left( n-2k-2\right) \sqrt{n-2k-9/2}}{\left( 2n-4k-5\right) \sqrt{n-2k-5/2}}% \] by Identity (\ref{definitiond}) yields \begin{align*} d & >\frac{n-2k-9/4}{\left( n-2k-2\right) ^{2}\sqrt{n\left( n-2k-5/2\right) }}\cdot\frac{2n-4k-4}{2n-4k-5}\\ & >\frac{n-2k-9/4}{\left( n-2k-2\right) ^{2}\sqrt{n\left( n-2k-5/2\right) }}\\ & =\left( 1-\frac{1}{4\left( n-2k-2\right) }\right) \frac{1}{\left( n-2k-2\right) \sqrt{n\left( n-2k-2-1/2\right) }}. \end{align*} Since $(4n-11)/9\leq k\leq(n-5)/2$, then $3\leq n-2k-2\leq(n+4)/9$. Thus% \[ d>\left( 1-\frac{1}{12}\right) \frac{27}{\left( n+4\right) \sqrt{n\left( n-1/2\right) }}=\frac{99}{4\left( n+4\right) \sqrt{n\left( n-1/2\right) }}.% \] This inequality, together with Inequality (\ref{boundbinomial-u}), implies that for all $n\geq6$,% \[ 3\left( \binom{n}{2}-u_{m-1}\right) d>\frac{11}{4}\left( \frac {5n^{2}-25n+4}{\left( n+4\right) \sqrt{n\left( n-1/2\right) }}\right) >1.\qedhere \] \end{proof} \begin{proof}[Proof of Lemma \ref{estim}] For each $n\leq40$ the integer range $[ m,(n-5)/2] $ is either empty or contains only $k=\lfloor(n-5)/2\rfloor $. For these cases, the inequality can easily be verified. 
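These finitely many cases lend themselves to a mechanical check; the following Python sketch (function name ours, with $u_{m-1}$ computed exactly as defined above) verifies the lemma for every $n\leq 40$ and every $k$ in $[m,(n-5)/2]$:

```python
from fractions import Fraction
from math import ceil, comb, sqrt

def u_first(n):
    # u_{m-1} with m = ceil((4n-11)/9), computed in exact arithmetic
    m = ceil(Fraction(4 * n - 11, 9))
    f = n // 3
    return (3 * comb(m + 1, 2) + 3 * comb(m + 1 - f, 2)
            - 3 * (m - f) * (Fraction(n, 3) - f))

# check Lemma estim for every n <= 40 and every k in [m, (n-5)/2]
for n in range(3, 41):
    m = ceil(Fraction(4 * n - 11, 9))
    for k in range(m, (n - 5) // 2 + 1):
        lhs = 3 * sqrt(1 - (2 * k + 4.5) / n) * float(comb(n, 2) - u_first(n))
        assert lhs >= (n - 1) * (n - 2 * k - 3), (n, k)
print("Lemma estim verified for n <= 40")
```

The range is indeed empty for most $n\leq40$; the first nonempty case is $n=23$, where $m=9$, $u_{8}=140$, and the inequality reads $49.98\ldots\geq 44$.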
Assume $n\geq41$. It follows from Inequality (\ref{boundbinomial-u}) that \[ 9\left( 1-\frac{2k+9/2}{n}\right) \left( \binom{n}{2}-u_{m-1}\right) ^{2}\geq\frac{(n-2k-9/2)\left( 5n^{2}-25n+4\right) ^{2}}{81n}. \] Since $k\leq(n-5)/2$, then \[ n-2k-9/2\geq\frac{(n-2k-3)^{2}}{n-2k+3}. \] Also $k\geq m\geq(4n-11)/9$ implies $n-2k+3\leq(n+49)/9$ and thus \[ \frac{(n-2k-9/2)\left( 5n^{2}-25n+4\right) ^{2}}{81n}\geq\frac {(n-2k-3)^{2}\left( 5n^{2}-25n+4\right) ^{2}}{9n\left( n+49\right) }\text{.}% \] Finally, for $n\geq41$, \[ \frac{\left( 5n^{2}-25n+4\right) ^{2}}{9n\left( n+49\right) }\geq (n-1)^{2}, \] and consequently% \[ 9\left( 1-\frac{2k+9/2}{n}\right) \left( \binom{n}{2}-u_{m-1}\right) ^{2}\geq(n-1)^{2}(n-2k-3)^{2}.\qedhere \] \end{proof} \section{\label{lower crossings}New lower bound on $\overline{\hbox{\rm cr}}(n)$} In this section, we use Corollary \ref{explicit} to get the following new lower bound on $\overline{\hbox{\rm cr}}(n)$. \begin{theorem} $\overline{\hbox{\rm cr}}(n)\geq\frac{277}{729}\binom{n}{4}+\Theta(n^{3})>0.379972\binom {n}{4}+\Theta(n^{3})$. \label{th: crossing} \end{theorem} \begin{proof} We actually prove that the right hand side is a lower bound on $\widetilde{\hbox{\rm cr}}(n)$. According to (\ref{crossingsvsksets}), if $\Pi$ is an allowable sequence on $[ n] $, then \[ \mathop{\rm cr}\left( \Pi\right) =\binom{n}{4}\left( 24 {\displaystyle\sum\limits_{k=0}^{\left\lfloor n/2\right\rfloor -1}} \frac{1}{n}\left( 1-\frac{2k}{n}\right) \frac{E_{\leq k}\left( \Pi\right) }{n^{2}}\right) +\Theta\left( n^{3}\right). \] Using Inequality (\ref{lower}) for $0\leq k\leq m-1$ gives \[ \frac{E_{\leq k}\left( \Pi\right) }{n^{2}}\geq\frac{3}{2}\left( \frac{k}% {n}\right) ^{2}+\frac{3}{2}\max\left( 0,\frac{k}{n}-\frac{1}{3}\right) ^{2}-\Theta\left( \frac{1}{n}\right). 
\] Similarly, if $m\leq k\leq\lfloor n/2\rfloor -1$, then by Corollary \ref{explicit},% \[ \frac{E_{\leq k}\left( \Pi\right) }{n^{2}}\geq\frac{1}{2}-\frac{5}{9}% \sqrt{1-\frac{2k}{n}}+\Theta\left( \frac{1}{n}\right) \text{.}% \] Therefore,% \begin{align*} \mathop{\rm cr}\left( \Pi\right) & \geq\binom{n}{4}\left( 24\int_{0}^{4/9}\frac{3}% {2}(1-2x)\left( x^{2}+\max\left( 0,x-\frac{1}{3}\right) ^{2}\right) dx\right) \\ & +\binom{n}{4}\left( 24\int_{4/9}^{1/2}\left( 1-2x\right) \left( \frac{1}{2}-\frac{5}{9}\sqrt{1-2x}\right) dx\right) +\Theta(n^{3})\\ & \geq\binom{n}{4}\left( \frac{86}{243}+\frac{19}{729}\right) +\Theta (n^{3})=\frac{277}{729}\binom{n}{4}+\Theta(n^{3})\text{.}\qedhere \end{align*} \end{proof} The following is the list of best lower bounds for $\widetilde{\hbox{\rm cr}}( n) $ in the range $28\leq n\leq99$ that follow from using Identity (\ref{crossingsvsksets}) with the bound in either Inequality (\ref{lower}) or the new bound from Theorem \ref{recursive}. \begin{center}% \begin{tabular} [c]{l|l||l|l||l|l||l|l||l|l||l|l}% $n$ & $\widetilde{\hbox{\rm cr}}\left( n\right) \geq$ & $n$ & $\widetilde{\hbox{\rm cr}}\left( n\right) \geq$ & $n$ & $\widetilde{\hbox{\rm cr}}\left( n\right) \geq$ & $n$ & $\widetilde{\hbox{\rm cr}}\left( n\right) \geq$ & $n$ & $\widetilde{\hbox{\rm cr}}\left( n\right) \geq$ & $n$ & $\widetilde{\hbox{\rm cr}}\left( n\right) \geq$\\\hline\hline $28$ & 7233 & $40$ & 33048 & $52$ & 99073 & $64$ & 234223 & $76$ & 475305 & $88$ & 866947\\ $29$ & 8421 & $41$ & 36674 & $53$ & 107251 & $65$ & 249732 & $77$ & 501531 & $89$ & 907990\\ $30$ & 9723 & $42$ & 40561 & $54$ & 115878 & $66$ & 265888 & $78$ & 528738 & $90$ & 950372\\ $31$ & 11207 & $43$ & 44796 & $55$ & 125087 & $67$ & 282974 & $79$ & 557191 & $91$ & 994394\\ $32$ & 12830 & $44$ & 49324 & $56$ & 134798 & $68$ & 300767 & $80$ & 586684 & $92$ & 1039840\\ $33$ & 14626 & $45$ & 54181 & $57$ & 145030 & $69$ & 319389 & $81$ & 617310 & $93$ & 1086725\\ $34$ & 16613 & $46$ & 59410 & $58$ & 
155900 & $70$ & 338913 & $82$ & 649190 & $94$ & 1135377\\ $35$ & 18796 & $47$ & 65015 & $59$ & 167344 & $71$ & 359311 & $83$ & 682308 & $95$ & 1185551\\ $36$ & 21164 & $48$ & 70948 & $60$ & 179354 & $72$ & 380531 & $84$ & 716507 & $96$ & 1237263\\ $37$ & 23785 & $49$ & 77362 & $61$ & 192095 & $73$ & 402798 & $85$ & 752217 & $97$ & 1290844\\ $38$ & 26621 & $50$ & 84146 & $62$ & 205437 & $74$ & 425980 & $86$ & 789077 & $98$ & 1346029\\ $39$ & 29691 & $51$ & 91374 & $63$ & 219457 & $75$ & 450078 & $87$ & 827289 & $99$ & 1402932 \end{tabular} \end{center} \section{A point-set with few $(\le k)$-edges for every $k \le {4n/9}-1$}\label{constructions} Combining Inequality (\ref{lower}) and Theorem~\ref{recursive}, we obtain the best known lower bound for $E_{\le k}(n)$. If $n$ is a multiple of $9$ and $k \le (4n/9)-1$, then this bound reads \begin{align} {E_{\leq k}(n)}\ge % \begin{cases} 3\binom{k+2}{2} & \text{\quad if $0 \le k\leq n/3-1$,} \\[0.2cm] 3\binom{k+2}{2}+3\binom{k-n/3+2}{2} & \text{\quad if $n/3 \le k\leq 4n/9-2$,} \\[0.2cm] 3\binom{(4n/9-1)+2}{2}+3\binom{(4n/9-1)-n/3+2}{2}+3 & \text{\quad if $k=4n/9-1$.} \end{cases}% \label{eq:thelowerbound} \end{align} Our aim in this section is to show that this bound is tight for $n \ge 27$. This improves on the construction in~\cite{AGOR3}, where tightness for Inequality (\ref{eq:thelowerbound}) is proved for $k \le (5n/12)$. We recursively construct, for each integer ${r}\ge 3$, a $9{r}$-point set $S_{{r}}$ such that for every $k \le (4n/9)-1$, $E_{\le k}(S_{{r}})$ equals the right hand side of~(\ref{eq:thelowerbound}). \subsubsection*{Constructing the sets $S_{r}$} If $a$ and $b$ are distinct points, then $\edge{ab}$ denotes the line spanned by $a$ and $b$, and $\oL{ab}$ denotes the closed line segment with endpoints $a$ and $b$, directed from $a$ towards $b$. Let $\theta$ denote the clockwise rotation by an angle of $2\pi/3$ around the origin. 
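For concreteness, the right-hand side of this bound is easy to tabulate (a quick sketch, ours). For $n=27$ it yields the twelve values $3,9,\ldots,207,255$; the last two agree with $u_{10}=207$ and $u_{11}=255$ from Theorem \ref{recursive}.

```python
from math import comb

def lower_bound(n, k):
    # right-hand side of the displayed bound; n must be a multiple of 9
    # and 0 <= k <= 4n/9 - 1
    assert n % 9 == 0 and 0 <= k <= 4 * n // 9 - 1
    value = 3 * comb(k + 2, 2)
    if k >= n // 3:
        value += 3 * comb(k - n // 3 + 2, 2)
    if k == 4 * n // 9 - 1:
        value += 3
    return value

print([lower_bound(27, k) for k in range(12)])
# -> [3, 9, 18, 30, 45, 63, 84, 108, 135, 168, 207, 255]
```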
At this point the reader may want to take a sneak preview of Figure~\ref{fig:figure2}, where $S_3$ is sketched. For each ${r} \ge 3$ the set $S_{{r}}$ is naturally partitioned into nine sets of size $r$: $A_{r}=\{a_1,\ldots,a_{r}\}$, $A_{r}'=\{a_1',\ldots,a_{r}'\}$, $A_{r}''$, and their respective $2\pi/3$ and $4\pi/3$ rotations around the origin. The elements of $A_{r}''$ are not labeled because they change in each iteration. For $i=1,\ldots,r$, we let $b_i=\theta(a_i), b_i'=\theta(a_i'), c_i=\theta^2(a_i)$, and $c_i'=\theta^2(a_i')$. Thus if we let $B_{r}=\{b_1,\ldots,b_{r}\}$, $B_{r}'=\{b_1',\ldots,b_{r}'\}$, $B_{r}''=\theta(A_{r}'')$, $C_{r}=\{c_1,\ldots,c_{r}\}$, $C_{r}'=\{c_1',\ldots,c_{r}'\}$, and $C_{r}''=\theta^2(A''_{r})$, then we obtain $B_{r}\cup B_{r}'\cup B_{r}''$ (respectively, $C_{r}\cup C_{r}'\cup C_{r}''$) by applying $\theta$ (respectively, $\theta^2$) to $A_{r}\cup A_{r}'\cup A_{r}''$. We refer to this property as the $3$-{\em symmetry} of $S_{{r}}$. As we mentioned before, the construction of the sets $S_{r}$ is recursive. For ${r} \ge 3$, we obtain $A_{{r}+1}$ and $A_{{r}+1}'$ by adding suitable points $a_{{r}+1}$ to $A_{{r}}$ and $a'_{{r}+1}$ to $A_{{r}}'$. Keeping $3$-symmetry, this determines $B_{{r}+1}$, $B'_{{r}+1}$, $C_{{r}+1}$, and $C'_{{r}+1}$. However, the set $A''_{{r}+1}$ is {\em not} obtained by adding a point to $A''_{r}$, but instead is defined in terms of $B_{{r}+1},B'_{{r}+1}, C_{{r}+1}$, and $C'_{{r}+1}$; this explains why we have not listed the elements in $A''_{r}, B''_{r}$, and $C''_{r}$. Before moving on with the construction, we remark that the sets $S_{r}$ contain subsets of more than two collinear points. As will become clear from the construction, the points can be slightly perturbed to general position, so that the number of $(\le k)$-edges remains unchanged for every $k \le 4n/9-1$. \begin{figure}[hp] \begin{center} \includegraphics[scale=.95]{xS3.eps} \caption{The $27$-point set $S_{3}$. 
The points $a_\infty, a'_\infty, b_\infty, b'_\infty, c_\infty,$ and $c'_\infty$ do not belong to $S_3$.} \label{fig:figure2} \end{center} \end{figure} We start by describing $S_{3}$; see Figure~\ref{fig:figure2}. First we explicitly fix $A_3$ and $A_3'$: $a_1 = (-700, -50)$, $a_2 = (-410 ,150)$, $a_3 = (-436 , 144 )$, $a'_1=( -1300, 20 )$, $a'_2 =(-1200,-10 )$, and $a'_3=(-1170 ,-14 )$. Thus $B_3, B_3',C_3,$ and $C_3'$ also get determined. For the points in $A_3''$ we do not give their exact coordinates; instead we simply ask that they satisfy the following: all the points in $A_3''$ lie on the $x$-axis, and are sufficiently far to the left of $A_{3} \cup A_{3}'$ so that if a line $\ell_1$ passes through a point in $A_3''$ and a point in $S_3\setminus {(B_3''\cup C_3'')} $, and a line $\ell_2$ passes through two points in $S_3\setminus{A_3''}$, then the slope of $\ell_1$ is smaller in absolute value than the slope of $\ell_2$, i.e., $\ell_1$ is closer (in slope) to a horizontal line than $\ell_2$. \begin{figure}[h] \begin{center} \includegraphics[scale=1]{xbPlacement.eps} \caption{$b_{{r}+1}$ is placed in between $b_{r}$ and $b_\infty$, above the line $\edge{a'_{r} a_2}$.} \label{fig:figure3} \end{center} \end{figure} We need to define six auxiliary points not in $S_{r}: a_\infty = \edge{a_2 a_3} \cap \edge{c_2 c_3}$ and $a'_\infty = \edge{a'_2 a'_3} \cap \edge{a_2 a_3}$. As expected, let $b_\infty=\theta(a_\infty), c_\infty=\theta^2(a_\infty), b'_\infty=\theta(a'_\infty),$ and $c'_\infty=\theta^2(a'_\infty)$. We now describe how to get $S_{{r}+1}$ from $S_{{r}}$. The crucial step is to define the points $b_{{r}+1}$ and $a'_{{r}+1}$ to be added to $B_{r}$ and $A'_{{r}}$ to obtain $B_{{r}+1}$ and $A'_{{r}+1}$, respectively. Then we construct $A''_{{r}+1}$, and by applying $\theta$ and $\theta^2$ to $B_{{r}+1}$, $A'_{{r}+1}$, and $A''_{{r}+1}$, we obtain the rest of $S_{{r}+1}$. 
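With the explicit coordinates of $A_3$ and $A_3'$ above, the auxiliary points can be computed numerically. The following sketch (all function names ours) confirms that $a_2,a_3,a_\infty$ occur in this order along their common line, and that $a'_\infty,a_\infty,a_2,b_\infty$ are collinear and appear in this order; both facts are used below.

```python
from math import cos, sin, pi

def rot(p, a):
    # rotate p by angle a about the origin (negative a = clockwise)
    x, y = p
    return (x * cos(a) - y * sin(a), x * sin(a) + y * cos(a))

def meet(p1, p2, p3, p4):
    # intersection point of lines p1p2 and p3p4
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def cross(o, p, q):
    # twice the signed area of triangle opq (0 iff collinear)
    return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

th = -2 * pi / 3                       # theta: clockwise rotation by 2pi/3
a2, a3 = (-410, 150), (-436, 144)
a2p, a3p = (-1200, -10), (-1170, -14)  # a_2' and a_3'
c2, c3 = rot(a2, 2 * th), rot(a3, 2 * th)

a_inf = meet(a2, a3, c2, c3)
ap_inf = meet(a2p, a3p, a2, a3)
b_inf = rot(a_inf, th)

assert a_inf[0] < a3[0] < a2[0]          # a_3 lies between a_2 and a_inf
assert abs(cross(a2, a3, b_inf)) < 1e-6  # b_inf is on the line a2a3
xs = [ap_inf[0], a_inf[0], a2[0], b_inf[0]]
assert xs == sorted(xs)                  # order: a'_inf, a_inf, a_2, b_inf
```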
Suppose that for some ${r} \ge 3$, the set $S_{r}$ has been constructed so that the following properties hold for ${t} = {r}$ (this is clearly true for the base case ${r}=3$): \begin{description} \item{(I)} The points $a_2,\ldots,a_{t}$ appear in this order along $\oL{a_2 a_\infty}$. \item{(II)} The points $a_2',\ldots,a_{t}'$ appear in this order along $\oL{a'_2 a'_\infty}$. \item{(III)} For all $i=2,\ldots,{t}-1$ and $j=2,\ldots,{t}$, $\edge{a'_i a_j}$ intersects the interior of $\oL{b_i b_{i+1}}$. \item{(IV)} For all $j=2,\ldots,{t}$, $\edge{a'_{t} a_j}$ intersects the interior of $\oL{b_{t} b_{\infty}}$. \end{description} Now we add $b_{{r}+1}$ and $a'_{{r}+1}$. Place $b_{{r}+1}$ anywhere on the open line segment determined by $b_\infty$ and the intersection point of $\edge{a_{r}'a_2}$ with $\oL{b_{r} b_\infty}$. (The existence of this intersection point is guaranteed by (IV), see Figure~\ref{fig:figure3}). Place $a'_{{r}+1} $ anywhere on the open line segment determined by $a'_\infty$ and the intersection point of $\edge{b_{{r}+1} a_\infty}$ with $\oL{a'_{r} a'_\infty}$. (This intersection exists because $a_\infty', a_\infty, a_2$, and $b_\infty$ are collinear and appear in this order along $\edge{a_\infty' b_\infty}$, the line $\edge{a_\infty' b_\infty}$ separates $b_{r+1}$ from $a_{r}'$, and the line $\edge{a_{r}' a_2}$ separates $b_{r+1}$ from $a_\infty$, see Figure~\ref{fig:figure4}). Thus $B_{{r}+1}$ and $A'_{{r}+1}$ and consequently $A_{{r}+1}, C_{{r}+1}, B'_{{r}+1}$, and $C'_{{r}+1}$, are defined. It is straightforward to check that (I)--(IV) hold for $t = {r}+1$. \begin{figure}[htbp] \begin{center} \includegraphics[scale=1]{xaPlacement.eps} \caption{$a'_{{r}+1}$ is placed in between $a'_{r}$ and $a'_\infty$, below the line $\edge{a_\infty b_{{r}+1}}$.} \label{fig:figure4} \end{center} \end{figure} It only remains to describe how to construct $A''_{{r}+1}$. 
As we mentioned above, this set is not a superset of $A''_{{r}}$, instead it gets defined analogously to $A''_3$: we let the points in $A_{{r}+1}''$ lie on the $x$-axis, and sufficiently far to the left of $A_{{r}+1} \cup A_{{r}+1}'$, so that if $\ell_1$ passes through a point in $A_{{r}+1}''$ and through a point in $S_{{r}+1}\setminus {(B_{{r}+1}''\cup C_{{r}+1}'')} $, and $\ell_2$ spans two points in $S_{{r}+1}\setminus{A_{{r}+1}''}$, then the slope of $\ell_1$ is smaller in absolute value than the slope of $\ell_2$. \subsubsection*{ Calculating $E_{\le k}(S_{r})$} We fix ${r}\ge 3$, and proceed to determine $E_{\le k}(S_{{r}})$ for each $k$, $0\le k \le 4{r}-1$. It is now convenient to label the elements of $A''_{{r}}, B''_{{r}}$, and $C''_{{r}}$. Let $a''_1,a''_2,\ldots,a''_r$ be the elements of $A''_{{r}}$, ordered as they appear from left to right along the negative $x$-axis. As expected, let $b''_i=\theta(a''_i)$ and $c''_i=\theta^2(a''_i)$, for $i=1,\ldots,{r}$. We call a $k$-edge {\em bichromatic} if it joins two points with different label letters (i.e., if it is of the form $ab, bc$, or $ac$); otherwise, a $k$-edge is {\em monochromatic}. A monochromatic edge is {\em of type $aa$} if it is of the form $\edge{a_i a_j}$ for some integers $i,j$; edges of types $aa', aa'', a'a', a'a'', a''a''$ (and their counterparts for $b$ and $c$) are similarly defined. Finally, we say that an edge of any of the types $aa, aa', aa'', a'a', a'a''$, or $a''a''$ is {\em of type} $\mathbf{A}$; edges of types $\mathbf{B}$ and $\mathbf{C}$ are similarly defined. We let $\ebic{k}$ (respectively, $\emono{k}$) stand for the number of bichromatic (respectively, monochromatic) $(\le k)$-edges, so that $E_{\le k}(S_{{r}}) = \ebic{k}(S_{{r}}) + \emono{k}(S_{{r}})$. 
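The taxonomy of edge types just introduced is easy to mechanize. The following helper (our own illustration, not from the paper) classifies an edge from the label strings of its endpoints:

```python
def edge_type(u, v):
    """Classify the edge uv from endpoint labels such as "a", "b'", or "c''".
    Returns "bichromatic", or e.g. "monochromatic a'a'' (type A)"."""
    if u[0] != v[0]:                  # different label letters: ab, bc, or ac
        return "bichromatic"
    sub = "".join(sorted((u, v)))     # e.g. ("a", "a''") -> "aa''"
    return f"monochromatic {sub} (type {u[0].upper()})"
```

For instance, `edge_type("a'", "a''")` reports a monochromatic edge of subtype $a'a''$ and type $\mathbf{A}$.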
We say that a finite point set $P$ is $3$\emph{-decomposable} if it can be partitioned into three equal-size sets $\overline{A}$, $\overline{B}$, and $\overline{C}$ satisfying the following: there is a triangle $T$ enclosing $P$ such that the orthogonal projections of $P$ onto the three sides of $T$ show $\overline{A}$ between $\oL{B}$ and $\oL{C}$ on one side, $\oL{B}$ between $\oL{A}$ and $\oL{C}$ on another side, and $\oL{C}$ between $\oL{A}$ and $\oL{B}$ on the third side (see~\cite{ACFLS}). We say that $\{\oL{A},\oL{B},\oL{C}\}$ is a $3$-{\em decomposition} of $P$. It is easy to see that if we let $\oL{A}:=A_{r} \cup A_{r}'\cup A_{r}''$, $\oL{B}:=B_{r} \cup B_{r}'\cup B_{r}''$, and $\oL{C}:=C_{r}\cup C_{r}'\cup C_{r}''$, then $\{\oL{A}, \oL{B}, \oL{C}\}$ is a $3$-decomposition of $S_{{r}}$: indeed, it suffices to take an enclosing triangle of $S_{r}$ with one side orthogonal to the line spanned by the points in $A''$, one side orthogonal to the line spanned by the points in $B''$, and one side orthogonal to the line spanned by the points in $C''$. Thus, it follows from Claim 1 in~\cite{ACFLS} (where it is proved in the more general setting of allowable sequences) that \begin{align} {\ebic{k}(S_{{r}})}=% \begin{cases} 3\binom{k+2}{2},& \text{\quad if $0 \le k\leq 3{r}-1$;} \\[0.2cm] 3\binom{3{r}+1}{2}+(k-3{r}+1)9r,& \text{\quad if $3{r}\le k\le 4{r} -1$.}% \end{cases}% \label{eq:thebic} \end{align} We now count the monochromatic $(\le k)$-edges. By $3$-symmetry, it suffices to focus on those {of type} $\mathbf{A}$. It is readily checked that for all $i$ and $j$ distinct integers, $\edge{a_i a_j}, \edge{a'_i a'_j}$, and $\edge{a''_i a''_j}$ are $k$-critical edges for some $k > 4{r}-1$. The same is true for $\edge{a_i a'_j}$ whenever $i$ and $j$ are not both equal to $1$ (when $i\neq 1$ and $j\neq 1$ this follows from (III) and (IV) ), while $\edge{a_1 a'_1}$ is a $(4{r}-1)$-edge. 
Now, for each $i,j$, $1 \le i \le {r}$, $2\le j \le {r}$, $\edge{a_i'' a'_j}$ is a $(4{r}+i-j)$-edge, while $\edge{a_i'' a'_1}$ is a $(4{r}+i-2)$-edge. Finally, if $1\le i \le {r}$ and $2\le j \le {r}$, then $\edge{a_i'' a_j}$ is a $(3{r}+i+j-3)$-edge, and $\edge{a_i'' a_1}$ is a $(3{r}+i-1)$-edge. In conclusion (to obtain (i), we recall that a $k$-edge is also a $(9{r}-2-k)$-edge): \begin{description} \item{(i)} for $1 \le s \le {r}$, the number of $(3{r}-1+{s})$-edges of types $a'a''$ or $aa''$ is $2s$; \item{(ii)} there is exactly one $(4{r}-1)$-edge of type $aa'$; and \item{(iii)} all other edges of type $\mathbf{A}$ are $k$-critical edges for some $k > 4{r} -1$. \end{description} It follows that the number of $(\le k)$-edges of type $\mathbf{A}$ is \begin{description} \item{(a)} $0$, for $k \le 3{r}-1$; \item{(b)} $2\sum_{s=1}^{k-(3r-1)} s=2 {\tbinom{k-3r+2}{2}} $, for $3{r} \le k \le 4{r}-2$; \item{(c)} $1 + 2\sum_{s=1}^{(4r-1)-(3r-1)} s = 2 {\tbinom{r+1}{2}} + 1$, for $k = 4{r}-1$. \end{description} By $3$-symmetry, for each integer $k$ there are exactly as many $(\le k)$-edges of type $\mathbf{A}$ as there are of type $\mathbf{B}$, and of type $\mathbf{C}$. Therefore \begin{align} {\emono{k}(S_{{r}})}=% \begin{cases} 0 &\text{\quad if $0 \le k\leq 3{r}-1$,} \\[0.2cm] 6 {k-(3{r}-2)\choose 2} &\text{\quad if $3{r} \le k\leq 4{r}-2$,} \\[0.2cm] 6 {r+1 \choose 2} + 3 &\text{\quad if $k=4{r}-1$.} \end{cases}% \label{eq:themono} \end{align} Because $E_{\le k}(S_{{r}}) = \ebic{k}(S_{{r}}) + \emono{k}(S_{{r}})$, it follows by identities~(\ref{eq:thebic}) and~(\ref{eq:themono}) that $E_{\le k}(S_{{r}})$ equals the right-hand side of~(\ref{eq:thelowerbound}). \section{Concluding remarks} The Inequality in Theorem \ref{main} is best possible. That is, there are $n$-point sets $P$ whose simple allowable sequence $\Pi$ gives equality in the Inequality of Corollary \ref{coro: maxs}: \[ E_{\geq k}( \Pi ) = ( n-2k-1) E_{k-1}( \Pi ) + \binom{s}{2}. \] We present two constructions.
The first has $s=n-2k-1$ and consists of the $2k+1$ vertices of a regular polygon together with $n-2k-1$ points very close to the center of the polygon. This construction was given in \cite{LVWW} to show that the bound $E_{k-1} \geq 2k+1$ is best possible. Indeed, note that the $(k-1)$-edges of $P$ correspond to the longest diagonals of the polygon, and so $E_{k-1}( \Pi )=2k+1$; moreover, any edge formed by two points in the central part, or by one point in the central part and a vertex of the polygon, determines a $(\geq k)$-edge. Thus $E_{\geq k}( \Pi ) =\tbinom{n-2k-1}{2}+(2k+1)(n-2k-1)$, which achieves the desired equality. The second construction has $s=0$ and thus can only be achieved when $k \geq n/3$. Consider a regular $(2t+1)$-gon where each vertex is replaced by a set of $m$ points on a small segment pointing in the direction of the center of the polygon. Let $\Pi$ be the allowable sequence corresponding to this point-set, $n=(2t+1)m$, and $k=tm$. It is straightforward to verify that $E_{k-1}(\Pi)=(2t+1)m$ and $E_{\geq k} (\Pi)=2(2t+1)\tbinom{m}{2}$. Thus $E_{\geq k}(\Pi)=(m-1) E_{k-1}(\Pi)=(n-2k-1)E_{k-1}(\Pi)$. Prior to this work, there were two results that provided a lower bound for $E_{\leq k}(P)$ for values of $k$ close to $n/2$. First, Welzl \cite{W}, as a particular case of a more general result, proved that $E_{\leq k}(P) \geq F_1(k,n)$, where \[ F_1(k,n)=\binom{n}{2}-2n\left( \sum_{j=k+1}^{n/2}j\right) ^{1/2}<\binom{n}{2}- \frac{\sqrt{2}}{2}n^{3/2}\sqrt{n-2k}. \] Second, Balogh and Salazar \cite{BS} proved that $E_{\leq k}(P) \geq F_2(k,n)$, where $F_2(k,n)$ is a function that, for $n/3 \leq k \leq n/2$, satisfies \begin{equation*} F_2(k,n) < \binom{n}{2}- \frac{13\sqrt{3}}{36}n^{3/2} \sqrt{n-2k}+o(n^{2}). \end{equation*}% By direct comparison, it follows that both $F_1(k,n)$ and $F_2(k,n)$ are smaller than the bound in Corollary \ref{explicit}, so our bound improves both previous bounds.
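Both constructions can be checked with a few lines of arithmetic. The sketch below (our code, using the closed-form counts stated above) verifies that each attains equality in the Inequality of Corollary \ref{coro: maxs}:

```python
from math import comb

def gap(n, k, E_km1, E_ge_k, s):
    """E_{>=k} minus the bound (n-2k-1)*E_{k-1} + C(s,2); zero iff tight."""
    return E_ge_k - ((n - 2*k - 1) * E_km1 + comb(s, 2))

# Construction 1: a regular (2k+1)-gon plus n-2k-1 points near its center.
n, k = 40, 7
s = n - 2*k - 1
g1 = gap(n, k, 2*k + 1, comb(s, 2) + (2*k + 1) * s, s)

# Construction 2: 2t+1 clusters of m points, n = (2t+1)m, k = tm, s = 0.
t, m = 3, 5
n2, k2 = (2*t + 1) * m, t * m
g2 = gap(n2, k2, (2*t + 1) * m, 2 * (2*t + 1) * comb(m, 2), 0)
```

Both gaps evaluate to zero, as the text asserts.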
A nice feature of Theorem \ref{main} is that it can give better bounds for $E_{\leq k}(n)$ for $k$ large enough, and for $\overline{\hbox{\rm cr}}(n)$, provided a bound better than Inequality (\ref{lower}) is found for $E_{\leq k}(n)$ when $4n/9<k<n/2$. For example, \'Abrego et al. \cite{AFLS} considered $3$-regular point sets $P$. These are point-sets with the property that for $1\leq j \leq n/3$, the $j$th depth layer of $P$ has exactly 3 points of $P$. A point $p\in P$ is in the $j$th depth layer if $p$ belongs to a $(j-1)$-edge but not to a $(\leq j-2)$-edge of $P$. If $n$ is a multiple of 18, they proved the following lower bound: \begin{equation} E_{\leq k}(P) \geq 3\binom{k+2}{2}+3\binom{k+2- n/3}{2}+18 \binom{k+2- 4n/9}{2}.\label{eq:third binomial} \end{equation} This is better than the bound in Theorem \ref{recursive} for $k>4n/9$; however, using Theorem \ref{main} it is possible to find an even better lower bound when $k\geq 17n/36$. We construct a new recursive sequence $u^\prime$ starting at $m=17n/36$ given by \begin{align*} u^\prime_{m-1} & =3\binom{m+1}{2}+3\binom{m+1-\lfloor n/3\rfloor}{2}+18 \binom{m+1-\lfloor 4n/9 \rfloor}{2} \text{ and}\\ u^\prime_{k} & =\left\lceil \frac{1}{n-2k-2}\left( \binom{n}{2}+(n-2k-3)u^\prime_{k-1}% \right) \right\rceil \text{ for }k\geq m\text{.} \end{align*} The value of $m=17n/36$ is the smallest possible for which $u^\prime_m$ is greater than the right-hand side of Inequality (\ref{eq:third binomial}). Following the proof of Theorem \ref{recursive}, it is possible to show that $E_{\leq k}(P) \geq u^\prime_{k}$ for $17n/36 \leq k <n/2$. Thus, if we could show that Inequality (\ref{eq:third binomial}) holds for arbitrary point sets $P$, then we would know that this bound is no longer tight for $k \geq 17n/36$. From statements analogous to Lemmas \ref{double ineq} and \ref{estim}, it follows that $u^\prime_k \sim \tbinom{n}{2}-(7\sqrt{2} n^2/18)\sqrt{1-2k/n}$.
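The recursion for $u'$ is straightforward to evaluate exactly. The following sketch (our code; it assumes $36\mid n$ so that $n/3$, $4n/9$, and $17n/36$ are all integers, and $k\leq n/2-2$ so the denominator stays positive) uses integer ceiling division to avoid floating-point error:

```python
from math import comb

def u_prime(n, kmax):
    """u'_k for 17n/36 - 1 <= k <= kmax (assumes 36 | n and kmax <= n/2 - 2)."""
    m = 17 * n // 36
    u = {m - 1: 3 * comb(m + 1, 2)
              + 3 * comb(m + 1 - n // 3, 2)
              + 18 * comb(m + 1 - 4 * n // 9, 2)}
    for k in range(m, kmax + 1):
        num = comb(n, 2) + (n - 2*k - 3) * u[k - 1]
        den = n - 2*k - 2
        u[k] = -(-num // den)          # exact ceiling division
    return u

u = u_prime(72, 34)   # n = 72: the recursion starts at m = 34
```

For $n=72$ the seed value is $u'_{33} = 3\binom{35}{2}+3\binom{11}{2}+18\binom{3}{2} = 2004$, and one recursion step gives $u'_{34}$.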
This in turn improves the lower bound on the crossing number of $3$-regular point-sets $P$ to $\overline{\hbox{\rm cr}}(P)\geq 0.380024\tbinom{n}{4}+\Theta(n^3)$. In \cite{ACFLS} we considered another class of point-sets, called $3$-decomposable. These are point-sets $P$ for which there is a triangle $T$ enclosing $P$ and a balanced partition $A$, $B$, and $C$ of $P$, such that the orthogonal projections of $P$ onto the sides of $T$ show $A$ between $B$ and $C$ on one side, $B$ between $A$ and $C$ on another side, and $C$ between $A$ and $B$ on the third side. For $3$-decomposable sets $P$ we were able to prove a lower bound consisting of an infinite series of binomial coefficients: \begin{equation} E_{\leq k}(P) \geq 3\binom{k+2}{2}+3\binom{k+2-n/3}{2} +3\sum_{j=2}^{\infty}j(j+1)\binom{k+2-c_{j}n}{2}, \label{eq:inftybinom} \end{equation} where $c_j={1}/{2}-{1}/({3j(j+1)})$. Our main result does not improve this lower bound; however, it gives an interesting heuristic that provides some evidence for the potential truth of this inequality for unrestricted point-sets $P$. If we assume that the sum of the first $t+1$ terms on the right-hand side of Inequality (\ref{eq:inftybinom}) is a lower bound for $E_{\leq k}(P)$, then, just as we outlined in the previous paragraph for $t=2$, Theorem \ref{main} gives a better bound when $k$ is large enough. This happens precisely when $k \geq c_{t+1} n$, which is also the value of $k$ for which the next term in the sum of Inequality (\ref{eq:inftybinom}) gives a nonzero contribution. It was also shown in \cite{ACFLS} that Inequality (\ref{eq:inftybinom}) implies the following bound for $3$-decomposable sets $P$: \begin{equation} \overline{\hbox{\rm cr}}(P) \geq \frac{2}{27}(15-\pi^2)\tbinom{n}{4}+\Theta(n^3)>0.380029\tbinom{n}{4}+\Theta(n^3).\label{eq:cross infty} \end{equation} Theorem \ref{main} does not improve the $\tbinom{n}{4}$ coefficient, but it improves the speed of convergence.
For instance, using Theorem \ref{main} together with the first 30 terms of Inequality (\ref{eq:inftybinom}) gives a better bound than the one obtained solely from the first 101 terms of Inequality (\ref{eq:inftybinom}). Finally, we reiterate our conjectures from \cite{ACFLS} that inequalities (\ref{eq:inftybinom}) and (\ref{eq:cross infty}) are true for unrestricted point-sets $P$. We in fact conjecture that for every $k$ and $n$, the class of $3$-decomposable sets contains optimal sets for both $E_{\leq k}(n)$ and $\overline{\hbox{\rm cr}} (n)$.
https://arxiv.org/abs/1102.5065
On $(\le k)$-edges, crossings, and halving lines of geometric drawings of $K_n$
Let $P$ be a set of points in general position in the plane. Join all pairs of points in $P$ with straight line segments. The number of segment-crossings in such a drawing, denoted by $\crg(P)$, is the \emph{rectilinear crossing number} of $P$. A \emph{halving line} of $P$ is a line passing through two points of $P$ that divides the rest of the points of $P$ in (almost) half. The number of halving lines of $P$ is denoted by $h(P)$. Similarly, a $k$\emph{-edge}, $0\leq k\leq n/2-1$, is a line passing through two points of $P$ and leaving exactly $k$ points of $P$ on one side. The number of $(\le k)$-edges of $P$ is denoted by $E_{\leq k}(P)$. Let $\rcr(n)$, $h(n)$, and $E_{\leq k}(n)$ denote the minimum of $\crg(P)$, the maximum of $h(P)$, and the minimum of $E_{\leq k}(P)$, respectively, over all sets $P$ of $n$ points in general position in the plane. We show that the previously best known lower bound on $E_{\leq k}(n)$ is tight for $k<\lceil (4n-2)/9\rceil$ and improve it for all $k\geq \lceil (4n-2)/9 \rceil$. This in turn improves the lower bound on $\rcr(n)$ from $0.37968\binom{n}{4}+\Theta(n^{3})$ to $(277/729)\binom{n}{4}+\Theta(n^{3})\geq 0.37997\binom{n}{4}+\Theta(n^{3})$. We also give the exact values of $\rcr(n)$ and $h(n)$ for all $n\leq 27$. Exact values were known only for $n\leq 18$ and odd $n\leq 21$ for the crossing number, and for $n\leq 14$ and odd $n\leq 21$ for halving lines.
https://arxiv.org/abs/2103.00531
Sensitivity of low-rank matrix recovery
We characterize the first-order sensitivity of approximately recovering a low-rank matrix from linear measurements, a standard problem in compressed sensing. A special case covered by our analysis is approximating an incomplete matrix by a low-rank matrix. We give an algorithm for computing the associated condition number and demonstrate experimentally how the number of linear measurements affects it. In addition, we study the condition number of the rank-r matrix approximation problem. It measures, in the Frobenius norm, by how much an infinitesimal perturbation to an arbitrary input matrix is amplified in the movement of its best rank-r approximation. We give an explicit formula for the condition number, which shows that it does depend on the relative singular value gap between the rth and (r+1)th singular values of the input matrix.
\section{Introduction} \label{sec_introduction} Compressed sensing \cite{CRT2006, Donoho2006,DE2011,EK2012,FR2013} is a general methodology for recovering an unknown but structured signal $y \in \mathbb{R}^k$ from a measurement $a = L(y) \in \mathbb{R}^\ell$, where $\ell$ can be much smaller than $k$ and $L$ is a sensing operator. In this paper, we consider affine linear maps as sensing operators. The goal is to recover the unknown signal using only information about the compressed signal. The low-rank matrix recovery problem is a specific instance of compressed sensing. Herein, it is assumed that the unknown signal, an $m \times n$ matrix $Y$, (approximately) exhibits a low-rank structure of known rank $r$. The goal is to find a rank-$r$ matrix close to the unknown matrix $Y$ from the compressed sensing $A=L(Y)$. A prominent application of the low-rank matrix recovery problem is in \textit{collaborative filtering} or \textit{recommender systems}. Consider the so-called \emph{Netflix problem}~\cite{BL2009} for instance. Here, the data consists of an $m \times n$ matrix for $m$ users and $n$ movies and the $(i,j)$th entry contains the rating of user $i$ for movie $j$. Not all users have rated every movie. Thus, not all entries of the data matrix are available. It is incomplete. Attempting to fill in the missing values corresponds to predicting personalized movie ratings for each user. A common assumption is that the rating of movies by users is determined by unobserved latent factors, and that a low-rank factorization reveals exactly such latent factors. This assumption was exploited by several submissions of the Netflix prize competition, including SVD++ \cite{Koren2008}, timeSVD++ \cite{Koren2009b}, and the eventual winning solution \cite{Koren2009}. Recovering a low-rank matrix from incomplete observations, as in the Netflix problem, is precisely a low-rank matrix recovery problem. Suppose that the number of known ratings is $\ell$. 
We arrange the known values of the incomplete matrix in a vector $A\in\mathbb R^\ell$. The projection from matrices to incomplete matrices is denoted by~$L:\mathbb R^{m\times n}\to\mathbb R^\ell$. In the case of the Netflix problem~$L$ is a coordinate projection. In this paper, we consider the more general case where $L$ can be any affine linear map. Formally, the low-rank matrix recovery problem consists of solving \begin{equation}\label{lrr_problem}\tag{R} \argmin_{\substack{Y\in \mathbb R^{m\times n},\\ \mathrm{rank}(Y)=r}} \,\frac{1}{2} \Vert A-L(Y)\Vert^2, \end{equation} where $\Vert \cdot \Vert$ is the Euclidean norm on $\mathbb R^\ell$. If the rank is too large relative to $\ell$, then this problem is ill posed: It has infinitely many solutions. This can be seen by letting~$Y$ be an unconstrained matrix. Then, $L^{-1}(A)$ is an affine linear space of dimension~$mn-\ell$. This reveals another motivation for the low-rank assumption: it makes the problem \cref{lrr_problem} well posed. Indeed, if $\ell$ is large enough, we can recover $L$ uniquely in the following sense: Denote by $\Var M_{r}$ the set of matrices of rank $r$ and by $\Var M_{\leq r}$ the set of matrices of rank at most $r$. We say that $L$ can be \emph{generically uniquely recovered}, if there exists a \emph{real algebraic subvariety} $\Sigma\subsetneq \Var M_{\leq r}$, such that for all $Y\in\Var M_r\setminus \Sigma$, we have $L^{-1}(L(Y)) \cap \Var M_r= \{Y\}$. This implies that for almost all compressed rank-$r$ matrices $X=L(Y)\in\mathbb R^\ell$ there is only one rank-$r$ matrix $Y$ such that $X=L(Y)$. In other words, if $L$ can be generically uniquely recovered, the problem \cref{lrr_problem} is well posed almost everywhere. There is a vast body of literature on recoverability properties for low-rank matrix recovery. Specifically, \cite[Theorem~1.2]{RWX2021} implies that, if $\ell> r(m+n-r)$, then almost all $L$ can be recovered generically. 
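The dimension count behind this condition can be probed numerically. The sketch below (our code with arbitrary small dimensions, not the paper's algorithm) builds an orthonormal basis of the tangent space of $\Var M_r$ at a random point $Y$, using that its orthogonal complement is $U^\perp \otimes V^\perp$, and checks that a generic linear map with $\ell \ge (m+n-r)r$ measurements is injective on that tangent space:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 6, 5, 2
s_dim = (m + n - r) * r          # dim of the rank-r manifold
ell = s_dim + 3                  # a few more measurements than the dimension

U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
Y = U @ V.T                      # a random point on M_r

# tangent projector at Y: subtract the normal component P_{U^perp} Z P_{V^perp}
Pu, Pv = U @ U.T, V @ V.T
def tangent_project(Z):
    return Z - (np.eye(m) - Pu) @ Z @ (np.eye(n) - Pv)

# orthonormal basis of the tangent space, from the eigenvectors of the projector
P = np.zeros((m * n, m * n))
for idx in range(m * n):
    E = np.zeros(m * n); E[idx] = 1.0
    P[:, idx] = tangent_project(E.reshape(m, n)).reshape(-1)
eigvals, eigvecs = np.linalg.eigh((P + P.T) / 2)
basis = eigvecs[:, eigvals > 0.5]          # columns span the tangent space
assert basis.shape[1] == s_dim

# a random linear sensing map restricted to the tangent space
Mmat = rng.standard_normal((ell, m * n))
rank = np.linalg.matrix_rank(Mmat @ basis)  # full rank: injective on T_Y M_r
```

A generic Gaussian map attains the full rank $s_{\dim}=(m+n-r)r$ almost surely, matching the necessary measurement count.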
Other problems that can be solved with low-rank matrix recovery include collaborative filtering \cite{BKV2009, RS2005}, image inpainting \cite{Magazine2014,KSK2015,MatrixInpainting2015}, dimensionality reduction \cite{SW2006, SY2007}, embedding problems \cite{LLE2995}, and multi-class learning \cite{AEP2008, OTM2010}. In all of these applications it is important to understand the sensitivity of the output with respect to perturbations in the input. Eisenberg \cite{Eisenberg2020} summarizes this as follows: \begin{quote} \emph{``Many investigations of big data solve inverse problems [...]. The sensitivity of results to uncertainties [...] is crucial to determine the reliability and thus utility of results.''} \end{quote} Indeed, while low-rank matrix recovery is well posed almost everywhere, this does not automatically mean that it is \textit{well conditioned}. That is, the unique solution might vary drastically as $A$ is changed to $A + E$, where $\|E\| = \epsilon$ is some exceedingly small perturbation. For the Netflix problem this translates to the pertinent question of how sensitive the predicted movie ratings are to small perturbations in the known ratings (which by their nature can never be truly exact). Studying the sensitivity of low-rank matrix recovery is the motivation for this paper. We want to characterize the sensitivity of the output, the unknown structured signal $Y\in\mathbb R^{m\times n}$, with respect to small perturbations of the input data, the compressed sensing $A\in\mathbb R^\ell$. For this, we study the associated \emph{condition number} $\kappa_\mathrm{recovery}(A,Y)$. We give a formal definition of this number in \cref{sec:contributions}, but at this point it is sufficient to think of it as the smallest constant giving an asymptotically sharp bound \[ \Vert Y - Y'\Vert_F \leq \kappa_\mathrm{recovery}(A,Y)\,\Vert A-A'\Vert, \] where $\| \cdot \|_F$ denotes the Frobenius norm.
Here, $Y'$ is the solution of \cref{lrr_problem} for the input $A'=A+\Delta A$, which is a small perturbation of $A$. By asymptotically sharp we mean that the inequality is a sharp inequality in the limit as $\Vert \Delta A\Vert \to 0$. Our first main contribution is a practical linear algebra algorithm for computing the condition number $\kappa_\mathrm{recovery}(A,Y)$ of low-rank matrix recovery. This algorithm is presented in \cref{alg_matrix_recovery}. We show in \cref{prop_complexity} below that for certain structured sensing operators, the computational complexity of the algorithm is $\mathcal{O}( \phi s^3 )$, where $s = (m+n-r)r$ is the problem size\footnote{The problem size is defined here as the dimension of the optimization domain in \cref{lrr_problem}. It will be stated formally in \cref{sec:contributions}.} and $\ell = \phi s$, where the \emph{oversampling rate} $\phi>1$ is typically a small constant. Up to a factor $\phi$, this is the same complexity as one step of a standard (Riemannian) Newton method \cite{AMS2008,Boumal2020} for solving optimization problem \cref{lrr_problem}. We apply the algorithm in \cref{sec:experiments}. Our second contribution is an explicit formula of the condition number for the special case of \emph{low-rank approximation}. Here, the input data is $A\in\mathbb R^{m\times n}$ and the problem is approximating $A$ with a matrix of low rank $r$, i.e., solving \begin{equation}\label{lra_problem} \tag{A} \argmin_{\substack{Y\in \mathbb R^{m\times n},\\ \mathrm{rank}(Y)=r}} \,\frac{1}{2}\Vert A-Y\Vert^2_F, \end{equation} where the norm is the Frobenius norm. This problem is the special case of \cref{lrr_problem} when $L$ is the identity. The usual approach for solving the low-rank approximation problem is by computing a compact singular value decomposition (SVD) $A = \sum_{i=1}^{\min\{m,n\}} \sigma_i \vect{u}_i \vect{v}_i^T$ with $\sigma_1\geq \cdots\geq \sigma_{\min\{m,n\}}$. 
Then, a solution of \cref{lra_problem} is given by the truncated SVD $Y=\sum_{i=1}^{r} \sigma_i \vect{u}_i \vect{v}_i^T$. We denote the condition number in this case by $\kappa_\mathrm{approximation}(A,Y)$. Our second main result, \cref{main_thm1}, characterizes the condition number of low-rank approximation of $A$: \begin{align} \label{eqn_main_thm1} \kappa_\mathrm{approximation}(A,Y) = \frac{1}{1-\frac{\sigma_{r+1}}{\sigma_r}} = \frac{\sigma_r}{\sigma_r - \sigma_{r+1}}. \end{align} This means that the sensitivity of approximating $A$ with the low-rank matrix~$Y$ depends on the \emph{singular value gap} between $\sigma_{r+1}$ and $\sigma_r$ of $A$. The input $A$ is \emph{ill-posed} if $\sigma_{r} = \sigma_{r+1}$. This may come as a surprise, as some results in the literature may be interpreted as stating that the sensitivity of this approximation problem does not depend on the singular value gap. A recent paper by Drineas and Ipsen \cite{DI2019} even sums it up in their title: \textit{``Low-rank matrix approximations do not need a singular value gap.''} For this reason, we compare our results to the literature in \cref{sec:Hackbusch} and clear up seeming paradoxes like this. The outline of this paper is as follows. In the next section we compare the result for low-rank approximation to existing insights from the literature. Thereafter, \cref{sec:contributions} formally states the main results and assumptions of our study. \Cref{sec:H} investigates the Hessian of the objective function from \cref{lra_problem}, which provides a crucial contribution to the condition numbers of both problems \cref{lrr_problem,lra_problem}. Armed with insights about the Hessian, we characterize the condition number of \cref{lra_problem} in \cref{sec_low_rank_approximation}. The condition number of \cref{lrr_problem} is analyzed in \cref{sec:recovery}; in this case, we are unfortunately not able to derive a closed-form expression.
For this reason, \cref{alg_matrix_recovery} presents a numerical algorithm for computing it. Numerical experiments with both low-rank approximation and recovery are featured in \cref{sec:experiments}. \section{Comparison to prior results}\label{sec:Hackbusch} In the literature we could not find results on the sensitivity of low-rank matrix recovery. However, for the special case of low-rank approximation there are several results in the literature. We discuss them next. \subsection{Drineas and Ipsen's no-gap result} Drineas and Ipsen's article \cite{DI2019} is titled \textit{``Low-rank matrix approximations do not need a singular value gap.''} At first sight, this seems to contradict our results, but on closer inspection the paradox quickly disappears. Drineas and Ipsen study error bounds for the \emph{approximation error} of a low-rank approximation, as measured by the Schatten $p$-norm of the residual $P_U^\perp A$, where $A \in \mathbb{R}^{m \times n}$ is the matrix to approximate and $P_U^\perp$ projects onto the orthogonal complement of a fixed $r$-dimensional subspace $U \subset \mathbb{R}^m$. That is, they derive error bounds for $\| P_U^\perp A \|_p$ as either the fixed subspace $U$ or the matrix $A$ is perturbed in \cite[Theorem 1]{DI2019} and \cite[Theorem 2]{DI2019}, respectively. For example, when perturbing $A$, \cite[Theorem 2]{DI2019} states that \[ \big| \| P_U^\perp A \|_p - \| P_U^\perp A' \|_p \big| \le \|A - A'\|_p. \] Our results, on the other hand, describe what happens to the best rank-$r$ approximation of $A$ as it is perturbed.
That is, using terminology closer to \cite{DI2019}, we show that \[ \| P_{U^*}^\perp(A) \, A - P_{U^*}^\perp(A') \, A' \|_F \le \frac{\sigma_r}{\sigma_r - \sigma_{r+1}} \| A - A' \|_F , \] where $P_{U^*}^\perp(A)$ projects $A$ to the best rank-$r$ approximation.\footnote{Equivalently, but closer to \cite{DI2019} in formulation, it projects the column space of $A$ to $U^*$, the $r$-dimensional subspace of left singular vectors associated to the largest $r$ singular values.} Note that in our result both $A$ and the projector $P_{U^*}^\perp(A)$ are perturbed as $P_{U^*}^\perp(A)$ varies with $A$. \subsection{An error bound of Hackbusch} Next, we discuss the result from \cite{Hackbusch2016} by Hackbusch. For this we let $A\in\mathbb R^{m\times n}$ and we denote by $A':=A+\Delta A$ a perturbation of~$A$. If $Y'\in \mathcal M_r$ is the best rank-$r$ approximation of $A'$ and if $Y_\mathrm{computed}\in\mathcal M_r$ is any other rank-$r$ matrix, then Theorem~4.5 in \cite{Hackbusch2016} asserts that \begin{equation}\label{hackbusch} \Vert Y'-Y_\mathrm{computed}\Vert_F \leq q\,\Vert A'-Y_\mathrm{computed}\Vert_F, \end{equation} where $q=\tfrac{1+\sqrt{5}}{2}\approx 1.62$ is a constant that does not depend on the input data. The main rationale for this bound is that $Y_\mathrm{computed}$ could be a cheap approximation of the rank-$r$ truncated SVD, e.g., obtained from randomized methods \cite{HMT2011} or adaptive cross approximation \cite{BR2000}. The bound states that if the approximation is $Y_\mathrm{computed}$, then $Y_\mathrm{computed}$ deviates from $Y'$ by at most $q$ times $\Vert A'-Y_\mathrm{computed}\Vert_F$. The latter can be computed from the data, so the quality of the computation can be assessed. The fact that \cref{hackbusch} involves a constant upper bound seems contradictory to \cref{eqn_main_thm1}. 
However, our result states that for sufficiently small $\Vert A' - A\Vert_F$ we have \begin{equation}\label{hackbusch2} \Vert Y' - Y \Vert_F \leq \frac{\sigma_r}{\sigma_r- \sigma_{r+1}}\,\Vert A' - A \Vert_F + \mathcal{O}( \Vert A' - A \Vert_F^2 ), \end{equation} where $Y$ is a best rank-$r$ approximation of $A$, and $\sigma_r,\sigma_{r+1}$ are the $r$th and $(r+1)$th singular values of $A$. This means that a small perturbation $\Delta A$ of the input $A$ is amplified in the output by the condition number $\kappa_\text{approximation}(A,Y)$ in the worst case. If $0 \ne \sigma_{r}\approx\sigma_{r+1}$, this factor is huge. Assume that $A$ is the true matrix we want to compute a rank-$r$ approximation of and that $A'=A+\Delta A$ is a perturbation of $A$, e.g., due to roundoff or measurement errors. In this case, the bound \cref{hackbusch} does not tell the whole story and could be complemented with \cref{eqn_main_thm1}. Indeed, even if we can approximate $A'$ closely by $Y_\text{computed}$ so that $\Vert A'-Y_\mathrm{computed}\Vert_F$ is small, the matrix $Y_\mathrm{computed}$ can still be far from the best rank-$r$ approximation $Y$ of the true matrix $A$. Combining Hackbusch's result with \cref{eqn_main_thm1} yields \begin{align*} \Vert Y-Y_\mathrm{computed}\Vert_F &\leq \Vert Y-Y'\Vert_F + \Vert Y'-Y_\mathrm{computed}\Vert_F \\ &\leq q \Vert A'-Y_\mathrm{computed}\Vert_F + \frac{\sigma_r}{\sigma_r-\sigma_{r+1}} \Vert A' - A \Vert_F. \end{align*} The first term of the final bound follows from Euclidean geometry, while the second term is the effect of the curvature of $\mathcal M_r$. Finally, observe that both \cref{hackbusch} and \cref{hackbusch2} agree on a constant upper bound when~$A'$ is a perturbation of the rank-$r$ matrix $A = Y_\text{computed}$. In this case, $\sigma_r > \sigma_{r+1} = 0$ so that $\kappa_\text{approximation}(A,Y_\text{computed}) = 1$.
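The bound \cref{hackbusch2} is easy to probe numerically. The following sketch (our illustration, not from the paper) perturbs a random matrix in many random directions and checks that the observed amplification of the best rank-$r$ approximation never exceeds $\sigma_r/(\sigma_r-\sigma_{r+1})$, up to first order:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 6, 2

def best_rank_r(A, r):
    """Best rank-r approximation via the truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

A = rng.standard_normal((m, n))
s = np.linalg.svd(A, compute_uv=False)
kappa = s[r - 1] / (s[r - 1] - s[r])   # sigma_r / (sigma_r - sigma_{r+1})

Y = best_rank_r(A, r)
eps, worst = 1e-6, 0.0
for _ in range(200):
    E = rng.standard_normal((m, n))
    E *= eps / np.linalg.norm(E)       # perturbation of norm eps
    worst = max(worst, np.linalg.norm(Y - best_rank_r(A + E, r)) / eps)
# up to O(eps) terms, worst <= kappa
```

Random sampling only lower-bounds the worst case; the condition number is the supremum over all perturbation directions, so `worst` approaches `kappa` from below as more directions are tried.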
\subsection{First-order perturbations of the SVD by Hua and Sarkar} The earliest result on the sensitivity of the best low-rank approximation to a matrix we could locate in the literature is by Hua and Sarkar \cite{HS89}. They show that ``the first-order perturbations in the SVD truncated matrices [...] can be simply expressed in terms of the perturbations in the original data matrices'' and they conclude from their analysis that ``the SVD truncations do not affect the first order perturbations''. This also seems to contradict \cref{eqn_main_thm1}, where we show that a best rank-$r$ approximation \emph{can} change by (much) more than the norm of the perturbation. The paradox disappears when we take into account that Hua and Sarkar assume that the input $A$ is itself a rank-$r$ matrix. Thus, the $(r+1)$th singular value of $A$ is $\sigma_{r+1}=0$ and so, by \cref{eqn_main_thm1}, we have, once more, $\kappa_\mathrm{approximation}(A,Y) = 1$, which is fully consistent with their result. \subsection{Perturbation expansions of Vu, Chunikhina, and Raich} The main result of Vu, Chunikhina, and Raich \cite[Theorem 1]{VCR2021_arxiv} turns Feppon and Lermusiaux's analysis \cite{FL2018} into a rigorous perturbation bound for the best rank-$r$ approximation for arbitrary input matrices. We also use Feppon and Lermusiaux's work in \cref{sec_low_rank_approximation,sec:recovery}. Consequently, the effect of curvature pops up in \cite[Theorem 1]{VCR2021_arxiv}, consistent with \cref{eqn_main_thm1}. Nevertheless, we think our first-order error bound \[ \| Y - Y' \|_F \le \frac{\sigma_r}{\sigma_r - \sigma_{r+1}} \| A - A' \|_F + \mathcal{O}(\|A-A'\|_F^2), \] where $A'$ is a perturbation of $A$ and $Y$ and $Y'$ are the best rank-$r$ approximations of $A$ and $A'$ respectively, is more succinct than the bound in \cite[Theorem~1]{VCR2021_arxiv}. In addition, our analysis extends to the low-rank matrix recovery problem. 
\section{Statement of the main results}\label{sec:contributions} As in the introduction we consider an affine linear map \[ L:\mathbb R^{m\times n}\to \mathbb R^\ell, \, Y\mapsto M(Y) + b, \] which is called the sensing operator. Let $$\Var{M}_r:=\{X\in\mathbb R^{m\times n} \mid \mathrm{rank}(X)=r\}$$ be the set of matrices of rank equal to $r$. It is a \textit{smooth embedded submanifold} of $\mathbb{R}^{m\times n}$ of dimension $\dim \Var{M}_r = (m+n-r)r$ \cite{HM1994}. This implies that the set $\Var{M}_r$ is equipped with a topology and smoothness structure that is inherited from the ambient space~$\mathbb{R}^{m\times n}$. This enables a vast generalization of calculus on such domains \cite{Lee2013}. Precisely this smooth structure will make it much easier to compute the desired condition numbers. The set of sensed matrices will be denoted by \[ \Var{I}_r:=L(\Var{M}_r). \] We also define the set of matrices of rank bounded by $r$: \[ \Var{M}_{\leq r}:=\{X\in\mathbb R^{m\times n} \mid \mathrm{rank}(X)\leq r\}. \] It is both the Euclidean closure of $\Var{M}_r$ and a real algebraic variety in $\mathbb{R}^{m\times n}$, defined by the vanishing of $(r+1)\times (r+1)$-minors \cite{Harris1992}. The goal of this paper is to determine the first-order sensitivity of \cref{lrr_problem}. The input to this problem is a compressed sensing $A \in \mathbb{R}^\ell$, while the output is necessarily restricted to be a rank-$r$ matrix. The first complication one encounters is that there can be no or several solutions $Y$ for an input $A$. A priori we should expect to deal with a \emph{set-valued solution map} $ R : \mathbb{R}^{\ell} \rightrightarrows \Var{M}_r, \; A \mapsto \argmin_{Y \in \Var{M}_r} \, \frac{1}{2} \Vert A-L(Y)\Vert^2. $ The condition number of this map can be analyzed with the general techniques we introduced in \cite{BV2020}. In low-rank recovery, however, the geometry of the problem is more well-behaved than the general case. 
This allows for a clearer presentation that eliminates the intricacy of solution manifolds in \cite{BV2020}. We explain this next. The geometry of our setting is depicted in \cref{fig_geometry}. It illustrates that \cref{lrr_problem} decouples into two subproblems: (i) minimizing the distance from $\mathcal I_r$ to $A$, and (ii) inverting the map $L$. Fortunately, it turns out that under reasonable assumptions~$L^{-1}$ is a differentiable \textit{function} almost everywhere in the precise sense of \cref{prop_ass1} below. Those assumptions are the following. \begin{assumption}\label{ass1} We assume that $Y$ can be generically uniquely recovered from $L(Y)=M(Y)+b$ and that there exists a $Y\in\Var M_r$ such that $M|_{\Tang{Y}{\Var{M}_r}}$ has full rank. \end{assumption} As in the introduction, being generically uniquely recoverable means that there is an algebraic subvariety $\Sigma\subsetneq \Var M_{\leq r}$, such that~$L^{-1}(L(Y)) \cap \Var M_r = \{Y\}$ for all $Y\in\Var M_r\setminus \Sigma$. Theorem~1.2 in \cite{RWX2021} asserts that almost all $L$ have this property. Note that, for reasons of dimension, having $\ell \ge \dim \Var{M}_r = (m+n-r)r$ measurements is a necessary condition for generic unique recoverability. The property that $M|_{\Tang{Y}{\Var{M}_r}}$ has full rank can be checked algorithmically by choosing a random $Y\in\Var M_r$ and using standard linear algebra methods to compute the rank. A rich list of examples of such $M$'s is given by the following coordinate projection operators: Let $[u_1,\ldots,u_m]\in\mathbb R^{m\times m}$ and $[v_1,\ldots,v_n]\in\mathbb R^{n\times n}$ be orthogonal matrices.
We claim that if $M$ is of the form \begin{align*} &M(Y)=(u_i^TYv_j)_{(i,j)\in I}, \quad \text{where} \quad \vert I \vert = \ell\\ &\text{and} \quad I\supset \{(i,j) \in \{1,\ldots,m\}\times \{1,\ldots,n\} \mid i\leq r \text{ or } j\leq r\}, \end{align*} there exists $Y$ such that $M|_{\Tang{Y}{\Var{M}_r}}$ has full rank (note that choosing such an $M$ is possible, because $\ell \geq r(m+n-r)$). To see this, we define $U=[u_1,\ldots,u_r]$ and $V=[v_1,\ldots,v_r]$ and set $Y:=UV^T$. By \cite{HM1994}, the normal space of $\Var M_r$ at~$Y$ is $ \Norm{Y}{\Var{M}_r} = U^\perp \otimes V^\perp, $ where $U^\perp$ and $V^\perp$ are the orthogonal complements of the column spans of $U$ and $V$, respectively. This shows that $\ker M\subset \Norm{Y}{\Var{M}_r}$ and so~$M|_{\Tang{Y}{\Var{M}_r}}$ has full rank. We prove the next result in \Cref{sec_ass1}. \begin{proposition}\label{prop_ass1} Under \cref{ass1} there exist smooth embedded submanifolds $\Var{R}_r \subset \Var{M}_r$ and $\Var{S}_r \subset \Var{I}_r = L(\Var{M}_r)$, which are both dense in their supersets, such that \[ L|_{\Var{R}_r} : \Var{R}_r \to \Var{S}_r \] is a global diffeomorphism. \end{proposition} A diffeomorphism is a smooth bijective map between manifolds whose inverse map is smooth. \begin{figure} \begin{tikzpicture}[remember picture] \node[inner sep=0] at (0,0) {\includegraphics[width=.98\textwidth]{Manifolds2.pdf}}; \node at (-4,2.6) {$A$}; \node at (-5.0,2.6) {$A'$}; \node at (-3.9,0.9) {$\Pi$}; \node at (-4.1,-1.45) {$X$}; \node at (-3.1,-0.6) {$\Tang{X}{\Var{S}_r}$}; \node at (-5.1,-2.1) {$X'$}; \node at (4.7,-1.3) {$Y$}; \node at (3.2,-2.2) {$Y'$}; \node at (0,-1.2) {$L$}; \node at (-3.9,-3.5) {$\mathcal S_r\subset \mathcal I_r$}; \node at (-3.9,-4) {(sensed recoverable matrices)}; \node at (3.9,-3.5) {$\mathcal R_r\subset \mathcal M_r$}; \node at (3.9,-4) {(recoverable rank-$r$ matrices)}; \end{tikzpicture} \caption{\label{fig_geometry} The simplified geometry of this paper.
On the right is the submanifold $\mathcal R_r\subset \mathcal M_r$ of recoverable rank-$r$ matrices, and on the left is the submanifold $\mathcal S_r\subset \mathcal I_r$ of sensed recoverable matrices. If the affine linear map $L$ is generic, then it restricts to a diffeomorphism $\mathcal R_r\to\mathcal S_r$. The low-rank matrix recovery problem consists of two steps: (i) projecting the data point $A\in\mathbb{R}^\ell$ to $X\in\Var{S}_r$ with $\Pi$, and (ii) finding $Y\in\Var{R}_r$ with $L(Y)=X$. Therefore, the sensitivity of the output $Y$ with respect to the input perturbation $A' - A$ depends on the combined impact of (i) the curvature of $\Var{S}_r$ which causes $X$ to move to $X' \in \Var{S}_r$ as $A$ moves to $A'$, and (ii) the sensitivity of inverting $L\mid_{\mathcal R_r}$ which forces $Y$ to move to $Y' \in \Var{R}_r$ as $X$ moves to $X'$.} \end{figure} For minimizing the distance from $\Var{S}_r$ to the input $A\in\mathbb{R}^\ell$ we consider the following open subset of the input space $\mathbb{R}^\ell$: \[ \Var{D} = \mathbb{R}^\ell \setminus \Var{Z}, \] where $\Var{Z}$ is (the Euclidean closure of) the set of points for which $\min_{X\in\Var{S}_r}\,\tfrac{1}{2}\Vert A-X\Vert^2$ does not have a \textit{unique} solution (because it has multiple or no solutions). Note that we have replaced $\Var{I}_r$ by the submanifold $\Var{S}_r$ here. Since $\Var{S}_r$ is an embedded\footnote{This means that its smooth structure is compatible with the smooth structure on $\mathbb{R}^\ell$.% } submanifold of $\mathbb{R}^\ell$, the existence of a \textit{tubular neighborhood} \cite{Lee2013} of $\Var{S}_r$, an open neighborhood containing $\Var{S}_r$ in $\mathbb{R}^\ell$, guarantees that $\mathcal D$ contains at least this open subset.\footnote{Erd\"os's result~\cite{Erdos1945} implies that the set of points which have several infima of the distance function to $\mathcal{S}_r$ is of Lebesgue measure zero. 
However, $\mathcal{S}_r$ may not be closed, and so there can be points with no minimizer on $\mathcal{S}_r$. The set of such points can be full-dimensional. Think of a cusp with the node removed.} On $\Var{D}$ we can define $\Pi: \Var{D} \to \Var{S}_r, A\mapsto \argmin_{X\in\mathcal S_r}\,\tfrac{1}{2}\Vert A-X\Vert^2$, the projection onto $\mathcal S_r$. In summary, the foregoing closer look at the geometry of the recovery problem \cref{lrr_problem} allows us to arrive at a \emph{recovery map} $R = (L|_{\mathcal R_r})^{-1}\circ \Pi$. This is the map \begin{equation} \label{eqn_vandereycken_formulation2} R: \mathcal D \to \Var{R}_r,\quad A \mapsto \argmin_{Y \in \Var{R}_r} \frac{1}{2} \| A - L(Y) \|^2. \end{equation} It is a smooth (single-valued) map! With the foregoing concessions (\cref{ass1}, open dense submanifolds $\Var{R}_r$ and $\Var{S}_r$, removing $\mathcal Z$ from the domain) we can apply Rice's \cite{Rice1966} classic definition of the condition number of a map for $A \in \mathcal D$: \begin{equation}\label{def_cond} \kappa_\mathrm{recovery}(A, Y) = \lim_{\epsilon \to 0}\sup_{\substack{\Delta A\in\mathbb R^\ell,\\ \Vert \Delta A\Vert \leq \epsilon}}\,\frac{\Vert R(A) - R(A+\Delta A)\Vert_F}{\Vert \Delta A\Vert}, \end{equation} where $Y = R(A) \in \Var{R}_r$ is the recovered rank-$r$ matrix. If $A\in\mathcal Z$, so that $A$ lies outside the locus on which the recovery map \cref{eqn_vandereycken_formulation2} is defined, we define $\kappa_\mathrm{recovery}(A, Y) = \infty$. \begin{remark}\label{rmk1} Note that \cref{eqn_vandereycken_formulation2,def_cond} allow us to study the condition number of the global minimizer $Y = R(A)$ of \cref{eqn_vandereycken_formulation2}. By considering the graph of $L : \Var{R}_r \to \Var{S}_r$, we see that the results of \cite{BV2020} also apply in their general form. This means that the analysis in this paper also covers local minima and critical points $Y$ with $A - L(Y) \perp \Tang{L(Y)}{\Var{S}_r}$ as in \cite{BV2020}.
Nevertheless, we will focus on (local) minima, because they are the main interest in applications. \end{remark} \subsection{Low-rank matrix recovery} The condition number of (well-posed) low-rank matrix recovery in \cref{eqn_vandereycken_formulation2} at $A\in \mathcal D$ with output $Y=R(A)$ can be obtained from Theorem 7.3 in \cite{BV2020}: \begin{equation}\label{def_CN}\tag{C} \kappa_\mathrm{recovery}(A,Y) = \| (M|_{\mathrm T_Y\mathcal M_r})^{-1} H_{A,X}^{-1} \|_2. \end{equation} Herein, $\| \cdot \|_2$ is the spectral norm relative to the Frobenius norm on $\mathbb{R}^{m\times n}$ and the Euclidean norm on $\mathbb{R}^\ell$, $M|_{\mathrm T_Y\mathcal M_r}$ is the derivative of $L|_{\Var{M}_r}$, and $H_{A,X}$ is the \emph{Riemannian Hessian} of the squared distance function $d_A : \Var{S}_r \to \mathbb{R}, X \mapsto \frac{1}{2} \| A - X \|^2$ at the point~$X=L(Y)$. This Riemannian Hessian generalizes the classic Euclidean Hessian and contains the second derivatives of $d_A$ on $\Var{S}_r$; it is discussed in \cref{sec:H}. As one can see from \cref{fig_geometry} and also from the formula \cref{def_CN}, the condition number $\kappa_\mathrm{recovery}(A,Y)$ is determined by two parts: \begin{enumerate} \item[(i)] the sensitivity of the recovery map $(L|_{\mathcal R_r})^{-1}$, and \item[(ii)] the \emph{curvature} of the manifold of sensed rank-$r$ matrices $\Var{S}_r$ at $L(Y)$. \end{enumerate} The effect of curvature on condition is depicted in \cref{fig_osculating}. If $A$ is a \emph{center of curvature} with base point $X$ of the parabola-shaped manifold, then the Riemannian Hessian $H_{A,X}$ is not invertible. In this case we have~$\kappa_\mathrm{recovery}(A,Y)=\infty$ and we call the input $A$ \emph{ill-posed}. The center of curvature for $X$ is shown in \cref{fig_osculating} as the gray point in the center of the displayed circle.
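The effect of curvature can be reproduced numerically on the parabola of \cref{fig_osculating}. The following sketch (assuming NumPy; the step sizes are illustrative) projects onto $\{(x, -x^2)\}$, whose curvature at the origin is $2$ with center of curvature $(0, -\tfrac{1}{2})$: at $A = (0, -0.4)$ the predicted amplification of the closest-point map is $(1 - 0.4 \cdot 2)^{-1} = 5$, while at the center of curvature the finite-difference amplification blows up.

```python
import numpy as np

def project_to_parabola(a1, a2):
    # Critical points of x -> |(x, -x^2) - (a1, a2)|^2 solve the cubic
    # 2x^3 + (1 + 2*a2)x - a1 = 0; keep the real root closest to (a1, a2).
    roots = np.roots([2.0, 0.0, 1.0 + 2.0 * a2, -a1])
    xs = roots[np.abs(roots.imag) < 1e-9].real
    pts = np.stack([xs, -xs**2], axis=1)
    return pts[np.argmin(np.linalg.norm(pts - [a1, a2], axis=1))]

delta = 1e-7
X = project_to_parabola(0.0, -0.4)
Xp = project_to_parabola(delta, -0.4)
amp = np.linalg.norm(Xp - X) / delta          # predicted: 1/(1 - 0.4*2) = 5

Xc = project_to_parabola(0.0, -0.5)           # A at the center of curvature
Xcp = project_to_parabola(delta, -0.5)
amp_ill = np.linalg.norm(Xcp - Xc) / delta    # grows without bound as delta -> 0
print(amp, amp_ill)
```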
\begin{figure} \begin{tikzpicture}[scale = 1.54] \draw[black] plot[smooth,domain=-1.2:1.2] (\x, {-(\x)*(\x)}); \draw[black] (0,-1/2) circle[radius=1/2]; \draw[black] (0,0) circle[radius=1.25pt] node[above, xshift=5pt] {$X$}; \draw[black] (-0.35,-0.35 * 0.35) circle[radius=1.25pt] node[above, xshift=-15pt] {$X+\Delta X$}; \draw[->] (0,-0.3) -- (0,-0.1); \fill[gray] (0,-1/2) circle[radius=1.25pt]; \fill[black] (0,-0.4) circle[radius=1.25pt] node[right] {$A$}; \draw[->] (-0.1,-0.4) -- (-0.19,-0.4) node[left, fill=white] {$A+\Delta A$}; \draw[->, shorten >=3pt] (-0.2,-0.35) -- (-0.35,-0.35 * 0.35) ; \fill[white] (0,0) circle[radius=1.24pt]; \fill[white] (-0.35,-0.35 * 0.35) circle[radius=1.24pt]; \end{tikzpicture} \caption{\label{fig_osculating} The picture shows how curvature affects the sensitivity of computing closest points on nonlinear objects. In this case, the curvature of the parabola amplifies the error $\Delta A$ in $A$. The amplification of errors is determined by the eigenvalues of the Riemannian Hessian $H_{A,X}$. } \end{figure} \subsection{Low-rank matrix approximation}\label{sec:contr_lra} We turn to the special case when $L$ is the identity map. This corresponds to the problem of approximating a matrix $A \in \mathbb{R}^{m\times n}$ by a rank-$r$ matrix. We denote the condition number of \emph{low-rank approximation} by $\kappa_\mathrm{approximation}$. It is also defined by \cref{def_CN}, where $L$ is now the identity and $\Var{R}_r = \Var{S}_r$. Our main result in this setting is the following result. \begin{theorem}\label{main_thm1} Let $A = \sum_{i=1}^{\min\{m,n\}} \sigma_i \vect{u}_i \vect{v}_i^T$ be an SVD of $A$ with ordered singular values $\sigma_1\geq \ldots\geq \sigma_{\min\{m,n\}}$. Let $Y\in R(A)$, so that $Y = \sum_{i=1}^{r} \sigma_i \vect{u}_i \vect{v}_i^T$ is a rank-$r$ truncated SVD of $A$. 
Then, the condition number of finding a best rank-$r$ approximation at $(A,Y)$ is \[ \kappa_\mathrm{approximation}(A,Y) = \frac{1}{1 - \frac{\sigma_{r+1}}{\sigma_r}} = \frac{\sigma_r}{\sigma_r - \sigma_{r+1}} \] or $1$ if $\sigma_r = 0$. \end{theorem} We prove this theorem in \cref{sec_low_rank_approximation} below. Note that $\kappa_\mathrm{approximation}(A,Y)=\infty$ if and only if ${\sigma_{r+1} = \sigma_r \neq 0}$. \section{The Riemannian Hessian of the distance function}\label{sec:H} The \textit{Riemannian Hessian} \cite{Lee1997,riemannian_geometry,Petersen} generalizes the classic Hessian matrix from multivariate functions to functions on manifolds. We focus on the Riemannian Hessian of the squared distance function $d_A : \Var{X} \to \mathbb{R}, X \mapsto \frac{1}{2} \| X - A \|^2$ from $A \in \mathbb{R}^n$ to the smoothly embedded submanifold $\Var{X} \subset \mathbb{R}^n$. This manifold is equipped with the \textit{Riemannian metric} inherited from the Euclidean space $\mathbb{R}^n$. That is, every tangent space $\Tang{X}{\Var{X}}$ is equipped with the inner product $g_X(x,y) = x^T y$, where $x, y \in \Tang{X}{\Var{X}}$ are viewed as vectors in $\mathbb{R}^n$. The goal of this section is not to provide a rigorous derivation of the Riemannian Hessian in general, but rather to present an accessible account for submanifolds of Euclidean space that highlights its connection to classic differential-geometric objects like the second fundamental form and Weingarten map, which will be used in the technical results. An alternative accessible account can be found in \cite[Chapter 5]{Boumal2020}. Ignoring the manifold structure for a moment, in classic multivariate analysis the Hessian of $d_A$ at $X$ would be \begin{align} \label{eqn_hessian} \tag{E} \deriv{(\deriv{d_A}{X})}{X} &= \deriv{(\dot{X} \mapsto \langle \dot{X}, X - A \rangle)}{X} \\ \nonumber &= (\dot{X},\ddot{X}) \mapsto \langle \dot{X}, \ddot{X} \rangle - \langle (\deriv{\dot{X}}{X})(\ddot{X}), A - X \rangle.
\end{align} In the classic setting, $\dot{X}$ is a vector in $\mathbb{R}^n$ which bears no particular relationship to~$X$. Hence, in Euclidean geometry, the second term involving $(\deriv{\dot{X}}{X})(\ddot{X})$ vanishes. When $X$ is restricted to lie on a manifold $\Var{X}$, the interpretation of \cref{eqn_hessian} changes substantially. The derivative of a smooth map $f : \Var{X} \to \Var{Y}$ between manifolds at $X \in \Var{X}$ is a linear map $\deriv{f}{X} : \Tang{X}{\Var{X}} \to \Tang{f(X)}{\Var{Y}}$ between the respective tangent spaces \cite{Lee2013}. This means that $\dot{X}$ and $\ddot{X}$ are elements of $\Tang{X}{\Var{X}}$, which we can view in $\mathbb{R}^n$ as an affine linear space attached at $X$. Consequently, $(\deriv{\dot{X}}{X})(\ddot{X})$ should be interpreted as the directional derivative of the tangent vector $\dot{X}\in\Tang{X}{\Var{X}}$ as the base point $X \in \Var{X}$ is infinitesimally moved in the direction of $\ddot{X}\in\Tang{X}{\Var{X}}$. Based on this interpretation, circumventing vector fields, we could define \[ \nabla^2 : \Tang{X}{\Var{X}} \times \Tang{X}{\Var{X}} \to \mathbb{R}^n, \quad (\dot{X}, \ddot{X}) \mapsto \frac{\mathrm{d}}{\mathrm{d} t}\big|_{t=0} \mathrm{P}_{\Tang{\gamma(t)}{\Var{X}}} (\dot{X}), \] where $\gamma(t) \subset \Var{X}$ is a smooth \textit{curve} \cite{Lee2013} realizing $\ddot{X}$, i.e., $\gamma$ is a smooth map from a neighborhood of $0 \in \mathbb{R}$ to $\Var{X}$ with $\gamma(0) = X$ and $\gamma'(0) = \ddot{X}$. Since $\Tang{X}{\Var{X}}$ can be viewed as an affine linear subspace of $\mathbb{R}^n$, we can decompose the latter as $\mathbb{R}^n = \Tang{X}{\Var{X}} \oplus \mathrm{N}_{X}\Var{X}$, where $\mathrm{N}_{X}\Var{X}$ is the normal space of $\Var{X}$ at $X$, i.e., the orthogonal complement of $\Tang{X}{\Var{X}}$.
Projecting $\nabla^2$ to the normal space yields a fundamental object in Riemannian geometry: the \textit{second fundamental form} $\mathit{I\!I}_X$ \cite{Lee1997,ONeill1983,ONeill2001,riemannian_geometry,Petersen}. This is the bilinear map \begin{equation}\label{SFF} \mathit{I\!I}_X : \Tang{X}{\Var X}\times \Tang{X}{\Var{X}} \to \Norm{X}{\Var{X}},\quad (\dot{X}, \ddot{X}) \mapsto \mathrm{P}_{\Norm{X}{\Var{X}}}\left( \nabla^2(\dot{X}, \ddot{X}) \right). \end{equation} If we contract the output of this map with a normal vector $N \in \Norm{X}{\Var{X}}$, we obtain the bilinear form \[ S_N : \Tang{X}{\Var X} \times \Tang{X}{\Var X} \to \mathbb{R}, \quad (\dot{X}, \ddot{X}) \mapsto \langle N, \mathit{I\!I}_X(\dot{X}, \ddot{X}) \rangle. \] Via the inner product on $\Tang{X}{\Var{X}}$, this bilinear form corresponds to a self-adjoint operator $\Tang{X}{\Var{X}}\to \Tang{X}{\Var{X}}$, which we also denote by $S_N$; this operator is the so-called \textit{Weingarten map} or \textit{shape operator}, another classic and well-studied object in Riemannian geometry \cite{Lee1997,ONeill1983,ONeill2001,riemannian_geometry,Petersen}. At a critical point, where $N = A - X \in \Norm{X}{\Var{X}}$, we can interpret \cref{eqn_hessian} in terms of these classic objects, namely as the following linear endomorphism on $\Tang{X}{\Var{X}}$: \begin{equation}\tag{H}\label{eqn_Hess_distance} H_{A,X} = \mathbf{1}_{\Tang{X}{\Var{X}}} - S_{N}, \end{equation} where $\mathbf{1}_{\Tang{X}{\Var{X}}}$ is the identity on ${\Tang{X}{\Var{X}}}$ and $S_N$ is viewed as a linear map $\Tang{X}{\Var{X}} \to \Tang{X}{\Var{X}}$. The map $H_{A,X}$, or its matrix representation, is called the Riemannian Hessian.\footnote{For arbitrary $X$, the Riemannian Hessian is also defined by \cref{eqn_Hess_distance} for $N = \mathrm{P}_{\Norm{X}{\Var{X}}}(A - X)$ \cite{Petersen}.} \subsection{Principal curvatures} The Riemannian Hessian of $d_A$ contains geometric information about the way $\Var{X}$ curves inside of $\mathbb{R}^n$ \cite{Petersen}.
Let $N \in\mathrm{N}_X\Var X$, $\eta = \frac{N}{\Vert N\Vert}$, and $s:=\dim \Var{X}$. The real eigenvalues $\lambda_1, \ldots, \lambda_s$ of the Weingarten map $S_{\eta}$ are called the \emph{principal curvatures} of $\Var X$ in the direction $\eta$. They measure how much~$\Var{X}$ curves at $X$ in the direction $\eta$. If $\lambda_i$ is a principal curvature of $\Var{X}$ at $X$ with associated unit-norm eigenvector $u_i$, then in the plane $P_i = \mathrm{span}(u_i, \eta)$ spanned by $u_i$ and $\eta$, the intersection of the manifold $\Var{X}$ with $P_i$ can be locally approximated to \textit{second} order at $X$ by a segment of an \textit{osculating circle} with center $X + \lambda_i^{-1} \eta$. This circular arc passes through $X$ with derivative $u_i \in \Tang{X}{\Var{X}}$; see \cref{fig_osculating}. With this additional terminology, we obtain the following observation from \cref{eqn_Hess_distance}. \begin{lemma}\label{norm_of_H} Let $N:=A-X \in \Norm{X}{\Var{X}}$. Then, \[ \Vert (H_{A,X})^{-1}\Vert_2 = \max_{1\leq i\leq s} (\vert 1 - \Vert N\Vert \lambda_i|)^{-1}, \] where the norm is the spectral norm and the $\lambda_i$ are the principal curvatures. \end{lemma} \subsection{The second fundamental form as a tensor}\label{sec:SFF_3_tensor} By multilinear algebra \cite{Greub1978}, we can represent the second fundamental form $\mathit{I\!I}_{X}$ from \cref{SFF} by a three-dimensional tensor in $(\Tang{X}{\Var{X}})^*\otimes (\Tang{X}{\Var{X}})^* \otimes \mathrm{N}_{X}\Var{X}$, where $(\,\cdot\,)^*$ denotes the dual. This tensor is symmetric in the first two factors; see \cite[equation (H)]{BV2020} or \cite{Petersen} for more details. The dual space $(\Tang{X}{\Var{X}})^*$ is identified with $\Tang{X}{\Var{X}}$ via the standard Euclidean inner product $\langle\cdot,\cdot\rangle$ on $\mathbb{R}^n$ because $\mathcal X\subset \mathbb R^n$ is an embedded manifold inheriting the Riemannian structure from $\mathbb R^n$. 
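A minimal numerical sanity check of \cref{norm_of_H} (assuming NumPy) uses the unit circle in $\mathbb{R}^2$, whose only principal curvature toward the center is $\lambda = 1$: for a point $A$ inside the circle at distance $\Vert N \Vert$ from its closest point $X$, the lemma predicts that the closest-point map amplifies perturbations by $(1 - \Vert N \Vert)^{-1}$.

```python
import numpy as np

def project_to_circle(a):
    # Closest point on the unit circle.
    return a / np.linalg.norm(a)

t = 0.4                                   # ||N||, distance from A to the circle
A = np.array([1.0 - t, 0.0])              # closest point X = (1, 0)
delta = 1e-8 * np.array([0.0, 1.0])       # tangential perturbation of A
amp = np.linalg.norm(project_to_circle(A + delta) - project_to_circle(A)) \
    / np.linalg.norm(delta)
# Predicted amplification: 1/(1 - 0.4) = 5/3; as t -> 1 (the center), it blows up.
print(amp)
```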
Let~$\vect{s}_i, \vect{t}_i \in \Tang{X}{\Var{X}}$ and $\vect{u}_i \in \Norm{X}{\Var{X}}$ be vectors so that we can write \begin{equation*} \mathit{I\!I}_X = \sum_{i=1}^q \vect{s}_i \otimes \vect{t}_i \otimes \vect{u}_i \end{equation*} for some finite $q$; such an expression exists \cite{Greub1978}. The corresponding bilinear map is then $ \mathit{I\!I}_X( \vect{a}, \vect{b} ) = \mathit{I\!I}_X( \vect{b}, \vect{a} ) = \sum_{i=1}^q \left( \langle \vect{s}_i, \vect{a} \rangle \cdot \langle \vect{t}_i, \vect{b} \rangle \right) \vect{u}_i. $ Let $N \in \Norm{X}{\Var{X}}$ be a normal vector at $X$. The contraction of $\mathit{I\!I}_X$ with $N$ along the third factor is defined by \begin{equation*} N^T \cdot_3 \mathit{I\!I}_X = \sum_{i=1}^q \langle N, \vect{u}_i \rangle\, \vect{s}_i \otimes \vect{t}_i. \end{equation*} In the standard basis of $\mathbb{R}^n$, the tensor $\vect{s}_i \otimes \vect{t}_i$ would be represented by the rank-one matrix~$\vect{s}_i \vect{t}_i^T$, so that the foregoing equation can also be viewed naturally as a $\dim\Var{X} \times \dim\Var{X}$ matrix. Comparing with the definition of the Weingarten map and \cref{eqn_Hess_distance}, we see the latter can also be expressed as \begin{equation}\tag{H'}\label{H_new_form_general_setting} H_{A,X} = \mathbf 1_{\Tang{X}{\Var X}} - N^T \cdot_3 \mathit{I\!I}_X. \end{equation} \section{Sensitivity of low-rank approximation}\label{sec_low_rank_approximation} In low-rank matrix approximation the sensing operator $L$ is given by $M$ being the identity on $\mathbb{R}^{m\times n}$ and $b=0$. In this case, $\Var{R}_r = \Var{S}_r$. Following \cref{def_CN} the condition number of low-rank approximation is \[ \kappa_\mathrm{approximation}(A,Y) = \|H_{A,Y}^{-1} \|_2, \] where $H_{A,Y}$ is the Riemannian Hessian of $d_A:\mathcal M_r \to \mathbb{R},\, Y\mapsto \tfrac{1}{2}\Vert A-Y\Vert^2$ at $Y$. Let $s:=\dim \Var M_r$ and $N=A-Y$.
Since $Y$ minimizes the distance function $d_A$, we have $N\in\mathrm{N}_Y \Var{R}_r$; i.e., $N$ is a normal vector of $\Var{R}_r$ at $Y$. By \cref{norm_of_H}, we have \begin{equation}\label{cn_Mr} \kappa_\mathrm{approximation}(A,Y) = \max_{1\leq i\leq s} (\vert 1 - \Vert N\Vert \lambda_i\vert)^{-1}, \end{equation} where $\lambda_1,\ldots,\lambda_s$ are the principal curvatures of $\Var{R}_r$ at $Y$ in the direction $\eta:=\frac{N}{\Vert N\Vert}$. The principal curvatures (of open submanifolds) of $\Var{M}_r \subset \mathbb{R}^{m \times n}$ can be derived from Amelunxen and B\"urgisser \cite[Proposition 6.3]{AmBu2015} and were also stated by Feppon and Lermusiaux \cite[Theorem 24]{FL2018}. Let $A = \sum_{i=1}^{\min\{m,n\}} \sigma_i \vect{u}_i \vect{v}_i^T$ be the SVD of $A$ with the singular values $\sigma_1\geq \cdots\geq \sigma_{\min\{m,n\}}$ sorted decreasingly, and let the truncated rank-$r$ SVD be $Y =\sum_{i=1}^r\sigma_i \vect{u}_i \vect{v}_i^T \in \Var{R}_r$. The principal curvatures at $Y$ in the normal direction $\eta = \frac{N}{\|N\|}$ are, on the one hand, \begin{equation}\label{pc_of_Mr} c_{(b,i,j)} := \frac{(-1)^b}{\Vert N\Vert} \frac{\sigma_{{r+j}}}{\sigma_{i}}, \quad b=0,1;\; i = 1, \ldots, r; \text{ and } j=1,\ldots, \min\{m,n\}-r, \end{equation} and the other $r(m+n-r)-2(\min\{m,n\}-r)r$ principal curvatures are equal to zero \cite{AmBu2015,FL2018}. We can now prove \cref{main_thm1}. \begin{proof}[Proof of \cref{main_thm1}] We combine \cref{cn_Mr} and \cref{pc_of_Mr} to get \[ \kappa_{\mathrm{approximation}}(A,Y) = \max_{\substack{1\leq i\leq r,\\ 1\leq j\leq \min\{m,n\}-r}} \Big(1 - \frac{\sigma_{{r+j}}}{\sigma_{i}}\Big)^{-1} = \Big(1 - \frac{\sigma_{{r+1}}}{\sigma_{r}}\Big)^{-1}. \] This proves \cref{main_thm1}. \end{proof} \begin{remark} In \cref{rmk1} we mentioned that the analysis of condition numbers also carries over to critical points.
The critical points of the squared distance function $d_A$ for $A = \sum_{i=1}^{\min\{m,n\}} \sigma_i \vect{u}_i \vect{v}_i^T$ are all of the form $Y = \sum_{i\in I} \sigma_i \vect{u}_i \vect{v}_i^T$, where $I$ is a subset of the indices with $|I|=r$. It can be shown that the condition number of low-rank approximation at such a critical point is $\max_{i\in I, j\not\in I} (1 - \tfrac{\sigma_{{j}}}{\sigma_{i}})^{-1}$. \end{remark} \Cref{main_thm1} shows that, if there is a clear gap between the $r$th and $(r+1)$th singular value, then the best rank-$r$ approximation problem is well-conditioned. However, if $\sigma_r \approx \sigma_{r+1}$ then the problem is nearly ill-conditioned. An example makes this effect clear: let $\vect{e}_1, \vect{e}_2\in\mathbb{R}^2$ be the two standard basis vectors. The best rank-$1$ approximation of \[ A = \begin{bmatrix} 1 + \epsilon &0 \\ 0& 1 - \epsilon \end{bmatrix} \;\;\text{is}\;\; Y = (1+\epsilon) \vect{e}_1 \vect{e}_1^T = \begin{bmatrix} 1 &0 \\ 0& 0 \end{bmatrix} + \epsilon \begin{bmatrix} 1 & 0\\ 0& 0 \end{bmatrix}, \] and we have $\kappa_{\mathrm{approximation}}(A,Y) = \frac{1}{2}(1 + \epsilon^{-1})$. Therefore, a large deviation of $Y$ may be expected when perturbing $A$: we perturb $A$ by $\epsilon(\vect{e}_1\vect{e}_2^T + \vect{e}_2\vect{e}_1^T)$, resulting in the matrix~$A' := \left[\begin{smallmatrix} 1 + \epsilon & \epsilon \\ \epsilon & 1 - \epsilon \end{smallmatrix}\right]$.
The best rank-$1$ approximation of the matrix $A'$ is $\tfrac{1+\sqrt{2}\epsilon}{2(2 + \sqrt{2})} \left[\begin{smallmatrix} 1 + \sqrt{2} \\ 1 \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 + \sqrt{2} \\ 1 \end{smallmatrix}\right]^T $, which is equal to \begin{equation*} \begin{bmatrix} \frac{1}{4}(2 + \sqrt{2}) & \frac{1}{2\sqrt{2}} \\[5pt] \frac{1}{2\sqrt{2}} & \frac{1}{2(2+\sqrt{2})} \end{bmatrix} + \epsilon \begin{bmatrix} \frac{\sqrt{2}}{4}(2 + \sqrt{2}) & \frac{1}{2} \\[5pt] \frac{1}{2} & \frac{1}{2(1+\sqrt{2})} \end{bmatrix} \end{equation*} and it bears little resemblance to $Y$. A unit-order change results from a perturbation of size $\sqrt{2}\epsilon$, as could have been anticipated from $\kappa_\mathrm{approximation}(A,Y) \approx \epsilon^{-1}$. \section{Sensitivity of low-rank matrix recovery}\label{sec:recovery} We continue with our discussion of low-rank recovery. Here, $L(Y) = M(Y) + b$ is a sufficiently general sensing operator for which we assume that \cref{ass1} holds, so that \cref{prop_ass1} applies. The point $A\in \mathbb{R}^\ell$ is the input data to the recovery problem, $X=L(Y) \in\Var{S}_r$ is a sensed rank-$r$ matrix approximating $A$, and~$Y \in \Var{R}_r$ is the recoverable rank-$r$ matrix that $L$ maps to $X$. Recall from \cref{def_CN} that the condition number of low-rank matrix recovery is \[ \kappa_\mathrm{recovery}(A,Y) = \| (M|_{\mathrm T_Y\Var{R}_r})^{-1} H_{A,X}^{-1} \|_2, \] where $H_{A,X}$ is the Riemannian Hessian of the squared distance function $d_A : \Var{S}_r \to \mathbb{R}, X \mapsto \frac{1}{2} \| A - X \|^2$ at $X = L(Y)$. This section derives a closed expression for $H_{A,X}$. Unfortunately, we are unable to derive a closed expression for $\kappa_\mathrm{recovery}(A,Y)$. Therefore, we will present a simple and efficient linear algebra algorithm for evaluating $\kappa_\mathrm{recovery}(A,Y)$ in the next section.
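A curvature-free toy instance already illustrates \cref{def_CN}. In the following sketch (assuming NumPy; this is a hypothetical rank-$1$ completion example in the class of coordinate projections from \cref{sec:contributions}) we observe the $\ell = 3 = \dim \Var{M}_1$ entries $(a_{11}, a_{12}, a_{21})$ of a $2\times 2$ matrix, so that $\Var{S}_1$ is locally open in $\mathbb{R}^3$; then $H_{A,X}$ is the identity and $\kappa_\mathrm{recovery} = \Vert (M|_{\Tang{Y}{\Var{M}_1}})^{-1} \Vert_2$, which we compare with a finite-difference estimate of the Jacobian of the explicit recovery map $y_{22} = a_{12} a_{21} / a_{11}$.

```python
import numpy as np

def recover(a11, a12, a21):
    # Exact rank-1 completion of the unobserved entry: y22 = a12*a21/a11.
    return np.array([[a11, a12], [a21, a12 * a21 / a11]])

Y = np.array([[1.0, 2.0], [3.0, 6.0]])     # rank-1 base point
# Orthonormal basis of the tangent space T_Y M_1 = {u a^T + b v^T}.
u = np.array([1.0, 3.0]) / np.sqrt(10); up = np.array([3.0, -1.0]) / np.sqrt(10)
v = np.array([1.0, 2.0]) / np.sqrt(5);  vp = np.array([2.0, -1.0]) / np.sqrt(5)
basis = [np.outer(u, v), np.outer(u, vp), np.outer(up, v)]
# M picks the entries (1,1), (1,2), (2,1); the columns of F are M(E_k).
F = np.array([[E[0, 0], E[0, 1], E[1, 0]] for E in basis]).T
kappa = 1.0 / np.linalg.svd(F, compute_uv=False)[-1]   # ||(M|_T)^{-1}||_2

# Finite-difference spectral norm of the Jacobian of the recovery map.
h, a = 1e-7, np.array([1.0, 2.0, 3.0])
J = np.column_stack([
    (recover(*(a + h * e)) - recover(*a)).ravel() / h
    for e in np.eye(3)])
print(np.linalg.svd(J, compute_uv=False)[0], kappa)    # the two values agree
```

Both computations return $\sqrt{50}$ for this instance, consistent with the flat case of \cref{def_CN}.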
Recall from \cref{H_new_form_general_setting} that the Riemannian Hessian can be expressed in terms of the second fundamental form. Thus, our problem reduces to computing the latter. We can rely on the following lemma, which shows how curvature transforms under affine linear diffeomorphisms. While this is considered an elementary result in differential geometry, we could not locate a suitable reference, so a proof is included in the appendix for self-containedness. \begin{lemma} \label{lemma_L} Consider Riemannian embedded submanifolds $\Var{U}\subset \mathbb{R}^N$ and $\Var{W} \subset \mathbb{R}^\ell$ both of dimension~$s$. Let $L : \mathbb{R}^N \to \mathbb{R}^\ell,\; Y\mapsto M(Y)+b$ be an affine linear map that restricts to a diffeomorphism from $\Var{U}$ to $\Var{W}$. For a fixed $Y\in \Var U$, let $(E_1,\ldots, E_s)$ be a basis of $\Tang{Y}{\Var{U}}$. For each $1\leq i\leq s$ let $F_i = M( E_i )$. Then, $(F_1,\ldots,F_s)$ is a basis of the tangent space $\Tang{X}{\Var{W}}$ at $X=L(Y)$, and we have \[ \mathit{I\!I}_{X}(F_i, F_j) = \mathrm{P}_{\mathrm{N}_{X} \Var{W}}\left( M( \mathit{I\!I}_Y(E_i, E_j) ) \right). \] \end{lemma} This lemma shifts our problem to computing the second fundamental form of (recoverable) rank-$r$ matrices $\Var{R}_r \subset \Var{M}_r$. The latter was computed in \cite[Section 4.5]{AMT2013} and \cite[Proposition 22]{FL2018}. In the next subsection we will evaluate the latter at an orthonormal basis, so a succinct matrix representation is obtained. \subsection{Second fundamental form of rank-$r$ matrices}\label{SFF_rank_r} Let $Y = U \Sigma V^T$ be a compact SVD of $Y \in \Var{R}_r$, such that $\Sigma$ is the diagonal matrix with entries the singular values $\sigma_1\geq \cdots\geq \sigma_r$. 
Then, by the fact that $\Var{R}_r$ is an open submanifold of $\Var{M}_r$ and \cite{HM1994}, the tangent and normal spaces to $\Var{R}_r$ at $Y$ are \begin{equation}\label{tangent_space_M_r} \Tang{Y}{\Var{R}_r} = (U \otimes V) \oplus (U^\perp \otimes V) \oplus (U \otimes V^\perp) \quad\text{ and }\quad \Norm{Y}{\Var{R}_r} = U^\perp \otimes V^\perp, \end{equation} where $U$ and $V$ are conveniently identified with their column spans, $\oplus$ denotes the direct sum of (orthogonal) linear subspaces, and $(\cdot)^\perp$ denotes the orthogonal complement of a subspace. Let the columns of $U$ be $u_1, \ldots, u_r$, and let $v_1, \ldots, v_r$ be the columns of~$V$. Let $u_{r+1}, \ldots, u_m$ be an orthonormal basis of $U^\perp$, and $v_{r+1},\ldots,v_{n}$ one for $V^\perp$. Then, \begin{alignat*}{2} U &\otimes V &&= \operatorname{span}( u_i v_j^T \mid 1 \le i, j \le r ),\\ U^\perp &\otimes V &&= \operatorname{span}( u_i v_j^T \mid 1 \le j \le r < i \le m ), \\ U &\otimes V^\perp &&= \operatorname{span}( u_i v_j^T \mid 1 \le i \le r < j \le n ),\\ U^\perp &\otimes V^\perp &&= \operatorname{span}( u_i v_j^T \mid r < i \le m, r < j\le n ). \end{alignat*} For brevity we define $E_{ij} = u_i v_j^T$ for all $i$ and $j$. We also define the rank-$1$ matrices \[ \phi_{ij} = \begin{cases} 0 & \text{if } E_{ij} \in U\otimes V, \\ 0 & \text{if } E_{ij} \in U\otimes V^\perp,\\ \sigma_j^{-1} u_i e_j^T & \text{if } E_{ij} \in U^\perp \otimes V, \end{cases}\quad\text{and}\quad \psi_{ij} = \begin{cases} v_j e_i^T & \text{if } E_{ij} \in U\otimes V, \\ v_j e_i^T & \text{if } E_{ij} \in U\otimes V^\perp,\\ 0 & \text{if } E_{ij} \in U^\perp \otimes V. \end{cases} \] Then, we have the unique decomposition $E_{ij} = U \psi_{ij}^T + \phi_{ij} (V \Sigma)^T$. We obtain the following formula from \cite[Proposition 22]{FL2018}: \[ \mathit{I\!I}_Y( E_{i,j}, E_{k,l} ) = \mathrm{P}_{\Norm{Y}{\Var{M}_r}}( \phi_{ij}\, \psi_{kl}^T + \phi_{kl} \psi_{ij}^T). 
\] It can be verified by direct computation that the expression simplifies to \begin{align} \label{eqn_sff_compact} \mathit{I\!I}_Y( E_{ij}, E_{kl} ) = \begin{cases} \sigma_j^{-1} \delta_{kj} E_{il} & \text{if } E_{ij} \in U^\perp \otimes V \text{ and } E_{kl} \in U\otimes V^\perp, \\ \sigma_{l}^{-1} \delta_{il} E_{kj} & \text{if } E_{ij} \in U \otimes V^\perp \text{ and } E_{kl} \in U^\perp\otimes V, \\ 0 & \text{otherwise}. \end{cases} \end{align} Herein $\delta_{ab}$ is the Kronecker delta. The fact that $\mathit{I\!I}_Y$ restricted to $U\otimes V$ is zero is actually a priori clear from geometric considerations: the second fundamental form measures the curvature of $\Var{R}_r$ inside of $\mathbb{R}^{m\times n}$ and if $Y = U \Sigma V^T$, there is a whole linear space contained in $\Var{R}_r$ passing through $Y$, namely $U \otimes V$. The part of the second fundamental form arising from covariant differentiation of the basis vectors of $U \otimes V$ thus vanishes completely. Since the $E_{ij}$ form an orthonormal basis, it can be deduced from \cref{eqn_sff_compact} that the following is the second fundamental form of $\Var{M}_r$ at $Y$ viewed as element of the tensor space $(\Tang{Y}{\Var{R}_r})^* \otimes (\Tang{Y}{\Var{R}_r})^* \otimes \Norm{Y}{\Var{R}_r}$: \begin{align} \nonumber \mathit{I\!I}_Y &= \sum_{i=r+1}^m \sum_{j=1}^r \sum_{k=1}^r \sum_{l=r+1}^n E_{ij}^T \otimes E_{kl}^T \otimes \Big( \frac{1}{\sigma_j} \delta_{kj} E_{il} \Big) \\ \nonumber &\hspace{4.25cm} + \sum_{i=1}^r \sum_{j=r+1}^n \sum_{k=r+1}^m \sum_{l=1}^r E_{ij}^T \otimes E_{kl}^T \otimes \Big( \frac{1}{\sigma_l} \delta_{il} E_{kj} \Big),\\ \nonumber &= \sum_{i=r+1}^m \sum_{l=r+1}^n \sum_{k=1}^r \frac{1}{\sigma_k} E_{ik}^T \otimes E_{kl}^T \otimes E_{il} + \sum_{k=r+1}^m \sum_{j=r+1}^n \sum_{l=1}^r \frac{1}{\sigma_l} E_{lj}^T \otimes E_{kl}^T \otimes E_{kj}, \\ \label{eqn_stuff_complete} &= \sum_{i=r+1}^m \sum_{j = r+1}^n \sum_{k=1}^r \frac{1}{\sigma_k} (E_{ik}^T \otimes E_{kj}^T +
E_{kj}^T \otimes E_{ik}^T) \otimes E_{ij}; \end{align} see also the discussion in \cref{sec:SFF_3_tensor}. Note that since our Riemannian metric is the standard Euclidean inner product $(A,B)\mapsto \mathrm{Trace}(A^TB)$ on $\mathbb{R}^{m \times n}$, dualization consists of transposition. \subsection{Second fundamental form of sensed rank-$r$ matrices}\label{SFF_incomplete_rank_r} We can now compute the second fundamental form of the sensed manifold $\Var{S}_r$. Let us denote $F_{ij} = M(E_{ij})$. These are the images of the basis vectors of $\Tang{Y}{\Var{R}_r}$ under the derivative $\deriv{L}{Y} : \dot{Y} \mapsto M(\dot{Y})$. As before, let $X=L(Y) \in\Var{S}_r$. We conclude from \cref{lemma_L,eqn_stuff_complete} that the second fundamental form $\mathit{I\!I}_X$, viewed as an element of $(\Tang{X}{\Var{S}_r})^* \otimes (\Tang{X}{\Var{S}_r})^* \otimes \Norm{X}{\Var{S}_r}$, is \begin{equation}\label{SFF_X} \mathit{I\!I}_{X} = \sum_{i=r+1}^m \sum_{j = r+1}^n \sum_{k=1}^r \frac{1}{\sigma_k} \left( F_{ik}^\dagger \otimes F_{kj}^\dagger + F_{kj}^\dagger \otimes F_{ik}^\dagger \right) \otimes \mathrm{P}_{\Norm{X}{\Var{S}_r}}(M(E_{ij})), \end{equation} where $F_{ij}^\dagger$ is the dual basis vector of $F_{ij}$; that is, $\langle F_{ij}^\dagger, F_{kl} \rangle = \delta_{ik} \delta_{jl}$.\footnote{It is customary to denote the dual basis by $F_{ij}^*$. However, the $F_{ij}$ are in general not orthonormal, so that taking the dual of $F_{ij}$ with respect to an orthonormal basis of $\mathbb R^\ell$ does not result in the dual basis vector $F_{ij}^\dagger$. We chose $\dagger$ to emphasize this distinction.} Recall from~\cref{H_new_form_general_setting} that the formula for the Riemannian Hessian is $H_{A,X} = \mathbf 1_{\Tang{X}{\Var X}} - N^T \cdot_3 \mathit{I\!I}_X$, where $N=A - X$.
The second term in this formula is thus \begin{equation}\label{SFF_X_2} N^T \cdot_3 \mathit{I\!I}_X = \sum_{i=r+1}^m \sum_{j = r+1}^n \sum_{k=1}^r \frac{1}{\sigma_k} \langle N, M(E_{ij}) \rangle \, \left( F_{ik}^\dagger \otimes (F_{kj}^\dagger)^T + F_{kj}^\dagger \otimes (F_{ik}^\dagger)^T \right). \end{equation} As discussed in \cref{sec:SFF_3_tensor}, this expression can be represented naturally by a matrix in $(\Tang{X}{\Var{S}_r})^* \otimes \Tang{X}{\Var{S}_r}$. The transposition on the right-hand side originated from taking duals, since $N^T \cdot_3 \mathit{I\!I}_X$ can be seen as a bilinear map $\Tang{X}{\Var{S}_r} \times \Tang{X}{\Var{S}_r} \to \mathbb{R}$. \Cref{SFF_X_2} specifies in abstract terms the contraction of the second fundamental form by $N$. We can use this to compute the condition number $\kappa_\mathrm{recovery}(A,Y)$. Indeed, $\kappa_\mathrm{recovery}(A,Y)$ is the spectral norm of the inverse of~$H_{A,X} \circ M$. Unfortunately, we were not able to determine a handy expression for the inverse of $H_{A,X}$. For this reason, we explain in the next section how the condition number can be computed using standard linear algebra software by expressing the operator in coordinates. \section{An algorithm for computing the condition number}\label{alg_matrix_recovery} The spectral norm in the definition \cref{def_CN} of $\kappa_\mathrm{recovery}(A,Y)$ can be computed efficiently in coordinates if we choose orthonormal bases for the codomain and domain, respectively, of the operator $H_{A,X}M$. In such bases, the spectral norm coincides with the $2$-norm of the coordinate matrix by classic linear algebra. The inverse of the smallest singular value of this matrix representation of $H_{A,X}M$ is then the condition number $\kappa_\mathrm{recovery}(A,Y)$. \subsection{Determining the dual basis} First, we express the dual basis $F_{ij}^\dagger \in (\Tang{X}{\Var{S}_r})^*$ in the standard basis of $(\mathbb{R}^\ell)^*$.
We assume that $M$ computes the coordinates in the standard basis $(e_1, \ldots, e_\ell)$ of $\mathbb{R}^\ell$. Consequently, the basis vectors $F_{ij}$ are given in these coordinates by \( F_{ij} = M( E_{ij} ). \) Let $F = [F_{ij}]$ be the $\ell \times s$ matrix formed by placing the $F_{ij}$'s as column vectors, where $s = \dim\Var{R}_r = (m+n-r)r$. The dual basis of $F$, expressed in coordinates with respect to the standard basis $(e_1^T, \ldots, e_\ell^T)$ of $(\mathbb{R}^\ell)^*$, is then given by the rows of the Moore--Penrose pseudoinverse of $F$; indeed, $F^\dagger F = I_{s}$ so the rows $F_{ij}^\dagger$ are the dual basis vectors. \subsection{Matrix representation of the Weingarten map} From \cref{SFF_X_2}, we can now conclude that the matrix of the Weingarten map $S_N=N^T \cdot_3 \mathit{I\!I}_X$ relative to the standard basis $(e_1,\ldots,e_\ell)$ of $\mathbb{R}^\ell$ and $(e_1^T,\ldots,e_\ell^T)$ of $(\mathbb{R}^\ell)^*$ is \[ S_N = (F^\dagger)^T \begin{bmatrix} 0_{r^2} & & \\[0.2em] & 0_{(m-r)r} & V \\[0.3em] & V^T & 0_{(n-r)r} \end{bmatrix} F^\dagger, \] where $0_{a}$ denotes an $a \times a$ matrix of zeros and $V$ is defined in the next paragraph. Consequently, the middle matrix is square of size $r^2 + (m-r)r + (n-r)r = (m+n-r)r = s= \dim \mathcal M_r$. We see from \cref{SFF_X_2} that the foregoing matrix $V \in \mathbb{R}^{(m-r)r \times (n-r)r}$ is indexed by a multi-index $(I,J) = ((ij),(kl))$ with $1 \le j \le r < i \le m$ and $1 \le k \le r < l \le n$. Its entries are: \begin{align} \label{eqn_compute_v} V = \begin{bmatrix} \frac{\delta_{jk}}{\sigma_j} \langle N, M(E_{il}) \rangle \end{bmatrix}_{(ij),(kl)}; \end{align} compare this with the start of the equations that led to \cref{eqn_stuff_complete}. \begin{remark} Note that for fixed $(i, l)$, the submatrix of $V$ formed by $1 \le j, k \le r$ is a multiple of the identity.
After a suitable symmetric permutation of rows and columns, we can thus write $V = \Sigma^{-1} \otimes Z$, where $\Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_r)$ and $Z = [\langle N, M(u_{i} v_l^T)\rangle ]_{\substack{r+1 \le i \le m,\\ r+1 \le l \le n}}$. This observation can be exploited to further simplify computations with $V$. Doing this implies a particular ordering of the basis vectors in~$(F^\dagger)^T$, which needs to be respected when computing $R$ below. As the computational gains associated with this observation do not lead to an improvement of the asymptotic running time, we decided not to exploit it in the discussion below. \end{remark} Consider the $QR$ factorization $F = QR$. As the columns of $F \in \mathbb{R}^{\ell \times s}$ form a basis of the tangent space (recall that $\ell \ge s$), $R$ is an $s \times s$ invertible matrix. It follows that $F^\dagger = R^{-1} Q^T$. We can thus write \[ S_N = Q R^{-T} \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & V \\ 0 & V^T & 0 \end{bmatrix} R^{-1} Q^T. \] Partitioning $R$ conformally with the block structure of the middle matrix, we have \[ R = \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ 0 & R_{22} & R_{23} \\ 0 & 0 & R_{33} \end{bmatrix}, \quad\text{and}\quad R^{-1} = \begin{bmatrix} R_{11}^{-1} & T & T' \\ 0 & R_{22}^{-1} & -R_{22}^{-1} R_{23} R_{33}^{-1} \\ 0 & 0 & R_{33}^{-1} \end{bmatrix}, \] where $T, T'$ are unspecified matrices. Consequently, the foregoing expression of $S_N$ can be simplified to \[ S_N = Q \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & R_{22}^{-T} V R_{33}^{-1} \\ 0 & R_{33}^{-T} V^T R_{22}^{-1} & - \mathrm{Sym}(R_{33}^{-T} V^T R_{22}^{-1} R_{23} R_{33}^{-1} ) \end{bmatrix} Q^T, \] where $\mathrm{Sym}(Z) = Z + Z^T$ symmetrizes its input. \subsection{Computing the condition number} Recall that the columns of $Q$ form an orthonormal basis of $\Tang{X}{\Var{S}_r}$.
Therefore, the identity $\mathbf{1}_{\Tang{X}{\mathcal S_r}}$ is represented with respect to the standard basis on $\mathbb{R}^\ell$ (and its dual) as $Q Q^T$. By definition, $F = QR$ is the matrix of the map $M : \Tang{Y}{\Var{R}_r} \to \Tang{X}{\Var{S}_r}$, which sends $E_{ij}$ to $F_{ij}$, represented with respect to the orthonormal basis $E_{ij}^T \in (\mathbb{R}^{m\times n})^*$ and the standard basis on $\mathbb{R}^\ell$. Putting all of this together, and using \cref{eqn_Hess_distance}, we find that \begin{align} \label{eqn_compute_tn} H_{A,X} M = Q \begin{bmatrix} \mathbf 1_{r^2} & \\ & \mathbf 1_{(m-r)r} & - R_{22}^{-T} V R_{33}^{-1} \\ & -R_{33}^{-T} V^T R_{22}^{-1} & \mathbf{1}_{(n-r)r} + \mathrm{Sym}(R_{33}^{-T} V^T R_{22}^{-1} R_{23} R_{33}^{-1} ) \end{bmatrix} R, \end{align} where $\mathbf 1_a$ is the $a\times a$ identity matrix. Let us write $T_N$ for the matrix in the middle in \cref{eqn_compute_tn}, so that $H_{A,X}M = QT_NR$. This is a matrix representation relative to the standard orthonormal basis of $\mathbb{R}^\ell$ and the orthonormal basis $E_{ij}^T$ on~$(\mathbb{R}^{m\times n})^*.$ It follows that \[ \kappa_{\mathrm{recovery}}(A,Y) = \frac{1}{\sigma_{s}( T_N R )}, \] where $\sigma_s(T_N R)$ is the smallest singular value of the $s \times s$ matrix $T_N R$ and where, as before, $s = \dim \Var{M}_r = (m+n-r)r$. We can ignore $Q$ because it has orthonormal columns. \subsection{The algorithm} We can now put all components together. We assume that a rank-$r$ matrix $Y = U \Sigma V^T \in \mathbb{R}^{m \times n}$ is given (factored or not). We assume without loss of generality that $m \ge n$. We are also given $X = L(Y)$, and $N = A - X$ is assumed to lie (approximately) in the normal space $\Norm{X}{\Var{S}_r}$. As before, $s = \dim \Var{R}_r$. The numerical algorithm we propose for the condition number $\kappa_\mathrm{recovery}(A,Y)$ proceeds as follows: \begin{enumerate} \item[S1.]
Compute orthonormal bases $U^\perp \in \mathbb{R}^{m \times (m-r)}$ and $V^\perp \in \mathbb{R}^{n \times (n-r)}$ for the orthogonal complements of $U$ and $V$ respectively via a full SVD of $Y$. \item[S2.] Construct the $\ell \times s$ change of basis matrix $F = [M(E_{ij})]_{i \le r \text{ or } j \le r}$ as well as the $\ell \times (mn - s)$ matrix $G = [M(E_{ij})]_{i, j > r}$. \item[S3.] Compute the QR decomposition $F = Q R$. \item[S4.] Ensure that $N = A - X$ is numerically orthogonal to the tangent space $\Tang{X}{\Var{S}_r}$ by computing $N \leftarrow N - Q (Q^T N)$ twice. \item[S5.] Construct the matrix $V$ by the formula \cref{eqn_compute_v} and the precomputed $G$. \item[S6.] Compute the matrix $T_N$ following \cref{eqn_compute_tn}, and then compute $Z = T_N R$. \item[S7.] Compute the smallest singular value $\sigma_s(Z)$ of $Z$. \item[S8.] Output $\kappa_\mathrm{recovery}(A,Y) = \sigma_s(Z)^{-1}$. \end{enumerate} The cost of computing the condition number with the foregoing algorithm depends on the cost of applying the linear part $M$ of the sensing operator to the basis vectors $E_{ij} = u_i v_j^T$. Let us denote this maximal cost by $C_M$. The total cost is \begin{align*} &\underbrace{ m^3}_{S1.} + \underbrace{mn C_M}_{S2.} + \underbrace{\ell s^2}_{S3.} + \underbrace{\ell s}_{S4.} + \underbrace{\ell (mn - s) r}_{S5.} + \underbrace{s^3}_{S6.} + \underbrace{s^3}_{S7.} \\[0.3em] = \; &\mathcal{O}( mn C_M + \ell s^2 ), \end{align*} where in the last step we used $\ell > s = (m+n-r)r$ and $(mn-s)r < s^2$. For practical sensing operators with $\ell = \phi s$ and $\phi > 1$ a small constant, this usually means the cost is dominated by the cost for computing the QR-factorization of the change-of-basis matrix $F$. A general sensing operator $L : \mathbb{R}^{m\times n} \to \mathbb{R}^\ell$ has cost $C_M = mn\ell$. As we have $\ell > (m+n-r)r$, this implies that the overall cost for computing the condition number would be a rather impressive $m^3 n^2 r$.
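The steps S1--S8 above can be sketched in a few dozen lines of NumPy. The sketch below uses our own variable names, assumes the linear part is given as a dense matrix `Mmat` acting on the row-major vectorization of $X$, and uses the algebraically equivalent compact form $T_N = \mathbf 1_s - R^{-T} B R^{-1}$ (with $B$ the block matrix holding $V$) in place of the explicit blocks of \cref{eqn_compute_tn}; it also cross-checks the factored expression $T_N R$ against the direct representation $Q^T\big((QQ^T - S_N)F\big)$ of $H_{A,X}M$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 6, 5, 2
s = (m + n - r) * r          # dimension of the rank-r manifold
ell = s + 6                  # number of measurements (oversampled)

# Example data: a rank-r matrix Y, a dense sensing matrix Mmat, an input A.
Y = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
Mmat = rng.standard_normal((ell, m * n))
M = lambda X: Mmat @ X.reshape(-1)          # linear part, acting on vec(X)
A = M(Y) + 0.1 * rng.standard_normal(ell)

# S1: full SVD gives U, V together with the orthogonal complements.
U, sig, Vt = np.linalg.svd(Y)
V = Vt.T

# S2: change-of-basis matrix F for the tangent space, ordered in the
# three blocks U (x) V, U^perp (x) V, U (x) V^perp used in the text.
blk2 = [(i, j) for i in range(r, m) for j in range(r)]
blk3 = [(i, j) for i in range(r) for j in range(r, n)]
tangent = [(i, j) for i in range(r) for j in range(r)] + blk2 + blk3
F = np.column_stack([M(np.outer(U[:, i], V[:, j])) for i, j in tangent])

# S3: QR factorization.
Q, R = np.linalg.qr(F)

# S4: project N = A - X onto the normal space (applied twice for stability).
N = A - M(Y)
for _ in range(2):
    N = N - Q @ (Q.T @ N)

# S5: the Weingarten block V (called W here to avoid clashing with the SVD factor).
W = np.zeros((len(blk2), len(blk3)))
for row, (i, j) in enumerate(blk2):
    for col, (k, l) in enumerate(blk3):
        if j == k:
            W[row, col] = (N @ M(np.outer(U[:, i], V[:, l]))) / sig[j]

# S6: Z = T_N R, via the equivalent compact form T_N = I_s - R^{-T} B R^{-1}.
B = np.zeros((s, s))
a, b = r * r, r * r + len(blk2)
B[a:b, b:], B[b:, a:b] = W, W.T
Rinv = np.linalg.inv(R)
Z = (np.eye(s) - Rinv.T @ B @ Rinv) @ R

# S7/S8: condition number = 1 / smallest singular value of T_N R.
kappa = 1.0 / np.linalg.svd(Z, compute_uv=False)[-1]

# Cross-check against the direct representation H M = (Q Q^T - S_N) F.
Fp = np.linalg.pinv(F)
direct = Q.T @ ((Q @ Q.T - Fp.T @ B @ Fp) @ F)
assert np.allclose(direct, Z)
```

The cross-check at the end verifies the $R$-factorization algebra of the preceding subsections on a random instance; it is not a proof of correctness of the block placement itself.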
Fortunately, many sensing operators are structured. Consider, for example, the structured sensing operator discussed after \cref{ass1} \begin{align} \label{eqn_structured_sensing} L(X) = \operatorname{diag}(B^T X C) + \vect{b} = (B \odot C)^T \vecc{X} + \vect{b}, \end{align} which is defined by the $m \times \ell$ matrix $B$, the $n \times \ell$ matrix $C$ and a vector $\vect{b} \in \mathbb{R}^\ell$. In the foregoing, $\odot$ is the (columnwise) Khatri--Rao product of its arguments. The derivative of $L$ is the map $\dot{X} \mapsto \operatorname{diag}(B^T \dot{X} C)$. Hence, if $\dot{X} = u_i v_j^T$, we see that the derivative can be applied efficiently as $(B^T u_i) \circledast (v_j^T C)$, where $\circledast$ is the Hadamard or elementwise product. The computational complexity is only $\ell(m+n+1)$ in this case. With such a sensing operator, the condition number can be computed in~$\mathcal{O}(s^3)$ operations. In conclusion, we have proved the following result. \begin{proposition}\label{prop_complexity} Let the sensing operator be as in \cref{eqn_structured_sensing} and $\ell = \phi s$. Then, the condition number $\kappa_\mathrm{recovery}(A,Y)$ where $A \in \mathbb{R}^\ell$ and $Y \in \Var{R}_r \subset \mathbb{R}^{m\times n}$ can be computed in $\mathcal{O}(\phi s^3)$ operations, where $s = \dim \Var{M}_r = (m+n-r)r$. \end{proposition} This complexity is cubic in the problem size $s = \dim\Var{M}_r$. This means that computing the solution's condition number is as expensive as one step of a Riemannian Newton method for solving the recovery problem. An example of such a structured sensing operator appears in the Netflix problem from the introduction. Herein, the sensing operator $L$ selects $\ell$ coordinates $(i_k,j_k)$ of $\mathbb{R}^{m\times n}$ and the other elements are unknown. This can be expressed as in \cref{eqn_structured_sensing} by taking $\vect{b}=0$, $B = [e_{i_k}]_k$ and $C = [e_{j_k}]_k$.
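The identity $\operatorname{diag}\big(B^T (u_i v_j^T) C\big) = (B^T u_i) \circledast (C^T v_j)$ underlying the fast application can be sanity-checked numerically; a minimal sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, ell = 7, 5, 12
B = rng.standard_normal((m, ell))
C = rng.standard_normal((n, ell))
u = rng.standard_normal(m)
v = rng.standard_normal(n)

# Applying the derivative of L to a rank-one direction u v^T:
slow = np.diag(B.T @ np.outer(u, v) @ C)   # O(ell * m * n) operations
fast = (B.T @ u) * (C.T @ v)               # O(ell * (m + n)) operations
assert np.allclose(slow, fast)
```

Each entry of the slow expression is $b_k^T (u v^T) c_k = (b_k^T u)(c_k^T v)$, which is exactly the $k$th entry of the Hadamard product.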
\section{Numerical experiment}\label{sec:experiments} We present a numerical experiment investigating the condition number of low-rank matrix recovery. The experiment was performed on a computer system running Ubuntu 18.04.5 LTS and comprising a quad-core Intel Core i7-4770K CPU (3.5GHz clockspeed) with 32GB main memory. Our Julia implementation, including some experiments, is available from the git repository at \begin{quote} \url{https://gitlab.kuleuven.be/u0072863/MatrixRecoverySensitivity}. \end{quote} We investigate the sensitivity of low-rank matrix recovery where the sensing operator is a random low-rank sensing operator. The $i$th measurement of the sensing operator $L : \mathbb{R}^{m \times n} \to \mathbb{R}^\ell$ computes $L_i(Y) = \mathbf{v}_i^T Y \mathbf{w}_i$, where $\mathbf{v}_i \in \mathbb{R}^m$ and $\vect{w}_i \in \mathbb{R}^n$ are vectors whose elements are drawn i.i.d.~from a standard Gaussian distribution. We investigate the influence of the number of measurements $$\ell = \varphi \dim \Var{M}_r = \varphi (m+n-r)r.$$ Here, $\varphi$ is the \textit{oversampling rate}: $\varphi=1$ suffices for finite recoverability. We also investigate the influence of the relative distance $t$ of the input matrix \[ A_t = X + t\frac{\|X\|}{\Vert N\Vert}\cdot N \] from the sensed input manifold $\Var{S}_r$. Herein, $X=L(Y) \in \mathbb{R}^\ell$ is the image under~$L$ of a randomly chosen rank-$r$ matrix $Y=G_1 G_2^T$, where $G_1 \in \mathbb{R}^{m \times r}$ and $G_2 \in \mathbb{R}^{n \times r}$ are random Gaussian matrices, and $N$ is a random normal vector at $X$. The normal vector $N$ is chosen as follows: we sample $\eta$ as a random Gaussian vector in $\mathbb R^\ell$ and orthogonally project onto the normal space so that $N=\mathrm{P}_{\mathrm{N}_{X}\Var S_r}(\eta)$.
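The construction of the inputs $A_t$ can be sketched as follows (our own variable names; the tangent-space basis is built by mapping the tangent directions $u_i v_j^T$ through $L$, as in the algorithm section):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r, t = 8, 6, 2, 0.3
s = (m + n - r) * r
ell = 2 * s   # oversampling rate phi = 2

# Random low-rank sensing operator: L_i(Y) = v_i^T Y w_i.
Vs = rng.standard_normal((ell, m))
Ws = rng.standard_normal((ell, n))
L = lambda Y: np.einsum('im,mn,in->i', Vs, Y, Ws)

# Random rank-r matrix Y and its image X = L(Y).
Y = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
X = L(Y)

# Orthonormal basis Q of the tangent space T_X S_r = M(T_Y R_r).
U, _, Vt = np.linalg.svd(Y)
tangent = [(i, j) for i in range(m) for j in range(n) if i < r or j < r]
F = np.column_stack([L(np.outer(U[:, i], Vt[j])) for i, j in tangent])
Q, _ = np.linalg.qr(F)

# Normal vector: project a Gaussian vector onto the normal space.
eta = rng.standard_normal(ell)
N = eta - Q @ (Q.T @ eta)

# Input at relative distance t from the sensed manifold.
A_t = X + t * np.linalg.norm(X) / np.linalg.norm(N) * N
```

By construction, $\|A_t - X\| = |t|\,\|X\|$, so $t$ measures the distance to $\Var{S}_r$ relative to the size of $X$.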
\begin{figure}[t] \begin{center} \includegraphics[height=6.625cm]{./CompressedSensingHeatmap.pdf} \end{center} \caption{The base-$10$ logarithm of the condition number $\kappa_\textrm{recovery}$ for various combinations of the oversampling factor $\varphi$ and the signed distance $t$.} \label{exp_LR3} \end{figure} In our experiment, we took $(m,n,r)=(50,40,10)$. For $t$ we took $299$ linearly spaced samples between $-1$ and $1$, and for $\varphi$ we took $150$ linearly spaced samples between $1$ and $10$. The corresponding number of measurements $\ell$ was the integer part of $\varphi \dim \Var{M}_r$. Note that all $\ell_{\max} = 10\dim\Var{M}_r$ vectors $\vect{v}_i$ and $\vect{w}_i$ are generated beforehand and we always use the first $\ell$ measurements for a particular $\varphi$. We are thus only adding measurements as $\varphi$ is increased. The base-$10$ logarithm of the condition number of low-rank recovery at $(A_t,Y)$ is visualized in \cref{exp_LR3}. By considering a vertical column in the figure, we can see the effect of adding additional measurements on the condition number. A phase transition can be made out in the figure. The very dark area in the figure corresponds to the cases where $Y$ is a local minimizer of the distance to $A_t$. In the purple--red area, on the other hand, $Y$ is no longer a local minimizer and the condition numbers can be significantly higher; anywhere from around $10$ to $\infty$. The key feature \cref{exp_LR3} demonstrates is that some amount of oversampling $\varphi>1$ in the measurements is necessary for very well-conditioned local minimizers, i.e., $\kappa_\text{recovery}(A_t,Y) \le 1$, especially when the input matrix~$A_t$ does not lie on the manifold of sensed matrices $\Var{S}_r$, i.e., $|t|>0$. In many practical applications, such as the Netflix problem, the latter is the case, because it is only \textit{assumed} that the output $Y$ can be well-approximated by a low-rank matrix on $\Var{R}_r$.
Consequently, the sensed matrix is not expected to lie on the sensed manifold $\Var{S}_r = L(\Var{R}_r)$ either.
https://arxiv.org/abs/math/0702301
Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting
The problem of recovering the sparsity pattern of a fixed but unknown vector $\beta^* \in \real^p$ based on a set of $n$ noisy observations arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. Of interest are conditions on the model dimension $p$, the sparsity index $s$ (number of non-zero entries in $\beta^*$), and the number of observations $n$ that are necessary and/or sufficient to ensure asymptotically perfect recovery of the sparsity pattern. This paper focuses on the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model based on measurement vectors drawn from the standard Gaussian ensemble, we derive both a set of sufficient conditions for asymptotically perfect recovery using the optimal decoder, as well as a set of necessary conditions that any decoder, regardless of its computational complexity, must satisfy for perfect recovery. This analysis of optimal decoding limits complements our previous work (ARXIV:math.ST/0605740) on sharp thresholds for sparsity recovery using the Lasso ($\ell_1$-constrained quadratic programming) with Gaussian measurement ensembles.
\section{Introduction} Suppose that we are given a set of $\ensuremath{n}$ observations of a fixed but unknown vector $\ensuremath{\beta^*} \in \real^\mdim$. In a variety of settings, it is known \emph{a priori} that the vector $\ensuremath{\beta^*}$ is sparse, meaning that its support set $\ensuremath{S}$---corresponding to those indices $i$ for which $\ensuremath{\beta^*}_i$ is non-zero---is relatively small, say with \mbox{size $|\ensuremath{S}| =: \, \ensuremath{s} \ll \mdim$}. Sparsity recovery refers to the problem of correctly estimating the support set $\ensuremath{S}$ based on a set of noisy observations. This sparsity recovery problem is of broad interest, arising in various areas, including subset selection in regression~\cite{Miller90}, structure estimation in graphical models~\cite{Meinshausen06}, sparse approximation~\cite{Devore93,Natarajan95}, signal denoising~\cite{Chen98}, and compressive sensing~\cite{Donoho06,CandesTao05}. A great deal of work over the past few years has focused on the performance of computationally tractable methods, many based on $\ell_1$ or other convex relaxations, both for recovering the exact sparsity pattern as well as related problems in sparse approximation. We provide a brief overview of those parts of this extensive literature most relevant to our work in Section~\ref{SecRelatedWork} below. Of equal interest and complementary in nature, however, are the information-theoretic limits associated with the performance of \emph{any} procedure for sparsity recovery. Such understanding of fundamental limitations is crucial in assessing the behavior of computationally tractable methods. In particular, there is little point in proposing novel methods for sparsity recovery, possibly with higher computational complexity, if currently extant and computationally tractable methods achieve the information-theoretic limits. 
On the other hand, an information-theoretic analysis can reveal where there currently exists a gap between the performance of computationally tractable methods, and the fundamental limits. Indeed, the information-theoretic analysis of this paper makes contributions of both types. With this motivation in mind, the focus of this paper is on the information-theoretic limitations of sparsity recovery. In particular, our analysis focuses on the noisy and high-dimensional setting, meaning that the observations are contaminated by noise, and all three problem parameters---the \emph{number of observations} $\ensuremath{n}$, the \emph{model dimension} $\mdim$, and the \emph{sparsity index} $\ensuremath{s}$, defined below---may tend to infinity. Our main results, stated more precisely in Section~\ref{SecMainResults}, are necessary and sufficient conditions on the triplet $(\ensuremath{n}, \mdim, \ensuremath{s})$ for exact recovery. In particular, given noisy linear observations based on measurement vectors drawn from the standard Gaussian ensemble, we derive both a set of sufficient conditions for asymptotically perfect recovery using the optimal decoder, as well as a set of necessary conditions that any decoder must satisfy for perfect recovery. The analysis given here complements our earlier paper~\cite{Wainwright06a_aller} that established precise thresholds on the success/failure of the Lasso (i.e., $\ell_1$-constrained quadratic programming) for sparsity recovery. The remainder of this paper is organized as follows. In Section~\ref{SecRelatedWork}, we provide a more precise formulation of the problem, and a brief discussion of past work, whereas Section~\ref{SecMainResults} provides a precise statement of our main results, and a discussion of their consequences. Section~\ref{SecAnalysis} and the appendices are devoted to the proofs of our main results, and we conclude in Section~\ref{SecDiscussion} with a discussion of open directions. 
\subsection{Problem formulation and past work} \label{SecRelatedWork} We begin with a more precise formulation of the problem, as well as a discussion of previous work, with emphasis on that most closely related to the results in this paper. Let $\ensuremath{\beta^*} \in \real^\mdim$ be a fixed but unknown vector; we refer to the ambient dimension $\mdim$ as the \emph{model dimension}. Define the support set of $\ensuremath{\beta^*}$ as \begin{eqnarray} \ensuremath{S} & \defn & \{ i \in \{1, \ldots, \ensuremath{p} \} \; \mid \; \ensuremath{\beta^*}_i \neq 0 \}. \end{eqnarray} We refer to its size $\ensuremath{s} \defn |\ensuremath{S}|$ as the \emph{sparsity index}. Finally, suppose that we are given a set of $\ensuremath{n}$ observations, of the form \begin{eqnarray} \label{EqnLinearObs} \Ysca_i & = & \ensuremath{x}_i^T \ensuremath{\beta^*} + \Wsca_i, \qquad i = 1, \ldots, \ensuremath{n} \end{eqnarray} where each $\ensuremath{x}_i \in \real^\mdim$ is a measurement vector, and $\Wsca_i \sim N(0, \sigma^2)$ is additive Gaussian noise. Of interest are conditions on the triplet $(\ensuremath{n}, \mdim, \ensuremath{s})$ under which a given method either succeeds or fails in recovering the sparsity pattern $\ensuremath{S}$. \myparagraph{Observation models} The linear observation model~\eqref{EqnLinearObs} can be studied in either its noiseless variant ($\sigma^2 = 0$), or the noisy setting ($\sigma^2 > 0$); this paper focuses exclusively on the noisy setting. In addition, previous work has addressed both deterministic families and random ensembles of measurement vectors $\{\ensuremath{x}_i \}_{i=1}^\ensuremath{n}$. The analysis in this paper is based on the \emph{standard Gaussian measurement ensemble}, in which each measurement vector $\ensuremath{x}_i$ is drawn from the zero-mean isotropic Gaussian distribution $N(0, I_{\mdim \times \mdim})$.
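A minimal simulation of the observation model~\eqref{EqnLinearObs} with the standard Gaussian measurement ensemble (variable names are ours; the nonzero entries are drawn bounded away from zero for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
p, s, n, sigma = 100, 5, 40, 1.0

# Sparse vector beta* with support S of size s; nonzero entries
# have magnitude at least 0.5 (illustrative choice).
S = rng.choice(p, size=s, replace=False)
beta_star = np.zeros(p)
beta_star[S] = rng.uniform(0.5, 1.5, size=s) * rng.choice([-1, 1], size=s)

# Measurement vectors x_i ~ N(0, I_p) stacked as rows; Gaussian noise.
X = rng.standard_normal((n, p))
w = sigma * rng.standard_normal(n)
y = X @ beta_star + w
```

Each entry of `y` is one observation $Y_i = x_i^T \beta^* + W_i$.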
\myparagraph{Error metrics} Consider some method that generates the vector $\estim{\beta} \in \real^\mdim$ as an estimate of the truth $\ensuremath{\beta^*}$. There are various distinct criteria for assessing how close the estimate is to the truth, including \bit \item various $\ell_p$ norms $\Exs \|\estim{\beta} - \ensuremath{\beta^*} \|_p$, especially $\ell_2$ and $\ell_1$, or \item some measurement of predictive power (e.g., $\Exs[\|Y_i - \estim{Y}_i \|_2^2]$, where $\estim{Y}_i$ is the estimate based on $\estim{\beta}$). \eit Given the abundance of recent results on sparse approximation (not all of which are mutually comparable), it is particularly important to specify up front the choice of error metric. In this paper, we focus exclusively on the sparsity recovery problem, for which the appropriate error metric is simply the $0-1$ loss associated with the event of recovering the correct support $\ensuremath{S}$---viz.: \begin{eqnarray} \label{EqnRecover} \rho(\estim{\beta}, \ensuremath{\beta^*}) & = & \Ind \left[ \left \{ \estim{\beta}_i \neq 0 \quad \forall i \in \ensuremath{S} \right \} \cap \left \{ \estim{\beta}_j = 0 \quad \forall j \notin \ensuremath{S} \right \} \right]. \end{eqnarray} \myparagraph{Past work} Closely related in its information-theoretic spirit is the earlier paper of Fletcher et al.~\cite{Fletcher06} that analyzed the standard Gaussian ensemble from a rate-distortion perspective, studying the average $\ell_2$-error of the optimal decoder. The results given here also address the information-theoretic limitations, albeit of the sparsity recovery problem, using the error metric~\eqref{EqnRecover} as opposed to $\ell_2$-norm. 
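The metric~\eqref{EqnRecover} is simply the indicator of exact support recovery; a short sketch (our own function name):

```python
import numpy as np

def exact_support_recovery(beta_hat, beta_star):
    """0-1 metric: 1 iff the estimated and true supports coincide exactly."""
    return int(np.array_equal(beta_hat != 0, beta_star != 0))

# Examples: matching supports succeed; any missed or spurious entry fails.
truth = np.array([0.0, 1.5, 0.0, -0.2])
assert exact_support_recovery(np.array([0.0, 9.0, 0.0, 1.0]), truth) == 1
assert exact_support_recovery(np.array([0.0, 9.0, 0.0, 0.0]), truth) == 0
```

Note that the magnitudes of the estimates are irrelevant under this loss; only the pattern of zeros and nonzeros matters.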
In a related but distinct line of work, the use of $\ell_1$-relaxation for sparse approximation has a lengthy history; relatively early papers from the 1990s include the work of Chen, Donoho and Saunders~\cite{Chen98}, as well as Tibshirani~\cite{Tibshirani96} on $\ell_1$-constrained quadratic programming (known as the Lasso in the statistics literature). A great deal of subsequent work has analyzed the performance of $\ell_1$-relaxations, both in the noiseless~\cite{Elad02,Feuer03,Malioutov04} and noisy setting~\cite{Tropp06} for deterministic ensembles, as well as the noiseless~\cite{Donoho04a,CandesTao05,DonTan06} and noisy setting~\cite{CandesTao06,CanRomTao06,Donoho04b,Meinshausen06,Zhao06,Wainwright06a_aller} for random ensembles. Other work has provided conditions under which estimation of a noise-contaminated vector via the Lasso~\cite{CanRomTao06,Donoho04b} or other types of convex relaxation~\cite{CandesTao06} is stable in the $\ell_2$ sense; however, such $\ell_2$-stability does not guarantee exact recovery of the underlying sparsity pattern. A notable feature of the results given here is that they apply to completely general scaling of the triplet $(\ensuremath{n}, \mdim, \ensuremath{s})$. In contrast, most previous work has addressed one of two possible special cases of sparsity scaling: (a) either the \emph{linear sparsity regime}~\cite[e.g.,]{CandesTao05,Donoho04a,Donoho04b}, in which $\ensuremath{s} = \alpha \mdim$ for some $\alpha \in (0,1)$; or (b) the \emph{sublinear sparsity regime}~\cite[e.g.,]{Meinshausen06,Zhao06}, in which $\ensuremath{s}/\mdim$ tends to zero. Depending on the underlying motivation for sparse approximation, both of these sparsity regimes are of independent interest.
In covering the full range of scaling, the results given here are complementary to those of our previous paper~\cite{Wainwright06a_aller} that provided threshold results, also applicable to general scaling of $(\ensuremath{n}, \mdim, \ensuremath{s})$, for the success/failure of the Lasso when used for sparsity recovery with random Gaussian measurement ensembles. We discuss connections to previous work in more technical detail following the statement of our main results below. \subsection{Our contributions} \label{SecMainResults} The analysis of this paper is asymptotic in nature, focusing on scaling conditions on the triplet $(\ensuremath{n}, \ensuremath{p}, \ensuremath{s})$ under which asymptotically exact recovery is either possible or impossible. As mentioned previously, we focus on the linear observation model~\eqref{EqnLinearObs} in the noisy setting ($\sigma^2 > 0$), and with the measurement vectors $\ensuremath{x}_i$ drawn in an i.i.d. manner from the standard Gaussian $N(0, I_{\mdim \times \mdim})$ ensemble. A decoder is a mapping from the $\ensuremath{n}$-vector of observations $\Ysca$ to an estimated subset---say of the form $\ensuremath{\estim{\Sset}} = \phi(Y)$. We think of the underlying true vector $\ensuremath{\beta^*} \in \real^\mdim$ with its support $\ensuremath{S}$ randomly chosen, uniformly over all ${\mdim \choose \ensuremath{s}}$ subsets of size $\ensuremath{s}$. Accordingly, the average error probability $\ensuremath{p_{\operatorname{err}}}$ of any decoder is given by \begin{equation*} \ensuremath{p_{\operatorname{err}}}(\phi) \; = \; \frac{1}{{\mdim \choose \ensuremath{s}}} \sum_{\ensuremath{S}, \; |\ensuremath{S}| = \ensuremath{s}} \Prob[\phi(Y) \neq \ensuremath{S} \, \mid \, \ensuremath{S}].
\end{equation*} Here the term $\ensuremath{\mathbb{P}}[\phi(Y) \neq \ensuremath{S} \, \mid \, \ensuremath{S}]$ corresponds to the probability, conditioned on the true underlying support being $\ensuremath{S}$ and averaging over the measurement noise $\Wsca$, the choice of Gaussian random matrix $\ensuremath{X}$, and the choice of the entries $\beta^*_{\ensuremath{S}}$ on the fixed support $\ensuremath{S}$, that the decoder makes an error. We say that \begin{itemize} \item the sparsity recovery is \emph{asymptotically reliable} (error-free) if $\ensuremath{p_{\operatorname{err}}}(\phi) \rightarrow 0$ as $\ensuremath{n} \rightarrow +\infty$, and \item the sparsity recovery is \emph{asymptotically unreliable} if for some constant $c > 0$, the error probability remains bounded away from zero: $\ensuremath{p_{\operatorname{err}}}(\phi) \geq c$ as $\ensuremath{n} \rightarrow +\infty$. \end{itemize} \noindent In addition to the three parameters $(\ensuremath{n}, \ensuremath{p}, \ensuremath{s})$, our results also involve the minimum value of the unknown vector $\ensuremath{\beta^*}$ on its support, given by \begin{eqnarray} \label{EqnDefnMinVal} \ensuremath{\mathcal{M}}(\ensuremath{\beta^*}) & \defn & \min_{i \in \ensuremath{S}} |\beta^*_i|. \end{eqnarray} We begin by stating a set of conditions on the triplet $(\ensuremath{n}, \mdim, \ensuremath{s})$ which are sufficient to ensure asymptotically perfect recovery of the sparsity pattern: \btheos[Sufficient conditions] \label{ThmSuff} If $(\ensuremath{n} - \ensuremath{s}) \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) \rightarrow +\infty$, then the following condition suffices to ensure asymptotically reliable recovery: for some fixed constant $C > 0$, \begin{eqnarray} \label{EqnSuffGeneral} \ensuremath{n} & > & C \; \max\left \{ \ensuremath{s} \log (\mdim/\ensuremath{s}), \; \frac{1}{\ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})} \log(\mdim - \ensuremath{s}) \right \}.
\end{eqnarray} \etheos \noindent The proof of this claim, given in Section~\ref{SecSuff}, is constructive in nature, based on direct analysis of the error probability associated with the optimal decoder. \btheos[Necessary conditions] \label{ThmNec} Asymptotically reliable recovery is impossible under the following condition: for some fixed constant $C' > 0$, \begin{eqnarray} \label{EqnNecGeneral} \ensuremath{n} & < & \left[\frac{C'}{\ensuremath{s} \; \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})} \right] \; \ensuremath{s} \log \frac{\mdim}{\ensuremath{s}}. \end{eqnarray} \etheos \noindent The proof of this claim, given in Section~\ref{SecNec}, is somewhat more indirect in nature, based on exploiting a corollary of Fano's inequality~\cite{Cover,Hasminskii78,IbrHas81,Yu97}, in order to lower bound the probability of error for a restricted hypothesis testing problem. To interpret these results, we consider two distinct regimes of sparsity: \myparagraph{Regime of sublinear sparsity} First suppose that the sparsity is sublinear, meaning that \mbox{$\ensuremath{s} = o(\mdim)$.} Based on the two theorems, we identify the critical scaling as $\ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) = \Theta(1/\ensuremath{s})$. With this scaling, the sufficient condition in Theorem~\ref{ThmSuff} reduces to $\ensuremath{n} > C \: \ensuremath{s} \; \max \{ \log(\mdim - \ensuremath{s}), \log \frac{\mdim}{\ensuremath{s}} \}$, whereas the necessary condition in Theorem~\ref{ThmNec} reduces to $\ensuremath{n} < C' \: \ensuremath{s} \log \frac{\mdim}{\ensuremath{s}}$. For many choices of sublinear sparsity (e.g., $\ensuremath{s} = \mathcal{O}(\sqrt{\mdim})$), we have $\log \frac{\mdim}{\ensuremath{s}} = \Theta(\log (\mdim - \ensuremath{s}))$, so that we can summarize the two conditions as a threshold of the order $\ensuremath{n} = \Theta(\ensuremath{s} \log(\mdim - \ensuremath{s}))$.
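For instance, with $\ensuremath{s} = \sqrt{\mdim}$ the two logarithmic factors differ by roughly a factor of two, consistent with the $\Theta(\ensuremath{s}\log(\mdim-\ensuremath{s}))$ summary; an illustrative numerical check:

```python
import math

p = 10**8
s = int(math.isqrt(p))          # sublinear sparsity s = sqrt(p)
ratio = math.log(p / s) / math.log(p - s)
# log(p/s) = (1/2) log p, while log(p - s) ~ log p, so the ratio is near 1/2.
assert abs(ratio - 0.5) < 0.01
```

Since the two logarithms agree up to a constant factor, either one can serve as the logarithmic factor in the threshold.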
For comparison with computationally tractable methods, our previous work~\cite{Wainwright06a_aller} established that $\ell_1$-constrained quadratic programming (the Lasso) has a threshold\footnote{Those results~\cite{Wainwright06a_aller} allowed the minimum value to scale as $\ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) = f(\ensuremath{s})/\ensuremath{s}$, where $f$ is any function such that $\lim_{\ensuremath{s} \rightarrow +\infty} f(\ensuremath{s}) = +\infty$.} for success/failure of order $\ensuremath{n} = \Theta \left(\ensuremath{s} \log (\mdim - \ensuremath{s}) \right)$, so that the Lasso essentially achieves the information-theoretic bounds. \myparagraph{Regime of linear sparsity} Next consider the regime of linear sparsity, in which $\ensuremath{s} = \alpha \mdim$ for some $\alpha \in (0,1)$. Considering first the sufficient conditions of Theorem~\ref{ThmSuff}, we see that as long as $\ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) \ensuremath{s} \rightarrow +\infty$, a total of $\ensuremath{n} = \Theta(\mdim)$ observations is sufficient to ensure asymptotically reliable recovery. This information-theoretic condition should be compared with our earlier analysis~\cite{Wainwright06a_aller} of $\ell_1$-constrained quadratic programming (the Lasso); one consequence of this work is that if \mbox{$\ensuremath{n} < 2 \ensuremath{s} \log (\mdim -\ensuremath{s})$,} then the Lasso fails with probability converging to one, even if $\ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})$ stays bounded away from zero. Given that $2 \ensuremath{s} \log(\mdim-\ensuremath{s}) \gg \Theta(\mdim)$ for linear sparsity $\ensuremath{s} = \alpha \mdim$, we see that there is a substantial gap between the performance of the Lasso and the optimal decoder in the linear sparsity regime. Thus, Theorem~\ref{ThmSuff} raises the interesting question as to the existence of computationally efficient techniques for asymptotically reliable recovery in the regime of linear sparsity.
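To make these scalings concrete, the following sketch (our illustration, not part of the original analysis; the unspecified constants $C$ and $C'$ are arbitrarily set to $1$) evaluates the sufficient bound~\eqref{EqnSuffGeneral} and the necessary bound~\eqref{EqnNecGeneral} at the critical scaling $\ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) = 1/\ensuremath{s}$:

```python
import math

def sufficient_n(p, s, M2, C=1.0):
    """Order of measurements sufficient for recovery (Theorem 'Sufficient
    conditions'), up to the unspecified constant C:
    n > C * max{ s log(p/s), log(p - s) / M^2 }."""
    return C * max(s * math.log(p / s), math.log(p - s) / M2)

def necessary_n(p, s, M2, C=1.0):
    """Order below which recovery is impossible (Theorem 'Necessary
    conditions'): n < [C / (s M^2)] * s log(p/s)."""
    return (C / (s * M2)) * s * math.log(p / s)

# Sublinear sparsity at the critical scaling M^2 = 1/s: both bounds
# are of order s log(p) up to constants.
p, s = 10_000, 100
M2 = 1.0 / s
n_suff = sufficient_n(p, s, M2)   # max{s log(p/s), s log(p - s)}
n_nec = necessary_n(p, s, M2)     # s log(p/s)
```

At this scaling the two expressions match up to constant factors, which is the sense in which the threshold is identified as $\Theta(\ensuremath{s} \log(\mdim - \ensuremath{s}))$.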
\section{Analysis} \label{SecAnalysis} This section is devoted to the proofs of Theorems~\ref{ThmSuff} and~\ref{ThmNec}. We begin by setting up some useful notation to be used throughout the remainder of the paper. \subsection{Notation and set-up} For compactness in notation, let us use $\ensuremath{X}$ to denote the $\ensuremath{n} \times \ensuremath{p}$ matrix formed with the vectors $\ensuremath{x}_k = (\ensuremath{x}_{k1}, \ensuremath{x}_{k2}, \ldots, \ensuremath{x}_{k \ensuremath{p}}) \in \real^\ensuremath{p}$ as rows, and the vectors $\ensuremath{X}_j = (\ensuremath{x}_{1j}, \ensuremath{x}_{2j}, \ldots, \ensuremath{x}_{\ensuremath{n} j})^T \in \real^\ensuremath{n}$ as columns, as follows: \begin{eqnarray} \label{EqnAmatDefn} \ensuremath{X} & \defn & \begin{bmatrix} \ensuremath{x}_1^T \\ \ensuremath{x}_2^T \\ \vdots \\ \ensuremath{x}_\ensuremath{n}^T \end{bmatrix} \; = \; \begin{bmatrix} \ensuremath{X}_1 & \ensuremath{X}_2 & \cdots & \ensuremath{X}_\ensuremath{p} \end{bmatrix}. \end{eqnarray} Using $\Ysca$ and $\Wsca$ to denote the $\ensuremath{n}$-dimensional observation and noise vectors respectively, we can re-write our linear observation model~\eqref{EqnLinearObs} in matrix-vector form as follows: \begin{eqnarray} \label{EqnLinMatObs} \Ysca & = & \ensuremath{X} \ensuremath{\beta^*} + \Wsca. \end{eqnarray} Given any subset $\ensuremath{V} \subseteq \{1, \ldots, \mdim \}$, we use the notation $\beta^*_\ensuremath{V}$ to denote the $|\ensuremath{V}|$-dimensional subvector $\{ \beta^*_i, \; i \in \ensuremath{V} \}$, and similarly for other vectors (e.g., $\Ysca$, etc.). In an analogous manner, we use $\Amatt{\ensuremath{V}}$ to denote the $\ensuremath{n} \times |\ensuremath{V}|$ matrix with columns $\{ \ensuremath{X}_i, \; i \in \ensuremath{V} \}$. From here on, we assume without loss of generality that $\sigma^2 = 1$, so that $\Wsca \sim N(0, I_{\ensuremath{n} \times \ensuremath{n}})$ is simply a standard Gaussian vector.
(Note that any scaling of $\sigma$ can be accounted for in the scaling of $\ensuremath{\beta^*}$, via the parameter $\ensuremath{\mathcal{M}}(\ensuremath{\beta^*})$.) In addition, we use the following standard notation for asymptotics of real sequences $\{a_n\}$ and $\{b_n\}$: (i) $a_n = \mathcal{O}(b_n)$ means that $a_n \leq C b_n$ for some constant $C \in (0, \infty)$; (ii) $a_n = \Omega(b_n)$ means that $a_n \geq C' b_n$ for some constant $C' \in (0, \infty)$; (iii) $a_n = \Theta(b_n)$ is shorthand for $a_n = \mathcal{O}(b_n)$ and $a_n = \Omega(b_n)$; and (iv) $a_n = o(b_n)$ means that $a_n/b_n \rightarrow 0$. \subsection{Proof of Theorem~\ref{ThmSuff}} \label{SecSuff} \myparagraph{Optimal decoding} We begin by describing the ``best'' decoder, which is optimal in terms of minimizing the probability of error $\ensuremath{p_{\operatorname{err}}}(\phi)$ over all decoding rules. It is based on the following real-valued function, defined on the subsets $\ensuremath{U} \subset \{1, \ldots, \ensuremath{p} \}$, as \begin{equation} \ensuremath{f}(\ensuremath{U}; \Ysca, \ensuremath{X}, \ensuremath{\beta^*}) \; = \; \min_{\Beta{\ensuremath{U}}} \left \{\|Y - \Amatt{\ensuremath{U}} \Beta{\ensuremath{U}} \|_2^2 \right \}. \end{equation} We frequently write $\ensuremath{f}(\ensuremath{U})$ as a shorthand; note that this value corresponds to the error associated with the best estimator of $Y$ that lies in $\ensuremath{\operatorname{Range}}(\Amatt{\ensuremath{U}})$. The optimal decoder chooses the best subset $\ensuremath{\estim{\Sset}}$ based on the minimal value of this error, ranging over all subsets $\ensuremath{U}$ of size $\ensuremath{s}$: \begin{eqnarray} \label{EqnOptDecoder} \ensuremath{\estim{\Sset}} \; = \; \ensuremath{\phi_{\operatorname{opt}}}(Y) & \defn & \arg \min_{|\ensuremath{U} | = \ensuremath{s} } \ensuremath{f}(\ensuremath{U}; \Ysca, \ensuremath{X}, \ensuremath{\beta^*}).
\end{eqnarray} Note that by symmetry, the error probability \mbox{$\Prob[\ensuremath{\estim{\Sset}} \neq \ensuremath{S} \, \mid \, \ensuremath{S}]$} is in fact the same regardless of which underlying set $\ensuremath{S}$ acts as the true one. Consequently, we can view the choice of $\ensuremath{S}$ as fixed (and hence non-random), and write \begin{equation} \ensuremath{p_{\operatorname{err}}}(\phi) \; = \; \Prob[\phi(Y) \neq \ensuremath{S}], \end{equation} which should now be understood as an unconditional probability (with $\ensuremath{S}$ fixed). \myparagraph{Analysis of error probability} Consider the difference $\ensuremath{\Delta}(\ensuremath{U}) \defn \ensuremath{f}(\ensuremath{U}) - \ensuremath{f}(\ensuremath{S})$ between the reconstruction error $\ensuremath{f}(\ensuremath{S})$ using the true subset $\ensuremath{S}$, and the error $\ensuremath{f}(\ensuremath{U})$ using a candidate subset $\ensuremath{U}$. For any subset $\ensuremath{U}$ such that $\Amatt{\ensuremath{U}}$ is full rank, define the $\ensuremath{n} \times \ensuremath{n}$ matrices \begin{subequations} \begin{eqnarray} \Projmat{\ensuremath{U}} & \defn & \Gfullmat{\Amatt{\ensuremath{U}}}, \qquad \qquad \mbox{and} \\ \Projmatc{\ensuremath{U}} & \defn & I_{\ensuremath{n} \times \ensuremath{n}} - \Gfullmat{\Amatt{\ensuremath{U}}}. \end{eqnarray} \end{subequations} Note that $\Projmat{\ensuremath{U}}$ and $\Projmatc{\ensuremath{U}}$ are both orthogonal projection matrices, associated with the $\ensuremath{s}$-dimensional range space $\ensuremath{\operatorname{Range}}(\Amatt{\ensuremath{U}})$ and its $(\ensuremath{n}-\ensuremath{s})$-dimensional orthogonal complement $\ensuremath{\operatorname{Range}}(\Amatt{\ensuremath{U}})^\perp$ respectively.
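As a concrete toy illustration of the exhaustive decoder~\eqref{EqnOptDecoder} (our sketch; the dimensions, seed, and noise level are arbitrary): for each size-$\ensuremath{s}$ subset $\ensuremath{U}$, the value $\ensuremath{f}(\ensuremath{U})$ is the squared residual of the least-squares fit of $\Ysca$ onto $\ensuremath{\operatorname{Range}}(\Amatt{\ensuremath{U}})$, and the decoder returns the minimizing subset.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 40, 10, 3                    # toy dimensions

X = rng.standard_normal((n, p))        # standard Gaussian measurement ensemble
S = (0, 1, 2)                          # true support (fixed, by symmetry)
beta = np.zeros(p)
beta[list(S)] = 1.0                    # entries on the support
Y = X @ beta + 0.1 * rng.standard_normal(n)   # noisy observations

def f(U):
    """f(U): squared residual of the least-squares fit of Y onto Range(X_U)."""
    cols = X[:, list(U)]
    coef, *_ = np.linalg.lstsq(cols, Y, rcond=None)
    return float(np.sum((Y - cols @ coef) ** 2))

# Exhaustive decoder: minimize f(U) over all C(p, s) candidate supports.
S_hat = min(itertools.combinations(range(p), s), key=f)
```

The combinatorial search over $\binom{\mdim}{\ensuremath{s}}$ subsets is exactly what makes this decoder information-theoretically optimal but computationally intractable at scale.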
With these definitions, we state the following result (see Appendix~\ref{AppAlgebra} for a proof): \blems \label{LemAnalyzeForm} For a given vector $\ensuremath{\beta^*}$ with support $\ensuremath{S}$, the optimal decoder declares $\ensuremath{U}$ over $\ensuremath{S}$ if and only if the random variable \begin{eqnarray} \label{EqnAnalyzeForm} \ensuremath{\Delta}(\ensuremath{U}) & = & \left \| \Projmatc{\ensuremath{U}} \left(\Amatt{\ensuremath{S \backslash U}} \ensuremath{\beta^*}_{\ensuremath{S \backslash U}} + \Wsca \right) \right \|^2 - \left \| \Projmatc{\ensuremath{S}} \Wsca \right \|^2 \end{eqnarray} is negative. \elems \noindent Overall, the optimal decoder fails if and only if at least one subset $\ensuremath{U}$ (with cardinality $|\ensuremath{U}| = \ensuremath{s}$) is preferable to $\ensuremath{S}$; consequently, the probability of error can be written as \begin{eqnarray} \label{EqnErrorProbOne} \ensuremath{\mathbb{P}}[\ensuremath{\estim{\Sset}} \neq \ensuremath{S}] & = & \ensuremath{\mathbb{P}} \Big[\bigcup_{\ensuremath{U} \neq \ensuremath{S}, \; |\ensuremath{U}| = \ensuremath{s}} \{ \ensuremath{\Delta}(\ensuremath{U}) < 0 \} \Big]. \end{eqnarray} In order to analyze this error probability, we begin by considering the range of possible integers $k \defn |\ensuremath{S} \backslash \ensuremath{U}|$, corresponding to the complement of the overlap.
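The identity in Lemma~\ref{LemAnalyzeForm} reflects that $\Projmatc{\ensuremath{S}}$ annihilates $\Amatt{\ensuremath{S}} \ensuremath{\beta^*}_{\ensuremath{S}}$, while $\Projmatc{\ensuremath{U}}$ annihilates only the columns indexed by $\ensuremath{S} \cap \ensuremath{U}$. It can be checked numerically; the following sketch (ours, with arbitrary toy dimensions) compares $\ensuremath{\Delta}(\ensuremath{U}) = \ensuremath{f}(\ensuremath{U}) - \ensuremath{f}(\ensuremath{S})$ computed from the definition against the projection formula:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 8
X = rng.standard_normal((n, p))
S, U = [0, 1, 2], [1, 2, 5]            # candidate U overlaps S in {1, 2}
S_minus_U = [0]                        # so S \ U = {0}
beta = np.zeros(p)
beta[S] = [1.0, -2.0, 0.5]
W = rng.standard_normal(n)
Y = X @ beta + W

def proj_perp(cols):
    """Projection onto the orthogonal complement of Range(X_cols)."""
    Q, _ = np.linalg.qr(X[:, cols])
    return np.eye(n) - Q @ Q.T

def f(cols):
    """Squared distance from Y to Range(X_cols)."""
    return float(np.sum((proj_perp(cols) @ Y) ** 2))

delta_direct = f(U) - f(S)             # Delta(U) from its definition
delta_lemma = (float(np.sum((proj_perp(U) @ (X[:, S_minus_U] @ beta[S_minus_U] + W)) ** 2))
               - float(np.sum((proj_perp(S) @ W) ** 2)))
# delta_direct and delta_lemma agree up to floating-point error.
```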
The following lemma characterizes the exponential decay rate of the lower-tail probability of the random variable $\ensuremath{\Delta}(\ensuremath{U})$: \blems \label{LemExpUpper} For fixed $k$ (with $1 \leq k \leq \ensuremath{s}$), we have for any $\ensuremath{U}$ with $|\ensuremath{S \backslash U}| = k$, \begin{eqnarray} \label{EqnExpUpper} \prob[\ensuremath{\Delta}(\ensuremath{U}) < 0 ] & \leq & \exp \left \{\frac{-(\ensuremath{n} - \ensuremath{s})\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2}{12 \left(\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2 + 4\right)} \right \} + 2 \, \exp \left \{ -\frac{k}{4} \left[-1 + \frac{1}{4}(\ensuremath{n} - \ensuremath{s}) \frac{ \|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2}{k} \right]^2 \right \}. \end{eqnarray} \elems \spro We begin by conditioning on the Gaussian noise vector $\Wsca$. Since each element of $\Amatt{\ensuremath{S \backslash U}}$ is standard normal, each entry of the random vector $\Amatt{\ensuremath{S \backslash U}} \ensuremath{\beta^*}_{\ensuremath{S \backslash U}}$ is zero-mean Gaussian with variance $\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2$. Consequently, if we rescale by the standard deviation, then the random vector \begin{equation*} \|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^{-1} \; \left(\Amatt{\ensuremath{S \backslash U}} \ensuremath{\beta^*}_{\ensuremath{S \backslash U}} + \Wsca\right) \end{equation*} is an $\ensuremath{n}$-dimensional Gaussian random vector with independent unit-variance entries, and mean vector $\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^{-1} \Wsca$.
Applying the orthogonal transform $\Projmatc{\ensuremath{U}}$ reduces the number of degrees of freedom to $(\ensuremath{n} - \ensuremath{s})$, so that we conclude that \begin{equation*} \|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^{-2} \; \left\| \Projmatc{\ensuremath{U}} \left( \Amatt{\ensuremath{S \backslash U}} \ensuremath{\beta^*}_{\ensuremath{S \backslash U}} + \Wsca \right) \right \|^2 \end{equation*} is a non-central $\chi^2$ variate with $d = \ensuremath{n} - \ensuremath{s}$ degrees of freedom, and non-centrality parameter $\nu = \|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}} \|^{-2} \|\Projmatc{\ensuremath{U}} \Wsca\|^2$. With these choices of $(d, \nu)$, we have \begin{eqnarray*} \ensuremath{\mathbb{P}}[\ensuremath{\Delta}(\ensuremath{U}) < 0 \, \mid \, \Wsca] & = & \ensuremath{\mathbb{P}} \left[\chi^2(d, \nu) < t \right] \end{eqnarray*} where we have set $t \defn \frac{\|\Projmatc{\ensuremath{S}} \Wsca\|^2}{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}} \|^2}$ for shorthand. Thus, conditioned on $\Wsca$, our problem reduces to bounding the tail of a non-central $\chi^2$ variate. In Appendix~\ref{AppChiTail}, we state some known tail bounds~\cite{Birge01} on such variates, which we use here. In order to apply these bounds, we condition on the following ``good event'', defined in terms of $\Wsca$ \begin{eqnarray*} \ensuremath{\mathcal{A}} & = & \left\{ \left | \frac{ \|\Projmatc{\ensuremath{U}} \Wsca\|^2 - \|\Projmatc{\ensuremath{S}} \Wsca \|^2}{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2} \right | \leq \frac{\ensuremath{n}- \ensuremath{s}}{2} \right \} \bigcap \left \{ \|\Projmatc{\ensuremath{U}} \Wsca\|^2 \leq 2 (\ensuremath{n} - \ensuremath{s}) \right \}. 
\end{eqnarray*} Note that the first event defining $\ensuremath{\mathcal{A}}$ ensures that \begin{eqnarray} \label{EqnKeyCond} d + \nu - t & = & (\ensuremath{n} - \ensuremath{s}) + \frac{1}{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2} \left(\|\Projmatc{\ensuremath{U}} \Wsca \|^2 - \|\Projmatc{\ensuremath{S}} \Wsca \|^2 \right) \; \geq \; \frac{\ensuremath{n}-\ensuremath{s}}{2} \;\geq \; 0. \end{eqnarray} Consequently, conditioned on $\ensuremath{\mathcal{A}}$, we may set $x \defn \frac{\left(d + \nu - t\right)^2}{4 (d + 2\nu)}$ in equation~\eqref{EqnNoncentB} to obtain the upper bound \begin{eqnarray} \log \ensuremath{\mathbb{P}}[ \ensuremath{\Delta}(\ensuremath{U}) < 0 \; \mid \; \ensuremath{\mathcal{A}}] & \leq & -\frac{\left (d + \nu - t \right)^2}{4 (d + 2\nu)} \nonumber \\ & = & -\frac{ \left ( [\ensuremath{n} - \ensuremath{s}] + \frac{\|\Projmatc{\ensuremath{U}} \Wsca\|^2 - \|\Projmatc{\ensuremath{S}} \Wsca\|^2}{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2} \right)^2}{4 \left( [\ensuremath{n}-\ensuremath{s}] + 2 \frac{\|\Projmatc{\ensuremath{U}} \Wsca\|^2}{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}} \|^{2}} \right)} \nonumber \\ & = & - \left(\ensuremath{n} - \ensuremath{s} \right) \; \frac{ \left ( 1 + \frac{\|\Projmatc{\ensuremath{U}} \Wsca\|^2 - \|\Projmatc{\ensuremath{S}} \Wsca\|^2}{(\ensuremath{n}-\ensuremath{s}) \; \|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2} \right)^2}{4 \left(1+ 2 \frac{\|\Projmatc{\ensuremath{U}} \Wsca\|^2}{(\ensuremath{n} - \ensuremath{s}) \; \|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}} \|^{2}} \right)} \nonumber \\ & \stackrel{(b)}{\leq} & - \left(\ensuremath{n} - \ensuremath{s} \right) \; \frac{1/2}{4\left(1+ 4/\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2 \right)} \nonumber \\ \label{EqnTermOne} & = & - \left(\ensuremath{n} - \ensuremath{s} \right) \; \frac{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2}{8 \left(\|\ensuremath{\beta^*}_{\ensuremath{S 
\backslash U}}\|^2 + 4\right)}, \end{eqnarray} where inequality (b) makes use of the second event defining $\ensuremath{\mathcal{A}}$. We complete the proof by observing that \begin{eqnarray} \label{EqnOverall} \ensuremath{\mathbb{P}}[ \ensuremath{\Delta}(\ensuremath{U}) < 0] & \leq & \ensuremath{\mathbb{P}}[ \ensuremath{\Delta}(\ensuremath{U}) < 0 \; \mid \; \ensuremath{\mathcal{A}}] + \ensuremath{\mathbb{P}}[\ensuremath{\mathcal{A}}^c], \end{eqnarray} so that it suffices to upper bound $\ensuremath{\mathbb{P}}[\ensuremath{\mathcal{A}}^c]$. By union bound, we have \begin{eqnarray} \label{EqnUnion} \ensuremath{\mathbb{P}}[\ensuremath{\mathcal{A}}^c] & \leq & \ensuremath{\mathbb{P}} \left[\left | \frac{ \|\Projmatc{\ensuremath{U}} \Wsca\|^2 - \|\Projmatc{\ensuremath{S}} \Wsca \|^2}{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2} \right | \geq \frac{\ensuremath{n}- \ensuremath{s}}{2} \right] + \ensuremath{\mathbb{P}} \left[\|\Projmatc{\ensuremath{U}} \Wsca\|^2 \geq 2 (\ensuremath{n} - \ensuremath{s}) \right]. \end{eqnarray} Since $\|\Projmatc{\ensuremath{U}} \Wsca\|^2$ is a central $\chi^2$ with $(\ensuremath{n} - \ensuremath{s})$ degrees of freedom, we may apply the tail bounds from Appendix~\ref{AppChiTail} to conclude that \begin{eqnarray} \label{EqnTermThree} \ensuremath{\mathbb{P}}\left[\|\Projmatc{\ensuremath{U}} \Wsca\|^2 \geq 2 (\ensuremath{n} - \ensuremath{s}) \right] & \leq & \exp(-(\ensuremath{n} - \ensuremath{s})/12). \end{eqnarray} Turning to the first term on the RHS on equation~\eqref{EqnUnion}, we observe that \begin{eqnarray*} \|\Projmatc{\ensuremath{U}} \Wsca\|^2 - \|\Projmatc{\ensuremath{S}} \Wsca \|^2 & = & \|\Projmat{\ensuremath{U}} \Wsca\|^2 - \|\Projmat{\ensuremath{S}} \Wsca \|^2 \; \edist \; \sum_{i \in \ensuremath{U \backslash S }} Z_i^2 - \sum_{j \in \ensuremath{S \backslash U}} Z_j^2, \end{eqnarray*} where $\{Z_i, Z_j \}$ are i.i.d. standard normal variates. 
Now if the difference $\sum_{i \in \ensuremath{U \backslash S }} Z_i^2 - \sum_{j \in \ensuremath{S \backslash U}} Z_j^2$ is to exceed $\frac{1}{2}(\ensuremath{n} - \ensuremath{s}) \|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2$, then at least one of the terms must exceed $\frac{1}{4}(\ensuremath{n} - \ensuremath{s}) \|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2$. Moreover, we observe that $\sum_{j \in \ensuremath{S \backslash U}} Z_j^2$ is $\chi^2_k$, where $k = |\ensuremath{S \backslash U}|$. Hence, we have \begin{eqnarray*} \log \ensuremath{\mathbb{P}} \left[\left | \frac{ \|\Projmatc{\ensuremath{U}} \Wsca\|^2 - \|\Projmatc{\ensuremath{S}} \Wsca \|^2}{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2} \right | \geq \frac{\ensuremath{n}- \ensuremath{s}}{2} \right] & \leq & \log 2 \, \ensuremath{\mathbb{P}} \left[ \frac{\chi^2_k}{k} \geq \frac{1}{4}(\ensuremath{n} - \ensuremath{s}) \frac{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2}{k} \right] \\ & = & \log 2 \ensuremath{\mathbb{P}} \left[\chi^2_k -k \geq k \left\{ -1 + \frac{1}{4}(\ensuremath{n} - \ensuremath{s}) \frac{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2}{k} \right \} \right] \\ & \leq & -\frac{k}{4} \left[-1 + \frac{1}{4}(\ensuremath{n} - \ensuremath{s}) \frac{ \|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2}{k} \right]^2 + \log 2, \end{eqnarray*} where we have used the upper bound~\eqref{EqnCleanUpCent} from Appendix~\ref{AppChiTail} with $x \defn \frac{k}{4} \left( -1 + \frac{1}{4}(\ensuremath{n} - \ensuremath{s}) \frac{\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2}{k} \right)^2$ in the final inequality. 
\fpro \myparagraph{Weakened but simpler bound} In order to make further progress, we simplify the bound~\eqref{EqnExpUpper} from Lemma~\ref{LemExpUpper}, at the expense of weakening it, by noting that for all $k \geq 1$, we have $\|\ensuremath{\beta^*}_{\ensuremath{S \backslash U}}\|^2 \geq k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})$, so that \begin{eqnarray} \label{EqnWeakOne} \prob[\ensuremath{\Delta}(\ensuremath{U}) \leq 0 ] & \leq & \exp \left \{\frac{-(\ensuremath{n} - \ensuremath{s}) k \ \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})}{12 \left(k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) + 4\right)} \right \} + 2 \, \exp \left \{-\frac{k}{4} \left[\frac{\ensuremath{n} - \ensuremath{s}}{4} \: \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) - 1 \right]^2 \right \}. \end{eqnarray} The advantage of this weakened bound is that it is independent of the subset $\ensuremath{U}$, and depends only on the parameter $k = |\ensuremath{S \backslash U}|$. From this weakened bound~\eqref{EqnWeakOne}, we see the necessity (at least for this analysis) of the requirement $(\ensuremath{n} - \ensuremath{s}) \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) \rightarrow +\infty$, so that the second error term decays asymptotically. Under this requirement, we have (for sufficiently large $\ensuremath{n}$) that the second error exponent can be bounded as \begin{eqnarray*} -\frac{k}{4} \left[\frac{\ensuremath{n} - \ensuremath{s}}{4} \: \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) - 1 \right]^2 & \leq & -\frac{k}{12} \left[ \frac{\ensuremath{n} - \ensuremath{s}}{4} \: \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) - 1\right] \\ & \leq & -\frac{k}{4} \frac{\ensuremath{n} - \ensuremath{s}}{8} \: \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) \\ & \leq & \frac{-(\ensuremath{n} - \ensuremath{s}) k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})}{12 \left(k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) + 8\right)}. 
\end{eqnarray*} The first error exponent is also upper bounded by this same quantity, so that we can simplify the upper bound to \begin{eqnarray} \label{EqnWeakTwo} \prob[\ensuremath{\Delta}(\ensuremath{U}) \leq 0 ] & \leq & 3\, \exp \left \{ \frac{-(\ensuremath{n} - \ensuremath{s}) k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})}{12 \left(k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) + 8\right)} \right \}. \end{eqnarray} Denote by $N(k)$ the number of subsets $\ensuremath{U}$ of size $\ensuremath{s}$ with $|\ensuremath{S \backslash U}| = k$. A standard counting argument yields that, for each $k$ with $1 \leq k \leq \ensuremath{s}$, there are \begin{eqnarray} \label{EqnExactNK} N(k) & = & {\ensuremath{s} \choose k} \, {\mdim - \ensuremath{s} \choose k} \end{eqnarray} such subsets. Applying this simplified bound~\eqref{EqnWeakTwo} and the union bound to the representation~\eqref{EqnErrorProbOne}, we can upper bound the error probability as \begin{eqnarray} \label{EqnFinalUnion} \ensuremath{\mathbb{P}}[\ensuremath{\estim{\Sset}} \neq \ensuremath{S}] & \leq & 3 \sum_{k=1}^\ensuremath{s} {\ensuremath{s} \choose k} \, {\mdim - \ensuremath{s} \choose k} \; \exp \left \{ \frac{-(\ensuremath{n} - \ensuremath{s}) k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})}{12 \left(k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) + 8\right)} \right \}. \end{eqnarray} \myparagraph{Analysis of the upper bound} We now analyze the upper bound~\eqref{EqnFinalUnion}; in particular, our goal is to derive sufficient conditions for each of the terms in the summation to vanish asymptotically. In order to deal with the binomial coefficients, we make use of the bounds (see Appendix~\ref{AppBinBound}) \begin{equation} \log {\ensuremath{s} \choose k} \leq k \log \frac{\ensuremath{s} e}{k}, \qquad \mbox{and} \qquad \log {\mdim - \ensuremath{s} \choose k} \leq k \log \frac{(\mdim-\ensuremath{s}) e}{k}.
\end{equation} Applying these two bounds, we conclude that the (logarithm of the) $k^{th}$ term is upper bounded by \begin{equation*} k \left [ 2 + \log \frac{\ensuremath{s}}{k} + \log \frac{\mdim - \ensuremath{s}}{k} \right] - \frac{(\ensuremath{n} - \ensuremath{s}) k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})}{12 \left(k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) + 8\right)}. \end{equation*} Requiring this term to be negative asymptotically is equivalent to having \begin{eqnarray} (\ensuremath{n} - \ensuremath{s}) & \geq & \frac{12 \left(k \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) + 8\right)}{k \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})} \; k \left [ 2 + \log \frac{\ensuremath{s}}{k} + \log \frac{\mdim - \ensuremath{s}}{k} \right] \nonumber \\ \label{EqnInterBound} & = & 12 \left(k \, + \frac{8}{\ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})} \right) \left \{ 2 + \log \frac{\ensuremath{s}}{k} + \log \frac{\mdim - \ensuremath{s}}{k} \right\}. \end{eqnarray} In order to understand the behavior of this lower bound, we consider $k$ in two distinct regimes: \bit \item On one hand, if $k = \ensuremath{\gamma} \ensuremath{s}$ for some $\ensuremath{\gamma} \in (0,1)$, then the second term on the RHS of the bound~\eqref{EqnInterBound} is dominated by the term $\log \frac{\mdim - \ensuremath{s}}{\ensuremath{\gamma} \ensuremath{s}} = \Omega(\log \frac{\mdim}{\ensuremath{s}})$, so that the overall lower bound is dominated by $\max \{\ensuremath{s}, \ensuremath{\mathcal{M}}^{-2}(\ensuremath{\beta^*})\} \log(\mdim/\ensuremath{s})$. 
\item On the other hand, if $k = o(\ensuremath{s})$, the lower bound is dominated by the maximum of linear growth $\ensuremath{s}$, and the quantity \mbox{$\ensuremath{\mathcal{M}}^{-2}(\ensuremath{\beta^*}) \, \log(\mdim - \ensuremath{s})$.} \eit Overall, we conclude that the condition \begin{eqnarray} \label{EqnFinalSublinCond} \ensuremath{n} & > & C \; \max\left \{ \ensuremath{s} \log (\mdim/\ensuremath{s}), \; \frac{1}{\ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})} \log(\mdim - \ensuremath{s}) \right \}, \end{eqnarray} for some constant $C > 0$ is sufficient in order to achieve asymptotically reliable recovery, as claimed in Theorem~\ref{ThmSuff}. \subsection{Proof of Theorem~\ref{ThmNec}} \label{SecNec} We now turn to the proof of the necessary conditions given in Theorem~\ref{ThmNec}. \myparagraph{Fano method} Our analysis is based on a well-known lower bound on the probability of error in a multiway hypothesis testing problem in terms of Kullback-Leibler divergences. In the non-parametric statistics literature~\cite{Hasminskii78,IbrHas81,Yu97}, this approach is referred to as the Fano method, since the bound is a corollary of Fano's inequality from information theory~\cite{Cover}. Here we state and make use of the following variant~\cite{Yatracos88}: \blems Consider a family of $\ensuremath{N}$ distributions $\{\prob_1, \ldots, \prob_\ensuremath{N} \}$. Then the average probability of error in performing a hypothesis test over this family is lower bounded as \begin{eqnarray*} \ensuremath{p_{\operatorname{err}}} & \geq & 1 - \frac{ \frac{1}{\ensuremath{N}^2} \sum \limits_{i,j =1}^\ensuremath{N} D(\prob_i \, \| \, \prob_j) + \log 2}{\log \left(\ensuremath{N}-1 \right)}, \end{eqnarray*} where $D(\prob_i \, \| \, \prob_j)$ denotes the Kullback-Leibler divergence between distributions $\prob_i$ and $\prob_j$.
\elems \myparagraph{Restricted problem} Consider the collection of all $\ensuremath{N} = {\mdim \choose \ensuremath{s}}$ subsets of size $\ensuremath{s}$ chosen from $\{1, \ldots, \mdim\}$. In order to produce lower bounds, we analyze the behavior of the optimal decoder for a restricted problem, in which we assume that for any fixed support $\ensuremath{S}$, it is known \emph{a priori} that $\ensuremath{\beta^*}_i = \ensuremath{\mathcal{M}}(\ensuremath{\beta^*})$ for all indices $i \in \ensuremath{S}$. (Recall that $\ensuremath{\mathcal{M}}(\ensuremath{\beta^*})$ is the minimum absolute value of entries in the support of $\ensuremath{\beta^*}$.) This problem is simply an $\ensuremath{N}$-way hypothesis testing problem, in which the observation under the hypothesis associated with subset $\ensuremath{U}$ takes the form \begin{eqnarray} \label{EqnModObs} \Ysca & = & \Amatt{\ensuremath{U}} \ensuremath{\vec{v}} + \Wsca, \end{eqnarray} where $\ensuremath{\vec{v}} = \ensuremath{\mathcal{M}}(\ensuremath{\beta^*}) \vec{1}_\ensuremath{s}$ is a rescaled $\ensuremath{s}$-vector of ones, and $\Wsca \sim N(0, I_{\ensuremath{n} \times \ensuremath{n}})$. Let us index the collection of all $\ensuremath{s}$-sized subsets with $i = 1, 2, \ldots, \ensuremath{N}$, and use $\ensuremath{U}[i]$ to denote the corresponding support. For each index $i$, let $\prob_i$ denote the multivariate Gaussian distribution with mean $\Amatt{\ensuremath{U}[i]} \ensuremath{\vec{v}}$ and covariance matrix $I_{\ensuremath{n} \times \ensuremath{n}}$; note that $\prob_i$ is simply the class-conditional distribution of $\Ysca$ under the hypothesis $\ensuremath{U}[i]$. 
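For intuition (our illustration, not part of the proof), the Fano lower bound for this restricted ensemble can be evaluated directly on a toy instance, using the fact that for unit-covariance Gaussians the KL divergence is half the squared distance between means:

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(2)
n, p, s, M = 5, 6, 2, 0.5              # toy sizes; M plays the role of M(beta*)

X = rng.standard_normal((n, p))
subsets = list(itertools.combinations(range(p), s))   # the N = C(p, s) hypotheses
N = len(subsets)
v = M * np.ones(s)                     # known signal values on the support

means = [X[:, list(U)] @ v for U in subsets]          # mean of Y under each hypothesis

# Average pairwise KL divergence: D(P_i || P_j) = 0.5 * ||mu_i - mu_j||^2.
avg_kl = sum(0.5 * float(np.sum((mi - mj) ** 2))
             for mi in means for mj in means) / N**2

# Fano-type lower bound on the average error probability of any decoder
# (informative only when positive).
fano_lower = 1 - (avg_kl + math.log(2)) / math.log(N - 1)
```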
Moreover, the Kullback-Leibler divergence between any such pair is given by $D(\prob_i \, \| \, \prob_j) = \frac{1}{2} \|\Amatt{\ensuremath{U}[i]} \ensuremath{\vec{v}} - \Amatt{\ensuremath{U}[j]} \ensuremath{\vec{v}} \|_2^2$, so that the corresponding Fano bound takes the form \begin{eqnarray*} \ensuremath{p_{\operatorname{err}}} & \geq & 1 - \frac{1}{2} \frac{ \frac{1}{ N^2} \sum_{i, j =1}^{N} \|\Amatt{\ensuremath{U}[i]} \ensuremath{\vec{v}} - \Amatt{\ensuremath{U}[j]} \ensuremath{\vec{v}} \|_2^2 + 2 \log 2}{\log [N-1]}. \end{eqnarray*} \myparagraph{Upper bounds via concentration} Thus, in order to ensure that $\ensuremath{p_{\operatorname{err}}}$ stays bounded away from zero, it suffices to upper bound the quantity $\frac{1}{2} \frac{1}{\ensuremath{N}^2} \sum_{i, j=1}^{\ensuremath{N}} \|\Amatt{\ensuremath{U}[i]} \ensuremath{\vec{v}} - \Amatt{\ensuremath{U}[j]} \ensuremath{\vec{v}} \|_2^2 \big / \log [\ensuremath{N}-1]$ by a constant strictly less than one. For a given pair of subsets $(\ensuremath{U}, \ensuremath{V})$ in our collection, consider the random variable \mbox{$Z_{\ensuremath{U}, \ensuremath{V}} \defn \|\Amatt{\ensuremath{U}} \ensuremath{\vec{v}} - \Amatt{\ensuremath{V}} \ensuremath{\vec{v}} \|_2^2$.} A little calculation shows that $Z_{\ensuremath{U}, \ensuremath{V}} \sim \ensuremath{\gamma}(\ensuremath{U}, \ensuremath{V}) \chi^2_\ensuremath{n}$, where \begin{equation} \label{EqnDefnGamvar} \ensuremath{\gamma}(\ensuremath{U}, \ensuremath{V}) = 2 \, \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) \, \left (\ensuremath{s} - |\ensuremath{U} \cap \ensuremath{V}| \right). \end{equation} The following result bounds the upper tail behavior of the random variable $Z = \frac{1}{\ensuremath{N}^2} \sum_{\ensuremath{U} \neq \ensuremath{V}} Z_{\ensuremath{U}, \ensuremath{V}}$. \blems \label{LemConcen} The tail of $Z$ obeys the bound \begin{eqnarray*} \prob \left[Z \geq 4 \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) \ensuremath{s} \ensuremath{n} \right] & \leq & \frac{1}{2}.
\end{eqnarray*} \elems \noindent Using this lemma (see Appendix~\ref{AppConcen} for a proof of this claim), we are guaranteed that at least $1/2$ of the Gaussian ensembles satisfy the upper bound \begin{eqnarray} \label{EqnKeyQuant} \frac{1}{2} \; \frac{\frac{1}{\ensuremath{N}^2} \sum \limits_{i,j =1}^{\ensuremath{N}} D(\prob_i \, \| \, \prob_j)}{\log [\ensuremath{N}-1]} & = & \frac{1}{2} \; \frac{\frac{1}{\ensuremath{N}^2} \sum_{\ensuremath{U} \neq \ensuremath{V}} Z_{\ensuremath{U}, \ensuremath{V}}}{\log[\ensuremath{N}-1]} \; \leq \; \frac{4 \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) \ensuremath{s} \ensuremath{n}}{\log [\ensuremath{N}-1]}. \end{eqnarray} Hence, as long as the quantity~\eqref{EqnKeyQuant} remains bounded above by a constant less than one, the Fano bound implies that the probability of error averaged over the whole ensemble remains bounded away from zero. Consequently, we obtain the necessary condition \begin{eqnarray*} \ensuremath{n} & > & \frac{\log [\ensuremath{N}-1]}{4 \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) \ensuremath{s}} \end{eqnarray*} for asymptotically reliable recovery. To obtain a more transparent bound, we first lower bound $\log [\ensuremath{N}-1]$ via $\log [N-1] \geq \frac{1}{2} \log \ensuremath{N}$, and then further via \begin{eqnarray*} \frac{1}{2} \log \ensuremath{N} & = & \frac{1}{2} \log {\mdim \choose \ensuremath{s}} \; \geq \; \frac{1}{2} \ensuremath{s} \; \log \frac{\mdim}{\ensuremath{s}}, \end{eqnarray*} as stated in Appendix~\ref{AppBinBound}. Consequently, we obtain the necessary condition \begin{eqnarray} \ensuremath{n} \; = \; \Omega \left(\frac{1}{\ensuremath{s} \; \ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})} \ensuremath{s} \log \frac{\mdim}{\ensuremath{s}} \right), \end{eqnarray} as stated in Theorem~\ref{ThmNec}.
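Both proofs lean on elementary binomial-coefficient bounds, namely $\log {\ensuremath{s} \choose k} \leq k \log \frac{\ensuremath{s} e}{k}$ in the sufficiency argument and $\log {\mdim \choose \ensuremath{s}} \geq \ensuremath{s} \log \frac{\mdim}{\ensuremath{s}}$ above (both deferred to Appendix~\ref{AppBinBound}). A quick numerical sanity check of both, on an arbitrary grid of cases of our choosing:

```python
import math

# log C(s, k) <= k log(s e / k): upper bound used in the sufficiency proof.
# log C(p, s) >= s log(p / s):   lower bound used in the necessity proof.
for p in (50, 500, 5000):
    for s in (2, 10, 25):
        assert math.log(math.comb(p, s)) >= s * math.log(p / s)
        for k in range(1, s + 1):
            assert math.log(math.comb(s, k)) <= k * math.log(s * math.e / k)
```

Both inequalities follow from the standard estimates $(\mdim/\ensuremath{s})^\ensuremath{s} \leq {\mdim \choose \ensuremath{s}} \leq (\mdim e/\ensuremath{s})^\ensuremath{s}$.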
\section{Conclusion} \label{SecDiscussion} In this paper, we have analyzed the information-theoretic limits of the sparsity recovery problem for the linear observation model~\eqref{EqnLinearObs} with measurement vectors drawn from the standard Gaussian ensemble. We have established both lower and upper bounds on the number of observations $\ensuremath{n}$ as a function of the model dimension $\mdim$ and sparsity index $\ensuremath{s}$ that are required for asymptotically reliable recovery. There are a variety of open questions raised by our analysis. First, while our upper and lower bounds are essentially matching for certain regimes of scaling (e.g., sublinear sparsity with the minimum $\ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*}) = \Theta(1/\ensuremath{s})$), it is likely that the analysis can be tightened in other regimes. In particular, the analysis of the necessary conditions (see proof of Theorem~\ref{ThmNec}) involves some slack since it is based on analyzing a very restricted ensemble. Second, our results (in particular, a corollary of Theorem~\ref{ThmSuff}) reveal that with the sparsity index scaling linearly ($\ensuremath{s} = \alpha \mdim$ for some $\alpha \in (0,1)$), as long as the minimum value $\ensuremath{\mathcal{M}}^2(\ensuremath{\beta^*})$ decays sufficiently slowly, asymptotically reliable recovery is possible with only a linear number of observations (i.e., $\ensuremath{n} = \beta \mdim$ for some $\beta > 0$). Since our previous work~\cite{Wainwright06a_aller} established that the Lasso ($\ell_1$-constrained quadratic programming) cannot achieve reliable recovery in this particular $(\ensuremath{n}, \mdim, \ensuremath{s})$ regime, it remains an open question whether a computationally tractable method can approach such performance in the regime of linear sparsity. Third, whereas the current analysis has focused on a very special class of Gaussian ensemble, the arguments given here could be extended to a broader class of measurement ensembles.
\subsection*{Acknowledgements} This work was partially supported by NSF CAREER Award CCF-0545862, NSF Grant DMS-0605165, and an Alfred P. Sloan Foundation Fellowship. We thank Peter Bickel for helpful discussions and pointers.
https://arxiv.org/abs/math/0702301
Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting
https://arxiv.org/abs/2108.09962
Fractional Helly theorem for Cartesian products of convex sets
Helly's theorem and its variants show that for a family of convex sets in Euclidean space, local intersection patterns influence global intersection patterns. A classical result of Eckhoff in 1988 provided an optimal fractional Helly theorem for axis-aligned boxes, which are Cartesian products of line segments. Answering a question raised by Bárány and Kalai, and independently Lew, we generalize Eckhoff's result to Cartesian products of convex sets in all dimensions. In particular, we prove that given $\alpha \in (1-\frac{1}{t^d},1]$ and a finite family $\mathcal{F}$ of Cartesian products of convex sets $\prod_{i\in[t]}A_i$ in $\mathbb{R}^{td}$ with $A_i\subset \mathbb{R}^d$, if at least an $\alpha$-fraction of the $(d+1)$-tuples in $\mathcal{F}$ are intersecting, then at least a $(1-(t^d(1-\alpha))^{1/(d+1)})$-fraction of the sets in $\mathcal{F}$ are intersecting. This is a special case of a more general result on intersections of $d$-Leray complexes. We also provide a construction showing that our result on $d$-Leray complexes is optimal. Interestingly, the extremal example is representable as a family of Cartesian products of convex sets, implying that the bound $\alpha>1-\frac{1}{t^d}$ and the fraction $(1-(t^d(1-\alpha))^{1/(d+1)})$ above are also best possible. The well-known optimal construction for the fractional Helly theorem for convex sets in $\mathbb{R}^d$ does not satisfy the $(p,d+1)$-condition for sublinear $p$. Inspired by this, we give constructions showing that, somewhat surprisingly, imposing an additional $(p,d+1)$-condition has a negligible effect on improving the quantitative bounds in either the fractional Helly theorem for convex sets or the one for Cartesian products of convex sets. Our constructions offer a rich family of distinct extremal configurations for the fractional Helly theorem, implying in a sense that the optimal bound is stable.
\section{Introduction} A family of non-empty sets is {\em intersecting} if all sets within have an element in common. Let $\mathcal{F}$ be a (possibly infinite) family of non-empty sets. The {\em Helly number} of $\mathcal{F}$ is the minimal size of a subfamily $\mathcal{H}$ such that every proper subfamily of $\mathcal{H}$ is intersecting but $\mathcal{H}$ itself is not intersecting. Helly's theorem~\cite{Hel23}, one of the most classical results about intersection patterns of convex sets in Euclidean spaces, asserts that the family of all convex sets in $\mathbb{R}^d$ has Helly number $d+1$. There are a large number of variants and applications of Helly's theorem. See \cite{ALS17} for an overview on such Helly type theorems. One of the most important generalizations of Helly's theorem is the fractional Helly theorem, showing that if we only assume a positive fraction of the $(d+1)$-tuples are intersecting, then there is still a large intersecting subfamily. More precisely, the fractional Helly theorem asserts that for every positive integer $d$, there exists a function $\beta_d:(0,1]\to(0,1]$ such that for every $\alpha \in (0,1]$ and finite family $\mathcal{F}$ of convex sets in $\mathbb{R}^d$, if at least $\alpha\binom{|\mathcal{F}|}{d+1}$ of the $(d+1)$-tuples of $\mathcal{F}$ are intersecting, then $\mathcal{F}$ contains an intersecting subfamily of size at least $\beta_d(\alpha)|\mathcal{F}|$. The fractional Helly theorem was first shown by Katchalski and Liu~\cite{KL79} with a lower bound $\beta_d(\alpha) \geq \frac{\alpha}{d+1}$. When $d = 1$, it was shown by Abbott and Katchalski~\cite{AK79} that the optimal bound is $\beta_1(\alpha) = 1-\sqrt{1-\alpha}$. Later, Kalai~\cite{Kal84} and Eckhoff~\cite{Eck85} proved the optimal bound for all dimensions: $\beta_d(\alpha) = 1-(1-\alpha)^{1/(d+1)}$. See also~\cite{AK85} for a simple proof, which uses a set pair inequality~\cite{Alon85}. 
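As a quick numerical illustration of the optimal bound (a sketch of our own, not from the paper), the function below evaluates $\beta_d(\alpha) = 1-(1-\alpha)^{1/(d+1)}$ and can be compared against the earlier Katchalski--Liu bound $\alpha/(d+1)$.

```python
def beta(d, alpha):
    """Optimal fractional Helly function: beta_d(alpha) = 1 - (1 - alpha)^(1/(d+1))."""
    assert 0 < alpha <= 1
    return 1 - (1 - alpha) ** (1 / (d + 1))
```

For instance, with $d=1$ and $\alpha = 3/4$ this gives $1-\sqrt{1/4} = 1/2$, matching the Abbott--Katchalski value $\beta_1(3/4)$.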
In fact, the $\beta_d(\alpha)|\mathcal{F}|$ bound is only asymptotically tight, and a positive $o(|\mathcal{F}|)$ term can be added. Indeed, these results are all proved in the following exact form. \begin{theorem}[The fractional Helly Theorem] Let $d,r$, and $n$ be positive integers such that $n>d+r$, and let $\mathcal{F}$ be a family of $n$ convex sets in $\mathbb{R}^d$. If more than $\binom{n}{d+1}-\binom{n-r}{d+1}$ of the $(d+1)$-tuples of the family $\mathcal{F}$ are intersecting, then $\mathcal{F}$ contains an intersecting subfamily of size at least $d+r+1$. \end{theorem} \subsection{Cartesian product of convex sets} While the family of all convex sets in $\mathbb{R}^d$ has Helly number $d+1$, a proper subfamily of it may have a smaller Helly number. We observe the following fractional Helly type statement for such families of convex sets in $\mathbb{R}^d$. \begin{proposition}\label{thm:frachel_smallhelly} Let $d$ and $r$ be positive integers such that $d+1 \geq r \geq 2$. Then there exist $c_{d+1,r} \in [0,1)$ and $\beta_{d+1,r}:(c_{d+1,r},1]\to(0,1]$ such that for every (possibly infinite) family $\mathcal{F}$ of convex sets in $\mathbb{R}^d$ with Helly number $r$ the following holds: for every finite subfamily $\mathcal{G}$ of $\mathcal{F}$ and $\alpha \in (c_{d+1,r},1]$, if at least $\alpha\binom{|\mathcal{G}|}{r}$ of the $r$-tuples are intersecting, then $\mathcal{G}$ contains an intersecting subfamily of size at least $\beta_{d+1,r}(\alpha)|\mathcal{G}|$. \end{proposition} Note that the fractional Helly theorem is the special case $r=d+1$ with $c_{d+1,d+1}=0$ and $\beta_{d+1,d+1}(\alpha)= 1-(1-\alpha)^{1/(d+1)}$. When $r < d+1$, the assumption that $\mathcal{F}$ has Helly number $r$ above is necessary. Consider for instance a family of hyperplanes in $\mathbb{R}^d$ in general position. 
Here, hyperplanes in $\mathbb{R}^d$ are {\em in general position} if for every collection of $m \leq d$ hyperplanes, their intersection is a $(d-m)$-flat and every $d+1$ or more hyperplanes have no point in common. It is clear that there is no intersecting subfamily of size larger than $d$ while every $d$-tuple is intersecting. One of the most well-studied examples of families of convex sets with small Helly number is the family of Cartesian products of convex sets. For positive integers $t$ and $d$, let $\mathcal{F}_{t,d}$ be the family of all convex sets of the form $A_1\times A_2\times\cdots\times A_t \subset \mathbb{R}^d\times\mathbb{R}^d\times\cdots\times\mathbb{R}^d \simeq \mathbb{R}^{td}$, where each $A_i$ is a convex set in $\mathbb{R}^d$. Note that $\mathcal{F}_{t,d}$ has Helly number $d+1 \leq td+1$, hence Proposition~\ref{thm:frachel_smallhelly} applies. In particular, $\mathcal{F}_{t,1}$ is the family of axis-aligned boxes. A classical result of Eckhoff~\cite{Eck88} in 1988 provided a quantitatively optimal version of Proposition~\ref{thm:frachel_smallhelly} for axis-aligned boxes $\mathcal{F}_{t,1}$ with $c_{td+1,2}=1-\frac{1}{t}$ and $\beta_{td+1,2}(\alpha)=1-\sqrt{t(1-\alpha)}$. Answering a question raised by B\'ar\'any and Kalai~\cite[Problem 3.7]{BK21} and independently Lew~\cite{Lew}, we generalize Eckhoff's theorem on axis-aligned boxes to higher dimensions, proving a quantitatively optimal fractional Helly type theorem for Cartesian products of convex sets in all dimensions. \begin{theorem}\label{thm:frachel_genbox} Let $t$ and $d$ be positive integers. Let $\mathcal{F}$ be a finite subfamily of $\mathcal{F}_{t,d}$. For every $\alpha \in (1-\frac{1}{t^d},1]$, if at least $\alpha\binom{|\mathcal{F}|}{d+1}$ of the $(d+1)$-tuples are intersecting, then $\mathcal{F}$ contains an intersecting subfamily of size at least $(1-(t^d(1-\alpha))^{1/(d+1)})|\mathcal{F}|$. 
\end{theorem} In addition, we provide a construction that shows that the condition $\alpha > 1-\frac{1}{t^d}$ in the assumption and the fraction $1-(t^d(1-\alpha))^{1/(d+1)}$ in the conclusion cannot be improved. Indeed, we prove both the above theorem and the construction in their exact forms; see Theorem~\ref{thm:leray_intersection} and Theorem~\ref{thm:frachel_genbox2}. \subsection{Intersection of $d$-Leray complexes} The assertions of Helly's theorem and its colorful~\cite{KM05} and fractional~\cite{Kal84} generalizations hold for more general set systems, which satisfy certain topological conditions. One important such topological condition is the hereditary homological dimension ($d$-Lerayness) of the simplicial complex (nerve) that reflects all the intersection patterns of a given family. An {\em abstract simplicial complex} $X$ with ground set $V$ is a collection of subsets of $V$ that is closed under taking subsets: if $\sigma$ and $\tau$ are subsets of $V$ such that $\sigma \subset \tau \in X$, then $\sigma \in X$. Each $\sigma \in X$ is called a {\em face}, or a {\em simplex}, of $X$, and each $v\in V$ is called a {\em vertex} of $X$. For $W \subset V$, we denote by $X[W]$ the subcomplex of $X$ induced by $W$, that is, \[X[W] := \{\sigma \in X: \sigma \subset W\}.\] When $V$ is finite, $X$ is {\em $d$-Leray} if the reduced homology groups in dimension at least $d$ are trivial for all induced subcomplexes, that is, $\tilde{H}_i(X[W];\mathbb{Q}) = 0$ for all $i \geq d$ and $W \subset V$. Given a family $\mathcal{F}$ of non-empty sets, the {\em nerve} of $\mathcal{F}$ is the simplicial complex \[N(\mathcal{F}) := \{\mathcal{H} \subset \mathcal{F}: \mathcal{H} = \varnothing\text{ or }\mathcal{H}\text{ is intersecting}\}.\] It is a well-known fact that the nerve of a finite family of convex sets in $\mathbb{R}^d$ is $d$-Leray. See \cite{Tan13} for an overview on Helly type theorems and Leray complexes. 
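Since the nerve records exactly which subfamilies are intersecting, it can be computed directly for small families. The sketch below is our own illustration; it uses finite sets of points as stand-ins for convex sets, which suffices to exhibit the combinatorics of the nerve.

```python
from itertools import combinations

def nerve(sets):
    """Nerve of a family of non-empty sets: the empty face together with all
    index subsets whose members share a common element.  Closure under
    taking subsets holds automatically, since a common element of a tuple
    is common to every subtuple."""
    faces = {frozenset()}
    for k in range(1, len(sets) + 1):
        for idx in combinations(range(len(sets)), k):
            if set.intersection(*(sets[i] for i in idx)):
                faces.add(frozenset(idx))
    return faces
```

For example, three sets that intersect pairwise but have no common element yield the boundary of a triangle: all vertices and edges appear as faces, but the full triangle does not.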
It is straightforward from the definition of $d$-Lerayness that given a $d$-Leray complex $K$, every vertex subset $W$ with $\binom{W}{d+1} \subset K$ is a face of $K$. This shows that every family of non-empty sets has Helly number at most $d+1$ if the nerve of any finite subfamily is $d$-Leray. The optimal fractional Helly theorem for convex sets in $\mathbb{R}^d$ can also be generalized to $d$-Leray complexes~\cite[Theorem 13]{AKMM02}: if $K$ is a $d$-Leray complex on $n$ vertices, then \begin{align}\label{leray_frachel} \dim K < d+r \implies f_d(K) \leq \binom{n}{d+1} - \binom{n-r}{d+1}, \end{align} where $f_d(K)$ is the number of $d$-dimensional faces of $K$ and the dimension $\dim K$ of the complex $K$ is the size of a largest face minus $1$. We study the following natural question on the intersection of $d$-Leray complexes. Write $[t] := \{1,2,\ldots,t\}$. Let $K_i$, $i\in[t]$, be $n$-vertex $d$-Leray complexes and consider their intersection $\bigcap_{i\in [t]} K_i$. If $\bigcap_{i\in [t]} K_i$ has dimension less than $d+r$, how many $d$-dimensional faces can it have? Consider the following construction: For $r\geq (t-1)d$ and $n> r+d$, partition a set $V$ of $n$ vertices into $V_0, V_1,\dots, V_t$ of sizes $r-(t-1)d, n_1,\dots, n_t$, respectively, and let any set $U\subseteq V$ satisfying $|U\cap V_i|\leq d$ for all $i\in [t]$ be a face. If we optimize over the choices of $n_1,\dots, n_t$ with $n_1+\dots + n_t = n-r+(t-1)d$, we obtain a simplicial complex with $$g_d(n,t,r) = \binom{n}{d+1} - \min_{n_1,\dots, n_t} \sum_{i\in [t]} \binom{n_i}{d+1}$$ $d$-dimensional faces. The value of $g_d(n,t,r)$ is attained when all $n_1,\dots, n_t$ are as close as possible, i.e. $|n_i-n_j|\leq 1$ for all $i,j\in [t]$. We will see later that such a simplicial complex is an intersection of $t$ many $d$-Leray complexes and has dimension exactly $d+r-1$. Denote by $K_d(n,t,r)$ the above complex with the optimal choice of $n_1,\dots, n_t$. 
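The count $g_d(n,t,r)$ can be evaluated directly from the balanced partition described above; the following is a minimal sketch of our own, using the stated fact that the minimum is attained when the $n_i$ are as equal as possible.

```python
from math import comb

def g(d, n, t, r):
    """g_d(n, t, r) = C(n, d+1) - min sum_i C(n_i, d+1), the minimum taken
    over n_1 + ... + n_t = n - r + (t-1) d; attained at the balanced
    partition with s parts equal to q+1 and t-s parts equal to q."""
    m = n - r + (t - 1) * d
    q, s = divmod(m, t)
    return comb(n, d + 1) - (s * comb(q + 1, d + 1) + (t - s) * comb(q, d + 1))
```

For $t=1$ this reduces to $\binom{n}{d+1}-\binom{n-r}{d+1}$ from \eqref{leray_frachel}, and for $d=1$, $r=t-1$ it recovers the Tur\'an numbers: e.g. $g_1(10,2,1)=25=\lfloor 10^2/4\rfloor$, the maximum number of edges of a triangle-free graph on $10$ vertices.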
We show that, given the dimension of the intersection of $d$-Leray complexes, $g_d(n,t,r)$ bounds the number of $d$-dimensional faces from above. Thus, the construction $K_d(n,t,r)$ is an extremal example. \begin{theorem}\label{thm:leray_intersection} Let $d$, $r$, $t$, and $n$ be positive integers such that $n > d+r$ and $r \geq (t-1)d$. Let $V$ be a set of $n$ vertices, and let $K_1,\ldots,K_t$ be $d$-Leray complexes on $V$. If the intersection $K = \bigcap_{i=1}^{t}K_i$ has dimension $\dim K < d+r$, then \[f_d(K) \leq g_d(n,t,r).\] Moreover, the above upper bound is best possible, that is, there exists such a $K$ for which equality holds. \end{theorem} \begin{rmk}\label{rmk:Hellynumber} Theorem~\ref{thm:frachel_genbox} in fact follows from the above theorem. The nerve of Cartesian products $\prod_{i\in [t]}A_i$ in $\prod_{i\in [t]}\mathbb{R}^d$ is the intersection of the nerves of their $i$-th coordinate projections. Each such nerve is the nerve of a family of convex sets in $\mathbb{R}^d$, hence a $d$-Leray complex. If we rewrite the above exact result into an asymptotic form, $\alpha \binom{n}{d+1}$ being equal to $g_d(n,t,r)+1$ implies that $d+r+1 \geq (1- (t^d (1-\alpha))^{1/(d+1)}) n$. Therefore, Theorem~\ref{thm:frachel_genbox} immediately follows from Theorem~\ref{thm:leray_intersection}. \end{rmk} \subsection{Rich family of extremal configurations} A classical construction showing the sharpness of the fractional Helly theorem in~\eqref{leray_frachel} is as follows: consider $r$ copies of $\mathbb{R}^d$ and $n-r$ hyperplanes in general position. For the sharpness of Theorem~\ref{thm:leray_intersection}, one can consider the complex $K_d(n,t,r)$ (see Remark~\ref{rmk:tightness}). Unless $r$ is very close to $n$, both of those constructions contain a large $\Omega(n)$-size subfamily with no intersecting $(d+1)$-tuples. 
It is thus natural to ask if we can achieve a better bound when stepping away from such families, that is, when we consider families that contain no large subfamilies without intersecting $(d+1)$-tuples. Note that this is equivalent to imposing a \emph{$(p,d+1)$-condition}, i.e. every $p$-tuple contains an intersecting $(d+1)$-tuple. The $(p,d+1)$-condition when $p$ is a constant is well-studied; it is related to the $(p,q)$-theorem, see the concluding remarks for more on this. It is very tempting to conjecture that the quantitative bounds in the fractional Helly theorem or in Theorems~\ref{thm:frachel_genbox} and~\ref{thm:leray_intersection} can be further improved for $n$-element families satisfying an $(o(n),d+1)$-condition, i.e.~every linear-size subfamily contains an intersecting $(d+1)$-tuple. Indeed, such a phenomenon has occurred in Ramsey--Tur\'an theory, in which the maximum edge-density of a graph without a fixed-size clique can be significantly lowered when imposing an additional sublinear independence number condition, see e.g.~\cite{LRSS21} and references therein. Our next result, somewhat to our own surprise, shows that this speculation is not true in a strong sense. Let alone an $(o(n),d+1)$-condition, even a $(p,q)$-condition with constant $p$ is not sufficient. For given $\alpha \in (0,1]$, we construct families $\mathcal{F}$ in $\mathbb{R}^d$ with $\alpha \binom{|\mathcal{F}|}{d+1}$ intersecting $(d+1)$-tuples satisfying the $(C_{\alpha,d},d+1)$-condition which attain the same bound as in~\eqref{leray_frachel}. Here $C_{\alpha,d}$ is some constant depending on $\alpha$ and $d$. Our construction shows that the optimal bound in the fractional Helly theorem is stable in the sense that there is a rich family of different extremal configurations. \begin{theorem}\label{thm:d-repr_example} Let $d,r,n$ be positive integers with $n\geq d+r$. There exists a family $\mathcal{F}$ of $n$ convex sets in $\mathbb{R}^d$ satisfying the following, where $K$ is the nerve of $\mathcal{F}$. 
\begin{itemize} \item any subfamily of more than $d+\frac{n-d}{r+1}$ sets in $\mathcal{F}$ contains an intersecting $(d+1)$-tuple, \item the maximal size of an intersecting subfamily of $\mathcal{F}$ is at most $d+r$, i.e. $\dim K< d+r$, \item $\mathcal{F}$ contains $\binom{n}{d+1} - \binom{n-r}{d+1}$ intersecting $(d+1)$-tuples, i.e. the equality in \eqref{leray_frachel} holds. \end{itemize} \end{theorem} Note that for given $\alpha \in (0,1]$, the equation $\alpha\binom{n}{d+1}= \binom{n}{d+1} - \binom{n-r}{d+1}$ implies $$r = \beta_d(\alpha) n + o(n) = (1- (1-\alpha)^{1/(d+1)})n+o(n),$$ and the bound $p=d+\frac{n-d}{r+1}+1$ on the $(p,d+1)$-condition is at most a constant $C_{\alpha,d}$ depending on $\alpha$ and $d$. Thus the $(C_{\alpha,d},d+1)$-condition does not improve the bound $\beta_d(\alpha)= 1- (1-\alpha)^{1/(d+1)}$ any further. We also give a construction for the sharpness of Theorem~\ref{thm:frachel_genbox}, showing that the additional $(C_{\alpha,t,d},d+1)$-condition has a negligible effect on the quantitative bound of the fractional Helly theorem for Cartesian products of convex sets. \begin{theorem}\label{thm:frachel_genbox2} Let $d,t,r,n$ be positive integers with $n\geq d+r$ and $r>(t-1)d$. There exists a family $\mathcal{F}\subseteq \mathcal{F}_{t,d}$ of $n$ convex sets satisfying the following. \begin{itemize} \item any subfamily of at least $d + \frac{n- t(d-1)}{r-(t-1)d}$ sets in $\mathcal{F}$ contains an intersecting $(d+1)$-tuple, \item the maximal size of an intersecting subfamily of $\mathcal{F}$ is at most $d+r$, \item the number of intersecting $(d+1)$-tuples is exactly $g_d(n,t,r)$. \end{itemize} \end{theorem} Again, for given $\alpha\in (1- \frac{1}{t^d},1]$, if $\mathcal{F}$ contains $\alpha\binom{n}{d+1} = g_d(n,t,r)$ intersecting $(d+1)$-tuples, then we have $$r = (1- (t^d(1-\alpha))^{1/(d+1)})n+o(n)$$ and the bound $d + \frac{n-t(d-1)}{r-(t-1)d}+1$ is at most a constant $C_{\alpha,t,d}$ depending only on $\alpha, d, t$. 
Hence, the $(C_{\alpha,t,d},d+1)$-condition does not improve the bound $1- (t^d(1-\alpha))^{1/(d+1)}$ any further. \medskip \noindent\textbf{Organization.} The rest of the paper is organized as follows. In Section~\ref{sec:Leray_intersect}, we prove Theorem~\ref{thm:leray_intersection}, and provide an example that shows the tightness of Theorems~\ref{thm:frachel_genbox} and~\ref{thm:leray_intersection}. In Section~\ref{sec:construction}, we give constructions to prove Theorems~\ref{thm:d-repr_example} and~\ref{thm:frachel_genbox2}. We finish with a few concluding remarks which include some open problems. \section{Intersection of $d$-Leray complexes}\label{sec:Leray_intersect} The basic idea to prove Theorem~\ref{thm:leray_intersection} is to apply the fractional Helly theorem for $d$-Leray complexes repeatedly. The following proposition is a useful tool for our optimization. It is an easy consequence of Karamata's inequality (see, e.g., \cite{Kar32}). \begin{proposition}\label{prop:jensen} Let $t$ and $k$ be positive integers. Let $x_1,\dots, x_t$ be nonnegative integers with $x=\sum_{i\in [t]} x_i$. If $x \geq tq + s$ for some nonnegative integers $q$ and $s$ with $0 \leq s < t$, then \[\sum_{i\in[t]} \binom{x_i}{k} \geq s \binom{q+1}{k} + (t-s)\binom{q}{k}.\] \end{proposition} Before we prove Theorem~\ref{thm:leray_intersection}, we remark that, when $d = 1$ and $r = t-1$, we have an obvious upper bound from a famous result in extremal graph theory, the so-called Tur\'{a}n's theorem. \begin{theorem}[Tur\'{a}n's theorem, \cite{Tur41}]\label{thm:turan} Let $q$, $s$, $t$ and $n$ be positive integers such that $n = tq + s$ and $0 \leq s < t$. For every graph on $n$ vertices with no clique of size $t+1$, the number of edges is at most \[ g_1 (n,t,t-1)=\binom{n}{2} - s\binom{q+1}{2} - (t-s)\binom{q}{2} .\] \end{theorem} Now we prove the optimal fractional Helly theorem for the intersection of $d$-Leray complexes. 
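Before doing so, we note that Proposition~\ref{prop:jensen} can be sanity-checked by brute force on small instances; the sketch below is our own code, enumerating all splittings of $x$ into $t$ nonnegative parts and comparing against the balanced right-hand side.

```python
from math import comb

def compositions(total, parts):
    """All tuples of `parts` nonnegative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def balanced_value(x, t, k):
    """Right-hand side of the proposition with x = tq + s, 0 <= s < t:
    s * C(q+1, k) + (t-s) * C(q, k)."""
    q, s = divmod(x, t)
    return s * comb(q + 1, k) + (t - s) * comb(q, k)
```

For instance, with $x=9$, $t=3$, $k=2$, every composition $(x_1,x_2,x_3)$ satisfies $\sum_i \binom{x_i}{2} \geq 3\binom{3}{2} = 9$, with equality at the balanced choice $(3,3,3)$.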
\begingroup \def\ref{thm:leray_intersection}{\ref{thm:leray_intersection}} \begin{theorem} Let $d$, $r$, $t$, and $n$ be positive integers such that $n > d+r$ and $r \geq (t-1)d$. Let $V$ be a set of $n$ vertices, and let $K_1,\ldots,K_t$ be $d$-Leray complexes on $V$. If the intersection $K = \bigcap_{i=1}^{t}K_i$ has dimension $\dim K < d+r$, then \[f_d(K) \leq g_d(n,t,r).\] \end{theorem} \addtocounter{theorem}{-1} \endgroup \begin{proof} If $t = 1$, then the statement follows from \eqref{leray_frachel}. Thus we may assume $t > 1$. Since adding all $i$-tuples with $i\leq d$ neither changes the $d$-Lerayness nor affects the conclusion of the statement, we may assume that every $i$-tuple in $V$ forms a face for $i\leq d$. From each $K_i$, we will take a set $F_i$ of $(d+1)$-tuples that are not faces so that the $F_i$'s are mutually disjoint. Then we obtain an upper bound $f_d(K) \leq \binom{n}{d+1} - \sum_{i}|F_i|$. Let $\bar{f_d}(\cdot)$ denote the number of $(d+1)$-tuples that are not faces. Note that we may assume $\dim K \geq d$. Then we have $\dim K_i \geq \dim K \geq d$ for each $i \in [t]$. Suppose $\dim K_1 = r_1 + d-1$ for some $r_1 > 0$. Then, by~\eqref{leray_frachel}, we have $\bar{f_d}(K_1) \geq \binom{n-r_1}{d+1}$. Let $F_1$ be the set of all $(d+1)$-tuples contributing to $\bar{f_d}(K_1)$, and let $W_1$ be a $(d+r_1-1)$-dimensional face in $K_1$. For $1 \leq j < t$, we define $F_{j+1}$ and $W_{j+1}$ inductively as follows. Assume that $W_j$ is a $(d+r_j-1)$-dimensional face of $K_j$. Let $$r_{j+1} = \dim K_{j+1}[W_j] -(d-1).$$ Then we have \[\bar{f_d}(K_{j+1}[W_j]) \geq \binom{|W_j|-r_{j+1}}{d+1} = \binom{d+r_j-r_{j+1}}{d+1}.\] Indeed, this is trivial if $r_{j+1}=0$ and follows from \eqref{leray_frachel} if $r_{j+1}>0$. Let $F_{j+1}$ be the set of all $(d+1)$-tuples contributing to $\bar{f_d}(K_{j+1}[W_j])$. Note that $F_{j+1}$ is disjoint from $\bigcup_{1 \leq i \leq j} F_i$, since $W_i$ is a simplex in $K_i$ for each $i\in [j]$. 
Let $W_{j+1}$ be a largest face in $K_{j+1}[W_j]$, which is a $(d+r_{j+1}-1)$-dimensional face. As we assume every $d$-tuple forms a face, we have $r_{j+1}\geq 0$. Repeating this defines $F_1,\dots, F_t$ and $W_1,\dots, W_t$. By collecting all $F_j$'s, we obtain \begin{align}\label{toapplyjensen} \bar{f_d}(K) \geq \binom{n-r_1}{d+1} + \binom{d+r_1-r_2}{d+1} + \binom{d+r_2-r_3}{d+1} + \cdots + \binom{d+r_{t-1}-r_t}{d+1}.\end{align} Note that by definition $W_t \in K$, thus we have $$d+r_t -1 =\dim K[W_t]\leq \dim K < d+r,$$ implying that $r_t \leq r$. Consequently, \[(n-r_1)+ (d+r_1-r_2) +\dots + (d+r_{t-1}-r_t) \geq n-r + (t-1)d.\] By Proposition~\ref{prop:jensen}, $\bar{f_d}(K)$ is at least $\sum_{i\in [t]} \binom{n_i}{d+1}$ where $\sum_{i\in [t]} n_i = n-r+(t-1)d$ and $|n_i-n_j|\leq 1$ for all $i\neq j$. This completes the proof. \end{proof} \begin{rmk}\label{rmk:tightness} To see the sharpness of Theorem~\ref{thm:leray_intersection}, take the complex $K_{d}(n,t,r)$ and let $K_i$, $i\in[t]$, be the simplicial complex where a set $U\subset V$ is a face of $K_i$ if and only if $|U\cap V_i|\leq d$. This complex $K_i$ can be expressed as the nerve of the family $\mathcal{F}_i$ of convex sets in $\mathbb{R}^d$ consisting of $|V|-|V_i|$ copies of $\mathbb{R}^d$ corresponding to the vertices outside $V_i$ and $|V_i|$ hyperplanes in general position in $\mathbb{R}^d$ corresponding to the vertices in $V_i$. For each $v\in V$, let $\psi_i(v)$ be the convex set in $\mathcal{F}_i$ corresponding to the vertex $v$. Let $\mathcal{F}$ be the collection of $n$ Cartesian products $\prod_{i\in [t]} \psi_i(v)$, $v\in V$. Then the nerve of $\mathcal{F}$ is $K_d(n,t,r) = \bigcap_{i\in [t]} K_i$. A maximal face contains all vertices in $V_0$ and $d$ vertices from each $V_i$ with $i>0$. The dimension of $K_d(n,t,r)$ is exactly $d+r-1$. This shows that the upper bound in Theorem~\ref{thm:leray_intersection} is tight. 
\end{rmk} \section{No large subfamily without intersecting $(d+1)$-tuples}\label{sec:construction} \subsection{Extremal example of convex sets in $\mathbb{R}^d$}\label{subsec:d-repr} We now prove Theorem~\ref{thm:d-repr_example} by constructing a family of convex sets in $\mathbb{R}^d$ satisfying the conditions in the theorem. For this, we need the following lemma. \begin{lemma}\label{lemma:hyperplanes} Let $d$, $n$, and $r$ be positive integers such that $n \geq d+r$. Then there exist $n$ convex sets $A_1,A_2,\ldots,A_n$ in $\mathbb{R}^d$ such that for every $1 \leq i_1 < i_2 < \cdots < i_{d+1} \leq n$, the intersection $\bigcap_{j =1}^{d+1}{A_{i_j}}$ is non-empty if and only if $i_{d+1} - i_d \leq r$. \end{lemma} Before we prove Lemma~\ref{lemma:hyperplanes}, we first present the proof of Theorem~\ref{thm:d-repr_example} based on Lemma~\ref{lemma:hyperplanes}. \begin{proof}[Proof of Theorem~\ref{thm:d-repr_example}] Let $\mathcal{H}$ be the hypergraph on $[n]$ such that for every $1 \leq a_1 < a_2 < \cdots < a_{d+1} \leq n$, the set $\{a_1,a_2,\ldots,a_{d+1}\}$ is an edge in $\mathcal{H}$ if and only if $a_{d+1} - a_d \leq r$. By Lemma~\ref{lemma:hyperplanes}, there exists a family $\mathcal{F}$ of $n$ convex sets in $\mathbb{R}^d$ whose $(d+1)$-intersection hypergraph is isomorphic to $\mathcal{H}$, i.e. the collection of intersecting $(d+1)$-tuples of convex sets in $\mathcal{F}$ forms a hypergraph isomorphic to $\mathcal{H}$. We claim that the family $\mathcal{F}$ satisfies the conditions in the statement. \begin{claim}\label{claim:f_d} $\displaystyle{|E(\mathcal{H})| = \binom{n}{d+1} - \binom{n-r}{d+1}}$. \end{claim} \begin{poc} It suffices to show that $\Big|\binom{[n]}{d+1}-E(\mathcal{H})\Big| = \binom{n-r}{d+1}.$ For each $A\in \binom{[n]}{d+1}-E(\mathcal{H})$, we know that the largest number in $A$ is bigger than the second largest number plus $r$. Let $h(A)$ be the set we obtain from $A$ by replacing the largest number $a$ in $A$ with $a-r$. 
Then it is easy to see that $h$ is a bijection from $\binom{[n]}{d+1}-E(\mathcal{H})$ to $\binom{[n-r]}{d+1}$, proving the claim. \end{poc} \begin{claim}\label{claim:dim} The maximal size of a clique of $\mathcal{H}$ is $d+r$, and the maximal size of an independent set of $\mathcal{H}$ is $d + \lfloor\frac{n-d}{r+1}\rfloor$. \end{claim} \begin{poc} Let $W$ be a set of vertices $w_1,w_2,\ldots,w_k$ of $\mathcal{H}$ such that $1 \leq w_1 < w_2 < \cdots < w_k \leq n$. We first prove that the maximum size of a clique of $\mathcal{H}$ is $d+r$. If $W$ forms a clique, then $\{w_1,\dots,w_d,w_k\}$ must be an edge in $\mathcal{H}$. Thus $w_d< w_{d+1}< \dots < w_k \leq w_d+r$. This implies $k-d\leq r$. Hence, every clique of $\mathcal{H}$ has size at most $d + r$. On the other hand, equality holds since $\{1,2,\ldots,d+r\}$ is a clique of size $d+r$. Next, we prove that the maximum size of an independent set of $\mathcal{H}$ is $d+s$, where $s=\lfloor\frac{n-d}{r+1}\rfloor$. Again, by the definition of $\mathcal{H}$, $W$ is independent in $\mathcal{H}$ if and only if $w_j - w_i > r$ for every $d \leq i < j \leq k$. Therefore, if $W$ is an independent set of $\mathcal{H}$, then we have $n\geq w_k \geq (r+1)(k-d)+ w_{d}$. Since $w_{d}\geq d$, we have $|W|=k \leq d+ s$. On the other hand, $$\{1,\dots, d-1,d, d+(r+1), d+2(r+1),\dots, d+ s(r+1)\}$$ is an independent set of size $d+ s$. \end{poc} Note that, by Helly's theorem for convex sets in $\mathbb{R}^d$, a clique of $\mathcal{H}$ of size at least $d+1$ corresponds to an intersecting subfamily of $\mathcal{F}$. Thus, Claim~\ref{claim:f_d} and Claim~\ref{claim:dim} show that $\mathcal{F}$ satisfies the conditions of the statement. This completes the proof. \end{proof} Now we give a proof of Lemma~\ref{lemma:hyperplanes} via an explicit construction. \begin{proof}[Proof of Lemma~\ref{lemma:hyperplanes}] We construct $A_k$ inductively. Let $e_d = (0,0,\ldots,0,1)\in\mathbb{R}^d$. 
First, let $u_1,u_2,\ldots,u_d$ be vectors in $\mathbb{R}^d$ such that every $d$ of the vectors $u_1,u_2,\ldots,u_d,e_d$ are linearly independent in $\mathbb{R}^d$. Let $H_1,H_2,\ldots,H_d$ be $d$ hyperplanes in $\mathbb{R}^d$ where $u_i$ is the normal vector of $H_i$ for each $i\in [d]$. Note that $H_1,\dots, H_d$ are in general position and every $d$ hyperplanes in general position in $\mathbb{R}^d$ meet at exactly one point. We start by letting $A_i = H_i$ for each $i \in [d]$. Given a point $x = (x_1,x_2,\ldots,x_d) \in \mathbb{R}^d$, denote by $\pi_d(x)$ the $d$-th coordinate of $x$, that is, $\pi_d(x) = x_d$. Let $k \geq d$. Suppose we have found hyperplanes $H_{d+1},\ldots,H_k$ having normal vectors $u_{d+1},\ldots,u_k$ respectively and convex sets $A_{d+1}, \ldots, A_k$ such that the following hold. \begin{enumerate}[(i)] \item Every $d$ of the vectors $u_1,u_2,\ldots,u_k,e_d$ are linearly independent. In particular, the hyperplanes $H_1,H_2,\ldots,H_k$ are in general position in $\mathbb{R}^d$. \item For each $d < i \leq k$, there exists a positive real number $s_i$ such that \[A_i = \bigcup_{0 \leq s \leq s_i} \left(H_i-s\cdot e_d\right),\] where $H_i-s\cdot e_d = \{z-(0,0,\ldots,0,s): z \in H_i\}$. That is, $A_i$ is the region bounded by two parallel hyperplanes $H_i-s_i\cdot e_d$ and $H_i$. Thus, for each $\sigma\in\binom{[k]}{d}$, the intersection $A_\sigma = \bigcap_{i \in \sigma}A_i$ is compact. \item $\bigcap_{j =1}^{d+1}{A_{i_j}} \neq \varnothing$ if and only if $i_{d+1} - i_d \leq r$ for every $1 \leq i_1 < i_2 < \cdots < i_{d+1} \leq k$. \item For every $\sigma, \tau \in \binom{[k]}{d}$, \[\max\{\pi_d(x): x \in A_\sigma\} < \max\{\pi_d(x): x \in A_\tau\}\] if the maximal element of $\sigma$ is smaller than the maximal element of $\tau$. Note that the above maximum exists as $A_{\sigma}$ and $A_{\tau}$ are both compact. \end{enumerate} Let $\ell = k+1-r$. 
We will define a hyperplane $H_{k+1}$ with normal vector $u_{k+1}$ and a closed convex set $A_{k+1}\subset \mathbb{R}^d$ that satisfies the above (i)--(iv). For each $\sigma \in \binom{[k]}{d}$, let $t_\sigma = \max\{\pi_d(x):x\in A_\sigma\}$. In order to ensure (iv), we take a vector $y \in \mathbb{R}^d$ with large $d$-th coordinate such that $\pi_d(y) > t_\sigma$ for any $\sigma \in \binom{[k]}{d}$. Let $H$ be the hyperplane passing through $y$ orthogonal to $e_d$: \[H = \{z \in \mathbb{R}^d: \langle z-y,e_d\rangle = 0\},\] where $\langle a,b\rangle$ is the Euclidean inner product of two vectors $a, b \in \mathbb{R}^d$. Note that for any point $z$ on the hyperplane $H$, we have $\pi_d(z) = \pi_d(y) > t_\sigma$ for $\sigma \in \binom{[k]}{d}$. In order to ensure (iii) later, we wish to take $A_{k+1}$ so that it intersects $A_{\sigma}$ for $\sigma \in \binom{[k]}{d}$ if and only if $\sigma$ contains a number larger than or equal to $\ell$. As (iv) holds for $H_1,\dots, H_k$, to check that (iii) remains true with $A_{k+1}$ added, we only have to consider $(d+1)$-tuples containing $k+1$ and $\sigma \in \binom{[\ell]}{d}$. For this, let \[ t = \min\{t_\sigma:\sigma\in\binom{[\ell]}{d} \text{ and } \ell\in\sigma\} \quad \text{and} \quad t' = \max\{t_\sigma:\sigma\in\binom{[\ell-1]}{d}\}. \] When $\ell-1 < d$, let $t' = -\infty$. Then by definition and (iv), we have $t' < t$, and hence we can take a positive real number $s_{k+1}$ such that $t' < \pi_d(y)-s_{k+1} < t$. Let \[A = \bigcup_{0\leq s\leq s_{k+1}}\left(H-s\cdot e_d\right).\] Clearly, if $H$ and $A$ were to play the roles of $H_{k+1}$ and $A_{k+1}$, respectively, then (ii), (iii), and (iv) hold. However, the normal vector of $H$ is $e_d$, so (i) would not hold in this case. See Figure~\ref{sec4-fig1} for an illustration when $d=2, r=1, k=4$. 
\begin{figure}[htbp] \centering \includegraphics[scale=0.67]{sec4-fig1.pdf} \caption{If $A=A_{k+1}$ then conditions (ii), (iii), and (iv) hold.} \label{sec4-fig1} \end{figure} In order to ensure (i) while keeping (ii)--(iv), we slightly perturb $H$ and $A$. Take $u_{k+1} = e_d + (\epsilon_1,\epsilon_2,\ldots,\epsilon_d) \in\mathbb{R}^d$ for some $\epsilon_i \in \mathbb{R}$, $i \in [d]$, and let \[H_{k+1} = \{z \in \mathbb{R}^d: \langle z-y,u_{k+1}\rangle = 0\}\;\;\;\text{and}\;\;\;A_{k+1} = \bigcup_{0\leq s \leq s_{k+1}}(H_{k+1}-s\cdot e_d).\] Since $k$ is finite, we can take $\epsilon_i$'s with $|\epsilon_i|$ small enough that (i) holds while (ii)--(iv) still hold. See Figure~\ref{sec4-fig2} for an illustration of such a modification of the example in Figure~\ref{sec4-fig1}. \begin{figure}[htbp] \centering \includegraphics[scale=0.67]{sec4-fig2_v2.pdf} \caption{$A_5$ is obtained by modifying $A$. It satisfies (i), (ii), (iii), and (iv).} \label{sec4-fig2} \end{figure} Repeat this process until we obtain $A_n$. This completes the proof due to (iii). \end{proof} \begin{comment} Start with $d$ hyperplanes $A_1,A_2,\ldots,A_d$ of $\mathbb{R}^d$ in general position. Clearly, the desired conditions hold for these hyperplanes. Suppose we have constructed $n$ hyperplanes $H_1,H_2,\ldots,H_n$ in $\mathbb{R}^d$ that satisfy the conditions. Let $u_i$ be the normal vector of $H_i$ for each $i$. We show that there exists a hyperplane $H_{n+1}$ in $\mathbb{R}^d$ so that $H_1,H_2,\ldots,H_{n+1}$ satisfy the conditions. For each $\sigma \in \binom{[n]}{d}$, let $x_\sigma$ be the unique point in $\bigcap_{i \in \sigma}H_i$. Take $y \in \mathbb{R}^d$ such that $\pi_d(y) > \pi_d(x_\sigma)$ for all $\sigma \in \binom{[n]}{d}$. Consider the hyperplane $H_{n+1}' = \{z \in \mathbb{R}^d: \langle z-y,e_d\rangle = 0\}$, where $\langle a,b\rangle$ is the Euclidean inner product of two vectors $a, b \in \mathbb{R}^d$. 
Note that for every $z \in H_{n+1}'$, we have $\pi_d(z) = \pi_d(y) > \pi_d(x_\sigma)$ for all $\sigma \in \binom{[n]}{d}$. For any $\epsilon > 0$, we can take small enough $\epsilon_1,\epsilon_2,\ldots,\epsilon_d > 0$, so that for $u_{n+1} = e_d + (\epsilon_1,\epsilon_2,\ldots,\epsilon_d)$ and $H_{n+1} = \{z \in \mathbb{R}^d: \langle z-y,u_{n+1}\rangle = 0\}$ the following holds: for each $\tau \in \binom{[n]}{d-1}$, if $z_\tau$ is the unique point in $(\bigcap_{i\in\tau}H_i) \cap H_{n+1}$, then $|\pi_d(y)-\pi_d(z_\tau)| < \epsilon$. Therefore, since the number of hyperplanes is finite, we can find $\epsilon_1,\epsilon_2,\ldots,\epsilon_d > 0$ so that \begin{itemize} \item the hyperplanes $H_1,H_2,\ldots,H_{n+1}$ are in general position, \item for each $\tau \in \binom{[n]}{d-1}$, we have $\pi_d(z_\tau) > \pi_d(x_\sigma)$ for all $\sigma \in \binom{[n]}{d}$. \end{itemize} Now $H_1,H_2,\ldots,H_{n+1}$ are $n+1$ hyperplanes in $\mathbb{R}^d$ that satisfy the desired conditions. \end{comment} \subsection{Extremal example of Cartesian products of convex sets in $\mathbb{R}^d$} Based on the construction in Section~\ref{subsec:d-repr}, we can also construct a subfamily of $\mathcal{F}_{t,d}$ in which every sufficiently large subfamily contains an intersecting $(d+1)$-tuple and which satisfies an almost optimal fractional Helly property for $(d+1)$-tuples. We construct this subfamily to prove Theorem~\ref{thm:frachel_genbox2}. \begin{proof}[Proof of Theorem~\ref{thm:frachel_genbox2}] Let $n_1,\dots, n_t$ and $r_1,\dots, r_t$ be integers so that $n_1+\dots + n_t = n$ and $r_1 + \dots + r_t = r-(t-1)d$, and moreover $ n_1\leq \dots \leq n_t \leq n_1+1$ and $r_1\leq \dots \leq r_t \leq r_1+1$. 
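As an aside from the formal proof, the balanced choice of the $n_i$ and $r_i$ and the resulting count $g_d(n,t,r)$ can be checked numerically, assuming $r\geq (t-1)d$ so that all $r_i$ are non-negative. The following Python sketch is ours and not part of the argument.

```python
from math import comb

def balanced_split(total, t):
    """Split `total` into t parts n_1 <= ... <= n_t <= n_1 + 1."""
    q, s = divmod(total, t)
    return [q] * (t - s) + [q + 1] * s

def g(d, n, t, r):
    """binom(n, d+1) minus the non-intersecting (d+1)-tuples, which all
    come from a single coordinate family F'_i with parameters n_i, r_i."""
    ns = balanced_split(n, t)
    rs = balanced_split(r - (t - 1) * d, t)
    diffs = [n_i - r_i for n_i, r_i in zip(ns, rs)]
    # the two properties claimed in the displayed equation below
    assert sum(diffs) == n - r + (t - 1) * d
    assert max(diffs) - min(diffs) <= 1
    return comb(n, d + 1) - sum(comb(x, d + 1) for x in diffs)

print(g(2, 20, 3, 8))  # 1140 - (10 + 20 + 10) = 1100
```

For $d=2$, $n=20$, $t=3$, $r=8$ this gives $n_i\in\{6,7\}$, $r_i\in\{1,2\}$ and $g_2(20,3,8)=1100$.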
In particular, each $r_i$ is either $\left\lfloor\frac{r-(t-1)d}{t}\right\rfloor$ or $\left\lceil\frac{r-(t-1)d}{t}\right\rceil$ and we have $$\sum_{i\in [t]} (n_i-r_i) = n-r+(t-1)d \enspace \text{ and } \enspace \left| (n_i-r_i) - (n_j-r_j) \right|\leq 1 \text{ for all } i,j\in [t].$$ For each $i\in [t]$, by Theorem~\ref{thm:d-repr_example}, there exists a family $\mathcal{F}'_i$ of $n_i$ convex sets in $\mathbb{R}^d$ such that the following holds. \begin{enumerate}[(a)] \item Any family of more than $d + \frac{n_i -d}{r_i+1}$ convex sets in $\mathcal{F}'_i$ contains an intersecting $(d+1)$-tuple, \item the maximal size of an intersecting subfamily of $\mathcal{F}'_i$ is at most $d+r_i$, and \item $\mathcal{F}'_i$ contains exactly $\binom{n_i-r_i}{d+1}$ non-intersecting $(d+1)$-tuples. \end{enumerate} Recall that all $d$-tuples in the family in Theorem~\ref{thm:d-repr_example}, hence also those in $\mathcal{F}_i'$, are intersecting. For $A\in \mathcal{F}'_i$, let $\pi^{-1}_i(A)$ be the set of all points $x\in (\mathbb{R}^{d})^t$ whose $i$-th coordinate projection $\pi_i(x)$ lies in $A$. In other words, $\pi^{-1}_i(A)$ is the Cartesian product of $A\in \mathcal{F}'_i$ and $t-1$ copies of $\mathbb{R}^d$, arranged so that the $i$-th coordinate projection of the product is $A$. Let $$\mathcal{F}= \bigcup_{ i\in [t]} \{ \pi^{-1}_i(A): A\in \mathcal{F}'_i\}.$$ Thus $\mathcal{F}$ has size $|\mathcal{F}|=\sum_{i\in[t]}|\mathcal{F}_i'|=n$. Note that any subfamily $\mathcal{F}^*$ of $d+1$ sets in $\mathcal{F}$ is intersecting if not all of them come from the same $\mathcal{F}'_i$, i.e.\ if $\mathcal{F}^* \not\subset \{ \pi^{-1}_i(A): A\in \mathcal{F}'_i\}$ for every $i$. For each $i\in [t]$, as $n_i\leq \frac{n+t-1}{t}$ and $r_i\geq \frac{r - (t-1)d-(t-1)}{t}$, we have \[ d+ \frac{n_i -d}{r_i+1}< d + \frac{n - t(d-1)}{r-(t-1)d}.\] Consider a subfamily $\mathcal{F}^*$ of at least $d + \frac{n-t(d-1)}{r-(t-1)d}$ sets in $\mathcal{F}$. 
If not all of them come from the same $\mathcal{F}'_i$, then $\mathcal{F}^*$ contains an intersecting $(d+1)$-tuple. If all of them lie in $\{ \pi^{-1}_i(A): A\in \mathcal{F}'_i\}$, then, as $|\mathcal{F}^*| > d+ \frac{n_i -d}{r_i+1}$, (a) implies that $\mathcal{F}^*$ contains an intersecting $(d+1)$-tuple. As any $d+1$ sets not coming from the same $\mathcal{F}'_i$ are intersecting, (b) implies that the largest family of intersecting sets has size at most $\sum_{i\in [t]} (d+r_i) = d + r$. Moreover, as all non-intersecting $(d+1)$-tuples come from the same $\mathcal{F}'_i$, (c) implies that the number of non-intersecting $(d+1)$-tuples in $\mathcal{F}$ is exactly $\sum_{i\in [t]} \binom{n_i-r_i}{d+1}$. Hence, the number of intersecting $(d+1)$-tuples in $\mathcal{F}$ is $$\binom{n}{d+1} - \sum_{i\in [t]} \binom{n_i-r_i}{d+1} = g_d(n,t,r).$$ The equality holds as the numbers $n_i-r_i$ sum up to $n-r+(t-1)d$ and $|(n_i-r_i)- (n_j-r_j)|\leq 1$ for all $i,j\in [t]$. This proves the theorem. \end{proof} \section{Concluding remarks}\label{sec:rmk} \subsection{Induced density vs forbidden blowup} Given a family $\mathcal{F}$ of non-empty sets, the {\em colorful Helly number} is the maximal integer $m$ such that there exist finite subfamilies $\mathcal{F}_1,\ldots,\mathcal{F}_{m-1}$ of $\mathcal{F}$ such that each $\mathcal{F}_i$ is not intersecting and for all $A_i \in \mathcal{F}_i$, the intersection $\bigcap_{i\in[m-1]}A_i$ is non-empty. By taking all $\mathcal{F}_i$'s identical, one can see that the colorful Helly number is always greater than or equal to the Helly number. The colorful Helly theorem~\cite{Bar82}, which is another important generalization of Helly's theorem, asserts that the family of all convex sets in $\mathbb{R}^d$ has colorful Helly number $d+1$. See also \cite{KM05} for the colorful Helly theorem for $d$-Leray complexes. 
Kim~\cite{Kim17} recently gave an alternative way to show the robustness of the fractional Helly theorem, that is, $\beta_d$ tends to $1$ as $\alpha$ tends to $1$, using the colorful Helly theorem as a blackbox; see also \cite{BFMOP14, BGT21} for related works. Improving the idea in~\cite{Kim17}, Holmsen \cite{Hol20} proved an extremal graph theoretic result, extending the work of Gy\'arf\'as, Hubenko and Solymosi~\cite{GHS02} to hypergraphs. It roughly states that dense hypergraphs avoiding a certain forbidden configuration must contain a linear-size clique. In particular, this result implies that the fractional Helly theorem can be derived from the colorful Helly theorem in a purely combinatorial way. Such work is of great interest, as it has been one of the most fundamental questions in Helly type problems to find sufficient combinatorial conditions that yield a fractional Helly type result for abstract set-systems; see e.g. \cite{Mat04, Pat20, HL21, GHP21} for such works. Write $K_t^{(2)}$ for the \emph{2-blowup} of $K_t$. That is, $K_t^{(2)}=K_{2,\ldots, 2}$ is the complete $t$-partite graph with each part of size 2. Holmsen's result~\cite{Hol20} for graphs reads as follows. \begin{comment} Let $t\in\mathbb{N}$ and let $G$ be a graph with no induced 2-blowup of a $t$-clique $K_t^{(2)}$. If $G$ has positive $K_t$-density, then $G$ contains a linear-size clique. We extend Holmsen's result to graphs forbidding the 2-blowup of an arbitrary graph. \begin{theorem}\label{thm:ind-blowup} Let $H$ be a graph, and let $G$ be a graph with no induced copy of $H^{(2)}$. If $G$ has positive induced $H$-density, then there is a linear-size clique in $G$. \end{theorem} It is tempting to assume only positive $H$-density, or equivalently positive $K_{\chi(H)}$-density above. However, positive induced $H$-density is necessary. Consider the following example. Let $H=P_3$ be the path with three vertices, then $H^{(2)}=K_{2,4}$. 
Let $G$ be obtained by placing the complement of a triangle Ramsey graph~\cite{Kim95} in each partite set of $K_{n/2,n/2}$. Then $G$ has positive $P_3$-density and contains no induced $K_{2,4}$, but the largest clique in $G$ is of size only $O(\sqrt{n\log n})$. \end{comment} \begin{theorem}[\cite{Hol20}] \label{thm:ind-blowup} Let $G$ be a graph with no induced copy of $K_t^{(2)}$. If $G$ has positive $K_t$-density, then there is a linear-size clique in $G$. \end{theorem} We sketch a conceptually simpler proof of Theorem~\ref{thm:ind-blowup} using the Szemer\'edi regularity lemma. We use standard terminology from the literature; see, e.g., \cite{KSSS02}. The regularity lemma asserts that any graph $G$ can be decomposed into a bounded number (say $k$) of parts so that most of the pairs of parts have certain pseudorandom properties. The reduced graph $R$ is a graph on vertex set $[k]$, where $ij$ is an edge if and only if the bipartite graph between the $i$-th and $j$-th parts is random-like. By a standard embedding lemma, if $G$ has positive density of $K_t$, then the reduced graph $R$ contains a copy of $K_t$. An elementary argument shows that if none of the parts corresponding to a copy of $K_t$ in $R$ is dense enough, then one can embed $K_t^{(2)}$ using these parts in an obvious way. Thus, there must exist a part $V$ which is very dense, say having edge-density at least $1-1/(100t)$. By repeatedly deleting vertices of low degree, if any exist, one can obtain a subset $V'\subseteq V$ with $|V'|\geq |V|/(100t)$ satisfying $\delta(G[V'])\geq (1- 1/(10t)) |V'|$. With this minimum degree condition, we can keep taking non-edges $u_iv_i$ from the common neighborhood of $u_1,v_1,\dots, u_{i-1}, v_{i-1}$ within $V'$, as long as one exists. Note that the minimum degree condition ensures that the common neighborhood has size at least $|V'|/2$ if $i\leq t$. This either provides an induced copy of $K_t^{(2)}$ or a linear-size clique, yielding the desired result. 
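The final greedy step of the sketch can be made concrete. The Python sketch below (ours, not from \cite{Hol20}) runs the dichotomy on a toy dense graph, a complete graph minus a perfect matching, where the greedy choice of non-edges succeeds and produces an induced $K_3^{(2)}$.

```python
from itertools import combinations

n, t = 12, 3
# complete graph minus the perfect matching {0,1},{2,3},...: min degree n-2
adj = {(u, v) for u, v in combinations(range(n), 2) if v != u + 1 or u % 2}
E = lambda u, v: (min(u, v), max(u, v)) in adj

def greedy_blowup(t):
    """Pick non-adjacent pairs inside the running common neighborhood.
    If this succeeds t times, the pairs induce a copy of K_t^(2);
    if no non-edge is ever available, the candidate set is a clique."""
    cand, pairs = set(range(n)), []
    for _ in range(t):
        nonedge = next(((u, v) for u, v in combinations(sorted(cand), 2)
                        if not E(u, v)), None)
        if nonedge is None:
            return None, cand            # cand is a clique
        u, v = nonedge
        pairs.append((u, v))
        cand = {w for w in cand if w not in (u, v) and E(w, u) and E(w, v)}
    return pairs, None

pairs, clique = greedy_blowup(t)
# verify the pairs induce K_t^(2): non-adjacent within, adjacent across
assert all(not E(u, v) for u, v in pairs)
assert all(E(a, b) for P, Q in combinations(pairs, 2) for a in P for b in Q)
print(pairs)  # [(0, 1), (2, 3), (4, 5)]
```

On this toy graph the matching pairs are exactly the non-edges, so the greedy procedure picks them in order; on a graph with no induced $K_t^{(2)}$ the same procedure would instead terminate in a large clique.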
\subsection{Fractional Helly properties of abstract set-systems} Let $c \in [0,1)$ be a real number, $r > 1$ be an integer, and $\mathcal{F}$ be a family of non-empty sets. We say $\mathcal{F}$ satisfies a {\em fractional Helly property for $r$-tuples over $(c,1]$} if it satisfies the following: there exists $\gamma:(c,1]\to(0,1]$ such that for every $\alpha \in (c,1]$ and every finite subfamily $\mathcal{H}$ of $\mathcal{F}$, if at least $\alpha\binom{|\mathcal{H}|}{r}$ of the $r$-tuples of $\mathcal{H}$ are intersecting, then $\mathcal{H}$ contains an intersecting subfamily of size at least $\gamma(\alpha)|\mathcal{H}|$. In general, a bounded Helly number may not guarantee any fractional Helly property. To see this, consider the following concept. Given a family $\mathcal{F}$ of sets, the {\em intersection graph} is the graph with vertex set $\mathcal{F}$ whose edge set is the set of all intersecting pairs of $\mathcal{F}$. Observe that any graph can be represented as the intersection graph of a family of non-empty sets with Helly~number~$2$. Let $G$ be a graph on $V$ and let $\mathcal{C}$ be the set of all maximal cliques of $G$. For each $v \in V$, let $C_v$ be the set of all maximal cliques of $G$ that contain $v$, that is, $C_v = \{C \in \mathcal{C}: v \in C\}$. Observe that, for a vertex subset $W$, $\bigcap_{v \in W}C_v \neq \varnothing$ if and only if $W$ is a clique in $G$. This implies that the family $\mathcal{G} = \{C_v: v \in V\}$ has Helly number $2$ and that the intersection graph of $\mathcal{G}$ is isomorphic to $G$. Now, let $\mathcal{F}$ be a family of non-empty sets with Helly number $2$ whose intersection graph is the disjoint union of all possible graphs. Then, for every integer $r \geq 2$, there is no $c \in [0,1)$ such that $\mathcal{F}$ satisfies the fractional Helly property for $r$-tuples over $(c,1]$. 
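The maximal-clique representation above is easy to verify by brute force on a small example. The following Python sketch (ours) builds $\mathcal{G}=\{C_v\}$ for the $5$-cycle and checks that $\bigcap_{v\in W}C_v\neq\varnothing$ exactly when $W$ is a clique; the case $|W|=2$ shows the intersection graph of $\mathcal{G}$ is the original graph, and the general case gives Helly number $2$.

```python
from itertools import combinations

def maximal_cliques(vertices, E):
    """Brute force: all cliques not properly contained in another clique."""
    cliques = [frozenset(S) for k in range(1, len(vertices) + 1)
               for S in combinations(vertices, k)
               if all(E(u, v) for u, v in combinations(S, 2))]
    return [C for C in cliques if not any(C < D for D in cliques)]

# G = 5-cycle; C_v = maximal cliques of G containing v
V = range(5)
E = lambda u, v: abs(u - v) in (1, 4)
cliques = maximal_cliques(V, E)
C = {v: {Cl for Cl in cliques if v in Cl} for v in V}

# For every vertex subset W: the C_v's have a common member iff W is a clique.
for k in range(1, 6):
    for W in combinations(V, k):
        common = set.intersection(*(C[v] for v in W))
        is_clique = all(E(u, v) for u, v in combinations(W, 2))
        assert bool(common) == is_clique
print("ok")
```

Since pairwise intersection of the $C_v$'s corresponds to edges, a pairwise-intersecting subfamily corresponds to a clique $W$, which then has a common member, confirming Helly number $2$.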
For example, let $\mathcal{F}_m$ be a subfamily of $\mathcal{F}$ whose intersection graph is isomorphic to the complete $m$-partite graph $K_{m,m,\ldots,m}$. Clearly, $\mathcal{F}_m$ consists of $m^2$ members and the maximal size of an intersecting subfamily of $\mathcal{F}_m$ is $m = o(|\mathcal{F}_m|)$. On the other hand, there are exactly $\binom{m^2}{2}-m\binom{m}{2} = (1-o(1))\binom{|\mathcal{F}_m|}{2}$ intersecting pairs in $\mathcal{F}_m$. The crucial reason why Proposition~\ref{thm:frachel_smallhelly} holds is that any family of convex sets in $\mathbb{R}^d$ satisfies the fractional Helly property for $(d+1)$-tuples over $(0,1]$. From this point of view, we can reformulate Proposition~\ref{thm:frachel_smallhelly} in a slightly generalized form as follows. \begin{proposition}\label{thm:frachel_smallhelly_abstract} Let $k$ and $r$ be positive integers such that $k \geq r \geq 2$ and let $\alpha>0$. Then there exists $c_{k,r,\alpha} \in [0,1)$ such that the following holds: for every finite family $\mathcal{F}$ of non-empty sets with Helly number $r$, if $\mathcal{F}$ satisfies the fractional Helly property for $k$-tuples over $(\alpha,1]$, then it satisfies the fractional Helly property for $r$-tuples over $(c_{k,r,\alpha},1]$. \end{proposition} \begin{proof}[Sketch of proof] Let $G$ be the graph on $\mathcal{F}$ whose edges are the intersecting pairs of $\mathcal{F}$, that is, the $1$-dimensional faces of the nerve of $\mathcal{F}$. Note that by taking $c_{k,r,\alpha}$ close to $1$, the edge-density of $G$, hence also the $K_k$-density of $G$, can be made arbitrarily close to $1$. We can then apply the fractional Helly property for $k$-tuples over $(\alpha,1]$. \end{proof} \subsection{The number of higher dimensional faces in $\mathcal{F}_{t,d}$} A more general form of \eqref{leray_frachel} for higher dimensions is known: for every $j \geq d$, \[f_j(K) \leq \sum_{i=0}^{d}\binom{n-r}{i}\binom{r}{j+1-i}.\] It would be interesting to find the tight upper bound for $f_j(K)$ for $j > d$ when $K$ is the nerve of a finite subfamily of $\mathcal{F}_{t,d}$. 
When $d = 1$, it was shown in \cite{Eck88} that the construction in Remark~\ref{rmk:tightness} has the maximum number of intersecting $(j+1)$-tuples for every $j \geq d$. It was observed by Lew~\cite{Lew} that Eckhoff's argument also applies to the intersection of $t$ many $1$-Leray complexes. A natural guess is that the same holds for the intersections of $d$-Leray complexes for all $d>1$. We conclude this discussion with a reformulation of a more general problem posed by B\'{a}r\'{a}ny and Kalai~\cite[Problem 3.8]{BK21}. \begin{prob}[\cite{BK21}]\label{prob:hnumber2} For all positive integers $d,k,n,r$ with $n > d+r, k \geq 2$, find the minimum integer $T(d,k,n,r)$ such that the following holds: for every finite family $\mathcal{F}$ of non-empty sets such that $\mathcal{F}$ has Helly number $2$ and the nerve of $\mathcal{F}$ is $d$-Leray, if more than $T(d,k,n,r)$ of the $(k+1)$-tuples of $\mathcal{F}$ are intersecting, then $\mathcal{F}$ contains an intersecting subfamily of size $d+r+1$. \end{prob} \subsection{Complexes with smaller dimensions} Theorem~\ref{thm:leray_intersection} describes the fractional Helly property when $r \geq (t-1)d$. It remains to investigate how $f_d(K)$ behaves when $r < (t-1)d$. When $d = 1$, it was observed in \cite{Eck88} that the obvious upper bound that follows from Theorem~\ref{thm:turan} is tight. For completeness, we include a proof. \begin{theorem} Let $K_1,K_2,\ldots,K_t$ be $1$-Leray complexes on $V$ with $|V| = n$, and let $K = \bigcap_{i\in[t]}K_i$. If $\dim K < m \leq t$, then \[f_1(K) \leq \binom{n}{2} - s\binom{\left\lceil\frac{n}{m}\right\rceil}{2} - (m-s)\binom{\left\lfloor\frac{n}{m}\right\rfloor}{2},\] where $s$ is the integer with $0 \leq s < m$ and $n \equiv s \pmod{m}$. Moreover, there exists such a $K$ attaining equality. \end{theorem} \begin{proof} Let $G$ be the graph on $V$ whose edge set is the set of all $1$-dimensional faces of $K$. 
Recall that, as mentioned in Remark~\ref{rmk:Hellynumber}, a vertex subset $W$ with $|W| \geq 2$ is a face in $K$ if and only if every pair in $W$ is a face in $K$. Thus the condition $\dim K < m$ implies that $G$ has no clique of size $m + 1$, and the upper bound immediately follows from Theorem~\ref{thm:turan}. For the equality of the upper bound, consider a vertex partition $V = V_1 \cup V_2 \cup \cdots \cup V_m$ such that $|V_i| = \left\lceil\frac{n}{m}\right\rceil$ for $1 \leq i \leq s$ and $|V_i| = \left\lfloor\frac{n}{m}\right\rfloor$ for $s+1 \leq i \leq m$. For each $i \in [m]$, let $K_i$ be the complex on $V$ such that $W \subset V$ is a face in $K_i$ if and only if $|W \cap V_i| \leq 1$. Note that each $K_i$ can be expressed as the nerve of a family consisting of $|V|-|V_i|$ copies of $\mathbb{R}$ and $|V_i|$ distinct points in $\mathbb{R}$, and hence $K_i$ is $1$-Leray. Let $K = \bigcap_{i\in[m]}K_i$. Then $K$ is the complex on $V$ such that $W \subset V$ is a face in $K$ if and only if $|W \cap V_i| \leq 1$ for each $i \in [m]$. In particular, $f_1(K) = \binom{n}{2} - \sum_{i\in[m]}\binom{|V_i|}{2}$, as required. \end{proof} Note that for a constant $c$ the condition $\dim K < c$ in $d$-Leray complexes already provides an upper bound on the number of $d$-dimensional faces: the maximum possible number of edges of a $(d+1)$-uniform hypergraph without any clique of size $c$ is, by definition, the Tur\'{a}n number of the $(d+1)$-uniform complete hypergraph on $c$ vertices. In turn, this provides a natural upper bound on the number of $d$-dimensional faces in the intersection of $t$ many $d$-Leray complexes with $\dim K <(t-1)d$. Would this upper bound be tight? The Tur\'{a}n number of such a complete hypergraph is not even known for $d>1$, and the extremal hypergraphs are expected to have more complicated structures than the constructions in Theorems~\ref{thm:d-repr_example} and \ref{thm:frachel_genbox2}. 
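Returning to the $1$-Leray theorem above, its equality case is easy to check numerically: the $1$-faces of $K$ are exactly the pairs whose endpoints lie in different parts of the balanced partition. The following Python sketch (ours) compares a direct count with the right-hand side of the bound.

```python
from math import comb, floor, ceil

def turan_bound(n, m):
    """RHS of the theorem: s = n mod m parts have size ceil(n/m)."""
    s = n % m
    return comb(n, 2) - s * comb(ceil(n / m), 2) - (m - s) * comb(floor(n / m), 2)

def f1_partition_complex(n, m):
    """Count the 1-faces of K directly: pairs meeting each part V_i in at
    most one vertex, i.e. pairs crossing the balanced partition."""
    s = n % m
    sizes = [n // m + 1] * s + [n // m] * (m - s)
    part = [i for i, size in enumerate(sizes) for _ in range(size)]
    return sum(1 for u in range(n) for v in range(u + 1, n) if part[u] != part[v])

assert all(f1_partition_complex(n, m) == turan_bound(n, m)
           for n in range(2, 25) for m in range(1, n + 1))
print(turan_bound(10, 3))  # 45 - 6 - 6 = 33
```

For $n=10$, $m=3$ the partition has parts of sizes $4,3,3$ and the maximum is $\binom{10}{2}-\binom{4}{2}-2\binom{3}{2}=33$.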
It therefore seems unlikely that the bound provided by the hypergraph Tur\'{a}n number is tight for the family of convex sets in $\mathcal{F}_{t,d}$ or even in $\mathbb{R}^d$. \subsection{$(p,q)$-condition for constant $p$} In Theorem~\ref{thm:frachel_genbox2}, we constructed a family of convex sets in $\mathcal{F}_{t,d}$ that satisfies the $(C_{\alpha,t,d},d+1)$-condition. We showed that the $(C_{\alpha,t,d},d+1)$-condition does not provide any improvement on the fractional Helly property for $(d+1)$-tuples when $\alpha\binom{|\mathcal{F}|}{d+1}$ intersecting $(d+1)$-tuples are present for $\alpha \in (1- \frac{1}{t^d},1]$. However, would such a $(p,q)$-condition extend the range of $\alpha$ for the fractional Helly property for $(d+1)$-tuples? More precisely, does there exist $\delta < 1 - \frac{1}{t^d}$ such that families of convex sets satisfying the $(p,d+1)$-condition with bounded $p$ satisfy a fractional Helly property for $(d+1)$-tuples over $(\delta,1]$, instead of over $(1-\frac{1}{t^d},1]$? A positive answer follows from the $(p,q)$-theorem for convex sets due to Alon and Kleitman~\cite{AK92}. Roughly speaking, the $(p,q)$-theorem asserts that if a family of convex sets satisfies a sufficiently strong intersection property, then the whole family can be ``pierced'' by few points. Here is a precise statement of the $(p,q)$-theorem. \begin{theorem}[$(p,q)$-theorem, \cite{AK92}]\label{thm:pq} For all positive integers $d$, $p$, and $q$ with $p \geq q \geq d+1$, there exists a positive integer $N = N(d;p,q)$ such that the following holds: for every finite family $\mathcal{F}$ of convex sets in $\mathbb{R}^d$ that satisfies the $(p,q)$-condition, there exists a set of at most $N$ points in $\mathbb{R}^d$ that meets all members of $\mathcal{F}$. \end{theorem} Note that Helly's theorem implies $N(d;d+1,d+1) = 1$. See, for example, \cite{KST18,Rub21} for recent developments related to the $(p,q)$-theorem. 
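For $d=1$, the coordinate-wise piercing mechanism behind Corollary~\ref{cor:pq} below can be made concrete: pierce each coordinate family of intervals separately and take the Cartesian product of the piercing sets. The Python sketch below (ours) uses the classical greedy piercing of intervals in place of the $(p,q)$-theorem; the families here are illustrative, not from the paper.

```python
def pierce_intervals(intervals):
    """Classical greedy: sweep by right endpoint; stab with right endpoints."""
    pts, last = [], None
    for a, b in sorted(intervals, key=lambda I: I[1]):
        if last is None or a > last:
            last = b
            pts.append(b)
    return pts

# Two coordinate families of intervals; the boxes are products I x J.
G1 = [(0, 2), (1, 3), (5, 6)]
G2 = [(0, 1), (4, 7), (5, 9)]
boxes = [(I, J) for I in G1 for J in G2]

S1, S2 = pierce_intervals(G1), pierce_intervals(G2)
S = [(x, y) for x in S1 for y in S2]      # product piercing set

# every box contains a point of the product set
assert all(any(I[0] <= x <= I[1] and J[0] <= y <= J[1] for x, y in S)
           for I, J in boxes)
print(len(S1), len(S2), len(S))  # 2 2 4
```

Each coordinate family is pierced by $2$ points, so the product family of $9$ boxes is pierced by $2\cdot 2=4$ points, mirroring the bound $M_t(d;p)\leq N(d;p,d+1)^t$.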
See also \cite{AKMM02} for the $(p,q)$-theorem for abstract set-systems. As a corollary of Theorem~\ref{thm:pq}, we can obtain the following. \begin{cor}\label{cor:pq} For all positive integers $d$, $p$, and $t$ with $p \geq d+1$, there exists a positive integer $M_t(d;p)$ such that the following holds: for every finite family $\mathcal{F} \subset \mathcal{F}_{t,d}$ that satisfies the $(p,d+1)$-condition, there exists a set of at most $M_t(d;p)$ points in $\mathbb{R}^{td}$ that meets all members of $\mathcal{F}$. \end{cor} \begin{proof} Let \[\mathcal{F} = \{A_{i,1}\times A_{i,2}\times\cdots\times A_{i,t} \subset \mathbb{R}^{td}: i\in[n]\},\] where $A_{i,j} \subset \mathbb{R}^d$ is a convex set in $\mathbb{R}^d$ for each $i \in [n]$ and $j \in [t]$. Suppose $\mathcal{F}$ satisfies the $(p,d+1)$-condition. Then, for each $j \in [t]$, the family $\mathcal{G}_j := \{A_{i,j}: i\in[n]\}$ also satisfies the $(p,d+1)$-condition, and hence there exists a set $S_j \subset \mathbb{R}^d$ of at most $N(d;p,d+1)$ points that intersects each member of $\mathcal{G}_j$. Now, the set \[S := S_1 \times S_2 \times \cdots \times S_t \subset \mathbb{R}^d \times \mathbb{R}^d \times \cdots \times \mathbb{R}^d \simeq \mathbb{R}^{td}\] meets all members of $\mathcal{F}$. This proves $M_t(d;p) \leq N(d;p,d+1)^t$. \end{proof} \begin{rmk} Let $d$ and $p$ be positive integers such that $p \geq d+1$. For every finite subfamily $\mathcal{F}$ of $\mathcal{F}_{t,d}$ satisfying the $(p,d+1)$-condition, no matter how small $f_d(N(\mathcal{F}))$ is, Corollary~\ref{cor:pq} guarantees the existence of an intersecting subfamily of size at least $\frac{n}{M_t(d;p)}$. Hence, such a family has a fractional Helly property for $(d+1)$-tuples over $(0,1]$. However, in order to make this precise, we might ask the following question: which values of $\alpha$ in $(0,1]$ are realizable? 
In other words, for which $\alpha \in (0,1]$ does there exist a family $\mathcal{F}\subseteq \mathcal{F}_{t,d}$ of $n$ sets satisfying the $(p,d+1)$-condition while having $\alpha \binom{n}{d+1}$ intersecting $(d+1)$-tuples? For example, from the hypergraph Tur\'{a}n theorem, we know that such a family $\mathcal{F}$ must have at least $\binom{p-1}{d}^{-1} \binom{n}{d+1}$ intersecting $(d+1)$-tuples, hence $\alpha$ cannot lie in the interval $(0,\binom{p-1}{d}^{-1})$. This bound is not tight even for general $(d+1)$-uniform hypergraphs. Hence this leaves the following interesting question: if $\mathcal{F}$ satisfies the $(p,d+1)$-condition, then how many intersecting $(d+1)$-tuples must it have? If our choice of $p$ ensures that this forced fraction is smaller than $1-\frac{1}{t^d}$, then we would be able to say that the $(p,d+1)$-condition indeed ensures that the fractional Helly property for $(d+1)$-tuples holds over a larger range of $\alpha$. \end{rmk} Also, it is natural to ask for the best possible value of $M_t(d;p)$. This kind of question has been studied for the case $d=1$, that is, for axis-aligned boxes; see, for example, \cite{CD20,CSZ18}. We conclude the section with a well-known conjecture on $M_2(1;p)$. \begin{question}[\cite{Weg65}] Let $\mathcal{F}$ be a family of axis-aligned boxes in the plane. If every $p$-tuple of $\mathcal{F}$ contains an intersecting triple, then there exists a set of $2p-3$ points in the plane that meets all members of $\mathcal{F}$. \end{question} \section*{Acknowledgment} The authors thank Alan Lew for introducing the problem about the intersection of $d$-Leray complexes. The authors also thank Andreas Holmsen for suggesting an idea for the construction in Section~\ref{subsec:d-repr}. \bibliographystyle{abbrv}
https://arxiv.org/abs/math/0510603
Approximation by smooth functions with no critical points on separable Banach spaces
We characterize the class of separable Banach spaces $X$ such that for every continuous function $f:X\to\mathbb{R}$ and for every continuous function $\epsilon:X\to\mathbb(0,+\infty)$ there exists a $C^1$ smooth function $g:X\to\mathbb{R}$ for which $|f(x)-g(x)|\leq\epsilon(x)$ and $g'(x)\neq 0$ for all $x\in X$ (that is, $g$ has no critical points), as those Banach spaces $X$ with separable dual $X^*$. We also state sufficient conditions on a separable Banach space so that the function $g$ can be taken to be of class $C^p$, for $p=1,2,..., +\infty$. In particular, we obtain the optimal order of smoothness of the approximating functions with no critical points on the classical spaces $\ell_p(\mathbb{N})$ and $L_p(\mathbb{R}^n)$. Some important consequences of the above results are (1) the existence of {\em a non-linear Hahn-Banach theorem} and (2) the smooth approximation of closed sets, on the classes of spaces considered above.
\section[Introduction and main results]{Introduction and main results} The Morse-Sard theorem \cite{Sard1, Sard2} states that if $f:\mathbb{R}^{n}\longrightarrow \mathbb{R}^{m}$ is a $C^r$ smooth function, with $r>\max\{n-m, 0\}$, and $C_{f}$ is the set of critical points of $f$, then the set of critical values $f(C_{f})$ is of Lebesgue measure zero in $\mathbb{R}^{m}$. This result has proven to be very valuable in a large number of areas, especially in differential topology and analysis (see for instance \cite{Hirsch, YomdinComte} and the references therein). Additional geometric and analytical properties of the set of critical values in different versions of the Morse-Sard theorem, together with a study on the sharpness of the hypothesis of the Morse-Sard theorem, have been obtained in \cite{Bates1, Bates2, Bates3, Bates4, Bates-Moreira, Moreira}. For many important applications of the Morse-Sard theorem, it is enough to know that any given continuous function can be uniformly approximated by a smooth map whose set of critical values has empty interior \cite{Hirsch, YomdinComte}. We refer to this as an {\em approximate Morse-Sard theorem}. The same type of approximation could prove key to the study of related problems in the infinite-dimensional domain. In this paper, we will prove the strongest version of an approximate Morse-Sard theorem that one can expect to be true for a general infinite-dimensional separable Banach space, namely that every continuous function $f:X\longrightarrow \mathbb R$, where $X$ is an infinite-dimensional Banach space $X$ with separable dual $X^*$, can be uniformly approximated by a $C^1$ smooth function $g:X\longrightarrow \mathbb R$ which does not have any critical point. In some cases where more information about the structure of the Banach space $X$ is known, we will extend our result to higher order of differentiability, $C^p$ ($p>1$). Our result will also allow us to demonstrate two important corollaries. 
The first one is the existence of {\em a non-linear Hahn-Banach theorem} which shows that two disjoint closed subsets in $X$ can be separated by a 1-codimensional $C^p$ smooth manifold of $X$ (which is the set of zeros of a $C^p$ smooth function with no critical points on $X$). The second one states that every closed subset of $X$ can be approximated by $C^p$ smooth open subsets of $X$. To put our work in context, let us briefly review some of the work established for the infinite-dimensional version of the Morse-Sard theorem. Smale \cite{Smale} proved that if $X$ and $Y$ are separable connected smooth manifolds modelled on Banach spaces and $f:X\longrightarrow Y$ is a $C^r$ Fredholm map then $f(C_{f})$ is of first Baire category and, in particular, $f(C_{f})$ has no interior points provided that $r>\max\{\textrm{index}(df(x)), 0\}$ for all $x\in X$. Here, index($df(x)$) stands for the index of the Fredholm operator $df(x)$, that is, the difference between the dimension of the kernel of $df(x)$ and the codimension of the image of $df(x)$, which are both finite. These assumptions are very strong as they impose that when $X$ is infinite-dimensional then $Y$ is necessarily infinite-dimensional too (in other words, there is no Fredholm map $f:X\longrightarrow\mathbb{R}$). In fact, as Kupka proved in \cite{Kupka}, there are $C^\infty$ smooth functions $f:\ell_2\longrightarrow\mathbb{R}$ (where $\ell_2$ is the separable Hilbert space) such that their sets of critical values $f(C_{f})$ contain intervals and hence have non-empty interiors and positive Lebesgue measure. Bates and Moreira \cite{Bates-Moreira, Moreira} showed that this function $f$ can even be taken to be a polynomial of degree three. Azagra and Cepedello-Boiso \cite{AC} have shown that every continuous mapping from the separable Hilbert space into $\mathbb{R}^{m}$ can be uniformly approximated by $C^\infty$ smooth mappings with no critical points. 
Unfortunately, since the core of their proof requires the use of the special properties of the Hilbertian norm, this cannot be extended to non-Hilbertian Banach spaces. P. H\'{a}jek and M. Johanis \cite{HJ} established the same kind of result in the case when $X$ is a separable Banach space which contains $c_0$ and admits a $C^p$-smooth bump function. In this case, the approximating functions are of class $C^p$, $p=1, 2,..., \infty$. This method is based on the result that the range of the derivative of a $C^2$ smooth function from $c_0$ to $\mathbb R$ is a countable union of compact sets \cite{Hajek}. However, as the authors noted, their method is not applicable when the space $X$ has the Radon-Nikod\'{y}m property (e.g., when $X$ is reflexive), which leaves out all the classical Banach spaces $\ell_p$ and $L_{p}(\mathbb{R}^{n})$ for $1<p<\infty$. As stated above, we prove that for any infinite-dimensional Banach space $X$ with a separable dual $X^*$, the set of $C^1$ smooth, real-valued functions with no critical points is uniformly dense in the space of all continuous, real-valued functions on $X$. This solves completely the problem of the approximation on separable Banach spaces by smooth, real-valued functions with no critical points when the order of smoothness of the approximating functions is one. Hence, we obtain the following characterization. For a separable Banach space $X$, the following are equivalent: (i) $X^*$ is separable, and (ii) the set of $C^1$ smooth, real-valued functions on $X$ with no critical points is uniformly dense in the space of all continuous, real-valued functions on $X$. This result can be included in our main theorem which also applies to higher order of differentiability. 
Before stating our main theorem, recall that a norm $||\cdot||$ in a Banach space $X$ is LUR (locally uniformly rotund \cite{DGZ}) if $\lim_n||x_n-x||=0$ whenever the sequence $\{x_n\}_n$ and the point $x$ are included in the unit sphere of the norm $||\cdot||$ and $\lim_n||x_n+x||=2$. A norm $||\cdot||$ in $X$ is $C^p$ smooth if it is $C^p$ smooth on $X\setminus\{0\}$. \begin{thm}\label{approximation theorem} Let $X$ be an infinite-dimensional separable Banach space with a LUR and $C^p$ smooth norm $||\cdot||$, where $p\in \mathbb N\cup \{\infty\}$. Then, for every continuous mapping $f:X\longrightarrow\mathbb{R}$ and for every continuous function $\varepsilon:X\longrightarrow (0,\infty)$, there exists a $C^p$ smooth mapping $g:X\longrightarrow\mathbb{R}$ such that $|f(x)-g(x)|\leq\varepsilon(x)$ for all $x\in X$ and $g$ has no critical points. \end{thm} Our proof involves: {\em i)} a special construction of carefully perturbed partitions of unity in an open subset of the unit sphere of the Banach space \, $Y=X\oplus\mathbb R$ \, by means of a sequence of linear functionals in $Y^*$, {\em ii)} the study and use of the properties of the range of the derivative of the norm in $Y$, $Y^*$ and their finite dimensional subspaces (Lemmas \ref{case N} and \ref{vectoradicional} below), and {\em iii)} the use of $C^p$ deleting diffeomorphisms from $X$ onto $X\setminus O$, where $O$ is a bounded, closed, convex subset of $X$. The following example gives the optimal order of smoothness of the approximating functions with no critical points for $\ell_p(\mathbb N)$ and $L_{p}(\mathbb{R}^{n})$.
\begin{ex}\label{example} It follows immediately from Theorem \ref{approximation theorem} that one can approximate every continuous, real-valued function on $\ell_{p}(\mathbb{N})$ and $L_{p}(\mathbb{R}^{n})$ \ ($1<p<\infty$) \ with $C^{\overline{p}}$ smooth, real-valued functions with no critical points, where \ $\overline{p}=[p]$ \ if $p$ is not an integer, \ $\overline{p}=p-1$ \ if $p$ is an odd integer, and \ $\overline{p}=\infty$ \ if $p$ is an even integer. Indeed, the standard norms of the classical separable Banach spaces $\ell_{p}(\mathbb{N})$ and $L_{p}(\mathbb{R}^{n})$ are LUR and $C^{\overline{p}}$ smooth \cite{DGZ}. \end{ex} \bigskip Since every Banach space with separable dual admits an equivalent LUR and $C^1$ smooth norm \cite{DGZ}, we immediately deduce from Theorem \ref{approximation theorem} the announced characterization of the property of approximation by $C^1$ smooth functions with no critical points. \begin{cor}\label{aproximacionC^1sinpuntoscriticos} Let $X$ be a separable Banach space. The following are equivalent: \begin{enumerate} \item The dual space $X^*$ is separable, \item for every continuous mapping $f:X\longrightarrow\mathbb{R}$ and for every continuous function $\varepsilon:X\longrightarrow (0,\infty)$, there exists a $C^1$ smooth mapping $g:X\longrightarrow\mathbb{R}$ such that $|f(x)-g(x)|\leq\varepsilon(x)$ and $g$ has no critical points. \end{enumerate} \end{cor} \medskip Next, we establish a similar statement for higher order smoothness on separable Banach spaces with a $C^p$ smooth bump function ($p\ge 2$) and unconditional basis. We combine Theorem \ref{approximation theorem} and the results on fine approximation given in \cite{AFGJL}, to obtain the optimal order of smoothness of the approximating functions with no critical points on a large class within the Banach spaces with separable dual. In particular the following Corollary applies even when the space $X$ lacks a norm which is simultaneously LUR and $C^2$ smooth. 
\medskip \begin{cor} \label{fineaprox} Let $X$ be a separable Banach space with unconditional basis. Assume that $X$ has a $C^p$ smooth Lipschitz bump function, \ where $p\in \mathbb N\cup \{\infty\}$. Then, for every continuous mapping $f:X\longrightarrow\mathbb{R}$ and for every continuous function $\varepsilon:X\longrightarrow (0,\infty)$, there exists a $C^p$ smooth mapping \ $g:X\longrightarrow\mathbb{R}$ such that $|f(x)-g(x)|\leq\varepsilon(x)$ and $g$ has no critical points. \end{cor} \begin{proof} Since $X$ is separable and admits a $C^p$ smooth bump function, the dual space $X^*$ is separable. Thus we obtain, from Corollary \ref{aproximacionC^1sinpuntoscriticos}, a $C^1$ smooth function $h:X\longrightarrow \mathbb R$ such that \ $h'(x)\neq 0$ \ and \ $|f(x)-h(x)|<\frac{\varepsilon(x)}{2}$ \ for every \ $x\in X$. Let us denote by $||\cdot||$ the dual norm on $X^*$. Define the continuous function \ $\overline{\varepsilon}:X\longrightarrow (0,\infty)$, \ $\overline{\varepsilon}(x)=\frac12\min\{\varepsilon(x),\ ||h'(x)||\}$, for $x\in X$. Now, by the main result of \cite{AFGJL}, there is a $C^p$ smooth function $g:X\longrightarrow\mathbb{R}$ such that \ $|h(x)-g(x)|<\overline{\varepsilon}(x)$ \ and \ $||h'(x)-g'(x)||<\overline{\varepsilon}(x)$, \ for every $x\in X$. The latter implies that \ $||h'(x)||-||g'(x)||< \frac12 ||h'(x)||$, \ and therefore \ $0<\frac12||h'(x)||<||g'(x)||$ \ for every $x\in X$. Hence, \ $g$ is a $C^p$ \ smooth function with no critical points and \ $|f(x)-g(x)|\le|f(x)-h(x)|+|h(x)-g(x)|<\varepsilon(x)$ \ for every $x\in X$. \end{proof} The proof of the above corollary yields the following remark.
\begin{rem} Assume that a separable Banach space $X$ satisfies the $C^1$-fine approximation property by $C^p$ smooth, real-valued functions, i.e., for every $C^1$ smooth function $f:X\longrightarrow \mathbb R$ and every continuous function $\varepsilon:X\longrightarrow (0,\infty)$ there is a $C^p$ smooth function $h:X\longrightarrow \mathbb R$ such that $|f(x)-h(x)|\le \varepsilon(x)$ and $|f'(x)-h'(x)|\le \varepsilon(x)$, for every $x\in X$. Then, the conclusion of Corollary \ref{fineaprox} holds. \end{rem} \medskip Furthermore, our results allow us to draw the following conclusions. \begin{rem} {\em \hfill{ } \begin{enumerate} \item All of the results presented above hold in the case when one replaces $X$ with an open subset $U$ of $X$. Actually, the same proof given in the section to follow (with obvious modifications) can be used. \item Whenever $X$ has the property that every continuous, real-valued function on $X$ can be approximated by $C^p$ smooth, real-valued functions with no critical points, one can deduce the following Corollaries. \end{enumerate} } \end{rem} \begin{cor}[A nonlinear Hahn-Banach theorem]\label{nonlinear Hahn-Banach theorem} Let $X$ be any of the Banach spaces considered in the above results. Then, for every two disjoint closed subsets $C_1$, $C_2$ of $X$, there exists a $C^p$ smooth function $\varphi:X\longrightarrow\mathbb{R}$ with no critical points, such that the level set $M=\varphi^{-1}(0)$ is a $1$-codimensional $C^p$ smooth submanifold of $X$ that separates $C_{1}$ and $C_{2}$, in the following sense: define $U_{1}=\{x\in X : \varphi(x)<0\}$ and $U_{2}=\{x\in X : \varphi(x)>0\}$; then $U_1$ and $U_2$ are disjoint $C^p$ smooth open subsets of $X$ with common boundary $\partial U_{1}=\partial U_{2}=M$, and such that $C_{i}\subset U_{i}$ for $i=1, 2$. \end{cor} Recall that an open subset $U$ of $X$ is said to be $C^p$ smooth provided its boundary $\partial U$ is a $C^p$ smooth one-codimensional submanifold of $X$.
\begin{cor}[Smooth approximation of closed sets]\label{Smooth approximation of closed sets} Every closed subset of any of the Banach spaces $X$ considered above can be approximated by $C^p$ smooth open subsets of $X$ in the following sense: for every closed set $C\subset X$ and every open set $W$ containing $C$ there is a $C^p$ smooth open set $U$ so that $C\subset U\subseteq W$. \end{cor} \begin{cor}[Failure of Rolle's Theorem]\label{failure of Rolle's theorem} For every open subset $U$ of any of the above Banach spaces $X$ there is a continuous function $f$ on $X$ whose support is the closure of $U$, and such that $f$ is $C^p$ smooth on $U$ and yet $f$ has no critical point in $U$. \end{cor} \bigskip \section[Proof]{Proof of Theorem \ref{approximation theorem}} Recall that a norm $N(\cdot)$, in a Banach space $E$, is (1) {\em strictly convex} if the unit sphere of the norm $N(\cdot)$ does not contain any line segment. Equivalently, $N(\frac{x+y}{2})<1$ for every $x,y$ in the unit sphere with $x\not=y$; (2) {\em WUR (weakly uniformly rotund)} if $\lim_n(x_n-y_n)=0$ in the weak topology whenever the sequences $\{x_n\}_n$ and $\{y_n\}_n$ are included in the unit sphere and $\lim_nN(x_n+y_n)=2$. \medskip We denote by $\mathbb R^*$ the set of nonzero real numbers. We also let $[u_1,...,u_n]$ stand for the linear span of the vectors $u_1$,...,$u_n$. Let us denote by $S_{||\cdot||}$ and $S_{||\cdot||^*}$ the unit spheres of a Banach space $(Z,||\cdot||)$ and of its dual $(Z^*,||\cdot||^*)$, respectively. \medskip The following two geometrical lemmas will be essential to the proof of Theorem \ref{approximation theorem}. \bigskip \begin{lem}\label{case N} Let $Z=[u_1,...,u_n]$ be an $n$-dimensional space ($n>1$) with a differentiable norm $||\cdot||$ (G\^{a}teaux or Fr\'{e}chet differentiable, since both notions coincide for convex functions defined on finite dimensional spaces).
Let us consider real numbers \ $0<\alpha_i<1$,\ for $i=1,...,n-1$ and define $\mathcal{R}$ as the subset of numbers $\alpha\in\mathbb R^*$ satisfying that \begin{equation}\label{vacio} \bigl\{T\in S_{||\cdot||^*}:\ T(u_i)=\alpha_i\,, \ i=1,...,n-1, \ T(u_n)=\alpha \bigr\}=\emptyset. \end{equation} Then, the cardinal of $\mathbb R^*\setminus \mathcal{R}$ is at most two. \end{lem} \begin{proof} First, assume that the set \begin{equation*} F=\{T\in S_{||\cdot||^*}: \ T(u_1)=\alpha_1,...,T(u_{n-1})=\alpha_{n-1} \}\end{equation*} is non-empty (otherwise we are done). \begin{figure} \centering \begin{tabular}{ccc} \includegraphics[width=4.5cm]{separable2argument.eps} & \includegraphics[width=5cm]{separable3argument.eps} & \includegraphics[width=5cm]{separable3bargument.eps} \\ (a) & (b) & (c) \end{tabular} \caption{(a): Case $n=2$. (b) and (c): Case $n=3$.} \label{} \end{figure} As the above pictures for the cases $n=2$ and $n=3$ suggest, there are at most two different tangent affine hyperplanes to the unit sphere $S_{||\cdot||}$ containing the affine subspace passing through the points $\frac{u_1}{\alpha_1}$,...,$\frac{u_{n-1}}{ \alpha_{n-1}}$, that is, there are at most two different linear mappings $T_1$, $T_2$ in $F$. Indeed, assume first that the cardinal of $F$ is one and let us denote by $T_1$ the element of $F$. Then any real number $\alpha \not\in\{0,\,T_1(u_n)\}$ satisfies condition \eqref{vacio} and $\mathcal{R}=\mathbb R^*\setminus \{T_1(u_n)\}$. Now, if there are two elements of $F$, $T_1\not=T_2$, we claim that any other {\em different} element of $F$, say $T_3$, can be written as $T_3=\gamma T_1+(1-\gamma)T_2$ for some $\gamma\in\mathbb R\setminus\{0,1\}$. Indeed, since $T_3-T_2\not=0$ and $T_1-T_2\not=0$ belong to the one dimensional subspace $u_1^{\bot}\cap...\cap u_{n-1}^{\bot}\cap[u_1,...,u_n]^*$, we have that there is $\gamma\in\mathbb R$ with $T_3-T_2=\gamma(T_1-T_2)$. Since $T_3\not=T_2$, the constant $\gamma\not=0$.
In addition, since $T_3\not= T_1$, we have that $\gamma \not=1$, which proves our claim. Furthermore, this implies that the three {\em different} points $T_1,\,T_2$ and $T_3$ of the dual unit sphere lie on a common line, and, because $\|\cdot\|^{*}$ is convex, the line segment which passes through $T_1,\,T_2$ and $T_3$, and whose end points are two of these three points, is included in $S_{\|\cdot\|^*}$. But this contradicts the fact that the dual norm $\|\cdot\|^*$ is strictly convex (because the norm $\|\cdot\|$ is differentiable and $Z$ is finite dimensional, see \cite{DGZ}). Finally, any real number $\alpha \not\in\{0,\,T_1(u_n),\,T_2(u_n)\}$ satisfies condition \eqref{vacio} and $\mathcal{R}=\mathbb R^*\setminus\{T_1(u_n),\,T_2(u_n)\}$. \end{proof} \medskip From the proof of Lemma \ref{case N} we obtain the following. \begin{lem}\label{vectoradicional} Let $Z=[u_1,...,u_n]$ be an $n$-dimensional space ($n\in \mathbb N $) with a differentiable norm $||\cdot||$. Consider real numbers \ $0<\alpha_i<1$, \, for $i=1,...,n-1$. Then, the cardinal of the set \begin{equation}\label{vacio3x} \bigl\{T\in S_{||\cdot||^*}: \ \ T(u_i)=\alpha_i\,,\ i=1,...,n-1\bigr\} \end{equation} is at most two. \end{lem} \bigskip The general strategy of the proof of Theorem \ref{approximation theorem} is as follows. We consider the space \ $Y=X\oplus \mathbb{R}$ \ and define the following norm on $Y$: \begin{itemize} \item If \ $p>1$, \ for every $y=(x,r)\in Y$, put \ $N(y)=N(x,r)=(||x||^2+r^2)^{1/2}$, \ where $||\cdot||$ is a LUR and \ $C^p$ \ smooth norm on \ $X$. Then, clearly the norm \ $N$ \ is \ LUR \ and $C^1$ smooth on $Y$. Moreover, \ $N$ \ is \ $C^{p}$ \ smooth on \ the open set $Y\setminus\{(0,\lambda):\ \lambda\in \mathbb R\}$. Define $\nu=(0,1)$ and take \ $\beta\in Y^*\setminus\{0\}$ \ such that \ $X=\ker \beta$. \ Select \ $\beta_1\in Y^*\setminus[\beta]$ \ such that $\beta_1(\nu)\not=0$ \ and \ $\omega \in \ker \beta\setminus \ker \beta_1 $.
\ Consider the closed hyperplane of $Y$, \ $X_1=\ker \beta_1 $. Then, the restriction of the norm $N$ to $X_1$ is a $C^p$ smooth and LUR norm on $X_1$. Now, the (equivalent) norm considered in \ $Y=X_1\oplus [\omega]$, \ defined as \ $|z+\lambda \omega|=(N(z)^2+\lambda^2)^{1/2}$, where $z\in X_1$ \ and \ $\lambda\in \mathbb R$, \ is LUR and $C^1$ smooth on $Y$ \ and $C^p$ \ smooth on \ $Y\setminus[\omega]$. In particular, the norm \ $|\cdot|$ \ is \ $C^p$ \ smooth on the open set \ $\mathcal{U}=Y\setminus \ker \beta=\{(x,r):\ x\in X,\,r\not=0\}$. It could also be proved that the Banach space $Y$ admits an equivalent LUR and $C^p$ smooth norm with bounded derivatives up to the order $p$. Nevertheless, a norm which is LUR and $C^1$ smooth on $Y$ \ and \ $C^p$ \ smooth on \ $\mathcal{U}$ \ is sufficient to prove our result. Recall that if $X$ has a LUR and $C^p$ smooth norm \ and $p>1$, then $X$ is superreflexive \cite{DGZ}. \medskip \item If $p=1$, since the dual space $Y^*$ is separable, there is a norm $|\cdot|$ on $Y$ which is LUR, $C^1$ smooth and WUR, and whose dual norm is strictly convex \cite{DGZ}. Recall that if the norm $|\cdot|$ is WUR, then the dual norm $|\cdot|^*$ is uniformly G\^{a}teaux smooth, and thus, G\^{a}teaux smooth. \end{itemize} \medskip \noindent Therefore, if $X$ is reflexive, the dual norm $|\cdot|^*$ is LUR and $C^1$ smooth \cite{DGZ}. If $X$ is not reflexive, the dual norm $|\cdot|^*$ is strictly convex and G\^{a}teaux smooth. \medskip Let us denote\ $S:=S_{|\cdot|}$, the unit sphere of $(Y,|\cdot|)$, \ and \ $S^*:=S_{|\cdot|^*}$, the unit sphere of $(Y^*,|\cdot|^*)$. Let us consider the duality mapping of the norm $|\cdot|$, defined as \begin{align*} D&: S \longrightarrow S^* \\ D&(x)=|\cdot|'(x), \end{align*} which is $|\cdot|-|\cdot|^*$ continuous because the norm $|\cdot|$ is of class $C^1$.
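\medskip Let us record a standard consequence of local uniform rotundity that is used repeatedly below: if $g\in S^*$ attains its norm at a point $x_0\in S$ (in particular, if $D(x_0)=g$), then $g$ strongly exposes $S$ at $x_0$, that is,
\begin{equation*}
\{x_n\}_n\subset S \ \text{ and } \ \lim_n g(x_n)=1 \qquad \Longrightarrow \qquad \lim_n|x_n-x_0|=0.
\end{equation*}
Indeed, $2\ge |x_n+x_0|\ge g(x_n+x_0)=g(x_n)+1\longrightarrow 2$, so $\lim_n|x_n+x_0|=2$, and the LUR property of $|\cdot|$ then yields $\lim_n|x_n-x_0|=0$.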
\medskip We establish a $C^p$ diffeomorphism $\Phi$ between $X$ and the upper half of the unit sphere of $Y$,\ $S^+:=\{y=(x,r)\in S:\ r>0\}$,\ as follows: $\Phi:X\longrightarrow S^+ $ \ is the composition \ $\Phi=\Pi\circ i$, \ where $i$ is the inclusion \ $i:X\longrightarrow Y,\ i(x)=(x,1)$ \ and $\Pi$ is defined by \ $\Pi:Y\setminus\{0\}\longrightarrow S$, \ $\Pi(y)=\frac{\,y\,}{\,|y|\,}$. \medskip In order to simplify the notation, we will give the proof for the case of a constant $\varepsilon>0$. By taking some standard technical precautions, the same proof works in the case of a positive continuous function $\varepsilon:X\to (0,+\infty)$ (at the end of the proof we will explain what small changes should be made). Now, given a continuous function \ $f:\,X \longrightarrow\mathbb{R}$, \ we consider the composition \ $F:=f\circ\Phi^{-1}:\,S^+\longrightarrow \mathbb{R}$, which is continuous as well.\ For any given $\varepsilon >0$ we will \ $3\varepsilon$-approximate \ $F$ \ by a $C^p$ smooth function \ $H:S^+\longrightarrow \mathbb{R}$ \ with the properties that: \begin{itemize} \item the set of critical points of $H$ is the countable union of a family of disjoint sets $\{K_n\}_n$\,, \item there are countable families of open slices $\{O_n\}_n$ \ and \ $\{B_n\}_n$ in $S^+$, such that \ $\cup_n\overline{B}_n$ is relatively closed in $S^+$, \ $\operatorname{dist}(B_n,\ X\times\{0\})>0$, \ $K_n\subset O_n\subset B_n$\,, \ $\operatorname{dist}(O_n,\,S^+\setminus B_n)>0$ \ and \ $\operatorname{dist}(B_n, \cup_{m\not=n}B_m)>0$, \ for every $n\in\mathbb N$, \item the oscillation of $F$ in every $B_n$ is less than $\varepsilon$.\end{itemize} \noindent (We will consider slices of $S^+$ of the form $\{x\in S: f(x)>r\}$\,,\ where $f$ is a continuous linear functional of norm one and $0<r<1$. Recall also that the distance between two sets $A$ and $A'$ in a Banach space $(M,|\cdot|_M)$ is defined as the real number \ $\operatorname{dist}(A,A'):=\inf\{|a-a'|_M:\ a\in A\,,\ a'\in A'\}$.)
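\medskip For concreteness, note that $\Phi$ and its inverse are given by the explicit formulas
\begin{equation*}
\Phi(x)=\frac{(x,1)}{\,|(x,1)|\,} \ \text{ for } x\in X, \qquad\qquad \Phi^{-1}(z,r)=\frac{z}{r} \ \text{ for } (z,r)\in S^+,
\end{equation*}
so that both $\Phi$ and $\Phi^{-1}$ are restrictions of mappings which are $C^p$ smooth on the open set $\mathcal{U}=\{(x,r)\in Y:\ r\not=0\}$, where the norm $|\cdot|$ is $C^p$ smooth.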
\medskip \noindent Then we will prove that the function $h:=H\circ\Phi$ is a $C^p$ smooth function on $X$, which $3\varepsilon$-approximates $f$, and the set of critical points of $h$, $C=\{x\in X: h'(x)=0\}$, can be written as $C=\bigcup_{n=1}^{\infty}\mathcal{K}_{n}$, where, for every $n\in \mathbb N$, the set $\mathcal{K}_n:=\Phi^{-1}(K_n)$ is contained in the {\em open, convex, bounded and $C^p$ smooth body} $\mathcal{O}_n:=\Phi^{-1}(O_n)$, which in turn is contained in the open, convex, bounded and $C^p$ smooth body $\mathcal{B}_n:=\Phi^{-1}(B_n)$, in such a way that $\textrm{dist}(\mathcal{O}_n, X\setminus \mathcal{B}_n)>0$, \ the oscillation of $f$ in $\mathcal{B}_n$ is less than \ $\varepsilon$, \ $\cup_n\overline{\mathcal{B}}_n$ is closed \ and \ $\operatorname{dist}(\mathcal{B}_n, \cup_{m\not=n}\mathcal{B}_m)>0$. Once we have done this, we will compose the function $h$ with a sequence of deleting diffeomorphisms which will eliminate the critical points of $h$. More precisely, for each set $\mathcal{O}_n$ we will find a $C^p$ diffeomorphism $\Psi_n$ from $X$ onto $X\setminus \overline{\mathcal{O}}_n$ so that $\Psi_n$ is the identity off $\mathcal{B}_n$. Then, by defining $g:=h\circ\bigcirc_{n=1}^{\infty}\Psi_{n}$, we will get a $C^p$ smooth function which $4\varepsilon$-approximates $f$ and which has no critical points. \medskip The most difficult part of this scheme is the construction of the function $H$.
We will inductively define linearly independent functionals $g_{k}\in Y^{*}$, open subsets $U_k$ of $S^{+}$, points $x_k\in U_k$, real numbers $a_k\not=0$ satisfying $|a_k-F(x_k)|<\varepsilon$, real numbers $\gamma_{k}$ and $\gamma_{i,j}$ in the interval $(0,1)$ \ (with $i+j=k$), functions $h_k$ of the form $h_k=\varphi_k(g_k)\,\phi_{k-1,1}(g_{k-1})\,\cdots\phi_{1,k-1}(g_1),$ \ where the \ $\varphi_k$,\ $\phi_{k-1,1}$,\,...,\,$\phi_{1,k-1}$ are suitably chosen $C^\infty$ functions on the real line, and functions $r_k$ of the form $r_k=s_k g_k+(1-s_kg_k(x_k))$ (with very small $s_k\not=0$), and put $$ {\bf H_k}=\frac{\sum_{i=1}^k a_ir_ih_i}{\sum_{i=1}^k h_i}, $$ where ${\bf H_k}: U_1\cup...\cup U_k\longrightarrow \mathbb R$. The interior of the support of $h_k$ will be the set \begin{equation*} U_k=\{x\in S^{+}:\ g_1(x)<\gamma_{1,k-1},..., \, g_{k-1}(x)<\gamma_{k-1,1} \ \text{ and }\ g_k(x)>\gamma_k\}, \end{equation*} where the oscillation of the function $F$ will be less than $\varepsilon$. Denote by $T_x$ the (vectorial) tangent hyperplane to $S^+$ at the point $x$, that is $T_x:=\operatorname{ker}D(x)$. The derivative of ${\bf H_k}$ at every point $x\in U_1\cup...\cup U_k$ will be shown to be the restriction to $T_x$ of a {\em nontrivial linear combination of the linear functionals } $g_{1}, ..., g_{k}$. Then, by making use of Lemmas \ref{case N} and \ref{vectoradicional} and choosing the $\gamma_{i,j}$ close enough to $\gamma_{i}$, we will prove that the set of critical points of ${\bf H_k}$ is a finite union of pairwise disjoint sets which are contained in a finite union of pairwise disjoint slices, with positive distance between any two slices (see Figure \ref{dibujo n=3} below). These slices will be determined by functionals in finite sets $N_k\subset Y^*$ defined by a repeated application of Lemmas \ref{case N} and \ref{vectoradicional}. The function $H$ will be then defined as $$ H=\frac{\sum_{k=1}^\infty a_kr_kh_k}{\sum_{k=1}^\infty h_k}. 
$$ \medskip Let us begin with the formal construction of the functions ${\bf H_{k}}$. We will use the notation $H_k$ and $H'_{k}$ when $H_k$ and its derivative $H_{k}'$ are thought to be defined on an open subset of $Y$ and reserve the symbols ${\bf H_{k}}$ and ${\bf H_{k}'}(x)$ for the restriction of $H_k$ and $H_{k}'(x)$ to a subset of $S$ and to the tangent space $T_{x}$ of $S$ at $x$, respectively. \medskip Since the norm $|\cdot|$ is LUR we can find, for every $x\in S^+$, open slices $R_x=\{y\in S: \ f_x(y)>\delta_x\}\subset S^+$ \ and \ $P_x=\{y\in S: \ f_x(y)>\delta_x^4\}\subset S^+$, \ where \ $0<\delta_x<1$ \ and \ $|f_x|=1=f_x(x)$, \ so that the oscillation of $F$ in every \ $P_x$ is less than $\varepsilon$. We also assume, for technical reasons, and with no loss of generality, that $\operatorname{dist}(P_x,\,X\times\{0\}\,)>0$. \medskip \noindent Since $Y$ is separable we can select a countable subfamily of $\{R_x\}_{x\in S^+}$, which covers $S^+$. Let us denote this countable subfamily by $\{S_n\}_n$, where $S_n:=\{y\in S: f_n(y)>\delta_n\}$. Recall that the oscillation of $F$ in every $P_n:=\{y\in S: f_n(y)>\delta_n^4\}$ is less than \ $\varepsilon$ \ and \ $\operatorname{dist}(P_n, \ X\times\{0\})>0$. \medskip \noindent $\bold{\diamond}$ {\bf For $\bold{k=1}$}, define \begin{align*} h_1&:\,S^+\longrightarrow \mathbb R \\ h_1&=\varphi_1(f_1), \end{align*} where $\varphi_1$ is a $C^\infty$ function on $\mathbb R$ satisfying \begin{align*} \varphi_1(t)&=0 \ \text{ if }\ t\le\delta_1 \\ \varphi_1(1)&=1 \notag \\ \varphi_1'(t)&>0 \ \text{ if } \ t>\delta_1\, . \notag \end{align*} Notice that the interior of the support of $h_1$ is the open set $S_1$. Denote by $x_1$ the point of $S^+$ satisfying $f_1(x_1)=1$. 
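\medskip Such a point $x_1$ exists because $f_1=f_x$ attains its norm at some point of $S^+$ by construction, and it is unique: if $x,x'\in S$ satisfy $f_1(x)=f_1(x')=1$, then
\begin{equation*}
1=f_1\Bigl(\frac{x+x'}{2}\Bigr)\le \Bigl|\frac{x+x'}{2}\Bigr|\le 1,
\end{equation*}
so $|x+x'|=2$, and the LUR property (in particular, the strict convexity) of the norm $|\cdot|$ gives $x=x'$.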
Now select $a_1\in \mathbb R^*=\mathbb R\setminus\{0\}$\ with \ $|a_1-F(x_1)|<\varepsilon$ and define the auxiliary function \begin{align*} &r_1:S^+\longrightarrow \mathbb R,\\ &r_1=s_1f_1+(1-s_1f_1(x_1)),\notag \end{align*} where we have selected $s_1$ so that $a_1s_1>0$ and $|s_1|$ small enough so that the oscillation of $r_1$ on $S_1$ is less than \ $\frac{\varepsilon}{\,|a_1|\,}$. \, Notice that $r_1(x_1)=1$. Define \begin{align*} &{\bf H_1}:S_1\longrightarrow \mathbb R\\ &{\bf H_1}=\frac{a_1r_1h_1}{h_1}=a_1r_1.\notag \end{align*} The function $\bold{H_1}$ is $C^p$ smooth in $S_1$ and the set of critical points of ${\bf H_1}$, \begin{equation*}Z_1= \{x\in S_1:\,H'_1(x)=0 \ \text{ on } \ T_x\}\end{equation*} consists of the unique point $x_1$. Indeed, $\bold{H'_1}(x)=H'_1(x)|_{T_x}=a_1s_1f_1|_{T_x}\equiv 0$ \ iff $D(x)=f_1$. This implies that \ $Z_1=\{x_1\}$. Now select real numbers $\gamma_{1,1}'$,\ $t_1$ and $l_1$ \ such that \ $\delta_1<\gamma_{1,1}'<t_1<l_1<1$ and define the open slices \begin{equation*} O_{f_1}=\{x\in S: \ f_1(x)>l_1\} \quad \text{ and } \quad B_{f_1}=\{x\in S: \ f_1(x)>t_1\}. \end{equation*} Clearly the above sets satisfy that \ $Z_1\subset O_{f_1}\subset B_{f_1}\subset S_1$, \ $\operatorname{dist}(O_{f_1},\, S\setminus B_{f_1})>0$ \ and \ $\operatorname{dist}(B_{f_1},\, \{x\in S:\ f_1(x)\le \gamma_{1,1}'\})>0$. \medskip \noindent In order to simplify the notation in the rest of the proof, let us set \ $\gamma_1=\delta_1$, \ $U_1=R_1=S_1$, \ $g_1=f_1$, \ $z_1=x_1$ \ and $\Gamma_1=N_1=\{g_1\}$. Let us define $\sigma_{1,1}=a_1s_1$ and write ${\bf H_1'}=\sigma_{1,1}\bold{g_1}$ on $U_1$, where $\bold{g_1}$ is the restriction of $g_1$ to $T_x$ whenever we evaluate $\bold{H_1'}(x)$. In addition, if $A\subset S$, we denote $A^c=S\setminus A$. \bigskip \noindent $\bold{\diamond}$ {\bf For $\bold{k=2}$}. Let us denote by $y_2\in S^{+}$ the point satisfying $f_2(y_2)=1$.
If either $\{g_1,\ D(y_2)=f_2\}$ is linearly dependent (this only occurs when $g_1=f_2$) \ or \ $g_1(y_2)=\gamma_1$, we use the density of the norm attaining functionals (Bishop-Phelps Theorem) and the continuity of $D$ to modify $y_2$ and find $z_2\in S^{+}$ so that $\{g_1,\ D(z_2):=g_2\}$ are l.i., \ $g_1(z_2)\not=\gamma_1$ \ and \begin{equation*} \{x\in S:f_2(x)>\delta_2^2\}\subset\{x\in S: g_2(x)>{\nu_2}\}\subset\{x\in S:f_2(x)>\delta_2^3\}, \end{equation*} for some ${\nu_2}\in(0,1)$. If \ $g_1(y_2)\not=\gamma_1$ \ and $\{g_1,f_2\}$ are l.i., define $g_2=f_2$ and $z_2=y_2$. \ Then, apply Lemma \ref{case N} to the 2-dimensional space $[g_1,g_2]$ with the norm $|\cdot|^*$ (the restriction to $[g_1,g_2]$ of the dual norm $|\cdot|^* $ considered in $Y^*$) and the real number $\gamma_1\in(0,1)$ to obtain ${\gamma_2}\in(0,1)$ close enough to $\nu_2$ so that \begin{equation*} S_2=\{x\in S:f_2(x)>\delta_2\}\subset\{x\in S: g_2(x)>{\gamma_2}\}\subset\{x\in S: f_2(x)>\delta_2^4\}=P_2 \end{equation*} and \begin{equation}\label{empty2} \{T\in [g_1,g_2]^* :\ |T|=1, \ T(g_1)=\gamma_1 \ \text{ and } \ T(g_2)={\gamma_2} \}=\emptyset. \end{equation} Recall that the norm $|\cdot|^*$ is G\^{a}teaux differentiable on $Y^*$ and therefore the restriction of this norm to $[g_1,g_2]$, which we shall denote by $|\cdot|^*$ as well, is a differentiable norm on the space $[g_1,g_2]$ (G\^{a}teaux and Fr\'{e}chet notions of differentiability are equivalent in the case of {\em convex} functions defined on {\em finite-dimensional} spaces). Therefore, we can apply Lemma \ref{case N} to the norm $|\cdot|^*$ in the space $[g_1,g_2]$. In fact, the same argument works for any finite dimensional subspace of $Y^*$ and we will apply Lemma \ref{case N} in the next steps to larger finite dimensional subspaces of $Y^*$.
Define the sets \begin{align*}R_2&=\{x\in S: g_2(x)>\gamma_2\}, \ \ \text{ and }\\ U_2'=\{&x\in S:\ g_1(x)<\gamma_{1,1}'\,,\ g_2(x)>\gamma_2\}.\end{align*} Assume that $U_1\cap R_2\not=\emptyset$ \ and consider the set $$M_2=D^{-1}([g_1,g_2])\cap U_2'\cap U_1.$$ In the case that $M_2=\emptyset$, we select as $\gamma_{1,1}$ any point in $(\gamma_1,\gamma_{1,1}')$. In the case that $M_2\not=\emptyset$ and $\gamma_1<\inf\{g_1(x):\ x\in M_2\}$, we select $\gamma_{1,1}$ \ so that \ $$\gamma_1<\gamma_{1,1}<\inf\{g_1(x):\ x\in M_2\}.$$ \medskip \noindent In the case that $\gamma_1=\inf\{g_1(x):\ x\in M_2\}$, in order to obtain an appropriate $\gamma_{1,1}$ we need to study the limits of the sequences $\{x_n\}\subset M_2$ such that $\lim_n g_1(x_n)=\gamma_1$. Define \begin{equation*} F_2'=\{T\in[g_1,g_2]^*: \ |T|=1 \ \text{ and } \ T(g_1)=\gamma_1 \}.\end{equation*} From Lemma \ref{vectoradicional}, we deduce that the cardinal of the set $F_2'$ is at most two. Furthermore, since $|\cdot|^*$ is strictly convex, the cardinal of the set \begin{equation*} N_2'=\{g\in S^*\cap[g_1,g_2]: \ T(g)=1 \ \text{ for some } \ T\in F_2'\}\end{equation*} is at most two. \medskip Let us take any sequence $\{x_n\}\subset M_2$ with $\lim_ng_1(x_n)=\gamma_1$. Consider every $x_n$ as an element of $Y^{**}$ and denote by $\bold{x_n}$ its restriction to $[g_1,g_2]$. Recall that if $x_n\in M_2$, then $D(x_n)\in S^* \cap[g_1,g_2]$, \ for every $n\in \mathbb N$.\ Moreover, the sequence of restrictions $\{\bold{x_n}\}\subset [g_1,g_2]^*$ satisfies that \begin{equation*} 1=|x_n|\ge|\bold{x_n}|=\max\{\bold{x_n}(h):\ h\in S^*\cap[g_1,g_2]\}\ge \bold{x_n}(D(x_n))=D(x_n)(x_n)=1,\end{equation*} for every $n\in \mathbb{N}$. Thus, there is a subsequence \ $\{\bold{x_{n_j}}\}$ \ converging to an element \ $T\in [g_1,g_2]^*$ \ with \ $|T|=1$. Since $\lim_jg_1(x_{n_j})=\lim_{j}\bold{x_{n_j}}(g_1)=\gamma_1$, we have $T(g_1)=\gamma_1$, and this implies that $T\in F_2'$.
Furthermore, if $g\in N_2'$ and $T(g)=1$, then $\lim_j\bold{x_{n_j}}(g)=1$. In addition, $T(g_2)=\lim_j\bold{x_{n_j}}(g_2)=\lim_j g_2(x_{n_j})\ge \gamma_2$. Then, from condition \eqref{empty2}, we deduce that $T(g_2)>\gamma_2$. Let us define \begin{align*} F_2=\{T\in F_2': \text{ there} & \text{ is a sequence } \ \{x_n\}\subset M_2 \ \text{ with } \ \lim_n\bold{x_n}=T \ \text{ and } \ \lim_n\bold{x_n}(g_1)=\gamma_1 \},\\ N_2&=\{g\in N_2': \ T(g)=1 \ \text{ for some } \ T\in F_2\}. \end{align*} Select a real number \ $\gamma_{2}'$ \ satisfying \ $\gamma_2< \gamma_{2}'<\min\{T(g_2):\ T\in F_2\}$ (recall that $F_2$ is finite). Let us prove the following Fact. \medskip \begin{fact}\label{fact 1} \hfill{ } \begin{enumerate} \item There are numbers \ $0<t_2<l_2<1$ \ such that for every $g\in N_2$, the slices $$O_g:=\{x\in S:\ g(x)>l_2\} \ \text{ and } \quad B_g:=\{x\in S:\ g(x)>t_2\}$$ satisfy that \begin{align}\label{inclusionfact1} O_g&\subset B_g\subset \{x\in S: \ g_1(x)<\gamma_{1,1}'\,,\ g_2(x)>\gamma_{2}'\} \ \text{ and }\\ \label{intersectionfact1} &\operatorname{dist}(B_g,B_{g'})>0, \text{ whenever } g,g'\in N_2\,, \ g\not=g'. \end{align} \item There is \ $\gamma_{1,1}\in (\gamma_1,\gamma_{1,1}')$,\ such that if $x\in M_2$ and $g_1(x)<\gamma_{1,1}$,\ then $x\in O_g$, for some $g\in N_2$. \end{enumerate} \end{fact} \noindent {\em Proof of Fact \ref{fact 1}.} (1) First, if $X$ is reflexive, we know that for every \ $g\in N_2$ \ there is \ $x_g\in S$ \ such that \ $D(x_g)=g$.\ Since \ $\bold{x_g}(g)=1$ \ and \ $|\cdot|^*$ \ is G\^{a}teaux smooth, \ we deduce that $\bold{x_g}\in F_2$. This implies that $\bold{x_g}(g_1)=\gamma_1<\gamma_{1,1}'$ and $\bold{x_g}(g_2)>\gamma_{2}'$. Hence, $x_g\in \{x\in S: \ g_1(x)<\gamma_{1,1}',\ g_2(x)>\gamma_{2}'\} $. Now, since the norm $|\cdot|$ is LUR and $D(x_g)=g$, the functional $g$ strongly exposes $S$ at the point $x_g$.
Taking into account that $N_2$ is finite, we can hence obtain real numbers $0<t_2<l_2<1$ and slices $O_g$ and $B_g$ satisfying conditions \eqref{inclusionfact1} and \eqref{intersectionfact1}, \ for every $g\in N_2$. \medskip \noindent Now consider a non-reflexive Banach space $X$. Let us first prove \eqref{inclusionfact1}. Assume, on the contrary, that there is a point $g \in N_2$ and there is a sequence $\{y_n\}\subset S$ satisfying $g(y_n)>1-\frac1n$ with either \ $g_1(y_n)\ge \gamma_{1,1}' $ \ or \ $g_2(y_n)\le \gamma_{2}'$ \ for every $n\in \mathbb N$. Since \ $g\in N_2$ \ there is a sequence \ $\{x_n\}\subset M_2$ \ with \ $\lim_n g_1(x_n)=\gamma_1$,\ $\lim_n g_2(x_n)>\gamma_{2}'$ \ and \ $\lim_n g(x_n)=1$. In particular, \begin{equation*} \frac{g(x_n)+1-\frac1n}{2}< g\left(\frac{x_n+y_n}{2}\right)\le \left|\frac{x_n+y_n}{2}\right|\le 1, \end{equation*} and thus $\lim_n\left|\frac{x_n+y_n}{2}\right|= 1$. Recall that, in this case, the norm $|\cdot|$ is WUR, and hence $x_n-y_n\xrightarrow{\omega} 0$\ (converges weakly to zero). This last assertion gives a contradiction since either \ $\limsup_ng_1(x_n-y_n)\le \gamma_1-\gamma_{1,1}'<0$ \ or \ $\liminf_ng_2(x_n-y_n)\ge \lim_ng_2(x_n)-\gamma_{2}'>0$. \ Therefore we can find real numbers \ $0<t_2<l_2<1$ \ and slices \ $O_g$ \ and \ $B_g$ \ for every \ $g\in N_2$, satisfying condition \eqref{inclusionfact1}. In order to obtain \eqref{intersectionfact1} we just need to modify $t_2$ and $l_2$ and select them close enough to $1$. Indeed, assume, on the contrary, that there are sequences $\{y_n\}\subset S$ and $\{z_n\}\subset S$ and $g,g'\in N_2$, $g\not=g'$, such that $\lim_ng(y_n)=1$, \ $\lim_ng'(z_n)=1$ and $\lim_n|y_n-z_n|=0$. Then, \begin{align*}\frac{g(y_n)+g'(z_n)}{2}&\le \frac{(g+g')(y_n)+g'(z_n-y_n)}{2}\le \frac{(g+g')(y_n)+|z_n-y_n|}{2}\\&\le \frac{|g+g'|^*}{2}+ \frac{|z_n-y_n|}{2}\le 1+\frac{|z_n-y_n|}{2}.
\end{align*} Since the limit of the first and last terms in the above chain of inequalities is $1$, we deduce that \ $|g+g'|^*=2$. Since the norm $|\cdot|^*$ is strictly convex, we deduce that $g=g'$, a contradiction. \medskip \noindent (2) Assume, on the contrary, that for every $n\in \mathbb N$, there is \ $x_n\in M_2$ \ with \ $g_1(x_n)\le \gamma_1+\frac1n$ \ and \ $\{x_n: n \in \mathbb N\} \cap (\cup_{g\in N_2}O_g)=\emptyset$. Since \ $\lim_ng_1(x_n)=\gamma_1$ \ and \ $\{x_n\}\subset M_2$, \ from the comments preceding Fact \ref{fact 1}, we know that there is a subsequence $\{x_{n_j}\}$ and $g\in N_2$ satisfying that $\lim_jg(x_{n_j})=1$, which is a contradiction. This finishes the proof of Fact \ref{fact 1}. $\Box$ \medskip \medskip If $U_1\cap R_2=\emptyset$, we can select as $\gamma_{1,1}$ any number in $(\gamma_1,\gamma_{1,1}')$. Now, we define, \begin{align*}h_2&:\,S^+\longrightarrow \mathbb R\\ h_2&=\varphi_2(g_2)\,\phi_{1,1}(g_1) \end{align*} where $\varphi_2$ and $\phi_{1,1}$ are $C^\infty$ functions on $\mathbb{R}$ satisfying: \begin{align*} \varphi_2(t)&=0 \ \text{ if }\ t\le\gamma_2 \\ \varphi_2(1)&=1 \\ \varphi_2'(t)&>0 \ \text{ if } \ t>\gamma_2\, , \end{align*} and \begin{align*} \phi_{1,1}(t)&=1 \ \ \text{ if }\ t\le \textstyle{\frac{\gamma_1+\gamma_{1,1}}{2}} \\ \phi_{1,1}(t)&=0 \ \ \text{ if }\ t\ge \gamma_{1,1} \notag\\ \phi'_{1,1}(t)&<0 \ \ \text{ if }\ t\in (\,\textstyle{\frac{\gamma_1+\gamma_{1,1}}{2}},\, \gamma_{1,1}). 
\notag\end{align*} Notice that the interior of the support of $h_2$ is the open set $$U_2=\{x\in S: g_1(x)< \gamma_{1,1}\,,\ g_2(x)>{\gamma_{2}}\}.$$ \noindent Select one point $x_2\in U_2$, a real number $a_2\in \mathbb R^*$ with $|a_2-F(x_2)|<\varepsilon$ and define the auxiliary function \begin{align*} &r_2:S^+\longrightarrow \mathbb R,\\ &r_2=s_2g_2+(1-s_2g_2(x_2)),\notag \end{align*} where we have selected $s_2$ so that $s_2a_2>0$ and $|s_2|$ is small enough so that the oscillation of $r_2$ on $U_2$ is less than \ $\frac{\varepsilon}{|a_2|}$. \, Notice that $r_2(x_2)=1$. \noindent Let us study the critical points $Z_2$ of the function \begin{align*} &{\bf H_2}: U_1\cup U_2\longrightarrow \mathbb R,\\ \notag &{\bf H_2}=\frac{a_1r_1h_1+a_2r_2h_2}{h_1+h_2}. \end{align*} Let us prove that $Z_2=\{x\in U_1\cup U_2:\,H'_2(x)= 0 \ \text{ on } \ T_x\}$ can be included in a finite number of pairwise disjoint slices within $U_1\cup U_2$ by splitting it conveniently into up to four sets. \noindent First, \ if \ $x\in U_1\setminus U_2$,\ we have that $H_2(x)=a_1r_1(x)$ \ and \ $\bold{H_2'}(x)=H_2'(x)|_{T_x}=a_1s_1g_1|_{T_x}\equiv 0$ \ iff \ $D(x)=g_1$. \ Thus, \ $Z_2\cap(U_{1}\setminus U_{2})\subseteq\{z_{1}\}$. \ Second, if $x\in U_2\setminus U_1$,\ we have $H_2(x)=a_2r_2(x)$ \ and \ $\bold{H_2'}(x)=H_2'(x)|_{T_x}=a_2s_2g_2|_{T_x}\equiv 0$ iff $D(x)=g_2$. \ Then, if $z_2\in U_2\setminus U_1$, \ ${\bf H_2}$ has one critical point in $U_2\setminus U_1$, namely $z_2$; in this case, since $g_1(z_2)\not=\gamma_1$, the point $z_2$ actually belongs to $U_2\setminus \overline{U}_1$. \noindent Now, let us study the critical points of $\bold{H_2}$ in $U_1\cap U_2$. In order to simplify the notation, let us put $\Lambda_1=\frac{h_1}{h_1+h_2}$, and denote by \ $\bold{g_1}$ \ and $\bold{g_2}$ \ the restrictions \ $g_1|_{T_x}$ \ and \ $g_2|_{T_x}$, respectively, whenever we consider $\bold{H_2'}(x)$ \ and $\Lambda_1'(x)$. 
Then, \ ${\bf H_2}=a_1r_1\Lambda_1 +a_2r_2(1-\Lambda_1)$ \ and \begin{align*} {\bf H_2'}&=a_1s_1\Lambda_1 \bold{g_1}+a_2s_2(1-\Lambda_1)\bold{g_2} +(a_1r_1-a_2r_2)\Lambda'_1 \\ &=\sigma_{1,1}\Lambda_1 \bold{g_1}+a_2s_2(1-\Lambda_1)\bold{g_2} +(H_1-a_2r_2)\Lambda'_1.\end{align*} By computing $\Lambda_1'$, we obtain $\Lambda_1'=\xi_{1,1}\bold{g_1}+ \xi_{1,2}\bold{g_2}$, where the coefficients $\xi_{1,1}$ and $\xi_{1,2}$ are continuous functions on $U_1\cup U_2$ and have the following form: \begin{align*} \xi_{1,1}&=\frac{\varphi_1'(g_1)h_2-h_1\varphi_2(g_2)\phi_{1,1}'(g_1)}{(h_1+h_2)^2} \\ \xi_{1,2}&=\frac{-h_1\varphi_2'(g_2)\,\phi_{1,1}(g_1)}{(h_1+h_2)^2} \end{align*} Thus \ $ {\bf H_2'}=\sigma_{2,1}\bold{g_1}+\sigma_{2,2}\bold{g_2}$, \ where \ $\sigma_{2,1}$ and $\sigma_{2,2}$ are continuous functions on $U_1\cup U_2$ and have the following form \begin{align}\label{derivada de H2} \sigma_{2,1}&=\sigma_{1,1}\Lambda_1+(H_1-a_2r_2)\xi_{1,1} \\ \sigma_{2,2}&= a_2s_2(1-\Lambda_1)+(H_1-a_2r_2)\xi_{1,2}. \notag \end{align} \noindent Notice that if $x\in U_1\cap U_2$\,, then \ $\sigma_{1,1}>0$, \ $a_2s_2>0$, \ $\Lambda_1>0$, \ $1-\Lambda_1>0$, \ $\xi_{1,1}>0$ \ and \ $\xi_{1,2}<0$. Therefore, on $U_1\cap U_2$\,, the coefficient $\sigma_{2,1}$ is strictly positive whenever $H_1-a_2r_2\ge 0$, \ and the coefficient $\sigma_{2,2}$ is strictly positive whenever $H_1-a_2r_2\le 0$. Since the vectors $g_1$ and $g_2$ are linearly independent, if $x\in U_1\cap U_2$ \ and ${\bf H_2'}(x):T_x\longrightarrow \mathbb R$ \ is identically zero, \ there is necessarily \ $\varrho\not=0$ with $D(x)=\varrho (\sigma_{2,1}(x)g_1+\sigma_{2,2}(x)g_2)$. Thus, $D(x)\in [g_1,g_2]$.
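The quotient-rule computation behind $\xi_{1,1}$ and $\xi_{1,2}$ can be spot-checked symbolically. A minimal sketch (our own verification, not part of the proof): the symbols $u,v$ stand for the scalar values $g_1(x),g_2(x)$, and \texttt{vphi1}, \texttt{vphi2}, \texttt{phi11} are generic placeholders for the cutoff functions $\varphi_1,\varphi_2,\phi_{1,1}$.

```python
import sympy as sp

u, v = sp.symbols('u v')        # stand-ins for g_1(x), g_2(x)
vphi1 = sp.Function('vphi1')    # placeholder for varphi_1
vphi2 = sp.Function('vphi2')    # placeholder for varphi_2
phi11 = sp.Function('phi11')    # placeholder for phi_{1,1}

h1 = vphi1(u)                   # h_1 = varphi_1(g_1)
h2 = vphi2(v) * phi11(u)        # h_2 = varphi_2(g_2) * phi_{1,1}(g_1)
Lam1 = h1 / (h1 + h2)           # Lambda_1 = h_1 / (h_1 + h_2)

# the coefficients xi_{1,1}, xi_{1,2} as claimed in the text
xi11 = (sp.diff(vphi1(u), u) * h2 - h1 * vphi2(v) * sp.diff(phi11(u), u)) / (h1 + h2)**2
xi12 = (-h1 * sp.diff(vphi2(v), v) * phi11(u)) / (h1 + h2)**2

# Lambda_1' = xi11 * g_1' + xi12 * g_2' amounts to these two partial derivatives
assert sp.simplify(sp.diff(Lam1, u) - xi11) == 0
assert sp.simplify(sp.diff(Lam1, v) - xi12) == 0
```

Both assertions reduce to the quotient rule for $h_1/(h_1+h_2)$, confirming the displayed formulas.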
\medskip \noindent The set $Z_2$ can be split into the disjoint sets $Z_2=Z_1 \cup Z_{2,1}\cup Z_{2,2}$, where \begin{equation*} Z_{2,1}=\begin{cases}\{z_2\}, & \text{ if } \ z_2\in U_2\setminus \overline{U}_1\\ \emptyset, & \text{ otherwise } \end{cases} \end{equation*} and \ $Z_{2,2}$ \ is a subset (possibly empty) within \ $ U_1\cap U_2 \cap D^{-1}([g_1,g_2])$. Now, let us check that $Z_{2,2}\subset\cup_{g\in N_2}O_g$. \ Indeed, if \ $x\in Z_{2,2}$\,, \ then \ $x\in U_1\cap U_2\subset U_1\cap U_2'$\,, \ $D(x)\in [g_1,g_2]$ \ and $g_1(x)<\gamma_{1,1}$. \ This implies, according to Fact \ref{fact 1}, that \ $x\in \cup_{g\in N_2}O_g$. \medskip In the case when \ $Z_{2,1}=\{z_2\}$ \ and \ $z_2\not\in \cup_{g\in N_2}{\overline{O}_g}$\,, we select, if necessary, a larger $t_2$\, \ with $t_2<l_2$\,, so that \ $z_2\notin \cup_{g\in N_2}\overline{B}_g$. Since the norm \ $|\cdot|$ \ is LUR and \ $D(z_2)=g_2$\,, \ the functional $g_2$ strongly exposes $S$ at the point $z_2$ and we may select numbers \ $0<t_2'<l_2'<1$ \ and open slices, which are neighborhoods of \ $z_2$\,, \ defined by \begin{equation*} O_{g_2}:=\{x\in S: g_2(x)>l_2'\} \quad \text{ and } \quad B_{g_2}:=\{x\in S: g_2(x)>t_2'\}, \end{equation*} satisfying \ $O_{g_2}\subset B_{g_2}\subset \{x\in S: \ g_1(x)<\gamma_{1,1}',\ g_2(x)>\gamma_{2}'\} $ and $\operatorname{dist}(B_{g_2},B_g)>0$ for every $g\in N_2$. In this case, we define $\Gamma_2=N_2\cup\{g_2\}$. \medskip Now, if $Z_{2,1}=\{z_2\}$ and $z_2\in \cup_{g\in N_2}{\overline{O}_g}$,\ we select, if necessary, a smaller constant $l_2$, with $0<t_2<l_2<1$, so that $z_2\in \cup_{g\in N_2}{O_g}$\,. In this case, and also when $Z_{2,1}=\emptyset$, we define $\Gamma_2=N_2$. Notice that, in any of the cases mentioned above, Fact \ref{fact 1} clearly holds for the (possibly) newly selected real numbers $t_2$ and $l_2$. \medskip Notice that the distance between any two sets $B_{g}$, \ $B_{g'}$, \ $g,g'\in \Gamma_1\cup \Gamma_2$, \ $g\not=g'$, \ is positive.
\ Moreover, $Z_1\subset O_{g_1}\subset B_{g_1}\subset U_1=R_1$, \ and \ $Z_{2,1}\cup Z_{2,2}\subset \cup_{g\in \Gamma_2}O_g \subset \cup_{g\in \Gamma_2} B_g \subset U_2'\subset R_2$. Therefore, $Z_2=Z_1\cup Z_{2,1}\cup Z_{2,2}\subset \cup_{g\in \Gamma_1 \cup \Gamma_2}O_g \subset \cup_{g\in \Gamma_1 \cup \Gamma_2}B_g \subset U_1\cup U_2=R_1\cup R_2$. In addition, we have $\operatorname{dist}(\cup_{g\in \Gamma_1 \cup \Gamma_2}B_g , \ (U_1\cup U_2)^c)>0$. \medskip It is worth remarking that ${\bf H_2'}=\sigma_{2,1}\bold{g_1}+\sigma_{2,2}\bold{g_2}$ in $U_1\cup U_2$, where $\sigma_{2,1}$ and $ \sigma_{2,2}$ are continuous functions and at least one of the coefficients $\sigma_{2,1}(x),\,\sigma_{2,2}(x)$ is strictly positive, for every \ $x\in U_1\cup U_2$. Moreover, $\sigma_{2,1}(x)=0$ whenever $x\not\in U_1$, and $\sigma_{2,2}(x)=0$ whenever $x\not\in U_2$. \medskip In order to clarify the construction in the general case, let us also explain in detail the construction of the function $h_3$ and locate the critical points of the function $\bold{H_3}$. \medskip \noindent $\bold{\diamond}$ {\bf For $\bold{j=3}$,} let us denote by $y_3\in S$ the point satisfying $f_3(y_3)=1$. If $\{g_1,g_2,f_3\}$ are linearly dependent, \ or if \ $g_1(y_3)=\gamma_1$, \ or if \ $g_2(y_3)=\gamma_2$\,, we can use the density of the norm-attaining functionals (Bishop-Phelps Theorem) and the continuity of $D$ to modify $y_3$ and find $z_3\in S$ so that: $g_1(z_3)\not=\gamma_1$,\ $g_2(z_3)\not={\gamma_2}$,\ $\{g_1,g_2,g_3:=D(z_3)\}$ are linearly independent (l.i.), and \begin{equation*} \{x\in S:\ f_3(x)>\delta_3^2\}\subset\{x\in S:\ g_3(x)>{\nu_3}\}\subset\{x\in S: f_3(x)>\delta_3^3\} \end{equation*} for some ${\nu_3}\in (0,1)$. If $\{g_1,g_2,f_3\}$ are l.i., \ $g_1(y_3)\not=\gamma_1$, \ and \ $g_2(y_3)\not=\gamma_2$, we define $g_3=f_3$ and $z_3=y_3$. Then, we apply Lemma \ref{case N} to the l.i.
vectors $\{g_1,g_2,g_3\}$ and the real numbers $\gamma_1\in(0,1)$,\ ${\gamma_2}\in(0,1)$ and obtain $\gamma_3\in (0,1)$ close enough to $\nu_3$ so that \begin{equation*} S_3=\{x\in S:\ f_3(x)>\delta_3 \}\subset\{x\in S:\ g_3(x)>{\gamma_3}\}\subset \{x\in S: \ f_3(x)>\delta_3^4\}=P_3, \end{equation*} \begin{equation}\label{3-1} \{T\in [g_1,g_2,g_3]^*:\ T(g_1)=\gamma_1\,, \ T(g_2)={\gamma_2}\,,\ T(g_3)={\gamma_3}\ \text{ and } \ |T|=1 \}=\emptyset, \end{equation} \begin{equation}\label{3-2} \{T\in [g_1,g_3]^*:\ T(g_1)=\gamma_1\,,\ T(g_3)={\gamma_3}\ \text{ and } \ |T|=1 \}=\emptyset, \end{equation} \begin{equation}\label{3-3} \{T\in [g_2,g_3]^*:\ T(g_2)={\gamma_2}\,,\ T(g_3)={\gamma_3}\ \text{ and } \ |T|=1 \}=\emptyset. \end{equation} Select $\gamma_{2,1}'\in (\gamma_2,\gamma_{2}' )$ \ and define \begin{align*} R_3&=\{x \in S:\, g_3(x)>\gamma_3\} \ \text{ and } \\ U_3'&=\{x\in S:\ g_1(x)< \gamma_{1,2}'\,,\ g_2(x)<\gamma_{2,1}'\ \text{ and } \ g_3(x)>\gamma_3\},\end{align*} where $\gamma_{1,2}'$ is a number in $(\gamma_1,\,\frac{\gamma_1+\gamma_{1,1}}{2}).$ \noindent Notice that $\operatorname{dist}(B_g,\,U_3')>0$ \ for every \ $g\in \Gamma_1\cup \Gamma_2$\,. Assume that $R_3\cap (U_1\cup U_2)\not = \emptyset$, and consider the sets \begin{align*} M_{3,1}&=\{x\in (U_1\cap U_3') \setminus U_2:\ D(x)\in[g_1,g_3]\},\\ M_{3,2}&=\{x\in (U_2\cap U_3') \setminus U_1:\ D(x)\in[g_2,g_3]\},\\ M_{3,1,2}&=\{x\in U_1\cap U_2 \cap U_3':\ D(x)\in[g_1,g_2,g_3] \},\end{align*} and $M_3=M_{3,1}\cup M_{3,2}\cup M_{3,1,2}$\,. In the case that $M_3=\emptyset$, we select as $\gamma_{2,1}$ any point in $(\gamma_2,\gamma_{2,1}')$ and $\gamma_{1,2}$ any point in $(\gamma_1,\gamma_{1,2}' )$. In the case that $M_3\not=\emptyset$ and $\operatorname{dist}(M_3, (U_1\cup U_2)^c)>0$ we can easily find $\gamma_{2,1}\in (\gamma_2,\gamma_{2,1}')$ and $\gamma_{1,2}\in (\gamma_1,\gamma_{1,2}' )$ with $M_3\subset \{x\in S: g_1(x)>\gamma_{1,2}\}\cup \{x\in S: g_2(x)>\gamma_{2,1}\}$.
In the case that \ $\operatorname{dist}(M_3, (U_1\cup U_2)^c)=0$ \ and in order to obtain suitable constants $\gamma_{2,1}$ and $\gamma_{1,2}$\,, we need to study the limits of the sequences $\{x_n\}\subset M_3$ such that $\lim_n \operatorname{dist}(x_n,(U_1\cup U_2)^c)=0$. Define the sets \begin{align*} F_{3,i}'&=\{T\in[g_i,g_3]^*: \ T(g_i)=\gamma_i \ \text{ and } \ |T|=1\} \quad \text{ for } i=1,2, \\ F_{3,1,2}'&=\{T\in[g_1,g_2,g_3]^* :\ T(g_1)=\gamma_1, \ T(g_2)=\gamma_2 \ \text{ and } \ |T|=1 \}, \end{align*} and \begin{align*} N_{3,i}'&=\{g\in S^*\cap [g_i,g_3]: \ T(g)=1 \ \text{ for some } \ T\in F_{3,i}'\} \quad \text{ for } i=1,2,\\ N_{3,1,2}'&=\{g\in S^* \cap[g_1,g_2,g_3] :\ T(g)=1 \ \text{ for some } \ T\in F_{3,1,2}'\}. \end{align*} Since the norm $|\cdot|^*$ is G\^{a}teaux smooth, we apply Lemma \ref{vectoradicional} to the finite dimensional space $[g_1,g_2,g_3]$ and the restriction of the norm $|\cdot|^*$ to $[g_1,g_2,g_3]$ (which is a differentiable norm on the space $[g_1,g_2,g_3]$), and deduce that the cardinality of any of the sets $F_{3,i}'$\,, $F_{3,1,2}'$ is at most two. Furthermore, from the strict convexity of the norm $|\cdot|^*$ we obtain that the cardinality of any of the sets $N_{3,i}'$ and $N_{3,1,2}'$ \ is at most two. Let us consider, for $i=1,2$, the norm-one extensions to \ $[g_1,g_2,g_3]$ \ of the functionals of $F_{3,i}'$\,, that is, \begin{equation*} F_{3,i}''=\{T\in [g_1,g_2,g_3]^*:\, T|_{[g_i,g_3]}\in F_{3,i}' \ \text{ and } \ |T|=1\}. \end{equation*} Since the norm $|\cdot|^*$ is G\^{a}teaux smooth, for every $G\in F_{3,i}'$ there is exactly {\em one} norm-one extension $T$ to $[g_1,g_2,g_3]$. Therefore the cardinality of the set $F_{3,i}''$ is at most two. Hence the sets $F_3':=F_{3,1}''\cup F_{3,2}''\cup F_{3,1,2}'$ and $N_3':=N_{3,1}'\cup N_{3,2}'\cup N_{3,1,2}'$ are finite. In addition, as a consequence of the equalities \eqref{3-1}, \eqref{3-2} and \eqref{3-3}, we deduce that $T(g_3)\not=\gamma_3$ \ for every \ $T\in F_3'$.
Indeed, if $T\in F_{3,1,2}'$ the assertion follows immediately from \eqref{3-1}. If \ $T\in F_{3,i}''$ \ for some \ $i\in\{1,2\}$, \ then \ $T|_{[g_i,g_3]}\in F_{3,i}'$, that is, $|T|_{[g_i,g_3]}|=1$ \ and \ $T(g_i)=\gamma_i$. From \eqref{3-2} \ for $i=1$, and \eqref{3-3} for $i=2$, we obtain that $T(g_3)\not=\gamma_3$. We can restrict our study to one of the following kinds of sequences: \medskip \begin{enumerate} \item Fix $i\in \{1,2\}$. Consider any sequence $\{x_n\}\subset M_{3,i}$ \ such that \ $\lim_n \operatorname{dist}(x_n,(U_1\cup U_2)^c)=0$. Then, it easily follows that $\lim_n g_i(x_n)=\gamma_i$. Indeed, \begin{itemize} \item if $\{x_n\}\subset M_{3,1}$, then in particular \ $\{x_n\}\subset U_1=R_1$. Therefore, \ $\operatorname{dist}(x_{n}, (U_1\cup U_2)^c)\ge \operatorname{dist}(x_{n}, R_1^c)$. Thus, $\lim_n \operatorname{dist}(x_{n}, R_1^c) =0$ and this implies that $\lim_n g_1(x_n)=\gamma_1$; \item if $\{x_n\}\subset M_{3,2}$, then in particular \ $\{x_n\}\subset U_2\subset R_2$. Recall that $U_1\cup U_2=R_1\cup R_2$. Therefore, \ $\operatorname{dist}(x_{n}, (U_1\cup U_2)^c)\ge \operatorname{dist}(x_{n}, R_2^c)$. Thus, $\lim_n \operatorname{dist}(x_{n}, R_2^c) =0$ and this implies that $\lim_n g_2(x_n)=\gamma_2$. \end{itemize} \noindent Now, let us take any sequence $\{x_n\}\subset M_{3,i}$ such that $\lim_n g_i(x_n)=\gamma_i$. Consider every $x_n$ as an element of $X^{**}$ and denote by $\bold{x_n}$ its restriction to $[g_1,g_2,g_3]$. Recall that \ $D(x_n)\in S^*\cap [g_i,g_3]$ \ for every $n\in \mathbb N$. Then, the sequence of restrictions $\{\bold{x_n}\}\subset [g_1,g_2,g_3]^*$ satisfies that \begin{align*} \qquad 1&=|x_n|\ge|\bold{x_n}|\ge |\bold{x_n}|_{[g_i,g_3]}|= \max\{\bold{x_n}(h):\ h\in S^*\cap[g_i,g_3]\}\\ &\ge \bold{x_n}(D(x_n))=D(x_n)(x_n)=1,\end{align*} \noindent for every $n\in \mathbb{N}$. Thus, there is a subsequence \ $\{\bold{x_{n_j}}\}$ \ converging to an element \ $T\in [g_1,g_2,g_3]^*$ \ with \ $|T|=|T|_{[g_i,g_3]}|=1$.
Since $\lim_jg_i(x_{n_j})=\lim_{j}\bold{x_{n_j}}(g_i)=\gamma_i$\,, we have that \ $T(g_i)=\gamma_i$ \ and this implies that \ $T|_{[g_i,g_3]}\in F_{3,i}'$ \ and \ $T\in F_{3,i}''$. Furthermore, if $g\in N_{3,i}'$ and $T(g)=1$, then $\lim_j\bold{x_{n_j}}(g)=1$. In addition, $T(g_3)=\lim_j\bold{x_{n_j}}(g_3)=\lim_j g_3(x_{n_j})\ge \gamma_3$ because $\{x_{n_j}\}\subset U_3'$. Then, from condition \eqref{3-2} if $i=1$ and condition \eqref{3-3} if $i=2$, we deduce that $T(g_3)>\gamma_3$. Finally, let us check that $T(g_{s})=\lim_j\bold{x_{n_j}}(g_{s})\le \gamma_{s}$\,, where $s\in \{1,2\}$ and $s\not= i$\,: \begin{itemize} \item if $i=1$, the sequence $\{x_{n_j}\}\subset M_{3,1}$ and thus $\{x_{n_j}\}\subset (U_1\cap U_3')\setminus U_2$. In particular \ $\{x_{n_j}\}\subset U_3'$ \ and \ $g_1(x_{n_j})< \gamma_{1,2}'<\gamma_{1,1}$ \ for every $j\in \mathbb N$. Therefore, since \ $x_{n_j}\not\in U_2$ for all $j$, we must have $\bold{x_{n_j}}(g_2)=g_2(x_{n_j})\le \gamma_2$ \ for every $j\in \mathbb N$. \item if $i=2$, the sequence $\{x_{n_j}\}\subset M_{3,2}$ and thus $\{x_{n_j}\}\subset (U_2\cap U_3')\setminus U_1$. In particular \ $x_{n_j}\not\in U_1=R_1$\,, for every $j\in \mathbb N$ \ and this implies \ $\bold{x_{n_j}}(g_1)=g_1(x_{n_j})\le \gamma_1$ \ for every $j\in \mathbb N$. \end{itemize} \medskip \item Consider a sequence $\{x_n\}\subset M_{3,1,2}$, such that $\lim_n \operatorname{dist}(x_n,(U_1\cup U_2)^c)=0$. Then, it easily follows that \ $\lim_n g_i(x_n)=\gamma_i$ \ for $i=1,2$. Indeed, $U_1\cup U_2=R_1\cup R_2$ and then $\operatorname{dist}(x_n, (R_1\cup R_2)^c)\ge \operatorname{dist}(x_n, R_i^c)$ for every $n\in \mathbb N$ and $i=1,2$. Hence $\lim_n\operatorname{dist}(x_n, R_i^c)=0$. Since $\{x_n\}\subset R_i$,\ for $i=1,2$, \ we obtain that $\lim_ng_i(x_n)=\gamma_i$, \ for $i=1,2$. \noindent Now, let us take any sequence $\{x_n\}\subset M_{3,1,2}$ such that $\lim_n g_i(x_n)=\gamma_i$, for every $i=1,2$.
Consider every $x_n$ as an element of $X^{**}$ and denote by $\bold{x_n}$ its restriction to $[g_1,g_2,g_3]$. Then, the sequence of restrictions $\{\bold{x_n}\}\subset [g_1,g_2,g_3]^*$ satisfies that \begin{equation*} \quad \qquad 1=|x_n|\ge|\bold{x_n}|=\max\{\bold{x_n}(h): \,h\in S^*\cap[g_1,g_2,g_3]\} \ge \bold{x_n}(D(x_n))=D(x_n)(x_n)=1,\end{equation*} \noindent for every $n\in \mathbb{N}$. Thus, there is a subsequence \ $\{\bold{x_{n_j}}\}$ \ converging to an element \ $T\in [g_1,g_2,g_3]^*$ \ with \ $|T|=1$. Since \ $\lim_jg_i(x_{n_j})=\lim_{j}\bold{x_{n_j}}(g_i)=\gamma_i$ \ for \ $i=1,2$, \ then \ $T(g_i)=\gamma_i$ \ for \ $i=1,2$,\ and this implies that $T\in F_{3,1,2}'$. Furthermore, if $g\in N_{3,1,2}'$ and $T(g)=1$, then $\lim_j\bold{x_{n_j}}(g)=1$. In addition, $T(g_3)=\lim_j\bold{x_{n_j}}(g_3)=\lim_j g_3(x_{n_j})\ge \gamma_3$ because $\{x_{n_j}\}\subset U_3'$. Then, from condition \eqref{3-1}, we deduce that $T(g_3)>\gamma_3$. \end{enumerate} \medskip \noindent Let us define, for $i=1,2$, \begin{equation*} F_{3,i}=\{T\in F_{3,i}'':\text{ there is } \ \{x_n\}\subset M_{3,i} \, \text{ with }\, \lim_n\bold{x_n}(g_i)=\gamma_i\,, \text{ and } \lim_n\bold{x_n}=T\}, \end{equation*} \begin{align*} F_{3,1,2}=\{T\in F_{3,1,2}':\text{ there is } \ \{x_n\}\subset M_{3,1,2} \, \text{ with }\, \lim_n\bold{x_n}(g_1)=\gamma_1\,, & \ \lim_n\bold{x_n}(g_2)=\gamma_2 \\ & \quad \text{ and } \ \lim_n\bold{x_n}=T\}, \end{align*} and \begin{equation} F_3=F_{3,1}\cup F_{3,2}\cup F_{3,1,2}. \end{equation} Select a real number \ $\gamma_{3}'$ \ satisfying \ $\gamma_3< \gamma_{3}'<\min\{T(g_3):\ T\in F_3\}$ (recall that $F_3$ is finite), and define, \begin{equation*} N_{3,i}=\{g\in N_{3,i}': \text{ there is } T\in F_{3,i} \, \text{ with }\, T(g)=1 \}, \ \text{ for } i=1,2, \end{equation*} \begin{equation*} N_{3,1,2}=\{g\in N_{3,1,2}':\text{ there is } T \in F_{3,1,2} \, \text{ with }\, T(g)=1\}, \end{equation*} and $N_3=N_{3,1}\cup N_{3,2}\cup N_{3,1,2}$. 
Let us prove the following Fact. \medskip \begin{fact} \label{3} \begin{enumerate} \item There are numbers \ $0<t_3<l_3<1$ \ such that for every $g\in N_3$\,, the slices $$O_g:=\{x\in S:\ g(x)>l_3\} \ \text{ and } \ B_g:=\{x\in S:\ g(x)>t_3\}$$ satisfy that \begin{align}\label{inclusion} O_g\subset B_g & \subset \{x\in S: \ g_1(x)<\gamma_{1,2}'\,,\ g_2(x)<\gamma_{2,1}'\,, \ g_3(x)>\gamma_{3}'\} \ \text{ and }\\ \label{intersection} &\operatorname{dist}(B_g,B_{g'})>0, \text{ whenever } g,g'\in N_3\,, \ g\not=g'. \end{align} \item There are numbers \ $\gamma_{1,2}\in (\gamma_1,\gamma_{1,2}')$ \ and \ $\gamma_{2,1}\in (\gamma_2,\gamma_{2,1}')$ \ such that if $x\in M_3$\,, \ $g_1(x)<\gamma_{1,2}$ \ and \ $g_2(x)<\gamma_{2,1}$\,, \ then \ $x\in O_g$\,, for some $g\in N_3$. \end{enumerate} \end{fact} \medskip \noindent{\bf Proof of Fact \ref{3}. } (1) First, if $X$ is reflexive, we know that for every \ $g\in N_3$ \ there is \ $x_g\in S$ \ such that \ $D(x_g)=g$. Let us study the three possible cases: \begin{itemize} \item If $g\in N_{3,1}$\,, denote by $\mathbf{x_g}$ the restriction of $x_g$ to $[g_1,g_2,g_3]$. Since $\mathbf{x_g}(g)=1$ and $|\cdot|^*$ is G\^{a}teaux smooth, then $\bold{x_g}=T$ \ for some $T\in F_{3,1}$. This implies that $\bold{x_g}(g_1)=\gamma_1<\gamma_{1,2}'$\,, \ $\bold{x_g}(g_3)>\gamma_{3}'$ \ and \ $\bold{x_g}(g_2)\le \gamma_2< \gamma_{2,1}'$. Hence, $x_g\in \{x\in S: \ g_1(x)<\gamma_{1,2}'\,,\ g_2(x)<\gamma_{2,1}' \text{ and } g_3(x)>\gamma_{3}'\}$. \medskip \item If $g\in N_{3,2}$\,, denote by $\mathbf{x_g}$ the restriction of $x_g$ to $[g_1,g_2,g_3]$. Since $\mathbf{x_g}(g)=1$ and $|\cdot|^*$ is G\^{a}teaux smooth, then \ $\bold{x_g}=T$ \ for some $T\in F_{3,2}$. This implies that $\bold{x_g}(g_2)=\gamma_2<\gamma_{2,1}'$\,, \ $\bold{x_g}(g_3)>\gamma_{3}'$ \ and \ $\bold{x_g}(g_1)\le \gamma_1< \gamma_{1,2}'$. Hence, $x_g\in \{x\in S: \ g_1(x)<\gamma_{1,2}'\,,\ g_2(x)<\gamma_{2,1}' \text{ and } g_3(x)>\gamma_{3}'\}$.
\medskip \item If $g\in N_{3,1,2}$\,, denote by $\mathbf{x_g}$ the restriction of $x_g$ to $[g_1,g_2,g_3]$. Since $\mathbf{x_g}(g)=1$ and $|\cdot|^*$ is G\^{a}teaux smooth, then $\bold{x_g}=T$ \ for some $T\in F_{3,1,2}$. This implies that $\bold{x_g}(g_1)=\gamma_1<\gamma_{1,2}'$\,, \ $\bold{x_g}(g_2)=\gamma_2<\gamma_{2,1}'$ \ and \ $\bold{x_g}(g_3)> \gamma_{3}'$. Hence, $x_g\in \{x\in S: \ g_1(x)<\gamma_{1,2}',\ g_2(x)<\gamma_{2,1}' \text{ and } g_3(x)>\gamma_{3}'\}$. \end{itemize} \medskip \noindent Now, since the norm $|\cdot|$ is LUR and $D(x_g)=g$, the functional $g$ strongly exposes $S$ at the point $x_g$ for every $g\in N_3$. Since $N_3$ is finite, we can hence obtain real numbers $0<t_3<l_3<1$ and slices $O_g$ and $B_g$\,, \ for every $g\in N_3$\,, satisfying conditions \eqref{inclusion} and \eqref{intersection}. \medskip \noindent Now consider a non-reflexive Banach space $X$. Let us first prove \eqref{inclusion}. Assume, on the contrary, that there is a point $g \in N_3$ and there is a sequence $\{y_n\}\subset S$ satisfying $g(y_n)>1-\frac1n$ and such that $g_1(y_n)\ge \gamma_{1,2}' $, or $g_2(y_n)\ge \gamma_{2,1}'$, or $g_3(y_n)\le \gamma_3'$\,, for every $n\in \mathbb N$. Since $g\in N_{3}$\,, there is a sequence \ $\{x_n\}\subset M_{3}$ \ with \ $\lim_n g_i(x_n)\le \gamma_i$\,,\ for $i=1,2$, \ $\lim_n g_3(x_n)>\gamma_3'$ \ and \ $\lim_n g(x_n)=1$. In particular, \begin{equation*} \frac{g(x_n)+1-\frac1n}{2}\le g\left(\frac{x_n+y_n}{2}\right)\le \left|\frac{x_n+y_n}{2}\right|\le 1, \end{equation*} and thus $\lim_n\left|\frac{x_n+y_n}{2}\right|= 1$. Recall that in the non-reflexive case, the norm $|\cdot|$ is WUR, and then $x_n-y_n\xrightarrow{\omega} 0$ \ (that is, $x_n-y_n$ converges weakly to zero). This last assertion gives a contradiction since we have $\limsup_ng_1(x_n-y_n)\le \gamma_1-\gamma_{1,2}'<0$ \ or \ $\limsup_ng_2(x_n-y_n)\le \gamma_2-\gamma_{2,1}'<0$ \ or \ $\liminf_n g_3(x_n-y_n)\ge \lim_n g_3(x_n)-\gamma_3'>0$.
Therefore we can find real numbers $0<t_3<l_3<1$ and slices $O_g$ and $B_g$ \ for every $g\in N_3$\,, satisfying condition \eqref{inclusion}. The proof of \eqref{intersection} is the same as the one given in Fact \ref{fact 1}, where the only property we need is the strict convexity of $|\cdot|^*$. \medskip \noindent (2) Assume, on the contrary, that for every $n\in \mathbb N$, there is $x_n\in M_3$ with $g_i(x_n)\le \gamma_i+\frac1n$, for $i=1,2$ \ and $\{x_n:\ n \in \mathbb N\} \cap (\cup_{g\in N_3}O_g)=\emptyset$. Then, there is a subsequence of $\{x_n\}$, which we keep denoting by $\{x_n\}$, such that $\{x_n\}\subset M_{3,1}$, \ or \ $\{x_n\}\subset M_{3,2}$, \ or \ $\{x_n\}\subset M_{3,1,2}$. In the first case, $\lim_ng_1(x_n)=\gamma_1$. In the second case, $\lim_ng_2(x_n)=\gamma_2$. In the third case, $\lim_ng_i(x_n)=\gamma_i$\,, \ for every $i=1,2$. From the definition of $F_3$ and $N_3$ and the comments preceding Fact \ref{3}, we know that there is a subsequence $\{x_{n_j}\}$ and $g\in N_3$ satisfying that $\lim_jg(x_{n_j})=1$, which is a contradiction. This finishes the proof of Fact \ref{3}. $\Box$ \bigskip If $R_3\cap (U_1\cup U_2)=\emptyset$ we may select as $\gamma_{1,2}$ any number in $(\gamma_1,\gamma_{1,2}')$ and $\gamma_{2,1}$ any number in $(\gamma_2,\gamma_{2,1}')$.
\medskip Now we define $h_3$ as follows: \begin{align*} h_3&:\, S^+\longrightarrow \mathbb R\\h_3&=\varphi_3(g_3)\,\phi_{2,1}(g_2)\,\phi_{1,2}(g_1), \end{align*} where $\varphi_3$,\ $\phi_{2,1}$ and $\phi_{1,2}$ are $C^\infty$ functions on $\mathbb R$ satisfying that \begin{align*} \varphi_3(t)&=0 \ \ \text{ if } t \le {\gamma_3}\\ \varphi_3(1)&=1\\ \varphi_3'(t)&>0 \ \ \text{ if } t>{\gamma_3} \end{align*} and \begin{align*} \phi_{1,2}(t) & =1 \ \ \text{ if } \ t\le \textstyle{\frac{\gamma_1+\gamma_{1,2}}{2}} \qquad & \qquad \phi_{2,1}(t) & =1 \ \ \text{ if } \ t\le {\textstyle{\frac{\gamma_2+\gamma_{2,1}}{2}}} \\ \phi_{1,2}(t) & =0 \ \ \text{ if } \ t\ge \gamma_{1,2} \qquad & \qquad \phi_{2,1}(t) & =0 \ \ \text{ if } \ t\ge \gamma_{2,1} \\ \phi_{1,2}'(t) & <0 \ \ \text{ if } \ t \in \bigl(\textstyle{\frac{\gamma_1+\gamma_{1,2}}{2}}, \, \gamma_{1,2}\bigr) \qquad & \qquad \phi_{2,1}'(t) & <0 \ \ \text{ if }\ t\in \bigl( \textstyle{\frac{\gamma_2+\gamma_{2,1}}{2}}, \, \gamma_{2,1} \bigr) \end{align*} \noindent Clearly the interior of the support of $h_3$ is the set \begin{equation*} U_3=\{x\in S^+:\ g_1(x)<\gamma_{1,2}\,, \ g_2(x)<\gamma_{2,1} \ \text{ and } \ g_3(x)>\gamma_3\}.\end{equation*} Select one point $x_3\in U_3$, a real number $a_3\in \mathbb R^*$ with $|a_3-F(x_3)|<\varepsilon$ \ and define the auxiliary function \begin{align*} &r_3:S^+\longrightarrow \mathbb R,\\ &r_3=s_3g_3+(1-s_3g_3(x_3)),\notag \end{align*} where we have selected $s_3$ so that $s_3a_3>0$ and $|s_3|$ is small enough so that the oscillation of $r_3$ on $U_3$ is less than \ $\frac{\varepsilon}{\,|a_3|}$. \, Notice that $r_3(x_3)=1$. \medskip \noindent Let us study the critical points $Z_3$ of the $C^p$ smooth function \begin{align*} &{\bf H_3}: U_1\cup U_2\cup U_3\longrightarrow \mathbb R,\\ \notag &{\bf H_3}=\frac{a_1r_1h_1+a_2r_2h_2+a_3r_3h_3}{h_1+h_2+h_3}.
\end{align*} Let us prove that $Z_3:=\{x\in U_1\cup U_2\cup U_3:\,H'_3(x)=0 \text{ on } T_x\}$ can be included in a finite number of disjoint slices within $U_1\cup U_2\cup U_3$ by splitting it conveniently into the (already defined) $Z_1$, $Z_2$ and up to four more disjoint sets within $U_3$, as Figure \ref{dibujo n=3} suggests. \medskip \noindent The function ${\bf H_3'}$ can be written as ${\bf H_3'}=\sigma_{3,1} \bold {g_1}+\sigma_{3,2} \bold {g_2}+\sigma_{3,3} \bold {g_3}$,\ where \ $\sigma_{3,i}$ \ are continuous real-valued functions on $U_1\cup U_2 \cup U_3$ \ and \ $\bold{g_i}$ \ denotes the restriction \ $g_i|_{T_x}$, \ $i=1,2,3$, whenever we evaluate $\bold{H_3'}(x)$ on $T_x$. \medskip \noindent Clearly $\bold{H_3}$ and $\bold{H_3'}$ restricted to $(U_1\cup U_2)\setminus U_3$ coincide with $\bold{H_2}$ and $\bold{H_2'}$ respectively. Then, $Z_3\setminus U_3=Z_3\setminus \overline{U}_3=Z_2$. Let us study the set $Z_3\cap U_3$. \noindent First, if $x\in U_3\setminus (U_1\cup U_2)$, \ then \ $H_3(x)=a_3r_3(x)$ \ and \ $H_3'(x)=a_3r_3'(x)=a_3s_3g_3$. Therefore $\bold{H_3'}(x)=a_3s_3g_3|_{T_x}\equiv 0$ iff $D(x)=g_3$. If the point $z_3\in U_3\setminus (U_1 \cup U_2)$ then $H_3$ has exactly one critical point in $U_3\setminus (U_1\cup U_2)$; in this case, since $g_1(z_3)\not=\gamma_1$\ and \ $g_2(z_3)\not={\gamma_2}$\,, the point $z_3$ actually belongs to $U_3\setminus (\overline{U}_1 \cup \overline{U}_2)$. \medskip \noindent Now, let us study the critical points of ${\bf H_3}$ in $U_3\cap(U_1\cup {U_2})$.
If we define \ $\Lambda_2=\frac{h_1+h_2}{h_1+h_2+h_3}$, \ then we can rewrite \ $\bold{H_3}$ \ in \ $U_3\cap(U_1\cup U_2)$ \ as \begin{equation*} \bold{H_3}=\frac{a_1r_1h_1+a_2r_2h_2}{h_1+h_2}\,\cdot \frac{h_1+h_2}{h_1+h_2+h_3}+\frac{a_3r_3h_3}{h_1+h_2+h_3}=\bold{H_2}\ \Lambda_2 +a_3r_3(1-\Lambda_2), \end{equation*} \noindent and \ $$\bold{{H_3}'}=\bold{{H_2}'}\Lambda_2 +a_3s_3(1-\Lambda_2)\bold{g_3}+(\bold{H_2}-a_3r_3)\Lambda'_2.$$ By computing $\Lambda'_2$, we obtain $\Lambda'_2=\xi_{2,1}\bold{g_1}+\xi_{2,2}\bold{g_2}+\xi_{2,3} \bold{g_3}$, where the coefficients $\xi_{2,1},\ \xi_{2,2} $ \ and \ $\xi_{2,3}$ are continuous functions of the following form: \begin{align}\label{derivada de Lambda} \xi_{2,1}&=\frac{-\varphi_3(g_3)\phi_{2,1}(g_2)\phi_{1,2}'(g_1)(h_1+h_2) +h_3\varphi_1'(g_1)+h_3\varphi_2(g_2)\phi_{1,1}'(g_1)}{(h_1+h_2+h_3)^2},\\ \xi_{2,2}&= \frac{-\varphi_3(g_3)\phi_{2,1}'(g_2)\phi_{1,2}(g_1)(h_1+h_2)+h_3\varphi_2'(g_2)\phi_{1,1}(g_1)}{(h_1+h_2+h_3)^2},\, \notag\\ \xi_{2,3}&= \frac{-\varphi_3'(g_3)\,\phi_{2,1}(g_2)\,\phi_{1,2}(g_1)(h_1+h_2)}{(h_1+h_2+h_3)^2}.\,\notag \end{align} \noindent Since $g_1(x)< \gamma_{1,2}<\frac{\gamma_1+\gamma_{1,1}}{2}$ \ for every $x\in U_3$\,, \ we have that $\phi_{1,1}'(g_1(x))=0$ \ for every $x\in U_3$, and we can drop the term $h_3\varphi_2(g_2)\phi_{1,1}'(g_1)$ in the above expression of $\xi_{2,1}$. Thus, if $x\in U_3\cap(U_1\cup U_2)$, the coefficients $\sigma_{3,1}\,, \ \sigma_{3,2}\,, \ \sigma_{3,3} $ \ for $\bold{H_3'}$ have the following form, \begin{align*} \sigma_{3,1}&=\sigma_{2,1}\Lambda_2+(\bold{H_2}-a_3r_3)\xi_{2,1}\\ \sigma_{3,2}&=\sigma_{2,2}\Lambda_2+(\bold{H_2}-a_3r_3)\xi_{2,2}\\ \sigma_{3,3}&=a_3s_3(1-\Lambda_2)+(\bold{H_2}-a_3r_3)\xi_{2,3},\\ \end{align*} \noindent where \ $a_3s_3>0$, \ $\Lambda_2>0$, \ $1-\Lambda_2>0$, \ $\xi_{2,1}\ge 0$, \ $\xi_{2,2}\ge 0$, \ $\xi_{2,1}+\xi_{2,2}> 0$ \ and \ $\xi_{2,3}<0$ \ on \ $U_3\cap (U_1\cup U_2)$. 
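The quotient-rule formulas \eqref{derivada de Lambda} can be checked symbolically in the same spirit. A minimal sketch (our own verification, not part of the proof): $u,v,w$ stand for $g_1(x),g_2(x),g_3(x)$, and the \texttt{vphi*}, \texttt{phi*} names are generic placeholders for the cutoff functions.

```python
import sympy as sp

u, v, w = sp.symbols('u v w')   # stand-ins for g_1(x), g_2(x), g_3(x)
vphi1, vphi2, vphi3 = sp.Function('vphi1'), sp.Function('vphi2'), sp.Function('vphi3')
phi11, phi21, phi12 = sp.Function('phi11'), sp.Function('phi21'), sp.Function('phi12')

h1 = vphi1(u)                        # h_1 = varphi_1(g_1)
h2 = vphi2(v) * phi11(u)             # h_2 = varphi_2(g_2) phi_{1,1}(g_1)
h3 = vphi3(w) * phi21(v) * phi12(u)  # h_3 = varphi_3(g_3) phi_{2,1}(g_2) phi_{1,2}(g_1)
Lam2 = (h1 + h2) / (h1 + h2 + h3)    # Lambda_2
den = (h1 + h2 + h3)**2

# the coefficients xi_{2,1}, xi_{2,2}, xi_{2,3} as displayed in the text
xi21 = (-vphi3(w)*phi21(v)*sp.diff(phi12(u), u)*(h1 + h2)
        + h3*sp.diff(vphi1(u), u) + h3*vphi2(v)*sp.diff(phi11(u), u)) / den
xi22 = (-vphi3(w)*sp.diff(phi21(v), v)*phi12(u)*(h1 + h2)
        + h3*sp.diff(vphi2(v), v)*phi11(u)) / den
xi23 = (-sp.diff(vphi3(w), w)*phi21(v)*phi12(u)*(h1 + h2)) / den

# Lambda_2' = xi21 g_1' + xi22 g_2' + xi23 g_3' amounts to three partial derivatives
assert sp.simplify(sp.diff(Lam2, u) - xi21) == 0
assert sp.simplify(sp.diff(Lam2, v) - xi22) == 0
assert sp.simplify(sp.diff(Lam2, w) - xi23) == 0
```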
Therefore, if $H_2-a_3r_3\le 0$, the coefficient $\sigma_{3,3}>0$. When $H_2-a_3r_3 \ge 0$ and $\sigma_{2,2}>0$, we have that $\sigma_{3,2}>0$. Finally, when $H_2-a_3r_3\ge 0$ and $\sigma_{2,1}> 0$, we have $\sigma_{3,1}>0$ (recall that for every $x\in U_1\cup U_2$, there is $j\in\{1,2\}$ such that $\sigma_{2,j}(x)>0$). Since the vectors $\{g_1,g_2,g_3\}$ are linearly independent we get that, if $\bold{H_3'}(x)= 0$ for some $x\in U_3\cap (U_1 \cup U_2)$, then there necessarily exists $\varrho\not=0$ such that $D(x)=\varrho(\sigma_{3,1}(x)g_1 +\sigma_{3,2}(x)g_2+ \sigma_{3,3}(x)g_3),$ \ that is $D(x)\in[g_1,\, g_2,\,g_3].$ \ \medskip \noindent In fact we can be more accurate and obtain that if $x\in (U_3\cap U_2)\setminus U_1$ \ and \ ${\bf H_3'}(x)=0$ then $D(x)\in [g_2,g_3]$. Indeed, in step 2 we proved that $\sigma_{2,1}=0$ in $U_2\setminus U_1$. Moreover, the functions $\varphi_1(g_1)$, $\phi_{1,1}(g_1)$ and $\phi_{1,2}(g_1)$ are constant outside $U_1$, thus their derivatives vanish outside $U_1$. This implies $\xi_{2,1}=0$ and consequently $\sigma_{3,1}=0$ in $(U_3\cap U_2)\setminus U_1$. Similarly, if \ $x\in (U_3\cap U_1)\setminus U_2$ \ and ${\bf H_3'}(x)=0$,\ then $D(x)\in [g_1,g_3]$. Indeed, from step 2 we know that $\sigma_{2,2}=0$ on $U_1\setminus U_2$. Moreover, the function $\varphi_2'(g_2)\phi_{1,1}(g_1)$ vanishes outside $U_2$. In addition, if $x\in (U_3\cap U_1)\setminus U_2$ then $g_1(x)<\gamma_{1,2}<\gamma_{1,1}$ and hence $g_2(x)\le \gamma_2$. Thus $\phi_{2,1}'(g_2(x))=0$, which implies $\xi_{2,2}(x)=0$. Consequently \ $\sigma_{3,2}(x)=0$ \ if \ $x\in(U_3\cap U_1)\setminus U_2$. \medskip \noindent Define the sets \begin{align*} Z_{3,1}&=\begin{cases}\{z_3\}, & \text{ if } \ z_3\in U_3\setminus (\overline{U}_1\cup \overline{U}_2)\\ \emptyset, & \text{ otherwise } \end{cases}\\ Z_{3,2}&=Z_3\cap U_3\cap (U_1\cup U_2). \end{align*} Now, let us check that $Z_{3,2}\subset\cup_{g\in N_3}O_g$.
Indeed, if $x\in Z_{3,2}$\,, then $x\in (U_1\cup U_2)\cap U_3$. Now, \begin{itemize} \item if $x\in (U_1\cap U_3)\setminus U_2$\,, then $D(x)\in [g_1,g_3]$. Since \ $ (U_1\cap U_3)\setminus U_2\subset (U_1\cap U_3')\setminus U_2$ \ we can deduce that \ $x\in M_{3,1}\subset M_3$. \item If \ $x\in (U_2\cap U_3)\setminus U_1$, then $D(x)\in [g_2,g_3]$. Since $ (U_2\cap U_3)\setminus U_1\subset (U_2\cap U_3')\setminus U_1$ we can deduce that $x\in M_{3,2}\subset M_3$. \item If $x\in U_1\cap U_2 \cap U_3$, then $D(x)\in [g_1,g_2,g_3]$. Since $ U_1\cap U_2\cap U_3\subset U_1\cap U_2\cap U_3'$ we can deduce that $x\in M_{3,1,2}\subset M_3$. \end{itemize} Finally, since $x\in U_3$\,,\ we have that $g_1(x)<\gamma_{1,2}$ and $g_2(x)<\gamma_{2,1}$. We apply Fact \ref{3}(2) to conclude that there is $g\in N_3$ such that $x\in O_g$. \medskip In the case when $Z_{3,1}=\{z_3\}$ and $z_3\not\in \cup_{g\in N_3}\overline{O}_g$, we select, if necessary, a larger $t_3$\,, \ with $t_3<l_3$\,, so that \ $z_3\not\in \cup_{g\in N_3}\overline{B}_g$. Since the norm is LUR and $D(z_3)=g_3$ we may select numbers $0<t_3'<l_3'<1$ and open slices, which are neighborhoods of $z_3$ defined by \begin{equation*} O_{g_3}:=\{x\in S: g_3(x)>l_3'\} \quad \text{ and } \quad B_{g_3}:=\{x\in S: g_3(x)>t_3'\}, \end{equation*} satisfying \ $O_{g_3}\subset B_{g_3}\subset \{x\in S: \ g_1(x)<\gamma_{1,2}',\ g_2(x)<\gamma_{2,1}', \ g_3(x)>\gamma_3'\} $ \ and \ $\operatorname{dist}(B_{g_3}, B_g)>0$ for every $g\in N_3$. In this case, we define \ $\Gamma_3=N_3\cup\{g_3\}$. \medskip Now, if $Z_{3,1}=\{z_3\}$ and $z_3\in \cup_{g\in N_3}{\overline{O}_g}$,\ we select, if necessary, a smaller constant $l_3$\,, with $0<t_3<l_3<1$, so that $z_3\in \cup_{g\in N_3}{O_g}$\,. In this case, and also when $Z_{3,1}=\emptyset$, we define $\Gamma_3=N_3$. Notice that, in any of the cases mentioned above, Fact \ref{3} clearly holds for the (possibly) newly selected real numbers $t_3$ and $l_3$.
\medskip Then, the distance between any two sets $B_{g}$, \ $B_{g'}$, \ $g,g'\in \Gamma_1\cup \Gamma_2 \cup \Gamma_3$, \ $g\not=g'$, \ is strictly positive. \ Moreover \ $Z_{3,1}\cup Z_{3,2}\subset \cup_{g\in \Gamma_3} O_g \subset \cup_{g\in \Gamma_3} B_g\subset U_3'\subset R_3$. Therefore, $Z_3=Z_1\cup Z_2 \cup Z_{3,1}\cup Z_{3,2}\subset \cup_{g\in \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 }O_g \subset \cup_{g\in \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 }B_g \subset U_1\cup U_2\cup U_3=R_1\cup R_2 \cup R_3$. Finally, recall that \ $\operatorname{dist}(B_g, R_3^c)>0$, \ for every \ $g\in \Gamma_3$ \ and \ $\operatorname{dist}(B_g\,,\, (U_1\cup U_2\cup U_3)^c)>0$ \ for every \ $g\in \Gamma_1\cup \Gamma_2 \cup \Gamma_3$. \begin{figure} \includegraphics[width=14cm]{sinbase2.eps}\\ \caption{Case $n=3$: the decomposition of $Z_3$.}\label{dibujo n=3} \end{figure} \medskip It is worth mentioning that, by combining all the results obtained in step $n=3$, we deduce that ${\bf H_3'}=\sigma_{3,1} \bold {g_1}+\sigma_{3,2} \bold {g_2}+\sigma_{3,3} \bold {g_3}$,\ where \ $\sigma_{3,i}$ \ are continuous functions on \ $U_1\cup U_2 \cup U_3$\,, \ $\sigma_{3,i}(x)=0$ \ whenever \ $x\not \in U_i$\,, \ and for every \ $x\in U_1\cup U_2 \cup U_3$ \ there is at least one coefficient \ $\sigma_{3,j}(x)>0$. \bigskip \bigskip \noindent $\bold{\diamond}$ Assume that, in the steps {\bf $j=2, ..., k$}, with $k\geq 2$, we have selected points $z_j\in S^+$ and constants $\gamma_j\in (0,1)$, with \ $g_{1}(z_j)\not=\gamma_{1}$\,,\,...,\,$g_{j-1}(z_j)\not=\gamma_{j-1}$\,, \ $\{g_1,...,g_k:=D(z_k)\}$ \ linearly independent functionals such that \begin{align} S_j= \{x\in S:\ f_j(x)>\delta_j\}\subset\{x\in S:\ g_j(x)>{\gamma_j}\}\subset \{x\in S: \ f_j(x)>\delta_j^4\}=P_j, \end{align} for all $j=2,...,k$, and \begin{equation*} \{T\in [g_{i_1},...,g_{i_s},g_j]^*: \ T(g_{i_1})=\gamma_{i_1}\,,\, ...
\,,g_{i_s}(x)=\gamma_{i_s}\,,\ g_j(x)=\gamma_j \ \text{ and } \ |T|=1 \}=\emptyset, \end{equation*} for every $1\le i_1<...<i_s\le j-1,\ \text{ and } \ 1\le s\le j-1$, $2\leq j\leq k$. Assume we have defined the functions \ $h_j=\varphi_j(g_j)\,\phi_{j-1,1}(g_{j-1})\,\cdots\phi_{1,j-1}(g_1),$ \ where \ $\varphi_j$\,,\ $\phi_{j-1,1}$\,,\,...,\,$\phi_{1,j-1}$ are $C^\infty$ functions on $\mathbb R$ satisfying \begin{align*} \varphi_j(t)&=0 \ \ \text{ if } t \le {\gamma_j}\\ \varphi_j(1)&=1\\ \varphi_j'(t)&>0 \ \ \text{ if } t>{\gamma_j} \end{align*} and \begin{align*} \phi_{1,j-1}(t) & =1 \ \ \text{ if } \ \textstyle{ t\le \frac{\gamma_1+\gamma_{1,j-1}}{2}},&....., & \ \ \phi_{j-1,1}(t) =1 \ \ \text{ if } \ \textstyle{t\le {\frac{\gamma_{j-1}+\gamma_{j-1,1}}{2}}} \\ \phi_{1,j-1}(t) & =0 \ \ \text { if } \ t\ge \gamma_{1,j-1},&....., & \ \ \phi_{j-1,1}(t) =0 \ \ \text{ if } \ t\ge \gamma_{j-1,1} \\ \phi_{1,j-1}'(t) & <0 \ \ \text{ if } \ \textstyle{t \in \bigl(\frac{\gamma_1+\gamma_{1,j-1}}{2}}, \, \gamma_{1,j-1}\bigr),&....., & \ \ \phi_{j-1,1}'(t) <0 \ \ \text{ if }\ t\in \bigl( \textstyle{\frac{\gamma_{j-1}+\gamma_{j-1,1}}{2}}, \, \gamma_{j-1,1} \bigr), \end{align*} where \ $\gamma_1<\gamma_{1,j-1}\,,......, \gamma_{j-1}<\gamma_{j-1,1}$, and $2\leq j\leq k$. \medskip \noindent The interior of the support of $h_j$ is the set \begin{equation*} U_j=\{x\in S:\ g_1(x)<\gamma_{1,j-1}\,,..., \, g_{j-1}(x)<\gamma_{j-1,1} \ \text{ and } \ g_j(x)>\gamma_j\}. 
\end{equation*} Assume we have also defined \ the $C^p$ smooth functions \ $r_j$ \ and \ $\bold{H_j}$: \begin{align*} &r_j:S^+\longrightarrow \mathbb R, \qquad \quad & {\bf H_j}&: U_1\cup U_2\cup...\cup U_j\longrightarrow \mathbb R,\\ &r_j=s_j g_j+(1-s_jg_j(x_j)),\notag \qquad \quad &{\bf H_j}&=\frac{\sum_{i=1}^j a_ir_ih_i}{\sum_{i=1}^j h_i}, \end{align*} for $2\leq j\leq k$, where $x_j\in U_j$\,, \ the numbers $a_j\,,\, s_j\in \mathbb R^*$ satisfy that $|a_j-F(x_j)|<\varepsilon$, \ $s_j a_j>0$, and the oscillation of $r_j$ on $U_j$ is less than \ $\frac{\varepsilon}{\,|a_j|}$. \noindent Assume that for $2\le j \le k$ the set of critical points \ $Z_j$ \ of \ ${\bf H_j}$ \ is a union of the form $Z_j=Z_{j-1}\cup Z_{j,1}\cup Z_{j,2}\,,$ where $Z_{j-1}$ is the set of critical points of ${\bf H_{j-1}}$, the sets $Z_{j-1},\,Z_{j,1},\,Z_{j,2}$ are pairwise disjoint, \ $Z_{j}\subset D^{-1}([g_1,...,g_j])$, \ $Z_{j-1}\subset (U_1\cup...\cup U_{j-1})\setminus \overline{U}_j$ \ and \ $Z_{j,1}\cup Z_{j,2}\subset U_j$.
Furthermore, assume that there is an open subset $U_j'$ such that \ $$U_j\subset U_j'\subset R_j:=\{x\in S:\ g_j(x)>\gamma_j \}$$ \ and $\operatorname{dist}(B_g\,,U_j')>0$ \ for every $g\in \Gamma_1\cup...\cup \Gamma_{j-1}$; assume also that there is a finite subset $\Gamma_j\subset S^*$ and open slices of $S$, \begin{equation*} B_g:=\{x\in S: g(x)>t_j\} \quad \text{ and } \quad O_g:=\{x\in S: \ g(x) > l_j\}, \quad 0<t_j<l_j<1, \end{equation*} satisfying $B_g\subset U_j'$ \ for every $g\in \Gamma_j$, \ $\operatorname{dist}(B_g,B_{g'})>0$ \ whenever $g,g'\in \Gamma_j$, \ $g\not=g'$ \ and there is $\gamma_j'\in (\gamma_j,1)$ such that $$Z_{j,1}\cup Z_{j,2}\subset \cup_{g\in \Gamma_j} O_g\subset \cup_{g\in \Gamma_j} B_g\subset U_j'\cap \{x\in S:\ g_j(x)>\gamma_j'\}\subset U_j.$$ \medskip \noindent Finally, assume that for $2\le j\le k$, \ ${\bf H_j'}=\sigma_{j,1}\,\bold{g_1}+\cdots+\sigma_{j,j}\,\bold{g_j}$ \ on $U_1\cup...\cup U_j$\,, \ where \ $\sigma_{j,i}$ \ are continuous functions on \ $U_1\cup...\cup U_j$ \ for every \ $i=1,...,j$, \ and assume that for every $x\in U_1 \cup \dots \cup U_j$ there is at least one strictly positive coefficient \ $\sigma_{j,m}(x)$, \ and that if \ $x\in (U_1\cup...\cup U_j)\setminus U_m$\,, \ with \ $m\in \{1,...,j\}$, \ then $\sigma_{j,m}(x)=0$. \bigskip \bigskip \noindent $\bold{\diamond}$ Now, let us denote by $y_{k+1}\in S$ the point satisfying $f_{k+1}(y_{k+1})=1$. If either $\{g_1,...,g_k,f_{k+1}\}$ are linearly dependent or $g_i(y_{k+1})=\gamma_i$ for some $i\in \{1,...,k\}$, we can use the density of the norm-attaining functionals (Bishop-Phelps Theorem) and the continuity of $D$ to slightly modify $y_{k+1}$ and find $z_{k+1}\in S$ so that: $ g_i(z_{k+1})\not=\gamma_i$\,, \ for every $i=1,...,k$, \ $\{g_1\,,...,g_k\,,g_{k+1}:=D(z_{k+1})\}$ are linearly independent and \begin{align*} \{x\in S:\ f_{k+1}(x)>\delta_{k+1}^2\}\subset\{x\in S:\ g_{k+1}(x)>{\nu_{k+1}}\}\subset \{x\in S: \ f_{k+1}(x)>\delta_{k+1}^3\}, \end{align*} for some $\nu_{k+1}\in (0,1)$.
If $g_i(y_{k+1})\not=\gamma_i$ for every $i\in \{1,...,k\}$ and $\{g_1,...,g_k,f_{k+1}\}$ are linearly independent, we define $z_{k+1}=y_{k+1}$ and $g_{k+1}=f_{k+1}$. Then we apply Lemma \ref{case N} to the linearly independent vectors $\{g_1,...,g_{k+1}\}$ and the real numbers $\gamma_1,....,\gamma_{k}$ and obtain $\gamma_{k+1}\in (0,1)$ close enough to $\nu_{k+1}$ \ so that \begin{align*} S_{k+1}&=\{x\in S:\ f_{k+1}(x)>\delta_{k+1}\}\\ &\subset\{x\in S:\ g_{k+1}(x)>{\gamma_{k+1}}\}\subset \{x\in S: \ f_{k+1}(x)>\delta_{k+1}^4\}=P_{k+1} \end{align*} and \begin{equation}\label{empty k+1} \{T\in [g_{i_1}\,,...,\,g_{i_s}\,,g_{k+1}]^* :\,T(g_{i_1})=\gamma_{i_1}\,,...,\,T(g_{i_s})=\gamma_{i_s}\,, T(g_{k+1})=\gamma_{k+1} \ \text{ and } \ |T|=1 \}=\emptyset \end{equation} for every $1\le i_1<...<i_s\le k$ \ and \ $ 1\le s\le k$. \medskip \noindent Define $$R_{k+1}=\{x\in S: \ g_{k+1}(x)>\gamma_{k+1}\}.$$ Recall that $\cup_{g\in \Gamma_k} B_g\subset U_k'\cap \{x\in S: \ g_k(x)>\gamma_k'\}$ \ and select \ $\gamma_{k,1}'\in (\gamma_k\,,\gamma_k')$. In addition, we select numbers \begin{equation}\label{half} \gamma_{k-1,2}'\in \big(\gamma_{k-1}\,,\,\frac{\gamma_{k-1}+\gamma_{k-1,1}}{2}\big)\,,....,\,\gamma_{1,k}'\in \big(\gamma_1\,,\,\frac{\gamma_1+\gamma_{1,k-1}}{2}\big), \end{equation} and define the open set \begin{equation}U_{k+1}'=\{x\in S:\ g_1(x)<\gamma_{1,k}'\,,..., \, g_{k}(x)<\gamma_{k,1}' \ \text{ and }\ g_{k+1}(x)>\gamma_{k+1}\}. \end{equation} Notice that \ $\operatorname{dist}(B_g,\,U_{k+1}')>0$ \ for every \ $g\in \Gamma_1\cup...\cup \Gamma_k $. \medskip Assume that $R_{k+1}\cap(U_1\cup...\cup U_k)\not=\emptyset$ and define, for every $1\le i_1<...<i_s \le k$ \ and \ $1\le s\le k$\,, the sets \begin{align*} M_{k+1,i_1,...,i_s}=\bigl\{x\in U_{i_1}\cap...\cap U_{i_s}\cap U_{k+1}':\ x\not\in U_j \ \text{ for every } & \ j\in \{1,...,k\} \setminus \{i_1,...,i_s\}, \\ & \text{ and } D(x)\in[g_{i_1},...,g_{i_s},g_{k+1}] \bigr\}.
\end{align*} Moreover, define $$M_{k+1}=\bigcup\{M_{k+1,i_1,...,i_s}: \ 1\le i_1<...<i_s \le k \ \text{ and } \ 1\le s\le k\}.$$ In the case when $M_{k+1}=\emptyset$, we select as $\gamma_{1,k}$ any point in $(\gamma_1,\gamma_{1,k}')$,...., and $\gamma_{k,1}$ any point in $(\gamma_k,\gamma_{k,1}')$. Notice that $U_1\cup ...\cup U_k=R_1\cup...\cup R_k$. In the case when \ $M_{k+1}\not=\emptyset$ \ and \ $\operatorname{dist}(M_{k+1}\,, (U_1\cup...\cup U_k)^c)=\operatorname{dist}(M_{k+1}\,, (R_1\cup...\cup R_k)^c)>0$, we can immediately find $\gamma_{1,k}\in (\gamma_1\,,\gamma_{1,k}')$\,,....,$\gamma_{k\,,1}\in(\gamma_k\,,\gamma_{k,1}')$ with $M_{k+1}\subset \{x\in S: g_1(x)>\gamma_{1,k}\}\cup...\cup \{x\in S: g_k(x)>\gamma_{k,1}\}$. \medskip In the case when $\operatorname{dist}(M_{k+1}\,, (U_1\cup...\cup U_k)^c)=0$, in order to find suitable positive numbers $\gamma_{1,k}\,,...,\gamma_{k,1}$, we need to study the limits of the sequences $\{x_n\} \subset M_{k+1}$ such that $\lim_n \operatorname{dist}(x_n, (U_1\cup...\cup U_k)^c)=0$. Define, for every $1\le i_1<...<i_s \le k$ and $1\le s\le k$, the sets \begin{align*} F_{k+1,i_1,...,i_s}'&=\{T\in [g_{i_1},...,g_{i_s},g_{k+1}]^*: \ T(g_i)=\gamma_i \ \text{ for every } \ i\in\{i_1,...,i_s\} \text{ and } \ |T|=1\},\\ N_{k+1,i_1,...,i_s}'&=\{g\in S^*\cap [g_{i_1},...,g_{i_s},g_{k+1}]:\ T(g)=1 \ \text{ for some } \ T\in F_{k+1,i_1,...,i_s}'\}. \end{align*} Since the norm $|\cdot|^*$ is G\^{a}teaux smooth, we can apply Lemma \ref{vectoradicional} to the finite dimensional space $[g_{i_1},...,g_{i_s},g_{k+1}]$ with the norm $|\cdot|^*$ restricted to this finite dimensional space, and deduce that the cardinality of each of the sets $F_{k+1,i_1,...,i_s}'$ is at most two. Moreover, since the norm is strictly convex, the cardinality of each set $N_{k+1,i_1,...,i_s}'$ is at most two.
Let us consider the {\em norm-one} extensions to \ $[g_1\,,...,g_k\,,g_{k+1}]$ \ of the elements of \ $F_{k+1,i_1,...,i_s}'$\,, that is, \begin{equation*} F_{k+1,i_1,...,i_s}''=\{T\in [g_1,...,g_{k+1}]^*:\ T|_{[g_{i_1},...,g_{i_s},g_{k+1}]}\in F_{k+1,i_1,...,i_s}' \ \text{ and } \ |T|=1 \}. \end{equation*} Since the norm $|\cdot|^*$ is G\^{a}teaux smooth, for every $G\in F_{k+1,i_1,...,i_s}'$ there is a {\em unique norm-one} extension $T$ defined on $[g_1\,,...,g_{k+1}]$. Thus, the cardinality of every set $F_{k+1,i_1,...,i_s}''$ is at most two. Therefore, the sets \begin{align*} F_{k+1}'&=\bigcup \{F_{k+1,i_1,...,i_s}'': \ 1\le i_1<...<i_s \le k \ \text{ and } \ 1\le s\le k\} \\N_{k+1}'&=\bigcup\{N_{k+1,i_1,...,i_s}':\ 1\le i_1<...<i_s \le k \ \text{ and } \ 1\le s\le k \} \end{align*} are finite. As a consequence of equality \eqref{empty k+1}, we deduce that $T(g_{k+1})\not=\gamma_{k+1}$ for every $T\in F_{k+1}'$. We can restrict our study to the following kind of sequences: Fix \ $ 1\le s\le k$ \ and \ $1\le i_1<...<i_s \le k$ \ and consider a sequence $\{x_n\}\subset M_{k+1,i_1,...,i_s}$ \ such that $\lim_n\operatorname{dist}(x_n, (U_1\cup ...\cup U_k)^c)=0$. Let us prove that for every \ $i\in\{i_1,...,i_s\}$, \ $\lim_n g_i(x_n)= \gamma_i$\,. Indeed, if $\{x_n\}\subset M_{k+1,i_1,...,i_s}$\,, then in particular $\{x_n\}\subset U_i\subset R_i$ for every $i\in \{i_1\,,...,i_s\}$. Recall that $U_1\cup...\cup U_k=R_1\cup ...\cup R_k$. Therefore, $\operatorname{dist}(x_n\,,(U_1\cup...\cup U_k)^c)\ge \operatorname{dist}(x_n\,,R_i^c)$ \ for every $i\in \{i_1,...,i_s\}$. Thus, $\lim_n\operatorname{dist}(x_n\,,R_i^c)=0$ \ for every $i\in \{i_1,...,i_s\}$. Since $\{x_n\}\subset R_i$\,, this implies that $\lim_ng_i(x_n)=\gamma_i$, \ for every $i\in \{i_1,...,i_s\}$. \medskip Now, let us take any sequence $\{x_n\} \subset M_{k+1,i_1,...,i_s}$ such that $\lim_n g_i(x_n)=\gamma_i$\,, \ for $i\in\{i_1,...,i_s\}$.
Consider every $x_n$ as an element of $X^{**}$ and denote by $\bold{x_n}$ its restriction to $[g_1,...,g_{k+1}]$. Recall that $D(x_n)\in S^*\cap [g_{i_1},...,g_{i_s},g_{k+1}]$ for every $n\in \mathbb N$. Then, the sequence of restrictions $\{\bold{x_n}\} \subset [g_1,...,g_{k+1}]^*$ satisfies that \begin{align*} 1=&|x_n| \ge |\bold{{x_n}}|\ge \bigl|\,\bold{x_n}|_{[g_{i_1},...,g_{i_s},g_{k+1}]}\bigr|= \max \{\bold{x_n}(h):\ h\in S^*\cap [g_{i_1},...,g_{i_s},g_{k+1}] \} \\&\ge \bold{x_n}(D(x_n))=D(x_n)(x_n)=1, \end{align*} for every $n\in \mathbb N$. Thus, there is a subsequence $\{\bold{x_{n_j}}\}$ converging to an element $T\in [g_1,...,g_{k+1}]^*$ \ with \ $|T|=1$ \ and \ $\bigl|\,T|_{[g_{i_1},...,g_{i_s},g_{k+1}]}\bigr|=1$. Since \ $\lim_j g_i(x_{n_j})= \gamma_i$ \ for every \ $i\in \{i_1,...,i_s\}$, \ we have that $T(g_i)=\gamma_i$ \ for every \ $i\in \{i_1,...,i_s\}$. \ This implies that $T|_{[g_{i_1},...,g_{i_s},g_{k+1}]}\in F_{k+1,i_1,...,i_s}'$ \ and \ $T\in F_{k+1,i_1,...,i_s}''$. Furthermore, \ if \ $g\in N_{k+1,i_1,...,i_s}'$ \ and \ $T(g)=1$, \ then \ $\lim_j \bold{x_{n_j}}(g)=1$. In addition, $T(g_{k+1})=\lim_j \bold{x_{n_j}}(g_{k+1})=\lim_jg_{k+1}(x_{n_j})\ge \gamma_{k+1}$ because $\{x_{n_j}\}\subset U_{k+1}'$. Then, from condition \eqref{empty k+1}, we deduce that \ $T(g_{k+1})>\gamma_{k+1}$. Next, let us check that $g_i(x_{n})\le \gamma_i$ for every \ $i\in\{1,...,k\}\setminus \{i_1,...,i_s\}$ \ and \ $n\in \mathbb N$. Indeed, since $\{x_n\}\subset U_{k+1}'$, we have $g_1(x_n)<\gamma_{1,k}'<\gamma_{1,i-1}\,,...,\,g_{i-1}(x_n)<\gamma_{i-1,k+2-i}'<\gamma_{i-1,1}$. Now, from the definition of $U_i$ and the fact that $\{x_n:\ n\in \mathbb N\}\cap U_i=\emptyset$, we deduce that $g_i(x_n)\le \gamma_{i}$, for every $n\in \mathbb N$. Finally, if \ $T=\lim_j \bold{x_{n_j}}$ \ in \ $[g_1,...,g_{k+1}]^*$, then \ $T(g_i)=\lim_j \bold{x_{n_j}}(g_i)\le \gamma_i$,\ for every \ $i\in\{1,...,k\}\setminus \{i_1,...,i_s\}$.
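In summary, any limit functional $T=\lim_j \bold{x_{n_j}}$ obtained in this way satisfies
\begin{align*}
|T|&=1, &\qquad T(g_i)&=\gamma_i \quad (i\in\{i_1,\dots,i_s\}),\\
T(g_{k+1})&>\gamma_{k+1}, &\qquad T(g_i)&\le \gamma_i \quad (i\in\{1,\dots,k\}\setminus\{i_1,\dots,i_s\}).
\end{align*}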
\bigskip \noindent Let us define, for every $1\le s\le k$ and $1\le i_1<...<i_s\le k$, the sets \begin{align*} F_{k+1,i_1,...,i_s}=\{T\in F_{k+1,i_1,...,i_s}'':\ \text{ there is } \{x_n\} &\subset M_{k+1,i_1,...,i_s} , \text{ with }\ \lim_n \bold{x_n}(g_i)=\gamma_i, \\ & \quad \text{ for } \ i\in\{i_1,...,i_s\} \ \text{ and } \ \lim_n\bold{x_n}=T\}, \end{align*} \begin{equation*} N_{k+1,i_1,...,i_s}=\{g\in N_{k+1,i_1,...,i_s}':\ \text{ there is } T\in F_{k+1,i_1,...,i_s} \text{ with } T(g)=1\}, \end{equation*} and \begin{align*} F_{k+1}&=\bigcup \{F_{k+1,i_1,...,i_s}: \ 1\le s\le k \ \text{ and } \ 1\le i_1<...<i_s\le k\},\\ N_{k+1}&=\bigcup \{N_{k+1,i_1,...,i_s}: \ 1\le s\le k \ \text{ and } \ 1\le i_1<...<i_s\le k\}, \end{align*} which are all finite. Select a real number $\gamma_{k+1}'$ satisfying $\gamma_{k+1}<\gamma_{k+1}'<\min\{T(g_{k+1}): \ T\in F_{k+1}\}$. \medskip \begin{fact}\label{k+1} \begin{enumerate} \item There are numbers \ $0<t_{k+1}<l_{k+1}<1$ \ such that for every $g\in N_{k+1}$, the slices $$O_g:=\{x\in S:\ g(x)>l_{k+1}\} \quad \text{ and } \quad B_g:=\{x\in S:\ g(x)>t_{k+1}\}$$ satisfy that \begin{align}\label{inclusionk+1} O_g&\subset B_g\subset \{x\in S: \ g_1(x)<\gamma_{1,k}'\,,...,\ g_k(x)<\gamma_{k,1}'\,, \ g_{k+1}(x)>\gamma_{k+1}'\} \quad \text{ and }\\ \label{intersectionk+1} &\operatorname{dist}(B_g,B_{g'})>0, \text{ whenever } g,g'\in N_{k+1}, \ g\not=g'. \end{align} \item There are numbers\ $\gamma_{1,k}\in (\gamma_1,\gamma_{1,k}')$,....,$\gamma_{k,1}\in (\gamma_k,\gamma_{k,1}')$ such that if $x\in M_{k+1}$\,, \ $g_1(x)<\gamma_{1,k}$\,,....,\,$g_k(x)<\gamma_{k,1}$\,,\ then $x\in O_g$\,, for some $g\in N_{k+1}$. \end{enumerate} \end{fact} \medskip \noindent{\bf Proof of Fact \ref{k+1}. } (1) First, if $X$ is reflexive, we know that for every \ $g\in N_{k+1}$ \ there is \ $x_g\in S$ \ such that \ $D(x_g)=g$. There is $1\le s\le k$ and $1\le i_1<...<i_s\le k$ such that $g\in N_{k+1,i_1,...,i_s}$.
Denote by $\mathbf{x_g}$ the restriction of $x_g$ to $[g_1,...,g_{k+1}]$. Since $\mathbf{x_g}(g)=1$ and $|\cdot|^*$ is G\^{a}teaux smooth, we have that $\bold{x_g}=T$ for some \ $T\in F_{k+1,i_1,...,i_s}$. \ This implies that $\bold{x_g}(g_i)=\gamma_i<\gamma_{i,k+1-i}'$\,, \ whenever $i\in\{i_1,...,i_s\}$, \ $\bold{x_g}(g_{k+1})>\gamma_{k+1}'$ \ and \ $\bold{x_g}(g_i)\le \gamma_i< \gamma_{i,k+1-i}'$\,, \ whenever $i\in\{1,...,k\}\setminus\{i_1,...,i_s\}$. Hence, $x_g\in \{x\in S: \ g_1(x)<\gamma_{1,k}'\,,...,\ g_k(x)<\gamma_{k,1}' \ \text{ and } \ g_{k+1}(x)>\gamma_{k+1}'\}$. \noindent Now, since the norm $|\cdot|$ is LUR and $D(x_g)=g$, the functional $g$ strongly exposes $S$ at the point $x_g$ for every $g\in N_{k+1}$. Since $N_{k+1}$ is finite, we can obtain real numbers $0<t_{k+1}<l_{k+1}<1$ and slices $O_g$ and $B_g$, \ for every $g\in N_{k+1}$, satisfying conditions \eqref{inclusionk+1} and \eqref{intersectionk+1}. \medskip \noindent Now consider a non-reflexive Banach space $X$. Let us first prove \eqref{inclusionk+1}. Assume, on the contrary, that there are \ $g \in N_{k+1}$ \ and a sequence \ $\{y_n\}\subset S$ \ satisfying \ $g(y_n)>1-\frac1n$ \ with \ $g_1(y_n)\ge \gamma_{1,k}' $\,,...., or \ $g_k(y_n)\ge \gamma_{k,1}'$, \ or \ $g_{k+1}(y_n)\le \gamma_{k+1}'$\,, \ for every $n\in \mathbb N$. Since $g\in N_{k+1}$, there is a sequence \ $\{x_n\}\subset M_{k+1}$ \ with \ $\lim_n g_i(x_n)\le \gamma_i$\,,\ for every \ $i\in\{1,...,k\}$, \ $\lim_n g_{k+1}(x_n)>\gamma_{k+1}'$ \ and \ $\lim_n g(x_n)=1$. In particular, \begin{equation*} \frac{g(x_n)+1-\frac1n}{2}\le g\left(\frac{x_n+y_n}{2}\right)\le \left|\frac{x_n+y_n}{2}\right|\le 1, \end{equation*} and thus $\lim_n\left|\frac{x_n+y_n}{2}\right|= 1$. Since in this case the norm $|\cdot|$ is WUR, we have that $x_n-y_n\xrightarrow{\omega} 0$\ (weakly converges to zero).
This last assertion gives a contradiction since either \linebreak $\limsup_ng_i(x_n-y_n)\le \gamma_i-\gamma_{i,k+1-i}'<0$ \ for some $i\in\{1,...,k\}$ \ or \ $\liminf_n g_{k+1}(x_n-y_n)\ge \lim_n g_{k+1}(x_n)-\gamma_{k+1}'>0$. Therefore, we can find real numbers $0<t_{k+1}<l_{k+1}<1$ and slices $O_g$ and $B_g$ \ for every $g\in N_{k+1}$, satisfying condition \eqref{inclusionk+1}. The proof of \eqref{intersectionk+1} is the same as the one given in Fact \ref{3}, where the only property we need is the strict convexity of $|\cdot|^*$. \medskip \noindent (2) Assume, on the contrary, that for every $n\in \mathbb N$, there is $x_n\in M_{k+1}$ with $g_i(x_n)\le \gamma_i+\frac1n$, for every $i\in\{1,...,k\}$ \ and $\{x_n:\ n \in \mathbb N\} \cap (\cup_{g\in N_{k+1}}O_g)=\emptyset$. Then there is a subsequence of $\{x_n\}$, which we denote by $\{x_n\}$ as well, and there are numbers \ $1\le s\le k$ \ and \ $1\le i_1<...<i_s\le k$ \ such that $\{x_n\}\subset M_{k+1,i_1,...,i_s}$. In particular, \ $\{x_n\}\subset U_i\subset R_i$ \ and then \ $g_i(x_n)>\gamma_i$ \ for every \ $i\in\{i_1\,,...,i_s\}$ \ and \ $n\in \mathbb N$. Hence, \ $\lim_ng_i(x_n)=\gamma_i$ \ for \ every \ $i\in\{i_1,...,i_s\}$. Since $\{x_n\}\subset M_{k+1,i_1,...,i_s}$\,, from the comments preceding Fact \ref{k+1}, we know that there is a subsequence $\{x_{n_j}\}$ and $g\in N_{k+1,i_1,...,i_s}$ satisfying that $\lim_jg(x_{n_j})=1$, which is a contradiction. This finishes the proof of Fact \ref{k+1}. $\Box$ \medskip If $R_{k+1}\cap (U_1\cup...\cup U_k)=\emptyset$, \ we may select as $\gamma_{1,k}$ any number in $(\gamma_1, \gamma_{1,k}')$\,,...., and \ $\gamma_{k,1}$ \ any number in \ $(\gamma_k, \gamma_{k,1}')$.
\medskip Now we define $h_{k+1}$, \begin{align*} h_{k+1} & : S^+\longrightarrow \mathbb R \\ h_{k+1} & =\varphi_{k+1}(g_{k+1})\,\phi_{k,1}(g_{k})\,\cdots\phi_{1,k}(g_1),\end{align*} where \ $\varphi_{k+1}$,\ $\phi_{k,1}$,\,...,\,$\phi_{1,k}$ \ are $C^\infty$ functions on $\mathbb R$ satisfying \begin{align*} \varphi_{k+1}(t)&=0 \ \ \text{ if } t \le {\gamma_{k+1}}\\ \varphi_{k+1}(1)&=1\\ \varphi_{k+1}'(t)&>0 \ \ \text{ if } t>{\gamma_{k+1}} \end{align*} and \begin{align*} \phi_{1,k}(t) & =1 \ \ \text{ if } \ \textstyle{ t\le \frac{\gamma_1+\gamma_{1,k}}{2}},&....., & \ \ \phi_{k,1}(t) =1 \ \ \text{ if } \ \textstyle{t\le {\frac{\gamma_{k}+\gamma_{k,1}}{2}}} \\ \phi_{1,k}(t) & =0 \ \ \text { if } \ t\ge \gamma_{1,k},&....., & \ \ \phi_{k,1}(t) =0 \ \ \text{ if } \ t\ge \gamma_{k,1} \\ \phi_{1,k}'(t) & <0 \ \ \text{ if } \ \textstyle{t \in \bigl(\frac{\gamma_1+\gamma_{1,k}}{2}}, \, \gamma_{1,k}\bigr),&....., & \ \ \phi_{k,1}'(t) <0 \ \ \text{ if }\ t\in \bigl( \textstyle{\frac{\gamma_{k}+\gamma_{k,1}}{2}}, \, \gamma_{k,1} \bigr). \end{align*} \noindent Clearly the interior of the support of $h_{k+1}$ is the set \begin{equation*} U_{k+1}=\{x\in S: \ g_1(x)<\gamma_{1,k}\,,...,\,g_{k}(x)<\gamma_{k,1} \ \text{ and } \ g_{k+1}(x)>\gamma_{k+1}\}.\end{equation*} Select one point $x_{k+1}\in U_{k+1}$, a real number $a_{k+1}\in \mathbb R^*$ with $|a_{k+1}-F(x_{k+1})|<\varepsilon$ and define the auxiliary function \begin{align*} &r_{k+1}:S^+\longrightarrow \mathbb R,\\ &r_{k+1}=s_{k+1}g_{k+1}+(1-s_{k+1}g_{k+1}(x_{k+1})),\notag \end{align*} where we have selected $s_{k+1}$ so that $s_{k+1}a_{k+1}>0$ and $|s_{k+1}|$ is small enough so that the oscillation of $r_{k+1}$ on $U_{k+1}$ is less than \ $\frac{\varepsilon}{\,|a_{k+1}|}$. \, Notice that $r_{k+1}(x_{k+1})=1$.
\noindent Let us study the set of critical points $Z_{k+1}$ of the $C^p$ smooth function \begin{align*} &{\bf H_{k+1}}: U_1\cup ...\cup U_{k+1}\longrightarrow \mathbb R,\\ \notag &{\bf H_{k+1}}=\frac{\sum_{i=1}^{k+1} a_ir_ih_i}{\sum_{i=1}^{k+1} h_i}. \end{align*} Let us prove that $Z_{k+1}:=\{x\in U_1\cup ... \cup U_{k+1}: \, H'_{k+1}(x)=0 \text{ on } T_x \}$ can be included in a finite union of disjoint slices within $U_1\cup ... \cup U_{k+1}$ by splitting $Z_{k+1}$ conveniently into the (already defined) set $Z_k$ and a finite number of disjoint sets within $U_{k+1}$. \medskip \noindent It is straightforward to verify that ${\bf H_{k+1}'}=\sigma_{k+1,1} \bold {g_1}+...+\sigma_{k+1,k+1} \bold{g_{k+1}}$,\ where \ $\sigma_{k+1,i}$ \ are continuous functions on $U_1\cup ... \cup U_{k+1}$ \ and \ $\bold{g_i}$ \ denotes the restriction \ $g_i|_{T_x}$, \ $i=1,...,k+1$, whenever we evaluate ${\bf H_{k+1}'}(x)$. \medskip \noindent Clearly the restrictions of $\bold{H_{k+1}}$ and $\bold{H_{k+1}'}$ to $(U_1\cup ...\cup U_k)\setminus U_{k+1}$ coincide with $\bold{H_k}$ and $\bold{H_k'}$ respectively. Then, $Z_{k+1}\setminus U_{k+1}=Z_k=Z_{k+1}\setminus \overline{U}_{k+1}$. Let us study the set $Z_{k+1}\cap U_{k+1}$. \noindent First, if $x\in U_{k+1}\setminus (U_1\cup ...\cup U_k)$,\ then \ $H_{k+1}(x)=a_{k+1}r_{k+1}(x)$ and $H_{k+1}'(x)=a_{k+1}r_{k+1}'(x)$. Therefore $\bold{H_{k+1}'}(x)=a_{k+1}s_{k+1}g_{k+1}|_{T_x}= 0$ \ iff \ $D(x)=g_{k+1}$. If the point $z_{k+1}$ belongs to $U_{k+1}\setminus (U_1 \cup ...\cup U_k)$, then ${\bf H_{k+1}}$ has exactly one critical point in $U_{k+1}\setminus (U_1\cup ...\cup U_k)$; in this case, since $g_i(z_{k+1})\not=\gamma_i$ \ for every \ $i=1,...,k$, \ the point $z_{k+1}$ actually belongs to $U_{k+1}\setminus (\overline{U}_1 \cup ... \cup \overline{U}_k)$. \medskip \noindent Now, let us study the critical points of ${\bf H_{k+1}}$ in $U_{k+1}\cap(U_1\cup ...\cup U_k)$.
If we define \ $\Lambda_{k}=\frac{\sum_{i=1}^k h_i}{\sum_{i=1}^{k+1} h_i}$, \ then we can rewrite \ $\bold{H_{k+1}}$ \ on \ $U_{k+1}\cap(U_1\cup ...\cup U_k)$ \ as \begin{equation*} \bold{H_{k+1}}=\frac{\sum_{i=1}^{k}a_ir_ih_i}{\sum_{i=1}^k h_i}\,\cdot \frac{\sum_{i=1}^k h_i}{\sum_{i=1}^{k+1} h_i}+\frac{a_{k+1}r_{k+1}h_{k+1}}{\sum_{i=1}^{k+1} h_i}=\bold{H_k}\ \Lambda_k +a_{k+1}r_{k+1}(1-\Lambda_{k}), \end{equation*} \noindent and \ $${\bf {H_{k+1}'}}={\bf {H_k'}}\Lambda_k +a_{k+1}s_{k+1}(1-\Lambda_k)\bold{g_{k+1}}+({\bf H_k}-a_{k+1}r_{k+1})\Lambda'_k.$$ Notice that, on the open set $U_{k+1}$\,, we have that \ $\phi_{i,j}(g_i)\equiv 1 $, \ whenever $i+j\le k$. Indeed, on the one hand, \ if $x\in U_{k+1}$, and $i\in \{1,...,k\}$, then $g_i(x)<\gamma_{i,k+1-i}\le \frac{\gamma_i+\gamma_{i,j}}{2}$, whenever $i+j\le k$. \, On the other hand, \ $\phi_{i,j}(t)\equiv 1$ \ if \ $t\le \frac{\gamma_i+\gamma_{i,j}}{2}$. \ Therefore $h_i|_{U_{k+1}}=\varphi_i(g_i)$, \ for every \ $i=1,....,k$, \ and $$\Lambda_{k}=\frac{\sum_{i=1}^{k} \varphi_i(g_i)}{\sum_{i=1}^{k} \varphi_i(g_i)+h_{k+1}}.$$ By computing $\Lambda'_k$ \ in $U_{k+1}$, we obtain $\Lambda'_k=\xi_{k,1}\bold{g_1}+...+\xi_{k,k+1} \bold{g_{k+1}}$, where the coefficients $\xi_{k,1},...,\xi_{k,k+1}$ \ are continuous functions of the following form: \begin{align*}\label{derivada de Lambda k} &\xi_{k,j}= \frac{-\varphi_{k+1}(g_{k+1})\,\phi_{j,k+1-j}'(g_j)\, (\prod_{i=1;\,i\not=j}^{k}\phi_{i,k+1-i}(g_i))(\sum_{i=1}^k h_i)+h_{k+1}\varphi_j'(g_j)}{(\sum_{i=1}^{k+1}h_i)^2}, \quad j=1,...,k \\ &\xi_{k,k+1}= \frac{-\varphi_{k+1}'(g_{k+1})\, ( \prod_{i=1}^k \phi_{i,k+1-i}(g_i))\, (\sum_{i=1}^k h_i)}{(\sum_{i=1}^{k+1} h_i)^2}.\, \end{align*} Thus, if $x\in U_{k+1}\cap(U_1\cup ...\cup U_k)$, the coefficients $\sigma_{k+1,1},....,\sigma_{k+1,k+1}$ \ for ${\bf H_{k+1}'}$ have the following form, \begin{align*} &\sigma_{k+1,j}=\sigma_{k,j}\Lambda_k+({\bf H_k}-a_{k+1}r_{k+1})\xi_{k,j}, \qquad \text{ for } \ j=1,...,k\\ 
&\sigma_{k+1,k+1}=a_{k+1}s_{k+1}(1-\Lambda_k)+({\bf H_k}-a_{k+1}r_{k+1})\xi_{k,k+1}. \end{align*} \noindent Notice that in $U_{k+1}\cap (U_1\cup ...\cup U_k)$, \ $a_{k+1}s_{k+1}>0$, \ $\Lambda_k>0$, \ $1-\Lambda_k>0$, \ $\xi_{k,j}\ge 0$, \ for every \ $j=1,...,k$, \ $\sum_{j=1}^k\xi_{k,j}> 0$ \ and \ $\xi_{k,k+1}<0$. Therefore, if $H_k-a_{k+1}r_{k+1}\le 0$, the coefficient $\sigma_{k+1,k+1}>0$. When $H_k-a_{k+1}r_{k+1} \ge 0$ and $\sigma_{k,j}>0$, the coefficient $\sigma_{k+1,j}>0$ (recall that, from step $k$, we know that, for every $x\in U_1\cup ...\cup U_k$ there exists at least one $j\in \{1,...,k\}$ \ with \ $\sigma_{k,j}>0$). Hence, if \ ${\bf H_{k+1}'}(x)= 0$ \ for some \ $x\in U_{k+1}\cap (U_1 \cup ...\cup U_k)$, \ there necessarily exists $\varrho\not=0$ such that $D(x)=\varrho(\sigma_{k+1,1}(x)g_1 +...+\sigma_{k+1,k+1}(x)g_{k+1})$, \ that is $D(x)\in[g_1,\,...,g_{k+1}].$ \ \medskip \noindent In fact, we can be more precise and obtain that if \ ${\bf H_{k+1}'}(x)=0$, \ $x\in U_{k+1}\cap (U_1\cup...\cup U_k)$ \ and \ $x\not\in \cup_{j\in F}U_j$ \ for some proper subset \ $F\subset\{1,...,k\}$, then \ $D(x)\in \operatorname{span}\,\{g_j:\ j\in\{1,...,k+1\}\setminus F\}.$ Indeed, from step $k$ we know that, if $x \in(U_1\cup...\cup U_k) \setminus U_j$, where $j\in\{1,...,k\}$, then $\sigma_{k,j}(x)=0$. Now, if $j\in F$ and $j=1$, it is clear that the functions $\varphi_1'(g_1)$ and $\phi_{1,k}'(g_1)$ vanish outside $U_1$. This implies $\xi_{k,1}(x)=0$ and consequently $\sigma_{k+1,1}(x)=0$. If \ $j\in F$ and $2\le j\le k$, since $x\in U_{k+1}$ we know that \begin{equation*} g_1(x)<\gamma_{1,k}<\gamma_{1,j-1}\,,.......,g_{j-1}(x)<\gamma_{j-1,k+2-j}<\gamma_{j-1,1}\,, \end{equation*} and then necessarily \ $g_j(x)\le \gamma_j$. Since the functions \ $\varphi_j'(g_j)$ \ and \ $\phi_{j,k+1-j}'(g_j)$ \ vanish whenever $g_j \le \gamma_j$, \ we deduce \ $\xi_{k,j}(x)=0$ \ and thus \ $\sigma_{k+1,j}(x)=0$.
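For the reader's convenience, let us verify the expression for ${\bf H_{k+1}'}$ stated above: differentiating ${\bf H_{k+1}}={\bf H_k}\,\Lambda_k+a_{k+1}r_{k+1}(1-\Lambda_k)$ on $U_{k+1}\cap(U_1\cup ...\cup U_k)$ and using $r_{k+1}'=s_{k+1}\,\bold{g_{k+1}}$, we obtain
\begin{align*}
{\bf H_{k+1}'}&={\bf H_k'}\,\Lambda_k+{\bf H_k}\,\Lambda_k'+a_{k+1}r_{k+1}'\,(1-\Lambda_k)-a_{k+1}r_{k+1}\,\Lambda_k'\\
&={\bf H_k'}\,\Lambda_k+a_{k+1}s_{k+1}(1-\Lambda_k)\,\bold{g_{k+1}}+({\bf H_k}-a_{k+1}r_{k+1})\,\Lambda_k'.
\end{align*}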
\medskip \noindent Let us now define the sets \begin{align*} Z_{k+1,1}&=\begin{cases}\{z_{k+1}\}, & \text{ if } \ z_{k+1}\in U_{k+1}\setminus (\overline{U}_1\cup ...\cup \overline{U}_k)\\ \emptyset, & \text{ otherwise } \end{cases}\\ Z_{k+1,2}&=Z_{k+1}\cap U_{k+1}\cap (U_1\cup ...\cup U_k). \end{align*} Now, let us check that $Z_{k+1,2}\subset\cup_{g\in N_{k+1}}O_g$. Indeed, if $x\in Z_{k+1,2}$\,, \ there are \ $1\le s\le k$ \ and \ $1\le i_1<...<i_s\le k $\,, such that $x\in U_{k+1}\cap U_{i_1}\cap ...\cap U_{i_s}$ \ and \ $x\not\in \cup_{j\in F} U_j$, where $F=\{1,...,k\}\setminus \{i_1,...,i_s\}$. From the preceding assertion, $D(x)\in [g_{i_1},...,g_{i_s},g_{k+1}]$. From the definition of $M_{k+1,i_1,...,i_s}$ \ and the fact that $U_{k+1}\subset U_{k+1}'$\,, \ we obtain that \ $x\in M_{k+1,i_1,...,i_s}\subset M_{k+1}$. \ Since $x\in U_{k+1}$,\ we have that $g_1(x)<\gamma_{1,k}$\,,...,$g_k(x)<\gamma_{k,1}$. We apply Fact \ref{k+1}(2) to conclude that there is $g\in N_{k+1}$ such that $x\in O_g$. \medskip In the case when $Z_{k+1,1}=\{z_{k+1}\}$ and $z_{k+1}\not\in \cup_{g\in N_{k+1}}\overline{O}_g$\,, we select, if necessary, a larger $t_{k+1}$, \ with $t_{k+1}<l_{k+1}$\,, so that \ $z_{k+1}\not\in \cup_{g\in N_{k+1}}\overline{B}_g$. Since the norm is LUR and $D(z_{k+1})=g_{k+1}$, \ we may select numbers \ $0<t_{k+1}'<l_{k+1}'<1$ \ and open slices, which are neighborhoods of \ $z_{k+1}$ \ defined by \begin{equation*} O_{g_{k+1}}:=\{x\in S: g_{k+1}(x)>l_{k+1}'\} \quad \text{ and } \quad B_{g_{k+1}}:=\{x\in S: g_{k+1}(x)>t_{k+1}'\}, \end{equation*} satisfying \ $O_{g_{k+1}}\subset B_{g_{k+1}}\subset \{x\in S: \ g_1(x)<\gamma_{1,k}'\,,...,\ g_k(x)<\gamma_{k,1}'\,, \ g_{k+1}(x)>\gamma_{k+1}'\} $ \ and \ $\operatorname{dist}(B_{g_{k+1}},\,B_g)>0$, \ for every $g\in N_{k+1}$. In this case, we define \ $\Gamma_{k+1}=N_{k+1}\cup\{g_{k+1}\}$.
\medskip Now, if $Z_{k+1,1}=\{z_{k+1}\}$ and $z_{k+1}\in \cup_{g\in N_{k+1}}\overline{O}_g$,\ we select, if necessary, a smaller constant $l_{k+1}$\,, with $0<t_{k+1}<l_{k+1}<1$\,, so that $z_{k+1}\in \cup_{g\in N_{k+1}}{O_g} $. In this case, and also when $Z_{k+1,1}=\emptyset$, we define $\Gamma_{k+1}=N_{k+1}$. Notice that, in any of the cases mentioned above, Fact \ref{k+1} clearly holds for the (possibly) newly selected real numbers $t_{k+1}$ and $l_{k+1}$. \medskip Then, the distance between any two sets $B_{g}$, \ $B_{g'}$, \ where \ $g,g'\in \Gamma_1\cup... \cup \Gamma_{k+1}$, \ and \ $g\not=g'$, \ is strictly positive. \ Moreover \ $Z_{{k+1},1}\cup Z_{{k+1},2}\subset \cup_{g\in \Gamma_{k+1}} O_g \subset \cup_{g\in \Gamma_{k+1}} B_g\subset U_{k+1}'\subset R_{k+1}$. Therefore, $Z_{k+1}=Z_1\cup ...\cup Z_k \cup Z_{{k+1},1}\cup Z_{{k+1},2}\subset \cup_{g\in \Gamma_1 \cup ... \cup \Gamma_{k+1} }O_g \subset \cup_{g\in \Gamma_1 \cup ... \cup \Gamma_{k+1} }B_g \subset U_1\cup...\cup U_{k+1}=R_1\cup ... \cup R_{k+1}$. Also, recall that \ $\operatorname{dist}(B_g, R_{k+1}^c)>0$, \ for every \ $g\in \Gamma_{k+1}$ \ and \ $\operatorname{dist}(B_g, (U_1\cup ...\cup U_{k+1})^c)>0$,\ for every \ $g\in \Gamma_1\cup ... \cup \Gamma_{k+1}$. \medskip Finally, let us notice that, by combining the results obtained in step $k+1$, we deduce that \ ${\bf H_{k+1}'}=a_{k+1}s_{k+1}\bold{g_{k+1}}$ \ in \ $U_{k+1} \setminus (U_1\cup ...\cup U_k)$ \ and \ ${\bf H_{k+1}'}={\bf H_k'}$ on $(U_1\cup...\cup U_k)\setminus U_{k+1}$, \ and in general \ ${\bf H_{k+1}'}=\sigma_{k+1,1}\,\bold{g_1}+\cdots+\sigma_{k+1,k+1}\,\bold{g_{k+1}}$ \ on \ $U_1\cup...\cup U_{k+1}$, \ where \ $\sigma_{k+1,i}$ \ are continuous functions on \ $U_1\cup...\cup U_{k+1}$, \ for \ $i=1,....,k+1$, \ and for every $x\in U_1 \cup \dots \cup U_{k+1}$ \ there is at least one $i\in \{1,...,k+1\}$ \ such that $\sigma_{k+1,i}(x)>0$.
Furthermore, \ $\sigma_{k+1,j}(x)=0$ \ whenever \ $x\in (U_1 \cup ...\cup U_{k+1}) \setminus U_j$, \ \ $j\in \{1,...,k+1\}$. \bigskip Once we have defined, by induction, the functions $h_k$, \ $r_k$ \ and the constants $a_k$, for all $k\in \mathbb N$, we define \begin{align*} &H:S^+\longrightarrow \mathbb R\\ &H=\frac{\sum_{k=1}^\infty a_kr_kh_k}{\sum_{k=1}^\infty h_k}. \end{align*} It is straightforward to verify that the family $\{U_k\}_{k\in \mathbb N}$ of open sets of $S^+$ is a locally finite open covering of $S^+$. Thus, for every $x\in S^+$ there is $k_x\in \mathbb N$ and a (relatively open in $S^+$) neighborhood \ $V_x\subset S^+$ \ of \ $x$, such that \ $V_x\cap(\cup_{k>k_x} U_k)=\emptyset$ \ and therefore \ $H|_{V_x}={\bf H_{k_x}}|_{V_x}$. Thus $H$ is $C^p$ smooth whenever the functions $\{h_k\}_{k\in \mathbb N}$ are $C^p$ smooth. \medskip \begin{fact} The function \ $H$ \ $3\,\varepsilon$-approximates \ $F$ \ in $S^+$. \end{fact} \noindent {\em Proof.} Recall that the oscillation of $F$ in $U_k$ is less than $\varepsilon$, \ the oscillation of $r_k$ in $U_k$ is less than $\frac{\varepsilon}{\,|a_k|\,}$\,, \ $|a_k-F(x_k)|<\varepsilon$ \ and \ $r_k(x_k)=1$,\ for every $k\in \mathbb N$. Now, if $h_k(x)\not=0$, then $x\in U_k$ and \begin{align}\label{akrk-F} |a_kr_k(x)-F(x)|& \le|a_kr_k(x)-a_kr_k(x_k)|+|a_kr_k(x_k)-F(x)|\\ \notag &= |a_k||r_k(x)-r_k(x_k)|+|a_k-F(x)|\\ \notag &\le |a_k|\frac{\varepsilon}{|a_k|}+|a_k-F(x_k)|+|F(x_k)-F(x)| \le 3\varepsilon. \end{align} Hence, \begin{align*} |H(x)-F(x)|=\frac{\bigl|\sum_{k=1}^\infty(a_kr_k(x)-F(x))\,h_k(x) \bigr|}{\sum_{k=1}^\infty h_k (x)}\le \frac{\sum_{k=1}^\infty|a_kr_k(x)-F(x)|\,h_k(x) }{\sum_{k=1}^\infty h_k (x)}\le 3\varepsilon. \Box \end{align*} Let us denote by \ $C$ \ the set of critical points of $H$ in $S^+$.
Since for every $x\in S^+$, there is $k_x\in \mathbb N$ and a (relatively open in $S^+$) neighborhood $V_x\subset S^+$ of $x$ such that $V_x\cap(\cup_{k>k_x} U_k)=\emptyset$, we have that $H|_{V_x}=\bold{H_{k_x}}|_{V_x}$ \ and $C\subset \cup_k Z_k$. Recall that $\cup_k Z_k \subset \bigcup \{O_g: \ g\in \cup_k\Gamma_{k}\} \subset \bigcup \{B_g:\ g\in \cup_k\Gamma_k\}$, \ the oscillation of $F$ on $B_g$ is less than $\varepsilon$ \ and $\operatorname{dist}(B_g,B_{g'})>0$, \ for every $g,g'\in \cup_k \Gamma_k$ with $g\not=g'$. Furthermore, from the inductive construction of the sets $\{B_g: \ g\in\cup_k \Gamma_k \}$, \ it is straightforward to verify that (i) for every $k>1$, if \ $g\in \Gamma_k $ \ and \ $g'\in \cup_{m>k}\Gamma_m$, \ then $\operatorname{dist}(B_g,\,B_{g'})\ge \gamma_k'-\gamma_{k,1}'>0$ \ and (ii) if $g\in \Gamma_1$ \ and \ $g'\in \cup_{m>1}\Gamma_m$, then $\operatorname{dist}(B_g,\,B_{g'})\ge t_1-\gamma_{1,1}'>0$. Therefore, for every $g\in\cup_k\Gamma_k$, \begin{equation}\label{distanciasBs} \operatorname{dist}(B_g\,, \,\bigcup\{B_{g'}:\ g'\in \cup_k\Gamma_k,\ \ g'\not=g\})>0. \end{equation} We relabel the countable families of open slices \ $\{O_g\}_{g\in \cup_k\Gamma_k}$ \ and \ $\{B_g\}_{g\in \cup_k\Gamma_k}$ \ as \ $\{O_n\}$, \ $ \{B_n\}$, \ respectively. Notice that the set $\cup_n\overline{B}_n$ is a (relatively) closed set in $S^+$. Indeed, if $\{x_j\}\subset \cup_n\overline{B}_n$ and $\lim_j x_j=x\in S^+$, since $\cup_n U_n'$ \ is also a locally finite open covering of $S^+$, there is $n_x$ and a (relatively open in $S^+$) neighborhood $W_x\subset S^+$ of $x$\,, \ such that $W_x\cap(\cup_{n> n_x}U_n')=\emptyset$. In addition, from the construction of the family $\{B_n\}$, \ there is \ $N\in\mathbb N$ \ such that \ $\cup_{n>N} \overline{B}_n\subset \cup_{n> n_x}U_n'$\,, and thus there is \ $j_0\in \mathbb N$ \ with \ $\{x_{j}\}_{j> j_0}\subset \cup_{n=1}^{N} \overline{B}_n$. 
Hence \ $x\in \cup_{n=1}^{N} \overline{B}_n\subset\cup_{n} \overline{B}_n$. \bigskip Let us denote \ $\mathcal{B}_n=\Phi^{-1}(B_n)$ \ and \ $\mathcal{O}_n=\Phi^{-1}(O_n)$, \ for every $n\in \mathbb N$. \begin{fact} $\mathcal{O}_n$ and $\mathcal{B}_n$ are open, convex and bounded subsets of $X$, for every $n\in\mathbb N$. \end{fact} \noindent {\em Proof.} Since $\Phi$ is continuous, it is clear that $\mathcal{O}_n$ \ and \ $\mathcal{B}_n$ are open sets. The sets ${O}_n$ and ${B}_n$ are slices of the form $R=\{x\in S: \ b(x)>\delta\}$ \ for some \ $b\in S^*$ \ and $\delta>0$ such that $\operatorname{dist}(R,X\times\{0\})>0$. Let us prove that $\mathcal{R}:=\Phi^{-1}(R)$ is convex and bounded in $X$. First, let us check that the cone in $Y$ generated by $R$ \ and defined by $$\operatorname{cone}(R)=\{\lambda x:\ x\in R, \ \lambda> 0\}=\{x\in Y: b(\textstyle{\frac{x}{\,|x|\,}})>\delta\}$$ is a convex set: consider $0\le \alpha \le 1$ and $x,x'\in \operatorname{cone}(R)$. Then, \begin{align*} b(\alpha x+(1-\alpha)x')&=\alpha b(x)+ (1-\alpha )b(x')> \alpha\delta|x|+(1-\alpha)\delta|x'|\\&=\delta|\alpha x|+\delta |(1-\alpha) x'|\ge \delta |\alpha x+ (1-\alpha) x'|,\end{align*} and this implies that $\alpha \,x+(1-\alpha)\,x'\in \operatorname{cone}(R)$. Therefore, the intersection of the two convex sets \ $\operatorname{cone}(R)\cap (X\times\{1\})=\Pi^{-1}(R)$ \ is convex. Now, it is clear that $\mathcal{R}=\Phi^{-1}(R)=i^{-1}(\Pi^{-1}(R))$ is convex as well. Let us prove that \ $\Pi^{-1}(R)$ \ is bounded in \ $Y$. Consider the linear bounded operator \ $\pi_2:Y=X\oplus \mathbb R\longrightarrow \mathbb R$,\ $\pi_2(x,r)=r$\,, \ for every $(x,r)\in X\oplus\mathbb R$. Then, \ $\Pi^{-1}(y)=\frac{y}{\pi_2(y)}$\,, for every $y\in S^+$. On the one hand, \ $d:=\operatorname{dist}(R,\,X\times\{0\})>0$ \ and then \ $$\ \ \ \pi_2(x,r)=r=\frac{|(x,r)-(x,0)|}{|(0,1)|}\ge \frac{d}{|(0,1)|}:=s>0, \quad \text{ for every } \ (x,r)\in R. 
$$
On the other hand,
\begin{equation*}
\bigl|\Pi^{-1}(y)\bigr|=\frac{|y|}{\pi_2(y)}=\frac{1}{\pi_2(y)}\le \frac{1}{s}, \quad \text{ for every } y\in R,
\end{equation*}
and thus $\Pi^{-1}(R)$ is bounded. Since the norm \ $||\cdot||$ \ considered on \ $X\times\{0\}$ \ (defined as \ $||(x,0)||=||x||$) \ and the restriction of the norm \ $|\cdot|$ \ to \ $X\times\{0\}$ are equivalent norms on \ $X\times\{0\}$, there exist constants $m,M>0$ such that $m\,||x-x'||\le|i(x)-i(x')|=|(x,1)-(x',1)|=|(x-x',0)|\le M ||x-x'||$, \ for every $x,x'\in X$. Hence,
\begin{align*}\hspace{1cm}||\Phi^{-1}(y)||&=||i^{-1}(\Pi^{-1}(y))-i^{-1}(0,1)||\le \frac{1}{m}|\Pi^{-1}(y)-(0,1)| \\& \le \frac{1}{m}\,\bigl(|\Pi^{-1}(y)|+|(0,1)|\bigr) \le \frac{1 +s|(0,1)|}{s\cdot m},\end{align*}
for every $y\in R$, which shows that \ $\mathcal{R}$ \ is bounded in \ $X$. $\Box$

\medskip

\begin{fact} $\overline{\mathcal{O}}_n $ \ and \ $\overline{\mathcal{B}}_n$ \ are (closed, convex and bounded) $C^p$ smooth bodies, \ for every $n\in \mathbb N$.
\end{fact}

\noindent {\em Proof.} We already know that these sets are closed, convex and bounded bodies, hence it is enough to prove that their boundaries $\partial\mathcal{O}_n$ and $\partial \mathcal{B}_n$ are $C^p$ smooth one-codimensional submanifolds of $X$. Since $\partial\mathcal{B}_n=\Phi^{-1}(\partial B_n)$, $\partial\mathcal{O}_n=\Phi^{-1}(\partial O_n)$, and $\Phi$ is a $C^p$ diffeomorphism, this is the same as showing that $\partial O_n$ and $\partial B_n$ are $C^p$ smooth one-codimensional submanifolds of $S$.
But, if $O_n$ is defined by $O_n=\{y\in S: g_{n}(y)>\beta_{n}\}$, we have that $\partial O_n$ is the intersection of $S$ with the hyperplane $X_n=\{y\in Y: g_{n}(y)=\beta_{n}\}$ of $Y$, and $X_n$ is transversal to $S$ at every point of $\partial O_n$ (otherwise the hyperplane $X_n$ would be tangent to $S$ at some point of $\partial O_n$ and, by strict convexity of $S$, this implies that $\partial O_n=X_n\cap S$ is a singleton, which contradicts the fact that $O_n$ is a nonempty open slice of $S$), hence the intersection $\partial O_n=S\cap X_n$ is a one-codimensional submanifold of $S$. The same argument applies to $\partial B_n$. $\Box$ \medskip \begin{fact} $\textrm{dist}(\mathcal{O}_n, X\setminus \mathcal{B}_n)>0$ \ and $\operatorname{dist}\left(\mathcal{B}_n,\cup_{m\not=n}\mathcal{B}_{m}\right)>0$, \ for every $n\in\mathbb N$. \end{fact} \noindent {\em Proof.} This is a consequence of the fact that \ $\operatorname{dist}(O_n, S^+\setminus B_n)>0$, \ $\operatorname{dist}\left(B_n,\cup_{m\not=n}B_{m}\right)>0$, \ and \ $\Phi$ is Lipschitz. Indeed, on the one hand, recall that $|i(x)-i(x')|=|(x-x',0)|\le M||x-x'||$, \ for every $x,x'\in X$. On the other hand, \begin{equation*}\hspace{1cm}\bigl|\Pi(y)-\Pi(y')\bigl|=\frac{|\,y\,|y'|-y'\,|y|\,|}{|y|\,|y'|}= \frac{\bigr|y\,(|y'|-|y|)+(y-y')|y|\bigl|}{|y|\,|y'|}\le \frac{2|y-y'|}{|y'|}\le \frac{2}{\zeta}|y-y'|\,,\end{equation*} for every $y,y'\in X\times \{1\}$, \ where $\zeta=\operatorname{dist}(0,X\times\{1\})>0.$ Therefore, $|\Phi(x)-\Phi(x')|\le \frac{2M}{\zeta}||x-x'||$, \ for every $x,x'\in X$. Now, if two sets $A,A'\subset S^+$ satisfy that $\operatorname{dist}(A,A')>0$, then $\operatorname{dist}(A,A')\le |a-a'|\le \frac{2M}{\zeta}||\Phi^{-1}(a)-\Phi ^{-1}(a')||$, for every $a\in A$,\ $a'\in A'$. Therefore, $0<\operatorname{dist}(A,A')\le \frac{2M}{\zeta}\operatorname{dist}(\Phi^{-1}(A),\,\Phi ^{-1}(A'))$. 
$\Box$

\medskip

\begin{fact} For every $n\in\mathbb N$, there exists a $C^p$ diffeomorphism \ $\Psi_n$ \ from \ $X$ \ onto \ $X\setminus \overline{\mathcal{O}}_n$ \ such that \ $\Psi_n$ \ is the identity off \ $\mathcal{B}_n$.
\end{fact}

\noindent {\em Proof.} Assume that $0\in \mathcal{O}_n$. Since $\operatorname{dist}(\mathcal{O}_n, X\setminus \mathcal{B}_n)>0$, \ there is \ $\delta_n>0$ \ such that \ $\operatorname{dist}((1+\delta_n)\mathcal{O}_n\,,\,\mathcal{B}_n)>0$.\ We can easily construct a $C^{p}$ smooth radial diffeomorphism \ $\Psi_{n,2}$ \ from \ $X\setminus\{0\}$ \ onto \ $X\setminus \overline{\mathcal{O}}_n$ \ satisfying \ $\Psi_{n,2}(x)=x$ \ if \ $x\notin (1+\delta_n)\mathcal{O}_n$. Indeed, take a \ $C^{\infty}$ \ smooth function $\lambda_n:[0,\infty)\longrightarrow [1,\infty)$ satisfying that $\lambda_n(t)=t$ for $t\geq 1+\delta_n$, $\lambda_n(0)=1$ \ and \ $\lambda'_n(t)>0$ for $t>0$, and define
$$
\Psi_{n,2}(x)=\lambda_n(\mu_n(x))\,\frac{x}{\mu_n(x)},
$$
for $x\in X\setminus\{0\}$, where $\mu_n$ is the Minkowski functional of $\overline{\mathcal{O}}_n$, which is $C^p$ smooth on $X\setminus\{0\}$. Now, since $0\in \mathcal{O}_n$, there is $\alpha_n>0$ such that $\alpha_n B_{||\cdot||}\subset \mathcal{O}_n$\,. According to \cite[Proposition 3.1]{Dobrowolski1} and \cite[Lemma 2]{Dobrowolski2} (see also \cite{Az}), there exists a \ $C^p$ \ diffeomorphism \ $\Psi_{n,1}$ \ from $X$ \ onto \ $X\setminus\{0\}$ \ such that $\Psi_{n,1}$ is the identity off $\alpha_n B_{||\cdot||}$ (this set may be regarded as the unit ball of an equivalent $C^p$ smooth norm on $X$). Then, the composition \ $\Psi_{n}:=\Psi_{n,2}\circ\Psi_{n,1}$ \ is a \ $C^p$ diffeomorphism from \ $X$ \ onto \ $X\setminus\overline{\mathcal{O}}_n$ \ such that \ $\Psi_{n}$ \ is the identity off \ $\mathcal{B}_n$.
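The behaviour of the radial map $\Psi_{n,2}$ can be illustrated in a finite-dimensional toy model. The sketch below (our own illustration, not part of the argument) takes $X=\mathbb R^2$ and $\mathcal{O}_n$ the open Euclidean unit ball, so that the Minkowski functional is the Euclidean norm; for simplicity it uses a merely piecewise-linear stand-in for the $C^\infty$ reparametrisation $\lambda_n$:

```python
import math

DELTA = 0.5  # stands in for delta_n

def lam(t):
    # Increasing reparametrisation of [0, infinity) onto [1, infinity) with
    # lam(0) = 1 and lam(t) = t for t >= 1 + DELTA.  The proof requires a
    # C^infinity choice; this piecewise-linear stand-in only illustrates
    # the qualitative behaviour.
    if t >= 1.0 + DELTA:
        return t
    return 1.0 + t * DELTA / (1.0 + DELTA)

def psi(x, y):
    # Radial map Psi_{n,2}(v) = lam(mu(v)) * v / mu(v), where mu is the
    # Minkowski functional of the closed Euclidean unit ball, i.e. the norm.
    mu = math.hypot(x, y)
    s = lam(mu) / mu
    return (s * x, s * y)
```

One checks that points of norm at least $1+\delta_n$ are fixed, while points near the origin are pushed just outside the closed unit ball, as in the proof.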
If $0\not\in \mathcal{O}_n$, select $\omega_n\in \mathcal{O}_n$ \ and repeat the above construction of the diffeomorphism with the sets \ $\mathcal{O}_n-\omega_n$ \ and $\mathcal{B}_n-\omega_n$. Then, $\Psi_{n}:=\tau_{\omega_n}\circ\Psi_{n,2}\circ\Psi_{n,1}\circ\tau_{-\omega_n}$ is the required $C^p $ diffeomorphism,\ where \ $\tau_\omega(x)=x+\omega$. $\Box$ \medskip Now, the infinite composition $\Psi=\bigcirc_{n=1}^\infty \Psi_n$ is a well-defined $C^p$ diffeomorphism from $X$ onto $X\setminus \cup_n \overline{ \mathcal{O}}_n$\,, which is the identity outside $\cup_n \mathcal{B}_n$ and $\Psi(\mathcal{B}_n)\subset \mathcal{B}_n$. This follows from the fact that, for every $x\in X$, there is an open neighborhood $V_x$ and $n_x\in\mathbb N$ such that $V_x\cap(\cup_{n\not=n_x} \mathcal{B}_n)=\emptyset$, and therefore $\Psi|_{V_x}= \Psi_{n_x}|_{V_x}$. \medskip Finally, let us check that the $C^p$ smooth function \begin{align*} &g:X\longrightarrow \mathbb R \\ &g:=H\circ \Phi\circ\Psi\end{align*}$4\varepsilon$-approximates $f$ on $X$ and $g$ does not have critical points. Indeed, for every $x\in X$, if $\Psi(x)\not=x$, then there is $\mathcal{B}_{n_x}$ such that $x\in \mathcal{B}_{n_x}$. Since the oscillation of $f$ in $\mathcal{B}_{n_x}$ is less than $\varepsilon$ and $\Psi(x)\in \mathcal{B}_{n_x}$, we can deduce that $|f(\Psi(x))-f(x)|<\varepsilon$, for every $x\in X$. Recall that $F\circ \Phi=f$ \ and \ $|H(x)-F(x)|<3\varepsilon$, for every $x\in S^+$. Then, \begin{align}\label{g-f} |g(x)-f(x)|&=|H\circ\Phi(\Psi(x))-F\circ\Phi(x)|\\ \notag &\le |H(\Phi(\Psi(x)))-F(\Phi(\Psi(x)))|+ |F\circ\Phi(\Psi(x))-F\circ\Phi(x)|\\ \notag &\le 3\varepsilon +\varepsilon=4\varepsilon, \end{align} for every $x\in X$. Since $\Phi$ and $\Psi$ are $C^p$ diffeomorphisms, we have that $g'(x)=0$ if and only if $H'(\Phi(\Psi(x)))=0$. \ For every $x\in X$, \ $\Psi(x)\not\in \cup_n\overline{\mathcal{O}}_n$ \ and thus \ $\Phi(\Psi(x))\not\in \cup_n \overline{O}_n$. 
It follows that \ $H'(\Phi(\Psi(x)))\not=0$ \ and \ $g$ does not have any critical point. \medskip Before finishing the proof, let us say what additional precautions are required in the case when $\varepsilon$ is a strictly positive continuous function: \begin{itemize} \item the slices \ $S_k=\{x\in S: \ f_k(x)>\delta_k\}$, \ $(k\in\mathbb N)$ \ are selected with the additional property that the oscillation of the two functions \ $F$ \ and \ $\overline{\varepsilon}=\varepsilon \circ \Phi^{-1}$ \ in \ $S_k$ \ are less than \ $\frac{\overline{\varepsilon}(y_k)}{2}$, \ where $y_k$ is the point of $S^+ $ satisfying $f_k(y_k)=1$; \ this implies, in particular, that \ $\frac{1}{2}\,\overline{\varepsilon}(y_k)<\overline{\varepsilon}(x)<\frac{3}{2}\,\overline{\varepsilon}(y_k)$,\ for every $x\in S_k$; \item the real numbers \ $a_k\in \mathbb R^*$ \ satisfy that \ $|a_k-F(x_k)|<\frac{\overline{\varepsilon}(y_k)}{2}$; \item the oscillation of $r_k$ in $S_k$ is less than $\frac{\overline{\varepsilon}(y_k)}{|a_k|}$. \end{itemize} From the above conditions and inequality \eqref{akrk-F}, it can be deduced that if \ $x\in U_k$\,, then $|a_kr_k(x)-F(x)|\le 2\,\overline{\varepsilon}(y_k)<4\overline{\varepsilon}(x).$ \ From this, it can be obtained that $|H(x)-F(x)|\le 4\overline{\varepsilon}(x)$, \ for every \ $x\in S^+$. Equivalently, $|H\circ \Phi(x)-F\circ \Phi(x)|=|H\circ \Phi(x)-f(x)|<4\,\varepsilon(x)$, \ for every $x\in X$. Now, if $x\not=\Psi(x)$, then there is $\mathcal{B}_{n_x}$ such that $x,\Psi(x)\in \mathcal{B}_{n_x}$. Thus, $|f(\Psi(x))-f(x)|< \frac{\varepsilon(\Phi^{-1}(y_{n_x}))}{2}<\varepsilon(x)$. 
Now, from inequality \eqref{g-f}, we obtain: (a) if $x\in \mathcal{B}_{n_x}$ \ for some $n_x$,\ then \ $|g(x)-f(x)|\le 4\,\varepsilon(\Psi(x))+\varepsilon(x)\le 6\,\varepsilon(\Phi^{-1}(y_{n_x}))+\varepsilon(x)\le 13\,\varepsilon(x)$, and \ (b) if \ $x\not\in\cup_n\mathcal{B}_n$\,, then $|g(x)-f(x)|\le 4\,\varepsilon(\Psi(x))=4\,\varepsilon(x).$ \ This finishes the proof of Theorem \ref{approximation theorem}. $\Box$

\bigskip

\begin{rem} {\rm The construction of the function $g$ with no critical points that approximates $f$ with a constant $\varepsilon>0$ is considerably shorter in the case that either (i) $X=\ell_2(\mathbb N)$ (and we use West's theorem \cite{West})\ or \ (ii) $X$ is non-reflexive and the norm $|\cdot|$ considered on $Y$ can be constructed with the additional property that the set $\{ f\in Y^{*}: f \textrm{ does not attain its norm} \}$ contains a dense subspace (except the zero functional). Indeed, in the first case, we can take $|\cdot|$ to be the standard norm on $\ell_2(\mathbb N)$. In both cases, the use of the auxiliary functions $r_n$\, is not required,\ we can consider the slice \ $R_n:=S_n$ (that is, the additional construction of the sequence of slices $\{R_n\}$ is not required) \ and \ we can select for every $n\in \mathbb N$, \ any strictly decreasing sequence \ $\{\gamma_{n,i}\}_i$ \ such that \ $\lim_i\gamma_{n,i}=\delta_n$\,. \ Then, \ let us choose a non-zero functional $w\in Y^*\setminus [f_n: n\in \mathbb N]$ (where $[f_n:n\in \mathbb N]$ denotes the space of all finite linear combinations of the set $\{f_n:\, n\in \mathbb N\}$) with $|w|^*<\varepsilon$, \ and define \ $H=\frac{\sum_ia_ih_i}{\sum_ih_i}+w$ \ and \ $\bold{H_n}=\frac{\sum_{i=1}^na_ih_i}{\sum_{i=1}^nh_i}+w$, \ for every $n\in \mathbb N$. In case (i), we obtain that \ $Z_n$ \ (the set of critical points of $\bold{H_n}$) is included in the compact set $D^{-1}([f_1,...,f_n,w]\cap S^*)$.
Therefore, the set $C$ of critical points of $H$ and thus the set $\mathcal{C}$ of critical points of the composition \ $ H \circ \Phi $, \ are closed and locally compact sets of $S^+$ and $\ell_2$\,, respectively. \ Now, $g$ is obtained, by applying West's theorem \cite{West}, considering a $C^\infty$ \ deleting diffeomorphism $\Psi$ from $\ell_2(\mathbb N)$ onto $\ell_2(\mathbb N)\setminus \mathcal{C}$, with the additional property that the family \ $\{(x,\,\Psi(x)):\ x\in\ell_2(\mathbb N)\}$ \ refines the open covering $ \{\Phi^{-1}(S_{n}):\ n\in \mathbb N\} $. Finally, \ we can define \ $g:= H \circ \Phi \circ \Psi$.

\noindent In the case (ii), we can select the family $\mathcal{G}=\{f_n:\ n\in \mathbb N\}\cup\{w\}$ \ with the additional requirement that \ $[\mathcal{G}]\setminus\{0\}$ \ is included in the set of non-norm attaining functionals. \ Thus, the sets of critical points of both $\bold{H_n}$ and $H$ are empty. Therefore, the set of critical points of $g:=H\circ\Phi$ is empty and $g$ approximates $f$. Notice that this case is particularly interesting because the use of a deleting diffeomorphism is not required. }
\end{rem}

\begin{center} {\bf Acknowledgements} \end{center}

\noindent This research was carried out during Jim\'{e}nez-Sevilla's stay at the Mathematics Department of Ohio State University; Jim\'{e}nez-Sevilla wishes to thank very specially Peter March and Boris Mityagin for their kind hospitality. Azagra thanks Gilles Godefroy and Yves Raynaud for all their help during his stay at Institut de Math\'{e}matiques de Jussieu (Universit\'{e} Paris 6).

\bigskip
\title{G-invariant Spin Structures on Spheres}
% Source: https://arxiv.org/abs/2109.09580

\begin{abstract}
We examine which of the compact connected Lie groups that act transitively on spheres of different dimensions leave the unique spin structure of the sphere invariant. We study the notion of invariance of a spin structure and prove this classification in two different ways: through examining the differential of the actions and through representation theory.
\end{abstract}
\section*{Introduction}

Given a Lie group $G$ acting on a manifold $M$, it is a natural question to ask which structures of the manifold are preserved by $G$. For example, if $M$ is orientable, then connected Lie groups always preserve an orientation. However, if we consider that $M$ admits spin structures (which can be regarded as a refinement of orientation), the question of their preservation by $G$ is more complicated. For example, even if the manifold has a unique spin structure and the Lie group is connected, it is possible for the action to not preserve it, as we will show in the main theorem. The question is even more relevant in the case of homogeneous spaces $G/H$, where the group action determines the manifold. Here, spin structures can be nicely characterized in terms of lifts of the isotropy representation to the group $\operatorname{Spin}(n)$ (see \cite{ALEKSEEVSKYD} and references therein).

The question is motivated by the utility of $G$-invariant spin structures; in addition to their elegant form mentioned above, $G$-invariance allows one to easily compute the spinors which are invariant with respect to the connection induced by the action of the Lie group. This fact is a further motivation to carefully study the case of homogeneous spaces: most examples of geometries admitting special spinors are indeed homogeneous (see for example \cite{wang1989parallel}, \cite{bar1993real} and \cite{Agr}).

A particularly interesting case is that of the spheres. Most commonly viewed as the homogeneous space $S^n=\operatorname{SO}(n+1)/\operatorname{SO}(n)$, they can actually be realized, due to their wealth of symmetries, via various homogeneous decompositions, according to the different groups acting transitively and effectively on them. These groups were classified by D.~Montgomery and H.~Samelson (see \cite{Samuelson}).
Part of the preliminaries and an appendix at the end of this paper are dedicated to describing each of these nine possible group actions, together with their isotropy representations, as a comprehensible survey of these classical results is very difficult to find in the literature. Understanding the features of these actions is of particular importance in differential geometry; one of their most remarkable properties being the well-known fact that the transitive effective Lie groups on spheres appear as holonomy groups according to Berger's classification (\cite{Berger}) on simply connected Riemannian manifolds, with the exception of $\operatorname{Spin}(9)$ and $\operatorname{Sp}(n)\cdot \operatorname{U}(1)$. Our main theorem studies which of these actions leave the unique spin structure of spheres invariant. More precisely, we prove the following: \begin{mthm0} Let $G$ be a compact connected Lie group acting transitively and effectively on a sphere of appropriate dimension. Then, its unique spin structure is $G$-invariant if and only if $G$ is simply connected, or for $n$ odd, $\operatorname{Sp}(n+1)\cdot\operatorname{U}(1)$ or $\operatorname{Sp}(n+1)\cdot\operatorname{Sp}(1)$. \end{mthm0} Perhaps the most surprising fact is that there are non-simply connected Lie groups which preserve the spin structure. We present two different methods to prove the main theorem. One is of more differential geometric nature and based on the definition of the isotropy representation using differentials, through the fact that the actions are linear. The other approach uses the characterization of the isotropy representation as a restriction of the adjoint representation of the transitive Lie group, which enables the use of the tools of representation theory. This method is more general as it is not necessary for the action to be linear. 
It is important to point out that in general we do not need to compute the whole isotropy representation, since it is enough to find the image of loops whose classes generate the fundamental group of the stabiliser $H$. By choosing an adequate loop representative we can reduce the complexity of the computation.

The paper is divided as follows. First, we recall the basic concepts about the theory of homogeneous spaces we need, focusing on describing the different transitive group actions on spheres, and recall the concept of a $G$-invariant spin structure, with special attention to the case of homogeneous spaces. In the second section, we prove our main result, which we divide into lemmata for each of the transitive group actions. We end the paper with a few closing remarks and results. In particular we point out that our main theorem yields a complete classification of $G$-invariant spin structures on spheres, with $G$ a compact connected Lie group acting transitively.

\subsection*{Acknowledgements}
The authors would like to thank Travis Schedler for fruitful discussions about the representation theory used in this paper.

\section{Preliminaries}

\subsection{Homogeneous spaces}

In this section, we recall some well-known facts about homogeneous spaces. For more details we refer to \cite[Chapter 7]{BesseArthur2008EM}. Let $M$ be an orientable $n$-dimensional Riemannian manifold on which a connected Lie group $G$ acts transitively from the left (i.e. a homogeneous space) and fix a point $o\in M$. Then, it is well-known that $M \simeq G/H$, where $H=\operatorname{Stab}(o)$; under this identification, $o=eH$. We denote by $\mu_{g'}: M\rightarrow M,\,gH\mapsto g'gH$ the group action of $G$ on $M$. Additionally, we assume that the isotropy subgroup $H$ is compact. Then $M$ admits an invariant Riemannian metric, and the group $G$ acts by orientation-preserving isometries.
We denote by $\operatorname{Ad}^G: G \rightarrow \operatorname{GL}(\mathfrak{g})$ and $\operatorname{Ad}^H: H \rightarrow GL(\mathfrak{h})$ the adjoint representations of $G$ and $H$ respectively, where $\mathfrak{g}=T_eG$ denotes as usual the Lie algebra of $G$ and $\mathfrak{h}$ the Lie algebra of $H$. Riemannian homogeneous spaces are reductive, i.e. there exists an $H$-invariant vector space $\mathfrak{m}$ such that $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$. In this case, the tangent space $T_o(G/H)$ can be identified with $\mathfrak{m}$. By definition, the isotropy subgroup fixes the point $o$. Hence for all $h\in H$ the differential $(d\mu_h)_o:T_o(G/H)\rightarrow T_{h\cdot o}(G/H)=T_o(G/H)$ yields a linear action of $H$ on the tangent space called the isotropy representation $\sigma:H\longrightarrow \operatorname{GL}(T_o(G/H))$, such that $\sigma (h) :=(d\mu_h)_o$. As the group acts via orientation preserving isometries, we may take the image to lie inside $\operatorname{SO}(T_o(G/H))$. We will assume furthermore that $G$ acts effectively and that therefore the isotropy representation is injective. By the preceding paragraphs, the adjoint representation of $G$ restricted to $H$ satisfies $\operatorname{Ad}^G\vert_H \cong \operatorname{Ad}^H \oplus \sigma$. Therefore, if $\mathfrak{m}$ is any $H$-subrepresentation of $\mathfrak{g}$ such that $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$, we have $(\operatorname{Ad}^G\vert_H, \mathfrak{m}) \cong (\sigma, T_o(G/H))$. It is important to describe precisely the structure of the orthonormal frame bundle $FM$ of the homogeneous space $G/H$. By our previous considerations, this is a $\operatorname{SO}(\mathfrak{m})$-bundle and we can consequently identify $F_{o}M$ with $\operatorname{SO}(\mathfrak{m})$. 
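The identification $(\operatorname{Ad}^G\vert_H, \mathfrak{m}) \cong (\sigma, T_o(G/H))$ can be checked numerically in the simplest case $G=\operatorname{SO}(3)$, $H=\operatorname{SO}(2)$ (stabiliser of $e_1$). The following sketch (our own illustration, not part of the argument) conjugates a basis $\{X,Y\}$ of a complement $\mathfrak{m}$ of $\mathfrak{so}(2)$ in $\mathfrak{so}(3)$ by $h\in H$ and recovers the standard rotation action on $\mathbb{R}^2\cong\mathfrak{m}$:

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def ad(h, X):
    # Ad(h) X = h X h^{-1}; for h in SO(3) we have h^{-1} = h^T.
    return matmul(matmul(h, X), transpose(h))

def h(theta):
    # Element of the stabiliser H = SO(2) of e_1, embedded in SO(3).
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

# A basis {X, Y} of a complement m of so(2) inside so(3).
X = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]
Y = [[0, 0, -1], [0, 0, 0], [1, 0, 0]]
```

Numerically, $\operatorname{Ad}(h(\theta))X=\cos\theta\, X+\sin\theta\, Y$ and $\operatorname{Ad}(h(\theta))Y=-\sin\theta\, X+\cos\theta\, Y$, i.e. the isotropy representation of $\operatorname{SO}(2)$ on $\mathfrak m$ is the usual rotation by $\theta$.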
Consider now the bundle $p: G\times_\sigma \operatorname{SO}(\mathfrak{m})\rightarrow G/H$ associated to the principal bundle $G\rightarrow G/H$, where $G\times_\sigma \operatorname{SO}(\mathfrak{m})$ denotes the quotient of $G\times \operatorname{SO}(\mathfrak{m})$ by the right $H$-action $(g,A)\cdot h:=(gh,\sigma(h^{-1})A)$, whose equivalence classes we denote by $[g,A]$. The map
$$G\times_\sigma \operatorname{SO}(\mathfrak{m})\rightarrow FM,\quad [g,A]\mapsto (gH,d\mu_g A)$$
is a bundle isomorphism, with $d\mu_g: F_oM\rightarrow F_{gH}M$ being the pushforward of the action by $G$.

\subsection{Classification of group actions on spheres}
\label{subsection_groupactions}

We are interested in compact connected Lie groups acting effectively and transitively on spheres. Note that the compactness condition enables us to work with isometric actions, while we can always assume that the group is connected by taking the identity component, which would also act transitively on the homogeneous space. These groups were first classified by Montgomery and Samelson in \cite{Samuelson}. We will recall the actions of the classical Lie groups below, as we will explicitly need them later. Since a comprehensive (and comprehensible) survey of the description of these actions is somewhat hidden in the literature, we added the exceptional cases in an appendix at the end of the paper. We will also describe loops whose classes generate the fundamental groups of the connected Lie groups which are not simply connected. For this purpose, we denote the usual rotation by
\[R(t)=\left( \begin{matrix} \cos(2\pi t) & -\sin(2\pi t)\\ \sin(2\pi t) & \cos(2\pi t) \end{matrix}\right).
\]

\paragraph{The actions of $\operatorname{SO}(n+1)$, $\operatorname{U}(n+1)$, $\operatorname{SU}(n+1)$ and $\operatorname{Sp}(n+1)$.}
First we recall that
\begin{eqnarray*}
\operatorname{SO}(n)&:=&\{A\in \operatorname{O}(n):\det A=1\}, \textrm{ where }\operatorname{O}(n):=\{A\in \operatorname{GL}(n,\mathbb{R}):A^tA=AA^t=Id\},\\
\operatorname{SU}(n)&:=&\{A\in \operatorname{U}(n): \det A=1\}, \textrm{ where }\operatorname{U}(n):=\{A\in \operatorname{GL}(n,\mathbb{C}):A\Bar{A}^t=\Bar{A}^tA=Id\}, \\
\operatorname{Sp}(n)&:=&\{A\in\operatorname{GL}(n,\mathbb{H}):A\Bar{A}^t=\Bar{A}^tA=Id\}.
\end{eqnarray*}
The group $\operatorname{SO}(n+1)$ acts transitively on the unit sphere $S^n:=\{(v_0,\dots,v_n)\in \mathbb{R}^{n+1}:\sum|v_i|^2=1\}$ by the usual matrix multiplication. It is easy to see that the isotropy group at $(1,0, \dots ,0)^T$ is given in block form by the matrix $\left(\begin{array}{c|c} 1 & 0 \\ \hline 0 & \operatorname{SO}(n) \end{array}\right) $. In order to find the generator of $\pi_1(\operatorname{SO}(n))=\mathbb{Z}_2$ for $n>2$, we use the fact that $S^n$ is $2$-connected for $n>2$ ($\pi_1(S^n)=\pi_2(S^n)=0$), together with the well-known long exact sequence of homotopy groups for the principal bundle associated to the homogeneous space structure, to conclude that the group morphism induced at the level of fundamental groups by the inclusion of the isotropy subgroup is an isomorphism (or surjective when $n=2$). Hence we can track back the generator of $\pi_1(\operatorname{SO}(n))$ to $\pi_1(\operatorname{SO}(2))$. Thus, we can write a generating loop $\alpha_n:I\longrightarrow \operatorname{SO}(n)$ of $\pi_1(\operatorname{SO}(n))$ by $\alpha_n(t)=\left(\begin{array}{c|c} Id_{n-2} & 0 \\ \hline 0 & R(t)\end{array}\right)$.
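As a quick sanity check (our own illustration, not from the paper), one can verify numerically that $R(t)$ lies in $\operatorname{SO}(2)$ for every $t$ and that $t\mapsto R(t)$ closes up into a loop on $[0,1]$:

```python
import math

def R(t):
    # The rotation R(t) from the text; t in [0, 1] traces a loop in SO(2).
    c, s = math.cos(2 * math.pi * t), math.sin(2 * math.pi * t)
    return [[c, -s], [s, c]]

def is_special_orthogonal(M, tol=1e-12):
    # Check M^T M = Id and det M = 1 for a 2 x 2 matrix M.
    (a, b), (c, d) = M
    return (abs(a * a + c * c - 1) < tol and abs(b * b + d * d - 1) < tol
            and abs(a * b + c * d) < tol and abs(a * d - b * c - 1) < tol)
```

The block-diagonal loop $\alpha_n$ inherits both properties, since it only pads $R(t)$ with an identity block.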
Similarly $\operatorname{U}(n+1)$ and $\operatorname{SU}(n+1)$ act by matrix multiplication on the unit sphere in the complex $(n+1)$-dimensional space, $S^{2n+1}:=\{(v_0,...,v_n)\in \mathbb{C}^{n+1}:\sum|v_i|^2=1\}$, where $\mathbb{C}^{n+1}$ is identified with $\mathbb{R}^{2n+2}$, and $\operatorname{Sp}(n+1)$ acts on the unit sphere in the quaternions $S^{4n+3}=\{(v_0,...,v_n)\in \mathbb{H}^{n+1}:\sum|v_i|^2=1\}$. Here $v_i=a_i+\textbf{i} b_i+\textbf{j} c_i+ \textbf{k} d_i$ and $|v_i|^2=a_i^2+b_i^2+c_i^2+d_i^2$, and again we can identify $\mathbb{H}^{n+1}$ with $\mathbb{R}^{4n+4}$. The isotropy groups can be computed as above and are $\operatorname{U}(n)$, $\operatorname{SU}(n)$ and $\operatorname{Sp}(n)$ respectively. We can use the same argument as above to find a loop whose class generates $\pi_1(\operatorname{U}(n))=\mathbb{Z}$. Knowing that the loop $\beta_1:I\longrightarrow\operatorname{U}(1)$ such that $\beta_1(t)=e^{2\pi i t}$ generates $\pi_1(\operatorname{U}(1))=\mathbb{Z}$, it is straightforward to see that the loop we seek is $\beta_n(t)=\left(\begin{array}{c|c} Id_{n-1} & 0 \\ \hline 0 & e^{2\pi i t}\end{array}\right)$.

\paragraph{The actions of $\operatorname{Sp}( n+1)\cdot \operatorname{U}( 1)$ and $\operatorname{Sp}( n+1)\cdot \operatorname{Sp}( 1)$}
The cases $\operatorname{Sp}( n+1)\cdot \operatorname{U}( 1)$ and $\operatorname{Sp}( n+1)\cdot \operatorname{Sp}( 1)$ are analogous, so we will only explain the case $\operatorname{Sp}( n+1)\cdot \operatorname{U}( 1)$ in detail. We have that $\operatorname{Sp}( n+1)\cdot \operatorname{U}( 1)=\operatorname{Sp}( n+1)\times \operatorname{U}( 1)/\{\pm(Id,1)\}$, so the group is doubly covered by $\operatorname{Sp}( n+1)\times \operatorname{U}( 1)$. We denote the elements of $\operatorname{Sp}( n+1)\times \operatorname{U}( 1)$ by $(A,z)$ and the elements of $\operatorname{Sp}( n+1)\cdot \operatorname{U}( 1)$ by $[A,z]$.
Finally, we need an embedding of $\operatorname{U}( 1)\subset \mathbb{C}$ in $\operatorname{Sp}( 1)\subset \mathbb{H}$. We choose $\iota:\operatorname{U}( 1)\hookrightarrow \operatorname{Sp}( 1)$ where $\iota(a+ib)=a+\textbf{i} b+\textbf{j} 0+ \textbf{k} 0$. Note that there are other possible inclusions. We define the action of $\operatorname{Sp}( n+1)\cdot \operatorname{U}( 1)$ in the following way. Given $[A,z]\in \operatorname{Sp}( n+1)\cdot \operatorname{U}( 1)$, we define $\mu_{[A,z]}:S^{4n+3}\longrightarrow S^{4n+3}$ such that $\mu_{[A,z]}(v)=Av(\iota(z))^{-1}$, where $v(\iota(z))^{-1}=(v_0(\iota(z))^{-1},...,v_n(\iota(z))^{-1})$. Note that this shows why we take $\operatorname{Sp}( n+1)\cdot \operatorname{U}( 1)$ instead of $\operatorname{Sp}( n+1)\times \operatorname{U}( 1)$ to get an effective action. The action is moreover transitive, since the usual action of $\operatorname{Sp}(n+1)$ on $S^{4n+3}$ is transitive. Now, we compute the isotropy subgroup $H$ for $p=(1,0,...,0)$. If $[A,z]\in H$ then $Ap=(\iota(z),0,...,0)$. From there, it is straightforward to see that
\[H=\{[\left(\begin{array}{c|c} \iota(z) & 0 \\ \hline 0 & A \end{array}\right),z]:A\in \operatorname{Sp}( n),z\in \operatorname{U}( 1)\}\cong \operatorname{Sp}( n)\cdot \operatorname{U}( 1).
\]
Moreover, the inclusion $f_n:\operatorname{Sp}( n)\cdot \operatorname{U}( 1)\longrightarrow \operatorname{Sp}( n+1)\cdot \operatorname{U}( 1)$ fulfills
\[f_n([A,z])=[\left(\begin{array}{c|c} \iota(z) & 0 \\ \hline 0 & A \end{array}\right),z].
\]
Once again, all the inclusions of isotropy subgroups are isomorphisms at the level of fundamental groups. Thus we can track back the generator of $\pi_1(\operatorname{Sp}( n)\cdot \operatorname{U}( 1))=\mathbb{Z}$ to the case $n=0$. Then, we have that the generator of the fundamental group of $\operatorname{U}( 1)/\mathbb{Z}_2$ is represented by the loop $\gamma_0:I\longrightarrow \operatorname{U}( 1)/\mathbb{Z}_2$ such that $\gamma_0(t)=[e^{i\pi t}]$.
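The independence of $\mu_{[A,z]}$ from the chosen representative, i.e. that $(A,z)$ and $(-A,-z)$ induce the same map, can be sanity-checked numerically in the $n=0$ case. The following minimal quaternion arithmetic (our own sketch, not part of the paper) does so in plain Python:

```python
def qmul(p, q):
    # Hamilton product of quaternions represented as 4-tuples (a, b, c, d)
    # standing for a + bi + cj + dk.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(q):
    a, b, c, d = q
    n = a*a + b*b + c*c + d*d
    return (a/n, -b/n, -c/n, -d/n)

def iota(z):
    # The chosen embedding iota: U(1) -> Sp(1), a + ib |-> a + bi + 0j + 0k.
    return (z.real, z.imag, 0.0, 0.0)

def mu(A, z, v):
    # The n = 0 instance of the action: mu_{[A,z]}(v) = A v iota(z)^{-1},
    # with A a unit quaternion and z a unit complex number.
    return qmul(qmul(A, v), qinv(iota(z)))
```

Since $\iota(-z)=-\iota(z)$ and quaternion inversion is odd on unit quaternions, the two sign changes cancel, confirming that the action descends to the quotient by $\{\pm(Id,1)\}$; the action also visibly preserves the unit sphere.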
Using the inclusions, we can see that the generator of $\pi_1(\operatorname{Sp}( n)\cdot \operatorname{U}( 1))=\mathbb{Z}$ is represented by a loop $\gamma_n:I\longrightarrow \operatorname{Sp}( n)\cdot \operatorname{U}( 1)$ such that $\gamma_n(t)=[\iota(e^{i\pi t})Id,e^{i\pi t}]$.

The action of $\operatorname{Sp}( n+1)\cdot \operatorname{Sp}( 1)=\operatorname{Sp}( n+1)\times \operatorname{Sp}( 1)/\{\pm(Id,1)\}$ on $S^{4n+3}$, as well as its isotropy group, are obtained exactly as for $\operatorname{Sp}( n+1)\cdot \operatorname{U}( 1)$; we simply replace the inclusion $\iota(z)$ by $z$ itself and take $z\in\operatorname{Sp}(1)$. A loop whose class generates $\pi_1(\operatorname{Sp}(n)\cdot\operatorname{Sp}(1))\cong \mathbb{Z}_2 $ is $\gamma_n'(t)=[\iota(e^{i\pi t})Id,\iota(e^{i\pi t})]$.

\subsection{Spin structures on homogeneous spaces}

Let $M$ be an orientable smooth Riemannian manifold and let $(FM,p,M,\operatorname{SO}(n))$ be its orthonormal frame bundle. Recall that the group $\operatorname{Spin}(n)$ is the unique connected double covering of $\operatorname{SO}(n)$; we denote the covering map by $\lambda$. Then a spin structure is a pair $(P, \Lambda)$, where $\Lambda: P \rightarrow FM$ is a 2-covering such that we have a principal $\operatorname{Spin}(n)$-bundle $(P,p'=p\circ\Lambda,M,\operatorname{Spin}(n))$ and the following diagram commutes:
\[
\begin{tikzcd}
P \times \operatorname{Spin}(n) \arrow[r, "\Phi_{\operatorname{Spin}}"] \arrow[dd, "\Lambda \times \lambda"] & P \arrow[dd, "\Lambda"] \arrow[dr, "p'"] \\
& & M \\
FM \times \operatorname{SO}(n) \arrow[r, "\Phi_{\operatorname{SO}}"] & FM \arrow [ur, "p"]
\end{tikzcd}
\]
Here $\Phi_{\operatorname{Spin}}$ and $\Phi_{\operatorname{SO}}$ denote the actions of these groups on the total spaces of their respective principal bundles.
In addition, two spin structures $(P_1,\Lambda_1)$ and $(P_2,\Lambda_2)$ are said to be equivalent if there exists a principal bundle isomorphism $f:P_1\longrightarrow P_2$ such that $\Lambda_2\circ f=\Lambda_1$. Although we fix a Riemannian metric to define a spin structure, it is well-known that the obstruction to its existence is purely topological. More precisely, an orientable Riemannian manifold $M$ is spin if and only if its second Stiefel-Whitney class vanishes, $w_2(M)=0$. Then, there exists a one-to-one correspondence between spin structures up to equivalence and cohomology classes $c\in H^1(FM,\mathbb{Z}_2)$ such that $i^*(c)\neq 0$, where $i$ is the inclusion of a fiber. Moreover, the number of spin structures is precisely $|H^1(M,\mathbb{Z}_2)|$ (see \cite[Chapter 2]{Friedrich}). It is immediate to conclude from the above discussion the well-known fact that $S^n$ is spin, with a unique spin structure. The first step to study the relation between spin structures and group actions is to understand how an orientation preserving isometry $f:M\longrightarrow M$ transforms spin structures on a spin manifold $M$. Recall that each orientation preserving isometry induces an isomorphism on the frame bundle of the manifold, which we also denote by $f:FM\longrightarrow FM$, such that $f(p,(v_1,...,v_n))=(f(p),(df_{|p}v_1,...,df_{|p}v_n))$. We would like to extend this map to a spin structure $(P,\Lambda)$. More precisely, we seek a map $\tilde{f}:P\longrightarrow P$ such that the diagram \[ \begin{tikzcd} P\ar{r}{\tilde{f}}\ar{d}{\Lambda} & P\ar{d}{\Lambda}\\ FM\ar{r}{f} & FM\\ \end{tikzcd} \] commutes; moreover, $\tilde{f}$ must preserve the fibers and be equivariant with respect to the $\operatorname{Spin}(n)$ right action on $P$. 
Since the spin structure is represented by a cohomology class $c\in H^1(FM,\mathbb{Z}_2)$ such that $i^*(c)\neq0\in H^1(\operatorname{SO}(n),\mathbb{Z}_2)$, it can be seen, using \v{C}ech cohomology, that the map $f:FM\longrightarrow FM$ can be lifted to a map $\tilde{f}:P\longrightarrow P$ between spin structures if and only if $f^*(c)=c$ (see \cite{Chichilnisky1996351}). It is important to remark that there are always two possible lifts. Now, we can introduce the concept of $G$-invariant spin structure. Let $G$ be a Lie group acting by orientation preserving isometries on a spin manifold $M$ and let $\phi_g:M\longrightarrow M$ be the isometry induced by the action of an element $g\in G$ (we use $\phi$ instead of $\mu$ to emphasize that the action is not necessarily transitive). As above, we also denote by $\phi_g$ the unique bundle isomorphism induced on $FM$. Then: \begin{defn0}\label{Ginvariant spin structure def} We say that a spin structure $(P, \Lambda)$ is $G$-invariant if there exists an action of $G$ on $P$ covering the action of $G$ on $FM$. This means that for each $g \in G$ there is an isomorphism $\tilde{\phi}_g:P\longrightarrow P$ such that $\Lambda\circ\tilde{\phi}_g=\phi_g\circ\Lambda$. \end{defn0} \begin{remark0}\label{invariant and subgroups} Let $G'\subset G$ be Lie groups acting on a spin manifold $M$. If a fixed spin structure is $G$-invariant, then it is also $G'$-invariant. Thus, if the spin structure is not $G'$-invariant, then it is not $G$-invariant. \end{remark0} Since the action of a connected Lie group is homotopically trivial, the above cohomological condition is always fulfilled. This implies that if a connected Lie group $G$ acts on a spin manifold $M$, then either $G$ or a $2$-covering $\tilde{G}$ acts in a way that preserves the spin structure (see \cite{Chichilnisky1996351}). Note that the action of $\tilde{G}$ on $M$ is not effective. Finally, if $G$ is simply connected the only possibility for $\tilde{G}$ is $G\times \mathbb{Z}_2$. 
This implies that every spin structure is $G$-invariant when $G$ is simply connected. For a Riemannian homogeneous space, the action of $G$ on its frame bundle $G\times_\sigma\operatorname{SO}(n)$ is simply $\mu_g([g',A])=[gg',A]$. Assume now that there exists a lift of the isotropy representation to the spin group, that is, a group morphism $\tilde{\sigma}:H\longrightarrow \operatorname{Spin}(n)$ such that $\sigma=\lambda\circ \tilde{\sigma}$. Note that by using covering space theory and the fact that $\operatorname{Spin}(n)$ is simply connected for $n>2$, it is straightforward to see that this lift exists if and only if $\sigma_*:\pi_1(H)\longrightarrow \pi_1(\operatorname{SO}(n))$ is trivial. Then, it is a known fact that we can construct a spin structure of the form $G\times_{\tilde{\sigma}} \operatorname{Spin}(n)$ with a 2-covering $\Lambda([g,x])=[g,\lambda(x)]$. The next proposition shows that the existence of a lift and the existence of a $G$-invariant spin structure are equivalent. One of the implications is well-known (see \cite{ALEKSEEVSKYD,CGT}), while the converse statement and its proof are not clearly presented in the literature (for example, it is only mentioned in \cite{bar1992dirac}). Since it is a key step in the proof of the main theorem, we provide a detailed proof of this fact. \begin{prop0}\label{invariant spin structures and lifts} Let $M=G/H$ be a spin homogeneous space where $H$ is connected. Then there exists a $G$-invariant spin structure if and only if the isotropy representation lifts to a map $\tilde{\sigma}:H\longrightarrow \operatorname{Spin}(n)$. \end{prop0} \begin{proof} Assume that the isotropy representation lifts to a group morphism $\tilde{\sigma}:H\longrightarrow \operatorname{Spin}(n)$; then $\Lambda:G\times_{\tilde{\sigma}}\operatorname{Spin}(n)\longrightarrow G\times_\sigma \operatorname{SO}(n)$ is a spin structure. 
We consider the action of $G$ on $G\times_{\tilde{\sigma}}\operatorname{Spin}(n)$ such that $\tilde{\mu}_g:G\times_{\tilde{\sigma}}\operatorname{Spin}(n)\longrightarrow G\times_{\tilde{\sigma}}\operatorname{Spin}(n)$ fulfils $\tilde{\mu}_g([g',x])=[gg',x]$ for every $g\in G$. It is clear that $\Lambda\circ\tilde{\mu}_g=\mu_g\circ\Lambda $, hence the spin structure is $G$-invariant. Now, we assume that there exists a $G$-invariant spin structure $(P,\Lambda)$ and that the isotropy representation does not lift, in order to reach a contradiction. Under these conditions we can find a loop $\gamma(t)$ in $H$ with $\gamma(0)=\gamma(1)=e$ such that $\sigma_*[\gamma(t)]=1\in \mathbb{Z}_2$; therefore we have a path $\tilde{\gamma}(t)$ in $\operatorname{Spin}(n)$ such that $\tilde{\gamma}(0)=1$, $\tilde{\gamma}(1)=-1$ and $\lambda(\tilde{\gamma}(t))=\sigma(\gamma(t))$. Since $\mu_{\gamma(0)}$ and $\mu_{\gamma(1)}$ are the identity on $G\times_\sigma \operatorname{SO}(n)$, both $\tilde\mu_{\gamma(0)}$ and $\tilde\mu_{\gamma(1)}$ must be the identity on $P$. Our aim is to see that if $\tilde\mu_{\gamma(0)}$ is the identity then $\tilde\mu_{\gamma(1)}$ cannot be the identity. We consider the path $\tilde{\mu}_{\gamma(t)}(y)$ in $P$, where $y$ is an element such that $\Lambda(y)=[e, Id]$. Note that $\tilde{\mu}_{\gamma(0)}(y)=y$. Then we have that $\Lambda(\tilde{\mu}_{\gamma(t)}(y))=\mu_{\gamma(t)}[e,Id]=[\gamma(t),Id]=[e,\sigma(\gamma(t))]$. Thus the path in $P$ descends to a loop inside a fiber of the orthonormal frame bundle. Since $\Lambda$ preserves the fibers, we have that $\tilde{\mu}_{\gamma(t)}(y)$ is in a fiber of the $\operatorname{Spin}(n)$-principal bundle. Therefore, there exists a path $\gamma'(t)$ in $\operatorname{Spin}(n)$ such that $\gamma'(0)=1$ and $\tilde{\mu}_{\gamma(t)}(y)=y\gamma'(t)$. This implies that $[e,\sigma(\gamma(t))]=\Lambda(\tilde{\mu}_{\gamma(t)}(y))=\Lambda(y\gamma'(t))=\Lambda(y)\lambda(\gamma'(t))=[e,\lambda(\gamma'(t))]$. 
Therefore, we have that $\sigma(\gamma(t))=\lambda(\gamma'(t))$ and since $\gamma'(0)=\tilde{\gamma}(0)=1$, we conclude $\gamma'(t)=\tilde{\gamma}(t)$. Hence, $\tilde{\mu}_{\gamma(1)}(y)=-y$, which implies that $\tilde{\mu}_{\gamma(1)}\neq Id_P$, a contradiction. Thus the claim follows. \end{proof} If the lift exists then it is unique, since $H$ is connected. In this case, a $G$-invariant spin structure $(P,\Lambda)$ on $G/H$ is equivalent to the spin structure $G\times_{\tilde{\sigma}}\operatorname{Spin}(n)$ via the map $f:G\times_{\tilde{\sigma}}\operatorname{Spin}(n)\longrightarrow P$ such that $f([g,x])=\tilde{\mu}_g(p_ox)$, where $\Lambda(p_o)=[e,Id]$. This leads to the next corollary: \begin{corollary0} If there exists a $G$-invariant spin structure on $G/H$ with $H$ connected, then it is unique. \end{corollary0} \begin{remark0} Let $G$ be a connected Lie group, seen as the homogeneous space $G\cong G/\{e\}$. Then, the isotropy representation lifts trivially and we have a $G$-invariant spin structure, which corresponds to the trivial bundle $G\times \operatorname{Spin}(n)$. This spin structure is the only $G$-invariant spin structure on $G$. This fact is especially relevant when $G$ has multiple inequivalent spin structures, like $T^n$, since it gives a preferred spin structure to work with. \end{remark0} In the spirit of the main theorem, we can also ask which connected Lie groups $G$ acting transitively on another Lie group $M$, seen as a manifold, leave its trivial spin structure invariant. There are cases, like $M=T^n$, where the only group acting transitively on it is the group itself, thus the answer is trivial. However, if we pick $M=\operatorname{SU}(2)\cong S^3$, then $\operatorname{SO}(4)$ also acts transitively and effectively on $M$, but as we will see later, the unique spin structure of $S^3$ is not $\operatorname{SO}(4)$-invariant. 
\section{Main Result} The groups $\operatorname{SU}(n)$, $\operatorname{Sp}(n)$, $\operatorname{G}_2$, $\operatorname{Spin}(7)$ and $\operatorname{Spin}(9)$ are simply connected, so in these cases the isotropy representations always lift and the spin structure is invariant. Hence, we only need to compute the cases $\operatorname{SO}(n)$, $\operatorname{U}(n)$, $\operatorname{Sp}(n)\cdot\operatorname{U}(1)$ and $\operatorname{Sp}(n)\cdot\operatorname{Sp}(1)$. We present two different approaches to prove the theorem for these groups. Our first approach is to compute $\sigma(\gamma(t))=(d\mu_{\gamma(t)})_o$, where $\gamma(t)$ is a loop whose class generates the fundamental group of the group we study. In the second approach we use representation theory to find $\sigma$, using that $\operatorname{Ad}^G\vert_H =\operatorname{Ad}^H \oplus \sigma$, and then compute $\sigma(\gamma(t))$. For each of the following lemmata corresponding to the four cases above, we first give the proof using the first approach and then the more representation-theoretic technique. Note that for the complex and quaternionic groups, we will use complex representations, which are more convenient to work with. One can uniquely recover, up to isomorphism, the isotropy representation from its complexification: if $\rho, \rho'$ are real finite-dimensional representations of any group, then $\rho \cong \rho'$ if and only if the complexifications $(\rho \otimes \mathbb{C}) \cong (\rho' \otimes \mathbb{C})$ are isomorphic. This follows because $(\rho \otimes \mathbb{C})^{\mathbb{R}} \cong \rho \otimes (\mathbb{C})^{\mathbb{R}} \cong \rho \oplus \rho$. Here $\psi^{\mathbb{R}}$ denotes the underlying real representation of a complex representation $\psi$. \begin{lemma0}\label{SO(n) not invariant} The spin structure of $S^n$ is not $\operatorname{SO}(n+1)$-invariant. \end{lemma0} \begin{proof} Recall that in this case, $S^n=\operatorname{SO}(n+1)/\operatorname{SO}(n)$ and we take $o=(1,0,...,0)$. 
Then the action is the restriction of a linear action, hence the differential $(d\mu_{\alpha_n(t)})_o:T_o\mathbb{R}^{n+1}\longrightarrow T_o\mathbb{R}^{n+1}$ is the matrix $f_n(\alpha_n(t))$, where $\alpha_n$ is the generating loop described in Section \ref{subsection_groupactions}. If we restrict it to $T_oS^{n}=\{v\in\mathbb{R}^{n+1}:\langle v,(1,0,...,0) \rangle=0\}=\langle e_2,...,e_{n+1}\rangle$, we obtain that $\sigma(\alpha_n(t))=\alpha_n(t)\in \operatorname{SO}(n)$. Since $[\alpha_n(t)]$ is the generator of $\pi_1(\operatorname{SO}(n))$, we can conclude that the isotropy representation does not lift, which implies that the spin structure is not $\operatorname{SO}(n+1)$-invariant. Let us now denote by $\lambda_n$ the standard representation of $\operatorname{SO}(n)$, which is given by the left multiplication on $\mathbb{R}^n$. It is a well-known fact that $\operatorname{Ad}^{\operatorname{SO}(n)} \cong \bigwedge^2\lambda_n$. Recall that $\sigma=\operatorname{Ad}^{\operatorname{SO}(n+1)}|_{\operatorname{SO}(n)}:\operatorname{SO}(n)\longrightarrow \operatorname{SO}(\mathfrak{m})$ and that we have the chain of isomorphisms $\operatorname{Ad}^{\operatorname{SO}(n+1)}|_{\operatorname{SO}(n)}\cong \bigwedge^2(\lambda_{n+1}|_{\operatorname{SO}(n)})\cong \bigwedge^2(\lambda_n\oplus 1)\cong \bigwedge^2\lambda_{n}\oplus\lambda_n$, where $1$ denotes the trivial representation. The first summand is the adjoint representation of the isotropy subgroup $\operatorname{SO}(n)$, which means that $\sigma\cong\lambda_n$. This implies, as we have already seen, that $\sigma(\alpha_n(t))=\alpha_n(t)$ and hence the isotropy representation does not lift. \end{proof} The next case we discuss is the action of $\operatorname{U}(n+1)$. \begin{lemma0}\label{U(n) not invariant} The spin structure of $S^{2n+1}$ is not $\operatorname{U}(n+1)$-invariant. \end{lemma0} \begin{proof} We proceed in the same way for $S^{2n+1}=\operatorname{U}(n+1)/\operatorname{U}(n)$. 
Let again $\beta_n$ be the generating loop described in Section \ref{subsection_groupactions}. In order to compute $\sigma(\beta_n(t)) $, we view the action as a linear action on $S^{2n+1}\subset \mathbb{C}^{n+1}\subset\mathbb{R}^{2n+2}$. We can decomplexify the matrices of $\operatorname{U}(n+1)$ in order to see them in $\operatorname{SO}(2n+2)$. Consequently, the isotropy representation is given by the natural inclusions $ \operatorname{U}(n)\subset \operatorname{SO}(2n)\subset \operatorname{SO}(2n+1)$. Therefore, we obtain that $\sigma(\beta_n(t))=\left(\begin{array}{c|c} Id_{2n-1} & 0 \\ \hline 0 & R(t)\end{array}\right)\in \operatorname{SO}(2n+1)$. As in the above case, this means that the isotropy representation does not lift and the spin structure is not $\operatorname{U}(n+1)$-invariant. We now look at the second approach. Let $\mu_n$ be the standard complex representation of $\operatorname{U}(n)$ as a matrix group acting on $\mathbb{C}^{n}$. Then, the complexified adjoint representation satisfies $\operatorname{Ad}^{\operatorname{U}(n+1)}\otimes \mathbb{C} \cong \mu_{n+1} \otimes_{\mathbb{C}} \mu_{n+1}^*\cong \mu_{n+1} \otimes_{\mathbb{C}} \bar{\mu}_{n+1}$. If we restrict it to $\operatorname{U}(n)$, we obtain that $\operatorname{Ad}^{\operatorname{U}(n+1)}\otimes \mathbb{C} \vert_{\operatorname{U}(n)} \cong (\mu_n \oplus 1) \otimes_{\mathbb{C}} (\bar{\mu}_n \oplus 1) \cong (\mu_n \otimes \bar{\mu}_n) \oplus \bar{\mu}_n \oplus \mu_n \oplus 1 $. Therefore, the complexified isotropy representation is isomorphic to $\bar{\mu}_n \oplus \mu_n \oplus 1$. Note that this is isomorphic to the complexification of $\mu_n^{\mathbb{R}} \oplus 1$ (where now $1$ is the real trivial representation), so the real isotropy representation is isomorphic to $\mu_n^{\mathbb{R}} \oplus 1$. If we apply the isotropy representation to $\beta_n(t)$, we obtain $R(t) \oplus 1 \oplus...\oplus 1 $, a rotation matrix whose class generates the fundamental group of $\operatorname{SO}(2n+1)$, so we conclude as before. 
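The decomplexification step used here can be checked directly. The following Python sketch (ours; the convention $(z_1,...,z_n)\mapsto(x_1,y_1,...,x_n,y_n)$ is an assumption, and the exact form of $\beta_n$ comes from the earlier section, so we only test a diagonal unitary) confirms that $\operatorname{diag}(1,e^{i\theta})$ realifies to the block matrix $\operatorname{diag}(Id_2,R(\theta))$ and that realification is a homomorphism $\operatorname{U}(n)\to\operatorname{SO}(2n)$.

```python
import numpy as np

def realify(U):
    # decomplexification C^n = R^{2n}, with (z_1,...,z_n) -> (x_1, y_1, ..., x_n, y_n)
    n = U.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[0::2, 0::2] = U.real
    M[0::2, 1::2] = -U.imag
    M[1::2, 0::2] = U.imag
    M[1::2, 1::2] = U.real
    return M

theta = 0.4
U = np.diag([1.0 + 0j, np.exp(1j * theta)])
M = realify(U)
R_theta = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
# diag(1, e^{i theta}) realifies to the block matrix diag(Id_2, R(theta))
assert np.allclose(M[:2, :2], np.eye(2))
assert np.allclose(M[2:, 2:], R_theta)
# realification is a group homomorphism landing in SO(2n) for unitary input
V = np.array([[0, 1j], [1j, 0]])   # another unitary matrix
assert np.allclose(realify(U @ V), realify(U) @ realify(V))
assert np.allclose(M.T @ M, np.eye(4)) and np.isclose(np.linalg.det(M), 1.0)
```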
\end{proof} For the sake of completeness, we also compute the (complexified) isotropy representation for the Lie groups $\operatorname{SU}(n)$ and $\operatorname{Sp}(n)$. In the first case, the sphere is the same as for $\operatorname{U}(n)$, with a restricted action, so we simply restrict the previous isotropy representation to $\operatorname{SU}(n)$. For the second case, let $\nu_n$ be the standard complex representation of $\operatorname{Sp}(n)$ (of complex dimension $2n$). Then, $\operatorname{Ad}^{\operatorname{Sp}(n+1)}\otimes \mathbb{C}=S^2(\nu_{n+1})$, where $S^2$ denotes the second complex linear symmetric power. Thus, $\operatorname{Ad}^{\operatorname{Sp}(n+1)}\otimes \mathbb{C}|_{\operatorname{Sp}(n)}\cong S^2(\nu_{n}\oplus1 \oplus 1)\cong S^2(\nu_n)\oplus(S^1(\nu_n)\otimes S^1(1\oplus 1))\oplus S^2(1 \oplus 1)\cong S^2(\nu_n)\oplus\nu_n\oplus \nu_n \oplus 1 \oplus 1 \oplus 1$. This implies that the complexified isotropy representation is isomorphic to the last sum after removing the first summand. Note that this is isomorphic to the complexification of $\nu_n^{\mathbb{R}} \oplus 1 \oplus 1 \oplus 1$, so the real isotropy representation is isomorphic to $\nu_n^{\mathbb{R}} \oplus 1 \oplus 1 \oplus 1$. \begin{lemma0}\label{Sp(n)U(1) Sp(n)Sp(1) invariant} The spin structure of $S^{4n+3}$ is invariant by the transitive actions of $\operatorname{Sp}(n+1)\cdot\operatorname{U}(1)$ and $\operatorname{Sp}(n+1)\cdot\operatorname{Sp}(1)$ if and only if $n$ is odd. \end{lemma0} \begin{proof} Both cases are analogous, so we focus on the case $S^{4n+3}=\operatorname{Sp}(n+1)\cdot\operatorname{U}(1)/\operatorname{Sp}(n)\cdot\operatorname{U}(1)$. As in the previous cases, we start by computing $\sigma([A,z])=(d\mu_{f_n[A,z]})_o$. In order to do this, we will see that $\mu_{f_n[A,z]}$ is the restriction of a linear map acting on $S^{4n+3}\subset \mathbb{R}^{4n+4}$. 
We have that $\mu_{\gamma_n(t)}(v)=\iota(e^{i\pi t})v\iota(e^{i\pi t})^{-1}=(C_{\iota(e^{i\pi t})}v_0,...,C_{\iota(e^{i\pi t})}v_n)$, where $C_{w}:\operatorname{Sp}( 1)\longrightarrow \operatorname{Sp}( 1)$ is the conjugation by an element $w\in \operatorname{Sp}( 1)$. Then, a tedious computation shows that the conjugation induces a linear map on $\mathbb{R}^4$ (which we can restrict to $S^3$) given by the $4\times 4$ matrix \[\left(\begin{array}{c|c} Id & 0 \\ \hline 0 & R(t) \end{array}\right). \] Therefore, we have that $\mu_{f_n(\gamma_n(t))}\in \operatorname{SO}(4n+4)$ is \[\left(\begin{array}{c|c|c} \begin{array}{c|c} Id & 0 \\ \hline 0 & R(t)\end{array} & \cdots &0 \\ \hline \vdots & \ddots & \vdots \\ \hline 0 & \cdots & \begin{array}{c|c} Id & 0 \\ \hline 0 & R(t)\end{array}\\ \end{array} \right). \] Now, we use that $T_oS^{4n+3}=\{v\in\mathbb{R}^{4n+4}:\langle v,(1,0,...,0) \rangle=0\}=\langle e_2,...,e_{4n+4}\rangle$. This means that we have \[ (d\mu_{f_n(\gamma_n(t))})_o=\left(\begin{array}{c|c|c} \begin{array}{c|c} 1 & 0 \\ \hline 0 & R(t)\end{array} & \cdots &0 \\ \hline \vdots & \ddots & \vdots \\ \hline 0 & \cdots & \begin{array}{c|c} Id & 0 \\ \hline 0 & R(t)\end{array}\\ \end{array} \right)\in \operatorname{SO}(4n+3). \] Since we have $n+1$ rotation blocks, we can conclude that $\sigma_*[\gamma_n]=n+1\mod 2$. Hence, the isotropy representation lifts when $n$ is odd. In other words, the spin structure of $S^{8m+7}$ is invariant by the action of $\operatorname{Sp}(2m+2)\cdot \operatorname{U}(1)$ and $\operatorname{Sp}(2m+2)\cdot \operatorname{Sp}(1)$. In order to use our second method, we need again to find the isotropy representation first. 
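The "tedious computation" for the conjugation $C_{\iota(e^{i\pi t})}$ can be delegated to the machine. The following Python sketch (ours, purely a numerical check) builds the matrix of $v\mapsto wvw^{-1}$ on $\mathbb{H}\cong\mathbb{R}^4$ for $w=\iota(e^{i\pi t})$ and confirms the block form $\operatorname{diag}(Id,R(t))$, where $R(t)$ is the rotation by angle $2\pi t$ in the $(\textbf{j},\textbf{k})$-plane.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def conj_matrix(w):
    # matrix of v -> w v w^{-1} on H = R^4, for a unit quaternion w
    wbar = np.array([w[0], -w[1], -w[2], -w[3]])
    C = np.zeros((4, 4))
    for i in range(4):
        e = np.zeros(4); e[i] = 1.0
        C[:, i] = qmul(qmul(w, e), wbar)
    return C

t = 0.3
w = np.array([np.cos(np.pi * t), np.sin(np.pi * t), 0.0, 0.0])  # iota(e^{i pi t})
C = conj_matrix(w)
phi = 2 * np.pi * t
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
# C fixes the (1, i)-plane and rotates the (j, k)-plane by the angle 2 pi t
assert np.allclose(C[:2, :2], np.eye(2))
assert np.allclose(C[2:, 2:], R)
assert np.allclose(C[:2, 2:], 0) and np.allclose(C[2:, :2], 0)
```

In particular, at $t=1$ the rotation angle is $2\pi$, consistent with $\gamma_0$ generating the fundamental group.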
We note that $G=\operatorname{Sp}(n+1)\cdot\operatorname{Sp}(1)$ and $\operatorname{Sp}(n+1) \times \operatorname{Sp}(1)$ both have the same (complexified) Lie algebra $\mathfrak{g}^{\mathbb{C}}\cong\mathfrak{sp_{\mathbb{C}}}(2n+2)\oplus \mathfrak{sp_{\mathbb{C}}}(2)$ (note that here the dimension is not the quaternionic but the complex dimension). Moreover $\mathfrak{g}^{\mathbb{C}}=\mathfrak{m}^{\mathbb{C}}\oplus\mathfrak{h}^{\mathbb{C}}$, where $\mathfrak{m}^{\mathbb{C}}$ is isomorphic, as an $\mathfrak{h}^{\mathbb{C}}$-representation, to the complexified isotropy representation. Since by Section \ref{subsection_groupactions}, $H=\{[\left(\begin{array}{c|c} \iota(z) & 0 \\ \hline 0 & A \end{array}\right),z]:A\in \operatorname{Sp}( n),z\in \operatorname{Sp}( 1)\}\cong \operatorname{Sp}( n)\cdot \operatorname{Sp}( 1)$, we have that \[\mathfrak{h}^{\mathbb{C}}=\{(\left(\begin{array}{c|c} d\iota(\xi) & 0 \\ \hline 0 & \mathfrak{a} \end{array}\right),\xi):\mathfrak{a}\in \mathfrak{sp_{\mathbb{C}}}(2n), \xi\in \mathfrak{sp}_\mathbb{C}( 2)\}\cong \mathfrak{sp_{\mathbb{C}}}(2n)\oplus \mathfrak{sp_{\mathbb{C}}}(2)'\subseteq \mathfrak{g}_{\mathbb{C}},\] where $\mathfrak{sp_{\mathbb{C}}}(2)'=\{(\left(\begin{array}{c|c} d\iota(\xi) & 0 \\ \hline 0 & 0 \end{array}\right),\xi):\xi\in \mathfrak{sp}_\mathbb{C}( 2)\}$. Note that for every direct sum of Lie algebras $\mathfrak{g} = \mathfrak{g}_1 \oplus \mathfrak{g}_2$, the first summand is a subrepresentation of $\mathfrak{g}$ under the adjoint action; on $\mathfrak{g}_1$, $\mathfrak{g}_1$ acts by its adjoint representation, and $\mathfrak{g}_2$ acts trivially. Consequently if $\mathfrak{h}$ is a Lie subalgebra of $\mathfrak{g}$, its action on $\mathfrak{g}_1$ is via the composition $\displaystyle \mathfrak{h} \to \mathfrak{g} \twoheadrightarrow \mathfrak{g}_1 \mathop{\to}^{\operatorname{Ad}} \operatorname{GL}(\mathfrak{g}_1)$. 
Under the projection to the first summand $\mathfrak{sp}_{\mathbb{C}}(2n+2)$, $\mathfrak{h}^{\mathbb{C}}$ maps isomorphically onto $\mathfrak{sp}_{\mathbb{C}}(2) \oplus \mathfrak{sp}_{\mathbb{C}}(2n)$. By the preceding paragraph, the action of $\mathfrak{h}^{\mathbb{C}}$ on the first summand $\mathfrak{sp_{\mathbb{C}}}(2n+2)$ of $\mathfrak{g}^{\mathbb{C}}$ is the restriction of the adjoint representation of $\mathfrak{sp_{\mathbb{C}}}(2n+2)$ to $\mathfrak{sp_{\mathbb{C}}}(2n)\oplus \mathfrak{sp_{\mathbb{C}}}(2)\subseteq \mathfrak{sp_{\mathbb{C}}}(2n+2)$. Now, a complement to $\mathfrak{h}^{\mathbb{C}}$ in $\mathfrak{g}^{\mathbb{C}}$ can be obtained as the complement to $\mathfrak{sp}_{\mathbb{C}}(2n)$ in the first factor $\mathfrak{sp}_{\mathbb{C}}(2n+2)$. Namely, as we computed previously, $$\mathfrak{sp}_{\mathbb{C}}(2n+2) \cong S^2(\mathbb{C}^{2n+2}) \cong S^2 \mathbb{C}^{2n} \oplus (\mathbb{C}^{2n} \otimes_{\mathbb{C}} \mathbb{C}^2) \oplus S^2 \mathbb{C}^2,$$ with the first summand $S^2 \mathbb{C}^{2n}$ corresponding to $\mathfrak{sp}_{\mathbb{C}}(2n)$. So we can identify $\mathfrak{m}^{\mathbb{C}}$ with $(\mathbb{C}^{2n} \otimes_{\mathbb{C}} \mathbb{C}^2) \oplus \mathfrak{sp}_{\mathbb{C}}(2)$. In terms of matrices this is: $$\mathfrak{m}^{\mathbb{C}} = \{(\left(\begin{array}{c|c} * & * \\ \hline * & 0 \end{array}\right),0)\} \subseteq \mathfrak{sp}_{\mathbb{C}}(2n+2) \oplus \mathfrak{sp}_{\mathbb{C}}(2).$$ As a result we can realize the complexified isotropy representation as $(\nu_n \boxtimes_{\mathbb{C}} \nu_1) \oplus (\operatorname{Ad}^{\operatorname{Sp}(1)} \otimes \mathbb{C})$, where $\nu_n$ is the standard representation of $\operatorname{Sp}(n)$. Here the action on the second summand is via the projection $H=\operatorname{Sp}(n) \cdot \operatorname{Sp}(1) \to \operatorname{Sp}(1)/\pm I$. This representation is isomorphic to the complexification of $\tilde{\nu}_n^{\mathbb{R}} \oplus \operatorname{Ad}^{\operatorname{Sp}(1)}$. 
Here, $\tilde{\nu}_n^{\mathbb{R}}$ denotes the real representation corresponding to the first summand, which is nothing but $\mathbb{H}^n$ with the action described in Section \ref{subsection_groupactions} (note that $\operatorname{Sp}(1)$ does not act trivially). Note that for the second summand, the action of $\operatorname{Sp}(n)\cdot\operatorname{Sp}(1)$ factors through the projection $\operatorname{Sp}(n)\cdot\operatorname{Sp}(1) \twoheadrightarrow \operatorname{Sp}(1)/\pm I$ (whose adjoint representation is the same as that of $\operatorname{Sp}(1)$ itself). In the case of $\operatorname{Sp}(n)\cdot\operatorname{U}(1)$, including $\operatorname{U}(1)$ into $\operatorname{Sp}(1)$ as above, we must restrict the factor for $\operatorname{Sp}(1)$ to that of $\operatorname{U}(1)$. This yields the complexified representation $(\nu_n \boxtimes \nu_1 \vert_{\operatorname{U}(1)}) \oplus (\operatorname{Ad}^{\operatorname{Sp}(1)}\vert_{\operatorname{U}(1)}\otimes \mathbb{C})$. Note that $\nu_1|_{\operatorname{U}(1)}$ and $\operatorname{Ad}^{\operatorname{Sp}(1)}|_{\operatorname{U}(1)} \otimes \mathbb{C}$ are reducible, as they are complex representations of the abelian group $\operatorname{U}(1)$ of finite dimension greater than one. It is well-known how to decompose these into one-dimensional representations (e.g., by viewing $\operatorname{U}(1)$ as the maximal torus inside $\operatorname{Sp}(1)$ and then decomposing the standard and complex adjoint representations of $\operatorname{Sp}(1)$, or alternatively of the complexified Lie algebra, isomorphic to $\mathfrak{sl}_{\mathbb{C}}(2)$). The one-dimensional representations of $\operatorname{U}(1)$ are of the form $\rho_m: z \mapsto (z^m)$ for $m \in \mathbb{Z}$. We have $\nu_1|_{\operatorname{U}(1)} \cong \rho_1 \oplus \rho_{-1}$ and $\operatorname{Ad}^{\operatorname{Sp}(1)}|_{\operatorname{U}(1)} \otimes \mathbb{C} \cong \rho_{2} \oplus \rho_0 \oplus \rho_{-2}$, where $\rho_0 = 1$ is the trivial representation. 
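These weight decompositions can be confirmed on characters. The Python sketch below (our own check, using the standard fact that the real trace of a complex representation is twice the real part of its character) computes the trace of left multiplication by $\iota(e^{i\theta})$ on $\mathbb{H}\cong\mathbb{R}^4$, which must equal $2\cdot 2\cos\theta$ for $\rho_1\oplus\rho_{-1}$, and the trace of conjugation on $\operatorname{Im}\mathbb{H}$, which must equal $1+2\cos 2\theta$ for $\rho_2\oplus\rho_0\oplus\rho_{-2}$.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

theta = 0.9
w = np.array([np.cos(theta), np.sin(theta), 0.0, 0.0])   # iota(e^{i theta})
wbar = np.array([w[0], -w[1], -w[2], -w[3]])

# nu_1|U(1) = rho_1 + rho_{-1}: complex character 2 cos(theta); the real
# trace of left multiplication by w on H = R^4 is twice that value
L = np.zeros((4, 4))
for i in range(4):
    e = np.zeros(4); e[i] = 1.0
    L[:, i] = qmul(w, e)
assert np.isclose(np.trace(L), 2 * 2 * np.cos(theta))

# Ad^{Sp(1)}|U(1) (x) C = rho_2 + rho_0 + rho_{-2}: character 1 + 2 cos(2 theta),
# matching the trace of conjugation by w on the imaginary quaternions
tr = 0.0
for i in range(1, 4):
    e = np.zeros(4); e[i] = 1.0
    tr += qmul(qmul(w, e), wbar)[i]
assert np.isclose(tr, 1 + 2 * np.cos(2 * theta))
```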
Putting this together, the complexified isotropy representation decomposes as: \[\nu_n \boxtimes (\rho_{-1}\oplus\rho_1) \oplus\rho_2 \oplus 1 \oplus \rho_{-2}.\] This is isomorphic to the complexification of $\tilde{\nu}_n^{\mathbb{R}}|_{\operatorname{Sp}(n) \cdot \operatorname{U}(1)} \oplus \rho_2^{\mathbb{R}} \oplus 1$. Here $\rho_2^{\mathbb{R}}$ is a representation of $\operatorname{U}(1)$ which factors through the quotient $\operatorname{U}(1)/\pm I$, so its overall representation of $\operatorname{Sp}(n) \cdot \operatorname{U}(1)$ is given as the composition $\displaystyle \operatorname{Sp}(n) \cdot \operatorname{U}(1) \twoheadrightarrow \operatorname{U}(1)/\pm I \mathop{\to}^{\rho_2} \operatorname{GL}(1,\mathbb{C})$. So this sum is isomorphic to the real isotropy representation. Plugging in the generator calculated above, one obtains: \[\left(\begin{array}{c|c|c} \begin{array}{c|c} Id & 0 \\ \hline 0 & R(t)\end{array} & \cdots &0 \\ \hline \vdots & \ddots & \vdots \\ \hline 0 & \cdots & \begin{array}{c|c} Id & 0 \\ \hline 0 & R(t)\end{array}\\ \end{array} \right) \oplus (R(t)) \oplus 1. \] We note that this matrix contains $n+1$ rotation blocks $R(t)$ on the diagonal, so the corresponding loop is a product of $n+1$ loops, each of which generates $\pi_1(\operatorname{SO}(4n+3))\cong\mathbb{Z}_2$. Its class is therefore the parity of $n+1$, so the isotropy representation lifts if and only if $n+1$ is even, that is, if and only if $n$ is odd. \end{proof} We summarise the information of the transitive actions on spheres in Table \ref{table:isotropy}. \begin{table}[h!] 
\centering \begin{tabular}{ |c||c|c|c|c| } \hline Lie group & Manifold & Isotropy Subgroup & Isotropy Representation & $G$-invariant spin?\\ \hline $\operatorname{SO}(n+1)$ &$S^{n}$ & $\operatorname{SO}(n)$& $\lambda_n$ & No\\ $\operatorname{U}(n+1)$ &$S^{2n+1}$ & $\operatorname{U}(n)$& $\mu_n^{\mathbb{R}} \oplus 1$ & No\\ $\operatorname{SU}(n+1)$ &$S^{2n+1}$ & $\operatorname{SU}(n)$& $\mu_n^{\mathbb{R}} \oplus 1$ & Yes\\ $\operatorname{Sp}(n+1)$ &$S^{4n+3}$ & $\operatorname{Sp}(n)$& $\nu_n^{\mathbb{R}} \oplus 1 \oplus 1 \oplus 1$ & Yes\\ $\operatorname{Sp}(n+1)\operatorname{Sp}(1)$ &$S^{4n+3}$ & $\operatorname{Sp}(n)\operatorname{Sp}(1)$& $\tilde{\nu}_{n}^{\mathbb{R}} \oplus \operatorname{Ad}^{\operatorname{Sp}(1)}$ & $n$ odd\\ $\operatorname{Sp}(n+1)\operatorname{U}(1)$ &$S^{4n+3}$ & $\operatorname{Sp}(n)\operatorname{U}(1)$ & $\tilde{\nu}_{n}^{\mathbb{R}} \oplus \operatorname{Ad}^{\operatorname{Sp}(1)}\vert_{\operatorname{U}(1)} $ & $n$ odd\\ $\operatorname{G}_2$ &$S^6$ & $\operatorname{SU}(3)$& $\mu_3^{\mathbb{R}}$ & Yes\\ $\operatorname{Spin}(7)$ &$S^7$ & $\operatorname{G}_2$& $\zeta$ & Yes\\ $\operatorname{Spin}(9)$ &$S^{15}$ & $\operatorname{Spin}(7)$& $\lambda_7\oplus \Delta_7$ & Yes\\ \hline \end{tabular} \caption{Invariance of the spin structure on the sphere by transitive and effective Lie group actions.}\label{table:isotropy} \end{table} \subsection{Closing remarks} As we have shown in the main theorem, there are cases where the spin structure of the sphere is not $G$-invariant, but we can then take a double covering which does preserve the spin structure. In all of the non-lifting cases, there is in fact a unique connected double covering up to isomorphism. The next remark focuses on them. \begin{remark0} For $\operatorname{SO}(n+1)$ acting on $S^n$, the double covering is precisely $\operatorname{Spin}(n+1)$ acting non-effectively on $S^n$. 
More explicitly, the action is given by $\mu_{x}(v)=\lambda(x)v$ for $x\in \operatorname{Spin}(n+1)$ and $v\in S^{n}$. For $\operatorname{U}(n+1)$ acting on $S^{2n+1}$, the double covering is $\operatorname{MU}(n+1) := \{(A, z) \in \operatorname{U}(n+1) \times \mathbb{C}^\times \mid \det A = z^2\}$ (sometimes called the metaunitary group). For $\operatorname{Sp}(n+1)\cdot \operatorname{Sp}(1)$ and $\operatorname{Sp}(n+1) \cdot \operatorname{U}(1)$ the double coverings are $\operatorname{Sp}(n+1) \times \operatorname{Sp}(1)$ and $\operatorname{Sp}(n+1) \times \operatorname{U}(1)$, respectively. \end{remark0} Now, if $G$ is any connected group acting transitively on a sphere by isometries via $\alpha: G \to \operatorname{Isom}(S^n)$, then $G/\ker(\alpha)$ is one of the groups in Table \ref{table:isotropy}. Then there exists a $G$-invariant spin structure on $S^n$ if and only if either the last column is ``Yes'' or the map $G \twoheadrightarrow G/\ker(\alpha)$ lifts to the aforementioned connected double covering. This classifies all groups $G$ acting transitively on a sphere by isometries for which the sphere admits a $G$-invariant spin structure. The next proposition establishes a nice relationship between $G$-invariant spin structures on $G/H$ and finite subgroups of $G$: \begin{prop0} Let $M=G/H$ be a spin homogeneous space where $H$ is connected. Then, a spin structure on $G/H$ is $G$-invariant if and only if the spin structure is invariant by all subgroups of $G$ isomorphic to $\mathbb{Z}_2$. \end{prop0} \begin{proof} One direction is trivial, since if the spin structure is $G$-invariant, then it is also invariant by all its subgroups. Thus, we only need to prove the converse. Assume that there is a spin structure which is not $G$-invariant; then there exists $\alpha\in\pi_1(H)$ such that $\sigma_*(\alpha)\neq 0$. Firstly, we want to find an adequate loop $a:S^1\longrightarrow H$ such that $[a]=\alpha$. 
Let $i:T^r\hookrightarrow H$ be the inclusion of a maximal torus; then the induced map $i_*:\mathbb{Z}^r\cong \pi_1(T^r,e)\longrightarrow\pi_1(H,e)$ is surjective \cite[Theorem 7.1]{brocker2013representations}. Alternatively, one can see that $G/T$ is simply connected by arguing that $G/T$ is a complex manifold with a CW-structure with only even-dimensional cells, and then use the long exact sequence of homotopy. Then, there exists $x=(x_1,...,x_r)\in \mathbb{Z}^r$ such that $i_*(x)=\alpha$. Note that we may assume without loss of generality that $x$ is primitive in $\mathbb{Z}^r$ (i.e.\ there is no $y\in\mathbb{Z}^r$ and $\lambda\in\mathbb{Z}$ with $|\lambda|>1$ such that $x=\lambda y$). Thus, we have a map $\tilde{a}:\mathbb{R}\longrightarrow \mathbb{R}^r$ such that $\tilde{a}(t)=xt$, which induces a loop $a:S^1\longrightarrow T^r$ whose class is $(x_1,...,x_r)$. Moreover, this map is an injective group morphism. Therefore, we have seen that we can choose the representative of $\alpha\in\pi_1(H)$ to be an injective group morphism $a:S^1\longrightarrow H$ (note that if we choose a non-primitive element, the map may not be injective). We want to see that the involution $a(\tfrac{1}{2})$ does not preserve the spin structure. Because $\sigma_*(\alpha)\neq 0$, we can deduce that the spin structure is not $S^1$-invariant; thus we have the short exact sequence $$ 1\longrightarrow \mathbb{Z}_2\longrightarrow S'\longrightarrow S^1\longrightarrow 1,$$ where $S'\cong S^1$ is the double covering which preserves the spin structure of $G/H$. Then, the preimage of $\mathbb{Z}_2$ in $S'$ is a subgroup of $S'\cong S^1$ of order four, which implies that it is $\mathbb{Z}_4$. Consequently, the involution action $\mathbb{Z}_2=\{Id,a(\tfrac{1}{2})\}$ does not leave the spin structure invariant. \end{proof} \begin{example0} We can find explicitly a subgroup $\mathbb{Z}_2$ in the case of $S^n$ with the group action of $\operatorname{SO}(n+1)$. 
By the above proposition, we can pick the involution $f=\mu_{\alpha_n(\tfrac{1}{2})}$. The isometry $f:S^n\longrightarrow S^n$ is given by restricting the linear map $A:\mathbb{R}^{n+1}\longrightarrow \mathbb{R}^{n+1}$, where $A\in \operatorname{SO}(n+1)$ is the matrix \[A=\left(\begin{array}{c|c} Id_{n-1} & 0 \\ \hline 0 & -Id_2\end{array}\right).\] We claim that the spin structure of $S^n$ is not invariant under this action. Indeed, the map induced by $f$ on the frame bundle $FS^n=\operatorname{SO}(n+1)$, $f_A:\operatorname{SO}(n+1)\longrightarrow \operatorname{SO}(n+1)$, is simply the matrix multiplication $X\mapsto AX$. Thus it lifts to a map $\tilde{f}:\operatorname{Spin}(n+1)\longrightarrow \operatorname{Spin}(n+1)$ which satisfies $\tilde{f}(x)=(e_{n} \cdot e_{n+1}) \cdot x$, where $\cdot$ is the Clifford multiplication (see the appendix for details on Clifford algebras). Since $e_n \cdot e_{n+1} \cdot e_n \cdot e_{n+1}=-1$, we obtain $\tilde{f}^2(x)=-x$. Hence, the lift of the action to the spin structure is given by the group $\mathbb{Z}_4$, which implies that the spin structure is not invariant. \end{example0} Finally, it would be interesting to study the same question of $G$-invariant spin structures on other homogeneous spaces. Two conditions make the starting point of this problem difficult to check: first, given a homogeneous space $G/H$, we need to know which Lie groups act transitively and effectively on it, and second, we need to know whether it admits a spin structure. The first condition has been studied in \cite{HsiangSu1968,Oniscik_1968}, while we refer to \cite{CGT,ALEKSEEVSKYD} for the study of spin structures on some classes of homogeneous spaces. For example, we can consider the complex projective space $\mathbb{C}P^{m}$, which is known to be spin if and only if $m$ is odd. Then, if $m=2n+1$, the only groups that act effectively and transitively on it are $\operatorname{SU}(2n+2)$ and $\operatorname{Sp}(n+1)$ (see \cite{Oniscik_1968}).
Therefore, its unique spin structure is invariant under both group actions.
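For the reader's convenience, the sign computation used in the $S^n$ example above follows directly from the Clifford relations $e_ie_j=-e_je_i$ for $i\neq j$ and $e_i^2=-1$:
\[
\tilde{f}^2(x)=(e_n\cdot e_{n+1})\cdot(e_n\cdot e_{n+1})\cdot x=-\,e_n^2\,e_{n+1}^2\cdot x=-(-1)(-1)\,x=-x,
\]
so the lift of the involution generates $\mathbb{Z}_4$ rather than $\mathbb{Z}_2$.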
https://arxiv.org/abs/1809.02162
Escaping Saddle Points in Constrained Optimization
In this paper, we study the problem of escaping from saddle points in smooth nonconvex optimization problems subject to a convex set $\mathcal{C}$. We propose a generic framework that yields convergence to a second-order stationary point of the problem, if the convex set $\mathcal{C}$ is simple for a quadratic objective function. Specifically, our results hold if one can find a $\rho$-approximate solution of a quadratic program subject to $\mathcal{C}$ in polynomial time, where $\rho<1$ is a positive constant that depends on the structure of the set $\mathcal{C}$. Under this condition, we show that the sequence of iterates generated by the proposed framework reaches an $(\epsilon,\gamma)$-second order stationary point (SOSP) in at most $\mathcal{O}(\max\{\epsilon^{-2},\rho^{-3}\gamma^{-3}\})$ iterations. We further characterize the overall complexity of reaching an SOSP when the convex set $\mathcal{C}$ can be written as a set of quadratic constraints and the objective function Hessian has a specific structure over the convex set $\mathcal{C}$. Finally, we extend our results to the stochastic setting and characterize the number of stochastic gradient and Hessian evaluations to reach an $(\epsilon,\gamma)$-SOSP.
\section{Introduction} There has been a recent revival of interest in nonconvex optimization, due to obvious applications in machine learning. While the modern history of the subject goes back six or seven decades, the recent attention to the topic stems from new applications as well as the availability of modern analytical and computational tools, providing a new perspective on classical problems. Following this trend, in this paper we focus on the problem of minimizing a smooth nonconvex function over a convex set, \begin{equation}\label{eq:main_problem} \text{minimize}\ f(\bbx), \qquad \text{subject to}\ \bbx\in \ccalC, \end{equation} where $\bbx\in \reals^d$, $\ccalC \subset \reals^d$ is a closed convex set, and $f:\reals^d \to \reals$ is a twice continuously differentiable function over $\ccalC$. It is known that finding a global minimum of Problem~\eqref{eq:main_problem} is hard. Equally well known is the fact that for certain nonconvex problems, all local minimizers are global. These include, for example, matrix completion \citep{DBLP:conf/nips/GeLM16}, phase retrieval \citep{DBLP:conf/isit/SunQW16}, and dictionary learning \citep{DBLP:journals/tit/SunQW17}. For such problems, finding a global minimum of \eqref{eq:main_problem} reduces to finding one of its local minima. Given the well-known hardness results for finding stationary points, recent focus has shifted to characterizing approximate stationary points. When the objective function $f$ is convex, finding an $\eps$-first-order stationary point is often sufficient, since it leads to an approximate local (and hence global) minimum.
However, in the nonconvex setting, even when the problem is unconstrained, i.e., $\ccalC= \reals^d$, convergence to a first-order stationary point (FOSP) is not enough, as the critical point to which convergence is established might be a saddle point. It is therefore natural to look at higher-order derivatives and search for a second-order stationary point. Indeed, under the assumption that all saddle points are strict (formally defined later), in both the unconstrained and constrained settings, convergence to a second-order stationary point (SOSP) implies convergence to a local minimum. While convergence to an SOSP has been thoroughly investigated in the recent literature for the unconstrained setting, the overall complexity for the constrained setting has not been studied yet. \textbf{Contributions.} Our main contribution is to propose a generic framework which generates a sequence of iterates converging to an approximate second-order stationary point of the constrained nonconvex problem in \eqref{eq:main_problem}, when the convex set $\ccalC$ has a specific structure that allows for approximate minimization of a quadratic loss over the feasible set. The proposed framework consists of two main stages: first, it utilizes first-order information to reach a first-order stationary point; next, it incorporates second-order information to escape from a stationary point if it is a local maximizer or a strict saddle point. We show that the proposed approach leads to an $(\eps,\gamma)$-second-order stationary point (SOSP) for Problem \eqref{eq:main_problem} (see Definition~\ref{def_sosp_constrained}). The proposed approach utilizes advances in constant-factor optimization of nonconvex quadratic programs \citep{ye1992affine,fu1998approximation,tseng2003further} that find a $\rho$-approximate solution over $\ccalC$ in polynomial time, where $\rho$ is a positive constant smaller than $1$ that depends on the structure of $\ccalC$.
When such an approximate solution exists, the sequence of iterates generated by the proposed framework reaches an $(\eps,\gamma)$-SOSP of Problem~\eqref{eq:main_problem} in at most $\mathcal{O}(\max\{\eps^{-2},\rho^{-3}\gamma^{-3}\})$ iterations. We show that quadratic constraints satisfy the required condition for the convex set $\ccalC$ if the objective function Hessian $\nabla^2 f$ has a specific structure over the convex set $\ccalC$ (formally described later). For this case, we show that it is possible to achieve an $(\eps,\gamma)$-SOSP after at most $\mathcal{O}(\max\{\tau\eps^{-2} ,d^3m^7\gamma^{-3}\})$ arithmetic operations, where $d$ is the dimension of the problem, $m$ is the number of quadratic constraints, and $\tau$ is the number of arithmetic operations required to solve a linear program over $\ccalC$ or to project a point onto $\ccalC$. We further extend our results to the stochastic setting and show that we can reach an $(\eps,\gamma)$-SOSP after computing at most $\mathcal{O}(\max\{\eps^{-4},\eps^{-2}\rho^{-4}\gamma^{-4}, \rho^{-7}\gamma^{-7}\})$ stochastic gradients and $\mathcal{O}(\max\{\eps^{-2}\rho^{-3}\gamma^{-3},\rho^{-5}\gamma^{-5}\})$ stochastic Hessians. \subsection{Related work} \textbf{Unconstrained case}. The rich literature on nonconvex optimization provides a plethora of algorithms for reaching stationary points of a smooth \textit{unconstrained} minimization problem. Convergence to first-order stationary points (FOSP) has been widely studied for both deterministic \citep{nesterov2013introductory, DBLP:conf/stoc/AgarwalZBHM17, DBLP:journals/corr/CarmonDHS16, DBLP:conf/icml/CarmonDHS17, carmon2017lowerpart1, carmon2017lowerpart2} and stochastic settings \citep{DBLP:conf/cdc/ReddiSPS16,DBLP:conf/icml/ReddiHSPS16,DBLP:conf/icml/ZhuH16,DBLP:conf/nips/LeiJCJ17}. Stronger results which indicate convergence to an SOSP have also been established.
Numerical optimization methods such as trust-region methods \citep{DBLP:journals/jc/CartisGT12,DBLP:journals/mp/CurtisRS17,DBLP:journals/jgo/MartinezR17} and cubic regularization algorithms \citep{DBLP:journals/mp/NesterovP06,DBLP:journals/mp/CartisGT11,DBLP:journals/mp/CartisGT11a} can reach an approximate second-order stationary point in a finite number of iterations; however, the computational complexity of each iteration could be relatively large due to the cost of solving trust-region or regularized cubic subproblems. Recently, a new line of research has emerged that focuses on the overall computational cost to achieve an SOSP. These results build on the idea of escaping from strict saddle points either by perturbing the iterates with properly chosen injected noise \citep{DBLP:conf/colt/GeHJY15, DBLP:conf/icml/Jin0NKJ17, DBLP:journals/corr/abs-1711-10456}, or by updating the iterates using the eigenvector corresponding to the smallest eigenvalue of the Hessian \citep{DBLP:journals/corr/CarmonDHS16, DBLP:journals/corr/abs-1708-08694, xu2017first,royer2017complexity,DBLP:conf/stoc/AgarwalZBHM17,DBLP:conf/aistats/ReddiZSPBSS18,paternain2017second}. \textbf{Constrained case}. Asymptotic convergence to first-order and second-order stationary points for the constrained optimization problem in~\eqref{eq:main_problem} has been studied in the numerical optimization community \citep{burke1990convergence,conn1993global,facchinei1998convergence,di2005convergence}. Recently, finite-time analysis of convergence to an FOSP of the generic smooth constrained problem in~\eqref{eq:main_problem} has received a lot of attention. In particular, \citep{lacoste2016convergence} shows that the sequence of iterates generated by the Frank-Wolfe update converges to an $\eps$-FOSP after $\mathcal{O}(\eps^{-2})$ iterations.
The authors of \citep{ghadimi2016mini} consider the norm of the gradient mapping as a measure of non-stationarity and show that the projected gradient method has the same complexity of $\mathcal{O}(\eps^{-2})$. A similar result for the accelerated projected gradient method is shown in \citep{ghadimi2016accelerated}. Adaptive cubic regularization methods in \citep{cartis2012adaptive,cartis2013evaluation,cartis2015evaluation} improve these results using second-order information and obtain an $\eps$-FOSP of Problem~\eqref{eq:main_problem} after at most $\mathcal{O}(\eps^{-3/2})$ iterations. Finite-time analysis of convergence to an SOSP has also been studied for linear constraints. In particular, \citep{bian2015complexity} studies convergence to an SOSP of \eqref{eq:main_problem} when the set $\ccalC$ is a linear constraint of the form $\bbx\geq 0$ and proposes a trust-region interior-point method that obtains an $(\eps,\sqrt{\eps})$-SOSP in $\mathcal{O}(\eps^{-3/2})$ iterations. The work in \citep{haeser2017optimality} extends these results to the case in which the objective function is potentially not differentiable or not twice differentiable on the boundary of the feasible region. The authors in \citep{cartis2017second} focus on the general convex constraint case and introduce a trust-region algorithm that requires $\mathcal{O}(\epsilon^{-3})$ iterations to obtain an SOSP; however, each iteration of their proposed method requires access to the exact solution of a nonconvex quadratic program (finding its global minimum), which, in general, could be computationally prohibitive. To the best of our knowledge, our paper provides the first finite-time overall computational complexity analysis for reaching an SOSP of Problem~\eqref{eq:main_problem}.
\section{Preliminaries and Definitions} In the case of \textit{unconstrained} minimization of the objective function $f$, the first-order and second-order necessary conditions for a point $\bbx^*$ to be a local minimum of $f$ are $\nabla f(\bbx^*)=\bb0_d$ and $\nabla^2 f(\bbx^*)\succeq \bb0_{d\times d}$, respectively. If a point satisfies these conditions, it is called a \textit{second-order stationary point} (SOSP). If the second condition becomes strict, i.e., $\nabla^2 f(\bbx^*)\succ \bb0$, then we recover the sufficient conditions for a local minimum. However, to derive finite-time convergence bounds for achieving an SOSP, these conditions should be relaxed. In other words, the goal should be to find an \textit{approximate} SOSP, where the approximation error can be made arbitrarily small. For the case of unconstrained minimization, a point $\bbx^*$ is called an $(\eps,\gamma)$-second-order stationary point if it satisfies $ \|\nabla f(\bbx^*)\| \leq \eps$ and $\nabla^2 f(\bbx^*)\succeq -\gamma \bbI_d,$ where $\eps$ and $\gamma$ are arbitrary positive constants. To study the constrained setting, we first state the necessary conditions for a local minimum of Problem \eqref{eq:main_problem}. \begin{proposition}[\citep{bertsekas1999nonlinear}]\label{prop:nec_conds} If $\bbx^*\in \ccalC$ is a local minimum of the function $f$ over the convex set $\ccalC$, then \begin{align} &\nabla f(\bbx^*)^\top(\bbx-\bbx^*)\geq 0 ,\quad \forall \ \bbx \in \ccalC, \label{eq:nec_cond_first_order}\\ & (\bbx-\bbx^*)^\top\nabla^2 f(\bbx^*)(\bbx-\bbx^*) \geq 0 ,\quad \forall \ \bbx \in \ccalC\ \ \st\ \nabla f(\bbx^*)^\top(\bbx-\bbx^*)=0.\label{eq:nec_cond_second_order} \end{align} \end{proposition} The conditions in \eqref{eq:nec_cond_first_order} and \eqref{eq:nec_cond_second_order} are the first-order and second-order necessary optimality conditions, respectively.
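To make the first-order condition in \eqref{eq:nec_cond_first_order} concrete, the following is a minimal numerical sketch (our own illustration, not part of the paper's analysis) for the toy problem $f(\bbx)=\|\bbx-\bbc\|^2$ over the unit box, where the inner linear program has a closed-form solution:

```python
import numpy as np

def min_linear_over_box(g, lo, hi):
    """Closed-form solution of min_{lo <= x <= hi} g.x over a box."""
    return np.sum(np.where(g >= 0, g * lo, g * hi))

def first_order_gap(grad, x, lo, hi):
    """min_{x' in C} grad.(x' - x); nonnegative iff x satisfies the
    first-order necessary condition (with eps = 0)."""
    return min_linear_over_box(grad, lo, hi) - grad @ x

# toy problem: f(x) = ||x - c||^2 over the unit box C = [0,1]^3
c = np.array([-0.5, 0.3, 1.7])
x_star = np.clip(c, 0.0, 1.0)   # the true constrained minimizer
g = 2.0 * (x_star - c)          # gradient of f at x_star
gap = first_order_gap(g, x_star, np.zeros(3), np.ones(3))
print(gap)  # 0.0: x_star satisfies the first-order condition
```

At a non-stationary feasible point the same gap is strictly negative, which is exactly the quantity Algorithm 1 later compares against $-\eps$.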
By making the inequality in \eqref{eq:nec_cond_second_order} strict, i.e., $(\bbx-\bbx^*)^\top\nabla^2 f(\bbx^*)(\bbx-\bbx^*)>0$, we recover the sufficient conditions for a local minimum when $\ccalC$ is a polyhedral set \citep{bertsekas1999nonlinear}. Further, if the inequality in~\eqref{eq:nec_cond_second_order} is replaced by $(\bbx-\bbx^*)^\top\nabla^2 f(\bbx^*)(\bbx-\bbx^*) \geq \delta \|\bbx-\bbx^*\|^2$ for some $\delta>0$, we obtain the sufficient conditions for a local minimum of Problem \eqref{eq:main_problem} for any convex constraint $\ccalC$; see \citep{bertsekas1999nonlinear}. If a point $\bbx^*$ satisfies the conditions in~\eqref{eq:nec_cond_first_order} and \eqref{eq:nec_cond_second_order}, it is an SOSP of Problem~\eqref{eq:main_problem}. As in the unconstrained setting, the first-order and second-order optimality conditions may not be satisfied exactly in a finite number of iterations, and we therefore focus on finding an approximate SOSP. \begin{definition}\label{def_sosp_constrained} Recall the twice continuously differentiable function $f:\reals^d\to \reals$ and the closed convex set $\ccalC\subset \reals^d$ introduced in Problem~\eqref{eq:main_problem}. We call $\bbx^*\in \ccalC$ an $(\eps,\gamma)$-second-order stationary point of Problem~\eqref{eq:main_problem} if the following conditions are satisfied. \begin{align} &\nabla f(\bbx^*)^\top(\bbx-\bbx^*)\geq -\eps ,\quad \forall \ \bbx \in \ccalC,\\ & (\bbx-\bbx^*)^\top\nabla^2 f(\bbx^*)(\bbx-\bbx^*) \geq -\gamma ,\quad \forall \ \bbx \in \ccalC\ \ \st\ \nabla f(\bbx^*)^\top(\bbx-\bbx^*)=0. \end{align} If a point only satisfies the first condition, we call it an $\eps$-first-order stationary point. \end{definition} We further formally define strict saddle points for the constrained optimization problem in~\eqref{eq:main_problem}.
\begin{definition}\label{strict_saddle_constrained} A point $\bbx^*\in \ccalC$ is a $\delta$-strict saddle point of Problem~\eqref{eq:main_problem} if (i) for all $\bbx \in \ccalC$ the condition $\nabla f(\bbx^*)^\top(\bbx-\bbx^*)\geq 0$ holds, and (ii) there exists a point $\bby$ such that \begin{equation} (\bby-\bbx^*)^\top\nabla^2 f(\bbx^*)(\bby-\bbx^*) < -\delta ,\quad \ \bby \in \ccalC\ \text{and}\ \nabla f(\bbx^*)^\top(\bby-\bbx^*)=0. \end{equation} \end{definition} According to Definitions \ref{def_sosp_constrained} and \ref{strict_saddle_constrained}, if all saddle points are $\delta$-strict and $\gamma\leq \delta$, any $(\eps,\gamma)$-SOSP of Problem~\eqref{eq:main_problem} is an approximate local minimum. We emphasize that in this paper we do not assume that all saddle points are strict in order to prove convergence to an SOSP. We formally defined strict saddle points only to clarify that if all saddle points are strict, then convergence to an approximate SOSP is equivalent to convergence to an approximate local minimum. Our goal throughout the rest of the paper is to design an algorithm which finds an $(\eps,\gamma)$-SOSP of Problem~\eqref{eq:main_problem}. To do so, we first assume the following conditions are satisfied. \begin{assumption}\label{assumption:lip_grad} The gradients $\nabla f$ are $L$-Lipschitz continuous over the set $\ccalC$, i.e., for any $\bbx,\tbx\in \ccalC$, \begin{align} \| \nabla f(\bbx)-\nabla f(\tbx)\| \leq L\|\bbx-\tbx\|. \end{align} \end{assumption} \begin{assumption}\label{assumption:lip_hessian} The Hessians $\nabla^2 f$ are $M$-Lipschitz continuous over the set $\ccalC$, i.e., for any $\bbx,\tbx\in \ccalC$, \begin{align} \| \nabla^2 f(\bbx)-\nabla^2 f(\tbx)\| \leq M\|\bbx-\tbx\|. \end{align} \end{assumption} \begin{assumption}\label{assumption:bounded_diameter} The diameter of the compact convex set $\ccalC$ is upper bounded by a constant $D$, i.e., \begin{align} \max_{\bbx,\tbx \in \ccalC}\{\|\bbx-\tbx\|\}\leq D.
\end{align} \end{assumption} \section{Main Result} In this section, we introduce a generic framework to reach an $(\eps,\gamma)$-SOSP of the nonconvex function $f$ over the convex set $\ccalC$, when $\ccalC$ has a specific structure as we describe below. In particular, we focus on the case when we can solve a quadratic program (QP) of the form \begin{align}\label{eq:quad_problem_template} & \text{minimize}\quad \bbx^\top\bbA\bbx+\bbb^\top\bbx +c \qquad \text{subject to}\quad \bbx\in \ccalC, \end{align} up to a constant factor $\rho\in(0,1]$ in a finite number of arithmetic operations. Here, $\bbA\in\reals^{d\times d}$ is a symmetric matrix, $\bbb\in \reals^d$ is a vector, and $c\in \reals$ is a scalar. To clarify the notion of solving a problem up to a constant factor $\rho$, consider $\bbx^*$ as a global minimizer of \eqref{eq:quad_problem_template}. Then, we say Problem \eqref{eq:quad_problem_template} is solved up to a constant factor $ \rho\in (0,1]$ if we have found a feasible solution $\tbx\in \ccalC$ such that \begin{equation}\label{rho_opt_cond} {\bbx^*}^\top\bbA\bbx^*+\bbb^\top\bbx^*+c\ \leq\ \tbx^\top\bbA\tbx+\bbb^\top\tbx +c \ \leq\ \rho({\bbx^*}^\top\bbA\bbx^*+\bbb^\top\bbx^*+c). \end{equation} Note that here w.l.o.g.\ we have assumed that the optimal objective function value ${\bbx^*}^\top\bbA\bbx^*+\bbb^\top\bbx^*+c$ is non-positive. A larger constant $\rho$ corresponds to a more accurate approximate solution. If $\tbx$ satisfies the condition in \eqref{rho_opt_cond}, we call it a $\rho$-approximate solution of Problem~\eqref{eq:quad_problem_template}. Indeed, if $\rho=1$ then $\tbx$ is a global minimizer of Problem~\eqref{eq:quad_problem_template}.
In Algorithm \ref{alg_generic_framework}, we introduce a generic framework that achieves an $(\eps,\gamma)$-SOSP of Problem~\eqref{eq:main_problem} whose running time is polynomial in $\eps^{-1}$, $\gamma^{-1}$, $\rho^{-1}$, and $d$, when we can find a $\rho$-approximate solution of a quadratic problem of the form \eqref{eq:quad_problem_template} in time polynomial in $d$. The proposed scheme consists of two major stages. In the first stage, as mentioned in Steps 2-4, we use a first-order update, i.e., a gradient-based update, to find an $\eps$-FOSP; that is, we update the decision variable $\bbx$ according to a first-order update until we reach a point $\bbx_t$ that satisfies the condition \begin{equation}\label{eq:first_order_stat_cond} \nabla f(\bbx_t)^\top(\bbx-\bbx_t)\geq -\eps, \quad \forall \ \bbx\in \ccalC . \end{equation} In Section \ref{sec:first_order_section}, we study in detail the projected gradient descent and conditional gradient algorithms for the first-order phase of the proposed framework. Interestingly, both of these algorithms require at most $\mathcal{O}(\eps^{-2})$ iterations to reach an $\eps$-first-order stationary point. The second stage of the proposed scheme uses second-order information of the objective function~$f$ to escape from the stationary point if it is a local maximum or a strict saddle point. To be more precise, if we assume that $\bbx_t$ is a feasible point satisfying the condition \eqref{eq:first_order_stat_cond}, we then aim to find a descent direction by solving the following quadratic program \begin{align}\label{eq:quad_problem} & \text{minimize}\quad q(\bbu) := (\bbu-\bbx_t)^\top\nabla^2 f(\bbx_t)(\bbu-\bbx_t) \nonumber\\ & \text{subject to}\quad \bbu\in \ccalC, \quad \nabla f(\bbx_t)^\top(\bbu-\bbx_t)=0, \end{align} up to a constant factor $\rho$ where $\rho\in(0,1]$.
To be more specific, if we define $q(\bbu^*)$ as the optimal objective function value of the program in \eqref{eq:quad_problem}, we focus on the case in which we can obtain a feasible point $\bbu_t$ which is a $\rho$-approximate solution of Problem~\eqref{eq:quad_problem}, i.e., $\bbu_t\in \ccalC$ and \begin{equation}\label{eq:rho_approx_solution} q(\bbu^*)\ \leq\ q(\bbu_t)\ \leq \ \rho \ q(\bbu^*). \end{equation} The problem formulation in \eqref{eq:quad_problem} can be transformed into the quadratic program in \eqref{eq:quad_problem_template}; see Section~\ref{sec:second_stage} for more details. Note that the constant $\rho$ is independent of $\eps$, $\gamma$, and $d$ and only depends on the structure of the convex set $\ccalC$. For instance, if $\ccalC$ is defined in terms of $m$ quadratic constraints, one can find a $\rho=m^{-2}$ approximate solution of \eqref{eq:quad_problem} after at most $\mathcal{\tilde{O}}(md^3)$ arithmetic operations (Section~\ref{sec:second_stage}). After computing a feasible point $\bbu_t$ satisfying the condition in \eqref{eq:rho_approx_solution}, we check the quadratic objective function value at the point $\bbu_t$, and if the inequality $q(\bbu_t)<-\rho\gamma$ holds, we follow the update \begin{equation}\label{eq:second_stage_update} \bbx_{t+1} = (1-\sigma) \bbx_{t} + \sigma \bbu_t, \end{equation} where $\sigma$ is a positive stepsize. Otherwise, we stop the process and return $\bbx_t$ as an $(\eps,\gamma)$-second-order stationary point of Problem~\eqref{eq:main_problem}. To check this claim, note that Algorithm \ref{alg_generic_framework} stops if we reach a point $\bbx_t$ that satisfies the first-order stationarity condition $\nabla f(\bbx_t)^\top(\bbx-\bbx_t) \geq -\eps$, and the objective function value for the $\rho$-approximate solution of the quadratic subproblem is larger than $-\rho\gamma$, i.e., $q(\bbu_t)\geq -\rho \gamma $.
The second condition, together with the fact that $q(\bbu_t)$ satisfies \eqref{eq:rho_approx_solution}, implies that $q(\bbu^*)\geq -\gamma$. Therefore, for any $\bbx\in\ccalC$ with $\nabla f(\bbx_t)^\top(\bbx-\bbx_t) =0$, it holds that \begin{equation}\label{eq_SONOC} (\bbx-\bbx_t)^\top\nabla^2 f(\bbx_t)(\bbx-\bbx_t)\geq -\gamma. \end{equation} These two observations show that the outcome of the proposed framework in Algorithm \ref{alg_generic_framework} is an $(\eps,\gamma)$-SOSP of Problem~\eqref{eq:main_problem}. Now it remains to characterize the number of iterations that Algorithm~\ref{alg_generic_framework} needs to perform before reaching an $(\eps,\gamma)$-SOSP, which we formally state in the following theorem. \begin{algorithm}[t] \begin{algorithmic}[1] \caption{Generic framework for escaping saddles in constrained optimization}\label{alg_generic_framework} \small{\REQUIRE Stepsize $\sigma>0$. Initialize $\bbx_0\in\ccalC$ \FOR {$t=1,2,\ldots$} \IF{ $\bbx_t$ is not an $\eps$-first-order stationary point } \STATE Compute $\bbx_{t+1}$ using first-order information (Frank-Wolfe or projected gradient descent) \ELSE \STATE Find $ \bbu_t$: a $\rho$-approximate solution of \eqref{eq:quad_problem} \IF {$q(\bbu_t) < -\rho\gamma$ } \STATE Compute the updated variable $\bbx_{t+1} = (1-\sigma) \bbx_{t} + \sigma \bbu_t$;\vspace{-1mm} \ELSE \STATE Return $\bbx_t$ and stop. \ENDIF \ENDIF \ENDFOR} \end{algorithmic} \end{algorithm} \begin{theorem}\label{thm:main_thm} Consider the optimization problem in \eqref{eq:main_problem}. Suppose the conditions in Assumptions \ref{assumption:lip_grad}-\ref{assumption:bounded_diameter} are satisfied.
If in the first-order stage, i.e., Steps 2-4, we use the update of Frank-Wolfe or projected gradient descent, the generic framework proposed in Algorithm \ref{alg_generic_framework} finds an $(\eps,\gamma)$-second-order stationary point of Problem~\eqref{eq:main_problem} after at most $\mathcal{O}(\max\{\eps^{-2} , \rho^{-3}\gamma^{-3}\})$ iterations. \end{theorem} To prove the claim in Theorem~\ref{thm:main_thm}, we first review the first-order conditional gradient and projected gradient algorithms and show that if the current iterate is not a first-order stationary point, by following either of these updates the objective function value decreases by $\mathcal{O}(\eps^2)$ (Section~\ref{sec:first_order_section}). We then focus on the second stage of Algorithm~\ref{alg_generic_framework}, which corresponds to the case that the current iterate is an $\eps$-FOSP and we need to solve the quadratic program in \eqref{eq:quad_problem} approximately (Section~\ref{sec:second_stage}). In this case, we show that if the iterate is not an $(\eps,\gamma)$-SOSP, by following the update in \eqref{eq:second_stage_update} the objective function value decreases by at least $\mathcal{O}(\rho^3\gamma^3)$. Finally, by combining these two results it can be shown that Algorithm~\ref{alg_generic_framework} finds an $(\eps,\gamma)$-SOSP after at most $\mathcal{O}(\max\{\eps^{-2} , \rho^{-3}\gamma^{-3}\})$ iterations. \section{First-Order Step: Convergence to a First-Order Stationary Point}\label{sec:first_order_section} In this section, we study two different first-order methods for the first stage of Algorithm \ref{alg_generic_framework}. The results in this section can also be used independently for convergence to an FOSP of Problem~\eqref{eq:main_problem} satisfying \begin{equation}\label{eq:first_order_stationary_point_def} \nabla f(\bbx^*)^\top(\bbx-\bbx^*)\geq -\eps,\qquad \forall\ \bbx\in\ccalC, \end{equation} where $\eps>0$ is a positive constant.
Although for Algorithm~\ref{alg_generic_framework} we assume that $\ccalC$ has a specific structure as mentioned in \eqref{eq:quad_problem_template}, the results in this section hold for any closed and compact convex set~$\ccalC$. To keep our result as general as possible, in this section we study both conditional gradient and projection-based methods when they are used in the first stage of the proposed generic framework. \subsection{Conditional gradient update} The conditional gradient (Frank-Wolfe) update has two steps. We first solve the linear program \begin{equation}\label{eq:FW_first_step} \bbv_t=\argmax_{\bbv\in\ccalC} \{-\nabla f(\bbx_t)^\top\bbv\}. \end{equation} Then, we compute the updated variable $\bbx_{t+1}$ according to the update \begin{equation}\label{eq:FW_second_step} \bbx_{t+1}= (1-\eta)\bbx_t + \eta \bbv_t, \end{equation} where $\eta$ is a stepsize. In the following proposition, we show that if the current iterate is not an $\eps$-first-order stationary point, then by updating the variable according to \eqref{eq:FW_first_step}-\eqref{eq:FW_second_step} the objective function value decreases. The proof of the following proposition is adapted from \citep{lacoste2016convergence}. \begin{proposition}\label{FW_proposition} Consider the optimization problem in \eqref{eq:main_problem}. Suppose Assumptions~\ref{assumption:lip_grad} and \ref{assumption:bounded_diameter} hold. Set the stepsize in \eqref{eq:FW_second_step} to $\eta=\eps/(D^2L)$. Then, if the iterate $\bbx_t$ at step $t$ is not an $\eps$-first-order stationary point, the objective function value at the updated variable $\bbx_{t+1}$ satisfies the inequality \begin{equation} f(\bbx_{t+1}) \leq f(\bbx_t) - \frac{\eps^2}{2D^2L}. \end{equation} \end{proposition} The result in Proposition \ref{FW_proposition} shows that by following the update of the conditional gradient method the objective function value decreases by $\mathcal{O}(\eps^2)$ if an $\eps$-FOSP has not been achieved.
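The conditional gradient step \eqref{eq:FW_first_step}-\eqref{eq:FW_second_step} can be sketched on a toy instance (our own illustration: a box constraint, where the linear subproblem has a closed-form solution, with the fixed stepsize $\eta=\eps/(D^2L)$ from Proposition \ref{FW_proposition}):

```python
import numpy as np

def fw_step(x, grad, lo, hi, eta):
    """One conditional-gradient step on a box: v solves the LP
    min_{v in C} grad.v coordinatewise, then x+ = (1-eta) x + eta v."""
    v = np.where(grad >= 0, lo, hi)
    gap = grad @ (v - x)           # = min_{x' in C} grad.(x' - x) <= 0
    return (1 - eta) * x + eta * v, gap

# toy run: f(x) = ||x - c||^2 over [0,1]^2, so L = 2 and D = sqrt(2)
c, lo, hi = np.array([0.8, -0.3]), np.zeros(2), np.ones(2)
L, D, eps = 2.0, np.sqrt(2.0), 1e-3
eta = eps / (D**2 * L)             # stepsize from Proposition 1
x = np.array([0.1, 0.9])
for _ in range(100000):
    g = 2.0 * (x - c)
    x_new, gap = fw_step(x, g, lo, hi, eta)
    if gap >= -eps:                # eps-FOSP reached: stop
        break
    x = x_new
print(np.round(x, 2))  # close to the constrained minimizer [0.8, 0.0]
```

The stopping test reuses the LP value `gap`, matching the remark below that checking $\eps$-stationarity costs no extra linear program.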
\begin{remark} In step 3 of Algorithm~\ref{alg_generic_framework} we first check whether $\bbx_t$ is an $\eps$-FOSP. This can be done by evaluating \begin{equation}\label{stop_criteria} \min_{\bbx\in\ccalC} \{\nabla f(\bbx_t)^\top(\bbx-\bbx_t)\}=-\max_{\bbx\in\ccalC} \{-\nabla f(\bbx_t)^\top\bbx\} - \nabla f(\bbx_t)^\top\bbx_t \end{equation} and comparing the optimal value with $-\eps$. Note that the linear program in \eqref{stop_criteria} is the same as the one in \eqref{eq:FW_first_step}. Therefore, by checking the first-order optimality condition of $\bbx_t$, the variable $\bbv_t$ is already computed, and we need to solve only one linear program per iteration. \end{remark} \subsection{Projected gradient update} The projected gradient descent (PGD) update consists of two steps: (i) descending along the gradient direction and (ii) projecting the updated variable onto the convex constraint set. These two steps can be combined, and the update can be written explicitly as \vspace{1mm} \begin{equation}\label{eq:PGD_update} \bbx_{t+1}= \pi_{\ccalC}\{\bbx_t -\eta \nabla f(\bbx_t)\}, \end{equation} where $\pi_{\ccalC}(\cdot)$ is the Euclidean projection onto the convex set $\ccalC$ and $\eta$ is a positive stepsize. In the following proposition, we first show that by following the update of PGD the objective function value decreases by a constant until we reach an $\eps$-FOSP. Further, we show that the number of iterations required for PGD to reach an $\eps$-FOSP is $\mathcal{O}(\eps^{-2})$. \begin{proposition}\label{PGD_proposition} Consider Problem \eqref{eq:main_problem}. Suppose Assumptions~\ref{assumption:lip_grad} and \ref{assumption:bounded_diameter} are satisfied. Further, assume that the gradients $\nabla f(\bbx)$ are uniformly bounded by $K$ for all $\bbx\in\ccalC$.
If the stepsize of the projected gradient descent method defined in \eqref{eq:PGD_update} is set to $\eta=1/L$, then the objective function value decreases by \begin{equation}\label{pgd_dec} f(\bbx_{t+1}) \leq f(\bbx_t) -\frac{\eps^2L}{2(K+ LD)^2}. \end{equation} Moreover, the iterates reach an $\eps$-first-order stationary point satisfying \eqref{eq:first_order_stationary_point_def} after at most $\mathcal{O}(\eps^{-2})$ iterations. \end{proposition} Proposition \ref{PGD_proposition} shows that by following the update of PGD the function value decreases by $\mathcal{O}(\eps^2)$ until we reach an $\eps$-FOSP. It further shows that PGD obtains an $\eps$-FOSP satisfying \eqref{eq:first_order_stationary_point_def} after at most $\mathcal{O}(\eps^{-2})$ iterations. To the best of our knowledge, this result is also novel, since the only convergence guarantee for PGD in \citep{ghadimi2016mini} is in terms of the number of iterations to reach a point with a gradient mapping norm less than $\eps$, while our result characterizes the number of iterations to satisfy \eqref{eq:first_order_stationary_point_def}. \begin{remark} To use the PGD update in the first stage of Algorithm~\ref{alg_generic_framework} one needs to define a criterion to check if $\bbx_t$ is an $\eps$-FOSP or not. However, in PGD we do not solve the linear program $\min_{\bbx\in\ccalC} \{\nabla f(\bbx_t)^\top(\bbx-\bbx_t)\}$. This issue can be resolved by checking the condition $\|\bbx_t- \bbx_{t+1}\| \leq \eps/(K+ LD)$, which is sufficient for the condition in \eqref{eq:first_order_stationary_point_def}. In other words, if this condition holds, we stop and $\bbx_t$ is an $\eps$-FOSP; otherwise, the result in \eqref{pgd_dec} holds and the function value decreases. For more details, see the proof of Proposition \ref{PGD_proposition}.
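As a minimal sketch (assuming, for illustration only, that $\ccalC$ is the Euclidean unit ball so the projection has a closed form), the PGD update \eqref{eq:PGD_update} together with the stopping test $\|\bbx_t-\bbx_{t+1}\| \leq \eps/(K+LD)$ reads:

```python
import numpy as np

def pgd_step(x, grad, eta, project):
    """Projected gradient step: x_{t+1} = Pi_C(x_t - eta * grad)."""
    return project(x - eta * grad)

# Illustrative projection: C is the Euclidean unit ball (an assumption).
def ball_project(y):
    n = np.linalg.norm(y)
    return y / n if n > 1 else y

# Example: f(x) = 0.5 ||x - b||^2 on the unit ball, so L = 1, D = 2,
# and K = 3 bounds ||grad f|| = ||x - b|| over C.
b = np.array([2.0, 0.0])
L, D, K, eps = 1.0, 2.0, 3.0, 0.1
x = np.zeros(2)
for _ in range(1000):
    x_next = pgd_step(x, x - b, 1.0 / L, ball_project)
    if np.linalg.norm(x - x_next) <= eps / (K + L * D):  # sufficient eps-FOSP test
        break
    x = x_next
```

For this toy problem the iterates settle on the boundary point closest to $b$ and the sufficient condition triggers after a few steps.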
\end{remark} \section{Second-Order Step: Escape from Saddle Points}\label{sec:second_stage} In this section, we study the second stage of the framework in Algorithm~\ref{alg_generic_framework}, which corresponds to the case that the current iterate is an $\eps$-FOSP. Note that when we reach such a point, the goal is to find a feasible point $\bbu\in \ccalC$ in the tangent space $\nabla f (\bbx_t)^\top(\bbu-\bbx_t)=0$ that makes the inner product $(\bbu-\bbx_t)^\top \nabla^2 f(\bbx_t)(\bbu-\bbx_t)$ smaller than $-\gamma$. To achieve this goal we need to check the minimum value of this inner product over the constraints, i.e., we need to solve the quadratic program in \eqref{eq:quad_problem} up to a constant factor $\rho\in(0,1]$. In the following proposition, we show that the updated variable according to \eqref{eq:second_stage_update} decreases the objective function value if the condition $q(\bbu_t)< -\rho \gamma $ holds. \begin{proposition}\label{SOU_proposition} Consider the quadratic program in \eqref{eq:quad_problem}. Let $\bbu_t$ be a $\rho$-approximate solution for the quadratic subproblem in \eqref{eq:quad_problem}. Suppose that Assumptions \ref{assumption:lip_hessian} and \ref{assumption:bounded_diameter} hold. Further, set the stepsize $\sigma=\rho\gamma/(MD^3)$. If the quadratic objective function value $q$ evaluated at $\bbu_t$ satisfies the condition $q(\bbu_t)< -\rho \gamma $, then the updated variable according to \eqref{eq:second_stage_update} satisfies the inequality \begin{align} f(\bbx_{t+1}) \leq f(\bbx_t) -\frac{\rho^3\gamma^3}{3M^2D^6}. \end{align} \end{proposition} The only unanswered question is how to solve the quadratic subproblem in \eqref{eq:quad_problem} up to a constant factor $\rho\in(0,1]$.
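To make the second-order step concrete, the following toy sketch (our own illustration, not part of the analysis) applies the update \eqref{eq:second_stage_update} with $\sigma=\rho\gamma/(MD^3)$ to the saddle $f(x)=x_1^2-x_2^2$ on the unit ball, where the quadratic subproblem \eqref{eq:quad_problem} at $\bbx_t=\bb0$ can be solved by inspection.

```python
import numpy as np

def second_order_step(x, u, rho, gamma, M, D):
    """If the rho-approximate QP solution u_t has q(u_t) < -rho*gamma,
    move to x_{t+1} = (1 - sigma) x_t + sigma u_t with sigma = rho*gamma/(M*D^3)."""
    sigma = rho * gamma / (M * D**3)
    return (1 - sigma) * x + sigma * u

# Toy saddle: f(x) = x1^2 - x2^2 on the unit ball; x_t = 0 is an eps-FOSP.
f = lambda x: x[0]**2 - x[1]**2
H = np.diag([2.0, -2.0])                  # Hessian at x_t (constant here)
x_t = np.zeros(2)
# M = 1 is a valid (loose) Hessian-Lipschitz bound for a quadratic; D = 2.
rho, gamma, M, D = 1.0, 1.0, 1.0, 2.0

u_t = np.array([0.0, 1.0])                # exact minimizer of u^T H u over the ball
q = (u_t - x_t) @ H @ (u_t - x_t)         # q(u_t) = -2 < -rho*gamma, so we escape
x_next = second_order_step(x_t, u_t, rho, gamma, M, D)
decrease = f(x_t) - f(x_next)
```

On this example the decrease exceeds the guaranteed amount $\rho^3\gamma^3/(3M^2D^6)$ from the proposition above.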
For general $\ccalC$, the quadratic subproblem could be NP-hard \citep{murty1987some}; however, for some special choices of the convex constraint $\ccalC$, this quadratic program (QP) can be solved either exactly or approximately up to a constant factor. In the following subsection, we focus on the quadratic constraint case, but indeed there are other classes of constraints that satisfy our required condition. \subsection{Quadratic constraints case} In this subsection, we focus on the case where the constraint set $\ccalC$ is defined as the intersection of $m$ ellipsoids centered at the origin.\footnote{To simplify the constant factor approximation $\rho$ we assume ellipsoids are centered at the origin. If we drop this assumption then $\rho$ will depend on the maximum distance between the origin and the boundary of each of the ellipsoids, e.g., see equation (6) in \citep{tseng2003further}.} In particular, assume that the set $\ccalC$ is given by \begin{equation} \ccalC:= \{\bbx \in \reals^d \mid \bbx^\top\bbQ_i\bbx \leq 1, \ \ \forall\ i=1,\dots,m\}, \end{equation} where $\bbQ_i\in \mathbb{S}_{+}^d$. Under this assumption, the QP in \eqref{eq:quad_problem} can be written as \begin{align}\label{main_qcqp_problem} & \min_{\bbu} \quad (\bbu-\bbx_t)^\top \nabla^2 f(\bbx_t)(\bbu-\bbx_t)\nonumber\\ & \text{s.t.} \quad \bbu^\top\bbQ_i\bbu \leq 1, \quad \for\ i=1,\dots, m \ \quad \text{and}\ \ \nabla f (\bbx_t)^\top(\bbu-\bbx_t)=0. \end{align} Note that the equality constraint $\nabla f (\bbx_t)^\top(\bbu-\bbx_t)=0$ does not change the hardness of the problem and can be easily eliminated.
To do so, first define a new optimization variable $\bbz:= \bbu-\bbx_t$ to obtain \begin{align}\label{main_qcqp_problem_2} & \min_{\bbz} \quad \bbz^\top \nabla^2 f(\bbx_t)\bbz\nonumber\\ & \text{s.t.} \quad (\bbz+\bbx_t)^\top\bbQ_i(\bbz+\bbx_t) \leq 1,\quad \for\ i=1,\dots, m \ \quad\ \text{and}\ \ \nabla f (\bbx_t)^\top\bbz=0. \end{align} Then, find a basis for the tangent space $\nabla f(\bbx_t)^\top\bbz=0$. Indeed, using the Gram-Schmidt procedure, we can find an orthonormal basis for the space $\reals^d$ of the form $\{\bbv_1,\dots, \bbv_{d-1}, \frac{\nabla f(\bbx_t)}{\|\nabla f(\bbx_t)\|}\}$ at a cost of $\mathcal{O}(d^3)$ arithmetic operations. If we define $\bbA=[\bbv_1;\dots;\bbv_{d-1}]\in \reals^{d\times (d-1)}$ as the concatenation of the vectors $\{\bbv_1,\dots, \bbv_{d-1}\}$, then any vector $\bbz$ satisfying $\nabla f(\bbx_t)^\top\bbz=0$ can be written as $\bbz=\bbA\bby$ where $\bby\in \reals^{d-1}$. Hence, \eqref{main_qcqp_problem_2} is equivalent to \begin{align}\label{main_qcqp_problem_3} &\min_{\bby} \quad \bby^\top\bbA^\top \nabla^2 f(\bbx_t)\bbA\bby\nonumber\\ &\text{s.t.} \quad (\bbA\bby+\bbx_t)^\top\bbQ_i(\bbA\bby+\bbx_t) \leq 1,\quad \for\ i=1,\dots, m. \end{align} This procedure reduces the dimension of the problem from $d$ to $d-1$. It is not hard to check that the center of the ellipsoids in \eqref{main_qcqp_problem_3} is $-\bbA^\top \bbx_t$. By a simple change of variable $\bbA\hby:= \bbA\bby+\bbx_t$ we obtain \begin{align}\label{main_qcqp_problem_4} & \min_{\hby} \quad \hby^\top\bbA^\top \nabla^2 f(\bbx_t)\bbA \hby - 2 \bbx_t^\top \nabla^2 f(\bbx_t) \bbA \hby +\bbx_t^\top \nabla^2 f(\bbx_t)\bbx_t \nonumber\\ &\text{s.t.} \quad \hby^\top \bbA^\top\bbQ_i\bbA\hby \leq 1, \quad \for\ i=1,\dots, m. \end{align} Define the matrices $\tbQ_i:=\bbA^\top\bbQ_i\bbA$ and $\bbB_t:=\bbA^\top \nabla^2 f(\bbx_t)\bbA$, the vector $\bbs_t:=- 2\bbA^\top \nabla^2 f(\bbx_t)\bbx_t$, and the scalar $c_t:=\bbx_t^\top \nabla^2 f(\bbx_t)\bbx_t $.
Using these definitions the problem reduces to \begin{align}\label{main_qcqp_problem_5} & \min_{\hby} \quad q(\hby) := \hby^\top\bbB_t\hby +\bbs_t^\top \hby+c_t \qquad &\text{s.t.} \quad \hby^\top\tbQ_i\hby \leq 1, \quad \for\ i=1,\dots, m. \end{align} Note that the matrices $\tbQ_i\in \mathbb{S}_{+}^{d-1}$ are positive semidefinite, while the matrix $\bbB_t \in \mathbb{S}^{d-1}$ might be indefinite. Indeed, the optimal objective function value of the program in \eqref{main_qcqp_problem_5} is equal to the optimal objective function value of \eqref{main_qcqp_problem}. Further, note that if we find a $\rho$-approximate solution $\hby^*$ for \eqref{main_qcqp_problem_5}, we can recover a $\rho$-approximate solution $\bbu^*$ for \eqref{main_qcqp_problem} using the transformation $\bbu^* = \bbA\hby^*$. The program in \eqref{main_qcqp_problem_5} is a specific \textit{Quadratically Constrained Quadratic Program} (QCQP), where all the constraints are centered at $\bb0$. For the specific case of $m=1$, the duality gap of this problem is zero, and one can solve Problem \eqref{main_qcqp_problem_5} exactly by passing to the dual problem. In the following proposition, we focus on the general case of $m\geq1$ and explain how to find a $\rho$-approximate solution for \eqref{main_qcqp_problem_5}. \begin{proposition}\label{prop:quad_constraint} Consider Problem \eqref{main_qcqp_problem_5} and define $q_{min}$ as the minimum objective value of the problem.
Based on the result in \citep{fu1998approximation}, there exists a polynomial time method that obtains a point $\hby^*$ satisfying \begin{equation}\label{ye_bound} q(\hby^*) \leq \frac{1-\zeta}{m^2(1+\zeta)^2}\ q_{min} + \left(1-\frac{1-\zeta}{m^2(1+\zeta)^2}\right) \bbx_t^\top \nabla^2 f(\bbx_t)\bbx_t \end{equation} after at most $\mathcal{O}( d^3 ( m\log(1/\delta) + \log(1/\zeta) +\log d))$ arithmetic operations, where $\delta$ is the ratio of the radius of the largest inscribed sphere over that of the smallest circumscribed sphere of the feasible set. Further, based on \citep{tseng2003further}, using an SDP relaxation of \eqref{main_qcqp_problem_5} one can find a point $\hby^*$ such that \begin{equation}\label{tseng_bound} q(\hby^*) \leq \frac{1}{m}\ q_{min} + \left(1-\frac{1}{m}\right) \bbx_t^\top \nabla^2 f(\bbx_t)\bbx_t. \end{equation} \end{proposition} \begin{proof} {If we define the function $\tilde{q}$ as $\tilde{q}(\bbx):={q}(\bbx)-c_t$, using the approaches in \citep{fu1998approximation} and \citep{tseng2003further}, we can find a $\rho$-approximate solution for $\min_{\hby} \tilde{q}(\hby) $ subject to $\hby^\top\tbQ_i\hby \leq 1$ for $i=1,\dots, m$. In other words, we can find a point $\hby^*$ such that $\tilde{q}(\hby^*) \leq \rho \ \tilde{q}_{min}$, where $0<\rho<1$ and $\tilde{q}_{min}$ is the minimum objective function value of $\tilde{q}$ over the constraint set, which satisfies $\tilde{q}_{min} = q_{min}-c_t$. Replacing $\tilde{q}(\hby^*) $ and $\tilde{q}_{min}$ by their definitions and regrouping the terms imply that $\hby^*$ satisfies the condition $q(\hby^*) \leq \rho q_{min} + (1-\rho)c_t$.
Replacing $\rho$ by $\frac{1-\zeta}{m^2(1+\zeta)^2}$ (which is the constant factor approximation shown in \citep{fu1998approximation}) leads to the claim in \eqref{ye_bound}, and substituting $\rho$ by $1/m$ (which is the approximation bound in \citep{tseng2003further}) implies the result in \eqref{tseng_bound}.} \end{proof} The result in Proposition \ref{prop:quad_constraint} indicates that if $\bbx_t^\top \nabla^2 f(\bbx_t)\bbx_t$ is non-positive, then one can find a $\rho$-approximate solution for Problem~\eqref{main_qcqp_problem_5} and consequently Problem~\eqref{main_qcqp_problem}. This condition is satisfied if we assume that $\max_{\bbx\in \ccalC} \bbx^\top \nabla^2 f(\bbx)\bbx\leq 0$. For instance, for a concave minimization problem over the convex set $\ccalC$ this condition is satisfied. In fact, it can be shown that our analysis still stands even if $\max_{\bbx\in \ccalC} \bbx^\top \nabla^2 f(\bbx)\bbx$ is at most $\mathcal{O}( \gamma) $. Note that this condition is significantly weaker than requiring the function to be concave when restricted to the feasible set. The condition essentially implies that the quadratic term in the Taylor expansion of the function evaluated at the origin should be negative (or not too positive). \begin{corollary} Consider a convex set $\ccalC$ which is defined as the intersection of $m\geq 1$ ellipsoids centered at the origin. Further, assume that the objective function Hessian $\nabla^2 f$ satisfies the condition $\max_{\bbx\in \ccalC} \bbx^\top \nabla^2 f(\bbx)\bbx\leq 0$. Then, for $\rho=1/m$ and $\rho=1/m^2$, it is possible to find a $\rho$-approximate solution of Problem~\eqref{eq:quad_problem} in time polynomial in $m$ and $d$. 
\end{corollary} By using the approach in \citep{fu1998approximation}, we can solve the QCQP in \eqref{main_qcqp_problem_4} with the approximation factor $\rho\approx 1/m^2$ for $m\geq 1$ at the overall complexity of $\mathcal{\tilde{O}}(md^3)$ when the constraint $\ccalC$ is defined as $m$ convex quadratic constraints. As the total number of calls to the second-order stage is at most $\mathcal{{O}}(\rho^{-3}\gamma^{-3})= \mathcal{{O}}(m^6\gamma^{-3})$, we obtain that the total number of arithmetic operations for the second-order stage is at most $\mathcal{\tilde{O}}(m^7d^3\gamma^{-3})$. The constant factor can be improved to $1/m$ if we solve the SDP relaxation problem suggested in \citep{tseng2003further}. \section{Stochastic Extension} In this section, we focus on stochastic constrained minimization problems. Consider the optimization problem in \eqref{eq:main_problem} when the objective function $f$ is defined as an expectation of a set of stochastic functions $F:\reals^{d}\times\reals^{r}\to \reals $ with inputs $\bbx\in \reals^d$ and $\bbTheta\in \reals^r$, where $\bbTheta$ is a random variable with probability distribution $\ccalP$. To be more precise, we consider the optimization problem \begin{equation}\label{eq:stoc_main_problem} \text{minimize}\ f(\bbx):=\E{F(\bbx,\bbTheta)}, \qquad \text{subject to}\ \bbx\in \ccalC. \end{equation} Our goal is to find a point which satisfies the necessary optimality conditions with high probability. Consider the vector $\bbd_{t}= ({1}/{b_g})\sum_{i=1}^{b_g} \nabla F(\bbx_t,\bbtheta_{i})$ and matrix $\bbH_{t}= ({1}/{b_H})\sum_{i=1}^{b_H} \nabla^2 F(\bbx_t,\bbtheta_{i})$ as stochastic approximations of the gradient $\nabla f(\bbx_t)$ and Hessian $\nabla^2 f(\bbx_t)$, respectively. Here $b_g$ and $b_H$ are the gradient and Hessian batch sizes, respectively, and the vectors $\bbtheta_i$ are the realizations of the random variable $\bbTheta$. 
In Algorithm~\ref{alg_generic_framework_stocastic}, we present the stochastic variant of our proposed scheme for finding an $(\eps,\gamma)$-SOSP of Problem~\eqref{eq:stoc_main_problem}. Algorithm~\ref{alg_generic_framework_stocastic} differs from Algorithm~\ref{alg_generic_framework} in using the stochastic gradients $\bbd_t$ and Hessians $\bbH_t$ in lieu of the exact gradients $\nabla f(\bbx_t)$ and Hessians $\nabla^2 f(\bbx_t)$. The second major difference is the inequality constraint in step 6. Here, instead of using the constraint $ \bbd_t^\top(\bbu-\bbx_t)=0$ we need to use $ \bbd_t^\top(\bbu-\bbx_t)\leq r$, where $r>0$ is a properly chosen constant. This modification is needed to ensure that if a point satisfies this constraint, then with high probability it also approximately satisfies the constraint $\nabla f(\bbx_t)^\top(\bbu-\bbx_t)=0$. {This modification implies that we need to handle a linear inequality constraint instead of the linear equality constraint, which is computationally manageable for some constraints including the case that $\ccalC$ is a single ball constraint \citep{jeyakumar2014trust}.} To prove our main result we assume that the following conditions also hold. \begin{algorithm}[t] \begin{algorithmic}[1] \caption{}\label{alg_generic_framework_stocastic} \small{\REQUIRE Stepsizes $\eta, \sigma>0$. Initialize $\bbx_0\in\ccalC$ \FOR {$t=1,2,\ldots$} \STATE Compute $\bbv_t=\argmax_{\bbv\in\ccalC} \{-\bbd_t^\top\bbv\}$ \IF{ $\bbd_t^{\top}(\bbv_t-\bbx_t)< -\eps/2$ } \STATE Compute $\bbx_{t+1}= (1-\eta)\bbx_t +\eta \bbv_t $ \ELSE \STATE Find $ \bbu_t$: a $\rho$-approximate solution of\\ $ \quad \min_{\bbu} \quad (\bbu-\bbx_t)^\top \bbH_t(\bbu-\bbx_t)\qquad \text{s.t.} \ \ \bbu\in\ccalC,\ \bbd_t^\top(\bbu-\bbx_t)\leq r.$ \IF {$q(\bbu_t) < -\rho\gamma/2$ } \STATE Compute the updated variable $\bbx_{t+1} = (1-\sigma) \bbx_{t} + \sigma \bbu_t$; \ELSE \STATE Return $\bbx_t$ and stop.
\ENDIF \ENDIF \ENDFOR} \end{algorithmic}\end{algorithm} \begin{assumption}\label{assumption:bounded_variance} The variances of the stochastic gradients and Hessians are uniformly bounded by constants $\nu^2$ and $\xi^2$, respectively, i.e., for any $\bbx\in \ccalC$ we can write \begin{align}\label{eq:bound_on_var} \E{\| \nabla F(\bbx,\bbtheta) - \nabla f(\bbx) \|^2} \leq \nu^2, \qquad \E{\| \nabla^2 F(\bbx,\bbtheta)-\nabla^2 f(\bbx)\|^2} \leq \xi^2. \end{align} \end{assumption} \begin{theorem}\label{thm:main_thm_stochastic} Consider the optimization problem in \eqref{eq:stoc_main_problem}. Suppose the conditions in Assumptions \ref{assumption:lip_grad}-\ref{assumption:bounded_variance} are satisfied. If the batch sizes are $b_g=\mathcal{O}(\max\{\rho^{-4}\gamma^{-4}, \eps^{-2}\})$ and $b_H=\mathcal{O}(\rho^{-2}\gamma^{-2})$ and we set the parameter $r=\mathcal{O}(\rho^2\gamma^2)$, then the outcome of the proposed framework outlined in Algorithm~\ref{alg_generic_framework_stocastic} is an $(\eps,\gamma)$-second-order stationary point of Problem~\eqref{eq:stoc_main_problem} with high probability. Further, the total number of iterations to reach such a point is at most $\mathcal{O}(\max\{\eps^{-2}, \rho^{-3}\gamma^{-3}\})$ with high probability. \end{theorem} The result in Theorem~\ref{thm:main_thm_stochastic} indicates that the total number of iterations to reach an $(\eps,\gamma)$-SOSP is at most $\mathcal{O}(\max\{\eps^{-2}, \rho^{-3}\gamma^{-3}\})$. As each iteration requires at most $\mathcal{O}(\max\{\rho^{-4}\gamma^{-4}, \eps^{-2}\})$ stochastic gradient and $\mathcal{O}(\rho^{-2}\gamma^{-2})$ stochastic Hessian evaluations, the total number of stochastic gradient and Hessian computations to reach an $(\eps,\gamma)$-SOSP is $\mathcal{O}(\max\{\eps^{-2}\rho^{-4}\gamma^{-4}, \eps^{-4}, \rho^{-7}\gamma^{-7}\})$ and $\mathcal{O}(\max\{\eps^{-2}\rho^{-3}\gamma^{-3},\rho^{-5}\gamma^{-5}\})$, respectively.
\section{Appendix} \subsection{Proof of Proposition \ref{prop:nec_conds}} The claim in \eqref{eq:nec_cond_first_order} follows from Proposition 2.1.2 in \citep{bertsekas1999nonlinear}. The proof for the claim in \eqref{eq:nec_cond_second_order} is similar to the proof of Proposition 2.1.2 in \citep{bertsekas1999nonlinear}, and we include it for completeness. We prove the claim in \eqref{eq:nec_cond_second_order} by contradiction. Suppose that $(\bbx-\bbx^*)^\top\nabla^2 f(\bbx^*)(\bbx-\bbx^*) < 0$ for some $\bbx\in \ccalC$ satisfying $\nabla f(\bbx^*)^\top(\bbx-\bbx^*)= 0$. By Taylor's theorem with the mean-value form of the remainder, for any $\eps>0$ there exists an $\alpha\in[0,1]$ such that \begin{align} &f(\bbx^*+\eps(\bbx-\bbx^*))\nonumber\\ & = f(\bbx^*) + \eps \nabla f(\bbx^*)^\top(\bbx-\bbx^*) + \frac{\eps^2}{2} (\bbx-\bbx^*)^\top\nabla^2 f(\bbx^*+\alpha\eps(\bbx-\bbx^*))(\bbx-\bbx^*). \end{align} Use the relation $\nabla f(\bbx^*)^\top(\bbx-\bbx^*)= 0$ to simplify the right hand side to \begin{equation}\label{eq:1} f(\bbx^*+\eps(\bbx-\bbx^*)) =f(\bbx^*)+ \frac{\eps^2}{2} (\bbx-\bbx^*)^\top\nabla^2 f(\bbx^*+\alpha\eps(\bbx-\bbx^*))(\bbx-\bbx^*). \end{equation} Note that since $(\bbx-\bbx^*)^\top\nabla^2 f(\bbx^*)(\bbx-\bbx^*) < 0$ and the Hessian is continuous, we have $(\bbx-\bbx^*)^\top\nabla^2 f(\bbx^*+\alpha\eps(\bbx-\bbx^*))(\bbx-\bbx^*)<0$ for all sufficiently small $\eps>0$. This observation and the expression in \eqref{eq:1} imply that for sufficiently small $\eps$ we have $f(\bbx^*+\eps(\bbx-\bbx^*))<f(\bbx^*)$. Note that the point $\bbx^*+\eps(\bbx-\bbx^*)$ for all $\eps\in [0,1]$ belongs to the set $\ccalC$ and satisfies $\nabla f(\bbx^*)^\top((\bbx^*+\eps(\bbx-\bbx^*))-\bbx^*)= 0$. Therefore, we obtain a contradiction with the local optimality of $\bbx^*$.
\subsection{Proof of Proposition \ref{FW_proposition}} First consider the definition $G(\bbx_t)= \max_{\bbx \in \ccalC}\{ -\nabla f(\bbx_t)^\top(\bbx-\bbx_t) \}$, which is known as the Frank-Wolfe gap \citep{lacoste2016convergence}. This quantity measures how close the point $\bbx_t$ is to being a first-order stationary point. If $G(\bbx_t)\leq\eps$, then $\bbx_t$ is an $\eps$-first-order stationary point. Assume that $G(\bbx_t)>\eps$. Then, based on the Lipschitz continuity of gradients and the definition of $G(\bbx_t)$ we can write \begin{align} f(\bbx_{t+1}) &\leq f(\bbx_t) +\nabla f(\bbx_t)^\top(\bbx_{t+1}-\bbx_{t}) +\frac{L}{2}\|\bbx_{t+1}-\bbx_{t}\|^2\nonumber\\ &= f(\bbx_t) +\eta \nabla f(\bbx_t)^\top(\bbv_t-\bbx_{t}) +\frac{L\eta^2}{2}\|\bbv_{t}-\bbx_{t}\|^2\nonumber\\ &\leq f(\bbx_t) -\eta G(\bbx_t) +\frac{\eta^2D^2L}{2}, \end{align} where the last inequality follows from $\|\bbv_{t}-\bbx_{t}\|\leq D$. Replacing the stepsize $\eta$ by its value $\eps/(D^2L)$ and $G(\bbx_t)$ by its lower bound $\eps$ leads to \begin{align} f(\bbx_{t+1}) &\leq f(\bbx_t) -\frac{\eps^2}{2D^2L} . \end{align} This result implies that if the current point $\bbx_t$ is not an $\eps$-first order stationary point, by following the update of the Frank-Wolfe algorithm the objective function value decreases by ${\eps^2}/(2D^2L)$. Therefore, after at most $2D^2L(f(\bbx_0)-f(\bbx^*))/\eps^2 $ iterations we either reach the global minimum or one of the iterates $\bbx_t$ satisfies $G(\bbx_t)\leq\eps$, which implies that \begin{equation} \nabla f(\bbx_t)^\top(\bbx-\bbx_t)\geq -\eps,\qquad \forall\ \bbx\in\ccalC, \end{equation} and the claim in Proposition \ref{FW_proposition} follows. \subsection{Proof of Proposition \ref{PGD_proposition}} First note that, by the projection property, we have \begin{equation} (\bbx_t -\eta \nabla f(\bbx_t) - \bbx_{t+1})^\top(\bbx- \bbx_{t+1})\leq 0, \qquad \forall\ \bbx\in \ccalC.
\end{equation} Therefore, by setting $\bbx=\bbx_t$ we obtain that \begin{equation} \eta \nabla f(\bbx_t) ^\top(\bbx_{t+1}-\bbx_t)\leq -\|\bbx_t- \bbx_{t+1}\|^2. \end{equation} Hence, we can replace the inner product $\nabla f(\bbx_t) ^\top(\bbx_{t+1}-\bbx_t)$ by its upper bound $-\|\bbx_t- \bbx_{t+1}\|^2/\eta$ to obtain \begin{align} f(\bbx_{t+1}) &\leq f(\bbx_t) +\nabla f(\bbx_t)^\top(\bbx_{t+1}-\bbx_{t}) +\frac{L}{2}\|\bbx_{t+1}-\bbx_{t}\|^2\nonumber\\ &\leq f(\bbx_t) -\frac{\|\bbx_t- \bbx_{t+1}\|^2}{\eta}+\frac{L}{2}\|\bbx_{t+1}-\bbx_{t}\|^2\nonumber\\ &= f(\bbx_t) -\frac{L}{2}\|\bbx_{t+1}-\bbx_{t}\|^2, \end{align} where the equality follows by setting $\eta=1/L$. Indeed, if $\bbx_{t+1}=\bbx_t$ then we are at a first-order stationary point; however, we need a finite-time analysis. To do so, note that for any $\bbx\in\ccalC$ we have \begin{equation} (\bbx_t -\eta \nabla f(\bbx_t) - \bbx_{t+1})^\top(\bbx- \bbx_{t+1})\leq 0. \end{equation} Therefore, for any $\bbx\in\ccalC$ it holds that \begin{equation} \nabla f(\bbx_t) ^\top(\bbx- \bbx_{t+1})\geq L (\bbx_t- \bbx_{t+1})^\top(\bbx- \bbx_{t+1}), \end{equation} which implies that \begin{align} \nabla f(\bbx_t) ^\top(\bbx- \bbx_{t}) &\geq \nabla f(\bbx_t) ^\top( \bbx_{t+1}-\bbx_t) + L (\bbx_t- \bbx_{t+1})^\top(\bbx- \bbx_{t+1})\nonumber\\ &\geq -K \|\bbx_{t+1}-\bbx_t\| - LD \|\bbx_t- \bbx_{t+1}\|\nonumber\\ &\geq -(K+ LD) \|\bbx_t- \bbx_{t+1}\|, \end{align} where $K$ is an upper bound on the norm of the gradient over the convex set $\ccalC$. Therefore, we can write \begin{align} \min_{\bbx\in\ccalC} \nabla f(\bbx_t) ^\top(\bbx- \bbx_{t}) \geq -(K+ LD) \|\bbx_t- \bbx_{t+1}\|. \end{align} Combining these results, we conclude that at each iteration we should check whether the norm $\|\bbx_t- \bbx_{t+1}\|$ is larger than $\eps/(K+ LD)$ or not. If the norm is larger than the threshold, then \begin{align} f(\bbx_{t+1})\leq f(\bbx_t) -\frac{\eps^2L}{2(K+ LD)^2}.
\end{align} If the norm is smaller than the threshold, then we stop and the iterate $\bbx_t$ satisfies the inequality \begin{equation} \nabla f(\bbx_t)^\top(\bbx-\bbx_t)\geq -\eps,\qquad \forall\ \bbx\in\ccalC. \end{equation} Note that this process cannot take more than $\mathcal{O}(\frac{f(\bbx_0)-f(\bbx^*)}{\eps^2})$ iterations. \subsection{Proof of Proposition \ref{SOU_proposition}} Taylor's expansion of the function $f$ around the point $\bbx_t$ and the $M$-Lipschitz continuity of the Hessian imply that \begin{equation}\label{eq:proof_main_thm_100} f(\bbx_{t+1}) \leq f(\bbx_t) + \nabla f(\bbx_t)^\top(\bbx_{t+1}-\bbx_t) +\frac{1}{2} (\bbx_{t+1}-\bbx_t)^\top \nabla^2 f(\bbx_t) (\bbx_{t+1}-\bbx_t) + \frac{M}{6} \|\bbx_{t+1}-\bbx_t\|^3. \end{equation} Replace $\bbx_{t+1}-\bbx_t$ by the expression $\sigma(\bbu_{t}-\bbx_t)$ to obtain \begin{equation}\label{eq:proof_main_thm_200} f(\bbx_{t+1}) \leq f(\bbx_t) +\sigma \nabla f(\bbx_t)^\top( \bbu_{t}-\bbx_t)+ \frac{\sigma^2}{2} (\bbu_t-\bbx_t)^\top \nabla^2 f(\bbx_t) (\bbu_t-\bbx_t) + \frac{M\sigma^3}{6} \|\bbu_t-\bbx_t\|^3. \end{equation} Since $\bbu_t$ is a $\rho$-approximate solution for the subproblem in \eqref{eq:quad_problem} with the objective function value $q(\bbu_t)< -\rho \gamma$, we can substitute the quadratic term $(\bbu_t-\bbx_t)^\top \nabla^2 f(\bbx_t) (\bbu_t-\bbx_t)$ by its upper bound $-\rho \gamma$. Additionally, the vector $\bbu_t$ is chosen such that $\nabla f(\bbx_t)^\top( \bbu_{t}-\bbx_t)=0$ and therefore the linear term in \eqref{eq:proof_main_thm_200} can be eliminated. Further, the cubic term $ \|\bbu_t-\bbx_t\|^3$ is upper bounded by $D^3$ since both $\bbu_t$ and $\bbx_t$ belong to the convex set $\ccalC$. Applying these substitutions into \eqref{eq:proof_main_thm_200} yields \begin{equation}\label{eq:proof_main_thm_300} f(\bbx_{t+1}) \leq f(\bbx_t) -\frac{\sigma^2\rho\gamma}{2} + \frac{\sigma^3MD^3}{6} .
\end{equation} By setting $\sigma=\rho\gamma/(MD^3)$ in \eqref{eq:proof_main_thm_300} it follows that \begin{align} f(\bbx_{t+1}) & \leq f(\bbx_t) -\frac{\rho^3\gamma^3}{2M^2D^6} + \frac{\rho^3\gamma^3}{6M^2D^6} \nonumber\\ &= f(\bbx_t) -\frac{\rho^3\gamma^3}{3M^2D^6}. \end{align} Therefore, in this case, the objective function value decreases at least by a fixed value of $\mathcal{O}(\rho^{3}\gamma^{3})$. \subsection{Proof of Theorem \ref{thm:main_thm}} At each iteration, either the first-order optimality condition is not satisfied and the function value decreases by a constant of $\mathcal{O}(\eps^{2})$, or this condition is satisfied and we use a second-order update which leads to an objective function value decrease of $\mathcal{O}(\rho^{3}\gamma^{3})$. This shows that if we have not reached an $(\eps,\gamma)$-second-order stationary point, the objective function value decreases at least by $\mathcal{O}(\min\{\eps^{2}, \rho^{3}\gamma^{3}\})$. Therefore, we either reach the global minimum or converge to an $(\eps,\gamma)$-second-order stationary point of Problem~\eqref{eq:main_problem} after at most $\mathcal{O}\left(\frac{f(\bbx_0)-f(\bbx^*)}{\min\{\eps^{2}, \rho^{3}\gamma^{3}\}}\right)$ iterations, which also can be written as $\mathcal{O}((f(\bbx_0)-f(\bbx^*))(\eps^{-2}+ \rho^{-3}\gamma^{-3}))$. \subsection{Proof of Theorem \ref{thm:main_thm_stochastic}} In this proof, for notational convenience, we define $\eps'=\eps/2$ and $\gamma'=\gamma/2$.
First, note that the condition in Assumption~\ref{assumption:bounded_variance} and the fact that $\nabla F(\bbx,\bbtheta)$ and $\nabla^2 F(\bbx,\bbtheta)$ are the unbiased estimators of the gradient $\nabla f(\bbx)$ and Hessian $\nabla^2 f(\bbx)$ imply that the variance of the batch gradient $\bbd_t$ and the batch Hessian $\bbH_t$ approximations are upper bounded by \begin{equation}\label{eq:bound_on_batch_approx} \E{ \|\bbd_t-\nabla f(\bbx_t)\|^2}\leq \frac{\nu^2}{b_g}, \qquad \E{ \|\bbH_t-\nabla^2 f(\bbx_t)\|^2}\leq \frac{\xi^2}{b_H}. \end{equation} Here we assume that $b_g$ and $b_H$ satisfy the following conditions, \begin{equation}\label{condition_on_batch_sizes} b_g =\max\left\{ \frac{324\nu^2 M^2 D^8}{\rho^4\gamma'^4} , \frac{16D^2\nu^2}{\eps'^2} \right\} , \qquad b_H =\frac{81D^4\xi^2}{\rho^2\gamma'^2} . \end{equation} We further set the parameter $r$ as \begin{equation}\label{def_r_para} r= \frac{\rho^2\gamma'^2}{18MD^3}. \end{equation} Now we proceed to analyze the complexity of Algorithm 2. First, consider the case that the current iterate $\bbx_t$ satisfies the inequality $\bbd_t^{\top} (\bbv_t-\bbx_t) < -\eps'$ and therefore we perform the first-order update in step 4. In this case, we can show that \begin{align}\label{proof_stochastic_100} f(\bbx_{t+1}) & \leq f(\bbx_t) + \nabla f(\bbx_t)^\top (\bbx_{t+1}-\bbx_{t}) +\frac{L}{2}\|\bbx_{t+1}-\bbx_{t}\|^2 \nonumber\\ & = f(\bbx_t) + \eta \nabla f(\bbx_t)^\top (\bbv_{t}-\bbx_{t}) +\frac{\eta^2L}{2}\|\bbv_t-\bbx_{t}\|^2 \nonumber\\ & \leq f(\bbx_t) + \eta \bbd_t^\top (\bbv_{t}-\bbx_{t}) +\eta (\nabla f(\bbx_t)-\bbd_t)^\top (\bbv_{t}-\bbx_{t}) +\frac{\eta^2LD^2}{2} \nonumber\\ & \leq f(\bbx_t) - \eta\eps' +\eta D \|\nabla f(\bbx_t)-\bbd_t\| +\frac{\eta^2LD^2}{2}, \end{align} where in the last inequality we used $\bbd_t^{\top} (\bbv_t-\bbx_t) < -\eps'$ and the fact that both $\bbv_t$ and $\bbx_t$ belong to the set $\ccalC$ and therefore $\|\bbx_t-\bbv_t\|\leq D$. 
Let $\ccalF_t$ be the sigma-algebra capturing all sources of randomness up to step $t$. Then, computing the expected value of both sides of \eqref{proof_stochastic_100} given $\ccalF_t$ leads to \begin{align}\label{proof_stochastic_200} \E{f(\bbx_{t+1})\mid \ccalF_t } \leq f(\bbx_t) - \eta\eps' +\frac{\eta D \nu}{\sqrt{b_g}} +\frac{\eta^2LD^2}{2}, \end{align} where we used the inequality $\E{X}\leq \sqrt{\E{X^2}}$ when $X$ is a nonnegative random variable. Replace the stepsize $\eta$ by its value ${\eps'}/({D^2L})$ and the batch size {$b_g$ by its lower bound ${(16D^2\nu^2)}/({\eps'^2})$} to obtain \begin{align}\label{proof_stochastic_300} \E{f(\bbx_{t+1}) \mid \ccalF_t } \leq f(\bbx_t) -\frac{\eps'^2}{4D^2L} . \end{align} Hence, in this case, the objective function value decreases in expectation by a constant factor of $\mathcal{O}(\eps'^2)$. Now we proceed to study the case that the current iterate $\bbx_t$ does not satisfy the inequality $\bbd_t^{\top} (\bbv_t-\bbx_t) < -\eps'$ and we need to perform the second-order update in step 8. In this case, we can show that \begin{align}\label{proof_stochastic_400} f(\bbx_{t+1}) & \leq f(\bbx_t) + \nabla f(\bbx_t)^\top(\bbx_{t+1}-\bbx_t) +\frac{1}{2} (\bbx_{t+1}-\bbx_t)^\top \nabla^2 f(\bbx_t) (\bbx_{t+1}-\bbx_t) + \frac{M}{6} \|\bbx_{t+1}-\bbx_t\|^3\nonumber\\ & \leq f(\bbx_t) + \sigma \nabla f(\bbx_t)^\top(\bbu_t-\bbx_t) +\frac{\sigma^2}{2} (\bbu_t-\bbx_t)^\top \nabla^2 f(\bbx_t) (\bbu_t-\bbx_t) + \frac{\sigma^3 MD^3}{6}\nonumber\\ & \leq f(\bbx_t) + \sigma \bbd_t^\top(\bbu_t-\bbx_t)+ \sigma (\nabla f(\bbx_t)-\bbd_t)^\top(\bbu_t-\bbx_t) +\frac{\sigma^2}{2} (\bbu_t-\bbx_t)^\top \bbH_t (\bbu_t-\bbx_t) \nonumber\\ & \qquad + \frac{\sigma^2}{2} (\bbu_t-\bbx_t)^\top (\nabla^2 f(\bbx_t)-\bbH_t) (\bbu_t-\bbx_t) +\frac{\sigma^3 MD^3}{6}. \end{align} Note that $\bbu_t$ is a $\rho$-approximate solution for the subproblem in step 6 of Algorithm 2, with the objective function value less than $ -\rho \gamma'$.
This observation implies that the quadratic term $(\bbu_t-\bbx_t)^\top \bbH_t (\bbu_t-\bbx_t)$ is bounded above by $-\rho\gamma'$. Further, the linear term $ \bbd_t^\top(\bbu_t-\bbx_t) $ is less than $r$ according to the constraint of the subproblem. Applying these substitutions and using the Cauchy-Schwarz inequality multiple times lead to \begin{align}\label{proof_stochastic_500} f(\bbx_{t+1}) \leq f(\bbx_t) +\sigma r+ \sigma D \|\bbd_t-\nabla f(\bbx_t)\| -\frac{\sigma^2\rho\gamma'}{2} + \frac{\sigma^2D^2}{2} \| \bbH_t- \nabla^2 f(\bbx_t)\| +\frac{\sigma^3 MD^3}{6}. \end{align} Compute the conditional expected value of both sides of \eqref{proof_stochastic_500} and use the inequalities in \eqref{eq:bound_on_batch_approx} to obtain \begin{align}\label{proof_stochastic_600} \E{ f(\bbx_{t+1}) \mid \ccalF_t} \leq f(\bbx_t) +\sigma r+ \frac{\sigma D \nu}{\sqrt{b_g}} -\frac{\sigma^2\rho\gamma'}{2} + \frac{\sigma^2D^2 \xi}{2\sqrt{b_H}} +\frac{\sigma^3 MD^3}{6}. \end{align} By setting the stepsize $\sigma=\rho\gamma'/(MD^3)$ in \eqref{proof_stochastic_600} it follows that \begin{align}\label{proof_stochastic_700} \E{ f(\bbx_{t+1}) \mid \ccalF_t} \leq f(\bbx_t) -\frac{\rho^3\gamma'^3}{3M^2D^6} +\frac{r\rho\gamma'}{MD^3} + \frac{\rho\gamma' \nu}{MD^2\sqrt{b_g}}+ \frac{\rho^2\gamma'^2 \xi}{2M^2D^4\sqrt{b_H}} . \end{align} Moreover, setting {$r=\frac{\rho^2\gamma'^2}{18MD^3}$} and $b_H=\frac{81D^4\xi^2}{\rho^2\gamma'^2}$, and replacing $b_g$ by its lower bound $\frac{324\nu^2 M^2 D^8}{\rho^4\gamma'^4}$ lead to \begin{align}\label{proof_stochastic_800} \E{ f(\bbx_{t+1})\mid \ccalF_t} \leq f(\bbx_t) -\frac{\rho^3\gamma'^3}{6M^2D^6}. \end{align} Hence, in this case, the expected objective function value decreases by a constant of $\mathcal{O}(\rho^3\gamma'^3)$.
By combining the results in \eqref{proof_stochastic_300} and \eqref{proof_stochastic_800}, we obtain that if the iterate $\bbx_t$ is not the final iterate, the objective function value at step $t+1$ satisfies the following inequality \begin{align} \E{ f(\bbx_{t+1}) \mid \ccalF_t} \leq f(\bbx_t) -\min\left\{\frac{\eps'^2}{4LD^2} , \frac{\rho^3\gamma'^3}{6M^2D^6} \right\}. \end{align} Let us define $T$ as the number of iterations we perform until Algorithm 2 stops. We use an argument similar to Wald's lemma to derive an upper bound on the expected number of iterations $T$ that we need to run the algorithm. Note that \begin{align} \E{ f(\bbx_0)-f(\bbx_T) } & = \E{\sum_{t=1}^{T} (f(\bbx_{t-1}) -f(\bbx_{t}) )} \nonumber\\ & = \sum_{k=1}^\infty \E{\sum_{t=1}^{k} (f(\bbx_{t-1}) -f(\bbx_{t}) ) \,\Big|\, T=k} \ \mathbb{P}(T=k) \nonumber\\ & = \sum_{k=1}^\infty \sum_{t=1}^{k} \E{ f(\bbx_{t-1}) -f(\bbx_{t}) \,\Big|\, T=k} \ \mathbb{P}(T=k) \nonumber\\ & \geq \sum_{k=1}^\infty \sum_{t=1}^{k} \min\left\{\frac{\eps'^2}{4LD^2} , \frac{\rho^3\gamma'^3}{6M^2D^6} \right\} \ \mathbb{P}(T=k) \nonumber\\ & = \min\left\{\frac{\eps'^2}{4LD^2} , \frac{\rho^3\gamma'^3}{6M^2D^6} \right\} \sum_{k=1}^\infty k\ \mathbb{P}(T=k) \nonumber\\ & = \min\left\{\frac{\eps'^2}{4LD^2} , \frac{\rho^3\gamma'^3}{6M^2D^6} \right\} \E{T}. \end{align} Hence, $\E{T}\leq \E{f(\bbx_0)- f(\bbx_T) } /{\min\left\{\frac{\eps'^2}{4LD^2} , \frac{\rho^3\gamma'^3}{6M^2D^6} \right\}}$. We further know that $f(\bbx_T)\geq f(\bbx^*)$, which implies that \begin{equation} \E{T}\leq (f(\bbx_0) - f(\bbx^*))\max\left\{\frac{4LD^2}{\eps'^2} , \frac{6M^2D^6} {\rho^3\gamma'^3}\right\}. 
\end{equation} Using Markov's inequality we can show that \begin{align} \mathbb{P}\left( T \leq a \right)\geq 1-\frac{ (f(\bbx_0) - f(\bbx^*))\max\left\{\frac{4LD^2}{\eps'^2} , \frac{6M^2D^6} {\rho^3\gamma'^3}\right\}}{a}. \end{align} Set $a=\frac{(f(\bbx_0) - f(\bbx^*))}{\delta}\max\left\{\frac{4LD^2}{\eps'^2} , \frac{6M^2D^6} {\rho^3\gamma'^3}\right\}$ to obtain that \begin{align} \mathbb{P}\left( T \leq \frac{ (f(\bbx_0) - f(\bbx^*))\max\left\{\frac{4LD^2}{\eps'^2} , \frac{6M^2D^6} {\rho^3\gamma'^3}\right\}}{\delta} \right)\geq 1-\delta. \end{align} Therefore, it follows that with high probability the total number of iterations $T$ that Algorithm 2 runs is at most $\mathcal{O}(\max\left\{{\eps'^{-2}} ,{\rho^{-3}\gamma'^{-3}}\right\})$. Now it remains to show that the outcome of Algorithm 2 is an $(\eps,\gamma)$-SOSP of Problem~\eqref{eq:stoc_main_problem} with high probability. Let us assume that $\bbx_t$ is the final output of Algorithm 2. Then, we know that $\bbx_t$ satisfies the conditions \begin{equation}\label{proof_stochastic_part_2_100} \bbd_t^{\top}(\bbx-\bbx_t)\geq-\eps' \quad \forall \ \bbx\in \ccalC, \end{equation} and \begin{equation}\label{proof_stochastic_part_2_200} (\bbx-\bbx_t)^\top \bbH_t (\bbx-\bbx_t)\geq -\gamma' \quad \forall \ \bbx\in\ccalC,\ \bbd_t^\top(\bbx-\bbx_t)\leq r. \end{equation} First, we use the condition in \eqref{proof_stochastic_part_2_100} to show that $\bbx_t$ satisfies the first-order optimality condition with high probability. Note that for any $\bbx\in \ccalC$ it holds that \begin{align}\label{proof_stochastic_part_2_300} \nabla f(\bbx_t)^{\top} (\bbx-\bbx_t) &=\bbd_t^{\top} (\bbx-\bbx_t) + (\nabla f(\bbx_t)-\bbd_t)^{\top} (\bbx-\bbx_t) \nonumber\\ &\geq \bbd_t^{\top} (\bbx-\bbx_t) - D\|\nabla f(\bbx_t)-\bbd_t\|. 
\end{align} Now compute the minimum of both sides of \eqref{proof_stochastic_part_2_300} for all $\bbx\in \ccalC$ to obtain \begin{align}\label{proof_stochastic_part_2_400} \min_{\bbx\in\ccalC} \{\nabla f(\bbx_t)^{\top} (\bbx-\bbx_t) \} &\geq \min_{\bbx\in\ccalC}\{\bbd_t^{\top} (\bbx-\bbx_t) - D\|\nabla f(\bbx_t)-\bbd_t\|\}\nonumber\\ &= \min_{\bbx\in\ccalC}\{\bbd_t^{\top} (\bbx-\bbx_t) \} - D\|\nabla f(\bbx_t)-\bbd_t\|\nonumber\\ &\geq -\eps' - D\|\nabla f(\bbx_t)-\bbd_t\|, \end{align} where the equality holds since $D\|\nabla f(\bbx_t)-\bbd_t\|$ does not depend on $\bbx$, and the last inequality is implied by \eqref{proof_stochastic_part_2_100}. Since $\E{\|\nabla f(\bbx_t)-\bbd_t\|^2}\leq \nu^2/b_g$ we obtain from Markov's inequality that \begin{equation}\label{proof_stochastic_part_2_500} \mathbb{P}\left( \|\nabla f(\bbx_t)-\bbd_t\| \leq \eps''\right) \geq 1-\frac{\nu^2}{b_g\eps''^2}. \end{equation} Therefore, by combining the results in \eqref{proof_stochastic_part_2_400} and \eqref{proof_stochastic_part_2_500} we obtain that \begin{equation}\label{proof_stochastic_part_2_600} \mathbb{P}\left( \min_{\bbx\in\ccalC} \{\nabla f(\bbx_t)^{\top} (\bbx-\bbx_t) \} \geq -(\eps'+D\eps'') \right) \geq 1-\frac{\nu^2}{b_g\eps''^2}. \end{equation} Now by setting $\eps''=\eps'/D$ it follows from \eqref{proof_stochastic_part_2_600} that with probability at least $1-\nu^2 D^2/b_g \eps'^2$ the final iterate $\bbx_t$ satisfies \begin{equation}\label{proof_stochastic_part_2_700} \nabla f(\bbx_t)^{\top} (\bbx-\bbx_t) \geq -2\eps' \qquad \forall \ \bbx\in \ccalC. \end{equation} Replacing $\eps'$ by $\eps/2$ leads to \begin{equation}\label{proof_stochastic_part_2_702} \nabla f(\bbx_t)^{\top} (\bbx-\bbx_t) \geq -\eps \qquad \forall \ \bbx\in \ccalC. \end{equation} It remains to show that with high probability the final iterate $\bbx_t$ satisfies the second-order optimality condition. 
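The batch-size argument above is just Markov's inequality applied to $\|\nabla f(\bbx_t)-\bbd_t\|^2$. The following minimal simulation (a synthetic one-dimensional example with Gaussian gradient noise, an assumption made purely for illustration) shows the empirical failure rate staying below the bound $\nu^2/(b_g\eps''^2)$ from \eqref{proof_stochastic_part_2_500}.

```python
import random
import statistics

random.seed(0)

nu, b_g, eps_pp = 2.0, 100, 0.5        # noise level nu, batch size b_g, tolerance eps''
bound = nu**2 / (b_g * eps_pp**2)      # Markov bound on P(||d_t - grad f(x_t)|| > eps'')

# synthetic 1-d model: each stochastic gradient = true gradient + N(0, nu^2) noise
true_grad = 1.0
trials = 2000
failures = 0
for _ in range(trials):
    # d_t is the average of b_g stochastic gradients
    d = statistics.fmean(true_grad + random.gauss(0.0, nu) for _ in range(b_g))
    if abs(d - true_grad) > eps_pp:
        failures += 1

print(failures / trials <= bound)  # True: empirical failure rate respects the bound
```

The Markov bound is loose here (the empirical failure rate is far smaller), which is consistent with the proof only needing a one-sided guarantee.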
First, consider the sets $\ccalA_t=\{\bbx \mid \nabla f(\bbx_t)^{\top} (\bbx-\bbx_t) =0 \}$ and $\ccalB_t=\{ \bbx \mid \bbd_t^{\top}(\bbx-\bbx_t)\leq r \}$. We proceed to show that with high probability $\ccalA_t\subset\ccalB_t$. If $\bby$ satisfies the condition \begin{equation}\label{proof_stochastic_part_2_800} \nabla f(\bbx_t)^{\top} (\bby-\bbx_t) =0, \end{equation} then \begin{align}\label{proof_stochastic_part_2_900} \bbd_t^{\top} (\bby-\bbx_t) &= \nabla f(\bbx_t)^{\top} (\bby-\bbx_t)+ (\bbd_t-\nabla f(\bbx_t))^{\top} (\bby-\bbx_t) \nonumber\\ &\leq D\|\bbd_t-\nabla f(\bbx_t)\|, \end{align} where the inequality uses \eqref{proof_stochastic_part_2_800} and the Cauchy-Schwarz inequality. Since $\E{\|\nabla f(\bbx_t)-\bbd_t\|^2}\leq \nu^2/b_g$, we obtain from Markov's inequality that \begin{equation}\label{proof_stochastic_part_2_1000} \mathbb{P}\left( \|\nabla f(\bbx_t)-\bbd_t\| \leq \frac{r}{D}\right) \geq 1-\frac{\nu^2D^2}{b_gr^2}. \end{equation} Therefore, by combining the results in \eqref{proof_stochastic_part_2_900} and \eqref{proof_stochastic_part_2_1000} we obtain that \begin{equation}\label{proof_stochastic_part_2_1100} \mathbb{P}\left( \bbd_t^{\top} (\bby-\bbx_t) \leq r\right) \geq 1-\frac{\nu^2D^2}{b_gr^2}. \end{equation} This argument shows that if $\bby\in \ccalA_t$, then with high probability $\bby$ also belongs to the set $\ccalB_t$. This result shows that if an inequality holds for all $\bbx$ that satisfy $\bbd_t^{\top}(\bbx-\bbx_t)\leq r$, then with high probability that inequality also holds for all $\bbx$ that satisfy the condition $\nabla f(\bbx_t)^{\top} (\bbx-\bbx_t) =0$. Now, note that if $\bbx_t$ is the output of Algorithm 2, then for any $\bbx\in \ccalC$ satisfying $\bbd_t^{\top} (\bbx-\bbx_t)\leq r$ it holds that \begin{align}\label{proof_stochastic_part_2_1200} (\bbx-\bbx_t)^\top \nabla^2 f(\bbx_t) (\bbx-\bbx_t) &= (\bbx-\bbx_t)^\top \bbH_t (\bbx-\bbx_t)- (\bbx-\bbx_t)^\top( \bbH_t-\nabla^2 f(\bbx_t) ) (\bbx-\bbx_t) \nonumber\\ &\geq -\gamma' -D^2\|\bbH_t-\nabla^2 f(\bbx_t)\|. 
\end{align} Further, define the random variable $X_t=\|\bbH_t-\nabla^2 f(\bbx_t)\|$. As we know that $\E{X_t^2}\leq \xi^2/{b_H}$, it follows by Markov's inequality that $\mathbb{P}(X_t\leq a) \geq 1-\xi^2/({b_H}a^2)$. Therefore, we can write that \begin{equation}\label{proof_stochastic_part_2_1300} \mathbb{P}(\|\bbH_t-\nabla^2 f(\bbx_t)\|\leq \gamma'') \geq 1-\frac{\xi^2}{b_H\gamma''^2}. \end{equation} Hence, by using the results in \eqref{proof_stochastic_part_2_1200} and \eqref{proof_stochastic_part_2_1300}, we can show that with probability at least $1-\frac{\xi^2}{b_H\gamma''^2}$ for any $\bbx\in \ccalC$ satisfying $\bbd_t^{\top} (\bbx-\bbx_t)\leq r$ it holds \begin{align} (\bbx-\bbx_t)^\top \nabla^2 f(\bbx_t) (\bbx-\bbx_t) &\geq -\gamma' -D^2\gamma''. \end{align} By setting $\gamma''= \gamma'/D^2$ it follows that $\bbx_t$ satisfies the condition \begin{align} (\bbx-\bbx_t)^\top \nabla^2 f(\bbx_t) (\bbx-\bbx_t) \geq -2\gamma' \quad \forall\ \bbx\in\ccalC,\ \bbd_t^{\top} (\bbx-\bbx_t)\leq r, \end{align} with probability at least $1-\frac{\xi^2D^4}{b_H\gamma'^2}$. Further, with probability at least $1-\frac{\nu^2D^2}{b_gr^2}$ we know that $\ccalA_t\subset \ccalB_t$. These observations imply that if $\bbx_t$ is the output of Algorithm 2, then it satisfies \begin{align} (\bbx-\bbx_t)^\top \nabla^2 f(\bbx_t) (\bbx-\bbx_t) \geq -2\gamma' \quad \forall\ \bbx\in\ccalC,\ \nabla f(\bbx_t)^{\top} (\bbx-\bbx_t)=0, \end{align} with probability at least $1-\frac{\xi^2D^4}{b_H\gamma'^2}- \frac{\nu^2D^2}{b_gr^2}$, where we used the inequality \begin{align} P(A \cap B) &= P(A)+P(B)-P(A\cup B)\nonumber\\ & \geq P(A)+P(B)-1. \end{align} By setting $\gamma'=\gamma/2$ we obtain that with probability at least $1-\frac{\xi^2D^4}{b_H\gamma'^2}- \frac{\nu^2D^2}{b_gr^2}$ the final iterate satisfies the condition \begin{align} (\bbx-\bbx_t)^\top \nabla^2 f(\bbx_t) (\bbx-\bbx_t) \geq -\gamma \quad \forall\ \bbx\in\ccalC,\ \nabla f(\bbx_t)^{\top} (\bbx-\bbx_t)=0. 
\end{align} Therefore, with probability at least $1-\frac{\nu^2 D^2}{b_g \eps'^2}-\frac{\xi^2D^4}{b_H\gamma'^2}- \frac{\nu^2D^2}{b_gr^2}$ the output of Algorithm 2 is an $(\eps,\gamma)$-SOSP of the stochastic optimization problem in \eqref{eq:stoc_main_problem}. This observation and the conditions on the batch sizes in \eqref{condition_on_batch_sizes} imply that the output of Algorithm 2 is an $(\eps,\gamma)$-SOSP of the stochastic optimization problem in \eqref{eq:stoc_main_problem} with probability at least $1-\frac{1}{16}-\frac{1}{324}- \frac{\rho^2}{81}\geq 0.92$ (note that $\rho\leq 1$). Indeed, by increasing the batch sizes $b_g$ and $b_H$, all the results hold with a higher probability. \section*{Acknowledgment} This work was supported by DARPA Lagrange and the ONR BRC Program. The authors would like to thank Yue Sun for pointing out a missing condition in the first draft of the paper.
https://arxiv.org/abs/1106.2944
Matroids and log-concavity
We show that f-vectors of matroid complexes of realisable matroids are log-concave. This was conjectured by Mason in 1972. Our proof uses the recent result by Huh and Katz who showed that the coefficients of the characteristic polynomial of a realisable matroid form a log-concave sequence. We also discuss the relationship between log-concavity of f-vectors and h-vectors of matroids. In the last section we explain the connection between zonotopal algebra and f-vectors and characteristic polynomials of matroids.
\section{Introduction} Let $M=(E,\Delta)$ be a matroid of rank $r$. $E$ denotes the ground set and $\Delta \subseteq 2^E$ denotes the matroid complex, \ie the abstract simplicial complex of independent sets. Let $f=(f_0,\ldots, f_r)$ be the $f$-vector of $\Delta$, \ie $f_i$ is the number of sets of cardinality $i$ in $\Delta$. Dominic Welsh conjectured in 1969 \cite{welsh-1971} that the $f$-vector of a matroid complex is \emph{unimodal}, \ie there exists $j\in\{0,1,\ldots, r\}$ \st $f_0 \le f_1 \le \ldots \le f_j \ge \ldots \ge f_r$. Three successive strengthenings of this conjecture were proposed by John Mason in 1972 \cite{mason-1972}. The weakest of them is \emph{log-concavity} of the $f$-vector, \ie \begin{align} f_i^2 \ge f_{i-1}f_{i+1} \text{ for } i=1,\ldots, r - 1. \label{eq:logconcavity} \end{align} If the inequalities in \eqref{eq:logconcavity} are strict, we say that the $f$-vector is \emph{strictly log-concave}. Since then, those conjectures have received considerable attention. See for example \cite{brown-colbourn-1994, dawson-1984,dowling-1980, hamidoune-salaun-1989, johnson-kontoyiannis-madiman-2009,mahoney-1985, seymour-1975, stanley-1981,wagner-2008,zhao-1985}. Carolyn Mahoney proved log-concavity for cycle matroids of outerplanar graphs in 1985 \cite{mahoney-1985}. David Wagner \cite{wagner-2008} describes further partial results, several stronger variants of Mason's conjecture, and other sequences of integers that are associated to a matroid and that are conjectured to be log-concave. Log-concave sequences arising in combinatorics have been studied by many authors. For an overview, see the surveys by Francesco Brenti and Richard Stanley \cite{brenti-1994,stanley-1989}. Our main result is the following theorem: \begin{Theorem} \label{Theorem:fvectorStrictlyLogConcave} The $f$-vector of the matroid complex of a realizable matroid is strictly log-concave. 
\end{Theorem} The fact that we are able to prove strict log-concavity indicates that the $f$-vector of a matroid complex might satisfy stronger inequalities. The strongest of Mason's three conjectures \cite{mason-1972} is ultra-log-concavity, \ie the conjecture that the following inequalities hold: \begin{equation} \frac{f_i^2}{\binom{f_1}{i}^2} \ge \frac{f_{i-1}}{\binom{f_1}{i-1}}\frac{f_{i+1}}{\binom{f_1}{i+1}} \text{ for } i=1,\ldots, r-1. \end{equation} This conjecture is one of the main topics of an upcoming workshop at AIM\footnote{Workshop on \emph{Stability, hyperbolicity, and zero localization of functions}, December 5 to December 9, 2011 at the American Institute of Mathematics, Palo Alto, California. Organized by Petter Br\"and\'en, George Csordas, Olga Holtz, and Mikhail Tyaglov.\\ \url{http://www.aimath.org/ARCC/workshops/hyperbolicpoly.html} }. \smallskip Finding inequalities satisfied by $f$-vectors of matroid complexes is interesting because it is a step towards the classification of $f$-vectors and $h$-vectors of matroid complexes. Johnson, Kontoyiannis, and Madiman \cite{johnson-kontoyiannis-madiman-2009} show that Theorem~\ref{Theorem:fvectorStrictlyLogConcave} implies a bound on the entropy of the cardinality of a random independent set in a matroid. A possible application to network reliability is explained in Section~\ref{Section:GraphPolynomials}. Our log-concavity results might also help to prove statements about coefficients and zeroes of various graph polynomials. \subsection*{Organization of the article} In Section~\ref{Section:MatroidAndMatroidPolynomials}, we introduce the $f$-polynomial and the characteristic polynomial of a matroid. Recently, June Huh and Eric Katz \cite{huh-katz-2011} proved that the characteristic polynomial of a realizable matroid is log-concave (a univariate polynomial is log-concave if its coefficients form a log-concave sequence). 
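For a concrete instance of the Huh--Katz theorem, the following self-contained sketch (illustrative only, not part of the proofs) computes the characteristic polynomial of the cycle matroid of $K_4$ from a vector realization via the standard subset expansion $\chi_M(q)=\sum_{A\subseteq E}(-1)^{\abs{A}}q^{r-\mathop{\mathrm{rk}}(A)}$ and checks log-concavity of the absolute values of its coefficients.

```python
from fractions import Fraction
from itertools import combinations

def rank(vectors):
    """Exact rank of a list of rational vectors, via Gaussian elimination."""
    rows = [list(map(Fraction, v)) for v in vectors]
    rk = col = 0
    ncols = len(rows[0]) if rows else 0
    while rk < len(rows) and col < ncols:
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(rk + 1, len(rows)):
            f = rows[i][col] / rows[rk][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
        col += 1
    return rk

# edge vectors e_i - e_j of the complete graph K_4: a realization of its
# cycle matroid, which has rank 3
E = [(1, -1, 0, 0), (1, 0, -1, 0), (1, 0, 0, -1),
     (0, 1, -1, 0), (0, 1, 0, -1), (0, 0, 1, -1)]
r = rank(E)

# subset expansion: chi[p] collects the coefficient of q^p
chi = [0] * (r + 1)
for k in range(len(E) + 1):
    for A in combinations(E, k):
        chi[r - rank(A)] += (-1) ** k

print([chi[p] for p in range(r, -1, -1)])  # [1, -6, 11, -6], i.e. (q-1)(q-2)(q-3)

# the absolute values of the coefficients form a log-concave sequence
a = [abs(c) for c in chi]
print(all(a[i] ** 2 >= a[i - 1] * a[i + 1] for i in range(1, len(a) - 1)))  # True

# reduced characteristic polynomial: synthetic division by (q - 1)
desc, red, acc = [chi[p] for p in range(r, -1, -1)], [], 0
for c in desc[:-1]:
    acc += c
    red.append(acc)
print(red)  # [1, -5, 6], the reduced polynomial q^2 - 5q + 6 of the theorem
```

The reduced characteristic polynomial is the object of the Huh--Katz theorem; here its coefficient sequence $(1,5,6)$ in absolute value is log-concave since $25\ge 6$.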
In Section~\ref{Section:Extensions}, we establish a connection between the characteristic polynomial and the $f$-polynomial. In conjunction with the result by Huh and Katz, this implies log-concavity of the $f$-polynomial of realizable matroids. In Section~\ref{Section:hvectors}, we deduce that $h$-vectors of certain thickenings of a realizable matroid are log-concave, which strengthens a result by Jason Brown and Charles Colbourn. As we will see in Section~\ref{Section:StrictLogConcavity}, this implies Theorem~\ref{Theorem:fvectorStrictlyLogConcave}. In Section~\ref{Section:ZonotopalAlgebra}, we give a brief introduction to zonotopal algebra and explain how the $f$-polynomial and the characteristic polynomial are related to it. Zonotopal algebra is the study of several classes of vector spaces of polynomials that can be associated with a realization of a matroid. The Hilbert series of those spaces are matroid invariants. In Section~\ref{Section:GraphPolynomials}, we explain the relationship between various graph/matroid polynomials, zonotopal algebra and our log-concavity results. \section{Matroid polynomials} \label{Section:MatroidAndMatroidPolynomials} In this section, we review the definitions of some matroid polynomials. We assume that the reader is familiar with matroid theory. Good references are the book by James Oxley \cite{MatroidTheory-Oxley} and the Wikipedia articles on matroids and the Tutte polynomial. Recall that we denote by $M=(E,\Delta)$ a matroid of rank $r$. The \emph{Tutte polynomial} \cite{brylawski-oxley-1992} of $M$ is defined as \begin{align} T_M(x,y) &= \sum_{A\subseteq E}(x-1)^{r-\mathop{\mathrm{rk}}(A)} (y-1)^{\abs{A}- \mathop{\mathrm{rk}}(A)}. \end{align} An important specialization of the Tutte polynomial is the \emph{characteristic polynomial} \begin{align} \chi_M(q) &= (-1)^{r}T_M(1-q,0) = \sum_{A\subseteq E} (-1)^{\abs{A}} q^{r - \mathop{\mathrm{rk}}(A)}. 
\end{align} The \emph{reduced characteristic polynomial} is defined as \begin{align} \bar\chi_M(q) = \frac{1}{q-1}\chi_M(q). \end{align} Note that $\chi_M(q)$ vanishes for $q=1$, so $\bar\chi_M(q)$ is indeed a polynomial. Huh and Katz proved the following theorem, extending an earlier theorem by the first author~\cite{huh-2010}: \begin{Theorem}[\cite{huh-katz-2011}] \label{Theorem:KatzHuh} If $M$ is a realizable matroid, then the coefficients of its reduced characteristic polynomial $\bar\chi_M(q)$ form a log-concave sequence. \end{Theorem} It is easy to see that log-concavity of $\bar\chi_M(q)$ implies log-concavity of $\chi_M(q)$. We are interested in the \emph{$f$-polynomial} of the matroid given by \begin{align} f_M(q) = T_M(1+q, 1) &= \sum_{A \in \Delta} q^{r - \mathop{\mathrm{rk}}(A)} = \sum_{i=0}^r f_i q^{r-i}. \end{align} \section{Free (Co-)Extensions} In this section, we introduce free (co-)extensions of matroids. This helps us to establish a connection between the characteristic polynomial and the $f$-polynomial. In conjunction with Theorem~\ref{Theorem:KatzHuh}, this connection implies log-concavity of the $f$-polynomial of realizable matroids. \label{Section:Extensions} \begin{Definition} Let $M=(E,\Delta)$ be a matroid of rank $r$ and let $e\not \in E$. The \emph{free extension} of $M$ (by $e$) is the matroid $M + e=(E\cup \{e\}, \Delta + e)$, where \begin{align} \Delta + e := \Delta \cup \{ (I\cup \{e\}) : I \in \Delta \text{ and } \abs{I}\le r-1 \} . \end{align} \end{Definition} Several properties of the free extension are described in \cite[7.3.3.~Proposition]{brylawski-1986}. \begin{Remark} \label{Remark:ExtensionRealization} If $M$ is realized over the field $\mathbb{K}$ by the list of vectors $X \subseteq \mathbb{K}^r$, then $M + e$ is realized by the list $(X, x)$, where $x\in \mathbb{K}^r$ is a vector that is not contained in any (linear) hyperplane spanned by the vectors in $X$. 
If $\mathbb{K}$ is a finite field, such a vector might not exist. However, if $M$ is realizable over the field $\mathbb{K}$, it is also realizable over the infinite field $\mathbb{K}(t)$ of rational functions in $t$ with coefficients in~$\mathbb{K}$. \end{Remark} Recall that the \emph{dual matroid} $M^*=(E,\Delta^*)$ is given by \begin{align} \Delta^* = \{ A : \mathop{\mathrm{rk}}(E\setminus A)=r \} . \end{align} The dual matroid has rank $ r^* = \abs{E}-r$ and its rank function is given by $\mathop{\mathrm{rk}}^*(A)= \abs{A} + \mathop{\mathrm{rk}}(E\setminus A) - r$. The Tutte polynomial satisfies $T_M(x,y)=T_{M^*}(y,x)$. We will use the \emph{free coextension} $M \times e$ of a matroid $M$ which is defined as \begin{align} M \times e := (M^* + e )^*. \end{align} Equivalently, the free coextension of $M$ is the extension by a non-loop $e$ which is contained in every dependent flat \cite[Section 7.3]{MatroidTheory-Oxley}. \begin{Proposition} \label{Proposition:IndependenceCharacteristic} Let $M$ be a matroid of rank $r$ and let $M \times e$ denote its free coextension. Then, \begin{align} (-1)^{r+1} \chi_{M \times e}(-q) &= (1 + q)f_M(q) . \end{align} \end{Proposition} \begin{proof} For the proof of this statement, we use the fact that both the characteristic polynomial and the $f$-polynomial are evaluations of the Tutte polynomial. Note that the matroid $M \times e$ has rank $r+1$. To simplify notation, the rank functions of $M^*$ and $M^* + e$ are both denoted by $\mathop{\mathrm{rk}}^*$. 
\begin{align} (-1)^{r+1}\chi_{M \times e} (-q) &= T_{M \times e}(1+q,0) = T_{M^* + e}(0,1+q) \\ &= \sum_{A\subseteq E \cup\{ e \} } (-1)^{ r^* - \mathop{\mathrm{rk}}^*(A)} q^{\abs{A}-\mathop{\mathrm{rk}}^*(A)} \\ &= \sum_{A\subseteq E} \Bigl((-1)^{ r^* - \mathop{\mathrm{rk}}^*(A)} q^{\abs{A}-\mathop{\mathrm{rk}}^*(A)} \nonumber\\ &\qquad\qquad\qquad\quad + (-1)^{ r^* - \mathop{\mathrm{rk}}^*(A \cup e)} q^{\abs{A}+1 - \mathop{\mathrm{rk}}^*(A\cup e)} \Bigr) \label{eq:IndependenceCharacteristicA} \\ &= (1+q)\sum_{\substack{A\subseteq E\\\mathop{\mathrm{rk}}^*(A)=r^*}} q^{\abs{A} - r^*} \label{eq:IndependenceCharacteristic} = (1+q) T_{M^*}(1,1+q) \\ &= (1+q) T_{M}(1+q,1 ) = (1+q) f_{M}(q). \end{align} \eqref{eq:IndependenceCharacteristic} is equal to \eqref{eq:IndependenceCharacteristicA} because $\mathop{\mathrm{rk}}^*(A)<r^*$ implies $\mathop{\mathrm{rk}}^*(A \cup e)= \mathop{\mathrm{rk}}^*(A)+1$. For those $A$, the two summands cancel. \end{proof} \begin{Corollary} \label{Corollary:Mason} The $f$-vector of the matroid complex of a realizable matroid is log-concave. \end{Corollary} \begin{proof} Combine Proposition~\ref{Proposition:IndependenceCharacteristic} and Theorem \ref{Theorem:KatzHuh}. Keep in mind that free coextensions of realizable matroids are realizable (cf. Remark~\ref{Remark:ExtensionRealization}). \end{proof} \begin{Remark} Proposition~\ref{Proposition:IndependenceCharacteristic} appeared implicitly in an article by Thomas Brylawski on (reduced) broken-circuit complexes \cite{brylawski-1977}. In Section~\ref{Section:ZonotopalAlgebra}, we give another proof of Proposition~\ref{Proposition:IndependenceCharacteristic} for matroids that are realizable over a field of characteristic zero. This proof uses zonotopal algebra. \end{Remark} \begin{Example} We consider the uniform matroid $U_{2,6}$, \ie the matroid on six elements where every set of cardinality at most two is independent. Note that $U_{2,6} \times e = (U_{4,6} + e)^* = U_{4,7}^* = U_{3,7} $. 
\begin{align*} f_{U_{2,6}}(q) &= q^2 + 6q + 15 \\ (-1)^3\chi_{U_{3,7}}(-q) &= q^3 + 7q^2 + 21q + 15 = (q+1)f_{U_{2,6}}(q) \end{align*} \end{Example} \section{Log-concavity of some $h$-vectors} \label{Section:hvectors} In this section, we strengthen a result by Jason Brown and Charles Colbourn \cite{brown-colbourn-1994}. They showed in a nonconstructive way that every matroid has a thickening whose $h$-vector is log-concave. Thickening denotes an operation where additional copies of some elements of the ground set are added. We prove that the $h$-vector of a $k$-fold thickening (\ie every element of the ground set is replaced by $k$ copies of itself) of a realizable matroid is log-concave for sufficiently large $k$. \begin{Definition} Let $M$ be a matroid of rank $r$. Its \emph{$h$-vector} $(h_0,\ldots, h_r)$ consists of the coefficients of the \emph{$h$-polynomial} defined by the equation $h_M(q) = \sum_{i=0}^r h_i q^{r-i} = f_M(q-1)$, \ie \begin{align} h_j = \sum_{i = 0 }^j (-1)^{j-i} \binom{r- i}{j-i} f_i \quad \text{for } j=0,\ldots, r. \label{equation:hvector} \end{align} \end{Definition} \begin{Definition} Let $M=(E,\Delta)$ be a matroid and let $k$ be a positive integer. We define the \emph{$k$-fold thickening $M^k$} of $M$ to be the matroid on the ground set $E\times\{ 1,\ldots, k\}$ whose independence complex is given by \begin{align} \Delta^k = \{ I \subseteq E \times \{1,\ldots, k\} : \pi_E(I)\in \Delta \text{ and } \abs{\pi_E(I)}=\abs{I} \}. \end{align} $\pi_E : E\times \{1,\ldots, k\} \to E$ denotes the projection to $E$. \end{Definition} \begin{Remark} If $M$ is realized by a list of vectors $X$, $M^k$ is realized by the list $X^k$ that contains $k$ copies of every element of $X$. \end{Remark} \begin{Theorem} \label{Theorem:kThickeningHlc} Let $M=(E,\Delta)$ be a realizable matroid of rank $r$ and let $f_1$ denote the number of elements in $E$ that are not loops. 
Then, there exists an integer $k_0\le (f_1r)^{3r}$ \st for all $k\ge k_0$, the $h$-vector of $M^k$, the $k$-fold thickening of $M$, is log-concave. \end{Theorem} \begin{Remark} We expect that a careful analysis will yield a considerably stronger upper bound on $k_0$. \end{Remark} \begin{proof} First, we observe the following connection between the $f$-polynomials of $M$ and $M^k$: \begin{align} \label{eq:fKthickening} f_{M^k}(q) = \sum_{i=0}^r k^if_i q^{r-i} = k^r f_M(\frac 1k q). \end{align} Let $(f_0,\ldots, f_r)$ denote the $f$-vector of $M$ and let $(h_0',\ldots, h_r')$ denote the $h$-vector of $M^k$. By \eqref{equation:hvector}, $h_j' = \sum_{i = 0 }^j (-1)^{j-i} \binom{ r - i }{ j-i } k^i f_i$. Hence, \begin{align} (h_j')^2 &= \left(\sum_{i = 0 }^j (-1)^{j-i} \binom{r- i}{j-i} k^if_i\right)^2 = k^{2j} f_j^2 + o(k^{2j}) \label{equation:hjsquared}\\ h_{j-1}'h_{j+1}' &= \left(\sum_{i = 0 }^{j-1} (-1)^{j-i} \binom{r- i}{j-i-1} k^if_i\right) \left(\sum_{i = 0 }^{j+1} (-1)^{j-i} \binom{r- i}{j-i+1} k^if_i\right) \label{equation:hjbla} \\ &= k^{2j} f_{j-1}f_{j+1} + o(k^{2j}). \end{align} For large $k$, all summands except for the ones involving $k^{2j}$ are negligible. In particular, for large $k$ \begin{align} (h_j')^2\ge h_{j-1}'h_{j+1}' \text{ is equivalent to } f_j^2\ge f_{j-1}f_{j+1}. \end{align} The latter inequality holds by Corollary~\ref{Corollary:Mason}. For the upper bound on $k_0$, note that Ed Swartz proved in \cite{swartz-2005} that \begin{align} f_i \le \sum_{j=0}^i \binom{r-j}{r-i}\left(\binom{r-1}{j}h_r + \binom{r-1}{j-1}\right). \label{equation:fVectorBound} \end{align} $h_r$ can be bounded above by the following argument: the $h$-vector of a matroid complex is the $h$-vector of a multicomplex \cite[Theorem II.3.3]{stanley-1996}. It follows directly from \eqref{equation:hvector} that $h_1=f_1-r$. Hence, $h_r \le \binom{f_1 -1}{r-1}$. Thus, we can deduce from \eqref{equation:fVectorBound} that $f_i\le r^{2i}f_1^r$. 
Comparing this with \eqref{equation:hjsquared} and \eqref{equation:hjbla} implies the upper bound. \end{proof} \section{Strict log-concavity of $f$-vectors} \label{Section:StrictLogConcavity} In this section, we show that the results in the previous section imply Theorem~\ref{Theorem:fvectorStrictlyLogConcave} and we discuss the location of the modes of the $f$-vector of a matroid complex. Jeremy Dawson conjectured in \cite{dawson-1984} that the $h$-vector of a matroid is log-concave and proved that this would imply log-concavity of the $f$-polynomial. Actually, a more general statement holds (cf. also \cite[Corollary 8.4]{brenti-1994}): \begin{Lemma} \label{Lemma:himpliesfLC} Let $a_0,\ldots, a_r$ be non-negative integers and $a_0\neq 0$. Suppose that the polynomial $a(q)=\sum_{i=0}^r a_i q^{r-i}$ is log-concave. Then, the polynomial $ b(q) = \sum_{i=0}^r b_i q^{r-i} = a(q+1)$ is \emph{strictly} log-concave. \end{Lemma} \begin{proof} Our proof is inspired by Dawson's proof in \cite{dawson-1984}. For $0\le k \le r$, we define $a^k(q)= \sum_{i=0}^k a_i q^{k-i}$ and $b^k(q)=\sum_{i=0}^k b_{i,k} q^{k-i} =a^k(q+1)$. The polynomials $a^k(q)$ are by construction log-concave. We show by induction over $k$ that this implies strict log-concavity of the polynomials $b^k(q)$. This is sufficient since $b(q)=b^r(q)$. For $k\le 1$, nothing needs to be shown. For $k=2$, we need to check one inequality: \begin{align} b_1^2 &= (a_1 + 2a_0)^2 = a_1^2 + 4 a_0a_1 + 4a_0^2 \\ &\ge a_0a_2 + 4 a_0a_1 + 4a_0^2 > a_0 (a_2 + a_1 + a_0) = b_0 b_2. \end{align} Now let $k\ge 2$. Note that \begin{align} b^{k+1}(q) = a^{k+1}(q+1) = (q+1)a^k(q+1)+a_{k+1} = (q+1)b^k(q) + a_{k+1}. \end{align} This polynomial is strictly log-concave if $ (q+1)(qb^k(q) + a_{k+1}) = q((q+1)b^k(q) + a_{k+1}) + a_{k+1}$ is, since setting the $q^0$ coefficient to zero followed by a division by $q$ preserves strict log-concavity. 
It is an easy exercise to show that multiplication by $(q+1)$ preserves strict log-concavity of a polynomial in $q$. Hence, it is sufficient to prove that $(qb^k(q) + a_{k+1})$ is strictly log-concave. By induction, we only need to check the inequality involving the term $a_{k+1}$, \ie $b_{k,k}^2 > b_{k-1,k}a_{k+1}$: \begin{align} \label{equation:LCfirstLine} b_{k,k}^2 - b_{k-1,k}a_{k+1} &= (a_0+\ldots + a_k)^2 - \sum_{j=0}^{k-1}(k-j)a_j a_{k+1} \\ \label{equation:LCsecondLine} & \ge (a_0+\ldots + a_k)^2 - \sum_{j=0}^{k-1} \sum_{i=1}^{k-j}a_{j+i} a_{k+1-i} \\ &= \sum_{i+j \le k} a_ia_j \ge a_0^2 \ge 1 . \end{align} To see that \eqref{equation:LCfirstLine} is at least \eqref{equation:LCsecondLine}, note that log-concavity of the $a_j$ implies $a_ja_{k+1} \le a_{j+i}a_{k+1-i}$ for $1 \le i \le k-j$. Since the chain ends with a quantity that is at least $1$, the strict inequality $b_{k,k}^2 > b_{k-1,k}a_{k+1}$ follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{Theorem:fvectorStrictlyLogConcave}] By Theorem~\ref{Theorem:kThickeningHlc}, there exists an integer $k$ \st the $h$-poly\-nomial of $M^k$, the $k$-fold thickening of $M$, is log-concave. By Lemma~\ref{Lemma:himpliesfLC}, this implies strict log-concavity of the $f$-polynomial of $M^k$. \eqref{eq:fKthickening} implies that the $f$-polynomial of $M^k$ is strictly log-concave if and only if the $f$-polynomial of $M$ is strictly log-concave. \end{proof} \subsection*{Modes of $f$-vectors} For a unimodal sequence $f_0,\ldots, f_r$, it is interesting to find the location of its \emph{modes}, \ie the element(s) where the maximum of the sequence is attained. \begin{Remark} \label{Remark:ModeLocation} The index of the smallest mode of the $f$-vector of a rank $r$ matroid is at least $\lfloor r/2\rfloor$. In fact, the first half of the $f$-vector of an arbitrary matroid is strictly monotonically increasing \cite[7.5.1.~Proposition]{bjoerner-1992}. The minimum $\lfloor r/2\rfloor$ is attained by the uniform matroid $U_{r,r}$. 
If $M$ is realizable, Theorem~\ref{Theorem:fvectorStrictlyLogConcave} implies that $f_M$ has at most two modes. Some matroids have monotonically increasing $f$-vectors. In fact, it follows from \eqref{eq:fKthickening} that for an arbitrary matroid $M$ and sufficiently large $k$, the $f$-vector of the $k$-fold thickening of $M$ is strictly monotonically increasing. \end{Remark} \section{Zonotopal Algebra and matroid polynomials} \label{Section:ZonotopalAlgebra} \emph{Zonotopal algebra} is the study of several classes of graded vector spaces of polynomials that can be associated with a realization of a matroid over a field of characteristic zero. The Hilbert series of those spaces are matroid invariants. The spaces can be described in various ways and each space has a dual counterpart with the same Hilbert series. The theory of zonotopal algebra was developed by Olga Holtz and Amos Ron \cite{holtz-ron-2011}, extending various previous results \eg on polynomial spaces spanned by box splines \cite{BoxSplineBook}. Related work includes \cite{ardila-postnikov-2009,berget-2010, holtz-ron-xu-2010, lenz-2010,li-ron-2011, moci-2010, sturmfels-xu-2010}. Let $\mathbb{K}$ be a field of characteristic zero and let $X=(x_1,\ldots, x_N)\subseteq \mathbb{K}^r$ be a list of vectors spanning $\mathbb{K}^r$. The two zonotopal spaces that are of interest to us in this paper are the \emph{central $\mathcal{P}$-space} $\mathcal{P}(X)$ and the \emph{internal $\mathcal{P}$-space} $\mathcal{P}_-(X)$. Given $x\in X$, we denote by $p_x$ the linear polynomial in $\mathbb{K}[t_1,\ldots, t_r]$ whose $t_i$ coefficient is the $i$th coordinate of the vector $x$. We define \begin{align} \mathcal{P}(X) &:= \mathop{\mathrm{span}} \left\{ \prod_{ x\in Y } p_x : Y\subseteq X,\, \mathop{\mathrm{rk}}(X\setminus Y)=r \right\} \\ \mathcal{P}_-(X) &:= \mathop{\mathrm{span}} \left\{ \prod_{ x\in Y } p_x : Y\subseteq X,\, \mathop{\mathrm{rk}}(X\setminus (Y,y))=r \text{ for all } y\in X \right\}. 
\label{eq:InternalPspace} \end{align} The Hilbert series of those two spaces are evaluations of the Tutte polynomial $T_X(x,y)$ of the matroid defined by $X$ \cite{ardila-2010,ardila-postnikov-2009,holtz-ron-2011}: \begin{align} \mathop{\mathrm{Hilb}}(\mathcal{P}(X),q) &= q^{N-r} T_X(1, \frac 1q) \label{eq:CentralPTutte} \\ \mathop{\mathrm{Hilb}}(\mathcal{P}_-(X),q) &= q^{N-r} T_X(0, \frac 1q). \end{align} $X^*\in \mathbb{K}^{(N-r)\times N}$ denotes a list of vectors realizing the matroid dual to the matroid realized by $X$. In the central case, we obtain \begin{align} \label{eq:CentralHilbAsTutteEval} q^{r} \mathop{\mathrm{Hilb}}(\mathcal{P}(X^*),\frac 1q) &= T_X(q, 1) \end{align} by dualizing and reversing the order of the coefficients. In the internal case, we obtain \begin{align} \label{eq:InternalHilbAsTutteEval} q^{r} \mathop{\mathrm{Hilb}}(\mathcal{P}_-(X^*),\frac 1q) &= T_X(q, 0) \end{align} again by dualizing and reversing the order of the coefficients. By comparing \eqref{eq:CentralHilbAsTutteEval} and \eqref{eq:InternalHilbAsTutteEval} with the definitions in Section~\ref{Section:MatroidAndMatroidPolynomials} we obtain: \begin{Proposition} Let $\mathbb{K}$ be a field of characteristic zero and let $X\subseteq \mathbb{K}^r$ be a list of vectors spanning $\mathbb{K}^r$. Then, \label{Proposition:TutteFcharacteristicPolynomials} \begin{align} f_X(q) &= T_X(q+1, 1) = (q+1)^{r} \mathop{\mathrm{Hilb}}(\mathcal{P}(X^*),\frac 1{q+1}) \\ (-1)^r\chi_X(-q) &= T_X(q+1,0) = (q+1)^{r} \mathop{\mathrm{Hilb}}(\mathcal{P}_-(X^*),\frac 1{q+1}). \end{align} \end{Proposition} \begin{Example} \label{Example:ZonotopalAlgebra} Let $X=((1,0),(0,1),(1,1))$. $X$ realizes the uniform matroid $U_{2,3}$ and $X^*=(1,1,1)$. The Tutte polynomial is $T_X(x,y)=x^2 +x + y$.
\begin{align*} \mathcal{P}(X^*)&=\mathop{\mathrm{span}}\{1,t,t^2\} & \mathcal{P}_-(X^*)&=\mathop{\mathrm{span}}\{1,t \} \\ q^2\mathop{\mathrm{Hilb}}(\mathcal{P}(X^*),1/q) &= q^2 + q + 1 & q^2\mathop{\mathrm{Hilb}}(\mathcal{P}_-(X^*),1/q) &= q^2 + q \\ f_X(q)&=q^2 + 3q+3 & \chi_X(-q)&=q^2 + 3q+ 2 \end{align*} \end{Example} \begin{Proposition} \label{Proposition:InternalCentralFreeExtension} Let $\mathbb{K}$ be a field of characteristic zero and let $X\subseteq \mathbb{K}^r$ be a list of vectors spanning $\mathbb{K}^r$. Let $x\in \mathbb{K}^r$ be generic, \ie $x$ is not contained in any (linear) hyperplane spanned by the vectors in $X$. Then, \begin{align} \mathcal{P}_-(X,x) = \mathcal{P}(X) . \end{align} \end{Proposition} \begin{proof} By \cite{ardila-2010} and \cite{holtz-ron-2011}, $\mathcal{P}_-(X,x) = \bigcap_{y\in (X,x)} \mathcal{P}((X,x)\setminus y)$. This implies $\mathcal{P}(X)\supseteq \mathcal{P}_-(X,x)$.\footnote{In fact, \cite{holtz-ron-2011} defines $\mathcal{P}_-(X) := \bigcap_{y\in X}\mathcal{P}(X\setminus y)$ and shows that $\mathcal{P}_-(X)$ is the kernel/inverse system of a certain ideal $\mathcal{I}_-(X)$. In \cite{ardila-2010}, it is shown that the kernel of this ideal is equal to \eqref{eq:InternalPspace}.} Equality can be established by a dimension argument: in \cite{holtz-ron-2011}, it is shown that the dimension of $\mathcal{P}(X)$ is equal to the number of bases that can be selected from $X$ and that the dimension of $\mathcal{P}_-(X)$ equals the number of internal bases in $X$, \ie bases that have no internally active elements. It can easily be seen that $B\subseteq (X,x)$ is an internal basis if and only if $B$ is a basis and $x\not\in B$. \end{proof} \begin{Remark} Proposition~\ref{Proposition:TutteFcharacteristicPolynomials} and Proposition~\ref{Proposition:InternalCentralFreeExtension} imply Proposition~\ref{Proposition:IndependenceCharacteristic} for matroids that are realizable over a field of characteristic zero.
This is how we (re-)discovered the connection between the characteristic polynomial and the $f$-polynomial. We believe that in the future, zonotopal algebra will help to solve further problems in matroid theory. \end{Remark} \begin{Question} We have seen that for $\mathcal{P}_{\bullet}(X) \in \{ \mathcal{P}_-(X), \mathcal{P}(X) \}$, the coefficients of the polynomial $(q+1)^{N-r}\mathop{\mathrm{Hilb}}(\mathcal{P}_{\bullet}(X), 1/(q+1))$ \begin{compactenum}[(a)] \item have a combinatorial interpretation and \item form a log-concave sequence. \end{compactenum} For which other zonotopal spaces does this hold? \end{Question} \section{Graph polynomials, zonotopal algebra, and log-concavity} \label{Section:GraphPolynomials} In this section, we present some graph polynomials that are related to the internal and central $\mathcal{P}$-space. In all cases, the connection is made via the Tutte polynomial. Even though this connection is rather straightforward, it has never been stated in the literature. A good survey on graph polynomials that are related to the Tutte polynomial is \cite{ellis-merino-2011} by Joanna Ellis-Monaghan and Criel Merino. Let $G=(V,E)$ be a graph, possibly with multiple edges and loops. Let $M(G)$ denote the cycle matroid of $G$. If $\kappa(G)$ denotes the number of connected components of $G$, then $M(G)$ has rank $\mathop{\mathrm{rk}}(M(G))= \abs{V} - \kappa(G)$. $X(G)$ denotes the reduced oriented incidence matrix of $G$ which realizes the matroid $M(G)$. \subsection{Chromatic and flow polynomials} The chromatic polynomial and the flow polynomial of a graph are related to the internal space $\mathcal{P}_-(X)$. The chromatic polynomial $\chi_G$ of $G$ evaluated at $q\in \mathbb{N}$ equals the number of proper colorings of the graph $G$ with $q$ colors. $\chi_G$ is equal to the characteristic polynomial of $M(G)$ up to a factor: \begin{align} \chi_G(q) = (-1)^{\mathop{\mathrm{rk}}(M(G))} q^{\kappa(G)} T_{M(G)}(1-q,0) . 
\end{align} Hence, the coefficients of $\chi_G(q)$ form a log-concave sequence and \begin{align} (-1)^{\mathop{\mathrm{rk}}(M(G))+\kappa(G)}\chi_G(-q) &=(q+1)^{\mathop{\mathrm{rk}}(M(G))}q^{\kappa(G)} \mathop{\mathrm{Hilb}}(\mathcal{P}_-(X(G)^*),\frac 1{q+1}) . \end{align} Let $\vec{E}$ denote an orientation of the edges of $G$ and let $q\ge 2$. A nowhere-zero $q$-flow is an assignment $E \to \{1,\ldots, q-1 \}$ \st for each vertex, the sum over the incoming edges equals the sum over the outgoing edges modulo $q$. The function $\phi_G(q)$, which counts the number of nowhere-zero $q$-flows, is a polynomial in $q$ and is independent of the orientation $\vec{E}$: \begin{align} \phi_G(q) = (-1)^{\abs{E}-\mathop{\mathrm{rk}}(M(G))}T_{M(G)}(0,1-q) . \end{align} Hence, $\phi_G(q)$ is equal to the characteristic polynomial of the dual matroid. This implies that the coefficients of $\phi_G(q)$ form a log-concave sequence and $\phi_G(q)=(q-1)^{\abs{E}-\mathop{\mathrm{rk}}(M(G))}\mathop{\mathrm{Hilb}}(\mathcal{P}_-(X(G)),1 /(1-q))$. \subsection{Chip-firing games, shellings, and reliability} Three graph/matroid polynomials are related to the central space $\mathcal{P}(X)$: the critical configuration polynomial, the shelling polynomial, and the reliability polynomial. The \emph{critical configuration polynomial} $P_G(q):=T_{M(G)}(1,q)$ is related to chip-firing games played on the graph $G$. Its $q^i$ coefficient equals the number of critical configurations of level $i$ in the chip-firing game played on the graph $G$. The polynomial $h_M(q):=T_M(q,1) $ that we defined in Section~\ref{Section:hvectors} is also called the \emph{shelling polynomial} of the matroid $M$. This polynomial encodes certain combinatorial properties of shellings of the independence complex of the matroid $M$.
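The Tutte-polynomial specializations above (chromatic and flow polynomials, shelling and critical configuration polynomials) are easy to check by machine on tiny graphs. The following is a minimal deletion-contraction sketch; it is not part of the original text, has exponential running time, and is intended only for very small graphs such as $K_3$:

```python
from collections import Counter

def components(vertices, edges):
    """Number of connected components, via union-find."""
    parent = {v: v for v in vertices}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in vertices})

def shift(poly, dx=0, dy=0):
    """Multiply a polynomial {(i, j): c} (meaning c * x^i * y^j) by x^dx * y^dy."""
    return Counter({(i + dx, j + dy): c for (i, j), c in poly.items()})

def tutte(vertices, edges):
    """Tutte polynomial T(G; x, y) by deletion-contraction."""
    if not edges:
        return Counter({(0, 0): 1})
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # loop: T = y * T(G - e)
        return shift(tutte(vertices, rest), dy=1)
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    fewer = [w for w in vertices if w != v]
    if components(vertices, rest) > components(vertices, edges):
        return shift(tutte(fewer, contracted), dx=1)   # bridge: T = x * T(G / e)
    return tutte(vertices, rest) + tutte(fewer, contracted)  # T(G-e) + T(G/e)

def evaluate(poly, xv, yv):
    return sum(c * xv ** i * yv ** j for (i, j), c in poly.items())

# K_3: three vertices, three edges, rank 2, one component
T = tutte([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
assert dict(T) == {(2, 0): 1, (1, 0): 1, (0, 1): 1}        # T = x^2 + x + y
# chromatic polynomial chi(q) = (-1)^rk q^kappa T(1-q, 0): 6 proper 3-colorings
assert (-1) ** 2 * 3 * evaluate(T, 1 - 3, 0) == 6
# flow polynomial phi(q) = (-1)^(|E|-rk) T(0, 1-q) = q - 1
assert -evaluate(T, 0, 1 - 4) == 3
```

For $K_3$ the computed polynomial is $x^2+x+y$, matching $\chi_{K_3}(q)=q^3-3q^2+2q$ and $\phi_{K_3}(q)=q-1$.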
By \eqref{eq:CentralPTutte}, the shelling polynomial $h_M(q)$ and the critical configuration polynomial $P_G(q)$ are evaluations of the Hilbert series of the central $\mathcal{P}$-space of a realization of $M^*$ resp.\ of $X(G)$. For further information on those two polynomials, see \cite{bjoerner-1992} and \cite[Sections 6.4 and 6.6]{ellis-merino-2011}. \begin{Remark} By Theorem~\ref{Theorem:kThickeningHlc}, the shelling polynomial of a $k$-fold thickening of a matroid is log-concave for large $k$. By duality, for large $k$, the coefficients of the critical configuration polynomial are log-concave for a $k$-fold subdivision of the graph $G$. By a $k$-fold subdivision we mean the operation of subdividing each edge of $G$ into $k$ edges. This operation is dual to replacing an edge by $k$ parallel copies of itself. \end{Remark} \smallskip Let $G=(V,E)$ be a connected graph on $n$ vertices. Let $R_G(p)$ denote the probability that $G$ is connected if each edge is independently removed with probability $p$. $R_G(p)$ is a polynomial \cite{brown-colbourn-1994}. It is called the \emph{reliability polynomial} of $G$ and it can be expressed in the following way: \begin{align} R_G(p) &= (1-p)^{n-1} \sum_{i=0}^{\abs{E}-n+1} h_i p^i \\ &= (1-p)^{n-1} p^{\abs{E}-n+1} T_G(1,\frac 1p) . \end{align} The $h_i$ denote the coefficients of the $h$-polynomial of the cographic matroid of $G$, \ie the matroid dual to the cycle matroid. The relationship between the $h$-vector and the reliability polynomial implies that proving bounds for the $h$-vector might have some real-world applications in determining the reliability of a network. Brown and Colbourn \cite[p. 117]{brown-colbourn-1994} state that if log-concavity of the $h$-vector \emph{``holds for matroids arising in reliability problems, it would imply stronger constraints on the relation between coefficients in the $h$-vector than does Stanley's conditions.
These conditions can be incorporated in the Ball-Provan strategy for computing reliability bounds and, hence, would lead to an efficient bounding technique of the reliability polynomial.''} \begin{Example} Let $G$ be the complete graph on three vertices. Its cycle matroid is realized by the matrix $X$ in Example~\ref{Example:ZonotopalAlgebra}. Recall that the Tutte polynomial of this matroid is $T_G(x,y) = x^2 + x + y$. \begin{align*} \chi_G(q) &= q^3 - 3q^2 + 2q & \phi_G(q) &= q-1 \\ h_G(q) &= q^2 + q + 1 & P_G(q) &= q+2 & R_G(p) &= (1-p)^2 (2p+1) \end{align*} \end{Example} \section*{Acknowledgments} I would like to thank Olga Holtz, Felipe Rinc\'on, and Luca Moci for many stimulating conversations about Mason's conjecture. I also thank June Huh whose comments on an earlier version of this paper led to a simplification of the proof of Corollary~\ref{Corollary:Mason}. \bibliographystyle{amsplain}
https://arxiv.org/abs/2001.10770
Array Codes for Functional PIR and Batch Codes
A functional PIR array code is a coding scheme which encodes some $s$ information bits into a $t\times m$ array such that every linear combination of the $s$ information bits has $k$ mutually disjoint recovering sets. Every recovering set consists of some of the array's columns, while it is allowed to read at most $\ell$ encoded bits from every column in order to recover the requested linear combination of the information bits. Functional batch array codes impose a stronger property where every multiset request of $k$ linear combinations has $k$ mutually disjoint recovering sets. Locality functional array codes demand that the size of every recovering set is restricted to be at most $r$. Given the values of $s, k, t, \ell,r$, the goal of this paper is to study the optimal value of the number of columns $m$ such that these codes exist. Several lower bounds are presented as well as explicit constructions for several of these parameters.
\section{Introduction}\label{sec:intro} \renewcommand{\baselinestretch}{1}\normalsize \emph{Private information retrieval (PIR) codes} and \emph{batch codes} are families of codes which have several applications such as PIR protocols~\cite{BIKR02,CKGS98,DvirGopi16_1,FVY15,G04,Y10}, erasure codes in distributed storage systems~\cite{PHO13,RPDV14,tamo2014family}, one-step majority-logic decoding~\cite{LC04,M63}, load balancing in storage, cryptographic protocols~\cite{IKOS04}, switch codes~\cite{BCSY17,CGTZ15,WKCB17}, and more. They have been recently generalized to \emph{functional PIR} and \emph{functional batch} codes~\cite{ZYE19}. In this work we study these families of codes when they are used as array codes. The setup of storing information in array codes works as follows. Assume $s$ bits are encoded to be stored in a $t\times m$ array, where each column corresponds to a \emph{server} that stores the encoded bits. The encoded bits should satisfy several properties which depend upon whether the resulting code is a PIR, batch, functional PIR, or functional batch code. Given a design parameter $k$ of the code, it is required in PIR codes that every information bit has $k$ mutually disjoint \emph{recovering sets}. Here, a recovering set is a set of columns, i.e., servers, such that the information bit can be recovered from the encoded bits stored in these columns. In case it is possible to read only a portion of the encoded bits in every column, the maximum number of bits read from each column is denoted by $\ell$. An array code with these parameters and properties is defined as an \emph{$(s,k,m,t,\ell)$ PIR array code}. Furthermore, it will be called an \emph{$(s,k,m,t,\ell)$ batch array code} if every \emph{multiset} request of $k$ information bits has $k$ mutually disjoint recovering sets.
In case the requests are not only of information bits but of any linear combination of them, we receive an \emph{$(s,k,m,t,\ell)$ functional PIR array code} if the same linear combination is requested $k$ times, or an \emph{$(s,k,m,t,\ell)$ functional batch array code} for a multiset request of $k$ linear combinations. Yet another family of codes that will be studied in this paper will be referred to as \emph{locality functional array codes}. Here we assume that $\ell=t$ and an \emph{$(s,k,m,t,r)$ locality functional array code} guarantees that every linear combination ${\boldsymbol v}$ of the information bits has $k$ mutually disjoint recovering sets, where each is of size at most $r$. The main figure of merit when studying these families of codes is the number of columns, i.e., servers, which should be optimized given the values of $s,k,t,\ell$. Thus, the smallest $m$ such that an $(s,k,m,t,\ell)$ PIR, batch, functional PIR, functional batch code exists, is denoted by $P_{t,\ell}(s,k), B_{t,\ell}(s,k), FP_{t,\ell}(s,k), FB_{t,\ell}(s,k)$, respectively. The study of the value of $P_{t,\ell}(s,k)$ was initiated in~\cite{FVY15} and since then several more results have appeared; see e.g.~\cite{BE17,BE19,CKYZ19,ZWWG19}. Note that the first work~\cite{IKOS04} which studied batch codes defined them in their array code setup and only later on were they studied in their one-dimensional case, also known as \emph{primitive batch codes}; see e.g.~\cite{AY17,LS15,RSDG16,VY16, ZS16}. Functional PIR and batch codes have been recently studied in~\cite{ZYE19} but only for vectors, that is, $t=\ell=1$. Thus, this paper initiates the study of functional PIR and batch codes in the array setup. The motivation to study functional PIR and batch codes originates from the observation that in many cases and protocols, such as PIR, the user is not necessarily interested in one of the information bits, but rather in some linear combination of them.
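For intuition, the functional PIR property can be checked by brute force on toy codes with $t=\ell=1$. The sketch below is not from the paper; it encodes each bucket's stored linear combination as an $s$-bit mask and assumes linear decoding, i.e., that a recovering set reconstructs the request as the XOR of some of its columns. It confirms that the simple parity code storing $x_1, x_2, x_1+x_2$ serves every linear combination twice, while storing only $x_1, x_2$ does not:

```python
from itertools import chain, combinations

def nonempty_subsets(indices):
    return chain.from_iterable(combinations(indices, r) for r in range(1, len(indices) + 1))

def xor_columns(cols, idxs):
    acc = 0
    for i in idxs:
        acc ^= cols[i]
    return acc

def is_functional_pir(cols, s, k):
    """Brute-force check of the k-disjoint-recovering-set property for every
    nonzero request, with t = l = 1 and each bucket holding one linear
    combination of x_1..x_s encoded as an s-bit mask (x_1 -> 0b0..01)."""
    for v in range(1, 2 ** s):
        for combo in combinations(list(nonempty_subsets(range(len(cols)))), k):
            flat = [i for sub in combo for i in sub]
            if len(flat) == len(set(flat)) and all(xor_columns(cols, sub) == v for sub in combo):
                break  # found k pairwise disjoint recovering sets for v
        else:
            return False
    return True

# the simple parity code x1, x2, x1+x2 has 2 disjoint recovering sets per request
assert is_functional_pir([0b01, 0b10, 0b11], s=2, k=2)
# ... while two uncoded buckets cannot serve, e.g., x1 twice
assert not is_functional_pir([0b01, 0b10], s=2, k=2)
```

This matches the fact, recalled later in the paper, that $FP(2,2)=3$ is achieved by the simple parity code.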
Furthermore, functional batch codes are closely related to the family of \emph{random I/O (RIO) codes}, introduced by Sharon and Alrod~\cite{SA13}, which are used to improve the random input/output performance of flash memories. A variant of RIO codes, called \emph{parallel RIO codes}, was introduced in~\cite{YM16}, and linear codes of this family of codes have been studied in~\cite{YKL17}. It was then shown in~\cite{ZYE19} that in fact linear parallel RIO codes are equivalent to functional batch codes. \begin{comment} A related family of codes to functional batch codes are random I/O (RIO) codes. This family of codes was recently introduced by Sharon and Alrod [48] and provides a coding scheme to improve the random input/output performance of ash memories. An (n;M; t) RIO code stores t pages in n cells with t + 1 levels such that it is enough to sense a single read threshold in order to read any of the t pages. Sharon and Alrod showed in [48] that the design of RIO codes is equivalent to the design of write-once memory (WOM) codes [17, 23, 45, 64]. The latter family of codes attracted substantial attention in recent years in order to improve the lifetime of ash memories by allowing writing multiple messages to the memory without the need for an erase operation. However, while in WOM codes, the messages are received one after the other and thus are not known all in advance, in RIO codes the information of all logical pages can be known in advance when programming the cells. This variant of RIO codes, called parallel RIO codes, was introduced in [65]. A recent construction of parallel RIO codes [66] used the coset coding scheme [17] with Hamming codes in order to construct parallel RIO codes. In fact, this construction is equivalent to the requirements of functional batch codes, and thus every functional batch code can be used as a parallel RIO code as well. 
The other direction does not necessarily hold since parallel RIO codes do not have to be linear, as opposed to functional batch codes. The codes from [66] gave two constructions of functional batch codes (which are parallel RIO codes) with the following parameters: (s = 3; k = 4; n = 7) and (s = 4; k = 8; n = 15). \end{comment} The rest of the paper is organized as follows. In Section~\ref{sec:defs}, we formally define the codes studied in the paper, discuss some of the previous related work, and list several basic properties. In Section~\ref{sec:bounds}, we show lower bounds on the number of servers for functional PIR and batch array codes. Section~\ref{sec:cons} lists several code constructions which are based on the Gadget Lemma, covering codes, and several more results for $k=1,2$. Section~\ref{sec:cons_spec} presents three constructions of array codes and in Section~\ref{sec:analysis} the rates of these codes are studied. Section~\ref{sec:Locality} studies locality functional array codes. Lastly, Section~\ref{sec:conc} concludes the paper. \section{Definitions and Preliminaries}\label{sec:defs} This work is focused on five families of codes, namely \emph{private information retrieval} (\emph{PIR}) codes that were defined recently in~\cite{FVY15}, \emph{batch codes} that were first studied by Ishai et al. in~\cite{IKOS04}, their extension to \emph{functional PIR codes} and \emph{functional batch codes} that was investigated in~\cite{ZYE19}, \emph{and locality functional codes}. In these five families of codes, $s$ information bits are encoded to $m$ bits. While for PIR codes it is required that every information bit has $k$ mutually disjoint recovering sets, batch codes impose this property for every multiset request of $k$ bits. 
Similarly, for functional PIR codes it is required that every linear combination of the information bits has $k$ mutually disjoint recovering sets, and functional batch codes impose this property for every multiset request of $k$ linear combination of the bits. Lastly, similar to functional PIR codes, for locality functional codes it is required that the size of every recovering set is limited to be at most $r$. While this description of the codes corresponds to the case of one-dimensional codewords, the goal of this work is to study their extension as \emph{array codes}, which is defined as follows. The set $[n]$ denotes the set of integers $\{1,2,\ldots,n\}$ and $\Sigma=\mathbb{F}_2$. We start with the formal definition of the first four families of codes that will be studied in the paper, while we defer the definition of locality functional array codes to Section~\ref{sec:Locality}. \begin{definition}\label{ArrayCodesDef} \begin{enumerate} \item An $(s,k,m,t,\ell)$ \textbf{PIR array code} over $\Sigma$ is defined by an encoding map ${\cal E}:\Sigma^s \rightarrow (\Sigma^t)^m$ that encodes $s$ information bits $x_1,\dots,x_s$ into a $t\times m$ array and a decoding function ${\cal D}$ that satisfies the following property. For any $i \in [s]$ there is a partition of the columns into $k$ recovering sets $S_1,\ldots,S_k \subseteq [m]$ such that $x_i$ can be recovered by reading at most $\ell$ bits from each column in $S_j,j\in[k]$. \item An $(s,k,m,t,\ell)$ \textbf{batch array code} over $\Sigma$ is defined by an encoding map ${\cal E}:\Sigma^s \rightarrow (\Sigma^t)^m$ that encodes $s$ information bits $x_1,\dots,x_s$ into a $t\times m$ array and a decoding function ${\cal D}$ that satisfies the following property. 
For any multiset request of $k$ bits $i_1,\ldots,i_k\in[s]$ there is a partition of the columns into $k$ recovering sets $S_1,\ldots,S_k \subseteq [m]$ such that $x_{i_j}, j\in[k]$ can be recovered by reading at most $\ell$ bits from each column in $S_j$. \item An $(s,k,m,t,\ell)$ \textbf{functional PIR array code} over $\Sigma$ is defined by an encoding map ${\cal E}:\Sigma^s \rightarrow (\Sigma^t)^m$ that encodes $s$ information bits $x_1,\dots,x_s$ into a $t\times m$ array and a decoding function ${\cal D}$ that satisfies the following property. For any request of a linear combination ${\boldsymbol v}$ of the information bits, there is a partition of the columns into $k$ recovering sets $S_1,\ldots,S_k \subseteq [m]$ such that ${\boldsymbol v}$ can be recovered by reading at most $\ell$ bits from each column in $S_j,j\in[k]$. \item An $(s,k,m,t,\ell)$ \textbf{functional batch array code} over $\Sigma$ is defined by an encoding map ${\cal E}:\Sigma^s \rightarrow (\Sigma^t)^m$ that encodes $s$ information bits $x_1,\dots,x_s$ into a $t\times m$ array and a decoding function ${\cal D}$ that satisfies the following property. For any multiset request of $k$ linear combinations ${\boldsymbol v}_1,\ldots, {\boldsymbol v}_k$ of the information bits, there is a partition of the columns into $k$ recovering sets $S_1,\ldots,S_k \subseteq [m]$ such that ${\boldsymbol v}_{j}, j\in[k]$ can be recovered by reading at most $\ell$ bits from each column in $S_j$. \end{enumerate} \end{definition} We refer to each column as a \emph{bucket} and to each entry in a bucket as a \emph{cell}. Furthermore, it is said that a cell stores a \emph{singleton} if one of the information bits is stored in the cell. In the rest of the paper we will refer to every linear combination of the information bits as a binary vector of length $s$, which indicates the information bits in this linear combination. Our goal is to fix the values of $s,k,t$ and $\ell$ and then seek to optimize the value of $m$. 
In particular, we will have that $t$ and $\ell$ are fixed, where $t\geq \ell$, and then study the growth of $m$ as a function of $s$ and $k$. Hence, we denote by $P_{t,\ell}(s,k), B_{t,\ell}(s,k), FP_{t,\ell}(s,k), FB_{t,\ell}(s,k)$ the smallest $m$ such that an $(s,k,m,t,\ell)$ PIR, batch, functional PIR, functional batch code exists, respectively. In case $\ell=t=1$ we will simply remove them from these notations. The following upper and lower bounds on the number of buckets for PIR array codes have been shown in~\cite{BE19,CKYZ19,ZWWG19} and are stated in the following theorem. \begin{theorem}\label{theorem:PIRLB} \begin{enumerate} \item $P_{t,t}(s,k) \geq \frac{2\cdot k \cdot s}{s+t}$,~\cite[Th. 3]{BE19}.\label{theorem:part1} \item For any integer $t \geq 2$ and any integer $s>t$, $P_{t,t}(s,k) \geq \frac{k\cdot s \cdot(2s - 2t + 1)}{(2s - 2t+1)t+(s-t)^2}$,~\cite[Th. 4]{BE19}.\label{theorem:part2} \item For any integer $t \geq 2$ and any integer $s>2t$, $P_{t,t}(s,k) \geq \frac{2k\cdot s \cdot(s + 1)}{(s-t)^2 + 3st - t^2 + 2t}$,~\cite[Th. 16]{ZWWG19}.\label{theorem:part4} \item For any integer $t \geq 2$ and any integer $t < s \leq 2t$, $P_{t,t}(s,k) \leq \frac{k\cdot s \cdot(2s - 2t +1)}{(2s - 2t+1)t+(s-t)^2}$,~\cite[Th. 6]{BE19}.\label{theorem:part3} \item For any integers $p,t$ with $p\leq t+1$, $P_{t,t}(pt,k) \leq m$, where $k = {t \choose t-p+1}{s \choose t}$ and $m = {t \choose t-p+1}{s \choose t}+{s-p \choose t-p+1}{s-1 \choose p-1}$,~\cite[Th. 10]{CKYZ19}.\label{theorem:part5} \end{enumerate} \end{theorem} Note that for any two integers $t \geq 2$ and $s>t$, the bound in Theorem~\ref{theorem:PIRLB}\eqref{theorem:part2} improves upon the bound in Theorem~\ref{theorem:PIRLB}\eqref{theorem:part1}. This is verified by showing that $\frac{k\cdot s \cdot(2s - 2t + 1)}{(2s - 2t+1)t+(s-t)^2} - \frac{2\cdot k \cdot s}{s+t} \geq 0$ by basic algebraic manipulations. 
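The claimed comparison can also be verified mechanically: clearing denominators reduces it to the identity $(2s-2t+1)(s+t)-2\big((2s-2t+1)t+(s-t)^2\big)=s-t$, which is positive for $s>t$. A small sketch (not from the paper) checking this over a grid of parameters with exact rational arithmetic:

```python
from fractions import Fraction

def pir_lb_basic(s, k, t):
    # lower bound of Theorem (1): 2ks / (s + t)
    return Fraction(2 * k * s, s + t)

def pir_lb_improved(s, k, t):
    # lower bound of Theorem (2): ks(2s - 2t + 1) / ((2s - 2t + 1)t + (s - t)^2)
    return Fraction(k * s * (2 * s - 2 * t + 1), (2 * s - 2 * t + 1) * t + (s - t) ** 2)

# the difference of the two bounds is nonnegative whenever s > t >= 2;
# clearing denominators reduces the claim to the identity checked below
for t in range(2, 12):
    for s in range(t + 1, 40):
        assert (2*s - 2*t + 1) * (s + t) - 2 * ((2*s - 2*t + 1) * t + (s - t) ** 2) == s - t
        for k in (1, 2, 5):
            assert pir_lb_improved(s, k, t) >= pir_lb_basic(s, k, t)
```

Using `Fraction` avoids any floating-point rounding in the comparison of the two bounds.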
However, the lower bound in Theorem~\ref{theorem:PIRLB}\eqref{theorem:part1} holds for all values of $s$, while the one in Theorem~\ref{theorem:PIRLB}\eqref{theorem:part2} holds only for $s>t$. Also, in~\cite{ZWWG19} it was shown that for any two integers $t \geq 2$ and $s>2t$, the bound in Theorem~\ref{theorem:PIRLB}\eqref{theorem:part4} is stronger than the bound in Theorem~\ref{theorem:PIRLB}\eqref{theorem:part2}. The result in Theorem~\ref{theorem:PIRLB}\eqref{theorem:part3} is achieved by Construction 1 in~\cite{BE19}. The authors of~\cite{BE19} presented another construction which is not reported here due to its length. For the exact details please refer to~\cite[Construction 4 and Th.8]{BE19}. This construction was then improved in~\cite{ZWWG19} and in~\cite{CKYZ19}. Several more constructions of PIR array codes have also been presented in~\cite{CKYZ19,ZWWG19}. \begin{comment} Some of the recently known results on functional PIR and functional batch codes from~\cite{ZYE19} are summarized in the next theorem. The function $H(\cdot)$ denotes the binary entropy function defined by $H(p)=-p\log{p}-(1-p)\log{(1-p)}$. \begin{theorem}\label{theorem:PrevFPFB} \begin{enumerate} \item\label{th3:part1} For any even integer $k\geq 4$, $\lim_{s\rightarrow\infty} \frac{FP(s,k)}{s} \geq \frac{1}{H(1/k)}$. \item\label{th3:part2} For any $t \geq 2$, $FP(2t,3)=3t+2$, $FP(2t,4)=3t+3$, $3t+3\le FP(2t+1,3)\le 3t+4$ and $3t+4\le FP(2t+1,4)\le3t+5$. \item\label{th3:part3} For any positive integer $k$, $\lim_{s\rightarrow\infty} \frac{FB(s,k)}{s} \ge \frac{k}{\log(k+1)}.$ \item\label{th3:part4} If $c_1=\frac{1}{2}$ and $c_{k+1}$ is the root of the polynomial $H(z)=H(c_k)-zH(c_k)$, then $\lim_{s\rightarrow\infty} \frac{FB(s,k)}{s}\le \frac{1}{H(c_{k})}.$ \end{enumerate} \end{theorem} \end{comment} The following theorem summarizes some of the known basic previous results, as well as several new ones. The proofs are rather simple and are thus omitted.
\begin{theorem}\label{theorem:Basic} For all positive integers $s,k,t,\ell,a$: \begin{enumerate} \item $P_{t,\ell}(s,1) = B_{t,\ell}(s,1) = \lceil s/t\rceil$. \item $FP_{t,\ell}(s,k_1+k_2) \leq FP_{t,\ell}(s,k_1) + FP_{t,\ell}(s,k_2)$ (also for $P$, $B$, and $FB$). \item $FP_{t,\ell}(s,a \cdot k) \leq a \cdot FP_{t,\ell}(s,k)$ (also for $P$, $B$, and $FB$). \label{theorem:partak} \item $FP_{t,\ell}(s_1 + s_2,k) \leq FP_{t,\ell}(s_1,k) + FP_{t,\ell}(s_2,k)$ (also for $P$, $B$, and $FB$). \label{theorem:parts1s2} \item $FP_{t,\ell}(a \cdot s,k) \leq a \cdot FP_{t,\ell}(s,k)$ (also for $P$, $B$, and $FB$). \label{theorem:partas} \item $FP_{t,\ell}(s,k) \leq a \cdot FP_{a\cdot t,\ell}(s,k)$ (also for $P$, $B$, and $FB$). \label{theorem:partat} \end{enumerate} \end{theorem} One of the simplest ways to construct array PIR and batch codes uses the Gadget Lemma, which was first proved in~\cite{IKOS04}. \begin{lemma}(\textbf{The Gadget Lemma})\label{lem:gadget} Let ${\cal C}$ be an $(s,k,m,1,1)$ batch code. Then, for any positive integer $t$, there exists an $(ts,k,m,t,1)$ batch array code ${\cal C}'$ (denoted also by $t\cdot {\cal C}$). \end{lemma} It is easily verified that the Gadget Lemma holds also for PIR codes and therefore $P_{t,\ell}(s,k) \leq P_{t,1}(s,k) \leq P(\lceil s/t\rceil,k)$ and $B_{t,\ell}(s,k) \leq B_{t,1}(s,k) \leq B(\lceil s/t\rceil,k)$. However, unfortunately, the Gadget Lemma does not hold in general for functional PIR and batch codes. Even a weaker variation of the Gadget Lemma, where $\ell=t$, does not hold in general for functional PIR and batch codes either. Assume to the contrary that the following holds: if there is an $(s,k,m,1,1)$ functional PIR code ${\cal C}$, then for any positive integer $t$ there exists a $(ts,k,m,t,t)$ functional PIR array code. This would imply that $FP_{t,t}(ts,k) \leq FP(s,k)$. However, it is known that $FP(2,2) = 3$ by the simple parity code. Thus, under this assumption it would hold that $FP_{2,2}(4,2) \leq FP(2,2) = 3$.
But, according to a lower bound on functional PIR array codes, which will be shown in Theorem~\ref{theorem:LBFP4}, it holds that $FP_{2,2}(4,2) \geq \frac{2\cdot 2 \cdot 15}{15 + 3} > 3$, which is a contradiction. \section{Lower Bounds on Array Codes}\label{sec:bounds} In this section we present several lower bounds on functional PIR and batch array codes. Let ${a \brace b}$ be the Stirling number of the second kind, which calculates the number of partitions of a set of $a$ elements into $b$ nonempty subsets. It is well known that $ {a\brace b}=\frac{1}{b!}\sum_{i=0}^b (-1)^{b-i}{b\choose i} i^a. $ \begin{comment} \begin{theorem}\label{theorem:LBFB} Let $s,k,t$ and $\ell$ be positive integers. Then, \begin{enumerate} \item\label{thLBFB:part1} $FB_{t,\ell}(s,k) \geq m $, where $m$ is the smallest positive integer such that $ \sum_{i=k}^{m} {m \choose i} \cdot {i \brace k} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i} \geq {2^s + k - 2 \choose k}.$ \item $FP_{t,\ell}(s,k) \geq m $, where $m$ is the smallest positive integer such that $\sum_{i=k}^{m} {m \choose i} \cdot {i \brace k} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i} \geq 2^s - 1.$ \item $FP_{t,\ell}(s,k) \geq m $, where $m$ is the smallest positive integer such that $ \sum_{i=1}^{m-k+1} {m \choose i} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i} \geq k \cdot (2^s -1).$ \item $FP_{t,\ell}(s,k) \geq \left\lceil \frac{\log_2(k(2^s-1)+1)}{\log_2(\sum_{i=0}^{\ell} {t \choose i})}\right\rceil$. \end{enumerate} \end{theorem} \end{comment} \begin{theorem}\label{theorem:LBFB} For all $s,k,t$ and $\ell$ positive integers $FB_{t,\ell}(s,k) \geq m^* $, where $m^*$ is the smallest positive integer such that $$ \sum_{i=k}^{m^*} {m^* \choose i} \cdot {i \brace k} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i} \geq {2^s + k - 2 \choose k}.$$ \end{theorem} \begin{IEEEproof} Let ${\cal C}$ be an optimal $(s,k,m^*,t,\ell)$ functional batch array code. 
Since there are $s$ information bits, there are $(2^s - 1)$ possible linear combination requests and there are ${2^s + k - 2 \choose k}$ possible multiset requests of length $k$. For each multiset request of $k$ linear combinations ${\boldsymbol v}_1,\ldots, {\boldsymbol v}_k$ of the information bits, there is a partition of the buckets of the code ${\cal C}$ into $k$ recovering sets $S_1,\ldots,S_k \subseteq [m^*]$ such that ${\boldsymbol v}_{j}, j\in[k]$ can be recovered by reading at most $\ell$ bits from each column in $S_j$. In each bucket there are $t$ cells, of which at most $\ell$ can be read. Thus, there are $\sum_{j=1}^{\ell} {t \choose j}$ nonzero linear combinations that can be obtained from one bucket. For any positive integer $n$, there are $(\sum_{j=1}^{\ell} {t \choose j})^n$ nonzero linear combinations that can be obtained from $n$ buckets while using all the $n$ buckets. In order to satisfy a multiset request, the buckets must be divided into $k$ disjoint recovering sets such that each set can satisfy one requested linear combination. There are $$\sum_{i = k}^{m^*} {m^* \choose i} \cdot {i \brace k} $$ possibilities to divide at most $m^*$ buckets into $k$ nonempty disjoint sets. Each subset of the buckets of size at least $k$ can be divided into $k$ nonempty sets. Thus, we take the sum over all the subsets of the buckets of size at least $k$, where for each such subset we count the number of possibilities to divide it into $k$ nonempty subsets using the Stirling number of the second kind. From each subset of size $p$ where $k\leq p\leq m^*$, there exist $(\sum_{j=1}^{\ell} {t \choose j})^{p}$ linear combinations.
Therefore, for a given partition of $i$, $k\leq i \leq m^*$, buckets into $k$ subsets whose sizes are $p_1,p_2,\ldots,p_k$ with $\sum_{j=1}^{k} p_j = i$, the number of different $k$-sets of linear combinations, where each linear combination is taken from one subset, is $$\prod_{r=1}^{k} \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{p_r} = \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i}.$$ In order to satisfy every multiset request by a set of $k$ linear combinations, one for each requested linear combination, the number of different $k$-sets of linear combinations, where each linear combination is taken from one subset of the buckets, over all partitions of the $m^*$ buckets into $k$ nonempty disjoint subsets, must be at least the number of multiset requests. Thus, \begin{equation}\label{eq:LBFB} \sum_{i=k}^{m^*} {m^* \choose i} \cdot {i \brace k} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i} \geq {2^s + k - 2 \choose k}. \end{equation} \end{IEEEproof} A similar lower bound can be obtained for functional PIR array codes. While in functional batch array codes there exist ${2^s + k - 2 \choose k}$ possible multiset requests, in functional PIR array codes there exist $2^s - 1$ possible requests. \begin{corollary}\label{cor:LBFP3} For all positive integers $s,k,t$, and $\ell$, $FP_{t,\ell}(s,k) \geq m^*$, where $m^*$ is the smallest positive integer such that \begin{equation}\label{eq:LBFP3} \sum_{i=k}^{m^*} {m^* \choose i} \cdot {i \brace k} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i} \geq 2^s - 1. \end{equation} \end{corollary} Another combinatorial bound for functional PIR array codes is shown in the following theorem.
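Before that, we note that the smallest $m^*$ in Theorem~\ref{theorem:LBFB} and Corollary~\ref{cor:LBFP3} is easy to compute directly from the formulas above. The following Python sketch is illustrative only (the function names are ours); it uses the explicit formula for the Stirling numbers of the second kind:

```python
from math import comb, factorial

def stirling2(a, b):
    # S(a, b) = (1/b!) * sum_{i=0}^{b} (-1)^(b-i) * C(b, i) * i^a
    return sum((-1) ** (b - i) * comb(b, i) * i ** a for i in range(b + 1)) // factorial(b)

def smallest_m(s, k, t, ell, batch=False):
    # Smallest m with sum_{i=k}^{m} C(m,i) * S(i,k) * B^i >= RHS, where
    # B = sum_{j=1}^{ell} C(t,j); the right-hand side is C(2^s+k-2, k)
    # for the functional batch bound and 2^s - 1 for the functional PIR bound.
    B = sum(comb(t, j) for j in range(1, ell + 1))
    rhs = comb(2 ** s + k - 2, k) if batch else 2 ** s - 1
    m = k
    while sum(comb(m, i) * stirling2(i, k) * B ** i for i in range(k, m + 1)) < rhs:
        m += 1
    return m
```

For instance, for $s=4$, $k=2$, $t=\ell=2$ the PIR version of the bound gives $m^*=3$, consistent with the discussion at the beginning of this section.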
\begin{theorem}\label{theorem:LBFP1} For all positive integers $s,k,t$, and $\ell$, $FP_{t,\ell}(s,k) \geq m^*$, where $m^*$ is the smallest positive integer such that $$ \sum_{i=1}^{m^*-k+1} {m^* \choose i} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i} \geq k \cdot (2^s -1).$$ \end{theorem} \begin{IEEEproof} Let ${\cal C}$ be an optimal $(s,k,m^*,t,\ell)$ functional PIR array code. Since there are $s$ information bits, there are $(2^s - 1)$ possible requests. The code ${\cal C}$ must satisfy each request $k$ times by $k$ linear combinations from $k$ disjoint recovering sets. In other words, for each request there are $k$ nonempty disjoint recovering sets, such that each set has a linear combination equal to the request. Each recovering set must be of size at most $m^* - k + 1$, in order to leave room for the other $k-1$ nonempty recovering sets. Each bucket has $t$ cells, of which at most $\ell$ can be read. Thus, there are $\sum_{j=1}^{\ell} {t \choose j}$ nonzero linear combinations that can be obtained from one bucket and $(\sum_{j=1}^{\ell} {t \choose j})^n$ from $n$ buckets, for any positive integer $n$, when all $n$ buckets are used. We are interested in counting the different linear combinations that can be obtained from at most $m^*-k+1$ buckets. Thus, there are $$\sum_{i=1}^{m^*-k+1} {m^* \choose i} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i}$$ such linear combinations. It must hold that the number of different linear combinations that can be obtained from at most $m^* - k + 1$ buckets is at least $k$ times the number of the possible requests. Thus, \begin{equation}\label{eq:LBFP1} \sum_{i=1}^{m^*-k+1} {m^* \choose i} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i} \geq k \cdot (2^s -1). \end{equation} \end{IEEEproof} The following corollary is derived from Theorem~\ref{theorem:LBFP1}.
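Both the bound of Theorem~\ref{theorem:LBFP1} and the closed-form relaxation stated next can be evaluated numerically; a small Python sketch (the function names are ours, given for illustration only):

```python
from math import ceil, comb, log2

def lbfp1_bound(s, k, t, ell):
    # Smallest m with sum_{i=1}^{m-k+1} C(m,i) * B^i >= k * (2^s - 1),
    # where B = sum_{j=1}^{ell} C(t,j), as in Theorem LBFP1.
    B = sum(comb(t, j) for j in range(1, ell + 1))
    m = k
    while sum(comb(m, i) * B ** i for i in range(1, m - k + 2)) < k * (2 ** s - 1):
        m += 1
    return m

def lbfp2_bound(s, k, t, ell):
    # Closed-form relaxation: ceil(log2(k(2^s-1)+1) / log2(sum_{i=0}^{ell} C(t,i))).
    return ceil(log2(k * (2 ** s - 1) + 1) / log2(sum(comb(t, i) for i in range(ell + 1))))
```

For $s=4$, $k=2$, $t=\ell=2$ both bounds give $3$; in general the first bound is at least as strong as the second, since the relaxation only enlarges the left-hand side of the inequality.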
\begin{corollary}\label{theorem:LBFP2} $FP_{t,\ell}(s,k) \geq \left\lceil \frac{\log_2(k(2^s-1)+1)}{\log_2(\sum_{i=0}^{\ell} {t \choose i})}\right\rceil$, for all positive integers $s,k,t$, and $\ell$. \end{corollary} \begin{IEEEproof} The proof of Theorem~\ref{theorem:LBFP1} can be modified by using a weaker constraint, namely that the size of each subset is at most $m$. Thus, it must hold that $\sum_{i=1}^{m} {m \choose i} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i} \geq k \cdot (2^s -1)$. From the equality $\sum_{i=0}^{m} {m \choose i} \cdot x^i = (x+1)^m$, we get that \begin{align*} \sum_{i=1}^{m} {m \choose i} \cdot \left(\sum_{j=1}^{\ell} {t \choose j} \right)^{i} &= \left(1+\sum_{j=1}^{\ell} {t \choose j}\right)^m -1 &\\ & = \left(\sum_{j=0}^{\ell} {t \choose j}\right)^m -1 \geq k \cdot (2^s -1). \end{align*} Therefore, a lower bound on the minimum number of buckets is $FP_{t,\ell}(s,k) \geq \left\lceil \frac{\log_2(k(2^s-1)+1)}{\log_2(\sum_{j=0}^{\ell} {t \choose j})}\right\rceil$. \end{IEEEproof} Lastly in this section, we show a different lower bound for functional PIR array codes, which is motivated by the corresponding lower bound for PIR array codes from~\cite[Th. 3]{BE19}. \begin{theorem}\label{theorem:LBFP4} For any positive integers $s,k,t$, and $\ell$, $FP_{t,\ell}(s,k) \geq \frac{2\cdot k \cdot (2^s-1)}{(2^s-1)+(\sum_{i=1}^{\ell} {t \choose i})}$. \end{theorem} \begin{IEEEproof} Suppose there exists an $(s,k,m,t,\ell)$ functional PIR array code. There are $2^s-1$ possible linear combination requests, which are denoted by ${\boldsymbol u}_i$ for $1\leq i\leq 2^s-1$. For $i\in[2^s-1]$, we define $\alpha_i$ to be the number of recovering sets of size $1$ for the $i$-th linear combination request ${\boldsymbol u}_i$. Since it is possible to read at most $\ell$ bits from each bucket, every bucket can satisfy at most $\sum_{i=1}^{\ell} {t \choose i}$ linear combinations.
Thus, the number of recovering sets of size $1$ is at most $m \cdot \sum_{i=1}^{\ell} {t \choose i}$, and $\sum_{j=1}^{2^s-1} \alpha_j \leq m \cdot \sum_{i=1}^{\ell} {t \choose i}$. Hence, there exists $q \in[2^s-1]$ such that $\alpha_q \leq \frac{m\cdot \sum_{i=1}^{\ell} {t \choose i}}{2^s-1}$, so out of the $k$ disjoint recovering sets of ${\boldsymbol u}_q$, at most $\alpha_q$ of them are of size $1$, and the size of each of the remaining $k-\alpha_q$ subsets is at least $2$. Hence, $$m \geq \alpha_q + 2(k-\alpha_q) = 2k - \alpha_q \geq 2k - \frac{m\cdot\sum_{i=1}^{\ell} {t \choose i}}{2^s-1},$$ and therefore $m(1+\frac{\sum_{i=1}^{\ell} {t \choose i}}{(2^s-1)}) \geq 2k$, which implies that $FP_{t,\ell}(s,k) \geq \frac{2k(2^s-1)}{(2^s-1)+\sum_{i=1}^{\ell} {t \choose i}}.$ \end{IEEEproof} \section{General Constructions of Array Codes}\label{sec:cons} In this section we present several constructions of array codes for functional PIR and batch codes. \subsection{Basic Constructions} Even though the Gadget Lemma cannot be extended in general to functional PIR and batch codes, here we show a variation of it that does hold. For any positive integer $i$, $\mathbf 0^{i}$ denotes the zero vector of length $i$, and for any two vectors ${\boldsymbol v}$ and ${\boldsymbol u}$, the vector ${\boldsymbol v}{\boldsymbol u}$ is defined to be the concatenation of ${\boldsymbol u}$ after ${\boldsymbol v}$. \begin{lemma}\label{lemma:FBGadget} For any positive integer $p$, if there exists an $(s,p\cdot k,m,t,\ell)$ functional batch array code, then there exists a $(p\cdot s,k,m,p\cdot t,\ell)$ functional batch array code. Therefore, $$FP_{p \cdot t,\ell}(s,k) \leq FB_{p \cdot t,\ell}(p \cdot s, k) \leq FB_{t,\ell}(s,p \cdot k),$$ and in particular, $FP_{t,1}(s,k) \leq FB_{t,1}(s,k) \leq FB(\lceil \frac{s}{t}\rceil,t\cdot k)$.
\end{lemma} \begin{IEEEproof} Let ${\cal C}$ be an $(s,p\cdot k,m,t,\ell)$ functional batch array code with encoding function ${\cal E}$ and decoding function ${\cal D}$. We construct a $(p\cdot s,k,m,p \cdot t,\ell)$ functional batch array code ${\cal C}'$ by using the code ${\cal C}$. Let ${\cal S} = \{x_{i,j}: 1 \leq i \leq p, 1 \leq j \leq s\}$ be the set of $p\cdot s$ information bits. The $p\cdot s$ information bits can be partitioned into $p$ parts, each of size $s$, such that part $i$, $i\in[p]$, is ${\cal S}_i = \{x_{i,j}: 1\leq j \leq s\}$. The code ${\cal C}'$ will be represented by a $pt \times m$ array $A$ that contains $p$ subarrays $A_1,A_2,\ldots,A_p$, each of dimension $t \times m$. In the encoding function of the code ${\cal C}'$, the $i$-th subarray $A_i$ stores the encoded bits of the set ${\cal S}_i$ by applying the encoding function ${\cal E}$ of the code ${\cal C}$ over the information bits in the set ${\cal S}_i$. Let $R = \{{\boldsymbol v}_1,{\boldsymbol v}_2,\ldots,{\boldsymbol v}_k\}$ be a multiset request of size $k$ of the $p \cdot s$ information bits, where ${\boldsymbol v}_i$, $i\in[k]$, is a binary vector of length $ps$ that represents the $i$-th request. For each $i\in[k]$, denote ${\boldsymbol v}_i = ({\boldsymbol v}_i^1, {\boldsymbol v}_i^2,\ldots,{\boldsymbol v}_i^p)$, where ${\boldsymbol v}_i^j$, $j\in[p]$, is a vector of length $s$ that represents the linear combination of the bits in ${\cal S}_j$. Let $R^{*} = \{{\boldsymbol v}_i^j: 1\leq i \leq k, 1\leq j \leq p\}$ be a multiset request of size $pk$, that has $pk$ vectors of length $s$ each. By using the decoding function ${\cal D}$ of the code ${\cal C}$ with the request $R^{*}$ we get $pk$ recovering sets.
For each $i\in[k]$ and $j\in[p]$, let $B_i^j = \{(h_{i,1},{\boldsymbol u}_{i,1}), (h_{i,2},{\boldsymbol u}_{i,2}), \ldots, (h_{i,a_i},{\boldsymbol u}_{i,a_i})\}$ be a recovering set for ${\boldsymbol v}_i^j$ of size $a_i$, where for each $g\in[a_i]$, $(h_{i,g},{\boldsymbol u}_{i,g})$ is a pair of a bucket $h_{i,g}$ with a vector ${\boldsymbol u}_{i,g}$ of length $t$ that indicates the cells which are read from the bucket $h_{i,g}$. For each $B_i^j$ and $f\in[p]$, let $B_{i,f}^j = \{( h_{i,1},\mathbf 0^{t(f-1)} {\boldsymbol u}_{i,1}\mathbf 0^{t(p-f)}),\ldots,( h_{i,a_i},\mathbf 0^{t(f-1)} {\boldsymbol u}_{i,a_i}\mathbf 0^{t(p-f)})\}$ be a recovering set for ${\boldsymbol v}_i^j$ that reads the cells of subarray $A_f$. For each $i\in[k]$, to satisfy the request ${\boldsymbol v}_i$, the union $\cup_{f=1}^{p} B_{i,f}^{f}$ is taken, since for each $f\in[p]$ the subset $B_{i,f}^{f}$ can satisfy the request ${\boldsymbol v}_i^f$. For each $f_1,f_2\in[p]$, $i_1,i_2\in[k]$ and $j_1,j_2\in[p]$, $B_{i_1,f_1}^{j_1}$ and $B_{i_2,f_2}^{j_2}$ have disjoint subsets of buckets if $i_1\neq i_2$ or $j_1\neq j_2$, because $B_{i_1}^{j_1}$ and $B_{i_2}^{j_2}$ have disjoint subsets of buckets if $i_1\neq i_2$ or $j_1\neq j_2$. Thus, for any $i\neq j\in[k]$, $\cup_{f=1}^{p} B_{i,f}^{f}$ and $\cup_{f=1}^{p} B_{j,f}^{f}$ have disjoint subsets of buckets. It remains to show that we read at most $\ell$ cells from each bucket. For any ${\boldsymbol v}_i$, $i\in[k]$, it is clear that if the recovering set $B_{i,f_1}^{j}$ was used then $f_1 = j$, which implies that the recovering sets $B_{i,f_2}^{j}$ for each $f_2 \neq f_1$ were not used. Thus, the recovering sets that were used to satisfy ${\boldsymbol v}_i$ have disjoint subsets of buckets. Thus, each bucket can appear in at most one of these recovering sets, and each one of these subsets uses at most $\ell$ cells from each bucket by the properties of the code ${\cal C}$.
The last claim in the lemma holds by setting $p=t$ and $t=1$. \end{IEEEproof} \begin{comment} \e{proof or outline?} \begin{IEEEproof} Let ${\cal C}$ be an $(s,p\cdot k,m,t,\ell)$ functional batch array code. We construct a $(p\cdot s,k,m,p \cdot t,\ell)$ functional batch array code ${\cal C}'$ by using the code ${\cal C}$. The $p\cdot s$ information bits are partitioned into $p$ parts, each of size $s$, such that the $i$-th part is encoded to a $t\times m$ array $A_i$ using ${\cal C}$. The code ${\cal C}'$ is presented by a $pt\times m$ array $A$ that contains the $p$ arrays $A_1,\ldots,A_p$. Let $R = \{{\boldsymbol v}_1,\ldots,{\boldsymbol v}_k\}$ be a multiset request of size $k$ of the $p \cdot s$ information bits, where ${\boldsymbol v}_i,i\in[k]$ is a binary vector of length $ps$ that represents the $i$-th request. For each $i\in[k]$, ${\boldsymbol v}_i = ({\boldsymbol v}_i^1,\ldots,{\boldsymbol v}_i^p)$, where for each $j\in[p]$, ${\boldsymbol v}_i^j$ is a vector of length $s$, that represents the linear combination of the $j$-th part of the information bits. Let $R^{*} = \{{\boldsymbol v}_i^j: 1\leq i \leq k, 1\leq j \leq p\}$ be a multiset request of size $pk$ consisting of $pk$ vectors of length $s$ each. By requesting $R^{*}$ from the code ${\cal C}$ we get $pk$ recovering sets. For each $i\in[k]$, to satisfy the request ${\boldsymbol v}_i$, we take the union of the recovering sets of each ${\boldsymbol v}_i^j,j\in[p]$ where for each $j\in[p]$, in the recovering sets of ${\boldsymbol v}_i^j$ we read the cells that from the array $A_j$. It can be shown that each recovering set obtained from ${\cal C}$ is used only once in one of the recovering sets in the code ${\cal C}'$. Thus, the recovering sets are disjoint and from each bucket at most $\ell$ cells are read as in the code ${\cal C}$. \end{IEEEproof} \end{comment} Another general construction is stated in the next theorem. 
\begin{theorem}\label{theorem:t+1Tot} For any positive integers $s,k,t,t_0$, and $\ell$, $FB_{t,\ell}(s,k) \leq m +m_0$, where $m = FB_{t+t_0,\ell}(s,k)$ and $m_0 = FB_{t,\ell}(m\cdot t_0,k)$. \end{theorem} \begin{IEEEproof} Let ${\cal C}_1$ and ${\cal C}_2$ be an $(s,k,m,t+t_0,\ell)$ and an $(m \cdot t_0,k,m_0,t,\ell)$ functional batch array code, respectively. We construct an $(s,k,m+m_0,t,\ell)$ functional batch array code ${\cal C}$ by using the codes ${\cal C}_1,{\cal C}_2$. First, the $s$ information bits are encoded using the encoding function of the code ${\cal C}_1$ to get a $(t+t_0)\times m$ array $A$. Then, the $t_0 \cdot m$ bits in the last $t_0$ rows of $A$ are encoded into a $t\times m_0$ array $B$ using the encoding function of the code ${\cal C}_2$. The code ${\cal C}$ will be represented by a $t\times (m+m_0)$ array, where the first $m$ buckets (columns) will be the first $t$ rows of the array $A$ and the last $m_0$ buckets will be the array $B$. Let $R = \{{\boldsymbol v}_1,\ldots,{\boldsymbol v}_k\}$ be a multiset request of size $k$, where ${\boldsymbol v}_i$, $i\in[k]$, is a binary vector of length $s$ that represents the $i$-th request. Denote by $\{E_1,\ldots,E_k\}$ the $k$ recovering sets that are obtained by using the decoding function of the code ${\cal C}_1$ with the request $R$. For each $i\in[k]$, assume that $|E_i| = p_i$ and denote $E_i = \{(h_{i,1},{\boldsymbol u}_{i,1}), \ldots, (h_{i,p_i},{\boldsymbol u}_{i,p_i})\}$, where for each $j\in[p_i]$, $(h_{i,j},{\boldsymbol u}_{i,j})$ is a pair of a bucket $h_{i,j}$ with a vector ${\boldsymbol u}_{i,j}$ of length $t+t_0$ that indicates the cells which are read from the bucket $h_{i,j}$. For each $i\in[k]$ and $j\in[p_i]$, let ${\boldsymbol u}'_{i,j}$ be the vector with the last $t_0$ entries of ${\boldsymbol u}_{i,j}$ and let $R'_{i,j}$ be the sum of the bits in the cells indicated by ${\boldsymbol u}'_{i,j}$.
Let $R' = \{\sum_{j=1}^{p_1} R'_{1,j},\dots,\sum_{j=1}^{p_k} R'_{k,j}\}$ be a multiset request of size $k$. Denote by $\{F_1,\ldots,F_k\}$ the $k$ recovering sets that are obtained by using the decoding function of the code ${\cal C}_2$ with the multiset request $R'$. To satisfy ${\boldsymbol v}_i$, the code ${\cal C}$ can use the recovering set $F_i \cup E'_i$, where $E'_i = \{(h_{i,1},{\boldsymbol u}''_{i,1}), \ldots, (h_{i,p_i},{\boldsymbol u}''_{i,p_i})\}$ and for each $j\in[p_i]$, ${\boldsymbol u}''_{i,j}$ is the vector with the first $t$ entries of ${\boldsymbol u}_{i,j}$. It remains to show that at most $\ell$ cells are read from each bucket. Each ${\boldsymbol v}_i$, $i\in[k]$, has a recovering set $F_i\cup E'_i$, where the recovering set $F_i$ of ${\cal C}_2$ uses at most $\ell$ cells from each bucket by the property of the code ${\cal C}_2$. Also, the recovering set $E_i$ of ${\cal C}_1$ uses at most $\ell$ cells from each bucket by the property of the code ${\cal C}_1$. Thus, $E'_i$ also uses at most $\ell$ cells. \end{IEEEproof} Note that a similar statement holds for functional PIR array codes, where for any positive integers $s,k,t,t_0$, and $\ell$, $FP_{t,\ell}(s,k) \leq m +m_0$, where $m = FP_{t+t_0,\ell}(s,k)$ and $m_0 = FB_{t,\ell}(m\cdot t_0,k)$. \begin{comment} \e{The idea behind the proof of Theorem~\ref{theorem:t+1Tot} is explained for the specific parameters $t_0=m_0=k=1$. An $(s,1,m+1,t,\ell)$ functional batch array code ${\cal C}'$ can be constructed using an $(s,1,m,t+1,\ell)$, $(m,1,1,t,\ell)$ functional batch array codes ${\cal C}_1$, ${\cal C}_2$, respectively. The code ${\cal C}'$ will have the first $t$ rows of the code ${\cal C}_1$ and another one bucket which is the encoding of all the linear combinations in the cells of the last row of ${\cal C}_1$ using the code ${\cal C}_2$. In the decoding, when a linear combination is requested, we use the recovering set of the code ${\cal C}_1$.
However, instead of reading the cells in the last row, we read them from the additional bucket in order to get a recovering set for the same request of the code ${\cal C}'$. Note that a similar statement can hold for functional PIR array code, where for any positive integers $s,k,t,t_0$, and $\ell$, $FP_{t,\ell}(s,k) \leq m +m_0$, where $m = FP_{t+t_0,\ell}(s,k)$ and $m_0 = FB_{t,\ell}(m\cdot t_0,k)$.} \end{comment} \subsection{Constructions based upon Covering Codes} In this section it is shown how covering codes are used to construct array codes. Denote by $d_H({\boldsymbol x},{\boldsymbol y})$ the Hamming distance between two vectors ${\boldsymbol x},{\boldsymbol y}$, and denote by $w_H({\boldsymbol x})$ the Hamming weight of ${\boldsymbol x}$. Also define $\langle {\boldsymbol x}, {\boldsymbol y}\rangle$ as the inner product of the two vectors ${\boldsymbol x},{\boldsymbol y}$. Next we recall the definition of covering codes~\cite{CHLL97}. \begin{definition} Let $n\geq1$, $R\geq0$ be integers. A code ${\cal C} \subseteq \mathbb{F}_q^n$ is called an \textbf{$R$-covering code} if for every word ${\boldsymbol y}\in \mathbb{F}_q^n$ there is a codeword ${\boldsymbol x}\in {\cal C}$ such that $d_{H}({\boldsymbol x},{\boldsymbol y})\leq R$. The notation $[n,k,R]_q$ denotes a linear code over $\mathbb{F}_q$ of length $n$, dimension $k$, and covering radius $R$. The value $g[n,R]_q$ denotes the smallest dimension of a linear code over $\mathbb{F}_q$ with length $n$ and covering radius $R$. The value $h[s,R]_q$ is the smallest length of a linear code over $\mathbb{F}_q$ with covering radius $R$ and redundancy $s$. In case $q=2$, we omit it from these notations. \end{definition} The following property is well known for linear covering codes; see e.g.~\cite[Th. 2.1.9]{CHLL97}.
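Before stating it, we note that the covering radius in the definition above can be checked by exhaustive search for small codes. The following brute-force Python sketch is illustrative only (the helper name is ours); it verifies, for instance, that the binary $[3,1]$ repetition code has covering radius $1$:

```python
from itertools import product

def covering_radius(codewords, n):
    # Largest Hamming distance from any word of F_2^n to its nearest codeword.
    hd = lambda x, y: sum(a != b for a, b in zip(x, y))
    return max(min(hd(w, c) for c in codewords) for w in product((0, 1), repeat=n))

# the binary [3,1] repetition code {000, 111}
rep3 = [(0, 0, 0), (1, 1, 1)]
```

Here `covering_radius(rep3, 3)` returns $1$: every word of length $3$ is within Hamming distance $1$ of either $000$ or $111$.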
\begin{property}\label{property:covering} For an $[n,k,R]$ linear covering code with some parity check matrix $H$, every syndrome vector ${\boldsymbol s} \in \Sigma^{n-k}$ can be represented as the sum of at most $R$ columns of $H$. \end{property} The connection between linear codes and functional batch array codes is established in the next theorem. \begin{theorem}\label{theorem:CoveringToBucket} Let ${\cal C}$ be a $[t,t-s,\ell]$ linear covering code. Then, there exists an $(s,1,1,t,\ell)$ functional batch array code. In particular, $FB_{t,\ell}(t-g[t,\ell],1) = 1$. \end{theorem} \begin{IEEEproof} Let ${\boldsymbol x} = (x_1,\ldots,x_s)$ be the vector of dimension $1\times s$ with the $s$ information bits, and let $H$ be a parity check matrix of the code ${\cal C}$, with dimension $s\times t$. We construct an $(s,1,1,t,\ell)$ functional batch array code ${\cal C}'$ by taking each entry of the vector ${\boldsymbol c} = ({\boldsymbol x} H)^{\intercal}$ as a cell in the code. The dimension of ${\boldsymbol c}$ is $t\times 1$, and thus, we get one bucket with $t$ cells where each cell has a linear combination of the $s$ information bits. Let ${\boldsymbol u}\in\Sigma^s$ be a request which represents the linear combination $\langle {\boldsymbol u}, {\boldsymbol x}\rangle$ of the $s$ information bits. From Property~\ref{property:covering}, we know that there exists a vector ${\boldsymbol y}\in\Sigma^t$ such that ${\boldsymbol y}\cdot H^{\intercal} = {\boldsymbol u}$, where $w = w_H({\boldsymbol y}) \leq \ell$. Let ${\cal A} = \{i \in[t] : y_{i}=1\}$, where $y_i$ is the entry number $i$ of ${\boldsymbol y}$. Thus, $\langle {\boldsymbol u}, {\boldsymbol x}\rangle = {\boldsymbol u} \cdot {\boldsymbol x}^{\intercal} = {\boldsymbol y} \cdot H^{\intercal}\cdot {\boldsymbol x}^{\intercal} = {\boldsymbol y} \cdot {\boldsymbol c} = \sum_{i\in{\cal A}} c_i$, where $c_i$ is the entry number $i$ of ${\boldsymbol c}$.
Therefore, to satisfy the request $\langle {\boldsymbol u}, {\boldsymbol x}\rangle$ we should read $\left|{\cal A}\right| = w \leq \ell$ cells from the code ${\cal C}'$. Recall that $g[t,\ell]$ is the smallest dimension of a linear code with length $t$ and covering radius $\ell$. Thus, there exists a $[t,g[t,\ell],\ell]$ linear covering code. We get that there exists a $(t-g[t,\ell],1,1,t,\ell)$ functional batch array code, which implies that $FB_{t,\ell}(t-g[t,\ell],1) = 1$. \end{IEEEproof} Theorem~\ref{theorem:CoveringToBucket} also holds for functional PIR array codes, and thus the following results are derived. \begin{corollary}\label{cor:coveringgtell} Let $s,k,t$ and $\ell$ be positive integers. Then, \begin{enumerate} \item\label{cor:coveringk} $FP_{t,\ell}(s,k) \leq FB_{t,\ell}(s,k) \leq k\cdot \left\lceil \frac{s}{t - g[t,\ell]} \right\rceil$. \item\label{cor13:part2} $FP_{t+t_0,\ell}(s,k) \leq FP_{t,t}(s,k)$, where $t_0 = g[t+t_0,\ell]$. The same holds for functional batch array codes. \item\label{cor:coveringt+t_0} $FP_{t,\ell}(s,k) \leq FB_{t,\ell}(s,k) \leq k\cdot \left(\left\lceil \frac{s}{\alpha} \right\rceil + 1\right)$, where $\left\lceil \frac{s}{\alpha} \right\rceil \leq t-g[t,\ell]$, and $\alpha = (t+1)-g[(t+1), \ell]$. \end{enumerate} \end{corollary} The third claim of Corollary~\ref{cor:coveringgtell} is derived from Theorem~\ref{theorem:CoveringToBucket} and Theorem~\ref{theorem:t+1Tot}. \begin{comment} We can use covering codes in order to get new constructions with lower values of $\ell$, based on existing constructions. \begin{theorem}\label{theorem:coveringlowerell} $FP_{t+1,\lceil t/2 \rceil}(s,k) \leq FP_{t,t}(s,k).$ (Also for P,B,FB). \end{theorem} \begin{IEEEproof} Given an $(s,k,m,t,t)$ functional PIR array code ${\cal C}$, we can add a new cell for each bucket and write the sum of all the cells in the same buckets. In other words, we add a new row with the parities of each bucket.
Given a request $R$ that the code must satisfy $k$ times by reading at most $\lceil t/2 \rceil$ cells from each bucket. We can take the same recovering set of the buckets of the code ${\cal C}$ that satisfies $R$. In each bucket that we read at most $\lceil t/2 \rceil$ cells we read the same cells. In each bucket that we read more that $\lceil t/2 \rceil$ cells, then we can use the parity cell, and read the redundant cells which are less than $t - \lceil t/2 \rceil \leq \lfloor t/2 \rfloor$, which mean we read at most $\lceil t/2 \rceil$ cells. \end{IEEEproof} We can use covering codes in order to get new constructions with lower values of $t$, based on existing constructions. \begin{theorem}\label{theorem:coveringlowert} $FP_{t,\ell}(s,1) \leq \left\lceil \frac{s}{(t+1)-g[(t+1), \ell]} \right\rceil + 1$, where $\left\lceil \frac{s}{(t+1)-g[(t+1), \ell]} \right\rceil \leq t-g[t,\ell]$. \end{theorem} \begin{IEEEproof} From Corollary~\ref{cor:coveringgtell} we can get that $FP_{t+1,\ell}(s,1) \leq \left\lceil \frac{s}{(t+1)-g[(t+1), \ell]} \right\rceil$. Which means there exists a construction of $(s,1,\left\lceil \frac{s}{(t+1)-g[(t+1), \ell]} \right\rceil,t+1,\ell)$ functional PIR array code ${\cal C}$. We can build an $(s,1,\left\lceil \frac{s}{(t+1)-g[(t+1), \ell]} \right\rceil + 1,t,\ell)$ functional PIR array code by using the code ${\cal C}$. We know that there exists a $[t,g[t,\ell],\ell]$ linear covering code. Then from Theorem~\ref{theorem:CoveringToBucket}, there exists a $(t - g[t,\ell],1,1,t,\ell)$ functional PIR array code ${\cal C}_1$. We can use the last row of the code ${\cal C}$, which has $\left\lceil \frac{s}{(t+1)-g[(t+1), \ell]} \right\rceil \leq t - g[t,\ell]$ cells, as information bits to the code ${\cal C}_1$, then we get a bucket with at most $t$ cells. 
The new code will be the $\left\lceil \frac{s}{(t+1)-g[(t+1), \ell]} \right\rceil$ buckets of the code ${\cal C}$ without the last row (which means $t$ cells in each bucket), and an addition one bucket of the code ${\cal C}_1$. Given a request $R$, that the new code must satisfy once. We can use the code ${\cal C}$ to get a recovering set that satisfies $R$, where from each bucket we read at most $\ell$ cells. We can take the same recovering set with the last bucket, in each bucket where the code ${\cal C}$ needs to read the last cell, we can read it from the last bucket. We can read any linear combination of the cells of the last row of the code ${\cal C}$ because in the last bucket we write them using covering code with radius $\ell$. \end{IEEEproof} \begin{corollary} $FP_{t,\ell}(s,k) \leq k\cdot \left(\left\lceil \frac{s}{(t+1)-g[(t+1), \ell]} \right\rceil + 1\right)$, where $\left\lceil \frac{s}{(t+1)-g[(t+1), \ell]} \right\rceil \leq t-g[t,\ell]$. \end{corollary} \end{comment} \subsection{The Cases of $k = 1,2$} Even though the cases of $k=1,2$ are the most trivial ones when the codewords are vectors, they are apparently not easily solved for array codes. In this section we summarize some of our findings on these important and interesting cases. \begin{theorem}\label{theorem:ArrayCodek1} For all positive integers $s,t,\ell$: \begin{enumerate} \item\label{theorem:Boundk1} $FP_{t,\ell}(s,1) \geq \left\lceil \frac{s}{\log_2\left(\sum_{i=0}^{\ell} {t \choose i}\right)} \right\rceil $. \item\label{theorem:ArrayCodek1tt} $FP_{t, t}(s,1) = \left\lceil \frac{s}{t} \right\rceil$. \item\label{theorem:ArrayCodek1logt1} $FP_{t,1}(\lfloor \log_2(t+1)\rfloor,1) = 1$ and $\left\lceil \frac{s}{\log_2(t+1)}\right\rceil \leq FP_{t,1}(s,1) \leq \left\lceil \frac{s}{\lfloor \log_2(t+1)\rfloor}\right\rceil$. \item\label{thk1:part4} $FP_{t, \alpha \cdot t}(s,1) \leq \left\lceil \frac{s}{t-g[t, \alpha \cdot t]} \right\rceil$, where $0 < \alpha < 1$.
\item\label{th11:5} $FP_{t, t/2}(s,1) = \frac{s}{t} + 1$, where $t$ is even, $\frac{s}{t}$ is an integer, and $\frac{s}{t} \leq t-1$. \end{enumerate} \end{theorem} \begin{IEEEproof} \begin{enumerate} \item From Corollary~\ref{theorem:LBFP2}. \item The lower bound on $FP_{t,t}(s,1)$ is obtained by using the lower bound from the first claim of this theorem, $FP_{t,t}(s,1) \geq \left\lceil \frac{s}{\log_2\left(\sum_{i=0}^{t} {t \choose i}\right)} \right\rceil = \left\lceil \frac{s}{t} \right\rceil$. The upper bound can be verified by showing that there exists an $(s,1,\left\lceil\frac{s}{t} \right\rceil,t,t)$ functional PIR array code. There are $t$ cells in each bucket. Then, in order to write all the $s$ information bits there is a need for $\lceil \frac{s}{t}\rceil$ buckets. Each request is a linear combination of the $s$ information bits. Thus, each request can be satisfied by reading the information bits which are included in the request. It was shown that $FP_{t,t}(s,1) \geq \left\lceil \frac{s}{t}\right\rceil$ and there exists an $(s,1,\left\lceil \frac{s}{t}\right\rceil,t,t)$ functional PIR array code. Therefore, $FP_{t,t}(s,1) = \left\lceil \frac{s}{t}\right\rceil$. \item A $(\lfloor \log_2(t+1)\rfloor,1,1,t,1)$ functional PIR array code ${\cal C}$ can be obtained by writing all the $2^{\lfloor \log_2(t+1)\rfloor} - 1 \leq t$ nonzero linear combinations of the information bits in at most $t$ cells of one bucket. Each request is a nonzero linear combination of the information bits, and hence, for each request there exists a cell in the bucket that satisfies it. The minimum number of buckets is $1$. Thus, $FP_{t,1}(\lfloor \log_2(t+1)\rfloor,1) = 1$. The lower bound on $FP_{t,1}(s,1)$ is derived from the first claim of this theorem. Thus $FP_{t,1}(s,1) \geq \left\lceil \frac{s}{\log_2(\sum_{i=0}^{1}{t \choose i})}\right\rceil = \left\lceil \frac{s}{\log_2(t+1)}\right\rceil$.
The upper bound is shown by using Theorem~\ref{theorem:Basic}\eqref{theorem:partas}, \begin{align*} FP_{t,1}(s,1) &\leq FP_{t,1}\left(\left\lceil\frac{s}{\lfloor \log_2(t+1)\rfloor}\right\rceil \cdot \lfloor \log_2(t+1)\rfloor,1\right) &\\ &\leq \left\lceil\frac{s}{\lfloor \log_2(t+1)\rfloor}\right\rceil \cdot FP_{t,1}\left(\lfloor \log_2(t+1)\rfloor,1\right) &\\ &\leq \left\lceil\frac{s}{\lfloor \log_2(t+1)\rfloor} \right\rceil.& \end{align*} \item From Corollary~\ref{cor:coveringgtell}\eqref{cor:coveringk}. \item The lower bound on $FP_{t,t/2}(s,1)$ can be found using the lower bound from the first claim of this theorem, \begin{align*} FP_{t,t/2}(s,1) & \geq \left\lceil \frac{s}{\log_2(\sum_{i=0}^{t/2} {t \choose i})} \right\rceil &\\ &\geq \left\lceil \frac{s}{\log_2(\sum_{i=0}^{t} {t \choose i}) } \right\rceil +1 &\\ & = \left\lceil \frac{s}{t}\right\rceil +1, \end{align*} where the second inequality holds since $\sum_{i=0}^{t/2} {t \choose i} < 2^t$ and $\frac{s}{t}$ is an integer. For the upper bound, from Corollary~\ref{cor:coveringgtell}\eqref{cor:coveringt+t_0} we get that $FP_{t,t/2}(s,1) \leq \left\lceil \frac{s}{(t+1)-g[(t+1), t/2]} \right\rceil + 1$. Since $g[t+1,t/2] = 1$, then $FP_{t,t/2}(s,1) \leq s/t + 1$. Lastly we need to show that $\left\lceil \frac{s}{(t+1)-g[(t+1), t/2]} \right\rceil \leq t-g[t,t/2]$ in order to use Corollary~\ref{cor:coveringgtell}\eqref{cor:coveringt+t_0}. Since $g[t,t/2] = 1$ and $s/t \leq t-1$, it is derived that $\left\lceil \frac{s}{(t+1)-g[(t+1), t/2]} \right\rceil = \frac{s}{t} \leq t-1 = t - g[t,t/2]$. Thus, $FP_{t,t/2}(s,1) = \frac{s}{t} + 1$. \end{enumerate} \end{IEEEproof} \begin{example} In this example we demonstrate the construction of a $(12,1,4,4,2)$ functional PIR array code according to Theorem~\ref{theorem:ArrayCodek1}\eqref{th11:5}. The construction is given in Table~\ref{tableEx1}. It can be verified that $FP_{4,2}(12,1) = 4$.
Note that in this example and in the rest of the paper the notation $x_{i_1}x_{i_2}\cdots x_{i_h}$ is a shorthand for the summation $x_{i_1}+x_{i_2}+\cdots + x_{i_h}$. \begin{table} \caption{$(12,1,4,4,2)$ functional PIR array code}\label{tableEx1} \begin{center} \begin{tabular}{ |c|c|c|c| } \hline 1 & 2 & 3 & 4\\ \hline \hline $x_1$ & $x_5$ & $x_9$ & $x_1x_2x_3x_4$ \\ \hline $x_2$ & $x_6$ & $x_{10}$ & $x_5x_6x_7x_8$ \\ \hline $x_3$ & $x_7$ & $x_{11}$ & $x_9x_{10}x_{11}x_{12}$ \\ \hline $x_4$ & $x_8$ & $x_{12}$ & $x_1x_2\cdots x_{12}$ \\ \hline \end{tabular} \end{center} \end{table} \end{example} An improvement for the case of $\ell=1$ is proved in the following theorem. \begin{theorem}\label{theorem:FPIRell1} For any positive integers $s_1,s_2,$ and $t$, $$FP_{t,1}(s_1+s_2,1) \leq \left\lceil\frac{s_1}{\left\lfloor \log_2(t+1) \right\rfloor}\right\rceil + 1,$$ where $2^{s_2} -1 \hspace{-0.5ex}\leq \left(\left\lceil\frac{s_1}{\left\lfloor \log_2(t+1) \right\rfloor}\right\rceil+1\right) (t - (2^{\lfloor \log_2(t+1) \rfloor}-1)).$ \end{theorem} \begin{IEEEproof} A construction of an $(s_1+s_2,1,m,t,1)$ functional PIR array code for $m = \left\lceil\frac{s_1}{\left\lfloor \log_2(t+1) \right\rfloor}\right\rceil + 1$ is presented. The first $s_1$ information bits are divided into $m-1$ parts, where $h_i$, $i\in[m-1]$, is the size of part $i$, and $h_i \leq \lfloor \log_2(t+1) \rfloor$. Then, all the nonzero linear combinations of part $i\in[m-1]$ are written in the $i$-th bucket, so in each of the first $m-1$ buckets there are at least $t - (2^{\lfloor \log_2(t+1) \rfloor}-1)$ empty cells. In the last bucket, the parity of each of the first $2^{\lfloor \log_2(t+1) \rfloor}-1$ rows is stored. Since $2^{s_2} - 1 \leq m \cdot (t - (2^{\lfloor \log_2(t+1) \rfloor}-1))$, each of the $2^{s_2} - 1$ nonzero linear combinations of the $s_2$ bits can be written in the empty cells of the $m$ buckets.
Let ${\boldsymbol v} = ({\boldsymbol v}_1,\ldots,{\boldsymbol v}_m)$ be a request such that for any $i\in[m-1]$ the length of ${\boldsymbol v}_i$ is $h_i$, the length of ${\boldsymbol v}_m$ is $s_2$, and for simplicity assume that they are all nonzero. The linear combination ${\boldsymbol v}_m$ is satisfied by the cell where it is stored; assume that this cell is in the $j$-th bucket, where $j<m$. Assume that the cell in the $j$-th bucket where the linear combination ${\boldsymbol v}_j$ is stored is in row $r$. We read from each bucket $b\in[m-1]$, where $b\neq j$, the cell with the linear combination represented by ${\boldsymbol v}_b + {\boldsymbol u}_b$, where ${\boldsymbol u}_b$ is the vector that represents the cell in bucket $b$ in row $r$; if ${\boldsymbol v}_b + {\boldsymbol u}_b = \mathbf0$ we do not read from bucket $b$. Also, we read the cell in row $r$ from the last bucket. Then, the obtained linear combination is the combination that is represented by $({\boldsymbol v}_1,\ldots,{\boldsymbol v}_{m-1})$, because $\sum_{1\leq b \leq m, b\neq j} {\boldsymbol u}_b = {\boldsymbol v}_j$ and for each $b\in[m-1]$ where $b\neq j$ we read the linear combination that is represented by ${\boldsymbol v}_b+{\boldsymbol u}_b$ from bucket $b$. \end{IEEEproof} \begin{comment} The following corollary is an immediate result of Theorem~\ref{theorem:FPIRell1}. \begin{corollary}\label{cor:FPIRell1(2)} For every positive integers $s$ and $t$ it holds that $$FP_{t,1}(s,1) \leq \left\lfloor \alpha s \right\rfloor \cdot p + \left\lceil \frac{s - \left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha})}{\left\lfloor \log_2(t+1) \right\rfloor} \right\rceil,$$ where $$\alpha = \frac{1}{(p-1)\cdot \lfloor \log_2(t+1) \rfloor + q} ,$$ $p$ and $q$ are the smallest positive integers such that $2^q - 1 \leq p \cdot (t - (2^{\lfloor \log_2(t+1) \rfloor}-1))$, and $ q > \lfloor \log_2(t+1) \rfloor$.
\end{corollary} \begin{IEEEproof} Firstly, we are interested in the first $\left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha})$ information bits. By using Theorem~\ref{theorem:Basic}\eqref{theorem:partas} and the result from Theorem~\ref{theorem:FPIRell1}, we get that $FP_{t,1}(\left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha}),1) \leq \left\lfloor \alpha s \right\rfloor \cdot FP_{t,1}((\frac{1}{\alpha}),1) \leq \left\lfloor \alpha s \right\rfloor \cdot p$. Now we are interested in the last $(s - \left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha}))$ information bits. From Theorem~\ref{theorem:ArrayCodek1}\eqref{theorem:ArrayCodek1logt1} we can seek that $FP_{t,1}(s - \left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha}),1) \leq \left\lceil \frac{s - \left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha})}{\left\lfloor \log_2(t+1) \right\rfloor} \right\rceil$. By using Theorem~\ref{theorem:Basic}\eqref{theorem:parts1s2}, we get that $FP_{t,1}(s,1) = FP_{t,1}(s - \left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha}) + \left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha}),1) \leq FP_{t,1}(s - \left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha}),1) + FP_{t,1}(\left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha}),1) \leq \left\lfloor \alpha s \right\rfloor \cdot p + \left\lceil \frac{s - \left\lfloor \alpha s \right\rfloor \cdot (\frac{1}{\alpha})}{\left\lfloor \log_2(t+1) \right\rfloor} \right\rceil.$ \end{IEEEproof} \end{comment} For any $t,s_1,s_2$ where $s = s_1+s_2$ and $s_2 \geq \lfloor \log_2(t+1) \rfloor$, the upper bound in Theorem~\ref{theorem:FPIRell1} improves upon the one in Theorem~\ref{theorem:ArrayCodek1}\eqref{theorem:ArrayCodek1logt1} since $ \left\lceil \frac{s}{\lfloor \log_2(t+1)\rfloor}\right\rceil \geq \left\lceil\frac{s_1}{\left\lfloor \log_2(t+1) \right\rfloor}\right\rceil + 1$. 
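The comparison above can be checked numerically. The following Python sketch (the helper names bound_old and bound_new are ours, not from the paper) evaluates the upper bound of Theorem~\ref{theorem:ArrayCodek1}\eqref{theorem:ArrayCodek1logt1} and the one of Theorem~\ref{theorem:FPIRell1}, including the constraint of the latter.

```python
import math

def floor_log2(x):
    # largest L with 2^L <= x
    return x.bit_length() - 1

def bound_old(t, s):
    # upper bound ceil(s / floor(log2(t+1))) on FP_{t,1}(s,1)
    return math.ceil(s / floor_log2(t + 1))

def bound_new(t, s1, s2):
    # upper bound m = ceil(s1 / floor(log2(t+1))) + 1 on FP_{t,1}(s1+s2,1),
    # subject to 2^{s2} - 1 <= m * (t - (2^{floor(log2(t+1))} - 1))
    L = floor_log2(t + 1)
    m = math.ceil(s1 / L) + 1
    assert 2 ** s2 - 1 <= m * (t - (2 ** L - 1)), "theorem constraint violated"
    return m

# parameters t = 4, s1 = 12, s2 = 3 (as in the (15,1,7,4,1) example)
print(bound_new(4, 12, 3))   # 7
print(bound_old(4, 12 + 3))  # 8
```

For $t=4$, $s_1=12$, $s_2=3$ the new bound gives $7$ buckets while the older one gives $8$, illustrating the improvement whenever $s_2 \geq \lfloor \log_2(t+1) \rfloor$.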
\begin{example} In this example the construction of a $(15,1,7,4,1)$ functional PIR array code is demonstrated based on Theorem~\ref{theorem:FPIRell1}. It can be verified that the parameters $t=4, s_1 = 12$ and $s_2 = 3$ satisfy the constraints of Theorem~\ref{theorem:FPIRell1}. The construction is given in Table~\ref{tb1}. The first $s_1 = 12$ information bits are partitioned into $6$ parts, each part of size $2$. All the nonzero linear combinations of part $i,i\in[6]$ are written in the $i$-th bucket, leaving one cell empty. The sum of each of the first $3$ rows is written in the last bucket. Now, there are still $7$ empty cells, which are used to store all the nonzero linear combinations of the last $s_2=3$ bits. It can be concluded that $FP_{4,1}(15,1)\leq 7$, and from Theorem~\ref{theorem:ArrayCodek1}\eqref{theorem:ArrayCodek1logt1} we get that $FP_{4,1}(15,1) \geq 7$. Thus, $FP_{4,1}(15,1) = 7$. \begin{table} \begin{center} \caption{$(15,1,7,4,1)$ functional PIR array code}\label{tb1} \begin{tabular}{ |c|c|c|c|c|c|c| } \hline 1 & 2 & 3 & 4 & 5 & 6 & 7\\ \hline \hline $x_1$ & $x_3$ & $x_5$ & $x_7$ & $x_9$ & $x_{11}$ & $x_1x_3x_5x_7x_9x_{11}$ \\ \hline $x_2$ & $x_4$ & $x_6$ & $x_8$ & $x_{10}$ & $x_{12}$ & $x_2x_4x_6x_8x_{10}x_{12}$ \hspace{-2ex} \\ \hline $x_1x_2$ & $x_3x_4$ & $x_5x_6$ & $x_7x_8$ & $x_9x_{10}$ & $x_{11}x_{12}$ & $x_1\cdots x_{12}$ \\ \hline $x_{13}$ & $x_{14}$ & $x_{15}$ & $x_{13}x_{14}$ & $x_{13}x_{15}$ & $x_{14}x_{15}$ & $x_{13}x_{14}x_{15}$ \\ \hline \end{tabular} \end{center} \end{table} \end{example} Lastly, we report on several results for $k=2$. \begin{comment} \begin{theorem}\label{theorem:FB2282} Let $s$ be positive integer. Then, \begin{enumerate} \item\label{th17:part1} $6 \leq FB_{2,2}(8,2) \leq 7$. \item\label{th17:part2} $6 \leq FB_{3,1}(8,2) \leq 7$. \item\label{th17:part3} $0.71s \lesssim \log_{7}(2^{s-1}\cdot (2^s - 1)) \leq FB_{2,2}(s,2) \leq 7\cdot\left\lceil \frac{s}{8} \right\rceil$.
\end{enumerate} \end{theorem} \e{The first claim of Theorem~\ref{theorem:FB2282} can be verified by using Theorem~\ref{theorem:LBFB} and the construction given in Table~\ref{tb82722}. The second claim of Theorem~\ref{theorem:FB2282} is derived from the first claim and Corollary~\ref{cor:coveringgtell}\eqref{cor13:part2}. Lastly, the upper bound of the third claim is derived from the first claim and Theorem~\ref{theorem:Basic}\eqref{theorem:partas} and the lower bound from Theorem~\ref{theorem:LBFB}.} \end{comment} \begin{table} \begin{center} \caption{$(8,2,7,2,2)$ functional PIR array code}\label{tb82722} \begin{tabular}{ |c|c|c|c|c|c|c|} \hline 1 & 2 & 3 & 4 & 5 & 6 & 7\\ \hline \hline $x_1$ & $x_2$ & $x_1x_2$ & $x_5$ & $x_6$ & $x_5x_6$ & $x_1x_2x_5x_6$ \\ \hline $x_3$ & $x_4$ & $x_3x_4$ & $x_7$ & $x_8$ & $x_7x_8$ & $x_3x_4x_7x_8$ \\ \hline \end{tabular} \end{center} \end{table} \begin{theorem}\label{theorem:FB2282} $6 \leq FB_{2,2}(8,2) \leq 7$. \end{theorem} \begin{IEEEproof} The lower bound is obtained from Theorem~\ref{theorem:LBFB}. The upper bound is verified using the construction which appears in Table~\ref{tb82722}, i.e., the construction gives an $(8,2,7,2,2)$ functional batch array code. There are 8 information bits and 7 buckets, each with 2 cells, and we show that this code can satisfy each multiset request of size 2. Let ${\cal S}_1 = \{x_1,x_2,x_3,x_4\}$ be the set of the first $4$ information bits and ${\cal S}_2 = \{x_5,x_6,x_7,x_8\}$ be the set of the last $4$ information bits. Let $R = \{{\boldsymbol v}_1,{\boldsymbol v}_2\}$ be a multiset request of size $2$, where ${\boldsymbol v}_1$ and ${\boldsymbol v}_2$ are vectors of length $8$. For each $i\in[2]$, ${\boldsymbol v}_i = ({\boldsymbol v}_i^1,{\boldsymbol v}_i^2)$ where ${\boldsymbol v}_i^j,j\in[2]$ is a vector of length $4$ that represents a linear combination of the bits in ${\cal S}_j$.
The possible linear combinations of ${\cal S}_1$ are divided into four different types in the following way. \begin{enumerate} \item The first type ${\cal T}_1$ includes the vectors that can be satisfied by using only one bucket from the buckets $1-3$. \item The second type ${\cal T}_2$ includes any vector ${\boldsymbol u}$ such that the vectors ${\boldsymbol u}+(1,1,0,0)$ and ${\boldsymbol u} + (0,0,1,1)$ can each be satisfied by one bucket from buckets $1-3$. (The vector $(1,1,0,0)$ represents the linear combination $x_1+x_2$.) \item The third type ${\cal T}_3$ includes any vector ${\boldsymbol u}$ such that the vectors ${\boldsymbol u}+(1,1,1,1)$ and ${\boldsymbol u} + (1,1,0,0)$ can each be satisfied by one bucket from the buckets $1-3$. \item The fourth type ${\cal T}_4$ includes any vector ${\boldsymbol u}$ such that the vectors ${\boldsymbol u}+(1,1,1,1)$ and ${\boldsymbol u} + (0,0,1,1)$ can each be satisfied by one bucket from the buckets $1-3$. \end{enumerate} These four types are disjoint and their union covers all the nonzero linear combinations of ${\cal S}_1$. By the symmetry between the first four and the last four information bits, the linear combinations of ${\cal S}_2$ are divided in the same way. It is possible to see that every two buckets from buckets $1-3$ can satisfy each possible linear combination of the first four bits. In the same way, every two buckets from buckets $4-6$ can satisfy each possible linear combination of the last four bits. Also, the last bucket can satisfy each vector $({\boldsymbol u},{\boldsymbol u})$, where ${\boldsymbol u} \in \{(1,1,0,0),(0,0,1,1),(1,1,1,1)\}$.
If one of the vectors $\{{\boldsymbol v}_1^1,{\boldsymbol v}_2^1\}$ is included in ${\cal T}_1$ (assume it is ${\boldsymbol v}_1^1$) and one of the vectors $\{{\boldsymbol v}_1^2,{\boldsymbol v}_2^2\}$ is included in ${\cal T}_1$ (assume it is ${\boldsymbol v}_1^2$), then these two vectors can be satisfied by one bucket from $1-3$ and one bucket from $4-6$. Then the remaining two buckets of $1-3$ can satisfy ${\boldsymbol v}_2^1$ and the remaining two buckets of $4-6$ can satisfy ${\boldsymbol v}_2^2$. Therefore, in this case the request $R$ is satisfied by disjoint sets. If there exist $2\leq q_1,q_2 \leq 4$ such that ${\boldsymbol v}_1^1\in {\cal T}_{q_1}$ and ${\boldsymbol v}_1^2\in {\cal T}_{q_2}$, then there exists a vector ${\boldsymbol u}'$ such that ${\boldsymbol v}_1^1 + {\boldsymbol u}'$ can be satisfied by one bucket from buckets $1-3$ and ${\boldsymbol v}_1^2 + {\boldsymbol u}'$ can be satisfied by one bucket from buckets $4-6$. Thus, the code can satisfy ${\boldsymbol v}_1^1$ and ${\boldsymbol v}_1^2$, which together constitute the request ${\boldsymbol v}_1$, by one bucket from $1-3$, one bucket from $4-6$, and the last bucket, which satisfies the request $({\boldsymbol u}',{\boldsymbol u}')$ for each possible ${\boldsymbol u}'$. Then, the remaining two buckets of $1-3$ can satisfy ${\boldsymbol v}_2^1$ and the remaining two buckets of $4-6$ can satisfy ${\boldsymbol v}_2^2$. Similarly, if there exist $2\leq q_1,q_2 \leq 4$ such that ${\boldsymbol v}_2^1\in {\cal T}_{q_1}$ and ${\boldsymbol v}_2^2\in {\cal T}_{q_2}$, the code can satisfy the requests ${\boldsymbol v}_1$ and ${\boldsymbol v}_2$ by disjoint sets. The last case is when $\{{\boldsymbol v}_1^1,{\boldsymbol v}_2^1\}\subseteq {\cal T}_1$ and $\{{\boldsymbol v}_1^2,{\boldsymbol v}_2^2\}\subseteq {\cal T}_q$, where $2\leq q\leq 4$ (or $\{{\boldsymbol v}_1^1,{\boldsymbol v}_2^1\}\subseteq {\cal T}_q$ and $\{{\boldsymbol v}_1^2,{\boldsymbol v}_2^2\}\subseteq {\cal T}_1$).
First, we satisfy ${\boldsymbol v}_1^1$ by one bucket from $1-3$. Then, take a vector ${\boldsymbol u}''$ such that ${\boldsymbol v}_2^2 + {\boldsymbol u}''$ can be satisfied by one bucket from $4-6$, denoted by $b_1$. The vector ${\boldsymbol v}_2^1 + {\boldsymbol u}''$ can be satisfied by the remaining two buckets from $1-3$, denoted by $b_2,b_3$. Then, the request $R_2 = \{{\boldsymbol v}_2^1,{\boldsymbol v}_2^2\}$ can be satisfied by $\{b_1,b_2,b_3,7\}$ (where $7$ is the last bucket). Lastly, the request ${\boldsymbol v}_1^2$ can be satisfied by the remaining two buckets from $4-6$. Thus, we can conclude that there exist $2$ recovering sets for each possible request, and hence, $FB_{2,2}(8,2) \leq 7$. \end{IEEEproof} The result in Theorem~\ref{theorem:FB2282} can be generalized to different values of $s$. \begin{corollary}\label{cor:FB22s2} $\log_{7}(2^{s-1}\cdot (2^s - 1)) \leq FB_{2,2}(s,2) \leq 7\cdot\left\lceil \frac{s}{8} \right\rceil.$ \end{corollary} \begin{IEEEproof} The upper bound is derived from \Tref{theorem:FB2282} and \Tref{theorem:Basic}\eqref{theorem:partas}. The lower bound is obtained from \Tref{theorem:LBFB}, which states that $FB_{2,2}(s,2) \geq m$, where $m$ is the smallest positive integer such that $ \sum_{i=2}^{m} {m \choose i} \cdot {i \brace 2} \cdot \left(\sum_{j=1}^{2} {2 \choose j} \right)^{i} \geq {2^s \choose 2}$. It is known that ${i \brace 2} = 2^{i-1} - 1$. Thus, $\sum_{i=2}^{m} {m \choose i} \cdot (2^{i-1} - 1) \cdot 3^{i} \geq 2^{s-1}\cdot (2^s - 1)$. For each $i\geq 2$, $(2^{i-1} -1)\cdot 3^i \leq 6^i$. Hence, it must hold that $\sum_{i=0}^{m} {m \choose i} \cdot 6^i \geq \sum_{i=2}^{m} {m \choose i} \cdot 6^i \geq 2^{s-1}\cdot (2^s - 1)$. From the equality $\sum_{i=0}^{m} {m \choose i} \cdot x^i = (x+1)^m$, we get that $\sum_{i=0}^{m} {m \choose i} \cdot 6^i = 7^m \geq 2^{s-1} \cdot (2^s - 1)$. Thus, $FB_{2,2}(s,2) \geq m \geq \log_{7}(2^{s-1} \cdot (2^s - 1))$.
\end{IEEEproof} According to Corollary~\ref{cor:FB22s2}, for $s$ large enough, $\log_{7}(2^{s-1}\cdot (2^s - 1)) = \log_{7} (2^{s-1}) + \log_{7} (2^{s}-1) \approx (s-1)\cdot \log_7(2) + s \cdot \log_7(2) = (2s-1) \cdot \log_7(2) \approx 0.71s$, and thus $0.71s \lesssim FB_{2,2}(s,2) \leq 7\cdot\left\lceil \frac{s}{8} \right\rceil.$ In addition, the result in Theorem~\ref{theorem:FB2282} can be modified to a different value of $t$. \begin{corollary}\label{cor:FB3182} $6 \leq FB_{3,1}(8,2) \leq 7$. \end{corollary} \begin{IEEEproof} The lower bound is obtained from Theorem~\ref{theorem:LBFB}. The upper bound is verified by Corollary~\ref{cor:coveringgtell}\eqref{cor13:part2}, where $FB_{3,1}(8,2) \leq FB_{2,2}(8,2) \leq 7$. \end{IEEEproof} \section{Specific Constructions of Array Codes}\label{sec:cons_spec} In this section we discuss three constructions of array codes. \subsection{Construction $A$} We start with a construction given in~\cite[Th.~20]{FVY15}; it was proved in~\cite[Th.~10]{CKYZ19} that this construction gives a PIR array code for any integer $t\geq2$. We study how it can also be used as a batch and a functional PIR array code for $t=2$. First, the construction for the general case is presented. \begin{comment} \begin{construction}\label{cons10} \e{Let $t \geq 2$ be a fixed integer. The number of information bits is $s = t(t + 1)$, the number of cells in each bucket (the number of the rows) is $t$. In the first $m' = {t(t+1) \choose t}$ buckets all tuples of $t$ bits out of the $t(t + 1)$ information bits are stored. In the last $m''={t(t+1) \choose t+1}/t$ buckets all possible summations of $t + 1$ bits are stored, such that each one of the $t(t+1)$ bits appears in exactly one summation in every bucket.} \end{construction} \end{comment} \begin{construction}\label{cons10} Let $t \geq 2$ be a fixed integer. The number of information bits is $s = t(t + 1)$, and the number of cells in each bucket (the number of rows) is $t$.
The number of buckets is $m=m'+m''$, where $m' = {t(t+1) \choose t}$, and $m'' = {t(t+1) \choose t+1}/t$. In the first $m'$ buckets, all the tuples of $t$ bits out of the $t(t + 1)$ information bits are stored, which requires ${t(t+1) \choose t}$ buckets. In the last $m''$ buckets we store all possible summations of $t + 1$ bits, such that each one of the $t(t+1)$ bits appears in exactly one summation in every bucket. There are ${t(t+1) \choose t+1}$ such summations, and since each bucket has $t$ rows, $t$ summations can be stored in each bucket; hence, the number of buckets in this part is $m'' = {t(t+1) \choose t+1}/t$. \end{construction} For any integer $t \geq 2$ denote the code that is obtained from Construction~\ref{cons10} by ${\cal C}^A_t$. Construction~\ref{cons10} for the case of $t=2$ is demonstrated in Table~\ref{ex_cons10}. \begin{table*} \begin{center} \caption{Construction~\ref{cons10} for $t=2$}\label{ex_cons10} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline 1&2&3&4&5&6&7&8&9&10&11&12&13&14&15 \\ \hline \hline $x_1$ & $x_1$ & $x_1$ & $x_1$ & $x_1$ & $x_2$ & $x_2$ & $x_2$ & $x_2$ & $x_3$ & $x_3$ & $x_3$ & $x_4$ & $x_4$ & $x_5$ \\ \hline $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_6$ & $x_3$ & $x_4$ & $x_5$ & $x_6$ & $x_4$ & $x_5$ & $x_6$ & $x_5$ & $x_6$ & $x_6$ \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c| } \hline 16&17&18&19&20&21&22&23&24&25 \\ \hline \hline $x_1x_2x_3$ & $x_1x_2x_4$ & $x_1x_2x_5$ & $x_1x_2x_6$ & $x_1x_3x_4$ & $x_1x_3x_5$ & $x_1x_3x_6$ & $x_1x_4x_5$& $x_1x_4x_6$& $x_1x_5x_6$\\ \hline $x_4x_5x_6$ & $x_3x_5x_6$ & $x_3x_4x_6$ & $x_3x_4x_5$ & $x_2x_5x_6$ & $x_2x_4x_6$ & $x_2x_4x_5$ & $x_2x_3x_6$& $x_2x_3x_5$& $x_2x_3x_4$\\ \hline \end{tabular} \end{center} \end{table*} \begin{comment} To prove the correctness of Construction~\ref{cons10} we want to define \emph{hypergraphs}, where a hypergraph ${\cal G}(V,E)$ consists of set of elements
$V$ called \emph{vertices}, and set $E$ of nonempty subsets of $V$ called \emph{hyperedges}. The hypergraph is $r$-uniform, if each hyperedge consists of exactly $r$ vertices of $V$. A \emph{1-factor} of a $r$-uniform hypergraph is a set of hyperedges that touches each vertex exactly once, or equivalently a partition of the vertices into subsets of size $r$. The correctness of Construction~\ref{cons10} is heavily based on the following theorem. \begin{theorem}(\textbf{Baranyai's Theorem})\label{theorem:Baranyai} If $r | p$, then the complete $r$-uniform hypergraph on $p$ vertices decomposes into 1-factors, where a 1-factor is a set of $p/r$ pairwise disjoint $r$-sets. \end{theorem} \e{Next we prove the correctness of the construction.} \begin{theorem}\label{theorem:PExample9} For all integer $t\geq 2$, the code ${\cal C}^A_t$ from Construction~\ref{cons10} is a $(t(t+1),k,m,t,t)$ PIR array code, where $k = {t(t+1) \choose t}$, $m = {t(t+1) \choose t} + \frac{{t(t+1) \choose t+1}}{t}$. In particular, $$\frac{k\cdot(t+1)(2t^2+1)}{t^3 + 2t^2 +1} \leq P_{t,t}(t(t+1),k) \leq m.$$ \end{theorem} \begin{IEEEproof} We can get the lower bound by using \Tref{theorem:PIRLB}\eqref{theorem:part2}, $P_{t,t}(t+d,k) \geq \frac{k\cdot (t+d)(2d+1)}{(2d+1)t+d^2}$, we have $d = t^2$, thus \begin{align*} P_{t,t}(t+t^2,k) \geq \frac{k\cdot (t+t^2)(2t^2+1)}{(2t^2+1)t+t^4} = \frac{k\cdot (t+1)(2t^2+1)}{t^3 + 2t^2 + 1}. \end{align*} To get the upper bound we will use Construction~\ref{cons10}. To show that we can divide all of the possible summations of $t+1$ bits into ${t(t+1) \choose t+1}/t$ buckets, such that all the information bits appear in each bucket and each one of the bits appears in exactly one summation, we can use Baranyai's theorem. The information bits will be the vertices, there are $p=t(t+1)$ vertices, and we choose $r=t+1$. The hypergraph on this $p$ vertices decomposes into 1-factors, where each one is a set of $p/r = t$ disjoint $r $-sets. 
Which means that there are ${p \choose r} \cdot \frac{r}{p} = {p-1 \choose r-1}$ ways to divide the $p$ vertices into subsets of size $r$, such that each subset appears in exactly one division. We know that each division has disjoint subsets that include all the vertices. Thus, there are ${p-1 \choose r-1} = {t(t+1)-1 \choose t} = {t(t+1) \choose t+1}/t$ ways to divide the $p = t(t+1)$ information bits into subsets of size $r = t+1$ such that each subset appears in one division, and each division has disjoint subsets which include all the information bits, which means that each information bit appears in exactly one subset. We can have a summation of the information bits from each such subset, each division has $p/r = t$ subsets thus we have $t$ summations and we can write them in one bucket, and for the ${t(t+1) \choose t+1}/t$ divisions we need ${t(t+1) \choose t+1}/t$ buckets. We will show that this construction gives us a $(t(t+1),k,m,t,t)$ PIR array code, with parameters as in the theorem. We want to show that for every information bit $x_i, i \in [s]$, there are $k$ disjoint sets $R_{i,1}, . . . , R_{i,k} \subseteq [m]$ such that for all $j \in [k]$, $x_i$ is equal to a linear combination of the bits stored in the buckets of the set $R_{i,j}$. From the construction there are ${t(t+1) - 1 \choose t-1}$ buckets from the first $m'$ buckets that include the bit $x_i$, because there are ${t(t+1) - 1 \choose t-1}$ tuples of $t$ bits that include $x_1$. We can take each such bucket as a recovering set, because it includes $x_i$ as a singleton. The last $m''$ buckets include summations of $t+1$ bits, and each bucket has all the information bits exactly once in one summation, thus $x_i$ appears in all last $m''$ buckets. 
In each summation that includes $x_i$ it also includes $t$ other bits, which are not $x_i$, then we can read these $t$ bits from one of the first $m'$ buckets, which includes this $t$-tuple, and this bucket is not one of the buckets we took from the first $m'$ buckets because it does not include $x_i$. Thus a set which contains one of the last $m''$ buckets, and the appropriate bucket from the first $m'$ buckets, is a recovering set for $x_i$. We know from the construction that $x_i$ appears in all different possible summations with all different possible subsets of $t$ bits, each summation is in one of the last $m''$ buckets. In addition, we know that each one of all different possible tuples of $t$ bits appears in different bucket of the first $m'$ buckets. There are ${t(t+1) - 1 \choose t}$ possible summations of $t+1$ bits that include $x_i$, thus there are another ${t(t+1) - 1 \choose t}$ recovering sets for $x_i$. Every two recovering sets are disjoint: let $R_{i,j_1}$ and $R_{i,j_2}$ different recovering sets, if the two sets are singletons then they are from the first $m'$ buckets and each set contains different bucket. If $|R_{i,j_1}| = 1$ and $|R_{i,j_2}| = 2$, then they are disjoint because the bucket in $R_{i,j_1}$ is from the first $m'$ buckets and contains $x_i$, but the two buckets in $R_{i,j_2}$ one is from the last $m''$ buckets and one from the first $m'$ buckets but does not have $x_i$. If $|R_{i,j_1}| = 2$ and $|R_{i,j_2}| = 2$, then each set has a bucket with different summation that includes $x_i$ then the $t$-tuple of bits in the different summations is different, thus the buckets that include these two $t$-tuple are different. We found ${t(t+1) - 1 \choose t-1} + {t(t+1) - 1 \choose t}$ disjoint sets to recover $x_i$. And using the identity ${n-1 \choose k-1} + {n-1 \choose k} = {n \choose k}$, we get that $k = {t(t+1) - 1 \choose t-1} + {t(t+1) - 1 \choose t} = {t(t+1) \choose t}$. 
\end{IEEEproof} We can see that according to Stirling's approximation when $t$ is sufficiently large we get that \begin{align*} k &= {t(t+1) \choose t} \approx \frac{(t(t+1))^{t(t+1)}}{t^t \cdot (t^2)^{t^2}} = \frac{t^{t(t+1)}\cdot (t+1)^{t(t+1)}}{t^{2t^2 + t}} & \\ & = \frac{(t+1)^{t(t+1)}}{t^{t^2}} \approx \frac{t^{t^2+t}}{t^{t^2}} = t^t.& \end{align*} Thus according to the lower bound we have in Theorem~\ref{theorem:PExample9}, for sufficiently large $t$ we get $P_{t,t}(t(t+1),k) \geq \frac{k\cdot(2t^3 + 2t^2+t+1)}{t^3 + 2t^2 +1} \approx 2t^t$. For the upper bound we have $P_{t,t}(t(t+1),k) \leq m$, where $m = {t(t+1) \choose t} + \frac{{t(t+1) \choose t+1}}{t} = k + \frac{{t(t+1) \choose t+1}}{t}$. For sufficiently large $t$ we get that \begin{align*} m &= k + \frac{{t(t+1) \choose t+1}}{t} &\\ & = k + \frac{t(t+1) + 1 - (t+1)}{t+1} \cdot {t(t+1) \choose t} \cdot \frac{1}{t} &\\ & = k + \frac{t^2}{t+1} \cdot \frac{1}{t} \cdot k \approx 2k \approx 2t^t. \end{align*} \end{comment} Now we want to show that the code ${\cal C}^A_2$ is a $(6,15,25,2,2)$ batch array code, by using several properties which are proved in the following three lemmas. For each $i\in[6]$, denote by ${\cal F}_i\subseteq [15]$ the subset of the first $15$ buckets that have a cell with the singleton $x_i$. It holds that for any $i\in[6]$, $|{\cal F}_i| = 5$, and for any different $i,j\in[6]$, $|{\cal F}_i \cap {\cal F}_j| = 1$. Assume that every multiset request $R$ of size $k=15$ is represented by a vector $(k_1,\ldots,k_6)$, where $k_i$ indicates the number of times $x_i$ appears in the multiset request and, without loss of generality, $k_1 \geq \cdots \geq k_6$. \begin{lemma}\label{lemma:BEx9Aux1} For any multiset request $(k_1,\ldots,k_6)$ of size $k=15$, the code ${\cal C}^A_2$ can satisfy all the requests of bits $x_3,x_4,x_5,x_6$ by using only the first $15$ buckets.
\end{lemma} \begin{IEEEproof} The proof is divided into the following cases according to the number of different information bits that appear in the request. \noindent {\bf Case 1:} If $k_3 = 0$, then none of the bits $x_3,x_4,x_5,x_6$ is requested and the property clearly holds. \noindent {\bf Case 2:} If $k_4 = 0$, then it necessarily holds that $k_3\leq 5$. Assume by contradiction that $k_3>5$. Then, it holds that $k_1\geq k_2 > 5$, and hence, $k = k_1+k_2+k_3 > 15$, which is a contradiction. Thus $k_3 \leq 5$ and the code can use $k_3$ buckets from ${\cal F}_3$. \noindent {\bf Case 3:} If $k_5 = 0$, then it necessarily holds that $k_4\leq k_3 \leq 4$. Assume by contradiction that $k_4>4$. Then, it holds that $k_1\geq k_2 \geq k_3 > 4$, and hence, $k = k_1+k_2+k_3+k_4 > 15$, which is a contradiction. Assume by contradiction that $k_3>4$, when $k_4 \geq 1$. Then, it holds that $k_1\geq k_2 > 4$, and hence, $k = k_1+k_2+k_3+k_4 > 15$, which is a contradiction. Thus $k_3 \leq 4$ and the code ${\cal C}^A_2$ can satisfy the bit requests of $x_3$ by taking $k_3$ buckets from ${\cal F}_3$. Then the code ${\cal C}^A_2$ can satisfy the bit requests of $x_4$ by taking $k_4\leq 4$ buckets from ${\cal F}_4 \setminus ({\cal F}_4 \cap {\cal F}_3)$, where $|{\cal F}_4 \setminus ({\cal F}_4 \cap {\cal F}_3)| = 4$. \noindent {\bf Case 4:} If $k_6 = 0$, then it necessarily holds that $k_5\leq k_4 \leq 3$ and $k_3 \leq 4$. Assume by contradiction that $k_5>3$. Then, it holds that $k_1\geq k_2 \geq k_3 \geq k_4 \geq k_5 > 3$, and hence, $k = k_1+k_2+k_3+k_4+k_5 > 15$, which is a contradiction. Assume by contradiction that $k_4>3$, when $k_5 \geq 1$. Then, it holds that $k_1\geq k_2 \geq k_3 \geq k_4 > 3$, and hence, $k = k_1+k_2+k_3+k_4+k_5 > 15$, which is a contradiction. Assume by contradiction that $k_3>4$, when $k_5 + k_4 \geq 2$. Then, it holds that $k_1\geq k_2 \geq k_3 > 4$, and hence, $k = k_1+k_2+k_3+k_4+k_5 > 15$, which is a contradiction.
Thus, $k_3 \leq 4$ and the code ${\cal C}^A_2$ can satisfy the bit requests of $x_3$ by taking $k_3$ buckets from ${\cal F}_3$. Also, $k_4 \leq 3$, then the code ${\cal C}^A_2$ can satisfy the bit requests of $x_4$ by taking $k_4$ buckets from ${\cal F}_4 \setminus ({\cal F}_4 \cap {\cal F}_3)$. Lastly, the code ${\cal C}^A_2$ can satisfy the bit requests of $x_5$ by taking $k_5\leq 3$ buckets from ${\cal F}_5 \setminus (({\cal F}_5 \cap {\cal F}_4) \cup ({\cal F}_5 \cap {\cal F}_3))$, where $|{\cal F}_5 \setminus (({\cal F}_5 \cap {\cal F}_4) \cup ({\cal F}_5 \cap {\cal F}_3))| = 3$. \noindent {\bf Case 5:} If $k_6 > 0$, then it necessarily holds that $k_6\leq k_5 \leq 2$, $k_4 \leq 3$ and $k_3 \leq 4$. Assume by contradiction that $k_6>2$. Then, it holds that $k_1\geq k_2 \geq k_3 \geq k_4 \geq k_5 > 2$, and hence, $k = \sum_{i=1}^{6} k_i > 15$, which is a contradiction. Assume by contradiction that $k_5>2$ when $k_6 \geq 1$. Then, it holds that $k_1\geq k_2 \geq k_3 \geq k_4 > 2$, and hence, $k = \sum_{i=1}^{6} k_i > 15$, which is a contradiction. Assume by contradiction that $k_4>3$ when $k_6 + k_5 \geq 2$. Then, it holds that $k_1\geq k_2 \geq k_3 > 3$, and hence, $k = \sum_{i=1}^{6} k_i> 15$, which is a contradiction. Assume by contradiction that $k_3>4$ when $k_6 + k_5 + k_4 \geq 3$. Then, it holds that $k_1\geq k_2 > 4$, and hence, $k = \sum_{i=1}^{6} k_i> 15$, which is a contradiction. Thus, $1 \leq k_3 \leq 4$ and the code ${\cal C}^A_2$ can satisfy the bit requests of $x_3$ by taking $k_3$ buckets from ${\cal F}_3$. Then the code ${\cal C}^A_2$ can satisfy the bit requests of $x_4$ by taking $k_4\leq 3$ buckets from ${\cal F}_4 \setminus ({\cal F}_4 \cap {\cal F}_3)$. Then the code ${\cal C}^A_2$ can satisfy the bit requests of $x_5$ by taking $k_5\leq 2$ buckets from ${\cal F}_5 \setminus (({\cal F}_5 \cap {\cal F}_4) \cup ({\cal F}_5 \cap {\cal F}_3))$. 
Lastly, the code ${\cal C}^A_2$ can satisfy the bit requests of $x_6$ by taking $k_6\leq 2$ buckets from ${\cal F}_6 \setminus (({\cal F}_6 \cap {\cal F}_5) \cup ({\cal F}_6 \cap {\cal F}_4) \cup ({\cal F}_6 \cap {\cal F}_3))$, where $\left|{\cal F}_6 \setminus (({\cal F}_6 \cap {\cal F}_5) \cup ({\cal F}_6 \cap {\cal F}_4) \cup ({\cal F}_6 \cap {\cal F}_3))\right| = 2$. \end{IEEEproof} \begin{lemma}\label{lemma:BEx9Aux3} In the code ${\cal C}^A_2$, for any information bit $x_i$ and for any bucket $b_1 \in [15] \setminus {\cal F}_i$, there exists a bucket $b_2, 16\leq b_2\leq 25$ such that $\{b_1,b_2\}$ is a recovering set of $x_i$. In addition, the $\left| [15] \setminus {\cal F}_i \right|$ recovering sets are mutually disjoint. \end{lemma} \begin{IEEEproof} For any information bit $x_i$, the buckets of $[15]\setminus {\cal F}_i$ are the buckets among the first $m'=15$ buckets that do not include $x_i$. Each bucket $b_1 \in [15]\setminus {\cal F}_i$ has two singletons $x_{j_1},x_{j_2}$ which are different from $x_i$. From the construction of the code ${\cal C}^A_2$ we know that there exists a bucket $b_2$ from the last $10$ buckets that has the summation $x_i + x_{j_1} + x_{j_2}$. Thus, the subset $\{b_1,b_2\}$ is a recovering set of $x_i$. We want to show that for any two different buckets $b'_1,b''_1 \in [15]\setminus {\cal F}_i$, the recovering sets $\{b'_1,b'_2\}$ and $\{b''_1,b''_2\}$ of $x_i$ are disjoint. It holds that $\{b'_1\} \cap \{b''_1,b''_2\} = \emptyset$, since $b'_1 \neq b''_1$ and $b'_1 \neq b''_2$, where the latter holds because $b'_1\in[15]$ but $b''_2 \notin[15]$. In addition, $\{b'_2\} \cap \{b''_1,b''_2\} = \emptyset$, since $b'_2 \notin [15]$ while $b''_1\in [15]$, and $b'_2 \neq b''_2$ because each bucket among the last $10$ buckets has exactly one summation with $x_i$. \end{IEEEproof} For any information bit $x_i,i\in[6]$, denote by $R_{b}^{i}$ the recovering set that uses bucket $b\in[15]$ and satisfies $x_i$.
For example, $R_1^1 = \{1\}$ and $R_{12}^1 = \{12,22\}$. \begin{lemma}\label{lemma:BEx9Aux2} For the two information bits $x_1,x_2$, the buckets $\{10,11,\ldots,15\}$ are divided into $3$ pairs, ${\cal P} = \{(10,15)$, $(11,14)$,$(12,13)\}$, such that for any pair $(b_1,b_2)\in {\cal P}$, it holds that $\left|R_{b_1}^{1} \cap R_{b_{2}}^{2}\right| > 0$ and $\left|R_{b_1}^{2} \cap R_{b_{2}}^{1}\right| > 0$. \end{lemma} \begin{IEEEproof} For the first pair, $(10,15)$, it holds that $R_{10}^1 = \{10,20\}, R_{10}^2 = \{10,25\},R_{15}^1 = \{15,25\}$, and $R_{15}^2 = \{15,20\}$. Then, it holds that $\left|R_{10}^{1} \cap R_{15}^{2}\right| = |\{10,20\} \cap \{15,20\}| > 0$ and $\left|R_{10}^{2} \cap R_{15}^{1}\right| = |\{10,25\} \cap \{15,25\}| > 0$. Similarly, the claim holds also for the pairs $(11,14)$ and $(12,13)$. \end{IEEEproof} \begin{comment} \begin{IEEEproof} The number of buckets in the subset $[15\setminus ({\cal F}_1 \cup {\cal F}_2)$ is $6$, which are the bucket that doesn't contain $x_1,x_2$ or both of them. From the construction of the code ${\cal C}^A_2$ we can see that $[15]\setminus ({\cal F}_1 \cup {\cal F}_2) = \{10,\ldots,15\}$ can be partitioned into $3$ pairs, such that each pair contains exactly the four information bits $x_3,x_4,x_5,x_6$. It is possible because each bucket in $[10,15]$ has two different singletons from the subset $\{x_3,x_4,x_5,x_6\}$, and we can see that the pairs are $\{(10,15),(11,14),(12,13)\}$. Denote by $b_1, 10\leq b_1 \leq15$ the bucket that has $x_{q_1},x_{q_2}$ as singletons, and by $b_2$ the bucket that form a pair with $b_1$, which has $x_{q_3},x_{q_4}$ as singletons. The recovering set $R_{b_1}^{(1)} = \{b_1,b'_1\}$ where $b'_1\in{\cal T}_2$ that has $x_1+x_{q_1} + x_{q_2}$. The recovering set $R_{b_2}^{(1)} = \{b_1,b'_2\}$ where $b'_2\in{\cal T}_2$ that has $x_1+x_{q_3} + x_{q_4}$. The recovering set $R_{b_1}^{(2)} = \{b_1,b'_3\}$ where $b'_3\in{\cal T}_2$ that has $x_2+x_{q_1} + x_{q_2}$.
The recovering set $R_{b_2}^{(2)} = \{b_1,b'_4\}$ where $b'_4\in{\cal T}_2$ that has $x_2+x_{q_3} + x_{q_4}$. From the construction of $C^A_2$ we know that the $6$ information bits appears in every bucket in ${\cal T}_2$, thus the bucket that has a summation $x_1+x_{q_1} + x_{q_2}$, includes also the summation $x_2+x_{q_3} + x_{q_4}$. Thus, $b'_1 = b'_4$ and $b'_2=b'_4$. We get that $R_{b_1}^{(1)} \cap R_{b_1}^{(2)} = {b_1}$, $R_{b_1}^{(1)} \cap R_{b_2}^{(2)} = {b'_1}$. Also, $R_{b_2}^{(1)} \cap R_{b_1}^{(2)} = {b'_2}$, $R_{b_2}^{(1)} \cap R_{b_2}^{(2)} = {b_2}$. For any other bucket in $[10,15]$ different than $b_1,b_2$, we have $\left|R_{b_1}^{(1)} \cap R_{b_r}^{(2)}\right| = 0$. Assume by contradiction that $R_{b_1}^{(1)} \cap R_{b_r}^{(2)} = \{b\}$, this means that $b \in R_{b_r}^{(2)}$ and $b\in\{b_1,b'_1\}$. Then $\left|R_{b_r}^{(2)} \cap R_{b_1}^{(2)}\right| > 0$ or $\left|R_{b_r}^{(2)} \cap R_{b_2}^{(2)}\right| > 0$, which is a contradiction to the fact that the recovering sets of $x_2$ are disjoint. Similarly, for any other bucket in $[10,15]$ different than $b_1,b_2$, we have $\left|R_{b_2}^{(1)} \cap R_{b_r}^{(2)}\right| = 0$. \end{IEEEproof} \end{comment} Now, we are ready to show that the code ${\cal C}^A_2$ is a $(6,15,25,2,2)$ batch array code. \begin{theorem}\label{theorem:BExample9} The code ${\cal C}^A_2$ is a $(6,15,25,2,2)$ batch array code. In particular, $B_{2,2}(6,15) = 25$. \end{theorem} \begin{IEEEproof} The lower bound is derived from Theorem~\ref{theorem:PIRLB}\eqref{theorem:part4}, $B_{2,2}(6,15) \geq \frac{30\cdot 6 \cdot7}{(4)^2 + 36 - 4 + 4} > 24$. The upper bound is derived from the code ${\cal C}^A_2$. Let $(k_1,\ldots,k_6)$ be a multiset request of size $k = 15$. The first step is to satisfy all the requests of bits $x_3,x_4,x_5,x_6$ according to Lemma~\ref{lemma:BEx9Aux1} by using only the first $m' = 15$ buckets. Then, the remaining requests are of the bits $x_1,x_2$. 
Denote by $\alpha_1,\alpha_2$ the number of remaining buckets among the first $m'=15$ buckets that include $x_1$, respectively $x_2$, as a singleton, but not both of them. Then, take $\min\{k_2,\alpha_2\}$ buckets as recovering sets of $x_2$ and take $\min\{k_1,\alpha_1\}$ buckets as recovering sets of $x_1$. The first bucket, which contains the singletons $x_1,x_2$, is not used yet. Denote by $r$ the number of bit requests from the multiset request that were satisfied so far. Furthermore, denote by $k'_1,k'_2$ the number of remaining bit requests of $x_1,x_2$, respectively, where $k'_1 = k_1 - \min\{k_1,\alpha_1\}$ and $k'_2 = k_2 - \min\{k_2,\alpha_2\}$. After this step, $15-r$ buckets among the first $m'=15$ buckets are still unused, including the first bucket, as well as all the last $m''=10$ buckets. Therefore, for $x_1$ and $x_2$ there are $15-r$ possible recovering sets. The second step is to satisfy the remaining $15-r$ bit requests from the multiset request. If $k'_1 = 0$ or $k'_2=0$, then it is possible to satisfy them by using the remaining $k-r=15-r$ recovering sets of $x_1$ or $x_2$. Otherwise, $k'_1 > 0$ and $k'_2>0$. Let ${\cal G} \subseteq \{10,11,\ldots,15\}$ be the subset of buckets from $\{10,11,\ldots,15\}$ that were not used in the first step, and let $p = 6 - |{\cal G}|$. So far we used all the buckets from the set $({\cal F}_1 \cup {\cal F}_2)\setminus \{1\}$, which is of size $8$, and another $p$ buckets from the subset $\{10,11,\ldots,15\}$. Thus, $k'_1+k'_2 = 7 - p$. According to Lemma~\ref{lemma:BEx9Aux3}, there are at least $7-p$ remaining recovering sets for each bit of $\{x_1,x_2\}$, which are the set $\{1\}$ and the sets $R_b^i$ where $b\in{\cal G}$ and $i \in [2]$. According to Lemma~\ref{lemma:BEx9Aux2}, the buckets $\{10,11,\ldots,15\}$ are divided into $3$ pairs, where the $b$-th bucket is paired with the $(25-b)$-th bucket, for $10\leq b\leq 15$.
The subset ${\cal G}$ is partitioned into two subsets, ${\cal U}_1 = \{b\in{\cal G} : (25-b) \in {\cal G}\}$ and ${\cal U}_2 = \{b\in{\cal G} : (25-b) \notin {\cal G}\}$. Let $\beta_1 = |{\cal U}_1|$ and $\beta_2 = |{\cal U}_2|$. The following cases are considered.

\noindent {\bf Case 1:} If $p$ is even and $k'_1$ is even (or $k'_2$ is even). Since $p$ is even, it is deduced that $\beta_2$ is even as well. Assume that $k'_1$ is even; then $(k'_1 - \beta_2)$ is also even. In order to satisfy $x_1$, we can take $\min\{\beta_2,k'_1\}$ recovering sets that use $\min\{\beta_2,k'_1\}$ buckets from ${\cal U}_2$. Note that $\beta_1 + \beta_2 = 6-p$ and $k'_1 \leq 6-p = \beta_1 + \beta_2$, so $k'_1 - \beta_2 \leq \beta_1$. If $k'_1>\beta_2$, then we can satisfy the remaining requests of $x_1$ with $(k'_1 - \beta_2)/2$ pairs of buckets from ${\cal U}_1$, where for each bucket $b$ among these $(k'_1 - \beta_2)$ buckets we take $R^1_b$ as a recovering set for $x_1$. It is possible to show that each recovering set for $x_1$ that uses a bucket from ${\cal U}_2$ intersects only one recovering set for $x_2$ that uses a bucket from ${\cal G}$. Also, each pair of recovering sets for $x_1$ that uses a pair of buckets from ${\cal U}_1$ intersects only two recovering sets for $x_2$ that use buckets from ${\cal G}$. Thus, the recovering sets chosen for $x_1$ rule out at most $\max \{k'_1, \beta_2 + 2\cdot \frac{k'_1 - \beta_2}{2}\} = k'_1$ of the $7-p$ recovering sets of $x_2$, and hence the remaining $7-p - k'_1 = k'_2$ of them can be used to satisfy the $k'_2$ requests of $x_2$. The case where $k'_1$ is odd but $k'_2$ is even can be solved similarly by exchanging the roles of $x_1$ and $x_2$.

\noindent {\bf Case 2:} If $p$ is odd and $k'_1$ is odd (or $k'_2$ is odd). Then $\beta_2$ is odd. Assume that $k'_1$ is odd; then $(k'_1 - \beta_2)$ is even and the rest is similar to Case 1.

\noindent {\bf Case 3:} If $p$ is even and $k'_1,k'_2$ are odd.
Then start with satisfying $x_1$ with the recovering set $\{1\}$. Then we still have an even number of remaining requests of $x_1$ that must be satisfied, and the rest is similar to Case 1.

\noindent {\bf Case 4:} If $p$ is odd and $k'_1,k'_2$ are even. Then start with satisfying $x_1$ with the recovering set $\{1\}$. Then we still have an odd number of remaining requests of $x_1$ that must be satisfied, and the rest is similar to Case 2.

Thus, we can conclude that the code can satisfy every multiset request of size $15$, and hence, $B_{2,2}(6,15) = 25$.
\end{IEEEproof}
In addition, it is possible to show that the code ${\cal C}^A_2$ is a $(6,11,25,2,2)$ functional PIR array code.
\begin{theorem}\label{theorem:FPExample9}
The code ${\cal C}^A_2$ is a $(6,11,25,2,2)$ functional PIR array code. In particular, $21 \leq FP_{2,2}(6,11) \leq 25.$
\end{theorem}
\begin{IEEEproof}
The lower bound is obtained from \Tref{theorem:LBFP4}, where $FP_{2,2}(6,11)\geq \frac{2\cdot 11 \cdot 63}{3 + 63} = 21$. The upper bound can be obtained from the code ${\cal C}^A_2$. Let $R$ be a request, i.e., a linear combination of the information bits, that the code ${\cal C}^A_2$ must satisfy $k=11$ times by disjoint recovering sets. Because of the symmetry of the bits $x_i,i\in[6]$, it is sufficient to check requests according to their length, that is, the number of information bits that appear in the request. Thus, the proof is divided into the following cases.

\noindent {\bf Case 1:} If the request contains one information bit, then this is the PIR case.

\noindent {\bf Case 2:} If the request contains two information bits, then assume that it is $x_1+x_2$. Then the recovering sets are the following: $\{\{1\}$, $\{2,6\}$, $\{3,7\}$, $\{4,8\}$, $\{5,9\}$, $\{16,11\}$, $\{17,10\}$, $\{18,13\}$, $\{19,12\}$, $\{20,25\}$, $\{21,24\}$, $\{22,23\}\}$.

\noindent {\bf Case 3:} If the request contains three information bits, then assume that it is $x_1+x_2+x_3$.
Then the recovering sets are the following: $\{\{16\}$, $\{1,2\}$, $\{17,10\}$, $\{18,11\}$, $\{19,12\}$, $\{20,7\}$, $\{21,8\}$, $\{22,9\}$, $\{23,5\}$, $\{24,4\}$, $\{25,3\}\}$.

\noindent {\bf Case 4:} If the request contains four information bits, then assume that it is $x_3+x_4+x_5+x_6$. Then the recovering sets are the following: $\{\{16,2\}$, $\{17,3\}$, $\{18,4\}$, $\{19,5\}$, $\{20,25\}$, $\{21,24\}$, $\{22,23\}$, $\{10,15\}$, $\{11,14\}$, $\{12,13\}$, $\{6,7,8,9\}\}$.

\noindent {\bf Case 5:} If the request contains five information bits, then assume that it is $x_2+x_3+x_4+x_5+x_6$. Then the recovering sets are the following: $\{\{16,1\}$, $\{17,2\}$, $\{18,3\}$, $\{19,4\}$, $\{20,5\}$, $\{21,11\}$, $\{22,12\}$, $\{23,13\}$, $\{24,14\}$, $\{25,15\}$, $\{6,7,8,9\}\}$.

\noindent {\bf Case 6:} If the request contains all the information bits, that is, it is $x_1+x_2+x_3+x_4+x_5+x_6$. Then the recovering sets are the following: $\{\{16\}$, $\{17\}$, $\{18\}$, $\{19\}$, $\{20\}$, $\{21\}$, $\{22\}$, $\{23\}$, $\{24\}$, $\{25\}$, $\{1,10,15\}$, $\{2,8,14\}$, $\{3,9,11\}$, $\{4,7,12\}$, $\{5,6,13\}\}$.
\end{IEEEproof}
\subsection{Construction B}
Next, we generalize an example given in~\cite{FVY15} of a PIR code for any integer $r\geq3$ and study how it can be used also as a batch array code. We first present the construction for the general case.
\begin{construction}\label{construction:cons8}
Let $r\geq 3$ be a fixed integer. The number of information bits is $s = r(r+1)$, the number of buckets is $m = r+1$, and the number of cells in each bucket is $t = (r-1)r + 1$. The information bits are partitioned into $r+1$ parts, each of size $r$; denote by ${\cal S}_i$ the $i$-th part of the bits. For each $i\in[r+1]$, write the linear combination $\sum_{j\in{\cal S}_i} x_j$ to bucket $i$. For each $i\in[r+1]$, write each of the $r$ subsets of size $r-1$ of ${\cal S}_i$ as singletons in a different bucket other than bucket $i$.
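The bookkeeping of the construction can be illustrated with a small sketch (not part of the formal construction; `construction_B` is our helper name, and the assignment of $(r-1)$-subsets to buckets below is one arbitrary valid choice):

```python
# A sketch of the bucket layout of Construction B.  Bits are 0-indexed;
# part S_i holds bits i*r .. (i+1)*r - 1.  A cell is a tuple of bit indices.
from itertools import combinations

def construction_B(r):
    """Return the m = r+1 buckets, each a list of t = r^2 - r + 1 cells."""
    parts = [list(range(i * r, (i + 1) * r)) for i in range(r + 1)]
    # The sum cell of part S_i goes to bucket i.
    buckets = [[tuple(parts[i])] for i in range(r + 1)]
    for i in range(r + 1):
        subsets = list(combinations(parts[i], r - 1))  # the r subsets of size r-1
        others = [j for j in range(r + 1) if j != i]   # the r other buckets
        for sub, j in zip(subsets, others):
            buckets[j].extend((b,) for b in sub)       # written as singletons
    return buckets

for r in range(3, 7):
    buckets = construction_B(r)
    assert len(buckets) == r + 1                          # m = r + 1
    assert all(len(b) == r * (r - 1) + 1 for b in buckets)  # t = r^2 - r + 1
    # every bit appears as a singleton in exactly r - 1 buckets
    for x in range(r * (r + 1)):
        assert sum(1 for b in buckets if (x,) in b) == r - 1
```

For $r=3$ this reproduces the shape of the table above: $4$ buckets of $7$ cells each, with every bit appearing as a singleton in exactly $2$ buckets.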
\end{construction} For any integer $r \geq 3$ denote the code that is obtained from Construction~\ref{construction:cons8} by ${\cal C}^B_r$. Construction~\ref{construction:cons8} for the case of $r=3$ is demonstrated in Table~\ref{ex_cons8}. It is possible to show that for any $r\geq 3$ the code ${\cal C}^B_r$ is an $(r^2+r,r,r+1,r^2-r+1,r-1)$ PIR array code. \begin{table} \begin{center} \caption{Construction~\ref{construction:cons8} for $r=3$}\label{ex_cons8} \begin{tabular}{ |c|c|c|c| } \hline 1 & 2 & 3 & 4 \\ \hline \hline $x_1x_2x_3$ & $x_1$ & $x_2$ & $x_1$ \\ \hline $x_4$ & $x_2$ & $x_3$ & $x_3$ \\ \hline $x_6$ & $x_4x_5x_6$ & $x_4$ & $x_5$\\ \hline $x_7$ & $x_7$ & $x_5$ & $x_6$ \\ \hline $x_8$ & $x_9$ & $x_7x_8x_9$ & $x_8$ \\ \hline $x_{10}$ & $x_{10}$ & $x_{11}$ & $x_{9}$ \\ \hline $x_{11}$ & $x_{12}$ & $x_{12}$ & $x_{10}x_{11}x_{12}$\\ \hline \end{tabular} \end{center} \end{table} \begin{theorem}\label{theorem:PExample8} For any integer $r \geq 3$ the code ${\cal C}^B_r$ from Construction~\ref{construction:cons8} is an $(r^2+r,r,r+1,r^2-r+1,r-1)$ PIR array code. In particular, $$\frac{r\cdot(4r^2 + 3r -1)}{4r^2 - r +1} \leq P_{r^2-r+1,r-1}(r^2+r,r) \leq r+1.$$ \end{theorem} \begin{IEEEproof} The lower bound can be obtained by using \Tref{theorem:PIRLB}\eqref{theorem:part2}, \begin{align*} P_{r^2-r+1,r-1}&(r^2+r,r) \geq P_{r^2-r+1,r^2-r+1}(r^2+r,r) &\\ & \geq \frac{r\cdot (r^2+r)(4r-1)}{(4r-1)(r^2 - r +1) + (2r-1)^2} &\\ & = \frac{r(4r^3 - r^2 + 4r^2 - r)}{4r^3 - 4r^2 + 4r - r^2 + r -1 +4r^2-4r+1} &\\ & = \frac{r^2(4r^2 + 3r - 1)}{4r^3 - r^2 + r} = \frac{r\cdot(4r^2 + 3r -1)}{4r^2 - r +1}. \end{align*} The upper bound is verified by using the code ${\cal C}^B_r$. There are $s = r(r+1)$ information bits, and the number of buckets is $m = r+1$. For each $i\in[m]$, there exists a cell with the linear combination $\sum_{q\in{\cal S}_i}x_q$ and another $r(r-1)$ cells to store one $(r-1)$-subset from each ${\cal S}_j,j\in[r+1]$, where $j\neq i$. 
Thus, the number of rows is $r^2 -r +1$. Let $x_j$ be a request that the code ${\cal C}^B_r$ must satisfy by $r$ disjoint recovering sets, and assume that $x_j \in {\cal S}_i$ for some $i\in[r+1]$. There are $r-1$ buckets which include $x_j$ as a singleton, because $x_j$ appears in $r-1$ of the subsets of size $r-1$ of the part ${\cal S}_i$. Thus, each of these $r-1$ buckets is taken as a recovering set, while reading only one cell from it. In addition, in the $i$-th bucket there exists a cell with $\sum_{q\in{\cal S}_i} x_q$, which includes $x_j$. The $(r-1)$-subset ${\cal S}_i \setminus \{x_j\}$ is written in a bucket $p$, which is different from bucket $i$ and from the buckets that were taken so far (because $x_j \notin {\cal S}_i \setminus \{x_j\}$). Thus, the set $\{i,p\}$ is a recovering set of $x_j$, where it is sufficient to read from bucket $i$ one cell, namely $\sum_{q\in{\cal S}_i} x_q$, and to read the $r-1$ cells with the $r-1$ bits of ${\cal S}_i \setminus \{x_j\}$ from bucket $p$. Thus, there exist $r$ disjoint recovering sets for $x_j$, where at most $r-1$ cells are read from each bucket.
\end{IEEEproof}
Next, we want to show that for any integer $r\geq 3$ the code ${\cal C}^B_r$ is an $(r^2+r,r,r+1,r^2-r+1,r-1)$ batch array code, by using the property stated in the following lemma.
\begin{lemma}\label{lemma:BEx8Aux1}
For any integer $r\geq 3$, every two buckets of the code ${\cal C}^B_r$ can form a recovering set of every bit $x_i$ by reading at most $r-1$ cells from each bucket.
\end{lemma}
\begin{IEEEproof}
Given a pair of buckets from ${\cal C}^B_r$, for simplicity we assume that they are the first two buckets. The first bucket has a cell with $\sum_{i\in{\cal S}_{1}} x_i$, and has exactly $r-1$ bits as singletons from each ${\cal S}_j$, $2\leq j \leq r+1$. Hence, the first bucket is missing exactly one of the information bits of each ${\cal S}_j$, $2\leq j \leq r+1$, as a singleton.
Thus, the number of bits that do not appear as singletons in the first bucket is $2r$, and the first bucket can satisfy every information bit except for these $2r$ bits by reading exactly one cell. The second bucket contains $r-1$ of the $r$ bits of ${\cal S}_{1}$ as singletons. Thus, each of these $r-1$ bits of ${\cal S}_{1}$ can be satisfied by reading it as a singleton from the second bucket. Also, the remaining bit of ${\cal S}_{1}$ can be satisfied by reading the $r-1$ singletons of ${\cal S}_{1}$ from the second bucket together with the cell $\sum_{i\in{\cal S}_{1}} x_i$ in the first bucket. The first two buckets include different $(r-1)$-subsets of each part other than ${\cal S}_1,{\cal S}_2$. Hence, the information bit of each ${\cal S}_j$, $3\leq j\leq r+1$, that does not appear as a singleton in the first bucket definitely appears as a singleton in the second bucket, and thus each such bit $x_q\in {\cal S}_{j}$ can be satisfied by reading it as a singleton from the second bucket. There are $r-1$ such bits, and thus it remains to show that the code can satisfy the bit $x_{q_1}\in {\cal S}_{2}$ that is not part of the $(r-1)$-subset of singletons stored in the first bucket. We can satisfy $x_{q_1}$ by reading the $r-1$ singletons of ${\cal S}_{2}$ from the first bucket together with the cell $\sum_{i\in{\cal S}_{2}} x_i$ in the second bucket. Thus, the first two buckets of the code ${\cal C}^B_r$ can form a recovering set of every bit $x_i$, and the same holds similarly for any two buckets of the code ${\cal C}^B_r$.
\end{IEEEproof}
Now, we are ready to show that for any integer $r\geq 3$ the code ${\cal C}^B_r$ is an $(r^2+r,r,r+1,r^2-r+1,r-1)$ batch array code.
\begin{theorem}\label{theorem:BExample8}
For any integer $r \geq 3$ the code ${\cal C}^B_r$ from Construction~\ref{construction:cons8} is an $(r^2+r,r,r+1,r^2-r+1,r-1)$ batch array code.
In particular, $$\frac{r\cdot(4r^2 + 3r -1)}{4r^2 - r +1} \leq B_{r^2-r+1,r-1}(r^2+r,r) \leq r+1.$$
\end{theorem}
\begin{IEEEproof}
The lower bound follows from the lower bound on $P_{r^2-r+1,r-1}(r^2+r,r)$. The upper bound is achieved by using Construction~\ref{construction:cons8}. Let $R = \{x_{i_1}, x_{i_2}, \ldots , x_{i_{r}}\}$ be a multiset request of $r$ information bits. First, we show that the code ${\cal C}^B_r$ can satisfy the first $r-1$ bits of the request by using only $r-1$ buckets. From Construction~\ref{construction:cons8} it is known that each information bit $x_i$ appears as a singleton in $r-1$ buckets out of the $r+1$ buckets. Thus, in each subset of buckets of size at least $3$, there is at least one bucket that contains a cell with $x_i$. Therefore, the first $r-1$ bits of the request can be satisfied one after the other, each as a singleton from a different bucket, since before each of these steps at least three buckets remain unused. After the first step, we still have $2$ unused buckets, and from Lemma~\ref{lemma:BEx8Aux1} it is known that these two buckets can satisfy every bit $x_i$, in particular $x_{i_{r}}$.
\end{IEEEproof}
According to Theorem~\ref{theorem:PExample8} and Theorem~\ref{theorem:BExample8} it can be verified that for any $r\geq 3$, $r < \frac{r\cdot(4r^2 + 3r -1)}{4r^2 - r +1} \leq P_{r^2-r+1,r-1}(r^2+r,r) \leq B_{r^2-r+1,r-1}(r^2+r,r) \leq r+1$. Thus, we conclude that Construction~\ref{construction:cons8} gives optimal PIR and batch array codes.
\subsection{Construction C}
We now present our third construction, and study how it can be used as a PIR and functional PIR array code for specific parameters.
\begin{construction}\label{consAdd}
Let $s \geq 2$ be a fixed integer. The number of information bits is $s$, and the number of cells in each bucket (the number of rows) is $2$. Every unordered pair of nonzero linear combinations with disjoint supports of total size at most $s$ is written to its own bucket, and hence, we need $m = \sum_{i=2}^{s} ({s \choose i} \cdot {i \brace 2})$ buckets.
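The bucket count of this construction admits a closed form, which is derived next; as a quick numeric cross-check (a sketch, with `num_buckets` as our helper name), the Stirling number ${i \brace 2} = 2^{i-1}-1$ gives:

```python
# Check that the bucket count of Construction C, sum over i of
# C(s, i) * S(i, 2) with S(i, 2) = 2^(i-1) - 1, equals (3^s + 1)/2 - 2^s.
from math import comb

def num_buckets(s):
    """Number of buckets m of Construction C for s information bits."""
    return sum(comb(s, i) * (2 ** (i - 1) - 1) for i in range(2, s + 1))

for s in range(2, 12):
    assert num_buckets(s) == (3 ** s + 1) // 2 - 2 ** s

assert num_buckets(4) == 25  # matches the table for s = 4
assert num_buckets(5) == 90  # matches the table for s = 5
```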
Then, \begin{align*} m = \sum_{i=2}^{s}\hspace{-0.3ex}\left({s \choose i} {i \brace 2}\right) \hspace{-0.3ex}=\hspace{-0.3ex} \sum_{i=2}^{s}\hspace{-0.3ex} \hspace{-0.3ex}{s \choose i} (2^{i-1}-1)\hspace{-0.3ex} = \frac{3^s+1}{2} - 2^s. \end{align*} \end{construction} For any integer $s \geq 2$ denote the code that is obtained from Construction~\ref{consAdd} by ${\cal C}^C_s$. Construction~\ref{consAdd} for the case of $s=4$ is demonstrated in Table~\ref{ex_add_cons} and provides the following results. First, we show that the code ${\cal C}^C_4$ is a $(4,16,25,2,1)$ PIR array code. \begin{table*} \begin{center} \caption{Construction~\ref{consAdd} for $s=4$}\label{ex_add_cons} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline 1&2&3&4&5&6&7&8&9&10&11&12&13&14 \\ \hline \hline $x_1$ & $x_1$ & $x_1$ & $x_2$ & $x_2$ & $x_3$ & $x_1$ & $x_1$ & $x_1$ & $x_2$ & $x_2$ & $x_2$ & $x_3$ & $x_3$ \\ \hline $x_2$ & $x_3$ & $x_4$ & $x_3$ & $x_4$ & $x_4$ & $x_2x_3$ & $x_2x_4$ & $x_3x_4$ & $x_1x_3$ & $x_1x_4$ & $x_3x_4$ & $x_1x_2$ & $x_1x_4$ \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c| } \hline 15&16&17&18&19&20&21&22&23&24&25 \\ \hline \hline $x_3$ & $x_4$ & $x_4$ & $x_4$ & $x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_1x_2$& $x_1x_3$& $x_1x_4$\\ \hline $x_2x_4$ & $x_1x_2$ & $x_1x_3$ & $x_2x_3$ & $x_2x_3x_4$ & $x_1x_3x_4$ & $x_1x_2x_4$ & $x_1x_2x_3$ & $x_3x_4$& $x_2x_4$& $x_2x_3$\\ \hline \end{tabular} \end{center} \end{table*} \begin{comment} \begin{theorem}\label{theorem:ConsCRes} \begin{enumerate} \item\label{theorem:P21416} $23 \leq P_{2,1}(4,16)$ $ \leq 25.$ \item\label{theorem:FP22414} $24 \leq FP_{2,2}(4,14) \leq 25.$ \item\label{theorem:FP22548} $88 \leq FP_{2,2}(5,48) \leq 90$. \end{enumerate} \end{theorem} \end{comment} \begin{theorem}\label{theorem:P21416} The code ${\cal C}^C_4$ from Construction~\ref{consAdd} is a $(4,16,25,2,1)$ PIR array code. 
In particular, $23 \leq P_{2,1}(4,16) \leq 25.$
\end{theorem}
\begin{IEEEproof}
The lower bound is obtained using Theorem~\ref{theorem:PIRLB}\eqref{theorem:part2}, $P_{2,1}(4,16) \geq P_{2,2}(4,16) \geq \frac{16\cdot 4 \cdot 5}{5\cdot 2 + 4} > 22$. The upper bound is verified using the code ${\cal C}^C_4$. Let $x_i$, $i\in[4]$, be a request that the code ${\cal C}^C_4$ must satisfy $16$ times. From the symmetry of the code, assume that $x_i = x_1$. The following are the recovering sets of $x_1$, where from each bucket only one cell is read: $\{\{1\}$, $\{2\}$, $\{3\}$, $\{7\}$, $\{8\}$, $\{9\}$, $\{19\}$, $\{10,6\}$, $\{11,5\}$, $\{13,4\}$, $\{14,18\}$, $\{15,17\}$, $\{16,12\}$, $\{20,23\}$, $\{21,24\}$, $\{22,25\}\}$.
\end{IEEEproof}
Next, we show that the code ${\cal C}^C_4$ is a $(4,14,25,2,2)$ functional PIR array code.
\begin{theorem}\label{theorem:FP22414}
The code ${\cal C}^C_4$ from Construction~\ref{consAdd} is a $(4,14,25,2,2)$ functional PIR array code. In particular, $24 \leq FP_{2,2}(4,14) \leq 25.$
\end{theorem}
\begin{IEEEproof}
The lower bound is obtained using Theorem~\ref{theorem:LBFP4}, $FP_{2,2}(4,14) \geq \frac{2\cdot 14 \cdot 15}{15 + 3} > 23$. The upper bound is verified using the code ${\cal C}^C_4$. Let $R$ be a linear combination request that the code ${\cal C}^C_4$ must satisfy $14$ times. From the symmetry of the code, the proof is divided into the following cases according to the number of information bits that appear in $R$. If the number of information bits that appear in $R$ is $p$, then we assume that the request is $x_1+x_2+\cdots+x_p$.

\noindent {\bf Case 1:} The recovering sets are the following: $\{\{1\}$, $\{2\}$, $\{3\}$, $\{7\}$, $\{8\}$, $\{9\}$, $\{19\}$, $\{10,6\}$, $\{11,5\}$, $\{13,4\}$, $\{14,18\}$, $\{15,17\}$, $\{16,12\}$, $\{20,23\}$, $\{21,24\}$, $\{22,25\}\}$.
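The recovering sets listed above for a single-bit request can be checked mechanically against the cell layout of the table for $s=4$; the following sketch (not part of the formal proof; `can_recover` is our helper name) models each cell as a set of bit indices, with XOR acting as symmetric difference:

```python
# Verify the 16 disjoint recovering sets for x_1 in C^C_4.
from itertools import product

# Buckets 1..25 of the table for s = 4; each holds its two cells.
buckets = {
    1: [{1}, {2}],  2: [{1}, {3}],  3: [{1}, {4}],
    4: [{2}, {3}],  5: [{2}, {4}],  6: [{3}, {4}],
    7: [{1}, {2, 3}],  8: [{1}, {2, 4}],  9: [{1}, {3, 4}],
    10: [{2}, {1, 3}], 11: [{2}, {1, 4}], 12: [{2}, {3, 4}],
    13: [{3}, {1, 2}], 14: [{3}, {1, 4}], 15: [{3}, {2, 4}],
    16: [{4}, {1, 2}], 17: [{4}, {1, 3}], 18: [{4}, {2, 3}],
    19: [{1}, {2, 3, 4}], 20: [{2}, {1, 3, 4}],
    21: [{3}, {1, 2, 4}], 22: [{4}, {1, 2, 3}],
    23: [{1, 2}, {3, 4}], 24: [{1, 3}, {2, 4}], 25: [{1, 4}, {2, 3}],
}

recovering_sets = [{1}, {2}, {3}, {7}, {8}, {9}, {19},
                   {10, 6}, {11, 5}, {13, 4}, {14, 18}, {15, 17},
                   {16, 12}, {20, 23}, {21, 24}, {22, 25}]

def can_recover(target, bucket_ids):
    """True if reading one cell per listed bucket can XOR to x_target."""
    for choice in product(*(buckets[b] for b in bucket_ids)):
        acc = set()
        for cell in choice:
            acc ^= cell  # XOR of cells = symmetric difference of supports
        if acc == {target}:
            return True
    return False

# Every listed set recovers x_1, and the sets partition all 25 buckets.
assert all(can_recover(1, sorted(rs)) for rs in recovering_sets)
used = [b for rs in recovering_sets for b in rs]
assert len(used) == 25 and len(set(used)) == 25
```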
\noindent {\bf Case 2:} The recovering sets are the following $\{\{1\}$, $\{13\}$, $\{16\}$, $\{23\}$, $\{2,4\}$, $\{3,5\}$, $\{7,10\}$, $\{8,11\}$, $\{9,12\}$, $\{14,15\}$, $\{17,18\}$, $\{19,20\}$, $\{21,22\}$, $\{24,25\}\}$. \noindent {\bf Case 3:} The recovering sets are the following. $\{\{7\}$, $\{10\}$, $\{13\}$, $\{22\}$, $\{1,24\}$, $\{2,23\}$, $\{3,25\}$, $\{4,17\}$, $\{5,14\}$, $\{6,16\}$, $\{8,20\}$, $\{9,21\}$, $\{11,12\}$, $\{18,19\}\}$. \noindent {\bf Case 4:} The recovering sets are the following. $\{\{19\}$, $\{20\}$, $\{21\}$, $\{22\}$, $\{23\}$, $\{24\}$, $\{25\}$, $\{1,6\}$, $\{2,5\}$, $\{3,4\}$, $\{7,11\}$, $\{8,10\}$, $\{9,13\}$, $\{12,16\}$, $\{14,18\}$, $\{15,17\}\}$. \end{IEEEproof} Construction~\ref{consAdd} for the case of $s=5$ is demonstrated in Table~\ref{ex_add_cons5} and provides the following result. \begin{table*} \begin{center} \caption{Construction~\ref{consAdd} for $s=5$}\label{ex_add_cons5} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline 1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18&19&20&21&22 \\ \hline \hline $x_1$ & $x_1$ & $x_1$ & $x_1$ & $x_2$ & $x_2$ & $x_2$ & $x_3$ & $x_3$ & $x_4$ & $x_1$ & $x_1$ & $x_1$ & $x_1$ & $x_1$ & $x_1$ & $x_2$ & $x_2$ & $x_2$ & $x_2$ & $x_2$ & $x_2$ \\ \hline $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_3$ & $x_4$ & $x_5$ & $x_4$ & $x_5$ & $x_5$ & $x_2x_3$ & $x_2x_4$ & $x_2x_5$ & $x_3x_4$ & $x_3x_5$ & $x_4x_5$ & $x_1x_3$ & $x_1x_4$ & $x_1x_5$ & $x_3x_4$ & $x_3x_5$ & $x_4x_5$ \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline 23&24&25&26&27&28&29&30&31&32&33&34&35&36&37&38&39&40 \\ \hline \hline $x_3$ & $x_3$ & $x_3$ & $x_3$ & $x_3$ & $x_3$ & $x_4$ & $x_4$ & $x_4$ & $x_4$ & $x_4$ & $x_4$ & $x_5$ & $x_5$ & $x_5$ & $x_5$ & $x_5$ & $x_5$ \\ \hline $x_1x_2$ & $x_1x_4$ & $x_1x_5$ & $x_2x_4$ & $x_2x_5$ & $x_4x_5$ & $x_1x_2$ & $x_1x_3$ & $x_1x_5$ & $x_2x_3$ & $x_2x_5$ & $x_3x_5$ & $x_1x_2$ & $x_1x_3$ & $x_1x_4$ & $x_2x_3$ & 
$x_2x_4$ & $x_3x_4$\\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline 41&42&43&44&45&46&47&48&49&50&51&52&53&54 \\ \hline \hline $x_1$ & $x_1$ & $x_1$ & $x_1$ & $x_2$ & $x_2$ & $x_2$ & $x_2$ & $x_3$ & $x_3$ & $x_3$ & $x_3$ & $x_4$ & $x_4$ \\ \hline $x_2x_3x_4$ & $x_2x_3x_5$ & $x_2x_4x_5$ & $x_3x_4x_5$ & $x_1x_3x_4$ & $x_1x_3x_5$ & $x_1x_4x_5$ & $x_3x_4x_5$ & $x_1x_2x_4$ & $x_1x_2x_5$ & $x_1x_4x_5$ & $x_2x_4x_5$ & $x_1x_2x_3$ & $x_1x_2x_5$ \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c| } \hline 55&56&57&58&59&60&61&62&63&64&65 \\ \hline \hline $x_4$ & $x_4$ & $x_5$ & $x_5$ & $x_5$ & $x_5$ & $x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_5$ \\ \hline $x_1x_3x_5$ & $x_2x_3x_5$ & $x_1x_2x_3$ & $x_1x_2x_4$ & $x_1x_3x_4$ & $x_2x_3x_4$ & $x_2x_3x_4x_5$ & $x_1x_3x_4x_5$& $x_1x_2x_4x_5$ & $x_1x_2x_3x_5$ & $x_1x_2x_3x_4$ \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline 66&67&68&69&70&71&72&73&74&75&76&77&78&79&80 \\ \hline \hline $x_1x_2$ & $x_1x_2$ & $x_1x_2$ & $x_1x_3$ & $x_1x_3$ & $x_1x_3$ & $x_1x_4$ & $x_1x_4$ & $x_1x_4$ & $x_1x_5$ & $x_1x_5$ & $x_1x_5$ & $x_2x_3$ & $x_2x_4$ & $x_2x_5$ \\ \hline $x_3x_4$ & $x_3x_5$ & $x_4x_5$ & $x_2x_4$ & $x_2x_5$ & $x_4x_5$ & $x_2x_3$ & $x_2x_5$ & $x_3x_5$ & $x_2x_3$ & $x_2x_4$ & $x_3x_4$ & $x_4x_5$ & $x_3x_5$ & $x_3x_4$ \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c| } \hline 81&82&83&84&85&86&87&88&89&90 \\ \hline \hline $x_1x_2$ & $x_1x_3$ & $x_1x_4$ & $x_1x_5$ & $x_2x_3$ & $x_2x_4$ & $x_2x_5$ & $x_3x_4$ & $x_3x_5$ & $x_4x_5$\\ \hline $x_3x_4x_5$ & $x_2x_4x_5$ & $x_2x_3x_5$ & $x_2x_3x_4$ & $x_1x_4x_5$ & $x_1x_3x_5$ & $x_1x_3x_4$ & $x_1x_2x_5$ & $x_1x_2x_4$ & $x_1x_2x_3$ \\ \hline \end{tabular} \end{center} \end{table*} \begin{theorem}\label{theorem:FP22548} The code ${\cal C}^C_5$ from Construction~\ref{consAdd} is a 
$\allowbreak(5,48,90,2,2)$ functional PIR array code. In particular, $88 \leq FP_{2,2}(5,48) \leq 90$. \end{theorem} \begin{IEEEproof} The lower bound is obtained using Theorem~\ref{theorem:LBFP4}, $FP_{2,2}(5,48) \geq \frac{2\cdot 48 \cdot 31}{31 + 3} > 87$. The upper bound is verified using the code ${\cal C}^C_5$. Let $R$ be a linear combination request that the code ${\cal C}^C_5$ must satisfy $48$ times. From the symmetry of the code, the proof is divided into the following cases according to the number of information bits that appear in $R$. If the number of information bits that appear in $R$ is $p$ then we assume that the request is $x_1+x_2+\cdots+x_p$. \noindent {\bf Case 1:} The recovering sets are the following $\{\{1\}$, $\{2\}$, $\{3\}$, $\{4\}$, $\{11\}$, $\{12\}$, $\{13\}$, $\{14\}$, $\{15\}$, $\{16\}$, $\{41\}$, $\{42\}$, $\{43\}$, $\{44\}$, $\{61\}$, $\{17,26\}$, $\{18,32\}$, $\{19,38\}$, $\{20,23\}$, $\{21,29\}$, $\{22,35\}$, $\{24,33\}$, $\{25,39\}$, $\{27,30\}$, $\{28,36\}$, $\{31,40\}$, $\{34,37\}$, $\{45,8\}$, $\{46,9\}$, $\{47,10\}$, $\{49,6\}$, $\{50,7\}$, $\{48,51\}$, $\{53,5\}$, $\{52,54\}$, $\{55,67\}$, $\{57,72\}$, $\{58,69\}$, $\{59,66\}$, $\{64,56\}$, $\{65,60\}$, $\{62,78\}$, $\{63,79\}$, $\{71,81\}$, $\{73,82\}$, $\{83,80\}$, $\{68,85\}$, $\{70,88\}$, $\{74,86\}$, $\{75,90\}$, $\{76,89\}$, $\{77,87\}\}$. 
\noindent {\bf Case 2:} The recovering sets are the following $\{\{1\}$, $\{23\}$ ,$\{29\}$, $\{35\}$, $\{66\}$, $\{67\}$, $\{68\}$, $\{81\}$, $\{2,5\}$, $\{3,6\}$, $\{4,7\}$, $\{9,53\}$, $\{8,49\}$, $\{10,88\}$, $\{11,20\}$, $\{12,21\}$, $\{13,22\}$, $\{14,17\}$, $\{15,18\}$, $\{16,19\}$, $\{24,26\}$, $\{25,27\}$, $\{30,32\}$, $\{31,33\}$, $\{36,38\}$, $\{37,39\}$, $\{41,45\}$, $\{42,46\}$, $\{43,47\}$, $\{44,85\}$, $\{51,52\}$, $\{54,57\}$, $\{55,56\}$, $\{59,60\}$, $\{61,28\}$, $\{62,34\}$, $\{63,40\}$, $\{64,74\}$, $\{65,77\}$, $\{69,80\}$, $\{70,79\}$, $\{71,72\}$, $\{73,78\}$, $\{75,82\}$, $\{84,87\}$, $\{86,48\}$, $\{50,58\}$, $\{76,83\}\}$. \noindent {\bf Case 3:} The recovering sets are the following $\{\{11\}$, $\{17\}$, $\{23\}$, $\{53\}$, $\{57\}$, $\{90\}$, $\{1,8\}$, $\{2,6\}$, $\{3,32\}$, $\{4,38\}$, $\{5,16\}$, $\{7,36\}$, $\{9,29\}$, $\{10,89\}$, $\{30,66\}$, $\{12,40\}$, $\{13,34\}$, $\{14,39\}$, $\{15,33\}$, $\{18,80\}$, $\{19,79\}$, $\{20,37\}$, $\{21,31\}$, $\{22,88\}$, $\{24,76\}$, $\{73,86\}$, $\{26,74\}$, $\{27,68\}$, $\{28,87\}$, $\{35,63\}$, $\{41,64\}$, $\{42,65\}$, $\{43,81\}$, $\{44,47\}$, $\{45,69\}$, $\{46,60\}$, $\{48,82\}$, $\{49,56\}$, $\{50,59\}$, $\{51,78\}$, $\{52,85\}$, $\{54,67\}$, $\{55,70\}$, $\{58,77\}$, $\{61,75\}$, $\{62,71\}$, $\{72,84\}$, $\{25,83\}\}$. \noindent {\bf Case 4:} The recovering sets are the following $\{\{41\}$, $\{45\}$, $\{49\}$, $\{53\}$, $\{65\}$, $\{66\}$, $\{69\}$, $\{72\}$, $\{1,8\}$, $\{2,6\}$, $\{3,5\}$, $\{10,11\}$, $\{9,12\}$, $\{7,14\}$, $\{4,20\}$, $\{13,28\}$, $\{15,22\}$, $\{16,21\}$, $\{17,34\}$, $\{18,27\}$, $\{19,40\}$, $\{23,64\}$, $\{24,62\}$, $\{25,33\}$, $\{26,61\}$, $\{29,63\}$, $\{30,48\}$, $\{31,38\}$, $\{32,44\}$, $\{35,88\}$, $\{36,39\}$, $\{37,85\}$, $\{42,90\}$, $\{43,89\}$, $\{46,68\}$, $\{47,67\}$, $\{50,71\}$, $\{51,87\}$, $\{52,84\}$, $\{54,74\}$, $\{55,70\}$, $\{56,75\}$, $\{57,78\}$, $\{58,79\}$, $\{59,73\}$, $\{60,76\}$, $\{77,81\}$, $\{82,86\}\}$. 
\noindent {\bf Case 5:} The recovering sets are the following: $\{\{61\}$, $\{62\}$, $\{63\}$, $\{64\}$, $\{65\}$, $\{81\}$, $\{82\}$, $\{83\}$, $\{84\}$, $\{85\}$, $\{86\}$, $\{87\}$, $\{88\}$, $\{89\}$, $\{90\}$, $\{66,4\}$, $\{67,3\}$, $\{68,2\}$, $\{69,7\}$, $\{70,6\}$, $\{71,1\}$, $\{72,9\}$, $\{73,5\}$, $\{74,17\}$, $\{75,10\}$, $\{76,8\}$, $\{77,18\}$, $\{78,11\}$, $\{79,12\}$, $\{80,13\}$, $\{41,40\}$, $\{42,34\}$, $\{43,28\}$, $\{44,19\}$, $\{45,39\}$, $\{46,33\}$, $\{47,27\}$, $\{48,14\}$, $\{49,38\}$, $\{50,32\}$, $\{51,20\}$, $\{52,15\}$, $\{53,37\}$, $\{54,26\}$, $\{55,21\}$, $\{56,16\}$, $\{57,31\}$, $\{58,25\}$, $\{59,22\}\}$.
\end{IEEEproof}
\section{Asymptotic Analysis of Array Codes}\label{sec:analysis}
The goal of this section is to provide a figure of merit in order to compare the different constructions of array codes. For simplicity we consider the case where $\ell=t$, that is, it is possible to read all the bits in every bucket. Under this setup, it holds that $FP_{t,t}(s,k)\leq sk/t$ for all $s,k,$ and $t$. This motivates us to define the following values
$${\cal R}_{X}(t,k) = \limsup_{s \to \infty} \frac{X_{t,t}(s,k)}{sk/t},$$
where $X\in\{P,B,FP,FB\}$. The case where $t=1$ has been studied in several previous works. For example, for functional PIR array codes we have ${\cal R}_{FP}(1,k) \geq \frac{1}{k\cdot H(1/k)}$ for any even integer $k\geq 4$~\cite[Th. 13]{ZYE19}. Also, for functional batch array codes it holds from~\cite[Th. 21]{ZYE19} that ${\cal R}_{FB}(1,k) \leq \frac{1}{k\cdot H(c_{k})}$, where $c_1=\frac{1}{2}$ and $c_{k+1}$ is the root of the equation $H(z)=H(c_k)-zH(c_k)$. For the case $k=1$ we have ${\cal R}_{FB}(t,1) = {\cal R}_{FP}(t,1) = 1$ from Theorem~\ref{theorem:ArrayCodek1}\eqref{theorem:ArrayCodek1tt}. According to the bounds and constructions studied in this paper, we can already summarize several results in the following theorems for $t=2$ and for general parameters.
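Each finite $(s,k,m,t)$ code constructed above yields, via the repetition argument of \Tref{theorem:Basic}\eqref{theorem:partas}, the rate upper bound $mt/(sk)$; the resulting fractions can be cross-checked numerically (a minimal sketch, with our own labels for the codes):

```python
# Numeric cross-check of the rate upper bounds m*t/(s*k) implied by the
# finite codes of this paper.
from fractions import Fraction

# (s, k, m, t) parameters of the codes used below.
codes = {
    "FP(2,11)": (6, 11, 25, 2),  # C^A_2 as a functional PIR array code
    "FP(2,14)": (4, 14, 25, 2),  # C^C_4
    "FP(2,48)": (5, 48, 90, 2),  # C^C_5
    "P(2,16)":  (4, 16, 25, 2),  # C^C_4 as a PIR array code
    "B(2,15)":  (6, 15, 25, 2),  # C^A_2 as a batch array code
}

bounds = {name: Fraction(m * t, s * k) for name, (s, k, m, t) in codes.items()}

assert bounds["FP(2,11)"] == Fraction(25, 33)
assert bounds["FP(2,14)"] == Fraction(25, 28)
assert bounds["FP(2,48)"] == Fraction(3, 4)
assert bounds["P(2,16)"] == Fraction(25, 32)
assert bounds["B(2,15)"] == Fraction(5, 9)
```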
\begin{theorem} \begin{enumerate} \item ${\cal R}_{FP}(2,2) \leq {\cal R}_{FB}(2,2) \leq \frac{7}{8} = 0.875$, and ${\cal R}_{FB}(2,2) \geq 0.71$. \item ${\cal R}_{FP}(2,11) \leq \frac{25}{33} = 0.758$. \item ${\cal R}_{FP}(2,14) \leq \frac{25}{28} = 0.893$. \item ${\cal R}_{FP}(2,48) \leq \frac{3}{4} = 0.75$. \item ${\cal R}_{P}(2,16) \leq \frac{25}{32} = 0.78125$. \item ${\cal R}_{B}(2,15) \leq \frac{5}{9} = 0.556$. \end{enumerate} \end{theorem} \begin{IEEEproof} \begin{enumerate} \item From Theorem~\ref{theorem:FB2282} we have $FB_{2,2}(s,2) \leq 7\cdot\left\lceil \frac{s}{8} \right\rceil$. Thus, ${\cal R}_{FB}(2,2) = \limsup_{s \to \infty} \frac{FB_{2,2}(s,2)}{2s/2} \leq \limsup_{s \to \infty} \frac{7\lceil s/8\rceil}{s}$ $ \leq \limsup_{s \to \infty} \frac{(7s/8) + 7}{s}$ $ = \frac{7}{8}$. From Corollary~\ref{cor:FB22s2} we have $FB_{2,2}(s,2) \geq 0.71s$. Thus, ${\cal R}_{FB}(2,2) = \limsup_{s \to \infty} \frac{FB_{2,2}(s,2)}{2s/2} \geq \limsup_{s \to \infty} \allowbreak\frac{0.71s}{s} = 0.71$. \item From \Tref{theorem:FPExample9} we have $FP_{2,2}(6,11) \leq 25$. Then, it is possible to use \Tref{theorem:Basic}\eqref{theorem:partas} to get that $FP_{2,2}(s,11) \leq 25\cdot \left\lceil \frac{s}{6} \right\rceil$. Thus, ${\cal R}_{FP}(2,11) = \limsup_{s \to \infty} \frac{FP_{2,2}(s,11)}{11s/2} \leq \limsup_{s \to \infty} \frac{25\lceil s/6\rceil}{11s/2} \leq \limsup_{s \to \infty} \frac{(25s/6) + 25}{11s/2} = \frac{50}{66} = 0.758$. \item From \Tref{theorem:FP22414} we have $FP_{2,2}(4,14) \leq 25$. Then, it is possible to use \Tref{theorem:Basic}\eqref{theorem:partas} to get that $FP_{2,2}(s,14) \leq 25\cdot \left\lceil \frac{s}{4} \right\rceil$. Thus, ${\cal R}_{FP}(2,14) = \limsup_{s \to \infty} \frac{FP_{2,2}(s,14)}{14s/2} \leq \limsup_{s \to \infty} \frac{25\lceil s/4\rceil}{7s} \leq \limsup_{s \to \infty} \frac{(25s/4) + 25}{7s} = \frac{25}{28} = 0.893$. \item From \Tref{theorem:FP22548} we have $FP_{2,2}(5,48) \leq 90$. 
Then, it is possible to use \Tref{theorem:Basic}\eqref{theorem:partas} to get that $FP_{2,2}(s,48) \leq 90\cdot \left\lceil \frac{s}{5} \right\rceil$. Thus, ${\cal R}_{FP}(2,48) = \limsup_{s \to \infty} \frac{FP_{2,2}(s,48)}{48s/2} \leq \limsup_{s \to \infty} \frac{90\lceil s/5\rceil}{24s} \leq \limsup_{s \to \infty} \frac{(90s/5) + 90}{24s} = \frac{90}{120} = \frac{3}{4} = 0.75$.
\item From~\Tref{theorem:P21416} we have $P_{2,1}(4,16) \leq 25$. Then, it is possible to use \Tref{theorem:Basic}\eqref{theorem:partas} and get that $P_{2,1}(s,16) \leq 25\cdot \left\lceil \frac{s}{4} \right\rceil$. Thus, ${\cal R}_{P}(2,16) = \limsup_{s \to \infty} \frac{P_{2,2}(s,16)}{16s/2} \leq \limsup_{s \to \infty} \frac{P_{2,1}(s,16)}{8s} \leq \limsup_{s \to \infty} \frac{25\lceil s/4\rceil}{8s} \leq \limsup_{s \to \infty} \frac{(25s/4) + 25}{8s} = \frac{25}{32} = 0.78125$.
\item From~\Tref{theorem:BExample9} we have $B_{2,2}(6,15) = 25$. Then, it is possible to use \Tref{theorem:Basic}\eqref{theorem:partas} and get that $B_{2,2}(s,15) \leq 25\cdot \left\lceil \frac{s}{6} \right\rceil$. Thus, ${\cal R}_{B}(2,15) = \limsup_{s \to \infty} \frac{B_{2,2}(s,15)}{15s/2} \leq \limsup_{s \to \infty} \frac{25\lceil s/6\rceil}{15s/2} \leq \limsup_{s \to \infty} \frac{(25s/6) + 25}{15s/2} = \frac{25}{45} = 0.556$.
\end{enumerate}
\end{IEEEproof}
\begin{theorem}
\begin{enumerate}
\item For any $r\geq 3$, ${\cal R}_{P}(r^2-r+1,r) \leq \frac{(r+1)(r^2 -r + 1)}{r(r^2+r)}$, and the same bound holds for ${\cal R}_{B}$.
\item For any $t\geq 2$, ${\cal R}_{P}(t,k) \leq \frac{m}{k(t+1)}$, where $k={t(t+1) \choose t}$ and $m = k+ \frac{{t(t+1) \choose t+1}}{t}$.
\item For any two integers $t$ and $k$, ${\cal R}_{FB}(t,k) \leq \frac{1}{k\cdot H(c_{tk})}$, where $c_1=\frac{1}{2}$ and $c_{k+1}$ is the root of the equation $H(z)=H(c_k)-zH(c_k)$.
\item For any positive integers $t,k$ and $a$, ${\cal R}_X(t,a\cdot k) \leq {\cal R}_X(t,k)$, where $X\in\{P,B,FP,FB\}$.
\item For any positive integers $t,k$ and $a$, ${\cal R}_X(t,k) \leq {\cal R}_X(a\cdot t,k)$, where $X\in\{P,B,FP,FB\}$.
\end{enumerate} \end{theorem} \begin{IEEEproof} \begin{enumerate} \item From \Tref{theorem:PExample8} we have for any $r \geq 3$, $P_{r^2-r+1,r-1}(r^2+r,r) \leq r+1$. Then, it is possible to use \Tref{theorem:Basic}\eqref{theorem:partas} to get that $P_{r^2-r+1,r-1}(s,r) \leq (r+1)\cdot \left\lceil \frac{s}{r^2+r} \right\rceil$. Thus, for a given $r$, it holds that \begin{align*} {\cal R}_{P}&(r^2-r+1,r) = \limsup_{s \to \infty} \frac{P_{r^2-r+1,r^2-r+1}(s,r)}{rs/(r^2-r+1)} &\\ &\leq \limsup_{s \to \infty} \frac{P_{r^2-r+1,r-1}(s,r)}{rs/(r^2-r+1)} &\\ &\leq \limsup_{s \to \infty} \frac{(r+1)\cdot \left\lceil \frac{s}{r^2+r} \right\rceil}{rs/(r^2-r+1)} &\\ &\leq \limsup_{s \to \infty} \frac{\frac{(r+1)s}{r^2+r} + (r+1)}{rs/(r^2-r+1)} = \frac{(r+1)(r^2-r+1)}{r(r^2+r)}. \end{align*} \item From Theorem~\ref{theorem:PIRLB}\eqref{theorem:part5} we have for any $t \geq 2$ and $p=t+1$, $P_{t,t}(t(t+1),k) \leq m$, where $k={t(t+1) \choose t}$ and $m = k+ \frac{{t(t+1) \choose t+1}}{t}$. Then, it is possible to use \Tref{theorem:Basic}\eqref{theorem:partas} to get that $P_{t,t}(s,k) \leq m\cdot \left\lceil \frac{s}{t(t+1)} \right\rceil$. Thus, for a given $t$, it holds that ${\cal R}_{P}(t,k) = \limsup_{s \to \infty} \frac{P_{t,t}(s,k)}{sk/t} \leq \limsup_{s \to \infty} \frac{m\cdot \left\lceil \frac{s}{t(t+1)} \right\rceil}{sk/t} \leq \limsup_{s \to \infty} \frac{\frac{m\cdot s}{t(t+1)} + m}{sk/t} = \frac{m}{k(t+1)}$. \item From Lemma~\ref{lemma:FBGadget}, we have $FB_{t,t}(s,k)\leq FB_{t,1}(s,k) \leq FB(\lceil s/t \rceil,t\cdot k)$. Therefore, ${\cal R}_{FB}(t,k) = \limsup_{s \to \infty} \frac{FB_{t,t}(s,k)}{sk/t} \leq \limsup_{s \to \infty}\frac{FB(\lceil s/t \rceil,t\cdot k)}{sk/t} = \limsup_{s \to \infty}\frac{FB(\lceil s/t \rceil,t\cdot k)}{s/t}\cdot \frac{1}{k}$. Thus, according to~\cite[Th. 21]{ZYE19}, ${\cal R}_{FB}(t,k) \leq \frac{1}{k\cdot H(c_{tk})}$, where $c_1=\frac{1}{2}$ and $c_{k+1}$ is the root of the polynomial $H(z)=H(c_k)-zH(c_k)$.
\item From Theorem~\ref{theorem:Basic}\eqref{theorem:partak} we have that for any positive integer $a$ and any $X\in\{P,B,FP,FB\}$, $X_{t,t}(s,a\cdot k) \leq a \cdot X_{t,t}(s,k)$. Thus, ${\cal R}_{X}(t,a\cdot k) = \limsup_{s \to \infty} \frac{X_{t,t}(s,a\cdot k)}{ska/t} \leq \limsup_{s \to \infty} \frac{a\cdot X_{t,t}(s,k)}{ska/t} = \limsup_{s \to \infty} \frac{X_{t,t}(s,k)}{sk/t} = {\cal R}_{X}(t,k)$. \item From Theorem~\ref{theorem:Basic}\eqref{theorem:partat} we have that for any positive integer $a$ and any $X\in\{P,B,FP,FB\}$, $a \cdot X_{a\cdot t,a\cdot t}(s,k) \geq X_{t,a \cdot t}(s,k) = X_{t,t}(s,k)$. Thus, ${\cal R}_{X}(t,k) = \limsup_{s \to \infty} \allowbreak\frac{X_{t,t}(s,k)}{sk/t} \leq \limsup_{s \to \infty}\allowbreak\frac{a\cdot X_{a\cdot t,a\cdot t}(s,k)}{sk/t} = \limsup_{s \to \infty} \allowbreak\frac{X_{a\cdot t,a\cdot t}(s,k)}{sk/(at)} = {\cal R}_{X}(a\cdot t,k)$. \end{enumerate} \end{IEEEproof} \section{Locality Codes}\label{sec:Locality} In this section we study a new family of array codes which is a special case of functional PIR array codes in the sense that each recovering set is of size at most $r$ and all the cells of each bucket can be read, i.e., $\ell=t$. This new family of array codes will be called \emph{locality functional array codes}. In order to find lower bounds and constructions for locality functional array codes we will use codes and designs in subspaces and covering codes. \subsection{Definitions and Basic Constructions} In this section we study the following family of codes. \begin{definition} An $(s,k,m,t,r)$ \textbf{locality functional array code} over $\Sigma$ is defined by an encoding map ${\cal E}:\Sigma^s \rightarrow (\Sigma^t)^m$ that encodes $s$ information bits $x_1,\dots,x_s$ into a $t\times m$ array and a decoding function ${\cal D}$ that satisfies the following property.
For any request of a linear combination ${\boldsymbol v}$ of the information bits, there exist $k$ pairwise disjoint recovering sets $S_1,\ldots,S_k \subseteq [m]$, where $|S_j| \leq r$ for any $j\in[k]$, such that ${\boldsymbol v}$ can be recovered from the cells of the buckets in each set $S_j$. \end{definition} We denote by $D(s,k,t,r)$ the smallest number of buckets $m$ such that an $(s,k,m,t,r)$ locality functional array code exists. For the rest of the section, assume that the parameters $s,k,t$ and $r$ are positive integers such that $t\leq s$. The following theorem summarizes several results on $D(s,k,t,r)$ based upon basic bounds and constructions. \begin{theorem}\label{basicLocality} \begin{enumerate} \item $D(s,k,t,r) \geq m^*$, where $m^*$ is the smallest positive integer such that $\sum_{i=1}^{\min\{r,m^*-k+1\}} {m^*\choose i}(2^t - 1)^i \geq k(2^s - 1)$.\label{section1} \item For any integer $a$ where $1\leq a < t$, $D(s,k,t,r) \leq D(s-a,k, t - a ,r)$. \item For all positive integers $s_1,s_2,r_1,r_2$ and $p$, $D(s_1+s_2,k,t,r_1+r_2) \leq D(s_1,k,t,r_1) + D(s_2,k,t,r_2)$. In particular, $D(ps,k,t,pr) \leq p \cdot D(s,k,t,r)$. \label{BaLocSec3} \end{enumerate} \end{theorem} \begin{IEEEproof} \begin{enumerate} \item The proof is similar to that of Theorem~\ref{theorem:LBFP1}, with minor changes. Here, all cells from each bucket can be read. Hence, for any positive integer $n$, there are $(2^t-1)^n$ nonzero linear combinations that can be obtained from $n$ buckets while using all the $n$ buckets. Also, each recovering set must be of size at most $\min\{r,m^*-k+1\}$. Thus, we get that $\sum_{i=1}^{\min\{r,m^*-k+1\}} {m^*\choose i}(2^t - 1)^i \geq k(2^s - 1)$. \item Let ${\cal C}$ be an $(s-1,k,m,t-1,r)$ locality functional array code with $m$ buckets such that each bucket has $t-1$ cells. For the $s$ information bits $x_1,\ldots,x_s$, we encode the first $s-1$ bits using the encoder of ${\cal C}$ to get $m$ buckets where each bucket has $t-1$ cells. For each bucket, a new cell that stores $x_s$ is added.
Assume that $R$ is the request which is a linear combination of the $s$ information bits. Let $R_1$ be the part of the request which is a linear combination of the first $s-1$ information bits. From the properties of ${\cal C}$, for the request $R_1$, there exist $k$ disjoint recovering sets $\{{\cal S}_1,{\cal S}_2,\ldots,{\cal S}_k\}$ such that $|{\cal S}_j|\leq r$ for any $j\in[k]$. If $R=R_1$, then the same $\{{\cal S}_1,{\cal S}_2,\ldots,{\cal S}_k\}$ are recovering sets for $R$. If $R = x_s$, we can take the first $k$ buckets as $k$ recovering sets each of size 1. If $R$ includes $x_s$, then the same $\{{\cal S}_1,{\cal S}_2,\ldots,{\cal S}_k\}$ are recovering sets for $R$, where we can read $x_s$ from one of the buckets in each ${\cal S}_j$. Thus, $D(s,k,t,r)\leq D(s-1,k,t-1,r)$ and we can get that $D(s,k,t,r) \leq D(s-a,k, t - a ,r)$ by induction on $a$. \item Let ${\cal C}_1$ be an $(s_1,k,m_1,t,r_1)$ locality functional array code and ${\cal C}_2$ be an $(s_2,k,m_2,t,r_2)$ locality functional array code. The codes ${\cal C}_1$ and ${\cal C}_2$ are used to construct an $(s_1+s_2,k,m_1+m_2,t,r_1+r_2)$ locality functional array code by encoding the first $s_1$ bits using the encoder of ${\cal C}_1$ and the last $s_2$ bits using the encoder of ${\cal C}_2$. Assume that $R$ is the request which is a linear combination of the $s_1+s_2$ information bits. Let $R_1,R_2$ be the part of $R$ which is a linear combination of the first $s_1$, last $s_2$ information bits, respectively. According to ${\cal C}_1,{\cal C}_2$, there exist $k$ recovering sets $\{{\cal S}^1_1,{\cal S}^1_2,\ldots,{\cal S}^1_k\},\{{\cal S}^2_1,{\cal S}^2_2,\ldots,{\cal S}^2_k\}$ for $R_1,R_2$ such that each recovering set has size at most $r_1,r_2$, respectively. Then, the set ${\cal S}^1_j \cup {\cal S}^2_j$ for any $j\in[k]$ is a recovering set for $R$ with size at most $r_1+r_2$. 
Therefore, the sets $\{{\cal S}^1_1\cup {\cal S}^2_1,{\cal S}^1_2\cup {\cal S}^2_2,\ldots,{\cal S}^1_k\cup {\cal S}^2_k\}$ are $k$ recovering sets for $R$ such that the size of each recovering set is at most $r_1+r_2$. Thus, $D(s_1+s_2,k,t,r_1+r_2) \leq D(s_1,k,t,r_1) + D(s_2,k,t,r_2)$ and we can get that $D(ps,k,t,pr) \leq p \cdot D(s,k,t,r)$ by induction on $p$. \end{enumerate} \end{IEEEproof} \subsection{Constructions Based on Subspaces} In this section we show connections between the problem of finding the minimal number of buckets for locality functional array codes and several problems in subspaces. Subspaces were used in~\cite{SES19} to construct \emph{array codes} and to examine their locality and availability. The family of array codes that was defined in~\cite{SES19} is a linear subspace of $b\times n$ matrices over $\mathbb{F}_q$ such that each codeword is a $b\times n$ matrix where each entry is called a symbol. The \emph{weight} of each codeword was defined to be the number of nonzero columns in the codeword and the distance of the code is the minimal weight of a nonzero codeword. The problem that was presented in~\cite{SES19} was to examine locality and availability of array codes where two types of locality were defined. The first one is \emph{node locality}. A codeword column $j\in[n]$ has node locality $r_{nd}$ if it can be recovered by a linear combination of the symbols of the columns in a recovering set of size $r_{nd}$. If all codeword columns have node locality $r_{nd}$, then $r_{nd}$ is also called the node locality of the array code. The second type is \emph{symbol locality} $r_{sb}$ which is similar to node locality but instead of recovering the whole column, here only one symbol (entry of the codewords matrices) is needed to be recovered. Similarly, there are two types of availability. 
The node, symbol availability, denoted by $t_{nd},t_{sb}$, is the number of pairwise disjoint recovering sets of size at most $r_{nd},r_{sb}$ for any codeword column, symbol, respectively. To simplify the problem, they flattened each $b\times n$ codeword into a vector of length $bn$ by reading the symbols of the codeword column by column from first to last entry. The $M \times bn$ generator matrix $G$, where each row is a flattened codeword, can represent the array code $C$, where the columns $(j-1)b+1,\ldots,jb$ of $G$ correspond to the symbols of the $j$-th codeword column of $C$ and these columns are called the $j$-th \emph{thick column} of $G$. In this way, the $j$-th thick column of $G$, which corresponds to the $j$-th codeword column of $C$, can be represented by $V_j$, which is a $b$-subspace of $\mathbb{F}_q^M$. Thus, equivalent constraints for node and symbol locality can be formulated using subspaces, as stated in~\cite[Lemma 3]{SES19}, where a subset ${\cal S} = \{j_1,\ldots,j_p\}\subseteq [n]\setminus\{j\}$ is a recovering set for the codeword column $j\in [n]$, if and only if $V_j \subseteq V_{j_1}+\cdots + V_{j_p}$. Similarly, ${\cal S}$ is a recovering set for the symbol $(i,j), i\in[b],j\in[n]$ if and only if ${\boldsymbol g}_{(j-1)b+i} \in V_{j_1}+\cdots + V_{j_p}$, where ${\boldsymbol g}_{(j-1)b+i}$ is the $i$-th column in the $j$-th thick column of $G$ that corresponds to the $i$-th entry in the $j$-th codeword column of $C$. In our work we are interested in the problem of recovering the requests which are all possible linear combinations of the information bits, which is different from the problem in~\cite{SES19} where the nodes or symbols that are part of the code need to be recovered. We can apply some of the results and constructions from~\cite{SES19} in our case. Recall that we defined $\Sigma = \mathbb{F}_2$. Let $\Sigma^s$ be a vector space of dimension $s$ over $\Sigma$.
We can consider each bucket, which has $t$ cells, as a subspace of $\Sigma^s$ with dimension $t$ and denote a subspace of dimension $t$ as a \emph{$t$-subspace}. The following claim is motivated by~\cite[Lemma 3]{SES19}. \begin{claim} The value of $D(s,k,t,r)$ is the smallest number $m$ of $t$-subspaces of $\Sigma^s$ such that there exists a partition of the subspaces into $k$ subsets, ${\cal S}_1,\ldots,{\cal S}_k$, that satisfies the following property. The size of each subset ${\cal S}_i$ is at most $r$ and for every request $R$, which can be represented by a $1$-subspace $W$, it holds that for each ${\cal S}_i$, $W \subseteq \sum_{j=1}^{r'} {\cal S}_{i_j}$ where ${\cal S}_{i_j}$ is the $j$-th subspace in ${\cal S}_i$ and $|{\cal S}_i| = r' \leq r$. \end{claim} \begin{comment} \begin{IEEEproof} Given $m$ subspaces of $\Sigma^s$, $V_1,\ldots,V_m$, each of dimension at most $t$, such that $\cup_{1\leq i_1<\cdots < i_r \leq m} (V_{i_1}+\cdots + V_{i_r}) = \Sigma^s$. For each subspace $V_i$ we form a bucket with at most $t$ cells in order to get locality functional array code. Given a request $R$ a linear combination of the $s$ information bits, where $R\in\Sigma^s$. Thus, there exists $r$ subspaces where $R\in (V_{i_1}+\cdots + V_{i_r})$. Therefore we can read $R$ from the buckets number $i_1,i_2,\ldots,i_r$. \end{IEEEproof} \end{comment} Let ${\boldsymbol x} = (x_1,x_2,\ldots,x_s)$ be the vector of dimension $1\times s$ with the $s$ information bits and let $V$ be a $t$-subspace of $\Sigma^s$. It is said that a bucket with $t$ cells \emph{stores} a $t$-subspace $V$ if for a given basis ${\cal B}=\{{\boldsymbol v}_1,{\boldsymbol v}_2,\ldots,{\boldsymbol v}_t\}$, the $i$-th cell $i\in[t]$ of the bucket stores the linear combination $\langle {\boldsymbol v}_i, {\boldsymbol x}\rangle$. Note that the choice of the basis ${\cal B}$ does not matter and we can choose any basis of $V$.
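To make the subspace view of buckets concrete, the following is a minimal Python sketch (our own illustration, not part of the constructions in this paper; the function names and the choice $s=3$, $t=2$ are ours). Vectors of $\Sigma^s$ are encoded as integer bitmasks, a bucket stores the span of a basis, and a request is recoverable from the bucket alone exactly when it lies in that span.

```python
# Minimal sketch: a bucket storing a t-subspace V of Sigma^s = F_2^s.
# Vectors are integer bitmasks (most significant bit <-> x_1);
# addition over F_2 is XOR.

def span(basis):
    """Return the set of all vectors in the F_2-span of `basis`."""
    vecs = {0}
    for b in basis:
        vecs |= {v ^ b for v in vecs}
    return vecs

def contained(request, bucket_basis):
    """True iff the bucket storing span(bucket_basis) is, by itself,
    a recovering set of size 1 for the request vector."""
    return request in span(bucket_basis)

# A bucket with s = 3, t = 2 storing V = span{100, 011}:
bucket = [0b100, 0b011]
print(contained(0b111, bucket))  # x1+x2+x3 = 100 XOR 011, so True
print(contained(0b110, bucket))  # x1+x2 is not in V, so False

# The choice of basis does not matter: any basis of V spans the same set.
print(span([0b100, 0b011]) == span([0b111, 0b011]))  # True
```

The last line mirrors the remark above: any basis of $V$ yields the same set of recoverable requests, so the bucket is determined by the subspace rather than by the particular cells stored.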
Each request $R$ which is a linear combination of the $s$ information bits can be represented by a $1$-subspace $W$ of $\Sigma^s$. It is said that a request is \emph{contained} in a bucket $b$ if the set $\{b\}$ is a recovering set for the request. Note that if $W$ is contained in a $t$-subspace $V$ then the request $R$ is contained in the bucket that stores $V$. Let ${\cal G}_q(s, t)$ denote the set of all $t$-dimensional subspaces of the vector space $\mathbb{F}_q^s$. The set ${\cal G}_q(s, t)$ is often called the \emph{Grassmannian}~\cite{EZ19}. It is well known that \begin{align*} |{\cal G}_q(s,t)| = \qbin{s}{t}{q} := \frac{(q^s-1)(q^{s-1}-1)\cdots(q^{s-t+1}-1)}{(q^{t}-1)(q^{t-1}-1)\cdots(q-1)}, \end{align*} where $\qbin{s}{t}{q}$ is the $q$-ary Gaussian coefficient~\cite{VW92}. The following is a definition of \emph{spreads} from~\cite{GPSV17} which are partitions of vector spaces. \begin{definition}\label{def:spread} Let $s = at$. Then a set ${\cal S} \subseteq {\cal G}_q(s, t)$ is called a $t$-\textbf{spread} if all elements of ${\cal S}$ intersect only trivially and they cover the whole space $\mathbb{F}^s_q$. \end{definition} It is known that the size of a $t$-spread of $\mathbb{F}^s_q$ is $\frac{q^s-1}{q^t-1}$ when $s$ is a multiple of $t$~\cite{GPSV17}. It also follows that spreads do not exist when $t$ does not divide $s$. In case $s$ is not a multiple of $t$ there is a notion of \emph{partial spreads}, where a \emph{partial $t$-spread} of $\mathbb{F}_q^s$ is a collection of mutually disjoint $t$-subspaces. For the problem we are studying in this section, partial spreads cannot be used due to the fact that they do not necessarily cover the whole space. Thus, in order to deal with the cases when $t$ does not divide $s$ we use \emph{covering designs} which are defined as follows~\cite{EV11}.
\begin{definition}\label{def:CoveringDesign} A \textbf{covering design} $\mathbb{C}_q(s, t, a)$ is a subset ${\cal S}\subseteq {\cal G}_q(s, t)$ such that each element of ${\cal G}_q(s, a)$ is contained in at least one subspace from ${\cal S}$. \end{definition} The covering number $C_q(s, t, a)$ is the minimum size of a covering design $\mathbb{C}_q(s, t, a)$. From~\cite[Th. 4.6]{EV11} we get that for any $1\leq t \leq s$, \begin{equation}\label{eq:CovDesign} C_q(s,t,1) = \left\lceil\frac{q^s-1}{q^t-1}\right\rceil. \end{equation} Note that when $t|s$, an optimal covering design $\mathbb{C}_q(s, t, 1)$ is exactly a $t$-spread of $\mathbb{F}^s_q$. Now, we will define another family of partitions and another family of codes that can be used to construct locality functional array codes. The following is a definition of \emph{$\lambda$-fold partitions} from~\cite{ESSSV11}. \begin{definition} Let $\lambda$ be a positive integer. A \textbf{$\lambda$-fold partition} of the vector space $V = \mathbb{F}^s_q$ is a multiset ${\cal S}$ of subspaces of $V$ such that every nonzero vector in $V$ is contained in exactly $\lambda$ subspaces in ${\cal S}$. \end{definition} Note that a $1$-fold partition of $\mathbb{F}^s_q$ that does not contain a subspace with dimension larger than $t$ is also a covering design $\mathbb{C}_q(s, t, 1)$. Denote by $A_q(s,t,\lambda)$ the minimum size of a $\lambda$-fold partition of $\mathbb{F}^s_q$ that does not contain a subspace with dimension larger than $t$. Further results on $\lambda$-fold partitions can be found in~\cite{ESSSV11}. For example, there exists a construction of a $\left(\frac{2^t-1}{2^p-1}\right)$-fold partition of $\Sigma^s$ with $\frac{2^s-1}{2^p-1}$ $t$-subspaces where $p=\gcd(s,t)$. Therefore, $A_2(s,t,\frac{2^t-1}{2^p-1}) \leq \frac{2^s-1}{2^p-1}$. \begin{comment} The following is a definition of \emph{subspace designs} from~\cite{C74}.
\begin{definition} For any positive integer $a$ where $a\leq t$, an $a$-$(s, t, k)_q$ \textbf{subspace design} over $\mathbb{F}_q$ is defined as a collection of $t$-subspaces of $\mathbb{F}_q^s$, called blocks, such that each element of ${\cal G}_q(s, a)$ is contained in exactly $k$ blocks. \end{definition} Note that an $a-(s,t,1)_q$ subspace design is also a covering design $\mathbb{C}_q(s, t, a)$. Denote by $A_q(s,t,k;a)$ the minimum size of an $a$-$(s,t,k)_q$ subspace design. \end{comment} Lastly, the following is a definition of \emph{covering Grassmannian codes} from~\cite{EZ19}. \begin{definition} For any positive integers $\alpha$ and $\delta$ where $\delta+t \leq s$, an $\alpha$-$(s, t, \delta)_q^c$ \textbf{covering Grassmannian code} $\mathbb{C}$ is a subset of ${\cal G}_q(s, t)$ such that each subset of $\alpha$ codewords of $\mathbb{C}$ spans a subspace whose dimension is at least $\delta+t$ in $\mathbb{F}_q^s$. \end{definition} The value $B_q(s,t,\delta;\alpha)$ denotes the maximum size of an $\alpha$-$(s, t, \delta)_q^c$ covering Grassmannian code. The following theorem summarizes some bounds on $D(s,k,t,r)$ using spreads, covering designs, $\lambda$-fold partitions, and covering Grassmannian codes. \begin{theorem}\label{SubspaceLocality} For all positive integers $s,t,k$ and $r$, the following hold. \begin{enumerate} \item $D(s,1,t,1) = C_2(s,t,1) = \left\lceil\frac{2^s-1}{2^t-1}\right\rceil$. \label{SubLocSec1} \item $D(s,1,t,r) \leq r \cdot \left\lceil\frac{2^{s/r}-1}{2^t - 1}\right\rceil$, where $r|s$. \label{SubLocSec2} \item $D(s,k,t,1) \leq A_2(s,t,k)$. \label{SubLocSec3} \item $D(s,\lfloor B_2(s,t,s-t;r)/r \rfloor,t,r) \leq B_2(s,t,s-t;r)$. \item $D(s,\qbin{s-1}{t-1}{2},t,1) \leq \qbin{s}{t}{2}$, where $t>1$. \item $D(s,\left\lfloor \frac{2^{s}-2^t}{r\cdot 2^{t}-r}\right\rfloor +1,t,r) \leq \frac{2^{s}-1}{2^{t}-1}$, where $s = rt$.
\label{SubLocSec5} \end{enumerate} \end{theorem} \begin{IEEEproof} \begin{enumerate} \item To prove this part we use a construction motivated by~\cite[Construction 2]{SES19}. Let $\mathbb{C}$ be a $\mathbb{C}_2(s,t,1)$ covering design with $C_2(s,t,1)$ $t$-subspaces. To construct an $(s,1,C_2(s,t,1),t,1)$ locality functional array code, we take $C_2(s,t,1)$ buckets where each bucket stores one of the $t$-subspaces from $\mathbb{C}$. From Definition~\ref{def:CoveringDesign}, every $1$-subspace of $\Sigma^s$ is contained in at least one $t$-subspace from $\mathbb{C}$. Thus, each request $R$ which can be represented by a $1$-subspace of $\Sigma^s$, is contained in at least one bucket. Therefore, by using Equation~\eqref{eq:CovDesign} we get that $D(s,1,t,1) \leq C_2(s,t,1) = \left\lceil\frac{2^s-1}{2^t-1}\right\rceil$. For the other direction, assume that ${\cal C}$ is an $(s,1,m,t,1)$ locality functional array code with $m$ buckets. We construct a $\mathbb{C}_2(s,t,1)$ covering design with $m$ $t$-subspaces of $\Sigma^s$ that are stored in the $m$ buckets of ${\cal C}$. Let $W$ be a $1$-subspace of $\Sigma^s$ that represents a request $R$ for the code ${\cal C}$. From the property of the code ${\cal C}$, there exists one bucket that contains $R$. Therefore, there exists one $t$-subspace in $\mathbb{C}$ that contains $W$. Thus, $C_2(s,t,1) \leq D(s,1,t,1)$. \item This result follows from part \eqref{SubLocSec1} of this theorem and Theorem~\ref{basicLocality}\eqref{BaLocSec3}. \item Let ${\cal S}$ be a $k$-fold partition of $\Sigma^s$ that does not contain a subspace with dimension larger than $t$. Assume that $|{\cal S}|=m$. To construct a locality functional array code, we take $m$ buckets where each bucket stores one of the subspaces from ${\cal S}$. Assume that $R$ is the request which can be represented by a vector ${\boldsymbol u}$ of $\Sigma^s$.
Then, from the property of the multiset ${\cal S}$, the vector ${\boldsymbol u}$ is contained in exactly $k$ subspaces in ${\cal S}$. Therefore, $R$ is contained in exactly $k$ buckets. Thus, the $m$ buckets form an $(s,k,m,t,1)$ locality functional array code, and hence, $D(s,k,t,1) \leq A_2(s,t,k)$. \item Let $\mathbb{C}$ be an $r$-$(s,t,s-t)_2^c$ covering Grassmannian code with $m$ $t$-subspaces of $\Sigma^s$. We take $m$ buckets where each bucket stores one of the $t$-subspaces from $\mathbb{C}$. Let $R$ be the request. From the property of the code $\mathbb{C}$, every subset of $r$ $t$-subspaces of $\mathbb{C}$ spans the whole space $\Sigma^s$. Hence, every subset of $r$ buckets contains $R$. Therefore, we can partition the $m$ buckets into $\lfloor m/r \rfloor$ parts of size $r$, where each part contains $R$, and hence, there exist $\lfloor m/r \rfloor$ recovering sets for $R$. Thus, the construction with the $m$ buckets forms an $(s,\lfloor m/r \rfloor,m,t,r)$ locality functional array code. \item To prove this part we use a construction motivated by~\cite[Construction 1]{SES19}. We construct an $(s,\qbin{s-1}{t-1}{2},\qbin{s}{t}{2},t,1)$ locality functional array code by taking $\qbin{s}{t}{2}$ buckets where each bucket has $t$ cells and stores one of the $t$-subspaces of $\Sigma^s$. Every $1$-subspace of $\Sigma^s$ is contained in exactly $\qbin{s-1}{t-1}{2}$ $t$-subspaces. Therefore, every request $R$ which can be represented by a $1$-subspace is contained in exactly $\qbin{s-1}{t-1}{2}$ buckets. Thus, we get that $D(s,\qbin{s-1}{t-1}{2},t,1) \leq \qbin{s}{t}{2}$. \item Let $s = rt$ and ${\cal S}$ be a $t$-spread of $\Sigma^s$ such that $|{\cal S}| = \frac{2^s-1}{2^{t}-1}$. To construct a locality functional array code we store each $t$-subspace in ${\cal S}$ in a bucket with $t$ cells. Assume that $R$ is the request which can be represented by a $1$-subspace $W$ of $\Sigma^s$. From the property of spreads, there exists a subspace in ${\cal S}$ that includes $W$.
Therefore, there exists a bucket that contains $R$, which forms a recovering set of size $1$. Then, partition the remaining $\frac{2^s-1}{2^{t}-1} - 1 = \frac{2^s - 2^t}{2^t-1}$ buckets into $\left\lfloor \frac{2^{s}-2^t}{r\cdot 2^{t}-r}\right\rfloor$ parts where each part has size $r$. Each part ${\cal P}_i$ has $r$ mutually disjoint $t$-subspaces $U_{i_1},U_{i_2},\ldots,U_{i_r}$. Hence, $\sum_{j=1}^{r} U_{i_j} = \Sigma^s$. Thus, each part ${\cal P}_i$ is a recovering set of $R$ of size $r$. Then, there exist $1+ \left\lfloor \frac{2^{s}-2^t}{r\cdot 2^{t}-r}\right\rfloor$ recovering sets each of size at most $r$ and the code is an $(s,\left\lfloor \frac{2^{s}-2^t}{r\cdot 2^{t}-r}\right\rfloor+1,\frac{2^s-1}{2^{t}-1},t,r)$ locality functional array code. \end{enumerate} \end{IEEEproof} The following is an example of Theorem~\ref{SubspaceLocality}\eqref{SubLocSec3}. \begin{example}\label{ex:foldLoc} In this example we will use an example of a $2$-fold partition from~\cite{ESSSV11} in order to construct a locality functional array code. Let $s = 3$. The following multiset ${\cal S}$ of subspaces of $\Sigma^3$ is a $2$-fold partition that does not contain a subspace with dimension larger than $t=2$. ${\cal S} = \{\{100,011,111\},\allowbreak\{010,001,011\},\allowbreak\{001,110,111\},\allowbreak\{110,010,100\},\allowbreak\{101\},\allowbreak\{101\}\}$. We represent each element in $\Sigma^3$ as a binary vector of length $3$ and every subspace in ${\cal S}$ by its elements except the zero vector. It holds that any nonzero binary vector of length $3$ is contained in exactly two subspaces in ${\cal S}$, and hence, $A_2(3,2,2)\leq 6$. We construct a $(3,2,6,2,1)$ locality functional array code with the following buckets that are obtained from ${\cal S}$.
\begin{center} \begin{tabular}{ |c|c|c|c|c|c| } \hline 1 & 2 & 3 & 4 & 5 & 6\\ \hline \hline $x_3$ & $x_2$ & $x_1$ & $x_2x_3$ & $x_1x_3$ & $x_1x_3$\\ \hline $x_1x_2$ & $x_1$ & $x_2x_3$ & $x_2$ & & \\ \hline \end{tabular} \end{center} In the tables of this section, a cell labeled $x_{i_1}x_{i_2}\cdots$ stores the linear combination $x_{i_1}+x_{i_2}+\cdots$. For example, if the request is $x_1+x_2$, then the recovering sets are $\{\{1\},\{2\}\}$. \end{example} The following is an example of Theorem~\ref{SubspaceLocality}\eqref{SubLocSec5}. \begin{example} For $s=4,t=2$ and $r=2$, the following set ${\cal S}$ is a $2$-spread of $\Sigma^4$ of size $\frac{2^4-1}{2^2-1} = 5$. ${\cal S} = \{\{0001,0010\},\allowbreak\{0100,1000\},\allowbreak\{0101,1010\},\allowbreak\{1001\allowbreak,0111\},\allowbreak\{0110\allowbreak,1011\}\}$. We represent each element in $\Sigma^4$ as a binary vector of length $4$ and every $2$-subspace as a basis with $2$ vectors. We construct a $(4,3,5,2,2)$ locality functional array code with the following buckets that are obtained from ${\cal S}$. \begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline 1 & 2 & 3 & 4 & 5\\ \hline \hline $x_1$ & $x_3$ & $x_1x_3$ & $x_1x_4$ & $x_2x_3$\\ \hline $x_2$ & $x_4$ & $x_2x_4$ & $x_1x_2x_3$ & $x_1x_2x_4$\\ \hline \end{tabular} \end{center} For example, if the request is $x_1+x_2$, then the recovering sets are $\{\{1\},\{2,3\},\{4,5\}\}$. \end{example} \subsection{Bounds and Constructions Based on Covering Codes} In this section we show how covering codes are used to construct locality functional array codes and to get lower bounds for $D(s,k,t,r)$. For the rest of the section we assume that ${\boldsymbol x} = (x_1,x_2,\ldots,x_s)$ is the vector of dimension $1\times s$ with the $s$ information bits. For the case of $t=1$ the following result can be obtained. Recall that $h[s,r]_q$ is the smallest length of a linear covering code over $\mathbb{F}_q$ with covering radius $r$ and redundancy $s$. \begin{theorem}\label{th:LocCovk=1} $D(s,1,1,r) = h[s,r]$.
\end{theorem} \begin{IEEEproof} There exists an $[h[s,r], h[s,r] - s,r]$ linear covering code with some parity check matrix $H$. To construct a locality functional array code we store in each bucket the linear combination $\langle {\boldsymbol h}_i, {\boldsymbol x}\rangle$ where ${\boldsymbol h}_i$ is the $i$-th column of $H$. Assume that $R$ is the request which can be represented by a binary vector ${\boldsymbol u} \in \Sigma^s$. From Property~\ref{property:covering}, we know that the vector ${\boldsymbol u}$ can be represented as the sum of at most $r$ columns of $H$. Therefore, there exists a recovering set of size at most $r$ for the request $R$. The number of buckets is the number of columns of $H$ which is $h[s,r]$. Thus, $D(s,1,1,r) \leq h[s,r]$. The lower bound can be obtained from Corollary~\ref{cor:LocToCov}, which will appear later. \end{IEEEproof} We can generalize the connection between covering codes and locality functional array codes to general $t$. We start by defining a partition of matrices. \begin{definition} A \textbf{$t$-partition} of a matrix $H$ is a collection ${\cal P}$ of subspaces of dimension $t$ with the property that every column vector of $H$ is contained in at least one member of ${\cal P}$. A $t$-partition is called \textbf{strict} if every column vector of $H$ is contained in exactly one member of ${\cal P}$. \end{definition} The next theorem shows the connection between covering codes and locality functional array codes with $k=1$. \begin{theorem}\label{th:CoveringToLocality} Let $H$ be a parity check matrix for an $[n,n-s,r]$ covering code, and let $p$ be the smallest size of a $t$-partition of $H$. Then, $D(s,1,t,r) \leq p$. \end{theorem} \begin{IEEEproof} Let $H$ be a parity check matrix of a given $[n,n-s,r]$ covering code. Let ${\cal P}$ be a $t$-partition of $H$ that contains $p$ subspaces of dimension $t$.
We construct an $(s,1,p,t,r)$ locality functional array code ${\cal C}$ by storing each $t$-subspace from ${\cal P}$ in one bucket with $t$ cells. Let ${\boldsymbol u}\in \Sigma^s$ be a request which represents the linear combination $\langle {\boldsymbol u}, {\boldsymbol x}\rangle$ of the $s$ information bits. From Property~\ref{property:covering}, we know that there exists a vector ${\boldsymbol y}\in\Sigma^n$ such that $H\cdot {\boldsymbol y} = {\boldsymbol u}$, where $w_H({\boldsymbol y}) \leq r$. If $w_H({\boldsymbol y}) = r' \leq r$, then the request ${\boldsymbol u}$ is equal to the sum of $r'$ columns of $H$, which we denote by ${\boldsymbol h}_{i_1},{\boldsymbol h}_{i_2},\ldots,{\boldsymbol h}_{i_{r'}}$. We know that each $1$-subspace with a basis $\{{\boldsymbol h}_{i_j}\},j\in[r']$ is contained in at least one subspace from the $t$-partition ${\cal P}$, and hence, the vector ${\boldsymbol h}_{i_j}$ is contained in one bucket of ${\cal C}$. Thus, we can get all the $r'$ columns from at most $r'\leq r$ buckets. \end{IEEEproof} Now, the method to get locality functional array codes from covering codes over $\mathbb{F}_q$ is established. We follow an example from~\cite{BPW89} and for that we use the following definition in the rest of this section. \begin{definition}\label{def:BasisHT} Let ${\cal B} = \{1, \epsilon, \epsilon^2,\ldots, \epsilon^{w-1}\}$ be a basis for $\mathbb{F}_{2^w}$ over $\Sigma$ where $\epsilon$ is a primitive element of $\mathbb{F}_{2^w}$. For each $i\in[0,2^w-2]$ let $(\epsilon^i)_w$ be the binary column vector of length $w$ that represents the element $\epsilon^i$ of $\mathbb{F}_{2^w}$ with respect to the basis ${\cal B}$. Let ${\cal U}_0$ be the binary matrix of size $(w\times (2^{w}-1))$ that has in column number $i,i\in[0,2^{w}-2]$ the vector $(\epsilon^i)_w$. For each $i\in[0,2^{w}-2]$, let ${\cal U}_i$ be the matrix which is obtained from ${\cal U}_0$ by cyclically rotating its columns $i$ places to the left.
Note that for each $i\in[0,2^{w}-2]$ the first column in matrix ${\cal U}_i$ is the vector $(\epsilon^i)_w$. For an element $\epsilon^i$ over $\mathbb{F}_{2^w}$ let ${\cal T}(\epsilon^i) = {\cal U}_i$ be a matrix over $\Sigma$ of size $(w \times (2^w-1))$ and let ${\cal T}(0)$ be the $(w\times (2^{w}-1))$ zeros matrix. We define the same transformation for vectors and matrices, where for a matrix $M_1$ of size $(a\times b)$ over $\mathbb{F}_{2^w}$ let ${\cal T}(M_1) = M_2$ be the matrix over $\Sigma$ of size $(aw \times b(2^w-1))$ that is obtained from $M_1$ by replacing each element $\alpha$ of $\mathbb{F}_{2^w}$ in the matrix $M_1$ by its appropriate $(w\times (2^{w}-1))$ matrix ${\cal T}(\alpha)$. \end{definition} The following is an example to demonstrate Definition~\ref{def:BasisHT}. \begin{example} Let ${\cal B} = \{1,\epsilon^1,\epsilon^2\}$ be a basis for $\mathbb{F}_{2^3}$ over $\Sigma$, where $\epsilon$ is a primitive element of $\mathbb{F}_{2^3}$ chosen to satisfy the primitive polynomial $x^3+x+1$, and hence, $\epsilon^3 = \epsilon + 1$. Then, the coordinates of the successive powers of $\epsilon$ with respect to ${\cal B}$ are the columns of the matrix ${\cal U}_0$ \begin{equation*} {\cal U}_0= \begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ 0&1&0&1&1&1&0\\ 0&0&1&0&1&1&1 \end{bmatrix} . \end{equation*} For example, the following matrix is ${\cal T}(\epsilon^1)$ \begin{equation*} {\cal T}(\epsilon^1)= \begin{bmatrix} 0&0&1&0&1&1&1\\ 1&0&1&1&1&0&0\\ 0&1&0&1&1&1&0 \end{bmatrix} . \end{equation*} \end{example} We show that the transformation defined in Definition~\ref{def:BasisHT} is a linear transformation. \begin{lemma}\label{lemma:linearT} The transformation ${\cal T}:\mathbb{F}_{2^w} \rightarrow \mathbb{F}_2^{w\times (2^w-1)}$ is a linear transformation. 
\end{lemma} \begin{IEEEproof} We want to show that for any $\epsilon^{i_1},\epsilon^{i_2} \in \mathbb{F}_{2^w}$, ${\cal T}(\epsilon^{i_1})+{\cal T}(\epsilon^{i_2}) = {\cal T}(\epsilon^{i_1}+\epsilon^{i_2})$. If $\epsilon^{i_1}=\epsilon^{i_2}$, then both sides are equal to the $(w\times (2^w-1))$ zeros matrix ${\cal T}(0)$, so assume that $\epsilon^{i_1}+\epsilon^{i_2} = \epsilon^{i_3}$ for some $i_3\in[0,2^w-2]$. From Definition~\ref{def:BasisHT}, we know that ${\cal T}(\epsilon^{i_1})+{\cal T}(\epsilon^{i_2}) = {\cal U}_{i_1}+{\cal U}_{i_2}$. From Definition~\ref{def:BasisHT}, for every $j\in[0,2^w-2]$, column number $j$ of ${\cal U}_{i_1},{\cal U}_{i_2},{\cal U}_{i_3},{\cal U}_{i_1}+{\cal U}_{i_2}$ is $(\epsilon^{i_1+j})_w,(\epsilon^{i_2+j})_w,(\epsilon^{i_3+j})_w,(\epsilon^{i_1+j} + \epsilon^{i_2+j})_w$, respectively, where the exponents are taken modulo $2^w-1$. Also, $\epsilon^{i_1+j} + \epsilon^{i_2+j}= \epsilon^{j}(\epsilon^{i_1} + \epsilon^{i_2}) = \epsilon^{i_3+j}$. Thus, column number $j$ of ${\cal U}_{i_3}$ is equal to column number $j$ of ${\cal U}_{i_1}+{\cal U}_{i_2}$ for all $j\in[0,2^w-2]$. Thus, ${\cal T}(\epsilon^{i_1})+{\cal T}(\epsilon^{i_2}) = {\cal U}_{i_1}+{\cal U}_{i_2} = {\cal U}_{i_3} = {\cal T}(\epsilon^{i_1}+\epsilon^{i_2})$. \end{IEEEproof} The transformation ${\cal T}$ defined on vectors and matrices in Definition~\ref{def:BasisHT} is also a linear transformation, by a proof similar to that of Lemma~\ref{lemma:linearT}. The following result can be found in~\cite[Lemma 3.1]{BPW89}, but we prove it in a different way, by constructing a specific parity check matrix, in order to use it in other claims. \begin{lemma}\label{lemma:2^wTo2} Let $H$ be a parity check matrix of an $[n,n-s,r]_{2^w}$ covering code. Then, the matrix ${\cal T}(H)$ is a parity check matrix of a binary $[(2^w - 1)n, (2^w - 1)n - ws,r]$ covering code. In particular, $h[ws,r] \leq (2^w - 1)\cdot h[s,r]_{2^w}$. \end{lemma} \begin{IEEEproof} Let ${\cal C}$ be an $[n,n-s,r]_{2^w}$ covering code and let $H$ be a parity check matrix of the code ${\cal C}$ of size $(s\times n)$.
We want to show that the matrix $H' = {\cal T}(H)$ is a parity check matrix of a binary $[(2^w - 1)n, (2^w - 1)n - ws,r]$ covering code. The size of $H'$ is $(ws\times (2^w-1)n)$. Given a binary column vector ${\boldsymbol u}$ of length $ws$, we show that there are at most $r$ columns of $H'$ whose sum is ${\boldsymbol u}$. The vector ${\boldsymbol u}$ can be partitioned into $s$ vectors of length $w$, where ${\boldsymbol u} = ({\boldsymbol u}_1,{\boldsymbol u}_2,\ldots,{\boldsymbol u}_s)^\intercal$. Each vector ${\boldsymbol u}_i$ of length $w$ represents an element of $\mathbb{F}_{2^w}$ according to the basis ${\cal B}$ from Definition~\ref{def:BasisHT}. Hence, ${\boldsymbol u} = ((\epsilon^{i_1})_w^\intercal,(\epsilon^{i_2})_w^\intercal,\ldots,(\epsilon^{i_s})_w^\intercal)^\intercal$, and from the $s$ elements we can form a column vector ${\boldsymbol v} = (\epsilon^{i_1},\epsilon^{i_2},\ldots,\epsilon^{i_s})^\intercal$ of length $s$ over $\mathbb{F}_{2^w}$. The first column in each ${\cal U}_i$, $i\in[0,2^w-2]$, is the vector $(\epsilon^i)_w$. Then, from the construction of ${\cal T}({\boldsymbol v})$, the first column of the matrix ${\cal T}({\boldsymbol v})$ is the vector ${\boldsymbol u}$. From the covering property of the code ${\cal C}$, it is known that there exists a vector ${\boldsymbol y}\in\mathbb{F}_{2^w}^n$ such that $H\cdot {\boldsymbol y} = {\boldsymbol v}$, where $w_H({\boldsymbol y}) \leq r$. Let ${\cal A} = \{i: i\in [n], y_i \neq 0\}$ and note that $|{\cal A}|\leq r$. Let ${\boldsymbol h}_i$ be the $i$-th column of $H$. Then, $\sum_{i\in{\cal A}} y_i{\boldsymbol h}_i = {\boldsymbol v}$. For each $i\in{\cal A}$ we define ${\boldsymbol h}'_i = y_i {\boldsymbol h}_i$, and from the linearity of the transformation ${\cal T}$ we have ${\cal T}({\boldsymbol v}) = {\cal T}(\sum_{i\in{\cal A}} {\boldsymbol h}'_i) = \sum_{i\in{\cal A}} {\cal T}({\boldsymbol h}'_i)$.
Thus, $(\sum_{i\in{\cal A}} {\cal T}({\boldsymbol h}'_i))_1 = \sum_{i\in{\cal A}} {\cal T}({\boldsymbol h}'_i)_1 = {\boldsymbol u}$, where ${\cal T}({\boldsymbol h}'_i)_1$ is the first column of the matrix ${\cal T}({\boldsymbol h}'_i)$. For each $i\in {\cal A}$, assume that $y_i = \epsilon^{j_i}$. Then, the first column of the matrix ${\cal T}({\boldsymbol h}'_i)$ is column number $j_i$ of the matrix ${\cal T}({\boldsymbol h}_i)$, where the columns within each such matrix are indexed by $[0,2^w-2]$. Thus, $\sum_{i\in{\cal A}} {\cal T}({\boldsymbol h}_i)_{j_i} = {\boldsymbol u}$, where ${\cal T}({\boldsymbol h}_i)_{j_i}$ is column number $j_i$ of the matrix ${\cal T}({\boldsymbol h}_i)$. For each $i\in{\cal A}$, the matrix ${\cal T}({\boldsymbol h}_i)$ has size $(ws\times (2^w-1))$ and it is a submatrix of $H'$ that starts in column number $(2^w-1)(i-1)+1$ of $H'$. Hence, column number $j_i$ of the matrix ${\cal T}({\boldsymbol h}_i)$ is column number $(2^w-1)(i-1)+j_i+1$ of the matrix $H'$. Therefore, $\sum_{i\in{\cal A}} {\boldsymbol h}'_{(2^w-1)(i-1)+j_i+1} = {\boldsymbol u}$, where, abusing notation, ${\boldsymbol h}'_i$ here denotes the $i$-th column of $H'$. Thus, the vector ${\boldsymbol u}$ is a sum of $|{\cal A}| \leq r$ columns of $H'$, and the matrix $H'$ is a parity check matrix of a binary $[(2^w - 1)n, (2^w - 1)n - ws,r]$ covering code. \end{IEEEproof} An upper bound on the value of $D(s,1,t,r)$ is obtained in the next theorem using non-binary covering codes. \begin{theorem}\label{th:covToLoc} For any positive integer $w$ such that $t|w$, $D(ws,1,t,r) \leq \dfrac{(2^w - 1)h[s,r]_{2^w}}{2^t - 1}$. \end{theorem} \begin{IEEEproof} Let ${\cal C}$ be an $[n, n- s,r]_{2^w}$ covering code over $\mathbb{F}_{2^w}$, where $n=h[s,r]_{2^w}$. Let the matrix $H$ be a parity check matrix of ${\cal C}$ of size $(s \times n)$. From Lemma~\ref{lemma:2^wTo2}, we get that there exists a binary $[(2^w - 1)n, (2^w - 1)n - ws,r]$ covering code with parity check matrix $H' = {\cal T}(H)$. We want to find the smallest size of a $t$-partition of $H'$.
Let $j_1,j_2,j_3\in[0,2^w-2]$ be such that $\epsilon^{j_1} + \epsilon^{j_2} = \epsilon^{j_3}$. Then, in the matrix ${\cal U}_0$ from Definition~\ref{def:BasisHT}, it holds that the sum of the $j_1$-th and $j_2$-th columns is the $j_3$-th column. In the matrix ${\cal U}_i$, $i\in[0,2^w-2]$, the $j_1$-th, $j_2$-th, $j_3$-th columns are $(\epsilon^i \cdot \epsilon^{j_1})_w, (\epsilon^i \cdot \epsilon^{j_2})_w, (\epsilon^i \cdot \epsilon^{j_3})_w$, respectively. It holds that $\epsilon^i \cdot \epsilon^{j_1} + \epsilon^i \cdot \epsilon^{j_2} = \epsilon^i \cdot (\epsilon^{j_1} + \epsilon^{j_2}) = \epsilon^i \cdot \epsilon^{j_3}$. Thus, we can conclude that in the matrix ${\cal U}_i$, $i\in[0,2^w-2]$, it also holds that the sum of the $j_1$-th and $j_2$-th columns is the $j_3$-th column. Let $({\cal U}_i)_{j}$ be the $j$-th column of ${\cal U}_i$. Assume that a basis that includes the columns $\{({\cal U}_0)_{j_1},({\cal U}_0)_{j_2},\ldots,({\cal U}_0)_{j_t}\}$ spans the columns $\{({\cal U}_0)_{j_1},({\cal U}_0)_{j_2},\ldots,({\cal U}_0)_{j_{2^t-1}}\}$ of the matrix ${\cal U}_0$. Then, the basis that includes the columns $\{({\cal U}_i)_{j_1},({\cal U}_i)_{j_2},\ldots,({\cal U}_i)_{j_t}\}$ spans the columns $\{({\cal U}_i)_{j_1},({\cal U}_i)_{j_2},\ldots,({\cal U}_i)_{j_{2^t-1}}\}$ of the matrix ${\cal U}_i$, $i\in[0,2^w-2]$. The matrix ${\cal U}_0$ includes all the nonzero column vectors of length $w$, that is, its columns are exactly the elements of $\mathbb{F}^w_2\setminus \{0\}$. It is given that $t|w$, and hence there exists a $t$-spread of $\mathbb{F}^w_2$. Thus, there exists a strict $t$-partition ${\cal P}$ of ${\cal U}_0$ with $p = \frac{2^w-1}{2^t-1}$ $t$-subspaces.
Each subspace of ${\cal P}$ is represented by a basis of $t$ column vectors of ${\cal U}_0$ and denote them by $\{\{({\cal U}_0)_{j^1_1},\allowbreak({\cal U}_0)_{j^1_2},\allowbreak\ldots,({\cal U}_0)_{j^1_t}\},\allowbreak\{({\cal U}_0)_{j^2_1},\allowbreak({\cal U}_0)_{j^2_2},\allowbreak\ldots,\allowbreak({\cal U}_0)_{j^2_t}\},\allowbreak\ldots,\allowbreak \{({\cal U}_0)_{j^p_1},\allowbreak({\cal U}_0)_{j^p_2},\allowbreak\ldots,\allowbreak({\cal U}_0)_{j^p_t}\}\}$. The $p$ $t$-subspaces $\{\{({\cal U}_i)_{j^1_1},\allowbreak({\cal U}_i)_{j^1_2},\allowbreak\ldots,\allowbreak({\cal U}_i)_{j^1_t}\},\allowbreak\{({\cal U}_i)_{j^2_1},\allowbreak({\cal U}_i)_{j^2_2},\allowbreak\ldots,\allowbreak({\cal U}_i)_{j^2_t}\},\allowbreak\ldots,\allowbreak \{({\cal U}_i)_{j^p_1},\allowbreak({\cal U}_i)_{j^p_2},\allowbreak\ldots,\allowbreak({\cal U}_i)_{j^p_t}\}\}$ form a strict $t$-partition of ${\cal U}_i$. For each $i\in[n]$ let ${\boldsymbol h}_i$ be the $i$-th column of the matrix $H$. The matrix ${\cal T}({\boldsymbol h}_i)$ includes $s$ matrices of size $(w \times (2^w-1))$ that all have the same partition regarding the column numbers. Hence, the partition $\{\{(({\cal U}_{i_1})^\intercal_{j^1_1},\allowbreak({\cal U}_{i_2})^\intercal_{j^1_1},\allowbreak\ldots,\allowbreak({\cal U}_{i_s})^\intercal_{j^1_1})^\intercal,\allowbreak\ldots,\allowbreak(({\cal U}_{i_1})^\intercal_{j^1_t},\allowbreak({\cal U}_{i_2})^\intercal_{j^1_t},\allowbreak\ldots,\allowbreak({\cal U}_{i_s})^\intercal_{j^1_t})^\intercal\},\allowbreak\ldots,\allowbreak \{(({\cal U}_{i_1})^\intercal_{j^p_1},\allowbreak({\cal U}_{i_2})^\intercal_{j^p_1},\allowbreak\ldots,\allowbreak({\cal U}_{i_s})^\intercal_{j^p_1})^\intercal,\allowbreak\ldots,\allowbreak(({\cal U}_{i_1})^\intercal_{j^p_t},\allowbreak({\cal U}_{i_2})^\intercal_{j^p_t},\allowbreak\ldots,\allowbreak({\cal U}_{i_s})^\intercal_{j^p_t})^\intercal\}\}$ is a strict $t$-partition of ${\cal T}({\boldsymbol h}_i)$ with $p=\frac{2^w-1}{2^t-1}$ $t$-subspaces. 
Therefore, there exists a strict $t$-partition of the matrix $H'$ with $\frac{(2^w-1)n}{2^t-1}$ $t$-subspaces. Thus, by using Theorem~\ref{th:CoveringToLocality} we get that $D(ws,1,t,r) \leq \dfrac{(2^w - 1)h[s,r]_{2^w}}{2^t - 1}$. \end{IEEEproof} We can use Theorem~\ref{th:covToLoc} to find upper bounds on the value of $D(s,1,t,r)$ by using previous bounds on the size of non-binary covering codes. \begin{example}\label{ex:covToLocEx} \begin{enumerate} \item In~\cite{DO01} a $[1097,1097-8,2]_{2^3}$ covering code is provided. Thus, $h[8,2]_{2^3} \leq 1097$. Then, from Theorem~\ref{th:covToLoc}, $D(3\cdot 8,1,3,2) = D(24,1,3,2) \leq \frac{2^3 - 1}{2^3-1}\cdot h[8,2]_{2^3} = 1097$. For a lower bound, we can use Theorem~\ref{basicLocality}\eqref{section1} to get $D(24,1,3,2) \geq 828$. \item For $r=3$, the following result can be obtained from~\cite[Theorem 4.3]{DGMP08}. For $q=4$ and $p = 3$, $h[3p+2,3]_{q} \leq 9\cdot q^{2} + 2\cdot\frac{q^2-1}{q-1} = 154$. Hence, $h[11,3]_{2^2} \leq 154$. From Theorem~\ref{th:covToLoc}, $D(22,1,2,3)\leq 154$. For a lower bound, we can use Theorem~\ref{basicLocality}\eqref{section1} to get $D(22,1,2,3) \geq 99$. \end{enumerate} \end{example} The following is another use of Theorem~\ref{th:covToLoc} to find bounds on the value of $D(s,1,t,r)$ using another general family of non-binary covering codes. \begin{corollary}\label{th:th58} For any positive integers $w$ and $t$, where $t|w$, $D(4w,1,t,2) \leq \dfrac{(2^w - 1)(2^{w+1}+1)}{2^t - 1}$. \end{corollary} \begin{IEEEproof} In~\cite[Theorem 3.2]{BPW89} there exists a construction of a $(4\times (2^{w+1}+1))$ parity check matrix $H$ of a $[2^{w+1}+1,2^{w+1}+1 - 4,2]_{2^w}$ covering code over $\mathbb{F}_{2^w}$. Therefore, $h[4,2]_{2^w} \leq 2^{w+1}+1$. From Theorem~\ref{th:covToLoc} we get $D(4w,1,t,2) \leq \dfrac{(2^w - 1)(2^{w+1}+1)}{2^t - 1}$. \end{IEEEproof} For any positive integers $w$ and $t$, where $t|w$, we have $D(4w,1,t,2)\leq 2\cdot \frac{2^{2w}-1}{2^t-1}$ from Theorem~\ref{SubspaceLocality}\eqref{SubLocSec2}, and from Corollary~\ref{th:th58} we get $D(4w,1,t,2) \leq \frac{(2^w - 1)(2^{w+1}+1)}{2^t - 1}$.
Thus, we can save $2\cdot \frac{2^{2w}-1}{2^t-1} - \frac{(2^w - 1)(2^{w+1}+1)}{2^t - 1} = \frac{2^{w}-1}{2^t-1}$ buckets. The following is an example of a locality functional array code that is obtained from Corollary~\ref{th:th58}. \begin{example} For the case of $w=4$ and $t=2$, we have $s=4w=16$. Let $V$ be $\mathbb{F}_{2^w}^4=\mathbb{F}_{16}^4$. To get a basis for $V$ as a vector space over $\Sigma$, we first choose a basis ${\cal B} = \{1, \epsilon, \epsilon^2, \epsilon^3\}$ for $\mathbb{F}_{16}$ over $\Sigma$ where $\epsilon$ is a primitive element of $\mathbb{F}_{16}$ chosen to satisfy the primitive polynomial $x^4+x+1$. It holds that $\epsilon^4 = \epsilon + 1$.
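The coordinate columns used below can be generated mechanically from the relation $\epsilon^4 = \epsilon + 1$; the following is a minimal Python sketch of this computation (all names are illustrative, not from the paper):

```python
# Coordinates of eps^0, eps^1, ..., eps^14 over GF(2^4) with the
# primitive polynomial x^4 + x + 1 (so eps^4 = eps + 1), with respect
# to the basis {1, eps, eps^2, eps^3}. These are the columns of U_0.
w = 4

def mul_by_eps(v):
    # Multiply a field element (coordinate vector v) by eps: shift the
    # coordinates up by one and reduce the overflow via eps^4 = eps + 1.
    carry = v[w - 1]
    shifted = [0] + v[:w - 1]
    shifted[0] ^= carry  # the "1" part of eps^4
    shifted[1] ^= carry  # the "eps" part of eps^4
    return shifted

cols, v = [], [1, 0, 0, 0]  # start from eps^0 = 1
for _ in range(2 ** w - 1):
    cols.append(v)
    v = mul_by_eps(v)
```

Since $\epsilon$ is primitive, the $15$ vectors produced this way are pairwise distinct and run over all nonzero vectors of $\Sigma^4$.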
Then, the coordinates of the successive powers of $\epsilon$ with respect to the basis ${\cal B}$ are the columns of the matrix \setcounter{MaxMatrixCols}{20} \begin{equation*} {\cal U}_0= \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1&0&1&1&1\\ 0&1&0&0&1&1&0&1&0&1&1&1&1&0&0\\ 0&0&1&0&0&1&1&0&1&0&1&1&1&1&0\\ 0 & 0 & 0 & 1 & 0 & 0 &1 &1&0&1&0&1&1&1&1 \end{bmatrix} . \end{equation*} In~\cite[Theorem 3.2]{BPW89}, there exists a construction of a parity check matrix of a $[33,33-4,2]_{2^4}$ covering code. \setcounter{MaxMatrixCols}{20} \begin{equation*} H = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 & 0 & 0 & 0 & 0 & \cdots & 0\\ 1 & \epsilon^1 & \epsilon^2 & \cdots & \epsilon^{14} & 0 &1&0&0&0&\cdots&0\\ 1 & \epsilon^2 & \epsilon^4 & \cdots & \epsilon^{13} & 0&0&0&1&1&\cdots&1\\ 1 & 0 & 0 & \cdots & 0 & 0 & 0 &1 &1 & \epsilon^1 & \cdots & \epsilon^{14} \end{bmatrix} . \end{equation*} Let $({\cal U}_i)_{j}$ be the $j$-th column of ${\cal U}_i$. The following is a strict $t$-partition of ${\cal U}_i$: ${\cal P}_i=\{\{({\cal U}_i)_1,\allowbreak({\cal U}_i)_6,\allowbreak({\cal U}_i)_{11}\},\allowbreak\{({\cal U}_i)_2,\allowbreak({\cal U}_i)_7,\allowbreak({\cal U}_i)_{12}\},\allowbreak\{({\cal U}_i)_3,\allowbreak({\cal U}_i)_8,\allowbreak({\cal U}_i)_{13}\},\allowbreak\{({\cal U}_i)_4,\allowbreak({\cal U}_i)_9,\allowbreak({\cal U}_i)_{14}\},\allowbreak\{({\cal U}_i)_5,\allowbreak({\cal U}_i)_{10},\allowbreak({\cal U}_i)_{15}\}\}$, where we represent every subspace in ${\cal P}_i$ by its elements except the zero vector. In addition, each subspace can be represented by a basis of two vectors. From Lemma~\ref{lemma:2^wTo2} we get that $H' = {\cal T}(H)$ is a parity check matrix of a binary $[495,495-16,2]$ covering code. Recall that in the transformation ${\cal T}$, each element of $\mathbb{F}_{16}$ is replaced with an appropriate matrix ${\cal U}_i$ of size $(4\times 15)$.
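The claim behind this partition, namely that each triple of columns $\{j,j+5,j+10\}$ of ${\cal U}_0$ (indexing the exponents of $\epsilon$ from zero) together with the zero vector forms a $2$-subspace, can be verified computationally; a self-contained Python sketch under the same conventions as in the example (names are illustrative):

```python
# Verify that the cosets {eps^j, eps^(j+5), eps^(j+10)}, j in [0,4], of
# the order-3 multiplicative subgroup partition the nonzero vectors of
# GF(2)^4 into five 2-subspaces (minus their zero vector).
w = 4

def mul_by_eps(v):
    # Multiply by eps in GF(2^4) with eps^4 = eps + 1.
    carry = v[w - 1]
    shifted = [0] + v[:w - 1]
    shifted[0] ^= carry
    shifted[1] ^= carry
    return shifted

cols, v = [], [1, 0, 0, 0]
for _ in range(2 ** w - 1):
    cols.append(v)
    v = mul_by_eps(v)

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# Each part {j, j+5, j+10} is closed under addition, hence a 2-subspace.
parts = [[cols[j], cols[j + 5], cols[j + 10]] for j in range(5)]
assert all(xor(a, b) == c for a, b, c in parts)
# The five parts cover all 15 nonzero vectors exactly once.
assert len({tuple(x) for p in parts for x in p}) == 15
```

The closure follows algebraically from $1+\epsilon^5=\epsilon^{10}$, so each coset $\{\epsilon^j,\epsilon^{j+5},\epsilon^{j+10}\}$ sums to zero pairwise-consistently.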
Each column in $H$ has $4$ elements of $\mathbb{F}_{16}$ and is replaced by $4$ matrices, where each matrix ${\cal U}_i$ of size $(4\times 15)$ has a strict $t$-partition ${\cal P}_i$ with $5$ subspaces. Hence, each column of $H$ becomes a $(16\times 15)$ matrix in $H'$, which can be stored in $5$ buckets such that each bucket stores one subspace from the partition, and hence, the $33$ columns of $H$ can be stored in $33\cdot 5 = 165$ buckets. Thus, we get that $D(16,1,2,2) \leq 165$. \end{example} Next, another possible way to obtain locality functional array codes from covering codes is presented. First, we define a modification of matrices that we will use in order to construct new parity check matrices of covering codes from given parity check matrices. \begin{definition} Given a matrix $H$ of size $(s\times n)$, its $i$-th \textbf{modified matrix}, denoted by $H^{(i)}$ and of size $(s\times (n+1))$, is the matrix that has the same rows as $H$, except for row $i$, which is replaced by its complement, together with an additional column whose only nonzero entry is a $1$ in row $i$.
\end{definition} The next theorem shows that for a given parity check matrix of a covering code, the modified matrix is also a parity check matrix of another covering code. Although this seems to be a basic property, we could not find a proof of it in the literature, and hence we include one for completeness. \begin{theorem}\label{th:CovMat} For a parity check matrix $H$ of a binary $[n,n-s,2]$ covering code and an integer $i\in[s]$, the $i$-th modified matrix $H^{(i)}$ is also a parity check matrix of a binary $[n+1,n+1-s,2]$ covering code. \end{theorem} \begin{IEEEproof} Let $H$ be a parity check matrix of an $[n,n-s,2]$ covering code. For a given $i\in[s]$, let $H^{(i)}$ be the $i$-th modified matrix of $H$. The size of $H^{(i)}$ is $(s\times(n+1))$. From Property~\ref{property:covering}, for each vector ${\boldsymbol v}\in\Sigma^s$ there exists a vector ${\boldsymbol y}\in \Sigma^n$ such that $H\cdot {\boldsymbol y} = {\boldsymbol v}$, where $w_H({\boldsymbol y}) \leq 2$. Let ${\boldsymbol h}_j,{\boldsymbol h}'_j$ be the $j$-th column of $H,H^{(i)}$, respectively. If $w_H({\boldsymbol y})=2$, assume that ${\boldsymbol v} = {\boldsymbol h}_{j_1}+{\boldsymbol h}_{j_2}$. Each column vector ${\boldsymbol h}'_j$ differs from the column vector ${\boldsymbol h}_j$ only in row $i$, where ${\boldsymbol h}'_j$ has the complement of the entry of ${\boldsymbol h}_j$. Since the two complemented entries in row $i$ cancel each other in the sum, it holds that ${\boldsymbol v} = {\boldsymbol h}'_{j_1}+{\boldsymbol h}'_{j_2}$. If $w_H({\boldsymbol y})=1$, assume that ${\boldsymbol v} = {\boldsymbol h}_j$. From the construction of $H^{(i)}$, it holds that ${\boldsymbol h}_j = {\boldsymbol h}'_j + {\boldsymbol h}'_{n+1}$. Therefore, we can get ${\boldsymbol v}$ as a sum of two columns of $H^{(i)}$. Thus, $H^{(i)}$ is a parity check matrix of a binary $[n+1,n+1-s,2]$ covering code. \end{IEEEproof} One possible way to use Theorem~\ref{th:CovMat} to get locality functional array codes is shown next.
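The modified-matrix operation of Theorem~\ref{th:CovMat} can also be sanity-checked on a toy instance; the following Python sketch uses a hypothetical small parity check matrix with covering radius $2$ (an illustrative choice, not a matrix from this work) and verifies that the modification preserves the covering property:

```python
from itertools import combinations

def covered(cols, s, r):
    # True iff every vector of F_2^s is the XOR of at most r of the
    # given columns (the defining property of covering radius <= r).
    reach = {(0,) * s}
    for k in range(1, r + 1):
        for combo in combinations(cols, k):
            acc = (0,) * s
            for c in combo:
                acc = tuple(a ^ b for a, b in zip(acc, c))
            reach.add(acc)
    return len(reach) == 2 ** s

# Toy parity check matrix (as a list of columns) with redundancy s = 3
# and covering radius at most 2.
s, i = 3, 0
H = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
assert covered(H, s, 2)

# The i-th modified matrix: complement row i in every column and append
# the unit column whose single 1 is in row i.
H_mod = [tuple(b ^ (1 if k == i else 0) for k, b in enumerate(c)) for c in H]
H_mod.append(tuple(1 if k == i else 0 for k in range(s)))
assert covered(H_mod, s, 2)  # covering radius 2 is preserved
```

The brute-force check mirrors the two cases in the proof: a weight-$2$ syndrome representation survives because the complemented entries cancel, and a weight-$1$ representation is replaced by a sum with the appended unit column.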
\begin{theorem}\label{th:LOCs=7} $D(7,1,2,2) = 7$. \end{theorem} \begin{IEEEproof} From~\cite[Theorem 1]{GDT91} and the example after it, we can get a construction of a parity check matrix for a binary $[19,19-7,2]$ covering code. The following is a parity check matrix $H$ of the code. \begin{equation*} \begin{scriptsize} \begin{bmatrix} 0&0&0&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1 \\ 0&1&1&0&0&1&1&0&0&1&1&0&0&1&1&0&0&0&0 \\ 1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&0&0&0 \\ 0&0&0&0&0&1&1&0&1&0&1&0&1&1&0&0&0&1&1 \\ 0&0&0&0&1&1&0&0&0&1&1&0&1&0&1&0&1&0&1 \\ 0&0&0&0&0&0&0&0&0&0&0&1&1&1&1&1&1&1&1 \\ 0&0&0&0&0&0&0&1&1&1&1&0&0&0&0&1&1&1&1 \\ \end{bmatrix} . \end{scriptsize} \end{equation*} The following is the matrix $H^{(1)}$, the first modified matrix of $H$, where the first row is the complement of the first row of $H$ and a new column with only $1$ in the first entry is added. \noindent \scalebox{0.84}{% $\begin{bmatrix} 1&1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1 \\ 0&1&1&0&0&1&1&0&0&1&1&0&0&1&1&0&0&0&0&0 \\ 1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&0&0&0&0 \\ 0&0&0&0&0&1&1&0&1&0&1&0&1&1&0&0&0&1&1&0 \\ 0&0&0&0&1&1&0&0&0&1&1&0&1&0&1&0&1&0&1&0 \\ 0&0&0&0&0&0&0&0&0&0&0&1&1&1&1&1&1&1&1&0 \\ 0&0&0&0&0&0&0&1&1&1&1&0&0&0&0&1&1&1&1&0 \\ \end{bmatrix}.$} From Theorem~\ref{th:CovMat}, the matrix $H^{(1)}$ is a parity check matrix of a binary $[20,20-7,2]$ covering code. Note that the fourth column is an all-zero column, which we can remove to get the following matrix $H^{(1)'}$, which is a parity check matrix of a binary $[19,19-7,2]$ covering code.
\begin{equation*} \begin{scriptsize} \begin{bmatrix} 1&1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1 \\ 0&1&1&0&1&1&0&0&1&1&0&0&1&1&0&0&0&0&0 \\ 1&0&1&1&0&1&0&1&0&1&0&1&0&1&0&0&0&0&0 \\ 0&0&0&0&1&1&0&1&0&1&0&1&1&0&0&0&1&1&0 \\ 0&0&0&1&1&0&0&0&1&1&0&1&0&1&0&1&0&1&0 \\ 0&0&0&0&0&0&0&0&0&0&1&1&1&1&1&1&1&1&0 \\ 0&0&0&0&0&0&1&1&1&1&0&0&0&0&1&1&1&1&0 \\ \end{bmatrix} . \end{scriptsize} \end{equation*} Let ${\boldsymbol h}'_j$ be the $j$-th column of the matrix $H^{(1)'}$. We can find a $2$-partition of the matrix $H^{(1)'}$. We present the partition as a set of $7$ $2$-subspaces, where each subspace is represented by a basis consisting of two columns of $H^{(1)'}$. The following is a possible $2$-partition of $H^{(1)'}$: ${\cal P} = \{\{{\boldsymbol h}'_7,{\boldsymbol h}'_{11}\},\allowbreak\{{\boldsymbol h}'_8,{\boldsymbol h}'_{12}\},\allowbreak\{{\boldsymbol h}'_9,{\boldsymbol h}'_{13}\},\allowbreak\{{\boldsymbol h}'_{10},{\boldsymbol h}'_{14}\},\allowbreak\{{\boldsymbol h}'_4,{\boldsymbol h}'_5\},\allowbreak\{{\boldsymbol h}'_1,{\boldsymbol h}'_2\},\allowbreak\{{\boldsymbol h}'_3,{\boldsymbol h}'_{19}\}\}$. We can see that $14$ out of the $19$ columns form the bases. It can be verified that ${\boldsymbol h}'_4 + {\boldsymbol h}'_5 = {\boldsymbol h}'_{6}$, ${\boldsymbol h}'_7+{\boldsymbol h}'_{11} = {\boldsymbol h}'_{15}$, ${\boldsymbol h}'_8+{\boldsymbol h}'_{12} = {\boldsymbol h}'_{16}$, ${\boldsymbol h}'_{10} + {\boldsymbol h}'_{14} = {\boldsymbol h}'_{17}$, and ${\boldsymbol h}'_9 + {\boldsymbol h}'_{13} = {\boldsymbol h}'_{18}$, so that every column of $H^{(1)'}$ belongs to one of the seven subspaces. Therefore, ${\cal P}$ is a $2$-partition of $H^{(1)'}$ of size $7$. Thus, from Theorem~\ref{th:CoveringToLocality} we get that $D(7,1,2,2)\leq 7$. For the lower bound, assume by contradiction that there exists a $(7,1,6,2,2)$ locality functional array code. Then, from Theorem~\ref{th:LocToCov} we get that $h[7,2] \leq 18$. But from~\cite{CHLL97} we have that $h[7,2] = 19$, which is a contradiction. Thus, $D(7,1,2,2)\geq 7$.
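The five column sums used in this proof can be checked mechanically; the following short Python sketch transcribes the rows of $H^{(1)'}$ as bit strings and verifies each claimed identity:

```python
# Rows of the matrix H^(1)' from the proof, transcribed as bit strings.
rows = [
    "1110000000000000001",
    "0110110011001100000",
    "1011010101010100000",
    "0000110101011000110",
    "0001100011010101010",
    "0000000000111111110",
    "0000001111000011110",
]

def col(j):
    # j-th column (1-indexed, as in the proof) of H^(1)' as a bit tuple.
    return tuple(int(r[j - 1]) for r in rows)

def xor(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

# The five column sums claimed in the proof: h'_a + h'_b = h'_c.
claims = [(4, 5, 6), (7, 11, 15), (8, 12, 16), (10, 14, 17), (9, 13, 18)]
assert all(xor(col(a), col(b)) == col(c) for a, b, c in claims)
```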
\end{IEEEproof} Next, we show how to construct covering codes using locality functional array codes. \begin{theorem}\label{th:LocToCov} Let ${\cal C}$ be an $(s,1,m,t,r)$ locality functional array code. Then, $h[s,r] \leq m\cdot(2^t - 1)$. \end{theorem} \begin{IEEEproof} Assume that ${\cal C}$ is an $(s,1,m,t,r)$ locality functional array code with $m$ buckets, where each bucket stores $t$ cells holding linear combinations of the $s$ information bits. From the $t$ cells in each bucket we can get at most $2^t-1$ different nonzero linear combinations. We can represent each linear combination as a binary vector of length $s$. Then, we construct an $(s\times m\cdot(2^t-1))$ parity check matrix $H$ whose columns are all the vectors obtained from the linear combinations of the cells in each of the $m$ buckets. Let ${\boldsymbol u}\in \Sigma^s$ be a column vector of length $s$, which can represent a request for the code ${\cal C}$. From the property of ${\cal C}$, there exists a recovering set $S\subseteq [m]$, where $|S|\leq r$, that satisfies the request. Assume that $S = \{b_1,b_2,\ldots,b_{r'}\}$, where $r'\leq r$. From each bucket $b_i \in S$ we read a linear combination ${\boldsymbol v}_i$ of its $t$ cells, which is itself a linear combination of the $s$ information bits. From the construction of $H$, the column vector ${\boldsymbol v}_i$ is a column of $H$. Then, ${\boldsymbol u} = \sum_{i=1}^{r'} {\boldsymbol v}_i$, and hence, the vector ${\boldsymbol u}$ is a sum of at most $r$ columns of $H$. Thus, the matrix $H$ is a parity check matrix of a binary $[m\cdot(2^t-1),m\cdot(2^t-1) - s,r]$ covering code, and hence, $h[s,r] \leq m\cdot(2^t-1)$. \end{IEEEproof} Now we will use Theorem~\ref{th:LocToCov} to get a lower bound on the value of $D(s,1,t,r)$. \begin{corollary}\label{cor:LocToCov} $D(s,1,t,r) \geq \left\lceil\dfrac{h[s,r]}{2^t-1}\right\rceil$.
\end{corollary} \begin{IEEEproof} Assume by contradiction that $D(s,1,t,r) = m < \left\lceil\dfrac{h[s,r]}{2^t-1}\right\rceil$. Since the number of buckets $m$ is an integer, it follows that $m < \dfrac{h[s,r]}{2^t-1}$. From Theorem~\ref{th:LocToCov} we have $h[s,r]\leq m\cdot (2^t-1) < \dfrac{h[s,r]}{2^t-1} \cdot (2^t-1) = h[s,r]$, which is a contradiction. \end{IEEEproof} We can get lower bounds on the value $h[s,r]$ from~\cite{CHLL97}. For example, $h[2s-1,2]\geq 2^s-1$ for any $s\geq 3$, so we can conclude that $D(2s-1,1,t,2) \geq \left\lceil\dfrac{2^s-1}{2^t-1}\right\rceil$. \section{Conclusion}\label{sec:conc} In this work we studied constructions and bounds for several families of codes. We defined and presented functional PIR array codes, functional batch array codes, and locality functional array codes. Lower bounds on the smallest number of buckets of these codes were given. Several upper bounds on the smallest number of buckets were shown based on general constructions, specific constructions, subspaces, and covering codes. In Table~\ref{sumTable}, we provide a summary of most of the results that appear in this work. The first column specifies the family of codes that the result refers to. We denote a PIR array code, batch array code, functional PIR array code, functional batch array code, and locality functional array code by $P,B,FP,FB$, and $L$, respectively. The next five columns specify the values of the parameters of the codes. The following two columns give lower and upper bounds on the codes, and the last column includes notes such as constraints on the parameters and where the results appear in the work. Lastly, we note that plenty of problems remain for future research, such as generalizing the specific constructions and finding new bounds for different parameters.
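As an illustration of Theorem~\ref{th:LocToCov} and Corollary~\ref{cor:LocToCov}, the following brute-force sketch checks the locality property of a small toy code (the $s=4$, $m=3$, $t=2$, $r=2$ code below is our own hypothetical example, not one of the constructions above) and the bound arithmetic for the $s=7$ case, where $h[7,2]=19$ is quoted from the covering-code tables:

```python
from itertools import combinations, product
from math import ceil

def xor(u, v):  # addition of binary vectors over GF(2)
    return tuple(a ^ b for a, b in zip(u, v))

# toy code of our own devising: s = 4 information bits, m = 3 buckets,
# t = 2 cells per bucket, each cell a linear combination of the bits
e1, e2, e3, e4 = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)
buckets = [[e1, e2], [e3, e4], [xor(e1, e3), xor(e2, e4)]]

def readable(cells):
    """The 2^t - 1 nonzero linear combinations readable from one bucket."""
    combos = set()
    for mask in product([0, 1], repeat=len(cells)):
        v = (0, 0, 0, 0)
        for m, c in zip(mask, cells):
            if m:
                v = xor(v, c)
        if any(v):
            combos.add(v)
    return combos

reads = [readable(b) for b in buckets]

# locality r = 2: every nonzero request is a sum of reads from <= 2 buckets
for u in product([0, 1], repeat=4):
    if not any(u):
        continue
    ok = any(u in reads[i] for i in range(3)) or any(
        xor(a, b) == u
        for i, j in combinations(range(3), 2)
        for a in reads[i] for b in reads[j])
    assert ok

# Corollary bound: D(s,1,t,r) >= ceil(h[s,r]/(2^t-1)); with h[7,2] = 19
# this gives D(7,1,2,2) >= ceil(19/3) = 7, matching the upper bound.
assert ceil(19 / (2 ** 2 - 1)) == 7
```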
\begin{table*} \begin{center} \caption{Summary of the results}\label{sumTable} \begin{tabular}{ |c|c|c|c|c|c|c|c|c| } \hline Code & $s$ & $k$ & $t$ & $\ell$ & $r$ & Lower bound & Upper bound & notes \\ &&&&&&&&\\ \hline \hline $FP/FB$ & $s$ & 1 & $t$ & $t$ & $-$ & $\left\lceil \frac{s}{t} \right\rceil$ & $\left\lceil \frac{s}{t} \right\rceil$ &\\&&&&&&&&\Tref{theorem:ArrayCodek1}\\ \hline $FP/FB$ & $s$ & 1 & $t$ & $1$ & $-$ & $\left\lceil \frac{s}{\log_2(t+1)}\right\rceil$ & $\left\lceil \frac{s}{\lfloor \log_2(t+1)\rfloor}\right\rceil$ & \\&&&&&&&&\Tref{theorem:ArrayCodek1}\\ \hline $FP/FB$ & $s$ & 1 & $t$ & $t/2$ & $-$ & $\frac{s}{t} + 1$ & $\frac{s}{t} + 1$ & $t$ is even, $\frac{s}{t}$ is integer, and $\frac{s}{t} \leq t-1$\\&&&&&&&&\Tref{theorem:ArrayCodek1}\\ \hline $FP/FB$ & $s_1+s_2$ & 1 & $t$ & $1$ & $-$ & $\left\lceil \frac{s_1+s_2}{\log_2(t+1)}\right\rceil$ & $\left\lceil\frac{s_1}{\left\lfloor \log_2(t+1) \right\rfloor}\right\rceil + 1$ & $2^{s_2} -1 \hspace{-0.5ex}\leq \left(\left\lceil\frac{s_1}{\left\lfloor \log_2(t+1) \right\rfloor}\right\rceil+1\right)\cdot $ \\&&&&&&&& $(t - (2^{\lfloor \log_2(t+1) \rfloor}-1))$ \Tref{theorem:FPIRell1}\\ \hline $FP/FB$ & $s$ & 1 & $t$ & $\alpha t$ & $-$ & $$ & $\left\lceil \frac{s}{t-g[t, \alpha t]} \right\rceil $ & $0 < \alpha < 1$ \\ &&&&&&&&\Tref{theorem:ArrayCodek1}\\ \hline $FB$ & $s$ & k & $t$ & $1$ & $-$ & & $FB(\frac{s}{t}, t\cdot k)$ & $\frac{s}{t}$ is integer \\ &&&&&&&& \Lref{lemma:FBGadget}\\ \hline $FB$ & $8$ & 2 & $2$ & $2$ & $-$ & $6$ & $7$ & \\ &&&&&&&&\Tref{theorem:FB2282}\\ \hline $FB$ & $s$ & 2 & $2$ & $2$ & $-$ & $\log_{7}(2^{s-1}\cdot (2^s - 1))$ & $7\cdot \left\lceil \frac{s}{8} \right\rceil$ & \\ &&&&&&&&\Cref{cor:FB22s2}\\ \hline $P$ & $r^2 + r$ & $r$ & $r^2 - r + 1$ & $r-1$ & $-$ & $r+1$ & $r+1$ & $r \geq 3$ \\ &&&&&&&&\Tref{theorem:PExample8}\\ \hline $B$ & $r^2 + r$ & $r$ & $r^2 - r + 1$ & $r-1$ & $-$ & $r+1$ & $r+1$ & $r \geq 3$ \\ &&&&&&&&\Tref{theorem:BExample8}\\ \hline $B$ & $6$ & 
$15$ & $2$ & $2$ & $-$ & $25$ & $25$ & $$ \\ &&&&&&&&\Tref{theorem:BExample9}\\ \hline $FP$ & $6$ & $11$ & $2$ & $2$ & $-$ & $21$ & $25$ & $$ \\ &&&&&&&&\Tref{theorem:FPExample9}\\ \hline $P$ & $4$ & $16$ & $2$ & $1$ & $-$ & $23$ & $25$ & $$ \\ &&&&&&&&\Tref{theorem:P21416}\\ \hline $FP$ & $4$ & $14$ & $2$ & $2$ & $-$ & $24$ & $25$ & $$ \\ &&&&&&&&\Tref{theorem:FP22414}\\ \hline $FP$ & $5$ & $48$ & $2$ & $2$ & $-$ & $88$ & $90$ & $$ \\ &&&&&&&&\Tref{theorem:FP22548}\\ \hline $L$ & $s$ & $1$ & $t$ & $t$ & $1$ & $\left\lceil\frac{2^s-1}{2^t-1}\right\rceil$ & $\left\lceil\frac{2^s-1}{2^t-1}\right\rceil$ & $$ \\ &&&&&&&&\Tref{SubspaceLocality}\\ \hline $L$ & $s$ & $1$ & $t$ & $t$ & $r$ & $$ & $r\cdot \left\lceil\frac{2^{s/r}-1}{2^t-1}\right\rceil$ & $r|s$ \\ &&&&&&&&\Tref{SubspaceLocality}\\ \hline $L$ & $s$ & $\qbin{s-1}{t-1}{2}$ & $t$ & $t$ & $1$ & $$ & $\qbin{s}{t}{2}$ & $$ \\ &&&&&&&&\Tref{SubspaceLocality}\\ \hline $L$ & $s$ & $\left\lfloor \frac{2^{s}-2^t}{r\cdot 2^{t}-r}\right\rfloor +1$ & $t$ & $t$ & $r$ & $$ & $\frac{2^{s}-1}{2^{t}-1}$ & $s=rt$ \\ &&&&&&&&\Tref{SubspaceLocality}\\ \hline $L$ & $3$ & $2$ & $2$ & $2$ & $1$ & $5$ & $6$ & $$ \\ &&&&&&&&Example~\ref{ex:foldLoc}\\ \hline $L$ & $s$ & $1$ & $1$ & $1$ & $r$ & $h[s,r]$ & $h[s,r]$ & $$ \\ &&&&&&&&Theorem~\ref{th:LocCovk=1}\\ \hline $L$ & $ws$ & $1$ & $t$ & $t$ & $r$ & $\left\lceil\dfrac{h[ws,r]}{2^t-1}\right\rceil$ & $\dfrac{(2^w - 1)h[s,r]_{2^w}}{2^t - 1}$ & $t|w$ \\ &&&&&&&&Corollary~\ref{cor:LocToCov}, Theorem~\ref{th:covToLoc}\\ \hline $L$ & $4w$ & $1$ & $t$ & $t$ & $2$ & $$ & $\dfrac{(2^w - 1)(2^{w+1}+1)}{2^t - 1}$ & $t|w$ \\ &&&&&&&&Corollary~\ref{th:th58}\\ \hline $L$ & $24$ & $1$ & $3$ & $3$ & $2$ & $828$ & $1097$ & $$ \\ &&&&&&&&Example~\ref{ex:covToLocEx}\\ \hline $L$ & $22$ & $1$ & $2$ & $2$ & $3$ & $99$ & $154$ & $$ \\ &&&&&&&&Example~\ref{ex:covToLocEx}\\ \hline $L$ & $7$ & $1$ & $2$ & $2$ & $2$ & $7$ & $7$ & $$ \\ &&&&&&&&Theorem~\ref{th:LOCs=7}\\ \hline \end{tabular} \end{center} 
\end{table*}
https://arxiv.org/abs/2001.10770
Array Codes for Functional PIR and Batch Codes
A functional PIR array code is a coding scheme which encodes some $s$ information bits into a $t\times m$ array such that every linear combination of the $s$ information bits has $k$ mutually disjoint recovering sets. Every recovering set consists of some of the array's columns while it is allowed to read at most $\ell$ encoded bits from every column in order to receive the requested linear combination of the information bits. Functional batch array codes impose a stronger property where every multiset request of $k$ linear combinations has $k$ mutually disjoint recovering sets. Locality functional array codes demand that the size of every recovering set is restrained to be at most $r$. Given the values of $s, k, t, \ell, r$, the goal of this paper is to study the optimal value of the number of columns $m$ such that these codes exist. Several lower bounds are presented as well as explicit constructions for several of these parameters.
https://arxiv.org/abs/0910.4987
Optimal bounds for the colored Tverberg problem
We prove a "Tverberg type" multiple intersection theorem. It strengthens the prime case of the original Tverberg theorem from 1966, as well as the topological Tverberg theorem of Barany et al. (1980), by adding color constraints. It also provides an improved bound for the (topological) colored Tverberg problem of Barany & Larman (1992) that is tight in the prime case and asymptotically optimal in the general case. The proof is based on relative equivariant obstruction theory.
\section{Introduction} Tverberg's theorem from 1966 \cite{Tverberg-1} \cite[Sect.~8.3]{mat-1} claims that any family of $(d+1)(r-1)+1$ points in $\R^d$ can be partitioned into $r$ sets whose convex hulls intersect; a look at the codimensions of intersections shows that the number $(d+1)(r-1)+1$ of points is minimal for this. In their 1990 study of halving lines and halving planes, B\'ar\'any, F\"uredi \& Lov\'asz \cite{BFL} observed ``we need a colored version of Tverberg's theorem'' and provided a first case, for three triangles in the plane. In response to this, B\'{a}r\'{a}ny \& Larman \cite{Bar-Lar} in 1992 formulated the following general problem and proved it for the planar case. \smallskip \textbf{The colored Tverberg problem:} \emph{Determine the smallest number $t=t(d,r)$ such that for every collection $\CC =C_{0}\sqcup \dots\sqcup C_{d}$ of points in $\mathbb{R}^d$ with $|C_i|\ge t$, there are $r$ disjoint subcollections $F_{1},\dots,F_{r}$ of~$\CC $ satisfying} \begin{equation*} |F_{i}\cap C_{j}|\le 1 \text{ \ for every \ } i\in \{1,\dots,r\},\ j\in\{0,\dots,d\}, \text{ and \ } \mathrm{conv\,}(F_1)\cap\dots\cap\mathrm{conv\,}(F_r)\neq\emptyset. \end{equation*} A family of such disjoint subcollections $F_{1},\dots,F_{r}$ that contain at most one point from each \emph{color class} $C_i$ is called a \emph{rainbow $r$-partition}. (We do not require $F_1\cup\dots\cup F_r=\CC$ for this.) Multiple points are allowed in these collections of points, but then the cardinalities have to account for these. A trivial lower bound is $t(d,r)\ge r$: Collections $\CC$ with only $(r-1)(d+1)$ points in general position do not admit an intersecting $r$-partition, again by codimension reasons. B\'{a}r\'{a}ny and Larman showed that the trivial lower bound is tight in the cases $t(1,r)=r $ and $t(2,r)=r$, presented a proof by Lov\'{a}sz for $t(d,2)=2$, and conjectured the following equality. 
\smallskip \textbf{The B\'{a}r\'{a}ny--Larman conjecture:} \emph{$t(d,r)=r$ for all $r\ge2$ and $d\ge1$.} \smallskip Still in 1992, \v{Z}ivaljevi\'{c} \& Vre\'{c}ica \cite{ZV-1} established for $r$ prime the upper bound $t(d,r)\le 2r-1$. The same bound holds for prime powers according to \v{Z}ivaljevi\'{c} \cite{guide2}. The bound for primes also yields bounds for arbitrary $r$: For example, one gets $t(d,r)\le 4r-3$, since there is a prime $p$ (and certainly a prime power!) between $r$ and~$2r$. \medskip As in the case of Tverberg's classical theorem, one can consider a topological version of the colored Tverberg problem. \smallskip \textbf{The topological Tverberg theorem:} (\cite{BaBB-81} \cite[Sect.~6.4]{MatousekBZ:BU}) \emph{Let $r\ge2$ be a prime power, $d\ge1$, and $N=(d+1)(r-1)$. Then for every continuous map $f$ of an $N$-simplex $\Delta_N$ to $\R^d$ there are $r$ disjoint faces $F_{1},\dots,F_{r}$ of $\Delta_N$ whose images under $f$ intersect in~$\R^d$.} \textbf{The topological colored Tverberg problem:} \emph{Determine the smallest number $t=tt(d,r)$ such that for every simplex $\Delta$ with $(d+1)$-colored vertex set $\CC =C_{0}\sqcup \dots\sqcup C_{d}$, $|C_{i}|\ge t$, and every continuous map $f:\Delta\rightarrow\mathbb{R}^d$ there are $r$ disjoint faces $F_{1},\dots,F_{r}$ of $\Delta$ satisfying} \begin{equation*} |F_{i}\cap C_{j}|\le 1 \text{ for every } i\in \{1,\dots,r\},\ j\in\{0,\dots,d\}, \text{ and \ } f(F_1)\cap\dots\cap f(F_r)\neq\emptyset. \end{equation*} The family of faces $F_{1},\dots,F_{r}$ is called a \emph{topological rainbow partition}. The argument from \cite{ZV-1} and \cite{guide2} gives the same upper bound $tt(d,r)\le 2r-1$ for $r$ a prime power, and consequently the upper bound $tt(d,r)\le 4r-3$ for arbitrary $r$. Notice that $t(d,r)\le tt(d,r)$.
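The reduction behind the $4r-3$ bound above rests on the existence of a prime power $q$ with $r\le q<2r$, so that $2q-1\le 4r-3$; a quick numeric verification (our own sketch, with a naive `is_prime_power` helper):

```python
# For each r, find a prime power q with r <= q < 2r (Bertrand's postulate
# guarantees a prime in that range) and check that 2q - 1 <= 4r - 3.
def is_prime_power(n):
    if n < 2:
        return False
    for p in range(2, n + 1):
        if n % p == 0:  # p is the smallest, hence prime, divisor of n
            while n % p == 0:
                n //= p
            return n == 1  # n was a power of p
    return False

for r in range(2, 500):
    q = next(q for q in range(r, 2 * r) if is_prime_power(q))
    assert 2 * q - 1 <= 4 * r - 3
```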
\smallskip \textbf{The topological B\'{a}r\'{a}ny--Larman conjecture:} \emph{$tt(d,r)=r$ for all $r\ge2$ and $d\ge1$.} \smallskip The Lov\'{a}sz proof for $t(d,2)=2$ presented in \cite{Bar-Lar} is topological and thus also valid for the topological B\'{a}r\'{a}ny--Larman conjecture. Therefore $tt(d,2)=2$. The general case of the topological B\'{a}r\'{a}ny--Larman conjecture would classically be approached via a study of the existence of an $\Sym_r$-equivariant map \begin{equation} \Delta_{r,|C_{0}| } * \dots * \Delta_{r,|C_{d}|} \ \ \longrightarrow_{\Sym_r}\ \ S(W_{r}^{\oplus (d+1)}) \ \simeq\ S^{(r-1)(d+1)-1}, \label{eq:Map-colored-Tverberg} \end{equation} where $W_{r}$ is the standard $(r-1)$-dimensional real representation of $\Sym_{r}$ obtained by restricting the coordinate permutation action on $\mathbb{R}^r$ to $\{(\xi_{1},\dots,\xi_{r})\in\mathbb{R}^{r} : \xi_{1}+\dots+\xi_{r}=0\}$ and $\Delta_{r,n}$ denotes the $r\times n$ chessboard complex $([r])^{*n}_{\Delta(2)}$; cf.~\cite[Remark after Thm.~6.8.2]{MatousekBZ:BU}. However, we will establish in Proposition~\ref{prop:fails} that this approach fails when applied to the colored Tverberg problem directly, due to the fact that the square chessboard complexes $\Delta_{r,r}$ admit $\Sym_r$-equivariant collapses that reduce the dimension. In the following, we circumvent this problem by a different, particular choice of parameters, which produces chessboard complexes $\Delta_{r,r-1}$ that are closed pseudomanifolds and thus do not admit collapses. \section{Statement of the main results} Our main result is the following strengthening of (the prime case of) the topological Tverberg theorem. \begin{theorem}\label{thm:main2} Let $r\ge2$ be prime, $d\ge1$, and $N:=(r-1)(d+1)$. Let $\Delta_N$ be an $N$-dimensional simplex with a partition of the vertex set into parts (``color classes'') \[ \CC \ \ =\ \ C_{0}\sqcup \dots\sqcup C_m, \] with $|C_i|\le r-1$ for all $i$. 
Then for every continuous map $f:\Delta_N\rightarrow\mathbb{R}^d$, there are $r$ disjoint ``rainbow'' faces $F_{1},\dots,F_{r}$ of $\Delta_N$ whose images under $f$ intersect, that is, \begin{equation*} |F_{i}\cap C_{j}|\le 1 \text{ for every } i\in \{1,\dots,r\},\ j\in\{0,\dots,m\}, \text{ and \ } f(F_1)\cap\dots\cap f(F_r) \neq\emptyset. \end{equation*} \end{theorem} The requirement $|C_i|\le r-1$ forces that there are at least $d + 2$ non-empty color classes. Theorem \ref{thm:main2} is tight in the sense that there would exist counter-examples $f$ if $|C_0|=r$ and $|C_1|=\ldots=|C_m|=r-1$. Our first step will be to reduce Theorem~\ref{thm:main2} to the following special case. \begin{theorem}\label{thm:main} Let $r\ge2$ be prime, $d\ge1$, and $N:=(r-1)(d+1)$. Let $\Delta_N$ be an $N$-dimensional simplex with a partition of the vertex set into $d+2$ parts \[ \CC \ \ =\ \ C_{0}\sqcup \dots\sqcup C_{d}\sqcup C_{d+1}, \] with $|C_i|=r-1$ for $i\le d$ and $|C_{d+1}|=1$. Then for every continuous map $f:\Delta_N\rightarrow\mathbb{R}^d$, there are $r$ disjoint faces $F_{1},\dots,F_{r}$ of $\Delta_N$ satisfying \begin{equation*} |F_{i}\cap C_{j}|\le 1 \text{ for every } i\in \{1,\dots,r\},\ j\in\{0,\dots,d+1\}, \text{ and \ } f(F_1)\cap\dots\cap f(F_r) \neq\emptyset. \end{equation*} \end{theorem} \begin{proof}[Reduction of Theorem~\ref{thm:main2} to Theorem~\ref{thm:main}.] Suppose we are given such a map $f$ and a coloring $C_1 \sqcup\dots\sqcup C_m$ of the vertex set of $\Delta_N$. Let $N':=(r-1)m$ and $C_{m+1}:=\emptyset$. We enlarge the color classes $C_i$ by $N'-N = (r-1)(m-(d+1))$ new vertices and obtain color classes $C'_1 , \dots ,C'_{m+1}$, such that $C_i\subseteq C'_i$ for all $i$, and $|C'_1 | = \dots = |C'_m | = r - 1$ and $|C'_{m+1}| = 1$. We construct out of $f$ a new map $f': \Delta_{N'} \rightarrow \R^{d'}$, where $d' := m - 1$, as follows: We regard $\R^d$ as the subspace of $\R^{d'}$ where the last $d'-d$ coordinates are zero.
So we let $f'$ be the same as $f$ on the $N$-dimensional front face of $\Delta_{N'}$. We assemble the further $N' - N$ vertices into $d' - d$ groups $V_1,\dots, V_{d'-d}$ of $r-1$ vertices each. The vertices in $V_i$ shall be mapped to $e_{d+i}$, the $(d+i)$st standard basis vector of $\R^{d'}$. We extend this map linearly to all of $\Delta_{N'}$ and we obtain $f'$. We apply Theorem~\ref{thm:main} to $f'$ and the coloring $C'_1 ,\dots,C'_{m+1}$ and obtain disjoint faces $F'_1 ,\dots, F'_r$ of $\Delta_{N'}$. Let $F_i := F'_i \cap\Delta_N$ be the intersection of $F'_i$ with the $N$-dimensional front face of $\Delta_{N'}$. By construction of $f'$, the intersection $f'(F'_1 )\cap\dots\cap f'(F'_r)$ lies in $\R^d$. Therefore, already $F_1,\dots, F_r$ is a colorful Tverberg partition for $f'$, and hence it is for $f$: We have $f(F_1)\cap\dots\cap f(F_r)\neq\emptyset$. \end{proof} Such a reduction previously appears in Sarkaria's proof for the prime power Tverberg theorem \cite[(2.7.3)]{Sarkaria-primepower}; see also de Longueville's exposition \cite[Prop.~2.5]{deL01}. \begin{remark} Soon after completion of the first version of the preprint for this paper we noticed (see \cite[Sect.~2]{BMZ2}) that Theorem~\ref{thm:main} also has a simpler proof, using degrees rather than equivariant obstruction theory; a very similar proof was provided by Vre\'cica and \v{Z}ivaljevi\'c~\cite{VZdegreeproof}. We provide it in \cite{BMZ2} as a special case of a Vre\'cica--Tverberg type transversal theorem, accompanied by much more complete cohomological index calculations, which also yield a second new proof that establishes Theorem \ref{thm:main2} directly, without a reduction to Theorem~\ref{thm:main}. The simpler proof, however, does not imply that the equivariant map proposed by the natural configuration space/test map scheme of Proposition~\ref{prop:main} \emph{does} exist if $r$ divides $(r-1)!^d$. This we prove at the end of the current paper.
\end{remark} Either of our Theorems \ref{thm:main2} and \ref{thm:main} immediately implies the topological Tverberg theorem for the case when $r$ is a prime, as it holds for an \emph{arbitrary} partition of the vertex set into color classes of the specified sizes. Thus it is a ``constrained'' Tverberg theorem as discussed recently by Hell \cite{Hell-2}. It remains to be explored how the constraints can be used to derive lower bounds for the number of Tverberg partitions; compare Vu\'ci\'c \& \v{Z}ivaljevi\'c \cite{Vu-Z} \cite[Sect.~6.3]{MatousekBZ:BU}. More importantly, however, Theorem~\ref{thm:main} implies the topological B\'{a}r\'{a}ny--Larman conjecture for the case when $r+1$ is a prime, as follows. \begin{corollary} \label{Th-Result3} If $r+1$ is prime, then $t(d,r)=tt(d,r)=r$. \end{corollary} \begin{proof} We prove that if $r\ge3$ is prime, then $tt(d,r-1)\le r-1$. For this, let $\Delta_{N-1}$ be a simplex with vertex set $\CC =C_{0}\sqcup \dots\sqcup C_{d}$, $|C_i|=r-1$, and let $f:\Delta_{N-1}\rightarrow\R^d$ be continuous. Extend this to a map $\Delta_{N}\rightarrow\R^d$, where $\Delta_N$ has an extra vertex $v_N$, and set $C_{d+1}:=\{v_N\}$. Then Theorem~\ref{thm:main2} can be applied, and yields a topological colored Tverberg partition into $r$ parts. Ignore the part that contains~$v_N$. \end{proof} Using estimates on prime numbers one can derive from this tight bounds for the colored Tverberg problem also in the general case. The classical Bertrand's postulate (``For every $r$ there is a prime $p$ with $r+1\le p<2r$'') can be used here, but there are also much stronger estimates available, such as the existence of a prime $p$ between $r$ and $r+r^{6/11+\varepsilon}$ for arbitrary $\varepsilon>0$ if $r$ is large enough according to Lou \& Yao \cite{LouYao}. \begin{corollary}\label{Cor-1} $r\le t(d,r)\le tt(d,r)\le 2r-2$ for all $d\ge1$ and $r\ge2$. $r\le t(d,r)\le tt(d,r)\le (1+o(1))\,r$ for $d\ge1$ and $r\rightarrow\infty$. 
\end{corollary} \begin{proof} The first, explicit estimate is obtained from Bertrand's postulate: For any given $r$ there is a prime $p$ with $r+1\le p<2r$. We use $|C_i|\ge 2r-2\ge p-1$ to derive the existence of a colored Tverberg $(p-1)$-partition, which in particular yields an $r$-partition since $p-1\ge r$. The second, asymptotic estimate uses the Lou \& Yao bound instead. \end{proof} \begin{remark} The colored Tverberg problem as originally posed by B\'ar\'any \& Larman \cite{Bar-Lar} in 1992 was different from the version we have given above (following B\'ar\'any, F\"uredi \& Lov\'asz \cite{BFL} and Vre\'cica \& \v{Z}ivaljevi\'c \cite{ZV-1}): B\'ar\'any and Larman had asked for an upper bound $N(d,r)$ on the cardinality of the union $|\CC|$ that together with $|C_i|\ge r$ would force the existence of a rainbow $r$-partition. This original formulation has two major disadvantages: One is that the Vre\'cica--\v{Z}ivaljevi\'c result does not apply to it. A second one is that it does not lend itself to estimates for the general case in terms of the prime case. However, our Corollary \ref{Th-Result3} also solves the original version for the case when $r+1$ is a prime. \end{remark} The colored Tverberg problem originally arose as a tool to obtain complexity bounds in computational geometry. As a consequence, our new bounds can be applied to improve these bounds, as follows. Note that in some of these results $t(d,d+1)^d$ appears in the exponent, so even slightly improved estimates on $t(d,d+1)$ have considerable effect. For surveys see \cite{Bar-1}, \cite[Sect.~9.2]{mat-1}, and \cite[Sect.~11.4.2]{Ziv:handbook}. Let $S\subseteq \mathbb{R}^{d}$ be a set in general position of size $n$, that is, such that no $d+1$ points of $S$ are on a hyperplane. Let $h_{d}(n)$ denote the number of hyperplanes that bisect the set $S$ and are spanned by the elements of the set $S$.
According to B\'{a}r\'{a}ny \cite[p.~239]{Bar-1}, \begin{equation*} h_{d}(n)=O(n^{d-\varepsilon_{d}}) \qquad\text{with}\qquad \varepsilon_{d}= t(d,d+1)^{-(d+1)}. \end{equation*} Thus we obtain the following bound and equality. \begin{corollary} If $d+2$ is a prime then \begin{equation*} h_{d}(n)=O(n^{d-\varepsilon_{d}}) \qquad\text{with}\qquad \varepsilon_{d}= ( d+1) ^{-(d+1)}. \end{equation*} For general $d$, we obtain e.g. $\varepsilon_{d}\geq (d+1) ^{-(d+1)-O(\log d)}$. \end{corollary} Let $\CC \subseteq \mathbb{R}^{d}$ be a finite set. A $\CC$\emph{-simplex} is the convex hull of some collection of $d+1$ points of $\CC $. The second selection lemma \cite[Thm.~9.2.1]{mat-1} claims that for an $n$-point set $\CC \subseteq \mathbb{R}^{d}$ and the family $\mathcal{F}$ of $\alpha \binom{n}{d+1}$ $\CC $-simplices with $\alpha\in(0,1]$ there exists a point contained in at least $c\cdot\alpha ^{s_{d}}\binom{n}{d+1}$ $\CC $-simplices of $\mathcal{F}$. Here $c=c(d)>0$ and $s_{d}$ are constants. For dimensions $d>2$, the presently known proof gives that $s_{d}\approx t(d,d+1) ^{d+1}$. Again, Corollary~\ref{Cor-1} yields the following, much better bounds for the constant~$s_{d}$. \begin{corollary} If $d+2>4$ is a prime then the second selection lemma holds for $s_{d}=(d+1)^{d+1}$, and in general e.g.\ for $s_{d}=(2d+2)^{d+1}$. \end{corollary} Let $X\subset \mathbb{R}^{d}$ be an $n$ element set. A $k$\emph{-facet} of the set $X$ is an oriented $(d-1)$-simplex $\mathrm{conv}\{x_{1},\dots,x_{d}\}$ spanned by elements of $X$ such that there are exactly $k$ points of $X$ on its strictly positive side. When $n-d$ is even $\frac{n-d}{2}$-facets of the set $X$ are called \emph{halving facets}. From \cite[Thm.~11.3.3]{mat-1} we have a new, better estimate for the number of halving facets. 
\begin{corollary} For $d>2$ and $n-d$ even, the number of halving facets of an $n$-set $X\subset \mathbb{R}^{d}$ is $O(n^{d-\frac{1}{(2d)^{d}}})$. \end{corollary} \section{The Configuration Space/Test Map scheme} According to the ``deleted joins'' version of the general ``Configuration Space/Test Map'' (CS/TM) scheme for multiple intersection problems, as pioneered by Sarkaria, Vre\'cica \& \v{Z}ivaljevi\'c, and others, formalized by \v{Z}ivaljevi\'c, and exposited beautifully by Matou\v{s}ek \cite[Chap.~6]{MatousekBZ:BU}, we proceed as follows. Assume that we want to prove the existence of a rainbow $r$-partition for arbitrary colored point sets $\CC =C_0\sqcup C_1\sqcup\dots\sqcup C_k$ in $\R^d$ with $|C_i|= t_i$. So we have to show that there is no (affine) map \[ f: C_0 * C_1 * \dots * C_k \ \longrightarrow\ \R^d, \] for which no $r$ images of disjoint simplices from the simplicial complex (join of discrete sets) $C_0 * C_1 * \dots * C_k $ intersect in~$\R^d$. (Compare \v{Z}ivaljevi\'c \cite[Sect.~11.4.2]{Ziv:handbook}.) The ``deleted joins'' configuration space/test map scheme now suggests to take an $r$-fold deleted join of this map $f$, where one has to take an $r$-fold $2$-wise deleted join in the domain and an $r$-fold $r$-wise deleted join in the range; cf.\ \cite[Chap.~6.3]{MatousekBZ:BU}. Thus we arrive at an equivariant map \begin{equation} \label{eq:CSTM-scheme-general} f_{\Delta(2)}^{*r}: \quad \Delta_{r,|C_0|} * \Delta_{r,|C_1|} * \dots * \Delta_{r,|C_k|} \ \ \longrightarrow_{\Sym_r}\ \ (\R^d)^{*r}_{\Delta}\ \subset\ \R^{r\times(d+1)}{\setminus}T \ \simeq \ S(W_r^{\oplus(d+1)}). \end{equation} Here \begin{compactitem}[$\bullet$] \item the simplicial complex $X:= \Delta_{r,|C_0|} * \Delta_{r,|C_1|} * \dots * \Delta_{r,|C_k|}$ on the left hand side is a join of $k+1$ chessboard complexes, where $\Delta_{r,|C_i|}=(C_i)_{\Delta(2)}^{*r}$ is the chessboard complex on $r$ rows and $|C_i|$ columns, on which $\Sym_r$ acts by permuting the $r$ rows.
This is a simplicial complex on $r(|C_0| + |C_1| + \dots + |C_k|)$ vertices, of dimension $|C_0| + |C_1| + \dots + |C_k|-1$ if $|C_i|\le r$ for all $i$, and of dimension $\min\{|C_0|,r\}+\min\{|C_1|,r\}+\dots+\min\{|C_k|,r\}-1$ in general. Points in $X$ can be represented in the form $\lambda_1 x_1+ \dots + \lambda_r x_r$, where $x_i$ is a point in (a simplex of) the $i$-th copy of the complex $C_0 * C_1 * \dots * C_k$, and the $\lambda_i\ge0$, $\sum_i\lambda_i=1$, denote a convex combination. \item $(\R^d)^{*r}_{\Delta}$ is a deleted join, which is most easily represented as a subset of the space of all real $r\times(d+1)$-matrices for which not all rows are equal, and where $\Sym_r$ acts by permuting the rows. To factor out the diagonal $T$, which is the $(d+1)$-dimensional subspace of all matrices for which all rows are equal, we subtract the average of all rows from each row, which maps this equivariantly to $W_r^{\oplus(d+1)}{\setminus}\{0\}$, the space of all real $r\times(d+1)$-matrices with column sums equal to zero but for which not all rows are zero, and where $\Sym_r$ still acts by permuting the rows. This in turn is homotopy equivalent to the sphere $S(W_r^{\oplus(d+1)})=(S^{r-2})^{*(d+1)}=S^{(r-1)(d+1)-1}=S^{N-1}$, where $\pi\in\Sym_r$ reverses the orientation exactly if $(\mathrm{sgn\,}\pi)^{d+1}$ is negative. \item The action of $\Sym_r$ is non-free exactly on the subcomplex $A:=(\Delta_{r,|C_0|}*\ldots*\Delta_{r,|C_k|})^{\emptyset,\emptyset}\subset X$ given by all the points $\lambda_1 x_1+ \dots + \lambda_r x_r$ such that $\lambda_i=\lambda_j=0$ for two distinct row indices $i<j$. These lie in simplices that have no vertices in the rows $i$ and $j$, so the transposition $\pi_{ij}$ fixes these simplices pointwise.
\item The map $f_{\Delta(2)}^{*r}:X\rightarrow\R^{r\times(d+1)}$ suggested by the ``deleted joins'' scheme takes the point $\lambda_1 x_1+ \dots + \lambda_r x_r$ and maps it to the $r\times(d+1)$-matrix in $\R^{r\times(d+1)}$ whose $k$-th row is $(\lambda_k,\lambda_k f(x_k))$. For an arbitrary map $f$, the image of $A$ under $f^{*r}_{\Delta(2)}$ does not intersect the diagonal $T$: If $\lambda_i=\lambda_j=0$, then not all rows $(\lambda_k,\lambda_kf(x_k))$ can be equal, since $\sum_k\lambda_k=1$. However, for the following we replace $f_{\Delta(2)}^{*r}$ by the map $F_0:X\rightarrow\R^{r\times(d+1)}$ that maps $\lambda_1 x_1+ \dots + \lambda_r x_r$, to the $r\times(d+1)$-matrix whose $k$-th row is $(\lambda_k,(\Pi_{\ell=1}^r\lambda_\ell)f(x_k))$. The two maps $ f_{\Delta(2)}^{*r}$ and $F_0$ are homotopic as maps $A\rightarrow\R^{r\times(d+1)}\setminus\{T\}$ by a linear homotopy, so the resulting extension problems are equivalent by \cite[Prop.~3.15(ii)]{Dieck87}. The advantage of the map $F_0$ is that its restriction to $A$ is independent of $f$. \end{compactitem} Thus we have established the following. \begin{proposition} [CS/TM scheme for the generalized topological colored Tverberg problem] \label{prop:CSTM-scheme-general}% If for some parameters $(d,r,k;t_0,\dots,t_k)$ the $\Sym_r$-equivariant extension \mbox{\rm(\ref{eq:CSTM-scheme-general})} of the map $F:A\rightarrow\ \R^{r\times(d+1)}{\setminus}T$ does not exist, then the colored Tverberg $r$-partition exists for all continuous $f: C_0 * C_1 * \dots * C_k \rightarrow\R^d$ with $|C_i|\ge t_i$. \end{proposition} Vre\'cica \& \v{Z}ivaljevi\'c achieve this for $(d,r,d;2r-1,\dots,2r-1)$ and prime $r$ by applying a Borsuk--Ulam type theorem to the action of the subgroup $\mathbb{Z}_r\subset\Sym_r$, which acts freely on the join of chessboard complexes if $r$ is a prime. 
However, they lose a factor of $2$ from the fact that the chessboard complexes $\Delta_{r,t}$ of dimension $r-1$ are homologically $(r-2)$-connected only if $t\ge 2r-1$; compare \cite{BLZV}, \cite{Zie:chess}, and \cite{Sa-Wa}. Our Theorem~\ref{thm:main} claims this for $(d,r,d+1;r-1,\ldots,r-1,1)$. To prove it, we will use relative equivariant obstruction theory, as presented by tom Dieck in \cite[Sect.~II.3]{Dieck87}. \section{Proof of Theorem \protect\ref{thm:main}} First we establish that the scheme of Proposition~\ref{prop:CSTM-scheme-general} fails when applied to the colored Tverberg problem directly. \begin{proposition}\label{prop:fails} For all $r\ge2$ and $d\ge1$, with $N=(r-1)(d+1)$, an $\Sym_r$-equivariant map \[ F: (\Delta_{r,r})^{*(d+1)} \ \ \longrightarrow_{\Sym_r}\ \ W_r^{\oplus(d+1)}\setminus\{0\} \ \simeq\ S^{N-1} \] exists. \end{proposition} \begin{proof} For any facet of the $(r-1)$-dimensional chessboard complex $\Delta_{r,r}$ there is a collapse which removes the facet together with its subfacet obtained by deleting the vertex in the $r$-th column. Performing these collapses simultaneously, we see that $\Delta_{r,r}$ collapses $\Sym_r$-equivariantly to an $(r-2)$-dimensional subcomplex of $\Delta_{r,r}$, and thus $(\Delta_{r,r})^{*(d+1)}$ equivariantly retracts to a complex whose dimension is only $(d+1)(r-1)-1=N-1$. Thus there is no obstruction to the construction of such an equivariant map: Any generic map $f:\CC\rightarrow\R^d$ induces such an equivariant map on the $(N-2)$-skeleton, and since the action of $\Sym_r$ is free on the open $(N-1)$-simplices, there is no obstruction for the equivariant extension of the map to $W_r^{\oplus(d+1)}{\setminus}\{0\}\simeq S^{N-1}$. \end{proof} We now specialize the general scheme of Proposition~\ref{prop:CSTM-scheme-general} to the situation of Theorem~\ref{thm:main}. Thus we have to show the following.
\begin{proposition}\label{prop:main} Let $r\geq 2$ and $d\geq 1$ be integers, and $N=(r-1)(d+1)$. An $\mathfrak{S}_{r}$-equivariant map \begin{equation*} F:(\Delta _{r,r-1})^{\ast d}\ast \Delta _{r,r-1}\ast \lbrack r]\ \ \longrightarrow _{\mathfrak{S}_{r}}\ \ W_{r}^{\oplus (d+1)}\setminus \{0\} \end{equation*} that extends the equivariant map $F_0|_A$ which on the non-free subcomplex of the domain, \begin{equation*} A\ =\ ((\Delta _{r,r-1})^{\ast d}\ast \Delta _{r,r-1}\ast \lbrack r])^{\emptyset ,\emptyset }, \end{equation*} maps $\lambda _{1}x_{1}+\dots +\lambda _{r}x_{r}$ with $\lambda _{i}=\lambda _{j}=0$, $i<j$, to the $r\times (d+1)$-matrix with $k$-th row $(\lambda _{k},0)$, exists if and only if \begin{equation*} r\mid(r-1)!^{d}. \end{equation*} \end{proposition} The vertex set of $X=(\Delta_{r,r-1})^{*d} * \Delta_{r,r-1} * [r]$ may be represented by a rectangular array of size $r\times((r-1)(d+1)+1)$, which carries the $d+1$ chessboard complexes $\Delta_{r,r-1}$ lined up from left to right, and in the last column has the chessboard complex $\Delta_{r,1}=[r]$, which is just a discrete set.
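The basic combinatorial facts about chessboard complexes used in this proof can be checked by brute force for small parameters; a sketch of our own (faces of $\Delta_{r,n}$ are the partial matchings between $r$ rows and $n$ columns, so $\dim\Delta_{r,n}=\min\{r,n\}-1$; $\Delta_{3,2}$ is a circle and $\Delta_{4,3}$ is a torus, hence a closed pseudomanifold):

```python
from itertools import combinations, permutations
from collections import Counter

def facets(r, n):
    """Maximal faces of the chessboard complex Delta_{r,n}: maximal partial
    matchings, i.e. min(r,n) cells with no repeated row or column."""
    k = min(r, n)
    return [frozenset(zip(rows, cols))
            for cols in combinations(range(n), k)
            for rows in permutations(range(r), k)]

# Delta_{3,2} is a circle: 6 vertices, 6 edges, every vertex on 2 edges.
circle = facets(3, 2)
assert len(circle) == 6
deg = Counter(v for F in circle for v in F)
assert all(d == 2 for d in deg.values())

# Delta_{4,3} (a torus) is a closed pseudomanifold of dimension 2:
# every 1-face lies in exactly two of the 24 triangles.
torus = facets(4, 3)
assert len(torus) == 24 and all(len(F) == 3 for F in torus)
ridge = Counter(frozenset(e) for F in torus for e in combinations(F, 2))
assert set(ridge.values()) == {2}
```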
(See Figure~\ref{fig:array}.) \begin{figure}[tbh] \centering \unitlength=1.2mm \mbox{\begin{picture}(17,0) \put(10,11){$F:$} \put(24,22){$C_0$} \put(50,22){$C_{d-1}$} \put(68,22){$C_{d}$} \put(77,22){$C_{d+1}$} \put(110,-3.5){$\R^{r\times(d+1)}$} \put(87,10){\Large$\longrightarrow_{\Sym_r}$} \put(21,-2.5){$ \Delta_{r,r-1} \hspace{1mm} * \hspace{2mm}\cdots \hspace{2mm} * \hspace{2mm} \Delta_{r,r-1} \hspace{1mm} * \hspace{1mm} \Delta_{r,r-1} * [r]$} \end{picture} \includegraphics[trim=42 20 233 20, clip,scale=.72]{Figure-1-1.eps} \hspace{18mm} \includegraphics[trim=404 20 55 20, clip,scale=.72]{Figure-1-1.eps} \hspace{12mm}\mbox{}} \smallskip \caption{The vertex set, and one facet in $\Phi$ of the combinatorial configuration space for $r=5$.} \label{fig:array} \end{figure} The join of chessboard complexes $(\Delta_{r,r-1})^{*d} * \Delta_{r,r-1} * [r]$ has dimension $(r-1)(d+1)=N$, while the target sphere has dimension $N-1$. On both of them, $\Sym_r$ acts by permuting the rows. While the chessboard complexes $\Delta_{r,r}$ collapse equivariantly to lower-dimensional complexes, the chessboard complexes $\Delta_{r,r-1}$ are closed oriented pseudomanifolds of dimension $r-2$ and thus don't collapse; for example, $\Delta_{3,2}$ is a circle and $\Delta_{4,3}$ is a torus. We will read the maximal simplices of such a complex from left to right, which yields the orientation cycle in a special form with few signs that will be very convenient. \begin{lemma} \emph{(cf.\ \cite{BLZV}, \cite{Sa-Wa}, \cite[p.~145]{Jonsson})} \label{Lemma:Chess-Manifold} For $r>2$, the chessboard complex $\Delta_{r,r-1}$ is a connected, orientable pseudomanifold of dimension $r-2$. Therefore \begin{equation*} H_{r-2}(\Delta_{r,r-1};\mathbb{Z})=\mathbb{Z} \end{equation*} and an orientation cycle is \begin{equation} z_{r,r-1}\ =\ \sum_{\pi\in\Sym_{r}} (\mathrm{sgn\,}\pi) \langle(\pi(1),1),\dots,(\pi(r-1),r-1)\rangle.
\label{eq:generating_cocycle} \end{equation}
$\Sym_r$ acts on $\Delta_{r,r-1}$ by permuting the rows; this affects the orientation according to $\pi\cdot z_{r,r-1}=(\mathrm{sgn\,}\pi) z_{r,r-1}$. \end{lemma}
Here we use the usual notation $\langle w_0,\dots,\widehat{w_i},\dots,w_k\rangle$ for an oriented simplex with ordered vertex set $(w_0,\dots,w_k)$ from which the vertex $w_i$ is omitted.
\begin{proof}[Proof of Proposition~\ref{prop:main}] For $r=2$, since $2\nmid 1$, this says that there is no equivariant map $S^N\rightarrow S^{N-1}$, where both spheres are equipped with the antipodal action: this is the Borsuk--Ulam theorem (and the Lov\'asz proof). Thus we may now assume that $r\ge3$.
Let $X:=(\Delta_{r,r-1})^{*(d+1)}*[r]$ be our combinatorial configuration space, $A\subset X$ the non-free subcomplex, and $F_0:A\rightarrow_{\Sym_r} S(W_{r}^{\oplus (d+1)})$ the prescribed map that we are to extend $\Sym_r$-equivariantly to~$X$. Since $\dim (X)=N$ and $\dim S(W_{r}^{\oplus (d+1)})=N-1$ with $\mathrm{conn\,}S(W_{r}^{\oplus (d+1)})=N-2$, by \cite[Sect.~II.3]{Dieck87} the existence of an $\Sym_r$-equivariant extension $(\Delta_{r,r-1})^{*(d+1)}*[r] \rightarrow S(W_{r}^{\oplus (d+1)})$ is equivalent to the vanishing of the primary obstruction \[ \mathfrak{o}\ \in \ H_{\Sym_r}^{N}\big(X,A ;\Pi_{N-1}(S(W_{r}^{\oplus (d+1)}))\big). \]
The Hurewicz isomorphism gives an isomorphism of the coefficient $\Sym_r$-module with a homology group, \begin{equation*} \Pi_{N-1}(S(W_{r}^{\oplus (d+1)}))\ \cong\ H_{N-1}(S(W_{r}^{\oplus (d+1)}); \mathbb{Z})\ \ =:\ \ \mathcal{Z}. \end{equation*}
As an abelian group this module $\mathcal{Z}=\langle \zeta \rangle$ is isomorphic to $\mathbb{Z}$. The action of the permutation $\pi \in \Sym_r$ on the module $\mathcal{Z}$ is given by \begin{equation*} \pi \cdot\zeta\ =\ (\mathrm{sgn\,}\pi )^{d+1}\zeta.
\end{equation*}
\smallskip
\textbf{Computing the obstruction cocycle.} We will now compute an obstruction cocycle $\mathfrak{c}_f$ in the cochain group $C_{\Sym_r}^{N}\big( X,A ;\mathcal{Z}\big)$, and then show that for $r\nmid(r-1)!^{d}$ the cocycle is not a coboundary, that is, it does not vanish when passing to $\mathfrak{o}=[\mathfrak{c}_f]$ in the cohomology group $H_{\Sym_r}^{N}\big( X,A ;\mathcal{Z}\big)$. For this, we use a specific general position map $f:X\rightarrow\R^d$, which induces a map $F:X\rightarrow\R^{r\times(d+1)}$; the value of the obstruction cocycle $\mathfrak{c}_f$ on an oriented maximal simplex $\sigma$ of $X$ is then given by the signed intersection number of $F(\sigma)$ with the test space, the diagonal $T\subset\R^{r\times(d+1)}$ consisting of all matrices whose rows are equal. (Compare \cite{Dieck87} and \cite{Sale}.)
Let $e_{1},\dots,e_{d}$ be the standard basis vectors of $\R^d$, set $e_{0}:=0\in\R^d$, and denote by $v_{0},\dots,v_{N}$ the vertices of the $N$-simplex $\Delta_N$ in the given order, that is, such that $C_i=\{v_{i(r-1)},\dots,v_{(i+1)(r-1)-1}\}$ for $i\le d$ and $C_{d+1}=\{v_{(d+1)(r-1)}\}$. Let $f:\| \Delta_{N}\| \rightarrow\R^d$ be the linear map defined on the vertices by \begin{equation*} \left\{ \begin{array}{llll} v_{i} & \overset{f}{\longmapsto } & e_{\lfloor {i}/({r-1})\rfloor } & \text{for }0\leq i\le N-1, \\ v_{N} & \overset{f}{\longmapsto } & \tfrac{1}{d+1}\sum_{i=0}^{d}e_{i}, \end{array} \right. \end{equation*} that is, such that the vertices in $C_i$ are mapped to the vertex $e_i$ of the standard $d$-simplex for $i\le d$, while $v_N\in C_{d+1}$ is mapped to the center of this simplex.
\begin{figure}[tbh] \centering \includegraphics[scale=0.9]{Figure-5.eps} \caption{The map $f:\| \Delta ^{16}\| \rightarrow \mathbb{R}^{3}$ in the case $d=3$ and $r=5$} \end{figure}
This induces a linear map $f: C_0 * \dots * C_{d+1}\rightarrow \R^d$ and thus an equivariant map $F:X\rightarrow\R^{r\times(d+1)}$, taking $\lambda_1x_1+\dots+\lambda_rx_r$ to the $r\times(d+1)$-matrix whose $k$-th row is $(\lambda_k,(\prod_{\ell=1}^r\lambda_\ell)\,f(x_k))$, which extends the prescribed map $F_0:A\rightarrow\R^{r\times(d+1)}{\setminus}T$. The intersection points of the image of $F$ with the diagonal $T$ correspond to the colored Tverberg $r$-partitions of the configuration $\CC=C_0\sqcup\dots\sqcup C_{d+1}$ in~$\R^d$. Since $\lambda_1=\dots=\lambda_r=\tfrac1r$ at all these intersection points, we find that $F$ is in general position with respect to $T$.
The only Tverberg $r$-partitions of the point configuration $\CC$ (even ignoring colors) are given by $r-1$ $d$-simplices with vertices at $e_0,e_1,\dots,e_d$, together with one singleton point ($0$-simplex) at the center. Clearly there are $(r-1)!^d$ such partitions.
We take representatives for the $\Sym_r$-orbits of maximal simplices of~$X$ such that from the last $\Delta_{r,r-1}$ factor, the vertices $(1,1),\dots,(r-1,r-1)$ are taken. On the simplices of $X$ we use the orientation that is induced by ordering all vertices left-to-right on the array of Figure~\ref{fig:array}. This orientation is $\Sym_r$-invariant, as permutation of the rows does not affect the left-to-right ordering.
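The count of $(r-1)!^d$ Tverberg partitions can be confirmed by brute force when $d=1$, where convex hulls are intervals on the line. The following script (an illustrative check, not part of the paper) enumerates all partitions of the configuration consisting of $r-1$ copies of $0$, $r-1$ copies of $1$, and the center $\tfrac12$ into $r$ parts, and tests whether the hulls share a common point:

```python
from math import factorial

def set_partitions(elems, k):
    """All partitions of the list elems into exactly k nonempty blocks."""
    if k == 1:
        yield [list(elems)]
        return
    if len(elems) == k:
        yield [[e] for e in elems]
        return
    first, rest = elems[0], elems[1:]
    # first element as its own block
    for part in set_partitions(rest, k - 1):
        yield [[first]] + part
    # first element inserted into an existing block
    for part in set_partitions(rest, k):
        for i in range(len(part)):
            yield [b + [first] if j == i else b for j, b in enumerate(part)]

def tverberg_partition_count_1d(r):
    """Count r-partitions of the d=1 configuration (r-1 copies of 0,
    r-1 copies of 1, one point 1/2) whose convex hulls -- here closed
    intervals -- have a common point."""
    pts = [0.0] * (r - 1) + [1.0] * (r - 1) + [0.5]
    count = 0
    for part in set_partitions(list(range(len(pts))), r):
        lo = max(min(pts[i] for i in b) for b in part)
        hi = min(max(pts[i] for i in b) for b in part)
        if lo <= hi:  # the intervals intersect in a common point
            count += 1
    return count

for r in [3, 4]:
    assert tverberg_partition_count_1d(r) == factorial(r - 1)  # (r-1)!^d with d=1
```

In every partition that passes the test, the center $\tfrac12$ is a singleton and the remaining parts match the zeros with the ones, exactly as in the text.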
\smallskip
\textbf{The obstruction cocycle evaluated on subcomplexes of $( \Delta_{r,r-1}) ^{*d}* \Delta_{r,r-1} *[r]$.} Let us consider the following chains of dimensions $N$ resp.~$N-1$ (illustrated in Figure~\ref{fig:chains}), where $z_{r,r-1}$ denotes the orientation cycle for the chessboard complex $\Delta_{r,r-1}$, as given by Lemma~\ref{Lemma:Chess-Manifold}: \begin{eqnarray*} \Phi & = & (z_{r,r-1})^{*d}* \langle (1,1),(2,2),\dots,(r-1,r-1),(r,r)\rangle ,\\ \Omega_j & = & (z_{r,r-1})^{*d}* \langle (1,1),(2,2),\dots,(r-1,r-1),(j,r)\rangle \qquad (1\le j < r) ,\\ \Theta_i & = & (z_{r,r-1})^{*d}* \langle (1,1),\dots,\widehat{(i,i)},\dots,(r-1,r-1),(r,r)\rangle \qquad (1\le i\le r),\\ \Theta_{i,j} & = & (z_{r,r-1})^{*d}* \langle (1,1),\dots,\widehat{(i,i)},\dots,(r-1,r-1),(j,r)\rangle \qquad (1\le i\le r,\,1\le j< r). \end{eqnarray*} Explicitly the signs in these chains are as follows. If $\sigma$ denotes the facet $\langle(1,1),\dots,(r-1,r-1)\rangle$ of $\Delta_{r,r-1}$, such that $\pi\sigma=\langle(\pi(1),1),\dots,(\pi(r-1),r-1)\rangle$, then $\Phi$ is given by \[ \Phi\ \ =\ \ \sum_{\pi_1,\dots,\pi_d\in\Sym_r} (\textrm{sgn\,}\pi_1)\cdots(\textrm{sgn\,}\pi_d)\, \pi_1\sigma * \dots * \pi_d\sigma * \langle (1,1),\dots,(r-1,r-1),(r,r)\rangle \] and similarly for $\Omega_j$, $\Theta_i$, and $\Theta_{i,j}$.
\begin{figure}[tbh] \centering \unitlength=1mm \begin{picture}(26,60) \put(0,43){$\Phi = (\Delta_{r,r-1})^{*d}\ * $} \put(0,25){$\Theta_i = (\Delta_{r,r-1})^{*d}\ * $} \put(55,43){$\Omega_j = (\Delta_{r,r-1})^{*d}\ * $} \put(55,25){$\Theta_{i,j} = (\Delta_{r,r-1})^{*d}\ * $} \put(55, 7){$\Theta_{j,j} = (\Delta_{r,r-1})^{*d}\ * $} \put(48,25){$i$} \put(105,46){$j$} \put(105,28){$j$} \put(106,25){$i$} \put(105,10){$j$} \end{picture} \includegraphics[trim = 155 0 310 0,clip,scale=0.55]{Figure-3.eps} \hspace{35mm} \includegraphics[trim = 450 0 23 0,clip,scale=0.55]{Figure-3.eps} \caption{Schemes for the combinatorics of the chains $\Phi$,
$\Omega_j$, $\Theta_i$, and $\Theta_{i,j}$.} \label{fig:chains} \end{figure}
The evaluation of $\mathfrak{c}_f$ on $\Phi$ picks out the facets that correspond to colored Tverberg partitions: Since the last part of the partition must be the singleton vertex $v_N$, we find that the last rows of the chessboard complex $\Delta_{r,r-1}$ factors are not used. We may define the orientation on $S(W_r^{\oplus(d+1)})$ such that \[ \mathfrak{c}_f ( \sigma * \cdots * \sigma * \langle (1,1),\dots,(r-1,r-1),(r,r)\rangle) \ \ =\ \ +\zeta. \] Then we get \[ \mathfrak{c}_f \big( \pi_1\sigma*\dots*\pi_d\sigma * \langle (1,1),\dots,(r-1,r-1),(r,r)\rangle\big) \ =\ \begin{cases} (\textrm{sgn\,}\pi_1)\cdots(\textrm{sgn\,}\pi_d)\,\zeta & \textrm{if }\pi_1(r)=\dots=\pi_d(r)=r,\\ 0 & \textrm{otherwise}. \end{cases} \] The sign $(\textrm{sgn\,}\pi_1)\cdots(\textrm{sgn\,}\pi_d)$ comes from the fact that $F$ maps $\sigma*\cdots*\sigma*\langle(1,1),\ldots,(r-1,r-1),(r,r)\rangle$ and $\pi_1\sigma*\cdots*\pi_d\sigma*\langle(1,1),\ldots,(r-1,r-1),(r,r)\rangle$ to the same simplex in $W_r^{\oplus(d+1)}$, however with a different order of the vertices. Thus, \[ \mathfrak{c}_{f}(\Phi ) \ \ =\ \ (r-1)!^{d}\, \zeta . \]
Moreover, for any Tverberg $r$-partition in our configuration the last point $v_N$ has to be a singleton, while the facets of $\Omega_j$ correspond to $r$-partitions in which the $j$-th part contains $v_N$ together with a point of $C_d$. Thus the chains $\Omega_j$ do not capture any Tverberg partitions, and we get \begin{equation*}\qquad \mathfrak{c}_{f}(\Omega_{j})\ \ =\ \ 0\qquad\textrm{ for }1\le j<r. \end{equation*}
\smallskip
\textbf{Is the cocycle $\mathfrak{c}_{f}$ a coboundary?} Let us assume that $\mathfrak{c}_{f}$ is a coboundary. Then there is an equivariant cochain $\mathfrak{h}\in C_{\Sym_r}^{N-1}\big(X,A;\mathcal{Z}\big)$ such that $\mathfrak{c}_{f}=\delta \mathfrak{h}$, where $\delta $ is the coboundary operator.
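The evaluation $\mathfrak{c}_{f}(\Phi)=(r-1)!^{d}\,\zeta$ amounts to a signed count over $d$-tuples of permutations: a facet $\pi_1\sigma*\dots*\pi_d\sigma*\langle(1,1),\dots,(r,r)\rangle$ meets the diagonal precisely when every $\pi_i$ fixes $r$, and then its coefficient in $\Phi$ and its cocycle value carry the same sign, so each such facet contributes $+1$. A quick enumeration confirming this bookkeeping for small $r$ and $d$ (an illustrative check, not part of the paper):

```python
from itertools import permutations, product
from math import factorial, prod

def sgn(p):
    """Sign of a permutation given as a tuple of 0-based images."""
    s, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            j, cycle_len = i, 0
            while not seen[j]:
                seen[j], j, cycle_len = True, p[j], cycle_len + 1
            if cycle_len % 2 == 0:  # a cycle of even length is an odd permutation
                s = -s
    return s

def cocycle_on_Phi(r, d):
    """Sum over (pi_1,...,pi_d) of (coefficient in Phi) * (cocycle value):
    the cocycle value is sgn(pi_1)...sgn(pi_d) when every pi_i fixes
    row r (so that row r stays the singleton part), and 0 otherwise."""
    total = 0
    for pis in product(permutations(range(r)), repeat=d):
        if all(pi[r - 1] == r - 1 for pi in pis):
            total += prod(sgn(pi) for pi in pis) ** 2  # coefficient times value
    return total

for r, d in [(3, 1), (3, 2), (4, 1), (4, 2)]:
    assert cocycle_on_Phi(r, d) == factorial(r - 1) ** d
```

The squared sign makes each contributing facet count $+1$, giving $(r-1)!^{d}$ in total, independently of how the signs are distributed.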
In order to simplify the notation, from now on we drop the join factor $( \Delta_{r,r-1}) ^{*d}$ from the notation of the subcomplexes $\Phi $, $\Theta_{i}$ and $\Omega_{i}$. Note that the join with this complex accounts for a global sign of $(-1)^{d(r-1)}$ in the boundary/coboundary operators, since in our vertex ordering the complex $( \Delta_{r,r-1}) ^{*d}$, whose facets have $d(r-1)$ vertices, comes first. Thus we have \[ \partial\Phi\ \ =\ \ (-1)^{d(r-1)} \sum_{i=1}^{r}(-1)^{i-1}\Theta_{i} \] and similarly for $1\le j<r$, \[ \partial\Omega_j\ \ =\ \ (-1)^{d(r-1)}\big( \sum_{i=1}^{r-1}(-1)^{i-1}\Theta_{i,j} \ + \ (-1)^{r-1}\Theta_r\big). \]
\emph{Claim 1.} For $1\le i,j<r$, $i\neq j$ we have $\mathfrak{h}( \Theta_{i,j})=0$.
\begin{proof} We consider the effect of the transposition $\pi_{ir}$. The simplex $\langle (1,1),\dots,\widehat{(i,i)},\dots,(r-1,r-1),(j,r)\rangle$ has no vertex in the $i$-th or the $r$-th row, so it is fixed by $\pi_{ir}$. The $d$ chessboard complexes in $\Theta_{i,j}$ are invariant but change orientation under the action of $\pi_{ir}$, so the effect on the chain $\Theta_{i,j}$ is $\pi_{ir} \cdot\Theta_{i,j}=(-1)^d \Theta_{i,j}$ and hence \[ \mathfrak{h}(\pi_{ir} \cdot\Theta_{i,j})\ =\ \mathfrak{h}((-1)^d \Theta_{i,j})\ =\ (-1)^d \mathfrak{h}(\Theta_{i,j}). \] On the other hand $\mathfrak{h}$ is equivariant, so \[ \mathfrak{h}(\pi_{ir} \cdot\Theta_{i,j})\ =\ \pi_{ir} \cdot \mathfrak{h}(\Theta_{i,j})\ =\ (-1)^{d+1} \mathfrak{h}(\Theta_{i,j}) \] since $\Sym_r$ acts on $\mathcal Z$ by multiplication with $(\mathrm{sgn\,}\pi)^{d+1}$. Comparing the two evaluations of $\mathfrak{h}(\pi_{ir} \cdot\Theta_{i,j})$ yields $(-1)^{d} \mathfrak{h}(\Theta_{i,j}) = (-1)^{d+1} \mathfrak{h}(\Theta_{i,j})$, that is, $2\,\mathfrak{h}(\Theta_{i,j})=0$, and hence $\mathfrak{h}(\Theta_{i,j})=0$ since $\mathcal{Z}\cong\mathbb{Z}$ is torsion-free. \end{proof}
\emph{Claim 2.} For $1\le j<r$ we have $\mathfrak{h}( \Theta_{j,j})= -\mathfrak{h}(\Theta_j)$.
\begin{proof} The interchange of the $j$-th row with the $r$-th moves $\Theta_{j,j}$ to $\Theta_j$, where we have to account for $d$ orientation changes for the chessboard join factors. Thus $\pi_{jr}\Theta_{j,j} = (-1)^d \Theta_j$, which yields \[ (-1)^d\mathfrak{h}( \Theta_j ) \ =\ \mathfrak{h}((-1)^d \Theta_j ) \ =\ \mathfrak{h}(\pi_{jr}\Theta_{j,j} ) \ =\ \pi_{jr}\cdot\mathfrak{h}(\Theta_{j,j} ) \ =\ (-1)^{d+1}\mathfrak{h}(\Theta_{j,j} ). \] \end{proof}
We now use the two claims to evaluate $\mathfrak{h}(\partial\Omega_j)$: by Claim~1 the terms $\Theta_{i,j}$ with $i\neq j$ drop out, so we obtain \begin{eqnarray*} 0 \ =\ \mathfrak{c}_{f}(\Omega_j) \ =\ \delta \mathfrak{h}(\Omega_j) \ =\ \mathfrak{h}(\partial\Omega_j) & = & (-1)^{d(r-1)}\big( (-1)^{j-1}\mathfrak{h}(\Theta_{j,j}) \ + \ (-1)^{r-1}\mathfrak{h}(\Theta_r)\big) \end{eqnarray*} and hence, by Claim~2, \[ (-1)^{j}\mathfrak{h}(\Theta_j)\ \ =\ \ (-1)^{r}\mathfrak{h}(\Theta_r). \]
The final blow now comes from our earlier evaluation of the cochain $\mathfrak{c}_f$ on $\Phi$: \begin{eqnarray*} (r-1)!^{d} \cdot \zeta \ =\ \mathfrak{c}_{f}(\Phi) \ =\ \delta \mathfrak{h}(\Phi) \ =\ \mathfrak{h}(\partial \Phi ) & =& \mathfrak{h}( (-1)^{d(r-1)} \sum_{j=1}^{r} (-1)^{j-1} \Theta_j)\\ & =& -(-1)^{d(r-1)}\sum_{j=1}^{r}(-1)^{j}\mathfrak{h}( \Theta_j)\\ & =& -(-1)^{d(r-1)}\sum_{j=1}^{r}(-1)^{r}\mathfrak{h}( \Theta_r)\\ & =& (-1)^{(d+1)(r-1)} r\,\mathfrak{h}( \Theta_r). \end{eqnarray*} Thus $\mathfrak{h}(\Theta _{r})$ would have to equal $\pm\tfrac{(r-1)!^{d}}{r}\,\zeta$, which is possible in $\mathcal{Z}\cong\mathbb{Z}$ only if $r\mid(r-1)!^{d}$. Consequently, when $r\nmid(r-1)!^{d}$, the cocycle $\mathfrak{c}_{f}$ is not a coboundary, i.e.\ the cohomology class $\mathfrak{o}=[\mathfrak{c}_{f}]$ does not vanish and so there is no $\mathfrak{S}_{r}$-equivariant extension $X\rightarrow S(W_{r}^{\oplus (d+1)})$ of $F_0|_A$.
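For orientation on the criterion $r\mid(r-1)!^{d}$: by Wilson's theorem $(r-1)!\equiv -1 \pmod r$ for prime $r$, so the criterion fails for every prime $r$ and every $d$, while for composite $r\ge 6$ we always have $r\mid(r-1)!$; the case $r=4$ requires $d\ge 2$. A short script tabulating this (illustrative, not part of the paper):

```python
from math import factorial

def extension_exists(r, d):
    # The divisibility criterion of the proposition: r | (r-1)!^d
    return factorial(r - 1) ** d % r == 0

# Wilson's theorem: (r-1)! = -1 (mod r) for prime r, so the criterion
# fails for every prime r and every d >= 1 -- the Tverberg-type cases.
for r in [2, 3, 5, 7, 11, 13]:
    assert not any(extension_exists(r, d) for d in range(1, 6))

# For composite r the obstruction vanishes: r = 4 needs d >= 2
# (4 divides 6^2 = 36 but not 6), and every composite r >= 6 divides (r-1)!.
assert not extension_exists(4, 1)
assert extension_exists(4, 2)
assert all(extension_exists(r, 1) for r in [6, 8, 9, 10, 12, 14, 15])
```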
On the other hand, when $r\mid(r-1)!^{d}$ we can define \begin{equation*} \begin{array}{llll} \mathfrak{h}(\Theta _{j}) & := & + (-1)^{\left( d+1\right) \left( r-1\right) +j+r}\cdot\tfrac{(r-1)!^{d}}{r}\cdot \zeta , & \text{for }1\leq j\leq r, \\ \mathfrak{h}(\Theta _{j,j}) & := & - (-1)^{\left( d+1\right) \left( r-1\right) +j+r}\cdot\tfrac{(r-1)!^{d}}{r}\cdot \zeta , & \text{for }1\leq j<r, \\[1.6mm] \mathfrak{h}(\Theta _{i,j}) & := & 0, & \text{for }i\neq j,~1\leq i\leq r,~1\leq j<r. \end{array} \end{equation*}
Here we actually do obstruction theory with respect to the filtration $(\Delta_{r,r-1})^{*d} * (\Delta_{r,r-1}*[r])^{(n)}$ of $X$, where $(\Delta_{r,r-1}*[r])^{(n)}$ denotes the $n$-skeleton of $\Delta_{r,r-1}*[r]$. The obstruction cocycle actually lies in \[ C^{r-1}_{\mathfrak{S}_{r}} (\Delta_{r,r-1}*[r] ; \mathcal{Z} \otimes H_{(r-1)d-1} ((\Delta_{r,r-1})^{*d};\mathbb{Z})), \] and it is the coboundary of $\mathfrak{h}$. Since $\mathfrak{h}$ is non-zero only on the ``cells'' $\Theta_{j}$ and $\Theta_{j,j}$, which are invariant only under $\textnormal{id}\in\mathfrak{S}_{r}$, we can solve the extension problem equivariantly. Hence for $r\mid(r-1)!^{d}$ an $\mathfrak{S}_{r}$-equivariant extension $X\rightarrow S(W_{r}^{\oplus (d+1)})$ exists. \end{proof}
\textbf{Acknowledgements.} We are grateful to Carsten Schultz for critical comments, and to Aleksandra, Julia, and Torsten for constant support. The Mathematisches Forschungsinstitut Oberwolfach and the Institute for Pure and Applied Mathematics at UCLA provided perfect working environments for the completion of this paper.
https://arxiv.org/abs/1710.04705
On smooth square-free numbers in arithmetic progressions
A. Booker and C. Pomerance (2017) have shown that any residue class modulo a prime $p\ge 11$ can be represented by a positive $p$-smooth square-free integer $s = p^{O(\log p)}$ with all prime factors up to $p$, and conjectured that in fact one can find such $s$ with $s = p^{O(1)}$. Using bounds on double Kloosterman sums due to M. Z. Garaev (2010) we prove this conjecture in a stronger form $s \le p^{3/2 + o(1)}$ and also consider more general versions of this question replacing $p$-smoothness of $s$ by the stronger condition of $p^{\alpha}$-smoothness. Using bounds on multiplicative character sums and a sieve method, we also show that we can represent all residue classes by a positive square-free integer $s\le p^{2+o(1)}$ which is $p^{1/(4e^{1/2})+o(1)}$-smooth. Additionally, we obtain stronger results for almost all primes $p$.
\section{Introduction and main results} \subsection{Motivation} We recall that an integer $n$ is called {\it $y$-smooth\/} if all prime divisors of $n$ do not exceed $y$, and is called {\it square-free\/} if it is not divisible by a square of a prime.
Following Booker and Pomerance~\cite{BoPom}, for a prime $p$ we denote by $M(p)$ the smallest integer $M$ such that any residue class modulo $p$ contains a $p$-smooth square-free positive representative $s \le M$, and set, formally, $M(p) =\infty$ if no such representative exists. We note that by~\cite[Theorem~1]{BoPom} we have $M(p)< \infty$ for every $p\ge 11$.
It is noted in~\cite[Section~6]{BoPom} that the argument of the proof of~\cite[Theorem~1]{BoPom} actually gives $M(p) = p^{O(\log p)}$, and it is conjectured there that $M(p) = p^{O(1)}$. Here we settle this conjecture in a stronger and more general form. More precisely, we address the question of Booker and Pomerance~\cite{BoPom} about the smallest value of $\alpha$ for which $M_\alpha(p) < \infty$ for a sufficiently large $p$, where $M_\alpha(p)$ is defined analogously to $M(p)$ with $p^\alpha$-smoothness in place of $p$-smoothness, and show that this is true for $\alpha > 1/(4 e^{1/2})$, by using the methods of~\cite{Harm,HaSh}. In fact, we obtain an explicit bound on $M_\alpha(p)$ and also extend this to composite moduli.
We also note that the area of representations of residue classes by numbers of prescribed arithmetic structure takes its origins in the works of Erd{\H o}s, Odlyzko and S{\'a}rk{\"o}zy~\cite{EOS} and Harman~\cite{Harm}, see also~\cite{FrKuSh, Gar2, HaSh, RaWa, Shp1, Shp2, Wal} for more recent developments.
\subsection{Main results} For a real positive $\alpha$, we denote by $M_\alpha^*(q)$ the smallest integer $M$ such that any {\it reduced\/} residue class modulo $q$ contains a $q^\alpha$-smooth square-free positive representative $k\le M$, and set, formally, $M_\alpha^*(q) =\infty$ if no such representative exists.
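To make the quantity $M(p)$ concrete, it can be computed for small primes by a direct search over square-free $p$-smooth integers. The following script (an illustrative computation, with an ad hoc search bound $B$ that must simply be large enough) records, for each residue class, the smallest admissible representative:

```python
def smallest_prime_factor_table(B):
    """Sieve of smallest prime factors for all integers up to B."""
    spf = list(range(B + 1))
    for i in range(2, int(B ** 0.5) + 1):
        if spf[i] == i:  # i is prime
            for j in range(i * i, B + 1, i):
                if spf[j] == j:
                    spf[j] = i
    return spf

def is_p_smooth_squarefree(n, p, spf):
    """True iff n is p-smooth and square-free."""
    while n > 1:
        q = spf[n]
        if q > p:
            return False
        n //= q
        if n % q == 0:  # repeated prime factor
            return False
    return True

def M(p, B=10**4):
    """Smallest M such that every class mod p has a p-smooth square-free
    representative s <= M (searching representatives up to B)."""
    spf = smallest_prime_factor_table(B)
    best = {}
    for s in range(1, B + 1):
        if s % p not in best and is_p_smooth_squarefree(s, p, spf):
            best[s % p] = s
            if len(best) == p:
                return max(best.values())
    raise ValueError("search bound B too small")

# For p = 11 the hardest class is 9 mod 11, first represented by 42 = 2*3*7.
assert M(11) == 42
```

Such brute-force values quickly become expensive as $p$ grows, which is exactly why the asymptotic bounds of the theorems below are of interest.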
Our main result is
\begin{theorem}\label{thm:Malpha} For an integer $q\to \infty$ and any fixed $$ \alpha > \begin{cases} 1/(4 e^{1/2}), & \text{if $q$ is cube-free,}\\ 1/(3 e^{1/2}), & \text{otherwise,} \end{cases} $$ we have $$ M_{\alpha}^*(q) \le q^{2 + o(1)}. $$ \end{theorem}
\begin{remark} In particular, Theorem~\ref{thm:Malpha} improves on the result of Harman~\cite[Theorem~3]{Harm} which, for any fixed $\varepsilon > 0$, gives the existence (without the additional square-freeness condition) of a $q^{1/(4 e^{1/2})+\varepsilon}$-smooth integer $n\leq q^{9/4+\varepsilon}$ in arithmetic progressions modulo a cube-free $q$ and a $q^{1/(3 e^{1/2})+\varepsilon}$-smooth integer $n\leq q^{7/3+\varepsilon}$ in arithmetic progressions modulo an arbitrary $q$. \end{remark}
To exhibit the main ideas behind our approach in the simplest form, and also because this corresponds to the original question of Booker and Pomerance~\cite{BoPom}, in Theorems~\ref{thm:Malpha-p-AA}, \ref{thm:M1} and~\ref{thm:M1-AA} we treat only prime moduli $p$. However, all necessary ingredients are readily available in the case of composite moduli $q$ as well, see, for example,~\cite[Lemma~4]{MuSh},~\cite[Corollary~2.5]{FoSh} and~\cite[Lemma~3.1]{Irv}.
For almost all primes $p$, we obtain a stronger result than Theorem~\ref{thm:Malpha} using bounds on character sums from~\cite{MuSh}.
\begin{theorem}\label{thm:Malpha-p-AA} As $Q\to \infty$, for any fixed $\alpha > 0$, for all but $Q^{o(1)}$ primes $p\in [Q,2Q]$, we have $$ M_\alpha^*(p) \le p^{2 + o(1)}. $$ \end{theorem}
In the particular case of $p$-smoothness, we can actually do more and break the $p^2$-barrier. Using bounds on double Kloosterman sums with prime arguments due to Garaev~\cite{Gar1}, we prove:
\begin{theorem} \label{thm:M1} As $p\to \infty$ we have $$ M(p) \le p^{3/2+o(1)}.
$$ \end{theorem}
\begin{remark} It should be noted that the main result of Balog and Pomerance~\cite{BaPom} gives the existence (without the additional condition of square-freeness) of a $p$-smooth integer $n\leq p^{7/4+\varepsilon}$ in arithmetic progressions modulo $p$. Our method can be used to derive further extensions of the results of~\cite{BaPom}. \end{remark}
One of the implications of our results is that in~\cite[Corollary~7]{BoPom} the value of $d$ can be taken to be reasonably small.
Using a result of Irving~\cite{Irv}, we also show that for almost all $p$ one can break the $3/2$-threshold of Theorem~\ref{thm:M1}, see also our comments in Section~\ref{sec:tight} below.
\begin{theorem}\label{thm:M1-AA} As $Q\to \infty$, for all but $o(Q/\log Q)$ primes $p\in [Q,2Q]$, we have $$ M(p) \le p^{4/3 + o(1)}. $$ \end{theorem}
\subsection{Some methods behind our results} The proof of Theorem~\ref{thm:Malpha} is based on the ideas of~\cite{HaSh}, which are modified to accommodate the square-freeness condition and which, after some preparations in Section~\ref{sec:sum Mob}, we develop in Section~\ref{sec:sumprod-AP}. Furthermore, to make it work, instead of the Burgess bound (see~\cite[Theorem~12.6]{IwKow}) used in~\cite{HaSh}, we apply some bounds from~\cite{Mun,MuTr}, presented in Section~\ref{sec:charsum}.
For Theorem~\ref{thm:Malpha-p-AA} we use a different and more direct approach which is enabled by the fact that for almost all primes we have bounds of very short character sums from~\cite{MuSh}, which in turn is based on some ideas of Garaev~\cite{Gar0} and which we also present in Section~\ref{sec:charsum}.
For Theorems~\ref{thm:M1} and~\ref{thm:M1-AA} we use yet another approach which is based on bounds of some double weighted Kloosterman-like sums from~\cite{Gar1} and~\cite{Irv}, respectively, see Section~\ref{sec:Ksum}. These bounds are used in Section~\ref{sec:congprod} to study some congruences with products of primes, which underlie our approach.
Furthermore, in the proof of Theorem~\ref{thm:M1-AA} we also use bounds for the number of small solutions of some quadratic congruences, see Section~\ref{sec:cong recip sq}.
We introduce some general notation in Section~\ref{sec:note}, which we then follow throughout the paper, and collect several useful facts on arithmetic functions in Section~\ref{sec:arith funct}.
\subsection{On the tightness of our results} \label{sec:tight} Clearly, the lower bounds on $\alpha$ in Theorem~\ref{thm:Malpha} cannot be improved until the classical bound of Burgess~\cite{Burg} on the smallest quadratic nonresidue is improved.
Furthermore, the upper bound of Theorem~\ref{thm:M1} also seems to be the best possible one can achieve nowadays. In fact, even without any arithmetic restrictions on positive integers $u \le U$ and $v \le V$ one can guarantee the existence of a solution to $uv \equiv a \pmod p$ only for $UV \ge p^{3/2+\varepsilon}$ for some fixed $\varepsilon>0$, see~\cite[Section~3.1]{Shp0} for a survey of relevant results.
\section{Preparations} \subsection{General notation} \label{sec:note} We recall that the notations $U = O(V)$, $U \ll V$ and $V \gg U$ are all equivalent to the assertion that the inequality $|U|\le cV$ holds for some constant $c>0$. Throughout the paper, the implied constants in these symbols may occasionally, where obvious, depend on the integer parameter $r\ge 1$ and are absolute otherwise.
Throughout the paper, the letters $\ell$ and $p$, with and without subscripts, always denote prime numbers. As usual, we use $\mu(k)$, $\tau(k)$ and $\varphi(k)$ to denote the M{\" o}bius, divisor and Euler functions of an integer $k \ge 1$, respectively.
We set $$ \psi = 2^{1/15} \qquad\mbox{and}\qquad \xi = \psi-1 $$ and write $a \sim A$ to indicate $a \in [A,\psi A]$. We also write \begin{equation} \label{eq: rho} \rho = e^{-1/2}.
\end{equation}
\subsection{Some properties of arithmetic functions} \label{sec:arith funct} We recall the well-known elementary identity \begin{equation}\label{inversion} \sum_{d \mid \gcd(n,q)} \mu(d) = \begin{cases} 1 &\text{if $\gcd(n,q)=1$,}\\ 0 &\text{otherwise.} \end{cases} \end{equation}
We also note that by the Mertens formula (see~\cite[Equation~(2.15)]{IwKow}), for any real $Y > X \ge 2$ we have \begin{equation} \label{eq:Mer} \sum_{X \le \ell \le Y}\frac{1}{\ell} = \log \frac{\log Y}{\log X} +O\(\frac{1}{\log X}\). \end{equation} In particular, it easily follows from~\eqref{eq:Mer} that \begin{equation} \label{eq:phi} \frac{\varphi(k)}{k} = \prod_{ \ell \mid k}\(1-\frac{1}{\ell}\) \gg \frac{1}{\log \log k} \end{equation} for any integer $k \ge 3$.
We also need the classical bound \begin{equation} \label{eq:tau} \tau(k) = k^{o(1)} \end{equation} on the divisor function, see, for example,~\cite[Equation~(1.81)]{IwKow}.
We recall that by~\cite[Lemma~2.5 (2)]{Mun} we have:
\begin{lemma}\label{lem: SqFAP} For any $M > 0$ and $q \ge 2$ we have $$ \sum_{\substack{m\sim M \\ \gcd(m,q)=1}} \mu^2(m) = \frac{\xi}{\zeta(2)}\prod_{p\mid q}\(1+\frac{1}{p}\)^{-1} M + O(M^{1/2}\tau(q))\,. $$ \end{lemma}
Furthermore, by~\cite[Lemma~7]{HaSh} we have the following upper bound:
\begin{lemma}\label{lem: AP bound} For any $M > \log q \ge 2$ we have $$ \sum_{\substack{m\sim M \\ \gcd(m,q)=1}} 1 \ll \frac{\varphi(q)}{q}M\,. $$ \end{lemma}
\section{Bounds of exponential and character sums and the number of solutions to some congruences} \subsection{Character sums} \label{sec:charsum} Let ${\mathcal X}_q$ be the set of {\it multiplicative\/} characters of the residue ring modulo $q \ge 1$ and let ${\mathcal X}_q^*={\mathcal X}_q\setminus\{\chi_0\}$ be the set of {\it nonprincipal\/} characters; we refer the reader to~\cite[Chapter~3]{IwKow} for the relevant background.
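The elementary identity \eqref{inversion} is used repeatedly in the sieve arguments below; it is easy to confirm numerically. A small self-contained check (an illustrative script, not part of the paper):

```python
from math import gcd

def mobius(n):
    """Moebius function computed by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:  # square factor present
                return 0
            result = -result
        d += 1
    if n > 1:  # leftover prime factor
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# The sum over d | gcd(n, q) of mu(d) is the indicator of gcd(n, q) = 1.
for n in range(1, 50):
    for q in range(2, 50):
        s = sum(mobius(d) for d in divisors(gcd(n, q)))
        assert s == (1 if gcd(n, q) == 1 else 0)
```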
In particular, we make use of the following orthogonality property of characters, see~\cite[Section~3.2]{IwKow}, \begin{equation} \label{eq:orth} \frac{1}{\varphi(q)}\sum_{\chi\in {\mathcal X}_q} \chi(a) = \begin{cases} 1, & \text{if}\ a \equiv 1 \pmod q,\\ 0, & \text{otherwise,} \end{cases} \end{equation} which holds for any integer $a$ with $\gcd(a,q)=1$.
Our argument rests on the existence of a good bound for the sums \begin{equation} \label{eq:sum s-f} S^\sharp_\chi(t) = \sum_{\substack{1\le s \le t\\s~\text{square-free}}} \chi(s) \end{equation} of characters $\chi\in{\mathcal X}_q^*$ over square-free integers $s \in [1,t]$. In particular, we need the following bound of Munsch and Trudgian~\cite[Lemma~7]{MuTr}, which was previously stated for $r=2$ in~\cite[Lemma~3.2]{Mun}. In fact we formulate this in a more general form which covers arbitrary moduli $q$, rather than only cube-free $q$.
\begin{lemma} \label{lem:CharSquarfree} For any integer $q$, a positive integer $t \le q$ and \begin{itemize} \item for any fixed integer $r\geq 2$ if $q$ is cube-free, \item for $r = 2, 3$ for any $q$, \end{itemize} we have $$ \max_{\chi\in{\mathcal X}_q^*} \left|S^\sharp_\chi(t)\right|\le t^{1-1/r}q^{(r+1)/(4r^2)+o(1)}, $$ as $q\to \infty$. \end{lemma}
In particular, we have
\begin{cor} \label{cor:CharSquarfree-eps} There exists an absolute constant $c_0>0$ such that for any real $\varepsilon>0$ and a positive integer $t$ with \begin{itemize} \item $t \in [q^{1/4 + \varepsilon}, q]$ if $q$ is cube-free, \item $t \in [q^{1/3 + \varepsilon}, q]$ for any $q$, \end{itemize} we have $$ \max_{\chi\in{\mathcal X}_q^*} \left|S^\sharp_\chi(t)\right|\le t^{1 - c_0\varepsilon^2 }. $$ \end{cor}
Finally, we need the following simple bound which follows from the orthogonality of characters and which we refer to as the {\it mean-value estimate for character sums\/}.
\begin{lemma} \label{lem:Aver} For $N \ge 1$ and any sequence of complex numbers $a_n$ we have $$ \sum_{\chi \in {\mathcal X}_q} \left| \sum_{n \le N} a_n \chi(n) \right|^2 \le \varphi(q) (N/q + 1) \sum_{n \le N} |a_n|^2. $$ \end{lemma}
We present a special case of~\cite[Lemma~4]{MuSh} in the following form convenient for our applications.
\begin{lemma} \label{lem:CharSquarfree-AA} Let $t$ and $Q$ be sufficiently large positive integers with $Q \ge t^\varepsilon$ for some fixed $\varepsilon > 0$. Then for any $\delta< 1/4$ and $$ \vartheta = \min\{ (1-2 \delta) \gamma, 2 \delta\(1- \gamma\)\}, $$ where $\gamma$ is the fractional part $$ \gamma = \left\{\frac{2 \log Q}{\log t}\right\}, $$ for all but at most $Q^{4\delta} t^{\vartheta +o(1)}$ primes $p \le Q$ we have $$ \max_{\chi\in{\mathcal X}_p^*} \left|S^\sharp_\chi(t)\right|\le t^{1-\delta}. $$ \end{lemma}
\subsection{Double Kloosterman sums with prime arguments} \label{sec:Ksum} For a prime $p$, we define ${\mathbf{\,e}}_p(z) = \exp(2 \pi i z/p)$ and consider the exponential sums $$ W_p(a;L)= \sum_{\ell_1,\ell_2 \in {\mathcal L}} {\mathbf{\,e}}_p\(a \overline \ell_1 \overline \ell_2\), $$ where ${\mathcal L}$ is the set of primes $\ell \in [L,2L]$ with $\gcd(\ell,p) =1$ and for an integer $k$ with $\gcd(k,p)=1$ we use $\overline k$ to denote the multiplicative inverse of $k$ modulo $p$, that is, the unique integer with $$ k \overline k \equiv 1 \pmod p \qquad\mbox{and}\qquad 1 \le \overline k < p. $$
We now record the following bound which follows from the proof of~\cite[Lemma~2.4]{Gar1}.
\begin{lemma} \label{lem:BilinSums} For $1\le L \le p^{1/3} $ we have $$ \left| W_p(a;L) \right| \le L^{3/2} p^{1/8+o(1)}, $$ as $p\to \infty$.
\end{lemma}
\begin{proof} As in~\cite[Lemma~2.4]{Gar1}, we consider a more general sum $$ W = \sum_{k=1}^K\left|\sum_{n=1}^{N_k} \gamma_n {\mathbf{\,e}}_p\(a \overline k \overline n\)\right|, $$ where $\gamma_n$ are some complex numbers with $\gamma_n = p^{o(1)}$, $n =1, \ldots, N$, and $N_k$ are some positive integers with $N_k\le N$, $k=1, \ldots, K$. Following the proof of~\cite[Lemma~2.4]{Gar1}, and using~\cite[Lemma~2.3]{Gar1} in full generality, we arrive at the inequality $$ W^8 \ll p^{1+o(1)} (KN)^4 \(K^{7/2} p^{-1/2}+K^{2}\)\(N^{7/2} p^{-1/2}+N^{2}\). $$ We note that for $K,N\ge p^{1/3}$ we obtain the bound $W \ll (KN)^{15/16} p^{o(1)}$ of~\cite[Lemma~2.4]{Gar1}, while for $ K,N \le p^{1/3}$ we arrive at $$ W \ll (KN)^{3/4} p^{1/8+ o(1)}. $$ The result now follows. \end{proof}
For almost all moduli, improving some previous results from~\cite{FoSh}, Irving~\cite{Irv} has shown that on average over $p$ one can improve Lemma~\ref{lem:BilinSums}. We present the result of~\cite[Lemma~3.1]{Irv} in a very special case with the averaging only over prime numbers, with both variables in the same range $[L,2L]$.
\begin{lemma} \label{lem:BilinSums-AA} As $Q\to \infty$, for any fixed integer $k\ge 1$ and $1 \le L \le Q$, we have $$ \sum_{p\in [Q,2Q]} \max_{\gcd(a,p)=1} \left| W_p(a;L) \right| \le Q^{1+o(1)}\( L^{(3k-1)/(2k)} Q^{1/(2k)}+ L^{(4k-1)/(2k)}\). $$ \end{lemma}
Hence, from Lemma~\ref{lem:BilinSums-AA}, we have:
\begin{cor} \label{cor:BilinSums-AA} As $Q\to \infty$, for any fixed integer $k\ge 1$ and $1 \le L \le Q$, for all but $o(Q/\log Q)$ primes $p\in [Q,2Q]$, we have $$ \max_{\gcd(a,p)=1} \left| W_p(a;L) \right| \le \( L^{(3k-1)/(2k)} Q^{1/(2k)}+ L^{(4k-1)/(2k)}\) Q^{o(1)}.
$$ \end{cor}
\subsection{Congruences with reciprocals of squares} \label{sec:cong recip sq} Given an integer $r \ge 1$, a real $U \ge 1$, and $\lambda \in {\mathbb F}_p$, let $I_{r,p}(U;\lambda )$ be the number of solutions to the congruence \begin{align*} \frac{1}{u_1^2}+ \ldots+ \frac{1}{u_r^2} &\equiv \frac{1}{u_{r+1}^2}+ \ldots+ \frac{1}{u_{2r}^2}+ \lambda \pmod p,\\ U \le u_1, & \ldots, u_{2r} \le 2U. \end{align*}
First we observe that the standard expression of $I_{r,p}(U;\lambda )$ via additive characters immediately implies the well-known inequality $$ I_{r,p}(U;\lambda) \le I_{r,p}(U; 0). $$ Hence we denote $$ I_{r,p}(U) = I_{r,p}(U; 0) $$ and concentrate on this quantity.
Heath-Brown~\cite[Lemma~1]{H-B} has given a nontrivial bound on $I_{r,p}(U)$, see also~\cite[Proposition~1]{BouGar}, however these results seem to be not strong enough for our purpose. On average over $p$, however, a much stronger bound is given by~\cite[Lemma~3.4]{LSZ1} (we recall that all implied constants are allowed to depend on $r$):
\begin{lemma} \label{lem:Irp Aver} For any fixed positive integer $r$ and sufficiently large real $1 \le U \le Q$, we have $$ \frac{1}{Q}\sum_{Q \le p \le 2Q} I_{r,p}(U) \le \(U^{2r} Q^{-1} + U^r\)Q^{o(1)}. $$ \end{lemma}
Given two positive real numbers $U$ and $V$, we denote by $T_{a,p}(U,V)$ the number of solutions to the congruence $$ u^2v \equiv a \pmod p, \qquad 1 \le u \le U, \ 1 \le v \le V. $$
\begin{lemma} \label{lem:TIaa} As $Q\to \infty$, for all but $o(Q/\log Q)$ primes $p\in [Q,2Q]$, for any integer $a$ with $\gcd(a,p)=1$ and reals $U$ and $V$ with $1\le U,V \le Q$, we have $$ T_{a,p}(U,V) \ll V^{1/4}(Up^{-1/4}+U^{1/2})Q^{o(1)}. $$ \end{lemma}
\begin{proof} In order to lighten the notation, $T$ will denote the number of solutions $T_{a,p}(U,V)$.
Expanding, and writing $b = v_1+v_2 \le 2V$ for a pair of solutions $(u_1,v_1)$, $(u_2,v_2)$ (note that $v_i$ is uniquely determined by $u_i$), we get $$T^2 \le \#\left\{1\le u_1, u_2 \le U, 1\le b\le 2V \text{ such that } \left(\frac{1}{u_1^2}+ \frac{1}{u_2^2}\right) \equiv ba^{-1} \bmod p\right\}.$$ By the Cauchy--Schwarz inequality, we deduce \begin{align*}T^4 & \ll V \#\left\{1\le u_1,u_2,u_3,u_4 \le U \text{ such that } \frac{1}{u_1^2}+ \frac{1}{u_2^2} \equiv \frac{1}{u_{3}^2}+ \frac{1}{u_{4}^2} \bmod p\right\} \\ & \ll V I_{2,p}(U). \end{align*} Using Lemma~\ref{lem:Irp Aver} with $r=2$, we deduce that for almost all primes $p\in [Q,2Q]$, we have $$I_{2,p}(U) \ll p^{o(1)} (U^{4}p^{-1} + U^2).$$ Thus, for almost all primes $p$, we get $$T \ll V^{1/4}(Up^{-1/4}+U^{1/2})p^{o(1)}.$$ \end{proof} \subsection{Congruences with products of primes} \label{sec:congprod} Given two positive real numbers $L$ and $h$, we denote by $N_{a,p}(L,h)$ the number of solutions to the congruence \begin{equation} \label{eq:cong llu} \ell_1 \ell_2 u \equiv a \pmod p, \qquad \ell_1,\ell_2 \in {\mathcal L}, \ 1 \le u \le h, \end{equation} where ${\mathcal L}$ is the set of primes $\ell \in [L,2L]$ with $\gcd(\ell,p) =1$. First we note that using standard techniques, we easily derive the following asymptotic formula. \begin{lemma} \label{lem:congr-asymp} For any integer $a$ and prime $p$ with $\gcd(a,p)=1$ and real $h$ and $L$ with $1 \le L \le p^{1/3}$ and $1\le h\le p$, we have $$ N_{a,p}(L,h) = \frac{K^2 h}{p} + O\(L^{3/2} p^{1/8+o(1)}\), $$ where $K = \# {\mathcal L}$ is the cardinality of ${\mathcal L}$. \end{lemma} \begin{proof} We interpret the congruence~\eqref{eq:cong llu} as a question about the uniformity of distribution of the residues $a \ell_1^{-1} \ell_2^{-1} \pmod p$ (where the inversions are modulo $p$) which fall in the interval $[1,h]$. The result follows from Lemma~\ref{lem:BilinSums} applied to the sets ${\mathcal U}={\mathcal V}$ which are the sets of reciprocals modulo $p$ of $\ell \in {\mathcal L}$, combined with the Erd{\H o}s--Tur{\'a}n inequality, see~\cite{DrTi,KuNi}.
\end{proof} Similarly, from Corollary~\ref{cor:BilinSums-AA}, we derive \begin{lemma} \label{lem:congr-asymp-AA} As $Q\to \infty$, for any fixed integer $k\ge 1$, for $1 \le L \le Q$, for all but $o(Q/\log Q)$ primes $p\in [Q,2Q]$, for any integer $a$ with $\gcd(a,p)=1$ and real $h$ with $1\le h\le p$, we have $$ N_{a,p}(L,h) = \frac{K^2 h}{p} + O\(\( L^{(3k-1)/(2k)} p^{1/(2k)}+ L^{(4k-1)/(2k)}\) p^{o(1)}\), $$ where $K = \# {\mathcal L}$ is the cardinality of ${\mathcal L}$. \end{lemma} We also need the following upper bound on $N_{a,p}(L,h)$, which is better for small values of $h$, when Lemma~\ref{lem:congr-asymp} fails to produce any nontrivial result. \begin{lemma} \label{lem:congr-bound} For any integer $a$ and prime $p$ with $\gcd(a,p)=1$ and reals $1 \le L, h \le p$ we have $$ N_{a,p}(L,h) \le \(L^2 h/p +1\)p^{o(1)}. $$ \end{lemma} \begin{proof} Clearly, we can assume that $1 \le a \le p$. Then the congruence~\eqref{eq:cong llu} implies that $\ell_1 \ell_2 u = a + kp$ for some non-negative integer $k \le 4L^2h/p$. Thus $k$ takes at most $4L^2 h/p +1$ possible values and for each of them $\ell_1$ and $\ell_2$ can take at most $O(\log p)$ possible values among the prime divisors of $a + kp \ge 1$. \end{proof} We now estimate the average value of $N_{a,p}(L,h)$ over a special set of $a$ using Lemma~\ref{lem:TIaa}. \begin{lemma} \label{lem:congr-bound-aver} As $Q\to \infty$, for all but $o(Q/\log Q)$ primes $p\in [Q,2Q]$, for any integer $a$ and reals $1 \le F, L, h \le p$ with $F, L^2h < p$, for the sum $$ R_{a,p}(F,L,h) = \sum_{F \le d \le 2F} N_{ad^{-2},p}(L,h) $$ we have $$ R_{a,p}(F,L,h) \le \max\{F(L^2h)^{1/4}p^{-1/4}, F^{1/2}(L^2h)^{1/4}\} p^{o(1)}. $$ \end{lemma} \begin{proof} We observe that the sum $R_{a,p}(F,L,h)$ counts the number of solutions to the congruence $$ \ell_1 \ell_2 u d^2 \equiv a \pmod p, \qquad F \le d \le 2F,\ \ell_1,\ell_2 \in {\mathcal L}, \ 1 \le u \le h.
$$ Denoting $v = \ell_1 \ell_2 u$ we see from~\eqref{eq:tau} that each such $v \in [1, L^2h]$ can be represented like this in at most $p^{o(1)}$ ways. Hence $$ R_{a,p}(F,L,h) \le T_{a,p}(2F,L^2h) p^{o(1)}. $$ Thus by Lemma~\ref{lem:TIaa} $$ R_{a,p}(F,L,h) \le \max\{F(L^2h)^{1/4}p^{-1/4}, F^{1/2}(L^2h)^{1/4}\} p^{o(1)}. $$ This concludes the proof. \end{proof} We also need to bound the number of solutions of a modified version of~\eqref{eq:cong llu}. Namely, we use $Q_{a,p}(L,h)$ to denote the number of solutions to the congruence \begin{equation} \label{eq:cong ll2v} \ell_1 \ell_2^2 v \equiv a \pmod p, \qquad \ell_1,\ell_2 \in {\mathcal L}, \ 1 \le v \le h. \end{equation} \begin{lemma} \label{lem:congr-boundsquare} For any integer $a$ and prime $p$ with $\gcd(a,p)=1$ and reals $ 1\le L, h \le p$ with $2L h \le p$ we have $$ Q_{a,p}(L,h) \le \(L h/p +1\)Lp^{o(1)}. $$ \end{lemma} \begin{proof} The congruence~\eqref{eq:cong ll2v} implies that $\ell_1 v \equiv a \ell_2^{-2} \pmod p$. Since $2L h \le p$, for each choice of $\ell_2$ the value $\ell_1 v$ can take at most $Lh/p+1$ values, and by the divisor bound~\eqref{eq:tau} each such value corresponds to at most $p^{o(1)}$ pairs $(\ell_1, v)$, so the result follows. \end{proof} \section{Multilinear sums over products in arithmetic progressions} \subsection{Some sums with the M{\" o}bius function} \label{sec:sum Mob} We are now able to establish a full analogue of~\cite[Lemma~8]{HaSh}, where the summation is only over $m$ and $n$ with square-free products $mn$. \begin{lemma}\label{smooth} For integers $N \ge q^{1/4} > 1$, real $0 < \zeta < 1$ and a positive integer $d = o\((\log N)^2/\log \log N\)$ coprime to $q$, we have \begin{align*} & \sum_{\substack{N^{\zeta} \le p \le N \\ \gcd(p,dq)=1}} \sum_{\substack{m\sim N/dp \\ \gcd(m,dq)=1}} \mu^2 (m)\\ & \qquad \qquad= \left( \frac{\xi \log(1/\zeta)}{\zeta(2)} + o(1) \right) \prod_{\ell \mid dq}\(1+\frac{1}{\ell}\)^{-1} \frac{N}{d}. \end{align*} \end{lemma} \begin{proof} We define $$ U= N/\tau(dq)^2 \qquad\mbox{and}\qquad V = N/(d\log (dq)).
$$ We first consider the part over primes $p \leq U$. Applying Lemma~\ref{lem: SqFAP} to the inner sum, we obtain \begin{align*} \sum_{\substack{N^{\zeta} \le p \le U \\ \gcd(p,dq)=1}} & \sum_{\substack{m\sim N/dp \\ \gcd(m,dq)=1}} \mu^2 (m)\\ &= \frac{\xi}{\zeta(2)} \prod_{\ell \mid dq}\(1+\frac{1}{\ell}\)^{-1}\frac{N}{d} \sum_{\substack{N^{\zeta} \le p \le U \\ \gcd(p,dq)=1}}\frac{1}{p} \\ &\qquad \qquad \qquad\quad + O\(\sum_{\substack{ N^{\zeta} \le p \le U \\ \gcd(p,dq)=1}} \(\frac{N}{dp}\)^{1/2}\tau(dq) \) . \end{align*} We discard the coprimality condition and extend the summation over all primes $p \le U$. By the prime number theorem and partial summation, the error term is \begin{align*} \sum_{\substack{ N^{\zeta} \le p \le U \\ \gcd(p,dq)=1}} \(\frac{N}{dp}\)^{1/2}\tau(dq) & = N^{1/2} d^{-1/2}\tau(dq) \sum_{\substack{ N^{\zeta} \le p \le U \\ \gcd(p,dq)=1}} \frac{1}{p^{1/2}}\\ &\ll N^{1/2} d^{-1/2} \tau(dq) \frac{U^{1/2}}{\log U} \ll \frac{N}{d^{1/2} \log N} \end{align*} since by the bound on the divisor function~\eqref{eq:tau} we have $U = N^{1+o(1)}$. Since there are only $O(1)$ primes $p \mid dq$ with $p > N^{\zeta}$, using~\eqref{eq:Mer}, we now obtain \begin{align*} \sum_{\substack{N^{\zeta} \le p \le U \\ \gcd(p,dq)=1}}\frac{1}{p} &= \sum_{N^{\zeta} \le p \le U}\frac{1}{p} + O(N^{-\zeta}) \\ & = \log \frac{\log N - 2\log \tau(dq) }{\zeta \log N} +O\(\frac{1}{\log N}\). \end{align*} Thus, using the Mertens formula~\eqref{eq:Mer}, we obtain $$ \sum_{\substack{N^{\zeta} \le p \le U \\ \gcd(p,dq)=1}}\frac{1}{p} = \log (1/\zeta)+o(1), $$ and thus \begin{align*} & \sum_{\substack{N^{\zeta} \le p \le U \\ \gcd(p,dq)=1}} \sum_{\substack{m\sim N/dp \\ \gcd(m,dq)=1}} \mu^2 (m) \\ & \quad\qquad \quad = \( \frac{\xi \log(1/\zeta)}{\zeta(2)} +o(1)\) \prod_{\ell \mid dq}\(1+\frac{1}{\ell}\)^{-1} \frac{N}{d} + O\(\frac{N}{d^{1/2}\log N}\)\,. 
\end{align*} We now add the contribution from primes $U < p \le N$, which we \begin{itemize} \item put in the error term; \item further separate into two ranges $U<p \le V$ and $V<p \le N$; \item replace $\mu^2 (m)$ with $1$ and abandon the condition $ \gcd(p,dq)=1$. \end{itemize} Hence we derive \begin{equation} \label{eq:MEEE} \sum_{\substack{N^{\zeta} \le p \le N \\ \gcd(p,dq)=1}} \sum_{\substack{m\sim N/dp \\ \gcd(m,dq)=1}} \mu^2 (m) = \mathrm{M} + O\(\mathrm{E}_1 + \mathrm{E}_2 + \mathrm{E}_3\) \end{equation} with the main term $$ \mathrm{M}= \( \frac{\xi \log(1/\zeta)}{\zeta(2)} +o(1)\) \prod_{\ell \mid dq}\(1+\frac{1}{\ell}\)^{-1} \frac{N}{d} , $$ where $$ \mathrm{E}_1 = \frac{N}{d^{1/2}\log N}, \quad \mathrm{E}_2= \sum_{ U < p \le V} \sum_{\substack{m\sim N/dp \\ \gcd(m,dq)=1}} 1 , \quad \mathrm{E}_3= \sum_{\ V < p \le N} \sum_{\substack{m\sim N/dp \\ \gcd(m,dq)=1}} 1 $$ are the error terms, which we estimate separately. Note that the range $U<p \le V$ can be empty, and thus $\mathrm{E}_2=0$ in this case. Since, trivially, $dq$ has at most $O\(\log (dq)\)$ prime divisors, by~\eqref{eq:Mer} we obtain $$ \prod_{\ell \mid dq}\(1+\frac{1}{\ell}\) \ll \log \log (dq) \ll \log \log N $$ and thus under the condition $d =o\((\log N)^2/\log \log N\)$, we see that \begin{equation} \label{eq:E1} \mathrm{E}_1 = o(\mathrm{M}). \end{equation} We now follow closely the proof of~\cite[Lemma~8]{HaSh}. In the range $U < p \le V$ we apply Lemma~\ref{lem: AP bound}, which yields \begin{equation} \label{eq: mid range} \mathrm{E}_2 \ll \frac{\varphi(dq)}{dq} \frac{N}{d} \sum_{U < p \le N/(d\log (dq))} \frac{1}{p}. \end{equation} Clearly \begin{equation} \label{eq: prod l} \frac{\varphi(dq)}{dq} = \prod_{\ell \mid dq}\(1-\frac{1}{\ell}\) \le \prod_{\ell \mid dq}\(1+\frac{1}{\ell}\)^{-1}. 
\end{equation} Using the Mertens formula~\eqref{eq:Mer} again, we obtain \begin{align*} \sum_{U < p \le V} \frac{1}{p} &= \log \frac{\log N - \log(d\log (dq))}{ \log N- 2\log \tau(dq)} +O\(\frac{1}{\log N}\)\\ & = \log\( 1+ O\( \frac{\max\{\log(\tau(dq)), \log(d \log (dq))\}}{\log N}\)\) \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad +O\(\frac{1}{\log N}\) \\ & \ll \frac{\max\{\log(\tau(dq)), \log(d\log (dq))\}}{\log N}, \end{align*} which after substituting in~\eqref{eq: mid range} and recalling~\eqref{eq:tau} and~\eqref{eq: prod l}, implies \begin{equation} \label{eq:E2} \mathrm{E}_2 = o(\mathrm{M}). \end{equation} Finally in the range $V < p \le N$, we use the trivial bound $$ \mathrm{E}_3 \le \sum_{V < p \le N} \sum_{\substack{m\sim N/dp \\ \gcd(m,dq)=1}} 1 \le N \sum_{V < p \le N} \frac{1}{p} , $$ which, as in the proof of~\cite[Lemma~8]{HaSh}, together with the Mertens formula~\eqref{eq:Mer} implies \begin{equation} \label{eq:E3} \mathrm{E}_3 \ll N \frac{ \log \log N}{\log N} = o(\mathrm{M}). \end{equation} Substituting~\eqref{eq:E1}, \eqref{eq:E2} and~\eqref{eq:E3} in~\eqref{eq:MEEE}, we conclude the proof. \end{proof} We write for convenience ${\mathcal B}= \{n:~\gcd(n,q)=1\}$ and let $$ c_n = \begin{cases}1 &\text{if $p\mid n \Rightarrow p < N^{\zeta}$},\\ 0 &\text{otherwise.} \end{cases} $$ We are now able to establish our main technical statement, which is an asymptotic formula for the sum \begin{equation} \label{eq:Sum S} S^\sharp=\sum_{\substack{mnr \in {\mathcal B} \\ m, n \sim N}} b_r c_m\mu^2(mn), \end{equation} where the summation is only over $m$ and $n$ with square-free products $mn$.
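The smoothness indicator $c_n$ just introduced is straightforward to compute by trial division. A minimal Python sketch (the function names are ours, for illustration only), which also records the convention $c_1 = 1$:

```python
def largest_prime_factor(n: int) -> int:
    """Largest prime factor of n >= 2 (returns 1 for n = 1), by trial division."""
    largest = 1
    d = 2
    while d * d <= n:
        while n % d == 0:
            largest = d
            n //= d
        d += 1
    if n > 1:
        largest = n
    return largest

def c(n: int, threshold: int) -> int:
    """Indicator c_n: equals 1 if every prime factor of n is < threshold."""
    if n == 1:
        return 1  # empty product: 1 is smooth for any threshold
    return 1 if largest_prime_factor(n) < threshold else 0
```

With `threshold` playing the role of $N^{\zeta}$, summing `c(m, threshold)` over a dyadic range gives the counts of smooth integers entering the sums above.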
A similar sum $$ S=\sum_{\substack{mnr \in {\mathcal B} \\ m, n \sim N}} b_r c_m, $$ treated in the proof of~\cite[Lemma~8]{HaSh}, can be dealt with in a much simpler way, as the variables are independent and we can write $$ S=\sum_{\substack{mnr \in {\mathcal B} \\ m, n \sim N}} b_r c_m = \sum_{\substack{m \in {\mathcal B} \\ m\sim N}} c_m \sum_{\substack{n \in {\mathcal B} \\ n \sim N}} 1 \sum_{r \in {\mathcal B}} b_r . $$ \begin{lemma}\label{lem:sums} For integers $N \ge q^{1/4} > 1$, real $1/2 < \zeta \le 1$ and any finitely supported sequence $b_r$, for the sum~\eqref{eq:Sum S} we have $$ S^\sharp = C\(\frac{\xi N}{\zeta(2)}\)^2\prod_{\ell \mid q}\(1+ \frac{2}{\ell}\)^{-1} (1+\log \zeta +o(1)) \sum_{r \in {\mathcal B}} b_r, $$ where $$ C=\prod_{\ell}\(1-\frac{1}{(\ell+1)^2}\). $$ \end{lemma} \begin{proof} Using the elementary identity~\eqref{inversion}, we have $$ S^\sharp = \sum_{\substack{m,n \in {\mathcal B} \\ m,n \sim N}} c_m\mu^2(m)\mu^2(n) \sum_{d \mid \gcd(n,m)} \mu(d) \ \sum_{r \in {\mathcal B}} b_r \,. $$ Switching the order of summation and using the multiplicativity of the coefficients $c_m$, we obtain (using that $\mu^3(d)=\mu(d)$) \begin{align*} S^\sharp&=\sum_{\substack{d \leq \psi N\\ \gcd(d,q)=1}} \mu(d)c_d \sum_{\substack{m\sim N/d,\, \gcd(m,dq)=1 \\ n\sim N/d, \, \gcd(n,dq)=1}} \mu^2(m)\mu^2(n)c_m \sum_{r \in {\mathcal B}} b_r \\ &=\sum_{\substack{d \leq \psi N\\ \gcd(d,q)=1}} \mu(d)c_d S_1(d) S_2(d) \sum_{r \in {\mathcal B}} b_r, \end{align*} where $$ S_1(d) = \sum_{\substack{m\sim N/d \\\gcd(m,dq)=1}} \mu^2(m)c_m \qquad\mbox{and}\qquad S_2(d) = \sum_{\substack{n\sim N/d \\ \gcd(n,dq)=1}} \mu^2(n). $$ We set \begin{equation} \label{eq: def D} D = \log N \end{equation} and estimate the contribution to $S^\sharp$ from $d \ge D$, using the trivial estimates $$ |S_1(d)|, |S_2(d)| \le N/d, $$ as $O(N^2/D)$.
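The identity~\eqref{inversion}, in the form $\mu^2(mn)=\mu^2(m)\mu^2(n)\sum_{d\mid\gcd(m,n)}\mu(d)$ that can be read off from the first display of this proof (both sides vanish unless $m$ and $n$ are square-free and coprime), is easy to confirm by brute force; a short, purely illustrative Python check:

```python
from math import gcd

def mu(n: int) -> int:
    """Moebius function by trial division."""
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # square factor: mu vanishes
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def divisors(n: int):
    return [d for d in range(1, n + 1) if n % d == 0]

# mu^2(mn) = mu^2(m) mu^2(n) sum_{d | gcd(m, n)} mu(d)
for m in range(1, 60):
    for n in range(1, 60):
        lhs = mu(m * n) ** 2
        rhs = mu(m) ** 2 * mu(n) ** 2 * sum(mu(d) for d in divisors(gcd(m, n)))
        assert lhs == rhs
```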
Thus we obtain \begin{equation} \label{eq:SSSD} S^\sharp=\gamma \sum_{r \in {\mathcal B}} b_r, \end{equation} where \begin{equation} \label{eq: gamma} \gamma = \sum_{\substack{d \le D\\ \gcd(d,q)=1}} \mu(d)c_d S_1(d) S_2(d) + O(N^2/D). \end{equation} So we now assume $d \le D$. Then, by Lemma~\ref{lem: SqFAP} and the divisor bound, we have $$S_2(d) = \(\frac{\xi}{\zeta(2)}+o(1)\)\prod_{\ell \mid dq}\(1+\frac{1}{\ell}\)^{-1} \frac{N}{d}. $$ We now evaluate $S_1(d)$. We remark (since $\zeta > \frac12$) that $$ \sum_{\substack{m\sim N/d \\\gcd(m,dq)=1}} \mu^2(m)c_m =\sum_{\substack{m\sim N/d \\\gcd(m,dq)=1}}\mu^2(m) - \sum_{\substack{N^{\zeta} \le p\le N \\ \gcd(p,dq)=1}}\sum_{\substack{m\sim N/(dp) \\ \gcd(m,dq)=1}} \mu^2(m)\,.$$ Lemmas~\ref{lem: SqFAP} and~\ref{smooth} then give \begin{align*} \sum_{\substack{m\sim N/d \\\gcd(m,dq)=1}}&\mu^2(m)\\& = \(\frac{\xi}{\zeta(2)}+o(1)\)\prod_{\ell \mid dq}\(1+\frac{1}{\ell}\)^{-1} \frac{N}{d} + O\(\(N/d\)^{1/2} \tau(dq)\) \end{align*} and $$ \sum_{\substack{N^{\zeta} \le p\le N \\ \gcd(p,dq)=1}}\sum_{\substack{m\sim N/(dp) \\ \gcd(m,dq)=1}} \mu^2(m) = \(\frac{\xi \log(1/\zeta) }{\zeta(2)}+o(1)\) \prod_{\ell \mid dq}\(1+\frac{1}{\ell}\)^{-1} \frac{N}{d} , $$ respectively. Thus subtracting, we derive $$ S_1(d) = \(\frac{\xi}{\zeta(2)} (1+\log\zeta )+o(1)\)\prod_{\ell \mid dq}\(1+\frac{1}{\ell}\)^{-1} \frac{N}{d}. $$Hence for $d \le D$ we have $$ S_1(d)S_2(d) = \(\frac{\xi N}{\zeta(2)d}\)^2\prod_{\ell \mid dq}\(1+\frac{1}{\ell}\)^{-2} (1+\log\zeta +o(1)) $$ which after the substitution in~\eqref{eq: gamma} implies \begin{equation} \label{eq: gamma asymp1} \begin{split} \gamma = \(\frac{\xi N}{\zeta(2)}\)^2& \prod_{\ell \mid q}\(1+\frac{1}{\ell}\)^{-2} (1+\log\zeta +o(1))\\ &\sum_{\substack{d \leq D\\ \gcd(d,q)=1}}\frac{\mu(d)}{d^2}c_d\prod_{\ell \mid d}\(1+\frac{1}{\ell}\)^{-2} + O(N^2/D). 
\end{split} \end{equation} Now, \begin{align*} \sum_{\substack{d \leq D\\ \gcd(d,q)=1}}\frac{\mu(d)}{d^2}c_d&\prod_{\ell \mid d}\(1+\frac{1}{\ell}\)^{-2}\\ & = \sum_{\substack{d=1\\ \gcd(d,q)=1}}^{\infty} \frac{\mu(d)}{d^2}c_d\prod_{\ell \mid d}\(1+\frac{1}{\ell}\)^{-2} + O(D^{-1}). \end{align*} Recalling the definition of the coefficients $c_d$, we obtain \begin{equation} \label{eq: gamma asymp2} \begin{split} \sum_{\substack{d \leq D\\ \gcd(d,q)=1}}\frac{\mu(d)}{d^2}c_d&\prod_{\ell \mid d}\(1+\frac{1}{\ell}\)^{-2}\\ & = \prod_{\substack{\ell \le N^{\zeta} \\ \gcd(\ell,q)=1}} \(1-\frac{1}{(\ell+1)^2}\) + O(D^{-1}) . \end{split} \end{equation} Furthermore \begin{align*} \prod_{\substack{\ell \le N^{\zeta} \\ \gcd(\ell,q)=1}}& \(1-\frac{1}{(\ell+1)^2}\) \\ &= \prod_{\substack{\ell \le N^{\zeta} \\ \ell\mid q}} \(1-\frac{1}{(\ell+1)^2}\)^{-1} \prod_{\ell \le N^{\zeta}} \(1-\frac{1}{(\ell+1)^2}\) \\ &= \prod_{\ell\mid q} \(1-\frac{1}{(\ell+1)^2}\)^{-1}\prod_{\ell}\(1-\frac{1}{(\ell+1)^2}\)(1+O(N^{-\zeta})). \end{align*} Combining this with~\eqref{eq: gamma asymp2} and then substituting in~\eqref{eq: gamma asymp1}, we obtain \begin{align*} \gamma = \(\frac{\xi N}{\zeta(2)}\)^2& \prod_{\ell \mid q}\(1+\frac{1}{\ell}\)^{-2} \prod_{\ell\mid q} \(1-\frac{1}{(\ell+1)^2}\)^{-1}\\ &\prod_{\ell}\(1-\frac{1}{(\ell+1)^2}\) (1+\log\zeta +o(1)) + O(N^2/D). \end{align*} Since $$ \(1+\frac{1}{\ell}\)^2 \(1-\frac{1}{(\ell+1)^2}\) = 1 +\frac{2}{\ell} , $$ we obtain \begin{align*} \gamma = \(\frac{\xi N}{\zeta(2)}\)^2& \prod_{\ell \mid q}\(1 +\frac{2}{\ell} \)^{-1}\\ &\prod_{\ell}\(1-\frac{1}{(\ell+1)^2}\) (1+\log\zeta +o(1)) + O(N^2/D). \end{align*} By the version of the Mertens formula~\eqref{eq:Mer}, similarly to~\eqref{eq:phi} we have \begin{equation} \label{eq: prod 2l} \prod_{\ell \mid q}\(1 +\frac{2}{\ell} \)^{-1} \ge \prod_{\ell \mid q}\(1 +\frac{1}{\ell} \)^{-2} \gg (\log \log q)^{-2}. 
\end{equation} Therefore with the above choice~\eqref{eq: def D} of $D$, we obtain $$ \gamma = \(\frac{\xi N}{\zeta(2)}\)^2 \prod_{\ell \mid q}\(1 +\frac{2}{\ell} \)^{-1}\prod_{\ell}\(1-\frac{1}{(\ell+1)^2}\) \(1+\log\zeta +o(1) \), $$ which together with~\eqref{eq:SSSD} concludes the proof. \end{proof} \begin{remark} For prime $q$ the proofs of Lemmas~\ref{smooth} and~\ref{lem:sums} simplify quite significantly and the factors depending on $q$ all become equal to $1$. \end{remark} Taking $\zeta=1$ in Lemma~\ref{lem:sums} (or proceeding in a similar manner, but using only the estimates on $S_2(d)$), we obtain: \begin{cor}\label{cor:productsqrfree} We have $$\sum_{\substack{mnr \in {\mathcal B} \\ m, n \sim N}} b_r \mu^2(mn) = (C+o(1))\(\frac{\xi N}{\zeta(2)}\)^2\prod_{\ell\mid q}\(1+\frac{2}{\ell}\)^{-1} \sum_{r \in {\mathcal B}} b_r.$$ \end{cor} \begin{remark} Removing the summation over $r$ in Corollary~\ref{cor:productsqrfree}, that is, taking the sequence $b_r$ which is supported only on $r=1$, we obtain an asymptotic formula for the number of square-free products $mn$ with $m,n$ of the same size and coprime to a fixed number. \end{remark} \subsection{A sieving result} \label{sec:sumprod-AP} We now specify \begin{equation} \label{eq: zeta} \zeta = \rho(1 + \varepsilon) = \frac{1 + \varepsilon}{e^{1/2}}, \end{equation} where $\rho$ is given by~\eqref{eq: rho}, and derive an analogue of~\cite[Lemma~9]{HaSh}. \begin{lemma}\label{lem:sieve} Let $N \ge q^{1/4} > 1$ be integers and let $\varepsilon >0$ be a sufficiently small fixed real number. Suppose that ${\mathcal A} \subseteq {\mathcal B}$ is a set such that for a finitely supported sequence $b_r$ and some $\lambda > 0$ and $\eta > 0$, we have \begin{equation}\label{Lambda} \sum_{\substack{mnr \in {\mathcal A}\\ m, n \sim N}} a_n b_r \mu^2(mn) =\lambda \sum_{\substack{mnr \in {\mathcal B}\\ m, n \sim N}} a_n b_r \mu^2(mn) + O(\lambda x^{1 - \eta}) \end{equation} for any sequence $a_n = O(1)$.
Then for $\zeta$ given by~\eqref{eq: zeta} \begin{align*} \sum_{\substack{mnr \in {\mathcal A}\\ m, n \sim N}} & b_r c_n c_m \mu^2(mn)\\ & \ge C\(\frac{\xi N}{\zeta(2)}\)^2\prod_{\ell \mid q}\(1 +\frac{2}{\ell} \)^{-1}(2\log(1+\varepsilon)+o(1)) \sum_{r \in {\mathcal B}} b_r\\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad + O(\lambda x^{1 - \eta})\,. \end{align*} \end{lemma} \begin{proof} Generally, we follow very closely the argument of the proof of~\cite[Lemma~9]{HaSh}; however, here we use Lemma~\ref{lem:sums} and Corollary~\ref{cor:productsqrfree} instead of~\cite[Lemmas~6 and~8]{HaSh} in the corresponding places. Using the observation of Balog~\cite{Bal}, we have $$ \sum_{\substack{mnr \in {\mathcal A}\\ m, n \sim N}} b_r c_n c_m\mu^2(mn) \ge E - F $$ where $$ E = \sum_{\substack{mnr \in {\mathcal A}\\ m, n \sim N}} b_r c_m\mu^2(mn) \,, \qquad F = \sum_{\substack{mnr \in {\mathcal A}\\ m, n \sim N}} b_r h_n\mu^2(mn)\,, $$ and $h_n = 1-c_n$. By the assumption~\eqref{Lambda}, we have $$ E = \lambda \sum_{\substack{mnr \in {\mathcal B} \\ m, n \sim N}} b_r c_m\mu^2(mn) + O(\lambda x^{1 - \eta}) $$ and $$F = \lambda \sum_{\substack{mnr \in {\mathcal B} \\ m, n \sim N}} b_r h_n\mu^2(mn) + O(\lambda x^{1 - \eta}).$$ Now, by Lemma~\ref{lem:sums} $$ \sum_{\substack{mnr \in {\mathcal B} \\ m, n \sim N}} b_r c_m\mu^2(mn) = C\(\frac{\xi N}{\zeta(2)}\)^2\prod_{\ell \mid q}\(1 +\frac{2}{\ell} \)^{-1}(1+\log \zeta+o(1)) \sum_{r \in {\mathcal B}} b_r \, $$ while by a combination of Lemma~\ref{lem:sums} and Corollary~\ref{cor:productsqrfree} $$\sum_{\substack{mnr \in {\mathcal B} \\ m, n \sim N}} b_r h_n\mu^2(mn) = C\(\frac{\xi N}{\zeta(2)}\)^2\prod_{\ell \mid q}\(1 +\frac{2}{\ell} \)^{-1}(-\log \zeta+o(1)) \sum_{r \in {\mathcal B}} b_r.$$ Combining these estimates, we obtain $$E-F \geq C\(\frac{\xi N}{\zeta(2)}\)^2\prod_{\ell \mid q}\(1 +\frac{2}{\ell} \)^{-1}(1+2\log \zeta+o(1)) \sum_{r \in {\mathcal B}} b_r.$$ Remarking that $1+2\log \zeta=2\log(1+\varepsilon)$, we conclude the proof. \end{proof} In the special case where ${\mathcal A} = {\mathcal A}_{a,q}(x)$ is the set of integers $k \in [x,2x]$ with $k \equiv a \pmod q$, using that $2 \log(1+\varepsilon) > \varepsilon$ if $\varepsilon >0$ is sufficiently small, we derive: \begin{cor}\label{cor:AP} Assume that the condition of Lemma~\ref{lem:sieve} holds for all $x \ge x_0(\varepsilon)$ for the set ${\mathcal A} = {\mathcal A}_{a,q}(x)$ with $\lambda = 1/ \varphi(q)$, where $x_0(\varepsilon)$ depends only on $\varepsilon$ and is sufficiently large. Then, for a finitely supported sequence of positive real numbers $b_r$ and some $\eta > 0$, we have \begin{align*} \sum_{\substack{mnr \in {\mathcal A}_{a,q}(x)\\ m, n \sim N}} & b_r c_n c_m \mu^2(mn)\\& \ge \varepsilon C\(\frac{\xi N}{\zeta(2)}\)^2\prod_{\ell \mid q}\(1 +\frac{2}{\ell} \)^{-1} \sum_{r \in {\mathcal B}} b_r + O(x^{1 - \eta} q^{-1})\,. \end{align*} \end{cor} \subsection{Products in arithmetic progressions} We now define the parameter \begin{equation} \label{eq: nu} \nu= \rf{1/\varepsilon}. \end{equation} For a given $q$, we consider the set of integers $r$ that are products of $13$ distinct primes of the form \begin{equation} \label{eq:set r} r = \ell_1\ldots \ell_{12} s\qquad\mbox{and}\qquad \gcd(r,q)=1, \end{equation} where \begin{equation} \label{eq:primes} \ell_1, \ldots, \ell_{12} \sim q^{1/8}, \qquad s \sim q^{1/\nu}, \end{equation} and let $b_r$ be the characteristic function of this set. We note that $b_r$ is supported on the interval $[R, \psi^{13} R]$ with $R=q^{3/2 +1/\nu}$. As before, ${\mathcal A}_{a,q}(x)$ denotes the set of integers $k \in [x,2x]$ with $k \equiv a \pmod q$.
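The support claim for $b_r$ reflects an elementary fact: a product of $k$ distinct primes taken from a dyadic range $[L,2L]$ always lies in $[L^k,(2L)^k]$. A toy Python sketch (the parameters $L=10$ and $k=3$ are ours, chosen only for illustration, and are unrelated to the actual construction):

```python
from itertools import combinations

def primes_in(lo: int, hi: int):
    """Primes p with lo <= p <= hi, by trial division."""
    return [n for n in range(max(lo, 2), hi + 1)
            if all(n % d for d in range(2, int(n ** 0.5) + 1))]

L, k = 10, 3
P = primes_in(L, 2 * L)  # primes in the dyadic range [L, 2L]
products = [a * b * c for a, b, c in combinations(P, k)]

# every product of k distinct primes from [L, 2L] lies in [L^k, (2L)^k]
assert all(L ** k <= r <= (2 * L) ** k for r in products)
```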
Next, repeating word for word the argument of the proof of~\cite[Lemma~14]{HaSh}, but using Lemma~\ref{lem:CharSquarfree} instead of the classical Burgess bound as in~\cite{HaSh}, we now show that for any fixed sufficiently small $\varepsilon> 0$ the conditions of Lemma~\ref{lem:sieve} are satisfied for the set ${\mathcal A} = {\mathcal A}_{a,q}(x)$ and the choice of $b_r$ with $N=q^{1/4 + \varepsilon}$ upon writing $x=N^2R$. \begin{lemma} \label{lem:Cong} Let $\varepsilon > 0$ be sufficiently small, $q> 1$ and $N=q^{1/4 + \varepsilon}$. Suppose that the sequence $b_r$ is the characteristic function of the set defined by~\eqref{eq:set r} and~\eqref{eq:primes}. Then for integers $a$ and $q$ with $\gcd(a,q)=1$ and such that $q$ is cube-free, we have $$ \sum_{\substack{mnr \in {\mathcal A}_{a,q}(x)\\ m, n \sim N}} a_n b_r \mu^2(mn) = \frac{1}{\varphi(q)}\sum_{\substack{mnr \in {\mathcal B}\\ m, n \sim N}} a_n b_r \mu^2(mn) + O\(q^{-1} x^{1-\eta}\) $$ with $\eta = \varepsilon^4$, $R=q^{3/2 + 1/\nu}$, where $\nu$ is as in~\eqref{eq: nu}, and $x=N^2R$, and any sequence $a_n$ satisfying $|a_n| \le n^{o(1)}$. \end{lemma} \begin{proof} We start with the observation that if $b_r \ne 0$ and $m, n \sim N$ then due to the choice of our parameters we always have $$ mnr \in [N^2R, \psi^{15}N^2 R] \subseteq [x,2x]. $$ In particular, if $b_r \ne 0$ and $m, n \sim N$ then the condition $mnr \in {\mathcal A}_{a,q}(x)$ is equivalent to the congruence $mnr \equiv a \pmod q$ and the condition $mnr \in {\mathcal B}$ is merely equivalent to $\gcd(mn,q)=1$. Let $$ \mathrm{S} = \sum_{\substack{mnr \in {\mathcal A}_{a,q}(x)\\ m, n \sim N}} a_n b_r \mu^2(mn). $$ Using the orthogonality of characters, we write $$ \mathrm{S} = \sum_{\substack{mnr \in {\mathcal B}\\ m, n \sim N}} a_n b_r \mu^2(mn)\frac{1}{\varphi(q)} \sum_{\chi\in {\mathcal X}_q} \chi(mnr a^{-1}).
$$ Using the identity~\eqref{inversion}, we write \begin{align*} \mathrm{S} = \frac{1}{\varphi(q)} \sum_{\chi\in {\mathcal X}_q}& \sum_{m, n \sim N} a_n \chi(mn)\mu^2(m)\mu^2(n)\\ & \qquad \sum_{d \mid \gcd(n,m)} \mu(d) \ \sum_{r \in {\mathcal R}} \chi(r) b_r \chi(a^{-1})\,, \end{align*} where here and below ${\mathcal R}$ denotes the set of integers $r$ defined by~\eqref{eq:set r} and~\eqref{eq:primes}. Thus, rearranging the summation, we obtain \begin{equation} \label{eq:SSd} \mathrm{S}= \sum_{d \leq \psi N} \mu(d)\mathrm{S}_d, \end{equation} where $$ \mathrm{S}_d = \frac{1}{\varphi(q)} \sum_{\chi\in {\mathcal X}_q} \sum_{\substack{m\sim N/d,\, \gcd(m,d)=1 \\ n\sim N/d, \, \gcd(n,d)=1}} \chi(mn)\mu^2(m)\mu^2(n)a_{nd} \sum_{r \in {\mathcal R}} \chi(r)\chi(d^2a^{-1}). $$ In particular $\mathrm{S}_d = 0$ unless $\gcd(d,q)=1$, in which case, using the orthogonality of characters again, we see that $$ \mathrm{S}_d = \sum_{\substack{mnr \in {\mathcal A}_{ad^{-2},q}(x)\\ m,n\sim N/d, \, \gcd(mn,d)=1 }} a_{nd}\mu^2(m)\mu^2(n) b_r. $$ Since when $m$ and $n$ are fixed, the value of $r$ is uniquely defined modulo $q$ and thus can take $O(R/q)$ possible values (recall that $R\ge q$), we have the trivial estimate \begin{equation} \label{eq:Triv Sd} \mathrm{S}_d \le N^{2+o(1)}R d^{-2} q^{-1}= x^{1+o(1)} d^{-2} q^{-1}. \end{equation} On the other hand, separating the contribution from the principal character in~\eqref{eq:SSd}, we also write \begin{equation} \label{eq:SdMdEd} \mathrm{S}_d = \mathrm{M}_d + O\(\mathrm{E}_d\), \end{equation} with the main term $$ \mathrm{M}_d = \frac{1}{\varphi(q)} \sum_{\substack{mnr \in {\mathcal B}\\ m, n \sim N/d,\, \gcd(mn,d)=1 }} a_{nd} b_r \mu^2(m)\mu^2(n) $$ and the error term $$ \mathrm{E}_d = \frac{1}{\varphi(q)} \sum_{\chi\in {\mathcal X}_q^*} \left|\mathrm{E}_d(\chi) \right|, $$ where $$ \mathrm{E}_d(\chi) = \sum_{\substack{m\sim N/d,\, \gcd(m,d)=1 \\ n\sim N/d, \, \gcd(n,d)=1}} \chi(mn)\mu^2(m)\mu^2(n)a_{nd} \sum_{r \in {\mathcal R}} \chi(r)\chi(d^2a^{-1}) .
$$ We now choose a parameter $D$, apply~\eqref{eq:Triv Sd} for $d > D$, and apply~\eqref{eq:SdMdEd} otherwise. Together with~\eqref{eq:SSd}, this leads to the asymptotic formula \begin{equation} \label{eq:SME-1} \mathrm{S}= \sum_{d \leq \psi N} \mu(d) \mathrm{M}_d + O\(\mathrm{E} + x^{1+o(1)} D^{-1} q^{-1}\), \end{equation} where $$ \mathrm{E} = \sum_{d \leq D} \mathrm{E}_d. $$ Using the trivial upper bound $\mathrm{M}_d \le x^{1+o(1)} d^{-2} q^{-1}$, which is similar to~\eqref{eq:Triv Sd}, we obtain \begin{align*} \sum_{d \le D } \mu(d) \mathrm{M}_d & = \sum_{d \leq \psi N} \mu(d) \mathrm{M}_d + O\( x^{1+o(1)} D^{-1} q^{-1}\)\\ & = \frac{1}{\varphi(q)}\sum_{\substack{mnr \in {\mathcal B}\\ m, n \sim N}} a_n b_r \mu^2(mn) + O\( x^{1+o(1)} D^{-1} q^{-1}\), \end{align*} which together with~\eqref{eq:SME-1} implies \begin{equation} \label{eq:SME-2} \mathrm{S}= \frac{1}{\varphi(q)}\sum_{\substack{mnr \in {\mathcal B}\\ m, n \sim N}} a_n b_r \mu^2(mn) + O\(\mathrm{E} + x^{1+o(1)} D^{-1} q^{-1}\). \end{equation} Hence it remains to estimate $\mathrm{E}$, which we do by estimating each $\mathrm{E}_d$, $d\le D$, individually. From now on, we fix $$ D=x^{\varepsilon^2}. $$ For a real $\omega> 0$ we consider the character sums over primes $$ V_\omega(\chi) = \sum_{\ell\sim q^{\omega}} \chi(\ell), $$ which we use with $\omega =1/8$ and $\omega=1/\nu$. We also consider the weighted sums $$ W_d(\chi) = \sum_{\substack{m \sim N/d \\ \gcd(m,d)=1}} \sum_{\substack{n \sim N/d \\ \gcd(n,d)=1}} \sum_{v \in {\mathcal V}} a_{nd} \chi(mnv)\mu^2(m)\mu^2(n), $$ where $v$ runs through the set ${\mathcal V}$ of $q^{1/2 + o(1)}$ products $v= \ell_1 \ell_2 \ell_3 \ell_4$ as in~\eqref{eq:set r}. Thus, we can write \begin{equation} \label{eq:fE2} \vert \mathrm{E}_d \vert = \frac{1}{\varphi(q)} \sum_{\chi\in {\mathcal X}_q^*}\left|V_{1/8}(\chi)\right|^8 \left|V_{1/\nu}(\chi)\right|\left|W_d(\chi) \right|.
\end{equation} We now collect the currently available information about the sums $V_{1/8}(\chi)$, $V_{1/\nu}(\chi)$ and $W_d(\chi)$. First, since $N= q^{1/4 + \varepsilon}$, for a sufficiently small $\varepsilon>0$ and the above choice of $D$ we have $N/d > (dq)^{1/4 +\varepsilon/2}$. Hence, by Corollary~\ref{cor:CharSquarfree-eps} applied to the character $\chi \chi_{0}^{(d)}$ where $\chi \in {\mathcal X}_q^*$ and $\chi_{0}^{(d)}$ is the trivial character modulo $d$, we have \begin{equation} \begin{split} \label{eq:indiv} \max_{\chi \in {\mathcal X}_q^*} \left|W_d(\chi)\right| & \le \frac{N^{1+o(1)}}{d} q^{1/2} \max_{\chi \in {\mathcal X}_q^*} \left|\sum_{\substack{m \sim N/d \\ \gcd(m,d)=1}} \chi(m)\mu^2(m)\right| \\ & \ll \left(\frac{N}{d}\right)^{2-c_0\varepsilon^2}q^{1/2} \end{split} \end{equation} with some absolute constant $c_0> 0.$ We also have the inequalities \begin{equation} \label{eq:aver1} \sum_{\chi \in {\mathcal X}_q} \left|V_{1/8}(\chi) \right|^{16} \ll q^{2}, \qquad \sum_{\chi \in {\mathcal X}_q} \left|V_{1/\nu}(\chi) \right|^{2\nu} \ll q^{2}, \end{equation} and \begin{equation} \label{eq:aver2} \sum_{\chi\in {\mathcal X}_q} \left|W_d(\chi)\right| ^2 \le (N/d)^2 q^{3/2+o(1)}\(1 + (N/d)^2q^{-1/2}\) , \end{equation} implied by Lemma~\ref{lem:Aver}. Since for the above choice of parameters we have $(N/d)^2 \ge q^{1/2}$ (provided $\varepsilon$ is small enough) the inequality~\eqref{eq:aver2} simplifies as \begin{equation} \label{eq:aver3} \sum_{\chi\in {\mathcal X}_q} \left|W_d(\chi)\right| ^2 \le (N/d)^4 q^{1+o(1)}. 
\end{equation} We now write $ \left|W_d(\chi) \right| = \left|W_d(\chi) \right|^{1/\nu} \left|W_d(\chi) \right|^{1-1/\nu} $ and apply~\eqref{eq:indiv}, deriving from~\eqref{eq:fE2} \begin{equation} \label{eq:fE3} \begin{split} \vert \mathrm{E}_d \vert \le \frac{1}{\varphi(q)} \(\left(\frac{N}{d}\right)^{2-c_0\varepsilon^2}q^{1/2}\)^{1/\nu} \sum_{\chi\in {\mathcal X}_q^*}&\left|V_{1/8}(\chi)\right|^8 \left|V_{1/\nu}(\chi)\right| \left|W_d(\chi) \right|^{1-1/\nu}. \end{split} \end{equation} Finally, since $$ \frac{1}{2} + \frac{1}{2\nu} + \frac{\nu-1}{2\nu} = 1, $$ by the H{\"o}lder inequality, applied to the sum in~\eqref{eq:fE3}, and extending the summation to all $\chi \in {\mathcal X}_q$, we obtain \begin{align*} \vert \mathrm{E}_d \vert \le \frac{1}{\varphi(q)} & \(\left(\frac{N}{d}\right)^{2-c_0\varepsilon^2}q^{1/2}\)^{1/\nu} \( \sum_{\chi \in {\mathcal X}_q} \left|V_{1/8}(\chi) \right|^{16}\right)^{1/2} \\ & \qquad \(\sum_{\chi\in {\mathcal X}_q} \left|V_{1/\nu}(\chi)\right|^{2\nu}\)^{1/(2\nu)} \(\sum_{\chi\in {\mathcal X}_q} \left|W_d(\chi) \right|^2\)^{(\nu-1)/(2\nu)}. \end{align*} Recalling~\eqref{eq:aver1} and~\eqref{eq:aver3}, we derive \begin{align*} \vert \mathrm{E}_d \vert & \le \frac{1}{\varphi(q)} \(\left(\frac{N}{d}\right)^{2-c_0\varepsilon^2}q^{1/2}\)^{1/\nu} q^{1+1/\nu} \((N/d)^4 q^{1+o(1)}\)^{\frac{\nu-1}{2\nu}}\\ &= \frac{(N/d)^2}{\varphi(q)} q^{3/2+1/\nu + o(1)} (N/d)^{-c_0\varepsilon^2/\nu} = \frac{x}{d^2}\varphi(q)^{-1}(N/d)^{-c_0\varepsilon^2/\nu+o(1)} . \end{align*} Noticing that $\varepsilon \geq 1/\nu$ and $d\leq x^{\varepsilon^2}$, we derive $$ \mathrm{E} = \sum_{d \leq D} |\mathrm{E}_d| \ll \frac{x^{1-\eta}}{\varphi(q)} $$ with $\eta= \varepsilon^4$ for a sufficiently small $\varepsilon$, which together with~\eqref{eq:SME-2} concludes the proof. \end{proof} \section{Proofs of Main Results} \subsection{Proof of Theorem~\ref{thm:Malpha}} \subsubsection{Cube-free moduli} Here we always assume that $q$ is cube-free.
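As a quick sanity check, the elementary identities used in the preceding estimates, namely the local factor identity $(1+1/\ell)^2(1-1/(\ell+1)^2)=1+2/\ell$, the H{\"o}lder exponents summing to $1$, and the relation $1+2\log\zeta=2\log(1+\varepsilon)$ for $\zeta=(1+\varepsilon)e^{-1/2}$, can all be verified mechanically; a short Python sketch with exact rational arithmetic (illustrative only):

```python
from fractions import Fraction as F
import math

# (1 + 1/l)^2 (1 - 1/(l + 1)^2) = 1 + 2/l for every integer l >= 1
for l in range(1, 200):
    assert (1 + F(1, l)) ** 2 * (1 - F(1, (l + 1) ** 2)) == 1 + F(2, l)

# Hoelder exponents: 1/2 + 1/(2 nu) + (nu - 1)/(2 nu) = 1
for nu in range(1, 100):
    assert F(1, 2) + F(1, 2 * nu) + F(nu - 1, 2 * nu) == 1

# with zeta = (1 + eps) e^{-1/2}: 1 + 2 log(zeta) = 2 log(1 + eps)
eps = 0.1
zeta = (1 + eps) * math.exp(-0.5)
assert abs(1 + 2 * math.log(zeta) - 2 * math.log(1 + eps)) < 1e-12
```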
We fix some sufficiently small $\varepsilon > 0$. Let $\rho$, $\zeta$ and $\nu$ be as in~\eqref{eq: rho}, \eqref{eq: zeta} and~\eqref{eq: nu}, respectively, and let $$ \beta = \rho/4 = \frac{1}{4e^{1/2}}. $$ We also choose $N$, $R$ and $x$ as in Lemma~\ref{lem:Cong} and remark that $$ N^{\zeta} = q^{(1/4 + \varepsilon)\rho(1+\varepsilon)} = q^{\beta + 5\beta \varepsilon + \rho\varepsilon^2} \le q^{\beta + \varepsilon} $$ provided that $\varepsilon$ is small enough. Finally, we define ${\mathcal K}$ as the following {\it multiset\/} \begin{equation}\label{defk} {\mathcal K} = \{k=mn~:~ m, n \sim N, \ \mu^2(mn)=1, \ p \mid mn \Rightarrow p < N^{\zeta}\}, \end{equation} where the integers $k$ are counted with multiplicity in ${\mathcal K}$. For integers $a$ and $q$ with $\gcd(a,q)=1$ and such that $q$ is cube-free, we consider the number $T$ of solutions to the congruence \begin{equation} \label{eq:rk cong} kr\equiv a \pmod q \end{equation} where $k\in {\mathcal K}$, with the multiset ${\mathcal K}$ defined by~\eqref{defk}, and $r$ is defined by~\eqref{eq:set r} and~\eqref{eq:primes}. By Lemma~\ref{lem:Cong} and then by Corollary~\ref{cor:AP}, we see that \begin{equation} \begin{split} \label{eq:T asymp} T & = \sum_{\substack{kr \in {\mathcal A}_{a,q}(x)\\ k\in {\mathcal K}}} b_r = \frac{1}{\varphi(q)}\sum_{\substack{kr \in {\mathcal B}\\ k \in {\mathcal K}}} b_r + O\(q^{-1} x^{1-\eta}\) \\ & \ge \varepsilon C\frac{1}{\varphi(q)} \(\frac{\xi N}{\zeta(2)}\)^2\prod_{\ell \mid q}\(1 +\frac{2}{\ell} \)^{-1} \sum_{r \in {\mathcal B}} b_r + O(x^{1 - \eta} q^{-1})\,. \end{split} \end{equation} By the prime number theorem there are $q^{3/2+ 1/\nu+o(1)}$ values of $r$ given by~\eqref{eq:set r} and~\eqref{eq:primes} and for each of them $q^{3/2 + 1/\nu} \ll r \ll q^{3/2 + 1/\nu}$.
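The structure of the count $T$ can be illustrated on toy parameters: every pair $(k,r)$ with $\gcd(kr,q)=1$ contributes to $T$ for exactly one residue class $a$, so the counts over $a$ sum to the total number of pairs. A Python sketch (all parameters below, $q=11$, $N=4$ and the set of $r$, are tiny toy choices made purely for illustration):

```python
def is_squarefree(n: int) -> bool:
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

q = 11
N = 4
# toy analogue of the multiset K: k = m n with m, n in (N, 2N],
# mn square-free, counted with multiplicity
K = [m * n for m in range(N + 1, 2 * N + 1)
     for n in range(N + 1, 2 * N + 1) if is_squarefree(m * n)]
R = [13, 17, 19]  # toy analogue of the admissible r, all coprime to q

def T(a: int) -> int:
    """Number of pairs (k, r) with k r congruent to a mod q."""
    return sum(1 for k in K for r in R if (k * r) % q == a)

# each pair lands on exactly one invertible residue class mod q
assert sum(T(a) for a in range(1, q)) == len(K) * len(R)
```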
Hence, for a sufficiently small $\varepsilon > 0$, after simple calculations, using~\eqref{eq:phi} and also~\eqref{eq: prod 2l}, we obtain from~\eqref{eq:T asymp} that \begin{equation} \label{eq:T large} T \ge N^2R q^{-1+o(1)} = xq^{-1+o(1)}. \end{equation} We see from the definition of the sets of $r$ and $k$ that if $kr$ is not square-free then it is divisible by a square of a prime $\ell \ge q^\kappa$ where $\kappa = \min\{1/8, 1/\nu\}$. Together with $kr \in {\mathcal A}_{a,q}(x)$ this puts the product $kr\le x$ in a prescribed arithmetic progression modulo $q\ell^2$. Thus there are at most $x/(q\ell^2)$ positive integers $t$ in any such progression. Summing over all $\ell \ge q^\kappa$ (and ignoring the primality constraint) we obtain at most $$ \sum_{\ell \ge q^\kappa}\frac{x}{q\ell^2} \ll xq^{-1-\kappa} $$ such values of $t$. From the classical bound on the divisor function, see~\cite[Equation~(1.81)]{IwKow}, we infer that each such $t$ leads to at most $t^{o(1)} = q^{o(1)}$ possible triples $(m,n,r)$ with $mn =k \in {\mathcal K}$. Comparing this with~\eqref{eq:T large}, we see that out of $T$ solutions to~\eqref{eq:rk cong} at most $Tq^{-\kappa+o(1)}$ are not square-free. Since $mnr \le x = q^{2+2 \varepsilon + \frac{1}{\nu}}$ we have \begin{equation} \label{eq: prelim} M_{1/(4 e^{1/2})+\varepsilon}^*(q) \ll q^{2 + 3 \varepsilon}, \end{equation} and the result follows. Indeed, assuming that it fails, we see that there are $\varepsilon_0$ and $\delta_0$ such that $$ M_{1/(4 e^{1/2})+\varepsilon_0}^*(q) \gg q^{2 + \delta_0}. $$ Then taking $\varepsilon = \min\{\varepsilon_0, \delta_0/4\}$ we obtain a contradiction with~\eqref{eq: prelim}. \subsubsection{Arbitrary moduli} We can derive a result for moduli $q$ that are not cube-free following the same lines. We define the parameters $N=q^{1/3+\varepsilon}$ and $\nu$ as in~\eqref{eq: nu}.
We consider the set of integers $r$ that are products of $9$ distinct primes of the form \begin{equation} \label{eq:set rbis} r = \ell_1\ldots \ell_{8} s\qquad\mbox{and}\qquad \gcd(r,q)=1, \end{equation} where \begin{equation} \label{eq:primesbis} \ell_1, \ldots, \ell_{8} \sim q^{1/6}, \quad s \sim q^{1/\nu}. \end{equation} As before, we define ${\mathcal K}$ as the multiset \begin{equation}\label{defkbis} {\mathcal K} = \{k=mn~:~ m, n \sim N, \ \mu^2(mn)=1, \ p \mid mn \Rightarrow p < N^{\zeta}\}, \end{equation} where the integers $k$ are counted with multiplicity in ${\mathcal K}$. For integers $a$ and $q$ with $\gcd(a,q)=1$, we similarly define the number $T'$ of solutions to the congruence $$ kr\equiv a \pmod q $$ where $k\in {\mathcal K}$, with the multiset ${\mathcal K}$ defined by~\eqref{defkbis}, and $r$ is defined by~\eqref{eq:set rbis} and~\eqref{eq:primesbis}. For this new set of parameters, applying Corollary~\ref{cor:CharSquarfree-eps} in the case of arbitrary $q$ and following exactly the same lines, we can show that the conditions of Lemma~\ref{lem:sieve} are fulfilled and prove an exact analogue of Lemma~\ref{lem:Cong}. The proof then proceeds exactly as in the case of cube-free moduli. \subsection{Proof of Theorem~\ref{thm:Malpha-p-AA}} We fix some integer $n > 2/\alpha$ and reals $1/4 > \varepsilon, \delta > 0$ and define $$ \beta = 1- 2\delta/n +\varepsilon \qquad\mbox{and}\qquad k = \rf{\beta/\alpha}. $$ Clearly it is enough to prove Theorem~\ref{thm:Malpha-p-AA} for all but $Q^{o(1)}$ primes in dyadic intervals $[Q/2,Q]$. We further denote $$ T =\fl{(Q/2)^{2/n}} \qquad\mbox{and}\qquad W = \fl{(Q/2)^{\beta}} $$ and define the sets \begin{itemize} \item ${\mathcal S}$ as the set of square-free integers $s \le T$; \item ${\mathcal U}$ as the set of products $u = \ell_1\ldots \ell_k $ of $k$ distinct primes $ \ell_1,\ldots, \ell_k \in [0.5W^{1/k}, W^{1/k}]$.
\end{itemize} We note that for the above definitions we have $$ n \le \frac{2 \log Q}{\log T} = n +O(1/\log Q). $$ Hence we have $\gamma \ll 1/\log Q$ in the conditions of Lemma~\ref{lem:CharSquarfree-AA} and thus, recalling that $\delta<1/4$, we have \begin{equation} \label{eq:bound p} \max_{\chi\in{\mathcal X}_p^*} \left|S^\sharp_\chi(T)\right|\le T^{1-\delta}, \end{equation} for all but $Q^{4\delta+o(1)}$ primes $p \in [Q/2, Q]$. Hence, we fix a prime $p \in [Q/2, Q]$ for which the bound~\eqref{eq:bound p} holds. Clearly products $suv$ with $(s,u,v) \in {\mathcal S}\times {\mathcal U}\times {\mathcal U}$ are $p^{\alpha}$-smooth (as $k$ is chosen to satisfy $\beta/k < \alpha$), but, generally speaking, may not be square-free. We now claim that there are many products of this type in every reduced residue class modulo $p$. We then show that at least one of these representatives is square-free. Indeed, let us fix some integer $a$ with $\gcd(a,p) = 1$ and let $N$ be the number of solutions to \begin{equation} \label{eq:cong} suv\equiv a \pmod p, \qquad (s,u,v) \in {\mathcal S}\times {\mathcal U}\times {\mathcal U}. \end{equation} To show that $N > 0$, and thus prove the claim, for a real $x$ we denote, as usual, by $\pi(x)$ the number of primes $\ell \le x$. Certainly the asymptotic formula $$ \# {\mathcal S} \sim \frac{6}{\pi^{2}} T $$ for the cardinality of ${\mathcal S}$ is well known; however, it is enough for us to use the trivial bounds $$ T \ge \# {\mathcal S} \ge \pi(T). $$ We also use that $$ \#{\mathcal U} =\binom{\pi\(W^{1/k}\)}{k}. $$ It now follows from the prime number theorem that \begin{equation} \label{eq:Card} \# {\mathcal S} = T^{1+o(1)} \qquad\mbox{and}\qquad \#{\mathcal U} = W^{1+o(1)}. \end{equation} Using the orthogonality of characters we express the number of solutions to~\eqref{eq:cong} as $$ N= \sum_{(s,u,v) \in {\mathcal S}\times {\mathcal U}\times {\mathcal U}} \frac{1}{p-1}\sum_{\chi\in {\mathcal X}_p} \chi(suva^{-1}).
$$ Changing the order of summation and using the multiplicativity of characters, we now obtain $$ N =\frac{1}{p-1}\sum_{\chi\in {\mathcal X}_p} \chi(a^{-1}) \(\sum_{u \in {\mathcal U}} \chi(u) \)^2 \sum_{s\in {\mathcal S}} \chi(s) . $$ Now, separating the contribution from the principal character, we derive \begin{equation} \label{eq:T and R} N =\frac{\#{\mathcal S}\( \#{\mathcal U} \)^2}{p-1}+ \frac{1}{p-1}R, \end{equation} where $$ R = \sum_{\chi\in {\mathcal X}_p^*} \chi(a^{-1}) \(\sum_{u \in {\mathcal U}} \chi(u) \)^2 S^\sharp_\chi(T) $$ and $S^\sharp_\chi(T)$ is given by~\eqref{eq:sum s-f}. Since $p$ is chosen to satisfy the bound~\eqref{eq:bound p}, we have \begin{equation} \label{eq:R} |R| \le T^{1-\delta} \sum_{\chi\in {\mathcal X}_p^*} \left| \sum_{u \in {\mathcal U}} \chi(u) \right|^2 . \end{equation} Furthermore, using the orthogonality property~\eqref{eq:orth}, we obtain $$ \sum_{\chi\in {\mathcal X}_p^*} \left| \sum_{u \in {\mathcal U}} \chi(u) \right|^2 \le \sum_{\chi\in {\mathcal X}_p} \left| \sum_{u \in {\mathcal U}} \chi(u) \right|^2 = (p-1) \#{\mathcal U}. $$ Recalling~\eqref{eq:R}, we obtain $$ |R|\le T^{1-\delta}p^{1+o(1)} \#{\mathcal U}, $$ which after substitution in~\eqref{eq:T and R} and then using~\eqref{eq:Card} gives \begin{align*} N & =\frac{\#{\mathcal S}\( \#{\mathcal U} \)^2}{p-1} + O\(T^{1-\delta} \#{\mathcal U}\)\\ &= \frac{\#{\mathcal S}\( \#{\mathcal U} \)^2}{p-1}\(1+ O\(pT^{-\delta}W^{-1}\)\) \\ &= \frac{\#{\mathcal S}\( \#{\mathcal U} \)^2}{p-1}\(1+ O\(p^{-\varepsilon}\)\). \end{align*} It remains to show that out of the $N$ products $suv$ satisfying~\eqref{eq:cong} we can find at least one which is square-free. Similarly to the argument in the proof of Theorem~\ref{thm:Malpha}, we note that if $suv$ is not square-free then it is divisible by a square of a prime $\ell \in (0.5W^{1/k}, W^{1/k}]$ and thus out of $N$ solutions to~\eqref{eq:cong} at most $TW^{2-1/k}p^{-1+o(1)}$ are not square-free.
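The orthogonality computation above is easy to sanity-check numerically for a small prime. The following sketch (with toy stand-ins for the sets ${\mathcal S}$ and ${\mathcal U}$, chosen by us and not the sets of the proof) builds the Dirichlet characters modulo $p = 11$ from a primitive root and compares the character-sum expression for $N$ with a direct count of solutions:

```python
import cmath

p, g = 11, 2                       # a small prime and a primitive root mod 11
dlog, x = {}, 1
for k in range(p - 1):             # table of discrete logarithms to base g
    dlog[x] = k
    x = (x * g) % p

def chi(j, n):
    """The j-th Dirichlet character mod p, defined via the primitive root g."""
    return cmath.exp(2j * cmath.pi * j * dlog[n % p] / (p - 1))

S = [1, 2, 3, 5, 6, 7]             # toy stand-in for the square-free set S
U = [15, 21, 35]                   # toy stand-in for U (products of two primes)
a = 4
a_inv = pow(a, -1, p)

# direct count of solutions s*u*v = a (mod p)
direct = sum((s * u * v) % p == a for s in S for u in U for v in U)

# the same count via N = (p-1)^{-1} sum_chi chi(a^{-1}) (sum_u chi(u))^2 sum_s chi(s)
N = sum(chi(j, a_inv) * sum(chi(j, u) for u in U) ** 2
        * sum(chi(j, s) for s in S) for j in range(p - 1)) / (p - 1)
```

By orthogonality the two counts agree exactly (up to floating-point roundoff in the complex exponentials).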
We now conclude that for a sufficiently large $p$ at least one of the products $$ suv \le TW^2 \le Q^{2+2(1-2\delta)/n +2\varepsilon} $$ satisfying~\eqref{eq:cong} is square-free. We recall that this holds for all but $Q^{4\delta+o(1)}$ primes $p \in [Q/2, Q]$. Since $n$ can be chosen arbitrarily large, while $\varepsilon$ and $\delta > 0$ can be chosen arbitrarily small, the result now follows. \subsection{Proof of Theorem~\ref{thm:M1}} Let us fix some sufficiently small $\varepsilon > 0$ and set \begin{equation} \label{eq:DhL} D = p^{\varepsilon/4}, \qquad h = p-1, \qquad L = p^{1/4+\varepsilon}. \end{equation} Let $N_{a,p}^\sharp(L,h)$ be the number of solutions to the congruence~\eqref{eq:cong llu} with a square-free $u$. Using the standard inclusion-exclusion principle, we write $$ N_{a,p}^\sharp(L,h) = \sum_{d \le h^{1/2}} \mu(d) N_{ad^{-2},p}(L,h/d^2). $$ We use Lemma~\ref{lem:congr-asymp} to estimate the contribution from $d \le D$ as \begin{align*} \sum_{d \le D} \mu(d) N_{ad^{-2},p}(L,h/d^2) & = \frac{K^2 h}{p} \sum_{d \le D} \frac{\mu(d)}{d^2} + O\(D L^{3/2} p^{1/8+o(1)}\)\\ & = \frac{K^2 h}{p} \sum_{d=1}^\infty \frac{\mu(d)}{d^2} + O\( \frac{K^2 h}{Dp} + D L^{3/2} p^{1/8+o(1)}\)\\ & = \frac{K^2 h}{\zeta(2) p} + O\( \frac{K^2 h}{Dp} + D L^{3/2} p^{1/8+o(1)}\), \end{align*} where, as before, $K$ is the cardinality of the set of primes $\ell \in [L,2L]$. Next, we use Lemma~\ref{lem:congr-bound} to estimate the contribution from $d > D$ as \begin{equation} \begin{split} \label{eq:large d} \sum_{D < d \le h^{1/2}} N_{ad^{-2},p}(L,h/d^2) & = \sum_{D < d \le h^{1/2}} \(\frac{L^2 h}{d^2p} +1\)p^{o(1)} \\ & \le \(\frac{L^2 h}{Dp} + h^{1/2}\)p^{o(1)}. \end{split} \end{equation} Therefore, we see that $$ N_{a,p}^\sharp(L,h) = \frac{ K^2 h}{\zeta(2) p} + O\( \(\frac{L^2 h}{Dp} + D L^{3/2} p^{1/8}+h^{1/2}\) p^{o(1)}\).
$$ In particular, recalling the choice of parameters in~\eqref{eq:DhL}, we obtain \begin{equation} \label{eq:NLhK4} N_{a,p}^\sharp(L,h)= \frac{ K^2 h}{\zeta(2) p} + O\(p^{1/2+7\varepsilon/4+o(1)}\). \end{equation} Since $K= L^{1+o(1)} = p^{1/4 + \varepsilon+o(1)}$, the main term in~\eqref{eq:NLhK4} is of the form $L^{2+o(1)}hp^{-1} = p^{1/2+2\varepsilon + o(1)}$, which dominates the error term. Hence we can simplify the equation~\eqref{eq:NLhK4} as \begin{equation} \label{eq:NLh4} N_{a,p}^\sharp(L,h) = p^{1/2+2\varepsilon + o(1)}. \end{equation} Now a product $\ell_1\ell_2 u$ contributing to $N_{a,p}^\sharp(L,h)$ is not square-free if $$ \ell_1 = \ell_2\quad \text{or} \quad \ell_1 \mid u \quad \text{or} \quad \ell_2 \mid u. $$ Since each choice $\ell_1 = \ell_2$ defines $u$ uniquely, we have at most $L$ non-square-free solutions of this type. Solutions with $\ell_j \mid u$, $j =1, 2$, lead to a congruence of the type~\eqref{eq:cong ll2v} with $h/L$ instead of $h$. Thus, by Lemma~\ref{lem:congr-boundsquare} there are $Lp^{o(1)}$ such solutions. Since both these quantities are much smaller than $N_{a,p}^\sharp(L,h)$ given by~\eqref{eq:NLh4}, and $\varepsilon$ is arbitrary, the result follows. \subsection{Proof of Theorem~\ref{thm:M1-AA}} Let us fix some $\varepsilon > 0$ and instead of~\eqref{eq:DhL} we now set \begin{equation} \label{eq:DQL} D = Q^{\varepsilon/2}, \qquad E = Q^{1/6+\varepsilon},\qquad h = Q, \qquad L = Q^{1/6+\varepsilon}. \end{equation} We follow the same lines as in the proof of Theorem~\ref{thm:M1} using Lemma~\ref{lem:congr-asymp-AA} instead of Lemma~\ref{lem:congr-asymp}. To begin, we note that for $L \le p^{1/5}$, the bound of Lemma~\ref{lem:congr-asymp-AA} with $k=5$ takes the form $$ \( L^{14/10} p^{1/10} + L^{19/10}\) p^{o(1)} = L^{14/10} p^{1/10+o(1)}.
$$ Hence for any prime $p \in [Q,2Q]$ to which this bound applies, as in the proof of Theorem~\ref{thm:M1}, we estimate the contribution from $d \le D$ as $$ \sum_{d \le D} \mu(d) N_{ad^{-2},p}(L,h/d^2) = \frac{K^2 h}{\zeta(2) p} + O\( \frac{K^2 h}{Dp} + D L^{14/10} p^{1/10 +o(1)} \), $$ where, as before, $K= L^{1+o(1)}$ is the cardinality of the set of primes $\ell \in [L,2L]$. Next, using Lemma~\ref{lem:congr-bound} we estimate the contribution from $D < d \le E$ similarly to~\eqref{eq:large d} as \begin{align*} \sum_{D < d \le E} N_{ad^{-2},p}(L,h/d^2) & = \sum_{D < d \le E} \(\frac{L^2 h}{d^2p} +1\)p^{o(1)} \\ & \le \(\frac{L^2 h}{Dp} + E\)p^{o(1)}. \end{align*} Finally, for large divisors, that is, for $E < d \le h^{1/2}$, we cover the range of summation over $d$ by $O(\log p)$ dyadic intervals of the form $[F,2F]$ where $E \le F \le h^{1/2}$. Clearly, recalling~\eqref{eq:DQL}, we verify that $$F \le h^{1/2} \le p \qquad\mbox{and}\qquad L^2h/F^2 \le L^2h/E^2 \leq p. $$ Hence, we can estimate the contribution from each of the intervals by Lemma~\ref{lem:congr-bound-aver} as \begin{align*} R_{a,p}(F,L,h/F^2) & \le \max\left\{F(L^2h/F^2)^{1/4}p^{-1/4}, F^{1/2}(L^2h/F^2)^{1/4}\right\} p^{o(1)}\\ & = \max\left\{F^{1/2} (L^2h)^{1/4}p^{-1/4}, (L^2h)^{1/4}\right\} p^{o(1)} \\ &= (L^2h)^{1/4}p^{o(1)}, \end{align*} since $F \le h^{1/2} \le p^{1/2}$. Therefore, we see that \begin{equation*} N_{a,p}^\sharp(L,h) = \frac{ K^2 h}{\zeta(2) p} + O\( \(\frac{L^2 h}{Dp} + D L^{14/10} p^{1/10} +E + \(L^{2}h\)^{1/4} \) p^{o(1)}\). \end{equation*} In particular, recalling the choice of parameters in~\eqref{eq:DQL}, we obtain $L^2h = Q^{4/3+2\varepsilon}$ and thus $$ \frac{L^2 h}{Dp} =Q^{1/3+(3/2)\varepsilon} , \quad D L^{14/10} p^{1/10} \le Q^{1/3+19\varepsilon/10}, $$ while $$ \(L^{2}h\)^{1/4} = Q^{1/3+\varepsilon/2}. $$ In particular \begin{equation} \label{eq:NLhK} N_{a,p}^\sharp(L,h)= \frac{ K^2 h}{\zeta(2) p} + O\( Q^{1/3+19\varepsilon/10}\).
\end{equation} Since $K= L^{1+o(1)} = p^{1/6 + \varepsilon+o(1)}$, the main term in~\eqref{eq:NLhK} is of the form $L^{2+o(1)}hp^{-1} = Q^{1/3+2\varepsilon + o(1)}$, which dominates the error term. Hence, we can simplify the equation~\eqref{eq:NLhK} as \begin{equation} \label{eq:NLh} N_{a,p}^\sharp(L,h) = Q^{1/3+2\varepsilon + o(1)}. \end{equation} Now a product $\ell_1\ell_2 u$ contributing to $N_{a,p}^\sharp(L,h)$ is not square-free if $$ \ell_1 = \ell_2\quad \text{or} \quad \ell_1 \mid u \quad \text{or} \quad \ell_2 \mid u. $$ Since each choice $\ell_1 = \ell_2$ defines $u$ uniquely, we have at most $L$ non-square-free solutions of this type. Solutions with $\ell_j \mid u$, $j =1, 2$, lead to a congruence of the type~\eqref{eq:cong ll2v} with $h/L$ instead of $h$. Thus, by Lemma~\ref{lem:congr-boundsquare} there are $Lp^{o(1)}$ such solutions. Since both these quantities are much smaller than $N_{a,p}^\sharp(L,h)$ given by~\eqref{eq:NLh}, and $\varepsilon$ is arbitrary, the result follows. \section{Comments} \label{sec:comm} Clearly any improvement of Lemma~\ref{lem:CharSquarfree} immediately leads to an improvement of Theorem~\ref{thm:Malpha}. In particular, it is mentioned in~\cite{Mun} that under the Generalised Riemann Hypothesis (GRH) the bound $$ \max_{\chi\in{\mathcal X}_p^*} \left|S^\sharp_\chi(t)\right|\le t^{1/2}p^{ o(1)} $$ holds for any integer $t <p$. Combined with the argument used in the proof of Theorem~\ref{thm:Malpha-p-AA}, it leads to the bound $M_{\alpha}^*(p) \le p^{2+o(1)}$ for any fixed $\alpha > 0$. Thus, Theorem~\ref{thm:Malpha-p-AA} shows unconditionally that this holds for an overwhelming majority of primes $p$. We recall that nontrivial upper bounds on $T_{a,p}(U,V)$ are also known for all $p$, albeit weaker than that of Lemma~\ref{lem:TIaa}. For example, Nunes~\cite[Equation~(3.13)]{Nun} gives the bound $$ T_{a,p}(U,V) \le \min\left\{U^{2/3} V^{1/4}, U^{1/4} V^{2/3}\right\} p^{o(1)},
$$ which holds for any integer $a$ with $\gcd(a,p)=1$ and reals $U$ and $V$ with $1\le U,V \le p^{3/4}$. This, however, is not enough to improve Theorem~\ref{thm:M1}, where the bottleneck comes from bounds of double Kloosterman sums in Lemma~\ref{lem:BilinSums}. On the other hand, a version of the argument used in the proof of Lemma~\ref{lem:TIaa} has been used in~\cite{LSZ2} to improve the result of Nunes~\cite{Nun} on square-free numbers in arithmetic progressions modulo $q$ on average over moduli $q$. Supported by the results of Theorems~\ref{thm:M1} and~\ref{thm:M1-AA}, we believe in the following \begin{conj} As $p\to \infty$, we have $$ M(p) = p^{1+ o(1)}. $$ \end{conj} On the other hand, Andrew Booker has given a construction which shows that there is an absolute constant $c$ such that \begin{equation} \label{eq:Low bound} M(p) \ge c p \frac{\log p}{\log \log p}. \end{equation} Indeed, for an integer parameter $K$ we choose a prime $p$ such that $$ kp+4 \equiv 0 \pmod {p_{k+1}^2}, \qquad k=1,\ldots,K, $$ where $p_k$ denotes the $k$th prime. Clearly, the smallest positive square-free $s \equiv 4 \pmod p$ satisfies $s > Kp$. By Linnik's theorem~\cite[Theorem~18.7]{IwKow} we can take $$ p = \(\prod_{k=2}^K p_{k+1}^2\)^{O(1)} = \exp \(O(K\log K)\), $$ which implies~\eqref{eq:Low bound}. \section*{Acknowledgement} The authors are grateful to Andrew Booker and Carl Pomerance for discussions and encouragement. In particular, Andrew Booker provided an argument leading to the lower bound~\eqref{eq:Low bound}. This work was also partially supported (for M.M.) by the Austrian Science Fund (FWF), START-project Y-901 ``Probabilistic methods in analysis and number theory'' led by Christoph Aistleitner and (for I.S.) by the Australian Research Council Grant~DP170100786.
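Booker's construction behind~\eqref{eq:Low bound} is easy to make explicit for small $K$. The following sketch (a toy computation; the choice $K=4$ and all variable names are ours) solves the congruences $kq + 4 \equiv 0 \pmod{p_{k+1}^2}$ by the Chinese remainder theorem and checks that no element $4 + kq$ with $0 \le k \le K$ of the class $4$ modulo $q$ is square-free; Linnik's theorem is then needed to replace the CRT solution $q$ by a prime in the same residue class:

```python
def crt(residues, moduli):
    # Chinese remainder theorem for pairwise coprime moduli
    x, m = 0, 1
    for r, mod in zip(residues, moduli):
        t = ((r - x) * pow(m, -1, mod)) % mod
        x, m = x + m * t, m * mod
    return x % m

def square_free(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

K = 4
primes = [2, 3, 5, 7, 11]                            # p_1, ..., p_5
moduli = [primes[k] ** 2 for k in range(1, K + 1)]   # p_2^2, ..., p_5^2
residues = [(-4 * pow(k, -1, m)) % m
            for k, m in zip(range(1, K + 1), moduli)]
q = crt(residues, moduli)          # stands in for the prime p of the proof

# every k*q + 4 is divisible by p_{k+1}^2, so s = 4 + k*q is never square-free
assert all((k * q + 4) % m == 0 for k, m in zip(range(1, K + 1), moduli))
assert all(not square_free(4 + k * q) for k in range(K + 1))
```

(The case $k=0$ is covered because $s = 4 = 2^2$ is itself not square-free.)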
https://arxiv.org/abs/1807.03663
Orbits of monomials and factorization into products of linear forms
This paper is devoted to the factorization of multivariate polynomials into products of linear forms, a problem which has applications to differential algebra, to the resolution of systems of polynomial equations and to Waring decomposition (i.e., decomposition in sums of d-th powers of linear forms; this problem is also known as symmetric tensor decomposition). We provide three black box algorithms for this problem. Our main contribution is an algorithm motivated by the application to Waring decomposition. This algorithm reduces the corresponding factorization problem to simultaneous matrix diagonalization, a standard task in linear algebra. The algorithm relies on ideas from invariant theory, and more specifically on Lie algebras. Our second algorithm reconstructs a factorization from several bivariate projections. Our third algorithm reconstructs it from the determination of the zero set of the input polynomial, which is a union of hyperplanes.
\section{Introduction} The main contribution of this paper is a simple algorithm which determines whether an input polynomial $f(x_1,\ldots,x_n)$ has a factorization of the form \begin{equation} \label{problem} f(x)=l_1(x)^{\alpha_1} \cdots l_n(x)^{\alpha_n} \end{equation} where the linear forms $l_i$ are linearly independent. The algorithm outputs such a factorization if there is one. Our algorithm works in the black box model: we assume that we have access to the input polynomial $f$ only through a ``black box'' which on input $(x_1,\ldots,x_n)$ outputs $f(x_1,\ldots,x_n)$. We therefore deal with a very special case of the polynomial factorization problem. As explained in Section~\ref{waring} below, this special case already has an interesting application to Waring decomposition. The algorithm is based on (elementary) ideas of invariant theory, but is nonetheless quite simple: it essentially boils down to the simultaneous diagonalization of commuting matrices, a standard task in linear algebra. For the general problem of factorization in the black box model there is a rather involved algorithm by Kaltofen and Trager~\cite{KalTra90}, see Section~\ref{comparison} for more details. Our factorization algorithm seems to be the first to rely on ideas from invariant theory, and to reduce a multivariate polynomial factorization problem to matrix diagonalization. Let us now explain why it is natural to use invariant theory in this context. \subsection{Connection with invariant theory} \label{gct} Consider a field $K$ of characteristic 0 and a polynomial $f \in K[x_1,\ldots,x_n]$. By definition, the orbit $\mathrm{Orb}(f)$ of $f$ under the action of the general linear group is the set of polynomials of the form $f(A.x)$ where $A \in GL_n(K)$ is an arbitrary invertible matrix. 
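As a toy illustration of the orbit just defined (our own example, not part of the algorithm of this paper), one can act on the monomial $x_1x_2$ with an explicit invertible matrix and check with a computer algebra system that the result factors back into a product of two independent linear forms:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
A = sp.Matrix([[1, 2], [3, 5]])      # an invertible matrix: det(A) = -1
assert A.det() != 0

l1 = A[0, 0] * x1 + A[0, 1] * x2     # the rows of A define two linear forms
l2 = A[1, 0] * x1 + A[1, 1] * x2
g = sp.expand(l1 * l2)               # g = f(A.x) for the monomial f = x1*x2

# factoring g recovers the product of two linearly independent forms
assert sp.expand(sp.factor(g) - l1 * l2) == 0
```

Any element of $\mathrm{Orb}(x_1x_2)$ arises this way, which is exactly the membership characterized in~(\ref{problem}) for $n=2$ and $\alpha_1=\alpha_2=1$.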
In their Geometric Complexity Theory program~\cite{mulmuley01,mulmuley08}, Mulmuley and Sohoni have proposed the following approach to lower bounds in algebraic complexity: in order to prove a lower bound for a polynomial~$g$, show that it does not belong to a suitable {\em orbit closure}~$\overline{\mathrm{Orb}(f)}$. The case where $f$ is the determinant polynomial is of particular interest as it allows one to address the infamous ``permanent versus determinant'' problem. Mulmuley and Sohoni have also proposed a specific representation-theoretic approach to deal with this orbit closure problem. As it turns out, the representation-theoretic approach provably does not work~\cite{burgisser16}. The general approach based on orbit closure remains plausible, but has so far not produced any major lower bound result because the orbit closure of the determinant is difficult to describe. By contrast, the renewed interest in invariant theory has led to new positive results, i.e., to new polynomial time algorithms: see for instance~\cite{burgisser13,BGOWW17,garg16,mulmuley12} and especially~\cite{kayal2012affine}, which is a main inspiration for this paper. We deal here with the simplest of all orbits, namely, the orbit of a single monomial $x_1^{\alpha_1} \ldots x_n^{\alpha_n}$, and we derive a new factorization algorithm. It is immediate from the definition that this orbit is the set of polynomials that can be factorized as in~(\ref{problem}) with linearly independent forms. Note that the orbit closure of the monomial $x_1x_2\ldots x_n$ is the set of polynomials that can be written as products of $n$ linear forms (without any assumption of linear independence). This is well known in algebraic geometry, see example (5) in Section~3.1.2 of~\cite{landsbergGCT} and exercise 3.1.4.2 in the same book. Moreover, equations for this orbit closure are known, see chapter 9 of~\cite{landsbergGCT} for a derivation of the equations and the history of this subject.
However, no factorization algorithm relying on ideas from invariant theory is currently known for arbitrary products of linear forms. We suggest this problem as a natural step before considering more complicated orbit closure problems. \subsection{Application to Waring decomposition} \label{waring} The factorization problem studied here is motivated mainly by an algorithm due to Neeraj Kayal (see Section~5 of~\cite{kayal11}). Factorization in products of linear forms is also useful for algorithmic differential algebra~\cite{SingerUlmer97,vanHoeij99} and for the resolution of systems of algebraic equations by factorization of the $U$-resultant~\cite{cox06,Kobayashi88}. Kayal's algorithm determines whether a homogeneous polynomial $f$ of degree $d$ in $n$ variables can be written as the sum of $n$ $d$-th powers of linearly independent forms. This algorithm is based on the fact that such a polynomial has a Hessian determinant which factors as a product of $(d-2)$-th powers of linearly independent forms. In~\cite{kayal11} the Hessian is factorized with Kaltofen's algorithm for the factorization of polynomials given by straight-line programs~\cite{kaltofen89}. The decomposition of $f$ as a sum of $d$-th powers can be recovered from this information. The algorithm presented in this note can therefore be used instead of Kaltofen's algorithm to solve the same decomposition problem. Building on these ideas from~\cite{kayal11}, it was recently shown in~\cite{GKP18} how to recover up to $O(n^2)$ terms in a Waring decomposition\footnote{This algorithm works when the linear forms to be recovered are sufficiently generic; efficient reconstruction in the worst case is still open.} (and more generally in a sum of powers of affine forms with possibly different exponents in each power). The algorithm works for polynomials of degree $d \geq 5$ and is based on the factorization of a ``generalized Hessian'' into products of linear forms.
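The basic Hessian fact used above is easy to verify symbolically in a small case. The sketch below (our toy example with $n=2$ and $d=4$, not taken from~\cite{kayal11}) checks that for $f = (x+y)^4 + (x-y)^4$ the Hessian determinant is a constant multiple of $(x+y)^{d-2}(x-y)^{d-2}$:

```python
import sympy as sp

x, y = sp.symbols('x y')
d = 4
l1, l2 = x + y, x - y                 # two linearly independent linear forms
f = l1**d + l2**d                     # a sum of two d-th powers

H = sp.hessian(f, (x, y))             # matrix of second partial derivatives
det = sp.expand(H.det())

# the Hessian determinant equals c * l1**(d-2) * l2**(d-2); here c = 576
assert sp.expand(det - 576 * l1**(d - 2) * l2**(d - 2)) == 0
```

A short computation by hand confirms the constant: $f_{xx} = f_{yy} = 12(x+y)^2 + 12(x-y)^2$ and $f_{xy} = 12(x+y)^2 - 12(x-y)^2$, so the determinant telescopes to $576(x+y)^2(x-y)^2$.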
There are now up to order $n^2$ distinct linear forms in the factorization, and that many linear forms must of course be linearly dependent. This provides further motivation for the problem suggested at the end of Section~\ref{gct} (namely, the extension of our algorithm to the case of linearly dependent forms). Factorization in products of dependent forms is discussed at the end of Section~\ref{comparison}. \subsection{Comparison with previous factorization algorithms} \label{comparison} As mentioned above, the algorithm for Waring decomposition in~\cite{kayal11} relies on Kaltofen's factorization algorithm~\cite{kaltofen89} which works in the arithmetic circuit (or ``straight-line program'') model: the input polynomial $f$ is described by an arithmetic circuit, and the output is a list of arithmetic circuits for the irreducible factors of $f$ together with their multiplicities. One could instead appeal to the black-box factorization algorithm by Kaltofen and Trager~\cite{KalTra90}. In this case, instead of factorizing a circuit for the determinant of a Hessian matrix one would use a black box for the determinant of this matrix. The algorithm from~\cite{KalTra90} produces a black box for the irreducible factors of $f$ given a black-box for evaluating $f$. Compared to~\cite{KalTra90,kaltofen89} our algorithm works in a hybrid model: we use the most general of the two for the input polynomial (black box representation) but we explicitly determine the linear forms $l_i$ in~(\ref{problem}) when they exist.\footnote{It would anyway be easy to explicitly determine the $l_i$ by interpolation from a black box for these linear forms.} For the general polynomial factorization problem, it is apparently not known how to efficiently produce ``small'' arithmetic circuits for the irreducible factors of a polynomial $f$ given a black-box for $f$.
Due to the black box algorithm of~\cite{KalTra90}, this would be equivalent to producing a small arithmetic circuit for a polynomial given a black box for this polynomial. The algorithms from~\cite{KalTra90,kaltofen89} project the original $n$-variate factorization problem to a bivariate factorization problem, solve the bivariate problem using a factorization algorithm for polynomials in dense representation, and then lift the result to a factorization of the $n$-variate input polynomial. It will be clear that our algorithm is based on a very different principle: instead of projecting we do linear algebra computations directly in $n$-dimensional space. There is an intriguing connection between our algorithm and Gao's algorithm for the absolute factorization of bivariate polynomials~\cite{gao03}: they are both based on the study of certain partial differential equations. For the connection of our approach to PDEs see Lemma~\ref{lieP} in Section~\ref{invback}. As explained in Section~\ref{waring}, for the application to Waring decomposition following~\cite{kayal11} we can assume that the linear forms $l_i$ are independent. This assumption does not seem so natural in other applications such as differential algebra~\cite{SingerUlmer97,vanHoeij99} or the resolution of systems of polynomial equations~\cite{cox06,Kobayashi88}. For this reason, we present in Section~\ref{bivariate} another algorithm for factorization into products of linear forms based, like~\cite{KalTra90,kaltofen89}, on bivariate projections. Our goal in that section is to give a simpler algorithm which takes advantage of the fact that we are considering only a special case of the polynomial factorization problem. We present another simple algorithm in Section~\ref{hyperplane}.
This algorithm requires a univariate factorization algorithm, and the projection-based algorithm requires a bivariate factorization algorithm (see Sections~\ref{bivariate} and~\ref{hyperplane} for more details). For these last two algorithms, no assumption of linear independence is needed. This is also the case for the algorithms in~\cite{Kobayashi88,vanHoeij99}. In these two papers no complexity analysis is provided, and it is assumed in the second one that the polynomial to be factorized is squarefree. We note that the algorithm from~\cite{Kobayashi88} bears some similarity to the algorithm that we present in Section~\ref{hyperplane}: both are based on the determination of the zero set of the input polynomial, which is a union of hyperplanes. \subsection{On the choice of fields} Polynomial factorization problems come with many variations. In particular, the following choices need to be made: \begin{itemize} \item[(i)] The input is a polynomial $f \in K[x_1,\ldots,x_n]$. What field $K$ do we choose as field of coefficients for $f$? \item[(ii)] What field $\mathbb{K}$ do we choose as field of coefficients for the output? More precisely, the output is a factorization $f=g_1 \ldots g_k$ where the polynomials $g_i$ belong to $\mathbb{K}[x_1,\ldots,x_n]$ for some field extension $\mathbb{K}$ of $K$, and are irreducible over $\mathbb{K}$. In the literature it is often (but not always) assumed that $K=\mathbb{K}$. \item[(iii)] How do we represent the field elements? Assume for instance that $K=\mathbb{Q}$ and that we are interested in absolute factorization, i.e., factorization over $\mathbb{K} = \overline{\mathbb{Q}}$ (the algebraic closure of $\mathbb{Q}$).
Do we insist on a symbolic representation for the coefficients of the $g_i$'s (in this case, the coefficients would be represented as elements of an extension of $\mathbb{Q}$ of finite degree) or, using an embedding $\overline{\mathbb{Q}} \subseteq \mathbb{C}$, are we happy to compute only numerical approximations of these coefficients? \end{itemize} Absolute factorization seems to be the most natural choice for this paper because of the application to Waring decomposition (this problem has been studied mostly in algebraically closed fields\footnote{Some results are also known for the field of real numbers~\cite{carlini12,comon12}.}). Moreover, for any field $\mathbb{K}$ if a decomposition of $f$ of the form~(\ref{problem}) with the $l_i$ in $\mathbb{K}[x_1,\ldots,x_n]$ is possible then this decomposition clearly is an absolute factorization of $f$. Nevertheless, we do not commit to any specific choice for (i), (ii) and (iii) except that $K$ must be of characteristic zero. This is possible because our main algorithm is a {\em reduction} (to matrix diagonalization). Any (efficient) algorithm for this standard linear algebra task for a specific choice of (i), (ii) and (iii) will therefore yield an (efficient) factorization algorithm. We elaborate on the complexity of our reduction in Section~\ref{complexity}. \subsection{Complexity of our invariant-theoretic algorithm} \label{complexity} The black box algorithm in~\cite{KalTra90} applies to polynomials with coefficients in a field $K$ of characteristic 0. The only assumption on $K$ is that a factorization algorithm for univariate polynomials in $K[x]$ is available. This black box algorithm can therefore be thought of as a reduction from multivariate to univariate polynomial factorization. In order to evaluate precisely the complexity of this algorithm for a specific field $K$, one must of course take into account the complexity of the univariate factorization problem for this particular field.
Likewise, our main algorithm can be thought of as a reduction to (simultaneous) matrix diagonalization.\footnote{Note that diagonalizing a matrix is clearly related to the factorization of its characteristic polynomial.} When we write that the algorithms of Section~\ref{factorization} run in polynomial time, we mean polynomial in $n$ (the number of variables of the input polynomial) and $d$ (its degree). In particular, the algorithm makes $\mathrm{poly}(n,d)$ calls to the black box for $f$. It also performs simultaneous diagonalization on $n$ (commuting) matrices, and makes a few other auxiliary computations. The main one is the determination of the Lie algebra of $f$, which as explained in Section~\ref{invback} is a linear algebra problem; a polynomial black box algorithm for it can be found in~\cite{kayal2012affine}. {% A more precise analysis of our algorithm can be found in the appendix. It suggests that the computation of the Lie algebra of $f$ is a particularly expensive step. Improving the algorithm from~\cite{kayal2012affine} (or its analysis in the appendix) seems to be an interesting open problem.} If we just want to {\em decide} the existence of a suitable factorization (rather than compute it) our algorithm becomes purely algebraic, i.e., it just performs arithmetic operations (additions, multiplications and tests to zero) on the function values given by the black box for $f$. In particular, we do not need to factor univariate polynomials or diagonalize matrices.} Like in~\cite{kaltofen89,KalTra90} our algorithm is randomized and can return a wrong answer with a small probability $\epsilon$. This is unavoidable because homogeneous polynomials of degree $d$ in $n$ variables have $\binom{n+d-1}{d}$ coefficients and this is bigger than any fixed polynomial in $n$ and $d$ if these two parameters are nonconstant. 
As a result, for a polynomial $f$ of form~(\ref{problem}) there will always be another polynomial $g$ which agrees with $f$ on all points queried on input $f$. The algorithm will therefore erroneously\footnote{Indeed, the algorithm should report failure if $g$ is not of form~(\ref{problem}), or, if it is, should return a different factorization than the one for $f$.} output the same answer on these two inputs. The probability of error $\epsilon$ can be thought of as a small fixed constant, and as usual it can be made as small as desired by repeating the algorithm (or by changing the parameters in the algorithm from~\cite{kayal2012affine} for the computation of the Lie algebra; this is the main source of randomness in our algorithm\footnote{If $f$ is given explicitly as a sum of monomials, the Lie algebra can be computed deterministically in polynomial time; this is clear from the characterization of the Lie algebra in Lemma~\ref{lieP}.}). \subsection{Organization of the paper} In Section~\ref{background} we recall some background on matrix diagonalization, simultaneous diagonalization and invariant theory. In Section~\ref{orbitsection} we give a characterization of the polynomials in the orbit of a monomial. We use this characterization in Section~\ref{factorization} to derive our main algorithm for factorization into products of (independent) linear forms.} An algorithm based on the older idea of bivariate projections is presented in Section~\ref{bivariate}. { In contrast to~\cite{kaltofen89,KalTra90} this algorithm recovers a factorization of the input polynomial from {\em several} bivariate projections.} Another simple algorithm is presented in Section~\ref{hyperplane}. { As mentioned earlier, this algorithm relies on the determination of the zero set of $f$.} Our last two algorithms do not rely on any invariant theory and do not require any independence property for the linear forms.
As pointed out at the end of Section~1.1, for factorization into products of arbitrary linear forms no algorithm that would rely on ideas from invariant theory is known at this time.} { The paper ends with two appendices where we analyze the complexity of our three algorithms in more detail than in the main body. In particular, we point out in Appendix~\ref{bb} an optimization of our invariant-theoretic algorithm for the ``white box'' model, in which the black box for $f$ is implemented by an arithmetic circuit.} \section{Background} \label{background} We first recall the Schwartz-Zippel lemma~\cite{Schw,zippel}, a ubiquitous tool in the analysis of randomized algorithms. \begin{lemma} Let $f \in K[x_1,\ldots,x_n]$ be a nonzero polynomial. If $a_1,\ldots,a_n$ are drawn independently and uniformly at random from a finite set $S \subseteq K$ then $$\Pr[f(a_1,\ldots,a_n)=0] \leq \deg(f)/|S|.$$ \end{lemma} A similar result with a slightly worse bound was obtained a little earlier by DeMillo and Lipton~\cite{demillo}. In the remainder of this section we recall some background on matrix diagonalization, on simultaneous diagonalization, on invariant theory and Lie algebras. \subsection{Background on matrix diagonalization} \label{diagback} Since our main algorithm is a reduction to matrix diagonalization, it is appropriate to provide some brief background on the algorithmic solutions to this classical problem. After a first course on linear algebra, this might look like a simple task: to diagonalize a matrix $M$, first compute its eigenvalues. Then, for each eigenvalue $\lambda$ compute a basis of $\mathrm{Ker}(M-\lambda.I)$. But this problem is more subtle than it { seems at first sight}. Let us begin with numerical algorithms. There is a vast literature on numerical methods for eigenvalue problems (see for instance~\cite{bjorck16} and the references there).
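The two-step ``first course'' recipe just described can be sketched numerically as follows. This is an illustration only (the function names are ours, and the approach inherits the numerical fragility discussed in the surrounding text):

```python
import numpy as np

def kernel_basis(A, tol=1e-9):
    # Null space of a square matrix via the SVD: the right-singular
    # vectors whose singular values (numerically) vanish.
    _, s, Vh = np.linalg.svd(A)
    return Vh[s < tol].conj().T  # columns span Ker(A)

def naive_diagonalize(M, tol=1e-9):
    # "Eigenvalues first, then a basis of Ker(M - lambda*I) for each
    # distinct eigenvalue"; returns (eigenvalues, transition matrix T)
    # with T^{-1} M T diagonal when M is diagonalizable.
    n = M.shape[0]
    distinct = []
    for lam in np.linalg.eigvals(M):
        if all(abs(lam - mu) > tol for mu in distinct):
            distinct.append(lam)
    T = np.hstack([kernel_basis(M - lam * np.eye(n), tol) for lam in distinct])
    if T.shape != (n, n):
        raise ValueError("M is not diagonalizable")
    return distinct, T
```

For a defective matrix (e.g.\ a Jordan block) the stacked kernel bases do not fill a full transition matrix, and the sketch reports failure.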
Naively, one might want to compute the eigenvalues of $M$ by computing the roots of its characteristic polynomial $\chi_M(\lambda)=\det(M-\lambda I)$. This approach is hardly ever used in practice for large matrices because the roots of a polynomial can be very sensitive to perturbations of its coefficients~\cite{wilkinson84}. A theoretical analysis explaining why such a bad behaviour is rather prevalent can be found in~\cite{burgisser17}. The QR algorithm is now considered to be the standard algorithm for computing all eigenvalues and eigenvectors of a dense matrix~\cite{bjorck16}. It works well in practice, but a thorough understanding of this algorithm (or of {\em any} efficient and stable numerical algorithm for the computation of eigenvalue -- eigenvector pairs) is still lacking, see Open Problem~2 in~\cite{burgisser2013}. Let us now turn to symbolic methods. In the absence of roundoff errors, an approach based on the computation of the characteristic polynomial becomes feasible (see~\cite{pernet07} for the state of the art on the computation of this polynomial). From the knowledge of $\chi_M$ we can decide whether $M$ is diagonalizable using the following classical result from linear algebra. \begin{proposition} \label{diagmat} Let $K$ be a field of characteristic 0 and let $\chi_M$ be the characteristic polynomial of a matrix $M \in M_n(K)$. Let $P_M = \chi_M / \mathrm{gcd}(\chi_M,\chi_M')$ be the squarefree part of $\chi_M$. The matrix $M$ is diagonalizable over $\overline{K}$ iff $P_M(M)=0$.\footnote{% An equivalent characterization is that the minimal polynomial of $M$ has only simple roots.} Moreover, in this case $M$ is diagonalizable over $K$ iff all the roots of $P_M$ lie in $K$. \end{proposition} Once we know that $M$ is diagonalizable, computing the diagonal form of $M$ symbolically requires the factorization of $P_M$. 
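The criterion of Proposition~\ref{diagmat} is directly computable. A small symbolic sketch (our function name; it tests diagonalizability over the algebraic closure via the squarefree part of the characteristic polynomial):

```python
import sympy as sp

def is_diagonalizable_over_closure(M):
    # Criterion of the proposition: M is diagonalizable over the
    # algebraic closure iff P_M(M) = 0, where
    # P_M = chi_M / gcd(chi_M, chi_M') is the squarefree part of chi_M.
    lam = sp.Symbol('lam')
    chi = M.charpoly(lam).as_expr()
    P = sp.quo(chi, sp.gcd(chi, sp.diff(chi, lam)), lam)
    n = M.shape[0]
    val = sp.zeros(n, n)
    for c in sp.Poly(P, lam).all_coeffs():  # Horner evaluation at M
        val = val * M + c * sp.eye(n)
    return bool(val.applyfunc(sp.simplify).is_zero_matrix)
```

For instance the permutation matrix $\begin{pmatrix}0&1\\1&0\end{pmatrix}$ passes the test, while the Jordan block $\begin{pmatrix}1&1\\0&1\end{pmatrix}$ fails it.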
We note that for $K=\mathbb{Q}$, finding the roots of $P_M$ in $\mathbb{Q}$ is cheaper than the general problem of factorization into irreducible factors over $\mathbb{Q}[X]$~(\cite{aecf}, Proposition~21.22). This faster algorithm should therefore be used to diagonalize over $\mathbb{Q}$. For the purpose of this paper, this is relevant for factorization into a product of linear forms with rational coefficients. Once we know the eigenvalues of $M$ and their multiplicities, the last step is the computation of a transition matrix $T$ such that $T^{-1}MT$ is diagonal. For this step we refer to~\cite{giesbrecht94,giesbrecht95,villard97}. These papers consider the more general problem of computing symbolic representations of the Jordan normal form. The knowledge of $T$ is particularly important for the application to factorization into products of linear forms because (as shown in Section~\ref{factorization}) these forms can be read off directly from the transition matrix. If we just want to know whether such a factorization is possible over $K$ or $\overline{K}$, Proposition~\ref{diagmat} is sufficient. \subsection{Simultaneous diagonalization} \label{simulback} It is a well known fact of linear algebra that a family of diagonalizable matrices is simultaneously diagonalizable if and only if they pairwise commute. We will use this criterion to test whether a family of matrices $A_1,\ldots,A_k$ is simultaneously diagonalizable. If the test succeeds, we will then need to diagonalize them. Note that a transition matrix which diagonalizes $A_1$ may not necessarily diagonalize the other matrices (this may happen if $A_1$ has an eigenvalue of multiplicity larger than 1). We can nonetheless perform a simultaneous diagonalization by diagonalizing a single matrix. Indeed, as suggested in Section~6.1.1 of~\cite{kayal2012affine} we can diagonalize a random linear combination of the $A_i$'s. We sketch a proof of this simple fact below.
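Numerically, the random-combination recipe can be sketched as follows (a sketch only: it assumes the inputs commute, are diagonalizable, and have real spectra; the function name is ours):

```python
import numpy as np

def simultaneous_diagonalize(mats, rng=None):
    # Diagonalize a random linear combination of the inputs; for a
    # generic combination, the resulting transition matrix then
    # diagonalizes every input matrix simultaneously.
    rng = np.random.default_rng() if rng is None else rng
    t = rng.uniform(1.0, 2.0, size=len(mats))
    _, T = np.linalg.eig(sum(c * A for c, A in zip(t, mats)))
    Tinv = np.linalg.inv(T)
    return [Tinv @ A @ T for A in mats], T
```

The point of the random weights is precisely to avoid the ``bad set'' of coincidences between eigenvalues of the combination.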
For notational simplicity we consider only the case of two matrices. The general case can be treated in a similar way. \begin{lemma} \label{simulemma} Assume that $M,N \in M_n(K)$ are two simultaneously diagonalizable matrices. There is a set $B \subseteq K$ of size at most $n(n-1)/2$ such that for any $t \in K \setminus B$ any eigenvector of $M+tN$ is also an eigenvector of $M$ and~$N$. \end{lemma} \begin{proof} Since $M$ and $N$ are simultaneously diagonalizable we may as well work in a basis where these matrices become diagonal. We therefore assume without loss of generality that $M=\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ and $N=\operatorname{diag}(\mu_1,\ldots,\mu_n)$. We then have $M+tN=\operatorname{diag}(\lambda_1+t\mu_1,\ldots,\lambda_n+t\mu_n)$ for any $t \in K$. We may take for $B$ the set of $t$'s such that $\lambda_i+t\mu_i = \lambda_j + t \mu_j$ for some pair $\{i,j\}$ such that $(\lambda_i,\mu_i) \neq (\lambda_j,\mu_j)$. This is indeed a set of size at most $n(n-1)/2$, and for $t \notin B$ the eigenspace of $M+tN$ associated to the eigenvalue $\lambda_i+t\mu_i$ is the intersection of the eigenspace of $M$ associated to $\lambda_i$ and of the eigenspace of $N$ associated to $\mu_i$. In particular, any eigenvector of $M+tN$ is also an eigenvector of $M$ and $N$. \end{proof} \begin{proposition} Assume that $M,N \in M_n(K)$ are two simultaneously diagonalizable matrices and that $t$ is drawn from the uniform distribution on a finite set $S \subset K$. With probability at least $1-\frac{n(n-1)}{2|S|}$, all the transition matrices which diagonalize $M+tN$ also diagonalize $M$ and $N$. \end{proposition} \begin{proof} We show that the required property holds true for any $t$ that does not belong to the ``bad set'' of Lemma~\ref{simulemma}. For an invertible matrix $T$, $T^{-1}(M+tN)T$ is diagonal iff all the column vectors of $T$ are eigenvectors of $M+tN$. But for $t \notin B$, any eigenvector of $M+tN$ is also an eigenvector of $M$ and $N$.
As a result, if $T^{-1}(M+tN)T$ is diagonal then $T^{-1}MT$ and $T^{-1}NT$ are diagonal as well. \end{proof} \subsection{Background on invariants and Lie algebras} \label{invback} In this section and in the remainder of the paper, $K$ denotes a field of characteristic 0. The general linear group $GL_n$ acts on the polynomial ring $K[x_1,\ldots,x_n]$ by linear change of variables: an invertible matrix $A \in GL_n$ sends a polynomial $P(x) \in K[x_1,\ldots,x_n]$ to $P(A.x)$. The group of invariants of $P$ is the group of matrices $A$ such that $P(A.x) = P(x)$. We recall that this is a Lie group. Its Lie algebra $\mathfrak{g}$ is a linear subspace of $M_n(K)$ defined as the tangent space of this group at the identity. More precisely, $\mathfrak{g}$ is the ``linear part'' of the tangent space; the (affine) tangent space is $I+\mathfrak{g}$. The Lie algebra associated to the group of invariants of $P$ will be called simply ``Lie algebra of $P$'', and we will denote it by $\mathfrak{g}_P$. It can be explicitly computed as follows. \begin{lemma}[Claim 59 in~\cite{kayal2012affine}] \label{lieP} A matrix $A=(a_{ij}) \in M_n(K)$ belongs to the Lie algebra of $P$ if and only if \begin{equation} \label{liePeq} \sum_{i,j \in [n]} a_{ij} x_j \frac{\partial P}{\partial x_i}=0 \end{equation} \end{lemma} The elements of the Lie algebra therefore correspond to linear dependence relations between the polynomials $x_j \frac{\partial P}{\partial x_i}$. As an example we determine the group of invariants of monomials. \begin{lemma} \label{moninvar} The group of invariants of a monomial $m=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ with $\alpha_i \geq 1$ for all $i$ is generated by: \begin{itemize} \item[(i)] The diagonal matrices $\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ with $\prod_{i=1}^n \lambda_i^{\alpha_i} =1$. We denote this subgroup of $GL_n$ by $T_{\alpha}$, where $\alpha$ is the tuple $(\alpha_1,\ldots,\alpha_n)$.
\item[(ii)] The permutation matrices which map any variable $x_i$ to a variable $x_{\pi(i)}$ with same exponent in $m$ (i.e., with $\alpha_i=\alpha_{\pi(i)}$). \end{itemize} \end{lemma} \begin{proof} The monomial is obviously invariant under the actions of matrices from (i) and (ii). Conversely, assume that $m$ is invariant under the action of an invertible matrix $A$. By uniqueness of factorization, $A$ must send every variable $x_i$ to a scalar multiple of a variable, i.e., to $\lambda_ix_{\pi(i)}$. Moreover we must have $\alpha_i=\alpha_{\pi(i)}$ and $\prod_{i=1}^n \lambda_i^{\alpha_i} =1$, so $A$ is in the group generated by (i) and (ii). \end{proof} The Lie algebra of a monomial is determined in Proposition~\ref{liemon}. In this paper we will follow the Lie-algebraic approach from~\cite{kayal2012affine}. As a result we will not work directly with groups of invariants. If two polynomials are equivalent under the action of $GL_n$, their Lie algebras are conjugate. More precisely: \begin{proposition}[Proposition 58 in~\cite{kayal2012affine}] \label{lieconj} If $P(x)=Q(A.x)$ then $$\mathfrak{g}_P = A^{-1}.\mathfrak{g}_Q.A$$ \end{proposition} \section{The orbit of a monomial} \label{orbitsection} Throughout the paper, $m$ denotes a monomial $x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ with all exponents $\alpha_i \geq 1$. \begin{proposition} \label{liemon} The Lie algebra $\mathfrak{g}_m$ of a monomial $m=x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ with all exponents $\alpha_i \geq 1$ is the space of diagonal matrices $\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ such that $\sum_{i=1}^n \alpha_i \lambda_i =0$. \end{proposition} \begin{proof} By Lemma~\ref{lieP}, all these matrices are in $\mathfrak{g}_m$ since $m$ satisfies the equation $x_i \frac{\partial m}{\partial x_i} = \alpha_i m$.
Conversely, if $A \in \mathfrak{g}_m$, all off-diagonal entries $a_{ij}$ must vanish since the monomial $x_j \frac{\partial m}{\partial x_i}$ could not cancel with any other monomial in~(\ref{liePeq}). \end{proof} \begin{remark} \label{liemongen} The above characterization of $\mathfrak{g}_m$ is no longer true if some exponents $\alpha_i$ may vanish. Indeed, in this case there is no constraint on the entries in row $i$ of a matrix in $\mathfrak{g}_m$. However, we note for later use that in all cases, the space of diagonal matrices $\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ which lie in $\mathfrak{g}_m$ is defined by $\sum_{i=1}^n \alpha_i \lambda_i =0$. \end{remark} It is easy to check by a direct computation that the Lie algebra determined in Proposition~\ref{liemon} is (as expected) equal to the tangent space at the identity of the group $T_{\alpha}$ from Lemma~\ref{moninvar}. The next result turns Proposition~\ref{liemon} into an equivalence. \begin{proposition} \label{moncharlie} Let $f \in K[x_1,\ldots,x_n]$ be a homogeneous polynomial of degree $d$. The two following properties are equivalent: \begin{itemize} \item[(i)] $f$ is a monomial which depends on all of its $n$ variables. \item[(ii)] The Lie algebra of $f$ is an $(n-1)$-dimensional subspace of the space of diagonal matrices. \end{itemize} \end{proposition} \begin{proof} We have seen in Proposition~\ref{liemon} that (i) implies (ii). Conversely, for any polynomial $P$ let us denote by $\mathfrak{d}_P$ the subspace of its Lie algebra made of diagonal matrices. By Lemma~\ref{lieP}, $\mathfrak{d}_f$ is the space of matrices $\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ such that \begin{equation} \label{diagf} \sum_{i=1}^n \lambda_i x_i \frac{\partial f}{\partial x_i}=0 \end{equation} For any monomial $m$, $x_i \frac{\partial m}{\partial x_i}$ is proportional to $m$.
This implies that $\mathfrak{d}_f$ is the intersection of the $\mathfrak{d}_m$'s for the various monomials $m$ appearing in $f$ since the contributions to~(\ref{diagf}) coming from different monomials cannot cancel. By Remark~\ref{liemongen}, for two distinct monomials $m_1$ and $m_2$ appearing in $f$, the subspaces $\mathfrak{d}_{m_1}$ and $\mathfrak{d}_{m_2}$ are distinct since they are defined by linear forms that are not proportional (here we use the homogeneity of $f$). It follows that their intersection is of dimension $n-2$, in contradiction with (ii). Therefore, only one monomial can appear in $f$. Finally, by Remark~\ref{liemongen} all of the $n$ variables must appear in this monomial; otherwise, $\mathfrak{g}_f$ would contain some nondiagonal matrices. \end{proof} We can now characterize the Lie algebras of polynomials in the orbit of a monomial. \begin{theorem} \label{orbit} Consider a monomial $m=x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ with $\alpha_i \geq 1$ for all $i$, a homogeneous polynomial $f \in K[x_1,\ldots,x_n]$ of degree $d=\alpha_1+\cdots+\alpha_n$ and an invertible matrix~$A$. The two following properties are equivalent. \begin{itemize} \item[(i)] The action of $A$ sends $m$ to a multiple of $f$, i.e., $m(A.x)=c.f(x)$ for some constant $c$. \item[(ii)] The Lie algebras of $f$ and $m$ are conjugate by $A$, i.e., $\mathfrak{g}_f = A^{-1}.\mathfrak{g}_m.A$. \end{itemize} \end{theorem} \begin{proof} Proposition~\ref{lieconj} shows that (i) implies (ii). For the converse, assume that $\mathfrak{g}_f = A^{-1}.\mathfrak{g}_m.A$ and define $g(x)=f(A^{-1}.x)$. By Proposition~\ref{lieconj} we have $\mathfrak{g}_g = \mathfrak{g}_m$. It follows from Propositions~\ref{liemon} and~\ref{moncharlie} that $g=\lambda.m$ for some nonzero constant $\lambda$. We therefore have $m(Ax)=f(x)/\lambda$. \end{proof} This characterization takes a particularly simple form in the case of equal exponents.
\begin{theorem} \label{equalexp} Consider a monomial $m=(x_1 \cdots x_n)^{\alpha}$ and a homogeneous polynomial $f \in K[x_1,\ldots,x_n]$ of degree $d=n\alpha$. The two following properties are equivalent. \begin{itemize} \item[(i)] Some multiple of $f$ belongs to the orbit of $m$, i.e., $m(A.x)=c.f(x)$ for some invertible matrix $A$ and some constant $c$. \item[(ii)] The Lie algebra of $f$ has a basis made of $n-1$ diagonalizable matrices of trace zero which pairwise commute. \end{itemize} Moreover, $f$ is a constant multiple of $m$ if and only if its Lie algebra is the space of diagonal matrices of trace zero. \end{theorem} \begin{proof} Let $f$ be in the orbit of $m$. By Proposition~\ref{lieconj}, in order to establish~(ii) for $f$ we just need to check that this property is true for $m$. This is the case since (by Proposition~\ref{liemon}) the Lie algebra of $m$ is the space of diagonal matrices of trace 0. Conversely, assume that (ii) holds for $f$. It is a well known fact of linear algebra that a family of diagonalizable matrices is simultaneously diagonalizable if and only if they pairwise commute. By simultaneously diagonalizing the $n-1$ matrices in the basis of $\mathfrak{g}_{f}$ we find that this Lie algebra is conjugate to $\mathfrak{g}_m$ (which as we just saw is the space of diagonal matrices of trace 0). Hence some constant multiple of $f$ is in the orbit of $m$ by Theorem~\ref{orbit}. As to the second part of the theorem, we have already seen that $\mathfrak{g}_m$ is the space of diagonal matrices of trace zero. Conversely, if $\mathfrak{g}_f = \mathfrak{g}_m$ we can apply Theorem~\ref{orbit} with $A = \mathrm{Id}$ and it follows that $f$ is a constant multiple of~$m$. \end{proof} Note that if (ii) holds for some basis of $\mathfrak{g}_{f}$, this property holds for all bases. Also, if $K$ is algebraically closed we can always take $c=1$ in Theorems~\ref{orbit} and~\ref{equalexp}.
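The Lie-algebra computations of this section can be carried out concretely for an explicit polynomial by solving the linear system of Lemma~\ref{lieP}; a small sympy sketch (the function name is ours). For $m=xyz$ it recovers the diagonal trace-zero Lie algebra of Proposition~\ref{liemon}:

```python
import sympy as sp

def lie_algebra_basis(P, xs):
    # Lemma: A = (a_ij) lies in the Lie algebra of P iff
    # sum_{i,j} a_ij * x_j * dP/dx_i = 0.  Equating coefficients of the
    # monomials gives a linear system; its nullspace is the Lie algebra.
    n = len(xs)
    a = sp.symbols(f'a0:{n * n}')
    expr = sp.expand(sum(a[i * n + j] * xs[j] * sp.diff(P, xs[i])
                         for i in range(n) for j in range(n)))
    rows = [[sp.diff(c, ak) for ak in a] for c in sp.Poly(expr, *xs).coeffs()]
    return [sp.Matrix(n, n, list(v)) for v in sp.Matrix(rows).nullspace()]
```

For $m=xyz$ the basis has $n-1=2$ elements, all diagonal with trace zero, matching criterion (ii) of Theorem~\ref{equalexp}.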
\section{Factorization into products of independent forms} \label{factorization} By definition, the orbit of a monomial $m=x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ contains the polynomial $f$ if and only if $f$ can be written as $f(x)=l_1(x)^{\alpha_1} \cdots l_n(x)^{\alpha_n}$ where the linear forms $l_i$ are linearly independent. We will exploit the characterization of orbits obtained in Section~\ref{orbitsection} to factor such polynomials. We assume that we have access to a black box for $f$. We begin with the simpler case of equal exponents. Note that this is exactly what is needed in Section~5 of~\cite{kayal11}. \subsection{Equal exponents} \label{equal} In this section we describe an algorithm which takes as input a homogeneous polynomial $f \in K[x_1,\ldots,x_n]$ of degree $d=n\alpha$, determines if it can be expressed as $f = (l_1 \cdots l_n)^{\alpha}$ where the $l_i$'s are linearly independent forms and finds such a factorization if it exists. In the first three steps of the following algorithm we decide whether such a factorization exists over $\overline{K}$, and in the last two we actually compute the factorization. \begin{enumerate} \item Compute a basis $B_1,\ldots,B_k$ of the Lie algebra of $f$. \item Reject if $k \neq n-1$, i.e., if the Lie algebra is not of dimension $n-1$. \item Check that the matrices $B_1,\ldots,B_{n-1}$ commute, are all diagonalizable over $\overline{K}$ and of trace zero. If this is the case, declare that $f$ can be factored as $f = (l_1 \cdots l_n)^{\alpha}$ where the $l_i$'s are linearly independent forms. Otherwise, reject. \item Perform a simultaneous diagonalization of the $B_i$'s, i.e., find an invertible matrix $A$ such that the $n-1$ matrices $AB_iA^{-1}$ are diagonal. \item At the previous step we have found a matrix $A$ such that $f(A^{-1}x) = \lambda.m(x)$ where $m$ is the monomial $(x_1 \cdots x_n)^{\alpha}$. We therefore have $f(x)=\lambda.m(Ax)$ and we output this factorization.
\end{enumerate} Note that this algorithm outputs a factorization of the form $f = \lambda.(l_1 \cdots l_n)^{\alpha}$. We can of course obtain $\lambda=1$ by an appropriate scaling of the $l_i$'s if desired. \begin{theorem} The above algorithm runs in polynomial time and determines whether $f$ can be written as $f = (l_1 \cdots l_n)^{\alpha}$ where the forms $l_i$ are linearly independent. It outputs such a factorization if there is one. \end{theorem} \begin{proof} The correctness of the algorithm follows from Theorem~\ref{equalexp}. In particular, the equivalence of properties (i) and (ii) in Theorem~\ref{equalexp} shows that the algorithm will make a correct decision on the existence of a suitable factorization at step~3. If this step succeeds, the simultaneous diagonalization at step~4 is possible since (as already pointed out in Section~\ref{simulback} and in the proof of Theorem~\ref{equalexp}) simultaneous diagonalization is always possible for a family of matrices which are diagonalizable and pairwise commute. By Proposition~\ref{lieconj}, the Lie algebra of $f(A^{-1}x)$ is the space of diagonal matrices of trace 0. This implies that $f(A^{-1}x)$ is a constant multiple of $m$ by the last part of Theorem~\ref{equalexp}, and justifies the last step of the algorithm. Let us now explain how to implement the 5 steps. A randomized\footnote{There is no need for randomization if $f$ is given explicitly as a sum of monomials rather than by a black box (in this case we can directly solve the linear system from Lemma~\ref{lieP}).} black box algorithm for Step~1 based on Lemma~\ref{lieP} can be found in Lemma~22 of~\cite{kayal2012affine}. Steps 2 and 3 are mostly routine (use Proposition~\ref{diagmat} to check that the $B_i$'s are diagonalizable). Step 4 (simultaneous diagonalization of commuting matrices) is also a standard linear algebra computation.
One suggestion from Section~6.1.1 of~\cite{kayal2012affine} is to diagonalize a random linear combination of the $B_i$'s (see Section~\ref{simulback} for more details). That matrix can be diagonalized as explained in Section~\ref{diagback}. Finally, the scaling factor $\lambda$ at step 5 can be computed by one call to the black box for $f$. \end{proof} \begin{remark} \label{smallfield} We have presented the above algorithm with a view towards factorization over $\overline{K}$, but it is readily adapted to factorization over some intermediate field $K \subseteq \mathbb{K} \subseteq \overline{K}$. Note in particular that to decide the existence of a factorization at step 3, we would need to check that the matrices $B_i$ are diagonalizable over $\mathbb{K}$. As recalled in Proposition~\ref{diagmat}, this requires an algorithm that decides whether the characteristic polynomial of a matrix has all its roots in~$\mathbb{K}$. In the case $\mathbb{K}= \overline{K}$, if we stop at step~3 we obtain a purely algebraic algorithm for deciding the existence of a suitable factorization (in particular, we do not need to factorize univariate polynomials or diagonalize matrices).} \end{remark} \subsection{General case} \label{general} In this section we describe an algorithm which takes as input a homogeneous polynomial $f$ of degree $d=\alpha_1+\cdots+\alpha_n$ in $n$ variables, determines if it can be expressed as $f(x)=l_1(x)^{\alpha_1} \cdots l_n(x)^{\alpha_n}$ where the $l_i$'s are linearly independent forms, and finds such a factorization if it exists. Note that the values of the exponents $\alpha_i$ are determined by the algorithm (they are not given as input). We assume that $\alpha_i \geq 1$ for all $i$. The number of distinct factors is therefore equal to the number of variables of $f$. The case where there are more factors than variables is related to orbit closure and we do not treat it in this section.
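The equal-exponents algorithm of Section~\ref{equal} can be sketched end to end symbolically for small explicit instances. This is a sketch under simplifying assumptions: the input is given explicitly rather than by a black box, and step~4 uses a fixed rather than random linear combination; all function names are ours:

```python
import sympy as sp

def lie_basis(f, xs):
    # Linear system of Lemma lieP: sum_{i,j} a_ij * x_j * df/dx_i = 0.
    n = len(xs)
    a = sp.symbols(f'a0:{n * n}')
    expr = sp.expand(sum(a[i * n + j] * xs[j] * sp.diff(f, xs[i])
                         for i in range(n) for j in range(n)))
    rows = [[sp.diff(c, ak) for ak in a] for c in sp.Poly(expr, *xs).coeffs()]
    return [sp.Matrix(n, n, list(v)) for v in sp.Matrix(rows).nullspace()]

def factor_equal_exponents(f, xs, alpha):
    # Steps 1-5 for f = c*(l_1...l_n)^alpha with independent forms l_i;
    # returns (scale, forms) or None if no such factorization exists.
    n = len(xs)
    basis = lie_basis(f, xs)                      # step 1
    if len(basis) != n - 1:                       # step 2
        return None
    for B in basis:                               # step 3
        if B.trace() != 0 or not B.is_diagonalizable():
            return None
    if any(B * C != C * B for B in basis for C in basis):
        return None
    comb = sp.zeros(n, n)                         # step 4 (fixed weights)
    for k, B in enumerate(basis):
        comb += (k + 1) * B
    T, _ = comb.diagonalize()
    A = T.inv()
    # Step 5: the rows of A give the linear forms, up to a scaling factor.
    forms = [sum(A[i, j] * xs[j] for j in range(n)) for i in range(n)]
    scale = sp.cancel(f / sp.prod([l**alpha for l in forms]))
    return scale, forms
```

For instance $f = x^2 - y^2$ (so $n=2$, $\alpha=1$) is recognized and split into two independent linear forms, up to scaling.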
Let us briefly explain how to handle the case where some exponents $\alpha_i$ may be 0, i.e., the case where the number $r$ of distinct factors is smaller than the number of variables. In this case, $f$ has only $r$ ``essential variables'', i.e., it is possible to make a linear (invertible) change of variables after which $f$ depends only on $r$ variables. This therefore puts us in the situation where the number of distinct factors is equal to the number of variables. The number of essential variables and the corresponding change of variables can be computed with Kayal's algorithm\footnote{The algorithm in~\cite{kayal11} works in the circuit model, i.e., it is assumed that the input polynomial is given by an arithmetic circuit. Kayal later showed how to perform the same task in the black box model, see Section~3 of~\cite{kayal2012affine}.}~\cite{kayal11}, see also~\cite{carlini06}. We can now present our factorization algorithm. Like in the case of equal exponents, the existence of a suitable factorization is decided in the first three steps. \begin{enumerate} \item Compute a basis $B_1,\ldots,B_k$ of the Lie algebra of $f$. \item Reject if $k \neq n-1$, i.e., if the Lie algebra is not of dimension $n-1$. \item Check that the matrices $B_1,\ldots,B_{n-1}$ commute and are all diagonalizable over $\overline{K}$. If this is not the case, reject. Otherwise, declare the existence of a factorization $f(x)=l_1(x)^{\alpha_1} \cdots l_n(x)^{\alpha_n}$ where the linear forms $l_i$ are linearly independent and $\alpha_i \geq 1$ (the $l_i$ and $\alpha_i$ will be determined in the last 3 steps of the algorithm). \item Perform a simultaneous diagonalization of the $B_i$'s, i.e., find an invertible matrix $A$ such that the $n-1$ matrices $AB_iA^{-1}$ are diagonal. \item At the previous step we have found a matrix~$A$ such that $g(x)=f(A^{-1}x)$ has a Lie algebra $\mathfrak{g}_g$ which is an $(n-1)$-dimensional subspace of the space of diagonal matrices.
Then we compute the orthogonal of $\mathfrak{g}_g$, i.e., we find a vector $\alpha=(\alpha_1,\dots,\alpha_n)$ such that $\mathfrak{g}_g$ is the space of matrices $\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ satisfying $\sum_{i=1}^n \alpha_i \lambda_i = 0$. We normalize $\alpha$ so that $\sum_{i=1}^n \alpha_i = d$. \item We must have $g(x)=\lambda.m$ where $\lambda \in K^*$ and $m$ is the monomial $x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ (in particular, $\alpha$ must be a vector with integral entries). We therefore have $f(x)=\lambda.m(Ax)$ and we output this factorization. \end{enumerate} Again, this algorithm outputs a factorization of the form $f(x) = \lambda.l_1(x)^{\alpha_1} \cdots l_n(x)^{\alpha_n}$ and we can obtain $\lambda=1$ by an appropriate scaling of the $l_i$'s. \begin{theorem} \label{general_th} The above algorithm runs in polynomial time and determines whether $f$ can be written as $f(x)=l_1(x)^{\alpha_1} \cdots l_n(x)^{\alpha_n}$ where the forms $l_i$ are linearly independent and $\alpha_i \geq 1$ for all $i$. It outputs such a factorization if there is one. \end{theorem} \begin{proof} The two main steps (finding a basis of $\mathfrak{g}_f$ and simultaneous diagonalization) can be implemented efficiently as in the case of equal exponents, so we focus on the correctness of the algorithm. Assume first that $f$ can be written as $f(x)=L_1(x)^{\beta_1} \cdots L_n(x)^{\beta_n}$ where the $L_i$'s are linearly independent forms and $\beta_i \geq 1$ for all $i$. Then $f$ is in the orbit of the monomial $M=x_1^{\beta_1} \cdots x_n^{\beta_n}$, so $\mathfrak{g}_f$ and $\mathfrak{g}_M$ are conjugate by Proposition~\ref{lieconj}. By Proposition~\ref{liemon}, $\mathfrak{g}_M$ is the space of diagonal matrices $\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ such that $\sum_{i=1}^n \beta_i \lambda_i =0$. These two facts imply that the first 4 steps of the algorithm will succeed.
The polynomial $g(x)=f(A^{-1}x)$ defined at step 5 has a Lie algebra which is an $(n-1)$-dimensional subspace of the space of diagonal matrices. By Proposition~\ref{moncharlie}, $g$ must therefore be a monomial. Proposition~\ref{liemon} implies that the tuple of exponents of $g$ is correctly determined at step 5, so that we indeed have $g=\lambda.m$ at step 6. Note that $m$ may differ from $M$ by a permutation of indices, and likewise the factorization output by the algorithm may differ from $f(x)=L_1(x)^{\beta_1} \cdots L_n(x)^{\beta_n}$ by a permutation of indices and the scaling of linear forms. Conversely, if the first 3 steps of the algorithm succeed, the $B_i$ must be simultaneously diagonalizable and it follows again from Proposition~\ref{moncharlie} that the polynomial $g$ defined at step 5 satisfies $g =\lambda.m$ where $\lambda \in K^*$ and $m$ is some monomial $ x_1^{\alpha_1} \cdots x_n^{\alpha_n}$. In particular, Proposition~\ref{moncharlie} guarantees that the exponents $\alpha_i$ are all positive. The algorithm will then output at step 6 a correct factorization of $f$. \end{proof} As in Section~\ref{equal}, we have presented our algorithm with a view towards factorization over $\overline{K}$, but it is readily adapted to factorization over some intermediate field $K \subseteq \mathbb{K} \subseteq \overline{K}$ as explained in Remark~\ref{smallfield}. { In the above algorithm we need to perform the simultaneous diagonalization at step~4 before computing the exponents $\alpha_i$. In the remainder of this section we show that the exponents can be computed without step~4. The corresponding algorithm relies on Proposition~\ref{eigenexp} below. First,} we recall that for any set of matrices $S \subseteq M_n(K)$ the centralizer of $S$ is the set of matrices that commute with all matrices of $S$. It is a linear subspace of $M_n(K)$ and we denote it by $C(S)$.
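Both the centralizer $C(S)$ and the trace-normalized matrix $H$ used for the exponents can be computed by solving small linear systems. A sympy sketch (the function names are ours; `exponent_matrix` implements steps (a) and (b) described at the end of this section):

```python
import sympy as sp

def centralizer(mats, n):
    # Basis of C(S) = {X : XB = BX for all B in S} via a nullspace.
    x = sp.symbols(f'x0:{n * n}')
    X = sp.Matrix(n, n, list(x))
    eqs = [e for B in mats for e in (X * B - B * X)]
    rows = [[sp.diff(e, xk) for xk in x] for e in eqs]
    return [sp.Matrix(n, n, list(v)) for v in sp.Matrix(rows).nullspace()]

def exponent_matrix(lie_basis, n, d):
    # The unique H in C(g_f) with Tr(H) = d and Tr(H*M) = 0 for every M
    # in g_f; its eigenvalues are the exponents alpha_1, ..., alpha_n.
    cent = centralizer(lie_basis, n)
    c = sp.symbols(f'c0:{len(cent)}')
    H = sp.zeros(n, n)
    for ck, Ck in zip(c, cent):
        H += ck * Ck
    eqs = [H.trace() - d] + [(H * M).trace() for M in lie_basis]
    sol = sp.solve(eqs, list(c), dict=True)[0]
    return H.subs(sol)
```

For the monomial $m = x_1^2 x_2$ (so $d=3$), the Lie algebra is spanned by $\operatorname{diag}(1,-2)$ and the sketch recovers $H=\operatorname{diag}(2,1)$, whose eigenvalues are the exponents.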
\begin{proposition} \label{eigenexp} Consider a monomial $m=x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ with $\alpha_i \geq 1$ for all~$i$, and a polynomial $f$ in the orbit of $m$. The centralizer $C(\mathfrak{g}_f)$ of the Lie algebra of $f$ is of dimension $n$. Moreover, there is a unique $H$ in $C(\mathfrak{g}_f)$ such that $\operatorname{Tr} H =d$ and $\operatorname{Tr} (HM) =0$ for all $M \in \mathfrak{g}_f$. The matrix $H$ is diagonalizable, its eigenvalues are $(\alpha_1,\ldots,\alpha_n)$ and { $C(\mathfrak{g}_f) = \mathfrak{g}_f \oplus \mathrm{Span}(H)$.} \end{proposition} Note that the case $\alpha_1=\ldots=\alpha_n=1$ corresponds to $H=\mathrm{Id}$. { The condition $\operatorname{Tr} (HM) =0$ for all $M \in \mathfrak{g}_f$ is an analogue of the trace zero condition in property (ii) of Theorem~\ref{equalexp}.} \begin{proof} We first consider the case $f=m$. By Proposition~\ref{liemon}, $\mathfrak{g}_m$ is the set of diagonal matrices $\operatorname{diag}(\lambda_1,\dots,\lambda_n)$ such that $\sum_i\alpha_i\lambda_i=0$. For $1\leq i\neq j\leq n$, let $\mathcal{H}_{ij}$ denote the set of matrices $\operatorname{diag}(\lambda_1,\dots,\lambda_n)$ such that $\lambda_i=\lambda_j$. Consider the set $\mathfrak{h}$ of diagonal matrices. The hyperplane $\mathfrak{g}_m$ of $\mathfrak{h}$ coincides with none of the hyperplanes $\mathcal{H}_{ij}$ (its defining form $\sum_k\alpha_k\lambda_k$ has only positive coefficients, whereas $\mathcal{H}_{ij}$ is defined by $\lambda_i-\lambda_j=0$). Since the field is infinite, $\mathfrak{g}_m$ is not contained in the union of the hyperplanes $\mathcal{H}_{ij}$. Hence $\mathfrak{g}_m$ contains a matrix $M_0$ with pairwise distinct eigenvalues. It follows that ${\mathfrak h}\subseteq C(\mathfrak{g}_m)\subseteq C(M_0)\subseteq {\mathfrak h}$, so $C(\mathfrak{g}_m)={\mathfrak h}$. Set $H_0=\operatorname{diag}(\alpha_1,\dots,\alpha_n)\in{\mathfrak h}$. It is clear that $\operatorname{Tr}(H_0)=d$ and $\operatorname{Tr}(H_0M)=0$ for any $M\in \mathfrak{g}_m$.
Conversely, let $H=\operatorname{diag}(\beta_1,\dots,\beta_n)\in{\mathfrak h}$ and $M=\operatorname{diag}(\lambda_1,\dots,\lambda_n)\in \mathfrak{g}_m$. Then $\operatorname{Tr}(HM)=\sum_i\beta_i\lambda_i$. Since $\mathfrak{g}_m$ is the hyperplane of ${\mathfrak h}$ defined by $\sum_i\alpha_i\lambda_i=0$, we have $\operatorname{Tr}(HM)=0$ for all $M\in \mathfrak{g}_m$ if and only if $(\beta_1,\dots,\beta_n)$ is proportional to $(\alpha_1,\dots,\alpha_n)$. If moreover $\operatorname{Tr}(H)=d$, we get $H=H_0$. This proves the uniqueness. Moreover, since the field has characteristic zero, $\operatorname{Tr}(H_0^2)=\sum_i\alpha_i^2\neq 0$ and therefore $H_0\not\in \mathfrak{g}_m$. Then, since $\mathfrak{g}_m$ is a hyperplane of ${\mathfrak h}$, $\mathfrak{g}_m\oplus KH_0=C(\mathfrak{g}_m)={\mathfrak h}$.\\ Consider now a point $f$ in the orbit of $m$. Let $A$ be an invertible matrix such that $f=A.m=m\circ A^{-1}$. Then, by Proposition~\ref{lieconj}, $\mathfrak{g}_f=A \mathfrak{g}_mA^{-1}$ and $C(\mathfrak{g}_f)=A C(\mathfrak{g}_m)A^{-1}$. One easily checks that $H$ satisfies the proposition for~$f$ if and only if $A^{-1}HA$ satisfies it for $m$. With the first part, this proves the existence and uniqueness of $H$. \end{proof} { This proposition yields the following algorithm for the computation of the exponents $\alpha_1,\ldots,\alpha_n$. We assume that the first three steps of the algorithm of Theorem~\ref{general_th} have been executed successfully. \begin{itemize} \item[(a)] Set up and solve the linear system which expresses that $\operatorname{Tr}(H)=d$, $\operatorname{Tr}(HB_i)=0$ and $HB_i = B_iH$ for all $i=1,\ldots,n-1$. Here $(B_1,\ldots,B_{n-1})$ is the basis of $\mathfrak{g}_f$ computed at step 1 of the algorithm of Theorem~\ref{general_th}. The system's unknowns are the $n^2$ entries of $H$. \item[(b)] Compute the eigenvalues $\alpha_1,\ldots,\alpha_n$ of $H$.
\end{itemize} Note that the system constructed at step (a) is overdetermined: it has $\Theta(n^3)$ equations but only $n^2$ unknowns. Proposition~\ref{eigenexp} guarantees that the system has a unique solution $H$, and that the eigenvalues of $H$ are the exponents $\alpha_1,\ldots,\alpha_n$. We refer to Section~\ref{diagback} for the computation of eigenvalues at step~(b).} \section{Bivariate projections} \label{bivariate} In this section we present a probabilistic black box algorithm that finds a factorization into products of linear forms whenever this is possible, without any assumption of linear independence of the linear forms. As explained before, this can be done with the algorithm by Kaltofen and Trager~\cite{KalTra90}. We assume that the input polynomial is in $K[x_1,\ldots,x_n]$ where $K$ is infinite. In contrast to Section~\ref{factorization}, we do not need to assume that $K$ is of characteristic 0. The hypothesis that $K$ is infinite is needed because the algorithm draws random elements from ``large enough'' but finite subsets $S \subseteq K$. The algorithm also applies over a finite field if $K$ is large enough, or if we can draw points from a large enough field extension. As in~\cite{kaltofen89,KalTra90} we rely on bivariate projections, but we present a simplified algorithm which takes advantage of the fact that we are trying to factor polynomials of a special form (another simple algorithm, based on a different idea, is presented in the next section). In these two papers, a factorization of the input polynomial is recovered from a single bivariate projection (see Step~R in~\cite{kaltofen89} and Step~1 in~\cite{KalTra90}\footnote{More precisely, the construction of the black boxes for the irreducible factors of $f$ requires a single projection. Evaluating these black boxes at an input point requires another bivariate projection, see Step~A in~\cite{KalTra90}.}).
By contrast, we will recover the solution to our problem from several projections, as in e.g.~\cite{GKP18,kayal2012affine}. A recurring difficulty with projection-based algorithms is that when we try to ``lift'' the solutions of problems on a lower-dimensional space to a solution of the original problem, the lift may not be unique. We first present in Section~\ref{unique} a solution under an additional assumption which guarantees uniqueness of the lift. We then lift (as it were) this assumption in Section~\ref{randomproj}. We assume that a polynomial time factorization algorithm for polynomials in $K[x,y]$ is available. It is explained in~\cite{kaltofen85} how to obtain such an algorithm from a univariate factorization algorithm for the field of rational numbers, and more generally for number fields and finite fields. In the case of absolute factorization, polynomial time algorithms were first given by Gao~\cite{gao03} and by Ch\`eze and Lecerf~\cite{ChezeLecerf07}. The complexity of the latter algorithm was analyzed in~\cite{ChezeLecerf07} for the algebraic (unit cost) model of computation. The complexity of the former algorithm was analyzed in~\cite{gao03} for an input polynomial with coefficients in a finite field $F_q$.\footnote{The algorithm also works for fields of characteristic 0, but a precise analysis of its complexity was left for future work.} Without loss of generality, we assume that our input $f$ is in $K[x_1,\ldots,x_n]$ with $n \geq 4$. Indeed, if there are only 3 variables we can set $g(x_1,x_2) = f(x_1,x_2,1)$, use the bivariate algorithm to factor $g$ as a product of affine forms, and homogenize the result to obtain a factorization of $f$. Note that the homogenization step includes a multiplication by $x_3^{\deg(f)-\deg(g)}$.
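The three-variable reduction just described can be sketched with sympy. The helper names below are ours, and the sketch assumes each irreducible factor of $g$ is an affine form, which holds when $f$ is a product of linear forms:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def homogenize_linear(l):
    """Homogenize an affine form a*x1 + b*x2 + c into a*x1 + b*x2 + c*x3."""
    a, b = l.coeff(x1), l.coeff(x2)
    c = l.subs({x1: 0, x2: 0})
    return a*x1 + b*x2 + c*x3

def factor_trivariate(f):
    """Factor a homogeneous trivariate product of linear forms via the
    bivariate reduction: dehomogenize at x3 = 1, factor, homogenize each
    factor back, and restore the power of x3 lost by the substitution."""
    g = sp.expand(f.subs(x3, 1))
    const, facs = sp.factor_list(g)      # bivariate factorization of g
    res = const
    for fac, e in facs:
        res *= homogenize_linear(fac)**e
    res *= x3**(sp.Poly(f, x1, x2, x3).total_degree()
                - sp.Poly(g, x1, x2).total_degree())
    return sp.expand(res)
```

The final multiplication by a power of $x_3$ corresponds to the factor $x_3^{\deg(f)-\deg(g)}$ mentioned above.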
\subsection{A uniqueness condition} \label{unique} In this section we assume that our input $f(x_1,\ldots,x_n)$ can be factorized as \begin{equation} \label{manyforms} f(x)=\lambda.l_1(x)^{\alpha_1} \cdots l_k(x)^{\alpha_k} \end{equation} where the linear form $l_i$ is not proportional to $l_j$ if $i \neq j$, and $\lambda$ is a nonzero constant. We would like to recover $\lambda$, the exponents $\alpha_i$ and the forms $l_i$ (note that each linear form is defined only up to a constant). Write $l_i(x) =\sum_{j=1}^n l_{ij}x_j$. In order to guarantee ``uniqueness of the lift'' we make the following temporary assumption: \begin{itemize} \item[(*)] The $k$ coefficients $l_{i1}$ are distinct and nonzero and $l_{in}=1$ for all $i$. \end{itemize} The algorithm is as follows. \begin{enumerate} \item For $j=2,\ldots,n-1$ define $g_j(x_1,x_j) = f \circ \pi_j$ where the projection $\pi_j$ sends variable $x_n$ to the constant 1, leaves $x_1$ and $x_j$ unchanged and sets all other variables to 0. Compute the dense representation of the $g_j$'s by interpolation. \item Using the bivariate factorization algorithm, write each $g_j$ as $g_j(x_1,x_j)=\lambda. \prod_{i=1}^k (a_{ij}x_1+b_{ij}x_j+1)^{\beta_{ij}}$. \item At the beginning of this step, each of the $n-2$ tuples $(a_{1j},\ldots,a_{kj})$ is a permutation of the tuple $(l_{11},\ldots,l_{k1})$. We reorder the factors in the factorizations of the $g_j$ from step 2 to make sure that the $n-2$ tuples are identical (i.e., their elements always appear in the same order). After reordering, the $n-2$ tuples of exponents $(\beta_{1j},\ldots,\beta_{kj})$ will also become identical. We therefore obtain factorizations of the form: $$g_j(x_1,x_j)=\lambda. \prod_{i=1}^k (a_ix_1+c_{ij}x_j+1)^{\gamma_i}.$$ \item We output the factorization: $$f(x_1,\ldots,x_n)=\lambda.
\prod_{i=1}^k (a_ix_1+c_{i2}x_2+\cdots+c_{i,n-1}x_{n-1}+x_n)^{\gamma_i}.$$ \end{enumerate} The main issue regarding the correctness of this algorithm is to make sure that we have correctly combined the factors of the $g_j$'s to obtain the factors of $f$. This is established in the next proposition. For an example of what can go wrong without assumption (*), consider the following two polynomials: $$f_1=(x_1+x_2+x_3+x_4)(x_1+2x_2+2x_3+x_4)$$ and $$f_2=(x_1+x_2+2x_3+x_4)(x_1+2x_2+x_3+x_4).$$ At step 1 of the algorithm, these two polynomials are mapped to the same pair of bivariate polynomials: $$g_2=(x_1+x_2+1)(x_1+2x_2+1),\ g_3=(x_1+x_3+1)(x_1+2x_3+1)$$ and there is no unique way of lifting $\{g_2,g_3\}$ to an input polynomial. Another difficulty is that the factorization pattern of $f$ (i.e., the set of exponents $\{\alpha_1,\ldots,\alpha_k\}$) could change after projection; for instance $$f=(x_1+x_2+x_3+x_4)(x_1+2x_2+x_3+x_4)$$ is mapped to $$g_2=(x_1+x_2+1)(x_1+2x_2+1),\ g_3=(x_1+x_3+1)^2.$$ \begin{proposition} \label{assumptionprop} The above algorithm correctly factorizes the polynomials of form~(\ref{manyforms}) that satisfy assumption (*). \end{proposition} \begin{proof} Since $l_{in}=1$ for all $i$ we have $\lambda=f(0,\cdots,0,1)=g_j(0,\cdots,0)$ for all $j=2,\ldots,n-1$. Each $g_j$ admits the factorization: \begin{equation} \label{2factor} g_j(x_1,x_j)=\lambda. \prod_{i=1}^k (l_{i1}x_1+l_{ij}x_j+1)^{\alpha_i} \end{equation} All these polynomials therefore have the same factorization pattern as $f$ (note in particular that the affine forms $l_{i1}x_1+l_{ij}x_j+1$ are nonconstant since $l_{i1} \neq 0$; and two of these forms cannot be proportional since the $l_{i1}$ are distinct).
It follows that the factorization of $g_j$ discovered by the algorithm at step 2 is identical to~(\ref{2factor}) up to a permutation, i.e., we have $a_{ij}=l_{\sigma_j(i)1}$, $b_{ij}=l_{\sigma_j(i)j}$ and $\beta_{ij}=\alpha_{\sigma_j(i)}$ for some permutation $\sigma_j \in {\mathfrak{S}}_k$. Since the $l_{i1}$ are distinct, after reordering at step 3 these $n-2$ permutations become identical, i.e., we have $a_{i}=l_{\sigma(i)1}$, $c_{ij}=l_{\sigma(i)j}$ and $\gamma_{i}=\alpha_{\sigma(i)}$ for some permutation~$\sigma$. Finally, at step 4 the algorithm outputs the correct factorization $f(x)=\lambda.\prod_{i=1}^k l_{\sigma(i)}(x)^{\alpha_{\sigma(i)}}$. \end{proof} \subsection{General case} \label{randomproj} In this section we present a black box algorithm that factors a homogeneous polynomial $f \in K[x_1,\ldots,x_n]$ of degree $d$ into a product of $d$ linear forms whenever this is possible, thereby lifting assumption (*) from Section~\ref{unique}. The algorithm is as follows. \begin{enumerate} \item Set $g(x)=f(A.x)$ where $A \in M_n(K)$ is a random matrix. \item Attempt to factor $g$ with the algorithm of Section~\ref{unique}. If this fails, reject $f$. In case of success, let $g'(x)=\lambda.l_1(x)^{\alpha_1} \cdots l_k(x)^{\alpha_k}$ be the factorization output by this algorithm. \item Check that $f(x)=g'(A^{-1}.x)$ and output the corresponding factorization. \end{enumerate} The random matrix at step~1 is constructed by drawing its entries independently at random from some large enough finite set $S \subseteq K$. The point of this random change of variables is that $g$ will satisfy assumption (*) of Section~\ref{unique} with high probability if $f$ can be factored as a product of linear forms. The (quite standard) arguments needed to estimate the probability of success are presented in the proof of Theorem~\ref{projth}. Note also that by the Schwartz-Zippel lemma, $A$ will be invertible with high probability. At step 2 we need a black box for $g$.
Such a black box is easily obtained by composing the black box for $f$ with the map $x \mapsto A.x$. At step 3, we check the polynomial identity $f(x)=g'(A^{-1}.x)$ by evaluating the left- and right-hand sides at {one random point}. \begin{theorem} \label{projth} The above algorithm runs in polynomial time and determines whether $f$ can be written as a product of linear forms. It outputs such a factorization if there is one. \end{theorem} \begin{proof} By the Schwartz-Zippel lemma, any factorization of $f$ output at step 3 will be correct with high probability. So we only need to prove the converse: if $f$ can be factored as a product of linear forms, the algorithm finds a correct factorization with high probability. Suppose therefore that $$f(x)=L_1(x)^{\alpha_1} \cdots L_k(x)^{\alpha_k}$$ where no two linear forms $L_i, L_j$ in this expression are proportional. Then $g(x)=f(A.x)$ can be written as $$g(x)=\ell_1(x)^{\alpha_1} \cdots \ell_k(x)^{\alpha_k}$$ where $\ell_i(x)=L_i(A.x)$. If $A$ is invertible, the linear forms in this expression will not be proportional. The coefficients of these linear forms are given by the expression: \begin{equation} \label{elleq} \ell_{ij}=\sum_{p=1}^n L_{ip} A_{pj}. \end{equation} If the entries $A_{pj}$ of $A$ are drawn from a set $S \subset K$, $\ell_{in}=0$ with probability at most $1/|S|$ since $L_i {\not \equiv} 0$. These $k$ coefficients will all be nonzero with probability at least $1-k/|S|$; in this case we can factor out $\lambda=\prod_{i=1}^k \ell_{in}^{\alpha_i}$ to make sure that the coefficient of $x_n$ in each linear form is equal to 1 as required by assumption (*). This gives the factorization $$g(x)=\lambda.l_1(x)^{\alpha_1} \cdots l_k(x)^{\alpha_k}$$ where $l_i(x)=\ell_i(x)/\ell_{in}$. The same argument as for $\ell_{in}$ shows that $\ell_{i1}$ and therefore $l_{i1}$ will be nonzero with high probability. To take care of assumption~(*), it remains to check that the $l_{i1}$ will be distinct with high probability.
The condition $l_{i1} \neq l_{j1}$ is equivalent to $\ell_{i1}\ell_{jn} - \ell_{j1} \ell_{in} \neq 0$. By~(\ref{elleq}) this expression can be viewed as a quadratic form in the entries of $A$. From unique factorization and the hypothesis that the linear forms $L_i, L_j$ are not proportional, it follows that this quadratic form is not identically 0. We conclude again that it will be nonzero with high probability by the Schwartz-Zippel lemma. We have established that $g(x)=f(A.x)$ satisfies (*) with high probability. In this case, by Proposition~\ref{assumptionprop} the factorization of $g$ at step 2 of the algorithm and the verification of the polynomial identity at step 3 will also succeed. \end{proof} \section{Identifying the hyperplanes and their multiplicities} \label{hyperplane} If a polynomial $f$ can be factored as a product of linear forms, its zero set $Z(f)$ is a union of (homogeneous) hyperplanes. In this section we present an algorithm based on this simple geometric fact. We can identify each hyperplane in $Z(f)$ by finding $n-1$ nonzero points that lie on it. Assume that $f$ can be written as $f(x)=\lambda.l_1(x)^{\alpha_1} \cdots l_k(x)^{\alpha_k}$ where the linear forms $l_i$ are not proportional. We will need a total of $k(n-1)$ points on $Z(f)$ to identify the $k$ hyperplanes. Our algorithm begins with the determination of these $k(n-1)$ points. \begin{enumerate} \item Pick a random point $a \in K^n$ and $n-1$ random vectors $v_1,\ldots,v_{n-1}$ in~$K^n$ (or representatives of points in $\mathbb{P}(K^{n})$ to be more precise). Let $\Delta_i$ be the line of direction $v_i$ going through $a$. Compute the intersection $\Delta_i \cap Z(f)$ for $i=1,\ldots,n-1$. \item Output the $k(n-1)$ intersection points $a_1,\ldots,a_{k(n-1)}$ found at step~1. \end{enumerate} In the sequel, we assume that $f(a) \neq 0$. This holds with high probability by the Schwartz-Zippel lemma.
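For intuition, step~1 amounts to univariate root finding along each line. When the linear forms are known explicitly (for testing only; the algorithm itself has only black-box access to $f$ and would find the roots of the interpolated polynomial $g(t)=f(a+tv)$), the intersection parameters have the closed form $t=-l_i(a)/l_i(v)$. A small numpy sketch under that assumption:

```python
import numpy as np

def line_hits(forms, a, v):
    """Intersection parameters of the line {a + t v} with the union of
    hyperplanes Z(l_i): solving l_i(a + t v) = 0 gives t = -l_i(a)/l_i(v)."""
    ts = []
    for l in forms:                      # l: coefficient vector of l_i
        la, lv = float(l @ a), float(l @ v)
        if lv != 0.0:                    # otherwise the line is parallel to Z(l_i)
            ts.append(-la / lv)
    return sorted(ts)
```

With a random point $a$ and direction $v$, this yields one parameter per hyperplane, matching the $k$ distinct roots expected in the generic case.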
At step~1 we compute $\Delta_i \cap Z(f)$ by finding the roots of the univariate polynomial $g(t)=f(a+tv_i)$. We obtain one point on each hyperplane $Z(l_1),\ldots,Z(l_k)$ except if $v_i$ belongs to one of these hyperplanes. This can happen only with negligible probability. Moreover, these $k$ points are distinct except if $\Delta_i$ goes through the intersection of two of these hyperplanes. Again, this happens with negligible probability (we explain in the proof of Theorem~\ref{hyperplaneth} how to obtain explicit bounds on the probabilities of these bad events). Since $a {\not \in Z(f)}$, with high probability we find a total of $k(n-1)$ distinct points as claimed at step~2. Moreover, each hyperplane $Z(l_i)$ contains exactly $n-1$ points. Note that at step~1 we have also determined $k$ if this parameter was not already known in advance. At the next stage of our algorithm we determine the $k$ hyperplanes. We first determine the hyperplane going through $a_1$ as follows: \begin{enumerate} \item[3.] Find $n-2$ points $b_2,\ldots,b_{n-1}$ in the set $\{a_2,\ldots,a_{k(n-1)}\}$ such that each line $(a_1 b_j)$ is included in $Z(f)$. \item[4.] Output the linear subspace $H_1=\mathrm{Span}(a_1,b_2,\ldots,b_{n-1})$. \end{enumerate} At step~3 we can find out whether a line $(a_1a_j)$ is included in $Z(f)$ by checking that the univariate polynomial $g(t)=f(ta_1+(1-t)a_j)$ is identically~0. This can be done deterministically with $k-1$ calls to the black box for $f$ (indeed, if $g {\not \equiv} 0$ this polynomial has at most $k$ roots, and we already know that $g(0)=g(1)=0$). Alternatively, we can perform a single call to the black box by evaluating $g$ at a random point. Assume for instance that $Z(l_1)$ is the hyperplane going through $a_1$. In the analysis of the first two steps we saw that (with high probability) $a_1$ does not lie on any other $Z(l_j)$, and that exactly $n-2$ points $b_2,\ldots,b_{n-1}$ in $\{a_2,\ldots,a_{k(n-1)}\}$ lie on $Z(l_1)$. 
The algorithm identifies these points at step 3 (we will find exactly one point on each line $\Delta_i$). It follows that the subspace $H_1$ output at step 4 is included in $Z(l_1)$. To conclude that $H_1 = Z(l_1)$, it remains to show that $H_1$ is of dimension~$n-1$. Assume without loss of generality that $\{a_1\}=\Delta_1 \cap Z(l_1)$ and $\{b_j\} = \Delta_j \cap Z(l_1)$ for $j=2,\ldots,n-1$. Then $a_1=a+t_1v_1$ and $b_j=a+t_jv_j$ for $j=2,\ldots,n-1$. Here $v_1,\ldots,v_{n-1}$ are the directions chosen at step 1, and $t_1,\ldots,t_{n-1}$ are appropriate nonzero scalars. With high probability, the $n$ vectors $a,v_1,\ldots,v_{n-1}$ are linearly independent. In this case, the family $a+t_1v_1,\ldots,a+t_{n-1}v_{n-1}$ is of rank $n-1$ as desired. The above analysis shows that steps 3 and 4 identify $H_1=Z(l_1)$ with high probability. The $k-1$ remaining hyperplanes can be identified by repeating this procedure. For instance, to determine the second hyperplane $H_2$ we will remove the points $a_1,b_2,\ldots,b_{n-1}$ (which lie on $H_1$) from the set $\{a_1,\ldots,a_{k(n-1)}\}$ and we will determine the hyperplane going through the first of the $(k-1)(n-1)$ remaining points. In the next stage of the algorithm we determine the multiplicities $\alpha_i$ of the linear forms $l_i$. This is done as follows: \begin{enumerate} \item[5.] Consider again the random point $a$ and the random vector $v_1$ drawn at step 1. We have already computed the intersection points with $H_1=Z(l_1),\ldots,H_k=Z(l_k)$ of the line $\Delta_1$ of direction $v_1$ going through~$a$. Recall that this was done by computing the roots $t_1,\ldots,t_k$ of the univariate polynomial $g(t)=f(a+tv_1)$. Let us assume without loss of generality that these roots are ordered so that $\{a+t_1v_1\} = H_1 \cap \Delta_1,\ldots, \{a+t_k v_1 \} = H_k \cap \Delta_1$. Now we compute the multiplicities $\alpha_1,\ldots,\alpha_k$ of $t_1,\ldots,t_k$ as roots of $g$ and we output these multiplicities. 
\end{enumerate} If $f(x)=l_1(x)^{\alpha_1} \cdots l_k(x)^{\alpha_k}$, the multiplicities of the roots of $g$ are indeed equal to $\alpha_1,\ldots,\alpha_k$ except if $\Delta_1$ goes through the intersection of two of the hyperplanes $H_1,\ldots,H_k$. As already pointed out in the analysis of the first two steps, this happens only with negligible probability. Note that there is nothing special about $\Delta_1$ at step 5: we could have used a new random line~$\Delta$ instead. The final stage of the algorithm is a normalization step. \begin{enumerate} \item[6.] At the beginning of this step we have determined linear forms $l_i$ and multiplicities $\alpha_i$ so that $f(x)=\lambda.l_1(x)^{\alpha_1} \cdots l_k(x)^{\alpha_k}$ for some constant~$\lambda$. We determine~$\lambda$ by one call to the black box for $f$ at a point where the $l_i$ do not vanish (for instance, at a random point). \end{enumerate} We have obtained the following result. \begin{theorem} \label{hyperplaneth} Let $f \in K[x_1,\ldots,x_n]$ be a polynomial of degree $d$ that admits a factorization $f(x)=\lambda.l_1(x)^{\alpha_1} \cdots l_k(x)^{\alpha_k}$ over $\overline{K}$, where no two linear forms $l_i$ are proportional. The above algorithm determines such a factorization with high probability, and the number of calls to the black box for $f$ is polynomial in $n$ and $d$. Assume moreover that $K=\mathbb{Q}$ and that a factorization of $f$ where $l_i \in \mathbb{Q}[x_1,\ldots,x_n]$ is possible. If the coefficients of these linear forms are of bit size at most $s$ then all calls to the black box are made at rational points of bit size polynomial in $n$, $d$ and $s$. \end{theorem} \begin{proof} The correctness of the algorithm follows from the above analysis. Let us focus therefore on the case $K=\mathbb{Q}$ of the theorem. This result relies on a standard application of the Schwarz-Zippel lemma. 
More precisely, as explained in the analysis of the first two steps, we want to pick a random point $a$ such that $f(a) \neq 0$ and random vectors $v_1,\ldots,v_{n-1}$ that do not belong to any of the hyperplanes. Moreover, the line $\Delta_i$ defined at step~1 should not go through the intersection of two hyperplanes. Let us pick the coordinates of $a$ and of the $v_i$ independently at random from a finite set $S \subseteq \mathbb{Q}$. By the Schwartz-Zippel lemma, $\Pr[f(a)=0] \leq k/|S| \leq d/|S|$; and for any linear form $l_j$ we have $\Pr[l_j(v_i)=0] \leq 1/|S|$. As to $\Delta_i$, let us bound for instance the probability of going through the intersection of the first two hyperplanes. Since $\Delta_i$ is the line of direction $v_i$ going through~$a$, it suffices to make sure that $l_2(a)l_1(v_i) - l_1(a)l_2(v_i) \neq 0$. By the Schwartz-Zippel lemma this happens with probability at least $1-2/|S|$. Another constraint arising in the analysis of steps 3 and 4 is that $a,v_1,\ldots,v_{n-1}$ should be linearly independent. By the Schwartz-Zippel lemma, the corresponding determinant vanishes with probability at most~$n/|S|$. Note that the bounds obtained so far are independent of $s$. This parameter comes into play when we compute the intersections $\Delta_i \cap Z(f)$ at step~1. Recall that we do this by finding the roots of the univariate polynomial $g(t)=f(a+tv_i)$. The roots are: $$t_1=-l_1(a)/l_1(v_i),\ldots,t_k=-l_k(a)/l_k(v_i).$$ Then at step 3 we call the black box at points belonging to lines going through two of the $k(n-1)$ intersection points found at step~1. \end{proof} The algorithm presented in this section relies on a simple and appealing geometric picture, but it suffers from a drawback compared to the algorithms of sections~\ref{factorization} and~\ref{bivariate}: \begin{remark} \label{pointsize} Assume that $K = \mathbb{Q}$.
The above algorithm may need to call the black box for $f$ at algebraic (non-rational) points in the case where the linear forms $l_i$ do not have rational coefficients. This is due to the fact that we call the black box at points that lie on the hyperplanes $l_i=0$. By contrast, the algorithms of sections~\ref{factorization} and~\ref{bivariate} always call the black box at integer points, even when $f$ has algebraic (non-rational) coefficients. To see why this is true for the algorithm of Section~\ref{bivariate}, note that the main use of the black box is for performing bivariate interpolation. In Section~\ref{factorization}, the black box is used only for the computation of the Lie algebra of $f$, following Lemma~22 of~\cite{kayal2012affine}. More details on the black box calls performed by our three algorithms can be found in the appendix. \end{remark}
https://arxiv.org/abs/1609.09565
Analysis of Exact and Approximated Epidemic Models over Complex Networks
We study the spread of discrete-time epidemics over arbitrary networks for well-known propagation models, namely SIS (susceptible-infected-susceptible), SIR (susceptible-infected-recovered), SIRS (susceptible-infected-recovered-susceptible) and SIV (susceptible-infected-vaccinated). Such epidemics are described by $2^n$- or $3^n$-state Markov chains. Ostensibly, because analyzing such Markov chains is too complicated, their $O(n)$-dimensional nonlinear "mean-field" approximation, and its linearization, are often studied instead. We provide a complete global analysis of the epidemic dynamics of the nonlinear mean-field approximation. In particular, we show that depending on the largest eigenvalue of the underlying graph adjacency matrix and the rates of infection, recovery, and vaccination, the global dynamics takes on one of two forms: either the epidemic dies out, or it converges to another unique fixed point (the so-called endemic state where a constant fraction of the nodes remain infected). A similar result has also been shown in the continuous-time case. We tie in these results with the "true" underlying Markov chain model by showing that the linear model is the tightest upper-bound on the true probabilities of infection that involves only marginals, and that, even though the nonlinear model is not an upper-bound on the true probabilities in general, it does provide an upper-bound on the probability of the chain not being absorbed. As a consequence, we also show that when the disease-free fixed point is globally stable for the mean-field model, the Markov chain has an $O(\log n)$ mixing time, which means the epidemic dies out quickly. We compare and summarize the results on different propagation models.
\section{Introduction}\label{sec:introduction} \IEEEPARstart{E}{pidemic} models have been extensively studied since a first mathematical formulation was introduced in 1927 by Kermack and McKendrick \cite{kermack1927contribution}. Though initially proposed to understand the spread of contagious diseases \cite{bailey1975mathematical}, the study of epidemics applies to many other areas, such as network security \cite{alpcan2010network,acemoglu2013network}, viral advertising \cite{phelps2004viral,richardson2002mining}, and information propagation \cite{jacquet2010information,cha2009measurement}. Questions of interest include the existence of fixed points, stability (does the epidemic die out?), transient behavior, the cost of an epidemic \cite{LizMinimizing,bose2013cost}, how best to control an epidemic \cite{drakopoulos2014efficient,nowzari2015analysis}, etc. We analyze the spread of epidemics over arbitrary networks for most well-known propagation models in the literature, including SIS (susceptible-infected-susceptible), SIR (susceptible-infected-recovered), SIRS (susceptible-infected-recovered-susceptible), and SIV (susceptible-infected-vaccinated). In the basic SIS model, each node in the network is in one of two states: susceptible (healthy) or infected. A healthy node has a chance of getting infected if it has infected neighbors in the network. The probability of getting infected increases as the number of infected neighbors increases. An infected node also has a chance of recovering, after which it still has a chance of getting infected by its neighbors. Flu is an example of this model. The SIR and SIRS models have an extra recovered state, which corresponds to nodes that have recovered from the disease and are not susceptible to it. Mumps and pertussis are examples of SIR and SIRS epidemics, respectively.
Additionally, in SIV models there is a random vaccination (either permanent or temporary) which permits a direct transition from the susceptible state to the recovered (vaccinated) one. Considering even the SIS case in its entirety, for a network with $n$ nodes, this yields a Markov chain with $2^n$ states, sometimes called the exact or ``stochastic'' model. This is a discrete-space model, as there are two possible states, ``0'' and ``1'', for healthy and infected. Ostensibly because analyzing this Markov chain is too complicated, various $n$-dimensional linear and non-linear approximations have been proposed. The most common of these is the $n$-dimensional non-linear mean-field approximation, and its corresponding linearization about the disease-free fixed point, which are often referred to as ``deterministic'' models. Indeed, these are continuous-space models whose variables take real values between 0 and 1, which can be understood as the marginal probability of being infected (or the infected fraction of the $i$-th subpopulation). We provide a complete global analysis of the dynamics of the nonlinear model. In particular, we show that depending on the largest eigenvalue of the underlying graph adjacency matrix and the rates of infection, recovery, and vaccination, the global dynamics takes on one of two forms: either the epidemic dies out (disease-free fixed point), or it converges to another unique fixed point where a constant fraction of the nodes remain infected (endemic state). Furthermore, we tie in the approximated models with the \emph{true} underlying Markov chain model. We prove that the linear model provides an upper bound on the marginal probabilities of infection, and that this is the \emph{tightest upper bound using the marginals only}. We show that, even though the nonlinear model is not an upper bound on the true probabilities in general, it does provide an upper bound on the probability of the chain not being absorbed (some nodes being infected).
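For concreteness, one common discrete-time nonlinear mean-field SIS map iterates the marginal infection probabilities as sketched below. The specific parametrization ($\beta$ the per-contact infection probability, $\delta$ the recovery probability, and a product-form escape probability) is our assumption for illustration, since update rules vary slightly across the literature:

```python
import numpy as np

def sis_mean_field_step(p, A, beta, delta):
    """One iteration of an n-dimensional nonlinear SIS mean-field map.

    p[i] is the marginal probability that node i is infected. An infected
    node recovers with probability delta; a healthy node escapes infection
    only if every infected neighbor independently fails to transmit
    (each with probability 1 - beta * p[j])."""
    escape = np.prod(1.0 - beta * A * p[None, :], axis=1)
    return (1.0 - delta) * p + (1.0 - p) * (1.0 - escape)
```

Iterating this map from an initial condition in $[0,1]^n$ either converges to the all-zeros (disease-free) fixed point or to an endemic fixed point, which is the dichotomy analyzed in the paper.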
As a consequence of these results, we show that when the $O(n)$-dimensional approximate models are stable to the disease-free fixed point, the Markov chain has a mixing time of $O(\log n)$, which means the epidemic dies out fast in the true model as well. The studies of continuous-time and discrete-time epidemic models form two parallel bodies of work, and interesting results have been shown in both cases by different groups of researchers, e.g. \cite{Ganeshnetworktopology,van2014upper,fall2007epidemiological,shuai2013global,li2014analysis,nowzari2014stability,draief2006thresholds} in the continuous-time and \cite{arenas2010discrete,WangEpidemic,chakrabarti2008epidemic,ahn2013global,prakash2012threshold,ahn2014mixing} in the discrete-time case. Depending on the application at hand, it may make more sense to use one class or the other. This paper focuses on discrete-time models, and we provide a unified analysis of exact and approximated models and the connections between them. We spell out our contributions with respect to what is known in both the discrete-time and the continuous-time literature below.\\ The following results were not known in either the discrete- or the continuous-time literature: \begin{enumerate} \item We show that the linear model is the tightest upper bound on the exact probabilities of infection among all bounds that use only the marginals. \item We show that, even though the nonlinear model is not an upper bound on the exact probabilities in general, it does provide an upper bound on the probability of the chain not being absorbed. \item Although the logarithmic time-to-extinction of the epidemic under the threshold was known for the SIS model in the continuous-time case (Ganesh et al. \cite{Ganeshnetworktopology}), this result had not been shown for other well-known propagation models (e.g. SIRS, SIV, etc.) in either discrete or continuous time. 
\end{enumerate} In addition to the above, we complement the discrete-time literature by showing the following results, which were recently shown in the continuous-time case \cite{shuai2013global,khanafer2014stability,fall2007epidemiological} but not in the discrete-time one. \begin{enumerate} \item In discrete-time mean-field approximated models, the stability of the disease-free fixed point under the threshold had been shown for SIS and many more complicated propagation models. However, the existence and stability of a unique endemic equilibrium above the threshold had not been shown for any discrete-time model before this work. \item In contrast with the continuous-time literature, the stability results shown for discrete-time approximated models are typically ``local.'' We show ``global'' stability results, which are counterparts of those in the continuous-time case. \end{enumerate} Sections 2, 3, and 4 are devoted to SIS, SIRS, and SIV epidemic models, respectively. Starting from SIS epidemics, we describe the exact Markov chain model, the nonlinear epidemic map, and the linear model. In the analysis of the nonlinear model, we first describe the case where the epidemic dies out. Then we analyze the second case, where the all-healthy fixed point is not stable, and show the existence and uniqueness of a second fixed point, and its global stability. Returning to the exact Markov chain model, we establish the connection between it and the approximated models. We define a partial order which makes the transition matrix of the Markov chain an order-preserving map, and helps us establish this relation. We further generalize the model by allowing each node to have its own recovery and infection rates. We discuss variations of the models depending on the effect of simultaneous recovery and infection, as well as the efficacy of the vaccination. Simulation results for all the models are provided in Section 5, which support the results proved throughout the paper. 
We finally summarize the results, compare them, and conclude in Section 6. To avoid confusion and facilitate reading, we use boxes for the main equations describing the models in each section. The proofs are postponed to the appendix. The current paper combines and expands the results that first appeared in \cite{ahn2013global,ahn2014mixing,ruhi2015sirs}. \section{SIS Epidemics} \begin{figure}[hb] \centering \includegraphics[width=0.4\columnwidth]{SIS.png} \caption{State diagram of a single node in the SIS model, and the transition rates. Wavy arrow represents exogenous (network-based) transition. $S$ is healthy or susceptible, $I$ is infected.} \label{fig:SIS_model} \end{figure} \subsection{Model Description} \subsubsection{Exact Markov Chain Model} For a given connected network $G$ with $n$ nodes, let $N_i$ be the neighborhood of node $i$, and let $A$ be the adjacency matrix of $G$. Each node can be in a state of health, represented by ``0'', or a state of infection, represented by ``1''. Consequently, $\xi(t)=(\xi_1(t), \cdots, \xi_n(t)) \in \{0,1 \}^n$ is a binary $n$-tuple whose entries represent the states of the nodes at time $t$; i.e., node $i$ is infected if $\xi_i(t) =1$ and healthy if $\xi_i(t)=0$. We assume that, conditioned on the current state $\xi(t)$, the next states of the nodes are independent. In other words, for any two state vectors $X,Y \in \{0,1\}^n$, \begin{equation} \mathbb{P}(\xi(t+1)=Y|\xi(t)=X) = \prod_{i=1}^n \mathbb{P}(\xi_i(t+1)=Y_i|\xi(t)=X) \label{MC0} \end{equation} A healthy node remains healthy if all its neighbors are healthy. A healthy node can receive infection from any of its infected neighbors, independently with probability $\beta$. An infected node becomes healthy if it recovers from the disease (with probability $\delta$) while not getting infected by any of its neighbors. 
To summarize this, \begin{empheq}[box=\mbox]{align} \tikzmark{A} &\mathbb{P}(\xi_i(t+1)=Y_i|\xi(t)=X) \nonumber \\ &= \left\{ \begin{array}{rl} (1-\beta)^{m_i} & \text{if } (X_i,Y_i)=(0,0), |N_i \cap \mathbb{S}(X)| = m_i,\\ 1- (1-\beta)^{m_i} & \text{if } (X_i,Y_i)=(0,1), |N_i \cap \mathbb{S}(X)| = m_i,\\ \delta(1-\beta)^{m_i} & \text{if } (X_i,Y_i)=(1,0), |N_i \cap \mathbb{S}(X)| = m_i,\\ 1-\delta(1-\beta)^{m_i} & \text{if } (X_i,Y_i)=(1,1), |N_i \cap \mathbb{S}(X)| = m_i. \end{array} \right.\tikzmark{B} \label{Eq:noimmune} \end{empheq} \begin{tikzpicture}[remember picture,overlay] \draw ([shift={(-.2em,2.5ex)}]A) rectangle ([shift={(-0.5em,-6.0ex)}]B); \end{tikzpicture} where $\mathbb{S}(X)$ is the support of $X \in \{0,1\}^n$, i.e. $\mathbb{S}(X) = \{ i : X_i = 1 \}$. Let $S$ be the transition matrix of this Markov chain, $S_{X,Y}=\mathbb{P}(\xi(t+1)=Y|\xi(t)=X)$. We assume that the Markov chain is time-homogeneous and write $S_{X,Y}=\mathbb{P}(Y|X)$ for simplicity. The Markov chain has a unique stationary distribution, which is the state where all the nodes in the network are healthy with probability $1$. If all the nodes are healthy, no node will be exposed to the disease, and therefore they will always stay healthy. Therefore the probability distribution on the states $\{0,1\}^n$ converges to the all-healthy state as time progresses. In other words, the disease will die out if we wait long enough. However, this result is not practical on its own, since extinction may take a very long time, especially if the mixing time of the Markov chain is exponentially large. It is difficult to analyze the dynamics of the Markov chain as the number of nodes increases. Comparing the discrete-time Markov chain model to the continuous-time Markov chain model described in \cite{Ganeshnetworktopology}, the continuous-time model allows only one node to flip its epidemic state at any given instant. 
However, the discrete-time model allows the epidemic states of more than one node to change at the same time. The reason is that changes of epidemic state for two or more nodes can occur within the same time interval, even though they do not happen at the same instant. The transition matrix of the embedded Markov chain of the continuous-time model has nonzero entries only where the Hamming distance between the row coordinate and the column coordinate is 1. In other words, $X,Y \in \{0,1\}^n$ should differ in exactly one digit in order for the entry of the $X$-th row and the $Y$-th column to be nonzero. However, the transition matrix of the discrete-time Markov chain model can have nonzero entries everywhere (except the row of the absorbing state). Let $I(t)$ denote the set of infected nodes at time $t$, and define $p_i(t)$ as the marginal probability that node $i$ is infected at time $t$, i.e. $p_i(t)=\mathbb{P}(\xi_i(t)=1)$. \begin{align} p_i(&t+1)=\mathbb{P}(\xi_i(t+1)=1 | \xi_i(t)=1) \mathbb{P}(\xi_i(t)=1)\notag\\ &\hspace{35pt}+\mathbb{P}(\xi_i(t+1)=1 | \xi_i(t)=0) \mathbb{P}(\xi_i(t)=0) \end{align} By marginalizing out the states of the other nodes, we can write this as \begin{align} p_i(&t+1)=\mathbb{E}_{\xi_{-i}(t)|\xi_i(t)=1}\bigg[1-\delta \prod_{j\in N_i}(1-\beta\mathds{1}_{\xi_j(t)=1})\bigg]p_i(t)\notag\\ &+\mathbb{E}_{\xi_{-i}(t)|\xi_i(t)=0}\bigg[1-\prod_{j\in N_i}(1-\beta\mathds{1}_{\xi_j(t)=1})\bigg](1-p_i(t)) , \end{align} where the conditional expectations are with respect to the joint probability of all nodes other than $i$ (denoted by $\xi_{-i}$). \subsubsection{Approximated Nonlinear Model} One may approximate each factor $1-\beta \mathds{1}_{\xi_j(t)=1}$ by its average $\mathbb{E}[ 1-\beta \mathds{1}_{\xi_j(t)=1}] = 1 - \beta p_j(t)$, using the assumption that the infection events are independent. 
\begin{align} P_i(t+1) & = \left(1-\delta \prod_{j \in N_i} \left( 1-\beta P_j(t) \right) \right)P_i(t) \nonumber \\ & \quad + \left(1- \prod_{j \in N_i} \left( 1-\beta P_j(t) \right) \right) (1-P_i(t)) \label{Eq:approximatedP} \end{align} This is the so-called mean-field approximation. We use capital $P$ for the approximated probabilities, to distinguish them from the exact probabilities of the Markov chain, $p$. The approximated model is studied on $[0,1]^n$, the $n$-dimensional probability space, which is computationally much less demanding than working with the $2^n$ states of the exact chain. One such model was studied by Chakrabarti and Wang \cite{chakrabarti2008epidemic}, \cite{WangEpidemic}. Ahn \cite{ahn2013global} viewed the $n$-dimensional probability vector at time $t+1$ as the image of the probability vector at time $t$ under the map $\Phi:[0,1]^n \to [0,1]^n$. The $i$-th component of the epidemic map $\Phi$ is defined as follows: \begin{empheq}[box=\fbox]{equation} \Phi_i(x) = (1-\delta)x_i + ( 1- (1-\delta)x_i)\left( 1 - \prod_{j \in N_i} (1-\beta x_j) \right) \label{Eq:wtphieq} \end{empheq} It is trivial to check that $P_i(t+1) = \Phi_i((P_1(t), \cdots, P_n(t))^T)$ from \eqref{Eq:approximatedP}. \subsubsection{Linear Model} The linearization of the above nonlinear map around the origin is what is referred to as the linear model: \begin{equation} \tilde{P}_i(t+1) = (1-\delta)\tilde{P}_i(t) + \beta \left( \sum_{j \in N_i} \tilde{P}_j(t) \right) \label{Eq:linear} \end{equation} Putting together the equations of this form for all $i$, one can write this as \begin{empheq}[box=\fbox]{equation} \tilde{P}(t+1) = ((1-\delta)I_n + \beta A) \tilde{P}(t) \end{empheq} Note that $(1-\delta)I_n + \beta A$ is in fact the Jacobian of $\Phi$ at the origin. 
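As an illustrative sketch (the star graph, rates, and state vector below are hypothetical choices, not from the paper), the map \eqref{Eq:wtphieq} and its linearization can be coded directly, and one can check componentwise that $\Phi(x) \preceq ((1-\delta)I_n + \beta A)\,x$ on $[0,1]^n$:

```python
def phi(x, A, beta, delta):
    """Mean-field SIS map: Phi_i(x) = (1-d)x_i + (1-(1-d)x_i)(1 - prod_j (1 - b x_j))."""
    n = len(x)
    out = []
    for i in range(n):
        prod = 1.0
        for j in range(n):
            if A[i][j]:
                prod *= 1.0 - beta * x[j]
        out.append((1 - delta) * x[i] + (1 - (1 - delta) * x[i]) * (1 - prod))
    return out

def linear(x, A, beta, delta):
    """Linear model: ((1-delta)I + beta*A) x."""
    n = len(x)
    return [(1 - delta) * x[i] + beta * sum(A[i][j] * x[j] for j in range(n))
            for i in range(n)]

# Hypothetical 4-node star graph (node 0 is the hub) and rates.
A = [[0, 1, 1, 1],
     [1, 0, 0, 0],
     [1, 0, 0, 0],
     [1, 0, 0, 0]]
beta, delta = 0.2, 0.5
x = [0.3, 0.7, 0.1, 0.9]
# The linearization upper-bounds the nonlinear map componentwise.
assert all(p <= l + 1e-12 for p, l in zip(phi(x, A, beta, delta),
                                          linear(x, A, beta, delta)))
```

The inequality is exactly the chain of bounds used in the next subsection.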
\subsection{Analysis of the Nonlinear Model} \subsubsection{Epidemic Extinction: $\frac{\beta\lambda_{\max}(A)}{\delta}<1$} We study the epidemic map $P_i(t+1) = \Phi_i((P_1(t), \cdots, P_n(t))^T)$ where $\Phi : [0,1]^n \to [0,1]^n$ is defined as \eqref{Eq:wtphieq} on the $n$-dimensional probability space. To understand the behavior of this model, we can upper bound it as follows. \begin{align} \Phi_i(x) &= (1-\delta)x_i + ( 1- (1-\delta)x_i)\left( 1 - \prod_{j \in N_i} (1-\beta x_j) \right) \\ &\leq (1-\delta)x_i + \left( 1 - \prod_{j \in N_i} (1-\beta x_j) \right) \\ &\leq (1-\delta)x_i + \beta \left( \sum_{j \in N_i} x_j \right) \end{align} The last expression is exactly the linear map \eqref{Eq:linear}; in fact, the linearization gives an upper bound on the nonlinear model. For two real-valued column vectors $u,v\in \mathbb{R}^n$, we say $u \preceq v$ if $u_i \leq v_i$ for all $i \in \{1,\cdots,n\}$, and $u \prec v$ if $u_i < v_i$ for all $i \in \{1,\cdots,n\}$. For $P(t)=(P_1(t), \cdots, P_n(t))^\mathrm{T}$, \begin{equation} P(t+1) = \Phi(P(t)) \preceq ((1-\delta)I_n + \beta A ) P(t) \end{equation} Clearly $P(t)$ converges to the origin for both \eqref{Eq:wtphieq} and \eqref{Eq:linear} if $\lambda_{max}((1-\delta)I_n + \beta A ) <1 $. In other words, when $\frac{\beta\lambda_{\max}(A)}{\delta}$ is less than $1$, the origin is a unique fixed point of \eqref{Eq:wtphieq} which is globally stable. The reason is that convergence holds for the linear upper bound, whose system matrix is the Jacobian of \eqref{Eq:wtphieq} at the origin. We will therefore focus on the dynamics of the system when $\lambda_{max}((1-\delta)I_n + \beta A ) > 1 $. \subsubsection{Epidemic Spread: $\frac{\beta\lambda_{\max}(A)}{\delta}>1$} \paragraph{Existence and Uniqueness of Nontrivial Fixed Point} The origin, the trivial fixed point of the system equation, is unstable when $\lambda_{max}((1-\delta)I_n + \beta A ) > 1 $. 
Moreover, it is not clear in general whether there exists any other fixed point, or how many fixed points exist if so. In this section, we prove that there actually exists a nontrivial fixed point of \eqref{Eq:seq3}, and that this nontrivial fixed point is unique. Wang et al. \cite{WangEpidemic} and Chakrabarti et al. \cite{chakrabarti2008epidemic} focus on staying healthy, by defining the probability that a node receives no infection from its neighborhood. We focus on {\em infection} rather than staying healthy. Let $\Xi : [0,1]^n \to [0,1]^n$ with $\Xi=(\Xi_1 , \cdots, \Xi_n )^\mathrm{T}$ be a map associated with the network $G$ satisfying the three properties below. (a) $\Xi_i(x)=0$ and $\displaystyle \frac{ \partial \Xi_i }{ \partial x_j} = \beta A_{i,j}$ at the origin. (b) $\displaystyle \frac{ \partial \Xi_i }{ \partial x_j} > 0$ if $i \in N_j$ in $G$, and $\displaystyle \frac{ \partial \Xi_i}{ \partial x_j} = 0$ if $i \notin N_j$ in $G$. (c) For any $i,j,k \in \{ 1, \cdots, n \}$, $\displaystyle \frac{ \partial^2 \Xi_i }{ \partial x_j \partial x_k} \leq 0$. Obviously $\Xi_i(x)=\left( 1 - \prod_{j \in N_i} (1-\beta x_j) \right)$ satisfies all the conditions above. We also define a scalar function $\omega : [0,1] \to \mathbb{R}_+$ satisfying the three properties below. (d) $\omega(0)=0$, $\omega(1) \geq 1$. (e) $\omega'(0)=\delta$, $\omega'(s) > 0$ for all $s \in (0,1)$. (f) $\displaystyle \frac{\omega(s_1)}{s_1} < \frac{\omega(s_2)}{s_2}$ if $s_1 < s_2$. It is also clear that $\displaystyle \omega(s) = \frac{\delta s}{1-(1-\delta)s}$ satisfies all three conditions above. By working with generic $\Xi(\cdot)$ and $\omega(\cdot)$, the analysis can also be applied directly to the immune-admitting model, which will be described later. 
We can view \eqref{Eq:wtphieq} as \begin{equation} P_i(t+1) = P_i(t) + (1-(1-\delta)P_i(t)) ( \Xi_i(P(t)) - \omega(P_i(t))) \label{Eq:seq3} \end{equation} \begin{lemma}\label{Lem:ccv} Let $h_{i,u,v} : s \to \Xi_i(u+sv)$ be a function defined on a subset of the nonnegative real numbers, for given $i \in \{1, \cdots, n\}$ and $u,v \in [0,1]^n$. Then $\displaystyle \frac{h_{i,u,v}(s) - h_{i,u,v}(0)}{s}$ is a decreasing function of $s$. \end{lemma} \begin{lemma}\label{lm:v} $\lambda_{max}((1-\delta)I_n + \beta A ) > 1 $ if and only if there exists $v \succ (0,\cdots, 0)^\mathrm{T} = 0_n$ such that $(\beta A - \delta I_n )v \succ 0_n$. \end{lemma} The main theorem of this section, which guarantees the existence and uniqueness of the nontrivial fixed point of \eqref{Eq:seq3}, follows. \begin{theorem}\label{Thm:existence} Define a map $\Psi : [0,1]^n \to \mathbb{R}^n$ with $\Xi$ and $\omega$ satisfying the conditions (a)--(f) above, as \begin{equation} \Psi_i(x) = \Xi_i(x) - \omega(x_i)~. \label{Eq:psieq} \end{equation} Then $\Psi=(\Psi_1, \cdots, \Psi_n)$ has a unique nontrivial (other than the origin) zero if $\frac{\beta\lambda_{\max}(A)}{\delta} > 1 $. \end{theorem} We should emphasize that this unique nontrivial zero (denoted by $x^*$ in the proof) is also the unique nontrivial fixed point of \eqref{Eq:seq3}, as desired. As a further remark, consider a network whose edge $\{i,j\}$ has weight $w_{ij}=w_{ji} \in [0,1]$. The weight of each edge could represent the degree of intimacy. The weight matrix can replace the adjacency matrix to define $\Xi_i(x)=\left( 1 - \prod_{j \in N_i} (1-\beta w_{ij} x_j) \right)$. Then $\Xi$ defined by the weight matrix rather than the adjacency matrix also satisfies all three conditions (a)--(c) if $A_{ij}$ is replaced by $w_{ij}$ in (a). The system of equations will still have the same properties even if we admit different weights. 
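Above the threshold, the nontrivial fixed point can be located numerically by simply iterating the map. A sketch under stated assumptions (a hypothetical complete graph $K_3$ with $\beta=0.4$ and $\delta=0.3$, so $\beta\lambda_{\max}(A)/\delta \approx 2.67 > 1$; these parameters are not from the paper):

```python
def phi(x, A, beta, delta):
    """Mean-field SIS map Phi of Eq. (wtphieq)."""
    n = len(x)
    out = []
    for i in range(n):
        prod = 1.0
        for j in range(n):
            if A[i][j]:
                prod *= 1.0 - beta * x[j]
        out.append((1 - delta) * x[i] + (1 - (1 - delta) * x[i]) * (1 - prod))
    return out

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # complete graph K3: lambda_max(A) = 2
beta, delta = 0.4, 0.3
x = [1.0, 1.0, 1.0]                     # start from the all-ones vector
for _ in range(5000):
    x = phi(x, A, beta, delta)
xn = phi(x, A, beta, delta)
assert max(abs(a - b) for a, b in zip(x, xn)) < 1e-9   # numerically a fixed point
assert min(x) > 0.5                                    # and it is not the origin
```

In this example the iterates settle at the symmetric endemic state $x^* \approx (0.79, 0.79, 0.79)$, the unique nontrivial zero of $\Psi$.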
\paragraph{Global Stability of Nontrivial Fixed Point} The origin, the trivial fixed point of the system, is globally stable if $\lambda_{max}((1-\delta)I_n + \beta A ) < 1 $. The next issue is whether the nontrivial fixed point is also stable if $\lambda_{max}((1-\delta)I_n + \beta A ) > 1 $. It turns out that this is true, provided we do not start at the origin. \begin{theorem}\label{Thm:GSofNFP} Suppose $\lambda_{max}((1-\delta)I_n + \beta A ) > 1 $. As $t$ increases, $P(t+1)=\Phi(P(t))$ defined by \eqref{Eq:wtphieq} converges to the unique nontrivial fixed point $x^*$, provided $P(0)$ is not the origin. \end{theorem} \subsection{Analysis of the Exact Markov Chain}\label{sec:SIS_MC} Returning to the Markov chain model, we study the mixing time of the Markov chain and how it relates to the nonlinear and linear models. The mixing time of a Markov chain is defined as follows (\cite[Def.~4.5]{levin2009markov}): \begin{equation} t_{mix}(\epsilon)=\min \{t: \sup_\mu \| \mu S^t - \pi \|_{TV} \leq \epsilon \} , \label{Eq:mixingdefn} \end{equation} where $\mu$ is any initial probability distribution defined on the state space and $\pi$ is the stationary distribution. $\| \cdot \|_{TV}$ denotes the total variation distance, which measures the distance between two probability distributions. The total variation distance between two probability distributions $\mu$ and $\mu'$ is defined by \begin{equation} \| \mu -\mu' \|_{TV} = \frac{1}{2} \sum_x | \mu(x) - \mu'(x) | \end{equation} where the sum is over all states $x$ of the probability space. In fact, $t_{mix}(\epsilon)$ is the smallest time at which the distance between the distribution at time $t$, started from any initial distribution, and the stationary distribution is at most $\epsilon$. Roughly speaking, the mixing time measures how fast the initial distribution converges to the limiting distribution. \subsubsection{A Linear Programming Approach} Let $\mu(t) \in \mathbb{R}^{2^n}$ be a probability row vector on $\{0,1\}^n$ at time $t$. 
The probability that node $i$ is infected at time $t$, which is denoted by $p_i(t)$ as before, is simply a marginal probability of $\mu(t)$. That is, $p_i(t)=\sum_{X_i=1} \mu_X(t)$. By defining $p_0(t)=1$ (since $\sum \mu_X(t)=1$) and appending it to the rest of the marginal probabilities, we get the column vector $p(t)=(p_0(t), p_1(t), \cdots, p_n(t) )^T$. One can interpret $p(t)$ as \emph{observable data} and $\mu(t)$ as \emph{hidden complete data} at time $t$. We give an upper bound for $p(t+1)$, the observable data at the next time step, using only the current observable information. Let $f_i \in \mathbb{R}^{n+1}$ be the $i$-th unit column vector. $S$ is the transition matrix of the Markov chain, as defined before. $B \in \mathbb{R}^{2^n \times (n+1)}$ is a matrix that relates the observable data, $p(t)$, to the hidden complete data, $\mu(t)$. It can be formally expressed as: \begin{equation} B_{X,k} = \left\{ \begin{array}{rl} 1 & \text{if } k=0,\\ X_k & \text{if } k \in \{1,2, \cdots, n\}. \end{array} \right. \end{equation} We would like to maximize $p_i(t+1)$ for a node $i$, given $p_1(t), \cdots, p_n(t)$. This leads to the following result. \begin{proposition}\label{Lem:LP} $\displaystyle p_i(t+1) \leq (1-\delta)p_i(t) + \beta \sum_{j \in N_i} p_j(t)$. This is the tightest upper bound that involves only the marginal probabilities at time $t$. \end{proposition} Interestingly, this is exactly the linear model that we have been considering. In fact, by applying Proposition~\ref{Lem:LP} to each node, we can express it as \begin{equation} p(t+1) \preceq ((1-\delta)I_n + \beta A) p(t), \end{equation} and $(1-\delta)I_n + \beta A$ is the system matrix of the linear model. To obtain tighter bounds, one should use higher-order terms beyond the marginals (e.g. pairwise probabilities, triples, etc.) \cite{ruhi2016improved}. We now prove the practical result of logarithmic mixing time when $\lambda_{max}((1-\delta)I_n + \beta A ) < 1 $. 
Let $e_X \in \mathbb{R}^{2^n}$ denote the $X$-th unit vector, i.e. the probability vector all of whose components are zero, except the $X$-th component. Also define $\bar{0}, \bar{1} \in \{0,1\}^n$ as the states where everyone is healthy and infected, respectively. \begin{theorem}\label{Thm:upperboundmt} If $\frac{\beta\lambda_{\max}(A)}{\delta}<1$, the mixing time of the Markov chain whose transition matrix $S$ is described by Eqs. \eqref{MC0} and \eqref{Eq:noimmune} is $O(\log n)$. \end{theorem} \subsubsection{Partial Ordering}\label{sec:partialordering} In this section, we define a partial order on the set of probability vectors on $\{0,1\}^n$, and establish the connection between the nonlinear model and the Markov chain. The nonlinear model does not generally provide an upper bound on the true probabilities $p_i(t)$. However, it gives an upper bound on the probability that the system is not in the all-healthy state. We define $\leq_{st}$ on the set of probability vectors of $\{0,1\}^n$ as follows. \begin{equation} \mu \leq_{st} \mu' \quad \text{iff} \quad \sum_{X \preceq Z} \mu_X \geq \sum_{X \preceq Z} \mu'_X \quad \forall Z \in \{0,1\}^n \end{equation} where $X \preceq Z$ means $X_i \leq Z_i$ for all $i$. Note that $\sum_{X \preceq Z} \mu_X$ represents the probability that every node of $\mathbb{S}(Z)^c$ is healthy under the probability distribution $\mu$. Thus $\mu \leq_{st} \mu'$ means that, for any set of nodes, the probability that all of them are healthy is at least as large under $\mu$ as under $\mu'$. Roughly speaking, the infection probability under $\mu'$ stochastically dominates the one under $\mu$. It is trivial to check that $\leq_{st}$ is a well-defined partial order. It is clear that $e_{\bar{1}}$ is the greatest element and $e_{\bar{0}}$ is the smallest element under $\leq_{st}$. 
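Proposition~\ref{Lem:LP} can be checked by brute-force enumeration on a tiny instance. The following sketch (a hypothetical 3-node path graph with a uniform initial distribution, not from the paper) builds the one-step marginals of the exact chain \eqref{Eq:noimmune} and compares them with the linear model:

```python
from itertools import product

A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # hypothetical path graph on 3 nodes
n, beta, delta = 3, 0.3, 0.6

def step_prob(X, Y):
    """P(xi(t+1)=Y | xi(t)=X) per Eq. (noimmune)."""
    p = 1.0
    for i in range(n):
        m = sum(A[i][j] * X[j] for j in range(n))      # infected neighbors of i
        heal = (1 - beta) ** m * (delta if X[i] else 1.0)
        p *= heal if Y[i] == 0 else 1.0 - heal
    return p

states = list(product([0, 1], repeat=n))
mu = {X: 1.0 / len(states) for X in states}            # an arbitrary distribution
p_now = [sum(mu[X] for X in states if X[i]) for i in range(n)]
p_next = [sum(mu[X] * step_prob(X, Y) for X in states for Y in states if Y[i])
          for i in range(n)]
for i in range(n):
    bound = (1 - delta) * p_now[i] + beta * sum(A[i][j] * p_now[j] for j in range(n))
    assert p_next[i] <= bound + 1e-12                  # linear model upper-bounds
```

For the middle node the exact one-step marginal here is $0.422$, below the linear bound of $0.5$.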
As mentioned before, since the underlying graph is connected and there is an absorbing state, it is not hard to see that the stationary distribution is $e_{\bar{0}}$, which corresponds to all nodes being healthy with probability $1$. If all the nodes in the network are healthy, there is no infection and they always stay healthy. The following two lemmas reveal why $\leq_{st}$ is useful: it makes $S$ an order-preserving map, i.e. $\mu \leq_{st} \mu'$ implies $\mu S \leq_{st} \mu' S$. \begin{lemma}\label{Lem:RinverseSR} $R^{-1}SR$ is a $2^n$ by $2^n$ matrix all of whose entries are non-negative, where $R \in \mathbb{R}^{\{0,1\}^n \times \{0,1\}^n }$ is defined as \begin{equation} R_{X,Y}= \left\{ \begin{array}{rl} 1 & \text{if } X \preceq Y,\\ 0 & \text{otherwise } \end{array} \right. \end{equation} \end{lemma} \begin{lemma}\label{Lem:order} If $\mu \leq_{st} \mu'$, then $\mu S \leq_{st} \mu' S$. \end{lemma} Note that Lemma~\ref{Lem:order} directly implies $$\sum_{X \preceq \bar{0}} (\mu S^t)_X = (\mu S^t)_{\bar{0}} \geq (e_{\bar{1}} S^t)_{\bar{0}} = \sum_{X \preceq \bar{0}} (e_{\bar{1}} S^t)_X$$ for any probability vector $\mu$, since $\mu \leq_{st} e_{\bar{1}}$. Now we establish a result which enables us to relate the nonlinear map $\Phi$ to the true probabilities of the Markov chain. For any given $n$-dimensional vector $r=(r_1, \cdots, r_n )^T$, define the $2^n$-dimensional column vector $u(r)$ by $u(r)_X = \displaystyle \prod_{i \in \mathbb{S}(X)} (1-r_i)$. Then we have the following lemma. \begin{lemma}\label{Lem:uofr} $Su(r) \succeq u(\Phi(r))$ for all $r \in [0,1]^n$. \end{lemma} It should be clear that $e_{\bar{0}}^T = u((1,1,\cdots,1)^T) = u(1_n)$ (we distinguish $1_n=(1,1,\cdots, 1)^T \in [0,1]^n$ from the all-infected state $\bar{1} \in \{0,1\}^n$). 
Lemma~\ref{Lem:uofr} is particularly useful because $S$ is a matrix all of whose entries are non-negative, and it follows that \begin{equation} S^t e_{\bar{0}}^T = S^t u(1_n) \succeq u( \Phi^t (1_n)) . \label{Eq:boundbyphitilde} \end{equation} Of note, by some algebra on $e_{\bar{1}} S^t e_{\bar{0}}^T$ using this bound, the same bound as in \eqref{epsilon_last} can be established, which leads to the mixing time result. Furthermore, the $i$-th component of $\Phi^t(1_n)$ provides an upper bound on the probability that the current state is not the steady state, given that the infection started from node $i$ with probability 1 at time 0. Mathematically, $e_{\hat{i}}S^t e_{\bar{0}}^T \geq e_{\hat{i}} u(\Phi^t (1_n)) = 1- \Phi^t_i (1_n)$ by \eqref{Eq:boundbyphitilde}, and we have \begin{align} \mathbb{P}(\xi(t) \neq \bar{0}|\xi(0)=\hat{i}) &= 1- \mathbb{P}(\xi(t)=\bar{0}|\xi(0)=\hat{i})\\ &= 1- e_{\hat{i}}S^t e_{\bar{0}}^T\\ &\leq \Phi^t_i (1_n) \end{align} More importantly, the probability that the network is not in the all-healthy state at time $t$ given that the initial epidemic state is $X$ can be bounded above by the entries of $\Phi^t (1_n)$: \begin{align} &\mathbb{P}(\xi(t) \neq \bar{0}|\xi(0)=X) \\ &= 1- \mathbb{P}(\xi(t) = \bar{0}|\xi(0)=X) = 1- e_{X}S^t e_{\bar{0}}^T \\ &\leq 1- u( \Phi^t (1_n))_X= 1- \prod_{i \in \mathbb{S}(X)} \left( 1- \Phi^t_i(1_n) \right) \label{not-all-healthy} \end{align} \begin{proposition} The nonlinear model provides an upper bound on the probability of the chain not being in the all-healthy state as \begin{equation} \mathbb{P}(\xi(t) \neq \bar{0}|\xi(0)=X)\leq 1- \prod_{i \in \mathbb{S}(X)} \left( 1- \Phi^t_i(1_n) \right) \end{equation} for any state $X$. 
\end{proposition} We should finally remark on why it is possible for the nonlinear map to converge to a unique non-origin fixed point when $\frac{\beta\lambda_{\max}(A)}{\delta} > 1$, even though the original Markov chain model always converges to the all-healthy state: \eqref{not-all-healthy} is only an upper bound on $\mathbb{P}(\xi(t) \neq \bar{0}|\xi(0)=X)$. In other words, if the origin is globally stable in the epidemic map $\Phi$, we can infer that the Markov chain model mixes fast. However, if the origin in the epidemic map is unstable, we cannot infer anything about the mixing time. \subsection{Generalized Contact Model} In this section, we generalize the contact model. In the previous model, everyone had the same recovery rate $\delta$ and infection rate $\beta$. One of the main results was that the epidemic dies out fast if the largest eigenvalue of $M=(1-\delta)I_n + \beta A $ is smaller than $1$. $M$ is defined by $\beta$, the infection rate, $\delta$, the recovery rate, and $A$, the adjacency matrix; in other words, $M$ summarizes the contact model. To model an epidemic spread where each node has its own infection and recovery rates, we define the generalized infection matrix. Let $M=(m_{i,j})$ be the generalized infection matrix, where $m_{i,j} \in [0,1]$ represents the probability that $i$ is infected at time $t+1$ when $j$ is the only infected node at time $t$. In this setting, each diagonal entry $m_{i,i}$ represents a self-infection rate. In other words, $1-m_{i,i}$ is the recovery rate of node $i$, and $m_{i,i}$ is the probability that $i$ stays infected when there are no other infected nodes in the network. We again assume that, conditioned on the current state $\xi(t)$, the next states of the nodes are independent. 
More precisely, for any two state vectors $X,Y \in \{0,1\}^n$, \begin{equation} \mathbb{P}(\xi(t+1)=Y|\xi(t)=X) = \prod_{i=1}^n \mathbb{P}(\xi_i(t+1)=Y_i|\xi(t)=X) \end{equation} The transition probabilities from a given state are defined by $M$: \begin{empheq}[box=\fbox]{align} &\mathbb{P}(\xi_i(t+1)=Y_i|\xi(t)=X) \nonumber \\ &= \left\{ \begin{array}{rl} \displaystyle \prod_{j \in \mathbb{S}(X)} (1-m_{i,j}) & \text{if } Y_i=0,\\ \displaystyle 1-\prod_{j \in \mathbb{S}(X)} (1-m_{i,j}) & \text{if } Y_i=1.\end{array} \right. \end{empheq} We define the transition matrix $S^{(M)} \in \mathbb{R}^{\{0,1\}^n \times \{0,1\}^n}$ by $S^{(M)}_{X,Y} = \mathbb{P}(\xi(t+1)=Y|\xi(t)=X)$, as in the equation above. For two probability distributions $\mu$ and $\mu'$ defined on $\{0,1\}^n$, $\mu \leq_{st} \mu'$ is equivalent to the statement that all the entries of $(\mu-\mu')R$ are non-negative. Lemma~\ref{Lem:RinverseSR} is also true for $S^{(M)}$: we can check that $(R^{-1}S^{(M)}R)_{X,Z} = S^{(M^T)}_{\neg Z, \neg X} \geq 0$, where $M^T$ is the transpose of $M$. Hence $S^{(M)}$ is an order-preserving map under $\leq_{st}$ by Lemma~\ref{Lem:order}. The epidemic map associated with $M$, $\displaystyle \Phi^{(M)}: [0,1]^n \to [0,1]^n$, is defined by \begin{empheq}[box=\fbox]{equation} \displaystyle \Phi^{(M)}_i(x)=1- \prod_{j=1}^n (1-m_{i,j}x_j) \end{empheq} and $\displaystyle \Phi^{(M)} = (\displaystyle \Phi^{(M)}_1 , \displaystyle \Phi^{(M)}_2, \cdots, \displaystyle \Phi^{(M)}_n)$. $M$ is the Jacobian matrix of $\displaystyle \Phi^{(M)}(\cdot)$ at the origin, and it gives an upper bound, i.e. $\Phi^{(M)}(x) \preceq Mx$. The origin is the unique fixed point, and it is globally stable, if the largest eigenvalue of $M$ is smaller than $1$. The map also has a unique nontrivial fixed point, which is globally stable, if the largest eigenvalue of $M$ is greater than $1$. 
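A quick numerical illustration of the bound $\Phi^{(M)}(x) \preceq Mx$ (the generalized infection matrix below is a hypothetical choice, not from the paper):

```python
def phi_M(x, M):
    """Generalized epidemic map: Phi^{(M)}_i(x) = 1 - prod_j (1 - m_ij x_j)."""
    n = len(x)
    out = []
    for i in range(n):
        prod = 1.0
        for j in range(n):
            prod *= 1.0 - M[i][j] * x[j]
        out.append(1.0 - prod)
    return out

# Hypothetical per-node rates: m_ii = self-infection rate (1 - recovery rate),
# off-diagonal entries = pairwise infection probabilities.
M = [[0.6, 0.2, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.3, 0.7]]
x = [0.4, 0.9, 0.2]
Mx = [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]
assert all(a <= b + 1e-12 for a, b in zip(phi_M(x, M), Mx))
```

The bound follows from the elementary inequality $1-\prod_j(1-a_j) \leq \sum_j a_j$ for $a_j \in [0,1]$.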
As in Theorem~\ref{Thm:upperboundmt} and Lemma~\ref{Lem:uofr}, $\lambda_{\max}(M)<1$ guarantees that the mixing time of the Markov chain whose transition matrix is $S^{(M)}$ has the upper bound $\displaystyle t_{mix}(\epsilon) \leq \frac{\log \frac{n}{\epsilon}}{-\log \| M \|}$, i.e. the mixing time is $O(\log n)$. \subsection{Immune-Admitting Model}\label{sec:immune-admitting} In this section, we study the immune-admitting model. The model is the same as that of the previous section, except that in a single time interval a node cannot go from infected to healthy and back to infected. In other words, a node does not get infected by its neighbors if it has just recovered from the disease. To summarize this, \begin{empheq}[box=\mbox]{align} \tikzmark{C} &\mathbb{P}(\xi_i(t+1)=Y_i|\xi(t)=X) \nonumber \\ &= \left\{ \begin{array}{rl} (1-\beta)^{m_i} & \text{if } (X_i,Y_i)=(0,0), \: |N_i \cap \mathbb{S}(X)| = {m_i},\\ 1- (1-\beta)^{m_i} & \text{if } (X_i,Y_i)=(0,1), \: |N_i \cap \mathbb{S}(X)| = {m_i},\\ \delta & \text{if } (X_i,Y_i)=(1,0), \\ 1-\delta & \text{if } (X_i,Y_i)=(1,1). \end{array} \right.\tikzmark{D} \label{Eq:immuneadmitting} \end{empheq} \begin{tikzpicture}[remember picture,overlay] \draw ([shift={(-.2em,2.5ex)}]C) rectangle ([shift={(-0.7em,-5.5ex)}]D); \end{tikzpicture} The transition matrix is defined in a similar way. In this model, the probability that a node goes from infected to healthy is $\delta$, which is larger than the $\delta(1-\beta)^{m_i}$ of the immune-not-admitting model described in \eqref{Eq:noimmune}. Roughly speaking, the immune-admitting model is more likely to reach the steady state than the immune-not-admitting model. The mixing time of this model is also $O(\log n)$. Most of the formal proof is very similar to the one for the immune-not-admitting model, and we omit it for the sake of brevity. 
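For concreteness, the bound $t_{mix}(\epsilon) \leq \log(n/\epsilon)/(-\log\|M\|)$ can be evaluated on a small example. In this sketch (hypothetical parameters, not from the paper), $A$ is the 3-node path graph, whose largest eigenvalue is $\sqrt{2}$, and $\|M\|$ is taken as the spectral norm, which equals $\lambda_{\max}(M)$ for this symmetric $M$:

```python
import math

n, beta, delta = 3, 0.1, 0.5
lam_A = math.sqrt(2)                  # lambda_max of the path graph on 3 nodes
norm_M = (1 - delta) + beta * lam_A   # ||M|| for M = (1-delta)I + beta*A
assert norm_M < 1                     # below the threshold

eps = 1e-3
t_bound = math.log(n / eps) / (-math.log(norm_M))
assert t_bound < 25                   # a handful of steps already suffice
```

So even for a total-variation tolerance of $10^{-3}$, the bound certifies mixing in fewer than 25 steps on this toy instance.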
An epidemic map of the immune-admitting model can be studied as well, which is defined as \begin{empheq}[box=\fbox]{equation} {\widetilde{\Phi}}_i(x) = (1-\delta)x_i + ( 1- x_i)\left( 1 - \prod_{j \in N_i} (1-\beta x_j) \right) \label{Eq:Phieq} \end{empheq} ${\widetilde{\Phi}}:[0,1]^n \to [0,1]^n$ of \eqref{Eq:Phieq} has properties similar to those of $\Phi(\cdot)$ in \eqref{Eq:wtphieq}. ${\widetilde{\Phi}}(\cdot)$ and $\Phi(\cdot)$ have the same Jacobian matrix at the origin, which provides a linear upper bound on both nonlinear epidemic maps. We adapt the analysis of $\Phi(\cdot)$ to $\widetilde{\Phi}(\cdot)$ here. We represent $\widetilde{\Phi}(\cdot)$ using $\Xi(\cdot)$ and $\omega(\cdot)$ as we did in \eqref{Eq:seq3}. We can write \begin{equation} {\widetilde{\Phi}}_i(x) = x_i + (1-x_i) ( \Xi_i(x) - \omega(x_i)) \label{Eq:Phiomegaform} \end{equation} where $\displaystyle \Xi_i(x)=\left( 1 - \prod_{j \in N_i} (1-\beta x_j) \right)$ and $\displaystyle \omega(s) = \frac{\delta s}{1-s}$. It is trivial to check that ${\Xi}(\cdot)$ and $\omega(\cdot)$ satisfy all the conditions (a)--(f). Therefore we can apply Theorem~\ref{Thm:existence} to show that ${\widetilde{\Phi}}(\cdot)$ has a unique nontrivial fixed point if the largest eigenvalue of the Jacobian matrix at the origin is greater than $1$. The origin, the trivial fixed point of the system, is globally stable if $\lambda_{max}((1-\delta)I_n + \beta A ) < 1 $. The next issue is whether the unique nontrivial fixed point is also stable if $\lambda_{max}((1-\delta)I_n + \beta A ) > 1 $. This is not true in general for ${\widetilde{\Phi}}(\cdot)$. The following is an example of an unstable nontrivial fixed point. \begin{equation} \mathbf{A} = \left( \begin{array}{ccc} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{array} \right) \qquad \delta=0.9 \quad \beta=0.9 \label{Eq:unstable} \end{equation} The nontrivial fixed point of the system above is $x^*= (0.286, 0.222, 0.222)^\mathrm{T}$. 
The Jacobian matrix of ${\widetilde{\Phi}}$ at $x^*$ is \begin{equation} J_{{\widetilde{\Phi}}(x^*)} = \left( \begin{array}{ccc} -0.260 & 0.514 & 0.514 \\ 0.700 & -0.157 & 0 \\ 0.700 & 0 & -0.157 \end{array} \right) \end{equation} The eigenvalue of largest absolute value of the above Jacobian matrix is $-1.059$, whose absolute value is greater than $1$. Accordingly, $P(t)={\widetilde{\Phi}}^t(P(0))$ converges to a cycle rather than to the nontrivial fixed point $x^*$. The biggest difference between \eqref{Eq:Phieq} and \eqref{Eq:wtphieq} is that $\displaystyle \frac{ \partial \Phi_i }{ \partial x_j} \geq 0$ for any $i,j \in \{1,\cdots, n\}$ in \eqref{Eq:wtphieq}, while this does not hold for ${\widetilde{\Phi}}(\cdot)$ in \eqref{Eq:Phieq}. The proof of Theorem~\ref{Thm:GSofNFP} could be applied to ${\widetilde{\Phi}}(\cdot)$ if $\displaystyle \frac{ \partial {\widetilde{\Phi}}_i }{ \partial x_j} \geq 0$ held for all $i,j \in \{1,\cdots, n\}$ in \eqref{Eq:Phieq}. Even though the nontrivial fixed point of ${\widetilde{\Phi}}(\cdot)$ is not stable in general, we shall show that it is stable with high probability for a family of random graphs. To study the stability of the nontrivial fixed point with high probability, we begin with the following lemma, which shows that the Jacobian matrix at $x^*$ has no eigenvalue greater than or equal to unity, for any values of $\beta$ and $\delta$ and for any connected graph. \begin{lemma}\label{Lem:jacob} Suppose that $x^*$ is a unique nontrivial fixed point of ${\widetilde{\Phi}} : [0,1]^n \to [0,1]^n$ with $\Xi$ satisfying the conditions (a), (b) and (c) when $\lambda_{max}((1-\delta)I_n + \beta A ) > 1 $. Then the Jacobian matrix of ${\widetilde{\Phi}}$ at $x^*$ has no eigenvalue greater than or equal to $1$. \end{lemma} For the proof see pages 64--66 of \cite{ahn2014random}. 
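The counterexample \eqref{Eq:unstable} is easy to reproduce numerically. The following sketch (plain Python, not from the paper's code) iterates $\widetilde{\Phi}$ on the three-node star graph with $\beta=\delta=0.9$ and confirms that the orbit does not settle at the stated fixed point $x^*$.

```python
# Numerical sketch of the counterexample: the map Phi~ of Eq. (Phieq)
# on the 3-node star graph with beta = delta = 0.9; x_star is the
# nontrivial fixed point quoted in the text (three decimals).

BETA = DELTA = 0.9
A = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]

def phi_tilde(x):
    """One step of the immune-admitting mean-field map Phi~."""
    out = []
    for i in range(3):
        no_infection = 1.0
        for j in range(3):
            if A[i][j]:
                no_infection *= 1.0 - BETA * x[j]
        out.append((1.0 - DELTA) * x[i] + (1.0 - x[i]) * (1.0 - no_infection))
    return out

def dist(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

x_star = [0.286, 0.222, 0.222]
assert dist(phi_tilde(x_star), x_star) < 2e-3   # fixed point up to rounding

x = [0.5, 0.5, 0.5]
for _ in range(5000):
    x = phi_tilde(x)
# The orbit does not converge to x*: it keeps oscillating on a cycle.
assert dist(x, x_star) > 1e-3
assert dist(phi_tilde(x), x) > 1e-3
```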
Even though $J_{\widetilde{\Phi}}$ has no eigenvalue which is greater than or equal to $1$, the fixed point $x^*$ can still be unstable if some eigenvalue is greater than or equal to $1$ in absolute value. We now show that $x^*$ is stable with high probability when we consider a certain family of random graphs and the number of vertices is large. We will later show that this family of random graphs includes Erd\H{o}s-R\'enyi graphs. We fix $\Xi_i(x)=\left( 1 - \prod_{j \in N_i} (1-\beta x_j) \right)$ from now on. Its partial derivatives are \begin{equation} \frac{ \partial \Xi_i }{ \partial x_j} = \beta \prod_{k \in N_i\setminus \{j\}} (1-\beta x_k) = \beta \frac{ 1 - \Xi_i}{1-\beta x_j} \text{ if } i \in N_j\text{ in }G \end{equation} so that its Jacobian is \begin{equation} J_\Xi = \beta \,\text{diag}(1_n - \Xi) A \,\text{diag}(1_n - \beta x)^{-1} \end{equation} \begin{theorem} Suppose that $G^{(n)}$ is a random graph with $n$ vertices and let $d^{(n)}_{\min}$ and $d^{(n)}_{\max}$ denote the minimum and maximum degree of $G^{(n)}$. If $\mathrm{Pr} [(d^{(n)}_{\min})^2 > a \cdot d^{(n)}_{\max} ]$ goes to $1$ as $n$ goes to infinity for any fixed $a > 0$, then the system is unstable at the origin and locally stable at the nontrivial fixed point $x^*$ with high probability as $n$ grows, for any fixed $\beta$ and $\delta$. \label{lem:rg} \end{theorem} For the proof see pages 66--68 of \cite{ahn2014random}. \begin{figure*}[thpb] \centering \includegraphics[width=1.4\columnwidth]{comp_2.png} \caption{Summary and comparison of the results for SIS, SIRS, and SIV models, as a function of $\frac{\beta\lambda_{max}(A)}{\delta}$. MC stands for the Markov chain model. MFA stands for the mean-field approximation, i.e.\ the nonlinear model.} \label{fig:comp} \end{figure*} We can think of several random graph models that satisfy the condition of Theorem \ref{lem:rg}. 
For example, if the random graph has uniform degree $d$, then the minimum and maximum degrees are identical, and as long as the degree grows with $n$, the ratio $\displaystyle \frac{d_{\min}^2}{d_{\max}} = d$ grows with $n$ and exceeds any fixed $a$ with high probability. Similarly, for random graphs where the degree distribution of each node is identical and the degree distribution ``concentrates'', we can expect the maximum and minimum degrees to be proportional to the expected degree, in which case $\displaystyle \frac{d_{\min}^2}{d_{\max}}$ grows if the expected degree increases unboundedly with $n$. The Erd\H{o}s-R\'enyi random graph $G^{(n)}=G(n,p(n))$ has identical degree distributions. \begin{corollary} Consider an Erd\H{o}s-R\'enyi random graph $G^{(n)}=G(n,p(n))$ with $\displaystyle p(n)= c \frac{\log n}{n}$ where $c > 1$ is a constant. Then $\widetilde{\Phi}(\cdot)$ is locally unstable at the origin and has a locally stable nontrivial fixed point with high probability for any fixed $\beta$ and $\delta$. \end{corollary} For the proof see page 69 of \cite{ahn2014random}. Since $\displaystyle p= c \frac{\log n}{n}$ with $c=1$ is also the threshold for connectivity, we can say that connected Erd\H{o}s-R\'enyi graphs have a nontrivial stable fixed point with high probability. The random geometric graph $G^{(n)}=G(n,r(n))$ also has identical degree distributions if each node is placed uniformly at random. As studied in \cite{PenroseRandomGeometricGraph}, such random graphs have maximum and minimum degrees that are proportional to the expected degree with high probability if $r(n)$ is larger than the threshold of connectivity. Like Erd\H{o}s-R\'enyi graphs, they have a nontrivial stable fixed point with high probability if the degree grows with $n$. \begin{comment} \subsection{Continuous-Time Model}\label{sec:continuous} The discrete-time model may give an unstable nontrivial fixed point as in \eqref{Eq:unstable}. 
However, in the continuous-time model the nontrivial fixed point is globally stable if $\Xi$ and $\omega$ satisfy all the properties from (a) to (f). Consider the differential equation \begin{align} \frac{dx_i}{dt} &= \frac{1}{\Delta t} \left( (1-x_i)\Xi_i(x_1, \cdots, x_n) - \delta x_i \right)\notag\\ &= \frac{1-x_i}{\Delta t} \left( \Xi_i(x) - \omega(x_i) \right) \label{Eq:cseq1} \end{align} Then, \eqref{Eq:Phiomegaform} is just the forward Euler discretization of \eqref{Eq:cseq1} with time step $\Delta t$. The origin is a trivial equilibrium point of \eqref{Eq:cseq1}. The origin is unstable if and only if $-\delta I_n+\beta A$ has an eigenvalue in the right half plane, i.e., one or more eigenvalues of $-\delta I_n+\beta A$ have positive real parts. Since $A$ is symmetric and, by the Perron-Frobenius theorem, its largest eigenvalue in absolute value is positive, instability of the origin is equivalent to $\lambda_{max}((1-\delta)I_n + \beta A ) > 1 $. By Theorem~\ref{Thm:existence}, we know that \eqref{Eq:cseq1} has a nontrivial fixed point $x^*$ under this condition. \begin{theorem} Suppose that $\lambda_{max}((1-\delta)I_n + \beta A ) > 1$. Then $x(t)$ defined by \eqref{Eq:cseq1} converges to $x^*$ as $t$ goes to infinity unless $x(0)=0_n$. \end{theorem} \begin{proof} We exhibit a Lyapunov function that is strictly decreasing for all initial points except $x(0) = 0_n$: \begin{equation} V(x)= \max_{1 \leq i \leq n} \left\{ \frac{| x_i - x_i^* |}{x_i^*} \right\} \end{equation} It is obvious that $V(x^*)=0$ and $V(x) > 0$ for all $x \in [0,1]^n \setminus \{x^*\}$. Suppose that $V(x)=r >0$. Then, $x_j \in [(1-r)x_j^*, (1+r)x_j^*]$ for all $j \in \{1, \cdots, n \}$. There is $i$ such that $x_i = (1-r)x_i^*$ or $x_i = (1+r)x_i^*$. In the case $x_i = (1+r)x_i^*$, since $0_n \preceq \max(x, x^*) - x^* \preceq rx^*$, we obtain the following by Lemma~\ref{Lem:ccv} with $\displaystyle u=x^* - \frac{\max(x, x^*) - x^*}{r}, v=\frac{\max(x, x^*) - x^*}{r}$. 
\begin{align} \Xi_i(x) &\leq \Xi_i(\max(x, x^*))=\Xi_i(u+(1+r)v) \\ &= (1+r) \left( \frac{h_{i,u,v}(1+r) - h_{i,u,v}(0)}{1+r} \right) + h_{i,u,v}(0) \\ &\leq (1+r) \left(h_{i,u,v}(1) - h_{i,u,v}(0) \right) + h_{i,u,v}(0) \\ &=(1+r) \Xi_i(u+v) - r \Xi_i(u) \\ &\leq (1+r)\Xi_i(u+v) = (1+r) \Xi_i(x^*) \end{align} The bound above is needed to prove the following inequality: \begin{align} \frac{dx_i}{dt} &= \frac{1-x_i}{\Delta t} \left( \Xi_i(x) - \omega(x_i) \right) \\ &=\frac{1-(1+r)x_i^*}{\Delta t} \left( \Xi_i(x) - \omega((1+r)x_i^*) \right) \\ &\leq \frac{1-(1+r)x_i^*}{\Delta t} \left( (1+r)\Xi_i(x^*) - \omega((1+r)x_i^*) \right) \\ &<\frac{(1+r)(1-(1+r)x_i^*)}{\Delta t} \left( \Xi_i(x^*) - \omega(x_i^*) \right) = 0 \end{align} Hence, $\displaystyle \frac{|x_i(t) - x_i^*|}{x_i^*}=\frac{x_i(t) - x_i^*}{x_i^*}$ is strictly decreasing. Otherwise, $x_i = (1-r)x_i^*$. If $r<1$, \begin{align} \frac{dx_i}{dt} &= \frac{1-x_i}{\Delta t} \left( \Xi_i(x) - \omega(x_i) \right)\\ &=\frac{1-(1-r)x_i^*}{\Delta t} \left( \Xi_i(x) - \omega((1-r)x_i^*) \right) \\ &\geq \frac{1-(1-r)x_i^*}{\Delta t} \left( \Xi_i((1-r)x^*) - \omega((1-r)x_i^*) \right) \\ &\geq \frac{1-(1-r)x_i^*}{\Delta t} \left( (1-r)\Xi_i(x^*) - \omega((1-r)x_i^*) \right) \\ &>\frac{(1-r)(1-(1-r)x_i^*)}{\Delta t} \left( \Xi_i(x^*) - \omega(x_i^*) \right) = 0 \end{align} so $\displaystyle \frac{|x_i(t) - x_i^*|}{x_i^*}=\frac{x_i^* - x_i(t)}{x_i^*}$ is strictly decreasing. If $r=1$, all the entries of $x(t)$ are positive after a short time unless $x(t)$ is the origin. Since $\displaystyle \frac{d}{dt} \left( \frac{|x_i(t) - x_i^*|}{x_i^*} \right) < 0$ for all $i$ such that $\displaystyle \frac{|x_i(t) - x_i^*|}{x_i^*} = V(x)$, we have $\displaystyle \frac{d}{dt} V(x(t)) < 0$. Thus $V(x)$ is a Lyapunov function of this system, which completes the proof. 
\end{proof} We finally remark that, even though the continuous-time and discrete-time models are related through the forward Euler method, so that the discrete-time model can be viewed as a discretization of the continuous-time model, this does not mean that the continuous-time model is the more faithful description of the true underlying epidemic spread. There are certain applications, such as the interaction of humans over a social network, where the discrete-time model appears to be more appropriate. In either case, whether to use a continuous-time model or a discrete-time model (and in the latter case whether to use the immune-admitting or the immune-not-admitting model) depends on the application at hand. \end{comment} \section{SIRS Epidemics} In this section we consider the SIRS model, in which each node can be in one of the three states S, I and R. During each time epoch, nodes in the susceptible state can be infected by each of their infected neighbors, independently, with probability $\beta$ (the {\em infection rate}). Nodes that are infected can, during each such time epoch, recover with probability $\delta$ (the {\em recovery rate}) and, finally, nodes in the recovered state transition back to the susceptible state with probability $\gamma$ (the {\em immunization loss} rate). \subsection{Model Description} \subsubsection{Exact Markov Chain Model} We start again with the exact Markov chain model. The state of node $i$ at time $t$, denoted by $\xi_i(t)$, can take one of the following values: $0$ for \emph{Susceptible} (or healthy), $1$ for \emph{Infected} (or infectious), and $2$ for \emph{Recovered}, i.e. $\xi_i(t) \in \left\{0,1,2\right\}$. Fig. \ref{fig} shows the three states and the corresponding transitions. $\beta$ is the transmission probability on each link, $\delta$ is the healing probability, and $\gamma$ is the immunization loss probability. 
\begin{figure}[htpb] \centering \includegraphics[width=0.7\columnwidth]{model.png} \caption{State diagram of a single node in the SIRS model, and the transition rates. Wavy arrow represents exogenous (network-based) transition. $S$ is healthy but can get infected, $I$ is infected, $R$ is healthy but cannot get infected.} \label{fig} \end{figure} The state of the whole network can be represented as: \begin{equation} \xi(t)=(\xi_1(t),\dots,\xi_n(t)) \in \left\{0,1,2\right\}^n \end{equation} Furthermore, let $S$ denote the $3^n\times 3^n$ state transition matrix of the Markov chain, with elements of the form: \begin{align}\label{MC1} S_{X,Y}&= \mathbb{P}\left(\xi(t+1)=Y \mid \xi(t) = X\right)\notag\\ &= \prod_{i=1}^n \mathbb{P}\left(\xi_i(t+1)=Y_i \mid \xi(t) = X\right) , \end{align} due to the independence of the next states given the current state. \begin{empheq}[box=\fbox]{multline}\label{MC2} \mathbb{P}\left(\xi_i(t+1)=Y_i \mid \xi(t) = X\right)=\\ \begin{cases} (1-\beta)^{m_i},& \text{if } (X_i,Y_i)=(0,0)\\ 1-(1-\beta)^{m_i},& \text{if } (X_i,Y_i)=(0,1)\\ 0,& \text{if } (X_i,Y_i)=(0,2)\\ 0,& \text{if } (X_i,Y_i)=(1,0)\\ 1-\delta,& \text{if } (X_i,Y_i)=(1,1)\\ \delta,& \text{if } (X_i,Y_i)=(1,2)\\ \gamma,& \text{if } (X_i,Y_i)=(2,0)\\ 0,& \text{if } (X_i,Y_i)=(2,1)\\ 1-\gamma,& \text{if } (X_i,Y_i)=(2,2)\\ \end{cases} , \end{empheq} where $m_i=\left\vert{\left\{ {j\in N_i} \mid X_j=1\right\}}\right\vert=\left\vert{N_i\cap I(t)}\right\vert$. The sets of susceptible, infected, and recovered nodes at time $t$ are denoted by $S(t)$, $I(t)$, and $R(t)$, respectively. We denote the marginal probabilities of node $i$ by $p_{R,i}(t)$ and $p_{I,i}(t)$: the probability that \emph{node $i$ is in state $R$ at time $t$} and the probability that \emph{node $i$ is in state $I$ at time $t$}, respectively. Then $p_{S,i}(t)$ follows immediately as $1-p_{R,i}(t)-p_{I,i}(t)$. 
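For concreteness, the per-node SIRS kernel \eqref{MC2} can be tabulated as in the sketch below (plain Python, not part of the paper; parameter values are illustrative). Each row sums to one, and the direct transitions $S\to R$, $I\to S$, and $R\to I$ are forbidden.

```python
# Illustrative tabulation of the per-node SIRS transition probabilities;
# states: 0 = S, 1 = I, 2 = R, and m is the number of infected neighbors.

def sirs_kernel(beta, delta, gamma, m):
    return {
        0: {0: (1 - beta) ** m, 1: 1 - (1 - beta) ** m, 2: 0.0},
        1: {0: 0.0, 1: 1 - delta, 2: delta},
        2: {0: gamma, 1: 0.0, 2: 1 - gamma},
    }

K = sirs_kernel(beta=0.2, delta=0.5, gamma=0.1, m=3)

# Each row is a probability distribution over the next state.
for row in K.values():
    assert abs(sum(row.values()) - 1.0) < 1e-12

# Direct transitions S->R, I->S, and R->I are forbidden in this model.
assert K[0][2] == 0.0 and K[1][0] == 0.0 and K[2][1] == 0.0
```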
Based on the above-mentioned transition rates, we can calculate the two marginal probabilities as: \begin{align} p&_{R,i}(t+1) =(1-\gamma)p_{R,i}(t)+\delta p_{I,i}(t) ,\label{exact_R}\\ p&_{I,i}(t+1) =(1-\delta)p_{I,i}(t)\notag\\ &+\mathbb{E}_{ |\xi_i(t)=0}\bigg[1-\prod_{j\in N_i}(1-\beta\mathds{1}_{\xi_j(t)=1})\bigg](1-p_{R,i}(t)-p_{I,i}(t)) .\label{exact_I} \end{align} As mentioned, the recursion for $p_{S,i}(t+1)$ can be found from $p_{S,i}(t)+p_{I,i}(t)+p_{R,i}(t)=1$. \subsubsection{Nonlinear Model} One may consider the mean-field approximation of the above marginal probabilities, which can be expressed as: \begin{empheq}[box=\fbox]{align} P&_{R,i}(t+1) =(1-\gamma)P_{R,i}(t)+\delta P_{I,i}(t) ,\label{nonlinear_R}\\ P&_{I,i}(t+1) =(1-\delta)P_{I,i}(t)+\notag\\ &\Big(1-\prod_{j\in N_i} (1-\beta P_{I,j}(t))\Big)(1-P_{R,i}(t)-P_{I,i}(t)) .\label{nonlinear_I} \end{empheq} This approximate model is in fact a nonlinear mapping with $2n$ state variables (rather than $3^n$ states). \subsubsection{Linear Model} One step further would be to approximate the preceding equations by a linear model. Linearizing Eqs. \eqref{nonlinear_R} and \eqref{nonlinear_I} around the origin results in the following mapping: \begin{align} \tilde{P}_{R,i}(t+1) &=(1-\gamma)\tilde{P}_{R,i}(t)+\delta \tilde{P}_{I,i}(t) ,\\ \tilde{P}_{I,i}(t+1) &=(1-\delta)\tilde{P}_{I,i}(t)+\beta\sum\limits_{j\in N_i} \tilde{P}_{I,j}(t) . \end{align} These equations (for all $i$) can be expressed in matrix form: \begin{empheq}[box=\fbox]{align}\label{linear} &\begin{bmatrix}\tilde{P}_R(t+1)\\\tilde{P}_I(t+1)\end{bmatrix}=M \begin{bmatrix}\tilde{P}_R(t)\\\tilde{P}_I(t)\end{bmatrix} ,\\ &\text{where}\notag\\ &M = \begin{bmatrix} (1-\gamma)I_n & \delta I_n\\ 0_{n\times n} & (1-\delta)I_n+\beta A \end{bmatrix} . \end{empheq} \subsection{Analysis of the Nonlinear Model} \subsubsection{Epidemic Extinction: $\frac{\beta\lambda_{\max}(A)}{\delta}<1$} The origin is trivially a fixed point of both the linear (Eq. 
\ref{linear}) and nonlinear (Eqs. \ref{nonlinear_R} and \ref{nonlinear_I}) mappings. In fact, at this fixed point we have: $$[P_{R,1}(t), \dots, P_{R,n}(t), P_{I,1}(t), \dots, P_{I,n}(t)]^T= 0_{2n} ,$$ which means all the nodes are susceptible (healthy) with probability 1, and the system stays there permanently, because there are no infected nodes anymore. Clearly, if $\|M\|<1$, then the origin is globally stable for the linear model (\ref{linear}) and also locally stable for the nonlinear model (\ref{nonlinear_I}, \ref{nonlinear_R}). The eigenvalues of the matrix $M$ consist of the eigenvalues of $(1-\gamma)I_n$ and the eigenvalues of $(1-\delta)I_n+\beta A$. Noticing that the eigenvalues of $(1-\gamma)I_n$ are always less than one, it can be concluded that $\|M\|<1$ if the largest eigenvalue of $(1-\delta)I_n+\beta A$ is less than one. In addition, the linear model (\ref{linear}) is an upper bound on the nonlinear model (\ref{nonlinear_R}, \ref{nonlinear_I}), i.e. \begin{multline} P_{I,i}(t+1) =(1-\delta)P_{I,i}(t)\\ +\Big(1-\prod_{j\in N_i} (1-\beta P_{I,j}(t))\Big)(1-P_{R,i}(t)-P_{I,i}(t))\\ \leq (1-\delta)P_{I,i}(t)+\beta\sum\limits_{j\in N_i} P_{I,j} . \end{multline} We thus conclude the following. \begin{proposition} If $\frac{\beta\lambda_{\max}(A)}{\delta}<1$, then the origin is a globally stable fixed point for both the linear model (\ref{linear}) and the nonlinear model (\ref{nonlinear_R}, \ref{nonlinear_I}). \end{proposition} \subsubsection{Epidemic Spread: $\frac{\beta\lambda_{\max}(A)}{\delta}>1$} \paragraph{Existence and Uniqueness of Nontrivial Fixed Point}\label{sec:SIRS_secondfixedpoint} The trivial fixed point of the mappings, the origin, is not stable if $(1-\delta)+\beta\lambda_{\max}(A)>1$. We show that there exists a unique nontrivial fixed point for the SIRS model when $(1-\delta)+\beta\lambda_{\max}(A)>1$. By rearranging Eq. 
\eqref{nonlinear_I}, we can rewrite the system equations as: \begin{empheq}[left=\empheqlbrace]{align} P_{R,i}(t+1)=&(1-\gamma)P_{R,i}(t)+\delta P_{I,i}(t)\label{nonlinear_R_new}\\ P_{I,i}(t+1)=&P_{I,i}(t)+(1-P_{R,i}(t)-P_{I,i}(t))\notag\\ &\cdot\big(\Xi_i(P_I(t))-\omega(P_{R,i}(t),P_{I,i}(t))\big) ,\label{nonlinear_I_new} \end{empheq} where $\Xi_i \colon [0,1]^n \to [0,1]$ and $\omega \colon [0,1]^2 \to \mathbb{R}^+$ are the following maps associated with network $G$: \begin{equation} \Xi_i(P_I(t))=1-\prod_{j\in N_i} (1-\beta P_{I,j}(t)) , \end{equation} \begin{equation} \omega(P_{R,i}(t),P_{I,i}(t))=\frac{\delta P_{I,i}(t)}{1-P_{R,i}(t)-P_{I,i}(t)} . \end{equation} It can be verified that the maps defined above enjoy the following properties: \begin{enumerate}[label=(\alph*)] \item $\Xi_i(0_n)=0$\\ $\frac{\partial \Xi_i(P_I)}{\partial P_{I,j}}\bigg|_{0_n}=\beta A_{i,j}$ \item $\begin{cases}\frac{\partial \Xi_i(P_I)}{\partial P_{I,j}}>0 &\mbox{if } i\in N_j\\ \frac{\partial \Xi_i(P_I)}{\partial P_{I,j}}=0 &\mbox{if } i\not\in N_j\end{cases}$ \item $\frac{\partial^2\Xi_i(P_I)}{\partial P_{I,j} \partial P_{I,k}}\leq 0 \quad \forall i,j,k \in \{1,\dots,n\}$ \item $\omega(0,0)=0$\\ $\frac{\partial \omega(P_{R,i},P_{I,i})}{\partial P_{I,i}}\bigg|_{(0,0)}=\delta$ \item $\frac{\partial \omega(P_{R,i},P_{I,i})}{\partial P_{I,i}}>0 \quad \forall P_{I,i}\in(0,1)$ \item $\frac{\omega(P_{R,i},P_{I,i})}{P_{I,i}}$ is an increasing function of both $P_{R,i}$ and $P_{I,i}$. More specifically, $\frac{\omega(s_1,t_1)}{t_1}<\frac{\omega(s_2,t_2)}{t_2}$ if $s_1<s_2$ and $t_1<t_2$. \end{enumerate} The main result of this section is as follows. \begin{theorem}\label{thm:SIRS_nontrivial} If $\frac{\beta\lambda_{\max}(A)}{\delta}>1$, the nonlinear map (\ref{nonlinear_R}, \ref{nonlinear_I}), or equivalently (\ref{nonlinear_R_new}, \ref{nonlinear_I_new}), has a unique nontrivial fixed point. 
\end{theorem} \paragraph{Stability of the Nontrivial Fixed Point} Since the trivial fixed point is globally stable when $\frac{\beta\lambda_{\max}(A)}{\delta}<1$, the existence of a unique second fixed point when $\frac{\beta\lambda_{\max}(A)}{\delta}>1$ raises the question of whether it is also stable. It turns out that it is not stable in general. In fact, as in the immune-admitting SIS model (Section~\ref{sec:immune-admitting}), we can find simple examples in which the system converges to a cycle rather than to the unique second fixed point. Nevertheless, as for the immune-admitting SIS model, this fixed point can be shown to be stable with high probability for some general families of random graphs. \subsection{Analysis of the Exact Markov Chain} Since the graph $G$ is connected and the Markov chain has an absorbing state $\xi=(0,0,\dots,0) = \bar{0}$, the unique stationary distribution is: $$\pi=e_{\bar{0}} ,$$ where $e_X \in \mathbb{R}^{3^n}$ denotes the probability vector with all elements zero, except the $X$-th one. This coincides with the fixed point of the mappings; however, the main concern is whether the Markov chain converges to its stationary distribution within a ``reasonable amount of time,'' or not. We show that when $\frac{\beta\lambda_{\max}(A)}{\delta}<1$, not only are the linear and nonlinear maps globally stable at the origin, but also the mixing time of the Markov chain is $O(\log n )$, meaning that the Markov chain mixes fast and the epidemic dies out. Let the row vector $\mu(t)\in \mathbb{R}^{3^n}$ be the probability vector of the Markov chain. The relationship between these probabilities ($\mu_X(t)$) and the marginal probabilities ($p_{R,i}(t)$, $p_{I,i}(t)$) is of the following form: $p_{R,i}(t)=\sum_{X_i=2} \mu_X(t)$, $p_{I,i}(t)=\sum_{X_i=1} \mu_X(t)$. We express all these terms as well as $p_0=\sum \mu_X(t)=1$ in the form of a column vector $p(t)=[p_0(t),p_1(t),\dots,p_{2n}(t)]^T$, i.e. 
\begin{equation} p(t)=\begin{bmatrix} 1, \vline p_{R,1}(t), \dots,p_{R,n}(t), \vline p_{I,1}(t), \dots, p_{I,n}(t)\end{bmatrix}^T . \end{equation} The matrix $B \in \mathbb{R}^{3^n\times (2n+1)}$, which relates the ``observable data'' $p(t)$ and the ``hidden complete data'' $\mu(t)$, can be expressed as: \begin{equation} B_{X,k}= \begin{cases} 1,& \text{if } k=0\\ \hdashline 0,& \text{if } k\in\left\{1,2,\dots,n\right\} \text{ and } X_k=0\\ 0,& \text{if } k\in\left\{1,2,\dots,n\right\} \text{ and } X_k=1\\ 1,& \text{if } k\in\left\{1,2,\dots,n\right\} \text{ and } X_k=2\\ \hdashline 0,& \text{if } k\in\left\{n+1,n+2,\dots,2n\right\} \text{ and } X_{k-n}=0\\ 1,& \text{if } k\in\left\{n+1,n+2,\dots,2n\right\} \text{ and } X_{k-n}=1\\ 0,& \text{if } k\in\left\{n+1,n+2,\dots,2n\right\} \text{ and } X_{k-n}=2\\ \end{cases} \end{equation} Now we can proceed to the main theorem of this section. \begin{theorem}\label{thm_mixing} If $\frac{\beta\lambda_{\max}(A)}{\delta}<1$, the mixing time of the Markov chain whose transition matrix $S$ is described by Eqs. \eqref{MC1} and \eqref{MC2} is $O(\log n)$. \end{theorem} \section{SIV Epidemics} In this section we consider the effect of vaccination by incorporating direct immunization into the model studied in the previous sections. In other words, the transition from $S$ to $R$ is also permitted now (see Fig. \ref{fig2}). This class of processes is often referred to as SIV (Susceptible-Infected-Vaccinated) epidemics. Depending on the value of $\gamma$, this model can represent temporary ($\gamma\neq 0$) or permanent ($\gamma=0$) immunization. Moreover, based on the efficacy of the vaccine, there are two different models: infection-dominant and vaccination-dominant. \begin{figure}[thpb] \centering \includegraphics[width=0.7\columnwidth]{model2.png} \caption{State diagram of a single node in the SIRS-with-Vaccination model, and the transition rates. Wavy arrow represents exogenous (network-based) transition. 
$\theta$ represents the probability of direct immunization.} \label{fig2} \end{figure} \subsection{Infection-Dominant Model} In this case, the infection is dominant, in the sense that if a susceptible node receives both infection and vaccine at the same time, it gets infected. The elements of the state transition matrix are \begin{align}\label{MC1_id} S_{X,Y}&= \mathbb{P}\left(\xi(t+1)=Y \mid \xi(t)= X\right)\notag\\ &= \prod_{i=1}^n \mathbb{P}\left(\xi_i(t+1)=Y_i \mid \xi(t) = X\right) , \end{align} where \begin{empheq}[box=\fbox]{multline}\label{MC2_id} \mathbb{P}\left(\xi_i(t+1)=Y_i \mid \xi(t) = X\right)=\\ \begin{cases} (1-\beta)^{m_i}(1-\theta),& \text{if } (X_i,Y_i)=(0,0)\\ 1-(1-\beta)^{m_i},& \text{if } (X_i,Y_i)=(0,1)\\ (1-\beta)^{m_i}\theta,& \text{if } (X_i,Y_i)=(0,2)\\ 0,& \text{if } (X_i,Y_i)=(1,0)\\ 1-\delta,& \text{if } (X_i,Y_i)=(1,1)\\ \delta,& \text{if } (X_i,Y_i)=(1,2)\\ \gamma,& \text{if } (X_i,Y_i)=(2,0)\\ 0,& \text{if } (X_i,Y_i)=(2,1)\\ 1-\gamma,& \text{if } (X_i,Y_i)=(2,2)\\ \end{cases} , \end{empheq} and as before $m_i=\left\vert{\left\{ {j\in N_i} \mid X_j=1\right\}}\right\vert=\left\vert{N_i\cap I(t)}\right\vert$. Compared to Eq. \eqref{MC2}, the first and third cases have changed in Eq. \eqref{MC2_id}, and for $\theta=0$ the model reduces to the previous one. In this infection-dominant model the marginal probabilities are: \begin{align} p&_{R,i}(t+1) =(1-\gamma)p_{R,i}(t)+\delta p_{I,i}(t)\notag\\ &+\mathbb{E}_{ |\xi_i(t)=0}\bigg[\prod_{j\in N_i}(1-\beta\mathds{1}_{\xi_j(t)=1})\bigg]\theta(1-p_{R,i}(t)-p_{I,i}(t)) ,\label{exact_R_id}\\ p&_{I,i}(t+1) =(1-\delta)p_{I,i}(t)\notag\\ &+\mathbb{E}_{ |\xi_i(t)=0}\bigg[1-\prod_{j\in N_i}(1-\beta\mathds{1}_{\xi_j(t)=1})\bigg](1-p_{R,i}(t)-p_{I,i}(t)) .\label{exact_I_id} \end{align} The steady-state behavior in the presence of immunization is rather different from the SIS/SIRS cases, in which all the nodes became susceptible. 
In this model, once there is no node in the infected state, the Markov chain reduces to a simpler Markov chain in which the nodes are all decoupled. In fact, from that time on, each node has an independent transition probability between $S$ and $R$. The stationary distribution of each single node is then $P_S^* = \frac{\gamma}{\gamma+\theta}$ and $P_R^* = \frac{\theta}{\gamma+\theta}$ (Fig. \ref{steady}). In order for this reduced Markov chain to converge, we need $\gamma\theta \neq 1$, i.e., $\gamma$ and $\theta$ must not both equal $1$. The stationary distribution of each state $X$ is then: $$\pi_X = \prod_{i=1}^n (\frac{\gamma}{\gamma+\theta})^{\mathbb{I}(X_i=0)} \cdot 0^{\mathbb{I}(X_i=1)} \cdot (\frac{\theta}{\gamma+\theta})^{\mathbb{I}(X_i=2)} .$$ \begin{figure}[thpb] \centering \includegraphics[width=0.8\columnwidth]{steady_2.png} \caption{Reduced Markov chain of a single node in the steady state.} \label{steady} \end{figure} Now the nonlinear map (mean-field approximation of the Markov chain model) can be obtained as: \begin{empheq}[box=\fbox]{align} P&_{R,i}(t+1) =(1-\gamma)P_{R,i}(t)+\delta P_{I,i}(t)\notag\\ &+ \prod_{j\in N_i} (1-\beta P_{I,j}(t))\theta(1-P_{R,i}(t)-P_{I,i}(t)), \label{nonlinear_R_id}\\ P&_{I,i}(t+1) =(1-\delta)P_{I,i}(t)+\notag\\ &\Big(1-\prod_{j\in N_i} (1-\beta P_{I,j}(t))\Big)(1-P_{R,i}(t)-P_{I,i}(t)). \label{nonlinear_I_id} \end{empheq} It can be easily verified that one fixed point of this nonlinear map occurs at $P_{R,i}(t)=P_R^*$ and $P_{I,i}(t)=0$, i.e. $$\begin{bmatrix}P_{R}(t)\\ P_{I}(t)\end{bmatrix}= \begin{bmatrix} \frac{\theta}{\gamma+\theta}1_n\\ 0_n\end{bmatrix}, $$ which is nicely consistent with the steady state of the Markov chain. 
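This consistency can be checked mechanically. The sketch below (plain Python, not part of the paper; the graph and parameter values are arbitrary illustrative choices) applies one step of the map (\ref{nonlinear_R_id}, \ref{nonlinear_I_id}) to $(P_R, P_I) = (\frac{\theta}{\gamma+\theta}1_n, 0_n)$ and verifies that it is left unchanged.

```python
# Check that (P_R, P_I) = (theta/(gamma+theta) 1_n, 0_n) is a fixed point
# of the infection-dominant SIV mean-field map; graph and parameters are
# illustrative choices, not values from the paper.

beta, delta, gamma, theta = 0.3, 0.6, 0.5, 0.2
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # a 3-node path graph
n = 3

def step(PR, PI):
    PR2, PI2 = [], []
    for i in range(n):
        prod = 1.0                      # prod over neighbors of (1 - beta P_I,j)
        for j in range(n):
            if A[i][j]:
                prod *= 1.0 - beta * PI[j]
        S = 1.0 - PR[i] - PI[i]         # susceptible probability
        PR2.append((1 - gamma) * PR[i] + delta * PI[i] + prod * theta * S)
        PI2.append((1 - delta) * PI[i] + (1.0 - prod) * S)
    return PR2, PI2

PR_star = [theta / (gamma + theta)] * n   # = P_R^* at every node
PI_star = [0.0] * n
PR2, PI2 = step(PR_star, PI_star)
assert all(abs(a - b) < 1e-12 for a, b in zip(PR2, PR_star))
assert PI2 == PI_star
```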
After some algebra, the linearization of the above model around the fixed point can be expressed as: \begin{empheq}[box=\fbox]{align} &\begin{bmatrix}\tilde{P}_R(t+1)\\\tilde{P}_I(t+1)\end{bmatrix}=\begin{bmatrix} P_R^* 1_n\\ 0_n\end{bmatrix} + M \begin{bmatrix}\tilde{P}_R(t)-P_R^* 1_n\\\tilde{P}_I(t) - 0_n\end{bmatrix} ,\\ &\text{where}\notag\\ &M=\begin{bmatrix} (1-\gamma-\theta)I_n & (\delta-\theta) I_n - \theta P_S^* \beta A\\ 0_{n\times n} & (1-\delta)I_n+ P_S^*\beta A \end{bmatrix} . \end{empheq} \subsubsection{Analysis of the Nonlinear Model} \paragraph{Epidemic Extinction: $\frac{\gamma}{\gamma+\theta}\frac{\beta\lambda_{\max}(A)}{\delta}<1$} The following result summarizes the stability of the disease-free fixed point. \begin{proposition}\label{prop_id} The main fixed point of the nonlinear map (\ref{nonlinear_R_id}, \ref{nonlinear_I_id}) is \begin{enumerate}[label=\alph*)] \item locally stable, if $\frac{\gamma}{\gamma+\theta}\frac{\beta}{\delta}\lambda_{max}(A)<1$, and \item globally stable, if $\frac{\beta}{\delta}\lambda_{max}(A)<1$ . \end{enumerate} \end{proposition} The authors of \cite{prakash2012threshold} have shown the same condition for local stability, but they do not provide any result on global stability. \paragraph{Epidemic Spread: $\frac{\gamma}{\gamma+\theta}\frac{\beta\lambda_{\max}(A)}{\delta}>1$} The main fixed point of the mapping is not stable if $\frac{\gamma}{\gamma+\theta}\frac{\beta\lambda_{\max}(A)}{\delta}>1$. We show the existence and uniqueness of the second fixed point for this case. The gist of the proof is the same as that of Section \ref{sec:SIRS_secondfixedpoint}, except that we replace Property (d) with the more general: \begin{enumerate}[label=(\alph*')] \addtocounter{enumi}{3} \item $\omega(P_{R,i},0)=0$\\ $\frac{\partial \omega(P_{R,i},P_{I,i})}{\partial P_{I,i}}\bigg|_{(P_{R,i},0)}=\frac{\delta}{1-P_{R,i}}$ \end{enumerate} for any $P_{R,i}\neq 1$. 
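A quick numeric check of (d') for the present $\omega(P_{R,i},P_{I,i})=\frac{\delta P_{I,i}}{1-P_{R,i}-P_{I,i}}$ (an illustrative Python sketch, not part of the paper; $\delta$ and the sample points are arbitrary):

```python
# Numeric check of Property (d') for omega(r, s) = delta * s / (1 - r - s):
# omega(r, 0) = 0, and d(omega)/ds at (r, 0) equals delta / (1 - r).

delta = 0.7

def omega(r, s):
    return delta * s / (1.0 - r - s)

for r in [0.0, 0.3, 0.9]:
    assert omega(r, 0.0) == 0.0
    h = 1e-7
    numeric = (omega(r, h) - omega(r, 0.0)) / h   # forward difference
    assert abs(numeric - delta / (1.0 - r)) < 1e-5
```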
\begin{theorem}\label{thm:SIV_id_nontrivial} If $\frac{\gamma}{\gamma+\theta}\frac{\beta\lambda_{\max}(A)}{\delta}>1$, the nonlinear map (\ref{nonlinear_R_id}, \ref{nonlinear_I_id}) has a unique nontrivial fixed point. \end{theorem} \subsubsection{Analysis of the Exact Markov Chain} We show the mixing time result for this case as well. The vectors $\mu(t)$, $p(t)$ and the matrix $B$ are defined as before. \begin{theorem}\label{thm_mixing_id} If $\frac{\beta\lambda_{\max}(A)}{\delta}<1$, the mixing time of the Markov chain whose transition matrix $S$ is described by Eqs. \eqref{MC1_id} and \eqref{MC2_id} is $O(\log n)$. \end{theorem} \subsection{Vaccination-Dominant Model} In this variation of the model the assumption is that if a susceptible node receives both infection and vaccine at the same time, it becomes vaccinated. The transition probabilities of the Markov chain are again \begin{align}\label{MC1_vd} S_{X,Y}&= \mathbb{P}\left(\xi(t+1)=Y \mid \xi(t)= X\right)\notag\\ &= \prod_{i=1}^n \mathbb{P}\left(\xi_i(t+1)=Y_i \mid \xi(t) = X\right) , \end{align} with the change that \begin{empheq}[box=\fbox]{multline}\label{MC2_vd} \mathbb{P}\left(\xi_i(t+1)=Y_i \mid \xi(t) = X\right)=\\ \begin{cases} (1-\beta)^{m_i}(1-\theta),& \text{if } (X_i,Y_i)=(0,0)\\ (1-(1-\beta)^{m_i})(1-\theta),& \text{if } (X_i,Y_i)=(0,1)\\ \theta,& \text{if } (X_i,Y_i)=(0,2)\\ 0,& \text{if } (X_i,Y_i)=(1,0)\\ 1-\delta,& \text{if } (X_i,Y_i)=(1,1)\\ \delta,& \text{if } (X_i,Y_i)=(1,2)\\ \gamma,& \text{if } (X_i,Y_i)=(2,0)\\ 0,& \text{if } (X_i,Y_i)=(2,1)\\ 1-\gamma,& \text{if } (X_i,Y_i)=(2,2)\\ \end{cases} , \end{empheq} and $m_i=\left\vert{\left\{ {j\in N_i} \mid X_j=1\right\}}\right\vert=\left\vert{N_i\cap I(t)}\right\vert$ as before. 
In this case the marginal probabilities are: \begin{align} p_{R,i}&(t+1) =(1-\gamma)p_{R,i}(t)+\delta p_{I,i}(t)+\notag\\ & \theta(1-p_{R,i}(t)-p_{I,i}(t)) ,\label{exact_R_vd}\\ p_{I,i}&(t+1) =(1-\delta)p_{I,i}(t)+ (1-\theta)\times\notag\\ &\mathbb{E}_{ |\xi_i(t)=0}\bigg[1-\prod_{j\in N_i}(1-\beta\mathds{1}_{\xi_j(t)=1})\bigg](1-p_{R,i}(t)-p_{I,i}(t))\label{exact_I_vd} \end{align} The nonlinear map, or the mean-field approximation, can be stated as: \begin{empheq}[box=\fbox]{align} &P_{R,i}(t+1) =(1-\gamma)P_{R,i}(t)+\delta P_{I,i}(t)\notag\\ &+ \theta(1-P_{R,i}(t)-P_{I,i}(t)), \label{nonlinear_R_vd}\\ &P_{I,i}(t+1) =(1-\delta)P_{I,i}(t)+(1-\theta)\notag\\ &\cdot\Big(1-\prod_{j\in N_i} (1-\beta P_{I,j}(t))\Big)(1-P_{R,i}(t)-P_{I,i}(t)) \label{nonlinear_I_vd} \end{empheq} As a result, the first order (linear) model is: \begin{empheq}[box=\fbox]{align} &\begin{bmatrix}\tilde{P}_R(t+1)\\\tilde{P}_I(t+1)\end{bmatrix}=\begin{bmatrix} P_R^* 1_n\\ 0_n\end{bmatrix} + M \begin{bmatrix}\tilde{P}_R(t)-P_R^* 1_n\\\tilde{P}_I(t) - 0_n\end{bmatrix} ,\notag\\ &\text{where}\notag\\ &M=\begin{bmatrix} (1-\gamma-\theta)I_n & (\delta-\theta) I_n - \theta P_S^* \beta A\\ 0_{n\times n} & (1-\delta)I_n+(1-\theta) P_S^*\beta A \end{bmatrix} .\notag \end{empheq} We should note that for the vaccination-dominant model, the steady state of the Markov chain and the main fixed point of the mapping are exactly the same as in the infection-dominant model. However, as we may expect, the vaccination-dominant model is more stable. \subsubsection{Analysis of the Nonlinear Model} \paragraph{Epidemic Extinction: $(1-\theta)\frac{\gamma}{\gamma+\theta}\frac{\beta}{\delta}\lambda_{max}(A)<1$} The stability of the vaccination-dominant model can be summarized in the following theorem. 
\begin{proposition}\label{prop_vd} The main fixed point of the nonlinear map (\ref{nonlinear_R_vd}, \ref{nonlinear_I_vd}) is \begin{enumerate}[label=\alph*)] \item locally stable, if $(1-\theta)\frac{\gamma}{\gamma+\theta}\frac{\beta}{\delta}\lambda_{max}(A)<1$, and \item globally stable, if $(1-\theta)\frac{\beta}{\delta}\lambda_{max}(A)<1$ . \end{enumerate} \end{proposition} \begin{figure}[tpb] \centering \includegraphics[width=0.9\columnwidth]{SIS.pdf} \caption{The evolution of an SIS epidemic over an Erd\H{o}s-R\'enyi graph with $n=2000$ nodes. Below the threshold we observe fast extinction of the epidemic (blue curve). Above the threshold, convergence is not observed (red curve).} \label{plot0} \end{figure} \begin{figure*}[!t] \centering \subfloat{\includegraphics[width=2.3in]{SIRS_n.pdf}% \label{plot1}} \subfloat{\includegraphics[width=2.3in]{SIV_InfDom_n.pdf}% \label{plot2}} \subfloat{\includegraphics[width=2.3in]{SIV_VacDom_n.pdf}% \label{plot3}} \caption{The evolution of a) SIRS, b) SIV-Infection-Dominant, c) SIV-Vaccination-Dominant epidemics over an Erd\H{o}s-R\'enyi graph with $n=2000$ nodes. The blue curves show fast extinction of the epidemic. The red curves show epidemic spread around the nontrivial fixed point (convergence is not observed).} \label{joint} \end{figure*} \paragraph{Epidemic Spread: $(1-\theta)\frac{\gamma}{\gamma+\theta}\frac{\beta}{\delta}\lambda_{max}(A)>1$} As before, the disease-free fixed point of the mapping is not stable when $(1-\theta)\frac{\gamma}{\gamma+\theta}\frac{\beta}{\delta}\lambda_{max}(A)>1$, and there exists a unique second fixed point. \begin{theorem} If $(1-\theta)\frac{\gamma}{\gamma+\theta}\frac{\beta}{\delta}\lambda_{max}(A)>1$, the nonlinear map (\ref{nonlinear_R_vd}, \ref{nonlinear_I_vd}) has a unique nontrivial fixed point. \end{theorem} The proof is similar to that of Theorem \ref{thm:SIV_id_nontrivial}, and is omitted for brevity. 
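The extinction conditions of the three models can be compared directly: SIS/SIRS require $\frac{\beta\lambda_{\max}(A)}{\delta}<1$, the infection-dominant SIV model multiplies this factor by $\frac{\gamma}{\gamma+\theta}$, and the vaccination-dominant model by a further $(1-\theta)$. The sketch below (plain Python, not part of the paper; the parameter values are illustrative, with $\lambda_{\max}(A)$ taken as in the experiments) makes this ordering explicit.

```python
# Illustrative comparison of the extinction-threshold factors:
#   SIS/SIRS:            beta*lam/delta < 1
#   SIV infection-dom.:  gamma/(gamma+theta) * beta*lam/delta < 1
#   SIV vaccination-dom: (1-theta) * gamma/(gamma+theta) * beta*lam/delta < 1
# beta, gamma, theta are example values; lam matches the experiments.

beta, delta, gamma, theta, lam = 0.06, 0.9, 0.5, 0.5, 16.159

r_sis = beta * lam / delta
r_id = gamma / (gamma + theta) * r_sis
r_vd = (1 - theta) * r_id

# Vaccination only helps: the effective reproduction factor shrinks.
assert r_vd < r_id < r_sis
# With these numbers SIS/SIRS is above threshold while both SIV
# variants are below it.
assert r_sis > 1 and r_id < 1 and r_vd < 1
```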
\subsubsection{Analysis of the Exact Markov Chain} As shown above, the stability condition of the main fixed point (epidemic extinction) is relaxed by a factor of $(1-\theta)$ in the vaccination-dominant model. In this part, we show that the condition for the fast mixing time of the Markov chain is also relaxed by the same factor. \begin{theorem}\label{thm_mixing_vd} If $(1-\theta)\frac{\beta\lambda_{\max}(A)}{\delta}<1$, the mixing time of the Markov chain whose transition matrix $S$ is described by Eqs. \eqref{MC1_vd} and \eqref{MC2_vd} is $O(\log n)$. \end{theorem} \section{Experimental Results} We present simulation results on Erd\H{o}s-R\'enyi graphs, both below and above the epidemic thresholds; they confirm the theorems proved in the paper. In a graph with $n=2000$ and $\lambda_{\max}(A)=16.159$, for SIS epidemics, we fix $\delta=0.9$ and try different values of $\beta$. As can be seen in Fig.~\ref{plot0}, when the condition $\frac{\beta\|A\|}{\delta}<1$ is satisfied (e.g.\ $\beta=0.055$), the epidemic decays exponentially and dies out quickly. In contrast, when $\frac{\beta\|A\|}{\delta}>1$ (e.g.\ $\beta=0.056$), the epidemic does not exhibit convergence to the disease-free state in any observable time. In fact, the epidemic keeps spreading around the nontrivial fixed point. The same behavior is observed for the other models as well. The results are plotted in Fig.~\ref{joint} in log-log scale; $\gamma$ and $\theta$ are set to $0.5$, and we vary $\beta$. For the SIRS model, the threshold condition is $\frac{\beta\|A\|}{\delta}< 1$, which is the same as that of SIS, which means that having an additional recovered state does not necessarily make the system more stable. For the first SIV model (infection-dominant), we observe the same exponential decay when $\frac{\gamma}{\gamma+\theta}\frac{\beta\|A\|}{\delta}<1$ (e.g.\ when $\|A\|=16.232$ and $\beta=0.11$), which means the vaccination indeed makes the system more stable.
Furthermore, for the vaccination-dominant model, under $(1-\theta)\frac{\gamma}{\gamma+\theta}\frac{\beta\|A\|}{\delta}<1$ (e.g.\ $\beta=0.22$), we again observe fast convergence, which confirms that the system is even more stable when vaccination is dominant. As the plots show, for the above-threshold cases (e.g.\ $\beta=0.07$ for SIRS, $0.13$ for SIV-infection-dominant, and $0.29$ for SIV-vaccination-dominant), we do not observe epidemic extinction in any reasonable time. \section{Summary and Conclusions} We studied the exact network-based Markov chain model for the SIS, SIR, SIRS and SIV epidemics, and their celebrated mean-field approximations, as well as their linear approximations. Below a certain threshold, the disease-free fixed point is globally stable for the nonlinear model, and also the mixing time of the exact Markov chain is $O(\log n)$, which means the epidemic dies out fast. Furthermore, above a threshold, the disease-free fixed point is not stable for the linear and nonlinear models, and there exists a second unique fixed point, which corresponds to the endemic state. This nontrivial fixed point is also stable in most cases. Fig. \ref{fig:comp} compares and summarizes all the results. As one can see, for SIS and SIRS cases there is no gap between the two thresholds, but there is a gap in SIV cases, over which only the local stability of the mean-field approximation is known. Finally we should remark that the exact epidemic threshold of the Markov chain, and whether such a threshold exists, is still an open question. Extensive numerical simulations suggest the existence of such a threshold and a phase transition behavior. However, the observed threshold, for certain networks, is different from the threshold for stability of the nonlinear model.
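The subcritical behaviour reported above is easy to reproduce at small scale. The sketch below (ours; the graph size $n=50$, random seed, and rates are illustrative choices, far smaller than the $n=2000$ experiments reported here) simulates the exact SIS Markov chain below the threshold $\frac{\beta\lambda_{\max}(A)}{\delta}<1$ and observes fast extinction.

```python
import numpy as np

# Small-scale illustration (our own choices, not the paper's setup): simulate
# the exact SIS Markov chain on an Erdos-Renyi graph below the threshold.
rng = np.random.default_rng(0)
n, p_edge = 50, 0.1
upper = np.triu(rng.random((n, n)) < p_edge, 1)
A = (upper | upper.T).astype(float)          # symmetric adjacency, no self-loops

beta, delta = 0.02, 0.9                      # well below beta*lambda_max/delta = 1
lam_max = np.max(np.linalg.eigvalsh(A))

infected = np.zeros(n, dtype=bool)
infected[:10] = True                         # seed 10 infected nodes
counts = [int(infected.sum())]
for t in range(300):
    # node i escapes infection w.p. prod_j (1 - beta * A_ij * 1{j infected})
    p_escape = np.prod(1.0 - beta * A[:, infected], axis=1)
    newly_infected = ~infected & (rng.random(n) > p_escape)
    recovered = infected & (rng.random(n) < delta)
    infected = (infected & ~recovered) | newly_infected
    counts.append(int(infected.sum()))
```

Below the threshold the number of infected nodes drops to zero within a few steps, in line with the $O(\log n)$ mixing-time results.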
https://arxiv.org/abs/1609.09565
Analysis of Exact and Approximated Epidemic Models over Complex Networks
Subjects: Social and Information Networks (cs.SI); Dynamical Systems (math.DS)
We study the spread of discrete-time epidemics over arbitrary networks for well-known propagation models, namely SIS (susceptible-infected-susceptible), SIR (susceptible-infected-recovered), SIRS (susceptible-infected-recovered-susceptible) and SIV (susceptible-infected-vaccinated). Such epidemics are described by $2^n$- or $3^n$-state Markov chains. Ostensibly, because analyzing such Markov chains is too complicated, their $O(n)$-dimensional nonlinear "mean-field" approximation, and its linearization, are often studied instead. We provide a complete global analysis of the epidemic dynamics of the nonlinear mean-field approximation. In particular, we show that depending on the largest eigenvalue of the underlying graph adjacency matrix and the rates of infection, recovery, and vaccination, the global dynamics takes on one of two forms: either the epidemic dies out, or it converges to another unique fixed point (the so-called endemic state where a constant fraction of the nodes remain infected). A similar result has also been shown in the continuous-time case. We tie in these results with the "true" underlying Markov chain model by showing that the linear model is the tightest upper-bound on the true probabilities of infection that involves only marginals, and that, even though the nonlinear model is not an upper-bound on the true probabilities in general, it does provide an upper-bound on the probability of the chain not being absorbed. As a consequence, we also show that when the disease-free fixed point is globally stable for the mean-field model, the Markov chain has an $O(\log n)$ mixing time, which means the epidemic dies out quickly. We compare and summarize the results on different propagation models.
https://arxiv.org/abs/2109.09242
Renormalisation of the two-dimensional border-collision normal form
We study the two-dimensional border-collision normal form (a four-parameter family of continuous, piecewise-linear maps on $\mathbb{R}^2$) in the robust chaos parameter region of [S. Banerjee, J.A. Yorke, C. Grebogi, Robust Chaos, Phys. Rev. Lett. 80(14):3049--3052, 1998]. We use renormalisation to partition this region by the number of connected components of a chaotic Milnor attractor. This reveals previously undescribed bifurcation structure in a succinct way.
\section{Introduction} \label{sec:intro} \setcounter{equation}{0} Piecewise-linear maps can exhibit complicated dynamics yet are relatively amenable to an exact analysis. For this reason they provide a useful tool for exploring complex aspects of dynamical systems, such as chaos. They arise as approximations to certain types of grazing bifurcations of piecewise-smooth ODE systems \cite{DiBu08}, and are used as mathematical models, particularly in the social sciences \cite{PuSu06}. In this paper we study the family of maps \begin{equation} (x,y) \mapsto f_\xi(x,y) = \begin{cases} \begin{bmatrix} \tau_L x + y + 1 \\ -\delta_L x \end{bmatrix}, & x \le 0, \\ \begin{bmatrix} \tau_R x + y + 1 \\ -\delta_R x \end{bmatrix}, & x \ge 0, \end{cases} \label{eq:f} \end{equation} where \begin{equation} \xi = \left( \tau_L, \delta_L, \tau_R, \delta_R \right). \label{eq:xi} \end{equation} With $(x,y) \in \mathbb{R}^2$ and $\xi \in \mathbb{R}^4$, this is the two-dimensional border-collision normal form \cite{NuYo92}, except the border-collision bifurcation parameter (often denoted $\mu$) has been scaled to $1$. It is a normal form in the sense that any continuous, piecewise-linear map with two pieces for which the image of the switching line intersects the switching line at a unique point that is not a fixed point can be transformed to \eqref{eq:f} under an affine change of coordinates, see for instance \cite{Si20e}. With $\tau_R = -\tau_L$ and $\delta_L = \delta_R$, \eqref{eq:f} reduces to the well-studied Lozi map \cite{Lo78}. While \eqref{eq:f} appears simple, its dynamics can be remarkably rich \cite{BaGr99,Gl16e,Si14,SiMe08b,ZhMo06b}. In \cite{BaYo98} Banerjee, Yorke, and Grebogi identified an open parameter region $\Phi_{\rm BYG} \subset \mathbb{R}^4$ (defined below) throughout which $f_\xi$ has a chaotic attractor, and this was shown formally in \cite{GlSi21}.
Their work popularised the notion that families of piecewise-linear maps typically exhibit chaos in a robust fashion. This is distinct from families of one-dimensional unimodal maps --- often promoted as a paradigm for chaos --- that have dense windows of periodicity \cite{GrSw97,Ly97}. Robust chaos had already been demonstrated by Misiurewicz in the Lozi map \cite{Mi80}, but by studying the border-collision normal form, Banerjee, Yorke, and Grebogi showed that robust chaos occurs for generic families of piecewise-linear maps. However, while $f_\xi$ has a chaotic attractor for all $\xi \in \Phi_{\rm BYG}$, the attractor undergoes bifurcations, or crises \cite{GrOt83}, as the value of $\xi$ is varied within $\Phi_{\rm BYG}$. The purpose of this paper is to reveal bifurcation structure within $\Phi_{\rm BYG}$ and we achieve this via renormalisation. Broadly speaking, renormalisation involves showing that, for some member of a family of maps, a higher iterate or induced map is conjugate to a different member of this family \cite{Ma93}. By employing this relationship recursively one can obtain far-reaching results. Renormalisation is central for understanding generic families of one-dimensional maps \cite{CoEc80,DeVa93}. For instance, Feigenbaum's constant ($4.6692\ldots$) for the scaling of period-doubling cascades is the eigenvalue with largest modulus of a fixed point of a renormalisation operator for unimodal maps. For the one-dimensional analogue of \eqref{eq:f} (skew tent maps) the bifurcation structure was determined by Ito {\em et al.}~\cite{ItTa79b} via renormalisation, see also \cite{VeGl90}. More recently, renormalisation was applied to a two-parameter family of two-dimensional, piecewise-linear maps in \cite{PuRo18,PuRo19}. Their results show that for any $n \ge 1$ there exists $\xi \in \mathbb{R}^4$ such that \eqref{eq:f} has $2^n$ coexisting chaotic attractors. We apply renormalisation to \eqref{eq:f} in the following way.
On the preimage of the closed right half-plane, denoted $\Pi_\xi$, the second iterate of $f_\xi$ is conjugate to an alternate member of \eqref{eq:f}. That is, $f_\xi^2$ is conjugate to $f_{g(\xi)}$ for a certain function $g : \mathbb{R}^4 \to \mathbb{R}^4$. By repeatedly iterating a boundary of $\Phi_{\rm BYG}$ backwards under $g$, we are able to divide $\Phi_{\rm BYG}$ into regions $\cR_n$, for $n = 0,1,2,\ldots$, where $f_\xi$ has a chaotic Milnor attractor with $2^n$ connected components. The regions converge to a fixed point of $g$ as $n \to \infty$. The main difficulties we overcome are in analysing the global dynamics of the nonlinear map $g$ and showing that the relevant dynamics of $f_\xi$ occurs entirely within $\Pi_\xi$. Our main results are presented in \S\ref{sec:results}, see Theorems \ref{th:Rn}--\ref{th:affinelyConjugate}. Sections \ref{sec:XY}--\ref{sec:mainProof} work toward proofs of these results. First \S\ref{sec:XY} describes the phase space of \eqref{eq:f}, primarily saddle fixed points and their stable and unstable manifolds. Then in \S\ref{sec:f2} we consider the second iterate $f_\xi^2$ on $\Pi_\xi$ and construct a conjugacy to $f_{g(\xi)}$. In \S\ref{sec:phiPsi} we derive geometric properties of the boundaries of $\cR_0$ and in \S\ref{sec:renormalisation} study the dynamics of $g$. Chaos is proved in the sense of a positive Lyapunov exponent. This positivity holds for all points in the attractor, including points whose forward orbits intersect the switching line where $f_\xi$ is not differentiable; we achieve this by using one-sided directional derivatives, which are always well-defined in our setting (\S\ref{sec:lyap}). A recursive application of the renormalisation is performed in \S\ref{sec:mainProof}. Finally \S\ref{sec:conc} provides a discussion and outlook for future studies.
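For readers who wish to experiment, the following minimal sketch (ours, not part of the paper) iterates the normal form \eqref{eq:f} at the parameter point $\xi_{\rm ex} = (1.15,0.01,-1.12,0.01)$ considered in \S\ref{sub:mainTheorem}; the orbit of the origin remains bounded and visits both half-planes, consistent with the bounded attractors studied below.

```python
import numpy as np

# Minimal sketch (ours): iterate the 2D border-collision normal form f_xi,
# with the bifurcation parameter scaled to 1, at the paper's example point.
def f(z, xi):
    tau_L, delta_L, tau_R, delta_R = xi
    x, y = z
    # left piece for x <= 0, right piece for x >= 0 (they agree on x = 0)
    tau, det = (tau_L, delta_L) if x <= 0 else (tau_R, delta_R)
    return np.array([tau * x + y + 1.0, -det * x])

xi_ex = (1.15, 0.01, -1.12, 0.01)    # example parameter point from the paper
z = np.array([0.0, 0.0])
orbit = np.empty((2000, 2))
for k in range(2000):
    z = f(z, xi_ex)
    orbit[k] = z
```

Plotting `orbit` after discarding the transient reproduces the kind of phase portrait sketched in the figures.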
\section{Main results} \label{sec:results} \setcounter{equation}{0} In this section we motivate and define the parameter region $\Phi_{\rm BYG}$ and the renormalisation operator $f_\xi \mapsto f_{g(\xi)}$, then state the main results. First Theorem \ref{th:Rn} clarifies the geometry of the regions $\cR_n \subset \mathbb{R}^4$. Next Theorem \ref{th:R0} informs us of the dynamics of $f_\xi$ in $\cR_0$. Finally Theorem \ref{th:affinelyConjugate} describes the dynamics with $\xi \in \cR_n$ and any value $n \ge 0$ and follows from a recursive application of the renormalisation to Theorem \ref{th:R0}. Throughout the paper we write \begin{align} f_{L,\xi}(x,y) &= \begin{bmatrix} \tau_L x + y + 1 \\ -\delta_L x \end{bmatrix}, & f_{R,\xi}(x,y) &= \begin{bmatrix} \tau_R x + y + 1 \\ -\delta_R x \end{bmatrix}, \label{eq:fLfR} \end{align} for the left and right pieces of \eqref{eq:f}. \subsection{Two saddle fixed points} \label{sub:fps} Consider the parameter region \begin{equation} \Phi = \left\{ \xi \in \mathbb{R}^4 \,\big|\, \tau_L > \delta_L + 1, \,\delta_L > 0, \,\tau_R < -(\delta_R + 1), \,\delta_R > 0 \right\}. \label{eq:saddleSaddleRegion} \end{equation} For any $\xi \in \Phi$, $f_\xi$ has exactly two fixed points. Specifically \begin{equation} Y = \left( \frac{-1}{\tau_L - \delta_L - 1}, \frac{\delta_L}{\tau_L - \delta_L - 1} \right) \label{eq:Y} \end{equation} is a fixed point of $f_{L,\xi}$ and lies in the left half-plane, while \begin{equation} X = \left( \frac{-1}{\tau_R - \delta_R - 1}, \frac{\delta_R}{\tau_R - \delta_R - 1} \right) \label{eq:X} \end{equation} is a fixed point of $f_{R,\xi}$ and lies in the right half-plane. The eigenvalues associated with these points are those of the Jacobian matrices of $f_{L,\xi}$ and $f_{R,\xi}$: \begin{align} A_L(\xi) &= \begin{bmatrix} \tau_L & 1 \\ -\delta_L & 0 \end{bmatrix}, & A_R(\xi) &= \begin{bmatrix} \tau_R & 1 \\ -\delta_R & 0 \end{bmatrix}. 
\label{eq:ALAR} \end{align} Notice $\tau_L$ and $\delta_L$ are the trace and determinant of $A_L$; similarly $\tau_R$ and $\delta_R$ are the trace and determinant of $A_R$. It follows that $\Phi$ is the set of all parameter combinations for which $Y$ is a saddle with positive eigenvalues and $X$ is a saddle with negative eigenvalues. \subsection{The parameter region $\Phi_{\rm BYG}$} \label{sub:phiBYG} For any $\xi \in \Phi$, $X$ and $Y$ have one-dimensional stable and unstable manifolds. Fig.~\ref{fig:igRN_phasePortrait} illustrates the stable (blue) and unstable (red) manifolds of $Y$. These intersect if and only if $\phi(\xi) \le 0$, where \begin{equation} \phi(\xi) = \delta_R - (1+\tau_R) \delta_L + \frac{1}{2} \big( (1+\tau_R) \tau_L - \tau_R - \delta_L - \delta_R \big) \left( \tau_L + \sqrt{\tau_L^2 - 4 \delta_L} \right). \label{eq:phi} \end{equation} Equation \eqref{eq:phi} can be derived by directly calculating the first few linear segments of the stable and unstable manifolds of $Y$ as they emanate from $Y$, see \cite{GlSi21}. As a bifurcation, $\phi(\xi) = 0$ is a homoclinic corner \cite{Si16b} and is analogous to a `first' homoclinic tangency for smooth maps \cite{PaTa93}. Banerjee, Yorke, and Grebogi \cite{BaYo98} observed that an attractor is often destroyed here, so focussed their attention on the parameter region \begin{equation} \Phi_{\rm BYG} = \left\{ \xi \in \Phi \,\big|\, \phi(\xi) > 0 \right\}, \label{eq:BYGRegion} \end{equation} where the stable and unstable manifolds of $Y$ do not intersect. Indeed for all $\xi \in \Phi_{\rm BYG}$, $f_\xi$ has a trapping region and therefore a topological attractor \cite{Gl17}. \begin{figure}[b!] \begin{center} \includegraphics[height=8cm]{igRN_phasePortrait} \caption{ A sketch of the phase space of $f_\xi$ \eqref{eq:f} with $\xi \in \Phi_{\rm BYG}$. 
We have shown the fixed points $X$ and $Y$ and the initial parts of $W^s(Y)$ (blue) and $W^u(Y)$ (red) as they emanate from $Y$ (these manifolds do not intersect when $\phi(\xi) > 0$). The small black dots show $1000$ iterates of the forward orbit of the origin after transient dynamics has decayed. \label{fig:igRN_phasePortrait} } \end{center} \end{figure} \subsection{The renormalisation operator} \label{sub:renormOp} \begin{figure}[b!] \begin{center} \includegraphics[height=5cm]{igRN_Pi} \caption{ The preimage of the closed right half-plane \eqref{eq:Pi}. \label{fig:igRN_Pi} } \end{center} \end{figure} On $\mathbb{R}^2$ the second iterate $f_\xi^2$ is a continuous, piecewise-linear map with four pieces. But if we restrict our attention to the set \begin{equation} \Pi_\xi = \left\{ f_\xi^{-1}(x,y) \,\middle|\, x \ge 0 \right\}, \label{eq:Pi} \end{equation} then $f_\xi^2$ has only two pieces: \begin{equation} f_\xi^2(x,y) = \begin{cases} \left( f_{R,\xi} \circ f_{L,\xi} \right)(x,y), & x \le 0, \\ f_{R,\xi}^2(x,y), & x \ge 0. \end{cases} \label{eq:f2} \end{equation} As shown in Fig.~\ref{fig:igRN_Pi}, the boundary of $\Pi_\xi$ intersects the switching line at $(x,y) = (0,-1)$ and has slope $-\tau_L < 0$ in $x < 0$ and slope $-\tau_R > 0$ in $x > 0$. For any $\xi \in \Phi$, the map \eqref{eq:f2} is affinely conjugate to the normal form \eqref{eq:f} (see Proposition \ref{pr:conjugacy}). This is because the switching line of \eqref{eq:f2} satisfies the non-degeneracy conditions mentioned in \S\ref{sec:intro}. When the affine transformation to the normal form is applied, the matrix parts of the pieces of \eqref{eq:f2} undergo a similarity transform, thus their traces and determinants are not changed. The matrix part of the $x \le 0$ piece of \eqref{eq:f2} is $A_R(\xi) A_L(\xi)$, which has trace $\tau_L \tau_R - \delta_L - \delta_R$ and determinant $\delta_L \delta_R$. 
The matrix part of the $x \ge 0$ piece of \eqref{eq:f2} is $A_R(\xi)^2$, which has trace $\tau_R^2 - 2 \delta_R$ and determinant $\delta_R^2$. Hence \eqref{eq:f2} can be transformed to $f_{g(\xi)}$ where \begin{equation} g(\xi) = \big( \tau_R^2 - 2 \delta_R, \delta_R^2, \tau_L \tau_R - \delta_L - \delta_R, \delta_L \delta_R \big). \label{eq:g} \end{equation} Notice we are transforming the left piece of \eqref{eq:f2} to the right piece of $f_{g(\xi)}$ and the right piece of \eqref{eq:f2} to the left piece of $f_{g(\xi)}$. This ensures $g(\xi) \in \Phi$ (see Proposition \ref{pr:PhiForwardInvariant}) so our renormalisation operator $f_\xi \mapsto f_{g(\xi)}$ produces another member of the family \eqref{eq:f} in $\Phi$. Also observe \begin{equation} \xi^* = (1,0,-1,0) \label{eq:xiStar} \end{equation} is a fixed point of $g$ and lies on the boundary of $\Phi$. \subsection{Division of parameter space} \label{sub:division} For all $n \ge 0$ let \begin{equation} \zeta_n(\xi) = \phi \big( g^n(\xi) \big). \label{eq:zetan} \end{equation} The surface $\zeta_n(\xi) = 0$ is an $n^{\rm th}$ preimage of $\phi(\xi) = 0$ under $g$. We now use these surfaces to form the regions \begin{equation} \cR_n = \left\{ \xi \in \Phi \,\big|\, \zeta_n(\xi) > 0, \zeta_{n+1}(\xi) \le 0 \right\}, \label{eq:Rn} \end{equation} for all $n \ge 0$. The following result (proved in \S\ref{sub:RnProof}) gives properties of these regions. \begin{theorem} The $\cR_n$ are non-empty, mutually disjoint, and converge to $\{ \xi^* \}$ as $n \to \infty$. Moreover, \begin{equation} \Phi_{\rm BYG} \subset \bigcup_{n=0}^\infty \cR_n \,. \label{eq:RnUnion} \end{equation} \label{th:Rn} \end{theorem} Being four-dimensional the $\cR_n$ are inherently difficult to visualise. Fig.~\ref{fig:igRN_parameterSpace} shows two-dimensional cross-sections obtained by fixing the values of $\delta_L > 0$ and $\delta_R > 0$. 
For any such cross-section only finitely many $\cR_n$ are visible because as $n \to \infty$ they converge to $\{ \xi^* \}$ for which $\delta_L = \delta_R = 0$. Notice $\cR_1$ contains some points that do not belong to $\Phi_{\rm BYG}$. For this reason the two sets in \eqref{eq:RnUnion} are not equal. \begin{figure}[b!] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(16.7,8.5) \put(-.2,0){\includegraphics[height=8cm]{igRN_parameterSpace_a}} \put(8.7,0){\includegraphics[height=8cm]{igRN_parameterSpace_b}} \put(3.2,8.2){\small {\bf a)}~~$\delta_L = \delta_R = 0.01$} \put(12.1,8.2){\small {\bf b)}~~$\delta_L = \delta_R = 0.5$} \end{picture} \caption{ Two-dimensional cross-sections of the parameter regions $\cR_n$. In panel (a) $\cR_n$ is visible for all $n = 0,1,\ldots,4$; in panel (b) only $\cR_0$ and $\cR_1$ are visible. In both panels $\Phi_{\rm BYG}$ is bounded by the vertical line $\tau_L = \delta_L + 1$, the horizontal line $\tau_R = -\delta_R - 1$, and the curve $\zeta_0 = 0$. \label{fig:igRN_parameterSpace} } \end{center} \end{figure} \subsection{A chaotic attractor with one connected component} \label{sub:R0} The next result shows $f_\xi$ has a chaotic, connected Milnor attractor for all $\xi \in \cR_0$ when $\delta_R < 1$. This is proved in \S\ref{sub:R0Proof} and based on the results of \cite{GlSi21}. The attractor is the closure of the unstable manifold of $X$, \begin{equation} \Lambda(\xi) = {\rm cl}(W^u(X)). \label{eq:Lambda} \end{equation} \begin{theorem} For the map $f_\xi$ with any $\xi \in \cR_0$, \begin{romanlist} \item $\Lambda(\xi)$ is bounded, connected, and invariant, \item every $z \in \Lambda(\xi)$ has a positive Lyapunov exponent, and \item if $\delta_R < 1$ there exists forward invariant $\Delta \subset \mathbb{R}^2$ with non-empty interior such that \begin{equation} \bigcap_{n=0}^\infty f_\xi^n(\Delta) = \Lambda(\xi).
\label{eq:LambdaAsInfiniteIntersection} \end{equation} \end{romanlist} \label{th:R0} \end{theorem} Lyapunov exponents for \eqref{eq:f} are clarified in \S\ref{sec:lyap}. Stronger notions of chaos have been obtained on subsets of $\cR_0$, see \cite{Gl17,GlSi21}. While we have not been able to prove that $\Lambda(\xi)$ is a topological attractor, \eqref{eq:LambdaAsInfiniteIntersection} shows it contains the $\omega$-limit set of all points in $\Delta$. The set $\Delta$ has positive Lebesgue measure, thus $\Lambda(\xi)$ is a Milnor attractor \cite{Mi85}. If $\Delta$ is a trapping region (i.e.~it maps to its interior) then $\Lambda(\xi)$ is an attracting set by definition \cite{Ro04}. If $\Delta$ is the trapping region of \cite{GlSi21} (there denoted $\Omega_{\rm trap}$) then \eqref{eq:LambdaAsInfiniteIntersection} appears to be true for some but not all $\xi \in \cR_0$. We expect the extra condition $\delta_R < 1$ is unnecessary, but it is included in Theorem \ref{th:R0} because our proof utilises an area-contraction argument. \subsection{A chaotic attractor with many connected components} \label{sub:mainTheorem} For any $\xi \in \cR_n$ we have $g^n(\xi) \in \cR_0$ (see Lemma \ref{le:gForwards}), while Theorem \ref{th:R0} describes the dynamics in $\cR_0$. Thus by combining the renormalisation with Theorem \ref{th:R0} we are able to describe the dynamics of $f_\xi$ with $\xi \in \cR_n$. In view of the way $g$ is constructed, our renormalisation corresponds to the substitution rule \begin{equation} (L,R) \mapsto (RR,LR). \label{eq:substitutionRule} \end{equation} The same rule arises in the one-dimensional setting of Ito {\em et al.}~\cite{ItTa79b}. Given a word $\cW$ comprised of $L$'s and $R$'s of length $k$, let $\cF(\cW)$ be the word of length $2 k$ that results from applying \eqref{eq:substitutionRule} to every letter in $\cW$. If an orbit of $f_{g(\xi)}$ has symbolic itinerary $\cW$, the corresponding orbit of $f_\xi$ has symbolic itinerary $\cF(\cW)$.
The attractor of Theorem \ref{th:R0} is the closure of the unstable manifold of $X$. Consequently for $\xi \in \cR_n$ the corresponding attractor is the closure of the unstable manifold of a periodic solution with symbolic itinerary $\cF^n(R)$, see Table \ref{tb:cF}. \begin{table}[b!] \begin{center} \begin{tabular}{c|c} $n$ & $\cF^n(R)$ \\ \hline $0$ & R \\ $1$ & LR \\ $2$ & RRLR \\ $3$ & LRLRRRLR \\ $4$ & RRLRRRLRLRLRRRLR \end{tabular} \end{center} \caption{ The first few words in the sequence generated by repeatedly applying the symbolic substitution rule \eqref{eq:substitutionRule} to $R$. \label{tb:cF} } \end{table} \begin{theorem} Let $n \ge 0$ and $\xi \in \cR_n$. Then $g^n(\xi) \in \cR_0$ and there exist mutually disjoint sets $S_0,S_1,\ldots,S_{2^n-1} \subset \mathbb{R}^2$ such that $f_\xi(S_i) = S_{(i+1) \,{\rm mod}\, 2^n}$ and \begin{equation} f_\xi^{2^n} \big|_{S_i} ~\text{is affinely conjugate to}~ f_{g^n(\xi)} \big|_{\Lambda(g^n(\xi))} \label{eq:affinelyConjugate} \end{equation} for each $i \in \{ 0,1,\ldots,2^n-1 \}$. Moreover, \begin{equation} \bigcup_{i=0}^{2^n-1} S_i = {\rm cl} \left( W^u \left( \gamma_n \right) \right), \label{eq:Siunion} \end{equation} where $\gamma_n$ is a saddle-type periodic solution of $f_\xi$ with symbolic itinerary $\cF^n(R)$. \label{th:affinelyConjugate} \end{theorem} Numerical explorations suggest that \eqref{eq:Siunion} is the unique attractor of \eqref{eq:f} for any $\xi \in \cR_n$. Theorem \ref{th:affinelyConjugate} tells us it has $2^n$ connected components and is the closure of the unstable manifold of a saddle-type period-$2^n$ solution. Each component $S_i$ is invariant under $2^n$ iterations of $f_\xi$. Equation \eqref{eq:affinelyConjugate} tells us that the dynamics of $f_\xi^{2^n}$ on $S_i$ is equivalent (under an affine coordinate change) to that of $f_{g^n(\xi)}$ on $\Lambda(g^n(\xi))$. Since $g^n(\xi) \in \cR_0$, the properties listed in Theorem \ref{th:R0} apply to $f_\xi^{2^n}$ on $S_i$. 
Thus \eqref{eq:Siunion} is a chaotic Milnor attractor of $f_\xi$. As an example, consider $f_\xi$ with \begin{equation} \xi_{\rm ex} = (1.15,0.01,-1.12,0.01) \in \cR_2 \,. \label{eq:xiExample} \end{equation} Fig.~\ref{fig:igRN_phasePortrait2}-a shows $1000$ points of the forward orbit of the origin after transient behaviour has decayed. As expected these points appear to converge to a chaotic attractor with four connected components. By Theorem \ref{th:affinelyConjugate} each component is affinely conjugate to $\Lambda(g^2(\xi))$ which is approximated in Fig.~\ref{fig:igRN_phasePortrait2}-b by again iterating the origin. The set $\Lambda(g^2(\xi))$ has a complicated branched structure but this is not visible in Fig.~\ref{fig:igRN_phasePortrait2}-b because the determinants are extremely small. \begin{figure}[b!] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(16.7,8.5) \put(.2,0){\includegraphics[height=8cm]{igRN_phasePortrait2}} \put(9.1,0){\includegraphics[height=8cm]{igRN_phasePortrait2alt}} \put(3.6,8.2){\small {\bf a)}~~$\xi = \xi_{\rm ex}$} \put(12.5,8.2){\small {\bf b)}~~$\xi = g^2(\xi_{\rm ex})$} \end{picture} \caption{ Numerically computed attractors of $f_\xi$ with $\xi = \xi_{\rm ex}$, \eqref{eq:xiExample}, in panel (a), and $\xi = g^2(\xi_{\rm ex})$ in panel (b). In panel (a) the four small triangles are the points of a periodic solution with symbolic itinerary $\cF^2(R) = RRLR$. \label{fig:igRN_phasePortrait2} } \end{center} \end{figure} \section{The stable and unstable manifolds of the fixed points} \label{sec:XY} \setcounter{equation}{0} In this section we discuss the stable and unstable manifolds of the saddle fixed points $X$ and $Y$. Here and throughout the paper \begin{equation} 0 < \lambda_L^s < 1 < \lambda_L^u \label{eq:eigsAL} \end{equation} denote the eigenvalues of $A_L$, and \begin{equation} \lambda_R^u < -1 < \lambda_R^s < 0 \label{eq:eigsAR} \end{equation} denote the eigenvalues of $A_R$. 
These are functions of $\xi$; throughout we assume $\xi \in \Phi$. \subsection{Stable and unstable manifolds of piecewise-linear maps} \label{sub:P} Let $P$ be one of the saddle fixed points $X$ or $Y$. The stable manifold of $P$ is defined as \begin{equation} W^s(P) = \left\{ z \in \mathbb{R}^2 \setminus \{ P \} \,\big|\, f_\xi^n(z) \to P ~\text{as}~ n \to \infty \right\}. \label{eq:WsDefn} \end{equation} For all $\xi \in \Phi$ the map $f_\xi$ is invertible so the unstable manifold of $P$ is defined analogously as \begin{equation} W^u(P) = \left\{ z \in \mathbb{R}^2 \setminus \{ P \} \,\big|\, f_\xi^{-n}(z) \to P ~\text{as}~ n \to \infty \right\}. \label{eq:WuDefn} \end{equation} Since $P$ is a saddle, $W^s(P)$ and $W^u(P)$ are one-dimensional. As with smooth maps, from $P$ they emanate tangent to the stable and unstable subspaces $E^s(P)$ and $E^u(P)$. These subspaces are the lines through $P$ with directions given by the eigenvectors of $\rD f_\xi (P)$. But since $f_\xi$ is piecewise-linear, $W^s(P)$ and $W^u(P)$ in fact {\em coincide} with $E^s(P)$ and $E^u(P)$ in a neighbourhood of $P$. Globally they have a piecewise-linear structure: $W^s(P)$ has kinks on the switching line $x=0$ and on the backward orbits of these points; $W^u(P)$ has kinks on the image of the switching line, $y=0$, and on the forward orbits of these points. In the remainder of this section we reproduce the geometric constructions of \cite{GlSi21} that will be needed below. \subsection{The stable and unstable manifolds of $Y$} \label{sub:Y} \begin{figure}[b!] \begin{center} \includegraphics[height=8cm]{igRN_Y} \caption{ A sketch of the phase space of $f_\xi$ with $\xi \in \Phi_{\rm BYG}$. The triangle $\Omega(\xi)$ is shaded. \label{fig:igRN_Y} } \end{center} \end{figure} Since the eigenvalues of $A_L$ are positive, $W^s(Y)$ and $W^u(Y)$ each have two dynamically independent branches. Let $D$ denote the first kink of the right branch of $W^u(Y)$ as we follow it outwards from $Y$, see Fig.~\ref{fig:igRN_Y}.
Notice $D$ is the intersection of $E^u(Y)$ with $y=0$. Now let $B$ denote the intersection of $E^u(Y)$ with the line through $f_\xi(D)$ and parallel to $E^s(Y)$. Then let $\Omega(\xi)$ be the closed compact triangle with vertices $D$, $f_\xi(D)$, and $B$. The following result says $\Omega(\xi)$ is forward invariant under $f_\xi$. This was proved in \cite{GlSi21} by direct calculations. The key observation is that $f_\xi(D)$ lies to the right of $E^s(Y)$ because $\phi(\xi) > 0$. \begin{proposition} For any $\xi \in \Phi_{\rm BYG}$, $f_\xi \left( \Omega(\xi) \right) \subset \Omega(\xi)$. \label{pr:Omega} \end{proposition} The next result tells us that the attractor of Theorem \ref{th:R0} is contained in $\Omega(\xi)$. \begin{lemma} For any $\xi \in \Phi_{\rm BYG}$, $\Lambda(\xi) \subset \Omega(\xi)$. \label{le:LambdaInOmega} \end{lemma} \begin{proof} Since $\Omega(\xi)$ is forward invariant we only need to show $X \in \Omega(\xi)$. By direct calculations we find that the line through $D$ and $f_\xi(D)$ is $y = \ell(x)$ where \begin{equation} \ell(x) = \frac{\delta_R}{\lambda_L^s - \tau_R} \left( x - \frac{1}{1 - \lambda_L^s} \right). \nonumber \end{equation} From \eqref{eq:X} we obtain, after much simplification, \begin{equation} X_2 - \ell(X_1) = \frac{\delta_R \left( \lambda_L^{s^2} - \tau_R \lambda_L^s + \delta_R \right)} {(\delta_R + 1 - \tau_R) \left( \lambda_L^s - \tau_R \right) \left( 1 - \lambda_L^s \right)}. \nonumber \end{equation} In view of \eqref{eq:saddleSaddleRegion} and \eqref{eq:eigsAL}, each factor in this expression is positive, thus $X$ lies above the line through $D$ and $f_\xi(D)$. Also $X_1 > 0$ and $X_2 < 0$, thus $X \in \Omega(\xi)$ as required. \end{proof} \subsection{The stable and unstable manifolds of $X$} \label{sub:X} \begin{figure}[b!] 
\begin{center} \setlength{\unitlength}{1cm} \begin{picture}(16.5,8.6) \put(0,0){\includegraphics[height=8cm]{igRN_X_a}} \put(8.5,0){\includegraphics[height=8cm]{igRN_X_b}} \put(3.4,8.3){\small {\bf a)}~~$\xi \in \cR_0$} \put(11.5,8.3){\small {\bf b)}~~$\xi \in \cR_n \,, n \ge 1$} \end{picture} \caption{ Sketches of phase space with $\xi \in \cR_0$ in panel (a) and $\xi \in \cR_n$ with $n \ge 1$ in panel (b). The set $\Delta_0$ in panel (a) is introduced in \S\ref{sub:R0Proof}. The set $\Omega'$ in panel (b) is introduced in \S\ref{sub:OmegaPrime}. \label{fig:igRN_X} } \end{center} \end{figure} Since the eigenvalues of $A_R$ are negative, $W^s(X)$ and $W^u(X)$ each have one dynamically independent branch. Let $T$ denote the intersection of $E^u(X)$ with $y=0$ and let $V$ denote the intersection of $E^s(X)$ with $x=0$, see Fig.~\ref{fig:igRN_X}. It is easily shown that \begin{equation} T = \left( \frac{1}{1 - \lambda_R^s}, 0 \right). \label{eq:T} \end{equation} If $f_\xi^2(T)$ lies to the left of $E^s(X)$, as in Fig.~\ref{fig:igRN_X}-a, then $W^s(X)$ and $W^u(X)$ intersect transversely. If $f_\xi^2(T)$ lies to the right of $E^s(X)$, as in Fig.~\ref{fig:igRN_X}-b, then $W^s(X)$ and $W^u(X)$ have no intersection. The following result was obtained in \cite{Gl17} by calculating $f_\xi^2(T)$ explicitly. \begin{proposition} For any $\xi \in \Phi$, $f_\xi^2(T)$ lies to the left of $E^s(X)$ if and only if $\psi(\xi) > 0$, where \begin{align} \psi(\xi) = (\tau_L \tau_R - \delta_R) \lambda_R^u + \left( \frac{\delta_L}{\delta_R} + \delta_L - 1 \right) \lambda_R^s - \tau_L (1 + \delta_R) + \tau_R (1 - \delta_L). \label{eq:psi} \end{align} \label{pr:psi} \end{proposition} As a bifurcation, $\psi(\xi) = 0$ is a homoclinic corner for the fixed point $X$. This is analogous to the surface $\phi(\xi) = 0$ for the fixed point $Y$ as discussed in \S\ref{sub:phiBYG}. 
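Since $\phi$, $g$, $\psi$, and the eigenvalues of $A_R$ are all explicit, their interplay can be checked numerically. The sketch below (ours) evaluates them at the example point $\xi_{\rm ex}$ of \S\ref{sub:mainTheorem} and verifies the identity $\phi(g(\xi)) = \tau_R \lambda_R^{u\,2} \psi(\xi)$ of \S\ref{sub:psiAgain}, together with the fact that $\xi^* = (1,0,-1,0)$ is a fixed point of $g$.

```python
import numpy as np

# Numerical sanity check (ours) of phi, g, psi as given in the text.
def phi(xi):
    tL, dL, tR, dR = xi
    return dR - (1 + tR) * dL + 0.5 * ((1 + tR) * tL - tR - dL - dR) * (
        tL + np.sqrt(tL ** 2 - 4 * dL))

def g(xi):
    tL, dL, tR, dR = xi
    return (tR ** 2 - 2 * dR, dR ** 2, tL * tR - dL - dR, dL * dR)

def eig_R(xi):
    """Eigenvalues (lambda_R^u, lambda_R^s) of A_R: roots of l^2 - tR*l + dR."""
    tL, dL, tR, dR = xi
    disc = np.sqrt(tR ** 2 - 4 * dR)
    return (tR - disc) / 2, (tR + disc) / 2

def psi(xi):
    # requires dR != 0, as in the definition of Phi
    tL, dL, tR, dR = xi
    lam_u, lam_s = eig_R(xi)
    return ((tL * tR - dR) * lam_u + (dL / dR + dL - 1) * lam_s
            - tL * (1 + dR) + tR * (1 - dL))

xi_ex = (1.15, 0.01, -1.12, 0.01)        # the paper's example parameter point
lam_u, _ = eig_R(xi_ex)
lhs = phi(g(xi_ex))
rhs = xi_ex[2] * lam_u ** 2 * psi(xi_ex)
```

Both sides evaluate to the same (positive) number, consistent with $\xi_{\rm ex} \in \Phi_{\rm BYG}$ having $\zeta_1 > 0$.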
\section{The second iterate of $f_\xi$} \label{sec:f2} \setcounter{equation}{0} As discussed in \S\ref{sub:renormOp}, on $\Pi_\xi$ the second iterate of $f_\xi$ is a continuous, piecewise-linear map with two pieces, \eqref{eq:f2}. Next in \S\ref{sub:conjugacy} we provide the affine transformation that converts \eqref{eq:f2} to the normal form \eqref{eq:f}. Then in \S\ref{sub:psiAgain} we show that the bifurcation surface $\psi(\xi) = 0$ of the previous section is in fact identical to $\zeta_1(\xi) = \phi(g(\xi)) = 0$. \subsection{A transformation to the normal form} \label{sub:conjugacy} Any continuous, two-piece, piecewise-linear map on $\mathbb{R}^2$ for which the image of the switching line intersects the switching line at a unique point that is not a fixed point can be transformed to \eqref{eq:f} under an affine coordinate transformation. The required transformation is described in the original work \cite{NuYo92}. For the generalisation to $n$ dimensions refer to \cite{Si16}. The map \eqref{eq:f2} satisfies this condition for any $\xi \in \Phi$. As clarified by Proposition \ref{pr:conjugacy}, the required coordinate transformation is \begin{equation} h_\xi(x,y) = \frac{1}{\tau_R + \delta_R + 1} \begin{bmatrix} x \\ \delta_R x + \tau_R y - \delta_R \end{bmatrix}. \label{eq:h} \end{equation} \begin{proposition} For any $\xi \in \Phi$, \begin{equation} f_\xi^2 = h_\xi^{-1} \circ f_{g(\xi)} \circ h_\xi \,, \label{eq:conjugacy} \end{equation} on $\Pi_\xi$.
\label{pr:conjugacy} \end{proposition} \begin{proof} By directly composing \eqref{eq:fLfR} and \eqref{eq:h} we obtain \begin{equation} h_\xi \circ f_\xi^2 = \begin{cases} \dfrac{1}{\tau_R + \delta_R + 1} \begin{bmatrix} \left( \tau_R^2 - \delta_R \right) x + \tau_R y + \tau_R + 1 \\ -\delta_R^2 x \end{bmatrix}, & x \le 0, \\ \dfrac{1}{\tau_R + \delta_R + 1} \begin{bmatrix} \left( \tau_L \tau_R - \delta_L \right) x + \tau_R y + \tau_R + 1 \\ -\delta_L \delta_R x \end{bmatrix}, & x \ge 0, \end{cases} \nonumber \end{equation} and it is readily seen that $f_{g(\xi)} \circ h_\xi$ produces the same expression. \end{proof} Write $(\tilde{x},\tilde{y}) = h_\xi(x,y)$. Notice that $x$ and $\tilde{x}$ have opposite signs, i.e. \begin{equation} {\rm sgn}(x) = -{\rm sgn}(\tilde{x}). \label{eq:oppositeSigns} \end{equation} This is because $\tau_R + \delta_R + 1 < 0$ by \eqref{eq:saddleSaddleRegion}. Thus the left piece of $f_{g(\xi)}$ corresponds to the right piece of $f_\xi^2$ in \eqref{eq:f2}, and this is consistent with how $g$ was introduced in \S\ref{sub:renormOp}. \subsection{A reinterpretation of $\psi$} \label{sub:psiAgain} In \S\ref{sub:X} we saw that the fixed point $X$ of $f_\xi$ has a homoclinic corner when $\psi(\xi) = 0$. The same is true for $f_\xi^2$: its fixed point $X$ has a homoclinic corner when $\psi(\xi) = 0$. Notice $X$ is a fixed point of $f_{R,\xi}^2$; under \eqref{eq:conjugacy} this map is transformed to $f_{L,g(\xi)}$, whose fixed point is $Y$. Thus the portions of the stable and unstable manifolds of $X$ that lie in $\Pi_\xi$ transform to the stable and unstable manifolds of $Y$ for $f_{g(\xi)}$. The latter manifolds have a homoclinic corner when $\phi(g(\xi)) = 0$, which suggests that $\psi(\xi) = 0$ and $\phi(g(\xi)) = 0$ are the same surface. The following result tells us that this is indeed the case. \begin{lemma} For any $\xi \in \Phi$, \begin{equation} \phi(g(\xi)) = \tau_R \lambda_R^{u^2} \psi(\xi).
\label{eq:psi2} \end{equation} \label{le:psizeta1Relationship} \end{lemma} \begin{proof} Equation \eqref{eq:phi} can be written as \begin{equation} \phi(\xi) = (1+\tau_R) \lambda_L^{u^2} - (\tau_R + \delta_L + \delta_R) \lambda_L^u + \delta_R \,. \label{eq:phi2} \end{equation} To evaluate $\phi(g(\xi))$, in \eqref{eq:phi2} we replace $\delta_L$ with $\delta_R^2$, $\delta_R$ with $\delta_L \delta_R$, and $\tau_R$ with $\tau_L \tau_R - \delta_L - \delta_R$, see \eqref{eq:g}. Also we replace $\lambda_L^u$ with $\lambda_R^{u^2}$ because $\lambda_R^{u^2}$ is the unstable eigenvalue of $A_R^2$ (which has trace and determinant given by the first two components of \eqref{eq:g}). It is a simple (though tedious) exercise to show that upon performing these substitutions and simplifying we obtain $\tau_R \lambda_R^{u^2} \psi(\xi)$. \end{proof} \section{The geometry of the boundary of $\cR_0$} \label{sec:phiPsi} \setcounter{equation}{0} The region $\cR_0 \subset \mathbb{R}^4$ is bounded by $\zeta_0(\xi) = \phi(\xi) = 0$, $\zeta_1(\xi) = \phi(g(\xi)) = 0$, and the hyperplanes specified in \eqref{eq:saddleSaddleRegion}. Since parameter space is four-dimensional these are difficult to visualise. We can benefit from the fact that the $\delta_L$ and $\delta_R$ components of $g$ are decoupled from $\tau_L$ and $\tau_R$. Thus two-dimensional slices \begin{equation} \Phi_{\rm slice}(\delta_L,\delta_R) = \left\{ (\tau_L,\tau_R) \,\middle|\, \tau_L > \delta_L + 1, \tau_R < -\delta_R-1 \right\}, \label{eq:Phislice} \end{equation} defined by fixing the values of $\delta_L$ and $\delta_R$, map to one another under $g$. In any such slice $\zeta_0(\xi) = 0$ and $\zeta_1(\xi) = 0$ are curves. In this section we show that for any values $0 < \delta_L < 1$ and $0 < \delta_R < 1$, these curves have the geometry shown in Fig.~\ref{fig:igRN_phipsi}. \begin{figure}[b!] 
\begin{center} \includegraphics[height=10cm]{igRN_phipsi} \caption{ A sketch of $\zeta_0(\xi) = 0$ and $\zeta_1(\xi) = 0$ (equivalently $\phi(\xi) = 0$ and $\hat{\psi}(\xi) = 0$) in $\Phi_{\rm slice}(\delta_L,\delta_R)$ with $0 < \delta_L < 1$ and $0 < \delta_R < 1$. The curve $\tau_R = -\frac{1}{\tau_L} - \delta_R - 1$ is shown dashed. \label{fig:igRN_phipsi} } \end{center} \end{figure} Observe $\zeta_0(\xi) = 0$ is the same as $\phi(\xi) = 0$, while, by Lemma \ref{le:psizeta1Relationship}, $\zeta_1(\xi) = 0$ is the same as $\psi(\xi) = 0$. However, we find the function \begin{equation} \hat{\psi}(\xi) = \lambda_R^u \psi(\xi), \label{eq:psiHatDefn} \end{equation} easier to work with than $\psi(\xi)$. By \eqref{eq:psiHatDefn} and \eqref{eq:psi2} the sign of $\hat{\psi}(\xi)$ is the same as that of $\zeta_1(\xi)$. From \eqref{eq:psi} we obtain \begin{equation} \hat{\psi}(\xi) = -\delta_L \left( \lambda_R^{u^2} - 1 \right) + \lambda_R^u \left( \lambda_R^{u^2} - 1 \right) \tau_L + (1 - \delta_R) \lambda_R^{u^2} \,. \label{eq:psiHat} \end{equation} The remainder of this section is organised as follows. First in \S\ref{sub:phi} we study the curve $\phi(\xi) = 0$. We then derive analogous properties for $\hat{\psi}(\xi) = 0$ and obtain some additional bounds, \S\ref{sub:psi}. Lastly we show these curves intersect at a unique point in $\Phi_{\rm slice}$, \S\ref{sub:phiandpsi}. \subsection{The curve $\phi(\xi) = 0$} \label{sub:phi} We first show the curve $\phi(\xi) = 0$ does not exist in $\Phi_{\rm slice}(\delta_L,\delta_R)$ if $\delta_L \ge 1$. \begin{lemma} Let $\xi \in \Phi$. If $\delta_L \ge 1$ then $\phi(\xi) < 0$. \label{le:deltaLge1} \end{lemma} \begin{proof} We can rearrange \eqref{eq:phi2} as \begin{equation} \phi(\xi) = (\tau_R + \delta_R + 1) \lambda_L^u \left( \lambda_L^u - 1 \right) - \delta_R \left( \lambda_L^{u^2} - 1 \right) + (1 - \delta_L) \lambda_L^u \,.
\label{eq:phi3} \end{equation} By inspection the first two terms in \eqref{eq:phi3} are negative and if $\delta_L \ge 1$ then the last term is less than or equal to zero. \end{proof} The next result shows that $\phi(\xi) = 0$ appears roughly as in Fig.~\ref{fig:igRN_phipsi}. \begin{proposition} Let $0 < \delta_L < 1$ and $\delta_R > 0$. There exists a unique $C^\infty$ function $G : (-\infty,-\delta_R-1] \to (\delta_L+1,\infty)$ such that \begin{equation} \phi \big( G(\tau_R), \delta_L, \tau_R, \delta_R \big) = 0, \label{eq:phiZeroCurve} \end{equation} for all $\tau_R \in (-\infty,-\delta_R-1]$. Moreover, $G$ is strictly increasing, $G(\tau_R) \to \delta_L+1$ as $\tau_R \to -\infty$, and $G(-\delta_R-1) = \alpha + \frac{\delta_L}{\alpha}$ where $\alpha \in \mathbb{R}$ is the largest solution to \begin{equation} -\delta_R \alpha^2 + (1-\delta_L) \alpha + \delta_R = 0\,. \label{eq:phiTopIntersection} \end{equation} \label{pr:phiZeroCurve} \end{proposition} \begin{proof} First fix $\tau_R \le -\delta_R-1$. With $\tau_L = \delta_L + 1$ we have $\lambda_L^u = 1$ and so \eqref{eq:phi2} simplifies to $\phi(\xi) = 1 - \delta_L > 0$. As $\tau_L \to \infty$ we have $\lambda_L^u \to \infty$ and so $\phi(\xi) \to -\infty$ (because the $\lambda_L^{u^2}$-coefficient in \eqref{eq:phi2} is negative). Thus by the intermediate value theorem there exists $\tau_L = G(\tau_R) > \delta_L + 1$ satisfying \eqref{eq:phiZeroCurve}. To demonstrate the uniqueness of $G$ we differentiate \eqref{eq:phi2} to obtain \begin{equation} \frac{\partial \phi}{\partial \tau_L} = \left( 2 (1 + \tau_R) \lambda_L^u - (\tau_R + \delta_L + \delta_R) \right) \frac{\partial \lambda_L^u}{\partial \tau_L}. \label{eq:phiZeroCurveProof10} \end{equation} It is a simple exercise to show that $\frac{\partial \lambda_L^u}{\partial \tau_L} = \frac{\lambda_L^u}{\lambda_L^u - \lambda_L^s}$. 
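For completeness, this follows by implicitly differentiating the characteristic equation $\lambda_L^{u^2} - \tau_L \lambda_L^u + \delta_L = 0$ with respect to $\tau_L$, which gives
\begin{equation}
\left( 2 \lambda_L^u - \tau_L \right) \frac{\partial \lambda_L^u}{\partial \tau_L} = \lambda_L^u \,,
\nonumber
\end{equation}
and then using $\lambda_L^u + \lambda_L^s = \tau_L$ to write $2 \lambda_L^u - \tau_L = \lambda_L^u - \lambda_L^s$.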
Also if $\phi = 0$ then by \eqref{eq:phi2} we can replace $(\tau_R + \delta_L + \delta_R)$ in \eqref{eq:phiZeroCurveProof10} with $\frac{\delta_R}{\lambda_L^u} + (1+\tau_R) \lambda_L^u$ to obtain \begin{equation} \frac{\partial \phi}{\partial \tau_L} \bigg|_{\phi = 0} = \left( (1 + \tau_R) \lambda_L^u - \frac{\delta_R}{\lambda_L^u} \right) \frac{\lambda_L^u}{\lambda_L^u - \lambda_L^s}. \label{eq:phiZeroCurveProof11} \end{equation} By inspection $\frac{\partial \phi}{\partial \tau_L} \big|_{\phi = 0} < 0$. Thus $G$ is unique (because if $\phi = 0$ for two distinct values of $\tau_L > \delta_L + 1$ then $\frac{\partial \phi}{\partial \tau_L} \ge 0$ at at least one of these values). Since $\phi(\xi)$ is $C^\infty$ the function $G$ is $C^\infty$ by the implicit function theorem. From \eqref{eq:phi2} we obtain \begin{equation} \frac{\partial \phi}{\partial \tau_R} = \lambda_L^u \left( \lambda_L^u - 1 \right), \label{eq:phiZeroCurveProof20} \end{equation} which is evidently positive. Thus $\frac{d G}{d \tau_R} = -\frac{\frac{\partial \phi}{\partial \tau_L}}{\frac{\partial \phi}{\partial \tau_R}} \Big|_{\phi = 0} > 0$, so $G$ is strictly increasing. Also $G(\tau_R) \to \delta_L+1$ as $\tau_R \to -\infty$ because if we fix $\tau_L = \delta_L + 1 + \ee$, then $\phi(\xi) \to -\infty$ as $\tau_R \to -\infty$ for any $\ee > 0$. Finally, by substituting $\tau_R = -\delta_R - 1$ into \eqref{eq:phi2} we obtain \begin{equation} \phi(\xi) \big|_{\tau_R = -\delta_R - 1} = -\delta_R \lambda_L^{u^2} + (1-\delta_L) \lambda_L^u + \delta_R \,. \label{eq:phiZeroCurveProof30} \end{equation} Since $\tau_L = \lambda_L^u + \frac{\delta_L}{\lambda_L^u}$ we have $G(-\delta_R-1) = \alpha + \frac{\delta_L}{\alpha}$. \end{proof} \subsection{The curve $\hat{\psi}(\xi) = 0$} \label{sub:psi} The arguments presented here for $\hat{\psi}$ mirror those above for $\phi$. We first show $\hat{\psi}(\xi) = 0$ does not exist in $\Phi_{\rm slice}(\delta_L,\delta_R)$ if $\delta_R \ge 1$. 
\begin{lemma} Let $\xi \in \Phi$. If $\delta_R \ge 1$ then $\hat{\psi}(\xi) < 0$. \label{le:deltaRge1} \end{lemma} \begin{proof} By inspection the first two terms in \eqref{eq:psiHat} are negative and if $\delta_R \ge 1$ then the last term is less than or equal to zero. \end{proof} We now show $\hat{\psi}(\xi) = 0$ appears roughly as in Fig.~\ref{fig:igRN_phipsi}. \begin{proposition} Let $\delta_L > 0$ and $0 < \delta_R < 1$. There exists a unique $C^\infty$ function $H : [\delta_L+1,\infty) \to (-\infty,-\delta_R-1)$ such that \begin{equation} \hat{\psi} \big( \tau_L, \delta_L, H(\tau_L), \delta_R \big) = 0, \label{eq:psiHatZeroCurve} \end{equation} for all $\tau_L \in [\delta_L+1,\infty)$. Moreover, $H$ is strictly increasing, $H(\tau_L) \to -\delta_R-1$ as $\tau_L \to \infty$, and $H(\delta_L+1) = \beta + \frac{\delta_R}{\beta}$ where $\beta \in \mathbb{R}$ is the smallest (most negative) solution to $p(\beta) = 0$ where \begin{equation} p(\beta) = (1+\delta_L) \beta^3 + (1-\delta_L-\delta_R) \beta^2 - (1+\delta_L) \beta + \delta_L \,. \label{eq:psiHatLeftIntersection} \end{equation} \label{pr:psiHatZeroCurve} \end{proposition} \begin{proof} Fix $\tau_L \ge \delta_L + 1$. With $\tau_R = -\delta_R - 1$ we have $\lambda_R^u = -1$ and so \eqref{eq:psiHat} simplifies to $\hat{\psi}(\xi) = 1 - \delta_R > 0$. Also $\hat{\psi}(\xi) \to -\infty$ as $\tau_R \to -\infty$, thus, by the intermediate value theorem, there exists $\tau_R = H(\tau_L) < -\delta_R-1$ satisfying \eqref{eq:psiHatZeroCurve}. 
From \eqref{eq:psiHat}, \begin{equation} \frac{\partial \hat{\psi}}{\partial \tau_R} = \left( 3 \tau_L \lambda_R^{u^2} + 2 (1 - \delta_L - \delta_R) \lambda_R^u - \tau_L \right) \frac{\lambda_R^u}{\lambda_R^u - \lambda_R^s}, \nonumber \end{equation} and if $\hat{\psi}(\xi) = 0$ this can be simplified to \begin{equation} \frac{\partial \hat{\psi}}{\partial \tau_R} \bigg|_{\hat{\psi}=0} = \left( \tau_L \left( 1 + \lambda_R^{u^2} \right) - \frac{2 \delta_L}{\lambda_R^u} \right) \frac{\lambda_R^u}{\lambda_R^u - \lambda_R^s}, \label{eq:psiHatZeroCurveProof11} \end{equation} which is positive. Hence $H(\tau_L)$ satisfying \eqref{eq:psiHatZeroCurve} is unique for all $\tau_L \ge \delta_L + 1$. Moreover, $H$ is $C^\infty$ because $\hat{\psi}$ is $C^\infty$. From \eqref{eq:psiHat}, \begin{equation} \frac{\partial \hat{\psi}}{\partial \tau_L} = \lambda_R^u \left( \lambda_R^{u^2} - 1 \right) < 0, \nonumber \end{equation} thus $\frac{d H}{d \tau_L} = -\frac{\frac{\partial \hat{\psi}}{\partial \tau_R}}{\frac{\partial \hat{\psi}}{\partial \tau_L}} \Big|_{\hat{\psi} = 0} > 0$, i.e.~$H$ is strictly increasing. We have $H(\tau_L) \to -\delta_R-1$ as $\tau_L \to \infty$ because if $\tau_R = -\delta_R-1-\ee$ then $\hat{\psi}(\xi) \to -\infty$ as $\tau_L \to \infty$ for any $\ee > 0$. Finally, by substituting $\tau_L = \delta_L+1$ into \eqref{eq:psiHat} we obtain $\hat{\psi}(\xi) \big|_{\tau_L = \delta_L + 1} = p \left( \lambda_R^u \right)$ and so $H(\delta_L+1) = \beta + \frac{\delta_R}{\beta}$ as required. \end{proof} Next we obtain upper bounds on the values of $\beta$ and $\beta + \frac{\delta_R}{\beta}$. These are the values of $\lambda_R^u$ and $\tau_R$ for the point at which the curve $\hat{\psi}(\xi) = 0$ meets the boundary $\tau_L = \delta_L + 1$, see Fig.~\ref{fig:igRN_phipsi}. \begin{lemma} Let $\delta_L > 0$ and $0 < \delta_R < 1$. 
The value of $\beta$ in Proposition \ref{pr:psiHatZeroCurve} satisfies $\beta > -\frac{1 + \sqrt{5}}{2}$ and $\beta + \frac{\delta_R}{\beta} > -2$. \label{le:betaBounds} \end{lemma} \begin{proof} The function $p$ can be rewritten as \begin{equation} p(\beta) = \delta_L \left( \beta - 1 \right)^2 \left( \beta + 1 \right) - \delta_R \beta^2 + \beta \left( \beta^2 + \beta - 1 \right). \label{eq:betaBoundsProof10} \end{equation} The first two terms of \eqref{eq:betaBoundsProof10} are negative, so since $p(\beta) = 0$ the last term of \eqref{eq:betaBoundsProof10} must be positive. This requires $\beta > -\frac{1 + \sqrt{5}}{2}$. Also $p$ can be rewritten as \begin{equation} p(\beta) = \left[ (\beta+1) \left( 1 - \frac{1}{\beta} \right) \left( 1 + \delta_L - \frac{\delta_L}{\beta} \right) + (1-\delta_R) \right] \beta^2. \nonumber \end{equation} Thus $p(\beta) = 0$ implies \begin{equation} -(\beta + 1) = \frac{1-\delta_R}{\left( 1 - \frac{1}{\beta} \right) \left( 1 + \delta_L - \frac{\delta_L}{\beta} \right)}. \label{eq:betaBoundsProof30} \end{equation} Since $\beta < 0$ the denominator of \eqref{eq:betaBoundsProof30} is greater than $1$ and so $-(\beta + 1) < 1-\delta_R$. Thus $(\beta+1)^2 < (1-\delta_R)^2$ which can be rearranged as $\beta^2 + \delta_R < -2 \beta - \delta_R (1 - \delta_R)$. Since $0 < \delta_R < 1$ this can be reduced to $\beta + \frac{\delta_R}{\beta} > -2$. \end{proof} Lastly we show that the curve $\tau_R = -\frac{1}{\tau_L} - \delta_R - 1$ lies below $\hat{\psi}(\xi) = 0$, as in Fig.~\ref{fig:igRN_phipsi}. This result is used later in the proof of Proposition \ref{pr:psiHat}. \begin{lemma} Let $\delta_L > 0$, $0 < \delta_R < 1$, and $\tau_L \ge \delta_L + 1$. Then \begin{equation} H(\tau_L) > -\frac{1}{\tau_L} - \delta_R - 1. 
\label{eq:HBound} \end{equation} \label{le:HBound} \end{lemma} \begin{proof} By iterating \eqref{eq:T} under $f_{R,\xi}$ and $f_{L,\xi}$ we obtain \begin{equation} f_\xi^2(T) = \left( \tau_L \left( \frac{\tau_R}{1 - \lambda_R^s} + 1 \right) - \frac{\delta_R}{1 - \lambda_R^s} + 1, -\delta_L \left( \frac{\tau_R}{1 - \lambda_R^s} + 1 \right) \right). \label{eq:f2T} \end{equation} The second component of \eqref{eq:f2T} is clearly positive with any $\tau_R < -\delta_R - 1$. The first component of \eqref{eq:f2T} can be rearranged as \begin{equation} f_\xi^2(T)_1 = \left( \tau_L - \frac{(\tau_R + \delta_R) \lambda_R^s - 1}{\tau_R + \delta_R + 1} \right) \left( \frac{\tau_R}{1 - \lambda_R^s} + 1 \right). \label{eq:f2T1} \end{equation} If $\tau_L = \frac{-1}{\tau_R + \delta_R + 1}$ (equivalently $\tau_R = -\frac{1}{\tau_L} - \delta_R - 1$) then \eqref{eq:f2T1} simplifies to a quantity that is clearly negative. In this case $f_\xi^2(T)$ is located in the second quadrant of $\mathbb{R}^2$, so certainly it lies to the left of $E^s(X)$. Thus $\psi(\xi) > 0$ by Proposition \ref{pr:psi}, so $\hat{\psi}(\xi) < 0$. We have shown $\tau_R = -\frac{1}{\tau_L} - \delta_R - 1$ implies $\hat{\psi}(\xi) < 0$. Therefore if $\hat{\psi}(\xi) = 0$ (equivalently $\tau_R = H(\tau_L)$), then $\tau_R > -\frac{1}{\tau_L} - \delta_R - 1$, as required. \end{proof} \subsection{The curves $\phi(\xi) = 0$ and $\hat{\psi}(\xi) = 0$ intersect at a unique point} \label{sub:phiandpsi} \begin{proposition} Fix $0 < \delta_L < 1$ and $0 < \delta_R < 1$. There exist unique $\tau_L > \delta_L + 1$ and $\tau_R < -\delta_R-1$ such that $\phi(\xi) = \hat{\psi}(\xi) = 0$. \label{pr:phipsiIntersection} \end{proposition} \begin{proof} By Propositions \ref{pr:phiZeroCurve} and \ref{pr:psiHatZeroCurve} the curves $\phi(\xi) = 0$ and $\hat{\psi}(\xi) = 0$ must intersect. 
To show this intersection is unique it suffices to show that at any point of intersection the slope $\frac{d \tau_R}{d \tau_L}$ of $\phi(\xi) = 0$ is greater than that of $\hat{\psi}(\xi) = 0$. From the calculations performed in the proof of Proposition \ref{pr:phiZeroCurve}, the slope of $\phi(\xi) = 0$ is \begin{equation} \left( \frac{d G}{d \tau_R} \right)^{-1} = \frac{-(1 + \tau_R) \lambda_L^u + \frac{\delta_R}{\lambda_L^u}} {\left( \lambda_L^u - 1 \right) \left( \lambda_L^u - \lambda_L^s \right)}. \nonumber \end{equation} Consequently \begin{equation} \left( \frac{d G}{d \tau_R} \right)^{-1} > -\frac{\lambda_R^u + 1}{\lambda_L^u - 1}, \label{eq:phiZeroSlopeApprox} \end{equation} because $\tau_R < \lambda_R^u$, $\delta_R > 0$, and $\lambda_L^s > 0$. From the calculations performed in the proof of Proposition \ref{pr:psiHatZeroCurve}, the slope of $\hat{\psi}(\xi) = 0$ is \begin{equation} \frac{d H}{d \tau_L} = \frac{\left( \lambda_R^{u^2} - 1 \right) \left( \lambda_R^u - \lambda_R^s \right)} {\tau_L \left( 1 + \lambda_R^{u^2} \right) - \frac{2 \delta_L}{\lambda_R^u}}. \nonumber \end{equation} Consequently \begin{equation} \frac{d H}{d \tau_L} < -\frac{\lambda_R^u \left( \lambda_R^{u^2} - 1 \right)}{\lambda_L^u \left( \lambda_R^{u^2} + 1 \right)}, \label{eq:psiHatZeroSlopeApprox} \end{equation} because $\tau_L > \lambda_L^u$, $\delta_L > 0$, and $\lambda_R^s < 0$. Now suppose for a contradiction that $\left( \frac{d G}{d \tau_R} \right)^{-1} \le \frac{d H}{d \tau_L}$ at a point where both $\phi(\xi) = 0$ and $\hat{\psi}(\xi) = 0$. 
By \eqref{eq:phiZeroSlopeApprox} and \eqref{eq:psiHatZeroSlopeApprox} this implies \begin{equation} -\frac{\lambda_R^u + 1}{\lambda_L^u - 1} < -\frac{\lambda_R^u \left( \lambda_R^{u^2} - 1 \right)}{\lambda_L^u \left( \lambda_R^{u^2} + 1 \right)}, \nonumber \end{equation} which can be rearranged as \begin{equation} -\frac{\left( \lambda_R^u + 1 \right) \left[ \lambda_L^u \left( \lambda_R^u + 1 \right) + \lambda_R^u \left( \lambda_R^u - 1 \right) \right]} {\lambda_L^u \left( \lambda_L^u - 1 \right) \left( \lambda_R^{u^2} + 1 \right)} < 0. \nonumber \end{equation} For this to be true the term in square brackets must be negative, and this implies \begin{equation} \lambda_L^u (\tau_R + 1) < -2, \label{eq:phipsiIntersectionProof50} \end{equation} because $\tau_R < \lambda_R^u$ and $\lambda_R^u \left( \lambda_R^u - 1 \right) > 2$. However, $\phi(\xi) = 0$, so by applying the quadratic formula to \eqref{eq:phi2} we obtain \begin{equation} \tau_R + \delta_L + \delta_R - \sqrt{(\tau_R + \delta_L + \delta_R)^2 - 4 (1 + \tau_R) \delta_R} = 2 \lambda_L^u (\tau_R + 1). \nonumber \end{equation} Thus \eqref{eq:phipsiIntersectionProof50} implies \begin{equation} \tau_R + \delta_L + \delta_R - \sqrt{(\tau_R + \delta_L + \delta_R)^2 - 4 (1 + \tau_R) \delta_R} < -4, \nonumber \end{equation} which can be rearranged as \begin{equation} \tau_R < \frac{-2 \delta_L - 3 \delta_R - 4}{2 + \delta_R}. \nonumber \end{equation} Since $\delta_L, \delta_R > 0$ this implies $\tau_R < -2$. But the curve $\hat{\psi}(\xi) = 0$ increases with $\tau_L$, thus on $\hat{\psi}(\xi) = 0$ the value of $\tau_R$ is greater than its value at the boundary $\tau_L = \delta_L + 1$ where it equals $\beta + \frac{\delta_R}{\beta}$. So the bound $\beta + \frac{\delta_R}{\beta} > -2$ of Lemma \ref{le:betaBounds} provides a contradiction. 
Therefore $\left( \frac{d G}{d \tau_R} \right)^{-1} > \frac{d H}{d \tau_L}$ at any point where $\phi(\xi) = 0$ and $\hat{\psi}(\xi) = 0$ intersect, hence the intersection point is unique. \end{proof} \section{Dynamics of the renormalisation operator} \label{sec:renormalisation} \setcounter{equation}{0} In this section we study the dynamics of $g$ on $\Phi$. We first show that any $\xi \in \Phi$ maps under $g$ to another point in $\Phi$. \begin{proposition} If $\xi \in \Phi$ then $g(\xi) \in \Phi$. \label{pr:PhiForwardInvariant} \end{proposition} \begin{proof} Write $g(\xi) = \left( \tilde{\tau}_L, \tilde{\delta}_L, \tilde{\tau}_R, \tilde{\delta}_R \right)$. By \eqref{eq:g} and the assumption $\xi \in \Phi$ we obtain \begin{align*} \tilde{\tau}_L - \left( \tilde{\delta}_L + 1 \right) &= \tau_R^2 - 2 \delta_R - \left( \delta_R^2 + 1 \right) = \tau_R^2 - \left( \delta_R + 1 \right)^2 > 0, \\ \tilde{\delta}_L &= \delta_R^2 > 0, \\ \tilde{\tau}_R + \tilde{\delta}_R + 1 &= \tau_L \tau_R - \delta_L - \delta_R + \delta_L \delta_R + 1 \\ &< -(\delta_L + 1)(\delta_R + 1) - \delta_L - \delta_R + \delta_L \delta_R + 1 \\ &= -2 (\delta_L + \delta_R) < 0, \\ \tilde{\delta}_R &= \delta_L \delta_R > 0, \end{align*} which implies $g(\xi) \in \Phi$. \end{proof} Next in \S\ref{sub:renormalisation} we consider the subset of $\Phi$ for which $\hat{\psi}(\xi) < 0$. We show that any point in this subset maps under $g$ to another point in this subset. This result is central to showing that the regions $\cR_n$ are mutually disjoint and proving Theorem \ref{th:Rn} in \S\ref{sub:RnProof}. Recall, the sign of $\hat{\psi}(\xi)$ is the same as that of $\zeta_1(\xi)$ by \eqref{eq:psiHatDefn}. 
\subsection{The subset of $\Phi$ for which $\hat{\psi}(\xi) < 0$} \label{sub:renormalisation} We first show that the point at which the curve $\hat{\psi}(\xi) = 0$ meets $\tau_L = \delta_L + 1$ maps under $g$ to a point below the dashed curve of Fig.~\ref{fig:igRN_phipsi} in the corresponding slice $\Phi_{\rm slice}(\tilde{\delta}_L,\tilde{\delta}_R)$. \begin{lemma} Let $\delta_L > 0$ and $0 < \delta_R < 1$. Let $\xi_0 = (\delta_L+1,\delta_L,\beta + \frac{\delta_R}{\beta},\delta_R)$ where $\beta$ is as given in Proposition \ref{pr:psiHatZeroCurve}. Write $g(\xi_0) = \left( \tilde{\tau}_L, \tilde{\delta}_L, \tilde{\tau}_R, \tilde{\delta}_R \right)$. Then \begin{equation} \tilde{\tau}_R < -\frac{1}{\tilde{\tau}_L} - \tilde{\delta}_R - 1. \label{eq:tildeTauRBound} \end{equation} \label{le:tildeTauRBound} \end{lemma} \begin{proof} The inequality \eqref{eq:tildeTauRBound} is equivalent to \begin{equation} \tilde{\tau}_L \left( \tilde{\tau}_R + \tilde{\delta}_R + 1 \right) + 1 < 0. \label{eq:tildeTauRBoundProof1} \end{equation} By \eqref{eq:g} we have $\tilde{\tau}_L = \tau_R^2 - 2 \delta_R$, $\tilde{\tau}_R = \tau_L \tau_R - \delta_L - \delta_R$, and $\tilde{\delta}_R = \delta_L \delta_R$; also $\tau_L = \delta_L + 1$. Upon substituting these into \eqref{eq:tildeTauRBoundProof1}, after simplification the left-hand side of \eqref{eq:tildeTauRBoundProof1} becomes \begin{equation} \omega = (1+\delta_L) \tau_R^3 + (1-\delta_L)(1-\delta_R) \tau_R^2 - 2 \delta_R (1+\delta_L) \tau_R - 2 \delta_R (1-\delta_L)(1-\delta_R) + 1. \label{eq:omega} \end{equation} Thus it remains for us to show that $\omega < 0$.
Into \eqref{eq:omega} we substitute $\tau_R = \beta + \frac{\delta_R}{\beta}$ to obtain, after much rearranging, \begin{equation} \omega = p(\beta) + q(\beta) + \delta_L \delta_R \beta (\beta + 2) + (1-\delta_L) (\beta+1) + \delta_R^2 (1+\delta_L) \left( \beta + \frac{\delta_R}{\beta} \right) \frac{1}{\beta^2}, \label{eq:tildeTauRBoundProof10} \end{equation} where $p$ is given by \eqref{eq:psiHatLeftIntersection} and \begin{equation} q(\beta) = \big( \delta_L (2-\delta_R) + \delta_R \big) \beta + \delta_R^2 (1-\delta_L)(1-\delta_R) \frac{1}{\beta^2}. \label{eq:tildeTauRBoundProof11} \end{equation} Since $\beta < -1$ we have \begin{align} q(\beta) &< -\big( \delta_L (2-\delta_R) + \delta_R \big) + \delta_R^2 (1-\delta_L)(1-\delta_R) \nonumber \\ &< -\big( \delta_L (2-\delta_R) + \delta_R \big) + \delta_R^2 (1-\delta_R) \nonumber \\ &= -\delta_L (2 - \delta_R) - \delta_R \left( \delta_R^2 - \delta_R + 1 \right) \nonumber \\ &< 0. \nonumber \end{align} Also $p(\beta) = 0$ and by inspection the last three terms of \eqref{eq:tildeTauRBoundProof10} are negative (because $\beta+1 < 0$ and $\beta + 2 > 0$ by Lemma \ref{le:betaBounds}). Therefore $\omega < 0$. \end{proof} We now use Lemma \ref{le:tildeTauRBound} to show that the subset of $\Phi$ for which $\hat{\psi}(\xi) < 0$ is forward invariant under $g$. \begin{proposition} Let $\xi \in \Phi$. If $\hat{\psi}(\xi) \le 0$ then $\hat{\psi}(g(\xi)) < 0$. \label{pr:psiHat} \end{proposition} \begin{proof}[Proof of Proposition \ref{pr:psiHat}] Write $g(\xi) = \left( \tilde{\tau}_L, \tilde{\delta}_L, \tilde{\tau}_R, \tilde{\delta}_R \right)$. Since $\xi \in \Phi$ we have $\delta_L, \delta_R > 0$. First suppose $0 < \delta_R < 1$. If $\tilde{\delta}_R \ge 1$ then certainly $\hat{\psi}(g(\xi)) < 0$ by Lemma \ref{le:deltaRge1}, so let us suppose $\tilde{\delta}_R < 1$. 
Since $\tilde{\delta}_L = \delta_R^2 < 1$, by Proposition \ref{pr:phipsiIntersection} the curves $\phi = 0$ and $\hat{\psi} = 0$ intersect at a unique point in $\Phi_{\rm slice}(\tilde{\delta}_L,\tilde{\delta}_R)$, call it $\tilde{\xi}_{\rm int}$, see Fig.~\ref{fig:igRN_phipsiImage}. With $\xi = \xi_0$ as in Lemma \ref{le:tildeTauRBound}, the inequality \eqref{eq:tildeTauRBound} implies $\hat{\psi}(g(\xi_0)) < 0$ by Lemma \ref{le:HBound}. Also $\phi(g(\xi_0)) = 0$, because $\hat{\psi}(\xi_0) = 0$, thus $g(\xi_0)$ lies on $\phi = 0$ and below $\tilde{\xi}_{\rm int}$, as in Fig.~\ref{fig:igRN_phipsiImage}. Now if $\hat{\psi}(\xi) \le 0$ and $\xi \ne \xi_0$, then $g(\xi)$ lies in the shaded region of Fig.~\ref{fig:igRN_phipsiImage}. The curve $\hat{\psi} = 0$ does not enter this region because the intersection point $\tilde{\xi}_{\rm int}$ is unique. Thus $g(\xi)$ lies below the curve $\hat{\psi} = 0$, that is $\hat{\psi}(g(\xi)) < 0$. Second suppose $\delta_R \ge 1$. Then \begin{equation} \tilde{\tau}_R = \tau_L \tau_R - \delta_L - \delta_R < -(\delta_L+1)(\delta_R+1) - \delta_L - \delta_R < -3, \nonumber \end{equation} where we have used $\delta_L > 0$ and $\delta_R \ge 1$ to produce the last inequality. Thus $\tilde{\tau}_R < -2$ and so $g(\xi)$ lies below $\hat{\psi} = 0$ by Lemma \ref{le:betaBounds}. That is, $\hat{\psi}(g(\xi)) < 0$. \end{proof} \begin{figure}[t!] \begin{center} \includegraphics[height=10cm]{igRN_phipsiImage} \caption{ A sketch of $\phi(\tilde{\xi}) = 0$ and $\hat{\psi}(\tilde{\xi}) = 0$ where $\tilde{\xi} = g(\xi)$ with $0 < \tilde{\delta}_L < 1$ and $0 < \tilde{\delta}_R < 1$. The point $\tilde{\xi}_{\rm int}$ is the unique intersection of $\phi(\tilde{\xi}) = 0$ and $\hat{\psi}(\tilde{\xi}) = 0$. The point $\xi_0$ is as in Lemma \ref{le:tildeTauRBound}. 
\label{fig:igRN_phipsiImage} } \end{center} \end{figure} \subsection{Arguments leading to a proof of Theorem \ref{th:Rn}} \label{sub:RnProof} Here we prove Theorem \ref{th:Rn} after a sequence of lemmas. \begin{lemma} Let $\xi \in \cR_n$ for some $n \ge 1$. Then $g^i(\xi) \in \cR_{n-i}$ for all $i = 1,2,\ldots,n$. \label{le:gForwards} \end{lemma} \begin{proof} We have $\zeta_n(\xi) > 0$ and $\zeta_{n+1}(\xi) \le 0$ by \eqref{eq:Rn}. Thus $\zeta_{n-i} \left( g^i(\xi) \right) > 0$ and $\zeta_{n-i+1} \left( g^i(\xi) \right) \le 0$ by \eqref{eq:zetan}. Also $g^i(\xi) \in \Phi$ by Proposition \ref{pr:PhiForwardInvariant}. Thus $g^i(\xi) \in \cR_{n-i}$ by \eqref{eq:Rn}. \end{proof} \begin{lemma} Let $\xi \in \Phi$ with $g(\xi) \in \cR_{n-1}$ for some $n \ge 1$. Then $\xi \in \cR_n$. \label{le:gBackwards} \end{lemma} \begin{proof} We have $\zeta_{n-1}(g(\xi)) > 0$ and $\zeta_n(g(\xi)) \le 0$ by \eqref{eq:Rn}. Thus $\zeta_n(\xi) > 0$ and $\zeta_{n+1}(\xi) \le 0$ by \eqref{eq:zetan}. So $\xi \in \cR_n$ because also $\xi \in \Phi$. \end{proof} \begin{lemma} Let $\xi \in \cR_n$ for some $n \ge 1$. Then $\zeta_0(g(\xi)) > 0$. \label{le:zeta0gxi} \end{lemma} \begin{proof} We have $\zeta_n(\xi) > 0$ by \eqref{eq:Rn}, thus $\zeta_1 \left( g^{n-1}(\xi) \right) > 0$ by \eqref{eq:zetan}. Thus $\zeta_1(\xi) > 0$ by Proposition \ref{pr:psiHat} (recall the sign of $\zeta_1$ is the same as that of $\hat{\psi}$). That is, $\zeta_0(g(\xi)) > 0$. \end{proof} \begin{lemma} Let $\xi \in \Phi$ and write $g^i(\xi) = \left( \tau_{L,i}, \delta_{L,i}, \tau_{R,i}, \delta_{R,i} \right)$ for each $i$. Then $\tau_{L,2} > \tau_L^2 \tau_R^2$ and $\tau_{R,2} < \tau_L \tau_R$. 
\label{eq:g2Bound} \end{lemma} \begin{proof} By \eqref{eq:g}, \begin{equation} \tau_{L,2} = \tau_{R,1}^2 - 2 \delta_{R,1} = \left( \tau_L \tau_R - \delta_L - \delta_R \right)^2 - 2 \delta_L \delta_R \,, \nonumber \end{equation} which can be rearranged as \begin{equation} \tau_{L,2} = \left( \tau_L \tau_R - \delta_L \right)^2 + \left( \tau_L \tau_R - \delta_R \right)^2 - \tau_L^2 \tau_R^2 \,. \nonumber \end{equation} Then from the bounds in \eqref{eq:saddleSaddleRegion} we obtain $\tau_{L,2} > \tau_L^2 \tau_R^2$. Also \begin{equation} \tau_{R,2} = \tau_{L,1} \tau_{R,1} - \delta_{L,1} - \delta_{R,1} < \tau_{L,1} \tau_{R,1} \,. \nonumber \end{equation} Since $\tau_{L,1} > 1$ and $\tau_{R,1} < 0$, we have $\tau_{L,1} \tau_{R,1} < \tau_{R,1} = \tau_L \tau_R - \delta_L - \delta_R < \tau_L \tau_R$, and so $\tau_{R,2} < \tau_L \tau_R$. \end{proof} \begin{proof}[Proof of Theorem \ref{th:Rn}] Suppose for a contradiction that the $\cR_n$ are {\em not} mutually disjoint. So there exists $\xi \in \cR_m \cap \cR_n$ for some $0 \le m < n$. This implies $g^{n-1}(\xi) \in \cR_1$ by Lemma \ref{le:gForwards}, and so $\hat{\psi}(g^{n-1}(\xi)) > 0$ (the sign of $\zeta_1$ is the same as that of $\hat{\psi}$). Also $g^m(\xi) \in \cR_0$, so $\hat{\psi}(g^m(\xi)) \le 0$. By Proposition \ref{pr:psiHat}, $\hat{\psi}(g^{m+i}(\xi)) \le 0$ for all $i \ge 0$. In particular $\hat{\psi}(g^{n-1}(\xi)) \le 0$, and this is a contradiction. Therefore the $\cR_n$ are mutually disjoint. Now choose any $\xi \in \Phi_{\rm BYG}$. To verify \eqref{eq:RnUnion} we show there exists $n \ge 0$ such that $\xi \in \cR_n$. Certainly this is true if $\hat{\psi}(\xi) \le 0$, because in this case $\xi \in \cR_0$, so let us assume $\hat{\psi}(\xi) > 0$. In view of Lemma \ref{eq:g2Bound}, we consider the map $\tilde{g} : \mathbb{R}^2 \to \mathbb{R}^2$ defined by \begin{equation} \tilde{g}(\tau_L,\tau_R) = \left( \left( \tau_L \tau_R \right)^2, \tau_L \tau_R \right).
\nonumber \end{equation} For any $j \ge 1$ the $j^{\rm th}$ iterate of $\tilde{g}$ is given explicitly by \begin{equation} \tilde{g}^j(\tau_L,\tau_R) = \left( \left( \tau_L \tau_R \right)^{2 k_j}, \left( \tau_L \tau_R \right)^{k_j} \right), \nonumber \end{equation} where $k_j = 3^{j-1}$. Then Lemma \ref{eq:g2Bound} implies $\tau_{R,2 j} < \left( \tau_L \tau_R \right)^{k_j}$ (using the notation of Lemma \ref{eq:g2Bound}) and so $\tau_{R,2 j} \to -\infty$ as $j \to \infty$. Thus there exists $m \ge 0$ such that $\tau_{R,m} \le -2$. Then $\hat{\psi}(g^m(\xi)) < 0$ by Lemma \ref{le:betaBounds}. Now let $n \in \{ 1,2,\ldots, m \}$ be the smallest integer for which $\hat{\psi}(g^n(\xi)) \le 0$. Then $\hat{\psi}(g^{n-1}(\xi)) > 0$, so $\phi(g^n(\xi)) > 0$. That is, $g^n(\xi) \in \cR_0$. Hence $\xi \in \cR_n$, by $n$ applications of Lemma \ref{le:gBackwards}. This completes our verification of \eqref{eq:RnUnion}. To show that $\cR_j$ is non-empty for all $j \ge 0$, first observe $\hat{\psi}(\xi^*) > 0$. Also $\cR_0$ is certainly non-empty. So for any $j \ge 1$ we can choose $\xi \in \Phi_{\rm BYG}$ sufficiently close to $\xi^*$ that $\hat{\psi}(g^i(\xi)) > 0$ for all $i = 0,1,\ldots,j-1$. Again let $n \ge 1$ be the smallest integer for which $\hat{\psi}(g^n(\xi)) \le 0$. Then $n \ge j$ and $g^n(\xi) \in \cR_0$. Thus $g^{n-j}(\xi) \in \cR_j$ (by again using Lemma \ref{le:gBackwards}), i.e.~$\cR_j$ is non-empty. Finally, choose any $\ee > 0$ and let $B_\ee(\xi^*)$ be the open ball in $\mathbb{R}^4$ centred at $\xi^*$ and with radius $\ee$ using the Euclidean norm. We now show there exists $m \ge 1$ such that $\cR_n \subset B_\ee(\xi^*)$ for all $n > m$. This will prove that $\cR_n \to \{ \xi^* \}$ as $n \to \infty$. Choose any $\xi \in \Phi$ with $\xi \notin B_\ee(\xi^*)$. It is a simple exercise to show that $|\tau_L \tau_R| \ge 1 + \frac{\ee}{\sqrt{2}}$. Thus, as above, there exists $m \ge 0$ such that $\tau_{R,m} \le -2$ and $\xi \in \cR_n$ for some $n \le m$.
Hence for any $n > m$ the region $\cR_n$ contains no points outside of $B_\ee(\xi^*)$. That is, $\cR_n \subset B_\ee(\xi^*)$ for all $n > m$ and therefore $\cR_n \to \{ \xi^* \}$ as $n \to \infty$. \end{proof} \section{Positive Lyapunov exponents} \label{sec:lyap} \setcounter{equation}{0} For smooth maps, Lyapunov exponents are usually defined in terms of the derivative of the map. The border-collision normal form $f_\xi$ is not differentiable on $x=0$, so instead we work with one-sided directional derivatives, \S\ref{sub:osdd}. We then define Lyapunov exponents in terms of these derivatives, \S\ref{sub:lyap}. This definition coincides with the familiar interpretation of Lyapunov exponents as the asymptotic rate of separation of nearby forward orbits \cite{Si20e}. Then in \S\ref{sub:R0Proof} we prove Theorem \ref{th:R0}. \subsection{One-sided directional derivatives} \label{sub:osdd} \begin{definition} The {\em one-sided directional derivative} of a function $F : \mathbb{R}^2 \to \mathbb{R}^2$ at $z \in \mathbb{R}^2$ in a direction $v \in \mathbb{R}^2$ is \begin{equation} \rD_v^+ F(z) = \lim_{\delta \to 0^+} \frac{F(z + \delta v) - F(z)}{\delta}, \label{eq:Dplus} \end{equation} if this limit exists. \end{definition} The following result tells us that one-sided directional derivatives of the $n^{\rm th}$ iterate of \eqref{eq:f} exist everywhere and for all $n \ge 1$. This follows from the piecewise-linearity and continuity of \eqref{eq:f}. For a proof see \cite{Si20e}. \begin{lemma} For any $\xi \in \mathbb{R}^4$, $z \in \mathbb{R}^2$, $v \in \mathbb{R}^2$, and $n \ge 1$, $\rD_v^+ f_\xi^n(z)$ exists. \label{le:DplusExists} \end{lemma} \subsection{Lyapunov exponents} \label{sub:lyap} In view of Lemma \ref{le:DplusExists} we can use the following definition. 
\begin{definition} The {\em Lyapunov exponent} of $f_\xi$ at $z \in \mathbb{R}^2$ in a direction $v \in \mathbb{R}^2$ is \begin{equation} \lambda(z,v) = \limsup_{n \to \infty} \frac{1}{n} \ln \left( \left\| \rD_v^+ f_\xi^n(z) \right\| \right). \label{eq:limsup} \end{equation} \end{definition} If the forward orbit of $z$ does not intersect $x=0$, then $\rD f_\xi^n(z)$ (the Jacobian matrix of $f_\xi^n$ at $z$) is well-defined for all $n \ge 1$. Moreover, $\rD_v^+ f_\xi^n(z) = \rD f_\xi^n(z) v$, so in this case \eqref{eq:limsup} reduces to the usual expression given for smooth maps. The following result is Theorem 2.1 of \cite{GlSi21}, except in \cite{GlSi21} only forward orbits that do not intersect $x=0$ were considered. The generalisation to one-sided directional derivatives is elementary so we do not provide a proof. The proof in \cite{GlSi21} is achieved by constructing an invariant expanding cone for multiplying vectors $v$ under the matrices $A_L$ and $A_R$. The derivative in \eqref{eq:limsup} can be written as $v$ left-multiplied by $n$ matrices each of which is either $A_L$ or $A_R$. The cone implies the vector increases in norm each time it is multiplied by $A_L$ or $A_R$, so certainly the norm increases on average, i.e.~$\lambda(z,v) > 0$. \begin{proposition} For any $\xi \in \Phi_{\rm BYG}$, $z \in \mathbb{R}^2$, and $v = (1,0)$, \begin{equation} \liminf_{n \to \infty} \frac{1}{n} \ln \left( \left\| \rD_v^+ f_\xi^n(z) \right\| \right) > 0. \label{eq:liminf} \end{equation} \label{pr:lyap} \end{proposition} \subsection{Arguments leading to a proof of Theorem \ref{th:R0}} \label{sub:R0Proof} We are now ready to prove Theorem \ref{th:R0}. Once we have constructed the set $\Delta$, the equality \eqref{eq:LambdaAsInfiniteIntersection} follows from the arguments given in the proof of Lemma 6.2 of \cite{GlSi21}. We reproduce these arguments here for convenience. 
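Proposition \ref{pr:lyap} can be illustrated numerically: for $v = (1,0)$ the one-sided derivative is $v$ left-multiplied by the matrices $A_L$ and $A_R$ along the orbit, so a finite-$n$ truncation of \eqref{eq:limsup} is straightforward to compute. A minimal sketch, assuming the standard normal-form Jacobians A_L = [[tau_L, 1], [-delta_L, 0]] and A_R = [[tau_R, 1], [-delta_R, 0]] and sample parameter values in the robust chaos region (neither the matrices nor the parameter values are specified in this excerpt):

```python
import math

def lyapunov_estimate(tauL, dL, tauR, dR, z0, n, mu=1.0):
    """Finite-n truncation of the limit superior in eq:limsup with v = (1,0).
    Assumes the standard Jacobians A_L = [[tauL,1],[-dL,0]],
    A_R = [[tauR,1],[-dR,0]] of the border-collision normal form."""
    x, y = z0
    v = (1.0, 0.0)
    log_growth = 0.0
    for _ in range(n):
        # For v = (1,0) the one-sided derivative at x = 0 uses the
        # right half-plane matrix, since z + delta*v has x > 0.
        t, d = (tauL, dL) if x < 0 else (tauR, dR)
        v = (t * v[0] + v[1], -d * v[0])
        norm = math.hypot(v[0], v[1])
        log_growth += math.log(norm)
        v = (v[0] / norm, v[1] / norm)   # renormalise to avoid overflow
        x, y = t * x + y + mu, -d * x
    return log_growth / n
```

With, for example, $(\tau_L, \delta_L, \tau_R, \delta_R) = (1.5, 0.3, -1.5, 0.3)$ the estimate is positive, consistent with Proposition \ref{pr:lyap}.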
\begin{proof}[Proof of Theorem \ref{th:R0}] The set $\Lambda(\xi)$ is bounded because $X \in \Omega$ and $\Omega$ is bounded and forward invariant (Proposition \ref{pr:Omega}). Also $\Lambda(\xi)$ is connected and invariant by the definition of an unstable manifold. With $v = (1,0)$ and any $z \in \Lambda(\xi)$, the Lyapunov exponent $\lambda(z,v)$ is well-defined by Lemma \ref{le:DplusExists}. Moreover $\lambda(z,v) > 0$ by Proposition \ref{pr:lyap} and because the supremum limit is greater than or equal to the infimum limit. It remains for us to prove part (iii). Here we assume $\delta_R < 1$; also $\delta_L < 1$ by Lemma \ref{le:deltaLge1}. Since $\xi \in \cR_0$ we have $\zeta_1(\xi) \le 0$ and so $\psi(\xi) \ge 0$ by \eqref{eq:psi2}. Thus $f_\xi^2(T)$ lies on or to the left of $E^s(X)$ by Proposition \ref{pr:psi}. Let $Z$ denote the intersection of $E^s(X)$ with $\roverline{T f_\xi^2(T)}$ (the line segment connecting $T$ and $f_\xi^2(T)$). Notice $\roverline{X T}$ and $\roverline{T Z}$ are subsets of $W^u(X)$ while $\roverline{Z X}$ is a subset of $W^s(X)$. Let $\Delta_0$ be the filled triangle with vertices $X$, $T$, and $Z$, see Fig.~\ref{fig:igRN_X}-a. Also let $\Delta = \bigcup_{n=0}^\infty f_\xi^n(\Delta_0)$. The set $\Delta$ is forward invariant, by definition, and has non-empty interior because it contains $\Delta_0$. As in \cite{GlSi21}, let $\tilde{\Delta} = \bigcap_{n=0}^\infty f_\xi^n(\Delta)$. We now show $\Lambda(\xi) \subset \tilde{\Delta}$. Choose any $z \in \Lambda(\xi)$. Let $\{ z_k \}$ be a sequence of points in $W^u(X)$ with $z_k \to z$ as $k \to \infty$. For each $k$, $f_\xi^{-n}(z_k) \to X$ as $n \to \infty$, thus there exists $n_k \ge 1$ such that $f_\xi^{-{n_k}}(z_k) \in \roverline{X T}$. Thus $f_\xi^{-{n_k}}(z_k) \in \Delta_0$, so $z_k \in \Delta$. This is true for all $k$, thus $z \in \Delta$. But $z \in \Lambda(\xi)$ is arbitrary, thus $\Lambda(\xi) \subset \Delta$. 
Also $\Lambda(\xi)$ is forward invariant, thus $\Lambda(\xi) \subset \tilde{\Delta}$. Finally we show $\tilde{\Delta} \subset \Lambda(\xi)$. The determinants $\delta_L$ and $\delta_R$ of the pieces of $f_\xi$ are both less than $1$, thus the area (Lebesgue measure) of $f_\xi^n(\Delta)$ converges to $0$ as $n \to \infty$. Now choose any $z \in \tilde{\Delta}$. Then $z \in f_\xi^n(\Delta)$ for all $n \ge 0$ and so the distance of $z$ to the boundary of $f_\xi^n(\Delta)$ converges to $0$ as $n \to \infty$. The boundary of $\Delta_0$ consists of $\roverline{X Z}$, which lies in the part of $W^s(X)$ that converges linearly to $X$, and two line segments in $W^u(X)$. Consequently the boundary of $f_\xi^n(\Delta_0)$ is contained in $\roverline{X f_\xi^n(Z)} \cup W^u(X)$ for all $n \ge 0$. Thus the boundary of $\Delta$ is contained in $\roverline{Z f_\xi(Z)} \cup W^u(X)$, so the boundary of $f_\xi^n(\Delta)$ is contained in $\roverline{f_\xi^n(Z) f_\xi^{n+1}(Z)} \cup W^u(X)$ for all $n \ge 0$. But $\roverline{f_\xi^n(Z) f_\xi^{n+1}(Z)}$ converges to $X$, hence the distance of $z$ to $W^u(X)$ must be $0$. Thus $z \in \Lambda(\xi)$. But $z \in \tilde{\Delta}$ is arbitrary, thus $\tilde{\Delta} \subset \Lambda(\xi)$. This completes our demonstration of \eqref{eq:LambdaAsInfiniteIntersection}. \end{proof} \section{Implementing the renormalisation recursively} \label{sec:mainProof} \setcounter{equation}{0} In this section we work towards a proof of Theorem \ref{th:affinelyConjugate}. First in \S\ref{sub:OmegaPrime} we use the unstable manifold of $X$ to construct a triangle $\Omega'(\xi)$ that maps to $\Omega(g(\xi))$ under the affine transformation $h_\xi$ for converting $f_\xi^2$ to $f_{g(\xi)}$. In particular we show that $\Omega'(\xi)$ is a subset of both $\Omega(\xi)$ and $\Pi_\xi$ and this allows us to implement the renormalisation recursively in \S\ref{sub:proofByInduction}. 
\subsection{Properties of the set mapping to $\Omega(g(\xi))$} \label{sub:OmegaPrime} Suppose $\xi \in \Phi$ with $\zeta_1(\xi) > 0$ (equivalently $\psi(\xi) < 0$). Then $f_\xi^2(T)$ lies to the right of $E^s(X)$ by Proposition \ref{pr:psi}. Thus $f_\xi^3(T)$ lies to the left of $E^s(X)$ (because $\lambda_R^u < 0$). Now let $Q$ denote the intersection of $E^u(X)$ with the line through $f_\xi^3(T)$ and parallel to $E^s(X)$, see Fig.~\ref{fig:igRN_X}-b. Then let $\Omega'(\xi)$ be the filled triangle with vertices $f_\xi(T)$, $f_\xi^3(T)$, and $Q$. \begin{lemma} Let $\xi \in \Phi$ with $\zeta_1(\xi) > 0$. Then \begin{romanlist} \item \label{it:InPi} $\Omega'(\xi) \subset \Pi_\xi$, \item \label{it:ImageDisjoint} $\Omega'(\xi) \cap f_\xi \left( \Omega'(\xi) \right) = \varnothing$, \item \label{it:ImageInRight} $f_\xi \left( \Omega'(\xi) \right) \subset \left\{ (x,y) \in \mathbb{R}^2 \,\big|\, x > 0 \right\}$, \item \label{it:MapsToOmega} $h_\xi \left( \Omega'(\xi) \right) = \Omega(g(\xi))$, \item \label{it:InOmega} and if $\zeta_0(\xi) > 0$ then $\Omega'(\xi) \subset \Omega(\xi)$. \end{romanlist} \label{le:OmegaPrime} \end{lemma} \begin{proof} Let $\Xi_R = \left\{ (x,y) \in \mathbb{R}^2 \,\big|\, x > 0 \right\}$ denote the open right half-plane and let $\Psi$ be the triangle with vertices $X$, $f_\xi(T)$, and $V$. We now prove parts \ref{it:InPi}--\ref{it:InOmega} in order. \begin{romanlist} \item Observe $f_\xi(X) = X \in \Xi_R$, thus $X \in \Pi_\xi$ by \eqref{eq:Pi}. Similarly $f_\xi(V) \in \Xi_R$, thus $V \in \Pi_\xi$. Also $f_\xi^2(T) \in \Xi_R$, thus $f_\xi(T) \in \Pi_\xi$. That is, all vertices of $\Psi$ belong to $\Pi_\xi$, thus $\Psi \subset \Pi_\xi$ because these sets are convex. From \eqref{eq:T} and \eqref{eq:f2T} we find that the slope of the line through $T$ and $f_\xi^2(T)$ is $\frac{-\delta_L}{\tau_L - \lambda_R^s}$, which is negative, thus $f_\xi^2(T)$ lies to the left of $T$. Consequently $f_\xi^3(T)$ lies above $f_\xi(T)$. 
Also $f_\xi(T)$ lies above $V$ because \begin{equation} f_\xi(T)_2 - V_2 = \frac{1 - \delta_R}{\left( 1 - \lambda_R^s \right) \left( 1 - \frac{1}{\lambda_R^u} \right)} > 0. \nonumber \end{equation} Therefore $f_\xi^3(T) \in \Psi$. Thus $\Omega'(\xi) \subset \Psi \subset \Pi_\xi$. \item Observe $f_\xi(\Psi)$ is the quadrilateral with vertices $X$, $f_\xi(V)$, $f_\xi^2(T)$, and $T$. Thus $\Psi$ and $f_\xi(\Psi)$ intersect only at $X$. But $\Omega'(\xi) \subset \Psi$ does not contain $X$, thus $\Omega'(\xi) \cap f_\xi \left( \Omega'(\xi) \right) = \varnothing$. \item The left-most point of $f_\xi(\Psi)$ is $X \in \Xi_R$, thus $f_\xi \left( \Omega'(\xi) \right) \subset f_\xi(\Psi) \subset \Xi_R$. \item For the map $f_\xi^2$, the fixed point $X$ is a saddle with positive eigenvalues. Thus its unstable manifold has two dynamically independent branches. The branch that emanates to the left has its first and second kinks at $f_\xi(T)$ and $f_\xi^3(T)$. Let $\mathcal{B}$ denote this branch up to the second kink, that is $\mathcal{B}$ is the union of the line segments $\roverline{X f_\xi(T)}$ and $\roverline{f_\xi(T) f_\xi^3(T)}$. By the conjugacy relation \eqref{eq:conjugacy}, $h_\xi(\mathcal{B})$ is part of one branch of the unstable manifold of the analogous fixed point of $f_{g(\xi)}$. Since $h_\xi$ flips points across the switching line \eqref{eq:oppositeSigns}, $h_\xi(\mathcal{B})$ is part of the unstable manifold of $Y$ (for the map $f_{g(\xi)}$). This branch has its first and second kinks at $D$ and $f_{g(\xi)}(D)$, thus $h_\xi(\mathcal{B})$ is the union of the line segments $\roverline{Y D}$ and $\roverline{D f_{g(\xi)}(D)}$. By similar reasoning $Q$ maps under $h_\xi$ to the point $B$ of $f_{g(\xi)}$. This verifies part \ref{it:MapsToOmega}. \item The first components of $T$ and $D$ are $T_1 = \frac{1}{1 - \lambda_R^s}$ and $D_1 = \frac{1}{1 - \lambda_L^s}$. Observe $0 < T_1 < D_1$, thus $T$ lies between $(0,0)$ and $D$. 
By iterating these under $f_{R,\xi}$ we have that $f_\xi(T)$ lies on the line segment connecting $(1,0)$ and $f_\xi(D)$. Now suppose $\zeta_0(\xi) > 0$. Then $f_\xi(T) \in \Omega(\xi)$ because $(1,0) \in \Omega(\xi)$, $f_\xi(D) \in \Omega(\xi)$, and $\Omega(\xi)$ is convex. Moreover, $f_\xi^3(T) \in \Omega(\xi)$ because $\Omega(\xi)$ is forward invariant (Proposition \ref{pr:Omega}). Also $X \in \Omega(\xi)$ by Lemma \ref{le:LambdaInOmega}. Thus the triangle with vertices $f_\xi(T)$, $f_\xi^3(T)$, and $X$ is contained in $\Omega(\xi)$ (again by the convexity of $\Omega(\xi)$). This triangle contains $\Omega'(\xi)$, thus $\Omega'(\xi) \subset \Omega(\xi)$ as required. \end{romanlist} \end{proof} \subsection{Arguments leading to a proof of Theorem \ref{th:affinelyConjugate}} \label{sub:proofByInduction} \begin{proof}[Proof of Theorem \ref{th:affinelyConjugate}] Let $I_n = \{ 0,1,\ldots, 2^n-1 \}$. We use induction on $n$ to prove Theorem \ref{th:affinelyConjugate} and show that \begin{equation} \text{if $\zeta_0(\xi) > 0$ then $S_i \subset \Omega(\xi)$ for all $i \in I_n$}. \label{eq:affinelyConjugateProof0} \end{equation} With $n=0$ the statements in Theorem \ref{th:affinelyConjugate} are true trivially with $S_0 = \Lambda(\xi)$. Also \eqref{eq:affinelyConjugateProof0} is true because $\zeta_0(\xi) > 0$ (since $\xi \in \cR_0$) and $S_0 \subset \Omega(\xi)$ by Lemma \ref{le:LambdaInOmega}. Now suppose the result is true for some $n \ge 0$; it remains for us to verify the result for $n+1$. Choose any $\xi \in \cR_{n+1}$. Then $g(\xi) \in \cR_n$ by Lemma \ref{le:gForwards}. 
By the induction hypothesis applied to the point $g(\xi)$, we have $g^{n+1}(\xi) \in \cR_0$ and there exist mutually disjoint sets $\tilde{S}_0, \tilde{S}_1, \ldots, \tilde{S}_{2^n-1} \subset \mathbb{R}^2$ with $f_{g(\xi)} \left( \tilde{S}_i \right) = \tilde{S}_{(i+1) \,{\rm mod}\, 2^n}$ and \begin{equation} f_{g(\xi)}^{2^n} \big|_{\tilde{S}_i} ~\text{is affinely conjugate to}~ f_{g^{n+1}(\xi)} \big|_{\Lambda(g^{n+1}(\xi))} \label{eq:affinelyConjugateProof10} \end{equation} for all $i \in I_n$. Also $\zeta_0(g(\xi)) > 0$ by Lemma \ref{le:zeta0gxi}, thus by \eqref{eq:affinelyConjugateProof0} the induction hypothesis also gives $\tilde{S}_i \subset \Omega(g(\xi))$ for all $i \in I_n$. Let $S_{2i} = h_\xi^{-1} \left( \tilde{S}_i \right)$ for each $i \in I_n$ (these sets are mutually disjoint because $h_\xi$ is a homeomorphism). Let $S_{2i+1} = f_\xi(S_{2i})$ for each $i \in I_n$ (these sets are mutually disjoint because $f_\xi$ is a homeomorphism). For any $i,j \in I_n$ we have $S_{2i} \subset \Omega'(\xi)$ by Lemma \ref{le:OmegaPrime}\ref{it:MapsToOmega} and $S_{2j+1} \cap \Omega'(\xi) = \varnothing$ by Lemma \ref{le:OmegaPrime}\ref{it:ImageDisjoint}, so $S_{2i} \cap S_{2j+1} = \varnothing$. Therefore the sets $S_0, S_1, \ldots, S_{2^{n+1}-1}$ are mutually disjoint. For each $i \in I_n$, $S_{2i} \subset \Pi_\xi$ by Lemma \ref{le:OmegaPrime}\ref{it:InPi}, so \begin{equation} f_\xi^2 \big|_{S_{2i}} ~\text{is affinely conjugate to}~ f_{g(\xi)} \big|_{\tilde{S}_i} \label{eq:affinelyConjugateProof20} \end{equation} by Proposition \ref{pr:conjugacy}. Also $f_\xi^2(S_{2i}) = S_{2i+2 \,{\rm mod}\, 2^{n+1}}$, so $f_\xi(S_{2i+1}) = f_{R,\xi}(S_{2i+1}) = S_{2i+2 \,{\rm mod}\, 2^{n+1}}$ using also Lemma \ref{le:OmegaPrime}\ref{it:ImageInRight}. Thus \begin{equation} f_\xi^2 \big|_{S_{2i+1}} ~\text{is affinely conjugate to}~ f_\xi^2 \big|_{S_{2i}} \nonumber \end{equation} using $f_{R,\xi}$ as the affine transformation. 
By further use of \eqref{eq:conjugacy} we have that $f_\xi^{2^{n+1}} \big|_{S_{2i}}$ and $f_\xi^{2^{n+1}} \big|_{S_{2i+1}}$ are affinely conjugate to $f_{g(\xi)}^{2^n} \big|_{\tilde{S}_i}$, thus also to $f_{g^{n+1}(\xi)} \big|_{\Lambda(g^{n+1}(\xi))}$ by \eqref{eq:affinelyConjugateProof10} (this verifies \eqref{eq:affinelyConjugate} for $n+1$). The induction hypothesis also implies \begin{equation} \bigcup_{i=0}^{2^n-1} \tilde{S}_i = {\rm cl} \left( W^u(\gamma_n) \right), \label{eq:affinelyConjugateProof40} \end{equation} where $\gamma_n$ is a periodic solution of $f_{g(\xi)}$ with symbolic itinerary $\cF^n(R)$. By \eqref{eq:conjugacy}, $h_\xi^{-1}(\gamma_n)$ is a periodic solution of $f_\xi^2$. Since $h_\xi$ flips the left and right half-planes, see \eqref{eq:oppositeSigns}, the symbolic itinerary of $h_\xi^{-1}(\gamma_n)$ is obtained by swapping $L$ and $R$'s in $\cF^n(R)$. Then $\gamma_{n+1} = h_\xi^{-1}(\gamma_n) \cup f_\xi \left( h_\xi^{-1}(\gamma_n) \right)$ is a periodic solution of $f_\xi$ and since $f_\xi \left( h_\xi^{-1}(\gamma_n) \right)$ is contained in the right half-plane (Lemma \ref{le:OmegaPrime}\ref{it:ImageInRight}) its symbolic itinerary is obtained by further replacing each $L$ with $LR$ and each $R$ with $RR$, hence $\gamma_{n+1}$ has symbolic itinerary $\cF^{n+1}(R)$. Also by \eqref{eq:affinelyConjugateProof20} and \eqref{eq:affinelyConjugateProof40}, \begin{equation} \bigcup_{i=0}^{2^{n+1}-1} S_i = {\rm cl} \left( W^u(\gamma_{n+1}) \right), \nonumber \end{equation} which verifies \eqref{eq:Siunion} for $n+1$. Finally, if $\zeta_0(\xi) > 0$ then for all $i \in I_n$ we have $S_{2i} \subset \Omega(\xi)$ by Lemma \ref{le:OmegaPrime}\ref{it:InOmega} and $S_{2i+1} \subset \Omega(\xi)$ because $\Omega(\xi)$ is forward invariant verifying \eqref{eq:affinelyConjugateProof0} for $n+1$. 
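The symbolic bookkeeping in the previous paragraph (swap $L \leftrightarrow R$, then replace each $L$ with $LR$ and each $R$ with $RR$) amounts to a one-line substitution; a small sketch, with the rule transcribed from the description above:

```python
def renorm_itinerary(word):
    """One step of the symbolic substitution described above: swap L <-> R,
    then replace each L with LR and each R with RR in the swapped word,
    i.e. every symbol s becomes swap(s) followed by R."""
    swap = {'L': 'R', 'R': 'L'}
    return ''.join(swap[s] + 'R' for s in word)
```

Starting from the itinerary `R`, successive applications give `LR`, `RRLR`, and so on, with the length doubling at each step as expected for the itineraries $\cF^n(R)$.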
\end{proof} \section{Discussion} \label{sec:conc} \setcounter{equation}{0} In this paper we have shown how part of the parameter space of \eqref{eq:f} naturally divides into regions $\cR_0, \cR_1, \ldots$. As demonstrated by Theorem \ref{th:affinelyConjugate}, renormalisation enables us to describe the dynamics in each $\cR_n$ with $n \ge 1$ based on knowledge of the dynamics in $\cR_0$. Theorem \ref{th:R0} describes the dynamics in $\cR_0$, but is incomplete. It remains to show that the attractor $\Lambda$ is unique and satisfies stronger notions of chaos throughout $\cR_0$. We would also like to extend the results to higher-dimensional maps. Finally, we comment on the analogue of Feigenbaum's constant for our renormalisation by looking at the rate at which the regions $\cR_n$ converge to the fixed point $\xi^*$. The $4 \times 4$ Jacobian matrix $\rD g(\xi^*)$ has exactly one unstable eigenvalue: $2$. It follows that the diameter of $\cR_n$ divided by the diameter of $\cR_{n+1}$ tends, as $n \to \infty$, to the constant $2$. \section*{Acknowledgements} The authors were supported by Marsden Fund contract MAU1809, managed by Royal Society Te Ap\={a}rangi.
https://arxiv.org/abs/1612.04443
Indivisibility of class numbers of imaginary quadratic fields
We quantify a recent theorem of Wiles on class numbers of imaginary quadratic fields by proving an estimate for the number of negative fundamental discriminants down to -X whose class numbers are indivisible by a given prime and whose imaginary quadratic fields satisfy any given set of local conditions. This estimate matches the best results in the direction of the Cohen-Lenstra heuristics for the number of imaginary quadratic fields with class number indivisible by a given prime. This general result is applied to study rank 0 twists of certain elliptic curves.
\section{Introduction} Ideal class numbers of imaginary quadratic fields have been studied since Gauss, who conjectured that for any given $h$, there are only finitely many negative fundamental discriminants $D$ such that $h(D) = h$. The history of Gauss' Conjecture is rich. The conjecture was shown to be true by work of Heilbronn \cite{Heilbronn}, who did not show how to find the imaginary quadratic fields with a given class number. Siegel \cite{Siegel} proved that $h(-D)$ grows like $|D|^{1/2}$, but did so ineffectively. In other words, for each $\epsilon > 0$ he proved that for sufficiently large $D$ there are positive constants $c_1$ and $c_2$ for which $$ c_1 D^{1/2 -\epsilon} < h(-D) < c_2 D^{1/2+\epsilon}. $$ While explicit upper bounds for $h(-D)$ are known, the constant $c_1$ is ineffective for every $\epsilon$. Baker \cite{Baker} and Heegner \cite{Heegner} computed the complete finite list of negative fundamental discriminants $D$ for which $h(D) = 1$. The works of Gross and Zagier \cite{GrZa} and Goldfeld \cite{Goldfeld2} produce a lower bound for $h(D)$ which is asymptotically smaller than Siegel's bound, but is effective and allows one (in principle) to compute the complete list of imaginary quadratic fields with any given class number. It is natural to ask what else can be said about the structure of ideal class groups. For example, how often should we expect the $\ell$-torsion subgroup of the class group to be trivial for a given odd prime $\ell$? The {\it Cohen-Lenstra heuristics} \cite{CL} predict an answer: \begin{equation}\label{eq:CL} \lim_{X \to \infty} \frac{ \# \{ - X < D < 0: \ell \nmid h(D) \}}{X} = \prod_{n = 1}^{\infty} \left( 1 - \frac{1}{\ell^{n}} \right) = 1 - \frac{1}{\ell} - \frac{1}{\ell^2} + \frac{1}{\ell^5} + \cdots \end{equation} Here the numbers $D$ are fundamental discriminants. 
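For small discriminants the density predicted by the heuristic can be compared with direct computation: $h(D)$ is the number of reduced primitive binary quadratic forms of discriminant $D$, and the infinite product truncates quickly. A sketch (the form-counting routine is the classical reduction-theory algorithm, not something taken from this paper):

```python
import math

def class_number(D):
    """h(D) for a fundamental discriminant D < 0, computed by counting
    reduced primitive forms a x^2 + b x y + c y^2 of discriminant D
    (|b| <= a <= c, with b >= 0 when |b| = a or a = c)."""
    h, b = 0, D % 2
    while 3 * b * b <= -D:
        n = (b * b - D) // 4              # n = a * c
        a = max(b, 1)
        while a * a <= n:
            if n % a == 0:
                c = n // a
                if math.gcd(math.gcd(a, b), c) == 1:
                    # +b and -b give distinct reduced forms unless
                    # b = 0, |b| = a, or a = c
                    h += 1 if b == 0 or a == b or a == c else 2
            a += 1
        b += 2
    return h

def cohen_lenstra_density(ell, terms=80):
    """Truncation of the product prod_{n >= 1} (1 - ell^(-n))."""
    c = 1.0
    for n in range(1, terms + 1):
        c *= 1.0 - float(ell) ** (-n)
    return c
```

For $\ell = 3$ the product evaluates to approximately $0.5601$, consistent with the Davenport-Heilbronn lower bound of $1/2$ discussed below.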
Note that the Cohen-Lenstra heuristics actually predict much more about the structure of the class groups, give similar predictions for real quadratic fields, and have been generalized by others to other number fields. For a concise description for the quadratic number field case, the reader is encouraged to read Chapter 5 Section 10 of \cite{Cohen}. Numerical data provides some evidence for the Cohen-Lenstra heuristics, and for $\ell = 3$, strong theorems supporting equation (\ref{eq:CL}) are known. Gauss' {\it genus theory} says that the number of order 2 elements of the class group is $2^{t-1} - 1$, where $t$ is the number of distinct prime divisors of the discriminant (see Proposition 3.11 of \cite{Cox}). For $\ell = 3$, a theorem of Davenport and Heilbronn \cite{DH} says that if $\epsilon >0$, then for $X$ sufficiently large we have $$ \frac{ \# \{-X < D < 0: 3 \nmid h(D) \}}{X} \ge \frac{1}{2} - \epsilon. $$ They proved this by showing that the cubic number fields are in a discriminant-preserving correspondence with a certain set of classes of binary cubic forms, and they used this fact to count the order $3$ elements of class groups of quadratic number fields. For $\ell > 3$ much less is known about the $\ell$-torsion of class groups. Soundararajan \cite{Sound} used analytic techniques to count $\ell$-torsion elements of class groups, and showed $$ \# \{ -X < D < 0 : \ell | h(D) \} \gg X^{\frac{1}{2} + \epsilon(\ell)}, $$ where $\epsilon (\ell) >0$ approaches 0 as $\ell \to \infty$. Kohnen and Ono \cite{KO} used the theory of modular forms to study the occurrence of class groups with trivial $\ell$-torsion for $\ell > 3$. They proved that for any $\epsilon >0$, for sufficiently large $X$ we have $$ \# \{-X < D < 0 : \ell \nmid h(D) \} \ge \left( \frac{2 (\ell -2)}{\sqrt{3} (\ell - 1)} - \epsilon \right) \frac{\sqrt{X}}{\log{X}}. 
$$ Information about the structure of class groups of quadratic fields can be used to study questions about Mordell-Weil groups of elliptic curves in families of quadratic twists; however, additional information about the splitting and ramification data of the quadratic number fields is often required for such applications. For $E: y^2 = p(x)$ an elliptic curve over $\mathbb{Q}$ with $p(x)$ in Weierstrass form, we define the twist of $E$ by a fundamental discriminant $D$ to be the elliptic curve defined by $$ E_D : y^2 D = p(x). $$ Note that $E_D$ is isomorphic to $E$ over $\mathbb{Q} (\sqrt{D})$, but not over $\mathbb{Q}$. The {\it Heegner hypotheses} are a set of conditions about how the rational primes of bad reduction of an elliptic curve split in an imaginary quadratic field. The work of Kolyvagin on the Birch and Swinnerton-Dyer Conjecture (see \cite{Ko1}, \cite{Ko2}) is based on the existence of suitable quadratic twists of elliptic curves in which the twisting discriminants satisfy prescribed Heegner hypotheses. Combining his work with an important theorem of Gross and Zagier, who showed that the height of the Heegner point is a multiple of the derivative of the $L$-series of the elliptic curve at 1, it follows that the Birch and Swinnerton-Dyer Conjecture holds when the analytic rank is at most 1. Heegner points have played an important role in studying Goldfeld's Conjecture, which concerns the ranks of the twists as $D$ varies over the set of fundamental discriminants. Define $M^r_E (X) := \# \{D: |D|<X, \mbox{ord}_{s=1} L(s, E_D) = r \}$. Goldfeld conjectured that if $E/\mathbb{Q}$ is an elliptic curve and $r$ is 0 or 1, then $$ M^r_E (X) \sim \frac{X}{2}, \quad X \to \infty. $$ The best general results on Goldfeld's Conjecture were, until recently, due to Perelli, Pomykala, and Skinner (see \cite{OS} and \cite{PePo}). For the rank 0 case, Ono and Skinner \cite{OS} showed that \begin{equation}\label{eq:OS} M^0_E (X) \gg \frac{ X}{\log {X}}. 
\end{equation} For the rank 1 case, Perelli and Pomykala \cite{PePo} showed \begin{equation}\label{eq:PePo} M^1_E (X) \gg_{\epsilon} X^{1 - \epsilon} \end{equation} for any $\epsilon > 0$. Earlier this year, Kriz and Li \cite{KL} showed for a large class of elliptic curves, $$ M^r_E (X) \gg \frac{X}{\log^{\frac{5}{6}}(X)}. $$ Strong results on Goldfeld's conjecture have been obtained for special elliptic curves by making use of the aforementioned theorem of Davenport and Heilbronn on the 3-indivisibility of class numbers. Using the half-integral weight modular forms established by Waldspurger and a theorem of Frey \cite{Frey}, James \cite{James} showed that the elliptic curve with Cremona label 14B satisfies $M^0_E (X) \gg X$. Showing that an elliptic curve has a positive proportion of twists with rank one requires more than Waldspurger's modular forms. Vatsal \cite{Vatsal} used a theorem of Gross and Zagier \cite{GrZa} to show that the elliptic curve $E = X_0(19)$ has $M^r_E (X) \gg X$ for $r = 0,1$. Vatsal's argument was extended by Byeon \cite{Byeon} to elliptic curves in the isogeny class of an elliptic curve with a nontrivial cuspidal 3-torsion point and square-free conductor. The results toward Goldfeld's conjecture described above apply to certain elliptic curves with residually reducible mod 3 Galois representations, and rely on a refinement of the theorem of Davenport and Heilbronn due to Horie and Nakagawa \cite{HN}. Their refinement showed that a positive proportion of imaginary quadratic fields have trivial $3$-torsion and satisfy prescribed local conditions. One might hope to extend the work of Horie and Nakagawa to a theorem on $\ell$-indivisibility of class groups for $\ell > 3$ by refining the work of Kohnen and Ono \cite{KO} in an analogous way. 
A barrier to refining Kohnen and Ono's theorem is showing that the modular forms arising in their argument have Fourier coefficients which are supported on prescribed arithmetic progressions and are nontrivial modulo $\ell$. This is difficult because for many modular forms this property does not hold. For example, the values of the partition function $p(n)$ are the Fourier coefficients of the modular form $1 / \eta(z)$, and the Ramanujan congruences tell us $p(5n + 4) \equiv 0 \pmod{5}$, and so sieving the Fourier expansion of this form can return a modular form which is trivial modulo 5. Here $\eta(z):=q^{1/24}\prod_{n=1}^{\infty}(1-q^n)$ (with $q:=e^{2\pi i z}$ throughout) is Dedekind's eta-function, so that $1/\eta(z)$ is a weight $-1/2$ weakly holomorphic modular form. Recently, Wiles \cite{Wiles} established the existence of imaginary quadratic fields with prescribed local data whose class numbers are indivisible by a given odd prime $\ell$. \begin{theorem*}{(Wiles)} Let $\ell \ge 5$ be prime, and let $ S_0 , S_+ , S_{-}$ be finite disjoint sets of distinct odd primes not containing $\ell$ such that the following are true: \begin{enumerate} \item $S_0$ does not contain any primes which are $ 1 \pmod{\ell}$, \item $S_+$ does not contain any primes which are $ -1 \pmod{\ell}$, \item $S_-$ does not contain any primes which are $1 \pmod{\ell}$ and $-1 \pmod{4}$. \end{enumerate} Then there exists a negative fundamental discriminant $D$ such that $\ell \nmid h(D)$, and $\mathbb{Q} (\sqrt{D})$ splits at every prime in $S_+$, is inert at every prime in $S_-$, and ramifies at every prime in $S_0$. \end{theorem*} In view of the work of Horie and Nakagawa when $\ell=3$ \cite{HN}, the goal of the present work is to prove a quantified version of the theorem of Wiles for the $\ell > 3$ case by obtaining an estimate for the number of imaginary quadratic fields which satisfy the conclusion of Wiles' theorem, similar to the estimate of Kohnen and Ono. 
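The hypotheses of Wiles' theorem are finitely many congruence conditions, so candidate triples of sets can be screened mechanically; a small sketch (primality of the entries is assumed, not checked):

```python
def wiles_admissible(ell, S0, Splus, Sminus):
    """Check the hypotheses of Wiles' theorem for an odd prime ell >= 5
    and (intended) disjoint sets of odd primes not containing ell."""
    sets = (S0, Splus, Sminus)
    all_p = set().union(*sets)
    if len(all_p) != sum(len(s) for s in sets):
        return False                      # sets not disjoint
    if ell in all_p or 2 in all_p:
        return False                      # must be odd primes != ell
    if any(p % ell == 1 for p in S0):
        return False                      # condition (1)
    if any(p % ell == ell - 1 for p in Splus):
        return False                      # condition (2)
    if any(p % ell == 1 and p % 4 == 3 for p in Sminus):
        return False                      # condition (3)
    return True
```

For example, with $\ell = 5$ the triple $S_0 = \{3\}$, $S_+ = \{7\}$, $S_- = \{13\}$ passes all three conditions, while $S_0 = \{11\}$ fails condition (1) since $11 \equiv 1 \pmod{5}$.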
We define \begin{equation}\label{eq:sturmbound} M_{\Sigma} := \frac{1}{8} [ \Gamma_0 (1) : \Gamma_0 ( N_{\Sigma})] \end{equation} and \begin{equation}\label{eq:level} N_{\Sigma} := 4 Q_{\Sigma}^6 (\prod_{q \in S_0 \cup S_- \cup S_+} q^6), \end{equation} where $Q_{\Sigma}$ is equal to $1$ if $S_-$ is nonempty and otherwise is the smallest odd prime not contained in $S_+ \cup S_- \cup S_0$ which is not congruent to 1 modulo $\ell$ and -1 modulo 4. Our main theorem is the following estimate for the smallest discriminant divisible by a given prime $p$ lying in a certain arithmetic progression which satisfies the conclusion of Wiles' theorem. \begin{theorem}\label{thm:maintheorem} Suppose $p > M_{\Sigma}$ is a prime such that the following are true: \begin{enumerate} \item We have that $\left( \frac{p}{\ell} \right) = 1$ and $p \not\equiv 1 \pmod{\ell}$, \item We have that $p \equiv 1 \pmod{8}$, \item For odd primes $q \le M_{\Sigma}$, $q \neq \ell$, we have $\left( \frac{p}{q} \right) = 1.$ \end{enumerate} Then there is some $k_p \le p M_{\Sigma} $ such that $p \nmid k_p$ and $\ell \nmid h( - k_p p)$ and $\mathbb{Q} ( \sqrt{ - k_p p})$ ramifies at all primes of $S_0$, splits at every prime in $S_+$, and is inert at every prime in $S_-$. \end{theorem} Combining this result with Dirichlet's theorem on primes in arithmetic progressions, we obtain the following corollary, which can be viewed as an extension of \cite{KO} to allow for local conditions. To state it, we let $T_{\Sigma, \ell}$ denote the set of all fundamental discriminants which satisfy the conclusions of Theorem \ref{thm:maintheorem}. That is, $T_{\Sigma, \ell}$ is the set of negative fundamental discriminants $D$ of quadratic fields $K$ which ramify at all primes of $S_0$, split at every prime in $S_+$, are inert at every prime in $S_-$, and have $\ell \nmid h(D)$. Also, let $r_{\Sigma}$ be the number of odd primes less than $M_{\Sigma}$, excluding $\ell$. 
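The index appearing in \eqref{eq:sturmbound} is given by the classical formula $[\Gamma_0(1):\Gamma_0(N)] = N \prod_{p \mid N} (1 + 1/p)$, with the product over the distinct primes dividing $N$; a sketch of its computation:

```python
def gamma0_index(N):
    """[SL2(Z) : Gamma_0(N)] = N * prod over primes p | N of (1 + 1/p),
    computed exactly in integer arithmetic by trial division."""
    idx, n, p = N, N, 2
    while p * p <= n:
        if n % p == 0:
            idx = idx // p * (p + 1)     # multiply by (1 + 1/p)
            while n % p == 0:
                n //= p                  # strip all copies of p
        p += 1
    if n > 1:                            # leftover prime factor
        idx = idx // n * (n + 1)
    return idx
```

With this, $M_\Sigma$ is `gamma0_index(N_Sigma) / 8` per \eqref{eq:sturmbound}.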
Then we have the following: \begin{corollary}\label{thm:maincor} Let $\ell$ be an odd prime. If $\epsilon >0$, then for sufficiently large $X$ we have $$ \# \{ -X < D <0: \ell \nmid h(D), D \in T_{\Sigma, \ell} \} \ge \left( \frac{\ell -2 }{ (\ell - 1) 2^{r_{\Sigma}+4} \sqrt{M_{\Sigma}}} - \epsilon \right) \frac{\sqrt{X}}{\log{X}}. $$ \end{corollary} One can apply Corollary \ref{thm:maincor} to count twists of elliptic curves which have Mordell-Weil rank 0 over $\mathbb{Q}$ and trivial $\ell$-Selmer group. To state this result, it is convenient to define the following subsets of primes dividing the conductor $N_E$. Let $\tilde{S}_E$ be the subset of odd primes dividing the conductor $N_E$ of $E$ defined by \begin{equation}\label{eq:SE} \widetilde{S}_E := \{ p|N_E: p \equiv -1 \pmod{\ell}, \ell \nmid ord_{p}(\Delta_E) \}, \end{equation} where $\Delta_E$ is the discriminant of $E$. Also, we set \begin{equation}\label{eq:Tplus} T_+ = \{ p|N_E, ord_p(j_E) < 0; E/\mathbb{Q}_p \text{ is not a Tate Curve} \}, \end{equation} and \begin{equation}\label{eq:Tminus} T_- = \{ p|N_E: p \notin T_+, p \equiv 3 \pmod{4} \}. \end{equation} A Tate curve $E / \mathbb{Q}_p$ is such that $E / \overline{\mathbb{Q}}_p \simeq \overline{\mathbb{Q}}_p^* / q^{\mathbb{Z}}$ for some $q \in \mathbb{Q}_p$, for details see Appendix C of \cite{Silverman}. \begin{corollary}\label{thm:rank0} Suppose $E/\mathbb{Q}$ is an elliptic curve with odd conductor $N_E$, and suppose $E$ has a $\mathbb{Q}$-rational torsion point $P$ of odd prime order $\ell$, and suppose $P$ is not contained in the kernel of reduction modulo $\ell$. Assume $ord_{\ell} (j(E)) \ge 0$. Also assume $\widetilde{S}_E = \emptyset$ and neither $T_+$ nor $T_-$ contain a prime which is $1 \pmod{\ell}$. Then we have $$ \# \{ -X < D <0: rk(E_D) = 0, Sel_{\ell} (E_D) = \{1\} \} \gg \frac{\sqrt{X}}{\log{X}}. 
$$ \end{corollary} \begin{remark} As mentioned above, the best general results on Goldfeld's Conjecture are due to Ono, Perelli, Pomykala, and Skinner (see \eqref{eq:OS} and \eqref{eq:PePo}). The corollary given here falls short of improving on this estimate. However, it is a refinement in that it gives rank $0$ twists whose $\ell$-Selmer groups are trivial. This is the best known estimate for this type of problem. Corollary \ref{thm:rank0} presumably can be extended to also give rank $1$ twists which simultaneously have trivial $\ell$-Shafarevich-Tate groups. This claim would require a careful study of the aforementioned paper of Frey \cite{Frey}. \end{remark} This paper is organized as follows. In Section 2, we describe a theorem of Zagier relating class numbers of imaginary quadratic fields to the coefficients of a weight $\frac{3}{2}$ mock modular form, and we use his result to prove a lemma which is vital to the proof of our main result. In Section 3, we prove Theorem \ref{thm:maintheorem} and Corollaries \ref{thm:maincor} and \ref{thm:rank0}, and in Section 4, we give examples to illustrate our results. \section*{Acknowledgements} The author thanks Ken Ono for suggesting this project, and thanks Edray Goins and the referee for their many helpful comments. \section{Hurwitz Mock Modular forms} \subsection{Zagier's Eisenstein Series} Throughout, $\mathbb{H}$ is the upper half plane, $z = x + iy$ is a complex number in $\mathbb{H}$ with $x,y \in \mathbb{R}$, and $q := e^{2 \pi i z}$. Also, for $ N \in \mathbb{Z}_{> 0}$, $k \in \frac{1}{2} \mathbb{Z}$, and $\chi$ a Dirichlet character, we let $M_{k} (\Gamma_0 (N), \chi)$ and $S_{k} (\Gamma_0 (N), \chi)$ denote the usual vector spaces of integer and half-integer weight modular forms and cusp forms. The proof of Theorem \ref{thm:maintheorem} requires generalizations of modular forms, the so-called {\it harmonic Maass forms}. We only briefly describe the main properties of harmonic Maass forms.
To learn more, the reader is encouraged to read \cite{BFOR} and \cite{Ono}. A harmonic Maass form is a real-analytic function which transforms like a modular form. All harmonic Maass forms have a natural decomposition, as $$ f ( z) = f^+ ( z) + \frac{(4 \pi y)^{1-k}}{k-1} \overline{c_f (0) } + f^- ( z), $$ where $f^+$ and $f^-$ have Fourier expansions as follows, for some $m_0 \in \mathbb{Z}$: $$ f^+ ( z) = \sum_{n = m_0 }^{\infty} c_f^+ (n) q^n, $$ and $$ f^- (z) = \sum_{n > 0} \overline{c_f^- (n)} \Gamma ( 1-k, 4 \pi n y) q^{-n}, $$ where $\Gamma (s,x) := \int_x^{\infty} t^{s-1} e^{-t} dt$ is the \textit{incomplete Gamma-function}. The form $f^+$ is called the \emph{holomorphic part} of $f$, and $\frac{(4 \pi y)^{1-k}}{k-1} \overline{c_f(0) } + f^- ( z)$ is called the \emph{nonholomorphic part} of $f$. If the nonholomorphic part of $f$ is trivial, then $f$ is a weakly holomorphic modular form. When the nonholomorphic part is nontrivial, $f^+$ is called a \emph{mock modular form}. We let $H_k(\Gamma_0(N), \chi)$ denote the space of harmonic Maass forms with Nebentypus character $\chi$ on $\Gamma_0(N)$. While many harmonic Maass forms have poles at cusps, not all of them do. A harmonic Maass form $f \in H_k (\Gamma_0(N), \chi)$ is said to be of {\it moderate growth} if for every $\epsilon >0$ we have $$ f(z) = O(e^{\epsilon y}) $$ as $y \to \infty$, and if analogous conditions hold at all cusps of $\Gamma_0(N)$. We let $H_k^{mg} (\Gamma_0(N), \chi)$ denote the space of weight $k$ harmonic Maass forms of moderate growth. Harmonic Maass forms of moderate growth do not have poles at cusps. Moreover, harmonic Maass forms of moderate growth which have trivial nonholomorphic part are holomorphic modular forms. Throughout we let $D$ be a negative fundamental discriminant, and we let $h(D)$ be the class number for the quadratic field $\mathbb{Q} ( \sqrt{D})$. We use the Hurwitz class numbers $H(n)$, which are defined as follows.
Suppose $-n = Df^2$, where $D < 0$ is a fundamental discriminant. Then \begin{equation}\label{eq:hurwitz} H(n) = \frac{h (D)}{w(D)} \sum_{d | f} \mu(d) \left( \frac{D}{d} \right) \sigma_1 \left(\frac{f}{d} \right), \end{equation} where $\sigma_1$ is the usual sum of divisors function and $w(D)$ is half the number of units in the ring of integers of $\mathbb{Q} (\sqrt{D})$. Let $\mathcal{H} (z)$ be defined by \begin{equation}\label{eq:zagier} \mathcal{H}(z) := - \frac{1}{12} + \sum_{n=1}^{\infty} H(n) q^n + \frac{1}{8 \sqrt{\pi}} \sum_{n \in \mathbb{Z}} \Gamma \left( - \frac{1}{2}, 4 \pi n^2 y \right) q^{-n^2}, \end{equation} where $\Gamma(\alpha,x)$ is the usual incomplete Gamma-function. Zagier showed that $\mathcal{H}(z)$ is a harmonic Maass form \cite{Zagier}. In particular, if $\xi_{\frac{3}{2}}$ is the differential operator defined by $\xi_{\frac{3}{2}} := 2 i y^{\frac{3}{2}} \overline{\frac{\partial}{\partial \overline{z}}}$, then Zagier showed the following: \begin{theorem}{(Zagier)}\label{thm:zagier} $\mathcal{H}(z)$ is a weight $\frac{3}{2}$ harmonic Maass form of moderate growth on $\Gamma_0(4)$. Moreover, $\xi_{3/2} (\mathcal{H}) = - \frac{1}{16 \pi} \Theta$, where $\Theta(z) := \sum_{n \in \mathbb{Z}} q^{n^2}$ is the Jacobi theta function. \end{theorem} We use $\mathcal{H}(z)$ to construct modular forms whose coefficients represent the fundamental discriminants which correspond to fields with the desired splitting conditions. Then we argue as in \cite{JO} and \cite{KO}. \begin{remark} The weight $3/2$ modular form $\sum_{n =0}^{\infty} r(n) q^n := \Theta(z)^3$ is intimately tied to class numbers for imaginary quadratic fields. It is well known that the $r(n)$ are expressed in terms of the Hurwitz class numbers $H(n)$.
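Definition \eqref{eq:hurwitz} can be evaluated exactly in integer arithmetic, computing $h(D)$ by the classical count of reduced binary quadratic forms of discriminant $D$. The following sketch is ours (not the author's code) and reproduces the familiar values $H(3) = 1/3$, $H(4) = 1/2$:

```python
from fractions import Fraction
from math import isqrt

def class_number(D):
    """h(D): count reduced forms ax^2+bxy+cy^2, b^2-4ac = D < 0,
    with -a < b <= a <= c and b >= 0 whenever a = c."""
    count, a = 0, 1
    while 3 * a * a <= -D:
        for b in range(-a + 1, a + 1):
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and not (b < 0 and a == c):
                    count += 1
        a += 1
    return count

def squarefree(n):
    n, d = abs(n), 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def is_fundamental(D):
    if D % 4 == 1:
        return squarefree(D)
    return D % 4 == 0 and (D // 4) % 4 in (2, 3) and squarefree(D // 4)

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def sigma1(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def jacobi(a, n):          # Jacobi symbol (a/n), n odd positive
    a, result = a % n, 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def kronecker(D, n):       # Kronecker symbol (D/n), n positive
    result = 1
    while n % 2 == 0:
        n //= 2
        if D % 2 == 0:
            return 0
        if D % 8 in (3, 5):
            result = -result
    return result * jacobi(D % n, n)

def hurwitz(n):
    """Hurwitz class number H(n) via (eq:hurwitz); H(n) = 0 for n = 1, 2 mod 4."""
    if n <= 0 or n % 4 in (1, 2):
        return Fraction(0)
    for f in range(isqrt(n), 0, -1):        # find -n = D f^2, D fundamental
        if n % (f * f) == 0 and is_fundamental(-(n // (f * f))):
            D = -(n // (f * f))
            break
    w = 3 if D == -3 else 2 if D == -4 else 1   # half the number of units
    s = sum(mobius(d) * kronecker(D, d) * sigma1(f // d)
            for d in range(1, f + 1) if f % d == 0)
    return Fraction(class_number(D) * s, w)
```

For example, `hurwitz(12)` finds $-12 = (-3)\cdot 2^2$ and returns $4/3$.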
\begin{theorem}[Gauss] $$ r(n) = \begin{cases} 12 H(4n) & n \equiv 1, 2 \pmod{4} \\ 24 H(n) & n \equiv 3 \pmod{8} \\ r(n/4) & n \equiv 0 \pmod{4} \\ 0 & n \equiv 7 \pmod{8} \end{cases} $$ \end{theorem} The modular form $\Theta^3$ was used in many previous results on indivisibility of class numbers (see for example \cite{KO}). However, it is insufficient for our result, because its Fourier coefficients are not supported on all arithmetic progressions. For square-free $n$ with $ n \equiv 7 \pmod{8}$, the class numbers $h (-n)$ are not represented. \end{remark} \subsection{Sieving Zagier's Mock Modular Form} We require the following result, which shows that we can define holomorphic modular forms whose coefficients are supported on fundamental discriminants satisfying local conditions and are given by class numbers. Given sets $S_+, S_-, S_0$ as in Theorem \ref{thm:maintheorem}, we let $A_{\Sigma}$ be defined as the set of positive integers $n$ such that the following hold: \begin{enumerate} \item For $p \in S_+ \cup S_- \cup S_0$, $p^2 \nmid n$. \item $\mathbb{Q} (\sqrt{-n})$ splits at the primes in $S_+$, ramifies at the primes in $S_0$, and is inert at the primes in $S_-$. \end{enumerate} \begin{lemma}\label{thm:mainlemma} Let $S_+, S_-, S_0$ be sets as in Theorem \ref{thm:maintheorem}, and assume that $S_-$ is nonempty. Then there is a weight $\frac{3}{2}$ modular form $H^{\Sigma} (z) = \sum_{n=1}^{\infty} a(n) q^n$ on $\Gamma_0( N_{\Sigma})$, where $N_{\Sigma}$ is as in \eqref{eq:level}, such that $$ a(n) = \begin{cases} H(n) & n \in A_{\Sigma} \\ 0 & \text{otherwise} \end{cases} $$ \end{lemma} The idea is to take combinations of twists of Zagier's function $\mathcal{H}(z)$ to obtain a holomorphic modular form.
The key properties of $\mathcal{H}(z)$ that allow us to do this are \begin{enumerate} \item the Fourier expansion of the non-holomorphic part is supported on terms of the form $q^{-n^2}$, which allows us to use twisting to annihilate the non-holomorphic part of $\mathcal{H}(z)$, and \item $\mathcal{H} (z)$ has moderate growth at the cusps, which ensures that any linear combination of twists of $\mathcal{H}(z)$ will not have any exponential singularities, as a weakly holomorphic modular form would. \end{enumerate} For $\chi$ a Dirichlet character modulo $m$, the twist of $\mathcal{G}(z) := \sum_{n=0}^{\infty} a(n,y) q^n \in H_k (\Gamma_0(N), \psi)$ by $\chi$ is given by $$ \mathcal{G}_{\chi} (z) = \sum_{n \in \mathbb{Z}} \chi(n) a(n,y) q^n . $$ If $d$ is a positive integer, the operators $U(d), V(d)$ are defined, as one does when working with holomorphic modular forms, by $$ (\mathcal{G}| U(d)) (z) := \sum_{n \in \mathbb{Z}} a(dn,y) q^n $$ and $$ (\mathcal{G}| V(d)) (z) := \sum_{n \in \mathbb{Z}} a(n,y) q^{dn}. $$ It is well known that a twist of a modular form is itself a modular form; for a proof, see Proposition 17 on page 127 of \cite{Koblitz}. The same proof shows that twists of harmonic Maass forms are also harmonic Maass forms. Specifically, for $\mathcal{G}(z) \in H^{mg}_{k+ \frac{1}{2}} ( \Gamma_0(4N), \psi)$, the form $\mathcal{G}_{\chi} (z)$ belongs to $H_{k + \frac{1}{2}}^{mg} ( \Gamma_0 (4N m^2), \psi \cdot \chi^2)$, and $(\mathcal{G} | V(d) )$ and $(\mathcal{G}| U(d))$ lie in $H_{k+\frac{1}{2}}^{mg} ( \Gamma_0(4Nd), \psi \cdot \left( \frac{4d}{\cdot} \right))$. \noindent \textit{Proof of Lemma \ref{thm:mainlemma}:} First, we take a combination of twists for which the nonholomorphic part of $\mathcal{H} (z)$ is annihilated. Let $p$ be in $S_-$.
We have \begin{align*} f(z) &:= \frac{1}{2} (\mathcal{H}(z) - \left( \frac{-1}{p} \right) \mathcal{H}_{\left( \frac{\cdot}{p} \right)} (z)) \\ &= \sum_{n=1}^{\infty} \frac{1}{2} \left( 1 - \left( \frac{-n}{p} \right) \right) H(n) q^n + \frac{1}{16 \sqrt{\pi}} \sum_{p | n} \Gamma(- \frac{1}{2}, 4 \pi n^2 y) q^{ - n^2}. \end{align*} Note that the coefficient of $q^n$ in the holomorphic part of $f(z)$ is $\frac{1}{2} H(n)$ if $p | n$, $H(n)$ if $ (\frac{-n}{p} ) = -1$, and $0$ if $ \left( \frac{-n}{p} \right) = 1$. The nonholomorphic part of $f$ is supported on multiples of $p$, because twisting the nonholomorphic part by the Legendre symbol annihilates those coefficients. To eliminate what remains of the nonholomorphic part and the multiples of $p$ in the holomorphic part, we take the twist $ f_{\left( \frac{\cdot}{p}\right)^2}$. Repeating the above steps for every $p \in S_+ \cup S_-$, we obtain a form which is supported on $n$ for which the primes in $S_+ \cup S_-$ split or are inert in $\mathbb{Q} (\sqrt{-n})$ as desired. To obtain a modular form which is supported on coefficients which are multiples of the primes in $S_0$, let $d$ be the product of the primes in $S_0$. We apply the $U(d)$ operator, then twist by $\left( \frac{ -n}{q} \right)^2$ for each $q \in S_0$, and then apply the $V(d)$ operator. \qed \section{Proofs of Theorem \ref{thm:maintheorem} and Corollaries} \subsection{Proof of Theorem \ref{thm:maintheorem}} The proof of Theorem \ref{thm:maintheorem} requires a well known result of Sturm \cite{Sturm}, which says that if a modular form with integer Fourier coefficients is nonvanishing modulo a prime $\ell$, then there is a bound on the index of the first coefficient which is nonzero modulo $\ell$.
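The sieving steps in the proof of Lemma \ref{thm:mainlemma} act purely on $q$-expansion coefficients, and can be mimicked on plain coefficient arrays. This is a toy illustration of ours (no actual modular forms are computed; the helper names are hypothetical):

```python
from fractions import Fraction

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def twist(coeffs, chi):
    """Twist a coefficient array {n: a(n)} by a character n -> chi(n)."""
    return {n: chi(n) * c for n, c in coeffs.items()}

def U(coeffs, d):
    return {n // d: c for n, c in coeffs.items() if n % d == 0}

def V(coeffs, d):
    return {d * n: c for n, c in coeffs.items()}

def sieve_inert(coeffs, p):
    """Keep a(n) only for n with (-n/p) = -1: form (1/2)(1 - (-n/p)) a(n),
    then twist by the squared symbol to kill the multiples of p."""
    half = {n: Fraction(1 - legendre(-n, p), 2) * c for n, c in coeffs.items()}
    return twist(half, lambda n: legendre(n, p) ** 2)

def sieve_ramified(coeffs, S0):
    """Keep a(n) only for n divisible exactly once by each prime in S0:
    apply U(d), twist by squared symbols, then V(d), where d = prod(S0)."""
    d = 1
    for q in S0:
        d *= q
    c = U(coeffs, d)
    for q in S0:
        c = twist(c, lambda n, q=q: legendre(n, q) ** 2)
    return V(c, d)
```

Starting from a constant array, `sieve_inert(..., 3)` leaves exactly the $n$ with $\left(\frac{-n}{3}\right) = -1$, i.e., $n \equiv 1 \pmod 3$, mirroring the support computation in the proof.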
To state his theorem, for a rational prime $\ell$ and a modular form $f(z) = \sum_{n=0}^{\infty} a(n) q^n \in M_{k} (\Gamma_0(N), \chi)$ with coefficients in $\mathbb{Z}$, we define $$\mathrm{ord}_{\ell} (f):= \min_{n} \{ n: \ell \nmid a(n) \},$$ and we say $\mathrm{ord}_{\ell}(f) := \infty$ if $\ell | a(n)$ for all $n$. \begin{theorem}[Sturm]\label{thm:sturm} For a modular form $f(z) = \sum_{n=0}^{\infty} a(n) q^n \in M_{k} (\Gamma_0(N), \chi)$ with integer Fourier coefficients, if $$ \mathrm{ord}_{\ell}(f) > \frac{k}{12} [ \Gamma_0(1):\Gamma_0(N)], $$ then $\mathrm{ord}_{\ell}(f) = \infty$. \end{theorem} \begin{remark} Note that Sturm's theorem was originally only formulated for holomorphic modular forms of integer weight, but the proof carries over to half-integral weight modular forms. \end{remark} \noindent \textit{Proof of Theorem \ref{thm:maintheorem}:} Let $H^{\Sigma} (z)$ be the modular form from Lemma \ref{thm:mainlemma} for $S_+$, $S_-$, and $S_0$, replacing $S_-$ with $\{ Q_{\Sigma} \}$ if $S_- = \emptyset$. Let $$\mathcal{F} (z) := \left(H^{\Sigma}| U(p) \right) -p \left( H^{\Sigma} | V(p) \right).$$ By Lemma \ref{thm:mainlemma}, the form $\mathcal{F}(z)$ is a modular form of weight $\frac{3}{2}$ on $\Gamma_0(p N_{\Sigma})$. By Theorem 1 of Wiles \cite{Wiles}, $\mathcal{F} (z)$ has a Fourier coefficient which is indivisible by $\ell$. Therefore Sturm's Theorem tells us that we have $$n_p := \mathrm{ord}_{\ell} (\mathcal{F}) \le \frac{1}{8} [ \Gamma_0(1) : \Gamma_0 (pN_{\Sigma} )] .$$ It follows from a well-known formula for $[\Gamma_0(1):\Gamma_0 (N)]$ (see for example \cite{CBMS}) that we have $$n_p \le M_{\Sigma} (p+1).$$ Write $n_p = f_p^2 k_p$ with $ k_p $ square-free. It follows from conditions (1)-(3) in Theorem \ref{thm:maintheorem} that for all $n \le M_{\Sigma}$, the $np$-th Fourier coefficient of $\mathcal{F}(z)$ is divisible by $\ell$, so $p \nmid k_p$.
Therefore either $- p k_p$ or $- 4 p k_p$ is a fundamental discriminant for an imaginary quadratic field satisfying the desired local conditions and whose class number is indivisible by $\ell$. \qed \subsection{Proof of Corollary \ref{thm:maincor}} Note that at least half of the values $k_p p$ from the main theorem must be distinct as $p$ varies over the primes greater than $M_{\Sigma}$ satisfying the conditions of Theorem \ref{thm:maintheorem}. If instead we had $k_p p = k_q q = k_r r$ with $p < q < r$, we would have $qr | k_p$, which would violate the bound on $k_p$. To count the fundamental discriminants down to $-X$ which satisfy the desired conditions, it suffices to count the primes which satisfy the conditions of Theorem \ref{thm:maintheorem} for which the fundamental discriminant from Theorem \ref{thm:maintheorem} is greater than $-X$. The primes $p$ that satisfy the third condition of Theorem \ref{thm:maintheorem} are those for which, for each $q$ up to $M_{\Sigma}$, $p$ lies in one of the $\frac{q-1}{2}$ arithmetic progressions modulo $q$ which correspond to $p$ being a quadratic residue modulo $q$. Similarly, the other two conditions amount to restricting $p$ to certain arithmetic progressions modulo $8$ and $\ell$. For the fundamental discriminant corresponding to $p$ obtained from Theorem \ref{thm:maintheorem} to be guaranteed to be greater than $-X$, it suffices to require $$ 4 p M_{\Sigma} (p+1) \le X. $$ It follows from Dirichlet's theorem for primes in arithmetic progressions that, given $\epsilon > 0$, for sufficiently large $X$ we have $$ \# \{ -X < D <0: \ell \nmid h(D), D \in T_{\Sigma, \ell} \} \ge \left( \frac{\ell -2}{ \ell -1} \frac{1}{ 2^{r_{\Sigma} + 4 } \sqrt{M_{\Sigma}}} - \epsilon \right) \frac{\sqrt{X}}{\log{X}}. $$ \subsection{Proof of Corollary \ref{thm:rank0}} First we recall a theorem of Frey \cite{Frey}.
\begin{theorem}[Frey]\label{thm:frey} Suppose $E / \mathbb{Q}$ is an elliptic curve with a $\mathbb{Q}$-rational torsion point $P$ of odd prime order $\ell$, and suppose $P$ is not contained in the kernel of reduction modulo $\ell$. Suppose $\widetilde{S}_E = \emptyset$. Suppose that $D $ is a negative square-free integer coprime to $\ell N_E$ which satisfies \begin{enumerate} \item If $2 | N_E$ then $D \equiv 3 \pmod{4}$, \item If $ord_{\ell}(j(E))<0$, then $\left( \frac{D}{\ell} \right) = -1$, \item If $p|N_E$ is an odd prime, then $$ \left( \frac{D}{p} \right) = \begin{cases} -1 & \textit{if } ord_p(j_E) \ge 0 \\ -1 & \textit{if } ord_p (j_E) < 0 \text{ and } E/\mathbb{Q}_p \text{ is a Tate curve} \\ 1 & \textit{otherwise} \end{cases} $$ \end{enumerate} Then $Sel_{\ell}(E_D)$ is nontrivial if and only if $\ell | h(D)$. \end{theorem} Now to prove the corollary, note that the twists $E_D$ have trivial $\ell$-torsion over $\mathbb{Q}$. We set $$ S_+ = \{ p \mid N_E: ord_p(j_E) < 0 \text{ and } E/\mathbb{Q}_p \text{ is not a Tate curve} \}, $$ and $$ S_- = \{ p \mid N_E: p \notin S_+ \}, $$ and $S_0 = \emptyset$. It follows from Corollary \ref{thm:maincor} that there are $\gg \frac{\sqrt{X}}{\log{X}}$ fundamental discriminants down to $-X$ which satisfy Frey's conditions, and so the result follows from Theorem \ref{thm:frey}. \section{Examples} Here we illustrate Theorem \ref{thm:maintheorem} and Corollary \ref{thm:rank0}. \begin{example} Suppose that $\ell = 5$ and that the sets are $S_+ = \{ 3 \}$, $S_- = S_0 = \emptyset$. The smallest prime which satisfies the conditions of Theorem \ref{thm:maintheorem} is 394969. The smallest discriminant bounded by Theorem \ref{thm:maintheorem} is a multiple of this prime; however, it is clear that one should not need to look at numbers that large to find imaginary quadratic fields which split at 3 and have a class number which is not divisible by 5.
By direct calculation, we see that, for the primes $p$ less than 100, all but 79 satisfy $5 \nmid h(-p)$, and 11 of the 21 corresponding imaginary quadratic fields split at 3. This discrepancy between the bounds predicted by Theorem \ref{thm:maintheorem} and the actual fundamental discriminants we observe is typical of these theorems, and it illustrates the main obstacles which remain in attacking the original Cohen-Lenstra conjectures. \end{example} \begin{example} Let $E:y^2 + y = x^3 - x^2 + 20x -8$ be the elliptic curve with Cremona label 203.a1. Then $E(\mathbb{Q}) \simeq \mathbb{Z}/5\mathbb{Z}$. The conductor of $E$ is $7 \cdot 29$. It follows from Corollary \ref{thm:rank0} that we have $$ \# \{ -X < D <0: rk(E_D) = 0, \ Sel_{5} (E_D) = \{1\} \} \gg \frac{\sqrt{X}}{\log{X}}. $$ \end{example}
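The point of Example 1 — that discriminants with the required splitting and indivisibility appear far below the bound of Theorem \ref{thm:maintheorem} — can be confirmed by brute force, counting reduced quadratic forms. A minimal sketch of ours (3 splits in $\mathbb{Q}(\sqrt{D})$ iff $\left(\frac{D}{3}\right) = 1$, i.e., $D \equiv 1 \pmod 3$):

```python
def squarefree(n):
    n, d = abs(n), 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def is_fundamental(D):
    if D % 4 == 1:
        return squarefree(D)
    return D % 4 == 0 and (D // 4) % 4 in (2, 3) and squarefree(D // 4)

def class_number(D):
    """h(D) for D < 0 by counting reduced forms of discriminant D."""
    count, a = 0, 1
    while 3 * a * a <= -D:
        for b in range(-a + 1, a + 1):
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and not (b < 0 and a == c):
                    count += 1
        a += 1
    return count

# negative fundamental discriminants, split at 3, class number prime to 5
cands = [D for D in range(-3, -200, -1)
         if is_fundamental(D) and D % 3 == 1 and class_number(D) % 5 != 0]
```

The very first candidate is $D = -8$ (with $h(-8) = 1$), which is indeed tiny compared to the discriminants guaranteed by the theorem.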
https://arxiv.org/abs/1612.04443
Indivisibility of class numbers of imaginary quadratic fields
We quantify a recent theorem of Wiles on class numbers of imaginary quadratic fields by proving an estimate for the number of negative fundamental discriminants down to -X whose class numbers are indivisible by a given prime and whose imaginary quadratic fields satisfy any given set of local conditions. This estimate matches the best results in the direction of the Cohen-Lenstra heuristics for the number of imaginary quadratic fields with class number indivisible by a given prime. This general result is applied to study rank 0 twists of certain elliptic curves.
https://arxiv.org/abs/1211.5384
Superfast solution of linear convolutional Volterra equations using QTT approximation
We address a linear fractional differential equation and develop effective solution methods using algorithms for inversion of triangular Toeplitz matrices and the recently proposed QTT format. The inverses of such matrices can be computed by the divide and conquer and modified Bini's algorithms, for which we present the versions with the QTT approximation. We also present an efficient formula for the shift of vectors given in QTT format, which is used in the divide and conquer algorithm. As the result, we reduce the complexity of inversion from the fast Fourier level $O(n\log n)$ to the speed of superfast Fourier transform, i.e., $O(\log^2 n).$ The results of the paper are illustrated by numerical examples.
\section{Introduction} Equations involving derivatives of fractional order are of great importance, due to their role in mathematical models applied in mechanics, biochemistry, electrical engineering, medicine, etc., see~\cite{diethelm-1997,CapMain1,FreedDiet1}. In this paper we present a superfast algorithm for the numerical solution of the linear equation \begin{equation}\label{eq1} D^{\alpha}_*y(t) = F(t,y(t)) = my(t)+f(t), \quad 0\leq t \leq T, \quad y(0)=y_0, \end{equation} where $0<\alpha<1$ is the order of the fractional operator, $m\in\mathbb{R}$ is a constant referred to as \emph{mass}, and $f(t)$ is a sufficiently well--behaved \emph{forcing} term. For $\alpha=1/2$ this equation is a scalar version of the Bagley-Torvik equation~\cite{BT1}, which is used in the modelling of viscoelastic materials. The definition of the Caputo derivative $D^\alpha_*$ can be found in many sources, e.g.~\cite{Diet1,Pod1}, and is recalled in the appendix for convenience. The classical result of Diethelm~\cite[Lem. 6.2]{Diet1} allows us to rewrite~\eqref{eq1} in the form \begin{equation}\label{eq2} y(t) = y_0 + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \left( my(s)+f(s) \right) ds, \end{equation} where $\Gamma(\alpha) = \int_0^\infty e^{-t}t^{\alpha-1} dt$ is the gamma function. Eq.~\eqref{eq2} is a weakly singular convolutional Volterra equation of the second kind with an Abel--type kernel. Volterra equations of the second kind are well-studied and are proven to have a unique continuous solution for $0\leq t \leq T,$ see, e.g.~\cite[Thm. 3.2]{linz-1985}. The solution is asymptotically stable if $m<0$ (see~\cite{GorRut1}), which we will always assume in this paper. For certain forcing terms, the solution of~\eqref{eq2} can be found using series methods. In a general framework, we can discretize~\eqref{eq2} using a collocation or Galerkin method and numerically solve the resulting linear system. This matrix approach to fractional calculus was brilliantly presented by I.
Podlubny in~\cite{podlubny-matr-2000}. In this paper we consider the collocation method and assume that $y(t)$ is approximated by a piecewise--linear function on a uniform grid $t_j=jh,$ $j=0,\ldots,n,$ where $h=T/n.$ The stability of collocation methods for fractional equations was studied in~\cite{blanck-1995,blanck-1996} and an error analysis can be found in~\cite{DietFordFreed1}. The discretized equation is the following $$ y_j = y_0 + \frac{h^\alpha}{\Gamma(\alpha)} \sum_{k=0}^j w_{j,k} (m y_k + f_k), \qquad j=1,\ldots,n, $$ where $y_j=y(t_j),$ $f_k=f(t_k)$ and $w_{j,k}$ are quadrature weights, defined by integration of the piecewise--linear basis functions against the Abel-type kernel, i.e., \begin{equation}\nonumber w_{j,k} = \frac{1}{\alpha (\alpha+1)} \left\{ \begin{array}{lc} (j-1)^{\alpha+1} - (j-\alpha-1)j^\alpha, & k=0, \\ (j-k-1)^{\alpha+1} - 2 (j-k)^{\alpha+1} + (j-k+1)^{\alpha+1}, & 1 \leq k < j, \\ 1, & k=j. \end{array} \right. \end{equation} Finally, we obtain the linear system $A y = b$ with a triangular Toeplitz matrix and right-hand side defined as follows, \begin{equation}\label{eq3} \sum_{k=1}^j a_{j-k} y_k = b_j, \qquad j=1,\ldots,n, \end{equation} \begin{equation}\nonumber \begin{split} a_p & = \left\{ \begin{array}{lc} 1 - \gamma m, & p=0, \\ - \gamma m \left( (p-1)^{\alpha+1} -2p^{\alpha+1} + (p+1)^{\alpha+1} \right),& p>0, \end{array} \right. \\ b_j & = y_0 + \gamma \left( \sum_{k=1}^j \tilde{w}_{j,k} f_k + \tilde{w}_{j,0} (m y_0 + f_0) \right), \end{split} \end{equation} where $\gamma=h^\alpha / \Gamma(\alpha+2)$ and $\tilde{w}_{j,k} := \alpha(\alpha+1) w_{j,k}$ are the quadrature weights without the prefactor (note that $\Gamma(\alpha+2)=\alpha(\alpha+1)\Gamma(\alpha)$). The numerical scheme we use is analogous to the fractional Adams method proposed in~\cite{DietFordFreed1} for a general (e.g. nonlinear) function $F(t,y(t)).$ The method is developed as a generalization of the Adams--Bashforth--Moulton scheme from the classical numerical analysis of ordinary differential equations and a detailed error analysis is provided.
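The forward-substitution solve of the triangular system \eqref{eq3} is a few lines of code. The following sketch is ours (not the authors' implementation); the weights here omit the $1/(\alpha(\alpha+1))$ prefactor, which is absorbed into $\gamma = h^\alpha/\Gamma(\alpha+2)$. Since the piecewise-linear quadrature is exact for constants, the scheme reproduces constant solutions to machine precision, which gives a convenient sanity check:

```python
import math

def solve_fractional(alpha, m, f, y0, T, n):
    """Piecewise-linear collocation for D^alpha y = m*y + f(t), y(0) = y0,
    solved by forward substitution on the triangular Toeplitz system."""
    h = T / n
    gamma = h ** alpha / math.gamma(alpha + 2)
    ap = alpha + 1
    # Toeplitz coefficients a_p of the lower triangular system matrix
    a = [1 - gamma * m] + [-gamma * m * ((p - 1) ** ap - 2 * p ** ap + (p + 1) ** ap)
                           for p in range(1, n)]
    fk = [f(k * h) for k in range(n + 1)]

    def w(j, k):  # quadrature weights without the 1/(alpha*(alpha+1)) prefactor
        if k == 0:
            return (j - 1) ** ap - (j - alpha - 1) * j ** alpha
        if k < j:
            return (j - k - 1) ** ap - 2 * (j - k) ** ap + (j - k + 1) ** ap
        return 1.0

    y = [y0]
    for j in range(1, n + 1):
        b = y0 + gamma * (sum(w(j, k) * fk[k] for k in range(1, j + 1))
                          + w(j, 0) * (m * y0 + fk[0]))
        y.append((b - sum(a[j - k] * y[k] for k in range(1, j))) / a[0])
    return y
```

For $m = -1$, $f \equiv 1$, $y_0 = 1$ the exact solution is $y \equiv 1$, and the discrete solution matches it exactly; for $f \equiv 0$ the solution is the decaying Mittag-Leffler function $E_\alpha(m t^\alpha)$.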
The complexity of the fractional Adams method in the nonlinear case is $\mathcal{O}(n^2).$ To reduce this complexity, we can take into account the decay speed of the Abel kernel~$k(s)=s^{\alpha-1}$ of the integral in~\eqref{eq2}. The so-called~\emph{fixed memory principle}~\cite{Pod1,Pod2} and the more accurate~\emph{nested mesh method}~\cite{FordSimp1,DietFordFreedLuchko1} are based on truncation and approximation of the tail of the integral~\eqref{eq2}, respectively, and have almost linear complexity w.r.t. $n.$ We revise these methods in Sec.~\ref{LOG}. For linear $F(t,y(t)),$ the problem can be written as the linear system~\eqref{eq3}, which can be solved using well--developed algorithms for the inversion of triangular Toeplitz matrices, or triangular strip matrices, as they are referred to in~\cite{podlubny-matr-2000}. These methods are recalled in Sec.~\ref{TTOEP}, and have the asymptotic complexity of the fast Fourier transform (FFT) algorithm, which is $\mathcal{O}(n \log n).$ Recently, a superfast Fourier transform algorithm was proposed in~\cite{dks-ttfft-2012}, based on the approximation of vectors in the \emph{quantized tensor train} (QTT) format~\cite{osel-tt-2011,khor-qtt-2011}. The method can be considered as a classical model of the quantum superfast Fourier transform algorithm~\cite{EkertJozsa-1998}, and has square-logarithmic complexity~$\mathcal{O}(\log^2 n)$ for a certain class of vectors, for which such a model is efficient. This class of vectors is partially characterized in~\cite{sav-rank1-2012} and includes, for example, vectors with a sparse Fourier image. The numerical experiments provided in Sec.~\ref{TTINV} show that the Abel kernel $k(s) = s^{\alpha-1}$ is efficiently approximated by the QTT format for all $0<\alpha <1$ with accuracy up to the machine threshold. Based on this observation, we propose the superfast inversion algorithm for the triangular Toeplitz matrix~\eqref{eq3}, using the QTT approximation.
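The compressibility of the Abel kernel in the QTT format can be checked with a bare-bones TT-SVD: reshape the sampled vector of length $2^d$ into a $2\times2\times\cdots\times2$ tensor and truncate successive SVDs of its unfoldings. The sketch below is ours (plain numpy, no TT library); sampling $j^{\alpha-1}$ at $j = 1, \ldots, 2^d$ avoids the singularity at zero, mirroring the decay of the coefficients $a_p$. The small maximal rank asserted afterwards is an empirical observation consistent with the experiments cited above, not a theorem:

```python
import numpy as np

def qtt_compress(v, eps):
    """TT-SVD of a vector of length 2^d viewed as a 2 x 2 x ... x 2 tensor.
    Returns the QTT ranks and the relative reconstruction error; the total
    error is bounded by eps * ||v|| by the standard TT-SVD estimate."""
    v = np.asarray(v, dtype=float)
    d = int(np.log2(len(v)))
    assert len(v) == 2 ** d
    delta = eps * np.linalg.norm(v) / np.sqrt(d - 1)   # per-step threshold
    cores, c = [], v.reshape(1, -1)
    for _ in range(d - 1):
        r = c.shape[0]
        c = c.reshape(2 * r, -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # tail[i] = ||s[i:]||
        rnew = max(1, int(np.sum(tail > delta)))       # smallest rank with tail <= delta
        cores.append(u[:, :rnew].reshape(r, 2, rnew))
        c = s[:rnew, None] * vt[:rnew]
    cores.append(c.reshape(c.shape[0], 2, 1))
    # contract the train back into a full vector to measure the error
    M = cores[0].reshape(2, -1)
    for core in cores[1:]:
        r_in = core.shape[0]
        M = (M @ core.reshape(r_in, -1)).reshape(-1, core.shape[2])
    rel_err = np.linalg.norm(M.reshape(-1) - v) / np.linalg.norm(v)
    return [core.shape[2] for core in cores[:-1]], rel_err

alpha = 0.5
v = np.arange(1, 2 ** 12 + 1, dtype=float) ** (alpha - 1)  # Abel kernel samples
ranks, rel_err = qtt_compress(v, 1e-6)
```

For $d = 12$ the full unfolding ranks could be as large as $64$; the observed QTT ranks of the kernel stay far smaller.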
The numerical experiments provided in Sec.~\ref{NUM} justify the accuracy and sublinear complexity of the method proposed. \section{Numerical method with logarithmic memory} \label{LOG} In \cite{Pod1} and \cite{Pod2} the author describes an approach to the numerical integration involved in solving a fractional problem whereby the early part (the tail) of the integral is ignored (i.e. assuming the value of the integral over this region is negligible) and so the memory of the system is truncated at some point. The error introduced via this process is described in \cite{Pod1} for Riemann-Liouville fractional derivatives. In \cite{FordSimp1} the authors consider the error that is introduced when this approach is applied to problems expressed with respect to the Caputo fractional derivative. The authors show that by introducing a finite memory of fixed length $T$ for the Caputo derivative we introduce an error of the form \begin{equation} \label{fixerror} E=\left| \frac{1}{\Gamma\left( 1-\alpha\right)}\int^{t-T}_0\frac{y'(s)}{\left( t-s\right)^\alpha}ds\right|. \end{equation} Setting $M := \sup_{s\in[0,t]}|y'(s)|$, we obtain \begin{equation} \label{fe2} E\leq\frac{M\left( t^{1-\alpha}-T^{1-\alpha}\right)}{\Gamma\left( 2-\alpha\right)}. \end{equation} So for a fixed memory $T<t$ we have a loss of order such that the error does not tend towards zero as the stepsize approaches zero. Indeed, the authors in \cite{FordSimp1} highlight that in order to preserve the order of the method we would need to choose $T$ so that (for a fixed error bound $E$) we have \begin{equation} \label{fe3} T^{1-\alpha}\geq t^{1-\alpha}-\left(\frac{E\Gamma(2-\alpha)}{M}\right), \end{equation} which introduces a computational cost --- precisely what the fixed memory principle is trying to avoid.
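Bounds \eqref{fe2} and \eqref{fe3} are easy to evaluate numerically; the following sketch (our own helper functions, not from the paper) solves \eqref{fe3} for the smallest admissible memory length $T$ and verifies it against \eqref{fe2}:

```python
from math import gamma

def fixed_memory_error_bound(M, t, T, alpha):
    """Upper bound (fe2) on the truncation error for memory length T < t."""
    return M * (t ** (1 - alpha) - T ** (1 - alpha)) / gamma(2 - alpha)

def required_memory(M, t, alpha, E):
    """Smallest memory length T satisfying (fe3) for a target error E."""
    s = t ** (1 - alpha) - E * gamma(2 - alpha) / M
    return 0.0 if s <= 0 else s ** (1 / (1 - alpha))
```

For $\alpha = 1/2$, $M = 1$, $t = 10$ and a target error $E = 0.1$ the required memory is $T \approx 9.45$, i.e., almost the whole interval must be retained, which illustrates the cost the text refers to.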
To overcome this it is proposed in \cite{FordSimp1}, and described further in \cite{DietFordFreedLuchko1}, that the fixed memory principle is amended so that the region of integration $[0,t]$ is decomposed into a sequence of finite-length intervals with differing stepsizes. As we move `backwards' along the interval from $t$ to $0$ the subintervals use coarser and coarser stepsizes, except possibly for some small subinterval near zero, due to the length of this subinterval not being an exact multiple of the current stepsize --- in such circumstances the authors suggest a couple of alternative approaches for this subinterval, one such alternative being the use of the original stepsize. Such a \emph{nested mesh} approach (see the mesh in Fig.~\ref{fig:nest}) is possible due to the scaling properties of the fractional integral, which are discussed in \cite{DietFordFreedLuchko1} and \cite{FordSimp1}. Thus the weights (of the Adams--type method described earlier) for calculating $\Omega^\alpha_hf(nh)\approx I^\alpha f(nh)$ with a stepsize $h$ can be used to calculate $\Omega^\alpha_{\omega^ph}f\left(\omega^pnh\right)\approx I^\alpha f\left(n\omega^ph\right)$ using a stepsize of $\omega^ph$. The authors of \cite{FordSimp1} define, for $h\in\mathbb{R}^+$, the mesh $M_h$ by $M_h=\left\{ hn,n\in\mathbb{N}\right\}$. If $\omega ,r,p\in\mathbb{N}$, $\omega>0$, $r>p$, then $M_{\omega^rh}\subset M_{\omega^ph}$. The authors then decompose the interval $[0,t]$, for fixed $T>0$, in the following way: \begin{equation} \label{mesh1} [0,t]=[0,t-\omega ^mT]\cup [t-\omega ^mT, t-\omega^{m-1}T]\cup\cdots\cup [t-\omega T, t-T]\cup [t-T,t] \end{equation} where $m\in\mathbb{N}$ is the smallest integer such that $t<\omega^{m+1}T$. A step length of $h$ is used over the most recent time interval $[t-T,t]$ with successively larger step sizes over earlier intervals, as follows. Let $t,T,h\in\mathbb{R}$, $\omega^{m+1}T>t\geq \omega^mT$, $t>1$, $h>0$ with $t=nh$ for some $n\in\mathbb{N}$.
The integral can be rewritten as \begin{eqnarray} \label{mesh2} I^\alpha_{[0,t]}f(t)&=&I^\alpha_{[t-T,t]}f(t)+\sum^{m-1}_{i=0}I^\alpha_{\left[ t-\omega^{i+1}T,t-\omega^iT\right]}f(t)+I^\alpha_{[0,t-\omega^mT]}f(t)\\ &=&I^\alpha_{[t-T,t]}f(t)+\sum^{m-1}_{i=0}\omega^{i\alpha}I^\alpha_{[t-\omega T,t-T]}f\left(\omega^it\right)+\omega^{m\alpha}I^\alpha_{[0,t-\omega^mT]}f\left(\omega^mt\right) , \end{eqnarray} where $I^\alpha_{[t-a,t-b]}f(t)=\frac{1}{\Gamma(\alpha)}\int^{t-b}_{t-a}\frac{f(s)}{(t-s)^{1-\alpha}}ds$. The authors also show the following: \begin{theorem}\cite{FordSimp1} The nested mesh scheme preserves the order of the underlying rule on which it is based. \end{theorem} In addition, whilst the computational cost of the full-memory approach is of order $O(N^2)$, the nested-mesh approach has order $O(N\log N)$. \begin{figure}[t] \begin{center} \input ./tikz/nest.tikz \end{center} \caption{Example of the grids used at subsequent steps of the nested mesh method (from top to bottom). On each line the active points of the grid are shown by black and non-active by grey dots.} \label{fig:nest} \end{figure} \section{Inversion of triangular Toeplitz matrices} \label{TTOEP} \subsection{Basic properties of triangular Toeplitz matrices} Let $\mathcal{T}_n$ denote the set of lower triangular Toeplitz $n\times n$ matrices% \footnote{Here and in the following we write matrix and vector indices in round brackets instead of putting them as subscripts, in order to introduce the convenient notation for QTT representation later.}% , i.e., $$ A \in\mathcal{T}_n \quad \Leftrightarrow\quad A = \left[a(j,k)\right]_{j,k=0}^{n-1},\quad a(j,k) = a(j-k), \quad a(p)=0, \quad p < 0.
$$ It is easy to check the following properties of $\mathcal{T}_n.$ \begin{enumerate} \item $A \in\mathcal{T}_n, \: B \in\mathcal{T}_n \:\Rightarrow\: AB \in\mathcal{T}_n;$ \item $A \in\mathcal{T}_n, \: B \in\mathcal{T}_n \:\Rightarrow\: AB = BA;$ \item $A \in\mathcal{T}_n, \: a(0)\neq0 \:\Rightarrow\: A^{-1} \in\mathcal{T}_n.$ \end{enumerate} By the last property, the inverse matrix $B=A^{-1},$ as well as all matrices from $\mathcal{T}_n,$ is defined by its first column. The standard solution method for triangular linear systems has complexity $\mathcal{O}(n^2)$ and yields the following trivial formula \begin{equation}\label{eq:inv} b(0)=\frac{1}{a(0)}, \qquad b(j) = -\frac{1}{a(0)} \sum_{k=1}^j b(j-k) a(k), \quad j=1,\ldots,n-1. \end{equation} For $A, B \in\mathcal{T}_n,$ the product $X=AB \in\mathcal{T}_n$ and is also defined by the first column $x=A b.$ Therefore, matrix-by-matrix multiplication in $\mathcal{T}_n$ is equivalent to the multiplication of a vector by a Toeplitz matrix, i.e., the discrete \emph{convolution} $x(j) = \sum_{k=0}^j a(j-k) b(k).$ A naive computation by this formula requires $\mathcal{O}(n^2)$ operations, but it is well-known that it can be computed in $\mathcal{O}(n \log n)$ operations using the fast Fourier transform (FFT) algorithm~\cite{gauss-fft,cooleytukey-fft}. To recall this, we note that each $n \times n$ Toeplitz matrix $T$ is the leading submatrix of some $2n \times 2n$ circulant matrix \begin{equation}\nonumber C = \begin{bmatrix} T & * \\ * & T \end{bmatrix}, \qquad C = \left[c(j,k)\right], \quad\mbox{where}\quad c(j,k)=c(j-k \mod 2n), \end{equation} and all circulant matrices are diagonalized by the unitary Fourier matrix as follows (see, e.g.,~\cite{golub-1996}) $$ C = F^* \Lambda F, \qquad \Lambda = \sqrt{2n} \mathop{\mathrm{diag}}\nolimits(F c).
$$ Therefore, multiplication by $C$ and hence by $T$ can be performed by three FFTs of size $2n$ with complexity $\mathcal{O}(n \log n).$ The inversion of triangular Toeplitz matrices has asymptotically the same complexity, i.e., $c M(n),$ where $M(n)$ denotes the complexity of matrix multiplication. Modern highly optimized inversion algorithms reduce the constant to the range from $c=1.4$ to $c=1.5,$ see, e.g.,~\cite{ttoep-inv-murphy}. We now recall the classical algorithms, which have a slightly larger constant $c$ but are much simpler and easier to follow. In Sec.~\ref{TTINV} we will adjust the classical inversion algorithms to use the compressed format for the approximate representation of matrices, reducing the complexity to \emph{sublinear} w.r.t. $n.$ \subsection{Divide and conquer method} To benefit from the Toeplitz structure and reach $\mathcal{O}(n \log n)$ complexity for the inversion algorithm, we can use the divide-and-conquer strategy. This was noted in~\cite{morf-dc-1980} and developed in~\cite{commenges-dc-1984}. It is easy to check that if a $2n \times 2n$ lower triangular Toeplitz matrix $A' \in \mathcal{T}_{2n}$ is partitioned into $n \times n$ blocks, the inverse matrix can be written as follows \begin{equation}\label{eq:dc} A' = \begin{bmatrix} A & \\ C & A \end{bmatrix}, \qquad (A')^{-1} = \begin{bmatrix} A^{-1} & \\ -A^{-1}CA^{-1} & A^{-1} \end{bmatrix} = \begin{bmatrix} A^{-1} & \\ & A^{-1} \end{bmatrix} \: \begin{bmatrix} I & \\ -CA^{-1} & I \end{bmatrix}, \end{equation} where $A\in\mathcal{T}_n,$ $A^{-1}\in\mathcal{T}_n$ and $C$ is a Toeplitz matrix. If $n=2^d$ and $A_d\in\mathcal{T}_{2^d},$ this formula yields a recursive method to compute $A_d^{-1}.$ We start from some small $d_0$ and use~\eqref{eq:inv} to compute the inverse of the $2^{d_0}\times 2^{d_0}$ leading submatrix $A_{d_0}\in\mathcal{T}_{2^{d_0}}.$ Then we subsequently apply~\eqref{eq:dc} and compute $A^{-1}_d$ in $(d-d_0)$ steps.
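As an illustration, the recursion~\eqref{eq:dc} with the trivial formula~\eqref{eq:inv} as the base case can be sketched in pure Python (a minimal dense sketch operating on first columns only, with naive $\mathcal{O}(n^2)$ convolutions instead of FFTs; all helper names are ours):

```python
# Minimal sketch of divide and conquer inversion in T_n: the first column
# of A^{-1} is doubled in length at each step via the block formula,
# with the trivial O(n^2) rule as the base case.
def conv(a, b):
    """Lower triangular Toeplitz matvec: x(j) = sum_{k<=j} a(j-k) b(k)."""
    n = len(b)
    return [sum(a[j - k] * b[k] for k in range(j + 1)) for j in range(n)]

def inv_base(a):
    """First column of A^{-1} by the trivial recursion (eq. inv)."""
    b = [1.0 / a[0]]
    for j in range(1, len(a)):
        b.append(-sum(b[j - k] * a[k] for k in range(1, j + 1)) / a[0])
    return b

def inv_dc(a):
    """First column of A^{-1} for A in T_n, n a power of two."""
    n = len(a)
    if n <= 4:                       # base case
        return inv_base(a)
    b = inv_dc(a[: n // 2])          # first column of A^{-1} (top block)
    # First column of C b: C is the lower-left Toeplitz block of A',
    # with entries C(j, k) = a(n/2 + j - k).
    g = [sum(a[n // 2 + j - k] * b[k] for k in range(n // 2))
         for j in range(n // 2)]
    g = conv(b, g)                   # first column of A^{-1} C A^{-1}
    return b + [-v for v in g]
```

A production version would replace `conv` by the FFT-based multiplication described above.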
Each step requires computing the first column of $A^{-1}_t C_t A^{-1}_t$ with $2^t \times 2^t$ Toeplitz matrices $A^{-1}_t$ and $C_t,$ where $t=d_0+1,\ldots,d.$ Each multiplication is done in $\mathcal{O}(t 2^t)$ operations, which sums to $\mathcal{O}(d 2^d) = \mathcal{O}(n \log n)$ overall complexity. More precisely, the cost of the divide and conquer algorithm is smaller than $12$ FFTs of size $n.$ \subsection{Bini's and related approximate methods} In order to reduce the number of FFTs used in computations and obtain an algorithm with better parallel performance, an approximate method to compute $A^{-1}$ for $A\in\mathcal{T}_n$ was proposed in~\cite{bini-1984}. It is noted that $\mathcal{T}_n$ is the algebra generated by the matrix $H\in\mathcal{T}_n$ with unit elements on the subdiagonal and zeros elsewhere, i.e., the transpose of a Jordan block with zero diagonal. Therefore, $A\in\mathcal{T}_n$ with first column $a=[a(j)]_{j=0}^{n-1}$ is written as $A = \sum_{j=0}^{n-1} a(j) H^j.$ The idea is to add a small element $\varepsilon^n$ at the top right corner of the matrix, i.e., to substitute $H$ by $H_\varepsilon = H + \varepsilon^n e_0 e_{n-1}^\trans.$ It is easy to check that $D_\varepsilon H_\varepsilon D_\varepsilon^{-1} = \varepsilon C, $ where $D_\varepsilon=\mathop{\mathrm{diag}}\nolimits\{\varepsilon^j\}_{j=0}^{n-1},$ and $C = H_1 = H + e_0 e_{n-1}^\trans$ generates the algebra of circulant $n \times n$ matrices.
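The scaling identity $D_\varepsilon H_\varepsilon D_\varepsilon^{-1} = \varepsilon C$ is indeed easy to check, e.g., numerically for a small $n$ (a dense pure-Python sketch; the helper names are ours):

```python
# Numerical check of D_eps * H_eps * D_eps^{-1} = eps * C for small n.
n, eps = 8, 0.5

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# H_eps: ones on the subdiagonal, eps^n in the top right corner.
H_eps = [[0.0] * n for _ in range(n)]
for j in range(1, n):
    H_eps[j][j - 1] = 1.0
H_eps[0][n - 1] = eps ** n

# C: the circulant shift matrix (H with 1 in the top right corner).
C = [[0.0] * n for _ in range(n)]
for j in range(1, n):
    C[j][j - 1] = 1.0
C[0][n - 1] = 1.0

# D_eps = diag(eps^0, ..., eps^{n-1}) and its inverse.
D = [[eps ** i if i == j else 0.0 for j in range(n)] for i in range(n)]
Dinv = [[eps ** -i if i == j else 0.0 for j in range(n)] for i in range(n)]

lhs = matmul(matmul(D, H_eps), Dinv)   # should equal eps * C
```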
Then $A$ and $A^{-1}$ are approximated as follows \begin{equation}\label{eq:bini} \begin{split} A \approx \tilde A_\varepsilon & = \sum_{j=0}^{n-1} a(j) H_\varepsilon^j = D_\varepsilon^{-1} \left( \sum_{j=0}^{n-1} a(j) \varepsilon^j C^j \right) D_\varepsilon = D_\varepsilon^{-1} C_\varepsilon D_\varepsilon = D_\varepsilon^{-1} F^* \Lambda_\varepsilon F D_\varepsilon, \\ A^{-1} \approx \tilde A_\varepsilon^{-1} & = D_\varepsilon^{-1} F^* \Lambda_\varepsilon^{-1} F D_\varepsilon, \qquad \mbox{where}\quad \Lambda_\varepsilon = \sqrt{n} \mathop{\mathrm{diag}}\nolimits(F a_\varepsilon), \quad a_\varepsilon(j)=a(j)\varepsilon^j. \end{split} \end{equation} The first column of $\tilde A_\varepsilon^{-1}$ is computed using two FFTs of size $n.$ This idea was revised in~\cite{mng-bini-2004}, where it was proposed to apply Bini's algorithm to the first column $a$ of matrix $A$ padded with $n$ zero elements. The revised version of Bini's algorithm requires two FFTs of size $2n$ and has better accuracy properties. \subsection{Newton iteration} The classical Newton iteration \begin{equation}\label{eq:nw} B_{k+1} = 2 B_k - B_k A B_k \end{equation} was proposed in~\cite{schulz-newton-1933} for computing the inverse $A^{-1}$ of a nonsingular matrix $A.$ It converges quadratically if the initial guess $B_0$ satisfies $\|I - AB_0\| < 1$ in some operator norm. In~\cite{benisrael-newton-1966} it is shown that for $B_0 = \mu A^*$ with a sufficiently small real $\mu$ the Newton iteration converges to the inverse $A^{-1}$ of a nonsingular matrix or to the pseudoinverse $A^\dagger$ of a singular matrix $A.$ Further analysis is provided in~\cite{pan-newton-1991}; for instance, it is shown that $\mu^{-1} = \|A\|_1\|A\|_{\infty}$ is a good and reliable choice.
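For a small dense matrix, the iteration~\eqref{eq:nw} with this choice of $\mu$ can be sketched as follows (pure Python; for a real matrix $A^*$ reduces to $A^\trans$; the helper names are ours):

```python
# Schulz/Newton iteration B_{k+1} = 2 B_k - B_k A B_k with the initial
# guess B_0 = mu * A^T, mu^{-1} = ||A||_1 ||A||_inf (real A, so A* = A^T).
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def newton_inverse(A, steps=60):
    n = len(A)
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(v) for v in row) for row in A)
    mu = 1.0 / (norm1 * norminf)
    B = [[mu * A[j][i] for j in range(n)] for i in range(n)]  # mu * A^T
    for _ in range(steps):
        BA = matmul(B, A)
        B = [[2 * B[i][j] - sum(BA[i][k] * B[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return B
```

With quadratic convergence, far fewer than 60 steps already reach machine precision; the fixed iteration count just keeps the sketch simple.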
In relation to Toeplitz and related structured matrices, the Newton iteration with approximation of the result on each step was developed using the concept of displacement ranks~\cite{pan-newton-2002} and tensor product approximations \cite{hkt-iter-2008,oot-newton-2008,ost-latensor-2009}. For $A\in\mathcal{T}_n$ the choice of initial guess $B_0 = \mu A^*$ is not effective, since $A^* \notin\mathcal{T}_n$ and we cannot perform iterations with $B_k\in\mathcal{T}_n,$ which would grant low storage and fast multiplication. If $B_0\in\mathcal{T}_n,$ every Newton iteration costs two convolutions, i.e., $6$ FFTs of size $2n.$ \begin{remark} A single Newton iteration for lower triangular Toeplitz matrices is slower than the divide and conquer method. \end{remark} It is not easy to provide a good initial guess $B_0\in\mathcal{T}_n$ for which the Newton iteration with a given matrix $A\in\mathcal{T}_n$ converges in one or a few steps. However, the Newton iteration can be used to improve the accuracy of a matrix $B \approx A^{-1}, B\in\mathcal{T}_n,$ computed by other means, if $\|I - AB\| < 1.$ For instance, we can note the following relation between the divide and conquer method and the Newton iteration. \begin{remark} For matrix $A' \in \mathcal{T}_{2n}$ defined in~\eqref{eq:dc}, the Newton iteration~\eqref{eq:nw} with initial guess \begin{equation}\nonumber B_0 = \begin{bmatrix} A^{-1} & 0 \\ 0 & A^{-1}\end{bmatrix}, \qquad A \in \mathcal{T}_n, \end{equation} gives $B_1 = (A')^{-1}$, i.e., converges in one step and is equivalent to the divide and conquer method~\eqref{eq:dc}. \end{remark} Therefore, for $A\in\mathcal{T}_n,$ the divide and conquer method is always preferable to the Newton iteration, which reduces to the divide and conquer method in this special case.
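This one-step convergence is easy to check numerically; the sketch below (dense pure Python, our own helper names) performs a single iteration~\eqref{eq:nw} starting from the block-diagonal guess $B_0 = \operatorname{diag}(A^{-1}, A^{-1})$, with both diagonal blocks filled with $A^{-1}$, and verifies that the result inverts a $4\times4$ example exactly:

```python
# One Newton step from the block-diagonal guess diag(A^{-1}, A^{-1})
# reproduces (A')^{-1} (checked here for a 4x4 triangular Toeplitz A').
n = 4
col = [2.0, 1.0, 0.5, 0.25]                       # first column of A' in T_4
Ap = [[col[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    m = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

# Inverse of the 2x2 leading block A = [[2, 0], [1, 2]].
Ainv = [[0.5, 0.0], [-0.25, 0.5]]
B0 = [[0.0] * n for _ in range(n)]
for i in range(2):
    for j in range(2):
        B0[i][j] = Ainv[i][j]
        B0[i + 2][j + 2] = Ainv[i][j]

# Single step B_1 = 2 B_0 - B_0 A' B_0.
B0Ap = matmul(B0, Ap)
B1 = [[2 * B0[i][j] - sum(B0Ap[i][k] * B0[k][j] for k in range(n))
       for j in range(n)] for i in range(n)]

residual = matmul(Ap, B1)                          # should be the identity
```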
\begin{figure}[t] \begin{center} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/mat/aa50n28t01.tikz}} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/mat/ba50n28t01.tikz}} \hfil \end{center} \begin{center} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/mat/aa80n28t01.tikz}} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/mat/ba80n28t01.tikz}} \hfil \end{center} \caption{Decay profiles of the triangular Toeplitz matrix~\eqref{eq3} (left) and its inverse (right) for $n=2^{28},$ $h=T/n,$ $T=10$ and $\alpha=0.5$ (top) and $\alpha=0.8$ (bottom) and for different mass $m.$} \label{fig:decay} \end{figure} \subsection{Decay of the elements of inverse matrix} It is instructive to look at the decay profiles of the elements of a triangular Toeplitz matrix~\eqref{eq3} and its inverse, see Fig.~\ref{fig:decay}. There is a jump in magnitude between the diagonal and subdiagonal elements, i.e., $$ \frac{a_0}{a_1} = \frac{1-(\gamma m)^{-1}}{2^{\alpha+1} - 2}, \qquad \gamma=\frac{h^\alpha}{\Gamma(\alpha+2)}, $$ where the numerator increases when $n \to \infty,$ $h \to 0$ and tends to one when $m \to -\infty.$ After the jump, the elements decay polynomially, i.e., $a_p \sim p^{\alpha-1}$ for $p \geq 1.$ For the inverse matrix the behaviour is the same for a certain (possibly very long) range of elements. However, after a certain point the rate of decay changes from $1-\alpha$ to $1+\alpha,$ i.e., $b_p \sim p^{-\alpha-1}$ for $p\geq P.$ The bend point $P,$ which is obtained experimentally, is a monotonically decreasing function $P=P(\gamma m),$ i.e., the larger the initial jump, the later the decay of the elements of the inverse matrix switches to the faster rate. The observed behaviour of the elements of the inverse matrix allows us to predict an upper bound for the norm of the second half of the vector, using information about the first half.
We will use this property in the next subsection, where the divide and conquer algorithm will be adapted for the vectors approximately given in the low--parametrical tensor--structured format. \section{Inversion of triangular Toeplitz matrices using QTT approximation} \label{TTINV} \subsection{Tensor train and quantized tensor train formats} A \emph{tensor} is an array with $d$ indices (or \emph{modes}) $$ \mathbf{A} = [a(k_1,\ldots,k_d)], \qquad k_p = 0,\ldots,n_p-1, \quad p=1,\ldots,d. $$ The tensor train (TT) format~\cite{osel-newten-2009eng,osel-tt-2011} for the tensor $\mathbf{A}$ reads\footnote{We will often write the equations in elementwise form, which assumes that all indices run through all possible values.} \begin{equation}\label{eq:tt} a(k_1,k_2,\ldots,k_d) = A^{(1)}_{k_1} A^{(2)}_{k_2} \ldots A^{(d)}_{k_d}, \end{equation} where each $A^{(p)}_{k_p}$ is an $r_{p-1} \mathbin{\times} r_p$ matrix. Usually the \emph{border conditions} $r_0 = r_d = 1$ are imposed to make every entry $a(k_1,\ldots,k_d)$ a scalar. However, larger $r_0$ and $r_d$ can be considered and every entry of a tensor $\mathbf{A} = [a(k_1,\ldots,k_d)]$ becomes an $r_0 \mathbin{\times} r_d$ matrix. Values $r_0,\ldots,r_{d-1}$ are referred to as~\emph{TT--ranks} and characterize the~\emph{separation properties} of the tensor~$\mathbf{A}.$ Three-dimensional arrays $A^{(p)} = [A^{(p)}_{k_p}]$ are referred to as~\emph{TT--cores}. To apply the TT compression to low dimensional data, the idea of \emph{quantization} was proposed~\cite{osel-2d2d-2010,khor-qtt-2011}. We will explain the idea for a one-dimensional vector $a = [a(k)]_{k=0}^{n-1},$ restricting the discussion to $n=2^d.$ Define the binary notation of index $k$ as follows \begin{equation}\label{eq:bit} k=\overline{k_1\ldots k_d} \mathrel{\stackrel{\mathrm{def}}{=}} \sum_{p=1}^d k_p 2^{p-1}, \qquad k_p=0,1. 
\end{equation} The isomorphic mapping $k \leftrightarrow (k_1,\ldots,k_d)$ allows us to \emph{reshape} a vector $a=[a(k)]$ into the $d$--tensor $\dot{\mathbf{A}}=[\dot a(k_1,\ldots,k_d)].$ The TT format~\eqref{eq:tt} for the latter is called the \emph{QTT format} and reads \begin{equation} \label{eq:qtt} a(k) = a(\overline{k_1 \ldots k_d}) = \dot a(k_1,\ldots,k_d) = A^{(1)}_{k_1} \ldots A^{(d)}_{k_d}. \end{equation} This idea appeared in~\cite{osel-2d2d-2010} in the context of matrix approximation. In~\cite{khor-qtt-2011} the TT format applied after the quantization of indices was called the \emph{QTT format} and applied to a class of functions discretized on uniform grids, revealing impressive approximation properties. In particular, it was proven that the QTT--ranks of $\exp x,$ $\sin x,$ $\cos x,$ $x^p$ are uniformly bounded w.r.t. the grid size. For the functions $e^{-\alpha x^2},$ $x^\alpha,$ $\frac{\sin x}{x}$, $\frac{1}{x},$ etc., similar properties were found experimentally. \begin{figure}[t] \begin{center} \hfil \includegraphics[width=.48\textwidth]{./Pic/rank/f28.pdf} \hfil \includegraphics[width=.48\textwidth]{./Pic/rank/g28.pdf} \hfil \end{center} \begin{center} \hfil \includegraphics[width=.48\textwidth]{./Pic/rank/an28t01sp6.pdf} \hfil \includegraphics[width=.48\textwidth]{./Pic/rank/bn28t01sp6.pdf} \hfil \end{center} \caption{Effective QTT rank of the vector $[k^{\alpha-1}]$ (top left), the vector $[(k-1)^{\alpha+1}-2k^{\alpha+1}+(k+1)^{\alpha+1}]$ (top right), the first column of matrix~\eqref{eq3} (bottom left) and its inverse (bottom right) w.r.t. the parameter $\alpha$ and the relative approximation accuracy $\varepsilon$ in the Frobenius norm.
Problem size $n=2^{28},$ maximum time $T=10,$ mass $m=-10^{6}.$} \label{fig:r} \end{figure} The QTT separability of the function $x^{\alpha-1}$ discretized on a uniform grid is particularly important for us, because it motivates the use of the QTT approximation to develop superfast algorithms for the solution of fractional differential equations. In numerical experiments we found that the QTT ranks are very moderate for all $0 < \alpha < 1$ and for accuracy up to the machine threshold. The same holds for the first column of matrix~\eqref{eq3} as well as for its inverse. In Fig.~\ref{fig:r} we show the effective (average) QTT rank w.r.t. $\varepsilon$ and $\alpha.$ We note that the effective rank does not exceed $10,$ even for very large grids up to $n=2^{28}.$ To construct superfast algorithms in the QTT format we first have to compress the data into this format using an algorithm with sublinear complexity. The original TT--SVD algorithm proposed in~\cite{osel-tt-2011} requires all elements of the tensor and is therefore not suitable for this purpose. To compress matrix~\eqref{eq3} to the QTT format, we apply the cross interpolation algorithm TT--ACA proposed in~\cite{so-dmrgi-2011proc}. This method computes the approximation using only a few elements of the original array, and does not require all elements to be computed. The comparison of runtimes of the TT--SVD and TT--ACA algorithms is given in Fig.~\ref{fig:dmrg}. The time required to choose a good subset of elements for the interpolation depends on the structure of the data, which is defined by the parameters $\alpha$ and $m.$ It is easy to see that the behaviour of the data is less regular for large $\alpha$ and large mass, which leads to larger runtimes of TT--ACA. Nevertheless, we clearly see that TT--ACA outperforms TT--SVD for all examples and has sublinear complexity w.r.t.
problem size $n.$ \begin{figure}[t] \begin{center} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/dmrg/a10e08.tikz}} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/dmrg/a90e08.tikz}} \hfil \end{center} \caption{Runtimes of the TT--SVD and TT--ACA algorithms for the approximation of matrix~\eqref{eq3} in the QTT format w.r.t. size $n.$ (left) $\alpha=0.1,$ (right) $\alpha=0.9.$} \label{fig:dmrg} \end{figure} \subsection{Fourier transform and convolution in QTT format} \label{QTTconv} Inversion algorithms for triangular Toeplitz matrices $A \in \mathcal{T}_n$ recalled in Sec.~\ref{TTOEP} are based on two main operations: the Fourier transform and the discrete convolution. The radix-2 recurrence relation, which was known to Gauss~\cite{gauss-fft} and lies behind the famous Cooley--Tukey FFT algorithm~\cite{cooleytukey-fft}, perfectly matches the multilevel structure of the QTT format, resulting in the QTT--FFT algorithm~\cite{dks-ttfft-2012}. For a vector of size $n=2^d$ given approximately in the QTT format~\eqref{eq:qtt}, the QTT--FFT computes the Fourier transform with complexity~$\mathcal{O}(d^2 R^3),$ where $R$ is the maximum QTT rank of the input vector, the Fourier image, and all intermediate vectors of the algorithm. The discrete convolution, i.e., multiplication by a Toeplitz matrix, can be performed by three Fourier transforms with complexity $\mathcal{O}(d^2 R^3),$ where $R$ bounds the QTT ranks of both vectors to convolve as well as their Fourier images. As shown in~\cite{khkaz-conv-2011}, the convolution $c=a \mathbin{\star} b$ of two vectors with QTT ranks bounded by $r_a$ and $r_b$ can be written in QTT form with QTT ranks bounded by $2r_ar_b.$ This representation has large QTT ranks, which can be reduced to a value $r_c \leq 2r_ar_b$ using a TT--truncation algorithm.
We can use the SVD--based algorithm proposed in~\cite{osel-tt-2011} or the iterative DMRG--type approach proposed in~\cite{Os-mvk2-2011}, resulting in convolution algorithms with $\mathcal{O}(d r_a^3r_b^3)$ and $\mathcal{O}(d (r_a+r_b+r_c) r_a r_b r_c)$ complexity, respectively. If $r_a \approx r_b \approx r_c \approx R,$ the QTT--FFT and DMRG--based convolution algorithms have complexity $\mathcal{O}(d^2 R^3)$ and $\mathcal{O}(d R^4),$ respectively. Therefore, we cannot say in general which approach is better, even in the simplest case of almost equal QTT ranks. This will be established in the numerical experiments. \subsection{Shifts of vectors in QTT format} The convolution algorithm proposed in~\cite{khkaz-conv-2011} is based on a remarkable property of the shift matrices $L \in \mathcal{T}_{2^d}$ and $U = L^\trans,$ where the first column of $L$ is $l=\left(0,1,0,0,\ldots,0\right)^\trans.$ It is shown in~\cite{khkaz-conv-2011} that all matrices $L^p,$ $p=1,\ldots,2^d-1,$ have all QTT ranks equal to two. Hence, if a vector $a$ has QTT ranks $r_1,\ldots,r_{d-1},$ then for every $p$ the right-shifted vector $b=L^p a$ has QTT ranks not larger than $2r_1, \ldots, 2r_{d-1}.$ The same holds for the left shifts $c=U^p a.$ In the following theorem we improve this result for vectors shifted by one element.
\begin{theorem} Let $a=\left[a(k)\right]_{k=0}^{2^d-1}$ have the QTT representation~\eqref{eq:qtt}. Then the vector \begin{equation}\nonumber b=\begin{bmatrix} x & a(0) & \ldots & a(2^d-2)\end{bmatrix}^\trans \end{equation} has the QTT representation $b(k) = b(\overline{k_1 \ldots k_d}) = B^{(1)}_{k_1} \ldots B^{(d)}_{k_d}$ with the following cores \begin{equation}\label{eq:push} \begin{split} B^{(1)}_0 = \begin{bmatrix} \phantom{A_0^{(1)}} & 1 \end{bmatrix}, \qquad B^{(p)}_0 = \begin{bmatrix} A_0^{(p)} & \\ \phantom{b_p} & 1 \end{bmatrix}, \qquad B^{(d)}_0 = \begin{bmatrix} A_0^{(d)} \\ x \end{bmatrix}, \\ B^{(1)}_1 = \begin{bmatrix} A_0^{(1)} & \phantom{1} \end{bmatrix}, \qquad B^{(p)}_1 = \begin{bmatrix} A_1^{(p)} & \\ b_p & \phantom{1} \end{bmatrix}, \qquad B^{(d)}_1 = \begin{bmatrix} A_1^{(d)} \\ b_d \end{bmatrix}, \end{split} \end{equation} where $p=2,\ldots,d-1$ and $b_q = A^{(1)}_1 \ldots A^{(q-1)}_1 A^{(q)}_0$ for $q=2,\ldots,d.$ Similarly, the vector \begin{equation}\nonumber c=\begin{bmatrix} a(1) & \ldots & a(2^d-1) & y \end{bmatrix}^\trans \end{equation} has the QTT representation $c(k) = c(\overline{k_1 \ldots k_d}) = C^{(1)}_{k_1} \ldots C^{(d)}_{k_d}$ with the following cores \begin{equation}\label{eq:pull} \begin{split} C^{(1)}_0 = \begin{bmatrix} A_1^{(1)} & \phantom{1} \end{bmatrix}, \qquad C^{(p)}_0 = \begin{bmatrix} A_0^{(p)} & \\ c_p & \phantom{1} \end{bmatrix}, \qquad C^{(d)}_0 = \begin{bmatrix} A_0^{(d)} \\ c_d \end{bmatrix}, \\ C^{(1)}_1 = \begin{bmatrix} \phantom{A_0^{(1)}} & 1 \end{bmatrix}, \qquad C^{(p)}_1 = \begin{bmatrix} A_1^{(p)} & \\ & 1 \end{bmatrix}, \qquad C^{(d)}_1 = \begin{bmatrix} A_1^{(d)} \\ y \end{bmatrix}, \end{split} \end{equation} where $p=2,\ldots,d-1$ and $c_q = A^{(1)}_0 \ldots A^{(q-1)}_0 A^{(q)}_1$ for $q=2,\ldots,d.$ \end{theorem} \begin{proof} We check~\eqref{eq:push} straightforwardly.
For $k=0$ it holds \begin{equation}\nonumber b(0) = B^{(1)}_{0} \ldots B^{(d)}_{0} = \begin{bmatrix} \phantom{A_0^{(1)}} & 1 \end{bmatrix} \: \ldots \: \begin{bmatrix} A_0^{(p)} & \\ \phantom{b_p} & 1 \end{bmatrix} \: \ldots \: \begin{bmatrix} A_0^{(d)} \\ x \end{bmatrix} = x. \end{equation} For $k=\overline{k_1k_2\ldots k_d}$ with $k_1=1$ it holds \begin{equation}\nonumber \begin{split} b(k) & = b(\overline{1 k_2\ldots k_d}) = B^{(1)}_{1} B^{(2)}_{k_2} \ldots B^{(d)}_{k_d} = \begin{bmatrix} A_0^{(1)} & \phantom{1} \end{bmatrix} \: \begin{bmatrix} A_{k_2}^{(2)} & \\ * & * \end{bmatrix} \: \ldots \: \begin{bmatrix} A_{k_d}^{(d)} \\ * \end{bmatrix} \\ & = A_0^{(1)} A_{k_2}^{(2)} \ldots A_{k_d}^{(d)} = a(\overline{0 k_2 \ldots k_d}) = a(k-1), \end{split} \end{equation} where ``$*$'' denotes an arbitrary (zero or non-zero) element. For $k=\overline{k_1k_2k_3\ldots k_d}$ with $k_1=0$ and $k_2=1$ it holds \begin{equation}\nonumber \begin{split} b(k) & = b(\overline{01 k_3\ldots k_d}) = B^{(1)}_{0} B^{(2)}_{1} B^{(3)}_{k_3} \ldots B^{(d)}_{k_d} = \begin{bmatrix} \phantom{A_0^{(1)}} & {1} \end{bmatrix} \: \begin{bmatrix} A_1^{(2)} & \\ {b_2} & \phantom{1} \end{bmatrix} \: \begin{bmatrix} A_{k_3}^{(3)} & \\ * & * \end{bmatrix} \: \ldots \: \begin{bmatrix} A_{k_d}^{(d)} \\ * \end{bmatrix} \\ & = b_2 A_{k_3}^{(3)} \ldots A_{k_d}^{(d)} = A_1^{(1)} A_0^{(2)} A_{k_3}^{(3)} \ldots A_{k_d}^{(d)} = a(\overline{10 k_3\ldots k_d}) = a(k-1).
\end{split} \end{equation} Finally, for $k=\overline{k_1k_2k_3\ldots k_d}$ with $k_1=\ldots=k_{p-1}=0$ and $k_p=1$ it holds \begin{equation}\nonumber \begin{split} b(k) & = b(\overline{\underbrace{0 \ldots 0}_{p-1\:\textrm{zeros}}1 k_{p+1}\ldots k_d}) = B^{(1)}_{0}B^{(2)}_{0} \ldots B^{(p-1)}_{0} B^{(p)}_{1} B^{(p+1)}_{k_{p+1}} \ldots B^{(d)}_{k_d} \\ & = \begin{bmatrix} \phantom{A_0^{(1)}} & {1} \end{bmatrix} \: \begin{bmatrix} A_0^{(2)} & \\ \phantom{b_2} & {1} \end{bmatrix} \: \ldots \begin{bmatrix} A_0^{(p-1)} & \\ \phantom{b_2} & {1} \end{bmatrix} \: \begin{bmatrix} A_{1}^{(p)} & \\ b_p & \phantom{1} \end{bmatrix} \: \begin{bmatrix} A_{k_{p+1}}^{(p+1)} & \\ * & * \end{bmatrix} \: \ldots \: \begin{bmatrix} A_{k_d}^{(d)} \\ * \end{bmatrix} \\ & = b_p A_{k_{p+1}}^{(p+1)} \ldots A_{k_d}^{(d)} = A_1^{(1)} \ldots A_1^{(p-1)} A_0^{(p)} A_{k_{p+1}}^{(p+1)} \ldots A_{k_d}^{(d)} \\ & = a(\overline{\underbrace{1 \ldots 1}_{p-1\:\textrm{ones}}0 k_{p+1}\ldots k_d}) = a(k-1). \end{split} \end{equation} Representation~\eqref{eq:pull} is verified in the same way. \end{proof} \subsection{Divide and conquer algorithm in QTT format} We are now ready to present a version of the divide and conquer algorithm which operates on data given approximately in the QTT format. Let $n=2^d$ and consider $A\in\mathcal{T}_n$ defined by the first column $a(k),$ which is represented in the QTT format~\eqref{eq:qtt}. As previously, let $A_t$ denote the $2^t \times 2^t$ leading submatrix of $A.$ For small $d_0$ we can invert $A_{d_0}$ using the standard divide and conquer method and approximate the first column of $A_{d_0}^{-1}$ in the QTT format using the SVD-based algorithm~\cite{osel-tt-2011}. Now suppose that for some $t$ the first column of $A_t^{-1}$ is computed in the QTT format, and we have to compute the QTT approximation of the first column of $A_{t+1}^{-1},$ using the recursion~\eqref{eq:dc}.
It is necessary to describe the Toeplitz matrix $C$ which lies in the lower left part of $A_{t+1}.$ The first column of $C$ is $c_+ = [a(2^t), a(2^t+1), \ldots, a(2^{t+1}-1)]^\trans$ and has the following QTT representation \begin{equation}\label{eq:col} c_+(\overline{k_1 \ldots k_t}) = a(\overline{k_1 \ldots k_t} + 2^t) = A^{(1)}_{k_1} A^{(2)}_{k_2} \ldots A^{(t)}_{k_t} A^{(t+1)}_1 A^{(t+2)}_0 \ldots A^{(d)}_0. \end{equation} The first row of $C$ is $c_- = [a(2^t), a(2^t-1), \ldots, a(1)]^\trans.$ To construct the QTT representation for $c_-,$ first write the QTT representation for $a=[a(0), \ldots, a(2^t-1)]^\trans,$ which is $$ a(\overline{k_1 \ldots k_t}) = A^{(1)}_{k_1} A^{(2)}_{k_2} \ldots A^{(t)}_{k_t} A^{(t+1)}_0 A^{(t+2)}_0 \ldots A^{(d)}_0. $$ Then apply the `pull' operation and construct the QTT format for $a'=[a(1), \ldots, a(2^t)]^\trans$ as follows $$ a'(\overline{k_1 \ldots k_t}) = C^{(1)}_{k_1} C^{(2)}_{k_2} \ldots C^{(t)}_{k_t} A^{(t+1)}_0 A^{(t+2)}_0 \ldots A^{(d)}_0, $$ where the TT--cores $C_{k_q}^{(q)}$ are defined by~\eqref{eq:pull}. Finally, reverse the ordering of the elements in the vector $a'$ to obtain the QTT format for $c_-$ as follows \begin{equation}\label{eq:row} c_-(\overline{k_1 \ldots k_t}) = C^{(1)}_{1-k_1} C^{(2)}_{1-k_2} \ldots C^{(t)}_{1-k_t} A^{(t+1)}_0 A^{(t+2)}_0 \ldots A^{(d)}_0. \end{equation} We summarize the above steps in Alg.~\ref{alg:dc}. Note that the workhorse of the divide and conquer method is the discrete convolution in the QTT format, which can be performed by two different methods. This results in two variants of the algorithm with different performance, which will be compared in the numerical experiments.
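The index bookkeeping behind~\eqref{eq:col} and~\eqref{eq:row} rests on the binary notation~\eqref{eq:bit}: adding $2^t$ to $k=\overline{k_1\ldots k_t}$ sets the bit $k_{t+1}$ and leaves $k_1,\ldots,k_t$ unchanged. A quick pure-Python sketch (the helper names are ours):

```python
# Check of the binary notation k = sum_p k_p 2^{p-1}: adding 2^t to
# k < 2^t sets bit k_{t+1} and leaves the lower bits k_1,...,k_t intact.
def bits(k, d):
    """Bits (k_1, ..., k_d), least significant first."""
    return [(k >> p) & 1 for p in range(d)]

def from_bits(ks):
    return sum(kp << p for p, kp in enumerate(ks))

d, t = 6, 3
for k in range(2 ** t):
    kb, sb = bits(k, d), bits(k + 2 ** t, d)
    assert sb[:t] == kb[:t]                   # k_1, ..., k_t unchanged
    assert sb[t] == 1                         # bit k_{t+1} is set
    assert all(b == 0 for b in sb[t + 1:])    # higher bits stay zero
    assert from_bits(sb) == k + 2 ** t
```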
\begin{algorithm}[t] \caption{Divide and conquer in QTT format} \label{alg:dc} \begin{algorithmic}[1] \REQUIRE{$A\in\mathcal{T}_n,$ $n=2^d,$ given by vector $[a(k)]_{k=0}^{n-1}$ in QTT format~\eqref{eq:qtt}} \ENSURE{$B=A^{-1}\in\mathcal{T}_n$ given in QTT format} \STATE For small $d_0,$ compute the first column of the $2^{d_0}\times 2^{d_0}$ leading submatrix $A_{d_0}.$ Compute $B_{d_0}=A_{d_0}^{-1}$ by~\eqref{eq:inv} and approximate it in the QTT format by the TT--SVD algorithm~\cite{osel-tt-2011}. \FOR{$t=d_0,\ldots,d-1$} \STATE Compute the first row and column of matrix $C_t$ in~\eqref{eq:dc} in the QTT format by~\eqref{eq:col} and~\eqref{eq:row}. \STATE Compute the first column of $B_t C_t B_t$ by two convolutions in the QTT format, see Sec.~\ref{QTTconv}. \STATE Combine the first column of $B_t$ and the first column of $B_t C_t B_t$ given in the QTT format as follows \begin{equation}\nonumber b(\overline{k_1\ldots k_t}) = B^{(1)}_{k_1} \ldots B^{(t)}_{k_t}, \qquad g(\overline{k_1\ldots k_t}) = G^{(1)}_{k_1} \ldots G^{(t)}_{k_t}, \end{equation} into a single vector $b'$ in the QTT format, which is defined as follows \begin{equation}\nonumber b'(\overline{k_1\ldots k_t k_{t+1}}) = \begin{bmatrix} B^{(1)}_{k_1} & G^{(1)}_{k_1}\end{bmatrix} \: \begin{bmatrix} B^{(2)}_{k_2} & \\ & G^{(2)}_{k_2}\end{bmatrix} \: \ldots \: \begin{bmatrix} B^{(t)}_{k_t} & \\ & G^{(t)}_{k_t}\end{bmatrix} \: \begin{bmatrix} 1-k_{t+1} \\ k_{t+1}\end{bmatrix} \end{equation} \STATE Apply the TT--truncation algorithm to $b'$ to reduce the ranks of the QTT representation. \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Modified Bini's algorithm in QTT format} The implementation of~\eqref{eq:bini} in the QTT format is very straightforward. It is enough to mention that the QTT format of the vector $[\varepsilon^j]_{j=0}^{n-1},$ $n=2^d,$ has all QTT--ranks equal to one~(see~\cite{khor-qtt-2011}), since $$ \varepsilon^j = \varepsilon^{\overline{j_1j_2\ldots j_d}} = \varepsilon^{j_1} \varepsilon^{2j_2} \ldots \varepsilon^{2^{d-1}j_d}.
$$ Therefore, multiplication of a vector in the QTT format by the diagonal matrix $D_\varepsilon$ requires only the appropriate scaling of the TT--cores. In Alg.~\ref{alg:bi} we present the QTT version of the modified Bini's algorithm~\cite[Alg. 2]{mng-bini-2004}. The algorithm includes two Fourier transforms in the QTT format which cannot be substituted by a discrete convolution. Note that this algorithm contains two approximation errors: \begin{itemize} \item The first comes from the original approximation of the triangular Toeplitz matrix $A$ by the diagonally scaled circulant matrix $A_\varepsilon.$ The accuracy of this approximation is governed by the parameter $\varepsilon.$ According to the numerical tests made by the authors of~\cite{mng-bini-2004}, good choices for Bini's and the modified Bini's methods are $\varepsilon^n = 0.5 \times 10^{-8}$ and $\varepsilon^n=10^{-5},$ respectively. \item The second error comes from the TT--truncation algorithm applied in the QTT--FFT algorithm and on each step of the Newton iteration. The threshold parameter of the TT--truncation should usually be smaller than $\varepsilon^n$ in order to maintain the accuracy of the result after the diagonal scaling. \end{itemize} \begin{algorithm}[t] \caption{Modified Bini's method in QTT format} \label{alg:bi} \begin{algorithmic}[1] \REQUIRE{$A\in\mathcal{T}_n,$ $n=2^d,$ given by vector $[a(k)]_{k=0}^{n-1}$ in QTT format~\eqref{eq:qtt}} \ENSURE{$B_\varepsilon = A^{-1}_\varepsilon \approx A^{-1}\in\mathcal{T}_n$ given in QTT format} \STATE Choose $0< \varepsilon < 1$ and let $\hat a(k) = \varepsilon^k a(k)$ for $k=0,\ldots,n-1$ and $\hat a(k)=0$ for $k=n,\ldots,2n-1.$ The QTT representation of $\hat a$ is the following $$ \hat a(k) = \hat a(\overline{k_1\ldots k_d k_{d+1}}) = \hat A^{(1)}_{k_1} \ldots \hat A^{(d)}_{k_d} (1-k_{d+1}), \qquad \hat A^{(p)}_{k_p} = \varepsilon^{2^{p-1} k_p} A^{(p)}_{k_p}, \quad p=1,\ldots,d.
$$ \STATE Apply QTT--FFT~\cite{dks-ttfft-2012} to compute the size--$2n$ Fourier transform $\lambda = \sqrt{2n} F \hat a.$ \STATE Apply Newton iteration~\eqref{eq:nw} to compute $c=\lambda^{-1}$ in the QTT format. Each iteration step includes the pointwise (Hadamard) multiplication of vectors in QTT format and TT--truncation to reduce the QTT--ranks. \STATE Apply QTT--FFT again to compute the size--$2n$ Fourier transform $\hat b = F^* c / \sqrt{2n}$ in the QTT format $$ \hat b(k) = \hat b(\overline{k_1 \ldots k_d k_{d+1}}) = \hat B^{(1)}_{k_1} \ldots \hat B^{(d)}_{k_d} \hat B^{(d+1)}_{k_{d+1}}. $$ \STATE The QTT representation of the first column of $B_\varepsilon$ is the following $$ b_\varepsilon(k) = b_\varepsilon(\overline{k_1\ldots k_d}) = B^{(1)}_{k_1} \ldots B^{(d)}_{k_d} \hat B^{(d+1)}_{0}, \qquad B^{(p)}_{k_p} = \varepsilon^{-2^{p-1} k_p} \hat B^{(p)}_{k_p}, \quad p=1,\ldots,d. $$ \end{algorithmic} \end{algorithm} \section{Numerical experiments} \label{NUM} \subsection{Timings of inversion algorithms} \begin{figure}[p] \begin{center} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/time/qa20sm5.tikz}} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/time/qa80sm5.tikz}} \hfil \end{center} \begin{center} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/time/qa20sm0.tikz}} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/time/qa80sm0.tikz}} \hfil \end{center} \begin{center} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/time/qa20sp5.tikz}} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/time/qa80sp5.tikz}} \hfil \end{center} \caption{Runtimes of divide and conquer algorithm (solid lines) and modified Bini's algorithm (dashed lines) for the inversion of triangular Toeplitz matrix~\eqref{eq3} in full and in the QTT formats (grey and black lines, respectively) w.r.t. 
problem size $n$ and step size $h=T/n.$ Fixed maximum time $T=10,$ fractional order $\alpha=0.2$ (left) and $\alpha=0.8$ (right), mass $m=-10^{-5}$ (top), $m=-10^0$ (middle), $m=-10^5$ (bottom).} \label{fig:qtime} \end{figure} In Fig.~\ref{fig:qtime} we show the runtimes of the inversion algorithms for triangular Toeplitz matrices in the full and in the QTT format w.r.t. the problem size and for different parameters $\alpha, m.$ The standard inversion algorithms have $\mathcal{O}(n\log n)$ complexity, which depends only on the problem size. In contrast, the complexity and runtime of the QTT algorithms depend on the QTT--ranks of the input and intermediate vectors, which are sensitive to the fractional order $\alpha,$ the mass $m$ and the step size $h.$ They also depend crucially on the method used to compute the discrete convolution in the QTT format. We note that the divide and conquer algorithm~\ref{alg:dc} which uses the QTT--conv algorithm~\cite{khkaz-conv-2011} is always significantly faster than the same method which uses the QTT--FFT algorithm~\cite{dks-ttfft-2012} to compute the convolution. However, QTT--FFT works well in the modified Bini's algorithm~\ref{alg:bi}, which appears to be the fastest method when the mass is small in modulus. For large mass the divide and conquer algorithm~\ref{alg:dc} with QTT--conv is preferable to the modified Bini's algorithm~\ref{alg:bi}. For mass $m\sim -1$ these methods have the same asymptotic complexity. From Fig.~\ref{fig:qtime} it can be easily seen that the QTT algorithms are asymptotically faster than the algorithms in the full format. For practical computations it is very important at which size $n$ the \emph{crossover point} occurs, i.e., the minimum value of $n$ for which the QTT algorithms are actually faster than the algorithms in the full format.
Numerical experiments show that for a wide range of parameters $\alpha$ and $m$ the crossover point between the full and QTT divide and conquer methods is $\log_2 n \simeq 20.$ This value is about the same as the crossover point between the FFT and QTT--FFT algorithms applied to signals with sparse Fourier image~\cite{dks-ttfft-2012}. The crossover point between the full and QTT versions of the modified Bini's algorithm depends on $m$ and $\alpha$ and can be even smaller, e.g., $\log_2 n \simeq 17$ for $m$ small in modulus. \subsection{Accuracy test for constant forcing} \begin{figure}[t] \begin{center} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/acc/t01sm0.tikz}} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/acc/t05sm0.tikz}} \hfil \end{center} \caption{Accuracy of the solution of the test problem~\eqref{eq:fconst} in the relative Frobenius norm w.r.t. problem size $n$ and for different fractional parameters $\alpha.$ Fixed maximum time $T=10$ (left) and $T=10^5$ (right). Mass $m=-1.$} \label{fig:acc} \end{figure} We consider a simple problem for which the analytical solution is available, namely the one with a constant forcing term, \begin{equation}\label{eq:fconst} D^{\alpha}_*y(t)=my(t)+\lambda, \qquad y(0)=y_0. \end{equation} The analytical solution has the following form \begin{equation} \label{eq:fsol} y(t)=y_0E_\alpha\left(m t^\alpha\right) +\frac{\lambda}{m}E_\alpha\left(m t^\alpha\right) -\frac{\lambda}{m}, \end{equation} where $E_\alpha$ is the Mittag--Leffler function~\cite{ML1, ML2}, which can be expressed and computed by certain (sometimes slowly converging) series. The accuracy verification results are shown in Fig.~\ref{fig:acc}. We see that as the problem size grows, the accuracy improves until a certain point, after which the error starts growing. This is explained by the machine precision errors amplified by the condition number of the matrix $A$ from~\eqref{eq3}, which is unbounded as $n$ grows to infinity.
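For moderate arguments, the Mittag--Leffler function and the solution~\eqref{eq:fsol} can be evaluated with the truncated power series $E_\alpha(z)=\sum_k z^k/\Gamma(\alpha k+1)$ (a simple sketch, not suitable for large $|z|$; at $\alpha=1$ it reduces to the exponential; the function names are ours):

```python
import math

def mittag_leffler(alpha, z, terms=100):
    """Truncated series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def y_exact(t, alpha, m, lam, y0):
    """Analytical solution (eq. fsol) of D^alpha y = m y + lam, y(0) = y0."""
    e = mittag_leffler(alpha, m * t ** alpha)
    return y0 * e + (lam / m) * e - lam / m
```

At $\alpha=1$ the problem reduces to $y'=my+\lambda$, whose solution $y_0e^{mt}+(\lambda/m)(e^{mt}-1)$ agrees with `y_exact`.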
\subsection{Accuracy of the Laplace transform} \begin{figure}[t] \begin{center} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/lap/l3aa80n22sm0.tikz}} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/lap/l3aa80t03sm0.tikz}} \hfil \end{center} \caption{The Laplace transform of the solution~\eqref{eq:sol34} and its discrete approximations. Fractional parameter $\alpha=0.8,$ mass $m=-1.$ Left: fixed number of grid points $n=2^{22},$ different maximum time $T.$ Right: fixed maximum time $T=10^3,$ different $n.$} \label{fig:lapa} \end{figure} Consider the following test equation \begin{equation}\label{eq:34} D^{\alpha}_*y(t)=my(t)+t^\frac{3}{4}, \qquad y(0)=1. \end{equation} Since the forcing term $f(t)=t^{3/4}$ does not have a short Taylor series representation, this problem can be difficult for methods based on such representations. Unlike in the previous example, the analytical solution in the time domain is not available. Instead we can study the problem using the Laplace transform.\footnote{The history of the Laplace transform and other essential details can be found, e.g., in~\cite{laplace-2003}.} The Laplace transform of a function $f(t)$ with an appropriate speed of decay is defined by \begin{equation}\label{eq:lap} F(s) = \mathop{\mathcal{L}}\nolimits\{f(t)\} = \int_0^{\infty} e^{-st} f(t) dt. \end{equation} The Laplace transform of a convolution is the product of the Laplace transforms, \begin{equation}\nonumber (f \star g)(t) = \int_0^t f(\tau) g(t-\tau) d\tau \qquad\Leftrightarrow\qquad \mathop{\mathcal{L}}\nolimits\{f \star g\}(s) = \mathop{\mathcal{L}}\nolimits\{f\}(s) \mathop{\mathcal{L}}\nolimits\{g\}(s). \end{equation} This allows us to simplify equation~\eqref{eq:34} and find the Laplace transform of the solution, \begin{equation} \label{eq:sol34} Y(s)=\frac{1}{s^{1-\alpha}(s^{\alpha}-m)}+\frac{\Gamma(1.75)}{s^{1.75} (s^{\alpha}-m)}.
\end{equation} The inverse Laplace transform $y(t)=\mathop{\mathcal{L}}\nolimits^{-1}\{Y(s)\}$ is given by a complex contour integral and is difficult to compute numerically. However, we can easily compute the Laplace transform of the discrete solution $\mathop{\mathcal{L}}\nolimits\tilde y = \tilde Y$ at points $\{s_k\}$ using a rectangle quadrature rule, \begin{equation} \label{eq:lapd} \tilde Y(s_k) \approx \tilde Y_k = h \sum_{j=0}^n e^{-t_j s_k} \tilde y(t_j), \qquad t_j=jh. \end{equation} Then we compare $Y(s_k)$ and $\tilde Y_k$ to establish the accuracy of the discrete solution $\tilde y(t_j)=y_j.$ Equation~\eqref{eq:lapd} contains three sources of error: the error of the discrete solution, the error of the quadrature rule, and the truncation of the indefinite integral~\eqref{eq:lap} to the finite interval $[0,T],$ $T=nh.$ To compute $\tilde Y(s)$ accurately for small $s$ we should take $T > s^{-1} \log \varepsilon^{-1},$ where $\varepsilon$ is the machine precision. To keep the quadrature rule error small, we should also use grids with a small time step $h.$ This is shown in Fig.~\ref{fig:lapa}, where the exact Laplace transform~\eqref{eq:sol34} is compared with its discrete approximations for different $T$ and $n.$ These factors motivate the use of a very large grid size $n$ and hence the QTT approach. It should be noted that the discrete Laplace transform~\eqref{eq:lapd} is computed very efficiently in the QTT format, since the QTT--ranks of the exponential vector are all equal to one. Finally, in Fig.~\ref{fig:lape} we show the accuracy of the Laplace transform of the solution~\eqref{eq:sol34} for $10^{-3} \leq s \leq 10^0$ and for different $T$ and $n.$ It is clear that a large problem size is essential for an accurate representation of the solution in the Laplace transform space.
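The quadrature~\eqref{eq:lapd} and the truncation requirement $T > s^{-1}\log\varepsilon^{-1}$ are easy to check on a toy signal. The snippet below (a plain full-format illustration with names of our choosing, not the QTT implementation) approximates the transform of $f(t)=e^{-t},$ whose exact transform is $F(s)=1/(s+1).$

```python
import math

def discrete_laplace(y, h, s):
    """Rectangle-rule approximation (eq:lapd) of the Laplace transform:
    Y(s) ~= h * sum_j exp(-s*t_j) * y[j],  with t_j = j*h."""
    return h * sum(math.exp(-s * j * h) * yj for j, yj in enumerate(y))

# toy signal f(t) = exp(-t); exact transform F(s) = 1/(s+1)
h, T = 1e-3, 20.0
n = int(T / h)
y = [math.exp(-j * h) for j in range(n + 1)]
approx = discrete_laplace(y, h, 1.0)   # close to F(1) = 0.5, with O(h) error
```

Here the truncated tail $\int_T^\infty e^{-(s+1)t}dt$ is already far below the $O(h)$ quadrature error, so the total error is dominated by the step size; decreasing $s$ or $h$ at fixed accuracy forces larger $n,$ which is exactly the regime where the QTT representation pays off.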
\begin{figure}[t] \begin{center} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/lap/l3ea80n28sm0.tikz}} \hfil \resizebox{.48\textwidth}{!}{\input{./Pic/lap/l3ea80t07sm0.tikz}} \hfil \end{center} \caption{Accuracy of the Laplace transform~\eqref{eq:sol34} given by the discrete approximation~\eqref{eq:lapd}. Fractional parameter $\alpha=0.8,$ mass $m=-1.$ Left: fixed number of grid points $n=2^{28},$ different maximum time $T.$ Right: fixed maximum time $T=10^7,$ different $n.$} \label{fig:lape} \end{figure} \section{Conclusions and future work} \label{CONC} We have presented a new family of algorithms for the solution of linear fractional ODEs. Our approach develops the framework of matrix algorithms for fractional calculus~\cite{podlubny-matr-2000} by embedding the QTT tensor decomposition inside the matrices, as proposed in~\cite{osel-2d2d-2010}. The proposed algorithms work on the matrix level and can be formally applied to the inversion of any triangular Toeplitz matrix, in particular one obtained by the discretisation of a linear fractional calculus problem. The workhorse of the inversion algorithms is the discrete convolution and/or Fourier transform of vectors given/approximated in the compressed QTT form. The success of the proposed algorithms, however, relies on the representability of the initial matrix and of the intermediate vectors arising in the computations in the QTT format with a modest accuracy. As a motivating example we consider a simple linear fractional differential equation which reduces to a weakly singular convolutional Volterra equation with an Abel-type kernel. The QTT approximation method benefits from both the smoothness and the decay of the Abel kernel, which results in an efficient QTT--representation of the problem matrix with accuracy up to the machine precision. As shown by the numerical experiments, the QTT--ranks of the intermediate vectors in the proposed algorithms remain bounded or grow slowly with the problem size.
As a result, our algorithms for the inversion of triangular Toeplitz matrices demonstrate sublinear $o(n)$ complexity, which drops to the $\mathcal{O}(\log^2 n)$ complexity of the superfast Fourier transform in certain cases. For our implementation the crossover point with the standard algorithms based on the FFTW library is $17 \lesssim \log_2n \lesssim 21$ for the considered experiments, i.e., the developed methods give not only an asymptotic benefit, but also a practical speedup for problems of moderate size. The proposed approach opens up a new class of algorithms for fractional calculus, i.e., methods of sublinear complexity. The developed techniques can be applied to fractional equations with several differential operators of different orders. They can also be generalised to fractional PDEs in two and more dimensions and to nonlinear fractional problems. These topics will be the subject of future work, to be reported elsewhere. \section*{References}
https://arxiv.org/abs/0807.0779
The complex Busemann-Petty problem for arbitrary measures
The complex Busemann-Petty problem asks whether origin symmetric convex bodies in C^n with smaller central hyperplane sections necessarily have smaller volume. The answer is affirmative if n\leq 3 and negative if n\geq 4. In this article we show that the answer remains the same if the volume is replaced by an "almost" arbitrary measure. This result is the complex analogue of Zvavitch's generalization to arbitrary measures of the original real Busemann-Petty problem.
\section{Introduction} In 1956 the Busemann-Petty problem was posed (see [BP]), asking the following question: suppose that $K$ and $L$ are two origin symmetric convex bodies in $\mathbb{R}^n$ such that for every $\xi \in S^{n-1},$ $$\mbox{\rm Vol}_{n-1}\bigl(K\cap \xi^{\perp}\bigr) \leq \mbox{\rm Vol}_{n-1}\bigl(L\cap \xi^{\perp}\bigr).$$ Does it follow that $$\mbox{\rm Vol}_{n}\bigl(K\bigr) \leq \mbox{\rm Vol}_{n}\bigl(L \bigr) \ \ ?$$ \noindent The answer is affirmative if $n\leq 4$ and negative if $n\geq 5.$ The problem was solved in the late 90's as a result of a series of papers ([LR], [Ba], [Gi], [Bu], [Lu], [Pa], [Ga], [Zh1], [K1], [K2], [Zh2], [GKS]; see [K5, p.3] for the history of the solution). A few years later Zvavitch [Zv] showed that one can replace the volume by essentially any measure on $\mathbb{R}^n.$ Namely, if we consider any even continuous positive function $f$ on $\mathbb{R}^n$ and denote by $\mu$ the measure with density $f,$ we can define \begin{center}$\mu(D)=\int_{D}f(x)dx$ \hspace{10pt} and \hspace{10pt} $\mu(D\cap \xi^{\perp})=\int_{D\cap \xi^{\perp}}f(x)dx,$ \end{center} for every closed bounded set $D$ in $\mathbb{R}^n$ and every $\xi \in S^{n-1}.$ Then the Busemann-Petty problem for general measures is stated as follows: Suppose that $K$ and $L$ are two origin symmetric convex bodies in $\mathbb{R}^n$ such that, for every $\xi \in S^{n-1},$ $$\mu(K\cap \xi^{\perp}) \leq \mu(L\cap \xi^{\perp}).$$ \noindent Does it follow that $$\mu(K)\leq \mu(L) \ ?$$ Surprisingly, the answer remains the same as in the original problem. It is affirmative for $n\leq 4$ and negative for $n\geq 5.$ Zvavitch's ideas for general measures were applied and further developed in [R], [Y1] and [Y2], for hyperbolic and spherical spaces and for sections of lower dimensions. In this article we study the complex version of the Busemann-Petty problem for arbitrary measures.
Let $\xi \in \mathbb{C}^n$ with $ |\xi|=1.$ We denote by $$H_{\xi}=\{z\in \mathbb{C}^n \ : \ (z,\xi)=\sum\limits_{k=1}^{n}z_k \overline{\xi}_k=0\}$$ \noindent the complex hyperplane perpendicular to $\xi.$ Origin symmetric convex bodies in $\mathbb{C}^n$ are the unit balls of norms on $\mathbb{C}^n.$ We denote by $\|\cdot\|_K$ the norm corresponding to the body $K$ $$K=\{z \in \mathbb{C}^n \ : \|z\|_{K}\leq 1\}.$$ \noindent We identify $\mathbb{C}^n$ with $\mathbb{R}^{2n}$ using the mapping $$\xi=(\xi_1,\ldots ,\xi_n)=(\xi_{11}+i\xi_{12},\ldots ,\xi_{n1}+i\xi_{n2})\longmapsto (\xi_{11},\xi_{12},\ldots, \xi_{n1},\xi_{n2})$$ \noindent and observe that under this mapping the complex hyperplane $H_{\xi}$ turns into a $(2n-2)$-dimensional subspace of $\mathbb{R}^{2n}$ orthogonal to the vectors \begin{center} $\xi=(\xi_{11},\xi_{12},\ldots, \xi_{n1},\xi_{n2})$ \hspace{1.1pt} and \hspace{1.1pt} $\xi^{\perp}=(-\xi_{12},\xi_{11},\ldots, -\xi_{n2},\xi_{n1}).$ \end{center} \noindent Since norms on $\mathbb{C}^n$ satisfy the equality $$\|\lambda z\|=|\lambda|\|z\|, \ \ \forall z \in \mathbb{C}^n, \ \forall \lambda \in \mathbb{C},$$ origin symmetric complex convex bodies correspond to those origin symmetric convex bodies $K$ in $\mathbb{R}^{2n}$ that are invariant with respect to any coordinate-wise two-dimensional rotation, namely for each $\theta \in [0,2\pi]$ and each $x=(x_{11},x_{12},\ldots, x_{n1},x_{n2}) \in \mathbb{R}^{2n}$ \begin{equation} \|x\|_{K}=\|R_{\theta}(x_{11},x_{12}), \ldots, R_{\theta}(x_{n1},x_{n2})\|_{K}, \end{equation} \noindent where $R_{\theta}$ stands for the counterclockwise rotation of $\mathbb{R}^2$ by the angle $\theta$ with respect to the origin. If a convex body satisfies $(1)$ we will say that \emph{it is invariant with respect to all $R_{\theta}$}.
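For example, the Euclidean ball $B_2^{2n}=\{x \in \mathbb{R}^{2n} : |x|_2\leq 1\},$ which corresponds to the complex Euclidean ball in $\mathbb{C}^n,$ satisfies $(1),$ since every $R_{\theta}$ acts on $\mathbb{R}^{2n}$ as an orthogonal transformation. On the other hand, the cube $[-1,1]^{2n}$ is origin symmetric but not invariant with respect to all $R_{\theta}$: already for $n=1$ the rotation $R_{\pi/4}$ maps the vertex $(1,1)$ to $(0,\sqrt{2}),$ which lies outside the cube.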
The complex Busemann-Petty problem ([KKZ]) can now be formulated as follows: Suppose $K$ and $L$ are origin symmetric invariant with respect to all $R_{\theta}$ convex bodies in $\mathbb{R}^{2n}$ such that $$\mbox{\rm Vol}_{2n-2}(K\cap H_\xi)\leq \mbox{\rm Vol}_{2n-2}(L\cap H_\xi)$$ for each $\xi$ from the unit sphere $S^{2n-1}$ of $\mathbb{R}^{2n}.$ Does it follow that $$\mbox{\rm Vol}_{2n}(K) \leq \mbox{\rm Vol}_{2n}(L) \ ?$$ As proved in [KKZ], the answer is affirmative if $n\leq 3$ and negative if $n\geq 4.$ Let $f$ be an even positive and continuous function on $\R^{2n}$. We define a measure $\mu$ on $\R^{2n}$ with density $f,$ so that \begin{center}$\mu(D)=\int_{D}f(x)dx$ \hspace{10pt} and \hspace{10pt} $\mu(D\cap H)=\int_{D\cap H}f(x)dx$ \end{center} for every closed bounded invariant with respect to all $R_{\theta}$ set $D$ in $\R^{2n}$ and every $(2n-2)$-dimensional subspace $H$ of $\R^{2n}.$ As proved in Section 3 (Lemma \ref{finv}), one may assume, without loss of generality, that the density $f$ is also invariant with respect to all rotations $R_{\theta}.$ We will call such a function \emph{$R_{\theta}$-invariant}. Then, the complex Busemann-Petty problem for arbitrary measures is stated as follows: \noindent Suppose $K$ and $L$ are origin symmetric invariant with respect to all $R_{\theta}$ convex bodies in $\mathbb{R}^{2n}$ so that for every $\xi \in S^{2n-1}$ $$\mu(K\cap H_{\xi})\leq \mu(L\cap H_{\xi}),$$ does it follow that $$\mu(K)\leq\mu(L) \ ?$$ In this article we prove that, analogously to the real case, the solution remains the same for arbitrary measures with a positive continuous density. Note that the positivity assumption on $f$ is necessary: otherwise one may take the density to be identically zero, in which case the affirmative answer to the problem holds trivially in all dimensions. \medskip \section{The Fourier analytic connection to the problem} Throughout this paper we use the Fourier transform of distributions.
The Schwartz class of rapidly decreasing infinitely differentiable functions (test functions) in $\mathbb{R}^n$ is denoted by $\mathcal S(\mathbb{R}^n),$ and the space of distributions over $\mathcal S(\mathbb{R}^n)$ by $\mathcal S^{\prime}(\mathbb{R}^n).$ The Fourier transform $\hat{f}$ of a distribution $f \in \mathcal S^{\prime}(\mathbb{R}^n)$ is defined by $\langle\hat{f},\phi\rangle=\langle f,\hat{\phi}\rangle$ for every test function $\phi.$ A distribution $f$ is called even homogeneous of degree $p \in \mathbb{R}$ if $\langle f(x),\phi(x/\alpha)\rangle=|\alpha|^{n+p}\langle f,\phi\rangle$ for every test function $\phi$ and every $\alpha \in \mathbb{R}, \ \alpha\neq 0.$ The Fourier transform of an even homogeneous distribution of degree $p$ is an even homogeneous distribution of degree $-n-p.$ A distribution $f$ is called positive definite if, for every test function $\phi, \ \langle f, \phi\ast\overline{\phi(-x)}\rangle \geq 0.$ By Schwartz's generalization of Bochner's theorem, this is equivalent to $\hat{f}$ being a positive distribution in the sense that $\langle\hat{f},\phi\rangle \geq 0$ for every non-negative test function $\phi$ (see [K5, section 2.5] for more details). A compact set $K \subset\mathbb{R}^n$ is called a star body if every straight line that passes through the origin crosses the boundary of the set at exactly two points and the boundary of $K$ is continuous in the sense that the \emph{Minkowski functional} of $K,$ defined by $$\|x\|_K=\min \{\alpha \geq 0 : x \in \alpha K \}$$ is a continuous function on $\mathbb{R}^n.$ A star body $K$ in $\mathbb{R}^n$ is called $k$-smooth (infinitely smooth) if the restriction of $\|x\|_{K}$ to the sphere $S^{n-1}$ belongs to the class $C^k(S^{n-1})$ (respectively, $C^{\infty}(S^{n-1})$). It is well known that one can approximate any convex body in $\mathbb{R}^n$ in the radial metric, $d(K, L)=\sup \{|\rho_{K}(\xi)-\rho_{L}(\xi)|,\ \xi \in S^{n-1} \},$ by a sequence of infinitely smooth convex bodies.
The proof is based on a simple convolution argument (see for example [Sch, Theorem 3.3.1]). It is also easy to see that any convex body in $\mathbb{R}^{2n}$ invariant with respect to all $R_{\theta}$ rotations can be approximated in the radial metric by a sequence of infinitely smooth convex bodies invariant with respect to all $R_{\theta}.$ This follows from the same convolution argument, because invariance with respect to $R_{\theta}$ is preserved under convolutions. If $D$ is an infinitely smooth origin symmetric star body in $\mathbb{R}^n$ and $0<k <n,$ then the Fourier transform of the distribution $\|x\|_D^{-k}$ is a homogeneous function of degree $-n+k$ on $\mathbb{R}^n,$ whose restriction to the sphere is infinitely smooth (see [K5, Lemma 3.16]). The following Proposition is a spherical version of Parseval's formula established in [K3] (see also [K5, Lemma 3.22]): \begin{prop}\label{prop:parseval} Let $D$ be an infinitely smooth origin symmetric star body in $\mathbb{R}^n$ and let $g \in C^{k-1}(\mathbb{R}^n)$ be an even homogeneous function of degree $-n+k.$ Then $$\int_{S^{n-1}}g(\theta)\| \theta\|_{D}^{-k}d\theta=(2\pi)^n \int_{S^{n-1}}\hat{g}(\xi)\bigl(\| x\|_{D}^{-k}\bigr)^{\wedge}(\xi) d\xi.$$ \end{prop} \medskip The concept of an intersection body was introduced by Lutwak [Lu]. This concept was generalized in [K3] as follows: Let $1\leq k <n,$ and let $D$ and $L$ be two origin symmetric star bodies in $\mathbb{R}^n.$ We say that $D$ is the \emph{$k$-intersection body of $L$} if for every $(n-k)$-dimensional subspace $H$ of $\mathbb{R}^n$ $$\mbox{\rm Vol}_k(D\cap H^{\perp})=\mbox{\rm Vol}_{n-k}(L\cap H ).$$ We introduce the class of $k$-intersection bodies as those star bodies that can be obtained as limits, in the radial metric, of $k$-intersection bodies of star bodies. A Fourier analytic characterization of $k$-intersection bodies was proved in [K4].
\noindent \begin{prop}\label{prop:k-posdef} An origin symmetric star body $D$ in $\mathbb{R}^n$ is a $k$-intersection body, $1\leq k \leq n-1,$ if and only if $\|\cdot\|_{D}^{-k}$ is a positive definite distribution. \end{prop} \medskip \bigskip Let $1\leq k<2n$ and let $H$ be a $(2n-k)$-dimensional subspace of $\mathbb{R}^{2n}.$ We denote by $\chi(\cdot)$ the indicator function of $[-1,1]$ and by $|\cdot|_2$ the Euclidean norm in the appropriate space. We fix an orthonormal basis $e_1,\ldots,e_k$ of the orthogonal subspace $H^{\perp}.$ For any convex body $D$ in $\mathbb{R}^{2n}$ and any even positive continuous function $f$ on $\mathbb{R}^{2n}$ we define the $(2n-k)$-dimensional parallel section function $A_{f,D,H}$ as a function on $\mathbb{R}^k$ such that \begin{equation}\label{eqt:secf}A_{f,D,H}(u)= \int_{\{x\in \mathbb{R}^{2n}:(x,e_1)=u_1,\ldots,(x,e_k)=u_k\} }\chi(\|x\|_D) f(x)dx, \ u\in \mathbb{R}^{k}. \end{equation} The original lower-dimensional parallel section function, which corresponds to the $(n-k)$-dimensional volume of the section of $D$ by a subspace $H$ (put $n$ in place of $2n$ and $f\equiv 1$), was defined in [K4]. Note that at $0$ the function $A_{f,D,H}$ gives the measure of the central section of the body $D$ by the subspace $H.$ Passing to polar coordinates on $H$ we have that \begin{eqnarray}\label{eqn:polarH} A_{f,D,H}(0)&=&\mu(D\cap H)=\int_{H}\chi(\|x\|_D) f(x)dx \nonumber \\ &=&\int\limits_{S^{2n-1}\cap H}\Bigl(\int_{0}^{\|\theta\|_D^{-1}}r^{2n-k-1}f(r\theta)dr\Bigr)d\theta. \end{eqnarray} If $D$ is infinitely smooth and $f \in C^{\infty}(\mathbb{R}^{2n}),$ the function $A_{f,D,H}$ is infinitely differentiable at the origin (see [K5, Lemma 2.4]). So we can consider the action of the distribution $|u|_2^{-q-k}/ \Gamma(-q/2)$ on $A_{f,D,H}$ and apply a standard regularization argument (see for example [K5, p.36] and [GS, p.10]).
Then the function \begin{equation}\label{eqt:q-A} q \longmapsto \left< \frac{|u|_2^{-q-k}}{\Gamma(-\frac{q}{2})},A_{f,D,H}(u)\right> \end{equation} \noindent is an entire function of $q\in \mathbb{C}.$ If $q=2m, \ m \in \mathbb{N}\cup \{0\},$ then $$\left<\frac{|u|_2^{-q-k}}{\Gamma(-\frac{q}{2})}\Big|_{q=2m}, A_{f,D,H}(u)\right>$$ \begin{equation*}\label{eqt:q=2m}=\frac{(-1)^m |S^{k-1}|}{2^{m+1}k(k+2)\cdots(k+2m-2)}\Delta^m A_{f,D,H}(0), \end{equation*} where $|S^{k-1}|=2\pi^{k/2}/\Gamma(k/2)$ is the surface area of the unit sphere in $\mathbb{R}^k,$ and $\Delta=\sum_{i=1}^k \partial^2/\partial u_i^2$ is the $k$-dimensional Laplace operator (see [GS, p.71-74]). Note that the function (\ref{eqt:q-A}) is equal, up to a constant, to the fractional power $\Delta^{q/2}A_{f,D,H}$ of the Laplacian (see [KKZ] or [K4] for the complete definition). \medskip \noindent \textbf{Remark.} If a body $D$ is $m$-smooth (or infinitely smooth) and $f \in C^m(\mathbb{R}^{2n})$ (or $C^{\infty}(\mathbb{R}^{2n})$), it is easy to see that the function $$x \mapsto |x|_2^{-m}\int_0^{\frac{|x|_2}{\|x\|_K}}r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr$$ is also $m$-times (infinitely) continuously differentiable on $\mathbb{R}^{2n}\setminus \{0 \}.$ \medskip The proof of the following proposition is similar to that of Proposition 4 in [KKZ], so we omit it here. \begin{prop}\label{prop:A} Let $D$ be an infinitely smooth origin symmetric convex body in $\mathbb{R}^{2n}, \ f \in C^{\infty}(\mathbb{R}^{2n}),$ and $1\leq k < 2n.$ Then for every $(2n-k)$-dimensional subspace $H$ of $\mathbb{R}^{2n}$ and any $q \in \mathbb{R}, \ -k<q< 2n-k,$ $$\left< \frac{|u|_2^{-q-k}}{\Gamma(-\frac{q}{2})}, A_{f,D,H}(u)\right>$$ \begin{equation}\label{eqt:A^{q}}=\frac{2^{-q-k}\pi^{-\frac{k}{2}}}{\Gamma\bigl(\frac{q+k}{2}\bigr)} \int\limits_{S^{2n-1}\cap H^{\perp}} \Bigl(|x|_2^{-2n+k+q}\int_{0}^{\frac{|x|_2}{\|x\|_D}} r^{2n-k-1-q}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\theta)d\theta.
\end{equation} \noindent Now, if $m \in \mathbb{N} \cup \{0\},$ $$\Delta^m A_{f,D,H}(0)$$ \begin{equation}\label{eqt:D^m}=\frac{(-1)^m}{(2\pi)^k}\int\limits_{S^{2n-1}\cap H^{\perp}} \Bigl(|x|_2^{-2n+k+2m}\int_{0}^{\frac{|x|_2}{\|x\|_D}} r^{2n-k-1-2m}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\theta)d\theta \end{equation} \end{prop} \bigskip The following (elementary) inequality is similar to Lemma 1 in [Zv]. \begin{lemma}\label{lm:elem} Let $a, b>0$ and let $\alpha$ be a non-negative function on $(0, \max \{a,b \} ]$ such that the integrals below converge. Then \begin{equation}\label{eqt:elem} \int_{0}^{a}t^{2n-1}\alpha(t)dt-a^2\int^{a}_0t^{2n-3}\alpha(t)dt\leq \int_{0}^{b}t^{2n-1}\alpha(t)dt-a^2\int^{b}_0t^{2n-3}\alpha(t)dt.\!\! \end{equation} \end{lemma} Indeed, the difference of the right-hand and left-hand sides equals $\int_{a}^{b}t^{2n-3}(t^{2}-a^{2})\alpha(t)dt,$ which is non-negative for any ordering of $a$ and $b.$ \section{Connection with $k$-intersection bodies}\label{measuresections} As mentioned in the Introduction, we can assume that the density function is $R_{\theta}$-invariant. This simple observation plays an important role in the solution of the problem.
\begin{lemma}\label{finv} Suppose $f$ is an even non-negative continuous function on $\mathbb{R}^{2n}$ and $\mu$ is a measure with density $f.$ Then there exists an even non-negative continuous function $\tilde{f},$ invariant with respect to all rotations $R_{\theta},$ such that \begin{center} $\mu(D)=\int_{D}\tilde{f}(x)dx $ \hspace{1.1pt} and \hspace{1.1pt} $\mu(D\cap H_{\xi})=\int_{D\cap H_{\xi}}\tilde{f}(x)dx,$ \end{center} for every closed bounded invariant with respect to all $R_{\theta}$ set $D$ in $\mathbb{R}^{2n}$ and every $\xi \in S^{2n-1}.$ \end{lemma} \noindent \textbf{Proof.} We define $\tilde{f}$ as the average of $f$ over the unit circle, $\tilde{f}(x)=\frac{1}{2\pi}\int_0^{2\pi}f(R_{\theta} x)d\theta,$ for every $x\in \mathbb{R}^{2n}.$ Then for every compact invariant with respect to all $R_{\theta}$ set $D$ in $\mathbb{R}^{2n},$ \begin{eqnarray*}\int_{D}\tilde{f}(x)dx&=&\frac{1}{2\pi}\int_{D}\int_0^{2\pi}f(R_{\theta} x)d\theta dx \\ &=&\frac{1}{2\pi}\int_0^{2\pi}\int_{R_{\theta}^{-1}D}f(y)dyd\theta= \mu(D),\end{eqnarray*} since $R_{\theta}^{-1}D=D$ for all $\theta \in [0,2\pi].$ Moreover, since central sections of complex convex bodies by complex hyperplanes correspond to convex bodies in $\mathbb{R}^{2n-2}$ that are also invariant with respect to the $R_{\theta}$ rotations, we similarly get that for every $\xi \in S^{2n-1},$ $$\mu(D\cap H_{\xi})=\int_{D\cap H_{\xi}}\tilde{f}(x)dx.$$ \qed Now we are ready to express the measure of the central sections in terms of the Fourier transform.
\begin{thm}\label{thm:musection} Suppose $K$ is an infinitely smooth origin symmetric invariant with respect to all $R_{\theta}$ convex body in $\mathbb{R}^{2n}, \ n \geq 2,$ and $f$ is an infinitely differentiable even positive and $R_{\theta}$-invariant function on $\mathbb{R}^{2n}.$ Then for every $\xi \in S^{2n-1}$ \begin{equation}\label{eqt:musection} \mu(K\cap H_{\xi})=\frac{1}{2\pi}\Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{K}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi) \end{equation} \end{thm} \noindent In order to prove Theorem \ref{thm:musection} we need the following: \begin{lemma}\label{lemma:constf} Let $K$ and $f$ be as in Theorem \ref{thm:musection}. Then for every $\xi \in S^{2n-1}$ the Fourier transform of the distribution \begin{equation}\label{invdistr}|x|_2^{-2n+2}\int_0^{\frac{|x|_2}{\|x\|_{K}}} r^{2n-3}f (r\frac{x}{|x|_{2}})dr \end{equation} is a constant function on $S^{2n-1}\cap H_{\xi}^{\perp}.$ \end{lemma} \noindent \textbf{Proof.} The function $\|x\|^{-1}_{K}$ is invariant with respect to all $R_{\theta}$ (see Introduction), so, since $f$ is $R_{\theta}$-invariant, it is easy to see that the distribution in (\ref{invdistr}) is a continuous function which is also invariant with respect to all rotations $R_{\theta}.$ By the connection between the Fourier transform of distributions and linear transformations, its Fourier transform is also invariant with respect to all $R_{\theta}$. As mentioned in the Introduction, the space $H_{\xi}^{\perp}$ is spanned by the vectors $\xi$ and $\xi^{\perp}.$ So every vector in $S^{2n-1}\cap H_{\xi}^{\perp}$ is a rotation $R_{\theta},$ for some $\theta \in [0,2\pi],$ of $\xi$ and hence the Fourier transform of $$|x|_2^{-2n+2}\int_0^{\frac{|x|_2}{\|x\|_{K}}} r^{2n-3}f (r\frac{x}{|x|_{2}})dr$$ is a constant function on $S^{2n-1}\cap H_{\xi}^{\perp}.$ \qed \medskip \noindent \textbf{Proof of Theorem \ref{thm:musection}}.
Let $\xi \in S^{2n-1}.$ In formula (\ref{eqt:D^m}) we put $H_{\xi}=H, \ k=2$ and $m=0.$ Then, by the definition of the lower-dimensional section function and equation (\ref{eqn:polarH}), we have that $$\mu(K\cap H_{\xi})=\frac{1}{(2\pi)^2}\int_{S^{2n-1}\cap H_{\xi}^{\perp}}\Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{K}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\eta)d\eta.$$ By Lemma \ref{lemma:constf}, the function under the integral is constant on the circle $S^{2n-1}\cap H_{\xi}^{\perp}.$ Since $\xi \in H_{\xi}^{\perp}$ we have that $$\mu(K\cap H_{\xi})=\frac{1}{(2\pi)^2}2\pi\Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{K}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi)$$ \noindent which proves the theorem.\qed \bigskip As in the case of the complex Busemann-Petty problem, the property of a body being a $2$-intersection body is closely related to the solution of the complex Busemann-Petty problem for arbitrary measures. \begin{thm}\label{thm:main2} The solution of the complex Busemann-Petty problem for arbitrary measures in $\mathbb{C}^n $ has an affirmative answer if and only if every origin symmetric invariant with respect to all $R_{\theta}$ convex body in $\mathbb{R}^{2n}$ is a $2$-intersection body. \end{thm} The proof of Theorem \ref{thm:main2} will follow from the remarks and lemmas below. \medskip \noindent \textbf{Remark 1.} To prove the affirmative part of the problem it is enough to consider infinitely smooth origin symmetric invariant with respect to all $R_{\theta}$ bodies. This is true because one can approximate the body $K$ from inside and the body $L$ from outside, in the radial metric, by infinitely smooth convex bodies invariant with respect to all $R_{\theta}.$ Then, if the affirmative answer holds for infinitely smooth bodies, it also holds in the general case. \medskip \noindent \textbf{Remark 2.} Let $D$ be an origin symmetric convex body which is not a $k$-intersection body.
Then there exists a sequence of infinitely smooth convex bodies with strictly positive curvature, none of which is a $k$-intersection body, that converges to $D$ in the radial metric (see [K5, Lemma 4.10]). If, in addition, $D$ is invariant with respect to all $R_{\theta},$ one can choose a sequence of bodies with the same property. \medskip \noindent \textbf{Remark 3.} A simple approximation argument allows us to prove Theorem \ref{thm:main2} only for measures whose density is an infinitely differentiable even positive and $R_{\theta}$-invariant function on $\mathbb{R}^{2n}$. Let $f$ be the even positive continuous $R_{\theta}$-invariant density function of a measure $\mu,$ as defined in the Introduction. Then there exists an increasing sequence $g_n$ of even positive functions in $C^{\infty}(\mathbb{R}^{2n})$ such that $g_n(x)\chi(\|x\|_D) \rightarrow f(x)\chi(\|x\|_D)$ a.e. for every compact set $D.$ Then by the Monotone Convergence Theorem we have that \begin{center}$\int_{\mathbb{R}^{2n}} g_n(x)\chi(\|x\|_D)dx\rightarrow \mu(D)$ \hspace{1.1pt} and \hspace{1.1pt} $\int_{H} g_n(x)\chi(\|x\|_D)dx\rightarrow \mu(H\cap D),$ \end{center} \noindent as $n\rightarrow \infty,$ for every subspace $H$ of $\mathbb{R}^{2n}.$ In addition, by Lemma \ref{finv}, we may assume that every $g_n$ is also $R_{\theta}$-invariant. \medskip Now we are ready to prove the affirmative part of the complex Busemann-Petty problem for arbitrary measures.
\begin{lemma}\label{lm:pos} Suppose $K$ and $L$ are infinitely smooth origin symmetric invariant with respect to all $R_{\theta}$ convex bodies in $\mathbb{R}^{2n}$ such that $K$ is a $2$-intersection body, and let $f$ be an infinitely differentiable even positive $R_{\theta}$-invariant function on $\mathbb{R}^{2n}.$ If for every $\xi \in S^{2n-1}$ \begin{equation}\label{eqt:sectineq} \mu(K\cap H_{\xi}) \leq \mu(L\cap H_{\xi}) \end{equation} then $$\mu(K)\leq \mu(L).$$ \end{lemma} \noindent \textbf{Proof.} By the remark before Proposition \ref{prop:A} and [K5, Lemma 3.16], the Fourier transforms of the distributions \begin{center} $|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{K}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr,$ \hspace {7pt} and \hspace{7pt} $|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{L}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr$ \end{center} are homogeneous of degree $-2$ and continuous functions on $\mathbb{R}^{2n}\setminus \{0\}.$ So, by Theorem \ref{thm:musection}, the inequality (\ref{eqt:sectineq}) becomes $$\Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{K}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi)$$ $$\leq \Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{L}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi).$$ Since $K$ is an infinitely smooth $2$-intersection body, by Proposition \ref{prop:k-posdef} and [K5, Theorem 3.16] the Fourier transform of the distribution $\|x\|_K^{-2}$ is a non-negative continuous, outside the origin, function on $\mathbb{R}^{2n}.$ Multiplying both sides of the latter inequality by $\bigl(\|x\|_K^{-2}\bigr)^{\wedge}$ and applying the spherical version of Parseval's formula, Proposition \ref{prop:parseval}, we have that $$\int_{S^{2n-1}}\bigl(\|x\|^{-2}_{K}\bigr)^{\wedge}(\xi)\Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{K}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi)d\xi$$ $$\leq
\int_{S^{2n-1}}\bigl(\|x\|^{-2}_{K}\bigr)^{\wedge}(\xi)\Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{L}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi)d\xi,$$ which gives $$\int_{S^{2n-1}}\|x\|^{-2}_{K}\int_{0}^{\|x\|_{K}^{-1}} r^{2n-3}f(rx)drdx$$ \begin{equation}\label{eqt:secintineq}\leq \int_{S^{2n-1}}\|x\|^{-2}_{K}\int_{0}^{\|x\|_{L}^{-1}} r^{2n-3}f(rx)drdx.\end{equation} \noindent We use the elementary inequality (\ref{eqt:elem}) with $a=\|x\|_{K}^{-1}, b=\|x\|_{L}^{-1}$ and $\alpha(r)=f(rx)$ and integrate over $S^{2n-1}.$ Then $$\int_{S^{2n-1}}\bigl(\int_{0}^{\|x\|_{K}^{-1}}r^{2n-1}f(rx)dr\bigr)dx- \int_{S^{2n-1}}\|x\|_{K}^{-2}\bigl(\int_{0}^{\|x\|_{K}^{-1}}r^{2n-3}f(rx)dr\bigr)dx$$ \begin{equation}\label{eqt:sphelem} \leq \int_{S^{2n-1}}\bigl(\int_{0}^{\|x\|_{L}^{-1}}r^{2n-1}f(rx)dr\bigr)dx- \int_{S^{2n-1}}\|x\|_{K}^{-2}\bigl(\int_{0}^{\|x\|_{L}^{-1}}r^{2n-3}f(rx)dr\bigr)dx. \end{equation} \noindent Adding (\ref{eqt:secintineq}) and (\ref{eqt:sphelem}) we obtain $$\int_{S^{2n-1}}\bigl(\int_0^{\|x\|_{K}^{-1}}r^{2n-1}f(rx)dr\bigr)dx \leq \int_{S^{2n-1}}\bigl(\int_0^{\|x\|_{L}^{-1}}r^{2n-1}f(rx)dr\bigr)dx,$$ which immediately implies that $$\mu(K)\leq \mu(L).$$ \qed For the negative part we need a perturbation argument to construct a body that gives a counterexample to the problem. The following lemma (without the assumption of invariance with respect to $R_{\theta}$ rotations) was proved in [Zv, Proposition 2] (see also [K5, Lemma 5.16]). The new body immediately inherits from the original convex body the additional property of invariance with respect to all $R_{\theta}.$
\begin{lemma}\label{lemma:countexmpl} Let $L$ be an infinitely smooth origin symmetric convex body with positive curvature and let $f, g \in C^2(\mathbb{R}^{2n})$ be such that $f$ is strictly positive on $\mathbb{R}^{2n}.$ For $\varepsilon >0$ we define a star body $K$ so that $$\int_0^{\|x\|^{-1}_K} t^{2n-3}f(tx)dt=\int_0^{\|x\|_{L}^{-1}}t^{2n-3}f(tx)dt-\varepsilon g(x), \ \forall x \in S^{2n-1}.$$ Then, if $\varepsilon$ is small enough, the body $K$ is convex. Moreover, if $L$ is invariant with respect to all $R_{\theta},$ and $f, g$ are $R_{\theta}$-invariant, then $K$ is also invariant with respect to all $R_{\theta}.$ \end{lemma} \medskip \begin{lemma}\label{lm:neg} Let $f \in C^{\infty}(\mathbb{R}^{2n})$ be an even positive $R_{\theta}$-invariant function. Suppose that $L$ is an infinitely smooth origin symmetric convex body in $\mathbb{R}^{2n}$ with positive curvature, invariant with respect to all $R_{\theta},$ which is not a $2$-intersection body. Then there exists an origin symmetric convex body $K$ in $\mathbb{R}^{2n},$ invariant with respect to all $R_{\theta},$ so that for every $\xi \in S^{2n-1}$ $$\mu(K\cap H_{\xi}) \leq \mu(L\cap H_{\xi})$$ but $$\mu(K)> \mu(L).$$ \end{lemma} \textbf{Proof.} The body $L$ is infinitely smooth, so, by [K5, Lemma 3.16], the Fourier transform of $\|x\|_{L}^{-2}$ is a continuous function on $\mathbb{R}^{2n}.$ Since $L$ is not a $2$-intersection body, by Proposition \ref{prop:k-posdef} there exists an open set $\Omega \subset S^{2n-1}$ where the Fourier transform of $\|x\|_L^{-2}$ is negative. We can assume that $\Omega$ is invariant with respect to the rotations $R_{\theta}$ since $L$ is.
Using a standard perturbation procedure for convex bodies, see for example [KKZ, Lemma 5] and [K5, p.96], we define an even non-negative invariant with respect to all $R_{\theta}$ function $h \in C^{\infty}(S^{2n-1})$ whose support is in $\Omega.$ We extend $h$ to an even homogeneous function $h(\frac{x}{|x|_2})|x|_2^{-2}$ of degree $-2$ on $\mathbb{R}^{2n}.$ Then, by [K5, Lemma 3.16], the Fourier transform of $h(\frac{x}{|x|_2})|x|_2^{-2}$ is an even homogeneous function $g(\frac{x}{|x|_2})|x|_2^{-2n+2}$ of degree $-2n+2$ on $\mathbb{R}^{2n},$ with $g\in C^{\infty}(S^{2n-1}).$ Moreover, $g$ is also invariant with respect to the rotations $R_{\theta}.$ The assumptions on the body $L$ allow us to apply Lemma \ref{lemma:countexmpl} and take $\varepsilon >0$ small enough to define a convex body $K$ by $$|x|_2^{-2n+2}\int_0^{\frac{|x|_2}{\|x\|_K}}t^{2n-3}f\bigl(t\frac{x}{|x|_2}\bigr)dt$$ $$=|x|_2^{-2n+2}\int_0^{\frac{|x|_2}{\|x\|_L}}t^{2n-3}f\bigl(t\frac{x}{|x|_2}\bigr)dt- \varepsilon g\bigl(\frac{x}{|x|_2}\bigr)|x|_2^{-2n+2}.$$ We apply the Fourier transform to both sides of the latter equality. Then, by Theorem \ref{thm:musection}, since $h\geq 0,$ we obtain the following inequality for the measures of the central sections of $K$ and $L$ by the subspace $H_{\xi}$: $$\mu(K\cap H_{\xi})=\frac{1}{2\pi}\Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{K}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi)$$ $$=\frac{1}{2\pi}\Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{L}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi)-(2\pi)^{2n-1}\varepsilon h(\xi)$$ $$\leq \mu(L\cap H_{\xi}).$$ On the other hand, the function $h$ is positive only where $\bigl(\|\cdot\|^{-2}_{L}\bigr)^{\wedge}$ is negative.
So, for every $\xi \in S^{2n-1},$ $$\bigl(\|\cdot\|^{-2}_{L}\bigr)^{\wedge}(\xi) \Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{K}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi)$$ $$=\bigl(\|\cdot\|^{-2}_{L}\bigr)^{\wedge}(\xi)\Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{L}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi)$$ $$-(2\pi)^{2n}\bigl(\|\cdot\|^{-2}_{L}\bigr)^{\wedge}(\xi)\varepsilon h(\xi)$$ $$\geq \bigl(\|\cdot\|^{-2}_{L}\bigr)^{\wedge}(\xi)\Bigl(|x|_2^{-2n+2}\int_{0}^{\frac{|x|_2}{\|x\|_{L}}} r^{2n-3}f\bigl(r\frac{x}{|x|_2}\bigr)dr\Bigr)^{\wedge}(\xi),$$ with strict inequality for $\xi$ in the support of $h.$ Now, we integrate the latter inequality over $S^{2n-1}$ and apply the spherical version of Parseval's identity; since $h$ is not identically zero, the resulting inequality is strict. Then, similarly to Lemma \ref{lm:pos}, we apply the elementary inequality for integrals, Lemma \ref{lm:elem}, and conclude that $$ \mu(K)> \mu(L).$$ \qed \bigskip \section{The solution of the problem}\label{solCBPGM} To prove the main result of this paper we need to determine the dimensions in which an origin symmetric convex body in $\mathbb{R}^{2n}$, invariant with respect to all $R_{\theta}$, is a $2$-intersection body. \begin{main} The solution to the complex Busemann-Petty problem for arbitrary measures is affirmative if $n\leq 3$ and negative if $n\geq 4.$ \end{main} \textbf{Proof.} It is known that an origin symmetric convex body in $\mathbb{R}^{2n}, n\geq 2,$ invariant with respect to all $R_{\theta},$ is a $k$-intersection body if $k\geq 2n-4$ (see [KKZ]).
Hence, we obtain an affirmative answer to the complex Busemann-Petty problem for arbitrary measures if $n\leq 3.$ Now, suppose that $n \geq 4.$ The unit ball $B_q^n$ of the complex space $l_q^n, \ q>2,$ considered as a subset of $\mathbb{R}^{2n}:$ $$B^n_q=\{x\in \mathbb{R}^{2n}:\|x\|_q=\bigl( (x_{11}^2+x_{12}^2)^{q/2}+\cdots +(x_{n1}^2+x_{n2}^2)^{q/2}\bigr)^{1/q} \leq 1 \}$$ provides a counter-example for the Lebesgue measure ($f\equiv 1$): it is not a $k$-intersection body for any $k<2n-4$ (see [KKZ, Theorem 4]). By Proposition \ref{prop:k-posdef} this implies that for $n\geq 4$ the distribution $\|x\|_q^{-2}$ is not positive definite. The result then follows by Theorem \ref{thm:main2}. \qed \bigbreak {\bf Acknowledgments:} The author was partially supported by the NSF grant DMS-0652571. Part of this work was carried out while the author was visiting the Pacific Institute of Mathematics, which the author thanks for its hospitality.
https://arxiv.org/abs/2110.06488
The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program
We study non-convex subgradient flows for training two-layer ReLU neural networks from a convex geometry and duality perspective. We characterize the implicit bias of unregularized non-convex gradient flow as convex regularization of an equivalent convex model. We then show that the limit points of non-convex subgradient flows can be identified via primal-dual correspondence in this convex optimization problem. Moreover, we derive a sufficient condition on the dual variables which ensures that the stationary points of the non-convex objective are the KKT points of the convex objective, thus proving convergence of non-convex gradient flows to the global optimum. For a class of regular training data distributions such as orthogonal separable data, we show that this sufficient condition holds. Therefore, non-convex gradient flows in fact converge to optimal solutions of a convex optimization problem. We present numerical results verifying the predictions of our theory for non-convex subgradient descent.
\section{Introduction} Neural networks (NNs) exhibit remarkable empirical performance in various machine learning tasks. However, a full characterization of the optimization and generalization properties of NNs is far from complete. The non-linear operations inherent to the structure of NNs, over-parameterization, and the associated highly nonconvex training problem make their theoretical analysis quite challenging. In over-parameterized models such as NNs, one natural question arises: Which particular solution does gradient descent/gradient flow find in unregularized NN training problems? Suppose that $\mfX\in \mbR^{N\times d}$ is the training data matrix and $\mfy\in \{1,-1\}^N$ is the label vector. For linear classification problems such as logistic regression, it is known that gradient descent (GD) exhibits implicit regularization properties, see, e.g., \citep{soudry2018implicit, gunasekar2018characterizing}. To be precise, under certain assumptions, the GD iterates converge in direction to the following solution, which maximizes the margin: \begin{equation}\label{max_margin:linear} \mathop{\arg\min}_{\mfw\in \mbR^d} \frac{1}{2}\|\mfw\|_2^2, \text{ s.t. } y_n\mfw^T\mfx_n \geq 1, n\in[N]. \end{equation} Here we denote $[N]=\{1,\dots,N\}$. Recently, there have been several results on the implicit regularization of the (stochastic) gradient descent method for NNs. In \citep{lyu2019gradient}, for multi-layer homogeneous networks with exponential or cross-entropy loss and separable training data, it is shown that gradient flow (GF) and GD find a stationary point of the following non-convex max-margin problem: \begin{equation}\label{max_margin:nn} \mathop{\arg\min}_{\btheta} \frac{1}{2}\|\btheta\|_2^2, \text{ s.t. } y_nf(\btheta; \mfx_n)\geq 1, n\in[N], \end{equation} where $f(\btheta;\mfx)$ represents the output of the neural network with parameter $\btheta$ given input $\mfx$.
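Returning to the linear problem \eqref{max_margin:linear}, this implicit bias is easy to observe numerically. The following sketch (a toy dataset of our own choosing, not taken from the cited works) runs plain gradient descent on the unregularized logistic loss and checks that the iterate aligns in direction with the max-margin solution:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy separable data: the constraints y_n w^T x_n >= 1 read
# w1 + w2 >= 1 and w1 - w2 >= 1, so the max-margin solution is w* = (1, 0).
X = np.array([[1.0, 1.0], [-1.0, 1.0]])
y = np.array([1.0, -1.0])

w = np.array([0.5, 0.3])
lr = 0.5
for _ in range(20000):
    q = y * (X @ w)                      # margins y_n w^T x_n
    grad = -(y * sigmoid(-q)) @ X        # gradient of sum_n log(1 + exp(-q_n))
    w -= lr * grad

w_hat = w / np.linalg.norm(w)            # normalized direction, approaches w*
```

The norm of $w$ grows without bound while the normalized direction approaches $(1,0)$; the directional convergence is slow (rate $1/\log t$), which is why many iterations are used even on this tiny problem.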
In \citep{tibor}, by further assuming orthogonal separability of the training data, it is shown that all neurons converge to one of two max-margin classifiers: one corresponds to the data with positive labels, while the other corresponds to the data with negative labels. However, as the max-margin problem of the neural network \eqref{max_margin:nn} is a non-convex optimization problem, the existing results only guarantee convergence to a stationary point of \eqref{max_margin:nn}, which can be a local minimizer or even a saddle point. In other words, global optimality is not guaranteed. In a different line of work \citep{nnacr, tcrnn, ergen2020implicit,ergen2021global}, exact convex optimization formulations of two- and three-layer ReLU NNs are developed, which have global optimality guarantees in polynomial time when the data has a polynomial number of hyperplane arrangements, e.g., in any fixed dimension or with convolutional networks of fixed filter size. The convex optimization framework was extended to vector-output networks \cite{sahiner2020vector}, quantized networks \cite{bartan2021training}, autoencoders \cite{sahiner2020convex,gupta2021exact}, networks with polynomial activation functions \cite{bartan2021neural}, networks with batch normalization \cite{ergenbatchnorm2021}, univariate deep ReLU networks and deep linear networks \cite{ergen2021revealing}, and Generative Adversarial Networks \cite{sahiner2021hidden}. In this work, we first derive an equivalent convex program corresponding to the max-margin problem \eqref{max_margin:nn}. We then consider non-convex subgradient flow for the unregularized logistic loss. We show that the limit points of non-convex subgradient flow can be identified via primal-dual correspondence in the convex optimization problem. We then present a sufficient condition on the dual variable to ensure that all stationary points of the non-convex max-margin problem are KKT points of the convex max-margin problem.
For certain regular datasets, including orthogonal separable data, we show that this sufficient condition on the dual variable holds, which implies convergence of gradient flow on the unregularized problem to the global optimum of the non-convex max-margin problem \eqref{max_margin:nn}. Consequently, this enables us to fully characterize the implicit regularization of unregularized gradient flow or gradient descent as convex regularization applied to a convex model. \subsection{Related Work} There are several works studying the properties of two-layer ReLU networks trained by gradient descent/gradient flow dynamics. The following papers study gradient-descent-like dynamics in training two-layer ReLU networks for regression problems. \citet{ma2020quenching} show that for two-layer ReLU networks, only a group of a few activated neurons dominates the dynamics of gradient descent. In \citep{mei2018mean}, the limiting dynamics of stochastic gradient descent (SGD) is captured by distributional dynamics from a mean-field perspective, and they utilize this to prove a general convergence result for noisy SGD. \citet{li2020learning} focus on the case where the weights of the second layer are non-negative and show that the over-parameterized neural network can learn the ground-truth network in polynomial time with polynomially many samples. In \citep{zhou2021local}, it is shown that a mildly over-parameterized student network can learn the teacher network and all student neurons converge to one of the teacher neurons. Beyond \citep{lyu2019gradient} and \citep{tibor}, the following papers study classification problems. In \citep{chizat2018global}, it is shown that, under certain assumptions on the training problem and with an over-parameterized model, gradient flow converges to the global optimum of the training problem.
For linearly separable data, utilizing the hinge loss for classification, \citet{wang2019learning} introduce a perturbed stochastic gradient method and show that it can attain the global optimum of the training problem. Similarly, for linearly separable data, \citet{yang2021learning} introduce a modified loss based on the hinge loss to enable (stochastic) gradient descent to find the global minimum of the training problem, which is also globally optimal for the training problem with the hinge loss. \section{Preliminaries} In this section, we describe the problem setting and outline our main contributions. \subsection{Problem setting} We focus on two-layer neural networks with ReLU activation, i.e., \begin{equation} f(\btheta, \mfX)=(\mfX\mfW_1)_+\mfw_2, \end{equation} where $\mfW_1\in \mbR^{d\times m}$, $\mfw_2\in \mbR^m$ and $\btheta=(\mfW_1,\mfw_2)$ represents the parameters. Due to the ReLU activation, this neural network is homogeneous, i.e., for any scalar $c>0$, we have $f(c\btheta; \mfX) = c^2 f(\btheta; \mfX)$. The training problem is given by \begin{equation}\label{prob:logis} \min_{\btheta} \sum_{n=1}^N \ell(y_nf(\btheta; \mfx_n)), \end{equation} where $\ell(q):\mbR\to \mbR_+$ is the loss function. We focus on the logistic, i.e., cross-entropy, loss $\ell(q)=\log(1+\exp(-q))$. We briefly review gradient descent and gradient flow as follows. Gradient descent takes the update rule \begin{equation*} \btheta(t+1) = \btheta(t)-\eta(t)\mfg(t), \end{equation*} where $\mfg(t)\in \p^\circ \mcL(\btheta(t))$ and $\p^\circ$ represents Clarke's subdifferential. Gradient flow can be viewed as gradient descent with infinitesimal step size. The trajectory of the parameter $\btheta$ during training is an arc $\btheta: [0,+\infty)\to \Theta$, where $\Theta=\{\btheta=(\mfW_1,\mfw_2)|\mfW_1\in \mbR^{d\times m}, \mfw_2\in \mbR^m\}$.
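The training setup above can be sketched numerically. The following toy script (data, width, and step size are our own choices) runs plain subgradient descent on the unregularized logistic loss with a balanced small initialization, picking the zero subgradient at the ReLU kink:

```python
import numpy as np

def loss(W1, w2, X, y):
    q = y * (np.maximum(X @ W1, 0.0) @ w2)        # q_n = y_n f(theta; x_n)
    return np.sum(np.logaddexp(0.0, -q))          # sum_n log(1 + exp(-q_n))

rng = np.random.default_rng(0)
X = np.array([[1.0, 0.2], [1.0, -0.2], [-1.0, 0.3]])
y = np.array([1.0, 1.0, -1.0])
m = 8                                             # hidden width
W1 = 0.01 * rng.standard_normal((2, m))           # small initialization
w2 = np.sign(rng.standard_normal(m)) * np.linalg.norm(W1, axis=0)  # |w2_i| = ||w1_i||_2

eta = 0.1
init_loss = loss(W1, w2, X, y)
for _ in range(2000):
    pre = X @ W1                                  # pre-activations, shape (N, m)
    act = np.maximum(pre, 0.0)
    q = y * (act @ w2)
    r = -y / (1.0 + np.exp(q))                    # r_n = l'(q_n) y_n
    mask = (pre > 0).astype(float)                # zero subgradient at the kink
    W1 -= eta * (X.T @ (mask * r[:, None])) * w2  # chain rule through the ReLU
    w2 -= eta * act.T @ r
final_loss = loss(W1, w2, X, y)
```

The balanced scaling $\|\mfw_{1,i}(0)\|_2=|w_{2,i}(0)|$ mirrors the initialization assumed later in our analysis; under (sub)gradient descent it is preserved only approximately, whereas the flow preserves it exactly.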
More precisely, the gradient flow is given by the differential inclusion \begin{equation*} \frac{d}{dt} \btheta(t)\in -\p^\circ \mcL(\btheta(t)), \end{equation*} for a.e.\ $t\geq 0$. \subsection{Outline of our contributions} We consider the more general multi-class version of the problem with $K$ classes. Suppose that $\bar \mfy \in [K]^N$ is the label vector. Let $\mfY=(y_{n,k})_{n\in[N],k\in[K]}\in \mbR^{N\times K}$ be the encoded label matrix such that \begin{equation*} y_{n,k}=\begin{cases} \begin{aligned} &1,&\text{ if }\bar y_n=k,\\ &-1,&\text{ otherwise}. \end{aligned} \end{cases} \end{equation*} Similarly, we consider the following two-layer vector-output neural network with ReLU activation: \begin{equation*} F(\boldsymbol{\Theta},\mfX) = \bmbm{f_1(\btheta_1,\mfX)\\\vdots\\f_K(\btheta_K,\mfX)}=\bmbm{(\mfX\mfW_1^{(1)})_+\mfw_2^{(1)}\\\vdots\\ (\mfX\mfW_1^{(K)})_+\mfw_2^{(K)}}, \end{equation*} where we write $\boldsymbol{\Theta}=(\btheta_1,\dots,\btheta_K)$. For $k=1,\dots,K$, we have $\btheta_k=(\mfW_1^{(k)},\mfw_2^{(k)})$, where $\mfW_1^{(k)}\in \mbR^{d\times m}$ and $\mfw_2^{(k)}\in \mbR^{m}$. One can view each of the $K$ outputs of $F(\boldsymbol{\Theta},\mfX)$ as the output of a two-layer scalar-output neural network. Consider the following training problem: \begin{equation}\label{train_multi} \min_{\boldsymbol{\Theta}} \sum_{k=1}^K\sum_{n=1}^N\ell(y_{n,k}f_k(\btheta_k,\mfx_n)). \end{equation} According to \cite{lyu2019gradient}, gradient flow and gradient descent find a stationary point of the following non-convex max-margin problem: \begin{equation}\label{max_margin:nn_multi} \mathop{\arg\min}_{\boldsymbol{\Theta}} \sum_{k=1}^K\frac{1}{2}\|\btheta_k\|_2^2, \text{ s.t. } y_{n,k}f(\btheta_k; \mfx_n)\geq 1, n\in[N], k\in[K]. \end{equation} Denote the set of all possible hyperplane arrangements as \begin{equation} \mcP=\{\diag(\mbI(\mfX\mfw\geq 0))|\mfw\in \mbR^d\}, \end{equation} and let $p=|\mcP|$.
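The set $\mcP$ is easy to explore numerically: sampling random directions $\mfw$ recovers all generic sign patterns $\mbI(\mfX\mfw>0)$ (boundary patterns with ties are not hit by generic sampling). A sketch on a small dataset of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # N = 3 points in R^2
N, d = X.shape

patterns = set()
for _ in range(5000):
    w = rng.standard_normal(d)
    patterns.add(tuple((X @ w > 0).astype(int)))    # diagonal of D = I(Xw > 0)

# The N = 3 hyperplanes {w : x_n^T w = 0} are distinct lines through the
# origin, so they cut R^2 into 2N = 6 regions, one generic pattern each.
```

This matches the counting argument behind the upper bound on $p$: the number of full-dimensional arrangement patterns equals the number of regions cut out by the $N$ hyperplanes $\{\mfw: \mfx_n^T\mfw=0\}$.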
We can also write $\mcP=\{\mfD_1,\dots,\mfD_p\}$. From \citep{cover1965geometrical}, we have the upper bound $p\leq 2r \left(\frac{e (N-1)}{r}\right)^r$, where $r = \mbox{rank}(\mfX)$. We first reformulate \eqref{max_margin:nn_multi} as a convex optimization problem. \begin{proposition}\label{prop:form_multi} The non-convex problem \eqref{max_margin:nn_multi} is equivalent to the following convex program \begin{equation}\label{max_margin:cvxnn_multi} \begin{aligned} \min\;&\sum_{k=1}^K\sum_{j=1}^p(\|\mfu_{j,k}\|_2+\|\mfu_{j,k}'\|_2),\\ \text{ s.t. }&\diag(\mfy_k)\sum_{j=1}^p\mfD_j\mfX(\mfu_{j,k}'-\mfu_{j,k})\geq \boldsymbol{1},\\ &(2\mfD_j-I)\mfX\mfu_{j,k}\geq 0, (2\mfD_j-I)\mfX\mfu_{j,k}'\geq 0, j\in[p], k\in[K], \end{aligned} \end{equation} where $\mfy_k$ is the $k$-th column of $\mfY$. The dual problem of \eqref{max_margin:cvxnn_multi} is given by \begin{equation}\label{max_margin:cvxnn_multi_dual} \begin{aligned} \max\;&\tr(\bLbd^T\mfY),\\ \text{ s.t. }&\diag(\mfy_k)\blbd_k\succeq 0, \max_{\|\mfw\|_2\leq 1} |\blbd_k^T(\mfX\mfw)_+|\leq 1,k\in[K], \end{aligned} \end{equation} where $\blbd_k$ is the $k$-th column of $\bLbd$. \end{proposition} We present the detailed derivation of the convex formulation \eqref{max_margin:cvxnn_multi} and its dual problem \eqref{max_margin:cvxnn_multi_dual} in the appendix. Given $\mfu\in \mbR^d$, we define \begin{equation} \mfD(\mfu)=\diag(\mbI(\mfX\mfu>0)). \end{equation} For two vectors $\mfu,\mfv\in\mbR^d$, we define the cosine of the angle between $\mfu$ and $\mfv$ by \begin{equation*} \cos \angle (\mfu,\mfv)=\frac{\mfu^T\mfv}{\|\mfu\|_2\|\mfv\|_2}. \end{equation*} The following theorem illustrates that a neuron satisfying $\mathbf{sign}(\mfy_k^T(\mfX\mfw_{1,i}^{(k)})_+)=\mathbf{sign}(w_{2,i}^{(k)})$ at initialization aligns with the direction $\pm \mfX^T\mfD(\mfw_{1,i}^{(k)})\mfy_k$ at a certain time $T$, with the sign determined by $\mathbf{sign}(w_{2,i}^{(k)})$ at initialization.
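The alignment directions $\pm\mfX^T\mfD(\mfw)\mfy_k$ are fixed points of the normalized map $\mfu \mapsto \mfX^T\mfD(\mfu)\mfy_k/\|\mfX^T\mfD(\mfu)\mfy_k\|_2$. The sketch below (a heuristic iteration of our own construction that mimics the limiting direction rather than the flow itself) finds such a fixed point on a toy binary dataset:

```python
import numpy as np

X = np.array([[1.0, 0.2], [1.0, -0.2], [-1.0, 0.3]])
y = np.array([1.0, 1.0, -1.0])

def g(u):
    # g(u) = X^T D(u) y  with  D(u) = diag(I(Xu > 0))
    return X.T @ ((X @ u > 0) * y)

u = np.array([0.6, 0.8])                 # unit-norm initialization
for _ in range(50):
    u = g(u) / np.linalg.norm(g(u))      # normalized fixed-point iteration
```

At the fixed point, $\cos\angle(\mfu, \mfX^T\mfD(\mfu)\mfy)=1$, i.e., $\mfu$ is exactly aligned in the sense of the theorem above (with $s=1$).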
In Section 2.3, we show that these are dual extreme points of \eqref{max_margin:cvxnn_multi}. \begin{theorem}\label{thm:align} Consider the $K$-class classification training problem \eqref{train_multi} for any dataset. Suppose that the neural network is scaled at initialization such that $\|\mfw_{1,i}^{(k)}\|_2=|w_{2,i}^{(k)}|$ for $i\in[m]$ and $k\in[K]$. Assume that at initialization, for each $k\in[K]$, there exists a neuron $(\mfw_{1,i_k}^{(k)},w_{2,i_k}^{(k)})$ such that \begin{equation}\label{cond:neuron} \begin{aligned} &\mathbf{sign}(\mfy_k^T(\mfX\mfw_{1,i_{k}}^{(k)})_+)=\mathbf{sign}(w_{2,i_{k}}^{(k)})=s, \end{aligned} \end{equation} where $s\in\{1,-1\}$. Consider the subgradient flow applied to the non-convex problem \eqref{train_multi}. Let $\delta\in(0,1)$. Suppose that the initialization is sufficiently close to the origin. Then, for $k\in[K]$, there exists $T=T(\delta,k)$ such that \begin{equation*} \begin{aligned} \cos\angle\pp{\mfw_{1,i_{k}}^{(k)}(T), s\mfX^T\mfD(\mfw_{1,i_{k}}^{(k)}(T))\mfy_k} \geq 1-\delta.\\ \end{aligned} \end{equation*} \end{theorem} Next, we impose conditions on the dataset to prove a stronger global convergence result for the flow. We say that the dataset $(\mfX,\bar\mfy)$ is orthogonal separable among multiple classes if for all $n,n'\in[N]$, \begin{equation*} \begin{aligned} &\mfx_n^T\mfx_{n'}>0, \text{ if }\bar y_n=\bar y_{n'},\\ &\mfx_n^T\mfx_{n'}\leq 0, \text{ if } \bar y_n\neq \bar y_{n'}. \end{aligned} \end{equation*} For datasets that are orthogonal separable among multiple classes, the subgradient flow for the non-convex problem \eqref{train_multi} finds the global optimum of \eqref{max_margin:nn_multi} up to a scaling constant. \begin{theorem}\label{thm:ortho_multi} Suppose that $(\mfX,\bar \mfy)\in \mbR^{N\times d} \times [K]^N$ is orthogonal separable among multiple classes. Consider the non-convex subgradient flow applied to the non-convex problem \eqref{train_multi}.
Suppose that the initialization is sufficiently close to the origin and scaled as in Theorem \ref{thm:align}. Then, the non-convex subgradient flow converges to the global optimum of the convex program \eqref{max_margin:cvxnn_multi}, and hence of the non-convex objective \eqref{max_margin:nn_multi}, up to scaling. \end{theorem} Therefore, the above result characterizes the \emph{implicit regularization} of unregularized gradient flow as \emph{convex regularization}, i.e., a group $\ell_1$ norm, in the convex formulation \eqref{max_margin:cvxnn_multi}. It is remarkable that group sparsity is enforced by a small initialization magnitude with no explicit form of regularization. \subsection{Convex Geometry of Neural Gradient Flow} Suppose that $\blbd\in \mbR^N$. Here we provide an interesting geometric interpretation behind the formula \begin{align*} \cos\angle(\mfu,\mfX^T\mfD(\mfu)\blbd)>1-\delta, \end{align*} which describes a \emph{dual extreme point} that hidden neurons approach, as predicted by Theorem \ref{thm:align}. We now explain the geometric intuition behind this result. Consider the ellipsoid $\{\mfX\mfu\,:\,\|\mfu\|_2\le 1\}$. A positive extreme point of this ellipsoid along the direction $\blbd$ is defined by $ \arg\max_{\mfu\,:\,\|\mfu\|_2\le 1} \blbd^T \mfX\mfu$, which is given by the formula $\frac{\mfX^T\blbd}{\|\mfX^T\blbd\|_2}$. Next, we consider the rectified ellipsoid set $\mathcal{Q}:=\{(\mfX\mfu)_+\,:\,\|\mfu\|_2\le 1\}$ introduced in \citep{ergen2020convex} and shown in Figure \ref{fig:map}. The constraint $\max_{\mfu:\|\mfu\|_2\leq 1}|\blbd^T(\mfX\mfu)_+|\leq 1$ on $\blbd$ is equivalent to $\blbd\in \mcQ^*$. Here $\mcQ^*$ is the absolute polar set of $\mcQ$, which appears as a constraint in the convex program \eqref{max_margin:cvxnn_multi_dual} and is defined as the following convex set \begin{equation} \mcQ^*=\{\blbd \,:\max_{\mfz\in \mcQ}|\blbd^T\mfz|\leq 1\}.
\end{equation} An extreme point of this non-convex body along the direction $\blbd$ is given by the solution of the problem \begin{align} \max_{\mfu\,:\,\|\mfu\|_2\le 1}\blbd^T (\mfX\mfu)_+ = \max_{\mfD_j \in \mcP} \, \max_{\mfu\,:\,\|\mfu\|_2\le 1, (2\mfD_j-I)\mfX\mfu\ge 0} \blbd^T \mfD_j\mfX\mfu. \label{eq:extremepoint} \end{align} Here, $(\blbd,\mfu)$ are primal-dual pairs as they appear in the convex dual program \eqref{max_margin:cvxnn_multi_dual}. First, note that a stationary point of gradient flow on the objective in \eqref{eq:extremepoint} is given by the identity $c \mfu \in \partial_{\mfu}^\circ \blbd^T(\mfX\mfu)_+$ where $c$ is a constant. In particular, by picking the zero as the subgradient of $(\mfx_n^T\mfu)_+$ when $\mfx_n^T\mfu=0$, \begin{align} \mfu = \frac{\mfX^T\mfD(\mfu)\blbd}{\|\mfX^T\mfD(\mfu)\blbd\|_2}= \frac{\sum_{n=1}^N \lambda_n \mfx_n \mbI(\mfu^T\mfx_n>0)}{\|\sum_{n=1}^N \lambda_n \mfx_n \mbI(\mfu^T\mfx_n>0)\|_2}. \end{align} Note that the formula $\cos\angle(\mfu,\mfX^T\mfD(\mfu)\blbd)>1-\delta$ appearing in Theorem \ref{thm:align} shows that gradient flow reaches the extreme points of projected ellipsoids $\{\mfD_j\mfX\mfu\,:\,\|\mfu\|_2\le 1\}$ in the direction of $\blbd=\mfy_k$, where $\mfD_j\in \mcP$ corresponds to a valid hyperplane arrangement. This interesting phenomenon is depicted in Figures \ref{fig:dyn1} and \ref{fig:dyn2}. The one-dimensional spikes in Figures \ref{fig:map} and \ref{fig:dyn1} are projected ellipsoids. More details on the rectified ellipsoids including a characterization of extreme points can be found in \cite{ergen2020convex}. Detailed setup for Figure \ref{fig:map} to \ref{fig:dyn2} and additional experiments can be found in Appendix \ref{app:num}. \begin{figure}[H] \centering \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{gfgmt/illustration_figure/minmax_map.pdf} \caption{Rectified Ellipsoid $\mathcal{Q}:=\{(\mfX\mfu)_+:\|\mfu\|_2\le 1\}$ and its extreme points (spikes). 
}\label{fig:map} \end{minipage} \hspace{0.5cm} \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{gfgmt/illustration_figure/polar.pdf} \caption{Convex absolute polar set $\mathcal{Q}^*$ of the Rectified Ellipsoid (purple) and other dual constraints (grey).}\label{fig:dual} \end{minipage} \centering \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{gfgmt/illustration_figure/track_Xw.pdf} \caption{Trajectories of $(\mfX\hat \mfw_{1,i})_+$ along the training dynamics of gradient descent.}\label{fig:dyn1} \end{minipage} \hspace{0.5cm} \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{gfgmt/illustration_figure/track_w.pdf} \caption{Trajectories of $\hat \mfw_{1,i}=\frac{\mfw_{1,i}}{\|\mfw_{1,i}\|_2}$ along the training dynamics of gradient descent.}\label{fig:dyn2} \end{minipage} \caption{Two-layer ReLU network gradient descent dynamics on an orthogonal separable dataset. $\hat \mfw_{1,i}=\frac{\mfw_{1,i}}{\|\mfw_{1,i}\|_2}$ is the normalized vector of the $i$-th hidden neuron in the first layer. Note that the hidden neurons converge to the extreme points \eqref{eq:extremepoint} of the rectified ellipsoid set $\mathcal{Q}$ as predicted by Theorem \ref{thm:align}. }\label{fig:track} \end{figure} \section{Convex max-margin problem}\label{sec:cvx} Here we primarily focus on the binary classification problem for simplicity, which is later extended to the multi-class case. We can reformulate the nonconvex max-margin problem \eqref{max_margin:nn} as \begin{equation}\label{kkt:2l_relu} \min \frac{1}{2} (\|\mfW_1\|_F^2+\|\mfw_2\|_2^2), \text{ s.t. } \mfY(\mfX\mfW_1)_+\mfw_2\geq \boldsymbol{1}, \end{equation} where $\mfY=\diag(\mfy)$. This is a nonconvex optimization problem due to the ReLU activation and the two-layer structure of the neural network.
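The equivalence between \eqref{kkt:2l_relu} and a convex reformulation rests on a simple rescaling of each convex variable into a balanced neuron. A numerical sketch of this correspondence (toy data of our own; any $\mfu\neq 0$ works):

```python
import numpy as np

X = np.array([[1.0, 0.2], [1.0, -0.2], [-1.0, 0.3]])
u = np.array([0.9, 0.1])                 # a candidate convex-program variable

s = np.sqrt(np.linalg.norm(u))
w1, w2 = u / s, s                        # balanced neuron: ||w1||_2 = |w2|

# The neuron reproduces the prediction (X u)_+, by positive homogeneity ...
out_neuron = np.maximum(X @ w1, 0.0) * w2

# ... and its squared-norm penalty collapses to the group-l1 cost ||u||_2.
penalty = 0.5 * (w1 @ w1 + w2 ** 2)
```

This rescaling is why the quadratic penalty $\frac{1}{2}(\|\mfW_1\|_F^2+\|\mfw_2\|_2^2)$ in \eqref{kkt:2l_relu} turns into a sum of Euclidean norms (a group $\ell_1$ penalty) at the level of the convex variables.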
Analogous to the convex formulation introduced in \citep{nnacr} for the regularized neural network training problem, we can provide a convex optimization formulation of \eqref{kkt:2l_relu} and derive the dual problem. \begin{proposition}\label{prop:cvx_form} The problem \eqref{kkt:2l_relu} is equivalent to \begin{equation}\label{max_margin:nn_cvx} \begin{aligned} P^*_\mathrm{cvx}=\min \;&\sum_{j=1}^p(\|\mfu_j\|_2+\|\mfu_j'\|_2), \\ \text{ s.t. }&\mfY\sum_{j=1}^p\mfD_j\mfX(\mfu_j'-\mfu_j)\geq \boldsymbol{1}, \\ &(2\mfD_j-I)\mfX\mfu_j\geq 0,(2\mfD_j-I)\mfX\mfu_j'\geq 0,\forall j\in[p]. \end{aligned} \end{equation} The dual problem of \eqref{max_margin:nn_cvx} is given by \begin{equation}\label{cvx_mm:dual} D^* = \max_{\blbd} \mfy^T\blbd \text{ s.t. }\mfY\blbd\succeq 0, \max_{\mfu:\|\mfu\|_2\leq 1}|\blbd^T(\mfX\mfu)_+|\leq 1. \end{equation} \end{proposition} The following proposition gives a characterization of the KKT points of the non-convex max-margin problem \eqref{max_margin:nn}. The definition of the $B$-subdifferential can be found in Appendix \ref{app:definition}. \begin{proposition}\label{prop:kkt_property} Let $(\mfW_1,\mfw_2,\blbd)$ be a KKT point of the non-convex max-margin problem \eqref{max_margin:nn} (in terms of the B-subdifferential). Suppose that $w_{2,i}\neq 0$ for some $i\in[m]$.
Then, there exists a diagonal matrix $\hat \mfD_i\in \mbR^{N\times N}$ satisfying \begin{equation*} \begin{aligned} & (\hat \mfD_i)_n=1, \text{ for } \mfx_n^T\mfw_{1,i}>0,\\ & (\hat\mfD_i)_n\in\{0,1\}, \text{ for } \mfx_n^T\mfw_{1,i}=0,\\ & (\hat \mfD_i)_n=0, \text{ for } \mfx_n^T\mfw_{1,i}<0,\\ \end{aligned} \end{equation*} such that \begin{equation*} \begin{aligned} \frac{\mfw_{1,i}}{w_{2,i}} =\mfX^T\hat \mfD_i\blbd, \|\mfX^T\hat \mfD_i\blbd\|_2=1.\\ \end{aligned} \end{equation*} \end{proposition} Based on this characterization of the KKT points of the non-convex max-margin problem \eqref{max_margin:nn}, we provide an equivalent condition ensuring that such a point is also a KKT point of the convex max-margin problem \eqref{max_margin:nn_cvx}. \begin{theorem}\label{thm:kkt} A KKT point of the non-convex max-margin problem \eqref{kkt:2l_relu} (in terms of the B-subdifferential) is a KKT point of the convex max-margin problem \eqref{max_margin:nn_cvx} if and only if $\blbd$ is dual feasible, i.e., \begin{equation}\label{dual_feas} \max_{\mfu:\|\mfu\|_2\leq 1} |\blbd^T(\mfX\mfu)_+|\leq 1. \end{equation} This condition is equivalent to requiring that, for all $\mfD_j\in \mcP$, the dual variable $\blbd$ satisfies \begin{equation}\label{cond:D} \max_{\|\mfu\|_2\leq 1,(2\mfD_j-I)\mfX\mfu\geq 0} |\blbd^T\mfD_j\mfX\mfu| \leq 1. \end{equation} \end{theorem} \section{Dual feasibility of the dual variable}\label{sec:dual_feas} A natural question arises: is it possible to examine whether $\blbd$ is feasible in the dual problem? We say that the dataset $(\mfX,\mfy)$ is orthogonal separable if for all $n,n'\in[N]$, \begin{equation*} \begin{aligned} &\mfx_n^T\mfx_{n'}>0, \text{ if }y_n=y_{n'},\\ &\mfx_n^T\mfx_{n'}\leq 0, \text{ if } y_n\neq y_{n'}.
\end{aligned} \end{equation*} For orthogonal separable data, as long as the induced diagonal matrices in Proposition \ref{prop:kkt_property} cover the positive part and the negative part of the labels, a KKT point of the non-convex max-margin problem \eqref{max_margin:nn} is a KKT point of the convex max-margin problem \eqref{max_margin:nn_cvx}. \begin{proposition}\label{prop:dual_ortho} Suppose that $(\mfX,\mfy)$ is orthogonal separable. Suppose that the KKT point of the non-convex problem includes two neurons $(\mfw_{1,i_+},w_{2,i_+})$ and $(\mfw_{1,i_-}, w_{2,i_-})$ such that the corresponding diagonal matrices $\hat \mfD_{i_+}$ and $\hat \mfD_{i_-}$ defined in Proposition \ref{prop:kkt_property} satisfy \begin{equation*} \hat \mfD_{i_+}\geq \diag(\mbI(y=1)),\quad \hat \mfD_{i_-}\geq \diag(\mbI(y=-1)). \end{equation*} Then, the dual variable $\blbd$ is dual feasible, i.e., it satisfies \eqref{dual_feas}. \end{proposition} The spike-free matrices discussed in \citep{ergen2020convex} also make examining the dual feasibility of $\blbd$ easier. The definition of spike-free matrices can be found in Appendix \ref{app:definition}. \begin{proposition}\label{prop:dual_spike_free} Suppose that $\mfX$ is spike-free. Suppose that the KKT point of the non-convex problem includes two neurons $(\mfw_{1,i_+},w_{2,i_+})$ and $(\mfw_{1,i_-}, w_{2,i_-})$ such that the corresponding diagonal matrices $\hat \mfD_{i_+}$ and $\hat \mfD_{i_-}$ defined in Proposition \ref{prop:kkt_property} satisfy \begin{equation*} \hat \mfD_{i_+}\geq \diag(\mbI(y=1)),\quad \hat \mfD_{i_-}\geq \diag(\mbI(y=-1)). \end{equation*} Then, the dual variable $\blbd$ is dual feasible, i.e., it satisfies \eqref{dual_feas}.
\end{proposition} \begin{remark} For spike-free data, the constraint in the dual problem is equivalent to \begin{equation*} \max_{\mfX\mfu\geq 0, \|\mfu\|_2\leq 1} |\blbd^T\mfX\mfu| \leq 1, \end{equation*} or equivalently \begin{equation*} \max_{\mfX\mfu\geq 0, \|\mfu\|_2\leq 1} \blbd^T\mfY_+\mfX\mfu\leq 1,\quad \min_{\mfX\mfu\geq 0} \blbd^T\mfY_-\mfX\mfu\geq -1. \end{equation*} \end{remark} \section{Sub-gradient flow dynamics of logistic loss}\label{sec:logis_gf} We consider the following sub-gradient flow of the logistic loss \eqref{prob:logis}: \begin{equation} \begin{aligned} \frac{\p}{\p t} \mfw_{1,i}(t) =& w_{2,i}(t) \pp{\sum_{n:(\mfw_{1,i}(t))^T\mfx_n>0} \tilde \lambda_n(t) \mfx_n },\\ \frac{\p}{\p t} w_{2,i}(t)=&\sum_{n=1}^N \tilde \lambda_n(t) ((\mfw_{1,i}(t))^T\mfx_n)_+, \end{aligned} \end{equation} where the $n$-th entry of $\widetilde{\blbd}(t)\in \mbR^N$ is defined as \begin{equation} \tilde \lambda_n = -y_n \ell'(q_n), \quad q_n = y_n(\mfx_n^T\mfW_1)_+\mfw_2. \end{equation} For simplicity, we omit the argument $(t)$; for instance, we write $\mfw_{1,i}=\mfw_{1,i}(t)$. To be specific, when $\mfw_{1,i}^T\mfx_n=0$, we select $0$ as the subgradient of $w_{2,i}(\mfw_{1,i}^T\mfx_n)_+$ with respect to $\mfw_{1,i}$. Denote $\bsigma_i=\mathbf{sign}(\mfX\mfw_{1,i})$. For $\bsigma\in \{1,-1,0\}^N$, we define \begin{equation} \mfg(\bsigma,\widetilde{\blbd})=\sum_{n:\sigma_n>0} \tilde \lambda_n\mfx_n. \end{equation} For simplicity, we also write \begin{equation} \mfg(\mfu,\widetilde{\blbd}) := \mfg(\mathbf{sign}(\mfX\mfu),\widetilde{\blbd})=\sum_{n:\mfu^T\mfx_n>0} \tilde \lambda_n\mfx_n. \end{equation} Then, we can rewrite the sub-gradient flow of the logistic loss \eqref{prob:logis} as follows: \begin{equation}\label{gf:grad} \begin{aligned} \frac{\p}{\p t} \mfw_{1,i} = w_{2,i}\, \mfg(\mfw_{1,i},\widetilde{\blbd}),\quad \frac{\p}{\p t} w_{2,i}=\mfw_{1,i}^T\mfg(\mfw_{1,i},\widetilde{\blbd}).
\end{aligned} \end{equation} Assume that the neural network is scaled at initialization, i.e., $\|\mfw_{1,i}(0)\|_2^2=w_{2,i}^2(0)$ for $i\in[m]$. Then, the neural network remains scaled for all $t\geq 0$. \begin{lemma}\label{lem:sign} Suppose that $\|\mfw_{1,i}(0)\|_2=|w_{2,i}(0)|> 0$ for $i\in[m]$. Then, for any $t>0$, we have $\|\mfw_{1,i}(t)\|_2=|w_{2,i}(t)|> 0$. \end{lemma} According to Lemma \ref{lem:sign}, for all $t\geq 0$, $\mathbf{sign}(w_{2,i}(t))=\mathbf{sign}(w_{2,i}(0))$. Therefore, we can simply write $s_i=s_i(t)=\mathbf{sign}(w_{2,i}(t))$. As the neural network remains scaled for $t\geq 0$, it is natural to study the dynamics of $\mfw_{1,i}$ in polar coordinates. We write $\mfw_{1,i}(t)=e^{r_i(t)}\mfu_i(t)$, where $\|\mfu_i(t)\|_2=1$. In polar coordinates, the gradient flow reads \begin{equation}\label{proj_gf:log_sign} \begin{aligned} \frac{\p}{\p t} r_i=s_i\mfu_i^T\mfg(\mfu_i,\widetilde{\blbd}),\quad \frac{\p}{\p t} \mfu_i=s_i\pp{\mfg(\mfu_i,\widetilde{\blbd})-\pp{\mfu_i^T\mfg(\mfu_i,\widetilde{\blbd})}\mfu_i}. \end{aligned} \end{equation} Let $x_{\mathrm{max}}=\max_{n\in[N]}\|\mfx_n\|_2$. Define $g_{\min}$ to be \begin{equation}\label{def:gmin} \begin{aligned} g_{\min}= &\min_{\bsigma\in \mcQ} \|\mfg(\bsigma,\mfy/4)\|_2, \text{ s.t. } \mfg(\bsigma,\mfy/4)\neq 0, \end{aligned} \end{equation} where we denote \begin{equation} \mcQ=\{\bsigma\in \{1,0,-1\}^N\,|\,\bsigma=\mathbf{sign}(\mfX\mfw),\mfw\in \mbR^d\}. \end{equation} As the set $\mcQ\subseteq \{1,-1,0\}^N$ is finite, we note that $g_{\min}>0$. We note that when $\max_{n\in[N]}|q_n|\approx 0$, we have $\widetilde{\blbd}\approx \frac{\mfy}{4}$. The following lemma shows that for initializations sufficiently close to $0$, $\|\mfg(\mfu(t), \widetilde{\blbd}(t))-\mfg(\mfu(t),\mfy/4)\|_2$ and $\norm{ \frac{d}{dt} \mfg(\mfu(t),\widetilde{\blbd}(t))}_2$ can be made very small. \begin{lemma}\label{lem:lbd_bnd} Suppose that $T>0$ and $\delta>0$.
Suppose that $(\mfu(t),r(t))$ follows the gradient flow \eqref{proj_gf:log_sign} with $s=1$ and the initialization $\mfu(0)=\mfu_0$ and $r(0)=r_0$. Suppose that $r_0$ is sufficiently small. Then, the following two statements hold. \begin{itemize} \item For all $t\leq T$, we have \begin{equation*} \|\mfg(\mfu(t), \widetilde{\blbd}(t))-\mfg(\mfu(t),\mfy/4)\|_2\leq \frac{g_\mathrm{min}\delta}{8}. \end{equation*} \item For $t\leq T$ such that $\mathbf{sign}(\mfX\mfu(\cdot))$ is constant in a small neighborhood of $t$, we have \begin{equation*} \norm{ \frac{d}{dt} \mfg(\mfu(t),\widetilde{\blbd}(t))}_2\leq \frac{g_\mathrm{min}^2\delta}{16}. \end{equation*} \end{itemize} \end{lemma} Based on the above lemma on the properties of $\mfg(\mfu(t), \widetilde{\blbd}(t))$, we introduce the following lemma on $\cos \angle(\mfu(t),\mfg(\mfu(t),\widetilde{\blbd}(t)))$. \begin{lemma}\label{lem:condition_logis} Let $\delta\in(0,1)$. Suppose that $\mfu_0$ satisfies $\|\mfu_0\|_2=1$ and $\widetilde{\blbd}(0)^T(\mfX\mfu_0)_+>0$. Suppose that $(\mfu(t),r(t))$ follows the gradient flow \eqref{proj_gf:log_sign} with $s=1$ and the initialization $\mfu(0)=\mfu_0$ and $r(0)=r_0$. Let $\mfv(t)=\frac{\mfg(\mfu(t),\widetilde{\blbd}(t))}{\|\mfg(\mfu(t),\widetilde{\blbd}(t))\|_2}$. We write $\mfv_0=\mfv(0)$, $\bsigma_0=\bsigma(0)$ and $g_0=\|\mfg(\bsigma_0,\mfy/4)\|_2$. Denote \begin{equation} T^*=\frac{1}{2g_0\sqrt{1-\delta/8}}\pp{\log\frac{\sqrt{1-\delta/8}+1-\delta}{\sqrt{1-\delta/8}-1+\delta}-\log\frac{\sqrt{1-\delta/8}+\mfv_0^T\mfu_0}{\sqrt{1-\delta/8}-\mfv_0^T\mfu_0}}. \end{equation} For $c\in(0,1-\delta]$, define \begin{equation} T^{\mathrm{shift}}(c) = \frac{1}{2g_0\sqrt{1-\delta/8}}\pp{\log\frac{\sqrt{1-\delta/8}+c}{\sqrt{1-\delta/8}-c}-\log\frac{\sqrt{1-\delta/8}+\mfv_0^T\mfu_0}{\sqrt{1-\delta/8}-\mfv_0^T\mfu_0}}. \end{equation} Suppose that $r_0$ is sufficiently small such that the statements in Lemma \ref{lem:lbd_bnd} hold for $T=T^*$.
Then, at least one of the following events happens. \begin{itemize} \item There exists a time $T$ such that $\mathbf{sign}(\mfX\mfu(t))=\mathbf{sign}(\mfX\mfu_0)$ for $t\in[0,T)$ and $\mathbf{sign}(\mfX\mfu(T))\neq \mathbf{sign}(\mfX\mfu_0)$. Let $\mfu_1=\mfu(T)$ and $\mfv_1=\lim_{t\to T-0}\mfv(t)$. If $\mfu_1^T\mfv_1\leq 1-\delta$, then the time $T$ satisfies $$ T\leq T^{\mathrm{shift}}(\mfv_1^T\mfu_1). $$ Otherwise, there exists a time $T'$ satisfying $$ T'\leq T^*, $$ such that $\mathbf{sign}(\mfX\mfu(t))=\mathbf{sign}(\mfX\mfu_0)$ for $t\in[0,T']$ and $\mfu(T')^T\mfv(T')\geq 1-\delta$. \item There exists a time $$ T\leq T^*, $$ such that $\mathbf{sign}(\mfX\mfu(t))=\mathbf{sign}(\mfX\mfu_0)$ for $t\in[0,T]$ and $\mfu(T)^T\mfv(T)\geq 1-\delta$. \end{itemize} \end{lemma} \begin{corollary} Suppose that there exists a time $T$ such that $\mathbf{sign}(\mfX\mfu(t))=\mathbf{sign}(\mfX\mfu_0)$ for $t\in[0,T)$ and $\mathbf{sign}(\mfX\mfu(T))\neq \mathbf{sign}(\mfX\mfu_0)$. If we have $$ T> T^{\mathrm{shift}}(\mfv_1^T\mfu_1)=\frac{1}{g_0\sqrt{1-\delta/8}}\pp{\log\frac{\sqrt{1-\delta/8}+\mfv_1^T\mfu_1}{\sqrt{1-\delta/8}-\mfv_1^T\mfu_1}-\log\frac{\sqrt{1-\delta/8}+\mfv_0^T\mfu_0}{\sqrt{1-\delta/8}-\mfv_0^T\mfu_0}}, $$ then it follows that $\mfu_1^T\mfv_1> 1-\delta$. \end{corollary} \begin{proposition}\label{prop:uv} Consider the sub-gradient flow \eqref{proj_gf:log_sign} with $s=1$ and the initialization $\mfu(0)=\mfu_0$ and $r(0)=r_0$. Here, at initialization, the neuron $\mfu_0$ satisfies $\|\mfu_0\|_2=1$ and $\mfy^T(\mfX\mfu_0)_+>0$. Let $\mfv(t)=\frac{\mfg(\mfu(t),\widetilde{\blbd}(t))}{\|\mfg(\mfu(t),\widetilde{\blbd}(t))\|_2}$. For any $\delta>0$ and sufficiently small $r_0$, there exists a time $T=\mcO(\log(\delta^{-1}))$ such that $\mfu(T)^T\mfv(T)\geq 1-\delta$ and $\cos \angle(\mfu(T),\mfg(\mfu(T),\mfy))\geq 1-\delta$.
\end{proposition} \begin{remark} The statement of the proposition is similar to Lemma 4 in \citep{maennel2018gradient}. However, their proof has a gap because they did not consider the change of $\mathbf{sign}(\mfX\mfw)$ along the gradient flow. Our proof in Appendix \ref{proof:prop:uv} corrects this error. \end{remark} \subsection{Property of orthogonal separable datasets} Denote $\mcB=\{\mfw\in \mbR^d\,:\,\|\mfw\|_2\leq 1\}$. The following lemma gives a sufficient condition on $\mfw$ to satisfy the condition in Proposition \ref{prop:dual_ortho}. \begin{lemma}\label{lem:ortho_cond_max} Assume that $(\mfX,\mfy)$ is orthogonal separable. Suppose that $\mfw\in \mcB$ is a local maximizer of $\mfy^T(\mfX\mfw)_+$ in $\mcB$ and $(\mfX\mfw)_+\neq 0$. Then, $\lra{\mfw,\mfx_n}> 0$ for all $n\in [N]$ such that $y_n=1$. Suppose that $\mfw\in \mcB$ is a local minimizer of $\mfy^T(\mfX\mfw)_+$ in $\mcB$ and $(\mfX\mfw)_+\neq 0$. Then, $\lra{\mfw,\mfx_n}> 0$ for all $n\in [N]$ such that $y_n=-1$. \end{lemma} We next give an equivalent characterization of $\mfu\in \mcB$ being a local maximizer/minimizer of $\mfy^T(\mfX\mfu)_+$ in $\mcB$. \begin{proposition}\label{prop:ortho_cond} Assume that $(\mfX,\mfy)$ is orthogonal separable. Then, $\mfu\in\mcB$ being a local maximizer of $\mfy^T(\mfX\mfu)_+$ in $\mcB$ is equivalent to $\cos\angle(\mfu,\mfg(\mfu,\mfy))=1$. Similarly, $\mfu\in\mcB$ being a local minimizer of $\mfy^T(\mfX\mfu)_+$ in $\mcB$ is equivalent to $\cos\angle(\mfu,\mfg(\mfu,\mfy))=-1$. \end{proposition} Based on Propositions \ref{prop:dual_ortho} and \ref{prop:ortho_cond}, we present the main theorem. \begin{theorem}\label{thm:ortho_grad_flow} Suppose that the dataset is orthogonal separable and $\btheta(t)$ follows the gradient flow. Suppose that the neural network is scaled at initialization, i.e., $\|\mfw_{1,i}(0)\|_2=|w_{2,i}(0)|$ for all $i\in[m]$.
For almost all initializations which are sufficiently close to zero, the limiting point of $\frac{\btheta(t)}{\|\btheta(t)\|_2}$ is $\frac{\btheta^*}{\|\btheta^*\|_2}$, where $\btheta^*$ is a global minimizer of the max-margin problem \eqref{max_margin:nn}. \end{theorem} We present a sketch of the proof as follows. According to Proposition \ref{prop:uv}, for initializations sufficiently close to zero, there exist two neurons and times $T_+,T_->0$ such that $$ \begin{aligned} &\cos \angle(\mfw_{1,i_+}(T_+),\mfg(\mfw_{1,i_+}(T_+),\mfy))\geq 1-\delta,\\ &\cos \angle(\mfw_{1,i_-}(T_-),\mfg(\mfw_{1,i_-}(T_-),\mfy))\leq -(1-\delta). \end{aligned} $$ This implies that $\mfw_{1,i_+}(T_+)$ and $\mfw_{1,i_-}(T_-)$ are sufficiently close to certain stationary points of the gradient flow maximizing/minimizing $\mfy^T(\mfX\mfu)_+$ over $\mcB$, i.e., points in $\{\mfu\in \mcB\,|\,\cos\angle(\mfu,\mfg(\mfu,\mfy))=\pm1\}$. As the dataset is orthogonal separable, from Proposition \ref{prop:ortho_cond} and Lemma \ref{lem:ortho_cond_max}, the diagonal matrices $\hat D_{i_+}(T_+)$ and $\hat D_{i_-}(T_-)$ induced by $\mfw_{1,i_+}(T_+)$ and $\mfw_{1,i_-}(T_-)$ in Proposition \ref{prop:kkt_property} satisfy $\hat D_{i_+}(T_+)\geq \diag(\mbI(\mfy=1))$ and $\hat D_{i_-}(T_-)\geq \diag(\mbI(\mfy=-1))$. According to Lemma 3 in \citep{tibor}, for $t\geq \max\{T_+,T_-\}$, we also have $\hat D_{i_+}(t)\geq \diag(\mbI(\mfy=1))$ and $\hat D_{i_-}(t)\geq \diag(\mbI(\mfy=-1))$. According to Theorem \ref{thm:kkt} and Proposition \ref{prop:dual_ortho}, the KKT point of the non-convex max-margin problem \eqref{max_margin:nn} that the gradient flow converges to corresponds to a KKT point of the convex max-margin problem \eqref{max_margin:nn_cvx}. \section{Conclusion} We provide a convex formulation of the non-convex max-margin problem for two-layer ReLU neural networks and uncover a primal-dual correspondence between the limit points of non-convex subgradient flows and extreme points of the dual convex program.
Under assumptions on the training data, we show that subgradient flows converge to KKT points of the convex max-margin problem, and hence to a global optimum of the non-convex objective. \section{Acknowledgements} This work was partially supported by the National Science Foundation under grants ECCS-2037304 and DMS-2134248, and by the Army Research Office.
https://arxiv.org/abs/1011.5460
Quantum Walks on Regular Graphs and Eigenvalues
We study the transition matrix of a quantum walk on strongly regular graphs. It is proposed by Emms, Hancock, Severini and Wilson in 2006, that the spectrum of $S^+(U^3)$, a matrix based on the amplitudes of walks in the quantum walk, distinguishes strongly regular graphs. We find the eigenvalues of $S^+(U)$ and $S^+(U^2)$ for regular graphs.
\section{Introduction} A discrete-time quantum walk is a quantum process on a graph whose state vector is governed by a matrix, called the transition matrix. In \cite{ESWH, EHSW06} Emms, Severini, Wilson and Hancock propose that the quantum walk transition matrix can be used to distinguish between non-isomorphic graphs. Let $U(G)$ and $U(H)$ be the transition matrices of quantum walks on $G$ and $H$ respectively. Given a matrix $M$, the \textsl{positive support} of $M$, denoted $S^+(M)$, is the matrix obtained from $M$ as follows: \[ (S^+(M))_{i,j} = \begin{cases} 1 & \text{if } M_{i,j} >0\\ 0 & \text{otherwise.}\end{cases} \] \begin{theorem}\label{su1} If $G$ and $H$ are isomorphic regular graphs, then $S^+(U(G)^3)$ and $S^+(U(H)^3)$ are cospectral. \end{theorem} The authors of \cite{EHSW06, ESWH} propose that the converse of Theorem \ref{su1} is also true; they conjecture that the spectrum of the matrix $S^+(U^3)$ distinguishes strongly regular graphs. After experiments on a large set of graphs, no strongly regular graph is known to have a cospectral mate with respect to this invariant. If the conjecture is true, it would yield a classical polynomial-time algorithm for the Graph Isomorphism Problem for strongly regular graphs (but there do not seem to be strong grounds for believing the conjecture). In this paper we will find the spectra of two matrices related to the proposed graph invariant, for regular graphs. In \cite{EHSW06}, Emms et al.~compute some eigenvalues of $\splus$ and $S^+(U^2)$ but do not determine them all; for both matrices, they find the set of eigenvalues which are derived from the eigenvalues of the adjacency matrix, but do not find the remaining eigenvalues. The spectrum of $S^+(U)$ is also given in \cite{raewh11}. Here we will use an approach which exploits the linear algebraic properties of $\splus$ to yield a complete proof that the spectrum of $\splus$ is determined by the spectrum of the graph with respect to the adjacency matrix.
We also completely determine the spectrum of $S^+(U^2)$ by expressing $S^+(U^2)$ in terms of $\splus$ and the identity matrix. \section{Preliminary Definitions} \label{sec:Defs} A \textsl{discrete-time quantum walk} is a process on a graph $G$ governed by a unitary matrix, $U$, which is called the \textsl{transition matrix.} For $uv$ and $wx$ arcs in the digraph of $G$, the transition matrix is defined to be: \[ U_{wx,uv} = \begin{cases} \frac{2}{d(v)} &\text{ if } v=w \text{ and } u \neq x ,\\ \frac{2}{d(v)} -1 & \text{ if } v=w \text{ and } u = x, \\ 0 &\text{ otherwise.} \end{cases} \] Let $A$ be the adjacency matrix of $G$. Let $D$ be the digraph of $G$ and consider the following incidence matrices of $D$, both with rows indexed by the vertices of $D$ and columns indexed by the arcs of $D$: \[ (\ins)_{i,j} = \begin{cases} 1 &\text{if } i \text{ is the head of arc }j \\ 0 &\text{otherwise}\end{cases} \] and \[ (\outs)_{i,j} = \begin{cases} 1 &\text{if } i \text{ is the tail of arc }j \\ 0 &\text{otherwise.}\end{cases} \] To describe the quantum walk, we need one more matrix: let $P$ be the permutation matrix with rows and columns indexed by the arcs of $D$ such that \[P_{wx,uv} = \begin{cases} 1 &\text{if } w=v \text{ and } x=u \text{, that is, if arc } wx \text{ is the reversal of arc } uv, \\ 0 &\text{otherwise.}\end{cases} \] Then, we see that $\ins\outs^{T} = A(G)$ and \[ (\outs^T\ins)_{wx,uv} = \begin{cases} 1 &\text{if } v=w,\\ 0 &\text{otherwise.} \end{cases} \] If $G$ is regular with valency $k$, we have that \[U = \frac{2}{k} \outs^T\ins - P.\] \section{Eigenvalues of $S^+(U)$} \label{sec:su} In this section, we will find the eigenvalues of $S^+(U)$ for a regular graph $G$. If $G$ is regular with valency $1$, then $G$ must be a matching and the spectrum of $S^+(U(G))$ is easily determined. We may direct our attention to regular graphs with valency $k \geq 2$.
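These definitions are easy to verify numerically. The following sketch (an illustration of ours, not part of the paper; the choice of the complete graph $K_4$ and all variable names are assumptions) builds $\ins$, $\outs$, $P$ and $U$ explicitly and checks that $\ins\outs^T = A$, that $U$ is unitary, and that the positive support of $U$ equals $\outs^T\ins - P$:

```python
import numpy as np

# Illustrative choice: the complete graph K_4, which is 3-regular.
n, k = 4, 3
A = np.ones((n, n)) - np.eye(n)

# Arcs of the digraph of G: each edge {u, v} yields arcs (u, v) and (v, u).
arcs = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
m = len(arcs)  # m = n * k arcs

# Head and tail incidence matrices (rows: vertices, columns: arcs).
D_h = np.zeros((n, m))  # D_h[i, j] = 1 iff vertex i is the head of arc j
D_t = np.zeros((n, m))  # D_t[i, j] = 1 iff vertex i is the tail of arc j
for j, (u, v) in enumerate(arcs):
    D_t[u, j] = 1
    D_h[v, j] = 1

# Arc-reversal permutation: P[w, j] = 1 iff arc w is the reversal of arc j.
P = np.zeros((m, m))
for j, (u, v) in enumerate(arcs):
    P[arcs.index((v, u)), j] = 1

U = (2 / k) * D_t.T @ D_h - P

assert np.allclose(D_h @ D_t.T, A)      # D_h D_t^T = A(G)
assert np.allclose(U @ U.T, np.eye(m))  # U is real orthogonal, hence unitary
S = (U > 1e-12).astype(float)           # positive support S^+(U)
assert np.allclose(S, D_t.T @ D_h - P)  # S^+(U) = D_t^T D_h - P for k >= 2
```

The same construction works for any regular graph with valency $k\geq 2$; only the adjacency matrix `A` needs to change.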
If $G$ is a regular graph with valency $k$ on $n$ vertices, then \[U = \frac{2}{k} \outs^T\ins - P.\] The only negative entries have values $\frac{2}{k} -1$, for $k \geq 2$, so $S^+(U) = \outs^T\ins - P$. From Section \ref{sec:Defs}, we see that $\outs \outs^T = k I$ and $\ins \ins^T = k I$. From the definition of $P$, we get that \[ P \ins^T = \outs^T \; \text{ and } \; P\outs^T = \ins^T. \] Let $Q = \frac{2}{k} \ins^T \ins -I$. Then, $ Q^2 = I$ and we can write $\splus$ as: \[ S^+(U) = \outs^T\ins - P = P ( \ins^T \ins -I)= \frac{k}{2}P\left(Q + \frac{k-2}{k}I\right). \] Since $P^2 = Q^2 = I$, $P$ and $Q$ generate a dihedral group; that is to say, $\langle P, Q\rangle$ is a linear representation of the dihedral group. It is known that an indecomposable representation of this group over $\cx$ has dimension 1 or 2. Using this, we can compute the eigenvalues and multiplicities of elements of $\langle P, Q\rangle$, in particular, of $S^+(U)$. In \cite{Sze04}, Szegedy uses an observation of this flavour to find the spectrum of $U = PQ$. Here, we will use a similar decomposition of the Hilbert space and other linear algebra methods to explicitly determine the spectrum of $\splus$ in terms of the spectrum of the adjacency matrix. \begin{theorem}\label{thm:evals} If $G$ is a regular connected graph with valency $k \geq 2$ and $n$ vertices, then $S^+(U(G))$ has eigenvalues as follows: \begin{enumerate}[i)] \item $k-1$ with multiplicity 1, \item $\frac{\lambda \pm \sqrt{\lambda^2 - 4(k-1)}}{2}$ as $\lambda$ ranges over the eigenvalues of $A$, the adjacency matrix of $G$, and $\lambda \neq k$, \item 1 with multiplicity $\frac{n(k-2)}{2}+1$, and \item $-1$ with multiplicity $\frac{n(k-2)}{2}$. \end{enumerate} \end{theorem} \noindent{{\sl Proof. }} For a matrix $M$, we write $\col(M)$ to denote the column space of $M$ and $\ker(M)$ to denote the kernel of $M$. Let $K = \col(\ins^T)+ \col(\outs^T)$ and let $L = \ker(\ins) \cap \ker(\outs)$.
Observe that $K$ and $L$ are orthogonal complements of each other. Then $\re^{nk}$ is the direct sum of the orthogonal subspaces $K$ and $L$. We will proceed by considering eigenvectors of $\splus$ in $K$ and in $L$ separately. For $K$, we will show that the eigenvectors of $\splus$ in $K$ lie in subspaces $C(\lambda)$ where $\lambda$ ranges over the eigenvalues of $A$. The subspace $C(k)$ has dimension $1$ while $C(\lambda)$ has dimension 2 for all $\lambda \neq k$. In $L$, we will show that all eigenvectors of $\splus$ have eigenvalue $\pm 1 $ and we will find the multiplicities of $\pm1$. First, we show that $K$ and $L$ are $\splus$-invariant. Since $L$ is the orthogonal complement of $K$, it suffices to check that $K$ is $\splus$-invariant. We obtain that: \begin{equation}\label{SDins} \splus \ins^T = k\outs^T - \outs^T = (k-1)\outs^T \end{equation} and \begin{equation}\label{SDouts} \splus \outs^T = \outs^TA - \ins^T . \end{equation} Hence, $K$ is $\splus$-invariant. We consider eigenvectors of $\splus$ in $K$. From equations \eqref{SDins} and \eqref{SDouts}, we obtain: \begin{equation}\label{S2Douts} \splus^2 \outs^T = \splus (\outs^TA - \ins^T) = \splus \outs^TA - (k-1)\outs^T . \end{equation} Let $\Zz$ be an eigenvector of $A$ with eigenvalue $\lambda$. Let $\Zy:= \outs^T \Zz$. Then, applying equation \eqref{S2Douts} to $\Zz$, we obtain: \[ \begin{split} \splus^2 \Zy &= \splus^2 \outs^T \Zz \\ &= \splus \outs^TA\Zz - (k-1)\outs^T\Zz \\ &= \lambda \splus \Zy - (k-1)\Zy . \end{split} \] Rearranging, we get \begin{equation}\label{magic} (\splus^2 - \lambda \splus + (k-1)I )\Zy = 0 . \end{equation} Let $C(\lambda) = \text{span}\{ \Zy, \splus \Zy \}$. From equation \eqref{magic} we see that $C(\lambda)$ has dimension at most 2, is $\splus$-invariant and is contained in $K$. If $C(\lambda)$ is 1-dimensional, then $\Zy$ is an eigenvector of $\splus$. Let $\theta$ be the corresponding eigenvalue.
Then \[ \begin{split} \theta \Zy &= \splus \Zy \\ &= \splus \outs^T \Zz \\ &= ( \outs^TA - \ins^T) \Zz \\ &= \lambda \Zy - \ins^T \Zz . \end{split} \] Then $(\theta - \lambda) \Zy = - \ins^T \Zz$ and so $\Zy$ is in $\col(\ins^T) \cap \col(\outs^T)$. Then $\Zy$ is constant on arcs with a given head and on arcs with a given tail, and hence $\Zy$ is constant on the arcs of any component of $G$. Since $G$ is connected, $\Zy$ is a constant vector, which implies that $\Zz$ is a constant vector and $\lambda =k$. The eigenvalue of $\splus$ corresponding to $\Zy$ is $k-1$. Now suppose $C(\lambda)$ is 2-dimensional. Then, the minimal polynomial of $\splus$ restricted to $C(\lambda)$ is \[ t^2 - \lambda t + (k-1) \] from \eqref{magic}, and the eigenvalues are \[ \frac{\lambda \pm \sqrt{\lambda^2 - 4(k-1)} }{2} . \] These subspaces $C(\lambda)$ account for $2n -1$ eigenvalues of $\splus$. Since $\ins^T$ and $\outs^T$ are both $(nk) \times n$ matrices, $K$ has dimension at most $2n$. But, $\ins^T \mathbf{j} = \outs^T \mathbf{j} = \mathbf{j}$, where $\mathbf{j}$ is the all ones vector, since each row of both $\ins^T$ and $\outs^T$ has exactly one entry with value 1 and all other entries have value 0. Then, $K$ has dimension at most $2n -1$ and we have found all of the eigenvectors of $\splus$ in $K$. We will now find the remaining $n(k-2) + 1$ eigenvalues of $\splus$ over $L$. Let $\Zy$ be in $L$. Then \[ \begin{split} \splus \Zy &= (\outs^T\ins - P) \Zy \\ &= \outs^T\ins\Zy - P\Zy \\ &= - P\Zy . \end{split} \] If $\Zy$ is an eigenvector of $\splus$ with eigenvalue $\lambda$ and $\Zy$ is in $L$, then $\Zy$ is an eigenvector of $P$ with eigenvalue $-\lambda$. Since $P$ is a permutation matrix, $\lambda = \pm1$. To find the multiplicities we consider the sum of all the eigenvalues of $\splus$, which is equal to the trace of $\splus$. Observing that $P$ is a traceless matrix, \[ \tr(\splus) = \tr(\outs^T\ins - P) = \tr(\outs^T\ins) = \tr(\ins\outs^T) = \tr(A) = 0 . 
\] The sum over all eigenvalues of $\splus$ is therefore 0. Let $\spec(A)$ be the set of eigenvalues of $A$. Consider the sum of the eigenvalues coming from $K$: \[ \begin{split} &(k-1) + \sum_{\lambda \in \spec(A), \lambda \neq k} \frac{\lambda \pm \sqrt{\lambda^2 - 4(k-1)} }{2}\\ &= (k-1) + \sum_{\lambda \in \spec(A), \lambda \neq k} \lambda \\ &=-1 + \sum_{\lambda \in \spec(A)} \lambda \\ &= -1 . \end{split} \] Then, the sum of the eigenvalues over $L$ is 1. So, 1 and $-1$ have multiplicities $\frac{n(k-2)}{2} + 1$ and $\frac{n(k-2)}{2}$, respectively. \qed \section{Eigenvalues of $S^+(U^2)$} \label{sec:su2} We will show that $S^+(U^2) = (S^+(U))^2 + I$. Then, the eigenvalues of $S^+(U^2)$ are determined by the eigenvalues of $\splus$. The proof of the theorem will proceed by an analysis of which pairs of arcs give a negative entry in $U^2$. \begin{theorem}\label{thm:su2} For any regular graph with valency $k$, if $k>2$ then $S^+(U^2) = S^+(U)^2 + I $.\end{theorem} \noindent{{\sl Proof. }} Since $\outs^T \ins$ is the adjacency matrix of the line digraph of $G$, the $(j,i)$th entry of $(\outs^T \ins)^2$ counts the number of directed walks of length two from $i$ to $j$ in the line digraph of $G$. Observe that there is such a walk from $i$ to $j$ in $L(G)$ if and only if the head of $i$ is adjacent to the tail of $j$ in $G$. In particular, if there is a walk of length two from $i$ to $j$, there is only one such walk. Then, $(\outs^T \ins)^2$ is a $01$-matrix and is the support of $U^2$. We will find the required expression for $S^+(U^2)$ by subtracting from $(\outs^T \ins)^2$ the entries in $U^2$ which have negative value. We then proceed to look at the possible arrangements of $i$ and $j$ such that there is a directed walk of length two in $L(G)$ from $i$ to $j$, in Table \ref{3walks}.
\begin{table}[htdp] \begin{center} \begin{tabular}{|l|m{4.5cm}|m{3cm}|} \hline & Directed walk of length 3 from $i$ to $j$ & Value of $(U^2)_{i,j}$\\ \hline Case 1. & \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position .54 with {\arrow[black,thick]{>};}} ] \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (1,0) -- (2,0); \draw[postaction={decorate}] (2,0) -- (3,0); \filldraw[black] (0,0) circle (1.5pt) (1,0) circle (1.5pt) (2,0) circle (1.5pt) (3,0) circle (1.5pt); \filldraw[white] (1.5, 0.3) circle (1pt); \draw (0.5,0) node[anchor=north] {$i$} (2.5,0) node[anchor=north] {$j$}; \end{tikzpicture} \end{center} & \[\left( \frac{2}{k} \right)^2\] \\ \hline Case 2. & \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position .54 with {\arrow[black,thick]{>};}} ] \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (1,0) -- (2,0); \draw[postaction={decorate}] (2,0) .. controls (1.3333,-0.4) and (0.6666, -0.4).. (0,0); \filldraw[black] (0,0) circle (1.5pt) (1,0) circle (1.5pt) (2,0) circle (1.5pt); \filldraw[white] (1.5, 0.3) circle (1pt); \draw (0.5,0) node[anchor=south] {$i$} (1,-0.3) node[anchor=north] {$j$}; \end{tikzpicture} \end{center} & \[\left( \frac{2}{k} \right)^2\] \\ \hline Case 3. & \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position .54 with {\arrow[black,thick]{>};}} ] \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (1,0) .. controls (1.3333,0.3) and (1.6666, 0.3).. (2,0); \draw[postaction={decorate}] (2,0) .. controls (1.6666,-0.3) and (1.3333, -0.3).. (1,0); \filldraw[black] (0,0) circle (1.5pt) (1,0) circle (1.5pt) (2,0) circle (1.5pt); \filldraw[white] (1.5, 0.6) circle (1pt); \draw (0.5,0) node[anchor=north] {$i$} (1.5,-0.2) node[anchor=north] {$j$}; \end{tikzpicture} \end{center} & \[\left( \frac{2}{k} \right)\left( \frac{2}{k} -1\right)\] \\ \hline Case 4. 
& \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position .54 with {\arrow[black,thick]{>};}} ] \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (1,0) .. controls (1.3333,0.3) and (1.6666, 0.3).. (2,0); \draw[postaction={decorate}] (2,0) .. controls (1.6666,-0.3) and (1.3333, -0.3).. (1,0); \filldraw[black] (0,0) circle (1.5pt) (1,0) circle (1.5pt) (2,0) circle (1.5pt); \filldraw[white] (1.5, 0.6) circle (1pt); \draw (1.5,0.3) node[anchor=south] {$i$} (0.5,0) node[anchor=north] {$j$}; \end{tikzpicture} \end{center} & \[\left( \frac{2}{k}-1 \right)\left( \frac{2}{k} \right)\]\\ \hline Case 5. & \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position .54 with {\arrow[black,thick]{>};}} ] \draw[postaction={decorate}] (1,0) -- (0,0); \draw[postaction={decorate}] (0,0) .. controls (0.3333,0.5) and (0.6666, 0.5).. (1,0); \draw[postaction={decorate}] (0,0) .. controls (0.3333,-0.5) and (0.6666, -0.5)..(1,0); \filldraw[black] (0,0) circle (1.5pt) (1,0) circle (1.5pt); \draw (0.5,0.5) node[anchor=south] {$i$} (0.5,-0.5) node[anchor=north] {$j$}; \end{tikzpicture} \end{center} & \[\left( \frac{2}{k} -1 \right)^2\]\\ \hline \end{tabular} \end{center} \caption{All possible pairs $i,j$ such that there is a length 2 walk in $L(G)$} \label{3walks} \end{table}% We see that the only negative entries of $U^2$ occur for $i,j$ in Cases 3 and 4, when $k >2$. Then $(U^2)_{j,i}$ is negative when $i$ and $j$ share the same head but not the same tail and when $i$ and $j$ share the same tail but not the same head. Then, \[ \begin{split} S^+(U^2) &= (\outs^T \ins)^2 -(\outs^T \outs -I) - (\ins^T \ins -I) \\ &= (\outs^T \ins)^2 -\outs^T \outs - \ins^T \ins + I +I \\ &= (\outs^T \ins)^2 -(\outs^T \ins)P - P(\outs^T \ins) + P^2 +I \\ &= (\outs^T \ins -P)^2 + I \\ &= S^+(U)^2 + I \end{split} \] \qed The next theorem explicitly lists the eigenvalues of $S^+(U^2)$.
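Theorem \ref{thm:su2} is easy to sanity-check numerically. The sketch below (ours, not from the paper; the test graph $K_4$ and all names are illustrative assumptions) rebuilds the matrices of Section \ref{sec:Defs}, verifies $S^+(U^2) = S^+(U)^2 + I$, and compares the computed spectrum of $S^+(U)$ against the multiplicities predicted by Theorem \ref{thm:evals}:

```python
import numpy as np

# Sanity check on the complete graph K_4 (3-regular); construction and names are ours.
n, k = 4, 3
A = np.ones((n, n)) - np.eye(n)
arcs = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
m = len(arcs)  # 12 arcs
D_h = np.zeros((n, m))  # head incidence
D_t = np.zeros((n, m))  # tail incidence
for j, (u, v) in enumerate(arcs):
    D_t[u, j] = 1
    D_h[v, j] = 1
P = np.zeros((m, m))    # arc-reversal permutation
for j, (u, v) in enumerate(arcs):
    P[arcs.index((v, u)), j] = 1

U = (2 / k) * D_t.T @ D_h - P
splus = lambda M: (M > 1e-12).astype(float)  # positive support S^+
S = splus(U)

# Theorem: S^+(U^2) = S^+(U)^2 + I for k > 2.
assert np.allclose(splus(U @ U), S @ S + np.eye(m))

# A(K_4) has eigenvalues 3 and -1 (thrice), so S^+(U) should have eigenvalues
# k - 1 = 2, the roots of t^2 + t + 2 (each three times), 1 with multiplicity
# n(k-2)/2 + 1 = 3, and -1 with multiplicity n(k-2)/2 = 2.
root = (-1 + 1j * np.sqrt(7)) / 2
evs = np.linalg.eigvals(S)
for val, mult in [(2.0, 1), (root, 3), (root.conjugate(), 3), (1.0, 3), (-1.0, 2)]:
    assert np.sum(np.abs(evs - val) < 1e-6) == mult
```

The multiplicity check exhausts all $nk = 12$ eigenvalues, so it confirms the full predicted spectrum for this example.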
\begin{theorem} If $G$ is a regular connected graph with valency $k \geq 2$ and $n$ vertices, then $S^+(U(G)^2)$ has eigenvalues as follows: \begin{enumerate}[i)] \item $k^2-2k + 2$ with multiplicity 1, \item $\frac{\lambda^2 - 2k + 4}{2} \pm \frac{\lambda\sqrt{\lambda^2 - 4(k-1)}}{2}$ as $\lambda$ ranges over the eigenvalues of $A$, the adjacency matrix of $G$, and $\lambda \neq k$, and \item 2 with multiplicity $n(k-2)+1$. \end{enumerate} \end{theorem} \noindent{{\sl Proof. }} From Theorem \ref{thm:su2}, we get that $S^+(U^2) = (S^+(U))^2 + I$. Let $\Zy$ be an eigenvector of $\splus$ with eigenvalue $\theta$. Then, $ S^+(U^2) \Zy = (\theta^2 + 1)\Zy $ and $\Zy$ is an eigenvector of $S^+(U^2)$ with eigenvalue $\theta^2 + 1$. The rest follows from the eigenvalues of $\splus$ found in Theorem \ref{thm:evals}. \qed \section{Quantum Walk Algorithms for Graph Isomorphism} \label{sec:QwalkGI} The \textsl{Graph Isomorphism Problem} is the problem of deciding whether or not two given graphs are isomorphic. The algorithms of Shiau, Joynt and Coppersmith in \cite{SJC03}, Douglas and Wang in \cite{DW08}, and Gamble, Friesen, Zhou and Joynt in \cite{GFZJ10} use the idea of evolving a quantum walk on a given pair of graphs and then comparing a permutation-invariant aspect of the states of the quantum walk on each graph. Both \cite{SJC03} and \cite{GFZJ10} present algorithms based on a two-particle quantum walk.\footnote[1]{In the case of \cite{GFZJ10}, the particles are bosons.} Both procedures have been tested on a large number of strongly regular graphs without finding a pair not distinguished by the procedure. In \cite{S10}, Smith gives a family of graphs on which the procedure of Gamble et al.~\cite{GFZJ10} does not distinguish arbitrary graphs; in fact, he shows that $k$-boson quantum walks do not distinguish arbitrary graphs. However, the question of whether or not the procedure distinguishes all strongly regular graphs is still open.
The quantum walk procedure of Douglas and Wang in \cite{DW08} has also been tested on classes of strongly regular graphs and of regular graphs, where all non-isomorphic graphs were distinguished. Finding a pair of non-isomorphic strongly regular graphs which are not distinguished by any of the three algorithms remains an open problem. Finding a pair of non-isomorphic strongly regular graphs which are not distinguished by the procedure of Emms et al.~is also an open problem. For work toward finding such a pair of graphs, see \cite{me10}. \section{Introduction} A discrete-time quantum walk is a quantum process on a graph whose state vector is governed by a matrix, called the transition matrix. In \cite{ESWH, EHSW06} Emms, Severini, Wilson and Hancock propose that the quantum walk transition matrix can be used to distinguish between non-isomorphic graphs. Let $U(G)$ and $U(H)$ be the transition matrices of quantum walks on $G$ and $H$ respectively. Given a matrix $M$, the \textsl{positive support} of $M$, denoted $S^+(M)$, is the matrix obtained from $M$ as follows: \[ (S^+(M))_{i,j} = \begin{cases} 1 & \text{if } M_{i,j} >0\\ 0 & \text{otherwise.}\end{cases} \] \begin{theorem}\label{su1} If $G$ and $H$ are isomorphic regular graphs, then $S^+(U(G)^3)$ and $S^+(U(H)^3)$ are cospectral. \end{theorem} The authors of \cite{EHSW06, ESWH} propose that the converse of Theorem \ref{su1} is also true; they conjecture that the spectrum of the matrix $S^+(U^3)$ distinguishes strongly regular graphs. After experiments on a large set of graphs, no strongly regular graph is known to have a cospectral mate with respect to this invariant. If the conjecture is true, it would yield a classical polynomial-time algorithm for the Graph Isomorphism Problem for strongly regular graphs (but there do not seem to be strong grounds for believing the conjecture). In this paper we will find the spectra of two matrices related to proposed graph invariant, for regular graphs. 
In \cite{EHSW06}, Emms et al.~compute some eigenvalues of $\splus$ and $S^+(U^2)$ but do not determine them all; for both matrices, they find the set of eigenvalues which are derived from the eigenvalues of the adjacency matrix, but do not find the remaining eigenvalues. The spectrum of $S^+(U)$ is also given in \cite{raewh11}. Here we will use an approach which exploits the linear algebraic properties of $\splus$ to yield a proof that the spectrum of $\splus$ is determined by the spectrum of the graph with respect to the adjacency matrix. We also completely determine the spectrum of $S^+(U^2)$ by expressing $S^+(U^2)$ in terms of $\splus$ and the identity matrix. \section{Preliminary Definitions} \label{sec:Defs} A \textsl{discrete-time quantum walk} is a process on a graph $G$ governed by a unitary matrix, $U$, which is called the \textsl{transition matrix.} For $uv$ and $wx$ arcs in the digraph of $G$, the transition matrix is defined to be: \[ U_{wx,uv} = \begin{cases} \frac{2}{d(v)} &\text{ if } v=w \text{ and } u \neq x ,\\ \frac{2}{d(v)} -1 & \text{ if } v=w \text{ and } u = x, \\ 0 &\text{ otherwise.} \end{cases} \] Let $A$ the adjacency matrix of $G$. 
Let $D$ be the digraph of $G$ and consider the following incidence matrices of $D$, both with rows indexed by the vertices of $D$ and columns indexed by the arcs of $D$: \[ (\ins)_{i,j} = \begin{cases} 1 &\text{if } i \text{ is the head of arc }j \\ 0 &\text{otherwise}\end{cases} \] and \[ (\outs)_{i,j} = \begin{cases} 1 &\text{if } i \text{ is the tail of arc }j \\ 0 &\text{otherwise.}\end{cases} \] To describe the quantum walk, we need one more matrix: let $P$ be a permutation matrix with rows and columns indexed by the arcs of $D$ such that \[P_{wx,uv} = \begin{cases} 1 &\text{if } w=v \text{ and } x=u\text{, that is, if } wx \text{ is the reversal of arc } uv \\ 0 &\text{otherwise.}\end{cases} \] Then, we see that $\ins\outs^{T} = A(G)$ and \[ (\outs^T\ins)_{wx,uv} = \begin{cases} 1 &\text{if } v=w,\\ 0 &\text{otherwise.} \end{cases} \] If $G$ is regular with valency $k$, we have that \[U = \frac{2}{k} \outs^T\ins - P.\] \section{Eigenvalues of $S^+(U)$} \label{sec:su} In this section, we will find the eigenvalues of $S^+(U)$ for a regular graph $G$. If $G$ is regular with valency $1$, then $G$ must be a matching and the spectrum of $S^+(U(G))$ is easily determined. We may direct our attention to regular graphs with valency $k \geq 2$. If $G$ is a regular graph with valency $k$ on $n$ vertices, then \[U = \frac{2}{k} \outs^T\ins - P.\] The only entries of $U$ that are not positive are those with value $\frac{2}{k} -1 \leq 0$, for $k \geq 2$, so $S^+(U) = \outs^T\ins - P$. From Section \ref{sec:Defs}, we see that $\outs \outs^T = k I$ and $\ins \ins^T = k I$. From the definition of $P$, we get that \[ P \ins^T = \outs^T \; \text{ and } \; P\outs^T = \ins^T . \] Let $Q = \frac{2}{k} \ins^T \ins -I$. Then, $ Q^2 = I$ and we can write $\splus$ as: \[ S^+(U) = \outs^T\ins - P = P ( \ins^T \ins -I)= \frac{k}{2}P\left(Q + \frac{k-2}{k}I\right) \] Since $P^2 = Q^2 = I$, the matrices $P$ and $Q$ generate a dihedral group; that is to say, $\langle P, Q\rangle$ is a linear representation of the dihedral group. 
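These incidence and permutation identities are easy to confirm numerically on a small example. The following sketch (our own, using $K_4$ with $k=3$; the variable names simply mirror the matrices defined above) builds $\ins$, $\outs$, $P$ and $U$ and checks them:

```python
import numpy as np

# Numerical sanity check (ours): ins, outs, P and U for K_4 (n = 4, k = 3).
n, k = 4, 3
A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)   # adjacency matrix of K_4
arcs = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
m = len(arcs)                                           # nk = 12 arcs

ins = np.zeros((n, m), dtype=int)    # head incidence: rows vertices, cols arcs
outs = np.zeros((n, m), dtype=int)   # tail incidence
for j, (u, v) in enumerate(arcs):
    outs[u, j] = 1                   # u is the tail of arc (u, v)
    ins[v, j] = 1                    # v is the head of arc (u, v)

P = np.zeros((m, m), dtype=int)      # arc-reversal permutation
for j, (u, v) in enumerate(arcs):
    P[arcs.index((v, u)), j] = 1

U = (2.0 / k) * outs.T @ ins - P
Q = (2.0 / k) * ins.T @ ins - np.eye(m)

assert np.array_equal(ins @ outs.T, A)                        # ins outs^T = A
assert np.array_equal(outs @ outs.T, k * np.eye(n, dtype=int))
assert np.array_equal(P @ ins.T, outs.T)                      # P ins^T = outs^T
assert np.array_equal(P @ outs.T, ins.T)                      # P outs^T = ins^T
assert np.allclose(P @ P, np.eye(m)) and np.allclose(Q @ Q, np.eye(m))
assert np.allclose(U @ U.T, np.eye(m))                        # U is real orthogonal
assert np.array_equal((U > 0).astype(int), outs.T @ ins - P)  # S^+(U)
```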
It is known that an indecomposable representation of this group over $\cx$ has dimension 1 or 2. Using this, we can compute the eigenvalues and multiplicities of elements of $\langle P, Q\rangle$, in particular, of $S^+(U)$. In \cite{Sze04}, Szegedy uses an observation of this flavour to find the spectrum of $U = PQ$. Here, we will use a similar decomposition of the Hilbert space and other linear algebra methods to explicitly determine the spectrum of $\splus$ in terms of the spectrum of the adjacency matrix. \begin{theorem}\label{thm:evals} If $G$ is a regular connected graph with valency $k \geq 2$ and $n$ vertices, then $S^+(U(G))$ has eigenvalues as follows: \begin{enumerate}[i)] \item $k-1$ with multiplicity 1, \item $\frac{\lambda \pm \sqrt{\lambda^2 - 4(k-1)}}{2}$ as $\lambda$ ranges over the eigenvalues of $A$, the adjacency matrix of $G$, and $\lambda \neq k$, \item 1 with multiplicity $\frac{n(k-2)}{2}+1$, and \item $-1$ with multiplicity $\frac{n(k-2)}{2}$. \end{enumerate} \end{theorem} \noindent{{\sl Proof. }} For a matrix $M$, we write $\col(M)$ to denote the column space of $M$ and $\ker(M)$ to denote the kernel of $M$. Let $K = \col(\ins^T)+ \col(\outs^T)$ and let $L = \ker(\ins) \cap \ker(\outs)$. Observe that $K$ and $L$ are orthogonal complements of each other. Then $\re^{nk}$ is the direct sum of orthogonal subspaces $K$ and $L$. We will proceed by considering eigenvectors of $\splus$ in $K$ and in $L$ separately. For $K$, we will show that the eigenvectors of $\splus$ in $K$ lie in subspaces $C(\lambda)$ where $\lambda$ ranges over the eigenvalues of $A$. The subspace $C(k)$ has dimension $1$ while $C(\lambda)$ has dimension 2 for all $\lambda \neq k$. In $L$, we will show that all eigenvectors of $\splus$ have eigenvalue $\pm 1 $ and we will find the multiplicities of $\pm1$. First, we show that $K$ and $L$ are $\splus$-invariant. Since $L$ is the orthogonal complement of $K$, it suffices to check that $K$ is $\splus$-invariant. 
We obtain that: \begin{equation}\label{SDins} \splus \ins^T = k\outs^T - \outs^T = (k-1)\outs^T \end{equation} and \begin{equation}\label{SDouts} \splus \outs^T = \outs^TA - \ins^T . \end{equation} Hence, $K$ is $\splus$-invariant. We consider eigenvectors of $\splus$ in $K$. From equations \eqref{SDins} and \eqref{SDouts}, we obtain: \begin{equation}\label{S2Douts} \splus^2 \outs^T = \splus (\outs^TA - \ins^T) = \splus \outs^TA - (k-1)\outs^T \end{equation} Let $\Zz$ be an eigenvector of $A$ with eigenvalue $\lambda$. Let $\Zy:= \outs^T \Zz$. Then, applying equation \eqref{S2Douts} to $\Zz$, we obtain: \[ \begin{split} \splus^2 \Zy &= \splus^2 \outs^T \Zz \\ &= \splus \outs^TA\Zz - (k-1)\outs^T\Zz \\ &= \lambda \splus \Zy - (k-1)\Zy . \end{split} \] Rearranging, we get \begin{equation}\label{magic} (\splus^2 - \lambda \splus + (k-1)I )\Zy = 0 . \end{equation} Let $C(\lambda) = \text{span}\{ \Zy, \splus \Zy \}$. By definition, $C(\lambda)$ has dimension at most 2 and is contained in $K$. For any vector $\Zx = \alpha\Zy + \beta \splus \Zy$ in $C(\lambda)$, we have that \[ \splus \Zx = \alpha \splus \Zy + \beta \splus^2 \Zy. \] From equation \eqref{magic}, we can write $\splus^2 \Zy$ as a linear combination of $\splus \Zy $ and $\Zy$ and hence $\splus \Zx \in C(\lambda)$. Then, $C(\lambda)$ is $\splus$-invariant. If $C(\lambda)$ is 1-dimensional, then $\Zy$ is an eigenvector of $\splus$. Let $\theta$ be the corresponding eigenvalue. Then \[ \begin{split} \theta \Zy &= \splus \Zy \\ &= \splus \outs^T \Zz \\ &= ( \outs^TA - \ins^T) \Zz \\ &= \lambda \Zy - \ins^T \Zz \end{split} \] Then $(\theta - \lambda) \Zy = - \ins^T \Zz$ and $\Zy$ is in $\col(\ins^T) \cap \col(\outs^T)$. Then $\Zy$ is constant on arcs with a given head and on arcs with a given tail. Then $\Zy$ is constant on arcs of any component of $G$. Since $G$ is connected, $\Zy$ is the constant vector, which implies that $\Zz$ is a constant vector and $\lambda =k$. 
The eigenvalue of $\splus$ corresponding to $\Zy$ is $k-1$. Now suppose $C(\lambda)$ is 2-dimensional. Then, by \eqref{magic}, the minimal polynomial of $\splus$ restricted to $C(\lambda)$ is \[ t^2 - \lambda t + (k-1) \] and the eigenvalues are \[ \frac{\lambda \pm \sqrt{\lambda^2 - 4(k-1)} }{2} . \] These subspaces $C(\lambda)$ account for $2n -1$ eigenvalues of $\splus$. Since $\ins^T$ and $\outs^T$ are both $(nk) \times n$ matrices, $K$ has dimension at most $2n$. But, $\ins^T \mathbf{j} = \outs^T \mathbf{j} = \mathbf{j}$, where $\mathbf{j}$ is the all ones vector, since each row of both $\ins^T$ and $\outs^T$ has exactly one entry with value 1 and all other entries have value 0. Then, $K$ has dimension at most $2n -1$ and we have found all of the eigenvectors of $\splus$ in $K$. We will now find the remaining $n(k-2) + 1$ eigenvalues of $\splus$ over $L$. Let $\Zy$ be in $L$. Then \[ \begin{split} \splus \Zy &= (\outs^T\ins - P) \Zy \\ &= \outs^T\ins\Zy - P\Zy \\ &= - P\Zy . \end{split} \] If $\Zy$ is an eigenvector of $\splus$ with eigenvalue $\lambda$ and $\Zy$ is in $L$, then $\Zy$ is an eigenvector of $P$ with eigenvalue $-\lambda$. Since $P$ is a permutation matrix, $\lambda = \pm1$. To find the multiplicities we consider the sum of all the eigenvalues of $\splus$, which is equal to the trace of $\splus$. Observing that $P$ is a traceless matrix, \[ \tr(\splus) = \tr(\outs^T\ins - P) = \tr(\outs^T\ins) = \tr(\ins\outs^T) = \tr(A) = 0 . \] Hence the sum of all eigenvalues of $\splus$ is 0. Let $\spec(A)$ be the set of eigenvalues of $A$. Consider the sum of the eigenvalues arising from $K$: \[ \begin{split} &(k-1) + \sum_{\lambda \in \spec(A), \lambda \neq k} \frac{\lambda \pm \sqrt{\lambda^2 - 4(k-1)} }{2}\\ &= (k-1) + \sum_{\lambda \in \spec(A), \lambda \neq k} \lambda \\ &=-1 + \sum_{\lambda \in \spec(A)} \lambda \\ &= -1 . \end{split} \] Then, the sum of the eigenvalues over $L$ is 1. 
So, 1 and $-1$ have multiplicities $\frac{n(k-2)}{2} + 1$ and $\frac{n(k-2)}{2}$, respectively. \qed \section{Eigenvalues of $S^+(U^2)$} \label{sec:su2} We will show that $S^+(U^2) = (S^+(U))^2 + I$. Then, the eigenvalues of $S^+(U^2)$ are determined by the eigenvalues of $\splus$. The proof of the theorem will proceed by an analysis of which pairs of arcs give a negative entry in $U^2$. \begin{theorem}\label{thm:su2} For any regular graph with valency $k$, if $k>2$ then $S^+(U^2) = S^+(U)^2 + I $.\end{theorem} \noindent{{\sl Proof. }} Since $\outs^T \ins$ is the adjacency matrix of the line digraph of $G$, the matrix $(\outs^T \ins)^2$ has the property that its $(j,i)$th entry counts the number of directed walks of length two from $i$ to $j$ in the line digraph of $G$. Observe that there is such a walk from $i$ to $j$ in $L(G)$ if and only if the head of $i$ is adjacent to the tail of $j$ in $G$. In particular, if there is a walk of length two from $i$ to $j$, there is only one such walk. Then, $(\outs^T \ins)^2$ is a $01$-matrix and is the support of $U^2$. We will find the required expression for $S^+(U^2)$ by subtracting from $(\outs^T \ins)^2$ the entries in $U^2$ which have negative value. We then proceed to look at the possible arrangements of $i$ and $j$ such that there is a directed walk of length two in $L(G)$ from $i$ to $j$, in Table \ref{3walks}. \begin{table}[htdp] \begin{center} \begin{tabular}{|l|m{4.5cm}|m{3cm}|} \hline & Directed walk of length 3 from $i$ to $j$ in $G$ & Value of $(U^2)_{j,i}$\\ \hline Case 1. 
& \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position .54 with {\arrow[black,thick]{>};}} ] \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (1,0) -- (2,0); \draw[postaction={decorate}] (2,0) -- (3,0); \filldraw[black] (0,0) circle (1.5pt) (1,0) circle (1.5pt) (2,0) circle (1.5pt) (3,0) circle (1.5pt); \filldraw[white] (1.5, 0.3) circle (1pt); \draw (0.5,0) node[anchor=north] {$i$} (2.5,0) node[anchor=north] {$j$}; \end{tikzpicture} \end{center} & \[\left( \frac{2}{k} \right)^2\] \\ \hline Case 2. & \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position .54 with {\arrow[black,thick]{>};}} ] \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (1,0) -- (2,0); \draw[postaction={decorate}] (2,0) .. controls (1.3333,-0.4) and (0.6666, -0.4).. (0,0); \filldraw[black] (0,0) circle (1.5pt) (1,0) circle (1.5pt) (2,0) circle (1.5pt); \filldraw[white] (1.5, 0.3) circle (1pt); \draw (0.5,0) node[anchor=south] {$i$} (1,-0.3) node[anchor=north] {$j$}; \end{tikzpicture} \end{center} & \[\left( \frac{2}{k} \right)^2\] \\ \hline Case 3. & \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position .54 with {\arrow[black,thick]{>};}} ] \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (1,0) .. controls (1.3333,0.3) and (1.6666, 0.3).. (2,0); \draw[postaction={decorate}] (2,0) .. controls (1.6666,-0.3) and (1.3333, -0.3).. (1,0); \filldraw[black] (0,0) circle (1.5pt) (1,0) circle (1.5pt) (2,0) circle (1.5pt); \filldraw[white] (1.5, 0.6) circle (1pt); \draw (0.5,0) node[anchor=north] {$i$} (1.5,-0.2) node[anchor=north] {$j$}; \end{tikzpicture} \end{center} & \[\left( \frac{2}{k} \right)\left( \frac{2}{k} -1\right)\] \\ \hline Case 4. & \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position .54 with {\arrow[black,thick]{>};}} ] \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (1,0) .. 
controls (1.3333,0.3) and (1.6666, 0.3).. (2,0); \draw[postaction={decorate}] (2,0) .. controls (1.6666,-0.3) and (1.3333, -0.3).. (1,0); \filldraw[black] (0,0) circle (1.5pt) (1,0) circle (1.5pt) (2,0) circle (1.5pt); \filldraw[white] (1.5, 0.6) circle (1pt); \draw (1.5,0.3) node[anchor=south] {$i$} (0.5,0) node[anchor=north] {$j$}; \end{tikzpicture} \end{center} & \[\left( \frac{2}{k}-1 \right)\left( \frac{2}{k} \right)\]\\ \hline Case 5. & \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position .54 with {\arrow[black,thick]{>};}} ] \draw[postaction={decorate}] (1,0) -- (0,0); \draw[postaction={decorate}] (0,0) .. controls (0.3333,0.5) and (0.6666, 0.5).. (1,0); \draw[postaction={decorate}] (0,0) .. controls (0.3333,-0.5) and (0.6666, -0.5)..(1,0); \filldraw[black] (0,0) circle (1.5pt) (1,0) circle (1.5pt); \draw (0.5,0.5) node[anchor=south] {$i$} (0.5,-0.5) node[anchor=north] {$j$}; \end{tikzpicture} \end{center} & \[\left( \frac{2}{k} -1 \right)^2\]\\ \hline \end{tabular} \end{center} \caption{All possible pairs $i,j$ such that there is a walk of length 2 from $i$ to $j$ in $L(G)$} \label{3walks} \end{table}% We see that the only negative entries of $U^2$ occur for $i,j$ in Cases 3 and 4, when $k >2$. Then $(U^2)_{j,i}$ is negative when $i$ and $j$ share the same head but not the same tail and when $i$ and $j$ share the same tail but not the same head. Then, \[ \begin{split} S^+(U^2) &= (\outs^T \ins)^2 -(\outs^T \outs -I) - (\ins^T \ins -I) \\ &= (\outs^T \ins)^2 -\outs^T \outs - \ins^T \ins + I +I \\ &= (\outs^T \ins)^2 -(\outs^T \ins)P - P(\outs^T \ins) + P^2 +I \\ &= (\outs^T \ins -P)^2 + I \\ &= S^+(U)^2 + I \end{split} \] \qed The next theorem explicitly lists the eigenvalues of $S^+(U^2)$. 
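Both Theorem~\ref{thm:su2} and the spectrum given in Theorem~\ref{thm:evals} can be confirmed numerically on a small example. The sketch below (our own, using $K_4$, so $k = 3 > 2$) checks $S^+(U^2) = S^+(U)^2 + I$ and compares the eigenvalues of $S^+(U)$ against the predicted list:

```python
import numpy as np

# Numerical check (ours) on K_4 (3-regular, n = 4, m = nk = 12 arcs).
n, k = 4, 3
A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
arcs = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
m = len(arcs)
ins, outs = np.zeros((n, m), dtype=int), np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(arcs):
    outs[u, j], ins[v, j] = 1, 1
P = np.zeros((m, m), dtype=int)
for j, (u, v) in enumerate(arcs):
    P[arcs.index((v, u)), j] = 1
U = (2.0 / k) * outs.T @ ins - P

S1 = (U > 1e-12).astype(int)        # S^+(U)
S2 = (U @ U > 1e-12).astype(int)    # S^+(U^2)
assert np.array_equal(S2, S1 @ S1 + np.eye(m, dtype=int))   # Theorem thm:su2

# Predicted spectrum of S^+(U): k-1 once; roots of t^2 - lam t + (k-1) for
# each eigenvalue lam != k of A (here lam = -1 three times); then +-1 with
# multiplicities n(k-2)/2 + 1 and n(k-2)/2.
predicted = [k - 1]
for lam in [-1, -1, -1]:
    disc = np.sqrt(complex(lam * lam - 4 * (k - 1)))
    predicted += [(lam + disc) / 2, (lam - disc) / 2]
predicted += [1] * (n * (k - 2) // 2 + 1) + [-1] * (n * (k - 2) // 2)

key = lambda z: (round(z.real, 6), round(z.imag, 6))
got = sorted(np.linalg.eigvals(S1.astype(float)), key=key)
assert np.allclose(got, sorted(map(complex, predicted), key=key))
```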
\begin{theorem} If $G$ is a regular connected graph with valency $k > 2$ and $n$ vertices, then $S^+(U(G)^2)$ has eigenvalues as follows: \begin{enumerate}[i)] \item $k^2-2k + 2$ with multiplicity 1, \item $\frac{\lambda^2 - 2k + 4}{2} \pm \frac{\lambda\sqrt{\lambda^2 - 4(k-1)}}{2}$ as $\lambda$ ranges over the eigenvalues of $A$, the adjacency matrix of $G$, and $\lambda \neq k$ and \item 2 with multiplicity $n(k-2)+1$. \end{enumerate} \end{theorem} \noindent{{\sl Proof. }} From Theorem \ref{thm:su2}, we get that $S^+(U^2) = (S^+(U))^2 + I$. Let $\Zy$ be an eigenvector of $\splus$ with eigenvalue $\theta$. Then, $ S^+(U^2) \Zy = (\theta^2 + 1)\Zy $ and $\Zy$ is an eigenvector of $S^+(U^2)$ with eigenvalue $\theta^2 + 1$. The rest follows from the eigenvalues of $\splus$ found in Theorem \ref{thm:evals}. \qed \section{Quantum Walk Algorithms for Graph Isomorphism} \label{sec:QwalkGI} The \textsl{Graph Isomorphism Problem} is the problem of deciding whether or not two given graphs are isomorphic. The algorithms of Shiau, Joynt and Coppersmith in \cite{SJC03}, Douglas and Wang in \cite{DW08}, and Gamble, Friesen, Zhou and Joynt in \cite{GFZJ10} use the idea of evolving a quantum walk on a given pair of graphs and then comparing a permutation-invariant aspect of the states of the quantum walk on each graph. Both \cite{SJC03} and \cite{GFZJ10} present algorithms based on a two-particle quantum walk.\footnote[1]{In the case of \cite{GFZJ10}, the particles are bosons.} Both procedures have been tested on a large number of strongly regular graphs without finding a pair not distinguished by the procedure. In \cite{S10}, Smith gives a family of graphs on which the procedure of Gamble et al.~\cite{GFZJ10} does not distinguish arbitrary graphs; in fact, he shows that $k$-boson quantum walks do not distinguish arbitrary graphs. However, the question of whether or not the procedure distinguishes all strongly regular graphs is still open. 
The quantum walk procedure of Douglas and Wang in \cite{DW08} has also been tested on classes of strongly regular graphs and of regular graphs, where all non-isomorphic graphs were distinguished. Finding a pair of non-isomorphic strongly regular graphs which are not distinguished by any of the three algorithms remains an open problem. Finding a pair of non-isomorphic strongly regular graphs which are not distinguished by the procedure of Emms et al.~is also an open problem. For work toward finding such a pair of graphs, see \cite{me10}. \bibliographystyle{plain}
https://arxiv.org/abs/2107.13443
On fractional version of oriented coloring
We introduce the fractional version of oriented coloring and initiate its study. We prove some basic results and study the parameter for directed cycles and sparse planar graphs. In particular, we show that for every $\epsilon > 0$, there exists an integer $g_{\epsilon} \geq 12$ such that any oriented planar graph having girth at least $g_{\epsilon}$ has fractional oriented chromatic number at most $4+\epsilon$. In contrast, it is known that there exists an oriented planar graph having girth at least $g_{\epsilon}$ with oriented chromatic number equal to $5$. We also study the fractional oriented chromatic number of directed cycles and provide its exact value. Interestingly, the result depends on the prime divisors of the length of the directed cycle.
\section{Introduction} ``It is possible to go to a graph theory conference and to ask oneself, at the end of every talk, what is the fractional analogue?'' - Scheinermann and Ullman~\cite{scheinermann2008fgt} made this remark in the preface of their book on fractional graph theory. Considering the popularity of studying fractional versions of well-studied problems, it is a wonder that the fractional version of oriented coloring is yet to be studied. We initiate it with this article. An \textit{oriented graph} $G$ is a directed graph without any directed cycle of length 1 or 2. In this article, a graph $G$ refers to a simple or an oriented graph while $V(G)$ denotes the set of vertices of $G$ and $E(G)$ or $A(G)$ refers to its set of edges or arcs, respectively. The notion of oriented coloring was introduced by Bruno Courcelle~\cite{courcelle1994monadic} in 1994 inspiring a considerable volume of work on the topic (see recent survey~\cite{sopena-updated-survey} for details). An \textit{oriented $k$-coloring} of an oriented graph $G$ is a function $f$ from $V(G)$ to a set of $k$ colors such that (i) $f(x) \neq f(y)$ for every arc $xy \in A(G)$, and (ii) $f(y) = f(z)$ implies $f(x) \neq f(w)$ for every $xy,zw \in A(G)$. The \textit{oriented chromatic number} $\chi_o(G)$ is the minimum $k$ such that $G$ admits an oriented $k$-coloring. The oriented chromatic number for a family $\mathcal{F}$ of oriented graphs is given by $$\chi_o(\mathcal{F}) = \max\{\chi_o(G)| G \in \mathcal{F}\}.$$ Without further ado, let us now define the natural analogue of the fractional version of the oriented chromatic number. Let $S$ be a set of $k$ colors and let $P_b(S)$ denote the set of all subsets of $S$ having cardinality $b$. 
A \textit{$b$-fold oriented $k$-coloring} is a mapping $f$ from $V(G)$ to $P_b(S)$ satisfying $(i)$ $f(x) \cap f(y) = \emptyset $ for every arc $xy \in A(G)$, and $(ii)$ $f(x) \cap f(w) \neq \emptyset $ implies $f(y) \cap f(z) = \emptyset $ for every $xy, zw \in A(G)$. The \textit{$b$-fold oriented chromatic number} $\chi^b_o(G)$ of $G$ is the minimum $k$ such that $G$ admits a $b$-fold oriented $k$-coloring. The \textit{fractional oriented chromatic number} of $G$ is given by $$\chi^*_o(G) = \lim_{b \rightarrow \infty} \frac{\chi^b_{o}(G)}{b}.$$ Observe that $\chi^{a+b}_o(G) \leq \chi^a_o(G) + \chi^b_o(G)$ for all $a,b \in \mathbb{N}$. Thus the above limit exists due to the Subadditivity Lemma (see Appendix A.4~\cite{scheinermann2008fgt}). Naturally, for a family $\mathcal{F}$ of oriented graphs $$\chi^*_o(\mathcal{F}) = \max\{\chi^*_o(G)| G \in \mathcal{F}\}.$$ Notice that the oriented $k$-coloring and chromatic number $\chi_o(\cdot)$ are equivalent to the $1$-fold oriented $k$-coloring and chromatic number $\chi^1_o(\cdot)$, respectively. The close relation between oriented chromatic number and (oriented) graph homomorphisms is well-known~\cite{sopena-updated-survey}. On the other hand, the fractional version of the usual chromatic number has a famous equivalent formulation using homomorphism to Kneser graphs~\cite{scheinermann2008fgt}. Thus, naturally we study the relation between the oriented fractional coloring and oriented graph homomorphisms. Alongside such a study, we explore some other basic properties of oriented fractional coloring. A relevant related concept is the oriented analogues of clique and clique number. The analogue of clique and clique number for oriented graphs ramified into two notions: (i) oriented absolute clique and clique number, and (ii) oriented relative clique and clique number. The latter is more relevant here. 
An \textit{oriented relative clique}~\cite{nandySS2016} of an oriented graph $G$ is a vertex subset $R \subseteq V(G)$ satisfying $f(x) \neq f(y)$ for any homomorphism $f$ of $G$ and any distinct vertices $x,y \in R$. The \textit{oriented relative clique number}~\cite{nandySS2016} $\omega_{ro}(G)$ of $G$ is the cardinality of a largest oriented relative clique. The oriented relative clique number $\omega_{ro}(\mathcal{F})$ for a family $\mathcal{F}$ of oriented graphs is the maximum $\omega_{ro}(G)$ where $G \in \mathcal{F}$. We will soon see that the parameter $\chi^*_o(\cdot)$ is sandwiched between the parameters oriented relative clique number (lower bound) and oriented chromatic number (upper bound). Therefore, it is interesting to examine the oriented fractional chromatic number of oriented graphs or families of oriented graphs for which the values of $\omega_{ro}$ and $\chi_o$ are different. The most ordinary such graphs one can think of are probably the directed cycles $C_r$ of length $r$. The study of the fractional oriented chromatic number of these simple-looking graphs turns out to be quite challenging. Moreover, the corresponding results are utterly surprising and have an interesting relation with prime numbers. To motivate the readers, we present an interesting example where the fractional oriented chromatic number is strictly less than the oriented chromatic number. \begin{example}\label{example 7-cycle} Let $C_7$ be the directed 7-cycle. We know that $\chi_o(C_7) = 4$. However, Fig.~\ref{fig ex1} shows a $2$-fold oriented $7$-coloring of $C_7$ implying $\chi^*_o(C_7) \leq \frac{7}{2} = 3.5 < 4 = \chi_o(C_7)$. Is this upper bound tight? We will answer that later in this article, and till then we encourage the readers to think about it. 
\end{example} \begin{figure} \centering \begin{tikzpicture}[inner sep=.7mm] \foreach \a in {0,...,6} { \node[draw, circle, line width=1pt](v\a) at (\a*360/7:3cm){$x_\a$}; } \node at (0:3.3cm)[right]{$\{1,2\}$}; \node at (360/7:3.3cm)[above]{$\{3,4\}$}; \node at (2*360/7:3.4cm)[left]{$\{5,6\}$}; \node at (3*360/7:3.4cm)[left]{$\{7,1\}$}; \node at (4*360/7:3.3cm)[left]{$\{2,3\}$}; \node at (5*360/7:3.3cm)[below]{$\{4,5\}$}; \node at (6*360/7:3.35cm)[right]{$\{6,7\}$}; \draw[-latex,line width=1pt,black] (v0) -- (v1); \draw[-latex,line width=1pt,black] (v1) -- (v2); \draw[-latex,line width=1pt,black] (v2) -- (v3); \draw[-latex,line width=1pt,black] (v3) -- (v4); \draw[-latex,line width=1pt,black] (v4) -- (v5); \draw[-latex,line width=1pt,black] (v5) -- (v6); \draw[-latex,line width=1pt,black] (v6) -- (v0); \end{tikzpicture} \caption{Fractional oriented coloring of the directed 7-cycle $C_7$.} \label{fig ex1} \end{figure} One of the most important open problems in the domain of oriented coloring is determining the oriented chromatic number of the family $\mathcal{P}_3$ of oriented planar graphs. In particular, improving the upper bound $\chi_o(\mathcal{P}_3) \leq 80$~\cite{raspaud1994planar80} seems to be especially challenging. Moreover, the related class of questions of determining the oriented chromatic number $\chi_o(\mathcal{P}_g)$ of the family $\mathcal{P}_g$ of oriented planar graphs with \textit{girth} (length of a smallest cycle) at least $g$ is also interesting. However, to date the only known exact values are $\chi_o(\mathcal{P}_g) = 5$ for all $g \geq 12$. In general, finding the exact value of $\chi_o(\mathcal{P}_g)$ for $g \geq 3$ seems to be a tough problem. Therefore, we wondered if the fractional version will be any easier or if the exact values of the parameter will remain the same. 
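Returning to Example~\ref{example 7-cycle}, the $2$-fold assignment of Fig.~\ref{fig ex1} can be verified mechanically; a short script (our own) encoding conditions $(i)$ and $(ii)$ of the definition of a $b$-fold oriented coloring:

```python
# Check (ours) that the 2-fold assignment in Fig. fig ex1 is a valid
# 2-fold oriented 7-coloring of the directed 7-cycle.
r = 7
f = [{1, 2}, {3, 4}, {5, 6}, {7, 1}, {2, 3}, {4, 5}, {6, 7}]  # f(x_i)
arcs = [(i, (i + 1) % r) for i in range(r)]                    # arcs x_i x_{i+1}

# Condition (i): the endpoints of every arc receive disjoint color sets.
ok1 = all(not (f[x] & f[y]) for x, y in arcs)

# Condition (ii): f(x) ∩ f(w) nonempty implies f(y) ∩ f(z) empty,
# for every pair of arcs xy and zw.
ok2 = all(not (f[y] & f[z])
          for x, y in arcs for z, w in arcs
          if f[x] & f[w])

assert ok1 and ok2   # hence chi*_o(C_7) <= 7/2
```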
For this particular article, we focus more on the second question and find that indeed the fractional oriented chromatic number of $\mathcal{P}_g$ can be less than $5$ for large values of $g$, and, in fact, it gets arbitrarily close to $4$ as $g \to \infty$. The organization of this article is as follows. In Section~\ref{sec pre}, we present the preliminary notation and terminology and establish some basic results. In Sections~\ref{sec directed cycle} and~\ref{sec planar} we study the fractional oriented chromatic number of directed cycles and sparse planar graphs, respectively. Finally, in Section~\ref{sec conclusions}, we share our concluding remarks. \section{Preliminaries and basic results}\label{sec pre} Let $G$ and $H$ be two oriented graphs. A function $f: V(G) \to V(H)$ is a \textit{homomorphism} of $G$ to $H$ if for each arc $xy$ of $G$, $f(x)f(y)$ is an arc of $H$. We use the notation $G \to H$ to denote that $G$ admits a homomorphism to $H$. This definition readily motivates the following result. \begin{proposition} If $G \to H$, then $\chi_o^{*}(G) \leq \chi_o^{*}(H)$. \end{proposition} \begin{proof} Let $\chi_o^{*}(H) = \frac{\chi^b_{o}(H)}{b}$, and thus there exists a $b$-fold oriented $\chi^b_{o}(H)$-coloring $f$ of $H$. Assume that $\phi: G \to H$ is a homomorphism. Notice that the composition $f \circ \phi$ is a $b$-fold oriented $\chi^b_{o}(H)$-coloring of $G$. \end{proof} Now the next natural question is whether there exists any equivalent definition of oriented fractional coloring and chromatic number using the notion of homomorphisms. To express such an equivalent formulation we need some definitions. The \textit{Kneser graph $KG_{a,b}$} is a graph having the subsets of cardinality $b$ of a set of cardinality $a$ as vertices, and two vertices are adjacent if and only if they are disjoint sets. 
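As a concrete instance of this definition, $KG_{5,2}$ is the Petersen graph; a quick sketch (our own) builds it from the definition and checks its basic parameters:

```python
from itertools import combinations

# Illustration (ours): the Kneser graph KG_{5,2} is the Petersen graph,
# a 3-regular graph on 10 vertices.
a, b = 5, 2
verts = list(combinations(range(a), b))   # the 2-subsets of a 5-set
adj = {u: [v for v in verts if not set(u) & set(v)] for u in verts}

assert len(verts) == 10
assert all(len(nbrs) == 3 for nbrs in adj.values())
```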
A \textit{consistent sub-orientation} of $KG_{a,b}$ is an oriented graph $\overrightarrow{KG}_{a,b}$, whose underlying graph is a subgraph of $KG_{a,b}$, and whose arcs are oriented in such a way that given two arcs $xy$ and $wz$ we have $x \cap z \neq \emptyset \implies y \cap w = \emptyset$. That brings us to the equivalent formulation of oriented fractional coloring using the notion of homomorphisms. \begin{theorem}\label{th kneser gen} An oriented graph $G$ satisfies $\chi_o^{*}(G) \leq \frac{a}{b}$ if and only if $G \to \overrightarrow{KG}_{ac,bc}$ where $\overrightarrow{KG}_{ac,bc}$ is a consistent sub-orientation of $KG_{ac,bc}$ and $c$ is a constant positive integer. \end{theorem} To prove the above theorem, we will first prove a supporting result. \begin{theorem}\label{th kneser sp} If $G \to \overrightarrow{KG}_{a,b}$ for some consistent sub-orientation of $KG_{a,b}$, then there exists a consistent sub-orientation of $KG_{ac,bc}$ satisfying $G \to \overrightarrow{KG}_{ac,bc}$ for every positive integer $c$. \end{theorem} \begin{proof} It is enough to show that there exists a consistent sub-orientation of $KG_{ac,bc}$ isomorphic to $\overrightarrow{KG}_{a,b}$. This can be obtained by replacing each vertex $x = \{x_1, x_2, \cdots, x_b\}$ of $\overrightarrow{KG}_{a,b}$ with $\hat{x} = \{x_{11}, x_{12}, \cdots, x_{1b}, x_{21}, x_{22}, \cdots, x_{2b},\cdots, x_{c1}, x_{c2}, \cdots, x_{cb}\}$. \end{proof} Now we are ready to prove Theorem~\ref{th kneser gen}. \medskip \noindent \textit{Proof of Theorem~\ref{th kneser gen}.} Let $\chi_o^{*}(G) \leq \frac{a}{b}$. That means, $G$ admits a $b'$-fold oriented $a'$-coloring $f$ for some $a', b'$ satisfying $\frac{a'}{b'} = \frac{a}{b}$. Notice that, as we have not assumed a reduced form of $\frac{a}{b}$, it is possible to have $a' < a$ and $b' < b$. However, due to Theorem~\ref{th kneser sp} it is enough to prove assuming a reduced form of $\frac{a}{b}$. 
Now let us construct an oriented graph $C$ having the color sets assigned by the $b'$-fold oriented $a'$-coloring $f$ of $G$ as vertices. Moreover, two vertices $x$ and $y$ of $C$ have an arc $xy$ between them if there exists an arc $uv$ in $G$ satisfying $f(u)=x$ and $f(v) = y$. Notice that, $C$ is a consistent sub-orientation of $KG_{a',b'}$ and $f$ is a homomorphism of $G$ to $C$. This takes care of the ``only if'' part. On the other hand, if $g: G \to \overrightarrow{KG}_{ac,bc}$ where $\overrightarrow{KG}_{ac,bc}$ is a consistent sub-orientation of $KG_{ac,bc}$, then $g$ itself is also a $bc$-fold $ac$-coloring of $G$. This implies $\chi_o^{*}(G) \leq \frac{a}{b}$. This completes the ``if'' part. \qed Next we will move onto the sandwich theorem mentioned in the introduction. Before stating it, it is useful to recall a handy characterization of an oriented relative clique. \begin{proposition}~\cite{nandySS2016}\label{prop oclique char} A vertex subset $R \subseteq V(G)$ of $G$ is an oriented relative clique if and only if any non-adjacent pair of vertices in $R$ is connected by a directed 2-path. \end{proposition} Now we are ready to state and prove the sandwich theorem. \begin{theorem}\label{prop sandwich} For any oriented graph $G$, $\omega_{ro}(G) \leq \chi^*_o(G) \leq \chi_o(G)$. \end{theorem} \begin{proof} Let $R$ be a relative oriented clique of $G$ with $|R| = \omega_{ro}(G)$. Suppose $f$ is a $b$-fold oriented $a$-coloring of $G$. Then, as the non-adjacent vertices of $R$ are connected by a directed $2$-path, we must have $f(x) \cap f(y) = \emptyset$ for all distinct $x,y \in R$. Thus, a total of $|R|b$ colors have been used on the vertices of $R$. Therefore, $a \geq |R|b = \omega_{ro}(G)b$ which implies that $\frac{a}{b} \geq \omega_{ro}(G)$. Hence the first inequality. The second inequality follows from the trivial observation $\chi^{*}_o(G) \leq \chi^{1}_o(G) = \chi_o(G)$. 
\end{proof} We will sign off the section by establishing a general lower bound for the oriented fractional chromatic number. To do so, we will introduce an oriented analogue of independent sets. A vertex subset $I \subseteq V(G)$ of an oriented graph $G$ is an \textit{oriented independent set} if for each distinct pair of vertices $x, y \in I$, it is possible to find an oriented coloring $f$ of $G$ satisfying $f(x)=f(y)$. \begin{proposition} Given an oriented graph $G$, a vertex subset $I \subseteq V(G)$ is an oriented independent set if and only if any two vertices of $I$ are neither adjacent nor connected by a directed $2$-path. \end{proposition} \begin{proof} The ``only if'' part directly follows from Proposition~\ref{prop oclique char}. For the ``if'' part, given distinct $x, y \in I$, define the following function $$f(z) = \begin{cases} z, &\text{ for } z \in V(G) \setminus \{y\},\\ x, &\text{ for } z =y. \end{cases} $$ Notice that $f$ is a homomorphism of $G$ to $G \setminus \{y\}$. \end{proof} We also use this definition to introduce the oriented analogue of the parameter independence number. The \textit{oriented independence number $\alpha_o(G)$} is the maximum $|I|$ where $I$ is an oriented independent set of $G$. Finally, we are ready to state the result that gives us a general lower bound on the oriented fractional chromatic number. \begin{theorem} Given any oriented graph $G$ we have $\chi_o^{*}(G) \geq \frac{|V(G)|}{\alpha_o(G)}$. \end{theorem} \begin{proof} Let $f$ be a $b$-fold $a$-coloring of $G$ with $\frac{a}{b} = \chi_o^{*}(G)$. This means, there is a set $S$ of $a$ colors whose set of all subsets having cardinality $b$ is $P_b(S)$, and in particular $f$ is a function from $V(G)$ to $P_b(S)$. Note that, if $f(u)=x$, $f(v)=y$, and $x \cap y \neq \emptyset$, then $u$ and $v$ are neither adjacent nor connected by a directed 2-path. Thus $S_z = \{u: z \in f(u)\}$ for any $z \in S$ is an oriented independent set and $|S_z| \leq \alpha_o(G)$. 
Moreover, observe that every vertex $u \in V(G)$ is part of exactly $b$ such sets. Thus we have
$$|V(G)| \cdot b = \sum_{z \in S} |S_z| \leq \alpha_o(G) \cdot |S| = \alpha_o(G) \cdot a.$$
This implies
$$\chi_o^{*}(G) = \frac{a}{b} \geq \frac{|V(G)|}{\alpha_o(G)}$$
and completes the proof.
\end{proof}

Notice that, since $\alpha_o(C_7) = 2$, the above result easily yields the tightness of the bound $\chi_o^{*}(C_7) = 3.5$. In the next section, we are going to provide the exact values of $\chi_o^{*}(C_r)$, where $C_r$ denotes the directed cycle of length $r$, for all $r \geq 3$.

\section{Directed cycles}\label{sec directed cycle}
We use the following classification of prime numbers to present our result. A prime number $k > 3$ is a \textit{type-A prime} if $k\equiv 3\bmod 4$, and is a \textit{type-B prime} if $k\equiv 1\bmod 4$.

Let $r > 5$ be a positive integer. If $r$ does not have a type-A prime factor, then $\beta(r)=0$. Otherwise, $\beta(r)=\frac{4}{p+1}$ where $p$ is the least type-A prime factor of $r$. For instance, $\beta(28)=\beta(35)=\frac{1}{2}$, $\beta(55)=\beta(88)=\frac{1}{3}$, and $\beta(26) =\beta(2^n)=0$.

\begin{theorem}\label{directed-cycles}
Let $C_r$ be a directed cycle of length $r$. Then
\begin{enumerate}[(a)]
\item $\chi^*_{o}(C_r) = 3$ if $r \equiv 0 \bmod 3$,
\item $\chi^*_{o}(C_4) = 4$,
\item $\chi^*_{o}(C_5) = 5$,
\item $\chi^*_{o}(C_r) = 4 - \beta(r)$, if $r > 5$ and $r \not\equiv 0 \bmod 3$.
\end{enumerate}
\end{theorem}

The first three parts of the above theorem follow directly from known results; the main challenge is to prove the last part. That part is unexpected and gives us another indication of how difficult, counter-intuitive and yet beautiful the theory of fractional oriented coloring may actually be.
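The arithmetic behind $\beta$ and the two conditions of a $b$-fold oriented coloring are easy to machine-check. The following Python sketch (the helper names \texttt{beta} and \texttt{is\_b\_fold\_oriented\_coloring} are ours, not from the paper) computes $\beta(r)$ directly from the definition above and verifies, by brute force, that the explicit $2$-fold $7$-coloring of $C_7$ used later in the proof of Lemma~\ref{lem typeA upper} is valid, certifying $\chi_o^{*}(C_7) \leq \frac{7}{2} = 4 - \beta(7)$.

```python
from fractions import Fraction

def beta(r):
    """beta(r): 4/(p+1) for the least type-A prime factor p of r
    (a prime p > 3 with p = 3 mod 4), and 0 if there is none."""
    n, p = r, 2
    while p <= n:
        if n % p == 0:              # p is the least remaining prime factor
            if p > 3 and p % 4 == 3:
                return Fraction(4, p + 1)
            while n % p == 0:
                n //= p
        p += 1
    return Fraction(0)

def is_b_fold_oriented_coloring(arcs, col):
    """Check the two conditions of a b-fold oriented coloring:
    (1) adjacent vertices receive disjoint color sets;
    (2) for arcs uv and xy, col[u] meeting col[y] forbids
        col[v] meeting col[x]."""
    for (u, v) in arcs:
        if col[u] & col[v]:
            return False
    for (u, v) in arcs:
        for (x, y) in arcs:
            if col[u] & col[y] and col[v] & col[x]:
                return False
    return True

# The coloring from the proof of the upper bound, for p = 7, k = 2, r = 7:
# vertex u_i gets the block {2i+1, 2i+2} of residues modulo 7.
arcs = [(i, (i + 1) % 7) for i in range(7)]
col = {i: {(2 * i + 1) % 7, (2 * i + 2) % 7} for i in range(7)}

assert is_b_fold_oriented_coloring(arcs, col)
# chi*_o(C_7) = 4 - beta(7) = 7/2 per part (d) of the theorem
assert 4 - beta(7) == Fraction(7, 2)
```

The same checker can be pointed at any candidate $b$-fold coloring of any oriented graph; only the arc list and the color assignment change.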
Throughout this section we will assume that the set of vertices of $C_r$ is $V(C_r) = \{u_i \mid i \in \mathbb{Z}/r\mathbb{Z} \}$ and the set of arcs of $C_r$ is
$$A(C_r) = \{u_iu_{i+1} \mid i \in \mathbb{Z}/r\mathbb{Z}\},$$
where the operation `$+$' is taken modulo $r$.

\medskip

\noindent \textit{Proof of Theorem~\ref{directed-cycles}(a,b,c).} We know that $\omega_{ro}(C_r) = \chi_o(C_r) = 3$ for all $r \equiv 0 \bmod 3$~\cite{sopena-updated-survey}. We also know that $\omega_{ro}(C_4) = \chi_o(C_4) = 4$ and $\omega_{ro}(C_5) = \chi_o(C_5) = 5$~\cite{sopena-updated-survey}. Thus the result follows due to Theorem~\ref{prop sandwich}. \hfill $ \square $

\medskip

The proof of Theorem~\ref{directed-cycles}(d) is broken into several lemmas and observations presented in the following. For the rest of the section we only deal with $C_r$ having $r > 5$ and $r \not\equiv 0 \bmod 3$. All the `$+$' and `$-$' operations performed in the subscripts of $u$ are computed modulo $r$ unless otherwise stated.

\begin{lemma}\label{greedy-coloring}
For all $r > 5$ and $r \not\equiv 0 \bmod 3$ we have $3 \leq \chi^*_{o}(C_r) \leq 4$.
\end{lemma}

\begin{proof}
We know that $\omega_{ro}(C_r) = 3$ and $\chi_o(C_r) = 4$~\cite{sopena-updated-survey}. Thus the result follows due to Theorem~\ref{prop sandwich}.
\end{proof}

Any $b$-fold oriented $k$-coloring of $C_r$ having $\frac{k}{b} < 4$ is a \textit{miser $b$-fold oriented $k$-coloring}.

\medskip

Let $c$ be a miser $b$-fold oriented $k$-coloring of $C_r$ using a set $S$ of $k < 4b$ colors. Note that as a directed 2-path is an oriented clique, its vertices must receive pairwise disjoint sets of colors. As a consequence:

\begin{lemma}\label{lem dipath}
For all $i,j \in \mathbb{Z}/r\mathbb{Z}$ whose cyclic distance is nonzero and at most $2$, we have $c(u_i) \cap c(u_j) = \emptyset$.
\end{lemma}

Thus assume that $c(u_0) = A$, $c(u_1) = B$ and $c(u_2) = C$. By Lemma~\ref{lem dipath}, $|A \cup B \cup C| = 3b$ and $|A|=|B|=|C| = b$. Let $D = S \setminus (A \cup B \cup C)$.
Observe that $|D| = k - 3b < b$. Notice that the definitions of $A, B, C$ and $D$ depend on the coloring $c$ and on the vertices $u_0, u_1, u_2$.

Now we introduce some notations to aid our proof. Note that $c(u_i) \subset A \cup B \cup C \cup D$ for each $i \in \{0,1, \cdots , r-1 \}$. We will use the notation $c(u_i) = A_i B_i C_i D_i$ to denote the set of colors used on $u_i$. However, if at some point we are sure that some of the sets among $A, B, C$ and $D$ have empty intersection with $c(u_i)$, then we may drop the corresponding set names, along with their subscripts, from this notation. Moreover, if we are sure that $c(u_i) \cap X \neq \emptyset$ for some $X \in \{A,B,C,D\}$, then we may replace $X_i$ with $X^*_i$ in the above notation. One more type of notation that we use is the following: if for some $X \in \{A,B,C,D\}$ we know that $c(u_i) \cap X \neq \emptyset$, then we may denote it as $c(u_i)=X_{\#}$. For instance, if $c(u_i)=A_{\#}$, then we are sure that $c(u_i)$ has colors from $A$, but it may or may not have colors from the other sets. Furthermore, if $c(u_i) \cap X \neq \emptyset$ and $c(u_i) \cap Y \neq \emptyset$ for some $X, Y \in \{A,B,C,D\}$, then we may denote it as $c(u_i)=X_{\#}Y_{\#}$. Similarly, we may use three or four set names among $A,B,C,D$ for this notation. For instance, if at some point we learn that $c(u_i) \cap A \neq \emptyset$, $c(u_i) \cap B = \emptyset$ and $c(u_i) \cap C = \emptyset$, then we can write $c(u_i) = A_i B_i D_i$, $c(u_i) = A^*_i D_i$, $c(u_i) = A^*_i B_i D_i$ or $c(u_i) = A_{\#}$ to denote the set of colors used on $u_i$. We will choose whichever notation suits our purpose.

Now we are going to list some observations needed for the proof.

\begin{observation}\label{obs p1}
There exists no index $i$ with $c(u_i) = D^*_i$.
\end{observation}

\begin{proof}
It is not possible to have $c(u_i) = D^*_i$, as $c(u_i) \subseteq D$ would imply $b = |c(u_i)| \leq |D| < b$.
\end{proof}

The next observation directly follows from the second condition of the definition of a $b$-fold oriented $k$-coloring and the fact that $c(u_0) = A$, $c(u_1) = B$, $c(u_2) = C$.

\begin{observation}\label{obs p2}
The following implications hold:
\begin{itemize}
\item[$(a)$] $c(u_i) = B_{\#} \implies c(u_{i+1}) \neq A_{\#}$,
\item[$(b)$] $c(u_{i+1}) = A_{\#} \implies c(u_i) \neq B_{\#}$,
\item[$(c)$] if $c(u_3) = A$, then $c(u_i) = A_{\#} \implies c(u_{i+1}) \neq C_{\#}$.
\end{itemize}
\end{observation}

In the above statement, the case $c(u_i) = A_{\#}$ is considered only under the special additional condition $c(u_3) = A$.

\begin{observation}\label{obs p3}
If $c(u_i) = X_iD_i$ and $c(u_j) = X_jD_j$, then $c(u_i) \cap c(u_j) = X_iD_i \cap X_jD_j \neq \emptyset$ for any $(X_i,X_j) \in \{(A_i,A_j),(B_i,B_j),(C_i,C_j)\}$.
\end{observation}

\begin{proof}
This observation holds by the pigeonhole principle, as $c(u_i), c(u_j) \subseteq X \cup D$ with $|c(u_i)| = |c(u_j)| = b$ and $|X \cup D| < 2b$.
\end{proof}

The next observation is based on the fact that the color sets assigned to the three vertices of a directed 2-path must be pairwise disjoint under any $b$-fold oriented $k$-coloring, as a directed 2-path is an oriented clique.

\begin{observation}\label{obs p4}
For any index $i \in \{0,1, \cdots , r-1\}$ and for any $X \in \{A,B,C\}$ we have
$$X \cap (c(u_i) \cup c(u_{i+1}) \cup c(u_{i+2})) \neq \emptyset .$$
\end{observation}

\begin{proof}
Note that the vertices $u_i, u_{i+1}$ and $u_{i+2}$ of $C_r$ induce a directed 2-path, and hence $\{u_i, u_{i+1}, u_{i+2}\}$ is an oriented relative clique. Thus $|c(u_i) \cup c(u_{i+1}) \cup c(u_{i+2})| = 3b$. However, $|S \setminus X| = k-b < 3b$. This implies $X \cap (c(u_i) \cup c(u_{i+1}) \cup c(u_{i+2})) \neq \emptyset$.
\end{proof}

The above observation shows that three consecutive vertices of $C_r$ cannot all avoid colors from a particular set $X \in \{A,B,C\}$. The following observation will show the opposite.
\begin{observation}\label{obs p5} For any index $i \in \{0,1, \cdots , r-1\}$ and for any $X \in \{A,B,C\}$ we cannot have, $$c(u_i), c(u_{i+1}), c(u_{i+2}) = X_{\#} .$$ \end{observation} \begin{proof} Suppose the contrary. If $c(u_i), c(u_{i+1}), c(u_{i+2}) = A_{\#}$, then Observation~\ref{obs p2} implies $c(u_{i-1}), c(u_{i}), c(u_{i+1}) \neq B_{\#}$ contradicting Observation~\ref{obs p4}. The other two cases, namely, if $c(u_i), c(u_{i+1}), c(u_{i+2}) = B_{\#}$ and, if $c(u_i), c(u_{i+1}), c(u_{i+2}) = C_{\#}$ can be proved similarly. \end{proof} Based on the above observations we are able to show stronger implications. \begin{observation}\label{obs p6} The following implications hold: \begin{itemize} \item[$(a)$] $c(u_i) = C_\#$ and $c(u_{i}) \neq B_\# \implies c(u_{i+1}) = A_\#$ and $c(u_{i+1}) \neq C_\#$, \item[$(b)$] $c(u_i) = B_\#$ and $c(u_{i}) \neq A_\# \implies c(u_{i+1}) = C_\#$ and $c(u_{i+1}) \neq B_\#$, \item[$(c)$] $c(u_3) = A$, $c(u_i) = A_\#$ and $c(u_{i}) \neq C_\# \implies c(u_{i+1}) = B_\#$ and $c(u_{i+1}) \neq A_\#$. \end{itemize} \end{observation} \begin{proof} $(a)$ If $c(u_i) = C_\#$ and $c(u_{i}) \neq B_\#$, then $c(u_{i+1}) \neq B_\#$ due to Observation~\ref{obs p2}. Moreover, if $c(u_{i+1}) = C_\#$, then $c(u_{i+2}) \neq B_\#$ due to Observation~\ref{obs p2}. This implies $c(u_i),c(u_{i+1}),c(u_{i+2}) \neq B_\#$ contradicting Observation~\ref{obs p4}. Also $c(u_{i+1}) = D^*_{i+1}$ is not possible due to Observation~\ref{obs p1}. Thus $c(u_{i+1}) = A_\#$ and $c(u_{i+1}) \neq C_\#$. \medskip $(b), (c)$ The proofs are similar. \end{proof} An \textit{automorphism} of an oriented graph $G$ is a function $f : V(G) \rightarrow V(G)$ such that $f(u)f(v) \in A(G)$ if and only if $uv \in A(G)$. An oriented graph $G$ is \textit{$3$-dipath transitive} if for any two directed $3$-paths $abcd$ and $wxyz$ of $G$ there exists an automorphism $f$ such that $f(a) =w$, $f(b) =x$, $f(c) =y$ and $f(d) =z$. 
Now we are going to prove one of the key lemmas to aid the proof of the theorem. We need to introduce some notations to state the lemma. The set of colors $c(u_i)$ is called the \textit{label} of $u_i$ for any $i \in \mathbb{Z}/r\mathbb{Z}$. A sequence of \textit{$k$ consecutive labels} is the sequence of labels used on $u_{i}, u_{i+1}, \cdots, u_{i+k-1}$ for some $i \in \mathbb{Z}/r \mathbb{Z}$. A sequence of $3$ consecutive labels of the form ($A^*_iD_i, A_{i+1}B^*_{i+1}D_{i+1}, C^*_{i+2}D_{i+2}$) is a \textit{triple}. A sequence of $4$ consecutive labels of the form ($A^*_iD_i, A^*_{i+1}B^*_{i+1}D_{i+1}, B^*_{i+2}C^*_{i+2}D_{i+2}, C^*_{i+3}D_{i+3}$) is a \textit{quad}.

\begin{lemma}\label{lem triple-quad}
Any label is part of either a triple or a quad.
\end{lemma}

\begin{proof}
As $c(u_0) = A$, $c(u_1) = B$, $c(u_2) = C$, the lemma is true for all $c(u_i)$ having $ i \in \{0,1,2 \}$. Now assume that the lemma is true for all $c(u_i)$ having $ i \in \{0,1, \cdots, k-1 \}$. Observe that $c(u_{k-1}) = C^*_{k-1}D_{k-1}$ due to our assumption. Now we want to show that $c(u_k)$ is part of either a triple or a quad.

Note that $c(u_k) = A^*_kD_k$ due to Observations~\ref{obs p2} and~\ref{obs p6}. If $c(u_{k+1}) \neq B_{\#}$, then it contradicts Observation~\ref{obs p4}. Thus $c(u_{k+1}) = A_{k+1}B^*_{k+1}C_{k+1}D_{k+1}$. However, if $c(u_{k+1}) = C_{\#}$, then $c(u_2) \cap c(u_{k+1}) \neq \emptyset $. On the other hand, as $c(u_3) = A^*_{3}D_{3}$, we have $c(u_3) \cap c(u_{k}) \neq \emptyset $ due to Observation~\ref{obs p3}. This contradicts the second condition of the definition of a $b$-fold oriented $k$-coloring. Therefore, $c(u_{k+1}) = A_{k+1}B^*_{k+1}D_{k+1}$.

Also, $c(u_{k+1}) = B_{\#}$ implies $c(u_{k+2}) \neq A_{\#}$ due to Observation~\ref{obs p2}. Moreover, $c(u_{k+2}) = C_{\#}$ due to Observation~\ref{obs p4}. Therefore, either $c(u_{k+2}) = C^*_{k+2}D_{k+2}$, making $c(u_k)$ part of a triple, or $c(u_{k+2}) = B^*_{k+2}C^*_{k+2}D_{k+2}$.
In the latter case, we must have $c(u_{k+3}) = C^*_{k+3}D_{k+3}$ due to Observations~\ref{obs p1},~\ref{obs p2} and~\ref{obs p5}. Also, this implies that $c(u_{k+1}) = A_{\#}$ due to Observation~\ref{obs p4}. This makes $c(u_k)$ part of a quad.
\end{proof}

The labels in the entire cycle can thus be viewed as a circular arrangement of triples and quads. Let $(c(u_t), c(u_{t+1}), c(u_{t+2}))$ be a triple. If we rename the vertices as $u_i = u_{i+t}$ for all $i \in \mathbb{Z}/r \mathbb{Z}$ and some constant $t$, then the definitions of the sets $A, B, C$ and $D$ get changed accordingly. However, we observe that a triple remains a triple and a quad remains a quad.

\begin{lemma}\label{lem same-same}
If we rename the vertices as $u_i = u_{i+t}$ for all $i \in \mathbb{Z}/r \mathbb{Z}$ and some constant $t$, where $\Sigma = (c(u_t), c(u_{t+1}), c(u_{t+2}))$ is a triple, then a triple remains a triple and a quad remains a quad.
\end{lemma}

\begin{proof}
For convenience, we will refer to every vertex by its name before the renaming throughout the proof. After renaming, the definitions of the sets $A, B, C$ and $D$ change. We will use the labels $c(u_{t}) = A', c(u_{t+1}) =B', c(u_{t+2})=C'$ and $D'= (A \cup B \cup C \cup D) \setminus (A' \cup B' \cup C')$ for convenience. Suppose the first triple/quad after $\Sigma$ that does not remain a triple/quad after renaming is $\Lambda $. We consider the following two cases.

\medskip

\noindent \textbf{Case 1:} Suppose that before renaming $\Lambda = (c(u_{j}), c(u_{j+1}), c(u_{j+2}))$ was a triple. Considering the situation before renaming, due to the pigeonhole principle, we can say that $c(u_{t}) \cap c(u_{j+3}) = A^*_tD_t \cap A^*_{j+3}D_{j+3} \neq \emptyset $. On the other hand, considering the situation after renaming, $c(u_{t}) \cap c(u_{j+3}) = A' \cap C'^*_{j+3}D'_{j+3} = \emptyset $, a contradiction. Thus $\Lambda$ must be a triple even after renaming.
\medskip

\noindent \textbf{Case 2:} Suppose that before renaming $\Lambda = (c(u_{j}), c(u_{j+1}), c(u_{j+2}), c(u_{j+3}))$ was a quad. Considering the situation before renaming, due to the pigeonhole principle, we can say that $c(u_{t+2}) \cap c(u_{j+3}) = C^*_{t+2}D_{t+2} \cap C^*_{j+3}D_{j+3} \neq \emptyset $. On the other hand, considering the situation after renaming, $c(u_{t+2}) \cap c(u_{j+3}) = C' \cap A'^*_{j+3}D'_{j+3} = \emptyset $, a contradiction. Thus $\Lambda$ must be a quad even after renaming.
\end{proof}

If the last element of a triple/quad $\Sigma$ is $c(u_i)$ and the first element of a triple/quad $ \Lambda $ is $c(u_{i+1})$, then $ \Sigma $ and $ \Lambda $ are \textit{consecutive}, where $ \Sigma $ is \textit{before} $ \Lambda $ and $ \Lambda $ is \textit{after} $ \Sigma $.

\begin{lemma}\label{no-consecutive-triples}
There are no consecutive triples.
\end{lemma}

\begin{proof}
As $r \not\equiv 0 \bmod 3$, there must be at least one quad. Thus, if there were consecutive triples, then there would exist a quad $\Sigma_1$ and two triples $\Sigma_2, \Sigma_3$ such that $\Sigma_1, \Sigma_3$ are, respectively, before and after $\Sigma_2$. As the sets $A, B$ and $C$ are defined based on the sets of colors assigned to the vertices $u_0, u_1$ and $u_2$, respectively, and as $C_r$ is $3$-dipath transitive, we may assume that $\Sigma_2 = (c(u_0), c(u_1), c(u_2)) = (A,B,C)$. Therefore, $\Sigma_1 = (A^*_{-4}D_{-4}, A^*_{-3}B^*_{-3}D_{-3}, B^*_{-2}C^*_{-2}D_{-2}, C^*_{-1}D_{-1})$ and $\Sigma_3 = (A^*_{3}D_{3},A_4B^*_{4}D_{4},C^*_{5}D_{5})$.

Note that the following constraint follows directly from the definition of a $b$-fold oriented coloring: for any distinct $i,j \in \mathbb{Z}/r \mathbb{Z}$,
\begin{equation}\label{2dipath-equation}
c(u_i) \cap c(u_j) \neq \emptyset \implies c(u_{i-1}) \cap c(u_{j+1}) = \emptyset \text{ and } c(u_{i+1}) \cap c(u_{j-1}) = \emptyset.
\end{equation}

Observe that $c(u_{-1}) \cap c(u_5) = C^*_{-1} D_{-1} \cap C^*_5 D_5 \neq \emptyset $ by the pigeonhole principle. This implies $ c(u_{0}) \cap c(u_4) = \emptyset $ by equation~(\ref{2dipath-equation}). Furthermore, $c(u_0) \cap c(u_3) = A \cap A^*_3 D_3 \neq \emptyset $. This implies $ c(u_{-1}) \cap c(u_4) = \emptyset $ by equation~(\ref{2dipath-equation}). By equation~(\ref{2dipath-equation}) we have
$$c(u_{-1}) \cap c(u_2) = C^*_{-1}D_{-1} \cap C \neq \emptyset \implies c(u_{-2}) \cap c(u_3) = \emptyset $$
and
$$c(u_{-2}) \cap c(u_2) = B^*_{-2}C^*_{-2}D_{-2} \cap C \neq \emptyset \implies c(u_{-3}) \cap c(u_3) = \emptyset .$$
As $c(u_{-2}) \cap c(u_3) = c(u_{-3}) \cap c(u_3) = \emptyset $, Lemma~\ref{lem dipath} implies $ c(u_{-1}) \cap c(u_3) \neq \emptyset $. This implies $ c(u_{-2}) \cap c(u_4) = \emptyset $. Thus $c(u_{-2}) \cup c(u_{-1}) \cup c(u_{0}) \subseteq (A \cup B \cup C \cup D) \setminus c(u_4)$. This is a contradiction to Lemma~\ref{lem dipath}, as $|(A \cup B \cup C \cup D) \setminus c(u_4)| < 3b $.
\end{proof}

Therefore, there is a positive number of quads in between any two triples. In order to analyze the number of quads in between two triples, we construct a symmetric binary matrix $T$ of dimension $r \times r$ such that
\begin{equation}\label{T-matrix}
T[i,j]=1 \text{ if and only if } c(u_i) \cap c(u_j) \neq\emptyset \text{ for all } i,j \in \mathbb{Z}/r \mathbb{Z}.
\end{equation}
Thus $T[i,j]=1$ implies $T[i+1,j-1]=T[i-1,j+1]=0$ by equation~(\ref{2dipath-equation}). The $i^{th}$ row of the binary matrix $T$ is denoted by $T_i$. Three consecutive bits along a row or a column cannot all be $0$s or all be $1$s, due to Observations~\ref{obs p4} and~\ref{obs p5}. A \textit{cyclic right-shift} is the operation of rearranging the entries in a vector by moving the final entry to the first position, while shifting all other entries to the next position. Now we will present an important property of the binary matrix $T$.
\begin{lemma}\label{cyclic-shift}
The vector $T_{i+1}$ is obtained by a cyclic right-shift of $T_i$.
\end{lemma}

\begin{proof}
We have to show that $T[i,j]=T[i+1,j+1]$ for any $i,j \in \mathbb{Z}/r \mathbb{Z}$. Suppose the contrary, that is, $T[i,j] \neq T[i+1,j+1]$ for some $i,j \in \mathbb{Z}/r \mathbb{Z}$.

First assume that $T[i,j] = 0$ and $T[i+1,j+1]= 1$. This implies $T[i+2,j]=T[i,j+2]=0$ by equation~(\ref{2dipath-equation}). Moreover, to avoid three consecutive $0$s in the $i^{th}$ row, we must have $T[i,j+1]=1$. This implies $T[i+1,j]=0$ by equation~(\ref{2dipath-equation}). Now the $j^{th}$ column has three consecutive $0$s, contradicting Observation~\ref{obs p4}. Thus $T[i,j] = 0$ and $T[i+1,j+1]= 1$ is not possible. Similarly, it can be shown that it is not possible to have $T[i,j]=1$ and $T[i+1,j+1]=0$.
\end{proof}

Let $\Sigma_1$ be a triple, and suppose that after it there are $q$ quads, after which there is a triple $\Sigma_2$. Then we say that $\Sigma_2$ is \textit{$q$-quads after} $\Sigma_1$.

\begin{lemma}\label{constant-triple-separation}
If $\Sigma_3$ is $q_2$-quads after $\Sigma_2$ and $\Sigma_2$ is $q_1$-quads after $\Sigma_1$, then $q_2 = q_1$.
\end{lemma}

\begin{proof}
Suppose, for contradiction, that $q_1 \neq q_2$; without loss of generality let $\Sigma_1=(u_{-i-2},u_{-i-1},u_{-i})$, $\Sigma_2=(u_0,u_1,u_2)$, $\Sigma_3=(u_j,u_{j+1},u_{j+2})$ and $q_1 < q_2$. As $c(u_{-i-1}) \neq C_{\#}$, we have $c(u_{-i-1}) \cap c(u_2) = \emptyset $, and thus due to Lemma~\ref{cyclic-shift} $c(u_{2}) \cap c(u_{i+5}) = \emptyset $. However, note that as $q_1 < q_2$, the label $c(u_{i+5})$ is the fourth element of a quad. Thus $c(u_{i+5}) = C^*_{i+5}D_{i+5}$, implying $c(u_{2}) \cap c(u_{i+5}) \neq \emptyset $, a contradiction.
\end{proof}

Note that there is at least one triple due to the way the sets $A, B, C$ are defined. If there are $t$ triples and if each triple is $q$-quads after the previous one, then $r=(4q+3)t$. Therefore, the next lemma follows directly.
\begin{lemma}\label{lem typeB none}
There is no miser $b$-fold oriented $k$-coloring of $C_r$ unless $r$ is divisible by $(4q+3)$ for some $q \in \mathbb{N}$.
\end{lemma}

Suppose that $r$ is divisible by $(4q+3)$ for some $q \in \mathbb{N}$, and let $q'$ be the smallest such integer. Then observe that $(4q'+3)$ is a prime number. Therefore, if $C_r$ admits a miser $b$-fold oriented $k$-coloring, then $r$ must have a type-A prime factor. Furthermore, we will show that if $r$ is divisible by a type-A prime $p$, then $\chi^*_o(C_r) \leq 4 - \frac{4}{p+1}$. In fact, we will provide a particular coloring to prove that.

\begin{lemma}\label{lem typeA upper}
If a type-A prime $p$ divides $r$, then $\chi^*_o(C_r) \leq 4 - \beta(r)$.
\end{lemma}

\begin{proof}
Assume that $p$ is the smallest type-A prime that divides $r$. Let $p = 4k-1$ and $r = pt$. Now we will provide a $k$-fold oriented $p$-coloring of $C_r$. For every vertex $u_x \in V(C_r)$, let $f(u_x) = \{i \cdot k +1, i \cdot k +2, \cdots, i \cdot k +k \}$, where the elements are computed modulo $(4k-1)$ and $i \in \{0,1,\cdots, p-1\}$ satisfies $x \equiv i \bmod p$. Observe that $f$ is a $k$-fold oriented $p$-coloring of $C_r$.
\end{proof}

We will also show that the above upper bound is the best possible.

\begin{lemma}\label{lem typeA lower}
If a type-A prime $p$ divides $r$, then $\chi^*_o(C_r) \geq 4 - \beta(r)$.
\end{lemma}

\begin{proof}
Suppose a miser $b$-fold oriented $k$-coloring $c$ of $C_r$ has exactly $t$ triples and $q$ quads in between two successive triples. Therefore, $r = (4q+3)t$; moreover, as $4q+3 \equiv 3 \bmod 4$ and $3 \nmid r$, the number $4q+3$ has a type-A prime factor, so the least type-A prime factor $p$ of $r$ satisfies $p \leq 4q+3$, and hence $\beta(r) = \frac{4}{p+1} \geq \frac{1}{q+1}$. Note that a color from the set $A$ can be used at most once in a triple or a quad. Thus, a particular color from $A$ is used at most $(q+1)t$ times. As we can rename the vertices as $u_i = u_{i+t}$, any color used can be thought of as a color from $A$. Hence any color is used at most $(q+1)t$ times.
Thus
$$rb \leq k(q+1)t \implies 4 - \frac{1}{q+1} \leq \frac{k}{b} \implies 4 - \beta(r) \leq \frac{k}{b}.$$
This ends our proof.
\end{proof}

Now we are ready to prove Theorem~\ref{directed-cycles}(d).

\medskip

\noindent \textit{Proof of Theorem~\ref{directed-cycles}(d).} The proof directly follows from Lemmas~\ref{lem typeB none}, \ref{lem typeA upper} and~\ref{lem typeA lower}. \hfill $ \square $

\section{Sparse planar graphs}\label{sec planar}
Recall that $\mathcal{P}_g$ denotes the family of oriented planar graphs having girth at least $g$. Due to Theorem~\ref{directed-cycles} proved in the previous section, we know that $\chi^*_o(\mathcal{P}_{g}) \geq 4$ for all $g \geq 3$. A natural question to ask is how close to $4$ the value $\chi^*_o(\mathcal{P}_{g})$ can get as we increase the value of $g$. The next result is our attempt to answer that question.

\begin{theorem}\label{th epsilon-girth}
For any $\epsilon > 0$, there exists an integer $g_{\epsilon} < \infty$ such that $4 \leq \chi^*_o(\mathcal{P}_{g_{\epsilon}}) \leq 4+ \epsilon$.
\end{theorem}

We need to introduce some notation and terminology for our proof. Given an arc $xy$ of an oriented graph $G$, $y$ is an \textit{out-neighbor} of $x$ and $x$ is an \textit{in-neighbor} of $y$. The set of all out-neighbors (resp., in-neighbors) of $x$ is denoted by $N^+(x)$ (resp., $N^-(x)$). Furthermore, given a vertex set $S \subseteq V(G)$ one can define
$$N^+(S) = \cup_{x \in S} N^+(x) \text{ and } N^-(S) = \cup_{x \in S} N^-(x).$$
Let $\alpha = (\alpha_1, \alpha_2, \cdots, \alpha_k) \in \{+,-\}^k$ be a $k$-vector. For $k=1$, let us define $N^{\alpha}(x)=N^{\alpha_1}(x)$ for any $x \in V(G)$. After that, for $k \geq 2$, we can inductively define $N^{\alpha}(x) = N^{\alpha_1}(N^{\alpha'}(x))$ where $\alpha' = (\alpha_2, \alpha_3, \cdots, \alpha_k)$. An oriented graph $G$ is \textit{$k$-nice} if for each $k$-vector $\alpha \in \{+,-\}^k$ and each vertex $x \in V(G)$ we have $N^{\alpha}(x) = V(G)$.
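The inductive definition of $N^{\alpha}$ can be checked mechanically for small graphs. The following Python sketch (the helper names are ours) computes $N^{\alpha}(x)$ by applying the signs of $\alpha$ from the last coordinate to the first, exactly as in the inductive definition, and tests $k$-niceness by exhausting all $2^k$ vectors. As an illustration, it confirms that the $9$-vertex oriented graph with arcs $x_ix_{i+2}$ and $x_ix_{i+3}$ (indices modulo $9$), which reappears below as $T_1$, is $9$-nice, while a directed triangle is not.

```python
from itertools import product

def n_alpha(out_nbrs, in_nbrs, alpha, x):
    """N^alpha(x): apply alpha_k first, ..., alpha_1 last,
    following the inductive definition N^alpha = N^{alpha_1} o N^{alpha'}."""
    current = {x}
    for sign in reversed(alpha):
        nbrs = out_nbrs if sign == '+' else in_nbrs
        current = set().union(*(nbrs[v] for v in current))
    return current

def is_k_nice(vertices, arcs, k):
    """Exhaustively check that N^alpha(x) = V for all k-vectors alpha
    and all vertices x."""
    out_nbrs = {v: {w for (u, w) in arcs if u == v} for v in vertices}
    in_nbrs = {v: {u for (u, w) in arcs if w == v} for v in vertices}
    return all(n_alpha(out_nbrs, in_nbrs, alpha, x) == set(vertices)
               for alpha in product('+-', repeat=k) for x in vertices)

# The graph appearing below as T_1: vertices 0..8, arcs i -> i+2, i -> i+3 mod 9.
t1_arcs = [(i, (i + d) % 9) for i in range(9) for d in (2, 3)]
assert is_k_nice(range(9), t1_arcs, 9)

# A directed triangle is never k-nice: N^alpha(x) is always a single vertex.
c3_arcs = [(0, 1), (1, 2), (2, 0)]
assert not is_k_nice(range(3), c3_arcs, 3)
```

Each application of $N^{+}$ or $N^{-}$ to a set of consecutive vertices of $T_1$ grows it by one vertex, which is why $8$ (and hence $9$) applications already cover all $9$ vertices.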
This is an important property in our context due to the following result.

\begin{proposition}\cite{nice}\label{prop nice}
If an oriented graph $T$ is $k$-nice, then any $G \in \mathcal{P}_{5k-1}$ admits a homomorphism to $T$.
\end{proposition}

Next we need to construct a particular family of oriented graphs $T_l$ for all $l \geq 0$. Let $m = 2^l$ and let $n = 4m+1 = 2^{l+2} +1$. Consider the additive group $\mathbb{Z}/n\mathbb{Z}$ and let $V(T_l)$ be the set of all $m$-tuples of consecutive elements of $\mathbb{Z}/n\mathbb{Z}$, that is,
$$V(T_l) = \{x_i = (i, i+ 1, \cdots, i+m-1) : i \in \mathbb{Z}/n\mathbb{Z} \}$$
where the `$+$' operation is taken modulo $n$. Furthermore,
$$A(T_l) = \{x_ix_j \mid i+m \leq j \leq i + m+1 \}.$$
Intuitively speaking, the vertex $x_i$ is represented by the $m$-tuple $(i, i+ 1, \cdots, i+m-1)$ and it has arcs to the vertices $x_{i+m}= (i+m, i+m +1, \cdots, i+2m-1)$ and $x_{i+m+1} = (i+m+1, i+m +2, \cdots, i+2m)$. Note that the elements of the tuple of $x_i$ are distinct from the elements of the tuples of $x_{i+m}$ and $x_{i+m+1}$. Thus $x_i$ has exactly two out-neighbors, $x_{i+m}$ and $x_{i+m+1}$. On the other hand, $x_{i-m}$ and $x_{i-m-1}$ have $x_i$ as their common out-neighbor. That is, $x_i$ has exactly two in-neighbors, $x_{i-m}$ and $x_{i-m-1}$. For convenience, we have depicted the oriented graph $T_l$ for $l = 1$ in Fig.~\ref{figure:sample}.
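The degree claims for $T_l$, and the two conditions that its tuples must satisfy to form an $m$-fold oriented $n$-coloring (verified in the proof of Theorem~\ref{th epsilon-girth} below), can also be confirmed by a short computation. The Python sketch below (the name \texttt{build\_Tl} is ours) builds $T_l$ for small $l$ and checks each claim by brute force.

```python
def build_Tl(l):
    """T_l: vertex i of Z/nZ carries the tuple {i, ..., i+m-1};
    arcs go from x_i to x_{i+m} and to x_{i+m+1} (mod n)."""
    m = 2 ** l
    n = 4 * m + 1
    tuples = {i: {(i + s) % n for s in range(m)} for i in range(n)}
    arcs = [(i, (i + m) % n) for i in range(n)] + \
           [(i, (i + m + 1) % n) for i in range(n)]
    return n, tuples, arcs

for l in range(4):
    n, tup, arcs = build_Tl(l)
    # each vertex has exactly two out-neighbors and two in-neighbors
    for v in range(n):
        assert sum(1 for (u, w) in arcs if u == v) == 2
        assert sum(1 for (u, w) in arcs if w == v) == 2
    # condition 1: adjacent vertices carry disjoint tuples
    assert all(not (tup[u] & tup[w]) for (u, w) in arcs)
    # condition 2: no pair of arcs uv, xy with tup[u] meeting tup[y]
    # and tup[v] meeting tup[x] (shifts cannot sum to 0 mod 4m+1)
    assert all(not (tup[u] & tup[y] and tup[v] & tup[x])
               for (u, v) in arcs for (x, y) in arcs)
```

For $l = 1$ this reproduces the graph of Fig.~\ref{figure:sample}; the ratio of colors to fold, $n/m = 4 + 2^{-l}$, is exactly the upper bound used in the proof.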
\begin{figure}
\centering
\begin{tikzpicture}[inner sep=.7mm]
\foreach \a in {0,...,8}
{
\node[draw, circle, line width=1pt](v\a) at (\a*360/9:3cm){$x_\a$};
}
\node at (0:3.3cm)[right]{$\{0,1\}$};
\node at (360/9:3.3cm)[right]{$\{1,2\}$};
\node at (2*360/9:3.4cm)[above]{$\{2,3\}$};
\node at (3*360/9:3.4cm)[above]{$\{3,4\}$};
\node at (4*360/9:3.3cm)[left]{$\{4,5\}$};
\node at (5*360/9:3.3cm)[left]{$\{5,6\}$};
\node at (6*360/9:3.35cm)[left]{$\{6,7\}$};
\node at (7*360/9:3.45cm)[right]{$\{7,8\}$};
\node at (8*360/9:3.3cm)[right]{$\{8,0\}$};
\draw[-latex,line width=1pt,black] (v0) -- (v3);
\draw[-latex,line width=1pt,black] (v1) -- (v4);
\draw[-latex,line width=1pt,black] (v2) -- (v5);
\draw[-latex,line width=1pt,black] (v3) -- (v6);
\draw[-latex,line width=1pt,black] (v4) -- (v7);
\draw[-latex,line width=1pt,black] (v5) -- (v8);
\draw[-latex,line width=1pt,black] (v6) -- (v0);
\draw[-latex,line width=1pt,black] (v7) -- (v1);
\draw[-latex,line width=1pt,black] (v8) -- (v2);
\draw[-latex,line width=1pt,black] (v0) -- (v2);
\draw[-latex,line width=1pt,black] (v1) -- (v3);
\draw[-latex,line width=1pt,black] (v2) -- (v4);
\draw[-latex,line width=1pt,black] (v3) -- (v5);
\draw[-latex,line width=1pt,black] (v4) -- (v6);
\draw[-latex,line width=1pt,black] (v5) -- (v7);
\draw[-latex,line width=1pt,black] (v6) -- (v8);
\draw[-latex,line width=1pt,black] (v7) -- (v0);
\draw[-latex,line width=1pt,black] (v8) -- (v1);
\end{tikzpicture}
\caption{The oriented graph $T_l$ for $l = 1$.}
\label{figure:sample}
\end{figure}

Next we will show that $T_l$ is $n$-nice.

\begin{lemma}
The oriented graph $T_l$ is $n$-nice.
\end{lemma}

\begin{proof}
Observe that each vertex $x_i$ of $T_l$ has exactly $2$ out-neighbors $x_{i+m} , x_{i+m+1}$ and exactly $2$ in-neighbors $x_{i-m}, x_{i-m-1}$. Therefore, given a set $S = \{x_i, x_{i+1}, \cdots, x_{i+t-1}\}$ of $t$ consecutive vertices of $T_l$, both $N^+(S)$ and $N^-(S)$ are again sets of consecutive vertices, and we have $|N^+(S)|=|N^-(S)|=t+1$ for all $t < n$.
Thus, for any vertex $x_i$ and any $(n-1)$-vector $\alpha$, we have $|N^{\alpha}(x_i)| = n$, that is, $N^{\alpha}(x_i) = V(T_l)$. As $N^{+}(V(T_l)) = N^{-}(V(T_l)) = V(T_l)$, it follows that $N^{\alpha}(x_i) = V(T_l)$ for every $n$-vector $\alpha$ as well.
\end{proof}

Therefore, using the above lemma and Proposition~\ref{prop nice}, we can conclude the following.

\begin{lemma}\label{lem girth-nice}
Any $G \in \mathcal{P}_{5 \cdot n -1}$ admits a homomorphism to $T_l$, where $n = 2^{l+2} + 1$.
\end{lemma}

Now we are ready to prove Theorem~\ref{th epsilon-girth}.

\medskip

\noindent \textit{Proof of Theorem~\ref{th epsilon-girth}.} Note that a directed cycle $C_r$ of length $r = 5 \cdot 2^{n+1}$ belongs to $\mathcal{P}_{5 \cdot n -1}$. Also, $\beta(r) = 0$. Thus, according to Theorem~\ref{directed-cycles}(d), which was proved in the previous section, $\chi_o^*(C_r) = 4$. This implies the lower bound.

Next we are going to prove the upper bound. Note that each vertex of $T_l$ corresponds to an $m$-tuple, where $m = 2^{l}$. That means, if we consider the elements of the tuples to be colors, then the tuple corresponding to $x_i$ can be viewed as a set of $m$ colors assigned to $x_i$. Therefore, the tuples corresponding to the vertices of $T_l$ give a potential $m$-fold oriented $n$-coloring of $T_l$. We are going to verify that the tuples indeed provide an $m$-fold oriented $n$-coloring of $T_l$.

To verify the first condition of an $m$-fold oriented coloring, we observe that any two adjacent vertices of $T_l$ correspond to tuples that have disjoint sets of elements. To be precise, an arc of $T_l$ can be of the two following types: $x_ix_{i+m}$ and $ x_ix_{i+m+1}$. Clearly, the elements of $x_i = (i, i+1, \cdots, i+m-1)$ are different from the elements of $x_{i+m} = (i+m, i+m+1, \cdots, i+2m-1)$ and of $x_{i+m+1} = (i+m+1, i+m+2, \cdots, i+2m)$.

To verify the second condition, let us assume that the condition is violated by the pair of arcs $x_{i_1}x_{i_2}$ and $x_{j_1}x_{j_2}$.
In order for such a violation to happen, we must have at least one common element $s$ between the tuples corresponding to $x_{i_1}, x_{j_2}$ and at least one common element $t$ between the tuples corresponding to $x_{i_2}, x_{j_1}$. That $s$ is an element of $x_{i_1}$ and $t$ is an element of $x_{i_2}$ implies $t = s+d$ where $1 \leq d \leq 2m$. Similarly, as $t$ is an element of $x_{j_1}$ and $s$ is an element of $x_{j_2}$, we have $s = t+d'$ where $1 \leq d' \leq 2m$. Therefore, $s + (d+d') = s$ where $2 \leq d+d' \leq 4m$. This is a contradiction, as we are working modulo $(4m+1)$. Hence, $T_l$ admits an $m$-fold oriented $n$-coloring.

Therefore, any graph that admits a homomorphism to $T_l$ also admits an $m$-fold oriented $n$-coloring. Thus if $G$ admits a homomorphism to $T_l$, then
$$\chi^*_o(G) \leq \frac{|V(T_l)|}{m} = \frac{n}{m} =\frac{2^{l+2}+1}{2^{l}}=4+\frac{1}{2^{l}}.$$
Thus we are done, as $\lim_{l \to \infty} \frac{1}{2^{l}} =0$. \hfill $ \square $

\section{Conclusions}\label{sec conclusions}
This article initiates the study of the fractional version of oriented coloring. The work in this article naturally motivates the following problems.
\begin{enumerate}[(1)]
\item Find a linear programming formulation of the notion of fractional oriented chromatic number.
\item Find the fractional oriented chromatic numbers of all oriented cycles.
\item Find the fractional oriented chromatic numbers of the families of oriented planar graphs having girth at least $g$, for all $g \geq 3$.
\item Study the complexity dichotomy of computing the parameter.
\end{enumerate}

\bibliographystyle{abbrv}
https://arxiv.org/abs/2011.14299
A characterization of $X$ for which spaces $C_p(X)$ are distinguished and its applications
We prove that the locally convex space $C_{p}(X)$ of continuous real-valued functions on a Tychonoff space $X$ equipped with the topology of pointwise convergence is distinguished if and only if $X$ is a $\Delta$-space in the sense of \cite{Knight}. As an application of this characterization theorem we obtain the following results: 1) If $X$ is a Čech-complete (in particular, compact) space such that $C_p(X)$ is distinguished, then $X$ is scattered. 2) For every separable compact space of the Isbell--Mrówka type $X$, the space $C_p(X)$ is distinguished. 3) If $X$ is the compact space of ordinals $[0,\omega_1]$, then $C_p(X)$ is not distinguished. We observe that the existence of an uncountable separable metrizable space $X$ such that $C_p(X)$ is distinguished is independent of ZFC. We also explore the question to which extent the class of $\Delta$-spaces is invariant under basic topological operations.
\section{Introduction}\label{intro} Following J. Dieudonn\'{e} and L. Schwartz \cite{dieudonne}, a locally convex space (lcs) $E$ is called \emph{distinguished} if every subset of the bidual of $E$ which is bounded in the weak$^{*}$-topology is contained in the weak$^{*}$-closure of some bounded subset of $E$. Equivalently, an lcs $E$ is distinguished if and only if the strong dual of $E$ (i.e. the topological dual of $E$ endowed with the strong topology) is \emph{barrelled} (see \cite[8.7.1]{Ko}). A. Grothendieck \cite{grothendieck} proved that a metrizable lcs $E$ is distinguished if and only if its strong dual is \emph{bornological}. We refer the reader to the survey articles \cite{BB1} and \cite{BB2}, which present a number of more recent results about distinguished metrizable and Fr\'echet lcs. Throughout the article, all topological spaces are assumed to be Tychonoff and infinite. By $C_{p}(X)$ and $C_{k}(X)$ we mean the spaces of all real-valued continuous functions on a Tychonoff space $X$ endowed with the topology of pointwise convergence and the compact-open topology, respectively. By a \emph{bounded set} in a topological vector space (in particular, in $C_{p}(X)$) we understand any set which is absorbed by every $0$-neighbourhood. For spaces $C_{p}(X)$ we proved in \cite{FKLS} the following theorem (the equivalence $(1) \Leftrightarrow (4)$ has been obtained in \cite{fe-ka}). \begin{theorem}\label{Theor:characterization} For a Tychonoff space $X$, the following conditions are equivalent{\rm:} \begin{enumerate} \item[{\rm (1)}] $C_p(X)$ is distinguished. \item[{\rm (2)}] $C_{p}\left( X\right) $ is a large subspace of $\mathbb{R}^{X}$, i.e. for every bounded set $A$ in $\mathbb{R}^{X}$ there exists a bounded set $B$ in $C_{p}(X)$ such that $A\subset cl_{\R^X}(B)$. \item [{\rm (3)}] For every $f \in \R^X$ there is a bounded set $B \subset C_p(X)$ such that $f \in cl_{\R^X}(B)$. 
\item[{\rm (4)}] The strong dual of the space $C_{p}(X)$ carries the finest locally convex topology. \end{enumerate} \end{theorem} Several examples of spaces $C_{p}(X)$ with (or without) the distinguished property have been provided in the papers \cite{fe-ka}, \cite{fe-ka-sa} and \cite{FKLS}. The aim of this research is to continue our initial work on distinguished spaces $C_{p}(X)$. The following concept plays a key role in our paper; we demonstrate its applicability to the study of distinguished spaces $C_{p}(X)$. \begin{deff}{\rm (\cite{Knight})}\label{def:Delta} A topological space $X$ is said to be a $\Delta$-space if for every decreasing sequence $\{D_n: n \in \omega\}$ of subsets of $X$ with empty intersection, there is a decreasing sequence $\{V_n: n \in \omega\}$ consisting of open subsets of $X$, also with empty intersection, and such that $D_n \subset V_n$ for every $n \in \omega$. \end{deff} We should mention that R. W. Knight \cite{Knight} called topological spaces $X$ satisfying the above Definition \ref{def:Delta} $\Delta$-sets. The original definition of a $\Delta$-set of the real line $\R$ is due to G. M. Reed and E. K. van Douwen (see \cite{Reed}). In this paper, for general topological spaces satisfying Definition \ref{def:Delta} we reserve the term \emph{$\Delta$-space}. The class of all $\Delta$-spaces is denoted by $\Delta$. In Section \ref{description} we give an intrinsic description of all spaces $X \in \Delta$. One of the main results of our paper, Theorem \ref{Theor:description}, says that $X$ is a $\Delta$-space if and only if $C_{p}(X)$ is a distinguished space. This characterization theorem is applied systematically to obtain a range of results throughout the paper. Our main result in Section \ref{compact} states that a \v{C}ech-complete (in particular, compact) $X \in \dcal$ must be scattered. A very natural question arises: which scattered compact spaces belong to $\dcal$? 
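Before turning to the results, let us record a toy illustration of Definition \ref{def:Delta}; this trivial example is not taken from the literature and is included only to fix the ideas.

% A minimal instance of Definition \ref{def:Delta}: in a discrete space
% every subset is open, so a decreasing sequence expands into itself.
\begin{example}
Every discrete space $X$ belongs to $\Delta$. Indeed, if $\{D_n: n \in \omega\}$ is a decreasing sequence of subsets of $X$ with $\bigcap_{n\in\omega} D_n = \emptyset$, then, since every subset of a discrete space is open, the sets $V_n = D_n$ form a decreasing sequence of open sets such that $D_n \subset V_n$ for every $n$ and $\bigcap_{n\in\omega} V_n = \emptyset$.
\end{example}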
In view of Theorem \ref{Theor:description}, it is known that a Corson compact $X$ belongs to the class $\dcal$ if and only if $X$ is a scattered Eberlein compact space \cite{FKLS}. With the help of Theorems \ref{Theor:description} and \ref{Theor:union} we show that the class $\Delta$ contains also all separable compact spaces of the Isbell--Mr\'owka type. Nevertheless, as we demonstrate in Section \ref{compact}, there are compact scattered spaces $X \notin \Delta$ (for example, the compact space $[0,\omega_{1}]$). Section \ref{metr} deals with the questions about metrizable spaces $X\in \Delta$. We notice that every $\sigma$-scattered metrizable space $X$ belongs to the class $\Delta$. For separable metrizable spaces $X$, our analysis reveals a tight connection between distinguished $C_p(X)$ and well-known set-theoretic problems about special subsets of the real line $\R$. We observe that the existence of an uncountable separable metrizable space $X\in\Delta$ is independent of ZFC and it is equivalent to the existence of a separable countably paracompact nonnormal Moore space. We refer readers to \cite{Nyikos} for the history of the normal Moore problem. In Section \ref{problems} we study whether the class $\Delta$ is invariant under the basic topological operations: subspaces, (quotient) continuous images, finite/countable unions and finite products. We pose several new open problems. \section{Description Theorem}\label{description} In this section we provide an intrinsic description of $X \in \dcal$. For the reader's convenience we recall some relevant terminology. \begin{enumerate} \item[\rm (a)] A disjoint cover $\{X_{\gamma}: \gamma \in \Gamma\}$ of $X$ is called a {\it partition} of $X$. \item[\rm (b)] A collection of sets $\{U_{\gamma}: \gamma \in \Gamma\}$ is called an {\it expansion} of a collection of sets $\{X_{\gamma}: \gamma \in \Gamma\}$ in $X$ if $X_{\gamma} \subseteq U_{\gamma} \subseteq X$ for every index $\gamma \in \Gamma$. 
\item[\rm (c)] A collection of sets $\{U_{\gamma}: \gamma \in \Gamma\}$ is called {\it point-finite} if no point belongs to infinitely many of the sets $U_{\gamma}$. \end{enumerate} \begin{theorem}\label{Theor:description} For a Tychonoff space $X$, the following conditions are equivalent{\rm:} \begin{enumerate} \item[{\rm (1)}] $C_p(X)$ is distinguished. \item[{\rm (2)}] Any countable partition of $X$ admits a point-finite open expansion in $X$. \item[{\rm (3)}] Any countable disjoint collection of subsets of $X$ admits a point-finite open expansion in $X$. \item[{\rm (4)}] $X$ is a $\Delta$-space. \end{enumerate} \end{theorem} \begin{proof} Observe that every collection of pairwise disjoint subsets of $X$, say $\{X_{\gamma}: \gamma \in \Gamma\}$, can be extended to a partition by adding a single set $X_{\ast} = X \setminus \bigcup \{X_{\gamma}: \gamma \in \Gamma\}$. If the obtained partition admits a point-finite open expansion in $X$, then removing one open set we get a point-finite open expansion of the original disjoint collection. This evidently shows the equivalence (2) $\Leftrightarrow$ (3). Assume now that (3) holds. Let $\{D_n: n \in \omega\}$ be a decreasing sequence of subsets of $X$ with empty intersection. Define $X_n = D_n \setminus D_{n+1}$ for each $n \in \omega$. By assumption, the disjoint collection $\{X_n: n \in \omega\}$ admits a point-finite open expansion $\{U_n: n \in \omega\}$ in $X$. Then $\{V_n = \bigcup \{U_i: i \geq n\}: n \in \omega\}$ is a decreasing open expansion of $\{D_n: n \in \omega\}$ in $X$; its intersection is empty, since by point-finiteness every point belongs to only finitely many of the sets $U_i$. This proves the implication (3) $\Rightarrow$ (4). Next we show (4) $\Rightarrow$ (2). Let $\{X_n: n \in \omega\}$ be any countable partition of $X$. Define $D_0 = X$ and $D_n = X \setminus \bigcup\{X_i: i < n\}$. Then $X_n \subset D_n$ for every $n$; the sequence $\{D_n: n \in \omega\}$ is decreasing and its intersection is empty. 
Assuming (4), we find an open decreasing expansion $\{U_n: n \in \omega\}$ of $\{D_n: n \in \omega\}$ in $X$ such that $\bigcap\{U_n: n \in \omega\} = \emptyset$. For every $x \in X$ there is $n$ such that $x \notin U_m$ for each $m > n$; this means that $\{U_n: n \in \omega\}$ is a point-finite expansion of $\{X_n: n \in \omega\}$ in $X$. This finishes the proof of the chain (3) $\Rightarrow$ (4) $\Rightarrow$ (2) $\Leftrightarrow$ (3). Now we prove the implication (1) $\Rightarrow$ (2). Let $\{X_n: n \in \omega\}$ be any countable partition of $X$. Fix any function $f \in \R^X$ such that $f(x) > n$ for each $n \in \omega$ and every $x \in X_n$. By assumption, there is a bounded subset $B$ of $C_p(X)$ such that $f \in cl_{\R^X}(B)$. Hence, for every $n \in \omega$ and every point $x \in X_n$, there exists $f_x \in B$ such that $f_x(x) > n$. Since $f_x$ is a continuous function, there is an open neighbourhood $U_x \subset X$ of $x$ such that $f_x(y) > n$ for every $y \in U_x$. We define an open set $U_n \subset X$ by $U_n = \bigcup\{U_x: x \in X_n\}$. Evidently, $X_n \subseteq U_n$ for each $n \in \omega$. If the open expansion $\{U_n : n \in \omega\}$ were not point-finite, then there would exist a point $y \in X$ and infinitely many numbers $n$ with $y \in U_{x_n}$ for some $x_n \in X_n$. This would mean that $\sup\{g(y): g \in B \} = \infty$, contradicting the boundedness of $B$. It remains to prove (2) $\Rightarrow$ (1). By Theorem \ref{Theor:characterization}, we need to show that for every mapping $f \in \R^X$ there is a bounded set $B \subset C_p(X)$ such that $f \in cl_{\R^X}(B)$. If there exists a constant $r > 0$ such that $\sup\{|f(x)|: x \in X\} < r$, then we take $B=\{h \in C(X): \sup\{|h(x)|: x \in X\} < r \}$; it is easy to see that $B$ is as required. Assume now that $f \in \R^X$ is unbounded. 
Put $Y_0=\emptyset$ and $Y_n =\{x \in X: n-1 \leq |f(x)| < n\}$ for each non-zero $n \in \omega$. Define $\varphi: X \rightarrow \omega$ by the rule: $\varphi(x) = n$ for every $x\in Y_n$. Then $|f| < \varphi$. Put $X_{n}=\varphi^{-1}(n)$ for each $n \in \omega$. Note that some sets $X_{n}$ might happen to be empty, but the collection $\{X_{n}: n \in \omega\}$ is a partition of $X$ with countably many nonempty members. By our assumption, there exists a point-finite open expansion $\{U_{n}:n\in\omega\}$ of the partition $\{X_{n}:n\in\omega\}$. Define $F:X\rightarrow\omega$ by $F(x)=\max\{n:x\in U_{n}\}$. Obviously, $|f| < F$. Finally, we define $B = \{h\in C_{p}(X): |h|\leq F\}$. Then $f \in cl_{\R^X}(B)$, because for every finite subset $K \subset X$ there is a function $h \in B$ such that $f\restriction_{K} = h\restriction_{K}$. Indeed, given a finite subset $K \subset X$, let $\{V_{x}:x\in K\}$ be a family of pairwise disjoint open sets such that $x\in V_{x}\subset U_{\varphi(x)}$ for every $x\in K$. For each $x\in K$, fix a continuous function $h_{x}:X\rightarrow [-\varphi(x),\varphi(x)]$ such that $h_{x}(x)= f(x)$ and $h_{x}$ is equal to the constant value $0$ on the closed set $X \setminus V_{x}$. One can verify that $h=\sum_{x\in K}h_{x} \in B$ is as required. \end{proof} Below we present a straightforward application of Theorem \ref{Theor:description}. \begin{corollary}{\rm (\cite{FKLS})}\label{cor:subspace} Let $Z$ be any subspace of $X$. If $X$ belongs to the class $\Delta$, then $Z$ also belongs to the class $\Delta$. \end{corollary} \begin{proof} If $\{Z_{\gamma}: \gamma \in \Gamma\}$ is any collection of pairwise disjoint subsets of $Z$ and $\{U_{\gamma}: \gamma \in \Gamma\}$ is a point-finite open expansion of it in $X$, then obviously $\{U_{\gamma} \cap Z: \gamma \in \Gamma\}$ is a point-finite expansion consisting of sets relatively open in $Z$. It remains to apply Theorem \ref{Theor:description}. 
\end{proof} The last result can be reversed, assuming that $X \setminus Z$ is finite. \begin{proposition}\label{prop:subspace} Let $Z$ be a subspace of $X$ such that $Y = X \setminus Z$ is finite. If $Z$ belongs to the class $\Delta$, then $X$ belongs to $\Delta$ as well. \end{proposition} \begin{proof} Let $\{X_n: n \in \omega\}$ be any countable collection of pairwise disjoint subsets of $X$. Denote by $F$ the set of those $n \in \omega$ such that $X_n \cap Y \neq \emptyset$. Only finitely many of the pairwise disjoint sets $X_n$ can intersect the finite set $Y$, hence $F \subset \omega$ is finite. For $n \in F$ we simply put $U_n = X$. Consider the subcollection $\{X_n: n \in \omega \setminus F\}$. It is a countable collection of pairwise disjoint subsets of $Z$. Since $Z \in \dcal$, by Theorem \ref{Theor:description}, there is a point-finite open expansion $\{U_n: n \in \omega \setminus F\}$ in $Z$. Observe that $Z$ is open in $X$, therefore all these sets $U_n$ remain open in $X$. Bringing together the sets $U_n$ of both sorts we obtain a point-finite open expansion $\{U_n: n \in \omega \}$ in $X$. Finally, $X \in \dcal$, by Theorem \ref{Theor:description}. \end{proof} \begin{remark} \label{Rem:scant} The following applicable concept has been re-introduced in \cite{FKLS}. A family $\left\{\mathcal{N}_{x}:x\in X\right\}$ of subsets of a Tychonoff space $X$ is called a \emph{scant cover} for $X$ if each $\mathcal{N}_{x}$ is an open neighbourhood of $x$ and for each $u\in X$ the set $X_{u} =\left\{ x\in X:u\in \mathcal{N}_{x}\right\}$ is finite. \footnote{ The referee kindly informed the authors that this notion is also known in the literature under the name \emph{point-finite neighbourhood assignment}.} Our Theorem \ref{Theor:description} generalizes one of the results obtained in \cite{FKLS}, stating that if $X$ admits a scant cover $\left\{ \mathcal{N}_{x}:x\in X\right\}$ then $C_{p}\left(X\right)$ is distinguished. 
Indeed, let $\{X_{\gamma}: \gamma \in \Gamma\}$ be any collection of pairwise disjoint subsets of $X$. Define $U_{\gamma} = \bigcup \{\mathcal{N}_{x}: x\in X_{\gamma}\}$. It is easily seen that $\{U_{\gamma}: \gamma \in \Gamma\}$ is a point-finite open expansion in $X$, by the definition of a scant cover. Applying Theorem \ref{Theor:description}, we conclude that $C_{p}\left( X\right)$ is distinguished. \end{remark} \section{Applications to compact spaces $X \in \dcal$}\label{compact} First we recall a few definitions and facts (mostly well known) which will be used in the sequel. A space $X$ is said to be {\it scattered} if every nonempty subset $A$ of $X$ has a point isolated in $A$. Denote by $A^{(1)}$ the set of all non-isolated (in $A$) points of $A \subset X$. For ordinal numbers $\alpha$, the $\alpha$-th derivative of a topological space $X$ is defined by transfinite induction as follows: $X^{(0)} = X$;\, $X^{(\alpha+1)} = (X^{(\alpha)})^{(1)}$;\, $X^{(\gamma)} = \bigcap_{\alpha<\gamma} X^{(\alpha)}$ for limit ordinals $\gamma$. For a scattered space $X$, the smallest ordinal $\alpha$ such that $X^{(\alpha)}=\emptyset$ is called the \emph{scattered height} of $X$ and is denoted by $ht(X)$. For instance, $X$ is discrete if and only if $ht(X)=1$. The following classical theorem is due to A. Pe\l czy\'nski and Z. Semadeni. \begin{theorem}{\rm (\cite[Theorem~8.5.4]{Semadeni})}\label{Theor:scattered} A compact space $X$ is scattered if and only if there is no continuous mapping of $X$ onto the segment $[0,1]$. \end{theorem} A continuous surjection $\pi:X \rightarrow Y$ is called {\it irreducible} (see \cite[Definition~7.1.11]{Semadeni}) if for every closed subset $F$ of $X$ the condition $\pi(F)=Y$ implies $F=X$. \begin{proposition}{\rm (\cite[Proposition~7.1.13]{Semadeni})}\label{Prop:restriction} Let $X$ be a compact space and let $\pi:X \rightarrow Y$ be a continuous surjection. 
Then there exists a closed subset $F$ of $X$ such that $\pi(F)=Y$ and the restriction ${\pi\res_{F}:F \rightarrow Y}$ is irreducible. \end{proposition} \begin{proposition}{\rm (\cite[Proposition~25.2.1]{Semadeni})}\label{Prop:dense} Let $X$ be a compact space and let $\pi:X \rightarrow Y$ be a continuous surjection. Then $\pi$ is irreducible if and only if whenever $E \subset X$ and $\pi(E)$ is dense in $Y$, then $E$ is dense in $X$. \end{proposition} Recall that a Tychonoff space $X$ is \emph{\v{C}ech-complete} if $X$ is a $G_{\delta}$-set in some (equivalently, any) compactification of $X$ (see \cite[3.9.1]{Engelking}). It is well known that every locally compact space and every completely metrizable space is \v{C}ech-complete. The next statement resolves an open question posed in \cite{FKLS}. \begin{theorem}\label{Theor:Main_result} Every \v{C}ech-complete (in particular, compact) $\dcal$-space is scattered. \end{theorem} \begin{proof} \emph{Step 1: $X$ is compact}. On the contrary, assume that $X$ is not scattered. First, by Theorem \ref{Theor:scattered}, there is a continuous mapping $\pi$ from $X$ onto the segment $[0,1]$. Second, by Proposition \ref{Prop:restriction}, there exists a closed subset $F$ of $X$ such that $\pi(F)=[0,1]$ and the restriction ${\pi\res_{F}:F \rightarrow [0,1]}$ is irreducible. Since $X \in \dcal$, the compact space $F$ also belongs to $\dcal$, by Corollary \ref{cor:subspace}. Without loss of generality we may assume that $F$ is $X$ itself and $\pi: X \rightarrow [0,1]$ is irreducible. Let $\{X_{n}:n\in\omega\}$ be a partition of $[0,1]$ into dense sets. Put $Y_{n}=\bigcup_{k\geq n}X_{k}$, and $Z_{n}=\pi^{-1}(Y_{n})$ for all $n\in\omega$. Then all sets $Z_{n}$ are dense in $X$ by Proposition \ref{Prop:dense} and the intersection $\bigcap_{n\in\omega} Z_{n}$ is empty. Every compact space $X$ is a Baire space, i.e. 
the Baire category theorem holds in $X$. Hence, if $\{U_{n}:n\in\omega\}$ is any open expansion of $\{Z_{n}:n\in\omega\}$, then each $U_{n}$ is open and dense, so the intersection $\bigcap_{n\in\omega}U_{n}$ is dense in $X$. In view of our Theorem \ref{Theor:description}, this conclusion contradicts the assumption $X \in \dcal$, and the proof follows. \emph{Step 2: $X$ is any \v{C}ech-complete space}. By the first step we deduce that every compact subset of $X$ is scattered. But a \v{C}ech-complete space $X$ is scattered if and only if every compact subset of $X$ is scattered. A detailed proof of this probably folklore statement can be found in \cite{STTWW}. \end{proof} \begin{proposition}\label{prop:count} If $X$ is a first-countable compact space, then $X\in\dcal$ if and only if $X$ is countable. \end{proposition} \begin{proof} If $X \in \dcal$, then $X$ is scattered, by Theorem \ref{Theor:Main_result}. By the classical theorem of S. Mazurkiewicz and W. Sierpi\'nski \cite[Theorem~8.6.10]{Semadeni}, a first-countable compact space is scattered if and only if it is countable. This proves the ``only if'' part. The converse is known \cite{FKLS} and follows from the fact that any countable space $X = \{x_n: n \in \omega\}$ admits a scant cover. Indeed, define $X_n = \{x_i : i \geq n\}$. Then the family $\{X_n:n \in \omega\}$ is a scant cover of $X$. Now it suffices to mention Remark \ref{Rem:scant}. \end{proof} \begin{remark}\label{Knaster} Theorem \ref{Theor:Main_result} also extends a well-known result of B. Knaster and K. Urbanik stating that every countable \v{C}ech-complete space is scattered \cite{KU}. It is easy to see that a countable Baire space contains a dense subset of isolated points, but in general does not have to be scattered. We do not know whether every Baire $\dcal$-space must have isolated points. \end{remark} Recall that an Eberlein compact is a compact space homeomorphic to a subset of a Banach space with the weak topology. 
A compact space is said to be a Corson compact space if it can be embedded in a $\Sigma$-product of real lines. Every Eberlein compact is Corson, but not vice versa. However, every scattered Corson compact space is a scattered Eberlein compact space \cite{Alster}. \begin{theorem}{\rm (\cite{FKLS})}\label{Theor:Corson} A Corson compact space $X$ belongs to the class $\dcal$ if and only if $X$ is a scattered Eberlein compact space. \end{theorem} Bearing in mind Theorem \ref{Theor:Main_result}, to show Theorem \ref{Theor:Corson} it suffices to use the fact that every scattered Eberlein compact space admits a scant cover (the latter follows from the proof of \cite[Lemma 1.1]{BM}) and then apply Remark \ref{Rem:scant}. Motivated by the previous results, one can ask whether there exist scattered compact spaces $X\in\Delta$ which are not Eberlein compact. The next question is also crucial: does there exist a compact scattered space $X \notin \dcal$? Below we answer both questions positively. We need the following somewhat technical statement. \begin{theorem} \label{Theor:union} Let $Z = C_0 \cup C_1$ be a Tychonoff space such that \begin{enumerate} \item[{\rm (1)}] $C_0 \cap C_1 = \emptyset$. \item[{\rm (2)}] $C_0$ is an open $F_{\sigma}$ subset of $Z$. \item[{\rm (3)}] both $C_0$ and $C_1$ belong to the class $\dcal$. \end{enumerate} Then $Z$ also belongs to the class $\dcal$. \end{theorem} \begin{proof} By assumption, $C_0 = \bigcup\{F_n: n \in \omega\}$, where each $F_n$ is closed in $Z$. Let $\{X_n: n \in \omega\}$ be any countable collection of pairwise disjoint subsets of $Z$. Our aim is to define open sets $U_n \supseteq X_n$, $n \in \omega$, in such a way that the collection $\{U_n: n \in \omega\}$ is point-finite. We decompose the sets $X_n = X_n^{0} \cup X_n^{1}$, where $X_n^{0} = X_n \cap C_0$ and $X_n^{1} = X_n \cap C_1$. 
By Theorem \ref{Theor:description}, the collection $\{X_n^{0}: n \in \omega\}$ expands to a point-finite open collection $\{U_n^{0}: n \in \omega\}$ in $C_0$. The set $C_0$ is open in $Z$, therefore the sets $U_n^{0}$ are open in $Z$ as well. Now we consider the disjoint collection $\{X_n^{1}: n \in \omega\}$ in $C_1$. By assumption, $C_1 \in \dcal$, therefore applying Theorem \ref{Theor:description} once more, we find a point-finite expansion $\{V_n^{1}: n \in \omega\}$ in $C_1$ consisting of sets which are open in $C_1$. Every set $V_n^{1}$ is the trace on $C_1$ of some set $W_n^{1}$ open in $Z$, i.e. $V_n^{1}= W_n^{1} \cap C_1$. We refine the sets $W_n^{1}$ by the formula $U_n^{1} = W_n^{1} \setminus \bigcup\{F_i: i \leq n\}$. Since all sets $F_i$ are closed in $Z$, the sets $U_n^{1}$ remain open in $Z$. Since all sets $F_i$ are disjoint from $C_1$, the collection $\{U_n^{1}: n \in \omega\}$ remains an expansion of $\{X_n^{1}: n \in \omega\}$. Furthermore, the collection $\{U_n^{1}: n \in \omega\}$ is point-finite, because $\{V_n^{1}: n \in \omega\}$ is point-finite, and every point $z \in C_0$ belongs to some $F_n$, hence $z \notin U_m^1$ for every $m \geq n$. Finally, we define $U_n = U_n^{0} \cup U_n^{1}$. The collection $\{U_n: n \in \omega\}$ is a point-finite open expansion of $\{X_n: n \in \omega\}$, and the proof is complete. \end{proof} This yields the following \begin{corollary} \label{cor:height_2} Let $Z$ be any separable scattered Tychonoff space whose scattered height $ht(Z)$ is equal to 2. Then $Z \in \dcal$. \end{corollary} \begin{proof} The structure of $Z$ is as follows: $Z = C_0 \cup C_1$, where $C_0$ is the set of all isolated points of $Z$ and $C_1$ consists of all accumulation points. The set $C_0$ is dense in $Z$, and it is countable, since every dense subset of the separable space $Z$ contains all its isolated points. Moreover, the space $C_1$ with the topology induced from $Z$ is discrete. All conditions of Theorem \ref{Theor:union} are satisfied, and the result follows. 
\end{proof} Our first example will be the one-point compactification of an Isbell--Mr\'owka space $\Psi(\acal)$. We recall the construction and basic properties of $\Psi(\acal)$. Let $\acal$ be an almost disjoint family of subsets of the set of natural numbers $\N$ and let $\Psi(\acal)$ be the set $\N \cup \acal$ equipped with the topology defined as follows. For each $n \in \N$, the singleton $\{n\}$ is open, and for each $A \in \acal$, a base of neighbourhoods of $A$ is the collection of all sets of the form $\{A\} \cup B$, where $B\subset A$ and $|A \setminus B| < \omega$. The space $\Psi(\acal)$ is then a first-countable separable locally compact Tychonoff space. If $\acal$ is a maximal almost disjoint (MAD) family, then the corresponding Isbell--Mr\'owka space $\Psi(\acal)$ is, in addition, pseudocompact. (Readers are advised to consult \cite[Chapter 8]{HT-MT}, which surveys various topological properties of these spaces.) \begin{theorem} \label{Theor:height_3} There exists a separable scattered compact space $X$ with the following properties: \begin{enumerate} \item[{\rm (a)}] The scattered height of $X$ is equal to 3. \item[{\rm (b)}] $X \in \dcal$. \item[{\rm (c)}] $X$ is not an Eberlein compact space. \end{enumerate} \end{theorem} \begin{proof} Let $\acal$ be any uncountable almost disjoint (for instance, MAD) family of subsets of $\N$ and let $Z$ be the corresponding first-countable separable locally compact Isbell--Mr\'owka space $\Psi(\acal)$. It is easy to see that $Z = \Psi(\acal)$ satisfies the assumptions of Corollary \ref{cor:height_2}. Hence, $Z \in \dcal$. Now, denote by $X$ the one-point compactification of the separable locally compact space $Z$. Then the scattered height of $X$ is equal to 3. Note that $X \in \dcal$ by Proposition \ref{prop:subspace}. Moreover, $X$ is not an Eberlein compact space, since every separable Eberlein compact space is metrizable, while $\Psi(\acal)$ is metrizable if and only if $\acal$ is countable. 
\end{proof} Now we show that there exist scattered compact spaces which are not in the class $\dcal$. We will use the classical Pressing Down Lemma. Let $[0,\omega_1)$ be the set of all countable ordinals equipped with the order topology. For simplicity, we identify $[0,\omega_1)$ with $\omega_1$. A subset $S$ of $\omega_1$ is called {\it stationary} if $S$ has nonempty intersection with every closed and unbounded set in $\omega_1$. A mapping $\varphi: S \rightarrow \omega_1$ is called {\it regressive} if $\varphi(\alpha) < \alpha$ for each $\alpha \in S$. The proof of the following fundamental statement can be found for instance in \cite{Kunen}. \begin{theorem}\textbf{Pressing Down Lemma.}\label{Theor:Press} Let $\varphi: S \rightarrow \omega_1$ be a regressive mapping, where $S$ is a stationary subset of $\omega_1$. Then for some $\gamma < \omega_1$, the set $\varphi^{-1}\{\gamma\}$ is a stationary subset of $\omega_1$. \end{theorem} It is known that there are plenty of stationary subsets of $\omega_1$. In particular, every stationary set can be partitioned into countably many pairwise disjoint stationary sets \cite{Kunen}. Note that $\omega_1$ is a scattered locally compact and first-countable space. The next statement resolves an open question posed in \cite{FKLS}. \begin{theorem}\label{Theor:notD_1} The compact scattered space $[0,\omega_1]$ is not in the class $\dcal$. \end{theorem} \begin{proof} It suffices to show that $\omega_1$ does not belong to the class $\dcal$. Assume, on the contrary, that $\omega_1 \in \dcal$. Denote by $L$ the set of all countable limit ordinals. Evidently, $L$ is a closed unbounded set in $\omega_1$. Take any representation of $L$ as the union of countably many pairwise disjoint stationary sets $\{S_n: n\in \omega\}$. By Theorem \ref{Theor:description}, there exists a point-finite open expansion $\{U_n: n \in \omega\}$ in $\omega_1$. 
For every $n \in \omega$ and every $\alpha \in S_n$ there is an ordinal $\beta(\alpha) < \alpha$ such that $[\beta(\alpha), \alpha] \subset U_n$ (such an ordinal exists because $\alpha$ is a limit ordinal belonging to the open set $U_n$). Thus, for every $n \in \omega$ we can define a regressive mapping $\varphi_n: S_n \rightarrow \omega_1$ by the formula $\varphi_n (\alpha) = \beta(\alpha)$. Since $S_n$ is a stationary set for every $n$, we can apply to $\varphi_n$ the Pressing Down Lemma. Hence, for each $n$ there are a countable ordinal $\gamma_n$ and an uncountable subset $T_n \subset S_n$ with the following property: $[\gamma_n, \alpha] \subset U_n$ for every $\alpha \in T_n$. Denote $\gamma =\sup\{\gamma_n: n \in \omega\} \in \omega_1$. Because all $T_n$ are unbounded, for every natural $n$ we have an ordinal $\alpha_n \in T_n$ such that $\gamma < \alpha_n$ and $[\gamma_n, \alpha_n] \subset U_n$. Since $\gamma_n \leq \gamma < \alpha_n$, this implies that $\gamma \in U_n$ for every $n \in \omega$. However, the collection $\{U_n: n \in \omega\}$ is point-finite. The obtained contradiction finishes the proof. \end{proof} The function space $C_{k}(X)$ is called \emph{Asplund} if every separable vector subspace of $C_{k}(X)$ isomorphic to a Banach space has separable dual. \begin{proposition}\label{Asplund} If a Tychonoff space $X$ belongs to the class $\dcal$, then the space $C_{k}(X)$ is Asplund. The converse conclusion fails in general. \end{proposition} \begin{proof} Let $\mathcal{K}(X)$ be the family of all compact subsets of $X$. By the assumption and Corollary \ref{cor:subspace}, each $K\in\mathcal{K}(X)$ belongs to the class $\dcal$. Clearly, $C_{k}(X)$ is isomorphic to a (closed) subspace of the product $\Pi=\prod_{K\in\mathcal{K}(X)}C_{k}(K)$ of Banach spaces $C_{k}(K)$. Assume that $E$ is a separable vector subspace of $C_{k}(X)$ isomorphic to a Banach space. Observe that $E$ is isomorphic to a subspace of the finite product $\prod_{j\in F}C_{k}(K_{j})$ for $K_{j}\in \mathcal{K}(X)$ and $j\in F$. Indeed, let $B$ be the unit (bounded) ball of the normed space $E$. 
Then there exists a finite set $F$ such that $\bigcap_{j\in F}\pi^{-1}_{j}(U_{j})\cap E\subset B$, where $U_{j}$ are balls in the spaces $C_{k}(K_{j})$, $j\in F$, and $\pi_{j}$ are the natural projections from $\Pi$ onto $C_{k}(K_{j})$. Let $\pi_{F}$ be the (continuous) projection from $\Pi$ onto $\prod_{j\in F}C_{k}(K_{j})$. Then $\pi_{F}\restriction_E$ is an injective continuous and open map from $E$ onto $(\pi_{F}\restriction_E)(E)\subset\prod_{j\in F}C_{k}(K_{j})$. The injectivity of $\pi_{F}\restriction_E$ follows from the fact that $B$ is a bounded neighbourhood of zero in $E$. It is easy to see that the image $(\pi_{F}\restriction_E)(B)$ is an open neighbourhood of zero in $(\pi_{F}\restriction_E)(E)$. On the other hand, $\prod_{j\in F}C_{k}(K_{j})$ is isomorphic to the space $C_{k}(\bigoplus_{j\in F}K_{j})$, and the compact space $\bigoplus_{j\in F}K_{j}$ is scattered. By the classical result \cite[Theorem 12.29]{fabian}, $E$ must have separable dual $E^{*}$. Hence, $C_{k}(X)$ is Asplund. The converse fails, as Theorem \ref{Theor:notD_1} shows for $X=[0,\omega_1]$. \end{proof} Since every infinite compact scattered space $X$ contains a nontrivial converging sequence, for such $X$ the Banach space $C(X)$ is not a Grothendieck space (see \cite{dales}). \begin{corollary} If $X$ is an infinite compact space and $X \in \dcal$, then the Banach space $C(X)$ is not a Grothendieck space. The converse fails, as the example $X=[0,\omega_{1}]$ shows. \end{corollary} For non-scattered spaces $X$, Theorem \ref{Theor:Main_result} implies immediately the following \begin{corollary}\label{cor:beta} If $X$ is a non-scattered space, then the Stone--\v{C}ech compactification $\beta X$ is not in the class $\dcal$. \end{corollary} \begin{proposition}\label{prop:remainder} Let $X=\beta Z \setminus Z$, where $Z$ is any infinite discrete space. Then $X$ is not in the class $\dcal$. \end{proposition} \begin{proof} For any infinite discrete space $Z$, the remainder $\beta Z \setminus Z$ is a compact space without isolated points; hence it is not scattered, and Theorem \ref{Theor:Main_result} applies. 
\end{proof} It is known that $X= [0, \omega_1]$ is the Stone--\v{C}ech compactification of $[0,\omega_1)$. We showed that $X \notin \dcal$. Also, $\beta Z \notin \dcal$ for any infinite discrete space $Z$. Every scattered Eberlein compact space belongs to the class $\dcal$ by Theorem \ref{Theor:Corson}; however, no Eberlein compact $X$ can be the Stone--\v{C}ech compactification $\beta Z$ of any proper subset $Z$ of $X$, by the Preiss--Simon theorem (see \cite[Theorem IV.5.8]{Arch}). All these facts provide a motivation for the following result. \begin{example}\label{example:beta} There exists an Isbell--Mr\'owka space $Z$ which is {\it almost compact} in the sense that the one-point compactification of $Z$ coincides with $\beta Z$ (see \cite[Theorem 8.6.1]{HT-MT}). Define $X = \beta Z$. Then $X \in \dcal$, by Theorem \ref{Theor:height_3}. \end{example} \section{Metrizable spaces $X \in \dcal$}\label{metr} In this section we try to describe constructively the structure of nontrivial metrizable spaces $X \in \dcal$. Note first that every scattered metrizable $X$ is in the class $\dcal$, since every such space homeomorphically embeds into a scattered Eberlein compact \cite{BL}, and then Theorem \ref{Theor:Corson} and Corollary \ref{cor:subspace} apply. We extend this result as follows. A topological space $X$ is said to be \emph{$\sigma$-scattered} if $X$ can be represented as a countable union of scattered subspaces, and $X$ is called \emph{strongly $\sigma$-discrete} if it is a union of countably many of its closed discrete subspaces. For any topological space, strong $\sigma$-discreteness implies $\sigma$-scatteredness. For metrizable $X$, by the classical result of A. H. Stone \cite{Stone}, these two properties are equivalent. \begin{proposition}\label{prop:sigma_scat} Any $\sigma$-scattered metrizable space belongs to the class $\Delta$. \end{proposition} \begin{proof} In view of the aforementioned equivalence, every subset of $X$ is $F_{\sigma}$ (indeed, every subset of a closed discrete set is closed). 
If every subset of $X$ is $F_{\sigma}$, then $X \in \Delta$. This fact is apparently well known (see also a comment after Claim \ref{Claim1}). For the sake of completeness we include a direct argument. We show that $X$ satisfies the condition (2) of Theorem \ref{Theor:description}. Let $\{X_n: n \in \omega \}$ be any countable disjoint partition of $X$. Denote $X_n = \bigcup\{F_{n,m}: m \in \omega\}$, where each $F_{n,m}$ is closed in $X$. Define open sets $U_n$ as follows: $U_0 = X$ and $U_n = X \setminus \bigcup\{F_{k,m} : k < n, m < n\}$ for $n \geq 1$. Then $\{U_n: n \in \omega \}$ is a point-finite open expansion of $\{X_n: n \in \omega \}$ in $X$. Indeed, $X_n \subset U_n$ because the sets $X_k$ are pairwise disjoint, and the expansion is point-finite because every $x \in X$ lies in some $F_{k,m}$, so $x \notin U_n$ for all $n > \max\{k,m\}$. \end{proof} A metrizable space $A$ is called \emph{absolutely analytic} if $A$ is homeomorphic to a Souslin subspace of a complete metric space $X$ (of an arbitrary weight), i.e. $A$ is expressible as $A = \bigcup_{\sigma \in \N^\N} \bigcap_{n \in \N} A_{\sigma|n}$, where each $A_{\sigma|n}$ is a closed subset of $X$. It is known that every absolutely analytic metrizable space $X$ (in particular, every Borel subspace of a complete metric space) either contains a homeomorphic copy of the Cantor set or is strongly $\sigma$-discrete. Therefore, for an absolutely analytic metrizable space $X$ the converse is true: $X \in \dcal$ implies that $X$ is strongly $\sigma$-discrete \cite{FKLS}. However, the last structural result cannot be proved in general for all (separable) metrizable spaces without extra set-theoretic assumptions. Let us recall several definitions of special subsets of the real line $\R$ (see \cite{Miller_survey}, \cite{Reed}). \begin{enumerate} \item[\rm (a)] A $Q$-set $X$ is a subset of $\R$ such that each subset of $X$ is $F_{\sigma}$, or, equivalently, each subset of $X$ is $G_{\delta}$ in $X$. \item[\rm (b)] A $\lambda$-set $X$ is a subset of $\R$ such that each countable $A\subset X$ is $G_{\delta}$ in $X$.
\item[\rm (c)] A $\Delta$-set $X$ is a subset of $\R$ such that for every decreasing sequence $\{D_n: n \in \omega\}$ of subsets of $X$ with empty intersection there is a decreasing expansion $\{V_n: n \in \omega\}$ consisting of open subsets of $X$ with empty intersection. \end{enumerate} \begin{Claim}\label{Claim1} The existence of an uncountable separable metrizable $\Delta$-space is equivalent to the existence of an uncountable $\Delta$-set. \end{Claim} \begin{proof} Note that every separable metrizable space homeomorphically embeds into the Polish space $\R^{\omega}$, and the latter space is a one-to-one continuous image of the set of irrationals $\P$. Therefore, if $M$ is an uncountable separable metrizable space, then there exist an uncountable set $X \subset \R$ and a one-to-one continuous mapping from $X$ onto $M$. It is easy to see that $X$ is a $\Delta$-set provided $M$ is a $\Delta$-space. \end{proof} Note that in the original definition of a $\Delta$-set, G. M. Reed used $G_{\delta}$-sets instead of open sets and E. van Douwen observed that these two versions are equivalent \cite{Reed}. From the original definition it is obvious that each $Q$-set must be a $\Delta$-set. The fact that every $\Delta$-set is a $\lambda$-set is known as well. K. Kuratowski showed that in ZFC there exist uncountable $\lambda$-sets. The existence of an uncountable $Q$-set is one of the fundamental set-theoretical problems considered by many authors. F. Hausdorff showed that the cardinality of an uncountable $Q$-set $X$ has to be strictly smaller than the continuum $\cont= 2^{\aleph_0}$, so in models of ZFC plus the Continuum Hypothesis (CH) there are no uncountable $Q$-sets. Let us outline several of the most relevant known facts. (1) Martin's Axiom plus the negation of the Continuum Hypothesis (MA $+ \lnot$CH) implies that every subset $X \subset \R$ of cardinality less than $\cont$ is a $Q$-set (see \cite{FM}).
(2) It is consistent that there is a $Q$-set $X$ such that its square $X^2$ is not a $Q$-set \cite{F_squares}. (3) The existence of an uncountable $Q$-set is equivalent to the existence of an uncountable strong $Q$-set, i.e. a $Q$-set all finite powers of which are $Q$-sets \cite{P}. (4) No $\Delta$-set $X$ can have cardinality $\cont$ \cite{P1}. Hence, under MA, every subset of $\R$ that is a $\Delta$-set is also a $Q$-set. Recently we proved the following claim: If $X$ has a countable network and $|X|=\cont$, then $C_p(X)$ is not distinguished \cite{FKLS}. In view of our Theorem \ref{Theor:description}, this fact means that no $\Delta$-space $X$ with a countable network can have cardinality $\cont$.\footnote{The referee kindly informed the authors that the last result can be derived easily from the actual argument of \cite{P1}.} (5) It is consistent that there exists a $\Delta$-set $X$ that is not a $Q$-set \cite{Knight}. Of course, there are plenty of nonmetrizable $\Delta$-spaces with non-$G_{\delta}$ subsets, in ZFC. (6) An uncountable $\Delta$-set exists if and only if there exists a separable countably paracompact nonnormal Moore space (see \cite{FRW} and \cite{P1}). Summarizing, the following conclusion is an immediate consequence of our Theorem \ref{Theor:description} and the known facts about $\Delta$-sets listed above. \begin{corollary} \label{cor:Moore}\mbox{} \begin{enumerate} \item[\rm (1)] The existence of an uncountable separable metrizable space $X$ such that $C_p(X)$ is distinguished is independent of ZFC. \item[\rm (2)] There exists an uncountable separable metrizable space $X$ such that $C_p(X)$ is distinguished if and only if there exists a separable countably paracompact nonnormal Moore space.
\end{enumerate} \end{corollary} \section{Basic operations in $\dcal$ and open problems}\label{problems} In this section we consider the question of whether the class $\Delta$ is invariant under the following basic topological operations: subspaces, continuous images, quotient continuous images, finite/countable unions, and finite products. \emph{1. Subspaces}. Trivial because of Corollary \ref{cor:subspace}. \emph{2. (Quotient) continuous images}. Evidently, every topological space is a continuous image of a discrete one. The following assertion is a consequence of a known fact about MAD families (see \cite[Chapter 8]{HT-MT}). \begin{proposition}\label{prop:locomp} There exists a first-countable separable pseudocompact locally compact Isbell--Mr\'owka space $\Psi(\acal)$ which admits a continuous surjection onto the segment $[0,1]$. \end{proposition} Thus, the class $\Delta$ is not invariant under continuous images, even for separable locally compact spaces. However, one can show that every uncountable quotient continuous image of any Isbell--Mr\'owka space $\Psi(\acal)$ satisfies the conditions of Corollary \ref{cor:height_2}; therefore, it is a $\Delta$-space. Note also that the class of scattered Eberlein compact spaces is closed under continuous images. We were unable to resolve the following major open problem. \begin{problem}\label{Problem_1} Let $X$ be any compact $\Delta$-space and $Y$ be a continuous image of $X$. Is $Y$ a $\Delta$-space? \end{problem} An even more general question is open. \begin{problem}\label{Problem_2} Let $X$ be any $\Delta$-space and $Y$ be a quotient continuous image of $X$. Is $Y$ a $\Delta$-space? \end{problem} Towards a solution of these problems we obtained several partial positive results. \begin{proposition}\label{prop:quotientmap} Let $X$ be any $\Delta$-space and $\varphi: X \to Y$ be a quotient continuous surjection with only finitely many nontrivial fibers. Then $Y$ is also a $\Delta$-space.
\end{proposition} \begin{proof} By assumption, there exists a closed subset $K \subset X$ such that $\varphi(K)$ is finite and $\varphi\restriction_{X\setminus K}: X\setminus K \to Y\setminus \varphi(K)$ is a one-to-one mapping. Both sets $X\setminus K$ and $Y\setminus \varphi(K)$ are open in $X$ and $Y$, respectively. Since $\varphi$ is a quotient continuous mapping, it is easy to see that $\varphi\restriction_{X\setminus K}$ is a homeomorphism. $X\setminus K$ is a $\Delta$-space, hence $Y\setminus \varphi(K)$ is also a $\Delta$-space. Finally, $Y$ is a $\Delta$-space, by Proposition \ref{prop:subspace}. \end{proof} \begin{proposition}\label{prop:closedmap} Let $X$ be any $\Delta$-space and $\varphi: X \to Y$ be a closed continuous surjection with finite fibers. Then $Y$ is also a $\Delta$-space. \end{proposition} \begin{proof} Let $\{Y_n: n\in\omega\}$ be a partition of $Y$. By assumption, the partition $\{\varphi^{-1}(Y_n): n\in\omega\}$ admits a point-finite open expansion $\{U_n: n\in\omega\}$ in $X$. Clearly, $\varphi(X\setminus U_n)$ are closed sets in $Y$. Define $V_n = Y\setminus \varphi(X\setminus U_n)$ for each $n\in\omega$. We have that $\{V_n: n\in\omega\}$ is an open expansion of $\{Y_n: n\in\omega\}$ in $Y$. It remains to verify that the family $\{V_n: n\in\omega\}$ is point-finite. Indeed, let $y\in Y$ be any point. Each point in the fiber $\varphi^{-1}(y)$ belongs to a finite number of sets $U_n$. Since the fiber $\varphi^{-1}(y)$ is finite, $y$ is contained only in a finite number of sets $V_n$ which finishes the proof. \end{proof} \emph{3. Finite/countable unions}. \begin{proposition}\label{prop:unions} Assume that $X$ is a finite union of closed subsets $X_i$, where each $X_i$ belongs to the class $\Delta$. Then $X$ also belongs to $\Delta$. In particular, a finite union of compact $\Delta$-spaces is also a $\Delta$-space. \end{proposition} \begin{proof} Denote by $Z$ the discrete finite union of $\Delta$-spaces $X_i$. 
Obviously, $Z$ is a $\Delta$-space which admits a natural closed continuous mapping onto $X$. Since all fibers of this mapping are finite, the result follows from Proposition \ref{prop:closedmap}. \end{proof} We recall the definition of the Michael line. The Michael line $X$ is the refinement of the real line $\R$ obtained by isolating all irrational points. So, $X$ can be represented as a countable disjoint union of singletons (rationals) and an open discrete set. Nevertheless, the Michael line $X$ is not in $\Delta$ \cite{FKLS}. This example and Proposition \ref{prop:unions} justify the following \begin{problem}\label{Problem_3} Let $X$ be a countable union of compact subspaces $X_i$ such that each $X_i$ belongs to the class $\Delta$. Does $X$ belong to the class $\Delta$? \end{problem} \emph{4. Finite products}. We already mentioned earlier that the existence of a $Q$-set $X \subset \R$ such that its square $X^2$ is not a $Q$-set is consistent with ZFC. \begin{problem}\label{Problem_4} Is the existence of a $\Delta$-set $X \subset \R$ such that its square $X^2$ is not a $\Delta$-set consistent with ZFC? \end{problem} It is known that the finite product of scattered Eberlein compact spaces is a scattered Eberlein compact space. \begin{problem}\label{Problem_5} Let $X$ be the product of two compact spaces $X_1$ and $X_2$ such that each $X_i$ belongs to the class $\Delta$. Does $X$ belong to the class $\Delta$? \end{problem} Our last problem is inspired by Theorem \ref{Theor:height_3}. \begin{problem}\label{Problem_6} Let $X$ be any scattered compact space with a finite scattered height. Does $X$ belong to the class $\dcal$? \end{problem} \textbf{Acknowledgements.} The authors thank Michael Hru\v s\'ak for useful information about Isbell--Mr\'owka spaces.
% Source: ``A characterization of $X$ for which spaces $C_p(X)$ are distinguished and its applications'', arXiv:2011.14299, https://arxiv.org/abs/2011.14299.
% https://arxiv.org/abs/2004.09093
\title{The Number of Singular Fibers in Hyperelliptic Lefschetz Fibrations}
\begin{abstract}
We consider complex surfaces, viewed as smooth $4$-dimensional manifolds, that admit hyperelliptic Lefschetz fibrations over the $2$-sphere. In this paper, we show that the minimal number of singular fibers of such fibrations is equal to $2g+4$ for even $g\geq4$. For odd $g\geq7$, we show that the number is greater than or equal to $2g+6$. Moreover, we discuss the minimal number of singular fibers in all hyperelliptic Lefschetz fibrations over the $2$-sphere as well.
\end{abstract}
\section{Introduction} Donaldson and Gompf's results (\cite{Don.1}, \cite{Don.2}, \cite{Go} and \cite{GS}) give the relation between symplectic $4$-manifolds and Lefschetz fibrations, which are fiberings of a $4$-manifold by surfaces with finitely many singularities of a prescribed type. Donaldson proved that every symplectic $4$-manifold admits a Lefschetz pencil, which can be blown up at its base points to obtain a Lefschetz fibration. On the other hand, Gompf proved that any $4$-manifold admitting a Lefschetz fibration carries a symplectic structure. The isomorphism class of a Lefschetz fibration is determined by its global monodromy. This relation provides a combinatorial way to understand any symplectic $4$-manifold via its monodromy, whenever it exists.\par A Lefschetz fibration admits certain singular fibers associated to its monodromy. Results on the number of singular fibers of a Lefschetz fibration give us important information about the total space. It is well known that the number of singular fibers in a Lefschetz fibration cannot be arbitrary. A natural question to ask is what the minimal number of singular fibers in Lefschetz fibrations is. \par Let $M_{g,h}$ denote the minimal number of singular fibers in all nontrivial relatively minimal Lefschetz fibrations of fiber genus $g$ and base genus $h$. Even though the exact value of $M_{g,h}$ for $h\geq1$ is almost known (except the numbers $M_{g,1}$ for $g\geq3$ and $M_{2,2}$) \cite{Ha,Ko,Ks,M,StY}, this question is still open when $h=0$ and $g\geq3$. It was proved that $M_{g,0}\leq 2g+4$ when $g$ is even and $M_{g,0}\leq 2g+10$ when $g$ is odd~\cite{c,dmp,k1}. It is known that $M_{2,0}=7$ by Xiao's construction~\cite{X}. Recently, a relation among seven positive Dehn twists in the mapping class group of the genus-$2$ surface was found by Baykur and Korkmaz~\cite{bk}.
They also constructed an interesting relation consisting of $12$ positive Dehn twists along simple closed curves which are invariant under a hyperelliptic involution $\iota$ in the mapping class group of the genus-$3$ surface. Moreover, they showed that the number of singular fibers in every genus-$3$ hyperelliptic Lefschetz fibration over the $2$-sphere is greater than or equal to $12$.\par Let $N_{g}$ denote the minimal number of singular fibers in all genus-$g$ hyperelliptic Lefschetz fibrations over the $2$-sphere having at least one singular fiber. It follows from the result of Baykur and Korkmaz that $N_{3}=12$. For $g\geq4$, it is known that $N_{g}\leq2g+4$ (respectively $N_{g}\leq 8g+4$) when $g$ is even (respectively when $g$ is odd). (Here, $8g+4$ comes from the hyperelliptic relation.)\par Let $M_{g}$ denote the minimal number of singular fibers in all genus-$g$ hyperelliptic Lefschetz fibrations on a complex surface over the $2$-sphere having at least one singular fiber. Here, by a complex surface we mean a compact connected complex analytic manifold of complex dimension $2$, considered as a smooth $4$-dimensional manifold.\par Our aim in this paper is to estimate the numbers $N_{g}$ and $M_{g}$ for $g\geq4$. For the number $M_{g}$, we have the following results: \begin{theorem}\label{thm1} For all even $g\geq4$, $M_{g}=2g+4$. \end{theorem} \begin{theorem}\label{thm2} For all odd $g\geq7$, $M_{g}\geq2g+6$. \end{theorem} For the number $N_{g}$ with $4\leq g \leq 10$, we have the following results: \begin{theorem}\label{thm3}For the number $N_{g}$ the following holds. \begin{enumerate} \item[(1)] $N_{4}=12$, \item[(2)] $N_{5}\geq15$, \item[(3)] $N_{6}=16$, \item[(4)] $N_{7}\geq17$, \item[(5)] $N_{8}=19$ or $20$, \item[(6)] $N_{9}\geq24$, \item[(7)] $N_{10}=23$ or $24$. \end{enumerate} \end{theorem} Here is an outline of the paper.
In Section~\ref{S2}, we give some relevant background information from the theory of Lefschetz fibrations and some results to be used in the sequel. Section~\ref{S3} investigates the minimal number of singular fibers in hyperelliptic Lefschetz fibrations on complex surfaces. In this section, we prove Theorems~\ref{thm1} and~\ref{thm2}. In Section~\ref{S4}, we investigate the minimal number of singular fibers in hyperelliptic Lefschetz fibrations. We examine these numbers for $4\leq g \leq 10$ and prove Theorem~\ref{thm3}. \medskip \noindent \textit{Acknowledgements.} I would like to thank my advisor Mustafa Korkmaz for many invaluable comments and discussions. Thanks are due to Anar Akhmedov and T.-J. Li for helpful conversations. I also thank the referee and the editor for reading the paper very carefully, making many valuable suggestions and corrections. This paper is a part of the author's Ph.D. thesis~\cite{a} at Middle East Technical University. The author was partially supported by the Scientific and Technological Research Council of Turkey (T{\"{U}}B\.{I}TAK). \section{Preliminaries}\label{S2} \par We start with a review of some basic definitions and properties of Lefschetz fibrations. In this paper, we denote the $2$-sphere by $\mathbb{S}^{2}$. Let $\Sigma_g$ denote a closed connected oriented surface of genus $g$ and ${\rm Mod}_g$ denote the mapping class group of $\Sigma_g$, i.e., the group of isotopy classes of orientation-preserving diffeomorphisms of $\Sigma_g$. Let $M$ be a closed connected oriented smooth $4$-dimensional manifold. A smooth surjective map $f\colon M\to \mathbb{S}^{2}$ is a {\textit{Lefschetz fibration}} with connected oriented genus-$g$ regular fiber if it has finitely many critical points and around each critical point it can be written as $f(z_{1},z_{2})=z_{1}^{2}+z_{2}^{2}$ with respect to some local complex coordinates agreeing with the orientations of $M$ and $\mathbb{S}^{2}$.
The genus $g$ of a regular fiber $F$ is called \textit{the genus of the fibration}. We assume that all the critical points lie in distinct fibers, called \textit{singular fibers}, which can be arranged after a small perturbation. Each singular fiber is obtained by shrinking a simple closed curve, called a \textit{vanishing cycle}, in the regular fiber. If the vanishing cycle is non-separating (respectively separating), then the singular fiber is said to be \textit{irreducible} (respectively \textit{reducible}). In this paper, we also assume that all Lefschetz fibrations are nontrivial, i.e., they have at least one singular fiber, and relatively minimal, i.e., no fiber contains a sphere of self-intersection $-1$; otherwise, one can blow it down without changing the rest of the fibration. \par Lefschetz fibrations can be described combinatorially by their monodromy representations. The monodromy of a Lefschetz fibration $f\colon M\to \mathbb{S}^{2}$ is given by a positive factorization $t_{\alpha_{1}}t_{\alpha_{2}}\ldots t_{\alpha_{n}}=1$ in ${\rm Mod}_g$, where $\alpha_{i}$ are the vanishing cycles of the singular fibers. (Here $t_a$ denotes the positive Dehn twist about a simple closed curve $a$ on a genus-$g$ surface.) Conversely, for a given positive factorization $t_{a_{1}}t_{a_{2}}\ldots t_{a_{k}}=1$ in ${\rm Mod}_g$, one can construct a genus-$g$ Lefschetz fibration over $\mathbb{S}^{2}$ by attaching $2$-handles along the vanishing cycles $a_{i}$ in a $\Sigma_{g}$ fiber in $\Sigma_{g}\times D^{2}$ with $-1$ framing, and then by closing it up by a fiber preserving map to get a fibration over $\mathbb{S}^{2}$. Two Lefschetz fibrations $f_1\colon M_1\to \mathbb{S}^2$ and $f_2\colon M_2\to \mathbb{S}^2$ are said to be \textit{isomorphic} if there exist orientation preserving diffeomorphisms $H\colon M_1\to M_2$ and $h\colon \mathbb{S}^2 \to \mathbb{S}^2$ such that $f_2H=hf_1$.
If $g\geq2$, it is known that a genus-$g$ Lefschetz fibration over $\mathbb{S}^2$ is characterized by a positive factorization of the identity element in ${\rm Mod}_g$ up to \textit{Hurwitz moves} (exchanging subwords $t_{a_{i}}t_{a_{i+1}}=t_{a_{i+1}}t_{t^{-1}_{a_{i+1}}(a_{i})}$) and \textit{global conjugations} (changing each $t_{a_{i}}$ with $t_{\varphi(a_{i})}$ for some $\varphi \in {\rm Mod}_g$). \par The hyperelliptic mapping class group ${\rm HMod}_g$ of $\Sigma_g$ is defined to be the subgroup of the mapping class group ${\rm Mod}_g$ which is the centralizer of the class of a hyperelliptic involution $\iota\colon \Sigma_g \to \Sigma_g$. We say that a genus-$g$ Lefschetz fibration is hyperelliptic if its vanishing cycles are invariant under the hyperelliptic involution $\iota$ up to isotopy.\par We collect some useful facts about the first homology group of the hyperelliptic mapping class group. \par Recall that for any group $G$, the first homology group of $G$ with integer coefficients is the abelianization of $G$, that is, \[ H_1(G;\mathbb{Z})=G/[G,G], \] where $[G,G]$ is the subgroup of $G$ generated by all commutators $[a,b]=aba^{-1}b^{-1}$ for all $a,b \in G$. It is known that $H_1({\rm Mod}_g;\mathbb{Z})$ is a cyclic group generated by the class of a Dehn twist about a non-separating simple closed curve, and we also have the following lemma: \begin{lemma}\label{hom} For a closed orientable surface of genus $g\geq1$, we have the following isomorphism of the first homology group $H_1({\rm Mod}_{g};\mathbb{Z})$ of the mapping class group ${\rm Mod}_{g}$: \[ H_1({\rm Mod}_{g};\mathbb{Z})\cong \begin{cases} \mathbb{Z}_{12},&\text{if $g=1$},\\ \mathbb{Z}_{10},&\text{if $g=2$},\\ 0,&\text{if $g\geq3$}.\\ \end{cases} \] \end{lemma} For the proof of Lemma~\ref{hom} and further details about the homology groups of the mapping class group, see \cite{k2}. \par The following lemma can be proved using the presentation of the hyperelliptic mapping class group~\cite{bh}.
\begin{lemma}\label{homh} For a closed orientable surface of genus $g\geq1$, the first homology group $H_1({\rm HMod}_g;\mathbb{Z})$ of the hyperelliptic mapping class group ${\rm HMod}_{g}$ is given by the following isomorphism: \[ H_1({\rm HMod}_{g};\mathbb{Z})\cong \begin{cases} \mathbb{Z}/ 4(2g+1),&\text{if $g$ is odd},\\ \mathbb{Z}/ 2(2g+1),&\text{if $g$ is even}.\\ \end{cases} \] \end{lemma} All Dehn twists about non-separating simple closed curves that are invariant under the hyperelliptic involution $\iota$ on $\Sigma_g$ are nontrivial in the hyperelliptic mapping class group ${\rm HMod}_{g}$ of $\Sigma_g$, and each of them maps to the same generator in $H_1({\rm HMod}_{g};{\mathbb{Z}})$ under the natural map ${\rm HMod}_{g}\to H_1({\rm HMod}_{g};{\mathbb{Z}})$. If a product of positive Dehn twists about non-separating curves in ${\rm HMod}_{g}$ is trivial, then the number of twists is divisible by $4(2g+1)$ (respectively $2(2g+1)$) when $g$ is odd (respectively even). A separating simple closed curve on $\Sigma_g$ is said to be of \textit{type $h$} if it bounds subsurfaces of genera $h$ and $g-h$. By the even chain relation, each positive Dehn twist about a separating simple closed curve of type $h$ can be written as a product of $2h(4h+2)$ positive Dehn twists about non-separating simple closed curves. This implies the following relation between the number of non-separating singular fibers and that of separating singular fibers in a genus-$g$ hyperelliptic Lefschetz fibration: \begin{lemma}\label{lem1} Let $n$ (respectively $s$) be the number of non-separating (respectively separating) vanishing cycles in a genus-$g$ hyperelliptic Lefschetz fibration over $\mathbb{S}^{2}$.
Then, we have \begin{eqnarray}\label{eq1} n+\sum_{h=1}^{[g/2]}2h(4h+2)s_{h}\equiv \begin{cases} 0 \pmod{4(2g+1)},&\text{if $g$ is odd,} \\ 0 \pmod{2(2g+1)},&\text{if $g$ is even,} \end{cases} \end{eqnarray} where $s=\sum_{h=1}^{[g/2]}s_{h}$, and $s_h$ is the number of separating vanishing cycles of type $h$. \end{lemma} \begin{lemma}\cite{E,M1,M2} Let $f\colon X \to \mathbb{S}^{2}$ be a genus-$g$ hyperelliptic Lefschetz fibration. Let $n$ and $s=\sum_{h=1}^{[g/2]}s_{h}$ be the numbers of non-separating and separating vanishing cycles of this fibration, respectively, where $s_h$ denotes the number of separating vanishing cycles that separate the genus-$g$ surface into two surfaces one of which has genus $h$. Then the signature of $X$ is \begin{eqnarray*} \sigma(X)&=&-\dfrac{g+1}{2g+1}n+\sum_{h=1}^{[g/2]} \bigg(\dfrac{4h(g-h)}{2g+1}-1\bigg)s_{h}. \end{eqnarray*} \end{lemma} \begin{remark} Ozbagci~\cite{Oz.1} concluded that $\sigma(X)\leq n-s$ for any $4$-manifold $X$ admitting a genus-$g$ Lefschetz fibration over $\mathbb{S}^2$ or $\mathbb{D}^2$, and he also proved that \[ \sigma(X)\leq n-s-4 \] when the Lefschetz fibration over $\mathbb{S}^2$ is hyperelliptic. One can easily obtain that $\sigma(X)\leq n-s-2$ using $b_1(X)\leq 2g-1$, which follows from the handlebody decomposition of nontrivial Lefschetz fibrations over $\mathbb{S}^2$ and the fact that every nontrivial Lefschetz fibration over $\mathbb{S}^{2}$ has at least one non-separating vanishing cycle. Then, Cadavid~\cite{c} improved the upper bound on the signature $\sigma(X)$, showing that \begin{eqnarray}\label{eq2} \sigma(X)\leq n-s-2(2g-b_1(X)). \end{eqnarray} \end{remark} \par Let us recall the following theorem of Stipsicz, which we will use to examine the number of singular fibers. \begin{theorem}\cite{St.1} \label{t1} Let $f\colon X \to \mathbb{S}^2$ be a nontrivial genus-$g$ Lefschetz fibration with $b_{2}^{+}(X)=1$.
\begin{itemize} \item[(1)] If $g\geq 6$ is even, then $f\colon X \to \mathbb{S}^2$ admits at least $2g+4$ singular fibers. (This lower bound is sharp.) \item [(2)] If $g\geq 15$ is odd, then $f\colon X \to \mathbb{S}^2$ admits at least $2g+10$ singular fibers. (This lower bound is sharp.) \item[(3)] If $g\geq 9$ is odd, then $f\colon X \to \mathbb{S}^2$ contains at least $2g+6$ singular fibers. \end{itemize} \end{theorem} We want to remark that in the above theorem the lower bounds in $(1)$ and $(2)$ are sharp, that is, the minimum values $2g+4$ and $2g+10$, respectively, can be realized on ruled surfaces which are uniquely determined as $(\Sigma _{g/2}\times \mathbb{S}^{2})\#4\overline{\mathbb{C} P^{2}}$ and $(\Sigma _{(g-1)/2}\times \mathbb{S}^{2})\#8\overline{\mathbb{C} P^{2}}$, respectively \cite[Sections $4.1$ and $4.2$]{St.1}. However, in $(3)$, the lower bound may not be sharp, that is, we do not know whether there exists a Lefschetz fibration $f\colon X \to \mathbb{S}^2$ with $b_{2}^{+}(X)=1$ and $2g+6$ singular fibers. \section{The minimal number of singular fibers in hyperelliptic Lefschetz fibrations on complex surfaces}\label{S3} \subsection{Even genus case} In this section, we first prove some lemmas needed for the proof of Theorem~\ref{thm1}. \begin{lemma}\label{l1} The $4$-manifold $(\Sigma _{2}\times \mathbb{S}^{2})\#3\overline{\mathbb{C} P^{2}}$ does not admit a genus-$4$ Lefschetz fibration over $\mathbb{S}^2$. \end{lemma} \begin{proof} Suppose that $(\Sigma _{2}\times \mathbb{S}^{2})\#3\overline{\mathbb{C} P^{2}}$ admits a genus-$4$ Lefschetz fibration and consider the homology class of a regular fiber $F$.
We may write \[ [F]=a[U]+b[V]+\displaystyle\sum_{i=1}^{3} c_{i}[E_{i}]\in H_{2}\big((\Sigma _{2}\times \mathbb{S}^{2})\#3\overline{\mathbb{C} P^{2}};\mathbb{Z}\big), \] for some integers $a$, $b$, and $c_{i}$, where $[U]$, $[V]$ denote the homology classes of the section and fiber of the ruling $\Sigma _{2}\times \mathbb{S}^{2}\to \Sigma _{2}$, respectively, such that $[U]^{2}=[V]^{2}=0$, $[U]\cdot [V]=1$, and $[E_{i}]$ denotes the homology class of the exceptional sphere of the $i$th blow-up. The composition of the blow-down and the projection map $\Sigma _{2}\times \mathbb{S}^{2}\to \Sigma _{2}$ leads to a degree-$d$ map $F\rightarrow \Sigma _{2}$ for some integer $d$. The degree $d$ must be equal to $a$. Moreover, since the fiber of the trivial $\mathbb{S}^{2}$-bundle $\Sigma _{2}\times \mathbb{S}^{2}\to \Sigma _{2}$ has a pseudo-holomorphic representative \cite{Liliu}, the degree of the map $F\rightarrow \Sigma _{2}$ is positive by the positivity of intersection. Consider a singular fiber $\Sigma$. Since the normalization of $\Sigma$ has genus $\leq 3$, such a degree-$d$ map yields the following inequality \[ 3-1\geq g(\Sigma)-1 \geq d(2-1)=a(2-1), \] where $g(\Sigma)$ is the genus of the fiber $\Sigma$~\cite{Kne.1}. Therefore, $0<d=a\leq 2$. Since $[F]^{2}=0$, we have \begin{eqnarray} 2ab=\displaystyle\sum_{i=1}^{3} c_{i}^{2}.\label{4eqn} \end{eqnarray} Since the symplectic structure on $(\Sigma _{2}\times \mathbb{S}^{2})\#3\overline{\mathbb{C} P^{2}} $ is unique up to deformations and diffeomorphisms, we can apply the adjunction formula \[ 2g(F)-2=[F]^{2}+[K]\cdot [F], \] where $[K]=-2[U]+(2h-2)[V]+[E_{1}]+[E_{2}]+[E_{3}]$ is the canonical class with $h=g(\Sigma_2)=2$. In this case, the adjunction formula gives \begin{eqnarray}\label{adjunction} 2g(F)-2=2ah-2a-2b-\displaystyle\sum_{i=1}^{3} c_{i}. \end{eqnarray} Thus, for $g(F)=4$ and $h=2$, we have \begin{eqnarray} 6=2a-2b-\displaystyle\sum_{i=1}^{3} c_{i}\label{adj1}.
\end{eqnarray} For $a=1$, by the identities (\ref{4eqn}) and (\ref{adj1}) we have \[ \displaystyle\sum_{i=1}^{3} c_{i}^{2}=2b \mbox{ and } \displaystyle\sum_{i=1}^{3} c_{i} =-4-2b, \] which lead to \[ \displaystyle\sum_{i=1}^{3} c_{i}^{2}+\displaystyle\sum_{i=1}^{3} c_{i} =-4. \] Hence $\displaystyle\sum_{i=1}^{3} \bigg(c_{i}+\frac{1}{2}\bigg)^{2}=-\frac{13}{4}$, which is not possible. In the case $a=2$, using the identities (\ref{4eqn}) and (\ref{adj1}), we have the following equalities: \[ 4b=\displaystyle\sum_{i=1}^{3} c_{i}^{2} \mbox{ and } 2=-2b-\displaystyle\sum_{i=1}^{3} c_{i}, \] which give \[ \displaystyle\sum_{i=1}^{3} c_{i}^{2}+2\displaystyle\sum_{i=1}^{3} c_{i}=-4. \] Thus, the resulting equality is $\displaystyle\sum_{i=1}^{3} (c_{i}+1)^{2}=-1$, which is a contradiction. Therefore, this shows that $(\Sigma _{2}\times \mathbb{S}^{2})\#3\overline{\mathbb{C} P^{2}}$ does not admit a genus-$4$ Lefschetz fibration over $\mathbb{S}^2$. \end{proof} \begin{remark}\label{rem1} The proof of the above lemma is based on Stipsicz's technique in \cite[Lemma $4.4$]{St.1}, and some arguments of \cite[Theorem $21$]{BK.1} and \cite[Lemma $4.2$]{Li.1}. It implies that Theorem \ref{t1} $(1)$ is true for $g=4$ and similarly, one can also show that Theorem \ref{t1} $(3)$ holds for $g=7$. \end{remark} Let $e(X)$ denote the Euler characteristic of a $4$-manifold $X$. For a genus-$g$ Lefschetz fibration $f\colon M \to \mathbb{S}^2$ with $n$ non-separating and $s$ separating vanishing cycles, we have \begin{eqnarray*} e(M)=4-4g+n+s. \end{eqnarray*} We define the following two invariants associated to the $4$-manifold $M$: \begin{eqnarray*} \chi_{h}(M)=\dfrac{e(M)+\sigma(M)}{4} \textrm{ and } c_{1}^{2}(M)=2e(M)+3\sigma(M). \end{eqnarray*} Note that if $M$ is a complex surface, then $\chi_{h}(M)$ is the holomorphic Euler characteristic of $M$ and $c_{1}^{2}(M)$ is the square of the first Chern class of $M$.
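As an independent sanity check (not part of the argument above), the two Diophantine systems excluded in the proof of Lemma~\ref{l1} can be searched by brute force: the constraints are $2ab=\sum_{i=1}^{3}c_i^2$ from $[F]^2=0$ and $6=2a-2b-\sum_{i=1}^{3}c_i$ from the adjunction formula. The completed-square identities $\sum(c_i+\tfrac12)^2=-\tfrac{13}{4}$ and $\sum(c_i+1)^2=-1$ rule out integer solutions over all of $\mathbb{Z}$; the sketch below (the search window is our arbitrary choice for illustration) confirms that none appear:

```python
# Brute-force check, over a finite window of integers, that the systems
#   2ab = c1^2 + c2^2 + c3^2            (from [F]^2 = 0)
#   6   = 2a - 2b - (c1 + c2 + c3)      (from the adjunction formula)
# have no solutions for a = 1 and a = 2.  The window bound is arbitrary;
# the completed-square identities in the proof exclude solutions over all Z.
from itertools import product

def solutions(a, bound=10):
    window = range(-bound, bound + 1)
    sols = []
    for b in window:
        for c in product(window, repeat=3):
            if 2 * a * b == sum(ci * ci for ci in c) and 6 == 2 * a - 2 * b - sum(c):
                sols.append((b,) + c)
    return sols

assert solutions(1) == []  # a = 1: would force sum (c_i + 1/2)^2 = -13/4
assert solutions(2) == []  # a = 2: would force sum (c_i + 1)^2 = -1
```

The search is redundant given the algebraic identities, but it exercises both constraints simultaneously, which is a convenient way to catch sign errors in the adjunction computation.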
\begin{lemma}\label{hollemma} Let $f$ be a genus-$g$ hyperelliptic Lefschetz fibration on a complex surface $X$ over $\mathbb{S}^2$ with even $g\geq 6$ or odd $g\geq 9$. If $n+s<2g+4$, then $n\geq2g+2$. \end{lemma} \begin{proof} Suppose that there exists a hyperelliptic Lefschetz fibration on a complex surface $X$ with $n< 2g+2$. Let us first consider $n<2g$. Using the inequality $\sigma(X)\leq n-s-4$ for hyperelliptic Lefschetz fibrations over $\mathbb{S}^2$ ~\cite{Oz.1}, we have \begin{eqnarray*} \chi_{h}(X)&=&\dfrac{e(X)+\sigma(X)}{4} \\ &\leq& \dfrac{(4-4g+n+s)+(n-s-4)}{4}\\ &=&\dfrac{2n-4g}{4}<0. \end{eqnarray*} Now, assume that $n=2g$, which gives rise to $s\leq3$. By the signature formula, we get \begin{eqnarray*} \sigma(X)&=&-\dfrac{g+1}{2g+1}n+\sum_{h=1}^{[g/2]} \Big(\dfrac{4h(g-h)}{2g+1}-1\Big)s_{h}\\ &\leq &-\dfrac{g+1}{2g+1}(2g)+3\Big(\dfrac{4(g/2)(g/2)}{2g+1}-1\Big)\\ &=&\dfrac{g^2-8g-3}{2g+1}\\ &<&\dfrac{g}{2}-3 \end{eqnarray*} and also, using $n+s\leq2g+3$ we have \begin{eqnarray*} \chi_{h}(X)&=&\dfrac{e(X)+\sigma(X)}{4}\\ &<&\dfrac{4-4g+2g+3+(g/2)-3}{4}\\ &\leq &\dfrac{-3(g/2)+4}{4}<0. \end{eqnarray*} Hence, we conclude that $\chi_{h}(X)<0$ if $n\leq2g$. By the classification of complex surfaces, $X$ is diffeomorphic to a blow up of a ruled surface which implies that $b_{2}^{+}=1$ ~\cite{ba}. However, this contradicts Theorem~\ref{t1}. Therefore, $n>2g$. Since the number $n$ is even by the equality in Lemma~\ref{lem1}, we get the required inequality. \end{proof} \subsubsection{Proof of Theorem~\ref{thm1}} Suppose that we have a hyperelliptic Lefschetz fibration on a complex surface $X$ with $n+s<2g+4$ and $g\geq6$ is even. Hence, $n\geq 2g+2$ by Lemma~\ref{hollemma}. The equality in Lemma~\ref{lem1} implies that $n$ is even and also $s=\sum_{h=1}^{[g/2]}s_{h}>0$ . Thus, $n=2g+2$ and $s=1$. 
The signature $\sigma(X)$ of $X$ is computed using the signature formula as follows: \begin{eqnarray*} \sigma(X)&=&-\dfrac{g+1}{2g+1}n+\sum_{h=1}^{[g/2]} \Big(\dfrac{4h(g-h)}{2g+1}-1\Big)s_{h}\\ &\leq &-\dfrac{g+1}{2g+1}(2g+2)+\Big(\dfrac{4(g/2)(g/2)}{2g+1}-1\Big)\\ &=&-\dfrac{g^2+6g+3}{2g+1}\\ &<&-\dfrac{g}{2}. \end{eqnarray*} Using $\sigma(X)<-\dfrac{g}{2}$, $n=2g+2$ and $s=1$, we get: \begin{eqnarray*} \chi_{h}(X)&=&\dfrac{e(X)+\sigma(X)}{4}\\ &<&\dfrac{4-4g+2g+3-(g/2)}{4}\\ &\leq & \dfrac{-5(g/2)+7}{4}<0. \end{eqnarray*} In this case, the classification of complex surfaces implies that $X$ is a blow-up of a ruled surface and hence $b_{2}^{+}=1$. However, this is impossible if $g\geq6$ by Theorem~\ref{t1}. Thus $M_{g}\geq2g+4$. For even $g\geq6$, the existence of the genus-$g$ hyperelliptic Lefschetz fibration over $\mathbb{S}^2$ with $2g+4$ singular fibers ~\cite{c,dmp,k1} implies that $M_{g}=2g+4$. \par Now, consider the remaining case, $g=4$. Assume that there exists a hyperelliptic Lefschetz fibration so that $n+s<2g+4=12$. The equation in Lemma~\ref{lem1} leads to $n+12s_1+4s_2\equiv 0 \pmod{18}$, where $s=s_1+s_2$ and $n$ is even. 
Moreover, we have $n\geq6$ using the inequality $n\geq \dfrac{1}{5}(8g-3)=\dfrac{29}{5}$ given in~\cite{BK.1}.\par The possible triples $(n,s_1,s_2)$ and some topological invariants of the corresponding genus-$4$ Lefschetz fibrations over $\mathbb{S}^2$, which can be easily computed using the signature formula and $e(X)=4-4g+n+s=-12+n+s_1+s_2$, are given as follows: \begin{center} \begin{tabular}{ r|c|c|c|c| } \multicolumn{1}{r}{} & \multicolumn{1}{c}{$(n,s_1,s_2)$} & \multicolumn{1}{c}{$e(X)$} & \multicolumn{1}{c}{$\sigma(X)$} & \multicolumn{1}{c}{$c_{1}^{2}(X)$} \\ \cline{2-5} (a$1$) & $(6,1,0)$ & $-5$&$-3$&$-19$ \\ \cline{2-5} (a$2$) & $(6,4,0)$ & $-2$ &$-2$&$-10$\\ \cline{2-5} (a$3$) & $(6,0,3)$ & $-3$ &$-1$&$-9$\\ \cline{2-5} (a$4$) & $(8,2,1)$ &$ -1$ &$-3$&$-11$\\ \cline{2-5} \end{tabular} \end{center} \vspace*{0.4cm} We now rule out all cases: Case (a$1$). In this case, $c_1^2(X)=-19 < 4-4g=-12$, which gives a contradiction~\cite{St.2}. Cases (a$2$)$-$(a$4$). In these cases, $c_{1}^{2}(X)<2-2g=-6$. This implies that $X$ is a blow-up of a rational or ruled surface~\cite{Li.1}. Thus we have $b_{2}^{+}(X)=1$. Moreover, using inequality (\ref{eq2}), one can conclude that $X$ cannot be simply-connected and so it is a blow-up of a ruled surface. The equalities \begin{eqnarray*} e(X)&=&2-2b_1(X)+b_{2}^{+}(X)+b_{2}^{-}(X)\\ &=&3-2b_1(X)+b_{2}^{-}(X) \end{eqnarray*} and \begin{eqnarray*} \sigma(X)&=&b_{2}^{+}(X)-b_{2}^{-}(X)=1-b_{2}^{-}(X) \end{eqnarray*} imply that $b_{1}(X)=4$. Hence, $X$ is diffeomorphic to $(\Sigma _{2}\times S^{2})\#m\overline{\mathbb{C} P^{2}}$. (Note that $m=2, 1$ and $3$ for the cases (a$2$), (a$3$) and (a$4$), respectively). From the proof of Lemma~\ref{l1}, we see that $(\Sigma _{2}\times S^{2})\#m\overline{\mathbb{C} P^{2}}$ cannot admit a genus-$4$ Lefschetz fibration over $\mathbb{S}^{2}$ for $m=0,1,2,3$. Since there is a hyperelliptic genus-$4$ Lefschetz fibration with $12$ singular fibers~\cite{c,dmp,k1}, we have $M_{4}=12$. 
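The four triples (a$1$)--(a$4$) can be double-checked by brute force: the constraints collected above are that $n$ is even, $n \geq 6$, $n + s_1 + s_2 < 12$ and $n + 12 s_1 + 4 s_2 \equiv 0 \pmod{18}$. A minimal Python sketch:

```python
# Enumerate all (n, s_1, s_2) satisfying the genus-4 constraints above:
# n even, n >= 6, n + s_1 + s_2 < 12, and n + 12*s_1 + 4*s_2 = 0 mod 18.
triples = [(n, s1, s2)
           for n in range(6, 12, 2)
           for s1 in range(12 - n)
           for s2 in range(12 - n - s1)
           if (n + 12 * s1 + 4 * s2) % 18 == 0]
print(sorted(triples))  # [(6, 0, 3), (6, 1, 0), (6, 4, 0), (8, 2, 1)]
```

The enumeration returns exactly the four triples listed in the table.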
This proves our claim. \subsection{Odd genus case} In this section, we find a lower bound for the number $M_{g}$ when $g\geq7$ is odd. \subsubsection{Proof of Theorem~\ref{thm2}} Suppose that there exists a hyperelliptic Lefschetz fibration on a complex surface $X$ with odd $g\geq 7$ and $n+s<2g+6$. First consider the case $g\geq 9$. If $n<2g$, then it can be shown that $\chi_h(X)<0$ using the inequality $\sigma(X) \leq n-s-4$ as in the proof of Lemma~\ref{hollemma}. This implies that $b_{2}^{+}=1$ by the classification of complex surfaces. But this contradicts Theorem~\ref{t1}. The odd case of the equation in Lemma~\ref{lem1} leads to $n\equiv 0 \pmod{4}$. Since $g$ is odd, $2g\equiv 2 \pmod{4}$, so we can conclude that $n\geq 2g+2$. The assumption $n+s<2g+6$ gives rise to $n=2g+2$ and $s\leq3$. Therefore, the signature formula implies the following inequality: \begin{eqnarray*} \sigma(X)&=&-\dfrac{g+1}{2g+1}n+\sum_{h=1}^{[g/2]} \Big(\dfrac{4h(g-h)}{2g+1}-1\Big)s_{h}\\ &\leq & -\dfrac{g+1}{2g+1}(2g+2)+3\Big(\dfrac{4(g/2)(g/2)}{2g+1}-1\Big)\\ &=&\dfrac{g^2-10g-5}{2g+1}\\ &<&\dfrac{g}{2}-5. \end{eqnarray*} Then, using the inequality $\sigma(X)<\dfrac{g}{2}-5$, the holomorphic Euler characteristic satisfies \begin{eqnarray*} \chi_{h}(X)&=&\dfrac{e(X)+\sigma(X)}{4}=\dfrac{4-4g+n+s+\sigma(X)}{4}\\ &<&\dfrac{4-4g+2g+5+(g/2)-5}{4}\\ &=&\dfrac{-3g}{8}+1<0. \end{eqnarray*} Hence, the classification of complex surfaces implies that $X$ is a blow-up of a ruled surface. In this case, $b_{2}^{+}(X)=1$. However, this contradicts Theorem~\ref{t1}. Now consider the case $g=7$. Suppose that we have a hyperelliptic genus-$7$ Lefschetz fibration $X$ with $n+s<20$, where $s=s_1+s_2+s_3$. We know that $n\geq\dfrac{1}{5}(8g-3)=\dfrac{53}{5}$ (and therefore $n\geq11$)~\cite{BK.1} and it follows from the congruence \[ n+12s_1-20s_2+24s_3 \equiv 0\pmod{60} \] that $n \equiv 0 \pmod{4}$ by Lemma~\ref{lem1}.
Hence the possible values of $(n,s_1,s_2,s_3)$, $e(X)$, $\sigma(X)$, $c_{1}^{2}(X)$ and $\chi_h(X)$ are as follows: \begin{center} \begin{tabular}{ r|c|c|c|c|c| } \multicolumn{1}{r}{} & \multicolumn{1}{c}{$(n,s_1,s_2,s_3)$} & \multicolumn{1}{c}{$e(X)$} & \multicolumn{1}{c}{$\sigma(X)$} & \multicolumn{1}{c}{$c_{1}^{2}(X)$} & \multicolumn{1}{c}{$\chi_h(X)$} \\ \cline{2-6} (b$1$) & $(12,0,0,2)$ &$ -10$ &$-2$&$-26$&$-3$\\ \cline{2-6} (b$2$) & $(12,2,0,1)$ &$ -9$ &$-3$&$-27$&$-3$\\ \cline{2-6} (b$3$) & $(12,4,0,0)$ &$ -8$ &$-4$&$-28$&$-3$\\ \cline{2-6} (b$4$) & $(12,1,0,4)$ & $-7$ &$3$&$-5$&$-1$\\ \cline{2-6} (b$5$) &$(12,0,3,2)$ & $-7$ &$3$&$-5$&$-1$\\ \cline{2-6} (b$6$) & $(12,3,0,3)$ & $-6$ &$2$&$-6$&$-1$\\ \cline{2-6} (b$7$) & $(12,2,3,1)$ &$ -6$ &$2$&$-6$&$-1$\\ \cline{2-6} (b$8$) & $(12,0,0,7)$ &$ -5$ &$9$&$17$&$1$\\ \cline{2-6} (b$9$) & $(12,5,0,2)$ &$ -5$ &$1$&$-7$&$-1$\\ \cline{2-6} (b$10$) &$ (12,4,3,0)$ &$ -5$ &$1$&$-7$&$-1$\\ \cline{2-6} (b$11$) & $(16,0,2,1)$ &$ -5$ &$-3$&$-19$&$-2$\\ \cline{2-6} \end{tabular} \end{center} \vspace*{0.4cm} Cases (b$1$)$-$(b$3$). The manifold $X$ has $c_{1}^{2}(X)<4-4g=-24$, which gives a contradiction~\cite{St.2}. Cases (b$4$)$-$(b$7$), (b$9$) and (b$10$). In these cases, $\chi_h(X)<0$. Thus, $X$ is a blow-up of a ruled surface. However, $\sigma(X)\leq 0$ for such a manifold. Hence, we exclude these cases. Case (b$8$). In this case, the manifold $X$ does not satisfy the inequality $\sigma(X)\leq n-s-4$. Case (b$11$). In this case, since $c_{1}^{2}(X)<2-2g=-12$, $X$ is diffeomorphic to a blow-up of a rational or ruled surface. Hence $b_2^{+}=1$. We have \begin{eqnarray*} e(X)&=&-5=2-2b_{1}(X)+b_{2}^{+}(X)+b_{2}^{-}(X)\\ &=&3-2b_1(X)+b_{2}^{-}(X) \end{eqnarray*} and \begin{eqnarray*} \sigma(X)&=&-3=b_{2}^{+}(X)-b_{2}^{-}(X)\\ &=&1-b_{2}^{-}(X). \end{eqnarray*} Hence $(b_{1}(X),b_{2}^{+}(X),b_{2}^{-}(X))=(6,1,4)$. Therefore, $X=(\Sigma _{3}\times S^{2})\#3\overline{\mathbb{C}P^{2}}$.
But one can prove that $(\Sigma _{3}\times S^{2})\#3\overline{\mathbb{C}P^{2}}$ does not admit a genus-$7$ Lefschetz fibration over $\mathbb{S}^2$ using the same idea as in the proof of Lemma~\ref{l1}. This finishes the proof. ~\hfill$\square$ \section{The minimal number of singular fibers in hyperelliptic Lefschetz fibrations} \label{S4} In this section, we determine the minimal number of singular fibers in some hyperelliptic Lefschetz fibrations over $\mathbb{S}^{2}$. The proofs of Theorems~\ref{thm1} and~\ref{thm2} rely on the fact that any complex surface admitting a symplectic structure with $\chi_{h}<0$ is diffeomorphic to a ruled surface. Here, we study the minimal number of singular fibers in hyperelliptic Lefschetz fibrations over $\mathbb{S}^{2}$ whose total spaces may not have a complex structure. Recall that $N_{g}$ denotes the minimal number of singular fibers in all hyperelliptic genus-$g$ Lefschetz fibrations over $\mathbb{S}^2$. \subsubsection{Proof of Theorem~\ref{thm3}} One can easily conclude that $N_{4}=12$ and $N_{7}\geq 17$ by the proofs of Theorems~\ref{thm1} and~\ref{thm2} (for $g=7$, refer to the cases (b$1$)$-$(b$3$)), respectively. Now let us begin the proof of Theorem~\ref{thm3} (2). Suppose that $N_{5}<15$ so that we have a hyperelliptic genus-$5$ Lefschetz fibration on a $4$-manifold $X$. Let $n$ and $s=s_1+s_2$ be the numbers of nonseparating and separating vanishing cycles, respectively. Hence $n+s<15$. The equation in Lemma~\ref{lem1} turns out to be \[ n+12s_1-4s_2 \equiv 0 \pmod{44} \] so that $n$ is divisible by $4$. It is known that $n\geq8$~\cite{BK.1}. The signature and the Euler characteristic are computed as \[ \sigma(X)=\dfrac{-6n+5s_1+13s_2}{11} \] and \[ e(X)=4-4g+n+s=-16+n+s_1+s_2, \] respectively.
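Combining the congruence, the divisibility of $n$ by $4$, the bound $n\geq8$ and $n+s<15$, the admissible triples and their invariants can be enumerated directly; a minimal Python sketch (variable names ours):

```python
# Enumerate (n, s_1, s_2) for g = 5: 4 | n, n >= 8, n + s < 15, and
# n + 12*s_1 - 4*s_2 = 0 (mod 44); record e(X) and sigma(X) as well.
cases = []
for n in range(8, 15, 4):
    for s1 in range(15 - n):
        for s2 in range(15 - n - s1):        # ensures n + s1 + s2 <= 14
            if (n + 12 * s1 - 4 * s2) % 44 == 0:
                num = -6 * n + 5 * s1 + 13 * s2
                assert num % 11 == 0         # the signature is an integer
                cases.append((n, s1, s2, -16 + n + s1 + s2, num // 11))
print(cases)  # [(8, 0, 2, -6, -2), (8, 1, 5, -2, 2), (8, 3, 0, -5, -3)]
```

The three solutions are precisely the cases (c$1$)--(c$3$) of the table that follows.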
Hence the possible values of $(n,s_1,s_2)$, $e(X)$, $\sigma(X)$, $c_{1}^{2}(X)$ and $\chi_h(X)$ are as follows: \begin{center} \begin{tabular}{ r|c|c|c|c|c| } \multicolumn{1}{r}{} & \multicolumn{1}{c}{$(n,s_1,s_2)$} & \multicolumn{1}{c}{$e(X)$} & \multicolumn{1}{c}{$\sigma(X)$} & \multicolumn{1}{c}{$c_{1}^{2}(X)$} & \multicolumn{1}{c}{$\chi_h(X)$} \\ \cline{2-6} (c$1$) & $(8,0,2)$ &$ -6$&$-2$&$-18$&$-2$ \\ \cline{2-6} (c$2$) & $(8,3,0)$ &$ -5$ &$-3$&$-19$&$-2$\\ \cline{2-6} (c$3$) & $(8,1,5)$ & $-2$ &$2$&$2$&$0$\\ \cline{2-6} \end{tabular} \end{center} \vspace*{0.4cm} We now eliminate all cases: Cases (c$1$) and (c$2$). In these cases, $c_{1}^{2}(X)< 4-4g=-16$. This is impossible~\cite{St.2}. Case (c$3$). In this case, $\sigma(X)> n-s-4$, which is also impossible for hyperelliptic Lefschetz fibrations~\cite{Oz.1}. Therefore, $N_{5}$ cannot be less than $15$. Next, we will prove that $N_{6}=16$. Suppose that $N_{6}<16$ so that we have a hyperelliptic genus-$6$ Lefschetz fibration $X$ with $n+s<16$, where $s=s_1+s_2+s_3$. Using arguments similar to the above, the possible values of $(n,s_1,s_2,s_3)$, $e(X)$, $\sigma(X)$ and $c_{1}^{2}(X)$ are as follows: \begin{center} \begin{tabular}{ r|c|c|c|c| } \multicolumn{1}{r}{} & \multicolumn{1}{c}{$(n,s_1,s_2,s_3)$} & \multicolumn{1}{c}{$e(X)$} & \multicolumn{1}{c}{$\sigma(X)$} & \multicolumn{1}{c}{$c_{1}^{2}(X)$} \\ \cline{2-5} (d$1$) & $(10,0,3,0)$ & $-7$&$-1$&$-17$\\ \cline{2-5} (d$2$) & $(10,3,0,1)$ &$ -6$ &$-2$&$-18$\\ \cline{2-5} (d$3$) & $(10,2,0,3)$ & $-5$ &$1$&$-7$\\ \cline{2-5} (d$4$) & $(10,1,4,0)$ & $-5$ &$1$&$-7$\\ \cline{2-5} (d$5$) & $(12,0,1,0)$ &$ -7$ &$-5$&$-29$\\ \cline{2-5} (d$6$) & $(12,1,2,0)$ & $-5$ &$-3$&$-19$\\ \cline{2-5} (d$7$) &$ (14,1,0,0)$ & $-5$ &$-7$&$-31$\\ \cline{2-5} \end{tabular} \end{center} \vspace*{0.4cm} We now eliminate all cases: Cases (d$5$) and (d$7$). In these cases, $c_1^2(X) < 4-4g=-22$. This is a contradiction~\cite{St.2}. Cases (d$1$), (d$2$) and (d$6$).
In these cases, $c_{1}^{2}<2-2g=-10$. Hence, $X$ is a blow-up of a rational or ruled surface~\cite{Li.1}. Thus, $b_{2}^{+}(X)=1$. However, this contradicts Theorem~\ref{t1}. Cases (d$3$) and (d$4$). In these cases, we have the following identities: \begin{eqnarray} \sigma(X)&=&b_{2}^{+}(X)-b_{2}^{-}(X)=1, \label{heq5}\\ e(X)&=&2-2b_1(X)+b_{2}^{+}(X)+b_{2}^{-}(X)=-5.\label{heq6} \end{eqnarray} So, the equations ($\ref{heq5}$) and ($\ref{heq6}$) yield \begin{eqnarray} b_{2}^{+}(X)&=&b_{1}(X)-3, \label{heq7}\\ b_{2}^{-}(X)&=&b_{1}(X)-4.\label{heq8} \end{eqnarray} Observe that $X$ cannot be a rational surface because $b_{1}(X)=4>0$ as $b_2^{+}=1$. Also, $X$ is not a blow-up of a ruled surface, since ruled surfaces have $\sigma\leq 0$. Let $\widetilde{X}$ be the minimal model of $X$ so that $X\cong\widetilde{X}\#k\overline{\mathbb{CP}}^{2}$ for some non-negative integer $k$. Due to Liu~\cite{Liu} and Taubes~\cite{Tau}, $c_{1}^{2}(\widetilde{X})\geq0$. Also, the equation \begin{eqnarray*} c_{1}^{2}(\widetilde{X})=c_{1}^{2}(X)+k=-7+k \end{eqnarray*} implies that $k\geq7$. It is known that $b_{2}^{-}(X)\geq k \geq7$. The identity (\ref{heq8}) gives rise to $b_{1}(X)\geq 11$. Since $b_{1}(X)\leq 2g-1=11$ by the theory of Lefschetz fibrations, we have $b_{1}(X)=11$. However, this contradicts the result of \cite[Lemma 2.5]{Li.1}. Hence $N_{6}$ cannot be less than $16$. Since there exists a genus-$6$ hyperelliptic Lefschetz fibration with $16$ singular fibers~\cite{c,dmp,k1}, we have $N_{6}=16$.\par For $8\leq g \leq10$, we list all possible values of the numbers $n$ and $s$ (the remaining details for these cases follow similarly from the above arguments).
One can list these numbers using the congruence in Lemma~\ref{lem1} and the inequality $n\geq(8g-3)/5$~\cite{BK.1}.\par For $g=8$, the possible values of $(n,s_1,s_2,s_3,s_4), e(X)$, $\sigma(X)$ and $c_{1}^{2}(X)$ are as follows: \begin{center} \begin{tabular}{ r|c|c|c|c| } \multicolumn{1}{r}{} & \multicolumn{1}{c}{$(n,s_1,s_2,s_3,s_4)$} & \multicolumn{1}{c}{$e(X)$} & \multicolumn{1}{c}{$\sigma(X)$} & \multicolumn{1}{c}{$c_{1}^{2}(X)$} \\ \cline{2-5} (e$1$) & $(14,1,0,0,1)$ & $-12$&$-4$&$-36$ \\ \cline{2-5} (e$2$) & $(14,0,2,0,1)$ &$ -11$ &$-1$&$-25$\\ \cline{2-5} (e$3$) & $(14,0,1,3,0)$ & $-10$ &$2$&$-14$\\ \cline{2-5} (e$4$) & $(16,1,1,0,0)$ &$ -10$ &$-6$&$-38$\\ \cline{2-5} (e$5$) & $(14,0,1,2,2)$ & $-9$ &$5$&$-3$\\ \cline{2-5} \end{tabular} \end{center} \vspace*{0.4cm} Using arguments similar to those above, we can eliminate all possibilities except the case (e$5$). Thus, we can conclude that $N_{8}=19$ or $20$.\par For $g=9$, the possible values of $(n,s_1,s_2,s_3,s_4), e(X), \sigma(X)$ and $c_{1}^{2}(X)$ are as follows: \begin{center} \begin{tabular}{ r|c|c|c|c| } \multicolumn{1}{r}{} & \multicolumn{1}{c}{$(n,s_1,s_2,s_3,s_4)$} & \multicolumn{1}{c}{$e(X)$} & \multicolumn{1}{c}{$\sigma(X)$} & \multicolumn{1}{c}{$c_{1}^{2}(X)$} \\ \cline{2-5} (f$1$) & $(16,0,0,0,2)$ &$ -14$&$-2$&$-34$ \\ \cline{2-5} (f$2$) & $(16,1,1,1,0)$ & $-13$ &$-3$&$-35$\\ \cline{2-5} (f$3$) & $(16,0,0,1,3)$ & $-12$ &$4$&$-12$\\ \cline{2-5} (f$4$) & $(16,5,0,0,0)$ & $-11$&$-5$&$-37$\\ \cline{2-5} (f$5$) & $(16,1,1,2,1)$ & $-11$ &$3$&$-13$\\ \cline{2-5} (f$6$) & $(16,0,3,2,0)$ & $-11$ &$3$&$-13$\\ \cline{2-5} (f$7$) & $(16,3,1,0,2)$ & $-10$ &$2$&$-14$\\ \cline{2-5} \end{tabular} \end{center} \begin{center} \begin{tabular}{ r|c|c|c|c| } \multicolumn{1}{r}{} & \multicolumn{1}{c}{$(n,s_1,s_2,s_3,s_4)$} & \multicolumn{1}{c}{$e(X)$} & \multicolumn{1}{c}{$\sigma(X)$} & \multicolumn{1}{c}{$c_{1}^{2}(X)$} \\ \cline{2-5} (f$8$) & $(16,3,0,3,0)$ & $-10$ &$2$&$-14$\\ \cline{2-5} (f$9$) &
$(16,2,3,0,1)$ & $-10$ &$2$&$-14$\\ \cline{2-5} (f$10$) & $(16,1,5,0,0)$ & $-10$ &$2$&$-14$\\ \cline{2-5} (f$11$) & $(16,0,0,2,4)$ &$ -10$ &$10$&$10$\\ \cline{2-5} (f$12$) & $(16,5,0,1,1)$ &$ -9$ &$1$&$-15$\\ \cline{2-5} (f$13$) &$ (16,4,2,1,0)$ &$ -9$ &$1$&$-15$\\ \cline{2-5} (f$14$) & $(16,2,0,0,5)$ &$ -9$ &$9$&$9$\\ \cline{2-5} (f$15$) & $(16,1,2,0,4)$ &$ -9$ &$9$&$9$\\ \cline{2-5} (f$16$) &$ (16,1,1,3,2)$ &$ -9$ &$9$&$9$\\ \cline{2-5} (f$17$) & $(16,1,0,6,0)$ & $-9$ &$9$&$9$\\ \cline{2-5} (f$18$) & $(16,0,4,0,3)$ & $-9$ &$9$&$9$\\ \cline{2-5} (f$19$) & $(16,0,3,3,1)$ & $-9$ &$9$&$9$\\ \cline{2-5} (f$20$) & $(20,0,1,2,0)$ & $-9$ &$-3$&$-27$\\ \cline{2-5} \end{tabular} \end{center} \vspace*{0.4cm} Using arguments similar to those above, we can eliminate all possibilities (f$1$)$-$(f$20$) such that $n+s<24$. Thus, we can conclude that $N_{9}\geq 24$.\par For $g=10$, the possible values of $(n,s_1,s_2,s_3,s_4,s_5)$, $e(X)$, $\sigma(X)$ and $c_{1}^{2}(X)$ are as follows: \begin{center} \begin{tabular}{ r|c|c|c|c| } \multicolumn{1}{r}{} & \multicolumn{1}{c}{$(n,s_1,s_2,s_3,s_4,s_5)$} & \multicolumn{1}{c}{$e(X)$} & \multicolumn{1}{c}{$\sigma(X)$} & \multicolumn{1}{c}{$c_{1}^{2}(X)$} \\ \cline{2-5} (g$1$) & $(16,0,1,0,1,1)$ & $-17$&$1$&$-31$ \\ \cline{2-5} (g$2$) & $(16,1,2,0,1,0)$ &$ -16$ &$0$&$-32$\\ \cline{2-5} (g$3$) & $(16,0,1,1,1,1)$ &$ -16$ &$4$&$-20$\\ \cline{2-5} (g$4$) & $(16,1,2,1,1,0)$ & $-15$&$3$&$-21$\\ \cline{2-5} (g$5$) & $(16,1,0,0,2,2)$ &$ -15$ &$7$&$-9$\\ \cline{2-5} (g$6$) & $(16,0,2,0,0,3)$ & $-15$ &$7$&$-9$\\ \cline{2-5} (g$7$) & $(16,0,1,2,1,1)$ &$ -15$ &$7$&$-9$\\ \cline{2-5} (g$8$) & $(16,4,0,0,0,2)$ & $-14$ &$2$&$-22$\\ \cline{2-5} (g$9$) & $(16,2,1,0,2,1)$ & $-14$ &$6$&$-10$\\ \cline{2-5} (g$10$) & $(16,1,3,0,0,2)$ &$ -14$ &$6$&$-10$\\ \cline{2-5} (g$11$) & $(16,1,2,2,1,0)$ &$ -14$ &$6$&$-10$\\ \cline{2-5} (g$12$) & $(16,1,0,1,2,2)$ & $-14$ &$10$&$2$\\ \cline{2-5} (g$13$) & $(16,0,2,1,0,3)$ & $-14$ &$10$&$2$\\ \cline{2-5} (g$14$) & 
$(16,0,2,0,4,0)$ & $-14$ &$10$&$2$\\ \cline{2-5} (g$15$) & $(16,0,0,0,1,5)$ &$ -14$ &$14$&$14$\\ \cline{2-5} (g$16$) & $(18,2,0,0,0,0)$ & $-16$ &$-8$&$-56$\\ \cline{2-5} (g$17$) & $(18,2,0,1,0,0)$ &$ -15$ &$-5$&$-45$\\ \cline{2-5} (g$18$) & $(18,2,0,2,0,0)$ &$ -14$ &$-2$&$-34$\\ \cline{2-5} (g$19$) & $(18,1,0,0,3,0)$ & $-14$ &$2$&$-22$\\ \cline{2-5} (g$20$) & $(18,0,2,0,0,1)$ & $-14$ &$2$&$-22$\\ \cline{2-5} (g$21$) & $(20,1,0,0,0,1)$ & $-14$ &$-6$&$-46$\\ \cline{2-5} (g$22$) &$ (18,0,0,0,2,3)$ & $-13$ &$9$&$1$\\ \cline{2-5} \end{tabular} \end{center} \vspace*{0.4cm} Using arguments similar to those above, one can eliminate all possibilities except the case (g$22$). Thus, one can conclude that $N_{10}=23$ or $24$. ~\hfill$\square$ \vspace*{2.5mm} \par As the genus $g$ increases, the number of possibilities for $n$ and $s$ increases, where $n$ and $s$ are the numbers of irreducible and reducible fibers, respectively. Hence, it is hard to find the exact value of $N_{g}$. The odd case is harder because of the upper bound $8g+4$ of $N_{g}$. For the general case we have the following: \begin{proposition}\label{pr} Let $f\colon X\to \mathbb{S}^2$ be a genus-$g$ Lefschetz fibration with $n+s<2g+4$ and $g>6$. Then the signature of $X$, $\sigma(X)$, is positive. \end{proposition} \begin{proof} Suppose that $X$ admits a genus-$g$ Lefschetz fibration with $n+s<2g+4$ for $g>6$ and $g\neq 7$. It follows from Theorem~\ref{t1} that $b_2^{+}\neq1$ and therefore $X$ is not a blow-up of a rational or ruled surface. This gives $c_1^{2}(X)\geq 2-2g$ by~\cite{Li.1}. Therefore we get: \begin{eqnarray*} 2-2g \leq c_1^{2}(X)&=&3\sigma(X)+2e(X)\\ &=&3\sigma(X)+2(4-4g+n+s)\\ &\leq & 3\sigma(X)+2(4-4g+2g+3)\\ &=&3\sigma(X)+14-4g, \end{eqnarray*} which implies that $\sigma(X)>0$ when $g> 6$ and $g\neq7$.\par We see that the same argument holds for $g=7$ using Remark~\ref{rem1}.
\end{proof} \begin{remark} Proposition~\ref{pr} implies that every hyperelliptic genus-$g$ Lefschetz fibration with $n+s<2g+4$ and $g>6$ has $b_1(X)>\dfrac{8g-15}{6}$ by applying the inequality (\ref{eq2}). However, the existence of such a Lefschetz fibration is not known~\cite{b1, Sm}. \end{remark} \begin{remark} Recently, Korkmaz has constructed a factorization of the identity in the hyperelliptic mapping class group ${\rm HMod}_{g}$ with length $5g-3$. This new construction allows us to improve the upper bound of $N_{g}$ when $g$ is odd. Therefore, we conclude that $N_{g}\leq 2g+4$ if $g$ is even and $N_{g}\leq 5g-3$ if $g$ is odd. \end{remark}
arXiv:2004.09093 (math.GT): ``The Number of Singular Fibers in Hyperelliptic Lefschetz Fibrations'', https://arxiv.org/abs/2004.09093. Abstract: We consider complex surfaces, viewed as smooth $4$-dimensional manifolds, that admit hyperelliptic Lefschetz fibrations over the $2$-sphere. In this paper, we show that the minimal number of singular fibers of such fibrations is equal to $2g+4$ for even $g\geq4$. For odd $g\geq7$, we show that the number is greater than or equal to $2g+6$. Moreover, we discuss the minimal number of singular fibers in all hyperelliptic Lefschetz fibrations over the $2$-sphere as well.
https://arxiv.org/abs/1707.01308
Measuring heavy-tailedness of distributions
Different questions related to the analysis of extreme values and outliers arise frequently in practice. To exclude extremal observations and outliers is not a good decision, because they contain important information about the observed distribution. The difficulties with their usage are usually related to the estimation of the tail index in case it exists. There are many measures for the center of a distribution, e.g. mean, mode, median. There are many measures of the variance, asymmetry, and kurtosis, but there is no easy characteristic for the heavy-tailedness of the observed distribution. Here we propose such a measure, give some examples and explore some of its properties. This allows us to introduce a classification of distributions with respect to their heavy-tailedness. The idea is to help and navigate practitioners for accurate and easier work in the field of probability distributions. Using the properties of the defined characteristics, some distribution-sensitive extremal index estimators are proposed and their properties are partially investigated.
\section{INTRODUCTION} For more than 90 years, scientists have looked for an appropriate way of handling outliers. \cite{irwin1925criterion}, \cite{mckay1935distribution}, \cite{nair1948distribution} and \cite{dixon1950analysis, dixon1953processing} consider them mainly with respect to the deviations of the distribution of the maxima of the sample from the one of the maxima of the normal distribution. They discuss the effect of removing outliers and propose some techniques for handling them. Later on, some other tests for outliers appeared, see e.g. Grubbs' test \cite{grubbs1969procedures}. These tests still neglect the importance of the extreme values, do not take into account the fact that the standard deviation does not necessarily exist, especially in the case of heavy-tailed distributions, and compare the observed variable with the corresponding normal one. Recently \cite{klebanov2016big, klebanov2017outliers, klebanov2016outliers} revisited this topic. Tukey and co-authors gave definitions for mild and extreme outliers \cite{tukey1977exploratory} and box-plots \cite{mcgill1978variations} via the quartiles of the distribution and the inter-quartile range ($IQR$). Here we classify the distributions with respect to the heaviness of their tails using the theoretical quartiles $Q_1, Q_2, Q_3$, the $IQR$, the lower inner fence ($I_L$), the lower outer fence ($O_L$), the upper inner fence ($I_R$) and the upper outer fence ($O_R$). Suppose $X_1, X_2, ..., X_n$ are mutually independent observations of a random variable (r.v.) $X$ with cumulative distribution function (c.d.f.) $F(x) = \mathbb{P}(X \leq x)$, probability density function (p.d.f.) $f$ and increasing order statistics $X_{(1, n)} \leq ... \leq X_{(n, n)}$. There are many different possibilities to define empirical $p$-quantiles, $p \in (0, 1)$. See e.g. \cite{parzen1979nonparametric, hyndman1996sample, langford2006quartiles}.
We use the following one: $\hat{F}^\leftarrow(p) = X_{([(n+1)p],n)} + \{(n+1)p-[(n+1)p]\}\{X_{([(n+1)p]+1, n)} - X_{([(n+1)p], n)}\}$, where $[a]$ means the integer part of $a$ and $\frac{1}{n+1} \leq p \leq \frac{n}{n+1}$. Let $\hat{Q}_1$, $\hat{Q}_2$, $\hat{Q}_3$ be the empirical quartiles of the observed r.v. and $\hat{IQR} = \hat{Q}_3 - \hat{Q}_1$ be the corresponding empirical IQR. We use the concepts of the empirical lower inner fence $\hat{I}_L = \hat{Q}_1 - 1.5\times\hat{IQR}$, upper inner fence $\hat{I}_R = \hat{Q}_3 + 1.5\times\hat{IQR}$, lower outer fence $\hat{O}_L = \hat{Q}_1 - 3\times\hat{IQR}$, upper outer fence $\hat{O}_R = \hat{Q}_3 + 3\times\hat{IQR}$, and of mild and extreme outliers, given e.g. in \cite{devore2015probability, NISTSEMATHECH, watkins2010statistics}. We call an observation a {\bf{mild outlier}} if it is outside the interval $[\hat{Q}_1 - 1.5\hat{IQR}; \hat{Q}_3 + 1.5\hat{IQR}]$ and inside the interval $[\hat{Q}_1 - 3\hat{IQR}; \hat{Q}_3 + 3\hat{IQR}]$. We call an observation an {\bf{extreme outlier}} if it is outside the interval $[\hat{Q}_1 - 3\hat{IQR}; \hat{Q}_3 + 3\hat{IQR}]$. See Figure~\ref{fig:empiricalboxplot} and \cite{devore2015probability}. Different questions related to the analysis of outliers arise frequently in practice. The difficulties with their usage are usually related to the estimation of the tail index in case it exists. Recently, extreme value theory has developed techniques for handling them, but these mainly rely on the second-order condition (see e.g. \cite{de2007extreme}), which seems difficult for practitioners to check, handle and understand. Due to lack of information about the distribution outside the range of the data, its tail should be estimated via many characteristics. There are many measures for the center of the distribution, e.g. mean, mode, median. There are measures for the variance, asymmetry and kurtosis, but there are not enough characteristics measuring the heaviness of the tails of the distribution.
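The empirical quantile and fence definitions above translate directly into code. The following Python sketch (function names are ours) computes the empirical fences and flags mild and extreme outliers in a sample:

```python
def empirical_quantile(xs, p):
    # \hat{F}^{<-}(p): interpolate the order statistics at position (n+1)p,
    # valid for 1/(n+1) <= p <= n/(n+1), as defined above.
    xs = sorted(xs)
    n = len(xs)
    k = (n + 1) * p
    i = int(k)          # [(n+1)p], the integer part
    if i >= n:          # p = n/(n+1): no right neighbour to interpolate with
        return xs[-1]
    return xs[i - 1] + (k - i) * (xs[i] - xs[i - 1])

def fences_and_outliers(xs):
    q1, q3 = empirical_quantile(xs, 0.25), empirical_quantile(xs, 0.75)
    iqr = q3 - q1
    i_l, i_r = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # inner fences
    o_l, o_r = q1 - 3 * iqr, q3 + 3 * iqr       # outer fences
    mild = [x for x in xs if (x < i_l or x > i_r) and o_l <= x <= o_r]
    extreme = [x for x in xs if x < o_l or x > o_r]
    return (i_l, i_r), (o_l, o_r), mild, extreme

sample = list(range(1, 12)) + [25, 60]          # 13 observations
print(fences_and_outliers(sample))
# ((-7.0, 21.0), (-17.5, 31.5), [25], [60])
```

For the $13$ observations $1, \dots, 11, 25, 60$ this gives inner fences $(-7, 21)$, outer fences $(-17.5, 31.5)$, one mild outlier ($25$) and one extreme outlier ($60$).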
Here we propose such measures and give some examples. All of them are invariant with respect to shifting of the discussed r.v. This allows us to introduce a classification of the distributions with respect to their heavy-tailedness. Using the outliers, we propose relatively easy techniques to recognize the tail of the distribution and to estimate its index of regular variation in case it exists. The idea is to help and navigate practitioners towards accurate statistical diagnostics and easier work in the field of probability distributions. This approach provides benchmarks only for recognizing the tails of the observed distribution. For a better fit we also need to take into account the specific form of its center. \begin{figure} \includegraphics[scale=.67,draft=false]{empiricalboxplot}\vspace{-0.3cm} \caption{Empirical box-plot, together with the empirical inner and outer fences.}\label{fig:empiricalboxplot} \end{figure} \section{CLASSIFICATION OF DISTRIBUTIONS WITH RESPECT TO THEIR HEAVY-TAILEDNESS} Following Tukey, by the theoretical box-plot of a given c.d.f. $F$ we understand the one in Figure~\ref{fig:Fboxplot}. One possibility to make a tentative fitting of the observed distribution is to compare its empirical box-plot with the theoretical box-plot of the tested distribution. However, this approach is not robust, especially for small samples. See e.g. \cite{devore2015probability}. The presence of outliers in a sample of independent observations strongly depends not only on the distributional type, but also on the sample size. Therefore we classify the distributions with respect to their probabilities of having mild or extreme outliers. First of all, let us mention that all numerical characteristics that we introduce are invariant with respect to shifting of the r.v.
\begin{figure} \includegraphics[scale=.67,draft=false]{Fboxplot}\vspace{-0.3cm} \caption{Theoretical box-plot, together with theoretical inner and outer fences.}\label{fig:Fboxplot} \end{figure} \subsection{Classification of the distributions with respect to heaviness of their left tails} {\bf{Definition 1.}} We call a r.v. $X$ and its c.d.f. $F$, {\it $p_{mL}(X)$-mild-heavy left-tailed} if $$P(Q_{1}(F) - 3IQR(F) < X \leq Q_{1}(F) - 1.5IQR(F)) = p_{mL}(X).$$ Having in mind this definition, we introduce a classification of the distributions with respect to their mild left tail. {\bf{Definition 2.}} A r.v. $X$ and a r.v. $Y$ {\it belong to one and the same $p_{mL}$-mild-heavy left-tailed class} if $p_{mL}(X) = p_{mL}(Y)$. See Figure~\ref{fig:Fig3}, b). {\it A r.v. $X$ has lighter mild-heavy left tail than a r.v. $Y$} if $p_{mL}(X) < p_{mL}(Y)$. Let us note that $p_{mL}(X) = p_{mL}(Y)$ means neither that $X$ and $Y$ belong to one and the same distributional type, nor that they have one and the same mean or variance. But if $X = Y$ in distribution, then $p_{mL}(X) = p_{mL}(Y)$. The $p_{mL}$ characteristic is invariant with respect to shifting. More precisely, for all $c_1 \in \mathbb{R}$, $p_{mL}(c_1 + X) = p_{mL}(X).$ Table~\ref{tab:1} presents a small part of this classification, where $c_m = \left(\log_{1-\frac{log\,3}{log\,4}}\frac{3}{5}\right)^{-1} \approx 3.08$, $c_e = \left(\log_{1-\frac{log\,3}{log\,4}}\frac{3}{4}\right)^{-1} \approx 5.47$. The fact that the $p_{mL}$ characteristic of all normal distributions is approximately $0.0035$ in practice means that if we observe such a r.v. we should expect 3 or 4 mild left outliers to appear in a sample of 1000 observations. Analogously, we should expect to have around 34 or 35 mild left outliers in a sample of 10000 observations, and so on. All negative exponential distributions have approximately $0.0389$-mild-heavy left tail. So, if we observe 100 independent realizations of a negative exponentially distributed r.v. we should expect to have around 4 mild left outliers. \begin{table} \centering \caption{Classification of some of the distributions with respect to their mild-heavy left-tailedness. \label{tab:1}} \begin{tabular}{|c|c|} \hline Distribution & $p_{mL}$ \\ \hline $U(a, b), a < b; Gamma(\alpha, \beta); Pareto(\alpha, \delta); Frechet(\alpha), 0 < \alpha < c_m$ & $0$ \\ $Frechet(\alpha), \alpha \in (c_m, c_e]$ & $exp\{-\left(2.5 log^{-1/\alpha}(4)-1.5log^{-1/\alpha}\frac{4}{3} \right)^{-\alpha}\} \approx 0$\\ $Frechet(\alpha), \alpha > c_e$ & $exp\{-\left(2.5 log^{-1/\alpha}(4)-1.5log^{-1/\alpha}\frac{4}{3} \right)^{-\alpha}\}-$\\ & $-exp\{-\left(4 log^{-1/\alpha}4-3log^{-1/\alpha}\frac{4}{3} \right)^{-\alpha}\} \approx 0$\\ $Gumbel$ & $\approx 0.00000043$\\ $N(\mu, \sigma^2)$ & $\approx 0.0035$ \\ $Weibull^-(\alpha)$ & $exp\{-(2.5log^{1/\alpha}4 -1.5 log^{1/\alpha}\frac{4}{3})^{\alpha} \}-$\\ & \hfill$-exp\{-(4 log^{1/\alpha}4 - 3 log^{1/\alpha}\frac{4}{3})^{\alpha} \}$\\ $Weibull^-(2)$ & $\approx 0.0102$\\ $t(2)$ & $\approx 0.0266$ \\ $t(1)$ & $\approx 0.0328$ \\ $-Exp(\lambda)$ & $\approx 0.0389$ \\ $Weibull^-(1)$ & $\approx 0.0389$\\ $Weibull^-(0.5)$ & $\approx 0.0495$\\ \hline \end{tabular} \end{table} What about more extreme left outliers? See Figure~\ref{fig:Fig3}, a). {\bf{Definition 3.}} We call a r.v. $X$ and its c.d.f. $F$, {\it $p_{eL}(X)$-extremely heavy left-tailed} if $$P(X < Q_{1}(F) - 3IQR(F)) = p_{eL}(X).$$ \begin{figure} \begin{minipage}[t]{0.5\linewidth} \centerline{% \includegraphics [width=.78\textwidth]{EL}} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centerline{% \includegraphics [width=.78\textwidth]{ML}} \caption{Relation between the plot of the p.d.f. of a r.v. $X$ with c.d.f. $F$, $p_{mL}(X)$ and $p_{eL}(X)$. \label{fig:Fig3}} \end{minipage} \end{figure} {\bf{Definition 4.}} We say that {\it a r.v. $X$ and a r.v. $Y$ belong to one and the same $p_{eL}$-extremely heavy left-tailed class} if $p_{eL}(X) = p_{eL}(Y)$.
Analogously, we say that {\it a r.v. $X$ has lighter extremely heavy left tail than a r.v. $Y$} if $p_{eL}(X) < p_{eL}(Y)$. Table~\ref{tab:2} presents some examples of classification of distributions with respect to their extremely heavy left tails. In order to explain the results, let us consider again the normal distribution. The value $p_{eL} \approx 0.0000012$ means that in case we have independent observations on such a r.v. we should expect to have 1 or 2 left extreme outliers in a sample of $10^6$ observations. Analogously, we should expect to have approximately 12 left extreme outliers in a sample of $10^7$ observations, and so on. \begin{table} \centering \caption{Classification of some of the distributions with respect to their extremely heavy left-tailedness. \label{tab:2}} \begin{tabular}{|c|c|} \hline Distribution & $p_{eL} = F(O_L)$ \\ \hline U(a, b); Gamma$(\alpha, \lambda)$; Pareto($\alpha$, $\delta$); Frechet($\alpha$), $0 < \alpha < c_e$ & 0 \\ Frechet($\alpha$), $\alpha \geq c_e$; Gumbel & $\approx 0$\\ N($\mu$, $\sigma^2$) & $\approx 0.0000012$ \\ $Weibull^-(\alpha)$ & $\exp\{-(4 \log^{1/\alpha}4 - 3 \log^{1/\alpha}\frac{4}{3})^{\alpha}\}$\\ $Weibull^-(2)$ & $\approx 0.0000668$\\ $-Exp(\lambda)$ & $\approx 0.0093$ \\ $Weibull^-(1)$ & $\approx 0.0093$\\ $t(2)$ & $\approx 0.0146$ \\ $t(1)$ & $\approx 0.0452$ \\ $Weibull^-(0.5)$ & $\approx 0.0654$\\ \hline \end{tabular} \end{table} {\it Note:} 1. $p_{mL}(X) < p_{mL}(Y)$ is not equivalent to $p_{eL}(X) < p_{eL}(Y)$. 2. If $p_{eL}(X) = p_{eL}(Y)$ or $p_{mL}(X) = p_{mL}(Y)$, this does not necessarily mean that $X$ and $Y$ coincide in distribution. 3. $p_{eL}(c_1 + X) = p_{eL}(X)$, for all $c_1 \in \mathbb{R}$. \subsection{Classification of the distributions with respect to heaviness of their right tails} Analogously to the previous subsection, we can work with the right tails. See Figure~\ref{fig:Fig4}, a) and b). {\bf{Definition 5.}} We call a r.v. $X$ and its c.d.f. 
$F$, {\it $p_{mR}(X)$-mild-heavy right-tailed} if $$P(Q_{3}(F) + 1.5IQR(F) < X \leq Q_{3}(F) + 3IQR(F)) = p_{mR}(X).$$ {\bf{Definition 6.}} We say that {\it a r.v. $X$ and a r.v. $Y$ belong to one and the same $p_{mR}$-mild-heavy right-tailed class} if $p_{mR}(X) = p_{mR}(Y)$. {\it A r.v. $X$ has lighter mild-heavy right tail than a r.v. $Y$} if $p_{mR}(X) < p_{mR}(Y)$. {\bf{Definition 7.}} We call a r.v. $X$ and its c.d.f. $F$, {\it $p_{eR}(X)$-extremely heavy right-tailed} if $$P(X > Q_{3}(F) + 3IQR(F)) = p_{eR}(X).$$ \begin{figure} \begin{minipage}[t]{0.5\linewidth} \centerline{% \includegraphics [width=.78\textwidth]{MR}} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centerline{% \includegraphics [width=.78\textwidth]{ER}} \caption{Relation between the plot of the p.d.f. of a r.v. $X$ with c.d.f. $F$, $p_{mR}(X)$ and $p_{eR}(X)$. \label{fig:Fig4}} \end{minipage} \end{figure} {\bf{Definition 8.}} {\it A r.v. $X$ and a r.v. $Y$ belong to one and the same $p_{eR}$-extremely heavy right-tailed class} if $p_{eR}(X) = p_{eR}(Y)$. We say that {\it a r.v. $X$ has lighter extreme right tail than a r.v. $Y$} if $p_{eR}(X) < p_{eR}(Y)$. The properties of these characteristics are analogous to the corresponding ones for the left tails. Some examples are given in Table~\ref{tab:3}. Again we observe that $p_{mR}(X) < p_{mR}(Y)$ is not equivalent to $p_{eR}(X) < p_{eR}(Y)$. The analysis is analogous to that made above for the left tails. It is well known that, for a fixed distributional type with regularly varying tail, the larger the value of $\alpha$, the lighter the corresponding tail of the distribution. However, when we consider the extremely heavy tails, which of the Pareto and Frechet distributions has the heavier right tail depends on their parameters. If $X \sim Pareto(2, 1)$ and $Y \sim Frechet(\alpha)$, $\alpha \geq 1$, then $X$ has the heavier mild right tail, but $Frechet(0.5)$ has a heavier extremal right tail than $Pareto(0.5, 1)$. 
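The orderings above can be checked directly from the definitions. The following short sketch (our own illustration; the helper names `frechet_tail_probs` and `pareto_tail_probs` are ours, standard library only) computes $p_{mR}$ and $p_{eR}$ from the theoretical quartiles and fences:

```python
import math

def frechet_tail_probs(alpha):
    """p_mR and p_eR for Frechet(alpha), F(x) = exp(-x^(-alpha)), x > 0."""
    F = lambda x: math.exp(-x ** (-alpha))
    # Theoretical quartiles via the quantile function Q(p) = (-log p)^(-1/alpha).
    q1 = math.log(4.0) ** (-1.0 / alpha)
    q3 = math.log(4.0 / 3.0) ** (-1.0 / alpha)
    iqr = q3 - q1
    i_r, o_r = q3 + 1.5 * iqr, q3 + 3.0 * iqr  # inner and outer right fences
    return F(o_r) - F(i_r), 1.0 - F(o_r)       # (p_mR, p_eR)

def pareto_tail_probs(alpha, delta=1.0):
    """p_mR and p_eR for Pareto(alpha, delta), F(x) = 1 - (delta/x)^alpha, x >= delta."""
    sf = lambda x: (delta / x) ** alpha        # survival function
    q1 = delta * (4.0 / 3.0) ** (1.0 / alpha)
    q3 = delta * 4.0 ** (1.0 / alpha)
    iqr = q3 - q1
    i_r, o_r = q3 + 1.5 * iqr, q3 + 3.0 * iqr
    return sf(i_r) - sf(o_r), sf(o_r)          # (p_mR, p_eR)
```

For $Frechet(2)$ this gives $p_{mR}\approx 0.0429$ and $p_{eR}\approx 0.0406$, while for $Pareto(2,1)$ it gives $p_{mR}\approx 0.045$ and $p_{eR}\approx 0.0486$, reproducing the comparison above.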
\begin{table} \centering \caption{Classification of some of the distributions with respect to heaviness of their right tails. \label{tab:3}} \begin{tabular}{|c|c|c|} \hline Distribution & $p_{mR}$ & $p_{eR} = \bar{F}_X(O_R)$ \\ \hline $U(a, b),\, a < b;\ Weibull^-(\alpha)$ & $0$ & $0$ \\ $ N(\mu, \sigma^2)$ & $\approx 0.0035$ & $\approx 0.0000012$ \\ $Gamma(2, \lambda )$, $\lambda > 0$ & $\approx 0.0011$ & $\approx 0.000071$ \\ $Gumbel$ & $\approx 0.0243$ & $\approx 0.0026$ \\ $Exp(\lambda), \, \lambda > 0$ & $\approx 0.0339$ & $\approx 0.0093$ \\ $t(2)$ & $\approx 0.0266$ & $\approx 0.0146$ \\ $Gamma(0.5, \lambda )$, $\lambda > 0$ & $\approx \mathbf{0.0502}$ & $\approx 0.0255$ \\ $Frechet(\alpha)$ & $1 - e^{-(2.5\sqrt[\alpha]{3.48} - 1.5\sqrt[\alpha]{0.72})^{-\alpha}} - p_{eR}$ & $1 - e^{-(4\sqrt[\alpha]{3.48} - 3\sqrt[\alpha]{0.72})^{-\alpha}}$\\ $Frechet(2)$ & $\approx 0.0429$ & $\approx 0.0406$\\ $t(1)$ & $\approx 0.0328$ & $\approx 0.0452$ \\ $Pareto(\alpha, \delta)$ & $\frac{\delta^{-\alpha}}{4}(2.5-1.5\sqrt[\alpha]{\frac{1}{3}})^{-\alpha} - p_{eR}$ & $ \frac{\delta^{-\alpha}}{4}(4-3\sqrt[\alpha]{\frac{1}{3}})^{-\alpha}$ \\ $Pareto(2, 1) $ & $\approx \mathbf{0.045}$ & $\approx 0.0486$ \\ $Frechet(1)$ & $\approx 0.0415$ & $\approx 0.0817$ \\ $Pareto(1, 1)$ & $\approx 0.0417$ & $\approx 0.0833$ \\ $Pareto(0.5, 1)$ & $\approx 0.0331$ & $\approx 0.1306$ \\ $Frechet(0.5)$ & $\approx 0.0323$ & $\approx 0.1360$\\ \hline \end{tabular} \end{table} Note that if $X \sim Gamma(\alpha, \lambda)$, $\lambda > 0$, then $p_{mL}(X)$, $p_{eL}(X)$, $p_{mR}(X)$ and $p_{eR}(X)$ do not depend on $\lambda$. \subsection{Classification of the distributions with respect to heaviness of their two-sided tails} Here, for the sake of completeness, we consider the two-sided heavy-tailedness of the distributions. However, in practice it is better to make a more detailed comparison of the probabilities to have one-sided left or right, mild or extreme outliers. 
It gives us a more comprehensive picture about the tail behaviour of the observed distribution. {\bf{Definition 9.}} We call a r.v. $X$ and its c.d.f. $F$, {\it $p_{m2}(X)$-mild-heavy two-tailed} if $$P\big(\{Q_{1}(F) - 3IQR(F) < X \leq Q_{1}(F) - 1.5IQR(F)\} \cup \{Q_{3}(F) + 1.5IQR(F) < X \leq Q_{3}(F) + 3IQR(F)\}\big) = p_{m2}(X).$$ {\bf{Definition 10.}} {\it A r.v. $X$ and a r.v. $Y$ belong to one and the same $p_{m2}$-mild-heavy two-tailed class} if $p_{m2}(X) = p_{m2}(Y)$. {\it A r.v. $X$ with c.d.f. $F$ has lighter mild two-tails than a r.v. $Y$} if $p_{m2}(X) < p_{m2}(Y)$. {\bf{Definition 11.}} A r.v. $X$ and its c.d.f. $F$ are called {\it $p_{e2}(X)$-extremely heavy two-tailed} if $$P\big(\{X < Q_{1}(F) - 3IQR(F)\} \cup \{X > Q_{3}(F) + 3IQR(F)\}\big) = p_{e2}(X).$$ {\bf{Definition 12.}} {\it A r.v. $X$ and a r.v. $Y$ belong to one and the same $p_{e2}$-extremely heavy two-tailed class} if $p_{e2}(X) = p_{e2}(Y)$ and {\it a r.v. $X$ has lighter extreme two-tails than a r.v. $Y$} if $p_{e2}(X) < p_{e2}(Y)$. {\it Note:} Again, the equalities $p_{m2}(X) = p_{m2}(Y)$ or $p_{e2}(X) = p_{e2}(Y)$ do not necessarily mean that $X \stackrel{\rm d}{=} Y$. In Table~\ref{tab:6} we have presented the values of $p_{m2}(X)$ and $p_{e2}(X)$ for some probability laws. See Figure~\ref{fig:Fig5}, a) and b). \begin{figure} \begin{minipage}[t]{0.5\linewidth} \centerline{% \includegraphics [width=.78\textwidth]{M2}} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centerline{% \includegraphics [width=.78\textwidth]{E2}} \caption{Relation between the plot of the p.d.f. of a r.v. $X$, $p_{m2}(X) = p_{mL}(X)+p_{mR}(X)$ and $p_{e2}(X)= p_{eL}(X)+p_{eR}(X)$. \label{fig:Fig5}} \end{minipage} \end{figure} \begin{table} \centering \caption{Classification of some of the distributions with respect to heaviness of their two-sided tails. 
\label{tab:6}} \begin{tabular}{|c|c|c|} \hline Distribution & $p_{m2}(X) = p_{mL}(X) + p_{mR}(X)$ & $p_{e2}(X) = F_X(O_L) + \bar{F}_X(O_R)$ \\ \hline U(a, b) & 0 & 0 \\ N($\mu$, $\sigma^2$) &$\approx 0.007 $ & $\approx 0.000002$ \\ $Gamma(2, \lambda )$, $\lambda > 0$ & $\approx 0.0011$ & $\approx 0.000071$ \\ $Weibull^-(\alpha)$ & $\exp\{-(2.5 \log^{1/\alpha}4 - 1.5\log^{1/\alpha}\frac{4}{3} )^{\alpha}\} - p_1$ & $p_1 = \exp\{-(4 \log^{1/\alpha}4-3\log^{1/\alpha}\frac{4}{3})^\alpha\}$\\ $Weibull^-(2)$ & $\approx 0.0102$ & $\approx 0.000067$\\ $Gumbel$ & $\approx 0.0243$ & $\approx 0.0026$ \\ $-Exp(\lambda),\, Exp(\lambda), \, \lambda > 0$ & $\approx 0.0339$ & $\approx 0.0093$ \\ $Weibull^-(1)$ & $\approx 0.0388$ & $\approx 0.0093$\\ $Gamma(0.5, \lambda )$, $\lambda > 0$ & $\approx \mathbf{0.0501}$ & $\approx 0.0255$ \\ $t(2)$ & $\approx \mathbf{0.0532}$ & $\approx 0.0293$ \\ $Frechet(\alpha)$ & $1 - e^{-(2.5\sqrt[\alpha]{3.48} - 1.5\sqrt[\alpha]{0.72})^{-\alpha}} - p_3$ & $p_3 = 1 - e^{-(4\sqrt[\alpha]{3.48} - 3\sqrt[\alpha]{0.72})^{-\alpha}}$\\ $Frechet(2)$ & $\approx 0.0429$ & $\approx 0.0406$\\ $ Pareto(\alpha, \delta)$ & $\frac{\delta^{-\alpha}}{4}(2.5-1.5\sqrt[\alpha]{\frac{1}{3}})^{-\alpha} - p_2$ & $p_2 = \frac{\delta^{-\alpha}}{4}(4-3\sqrt[\alpha]{\frac{1}{3}})^{-\alpha}$ \\ $Pareto(2, 1) $ & $\approx 0.045$ & $\approx 0.0486$ \\ $Weibull^-(0.5)$ & $\approx 0.0495$ & $\approx 0.0654$\\ $Frechet(1)$ & $\approx 0.0415$ & $\approx 0.0817$ \\ $Pareto(1, 1)$ & $\approx 0.0417$ & $\approx 0.0833$ \\ $t(1)$ & $\approx \mathbf{0.0656}$ & $\approx 0.0903$ \\ $Pareto(0.5, 1)$ & $\approx 0.0331$ & $\approx 0.1306$ \\ $Frechet(0.5)$ & $\approx 0.0323$ & $\approx 0.1360$\\ \hline \end{tabular} \end{table} \subsection{Algorithm for applications} Considering the outliers in a sample and comparing their relative frequencies with $p_{mL}$, $p_{eL}$, $p_{mR}$ and $p_{eR}$, we are able to better model the tails of the distribution of the observed r.v. 
The algorithm is the following: \begin{enumerate} \item Determine $\hat{Q}_1$, $\hat{Q}_2$, $\hat{Q}_3$, $I\hat{Q}R$, $\hat{I}_L$, $\hat{O}_L$, $\hat{I}_R$, $\hat{O}_R$ and compare the empirical box-plot with the theoretical box-plot of the chosen distributions. \item Determine the relative frequencies of the left and right, mild and extreme outliers. \item Make confidence intervals, based on the relative frequencies of mild or extreme outliers. Compare these relative frequencies with $p_{mL}$ and $p_{mR}$ in the list of distributions and choose appropriate classes of distributions for modelling the probability law of the observed r.v. \item Make confidence intervals, based only on the relative frequencies of extreme outliers. Compare these relative frequencies with $p_{eL}$ and $p_{eR}$ in the list of distributions chosen in step 3, and find the most appropriate distributional types for modelling the observed r.v. \item Estimate the parameters of the chosen distributions. \item Use some goodness-of-fit test to choose the best model. \end{enumerate} \section{FIVE NEW ESTIMATORS OF THE EXTREMAL INDEX. EMPIRICAL STUDY.} In this section we suppose that $\widehat{Q}_1 > 0$, $\widehat{Q}_1 \not= 1$, and at least one of the following two conditions holds: $\widehat{Q}_1\not = \widehat{Q}_3$ or $\widehat{O}_R > 1$. We propose to model the observed r.v. with an appropriate distribution with regularly varying tail, i.e. such that $\lim_{y \to \infty} \frac{1 - F(xy)}{1 - F(y)} = x^{-\alpha}$ for all $x > 0$, and present five distribution-sensitive estimators of the parameter $\alpha$. The relative frequency $\hat{p}_{eR}$ of the right extreme outliers in the sample is a strongly consistent and unbiased estimator of $p_{eR}$. The right outer fence $\hat{O}_R$ is an asymptotically consistent estimator of the theoretical $O_R$. The following two estimators have a very fast rate of convergence in the case when the observed r.v. is $Pareto(\alpha)$ distributed. See the empirical study and Table~\ref{tab:estimators}. 
$$\hat{\alpha}_{Par,n} = -\frac{\log \widehat{p}_{eR}}{\log\widehat{O}_R}, \quad \hat{\alpha}_{Q, Par,n} = \frac{\log 3}{\log \widehat{Q}_3 - \log \widehat{Q}_1}.$$ They have approximately the same properties as the Hill and the t-Hill estimators. \begin{landscape} \begin{table} \centering \caption{Empirical results. \label{tab:estimators}} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Distribution} & \multirow{2}{*}{$n$} &\multicolumn{2}{c|}{$\hat{\alpha}_{Par,n}$} & \multicolumn{2}{c|}{$\hat{\alpha}_{Q, Par,n}$} & \multicolumn{2}{c|}{$\hat{\alpha}_{Frech, n}$} & \multicolumn{2}{c|}{$\hat{\alpha}_{Q, Frech, n}$} & \multicolumn{2}{c|}{$\hat{\alpha}_{Q, HillH, n}$} & \multirow{2}{*}{The best}\\ \cline{4-13} \multicolumn{2}{|c|}{of $\xi$.} & & Mean & St. Dev. & Mean & St. Dev. & Mean & St. Dev. & Mean & St. Dev. & Mean & St. Dev.& estimator\\ \hline \multirow{12}{*}{$Pareto(\alpha)$} & \multirow{4}{*}{$\alpha = 0.5$} & $30$ & 0.5463 & 0.1356& {\bf 0.5116} & 0.1442 & 0.531 & 0.138 & 0.7323 &0.2064 & 1.2437 & 29.8589 & $\hat{\alpha}_{Q, Par,n}$\\ & & $100$ &0.5119& 0.0669 & {\bf 0.5035} & 0.0756 & 0.4954 & 0.0683 &0.7207 &0.1083 & 0.3904 & 74.1974& $\hat{\alpha}_{Q, Par,n}$\\ & & $1000$ & 0.5015 & 0.0204 & {\bf 0.5005} & 0.0238 & 0.4846 & 0.0208 & 0.7164 & 0.034& 1.7906 & 0.3105& $\hat{\alpha}_{Q, Par,n}$\\ & & $10000$ & {\bf 0.5001} & 0.0063 & 0.5002 & 0.0074 & 0.4831 & 0.0063 & 0.716 &0.0106 & 1.761& 0.0922&$\hat{\alpha}_{Par,n}$\\ \cline{2-14} & \multirow{4}{*}{$\alpha = 1$} & $30$ & 1.0671 &0.2528 & {\bf 1.026} & 0.2869 & 1.05 & 0.2568 & 1.4685 & 0.4106 & -2.1133 & 17.9816 & $\hat{\alpha}_{Q, Par,n}$ \\ & & $100$ & 1.0312& 0.1486 & {\bf 1.0092} & 0.1516 & 1.0141 & 0.1518 & 1.4446 & 0.2171 & -3.0747 &9.8974 & $\hat{\alpha}_{Q, Par,n}$\\ & & $1000$ & 1.0022 & 0.0422 & {\bf 1.0006} & 0.0469 & 0.9848 & 0.0432 & 1.4322& 0.0671&-2.3396& 0.2569&$\hat{\alpha}_{Q, Par,n}$\\ & & $10000$ & 1.0004 & 0.0132 & {\bf 1} & 0.0146& 0.983& 0.0136& 
1.4314&0.021& -2.324& 0.081& $\hat{\alpha}_{Q, Par,n}$\\ \cline{2-14} & \multirow{4}{*}{$\alpha = 2$} & $30$ & {\bf1.9996} & 0.4313 & 2.0514 & 0.566 & 1.9789 & 0.4351 & 2.9364 & 0.8102 & -1.134 & 0.2572& $\hat{\alpha}_{Par,n}$\\ & & $100$ & 2.0772 & 0.3332 & {\bf2.0185} & 0.3021 & 2.0607 & 0.338 & 2.8893 & 0.4325 & -1.0865 & 0.0938 & $\hat{\alpha}_{Q, Par,n}$\\ & & $1000$ & 2.0083 & 0.0937 & {\bf2.0021} & 0.094 & 1.9919 & 0.0953 & 2.8658 & 0.1346 & -1.0766& 0.0273& $\hat{\alpha}_{Q, Par,n}$\\ & & $10000$ & {\bf2} & 0.0292 & 1.9998 & 0.03 & 1.9836 & 0.0297 & 2.8624 & 0.043 &-1.0737 &0.0087& $\hat{\alpha}_{Par,n}$\\ \hline \multirow{12}{*}{$Frechet(\alpha)$} & \multirow{4}{*}{$\alpha = 0.5$} & $30$ & 0.5738 & 0.1549& 0.354 & 0.0826 & 0.5568 & 0.1564 & {\bf0.5067} & 0.1182 & 0.9214 & 9.8233& $\hat{\alpha}_{Q, Frech, n}$\\ & & $100$ & 0.5335 & 0.0738 & 0.3508 & 0.0442 & 0.5151 & 0.0747 & {\bf0.5021} & 0.0632 & 0.7301 &0.2186& $\hat{\alpha}_{Q, Frech, n}$\\ & & $1000$ & 0.5201 & 0.0222 & 0.3494 & 0.0139 & 0.5014 & 0.0225 & {\bf0.5001} & 0.0199 & 0.7012& 0.0564 & $\hat{\alpha}_{Q, Frech, n}$\\ & & $10000$ & 0.519 & 0.007 & 0.3494 & 0.0044 & 0.5002 & 0.007 & {\bf0.5001} & 0.0062 & 0.6989& 0.0175 &$\hat{\alpha}_{Q, Frech, n}$\\ \cline{2-14} & \multirow{4}{*}{$\alpha = 1$} & $30$ & 1.0825 & 0.2599 & 0.7066 & 0.1648 & 1.0654 & 0.2637 & {\bf 1.0115} & 0.2358 & 3.2049 & 324.4615 & $\hat{\alpha}_{Q, Frech, n}$\\ & & $100$ & 1.0506 & 0.1534 & 0.7023 & 0.0892 & 1.0337 & 0.1565 & {\bf1.0053} & 0.1277 & 5.5908 & 707.2482 &$\hat{\alpha}_{Q, Frech, n}$\\ & & $1000$ & 1.0209 & 0.0445 & 0.6989 & 0.0276 & 1.0038 & 0.0454 & {\bf1.0004 } & 0.0396 & -15.003& 1031.205 & $\hat{\alpha}_{Q, Frech, n}$\\ & & $10000$ & 1.0174 & 0.014 & 0.6987 & 0.0088 & 1.0003 & 0.0143 & {\bf1.0001} & 0.0126 & 287.2494 & 14959.87& $\hat{\alpha}_{Q, Frech, n}$\\ \cline{2-14} & \multirow{4}{*}{$\alpha = 2$} & $30$ & 1.9469 & 0.3957 & 1.4131 & 0.3275 & 1.9289 & 0.3988 & {\bf2.0227} & 0.4688 & -1.5699 & 1.755& 
$\hat{\alpha}_{Q, Frech, n}$\\ & & $100$ & 2.094 & 0.3418 & 1.4031 & 0.178 & 2.081 & 0.3461 & {\bf2.0084} & 0.2548 & -1.4384& 0.1996 &$\hat{\alpha}_{Q, Frech, n}$\\ & & $1000$ & 2.0202 & 0.0978 &1.3975 & 0.0554 & 2.0072 & 0.0993 & {\bf2.0004} & 0.0794 & -1.4013 & 0.0553 & $\hat{\alpha}_{Q, Frech, n}$\\ & & $10000$ & 2.0144 & 0.0309 & 1.3974 & 0.0175 & 2.0014 & 0.0313 & {\bf2.0002} & 0.025 & -1.3975 & 0.0176& $\hat{\alpha}_{Q, Frech, n}$\\ \hline \multirow{12}{*}{$Hill-$} & \multirow{4}{*}{$\alpha = 0.5$} & $30$ &{\bf0.4692} & 0.1223& 0.2921 & 0.0675& 0.4534& 0.1234 & 0.4181 &0.0966 & 2.164 & 0.7827& $\hat{\alpha}_{Par,n}$\\ & & $100$ &{\bf0.4393} & 0.0591& 0.2925& 0.0367& 0.4221& 0.0598&0.4186 &0.0525& 0.5647& 0.9664& $\hat{\alpha}_{Par,n}$\\ & & $1000$ & 0.4286 & 0.0178& 0.2915& 0.0115& 0.411& 0.0179& 0.4173& 0.0165& {\bf 0.5012} & 0.0342 & $\hat{\alpha}_{Q, HillH, n}$\\ & & $10000$ & 0.4278 & 0.0057 & 0.2915 & 0.0037 & 0.4102 & 0.0057 & 0.4172 & 0.0052 & {\bf 0.5001} & 0.0107 & $\hat{\alpha}_{Q, HillH, n}$\\ \cline{2-14} & \multirow{4}{*}{$\alpha = 1$} & $30$ & {\bf0.8051} & 0.205 & 0.4134 & 0.0926 & 0.788 & 0.2077 & 0.5917 & 0.1325 & 2.6214 & 187.5964& $\hat{\alpha}_{Par,n}$\\ & & $100$ & 0.7581 & 0.1056& 0.4127 & 0.0505& 0.7403 & 0.1075 & 0.5908 & 0.0723 & {\bf1.0795}& 0.4421& $\hat{\alpha}_{Q, HillH, n}$\\ & & $1000$ & 0.7377 & 0.0315 & 0.4115 & 0.0158 & 0.7194 & 0.0321 & 0.589 & 0.0227 & {\bf1.0055} & 0.0948 & $\hat{\alpha}_{Q, HillH, n}$\\ & & $10000$ & 0.7355 & 0.0098 & 0.4113 & 0.005 & 0.7172 & 0.01 & 0.5887 & 0.0072 & {\bf1.0008} & 0.0908& $\hat{\alpha}_{Q, HillH, n}$\\ \cline{2-14} $Horror(\alpha)$ & \multirow{4}{*}{$\alpha = 2$} & $30$ & 1.2185 & 0.283 & 0.5198 & 0.1143 & 1.2019 & 0.2859 & 0.744 & 0.1636 & {\bf1.543} &69.0628& $\hat{\alpha}_{Q, HillH, n}$\\ & & $100$ &1.1965 &0.1785 & 0.519 & 0.0631 & 1.1812 & 0.1816 & 0.7429 & 0.0903 & {\bf2.8577} & 28.6543&$\hat{\alpha}_{Q, HillH, n}$\\ & & $1000$ & 1.1554 & 0.05 & 0.5178 & 0.0196 & 1.1399 & 0.051 & 
0.7411 & 0.0281 & {\bf2.0379} &0.3168 & $\hat{\alpha}_{Q, HillH, n}$\\ & & $10000$ & 1.1529 & 0.0159 & 0.5177 & 0.0062 & 1.1374 & 0.0162 & 0.741 & 0.0089 & {\bf2.0044}& 0.0932& $\hat{\alpha}_{Q, HillH, n}$\\ \hline \end{tabular} \end{table} \end{landscape} The second group of two estimators, $$\hat{\alpha}_{Frech, n} = -\frac{\log(-\log(1 - \widehat{p}_{eR}))}{\log\widehat{O}_R},\quad \hat{\alpha}_{Q, Frech, n} = \frac{\log(\log 4) - \log(\log(4/3))}{\log \widehat{Q}_3 - \log \widehat{Q}_1},$$ performs better in cases when the observed r.v. is $Frechet(\alpha)$ distributed. We should mention that, for estimating the parameter of the Pareto distribution, the Hill estimator (see e.g.\ \cite{hill1975simple}) is well known to be the best estimator. With respect to robustness, their behaviour is comparable with that of the t-Hill estimator (see \cite{jordanova2012weak, fabian2009ifas}). The last estimator is the most appropriate in case the observed r.v. has the Hill-Horror distribution with quantile function $$F^\leftarrow(p) = (1 - p)^{-1/\alpha}(-\log(1 - p)), \quad p \in (0, 1)$$ (see \cite{EMK}). This estimator is defined by $$\hat{\alpha}_{Q, HillH, n} = \frac{\log 3}{\log \widehat{Q}_3 +\log(\log(4/3)) - \log \widehat{Q}_1 - \log(\log 4)}.$$ Let us make a brief empirical investigation of these estimators. For different but fixed $n = 30, 10^2, 10^3, 10^4$, we have made $m = 10^4$ samples of size $n$ of observations on one and the same r.v. Within these $m = 10^4$ samples the type and the parameters are one and the same, but between experiments the type varies among the $Pareto(\alpha)$, $Frechet(\alpha)$ and $Hill$-$Horror(\alpha)$ distributions for different values of $\alpha$. Then we have calculated $10^4$ values of each of $\hat{\alpha}_{Par,n}$, $\hat{\alpha}_{Q, Par,n}$, $\hat{\alpha}_{Frech, n}$, $\hat{\alpha}_{Q, Frech, n}$ and $\hat{\alpha}_{Q, HillH, n}$, and finally the corresponding means and standard deviations. The results are given in Table~\ref{tab:estimators}. 
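As a minimal illustration of how the first pair of estimators can be computed in practice (our own sketch; the helper names are ours, standard library only), one can draw a $Pareto(\alpha, 1)$ sample by the inverse transform, take the empirical quartiles and the right outer fence, and plug them into the formulas above:

```python
import math
import random
import statistics

def alpha_hat_Q_par(sample):
    # Quartile-based estimator: alpha = log 3 / (log Q3 - log Q1).
    q1, _, q3 = statistics.quantiles(sample, n=4)
    return math.log(3.0) / (math.log(q3) - math.log(q1))

def alpha_hat_par(sample):
    # Outlier-based estimator: alpha = -log(p_eR_hat) / log(O_R_hat).
    q1, _, q3 = statistics.quantiles(sample, n=4)
    o_r = q3 + 3.0 * (q3 - q1)                         # empirical right outer fence
    p_eR = sum(x > o_r for x in sample) / len(sample)  # share of extreme right outliers
    return -math.log(p_eR) / math.log(o_r)

rng = random.Random(2017)
alpha = 2.0
# Inverse transform for Pareto(alpha, 1): F^{-1}(u) = (1 - u)^(-1/alpha).
sample = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(100_000)]
print(alpha_hat_par(sample), alpha_hat_Q_par(sample))  # both should be close to 2
```

With a sample of size $10^5$ both estimates land close to the true $\alpha = 2$, in line with the $Pareto(\alpha)$ rows of Table~\ref{tab:estimators}.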
The best estimator in any particular case is the one that takes into account the type of the observed r.v. Therefore, the choice of the distribution is the most important step for the estimation of the index of regular variation. Nevertheless, we have found good estimators for the regularly varying index even when the observed distribution is only approximately regularly varying. The most dangerous case is again the Hill-Horror distributed one. The question about the estimation of $\alpha$ for small samples of such data, e.g.\ when $n \leq 30$, is still open. In these cases, however, it seems unrealistic to find a good estimator of the tail index, because, due to the slow rate of convergence, with very high probability the sample does not contain enough information about the tail of the distribution. \section{ACKNOWLEDGMENTS} The authors were partially supported by the Project RD-08-96/06.02.2017 from the Scientific Research Fund of the University of Shumen and by grant No 80-10-146/21.04.2017 of Sofia University, Bulgaria. \nocite{*} \bibliographystyle{aipnum-cp}%
https://arxiv.org/abs/1412.4271
Multi-Context Models for Reasoning under Partial Knowledge: Generative Process and Inference Grammar
Arriving at the complete probabilistic knowledge of a domain, i.e., learning how all variables interact, is indeed a demanding task. In reality, settings often arise for which an individual merely possesses partial knowledge of the domain, and yet, is expected to give adequate answers to a variety of posed queries. That is, although precise answers to some queries, in principle, cannot be achieved, a range of plausible answers is attainable for each query given the available partial knowledge. In this paper, we propose the Multi-Context Model (MCM), a new graphical model to represent the state of partial knowledge as to a domain. MCM is a middle ground between Probabilistic Logic, Bayesian Logic, and Probabilistic Graphical Models. For this model we discuss: (i) the dynamics of constructing a contradiction-free MCM, i.e., to form partial beliefs regarding a domain in a gradual and probabilistically consistent way, and (ii) how to perform inference, i.e., to evaluate a probability of interest involving some variables of the domain.
\section*{\centerline{APPENDIX}} \subsubsection*{A-I $\mathcal{I}_{non-scale}^\ast$: A short version of $\mathcal{I}^\ast$ without scale-invariance property} \label{A-I} $\mathcal{I}^\ast$ aims at minimally parameterizing the information contained in an MCM so that the posed inter-contextual query can be stated as an LP with the fewest number of parameters. As pointed out earlier in Sec. \ref{Sec:Grammar}, $\mathcal{I}^\ast$ has to decide on the following: (i) what RVs have to be included in the LP, and (ii) the abstraction level required to minimally encode the information on the RVs identified in step (i) for the LP, in our case, the parametrization of the identified RVs. In what follows, a simple algorithm, $\mathcal{I}_{non-scale}^\ast$, is sketched that performs only (i) and ignores (ii). In other words, $\mathcal{I}_{non-scale}^\ast$ identifies the relevant RVs needed to derive the \emph{exact} lower/upper bound for the inter-contextual query; however, it does not aim at minimally encoding them into the LP\footnote{To read more on this, the reader is referred to the discussion on scale-invariance property in Sec. \ref{Sec:Grammar} and Sec. A-III of the Appendix.}. $\mathcal{I}_{non-scale}^\ast$ consists of three steps: \begin{itemize} \item[(1)] Identify all the RVs involved in the posed query (e.g., in $P(X|Y,z)$ these are the random vectors $X$ and $Y$ and the RV $z$). \item[(2a)] If any two of the already identified RVs belong to two overlapping contexts, identify all the \emph{overlapping} RVs between these two contexts (e.g., in Fig. 5(b) and for the query $P(X|Y)$ for which step (1) would identify $X$ and $Y$, random vector $Z$ in the overlapping region must be identified as well). \item[(2b)] If any two of the already identified RVs belong to two contexts connected through a chain of overlapping contexts, identify all the RVs contained in all the \emph{overlapping} regions of the chain of contexts. 
\item[(3)] Parameterize only the identified RVs in steps (1), (2a), and (2b) (remove all the other RVs from the MCM---there is no need to encode the information on any other RVs not identified in steps (1), (2a), and (2b)). \hfill $\blacksquare$ \end{itemize} It should be noted that whether the posed query involves minimization or maximization does not affect which RVs need to be identified by $\mc I_{non-scale}^{\ast}$. Finally, it is worth noting that with a minor modification to step (3) of $\mathcal{I}_{non-scale}^\ast$, the scale-invariance property could be achieved. The modification has to do with the question of how to \emph{minimally} encode the information on each RV identified in steps (1), (2a), and (2b) of $\mathcal{I}_{non-scale}^\ast$. To demonstrate the operation of $\mc I_{non-scale}^{\ast}$ on a more complicated MCM that involves loops, consider the following example sketched in Fig. \ref{A_alg_sample}(a). The query of interest is $\mathbb P(X|Y)_\downarrow$. \begin{figure}[h!] \centering \includegraphics[width=0.38\textwidth]{fig_g8_alg_apply_ext_2.pdf} \caption{(a) Sample MCM. The RVs involved in the posed query are depicted in blue. (b) In Step (1) $\bb{X}$ and $\bb{Y}$ are identified; in step (2b) the RVs $\bb{b}$, $\bb{d}$ as well as $\bb{a}$, $\bb{c}$, and $\bb{e}$ are identified. According to step (3) of $\mc I_{non-scale}^{\ast}$ all of the information as to the RVs $\bb{X}$, $\bb{Y}$, $\bb{b}$, $\bb{d}$, $\bb{a}$, $\bb{c}$, and $\bb{e}$ has to be stated as an LP to derive the query.} \label{A_alg_sample} \end{figure} Next, we are going to sketch the proof for $\mc I_{non-scale}^{\ast}$. Let us first state the claim formally and then provide the proof. 
\subsubsection*{A-II Proof for $\mc I_{non-scale}^{\ast}$:} \textbf{Lemma}: \emph{Given a posed query and an MCM, if all the information on the RVs identified in steps (1) to (2b) of $\mc I_{non-scale}^{\ast}$ is stated and then solved as an LP, the exact solution (i.e., a min or max) can be derived for the posed query; all the remaining information available in the MCM is deemed irrelevant to the derivation of the query, hence the sufficiency.} \textbf{Proof:} Our proof is constructive. In the proof we entertain two ideas, namely (i) the idea of generative process and, particularly, that of \emph{conditioning} also used in Sec. \ref{Sec:Gen}, and (ii) the notion we refer to as the \emph{locality of information}. Suppose that all the RVs discussed in steps (1) to (2b) of $\mc I_{non-scale}^{\ast}$ are identified. The key insight is that the information on how the remaining RVs probabilistically interact with each other is completely local in nature and, therefore, irrelevant to the derivation of the posed query. To see this, one can start off with the identified RVs and then in a gradual fashion add on\footnote{This is based on the fundamental property that a JPD can be expanded using the chain rule of probability in an arbitrary order.} the rest of the RVs (through the idea of conditioning discussed in Sec. \ref{Sec:Gen}). Quite crucially, this very process of adding the non-identified RVs to the model can be done completely in a local fashion, i.e., without imposing any constraints on how the identified RVs probabilistically interact. The mere fact that those RVs can be added into the model: (i) subsequent to the identified ones, and (ii) without inducing any sort of constraints on the identified ones, deems them irrelevant to the derivation of the query. \hfill $\blacksquare$ \subsubsection*{A-III Scale-Invariance Property: Intuition} Here, we will provide a proof for the example on scale-invariance property given in Sec. \ref{Sec:Grammar}. 
Although the proof is provided for a special query, the methodology used in the proof provides an insightful way of \emph{visualizing} an inference problem. The idea behind the proof is very simple and related to visualizing the connection of a RV to the underlying sample space using Venn diagrams. Without loss of generality, we assume that all the RVs present in the domain are binary\footnote{The generalization of the argument to non-binary RVs is straightforward.}. Random vector $\bb{X}=\bb{x}_{1:n}$ partitions the sample space $\Omega$ into $2^n$ disjoint regions each of which corresponds to a realization of $\bb{X}$. If each realization of the random vector $\bb{x}_{1:n}$ corresponds to a binary number (i.e., binary-coding the realizations), then one can conclude $\textit Val(\bb{X})=\{0,1,\cdots,2^n-1\}$. Let us index the partitions by their corresponding realization of $\bb{X}$. An illustrative example of an induced partitioning of the sample space $\Omega$ due to random vector $\bb{X}=\bb{x}_{1:n}$ is depicted in Fig. \ref{fig_scale_inv}(a), and a partitioning induced by RVs $\bb{y}$ and $\bb{z}$ is sketched in Fig. \ref{fig_scale_inv}(b). We note that the mere knowledge of the distribution function of a random quantity does not provide one with the knowledge of the underlying partitions. For this particular example, since the JPD over $\bb{X},\bb{y},\bb{z}$ is not available, the knowledge of how the partitions induced by $\bb{y},\bb{z}$ (Fig. \ref{fig_scale_inv}(b)) and the ones induced by $\bb{X}$ (Fig. \ref{fig_scale_inv}(a)) interact, i.e., to what extent they overlap, remains unspecified. Therefore, since $\mathbb P(X|y)=\frac{\mathbb P(X,y)}{\mathbb P(y)}$, to minimize (maximize) $\mathbb P(X|y)$, the quantity $\mathbb P(X,y)$ has to be minimized (maximized). 
Pictorially, the minimization (maximization) of $\mathbb P(X,y)$ corresponds to the minimization (maximization) of the overlap between the partitions corresponding to the events $\{\bb{X}=X\}$ and $\{\bb{y}=y\}$; hence, very simply, $\textstyle\mathbb P(X,y)_\downarrow=[\mathbb P(X)+\mathbb P(y)-1]^+$ and $\textstyle\mathbb P(X,y)_\uparrow=\min\{\mathbb P(X),\mathbb P(y)\}$. The key point, which yields the scale-invariance property, is that to derive the minimum (maximum) overlap between the partitions corresponding to the events $\{\bb{X}=X\}$ and $\{\bb{y}=y\}$ \emph{the information as to how the other partitions---corresponding to the other realizations of the present RVs in the model---interact with one another neither needs to be known nor to be encoded into the LP}; a fact which results in not requiring to encode the information as to the other realizations. Hence the only pieces of information that are required to be encoded and then solved as an LP are $\mathbb P(X)$ and $\mathbb P(y)$. The same line of reasoning could be adopted for $\mathbb P(x_i|y)$. The idea of scale-invariance, therefore, aims to avoid the encoding of the information as to the partitions induced on $\Omega$ which are yet deemed to be irrelevant to the derivation of the posed query; hence one needs to encode solely the relevant ones into the LP. \begin{figure}[h!] \centering \includegraphics[width=0.48\textwidth]{fig_scale_inv.pdf} \caption{Sample Space: (a) Partitioning induced on $\Omega$ due to $\bb{X}=\bb{x}_{1:n}$. The blue region corresponds to the partition associated to the event $\{\bb{x}_i=0\}$ and the red one to that of $\{\bb{X}=i\}$ where $i \in \textit Val(\bb{X})$. (b) Partitioning induced on $\Omega$ due to RVs $\bb{y}$ and $\bb{z}$. The blue region corresponds to the partition associated to the event $\{\bb{y}=0\}$.} \label{fig_scale_inv} \end{figure} \subsubsection*{Acknowledgement} The authors would like to thank the anonymous reviewers for their valuable comments. 
This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC) under grant RGPIN 262017 and by the Fonds Quebecois de la Recherche sur la Nature et les Technologies (FQRNT). \section{DISCUSSION} \label{sec:work} We will now discuss related work so as to build a connection between ours and previous attempts to incorporate partial probabilistic knowledge of a domain in the task of inference. Attempting to combine Probabilistic Logic and BNs, the authors in \cite{andersen1990probabilistic,andersen1994bayesian} formulate the inference problem as an optimization problem subject to non-linear constraints so as to incorporate the conditional independence relations embedded in the BN. However, in our proposed framework, the issue of dealing with conditional independence relations does not arise at all, because these relations are dealt with during the derivation process of intra-contextual probabilities. The authors of \cite{hansen1995models} point out that one could avoid non-linear optimization when the value of a conditional probability is at least imprecisely known. For example, the constraint $\mathbb P(a|b)=\mathbb P(a)$, if the value of $\mathbb P(a)$ is known either precisely or imprecisely within some interval $[\alpha,\beta]$, can be written (for $\mathbb P(b)>0$) as \begin{eqnarray*} \frac{\mathbb P(a,b)}{\mathbb P(b)}=\mathbb P(a) \in [\alpha,\beta] \Leftrightarrow \left\{ \begin{array}{lc} \mathbb P(a,b)-\alpha \mathbb P(b)\geq0,& \\ \mathbb P(a,b)-\beta \mathbb P(b)\leq0.& \end{array} \right. \end{eqnarray*} Hence, the independence relation $\mathbb P(a|b)=\mathbb P(a)$ can be formulated as a number of linear constraints. However, the main drawback of this approach is that encoding a conditional independence relation such as $\mathbb P(\bb{x}|\bb{y},\bb{a}_1,\cdots, \bb{a}_n)=\mathbb P(\bb{x}|\bb{y})$ requires a number of linear equations that is exponential in $n$ to be introduced into the optimization problem \cite{andersen1994bayesian}.
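The equivalence between the ratio constraint and its linearization is easy to check numerically; a small sketch assuming $\mathbb P(b)>0$ (all probability values are hypothetical):

```python
import itertools

# Check of the linearization above (for P(b) > 0): the nonlinear ratio
# constraint P(a,b)/P(b) in [alpha, beta] holds iff both linear
# constraints hold. All probability values below are hypothetical.
def ratio_in_interval(p_ab, p_b, alpha, beta):
    return alpha <= p_ab / p_b <= beta

def linear_constraints_hold(p_ab, p_b, alpha, beta):
    return p_ab - alpha * p_b >= 0 and p_ab - beta * p_b <= 0

# The two formulations agree on a small grid of candidate values.
for p_ab, p_b in itertools.product([0.1, 0.2, 0.4], [0.4, 0.5, 0.8]):
    if p_ab <= p_b:  # P(a,b) can never exceed P(b)
        assert ratio_in_interval(p_ab, p_b, 0.2, 0.6) == \
            linear_constraints_hold(p_ab, p_b, 0.2, 0.6)
```

Unlike the ratio form, the linear form can be handed directly to an LP solver, which is the point made in \cite{hansen1995models}.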
Drawing on the idea of Context-Specific Independence (CSI) \cite{boutilier1996context}, the authors of \cite{geiger1991advances} propose the Bayesian Multinet model, which aims at taking advantage of the existing CSIs to perform inference by modeling a single BN as multiple context-specific BNs. Translated into our multi-context setting, the Bayesian Multinet model corresponds to the case where the whole domain is modeled as a single BN, i.e., a single-context MCM, that can be decomposed into multiple BNs, each being valid for a specific instantiation of some RVs in the domain. The authors of \cite{kiefllingtowards} point out the same concerns which led us to propose MCM, namely: (i) if unverified (in)dependencies are imposed between the variables in the domain, then implausible results may arise; (ii) PGMs require one to have complete probabilistic knowledge of a domain, which may not be available. Motivated by these, \cite{kiefllingtowards} gives a collection of rules to carry out inference in a domain. Broadly speaking, this work is similar to ours in spirit, with the main distinction being the level of abstraction chosen to perform inference. In \cite{kiefllingtowards}, inference is performed in a very local and rule-based fashion and conditional independence relations are dealt with directly, which complicates the task at hand; an approach that becomes infeasible when it comes to dealing with domains of many variables. In our case, by introducing the notion of context and encoding conditional independence relations within contexts, we avoid having to contemplate the intra-contextual inference problem and leave this task to the corresponding context. This way, we can take advantage of the possibly rich independence structure governing the context and carry out the intra-contextual inference problem in a computationally efficient manner. Finally, let us discuss some interesting aspects of the proposed model.
The degree of belief is encoded mathematically in the form of a probability distribution over the variables contained within the context. Furthermore, in the process of partial belief formation (which leads to the formation of contexts), the reasoner is ignorant as to how the various contexts probabilistically interact (are related), except that some contexts may in fact share a number of variables and hence overlap. Later on, in the process of deriving the query posed to the reasoner, this ignorance manifests itself in the uncertainty region represented by the min/max values for the inter-contextual query of interest. In other words, if the reasoner is ignorant as to the (in)dependency structure governing the variables present in the domain, then later on, in the process of deriving the posed query, the reasoner has to pay the price by merely arriving at a \emph{probability interval} rather than a point probability as the answer to the query of interest. Yet, knowledge of the underlying dependency structure is fundamental knowledge whose availability to the reasoner should \emph{not} be taken for granted; rather, it should be regarded as a privileged position. The evolutionary process of MCM does not enforce a specific gradual expansion path, for the claim of MCM is merely that \emph{any partial belief formation as to the domain can be modeled in the framework depicted by MCM}. That is, the reasoner may arrive at different MCMs, depending on the order in which the reasoner encounters different concepts and also depending on her background knowledge as to the nature of the potential connections between a collection of variables. Simply put, the order according to which the reasoner comes to know the concepts or propositions of the domain does matter (cf. the discussion on the order of belief formation in Sec. \ref{Sec:Gen}). MCM enables one to carry out inference without having to commit to any unjustified or uncertain independence assumptions.
In light of this, contexts symbolize the regions of the domain over which an (in)dependence structure is presumed; hence, the growth and merging of contexts indicate the formation of new (in)dependence structures over parts of the domain which were previously unstructured. In short, MCM is meant to be invoked in circumstances where the observations and the a priori knowledge combined are not sufficient for the reasoner to form the full JPD over all of the domain variables and yet, quite crucially, the reasoner is reluctant to submit to any unjustified assumptions to compensate for such inadequacy of knowledge. \section{CONCLUSION} \label{sec:conclusion} In an attempt to establish a middle ground between Bayesian Logic and Probabilistic Logic \cite{andersen1990probabilistic,andersen1994bayesian}, on one side, and PGMs\footnote{For instance, Bayesian Networks \cite{pearl1986fusion}, Markov Networks \cite{koller2009probabilistic}, and Chain Graphs \cite{buntine1995chain}.} on the other, we proposed the Multi-Context Model to represent the state of partial knowledge regarding a domain. The generative process for the gradual construction of contradiction-free MCMs was discussed. The task of inference for MCMs was studied and, along the way, the notions of inference grammar, nestedness, and transformation were introduced. A short version of $\mathcal{I}^\ast$ without the scale-invariance property is provided in the Appendix. It is worth noting that the scale-invariance property can be achieved with a minor change to the last step of the proposed algorithm. \section{INFERENCE IN MCMS} \label{sec:inference} In this section we consider \emph{evidential} inference problems in multi-context settings. The objective is to evaluate (to the extent possible) a probability of the form $\mathbb P(\bb{O}=O|\bb{E}=E)$, called a \emph{query}, where $\bb{O}$ and $\bb{E}$ are two disjoint sets of RVs.
The set $\bb{E}$ is the set of evidence variables and $\bb{O}$ is the set of RVs for which we are interested in knowing with what probability they take on the value $O$ upon the observation of $\bb{E}=E$. In multi-context settings, inference problems can be categorized into two broad classes: \begin{itemize} \item Intra-Contextual Inference Problems: those for which the sets $\bb{E}$ and $\bb{O}$ both belong to the same context. \item Inter-Contextual Inference Problems: those for which the sets $\bb{E}$ and $\bb{O}$ do not belong to a single context and, therefore, more than one context is involved in the inference problem. \end{itemize} In what follows, we elaborate on these two cases. \subsection{INTRA-CONTEXTUAL INFERENCE PROBLEM} \label{Intra_Contextual_Inference_Problem} One advantage of MCMs is that, once an inference problem is found to be an intra-contextual inference problem, one can take advantage of the rich independence structure potentially governing the context to accomplish the task of inference in a computationally efficient way. For instance, if the probabilistic knowledge of a context is presented in the form of a BN, then one can benefit from a variety of exact or approximate methods already developed for BNs. For a comprehensive study of such methods the reader is referred to \cite{koller2009probabilistic}. Hence, it is of great interest to have contexts whose probabilistic knowledge can be represented in some form of PGM with a sufficiently rich independence structure for which inference problems can be solved in a computationally efficient way. For example, if the probabilistic knowledge of a context is to be modeled according to some BN, we would like that BN to be as sparsely connected as possible and to enjoy low tree-width, so as to ensure computational efficiency for the task of inference \cite{chandrasekaran2012complexity}.
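As an illustration of why a context's internal structure matters computationally, consider a context whose JPD happens to factorize as a binary chain BN; a hedged sketch (all probabilities hypothetical), contrasting an $O(n)$ forward pass with the $O(2^n)$ brute-force sum:

```python
# Intra-contextual inference sketch: when a context's JPD factorizes as a
# binary chain BN, P(x1) P(x2|x1) ... P(xn|x_{n-1}), the marginal P(xn = 1)
# costs O(n) via a forward pass instead of an O(2^n) sum over all joint
# realizations. All probabilities below are hypothetical.
def chain_marginal(prior, conds):
    """prior: P(x1=1); conds[i]: (P(next=1 | prev=0), P(next=1 | prev=1))."""
    p = prior
    for p_given0, p_given1 in conds:
        p = (1 - p) * p_given0 + p * p_given1  # marginalize out the previous RV
    return p

# Brute-force check on a three-variable chain.
prior, conds = 0.3, [(0.2, 0.9), (0.5, 0.7)]
brute = 0.0
for x1 in (0, 1):
    p1 = prior if x1 else 1 - prior
    for x2 in (0, 1):
        p2 = conds[0][x1] if x2 else 1 - conds[0][x1]
        for x3 in (0, 1):
            p3 = conds[1][x2] if x3 else 1 - conds[1][x2]
            if x3 == 1:
                brute += p1 * p2 * p3
assert abs(brute - chain_marginal(prior, conds)) < 1e-12
```

The forward pass is the simplest instance of the message-passing methods alluded to above; for general low tree-width contexts, junction-tree style algorithms play the same role.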
\subsection{INTER-CONTEXTUAL INFERENCE PROBLEM: INFERENCE GRAMMAR} \label{Sec:Grammar} In this section, we turn our attention to the task of inter-contextual inference. The RVs involved in the query of an inter-contextual inference problem do not belong to a single context. For this reason, the answer to the query is inevitably in the form of an interval indicating a lower and an upper bound for the query. Since $\mathbb P(E|O)+\mathbb P(\bar{E}|O)=1$, we have $\mathbb P(E|O)_\uparrow=1-\mathbb P(\bar{E}|O)_\downarrow$. Therefore, we can focus our attention on the minimization problem (i.e., identifying a lower bound to the probability of interest), realizing that any maximization problem (i.e., identifying an upper bound to the probability of interest) can be cast as a minimization problem and vice versa. First, we consider some simple queries posed to some example MCMs. These MCMs are depicted in Fig. \ref{fig_grammar_cases}(a-c). The goal here is to develop some insight as to which variables are indeed relevant and which are deemed irrelevant for a given query and the corresponding MCM. \begin{figure}[h!] \centering \includegraphics[width=0.47\textwidth]{fig_grammar_cases_bold_size_3.pdf} \caption{Sample inference rules given for some inter-contextual inference problems. The RVs involved in the query are shown in blue.} \label{fig_grammar_cases} \end{figure} We begin by considering a simple case: the disjoint MCM shown in Fig. \ref{fig_grammar_cases}(a). The rule to evaluate $\mathbb P(X|Y)_\downarrow$ is also given in Fig. \ref{fig_grammar_cases}(a). Interestingly enough, the expression only requires the intra-contextual quantities $\mathbb P(X)$ and $\mathbb P(Y)$ and does not depend on any other RV present in the domain. In other words, as far as $\mathbb P(X|Y)_\downarrow$ is concerned, the MCM shown in Fig.
\ref{fig_grammar_cases}(a) is equivalent to a much simpler MCM: the one corresponding to having only two disjoint contexts described by $\mathbb P(\bb{X})$ and $\mathbb P(\bb{Y})$. Next, we take the MCM given in Fig. \ref{fig_grammar_cases}(b), where there is an overlap between the context containing $\bb{X}$ and the one containing $\bb{Y}$. The overlapping part consists of the random vector $\bb{Z}$. The rule to evaluate $\mathbb P(X|Y,Z)_\downarrow$ is given in Fig. \ref{fig_grammar_cases}(b). Now, consider the MCM shown in Fig. \ref{fig_grammar_cases}(c), where we have the same setting as in the previous case but a new random variable $\bb{t}$ is added in the overlapping region. Notice that the expression for $\mathbb P(X|Y,Z,t)_\downarrow$ given in Fig. \ref{fig_grammar_cases}(c) is the same expression given for $\mathbb P(X|Y,Z)_\downarrow$ in Fig. \ref{fig_grammar_cases}(b) with $Z,t$ substituted for $Z$. That is, $Z$ in Fig. \ref{fig_grammar_cases}(b) and $Z,t$ in Fig. \ref{fig_grammar_cases}(c) represent the same thing, namely, ``all the variables in the overlapping region'', and in that respect, they are ultimately the same. The rules are very much like sentences in predicate logic, for which variables merely serve as place-holders. The derivation of the rules given in Fig. \ref{fig_grammar_cases}(a-c) is not presented here. However, using the proof presented in Sec. A-II of the Appendix (to identify the relevant variables) and subsequently following the methodology outlined in Sec. A-III of the Appendix (to visualize the partitions and reason out the extent to which they overlap), it should be straightforward to derive the presented rules. The sample set of rules presented is by no means exhaustive; nonetheless, owing to the idea of context transformation that will be discussed in Sec. \ref{sec:transformation}, they can be applied to a wide range of interesting inter-contextual inference problems.
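The rules themselves are stated in Fig. \ref{fig_grammar_cases}; as a sketch (ours, not the paper's $\mathcal{I}^\ast$), the two instances that are written out later in the text can be coded directly, which also makes the place-holder nature of the variables concrete:

```python
def clip(v):
    """The [.]^+ operator: a negative value carries no information."""
    return max(v, 0.0)

# Disjoint-contexts rule: lower bound on P(X | Y) from the two
# intra-contextual marginals alone.
def lower_disjoint(p_x, p_y):
    return clip((p_x - (1 - p_y)) / p_y)

# Overlapping-contexts rule: the same expression with every marginal
# conditioned on the shared part Z -- the variables are mere place-holders.
def lower_overlap(p_x_given_z, p_y_given_z):
    return clip((p_x_given_z - (1 - p_y_given_z)) / p_y_given_z)
```

For instance, `lower_disjoint(0.9, 0.8)` evaluates to $0.875$ (up to floating point), and `lower_overlap` has literally the same body, mirroring the observation that $Z$ and $Z,t$ are interchangeable place-holders.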
We would like to clarify that our ultimate objective is \emph{not} to compute and provide the complete set of rules that can answer all possible queries for all possible MCMs since, simply, the set is infinite in size. What we need, therefore, is an algorithm, let us call it $\mathcal{I}^\ast$, that can provide the answer to the posed query given an MCM as input. The presented rules provide insights and hints as to the nature of $\mathcal{I}^\ast$, which needs to be devised to ideally handle \emph{any} arbitrary query posed to \emph{any}\footnote{Although we believe that the MCMs generated through the generative process outlined in Sec. \ref{Sec:Gen} are more cognitively plausible, nonetheless, from a pure mathematical point of view, it would be of interest to find an algorithm which could handle \emph{any} MCM.} MCM. In a sense, we can get a glimpse of the nature of $\mathcal{I}^\ast$ through analyzing the presented rules. In other words, the derived rules serve as a lens through which one can study $\mathcal{I}^\ast$. In Sec. A-I of the Appendix, a simple version of $\mathcal{I}^\ast$ that can handle arbitrary MCMs is outlined. The motivation behind giving this sample set of rules can now be summarized as follows. \begin{enumerate} \item To shed light on the general nature of a rule (which reflects on the nature of $\mathcal{I}^\ast$). More specifically, to illustrate that a rule enjoys two key properties, namely: (i) scale-invariance; (ii) resemblance to sentences in predicate logic, in that in both cases variables are mere place-holders. Owing to this resemblance we refer to $\mathcal{I}^\ast$ as an \emph{inference grammar}. \item To demonstrate that a rule tells us which intra-contextual quantities are essential and which are irrelevant for a particular inter-contextual query.
\item To emphasize the key property that a rule derived under a specific MCM remains valid for, and can be applied to, infinitely many other MCMs, all of which are linked through the notions of nestedness and transformation; hence generalization is achieved. \item To lay down the foundation of \emph{transformation} and \emph{nestedness}, which both play crucial roles in understanding the underlying machinery behind $\mathcal{I}^\ast$. \end{enumerate} Next, we discuss another key property of the inference rules, namely, that of \emph{scale-invariance}. Consider once again the case in Fig. \ref{fig_motivation_toy_problem_1}. Now let us derive $\mathbb P(x_i|y)_\downarrow$ and $\mathbb P(X|y)_\downarrow$, where $\bb{X}\triangleq \bb{x}_{1:n}$. Using the rule given in Fig. \ref{fig_grammar_cases}(a), one arrives at the following results: $\mathbb P(x_i|y)_\downarrow=[\frac{\mathbb P(x_i)-\mathbb P(\bar{y})}{\mathbb P(y)}]^+$ and $\mathbb P(X|y)_\downarrow=[\frac{\mathbb P(X)-\mathbb P(\bar{y})}{\mathbb P(y)}]^+$. In other words, the expressions remain the same regardless of the dimension of the quantity of interest, i.e., be it a single RV or a random vector comprised of many RVs. In this respect, once again, the inference rules resemble expressions in predicate logic. The intuition behind scale-invariance is provided in Sec. A-III of the Appendix. It is worth noting that $\mathcal{I}^\ast$ formulates the inter-contextual inference problem as a Linear Programming (LP) optimization (cf. Sec. A-I of the Appendix). The key issues to consider are: (i) which RVs have to be included in the LP, and (ii) the abstraction level $\mathcal{I}^\ast$ should choose to encode the RVs identified in step (i), i.e., the parametrization of those RVs in the LP. In what follows, the concepts of nestedness and transformation are put forth. Once the two are introduced, one can apply a single rule (e.g., the one in Fig.
\ref{fig_grammar_cases}(a)) to a much larger number of MCMs; in fact, to infinitely many MCMs. \subsection{INTER-CONTEXTUAL INFERENCE PROBLEM: NESTEDNESS AND TRANSFORMATION} \label{sec:transformation} \begin{figure}[h!] \centering \includegraphics[width=0.49\textwidth]{fig_transform_1_bold_mod_add_size_1.pdf} \caption{Inter-Contextual Inference Problem: Transformation and hierarchical construct. As one proceeds from left to right, a more comprehensive knowledge of the domain is assumed to be available, of course hypothetically.} \label{fig_transform_1} \end{figure} The nested property, or \emph{nestedness}, refers to the fact that every MCM can be considered as an element of a family of MCMs. That family contains all MCMs which, through marginalization, can produce the original MCM. In such a case we simply say that the nested property holds between the original MCM and the family. The process of going from the original MCM to one of the members of the family is referred to as \emph{transformation}. For example, the MCM containing the three contexts $\{\bb{x}\}$, $\{\bb{y}\}$, and $\{\bb{z}\}$ shown in Fig. \ref{fig_transform_1}(a) is a member of a family of MCMs containing two contexts $\{\bb{x},\bb{y}\}$ and $\{\bb{z}\}$, shown in Fig. \ref{fig_transform_1}(b), one of which is associated to a \emph{family} of JPDs over $\bb{x}$ and $\bb{y}$ (the dash-dotted circle in Fig. \ref{fig_transform_1}(b)), each of which, if marginalized, produces the same $\mathbb P(\bb{x})$ and $\mathbb P(\bb{y})$ as in the original (left-most) MCM. Mathematically, the set of all JPDs over the RVs $\bb{x}$ and $\bb{y}$ which, if marginalized, produce the specific marginal probability distributions $\mathbb P(\bb{x})$ and $\mathbb P(\bb{y})$ is denoted by $\{\mathbb P(\bb{x},\bb{y})\} \models \mathbb P(\bb{x}) \wedge \mathbb P(\bb{y})$. The notion of the nested property enables us to look at one MCM as a subset of another, larger MCM.
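For two binary RVs the family $\{\mathbb P(\bb{x},\bb{y})\} \models \mathbb P(\bb{x}) \wedge \mathbb P(\bb{y})$ can be made concrete: each member is pinned down by a single free parameter, the overlap $q=\mathbb P(x{=}1,y{=}1)$. A small membership-check sketch (function name and numbers are ours):

```python
# The family {P(x,y)} |= P(x) and P(y) for binary x, y: each member is
# determined by one free parameter q = P(x=1, y=1). Names and numbers ours.
def family_member(p_x, p_y, q):
    """Joint table {(i, j): P(x=i, y=j)} for overlap q, or None if q is
    outside the admissible interval (some cell would go negative)."""
    joint = {(1, 1): q, (1, 0): p_x - q,
             (0, 1): p_y - q, (0, 0): 1.0 - p_x - p_y + q}
    return None if any(v < 0 for v in joint.values()) else joint

# Every valid member marginalizes back to the two original contexts.
p_x, p_y = 0.6, 0.5
j = family_member(p_x, p_y, 0.3)
assert abs(j[(1, 1)] + j[(1, 0)] - p_x) < 1e-12
assert abs(j[(1, 1)] + j[(0, 1)] - p_y) < 1e-12
assert family_member(p_x, p_y, 0.55) is None  # q cannot exceed min(P(x), P(y))
```

Transformation picks one such member (or reasons over all of them at once), which is exactly what allows an MCM to be viewed as nested inside a larger one.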
The nested property, furthermore, enables one to sort MCMs in a hierarchical construct as illustrated in Fig. \ref{fig_transform_1} where moving from the left to the right corresponds to moving from lower levels of hierarchy to higher levels. \begin{figure}[h!] \centering \includegraphics[width=0.39\textwidth]{fig_reduction_5_bold_size_1.pdf} \caption{Transformation: Sample case.} \label{fig_reduction} \end{figure} To convey the idea, consider the case illustrated in Fig. \ref{fig_reduction}. Suppose the query of interest is $\mathbb P(x|y,R)_\downarrow$. Then, one can first transform the original (left-most) MCM into the MCM shown in the middle, and subsequently into the right-most MCM. Hence, using the right-most MCM and the rule given in Fig. \ref{fig_grammar_cases}(b), one can write $\mathbb P(x|y,R)_\downarrow=[\frac{\mathbb P(x|R)-\mathbb P(\bar{y}|R)}{\mathbb P(y|R)}]^+ =[\frac{\mathbb P(x|R)-1+\mathbb P(y|R)}{\mathbb P(y|R)}]^+$. If we had the knowledge of $\mathbb P(y|R)$ then the expression given above would have been sufficient to derive $\mathbb P(x|y,R)_\downarrow$. However, since $\mathbb P(y|R)$ is \emph{not} known, we need to go through one more step. This is precisely due to, and emphasizes, the fact that by working on the right-most MCM we implicitly presumed that we were equipped with more knowledge than we really had. Using the middle MCM and the rule given in Fig. \ref{fig_grammar_cases}(a), one can conclude $\textstyle\mathbb P(y|R)_\downarrow=\textstyle[\frac{\mathbb P(y)-\mathbb P(\bar{R})}{\mathbb P(R)}]^+$. Altogether\footnote{This is due to the observation that for function $ f(y)=\textstyle(\frac{k+y}{y})$ when $k<0$, $\textstyle\min_{1\geq y\geq t>0}f(y)=\textstyle(\frac{k+t}{t})$.}, $\mathbb P(x|y,R)_\downarrow = \big( [\frac{\mathbb P(x|R)-1+\mathbb P(y|R)}{\mathbb P(y|R)}]^+\big)_{\downarrow}= [\frac{\mathbb P(x|R)-1+\mathbb P(y|R)_\downarrow}{\mathbb P(y|R)_\downarrow}]^+$. 
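The two-step derivation above composes the rule of Fig. \ref{fig_grammar_cases}(a) with that of Fig. \ref{fig_grammar_cases}(b) evaluated at the worst-case $\mathbb P(y|R)$; a minimal sketch (function name and numbers are ours):

```python
def clip(v):
    """The [.]^+ operator."""
    return max(v, 0.0)

# Worked transformation example from the text: bound P(x | y, R) when only
# P(x|R), P(y), and P(R) are known. All numbers below are hypothetical.
def lower_x_given_y_R(p_x_given_R, p_y, p_R):
    # Step 1 -- disjoint rule on the middle MCM: worst-case value of P(y | R).
    t = clip((p_y - (1 - p_R)) / p_R)
    if t == 0.0:
        return 0.0  # the bound degenerates: nothing can be guaranteed
    # Step 2 -- overlap rule on the right-most MCM: (k + t)/t with k < 0 is
    # increasing in t, so the minimum is attained at the smallest feasible t.
    return clip((p_x_given_R - 1 + t) / t)
```

For instance, with $\mathbb P(x|R)=0.95$, $\mathbb P(y)=0.9$, $\mathbb P(R)=0.8$, step 1 gives $0.875$ and the composed bound is $0.825/0.875 = 33/35$.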
It is worth noting that the same rule would apply if, instead of the random vector $\bb{R}$, we were dealing with the random variable $\bb{a}$; i.e., to find $\mathbb P(x|y,a)_\downarrow$ one could use the same expression given for $\mathbb P(x|y,R)_\downarrow$ by substituting $a$ in place of $R$ in all the expressions. Arguments of this kind are made possible by the idea of transformation, which enables us to analyze the transformed MCM (e.g., the middle one in Fig. \ref{fig_reduction}) rather than the original MCM (the left-most one in Fig. \ref{fig_reduction}). Furthermore, the concept of transformation highlights a key idea: if a piece of information (i.e., an intra-contextual quantity) is irrelevant in the transformed MCM for the posed query, it must have been irrelevant in the original MCM in the first place. This statement, once again, sheds light on which intra-contextual quantities are relevant or irrelevant for deriving a posed inter-contextual query on a given MCM. \section{INTRODUCTION} At an abstract level, an individual (also referred to as a reasoner) is faced with a domain, where by ``domain'' we simply mean a collection of propositions or concepts which are mathematically encoded as Random Variables (RVs). Arriving at the complete probabilistic knowledge of the domain, i.e., learning how all RVs in the domain probabilistically interact with one another, is indeed a demanding task. In reality, an individual is often faced with a domain for which she merely possesses \emph{partial} knowledge---that is, she only knows how \emph{some} (not all) RVs in the domain interact. To make the setting under study more tangible, consider the following case. Suppose that the probabilistic knowledge of a domain is represented by a Probabilistic Graphical Model (PGM) $\mathcal{B}$, e.g., a Bayesian Network (BN).
Then the reasoner comes across a new RV, say $\boldsymbol\psi$, and would like to incorporate it into $\mathcal{B}$ so as to achieve the complete probabilistic knowledge of the new domain (which now also includes $\boldsymbol\psi$). However, incorporating $\boldsymbol\psi$ into $\mathcal{B}$ would require knowledge of how $\boldsymbol{\psi}$ is probabilistically related to all the RVs already present in $\mathcal{B}$; knowledge which may, quite plausibly, be unavailable to the reasoner. An interesting question that now arises is how to handle situations where only partial knowledge as to how $\boldsymbol\psi$ is probabilistically related to $\mathcal{B}$ is available. An example would be when the reasoner merely knows how $\boldsymbol\psi$ interacts probabilistically with only one RV, say $\boldsymbol{\phi}$, in $\mathcal{B}$. In this paper, a graphical model, namely, the Multi-Context Model (MCM), is proposed to represent the setting in which only partial probabilistic knowledge of a domain is available to the reasoner. More specifically, MCM is a graphical language for representing settings in which the Joint Probability Distribution (JPD) over all RVs is not available, but what is available instead are the JPDs over a collection of subsets of the RVs of the domain (referred to as sub-domains or \emph{contexts}). These contexts are potentially overlapping, i.e., they could share some RVs.
As pointed out elegantly in \cite{pearl1990reasoning}, \emph{``this state of partial knowledge is more common, because we often begin thinking about a problem through isolated frames, paying no attention to interdependencies.''} Along the same line of thought, it is plausible to assume that the probabilistic knowledge of the domain at the early primitive stage consists of a collection of disjoint contexts and, as the reasoner acquires more knowledge as to how the variables in the model are related to one another and thus probabilistically interact, contexts gradually go through a process very much like an evolution: contexts start to share some variables, overlaps begin to emerge and, once enough knowledge is obtained, a number of contexts could merge, thereby giving rise to bigger contexts. This naturally raises the following fundamental question: How could a collection of consistent, probabilistically sound, and potentially overlapping contexts emerge \emph{gradually} over the course of time? In an attempt to answer this question, we present a generative process for constructing contradiction-free MCMs. Finally, we would like to note that the special case where the whole domain is modeled as a single context corresponds to the conventional way of modeling the probabilistic knowledge of a domain using a single PGM, e.g., by some BN. Yet another crucial question which we address in this work---a further motivation behind the development of the MCM---is how the task of inference (i.e., the evaluation of some probability of interest, hereafter referred to as a \emph{query}) should be carried out in a domain which is modeled according to some MCM. A query does not necessarily belong to any one of the contexts in particular and, in fact, may involve RVs from different contexts. The paper is structured as follows. After introducing the notation in Sec. \ref{sec:term}, we define in Sec.
\ref{sec:gen_p} the MCM and, drawing on the notion of probabilistic conditioning, discuss a generative process for constructing contradiction-free MCMs. Then, in Sec. \ref{sec:inference} we elaborate on the problem of inference in a multi-context setting, i.e., in a domain whose probabilistic knowledge is encoded as an MCM. In Sec. \ref{sec:work} we discuss the relevant past work and comment on the proposed model. Finally, Sec. \ref{sec:conclusion} concludes the paper. \section{MULTI-CONTEXT MODEL} \label{sec:gen_p} As explained earlier, a \textit{domain} is simply the set of all Random Variables (RVs) at hand. A \textit{context} comprises a collection of RVs whose JPD is precisely known, see Fig. \ref{fig_termin}(a). In general, two contexts can be disjoint (Fig. \ref{fig_termin}(b)) or overlapping (Fig. \ref{fig_termin}(c)). \begin{figure}[h!] \centering \includegraphics[width=0.3 \textwidth]{fig_termin_2_bold.pdf} \caption{Graphical representation of contexts: (a) Context associated to $\mathbb P(\bb{a},\bb{b},\bb{X})$. (b) Two disjoint contexts associated to $\mathbb P(\bb{a},\bb{b})$ and $\mathbb P(\bb{Y},\bb{t})$. (c) Two overlapping contexts associated to $\mathbb P(\bb{X},\bb{Y},\bb{t})$ and $\mathbb P(\bb{Y},\bb{z},\bb{k})$. The random vector $\bb{Y}$ is referred to as the \emph{induced} part in Sec. \ref{sec:gen_p}.} \label{fig_termin} \end{figure} A \textit{Multi-Context Model (MCM)} encodes the probabilistic knowledge of a domain as a collection of possibly overlapping contexts. This enables the handling of situations in which comprehensive knowledge of a domain is not available, but partial information is, in the form of JPDs over some subsets of the domain. Let us first motivate the proposed MCM by entertaining a simple yet enlightening example. \subsection{MOTIVATING EXAMPLE} Consider a domain consisting of the RVs $\bb{y},\bb{z}$ in addition to a set of $n$ RVs, $\bb{x}_{1:n}$.
A reasoner has formed a partial belief as to the probabilistic connections between the variables of the domain. More specifically, the reasoner knows precisely the JPDs $\mathbb P(\bb{y},\bb{z})$ and $\mathbb P(\bb{x}_{1:n})$, but not the JPD $\mathbb P(\bb{y},\bb{z}, \bb{x}_{1:n})$. This setting is described by an MCM that consists of two disjoint contexts, one associated to the RVs $\bb{y},\bb{z}$ and the other to $\bb{x}_{1:n}$, as shown in Fig. \ref{fig_motivation_toy_problem_1}. \begin{figure}[h!] \centering \includegraphics[width=0.13\textwidth]{fig_motivation_toy_problem_1_size_1.pdf} \caption{Problem statement as an MCM.} \label{fig_motivation_toy_problem_1} \end{figure} Assume that the following query is posed: Given the available information, what can be said about $\mathbb P(y|x_i)$ for some $i=1,\cdots,n$? The RVs $\bb{y}$ and $\bb{x}_i$ belong to different contexts; therefore, the JPD of $\bb{y}$ and $\bb{x}_i$, $\mathbb P(\bb{x}_i,\bb{y})$, is not available. The best one can hope for is to derive the range within which $\mathbb P(y|x_i)$ varies, namely, $[\mathbb P(y|x_i)_\downarrow,\mathbb P(y|x_i)_\uparrow]$. Let us for the moment assume that the objective is to find $\mathbb P(y|x_i)_\downarrow$. Based on the conventional methodology, i.e., the approach adopted by past work (cf. \cite{andersen1990probabilistic,andersen1994bayesian,hansen1995models} and references therein), one has to write down \emph{all} the information as a list of linear equations and solve it as a Linear Program (LP). The main drawback of the conventional approach is that it cannot distinguish between what information is relevant and what is irrelevant for the posed query, and hence between what needs to and what need not be considered in answering the query. The price for this is that the number of parameters required merely to formulate the query as an LP is exponential in $n$.
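For this two-context example the apparently exponential LP in fact has a single free parameter, the overlap $q=\mathbb P(x_i,y)$; sweeping it over its admissible interval reproduces the closed-form bound $[\frac{\mathbb P(y)-\mathbb P(\bar{x}_i)}{\mathbb P(x_i)}]^+$. A hedged numerical check (ours, not from the paper):

```python
# Sanity check (ours): with only the two marginals known, the feasible joints
# q = P(x_i, y) form the interval [max(0, P(y)+P(x_i)-1), min(P(y), P(x_i))];
# minimizing q / P(x_i) over a sweep of that interval reproduces the
# closed-form rule [(P(y) - P(not x_i)) / P(x_i)]^+.
def rule_lower(p_y, p_xi):
    return max((p_y - (1 - p_xi)) / p_xi, 0.0)

def sweep_lower(p_y, p_xi, steps=1000):
    q_lo = max(0.0, p_y + p_xi - 1.0)
    q_hi = min(p_y, p_xi)
    qs = (q_lo + (q_hi - q_lo) * k / steps for k in range(steps + 1))
    return min(qs) / p_xi

for p_y, p_xi in [(0.9, 0.8), (0.3, 0.6), (0.5, 0.5)]:
    assert abs(sweep_lower(p_y, p_xi) - rule_lower(p_y, p_xi)) < 1e-9
```

The collapse from $2^{n+2}$ LP parameters to one free overlap is the payoff of identifying which quantities are relevant to the query.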
The key point, however, is that what information is relevant (or irrelevant) depends directly on the posed query, i.e., it is query-dependent. The main advantage of the proposed MCM over previous approaches is that it enables answering a query in a computationally efficient manner by distinguishing the relevant information from the irrelevant for the given query. This is realized through adopting the notion of an \emph{inference grammar}; a concept which will be systematically defined later. For our example, following the inference rule we will provide in Sec. \ref{Sec:Grammar}, one can easily get $\mathbb P(y|x_i)_\downarrow=[\frac{\mathbb P(y)-\mathbb P(\bar{x}_i)}{\mathbb P(x_i)}]^+$. The task of inference in an MCM is carried out on two different levels, which makes the task more computationally efficient: \begin{itemize} \item[(i)] High-Level Reasoning: at this level, through the use of the inference grammar, the relevant quantities are identified (e.g., $\mathbb P(y)$ and $\mathbb P(\bar{x}_i)$ in the case of our example). \item[(ii)] Low-Level Reasoning: the relevant quantities, identified in (i), can then be computed by employing inference algorithms which take advantage of the potentially rich independence structure governing the contexts. For example, it could very well be the case that for the JPD associated to $\bb{x}_{1:n}$ a large number of conditional independence relations hold. In that case, stating the derivation of $\mathbb P(\bar{x}_i)$ (i.e., $1-\mathbb P(x_i)$) as an LP would be not only computationally inefficient\footnote{The number of parameters required just to state the problem as an LP is exponential in $n$.} but also unnecessary. Indeed, the task of finding $\mathbb P(\bar{x}_i)$ can be accomplished in a computationally efficient way using one of the many inference methods developed for probabilistic graphical models; a key point that the previous approaches do not take advantage of.
\end{itemize} As a final step, in order to derive the lower/upper bound for the posed query, the constraints involving the quantities identified in (i) and subsequently computed in (ii) are stated as an LP, which is then solved. The idea behind ``high-level reasoning'' will be explained and clarified further in Sec. \ref{Sec:Grammar} and \ref{sec:transformation}, while the concept of ``low-level reasoning'' will be discussed in Sec. \ref{Intra_Contextual_Inference_Problem}. \subsection{GENERATIVE PROCESS OF CONTRADICTION-FREE MCMS} \label{Sec:Gen} The objective of the generative process we describe in this section is to provide a way to consistently\footnote{That is, without introducing any form of contradictory result with respect to any probability assignment.} construct contexts, in a sequential manner, over a set of RVs. The act of constructing a context, i.e., of assigning a JPD to a subset of RVs, corresponds to forming a \emph{subjective}\footnote{One must not interpret the subjectivity of belief as ``total disconnection from reality.'' Thus, we adopt the Bayesian interpretation of probability in this section. The avid reader is referred to \cite{chalmers2013thing}. An adherent to the frequentist interpretation of probability could think of contexts as being empirically constructed from a collection of data and thus skip Sec. \ref{Sec:Gen} and proceed directly to the next section.} belief over those RVs. In this light, the act of constructing multiple contexts corresponds to \emph{gradually} forming subjective beliefs over a number of subsets of variables in the domain; hence every context symbolizes an established belief over the RVs involved in that context. We introduce this problem by considering a simple case shown in Fig. \ref{fig_consis_3var_2pic}(a). \begin{figure}[h!] \centering \includegraphics[width=0.31\textwidth]{fig_consis_3var_2pic_bold.pdf} \caption{Generative process for contradiction-free Multi-Context Model. 
The dash-dotted contexts cannot be freely assigned.} \label{fig_consis_3var_2pic} \end{figure} Suppose there are three RVs, namely, $\bb{x},\bb{y},$ and $\bb{z}$, present in the domain and let us consider the following question: Could one assign $\mathbb P(\bb{x},\bb{y})$ and $\mathbb P(\bb{y},\bb{z})$ \textit{freely} and \emph{gradually} in a consistent manner over the three variables, without introducing any sort of contradiction? It is easy to verify that the answer is positive. Indeed, one could start off by assigning $\mathbb P(\bb{x},\bb{y})$. This assignment would, of course, induce the marginal $\mathbb P(\bb{y})$, and one can write $\mathbb P(\bb{y},\bb{z})=\mathbb P(\bb{y})\mathbb P(\bb{z}|\bb{y})$. Then, to complete this task, one would just need to proceed with assigning $\mathbb P(\bb{z}|\bb{y})$. This could be referred to as a \textit{generative} process: the assignment of $\mathbb P(\bb{x},\bb{y})$ and $\mathbb P(\bb{y},\bb{z})$ over $\bb{x},\bb{y}$, and $\bb{z}$ is carried out gradually, without introducing any inconsistencies. Here, free assignment refers to the act of freely assigning the non-induced part, e.g., $\mathbb P(\bb{z}|\bb{y})$, of the \emph{to-be-formed} belief, e.g., $\mathbb P(\bb{y},\bb{z})$. In other words, free assignment signifies the observation that the already-formed belief does not impose any constraints on the non-induced part of the to-be-formed belief. Let us now consider the case shown in Fig. \ref{fig_consis_3var_2pic}(b). Could one assign $\mathbb P(\bb{x},\bb{y}),\mathbb P(\bb{y},\bb{z}),$ and $\mathbb P(\bb{x},\bb{z})$ freely and \emph{gradually} in a consistent manner over the three variables without introducing any sort of contradiction? After some investigation, one can see that the answer is negative \cite{pearl1985bayesian}. 
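A small brute-force check makes the negative answer tangible. In the extreme case where the pairwise beliefs force $\bb{x}=\bb{y}$ and $\bb{y}=\bb{z}$ with probability one while also forcing $\bb{x}\neq\bb{z}$, no joint distribution over the three binary RVs can satisfy all three contexts. The sketch below is our own illustration, not from the paper; it enumerates the candidate outcomes compatible with such hard constraints:

```python
from itertools import product

def feasible_support(equalities):
    """Outcomes (x, y, z) in {0,1}^3 compatible with hard pairwise constraints.

    `equalities` maps an index pair to a required (in)equality: {(0, 1): True}
    forces x = y with probability one; {(0, 2): False} forces x != z.
    """
    return [w for w in product([0, 1], repeat=3)
            if all((w[i] == w[j]) == eq for (i, j), eq in equalities.items())]

# x = y and y = z leave exactly two consistent outcomes ...
print(feasible_support({(0, 1): True, (1, 2): True}))   # [(0, 0, 0), (1, 1, 1)]
# ... but additionally demanding x != z leaves none: the loop is contradictory.
print(feasible_support({(0, 1): True, (1, 2): True, (0, 2): False}))  # []
```

An empty feasible support means any probability mass assigned by the third context contradicts the first two, which is exactly the obstruction created by the loop.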
Not surprisingly, the reason for this has to do with the existence of a loop in the model: once $\mathbb P(\bb{x},\bb{y})$ and $\mathbb P(\bb{y},\bb{z})=\mathbb P(\bb{y})\mathbb P(\bb{z}|\bb{y})$ are assigned\footnote{$\mathbb P(\bb{y})$ is induced by the assignment of $\mathbb P(\bb{x},\bb{y})$.}, then $\mathbb P(\bb{x},\bb{z})$ cannot be assigned freely. This is due to the fact that $\mathbb P(\bb{x},\bb{z})$ has to satisfy some non-trivial conditions imposed by the already assigned contexts $\mathbb P(\bb{x},\bb{y})$ and $\mathbb P(\bb{y},\bb{z})$ \cite{pearl1985bayesian}. In summary, whenever it comes to generating a new context, the JPD associated to that context has to be separated into two parts: (i) the part induced by the already existing contexts, and (ii) the part containing new variables which have never been so far associated to any context (i.e., non-induced part). The key point in the generation of contradiction-free MCMs is that the former part has to be induced by some context which, itself, is already present in the domain. That is, all the induced parts have to be already contained within some context. Otherwise, to include the induced parts---each constrained by the context it is already in---in a new context, the newly created context would have to satisfy some nontrivial constraints and therefore could not be \emph{freely} assigned. \begin{figure}[h!] \centering \includegraphics[width=0.16\textwidth]{fig_mcx_1_bold.pdf} \caption{MCM for $\mathbb P(\bb{a},\bb{b},\bb{c}), \mathbb P(\bb{b},\bb{d})$, and $\mathbb P(\bb{b},\bb{c},\bb{e})$.} \label{fig_mcx_1} \end{figure} Let us discuss one final case to further clarify the process. Consider the multi-context model in Fig. \ref{fig_mcx_1}. Could this model be constructed freely and gradually in a probabilistically consistent manner? The answer is positive. 
We first assign $\mathbb P(\bb{a},\bb{b},\bb{c})$, then we assign $\mathbb P(\bb{b},\bb{c},\bb{e})=\mathbb P(\bb{b},\bb{c})\mathbb P(\bb{e}|\bb{b},\bb{c})$ where $\mathbb P(\bb{b},\bb{c})$ is induced by our first assignment of $\mathbb P(\bb{a},\bb{b},\bb{c})$. Finally, we assign $\mathbb P(\bb{b},\bb{d})=\mathbb P(\bb{b})\mathbb P(\bb{d}|\bb{b})$ where $\mathbb P(\bb{b})$ is induced by our first assignment of $\mathbb P(\bb{a},\bb{b},\bb{c})$. A closer look reveals that this is not the only way we can gradually construct a contradiction-free model in this case: we could have performed the assignments in a different order\footnote{Yet, this is not always the case: suppose there are four RVs in the domain, namely, $\bb{a},\bb{b},\bb{c}$ and $\bb{d}$ and we would like to assign $\mathbb P(\bb{a},\bb{b}),\mathbb P(\bb{b},\bb{c}),$ and $\mathbb P(\bb{c},\bb{d})$. Performing the assignments in the order $(1)-\mathbb P(\bb{a},\bb{b}),(2)-\mathbb P(\bb{b},\bb{c}), (3)-\mathbb P(\bb{c},\bb{d})$ would not introduce any inconsistencies, in contrast to using the order $(1)-\mathbb P(\bb{a},\bb{b}),(2)-\mathbb P(\bb{c},\bb{d}),(3)-\mathbb P(\bb{b},\bb{c})$.}. Of course, the only thing which would have been different would be the induced probabilities. That is, if one does the assignment in the following order: (1)$-\mathbb P(\bb{b},\bb{d})$, (2)$-\mathbb P(\bb{a},\bb{b},\bb{c})$, (3)$-\mathbb P(\bb{b},\bb{c},\bb{e})$ then the first assignment of $\mathbb P(\bb{b},\bb{d})$ will induce $\mathbb P(\bb{b})$ for the second assignment of $\mathbb P(\bb{a},\bb{b},\bb{c})=\mathbb P(\bb{b})\mathbb P(\bb{a},\bb{c}|\bb{b})$ and the second assignment will induce $\mathbb P(\bb{b},\bb{c})$ for the third assignment $\mathbb P(\bb{b},\bb{c},\bb{e})=\mathbb P(\bb{b},\bb{c})\mathbb P(\bb{e}|\bb{b},\bb{c})$. \section{TERMINOLOGY AND NOTATION} \label{sec:term} In this section we present the mathematical notation and the terminology employed in this paper. 
Random quantities are denoted by bold-faced letters; their realizations are denoted by the same letter but non-bold. More specifically, RVs are denoted by lower-case bold-faced letters, e.g., $\bb{x}$, while random vectors are denoted by upper-case bold letters, e.g., $\bb{X}$. $\textit{Val}(\cdot)$ denotes the set of values a random quantity can take, e.g., $\textit{Val}(\bb{x})$ is the set of all possible realizations of the RV $\bb{x}$. In this paper, we assume that all random quantities are discrete. The JPD over the RVs $\bb{x}_1,\cdots,\bb{x}_n$ is denoted by $\mathbb P(\bb{x}_1,\cdots,\bb{x}_n)$; when $\bb{x}_1,\cdots,\bb{x}_n$ comprise a vector $\bb{X}$ then $\mathbb P(\bb{X}):=\mathbb P(\bb{x}_1,\cdots,\bb{x}_n)$. We will use the notation $\bb{x}_{1:n}$ to denote the sequence of $n$ RVs $\bb{x}_1,\cdots,\bb{x}_n$. To simplify presentation and to prevent our expressions from becoming cumbersome, we incur the following abuse of notation: We denote the probability $\mathbb P(\bb{x}=x)$ by $\mathbb P(x)$ for some RV $\bb{x}$ and its realization $x\in \textit{Val}(\bb{x})$. Also, $\mathbb P(\bar{x}):=\mathbb P(\bb{x}\neq x)=1-\mathbb P(x)$ for some $x\in \textit{Val}(\bb{x})$, i.e., $\mathbb P(\bar{x})$ is the probability that $\bb{x}$ takes on any value other than $x$. For conditional probabilities we will use the notation $\mathbb P(x|y)$ instead of $\mathbb P(\bb{x}={x}|\bb{y}={y})$. Similar notations will be used for the case of random vectors, i.e., $\mathbb P(X):=\mathbb P(\bb{X}=X)$, $\mathbb P(\bar{X}):=\mathbb P(\bb{X}\neq X)=1-\mathbb P(\bb{X}=X)=1-\mathbb P(X)$, and $\mathbb P(X|Y):=\mathbb P(\bb{X}={X}|\bb{Y}={Y})$. The subscript $\downarrow$ on a probability, e.g., $\mathbb P(x|y)_\downarrow$, denotes the minimum value the probability can take subject to the constraints induced by the available probabilistic knowledge. Likewise, the subscript $\uparrow$ on a probability denotes the maximum value the probability can take. 
Finally, the operator $[\cdot]^+$ gives the positive part of its argument, i.e., $[a]^+:=\max\{0,a\}$ for any real-valued $a$.
https://arxiv.org/abs/1412.4271
Multi-Context Models for Reasoning under Partial Knowledge: Generative Process and Inference Grammar
Arriving at the complete probabilistic knowledge of a domain, i.e., learning how all variables interact, is indeed a demanding task. In reality, settings often arise for which an individual merely possesses partial knowledge of the domain, and yet, is expected to give adequate answers to a variety of posed queries. That is, although precise answers to some queries, in principle, cannot be achieved, a range of plausible answers is attainable for each query given the available partial knowledge. In this paper, we propose the Multi-Context Model (MCM), a new graphical model to represent the state of partial knowledge as to a domain. MCM is a middle ground between Probabilistic Logic, Bayesian Logic, and Probabilistic Graphical Models. For this model we discuss: (i) the dynamics of constructing a contradiction-free MCM, i.e., to form partial beliefs regarding a domain in a gradual and probabilistically consistent way, and (ii) how to perform inference, i.e., to evaluate a probability of interest involving some variables of the domain.
https://arxiv.org/abs/1811.02399
Subspaces that can and cannot be the kernel of a bounded operator on a Banach space
Given a Banach space $E$, we ask which closed subspaces may be realised as the kernel of a bounded operator $E \rightarrow E$. We prove some positive results which imply in particular that when $E$ is separable every closed subspace is a kernel. Moreover, we show that there exists a Banach space $E$ which contains a closed subspace that cannot be realized as the kernel of any bounded operator on $E$. This implies that the Banach algebra $\mathcal{B}(E)$ of bounded operators on $E$ fails to be weak*-topologically left Noetherian. The Banach space $E$ that we use is the dual of Wark's non-separable, reflexive Banach space with few operators.
\section{Introduction} \noindent In this note we address the following natural question: given a Banach space $E$, which of its closed linear subspaces $F$ are the kernel of some bounded linear operator $E \rightarrow E$? We shall begin by showing that if either $E/F$ is separable, or $F$ is separable and $E$ has the separable complementation property, then $F$ is indeed the kernel of some bounded operator on $E$ (Propositions \ref{2.0} and \ref{2.0a}). Our main result is that there exists a reflexive, non-separable Banach space $E$ for which these are the only closed linear subspaces that may be realised as kernels (Theorem \ref{2.1}), and in particular $E$ has a closed linear subspace that cannot be realised as the kernel of a bounded linear operator on $E$ (Corollary \ref{2.3}). The Banach space in question may be taken to be the dual of any reflexive, non-separable Banach space that has few operators, in the sense that every bounded operator on $E$ is the sum of a scalar multiple of the identity and an operator with separable range. Wark has shown that such Banach spaces exist \cite{Wa, Wa2018}. We now describe how we came to consider this question. Given a Banach space $E$ we write $E'$ for its dual space, and $\B(E)$ for the algebra of bounded linear operators $E \rightarrow E$. We recall that a \emph{dual Banach algebra} is a Banach algebra $A$ which is isomorphically a dual Banach space in such a way that the multiplication on $A$ is separately weak*-continuous; equivalently $A$ has a predual which may be identified with a closed $A$-submodule of $A'$. When $E$ is a reflexive Banach space, $\B(E)$ is a dual Banach algebra with predual given by $E \widehat{\otimes} E',$ where $\widehat{\otimes}$ denotes the projective tensor product of Banach spaces. We recall the following definition from \cite{Wh}: \begin{definition} Let $A$ be a dual Banach algebra. 
We say that $A$ is \textit{weak*-topologically left Noetherian} if every weak*-closed left ideal $I$ of $A$ is weak*-topologically finitely-generated, i.e. there exists $n \in \N$, and there exist $x_1, \ldots, x_n \in I$ such that $$I = \overline{Ax_1 +\C x_1 + \cdots +Ax_n +\C x_n}^{w^*}.$$ \end{definition} In \cite{Wh} various examples were given of dual Banach algebras which satisfy this condition, but none were found that fail it. Using our main result, we are able to prove in Theorem \ref{2.5.13} of this note that, for any non-separable, reflexive Banach space $E$ with few operators in the above sense, $\B(E')$ is a dual Banach algebra which is not weak*-topologically left Noetherian. \section{Results} \noindent We first show that in many cases closed linear subspaces can be realised as kernels. In particular, for a separable Banach space every closed linear subspace is the kernel of a bounded linear operator. Given a Banach space $E$, and elements $x \in E, \lambda \in E'$, we use the bra-ket notation $\vert x \rangle \langle \lambda \vert$ to denote the rank-one operator $y \mapsto \langle y, \lambda \rangle x$. \begin{proposition} \label{2.0} Let $E$ be a Banach space, and let $F$ be a closed linear subspace of $E$ such that $E/F$ is separable. Then there exists $T \in \B(E)$ such that $\ker T = F$. \end{proposition} \begin{proof} Since $E/F$ is separable, the unit ball of $F^\perp \cong (E/F)'$ is weak*-metrisable, and hence, since it is also compact, it is separable. Therefore we may choose a sequence of functionals $(\lambda_n)$ which is weak*-dense in the unit ball of $F^\perp$. We may assume that $E$ is infinite dimensional, since otherwise the result follows from elementary linear algebra. We may therefore pick a normalised basic sequence $(b_n)$ in $E$. Define $T \in \B(E)$ by $$T = \sum_{n=1}^\infty 2^{-n} \vert b_n \rangle \langle \lambda_n \vert.$$ Since each $\lambda_n$ belongs to $F^\perp$, clearly $F \subset \ker T$. 
Conversely, if $x \in \ker T$ then, since $(b_n)$ is a basic sequence, we must have $\langle x, \lambda_n \rangle = 0$ for all $n \in \N$. Hence $$x \in \{ \lambda_n : n \in \N \}_\perp = \left( \overline{ \operatorname{span}}^{w^*} \{ \lambda_n : n \in \N \}\right)_\perp = (F^\perp)_\perp = F,$$ as required. \end{proof} A Banach space~$E$ is said to have the \emph{separable complementation property} if, for each separable linear subspace~$F$ of~$E$, there is a separable, complemented linear subspace~$D$ of~$E$ such that $F \subset D$. For such Banach spaces we can show that every separable closed linear subspace is a kernel. By \cite{L} every reflexive Banach space has the separable complementation property, so that the next proposition applies in particular to the duals of Wark's Banach spaces, which we shall use in our main theorems. We refer to \cite{HMVZ} for a survey of more general classes of Banach spaces that enjoy the separable complementation property. \begin{proposition} \label{2.0a} Let~$E$ be a Banach space with the separable complementation property. Then, for every closed, separable linear subspace~$F$ of~$E$, there exists $T\in \B(E)$ such that $\ker T = F$. \end{proposition} \begin{proof} Choose a separable, complemented linear subspace~$D$ of~$E$ such that $F \subset D$, and let $P \in \B(E)$ be a projection with range~$D$. By Proposition \ref{2.0}, we can find $S \in \B(D)$ such that $\ker S= F$. Then $$ T\colon\ x\mapsto SPx + x-Px,\quad E\to E, $$ defines a bounded linear operator on~$E$. We shall now complete the proof by showing that $\ker T = F$. Indeed, for each $x \in \ker T$ we have $0 = (\operatorname{id}_E-P)Tx = x- Px$, so that $Px = x$. This implies that $0=Tx = Sx$, and therefore $x\in F$. Conversely, each $x\in F$ satisfies $Sx=0$ and $Px = x$, from which it follows that $Tx=0$. \end{proof} We recall some notions from Banach space theory that we shall require. Let $E$ be a Banach space. 
A \textit{biorthogonal system} in $E$ is a set $$\{ (x_\gamma, \lambda_\gamma) : \gamma \in \Gamma \} \subset E \times E',$$ for some indexing set $\Gamma$, with the property that \begin{align*} \langle x_\alpha, \lambda_\beta \rangle = \begin{cases} 1\ &\text{if}\ \alpha=\beta\\ 0\ &\text{otherwise}\end{cases} \quad (\alpha, \beta \in \Gamma). \end{align*} A \textit{Markushevich basis} for a Banach space $E$ is a biorthogonal system $\{ (x_\gamma, \lambda_\gamma) : \gamma \in \Gamma \}$ in $E$ such that $\{\lambda_\gamma : \gamma \in \Gamma \}$ separates the points of $E$ and such that $\overline{{\rm span \,}} \{x_\gamma : \gamma \in \Gamma \} = E$. For an in-depth discussion of Markushevich bases see \cite{HMVZ}, in which a Markushevich basis is referred to as an ``M-basis''. We now prove a lemma and its corollary which we shall use to prove Corollary \ref{2.3} below. \begin{lemma} \label{2.5.8} Let $E$ be a Banach space containing an uncountable biorthogonal system. Then $E$ contains a closed linear subspace $F$ such that both $F$ and $E/F$ are non-separable. \end{lemma} \begin{proof} Let $\left\{ (x_\gamma, \lambda_\gamma) : \gamma \in \Gamma \right\}$ be an uncountable biorthogonal system in $E$. We can write $\Gamma = \bigcup_{n=1}^\infty \Gamma_n$, where $$\Gamma_n = \{ \gamma \in \Gamma : \Vert x_\gamma \Vert, \Vert \lambda_\gamma \Vert \leq n \} \quad (n \in \N).$$ Since $\Gamma$ is uncountable, there must exist an $n \in \N$ such that $\Gamma_n$ is uncountable. Let $\Delta$ be an uncountable subset of $\Gamma_n$ such that $\Gamma_n \setminus \Delta$ is also uncountable, and set $F = \overline{{\rm span \,}} \{x_\gamma : \gamma \in \Delta \}$. 
The subspace $F$ is non-separable since $\{x_\gamma : \gamma \in \Delta \}$ is an uncountable set satisfying $$\Vert x_\alpha - x_\beta \Vert \geq \frac{1}{n} \vert \langle x_\alpha - x_\beta, \lambda_\alpha \rangle \vert = \frac{1}{n} \quad (\alpha, \beta \in \Delta, \alpha \neq \beta).$$ Let $q \colon E \rightarrow E/F$ denote the quotient map. It is well known that the dual map $q' \colon (E/F)' \rightarrow E'$ is an isometry with image equal to $F^\perp$. For each $\gamma \in \Gamma_n \setminus \Delta$ the functional $\lambda_\gamma$ clearly belongs to $F^\perp$, so that there exists $g_\gamma \in (E/F)'$ such that $q'(g_\gamma) = \lambda_\gamma$, and such that $\Vert g_\gamma \Vert = \Vert \lambda_\gamma \Vert$. We now see that $\{ q(x_\gamma) : \gamma \in \Gamma_n \setminus \Delta \}$ is an uncountable $1/n$-separated subset of $E/F$ because \begin{align*} \Vert q(x_\alpha) - q(x_\beta) \Vert &\geq \frac{1}{n} \vert \langle q(x_\alpha) - q(x_\beta), g_\alpha \rangle \vert =\frac{1}{n} \vert \langle x_\alpha - x_\beta, q'(g_\alpha) \rangle \vert \\ &= \frac{1}{n} \vert \langle x_\alpha - x_\beta, \lambda_\alpha \rangle \vert = \frac{1}{n} \qquad (\alpha, \beta \in \Gamma_n \setminus \Delta, \ \alpha \neq \beta). \end{align*} It follows that $E/F$ is non-separable. \end{proof} \begin{corollary} \label{2.5.7} Let $E$ be a non-separable, reflexive Banach space. Then $E$ contains a closed linear subspace $F$ such that both $F$ and $E/F$ are non-separable. \end{corollary} \begin{proof} By \cite[Theorem 5.1]{HMVZ} $E$ has a Markushevich basis $\left\{ (x_\gamma, \lambda_\gamma) : \gamma \in \Gamma \right\}$. The set $\Gamma$ must be uncountable since $E$ is non-separable and, by the definition of a Markushevich basis, $\overline{{\rm span \,}}\{x_\gamma : \gamma \in \Gamma \} = E$. Hence the result follows from Lemma \ref{2.5.8}. \end{proof} We now move on to discuss our main example. 
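Before turning to the main example, the rank-one construction in the proof of Proposition \ref{2.0} can be illustrated numerically in a finite-dimensional toy setting. The sketch below is ours and only mirrors the idea: build $T=\sum_n 2^{-n}\vert b_n\rangle\langle\lambda_n\vert$ from functionals spanning the annihilator $F^\perp$ and check that $\ker T=F$.

```python
import numpy as np

# Toy analogue in R^4 of the construction in Proposition 2.0: realise the
# subspace F = span{e1, e2} as the kernel of T = sum_n 2^{-n} |b_n><lambda_n|,
# where the functionals lambda_n span the annihilator F^perp = span{e3, e4}.
d = 4
F_basis = np.eye(d)[:, :2]          # columns spanning F
lams = np.eye(d)[:, 2:]             # functionals spanning F^perp
bs = np.random.default_rng(0).standard_normal((d, 2))  # plays the role of a basic sequence

T = sum(2.0 ** -(n + 1) * np.outer(bs[:, n], lams[:, n]) for n in range(2))

assert np.allclose(T @ F_basis, 0.0)      # F is contained in ker T
assert np.linalg.matrix_rank(T) == d - 2  # dim ker T = 2, so ker T = F
```

In infinite dimensions the same count is replaced by the weak*-density argument in the proof: the functionals $\lambda_n$ are weak*-dense in the unit ball of $F^\perp$, so annihilating all of them forces membership in $F$.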
Building on the work of Shelah and Stepr{\=a}ns~\cite{SS}, Wark constructed in \cite{Wa} a reflexive Banach space $E_W$ with the property that it is non-separable but has few operators in the sense that \begin{equation} \label{eq2.5.2} \B(E_W) = \C \, {\rm id}_{E_W} + \mathscr{X}(E_W), \end{equation} where $\mathscr{X}(E_W)$ denotes the ideal of operators on $E_W$ with separable range. Recently Wark gave a second example of such a space \cite{Wa2018} with the additional property that the space is uniformly convex. For the rest of our paper $E_W$ can be taken to be either of these spaces. In particular, the only properties of $E_W$ that we shall make use of are that it is reflexive, non-separable, and satisfies Equation \eqref{eq2.5.2}. \vskip 2mm \textit{Remark.} We briefly outline why the dual Banach algebra $\B(E_W')$ fits into the framework of~\cite{Wh}. A \emph{transfinite basis} for a Banach space~$X$ is a linearly independent family $\{ x_\alpha : \alpha < \gamma \}$ of vectors in $X$, where $\gamma$ is some infinite ordinal, such that $X_0 = \operatorname{span}\{ x_\alpha : \alpha<\gamma\}$ is dense in~$X$, and with the property that there is a constant $C\geqslant 1$ such that, for each ordinal $\beta<\gamma$, the linear map~$P_\beta\colon X_0\to X$ defined by \[ P_\beta \Bigl( \sum_{\alpha<\gamma} s_\alpha x_\alpha \Bigr) = \sum_{\alpha<\beta} s_\alpha x_\alpha \] has norm at most~$C$. In the notation of \cite{Wa} and \cite{Wa2018} the family $\{ e(\alpha) : \alpha < \omega_1 \}$ is a transfinite basis of $E_W$. See the proofs of Theorem 2 in \cite{Wa} or Proposition 8 in \cite{Wa2018}. It is shown in \cite{Ros} that Banach spaces with transfinite bases have the approximation property. Since the duals of reflexive Banach spaces with the approximation property also have this property, $E_W'$ has the approximation property. 
It follows from \cite[Corollary 5.6]{Wh} that the algebra of compact operators $\K(E_W')$ is a compliant Banach algebra in the sense of \cite[Definition 5.4]{Wh}. Hence $\K(E_W')$, and its multiplier algebra $\B(E_W')$, fit into the framework of that paper. In particular \cite[Theorem 6.3]{Wh} gives a complete description of the weak*-closed left ideals of $\B(E_W')$, although we shall not need this in the sequel. \vskip 2mm In what follows we denote the image of a bounded linear operator $T$ by $\operatorname{im} T$. \begin{proposition} \label{2.5.6} Let $F$ be a closed linear subspace of the Banach space $E_W$ with the property that $F = \overline{\operatorname{im} T_1+ \cdots + \operatorname{im} T_n}$, for some $n \in \N$, and $T_1, \ldots, T_n \in \B(E_W)$. Then either $F$ or $E_W/F$ is separable. \end{proposition} \begin{proof} Suppose that $F = \overline{ \operatorname{im} T_1+ \cdots + \operatorname{im} T_n},$ for some $n \in \N,$ and some $T_1, \ldots, T_n \in \B(E_W)$. By \eqref{eq2.5.2} there exist $\alpha_1, \ldots, \alpha_n \in \C$ and $S_1, \ldots, S_n \in \mathscr{X}(E_W)$ such that $$T_i = \alpha_i {\rm id}_{E_W} + S_i \quad (i=1, \ldots, n).$$ If every $\alpha_i$ equals zero, then $F = \overline{\operatorname{im} S_1+ \cdots + \operatorname{im} S_n}$, which is separable. Otherwise, without loss of generality, we may assume that $\alpha_1 \neq 0$. Let $x \in E_W$. Then $T_1x = \alpha_1 x + S_1x$, implying that $$x = \frac{1}{\alpha_1} \left(T_1x - S_1x \right) \in F + \overline{\operatorname{im} S_1}.$$ As $x$ was arbitrary, it follows that $E_W = F + \overline{\operatorname{im} S_1}$, so that $$E_W/F = \frac{\left(F + \overline{\operatorname{im} S_1} \right)}{F} \cong \frac{\overline{\operatorname{im} S_1}}{\left(\overline{\operatorname{im} S_1} \cap F \right)}.$$ Hence, it follows that $E_W/F$ is separable. \end{proof} We can now prove our two theorems. \begin{theorem} \label{2.1} Let $D$ be a closed linear subspace of $E_W'$. 
Then the following conditions are equivalent: \begin{enumerate} \item[{\rm (a)}] either $D$ or $E_W'/D$ is separable; \item[{\rm (b)}] $D = \ker T$, for some $T \in \B(E_W')$; \item[{\rm (c)}] there exist $n \in \N$, and $T_1, \ldots, T_n \in \B(E_W')$ such that $D = \bigcap_{i = 1}^n \ker T_i$. \end{enumerate} \end{theorem} \begin{proof} We first prove that (a) implies (b). Indeed, let $D$ be a closed linear subspace of $E_W'$. By \cite{L}, we may apply Proposition \ref{2.0a} to $E_W'$ to see that if $D$ is separable then it may be realised as the kernel of some $T \in \B(E_W')$. If instead $E_W'/D$ is separable, then we may apply Proposition \ref{2.0}. It is trivial that (b) implies (c), so it remains to prove that (c) implies (a). Let $D$ be a closed linear subspace of $E_W'$ that can be written in the given form for some $n \in \N$ and some $T_1, \ldots, T_n \in \B(E_W')$. Set $F = D_\perp$. Since $E_W$ is reflexive, there exist $S_1, \ldots, S_n \in \B(E_W)$ such that, for each $i = 1, \ldots, n$, $T_i = S_i'$, the dual operator of $S_i$. It follows that $$F = D_\perp = \left( \bigcap_{i = 1}^n \ker S_i' \right)_\perp = \overline{\operatorname{im} S_1+ \cdots + \operatorname{im} S_n}.$$ Hence, by Proposition \ref{2.5.6}, either $F$ or $E_W/F$ is separable. Since $F$ is also reflexive, it now follows from the formulae $(E_W/F)' \cong D$ and $F' \cong E_W'/D$ that either $D$ or $E_W'/D$ is separable. \end{proof} \begin{corollary} \label{2.3} The Banach space $E_W'$ contains a closed linear subspace $D$ which is not of the form $\bigcap_{i=1}^n \ker T_i$, for any choice of $n \in \N$, and operators $T_1, \ldots, T_n \in \B(E_W')$. \end{corollary} \begin{proof} By Corollary \ref{2.5.7} $E_W'$ contains a closed linear subspace $D$ such that neither $D$ nor $E_W'/D$ is separable. The result now follows from Theorem \ref{2.1}. \end{proof} \begin{theorem} \label{2.5.13} The dual Banach algebra $\B(E_W')$ is not weak*-topologically left Noetherian. 
\end{theorem} \begin{proof} To simplify notation set $E= E_W'$. Let $D$ be a closed linear subspace of $E$ as in Corollary \ref{2.3} and set $$\mathscr{I} := \left\{T \in \B(E): \ker T \supset D \right\}.$$ It is clear that $\mathscr{I}$ is a left ideal of $\B(E)$, and it is weak*-closed since $$\mathscr{I}= \left\{ x \otimes \lambda : x \in D, \ \lambda \in E' \right\}^\perp.$$ We shall show that this ideal fails to be weak*-topologically finitely-generated. Assume towards a contradiction that there exist $n \in \N$ and $T_1, \ldots, T_n \in \B(E)$ such that $$\mathscr{I} = \overline{\B(E)T_1+ \cdots + \B(E)T_n}^{w^*}.$$ We show that \begin{equation*} \label{eq2.1} \bigcap_{T \in \mathscr{I}} \ker T = \bigcap_{i=1}^n \ker T_i. \end{equation*} Indeed, let $x \in \bigcap_{i=1}^n \ker T_i$, and $S \in \mathscr{I}$. Take a net $(S_\alpha)$ in $\B(E)T_1+ \cdots + \B(E)T_n$ converging to $S$ in the weak*-topology. Then for any $\lambda \in E'$ we have $$ 0 = \lim_\alpha \langle S_\alpha(x), \lambda \rangle = \lim_\alpha \langle x \otimes \lambda, S_\alpha \rangle = \langle x \otimes \lambda, S \rangle = \langle S(x), \lambda \rangle,$$ and as $\lambda$ was arbitrary it follows that $S(x) = 0$. As $x$ was arbitrary $ \bigcap_{i=1}^n \ker T_i \subset \bigcap_{T \in \mathscr{I}} \ker T$, and the reverse inclusion is trivial. Observe that $D \subset \bigcap_{T \in \mathscr{I}} \ker T$. Conversely, given $x \in E \setminus D$, we may pick $\lambda \in E'$ such that $\langle x, \lambda \rangle =1$, and $\ker \lambda \supset D$. Then the operator $\vert x \rangle \langle \lambda \vert$ belongs to $\mathscr{I}$, but $\vert x \rangle \langle \lambda \vert(x) = x \neq 0$, so that in fact $D = \bigcap_{T \in \mathscr{I}} \ker T= \bigcap_{i=1}^n \ker T_i$. However, this contradicts the choice of $D$. \end{proof} This is the only example known to us of a dual Banach algebra which is not weak*-topologically left Noetherian. 
It would be interesting to know if there are examples of the form $M(G)$, the measure algebra of a locally compact group $G$. In the light of \cite[Corollary~1.6(i)]{Wh} this would be particularly interesting for a compact group $G$. It would also be interesting to know whether the Fourier--Stieltjes algebra of a locally compact group ever fails to be weak*-topologically left Noetherian. Another interesting problem would be to characterise those closed linear subspaces $F$ of a non-separable Banach space $E$ such that $F$ is the kernel of some bounded linear operator on $E$. \subsection*{Acknowledgements} \noindent The second author is supported by the French ``Investissements d'Avenir'' program, project ISITE-BFC (contract ANR-15-IDEX-03). The article is based on part of the second author's PhD thesis, and as such we would like to thank Garth Dales, as well as the thesis examiners Gordon Blower and Tom K\"orner, for their careful reading of earlier versions of this material and their helpful comments. We are grateful to Hugh Wark and Tomasz Kania for some useful email exchanges.
https://arxiv.org/abs/2103.11432
Results and questions on matchings in groups and vector subspaces of fields
A matching from a finite subset $A$ of an abelian group to another subset $B$ is a bijection $f:A\rightarrow B$ with the property that $a+f(a)$ never lies in $A$. A matching is called acyclic if it is uniquely determined by its multiplicity function. Motivated by a question of E. K. Wakeford on canonical forms for symmetric tensors, the study of matchings and acyclic matchings in abelian groups was initiated by C. K. Fan and J. Losonczy in [16, 26], and was later generalized to the context of vector subspaces in a field extension [13, 1]. We discuss the acyclic matching and weak acyclic matching properties and we provide results on the existence of acyclic matchings in finite cyclic groups. As for field extensions, we completely classify field extensions with the linear acyclic matching property. The analogy between matchings in abelian groups and in field extensions is highlighted throughout the paper and numerous open questions are presented for further inquiry.
\section{Introduction} The notion of matchings in abelian groups was introduced by Fan and Losonczy in \cite{MR1371651} in order to generalize a geometric property of lattices in Euclidean space. The study of acyclic matchings was motivated by an old problem of Wakeford concerning canonical forms for symmetric tensors \cite{MR1576066}. This notion has been investigated for non-abelian groups as well \cite{MR2388613}, but we solely work with abelian groups. Throughout this paper, $G$ denotes an additive abelian group. \begin{definition}\label{main definition} Let $B$ be a finite subset of $G$ which does not contain the neutral element. For any subset $A$ in $G$ with the same cardinality as $B$, a {\it matching} from $A$ to $B$ is defined to be a bijection $f:A\to B$ such that for any $a\in A$ we have $a+f(a)\not\in A$. For any matching $f$ as above, the associated \textit{multiplicity function} $m_f:G\to \mathbb{Z}_{\geq0}$ is defined via the rule: \begin{equation}\label{main criterion} \forall x\in G,\quad m_ f(x)=\#\{a\in A\,:\, a+ f(a)=x\}. \end{equation} A matching $ f:A\to B$ is called {\it acyclic} if for any matching $g:A\to B$, $m_f=m_g$ implies $f=g$. \end{definition} In view of Definition \ref{main definition}, a natural question to ask is whether two finite subsets $A$ and $B$ of $G$ satisfying $\#A=\#B$ and $0\notin B$ can be matched or be acyclically matched, i.e. is there a matching or an acyclic matching from $A$ onto $B$? It is known that there exists a matching $f:A\rightarrow B$ if $A=B$, if every element of $B$ is a generator of $G$, or if $G$ is torsion-free \cite{MR1618439}. The latter result in particular implies that a torsion-free abelian group $G$ possesses the \textit{matching property}: For any two subsets $A$ and $B$ as in Definition \ref{main definition}, there exists a matching $f:A\rightarrow B$. 
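Since all objects in Definition \ref{main definition} are finite, matchings and their multiplicity functions can be enumerated directly. The following brute-force Python sketch (the helper names are ours and purely illustrative) does so for subsets of $\Bbb{Z}/n\Bbb{Z}$:

```python
from itertools import permutations
from collections import Counter

def matchings(A, B, n):
    """All matchings f: A -> B in Z/nZ, each encoded as the tuple
    (f(a) for a in sorted(A))."""
    A = sorted(A)
    return [perm for perm in permutations(B)
            if all((a + b) % n not in A for a, b in zip(A, perm))]

def multiplicity(A, f, n):
    """Multiplicity function m_f, stored as a Counter over the sums a + f(a)."""
    return Counter((a + b) % n for a, b in zip(sorted(A), f))

def is_acyclic(A, f, n, all_f):
    """f is acyclic iff no other matching has the same multiplicity function."""
    m = multiplicity(A, f, n)
    return all(g == f or multiplicity(A, g, n) != m for g in all_f)
```

For instance, in $\Bbb{Z}/5\Bbb{Z}$ with $A=\{1,2\}$ and $B=\{1,3\}$ the only matching is $1\mapsto 3$, $2\mapsto 1$, which is therefore acyclic.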
In \cite{MR1618439}, Losonczy proves that abelian groups with the matching property are precisely those that are either torsion-free or cyclic of prime order; namely, groups that do not possess any non-trivial proper finite subgroup. Indeed, torsion-free abelian groups admit the stronger \textit{acyclic matching property} in the sense that for any $A$ and $B$ of the same cardinality with $0\notin B$, there exists an acyclic matching $f:A\to B$ \cite{MR1409421,MR1618439}. The situation for groups $\Bbb{Z}/p\Bbb{Z}$ of prime order is more subtle. Paper \cite{MR3393940} shows that for primes $p$ with $p\equiv -1 \pmod{8}$ the group $\Bbb{Z}/p\Bbb{Z}$ does not have the acyclic matching property by exhibiting an explicit subset of $\Bbb{Z}/p\Bbb{Z}$ that does not admit any acyclic matching onto itself. Based on experimental evidence, it is conjectured in \cite{MR3991937} that $\Bbb{Z}/p\Bbb{Z}$ does not admit the acyclic matching property for any prime $p>5$. We shall prove the following theorems on the existence of matchings between certain subsets of a cyclic group of prime order. \begin{theorem}\label{main1} Let $A$ be a subset of the cyclic group $\Bbb{Z}/p\Bbb{Z}$ where $p$ is a prime number. Suppose $A$ satisfies $A\cap 2A=\emptyset$ and is of size $k$ where $k.2^{k-1}<p$.\footnote{For any integer $m$, $mA$ denotes the subset $\{ma\,:\, a\in A\}$.} Then $A$ is acyclically matched to itself via the identity map. \end{theorem} \begin{theorem}\label{main2} Let $p$ be a prime number and suppose $A$ and $B$ are finite subsets of $\Bbb{Z}/p\Bbb{Z}$ with $0\notin B$ which are of the same size $k$. If $k\leq\sqrt{\log_2 p}-1$, then there exists an acyclic matching $f:A\rightarrow B$. \end{theorem} \noindent The theorems will be established in \S2; the proof of the first one is based on a linear algebra argument while the second one utilizes a result from additive number theory.
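Before turning to the proofs, Theorem \ref{main1} can be probed numerically. The sketch below (an illustrative helper of our own, feasible only for small $k$) verifies by brute force that the identity map is an acyclic matching on sample subsets satisfying the hypotheses:

```python
from itertools import permutations
from collections import Counter

def identity_is_acyclic(A, p):
    """Brute-force check that id: A -> A is an acyclic matching in Z/pZ.
    Assumes A ∩ 2A = ∅, so that the identity map is a matching at all."""
    A = sorted(A)
    assert all((2 * a) % p not in A for a in A), "need A ∩ 2A = ∅"
    m_id = Counter((2 * a) % p for a in A)
    for perm in permutations(A):
        if perm == tuple(A):
            continue  # skip the identity itself
        is_matching = all((a + b) % p not in A for a, b in zip(A, perm))
        if is_matching and Counter((a + b) % p for a, b in zip(A, perm)) == m_id:
            return False  # another matching with the same multiplicity function
    return True
```

For example, $A=\{1,4\}\subset\Bbb{Z}/11\Bbb{Z}$ and $A=\{1,3,9\}\subset\Bbb{Z}/13\Bbb{Z}$ satisfy $A\cap 2A=\emptyset$ and $k.2^{k-1}<p$, and both pass the check.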
The condition $A\cap 2A=\emptyset$ in Theorem \ref{main1} is necessary for ${\rm{id}}:A\rightarrow A$ to be a matching. In general, all bijections $f:A\rightarrow B$ are matchings provided that $A\cap (A+B)=\emptyset$.\footnote{The sumset $A+B$ is defined as $\{a+b\,:\, a\in A \text{ and }b\in B\}$.} Such a profusion of matchings may then suggest the existence of an acyclic matching from $A$ to $B$. \begin{question}\label{main question} Let $A,B$ be subsets of the cyclic group $\Bbb{Z}/p\Bbb{Z}$ where $p$ is a prime number. Suppose $A$ and $B$ are of the same size $k$. Does the condition $A\cap(A+B)=\emptyset$ guarantee the existence of an acyclic matching $f:A\rightarrow B$? \end{question} \noindent A partial answer will be provided in Proposition \ref{Answer}. In view of the discussion above, an abelian group $G$ is said to admit the \textit{weak acyclic matching property} if there exists an acyclic matching between any two subsets $A$ and $B$ of $G$ that have the same cardinality and satisfy $A\cap (A+B)=\emptyset$. Any cyclic group $\Bbb{Z}/n\Bbb{Z}$ of order smaller than $23$ satisfies the weak acyclic matching property, but the existence of infinitely many cyclic groups $\Bbb{Z}/p\Bbb{Z}$ of prime order with this property is an open question \cite{MR3991937}. The investigation of matchings in abelian groups has an enumerative aspect as well. Paper \cite{MR2847271} for instance provides a lower bound for the number of matchings $A\rightarrow B$ under an assumption on $B$. Using a graph-theoretical interpretation of matchings, in \S2.2 we exhibit bounds on the number of matchings $A\rightarrow B$ by invoking some classical results from the theory of permanents; see Proposition \ref{inequality proposition}. Given a field extension $L/F$, an analogous notion of matching between two $F$-subspaces of $L$ is developed by Eliahou and Lecouvey in \cite{MR2735391}.
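To make the enumerative aspect concrete: a bijection $f:A\rightarrow B$ is a matching precisely when it avoids the positions with $a+b\in A$, so the number of matchings equals the permanent of the corresponding $0$/$1$ matrix. A naive sketch (our own illustration; Ryser's formula would scale better):

```python
from itertools import permutations

def count_matchings(A, B, n):
    """Number of matchings A -> B in Z/nZ, computed as the permanent of the
    0/1 matrix M with M[i][j] = 1 iff a_i + b_j does not lie in A."""
    A, B = sorted(A), sorted(B)
    M = [[int((a + b) % n not in A) for b in B] for a in A]
    k = len(A)
    # naive permanent via expansion over permutations (fine for small k)
    return sum(all(M[i][s[i]] for i in range(k))
               for s in permutations(range(k)))
```

For example, in $\Bbb{Z}/5\Bbb{Z}$ one finds a single matching from $\{1,2\}$ to $\{1,3\}$, and likewise a single matching from $G\setminus\{0\}$ to itself.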
\begin{definition}\label{linear definition} Let $A$ and $B$ be two $k$-dimensional $F$-subspaces of $L$. An ordered basis $\mathcal{A}=\{a_1,\ldots,a_k\}$ of $A$ is said to be \textit{matched} to an ordered basis $\mathcal{B}=\{b_1,\ldots,b_k\}$ of $B$ if \begin{equation}\label{linear criterion} a^{-1}_iA\cap B\subseteq \langle b_1,\ldots,\widehat{b}_i,\ldots,b_k\rangle \end{equation} for each $1\leq i\leq k$. We say that $A$ is matched to $B$ (or $A$ is \textit{matchable} to $B$) if every ordered basis $\mathcal{A}$ of $A$ can be matched to an ordered basis $\mathcal{B}$ of $B$. \end{definition} \noindent To see the analogy, notice that if \eqref{linear criterion} is satisfied, then no $a_ib_i$ can lie in $A=\langle\mathcal{A}\rangle$ and thus, in the multiplicative group $L^\times$, $a_i\mapsto b_i$ defines a matching $\mathcal{A}\rightarrow\mathcal{B}$ in the sense of Definition \ref{main definition}. One can easily check that having \eqref{linear criterion} for all $i\in\{1,\dots,k\}$ implies \begin{equation}\label{intersection criterion} \dim_F\bigcap_{i\in J}\left(a_i^{-1}A\cap B\right)\leq k-\# J \end{equation} for any $J\subseteq\{1,\dots,k\}$. In particular, setting $J=\{1,\dots,k\}$, the subspace $\bigcap_{i=1}^k\left(a_i^{-1}A\cap B\right)$ must be trivial, which cannot happen if $1\in B$. This brings us to the linear analogue of the matching property in groups. \begin{definition}\label{linear matching} A field extension $L/F$ has the \textit{linear matching property} if every finite-dimensional $F$-subspace $A$ is matched to any other subspace $B$ of $L$ which is of the same dimension and satisfies $1\notin B$. \end{definition} \noindent Similar to the result from \cite{MR1618439} mentioned above, an extension $L/F$ has the linear matching property if there is no finite intermediate extension $E/F$ with $E\neq F,L$ \cite{MR2735391}.\footnote{There is a slight gap in the statement of \cite[Theorem 2.6]{MR2735391} which is corrected in \cite{2012arXiv1208.2792E}.
The classification of field extensions with the linear matching property that we mentioned is based on \cite[Theorem 2.6]{2012arXiv1208.2792E}.} Also in the group-theoretic context, we mentioned that if every element of $B$ is a generator of $G$, then there exists a matching $A\rightarrow B$ \cite[Proposition 3.4]{MR1618439}. A similar result has been established in the linear setting: Given a finite field extension $L/F$, two $F$-subspaces $A$ and $B$ of the same dimension are matchable if $B$ is a \textit{primitive} $F$-subspace of $L$ \cite[Theorem 4.2]{MR3723776}. Recall that $B$ is called primitive if $F(\alpha)=L$ for each $\alpha\in B\setminus \{0\}$. We shall show the following regarding the largest possible dimension of a primitive subspace. \begin{theorem}\label{primitive} Let $L/F$ be a finite simple field extension. Then the largest possible dimension of a primitive $F$-subspace of $L$ is given by $$ [L:F]-\max_{\stackrel{F\subseteq E\subsetneq L}{E \text{ a proper intermediate subfield}}}[E:F]. $$ \end{theorem} \noindent A proof will appear in \S3 after a review of linear matchings. Indeed, motivated by \cite{MR2735391}, paper \cite{MR3393940} develops a notion of when $A$ can be \textit{acyclically} matched to $B$, hence a definition of the \textit{linear acyclic matching property} for field extensions. The former notion is relevant only when $A\cap AB=\{0\}$; compare with the condition $A\cap(A+B)=\emptyset$ in Question \ref{main question} from the group-theoretic setting. Unlike the case of abelian groups, the linear matching property for field extensions is equivalent to its acyclic counterpart. In \S3, after reviewing the definition of the \textit{linear acyclic matching property}, we shall prove the following by building on the arguments that appear in \cite{MR3393940}: \begin{theorem}\label{linear theorem} A field extension $L/F$ admits the linear acyclic matching property if and only if there is no finite intermediate extension $E/F$ with $E\neq F,L$.
\end{theorem} \noindent This generalizes \cite[Theorem 4.5]{MR3393940}. \subsection*{Outline} We have devoted \S2 to matchings in the context of abelian groups and \S3 to linear matchings in the context of field extensions. In \S2.1, after a brief review of the literature on matchings and acyclic matchings, we prove Theorem \ref{main1} and Theorem \ref{main2}, as well as Proposition \ref{Answer} that provides a partial answer to Question \ref{main question}. These results are all concerned with the existence of acyclic matchings. Some questions on counting the number of matchings are discussed in \S2.2. The proof of Theorem \ref{primitive} on primitive subspaces in a simple field extension appears in \S3.1. Finally, in \S3.2, we prove Theorem \ref{linear theorem} that characterizes field extensions with the linear acyclic matching property. \section{Matchings in abelian groups} \subsection{Acyclic matchings} We begin with two examples of matchings and a related definition. \begin{definition} The \textit{support} of a matching $f:A\rightarrow B$, denoted by $\mathrm{supp}(f)$, is the subset of elements $x\in G$ at which the multiplicity function $m_f:G\rightarrow\Bbb{Z}_{\geq 0}$ is positive; that is, the subset of elements $x\in G$ that may be realized as $a+f(a)$ for an element $a$ of $A$. \end{definition} \begin{example} Let us examine matchings $f:A\rightarrow B$ in which the subsets $A$ and $B$ of the abelian group $G$ are as large as possible. If $\#A=\#B=\#G-1$, then $B=G\setminus\{0\}$ since $0$ cannot belong to $B$; and any such matching must be of the form $$ f:A=G\setminus\{g_1\}\rightarrow B=G\setminus\{0\}: a\mapsto g_1-a $$ for an appropriate $g_1\in G$. The support of $f$ contains only $g_1$, at which $m_f$ takes the value $\#G-1$. This clearly shows that the matching above is acyclic. In the case that $A$ and $B$ are of cardinality $\#G-2$, matchings between them can still be determined, albeit with a slightly more complicated analysis.
Write $A$ as $G\setminus\{g_1,g_2\}$ and $B$ as $G\setminus\{0,g_3\}$ where $g_1\neq g_2$ and $g_3\neq 0$. Since $f$ is a matching, for any $a\in A$, $f(a)$ should be either $g_1-a$ or $g_2-a$ because otherwise $a+f(a)\in A$. Thus $G\setminus\{0,g_3\}$ may be partitioned as $B_1\sqcup B_2$ where \begin{equation}\label{auxiliary1} B_1:=\{f(a)\,:\, f(a)=g_1-a\},\quad B_2:=\{f(a)\,:\, f(a)=g_2-a\}. \end{equation} Conversely, a matching $f:A\rightarrow B$ may be recovered from a partition $$G\setminus\{0,g_3\}=B_1\sqcup B_2$$ as \begin{equation}\label{formula} f(a)= \begin{cases} g_1-a\, \text{ if } g_1-a\in B_1\\ g_2-a\, \text{ if } g_2-a\in B_2 \end{cases} \end{equation} provided that $(g_1-B_1)\cap(g_2-B_2)=\emptyset$, a condition guaranteeing that $f$ is well defined. Thus we focus on characterizing partitions of $G\setminus\{0,g_3\}$ satisfying this condition. The condition can be rewritten first as $B_1\cap((g_1-g_2)+B_2)=\emptyset$ and then, given $B_1\sqcup B_2=G\setminus\{0,g_3\}$, as $(g_1-g_2)+B_2\subset B_2\cup\{0,g_3\}$. Consequently, if $b$ is in $B_2$, then $(g_1-g_2)+b$ must belong to $B_2$ unless $b=g_2-g_1$ or $b=g_2-g_1+g_3$. The former is impossible as, in view of \eqref{auxiliary1}, $g_2-g_1\in B_2$ means $g_1\in A$. Therefore, for any $b\in B_2$ different from $g_2-g_1+g_3$, we have $(g_1-g_2)+b\in B_2$. Repeating this argument, $2(g_1-g_2)+b$ must belong to $B_2$ too and then $3(g_1-g_2)+b$ until we reach a positive integer $l$ with $l(g_1-g_2)+b=g_2-g_1+g_3$. We conclude that $B_2$ is a progression of the form \begin{equation}\label{auxiliary2} B_2=\{g_3+(g_2-g_1),g_3+2(g_2-g_1),\dots, g_3+l(g_2-g_1)\} \end{equation} where $l$ is a positive\footnote{Here we have assumed $g_3+(g_2-g_1)\neq 0$ in which case $g_3+(g_2-g_1)$ must lie in $B_2$ rather than $B_1$. This is because then $f$ cannot send $g_1-g_3\in A=G\setminus\{g_1,g_2\}$ to $g_1-(g_1-g_3)=g_3$ (see \eqref{formula}). 
In the case that $g_3+(g_2-g_1)=0$, $l$ is zero in \eqref{auxiliary2}, $B_2=\emptyset$ and $f$ is given by $a\mapsto g_1-a$.} integer with $i(g_2-g_1)\neq 0,-g_3$ for any $i\in\{1,\dots,l\}$, a condition required for $B_2\cap\{0,g_3\}=\emptyset$. The knowledge of $l$ completely determines the matching $f:A=G\setminus\{g_1,g_2\}\rightarrow B=G\setminus\{0,g_3\}$ due to formula \eqref{formula} in which $B_2$ is as in \eqref{auxiliary2} and $B_1$ is the complement of $B_2$ in $G\setminus\{0,g_3\}$. The integer $l$ can be recovered from the multiplicity function $m_f:G\rightarrow\Bbb{Z}_{\geq 0}$: it attains the value $l$ at $g_2$, the value $\#G-l-2$ at $g_1$ and is zero elsewhere. We conclude that when $A$ and $B$ are of cardinality $\#G-2$, every matching $A\rightarrow B$ is acyclic. \end{example} \begin{example} Let us consider a family of matchings $f:A\rightarrow B$ for which $A\cap (A+B)=\emptyset$ as in Question \ref{main question}. Let us impose an extra condition: $B\cup\{0\}$ is a subgroup of $G$. First, notice that the multiplicity function $m_f$ of a matching $f:A\rightarrow B$ can never take any value larger than one; otherwise, there exist distinct $a,a'\in A$ for which $x=a+f(a)=a'+f(a')$. So $a=a'+f(a')-f(a)\in A+B$ as $f(a')-f(a)\in B$, contradicting $A\cap(A+B)=\emptyset$. Next, we claim that any matching $f:A\rightarrow B$ is acyclic. To see this, we shall show that two matchings $f,g:A\rightarrow B$ with $m_f=m_g$ coincide. For an arbitrary $a\in A$, there should be an $a'\in A$ with $a+f(a)=a'+g(a')$ -- in fact a unique one -- because $m_g$ must be positive at $a+f(a)$ just as $m_f$ is. This can be written as $a=a'+(g(a')-f(a))$ which, due to $g(a')-f(a)\in B\cup\{0\}$, contradicts $A\cap(A+B)=\emptyset$ unless $g(a')=f(a)$, which also implies $a=a'$. \end{example} We next turn to results concerned with the existence of matchings or acyclic matchings.
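The conclusion of the first example, that all matchings between subsets of cardinality $\#G-2$ are acyclic, can be confirmed exhaustively in small cyclic groups. A brute-force sketch (names ours) that compares the multiplicity functions of all matchings:

```python
from itertools import permutations
from collections import Counter

def all_matchings_acyclic(A, B, n):
    """True iff all matchings A -> B in Z/nZ have pairwise distinct
    multiplicity functions (equivalently, every matching is acyclic)."""
    A = sorted(A)
    ms = [perm for perm in permutations(B)
          if all((a + b) % n not in A for a, b in zip(A, perm))]
    sigs = [frozenset(Counter((a + b) % n for a, b in zip(A, perm)).items())
            for perm in ms]
    return len(sigs) == len(set(sigs))

# subsets of cardinality #G - 2: A = G \ {g1, g2}, B = G \ {0, g3}
n, g1, g2, g3 = 7, 1, 2, 3
A = set(range(n)) - {g1, g2}
B = set(range(n)) - {0, g3}
```

Other choices of $g_1\neq g_2$ and $g_3\neq 0$ pass the same check, in agreement with the example above.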
As mentioned in the introduction, it is established in \cite{MR1618439} that an abelian group has the matching property if and only if it is torsion-free or cyclic of prime order. The proof therein utilizes Hall's marriage theorem and a result of Kneser offering a lower bound on the size of sumsets in abelian groups. We reproduce the ``if'' part with a slightly different approach based on K\"onig's classical theorem in graph theory. The ideas developed in the proof will later be used to estimate the number of matchings in \S2.2. \begin{theorem}[\cite{MR1618439}]\label{t*} Let $G$ be an abelian group which is either torsion-free or cyclic of prime order. Suppose $A$ and $B$ are two finite subsets of $G$ which are of the same size and $0\notin B$. Then there exists a matching $f:A\rightarrow B$. \end{theorem} \begin{proof} The key idea is to construct a bipartite graph $\mathcal{G}_{A,B}$ whose vertex set is the disjoint union $A\dot\cup B$ of $A$ and $B$. We connect a vertex $a\in A$ to a vertex $b\in B$ if and only if $a+b\notin A$. The graph-theoretical notion of a \textit{perfect matching} now comes into play: Recall that a \textit{matching} $M$ of a graph $\mathcal{G}$ is a collection of edges no two of which share a vertex. A matching $M$ is called \textit{perfect} if every vertex of $\mathcal{G}$ is incident to an edge belonging to $M$. Clearly, a perfect matching in a bipartite graph amounts to a bijection between the two parts with the property that each vertex is connected to its image under the bijection. Given the way we defined $\mathcal{G}_{A,B}$, it suffices to establish the existence of a perfect matching in $\mathcal{G}_{A,B}$. We shall show that $\mathcal{G}_{A,B}$ admits a perfect matching by invoking the following fact from graph theory: A bipartite graph $\mathcal{G}$ has a perfect matching if the largest possible size of an independent subset of its vertices is $\frac{\# V(\mathcal{G})}{2}$.
To see this, notice that the complement of a maximum independent subset of vertices is a minimum \textit{vertex cover}: a subset of vertices which is incident to every edge, and its size is the smallest among subsets with this property. K\"onig's theorem (see \cite{MR1956924} for instance) asserts that, in a bipartite graph, there exists a matching whose size is equal to the size of any (and hence all) minimum vertex cover. Therefore, if $\mathcal{G}$ has a minimum vertex cover of size $\frac{\# V(\mathcal{G})}{2}$, it should have a matching consisting of $\frac{\# V(\mathcal{G})}{2}$ edges. These edges have no vertex in common and so are incident to all $\# V(\mathcal{G})$ vertices of $\mathcal{G}$, which proves the aforementioned fact. To finish the proof, we need to show that the largest possible size of an independent subset of vertices of $\mathcal{G}_{A,B}$ is $$ \frac{\# V(\mathcal{G}_{A,B})}{2}=\#A=\#B. $$ But $A$ and $B$ are already independent, forming the bipartition of the vertices of $\mathcal{G}_{A,B}$. So one just needs to argue that there is no larger independent set. Let $S:=A'\sqcup B'$ be an independent set where $A'\subseteq A$ and $B'\subseteq B$. Thus an element $a\in A'$ is not connected to any $b\in B'$; that is, given the definition of $\mathcal{G}_{A,B}$, one has $A'+B'\subseteq A$. The containment remains true with $B'':=B'\sqcup\{0\}$ in place of $B'$ (recall that $0\notin B'$). We now invoke a classical result of Kneser (cf. \cite[Theorem 4.3]{MR1477155}): Since $G$ has no finite non-trivial proper subgroup, one has: $$\#(A'+B'')\geq\min\left(\# G, \#A'+\#B''-1\right).$$ Combining with $A'+B''\subseteq A\subset G$, we deduce that $$ \#S=\#A'+\#B'=\#A'+\#B''-1 $$ cannot be larger than $\#A=\#B$. \end{proof} The preceding theorem indicates that an abelian group which is either torsion-free or cyclic of prime order has the matching property.
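The proof via K\"onig's theorem is constructive in spirit: a perfect matching of $\mathcal{G}_{A,B}$ can be computed with the standard augmenting-path algorithm. A minimal sketch (not taken from the paper; any bipartite matching routine would serve):

```python
def find_matching(A, B, p):
    """Perfect matching in the bipartite graph G_{A,B} (a ~ b iff a+b mod p
    lies outside A) via Kuhn's augmenting-path algorithm; None if there is none."""
    A, B = sorted(A), sorted(B)
    adj = {a: [b for b in B if (a + b) % p not in A] for a in A}
    match = {}  # current partial matching, stored as b -> a

    def augment(a, seen):
        for b in adj[a]:
            if b not in seen:
                seen.add(b)
                if b not in match or augment(match[b], seen):
                    match[b] = a
                    return True
        return False

    for a in A:
        if not augment(a, set()):
            return None
    return {a: b for b, a in match.items()}
```

For $p=7$, $A=\{1,2,3\}$ and $B=\{1,2,4\}$ this returns a matching, as Theorem \ref{t*} guarantees, while for the subgroup $A=\{0,2,4\}$ of $\Bbb{Z}/6\Bbb{Z}$ and $B=\{2,3,5\}$ it reports that no matching exists.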
As for the stronger acyclic matching property, torsion-free abelian groups admit the latter property as established in \cite{MR1618439} whereas there exist infinitely many primes $p$ for which $\Bbb{Z}/p\Bbb{Z}$ does not possess the acyclic matching property: It is proved in \cite{MR3393940} that if $p\equiv -1 \pmod 8$, then $\Bbb{Z}/p\Bbb{Z}$ does not have this property. Building on the results of that paper (\cite[Lemma 2.1 and Proposition 2.3]{MR3393940}), we present a slightly different proof below which considers a different, larger, family of primes. \begin{theorem}[\cite{MR3393940}]\label{Jafari} There are infinitely many primes $p$ for which $\Bbb{Z}/p\Bbb{Z}$ does not satisfy the acyclic matching property. \end{theorem} \begin{proof} We claim that if the multiplicative order of $2$ modulo an odd prime $p$ is odd, then $\Bbb{Z}/p\Bbb{Z}$ lacks the acyclic matching property. Notice that there are infinitely many such primes. Indeed, the subset formed by these primes is of density $\frac{7}{24}$ according to a result of Hasse \cite{MR205975}. Fix an odd prime $p$ for which the multiplicative order ${\rm{ord}}_2p$ is odd, and set $$ A=B=\left\{\overline{2^m}\,:\,m=0,1,\dots,{\rm{ord}}_2p-1\right\}. $$ Now let $f:A\rightarrow B=A$ be a matching. The bijection $f$ and its inverse $f^{-1}$ obviously have the same multiplicity function. Thus $f$ is acyclic only if $f=f^{-1}$. But, since $\#A$ is odd, any permutation of order two of $A$ must have a fixed point. This is clearly impossible here because $A$ is invariant under multiplication by $2$ and so if $a\in A$ is a fixed point, $a+f(a)=2a$ lies in $A$ as well, violating the matching property. \end{proof} We next turn to Question \ref{main question} regarding the existence of an acyclic matching from $A$ onto $B$ whenever all bijections $A\rightarrow B$ are matchings. The proposition below establishes this under the assumption that $B$, following the terminology of \cite{MR810691}, is a \textit{Sidon set}.
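The primes used in the proof are easy to generate. The sketch below (helper names ours) lists the primes $p<100$ for which the multiplicative order of $2$ modulo $p$ is odd; the smallest is $p=7$, where ${\rm{ord}}_2p=3$:

```python
def ord2(p):
    """Multiplicative order of 2 modulo an odd prime p."""
    k, x = 1, 2 % p
    while x != 1:
        x = (2 * x) % p
        k += 1
    return k

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# odd primes below 100 with ord_2(p) odd; for each of these, Z/pZ fails
# the acyclic matching property by the argument in the proof above
bad = [p for p in range(3, 100, 2) if is_prime(p) and ord2(p) % 2 == 1]
```

The list contains primes $p\equiv-1\pmod 8$, such as $7$, $23$, $47$ and $79$, alongside primes like $73$ and $89$ that are not $\equiv-1\pmod 8$.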
\begin{proposition}\label{Answer} Let $A,B$ be subsets of an abelian group $G$. Suppose $A$ and $B$ are of the same size satisfying $A\cap (A+B)=\emptyset$. Then there exists an acyclic matching $f:A\rightarrow B$ if we assume the equation $x+y=z+w$ has no solution in $B$ with $\{x,y\}\cap\{z,w\}=\emptyset$. \end{proposition} \begin{proof} Aiming for a contradiction, let $k$ be the smallest cardinality for which the proposition is false. Label elements of $A$ and $B$ as $A=\{a_1,\dots,a_k\}$ and $B=\{b_1,\dots,b_k\}$. Pick indices $i,j\in\{1,\dots,k\}$ arbitrarily. Bijections $f:A\rightarrow B$ with $a_i\mapsto b_j$ are all matchings and are in a one-to-one correspondence with the bijections $A\setminus\{a_i\}\rightarrow B\setminus\{b_j\}$. No such $f$ is acyclic, so there exists a bijection $g:A\rightarrow B$ with the same multiplicity function. It is possible to find such a $g$ with $g(a_i)\neq b_j$ as otherwise no bijection (matching) $A\setminus\{a_i\}\rightarrow B\setminus\{b_j\}$ would be acyclic contradicting the minimality of $k$. Thus there exist $f,g:A\rightarrow B$ with the same multiplicity functions that satisfy $f(a_i)=b_j$ and $g(a_i)\neq b_j$. In particular, $a_i+f(a_i)=a_i+b_j$ should be in the support of $g$ as well; that element may be realized as $a_{i'}+g(a_{i'})=a_{i'}+b_{j'}$ for suitable $i',j'\in\{1,\dots,k\}$. Notice that $i\neq i'$ and $j\neq j'$ as otherwise $a_i+b_j=a_{i'}+b_{j'}$ implies $g(a_i)=b_j$. We deduce that: For any $i,j\in\{1,\dots,k\}$, there exist $i',j'\in\{1,\dots,k\}$ with $a_i-a_{i'}=b_{j'}-b_j$ where $i\neq i'$ and $j\neq j'$. Fixing $i\in\{1,\dots,k\}$ and letting $j$ vary, by the pigeonhole principle, there exist $i'\in\{1,\dots,k\}\setminus\{i\}$ which is associated with two different indices $j$ and $\tilde{j}$: $a_i+b_j=a_{i'}+b_{j'}$ and $a_i+b_{\tilde{j}}=a_{i'}+b_{\tilde{j'}}$ where $i'\neq i$, $j'\neq j$, ${\tilde{j'}}\neq{\tilde{j}}$ and $j\neq\tilde{j}$. 
These equations may be written as $b_{j'}-b_j=b_{\tilde{j'}}-b_{\tilde{j}}=a_i-a_{i'}\neq 0$. Therefore, $b_j+b_{\tilde{j'}}=b_{j'}+b_{\tilde{j}}$ where no element from the left appears on the right. \end{proof} \begin{example} Here we provide an application of the preceding proposition. Consider the cyclic group $\Bbb{Z}/n\Bbb{Z}$ and let $k$ be a positive integer with $k>1$ and $(k-1)(2^{k-1}+1)<n$. We shall exhibit two subsets $A$ and $B$ of $\Bbb{Z}/n\Bbb{Z}$ of size $k$ for which $A\cap(A+B)=\emptyset$ and the condition of Proposition \ref{Answer} on $B$ are satisfied. Take $B$ to be the geometric progression $$\left\{\overline{1},\overline{2},\dots,\overline{2^{k-1}}\right\}.$$ Since $2^k<n$, if four residue classes $\overline{x}=\overline{2^r}$, $\overline{y}=\overline{2^s}$, $\overline{z}=\overline{2^t}$ and $\overline{w}=\overline{2^u}$ from the set above satisfy $\overline{x}+\overline{y}=\overline{z}+\overline{w}$, then $2^r+2^s=2^t+2^u$. This equation has no solution in non-negative integers $r,s,t,u$ with $\{r,s\}\cap\{t,u\}=\emptyset$: Dividing both sides by $2^v$ where $v:=\min\{r,s,t,u\}$ results in the equality of an odd number with an even number unless $v$ appears twice among $r,s,t,u$. Now according to the proposition, any subset $A$ of $\Bbb{Z}/n\Bbb{Z}$ of size $k$ with $A\cap(A+B)=\emptyset$ admits an acyclic matching onto $B$. One can for instance take $A$ to be an arithmetic progression such as $$\left\{\overline{a},\overline{a}+\overline{2^{k-1}+1},\dots,\overline{a}+\overline{(k-1)(2^{k-1}+1)}\right\}.$$ This is of size $k$ since $(k-1)(2^{k-1}+1)<n$, and no difference of its elements lies in $B$ because $\overline{i(2^{k-1}+1)}\neq \overline{2^j}$ for all $i,j\in\{0,\dots,k-1\}$ (again a byproduct of $(k-1)(2^{k-1}+1)<n$). 
\end{example} In the case of \textit{symmetric matchings} where $A=B$, Theorem \ref{main1} establishes the existence of an acyclic matching in the absence of the Sidon condition imposed in Proposition \ref{Answer}, but at the expense of limiting the cardinality of $A$. \begin{proof}[Proof of Theorem \ref{main1}] Aiming for a contradiction, suppose the identity map ${\rm{id}}:A\rightarrow A$ -- which is a matching due to $A\cap 2A=\emptyset$ -- is not acyclic. Therefore, writing $A$ as $\{a_1,\dots,a_k\}$ where $k:=\#A$, there should be a bijection $a_i\mapsto a_{\sigma(i)}$ with $\sigma\in{\rm{S}}_k\setminus\{{\rm{id}}\}$ which is a matching and has the same multiplicity function. In other words, $\{2a_i\}_{1\leq i\leq k}$ and $\{a_i+a_{\sigma(i)}\}_{1\leq i\leq k}$ coincide as multi-sets. Hence there must exist a second permutation $\lambda\in{\rm{S}}_k$ which yields the multi-set $\{2a_i\}_{1\leq i\leq k}$ as a re-ordering of the multi-set $\{a_i+a_{\sigma(i)}\}_{1\leq i\leq k}$, i.e. \begin{equation}\label{identities} a_{\lambda(i)}+a_{\sigma(\lambda(i))}=2a_i \text{ for all } i\in\{1,\dots,k\}. \end{equation} If $\lambda$ is identity, the same should be true about $\sigma$ which cannot be the case. So both permutations $\lambda$ and $\sigma\circ\lambda$ are different from identity. Equation \eqref{identities} can be rephrased in terms of permutation matrices: Denoting the permutation matrix corresponding to a permutation $\nu\in{\rm{S}}_k$ with $$ {\rm{P}}_\nu=[p_{ij}]_{1\leq i,j\leq k},\quad p_{ij}=\begin{cases} 1&j=\nu(i),\\ 0&\text{otherwise}, \end{cases} $$ the vector $[a_1\dots a_k]^{\rm{T}}$ -- whose entries are distinct -- should lie in the null space of $2{\rm{I}}_k-{\rm{P}}_\lambda-{\rm{P}}_{\sigma\circ\lambda}$. 
We claim that the hypothesis $k.2^{k-1}<p$ of the theorem implies that, given two non-identity permutations $\alpha,\beta\in{\rm{S}}_k$, no vector of $\Bbb{F}_p^k$ with distinct entries lies in the null space of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$. The resulting contradiction will then conclude the proof. Notice that the vector $[1\dots 1]^{\rm{T}}$ is in the kernel of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$. Our approach is to show that, over $\Bbb{F}_p$, the nullity of this matrix is one, hence no vector with distinct entries belongs to its null space. The key idea is to first prove this in characteristic zero: The matrix is real; so it suffices to show that the entries of any real vector in its kernel must be identical. This easily follows from the fact that $\Bbb{R}$ is ordered: if the entries of a real vector $[x_1\dots x_k]^{\rm{T}}$ satisfy $2x_i=x_{\alpha(i)}+x_{\beta(i)}$ for all $i\in\{1,\dots,k\}$, then they must coincide because neither of the permutations $\alpha$ or $\beta$ is identity. The characteristic polynomial of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$ is now a monic polynomial $q(t)\in\Bbb{Z}[t]$ whose constant term is $0$. We claim that its coefficient of $t$ is non-zero; namely, the algebraic multiplicity of $0$ as an eigenvalue of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$ is one as well. This is due to the fact that the decomposition $$ \Bbb{R}^k=\{x_1+\dots+x_k=0\}\oplus\Bbb{R}.\{(1,\dots,1)\} $$ is invariant under the transformation $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$: The first subspace contains its image and the second one, as established, is its kernel. Hence $q(t)$ is $t$ times the characteristic polynomial for the restriction of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$ to its invariant subspace $\{x_1+\dots+x_k=0\}$. The latter polynomial has a non-zero constant term since $\{x_1+\dots+x_k=0\}$ intersects the kernel $\Bbb{R}.\{(1,\dots,1)\}$ of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$ trivially. 
We conclude that the coefficient of $t$ in the characteristic polynomial $q(t)$ of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$ is non-zero. If this remains true modulo $p$, then the rank of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$ over $\Bbb{F}_p$ will remain $k-1$. This is going to be achieved by arguing that the absolute value of the non-zero coefficient of $t$ in $q(t)$ should be less than $p$ if $k.2^{k-1}<p$. This coefficient is $(-1)^{k-1}$ times the sum of all $(k-1)\times(k-1)$ minors of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$ along the diagonal. In each column of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$, and thus in each column of these minors, the sum of positive entries, as well as the sum of opposites of negative entries, is at most two. Thus the minors cannot be larger than $2^{k-1}$ due to an inequality on determinants of real matrices from \cite{MR485920}. This results in the desired bound $k.2^{k-1}$ for the absolute value of the coefficient of $t$ in $q(t)$. \end{proof} \begin{remark}\label{prime only} In the proof above, one can replace the finite field $\Bbb{F}_p$ with $\Bbb{F}_{p^n}$: The rank of $2{\rm{I}}_k-{\rm{P}}_\alpha-{\rm{P}}_\beta$ will remain $k-1$ over any extension $\Bbb{F}_{p^n}$ of $\Bbb{F}_p$. Consequently, Theorem \ref{main1} remains valid with the additive group $(\Bbb{Z}/p\Bbb{Z})^n$ of $\Bbb{F}_{p^n}$ in place of $\Bbb{Z}/p\Bbb{Z}$. This, along with Proposition \ref{Answer}, is among the few occasions in this article where we discuss acyclic matchings in finite abelian groups which are not necessarily of prime order. Indeed, if $G$ has an element $g$ of order $1<k<\#G$, then there is no matching, let alone an acyclic matching, from $A:=\langle g\rangle$ onto any subset $B$ of $G\setminus\{0\}$ of cardinality $k$ that contains $g$ (\cite[Theorem 3.1]{MR1618439}). 
This example puts into perspective the focus on cyclic groups of prime order in Theorems \ref{main1} and \ref{main2}, as well as the imposition of the condition $A\cap(A+B)=\emptyset$ in Proposition \ref{Answer}. \end{remark} Theorem \ref{main2} provides a result similar to Theorem \ref{main1} on the existence of acyclic matchings, but now we deal with general matchings $f:A\rightarrow B$ and the condition $A\cap(A+B)=\emptyset$ is dropped at the expense of making subsets smaller than what appears in Theorem \ref{main1}. The existence of acyclic matchings was established in \cite[Theorem 1]{MR1409421} by Alon et al. in the case of subsets of $\Bbb{Z}^n$. The proof uses the existence of a total ordering on $\Bbb{Z}^n$ in an essential way. In \cite[Theorem 4.1]{MR1618439}, Losonczy generalizes this result to torsion-free abelian groups by observing that any torsion-free abelian group admits a total ordering (cf. \cite{MR0007779}). Below, we prove Theorem \ref{main2} by invoking a theorem from arithmetic combinatorics that allows one to order the elements of a small enough subset of $\Bbb{Z}/p\Bbb{Z}$ in a certain way compatible with the group structure. \begin{proof}[Proof of Theorem \ref{main2}] We reproduce the proof of \cite{MR1618439} by utilizing a \textit{rectification principle} which asserts that a sufficiently small subset of $\Bbb{Z}/p\Bbb{Z}$ may be embedded in the integers while preserving certain additive properties. We shall use the sharpest possible version established in \cite{MR2452847}: \begin{itemize} \item For any subset $X$ of $\Bbb{Z}/p\Bbb{Z}$ with $\#X\leq\log_2 p$ there exists an injection $\varphi:X\hookrightarrow\Bbb{Z}$ with the property that a relation such as $x+y=z+w$ among the elements of $X$ implies \begin{equation}\label{rectification} \varphi(x)+\varphi(y)=\varphi(z)+\varphi(w).
\end{equation} \end{itemize} This result is applicable to $X:=(A+B)\cup A\cup B\cup\{0\}$ since its size is no larger than $$ \#(A+B)+\#A+\#B+1\leq (\# A)(\# B)+\#A+\#B+1=(\#A+1)^2\leq\log_2 p. $$ Replacing $\varphi$ with $\varphi-\varphi(0)$, one may assume that $\varphi$ sends the identity element of $G$ to zero. We can then call an element $b$ of $B$ positive or negative according to the sign of the integer $\varphi(b)$. In particular, if $b\in B$ is positive, then for every $a\in A$, $\varphi(a+b)$ -- which is equal to $\varphi(a)+\varphi(b)$ by \eqref{rectification} -- is larger than $\varphi(a)$: $$ \varphi(a+b)=\varphi(a+b)+\varphi(0)=\varphi(a)+\varphi(b)>\varphi(a). $$ Using this fact, we construct an acyclic matching from $A$ to $B$ first in the case that the elements of $B$ are all positive. Write the elements of $A$ as $a_1,a_2,\dots,a_k$ so that \begin{equation}\label{ordering} \varphi(a_1)<\varphi(a_2)<\dots<\varphi(a_k). \end{equation} Starting from $a_1$, notice that $a_1+B\not\subseteq A$: the two sets have the same cardinality, so the containment would force $a_1+B=A\ni a_1$ and hence $0\in B$, contrary to our assumption. Thus there exists $b\in B$ with $a_1+b\notin A$. Define $f(a_1)$ to be the element $b$ of $B$ with this property for which $\varphi(b)$ is as small as possible. One can continue in this manner inductively: Suppose the values of $f$ at $a_1,\dots,a_{i-1}$ are defined. We have \begin{equation}\label{construction} a_i+(B\setminus\{f(a_1),\dots,f(a_{i-1})\})\not\subseteq A\setminus\{a_1,\dots,a_{i-1}\} \end{equation} because the two sets have the same cardinality, and the first one does not contain $a_i$ while the second one does. We then define $f(a_i)$ to be an element $b$ of $B\setminus\{f(a_1),\dots,f(a_{i-1})\}$ with $a_i+b\notin A\setminus\{a_1,\dots,a_{i-1}\}$ and with $\varphi(b)$ as small as possible.
Notice that the matching property is satisfied: $a_i+f(a_i)$ does not belong to $A$, since otherwise it would lie in $\{a_1,\dots,a_{i-1}\}$, which is impossible because of the positivity of $f(a_i)\in B$: $$ \varphi(a_i+f(a_i))>\varphi(a_i)>\varphi(a_1),\dots,\varphi(a_{i-1}). $$ This procedure results in a bijection $f:A\rightarrow B$ which is a matching. We next show that it is acyclic. Assume the contrary: let $g:A\rightarrow B$ be a different matching with the same multiplicity function, i.e. $m_f=m_g$. Since $f\neq g$, one can pick an $x\in A+B$ satisfying \begin{equation}\label{auxiliary9} \{a\in A\,:\, a+f(a)=x\}\neq\{a\in A\,:\, a+g(a)=x\} \end{equation} with $\varphi(x)$ as small as possible. The sets from \eqref{auxiliary9} are of the same size since $m_f(x)=m_g(x)$. We can choose an element from the second one which is not in the first: Let $i\in\{1,\dots,k\}$ be the smallest index satisfying $a_i+f(a_i)\neq x$ and $a_i+g(a_i)=x$. We now reach a contradiction: One cannot have $\varphi(a_i+f(a_i))<\varphi(a_i+g(a_i))=\varphi(x)$ because then \eqref{auxiliary9} holds with $a_i+f(a_i)$ in place of $x$, contradicting the way $x$ was chosen. Hence $\varphi(a_i+f(a_i))\geq \varphi(a_i+g(a_i))$ or, invoking the additivity property \eqref{rectification}, $\varphi(f(a_i))\geq\varphi(g(a_i))$. But $\varphi$ is injective and $f(a_i)\neq g(a_i)$, so $\varphi(f(a_i))>\varphi(g(a_i))$. Due to our choice of $i$, the matchings $f$ and $g$ coincide on $\{a_1,\dots,a_{i-1}\}$. So $g(a_i)$ is a member of the subset $$B\setminus\{f(a_1),\dots,f(a_{i-1})\}=B\setminus\{g(a_1),\dots,g(a_{i-1})\}$$ appearing in \eqref{construction}; and $a_i+g(a_i)\notin A$ since $g$ is a matching. But, in view of $\varphi(f(a_i))>\varphi(g(a_i))$, this violates the way $f(a_i)$ was chosen. Finally, we should address the situation where $B$ has negative elements (recall that $0\notin B$). Partition $B$ as $B_-\sqcup B_+$ where $$ B_-:=\{b\in B\,:\, \varphi(b)<0\}, \quad B_+:=\{b\in B\,:\, \varphi(b)>0\}.
$$ Denote the size of $B_-$ by $1\leq l\leq k=\#B$. Writing the elements of $A$ as $a_1,\dots,a_k$ as before (see \eqref{ordering}), we can similarly partition $A$ into the subsets $$ A_-:=\{a_1,\dots,a_l\}, \quad A_+:=\{a_{l+1},\dots,a_k\}; $$ which have the same sizes as $B_-$ and $B_+$ respectively. From what we have established so far, there is an acyclic matching $A_+\rightarrow B_+$, and also an acyclic matching $A_-\rightarrow B_-$ by a straightforward modification of our construction above for the case that the elements of the target set are all negative. These two acyclic matchings define a matching $f:A=A_-\sqcup A_+\rightarrow B=B_-\sqcup B_+$ which we claim is acyclic as well. It suffices to show that any other matching $g:A\rightarrow B$ with $m_f=m_g$ maps $A_-$ onto $B_-$ and $A_+$ onto $B_+$. To see this, consider the quantity $$ \min_{A'\subset A,\, \#A'=l}\sum_{a'\in A'}(\varphi(a')+\varphi(h(a')))=\min_{A'\subset A,\, \#A'=l}\sum_{a'\in A'}\varphi(a'+h(a')) $$ attached to a bijection $h:A\rightarrow B$: as $h$ varies among all bijections, this quantity is minimized precisely when $A'=A_-$ and $h(A_-)=B_-$; conditions that $f$ satisfies. The quantity for $h=g$ is the same as the corresponding number for $h=f$ due to $m_f=m_g$. We deduce that $g(A_-)=B_-$ and this concludes the proof. \end{proof} \begin{remark} In the rectification principle used in the proof above, one can trade the logarithmic bound for a linear one provided that a ``small doubling'' condition is imposed: \cite[Theorem 2.1]{MR1608875} says that for any $\sigma>0$, there exists a constant $c>0$ depending only on $\sigma$ so that for any prime number $p$ and any subset $X$ of $\Bbb{Z}/p\Bbb{Z}$ satisfying $\#(X+X)<\sigma(\#X)$ and $\#X\leq cp$, there exists an injection $\varphi:X\hookrightarrow\Bbb{Z}$ for which $\varphi(x)+\varphi(y)=\varphi(z)+\varphi(w)$ whenever $x+y=z+w$.
Consequently, setting $X$ to be $(A+B)\cup A\cup B\cup\{0\}$ as in the proof, if $X+X$ is comparable in size with $X$ in the sense above, one can construct an acyclic matching $f:A\rightarrow B$ provided that the size of $X$ is no larger than $cp$. \end{remark} Theorems \ref{main1} and \ref{main2} raise the following natural question: \begin{question}\label{Mersenne} What is the largest $\epsilon>0$ for which there exist $c_1>0$ and $c_2$ with the property that for any prime number $p$, any two subsets $A$ and $B$ of $\Bbb{Z}/p\Bbb{Z}$ with $0\notin B$ and $\#A=\#B\leq c_1(\log_2p)^\epsilon+c_2$ can be matched acyclically? \end{question} \noindent Theorem \ref{main2} implies that the answer to Question \ref{Mersenne} satisfies $\epsilon\geq\frac{1}{2}$. On the other hand, we do not expect any $\epsilon>1$ to work because there are conjecturally infinitely many Mersenne primes $p$; and for any such prime, the construction appearing in the proof of Theorem \ref{Jafari} exhibits a subset of size $O(\log_2p)$ in $\Bbb{Z}/p\Bbb{Z}$ which admits no acyclic matching onto itself. \subsection{Enumerative questions} Let $G$ be an arbitrary abelian group and suppose $A$ and $B$ are two finite subsets of $G$ of size $k$ with $0\notin B$. The goal of this section is to provide bounds for the number of matchings $A\rightarrow B$ (i.e. $\#\mathcal{M}(A,B)$) in terms of $k$. The key idea is to interpret elements of $\mathcal{M}(A,B)$ as perfect matchings in a certain bipartite graph; an idea that previously appeared in the proof of Theorem \ref{t*}. \begin{definition} Notation as above, the bipartite graph $\mathcal{G}_{A,B}$ associated with $A$ and $B$ has the disjoint union $A\dot\cup B$ as its set of vertices, with $a\in A$ connected to $b\in B$ if and only if $a+b\notin A$. \end{definition} \noindent Matchings $A\rightarrow B$ are clearly in correspondence with the perfect matchings in $\mathcal{G}_{A,B}$ (cf. Definition \ref{main definition}).
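To make this correspondence concrete, the following small computational sketch (our own illustration, not part of the paper's development; the prime $p=7$ and the sets $A$ and $B$ are arbitrary choices) counts the matchings $A\rightarrow B$ in $\Bbb{Z}/7\Bbb{Z}$ by brute force and checks the count against the permanent of the $0$--$1$ matrix recording the edges of $\mathcal{G}_{A,B}$.

```python
from itertools import permutations

# Toy example in Z/7Z; the prime p and the sets A, B are illustrative choices
# (0 is not in B, as required).
p = 7
A = [1, 2, 3]
B = [2, 3, 4]

# A bijection f: A -> B is a matching iff a + f(a) never lands in A.
def is_matching(image):
    return all((a + b) % p not in A for a, b in zip(A, image))

# Brute-force count of matchings A -> B over all bijections.
num_matchings = sum(is_matching(image) for image in permutations(B))

# Biadjacency-type matrix of G_{A,B}: entry is 1 iff a_i + b_j is not in A.
M = [[int((a + b) % p not in A) for b in B] for a in A]

# Permanent computed straight from its defining sum (fine for small k).
def per(matrix):
    n = len(matrix)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i, j in enumerate(sigma):
            prod *= matrix[i][j]
        total += prod
    return total

# The number of matchings equals the permanent of this matrix.
assert num_matchings == per(M)
print(num_matchings)  # -> 4
```

For this choice of $A$ and $B$ both counts equal $4$, in agreement with the correspondence between matchings and perfect matchings of $\mathcal{G}_{A,B}$.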
\begin{question} Is there a graph-theoretical interpretation of acyclic matchings from $A$ to $B$? \end{question} In view of the preceding discussion, enumerating matchings $A\rightarrow B$ amounts to counting perfect matchings in a bipartite graph. There is an extensive literature on the problem of counting the number of matchings in a graph; see \cite{MR2889590, MR2438582} for recent developments and open problems on this topic. In particular, it is well-known that the number of perfect matchings in a simple undirected graph $\mathcal{G}$ is not larger than the square root of the \textit{permanent} of its adjacency matrix, with equality if $\mathcal{G}$ is bipartite \cite{MR136399,kasteleyn1961statistics}. Recall that the definition of the permanent of a square matrix $M=[m_{i,j}]_{1\leq i,j\leq n}$ is similar to that of the determinant except for the sign associated with each term in the summation: $$ {\rm{per}}(M)=\sum_{\sigma\in {\rm{S}}_n}\prod_{i=1}^nm_{i,\sigma(i)}. $$ In the case of the bipartite graph $\mathcal{G}_{A,B}$, whose vertices are partitioned into two parts $A$ and $B$ of the same size, the permanent of the adjacency matrix is the square of that of the \textit{biadjacency matrix} \begin{equation}\label{biadjacency} M_{A,B}=[m_{ij}]_{1\leq i,j\leq k}, \quad m_{ij}:=\begin{cases}1&\mathrm{if}\; a_i+b_j\notin A,\\ 0&\mathrm{otherwise},\end{cases} \end{equation} where we have denoted the elements of $A$ and $B$ by $a_1,\dots,a_k$ and $b_1,\dots,b_k$ respectively.
This relation between permanents is due to the fact that the adjacency matrix of $\mathcal{G}_{A,B}$ is given by $$ \begin{bmatrix} \mathbf{0}&M_{A,B}\\ (M_{A,B})^{\rm{T}}&\mathbf{0} \end{bmatrix}_{2k\times 2k}. $$ This whole discussion results in the following: \begin{proposition}\label{number of matchings} With subsets $A$ and $B$ of $G$ as above, one has $$ \#\mathcal{M}(A,B)={\rm{per}}(M_{A,B}) $$ where $M_{A,B}$ is the matrix from \eqref{biadjacency}. \end{proposition} We now arrive at the main result of this section which provides upper and lower bounds on the number of matchings: \begin{proposition}\label{inequality proposition} Suppose $A$ and $B$ are subsets of an abelian group $G$ of the same size. For each $a\in A$ and $b\in B$ define: $$ A_b:=\{a'\in A\,:\, a'+b\notin A\},\quad B_a:=\{b'\in B\,:\, a+b'\notin A\}. $$ The number of matchings from $A$ to $B$ admits the upper bound below: \begin{equation}\label{inequality} \#\mathcal{M}(A,B)\leq\min\left\{\prod_{a\in A}((\#B_a)!)^{\frac{1}{\#B_a}},\prod_{b\in B}((\#A_b)!)^{\frac{1}{\#A_b}}\right\}.\footnote{Here $(0!)^{\frac{1}{0}}$ should be interpreted as zero; if one of the sets $A_b$ or $B_a$ is empty (e.g. when $b=0$), then there is no matching $A\rightarrow B$.} \end{equation} Moreover, denoting the common size of $A$ and $B$ by $k$, suppose the numbers $\{\#B_a\}_{a\in A}$ and $\{\#A_b\}_{b\in B}$ are written in increasing order as $$ \#B_{a_1}\leq\dots\leq \#B_{a_k}, \quad \#A_{b_1}\leq\dots\leq \#A_{b_k}. $$ Then we have the following lower bound for the number of matchings from $A$ to $B$: \begin{equation}\label{inequality'} \#\mathcal{M}(A,B)\geq\max\left\{\prod_{i=1}^k\max(\#B_{a_i}-i+1,0),\prod_{i=1}^k\max(\#A_{b_i}-i+1,0)\right\}. \end{equation} \end{proposition} \begin{proof} Following the idea developed in \cite{MR2398830}, we derive \eqref{inequality} as a result of the famous \textit{Bregman-Minc inequality}.
The inequality, conjectured by Minc \cite{MR155843} and proved by Bregman \cite{bregman1973some}, states that the permanent of a $(0,1)$-matrix $M$ of size $n$ satisfies \begin{align*} {\rm{per}}(M)\leq \prod_{i=1}^n (r_i!)^\frac{1}{r_i} \end{align*} where $r_i$ is the sum of entries (the number of $1$'s) in the $i^{\rm{th}}$ row of $M$. Replacing $M$ with its transpose, the result of course also holds with columns in place of rows. Applying this result to the permanent of the biadjacency matrix $M_{A,B}$ from \eqref{biadjacency}, ${\rm{per}}(M_{A,B})$ is not greater than either of the two products appearing on the right-hand side of \eqref{inequality} because the sum of entries in the row (respectively column) of $M_{A,B}$ corresponding to an element $a\in A$ (resp. $b\in B$) is $\#B_a$ (resp. $\#A_b$). Proposition \ref{number of matchings} now yields \eqref{inequality}. The inequality \eqref{inequality'} immediately follows from a lower bound for the permanent established in \cite{MR195879}: Let $M$ be an $n\times n$ $(0,1)$-matrix. Order the row sums of $M$ as $r'_1\leq\dots \leq r'_n$. Then one has $$ {\rm{per}}(M)\geq \prod_{i=1}^n \max(r'_i-i+1,0). $$ A similar result clearly holds for the column sums instead of the row sums. Applying these to the $k\times k$ biadjacency matrix $M_{A,B}$ then yields \eqref{inequality'}. \end{proof} The next example discusses a lower bound for the number of symmetric matchings $A\rightarrow A$. \begin{example} The celebrated \textit{van der Waerden conjecture}, established independently in \cite{egorycev1981solution} and \cite{falikman1981proof}, asserts that the permanent of an $n\times n$ matrix with non-negative entries whose entries in each row and column add up to $r$ is at least $r^n\frac{n!}{n^n}$. This result can be used to find a lower bound for the number of matchings in certain cases. Take $A$ to be a subset of an abelian group $G$ that does not contain the identity element and is of size $k$.
Suppose all the intersections $A\cap (A-a)$ (where $a\in A$) are of the same size $k-r$ where $1\leq r\leq k$. Applying the van der Waerden conjecture to the biadjacency matrix of $\mathcal{G}_{A,A}$, we deduce that $$ \#\mathcal{M}(A,A)\geq r^k\frac{k!}{k^k}. $$ One example of subsets $A$ with such an intersection property is the following: Let $\psi:G\rightarrow G$ be a group automorphism and take $A$ to be the orbit of a non-identity element under the action of $\psi$. The intersections $A\cap (A-a)$ are of the same cardinality as $\psi$ bijects them onto each other: $$ \psi(A\cap (A-a))=\psi(A)\cap (\psi(A)-\psi(a))=A\cap (A-\psi(a)). $$ \end{example} We conclude the subsection with the following question which asks about the number of ways that a function $G\rightarrow\Bbb{Z}_{\geq 0}$ can be realized as the multiplicity function of a matching between two subsets of $G$. \begin{question} Let $G$ be a finite abelian group and $k$ a positive integer smaller than the cardinality of $G$. Suppose $m:G\rightarrow\Bbb{Z}_{\geq 0}$ is a function with $\sum_{g\in G}m(g)=k$. What is the number of matchings $f:A\rightarrow B$ between two subsets of size $k$ of $G$ that satisfy $m_f=m$? \end{question} \noindent It is worth pointing out that realizing $m:G\rightarrow\Bbb{Z}_{\geq 0}$ as an $m_f$ basically requires writing the sequence \begin{equation}\label{auxiliary10} \left\{\underbrace{g,\dots,g}_{m(g) \text{ times}}\right\}_{g\in G, m(g)>0} \end{equation} as a sequence of differences $\{a-(-f(a))\}_{a\in A}$ where both sequences $\{a\}_{a\in A}$ and $\{-f(a)\}_{a\in A}$ of elements of $G$ have distinct terms. Such a problem is studied in \cite{MR3920528}: A sequence of length $k$ such as \eqref{auxiliary10} can always be written as a difference $\{a_i-(-b_i)\}_{1\leq i\leq k}$ such that $f:a_i\mapsto b_i$ is a bijection between two subsets of size $k$ \cite[Theorem 1]{MR3920528}.
It is not hard to show that when $k<\frac{\#G}{3}$, one can construct $f$ so that it is a matching. \section{Matchings in linear subspaces of field extensions} \subsection{Primitive subspaces} Let $L/F$ be a field extension and suppose $A$ and $B$ are two $F$-subspaces of $L$ with $\dim_FA=\dim_FB<\infty$ and $1\notin B$. Primitive subspaces of $L$ (defined in \cite{MR3723776}) naturally arise in deciding whether $A$ is matched to $B$ (cf. Definition \ref{linear definition}). To elaborate, we review the following situations from the literature where the answer is positive: \begin{itemize} \item $A$ is matched to $B$ if the adjunction of any non-zero element of $B$ to $F$ generates $L$, i.e. if the subspace $B$ is primitive \cite[Theorem 4.2]{MR3723776}; \item $A$ is matched to $B$ if $A=B$ \cite[Theorem 5.1]{MR2735391}; \item $A$ is matched to $B$ if $L/F$ has no proper finite intermediate extension $E/F$ of degree larger than one \cite[Theorem 5.2]{MR2735391}.\footnote{A refinement of this result appears in \cite[Theorem 5.5]{2012arXiv1208.2792E}: In any extension $L/F$, $A$ can be matched to $B$ if $\dim_FA=\dim_FB$ is smaller than $\min_{F\subsetneq E\subseteq L}[E:F]$. See \cite[Corollary 3.6]{MR3723776} for the corresponding group-theoretic result.} \end{itemize} Notice that, as discussed in \S1, these results have parallels in the context of matching a finite subset $A$ of an abelian group $G$ to another finite subset $B$ which is of the same size and does not contain the identity element of $G$ (see \cite[Proposition 3.4]{MR1618439}, \cite[Theorem 2.1]{MR1618439} and \cite[Theorem 3.1]{MR1618439} respectively). The proofs utilize a \textit{dimension criterion} which is based on a linear version of Hall's marriage theorem. (Similarly, Hall's marriage theorem is used in the proof of the aforementioned results from \cite{MR1618439}.)
The dimension criterion asserts that the inequalities \eqref{intersection criterion} are necessary and sufficient conditions for an ordered basis $\{a_1,\dots,a_n\}$ of $A$ to be matched to an ordered basis of $B$. \begin{example} This example, adapted from \cite{MR2735391}, demonstrates a situation where a subspace cannot be matched to another, and should be regarded as a linear analogue of an example mentioned in Remark \ref{prime only}: Let $E:=F(a)$ be a proper subfield of $L$ where $a\in L$ is algebraic over $F$ of degree $k>1$. Set $A$ to be the same as $E$, meaning $A=\langle 1,a,\dots,a^{k-1}\rangle$. We claim that $A$ is not matched to $B:=\langle a,\dots,a^{k-1},x\rangle$ where $x$ is chosen arbitrarily from $L\setminus E$ (notice that $B$ is also of dimension $k$ and does not contain $1$). This is due to the fact that for any basis $\{a_1,\dots,a_k\}$ of $A$, the subspaces $a_i^{-1}A\cap B$ all coincide with $\langle a,\dots,a^{k-1}\rangle$, thus \eqref{intersection criterion} fails when $\#J>1$. \end{example} We now focus on proving Theorem \ref{primitive}. Given a finite extension $L/F$, primitive subspaces are those $F$-subspaces of $L$ that intersect any extension $E\subsetneq L$ of $F$ only trivially. By the primitive element theorem, there are only finitely many intermediate subfields if and only if $L/F$ is a simple extension (of course there is no primitive subspace unless $L/F$ is simple). Therefore, to determine the largest possible dimension of a primitive subspace of $L$ in the setting of Theorem \ref{primitive}, one needs to determine the same for $F$-subspaces which intersect members of a certain finite family $\mathcal{V}$ of $F$-subspaces of $L$ trivially -- $\mathcal{V}$ being the family of proper intermediate subfields of the extension $L/F$. This is easier to do if the base field $F$ is infinite or at least large enough; see Lemma \ref{infinite field} below.
However, finite-dimensional vector spaces over finite fields may be covered by finitely many of their proper subspaces. So, in order to establish Theorem \ref{primitive} for finite $F$, one should take into account that $\mathcal{V}$ here is a special family of subspaces whose members are subfields. This is more subtle and will be discussed in Lemma \ref{finite field}. \begin{lemma}\label{infinite field} Let $V$ be a finite-dimensional vector space over a field $F$ and let $\mathcal{V}=\{V_i\}_{i=1}^{m}$ be a finite family of subspaces of $V$ where $m\leq\#F$. Then the largest possible dimension of a subspace $W$ of $V$ which intersects every member of $\mathcal{V}$ trivially is given by $$ \dim_FV-\max_{1\leq i\leq m}\dim_FV_i. $$ \end{lemma} \begin{proof} Clearly any subspace $W$ whose dimension is larger than the codimension of a subspace from $\mathcal{V}$ does not work, as their intersection is then non-trivial. So it suffices to construct a subspace $W$ of dimension $k:=\min_{1\leq i\leq m}{\rm{codim}}_FV_i$ whose intersection with every $V_i$ is trivial. We shall use the following fact frequently: $V$ cannot be covered by a finite number of its proper subspaces unless the number of the subspaces is larger than $\#F$ (in which case $F$ is finite) \cite[Lemma 2]{MR111739}. In particular, we have $V\neq\bigcup_{i=1}^mV_i$. Pick an element $x_1\in V\setminus\bigcup_{i=1}^mV_i$. Next, if the subspaces $V_i\oplus\langle x_1\rangle$ are still proper, one can choose an element $x_2$ from $V\setminus\bigcup_{i=1}^m \left(V_i\oplus\langle x_1\rangle\right)$. The procedure can be continued until reaching \begin{equation}\label{complement} x_k\in V\setminus\bigcup_{i=1}^m \left(V_i\oplus\langle x_1,\dots,x_{k-1}\rangle\right), \end{equation} in which case one of the subspaces $V_i\oplus\langle x_1,\dots,x_{k-1},x_k\rangle$ coincides with $V$. Now taking $W$ to be $\langle x_1,\dots,x_k\rangle$, \eqref{complement} implies $W\cap V_i=\{0\}$ for all $i\in\{1,\dots,m\}$.
\end{proof} We now turn to finite fields. As usual, for any prime power $q$, the finite field with $q$ elements is denoted by $\Bbb{F}_q$. \begin{lemma}\label{finite field} The codimension of the largest $\Bbb{F}_q$-subspace of $\Bbb{F}_{q^n}$ all of whose non-zero elements are primitive is the same as the largest possible degree over $\Bbb{F}_q$ that a proper intermediate subfield can attain. \end{lemma} \begin{proof} Let $p_1<\dots<p_s$ be the prime factors of $n$. The maximal subfields of the extension $\Bbb{F}_{q^n}/\Bbb{F}_q$ are $\Bbb{F}_{q^{n/p_1}},\dots,\Bbb{F}_{q^{n/p_s}}$. The first one has the largest possible degree over $\Bbb{F}_q$, namely $\frac{n}{p_1}$. The goal is to come up with an $\Bbb{F}_q$-subspace $W$ of $\Bbb{F}_{q^n}$ whose dimension is $n-\frac{n}{p_1}$ and which intersects each of $\Bbb{F}_{q^{n/p_1}},\dots,\Bbb{F}_{q^{n/p_s}}$ trivially. First, notice that by replacing $\Bbb{F}_q$ with the intersection $\bigcap_{i=1}^s\Bbb{F}_{q^{n/p_i}}=\Bbb{F}_{q^{n/(p_1\cdots p_s)}}$ of the maximal subfields we can assume that $n$ is a product of distinct primes, say $n=p_1\dots p_s$ where $p_1<\dots<p_s$ as before. The second step is to apply the normal basis theorem: There is an element $\theta\in \Bbb{F}_{q^n}$ for which \begin{equation}\label{basis} \{\sigma^j(\theta)\}_{j=1}^{n=p_1\dots p_s} \end{equation} is a basis for $\Bbb{F}_{q^n}$ as a vector space over $\Bbb{F}_q$. Here, $\sigma:x\mapsto x^q$ is the Frobenius element, the generator of \begin{equation}\label{Galois} {\rm{Gal}}(\Bbb{F}_{q^n}/\Bbb{F}_q)\cong\Bbb{Z}/n\Bbb{Z}\cong \Bbb{Z}/p_1\Bbb{Z}\times\dots\times\Bbb{Z}/p_s\Bbb{Z}. \end{equation} Our strategy is to construct $W$ as the subspace spanned by a subset $\{\sigma^j(\theta)\}_{j\in T}$ of the basis in \eqref{basis} where $T$ is an appropriate subset of $\Bbb{Z}/n\Bbb{Z}$ of size $n-\frac{n}{p_1}$.
Besides the cardinality, since we want all the intersections $W\cap\Bbb{F}_{q^{n/p_i}}$ to be trivial, we need the following: For any non-zero vector $(c_j)_{j\in T}$ of elements of $\Bbb{F}_q$, the element $\sum_{j\in T}c_j\sigma^j(\theta)$ should not belong to any $\Bbb{F}_{q^{n/p_i}}$. But in the Galois correspondence, the latter field corresponds to the subgroup $\langle\sigma^{n/p_i}\rangle\cong\Bbb{Z}/p_i\Bbb{Z}$ of \eqref{Galois}. Hence \begin{equation}\label{auxiliary6} \sum_{j\in T}c_j\sigma^j(\theta)\in \Bbb{F}_{q^{n/p_i}}\Leftrightarrow \sigma^{n/p_i}\left(\sum_{j\in T}c_j\sigma^j(\theta)\right)=\sum_{j\in T}c_j\sigma^j(\theta). \end{equation} But $\sigma^{n/p_i}\left(\sum_{j\in T}c_j\sigma^j(\theta)\right)=\sum_{j\in T}c_j\sigma^{j+\frac{n}{p_i}}(\theta) =\sum_{j\in T+\frac{n}{p_i}}c_{j-\frac{n}{p_i}}\sigma^{j}(\theta)$ where the indices $j$ are considered modulo $n=p_1\dots p_s$ (recall that $T\subset\Bbb{Z}/n\Bbb{Z}$). As the elements of the Galois orbit of $\theta$ are linearly independent (i.e. \eqref{basis} is a basis), equating the coefficients in the identity from \eqref{auxiliary6} implies that $c_j=c_{j-\frac{n}{p_i}}$ for any $j\in T$. But clearly $c_k=0$ for $k\notin T$. We deduce that if $c_j\neq 0$ (there exists such a $j$ as otherwise $\sum_{j\in T}c_j\sigma^j(\theta)=0$), then $c_{j-\frac{n}{p_i}}\neq 0$ and thus $j-\frac{n}{p_i}\in T$. Continuing this procedure with $j-\frac{n}{p_i}$ in place of $j$, we observe that if the non-zero element $\sum_{j\in T}c_j\sigma^j(\theta)$ of $W=\langle\{\sigma^j(\theta)\}_{j\in T}\rangle$ lies in $\Bbb{F}_{q^{n/p_i}}$, then $T\subset\Bbb{Z}/n\Bbb{Z}$ must contain an arithmetic progression (mod $n$) of the form $$ j, j-\frac{n}{p_i}, j-2\frac{n}{p_i}, \dots, j-(p_i-1)\frac{n}{p_i}, j-p_i\frac{n}{p_i}\stackrel{n}{\equiv} j.
$$ This boils everything down to the additive structure of $$T\subset\Bbb{Z}/n\Bbb{Z}\cong\Bbb{Z}/p_1\Bbb{Z}\times\dots\times\Bbb{Z}/p_s\Bbb{Z}.$$ To finish the proof, one needs to construct a subset $T$ of $\Bbb{Z}/p_1\Bbb{Z}\times\dots\times\Bbb{Z}/p_s\Bbb{Z}$ of size $n-\frac{n}{p_1}$ with the following property: For each $1\leq i\leq s$, $T$ should not contain any subset of the form \begin{equation}\label{auxiliary7} \{j_1\}\times\dots\times\{j_{i-1}\}\times\Bbb{Z}/p_i\Bbb{Z}\times\{j_{i+1}\}\times\dots\times\{j_s\} \end{equation} where $j_1,\dots,j_{i-1},j_{i+1},\dots, j_s$ are arbitrary integers considered modulo the appropriate primes. Rather than describing $T$ directly, we exhibit its complement $T^c$, a subset of $\Bbb{Z}/p_1\Bbb{Z}\times\dots\times\Bbb{Z}/p_s\Bbb{Z}$ of size $\frac{n}{p_1}=p_2\dots p_s$ which intersects all subsets of the form \eqref{auxiliary7}. Pick arbitrary surjections $$f_2:\Bbb{Z}/p_2\Bbb{Z}\rightarrow\Bbb{Z}/p_1\Bbb{Z},\dots,f_s:\Bbb{Z}/p_s\Bbb{Z}\rightarrow\Bbb{Z}/p_1\Bbb{Z}$$ (recall that $p_2,\dots, p_s$ are larger than $p_1$) and define $T^c$ as $$ \left\{\left(\sum_{i=2}^sf_i(j_i),j_2,\dots,j_s\right)\,:\, j_2\in\Bbb{Z}/p_2\Bbb{Z},\dots,j_s\in\Bbb{Z}/p_s\Bbb{Z}\right\}. $$ It is easy to check that this set intersects every subset of the form \eqref{auxiliary7}. \end{proof} \begin{proof}[Proof of Theorem \ref{primitive}] The theorem follows from Lemma \ref{infinite field} if $F$ is infinite and from Lemma \ref{finite field} in the case of finite $F$. \end{proof} \subsection{Linear acyclic matchings} In this final subsection, we shall prove Theorem \ref{linear theorem} after providing some background on \textit{linear acyclic matchings}. Unlike the matchings of Definition \ref{linear matching}, and in analogy with matchings in abelian groups, a linear acyclic matching is an actual map $f:A\rightarrow B$. Here, $A$ and $B$ are vector subspaces of a certain field extension and $f$ is a linear isomorphism.
The definition of linear acyclic matchings, developed in \cite{MR3393940}, builds on the notion of \textit{strong matchings} from \cite{MR2735391}. \begin{definition}\label{strong matching} Let $L/F$ be a field extension and $A$ and $B$ two $F$-subspaces of $L$ which are of the same finite dimension. An $F$-linear isomorphism $f:A\rightarrow B$ is said to be a strong matching if any ordered basis $\mathcal{A}$ of $A$ is matched to the ordered basis $f(\mathcal{A})$ of $B$ in the sense specified in Definition \ref{linear definition}. \end{definition} \noindent It is known that there is a strong matching from $A$ to $B$ if and only if $A\cap AB=\{0\}$, in which case every linear isomorphism between $A$ and $B$ is a strong matching \cite[Theorem 6.3]{MR2735391}. In view of the dimension criterion \eqref{intersection criterion}, this is a special situation because if $A\cap AB=\{0\}$, then the subspaces appearing in \eqref{intersection criterion} are trivial. To define linear acyclic matchings, in analogy with Definition \ref{main definition}, one should first make sense of two linear isomorphisms $f,g:A\rightarrow B$ between vector subspaces of a field $L$ having the same ``multiplicity functions''. We want the elements of the multiplicative group $L^\times$ realized as $af(a)$ to be the same as those realized as $ag(a)$. But here $A$ and $B$ are subspaces rather than finite sets. So the article \cite{MR3393940} puts forward the definition below: \begin{definition}\label{equivalent} Let $L/F$ be a field extension and $A,B$ be $F$-subspaces of $L$. Two $F$-linear isomorphisms $f,g:A\rightarrow B$ are said to be \textit{equivalent} if there exists a linear automorphism $\phi:A\rightarrow A$ satisfying \begin{equation}\label{auxiliary3} af(a)=\phi(a)g(\phi(a)) \end{equation} for every $a\in A$.
\end{definition} \noindent An obvious way of defining an isomorphism $g:A\rightarrow B$ equivalent to a given $f:A\rightarrow B$ is to pick an $r\in F\setminus\{0\}$ and set $g(a):=\frac{1}{r^2}\,f(a)$, which satisfies \eqref{auxiliary3} with $\phi(a):=ra$. But is there any other way to come up with an isomorphism equivalent to $f$? This brings us to the definition of the linear acyclic matching property from \cite{MR3393940}. \begin{definition}\label{linear acyclic} Let $L/F$ be a field extension, and suppose $A$ and $B$ are $F$-subspaces of $L$ whose dimensions are finite and equal. A strong matching $f:A\rightarrow B$ is called acyclic if any other strong matching $g:A\rightarrow B$ equivalent to it is of the form $cf$ for some $c\in F$. The extension $L/F$ is said to have the linear acyclic matching property if for every pair $A$ and $B$ of $F$-subspaces of $L$ which are of the same finite dimension and satisfy $A\cap AB=\{0\}$, there exists a linear acyclic matching from $A$ to $B$. \end{definition} \begin{remark} Due to \cite[Theorem 6.3]{MR2735391} (that we alluded to above), only subspaces with $A\cap AB=\{0\}$ are relevant here, in which case strong matchings are the same as isomorphisms of $F$-vector spaces. \end{remark} We next start working towards the proof of Theorem \ref{linear theorem}. Lemma \ref{lemma} below will be used in the subsequent Proposition \ref{Proposition 1} that establishes the ``if'' part of Theorem \ref{linear theorem}. The statements and the proofs of the lemma and the proposition are respectively adapted from \cite[Lemma 4.3]{MR3393940} and \cite[Theorem 4.5]{MR3393940} with slight modifications: The original statements are only concerned with extensions $L/F$ where the elements of $L\setminus F$ are transcendental over $F$ -- extensions that \cite{MR3393940,MR2735391} (rather unconventionally) call ``purely transcendental''. We more generally consider extensions that lack non-trivial proper intermediate subfields finite over the base.
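Before stating the lemma, the following computation (our own sketch, relying only on Definition \ref{equivalent} and not taken from the cited sources) indicates why the case $B=\alpha A$ is genuinely exceptional: there, equivalence alone cannot force two isomorphisms to be proportional.

```latex
% A sketch illustrating the exceptional case B = \alpha A (our own example).
\begin{example}
Suppose $B=\alpha A$ for some $\alpha\in L\setminus\{0\}$, and let
$\phi:A\rightarrow A$ be an arbitrary $F$-linear automorphism. Define
$$
f(a):=\alpha\,\phi(a), \qquad g(a):=\alpha\,\phi^{-1}(a).
$$
Both are $F$-linear isomorphisms from $A$ onto $B=\alpha A$, and they are
equivalent in the sense of \eqref{auxiliary3} via $\phi$:
$$
\phi(a)\,g(\phi(a))=\phi(a)\cdot\alpha\,\phi^{-1}(\phi(a))
=\alpha\,\phi(a)\cdot a=a\,f(a).
$$
If $\phi$ is not multiplication by a scalar from $F$, there is no reason for
$g=\alpha\,\phi^{-1}$ to be a scalar multiple of $f=\alpha\,\phi$.
\end{example}
```

This is exactly the alternative isolated in the second branch of the lemma below.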
\begin{lemma}\label{lemma} Let $L/F$ be a field extension without any non-trivial proper finite intermediate extension $E/F$. Suppose $A$ and $B$ are two $F$-subspaces of $L$ with $$0<\dim_FA=\dim_FB<\dim_FL.$$ If two $F$-linear isomorphisms $f,g:A\rightarrow B$ are equivalent via a linear automorphism $\phi:A\rightarrow A$, then either $g=cf$ for a suitable $c\in F\setminus\{0\}$ or $g\circ\phi$ is the multiplication map by some $\alpha\in L\setminus\{0\}$ in which case $B=\alpha A$. \end{lemma} \begin{proof} Fix a non-zero element $x$ of $A$. Changing $a$ to $x$ and $a+x$ in \eqref{auxiliary3} yields $xf(x)=\phi(x)g(\phi(x))$ and $(a+x)f(a+x)=\phi(a+x)g(\phi(a+x))$ for any arbitrary $a\in A$. Combining these with \eqref{auxiliary3} and using the additivity of $f$, $g$ and $\phi$, one obtains \begin{equation}\label{auxiliary4} (x\phi(a)-a\phi(x))(xg(\phi(a))-ag(\phi(x)))=0 \end{equation} for all $a\in A$. (See \cite[Proof of Lemma 4.3]{MR3393940} for the details of this computation.) As $L$ is a field, one of the two factors in \eqref{auxiliary4} must vanish. We conclude that $A$ is the union of the $F$-subspaces below \begin{equation}\label{auxiliary5} V_x:=\{a\in A\,:\, x\phi(a)=a\phi(x)\}, \quad W_x:=\{a\in A\,:\, xg(\phi(a))=ag(\phi(x))\}. \end{equation} Thus $A$ coincides with either $V_x$ or $W_x$, since a vector space is never the union of two proper subspaces. If the former occurs, $\phi:A\rightarrow A$ would be given by multiplication by $r:=\frac{\phi(x)}{x}$. This requires $r$ to lie in $F$: The finite-dimensional $F$-subspace $A$ of $L$ is invariant under multiplication by $r\in L$, hence $r$ satisfies a monic equation with coefficients in $F$ and of degree $\dim_FA<\dim_FL$, cf. \cite[Proposition 2.4]{MR0242802}. But then $F(r)$ is a proper intermediate subfield of $L/F$ which is finite over $F$, and thus coincides with $F$ due to our assumption on the extension $L/F$. Now, in view of the $F$-linearity of $f$, $g$ and $\phi$, plugging $\phi(a)=ra$ in \eqref{auxiliary3} implies $g=cf$ where $c:=\frac{1}{r^2}$.
Next suppose $A$ coincides with the second subspace appearing in \eqref{auxiliary5}, that is, $A=W_x$. Then the linear isomorphism $g\circ\phi:A\rightarrow B$ is the multiplication map by $\alpha:=\frac{g(\phi(x))}{x}$, which implies $B=\alpha A$. \end{proof} The lemma above will be used in the proof of the proposition below, which is a slight generalization of \cite[Theorem 4.5]{MR3393940}. \begin{proposition}\label{Proposition 1} A field extension $L/F$ without any non-trivial proper intermediate extension finite over $F$ has the linear acyclic matching property. \end{proposition} \begin{proof} Let $A$ and $B$ be as in Definition \ref{linear acyclic}: two $F$-subspaces of $L$ of the same finite dimension satisfying $A\cap AB=\{0\}$. The goal is to show the existence of an $F$-linear isomorphism $f:A\rightarrow B$ which is acyclic in the sense that any other isomorphism $g:A\rightarrow B$ equivalent to it can be written as $cf$ for an appropriate $c\in F$. There is nothing to prove if $A=B=\{0\}$. Moreover, $B$ is a proper subspace of $L$ because $A\neq\{0\}$ and $A\cap AB=\{0\}$ force $1\notin B$; as $\dim_FA=\dim_FB$, the subspace $A$ is proper too. So one can safely assume that $$0<\dim_FA=\dim_FB<\dim_FL$$ as in Lemma \ref{lemma}. Pick an arbitrary isomorphism $f:A\rightarrow B$. If it is acyclic, we are done. Otherwise, the lemma implies that $B=\alpha A$ for some $\alpha\in L\setminus\{0\}$. We claim that the $F$-linear isomorphism $$ \tilde{f}:A\rightarrow B=\alpha A:a\mapsto\alpha a $$ given by multiplication by $\alpha$ is acyclic. If not, there exists another isomorphism $\tilde{g}:A\rightarrow B=\alpha A$ which is not of the form $c\tilde{f}$ for any $c\in F$ but is equivalent to $\tilde{f}$ via an automorphism $\phi:A\rightarrow A$ satisfying \begin{equation}\label{auxiliary8} a(\alpha a)=a\tilde{f}(a)=\phi(a)\tilde{g}(\phi(a)) \end{equation} for all $a\in A$.
Invoking Lemma \ref{lemma} once again, there exists $\beta\in L\setminus\{0\}$ such that $B$ can also be written as $\beta A$, and $\tilde{g}\circ\phi$ is the multiplication map by $\beta$. Substituting in \eqref{auxiliary8}, we deduce that $\phi$ is the multiplication map by $\beta^{-1}\alpha$. But, repeating the argument used in the proof of Lemma \ref{lemma}, the element $\beta^{-1}\alpha$ must lie in $F$ due to our assumption on $L/F$ because $\alpha A=\beta A$ implies that $$[F(\beta^{-1}\alpha):F]\leq \dim_FA<\dim_FL$$ (cf. \cite[Proposition 2.4]{MR0242802}). Plugging $\phi(a)=(\beta^{-1}\alpha)a$ in \eqref{auxiliary8}, the $F$-linearity of $\tilde{g}$ yields $\tilde{g}=(\beta^{-1}\alpha)^{-2}\tilde{f}$. This is a contradiction since we assumed that $\tilde{g}\neq c\tilde{f}$ for all $c\in F$. \end{proof} We next turn to the ``only if'' part of Theorem \ref{linear theorem}. \begin{proposition}\label{Proposition 2} Let $L/F$ be a field extension admitting an intermediate subfield $F\subsetneq E\subsetneq L$ with $[E:F]<\infty$. Then $L/F$ does not satisfy the linear acyclic matching property. \end{proposition} \begin{proof} Motivated by Lemma \ref{lemma}, pick an element $\alpha\in L\setminus E$ and set $A$ and $B$ to be $E$ and $\alpha E$ respectively. Then $A$ and $B$ are finite-dimensional $F$-subspaces satisfying $$ A\cap (AB)=E\cap(\alpha E)=\{0\}. $$ Hence every $F$-linear isomorphism $f:A\rightarrow B$ is a strong matching according to \cite[Theorem 6.3]{MR2735391}. We claim that there always exists another $F$-linear isomorphism $g:A\rightarrow B$ which is equivalent to $f$ but cannot be written as $cf$. Define $g$ as $$g(a):=\frac{1}{\beta}\,f\left(\frac{a}{\beta}\right)$$ where $\beta\in E\setminus F$. This is clearly another $F$-linear isomorphism from $A=E$ onto $B=\alpha E$; and it is equivalent to $f$ because the $F$-linear automorphism $\phi(a):=\beta a$ of $A$ satisfies $af(a)=\phi(a)g(\phi(a))$ for all $a\in A$.
But $g$ is not of the form $cf$ for any $c\in F$: otherwise $\frac{1}{\beta}\,f\left(\frac{a}{\beta}\right)=cf(a)$ for all $a\in A$. Since $f$ takes its values in $\alpha E$ and $E$ is a field containing $F$, this requires $\beta$ to lie in $F$, a contradiction. \end{proof} \begin{proof}[Proof of Theorem \ref{linear theorem}] The theorem follows immediately from Propositions \ref{Proposition 1} and \ref{Proposition 2}. \end{proof} \section*{Acknowledgment} We are deeply grateful to Prof. Shmuel Friedland for his constant encouragement and generosity, and for many insightful conversations. We are also grateful to Prof. Richard Brualdi and Prof. Martin Isaacs for motivating conversations.
https://arxiv.org/abs/1405.6587
On the grid Ramsey problem and related questions
The Hales--Jewett theorem is one of the pillars of Ramsey theory, from which many other results follow. A celebrated theorem of Shelah says that Hales--Jewett numbers are primitive recursive. A key tool used in his proof, now known as the cube lemma, has become famous in its own right. In its simplest form, this lemma says that if we color the edges of the Cartesian product $K_n \times K_n$ in $r$ colors then, for $n$ sufficiently large, there is a rectangle with both pairs of opposite edges receiving the same color. Shelah's proof shows that $n = r^{\binom{r+1}{2}} + 1$ suffices. More than twenty years ago, Graham, Rothschild and Spencer asked whether this bound can be improved to a polynomial in $r$. We show that this is not possible by providing a superpolynomial lower bound in $r$. We also discuss a number of related problems.
\section{Introduction} \label{sec:intro} Ramsey theory refers to a large body of deep results in mathematics whose underlying philosophy is captured succinctly by the statement that ``Every large system contains a large well-organized subsystem.'' This is an area in which a great variety of techniques from many branches of mathematics are used and whose results are important not only to combinatorics but also to logic, analysis, number theory and geometry. One of the pillars of Ramsey theory, from which many other results follow, is the Hales--Jewett theorem \cite{HJ63}. This theorem may be thought of as a statement about multidimensional multiplayer tic-tac-toe, saying that in a high enough dimension one of the players must win. However, this fails to capture the importance of the result, which easily implies van der Waerden's theorem on arithmetic progressions in colorings of the integers and its multidimensional generalizations. To quote \cite{GrRoSp}, ``The Hales--Jewett theorem strips van der Waerden's theorem of its unessential elements and reveals the heart of Ramsey theory. It provides a focal point from which many results can be derived and acts as a cornerstone for much of the more advanced work." To state the Hales--Jewett theorem formally requires some notation. Let $[m]$ be the set of integers $\{1, 2, \dots, m\}$. We will refer to elements of $[m]^n$ as points or words and the set $[m]$ as the alphabet. For any $a \in [m]^n$, any $x \in [m]$ and any set $\gamma \subset [n]$, let $a\oplus x\gamma$ be the point of $[m]^n$ whose $j$-th component is $a_j$ if $j \not\in \gamma$ and $x$ if $j \in \gamma$. A combinatorial line is a subset of $[m]^n$ of the form $\{a \oplus x\gamma: 1 \leq x \leq m\}$. The Hales--Jewett theorem is now as follows. \begin{THM} For any $m$ and $r$ there exists an $n$ such that any $r$-coloring of the elements of $[m]^n$ contains a monochromatic combinatorial line. 
\end{THM} The original proof of the Hales--Jewett theorem is similar to that of van der Waerden's theorem, using a double induction, where we prove the theorem for alphabet $[m+1]$ and $r$ colors by using the statement for alphabet $[m]$ and a much larger number of colors. This results in bounds of Ackermann type for the dependence of the dimension $n$ on the size of the alphabet $m$. In the late eighties, Shelah \cite{Shelah} made a major breakthrough by finding a way to avoid the double induction and prove the theorem with primitive recursive bounds. This also resulted in the first primitive recursive bounds for van der Waerden's theorem (since drastically improved by Gowers \cite{G01}). Shelah's proof relied in a crucial way on a lemma now known as the Shelah cube lemma. The simplest case of this lemma says that if we color the edges of the Cartesian product $K_n \times K_n$ in $r$ colors then, for $n$ sufficiently large, there is a rectangle with both pairs of opposite edges receiving the same color. Shelah's proof shows that we may take $n \leq r^{\binom{r+1}{2}} + 1$. In the second edition of their book on Ramsey theory \cite{GrRoSp}, Graham, Rothschild and Spencer asked whether this bound can be improved to a polynomial in $r$. Such an improvement, if it could be generalized, would allow one to improve Shelah's wowzer-type upper bound for the Hales--Jewett theorem to a tower-type bound. The main result of this paper, Theorem~\ref{thm:grid_main} below, answers this question in the negative by providing a superpolynomial lower bound in $r$. We will now discuss this basic case of Shelah's cube lemma, which we refer to as the grid Ramsey problem, in more detail. \subsection{The grid Ramsey problem} \label{subsec:intro_grid} For positive integers $m$ and $n$, let the \emph{grid graph} $\Gamma_{m,n}$ be the graph on vertex set $[m] \times [n]$ where two distinct vertices $(i,j)$ and $(i', j')$ are adjacent if and only if either $i=i'$ or $j=j'$. 
That is, $\Gamma_{m,n}$ is the Cartesian product $K_m \times K_n$. A \emph{row} of the grid graph $\Gamma_{m,n}$ is a subgraph induced on a vertex subset of the form $\{i\} \times [n]$ and a \emph{column} is a subgraph induced on a vertex subset of the form $[m] \times \{j\}$. A rectangle in $\Gamma_{m,n}$ is a copy of $K_2 \times K_2$, that is, an induced subgraph on a vertex subset of the form $\{(i,j), (i',j), (i,j'),(i',j') \}$ for some integers $1 \le i < i' \le m$ and $1 \le j < j' \le n$. We will usually denote this rectangle by $(i,j,i',j')$. For an edge-colored grid graph, an \emph{alternating rectangle} is a rectangle $(i,j,i',j')$ such that the colors of the edges $\{(i,j), (i',j)\}$ and $\{(i,j'), (i',j')\}$ are equal and the colors of the edges $\{(i,j), (i,j')\}$ and $\{(i',j), (i',j')\}$ are equal, that is, opposite sides of the rectangle receive the same color. An edge coloring of a grid graph is \emph{alternating-rectangle-free} (or \emph{alternating-free}, for short) if it contains no alternating rectangle. The function we will be interested in estimating is the following. \medskip \noindent \textbf{Definition.} (i) For a positive integer $r$, define $G(r)$ as the minimum integer $n$ for which every edge coloring of $\Gamma_{n,n}$ with $r$ colors contains an alternating rectangle. \noindent (ii) For positive integers $m$ and $n$, define $g(m, n)$ as the minimum integer $r$ for which there exists an alternating-free edge coloring of $\Gamma_{m,n}$ with $r$ colors. Define $g(n) = g(n,n)$. \medskip Note that the two functions $G$ and $g$ defined above are in inverse relation to each other, in the sense that $G(r) = n$ implies $g(n-1) \le r$ and $g(n) \ge r+1$, while $g(n) = r$ implies $G(r) \ge n+1$ and $G(r-1) \le n$. We have already mentioned Shelah's bound $G(r) \le r^{r+1 \choose 2} + 1$. To prove this, let $n = r^{r+1 \choose 2} + 1$ and suppose that an $r$-coloring of $\Gamma_{r+1, n}$ is given.
There are at most $r^{r+1 \choose 2}$ ways that one can color the edges of a fixed column with $r$ colors. Since the number of columns is $n > r^{r+1 \choose 2}$, the pigeonhole principle implies that there are two columns which are identically colored. Let these columns be the $j$-th column and the $j'$-th column and consider the edges that connect these two columns. Since there are $r+1$ rows, the pigeonhole principle implies that there are two rows that have the same color. Let these be the $i$-th row and the $i'$-th row. Then the rectangle $(i,j, i', j')$ is alternating. Hence, we see that $G(r) \le n$, as claimed. This argument in fact establishes the stronger bound $g(r+1, r^{r+1 \choose 2} + 1) \ge r+1$. It is surprisingly difficult to improve on this simple bound. The only known improvement, $G(r) \leq r^{\binom{r+1}{2}} - r^{\binom{r-1}{2} + 1} + 1$, which improves Shelah's bound by an additive term, was given by Gy\'arf\'as \cite{Gyarfas}. Instead, we have focused on the lower bound, proving that $G(r)$ is superpolynomial in $r$. As already mentioned, this addresses a question of Graham, Rothschild and Spencer \cite{GrRoSp}. This question was also reiterated by Heinrich \cite{H90} and by Faudree, Gy\'arf\'as and Sz\H{o}nyi \cite{FaGySz}, who proved a lower bound of $\Omega(r^3)$. Quantitatively, our main result is the following. \begin{THM} \label{thm:grid_main} There exists a positive constant $c$ such that \[ G(r) > 2^{c (\log r)^{5/2}/\sqrt{\log \log r}}. \] \end{THM} We will build up to this theorem, first giving a substantially simpler proof for the weaker bound $G(r) > 2^{c \log^2 r}$. The following theorem, which includes this result, also contains a number of stronger bounds for the off-diagonal case, improving results of Faudree, Gy\'arf\'as and Sz\H{o}nyi \cite{FaGySz}. 
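The pigeonhole argument above is fully constructive and can be sketched as a short program (an illustrative sketch; the function and variable names are ours, not the paper's):

```python
import random
from itertools import combinations

def find_alternating_rectangle(row_color, col_color, m, n):
    """Follow Shelah's pigeonhole argument on an r-colored grid with m = r + 1
    rows and n > r^{binom(m,2)} columns.  row_color[i][(j, j2)] is the color of
    the row edge {(i, j), (i, j2)} and col_color[j][(i, i2)] is the color of
    the column edge {(i, j), (i2, j)}.  Returns rows i < i2 and columns
    j1 < j2 spanning an alternating rectangle."""
    seen = {}                      # step 1: two identically colored columns
    for j in range(n):
        key = tuple(col_color[j][p] for p in combinations(range(m), 2))
        if key in seen:
            j1, j2 = seen[key], j
            break
        seen[key] = j
    first = {}                     # step 2: two rows agreeing between them
    for i in range(m):
        c = row_color[i][(j1, j2)]
        if c in first:
            return first[c], i, j1, j2
        first[c] = i

# Sanity check for r = 2: m = r + 1 = 3 rows and n = 2^3 + 1 = 9 columns.
r, m, n = 2, 3, 9
rng = random.Random(0)
rows = [{e: rng.randrange(r) for e in combinations(range(n), 2)} for _ in range(m)]
cols = [{e: rng.randrange(r) for e in combinations(range(m), 2)} for _ in range(n)]
i1, i2, j1, j2 = find_alternating_rectangle(rows, cols, m, n)
```

Because the two chosen columns are colored identically and the two chosen row edges share a color, the returned rectangle is always alternating, whatever the input coloring.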
\begin{THM} \label{thm:grid_asymmetric} \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(i)] For all $C > e^2$, $g(r^{\log C/2}, r^{r/2C}) \le r$ for large enough $r$. \item[(ii)] For all positive constants $\varepsilon$, $g(2^{\varepsilon \log^2 r}, 2^{r^{1-\varepsilon}}) \le r$ for large enough $r$. \item[(iii)] There exists a positive constant $c$ such that $g(cr^2, r^{r^2 /2} / e^{r^2}) \le r$ for large enough $r$. \end{enumerate} \end{THM} Part (i) of this theorem already shows that $G(r)$ is superpolynomial in $r$, while part (ii) implies the more precise bound $G(r) > 2^{c \log^2 r}$ mentioned above, though at the cost of a weaker off-diagonal result. For $(m,n)=(r+1,r^{r+1 \choose 2})$, it is easy to find an alternating-free edge coloring of $\Gamma_{m,n}$ with $r$ colors by reverse engineering the proof of Shelah's bound $G(r) \leq r^{r+1 \choose 2} + 1$. Part (iii) of Theorem~\ref{thm:grid_asymmetric} shows that $n$ can be close to $r^{\binom{r+1}{2}}$ even when $m$ is quadratic in $r$. This goes some way towards explaining why it is so difficult to improve the bound $G(r) \le r^{r+1 \choose 2}$. \subsection{The Erd\H{o}s--Gy\'arf\'as problem} \label{subsec:intro_erdos_gyarfas} The Ramsey-type problem for grid graphs considered in the previous subsection is closely related to a problem of Erd\H{o}s and Gy\'arf\'as on generalized Ramsey numbers. To discuss this problem, we need some definitions. \medskip \noindent \textbf{Definition.} Let $k, p$ and $q$ be positive integers satisfying $p \ge k+1$ and $2 \le q \le {p \choose k}$. \noindent (i) For each positive integer $r$, define $F_k(r, p, q)$ as the minimum integer $n$ for which every edge coloring of $K^{(k)}_n$ with $r$ colors contains a copy of $K^{(k)}_p$ receiving at most $q-1$ distinct colors on its edges. 
\noindent (ii) For each positive integer $n$, define $f_k(n,p,q)$ as the minimum integer $r$ for which there exists an edge coloring of $K^{(k)}_n$ with $r$ colors such that every copy of $K^{(k)}_p$ receives at least $q$ distinct colors on its edges. \medskip For simplicity, we usually write $F(r,p,q) = F_2(r,p,q)$ and $f(n,p,q) = f_2(n,p,q)$. As in the previous subsection, the two functions are in inverse relation to each other: $F(r, p, q) = n$ implies $f(n, p, q) \ge r+1$ and $f(n-1, p, q) \le r$, while $f(n,p,q) = r$ implies $F(r, p, q)\ge n+1$ and $F(r-1,p,q) \le n$. The function $F(r, p, q)$ generalizes the Ramsey number since $F(2, p, 2)$ is equal to the Ramsey number $R(p)$. Call an edge coloring of $K_n$ a \emph{$(p,q)$-coloring} if every copy of $K_p$ receives at least $q$ distinct colors on its edges. The definitions of $F(r,p,q)$ and $f(n,p,q)$ can also be reformulated in terms of $(p,q)$-colorings. For example, $f(n, p, q)$ asks for the minimum integer $r$ such that there is a $(p,q)$-coloring of $K_n$ using $r$ colors. The function $f(n,p,q)$ was first introduced by Erd\H{o}s and Shelah \cite{E75, E81} and then was systematically studied by Erd\H{o}s and Gy\'arf\'as \cite{ErGy}. They studied the asymptotics of $f(n,p,q)$ as $n$ tends to infinity for various choices of parameters $p$ and $q$. It is clear that $f(n,p,q)$ is increasing in $q$, but it is interesting to understand the transitions in behavior as $q$ increases. At one end of the spectrum, $f(n, p, 2) \le f(n,3,2) \le \lceil \log n \rceil$, while, at the other end, $f(n,p,{p \choose 2}) = {n \choose 2}$ (provided $p \ge 4$). In the middle range, Erd\H{o}s and Gy\'arf\'as proved that $f(n,p,p) \ge n^{1/(p-2)} - 1$, which in turn implies that $f(n,p,q)$ is polynomial in $n$ for any $q \geq p$. A problem left open by Erd\H{o}s and Gy\'arf\'as was to determine whether $f(n, p, p-1)$ is also polynomial in $n$. 
For $p = 3$, it is easy to see that $f(n, 3, 2)$ is subpolynomial since it is equivalent to determining the multicolor Ramsey number of the triangle. For $p = 4$, Mubayi \cite{Mubayi} showed that $f(n,4,3) \leq 2^{c \sqrt{\log n}}$ and Eichhorn and Mubayi \cite{EiMu} showed that $f(n,5,4) \le 2^{c \sqrt{\log n}}$. Recently, Conlon, Fox, Lee and Sudakov \cite{CoFoLeSu} resolved the question of Erd\H{o}s and Gy\'arf\'as, proving that $f(n, p, p-1)$ is subpolynomial for all $p \geq 3$. Nevertheless, the function $f(n,p,p-1)$ is still far from being well understood, even for $p=4$, where the best known lower bound is $f(n,4,3) = \Omega(\log n)$ (see \cite{FoSu, KoMu}). In this paper, we consider extensions of these problems to hypergraphs. The main motivation for studying this problem comes from an equivalent formulation of the grid Ramsey problem (actually, this is Shelah's original formulation). Let $K^{(3)}(n,n)$ be the $3$-uniform hypergraph with vertex set $A \cup B$, where $|A| = |B| = n$, and edge set consisting of all those triples which intersect both $A$ and $B$. We claim that $g(n)$ is within a factor of two of the minimum integer $r$ for which there exists an $r$-coloring of the edges of $K^{(3)}(n,n)$ such that any copy of $K_4^{(3)}$ has at least $3$ colors on its edges. To see the relation, we abuse notation and regard both $A$ and $B$ as copies of the set $[n]$. For $i \in A$ and $j,j' \in B$, map the edge $\{(i,j), (i, j')\}$ of $\Gamma_{n,n}$ to the edge $(i,j,j')$ of $K^{(3)}(n,n)$ and, for $i, i' \in A$ and $j \in B$, map the edge $\{(i,j), (i', j)\}$ of $\Gamma_{n,n}$ to the edge $(i,i',j)$ of $K^{(3)}(n,n)$. Note that this defines a bijection between the edges of $\Gamma_{n,n}$ and the edges of $K^{(3)}(n,n)$, where the rectangles of $\Gamma_{n,n}$ are in one-to-one correspondence with the copies of $K_4^{(3)}$ of $K^{(3)}(n,n)$ intersecting both sides in two vertices. 
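In code, the correspondence reads as follows (an illustrative sketch; tagging vertices with `('A', i)` and `('B', j)` to distinguish the two sides is our encoding, not the paper's):

```python
from itertools import combinations

def grid_edges(n):
    """All edges of the grid graph Gamma_{n,n} = K_n x K_n."""
    for i in range(n):                               # row edges
        for j, j2 in combinations(range(n), 2):
            yield ((i, j), (i, j2))
    for j in range(n):                               # column edges
        for i, i2 in combinations(range(n), 2):
            yield ((i, j), (i2, j))

def grid_edge_to_triple(e):
    """The correspondence above: a row edge {(i,j),(i,j2)} goes to the
    triple {i in A, j in B, j2 in B}, and a column edge {(i,j),(i2,j)}
    goes to {i in A, i2 in A, j in B}."""
    (i, j), (i2, j2) = e
    if i == i2:
        return frozenset({('A', i), ('B', j), ('B', j2)})
    return frozenset({('A', i), ('A', i2), ('B', j)})
```

For $n=4$ the map hits each of the $\binom{8}{3}-2\binom{4}{3}=48$ triples meeting both sides exactly once, confirming the bijection.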
Hence, given a desired coloring of $K^{(3)}(n,n)$, we can find a coloring of $\Gamma_{n,n}$ where all rectangles receive at least three colors (and are thus alternating-free), showing that $g(n) \le r$. Similarly, given an alternating-free coloring of $\Gamma_{n,n}$, we may double the number of colors to ensure that the set of colors used for row edges and those used for column edges are disjoint. This turns an alternating-free coloring of $\Gamma_{n,n}$ into a coloring where each $K_4^{(3)}$ receives at least three colors. Hence, as above, we see that $r \le 2 g(n)$. Therefore, essentially the only difference between $g(n)$ and $f_3(2n, 4, 3)$ is that the base hypergraph for $g(n)$ is $K^{(3)}(n,n)$ rather than $K_{2n}^{(3)}$. This observation allows us to establish a close connection between the quantitative estimates for $f_3(n,4,3)$ and $g(n)$, as exhibited by the following pair of inequalities (that we will prove in Proposition \ref{prop:relation_g_f}): \begin{eqnarray} \label{eq:relation_g_f} g(n) \le f_3(2n, 4,3) \le 2 \lceil \log n \rceil^2 g(n). \end{eqnarray} This implies upper and lower bounds for $f_3(n,4,3)$ and $F_3(r,4,3)$ analogous to those we have established for $g(n)$ and $G(r)$. More generally, we have the following recursive upper bound for $F_k(r, p, q)$. \begin{THM} \label{thm:step_down} For positive integers $r, k$, $p$ and $q$, all greater than $1$ and satisfying $r \ge k$, $p \geq k+1$ and $2 \leq q \leq \binom{p}{k}$, \[ F_k\left( r, p, q \right) \le r^{{F_{k-1}(r,p-1,q) \choose k-1}}. \] The above is true even for $q > {p-1 \choose k-1}$, where we trivially have $F_{k-1}(r,p-1,q) = p-1$. \end{THM} By repeatedly applying Theorem \ref{thm:step_down}, we see that for each fixed $i$ with $0 \le i \leq k$ and large enough $p$, \[ F_k\left(r, p, {p-i \choose k-i} + 1\right) \le r^{r^{\iddots^{r^{c_{k,p}}} }}, \] where the number of $r$'s in the tower is $i$. 
For $0 < i < k$, it would be interesting to establish a lower bound on $F_k(r, p, {p-i \choose k-i})$ that is significantly larger than this upper bound on $F_k(r,p,{p-i \choose k-i} + 1)$. This would establish an interesting phenomenon of `sudden jumps' in the asymptotics of $F_k(r,p,q)$ at the values $q = {p-i \choose k-i}$. We believe that these jumps indeed occur. Let us examine some small cases of this problem. For graphs, as mentioned above, $F(r,p,p)$ is polynomial while $F(r,p,p-1)$ is superpolynomial. For 3-uniform hypergraphs, $F_3(r, p, {p-1 \choose 2} + 1)$ is polynomial in terms of $r$. Hence, the first interesting case is to decide whether the function $F_3(r, p, {p-1 \choose 2})$ is also polynomial. The fact that $F_3(r,4,3)$ is superpolynomial follows from Theorem~\ref{thm:grid_main} and \eqref{eq:relation_g_f}, giving some evidence towards the conjecture that $F_3(r,p,{p-1 \choose 2})$ is superpolynomial. We provide further evidence by establishing the next case for 3-uniform hypergraphs. \begin{THM} \label{thm:F_3_5_6} There exists a positive constant $c$ such that $F_3(r,5,6) \ge 2^{c\log^2 r}$ for all positive integers $r$. \end{THM} \subsection{A chromatic number version of the Erd\H{o}s--Gy\'arf\'as problem} We call a graph with chromatic number equal to $p$ \emph{$p$-chromatic}. In the process of studying $G(r)$ (and proving Theorem \ref{thm:grid_main}), we encountered the following variant of the functions discussed in the previous subsection, where $K_p$ is replaced by $p$-chromatic subgraphs. \medskip \noindent \textbf{Definition.} Let $p$ and $q$ be positive integers satisfying $p \ge 3$ and $2 \le q \le {p \choose 2}$. \noindent (i) For each positive integer $r$, define $F_\chi(r, p, q)$ as the minimum integer $n$ for which every edge coloring of $K_n$ with $r$ colors contains a $p$-chromatic subgraph receiving at most $q-1$ distinct colors on its edges.
\noindent (ii) For each positive integer $n$, define $f_\chi(n,p,q)$ as the minimum integer $r$ for which there exists an edge coloring of $K_n$ with $r$ colors such that every $p$-chromatic subgraph receives at least $q$ distinct colors on its edges. \medskip Call an edge coloring of $K_n$ a \emph{chromatic-$(p,q)$-coloring} if every $p$-chromatic subgraph receives at least $q$ distinct colors on its edges. As before, the definitions of $F_{\chi}(r,p,q)$ and $f_{\chi}(n,p,q)$ can be restated in terms of chromatic-$(p,q)$-colorings. Also, an edge coloring of $K_n$ is a chromatic-$(p,q)$-coloring if and only if the union of every $q-1$ color classes induces a graph of chromatic number at most $p-1$. In some sense, this looks like the most natural interpretation for the functions $F_\chi(r,p,q)$ and $f_\chi(n,p,q)$. If we choose to use this definition, then it is more natural to shift the numbers $p$ and $q$ by $1$. However, we use the definition above in order to make the connection between $F_\chi(r,p,q)$ and $F(r,p,q)$ more transparent. From the definition, we can immediately deduce some simple facts such as \begin{align} \label{eq:intro_chi_1} F_{\chi}(r,p,q) \le F(r,p,q), \qquad f_\chi(n,p,q) \ge f(n,p,q) \end{align} for all values of $r,p,q, n$ and \[ f_{\chi}\left(n,p, {p \choose 2}\right) = f\left(n,p, {p \choose 2}\right) = {n \choose 2} \] for all $n \ge p \ge 4$. Let $n = F_{\chi}(r,p,q)-1$ and consider a chromatic-$(p,q)$-coloring of $K_n$ that uses $r$ colors in total. Cover the set of $r$ colors by $\lceil r/(q-1) \rceil$ subsets of size at most $q-1$. The chromatic number of the graph induced by each subset is at most $p-1$ and thus, by the product formula for chromatic number, we see that \begin{align*} \label{eq:intro_chi_upper} F_{\chi}(r, p,q) - 1 \le (p-1)^{\lceil r/(q-1) \rceil}. \end{align*} This gives a general exponential upper bound. 
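The product formula for the chromatic number used here is itself constructive: from proper colorings of two graphs on the same vertex set one obtains a proper coloring of their union by taking pairs of colors. A minimal sketch (the names and the toy graphs are ours):

```python
def union_coloring(col1, col2):
    """If col1 properly colors G1 and col2 properly colors G2 (on the same
    vertex set), then v -> (col1[v], col2[v]) properly colors G1 u G2: every
    edge of the union lies in G1 or in G2, so one coordinate of the pair
    already separates its endpoints.  Hence chi(G1 u G2) <= chi(G1)*chi(G2)."""
    return {v: (col1[v], col2[v]) for v in col1}

# Example: an even cycle plus a diagonal (two 2-chromatic pieces).
G1 = [(0, 1), (1, 2), (2, 3), (3, 0)]
G2 = [(0, 2)]
col1 = {0: 0, 1: 1, 2: 0, 3: 1}      # proper for G1
col2 = {0: 0, 1: 0, 2: 1, 3: 0}      # proper for G2
col = union_coloring(col1, col2)
```

Iterating this over the $\lceil r/(q-1)\rceil$ groups of color classes gives exactly the exponential bound displayed above.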
On the other hand, when $d$ is a positive integer, $p = 2^d + 1$ and $q=d+1$, a coloring of the complete graph which we will describe in Section~\ref{sec:preliminaries} implies that \[ F_{\chi}(r, 2^d + 1, d + 1) \ge 2^{r} + 1. \] Whenever $r$ is divisible by $d$, we see that the two bounds match to give \begin{align} \label{eq:intro_chi_2} F_{\chi}(r, 2^d + 1, d + 1) = 2^{r} + 1. \end{align} Let us examine the value of $F_\chi(r,p,q)$ for some small values of $p$ and $q$. By using the observations \eqref{eq:intro_chi_1} and \eqref{eq:intro_chi_2}, we have \[ F_\chi(r,3,2) = 2^r + 1, \qquad F_\chi(r,3,3) \le F(r, 3, 3) \le r+2 \] for $p=3$ and \[ F_\chi(r,4,2) \geq 2^r + 1, \qquad F_\chi(r,4,4) \le F(r, 4, 4) \le r^2 + 2 \] for $p=4$ and $r \geq 2$. The bound on $F(r,4,4)$ follows since every edge coloring of the complete graph on $r^2+2$ vertices with $r$ colors contains a vertex $v$ and a set $X$ of size at least $r+1$ such that all the edges connecting $v$ to $X$ have the same color, say red. If an edge $e$ with vertices in $X$ is red, we get a $K_4$ using at most three colors by taking $v$, the vertices of $e$, and any other vertex in $X$. So we may suppose at most $r-1$ colors are used on the edges with both vertices in $X$. In this case, as $|X| \ge F(r-1,3,3)$, we can find three vertices in $X$ with at most two colors used on the edges connecting them. Adding $v$ to this set gives a set of four vertices with at most three colors used on the edges connecting them. We show that the asymptotic behavior of $F_\chi(r,4,3)$ is different from both $F_\chi(r,4,2)$ and $F_\chi(r,4,4)$. \begin{THM} \label{thm:chi_4_3} There exist positive constants $C$ and $r_0$ such that for all $r \ge r_0$, \[ 2^{\log^2 r/36} \le F_{\chi}(r, 4, 3) \le C \cdot 2^{130 \sqrt{r\log r}}. 
\] \end{THM} Despite being interesting in its own right, our principal motivation for considering chromatic-$(p,q)$-colorings was to establish the following theorem, which is an important ingredient in the proof of Theorem \ref{thm:grid_main}. \begin{THM} \label{thm:chi_slow_grow} For every fixed positive integer $n$, there exists an edge coloring of the complete graph $K_n$ with $2^{6\sqrt{\log n}}$ colors with the following property: for every subset $X$ of colors with $|X| \ge 2$, the subgraph induced by the edges colored with a color from $X$ has chromatic number at most $2^{3 \sqrt{|X| \log |X|}}$. \end{THM} This theorem has the following immediate corollary. \begin{COR} For all positive integers $n, r$ and $s \geq 2$, \[ f_{\chi}(n, 2^{3\sqrt{s \log s}} + 1, s + 1) \le 2^{6\sqrt{\log n}} \quad \text{and} \quad F_{\chi}(r, 2^{3\sqrt{s \log s}} + 1, s + 1) \ge 2^{\log^2 r/36}. \] \end{COR} Our paper is organized as follows. In Section \ref{sec:preliminaries}, we review two coloring functions that will be used throughout the paper. In Section \ref{sec:gridramsey}, we prove Theorems \ref{thm:grid_main} and \ref{thm:grid_asymmetric}. In Section \ref{sec:eg}, we prove Theorems \ref{thm:step_down} and \ref{thm:F_3_5_6}. In Section \ref{sec:chi_eg}, we prove Theorems \ref{thm:chi_4_3} and \ref{thm:chi_slow_grow}. We conclude with some further remarks and open problems in Section~\ref{sec:conclusion}. \medskip \noindent \textbf{Notation.} We use $\log$ for the base 2 logarithm and $\ln$ for the natural logarithm. For the sake of clarity of presentation, we systematically omit floor and ceiling signs whenever they are not essential. We also do not make any serious attempt to optimize absolute constants in our statements and proofs. The following standard asymptotic notation will be used throughout. 
For two functions $f(n)$ and $g(n)$, we write $f(n) = o(g(n))$ if $\lim_{n \rightarrow \infty} f(n)/g(n) = 0$ and $f(n) = O(g(n))$ or $g(n) = \Omega(f(n))$ if there exists a constant $M$ such that $|f(n)| \leq M|g(n)|$ for all sufficiently large $n$. We also write $f(n) = \Theta(g(n))$ if both $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$ are satisfied. \section{Preliminaries} \label{sec:preliminaries} We begin by defining two edge colorings of the complete graph $K_n$, one a $(3,2)$-coloring and the other a $(4,3)$-coloring. These will be used throughout the paper. We denote the $(3,2)$-coloring by $c_B$, where `B' stands for `binary'. To define this coloring, we let $t$ be the smallest integer such that $n \leq 2^t$. We consider the vertex set $[n]$ as a subset of $\{0,1\}^t$ by identifying $x$ with $(x_1, \dots, x_t)$, where $\sum_{i=1}^t x_i 2^{i-1}$ is the binary expansion of $x - 1$. Then, for two vertices $x = (x_1, \dots, x_t)$ and $y = (y_1, \dots, y_t)$, $c_B(x,y)$ is the minimum $i$ for which $x_i \neq y_i$. This coloring uses at most $\lceil \log n \rceil$ colors and is a $(3,2)$-coloring since three vertices cannot all differ in the $i$-th coordinate. In fact, it is a chromatic-$(2^d + 1, d+1)$-coloring for all integers $d \ge 1$, since it gives an edge partition $E = E_1 \cup \dots \cup E_{t}$ of $K_n$ for $t = \lceil \log n \rceil$ such that, for all $J \subset [t]$, the graph consisting of the edges $\bigcup_{j \in J} E_j$ has chromatic number at most $2^{|J|}$. The $(4,3)$-coloring, which is a variant of Mubayi's coloring \cite{Mubayi}, will be denoted by $c_M$. To define this coloring, we let $t$ be the smallest integer such that $n \leq 2^{t^2}$ and $m = 2^t$. We consider the vertex set $[n]$ as a subset of $[m]^t$ by identifying $x$ with $(x_1, \dots, x_t)$, this time by examining the base $m$ expansion of $x-1$. 
For two vertices $x = (x_1, \ldots,x_t)$ and $y = (y_1 ,\ldots, y_t)$, let \[ c_M(x,y) = \Big( \{x_i, y_i\}, a_1, \ldots,a_t \Big), \] where $i$ is the minimum index in which $x$ and $y$ differ and $a_j = 0$ or $1$ depending on whether $x_j = y_j$ or not (note that the coloring function $c_M$ is symmetric in its two variables). The coloring $c_M$ is a $(3,2)$-coloring since it partitions the edge set of $K_n$ into bipartite graphs and a simple case analysis similar to that given in \cite{Mubayi} shows that it is a $(4,3)$-coloring (the proof of Theorem \ref{thm:chi_4_3} given in Section \ref{sec:chi_eg} shows that $c_M$ is even a chromatic-$(4,3)$-coloring). Since $2^{(t-1)^2} < n$, the total number of colors used is at most \[ m^2 \cdot 2^t = 2^{3 t} < 2^{3 (1 + \sqrt{\log n})} \leq 2^{6 \sqrt{\log n}}. \] Hence, $c_M$ uses at most $r = 2^{6\sqrt{\log n}}$ colors to color the edge set of the complete graph on $n = 2^{\log^2 r/ 36}$ vertices. \section{The grid Ramsey problem} \label{sec:gridramsey} In order to improve the lower bound on $G(r)$, we need to find an edge coloring of the grid graph which is alternating-free. The following lemma is the key idea behind our argument. For two edge-coloring functions $c_1$ and $c_2$ of the complete graph $K_n$, let $\mathcal{G}(c_1, c_2)$ be the subgraph of $K_n$ where $e$ is an edge if and only if $c_1(e) = c_2(e)$. \begin{LEMMA} \label{lem:row_chromatic} Let $m, n$ and $r$ be positive integers. There exists an alternating-rectangle-free edge coloring of $\Gamma_{m,n}$ with $r$ colors if and only if there are edge colorings $c_1, \ldots, c_m$ of the complete graph $K_n$ with $r$ colors satisfying \[ \chi(\mathcal{G}(c_i, c_j)) \le r \] for all pairs of indices $i,j$. \end{LEMMA} \begin{proof} We first prove the `if' statement. Consider the grid graph $\Gamma_{m,n}$. For each $i$, color the edges of the $i$-th row using the edge coloring $c_i$. 
Then, for each distinct pair of indices $i$ and $i'$, construct auxiliary graphs $H_{i,i'}$ whose vertex set is the set of edges that connect the $i$-th row with the $i'$-th row (that is, edges of the form $\{(i,j), (i',j)\}$) and where two vertices $\{(i,j), (i',j)\}$ and $\{(i,j'), (i',j')\}$ are adjacent if and only if the two row edges that connect these two column edges have the same color. The fact that $\chi(\mathcal{G}(c_i, c_{i'})) \le r$ implies that there exists a vertex coloring of $H_{i,i'}$ with $r$ colors. Color the corresponding edges $\{(i,j), (i',j)\}$ according to this vertex coloring. Under this coloring, we see that whenever a pair of edges of the form $\{(i,j), (i,j')\}$ and $\{(i',j), (i',j')\}$ have the same color, the colors of the edges $\{(i,j), (i',j)\}$ and $\{(i,j'), (i',j')\}$ are distinct. This gives a coloring of the column edges. Hence, we found the required alternating-rectangle-free edge coloring of $\Gamma_{m,n}$ with $r$ colors. For the `only if' statement, given an alternating-rectangle-free edge coloring of $\Gamma_{m,n}$ with $r$ colors, define $c_i$ as the edge-coloring function of the $i$-th row of $\Gamma_{m,n}$, for each $i \in [m]$. One can easily reverse the process above to show that the colorings $c_1, \ldots, c_m$ satisfy the condition. We omit the details. \end{proof} To find an alternating-rectangle-free edge coloring of $\Gamma_{m,n}$, we will find edge colorings of the rows which satisfy the condition of Lemma \ref{lem:row_chromatic}. Suppose that $E(K_n) = E_1 \cup \dots \cup E_t$ is a partition of the edge set of $K_n$. For an index subset $I \subset [t]$, we let $\mathcal{G}_I$ be the subgraph of $K_n$ whose edge set is given by $\bigcup_{i\in I} E_i$. \begin{LEMMA} \label{lem:grid_coloring} Let $m, n, r$ and $t$ be positive integers. Suppose that an edge partition $E(K_n) = E_1 \cup \dots \cup E_t$ of $K_n$ is given. 
Let $I$ be a random subset of $[t]$ where each element in $I$ is chosen independently with probability $1/r$ and suppose that \[ \mathbb{P}[\chi(\mathcal{G}_I) \ge r+1] \le \frac{1}{2m}. \] Then $g(m, n) \le r$ and $G(r) \ge \min\{m,n\} + 1$. \end{LEMMA} \begin{proof} For each $i \in [2m]$, choose a vector $v_i \in [r]^t$ independently and uniformly at random. Let $c_i$ be an edge coloring of $K_n$ with $r$ colors where for each $t' \in [t]$, we color all the edges in $E_{t'}$ using the value of the $t'$-th coordinate of $v_i$. Color the $i$-th row of $\Gamma_{2m,n}$ (which is a copy of $K_n$) using $c_i$. For a pair of distinct indices $i,j \in [2m]$, let $I(i,j)$ be the subset of indices $t' \in [t]$ for which $v_i$ and $v_j$ have the same value on their $t'$-th coordinates (thus implying that $c_i$ and $c_j$ use the same color on $E_{t'}$). Then $I(i,j)$ has the same distribution as a random subset of $[t]$ obtained by taking each element independently with probability $1/r$. Moreover, \[ \mathcal{G}(c_i, c_j) = \mathcal{G}_{I(i,j)}. \] Hence, \[ \mathbb{P}[\chi(\mathcal{G}(c_i, c_j)) \ge r+1] = \mathbb{P}[\chi(\mathcal{G}_{I(i,j)}) \ge r+1] \le \frac{1}{2m}. \] Therefore, the expected number of pairs $i,j$ with $i<j$ having $\chi(\mathcal{G}_{I(i,j)}) \ge r+1$ is at most ${2m \choose 2} \frac{1}{2m} \le m$. Hence, there exists a choice of coloring functions $c_i$ for which this number is at most $m$. If this event happens, then we can remove one row from each pair $i,j$ having $\chi(\mathcal{G}_{I(i,j)}) \ge r+1$ to obtain a set $R \subset [2m]$ of size at least $m$ which has the property that $\chi(\mathcal{G}_{I(i,j)}) \le r$ for all $i,j \in R$. By considering the subgraph of $\Gamma_{2m,n}$ induced on $R \times [n]$ and using Lemma \ref{lem:row_chromatic}, we obtain an alternating-rectangle-free edge coloring of $\Gamma_{m,n}$ with $r$ colors. The result follows. 
\end{proof} We prove Theorems \ref{thm:grid_main} and \ref{thm:grid_asymmetric} in the next two subsections. We begin with Theorem \ref{thm:grid_asymmetric}, which establishes upper bounds for $g(m,n)$ in various off-diagonal regimes. As noted in the introduction, parts (i) and (ii) already yield weak versions of Theorem~\ref{thm:grid_main}. In particular, part (i) implies that $G(r)$ is superpolynomial in $r$, while part (ii) yields the bound $G(r) > 2^{c \log^2 r}$. We recall the stronger off-diagonal statements below. \subsection{Proof of Theorem \ref{thm:grid_asymmetric}} \label{subsec:grid_asymmetric} \noindent \textbf{Parts (i) and (ii)} : For all $C > e^2$, $\varepsilon > 0$ and large enough $r$, $g(r^{\log C/2}, r^{r/2C}) \le r$ and $g(2^{\varepsilon \log^2 r}, 2^{r^{1-\varepsilon}}) \le r$. \medskip Let $n = 2^t$ for some $t$ to be chosen later. The edge coloring $c_B$ from Section \ref{sec:preliminaries} gives an edge partition $E = E_1 \cup \dots \cup E_{t}$ of $K_n$ for $t = \log n$ such that, for all $J \subset [t]$, \[ \chi(\mathcal{G}_J) = 2^{|J|}. \] Hence, if we let $I$ be a random subset of $[t]$ obtained by choosing each element independently with probability $1/r$, then \begin{align} \mathbb{P}[\chi(\mathcal{G}_I) \ge r+1] &= \mathbb{P}\big[|I| \ge \log(r+1)\big] \nonumber \\ &\le {t \choose \log(r+1)} \frac{1}{r^{\log(r+1)}} \le \left( \frac{et}{r \log (r+1)} \right)^{\log (r+1)}. \label{eq:grid_result1} \end{align} For part (i), let $C$ be a given constant and take $t = r\log r /2C$. Then the right-hand side of \eqref{eq:grid_result1} is at most $(r+1)^{-\log(C/e)}$. In Lemma \ref{lem:grid_coloring}, we can take $m = \frac{1}{2}(r+1)^{\log (C/e)} \ge r^{\log C/2}$ and $n = 2^{t} = r^{r/2C}$ to get \[ g(r^{\log C/2}, r^{r/2C}) \le r. \] For part (ii), let $\varepsilon$ be a given constant and take $t = r^{1-\varepsilon}$. For large enough $r$, the right-hand side of \eqref{eq:grid_result1} is at most $\frac{1}{2}r^{-\varepsilon \log r}$. 
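This last comparison can also be checked numerically. The sketch below (illustrative; $\log$ denotes the base-$2$ logarithm, consistent with $t = \log n$) compares the natural logarithms of the two sides for the concrete values $\varepsilon = 1/2$ and $r = 2^{20}$.

```python
import math

r, eps = 2 ** 20, 0.5
t = r ** (1 - eps)  # t = r^{1 - eps} = 2^{10}

# natural log of (e t / (r log(r+1)))^{log(r+1)}, the bound in (eq:grid_result1)
ln_bound = math.log2(r + 1) * math.log(math.e * t / (r * math.log2(r + 1)))
# natural log of (1/2) r^{-eps log r}
ln_target = math.log(0.5) - eps * math.log2(r) * math.log(r)
assert ln_bound <= ln_target  # the bound is indeed at most (1/2) r^{-eps log r}
```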
Hence, by applying Lemma \ref{lem:grid_coloring} with $m = r^{\varepsilon \log r} = 2^{\varepsilon \log^2 r}$ and $n = 2^t = 2^{r^{1-\varepsilon}}$, we see that \[ g(2^{\varepsilon \log^2 r}, 2^{r^{1-\varepsilon}}) \le r. \] \medskip \noindent \textbf{Part (iii)} : There exists a positive constant $c$ such that $g(cr^2, r^{r^2 /2} / e^{r^2}) \le r$ for large enough $r$. \medskip Let $c=e^{-3}$. Let $n = cr^2$ and partition the edge set of $K_n$ into $t = {n \choose 2}$ sets $E_1, \ldots, E_t$, each of size exactly one. As before, let $I$ be a random subset of $[t]$ obtained by choosing each element independently with probability $1/r$. In this case, we get $\mathcal{G}_I = \mathcal{G}(n, \frac{1}{r})$ (where $\mathcal{G}(n, p)$ is the binomial random graph). Therefore, \[ \mathbb{P}[\chi(\mathcal{G}_I) \ge r+1] = \mathbb{P}[\chi (\mathcal{G}(cr^2, 1/r)) \ge r+1]. \] The event $\chi\left(\mathcal{G}(cr^2, \frac{1}{r})\right) \ge r+1$ is contained in the event that $\mathcal{G}(cr^2, \frac{1}{r})$ contains a subgraph of order $s \ge r+1$ of minimum degree at least $r$. The latter event has probability at most \begin{align} \label{eqn:g_n_p_chromatic} \sum_{s=r+1}^{cr^2} {cr^2 \choose s} {s^2/2 \choose rs/2} \left(\frac{1}{r}\right)^{rs/2} &\le \sum_{s=r+1}^{cr^2} \left( \left( \frac{ecr^2}{s} \right)^{2} \left(\frac{es}{r} \right)^{r} \left(\frac{1}{r} \right)^{r} \right)^{s/2}. \end{align} For $s=r$, if $r$ is large enough, then the summand is \[ \left( (ecr)^{2} e^{r} \left(\frac{1}{r} \right)^{r} \right)^{r/2} \le \frac{e^{r^2}}{r^{r^2 /2}}\,. \] We next show that the summands are each at most a quarter of the previous summand. As the series starts at $s=r+1$ and ends at $s=cr^2$, the series is then at most half the summand for $s=r$. 
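The quarter-ratio claim is easy to confirm numerically for a moderate value of $r$; the sketch below (illustrative, with $c = e^{-3}$) works with logarithms of the summands to avoid overflow.

```python
import math

r = 100
c = math.exp(-3)
n = int(c * r * r)  # upper limit of summation, c r^2

def log_summand(s):
    """Natural log of ((e c r^2 / s)^2 (e s / r)^r (1/r)^r)^{s/2}."""
    return (s / 2) * (
        2 * math.log(math.e * c * r * r / s)
        + r * math.log(math.e * s / r)
        - r * math.log(r)
    )

# each summand is at most a quarter of the previous one
for s in range(r, n):
    assert log_summand(s + 1) - log_summand(s) <= math.log(0.25)
```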
The ratio of the summand for $s+1$ to the summand for $s$, where $r+1 \leq s+1 \leq cr^2$, is $$ \left(\frac{s+1}{s}\right)^{s(r-2)/2} \left(\left(\frac{ecr^2}{s+1}\right)^2\left(\frac{e(s+1)}{r}\right)^r\left(\frac{1}{r}\right)^r\right)^{1/2}$$ which is at most $$e^{(r-2)/2} \left(\frac{e^{r+2} c^2 (s+1)^{r-2}}{r^{2r-4}}\right)^{1/2} \leq e^{(r-2)/2} (e^{r+2} c^r )^{1/2} = e^{-r/2} < \frac{1}{4}, $$ for $r$ sufficiently large. Hence, the right-hand side of \eqref{eqn:g_n_p_chromatic} is at most $e^{r^2}r^{-r^2/2}/2$ and \[ \mathbb{P}[\chi(\mathcal{G}_I) \ge r+1] \le \frac{e^{r^2}}{2r^{r^2 /2}}. \] By Lemma \ref{lem:grid_coloring}, we conclude that $g(cr^2, r^{r^2 / 2} / e^{r^2}) \le r$. \subsection{Proof of Theorem \ref{thm:grid_main}} \label{subsec:grid_main} In the previous subsection we used quite simple edge partitions of the complete graph as an input to Lemma \ref{lem:grid_coloring} to prove Theorem \ref{thm:grid_asymmetric}. These partitions were already good enough to give the superpolynomial bound $G(r) > 2^{c \log^2 r}$. To further improve this bound and prove Theorem \ref{thm:grid_main}, we make use of a slightly more sophisticated edge partition guaranteed by the following theorem. \begin{THM} \label{thm:chi_slow_grow_less_colors} There exists a positive real $r_0$ such that the following holds for positive integers $r$ and positive reals $\alpha \le 1$ satisfying $(\log r)^\alpha \ge r_0$. For $n = 2^{(\log r)^{2 + \alpha}/200}$, there exists a partition $E = E_1 \cup \dots \cup E_{\sqrt{r}}$ of the edge set of the complete graph $K_n$ such that \[ \chi(\mathcal{G}_I) \le 2^{3(\log r)^{\alpha/2}\sqrt{|I| \log 2|I|}} \] for all $I \subset [\sqrt{r}]$. \end{THM} The proof of this theorem is based on Theorem \ref{thm:chi_slow_grow}, which is in turn based on considering the coloring $c_M$, and will be given in Section \ref{sec:chi_eg}. Now suppose that a positive integer $r$ is given and let $\alpha \leq 1$ be a real to be chosen later. 
Let $E_1 \cup \dots \cup E_{\sqrt{r}}$ be the edge partition of $K_n$ for $n = 2^{(\log r)^{2 + \alpha}/200}$ given by Theorem \ref{thm:chi_slow_grow_less_colors}. Let $I$ be a random subset of $[\sqrt{r}]$ chosen by taking each element independently with probability $\frac{1}{r}$. Then, by Theorem \ref{thm:chi_slow_grow_less_colors}, we have \[ \chi(\mathcal{G}_I) \ge r+1 \quad \Rightarrow \quad |I| \ge c\frac{(\log r)^{2 - \alpha}}{\log \log r}, \] for some positive constant $c$. Therefore, \begin{align*} \mathbb{P}[\chi(\mathcal{G}_I) \ge r+1 ] &\le \mathbb{P}\left[|I| \ge c \frac{(\log r)^{2 - \alpha}}{\log \log r}\right] \\ &\le {\sqrt{r} \choose c(\log r)^{2 - \alpha}/\log \log r} \left(\frac{1}{r} \right)^{c(\log r)^{2 - \alpha}/ \log \log r} \le r^{-c'(\log r)^{2 - \alpha}/\log \log r} \end{align*} holds for some positive constant $c'$. By Lemma \ref{lem:grid_coloring}, for $m = 2^{c'(\log r)^{3-\alpha} / \log \log r - 1}$, we have $g(m, n) \le r$. We choose $\alpha$ to balance the exponents of $m$ and $n$, that is, so that $(\log r)^{1-2\alpha} = \Theta(\log \log r)$, which corresponds to $\alpha = \frac{1}{2} - \frac{\log \log \log r}{2 \log \log r} + O\big(\frac{1}{\log \log r}\big)$. Since $(\log r)^{(\log\log\log r)/(2\log\log r)} = \sqrt{\log \log r}$, this gives \[ m = n = e^{\Omega((\log r)^{5/2}/ \sqrt{\log \log r})}. \] This gives $G(r) \ge 2^{\Omega((\log r)^{5/2}/ \sqrt{\log \log r})}$, as required. \section{The Erd\H{o}s--Gy\'arf\'as problem} \label{sec:eg} In the introduction, we discussed how the grid Ramsey problem is connected to a hypergraph version of the Erd\H{o}s--Gy\'arf\'as problem. We now establish this correspondence more formally. \begin{PROP} \label{prop:relation_g_f} For all positive integers $n$, we have \[ g(n) \le f_3(2n, 4,3) \le 2 \lceil \log n \rceil ^2 g(n). \] \end{PROP} \begin{proof} Since $K^{(3)}(n,n)$ is a subhypergraph of $K_{2n}^{(3)}$, a $(4,3)$-coloring of $K_{2n}^{(3)}$ immediately gives a coloring of $K^{(3)}(n,n)$ such that every copy of $K_{4}^{(3)}$ receives at least three colors. Hence, by the correspondence between coloring functions for $K^{(3)}(n,n)$ and coloring functions for $\Gamma_{n,n}$ explained in the introduction, it follows that $g(n) \le f_3(2n,4,3)$.
We prove the other inequality by showing that for all $m \le n$, \begin{align} \label{eq:recursive} f_3(2m, 4, 3) \le f_3(m, 4, 3) + 2\lceil \log m \rceil g(m). \end{align} By repeatedly applying this recursive formula (at most $\lceil \log n \rceil$ times, using that $g$ is nondecreasing), we obtain the claimed inequality \begin{align*} f_3(2n, 4, 3) \le 2\lceil \log n \rceil^2 g(n). \end{align*} Thus it suffices to establish the recursive formula \eqref{eq:recursive}. We will do this by presenting a $(4,3)$-coloring of $K^{(3)}_{2m}$. Let $A$ and $B$ be two disjoint vertex subsets of $K^{(3)}_{2m}$, each of order $m$. Given a $(4,3)$-coloring of $K_{m}^{(3)}$ with $f_3(m,4,3)$ colors, color the hyperedges within $A$ using this coloring and also the hyperedges within $B$ using this coloring. Since we started with a $(4,3)$-coloring, every copy of $K_4^{(3)}$ lying inside $A$ or $B$ contains at least $3$ colors on its edges. This leaves us with the copies which intersect both $A$ and $B$. Let $H$ be the bipartite hypergraph that consists of the edges which intersect both parts $A$ and $B$. By definition, we have an alternating-free edge coloring of the grid graph $\Gamma_{m,m}$ using $g(m)$ colors. We may assume, by introducing at most $g(m)$ new colors, that the sets of colors used for the row edges and for the column edges are disjoint. This gives an edge coloring of $\Gamma_{m,m}$, where each rectangle receives at least three colors. Let $c_1$ be a coloring of $H$ using at most $2g(m)$ colors, where for an edge $\{i,j,j'\} \in H$ with $i \in A, j,j' \in B$, we color it with the color of the edge $\{(i,j), (i,j')\}$ in $\Gamma_{m,m}$ and for an edge $\{i,i',j\} \in H$ with $i,i' \in A, j \in B$, we color it with the color of the edge $\{(i,j), (i',j)\}$ in $\Gamma_{m,m}$.
Let $c_2$ be a coloring of $H$ constructed based on the coloring $c_B$ given in Section \ref{sec:preliminaries} as follows: for an edge $\{i,j,j'\} \in H$ with $i \in A, j,j' \in B$, let $c_2(\{i,j,j'\}) = c_B(\{j,j'\})$ and for an edge $\{i,i',j\} \in H$ with $i,i' \in A, j \in B$, let $c_2(\{i,i',j\}) = c_B(\{i,i'\})$. Now color the hypergraph $H$ using the coloring function $c_1 \times c_2$. Consider a copy $K$ of $K_4^{(3)}$ which intersects both parts $A$ and $B$. If $|K \cap A| = |K \cap B| = 2$, then assume that $K = \{i,i',j,j'\}$ for $i,i' \in A$ and $j,j' \in B$. One can see that the set of colors used by $c_1$ on $K$ is identical to the set of colors used on the rectangle $(i,j,i',j')$ in $\Gamma_{m,m}$ considered above. Thus $K$ receives at least three distinct colors. If $|K \cap A| = 1$ and $|K \cap B| = 3$, then the three hyperedges in $K$ which intersect $A$ use at least two colors from the coloring $c_2$, while the unique hyperedge of $K \cap B$ is colored with a different color. Hence $K$ contains at least three colors. Similarly, $K$ contains at least three colors if $|K \cap A| = 3$ and $|K \cap B| = 1$. Since $c_1$ uses at most $2g(m)$ colors and $c_2$ uses at most $\lceil \log m \rceil$ colors, we see that $c_1 \times c_2$ uses at most $2 \lceil \log m \rceil g(m)$ colors. Recall that we used at most $f_3(m,4,3)$ colors to color the edges inside $A$ and $B$. Therefore, we have found a $(4,3)$-coloring of $K^{(3)}_{2m}$ using at most \[ f_3(m,4,3) + 2 \lceil \log m \rceil g(m) \] colors, thereby establishing \eqref{eq:recursive}. \end{proof} \subsection{A basic bound on $F_k(r,p,q)$} Here we prove Theorem \ref{thm:step_down} that provides a basic upper bound on the function $F_k(r,p,q)$. Recall that we are given positive integers $r, k, p$, and $q$ all greater than $1$ and satisfying $r \ge k$. Let $N = r^{{F_{k-1}(r,p-1,q) \choose k-1}}$ and suppose that we are given an edge coloring of $K_N^{(k)}$ with $r$ colors (denoted by $c$). 
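The extraction carried out in the remainder of the proof is simple to simulate. For $k = 3$, the following sketch (illustrative; the coloring is an arbitrary deterministic stand-in rather than a worst case) performs the grouping steps and then verifies that the color of any edge with two vertices in the final set $X$ is determined by those two vertices.

```python
from itertools import combinations

r, T = 3, 4
N = r ** (T * (T - 1) // 2)  # N = r^{binom(T, k-1)} with k = 3, so N = 3^6 = 729

def c(e):
    # arbitrary deterministic r-coloring of triples (stand-in for a worst case)
    x1, x2, x3 = sorted(e)
    return (31 * x1 + 17 * x2 + x3) % r

X, Y = [], list(range(1, N + 1))
for _ in range(T):
    x, rest = Y[0], Y[1:]  # x is the minimum element of Y
    X.append(x)
    groups = {}
    for y in rest:  # group by the color vector towards the current X
        vec = tuple(c((xp, x, y)) for xp in X[:-1])
        groups.setdefault(vec, []).append(y)
    Y = max(groups.values(), key=len)  # keep a largest group (pigeonhole)

# Property 3 for k = 3: the color of an edge with two vertices in X
# is determined by those two vertices
for x1, x2 in combinations(X, 2):
    assert len({c((x1, x2, y)) for y in Y}) == 1
```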
Let $[N]$ be the vertex set of $K_N^{(k)}$. For each integer $t$ in the range $1 \le t \le F_{k-1}(r,p-1,q)$, we will inductively find a pair of disjoint subsets $X_t$ and $Y_t$ of $[N]$ with the following properties: \begin{enumerate} \item $|X_t| = t$ and $|Y_t| \ge \min\{N / r^{{t \choose k-1}}, N-t\}$, \item for all $x \in X_t$ and $y \in Y_t$, $x < y$, \item for all edges $e \in {X_t \cup Y_t \choose k}$ satisfying $|e \cap X_t| \ge k-1$, the color of $e$ is determined by the first $k-1$ elements of $e$ (note that the first $k-1$ elements necessarily belong to $X_t$). \end{enumerate} For the base cases $t=1, \ldots, k-2$, the pair of sets $X_{t} = \{1,2,\ldots,t\}$ and $Y_{t} = [N] \setminus X_t$ trivially satisfy the given properties. Now suppose that for some $t \ge k-2$, we are given pairs $X_{t}$ and $Y_{t}$ and wish to construct sets $X_{t+1}$ and $Y_{t+1}$. Since $t < F_{k-1}(r,p-1,q)$, Property 1 implies that $|Y_t| \ge 1$ and in particular that $Y_t$ is nonempty. Let $x$ be the minimum element of $Y_{t}$ and let $X_{t+1} = X_{t} \cup \{x\}$. For each element $y \in Y_{t} \setminus \{x\}$, consider the vector of colors of length ${|X_{t}| \choose k-2}$ whose coordinates are $c(e' \cup \{x, y\})$ for each $e' \in {X_{t} \choose k-2}$. By the pigeonhole principle, there are at least $\frac{|Y_t| - 1}{ r^{{ |X_t| \choose k-2 }} }$ vertices which have the same vector. Let $Y_{t+1}$ be these vertices. This choice immediately implies Properties 2 and 3 above. To check Property 1, note that \[ |Y_{t+1}| \ge \frac{|Y_t| - 1}{ r^{{ |X_t| \choose k-2 }} } \ge \frac{N / r^{{t \choose k-1}} - t - 1}{ r^{{ |X_t| \choose k-2 }} } = \frac{N}{ r^{{ t+1 \choose k-1 }} } - \frac{t+1}{r^{{ t \choose k-2 }}} > \frac{N}{ r^{{ t+1 \choose k-1 }} } - 1, \] where the final inequality follows from $t \ge k-2$ and $r \ge k$. 
Since $N = r^{{F_{k-1}(r,p-1,q) \choose k-1}}$, $F_{k-1}(r,p-1,q) \ge t+1$ and $|Y_{t+1}|$ is an integer, this implies that $|Y_{t+1}| \ge \frac{N}{ r^{{ t+1 \choose k-1 }} }$. Let $T = F_{k-1}(r,p-1,q)$ and note that $|X_T| = F_{k-1}(r,p-1,q)$ and $|Y_T| \ge 1$. Construct an auxiliary complete $(k-1)$-uniform hypergraph over the vertex set $X_T$ and color each edge with the color guaranteed by Property 3 above. This gives an edge coloring of $K^{(k-1)}_T$ with $r$ colors and thus, by definition, we can find a set $A$ of $p-1$ vertices using fewer than $q$ colors on its edges in the auxiliary $(k-1)$-uniform hypergraph. It follows from Property 3 that for an arbitrary $y \in Y_T$, $A \cup \{y\}$ is a set of $p$ vertices using fewer than $q$ colors on its edges in the original $k$-uniform hypergraph. \subsection{A superpolynomial lower bound for $F_3(r, 5, 6)$} In this subsection, we present a $(5,6)$-coloring of $K_n^{(3)}$ using $2^{O(\sqrt{\log n})}$ colors. This shows that $f_3(n,5,6) = 2^{O(\sqrt{\log n})}$ and $F_3(r,5,6) = 2^{\Omega(\log^2 r)}$. The edge coloring is given as a product $c = c_1 \times c_2 \times c_3 \times c_4$ of four coloring functions $c_1,c_2,c_3,c_4$. The first coloring $c_1$ is a $(4,3)$-coloring of $K_n^{(3)}$ using $f_3(n,4,3)$ colors. Combining Proposition \ref{prop:relation_g_f} and Theorem \ref{thm:grid_main}, we see that $f_3(n,4,3) = 2^{O((\log n)^{2/5} (\log \log n)^{1/5})}$. Let $n=2^d$ and write the vertices of $K_n$ as binary strings of length $d$. To define $c_2, c_3$ and $c_4$, for three distinct vertices $u,v,w$, assume that the least coordinate in which not all vertices have the same bit is the $i$-th coordinate and let $u_i, v_i, w_i$ be the $i$-th coordinate of $u,v,w$, respectively. Without loss of generality, we may assume that $u_i = v_i \neq w_i$, i.e. $(u_i, v_i, w_i) = (0,0,1)$ or $(1,1,0)$. Define the second color $c_2$ of the triple of vertices $\{u,v,w\}$ as $i$. Thus $c_2$ uses at most $\log n$ colors. 
Define the third color $c_3$ as the value of $w_i$, which is either 0 or 1. Define the fourth color $c_4$ as $c_M(u, v)$, where $c_M$ is the graph coloring given in Section \ref{sec:preliminaries}, which is both a $(3,2)$- and a $(4,3)$-coloring. Recall that $c_M$ uses at most $2^{O(\sqrt{\log n})}$ colors. The number of colors in the coloring $c$ is \[ 2^{O((\log n)^{2/5} (\log \log n)^{1/5})} \cdot \log n \cdot 2 \cdot 2^{O(\sqrt{\log n})} = 2^{O(\sqrt{\log n})}, \] as desired. Now we show that each set of $5$ vertices receives at least $6$ colors in the coloring $c$. Let $i$ be the least coordinate such that the five vertices do not all agree. \medskip \noindent \textbf{Case 1}: One of the vertices (call it $v_1$) has one bit at coordinate $i$, while the other four vertices (call them $v_2, v_3, v_4, v_5$) have the other bit. The $6$ triples containing $v_1$ receive colors different from those of the other $4$ triples. Indeed, the triples containing $v_1$ have $c_2 = i$, while the other triples have $c_2$ greater than $i$. Since $c_M$ is a $(4,3)$-coloring of graphs, $c_4$ tells us that the triples containing $v_1$ have to use at least $3$ colors. On the other hand, by the coloring $c_1$, the triples in the 4-set $\{v_2,v_3,v_4,v_5\}$ have to use at least 3 colors. Hence, at least 6 colors have to be used on the set of five vertices. \medskip \noindent \textbf{Case 2}: Two of the vertices (call them $v_1,v_2$) have one bit at coordinate $i$, while the other three vertices (call them $v_3, v_4, v_5$) have the other bit. Let $V_0=\{v_1,v_2\}$ and $V_1=\{v_3,v_4,v_5\}$. Let $A$ be the set of colors of triples in $\{v_1,\ldots,v_5\}$. We partition $A$ into $A_0, A_1, A_2$ as follows. For each $j \in \{0,1,2\}$, let $A_j$ be the set consisting of the colors of triples containing exactly $j$ vertices from $V_0$. It follows from the colorings $c_2$ and $c_3$ that the three color sets $A_0, A_1, A_2$ form a partition of $A$.
Indeed, the color in $A_0$ has second coordinate $c_2$ greater than $i$, while the colors in $A_1$ and $A_2$ have second coordinate $c_2 = i$. Furthermore, the colors in $A_1$ have third coordinate $c_3$ distinct from the third coordinate $c_3$ of the colors in $A_2$. Note also that $|A_0| = 1$. \medskip \noindent \textbf{Case 2a}: $|A_2|=3$. Since the coloring $c_M$ is a $(3,2)$-coloring of graphs, $c_4$ implies that the triples containing $v_1$ whose other two vertices are in $V_1$ receive at least $2$ colors. This implies that $|A_1| \geq 2$ and, therefore, the number of colors used is at least $|A_0|+|A_1|+|A_2| \geq 6$. \medskip \noindent \textbf{Case 2b}: $|A_2|=2$. Suppose without loss of generality that $(v_1,v_2,v_3)$ and $(v_1,v_2,v_4)$ have the same color, which is different from the color of $(v_1,v_2,v_5)$. As each $K_4^{(3)}$ uses at least $3$ colors in coloring $c_1$, $(v_1,v_3,v_4)$ and $(v_2,v_3,v_4)$ have different colors. Note that $c_4(v_1, v_3, v_4) = c_4(v_2, v_3, v_4) = c_M(v_3, v_4)$. Since $c_M$ is a $(3,2)$-coloring of graphs, at least one of $c_M(v_3, v_5)$ or $c_M(v_4, v_5)$ is different from $c_M(v_3, v_4)$. Suppose, without loss of generality, that $c_M(v_3, v_5) \neq c_M(v_3, v_4)$. Since $c$ is defined as the product of $c_1, \ldots, c_4$, we see that the color of $(v_1, v_3, v_5)$ is different from both that of $(v_1, v_3, v_4)$ and $(v_2, v_3, v_4)$. Thus $|A_1| \geq 3$. Then the number of colors used is at least $|A_0|+|A_1|+|A_2| \geq 6$. \medskip \noindent \textbf{Case 2c}: $|A_2|=1$. This implies that the three edges $(v_1,v_2,v_j)$ for $j=3,4,5$ are of the same color. First note that as in the previous case, there are at least two different colors among $c_M(v_3, v_4)$, $c_M(v_3, v_5)$ and $c_M(v_4, v_5)$. Without loss of generality, suppose that $c_M(v_3, v_4) \neq c_M(v_3, v_5)$. 
Since $c$ is defined as the product of $c_1, \ldots, c_4$, this implies that the set $A_1' = \{c(v_1, v_3, v_4), c(v_2, v_3, v_4)\}$ is disjoint from the set $A_1'' = \{c(v_1, v_3, v_5), c(v_2, v_3, v_5)\}$. Now, by considering the coloring $c_1$, since all three edges $(v_1,v_2,v_j)$ for $j=3,4,5$ are of the same color, we see that $|A_1'| = 2$ and $|A_1''| = 2$. Hence $|A_1| \ge |A_1'| + |A_1''| = 4$. Then the number of colors used is at least $|A_0|+|A_1|+|A_2| \geq 6$. \section{A chromatic number version of the Erd\H{o}s--Gy\'arf\'as problem} \label{sec:chi_eg} \subsection{Bounds on $F_{\chi}(r,4,3)$} In this subsection, we prove Theorem \ref{thm:chi_4_3}. This asserts that \[ 2^{\log^2 r/36} \le F_{\chi}(r,4,3) \le C \cdot 2^{130 \sqrt{r \log r}}. \] In order to obtain the upper bound, we use the concept of dense pairs. Suppose that a graph $G$ is given. For positive reals $\varepsilon$ and $d$, a pair of disjoint vertex subsets $(V_1, V_2)$ is \emph{($\varepsilon, d$)-dense} if for every pair of subsets $U_1 \subseteq V_1$ and $U_2 \subseteq V_2$ satisfying $|U_1| \ge \varepsilon |V_1|$ and $|U_2| \ge \varepsilon |V_2|$, we have \[ e(U_1, U_2) \ge d |U_1| |U_2|, \] where $e(U_1, U_2)$ is the number of edges of $G$ with one endpoint in $U_1$ and the other in $U_2$. The following result is due to Peng, R\"odl and Ruci\'nski \cite{PRR}. Recall that the edge density of a graph $G$ with $m$ edges and $n$ vertices is $m/\binom{n}{2}$. \begin{THM} \label{thm:regularity} For all positive reals $d$ and $\varepsilon$, every graph on $n$ vertices of edge density at least $d$ contains an $(\varepsilon, d/2)$-dense pair $(V_1, V_2)$ for which \[ |V_1| = |V_2| \ge \frac{1}{8}n d^{12/\varepsilon}. \] \end{THM} The original theorem of Peng, R\"odl and Ruci\'nski takes a bipartite graph with $n$ vertices in each part and $dn^2$ edges as input and outputs an $(\varepsilon,d/2)$-dense pair with parts of size at least $\frac{1}{2}n d^{12/\varepsilon}$. 
The theorem as stated above is an immediate corollary since every $n$-vertex graph of density $d$ contains a bipartite subgraph with $m = \lfloor \frac{n}{2} \rfloor \ge \frac{n}{4}$ vertices in each part and at least $d m^2$ edges. \begin{proofof}{upper bound in Theorem \ref{thm:chi_4_3}} Let $n = F_{\chi}(r,4,3) - 1$ and suppose that a chromatic-$(4,3)$-coloring of $K_n$ using $r$ colors is given. Take a densest color, say red, and consider the graph $\mathcal{G}$ induced by the red edges. This graph has density at least $\frac{1}{r}$. By applying Theorem \ref{thm:regularity} with $\varepsilon = \left(\frac{\ln r}{r}\right)^{1/2}$, we obtain an $(\varepsilon, \frac{1}{2r})$-dense pair $(V_1, V_2)$ in $\mathcal{G}$ such that \[ |V_1| = |V_2| \ge \frac{1}{8}n\left(\frac{1}{r}\right)^{12/\varepsilon} \ge n e^{-13 \sqrt{r \ln r}}. \] For a color $c$ which is not red, let $\mathcal{G}_{+c}$ be the graph obtained by adding all edges of color $c$ to the graph $\mathcal{G}$. Since the given coloring is a chromatic-$(4,3)$-coloring, we see that $\mathcal{G}_{+c}$ is $3$-colorable for all $c$. Consider an arbitrary proper $3$-coloring of $\mathcal{G}_{+c}$. If there exists a color class in this proper coloring which intersects both $V_1$ and $V_2$ in at least $\varepsilon|V_1|$ vertices, then, since $(V_1, V_2)$ is an $(\varepsilon, \frac{1}{2r})$-dense pair, there exists an edge between the two intersections, thereby contradicting the fact that the $3$-coloring is proper. Hence, $\mathcal{G}_{+c}$ has an independent set $I_c$ of size at least $(1-2\varepsilon)|V_1|$ in either $V_1$ or $V_2$. For $i=1,2$, define $C_i$ to be the set of colors $c \in [r]$ for which this independent set $I_c$ is in $V_i$. Since $|C_1| + |C_2| \ge r-1$, we may assume, without loss of generality, that $|C_1| \ge \frac{r-1}{2}$. For each $v \in V_1$, let $d(v)$ be the number of colors $c \in C_1$ for which $v \in I_c$. 
Note that \begin{align} \label{eq:degreesum} \sum_{v \in V_1} d(v) = \sum_{c \in C_1} |I_c| \ge |C_1| \cdot (1-2\varepsilon)|V_1|. \end{align} For each $X \subset C_1$ of size $\frac{r}{4}$, let $I_X = \bigcap_{c \in X} I_c$. We have \begin{align*} \sum_{\substack{X \subset C_1 \\ |X| = r/4}} |I_X| = \sum_{v \in V_1} {d(v) \choose r/4} \ge |V_1| \cdot {|C_1| \cdot (1-2\varepsilon) \choose r/4}, \end{align*} where the inequality follows from \eqref{eq:degreesum} and convexity. Since $|C_1| \ge \frac{r-1}{2}$, we have \[ \sum_{\substack{X \subset C_1 \\ |X| = r/4}} |I_X| \ge (1-8\varepsilon)^{r/4} |V_1| \cdot {|C_1| \choose r/4}. \] Thus we can find a set $X \subset C_1$ for which $|I_X| \ge (1-8\varepsilon)^{r/4}|V_1|$. By construction, no edge within $I_X$ receives a color from $X$ (nor the color red) and hence the original coloring induces a chromatic-$(4,3)$-coloring of a complete graph on $|I_X|$ vertices using at most $3r/4$ colors. This gives \[ F_{\chi}\left(\frac{3r}{4}, 4, 3\right) - 1 \ge |I_X| \ge (1-8\varepsilon)^{r/4} |V_1|. \] For $\varepsilon \le 1/16$, the inequality $1-8\varepsilon \ge e^{-16\varepsilon}$ holds. Hence, for large enough $r$, the right-hand side above is at least \[ e^{-4 \varepsilon r} \cdot n e^{-13\sqrt{r \ln r}} = e^{-4\sqrt{r\ln r}} \cdot n e^{-13\sqrt{r \ln r}} = (F_{\chi}(r, 4, 3) - 1) e^{-17 \sqrt{r \ln r}}. \] We conclude that there exists $r_0$ such that if $r \ge r_0$, then \[ F_{\chi}(r,4,3) \le e^{17 \sqrt{r \ln r}} F_{\chi}\left(\frac{3r}{4}, 4, 3\right). \] We now prove by induction that there is a constant $C$ such that $F_{\chi}(r,4,3) \leq C e^{130 \sqrt{r \ln r}}$ holds for all $r$. This clearly holds for the base cases $r<r_0$, so suppose $r \geq r_0$. Using the above inequality and the induction hypothesis, we obtain \[ F_{\chi}(r,4,3) \le e^{17 \sqrt{r \ln r}} F_{\chi}\left(\frac{3r}{4}, 4, 3\right) \le e^{17 \sqrt{r \ln r}} Ce^{130 \sqrt{(3r/4)\ln (3r/4)}} \le Ce^{130\sqrt{r\ln r}},\] which completes the proof.
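The final inequality in the chain above can also be confirmed numerically over a wide range of $r$ (illustrative check):

```python
import math

# check 17 sqrt(r ln r) + 130 sqrt((3r/4) ln(3r/4)) <= 130 sqrt(r ln r)
for r in range(2, 10001):
    lhs = (17 * math.sqrt(r * math.log(r))
           + 130 * math.sqrt(3 * r / 4 * math.log(3 * r / 4)))
    rhs = 130 * math.sqrt(r * math.log(r))
    assert lhs <= rhs
```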
\end{proofof} We now turn to the proof of the lower bound. In order to establish the lower bound, we show that Mubayi's coloring $c_M$ is in fact a chromatic-$(4,3)$-coloring. This then implies that $F_{\chi}(r,4,3) \ge 2^{\log^2 r/ 36}$, as claimed. Recall that in the coloring $c_M$, we view the vertex set of $K_n$ as a subset of $[m]^t$ for some integers $m$ and $t$ and, for two vertices $x, y \in [m]^t$ of the form $x = (x_1, \ldots, x_t)$ and $y = (y_1, \ldots, y_t)$, we let \[ c_M(x,y) = \Big(\{x_i, y_i\}, a_1, \ldots, a_t\Big), \] where $i$ is the minimum index for which $x_i \neq y_i$ and, as in Section \ref{sec:preliminaries}, $a_j = 1$ if $x_j \neq y_j$ and $a_j = 0$ otherwise. \begin{proofof}{lower bound in Theorem \ref{thm:chi_4_3}} Consider the coloring $c_M$ on the vertex set $[m]^t$. Suppose that two colors $c_{1}$ and $c_{2}$ are given and let \[ c_1 = \Big(\{x_{1}, y_{1}\}, a_{1,1}, \ldots, a_{1,t} \Big) \quad \textrm{and} \quad c_2 = \Big(\{x_{2}, y_{2}\}, a_{2,1}, \ldots, a_{2,t} \Big). \] Suppose that $a_{1, i_1}$ is the first non-zero $a_{1,j}$ term and $a_{2, i_2}$ is the first non-zero $a_{2,j}$ term. In other words, for a pair of vertices which are colored by $c_1$, the first coordinate in which the pair differ is the $i_1$-th coordinate (and a similar claim holds for $c_2$). Let $\mathcal{G}$ be the graph induced by the edges which are colored by either $c_1$ or $c_2$. We will prove that $\chi(\mathcal{G}) \le 3$ by presenting a proper vertex coloring of $\mathcal{G}$ using three colors, red, blue and green. \medskip \noindent \textbf{Case 1}: $i_{1}=i_{2}=i$ for some index $i$. First, color all the vertices whose $i$-th coordinate is equal to $x_{1}$ in red. Second, color all the vertices whose $i$-th coordinate is equal to $x_{2}$ in blue (if $x_1 = x_2$, there are no vertices of color blue). Third, color all other vertices in green.
To show that this is a proper coloring, note that if the color between two vertices $z, w \in [m]^t$ is either $c_1$ or $c_2$, then the $i$-th coordinate of $z$ and $w$ must be different. This shows that the set of red vertices and the set of blue vertices are both independent sets. It remains to show that the set of green vertices is an independent set. To see this, note that if the color between $z$ and $w$ is either $c_1$ or $c_2$, then the $i$-th coordinates $z_i$ and $w_i$ must satisfy \[ \{z_i, w_i\} = \{x_{1}, y_{1}\} \quad \textrm{or} \quad \{x_{2}, y_{2}\}, \] as this is the only way the first coordinate of $c_M(z, w)$ can match that of $c_1$ or $c_2$. However, all vertices which have $i$-th coordinate $x_{1}$ or $x_{2}$ are excluded from the set of green vertices. This shows that our coloring is proper. \medskip \noindent \textbf{Case 2}: $i_{1}\neq i_{2}$. Without loss of generality, we may assume that $i_1 < i_2$. We will find a proper coloring by considering only the $i_1$-th and $i_2$-th coordinates. For $v \in [m]^t$ of the form $v = (v_1, v_2, \ldots, v_t)$, let \[ \pi_{i_1}(v) = \begin{cases} 0 & \text{if } v_{i_1} = x_1 \\ 1 & \text{if } v_{i_1} = y_1 \\ * & \text{otherwise}\\ \end{cases} \qquad \text{and} \qquad \pi_{i_2}(v) = \begin{cases} 0 & \text{if } v_{i_2} = x_2 \\ 1 & \text{if } v_{i_2} = y_2 \\ * & \text{otherwise}\\ \end{cases}. \] Consider the projection map \[ \pi \,:\, [m]^t \rightarrow \{0, 1, *\} \times \{0, 1, *\} \] defined by $\pi(v) = (\pi_{i_1}(v), \pi_{i_2}(v))$ and let $\mathcal{H} = \pi(\mathcal{G})$ be the graph on $\{0, 1, *\} \times \{0, 1, *\}$ induced by the graph $\mathcal{G}$ and the map $\pi$. More precisely, a pair of vertices $v, w \in \{0, 1, *\} \times \{0, 1, *\}$ forms an edge if and only if there exists an edge of $\mathcal{G}$ between the two sets $\pi^{-1}(v)$ and $\pi^{-1}(w)$ (see Figure \ref{fig:proj_chi_4_3}). 
Note that a proper coloring of $\mathcal{H}$ can be pulled back via $\pi^{-1}$ to give a proper coloring of $\mathcal{G}$. It therefore suffices to find a proper $3$-coloring of $\mathcal{H}$. \begin{figure}[htp] \centering \includegraphics[scale=0.75]{coloring_43.eps} \caption{Graph $\mathcal{H} = \pi(\mathcal{G})$ when $a_{1,i_2} = 1$.} \label{fig:proj_chi_4_3} \end{figure} Consider two vertices $z,w \in [m]^t$. If $c_M(z,w) = c_2$, then the first coordinate in which $z$ and $w$ differ is the $i_2$-th coordinate. This implies that $z$ and $w$ have identical $i_1$-th coordinate. Hence, the set of possible edges of the form $\{\pi(z), \pi(w)\}$ is $E_2 = \left\{ \{00, 01\}, \{10, 11\}, \{*0, *1\} \right\}$. Now suppose that $c_M(z,w) = c_1$. Then the possible edges of the form $\{\pi(z), \pi(w)\}$ differ according to the value of $a_{1, i_2}$. \noindent \textbf{Case 2a}: $a_{1, i_2} = 0$. In this case, the $i_2$-th coordinate of $z$ and $w$ must be the same and thus the possible edges of the form $\{\pi(z), \pi(w)\}$ are $E_1 = \left\{ \{00, 10\}, \{01, 11\}, \{0*, 1*\} \right\}$. One can easily check that the graph with edge set $E_1 \cup E_2$ is bipartite. \noindent \textbf{Case 2b}: $a_{1, i_2} = 1$. In this case, the $i_2$-th coordinate of $z$ and $w$ must be different and thus the possible edges of the form $\{\pi(z), \pi(w)\}$ are $E_1 = \left\{ \{00, 11\}, \{00, 1*\}, \{01, 10\}, \{01, 1*\}, \{0*, 10\}, \{0*, 11\}, \{0*, 1*\} \right\}$. A $3$-coloring of the graph with edge set $E_1 \cup E_2$ is given by coloring the set of vertices $\{00, 10, *0\}$ in red, $\{01, 0*\}$ in blue, and $\{11, 1*, *1, **\}$ in green (see Figure \ref{fig:proj_chi_4_3}). \end{proofof} \subsection{An edge partition with slowly growing chromatic number} In this section, we will prove Theorem \ref{thm:chi_slow_grow} by showing that $c_M$ has the required property. \medskip \noindent {\bf Theorem \ref{thm:chi_slow_grow}}.
The coloring $c_M$ has the following property: for every subset $X$ of colors with $|X| \geq 2$, the subgraph induced by the edges colored with a color from $X$ has chromatic number at most $2^{3 \sqrt{|X| \log |X|}}$. \medskip \begin{proof} Consider the coloring $c_M$ on the vertex set $[m]^t$. For a set of colors $X$, let $\mathcal{G}_X$ be the graph induced by the edges colored by a color from the set $X$. Recall that each color $c$ under this coloring is of the form \begin{align*} c &= \Big(\{v_i, w_i\}, a_1, a_2, \ldots, a_t \Big). \end{align*} Define $\iota(c) = i$ to be the minimum index $i$ for which $a_i = 1$. For this index $i$, define $\eta_1(c) = v_i$ and $\eta_2(c) = w_i$, where we break symmetry by imposing $v_i < w_i$. Let $a_j(c) = a_j$ for $j = 1,\ldots, t$. Given a set of colors $X$, construct an auxiliary graph $\mathcal{H}$ over the vertex set $X$ whose edges are defined as follows. For two colors $c_1, c_2 \in X$, let $i_1 = \iota(c_1)$, $i_2 = \iota(c_2)$ and assume that $i_1 \le i_2$. Then $c_1$ and $c_2$ are adjacent if and only if $a_{i_2}(c_1) = 1$ (it is well-defined since if $i_1 = i_2$, then $a_{i_2}(c_1) = a_{i_1}(c_2) = 1$). Let $\mathcal{I}$ be the family of all independent sets in $\mathcal{H}$. We make the following claim, whose proof will be given later. \begin{CLAIM} \label{claim:indep_sets} The following holds: \\ (i) For all $I \in \mathcal{I}$, the graph $\mathcal{G}_I$ is bipartite. \\ (ii) $\chi(\mathcal{G}_X) \le |\mathcal{I}|$. \end{CLAIM} Suppose that the claim is true. Based on this claim, we will prove by induction on $|X|$ that $\chi(\mathcal{G}_X) \le 2^{3 \sqrt{|X| \log |X|}}$. For $|X|=2$, we proved in the previous subsection that $c_M$ is a chromatic-$(4,3)$-coloring, that is, the union of any two color classes is $3$-colorable. This clearly implies the required result in this case. Now suppose that the statement has been established for all sets of size less than $|X|$. 
Let $\alpha = \left\lceil \sqrt{\frac{|X|}{\log |X|}} \right\rceil$. If there exists an independent set $I \in \mathcal{I}$ of size at least $\alpha$, then, by the fact that $\mathcal{G}_X = \mathcal{G}_I \cup \mathcal{G}_{X \setminus I}$ and Claim \ref{claim:indep_sets} (i), we have \[ \chi(\mathcal{G}_X) \le \chi(\mathcal{G}_I) \cdot \chi(\mathcal{G}_{X \setminus I}) \le 2 \chi(\mathcal{G}_{X \setminus I}). \] If $|X \setminus I| \ge 2$, then, by the inductive hypothesis, the right hand side is at most $2 \cdot 2^{3\sqrt{|X\setminus I| \log |X\setminus I|}} < 2^{3\sqrt{|X| \log |X|}}$, where the final inequality uses $|I| \ge \alpha$; if $|X \setminus I| \le 1$, then, since $\chi(\mathcal{G}_{X \setminus I}) \leq 2$, the right hand side is at most $4$. Hence the claimed bound holds in both cases. On the other hand, if the independence number is less than $\alpha$, then, by Claim \ref{claim:indep_sets} (ii) and the fact that $|X| \ge 2$, we have \[ \chi(\mathcal{G}_X) \le \sum_{i=0}^{\alpha - 1} {|X| \choose i} \le |X|^{2\sqrt{|X|/\log |X|}} = 2^{2\sqrt{|X|\log |X|}}. \] This proves the theorem up to Claim \ref{claim:indep_sets}, which we now consider. \end{proof} \begin{proofof}{Claim \ref{claim:indep_sets}} (i) Suppose that $I \in \mathcal{I}$ is given. Since two colors with the same value of $\iota$ are adjacent in $\mathcal{H}$ and $I$ is an independent set, the values $\iota(c)$, $c \in I$, are pairwise distinct. For each $c \in I$, consider the map $\pi_c : [m]^t \rightarrow \{0, 1\}$, where for $x \in [m]^t$ of the form $x = (x_1, x_2, \ldots, x_t)$, we define \[ \pi_c(x) = \begin{cases} 0 & \text{if } x_{\iota(c)} \le \eta_1(c) \\ 1 & \text{if } x_{\iota(c)} > \eta_1(c). \end{cases} \] Define the map $\pi : [m]^t \rightarrow \{0, 1\}^{I}$ as \[ \pi(x) = (\pi_c(x))_{c \in I}. \] Consider the graph $\pi(\mathcal{G}_I)$ over the vertex set $\{0, 1\}^{I}$. Let $c$ and $c'$ be two distinct colors in $I$.
If $\iota(c') < \iota(c)$, then $a_{\iota(c')}(c) = 0$ since $\iota(c)$ is the minimum index $i$ for which $a_i(c) = 1$; if $\iota(c') > \iota(c)$, then $a_{\iota(c')}(c) = 0$ since $I$ is an independent set in the auxiliary graph $\mathcal{H}$ defined above. Hence, if $e = \{y,z\}$ is an edge of color $c$, then the two vectors $y$ and $z$ have identical $\iota(c')$-coordinate for all $c' \neq c$, thus implying that $\pi(y)$ and $\pi(z)$ have identical $c'$-coordinate for all $c' \neq c$. Further note that for $x \in [m]^t$, we have $\pi_c(x) = 0$ if the $\iota(c)$-th coordinate of $x$ is $\eta_1(c)$ and $\pi_c(x)=1$ if the $\iota(c)$-th coordinate of $x$ is $\eta_2(c)$. Since $\{y_{\iota(c)}, z_{\iota(c)}\} = \{\eta_1(c), \eta_2(c)\}$, we see that $\pi_c(y) \neq \pi_c(z)$. Therefore, two vertices $v,w \in \{0, 1\}^{I}$ can be adjacent in $\pi(\mathcal{G}_I)$ only if $v$ and $w$ differ in exactly one coordinate, implying that $\pi(\mathcal{G}_I)$ is a subgraph of the hypercube, which is clearly bipartite. A proper $2$-coloring of this graph can be pulled back to give a proper $2$-coloring of $\mathcal{G}_I$. \medskip \noindent (ii) We prove this by induction on the size of the set $X$. The claim is trivially true for $|X| = 0$ and $1$, since $|\mathcal{I}| = 1$ and $2$, respectively, and the graph $\mathcal{G}_X$ has chromatic number at most $1$ and $2$, respectively. Now suppose that we are given a set $X$ and the family $\mathcal{I}$ of independent sets in $\mathcal{H}$ (as defined above). Let $c \in X$ be a color with maximum $\iota(c)$ and let $i = \iota(c)$. Let $\mathcal{I}_c$ be the family of independent sets containing $c$ and $\mathcal{I}'_c$ be the family of all other independent sets. Let $A$ be the subset of vertices of $[m]^t$ whose $i$-th coordinate is $\eta_1(c)$. For two vectors $x, y \in A$, we have $a_i( c_M(x,y) ) = 0$, since both $x$ and $y$ have $i$-th coordinate $\eta_1(c)$.
Hence, in the subgraph of $\mathcal{G}_X$ induced on the set $A$, we only see colors $c' \in X$ which have $a_{i}(c') = 0$. Let $X_c \subseteq X$ be the set of colors $c'$ such that $a_{i}(c') = 0$. The observation above implies that $\mathcal{G}_X[A]$ is a subgraph of $\mathcal{G}_{X_c}$. By the inductive hypothesis, $\chi(\mathcal{G}_{X_c})$ is at most the number of independent sets of $\mathcal{H}[X_c]$. Moreover, by the definitions of $X_c$ and $\mathcal{I}_c$ and the choice of $c$, the independent sets of $\mathcal{H}[X_c]$ are in one-to-one correspondence with the independent sets in $\mathcal{I}_c$ (the correspondence is $I' \mapsto I' \cup \{c\}$, which is well defined since no color of $X_c$ is adjacent to $c$ in $\mathcal{H}$). Thus, we have \[ \chi(\mathcal{G}_X[A]) \le \chi(\mathcal{G}_{X_c}) \le |\mathcal{I}_c|. \] Now consider the set $B = [m]^t \setminus A$. The subgraph of $\mathcal{G}_X$ induced on $B$ does not contain any edge of color $c$ and therefore $\mathcal{G}_{X}[B]$ is a subgraph of $\mathcal{G}_{X \setminus \{c\}}$. By the inductive hypothesis, $\chi(\mathcal{G}_{X \setminus \{c\}})$ is at most the number of independent sets of $\mathcal{H}[X \setminus \{c\}]$. By definition, the independent sets of $\mathcal{H}[X \setminus \{c\}]$ are in one-to-one correspondence with independent sets in $\mathcal{I}'_c$. Therefore, we have \[ \chi(\mathcal{G}_X[B]) \le \chi(\mathcal{G}_{X \setminus \{c\}}) \le |\mathcal{I}'_c|. \] Hence, \[ \chi(\mathcal{G}_X) \le \chi(\mathcal{G}_X[A]) + \chi(\mathcal{G}_X[B]) \le |\mathcal{I}_c| + |\mathcal{I}'_c| = |\mathcal{I}|, \] and the claim follows. \end{proofof} Using Theorem \ref{thm:chi_slow_grow}, we can now prove Theorem \ref{thm:chi_slow_grow_less_colors}, which we restate here for the reader's convenience. Recall that for an edge partition $E_1 \cup \ldots \cup E_t$ of the complete graph $K_n$ and a set $I \subseteq [t]$, we define $\mathcal{G}_I$ as the subgraph of $K_n$ with edge set $\bigcup_{i \in I} E_i$. \medskip \noindent {\bf Theorem \ref{thm:chi_slow_grow_less_colors}}.
There exists a positive real $r_0$ such that the following holds for every positive integer $r$ and positive real $\alpha \le 1$ satisfying $(\log r)^\alpha \ge r_0$. For $n = 2^{(\log r)^{2 + \alpha}/200}$, there exists a partition $E = E_1 \cup \dots \cup E_{\sqrt{r}}$ of the edge set of the complete graph $K_n$ such that \[ \chi(\mathcal{G}_I) \le 2^{3(\log r)^{\alpha/2}\sqrt{|I| \log 2 |I|}} \] for all $I \subset [\sqrt{r}]$. \medskip \begin{proof} Let $N = 2^{\log^2 r/200}$ and $t = (\log r)^{\alpha}$ (since $(\log r)^{\alpha} \ge r_0$ and $\alpha \le 1$, we can guarantee that $N$ and $t$ are large enough by asking that $r_0$ be large enough). Color the edge set of the complete graph on the vertex set $[N]^t$ as follows. For two vectors $v, w \in [N]^t$ of the form $v = (v_1 ,\ldots,v_t)$ and $w = (w_1, \ldots, w_t)$, we let \[ c(v,w) = \Big(i, c_M(v_i, w_i)\Big), \] where $i$ is the minimum index for which $v_i \neq w_i$. Since $c_M$ on $K_N$ uses at most $2^{6\sqrt{\log N}} \le \frac{\sqrt{r}}{\log r}$ colors (see the discussion in Section \ref{sec:preliminaries}), our coloring uses at most \[ t \cdot \frac{\sqrt{r}}{\log r} \le \sqrt{r} \] colors in total. Since $n = N^t$, this coloring gives an edge partition $E = E_1 \cup \dots \cup E_{s}$ of the complete graph on $n$ vertices, for some integer $s \le \sqrt{r}$. Now suppose that a set $I \subset [s]$ is given. The set $I$ can be partitioned into $t$ sets $I_1 \cup \dots \cup I_t$ according to the value of the first coordinate as follows: for each $i \in [t]$, define $I_i$ as the set of indices $j \in I$ for which the color of the edges $E_j$ has $i$ as its first coordinate. For each $i$, let $\pi_i \,:\, [N]^t \rightarrow [N]$ be the projection map to the $i$-th coordinate. Then the graph $\pi_i(\mathcal{G}_{I_i})$ becomes a subgraph of $K_N$ induced by the union of $|I_i|$ colors of $c_M$. 
Hence, by Theorem \ref{thm:chi_slow_grow}, we know that \[ \chi(\mathcal{G}_{I_i}) \le \chi(\pi_i(\mathcal{G}_{I_i})) \le 2^{3\sqrt{|I_i| \log 2 |I_i|}} \] for each $i \in [t]$, where we introduce the extra $2$ in the logarithm to account for the possibility that $|I_i| = 1$. Therefore, we see that \[ \chi(\mathcal{G}_I) \le \prod_{i \in [t]} \chi(\mathcal{G}_{I_i}) \le 2^{3\sum_{i \in [t]} \sqrt{|I_i| \log 2 |I_i|}}. \] Since $\sqrt{x \log 2 x}$ is concave, Jensen's inequality implies that the sum in the exponent satisfies \[\sum_{i \in [t]} \sqrt{|I_i| \log 2 |I_i|} \leq t \sqrt{(|I|/t) \log (2 |I|/t)} \le \sqrt{t |I| \log 2 |I|} = (\log r)^{\alpha/2} \sqrt{|I| \log 2 |I|} . \] This implies the required result. \end{proof} \section{Concluding Remarks} \label{sec:conclusion} \subsection{The grid Ramsey problem with asymmetric colorings} One may also consider an asymmetric version of the grid Ramsey problem, where we color the row edges using $r$ colors but are allowed to use only two colors on the column edges. Let $G(r,2)$ be the minimum $n$ for which such a coloring is guaranteed to contain an alternating rectangle. One can easily see that \[ r \le G(r,2) \le r^3 + 1. \] The following construction improves the lower bound to $G(r,2) \ge \frac{1}{4}r^2$. Let $n = \frac{1}{4} r^2$ and $p$ be a prime satisfying $\frac{r}{2} \le p \le r$ and $n \le 2^p$ (the existence of such a prime follows from Bertrand's postulate). Consider the $n \times n$ grid. For each $i \in [n]$, assign to the $i$-th row a sequence $(a_{i,1}, \ldots, a_{i, p}) \in [r]^{p}$ so that for all distinct $i$ and $i'$ there exists at most one coordinate $j \in [p]$ for which $a_{i,j} = a_{i',j}$ (the construction will be given below). 
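Such a family of sequences exists; the arithmetic-progression construction $B_{a,b}$ given below can be sanity-checked computationally. The following sketch (helper names are ours; $p$ is assumed prime) verifies the agreement property for a small prime:

```python
from itertools import combinations

def progression_sequences(p):
    # All p^2 sequences B_{a,b} = (a, a+b, ..., a+(p-1)b) with entries in Z_p.
    return [tuple((a + i * b) % p for i in range(p))
            for a in range(p) for b in range(p)]

def max_pairwise_agreement(seqs):
    # Largest number of coordinates in which two distinct sequences agree.
    return max(sum(s == t for s, t in zip(u, v))
               for u, v in combinations(seqs, 2))
```

For $p = 5$ this produces $25$ sequences with maximum pairwise agreement $1$: distinct pairs $(a,b)$, $(a',b')$ with $b = b'$ never agree, while those with $b \neq b'$ agree in exactly one coordinate.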
Given these sequences, for each $i \in [n]$ and distinct $j, j' \in [n]$, color the edge $\{(i,j), (i,j')\}$ as follows: examine the binary expansions of $j$ and $j'$ to identify the first bit $t$ in which the two differ and color the edge with color $a_{i,t}$ (this is possible since $2^p \ge n$). For two distinct rows, suppose that the sequences corresponding to these rows coincide in the $k$-th coordinate (if they coincide in no coordinate, the intersection of the two rows contains no edges and is trivially bipartite). Then the intersection of the two rows is a subgraph of the graph connecting vertices whose $k$-th bit in the binary expansion is 0 to those whose $k$-th bit in the binary expansion is 1. Thus, the intersection of any two rows is a bipartite graph and therefore, by the same argument as in the proof of Lemma \ref{lem:row_chromatic}, we obtain $G(r,2) \ge n$ (note that we in fact obtain a coloring of the $cr^2 \times 2^{c'r}$ grid). It suffices to construct a collection of sequences with the property claimed above. For $a,b \in \mathbb{Z}_p$, consider the following sequence with entries in $\mathbb{Z}_p$: \[ B_{a,b} = \Big( a, a + b, a + 2b, \ldots, a + (p-1)b \Big). \] For two distinct pairs $(a,b)$ and $(a',b')$, the sequences $B_{a,b}$ and $B_{a',b'}$ can agree in the $i$-th coordinate if and only if $a + ib = a' + ib'$, which is equivalent to $(b-b')i = a' - a$. Since $(a,b) \neq (a',b')$, we see that there exists at most one index $i$ in the range $0 \le i \le p-1$ for which $a + ib = a' + ib'$. Thus the sequences $B_{a,b}$ have the claimed property. Note that the total number of sequences is $p^2 \ge n$. Moreover, since $p \le r$, by abusing notation, we may assume that the sequences are in fact in $[r]^p$ and, therefore, we can use them in the construction of our coloring. The following question may be more approachable than the corresponding problem for $G(r)$. \begin{QUES} Can we improve the upper bound on $G(r, 2)$?
\end{QUES} \subsection{The Erd\H{o}s--Gy\'arf\'as problem in hypergraphs} As mentioned in the introduction, for each fixed $i$ with $0 \le i \le k$ and large enough $p$, \[ F_k\left(r, p, {p-i \choose k-i} + 1\right) \le r^{r^{\iddots^{r^{c_{k,p}}} }}, \] where the number of $r$'s in the tower is $i$. It would be interesting to establish a lower bound on $F_k(r, p, {p-i \choose k-i})$ exhibiting a different behavior. \begin{PROB} Let $p, k$ and $i$ be positive integers with $k \ge 3$ and $0 < i < k$. Establish a lower bound on $F_k(r, p, {p-i \choose k-i})$ that is significantly larger than the upper bound on $F_k(r,p,{p-i \choose k-i} + 1)$ given above. \end{PROB} We have principally considered the $i=1$ case of this question. For example, the Erd\H{o}s--Gy\'arf\'as problem on whether $F(r,p,p-1)$ is superpolynomial in $r$ for all $p \geq 3$ corresponds to the case where $k = 2$ and $i = 1$. Theorems~\ref{thm:grid_main} and \ref{thm:F_3_5_6} represent progress on the analogous problem with $k = 3$. The next open case, showing that $F_3(r, 6, 10)$ is superpolynomial, appears difficult. For $i \geq 2$, it seems likely that one would have to invoke a variant of the stepping-up technique of Erd\H{o}s and Hajnal (see, for example, \cite{GrRoSp}). In particular, we would like to know the answer to the following question. \begin{QUES} Is $F_3(r, p, p-2)$ larger than $2^{r^c}$ for any fixed $c$? \end{QUES} For $p = 4$, a positive solution to this problem follows since we know that the Ramsey number of $K_4^{(3)}$ is double exponential in the number of colors (see, for example, \cite{AxGyLiMu}). The general case appears to be much more difficult. Another case of particular importance is $F_{2d-1}(r, 2d, d+1)$, since it is this function (or rather a $d$-partite variant) which is used by Shelah in his proof of the Hales--Jewett theorem.
If the growth rate of this function is a tower of bounded height for all $d$, then it would be possible to give a tower-type bound for Hales--Jewett numbers. However, we expect that this is not the case. \begin{PROB} Show that for all $s$ there exists $d$ such that \[ F_{2d-1}\left(r, 2d, d+1\right) \ge 2^{2^{\iddots^{2^{r}}}}, \] where the number of $2$'s in the tower is at least $s$. \end{PROB} \subsection{Studying the chromatic number version of the Erd\H{o}s--Gy\'arf\'as problem} Since we know that both $F(r,p,p-1)$ and $F_\chi(r,4,3)$ are superpolynomial in $r$, it is natural to ask the following question (see also \cite{CoFoLeSu}). \begin{QUES} \label{que:chi_poly} Is $F_{\chi}(r,p,p-1)$ superpolynomial in $r$? \end{QUES} By following a similar line of argument to the lower bound for $F_\chi(r,4,3)$, we can show that $c_M$ is also a chromatic-$(5,4)$-coloring. Therefore, $F_{\chi}(r,5,4) = 2^{\Omega(\log^2 r)}$, answering Question~\ref{que:chi_poly} for $p =5$. Since the proof is based on rather tedious case analysis, we will post a supplementary note rather than including the details here. It would be interesting to determine whether the $(p, p-1)$-colorings defined in \cite{CoFoLeSu} are also chromatic-$(p,p-1)$-colorings. If so, they would provide a positive answer to Question~\ref{que:chi_poly}. In Theorem~\ref{thm:chi_4_3}, we showed that $2^{\Omega(\log^2 r)} \le F_{\chi}(r, 4, 3) \le 2^{O(\sqrt{r \log r})}$. It would be interesting to reduce the gap between the lower and upper bounds. Since $F_{\chi}(r,4,2) \ge 2^{r} + 1$, we see that $F_{\chi}(r,4,2)$ is exponential in $r$, while $F_{\chi}(r,4,3)$ is subexponential in $r$. For $p \ge 5$, the value of $q$ for which the transition from exponential to subexponential happens is not known. However, recall that $F_{\chi}(r, 2^d + 1, d+1)$ is exponential in $r$ for all $d \ge 1$. 
This followed from showing that in the edge coloring $c_B$ the union of every $d$ color classes induces a graph of chromatic number $2^d$ (see Section~\ref{sec:preliminaries}). The following question asks whether a similar edge coloring exists if we want the union of every $d$ color classes to induce a graph of chromatic number at most $2^d - 1$. \begin{QUES} \label{ques:ch_3} Is $F_\chi(r,2^{d}, d+1) = 2^{o(r)}$ for all $d \ge 2$? \end{QUES} A positive answer to Question \ref{ques:ch_3} would allow us to determine, for all $p$, the maximum value of $q$ for which $F_{\chi}(r,p,q)$ is exponential in $r$. Indeed, for $2^{d-1} < p \le 2^{d}$, we have \[ F_\chi(r, p, d) \ge F_\chi(r, 2^{d-1}+1, d) = 2^{\Omega(r)}, \] while a positive answer to Question \ref{ques:ch_3} would imply \[ F_\chi(r, p, d+1) \le F_\chi(r, 2^{d}, d+1) = 2^{o(r)}. \] Hence, given a positive answer to Question \ref{ques:ch_3}, the maximum value of $q$ for which $F_\chi(r, p, q)$ is exponential in $r$ would be $q = \lceil \log p \rceil$. A key component in our proof of Theorem~\ref{thm:grid_main} was Theorem~\ref{thm:chi_slow_grow}, which says that in the coloring $c_M$, the chromatic number of the union of any $s$ color classes is not too large. We suspect that our estimate on the chromatic number is rather weak. It would be interesting to improve it further. More generally, we have the following rather informal question, progress on which might allow us to improve the bounds in Theorem~\ref{thm:grid_main}. \begin{QUES} Given an edge partition of the complete graph $K_n$, how slowly can the chromatic number of the graph determined by the union of $s$ color classes grow? \end{QUES} Finally, let $\mathcal{F}$ be a family of graphs and define $F(r,q; \mathcal{F})$ to be the minimum integer $n$ for which every edge coloring of $K_n$ with $r$ colors contains a subgraph $F \in \mathcal{F}$ that contains fewer than $q$ colors. 
$F(r,q; \mathcal{F})$ generalizes both $F(r,p,q)$ and $F_\chi(r,p,q)$ since we may take $\mathcal{F}$ to be $\{K_p\}$ for $F(r,p,q)$ and the family of all $p$-chromatic graphs for $F_\chi(r,p,q)$. Our results suggest that $F(r,q; \mathcal{F})$ is closely related to the chromatic number of the graphs in $\mathcal{F}$. The case where $\mathcal{F}$ consists of a single complete bipartite graph was studied in \cite{AxFuMu}. \medskip \noindent {\bf Acknowledgement.} We would like to thank the anonymous referee for several helpful comments.
\medskip \noindent arXiv:1405.6587 [math.CO], ``On the grid Ramsey problem and related questions,'' 2014-09-23. https://arxiv.org/abs/1405.6587
https://arxiv.org/abs/1805.09312
Ergodicity of Iwasawa continued fractions via markable hyperbolic geodesics
We prove the convergence and ergodicity of a wide class of real and higher-dimensional continued fraction algorithms, including folded and $\alpha$-type variants of complex, quaternionic, octonionic, and Heisenberg continued fractions, which we combine under the framework of Iwasawa continued fractions. The proof is based on the interplay of continued fractions and hyperbolic geometry, the ergodicity of geodesic flow in associated modular manifolds, and a variation on the notion of geodesic coding that we refer to as geodesic marking. As a corollary of our study of markable geodesics, we obtain a generalization of Serret's tail-equivalence theorem for almost all points. The results are new even in the case of complex continued fractions.
\section{Introduction} Regular continued fractions (CFs) represent the fractional part $x-\floor{x}$ of a real number as a descending iterated fraction $\frac{1}{a_1+\frac{1}{a_2+\cdots}}$ with positive integer coefficients. Other real CF algorithms adjust the notion of inversion (e.g., backward CFs), floor function (e.g., nearest-integer or $\alpha$ CFs), or modify the allowable digits (e.g., even and Rosen CFs). Higher-dimensional CF algorithms change the underlying space, giving, e.g., Hurwitz CFs on the complex numbers, Hamilton CFs on the quaternions, and the Heisenberg CFs recently defined by the authors on the nilpotent Heisenberg group (see \S \ref{sec:keyexamples} for a discussion of key examples). Given a CF algorithm, two questions are immediate: is the expansion convergent, and is the associated Gauss map ergodic? While convergence is straightforward to prove in most cases, ergodicity is more elusive. For complex CFs, previous proofs of ergodicity use an explicit analysis of the particular dynamical system and critically rely on the finite-range condition \cite{Nakada, MR0422169}. When applicable, these methods produce a finite piecewise-analytic invariant measure \cite{Hensley} and, under further assumptions, a Kuzmin-type theorem yielding the weak Bernoulli property \cite{Nakada, MR2952640}; cf.~\cite{1605.01127, MR2003772}. Unfortunately, the invariant measure is generally not finite: in the case of the one-dimensional Rosen CFs due to a violation of the finite-range condition, and in the case of the J.~Hurwitz complex CFs \cite{MR800085} due to a violation of properness (see below). In order to extend ergodicity to a wider range of higher-dimensional CFs for which the finite-range condition is not known, including the quaternionic and Heisenberg CFs, we generalize the classical connection between real CFs and planar hyperbolic geometry. 
Both of the above spaces appear as \emph{Iwasawa inversion spaces}, that is, boundaries of rank-one symmetric spaces of non-compact type, suggesting that this more general setting is natural to consider. Indeed, in \cite{1605.01127}, Chousionis-Tyson-Urbanski defined Iwasawa continued fractions on the closely-related \emph{Iwasawa groups} (see \S \ref{sec:further}) as iterated compositions of integral translations and inversions, and studied limit sets resulting from restricted-digit sequences. Here, we extend the above definition of Iwasawa CFs to an Iwasawa CF \emph{algorithm} associating a digit sequence to each point in an Iwasawa inversion space and leverage the connection to hyperbolic geometry to prove the following (see \S \ref{sec:defn} for definitions): \begin{thm}\label{thm:main} Every discrete and proper Iwasawa CF is convergent. Moreover, if it is complete, then it is ergodic. \end{thm} In particular, we obtain: \begin{thm} Folded complex, quaternionic, and Heisenberg CFs and their $\alpha$-type variants are convergent and ergodic. \end{thm} The convergence result is new for the above CFs, as well as for a broad family of new CF algorithms (see Table \ref{tab:examples}). The proof of convergence is based on standard methods, extended to the Heisenberg group in \cite{LV}, with the addition of a Ford circle discreteness argument that accounts for the fact that the entries of matrices associated to certain algorithms do not form discrete rings. The ergodicity result provides a novel approach to real and complex CFs that is robust under perturbations, and is a substantial breakthrough for higher-dimensional CFs (for incomplete CFs, see Theorem \ref{thm:notmain}). The proof of ergodicity is based on the ergodicity of geodesic flow on finite-volume hyperbolic manifolds together with a variation on the notion of geodesic coding, which we refer to as \emph{geodesic marking} (see \S \ref{sec:markable}). 
The new coding method also yields an a.e.~tail equivalence result for Iwasawa CFs which is novel for all higher-dimensional algorithms including the well-studied Hurwitz complex CFs: \begin{thm} \label{thm:IntroTail} Almost surely, two points in a complete, discrete, and proper Iwasawa CF are tail-equivalent if and only if they are $\mathcal M$-translates of one another. \end{thm} The ``only if'' direction holds for all points, with the same elementary proof as in the one-dimensional case. The converse does not hold in general, for example, for the Hurwitz complex CFs. Lakein \cite{Lakein} provides an explicit counterexample.\footnote{Lakein's counterexample makes use of a matrix with determinant $-\mathbbm{i}$, but by multiplying his choice of $A$ by $\mathbbm{i}$, we may obtain one that uses a matrix of determinant $1$.} \subsection{Key Examples} \label{sec:keyexamples} Before giving an outline of the proofs in \S \ref{sec:sketch} and the formal definition of Iwasawa CFs in \S \ref{sec:defn}, we provide some key examples. For a more comprehensive collection of examples, see \S \ref{sec:examples} and Table \ref{tab:examples}. There are two key examples to keep in mind when thinking about Iwasawa continued fractions: the well-studied nearest-integer continued fractions illustrated in Figure \ref{fig:modular}, and the Hurwitz complex CFs. Note that while the Hurwitz complex CF can be written in terms of complex numbers, it is commonly studied using real coordinates; see \cite{Hensley,Nakada}. \begin{example}[Nearest-Integer Continued Fractions] \label{ex:NearestInteger} We think of nearest-integer CFs as the space $\mathbb X=\mathbb{R}$ with the inversion $\iota(x)=1/x$, digit set $\mathcal Z=\mathbb{Z}$, and ``floor'' function $\floor{x}$ which rounds to the nearest integer. For a point $x$ in $K=[-1/2,1/2)$ one can extract the first CF digit $a_1$ as $\floor{1/x}$.
Further digits are extracted by defining the Gauss map $T(x)=1/x-\floor{1/x}$ (and $T(0)=0$), and taking $a_{i+1}=\floor{1/T^i(x)}$. We are interested in the convergence of the partial fractions $$(\iota \circ a_1 \cdots \circ \iota \circ a_n)(0)=\cfrac{1}{a_1+\cfrac{1}{a_2+\cdots \cfrac{1}{a_n+0}}}$$ (on the left, we think of an element of $\mathbb{Z}$ as an additive function on $\mathbb{R}$), and the ergodicity of the Gauss map. We will be working with the modular group $\langle \mathcal Z, \iota\rangle$, in this case isomorphic to $GL(2,\mathbb{Z})$, and the hyperbolic plane $\mathbb{H}^2_\mathbb{R}$ on which it acts. \end{example} \begin{figure}[ht] \begin{subfigure}[b]{0.4\textwidth}\includegraphics[width=\textwidth]{Hurwitz.png}\caption{Hurwitz CF}\end{subfigure} \begin{subfigure}[b]{0.4\textwidth}\includegraphics[width=\textwidth]{AlphaHurwitz.png}\caption{$\alpha$-variant for $\alpha$=0.3}\end{subfigure}\\ \begin{subfigure}[b]{0.4\textwidth}\includegraphics[width=\textwidth]{FoldedHurwitz.png}\caption{Folded variant}\end{subfigure} \begin{subfigure}[b]{0.4\textwidth}\includegraphics[width=\textwidth]{TetrisHurwitz.png}\caption{Tetris variant}\end{subfigure} \caption{Four variants of the Hurwitz complex CF algorithm. The fundamental domain $K$ in each case is displayed inside the unit circle (fixed by the inversion $\iota_c$), and is decomposed into rank-$1$ cylinder sets. The lattice $\mathcal Z=\mathbb{Z}^2$ is extended by the reflection $(x,y)\mapsto(x,-y)$ in the folded variant.} \label{fig:hurwitz} \end{figure} \begin{example}[Hurwitz Complex Continued Fractions] \label{ex:Hurwitz} Let $\mathbb X=\mathbb{R}^2$ and $\mathcal Z=\mathbb{Z}^2$, and let $\floor{\cdot}: \mathbb X\rightarrow \mathcal Z$ be the nearest-integer mapping. Write the complex inversion $z\mapsto 1/z$ in real coordinates as $\iota(x,y)=\frac{(x,-y)}{x^2+y^2}$. 
Noting that $K=[-1/2,1/2)\times[-1/2,1/2)$ is mapped to $(0,0)$ by $\floor{\cdot}$, define the Gauss map $T: K \rightarrow K$ by $T(x,y)= (\iota(x,y)-\floor{\iota(x,y)})$ with $T(0,0)=(0,0)$. Given a point $(x,y)\in K$, its \emph{iterates} are given by $(x_i,y_i)=T^i(x,y)$, and the \emph{digits} are the elements $a_{i+1}=\floor{\iota(x_i,y_i)}\in \mathcal Z$ subtracted at each stage of the iteration. Since $K$ is bounded away from the unit circle, the resulting continued fraction algorithm is proper, and one shows via an embedding of the modular group $\mathcal M=\langle \mathcal Z, \iota\rangle$ in $PSL(2,\mathbb{Z}[\mathbbm{i}])$ that it is discrete. Surprisingly, it is not complete, i.e., the stabilizer of $\infty$ in $\mathcal M$ is not equal to $\mathcal Z$, since $\mathcal M$ contains the mapping $z\mapsto -z$: \[\label{eq:invexample}\dfrac{1}{1+\dfrac{1}{-1+\dfrac{1}{1+z}}}=-z.\] \end{example} We are therefore unable to recover ergodicity of the Hurwitz CF. For such \emph{centrally-symmetric} Iwasawa CFs, the ergodicity statement of Theorem \ref{thm:main} becomes\footnote{Even when very powerful techniques are applied, sometimes one can do no better than bound the number of ergodic components. See, for example, \cite[Thm.~5.2]{Saussol}.}: \begin{thm}\label{thm:notmain} Let $T: K\rightarrow K$ be the Gauss map for an Iwasawa continued fraction with $n\geq 1$ central symmetries. Then $T$ has at most $n$ ergodic components. \end{thm} The ergodicity of Hurwitz CFs shows that Theorem \ref{thm:notmain} is not always optimal, but it remains possible that ergodicity is an exceptional occurrence for incomplete fractions. We can pass to a \emph{completion} of the Hurwitz CF by introducing the \emph{folded Hurwitz CFs}: \begin{example}[Folded Hurwitz CFs] \label{ex:folded} Let $\mathbb X=\mathbb{R}^2$ and $\iota$ be as above, and set $K=[-1/2,1/2)\times[-1/2,0]\subset \C$. 
Extend the translation action of $\mathbb{Z}^2$ on $\mathbb{R}^2$ by also allowing postcomposition with negation, setting $\mathcal Z=\{\pm\}\times \mathbb{Z}^2$. Correspondingly, extend the floor function to the mapping $\floor{\cdot}: \mathbb{R}^2 \rightarrow \mathcal Z$ that associates with each $(x,y)\in \mathbb{R}^2$ a sign $\sigma$ and $(a,b)\in \mathbb{Z}^2$ so that $\sigma(x-a,y-b)\in K$. The Gauss map, iterates, and digits of a point in $K$ are then defined in the same way as in Example \ref{ex:Hurwitz} above, with a folded digit of a point $(x,y)\in K$ now consisting of an element of $\mathbb{Z}^2$ and a sign choice. The folded Hurwitz CF remains discrete and proper, and we show in \S\ref{sec:appendix} that it is complete. For this choice of $K$ and $T$, ergodicity follows from the same finite-range condition as for the standard Hurwitz complex continued fractions. For $\alpha$-type variants of the folded Hurwitz CF, with, say, $K=[-1/2+\alpha, 1/2+\alpha)\times[-1/2,0]$, ergodicity of the corresponding CF algorithm is entirely new. \end{example} \subsection{Geodesic Marking} We now discuss the primary tool in our ergodicity proof: the notion of geodesic marking, which we believe is of independent interest. Before we do, let us introduce the concept of geodesic coding, following \cite{KU2007}, so that we can emphasize the differences between coding and marking. A code is a map from a symbolic dynamical system to a cross-section of geodesic flow on a quotient of a hyperbolic space by a lattice, such that the forward-shift map on the symbolic system and the first-return map to the cross-section commute under the coding map. Often, the symbolic sequence of a given geodesic is then associated with some digital expansion (such as the continued fraction expansion) of the endpoints of the geodesic.
Codes are usually created using cutting sequences (see \cite{AF84,BL,BM,KU2007,Series1} for examples) or reduction theories (see \cite{GH,KU2005,KU2012,MS} for examples). A third method, using entropy calculations, can be seen in \cite{AS}. None of these methods extends to the generality we wish to work in: the method of cutting sequences seems to be ``intrinsically two-dimensional'' \cite{AF84}, and reduction theories rely on precise arithmetic details of the dual of the CF algorithm. The main contributor to the complexity of reduction theories is the presence of small digits in the CF expansion. This is very clear in the work of \cite{GH}, for example, as a simple geometric cross-section must be augmented with several additional pieces in order to capture the behavior of the small digits. Lakein's counterexample to tail equivalence likewise requires the use of small digits. To avoid the complications caused by small digits, we introduce geodesic \emph{marking}: a sped-up coding which associates finite strings of digits with first-returns to a cross-section of geodesic flow. A priori, we work only with \emph{markable} geodesics (Definition \ref{defi:markable}) intersecting a codimension-one set $\mathcal{C}_{\mathbb{W}}\subset T^1\mathbb{H}$ (see \S \ref{subsec:markable}), and then show that $\mathcal{C}_{\mathbb{W}}$ is in fact a section of geodesic flow and that markable geodesics are generic. Our major result on geodesic marking is the following natural decomposition of a markable geodesic into segments corresponding to cusp excursions, which are furthermore related to iterates of the Gauss map: \begin{thm}[Markable Geodesic Theorem]\label{thm:markable} Fix a complete, proper, and discrete Iwasawa CF algorithm on an Iwasawa inversion space $\mathbb X$, with the associated hyperbolic space $\mathbb{H}$, modular group $\mathcal M$, and fundamental domain $K\subset\mathbb X$ for the lattice $\mathcal Z=\text{Stab}_\mathcal M(\infty)$.
There exists a codimension-one set $\mathcal{C}_{\mathbb{W}} \subset T^1 \mathbb{H}$ and a \emph{marking} that assigns to every markable geodesic satisfying $\gamma(0)\in \mathcal{C}_{\mathbb{W}}$ \begin{itemize} \item digits $a_i\in \mathcal Z$ and mappings $M_i\in \mathcal M$, for each $i\in \mathbb{Z}$, \item increasing indices $i_j\in \mathbb{Z}$ and times $t_j$, for each $j\in \mathbb{Z}$, with $i_0=0, t_0=0$ \end{itemize} collectively called the marking of the geodesic $\gamma$ such that: \begin{enumerate} \item (Full Coverage) The segments $[t_{j-1}, t_j]$ have length uniformly bounded below and hence cover all of $\mathbb{R}$, \item (Relation to Gauss Map) For each $i\geq 1$, $a_i$ is the $i^{th}$ CF digit of $\gamma_+$, and $M_i$ is the branch of $T^{-i}$ associated to the Gauss map $T$ at $\gamma_+$, \item (Cusp Detection) If, for $t\in [t_{j-1}, t_j]$, the horoheight of $\gamma(t)$ from $M\infty$ satisfies $\operatorname{ht}_{M\infty} \gamma(t)>h_0$, and if $M^{-1} \gamma_+\in K$ for some $M\in \mathcal M$, then $M=M_{i_j}$, \item (Intersection Detection) Let $M\in \mathcal M$ and $t\in \mathbb{R}$. Then one has $\gamma(t)\in M\mathcal{C}_{\mathbb{W}}$ if and only if for some $j$ one has $t=t_j$ and $M=M_{i_j}$, \item (Shifted Gauss Equivariance) Let $k\in \mathbb{Z}$. The marking $\{a_i',M'_i,i'_j, t'_j\}$ associated to the markable geodesic $\gamma'(t):=M_{i_k}^{-1}\gamma(t+t_k)$ satisfies: $t'_j=t_{j+k}-t_k$, $i'_j=i_{j+k}-i_k$, $a'_i=a_{i+i_k}$, and $M'_i=M_{i_k}^{-1}M_{i+i_k}$. \end{enumerate} \end{thm} \subsection{Sketch of Proofs}\label{sec:sketch} We will now discuss the basic ideas behind the proof of Theorem \ref{thm:markable} and the proof of ergodicity in Theorem \ref{thm:main}, ignoring some necessary subtleties. We encourage the reader to think of the case of nearest-integer continued fractions, illustrated in Figure \ref{fig:modular}. 
\begin{figure}[ht] \includegraphics[height=1.5in]{SL2} \caption{The setting, in the case of nearest-integer continued fractions. The domain of the Gauss map $K$ is the orange interval, the unit sphere $\mathbb S$ is red, and a geodesic $\gamma$ is dashed; horocycles are green. We use a region $\mathbb{W}$ in $\mathbb S$ bounded away from $\partial \mathbb{H}=\mathbb X$ to build the section $\mathcal{C}_{\mathbb{W}}$.} \label{fig:modular} \end{figure} We consider the hyperbolic space $\mathbb{H}$ whose parabolic boundary is the Iwasawa inversion space $\mathbb X$ and a modular group $\mathcal M$ generated by the inversion $\iota$ and lattice $\mathcal Z$. We let $\mathbb S$ be the unit sphere in $\mathbb{H}\cup \mathbb X$ with respect to the extended Cygan metric. We then look at a set $\mathbb{W}\subset \mathbb S$ of points bounded away from $\mathbb X$ in terms of horoheight, i.e., $\operatorname{ht}_\infty(\mathbb{W})$ is bounded below. We study a geodesic ray $\gamma$ satisfying $\gamma(0)\in\mathbb{W}$, whose forward endpoint $\gamma_+$ lies in the fundamental domain $K$ for $\mathcal Z$, and has CF digits $a_i$ and associated mappings $M_i$. We show that there is a syndetic sequence of non-negative integers $\mathfrak{i}_j$ such that the following holds. First, $\gamma$ intersects each $M_{\mathfrak{i}_j}\mathbb{W}$ and, second, if at some time $t$ the point $\gamma(t)$ comes close to $M\infty$ (that is, $\operatorname{ht}_{M\infty}(\gamma(t))$ is sufficiently large), then $M=M_{\mathfrak{i}_j}$ for some $j$. This second property is very useful, but we want it to be an ``if and only if'' property, not an ``if'' property. We then look at a refinement of $\mathbb{W}$. We let $\mathcal{C}_{\mathbb{W}}\subset T^1 \mathbb{H}$ be a set of vectors based at $\mathbb{W}$ such that the corresponding geodesics have been at large horoheight relative to $\infty$ in the past, and terminate at some point of $K$ in the future.
We say a geodesic $\gamma$ is \emph{markable} if it passes through infinitely many $\mathcal M$-translates of $\mathcal{C}_{\mathbb{W}}$ in both the past and future, and for such geodesics we prove the Markable Geodesic Theorem \ref{thm:markable}. In particular, we obtain that all future intersections of $\gamma$ with sets of the form $M\mathcal{C}_{\mathbb{W}}$, $M\in\mathcal M$, satisfy $M=M_{i}$ for some $i$, and occur in increasing order. Turning our attention to the proof of ergodicity, we consider the projection $\pi_\mathbb{H}: \mathbb{H}\rightarrow \mathcal M\backslash\mathbb{H}$, identify $\mathcal{C}_{\mathbb{W}}$ with $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$, and define a first-return map $\psi: \mathcal{C}_{\mathbb{W}}\rightarrow \mathcal{C}_{\mathbb{W}}$. We then use the ergodicity of geodesic flow to conclude that markable geodesics are generic and that $\psi$ is ergodic. Finally, we project from a geodesic $\gamma\in\mathcal{C}_{\mathbb{W}}$ to its endpoints $(\gamma_+,\gamma_-)$ on the boundary $\widehat{\mathbb X}\times \widehat{\mathbb X}$ and show that the resulting map induced by $\psi$ is a jump transformation associated to the extended Gauss map on a well-behaved subset of $K\times \widehat{\mathbb X}$. From here we use standard arguments to show that the ergodicity of $\psi$ implies the ergodicity of the extended Gauss map, which thus implies, by projecting onto the first coordinate, the ergodicity of $T$. \subsection{Further Remarks} \label{sec:further} Iwasawa CFs are the most general setting for our methods, which rely heavily on the fact that Iwasawa inversion spaces are boundaries of rank one symmetric spaces of non-compact type. Indeed, Iwasawa inversion spaces are precisely the spaces with this property, with the exclusion, due to the breakdown of vector-space-based techniques, of the exceptional $\mathbb X^1_{\mathbb O}$ that can be defined over the non-associative octonions.
Our notion of \emph{Iwasawa inversion space} differs slightly from the notion of \emph{Iwasawa groups} of \cite{1605.01127}, which excludes $\mathbb X^n_\mathbb{R}$ and allows $\mathbb X^1_{\mathbb O}$. We remark further that boundaries of rank one symmetric spaces of non-compact type are arguably the most general setting for geometric CFs and Diophantine theory: they are characterized \cite{MR3283670, COWLING19911} as homogeneous geodesic locally compact spaces admitting both a dilation (a notion of fraction) and a well-behaved inversion. (The Cygan metric we work with is not itself geodesic, but gives rise to a geodesic path metric.) The present work suggests the following further directions of study: \begin{question} Under what conditions is the invariant measure for the Gauss map finite or (piecewise) analytic? \end{question} \begin{question} Is the Gauss map mixing? \end{question} \begin{question} Does Theorem \ref{thm:main} hold for incomplete Iwasawa CFs, or for improper Iwasawa CFs with weak contact with the unit sphere (such as the J.~Hurwitz CFs)? \end{question} \begin{question} Can one characterize periodic Iwasawa CF expansions, analogously to the quadratic surd characterization of periodic regular CFs in $\mathbb{R}$ (cf.~\cite{Vperiodic})? \end{question} \begin{question} Can one describe the exceptions to Theorem \ref{thm:IntroTail} (cf.~\cite{Lakein})? \end{question} \begin{question} What Iwasawa CF algorithms are not represented in Table \ref{tab:examples}? \end{question} \subsection{Outline of the paper} Following this introduction, in \S \ref{sec:defn} we provide the general theory and definitions for Iwasawa inversion spaces. In \S \ref{sec:IwasawaCF} we define Iwasawa CFs, give further examples (including Table \ref{tab:examples}) and study conditions that guarantee discreteness, properness, and completeness. In \S \ref{sec:convergence}, we quickly prove the convergence of Iwasawa CFs. 
In \S \ref{sec:markable}, we will build up the theory surrounding markable geodesics, culminating in the Markable Geodesic Theorem. In \S \ref{sec:ergodicity}, we use the Markable Geodesic Theorem to prove the ergodicity of the Gauss map for an Iwasawa CF expansion and, in applications of this result, prove Theorems \ref{thm:notmain} and \ref{thm:IntroTail}. \subsection{Acknowledgements} A.L. was supported by University of Michigan NSF RTG grant 1045119. This article was written during visits by the authors to George Mason University, University of Michigan, and the Ohio State University. The authors thank these institutions for their hospitality, and Simons Travel Grant and GEAR Grant NSF DMS 11-07452 for the travel funding. The authors would also like to thank Jayadev Athreya and Ralf Spatzier for their helpful comments. \section{General Theory}\label{sec:defn} We now outline the structure of Iwasawa Inversion spaces $\mathbb X=\mathbb X^n_k$, the associated upper half-spaces $\mathbb{H}^{n+1}_k$, and the continued fraction algorithms that can be built on $\mathbb X$ using this structure. We encourage the reader to skip this section on the first reading, following the intuition of the Euclidean space $\mathbb X=\mathbb X^n_\mathbb{R}=\mathbb{R}^n$ and hyperbolic half-space $\mathbb{H}=\mathbb{H}^{n+1}_\mathbb{R}$ lying above it. \subsection{Iwasawa Inversion Spaces} \label{sec:space} Abstractly, an Iwasawa inversion space $\mathbb X$ is an Iwasawa $N$-group associated by the Iwasawa (KAN) decomposition to a non-exceptional rank one semi-simple Lie group $G$ and the parabolic boundary at infinity of the rank one symmetric space $G/K$. We now recall the explicit construction and Euclidean-like structure of these spaces. Fix an associative division algebra $k$ over the reals --- the real, complex, or quaternionic numbers --- and an integer $n\geq 1$. (It appears that one could also consider the exceptional case of octonions, but we will not do so here.) 
Recall that $k$ has a real part $\Re(k)$ isomorphic to $\mathbb{R}$ and a complementary imaginary part $\Im(k)$ satisfying $\dim_\mathbb{R}(\Im(k))=\dim_\mathbb{R}(k)-1$. We denote the standard norm of an element of $k$ or $k^n$ by $\Norm{\cdot}$, and refer to $\Norm{\cdot}$-preserving $k$-linear automorphisms of $k^n$ as \emph{unitary} transformations. \begin{remark}For $k=\mathbb{R}$, one has $\Im(k)=\{0\}$. Note that $\Im(k)$ remains a subset of $k$; in particular, we do not identify $\Im(k)$ with $\mathbb{R}$ when $k=\C$. We furthermore exclude nonholomorphic transformations such as $z\mapsto \overline z$ from the unitary group, purely for notational convenience (cf.~Remark \ref{rmk:QC}). \end{remark} \begin{defi}[Iwasawa Inversion Space] The \emph{Iwasawa inversion space} $\mathbb X=\mathbb X_k^n$ is the set $k^n\times \Im(k)$ with coordinates $(z,t)$ and group law $$ (z,t)*(z',t')=(z+z', t+t'+2\Im \langle z, z'\rangle), $$ where the inner product of the vectors $z, z'$ is given by $\langle z, z'\rangle=\sum_i \overline{z_i}z'_i$. \end{defi} Over the reals, $\mathbb X^n_\mathbb{R}$ reduces to $\mathbb{R}^n$ with $*$ acting by the usual vector addition. For $k\neq \mathbb{R}$, $\mathbb X^n_k$ is a step-2 nilpotent group (one uses $*$ to emphasize the non-commutativity), with identity $(0,0)$, and the inverse of a group element $(z,t)$ given by $(-z,-t)$. One gives $\mathbb X$ a gauge $\norm{\cdot}$ and Cygan metric $d$ (also known in different contexts as the Koranyi metric or gauge metric) by defining \begin{equation*}\norm{(z,t)}:=\Norm{\Norm{z}^2+t}^{1/2}, \hspace{.35in} d((z,t), (z',t')):=\norm{(-z,-t)*(z',t')}.
\end{equation*} The Cygan metric is largely analogous to the Euclidean metric, insofar as its automorphisms include analogs of translations (left multiplication by an element of $\mathbb X$ is an isometric isomorphism); dilations (for each $r>0$, the mapping $\delta_r(z,t)=(rz,r^2t)$ is a group isomorphism that rescales the metric by factor $r$); and rotations (unitary automorphisms of $k^n$ extend to isometric group isomorphisms of $\mathbb X$). On the other hand, the metric is fractal for $k\neq \mathbb{R}$: it is not a path metric (cf.~the closely associated Carnot-Caratheodory path metric) and gives $\mathbb X$ Hausdorff dimension $n\dim_\mathbb{R}(k)+2(\dim_\mathbb{R}(k)-1)$ which is not equal to its topological dimension $(n+1)\dim_\mathbb{R}(k)-1$. The latter is due to the fact that large metric balls are stretched by $\delta_r$ along the $t$ direction, while small ones are flattened out along the $z$ direction. The \emph{Koranyi} inversion $\iota_-: \mathbb X\setminus\{0\}\rightarrow \mathbb X\setminus\{0\}$ is defined by \(\iota_-(z,t)=\left(\frac{-z}{\Norm{z}^2+ t}, \frac{-t}{\Norm{z}^4+\Norm{t}^2}\right).\) The Koranyi inversion is a natural generalization of the mapping $x\mapsto -1/x$, and in particular satisfies the following pair of identities for $h,h'\in \mathbb X\setminus\{0\}$, \cite{COWLING19911}: \[ \label{eq:inversionidentitiesBasic} \norm{\iota_- h}=\frac{1}{\norm{h}}, \hspace{.5in} d(\iota_- h, \iota_- h')=\frac{d(h,h')}{\norm{h}\norm{h'}}. \] In particular, $\iota_-$ sends each sphere $S(0,r)$ to the sphere $S(0,1/r)$, and preserves the unit sphere. We prove the identities in a broader context in Theorem \ref{thm:InversionIdentities}. More generally, $\mathbb X$ admits inversions of the form \(\iota(z,t)=\left(\frac{-A(z)}{\Norm{z}^2+ t}, \frac{-(\det A) t}{\Norm{z}^4+\Norm{t}^2}\right),\) where $A$ is a unitary transformation of $k^n$. 
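For concreteness, the group law, gauge, Cygan metric, and Koranyi inversion on $\mathbb X^1_\C$ (the three-dimensional Heisenberg group) are easily implemented, and both identities can be tested numerically. The following Python sketch is purely illustrative (the helper names are ours):

```python
import math
import random

# Points of X^1_C (the 3-dimensional Heisenberg group) are pairs (z, t)
# with z complex and t purely imaginary.

def star(p, q):
    # group law: (z,t)*(z',t') = (z+z', t+t'+2 Im<z,z'>), with <z,z'> = conj(z) z'
    (z, t), (zp, tp) = p, q
    return (z + zp, t + tp + 2j * (z.conjugate() * zp).imag)

def gauge(p):
    # ||(z,t)|| = | |z|^2 + t |^(1/2)
    z, t = p
    return abs(abs(z) ** 2 + t) ** 0.5

def cygan(p, q):
    # d(p,q) = || (-z,-t) * q ||, where (-z,-t) is the group inverse of p
    z, t = p
    return gauge(star((-z, -t), q))

def iota_minus(p):
    # Koranyi inversion
    z, t = p
    return (-z / (abs(z) ** 2 + t), -t / (abs(z) ** 4 + abs(t) ** 2))

random.seed(0)
for _ in range(1000):
    h = (complex(random.uniform(0.5, 2), random.uniform(0.5, 2)),
         1j * random.uniform(0.5, 2))
    hp = (complex(random.uniform(0.5, 2), random.uniform(0.5, 2)),
          -1j * random.uniform(0.5, 2))
    # ||iota h|| = 1 / ||h||
    assert math.isclose(gauge(iota_minus(h)), 1 / gauge(h), rel_tol=1e-9)
    # d(iota h, iota h') = d(h, h') / (||h|| ||h'||)
    assert math.isclose(cygan(iota_minus(h), iota_minus(hp)),
                        cygan(h, hp) / (gauge(h) * gauge(hp)),
                        rel_tol=1e-6, abs_tol=1e-9)
```

Note that the metric really does fail to be a path metric: the asserts above hold to floating-point precision, while the straight-line Euclidean distance between two Heisenberg points generally differs from their Cygan distance.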
We show in Lemma \ref{lemma:inversion} that all inversions satisfy generalizations of the identities \eqref{eq:inversionidentitiesBasic}. \subsection{Upper Half-Space} \label{sec:hyp} Fix an Iwasawa inversion space $\mathbb X=\mathbb X^n_k$. We extend the structure and Cygan metric of $\mathbb X$ to $k^{n+1}$ as follows, motivated by Parker \cite{MR1146815}: \begin{defi}\label{def:extendcygan} Extend the Heisenberg group law to $k^n\times k=k^{n+1}$ as $$(z,w)*(z',w')=(z+z', w+w'+2\Im \langle z, z'\rangle),$$ and the gauge and metric as: \begin{equation*}\norm{(z,w)}=\Norm{\Norm{z}^2+\Norm{\Re(w)}+\Im(w)}^{1/2}, \hspace{.15in} d((z,w), (z',w')):=\norm{(-z,-w)*(z',w')}. \end{equation*} \end{defi} \begin{remark} In the case $k=\mathbb{R}$, the Heisenberg group law on $k^{n+1}$ reduces to $(z,w)*(z',w')=(z+z', w+w')$, and the gauge reduces to the Euclidean-like $\norm{(z,w)}=\left(\Norm{z}^2+\Norm{w}\right)^{1/2}$. One could adjust Definition \ref{def:extendcygan}, by taking a square root along the $\Re(w)$ direction, so that it agrees with the Euclidean metric in the real case. We will not do so. \end{remark} \begin{defi} The \emph{upper half-space} $\mathbb{H}^{n+1}_k\subset k^{n+1}$ is the set $$\mathbb{H}^{n+1}_k = \{ (z,w)\in k^n\times k \;:\; \Re(w)>0\},$$ satisfying $\partial \mathbb{H}=\mathbb X$. One gives $\mathbb{H}$ two natural metrics: the restriction of the Cygan metric $d$ on $k^{n+1}$ (this was introduced by Parker in \cite{MR1146815} for $\mathbb{H}^2_\C$ and generalized by Cao-Parker to $\mathbb{H}^2_{\mathbb O}$ in \cite{MR3764717}); and the negatively-curved hyperbolic metric $d_\mathbb{H}$, defined via an embedding into $\mathbb{P}(k^{n+2})$. Unless otherwise noted, $\mathbb{H}$ will always be equipped with the metric $d_\mathbb{H}$.
\end{defi} \begin{defi}[Projective Embedding] \label{defi:ProjectiveEmbedding}Let $\phi: k^{n+1}\rightarrow k^{n+2}$ be given by $\phi(z,w)=(1,\sqrt{2}z, w+\Norm{z}^2)$, and set $\Phi=\mathbb{P}\circ \phi: k^{n+1}\rightarrow \mathbb{P}(k^{n+2})$. \end{defi} Consider the Hermitian form $\langle \cdot, \cdot \rangle_J$ of signature $(n+1,1)$ defined on $k^{n+2}$ by $$ J=\begin{bmatrix} 0 & 0_n & -1\\ 0_n & \operatorname{id}_n & 0_n\\ -1 & 0_n & 0 \end{bmatrix}, $$ and let $\mathcal S=\{(1:a:b)\;:\; \Norm{a}^2<2\Re(b)\} \subset \mathbb P(k^{n+2})$ be the Siegel region. One can show that $\Phi$ induces a bijection between $\mathbb{H}$ and $\mathcal S$, and furthermore $\mathcal S$ is the projectivization of the negative cone of $J$. This induces an action of the projective unitary group $G=\mathbb{P}U(J)$ on $\mathbb{H}$, cf.~\S \ref{sec:convergence}. \begin{defi}(Hyperbolic metric) The hyperbolic metric $d_\mathbb{H}$ on $\mathbb{H}$ is the unique $G$-invariant Riemannian metric on $\mathbb{H}$ with sectional curvature pinched in the range $[-1, -1/4]$ if $k\neq \mathbb{R}$ or equal to $-1$ if $k=\mathbb{R}$. \end{defi} For $\mathbb{H}=\mathbb{H}^2_\mathbb{R}$, $d_\mathbb{H}$ agrees with the familiar metric $\frac{1}{y}ds$ if one takes $x=z$ and $y=w^2$. One has $\Phi(\mathbb{H})=\{(1:a:b) \;:\; 2b>a^2\}\subset \mathbb{RP}^2$, and a projective change of coordinates recovers the Klein disk model of $\mathbb{H}^2_\mathbb{R}$ with its $SO(2,1)$-invariant metric. In general, the Siegel region is projectively equivalent to a unit ball in projective space $\mathbb P(k^{n+2})$. The mapping $\Phi\vert_\mathbb X:\partial \mathbb{H} \rightarrow \partial \Phi(\mathbb{H})$ omits a single point, which we identify with the point $\infty$ in the one-point compactification of $k^{n+1}$ (and its subsets $\mathbb X$ and $\overline{\mathbb{H}}$).
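As an informal numerical check on these definitions, one can verify for $k=\C$ and $n=1$ that $\phi$ carries points of $\mathbb{H}$ into the negative cone of $J$, i.e., that $\Norm{a}^2<2\Re(b)$ when $\phi(z,w)=(1,a,b)$ with $\Re(w)>0$, and that the extended gauge satisfies $\norm{h}=\Norm{b}^{1/2}$ (Lemma \ref{lemma:thirdcoordinate} below). The following Python sketch is illustrative only:

```python
import math
import random

# For k = C, n = 1: phi(z,w) = (1, sqrt(2) z, w + |z|^2), and the extended gauge is
# ||(z,w)|| = | |z|^2 + |Re(w)| + i Im(w) |^(1/2).

def phi(z, w):
    return (1, math.sqrt(2) * z, w + abs(z) ** 2)

def ext_gauge(z, w):
    return abs(abs(z) ** 2 + abs(w.real) + 1j * w.imag) ** 0.5

random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    w = complex(random.uniform(0.01, 2), random.uniform(-2, 2))  # Re(w) > 0: (z,w) in H
    _, a, b = phi(z, w)
    assert abs(a) ** 2 < 2 * b.real                 # phi(z,w) lies in the negative cone of J
    assert math.isclose(ext_gauge(z, w), abs(b) ** 0.5, rel_tol=1e-9)
```

Indeed, $\Norm{a}^2=2\Norm{z}^2<2(\Re(w)+\Norm{z}^2)=2\Re(b)$ exactly when $\Re(w)>0$, so the first assertion is simply the bijectivity of $\Phi$ onto $\mathcal S$ made concrete.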
\subsection{Inversion Theorem} Returning to the Cygan metric, we record two connections to the projective embedding: \begin{lemma}[Parker \cite{MR1146815}] \label{lemma:Jform} Suppose $p,q\in \overline{\mathbb{H}}$, with either $p$ or $q$ in $\mathbb X=\partial \mathbb{H}$. Then the Cygan metric satisfies $d(p,q)=\Norm{\langle \phi(p),\phi(q)\rangle_J}^{1/2}$. \end{lemma} \begin{lemma}\label{lemma:thirdcoordinate} Let $h\in \overline{\mathbb{H}}$ and denote $\phi(h)=(1,a,b)$. Then $\norm{h}=\Norm{b}^{1/2}$. \begin{proof} This is immediate from Definitions \ref{def:extendcygan} and \ref{defi:ProjectiveEmbedding}. \end{proof} \end{lemma} With the above machinery, we can provide a simple description of the Koranyi inversion, extended to $\overline \mathbb{H}$, and prove the inversion identities \eqref{eq:inversionidentitiesBasic}. \begin{lemma} \label{lemma:inversion} The Koranyi inversion $\iota_-: \overline{\mathbb{H}}\setminus\{0\}\rightarrow \overline{\mathbb{H}}\setminus\{0\}$ given by the mapping $$(z,w)\mapsto \left(\frac{-z}{\Norm{z}^2+w}, \frac{\overline{w}}{\Norm{\Norm{z}^2+w}^2}\right)$$ is induced by the matrix $J\in G$. That is, setting $\phi(z,w)=(1,a,b)$, one has $\phi(\iota_-(z,w))=(1,-a/b,1/b)=\frac{J\phi(z,w)}{-b}$, and in $\mathbb{P}(k^{n+2})$ one has $\Phi(\iota_-(z,w))=J\Phi(z,w)$. \begin{proof} We have $\phi(z,w)=(1,\sqrt{2}z, \Norm{z}^2+w),$ so that $J\phi(z,w)=( -(\Norm{z}^2+w),\sqrt{2}z,-1)$. Up to a factor of ${-(\Norm{z}^2+w)}$, this is equivalent to \begin{align*} \left(1,\sqrt{2}\frac{-z}{\Norm{z}^2+w},\frac{1}{\Norm{z}^2+w}\right) &=\left(1,\sqrt{2}\frac{-z}{\Norm{z}^2+w},\Norm{\frac{-z}{\Norm{z}^2+w}}^2+\frac{\overline{w}}{\Norm{\Norm{z}^2+w}^2}\right), \end{align*} which in turn is equal to $\phi(\iota_-(z,w))$ as desired. \end{proof} \end{lemma} \begin{thm}[Inversion Theorem] \label{thm:InversionIdentities}Let $h\in (\mathbb{H}\cup \mathbb X)\setminus\{0\}$ and $h'\in \mathbb X\setminus\{0\}$.
The following identities hold for the Koranyi inversion $\iota_-$, Cygan metric $d$, and gauge $\norm{\cdot}$: \[\label{eq:inversionidentitiesAdvanced}\norm{\iota_- h}=\frac{1}{\norm{h}} \hspace{.25in}\text{and} \hspace{.25in} d(\iota_- h, \iota_- h')=\frac{d(h,h')}{\norm{h}\norm{h'}}. \] \begin{proof} Write $\phi(h)=(1,a,b)$ and $\phi(h')=(1,a',b')$. By Lemma \ref{lemma:inversion}, $\phi(\iota_-(h))=(1,-a/b,1/b)$, and the first identity thus follows from Lemma \ref{lemma:thirdcoordinate}. Since $h'\in \mathbb X$, Lemma \ref{lemma:Jform} gives $d(h,h')=\Norm{\left\langle \phi(h),\phi(h')\right\rangle_J}^{1/2}$ and $d(\iota_- h, \iota_- h')=\Norm{\left\langle \phi(\iota_- h),\phi(\iota_- h')\right\rangle_J}^{1/2}$. Using Lemmas \ref{lemma:inversion} and \ref{lemma:thirdcoordinate}, together with the fact that $J$ preserves the form $\langle\cdot,\cdot\rangle_J$, we obtain: \begin{align*} d(\iota_- h, \iota_- h')= \Norm{\left\langle\frac{\phi(h)}{-b},\frac{\phi(h')}{-b'}\right\rangle_J}^{1/2}= \frac{ d(h,h')} {\Norm{b}^{1/2} \Norm{b'}^{1/2}} = \frac{d(h,h')}{\norm{h}\norm{h'}}, \end{align*} providing the second identity. \end{proof} \end{thm} \begin{remark} Surprisingly, Lemma \ref{lemma:Jform} and the second identity of Theorem \ref{thm:InversionIdentities} fail when both $h$ and $h'$ lie in $\mathbb{H}$. \end{remark} Compositions of diagonal elements of $G$ (as well as certain conjugation actions) with the Koranyi inversion continue to satisfy the conclusions of Theorem \ref{thm:InversionIdentities}. We define: \begin{defi} \label{defi:inversion} An \emph{inversion} is a (1-quasi-)conformal mapping $\iota: \mathbb X\setminus\{0\}\rightarrow \mathbb X\setminus\{0\}$ satisfying the conclusions of Theorem \ref{thm:InversionIdentities}. (Recall that conformal mappings of $\mathbb X$ extend to isometries of $\mathbb{H}$ with respect to the metric $d_\mathbb{H}$.) \end{defi} \begin{remark}\label{rmk:QC} Here, (1-quasi-)conformal mappings are defined with respect to the Cygan metric on $\mathbb X$, and need not preserve a Riemannian conformal gauge.
Furthermore, following the restriction to \emph{linear} automorphisms of the field $k$ in \S \ref{sec:space}, we restrict our attention to those conformal mappings that have a linear representation in $\mathbb PU(J)$, or, equivalently, whose Pansu derivative at every point is linear. \end{remark} It follows from the classification of isometries of $\mathbb{H}$ that every inversion factors as a composition of a rotation and the Koranyi inversion. \begin{lemma} \label{lemma:inversionform} If $\iota$ is an inversion, then there exists a unitary mapping $f: k^n\rightarrow k^n$ such that $\iota=f\circ \iota_-$. \begin{proof} Since $\iota$ is conformal, it extends to an isometry of $\mathbb{H}$. The mapping $f=\iota \iota_-$ is an isometry of $\mathbb{H}$ that fixes the points $0$ and $\infty$. It therefore maps the geodesic $\gamma$ joining $0$ and $\infty$ to itself. Since $\iota_-$ and $\iota$ fix the point $(0,1)\in \gamma$ by the first part of \eqref{eq:inversionidentitiesAdvanced}, the same must be true for $f$. Thus, $\iota$ is represented in $U(J)$ by a matrix of the form \[\label{eq:inversionmatrix} \begin{bmatrix} 0 & 0_n & -1\\ 0_n & A & 0_n\\ -1 & 0_n & 0 \end{bmatrix}, \] where $A$ is a unitary matrix over $k^n$. \end{proof} \end{lemma} In addition to the (negative) Koranyi inversion $\iota_-$, we will also be interested in the positive inversion $\iota_+$ corresponding to the matrix $A=-I_n$ in \eqref{eq:inversionmatrix}, and the conjugation inversion $\iota_c$ corresponding to the diagonal matrix $A$ with diagonal entries $(-1, 1, 1, \ldots, 1)$. For example, for $p=(x,y,z)\in \mathbb{R}^3$, one has $\iota_-(p)=-p/\Norm{p}^2$, $\iota_+(p)=p/\Norm{p}^2$, and $\iota_c(p)=(x,-y,-z)/\Norm{p}^2$. Note that under the standard identification of $\C$ with $\mathbb{R}^2$, the mapping $z\mapsto 1/z$ corresponds to the inversion $\iota_c$. 
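The claims of the preceding paragraph are easy to test numerically: the following illustrative Python sketch (helper names ours) checks that $z\mapsto 1/z$ agrees with $\iota_c$ under $\C\cong\mathbb{R}^2$, and that each of $\iota_-$, $\iota_+$, $\iota_c$ on $\mathbb{R}^3$ satisfies the inversion identities of Theorem \ref{thm:InversionIdentities}, the gauge and Cygan metric reducing to the Euclidean norm and distance on $\mathbb X^n_\mathbb{R}$:

```python
import math
import random

# The three model inversions on R^3 = X^3_R, where the gauge is the Euclidean norm.
def iota_minus(p):
    n2 = sum(x * x for x in p)
    return tuple(-x / n2 for x in p)

def iota_plus(p):
    n2 = sum(x * x for x in p)
    return tuple(x / n2 for x in p)

def iota_c(p):
    # (x, y, z) -> (x, -y, -z) / |p|^2
    n2 = sum(x * x for x in p)
    return (p[0] / n2,) + tuple(-x / n2 for x in p[1:])

def norm(p):
    return math.hypot(*p)

def dist(p, q):
    return math.dist(p, q)

random.seed(0)
# z -> 1/z on C matches iota_c on R^2: both send (x, y) to (x, -y)/(x^2 + y^2)
for _ in range(200):
    z = complex(random.uniform(0.2, 2), random.uniform(0.2, 2))
    w = 1 / z
    x, y = iota_c((z.real, z.imag))
    assert math.isclose(w.real, x, rel_tol=1e-9)
    assert math.isclose(w.imag, y, rel_tol=1e-9)

# each inversion rescales the metric: d(iota p, iota q) = d(p, q)/(|p| |q|)
for iota in (iota_minus, iota_plus, iota_c):
    for _ in range(200):
        p = tuple(random.uniform(0.2, 2) for _ in range(3))
        q = tuple(random.uniform(0.2, 2) for _ in range(3))
        assert math.isclose(dist(iota(p), iota(q)),
                            dist(p, q) / (norm(p) * norm(q)),
                            rel_tol=1e-6, abs_tol=1e-9)
```

In the Euclidean case the second identity is the classical inversive distance formula $\left|\frac{p}{\Norm{p}^2}-\frac{q}{\Norm{q}^2}\right|=\frac{|p-q|}{\Norm{p}\Norm{q}}$, and pre- or postcomposing with an orthogonal map leaves both sides unchanged.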
\subsection{Isometries, Lattices, and Fundamental Domains} We thus have an Iwasawa inversion space $\mathbb X$ and associated hyperbolic space $\mathbb{H}$, with the unitary group $G$ acting on $\mathbb{H}$ by isometries with respect to the Riemannian metric $d_\mathbb{H}$, and by an analog of M\"obius transformations on $\mathbb X\cup\{\infty\}$. One shows that $G$ is in fact the holomorphic isometry group of $\mathbb{H}$, and the group of (1-quasi-)conformal mappings of $\mathbb X\cup\{\infty\}$. Restricting $G$ to the set of transformations $\text{Stab}_G(\infty)$ preserving infinity provides an action on $\mathbb X$ that can be identified with the group of similarities of $\mathbb X$. This allows us to think of $\text{Isom}(\mathbb X)$ as a subgroup of $\text{Isom}(\mathbb{H})$. The group $G$ is, in fact, a rank-one simple Lie group, with an Iwasawa decomposition $G=KAN$. One can identify the subgroup $N$ with the space $\mathbb X$ (with the group structure provided above), and the subgroup $A$ with the group of dilations $\{\delta_r\;:\; r>0\}$. The subgroup $K$ can be identified with the stabilizer of the point $(0,1)\in \mathbb{H}$, and includes the Koranyi inversion. We will be interested in lattices and fundamental domains in $\text{Isom}(\mathbb X)$ and $\text{Isom}(\mathbb{H})$, equipped with the respective Haar measures. \begin{defi}\label{defi:fundamentaldomain} Let $Y$ be a metric space with an $\text{Isom}(Y)$-invariant measure. A \emph{lattice} is a discrete subgroup $\Gamma\subset \text{Isom}(Y)$ such that the quotient $\Gamma \backslash \text{Isom}(Y)$ has finite measure. The lattice is \emph{uniform} if $\Gamma \backslash \text{Isom}(Y)$ is furthermore compact, and non-uniform otherwise. A \emph{fundamental domain} for $\Gamma$ is a measurable set $K\subset Y$ such that $Y=\bigcup_{a\in \Gamma} aK$ and the overlap $K\cap \bigcup_{a(\neq \operatorname{id})\in \Gamma} aK$ has measure $0$.
A \emph{nearest-integer mapping} $\floor{\cdot}:Y\rightarrow \Gamma$ associated to $\Gamma$ and $K$ is defined, almost everywhere, by the property that for each $a\in \Gamma$ and $x\in K$, one has $\floor{a(x)}=a$. This property defines $\floor{\cdot}$ uniquely away from the overlap, and we assume that some choice of admissible values has been made for points in the overlap. \end{defi} \section{Iwasawa Continued Fractions}\label{sec:IwasawaCF} We can now define Iwasawa continued fractions and establish some auxiliary terminology and notation. \begin{defi}(Iwasawa Continued Fraction) The Iwasawa Continued Fraction Algorithm is defined by the following data: \begin{enumerate} \item An associative division algebra $k$ over $\mathbb{R}$ and integer $n\geq 1$, \item The associated Iwasawa inversion space $\mathbb X=\mathbb X^n_k$, \item An inversion $\iota$ (see Definition \ref{defi:inversion}), \item A lattice $\mathcal Z \subset \text{Isom}(\mathbb X)$, a fundamental domain $K\subset \mathbb X$ for $\mathcal Z$, and an associated nearest-integer mapping $\floor{\cdot}: \mathbb X\rightarrow \mathcal Z$ (see Definition \ref{defi:fundamentaldomain}). \end{enumerate} Associated to an Iwasawa CF algorithm, we have: \begin{enumerate} \setcounter{enumi}{4} \item The hyperbolic space $\mathbb{H}=\mathbb{H}^{n+1}_k$ satisfying $\partial \mathbb{H}=\mathbb X$, \item The holomorphic isometry group $G$ of $\mathbb{H}$, \item The modular group $\mathcal M=\langle \iota, \mathcal Z\rangle \subset G$.
\item The Gauss map $T:K\rightarrow K$ defined by $T(0)=0$ if $0\in K$ and otherwise by $$T(x)=\floor{\iota(x)}^{-1}(\iota(x)).$$ \end{enumerate} \end{defi} For a point $x\in \mathbb X$, we can then inductively define the continued fraction digits $a_i\in \mathcal Z$ and forward iterates $x_i\in K$ by taking \begin{align*} &a_0 = \floor{x}, &&x_0=a_0^{-1}(x),\\ &a_{i+1} = \floor{\iota(x_i)}, &&x_{i+1}=a_{i+1}^{-1}(\iota(x_i))=T(x_i), \end{align*} where the sequences terminate if at some point $x_i=0$. The (possibly finite) sequence $\{a_i\}$ of elements of $\mathcal Z$ is the \emph{continued fraction sequence} of $x$. (Note that later in the paper, we will assign a bi-infinite string of digits to pairs of points one of which is in $K$, resulting in a different notion of $a_0$. For this reason, for points in $K$ we will leave $a_0$ undefined.) Given a sequence $\{a_i\}$ of elements of $\mathcal Z$ (possibly arising from the above algorithm), one defines the \emph{convergent mappings} $M_i\in \mathcal M$ inductively by setting $M_0$ to be the identity mapping and $M_{n+1}= M_n\circ \iota^{-1} \circ a_{n+1}$. (In the sequel, we will often suppress the $\circ$ notation for convenience.) By construction, we see that $x_0=M_n (x_n)$. For each $i$, the $i^{th}$ \emph{convergent} of the continued fraction is then the point $M_i(0)$. Note that $T^i x_0=x_i=M_i^{-1}(x_0)$. We will be interested in conditions on the continued fraction algorithm that guarantee the following properties: \begin{defi} The continued fraction algorithm is \emph{convergent} if the continued fraction digits of almost every point $x\in K$ produce convergents $M_i(0)$ that indeed converge to $x$ (clearly, every finite expansion is convergent). The algorithm is \emph{ergodic} if the Gauss map $T$ is ergodic. 
\end{defi} We will use the following definition of ergodicity: \begin{defi} Let $(A,\mu)$ be a measure space and $f:A\rightarrow A$ a measurable (but not necessarily measure-preserving) transformation. Then, $f$ is said to be ergodic with respect to $\mu$ if for every measurable $B\subset A$, $\mu(f^{-1}B\triangle B)=0$ implies that $\mu(B)=0$ or $\mu(A\setminus B)=0$. If $\phi:A\rightarrow A$ is a measurable flow, then $\phi$ is ergodic with respect to $\mu$ if for every measurable $B\subset A$, $\mu(\phi_t(B)\triangle B)=0$ for all $t\in\mathbb{R}$ implies that $\mu(B)=0$ or $\mu(A\setminus B)=0$. \end{defi} \begin{remark} Note that with this definition, ergodicity with respect to a measure $\mu$ implies ergodicity with respect to any measure that is equivalent to $\mu$. In this paper, the relevant measure (or class of equivalent measures) will always be clear from context, and will often be a Lebesgue or Haar measure. \end{remark} We will prove the convergence of the Iwasawa CFs under the assumptions of \emph{properness} and \emph{discreteness}: \begin{defi}[Properness and Discreteness] The Iwasawa continued fraction is \emph{proper} if the closure of $K$ is bounded away from the unit sphere: $\operatorname{rad}(K)=\sup\{\norm{x}\;:\; x\in K\}<1$. It is \emph{discrete} if $\mathcal M$ is a lattice in $G$. \end{defi} There do exist convergent Iwasawa continued fractions that are not proper, most notably regular continued fractions on $\mathbb{R}$ and J.~Hurwitz continued fractions on $\C$. Likewise, one can construct proper but non-discrete Iwasawa continued fractions: for example, let $\mathbb X=\mathbb{R}$, $\mathcal Z=\epsilon \mathbb{Z}$, and $K=(-\epsilon/2,\epsilon/2]$. The resulting continued fraction is generally not discrete, but will be convergent by the \'Sleszy\'nski-Pringsheim Theorem \cite{sleshinskiy} for $\epsilon<1/2$. 
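To see the digit and convergent machinery in the simplest setting, consider the nearest-integer variant on $\mathbb X^1_\mathbb{R}$ with $K=[-1/2,1/2)$ and $\iota_+(x)=1/x$ (so that $\iota_+^{-1}=\iota_+$ and the digits act by translation). The following Python sketch, purely illustrative, computes digits via the Gauss map and evaluates the convergents $M_n(0)=1/(a_1+1/(a_2+\cdots+1/a_n))$:

```python
import math

def nearest_int(y):
    # the unique integer a with y - a in K = [-1/2, 1/2)
    return math.floor(y + 0.5)

def digits(x, n):
    # first n CF digits a_1, a_2, ... of x in K, via T(x) = iota_+(x) - floor(iota_+(x))
    out = []
    for _ in range(n):
        if x == 0:
            break
        y = 1 / x            # the inversion iota_+
        a = nearest_int(y)
        out.append(a)
        x = y - a            # the Gauss map T
    return out

def convergent(ds):
    # M_n(0) = 1/(a_1 + 1/(a_2 + ... + 1/a_n)), evaluated from the inside out
    v = 0.0
    for a in reversed(ds):
        v = 1 / (a + v)
    return v

x = math.sqrt(2) - 1          # lies in K; its nearest-integer digits are 2, 2, 2, ...
ds = digits(x, 16)
assert ds[:10] == [2] * 10
assert abs(convergent(ds) - x) < 1e-9
```

Since $1/(\sqrt2-1)=\sqrt2+1$ has nearest integer $2$ and remainder $\sqrt2-1$ again, the expansion is periodic with every digit equal to $2$, and the convergents approach $\sqrt2-1$ exponentially fast; note also that every digit produced by this variant has absolute value at least $2$, so no division by zero can occur in the evaluation.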
To prove ergodicity, we will need a further assumption of \emph{completeness}, which rules out hidden symmetries: \begin{defi}[Completeness] The Iwasawa continued fraction is \emph{complete} if one has $\text{Stab}_\mathcal M(\infty)=\mathcal Z$. For an incomplete continued fraction, one may pass to the \emph{completion} by replacing $\mathcal Z$ with the lattice $\text{Stab}_\mathcal M(\infty)$ and making a corresponding modification to the fundamental domain $K$ and floor function $\floor{\cdot}$. This will result in what are often termed ``folded'' variants. \end{defi} \begin{defi} \label{defi:centrallySymmetric} The Iwasawa continued fraction is \emph{incomplete with $n$ central symmetries} if there exists a set $\mathcal{R}\subset\text{Isom}(\mathbb X)$ such that \begin{enumerate} \item Every element of $\mathcal{R}$ fixes $0$, i.e., is a rotation around the origin, \item The only element of $\mathcal Z$ to fix $0$ is the identity, \item $\text{Stab}_\mathcal M(\infty)=\langle \mathcal Z,\mathcal{R}\rangle$, \item Every element of $\text{Stab}_\mathcal M(\infty)$ can be written uniquely as $ra$ for some $r\in \mathcal{R}$, $a\in \mathcal Z$, and uniquely as $a'r'$ for some $a'\in \mathcal Z$, $r'\in \mathcal{R}$, and \item $\mathcal R$ contains $n$ elements. \end{enumerate} The set $\mathcal{R}$ is said to be the set of central symmetries of $\mathcal M$. We say that the fundamental domain $K$ for $\mathcal Z$ is symmetric if for any $r\in \mathcal{R}$, $rK$ is $K$ up to a set of measure zero. \end{defi} \subsection{Further Examples}\label{sec:examples}~ With all of our notation now in place, we may describe many examples of Iwasawa continued fractions.
In Table \ref{tab:examples}, we list several types of continued fractions, and for each of them denote the Iwasawa inversion space $\mathbb X$ on which it exists; the lattice $\mathcal Z$, which will often act by left-translation by a subset of $\mathbb X$; the fundamental domain $K$; the inversion, which in all cases will be identified by a $\iota$ signature; whether it is complete and proper (the columns C and P respectively); and some basic references. \begin{table} \caption{Examples of Iwasawa continued fractions. The examples in $\mathbb X^2_\mathbb{R}$ are usually presented as complex CFs.}\label{tab:examples} \begin{tabular}{| p{2.3cm} | c | p{2cm} | p{2.2cm} | c | c | c | p{1.5cm} | } \hline Name:& $\mathbb X$ & $\mathcal Z$ & $K$ & $\iota$ & C&P & Ref \\ \hline \hline Regular & $\mathbb X_{\mathbb{R}}^1$ & $\mathbb{Z}$ & $[0,1)$ & $\iota_+$ & N&N & \cite{Series1} \\ \hline Backwards & $\mathbb X_{\mathbb{R}}^1$ & $\mathbb{Z}$ & $[0,1)$ & $\iota_-$ & Y&N & \cite{AF84} \\ \hline Nearest \mbox{Integer} & $\mathbb X_{\mathbb{R}}^1$ & $\mathbb{Z}$ & $\left[-\frac{1}{2},\frac{1}{2}\right)$ & $\iota_+$ & N&Y & \cite{IT} \\ \hline Nearest \mbox{Integer} \mbox{(variant)} & $\mathbb X_{\mathbb{R}}^1$ & $\mathbb{Z}$ & $\left[-\frac{1}{2},\frac{1}{2}\right)$ & $\iota_-$ & Y&Y & \cite{Hurwitz} \\ \hline Folded Nearest Integer & $\mathbb X_{\mathbb{R}}^1$ & $\langle\mathbb{Z},x\mapsto-x\rangle$ & $\left[0,\frac{1}{2}\right]$ & $\iota_+$ & Y&Y & \cite{MMY} \\ \hline Nakada $\alpha$, $\alpha\in(0,1)$ & $\mathbb X_{\mathbb{R}}^1$ & $\mathbb{Z}$ & $[\alpha-1,\alpha]$ & $\iota_+$ & N&Y & cf.~\cite{AS,NakadaAlpha} \\ \hline Even & $\mathbb X_{\mathbb{R}}^1$ & $2\mathbb{Z}$ & $[-1,1)$ & $\iota_-$ & Y & N & cf.~\cite{BL,KU2007} \\ \hline Rosen for $q\in \mathbb{N}_{\ge 3}$ & $\mathbb X_{\mathbb{R}}^1$ & $\lambda \mathbb{Z}, \ \lambda=2\cos\frac{\pi}{q}$ & $\left[-\frac{\lambda}{2},\frac{\lambda}{2}\right)$ & $\iota_-$ & Y&Y & \cite{MS} \\ \hline $\alpha$-Rosen for $q\in 
\mathbb{N}_{\ge 3}$& $\mathbb X_{\mathbb{R}}^1$ & $\lambda \mathbb{Z}, \ \lambda=2\cos\frac{\pi}{q}$ & $\left[\lambda(\alpha-1),\lambda\alpha\right)$, $\alpha\in[1/2,1/\lambda)$ & $\iota_-$ & Y&Y & cf.~\cite{DKS} \\ \hline \hline Hurwitz& $\mathbb X_{\mathbb{R}}^2$ & $\mathbb{Z}^2$ & $\left[-\frac{1}{2},\frac{1}{2}\right)^2$ & $\iota_c$ & N&Y & \cite{cijsouw,Hensley} \\ \hline Folded \mbox{Hurwitz} & $\mathbb X_{\mathbb{R}}^2$ & $\langle\mathbb{Z}^2,(x,y)\mapsto(-x,-y)\rangle$ & $\begin{aligned}&\left[-\tfrac{1}{2},\tfrac{1}{2}\right)\\ &\quad \times \left[-\tfrac{1}{2},0\right]\end{aligned}$ & $\iota_c$ & Y&Y & cf.~\cite{Pollicott}\\ \hline Hurwitz Hexagonal & $\mathbb X_{\mathbb{R}}^2$ & $\mathbb{Z}[\rho]$, with $\rho=\frac{1+\sqrt{-3}}{2}$ & Dirichlet region & $\iota_c$ & N&Y & \cite{MR1554754} \\ \hline J.~Hurwitz or Tanaka& $\mathbb X_{\mathbb{R}}^2$ & $\{(a,b)\in\mathbb{Z}^2: a+b\text{ even}\}$ & Dirichlet region & $\iota_c$ & Y&N & \cite{cijsouw,MR800085} \\ \hline Shallit & $\mathbb X_{\mathbb{R}}^2$ & $\mathbb{Z}^2$ & * & $\iota_c$ & N&N & \cite{cijsouw} \\ \hline SKT &$\mathbb X_{\mathbb{R}}^2$ & $\mathbb{Z}[\rho]$, with $\rho=\frac{1+\sqrt{-3}}{2}$ & $\begin{aligned}&\left[0,1\right)\rho \\&\quad \times \left[0,1\right)\overline{\rho}\end{aligned}$ & $\iota_c$ & N&N & \cite{MR0429773} \\ \hline Bianchi, $d=1,2,3,7,11$ & $\mathbb X_{\mathbb{R}}^2$ & $\mathcal{O}_d$, ring of integers & Dirichlet region & $\iota_c$ & N&Y & \cite{Dani,Hines} \\ \hline \hline 3d & $\mathbb X_{\mathbb{R}}^3$ & $\mathbb{Z}^3$ & $\left[-\tfrac{1}{2},\tfrac{1}{2}\right)^3$ & $\iota_+$ & N&Y & new, see Ex. 
\ref{ex:3d} \\ \hline \hline Quaternionic & $\mathbb X_{\mathbb{R}}^4$ & $\mathbb{Z}^4$ & $\left[-\tfrac{1}{2},\tfrac{1}{2}\right)^4$ & $\iota_c$ & N&N & \cite{Hamilton1,Hamilton2}\\ \hline Hurwitz Quaternionic & $\mathbb X_{\mathbb{R}}^4$ & Hurwitz \mbox{integers} & Dirichlet region & $\iota_c$ & N&Y & \cite{Mennen}\\ \hline \hline Octonionic & $\mathbb X_{\mathbb{R}}^8$ & Cayley \mbox{integers} & Dirichlet region & $\iota_c$ & N&Y & new\\ \hline \hline Heisenberg & $\mathbb X_{\C}^1$ & $\mathbb{Z}^3$ & $\left[-\frac{1}{2},\frac{1}{2}\right)^3$ & $\iota_-$ & N&Y & \cite{LV}\\ \hline Folded \mbox{Heisenberg} & $\mathbb X_{\C}^1$ & $\langle\mathbb{Z}^3,(z,t)\mapsto(\mathbbm{i} z,t)\rangle$ & $\begin{aligned}&\left[-\tfrac{1}{2},0\right]^2\\ &\quad \times \left[-\tfrac{1}{2},\tfrac{1}{2}\right)\end{aligned}$ & $\iota_-$ & Y&Y & new\\ \hline Heisenberg Hexagonal & $\mathbb X_{\C}^1$ & $\mathbb{Z}[\rho]\times \sqrt{3}\mathbb{Z}$ & See Ex. \ref{ex:HeisHex} & $\iota_-$ & N&Y & new \\ \hline \hline Heisenberg Quaternionic & $\mathbb X_{\mathcal{H}}^1$ & $(\mathbb{Z}^4\cup(\mathbb{Z}+1/2)^4)\times \mathbb{Z}^3$& Dirichlet region & $\iota_-$ & N&N & new, see Ex.\ \ref{ex:hurwitzquat}\\ \hline \end{tabular} \end{table} It should be noted that all cases under consideration are discrete. In some cases where the fundamental domain is too complicated to write succinctly, we have labeled it with the Dirichlet region. In this case, we mean the set of points that are closer to $0$ than to any translate of $0$ under $\mathcal Z$, with some choice of boundary. Note as well that the fundamental domain $K$ for the Shallit complex CF algorithm is a rectangle with corners at $.5-.5\mathbbm{i}$, $1$, $\mathbbm{i}$, and $-.5+.5\mathbbm{i}$. \begin{remark} The question of which inversion to use is a non-trivial one. 
The regular and backwards continued fraction expansions are the same, except that regular CFs use the inversion $x\mapsto 1/x$ and backwards CFs use the inversion $x\mapsto -1/x$. In many cases, the same name is given to CF expansions that differ only in the choice of inversion: variants of the nearest-integer CF exist with $x\mapsto 1/x$, $x\mapsto -1/x$, and $x\mapsto|1/x|$. (See \cite{AS2014} for a related discussion for Rosen CFs.) The nearest-integer CF defined using $x\mapsto|1/x|$, one of the more commonly studied CF algorithms in recent years, is largely equivalent to the folded nearest-integer CF. \end{remark} The complex continued fractions, quaternionic continued fractions, and octonionic continued fractions are embedded in higher-dimensional real spaces in the standard way, $\C\cong \mathbb{R}^2$, $\mathcal{H}\cong \mathbb{R}^4$, and $\mathbb{O}\cong \mathbb{R}^8$. The inversion $\iota_c$ listed in all these cases is equivalent to $z\mapsto 1/z$ on $\C$, $\mathcal{H}$, or $\mathbb{O}$. One reason for identifying these spaces is that the existence of maximal orders, the Gaussian and Eisenstein integers in $\mathbb{C}$, the Hurwitz integers in $\mathcal{H}$, and the Cayley integers in $\mathbb{O}$, gives rise to lattices on $\mathbb{R}^2$, $\mathbb{R}^4$, and $\mathbb{R}^8$ that in turn generate \emph{proper} fundamental domains $K$. The Hurwitz integers in $\mathcal{H}$ are given by \[\label{eq:Hurwitzintegers}\{a+b\mathbbm{i}+c\mathbbm{j}+d\mathbbm{k}:a,b,c,d\in\mathbb{Z} \text{ or }a,b,c,d\in \mathbb{Z}+1/2\}. \] The Cayley integers in $\mathbb{O}$ are defined in Chapter 9 of \cite{CSbook} (where they are referred to by the less common name of octavian integers), with properness of the corresponding Dirichlet region following from Lemma 6 of that chapter. We should emphasize that Table \ref{tab:examples} does not cover all well-studied CF algorithms.
For example, odd CFs \cite{BM}, CFs related to triangle groups \cite{CS}, CFs related to the Jacobi-Perron algorithm or other subtraction algorithms \cite{SchweigerBook}, and general $(a,b)$-continued fractions \cite{KU2012} do not fit into our framework. The $N$-continued fractions \cite{DKvdW} and $u$-backwards continued fractions \cite{GH} use an $\iota$ which is not an inversion by our definition; however, our proofs could be modified to compensate. Regardless, they would still not be proper. \begin{remark} We are not the first to encounter problems with the incompleteness of the Hurwitz CF algorithm. Pollicott \cite{Pollicott} studied a similar folded continued fraction, albeit using conjugation in place of negation. Nakada \cite{Nakada} studied the full Hurwitz CF, but took as his hyperbolic space the disjoint union of two different spaces and let negation additionally act by swapping between the two. \end{remark} \subsection{Discreteness and Properness} The difficulty of pushing into ever higher dimensions (either by taking $k\neq \mathbb{R}$ or $n\geq 2$) is in finding an appropriate lattice $\mathcal Z$ and fundamental domain $K$ such that the resulting continued fraction is both discrete and proper. The following proposition gives a useful framework in which to prove discreteness: \begin{prop}\label{prop:easydiscrete} Fix an Iwasawa inversion space $\mathbb X=\mathbb X^n_k$, an inversion $\iota$ that is either $\iota_+$, $\iota_-$, or $\iota_c$, and a discrete subring $R\subset k$ such that $2\in R$. Consider the subgroup $\mathcal Z\subset \text{Isom}(\mathbb X)$ consisting of left-translations by points $(z,t)\in \mathbb X$ such that $z\in R^n$ and $\Norm{z}^2+t\in R$. Then $\mathcal M=\langle \mathcal Z, \iota\rangle\subset \text{Isom}(\mathbb{H})$ is discrete.
\end{prop} \begin{example} \label{ex:Heisenberg} For example, in the case of the first Heisenberg group $\mathbb X^1_\C$, we might choose $R=\mathbb{Z}[\mathbbm{i}]$, so that $z\in \mathbb{Z}[\mathbbm{i}]$ and $t\in \mathbbm{i}\mathbb{Z}$. \end{example} \begin{proof} We can embed $\mathcal M$ as a subgroup of $GL(n+2,k)$ by mapping $\iota$ to a matrix $J_\iota$ of the form \eqref{eq:inversionmatrix}, and left-translation by $(z,t)$ to the matrix $A_{(z,t)}$, where \[\label{Eq:Aztmatrix} A_{(z,t)}=\left[ \begin{array}{ccc} 1 & 0_n & 0 \\ \sqrt{2}z & \operatorname{id}_n & 0_n \\ \Norm{z}^2+t & \sqrt{2}\overline{z} & 1 \end{array}\right]. \] It is now easy to check that $\mathcal Z$ is a group. Unless $\sqrt{2}\in R$, the matrices $A_{(z,t)}$ will not be matrices over $R$ itself. However, consider the discrete set $S$ of $(n+2)\times(n+2)$ matrices $(a_{i,j})_{i,j=1}^{n+2}$ such that $a_{i,j}\in \sqrt{2}R$ if $i$ or $j$ (but not both!) is equal to $1$ or $n+2$, and otherwise $a_{i,j}\in R$. It is easy to check that $S$ is closed under multiplication. Moreover, the generators $J_\iota$ and $A_{(z,t)}$ of $\mathcal M$ belong to $S$, so that $\mathcal M\subset S$ and hence $\mathcal M$ is discrete. \end{proof} For the rest of this section, we will assume that all the hypotheses of Proposition \ref{prop:easydiscrete} are satisfied, so that the only remaining difficulty is proving properness. \begin{example}\label{ex:3d} Let us consider higher-dimensional generalizations of the nearest-integer CFs. Let $k=\mathbb{R}$, $\mathbb X=\mathbb X_\mathbb{R}^n=\mathbb{R}^n$, for some $n\ge 1$, and $\iota=\iota_+$. The space $\mathbb{R}^n$ admits the standard lattice $\mathcal Z=\mathbb{Z}^n$ with fundamental domain $K=[-1/2,1/2)^n$. When $n=1$, we get the usual nearest-integer CFs. When $n=2$, we get a variant of the Hurwitz complex CFs ($\iota_+$ acts like $z\mapsto 1/\overline{z}$). When $n=3$, we get a 3d CF which we do not believe has been studied before.
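Properness here can be checked directly: writing $\norm{\cdot}$ for the Euclidean norm on $\mathbb{R}^n$, one has \( \operatorname{rad}(K)=\sup\left\{\norm{x}\;:\;x\in\left[-\tfrac{1}{2},\tfrac{1}{2}\right)^n\right\}=\frac{\sqrt{n}}{2}, \) which is less than $1$ precisely when $n\le 3$.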
However, when $n\ge4$, the corresponding $K$ is no longer proper. \end{example} Examples \ref{ex:Heisenberg} and \ref{ex:3d} fit into the framework of Proposition \ref{prop:easydiscrete} very easily. However, in general, $t$ may not belong to the ring $R$, but does belong to the additive subgroup $R'$ of $\Im(R)$ defined by \( R'=\{t\in \Im(R): \Norm{z}^2+t\in R, \exists z\in R^n\}\subset \Im(R). \) One shows that, as a set, we have $\mathcal Z=R^n\times R'$. Let $K_1$ be the Dirichlet domain around $0$ for $R$ and let $K_2$ be the Dirichlet domain around $0$ for $R'$ with respect to the Euclidean metrics on $k^n$ and $\Im(k)$. Then a fundamental domain for $\mathcal Z$ in $\mathbb X$ is given by $K=K_1^n\times K_2$. In particular, the radius of $K$ is \( \operatorname{rad}(K)=\sqrt[4]{n^2\operatorname{rad}(K_1)^4+\operatorname{rad}(K_2)^2}. \) Thus, to obtain a proper system, we require $n^2\operatorname{rad}(K_1)^4+\operatorname{rad}(K_2)^2<1$. \begin{example}\label{ex:HeisHex} Suppose $k=\mathbb{C}$ and $R=\mathbb{Z}[\mathbbm{i}]$. Then we have $R'=\mathbbm{i}\mathbb{Z}$, $K_1=[-1/2,1/2)^2$, $K_2=[-1/2,1/2)\mathbbm{i}$. In this case $\operatorname{rad}(K_1)=2^{-1/2}$ and $\operatorname{rad}(K_2)=2^{-1}$. When $n=1$, this implies that $K$ is proper, and results in the Heisenberg continued fractions in Table \ref{tab:examples} above. However, $\operatorname{rad}(K)<1$ only for $n=1$ and so this cannot be directly generalized to higher Heisenberg groups. It is tempting to get around this by replacing $R$ with $\mathbb{Z}[e^{2\pi \mathbbm{i}/3}]$, the Eisenstein integers, as then $K_1$ is a hexagon with radius $3^{-1/2}$. However, this gives $R'=\sqrt{3}\mathbbm{i} \mathbb{Z}$, so that $K_2=[-\sqrt{3}/2,\sqrt{3}/2)\mathbbm{i}$, and again $\operatorname{rad}(K)<1$ only for $n=1$. We would, more generally, be interested in CFs on the Heisenberg group with coordinates related to the ring of integers of imaginary quadratic fields. 
However, if we use $R=\mathcal{O}_d$ for $d=2,7,11$, then the resulting fundamental domain $K_1\times K_2$ is not proper even when $n=1$. \end{example} \begin{example}\label{ex:hurwitzquat} Let $k=\mathcal{H}$ be the quaternions, $n=1$, and $R$ the Hurwitz integers \eqref{eq:Hurwitzintegers}, so that $R'=\mathbb{Z}[\mathbbm{i},\mathbbm{j},\mathbbm{k}]$. Then $\operatorname{rad}(K_1)=2^{-1/2}$ (see \cite{Mennen}) and $K_2=[-1/2,1/2)^3$, so $\operatorname{rad}(K_2)=\sqrt{3}/2$. In particular, if we look at $\mathbb X^1_\mathcal{H}$, we have $\operatorname{rad}(K)=1$, narrowly missing the properness criterion. Other nearly-proper CF algorithms such as the J.~Hurwitz complex CFs are known to be convergent and ergodic, so we hope to be able to extend our results to this case. \end{example} \subsection{Completeness and Incompleteness}\label{sec:appendix} We now demonstrate how one can identify complete CFs, or identify symmetries of incomplete CFs. \begin{prop}\label{prop:firstcomplete} CF algorithms associated to $\mathbb X^1_\mathbb{R}$, $\mathcal Z=\mathbb{Z}$, and $\iota_+(x)=1/x$ (e.g., regular or $\alpha$-CFs) are incomplete with two central symmetries. CF algorithms associated to $\mathbb X^1_\mathbb{R}$, $\mathcal Z=\mathbb{Z}$, and $\iota_-(x)=-1/x$ (e.g., backwards) are complete. \begin{proof} Let $\mathcal M_+$ and $\mathcal M_-$ be the modular groups associated to $\iota_+$ and $\iota_-$, respectively. We take advantage of the fact that one can embed $\mathcal M_-$ into $SL(2,\mathbb{Z})$, while $\mathcal M_+$ naturally embeds into the larger $GL(2,\mathbb{Z})$. That is, we may identify elements of $\mathcal Z$ and the inversions $\iota_\pm$ with matrices in $GL(2,\mathbb{Z})$, acting by the usual linear fractional transformations on $\mathbb{R}$, with \( \mathcal Z=\left\{ A_n=\left(\begin{array}{cc} 1 & n \\ 0 & 1 \end{array}\right): n\in \mathbb{Z}\right\} \qquad \iota_\pm = \left( \begin{array}{cc} 0 & \pm 1 \\ 1 & 0 \end{array}\right).
\) (Note that in the standard convention, translations act by upper-triangular matrices, cf.~\eqref{Eq:Aztmatrix}.) To test for completeness, note that matrices in $\text{Stab}_{\mathcal M_\pm}(\infty)$ have the form \( \left( \begin{array}{cc} a & b\\ 0 & d\end{array}\right). \) Since $a,d\in \mathbb{Z}$ and $|ad|=1$, $a,d$ must be units, so we can decompose the matrix as \( \left( \begin{array}{cc} a & b\\ 0 & d\end{array}\right)=\left( \begin{array}{cc} 1 & bd^{-1} \\ 0 & 1\end{array}\right) \left( \begin{array}{cc} a & 0\\ 0 & d\end{array}\right), \) a product of an element of $\mathcal Z$ and a diagonal matrix. So the only things that can potentially cause incompleteness are diagonal matrices in $\mathcal M$. Since the only diagonal matrices in $SL(2,\mathbb{Z})$ are $\pm I$, which act by the identity, we can conclude that $\text{Stab}_{\mathcal M_-}(\infty)=\mathcal Z$, i.e., the algorithm is complete. For $GL(2,\mathbb{Z})$, the only potential additional symmetry is given by $x\mapsto -x$, corresponding to a diagonal matrix with $a=-d$. Indeed, this is contained in $\mathcal M_+$, represented by the word $\iota A_1 \iota A_{-1} \iota A_{1}$. In particular, CFs associated with $\iota_+$ are incomplete with $2$ central symmetries. \end{proof} \end{prop} A proof similar to the above also implies that the Rosen CFs are complete. \begin{prop}\label{prop:secondcomplete} Let $k$ be the complex, quaternionic, or octonionic division algebra, with $\mathcal Z$ given by translation by Gaussian or Eisenstein integers, quaternionic or Hurwitz integers, or Cayley integers respectively. Any $k$-CF associated with $\mathcal Z$ and an inversion of either $z\mapsto 1/z$ or $z\mapsto -1/z$ is incomplete with at least two central symmetries. \begin{proof} One argues along the same lines as the proof of Proposition \ref{prop:firstcomplete}, embedding $\mathcal M$ into $GL(2,\mathcal{O}_k)$, where $\mathcal{O}_k$ is the corresponding ring of integers.
If $\iota(z)=1/z$ then $\iota A_1 \iota A_{-1} \iota A_{1}$ is again the central symmetry $z\mapsto -z$. If $\iota(z)=-1/z$, then the central symmetry $z\mapsto -z$ can be represented by the word $\iota A_{i} \iota A_{-i} \iota A_i$. In the Hurwitz complex CF case, no other central symmetries can be obtained because the matrices in $GL(2,\mathcal{O}_k)$ obtained by the embedding have determinant $\pm 1$, and hence the only diagonal matrices have $a=\overline{d}$ or $a=-\overline{d}$. \end{proof} \end{prop} \begin{prop} The J.~Hurwitz complex CF algorithm is complete. \begin{proof} As in Proposition \ref{prop:secondcomplete}, we embed $\mathcal M$ into $GL(2,\mathbb{Z}[\mathbbm{i}])$, with $\iota=\iota_+$ and $\mathcal Z=\{A_n:n\in(1+\mathbbm{i})\mathbb{Z}[\mathbbm{i}]\}$. However, by taking $\mathcal M$ modulo $4$ and performing an exhaustive computational search, one can confirm that the central symmetry $z\mapsto -z$ never appears. \end{proof} \end{prop} \begin{prop} Standard Heisenberg continued fractions are incomplete with four central symmetries. \begin{proof} Embed $\mathcal M$ into $GL(3,\mathbb{Z}[\mathbbm{i}])$ using \eqref{Eq:Aztmatrix}. Diagonal matrices then correspond to the rotations $(z,t)\mapsto (\mathbbm{i}^k z,t)$. All four of these are, in fact, realized, since one has \( \iota A_{(0,1)}\iota A_{(0,1)} \iota A_{(0,1)} = \left( \begin{array}{ccc} -\mathbbm{i} & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & -\mathbbm{i}\end{array}\right), \) corresponding to the rotation $(z,t)\mapsto(\mathbbm{i} z,t)$. \end{proof} \end{prop} \section{Convergence}\label{sec:convergence} Convergence in the specific case of proper and discrete Iwasawa continued fractions with $k=\C$, $n=1$, and $\mathcal Z$ given by left-translations by the integer Heisenberg group was given in \cite{LV}, Lemma 3.19 through Theorem 3.21. The proof extends readily except for the use of discreteness of the Gaussian integers in Lemma 3.20.
While the rings generated by the coefficients of $\mathcal M$ (in a given matrix representation) need not be discrete, the proof only requires a lower bound on the norm of a non-zero denominator. We recover this from the discreteness of $\mathcal M$ in Lemma \ref{lemma:discreteness} below, proving: \begin{thm} \label{thm:convergence} Fix a proper and discrete Iwasawa continued fraction algorithm, and let $x\in K$. If $x$ has infinitely many CF digits, then the convergents $M_i(0)$ converge to $x$; otherwise, if $x$ has exactly $i$ CF digits, then $M_i(0)=x$. \end{thm} Fix a proper and discrete Iwasawa continued fraction algorithm. Note that we will not use properness explicitly, but it is necessary for the remainder of the proof in \cite{LV}. Recall from \S \ref{sec:hyp} that $\mathbb{H}$ is the set $\{h=(z,w)\in k^n\times k \;:\; \Re(w)>0\}$ with boundary $\partial \mathbb{H}=\mathbb X$. The coordinate $\Re(w)$ is the \emph{horoheight (at infinity)} $\operatorname{ht}_\infty(h)$. Restricting horoheight from below produces a \emph{horoball at $\infty$}, and applying a mapping $M\in \mathcal M$ produces a horoball at the point $M(\infty)$. These can be defined directly using the horoheight $\operatorname{ht}_{M(\infty)}(h):=\operatorname{ht}_{\infty}(M^{-1}(h))$. It follows from the characterization of horoballs as limits of metric balls that horoballs are geodesically convex. We denote the horoball of height $C$ based at a point $M(\infty)$ by $\mathcal B_{M(\infty)}(C)=\{h\in \mathbb{H}\;:\; \operatorname{ht}_{M(\infty)}(h)\geq C\}$. The following generalizes the disjointness result for Ford circles: \begin{thm} \label{thm:thick-thin} There exists $C_0>0$ such that for every $C\geq C_0$ and $M_1, M_2\in \mathcal M$ satisfying $M_1(\infty)\neq M_2(\infty)$, the horoballs $\mathcal B_{M_1(\infty)}(C)$ and $\mathcal B_{M_2(\infty)}(C)$ are disjoint. 
\begin{proof}[Sketch of Proof] The result follows from the Margulis Lemma by way of the Thick-Thin Decomposition (see e.g.\ \S5.10 of Thurston's notes \cite{ThurstonNotes}) of the quotient orbifold $\mathcal M\backslash \mathbb{H}$, which has a cusp corresponding to the point $\infty$. To see that it has this cusp, note that the translation length for elements of $\mathcal Z$ goes to zero at large horoheight (note that one can compare actions at different horoheights by conjugating by the dilation $\delta_r$), so that a horoball of sufficiently large horoheight must be contained in the thin part of $\mathcal M \backslash \mathbb{H}$. \end{proof} \end{thm} We can conclude, in particular, that horoballs based at points other than $\infty$ have quantitatively bounded horoheight at $\infty$. \begin{cor} \label{cor:smallballs} Let $\mathcal B=\mathcal B_\infty(h_1)$ be a horoball of height $h_1$ based at $\infty$. Then for every $M\in \mathcal M$ satisfying $M(\infty) \neq \infty$, one has $$\operatorname{ht}_\infty(M(\mathcal B)):=\sup \{\operatorname{ht}_\infty(h) \;:\; h\in M(\mathcal B)\}\le C_0^2/h_1=:h_2.$$ \end{cor} \begin{proof}We first show that for each $M\in \mathcal M$ there exists a $C_M>0$ such that $\operatorname{ht}_\infty(M (\mathcal B_\infty(h)))=C_Mh^{-1}$ for each $h>0$. To verify this, we use the fact that $\mathcal M=\langle \mathcal Z, \iota\rangle$ to expand $M=\iota a_n \cdots a_1 \iota$ for $a_i\in \mathcal Z$, noting that initial and final translations don't affect horoheight.
On the other hand, each inversion acts, by Lemmas 3.6 and 3.8 of \cite{1510.06033}, via: $$\operatorname{ht}_\infty(\iota (\mathcal B_\infty(h)))=1/h, \hspace{.75in} \operatorname{ht}_\infty(\iota (\mathcal B_x(h))) = h \norm{x}^{-2}.$$ Thus, as long as, for each $i$, $x_i:=(a_i \iota \cdots a_1 \iota )(\infty) \neq 0$, we have $$\operatorname{ht}_\infty(M(\mathcal B_\infty(h))) = h^{-1} \prod_{i=1}^{n} \norm{x_i}^{-2}.$$ If at some point $x_i=0$, then we must have $(\iota a_i\iota \cdots a_1 \iota)(\mathcal B_\infty(h))=\mathcal B_\infty(h)$, so that digits $a_1, \ldots, a_i$ may be removed without altering the effect of $M$ on $\mathcal B_\infty(h)$. With the reduction implemented, the product $C_M:=\prod_{i=1}^{n} \norm{x_i}^{-2}$ is well-defined and has the desired property. To complete the argument, note that from Theorem \ref{thm:thick-thin} we have that $h^{-1}C_M<h$ for $h=C_0$, so $C_M<C_0^2$ and $\operatorname{ht}_\infty(M(\mathcal B))<h_2$, as desired. \end{proof} We now recover Lemma 3.20 of \cite{LV}. Recall that we have an embedding $\phi:\mathbb X\to k^{n+2}$ given by $\phi(z,t)=(1,\sqrt{2}z,\Norm{z}^2+t)$, with a corresponding embedding of $\mathcal M$ into $U(J)\subset GL(n+2,k)$ acting on these vectors. Isometries of $\mathbb X$ then embed as lower block triangular mappings of the form $$ \begin{bmatrix} a & 0_n & 0\\ b & A & 0_n\\ c & b^\dagger & \overline a \end{bmatrix}, $$ where $\norm{a}=1$ and $A$ is a unitary transformation. The matrix associated to the inversion is given by Lemma \ref{lemma:inversionform}. Now, given a point $x\in K$ with at least $m$ continued fraction digits (note that \cite{LV} uses the variable $n$ instead), let $q_m$ be the denominator of $M_m(0)$; that is, the first coordinate of the vector $M_m\phi(0)$. Thus in the matrix representation of $M_m$, the top-left entry is $q_m$ and the top-right entry, in norm, is $\Norm{q_{m-1}}$, matching the matrix representation in Lemma 3.16 of \cite{LV}.
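For orientation, in the regular continued fraction on $\mathbb X^1_\mathbb{R}$ the matrix of $M_m$ is, up to signs depending on the convention for $\iota$, \( \left(\begin{array}{cc} q_m & q_{m-1} \\ p_m & p_{m-1} \end{array}\right), \) where $p_m/q_m=M_m(0)$ is the $m^{th}$ convergent, and the denominators obey the classical recursion $q_{m+1}=a_{m+1}q_m+q_{m-1}$ with $q_{-1}=0$ and $q_0=1$; in particular $\Norm{q_m}\ge 1$ for all $m$. It is this uniform lower bound that is not automatic in the general setting, and which Lemma \ref{lemma:discreteness} below recovers.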
Lemma 3.20 from \cite{LV} proves that $q_m\neq 0$, relying on the fact that a non-zero Gaussian integer must have norm at least 1. While the entries of $M_m$ need not satisfy this bound or even lie in a discrete ring, we get the following replacement: \begin{lemma} \label{lemma:discreteness} Under the assumptions of Theorem \ref{thm:convergence}, there exists $C>0$ such that $q_m\neq 0$ implies $\Norm{q_m}>C$. \begin{proof} By Theorem \ref{thm:thick-thin}, there exists a horoball $\mathcal B$ based at $\infty$ of some horoheight $C_1$ such that the $\mathcal M$-orbit of $\mathcal B$ consists of disjoint horoballs. Moreover, the proof of Lemma 3.9 of \cite{1510.06033} (again, readily extended to the current setting) gives a constant $s_0$ such that if $q_m \neq 0$ then $$\operatorname{ht}_\infty(M_m(\mathcal B)):=\sup\{\operatorname{ht}_\infty(h)\;:\; h\in M_m(\mathcal B)\}\geq s_0\Norm{q_m}^{-1}.$$ The disjointness requirement forces $\operatorname{ht}_\infty(M_m(\mathcal B))<C_1$, so $\Norm{q_m}>s_0/C_1=:C$. \end{proof} \end{lemma} \section{Markable Geodesics}\label{sec:markable} We now study the way a geodesic $\gamma$ interacts with the modular group $\mathcal M$ associated to a proper, discrete, and complete Iwasawa continued fraction algorithm, with the goal of proving the Markable Geodesic Theorem \ref{thm:markable}. We will track the passage of a geodesic through $\mathcal M\backslash\mathbb{H}$ by detecting intersections with the unit sphere $$\mathbb S=\{ h \in \mathbb{H} \;:\; \norm{h}=1\}$$ and its images under elements of $\mathcal M$. We will obtain an analog of geodesic coding for certain \emph{markable} geodesics, and then show that markability is a generic condition. Note that $\partial \mathbb S$ is the unit sphere in $\mathbb X$, and that $\iota(\mathbb S)=\mathbb S$.
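In the classical setting $\mathbb X^1_\mathbb{R}$, for example, $\mathbb S$ is the unit semicircle in the upper half-plane; the invariance $\iota(\mathbb S)=\mathbb S$ reflects the fact that an inversion satisfies $\norm{\iota(h)}=\norm{h}^{-1}$, and recording the intersections of a geodesic with the $\mathcal M$-images of $\mathbb S$ is closely related to the classical cutting sequence coding of \cite{Series1}.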
Even a generic geodesic may intersect $\mathbb S$ in more than one point; indeed when $k\neq \mathbb{R}$, $\mathbb{H}$ does not admit \emph{any} geodesically convex codimension-1 hypersurfaces. However, a generic geodesic intersects $\mathbb S$ in finitely many points, so we may speak of the \emph{last} intersection with $\mathbb S$: \begin{lemma}\label{lemma:finiteintersection} Let $\gamma$ be a geodesic in $\mathbb{H}$ not contained in $\mathbb S$. Then the set of intersections $\gamma\cap \mathbb S$ is finite. Furthermore, if there are times $t_1, t_2$ such that $\norm{\gamma(t_1)}>1$ and $\norm{\gamma(t_2)}<1$, then $\gamma$ does intersect $\mathbb S$. \begin{proof} The existence of the intersection follows from the intermediate value theorem applied to $t\mapsto \norm{\gamma(t)}$, since $\mathbb S$ is the level set $\norm{\cdot}=1$. Finiteness follows by an algebraic argument. Because $\text{Isom}(\mathbb{H})$ acts transitively on geodesics, we may write $\gamma=g(\gamma_2)$, where $g\in G$ and $\gamma_2$ is the geodesic joining $0$ and $\infty$. Because $g$ acts by projective transformations on $\mathbb{H}$, the condition $\norm{g(\gamma_2(t))}=1$ induces an algebraic condition on $t$. Thus, if the condition were to be satisfied for infinitely many $t$, it must be satisfied for all $t$, so that $\gamma\subset \mathbb S$, a contradiction. \end{proof} \end{lemma} We now establish the necessary results for the proof of the Markable Geodesic Theorem. \subsection{Decomposing an Arbitrary Geodesic}\label{subsec:geodesic} In the first stage of the proof, we will break up a geodesic $\gamma$ into segments punctuated by intersections with expected images of the sphere $\mathbb S$, in a way that gives us control of the intermediate horoheights. For a more formal statement, see Lemma \ref{lemma:upshot} below. We start by restricting our attention to geodesics that intersect near the top of $\mathbb S$. Fix $\epsilon>0$ such that $\epsilon+1< \operatorname{rad}(K)^{-1}$ (such an $\epsilon$ exists by properness; this choice comes into play in Lemma \ref{lemma:exhile}).
We then have: \begin{lemma} \label{lemma:horocap} Suppose $\gamma$ is a geodesic ray with $\norm{\gamma(0)}\ge 1+\epsilon$ and $\gamma_+\in K$. Then the horoheight of any intersection of $\gamma$ with $\mathbb S$ satisfies $\operatorname{ht}_{\infty}(\gamma(t))\ge h_2$ for some $h_2\in(0,1)$ depending only on $\epsilon$. \begin{proof} The existence of the intersection follows from Lemma \ref{lemma:finiteintersection}. To obtain the lower bound on the horoheight of each intersection, note that $\gamma$ is uniformly transverse to the boundary $\mathbb X$ (note that we are not working in a conformal model, so $\gamma$ is not necessarily perpendicular to $\mathbb X$), as this is true for the vertical geodesic joining $0$ and $\infty$ and the endpoints of $\gamma$ are contained in the compact set $\overline{K}\times (\widehat \mathbb{H} \setminus B(0,1+\epsilon))$. Thus, there is a minimal horoheight $h_2$ (that we may assume is in $(0,1)$) that $\gamma$ must reach as it moves away from $\gamma_-$ and $\gamma_+$ before an intersection can occur. The same bound must hold for the intermediate segment by the convexity of horoballs. \end{proof} \end{lemma} We denote the subset of $\mathbb S$ having horoheight at least $h_2$ as $\mathbb{W}$, and refer to both $\mathbb{W}$ and its images under $\mathcal M$ as ``walls''. We next fix a geodesic ray $\gamma$ originating in $\mathbb{W}$ and terminating in $K$, and let $M_i\in\mathcal M$ be the mappings associated to the CF expansion of $\gamma_+$. We now look for intersections of $\gamma$ with walls $M_i( \mathbb{W})$ by iterating the Gauss map on $\gamma$ and identifying intersections of $M_i^{-1}(\gamma)$ with $\mathbb{W}$. This happens within finitely many iterations, with control over the intermediate digits: \begin{lemma} \label{lemma:exhile} There is a finite collection $\mathcal M_0\subset \mathcal M$ such that the following holds.
Suppose $\gamma$ is a geodesic with $\gamma(0)\in \mathbb{W}$ satisfying $\gamma_+\in K\setminus \mathcal M \infty$. Then there exists a time $0<\mathfrak{t}_1<\infty$ and a universally bounded ${\mathfrak{i}_1}>0$ such that $M_{\mathfrak{i}_1}^{-1}(\gamma(\mathfrak{t}_1))\in \mathbb{W}$ and $M_{\mathfrak{i}_1-1}\in \mathcal M_0$. \end{lemma} At this point, for notational convenience, we will often drop parentheses when elements of $\mathcal M$ act on points or sets of points. \begin{proof} We note first that since $\gamma_+\not\in \mathcal M\infty$, the continued fraction expansion of $\gamma_+$ does not terminate and so $M_i^{-1} \gamma_+$ is well-defined and in $K$ for all $i\in \mathbb{N}$. If $\norm{M_1^{-1}\gamma(0)}\geq 1+\epsilon$, the result is immediate by Lemma \ref{lemma:horocap}. If not, we proceed iteratively on $i$, starting at $i=1$, supposing at every stage that $\norm{M_{i-1}^{-1}\gamma(0)} < 1+\epsilon$ until we find the minimum positive value $\mathfrak{i}_1$ of $i$ for which \[\norm{M_i^{-1}\gamma(0)}\ge 1+\epsilon.\label{eq:ibound}\] Note that $M_i^{-1}=a_i^{-1}\iota M_{i-1}^{-1}$, $M_0=\operatorname{id}$, and moreover that $a_i^{-1}$ is an isometry of the metric $d$.
When $i=1$, we have by the above observation and our definition of inversions that \[ \begin{aligned} d(M_1^{-1}\gamma_+,M_1^{-1}\gamma(0))&= d(\iota M_{0}^{-1}\gamma_+,\iota M_{0}^{-1} \gamma(0))= \frac{d( M_{0}^{-1}\gamma_+, M_{0}^{-1} \gamma(0))}{\norm{M_{0}^{-1}\gamma_+}\norm{M_0^{-1}\gamma(0)}}\\ &\ge \frac{d( M_{0}^{-1}\gamma_+, M_{0}^{-1} \gamma(0))}{\operatorname{rad}(K)(1+\epsilon)}=\frac{d(\gamma_+, \gamma(0))}{\operatorname{rad}(K)(1+\epsilon)}. \end{aligned}\label{eq:radKdenom1} \] In particular, since $d(\gamma_+,\gamma(0))\ge d(K,\mathbb{W})$ this implies that \[ \begin{aligned} \norm{M_1^{-1}\gamma(0)}&\ge d(M_1^{-1}\gamma_+,M_1^{-1}\gamma(0))-\norm{M_1^{-1} \gamma_+}\\ &\geq \frac{d(K,\mathbb{W})}{\operatorname{rad}(K)(1+\epsilon)}-\operatorname{rad}(K). \end{aligned}\label{eq:radKdenom2} \] This lower bound could be substantially improved if more were known about $M_0^{-1}\gamma_+$. In particular, if $\norm{M_0^{-1}\gamma_+}\le r$ for \( r=\frac{d(K,\mathbb{W})}{(1+\epsilon)(1+\epsilon+\operatorname{rad}(K))}, \) then we could replace the $\operatorname{rad}(K)$ in the denominators of \eqref{eq:radKdenom1} and \eqref{eq:radKdenom2} with $r$ and obtain that $\norm{M_1^{-1}\gamma(0)}\ge 1+\epsilon$, so that $i=1$ itself is the minimum index for which \eqref{eq:ibound} holds. Now we begin the iteration.
At every stage we see that \[ \begin{aligned} d(M_i^{-1}\gamma_+,M_i^{-1}\gamma(0)) &\ge \frac{d(M_{i-1}^{-1}\gamma_+,M_{i-1}^{-1}\gamma(0))}{\norm{M_{i-1}^{-1}\gamma_+}\norm{M_{i-1}^{-1}\gamma(0)}}\\ &\ge \frac{d(M_{i-2}^{-1}\gamma_+,M_{i-2}^{-1}\gamma(0))}{\norm{M_{i-1}^{-1}\gamma_+}\norm{M_{i-1}^{-1}\gamma(0)}\norm{M_{i-2}^{-1}\gamma_+}\norm{M_{i-2}^{-1}\gamma(0)}}\\ &\;\;\vdots\\ &\ge \frac{d(\gamma_+,\gamma(0))}{\prod_{j=0}^{i-1}\norm{M_{j}^{-1}\gamma_+}\norm{M_{j}^{-1}\gamma(0)}}, \end{aligned} \] and thus \[ \norm{M_i^{-1}\gamma(0)}\ge \frac{d(K,\mathbb{W})}{(\operatorname{rad}(K)(1+\epsilon))^i}-\operatorname{rad}(K),\label{eq:iterativestep}\] noting again that if $\norm{M_{i-1}^{-1}\gamma_+}\le r$, then one copy of $\operatorname{rad}(K)$ in the denominator of the last inequality can be replaced with $r$, in which case the right-hand side is at least $1+\epsilon$ and this $i$ satisfies \eqref{eq:ibound}. Regardless of whether $\norm{M_{i-1}^{-1}\gamma_+}\le r$ at any stage, since $\operatorname{rad}(K)(1+\epsilon)<1$ by the initial choice of $\epsilon$, within a bounded number of steps independent of our choice of $\gamma$, the expression on the right of \eqref{eq:iterativestep} exceeds $1+\epsilon$. Thus, there is a uniform bound on the minimal index $\mathfrak{i}_1$ for which $\norm{M_{\mathfrak{i}_1}^{-1}\gamma(0)}\ge 1+\epsilon.$ Moreover, we see that if ever in our iterative process $\norm{M_{i-1}^{-1}\gamma_+}\le r$, then this $i$ must be the desired value $\mathfrak{i}_1$. Thus for $i=\mathfrak{i}_1$, we must have that $\norm{M_j^{-1}\gamma_+}>r$ for $0\le j <i-1$. However, recall that $a_{j+1}=\floor{\iota M_{j}^{-1}\gamma_+} $. In particular, this tells us that $a_{j+1}$ must belong to a finite set of values for $0\le j <i-1$, and since $M_{i-1}=\iota^{-1} a_1 \iota^{-1} a_2\dots \iota^{-1} a_{i-1}$, there are finitely many options for what it could be.
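For concreteness, the bound on $\mathfrak{i}_1$ can be made explicit; this is only a rearrangement of \eqref{eq:iterativestep}, recorded for the reader's convenience. Writing $q=\operatorname{rad}(K)(1+\epsilon)<1$, the right-hand side of \eqref{eq:iterativestep} is at least $1+\epsilon$ as soon as \[ i>\frac{\log d(K,\mathbb{W})-\log\bigl(1+\epsilon+\operatorname{rad}(K)\bigr)}{\log q}, \] a threshold depending only on $K$, $\mathbb{W}$, and $\epsilon$, and not on $\gamma$.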
\end{proof} \begin{cor} \label{cor:h1} There exists a universal $h_1>0$ such that under the assumptions of the preceding lemma we have $\operatorname{ht}_{\infty}(M_{\mathfrak{i}_1}^{-1}\gamma(t))>h_1$ for all $0\leq t \leq \mathfrak{t}_1$. \begin{proof} We already know that $\operatorname{ht}_\infty(M_{\mathfrak{i}_1}^{-1}\gamma(\mathfrak{t}_1))\geq h_2$ since this point is contained in $\mathbb{W}$. Let us next consider the possible horoheights of $M_{\mathfrak{i}_1}^{-1}\gamma(0) = a_{\mathfrak{i}_1}^{-1} \iota M_{\mathfrak{i}_1-1}^{-1}\gamma(0)$. The point $\iota M_{{\mathfrak{i}_1}-1}^{-1}\gamma(0)$ lies in the relatively compact set $\cup\{\iota M^{-1}\mathbb{W} \;:\; M\in \mathcal M_0\}$, so for some $h_3>0$ we obtain $\operatorname{ht}_\infty (\iota M_{{\mathfrak{i}_1}-1}^{-1}\gamma(0))>h_3$. Since translation along $\mathbb X$ does not affect horoheight, we likewise have $\operatorname{ht}_\infty(M_{\mathfrak{i}_1}^{-1}\gamma(0))>h_3$. The corollary now follows with $h_1=\min(h_2,h_3)$ by the convexity of horoballs. \end{proof} \end{cor} We are now able to characterize $M_{\mathfrak{i}_1}$ as the (essentially) unique element of $\mathcal M$ that can detect large horoheights along the geodesic segment between $\gamma(0)$ and $\gamma(\mathfrak{t}_1)$. Let us define an exceptional set $E\subset K$ by \[\label{eq:Edefn} E= K\cap \bigcup_{a\in \mathcal Z\setminus \{\operatorname{id}\}} aK. \] Since $K$ is a fundamental domain for $\mathcal Z$, $E$ has measure zero. \begin{cor}\label{cor:h0} There is an $h_0>1$ such that the following holds under the conditions of Lemma \ref{lemma:exhile}, and for all $0\leq t\leq \mathfrak{t}_1$. If $M^{-1}\gamma_+\in K\setminus E$, $M_{\mathfrak{i}_1}^{-1} \gamma_+\in K\setminus E$, and $\operatorname{ht}_{M\infty}(\gamma(t))>h_0$, then $M=M_{\mathfrak{i}_1}$.
\begin{proof} The geodesic segment $M_{\mathfrak{i}_1}^{-1}\gamma([0,\mathfrak{t}_1])$ is contained in the horoball $\mathcal B=\mathcal B_\infty(h_1)$, and by Corollary \ref{cor:smallballs} there is an $h_0$, which we may take to be greater than $1$, such that the points of $M\mathcal B$ have horoheight based at $\infty$ of at most $h_0$ when $M\infty \neq \infty$. In particular, this applies to the geodesic segment. Thus, if $\operatorname{ht}_{M\infty}(\gamma(t))>h_0$ for any $0\leq t\leq \mathfrak{t}_1$, then we conclude that $M\infty=M_{\mathfrak{i}_1}\infty$ and thus that $M^{-1}M_{\mathfrak{i}_1}\in \operatorname{Stab}_{\mathcal M}(\infty)=\mathcal Z$, by completeness. Moreover, $\gamma_+\in M(K\setminus E)\cap M_{\mathfrak{i}_1}(K\setminus E)$ so that $M(K\setminus E)\cap M_{\mathfrak{i}_1}(K\setminus E) \neq \emptyset$. Thus $(M^{-1}M_{\mathfrak{i}_1} (K\setminus E))\cap (K\setminus E) \neq \emptyset$. By the definition of $E$, the only element of $\mathcal Z$ that takes any part of $K\setminus E$ back to itself is the identity element. Thus $M=M_{\mathfrak{i}_1}$ as desired. \end{proof} \end{cor} Iterating the above results gives us a sequence of indices $\mathfrak{i}_j$ and times $\mathfrak{t}_j$ with the following properties: \begin{lemma}\label{lemma:upshot} Let $h_0$ be the constant in Corollary \ref{cor:h0} and $\gamma$ a geodesic ray with $\gamma(0)\in \mathbb{W}$, $\gamma(t)\not\in \mathbb{W}$ for $t>0$, and $\gamma_+\in K\setminus \mathcal M (\{\infty\}\cup E)$.
Then there is an increasing sequence $\mathfrak{i}_j$, $j\geq 0$, of indices starting with $\mathfrak{i}_0=0$ and an increasing sequence of times $\mathfrak{t}_j$, $j\geq 0$, starting with $\mathfrak{t}_0=0$ such that: \begin{enumerate} \item For each $j\geq 0$: $\gamma(\mathfrak{t}_j)\in M_{\mathfrak{i}_j}\mathbb{W}$, while for $t>\mathfrak{t}_j$, $\gamma(t)\not\in M_{\mathfrak{i}_j}\mathbb{W}$, \item For each $j\geq 1$: If $\mathfrak{t}_{j-1}\leq t\leq \mathfrak{t}_j$ and a matrix $M\in \mathcal M$ satisfies both $M^{-1}\gamma_+\in K$ and $\operatorname{ht}_{\infty}(M^{-1}\gamma(t))>h_0$, then $M=M_{\mathfrak{i}_j}$. \end{enumerate} \begin{proof}Given $\gamma$ satisfying the assumptions of the lemma, the $j=0$ case of Conclusion (1) is trivial. Moreover, we obtain $\mathfrak{i}_1$ and $\mathfrak{t}_1$ from Lemma \ref{lemma:exhile}. There might be several choices of $\mathfrak{t}_1$ due to multiple intersections with $M_{\mathfrak{i}_1}\mathbb{W}$ (see Lemma \ref{lemma:finiteintersection}); however, we let $\mathfrak{t}_1$ be the last of these. We then know that $M_{\mathfrak{i}_1}^{-1}\gamma(\mathfrak{t}_1)\in \mathbb{W}$, which is equivalent to Conclusion (1) for $j=1$. We then obtain Conclusion (2) for $j=1$ from Corollary \ref{cor:h0}. We now proceed inductively: once $\mathfrak{t}_j$ and $\mathfrak{i}_j$ are defined, we replace $\gamma$ with the geodesic segment $\gamma'(t')= M_{\mathfrak{i}_j}^{-1}\gamma(t'+\mathfrak{t}_j)$ restricted to $t'\in [0,\infty)$. We then obtain $\mathfrak{t}'_1$, $\mathfrak{i}'_1$, and $M_{\mathfrak{i}'_1}$ as before, and take $\mathfrak{t}_{j+1}=\mathfrak{t}_j+\mathfrak{t}'_1$ and $\mathfrak{i}_{j+1}=\mathfrak{i}_j+\mathfrak{i}'_1$. The desired properties follow from the fact that the Gauss map acts as a shift on the digits of $\gamma_+$, via the identity $M_{\mathfrak{i}_{j+1}}=M_{\mathfrak{i}_j}M'_{\mathfrak{i}'_1}$.
Finally, we note that since $h_0>1$, if $\operatorname{ht}_{\infty}(M^{-1}\gamma(t))>h_0$, then $t$ cannot be any of the $\mathfrak{t}_j$'s, so there is no ambiguity in Conclusion (2). \end{proof} \end{lemma} \subsection{Decomposing a Markable Geodesic}\label{subsec:markable} Lemma \ref{lemma:upshot} tells us how geodesic rays leaving the wall $\mathbb{W}$ towards $K$ return to other walls $M\mathbb{W}$, for various $M\in \mathcal M$. In particular, if a point on our ray has large horoheight with respect to $M\infty$, then the ray should cross the wall $M\mathbb{W}$. We now use this to define a set $\mathcal{C}_{\mathbb{W}}\subset T^1 \mathbb{H}$ lying over $\mathbb{W}$, where this ``if'' condition becomes ``if and only if.'' We will then call a geodesic \emph{markable} if it intersects $\mathcal M$-translates of $\mathcal{C}_{\mathbb{W}}$ infinitely often in the past and future, and show in the Markable Geodesic Theorem (Theorem \ref{thm:markable}) that the behavior of a markable geodesic's cusp excursions is directly related to the continued fraction expansion of the forward endpoint. We will see in Corollary \ref{cor:markablegeneric} that markable geodesics are generic. \begin{defi} \label{defi:CW} Using the constant $h_0>1$ provided by Lemma \ref{lemma:upshot}, we define $\mathcal{C}_{\mathbb{W}} \subset T^1\mathbb{H}$ as follows: a vector based at a point in $\mathbb{W}$ is in the set $\mathcal{C}_{\mathbb{W}}$ if and only if the corresponding geodesic \emph{line} $\gamma$ satisfies: \begin{enumerate} \item $\gamma(0)\in \mathbb{W}$, while for $t>0$, $\gamma(t)\notin \mathbb{W}$, \item $\gamma_+\in K\setminus \mathcal M(\{\infty\}\cup E)$, where $E$ is the exceptional set \eqref{eq:Edefn}, \item there exists a \emph{spotter time} $\widehat t<0$ such that $ \operatorname{ht}_\infty(\gamma(\widehat t))>h_0$.
\end{enumerate} \end{defi} Critically, the third condition tells us that $\gamma$ intersects some $M\mathcal{C}_{\mathbb{W}}$ for $M\in \mathcal M$ at some time $t_M$ if and only if there is an associated spotter time $\widehat{t}_M<t_M$ satisfying $\operatorname{ht}_{\infty}(M^{-1}\gamma(\widehat t_M))>h_0$, or equivalently $\operatorname{ht}_{M\infty}(\gamma(\widehat t_M))>h_0$. \begin{defi} \label{defi:markable} A geodesic $\gamma$ is \emph{markable} if it intersects $\mathcal M$-translates of $\mathcal{C}_{\mathbb{W}}$ infinitely many times in both the past and the future. Unless stated otherwise, we will also assume that $\gamma(0)\in \mathcal{C}_{\mathbb{W}}$. \end{defi} In the following lemma, we will show that, for markable geodesics, spotter times follow a natural progression. That is, if we see a spotter time $\widehat t$ associated to an intersection time $t$, then we must move beyond $t$ before seeing the spotter time associated to any other intersection. \begin{lemma}\label{lemma:spotterorder} Let $\gamma$ be a markable geodesic, and $M, M'\in \mathcal M$. Suppose that $\gamma(a) \in M \mathcal{C}_{\mathbb{W}}$ and $\gamma(b)\in M'\mathcal{C}_{\mathbb{W}}$, attested by the corresponding spotter times $\widehat a, \widehat b$. Then these must alternate in order: if $a<b$, then $\widehat a< a< \widehat b<b$. \begin{proof} We will prove an equivalent statement: if $\max(\widehat a, \widehat b)< \min(a, b)$, then $a=b$. So suppose that $\max(\widehat a, \widehat b)< \min(a, b)$. Since $\gamma$ is markable, we may assume without loss of generality that $\gamma(0)\in \mathcal{C}_{\mathbb{W}}$ and $0<\widehat a<\widehat b<\min(a,b)$. Let $\mathfrak{t}_{j}$ be the sequence in Lemma \ref{lemma:upshot}. Then for some fixed $j$, we have $\mathfrak{t}_{j-1}<\widehat a \leq \mathfrak{t}_j$. Conclusion 2 of the same lemma states that, since $\widehat a$ is in the correct range and $\gamma(a)\in M\mathcal{C}_{\mathbb{W}}$, we have $M=M_{\mathfrak{i}_j}$, and by the definition of $\mathfrak{t}_j$ (that is, Conclusion 1 of the lemma) we have $a=\mathfrak{t}_j$.
Furthermore, $\mathfrak{t}_{j-1}<\hat b<a=\mathfrak{t}_j$, so by the same argument $b=\mathfrak{t}_j$, as desired. \end{proof} \end{lemma} We can now show that if a geodesic starts in $\mathcal{C}_{\mathbb{W}}$, its next intersection with a translate of $\mathcal{C}_{\mathbb{W}}$ will be captured by an iteration of the Gauss map. \begin{lemma} \label{lemma:isspeedup} Let $\gamma$ be a markable geodesic such that $\gamma(0)\in \mathcal{C}_{\mathbb{W}}$, and suppose that the next intersection with a translate of $\mathcal{C}_{\mathbb{W}}$ occurs at $M\mathcal{C}_{\mathbb{W}}$. Then for some $j\ge 1$, we have $M=M_{\mathfrak{i}_j}$ and $\gamma(\mathfrak{t}_j)\in M\mathcal{C}_{\mathbb{W}}$, where $\mathfrak{i}_j,\mathfrak{t}_j$ are defined for $\gamma$ in Lemma \ref{lemma:upshot}. \begin{proof} Let $t>0$ denote the time when $\gamma(t)\in M\mathcal{C}_{\mathbb{W}}$. We know that there must exist a spotter time $\widehat t$ associated to $t$ and moreover, by Lemma \ref{lemma:spotterorder}, we know that $0<\widehat t< t$. Let $j\ge 1$ be such that $\mathfrak{t}_{j-1}\le \widehat t\le \mathfrak{t}_j$. Then by conclusion (2) of Lemma \ref{lemma:upshot}, we have that $M=M_{\mathfrak{i}_j}$ and $\gamma(\mathfrak{t}_j)\in M\mathcal{C}_{\mathbb{W}}$. \end{proof} \end{lemma} We can now prove the Markable Geodesic Theorem \ref{thm:markable}. \begin{proof}[Proof of Theorem \ref{thm:markable}] For positive $i$, let $a_i$ and $M_i$ be the digits and mappings corresponding to the CF expansion of the forward endpoint $\gamma_+$, making property (2) immediate. We will define the remaining data iteratively. Let $t_1>0$ be the first positive time when $\gamma$ intersects an $\mathcal M$-translate of $\mathcal{C}_{\mathbb{W}}$. Lemma \ref{lemma:isspeedup} then provides an index $k$ such that $t_1=\mathfrak{t}_k$ and a corresponding number $\mathfrak{i}_k$ which we record as $i_1$ satisfying $\gamma(t_1)\in M_{i_1}\mathcal{C}_{\mathbb{W}}$. 
We will now show that properties (1), (4), and (3) hold on the initial segment $[t_0,t_1]$. Let $\hat{t}_1$ be a spotter time associated to the intersection of $\gamma$ with $M_{i_1}\mathcal{C}_{\mathbb{W}}$; that is, $\hat{t}_1<t_1$ and $\operatorname{ht}_{M_{i_1}\infty}(\gamma(\hat{t}_1))>h_0>1$. Since $\gamma(t_0)\in\mathcal{C}_{\mathbb{W}}$ and $\gamma(t_1)\in M_{i_1}\mathcal{C}_{\mathbb{W}}$, by Lemma \ref{lemma:spotterorder} we have that $\hat{t}_1\in [t_0,t_1]$. Let $\delta>0$ be the distance (not depending on $\gamma$) between the horospheres $\operatorname{ht}_\infty(\cdot)=1$ and $\operatorname{ht}_\infty(\cdot)=h_0$. Since $\gamma$ is a unit-speed geodesic, $t_1-t_0>\delta$, and property (1) holds for $j=1$. Next, the ``if'' direction of property (4) is immediate for $j=0$ and $j=1$ from the definitions. Now suppose $t\in(t_0,t_1]$ satisfies $\gamma(t)\in M\mathcal{C}_{\mathbb{W}}$ for some $M\in\mathcal M$. Then by definition of $t_1$ we have that $t=t_1$, and from Lemma \ref{lemma:upshot} we have that $M=M_{i_1}$. Thus the ``only if'' direction of property (4) holds for $t\in(t_0,t_1]$. Suppose next that $t\in [t_0, t_1]$ satisfies $\operatorname{ht}_{M\infty}(\gamma(t))>h_0$ for some $M\in \mathcal M$. Then by Lemma \ref{lemma:upshot} there exists $\ell\ge 1$ and $t'>t$ such that $M=M_{\mathfrak{i}_\ell}$, and $\gamma(t')\in M_{\mathfrak{i}_\ell} \mathbb{W}$. By definition of $\mathcal{C}_{\mathbb{W}}$ via spotter times, we obtain that $\gamma(t')\in M_{\mathfrak{i}_\ell}\mathcal{C}_{\mathbb{W}}$. Since we assumed that $t_1$ is the first time that the forward ray of $\gamma$ intersects an $\mathcal M$-translate of $\mathcal{C}_{\mathbb{W}}$, we have that $t_1\leq t'$. The converse inequality is given by Lemma \ref{lemma:spotterorder}, since $t$ is a spotter time associated to $t'$, so that $t_1=t'$ and $M=M_{i_1}$ follows from property (4). So property (3) holds for $j=1$. To define $t_j,i_j$ for $j\geq 2$, we now consider a renormalized geodesic $\gamma'=M_{i_1}^{-1} \gamma$ with $\gamma'(0)=M_{i_1}^{-1} \gamma (t_1)$.
We may then find $t'_1,i'_1$ for $\gamma'$ as we did above and let $t_2=t_1+t'_1$ and $i_2=i_1+i'_1$. Iterating this procedure gives $t_j,i_j$ for all $j\ge 1$. By the work above, properties (1), (3), and (4) hold on the corresponding initial segment of the renormalized geodesics and thus hold on the entire forward geodesic ray of $\gamma$. Moreover, from this definition, we see that property (5) holds for all $i,j,k$ that are non-negative. To define $a_i, M_i$ for non-positive $i$ and $i_j,t_j$ for negative $j$, let $t_{-1}$ be the negative value of smallest absolute value for which $\gamma(t_{-1})$ intersects an $\mathcal M$-translate $M\mathcal{C}_{\mathbb{W}}$ of $\mathcal{C}_{\mathbb{W}}$. Consider a renormalized geodesic $\gamma'=M^{-1} \gamma$ with $\gamma'(0)= M^{-1}\gamma(t_{-1})\in \mathcal{C}_{\mathbb{W}}$. Set $i_{-1}=-i'_1$, $a_i=a'_{i-i_{-1}}$, and $M_i=M^{-1}M'_{i-i_{-1}}$ for $i_{-1}< i\le 0$. Since $\gamma'$ is a markable geodesic satisfying the conditions of the theorem, properties (1)--(4) hold for $\gamma'\vert_{[0,\infty)}$, and so properties (1)--(5) hold for $\gamma\vert_{[t_{-1},\infty)}$. Iterating this process yields the remaining definitions and properties on the backwards ray of $\gamma$ (note that the full ray is covered by property (1)). \end{proof} \section{Ergodicity}\label{sec:ergodicity} We now prove the ergodicity of the Gauss map by first relating the cross-section $\mathcal{C}_{\mathbb{W}}$ studied in \S \ref{sec:markable} to geodesic flow on a quotient of $\mathbb{H}$, and then to the Gauss map on the boundary. We start by recalling the ergodicity result for geodesic flow. This section culminates in the ergodicity part of Theorem \ref{thm:main}. \begin{remark} All statements concerning ergodicity and measure will be made with respect to the relevant Hausdorff measure; depending on context this can be interpreted as Haar measure, surface measure, or Lebesgue measure.
Because there are no surprises along the way, we will suppress discussion of the details. \end{remark} \subsection{Ergodicity of the Geodesic Flow} The space $(\mathbb{H}, d_\mathbb{H})$ is a symmetric space equipped with a complete Riemannian metric of pinched negative curvature. In particular, any pair of distinct points in $\mathbb{H}$ (indeed, in $\overline{\mathbb{H}}\cup\{\infty\}$) determines a unique geodesic. Alternatively, a \emph{pointed} geodesic is determined by an element of the unit tangent bundle $T^1\mathbb{H}$, namely a point in $\mathbb{H}$ and a unit vector over it. The geodesic flow on $T^1\mathbb{H}$ moves vectors along geodesics as follows: \begin{defi}[Geodesic Flow] Given a vector $(h,v)\in T^1\mathbb{H}$, let $\gamma: \mathbb{R}\rightarrow \mathbb{H}$ be a unit-speed geodesic satisfying $\gamma(0)=h$ and $\gamma'(0)=v$. The time-$t$ geodesic flow of $(h,v)$ is then given by $\phi_t(h,v):=(\gamma(t),\gamma'(t))\in T^1\mathbb{H}$. \end{defi} Given a set $A\subset T^1\mathbb{H}$, one says that $A$ is $\phi$-invariant if for each $t\in \mathbb{R}$, the symmetric difference $(\phi_t^{-1}A) \triangle A$ has measure zero. We will be interested in sets $A$ that are furthermore invariant under a lattice $\Gamma\subset G$, i.e., $\mu(g(A)\triangle A)=0$ for every $g\in \Gamma$. We can now state Mautner's Ergodicity Theorem (cf.~Moore's extension of the result to the frame bundle \cite{MR776417}): \begin{thm}[Mautner's Ergodicity Theorem \cite{MR0084823}] \label{thm:Mautner} Let $\Gamma$ be a lattice in $G$, and $A\subset T^1\mathbb{H}$ a $\Gamma$-invariant set that is furthermore invariant under geodesic flow. Then either $\mu(A)=0$ or $\mu(T^1\mathbb{H}\setminus A)=0$. \end{thm} \subsection{Ergodicity of the Markable Cross-Section}\label{subsec:geodesicflow} We continue working with a fixed complete, discrete, and proper Iwasawa continued fraction algorithm.
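For readers who want coordinates, the geodesic flow just defined can be illustrated in the classical upper half-plane model $\mathbb{H}^2$; the sketch below treats only this standard special case (the space $\mathbb{H}$ above is more general), encoding a unit tangent vector by a matrix $g\in\operatorname{SL}(2,\mathbb{R})$ based at $g\cdot i$, with the time-$t$ flow given by right multiplication by $\operatorname{diag}(e^{t/2},e^{-t/2})$.

```python
import math

# Sketch in the classical upper half-plane model H^2 (an illustrative
# special case, not the general space H of the paper).  A unit tangent
# vector corresponds to a matrix g in SL(2, R) based at g.i; the time-t
# geodesic flow is right multiplication by diag(e^{t/2}, e^{-t/2}).

def mobius(g, z):
    """Mobius action of g = ((a, b), (c, d)) on a point z of the upper half-plane."""
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

def geodesic_flow(g, t):
    """phi_t: flow the vector encoded by g for time t, i.e. g -> g * diag(e^{t/2}, e^{-t/2})."""
    e = math.exp(t / 2)
    (a, b), (c, d) = g
    return ((a * e, b / e), (c * e, d / e))

def hyp_dist(z, w):
    """Hyperbolic distance between two points of the upper half-plane."""
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

g0 = ((1.0, 0.0), (0.0, 1.0))                    # upward unit vector based at i
p0 = mobius(g0, 1j)                              # basepoint: i
p1 = mobius(geodesic_flow(g0, math.log(2)), 1j)  # flow up the imaginary axis
```

Flowing the upward vector at $i$ for time $\log 2$ lands at $2i$, at hyperbolic distance exactly $\log 2$ from $i$: the flow is unit speed, as the definition requires.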
Consider the natural projection $\pi_\mathbb{H}: \mathbb{H} \rightarrow \mathcal M\backslash \mathbb{H}$. Mautner's Theorem \ref{thm:Mautner} immediately applies to our setting. We record this in the following lemma, which can be interpreted either in the formulation of Theorem \ref{thm:Mautner} or, equivalently, using orbifold geodesic flow. \begin{lemma} Geodesic flow on $\mathcal M\backslash \mathbb{H}$ is ergodic. \begin{proof} $\mathcal M$ is assumed to be discrete; to show it is a lattice we must show that there exists a finite-volume fundamental domain for $\mathcal M$. Let $K'$ be the region lying over $K$ having horoheight at least $\epsilon>0$, for a choice of $\epsilon$ satisfying $\operatorname{rad}(K\times[0,\epsilon])^{-2}>1$. Given a point $h\in \mathbb{H}$, we may use $\mathcal Z$ to translate $h$ so that it lies over $K$, and invert it if necessary to increase its horoheight multiplicatively by at least $\operatorname{rad}(K\times[0,\epsilon])^{-2}$ (see \cite{1510.06033} for the interaction of horoheight and inversions), and translate again to place it over $K$. Within finitely many iterations, we obtain an image of $h$ contained in $K'$. Thus, $K'$ contains a fundamental domain for the $\mathcal M$ action on $\mathbb{H}$. Lastly, $K'$ has horoheight bounded below and bounded extent along $\mathbb X$, so has finite hyperbolic volume. \end{proof} \end{lemma} \begin{lemma}\label{lemma:piergodic} The first-return map on $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$ is a.e.~well-defined and ergodic. \begin{proof} Consider the family $\mathcal F \subset T^1\mathbb{H}$ of geodesic rays that pass through $\mathcal{C}_{\mathbb{W}}$. Recalling that $\mathcal{C}_{\mathbb{W}}$ consists of geodesics coming from large horoheight through the wall $\mathbb{W}$ and proceeding to $K$, it is clear that $\mathcal F$ has positive measure. Since $\mathcal M$ is discrete, $\pi_\mathbb{H}(\mathcal F)$ also has positive measure.
Thus, by ergodicity, almost every geodesic in $\mathcal M\backslash \mathbb{H}$ passes through $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$. Since $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$ is generically transverse to geodesic flow, we conclude that almost every geodesic ray in $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$ returns to $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$, and that the resulting first-return map is ergodic. \end{proof} \end{lemma} We are now able to show that markable geodesics are generic: \begin{cor} \label{cor:markablegeneric} Almost every geodesic $\gamma$ satisfying $\gamma(0)\in \mathcal{C}_{\mathbb{W}}$ is markable. \begin{proof} By the previous lemma, the first-return mapping on $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$ is well-defined. Thus, given a generic geodesic ray $\gamma$ in $\mathcal{C}_{\mathbb{W}}$, $\pi_\mathbb{H}(\gamma)$ will return to $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$ after some time. Lifting to $\mathbb{H}$, this implies that $\gamma$ intersects $M\mathcal{C}_{\mathbb{W}}$ for some $M\in \mathcal M$. Iterating the first-return map gives infinitely many intersections. Reversing the flow gives the same result for the backward orbit of $\gamma$. \end{proof} \end{cor} Now that we have shown that almost all geodesics are markable, we can quickly prove that $\mathcal{C}_{\mathbb{W}}$ has no unexpected symmetries: \begin{cor}\label{cor:piinjective} The restriction of $\pi_\mathbb{H}$ to $\mathcal{C}_{\mathbb{W}}$ is a.e.~injective. \begin{proof} Suppose the statement is false, and there exists a non-identity mapping $M\in \mathcal M$ such that $M\mathcal{C}_{\mathbb{W}} \cap \mathcal{C}_{\mathbb{W}}$ has positive measure. Then by the previous corollary there is a markable geodesic $\gamma$ with $\gamma(0)\in M\mathcal{C}_{\mathbb{W}} \cap \mathcal{C}_{\mathbb{W}}$. 
But then we have $\gamma(0)\in \mathcal{C}_{\mathbb{W}}$ and $M\gamma(0)\in \mathcal{C}_{\mathbb{W}}$, and it follows from the Intersection Detection Property of Theorem \ref{thm:markable} that $M=M_{i_0}=\operatorname{id}$. \end{proof} \end{cor} \begin{defi} Let us define a mapping $\psi: \mathcal{C}_{\mathbb{W}}\rightarrow \mathcal{C}_{\mathbb{W}}$ by $\psi(\gamma)(t)=M_{i_1}^{-1}\gamma(t+t_1)$, where $M_{i_1}$ and $t_1$ are given by Theorem \ref{thm:markable}. This is well-defined almost everywhere. \end{defi} \begin{prop}\label{prop:phiergodic} The mapping $\psi: \mathcal{C}_{\mathbb{W}}\rightarrow\mathcal{C}_{\mathbb{W}}$ is ergodic. \begin{proof} The first-return map on $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$ is ergodic by Lemma \ref{lemma:piergodic}. Corollary \ref{cor:piinjective} then allows us to identify $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$ with $\mathcal{C}_{\mathbb{W}}$, and Theorem \ref{thm:markable} tells us that $\psi$ is indeed a lift of the first-return mapping on $\pi_\mathbb{H}(\mathcal{C}_{\mathbb{W}})$. \end{proof} \end{prop} \subsection{Ergodicity of a Natural Extension and of the Gauss Map}\label{subsec:extension} At this point, we would like to project $\mathcal{C}_{\mathbb{W}}$ onto the forward endpoint and use the ergodicity of $\psi$ to derive the ergodicity of $T$. However, the transformation that $\psi$ induces on the forward endpoint is a jump transformation associated to $T$ and it is not the case that the ergodicity of a jump transformation implies the ergodicity of the original transformation. (See, for example, Chapters 17--19 of \cite{SchweigerBook}.) So we will instead project onto both endpoints and analyze the resulting transformation more carefully. Throughout the rest of this section, we will assume, without directly stating it, that all statements about sets hold up to sets of zero measure and that any geodesic under consideration is markable, since this is a generic condition. 
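As a toy comparison, the following sketch works in the classical one-dimensional theory, with the real Gauss map $T(x)=1/x \bmod 1$ and an interval $A$ chosen purely for illustration (these choices are ours and are not the Iwasawa setting of this paper); it shows an induced, i.e.\ first-return, transformation, whose return time varies from point to point.

```python
import math

def gauss(x):
    """Classical one-dimensional Gauss map T(x) = 1/x mod 1."""
    y = 1.0 / x
    return y - math.floor(y)

def induced_map(x, in_A, max_iter=10_000):
    """First-return (induced) transformation of T on the set A:
    iterate T until the orbit re-enters A, returning (return time, point)."""
    for n in range(1, max_iter):
        x = gauss(x)
        if in_A(x):
            return n, x
    raise RuntimeError("orbit did not return to A")

in_A = lambda x: 0.5 <= x < 1.0  # an illustrative choice of cross-section

# (sqrt(5) - 1)/2 is a fixed point of T, so it returns to A in one step,
n1, _ = induced_map((math.sqrt(5) - 1) / 2, in_A)
# while 0.72 leaves A and needs two applications of T to re-enter it.
n2, _ = induced_map(0.72, in_A)
```

Roughly speaking, a jump transformation accelerates $T$ by a state-dependent number of steps on the whole space rather than only on a cross-section; see \cite{SchweigerBook} for the precise relationship.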
We continue to work with a complete, discrete, and proper Iwasawa CF expansion. Let $\pi:\mathcal{C}_{\mathbb{W}}\to K\times \mathbb X$ be the injective map sending a geodesic $\gamma$ intersecting $\mathcal{C}_{\mathbb{W}}$ to its pair of forward and backward endpoints $(\gamma_+,\gamma_-)$. On $\pi(\mathcal{C}_{\mathbb{W}})$, $\psi$ induces the mapping $\Psi=\pi\circ \psi\circ \pi^{-1}$. Since, by the Markable Geodesic Theorem \ref{thm:markable}, $\psi$ acts on a geodesic $\gamma$ by the mapping $M_{i_1}$ associated to $\gamma_+$, we conclude that $\Psi(\gamma_+,\gamma_-)=(M_{i_1}^{-1}\gamma_+,M_{i_1}^{-1}\gamma_-)$. Let us extend the Gauss map $T$ to act on $K\times \mathbb X$ by $\hat{T}(z,w)=(M_1^{-1} z,M_1^{-1} w)$ where $M_1\in\mathcal M$ is the mapping associated to $z$. Since $Tz=M_1^{-1}z$, this truly is an extension. Let $\overline{K}=\cup_{i=0}^\infty \hat{T}^i \pi(\mathcal{C}_{\mathbb{W}})\subset K\times \mathbb X$. We wish to compare how $\Psi$ acts on $\pi(\mathcal{C}_{\mathbb{W}})$ with how $\hat{T}$ acts on $\overline K$. In the following lemma, we will show that the restriction $\hat{T}|_{\overline{K}}$ of $\hat{T}$ to $\overline{K}$ is well-behaved. \begin{lemma} \label{lemma:Tinvariance} $\hat{T}|_{\overline{K}}: \overline K \rightarrow \overline K$ is surjective. Furthermore, a.e.~point of $\overline K$ returns to $\pi(\mathcal{C}_{\mathbb{W}})$ within finitely many iterations of $\hat{T}|_{\overline{K}}$, so that we have \[\label{eq:overlineKdef} \overline{K} = \bigcup_{i=0}^\infty \hat{T}|_{\overline{K}}^{-i} \pi(\mathcal{C}_{\mathbb{W}}).\] \begin{proof} It is immediate from the definition of $\overline K$ that $\hat{T}|_{\overline{K}}\overline K \subset \overline K$. To prove the reverse containment, we wish to show that for any $(z,w)\in\overline K$, there exists $(z',w')\in \overline K$ with $\hat{T}|_{\overline{K}}(z',w')=(z,w)$.
Since $(z,w)\in \overline{K}$, there exists a smallest non-negative integer $i$ such that $(z,w)\in \hat{T}|_{\overline{K}}^i \pi(\mathcal{C}_{\mathbb{W}})$. If $i\ge 1$, then clearly there is $(z',w')\in \hat{T}|_{\overline{K}}^{i-1}\pi(\mathcal{C}_{\mathbb{W}})$ such that $\hat{T}|_{\overline{K}}(z',w')=(z,w)$. So suppose $i=0$. Then $(z,w)\in \pi(\mathcal{C}_{\mathbb{W}})$. Since $\Psi$ is an onto map of $\pi(\mathcal{C}_{\mathbb{W}})$ to itself, for a.e.~$(z,w)$ there exists some $(z'',w'')$ such that $\Psi(z'',w'')=(z,w)$. Thus, if we let $i_1$ be the index so that $\Psi(z'',w'')=(M_{i_1}^{-1}z'',M_{i_1}^{-1} w'')=\hat{T}|_{\overline{K}}^{i_1}(z'',w'')$, then we have that $(z,w)\in \hat{T}|_{\overline{K}}^{i_1} \pi(\mathcal{C}_{\mathbb{W}})$ with $i_1>0$ and the argument of the previous paragraph applies. Implicit in the last paragraph is the idea that for a.e.~$(z,w)\in \pi(\mathcal{C}_{\mathbb{W}})$, $\Psi(z,w)\in \pi(\mathcal{C}_{\mathbb{W}})$ as well, so that $(z,w)$ returns to $\pi(\mathcal{C}_{\mathbb{W}})$ in a finite number of iterations of $\hat{T}|_{\overline{K}}$. Since every $(z,w)\in \overline{K}\setminus \pi(\mathcal{C}_{\mathbb{W}})$ appears in some $\hat{T}|_{\overline{K}}^i \pi(\mathcal{C}_{\mathbb{W}})$, say, $\hat{T}|_{\overline{K}}^i (z'',w'')=(z,w)$, we can also extend this to say that a.e.~point in $\overline{K}$ returns to $\pi(\mathcal{C}_{\mathbb{W}})$ under a finite number of iterations. This immediately shows that $\overline{K} \subset \bigcup_{i=0}^\infty \hat{T}|_{\overline{K}}^{-i} \pi(\mathcal{C}_{\mathbb{W}})$ and the reverse inclusion is trivial. \end{proof} \end{lemma} We restrict our attention to $\overline{K}$, setting $\hat{T}:=\hat{T}|_{\overline{K}}$. The equation \eqref{eq:overlineKdef} looks similar to the definition of a natural extension, so raises the following question, which we will not address: \begin{question} Is $\hat{T}: \overline K\rightarrow \overline K$ the natural extension of $T: K\rightarrow K$? 
\end{question} Now we can state the connection between $\Psi$ and $\hat{T}$: \begin{lemma}\label{lemma:induced} $\Psi$ is the transformation induced by restricting $\hat{T}$ to $\pi(\mathcal{C}_{\mathbb{W}})$. \begin{proof} Since $\mathcal Z$ is countable, the set of points in $K$ with eventually periodic continued fraction expansions is countable as well, and hence, since we are working up to measure zero, we may assume any points under consideration are not eventually periodic. Let $(z,w)\in\pi(\mathcal{C}_{\mathbb{W}})$ and let $i(z,w)$ be the minimal positive integer such that $\hat{T}^{i(z,w)}(z,w)\in\pi(\mathcal{C}_{\mathbb{W}})$. The existence of $i(z,w)$ a.e.~follows from Lemma \ref{lemma:Tinvariance}. We wish to show that, where it exists, $\hat{T}^{i(z,w)}(z,w)=\Psi(z,w)$. Let $\gamma$ be the markable geodesic with endpoints $(z,w)$, and let $i_1$ be the corresponding value from the marking in Theorem \ref{thm:markable}. Then $\Psi(z,w)=(M_{i_1}^{-1}z,M_{i_1}^{-1}w)$ and thus $\hat{T}^{i_1}(z,w)=\Psi(z,w)\in\pi(\mathcal{C}_{\mathbb{W}})$. By the minimality of $i(z,w)$, we have that $i(z,w)\le i_1$. We must show that $i(z,w)$ cannot be strictly less than $i_1$. Suppose $i(z,w)<i_1$ and consider the mapping $M=M_{i(z,w)}$. Since $(M^{-1}z,M^{-1}w)\in\pi(\mathcal{C}_{\mathbb{W}})$, $M^{-1}\gamma$ intersects $\mathcal{C}_{\mathbb{W}}$. This means $\gamma$ intersects $M\mathcal{C}_{\mathbb{W}}$ and thus by the Intersection Detection property of Theorem \ref{thm:markable}, $M=M_{i_j}$ for some $j$. Since the two mappings are equal, we have that $T^{i(z,w)}z=M_{i(z,w)}^{-1}z=M_{i_j}^{-1} z=T^{i_j} z$. But since we have assumed $z$ does not have an eventually periodic expansion, this is only possible if $i(z,w)=i_j$. And since there are no positive $i_j$ between $0$ and $i_1$, we must have that $i(z,w)=i_1$, which completes the proof. \end{proof} \end{lemma} We next prove that $\hat{T}$ is ergodic on $\overline{K}$ before concluding that $T$ is ergodic on $K$. 
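By way of analogy only (this is the classical real Gauss map, not the Iwasawa setting, and we do not claim it answers the question above), the standard two-sided extension of the one-dimensional Gauss map also updates both coordinates using the digit read off the first coordinate, just as $\hat{T}(z,w)=(M_1^{-1}z,M_1^{-1}w)$ does:

```python
import math

def gauss_ext(x, y):
    """One step of the two-sided map (x, y) -> (1/x - a, 1/(a + y)), a = floor(1/x):
    both coordinates are moved by the digit a read off the first coordinate."""
    a = math.floor(1.0 / x)
    return 1.0 / x - a, 1.0 / (a + y)

x, y = math.pi / 4, 0.3   # arbitrary starting pair, chosen for illustration
for _ in range(5):
    x, y = gauss_ext(x, y)
# the first coordinate follows the usual Gauss map; the second coordinate
# accumulates the digits already consumed and remains in (0, 1]
```

The first coordinate undergoes the usual forward dynamics, while the second coordinate records the past, mirroring the role of the backward endpoint $\gamma_-$ above.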
\begin{remark} Note that $\hat{T}$ is nonsingular: if $A\subset \overline K$ has measure zero, then so does $\hat{T}^{-1}A$. \end{remark} \begin{lemma}\label{lemma:kbarergodic} $\hat{T}$ is ergodic on $\overline{K}$. \begin{proof} It is clear that $\Psi$ is ergodic on $\pi(\mathcal{C}_{\mathbb{W}})$ because it is isomorphic to $\psi$ on $\mathcal{C}_{\mathbb{W}}$, which is ergodic by Proposition \ref{prop:phiergodic}. Let $A, B \subset \overline K$ be complementary $\hat{T}$-invariant regions. We must show one of them has measure zero. Note that $A\cap \pi(\mathcal{C}_{\mathbb{W}})$ and $B\cap \pi(\mathcal{C}_{\mathbb{W}})$ are complementary regions of $\pi(\mathcal{C}_{\mathbb{W}})$. Furthermore, since $\Psi$ is the induced map of $\hat{T}$ on $\pi(\mathcal{C}_{\mathbb{W}})$, the action of $\Psi$ on each point in $A\cap \pi(\mathcal{C}_{\mathbb{W}})$ is a power of the map $\hat{T}$. Since $\hat T A\subset A$ and $\Psi(\pi(\mathcal{C}_{\mathbb{W}}))=\pi(\mathcal{C}_{\mathbb{W}})$, $\Psi(A\cap \pi(\mathcal{C}_{\mathbb{W}}))\subset A\cap \pi(\mathcal{C}_{\mathbb{W}})$ and likewise $\Psi(B\cap \pi(\mathcal{C}_{\mathbb{W}}))\subset B\cap \pi(\mathcal{C}_{\mathbb{W}})$. Since $\Psi$ is an onto map and $A\cap \pi(\mathcal{C}_{\mathbb{W}})$ and $B\cap \pi(\mathcal{C}_{\mathbb{W}})$ are complementary regions of $\pi(\mathcal{C}_{\mathbb{W}})$, it follows that the two intersections must be invariant under $\Psi$. Thus, by the ergodicity of $\Psi$ one of them (say, $A\cap \pi(\mathcal{C}_{\mathbb{W}})$) must have measure zero. But by \eqref{eq:overlineKdef}, $\overline K = \cup_{i=0}^\infty \hat{T}^{-i}(\pi(\mathcal{C}_{\mathbb{W}}))$, so $A=A\cap \overline K = \cup_{i=0}^\infty A \cap \hat{T}^{-i} ( \pi(\mathcal{C}_{\mathbb{W}}))=\cup_{i=0}^\infty \hat{T}^{-i} A \cap \hat{T}^{-i} ( \pi(\mathcal{C}_{\mathbb{W}}))= \cup_{i=0}^\infty \hat{T}^{-i} (A\cap \pi(\mathcal{C}_{\mathbb{W}}))$, so $A$ has measure zero by the nonsingularity of $\hat{T}$, as desired.
\end{proof} \end{lemma} We can now project to the first coordinate to complete the proof of Theorem \ref{thm:main} (see also \S \ref{sec:sketch}): \begin{proof} Let us suppose the Gauss map is not ergodic. Then there are complementary subsets $A$ and $B$ of $K$ that are both invariant under $T$ and have non-zero measure. We may extend these to complementary subsets $A', B'$ of $\overline K$ by taking their preimages under projection to the first coordinate. Both $A'$ and $B'$ have positive measure: indeed, $\pi(\mathcal{C}_{\mathbb{W}})\subset \overline K$, and we claim that there exists a neighborhood $U$ of infinity in $\mathbb X$ such that $K\times U\subset \pi(\mathcal{C}_{\mathbb{W}})$, so that $A'\supset A\times U$ and $B'\supset B\times U$. Let us now show that this set $U$ does exist. Consider any pair $(\gamma_+,\gamma_-)$ of endpoints of a geodesic $\gamma$, such that $\gamma_+\in K$ and $\norm{\gamma_-}$ is sufficiently large. In particular, if $\norm{\gamma_-}>1+\epsilon$ with $\epsilon$ as in Lemma \ref{lemma:horocap}, then the conclusion of that lemma and the definition of $\mathbb{W}$ imply that the geodesic $\gamma$ passes through $\mathbb{W}$. Moreover, by taking the framework of Lemma \ref{lemma:horocap} and dilating, we see that if $\norm{\gamma_-}$ is sufficiently large, then the geodesic must travel far into the cusp at infinity: namely, there must exist a time $\hat{t}$ such that $\operatorname{ht}_\infty(\gamma(\hat{t}))>h_0$. Thus, $\gamma$ does intersect $\mathcal{C}_{\mathbb{W}}$ and $(\gamma_+,\gamma_-)\in\pi(\mathcal{C}_{\mathbb{W}})$ as desired. (Since we are working up to measure zero sets, we may assume that $\gamma_+\not\in \mathcal M(\{\infty\}\cup E)$ as well.) Consider $\hat{T}^{-1}A'$. Any point $(z,w)\in \overline{K}$ such that $\hat{T}(z,w)\in A'$ must clearly satisfy $Tz\in A$. In other words, $z\in T^{-1} A=A$. Thus $(z,w)\in A'$, so $\hat{T}^{-1}A'\subset A'$ and likewise $\hat{T}^{-1}B'\subset B'$. 
Hence $A'$ and $B'$ are both disjoint $\hat{T}$-invariant subsets of $\overline K$ with positive measure. The ergodicity of $\hat T: \overline K\rightarrow \overline K$ provided by Lemma \ref{lemma:kbarergodic} gives the contradiction. \end{proof} \begin{remark} We have proved ergodicity with respect to Lebesgue measure, but with the framework we have developed, we may now consider the question of absolutely continuous invariant measures as well. First, note that since geodesic flow preserves Haar measure on $\mathbb{H}$, there is a canonical derivation of an invariant measure for $\psi$ on $\mathcal{C}_{\mathbb{W}}$. This then projects to an invariant measure for $\Psi$ on $\pi(\mathcal{C}_{\mathbb{W}})$. Since $\Psi$ is the transformation induced by restricting $\hat{T}$ to $\pi(\mathcal{C}_{\mathbb{W}})$, there is again a canonical derivation of an invariant measure for $\hat{T}$ on $\overline{K}$ (see \cite[Thm.~17.1.6]{SchweigerBook}). From here projection onto the first coordinate would give an invariant measure for $T$ on $K$. Each of these operations preserves absolute continuity with respect to the corresponding Hausdorff measure. Note that even though the measure on $\mathcal{C}_{\mathbb{W}}$ and $\pi(\mathcal{C}_{\mathbb{W}})$ is bounded, the measure on $\overline K$ and $K$ may be infinite. Indeed, this occurs for the Rosen continued fractions \cite{GH}. \end{remark} \subsection{Application: Ergodic components of Incomplete Iwasawa CFs} In this subsection we will prove Theorem \ref{thm:notmain}. Let $\mathcal{R}$ denote the set of central symmetries of $\mathcal M$ (cf.~Definition \ref{defi:centrallySymmetric}). \begin{lemma}\label{lemma:quasicommutative} Let $r\in \mathcal{R}$. Then for any $a\in \mathcal Z$ there exists $a'\in \mathcal Z$, $r'\in \mathcal{R}$ such that $a\iota r = r'a'\iota$. Moreover, if $r'$ is the identity, then $r$ must be as well. 
\end{lemma} \begin{proof} Since $a\iota r \iota^{-1}\in\text{Stab}_\mathcal M(\infty)$, the decomposability assumption on $\mathcal{R}$ implies that there exist $r'\in\mathcal{R}$ and $a'\in \mathcal Z$ such that $a\iota r \iota^{-1}=r'a'$, and hence $a\iota r=r'a'\iota$, as desired. Let $r''$ denote $\iota r \iota^{-1}$. Since this fixes $0$ and $\infty$, it must belong to $\mathcal{R}$. So if $r'$ is the identity, then $r'a'=ar''$ implies that $a^{-1}a'=r''$. But $\mathcal{R}\cap \mathcal Z=\{\operatorname{id}\}$, so $r''$ and hence $r$ must be the identity. \end{proof} At this point we wish to start connecting the behavior of an incomplete Iwasawa CF with $n$ central symmetries with the behavior of its completion. As such let us specialize our notation. Let $K$ be the symmetric fundamental domain for the incomplete continued fraction over $\mathcal Z$ and let $K_c$ be an associated fundamental domain for the \emph{completion} of the continued fraction over $\text{Stab}_\mathcal M(\infty)$ so that $K=\bigcup_{r\in\mathcal{R}}r K_c$ up to a set of measure zero. Let $T$ be the Gauss map on $K$ that acts by $\iota$ and then an element of $\mathcal Z$. Let $T_c$ be the Gauss map on $K_c$ that acts by $\iota$ and then an element of $\text{Stab}_\mathcal M(\infty)$. \begin{lemma} With the notation of the paragraph directly above, the map $T$ on $K$ is isomorphic to a skew-product $T_c\rtimes f$ on $K_c\times \mathcal{R}$ over the map $T_c$ on $K_c$. \begin{proof} There is an obvious isomorphism between $K_c\times \mathcal{R}$ and $K$ given by $(z,r)\leftrightarrow rz$. The map $T$ acts on $rz$ by $a\iota$ for some $a\in \mathcal Z$. By Lemma \ref{lemma:quasicommutative}, there exist $a'\in \mathcal Z$ and $r'\in \mathcal{R}$ such that $T(rz)=r'a'\iota (z)$. Let $r''$ be such that $r''a'\iota(z)\in K_c$, so that $T$ can be considered as acting on the space $K_c\times \mathcal{R}$ by \( (z,r)\mapsto (r''a'\iota z, r'r''^{-1}). 
\) Since $r''a'\in \text{Stab}_\mathcal M(\infty)$, this maps $(z,r)$ to $T_c(z)$ in the first coordinate. Let $f(z,r)=r'r''^{-1}$, so that $T=T_c\rtimes f$. To show that $T_c\rtimes f$ is truly a skew-product and finish the proof, we must show that for almost all fixed $z$, $f(z,\cdot)$ is an injection (and hence a bijection). Suppose that $f(z,\cdot)$ is not an injection, so that $r_1\neq r_2$ but $f(z,r_1)=f(z,r_2)$. This implies that $T(r_1z)=T(r_2 z)$. Let $a_1,a_2\in \mathcal Z$ be such that $T$ acts by $a_1\iota $ on $r_1z$ and acts by $a_2 \iota$ on $r_2 z$. Then $a_1\iota r_1 \iota^{-1} (\iota z)=a_2\iota r_2 \iota^{-1} (\iota z)$. But for almost all $z$ (namely, those $z$ not belonging to the exceptional set $E$ \eqref{eq:Edefn}), $a_1\iota r_1\iota^{-1}$ is the unique element of $\text{Stab}_\mathcal M(\infty)$ that brings $\iota z$ to $K$. Thus, for such $z$, $a_1\iota r_1 \iota^{-1} =a_2\iota r_2 \iota^{-1}$. Recall from the proof of the previous lemma that $\iota r_1\iota^{-1},\iota r_2\iota^{-1}\in \mathcal{R}$. So by the uniqueness of the decomposition, we have that $\iota r_1 \iota^{-1}=\iota r_2\iota^{-1}$, and hence $r_1=r_2$. So $f(z,\cdot)$ is injective. \end{proof} \end{lemma} Theorem \ref{thm:notmain} immediately follows from the next lemma: \begin{lemma} Let $A$ be any ergodic component of $K$ with positive measure. Then the measure of $A$ must be at least $1/|\mathcal{R}|$ (all with respect to a normalized Lebesgue measure on $K$). \begin{proof} We may consider $A$ as a positive measure subset of $K_c\times \mathcal{R}$ invariant under the skew-product $T_c \rtimes f$ defined in the previous lemma. Consider also the standard projection onto the first coordinate: $\pi_{K}:K_c\times \mathcal{R}\to K_c$. 
Since $T_c$ is the Gauss map associated to a discrete, proper, and \emph{complete} Iwasawa CF expansion, it is ergodic by Theorem \ref{thm:main}. Thus it suffices to prove that $\pi_K(A)$ is a $T_c$-invariant set: by ergodicity, $\pi_K(A)$ then has full measure in $K_c$, which itself has measure $1/|\mathcal{R}|$ in $K$, so that $A$ has measure at least $1/|\mathcal{R}|$. Suppose $z\in \pi_K(A)$, so that there exists $r\in \mathcal R$ such that $(z,r)\in A$. Let $z'\in T_c^{-1}z$. Then, since $T_c\rtimes f$ is a skew-product, there exists (for almost all such $z$) $r' \in \mathcal R$ such that $(T_c\rtimes f)(z',r')=(z,r)$. Thus $(z',r')\in (T_c\rtimes f)^{-1} A = A$, so $z'\in \pi_K(A)$. Thus $T_c^{-1}\pi_K(A)$ is, up to measure zero, a subset of $\pi_K(A)$. Now suppose $z\in \pi_K(A)$ and again let $r\in \mathcal R$ be such that $(z,r)\in A=(T_c\rtimes f)^{-1} A$. Thus $(T_c\rtimes f)(z,r)\in A$, and projecting this into the first coordinate, we see that $T_c z\in \pi_K(A)$. Thus $\pi_K(A)\subset T_c^{-1}\pi_K(A)$. This proves the two sets are equal up to measure zero, as desired. \end{proof} \end{lemma} In certain cases one can show that the skew-product over an ergodic transformation is itself ergodic; see \cite{Vmatrix} and related papers of the second author for some interesting examples. If we could prove such a result here, we could remove the completeness condition in the case of centrally symmetric systems. \subsection{Application: Tail Equivalence}\label{sec:tail} In this section we prove Theorem \ref{thm:IntroTail} in the following more precise formulation (note that markable geodesics are generic by Corollary \ref{cor:markablegeneric}): \begin{thm}[Tail equivalence of markable geodesics] \label{thm:tail} Let $\gamma$ be a markable geodesic and $\gamma'=M\gamma$ with $M\in\mathcal M$ and $\gamma'_+\in K$. If $a_i, a'_i$ are the sequences of CF digits of $\gamma_+$ and $\gamma'_+$, respectively, then they have the same tail---i.e., there exist some $k,k'\in\mathbb{N}$ such that $a_{k+i}=a'_{k'+i}$ for all $i\ge 1$. 
\begin{remark} We note that the condition $\gamma'_+\in K$ is not necessary. Without it, we could define $a'_0=\floor{\gamma'_+}$ and let the continued fraction expansion of $\gamma'_+$ start with this $a'_0$; however, since this $a'_0$ might be confused with the corresponding digit of the marking, we will not use it here. \end{remark} \begin{proof} While $\gamma'$ is a markable geodesic, it may or may not pass through $\mathcal{C}_{\mathbb{W}}$. The result follows immediately from Theorem \ref{thm:markable} if $\gamma'$ \emph{does} pass through $\mathcal{C}_{\mathbb{W}}$: the Cusp Detection Property gives us that for some $j$, $M=M_{i_j}^{-1}$. So the marking of $\gamma'$ is a shift of the marking of $\gamma$. If $j\ge 0$, then $a'_i=a_{i_j+i}$ for $i\ge 1$, and if $j<0$, then $a'_{-i_j+i}=a_i$ for $i\ge 1$. We now assume that $\gamma'$ does not pass through $\mathcal{C}_{\mathbb{W}}$. If $\norm{\gamma'_-}\ge 1+\epsilon$, with $\epsilon$ as in Lemma \ref{lemma:horocap}, then we apply Lemma \ref{lemma:finiteintersection} to see that $\gamma'$ intersects $\mathbb{W}$. Let $\gamma''(t)=\gamma'(t+t')$ be such that $\gamma''(0)\in\mathbb{W}$. On the other hand, if $\norm{\gamma'_-}<1+\epsilon$, then we may apply the proof of Lemma \ref{lemma:exhile} to $\gamma'$ to find an index $\mathfrak{i}_1$ and corresponding time $\mathfrak{t}_1$ such that $M_{\mathfrak{i}_1}^{-1}\gamma'(\mathfrak{t}_1)\in\mathbb{W}$. (Note that the condition in the lemma that $\gamma(0)\in\mathbb{W}$ is not actually used in the proof, only that $|\gamma(0)|<1+\epsilon$. Moreover, since $\gamma'$ is markable, we know that $\gamma'_+\not\in \mathcal M\infty$.) In this case, let $\gamma''(t)=M_{\mathfrak{i}_1}^{-1}\gamma'(t+\mathfrak{t}_1)$, so that once again $\gamma''(0)\in \mathbb{W}$. We claim that $\gamma'_+$ and $\gamma''_+$ are tail-equivalent. This is obvious in the first case, since $\gamma'_+=\gamma''_+$. 
In the second case, they are still tail-equivalent, since $\gamma''_+=T^{\mathfrak{i}_1}\gamma'_+$ and $T$ again acts via a shift of the digits. Moreover, $\gamma''$ is still a markable geodesic, since this property is $\mathcal M$-invariant. By applying the idea of the proof of Lemma \ref{lemma:isspeedup}, we have that $\gamma''$ intersects $M_{\mathfrak{i}_j}\mathcal{C}_{\mathbb{W}}$ at time $\mathfrak{t}_j$ for some $j$. In particular, if we let $\gamma'''(t)=M_{\mathfrak{i}_j}^{-1}\gamma''(t+\mathfrak{t}_j)$, then by the same argument as previously, we see that $\gamma'''_+$ is tail-equivalent to $\gamma''_+$ and hence to $\gamma'_+$. In addition, $\gamma'''$ now passes through $\mathcal{C}_{\mathbb{W}}$ so our earlier argument applies and we see that $\gamma'''_+$ is tail-equivalent to $\gamma_+$, as desired. \end{proof} \end{thm} \bibliographystyle{amsplain}
https://arxiv.org/abs/2002.00921
Repeated patterns in proper colourings
For a fixed graph $H$, what is the smallest number of colours $C$ such that there is a proper edge-colouring of the complete graph $K_n$ with $C$ colours containing no two vertex-disjoint colour-isomorphic copies, or repeats, of $H$? We study this function and its generalisation to more than two copies using a variety of combinatorial, probabilistic and algebraic techniques. For example, we show that for any tree $T$ there exists a constant $c$ such that any proper edge-colouring of $K_n$ with at most $c n^2$ colours contains two repeats of $T$, while there are colourings with at most $c' n^{3/2}$ colours for some absolute constant $c'$ containing no three repeats of any tree with at least two edges. We also show that for any graph $H$ containing a cycle there exist $k$ and $c$ such that there is a proper edge-colouring of $K_n$ with at most $c n$ colours containing no $k$ repeats of $H$, while, for a tree $T$ with $m$ edges, a colouring with $o(n^{(m+1)/m})$ colours contains $\omega(1)$ repeats of $T$.
\section{Introduction} \label{sec:intro} A considerable body of recent work in extremal combinatorics is devoted to the study of rainbow patterns in proper edge-colourings of complete graphs. To mention two such results (amongst many~\cite{BaPoSu, BePoSu, CoPe, EhGlJo, GaRaWaWo, GlJo, GlKuMoOs, KeYe, KiKuKuOs, MoPoSu, MoPoSu2, Po, PoSu, PoSu2}), there is the work of Alon, Pokrovskiy and Sudakov~\cite{AlPoSu} showing that any proper edge-colouring of $K_n$ contains a rainbow path of length $n - o(n)$ and the work of Montgomery, Pokrovskiy and Sudakov~\cite{MoPoSu3} and, independently, Keevash and Staden~\cite{KeSt} resolving a celebrated conjecture of Ringel, one of whose statements involves finding a rainbow copy of any tree with $n$ edges in a particular proper edge-colouring of $K_{2n+1}$. For the most part, this recent work has focused on finding large structures in proper edge-colourings. We instead study small structures, our aim being to understand when a proper edge-colouring contains two or more repeats of a particular graph $H$. To be more precise, we say that two copies of a graph $H$ in a colouring of $K_n$ are \emph{colour isomorphic} if there exists an isomorphism between them preserving the colours. The following function is our main object of study. \begin{definition} For $k,n\geq 2$ and a graph $H$, define $f_k(n,H)$ to be the smallest integer $C$ such that there is a proper edge-colouring of $K_n$ with $C$ colours containing no $k$ vertex-disjoint colour-isomorphic copies (or `repeats') of $H$. \end{definition} We make several remarks about this definition. First, one could, in principle, ask the same question without the restriction to proper colourings. However, this changes the character of the question completely. Indeed, consider the colouring of the complete graph on vertex set $\{1, 2, \dots, n\}$ where we colour the edge $ij$ with $i < j$ by the colour $i$. 
Then this is a colouring with $n-1$ colours which does not even contain two disjoint edges of the same colour. On the other hand, when we restrict to proper colourings, we have that $f_k(n, K_2) \geq \lceil \frac{1}{k-1}\binom{n}{2}\rceil$ by a straightforward application of the pigeonhole principle. For $n$ sufficiently large in terms of $k$, it also follows from several well-known decomposition results, such as Gustavsson's theorem~\cite{Gu}, that this bound is tight. Our second remark collects together several simple observations that we will use throughout. \begin{remark}\label{rem:trivial} The quantity $f_k(n,H)$ is monotone increasing in $n$, but decreasing in $k$ and in $H$ (with respect to taking subgraphs). Moreover, since every proper colouring has at least $n-1$ colours, $$\binom{n}{2}\geq f_k(n,H)\geq n-1.$$ \end{remark} Finally, although our definition contains no requirement that the copies of $H$ should be rainbow, all of the results where we find repeats of a particular graph $H$ remain true up to a constant factor if we insist that each copy is rainbow. This then brings our work more fully in line with the body of research discussed at the outset. With these preliminaries out of the way, we now describe our main results. In the classical Tur\'an problem, the growth rate of the extremal function ex$(n,H)$ is subject to a well-known trichotomy. Namely, non-bipartite graphs, bipartite graphs with a cycle and forests satisfy $\textrm{ex}(n,H)=\Theta(n^2)$, $n^{1+\Omega(1)} \leq \textrm{ex}(n,H)\leq n^{2-\Omega(1)}$ and $\textrm{ex}(n,H)=\Theta(n)$, respectively. Our first theorem shows that something broadly similar holds for $f_2(n, H)$, although, unlike the extremal function, our function can, and usually does, degenerate for bipartite graphs with a cycle. Note that here and throughout, all terms in the $O$-notation are to be interpreted with respect to $n$, with all other variables treated as constants. 
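As a sanity check on the example above, the following short script (illustrative only, not part of the argument) verifies for small $n$ that the colouring assigning the edge $ij$, $i<j$, the colour $i$ uses $n-1$ colours and contains no two disjoint edges of the same colour.

```python
# Check the "min-endpoint" colouring of K_n: edge {i, j} with i < j gets
# colour i.  Any two edges of colour i share the vertex i, so there is no
# monochromatic pair of disjoint edges, despite only n - 1 colours being used.
from itertools import combinations

def min_endpoint_colouring(n):
    return {(i, j): i for i, j in combinations(range(1, n + 1), 2)}

def has_disjoint_monochromatic_pair(colouring):
    for (e, ce), (f, cf) in combinations(colouring.items(), 2):
        if ce == cf and not set(e) & set(f):
            return True
    return False

for n in range(3, 9):
    c = min_endpoint_colouring(n)
    assert len(set(c.values())) == n - 1
    assert not has_disjoint_monochromatic_pair(c)
```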
\begin{theorem}\label{thm:f2} The growth rate of $f_2(n,H)$ satisfies: \begin{itemize} \item[(i)] $f_2(n,H)=\Theta(n^2)$ if $H$ is a forest. Otherwise, $f_2(n,H)=O(n^{2-\Omega(1)})$. \item[(ii)] If $H$ is non-bipartite, then $f_2(n,H)\leq n+1$. \item[(iii)] If $H$ is bipartite and $e(H)\geq 2|H|-2$, then $f_2(n,H)=\Theta(n)$. \item[(iv)] There exist bipartite graphs $H$ with $n^{1+\Omega(1)}\leq f_2(n,H)\leq n^{2-\Omega(1)}$. \end{itemize} \end{theorem} For three or more repeats, the class of graphs for which we know that $f_k(n, H) = O(n)$ grows as $k$ increases. In fact, for any graph $H$ containing a cycle, we can show, by using a variant of Bukh's random algebraic method~\cite{Bu}, that there exists $k$ such that $f_k(n,H) = O(n)$. \begin{theorem} \label{thm:cyclim} For any graph $H$ containing a cycle, there exists $k$ such that $f_k(n,H) = O(n)$. \end{theorem} For trees, the situation is much more involved, as spelled out in the next theorem, whose proof relies on a mixture of novel combinatorial and algebraic methods. \begin{theorem}\label{thm:trees} For any tree $T$ with $m$ edges and any $k \geq 3$: \begin{itemize} \item[(i)] $f_k(n, T) = \Omega(n^{k/(k-1)})$. Moreover, if $T$ has at least two edges, then $f_3(n,T)= \Theta(n^{3/2})$. \item[(ii)] $f_k(n, T) = \Omega(n^{(m+1)/m})$ and there exists $k'$ such that $f_{k'}(n, T) = O(n^{(m+1)/m})$. \end{itemize} \end{theorem} The paper is structured as follows. In Section~\ref{sec:prelim}, we collect some general upper and lower bounds for $f_k(n,H)$. We then proceed to study trees in Section~\ref{sec:trees}. In Section~\ref{sec:bipartite}, we study even cycles. We conclude, in Section~\ref{sec:outlook}, by discussing a broad range of open problems suggested by our work. Unless otherwise specified, the term `colouring' will refer to proper edge-colourings of a complete graph. 
\section{Preliminary Observations}\label{sec:prelim} Since it is not entirely obvious, we first verify that finding $f_k(n,H)$ is indeed an extremal problem. \begin{lemma} For any $k,n,H$ and $\binom{n}{2}\geq C> f_k(n,H)$, there exists a $C$-colouring of $K_n$ which does not contain $k$ repeats of $H$. \end{lemma} \begin{proof} Suppose for contradiction that the statement does not hold for some $C>f_k(n,H)$ and consider an $f_k(n,H)$-colouring of $K_n$ which avoids $k$ repeats of $H$. In this colouring, take any $C-f_k(n,H)$ edges from colour classes of size at least $2$, leaving at least one edge in each colour class and recolour each of these edges with an entirely new colour. By assumption, the resulting $C$-colouring contains $k$ repeats of $H$. Since each new colour class has only one edge, the repeats must be in the old colours, a contradiction. \end{proof} The following lemma effectively reduces the problem to connected graphs. \begin{lemma}\label{lem:components} Suppose that $H$ is a graph with connected components $H_1,\dots, H_\ell$. Then, for all $k\geq 2$, \begin{equation*}\label{eq:components} f_k(n,H)=\Theta(\min_{i\in [\ell]}\{f_k(n,H_i)\}). \end{equation*} \end{lemma} \begin{proof} Since containing no $k$-fold repeat of one of the $H_i$ implies that there is no $k$-fold repeat of $H$, we immediately have \begin{equation}\label{eq:trivialmin} f_k(n,H)\leq \min_{i\in [\ell]}\{f_k(n,H_i)\}. \end{equation} For the other direction, let $k\geq 2$ be fixed. We will prove the result by induction on $\ell$, noting first that the statement is obviously true for $\ell=1$. Given $\ell\geq 2$, $H$ and $n$, we may assume, without loss of generality, that $$f_k(n,H_\ell)=\min_{i\in [\ell]}\{f_k(n,H_i)\}$$ and let $H':=H\setminus H_\ell$. By the induction hypothesis, there exists a constant $c=c(k,\ell, H_1,\dots,H_{\ell-1})$ such that $$f_k(n, H')\geq c\cdot \min_{i\in[\ell-1]}f_k(n,H_i)\geq c \cdot f_k(n,H_\ell). 
$$ Observe now that \begin{equation}\label{eq:mincomp} f_k(n,H)\geq \min\{f_k(n,H_\ell),f_k(n-k|H_\ell|,H')\}, \end{equation} as otherwise we can find a $k$-repeat of $H_\ell$ and then a $k$-repeat of $H'$ on the remaining vertices, resulting in a $k$-repeat of $H$. If $f_k(n,H_\ell)\leq f_k(n-k|H_\ell|,H')$, then, by~\eqref{eq:mincomp} and ~\eqref{eq:trivialmin}, $f_k(n,H)=f_k(n,H_\ell)$ and we are done. So suppose now that $f_k(n,H_\ell)> f_k(n-k|H_\ell|,H')$, in which case~\eqref{eq:mincomp} becomes \begin{equation}\label{eq:amostfull} f_k(n,H)\geq f_k(n-k|H_\ell|,H'). \end{equation} The right-hand side of~\eqref{eq:amostfull} is at least $f_k(n,H')-k|H_\ell|n$, as can be seen by extending a colouring of the complete graph on $n-k|H_\ell|$ vertices containing no $k$-repeat of $H'$ to a colouring of $K_n$ with the same property by using a new unique colour for each edge (for a total of at most $k|H_\ell|n$ new colours). Therefore, \begin{equation}\label{eq:split_fk} f_k(n,H)\geq f_k(n-k|H_\ell|,H')\geq f_k(n,H')-k|H_\ell|n\geq c\cdot f_k(n,H_\ell)-k|H_\ell|n. \end{equation} If $2(k/c)|H_\ell|n<f_k(n,H_\ell)$, then, by~\eqref{eq:split_fk}, $$f_k(n,H)\geq c f_k(n,H_\ell)-\frac{c}{2}f_k(n,H_\ell)=\frac{c}{2}f_k(n,H_\ell). $$ On the other hand, by Remark~\ref{rem:trivial}, we have $f_k(n,H)\geq n-1\geq n/2$. Hence, if $ 2(k/c)|H_\ell|n\geq f_k(n,H_\ell)$, then $$f_k(n,H)\geq \frac{n}{2}=\frac{2(k/c)|H_\ell|n}{4(k/c)|H_\ell|}\geq \frac{c}{4k|H_\ell|}f_k(n,H_\ell), $$ completing the induction step and the proof. \end{proof} Next, we quickly resolve the case of non-bipartite graphs, proving Theorem~\ref{thm:f2}(ii). \begin{theorem}\label{thm:oddcycle} If $H$ contains an odd cycle, then $f_k(n,H) \leq n+1$. \end{theorem} \begin{proof} It suffices to show this for $k=2$ and $H = C_{2 \ell+1}$. To this end, consider the following well-known colouring $c$. Suppose that $n=2p+1$ is an odd number, let $V(G)=\mathbb{Z}_n$ and put $c(a,b):=a+b \mod n$. 
Clearly this is a proper colouring using $n$ colours. Moreover, whenever there is a copy of $C_{2\ell+1}$ with vertices $v_0,\dots,v_{2\ell+1}=v_0$, labelled along the cycle, we have $c_i:=c(v_i,v_{i+1})=v_i+v_{i+1} \mod n$ for each $i$. The vertices $v_0, \dots , v_{2\ell}$ are then uniquely determined by the colours $c_0,\dots,c_{2\ell}$, since $v_0=(c_0-c_1+c_2-\dots +c_{2\ell})/2 \mod n$ and a similar identity holds for the other $v_i$. Hence, no two distinct, let alone vertex-disjoint, cycles will be colour isomorphic. This proves that $f_k(n,H)\leq n$ when $n$ is odd. For even $n$, monotonicity implies that $f_k(n,H)\leq f_k(n+1,H)\leq n+1$. \end{proof} For odd $n$, the proof of Theorem~\ref{thm:oddcycle} gives $f_k(n,H)\leq n$, which is best possible, as this is the minimum number of colours in any proper colouring when $n$ is odd. For even $n$, the bounds in Remark~\ref{rem:trivial} and Theorem~\ref{thm:oddcycle} differ by $2$. We refer the reader to Section~\ref{sec:outlook} for further discussion. For upper-bound constructions, the following observation will be very useful. A not necessarily proper edge-colouring of $K_n$ is called \emph{$b$-bounded} if each colour class is a graph of maximum degree at most $b$. \begin{prop}\label{prop:b-bounded} For any graph $H$ and integers $b,k,n\geq 2$, any $b$-bounded $C$-colouring of $K_n$ without $k$ repeats of any (not necessarily proper) colouring of $H$ can be refined to a proper colouring with at most $(b+1)C$ colours and no $k$-repeat of $H$. \end{prop} \begin{proof} Since each colour class is a graph of maximum degree $b$, Vizing's theorem implies that the edges in each colour class can be recoloured with at most $b + 1$ new colours so that the new colouring is proper. A $k$-repeat of $H$ in the new colours would imply the same in the old colours and is therefore impossible. \end{proof} The next theorem provides, via a simple probabilistic argument, a general upper bound for $f_k(n,H)$. 
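The additive colouring in the proof of Theorem~\ref{thm:oddcycle} can also be checked computationally in small cases. The following script (an illustration, not part of the proof) verifies that $c(a,b)=a+b \bmod n$ is proper for odd $n$ and that the initial vertex of an odd cycle is recovered from its colour sequence by the alternating-sum formula.

```python
# For odd n, colour the edge {a, b} of K_n (vertex set Z_n) by a + b (mod n).
# The colouring is proper, and the first vertex of a (2l+1)-cycle is
# determined by its colour sequence via
#   v_0 = (c_0 - c_1 + c_2 - ... + c_{2l}) / 2  (mod n).

def check(n, cycle):
    inv2 = pow(2, -1, n)  # inverse of 2 mod n (exists since n is odd)
    colour = lambda a, b: (a + b) % n
    # properness: the colours at each vertex are pairwise distinct
    for v in range(n):
        cols = [colour(v, u) for u in range(n) if u != v]
        assert len(cols) == len(set(cols))
    # recover v_0 from the colours along the (odd-length) cycle
    cs = [colour(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))]
    alt = sum((-1) ** i * c for i, c in enumerate(cs))
    assert (alt * inv2) % n == cycle[0]

check(7, [0, 1, 3])          # a triangle in K_7
check(11, [2, 5, 9, 1, 4])   # a 5-cycle in K_11
```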
In light of Theorem~\ref{thm:oddcycle}, it is only meaningful for bipartite $H$. \begin{theorem}\label{thm:LLL} For any graph $H$ with $v$ vertices and $e$ edges, $$f_k(n,H)=O(\max\{n,n^{\frac{kv-2}{(k-1)e}}\}).$$ \end{theorem} \begin{proof} This is a standard application of the Lov\'asz Local Lemma (see, for instance,~\cite{AlSp}). We would like to find a probability $p=p(n)$ and assign each edge of $K_n$ independently one of $1/p$ colours so that with positive probability the following will hold: \begin{itemize} \item[(P1)] There is no $k$-fold repeat of any (not necessarily proper) colouring of $H$. \item[(P2)] The colouring is $(kv-2)$-bounded. \end{itemize} For this, we define two collections of `bad' events, as follows. Given $k$ disjoint copies of $H$ in $K_n$, say $H_1,\dots,H_k$, let $A(H_1,\dots, H_k)$ be the event that $H_1,\dots,H_k$ are colour isomorphic. We have $$q_A:=\mathbb{P}(A(H_1,\dots,H_k))=\Theta(p^{(k-1)e}).$$ For every copy $S$ of $S_\ell$, the star with $\ell := kv-1$ edges, we define $B(S)$ to be the event that all edges of $S$ receive the same colour. We have $$q_B:=\mathbb{P}(B(S))=p^{\ell-1}.$$ For every fixed $H_1,\dots,H_k$, the event $A(H_1,\dots,H_k)$ is independent of any event where the subgraphs involved do not share an edge with any of $H_1,\dots, H_k$. Therefore, the number of dependent events is at most $$\Delta = O(n^{kv-2} +n^{(\ell+1)-2}) = O(n^{\ell-1}).$$ Similarly, each $B(S)$ is independent of all but at most $\Delta = O(n^{\ell-1})$ other events. By the Local Lemma, in order to establish that $\mathbb{P}(\bigcap \overline{A}\cap \bigcap \overline{B})>0$, it suffices to show that $\max\{q_A,q_B\} < 1/(e\Delta)$. This will be the case when $$p \leq \gamma \min\{n^{-1}, n^{-\frac{\ell-1}{(k-1)e}}\}$$ for some constant $\gamma=\gamma(k,v,e)>0$. Therefore, there exists a $(kv-2)$-bounded colouring with $O(\max\{n,n^{\frac{kv-2}{(k-1)e}}\})$ colours satisfying (P1). 
By Proposition~\ref{prop:b-bounded}, it can be refined to a proper $O(\max\{n,n^{\frac{kv-2}{(k-1)e}}\})$-colouring with no $k$-repeat of $H$, as desired. \end{proof} We immediately deduce the following corollary, which includes Theorem~\ref{thm:f2}(iii) as a special case. \begin{cor}\label{cor:LLLbipartite} If $H$ is a bipartite graph with $e(H)\geq \frac{k}{k-1}|H|-\frac{2}{k-1}$, then $$f_k(n,H)=\Theta(n).$$ \end{cor} For $e(H) \geq |H| + 1$, this already implies that there exists $k = k(H)$ such that $f_k(n, H) = O(n)$. We will later show that this holds for all graphs containing a cycle, but using Theorem~\ref{thm:LLL} gives an explicit upper bound for the smallest $k$ such that $f_k(n,H) = O(n)$, namely, $k \leq \lceil\frac{e-2}{e-v}\rceil$, that our later approach does not provide. \section{Trees}\label{sec:trees} Applied to trees, Theorem~\ref{thm:LLL} reads as follows. \begin{cor}\label{cor:LLLtree} For any tree $T$ with $m$ edges, $$f_k(n,T)=O(n^{\frac{k(m+1)-2}{(k-1)m}}).$$ \end{cor} On the other hand, we have the following general lower bound for trees, showing that Corollary~\ref{cor:LLLtree} is not far from the truth for large trees. \begin{theorem}\label{thm:paths} For any tree $T$ with $m$ edges, any $k\geq 2$ and $n$ sufficiently large, $$f_k(n,T)\geq n^{\frac{k}{k-1}}/q,$$ where $q = (4k(km+1)(k^2-k+1))^{\frac{1}{k-1}}$. \end{theorem} \begin{proof} Fix $k$ and $T$, let $q$ be as above and suppose that for some large $n$ we have a colouring of $G=K_n$ with $C=n^{\frac{k}{k-1}}/q$ colours. Let $F$ be the following auxiliary graph. Set $V(F)=\binom{V(G)}{k}$, that is, each vertex of $F$ is a vertex subset of $G$ of order $k$. Two vertices of $F$, corresponding to sets $U=\{u_1,\dots,u_k\}$ and $W=\{w_1,\dots,w_k\}$, are connected by an edge if $U\cap W=\emptyset$ and the bipartite graph $G[U,W]$ has a matching of size $k$ which is monochromatic. 
By the above definition, each monochromatic $k$-matching gives rise to $2^{k-1}$ edges of $F$, while each edge of $F$ is accounted for by at most $k$ different $k$-matchings. This implies that $$e(F)\geq \frac{2^{k-1}}{k}E_k, $$ where $E_k$ is the number of monochromatic $k$-matchings in $G$. Note now that for the average degree of $F$, we have $$d_{avg}(F)=\frac{2e(F)}{|F|}\geq\frac{2^kE_k}{k\binom{n}{k}}.$$ By the convexity of binomial coefficients, $$E_k\geq {C\binom{\frac{1}{C}\binom{n}{2}}{k}}. $$ Thus, $$ d_{avg}(F)\geq \frac{2^kC\binom{\frac{1}{C}\binom{n}{2}}{k}}{k\binom{n}{k}}=(1+o(1))\frac{n^{k}}{kC^{k-1}}=(1+o(1))\frac{q^{k-1}}{k}. $$ By a folklore fact, $F$ therefore contains a subgraph $F'$ with $$d:=\delta(F')\geq \frac{d_{avg}(F)}{2}\geq (1+o(1))\frac{q^{k-1}}{2k}.$$ Let $v_0,\dots, v_m$ be the vertices of $T$ ordered so that each $v_i$ is a leaf attached to $T[v_0,\dots,v_{i-1}]$. We claim that there exists an embedding $\phi:T\rightarrow F'$ such that the $k$-vertex sets of $G$ corresponding to $\phi(v_0),\dots, \phi(v_m)$ are disjoint. We let $\phi(v_0)$ be an arbitrary vertex of $F'$ and proceed inductively. Suppose that for some $1\leq i\leq m$ we have defined $\phi(v_0),\dots,\phi(v_{i-1})$ and let $v_j$ be the unique neighbour of $v_i$ in $T[v_0,\dots, v_{i-1}]$. Take any $d$ vertices in $N_{F'}(\phi(v_j))$ and consider the underlying $k$-sets in $V(G)$. This defines a $k$-uniform hypergraph $H$. Observe that the maximum degree in $H$ is at most $k$, since for every vertex $u\notin \phi(v_j)$ there are at most $k$ monochromatic matchings in $G$ connecting $\phi(v_j)$ and a set containing $u$. Thus, the line graph of $H$ has maximum degree at most $k(k-1)$, so we can properly vertex colour it greedily with at most $k^2-k+1$ colours. Take the largest colour class: this is a matching $\mu \subseteq E(H)$ of size at least $d/(k^2-k+1)$. 
But $$\frac{d}{k^2-k+1}\geq (1+o(1))\frac{q^{k-1}}{2k(k^2-k+1)} = (2+o(1))(km+1).$$ Therefore, when $n$ is sufficiently large, we have $|\mu| \geq km+1\geq ki+1$, so there exists $e\in \mu$ such that the underlying $k$-vertex set in $G$ satisfies $e \cap \phi(v_\ell)=\emptyset$ for all $\ell=0,\dots, i-1$. We can therefore let $\phi(v_i):=e$, completing the induction step. The resulting copy of $\phi(T)\subseteq F'$ gives $k$ vertex-disjoint colour-isomorphic copies of $T$ in the original colouring of $K_n$, completing the proof. \end{proof} In particular, when $k = 2$, we have the following result. \begin{cor} \label{cor:trees} For any tree $T$ with $m$ edges and $n$ sufficiently large, $$f_2(n,T)\geq \frac{1}{24(2m+1)}n^2.$$ \end{cor} This also completes the proof of Theorem~\ref{thm:f2}(i). Indeed, if $H$ is a forest, then $f_2(n,H)=\Theta(n^2)$ by Corollary~\ref{cor:trees} and Lemma~\ref{lem:components}. Conversely, if $H$ is not a forest, it has a connected component $H'$ with $e(H')\geq |H'|$. By Theorem~\ref{thm:LLL}, $f_2(n,H')=O(n^{2-\Omega(1)})$, so the same then holds for $H$ by Lemma~\ref{lem:components}. We now observe that the bound in Corollary~\ref{cor:trees} is tight up to an absolute constant. \begin{lemma} For every tree $T$ with $m\geq 2$ edges, $$f_2(n,T)\leq (1+o(1))\frac{n^2}{2m}.$$ \end{lemma} \begin{proof} By Wilson's theorem~\cite{Wi}, there is an edge decomposition of all but $o(n^2)$ edges of $K_n$ into at most $\binom{n}{2}/\binom{2m+1}{2}$ copies of $K_{2m+1}$. We colour each of the $o(n^2)$ edges in a unique colour and decompose each of the copies of $K_{2m+1}$ into at most $2m+1$ (near-)perfect matchings, each of which we view as a separate colour class. Since any two copies of $K_{2m+1}$ share at most one vertex, there will be no repeat of $T$ with colours from two different $K_{2m+1}$. On the other hand, since $2m+1 < 2(m+1)$ and $T$ has $m+1$ vertices, there cannot be two vertex-disjoint copies of $T$ inside any $K_{2m+1}$. 
\end{proof} Next, we show that the bound in Theorem~\ref{thm:paths} is essentially tight when $k = 3$, thus completing the proof of Theorem~\ref{thm:trees}(i). \begin{theorem}\label{thm:3rep} For any tree $T$ with at least $2$ edges, $f_3(n,T)= O(n^{3/2})$. \end{theorem} \begin{proof} Let $A$ be a vertex set of order $n$ and let $q$ be a prime with $n \leq q^2 = (1 + o(1))n$, noting that such a prime exists by the prime number theorem. Our aim is to find a proper colouring of the complete graph on $A$ with $O(n^{3/2})$ colours such that no three vertices are incident to the same two colours. In particular, this will imply that the colouring has no $3$-fold repeat of the star with two edges and, hence, no $3$-fold repeat of any tree $T$ with more than one edge. To achieve this, let $A$ be indexed by $\mathbb{F}_q^2$, where $\mathbb{F}_q$ denotes the finite field of order $q$. We first deal with certain `degenerate' edges in $\binom{A}{2}$. For us, these will be edges between two vertices $u=(a,b)$ and $v=(c,d)$, where either $a=c$, $a=1$ or $c=1$. Note that there are $O(q^3)=O(n^{3/2})$ such edges, so we may colour each of them with a unique colour. To each remaining edge $\{u,v\}\in \binom{A}{2}$ with $u=(a,b)$ and $v=(c,d)$, we assign the colour $(x_1,x_2,x_3) \in \mathbb{F}_q^3$ (so that there are at most $q^3 = O(n^{3/2})$ additional colours) satisfying the equations $x_1+x_2a+x_3a^2=b$, $x_1+x_2c+x_3c^2=d$ and $x_1+x_2+x_3=a+c$. This system of three equations has a unique solution $(x_1,x_2,x_3)$, since the underlying matrix is a Vandermonde matrix (and, since $a, c$ and $1$ are all distinct, the matrix is non-singular). Now, for a fixed $u=(a,b)$, how many non-degenerate edges $uv$ with $v = (c, d)$ receive the colour $(x_1,x_2,x_3)$? Each such edge has to satisfy $c=x_1+x_2+x_3-a$, so $c$ is fixed by the choice of $u$ and the colour $(x_1, x_2, x_3)$. Moreover, since $d = x_1+x_2c+x_3c^2$, $d$ is also fixed.
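As a sketch, the colouring rule just described can be checked directly over a small field; $q = 7$ below is an illustrative choice, and degenerate edges are omitted since they receive fresh unique colours anyway:

```python
from collections import Counter
from itertools import combinations

q = 7  # small prime, for illustration only

def solve3(M, rhs):
    """Solve a nonsingular 3x3 linear system over F_q by
    Gauss--Jordan elimination."""
    A = [row[:] + [b] for row, b in zip(M, rhs)]
    for col in range(3):
        piv = next(r for r in range(col, 3) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], q - 2, q)           # Fermat inverse
        A[col] = [x * inv % q for x in A[col]]
        for r in range(3):
            if r != col and A[r][col]:
                A[r] = [(x - A[r][col] * y) % q
                        for x, y in zip(A[r], A[col])]
    return tuple(A[r][3] for r in range(3))

def colour(u, v):
    """The colour (x1, x2, x3) of a non-degenerate edge {u, v}."""
    (a, b), (c, d) = u, v
    return solve3([[1, a, a * a % q], [1, c, c * c % q], [1, 1, 1]],
                  [b, d, (a + c) % q])

verts = [(a, b) for a in range(q) for b in range(q)]
nondeg = [(u, v) for u, v in combinations(verts, 2)
          if u[0] != v[0] and u[0] != 1 and v[0] != 1]

incident = {}                       # vertex -> list of incident colours
for u, v in nondeg:
    c = colour(u, v)
    incident.setdefault(u, []).append(c)
    incident.setdefault(v, []).append(c)

# proper: no vertex sees the same colour twice
proper = all(len(cs) == len(set(cs)) for cs in incident.values())
# no three vertices incident to the same two colours
pairs = Counter(p for cs in incident.values()
                for p in combinations(sorted(set(cs)), 2))
max_shared = max(pairs.values())
```

Both checks succeed for the reason given in the text: two distinct colours are two distinct quadratics, which can agree in at most two points.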
Hence, there is at most one edge of each colour adjacent to $u$. That is, we have a proper colouring. Note now that each non-degenerate colour $(x_1,x_2,x_3)$ incident to a vertex $(a,b)$ satisfies the equation $x_1+x_2a+x_3a^2=b$. Therefore, since there is at most one quadratic passing through any three points in $\mathbb{F}_q^2$, no three vertices can be incident to the same two colours. \end{proof} \begin{remark} If $H$ is not a forest, $f_3(n, H)$ is always polynomially smaller than $n^{3/2}$. Indeed, if $H$ is not a forest, it has a connected component $H'$ with $e(H')\geq |H'|$. By Theorem~\ref{thm:LLL}, $f_3(n,H')=O(n^{3/2-\Omega(1)})$, so the same then holds for $H$ by Lemma~\ref{lem:components}. \end{remark} We conclude this section by establishing the first part of Theorem~\ref{thm:trees}(ii). \begin{prop}\label{prop:infinite} For any tree $T$ with $m$ edges and any $k\geq 2$, $f_k(n,T)=\Omega(n^{\frac{m+1}{m}})$. \end{prop} \begin{proof} For convenience of notation, we will instead prove the equivalent statement that a proper $C$-colouring of $K_n$ with $C=o(n^{\frac{m+1}{m}})$ colours contains $\omega(1)$ repeats of $T$. The number of copies of $T$ in $K_n$ is $\Theta(n^{m+1})=\omega(C^m)$. Thus, some $m$-tuple of colours will be repeated $\omega(1)$ times. Since the colouring is proper, every vertex appears in at most $|T|=m+1$ copies of $T$ with a fixed colouring. Hence, $\omega(1)$ of the above repeats will be vertex disjoint. \end{proof} \section{Cycles}\label{sec:bipartite} We begin this section by proving two of our main results at once, Theorem~\ref{thm:cyclim} and the second part of Theorem~\ref{thm:trees}(ii). 
\begin{theorem}\label{thm:evencycle} For every bipartite graph $H$ containing a cycle, there exists $k=k(H)$ such that $$f_k(n,H)=O(n).$$ For every $m$-edge tree $T$, there exists $k=k(T)$ such that $$f_k(n,T)=O(n^{\frac{m+1}{m}}).$$ \end{theorem} In the proof of this theorem, we will make use of Bukh's random algebraic method~\cite{Bu} (see also~\cite{BuCo, Co}). It will be useful to briefly recall some basic terminology from algebraic geometry. Let $\overline{\mathbb{F}}_q$ stand for the algebraic closure of $\mathbb{F}_q$. A \emph{variety} over $\overline{\mathbb{F}}_q$ is a set of the form $$W=\{x\in \overline{\mathbb{F}}_q^t:f_1(x)=\dots=f_s(x)=0\}$$ for a collection of multivariate polynomials $f_1,\dots, f_s: \overline{\mathbb{F}}_q^t\rightarrow\overline{\mathbb{F}}_q$. We also write $W(\mathbb{F}_q):=W\cap \mathbb{F}_q^t$. The variety $W$ is said to be \emph{defined} over $\mathbb{F}_q$ if the coefficients of $f_1, \dots, f_s$ are in $\mathbb{F}_q$. Finally, we say that $W$ has \emph{complexity} at most $M$ if $s, t$ and the degrees of the polynomials $f_i$ are all at most $M$. We will repeatedly use the following lemma, a consequence of the well-known Lang--Weil bound~\cite{LaWe}. \begin{lemma}[\cite{BuCo}, Lemma 2.7]\label{lem:27} Suppose $W$ and $D$ are varieties over $\overline{\mathbb{F}}_q$ of complexity at most $M$ which are defined over $\mathbb{F}_q$. Then one of the following holds for all $q$ sufficiently large in terms of $M$: \begin{itemize} \item $|W(\mathbb{F}_q)\setminus D(\mathbb{F}_q)| > q/2$ or \item $|W(\mathbb{F}_q)\setminus D(\mathbb{F}_q)| < c$, where $c=c_M$ depends only on $M$. \end{itemize} \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm:evencycle}] By Remark~\ref{rem:trivial}, it suffices to prove that, for a cycle or a tree $H$ with $(v,e):=(|V(H)|,|E(H)|)$, there exists $k$ such that $$f_k(n,H)=O(n^{{v}/{e}}).$$ Suppose first that $H$ is a cycle, so $v=e$. 
Let $q$ be a prime with $n \leq q=(1+o(1))n$, guaranteed for large $n$ by the prime number theorem. We also let $d$ and $t$ be positive integers, with $d$ chosen to be sufficiently large in terms of $t$ and $t$ sufficiently large in terms of $v$. We write $\mathcal{P}_d$ for the set of all two-variable polynomials over $\mathbb{F}_q$ of degree at most $d$ and let $g$ be a polynomial taken from $\mathcal{P}_d$ uniformly at random. Such a polynomial can be generated by selecting a coefficient $a_{d_1d_2}\in \mathbb{F}_q$ independently at random for each pair of non-negative integers $(d_1,d_2)$ with $d_1+d_2\leq d$ and writing $$g(X_1,X_2):=\sum a_{d_1d_2} X_1^{d_1}X_2^{d_2}.$$ We will make repeated use of the following property of such random polynomials. Because we will need it below, we state the result for the more general case of $t$-variate random polynomials. \begin{lemma}[\cite{BuCo}, Lemma 2.3]\label{lem:23} Suppose that $q>\binom{m}{2}$ and $d\geq m-1$. Then, if $g$ is a random polynomial from $\mathcal{P}_d$ and $x_1,\dots,x_m$ are $m$ distinct points in $\mathbb{F}_q^t$, $$\mathbb{P}[g(x_i)=0 \text{ for all } i=1,\dots,m]=1/q^m. $$ \end{lemma} Define a $q$-colouring of $G=K_{[n]}$ by assigning each edge $ij$ with $i<j$ the colour $g(i,j)$. We claim that with positive probability this colouring has the following properties: \begin{itemize} \item[(Q1)] Every (not necessarily proper) colouring of $H$ occurs fewer than $k=k(H)$ times. \item[(Q2)] The colouring is $2d$-bounded. \end{itemize} Given a colouring satisfying these conditions, we can apply Proposition~\ref{prop:b-bounded} to obtain a proper colouring with at most $(2d+1)q=O(n)$ colours and no $k$-repeats of $H$. Suppose now that $H$ has vertices $1, \dots, v$. We consider all copies of $H$ in $G$ of fixed \emph{order and colour type}.
That is, given an ordering $(\tau_1, \dots, \tau_v)$ of $1, \dots, v$ and a sequence of $e$ colours $(b_{ij}:ij\in E(H))$, we consider only those copies of $H$ in $G$ where the images $y_1,\dots, y_{v}$ of $1, \dots, v$ in $[n]$ satisfy $y_{\tau_1} < \dots < y_{\tau_v}$ and the edge between $y_i$ and $y_j$ is coloured with colour $b_{ij}$ for all $ij \in E(H)$. Let $W$ be the variety defined by the system of equations \begin{equation}\label{eq:variety} g(x_{i,j,1},x_{i,j,2})=b_{ij} \mbox{ for } ij\in E(H) \end{equation} with $v$ variables $x_1,\dots,x_v$, where each $(x_{i,j,1},x_{i,j,2})$ equals $(x_{i},x_{j})$ if $\tau_i<\tau_j$ and $(x_{j},x_{i})$ otherwise. Let $D$ be the variety defined by the equation $$\prod_{1\leq i\neq j \leq v}(x_i-x_j)=0. $$ Note that the number of copies of $H$ of given type is at most $S=|W(\mathbb{F}_q)\setminus D(\mathbb{F}_q)|$, i.e., the random variable counting the number of `non-degenerate' solutions to the system of equations~\eqref{eq:variety}. By Lemma~\ref{lem:23}, for any $s \leq d + 1$ distinct points $y_1,\dots,y_s$ in $\mathbb{F}_q^2$ and any $b_1,\dots,b_s\in \mathbb{F}_q$, we have $$\mathbb{P}[g(y_i) = b_i \mbox{ for all } i = 1,\dots,s] = 1/q^s.$$ Now consider $S^t$, observing that it counts ordered collections of $t$ (potentially overlapping or identical) copies of $H$. Note that the graph $H_t$ spanned by any collection of $t$ copies of $H$ will satisfy $e(H_t)\geq |H_t|$, since each connected component of $H_t$ contains a cycle. Therefore, for $d$ sufficiently large in terms of $v$ and $t$, $$\mathbb{E}[S^{t}]\leq \sum_{s=v}^{vt}\frac{s^{vt} q^s}{q^{s}}< (vt)^{2vt},$$ where the factor $s^{vt}$ is an upper bound for the number of ordered ways of placing $t$ ordered copies of $H$ on a fixed set of $s$ vertices. 
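The $1/q^s$ probabilities used in this computation rest on Lemma~\ref{lem:23}; the underlying interpolation count is easy to verify exhaustively in a univariate toy case (the parameters below are illustrative):

```python
# Exhaustive check of the interpolation fact behind Lemma 2.3, in a
# univariate analogue: over F_q with q = 5, exactly a q^{-m} fraction of
# the polynomials of degree at most d = 3 vanish at m = 2 fixed
# distinct points (here d >= m - 1 and q > binom(m, 2), as required).
from itertools import product

q, d, points = 5, 3, [1, 4]

total = vanishing = 0
for coeffs in product(range(q), repeat=d + 1):
    total += 1
    if all(sum(c * x ** i for i, c in enumerate(coeffs)) % q == 0
           for x in points):
        vanishing += 1
```

Here the evaluation map at the $m$ points is linear and surjective on the coefficient space, so exactly $q^{(d+1)-m}$ of the $q^{d+1}$ polynomials vanish at all of them.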
Since $W$ and $D$ are defined over $\mathbb{F}_q$ and their complexities are bounded by a function of $d$ and $v$, Lemma~\ref{lem:27} implies that there exists a constant $K=K(d, v)$ such that the random variable $S=|W(\mathbb{F}_q)\setminus D(\mathbb{F}_q)|$ satisfies either $S < K$ or $S > q/2$. Thus, by Markov's inequality, $$\mathbb{P}[S \geq K]=\mathbb{P}[S>q/2]=\mathbb{P}[S^t>(q/2)^t]\leq \frac{\mathbb{E}[S^t]}{(q/2)^t}< (vt)^{2vt}(q/2)^{-t}. $$ This gives an upper bound on the probability of having at least $K$ distinct copies of $H$ for any given order and colour type. Since there are at most $v!$ different order types and at most $q^e=q^v$ choices for the sequence of colours, the union bound implies that, for $t$ sufficiently large in terms of $v$, there are with high probability fewer than $K$ distinct copies of $H$ for each order and colour type. Therefore, writing $k = v! K$, we see that with high probability the colouring has fewer than $k$ copies of $H$ for each colour type and so it satisfies (Q1) with probability $1-o(1)$. To see that it also satisfies (Q2), we have to show that with high probability, for each fixed $i$ and $a$, the equations $g(x,i)=a$ and $g(i,y)=a$ have at most $d$ solutions, i.e., that each of $g(x,i)$ and $g(i,y)$ has a non-constant coefficient not equal to zero. Consider $g(x,i)$ and note that, when viewing $g$ as a polynomial in $\mathbb{F}_q[X_2][X_1]$, the coefficient of $X_1^p$ is a random univariate polynomial $h_p(X_2)$ of degree $d-p$ with the coefficient for each different $p$ being independent. Hence, the probability that $h_p(i)$ is $0$ for all $p \geq 1$ is $q^{-d}$. Since there are at most $q^2$ choices for $i$ and $a$, the union bound implies that with high probability $g(x,i)=a$ has at most $d$ solutions for all $i$ and $a$. Together with the analogous result for $g(i, y) = a$, this establishes (Q2) with probability $1-o(1)$. 
By the union bound, with positive probability the colouring satisfies both properties (Q1) and (Q2), as claimed. Suppose now that $H$ is a tree with $m\geq 1$ edges. Let $q$ be a prime with $n^{1/m} \leq q=(1+o(1))n^{1/m}$. Let $d$ and $t$ again be positive integers, with $d$ chosen to be sufficiently large in terms of $t$ and $t$ sufficiently large in terms of $m$, while $\mathcal{P}_d$ is now the set of all $2m$-variable polynomials over $\mathbb{F}_q$ of degree at most $d$. Let $g_1,\dots, g_{m+1}$ be $m+1$ polynomials taken from $\mathcal{P}_d$ independently and uniformly at random. Fix an ordering $\prec$ on $\mathbb{F}_q^m$ and associate $V(K_n)$ with a subset of $\mathbb{F}_q^m$. We define a $q^{m+1}$-colouring on $\binom{\mathbb{F}_q^m}{2}$ by assigning each edge $ij$ with $i\prec j$ the colour $(g_1(i,j),\dots,g_{m+1}(i,j))$, where we interpret $(i,j)$ as an element of $\mathbb{F}_q^m\times \mathbb{F}_q^m \cong \mathbb{F}_q^{2m}$. As in the cycle case, we claim that with positive probability the following statements hold (in which case we are again done by Proposition~\ref{prop:b-bounded}): \begin{itemize} \item[(Q1)] Every (not necessarily proper) colouring of $H$ occurs fewer than $k=k(H)$ times. \item[(Q$2'$)] The colouring is $k'$-bounded for some $k'=k'(H)$. \end{itemize} Define order and colour types as before, but with respect to the ordering $\prec$. For a fixed order type $(\tau_1,\dots,\tau_{m+1})$ and colour type $(b_{ij} : ij\in E(H))$, let $W$ be the variety defined by the system of equations \begin{equation*} g_\ell(x_{i,j,1},x_{i,j,2})=b_{ij}^{(\ell)} \mbox{ for } ij\in E(H) \mbox{ and } \ell\in[m+1] \end{equation*} with $m+1$ variables $x_1,\dots,x_{m+1} \in \mathbb{F}_q^m$, where each $(x_{i,j,1},x_{i,j,2})$ equals $(x_{i},x_{j})$ if $\tau_i<\tau_j$ and $(x_{j},x_{i})$ otherwise and $b_{ij}=(b_{ij}^{(1)}, \dots , b_{ij}^{(m+1)})$. 
For each $\{i,j\}\in \binom{m+1}{2}$, let $D_{ij}$ be the variety defined by the system of equations $$x_i^{(\ell)} = x_j^{(\ell)} \mbox{ for } \ell\in[m], $$ where $x_i^{(\ell)}$ and $x_j^{(\ell)}$ are the $\ell$-th coordinates of $x_i$ and $x_j\in \mathbb{F}_q^m$, respectively, and let $$D=\bigcup_{1\leq i\neq j\leq m + 1}D_{ij}. $$ We again have that the number of copies of $H$ of given type is bounded above by $S=|W(\mathbb{F}_q)\setminus D(\mathbb{F}_q)|$. By Lemma~\ref{lem:23} and the independence of the $g_\ell$, for any $s \leq d+1$ distinct points $y_1,\dots,y_s$ in $\mathbb{F}_q^{2m}$ and any choice of $b_i^{(\ell)} \in \mathbb{F}_q$ for each $i\in [s]$ and $\ell\in [m+1]$, we have $$\mathbb{P}[g_\ell(y_i) = b^{(\ell)}_i \mbox{ for all } i\in [s] \mbox{ and } \ell\in [m+1]] = 1/q^{s(m+1)}.$$ Now consider $S^t$, observing that it counts ordered collections of $t$ (potentially overlapping or identical) copies of $H$. Note that the graph $H_t$ spanned by any collection of $t$ copies of $H$ will satisfy $e(H_t)\geq \frac{m}{m+1}|H_t|$, since each connected component of $H_t$ is either a tree with at least $m$ edges or contains a cycle. Therefore, for $d$ sufficiently large in terms of $m$ and $t$, $$\mathbb{E}[S^{t}]\leq \sum_{s=m+1}^{(m+1)t}\frac{s^{(m+1)t}q^{m s}}{q^{\frac{m}{m+1}s(m+1)}}< (2mt)^{4mt}.$$ Since $W$ and $D$ are defined over $\mathbb{F}_q$ and their complexities are bounded by a function of $d$ and $m$, Lemma~\ref{lem:27} implies that there exists a constant $K=K(d, m)$ such that the random variable $S=|W(\mathbb{F}_q)\setminus D(\mathbb{F}_q)|$ satisfies either $S < K$ or $S > q/2$. Thus, by Markov's inequality, $$\mathbb{P}[S \geq K]=\mathbb{P}[S>q/2]=\mathbb{P}[S^t>(q/2)^t]\leq \frac{\mathbb{E}[S^t]}{(q/2)^t} < (2mt)^{4mt} (q/2)^{-t}. $$ Hence, as in the case of cycles, writing $k = (m+1)! 
K$, we obtain that with high probability the colouring has fewer than $k$ copies of $H$ for each colour type and so it satisfies (Q1) with probability $1-o(1)$. To see that it also satisfies (Q$2'$), we have to show that with high probability, for each fixed $i \in \mathbb{F}_q^m$ and $a=(a_\ell)_{\ell\in[m+1]}\in \mathbb{F}_q^{m+1}$, the systems of equations $$g_\ell(x,i)=a_\ell \mbox{ for } \ell\in[m+1]$$ and $$g_\ell(i,y)=a_\ell \mbox{ for } \ell\in[m+1]$$ have at most $k'/2$ solutions for some $k'=k'(d)$. For this, we again resort to the random algebraic method. Fix $i \in \mathbb{F}_q^m$. For each $\ell\in [m+1]$, define $g'_\ell(x):=g_\ell(x,i)$. It is not hard to see that $g'_\ell(x)$ is a uniformly random $m$-variable polynomial of degree $d$. Note also that the set \begin{equation}\label{eq:sprime} \{x\in \mathbb{F}_q^m: g_\ell'(x)=a_\ell \text{ for all }\ell\in[m+1]\} \end{equation} is of the form $W'(\mathbb{F}_q)$ for a variety $W'$ of complexity bounded by a function of $d$ and $m$. Moreover, by Lemma~\ref{lem:23} and the independence of the $g'_\ell$, for any $s \leq d+1$ distinct points $x_1,\dots,x_s\in \mathbb{F}_q^m$ and any choice of $a_j^{(\ell)}\in \mathbb{F}_q$ for each $j\in[s]$ and $\ell\in[m+1]$, we have $$\mathbb{P}[g'_\ell(x_j)=a_j^{(\ell)} \mbox{ for all } j\in[s] \mbox{ and }\ell\in[m+1]]=1/q^{s(m+1)}. $$ Consider the random variable $S'=|W'(\mathbb{F}_q)|$ and observe that $S'^t$ counts ordered collections of $t$ (not necessarily distinct) solutions to~\eqref{eq:sprime}. Therefore, for $d$ sufficiently large in terms of $t$, $$\mathbb{E}[S'^t]\leq\sum_{s=1}^{t}\frac{t^sq^{sm}}{q^{s(m+1)}}=\sum_{s=1}^{t}\left(\frac{t}{q}\right)^s<1. $$ By Lemma~\ref{lem:27}, there exists a constant $K'=K'(d, m)$ such that $S'$ satisfies either $S' < K'$ or $S'>q/2$. 
Thus, by Markov's inequality, provided $t$ is sufficiently large in terms of $m$, $$\mathbb{P}[S' \geq K'] = \mathbb{P}[S' > q/2]=\mathbb{P}[S'^t>(q/2)^t]\leq \frac{\mathbb{E}[S'^t]}{(q/2)^t} < \frac{2^t}{q^{t}}<\frac{1}{q^{3m+1}}. $$ In other words, the system of equations $g_\ell(x,i)=a_\ell$ with $\ell\in[m+1]$ has with high probability at most $K'$ solutions. Together with the analogous result for the system of equations $g_\ell(i,y)=a_\ell$ with $\ell\in[m+1]$, a union bound over all $i\in \mathbb{F}_q^m$ and $a=(a_\ell)_{\ell\in[m+1]}\in \mathbb{F}_q^{m+1}$ establishes (Q$2'$) with high probability for $k'=2K'$. By another simple application of the union bound, with positive probability the colouring satisfies both properties (Q1) and (Q$2'$), as claimed. \end{proof} We finish this section by completing the proof of Theorem~\ref{thm:f2}. Indeed, Theorem~\ref{thm:LLL} implies that $f_2(n,C_6)=O(n^{5/3})$, while the next lemma says that $f_2(n,C_6)=\Omega(n^{4/3})$. Together, these results establish Theorem~\ref{thm:f2}(iv) with $H = C_6$. \begin{theorem}\label{thm:C6} $f_2(n,C_6)=\Omega(n^{4/3})$. \end{theorem} \begin{proof} Suppose that $C=\gamma n^{4/3}$, where $\gamma$ is a small constant. Suppose also that $n$ is taken sufficiently large and we have a $C$-colouring of $G=K_n$. For simplicity in our notation, we will also assume that $n$ is even. Take an arbitrary equipartition of $V(G)$ into parts $X$ and $Y$. Let $U:=\binom{X}{2}$, $W:=\binom{Y}{2}$ and let $F$ be the following auxiliary graph, similar to the graph used in the proof of Theorem~\ref{thm:paths}. The vertex set is $V(F):=U\cup W$ and $F=F[U,W]$ is bipartite with $uw\in E(F)$ for $u=\{x_1, x_2\}$, $w=\{y_1, y_2\}$ if and only if at least one of $\{x_1y_1,x_2y_2\}$, $\{x_1y_2,x_2y_1\}$ is a monochromatic matching in $G$. 
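To illustrate the construction of $F$, here is a toy computation; the colouring $c(i,j) = (i+j) \bmod n$, with $n$ odd so that this rule is proper, is only a stand-in for an arbitrary proper colouring:

```python
# Toy model of the auxiliary graph F: K_n with the proper colouring
# c(i, j) = (i + j) mod n (n odd), a near-equipartition (X, Y), the
# count E2 of monochromatic 2-matchings between X and Y, and the number
# of edges of F.
from itertools import combinations

n = 15
X, Y = range(n // 2), range(n // 2, n)
col = lambda i, j: (i + j) % n

U = list(combinations(X, 2))            # vertex pairs inside X
W = list(combinations(Y, 2))            # vertex pairs inside Y

def mono(x1, x2, y1, y2):
    # is {x1 y1, x2 y2} a monochromatic matching?
    return col(x1, y1) == col(x2, y2)

E2 = sum(mono(x1, x2, y1, y2) + mono(x1, x2, y2, y1)
         for x1, x2 in U for y1, y2 in W)
eF = sum(1 for x1, x2 in U for y1, y2 in W
         if mono(x1, x2, y1, y2) or mono(x1, x2, y2, y1))
```

As in the text, every monochromatic $2$-matching gives rise to an edge of $F$ and each edge arises from at most two such matchings, so $e(F) \geq E_2/2$ holds for any proper colouring.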
Therefore, $$|F|=2\binom{n/2}{2}=(1+o(1))\frac{n^2}{4} \geq \frac{n^2}{5}$$ and $e(F)\geq E_2/2$, where $E_2$ is the number of monochromatic matchings of size $2$ between $X$ and $Y$, as every such matching gives rise to an edge of $F$ and each edge is counted at most twice. By the convexity of binomial coefficients, we obtain \begin{equation}\label{eq:eF} e(F)\geq\frac{E_2}{2}\geq \frac{C}{2}\binom{\frac{n^2}{5C}}{2}>\frac{n^4}{200C}=\frac{n^{8/3}}{200\gamma}>\frac{|F|^{4/3}}{200\gamma}. \end{equation} The \emph{theta graph} $\theta_{3,\ell}$ is the bipartite graph composed of $\ell$ internally-disjoint paths of length $3$ sharing the same pair of endpoints. For example, $\theta_{3,2}=C_6$. A theorem of Faudree and Simonovits~\cite{FS} (see also the recent paper~\cite{BuTa}) states that, for every $\ell \geq 2$, $\textrm{ex}(n,\theta_{3,\ell})=O_\ell(n^{4/3})$. Hence, by this and~\eqref{eq:eF}, putting $\ell=60$, there exists $\gamma_0$ such that if $\gamma<\gamma_0$, then $F$ contains a copy of $\theta_{3,60}$, noting that, since $F$ is bipartite and each of the $\ell$ paths has odd length, the endpoints of this copy cannot lie in the same part. Suppose now that $u\in U$ and $w\in W$ are the endpoints of our copy of $\theta_{3,60}$, the neighbours of $u$ and $w$ are $w_1,\dots,w_\ell$ and $u_1,\dots,u_\ell$, respectively, and $u w_i u_i w$ is a path for each $i = 1, 2, \dots, \ell$. We claim that, after relabelling, there are three indices $1, 2, 3$ such that the eight vertex pairs in $G$ encoded by the vertices $u,w,u_1,u_2,u_3,w_1,w_2,w_3$ are disjoint. Since $u,u_1,u_2,u_3\in U$ and $w,w_1,w_2,w_3\in W$, we only need to make sure that the vertex pairs are disjoint in each part. To this end, note, by the same hypergraph colouring argument used in the proof of Theorem~\ref{thm:paths}, that any set of $t$ vertices $w_{i_1},\dots, w_{i_t} \in N_F(u)$ contains a subset of size at least $t/3$ in which all vertices correspond to disjoint pairs in $G$. The same holds for any set of $t$ vertices in $N_F(w)$.
Thus, at least $15$ of the vertices $w_1,\dots,w_\ell$ correspond to pairs in $Y$ that are disjoint from each other and from $w$ (since there are at least $20$ of the $w_i$ whose corresponding pairs are disjoint from each other and $w$ overlaps with at most two of these). Relabelling, we can assume that these vertices are $w_1,\dots,w_{15}$. Applying the same argument to $u_1,\dots, u_{15}$ and relabelling if necessary, we obtain three vertices $u_1,u_2,u_3$ corresponding to pairs in $X$ which are disjoint from each other and from $u$. Let $u:=\{x^1,x^2\}$, $w:=\{y^1,y^2\}$, $u_i:=\{x_i^1,x_i^2\}$ and $w_i:=\{y_i^1,y_i^2\}$ for $i=1,2,3$. By the definition of $F$, without loss of generality we may assume that we have monochromatic pairs $\{x^1y_i^1,x^2y_i^2\}$ and $\{y_i^1x_i^1,y_i^2x_i^2\}$ for $i=1,2,3$. For each of the three edges $u_iw\in E(F)$, we have two options: either the matching $\{x_i^1y^1,x_i^2y^2\}$ is monochromatic or the matching $\{x_i^1y^2,x_i^2y^1\}$ is. By the pigeonhole principle, there will be two indices, say $i=1,2$, where the same situation occurs. In the case where $\{x_i^1y^1,x_i^2y^2\}$ is monochromatic for $i=1,2$, we have a $2$-repeat of $C_6$ with copies $x^1y_1^1x_1^1y^1x_2^1y_2^1x^1$ and $x^2y_1^2x_1^2y^2x_2^2y_2^2x^2$. If instead, $\{x_i^1y^2,x_i^2y^1\}$ is monochromatic for $i=1,2$, our $2$-repeat of $C_6$ consists of the copies $x^1y_1^1x_1^1y^2x_2^1y_2^1x^1$ and $x^2y_1^2x_1^2y^1x_2^2y_2^2x^2$. \end{proof} \section{Open Problems}\label{sec:outlook} \subsection*{Trees} Proposition~\ref{prop:infinite} and Theorem~\ref{thm:evencycle} together imply that for each $m$-edge tree $T$, there exists a constant $k_0 = k_0(T)$ such that, for all $k\geq k_0$, $$f_k(n,T)=\Theta(n^{\frac{m+1}{m}}).$$ It would be interesting to prove a tight bound for $k_0$ -- we suspect that the answer might be $m+1$ for any $m$-edge tree. When $T$ is the star with $2$ edges, our Theorem~\ref{thm:3rep} verifies this suspicion. 
When $T$ is the star with $m$ edges, this problem seems closely related to the Zarankiewicz problem for $K_{m, m+1}$, suggesting that it may already be very difficult for $m \geq 3$. An easier problem might be to give an upper bound for $f_3(n, T)$ when $T$ is an $m$-edge tree which matches the lower bound $\Omega(n^{3/2}/\sqrt{m})$ of Theorem~\ref{thm:paths} up to an absolute constant factor. \subsection*{Bipartite graphs containing a cycle} Corollary~\ref{cor:LLLbipartite} tells us that if $e(H)\geq 2|H|-2$, then $f_2(n, H) = O(n)$. How sharp is this bound? Recalling that $\theta_{3,\ell}$ is the bipartite graph consisting of $\ell$ internally-disjoint paths of length $3$ sharing the same pair of endpoints, an extension of Theorem~\ref{thm:C6} implies that, for any $\ell \geq 2$, \[f_2(n, \theta_{3,\ell}) = \Omega(n^{4/3}).\] Since $e(\theta_{3, \ell}) = \frac{3}{2} |\theta_{3,\ell}| - 3$, we see that Corollary~\ref{cor:LLLbipartite} cannot be improved to say that $e(H)\geq \frac{3}{2}|H|-3$ implies that $f_2(n, H) = O(n)$. An interesting test case for deciding whether the latter bound can be pushed closer to $2 |H|$ might be to study $f_2(n, K'_t)$, where $K'_t$ is the 1-subdivision of the complete graph $K_t$. The extremal properties of these graphs have received close attention in the recent literature~\cite{CoJaLe, CoLe, Ja} and some of the ideas developed in these papers might also prove useful in our context. \subsection*{Even cycles} There are an abundance of open problems regarding even cycles, with the simplest being whether $f_2(n,C_4)=\Theta(n)$. We have only shown that $f_2(n, C_4) = O(n^{3/2})$, as a particular instance of Theorem~\ref{thm:LLL}. It would also be interesting to determine the correct exponent for $C_6$. That is, determine the exponent $\alpha$, if it exists, such that $f_2(n, C_6) = \Theta(n^\alpha)$. Our results only show that $4/3 \leq \alpha \leq 5/3$. Finally, for longer cycles, we suspect that the exponent approaches $2$. 
That is, is it true that for every $\varepsilon>0$, there exists $m_0=m_0(\varepsilon)$ such that, for all $m\geq m_0$, $f_2(n,C_{2m})=\Omega(n^{2-\varepsilon})$? This problem seems to bear some relation to the even cycle case of Conjecture~3.1 from~\cite{GrJaNa}. \subsection*{Non-bipartite graphs} When $H$ is non-bipartite, we know, by the remark after Theorem~\ref{thm:oddcycle}, that $f_2(n, H) = n$ for $n$ odd. On the other hand, when $n$ is even, we have only shown that $f_2(n, H) \leq n + 1$. However, in many cases, this bound can be improved to $f_2(n, H) = n - 1$. To see this, we start with the colouring of $K_{n-1}$ with $n-1$ colours used in the proof of Theorem~\ref{thm:oddcycle} and then consider the unique extension of this colouring to $K_n$. The resulting colouring does not contain repeats of `most' non-bipartite $H$, such as graphs containing two disjoint odd cycles, so that $f_2(n, H) = n - 1$ for these graphs. Similarly, this colouring allows us to conclude that $f_3(n, H) = n -1$ for $n$ even and any non-bipartite $H$. However, it remains to determine $f_2(n, H)$ exactly in the natural case where $n$ is even and $H$ is an odd cycle. Curiously, when $H$ is a triangle, each $1$-factorization of $K_n$ has $\binom{n-1}{3}$ triples of possible colours, which is less than $\binom{n}{3}$, the total number of triangles. Hence, by the pigeonhole principle, there will be two colour-isomorphic triangles, but they may not be vertex disjoint. \subsection*{Repeats of other patterns} Our results all yield repeated rainbow copies, where the edges in each copy all receive a different colour, but one could also ask for repeats of other patterns. For instance, what is the smallest number of colours in a proper edge-colouring which does not contain a $2$-repeat of the properly $2$-coloured path of length $3$? 
From the construction in Theorem~\ref{thm:3rep}, it is not hard to deduce an upper bound of $O(n^{3/2})$, whereas a simple counting argument yields a lower bound of $\Omega(n^{4/3})$. We believe the latter to be tight. \subsection*{A problem in Ramsey theory} Our original motivation for studying repeats came from a problem in generalised Ramsey theory raised by Krueger~\cite[Problem 1.2]{Kr}. His question asks for the minimum number of colours in an edge-colouring of $K_n$ such that every copy of $P_t$, the path with $t$ vertices, where $t$ is odd, contains at least $\frac{t+1}{2}$ colours. In our context, the most closely related problem is to study lower bounds for the appearance, in proper edge-colourings, of the pattern consisting of two repeats of the path $P_{(t+1)/2}$ sharing an endpoint. At present, the best lower bound for Krueger's problem is $\widetilde{\Omega}(n^{3/2})$ for all $t \geq 9$. It would be interesting to determine if the exponent tends to $2$ as $t$ tends to infinity both for this problem and ours. \subsection*{Hypergraphs} Finally, we note that there are several ways to generalise our problem to the setting of $r$-uniform hypergraphs, depending on how we define a proper colouring. Indeed, for any $1\leq t<r$, one could ask that any two hyperedges sharing at least $t$ vertices receive distinct colours, resulting in a family of problems. It may also be interesting to study proper colourings of Steiner Triple Systems, looking for repeats of fixed linear $3$-graphs. \vspace{4mm} \section*{Acknowledgements} We are extremely grateful to Sean English and Bob Krueger for spotting an error in an earlier version of this paper and suggesting a fix.
https://arxiv.org/abs/0903.3068
Long-time asymptotics for fully nonlinear homogeneous parabolic equations
We study the long-time asymptotics of solutions of the uniformly parabolic equation \[ u_t + F(D^2u) = 0 \quad \text{in } \R^n\times \R_+, \] for a positively homogeneous operator $F$, subject to the initial condition $u(x,0) = g(x)$, under the assumption that $g$ does not change sign and possesses sufficient decay at infinity. We prove the existence of a unique positive solution $\Phi^+$ and negative solution $\Phi^-$, which satisfy the self-similarity relations \[ \Phi^\pm (x,t) = \lambda^{\alpha^\pm} \Phi^\pm (\lambda^{1/2} x, \lambda t). \] We prove that the rescaled limit of the solution of the Cauchy problem with nonnegative (nonpositive) initial data converges to $\Phi^+$ ($\Phi^-$) locally uniformly in $\R^n \times \R_+$. The anomalous exponents $\alpha^+$ and $\alpha^-$ are identified as the principal half-eigenvalues of a certain elliptic operator associated to $F$ in $\R^n$.
\section{Introduction and main results} The connection between the scaling invariance of the mathematical expressions for certain physical laws and the asymptotic behavior of physical phenomena is of fundamental importance to the study of mechanics. In this work, we have in mind the study of self-similar solutions of diffusion equations and their relationship to the long-time behavior of solutions to the Cauchy problem. A classical example is the heat equation \begin{equation}\label{eq:heatequation} u_t - \Delta u = 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+, \end{equation} which is invariant with respect to any scaling $(x,t) \mapsto \left(\sigma^{1/2} x , \sigma t\right)$, with $\sigma > 0$. It is well-known that a solution $u(x,t)$ of equation \EQ{heatequation} with nonnegative and integrable initial data will converge in a certain sense to a multiple of the Gaussian kernel \begin{equation*} \Phi(x,t) := (4\pi t)^{-\frac{n}{2}} e^{-\frac{|x|^2}{4t}}. \end{equation*} Precisely, the rescaled solutions $u^\sigma (x,t) := \sigma^{n/2} u\left( \sigma^{1/2} x, \sigma t\right)$ will converge locally uniformly to $C \Phi(x,t)$, where the constant $C$ is given by \begin{equation*} C = \int_{\ensuremath{\mathbb{R}}^n} u(x,0) \, dx. \end{equation*} The Gaussian kernel $\Phi$ satisfies the relation \begin{equation*} \Phi(x,t) = \sigma^{\frac{n}{2}} \Phi\left( \sigma^{1/2} x, \sigma t\right), \end{equation*} and for this reason $\Phi$ is called a \emph{self-similar solution} of \EQ{heatequation}. \medskip Another important example of a diffusion equation is the generalized porous medium equation \begin{equation}\label{eq:porous-medium-equation} u_t - \Delta \Psi(u) = 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+, \end{equation} where $\Psi$ is an increasing function of $u$. 
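For the record, the self-similarity relation satisfied by the Gaussian kernel above is a direct substitution:
\begin{equation*}
\sigma^{\frac{n}{2}} \Phi\left( \sigma^{1/2} x, \sigma t\right) = \sigma^{\frac{n}{2}} (4\pi \sigma t)^{-\frac{n}{2}} e^{-\frac{|\sigma^{1/2}x|^2}{4\sigma t}} = (4\pi t)^{-\frac{n}{2}} e^{-\frac{|x|^2}{4t}} = \Phi(x,t).
\end{equation*}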
The study of self-similar solutions of \EQ{porous-medium-equation} and the long-time asymptotics of solutions to the Cauchy problem is well-developed, particularly in the case that $\Psi(u) = u^m$ for $m>1$. It originated with the celebrated work of Barenblatt, Zel'dovich, and Kompaneets, and was later developed by Friedman, Kamin, V\'azquez, and others (see V\'azquez \cite{Vazquez:Book} for a well-written introduction to the subject as well as a comprehensive list of references). \medskip While the literature on self-similar solutions and asymptotics of diffusion equations is vast, relatively little is known in the case of the fully nonlinear parabolic equation \begin{equation} \label{eq:fully-nonlinear-parabolic} u_t + F(D^2u) = 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+. \end{equation} Here $F$ is a positively homogeneous, uniformly elliptic operator. Important examples of \EQ{fully-nonlinear-parabolic} include the parabolic Bellman and parabolic Isaacs equations, which arise in the theory of stochastic optimal control and stochastic differential games, respectively. In this article, we show that equation \EQ{fully-nonlinear-parabolic} possesses a unique positive and a unique negative self-similar solution, and that these self-similar solutions characterize the long-time asymptotic behavior of solutions of the Cauchy problem with initial data which does not change sign. \begin{thm}\label{thm:existence-self-similar-solutions} Assume that $F$ satisfies \EQ{Felliptic} and \EQ{Fhomogeneous}, below.
Then there exist unique constants $\alpha^+=\alpha^+(F)>0$ and $\alpha^-=\alpha^-(F) > 0$, for which the uniformly parabolic equation \EQ{fully-nonlinear-parabolic} possesses solutions $\Phi^+ > 0$ and $\Phi^- < 0$, satisfying the relations \begin{equation*} \Phi^+(x,t) = \sigma^{\alpha^+} \Phi^+\left(\sigma^{1/2} x, \sigma t\right) \quad \mbox{and} \quad \Phi^-(x,t) = \sigma^{\alpha^-} \Phi^-\left(\sigma^{1/2} x, \sigma t\right) \end{equation*} for all $\sigma > 0$, and such that for some constants $C, a> 0$, \begin{equation*} \Phi^+(x,1) , - \Phi^-(x,1) \leq C \exp( - a|x|^2) \quad \mbox{for all} \ x \in \ensuremath{\mathbb{R}}^n. \end{equation*} Moreover, the solutions $\Phi^+$ and $\Phi^-$ are unique up to multiplication by a positive constant. \end{thm} We call the numbers $\alpha^+$ and $\alpha^-$ the \emph{positive} and \emph{negative anomalous exponents} of the operator $F$, respectively. In contrast to the case of a linear parabolic equation, $\alpha^\pm \neq n/2$ and $\alpha^+ \neq \alpha^-$, in general. We will see in \SEC{existence} that they are the principal half-eigenvalues of an elliptic operator in $\ensuremath{\mathbb{R}}^n$. The functions $\Phi^+(\cdot,1)$ and $\Phi^-(\cdot,1)$ are the corresponding principal half-eigenfunctions. Our second main result characterizes the long-time behavior of solutions of equation \EQ{fully-nonlinear-parabolic} with initial data which does not change sign and exhibits sufficient decay at infinity. \begin{thm} \label{thm:convergence-self-similar-solutions} Let the hypotheses of \THM{existence-self-similar-solutions} be in force, and consider a viscosity solution $u \in C(\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+)$ of the uniformly parabolic equation \EQ{fully-nonlinear-parabolic} such that $|u(x,0)| \leq C_0\exp (-B|x|^2)$ for some constants $B,C_0>0$. 
If $u(\cdot,0) \geq 0$ and $u(\cdot,0) \not\equiv 0$, then there exists a constant $C^* > 0$ such that the rescaled solutions given by \begin{equation*} u^\sigma (x,t):= \sigma^{\alpha^+} u \left( \sigma^{1/2} x, \sigma t\right), \quad (x,t) \in \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+ \end{equation*} converge locally uniformly in $\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+$ as $\sigma \to \infty$ to the function $C^*\Phi^+$. Likewise, if $u(\cdot,0) \leq 0$, $u(\cdot,0) \not \equiv 0$, then there exists a constant $C^* > 0$ such that the functions \begin{equation*} u^\sigma (x,t):= \sigma^{\alpha^-} u \left( \sigma^{1/2} x, \sigma t\right), \quad (x,t) \in \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+ \end{equation*} converge locally uniformly to $C^* \Phi^-$ as $\sigma \to \infty$. \end{thm} \medskip The \emph{Barenblatt equation of elasto-plastic filtration} \begin{equation}\label{eq:Barenblatt} u_t - \max\left\{ \frac{\Delta u}{1-\gamma}, \frac{\Delta u}{1+\gamma} \right\} = 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+, \end{equation} where $0 < \gamma < 1$, is a particular example of a fully nonlinear parabolic equation of the form \EQ{fully-nonlinear-parabolic}. It arises in filtration theory as a model for an elastic fluid flowing through an irreversibly deformable elasto-plastic porous medium (see Barenblatt, Entov, and Ryzhik \cite{Barenblatt:Book:1990}). Kamin, Peletier and V\'azquez \cite{Kamin:1991} proved Theorems \ref{thm:existence-self-similar-solutions} and \ref{thm:convergence-self-similar-solutions} for equation \EQ{Barenblatt}, using different methods from those employed in this paper. Their argument makes use of the rotational invariance of the equation to reduce the problem to an ODE. The authors then employ a shooting argument in the phase plane to demonstrate the existence of self-similar solutions of equation \EQ{Barenblatt}. 
Their proof of the asymptotic convergence of rescaled solutions of the Cauchy problem to a multiple of the self-similar solution relies on a careful analysis of the self-similar solution at infinity. These estimates also rely on ODE theory in a crucial way, and in particular some clever applications of l'H\^opital's rule. \medskip Our proof of \THM{convergence-self-similar-solutions} employs the ``four-step method'' of Kamin and V\'azquez \cite{Kamin:1988} in a new setting. The idea is to squeeze the rescaled functions $u^\sigma$ between two multiples of the self-similar solution $\Phi^\pm$, and to show that the gap between them must vanish asymptotically. As in \cite{Kamin:1991}, the primary difficulty in carrying out this analysis is controlling the behavior of the solutions as $t^{-1/2} |x| \to \infty$. This is overcome by the construction of a special comparison function. \medskip Self-similar solutions of equation \EQ{Barenblatt} have also been considered by Caffarelli and Stefanelli \cite{Caffarelli:2008}, and the dependence of the anomalous exponents on the parameter $\gamma$ has been examined by Goldenfeld, Martin, Oono, and Liu \cite{Goldenfeld:1990} as well as Aronson and V\'azquez \cite{Aronson:1995}. We also wish to mention the interesting work of Hulshof and V\'azquez \cite{Hulshof:1996}, who studied the long-time asymptotic behavior of viscosity solutions of an equation similar to \EQ{Barenblatt}, but with $\Delta(u^m)$ replacing both instances of $\Delta u$. As we were completing a final revision of this paper for publication, we became aware of a new preprint by Meneses and Quaas \cite{Quaas:preprint} in which the existence assertions of \THM{existence-self-similar-solutions} are also obtained. \medskip In \SEC{notation} we state our hypotheses and recall the definition of a viscosity solution. \SEC{existence} contains the proof of \THM{existence-self-similar-solutions}, and in \SEC{asymptotics} we prove \THM{convergence-self-similar-solutions}.
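\medskip Before proceeding, we note that the Barenblatt equation \EQ{Barenblatt} is indeed of the form \EQ{fully-nonlinear-parabolic}, with
\begin{equation*}
F(M) := -\max\left\{ \frac{\trace(M)}{1-\gamma}, \frac{\trace(M)}{1+\gamma} \right\} = \min_{\beta \in \{\lambda, \Lambda\}} \left[ -\beta \trace(M) \right], \qquad \lambda := \frac{1}{1+\gamma}, \quad \Lambda := \frac{1}{1-\gamma}.
\end{equation*}
The homogeneity hypothesis \EQ{Fhomogeneous}, below, is immediate from this formula, while the uniform ellipticity hypothesis \EQ{Felliptic} follows from the observation that each of the linear maps $M \mapsto -\beta \trace(M) = -\trace(\beta I \, M)$ with $\beta \in [\lambda, \Lambda]$ lies between the Pucci extremal operators defined in \SEC{notation}.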
\section{Notation and Hypotheses}\label{sec:notation} Throughout this paper, we denote $(0,\infty)$ by $\ensuremath{\mathbb{R}}_+$. The set of $n$-by-$n$ real symmetric matrices is $\ensuremath{\mathbb{S}^n}$. For $M \in \ensuremath{\mathbb{S}^n}$ and $0 < \lambda \leq \Lambda$, define the operators \begin{equation*} \Puccisub{\ellip}{\Ellip}(M) := \sup_{A\in \llbracket\lambda,\Lambda\rrbracket} \left[ - \trace(AM) \right] \quad \mbox{and} \quad \puccisub{\ellip}{\Ellip} (M) := \inf_{A\in \llbracket\lambda,\Lambda\rrbracket} \left[ - \trace(AM) \right], \end{equation*} where $\llbracket\lambda,\Lambda\rrbracket \subseteq \ensuremath{\mathbb{S}^n}$ is the set of positive definite matrices with eigenvalues contained in the interval $[ \lambda, \Lambda]$. The nonlinear operators $\Puccisub{\ellip}{\Ellip}$ and $\puccisub{\ellip}{\Ellip}$ are called the \emph{Pucci maximal} and \emph{minimal operators}, respectively. For ease of notation, we will often drop the subscripts and write $\mathcal{P}^+$ and $\mathcal{P}^-$. A convenient way to write the Pucci extremal operators is \begin{equation}\label{eq:pucci-nice-form} \mathcal{P}^+(M) = -\lambda \sum_{\mu_j > 0} \mu_j - \Lambda \sum_{\mu_j < 0} \mu_j \quad \mbox{and} \quad \mathcal{P}^-(M) = -\Lambda \sum_{\mu_j > 0} \mu_j - \lambda \sum_{\mu_j < 0} \mu_j, \end{equation} where $\mu_1, \ldots, \mu_n$ are the eigenvalues of $M$. \medskip We require our operator $F:\ensuremath{\mathbb{S}^n} \rightarrow \ensuremath{\mathbb{R}}$ to be uniformly elliptic in the sense that there exist constants $0<\lambda \leq \Lambda$, such that \begin{equation} \label{eq:Felliptic} \puccisub{\lambda}{\Lambda}(M-N) \leq F(M) - F(N) \leq \Puccisub{\lambda}{\Lambda}(M-N) \quad \mbox{for all} \quad M,N\in \ensuremath{\mathbb{S}^n}. \end{equation} We also require $F$ to be positively homogeneous of order one: \begin{equation}\label{eq:Fhomogeneous} F(\eta M) = \eta F(M)\quad \mbox{for all} \quad M \in \ensuremath{\mathbb{S}^n}, \ \eta \geq 0. 
\end{equation} \begin{remark}\label{rem:its-all-the-same} If $F$ satisfies hypotheses \EQ{Felliptic} and \EQ{Fhomogeneous}, then the operator $\tilde{F}(M):= - F( -M)$ satisfies \EQ{Felliptic} and \EQ{Fhomogeneous} as well. This observation will simplify the proofs of our main results. For example, the existence of the negative self-similar solution for $F$ can be deduced from the existence of the positive self-similar solution for $\tilde{F}$, and it is clear that $\alpha^-(F) = \alpha^+(\tilde{F})$. Likewise, to prove \THM{convergence-self-similar-solutions}, we need only assume nonnegative initial data and show convergence to a multiple of the positive self-similar solution, as the statement for nonpositive initial data then follows by applying this case to $\tilde{F}$. \end{remark} Every differential equation and differential inequality in this paper is assumed to be satisfied in the viscosity sense. An introduction to the theory of viscosity solutions can be found in Crandall, Ishii, and Lions \cite{UsersGuide}. For the convenience of the reader, we now recall the definition of a viscosity solution. \begin{definition} Let $G$ be an elliptic nonlinear operator, and $\Omega$ an open set in $\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+$. A function $u\in C(\Omega)$ is a \emph{viscosity subsolution (supersolution)} of the parabolic equation \begin{equation}\label{eq:defviscsol} u_t + G(D^2u,Du,u,x) = 0 \quad \mbox{in} \ \Omega \end{equation} if whenever $x_0\in \Omega$ and $\varphi\in C^2(\Omega)$ are such that \begin{equation*} x \mapsto u(x) - \varphi(x) \quad \mbox{has a local maximum (minimum) at} \quad x_0, \end{equation*} we have \begin{equation*} \varphi_t(x_0) + G(D^2\varphi(x_0),D\varphi(x_0),u(x_0),x_0) \leq \ (\geq)\ 0. \end{equation*} We say that $u$ is a \emph{viscosity solution} of \EQ{defviscsol} if it is both a viscosity subsolution and supersolution of \EQ{defviscsol}. Viscosity (sub/super)solutions of elliptic equations are defined analogously.
The precise meaning of the differential inequality \begin{equation*} u_t + G(D^2u,Du,u,x) \leq \ (\geq) \ 0 \quad \mbox{in} \ \Omega \end{equation*} is that $u$ is a viscosity subsolution (supersolution) of \EQ{defviscsol}. \end{definition} Several times in this article, we will make use of the strong maximum principle and estimates for viscosity solutions of uniformly parabolic equations. For these facts we refer to Wang \cite{Wang:1992a} and Crandall, Kocan, and \'Swi\c{e}ch \cite{Crandall:2000}. Analogous results for uniformly elliptic equations can be found, for example, in Caffarelli and Cabr\'e \cite{Caffarelli:Book}, Trudinger \cite{Trudinger:1988}, and Winter \cite{Winter:2009}. \section{Existence of self-similar solutions} \label{sec:existence} In this section we prove \THM{existence-self-similar-solutions}. To motivate our approach, suppose that there exists a solution $\Phi > 0$ of equation \EQ{fully-nonlinear-parabolic} and an exponent $\alpha > 0$ which satisfy the relation \begin{equation}\label{eq:self-similarity-relation} \Phi(x,t) = \sigma^{\alpha} \Phi\left( \sigma^{1/2} x, \sigma t\right) \quad \mbox{for every} \ (x,t) \in \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+, \ \sigma > 0. \end{equation} For $t=1$ this reads \begin{equation} \label{eq:motivation-existence} \Phi(x,1) = \sigma^{\alpha} \Phi\left( \sigma^{1/2} x, \sigma \right). \end{equation} Formally differentiate \EQ{motivation-existence} with respect to $\sigma$ at $\sigma =1$ to discover that \begin{equation} \label{eq:motivation-existence-1} \Phi_t \left( x, 1\right) + \alpha \Phi(x,1) + \frac{1}{2} x \cdot D \Phi(x,1) = 0. \end{equation} Inserting \EQ{motivation-existence-1} into the PDE \EQ{fully-nonlinear-parabolic} and rearranging, we derive the equation \begin{equation*} F\left( D^2 \Phi(x,1) \right) - \frac{1}{2} x \cdot D\Phi(x,1) = \alpha \Phi(x,1) \quad \mbox{for all} \ x \in \ensuremath{\mathbb{R}}^n.
\end{equation*} Defining $\varphi(y) := \Phi(y,1)$, we see that the pair $(\alpha,\varphi)$ is a solution of the elliptic eigenvalue problem \begin{equation} \label{eq:elliptic-eigenvalue-Rn} F\left( D^2\varphi \right) - \frac{1}{2} y \cdot D\varphi = \alpha \varphi \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation} To prove \THM{existence-self-similar-solutions}, we will reverse the calculation above. Studying the elliptic eigenvalue problem \EQ{elliptic-eigenvalue-Rn}, we will show that there exist $\alpha > 0$ and $\varphi > 0$ which satisfy \EQ{elliptic-eigenvalue-Rn} and such that $\varphi$ decays at a suitable rate as $|y| \to \infty$. We will then define $\Phi$ by \begin{equation*} \Phi(x,t) := t^{-\alpha} \varphi\left( t^{-1/2} x \right), \end{equation*} and check that $\Phi$ is a solution of \EQ{fully-nonlinear-parabolic} which also satisfies \EQ{self-similarity-relation}. \medskip Another way to derive \EQ{elliptic-eigenvalue-Rn} is to consider the \emph{continuous rescaling} of \EQ{fully-nonlinear-parabolic} given by the change of variables \begin{equation*} y = t^{-1/2} x, \ s = \log t, \ \varphi(y,s) = t^{\alpha} \Phi(x,t), \end{equation*} and then to look for a stationary solution $\varphi(y,s) = \varphi(y)$. Thus the appearance of \EQ{elliptic-eigenvalue-Rn} in our context is similar to the emergence of the so-called Fokker-Planck equation in the study of self-similar solutions of the porous medium equation (see V\'azquez \cite[Section 18.4]{Vazquez:Book}). An elliptic equation similar to \EQ{elliptic-eigenvalue-Rn} also arises in the study of the self-similar solutions of the semilinear parabolic equation \begin{equation*} u_t - \Delta u = | u|^{p-1} u \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+. \end{equation*} See the work of Peletier, Terman, and Weissler \cite{Peletier:1986} and Haraux and Weissler \cite{Haraux:1982}.
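\medskip Returning to the definition of $\Phi$ above, let us record for the reader's convenience the formal computation verifying that $\Phi(x,t) = t^{-\alpha}\varphi(t^{-1/2}x)$ solves \EQ{fully-nonlinear-parabolic} whenever $(\alpha,\varphi)$ satisfies \EQ{elliptic-eigenvalue-Rn}; if $\varphi$ is not smooth, the same computation is performed in the viscosity sense with smooth test functions. Writing $y = t^{-1/2}x$, we compute
\begin{equation*}
\Phi_t(x,t) = t^{-\alpha-1} \left( -\alpha \varphi(y) - \frac{1}{2}\, y \cdot D\varphi(y) \right) \quad \mbox{and} \quad D^2\Phi(x,t) = t^{-\alpha-1} D^2\varphi(y),
\end{equation*}
and hence, using the homogeneity \EQ{Fhomogeneous} of $F$,
\begin{equation*}
\Phi_t + F(D^2\Phi) = t^{-\alpha-1} \left( F\left(D^2\varphi(y)\right) - \frac{1}{2}\, y \cdot D\varphi(y) - \alpha \varphi(y) \right) = 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_+.
\end{equation*}
The self-similarity relation \EQ{self-similarity-relation} follows at once from the definition of $\Phi$, since $\sigma^{\alpha}\Phi(\sigma^{1/2}x, \sigma t) = \sigma^{\alpha} (\sigma t)^{-\alpha} \varphi\left( (\sigma t)^{-1/2} \sigma^{1/2} x \right) = \Phi(x,t)$.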
\medskip The theory of principal eigenvalues of fully nonlinear operators goes back to the work of Lions \cite{Lions:1983d}, who used stochastic methods to study the Bellman equation in a bounded domain. Recently, several authors, including Birindelli and Demengel \cite{Birindelli:2006,Birindelli:2007}, Ishii and Yoshimura \cite{Ishii:preprint}, and Quaas and Sirakov \cite{Quaas:2008}, have studied the principal eigenvalues of more general fully nonlinear operators in bounded domains (see also \cite{Armstrong:2009}). Our methods in this section are similar to those in these works, although extra complications arise from the unboundedness of $\ensuremath{\mathbb{R}}^n$. The special form of \EQ{elliptic-eigenvalue-Rn}, particularly the gradient term, provides us with enough compactness to overcome these obstacles. \begin{lem}\label{lem:SDMP} Let $\alpha \in \ensuremath{\mathbb{R}}$, $r>0$, and assume that $u\in C(\ensuremath{\mathbb{R}}^n\backslash B_r)$ satisfies \begin{equation}\label{eq:SDMP} \left\{ \begin{aligned} & \puccisub{\ellip}{\Ellip}(D^2u) - \frac{1}{2} y\cdot Du \leq \alpha u & \mbox{in} & \ \ensuremath{\mathbb{R}}^n\backslash B_r, \\ & u \leq 0 & \mbox{on} & \ \partial B_r, \end{aligned} \right. \end{equation} and $u(y) \leq C e^{-a|y|^p}$ for some constants $C,a> 0$ and $p>1$. Then there exists a constant $R=R(\alpha,\Lambda)>0$ such that $r\geq R$ implies that $u\leq 0$. \end{lem} \proof Set $R:= 2(\alpha + \Lambda+1)$. A routine calculation (the eigenvalues of $D^2\varphi(y)$ are $\varphi(y)$, with multiplicity $1$, and $-|y|^{-1}\varphi(y)$, with multiplicity $n-1$) verifies that the function $\varphi(y):= \exp(-|y|)$ satisfies \begin{equation}\label{eq:SDMP-1} \mathcal{P}^- \left(D^2\varphi(y) \right) - \frac{1}{2} y \cdot D\varphi(y) \geq \left( -\Lambda + |y|/2 \right) \varphi(y) \geq (\alpha + 1)\varphi(y) \quad \mbox{in} \ \{ |y| \geq R \}. \end{equation} Suppose on the contrary that there exists $|y_0| > r$ such that $u(y_0) > 0$. Let \begin{equation*} \eta := \inf \left\{ \rho > 0 : \rho \varphi \geq u \ \mbox{in} \ \ensuremath{\mathbb{R}}^n \backslash B_r \right\}.
\end{equation*} Then $\eta > 0$, $\eta \varphi \geq u$ in $\ensuremath{\mathbb{R}}^n \backslash B_r$, and owing to the faster decay rate of $u$, there exists $|y_1| > r$ such that $\eta \varphi(y_1) = u(y_1)$. In particular, the map \begin{equation*} y \mapsto u(y) - \eta\varphi(y) \quad \mbox{has a local maximum at} \ y = y_1. \end{equation*} Recalling that $u$ is a viscosity solution of \EQ{SDMP}, we see that \begin{equation*} \mathcal{P}^-\left(\eta D^2\varphi(y_1) \right) - \eta \frac{1}{2} y_1\cdot D\varphi(y_1) \leq \alpha u(y_1) = \eta \alpha \varphi(y_1) < \eta (\alpha + 1) \varphi(y_1), \end{equation*} a contradiction to \EQ{SDMP-1}. \qed \medskip The next lemma is an analogue of \cite[Theorem 3.3]{Armstrong:2009}, adapted to our setting. The result goes back to observations due to Berestycki, Nirenberg and Varadhan \cite{Berestycki:1994}, and earlier versions have been developed by Quaas and Sirakov \cite{Quaas:2008}, and Ishii and Yoshimura \cite{Ishii:preprint} to study the principal half-eigenvalues of fully nonlinear elliptic operators in bounded domains. It is a comparison tool which is essential to our analysis in this section. \begin{lem}\label{lem:parabolic-HCP} Assume that $u,v,f \in C(\ensuremath{\mathbb{R}}^n)$, $f\geq 0$, and $\alpha \in \ensuremath{\mathbb{R}}$ satisfy \begin{equation*} F(D^2u) - \frac{1}{2} y\cdot Du - \alpha u \leq f \leq F(D^2v) - \frac{1}{2} y\cdot Dv - \alpha v \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation*} Suppose also that $v>0$ and $u(y) \leq C\exp(-a |y|^p)$ for some constants $C,a>0$, $p>1$, and every $y\in \ensuremath{\mathbb{R}}^n$. Then either $u \leq v$ in $\ensuremath{\mathbb{R}}^n$, or $u \equiv t v$ for some constant $t > 1$. \end{lem} \proof Let $R = R(\alpha,\Lambda)$ be as in \LEM{SDMP}. For $s\geq 1$, define $w_s:= u - sv$. Then \begin{align*} \puccisub{\lambda}{\Lambda}(D^2w_s) - \frac{1}{2} y \cdot Dw_s - \alpha w_s \leq f - sf \leq 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n.
\end{align*} For any $s \geq 1$ so large that $w_s < 0$ on $\bar{B}_R$, \LEM{SDMP} implies that $w_s \leq 0$ on $\ensuremath{\mathbb{R}}^n \backslash B_R$. Hopf's lemma then implies that $w_s < 0$ in $\ensuremath{\mathbb{R}}^n$. Define \begin{equation*} t := \inf\left\{ s \geq 1: w_s < 0 \ \mbox{in} \ \ensuremath{\mathbb{R}}^n \right\}. \end{equation*} Then $t\geq 1$ and $w_t \leq 0$. If $t = 1$, then $u \leq v$ in $\ensuremath{\mathbb{R}}^n$, and we have nothing left to show. Suppose that $t > 1$. We claim that $w_t \equiv 0$. If not, then by Hopf's lemma, $w_t < 0$. Select $\delta > 0$ so small that $t - \delta > 1$ and $w_{t-\delta} < 0$ on $\bar{B}_R$. Then $w_{t-\delta} < 0$ on $\ensuremath{\mathbb{R}}^n$, as we argued above. This contradicts the definition of $t$, completing the proof. \qed \begin{cor}\label{cor:parabolic-HCP} Suppose that $u,v \in C(\ensuremath{\mathbb{R}}^n)$ and $\alpha \in \ensuremath{\mathbb{R}}$ satisfy \begin{equation*} F(D^2u) - \frac{1}{2} y\cdot Du - \alpha u \leq 0 \leq F(D^2v) - \frac{1}{2} y\cdot Dv - \alpha v \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation*} Suppose also that $v>0$ and $u(y) \leq C\exp(-a |y|^p)$ for some constants $C,a>0$, $p>1$, and every $y\in \ensuremath{\mathbb{R}}^n$. Then either $u \leq 0$ in $\ensuremath{\mathbb{R}}^n$, or $u \equiv t v$ for some constant $t > 0$. \end{cor} \proof Suppose that $u(y_0) > 0$ for some $y_0\in \ensuremath{\mathbb{R}}^n$. By the homogeneity of $F$, the function $cu$ satisfies the same differential inequality as $u$ for every constant $c > 0$. Choosing $c$ so large that $cu(y_0) > v(y_0)$ and applying \LEM{parabolic-HCP} (with $f\equiv 0$) to $cu$ and $v$, we deduce that $cu \equiv t' v$ for some constant $t' > 1$, and hence $u \equiv tv$ with $t := t'/c > 0$. \qed \medskip In our analysis, an important role will be played by the functions $\left\{ \phi^a \right\}_{a>0}$, which we define by \begin{equation*} \phi^a(y) := \exp\left(-a|y|^2\right).
\end{equation*} \begin{lem}\label{lem:rugged-calculations} For $a=(4\Lambda)^{-1}$, the function $\phi^a$ satisfies \begin{equation} \left( \frac{n\lambda}{2\Lambda}\right) \phi^a \leq \puccisub{\ellip}{\Ellip}(D^2\phi^a) - \frac{1}{2} y\cdot D\phi^a \leq \left( \frac{(n-1)\lambda}{2\Lambda} + \frac{1}{2} \right) \phi^a \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation} Likewise, for $a=(4\lambda)^{-1}$, the function $\phi^a$ satisfies \begin{equation} \left( \frac{(n-1)\Lambda}{2\lambda} + \frac{1}{2} \right) \phi^a \leq \Puccisub{\ellip}{\Ellip}(D^2\phi^a) - \frac{1}{2} y\cdot D \phi^a \leq \left( \frac{n\Lambda}{2\lambda}\right) \phi^a \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation} \end{lem} \proof For any $a> 0$, the Hessian of $\phi^a$ is given by \begin{equation*} D^2\phi^a(y) = \phi^a(y) \left( 4a^2y \otimes y - 2aI \right), \end{equation*} and has eigenvalues $(4a^2|y|^2 - 2a)\phi^a(y)$ with multiplicity $1$ and $-2a\phi^a(y)$ with multiplicity $n-1$. Hence \begin{equation*} \puccisub{\ellip}{\Ellip}(D^2\phi^a(y)) = \begin{cases} a \phi^a(y) (2 \lambda n - 4a\lambda |y|^2), & \mbox{for all} \ |y| \leq (2a)^{-\frac{1}{2}} , \\ a \phi^a(y) ( 2 \lambda (n-1) + 2\Lambda - 4a\Lambda |y|^2), & \mbox{for all} \ |y| > (2a)^{-\frac{1}{2}}. \end{cases} \end{equation*} Therefore, for each $a> 0$, \begin{equation}\label{eq:pucci-bound-alpha-1} \puccisub{\ellip}{\Ellip}(D^2\phi^a) - \frac{1}{2}y\cdot D\phi^a = a\left(2\lambda n - 4a\lambda |y|^2 + |y|^2\right) \phi^a\quad \mbox{for} \ |y| \leq (2a)^{-\frac{1}{2}}, \end{equation} and \begin{equation}\label{eq:pucci-bound-alpha-2} \puccisub{\ellip}{\Ellip}(D^2\phi^a) - \frac{1}{2}y\cdot D\phi^a = a\left(2\lambda (n-1) +2\Lambda - 4a\Lambda |y|^2 + |y|^2\right) \phi^a \quad \mbox{for} \ |y| \geq (2a)^{-\frac{1}{2}}.
\end{equation} In particular, for every $a\leq (4\Lambda)^{-1}$, we have \begin{equation}\label{eq:pucci-bound-alpha} \puccisub{\ellip}{\Ellip}(D^2\phi^a) - \frac{1}{2}y\cdot D\phi^a \geq (2a\lambda n) \phi^a \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation} For $a=(4\Lambda)^{-1}$ we carefully check that \begin{equation}\label{eq:pucci-alpha-other-bound} \puccisub{\ellip}{\Ellip}(D^2\phi^a) - \frac{1}{2}y\cdot D\phi^a \leq \left( \frac{(n-1)\lambda}{2\Lambda} + \frac{1}{2} \right) \phi^a \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation} Similar calculations demonstrate that \begin{equation}\label{eq:Pucci-bound-alpha-1} \Puccisub{\ellip}{\Ellip}(D^2\phi^a) - \frac{1}{2}y\cdot D\phi^a = a\left(2\Lambda n - 4a\Lambda |y|^2 + |y|^2\right) \phi^a\quad \mbox{for} \ |y| \leq (2a)^{-\frac{1}{2}}, \end{equation} and \begin{equation}\label{eq:Pucci-bound-alpha-2} \Puccisub{\ellip}{\Ellip}(D^2\phi^a) - \frac{1}{2}y\cdot D\phi^a = a\left(2\Lambda (n-1) +2\lambda - 4a\lambda |y|^2 + |y|^2\right) \phi^a \quad \mbox{for} \ |y| \geq (2a)^{-\frac{1}{2}}. \end{equation} Thus for any $a \geq (4\lambda)^{-1}$, we have \begin{equation}\label{eq:Pucci-bound-alpha} \Puccisub{\ellip}{\Ellip}(D^2\phi^a) - \frac{1}{2}y\cdot D\phi^a \leq (2a\Lambda n) \phi^a \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation} Finally, for $a = (4\lambda)^{-1}$, we calculate \begin{equation}\label{eq:Pucci-alpha-other-bound} \Puccisub{\ellip}{\Ellip}(D^2\phi^a) - \frac{1}{2} y \cdot D\phi^a \geq \left( \frac{(n-1)\Lambda}{2\lambda} + \frac{1}{2} \right) \phi^a \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation} \qed \medskip For the rest of this section, we let $\scon$ denote the special constant \begin{equation*} \scon := \frac{1}{8\Lambda}.
\end{equation*} From \EQ{pucci-bound-alpha-1} and \EQ{pucci-bound-alpha-2}, we see that $\phi^b$ satisfies \begin{equation}\label{eq:pucci-bound-alpha-b} \mathcal{P}^-(D^2\phi^b) - \frac{1}{2} y\cdot D\phi^b \geq \left( \frac{n\lambda}{4\Lambda} + \frac{1}{16\Lambda} |y|^2 \right) \phi^b \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation} Define a norm $\normx{\cdot}$ on $C(\ensuremath{\mathbb{R}}^n)$ by \begin{equation*} \normx{u} = \sup_{y\in \ensuremath{\mathbb{R}}^n} \frac{|u(y)|\exp\left(b |y|^2\right)}{1+|y|^2},\end{equation*} and let $X$ denote the Banach space \begin{equation*} X = \left\{ u \in C(\ensuremath{\mathbb{R}}^n) : \normx{u} < \infty \right\}. \end{equation*} Notice that convergence in $X$ implies uniform convergence in $\ensuremath{\mathbb{R}}^n$. Define the set \begin{equation*} \decayseta := \left\{ u \in X : 0 \leq u(y) \leq C \exp( -b |y|^2) \ \mbox{for some} \ C>0 \right\}. \end{equation*} Notice that $\decayseta$ is a convex subset of $X$. \begin{prop}\label{prop:unique-solution-allspace} For each $v \in X$ such that $v\geq 0$, there exists a unique solution $u \in X$ of the equation \begin{equation}\label{eq:unique-solution-allspace} F(D^2u) -\frac{1}{2} y\cdot Du = v \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation} Moreover, $u \in \decayseta$. \end{prop} \proof We will first demonstrate existence. For each $R> 0$, let $u_R$ be the unique solution of the Dirichlet problem \begin{equation*} \left\{ \begin{aligned} & F\left(D^2u_R\right) - \frac{1}{2}y \cdot Du_R = v & \mbox{in} & \ B_R,\\ & u_R = 0 & \mbox{on} & \ \partial B_R. \end{aligned} \right. \end{equation*} Set $K:= \normx{v} \max \left\{ 4\Lambda / n\lambda ,16\Lambda \right\}$. According to \EQ{pucci-bound-alpha-b}, the function $\psi:= K\phi^b$ is a supersolution of \EQ{unique-solution-allspace}; indeed, \begin{equation*} F(D^2\psi) - \frac{1}{2} y \cdot D\psi \geq \normx{v} \left( 1 + |y|^2 \right) \phi^b \geq v \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n.
\end{equation*} By the maximum principle, $0 \leq u_R \leq \psi = K \exp(-b|y|^2)$ for every $R>0$. Using local $C^\alpha$ estimates for uniformly elliptic equations (cf.\ \cite{Winter:2009}), we deduce that for each fixed $R_0> 0$, \begin{equation*} \sup_{ R > R_0 +1} \| u_R \|_{C^\alpha(B_{R_0})} < \infty. \end{equation*} Extend $u_R$ to be zero on $\ensuremath{\mathbb{R}}^n \backslash B_R$, and extract a subsequence $R_j \to \infty$ such that \begin{equation*} u_{R_j} \rightarrow u \quad \mbox{locally uniformly in} \ \ensuremath{\mathbb{R}}^n \end{equation*} for some function $u \in C(\ensuremath{\mathbb{R}}^n)$. Evidently $0 \leq u \leq \psi$, and thus $u \in \decayseta$. From the stability properties of viscosity solutions under uniform convergence, it follows that $u$ is a solution of equation \EQ{unique-solution-allspace}. Uniqueness follows from \COR{parabolic-HCP}. Indeed, if $u_1, u_2\in \decayseta$ are solutions of \EQ{unique-solution-allspace}, then the function $w:= u_1-u_2$ satisfies \begin{equation*} \mathcal{P}^-(D^2w) - \frac{1}{2} y\cdot Dw \leq 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation*} Comparing $w$ with $\phi^{2b}$ and using \COR{parabolic-HCP} (the alternative $w \equiv t \phi^{2b}$ with $t>0$ is ruled out by \EQ{pucci-bound-alpha}), we deduce that $w\leq 0$ in $\ensuremath{\mathbb{R}}^n$. Exchanging the roles of $u_1$ and $u_2$ gives $w \geq 0$ as well, and hence $u_1 \equiv u_2$. \qed \medskip We denote by $X_+$ the set \begin{equation*} X_+ = \left\{ u \in X : u\geq 0 \right\}. \end{equation*} Let $\mathcal{A}:X_+ \to X_+$ be the solution operator of \EQ{unique-solution-allspace}. That is, $\mathcal{A}(v) := u$, where $u$ is the unique solution of \EQ{unique-solution-allspace}. Then $\mathcal{A}\left( X_+ \right) \subseteq \decayseta$. It will be convenient to use the notation \begin{equation*} \Falpha{u} := F(D^2u) - \frac{1}{2} y\cdot Du - \alpha u.
\end{equation*} Define the constant \begin{equation}\label{eq:def-of-alpha-plus} \alpha^+(F) := \sup \left\{ \alpha : \mbox{there exists} \ \varphi \in X_+\backslash \{ 0\} \ \mbox{such that} \ \Falpha{\varphi} \geq 0 \ \mbox{in} \ \ensuremath{\mathbb{R}}^n \right\}. \end{equation} We call $\alpha^+(F)$ the \emph{positive anomalous exponent} of $F$. When there is no ambiguity, we will drop the dependence on $F$ and write $\alpha^+ = \alpha^+(F)$. From \COR{parabolic-HCP} and Hopf's lemma, it is clear that the anomalous exponent satisfies \begin{equation*} \alpha^+ \leq \inf\left\{ \alpha : \mbox{there exists} \ \varphi \in X_+\backslash \{ 0\} \ \mbox{such that} \ \Falpha{\varphi} \leq 0 \ \mbox{in} \ \ensuremath{\mathbb{R}}^n \right\}. \end{equation*} From \LEM{rugged-calculations} we see that \begin{equation}\label{eq:pucci-alpha-bounds} \frac{n\lambda}{2\Lambda} \leq \alpha^+\left( \puccisub{\ellip}{\Ellip} \right) \leq \frac{(n-1) \lambda + \Lambda}{2\Lambda} \leq \frac{n}{2} \leq \frac{(n-1) \Lambda + \lambda}{2\lambda} \leq \alpha^+(\Puccisub{\ellip}{\Ellip}) \leq \frac{n\Lambda}{2\lambda}. \end{equation} Moreover, if $\lambda \neq \Lambda$, then all of the inequalities in \EQ{pucci-alpha-bounds} are strict. If $F$ and $G$ are two uniformly elliptic, positively homogeneous operators such that $F \leq G$, then \begin{equation*} \alpha^+(F) \leq \alpha^+(G). \end{equation*} Thus from \EQ{pucci-alpha-bounds} we deduce that for every operator $F$ satisfying \EQ{Felliptic} and \EQ{Fhomogeneous}, \begin{equation} \frac{n\lambda}{2\Lambda} \leq \alpha^+(F) \leq \frac{n\Lambda}{2\lambda}. \end{equation} We will show that $(\alpha^+)^{-1}$ is an eigenvalue of the operator $\mathcal{A}$ possessing a positive eigenfunction, using the Leray-Schauder alternative. For the convenience of the reader, we first state this result. A proof can be found in \cite{Granas:Book}.
\begin{definition} If $Y$ and $Z$ are Banach spaces, we say a (possibly nonlinear) map $\mathcal{A}:Y \to Z$ is \emph{compact} if, for each bounded subset $B \subseteq Y$, the closure of the set $\{ \mathcal{A}(x) : x \in B \}$ is compact in $Z$. \end{definition} \begin{thm}[Leray-Schauder Alternative]\label{thm:leray-schauder-alt} Suppose $Y$ is a Banach space, and $C \subseteq Y$ is a convex subset of $Y$ such that $0 \in C$. Assume that $\mathcal{A}:C \to C$ is a (possibly nonlinear) function which is compact and continuous. Then at least one of the following holds: \begin{enumerate} \item the set $\{ x \in C \, : \, x = \mu \mathcal{A}(x) \mbox{ for some } 0 < \mu < 1 \}$ is unbounded in $Y$, \end{enumerate} or \begin{enumerate} \addtocounter{enumi}{1} \item there exists $x\in C$ for which $x = \mathcal{A}(x)$. \end{enumerate} \end{thm} In order to apply \THM{leray-schauder-alt} in our setting, we must verify that the nonlinear operator $\mathcal{A}$ is continuous and compact. \begin{prop} The operator $\mathcal{A}$ is continuous and compact with respect to $\normx{\cdot}$. \end{prop} \proof Let $\{ v_k \}_{k\geq 1} \subseteq X_+$ be such that $\normx{v_k} \leq 1$. Set $u_k := \mathcal{A}(v_k)$. Let $\varepsilon > 0$ be given, and fix a large constant $R> 0$ to be selected below. As in the proof of \PROP{unique-solution-allspace}, from \EQ{pucci-bound-alpha-1} and \EQ{pucci-bound-alpha-2} we see that the function $\psi:=M\phi^{b}$ satisfies \begin{equation*} \mathcal{P}^-(D^2\psi) - \frac{1}{2}y\cdot D\psi \geq \left( 1 + |y|^2 \right) \phi^{b} \geq v_k \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \end{equation*} for $M:= \max\left\{ 4\Lambda / n\lambda, 16\Lambda \right\}$ and every $k\geq 1$. It follows that $0 \leq u_k \leq \psi$ for every $k\geq 1$. Using local $C^\alpha$ estimates, we have \begin{equation*} \sup_{k \geq 1} \| u_k \|_{C^\alpha(B_R)} < \infty.
\end{equation*} Therefore, we may select a subsequence, which we also denote by $k$, such that \begin{equation*} \lim_{K\to \infty} \sup_{k,l \geq K} \| u_k - u_l \|_{L^\infty(B_R)} = 0. \end{equation*} Now take $R = \left( 2M/ \varepsilon\right)^{1/2}$. Then for any $|y| \geq R$ and $k,l \geq 1$, \begin{equation*} \frac{\left| u_k(y) - u_l (y)\right| \exp(b|y|^2)}{1+|y|^2} \leq \frac{2|\psi(y)| \exp(b|y|^2)}{1+R^2} = \frac{2M}{1+R^2} \leq \varepsilon. \end{equation*} It follows that \begin{equation*} \lim_{K\to \infty} \sup_{k,l \geq K} \normx{ u_k - u_l} \leq \varepsilon. \end{equation*} A diagonalizing procedure now produces a subsequence of $\{ u_k \}$ which is Cauchy in $X$. Therefore, $\mathcal{A}$ is compact. To see that $\mathcal{A}$ is continuous, suppose in addition that the sequence $v_k$ converges strongly in $X$ to a function $v\in X$. In particular, $v_k \rightarrow v$ uniformly in $\ensuremath{\mathbb{R}}^n$. We can find $u \in X$ and a subsequence $u_{k_j}$ such that $u_{k_j} \rightarrow u$ in $X$, and hence uniformly in $\ensuremath{\mathbb{R}}^n$. By the stability properties of viscosity solutions with respect to uniform convergence, it follows that $u=\mathcal{A}(v)$. By uniqueness, the full sequence $u_k$ converges to $u$. \qed \begin{prop}\label{prop:principal-eigenfunction-allspace} There exists a unique $\varphi^+ \in X$ such that $\varphi^+(0) = 1$ and \begin{equation}\label{eq:eigenvalue-allspace} F(D^2\varphi^+) - \frac{1}{2} y\cdot D\varphi^+ = \alpha^+ \varphi^+ \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation} Moreover, $\varphi^+ \in \decayseta \cap C^{1,\alpha}_{\mathrm{loc}}(\ensuremath{\mathbb{R}}^n)$, and $\varphi^+ \in C^{2,\alpha}_{\mathrm{loc}}(\ensuremath{\mathbb{R}}^n)$ if $F$ is concave or convex. \end{prop} \proof Select $w \in \decayseta$ such that $\normx{w} = 1$.
We claim that for any $\varepsilon > 0$, \begin{equation}\label{eq:existence-claim} \mbox{if} \ u \in \decayseta\ \mbox{and} \ \alpha\geq 0 \ \mbox{satisfy} \ u = \alpha \mathcal{A}(u+\varepsilon w), \ \mbox{then} \ \alpha \leq \alpha^+. \end{equation} Indeed, for such $\alpha \geq 0$ and $u \in \decayseta$, we have \begin{equation*} \Falpha{u} \geq 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation*} Since $w\not\equiv 0$, if $\alpha > 0$ then $u \not\equiv 0$. We see from definition \EQ{def-of-alpha-plus} that in this case $\alpha \leq \alpha^+$. Obviously if $\alpha = 0$, then $u\equiv 0$. Our claim \EQ{existence-claim} is confirmed. \smallskip We may now apply \THM{leray-schauder-alt} to deduce that for each $\varepsilon > 0$, the set \begin{equation*} D_\varepsilon := \left\{ u \in \decayseta : \mbox{there exists} \ 0 \leq \alpha \leq \alpha^+ + \varepsilon \ \mbox{such that} \ u = \alpha \mathcal{A}(u+\varepsilon w) \right\} \end{equation*} is unbounded in $X$. Indeed, by \EQ{existence-claim}, the map $u \mapsto (\alpha^+ + \varepsilon) \mathcal{A}(u+\varepsilon w)$ has no fixed point in $\decayseta$, and so the first alternative in the theorem must hold. Select $u_\varepsilon \in D_\varepsilon$ such that $\normx{u_\varepsilon} \geq 1$. Let $\alpha_\varepsilon\geq 0$ be such that \begin{equation*} u_\varepsilon = \alpha_\varepsilon \mathcal{A}(u_\varepsilon+\varepsilon w). \end{equation*} Evidently, $\alpha_\varepsilon > 0$. Normalize by setting $v_\varepsilon := u_\varepsilon / \normx{u_\varepsilon}$, and notice that by the homogeneity of $\mathcal{A}$, the function $v_\varepsilon$ satisfies \begin{equation*} v_\varepsilon = \alpha_\varepsilon \mathcal{A}\left(v_\varepsilon + \varepsilon w / \normx{u_\varepsilon} \right). \end{equation*} By the compactness of $\mathcal{A}$, we may select $\varphi^+ \in X$, a number $0 \leq \alpha^* \leq \alpha^+$, and a subsequence $\varepsilon_j\to 0$, such that \begin{equation*} v_{\varepsilon_j} \to \varphi^+ \ \mbox{in} \ X \quad \mbox{and} \quad \alpha_{\varepsilon_j} \to \alpha^*. \end{equation*} Since $\mathcal{A}$ is continuous, it follows that $\varphi^+ = \alpha^* \mathcal{A}(\varphi^+)$.
Thus $\varphi^+\in \decayseta$. Clearly $\normx{\varphi^+} = 1$, and thus $\alpha^* > 0$. By Hopf's Lemma, $\varphi^+> 0$ in $\ensuremath{\mathbb{R}}^n$. We will now argue that $\alpha^* = \alpha^+$, and that $\varphi^+$ is unique up to multiplication by a positive constant. Suppose that $\alpha \geq \alpha^*$ and $\psi \in X_+ \backslash \{ 0 \}$ are such that \begin{equation*} F(D^2\psi) - \frac{1}{2} y\cdot D\psi \geq \alpha \psi \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation*} Then \begin{equation} F_{\alpha^*} \left[ \varphi^+ \right] = 0 \leq \Falpha{\psi} \leq F_{\alpha^*} \left[ \psi \right]. \end{equation} According to \COR{parabolic-HCP}, we have $\psi \equiv t \varphi^+$ for some $t > 0$. This implies that $\alpha^* = \alpha$: by the positive homogeneity of $F$, we have $F_{\alpha^*}\left[ t\varphi^+ \right] = t F_{\alpha^*}\left[ \varphi^+ \right] = 0$, so that $\alpha \psi \leq F(D^2\psi) - \frac{1}{2} y\cdot D\psi = \alpha^* \psi$, and hence $\alpha \leq \alpha^*$. Recalling \EQ{def-of-alpha-plus}, we see that $\alpha^* \geq \alpha^+$. Recalling that by construction $\alpha^* \leq \alpha^+$, we deduce that $\alpha^* = \alpha^+$. Moreover, we have shown that $\varphi^+$ is unique up to multiplication by a positive constant. The last statement in the proposition follows from the standard regularity theory for uniformly elliptic equations (cf.\ \cite{Caffarelli:Book,Trudinger:1988}). \qed \proof[Proof of \THM{existence-self-similar-solutions}] Define \begin{equation} \Phi^+(x,t) := t^{-\alpha^+} \varphi^+ \left( \frac{x}{\sqrt{t}}\right). \end{equation} Assuming that $\Phi^+$ and $\varphi^+$ are smooth, and using \EQ{eigenvalue-allspace}, we easily verify that $\Phi^+$ is a solution of \EQ{fully-nonlinear-parabolic}. If $\Phi^+$ and $\varphi^+$ are not smooth, our calculation can be made rigorous in the viscosity sense by the use of smooth test functions. The uniqueness of $\alpha^+$ and $\Phi^+$ is established by performing this computation in reverse and appealing to \PROP{principal-eigenfunction-allspace}. All of the corresponding assertions regarding $\alpha^-$ and $\Phi^-$ now follow from \REM{its-all-the-same}.
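For completeness, the verification in the smooth case runs as follows; write $y = x/\sqrt{t}$ and $\alpha = \alpha^+$:

```latex
\begin{align*}
\partial_t \Phi^+(x,t) &= -t^{-\alpha-1}\left( \alpha\, \varphi^+(y) + \tfrac{1}{2}\, y\cdot D\varphi^+(y) \right), \\
F\!\left( D^2\Phi^+(x,t) \right) &= F\!\left( t^{-\alpha-1}\, D^2\varphi^+(y) \right) = t^{-\alpha-1}\, F\!\left( D^2\varphi^+(y) \right),
\end{align*}
using the positive homogeneity of $F$, and therefore
\begin{equation*}
\partial_t \Phi^+ + F\!\left(D^2\Phi^+\right) = t^{-\alpha-1} \left( F\!\left(D^2\varphi^+\right) - \tfrac{1}{2}\, y\cdot D\varphi^+ - \alpha\, \varphi^+ \right)\!(y) = 0
\end{equation*}
by \EQ{eigenvalue-allspace}.
```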
\qed \medskip We conclude this section with an estimate of our self-similar solution $\Phi^+$ from above and below, and an example. \begin{lem}\label{lem:bound-above-and-below} For each $0 < a < (4\Lambda)^{-1}$, there exists a constant $C>0$ such that \begin{equation}\label{eq:bounded-by-gaussian-above} \varphi^+(y) \leq C \exp\left( -a|y|^2 \right). \end{equation} Likewise, for each $a > (4\lambda)^{-1}$, there exists a constant $C>0$ such that \begin{equation}\label{eq:bounded-by-gaussian-below} \exp\left( -a|y|^2\right) \leq C \varphi^+(y). \end{equation} \end{lem} \proof By construction, since $\varphi^+ \in \decayseta$ we have that the estimate \EQ{bounded-by-gaussian-above} holds for $a = (8\Lambda)^{-1}$. We will therefore only show \EQ{bounded-by-gaussian-below}, as a similar argument obtains \EQ{bounded-by-gaussian-above} for all $a < (4\Lambda)^{-1}$. For $a > (4\lambda)^{-1}$, by \EQ{Pucci-bound-alpha-2} we have that \begin{equation*} \mathcal{P}^+(D^2 \phi^a) - \frac{1}{2} y\cdot D\phi^a \leq \alpha^+ \phi^a \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \backslash B_r, \end{equation*} provided that we take $r> 0$ so large that \begin{equation*} r^2 > (2a)^{-1} \quad \mbox{and} \quad r^2 \geq \frac{a\left( 2\Lambda(n-1) + 2\lambda\right) - \alpha^+}{a(4a\lambda -1)}. \end{equation*} Also take $r>R$, where $R = R\left(\alpha^+,\Lambda\right)$ is the constant in \LEM{SDMP}. Let $C$ be so large that $\phi^a \leq C \varphi^+$ on $B_r$. Then the function $w:= \phi^a - C\varphi^+$ satisfies \begin{equation*} \mathcal{P}^-(D^2w) - \frac{1}{2} y\cdot Dw \leq \alpha^+ w \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \backslash B_r, \end{equation*} and $w\leq 0$ on $\partial B_r$. According to \LEM{SDMP}, we have $w\leq 0$ in $\ensuremath{\mathbb{R}}^n \backslash B_r$, and hence $\phi^a \leq C \varphi^+$ in $\ensuremath{\mathbb{R}}^n$.
\qed \begin{cor}\label{cor:bound-above-and-below} For each $0 < a_1 < (4\Lambda)^{-1} \leq (4\lambda)^{-1} < a_2$, there exists a constant $C>1$ such that \begin{equation}\label{eq:Phi-bound-above-and-below} C^{-1} t^{-\alpha^+} \exp \left( -a_2 |x|^2 / t\right) \leq \Phi^+(x,t) \leq C t^{-\alpha^+} \exp \left( -a_1 |x|^2 / t\right) \end{equation} for all $(x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. \end{cor} \begin{example} Consider the case that $F$ is convex. Then $F$ is the supremum of a collection of linear operators $L^k$ with constant coefficients, each of which satisfies \EQ{Felliptic} and \EQ{Fhomogeneous}. Since $\alpha^+(L^k) = \alpha^-(L^k) = n/2$ for every $k$, we deduce that \begin{equation*} \alpha^-(F) \leq \frac{n}{2} \leq \alpha^+(F). \end{equation*} We claim that these inequalities are strict unless $F$ is linear. Suppose that $\alpha^+(F) = n/2$; the case $\alpha^-(F) = n/2$ is handled by a symmetric argument. Let $\varphi$ and $\varphi_k$ be the functions obtained in \PROP{principal-eigenfunction-allspace} for $F$ and $L^k$, respectively. Notice that \begin{equation*} F_{n/2} \left[\varphi\right] = 0 = L^k_{n/2} \left[\varphi_k\right] \leq F_{n/2} \left[\varphi_k\right] \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation*} According to \COR{parabolic-HCP}, $\varphi \equiv \varphi_k$ for every $k$. That is, the fundamental solutions of the constant-coefficient linear parabolic operators $L^k$ are equal. This implies that the operators $L^k$ all coincide with a single linear operator $L$ (see Friedman \cite{Friedman:Book:1964}). Hence $F=L$ is linear. \end{example} \begin{remark} In the case that $F(M)$ depends only on the eigenvalues of $M$, we immediately deduce that $\varphi^+$ and hence $\Phi^+(\cdot, t)$ are radial functions. This follows from the invariance of the equation under an orthogonal change of variables, and \COR{parabolic-HCP}. In particular, the self-similar solutions corresponding to the operators $\mathcal{P}^+$ and $\mathcal{P}^-$ are radial.
\end{remark} \section{Asymptotic convergence to self-similar solutions} \label{sec:asymptotics} In this section, we present the proof of \THM{convergence-self-similar-solutions}. Owing to \REM{its-all-the-same}, we need only prove the first statement. For ease of notation, we write $\alpha = \alpha^+(F)$ and $\Phi = \Phi^+$. Fix a solution $u=u(x,t)$ of the equation \begin{equation}\label{eq:PDE} u_t + F(D^2u) = 0 \quad \mbox{in} \quad \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+, \end{equation} subject to the initial condition \begin{equation} u(x,0) = g(x). \end{equation} We require the initial data $g$ to be continuous, not identically zero, and to satisfy the condition \begin{equation*} 0 \leq g(x) \leq C_0 e^{-B|x|^2} \end{equation*} for some constants $B,C_0>0$. For $\sigma > 0$, we denote \begin{equation*} u^\sigma(x,t) := \left( \rescale{\sigma} u \right)(x,t) := \sigma^{\alpha} u\left( \sigma^{1/2} x , \sigma t\right). \end{equation*} For each $\sigma > 0$, the function $u^\sigma$ is a solution of \EQ{PDE}. \medskip We intend to show that as the parameter $\sigma \to \infty$, the rescaled solutions $u^\sigma$ converge locally uniformly in $\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$ to a positive multiple of $\Phi(x,t)$. Recall that $\Phi$ is invariant under $\rescale{\sigma}$: \begin{equation}\label{eq:Phi-invariant} \Phi(x,t) = \rescale{\sigma} \Phi(x,t) = \sigma^{\alpha} \Phi\left(\sigma^{1/2} x, \sigma t \right) \quad \mbox{for all} \quad (x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+, \ \sigma > 0. \end{equation} \medskip The proof of \THM{convergence-self-similar-solutions} will consist of a series of lemmas. As a preliminary step, we show that $u$ is bounded between positive multiples of $\Phi$, possibly shifted in time.
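Before turning to the lemmas, we record why $u^\sigma$ solves \EQ{PDE} and why the invariance \EQ{Phi-invariant} holds; we compute for smooth functions, the viscosity case following by applying the same computation to test functions:

```latex
\begin{equation*}
u^\sigma_t + F(D^2 u^\sigma)
= \sigma^{\alpha+1}\, u_t\!\left(\sigma^{1/2}x, \sigma t\right)
+ F\!\left( \sigma^{\alpha+1}\, (D^2u)\!\left(\sigma^{1/2}x, \sigma t\right) \right)
= \sigma^{\alpha+1} \left( u_t + F(D^2u) \right)\!\left(\sigma^{1/2}x, \sigma t\right) = 0,
\end{equation*}
by the positive homogeneity of $F$, while
\begin{equation*}
\sigma^{\alpha}\, \Phi\!\left(\sigma^{1/2} x, \sigma t\right)
= \sigma^{\alpha} (\sigma t)^{-\alpha}\, \varphi^+\!\left( \frac{\sigma^{1/2}x}{(\sigma t)^{1/2}} \right)
= t^{-\alpha}\, \varphi^+\!\left( \frac{x}{\sqrt{t}} \right) = \Phi(x,t).
\end{equation*}
```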
\begin{lem}\label{lem:bound-above-initial} For each $\tau > \frac{1}{4\lambda B}$ there exists a constant $C > 0$, depending only on $C_0$ and $\tau$, such that \begin{equation}\label{eq:bound-above-initial} u(x,t) \leq C \Phi(x,t + \tau) \quad \mbox{for all} \quad (x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+. \end{equation} \end{lem} \proof If $\tau > \frac{1}{4B\lambda}$, then according to \COR{bound-above-and-below}, \begin{equation*} \Phi(x,\tau) \geq c(\tau) e^{-B|x|^2}, \end{equation*} provided we choose $c(\tau) > 0$ small enough. Thus $C\Phi(x,\tau) \geq g(x)$ for $C:= C_0 / c(\tau)$ and $x\in \ensuremath{\mathbb{R}}^n$. The maximum principle implies that $C\Phi (x,t+\tau) \geq u(x,t)$ for all $(x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. \qed \begin{lem}\label{lem:bound-below-initial} For each $t_0,\tau > 0$, there exists $C > 0$, depending only on $B$, $C_0$, $t_0$, and $\tau$, such that \begin{equation}\label{eq:bound-below-initial} \Phi(x,t) \leq C u(x,t+\tau) \quad \mbox{for all} \quad x\in \ensuremath{\mathbb{R}}^n, \ t\geq t_0. \end{equation} \end{lem} \proof By the strong maximum principle, $u(x,\tau) > 0$ on $\ensuremath{\mathbb{R}}^n$. Let $C>0$ be so large that \begin{gather*} \Phi(x,t_0) \leq C u(x,t_0+\tau) \quad \mbox{for all} \ |x| \leq 1, \intertext{and} \Phi(x,t) \leq C u(x,t+\tau) \quad \mbox{for all} \ |x| = 1, \ 0 < t \leq t_0. \end{gather*} Applying the maximum principle, we have $\Phi(x,t_0) \leq C u(x,t_0 + \tau)$ for all $x \in \ensuremath{\mathbb{R}}^n$, and \EQ{bound-below-initial} follows from another application of the maximum principle. 
\qed \medskip For $\tau = 1/(2\lambda B)$, we use \EQ{Phi-invariant} to rewrite the inequality \EQ{bound-above-initial} in terms of $u^\sigma$ as \begin{equation}\label{eq:u-sigma-bounded-above-by-Phi} u^\sigma (x,t) \leq C \Phi(x, t+ \tau/ \sigma) \quad \mbox{for all} \quad (x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+, \ \sigma > 0. \end{equation} Recalling \EQ{Phi-bound-above-and-below}, we see that for some constant $C(t) > 0$ depending only on a lower bound for $t>0$, in addition to $B$ and $C_0$, we have the estimate \begin{equation}\label{eq:u-sigma-bounded-above} u^\sigma(x,t) \leq C(t) \exp \left( - \frac{|x|^2}{8\Lambda ( t + \tau / \sigma)} \right) \quad \mbox{for all} \quad (x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+, \ \sigma \geq 1. \end{equation} According to \EQ{u-sigma-bounded-above} and local H\"older estimates for solutions of uniformly parabolic equations (see Wang \cite[Theorem 4.19]{Wang:1992a}), we obtain \begin{equation*} \sup_{\sigma \geq 1} \| u^\sigma \|_{C^\gamma( \bar{Q} ) } < \infty \end{equation*} for some $0 < \gamma < 1$ and any compact parabolic domain $\bar{Q} \subseteq \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. Therefore, for every sequence $\sigma_k \to \infty$, we may select a function $U\in C(\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+)$ and a subsequence, also denoted by $\sigma_k$, such that $u^{\sigma_k} \to U$ locally uniformly in $\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. By the stability of viscosity solutions with respect to uniform convergence, each such rescaled limit $U$ is a solution of equation \EQ{PDE}. \medskip Let $\mathcal{S}$ denote the set of such sequential limits $\{ U \}$ of the family $\{ u^\sigma \}_{\sigma \geq 1}$. We will prove \THM{convergence-self-similar-solutions} by showing that $\mathcal{S}$ is a singleton set consisting only of a positive multiple of $\Phi$.
\begin{lem}\label{lem:convergence-squeeze} There exists a positive constant $C>0$ such that for all $U \in \mathcal{S}$, \begin{equation}\label{eq:convergence-squeeze} C^{-1}\Phi \leq U \leq C \Phi \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+. \end{equation} \end{lem} \proof From \EQ{u-sigma-bounded-above-by-Phi}, we see that for each $U \in \mathcal{S}$ and $(x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$, \begin{equation*} U(x,t) \leq \limsup_{\sigma \to \infty} u^\sigma ( x, t ) \leq C \Phi\left( x,t\right). \end{equation*} For the other direction, fix $t_0, \tau > 0$. According to \EQ{bound-below-initial}, for each $t > 0$ the inequality \begin{equation}\label{eq:convergence-squeeze-1} \sigma^{-\alpha} \Phi\left( \sigma^{1/2} x, \sigma t \right) \leq C \sigma^{-\alpha} u\left( \sigma^{1/2} x , \sigma t + \tau \right) \end{equation} holds for all $(x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$ and all sufficiently large $\sigma \geq 1$. Rewrite \EQ{convergence-squeeze-1} as \begin{equation*} \Phi(x,t) \leq C u^\sigma \left( x, t + \tau / \sigma\right). \end{equation*} Taking the limit of the right side as $\sigma \to \infty$ along the subsequence defining $U$, we see that $\Phi(x,t) \leq C U( x, t)$ for any $U \in \mathcal{S}$. \qed \medskip Notice that \LEM{convergence-squeeze} and local H\"older estimates imply that \begin{equation}\label{eq:limitset-estimate} \sup_{U \in \mathcal{S}} \left\| U \right\|_{C^\gamma(\bar{Q})} < \infty, \end{equation} for every compact subset $\bar{Q} \subseteq \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. \medskip Define the constant \begin{equation}\label{eq:Cstar} C^* : = \inf \left\{ C>0 : \mbox{there exists} \ U \in \mathcal{S} \ \mbox{such that} \ U \leq C \Phi \right\}. \end{equation} In light of \LEM{convergence-squeeze}, $0 < C^* < \infty$. We will eventually show that $\mathcal{S} = \left\{ C^* \Phi\right\}$. There are two basic steps in the proof.
First, we will show that $U \leq C^* \Phi$ for every $U \in \mathcal{S}$. Second, we show that if $U \not\equiv C^* \Phi$ for some $U \in \mathcal{S}$, then we can find another function $V \in \mathcal{S}$ and a small number $\delta > 0$ such that $V \leq (C^*- \delta) \Phi$, in contradiction to the definition \EQ{Cstar} of $C^*$. Most of the subtlety in the proofs of these statements arises from difficulties in managing the ``tails'' of $\Phi$. These obstructions are removed by the construction of a special subsolution, which we use as a comparison function. \begin{lem}\label{lem:special-subsolution} For each $a > \frac{1}{4\lambda}$, there exist $r,\eta >0$ and a subsolution $w = w(y,s)$ of the differential inequality \begin{equation} \label{eq:special-subsolution} w_s + \Puccisub{\ellip}{\Ellip}(D^2w) - \frac{1}{2} y\cdot Dw \leq 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+, \end{equation} satisfying the initial conditions \begin{equation*} w(y,0) \leq e^{-a|y|^2} \quad \mbox{for all} \ |y| \leq r, \quad \mbox{and} \quad w(y,0) \leq -\eta e^{-|y|} \quad \mbox{for all} \ |y| > r, \end{equation*} and such that for each $R>0$ there exists a time $S>0$ such that \begin{equation*} w(y,s) > 0 \quad \mbox{for every} \ |y| \leq R, \ s\geq S. \end{equation*} \end{lem} \proof Fix $a > (4\lambda)^{-1}$ and set $\varphi(y) := \phi^a (y) = e^{-a|y|^2}$. Recall from \EQ{Pucci-bound-alpha} that \begin{equation*} \mathcal{P}^+ ( D^2\varphi) - \frac{1}{2} y \cdot D\varphi \leq \left( 2a\Lambda n \right) \varphi \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n. \end{equation*} Let $\beta:= 1+ 2a\Lambda n$ and $r_1:= 2(\beta+ \Lambda+1)$, and define a function $\psi(y) = \min\left\{ e^{-r_1} , e^{-|y|} \right\}$.
Recalling \EQ{SDMP-1} and that the minimum of supersolutions is a supersolution in the viscosity sense, we see that \begin{equation*} \mathcal{P}^-(D^2\psi) - \frac{1}{2}y \cdot D\psi \geq 0 \quad \mbox{in} \ B_{r_1+1}, \end{equation*} and \begin{equation*} \mathcal{P}^-(D^2\psi ) - \frac{1}{2} y \cdot D\psi \geq (\beta + 1) \psi \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \backslash B_{r_1}. \end{equation*} Now define $\bar{\varphi}(y,s) := e^{-\beta s} \varphi(y)$ and $\bar{\psi}(y,s) := - e^{-(\beta+1)s} \psi(y)$. Then \begin{equation*} \bar{\varphi}_s + \mathcal{P}^+(D^2\bar{\varphi}) - \frac{1}{2} y\cdot D\bar{\varphi} \leq - \bar{\varphi} \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+, \end{equation*} and \begin{equation*} \bar{\psi}_s + \mathcal{P}^+(D^2\bar{\psi}) - \frac{1}{2} y\cdot D\bar{\psi} \leq (\beta+1)e^{-(\beta+1)s}e^{-r_1} \chi \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+, \end{equation*} where $\chi\equiv 1$ on $\bar{B}_{r_1}$ and $\chi\equiv 0$ on $\ensuremath{\mathbb{R}}^n \backslash \bar{B}_{r_1}$. Set \begin{equation*} \delta := \frac{1}{\beta+1} e^{r_1 - ar_1^2} > 0. \end{equation*} Then $\bar{\varphi}(y,s) \geq \delta (\beta+1) e^{-(\beta+1)s -r_1}$ for all $y\in \bar{B}_{r_1}$ and $s\geq 0$. Therefore, the function $w:= \bar{\varphi} + \delta \bar{\psi}$ satisfies \EQ{special-subsolution}. We now investigate the set of $(y,s)$ for which $w > 0$. For every $|y| > r_1$, \begin{equation*} w(y,s) = e^{-\beta s} \left( e^{-a|y|^2} - \delta e^{-s} e^{-|y|} \right). \end{equation*} From this expression, we observe that $w(y,s) > 0$ whenever $s > a|y|^2 - |y| + \log \delta$ and $|y|>r_1$. Finally, select $r >0$ large enough that $r \geq r_1$ and $a\rho^2 \geq \rho + \log\frac{2}{\delta}$ whenever $\rho \geq r$. This choice of $r$ ensures that \begin{equation*} w(y,0) \leq -\frac{\delta}{2} e^{-|y|} \quad \mbox{for all} \ |y| > r. \end{equation*} Taking $\eta := \delta /2$, the proof is complete.
\qed \begin{cor}\label{cor:magic} There exist $r,\eta > 0$ such that for any $R>0$ and any subsolution $u$ of \begin{equation*} u_t + \mathcal{P}^-(D^2u) \leq 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \times (1,\infty) \end{equation*} satisfying initial conditions \begin{equation*} u(x,1) \leq -1 \quad \mbox{for every} \ |x| \leq r, \quad \mbox{and} \quad u(x,1) \leq \eta e^{-|x|} \quad \mbox{for every} \ |x| > r, \end{equation*} there exists $T> 1$ such that \begin{equation*} u(x,t) < 0 \quad \mbox{for all} \quad t\geq T, \ |x| \leq R \sqrt{t}. \end{equation*} \end{cor} \proof Let $r$, $\eta$, and $w$ be as in \LEM{special-subsolution} for $a = \frac{1}{2\lambda}$. Define \begin{equation*} v(x,t) := - w\left( \frac{x}{\sqrt{t}}, \log t\right), \quad (x,t) \in \ensuremath{\mathbb{R}}^n \times (1,\infty). \end{equation*} Then $v$ is a supersolution of the equation \begin{equation*} v_t + \mathcal{P}^-(D^2v) \geq 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \times (1,\infty), \end{equation*} such that \begin{equation*} v(x,1) \geq \eta e^{-|x|} \quad \mbox{for every}\ |x| > r, \quad \mbox{and} \quad v(x,1) \geq - e^{-a|x|^2} \quad \mbox{for every}\ |x| \leq r. \end{equation*} In particular, $v \geq u$ at time $t=1$ and thus $v\geq u$ in $\ensuremath{\mathbb{R}}^n \times (1,\infty)$ by the maximum principle. According to the conclusion of \LEM{special-subsolution}, for each $R > 0$ there exists $S > 0$ such that \begin{equation*} v\left( x , t \right) = - w\left( t^{-1/2} x, \log t \right) < 0 \quad \mbox{for all} \quad t^{-1/2} | x | \leq R, \ \log t \geq S. \end{equation*} Hence the conclusion is obtained for $T = \exp(S)$. \qed \begin{lem}\label{lem:Cstar-upperbound} For any $U\in \mathcal{S}$, \begin{equation} U (x,t) \leq C^* \Phi(x,t) \quad \mbox{for all} \quad (x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+. \end{equation} \end{lem} \proof Let $r,\eta$ be as in \COR{magic} and fix a small number $\varepsilon > 0$. 
Recalling \EQ{u-sigma-bounded-above}, we may choose constants $C_1,a>0$ such that \begin{equation*} u^{\sigma} (x,1) \leq C_1 \exp(-a|x|^2) \quad \mbox{for all} \ x\in \ensuremath{\mathbb{R}}^n, \ \sigma \geq 1. \end{equation*} Set $m:= \min_{|x|\leq r} \Phi(x,1)$ and select $r_1\geq r$ such that \begin{equation*} \frac{C_1}{m\varepsilon} e^{-a|x|^2} \leq \eta e^{-|x|} \quad \mbox{for all} \quad |x| \geq r_1. \end{equation*} According to the definition \EQ{Cstar} of $C^*$, we may select $\sigma_1 \geq 1$ such that \begin{equation*} u^{\sigma_1} (x,1) \leq (C^* + \varepsilon) \Phi(x,1) \quad \mbox{for all} \quad |x| \leq r_1. \end{equation*} Define $w:= u^{\sigma_1} - (C^* + 2\varepsilon) \Phi$. Then $w$ is a subsolution of the parabolic equation \begin{equation*} w_t + \mathcal{P}^-(D^2w) \leq 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n \times (1,\infty), \end{equation*} and at time $t=1$ the function $w$ satisfies \begin{gather*} w(x,1) \leq - \varepsilon \Phi(x,1) \leq - m \varepsilon \quad \mbox{for all} \quad |x| \leq r, \\ w(x,1) \leq -\varepsilon \Phi(x,1) \leq 0 \quad \mbox{for all} \quad |x| \leq r_1, \intertext{and} w(x,1) \leq u^{\sigma_1}(x,1) \leq (m\varepsilon) \eta e^{-|x|} \quad \mbox{for all} \quad |x| > r_1. \end{gather*} According to \COR{magic}, for each $R>0$ there exists a time $T=T(R)> 1$ such that \begin{equation*} w(x, t) \leq 0 \quad \mbox{for all} \ |x| \leq Rt^{1/2} \ \mbox{and} \ t\geq T. \end{equation*} This reads \begin{equation*} \sigma_1^{\alpha}\, u\left(\sigma_1^{1/2} x, \sigma_1 t\right) \leq (C^* + 2\varepsilon) \Phi(x,t) \quad \mbox{for all} \ |x| \leq Rt^{1/2} \ \mbox{and} \ t\geq T. 
\end{equation*} Thus for any $\sigma > \sigma_1$, \begin{align*} u^\sigma(x,t) & = \sigma^\alpha u\left(\sigma^{1/2} x , \sigma t\right) \\ & = \sigma^\alpha u\left( \sigma_1^{1/2} (\sigma/ \sigma_1)^{1/2} x , \sigma_1 (\sigma/\sigma_1) t \right) \\ & \leq (\sigma/\sigma_1)^\alpha \left( C^* + 2\varepsilon \right) \Phi\left( (\sigma/ \sigma_1)^{1/2} x , (\sigma/\sigma_1) t \right) \\ & = \left( C^* + 2\varepsilon \right) \Phi\left( x,t \right) \end{align*} provided that $(\sigma/\sigma_1)^{1/2} |x| \leq R \left( \sigma t /\sigma_1 \right)^{1/2}$ and $\sigma t / \sigma_1 \geq T(R)$. In particular, for each $(x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$, there exists $\sigma_2 > \sigma_1$ large enough that \begin{equation*} u^\sigma(x,t) \leq \left( C^* + 2\varepsilon \right) \Phi\left( x,t \right) \quad \mbox{for all} \quad \sigma \geq \sigma_2. \end{equation*} It follows that for any $U \in \mathcal{S}$, \begin{equation*} U(x,t) \leq \left( C^* + 2\varepsilon \right) \Phi\left( x,t \right) \quad \mbox{for all} \quad (x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+. \end{equation*} The conclusion is obtained by sending $\varepsilon \to 0$. \qed \medskip To complete the proof of \THM{convergence-self-similar-solutions}, we will need elementary properties of $\mathcal{S}$ contained in the following two lemmas. \begin{lem}\label{lem:limitset-invariant} If $U \in \mathcal{S}$, then $\rescale{\sigma} U \in \mathcal{S}$ for any $\sigma > 0$. \end{lem} \proof Select a sequence $\sigma_j \to \infty$ such that $u^{\sigma_j} \rightarrow U$ locally uniformly in $\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. It is easy to check that for $\tilde{\sigma}_j := \sigma \sigma_j$, the sequence $u^{\tilde{\sigma}_j} \to \rescale{\sigma} U$ locally uniformly in $\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. 
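The check amounts to the identity

```latex
\begin{equation*}
u^{\tilde{\sigma}_j}(x,t)
= (\sigma\sigma_j)^{\alpha}\, u\!\left( (\sigma\sigma_j)^{1/2} x,\, \sigma\sigma_j t \right)
= \sigma^{\alpha}\, u^{\sigma_j}\!\left( \sigma^{1/2} x,\, \sigma t \right)
= \left( \rescale{\sigma}\, u^{\sigma_j} \right)(x,t),
\end{equation*}
```

together with the observation that $\rescale{\sigma}$ preserves local uniform convergence.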
\qed \begin{lem}\label{lem:limitset-closed} The set $\mathcal{S}$ is closed in the topology of local uniform convergence. \end{lem} \proof Let $U_j \in \mathcal{S}$ be such that $U_j \rightarrow U$ locally uniformly in $\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. Fix a compact subset $\bar{Q}$ of $\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$, and select $\sigma_j \to \infty$ such that \begin{equation*} \sup_{\bar{Q}} \left| u^{\sigma_j} - U_j \right| \leq 2^{-j}. \end{equation*} It is clear that $u^{\sigma_j}$ converges to $U$ as $j\to \infty$, uniformly on $\bar{Q}$. Now a diagonalization argument produces a sequence $\tilde{\sigma}_j \to \infty$ for which the functions $u^{\tilde{\sigma}_j}$ converge to $U$ locally uniformly in $\ensuremath{\mathbb{R}}^n\times \ensuremath{\mathbb{R}}_+$, as $j\to \infty$. \qed \begin{lem}\label{lem:Cstar-lowerbound} Suppose that $U \in \mathcal{S}$ and $C>0$ are such that $U \leq C \Phi$. Then either $U \equiv C\Phi$ or there exist $\delta > 0$ and $V \in \mathcal{S}$ such that $V \leq (C-\delta) \Phi$. \end{lem} \proof Suppose that $U\in \mathcal{S}$ and $C> 0$ are such that $U \leq C \Phi$, but $U \not\equiv C\Phi$ in $\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. By the strong maximum principle, $U(x,t) < C\Phi(x,t)$ for every $(x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. Let $r,\eta> 0$ be as in \LEM{special-subsolution}, and choose $\varepsilon > 0$ so small that \begin{equation*} U(x,1) \leq (C-\varepsilon) \Phi(x,1) \quad \mbox{for every} \ |x|<r. \end{equation*} Let $m:= \min_{|x|\leq r} \Phi(x,1)$. Select $\delta > 0$ small enough that \begin{equation*} \frac{\varepsilon - \delta}{\delta^{1/2}} > \frac{\eta}{m} \quad \mbox{and} \quad \delta^{1/2} \Phi(x,1) \leq e^{-|x|} \ \mbox{for all} \ |x| > r.
\end{equation*} Denote by $w$ the function \begin{equation*} w(x,t) := \frac{1}{\delta^{1/2}} \left( U(x,t) - (C-\delta) \Phi(x,t) \right), \end{equation*} which is a subsolution of the equation \begin{equation*} w_t + \mathcal{P}^-(D^2w) \leq 0 \quad \mbox{in} \ \ensuremath{\mathbb{R}}^n\times \ensuremath{\mathbb{R}}_+. \end{equation*} Moreover, \begin{equation*} w(x,1) \leq \delta^{1/2} \Phi(x,1) \leq e^{-|x|} \quad \mbox{for every} \ |x| > r, \end{equation*} and \begin{equation*} w(x,1) \leq -\frac{(\varepsilon-\delta)}{\delta^{1/2}}\Phi(x,1) \leq -\eta \quad \mbox{for every} \ |x| \leq r. \end{equation*} According to \COR{magic}, for any $R> 0$ there exists $T(R)> 1$ such that \begin{equation*} U(x,t) \leq \left( C - \delta \right) \Phi(x,t) \quad \mbox{provided that} \ |x|\leq R\sqrt{t} \ \mbox{and} \ t\geq T(R). \end{equation*} It follows that for each $(x,t) \in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$ there exists $\sigma' > 1$ large enough that \begin{equation*} \rescale{\sigma} U (x,t) \leq \left(C - \delta \right) \Phi(x,t) \quad \mbox{for all}\ \sigma \geq \sigma'. \end{equation*} According to \LEM{limitset-invariant}, $\rescale{\sigma} U \in \mathcal{S}$. Recalling \EQ{limitset-estimate}, we may select $V \in C(\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+)$ such that up to a subsequence, $\rescale{\sigma} U\rightarrow V$ locally uniformly in $\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_+$. It is clear that \begin{equation*} V \leq \left( C - \delta \right) \Phi. \end{equation*} According to \LEM{limitset-closed}, $V\in \mathcal{S}$. \qed \proof[Proof of \THM{convergence-self-similar-solutions}] According to Lemmas \ref{lem:Cstar-upperbound} and \ref{lem:Cstar-lowerbound} and the definition \EQ{Cstar} of the constant $C^*$, the function $C^* \Phi$ is the only element of $\mathcal{S}$. The proof of \THM{convergence-self-similar-solutions} is complete.
\qed \begin{remark} Let us repeat a remark made in \cite{Kamin:1991}. If we express the constant $C^*$ obtained in \THM{convergence-self-similar-solutions} as a function of the initial data, $C^* = C^*\!\left[ g \right]$, we see immediately that a nonnegative solution $u$ of \EQ{fully-nonlinear-parabolic} has the property that $t\mapsto C^*\!\left[ u(\cdot , t) \right]$ is constant. We thereby deduce a conservation law for our fully nonlinear equation, generalizing the conservation of mass in the case of a linear operator. It would be interesting to discover more information about $C^*\!\left[ g \right]$ in the general nonlinear case. What is this conserved quantity? \end{remark} \section{Acknowledgements} The authors would like to express their appreciation to their thesis advisor, Lawrence C. Evans, for his advice and guidance, and to thank the Department of Mathematics of UC Berkeley for its support. We also thank Juan Luis V\'azquez for his valuable comments and references, and Grigory Barenblatt for helpful comments. We are also indebted to an anonymous referee whose helpful comments greatly improved this article. \bibliographystyle{plain}
https://arxiv.org/abs/0903.3068
Long-time asymptotics for fully nonlinear homogeneous parabolic equations
We study the long-time asymptotics of solutions of the uniformly parabolic equation $u_t + F(D^2u) = 0$ in $\mathbb{R}^n\times \mathbb{R}_+$, for a positively homogeneous operator $F$, subject to the initial condition $u(x,0) = g(x)$, under the assumption that $g$ does not change sign and possesses sufficient decay at infinity. We prove the existence of a unique positive solution $\Phi^+$ and negative solution $\Phi^-$, which satisfy the self-similarity relations $\Phi^\pm (x,t) = \lambda^{\alpha^\pm} \Phi^\pm (\lambda^{1/2} x, \lambda t)$. We prove that the rescaled limit of the solution of the Cauchy problem with nonnegative (nonpositive) initial data converges to $\Phi^+$ ($\Phi^-$) locally uniformly in $\mathbb{R}^n \times \mathbb{R}_+$. The anomalous exponents $\alpha^+$ and $\alpha^-$ are identified as the principal half-eigenvalues of a certain elliptic operator associated to $F$ in $\mathbb{R}^n$.
https://arxiv.org/abs/1008.3854
Asymptotics of the maximal and the typical dimensions of isotypic components of tensor representations of the symmetric group
Vershik and Kerov gave asymptotic bounds for the maximal and the typical dimensions of irreducible representations of symmetric groups $S_n$. It was conjectured by G. Olshanski that the maximal and the typical dimensions of the isotypic components of tensor representations of the symmetric group admit similar asymptotic bounds. The main result of this article is the proof of this conjecture. Consider the natural representation of $S_n$ on $(\mathbb{C}^N)^{\otimes n}$. Its isotypic components are parametrized by Young diagrams with $n$ cells and at most $N$ rows. P. Biane found the limit shape of Young diagrams when $n\rightarrow\infty,\ \sqrt{n}/N\rightarrow c$. By showing that this limit shape is the unique solution to a variational problem, it is proven here that, after scaling, the maximal and the typical dimensions of isotypic components lie between positive constants. A new proof of Biane's limit-shape theorem is obtained.
\section{Introduction} For $n\in\mathbb{N}$, let $S_n$ be the symmetric group on $n$ letters. The complex finite-dimensional irreducible representations of $S_n$ are parametrized by the set of partitions of $n$, or equivalently by the set $\mathbb{Y}^n$ of Young diagrams with $n$ cells. Since each irreducible representation of $S_n$ appears in the left regular representation $\mathbb{C} S_n$ of $S_n$ with multiplicity equal to its dimension \cite{FultonHarris}, it follows that $$ n!=\sum_{\lambda\in\mathbb{Y}^n}\left(\dim \lambda\right)^2, $$ where $\dim \lambda$ is the dimension of the irreducible representation $V_\lambda$ of $S_n$ corresponding to the Young diagram $\lambda$. Thus, the measure defined by $$ \mathbb{P}l^n(\lambda)=\frac{(\dim\lambda)^2}{n!} $$ is a probability measure on $\mathbb{Y}^n$. $\mathbb{P}l^n$ is called the Plancherel measure. If $V$ is an irreducible subrepresentation of a representation $U$ of a finite group, the isotypic component of $U$ corresponding to $V$ is defined to be the sum of all subrepresentations of $U$ which are isomorphic to $V$. Isotypic components of $U$ are subrepresentations of $U$. $U$ decomposes uniquely into a direct sum of isotypic components. Note that, following a widely used convention, whenever there is no ambiguity in the action, we will identify a representation with the underlying space. It is easy to see that for $\lambda\in\mathbb{Y}^n$, $\mathbb{P}l^n(\lambda)$ is the relative dimension of the isotypic component of the regular representation corresponding to $\lambda$. Two natural questions can be posed about the asymptotics of the dimensions of irreducible representations of the symmetric group: \begin{question} What is the asymptotic behavior of the maximal dimension of irreducible representations of $S_n$ in the limit $n\rightarrow\infty$?
\end{question} \begin{question} What is the asymptotic behavior of the dimension of a typical irreducible representation $V_\lambda$ of $S_n$ in the limit $n\rightarrow\infty$ if $\lambda$ is sampled randomly according to the Plancherel measure? \end{question} In 1985 Vershik and Kerov \cite{VK85} gave answers to both questions by obtaining two-sided logarithmically order-sharp asymptotic bounds. Vershik and Kerov conjectured that in the case of the typical dimension a stronger result holds: after appropriate scaling the dimensions of typical irreducible representations converge to a constant in measure. The conjecture has recently been proven by A. Bufetov \cite{Bu}. \subsection{Main results} The main results of this article are two-sided logarithmically order-sharp asymptotic bounds for the dimensions of isotypic components of tensor representations of $S_n$. Let $N,n$ be two positive integers and consider the tensor product space $(\mathbb{C}^N)^{\otimes n}$. The tensor representation of order $N$ of the symmetric group on $n$ letters is the natural action of $S_n$ on this space by permuting the factors in the tensor product. In coordinates, if $(v_1,v_2,\ldots,v_n)\in(\mathbb{C}^N)^{\otimes n}$ and $\pi\in S_n$, then $$ \pi\cdot(v_1,v_2,\ldots,v_n)=(v_{\pi^{-1}(1)},v_{\pi^{-1}(2)},\ldots,v_{\pi^{-1}(n)}). $$ It follows from Schur--Weyl duality (see Section \ref{sec:SchurWeyl}) that the irreducible representations which are subrepresentations of the representation $(\mathbb{C}^N)^{\otimes n}$ are exactly the ones which correspond to Young diagrams with $n$ cells and at most $N$ rows. Let $\mathbb{Y}_N^n$ denote the set of such Young diagrams, and given $\lambda\in\mathbb{Y}_N^n$ let $E_\lambda$ denote the isotypic component of $(\mathbb{C}^N)^{\otimes n}$ corresponding to $V_\lambda$. Looking at dimensions we have \begin{equation} N^n=\sum_{\lambda\in\mathbb{Y}_N^n}\dim E_\lambda. 
\end{equation} The relative dimensions of the isotypic components give a probability measure on $\mathbb{Y}_N^n$: \begin{equation*} \mathbb{P}_N^n(\lambda)=\frac{\dim(E_\lambda)}{N^n}. \end{equation*} The main results of this article are the following two theorems, conjectured by G. Olshanski, on the asymptotics of the dimensions of the isotypic components of tensor representations of the symmetric group in the limit $n,N\rightarrow\infty$, $\sqrt{n}/{N}\rightarrow c$. \begin{theorem} \label{thm:max} For any $c>0$ there exist positive numbers $\alpha_c$ and $\beta$ such that for large enough $n\in\mathbb{N}$ and for any $N\in\mathbb{N}$, if $c>{\sqrt{n}}/{N}$, then \begin{equation} \alpha_c<-\frac{1}{\sqrt{n}}\ln\frac{\max_{\lambda\in\mathbb{Y}_N^n}\{\dim E_\lambda\}}{N^n}<\beta. \end{equation} \end{theorem} \begin{theorem} \label{thm:meas} For any $c>0$ there exist positive numbers $\alpha_c$ and $\beta$ such that if $$\lim\limits_{n\rightarrow \infty}\frac {\sqrt{n}}{N}=c,$$ then \begin{equation} \lim_{n\rightarrow \infty}\mathbb{P}_N^n\left\{\lambda:\alpha_c<-\frac{1}{\sqrt{n}}\ln\frac{\dim E_\lambda}{N^n}<\beta\right\}=1. \end{equation} \end{theorem} Note that the constants obtained in this article for both theorems are the same. In Section \ref{sec:MainProofs} we obtain exact formulas for the constants $\alpha_c$ and $\beta$. Also note that the bounds we obtain are not claimed to be sharp. \subsection{Limit shape results} \label{sec:LimitShapes} The motivation behind considering the limit ${\sqrt{n}}/{N}\rightarrow c$ is a limit shape result by Vershik and Kerov \cite{VK77} and independently and simultaneously by Logan and Shepp \cite{LoganShepp} for random Young diagrams with respect to the Plancherel measure. The result has been generalized to the measures $\mathbb{P}_N^n$ by P. Biane \cite{Biane2001}. To state the results, we first need to introduce some notation.
Represent a Young diagram $\lambda$ with $n$ cells as a sequence $\lambda=(\lambda_1\geq\lambda_2\geq\ldots)$ where $\lambda_i\in\mathbb{Z}_{\geq 0}$ and $\sum\lambda_i=n$. Associate with $\lambda$ its diagram as shown in Figure \ref{fig:Diagram}. Here the longest row consists of $\lambda_1$ squares of size $1$, the next longest one of $\lambda_2$ squares, and so on. \begin{figure}[ht] \centering \includegraphics[width=6cm]{FigDiagram} \caption{\label{fig:Diagram} The Young diagram $\lambda=(12,8,5,4,2,1,0,0,\dots)$.} \end{figure} Scale the picture down by a factor of $\sqrt{2n}$ in both directions so that the diagram has area $1/2$, and let $x$ and $y$ be the horizontal and vertical coordinates respectively. Rotate the scaled diagram by $\pi/4$ radians as in Figure \ref{fig:RotatedDiagram}. Let $X,Y$ be the horizontal and vertical coordinates in the rotated picture. We have $X=(x-y)/{\sqrt{2}}$ and $Y=(x+y)/{\sqrt{2}}$. Let $L_\lambda(X)$ be the function giving the top boundary of the rotated scaled diagram. $L_\lambda(X)$ is a piecewise linear function of slopes $\pm 1$ such that $L_\lambda(X)=|X|$ for $|X|\gg 1$. If $D_\lambda$ represents the interior of the scaled \begin{figure}[ht] \centering \includegraphics[width=7cm]{FigRotatedDiagram} \caption{\label{fig:RotatedDiagram} A rotated Young diagram.} \end{figure} Young diagram, then in the $(X,Y)$ coordinate system it can be characterized as $$ D_\lambda=\{(X,Y):|X|\leq Y\leq L_\lambda(X)\}. $$ In 1977 Vershik and Kerov and independently and simultaneously Logan and Shepp proved that the scaled random Young diagrams have a limit shape.
\begin{theorem}[Vershik and Kerov \cite{VK77}, Logan and Shepp \cite{LoganShepp}] \label{thm:VKLS} For any $\varepsilon>0$ $$\lim_{n\rightarrow\infty} \mathbb{P}l^n\{\lambda\in\mathbb{Y}^n:|L_\lambda(X)-\Omega(X)|\leq\varepsilon, \forall X\in\mathbb{R}\}=1,$$ where $\Omega(X)$ is given by \begin{equation*} \Omega(X)= \left\{\begin{array}{cl} \frac 2\pi\left(\sqrt{1-X^2}+X\arcsin(X)\right),& |X|\leq 1,\\ |X|,&|X|>1. \end{array}\right. \end{equation*} \end{theorem} Note that $\Omega(X)$ has the rather simple derivative $$\Omega'(X)=\frac 2{\pi}\arcsin(X)\text{ for }|X|\leq1.$$ The measure $\mathbb{P}_N^n$ is a deformation of the Plancherel measure in the following way. When $N\geq n$, $(\mathbb{C}^N)^{\otimes n}$ contains a copy of the regular representation of $S_n$, whence a copy of each irreducible representation of $S_n$. As a consequence, when $N\geq n$, $\mathbb{Y}_N^n$ coincides with the set $\mathbb{Y}^n$ of all Young diagrams on $n$ cells. Moreover, in the limit $N\rightarrow \infty$ the measure $\mathbb{P}_N^n$ converges to the Plancherel measure on $\mathbb{Y}^n$ (see, for example, \cite[Section 3]{OlshNotes}). It follows from Theorem \ref{thm:VKLS} that the number of rows in a typical (with respect to the Plancherel measure) Young diagram with $n$ cells is of order $\sqrt{2n}$ \cite{VK85}. Thus, when studying asymptotic properties of Young diagrams with a restricted number of rows, sampled according to the deformed Plancherel measures $\mathbb{P}_N^n$, it is natural to consider the limit when the restriction on the number of rows grows on the order of the square root of the number of cells. P. Biane \cite{Biane2001} generalized the limit shape result for the Plancherel measure to the measures $\mathbb{P}_N^n$.
Using methods of free probability theory he showed that in the limit $n,N\rightarrow\infty$, when ${\sqrt{n}}/{N}\rightarrow c\in[0,\infty)$, the shape of a typical scaled Young diagram chosen from $\mathbb{Y}_N^n$ according to the measure $\mathbb{P}_N^n$ converges to a curve $\Omega_c(s)$ which is a continuous deformation (depending on $c$) of the limit shape found in \cite{VK77} and \cite{LoganShepp}. When $c=0$, the limit shape $\Omega_c(s)$ is the Vershik-Kerov-Logan-Shepp limit shape: $\Omega_0(s)=\Omega(s)$. This is not surprising, since $c=0$ implies $N\gg\sqrt{2n}$, which means that the restriction on the number of rows in the Young diagrams $\lambda\in\mathbb{Y}_N^n$ is very weak. Before we state Biane's theorem exactly, let us define the function $\Omega_c(s)$. For $2s\in[c-2,c+2]$ define \begin{multline*} h(c,s):=\frac{2}{\pi} \(s \arcsin\(\frac{2 s + c}{2 \sqrt{1 + 2 s c}}\) \right.\\\left. +\frac{1}{2 c} \arccos\(\frac{2 + 2 s c - c^2}{2 \sqrt{1 + 2 s c}}\)+\frac 14 \sqrt{4 - (2 s - c)^2}\) \end{multline*} when $0<c<\infty$ and extend it continuously to $c=0$: $$ h(0,s)=\frac{2}{\pi} \(s \arcsin(s)+\sqrt{1 - s^2}\). $$ Define the function $\Omega_c(s)$ as follows: \begin{figure}[t] \centering \includegraphics[width=12cm]{FigOmegaC} \caption{\label{fig:Omegac}Graphs of $\Omega_c(s)$ for $c=0,0.5,1,2.5$.} \end{figure} \begin{equation*} \Omega_c(s)=\left\{ \begin{array}{cl} h(c,s),&2s\in[c-2,c+2], \\|s|,&2s\notin[c-2,c+2] \end{array}\right. \end{equation*} if $0\leq c\leq 1$, and \begin{equation*} \Omega_c(s)=\left\{ \begin{array}{cl} s+\frac 1c ,&2s\in[-\frac 1c,c-2], \\h(c,s),&2s\in[c-2,c+2], \\|s|,&2s\notin[-\frac 1c,c+2], \end{array}\right. \end{equation*} if $c>1$. See Figure \ref{fig:Omegac} for graphs of the functions $\Omega_c(s)$ for several values of $c$. The graphs of the functions $\Omega_c(s)$ intersect the graph of $|s|$ at two points. All the intersections are tangential except the intersections on the left side for $c\geq 1$. 
At the left intersection point the graph of $\Omega_1(s)$ has slope $0$, while the graph of $\Omega_c(s)$ when $c>1$ has slope $1$. Notice that $\Omega_c$ has a rather simple derivative: \begin{equation*} \Omega_c'(s)=\frac 2\pi \arcsin\(\frac{c+2s}{2\sqrt{1+2cs}}\) \end{equation*} for $2s\in[c-2,c+2]$. \begin{theorem}[P. Biane \cite{Biane2001}, Theorem 3] \label{thm:Biane} For any $\varepsilon>0$ $$\lim_{n,N\rightarrow\infty,\frac{\sqrt{n}}N\rightarrow c} \mathbb{P}_N^n\{\lambda\in\mathbb{Y}_N^n:|L_\lambda(X)-\Omega_c(X)|\leq\varepsilon, \forall X\in\mathbb{R}\}=1.$$ \end{theorem} In this article we obtain a new proof of Biane's theorem. \subsection{Outline of the article} The first step is to obtain multiplicative formulas for the dimensions $\dim E_\lambda$. Schur--Weyl duality gives a characterization of $E_\lambda$ in terms of irreducible representations $V_\lambda$ and $W_\lambda$ of $S_n$ and the general linear group $GL(N,\mathbb{C})$ respectively, allowing us to express $\dim E_\lambda$ in terms of $\dim V_\lambda$ and $\dim W_\lambda$. For the dimensions of irreducible representations of $S_n$ we use the hook formula. For the dimensions of those irreducible representations of $GL(N,\mathbb{C})$ which appear in Schur--Weyl duality there are well-known multiplicative formulas (see Section \ref{sec:SchurWeyl}), which we use. Taking the logarithm of $\dim E_\lambda$ turns these multiplicative formulas into sums. The second step is to go from sums to integrals and calculate the correction terms, which we do in Section \ref{sec:IntegralFormula}. For the dimensions of irreducible representations of $S_n$ this was done by Vershik and Kerov \cite{VK85}. The third and most difficult step is to prove that the integral part of the resulting formula has a unique minimizer and to calculate the quadratic variation. The integral part can be viewed as a functional of the boundary function $L_\lambda$.
In Section \ref{sec:UniqueMinimizer} we prove that the function $\Omega_c$ is the unique minimizer of this functional and prove that the quadratic variation is given by the $\frac 12$--Sobolev norm of $L_\lambda-\Omega_c$. In Section \ref{sec:MainProofs} we present the proofs of the main theorems. \subsection{Acknowledgements} I am very grateful to Alexander Bufetov for many useful discussions on the subject. I am also very grateful to Grigori Olshanski for communicating this problem to me and for comments and suggestions. \section{Schur--Weyl Duality} \label{sec:SchurWeyl} Notice that the general linear group $GL(N,\mathbb{C})$ also acts naturally on the tensor product space $(\mathbb{C}^N)^{\otimes n}$. In coordinates, if $(v_1,v_2,\ldots,v_n)\in(\mathbb{C}^N)^{\otimes n}$ and $A\in GL(N,\mathbb{C})$, then $$ A\cdot(v_1,v_2,\ldots,v_n)=(Av_1,Av_2,\ldots,Av_n). $$ It is easy to see that the actions of $S_n$ and $GL(N,\mathbb{C})$ commute. These actions give embeddings $S_n\hookrightarrow\End\left((\mathbb{C}^N)^{\otimes n}\right)$ and $GL(N,\mathbb{C})\hookrightarrow\End\left((\mathbb{C}^N)^{\otimes n}\right)$. Let $\mathfrak{a}_{S_n}$ and $\mathfrak{a}_{GL(N,\mathbb{C})}$ be the subalgebras of $\End\left((\mathbb{C}^N)^{\otimes n}\right)$ generated by the images of $S_n$ and $GL(N,\mathbb{C})$ respectively. Schur--Weyl duality \cite{W,FultonHarris} asserts that the subalgebras $\mathfrak{a}_{S_n}$ and $\mathfrak{a}_{GL(N,\mathbb{C})}$ are centralizers of each other in $\End\left((\mathbb{C}^N)^{\otimes n}\right)$. It follows \cite{FultonHarris} that the space $(\mathbb{C}^N)^{\otimes n}$ decomposes into a direct sum of tensor products of irreducible representations of the groups $S_n$ and $GL(N,\mathbb{C})$: \begin{equation*} (\mathbb{C}^N)^{\otimes n}=\bigoplus_{i\in I} V_i\otimes W_i, \end{equation*} where the $V_i$-s are irreducible representations of $S_n$ and the $W_i$-s are irreducible representations of $GL(N,\mathbb{C})$.
Moreover, given $i\in I$, the isotypic component of $(\mathbb{C}^N)^{\otimes n}$ corresponding to $V_i$ is $V_i\otimes W_i$, and the same is true for $W_i$. It can also be obtained from Schur--Weyl duality \cite{FultonHarris} that the index set $I$ is the set $\mathbb{Y}_N^n$ of Young diagrams with $n$ cells and at most $N$ rows, and that given $\lambda\in I=\mathbb{Y}_N^n$, $W_\lambda$ is the irreducible highest weight representation of $GL(N,\mathbb{C})$ with highest weight $\lambda=(\lambda_1\geq\lambda_2\geq\ldots\geq\lambda_N)$. As mentioned above, the isotypic components $E_\lambda$ are $E_\lambda = V_\lambda\otimes W_\lambda$. Thus, we have $\dim E_\lambda = \dim V_\lambda \cdot \dim W_\lambda$. The dimensions of the representations $V_\lambda$ are given by the hook formula. Given a Young diagram $\lambda$ and a pair of natural numbers $(i,j)$ we will say that $(i,j)\in\lambda$ if $j\leq \lambda_i$. For $(i,j)\in\lambda$ the cell $(i,j)$ is the cell in the $i$-th row and $j$-th column in the Young diagram $\lambda$. The hook of a cell $(i,j)$ is defined to be the set of cells to the right and above the cell, including the cell itself, as shown in Figure \ref{fig:hook}. \begin{figure}[ht] \centering \includegraphics[width=6cm]{FigHook} \caption{\label{fig:hook} The hook and hook length of the cell $(2,3)$ in the Young diagram $\lambda=(9,7,6,4,3,2,0)\in\mathbb{Y}_7^{31}$.} \end{figure} The hook length $h_{i,j}$ of a cell $(i,j)\in\lambda$ is the number of cells in its hook. The following formula for $\dim V_\lambda$ is called the hook formula \cite{FultonHarris}: \begin{equation} \label{eq:HookFormula} \dim V_\lambda=\frac{n!}{\prod_{(i,j)\in\lambda}h_{i,j}}. \end{equation} The content of the cell $(i,j)$ is defined to be $c_{i,j}:=j-i$. 
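For example, for the diagram $\lambda=(2,1)$ with $n=3$ cells the hook lengths are $h_{1,1}=3$ and $h_{1,2}=h_{2,1}=1$, so the hook formula gives $$ \dim V_{(2,1)}=\frac{3!}{3\cdot 1\cdot 1}=2, $$ the dimension of the standard two-dimensional irreducible representation of $S_3$; the contents of its cells are $c_{1,1}=0$, $c_{1,2}=1$ and $c_{2,1}=-1$.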
The dimension of the representation $W_\lambda$ is given by the following formula \cite{MacDonald}: \begin{equation} \label{eq:dimHighestWeight} \dim W_\lambda = \frac{\prod_{(i,j)\in\lambda}(N+c_{i,j})}{\prod_{(i,j)\in\lambda}h_{i,j}}. \end{equation} Combining this with the hook formula we obtain \begin{equation} \label{eq:dimIso} \dim E_\lambda = \frac{n!}{\prod_{(i,j)\in\lambda}h_{i,j}} \frac{\prod_{(i,j)\in\lambda}(N+c_{i,j})}{\prod_{(i,j)\in\lambda}h_{i,j}}. \end{equation} \section{An Integral Formula for the Measure} \label{sec:IntegralFormula} The goal of this section is to obtain an integral formula for $$\ln(\mathbb{P}_N^n(\lambda))=\ln\left(\frac{\dim E_\lambda}{N^n}\right).$$ For this purpose we need to introduce the continuous version of hook length. For a bounded region $d$ lying between the positive coordinate semi-axes, with top boundary given by the graph of a nonincreasing nonnegative function, define the hook at $(x,y)\in d$ to be $$ h_d(x,y):=\sup\{t:(x,t)\in d\}+\sup\{t:(t,y)\in d\}-x-y. $$ \begin{figure}[ht] \centering \includegraphics[width=6cm]{FigContHook} \caption{\label{fig:ContHook} Continuous hook length.} \end{figure} For a Young diagram $\lambda$, the region $D_\lambda$ defined in Section \ref{sec:LimitShapes} is such a bounded region, whence continuous hook length is defined for it (see Figure \ref{fig:ContHook}). To simplify the notation, we will denote $h_\lambda:=h_{D_\lambda}$. Introduce the coordinates $s$ and $t$ as $x=L_\lambda(t)+t$ and $y=L_\lambda(s)-s$. In these coordinates the hook at a point $(x,y)$ is given by $h_\lambda(x,y)=2(s-t)$ \cite{VK85}.
\begin{proposition} \label{prop:measure} For any $\lambda\in\mathbb{Y}_N^n$ we have \begin{equation} \label{eq:munN} -\frac{\ln \mathbb{P}_N^n(\lambda)}{\sqrt{n}} = \sqrt{n}(\theta(\lambda)-\rho(\lambda))+\hat{\theta}(\lambda)-\hat{\rho}(\lambda)-\varepsilon_n, \end{equation} where \begin{equation*} \theta(\lambda)=1+2\iint_{(x,y)\in D_\lambda} \ln h_\lambda(x,y) dx dy, \end{equation*} \begin{equation*} \rho(\lambda)=2\iint_{(x,y)\in D_\lambda}\ln\(1+\frac{\sqrt{2n}}{N}(x-y)\)dxdy, \end{equation*} $$\hat{\theta}(\lambda)=\frac {1}{\sqrt{n}}\sum_{(i,j)\in\lambda}m(h_{i,j}),$$ $$\hat{\rho}(\lambda)=\frac{1}{2\sqrt{n}}\sum_{(i,j)\in\lambda}m(N+c_{i,j}),$$ $$m(x)=\sum_{k=1}^\infty \frac{1}{k(k+1)(2k+1)}\frac 1{x^{2k}},$$ and $\varepsilon_n=o((\ln n)/{\sqrt{n}})$ is independent of $\lambda$. \end{proposition} \begin{remark} $\theta(\lambda)$ is called the hook integral. Vershik and Kerov \cite{VK85} gave the following formula for $\theta(\lambda)$ in terms of $L_\lambda$: \begin{equation} \label{eq:ThetaInL} \theta(\lambda)=\theta(L_\lambda):=1+2\iint_{t<s}\ln(2(s-t))(1-L_\lambda'(s))(1+L_\lambda'(t))dsdt. \end{equation} \end{remark} \begin{remark} The integrand in $\rho(\lambda)$ is constant along vertical lines in the rotated coordinate system, whence the double integral can be easily reduced to a single integral to give \begin{equation} \label{eq:RhoInL} \rho(\lambda)=\rho(L_\lambda):=2\int_{-\infty}^{\infty}\ln\(1+\frac{2\sqrt{n}}{N}s\)(L_\lambda(s)-|s|)ds. \end{equation} \end{remark} Note that originally $\theta$ and $\rho$ were defined as functions on Young diagrams. However, in light of \eqref{eq:ThetaInL} and \eqref{eq:RhoInL} we will treat them as functionals. 
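\begin{remark} The reduction to \eqref{eq:RhoInL} uses that in the rotated coordinates $X=(x-y)/\sqrt{2}$, $Y=(x+y)/\sqrt{2}$ the integrand of $\rho(\lambda)$ equals $\ln\(1+\frac{2\sqrt{n}}{N}X\)$, while for fixed $X$ the coordinate $Y$ ranges over an interval of length $L_\lambda(X)-|X|$; since the rotation preserves area, integrating out $Y$ yields the single integral. \end{remark}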
\begin{proof}[of Proposition \ref{prop:measure}] Using \eqref{eq:HookFormula} the Plancherel measure $\mathbb{P}l^n(\lambda)=\frac{(\dim V_\lambda)^2}{n!}$ can be written as \begin{equation} \label{eq:plancherel} \mathbb{P}l^n(\lambda)=\frac{n!}{\(\prod_{(i,j)\in\lambda}h_{i,j}\)^2}, \end{equation} while using \eqref{eq:dimIso} the measure $\mathbb{P}_N^n(\lambda)=(\dim E_\lambda)/{N^n}$ can be written as $$ \mathbb{P}_N^n(\lambda)= \frac{n!}{\(\prod_{(i,j)\in\lambda}h_{i,j}\)^2} \frac{\prod_{(i,j)\in\lambda}(N+c_{i,j})}{N^n}=\mathbb{P}l^n(\lambda)\prod_{(i,j)\in\lambda}\(1+\frac{c_{i,j}}N\). $$ Thus, $$ -\frac{\ln \mathbb{P}_N^n(\lambda)}{\sqrt{n}}=-\frac{\ln \mathbb{P}l^n(\lambda)}{\sqrt{n}}-\frac 1{\sqrt{n}}\sum_{(i,j)\in\lambda}\ln\(1+\frac{c_{i,j}}{N}\). $$ Note that even though $c_{i,j}$ can be negative, $1+{c_{i,j}}/N$ is positive, since we are only considering Young diagrams with at most $N$ rows. It was shown in \cite{VK85} that $$ -\frac{\ln \mathbb{P}l^n(\lambda)}{\sqrt{n}}=\sqrt{n}\theta(\lambda)+\hat{\theta}(\lambda)-\varepsilon_n. $$ Let $\square_{i,j}$ denote the $(i,j)$-th box in the scaled Young diagram, and let $(x_i,y_j)$ denote the center of $\square_{i,j}$. Note that the area of $\square_{i,j}$ is $1/{2n}$.
Using this notation we obtain \begin{align*} -&\frac 1{\sqrt{n}}\sum_{(i,j)\in\lambda}\ln\(1+\frac{c_{i,j}}{N}\)+\sqrt{n}\rho(\lambda) \\&=2\sqrt{n}\sum_{(i,j)\in\lambda}\(\iint_{\square_{i,j}}\ln\(1+\frac{\sqrt{2n}}{N}(x-y)\)dxdy-\frac 1{2n}\ln\(1+\frac{\sqrt{2n}}{N}(x_i-y_j)\)\) \\&=2\sqrt{n}\sum_{(i,j)\in\lambda}\iint_{\square_{i,j}}\(\ln\(1+\frac{\sqrt{2n}}{N}(x-y)\)-\ln\(1+\frac{\sqrt{2n}}{N}(x_i-y_j)\)\)dxdy \\&=2\sqrt{n}\sum_{(i,j)\in\lambda}\int_{x_i-\frac{1}{2\sqrt{2n}}}^{x_i+\frac{1}{2\sqrt{2n}}}\int_{y_j-\frac{1}{2\sqrt{2n}}}^{y_j+\frac{1}{2\sqrt{2n}}} \\&\qquad\qquad\qquad\qquad \ln\(1+\frac{\sqrt{2n}}{N\(1+\frac{\sqrt{2n}}{N}(x_i-y_j)\)}((x-x_i)-(y-y_j))\)dydx \\&=2\sqrt{n}\sum_{(i,j)\in\lambda}\int_{-\frac{1}{2\sqrt{2n}}}^{\frac{1}{2\sqrt{2n}}}\int_{-\frac{1}{2\sqrt{2n}}}^{\frac{1}{2\sqrt{2n}}}\ln\(1+\frac{\sqrt{2n}}{N\(1+\frac{\sqrt{2n}}{N}(x_i-y_j)\)}(x-y)\)dydx. \end{align*} Denote $\alpha_{i,j}:=N(1+\frac{\sqrt{2n}}{N}(x_i-y_j))=N+c_{i,j}$. We have $$ -\frac 1{\sqrt{n}}\sum_{(i,j)\in\lambda}\ln\(1+\frac{c_{i,j}}{N}\)+\sqrt{n}\rho(\lambda)= 2\sqrt{n}\sum_{(i,j)\in\lambda}\frac{\alpha_{i,j}^2}{2n}\int_{-\frac{1}{2\alpha_{i,j}}}^{\frac{1}{2\alpha_{i,j}}}\int_{-\frac{1}{2\alpha_{i,j}}}^{\frac{1}{2\alpha_{i,j}}}\ln(1+x-y)dydx. $$ From $$ \iint\ln(1+x-y)dxdy=-\frac{(1+x-y)^2}{2}\ln(1+x-y)+\frac 34 (1+x-y)^2+C_1(x)+C_2(y) $$ it follows that \begin{multline*} -\frac 1{\sqrt{n}}\sum_{(i,j)\in\lambda}\ln\(1+\frac{c_{i,j}}{N}\)+\sqrt{n}\rho(\lambda) \\=\frac 1{2\sqrt{n}}\sum_{(i,j)\in\lambda} \(-3+(\alpha_{i,j}+1)^2\ln\(1+\frac1{\alpha_{i,j}}\)+(\alpha_{i,j}-1)^2\ln\(1-\frac 1{\alpha_{i,j}}\)\). \end{multline*} The power series expansion $$ -3+\(1+\frac 1z\)^2\ln(1+z)+\(\frac 1z-1\)^2\ln(1-z)=-\sum_{k=1}^\infty\frac{1}{k(k+1)(2k+1)}z^{2k} $$ completes the proof. $\qed$ \end{proof} \section{The Limit Shape is the Minimizer} \label{sec:UniqueMinimizer} Given a function $g$ let $\tilde{g}$ be the function defined by $\tilde{g}(x):=g(x+c/2)$.
Throughout the text we will use the following shifted coordinates: \begin{equation} \label{eq:shiftedNot} z=s-\frac c2,\ \ w=t-\frac c2. \end{equation} In particular, for any function $g$ we have $g(s)=\tilde{g}(z)$. We will use the following notation: $$ \delta_{S}=\left\{\begin{array}{ll}1,&S\text{ is true}\\0,&S\text{ is false}\end{array}\right. $$ and $$ \sign(x)=\left\{\begin{array}{cl}-1,&x<0\\0,&x=0\\1,&x>0\end{array}\right. . $$ \begin{proposition} \label{prop:integral} Let $c=c_n={\sqrt{n}}/{N}>0$. Let $L(X)$ be an arbitrary continuous and piecewise differentiable function satisfying the following conditions (see Figure \ref{fig:L}): \begin{figure}[t] \centering \includegraphics[width=7cm]{FigPlotL} \caption{The graph of $L(X)$.} \label{fig:L} \end{figure} \begin{equation} \label{eq:CondOnL} \begin{array}{l} L(X)\geq |X|\text{ for all }X, \\|L'(X)|\leq 1\text{ for all }X\text{ where }L(X)\text{ is differentiable}, \\L(X)=|X|\text{ for }X\gg 1\text{ and }X\leq-\frac{1}{2c}, \\L(X)<X+\frac{1}{c}\text{ for }X\geq-\frac{1}{2c}, \\\int_{-\infty}^\infty (L(X)-|X|)dX=\frac 12. \end{array} \end{equation} Then \begin{equation} \label{eq:thetaRho-norm} \theta(L)-\rho(L)=\frac 12 \|f\|_{\frac 12}^2+2\int_{|s-\frac c2|>1}H_c'(s)f(s)ds, \end{equation} where $f(s)=L(s)-\Omega_c(s)$, \begin{multline*} \tilde{H}_c(z):= \delta_{|z|>1}\left(\left(z-\frac {1-c^2}{2c}\right) \arccosh|z| \right.\\\left.\quad\quad\quad\quad +\sign(1-c)\left(z+\frac {1+c^2}{2c}\right)\arccosh\left| \frac{1+\frac{1+c^2}{2c}z}{z+\frac{1+c^2}{2c}}\right|-\sign(z)\sqrt{z^2-1}\right) \end{multline*} and $$ \|f\|_{\frac 12}^2=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}{\(\frac{f(s)-f(t)}{s-t}\)^2}dsdt $$ is the square of the $\frac 12$--Sobolev norm in the space of piecewise-smooth functions. \end{proposition} \begin{proof} Define $$ \phi_0(x):=-\ln|2x|,\ \phi_k(x):=\int_0^x{\phi_{k-1}(y)}dy. $$ Choose numbers $a<\min\{-1+c/2,-{1}/(2c)\}$ and $b>1+ c/2$ such that $f(s)=0$ when $s\notin(a,b)$.
This is possible since $L(s)=\Omega_c(s)=|s|$ when $|s|\gg 1$. Using \eqref{eq:ThetaInL} write $\theta(L)$ as follows \begin{align} \theta(L) =&1-\int_a^b\int_a^b{\phi_0(s-t)}dsdt -2\int_a^b{\phi_1(a-t)L'(t)}dt \\\nonumber &-2\int_a^b{\phi_1(b-t)L'(t)}dt +\int_a^b\int_a^b{\phi_0(s-t)L'(s)L'(t)}dsdt \\\nonumber =&-\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}{\ln|2(s-t)|f'(s)f'(t)}dsdt+\theta_1(L)+\theta_2(L), \end{align} where $$ \theta_1(L):=1-\int_a^b\int_a^b{\phi_0(s-t)}dsdt-\int_a^b\int_a^b{\phi_0(s-t)\Omega_c'(s)\Omega_c'(t)}dsdt, $$ $$ \theta_2(L):=2\int_a^b{\(I_c(s)-\phi_1(a-s)-\phi_1(b-s)\)L'(s)}ds $$ and $$ I_c(s):=\int_a^b{\phi_0(s-t)\Omega_c'(t)}dt. $$ Vershik and Kerov \cite[Lemma 4]{VK85} have shown that $$-\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}{\ln|2(s-t)|f'(s)f'(t)}dsdt=\frac 12 \|f\|_{\frac 12}^2.$$ Lemma \ref{lem:intIOmega'} below implies that \begin{multline*} \theta_1(L)=1-2\phi_2(b-a)-\int_a^b{I_c(s)\Omega_c'(s)}ds=\frac{c^2}4-2\int_a^b{G_c(s)\Omega_c'(s)}ds\\+2\int_a^b{H_c(s)\Omega_c'(s)}ds -\delta_{c>1}\(1-\frac 5{4c^2}+\frac{c^2}4-\(2+\frac 1{c^2}\)\ln(c)\) \end{multline*} while Lemma \ref{lem:I} implies that $$ \theta_2(L)=2\int_a^b{G_c(s)L'(s)}ds-2\int_a^b{H_c(s)L'(s)}ds $$ (see \eqref{eq:G} for the definition of $G_c$). Integrating by parts we obtain \begin{multline*} \theta_1(L)+\theta_2(L)=\frac{c^2}4+2(G_c(s)-H_c(s))f(s)\big|_a^b-2\int_a^b{(G_c'(s)-H_c'(s))f(s)}ds\\-\delta_{c>1}\(1-\frac 5{4c^2}+\frac{c^2}4-\(2+\frac 1{c^2}\)\ln(c)\). \end{multline*} Since $f(a)=f(b)=0$ and $G'_c(s)=-\ln|1+2cs|$, the above expression simplifies to \begin{multline*} \theta_1(L)+\theta_2(L)=\frac{c^2}4+2\int_a^b\ln|1+2cs|f(s)ds+2\int_a^b{H_c'(s)f(s)}ds\\-\delta_{c>1}\(1-\frac 5{4c^2}+\frac{c^2}4-\(2+\frac 1{c^2}\)\ln(c)\).
\end{multline*} Combining the above results with formula \eqref{eq:RhoInL} for $\rho(L)$ we obtain \begin{multline*} \theta(L)-\rho(L)=\frac 12 \|f\|_{\frac 12}^2+2\int_a^b{H_c'(s)f(s)}ds+\frac{c^2}4 +2\int_{\frac c2-1}^{\frac c2+1}{\ln(1+2cs)(|s|-\Omega_c(s))}ds \\-\delta_{c>1}\(1-\frac 5{4c^2}+\frac{c^2}4-\(2+\frac 1{c^2}\)\ln(c)\). \end{multline*} It follows from Lemma \ref{lem:F2-2} that the sum of all the terms except the first two is zero. This completes the proof. $\qed$ \end{proof} \begin{corollary} \label{cor:UniqueMinimizer} If $L$ is any function satisfying the conditions \eqref{eq:CondOnL} then $\theta(L)-\rho(L)\geq0$ with equality holding if and only if $L=\Omega_c$. \end{corollary} \begin{proof} It is immediate that the first term on the right-hand side of \eqref{eq:thetaRho-norm} is nonnegative. Thus, it is enough to show that the integrand in the second term is nonnegative as well. Recall that $\Omega_c(s)=s$ if $s>1+ c/2$. If $0<c<1$, then we also have $\Omega_c(s)=|s|$ whenever $s<-1+ c/2$. Since $L(s)\geq|s|$ for all $s$, we have \begin{equation} \label{eq:OmegaMinusLPositiveSmallc} L(s)-\Omega_c(s)\geq 0\text{ if }0<c<1\text{ and }\left|s-\frac c2\right|>1. \end{equation} If $c\geq 1$ and $s\in(-1/(2c),-1+c/2)$, we have that $L(s)-\Omega_c(s)<0$, since $\Omega_c(s)=s+1/c$ for such $s$ and $L(s)<s+1/c$ when $s\geq -1/(2c)$. Thus, we have \begin{equation} \label{eq:OmegaMinusLPositiveLargec} \begin{array}{c} L(s)-\Omega_c(s)\geq 0\text{ if }c\geq 1\text{ and }s>1+\frac c2, \\\text{and}\\ L(s)-\Omega_c(s)\leq 0\text{ if }c\geq 1\text{ and }s\in(-\frac 1{2c},-1+\frac c2). \end{array} \end{equation} Since $L(s)-\Omega_c(s)=0$ for $s<- 1/(2c)$, in order to show that the right-hand side of \eqref{eq:thetaRho-norm} is nonnegative it is enough to show that $H'_c(s)$ also satisfies the inequalities \eqref{eq:OmegaMinusLPositiveSmallc} and \eqref{eq:OmegaMinusLPositiveLargec}.
In the shifted notation we need to show that \begin{equation} \begin{array}{c} \label{eq:HpcSigns} \tilde{H}'_c(z)\geq 0\text{ when }z\in(1,\infty)\text{ or } 0<c<1\text{ and }z\in\left(-\frac {1+c^2}{2c},-1\right), \\\text{and}\\ \tilde{H}'_c(z)\leq 0\text{ when }c\geq 1\text{ and }z\in\left(-\frac {1+c^2}{2c},-1\right). \end{array} \end{equation} Differentiating $\tilde{H}_c(z)$ when $|z|>1$ we obtain \begin{equation} \label{eq:dH} \tilde{H}'_c(z)=\arccosh|z|+\sign(1-c)\arccosh\left| \frac{1+\frac{1+c^2}{2c}z}{z+\frac{1+c^2}{2c}}\right|, \end{equation} which implies that \begin{equation} \label{eq:limdH1} \lim_{z\rightarrow -1^-\text{ or }z\rightarrow 1^+} \tilde{H}'_c(z)=0. \end{equation} Differentiating \eqref{eq:dH} we obtain \begin{equation} \label{eq:ddH} \tilde{H}''_c(z)=\frac{\sign(z)\(z+\frac 1c\)}{\(\frac{1+c^2}{2c}+z\)\sqrt{z^2-1}}. \end{equation} When $0<c<1$, we have that $\frac 1c>\frac{1+c^2}{2c}$, whence \eqref{eq:ddH} implies that $\sign(\tilde{H}''_c(z))=\sign(z)$ for $z\in(-\frac {1+c^2}{2c},-1)\cup(1,\infty)$. When $c\geq 1$, we have $\tilde{H}''_c(z)>0$ for $z\in(-\frac {1+c^2}{2c},-1)\cup(1,\infty)$. These, together with \eqref{eq:limdH1} imply \eqref{eq:HpcSigns}. That the equality $\theta(L)-\rho(L)=0$ holds if and only if $L=\Omega_c$ follows immediately from the above arguments. $\qed$ \end{proof} \begin{remark} Biane's theorem (Theorem \ref{thm:Biane}) is an immediate corollary of Proposition \ref{prop:measure} and Corollary \ref{cor:UniqueMinimizer}. \end{remark} \subsection{Proofs of the lemmas} \label{sec:proofsOfLemmas} In this section we list the lemmas used in the proof of Proposition \ref{prop:integral} and give their proofs. 
Before we move on, let us give two integrals which will be used throughout the proofs (both formulas can be easily obtained from \cite[2.266, p.97]{GradshteynRyzhik}): \begin{equation} \label{eq:GRzSmall} \int_{-1}^1{\frac 1{(\alpha-z)\sqrt{1-z^2}}}dz= \left\{ \begin{array}{cr} \sign(\alpha)\frac \pi{\sqrt{\alpha^2-1}} ,&|\alpha|>1 \\0 ,&|\alpha|\leq1 \end{array}\right.=\delta_{|\alpha|>1}\sign(\alpha)\frac \pi{\sqrt{\alpha^2-1}} \end{equation} and if $\alpha>1$, \begin{equation} \label{eq:GRzBig} \int{\frac 1{(z+\alpha)\sqrt{z^2-1}}}dz= \frac{\sign(z)}{\sqrt{\alpha^2-1}}\arccosh\left|\frac{1+\alpha z}{z+\alpha}\right|+\const. \end{equation} In particular, setting $\alpha=-\frac{1+c^2}{2c}, c>0$ in \eqref{eq:GRzSmall} we obtain \begin{equation} \label{eq:GRzSmallc} \int_{-1}^1{\frac 1{\(\frac{1+c^2}{2c}+z\)\sqrt{1-z^2}}}dz=\frac{2c\pi}{|1-c^2|}. \end{equation} \begin{lemma} \label{lem:F2-2} Let $A(c)$ be the following integral: \begin{equation*} A(c):=\int_{a}^{b}{\ln(1+2cs)(|s|-\Omega_c(s))}ds. \end{equation*} For any $c>0$ we have \begin{equation} \label{eq:F2-2} A(c)=-\frac{c^2}8+\delta_{c>1}\(\frac 12-\frac 5{8c^2}+\frac{c^2}8-\(1+\frac 1{2c^2}\)\ln(c)\). \end{equation} \end{lemma} \begin{proof} Switching to the shifted notation \eqref{eq:shiftedNot} we obtain \begin{equation} \label{eq:lnOmega} A(c)=\int_{a-\frac c2}^{b-\frac c2}{\ln(1+c^2+2cz)\(\left|z+\frac c2\right|-\tilde{\Omega}_c(z)\)}dz. \end{equation} Integrating \eqref{eq:lnOmega} by parts we obtain: \begin{equation*} A(c)=\int_{a-\frac c2}^{b-\frac c2}\frac 1c\phi_1\(\frac{1+c^2+2cz}2\)\(\sign\(z+\frac c2\)-\tilde{\Omega}'_c(z)\)dz.
\end{equation*} Integrating by parts a second time, noting the discontinuity at $z=-c/2$ and noting that $\tilde{\Omega}_c''(z)=0$ for $|z|>1$, we obtain \begin{equation*} A(c)=-\frac 2{c^2}\phi_2\(\frac 12\)+\int_{-1}^{1}\frac 1{c^2}\phi_2\(\frac{1+c^2+2cz}2\)\tilde{\Omega}''_c(z)dz, \end{equation*} whence \eqref{eq:F2-2} is equivalent to \begin{equation} \label{eq:intphi2p''} \int_{-1}^1{\phi_2\(\frac{1+c^2+2cz}2\)\tilde{\Omega}_c''(z)}dz= \left\{\begin{array}{ll} \frac 38-\frac{c^4}8,&0<c\leq 1\\ -\frac 14+\frac {c^2}2-\frac{\ln(c)}2-c^2\ln(c),&1\leq c \end{array}\right.. \end{equation} Plugging in $\phi_2(x)=\frac 34 x^2 - \frac 12 x^2 \ln(2|x|)$ and $$\tilde{\Omega}_c''(z)=\frac{2 (1 + c z)}{\pi (1 + c^2 + 2 c z) \sqrt{1 - z^2}}$$ splits \eqref{eq:intphi2p''} into two integrals. The first one is \begin{align} \label{eq:int-sin} \frac 3{8\pi}\int_{-1}^1{\frac{(1+c^2+2cz)(1+cz)}{\sqrt{1-z^2}}}dz & =\frac 3{8\pi}\int_{-\pi/2}^{\pi/2}{1+c^2+(3c+c^3)\sin\theta+2c^2 \sin^2 \theta}d\theta \\\nonumber& =\frac 38 (1+2c^2). \end{align} Using this, \eqref{eq:intphi2p''} is equivalent to \begin{multline} \label{eq:int-ln2} \frac 1{4\pi}\int_{-1}^1{\frac{\ln(1+c^2+2cz)(1+c^2+2cz)(1+cz)}{\sqrt{1-z^2}}}dz \\ =\left\{\begin{array}{ll} \frac 34 c^2+\frac{c^4}8,&0<c\leq 1\\ \frac 58+\frac {c^2}4+\frac{\ln(c)}2+c^2\ln(c),&1\leq c \end{array}\right.. \end{multline} Since both sides of this equation are differentiable in $c$ and the equation is obviously true when $c=0$, it is enough to show that the derivatives of both sides with respect to $c$ agree. Differentiating the left-hand side of \eqref{eq:int-ln2} with respect to $c$ three times and simplifying reduces \eqref{eq:int-ln2} to \begin{equation} \label{eq:intLn} \frac 1{4\pi}\int_{-1}^1{\frac{\ln(1+c^2+2cz)6z}{\sqrt{1-z^2}}}dz = \left\{\begin{array}{ll} \frac 32 c,&0<c\leq 1\\ \frac 32 \frac 1c,&1\leq c \end{array}\right..
\end{equation} Differentiating the left-hand side of \eqref{eq:intLn} with respect to $c$ we obtain \begin{align*} \frac 3{2\pi}\int_{-1}^1&{\frac{2(c+z)z}{(1+c^2+2cz)\sqrt{1-z^2}}}dz \\&\qquad\qquad= \frac 3{2\pi}\int_{-1}^1{\frac 1{\sqrt{1-z^2}}\(\frac{-1+c^2}{2c^2}+\frac zc+\frac{1-c^4}{4c^3\(\frac{1+c^2}{2c}+z\)}\)}dz \\&\qquad\qquad= \left\{\begin{array}{ll} \frac 32,&0<c\leq 1\\ -\frac 32 \frac 1{c^2},&1\leq c \end{array}\right.. \end{align*} This completes the proof of \eqref{eq:intphi2p''} and of the lemma. $\qed$ \end{proof} \begin{lemma} \label{lem:I} For any $c>0$ we have \begin{equation} \label{eq:I} I_c(s)=\phi_1(a-s)+\phi_1(b-s)+G_c(s)-H_c(s), \end{equation} where $G_c(s)$ is defined by \begin{equation} \label{eq:G} G_c(s):=\frac 1c \phi_1\(\frac{1+2cs}2\)-\frac{1-c^2}{2c}. \end{equation} \end{lemma} \begin{proof} Integrating $I_c(s)$ by parts we obtain \begin{equation*} I_c(s)=-\Omega_c'(t)\phi_1(s-t)\big|_a^{-\frac 1{2c}}-\Omega_c'(t)\phi_1(s-t)\big|_{-\frac 1{2c}}^b+\int_a^b{\phi_1(s-t)\Omega_c''(t)}dt. \end{equation*} The first two parts are equal to \begin{equation} \label{eq:I1} \phi_1(a-s)+\phi_1(b-s)+ \left\{\begin{array}{cl} 0,& c<1 \\\phi_1(\frac 1{2c}+s),& c=1 \\2\phi_1(\frac 1{2c}+s),& c>1 \end{array} \right. \end{equation} and the last part in the shifted notation becomes \begin{equation*} \int_{-1}^1{\phi_1(z-w)\tilde{\Omega}_c''(w)}dw, \end{equation*} where the integration limits are taken to be $\pm 1$ since $\tilde{\Omega}_c''(w)=0$ outside the interval $[-1,1]$. 
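Although they play no role in the argument, the closed forms \eqref{eq:GRzSmall} and \eqref{eq:GRzSmallc} are easy to confirm numerically. In the sketch below (plain Python; the function name and discretization are our own choices), the substitution $z=\cos t$ removes the endpoint singularity $1/\sqrt{1-z^2}$, after which a midpoint rule converges quickly:

```python
import math

def I_alpha(alpha, n=100_000):
    # Substituting z = cos(t) turns the integral of 1/((alpha - z) sqrt(1 - z^2))
    # over [-1, 1] into the smooth integral of 1/(alpha - cos t) over [0, pi].
    h = math.pi / n
    return sum(h / (alpha - math.cos((k + 0.5) * h)) for k in range(n))

# Closed form for |alpha| > 1: sign(alpha) * pi / sqrt(alpha^2 - 1).
for alpha in (1.5, 3.0, -2.0):
    exact = math.copysign(math.pi / math.sqrt(alpha**2 - 1), alpha)
    assert abs(I_alpha(alpha) - exact) < 1e-6

# Specialization alpha = (1 + c^2)/(2c): the integral with (1+c^2)/(2c) + z
# in the denominator equals 2*c*pi/|1 - c^2|.
for c in (0.5, 2.0):
    A = (1 + c**2) / (2 * c)
    assert abs(I_alpha(A) - 2 * c * math.pi / abs(1 - c**2)) < 1e-6
```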
Differentiating twice with respect to $z$, decomposing into partial fractions and using \eqref{eq:GRzSmall} and \eqref{eq:GRzSmallc} we obtain \begin{multline} \label{eq:ddGH} \frac {d^2}{dz^2}\(\int_{-1}^1{\phi_1(z-w)\tilde{\Omega}_c''(w)}dw\) =\frac{-1}{\frac{1+c^2}{2c}+z}\(\delta_{|z|>1}\sign(z)\frac {\frac 1c+z}{\sqrt{z^2-1}}+\sign(1-c)\) \\=-\sign(1-c)\frac{2c}{1+c^2+2cz}-\delta_{|z|>1}\sign(z)\(\frac 1{\sqrt{z^2-1}}+\frac {\frac 1c -\frac{1+c^2}{2c}}{\(z+\frac{1+c^2}{2c}\)\sqrt{z^2-1}}\). \end{multline} Using \eqref{eq:GRzBig} with $\alpha=\frac{1+c^2}{2c}>1$ we can integrate \eqref{eq:ddGH} to obtain \begin{multline} \label{eq:dGH} \int_{-1}^1{\phi_0(z-w)\tilde{\Omega}_c''(w)}dw=\frac {d}{dz}\(\int_{-1}^1{\phi_1(z-w)\tilde{\Omega}_c''(w)}dw\) \\=-\sign(1-c)\ln(1+c^2+2cz) -\delta_{|z|>1}\sign(z)\ln|z+\sqrt{z^2-1}| \\-\delta_{|z|>1} \sign(1-c)\arccosh\left| \frac{1+\frac{1+c^2}{2c}z}{z+\frac{1+c^2}{2c}}\right|+F_1(c), \end{multline} where $F_1(c)$ is a function that depends only on $c$. Next, we find $F_1(c)$. Since $F_1(c)$ is independent of $z$, we may fix $z$. Setting $z=0$ in \eqref{eq:dGH} and integrating by parts, we obtain \begin{equation} \label{eq:F1} \int_{-1}^1{\frac 1w\tilde{\Omega}_c'(w)}dw=F_1(c)+ \left\{\begin{array}{ll} 2\ln(2)-\ln(1+c^2),&0\leq c\leq 1\\ \ln(1+c^2),&c\geq 1 \end{array}\right.. \end{equation} For $c\neq 1$, differentiating both sides with respect to $c$ we obtain $$ \frac 2{\pi} \int_{-1}^1{\frac 1w\frac{\sqrt{1-w^2}}{1+c^2+2cw}}dw=-\sign(1-c)\frac{2c}{1+c^2}+F'_1(c). $$ Since $$ \frac 1{w(1+c^2+2cw)}=\frac {2c}{1+c^2}\(\frac 1{2cw} - \frac 1{1+c^2+2cw}\) $$ and ${\sqrt{1-w^2}}/w$ is an odd function, it follows that \begin{equation} \label{eq:F1p} \frac 2{\pi} \int_{-1}^1{\frac{\sqrt{1-w^2}}{1+c^2+2cw}}dw=\sign(1-c)-\frac{1+c^2}{2c}F'_1(c).
\end{equation} Calculating the left-hand side using \cite[2.267, p.97]{GradshteynRyzhik} and \eqref{eq:GRzSmallc} we obtain \begin{equation*} \frac 1{c\pi} \int_{-1}^1{\frac{\sqrt{1-w^2}}{\frac{1+c^2}{2c}+w}}dw =\left\{\begin{array}{ll} 1,&0<c< 1\\ \frac 1{c^2},&c> 1 \end{array}\right.. \end{equation*} From this calculation and \eqref{eq:F1p} it follows that $$ F'_1(c)=\left\{\begin{array}{ll} 0,&0<c< 1\\ -\frac 2c,&c> 1 \end{array}\right., $$ hence, using that $F_1(c)$ is continuous, we obtain \begin{equation} \label{eq:F1const} F_1(c)=\const+\left\{\begin{array}{ll} 0,&0<c\leq 1\\ -2\ln(c),&c\geq 1 \end{array}\right.. \end{equation} Setting $c=0$ in \eqref{eq:F1} we obtain that the constant in \eqref{eq:F1const} is $0$. Integrating \eqref{eq:dGH} with respect to $z$, using integration by parts for the last piece and using \eqref{eq:GRzBig} we obtain \begin{multline} \label{eq:GH} \int_{-1}^1{\phi_1(z-w)\tilde{\Omega}_c''(w)}dw= \sign(1-c)\frac 1c \phi_1\(\frac{1+c^2+2cz}{2}\)-\frac{1-c^2}{2c} \\-\delta_{|z|>1}\sign(z)(z\ln|z+\sqrt{z^2-1}|-\sqrt{z^2-1}) -\delta_{|z|>1}\sign(1-c)z\arccosh\left| \frac{1+\frac{1+c^2}{2c}z}{z+\frac{1+c^2}{2c}}\right| \\+\delta_{|z|>1}\sign(z)\frac{1-c^2}{2c}\int{\frac{z}{\(z+\frac{1+c^2}{2c}\)\sqrt{z^2-1}}}dz -2\delta_{c>1}\ln(c)z+F_2(c). \end{multline} Calculating the remaining integral using \eqref{eq:GRzBig} and noting that $$ \arccosh|z|=\sign(z)\ln|z+\sqrt{z^2-1}|, $$ we obtain \begin{multline} \label{eq:ph1Omegapp} \int_{-1}^1{\phi_1(z-w)\tilde{\Omega}_c''(w)}dw \\ =\tilde{G}_c(z)- \left\{\begin{array}{cl} 0,& c<1 \\\frac 1c \phi_1(\frac {1+c^2+2cz}2),& c=1 \\\frac 2c \phi_1(\frac {1+c^2+2cz}2),& c>1 \end{array} \right. -\tilde{H}_c(z)-2\delta_{c>1}\ln(c)z+F_2(c). \end{multline} It remains to find $F_2(c)$. 
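The identity \eqref{eq:intLn}, which is used again below, also admits a quick numerical confirmation (an illustrative sketch of our own, via the substitution $z=\cos t$):

```python
import math

def intLn_lhs(c, n=100_000):
    # (1/(4*pi)) * integral over [-1, 1] of ln(1 + c^2 + 2 c z) * 6 z / sqrt(1 - z^2),
    # evaluated with z = cos(t) and a midpoint rule on [0, pi].
    h = math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += math.log(1 + c * c + 2 * c * math.cos(t)) * math.cos(t)
    return 6 * total * h / (4 * math.pi)

for c in (0.3, 0.8):
    assert abs(intLn_lhs(c) - 1.5 * c) < 1e-5   # equals (3/2) c for c <= 1
for c in (1.5, 3.0):
    assert abs(intLn_lhs(c) - 1.5 / c) < 1e-5   # equals (3/2)/c for c >= 1
```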
Since $F_2(c)$ is independent of $z$, it suffices to consider the limit ${z\rightarrow -\frac{1+c^2}{2c}}$: \begin{multline*} \int_{-1}^1{\phi_1\(-\frac{1+c^2}{2c}-w\)\tilde{\Omega}_c''(w)}dw\\=\lim_{z\rightarrow -\frac{1+c^2}{2c}}\(\tilde{G}_c(z)- \left\{\begin{array}{cl} 0,& c<1 \\\frac 1c \phi_1(\frac {1+c^2+2cz}2),& c=1 \\\frac 2c \phi_1(\frac {1+c^2+2cz}2),& c>1 \end{array} \right. -\tilde{H}_c(z)-2\delta_{c>1}\ln(c)z+F_2(c)\). \end{multline*} Substituting $x-x\ln|2x|$ for $\phi_1(x)$ we obtain \begin{multline*} -\frac 1{c\pi}\int_{-1}^1{(1+\ln(c)-\ln(1+c^2+2cw))\frac{1+cw}{\sqrt{1-w^2}}}dw \\=F_2(c)+\left\{\begin{array}{ll} \frac{-1+c^2-\ln(c)}{c},&0<c\leq 1\\ \frac{2+c^2}{c}\ln(c),&c\geq 1 \end{array}\right. \end{multline*} or, equivalently, \begin{equation} \label{eq:F2c} \frac 1{\pi}\int_{-1}^1{\ln(1+c^2+2cw)\frac{1+cw}{\sqrt{1-w^2}}}dw=F_2(c)c+\left\{\begin{array}{ll} c^2,&0\leq c\leq 1\\ 1+(3+c^2)\ln(c),&c\geq 1 \end{array}\right.. \end{equation} Differentiating the left-hand side with respect to $c$ we obtain \begin{equation} \label{eq:intLn2} \frac 1{\pi}\int_{-1}^1{\frac{\ln(1+c^2+2cw)w}{\sqrt{1-w^2}}}dw +\frac 1{\pi}\int_{-1}^1{\frac{2(c+w)(1+cw)}{(1+c^2+2cw)\sqrt{1-w^2}}}dw. \end{equation} It follows from \eqref{eq:intLn} that the first part of \eqref{eq:intLn2} is equal to $c$ if $0<c\leq 1$ and $1/c$ if $c>1$. The second part is equal to \begin{equation*} \frac 1{\pi}\int_{-1}^1{\frac 1{\sqrt{1-w^2}}\(\frac{1+c^2}{2c}+w+\frac{-1+2c^2-c^4}{4c^2}\frac 1{\frac{1+c^2}{2c}+w}\)}dw =\left\{\begin{array}{ll} c,&0<c\leq 1\\ \frac{1}{c},&c\geq 1 \end{array}\right.. \end{equation*} Adding these and integrating with respect to $c$ we obtain the left-hand side of \eqref{eq:F2c} up to a constant. Setting $c=0$ in \eqref{eq:F2c} the constant is easily found, giving \begin{equation} \label{eq:F2} F_2(c)=-\delta_{c>1}\frac{1+c^2}{c}\ln(c). 
\end{equation} Combining \eqref{eq:I1}, \eqref{eq:ph1Omegapp} and \eqref{eq:F2} we obtain \begin{multline*} I_c(s)=\phi_1(b-s)+\phi_1(a-s)+G_c(s)- \left\{\begin{array}{cl} 0,& c<1 \\\frac 1c \phi_1(\frac {1+2cs}2),& c=1 \\\frac 2c \phi_1(\frac {1+2cs}2),& c>1 \end{array} \right. -H_c(s)\\+\left\{\begin{array}{cl} 0,& c<1 \\\phi_1(\frac 1{2c}+s),& c=1 \\2\phi_1(\frac 1{2c}+s),& c>1 \end{array} \right. +\delta_{c\geq1}\(-2\left(s-\frac c2\right)\ln(c)-\frac{1+c^2}{c}\ln(c)\), \end{multline*} which simplifies to \eqref{eq:I}. $\qed$ \end{proof} \begin{lemma} \label{lem:F3} For any $c>0$ we have \begin{multline} \label{eq:F3} \int_{-1}^1{\phi_2(x-z)\tilde{\Omega}_c''(z)}dz=\sign(1-c)\frac 1{c^2} \phi_2\(\frac{1+c^2+2cx}2\) -\frac{1-c^2}{2c}x\\-\tilde{J}_c(x)+\frac{-3+4c^2+3c^4}{16c^2}-\delta_{c>1}\(\(x+\frac{1+c^2}{2c}\)^2\ln(c)\), \end{multline} where $\tilde{J}_c(z)=0$ if $|z|\leq 1$ and \begin{multline} \tilde{J}_c(z)=\frac 12 \(1-\frac 1{2c^2}+\(z+\frac {c^2-1}{2c}\)^2\)\arccosh|z| +\sign(z)\frac{1 - c^2 - 3 c z}{4 c}\\\times\sqrt{z^2-1} +\sign(1-c)\frac 12 \(z+\frac {c^2+1}{2c}\)^2 \arccosh\left| \frac{1+\frac{1+c^2}{2c}z}{z+\frac{1+c^2}{2c}}\right|, \end{multline} if $|z|>1$. \end{lemma} \begin{proof} Differentiating the left-hand side of \eqref{eq:F3} and using \eqref{eq:ph1Omegapp} and \eqref{eq:F2} we obtain \begin{multline*} \frac d{dx}\int_{-1}^1{\phi_2(x-z)\tilde{\Omega}_c''(z)}dz =\int_{-1}^1{\phi_1(x-z)\tilde{\Omega}_c''(z)}dz \\=\tilde{G}_c(x)- \left\{\begin{array}{cl} 0,&c<1 \\\frac 1c \phi_1\(\frac{1+c^2+2cx}2\),&c=1 \\2\frac 1c \phi_1\(\frac{1+c^2+2cx}2\),&c>1 \end{array}\right. -\tilde{H}_c(x)-\delta_{c>1}\(2\ln(c)x+\frac{1+c^2}{c}\ln(c)\), \end{multline*} whence \begin{multline*} \int_{-1}^1{\phi_2(x-z)\tilde{\Omega}_c''(z)}dz=\sign(1-c)\frac 1{c^2} \phi_2\(\frac{1+c^2+2cx}2\) -\frac{1-c^2}{2c}x\\-\int{\tilde{H}_c(x)}dx-\delta_{c>1}\(\ln(c)x^2+\frac{1+c^2}{c}\ln(c)x\). 
\end{multline*} Since $\frac d{dx}\tilde{J}_c(x)=\tilde{H}_c(x)$ for $|x|>1$, we obtain \begin{multline*} \int_{-1}^1{\phi_2(x-z)\tilde{\Omega}_c''(z)}dz=\sign(1-c)\frac 1{c^2} \phi_2\(\frac{1+c^2+2cx}2\) -\frac{1-c^2}{2c}x-\tilde{J}_c(x)\\-\delta_{c>1}\(x^2\ln(c)+\frac{1+c^2}{c}x\ln(c)\)+F_4(c). \end{multline*} To find $F_4(c)$, we take the limit of both sides as $x\rightarrow -\frac{1+c^2}{2c}$. The left-hand side becomes \begin{multline*} \int_{-1}^1{\phi_2\(\frac{1+c^2+2cz}{2c}\)\tilde{\Omega}_c''(z)}dz =\frac 1{8c^2\pi}\int_{-1}^1{(3+2\ln(c))\frac{(1+cz)(1+c^2+2cz)}{\sqrt{1-z^2}}}dz \\-\frac 1{8c^2\pi}\int_{-1}^1{2\ln(1+c^2+2cz)\frac{(1+cz)(1+c^2+2cz)}{\sqrt{1-z^2}}}dz. \end{multline*} The first integral is given in \eqref{eq:int-sin} and the second in \eqref{eq:int-ln2}, thus the left-hand side is \begin{equation*} \left\{\begin{array}{ll} \frac{3-c^4+(2+4c^2)\ln(c)}{8c^2},&0<c\leq1\\ -\frac{1-2c^2+\ln(c)+2c^2\ln(c)}{4c^2},&1\leq c \end{array}\right.. \end{equation*} The limit of the right-hand side is $$ F_4(c)+ \left\{\begin{array}{ll} \frac{9-4c^2-5c^4+(4+8c^2)\ln(c)}{16c^2},&0<c\leq1\\ \frac{-1+4c^2-3c^4+4c^4\ln(c)}{16c^2},&1\leq c \end{array}\right., $$ whence $$ F_4(c)=\frac{-3+4c^2+3c^4}{16c^2}-\delta_{c>1}\(\frac{1+c^2}{2c}\)^2\ln(c). $$ $\qed$ \end{proof} \begin{lemma} \label{lem:intIOmega'} For any $c>0$ we have \begin{multline} \label{eq:intIOmega'} \int_a^b{I_c(s)\Omega_c'(s)}ds=1-\frac{c^2}4-2\phi_2(b-a)+2\int_a^b{G_c(s)\Omega_c'(s)}ds\\-2\int_a^b{H_c(s)\Omega_c'(s)}ds+ \delta_{c>1}\(1-\frac 5{4c^2}+\frac{c^2}4-\(2+\frac 1{c^2}\)\ln(c)\).
\end{multline} \end{lemma} \begin{proof} Differentiating both sides of \eqref{eq:intIOmega'} with respect to $b$ and noting that $\phi_1$ is odd since it is the integral of an even function, and that $\Omega_c'(b)=1$ since $b>1+c/2$, we obtain \begin{align*} \frac{d}{db}(\text{left-hand side}) &=I_c(b)\Omega_c'(b)+\int_a^b{\phi_0(b-s)\Omega_c'(s)}ds=I_c(b)+I_c(b)\\&=2(\phi_1(0)+\phi_1(a-b)+ G_c(b)-H_c(b))\\&=-2\phi_1(b-a)+2G_c(b)\Omega_c'(b)-2H_c(b)\Omega_c'(b) \\&=\frac{d}{db}(\text{right-hand side}). \end{align*} This implies that \begin{equation} \label{eq:intIOmega'F} \int_a^b{I_c(s)\Omega_c'(s)}ds=1-\frac{c^2}4-2\phi_2(b-a)+2\int_a^b{G_c(s)\Omega_c'(s)}ds-2\int_a^b{H_c(s)\Omega_c'(s)}ds+F_3(c) \end{equation} for some function $F_3(c)$ which will be found next. Based on the above argument $F_3(c)$ might depend on $a$, but by symmetry between $a$ and $b$ it does not. Since $F_3(c)$ is independent of $a$ and $b$, $a$ and $b$ can be fixed. Set $a=-{1}/(2c)$ and $b=1+ c/2$, switch to the shifted notation and collect all the integrals in \eqref{eq:intIOmega'F} on the left side: \begin{multline*} \int_{-\frac{1+c^2}{2c}}^1 \(\phi_1(1-z)+\phi_1\(-\frac{1+c^2}{2c}-z\)-\tilde{G}_c(z)+\tilde{H}_c(z)\)\tilde{\Omega}_c'(z)dz \\=1-\frac{c^2}4-2\phi_2\(1+\frac{1+c^2}{2c}\)+F_3(c). \end{multline*} Substituting the formula for $\tilde{G}_c(z)$ and integrating by parts we obtain \begin{align*} \int_{-\frac{1+c^2}{2c}}^1{\frac{1-c^2}{2c}\tilde{\Omega}'_c(z)}dz &-\int_{-\frac{1+c^2}{2c}}^1\(-\phi_2(1-z)-\phi_2\(-\frac{1+c^2}{2c}-z\) \right.\\&\left. 
\qquad\qquad\qquad\quad-\frac 1{c^2}\phi_2\(\frac{1+c^2+2cz}{2}\)+\tilde{J}_c(z)\)\tilde{\Omega}''_c(z)dz \\&-\sign(c-1)\tilde{J}_c\(-\frac{1+c^2}{2c}\)+\sign(c-1)\phi_2\(\frac{1+c^2}{2c}+1\) \\&-\phi_2\(\frac{1+c^2}{2c}+1\)-\frac 1{c^2}\phi_2\(\frac{1+c^2+2c}{2}\) \\&=1-\frac{c^2}4-2\phi_2\(1+\frac{1+c^2}{2c}\)+F_3(c), \end{align*} which, after simplifications using $\tilde{\Omega}''_c(z)=0$ for $|z|>1$, $\tilde{J}_c(z)=0$ for $|z|\leq 1$, $$ \int_{-\frac{1+c^2}{2c}}^1{\frac{1-c^2}{2c}\tilde{\Omega}'_c(z)}dz =\frac{1-c^2}{2c}\(1+\frac c2-\frac 1{2c}\) $$ and $$ \sign(c-1)\tilde{J}_c\(-\frac{1+c^2}{2c}\)=\frac 12 \(1+\frac 1{2c^2}\)\ln(c)+\frac{5+c^2}{8c}\frac{1-c^2}{2c}, $$ becomes \begin{multline*} \int_{-1}^1\(\phi_2(1-z)+\phi_2\(\frac{1+c^2}{2c}+z\)+\frac 1{c^2}\phi_2\(\frac{1+c^2+2cz}{2}\)\)\tilde{\Omega}''_c(z)dz \\=\frac 1{c^2}\phi_2\(\frac{1+c^2+2c}{2}\)+\frac 12 \(1 + \frac 1{2 c^2}\)\ln(c) +\frac{9 - 8 c + 4 c^2 + 8 c^3 - c^4}{16 c^2} \\-\left\{\begin{array}{cl} 0&c<1 \\\phi_2\(\frac{1+c^2+2c}{2c}\)&c=1 \\2\phi_2\(\frac{1+c^2+2c}{2c}\)&c>1 \end{array}\right. +F_3(c). \end{multline*} The remaining integrals are given by \eqref{eq:intphi2p''} and by Lemma \ref{lem:F3} with $x=-\frac{1+c^2}{2c}$ and $x=1$. Calculating those integrals and simplifying we obtain $$F_3(c)= \left\{\begin{array}{ll} 0,&0<c\leq 1\\ 1-\frac 5{4c^2}+\frac{c^2}4-\(2+\frac 1{c^2}\)\ln(c),&1\leq c \end{array}\right..$$ $\qed$ \end{proof} \section{Proof of the Main Theorems} \label{sec:MainProofs} \textit{The upper bound.} The number of Young diagrams with $n$ cells and at most $N$ rows is less than the number $p(n)$ of Young diagrams with $n$ cells. By the Hardy-Ramanujan formula \cite{HardyRamanujan},\cite[p. 116]{Hardy} $p(n)$ is asymptotically given by $p(n)\approx\frac{1}{4n\sqrt{3}}e^{\frac{2\pi}{\sqrt{6}}\sqrt{n}}$. 
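As an aside (this snippet is our own illustration and plays no role in the proof), the Hardy--Ramanujan estimate can be compared with exact values of $p(n)$, computed via Euler's pentagonal-number recurrence:

```python
import math

def partition_numbers(n_max):
    # Exact p(0..n_max) via Euler's pentagonal-number recurrence:
    # p(n) = sum_{k>=1} (-1)^{k-1} [ p(n - k(3k-1)/2) + p(n - k(3k+1)/2) ].
    p = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        k, total = 1, 0
        while k * (3 * k - 1) // 2 <= n:
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
        p[n] = total
    return p

p = partition_numbers(500)
assert p[5] == 7 and p[10] == 42 and p[100] == 190569292

# Leading-order Hardy-Ramanujan estimate; its relative error decays slowly in n.
hr = math.exp(2 * math.pi / math.sqrt(6) * math.sqrt(500)) / (4 * 500 * math.sqrt(3))
assert abs(hr / p[500] - 1) < 0.05
```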
Hence, $$ \mathbb{P}_N^n\left\{\lambda: \mathbb{P}_N^n(\lambda)<e^{-\frac{2\pi}{\sqrt{6}}\sqrt{n}}\right\}\leq p(n)e^{-\frac{2\pi}{\sqrt{6}}\sqrt{n}}\xrightarrow{n\rightarrow \infty}0. $$ This implies that $$\lim_{n\rightarrow \infty}\mathbb{P}_N^n\left\{\lambda: \mathbb{P}_N^n(\lambda)>e^{-\frac{2\pi}{\sqrt{6}}\sqrt{n}}\right\}=1$$ or equivalently that $$\lim_{n\rightarrow \infty}\mathbb{P}_N^n\left\{\lambda: -\frac 1{\sqrt{n}} \ln\frac{\dim E_\lambda}{N^n}<\frac{2\pi}{\sqrt{6}}\right\}=1.$$ From this it is immediate that $$-\frac{1}{\sqrt{n}} \ln \frac{\max\{\dim E_\lambda\}}{N^n}<\frac{2\pi}{\sqrt{6}}$$ for large enough $n$. \textit{The lower bound.} By Propositions \ref{prop:measure} and \ref{prop:integral}, for any $\lambda\in\mathbb{Y}_N^n$ we have \begin{equation} \label{eq:lnPNnFinal} -\frac{\ln \mathbb{P}_N^n(\lambda)}{\sqrt{n}} = \sqrt{n}\left(\frac 12 \|f\|_{\frac 12}^2+2\int_{|s-\frac c2|>1}H_c'(s)f(s)ds\right)+\hat{\theta}(\lambda)-\hat{\rho}(\lambda)-\varepsilon_n, \end{equation} where $f(s)=L_\lambda(s)-\Omega_c(s)$. Let $0<i\leq N$ be such that $\lambda_i>0$, where $\lambda_i$ is the length of the $i$-th row of the Young diagram $\lambda$. Let $h_\lambda$ denote the height of $\lambda$, i.e. the number of nonzero rows. Note that $h_\lambda\leq N$. \begin{figure}[ht] \centering \includegraphics[width=7cm]{FigHookContent} \caption{\label{fig:HookContent}The hook length $h_{i,1}$ of the first cell of a row compared with the shifted content $N+c_{i,\lambda_i}$ of its last cell.} \end{figure} It is easy to see that $N+c_{i,\lambda_i}\geq h_\lambda+c_{i,\lambda_i}=h_\lambda-i+\lambda_i = h_{i,1}$, i.e. that the shifted content of the last cell in a row of a Young diagram is at least as large as the hook length of the first cell of the row (see Figure \ref{fig:HookContent}). For a fixed $i$ the value of the shifted content $N+c_{i,j}$ decreases by $1$ from right to left, while the length of the hook $h_{i,j}$ decreases by at least one from left to right. Since $m(x)$ is a decreasing function, this implies that $\hat{\theta}(L_\lambda)\geq\hat{\rho}(L_\lambda)$.
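The inequality $N+c_{i,\lambda_i}\geq h_{i,1}$ used above can be checked directly on small diagrams (our own illustration; the helper name is arbitrary):

```python
def first_hooks_and_last_shifted_contents(lam, N):
    # For each nonzero row i (1-based) of the partition lam:
    #   hook of the first cell:        h_{i,1} = (lam_i - 1) + (h - i) + 1 = lam_i + h - i,
    #   shifted content of last cell:  N + c_{i,lam_i} = N + lam_i - i.
    h = len(lam)
    return [(row + h - i, N + row - i) for i, row in enumerate(lam, start=1)]

# Since the height h is at most N, the shifted content dominates the hook in every row.
for lam in ([5, 3, 3, 1], [4, 4, 4, 4], [7, 1], [2, 2, 1]):
    N = 4
    assert len(lam) <= N
    for hook, shifted in first_hooks_and_last_shifted_contents(lam, N):
        assert shifted >= hook
```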
It was proven in Corollary \ref{cor:UniqueMinimizer} that $\int_{|s-\frac c2|>1}H_c'(s)f(s)ds\geq 0$. We now give a lower bound for $\|f\|_{\frac 12}^2$. For our purposes a very rough estimate suffices: we only consider the contribution of diagonal slices in the double integral $\|f\|_{\frac 12}^2$ and use that $L_\lambda$ is piecewise linear with slopes $\pm 1$. Define $s_i:={i}/({2\sqrt{n}})$ and $\Delta s_i:=s_{i+1}-s_i=1/({2\sqrt{n}})$. Define $s^*_i:=\arg\min_{s_i\leq s\leq s_{i+1}}(f'(s))^2$. It follows that \begin{align*} \frac{\sqrt{n}}2\|f\|_{\frac 12}^2 & \geq\frac{\sqrt{n}}2\sum_i\iint_{s_i\leq s,t\leq s_{i+1}}\(\frac{f(s)-f(t)}{s-t}\)^2 ds dt\geq \frac{\sqrt{n}}2\sum_i (f'(s^*_i))^2 (\Delta s_i)^2 \\& =\frac{1}{4}\sum_i (f'(s^*_i))^2 \Delta s_i. \end{align*} Replacing the Riemann sum by the corresponding integral and using that the sum of all the other terms in \eqref{eq:lnPNnFinal} is nonnegative, we obtain that for any $\varepsilon>0$, there exists $n_0\in\mathbb{N}$ such that for all $n>n_0$ \begin{equation*} -\frac{\ln \mathbb{P}_N^n(\lambda)}{\sqrt{n}}\geq \frac 14 \int_{-\infty}^{\infty}(L_\lambda'(z)-\tilde{\Omega}'_c(z))^2 dz-\varepsilon. \end{equation*} Since $L_\lambda$ is linear in the intervals $(s_i,s_{i+1})$ and $L_\lambda'(s)=\pm 1$ for $s\in(s_i,s_{i+1})$, we obtain \begin{equation*} -\frac{\ln \mathbb{P}_N^n(\lambda)}{\sqrt{n}}\geq \frac 14 \int_{-\infty}^{\infty}(\sign(\tilde{\Omega}'_c(z))-\tilde{\Omega}'_c(z))^2 dz-\varepsilon = \frac 14 \int_{-1}^1(\sign(z)-\tilde{\Omega}'_c(z))^2 dz-\varepsilon, \end{equation*} which implies the lower bounds in Theorems \ref{thm:max} and \ref{thm:meas} with $$\alpha_c=\frac 14 \int_{-1}^1(\sign(z)-\tilde{\Omega}'_c(z))^2 dz.$$ \bibliographystyle{alpha}
arXiv:1008.3854 (August 2010). Asymptotics of the maximal and the typical dimensions of isotypic components of tensor representations of the symmetric group. Subjects: Representation Theory (math.RT); Combinatorics (math.CO); Probability (math.PR). https://arxiv.org/abs/1008.3854
https://arxiv.org/abs/2101.07560
A doubly relaxed minimal-norm Gauss-Newton method for underdetermined nonlinear least-squares problems
When a physical system is modeled by a nonlinear function, the unknown parameters can be estimated by fitting experimental observations by a least-squares approach. Newton's method and its variants are often used to solve problems of this type. In this paper, we are concerned with the computation of the minimal-norm solution of an underdetermined nonlinear least-squares problem. We present a Gauss-Newton type method, which relies on two relaxation parameters to ensure convergence, and which incorporates a procedure to dynamically estimate the two parameters, as well as the rank of the Jacobian matrix, along the iterations. Numerical results are presented.
\section{Introduction}\label{intro} Let us assume that $F(\bm{x})=[F_1(\bm{x}),\ldots,F_m(\bm{x})]^T$ is a nonlinear twice continuously Fr\'echet-differentiable function with values in ${\mathbb{R}}^m$, for any $\bm{x}\in{\mathbb{R}}^n$. For a given $\bm{b}\in{\mathbb{R}}^m$, we consider the nonlinear least-squares data fitting problem \begin{equation} \min_{\bm{x}\in{\mathbb{R}}^n} \|\bm{r}(\bm{x})\|^2, \qquad \bm{r}(\bm{x}) = F(\bm{x})-\bm{b}, \label{nonlinL2} \end{equation} where $\|\cdot\|$ denotes the Euclidean norm and $\bm{r}(\bm{x})= \left[r_1(\bm{x}),\ldots,r_m(\bm{x})\right]^T$ is the residual vector function between the model expectation $F(\bm{x})$ and the vector $\bm{b}$ of measured data. The solution to the nonlinear least-squares problem gives the best model fit to the data in the sense of the minimum sum of squared errors. A common approach to solving a nonlinear least-squares problem consists of applying Newton's method and its variants, such as the Gauss--Newton method~\cite{bjo96,hansen2012least,ortega1970}. \medskip The Gauss--Newton method is based on the construction of a sequence of linear approximations to $\bm{r}(\bm{x})$. Given an initial point $\bm{x}^{(0)}$ and denoting by $\bm{x}^{(k)}$ the current approximation, the new approximation is \begin{equation} \label{iter} \bm{x}^{(k+1)}=\bm{x}^{(k)}+\bm{s}^{(k)}, \qquad k=0,1,2,\ldots, \end{equation} where the step $\bm{s}^{(k)}$ is computed as a solution to the linear least-squares problem \begin{equation} \label{gauss-newt} \min_{\bm{s}\in{\mathbb{R}}^n} \| J(\bm{x}^{(k)})\bm{s}+\bm{r}(\bm{x}^{(k)}) \|^2. \end{equation} Here $J(\bm{x})$ represents the Jacobian matrix of the function $F(\bm{x})$. The solution to~\eqref{gauss-newt} may not be unique: this happens when the matrix $J(\bm{x}^{(k)})$ does not have full column rank, in particular, when $m<n$.
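The iteration \eqref{iter}--\eqref{gauss-newt} is straightforward to sketch in code. The snippet below is our own illustration (using NumPy; the exponential test model is hypothetical, not an example from this paper), with each step obtained from a linear least-squares solve:

```python
import numpy as np

def gauss_newton(F, J, x0, b, tol=1e-10, max_iter=50):
    """Basic Gauss-Newton: solve min_s ||J(x) s + r(x)||^2, then set x <- x + s."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x) - b
        s, *_ = np.linalg.lstsq(J(x), -r, rcond=None)
        x = x + s
        if np.linalg.norm(s) < tol:
            break
    return x

# Zero-residual exponential fit F(x) = x_1 exp(x_2 t), data generated from (2, -1).
t = np.linspace(0.0, 1.0, 6)
F = lambda x: x[0] * np.exp(x[1] * t)
J = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
b = 2.0 * np.exp(-t)
x = gauss_newton(F, J, [1.0, 0.0], b)
assert np.allclose(x, [2.0, -1.0], atol=1e-6)
```

For this zero-residual problem the undamped iteration converges; in general a damping parameter is needed to guarantee a residual decrease.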
To make the step $\bm{s}^{(k)}$, and hence the new iterate $\bm{x}^{(k+1)}$, unique, the step is often computed by solving the following minimal-norm linear least-squares problem \begin{equation} \begin{cases} \displaystyle\min_{\bm{s}\in{\mathbb{R}}^n}\|\bm{s}\|^2 \\ \displaystyle \bm{s} \in \bigl\{ \arg \min_{\bm{s}\in{\mathbb{R}}^n} \|J(\bm{x}^{(k)})\bm{s}+\bm{r}(\bm{x}^{(k)})\|^2 \bigr\}, \end{cases} \label{mnls} \end{equation} where the set in the lower line contains all the solutions to problem \eqref{gauss-newt}. In order to select solutions exhibiting different degrees of regularity, the term $\|\bm{s}\|^2$ in~\eqref{mnls} is sometimes substituted by the seminorm $\|L\bm{s}\|^2$, where $L\in {\mathbb{R}}^{p\times n}$ $(p\leq n)$ is a matrix which incorporates available a priori information on the solution. The case $p>n$ can be easily reduced to the previous assumption by computing a compact QR factorization $L=QR$ and replacing $L$ with the triangular factor $R$. Typically, $L$ is a diagonal weighting matrix or a discrete approximation of a derivative operator. For example, the matrices \begin{equation}\label{d1d2} D_1= \begin{bmatrix} 1 & -1 & & & \\ & \ddots & \ddots & \\ & & 1 & -1 \end{bmatrix} \quad \text{ and } \quad D_2=\begin{bmatrix} 1 & -2 & 1 & & & \\ & \ddots & \ddots & \ddots &\\ & & 1 & -2 & 1 \end{bmatrix}, \end{equation} of size $(n-1)\times n$ and $(n-2)\times n$, respectively, are approximations to the first and second derivative operators. When a regularization matrix is introduced, problem~\eqref{mnls} becomes \begin{equation} \begin{cases} \displaystyle \min_{\bm{s}\in{\mathbb{R}}^n}\|L\bm{s}\|^2 \\ \displaystyle \bm{s} \in \bigl\{ \arg \min_{\bm{s}\in{\mathbb{R}}^n} \|J(\bm{x}^{(k)})\bm{s}+\bm{r}(\bm{x}^{(k)})\|^2 \bigr\}. \end{cases} \label{Lmnls} \end{equation} Both~\eqref{mnls} and~\eqref{Lmnls} impose some kind of regularity on the update vector $\bm{s}$ for the solution $\bm{x}^{(k)}$ and not on the solution itself.
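For an underdetermined Jacobian, the solution of \eqref{mnls} is the pseudoinverse step $\bm{s}=-J(\bm{x}^{(k)})^\dagger\bm{r}(\bm{x}^{(k)})$. The following sketch (our own, on synthetic data) verifies its two defining properties: it solves the lower-level problem, and it is orthogonal to the null space of the Jacobian, hence of minimal norm among all solutions:

```python
import numpy as np

rng = np.random.default_rng(0)
Jk = rng.standard_normal((3, 5))          # m < n: underdetermined, full row rank a.s.
rk = rng.standard_normal(3)

s = -np.linalg.pinv(Jk) @ rk              # minimal-norm least-squares step

# Lower-level problem: with full row rank the residual is annihilated.
assert np.allclose(Jk @ s, -rk)

# Upper-level problem: s is orthogonal to N(J_k), so adding any null-space
# component z can only increase the norm.
_, _, Vt = np.linalg.svd(Jk)
V2 = Vt[3:].T                             # orthonormal basis of the null space
assert np.allclose(V2.T @ s, 0, atol=1e-10)
for _ in range(5):
    z = V2 @ rng.standard_normal(2)
    assert np.linalg.norm(s) <= np.linalg.norm(s + z) + 1e-12
```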
The problem of imposing a regularity constraint directly on the solution $\bm{x}$ of problem~\eqref{nonlinL2}, i.e., \begin{equation}\label{minnorm} \begin{cases} \displaystyle\min_{\bm{x}\in{\mathbb{R}}^n}\|\bm{x}\|^2 \\ \displaystyle \bm{x} \in \bigl\{ \arg \min_{\bm{x}\in{\mathbb{R}}^n} \|F(\bm{x})-\bm{b}\|^2 \bigr\}, \end{cases} \end{equation} is studied in~\cite{Eriksson96optimization,ErikssonPaperII,ErikssonReg05,pr20}. These papers are based on the application of the damped Gauss--Newton method to the solution of \eqref{minnorm}. To ensure the computation of the minimal-norm solution, at the $k$th iteration, the Gauss--Newton approximation is orthogonally projected onto the null space of the Jacobian $J(\bm{x}^{(k)})$. In~\cite{pr20}, the damping parameter is estimated by the Armijo--Goldstein principle; we refer to this method as the MNGN algorithm. In the same paper, this approach is applied to the minimization of a suitable seminorm, and different regularization techniques are considered under the assumption that the nonlinear function $F$ is ill-conditioned. Unfortunately, the algorithms developed in the above papers occasionally fail to converge. They take the form $$ \bm{x}^{(k+1)} = \bm{x}^{(k)} + \alpha_k \widetilde{\bm{s}}^{(k)} - {\mathcal{P}}_{{\mathcal{N}}(J_k)} \bm{x}^{(k)}, $$ where $\widetilde{\bm{s}}^{(k)}$ is the solution of \eqref{mnls}, $\alpha_k$ is a step length, and ${\mathcal{P}}_{{\mathcal{N}}(J_k)}$ is the orthogonal projector onto the null space of $J_k=J(\bm{x}^{(k)})$. One reason for the nonconvergence of such methods is that the projection step may cause the residual to increase considerably at particular iterations. Moreover, the rank of $J(\bm{x}^{(k)})$ may vary as the iteration progresses, and its incorrect estimation often leads to the presence of small singular values for the Jacobian, which amplify computational errors.
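The role of small singular values can be illustrated with a simple numerical-rank estimate; the relative threshold below is an arbitrary choice of ours, not the rule proposed in this paper:

```python
import numpy as np

def numerical_rank(J, tau=1e-8):
    # Count singular values above the relative threshold tau * sigma_1.
    sv = np.linalg.svd(J, compute_uv=False)
    return int(np.sum(sv > tau * sv[0]))

J = np.array([[1.0, 2.0, 3.0, 4.0],
              [0.0, 1.0, 1.0, 1.0]])
J = np.vstack([J, J[0] + 1e-10])          # third row: first row plus tiny noise

# The exact rank is 3, but the third singular value is ~1e-10; treating it as
# nonzero would divide by it in the Gauss-Newton step and amplify errors.
assert np.linalg.matrix_rank(J) == 3
assert numerical_rank(J) == 2
```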
This problem of nonconvergence is dealt with in~\cite{campbell} by a method which will be denoted CKB in the following. The authors consider a convex combination of the Gauss--Newton approximation and its orthogonal projection, and apply a relaxation parameter $\gamma_k$ to this search direction, chosen according to a given rule. After some manipulation, the method can be written as \begin{equation}\label{ckb} \bm{x}^{(k+1)} = \bm{x}^{(k)} + \widetilde{\bm{s}}^{(k)} - \gamma_k {\mathcal{P}}_{{\mathcal{N}}(J_k)} \bm{x}^{(k)}. \end{equation} This approach makes the computation of the minimal-norm solution more robust, but it may not converge in some situations; see Section~\ref{steplen}. Moreover, both the MNGN and the CKB methods suffer from serious convergence problems caused by the variation of the rank of the Jacobian along the iterations. The rank often drops to a small value in a neighborhood of the solution, while the two methods consider a fixed rank, generally assumed to be the smaller dimension of the Jacobian. In this paper, we aim at improving the convergence of the methods presented in~\cite{campbell} and~\cite{pr20}. We do this by first introducing in the MNGN method a technique to estimate the rank of the matrix $J(\bm{x}^{(k)})$ at each iteration. This procedure has the effect of improving the convergence of the method, reducing the possibility that the iteration diverges because of error amplification. Then, we introduce a second relaxation parameter for the projection term, as well as a strategy to automatically tune it, in addition to the usual damping parameter for the Gauss--Newton search direction. This approach produces, on average, solutions closer to optimality, i.e., with smaller norms, than those computed by the CKB method. Furthermore, we consider a model profile $\overline{\bm{x}}$ for the solution, which is useful in applications where sufficient a priori information on the physical system under investigation is available.
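The effect of the relaxation parameter in \eqref{ckb} can be seen in a small synthetic computation (our own illustration): since $\widetilde{\bm{s}}^{(k)}$ lies in the row space of $J_k$, the null-space component of the new iterate is exactly $(1-\gamma_k)$ times that of the old one, so $\gamma_k=1$ recovers the fully projected iterate.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((3, 5))           # full row rank a.s.: r = 3, n = 5
x = rng.standard_normal(5)
s_tilde = -np.linalg.pinv(J) @ rng.standard_normal(3)   # lies in the row space

_, _, Vt = np.linalg.svd(J)
P = Vt[3:].T @ Vt[3:]                     # orthogonal projector onto N(J)

for gamma in (0.0, 0.5, 1.0):
    x_new = x + s_tilde - gamma * P @ x
    # P s_tilde = 0, so the null-space component is damped by (1 - gamma).
    assert np.allclose(P @ x_new, (1 - gamma) * (P @ x))
```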
The paper is structured as follows. In Section~\ref{n_mns}, we revise the MNGN method and reformulate Theorem 3.1 from~\cite{pr20} by introducing a model profile for the solution. Then, we give a theoretical justification for the fact that the convergence of the method may not be ensured. Section~\ref{rankjac} explains how to estimate the numerical rank of the Jacobian $J(\bm{x}^{(k)})$ at each iteration. In Section~\ref{steplen}, we describe an algorithm which introduces a second parameter to control the size of the correction vector that provides the minimal-norm solution, and which estimates this parameter automatically. In Section~\ref{n_mLns}, we extend the discussion to the minimal-$L$-norm solution, where $L$ is a regularization matrix. Numerical examples can be found in Section~\ref{examples}. \section{Nonlinear minimal-norm solution}\label{n_mns} We begin by recalling the definition of the singular value decomposition (SVD) of a matrix $J\in{\mathbb{R}}^{m\times n}$~\cite{gvl96}, which will be needed later. The SVD is a matrix decomposition of the form $$ J=U\Sigma V^T, $$ where $U=[\bm{u}_1,\dots,\bm{u}_m]\in {\mathbb{R}}^{m\times m}$ and $V=[\bm{v}_1,\dots,\bm{v}_n]\in {\mathbb{R}}^{n\times n}$ are matrices with orthonormal columns and $\Sigma_{i,j}=0$ for $i\neq j$. The nonzero diagonal elements of the matrix $\Sigma \in {\mathbb{R}}^{m\times n}$ are the \emph{singular values} $\sigma_1\geq\sigma_2\geq\cdots\geq\sigma_r>0$, with $r=\rank(J)\leq\min(m,n)$. Let ${\mathcal{N}}(J)$ denote the null space of the matrix $J$. It is well known that $$ {\mathcal{N}}(J) := \left\{ \bm{s}\in{\mathbb{R}}^n : J\bm{s}=0 \right\} = \Span\{ \bm{v}_{r+1},\ldots,\bm{v}_n \}. $$ Let us now briefly review the computation of the minimal-norm solution to the nonlinear problem~\eqref{nonlinL2} by the \emph{minimal-norm Gauss--Newton} (MNGN) method, presented in~\cite{pr20}. Our aim is to show why this method may fail to converge.
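In code, a basis of ${\mathcal{N}}(J)$ is read off from the last $n-r$ right singular vectors returned by an SVD routine. A minimal sketch (our own, with a hand-built rank-deficient matrix):

```python
import numpy as np

J = np.array([[1.0, 2.0, 0.0, 1.0],
              [2.0, 4.0, 0.0, 2.0],    # twice the first row: rank is 2
              [0.0, 0.0, 1.0, 1.0]])
U, sv, Vt = np.linalg.svd(J)
r = int(np.sum(sv > 1e-12 * sv[0]))    # numerical rank
assert r == 2

V2 = Vt[r:].T                          # columns: v_{r+1}, ..., v_n
assert np.allclose(J @ V2, 0, atol=1e-12)                 # they lie in N(J)
assert np.allclose(V2.T @ V2, np.eye(4 - r), atol=1e-12)  # and are orthonormal
```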
Here, we extend the discussion from~\cite{pr20} by introducing a model profile $\overline{\bm{x}}\in{\mathbb{R}}^n$, which represents an a priori estimate of the desired solution, and formulate the problem in the form \begin{equation} \begin{cases} \displaystyle\min_{\bm{x}\in{\mathbb{R}}^n}\|\bm{x}-\overline{\bm{x}}\|^2 \\ \displaystyle \bm{x} \in \bigl\{ \arg \min_{\bm{x}\in{\mathbb{R}}^n} \|F(\bm{x})-\bm{b}\|^2 \bigr\}. \end{cases} \label{nonlinmnls} \end{equation} We consider an iterative method of the type~\eqref{iter} based on the following first-order linearization of the problem \begin{equation} \begin{cases} \displaystyle\min_{\bm{s}\in{\mathbb{R}}^n}\|\bm{x}^{(k)}-\overline{\bm{x}}+\alpha_k\bm{s}\|^2 \\ \displaystyle \bm{s} \in \bigl\{ \arg \min_{\bm{s}\in{\mathbb{R}}^n} \|J_k\bm{s}+\bm{r}_k\|^2 \bigr\}, \end{cases} \label{linmnal} \end{equation} where $J_k=J(\bm{x}^{(k)})$ is the Jacobian of $F$ in $\bm{x}^{(k)}$ and $\bm{r}_k=\bm{r}(\bm{x}^{(k)})$ is the residual vector. The damping parameter $\alpha_k$ is indispensable to ensure the convergence of the Gauss--Newton method. We estimate it by the Armijo--Goldstein principle~\cite{armijo,goldstein}, but it can be chosen by any strategy which guarantees a reduction in the norm of the residual. In our case, the Armijo condition~\cite{armijo,dennis} implies \[ f(\bm{x}^{(k)}+\alpha_k \widetilde{\bm{s}}^{(k)}) \leq f(\bm{x}^{(k)}) + \mu\alpha_k \nabla f(\bm{x}^{(k)})^T \widetilde{\bm{s}}^{(k)}, \] where $\widetilde{\bm{s}}^{(k)}$ is determined by solving~\eqref{mnls} and $\mu$ is a constant in $(0, 1)$. Since $f(\bm{x})=\frac{1}{2}\|\bm{r}(\bm{x})\|^2$ and $\nabla f(\bm{x})=J(\bm{x})^T\bm{r}(\bm{x})$, it reads \[ \|\bm{r}(\bm{x}^{(k)}+\alpha_k \widetilde{\bm{s}}^{(k)})\|^2 \leq \|\bm{r}_k\|^2 + 2\mu \alpha_k \bm{r}_k^T J_k \widetilde{\bm{s}}^{(k)}. 
\] Note that, as $\widetilde{\bm{s}}^{(k)}$ satisfies the normal equations associated to problem \eqref{gauss-newt}, it holds $J_k^T\bm{r}_k=-J_k^TJ_k\widetilde{\bm{s}}^{(k)}$, so that $\bm{r}_k^TJ_k\widetilde{\bm{s}}^{(k)}=-\|J_k\widetilde{\bm{s}}^{(k)}\|^2$. The \emph{Armijo--Goldstein principle} \cite{bjo96,goldstein} sets $\mu=\frac{1}{4}$ and determines the scalar $\alpha_k$ as the largest number in the sequence $2^{-i}$, $i=0,1,\ldots,$ for which it holds \begin{equation}\label{armgol} \|\bm{r}_k\|^2 - \|\bm{r}(\bm{x}^{(k)}+\alpha_k \widetilde{\bm{s}}^{(k)})\|^2 \geq \frac{1}{2} \alpha_k \|J_k \widetilde{\bm{s}}^{(k)}\|^2. \end{equation} The iteration resulting from the solution of \eqref{linmnal} is defined by the following theorem. \medskip \begin{theorem}\label{theo3.1} Let $\bm{x}^{(k)}\in{\mathbb{R}}^n$ and let $\widetilde{\bm{x}}^{(k+1)}=\bm{x}^{(k)}+\alpha_k\widetilde{\bm{s}}^{(k)}$ be the Gauss--Newton iteration for~\eqref{nonlinL2}, where the step $\widetilde{\bm{s}}^{(k)}$ is determined by solving~\eqref{mnls} and the step length $\alpha_k$ by the Armijo--Goldstein principle. Then, the iteration $\bm{x}^{(k+1)}=\bm{x}^{(k)}+\alpha_k\bm{s}^{(k)}$ defined by~\eqref{linmnal} is given by \begin{equation}\label{tesi} \bm{x}^{(k+1)} = \widetilde{\bm{x}}^{(k+1)} - V_2V_2^T \bigl(\bm{x}^{(k)}-\overline{\bm{x}}\bigr), \end{equation} where $\rank(J_k)=r_k$ and the columns of the matrix $V_2=[\bm{v}_{r_k+1},\ldots,\bm{v}_n]$ are orthonormal vectors in ${\mathbb{R}}^n$ spanning the null space of $J_k$. \end{theorem} \smallskip \begin{proof} The proof follows the pattern of that of Theorem 3.1 in~\cite{pr20}. Let $U\Sigma V^T$ be the singular value decomposition of the matrix $J_k$. 
The upper-level problem in~\eqref{linmnal} can be expressed as \[ \|\bm{x}^{(k)}-\overline{\bm{x}}+\alpha_k\bm{s}\|^2 = \|V^T(\bm{x}^{(k)}-\overline{\bm{x}}+\alpha_k\bm{s})\|^2 = \|\alpha_k\bm{y}+\bm{z}^{(k)}\|^2, \] with $\bm{y}=V^T\bm{s}$ and $\bm{z}^{(k)}=V^T\left(\bm{x}^{(k)}-\overline{\bm{x}}\right)$. Replacing $J_k$ by its SVD and setting $\bm{g}^{(k)}=U^T\bm{r}_k$, we can rewrite~\eqref{linmnal} as the following diagonal linear least-squares problem \[ \begin{cases} \displaystyle \min_{\bm{y}\in{\mathbb{R}}^n}\|\alpha_k\bm{y}+\bm{z}^{(k)}\|^2 \\ \displaystyle \bm{y} \in \bigl\{ \arg \min_{\bm{y}\in{\mathbb{R}}^n} \|\Sigma\bm{y}+\bm{g}^{(k)}\|^2 \bigr\}. \end{cases} \] Solving the lower-level minimization problem uniquely determines the components $y_i=-\sigma_i^{-1}g^{(k)}_i$, $i=1,\ldots,r_k$, while the entries $y_i$, $i=r_k+1,\ldots,n$, are left undetermined. Their values can be found by solving the upper-level problem. From \[ \|\alpha_k\bm{y}+\bm{z}^{(k)}\|^2 = \sum_{i=1}^{r_k} \left(-\alpha_k\frac{g^{(k)}_i}{\sigma_i}+z^{(k)}_i\right)^2 + \sum_{i=r_k+1}^n \left(\alpha_k y_i+z^{(k)}_i\right)^2, \] we obtain $y_i=-\frac{z^{(k)}_i}{\alpha_k} =-\frac{1}{\alpha_k} \bm{v}_i^T(\bm{x}^{(k)}-\overline{\bm{x}})$, $i=r_k+1,\ldots,n$. Then, the solution to~\eqref{linmnal}, that is, the next approximation to the solution of~\eqref{nonlinmnls}, is $$ \bm{x}^{(k+1)} = \bm{x}^{(k)} + \alpha_k V\bm{y} = \bm{x}^{(k)} - \alpha_k\sum_{i=1}^{r_k} \frac{g^{(k)}_i}{\sigma_i} \bm{v}_i -\sum_{i=r_k+1}^n (\bm{v}_i^T(\bm{x}^{(k)}-\overline{\bm{x}})) \bm{v}_i, $$ where the last summation can be written in matrix form as $V_2V_2^T\left(\bm{x}^{(k)}-\overline{\bm{x}}\right)$, and the columns of $V_2=[\bm{v}_{r_k+1},\ldots,\bm{v}_n]$ are a basis for ${\mathcal{N}}(J_k)$. 
It is immediate (see \cite[Theorem~3.1]{pr20}) to prove that \[ \widetilde{\bm{x}}^{(k+1)} = \bm{x}^{(k)} + \alpha_k\widetilde{\bm{s}}^{(k)} = \bm{x}^{(k)} - \alpha_k\sum_{i=1}^{r_k} \frac{g^{(k)}_i}{\sigma_i} \bm{v}_i, \] from which \eqref{tesi} follows. \end{proof} \medskip Summarizing, the MNGN method consists of the iteration \[ \bm{x}^{(k+1)}=\bm{x}^{(k)}+\alpha_k\bm{s}^{(k)}, \] where the step is $$ \bm{s}^{(k)}= \widetilde{\bm{s}}^{(k)} - \frac{1}{\alpha_k} \bm{t}^{(k)}, $$ with \begin{equation} \label{stepdef} \widetilde{\bm{s}}^{(k)} = -\sum_{i=1}^{r_k} \frac{g^{(k)}_i}{\sigma_i} \bm{v}_i, \qquad \bm{t}^{(k)} = V_2V_2^T\bigl(\bm{x}^{(k)} -\overline{\bm{x}}\bigr). \end{equation} Since ${\mathcal{P}}_{{\mathcal{N}}(J_k)}=V_2V_2^T$ is the orthogonal projector onto ${\mathcal{N}}(J_k)$, the above theorem states that the difference $\bm{x}^{(k+1)}-\overline{\bm{x}}$ between the $(k+1)$th MNGN iterate and the model profile is orthogonal to the null space of $J_k$. \smallskip Theorem~\ref{theo3.1} shows that the correction vector $\bm{t}^{(k)}$ defined in~\eqref{stepdef}, which allows the computation of the minimal-norm solution at each step, is not damped by the parameter $\alpha_k$. As a result, in some numerical examples, the method fails to converge because projecting the solution orthogonally to the null space of $J_k$ causes the residual to increase. To understand how this can happen, a second-order analysis of the objective function is required. The second-order Taylor approximation to the function $f(\bm{x})=\frac{1}{2}\|\bm{r}(\bm{x})\|^2$ at $\bm{x}^{(k+1)}=\bm{x}^{(k)}+\alpha\bm{s}$ is \begin{equation}\label{second} f(\bm{x}^{(k+1)}) \simeq f(\bm{x}^{(k)})+ \alpha\nabla f(\bm{x}^{(k)})^T \bm{s} + \frac{1}{2} \alpha^2 \bm{s}^T \nabla^2 f(\bm{x}^{(k)}) \bm{s}.
\end{equation} The gradient and the Hessian of $f(\bm{x})$, written in matrix form, are given by \[ \nabla f(\bm{x})= J(\bm{x})^T \bm{r}(\bm{x}), \qquad \nabla^2 f(\bm{x})= J(\bm{x})^T J(\bm{x}) + {\mathcal{Q}}(\bm{x}), \] where \[ {\mathcal{Q}}(\bm{x})= \sum_{i=1}^m r_i(\bm{x})\nabla^2 r_i(\bm{x}), \] and $\nabla^2 r_i(\bm{x})$ is the Hessian matrix of $r_i(\bm{x})$. By replacing the expression of $f$ and $\alpha\bm{s}=\alpha\widetilde{\bm{s}}-\bm{t}$ in~\eqref{second}, where $\widetilde{\bm{s}}$ is the Gauss--Newton step and $\bm{t}$ is in the null space of $J_k$, and letting ${\mathcal{Q}}_k={\mathcal{Q}}(\bm{x}^{(k)})$, the following approximation is obtained \[ \begin{aligned} \frac{1}{2}\|\bm{r}_{k+1}\|^2 &\simeq \frac{1}{2}\|\bm{r}_k\|^2 + \alpha \bm{r}_k^T J_k \bm{s} + \frac{1}{2} \alpha^2 \bm{s}^T \left(J_k^T J_k + {\mathcal{Q}}_k\right)\bm{s} \\ &= \frac{1}{2}\|\bm{r}_k\|^2 + \alpha \bm{r}_k^T J_k \widetilde{\bm{s}} + \frac{1}{2} \alpha^2 \widetilde{\bm{s}}^T \left(J_k^T J_k + {\mathcal{Q}}_k\right)\widetilde{\bm{s}} - \alpha\bm{t}^T {\mathcal{Q}}_k \widetilde{\bm{s}} + \frac{1}{2}\bm{t}^T {\mathcal{Q}}_k \bm{t}. \end{aligned} \] The first two terms containing second derivatives (the matrix ${\mathcal{Q}}_k$) are damped by the $\alpha$ parameter. If the function $F$ is mildly nonlinear, the third term $\frac{1}{2}\bm{t}^T {\mathcal{Q}}_k \bm{t}$ is negligible. In the presence of a strong nonlinearity, its contribution to the residual is significant and may lead to its growth. This shows that a damping parameter is required to control the step length for both the Gauss--Newton step $\widetilde{\bm{s}}$ and the correction vector $\bm{t}$. If a relaxation parameter is introduced for $\bm{t}$, Theorem~\ref{theo3.1} implies that the minimal-norm solution of \eqref{linmnal} can only be approximated. \begin{remark}\label{smallex}\rm We report a simple low dimensional example for which the MNGN method may not converge. 
Let us consider the function $F:{\mathbb{R}}^2 \rightarrow {\mathbb{R}}$ defined by $$ F(\bm{x})=\delta^2 \left[ (x_1-\gamma)^2+(x_2-\gamma)^2 \right]-1, $$ depending on the parameters $\delta,\gamma\in{\mathbb{R}}$. Since the Hessian matrix of the residual is given by $$ \nabla^2 r(\bm{x})= \begin{bmatrix} 2\delta^2 & 0\\ 0 & 2\delta^2 \end{bmatrix}, $$ the second-order term $\frac{1}{2}\bm{t}^T {\mathcal{Q}}_k \bm{t}$ is not negligible, in general, when $\delta$ is relatively large. For example, setting $\delta=0.7$, $\gamma=2$, and choosing an initial vector $\bm{x}^{(0)}$ with random components in $(-5,5)$, the MNGN method converges only after a large number of iterations (350 on average). Setting $\delta=0.75$, the same method does not converge within 500 iterations. \end{remark} \section{Estimating the rank of the Jacobian}\label{rankjac} In order to apply Theorem~\ref{theo3.1} to compute the minimal-norm solution by \eqref{tesi}, the rank of the Jacobian matrix $J_k=J(\bm{x}^{(k)})$ should be known in advance. As the rank may vary along the iterations, we set $r_k=\rank(J_k)$. The knowledge of $r_k$ for each $k=0,1,\ldots$, is not generally available, making it necessary to estimate its value at each iteration step, to avoid nonconvergence or a breakdown of the algorithm. In such situations, it is common to consider the numerical rank $r_{\epsilon,k}$ of $J_k$, sometimes denoted as $\epsilon$-rank, where $\epsilon$ represents a chosen tolerance. The numerical rank is defined in terms of the singular values $\sigma_i^{(k)}$ of $J_k$, as the integer $r_{\epsilon,k}$ such that \[ \sigma_{r_{\epsilon,k}}^{(k)}>\epsilon\geq \sigma_{r_{\epsilon,k}+1}^{(k)}. \] Theorem~\ref{theo3.1} can be adapted to this setting, by simply replacing at each iteration the rank $r_k$ with the numerical rank $r_{\epsilon,k}$. Determining the numerical rank is a difficult task for discrete ill-posed problems, in which the singular values decay monotonically to zero.
In such a case, the numerical rank plays the role of a regularization parameter and is estimated by suitable methods, which often require information about the noise level and type; see, e.g., \cite{Hansen,rr13}. When the problem is locally rank-deficient, meaning that the rank of $J(\bm{x})$ depends on the evaluation vector $\bm{x}$, the numerical rank $r_{\epsilon,k}$ can be determined, in principle, by choosing a suitable value of $\epsilon$. Numerical experiments show that a fixed value of $\epsilon$ does not always lead to a correct estimation of $r_{\epsilon,k}$, and that it is preferable to determine the $\epsilon$-rank by searching for a significant gap between $\sigma_{r_{\epsilon,k}}^{(k)}$ and $\sigma_{r_{\epsilon,k}+1}^{(k)}$. To locate such a gap, we adopt a heuristic approach already applied in~\cite{cnrr20} for the same purpose, in a different setting. At each step, we compute the ratios \[ \rho_i^{(k)} = \frac{\sigma_i^{(k)}}{\sigma_{i+1}^{(k)}}, \qquad i=1,2,\ldots,q-1, \] where $q=\min(m,n)$. Then, we consider the index set \[ {\mathcal{I}}_k = \left\{ i\in\{1,2,\ldots,q-1\} : \rho_i^{(k)}>R \text{ and } \sigma_i^{(k)}>\tau \right\}. \] An index $i$ belongs to ${\mathcal{I}}_k$ if there is a significant ``jump'' between $\sigma_i^{(k)}$ and $\sigma_{i+1}^{(k)}$, and $\sigma_i^{(k)}$ is numerically nonzero. If the set ${\mathcal{I}}_k$ is empty, we set $r_{\epsilon,k}=q$. Otherwise, we consider \begin{equation}\label{rankest} \rho_j^{(k)}=\max_{i\in{\mathcal{I}}_k}\rho_i^{(k)}, \end{equation} and we define $r_{\epsilon,k}=j$. This amounts to selecting the largest gap between ``large'' and ``small'' singular values. In our numerical simulations, we set $R=10^2$ and $\tau=10^{-8}$. We observed that the value of these parameters is not critical for problems characterized by a rank-deficient Jacobian. Estimating the rank becomes increasingly difficult as the gap between ``large'' and ``small'' singular values gets smaller.
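The gap-based selection rule above is easy to prototype. The following Python sketch is our illustration (the paper's implementation is in Matlab); it applies the criterion with the values $R=10^2$ and $\tau=10^{-8}$ used in the numerical simulations.

```python
import numpy as np

def estimate_rank(sigma, R=1e2, tau=1e-8):
    """Gap-based numerical rank estimate from singular values
    sigma[0] >= sigma[1] >= ... > 0 (heuristic of this section)."""
    sigma = np.asarray(sigma, dtype=float)
    q = sigma.size
    # ratios rho_i = sigma_i / sigma_{i+1} (0-based indexing below)
    rho = sigma[:-1] / sigma[1:]
    # indices with a significant jump and a numerically nonzero sigma_i
    idx = [i for i in range(q - 1) if rho[i] > R and sigma[i] > tau]
    if not idx:
        return q                        # no gap found: full numerical rank
    j = max(idx, key=lambda i: rho[i])  # index of the largest gap
    return j + 1                        # convert to a 1-based rank

# a spectrum with a clear gap between "large" and "small" singular values
s = np.array([5.0, 1.0, 2e-1, 1e-12, 1e-14])
r = estimate_rank(s)                    # r == 3
```

With this spectrum the largest admissible jump occurs between $2\cdot10^{-1}$ and $10^{-12}$, so the estimated rank is 3; when no index qualifies, the function falls back to the full numerical rank $q$.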
This condition usually corresponds to ill-conditioned problems, which require specific regularization methods. \section{Choosing the projection step length}\label{steplen} The occasional nonconvergence in the computation of the minimal-norm solution to a nonlinear least-squares problem was discussed in~\cite{campbell}, where the authors propose an iterative method based on a convex combination of the Gauss--Newton and the minimal-norm Gauss--Newton iterates, which we denote by CKB. Following our notation, it can be expressed in the form \begin{equation} \label{camp} \bm{x}^{(k+1)} = \left(1-\gamma_k\right)\left[\bm{x}^{(k)} + \widetilde{\bm{s}}^{(k)}\right] + \gamma_k\left[\bm{x}^{(k)} + \widetilde{\bm{s}}^{(k)} - V_2V_2^T\bm{x}^{(k)}\right], \end{equation} where the parameters $\gamma_k\in[0,1]$, for $k=0,1,\ldots$, form a sequence converging to zero. The standard Gauss--Newton method is obtained by setting $\gamma_k=0$, while $\gamma_k=1$ leads to the minimal-norm Gauss--Newton method. In their numerical examples, the authors adopt the sequences $\gamma_k=(0.5)^{k+1}$ and $\gamma_k=(0.5)^{2^k}$. Equation~\eqref{camp} can immediately be rewritten in the form \eqref{ckb}, showing that the method proposed in~\cite{campbell} is equivalent to the application of the undamped Gauss--Newton method, whose convergence is not theoretically guaranteed~\cite{bjo96}, with a damped correction to favor the decrease of the norm of the solution. The numerical experiments reported in the paper show that the minimization of the residual is sped up if $\gamma_k$ quickly converges to zero, while the norm of the solution decreases faster if $\gamma_k$ has a slower decay. The choice of the sequence of parameters appears to be critical for tuning the performance of the algorithm, and no adaptive choice for $\gamma_k$ is proposed.
\smallskip In this paper, we propose to introduce a second relaxation parameter, $\beta_k$, to control the step length of the minimal-norm correction $\bm{t}^{(k)}$ defined in~\eqref{stepdef}. The new iterative method is denoted by MNGN2 and it takes the form \begin{equation}\label{mngn2} \bm{x}^{(k+1)} = \bm{x}^{(k)} + \alpha_k \widetilde{\bm{s}}^{(k)} - \beta_k \bm{t}^{(k)}, \end{equation} where $\widetilde{\bm{s}}^{(k)}$ is the step vector produced by the Gauss--Newton method and $\bm{t}^{(k)}$ is the projection vector which makes the norm of $\bm{x}^{(k+1)}$ minimal, without changing the value of the linearized residual. The second-order analysis reported at the end of Section~\ref{n_mns} can be adapted to the CKB method \eqref{ckb}. It shows that neither the CKB nor the MNGN method is guaranteed to converge, as both the Gauss--Newton search direction and the projection step should be damped to ensure that the residual decreases. The MNGN2 method converges locally if $\alpha_k$ and $\beta_k$ are suitably chosen, but it will recover the minimal-norm solution only if $\beta_k\simeq 1$ for $k$ close to convergence. Our numerical tests showed that it is important to choose both $\alpha_k$ and $\beta_k$ adaptively along the iterations. A simple solution is to let $\beta_k=\alpha_k$ and estimate $\alpha_k$ by the Armijo--Goldstein principle~\eqref{armgol}, with $\bm{s}^{(k)}=\widetilde{\bm{s}}^{(k)}-\bm{t}^{(k)}$ in place of $\widetilde{\bm{s}}^{(k)}$. This approach proves to be effective in the computation of the minimal-norm solution, but its convergence is often rather slow. To speed up the iteration, we propose a procedure to adaptively choose the value of $\beta_k$.
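To make the structure of the method concrete, the following numpy sketch performs a single MNGN2 iteration~\eqref{mngn2}: the Gauss--Newton step $\widetilde{\bm{s}}^{(k)}$ and the projection $\bm{t}^{(k)}$ are computed from the SVD of $J_k$, and $\alpha_k$ is chosen by the Armijo--Goldstein rule. This is an illustration under simplifying assumptions, not the authors' code: the adaptive update of $\beta_k$ and the rank heuristic are omitted, $\beta$ is a user-supplied parameter, and all helper names are ours.

```python
import numpy as np

def armijo_goldstein(res, x, s, Js, max_halvings=30):
    """Largest alpha = 2^(-i) satisfying the Armijo-Goldstein condition."""
    r0 = res(x)
    r0sq, Jssq = float(r0 @ r0), float(Js @ Js)
    alpha = 1.0
    for _ in range(max_halvings):
        ra = res(x + alpha * s)
        if r0sq - float(ra @ ra) >= 0.5 * alpha * Jssq:
            break
        alpha *= 0.5
    return alpha

def mngn2_step(res, jac, x, xbar, beta=1.0, tol=1e-10):
    """One MNGN2 iteration x+ = x + alpha*s_tilde - beta*t (a sketch).
    res, jac: callables returning r(x) and J(x)."""
    r, J = res(x), jac(x)
    U, sv, Vt = np.linalg.svd(J, full_matrices=True)
    rk = int(np.sum(sv > tol))          # numerical rank of J
    V1, V2 = Vt[:rk].T, Vt[rk:].T
    g = U.T @ r
    s_tilde = -V1 @ (g[:rk] / sv[:rk])  # Gauss-Newton step
    t = V2 @ (V2.T @ (x - xbar))        # projection onto N(J)
    alpha = armijo_goldstein(res, x, s_tilde, J @ s_tilde)
    return x + alpha * s_tilde - beta * t

# Toy linear test: F(x) = A x - b with a rank-deficient "Jacobian" A.
# One step with beta = 1 reaches the minimal-norm least-squares solution.
A = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0])
res = lambda x: A @ x - b
jac = lambda x: A
x1 = mngn2_step(res, jac, np.array([3.0, 4.0, 5.0]), np.zeros(3))
# x1 is approximately (1, 2, 0)
```

In the linear, consistent case the full Gauss--Newton step is accepted with $\alpha_k=1$ and the projection removes the null-space component, so a single step recovers the minimal-norm solution; for genuinely nonlinear residuals the second-order term discussed above makes the choice of $\beta_k$ nontrivial.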
\begin{algorithm} \caption{Outline of the MNGN2 method.} \label{algobeta} \begin{algorithmic}[1] \REQUIRE nonlinear function $F$, data vector $\bm{b}$, \REQUIRE initial solution $\bm{x}^{(0)}$, model profile $\overline{\bm{x}}$, tolerance $\eta$ for residual increase \ENSURE approximation $\bm{x}^{(k+1)}$ of minimal-norm least-squares solution \STATE $k=0$, $\beta=1$ \REPEAT \STATE $k=k+1$ \STATE estimate $r_k=\rank(J(\bm{x}^{(k)}))$ by \eqref{rankest} \STATE compute $\widetilde{\bm{s}}^{(k)}$ by the Gauss--Newton method \eqref{gauss-newt} \STATE compute $\alpha_k$ by the Armijo--Goldstein principle \eqref{armgol} \STATE compute $\bm{t}^{(k)}$ by \eqref{stepdef} \IF {$\beta<1$} \STATE $\beta=2\beta$ \label{beta2} \ENDIF \STATE $\widetilde{\bm{x}}^{(k+1)}=\bm{x}^{(k)}+\alpha_k\widetilde{\bm{s}}^{(k)}$ \STATE $\widetilde{\rho}_{k+1}=\|F(\widetilde{\bm{x}}^{(k+1)})-\bm{b}\|+\varepsilon_M$ \label{linevareps} \STATE $\bm{x}^{(k+1)}=\widetilde{\bm{x}}^{(k+1)}-\beta\bm{t}^{(k)}$ \STATE $\rho_{k+1}=\|F(\bm{x}^{(k+1)})-\bm{b}\|$ \WHILE {$(\rho_{k+1}> \widetilde{\rho}_{k+1}+\delta(\widetilde{\rho}_{k+1},\eta))$ \AND ($\beta>10^{-8}$)} \STATE $\beta=\beta/2$ \STATE $\bm{x}^{(k+1)}=\widetilde{\bm{x}}^{(k+1)}-\beta\bm{t}^{(k)}$ \STATE $\rho_{k+1}=\|F(\bm{x}^{(k+1)})-\bm{b}\|$ \ENDWHILE \STATE $\beta_k=\beta$ \UNTIL {convergence} \end{algorithmic} \end{algorithm} This procedure is outlined in Algorithm~\ref{algobeta}. Initially, we set $\beta=1$. At each iteration, we compute the residual at the Gauss--Newton iteration $\widetilde{\bm{x}}^{(k+1)}$ and at the tentative iteration $\bm{x}^{(k+1)}=\widetilde{\bm{x}}^{(k+1)}-\beta\bm{t}^{(k)}$. Subtracting the vector $\beta\bm{t}^{(k)}$ may cause the residual to increase. 
We accept such an increase if \begin{equation} \label{condiz} \|\bm{r}(\bm{x}^{(k+1)})\| \leq \|\bm{r}(\widetilde{\bm{x}}^{(k+1)})\| + \delta\bigl(\|\bm{r}(\widetilde{\bm{x}}^{(k+1)})\|,\eta\bigr), \end{equation} where $\delta(\rho,\eta)$ is a function determining the maximal increase allowed in the residual $\rho=\|\bm{r}(\widetilde{\bm{x}}^{(k+1)})\|$, and $\eta>0$ is a chosen tolerance. Otherwise, $\beta$ is halved and the residual is recomputed until~\eqref{condiz} is satisfied or $\beta$ becomes excessively small. To allow $\beta$ to increase, we tentatively double it at each iteration (see line~\ref{beta2} in the algorithm) before applying the above procedure. At line~\ref{linevareps} of the algorithm we add the machine epsilon $\varepsilon_M$ to the actual residual $\widetilde{\rho}_{k+1}$ to prevent $\delta(\widetilde{\rho}_{k+1},\eta)$ from vanishing. A possible choice for the value of the residual increase is $\delta(\rho,\eta)=\eta\rho$, with $\eta$ suitably chosen. Our experiments showed that it is possible to find, by trial and error, a value of $\eta$ which produces good results, but its choice is strongly dependent on the particular example. We also noticed that, in cases where the residual stagnates, accepting a large increase in the residual may lead to nonconvergence. In such situations, a fixed multiple of the residual is not well suited to model its increase. Indeed, if the residual is large, one is inclined to accept only a small increase, while if the residual is very small, a relatively large growth may be acceptable. To overcome these difficulties, we consider $\delta(\rho,\eta)=\rho^\eta$, and choose $\eta$ at each step by the adaptive procedure described in Algorithm~\ref{algoeta}. When at least $k_{\text{res}}$ iterations have been performed, we compute the linear polynomial which fits the logarithm of the last $k_{\text{res}}$ residuals in the least-squares sense.
To detect whether the residual stagnates or increases, we check if the slope $M$ of the regression line exceeds $-10^{-2}$. If this happens, the value of $\eta$ is doubled. The effect on the algorithm is to give more weight to the decrease of the residual and less to that of the norm. To recover a significant decrease in the norm, if at a subsequent step the residual reduction accelerates (e.g., $M<-\frac{1}{2}$), the value of $\eta$ is halved. In our experiments, we initialize $\eta$ to $\frac{1}{8}$ and set $k_{\text{res}}=5$. \begin{remark}\label{complexity}\rm The adaptive estimation of $\delta(\rho,\eta)$ does not significantly increase the complexity of Algorithm~\ref{algobeta}, as line~\ref{rline} of Algorithm~\ref{algoeta} requires the solution of a $2\times 2$ linear system whose matrix is fixed and can be computed in advance, while forming the right-hand side requires $4k_{\text{res}}$ floating point operations. \end{remark} \begin{algorithm} \caption{Adaptive determination of the residual increase $\delta(\rho,\eta)$.} \label{algoeta} \begin{algorithmic}[1] \REQUIRE actual residual $\rho=\|\bm{r}(\widetilde{\bm{x}}^{(k+1)})\|$, starting tolerance $\eta$ \REQUIRE iteration index $k$, residuals $\theta_j=\|\bm{r}(\widetilde{\bm{x}}^{(k-k_{\text{res}}+j)})\|$, $j=1,\ldots,k_{\text{res}}$ \ENSURE residual increase $\delta(\rho,\eta)$ \STATE $M_{\text{min}}=-10^{-2}$, $M_{\text{max}}=-\frac{1}{2}$ \IF {$k \geq k_{\text{res}}$} \STATE compute regression line $p_1(t)=Mt+N$ of $(j,\log(\theta_j))$, $j=1,\ldots,k_{\text{res}}$ \label{rline} \IF {$M>M_{\text{min}}$} \STATE $\eta=2\eta$ \ELSIF {$M<M_{\text{max}}$} \STATE $\eta=\eta/2$ \ENDIF \ENDIF \STATE $\delta(\rho,\eta)=\rho^\eta$ \end{algorithmic} \end{algorithm} To detect convergence, we interrupt the iteration as soon as \begin{equation}\label{conver} \|\bm{x}^{(k+1)}-\bm{x}^{(k)}\|<\tau\|\bm{x}^{(k+1)}\| \qquad\text{or}\qquad \|\alpha_k\widetilde{\bm{s}}^{(k)}\|<\tau, \end{equation} or when a fixed number of
iterations $N_\text{max}$ is exceeded. The second stopping condition in \eqref{conver} detects slow progress of the relaxed Gauss--Newton iteration. This often happens close to the solution. The stop tolerance is set to $\tau=10^{-8}$. \section{Nonlinear minimal-$\boldsymbol{L}$-norm solution}\label{n_mLns} The introduction of a regularization matrix $L\in{\mathbb{R}}^{p\times n}$, $p\leq n$, in least-squares problems was originally connected to the numerical treatment of linear discrete ill-posed problems, and in particular to Tikhonov regularization. The use of a regularization matrix is also justified in underdetermined least-squares problems to select a solution with particular features, such as smoothness or sparsity, among the infinitely many possible solutions. While in \eqref{Lmnls} the seminorm $\|L\bm{s}\|$ is minimized over all the updating vectors $\bm{s}$ which minimize the linearized residual, here we seek to compute the minimal-$L$-norm solution to the nonlinear problem~\eqref{nonlinL2}, that is, the vector $\bm{x}$ which solves the constrained problem \begin{equation} \begin{cases} \displaystyle\min_{\bm{x}\in{\mathbb{R}}^n}\|L(\bm{x}-\overline{\bm{x}})\|^2 \\ \displaystyle \bm{x} \in \bigl\{ \arg \min_{\bm{x}\in{\mathbb{R}}^n} \|F(\bm{x})-\bm{b}\|^2 \bigr\}. \end{cases} \label{Lnonlinmnls} \end{equation} Similarly to Section~\ref{n_mns}, we consider an iterative method of the type~\eqref{iter}, where the step $\bm{s}^{(k)}$ is the solution of the linearized problem \begin{equation} \begin{cases} \displaystyle\min_{\bm{s}\in{\mathbb{R}}^n}\|L(\bm{x}^{(k)}-\overline{\bm{x}}+\alpha\bm{s})\|^2 \\ \displaystyle \bm{s} \in \bigl\{ \arg \min_{\bm{s}\in{\mathbb{R}}^n} \|J_k\bm{s}+\bm{r}_k\|^2 \bigr\}. \end{cases} \label{Llinmnls} \end{equation} We will denote the iteration resulting from the solution of \eqref{Llinmnls} as the \emph{minimal-$L$-norm Gauss--Newton} (MLNGN) method.
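Before turning to the GSVD, we note that the linearized problem~\eqref{Llinmnls} can also be solved by elementary means: the lower-level solution set is $\widetilde{\bm{s}}+V_2\bm{c}$, with $V_2$ a basis of ${\mathcal{N}}(J_k)$ and $\bm{c}$ free, and $\bm{c}$ follows from the upper-level problem by linear least squares. The following numpy sketch is an illustration under the assumption ${\mathcal{N}}(J_k)\cap{\mathcal{N}}(L)=\{0\}$, with a fixed step length and helper names of our own.

```python
import numpy as np

def minimal_L_norm_step(J, r, L, x, xbar, alpha=1.0, tol=1e-10):
    """Solve the bilevel linearized problem by parametrizing the
    lower-level solution set as s = s_tilde + V2 @ c, with c free."""
    U, sv, Vt = np.linalg.svd(J, full_matrices=True)
    rk = int(np.sum(sv > tol))
    V1, V2 = Vt[:rk].T, Vt[rk:].T
    s_tilde = -V1 @ ((U.T @ r)[:rk] / sv[:rk])   # minimum-norm GN step
    # upper level: min_c || L (x - xbar + alpha*(s_tilde + V2 c)) ||
    rhs = L @ (x - xbar + alpha * s_tilde)
    c, *_ = np.linalg.lstsq(alpha * (L @ V2), -rhs, rcond=None)
    return s_tilde + V2 @ c

# Sanity check with L = I: for a linear residual, x + s must equal the
# minimal-norm least-squares solution when alpha = 1.
J = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
x = np.array([3.0, 4.0, 5.0])
r = J @ x - np.array([1.0, 2.0])
s = minimal_L_norm_step(J, r, np.eye(3), x, np.zeros(3))
x_new = x + s
```

The GSVD-based formulation derived below yields the same step while exposing the generalized singular values, which is what the rank estimation of Section~\ref{rankjac} operates on in this setting.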
We recall the definition of the generalized singular value decomposition (GSVD) of a matrix pair $(J,L)$ \cite{gvl96}. Let $J\in{\mathbb{R}}^{m\times n}$ and $L\in{\mathbb{R}}^{p\times n}$ be matrices with $\rank(J)=r$ and $\rank(L)=p$. Assume that $m+p\geq n$ and $$ \rank\left(\begin{bmatrix} J\\ L \end{bmatrix}\right)=n, $$ which corresponds to requiring that $\mathcal{N}(J)\cap\mathcal{N}(L)=\{0\}$. The GSVD of the matrix pair $(J,L)$ is defined as the factorization $$ J=U\Sigma_J W^{-1}, \qquad L=V\Sigma_L W^{-1}, $$ where $U\in{\mathbb{R}}^{m\times m}$ and $V\in{\mathbb{R}}^{p\times p}$ are matrices with orthonormal columns $\bm{u}_i$ and $\bm{v}_i$, respectively, and $W\in {\mathbb{R}}^{n\times n}$ is nonsingular. If $m\geq n\geq r$, the matrices $\Sigma_J \in {\mathbb{R}}^{m\times n}$ and $\Sigma_L \in {\mathbb{R}}^{p\times n}$ have the form $$ \Sigma_J=\left[\begin{array}{ccc} O_{n-r} & & \\ & C & \\ & & I_d \\ \hline \\ & O_{(m-n)\times n} & \end{array}\right], \qquad \Sigma_L= \left[\begin{array}{cc|c} I_{p-r+d} & & \\ & & O_{p\times d} \\ & S & \end{array}\right], $$ where $d=n-p$, \begin{equation}\label{csmat} \begin{aligned} C &= \diag (c_1,\ldots,c_{r-d}), \qquad & 0<c_1\leq c_2 \leq \cdots \leq c_{r-d} < 1, \\ S &= \diag (s_1,\ldots,s_{r-d}), \qquad & 1>s_1 \geq s_2 \geq \cdots \geq s_{r-d} > 0, \end{aligned} \end{equation} with $c_i^2+s_i^2=1$, for $i=1,\ldots,r-d$. The identity matrix of size $k$ is denoted by $I_k$, while $O_k$ and $O_{k\times\ell}$ are zero matrices of size $k\times k$ and $k\times\ell$, respectively; a matrix block has to be omitted when one of its dimensions is zero. The scalars $\gamma_i=\frac{c_i}{s_i}$ are called \emph{generalized singular values}, and they appear in nondecreasing order. 
If $r\leq m<n$, the matrices $\Sigma_J \in {\mathbb{R}}^{m\times n}$ and $\Sigma_L \in {\mathbb{R}}^{p\times n}$ take the form $$ \Sigma_J=\left[\begin{array}{c|ccc} & O_{m-r} & & \\ O_{m\times (n-m)} & & C & \\ & & & I_d \end{array}\right], \qquad \Sigma_L= \left[\begin{array}{cc|c} I_{p-r+d} & & \\ & & O_{p\times d} \\ & S & \end{array}\right], $$ where the blocks are defined as above. \medskip Let $J_k=U\Sigma_J W^{-1}$, $L=V\Sigma_L W^{-1}$ be the GSVD of the matrix pair ($J_k$,$L$). We indicate by $\bm{w}_i$ the column vectors of the matrix $W$, and by $\widehat{\bm{w}}^j$ the rows of $W^{-1}$, that is $$ W = [\bm{w}_1,\ldots,\bm{w}_n], \qquad W^{-1} = \begin{bmatrix} \widehat{\bm{w}}^1 \\ \vdots \\ \widehat{\bm{w}}^n \end{bmatrix}. $$ We have ${\mathcal{N}}(J_k)=\Span(\bm{w}_1,\ldots,\bm{w}_{n-r_k})$, if $r_k=\rank(J_k)$; see~\cite{pr20} for a proof. \medskip \begin{theorem}\label{theo4.2} Let $\bm{x}^{(k)}\in{\mathbb{R}}^n$ and let $\widetilde{\bm{x}}^{(k+1)}=\bm{x}^{(k)}+\alpha_k\widetilde{\bm{s}}^{(k)}$ be the Gauss--Newton iteration for~\eqref{nonlinL2}, where the step $\widetilde{\bm{s}}^{(k)}$ is determined by solving~\eqref{Lmnls} and the step length $\alpha_k$ by the Armijo--Goldstein principle. Then, the iteration $\bm{x}^{(k+1)}=\bm{x}^{(k)}+\alpha_k\bm{s}^{(k)}$ for~\eqref{Llinmnls}, is given by \begin{equation}\label{mlnsol} \bm{x}^{(k+1)} = \widetilde{\bm{x}}^{(k+1)} - W_1\widehat{W}_1 \bigl(\bm{x}^{(k)}-\overline{\bm{x}}\bigr), \end{equation} where $\widehat{W}_1\in{\mathbb{R}}^{(n-r_k)\times n}$ contains the first $n-r_k$ rows of $W^{-1}$, and $W_1\in{\mathbb{R}}^{n\times(n-r_k)}$ is composed of the first $n-r_k$ columns of $W$. \end{theorem} \smallskip \begin{proof} The proof proceeds analogously to that of Theorem 4.2 in~\cite{pr20}. 
Replacing $J_k$ and $L$ with their GSVD and setting $\bm{y}=W^{-1}\bm{s}$, $\bm{z}^{(k)}=W^{-1}\left(\bm{x}^{(k)}-\overline{\bm{x}}\right)$, and $\bm{g}^{(k)}=U^T\bm{r}_k$,~\eqref{Llinmnls} can be rewritten as the following diagonal least-squares problem \begin{equation*} \begin{cases} \displaystyle \min_{\bm{y}\in{\mathbb{R}}^n}\|\Sigma_L(\alpha_k\bm{y}+\bm{z}^{(k)})\|^2 \\ \displaystyle \bm{y} \in \bigl\{ \arg \min_{\bm{y}\in{\mathbb{R}}^n} \|\Sigma_J\bm{y}+\bm{g}^{(k)}\|^2 \bigr\}. \end{cases} \end{equation*} When $m\geq n$, the diagonal linear system in the constraint is solved by a vector $\bm{y}$ with entries $$ y_i = \begin{cases} \displaystyle -\frac{g^{(k)}_i}{c_{i-n+r_k\strut}}, \quad & i=n-r_k+1,\ldots,p, \\ -g^{(k)\strut}_i, & i=p+1,\ldots,n. \end{cases} $$ The components $y_i$, for $i=1,\ldots,n-r_k$, can be determined by minimizing the norm \begin{equation}\label{sigmanorm} \begin{aligned} \|\Sigma_L(\alpha_k\bm{y}+\bm{z}^{(k)})\|^2 &= \sum_{i=1}^{n-r_k}\left(\alpha_k y_i+z_i^{(k)} \right)^2 \\ &\phantom{=} + \sum_{i=n-r_k+1}^{p} \left( -\alpha_k\frac{g^{(k)}_i}{\gamma_{i-n+r_k}} + s_{i-n+r_k} z_i^{(k)} \right)^2, \end{aligned} \end{equation} where $\gamma_i=\frac{c_i}{s_i}$ are the generalized singular values of the matrix pair $(J_k,L)$. The minimum of~\eqref{sigmanorm} is reached for $ y_i=-\frac{1}{\alpha_k}z^{(k)}_i =-\frac{1}{\alpha_k}\widehat{\bm{w}}^{i}(\bm{x}^{(k)}-\overline{\bm{x}}) $, $i=1,\ldots,n-r_k$, and the solution to~\eqref{Llinmnls}, that is, the next approximation to the solution of~\eqref{Lnonlinmnls}, is \begin{equation} \begin{aligned} \label{mwngnit} \bm{x}^{(k+1)} &= \bm{x}^{(k)} + \alpha_k W\bm{y} \\ &= \bm{x}^{(k)} -\sum_{i=1}^{n-r_k} z_i^{(k)}\bm{w}_i -\alpha_k\sum_{i=n-r_k+1}^{p} \frac{g^{(k)}_i}{c_{i-n+r_k}} \bm{w}_i -\alpha_k\sum_{i=p+1}^n g^{(k)}_i \bm{w}_i, \end{aligned} \end{equation} where the first summation in the right-hand side can be rewritten as $W_1\widehat{W}_1(\bm{x}^{(k)}-\overline{\bm{x}})$.
Applying the same procedure to~\eqref{Lmnls}, we obtain $$ \widetilde{\bm{x}}^{(k+1)} = \bm{x}^{(k)} -\alpha_k\sum_{i=n-r_k+1}^{p} \frac{g^{(k)}_i}{c_{i-n+r_k}} \bm{w}_i -\alpha_k\sum_{i=p+1}^n g^{(k)}_i \bm{w}_i, $$ from which~\eqref{mlnsol} follows. Since solving~\eqref{Llinmnls} for $m<n$ leads to a formula similar to~\eqref{mwngnit}, with $g^{(k)}_{i-n+m}$ in place of $g^{(k)}_i$, the validity of~\eqref{mlnsol} is confirmed. $\qquad$ \end{proof} As in the computation of the minimal-norm solution, the iteration based on \eqref{mlnsol} may fail to converge without a suitable relaxation parameter $\beta_k$ for the projection vector $\bm{t}^{(k)}=W_1\widehat{W}_1 (\bm{x}^{(k)}-\overline{\bm{x}})$. We adopt an iteration similar to \eqref{mngn2}, choosing $\beta_k$ by adapting Algorithms~\ref{algobeta} and~\ref{algoeta} to this setting. It is important to note that $\widetilde{{\mathcal{P}}}_{{\mathcal{N}}(J_k)}=W_1\widehat{W}_1$ is an oblique projector onto ${\mathcal{N}}(J_k)$. In this setting, the rank of the Jacobian is estimated at each step by applying the procedure described in Section~\ref{rankjac} to the diagonal elements $c_j^{(k)}$, $j=1,\ldots,q-d$, of the GSVD factor $\Sigma_J$ of $J_k$; see \eqref{csmat}. In this case, at each step, we compute the ratios \[ \rho_i^{(k)} = \frac{c_{i+1}^{(k)}}{c_i^{(k)}}, \qquad i=1,2,\ldots,q-d-1, \] where $q=\min(m,n)$. In practice, the GSVD routine returns the matrix $W^{-1}$, while the matrix $W$ is needed to compute both the vectors $\widetilde{\bm{s}}^{(k)}$ and $\bm{t}^{(k)}$.
To reduce the computational load, we compute at each iteration the LU factorization $PW^{-1}=LU$, and we use it to solve the linear system with two right-hand sides $$ W^{-1} \begin{bmatrix} \bm{t}^{(k)} & \widetilde{\bm{s}}^{(k)} \end{bmatrix} = \begin{bmatrix} \widehat{W}_1 (\bm{x}^{(k)}-\overline{\bm{x}}) & \bm{0}_{n-r} \\ \bm{0}_r & \widetilde{\bm{y}} \end{bmatrix}, $$ where $\widetilde{\bm{y}}\in{\mathbb{R}}^r$ contains the last $r$ components of the vector $\bm{y}$ appearing in \eqref{mwngnit}, and $\bm{0}_k$ denotes the zero vector of size $k$. \section{Test problems and numerical results}\label{examples} The MNGN2 method, defined by \eqref{mngn2}, was implemented in the Matlab programming language; the software is available from the authors. The developed functions implement all the variants of the MNGN2 algorithm, as well as the MNGN and CKB methods developed in \cite{pr20} and \cite{campbell}, respectively. In the following, the MNGN2 algorithm~\eqref{mngn2} will be denoted by different names, according to the particular implementation. In the method denoted by MNGN$2_\alpha$, we let $\beta_k=\alpha_k$ in \eqref{mngn2}, and determine $\alpha_k$ by the Armijo--Goldstein principle. Algorithm~\ref{algobeta} is denoted by MNGN$2_{\alpha\beta}$, when $\delta(\rho,\eta)=\eta\rho$, with a fixed value of $\eta$. The same algorithm with $\delta(\rho,\eta)=\rho^\eta$, and $\eta$ estimated by Algorithm~\ref{algoeta}, is labeled as MNGN$2_{\alpha\beta\delta}$. The algorithm~\eqref{camp} developed in~\cite{campbell} is denoted by CKB$_1$ when $\gamma_k=(0.5)^{k+1}$, and by CKB$_2$ when $\gamma_k=(0.5)^{2^k}$. The same algorithms are denoted by rCKB$_1$ and rCKB$_2$ when they are applied with the automatic estimation of the rank of the Jacobian, discussed in Section~\ref{rankjac}. 
To compare the methods and investigate their performance, we performed numerical experiments on various test problems that highlight particular difficulties in the computation of the minimal-norm solution. Example \ref{exam_robot} illustrates a situation where the MNGN method either fails or produces unacceptable results, while the other methods perform well; in Example \ref{exam_camp}, we investigate the dependence of the MNGN$2_{\alpha\beta}$ method on the choice of the parameter $\eta$; Example \ref{exam_2}, the first medium-size test problem we consider, shows the importance of the Jacobian rank estimation for the effectiveness of the algorithms; in Example \ref{exam_1}, the methods are compared in the solution of minimal-$L$-norm problems with different regularization matrices; finally, in Example \ref{exam_6}, we let the dimension of the problem vary and explore the dependence of the computed solution on the availability of a priori information in the form of a model profile. For each experiment, we repeated the computation 100 times, varying the starting point $\bm{x}^{(0)}$ by letting its components be uniformly distributed random numbers in $(-5,5)$. The model profile $\overline{\bm{x}}$ was set to the zero vector except in Example~\ref{exam_6}. We consider a numerical test a ``success'' if the algorithm converges according to condition \eqref{conver}, with stop tolerance $\tau=10^{-8}$ and maximum number of iterations $N_{\text{max}}=500$. A failure is not a serious problem, in general, because nonconvergence simply suggests trying a different starting vector. However, if this happens too often, the computational load increases. At the same time, a success of a method does not imply that it recovers the minimal-norm solution, as the convergence is only local.
To give an idea of the performance of the methods, we report, averaged over all the tests, both the number of iterations required and the norm of the converged solution $\|\widetilde{\bm{x}}\|$. We also report the number of successes. We note that the computational cost of each iteration is roughly the same for all the methods considered. Indeed, the additional complexity required by the MNGN2 algorithms consists of the estimation of the numerical rank $r_{\epsilon,k}$, of the residual increase $\delta(\rho,\eta)$, and of the projection parameter $\beta_k$. All these computations involve a small number of floating point operations; see also Remark~\ref{complexity}. \medskip \begin{example}\rm\label{exam_robot} In this first example we consider a nonlinear model that describes the behavior of a redundant parallel robot. The problem concerns inverse position kinematics, and is defined by the following function $F:{\mathbb{R}}^4 \rightarrow {\mathbb{R}}^2$ \begin{equation*} F(\bm{x})= \begin{bmatrix} (X-A\cos(x_1))^2+(Y-A\sin(x_1))^2-x_2^2 \\ (X-A\cos(x_3)-H)^2+(Y-A\sin(x_3))^2-x_4^2 \end{bmatrix}, \end{equation*} with the data vector $\bm{b}=\bm{0}$ in \eqref{nonlinL2}. The model describes the kinematics of a robotic arm moved by four motors, whose position is identified by the unknowns $\{x_i\}_{i=1}^4$, which must reach a point with given coordinates $(X,Y)$; $A$ and $H$ are parameters describing the system. In our simulation we assume $(X,Y)=(3,3)$, $A=2$, $H=10$.
The Jacobian matrix of $F$ is \begin{equation*} J(\bm{x})= \begin{bmatrix} \dfrac{\partial F_1}{\partial x_1} & \dfrac{\partial F_1}{\partial x_2} & 0 & 0 \\ 0 & 0 & \dfrac{\partial F_2}{\partial x_3} & \dfrac{\partial F_2}{\partial x_4} \end{bmatrix}, \end{equation*} with $$ \begin{aligned} \frac{\partial F_1}{\partial x_1} &= 2A(X-A\cos(x_1))\sin(x_1) -2A(Y-A\sin(x_1))\cos(x_1),\\ \frac{\partial F_2}{\partial x_3} &= 2A(X-A\cos(x_3)-H)\sin(x_3) -2A(Y-A\sin(x_3))\cos(x_3),\\ \frac{\partial F_1}{\partial x_2} &= -2x_2,\quad \frac{\partial F_2}{\partial x_4} = -2x_4. \end{aligned} $$ The results obtained are reported in Table~\ref{tabexrob}. We see that the MNGN$2_\alpha$ and CKB$_1$ methods recover, on average, solutions with smaller norms, but the former requires a large number of iterations. The MNGN$2_{\alpha\beta\delta}$ implementation, with automatic estimation of the projection step $\beta_k$, quickly converges but produces solutions with slightly larger norms. The CKB$_2$ method leads to solutions with a worse norm, showing that the performance of the method in \eqref{camp} is very sensitive to the choice of the sequence $\gamma_k$. The MNGN method from \cite{pr20} leads to solutions far from optimality, and fails in 70\% of the tests. This happens in most of the examples considered in this paper, so we will include it in only one further experiment. \begin{table}[ht]\centering \caption{Results for Example~\ref{exam_robot}.}\label{tabexrob} \begin{tabular}{lccc} \hline method & iterations & $\|\widetilde{\bm{x}}\|$ & \#success \\ \hline MNGN$2_\alpha$ & 239 & 8.7246 & 92 \\ MNGN$2_{\alpha\beta\delta}$ & 38 & 9.0621 & 96 \\ CKB$_1$ & 26 & 8.5515 & 100 \\ CKB$_2$ & 10 & 9.7344 & 100 \\ MNGN & 182 & 17.6329 & 30 \\ \hline \end{tabular} \end{table} \end{example} \medskip \begin{example}\rm\label{exam_camp} Here we consider a test problem introduced in~\cite{campbell}. 
Let $F:{\mathbb{R}}^3 \rightarrow {\mathbb{R}}$ be the nonlinear function defined by \[ F(\bm{x})= x_3 - (x_1-1)^2 - 2(x_2-2)^2 -3. \] The equation $F(\bm{x})=0$ represents an elliptic paraboloid in ${\mathbb{R}}^3$ with vertex $\bm{V}=(1,2,3)^T$. We remark that the minimal-norm solution is the point $$ \bm{x}^\dagger \approx (0.859754, 1.849178, 3.065164)^T, $$ and not the vector $\widehat{\bm{x}}$ reported in~\cite[Sec.~4.2]{campbell}. Indeed, $\|\bm{x}^\dagger\|\approx 3.681558$, whereas $\|\widehat{\bm{x}}\|\approx 3.706359$. The results obtained are reported in Table~\ref{tabexcamp}. The MNGN$2_{\alpha\beta}$ method is tested with two values of the parameter $\eta$ appearing in the residual increase $\delta(\rho,\eta)=\eta\rho$; see Algorithm~\ref{algobeta}. It can lead to accurate solutions only if the parameter is suitably chosen ($\eta=2$); otherwise ($\eta=8$), it exhibits a large number of failures. As in the previous example, the best results are produced by MNGN$2_{\alpha}$, and MNGN$2_{\alpha\beta\delta}$ reaches very similar solutions but is about 10 times faster. The CKB methods take a smaller number of iterations, but produce less accurate solutions. 
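The claim about $\bm{x}^\dagger$ can be checked directly: a minimal-norm point on the surface $F(\bm{x})=0$ must satisfy the constraint together with the Lagrange condition $\bm{x}=\lambda\nabla F(\bm{x})$ for some multiplier $\lambda$. A short sketch of this check:

```python
# Check that x_dagger given above is (to the reported precision) the
# minimal-norm point of the paraboloid F(x) = 0: it lies on the surface
# and satisfies the Lagrange condition x = lam * grad F(x).
import math

def F(x):
    return x[2] - (x[0] - 1)**2 - 2 * (x[1] - 2)**2 - 3

def gradF(x):
    return [-2 * (x[0] - 1), -4 * (x[1] - 2), 1.0]

x_dag = [0.859754, 1.849178, 3.065164]

residual = F(x_dag)                     # ~ 0 (surface constraint)
g = gradF(x_dag)
lam = x_dag[2] / g[2]                   # candidate multiplier
collinear = [x_dag[i] - lam * g[i] for i in range(3)]   # ~ (0, 0, 0)
norm = math.sqrt(sum(v * v for v in x_dag))             # ~ 3.681558
```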
\begin{table}[ht]\centering \caption{Results for Example~\ref{exam_camp}.}\label{tabexcamp} \begin{tabular}{lccc} \hline method & iterations & $\|\widetilde{\bm{x}}\|$ & \#success \\ \hline MNGN$2_{\alpha\beta}\,(\eta=8)$ & 174 & 3.6903 & 15 \\ MNGN$2_{\alpha\beta}\,(\eta=2)$ & 62 & 3.7120 & 100 \\ MNGN$2_\alpha$ & 330 & 3.6816 & 100 \\ MNGN$2_{\alpha\beta\delta}$ & 37 & 3.6832 & 100 \\ CKB$_1$ & 26 & 3.7343 & 100 \\ CKB$_2$ & 10 & 3.7561 & 100 \\ \hline \end{tabular} \end{table} \end{example} \medskip \begin{example}\rm\label{exam_2} Let $F:{\mathbb{R}}^n \rightarrow {\mathbb{R}}^m$ be the nonlinear function \begin{equation}\label{nonlinfun} F(\bm{x})=\left[ F_1(\bm{x}),F_2(\bm{x}),\ldots,F_m(\bm{x}) \right]^T, \qquad m\leq n, \end{equation} defined by \[ F_i(\bm{x}) = \frac{1}{2} S(\bm{x}) \left(x_i^2+1\right), \qquad i=1,\ldots,m, \] where \[ S(\bm{x}) = \sum_{j=1}^{n} \left(\frac{x_j-c_j}{a_j}\right)^2 -1 \] vanishes on the $n$-ellipsoid with center $\bm{c}=(c_1,\ldots,c_n)^T$ and semiaxes given by the components of the vector $\bm{a}=(a_1,\ldots,a_n)^T$. The locus of the solutions is the $n$-ellipsoid. Setting $y_i=x_i^2+1$, for $i=1,\ldots,m$, and $z_j=\frac{x_j-c_j}{a_j^2}$, for $j=1,\ldots,n$, the Jacobian matrix can be expressed as \[ J(\bm{x})=S(\bm{x}) D_{m,n}(\bm{x}) + \bm{y}\bm{z}^T, \] where $D_{m,n}(\bm{x})$ is an $m\times n$ diagonal matrix whose main diagonal consists of the first $m$ components of the vector $\bm{x}$. Indeed, \[ \frac{\partial F_i}{\partial x_k}= \begin{cases} x_i S(\bm{x}) + \dfrac{x_i-c_i}{a_i^2}\left(x_i^2+1\right), \qquad & k=i, \\ \dfrac{x_k-c_k}{a_k^2}\left(x_i^2+1\right), \qquad & k\neq i. \end{cases} \] When $S(\bm{x})=0$, $\rank(J(\bm{x}))=1$, so we expect the Jacobian to be rank-deficient in a neighborhood of the solution. If $\bm{a}=\bm{e}=(1,\ldots,1)^T$, the locus of the solutions is the $n$-sphere of unit radius centered at $\bm{c}$. 
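The rank collapse of $J$ on the solution set can be verified numerically. The sketch below uses the illustrative choice $n=4$, $m=3$, $\bm{a}=\bm{e}$, $\bm{c}=(2,0,0,0)^T$ (an assumption made here only for compactness); it builds $J$ from the factorization above, checks it against a central finite-difference Jacobian, and compares the rank on and off the sphere $S(\bm{x})=0$.

```python
# Verify the factorization J = S(x) D + y z^T and the rank deficiency
# on the solution set S(x) = 0, for n = 4, m = 3, a = e, c = (2,0,0,0).
import numpy as np

n, m = 4, 3
a = np.ones(n)
c = np.array([2.0, 0.0, 0.0, 0.0])

def S(x):
    return np.sum(((x - c) / a)**2) - 1.0

def Ffun(x):
    # F_i(x) = (1/2) S(x) (x_i^2 + 1), i = 1..m
    return 0.5 * S(x) * (x[:m]**2 + 1.0)

def J(x):
    # closed-form Jacobian: S(x) D + y z^T
    y = x[:m]**2 + 1.0
    z = (x - c) / a**2
    D = np.zeros((m, n))
    D[np.arange(m), np.arange(m)] = x[:m]
    return S(x) * D + np.outer(y, z)

x_on = np.array([1.0, 0.0, 0.0, 0.0])    # S(x_on) = 0: on the sphere
x_off = np.array([0.5, 0.5, 0.5, 0.5])   # S(x_off) = 2: off the sphere

# central finite-difference check of the closed form
h = 1e-6
Jfd = np.column_stack([(Ffun(x_off + h * e) - Ffun(x_off - h * e)) / (2 * h)
                       for e in np.eye(n)])

rank_on = np.linalg.matrix_rank(J(x_on))    # 1
rank_off = np.linalg.matrix_rank(J(x_off))  # 3 (full row rank)
```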
If $\bm{c}=2\bm{e}$, the minimal-norm solution is \[ \bm{x}^\dagger = \left( 2 - \frac{\sqrt{n}}{n}\right) \bm{e}, \] while if $\bm{c}=(2,0,\ldots,0)^T$ it is $\bm{x}^\dagger=(1,0,\ldots,0)^T$. Table~\ref{tabrank} displays the results for the latter case, with $m=8$ and $n=10$. These results underline the importance of estimating the rank of the Jacobian $J_k$. The implementations of the MNGN2 algorithm are more or less equivalent, recovering solutions with almost optimal norm; MNGN$2_\alpha$ fails in 17\% of the tests. The value of $\eta$ for MNGN$2_{\alpha\beta}$ is tailored to maximize the performance, which is not possible in practice, while it is automatically estimated for MNGN$2_{\alpha\beta\delta}$. The MNGN and CKB methods do not perform well, because of the rank deficiency of the Jacobian. We also implemented the rank estimation in the algorithms from \cite{campbell}; the corresponding methods are denoted by rCKB. rCKB$_2$ produces results comparable to the MNGN2 methods, confirming that a correct estimation of the rank is essential for the convergence, while rCKB$_1$ converges only in 32\% of the tests and produces solutions with large norms. Again, this shows that the sequence adopted for the step length in (r)CKB methods is critical for the effectiveness of the computation. \begin{table}[ht]\centering \caption{Results for Example~\ref{exam_2} with $m=8$, $n=10$, $\bm{a}=\bm{e}$, and $\bm{c}=(2,0,\ldots,0)^T$. 
In MNGN, CKB$_1$, and CKB$_2$, the rank is not estimated.}\label{tabrank} \begin{tabular}{lccc} \hline method & iterations & $\|\widetilde{\bm{x}}\|$ & \#success \\ \hline MNGN$2_\alpha$ & 209 & 1.0263 & 83 \\ MNGN$2_{\alpha\beta}\,(\eta=8)$ & 208 & 1.0449 & 99 \\ MNGN$2_{\alpha\beta\delta}$ & 206 & 1.0367 & 97 \\ MNGN & 70 & 2.1083 & 2 \\ CKB$_1$ & 216 & 2.2002 & 32 \\ CKB$_2$ & 20 & 2.1305 & 2 \\ rCKB$_1$ & 160 & 2.1088 & 32 \\ rCKB$_2$ & 197 & 1.0454 & 97 \\ \hline \end{tabular} \end{table} The norms of the solutions, whose average is displayed in Table~\ref{tabrank}, are reported in the boxplot in the left pane of Figure~\ref{ex2boxnorm}. In each box, the red mark is the median, the edges of the blue box are the 25th and 75th percentiles, and the black whiskers extend to the most extreme data points not considered outliers, which are plotted as red crosses. \begin{figure} \centering \includegraphics[width=.47\textwidth]{ex2boxnorm} \hspace{0.2cm} \includegraphics[width=.49\textwidth]{ex1boxnorm} \caption{Boxplot of the norms of the solutions for Examples~\ref{exam_2} (left) and~\ref{exam_1} (right). The series, labeled by the method names, are displayed in the same order as in Tables~\ref{tabrank} and~\ref{tabex1.1}, respectively.} \label{ex2boxnorm} \end{figure} \end{example} \medskip \begin{example}\rm\label{exam_1} Let $F$ be a nonlinear function of the form \eqref{nonlinfun}, with \begin{equation}\label{ex3} F_i(\bm{x})= S(\bm{x}) \left(x_i-c_i\right), \qquad i=1,\ldots,m, \end{equation} and $S(\bm{x})$ defined as in the previous example. The first order derivatives of $F_i(\bm{x})$ are \[ \frac{\partial F_i}{\partial x_k}= \begin{cases} \dfrac{2}{a_i^2}(x_i-c_i)^2 + S(\bm{x}), \qquad &k=i, \\ \dfrac{2^{\strut}}{a_k^2}(x_k-c_k)(x_i-c_i), \qquad &k\neq i. 
\end{cases} \] Setting $y_i=x_i-c_i$, for $i=1,\ldots,m$, and $z_j=\frac{x_j-c_j}{a_j^2}$, for $j=1,\ldots,n$, the Jacobian matrix can be represented as \[ J(\bm{x})=S(\bm{x}) I_{m\times n} + 2\bm{y}\bm{z}^T, \] where $I_{m\times n}$ consists of the first $m$ rows of an identity matrix of size $n$. The Jacobian is thus a diagonal-plus-rank-one matrix. This structure may be useful to reduce complexity when solving large-scale problems. When $S(\bm{x})=0$, the matrix $J(\bm{x})$ has rank 1. Indeed, in this case, the compact SVD of the Jacobian is \[ J(\bm{x})=\frac{\bm{y}}{\|\bm{y}\|} (2\|\bm{y}\|\|\bm{z}\|) \frac{\bm{z}^T}{\|\bm{z}\|}, \] so that the only non-zero singular value is $2\|\bm{y}\|\|\bm{z}\|$. As in the preceding example, we expect the Jacobian to be rank-deficient in a neighborhood of a solution. \begin{figure} \centering \includegraphics[width=\textwidth]{ex3_23} \caption{Solution of problem~\eqref{ex3} (Example~\ref{exam_1}) for $m=2$ and $n=3$, with $\bm{a}=(1,1,1)^T$, $\bm{c}=(2,0,0)^T$, and $\bm{x}^{(0)}=(0,3,3)^T$. The locus of the solutions is the sphere and the line intersection of the two planes. The blue dots are the iterations of the MNGN$2_{\alpha\beta\delta}$ method, and the red ones correspond to the rCKB$_1$ method. The black circle encompasses the minimal-norm solution.} \label{ex3fig} \end{figure} The locus of the solutions is the union of the $n$-ellipsoid and the intersection between the planes $x_i=c_i$, $i=1,\ldots,m$. If $\bm{a}=\bm{e}$ and $\bm{c}=2\bm{e}$, the minimal-norm solution $\bm{x}^\dagger$ depends on the dimensions $m$ and $n$: if $m<n-\sqrt{n}+\frac{1}{4}$, then it is \[ \bm{x}^\dagger=(\underbrace{2,2,\ldots,2}_m,\underbrace{0,\ldots,0}_{n-m})^T, \] otherwise, it is \begin{equation}\label{smoothsol} \bm{x}^\dagger = \left( 2 - \frac{\sqrt{n}}{n}\right) \bm{e}. \end{equation} If $\bm{c}=(2,0,\ldots,0)^T$, it is $\bm{x}^\dagger=(1,0,\ldots,0)^T$. 
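The dimension threshold $m<n-\sqrt{n}+\frac{1}{4}$ follows from comparing the norms of the two candidate solutions: the "corner" point $(2,\ldots,2,0,\ldots,0)^T$ has norm $2\sqrt{m}$, the constant point \eqref{smoothsol} has norm $2\sqrt{n}-1$, and $2\sqrt{m}<2\sqrt{n}-1$ is equivalent to $m<n-\sqrt{n}+\frac{1}{4}$. A minimal check of this comparison:

```python
# Compare the norms of the two candidate minimal-norm solutions for
# a = e, c = 2e: the corner point (norm 2*sqrt(m)) versus the constant
# point (norm 2*sqrt(n) - 1).
import math

def minimal_norm(m, n):
    corner = 2.0 * math.sqrt(m)        # ||(2,...,2,0,...,0)||
    smooth = 2.0 * math.sqrt(n) - 1.0  # ||(2 - 1/sqrt(n)) e||
    return min(corner, smooth), (corner < smooth)

# m = 8, n = 10: the threshold is n - sqrt(n) + 1/4 ~ 7.0877 < 8,
# so the constant (smooth) solution wins, with norm 2*sqrt(10) - 1.
val, is_corner = minimal_norm(8, 10)
```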
The case $m=2$, $n=3$, is displayed in Figure~\ref{ex3fig}, together with the iterations of the algorithms MNGN$2_{\alpha\beta\delta}$ and rCKB$_1$. In this test, the latter algorithm converges to a solution of non-minimal norm. Table~\ref{tabex1.1} illustrates the situation where $\bm{a}=\bm{e}$, $\bm{c}=(2,0,\ldots,0)^T$, $m=8$ and $n=10$. The corresponding boxplot of the norms of the solutions is displayed in the right pane of Figure~\ref{ex2boxnorm}. The MNGN$2_{\alpha\beta\delta}$ method is the only one which recovers the correct solution; MNGN$2_{\alpha}$ gets close to it, but with a very small number of successes. \begin{table}[ht]\centering \caption{Results for Example~\ref{exam_1} with $m=8$, $n=10$, $\bm{a}=\bm{e}$, and $\bm{c}=(2,0,\ldots,0)^T$.}\label{tabex1.1} \begin{tabular}{lccc} \hline method & iterations & $\|\widetilde{\bm{x}}\|$ & \#success \\ \hline MNGN$2_\alpha$ & 215 & 1.5196 & 12 \\ MNGN$2_{\alpha\beta}\,(\eta=8)$ & 11 & 1.9911 & 100 \\ MNGN$2_{\alpha\beta\delta}$ & 47 & 1.0100 & 100 \\ rCKB$_1$ & 27 & 2.0346 & 100 \\ rCKB$_2$ & 11 & 2.0531 & 100 \\ \hline \end{tabular} \end{table} Table~\ref{tabex1} reports the results obtained for $\bm{a}=\bm{e}$ and $\bm{c}=2\bm{e}$. In this case, the solution is \eqref{smoothsol}. We applied the algorithms to both the solution of the minimal-norm problem, and the computation of the minimal-$L$-norm solution with $L=D_2$, i.e., the discrete approximation of the second derivative \eqref{d1d2}. Since the solution lies exactly in the null space of $L$, we expect the minimal-$L$-norm formulation to perform well. No algorithm is accurate when $L=I$, as the minimal norm is $2\sqrt{n}-1\approx 5.3246$. When $L=D_2$, the two MNGN2 implementations are superior to the rCKB methods, as $\|L\bm{x}^\dagger\|=0$. As in the previous example, MNGN$2_{\alpha}$ exhibits a large number of failures. 
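The advantage of $L=D_2$ here has a simple explanation: the constant solution \eqref{smoothsol} lies in the null space of the second-difference operator. A quick sketch, assuming the standard $(n-2)\times n$ stencil $[1,-2,1]$ for $D_2$ (the precise definition \eqref{d1d2} is given earlier in the paper):

```python
# The constant minimizer (smoothsol) is annihilated by the second-difference
# regularization matrix D_2, here assumed to apply the stencil [1, -2, 1].
import math

def D2_apply(x):
    # (n-2) x n second-difference operator applied to a vector
    return [x[i] - 2 * x[i + 1] + x[i + 2] for i in range(len(x) - 2)]

n = 10
x_dag = [2.0 - 1.0 / math.sqrt(n)] * n   # the constant solution (smoothsol)
Lx = D2_apply(x_dag)                      # all components vanish
```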
\begin{table}[ht]\centering \caption{Results for Example~\ref{exam_1} with $m=8$, $n=10$, $\bm{a}=\bm{e}$, and $\bm{c}=2\bm{e}$.}\label{tabex1} \begin{tabular}{llccc} \hline & method & iterations & $\|L\widetilde{\bm{x}}\|$ & \#success \\ \hline $L=I$ & MNGN$2_\alpha$ & 12 & 5.6569 & 23 \\ & MNGN$2_{\alpha\beta\delta}$ & 45 & 5.4529 & 100 \\ & rCKB$_1$ & 26 & 5.7274 & 100 \\ & rCKB$_2$ & 11 & 5.7520 & 100 \\ \hline $L=D_2$ & MNGN$2_\alpha$ & 20 & 0.0500 & 26 \\ & MNGN$2_{\alpha\beta\delta}$ & 17 & 0.0765 & 100 \\ & rCKB$_1$ & 27 & 2.1694 & 100 \\ & rCKB$_2$ & 17 & 2.2761 & 100 \\ \hline \end{tabular} \end{table} Since this example is interesting in itself as a test problem, we report some further comments on it. If $m=n$, the locus of the solutions is the union of the $n$-ellipsoid and the point $\bm{x}=\bm{c}$. The spectrum of $J(\bm{x})$ is \[ \sigma(J(\bm{x})) = \left\{ S(\bm{x})+2\bm{y}^T\bm{z}, S(\bm{x}), \ldots, S(\bm{x}) \right\}, \] where the eigenvalue $S(\bm{x})$ has algebraic multiplicity $n-1$. The Jacobian matrix is invertible if and only if $S(\bm{x})\neq 0$. If this condition is met, the inverse is obtained by the Sherman--Morrison formula \[ J(\bm{x})^{-1}= \frac{1}{S(\bm{x})}I_n - \frac{2}{S(\bm{x})(S(\bm{x})+2\bm{z}^T\bm{y})}\bm{y}\bm{z}^T. \] \end{example} \medskip \begin{example}\rm\label{exam_6} Let $F$ be the nonlinear function~\eqref{nonlinfun} with components \begin{equation}\label{ex2} F_i(\bm{x})= \begin{cases} S(\bm{x}), \qquad &i=1, \\ x_{i-1}(x_i-c_i), \qquad &i=2,\ldots,m, \end{cases} \end{equation} and $S(\bm{x})$ defined as above. The first order partial derivatives of $F_i(\bm{x})$ are \[ \frac{\partial F_i}{\partial x_k}= \begin{cases} \dfrac{2}{a_k^2}(x_k-c_k), \quad & i=1, \ k=1,\ldots,n, \\ x_i-c_i, \quad & i=2,\ldots,m, \ k=i-1, \\ x_{i-1}, \quad & i=k=2,\ldots,m, \\ 0, & \text{otherwise}. 
\end{cases} \] Setting $z_j=2\frac{x_j-c_j}{a_j^2}$ and $y_j=x_j-c_j$, for $j=1,\ldots,n$, the Jacobian matrix of $F$ is \begin{equation}\label{jac2} J(\bm{x})= \begin{bmatrix} z_1 & z_2 & z_3 & \cdots & z_{m-1}& z_m & \cdots & z_n \\ y_2 & x_1 & & & & & & \\ & y_3 & x_2 & & & & & \\ & & \ddots & \ddots & & & & \\ & & & \ddots & \ddots & & & \\ & & & & y_m & x_{m-1} & & \end{bmatrix}. \end{equation} The locus of the solutions is the intersection of the hypersurface defined by $S(\bm{x})=0$ with the pairs of planes $x_{i-1}=0$, $x_i-c_i=0$, $i=2,\ldots,m$. \begin{figure} \centering \includegraphics[width=\textwidth]{ex2_23} \caption{Solution of problem~\eqref{ex2} (Example~\ref{exam_6}) for $m=2$ and $n=3$, with $\bm{a}=(1,1,1)^T$, $\bm{c}=(2,0,0)^T$, and $\bm{x}^{(0)}=(\frac{1}{2},3,3)^T$. The solutions are in the intersection between the sphere and the union of the two planes. The blue dots are the iterations of the MNGN$2_{\alpha\beta\delta}$ method, and the red ones correspond to the rCKB$_1$ method. The black circle encompasses the minimal-norm solution.} \label{ex2fig} \end{figure} If $\bm{a}=\bm{e}=(1,\ldots,1)^T$ and $\bm{c}=2\bm{e}$, the minimal-norm solution is \begin{equation}\label{sol1ex2} \bm{x}^\dagger = \left( \xi_{n,m}, \underbrace{2, \ldots, 2}_{m-1}, \underbrace{\xi_{n,m},\ldots,\xi_{n,m}}_{n-m} \right)^T, \end{equation} with $\xi_{n,m}=2-(n-m+1)^{-1/2}$, while if $\bm{c}=(2,0,\ldots,0)^T$ it is $\bm{x}^\dagger=(1,0,\ldots,0)^T$. It is immediate to see that in the latter situation the Jacobian \eqref{jac2} is rank-deficient at $\bm{x}^\dagger$. This case is illustrated in Figure~\ref{ex2fig}, where the iterations of the MNGN$2_{\alpha\beta\delta}$ and the rCKB$_1$ methods are also reported. The iterations performed are 20 and 24, respectively; the computed solutions essentially coincide. 
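The closed-form solution \eqref{sol1ex2} can be verified directly against the definition \eqref{ex2}; the sketch below does so for $m=8$, $n=10$, $\bm{a}=\bm{e}$, $\bm{c}=2\bm{e}$:

```python
# Check that the closed-form minimal-norm solution (sol1ex2) solves
# F(x) = 0 for this example with a = e, c = 2e, m = 8, n = 10.
import math

m, n = 8, 10
c = [2.0] * n

def S(x):
    return sum((x[j] - c[j])**2 for j in range(n)) - 1.0

def F(x):
    # F_1 = S(x); F_i = x_{i-1} (x_i - c_i), i = 2..m (0-based below)
    return [S(x)] + [x[i - 1] * (x[i] - c[i]) for i in range(1, m)]

xi = 2.0 - 1.0 / math.sqrt(n - m + 1)        # xi_{n,m} = 2 - 3^{-1/2}
x_dag = [xi] + [2.0] * (m - 1) + [xi] * (n - m)

residual = F(x_dag)                           # all components vanish
norm = math.sqrt(sum(v * v for v in x_dag))   # ~ 5.8371
```

The components equal to $2$ kill the bilinear equations, and the $n-m+1$ components equal to $\xi_{n,m}$ contribute $(n-m+1)(\xi_{n,m}-2)^2=1$ to $S$, so the surface constraint holds as well.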
Table~\ref{tabn} displays the results obtained for the same parameter vectors as in Figure~\ref{ex2fig}, when the size of the problem varies, i.e., for $(m,n)=(8k,10k)$, $k=1,2,3$. The MNGN2 algorithms behave almost optimally, while the rCKB methods lead to solutions with larger norm. The table shows that the performance is not significantly affected by the size of the problem. This example suggests that large-scale problems could be tackled by the methods discussed, provided a suitable algorithm for the solution of the linearized problem is adopted to reduce the computational complexity of each step. This aspect will be the object of future research. \begin{table}[ht]\centering \caption{Results for Example~\ref{exam_6} with different size $(m,n)$, $\bm{a}=\bm{e}$, and $\bm{c}=(2,0,\ldots,0)^T$.}\label{tabn} \begin{tabular}{llccc} \hline $(m,n)$ & method & iterations & $\|\widetilde{\bm{x}}\|$ & \#success \\ \hline $(8,10)$ & MNGN$2_\alpha$ & 167 & 1.0000 & 48 \\ & MNGN$2_{\alpha\beta}\,(\eta=8)$ & 24 & 1.0508 & 100 \\ & MNGN$2_{\alpha\beta\delta}$ & 37 & 1.0659 & 100 \\ & rCKB$_1$ & 44 & 1.4867 & 100 \\ & rCKB$_2$ & 22 & 1.4776 & 100 \\ \hline $(16,20)$ & MNGN$2_\alpha$ & 144 & 1.0000 & 36 \\ & MNGN$2_{\alpha\beta}\,(\eta=8)$ & 29 & 1.0170 & 99 \\ & MNGN$2_{\alpha\beta\delta}$ & 34 & 1.0518 & 99 \\ & rCKB$_1$ & 54 & 1.4343 & 100 \\ & rCKB$_2$ & 53 & 1.5269 & 90 \\ \hline $(24,30)$ & MNGN$2_\alpha$ & 133 & 1.0000 & 34 \\ & MNGN$2_{\alpha\beta}\,(\eta=8)$ & 34 & 1.0154 & 99 \\ & MNGN$2_{\alpha\beta\delta}$ & 32 & 1.0191 & 96 \\ & rCKB$_1$ & 43 & 1.4446 & 100 \\ & rCKB$_2$ & 52 & 1.4529 & 70 \\ \hline \end{tabular} \end{table} Table~\ref{tabxbar} investigates the effectiveness of choosing an appropriate model profile $\overline{\bm{x}}$ when applying the MNGN2 algorithms. We consider the case $\bm{a}=\bm{e}$, $\bm{c}=2\bm{e}$, $m=8$, and $n=10$. 
The minimal-norm solution $\bm{x}^\dagger$ is \eqref{sol1ex2}, with $\xi_{10,8}\simeq 1.4226$ and $\|\bm{x}^\dagger\|\simeq 5.8371$. When $\overline{\bm{x}}=\bm{0}$, the solutions produced by the considered variants of the method are almost optimal, but the number of iterations is quite large, as is the number of failures for MNGN$2_{\alpha\beta}$ (with a suitably chosen $\eta$) and MNGN$2_{\alpha\beta\delta}$. The model profile $\overline{\bm{x}}=2\bm{e}$ reduces the number of iterations and leads to almost 100\% of successes, but the average norm of the solutions is slightly larger than the optimal one. Choosing $\overline{\bm{x}} = 1.7\bm{e}$, a value which is roughly halfway between 2 and $\xi_{10,8}$, the two distinct component values of $\bm{x}^\dagger$, restores the optimality of the results. This confirms that, when a priori information is available, an accurate choice of the model profile enhances the performance of the algorithms. \begin{table}[ht]\centering \caption{Results for Example~\ref{exam_6} with $m=8$, $n=10$, $\bm{a}=\bm{e}$, and $\bm{c}=2\bm{e}$.}\label{tabxbar} \begin{tabular}{llccc} \hline & method & iterations & $\|\widetilde{\bm{x}}\|$ & \#success \\ \hline $\overline{\bm{x}} = \bm{0}$ & MNGN$2_\alpha$ & 138 & 5.8371 & 100 \\ & MNGN$2_{\alpha\beta}\,(\eta=8)$ & 175 & 5.8374 & 38 \\ & MNGN$2_{\alpha\beta\delta}$ & 94 & 5.8988 & 67 \\ \hline $\overline{\bm{x}} = 2\bm{e}$ & MNGN$2_\alpha$ & 37 & 6.1141 & 99 \\ & MNGN$2_{\alpha\beta}\,(\eta=8)$ & 34 & 6.1144 & 98 \\ & MNGN$2_{\alpha\beta\delta}$ & 34 & 6.1144 & 98 \\ \hline $\overline{\bm{x}} = 1.7\bm{e}$ & MNGN$2_\alpha$ & 54 & 5.8371 & 100 \\ & MNGN$2_{\alpha\beta}\,(\eta=8)$ & 34 & 5.8394 & 99 \\ & MNGN$2_{\alpha\beta\delta}$ & 40 & 5.8789 & 99 \\ \hline \end{tabular} \end{table} \end{example} \section{Conclusions}\label{concl} This paper explores the computation of the minimal-($L$-)norm solution of nonlinear least-squares problems, and the reasons for the occasional lack of convergence of 
Gauss--Newton methods. We propose an automatic procedure to estimate the rank of the Jacobian along the iterations, and the introduction of two different relaxation parameters that improve the efficiency of the iterative method. The first parameter is determined by applying the Armijo--Goldstein principle, while three techniques are investigated to estimate the second one. In numerical experiments performed on various test problems, the new methods prove to be very effective compared to other approaches based on a single damping parameter. In particular, the variant which automatically estimates the projection parameter gives satisfactory results in all the examples. \section*{Acknowledgements} The authors are indebted to two anonymous reviewers, whose remarks were essential for improving both the content and the presentation of this paper. We thank Maurizio Ruggiu for suggesting the problem reported in Example~\ref{exam_robot}. The work of the authors was partially supported by the Regione Autonoma della Sardegna research project ``Algorithms and Models for Imaging Science [AMIS]'' (RASSR57257, intervento finanziato con risorse FSC 2014-2020 - Patto per lo Sviluppo della Regione Sardegna), and the INdAM-GNCS research project ``Tecniche numeriche per l'analisi delle reti complesse e lo studio dei problemi inversi''. Federica Pes gratefully acknowledges CRS4 (Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna) for the financial support of her Ph.D. scholarship. \bibliographystyle{siam}
https://arxiv.org/abs/1205.2128
Anisotropic regularity and optimal rates of convergence for the Finite Element Method on three dimensional polyhedral domains
We consider the model Poisson problem $-\Delta u = f$ in $\Omega$, $u = g$ on $\pa \Omega$, where $ \Omega $ is a bounded polyhedral domain in $\RR^n$. The objective of the paper is twofold. The first objective is to review the well posedness and the regularity of our model problem using appropriate weighted spaces for the data and the solution. We use these results to derive the domain of the Laplace operator with zero boundary conditions on a concave domain, which seems not to have been fully investigated before. We also mention some extensions of our results to interface problems for the Elasticity equation. The second objective is to illustrate how anisotropic weighted regularity results for the Laplace operator in 3D are used in designing efficient finite element discretizations of elliptic boundary value problems, with the focus on the efficient discretization of the Poisson problem on polyhedral domains in $\RR^3$, following {\em Numer. Funct. Anal. Optim.}, 28(7-8):775--824, 2007. The anisotropic weighted regularity results described and used in the second part of the paper are a consequence of the well-posedness results in (isotropically) weighted Sobolev spaces described in the first part of the paper. The paper is based on the talk by the last named author at the Congress of Romanian Mathematicians, Brasov 2011, and is largely a survey paper.
\section*{Introduction} Let $\Omega \subset \RR^n$ be an open, bounded set. Consider the boundary value problem \begin{equation}\label{eq.BVP} \begin{cases} \; \Delta u = f & \text{ in }\Omega \\ u\vert_{\pa \Omega} = g & \text{ on } \pa \Omega, \end{cases} \end{equation} defined on a bounded domain $\Omega \subset \RR^n$, where $\Delta$ is the Laplacian $\Delta = \sum_{i=1}^n \pa_i^2$. When $\pa \Omega$ is smooth, it is well known that this Poisson problem has a unique solution $u \in H^{m+1}(\Omega)$ for any $f \in H^{m-1}(\Omega)$ and $g \in H^{m+1/2}(\pa \Omega)$ \cite{Evans, LionsMagenes1, Taylor1}. Moreover, $u$ depends continuously on $f$ and $g$. This result is the {\em classical well-posedness of the Poisson problem} on smooth domains. On the other hand, when $\Omega$ is not smooth, it is also well known \cite{CDSchwab, Dauge, JerisonKenig, Kondratiev67, KMRossmann} that there exists $s = s_{\Omega}$ such that $u \in H^{s}(\Omega)$ for any $s < s_{\Omega}$, but $u \not\in H^{s_{\Omega}}(\Omega)$ in general, even if $f$ and $g$ are smooth functions defined in a neighborhood of $\Omega$. For instance, if $\Omega$ is a polygonal domain in two dimensions, then $s_{\Omega} = 1+\pi/\alpha_{MAX}$, where $\alpha_{MAX}$ is the largest interior angle of $\Omega$ \cite{Kondratiev67}. See also Wahlbin's paper \cite{Wahlbin84}. In view of applications to the Finite Element Method, we restrict our attention to domains {\em with polyhedral structure}. These are natural non-convex generalizations of classical $n$-dimensional polyhedra that allow for curved boundaries, cracks (i.e., internal faces), and non-smooth boundary points touching a smooth part of the boundary. We refer to \cite{BMNZ} for a precise formulation. A very large number of papers have been devoted to boundary value problems on non-smooth domains. 
While it is impossible to mention them all, let us at least mention the papers of Arnold, Scott, and Vogelius \cite{ASV}, Babuska and Guo \cite{BabuGuo2}, B\u{a}cu\c{t}\u{a}, Bramble, and Xu \cite{bacutaBX}, Jerison and Kenig \cite{JerisonKenig}, Kondratiev \cite{Kondratiev67}, Kozlov, Mazya, and Rossmann \cite{KMRossmann}, Mitrea and Taylor \cite{MMTaylor}, Rossmann \cite{Rossmann}, Verchota \cite{Verchota}, and many others. Other results specific to numerical methods for polyhedral domains are contained in the papers of Apel and Dobrowolski \cite{ApelDobro}, Costabel, Dauge, and Nicaise \cite{CDN}, Costabel, Dauge, and Schwab \cite{CDSchwab}, Dauge \cite{Dauge}, Demkowicz, Monk, Schwab, and Vardapetyan \cite{Monk00}, Elschner \cite{Elschner1}, Guo and Schwab \cite{GuoSchwab}, and many others. Further results and references can be found in the aforementioned papers, as well as in the monographs of Grisvard \cite{Grisvard2}, as well as in the recent book \cite{NP}. Regularity for polyhedral domains is useful in designing fast solvers for numerical methods \cite{ApelMG, BrannickHengguang}. See also \cite{Apel99, ApelDobro, BBP3, BlumDobro, BrennerSung, LeeHengguang, HNS, Felli1, Felli2, Hengg2D, Manteuffel2006} for more applications of these techniques to other types of Partial Differential Equations and numerical methods. In this paper, we shall review the results from \cite{BMNZ, 3D1, 3D2, HMN}, and \cite{MNelast}, which make use of the natural stratified space structure on $\Omega$. This leads, by successive conformal changes of the metric, to a metric for which the smooth part of $\overline{\Omega}$ becomes a smooth manifold with boundary whose double is complete. The resulting Sobolev spaces defined by the new metric will lead to spaces on which the Poisson problem is well-posed. For simplicity, we restrict our attention to the Laplace operator in \eqref{eq.BVP}. 
However, all theoretical results presented here extend to scalar, strongly elliptic, linear operators $P$ with sufficiently regular coefficients, and even to elliptic systems, such as the system of anisotropic elasticity \cite{MNelast}. Furthermore, we can also treat transmission problems, for which the coefficients of $P$ are allowed to jump across piecewise-smooth hypersurfaces, representing interfaces, under some additional conditions \cite{BMNZ, HMN}. We briefly discuss these extensions in Subsection \ref{ssec.extend}. For the discretization on polyhedral domains, we build discrete spaces $S_k \subset H^1_0(\Omega)$ and Galerkin finite element projections $u_k \in S_k$ that approximate the solution $u$ of Equation \eqref{eq.BVP} for arbitrary $f \in H^{m-1}(\Omega)$. We prove that, by using certain spaces of continuous, piecewise polynomials of degree $m$, the sequence $S_k$ achieves {\em quasi-optimal rates of convergence}. More precisely, we prove the existence of a constant $C>0$, independent of $k$ and $f$, such that \begin{equation}\label{eq.optimal.rate} \|u - u_k\|_{H^1(\Omega)} \le C \operatorname{dim}(S_k)^{-m/n} \|f\|_{H^{m-1}(\Omega)}, \quad u_k \in S_k, \end{equation} where $n=2$ or $n=3$ is the dimension of our polyhedral domain. The contents of the paper are as follows. In the first section we review well-posedness results in weighted Sobolev spaces on polyhedral domains. These weighted spaces are sometimes called the Babu\v{s}ka-Kondratiev\ spaces. These results are not sufficient for our applications to the Finite Element Method in three dimensions, so in the second section we review some additional {\em anisotropic} regularity results. These results are used in the third section to construct a sequence of meshes that yields $h^m$--quasi-optimal rates of convergence in three dimensions. Finally, in the last section we discuss some of the main ingredients that enter in the proof and which are of independent interest. 
These include the Hardy-Poincar\'e inequality (which guarantees the coercivity of our problem) and a description of the weighted Sobolev (or Babu\v{s}ka-Kondratiev) spaces $\mathcal K_{a}^m(\Omega)$, which are the natural spaces for our well-posedness results, as the usual Sobolev spaces for a modified metric on $\Omega$, which nevertheless is conformally equivalent to the old one. \vspace{.1in} \noindent{\bf Acknowledgements:} The authors would like to thank the organizers of the International Congress of Romanian mathematicians, Brasov 2011, where these results were presented. A. M. would also like to acknowledge the support of the Mathematics Department at Cornell University, where she is currently a Michler Fellow. \section{Well posedness in isotropic weighted Sobolev spaces} \label{sec.one} Using the standard notation for partial derivatives, namely $\pa_j = \frac{\pa}{\pa x_j}$ and $\pa^\alpha = \pa_1^{\alpha_1}\ldots \pa_n^{\alpha_n}$, for any multi-index $\alpha = (\alpha_1, \ldots, \alpha_n) \in \mathbb Z_+^n$, we denote the usual Sobolev spaces on an open set $V \subset \RR^n$ by \begin{equation*} H^m(V) =\{u : V \to \mathbb{C},\ \pa^{\alpha}u \in L^2(V),\ |\alpha| \le m\}. \end{equation*} As mentioned in the introduction, the solution of our model Poisson problem \eqref{eq.BVP} has only limited regularity in the spaces $H^m(\Omega)$. The situation changes for the better if one considers {\em weighted Sobolev spaces}, though. To define the weighted analogues of these spaces, we need to introduce the notion of {\em singular boundary points} of the domain $\Omega \subset \RR^n$. \subsection{Weighted Sobolev spaces} Let $\partial_{\operatorname{sing}} \Omega \subset \pa \Omega$ be the set of singular (or non-smooth) boundary points of $\Omega$, that is, the set of points $p \in \pa \Omega$ such that $\pa \Omega$ is not smooth in a neighborhood of $p$. 
If mixed boundary conditions are considered, the set of singular points also includes the points where the boundary conditions change. If, furthermore, interfaces are considered, the set of singular points contains the set of singular points of the interface, as well as the set of points where the interface touches the boundary. We will denote by $r_{\Omega}(x)$ the distance from a point $x \in \Omega$ to the set $\partial_{\operatorname{sing}} \Omega$ and agree to take $r_{\Omega} = 1$ if there are no such points, i.e., if $\pa \Omega$ is smooth. For $\mu \in \mathbb Z_{+}$ and $a \in \RR$, we define the weighted Sobolev spaces \begin{equation}\label{eq.def.wSsp0} \Kond{\mu}{a}(\Omega) = \{u \in L^2_{\operatorname{loc}}(\Omega),\, r_{\Omega}^{|\alpha|-a} \pa^\alpha u \in L^2(\Omega), \mbox{ for all } |\alpha| \le \mu\}, \end{equation} which we endow with the induced Hilbert space norm. We note that, for example, for $n=3$ and $\Omega$ a polyhedral domain in $\RR^3$, $r_{\Omega}(x)$ is the distance to the skeleton formed by the union of the closed edges of $\partial \Omega$. Recently, general spaces of this kind were studied by H. Amann \cite{HAmann1, HAmann2}. Similar weighted Sobolev spaces are associated to the faces of $\Omega$. By a {\em face}, we mean a connected component of the boundary $\pa\Omega$ with the set of singular points removed. For example for $n=3$, we define \begin{equation*} \Kond{m}{a}(\pa \Omega) = \{(u_F),\, r_{\Omega}^{|\alpha|-a}\pa^\alpha u_F \in L^2(F)\,\}, \end{equation*} where $|\alpha| \le m$ and $F$ ranges through the set of faces of ${\pa \Omega}$. For $s \in \RR_+$, we define the space $\Kond{s}{a}(\pa \Omega)$ by standard interpolation. \subsection{Well-posedness for the Poisson problem on $n$-dimensional polyhedral domains} The following result is proved in \cite{BMNZ}. For simplicity, we shall assume that $\Omega$ has no cracks and that there are no vertices that touch the boundary. 
(That is, we shall consider only domains $\Omega$ that coincide with the interior of their closure $\overline{\Omega}$.) \begin{theorem}\label{theorem.main} Let ${\Omega \subset \RR^n}$ be a bounded, curvilinear polyhedral domain and $m \in \mathbb Z_+$. Then there exists $\eta_\Omega > 0$ such that $\tilde{\Delta}(u) = (\Delta u, u \vert_{\pa \Omega})$ defines an isomorphism \begin{equation*} \tilde{\Delta} : \Kond{m + 1}{a+1}(\Omega) \to \Kond{m - 1}{a - 1}(\Omega) \oplus \Kond{m + 1/2}{a+1/2}(\pa \Omega), \end{equation*} for all $|a|< \eta_{\Omega}$. If $m = 0$, the solution $u$ corresponding to the data $(f, 0) \in \Kond{- 1}{a - 1}(\Omega) \oplus \Kond{ 1/2}{a+1/2}(\pa \Omega)$ is also the solution of the associated variational problem. \end{theorem} This theorem amounts to the well-posedness of the boundary value problem \eqref{eq.BVP} on $n$-dimensional polyhedral domains. For $n=2$ (in which case $\Omega$ is a polygonal domain), this result is due to Kondratiev \cite{Kondratiev67}, in which case $\eta_{\Omega} = \frac{\pi}{\alpha_{MAX}}$, where $\alpha_{MAX}$ is the measure in radians of the maximum angle of $\Omega$. For $n=3$, this result was proved in \cite{3D1}. For later applications, we shall need the following result. \begin{theorem}\label{theorem.ext} The results of Theorem \ref{theorem.main} remain true for infinite angles in two dimensions, infinite polyhedral cones in three dimensions, and infinite dihedral angles in three dimensions. \end{theorem} The proof of this theorem proceeds along the lines of the proof of Theorem \ref{theorem.main} in \cite{3D1} or \cite{BMNZ}. A first difference to remark is that the Hardy-Poincar\'e inequality does not hold for the whole domain. Then the ``desingularization'' $\Sigma(\Omega)$ has to involve the directions at infinity also in the case of an angle or a cone.
In the case of a dihedral angle $D_\alpha = \{0 < \theta < \alpha\}$, in cylindrical coordinates $(r,\theta,z)$, one has to also consider the two-point compactification of the edge. In particular, \begin{equation} \Sigma(D_\alpha) = [0, \alpha] \times [0, \infty] \times [-\infty, \infty]. \end{equation} \subsection{Extensions} \label{ssec.extend} Theorem \ref{theorem.main} above was extended in several ways. First, the proof applies with almost no change if mixed boundary value problems are considered, provided that no adjacent faces are endowed with Neumann boundary conditions. We do allow, however, different boundary conditions on the same face. We treat the points where the boundary conditions change similarly to the non-smooth boundary points, as solutions exhibit a similar singular behavior in this case. As already mentioned in the Introduction, we can more generally consider a general uniformly strongly elliptic differential operator of the form \begin{equation} L u = -\sum_{ij} \pa_i(a_{ij} \pa_j u) + cu, \mbox{ with } c \ge 0. \end{equation} (Recall that $L$ is uniformly strongly elliptic if, and only if, there exists $C>0$ such that $\sum_{ij} a_{ij} t_i t_j \ge C \sum_i t_i^2$, for all $(t_i) \in \RR^n$.) We can also include certain transmission or interface problems. More precisely, we now assume that our domain $\Omega$ can be written as a union of curvilinear polyhedral domains $\Omega_j$ with disjoint interiors:\ $\overline{\Omega} = \cup_{j=1}^K \overline{\Omega}_j$. Let $\Gamma := \cup_{j=1}^K \pa \Omega_j \smallsetminus \pa \Omega$ be the interface. We assume that $\Gamma$ is smooth and assume further that no adjacent faces of the $\Omega_j$'s are endowed with Neumann boundary conditions. We do allow $\Gamma$ to touch the boundary of $\Omega$.
We can then extend the result of Theorem \ref{theorem.main} by using instead the {\em broken weighted Sobolev spaces} $\hat{\mathcal K}_{a}^{m}(\Omega)$, defined by \begin{equation}\label{def.broken.SS} \hat{\mathcal K}_{a}^{m}(\Omega) := \oplus_{j=1}^{K} \mathcal K_{a}^{m}(\Omega_j). \end{equation} We observe that, if there is no interface, $\hat{\mathcal K}_{a}^{m}(\Omega)=\mathcal K_{a}^{m}(\Omega)$. We let $\pa_D \Omega$ be the part of the boundary with Dirichlet boundary conditions, which we assume to be a closed subset of the boundary, and let $\pa_N \Omega := \pa \Omega \smallsetminus \pa_D \Omega$. We denote the outer normal vector to $\Omega$, which is defined a.e. on $\pa\Omega$, by $\nu$, and the {\em conormal derivative} associated to the operator $L$ by \ $D^L_\nu = \sum_{ij} \nu_i a^{ij}(x) \pa_j$. Let $\tilde L (u) = (L\, u, u\vert_{\pa_D \Omega}, D^L_{\nu} u\vert_{\pa_N \Omega})$. Our most general result in $n$ dimensions states that, for $m \ge 1$, $\tilde L$ is an isomorphism (see \cite{BMNZ}): \begin{equation} \tilde{L} : \mathcal D_a \to \hat{\mathcal K}^{m - 1}_{a - 1}(\Omega) \oplus \Kond{m + 1/2}{a+1/2}(\pa_D \Omega) \oplus \Kond{m - 1/2}{a - 1/2}(\pa_N \Omega), \end{equation} where \begin{equation} \label{eq.domaindef} \mathcal D_{a} := \{ u \in \hat{\mathcal K}^{m+1}_{a+1}(\Omega)\cap \mathcal K^{1}_{a+1} (\Omega),\ u^+ = u^-, \ D^{L+}_\nu u = D^{L-}_\nu u \, \text{ on } \, \Gamma\, \}, \end{equation} and the subscript $\pm$ refers to non-tangential limits from each side of the interface. The conormal derivative is defined in the sense of the trace a.e. on $\pa\Omega$. Let us mention that the interface $\Gamma$ will separate different faces where it touches the boundary, and hence we assume that these faces are not both endowed with Neumann boundary conditions. For elasticity with mixed boundary conditions, a similar result is obtained by Mazzucato and Nistor in \cite{MNelast}.
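To fix ideas, the transmission conditions in \eqref{eq.domaindef} can be illustrated by the following elementary one-dimensional analogue (not taken from \cite{BMNZ}; it is included here only for illustration). Take $\Omega = (-1, 1)$ with interface $\Gamma = \{0\}$, coefficient $a = a_1$ on $(-1,0)$ and $a = a_2$ on $(0,1)$, and consider $-(a u')' = 0$ with $u(-1) = 0$ and $u(1) = 1$. The matching conditions $u^+ = u^-$ and $a_2 (u')^+ = a_1 (u')^-$ at the interface yield the piecewise linear solution
\begin{equation*}
u(x) =
\begin{cases}
\dfrac{a_2(x+1)}{a_1+a_2}, & -1 \le x \le 0,\\[2mm]
\dfrac{a_2 + a_1 x}{a_1+a_2}, & 0 \le x \le 1.
\end{cases}
\end{equation*}
For $a_1 \neq a_2$, the derivative $u'$ jumps across $\Gamma$, so $u \notin H^2(\Omega)$, although $u$ is smooth on each subdomain; this is precisely the behavior captured by the broken spaces.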
The results in \cite{MNelast} also extend to interface problems under the same assumptions (no adjacent faces with Neumann boundary conditions and a smooth interface) using methods as in \cite{BMNZ} and in \cite{MNelast}. More precisely, we use Korn's inequality to obtain local regularity results (no weighted spaces). This applies, in particular, to interface problems. There, the additional regularity is proved as in the case of the additional regularity at the boundary for smooth domains. See \cite{NistorSchwab} for a proof of the additional regularity at the boundary for systems that extends to interface problems. Once one has the local regularity results, the global regularity results in {\em weighted} spaces are proved as in \cite{MNelast} using suitable partitions of unity. The solvability in $H^1$ is an immediate consequence of Korn's inequality and of the Hardy-Poincar\'e inequality. Combining regularity with solvability in $H^1$ yields solvability in higher weighted Sobolev spaces $\mathcal K_{a+1}^{m+1}(\Omega)$. Other regularity results go toward analytic regularity using countably normed spaces, as in the work of Babu\v{s}ka-Guo \cite{BabuGuo, BabuGuo2} and of Costabel, Dauge, and Nicaise \cite{CDN}. See the Introduction for more references. It would be interesting to extend these results to the de Rham complex \cite{ArnoldActa, ArnoldBull}. \subsubsection{Adjacent Neumann faces and non-smooth interfaces in 2D} The assumption that no vertex $P$ be the common point of two adjacent faces with Neumann boundary conditions and the assumption that $\Gamma$ be smooth at any interface point $P$ are both equivalent to the requirement that the constant function equal to one not be a singular function at that singular point $P$. This assumption is necessary because, if it is not satisfied, the relevant operator $\tilde L$ is not even Fredholm for the value $a = 0$, and it is also not invertible for any $a \in \RR$.
However, this assumption is not realistic in practice and, it turns out, not even necessary for designing graded meshes that yield quasi-optimal rates of convergence \cite{HMN}. To obtain a well-posedness result for interface problems in 2D, we can proceed as follows \cite{HMN}. Let $\chi_P$ be a smooth function that is equal to 1 near each singular point $P$ that is either a point where we have Neumann-Neumann conditions or a non-smooth interface point, with, respectively, Neumann or periodic boundary conditions on the sides at $P$. This includes points $P$ that belong to more than two of the subdomains $\overline{\Omega}_j$ (so called {\em multiple junction points}). We assume that the functions $\chi_P$ have disjoint supports. Let $W_s$ be the linear span of the functions $\chi_P$. The choice of boundary conditions or the introduction of additional singular points to a polygonal domain defines a {\em polygonal structure} on $\Omega$, see \cite{HMN} for details. \begin{theorem}\label{theorem.interface} Let $\Omega$ be a domain with a polygonal structure. Then there exists $\eta > 0$ such that, for any $0 < a < \eta$ and $m \in \mathbb Z_+$, the map \begin{equation*} \tilde{L} : \mathcal D_a + W_s \to \hat{\mathcal K}^{m-1}_{a-1}(\Omega) \oplus \mathcal K^{m + 1/2}_{a + 1/2}(\pa_D \Omega) \oplus \mathcal K^{m - 1/2}_{a -1/2}(\pa_N \Omega), \end{equation*} with $\mathcal D_a$ given in \eqref{eq.domaindef}, is an isomorphism. \end{theorem} The proof requires the calculation of the index of the operator $\tilde L$ acting on $\hat{\mathcal K}^{m+1}_{a+1}(\Omega)\cap \mathcal K^{1}_{a+1} (\Omega)$. Note that our result is not valid for $a = 0$. We expect a similar result in 3D. Theorem \ref{theorem.interface} can be used to justify the construction of a sequence of meshes (in 2D) that yields quasi-optimal $h^m$ rates of convergence for transmission problems with non-smooth interfaces (and even with multiple junctions) and problems with adjacent Neumann-Neumann corners in 2D.
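The role of the space $W_s$ can be seen already on a plane wedge; the following is a standard separation of variables computation, recalled here only for illustration. For the Laplacian on the wedge $\{0 < \theta < \alpha\}$ with Neumann conditions on both sides $\theta = 0$ and $\theta = \alpha$, the bounded harmonic functions separate as $r^{k\pi/\alpha} \cos(k\pi\theta/\alpha)$, $k \in \mathbb Z_+$, and the exponent $k = 0$ yields the constant function $1$. Near the vertex,
\begin{equation*}
\|1\|_{\mathcal K^0_{a+1}}^2 = \int_0^\alpha \!\! \int_0^\epsilon r^{-2(a+1)}\, r \, dr \, d\theta = \alpha \int_0^\epsilon r^{-2a-1} \, dr,
\end{equation*}
which is finite if, and only if, $a < 0$. Thus, for $a > 0$, the constant singular function does not belong to the solution space $\mathcal K^{m+1}_{a+1}(\Omega)$ and has to be accounted for separately, which is precisely the role of the summand $W_s$ in Theorem \ref{theorem.interface}.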
See \cite{junping} for additional issues related to the regularity and numerical methods for interface problems. We notice that the resulting sequence of meshes is the same for all 2D problems on polygonal domains (with or without interfaces or Neumann-Neumann corners), although the theoretical PDE results (or {\em a priori} estimates) are different in these two cases. \subsection{The domain of $\Delta$ on concave polygons} Let us mention that the method used to obtain Theorem \ref{theorem.interface} can be used to describe the domain $\mathcal D(\Delta)$ of the Friedrichs extension of the Laplace operator on $\Omega$ with zero boundary conditions. First of all, the form associated to $\Delta$, namely $B(u, v) = (\nabla u, \nabla v)$, for $u, v$ zero on the boundary, defines the so called {\em energy norm}:\ $|u|_{H^1(\Omega)} = B(u, u)^{1/2}$. The completion of ${\mathcal C}^{\infty}_{\text{c}}(\Omega)$ in the energy norm is $H^1_0(\Omega)$. The proofs in \cite{BNZ1, 3D1} show that $H^1_0(\Omega) = \mathcal K_{1}^{1}(\Omega) \cap \{ u \vert_{\pa \Omega} = 0 \}$, with equivalent norms. The domain of the Friedrichs extension of the Laplacian $\Delta$ is then \begin{equation} \mathcal D(\Delta) = \{u \in H^1_0(\Omega),\ \Delta u \in L^2(\Omega) \}. \end{equation} If $\Omega$ is convex, then it is known that $\mathcal D(\Delta) = H^2(\Omega) \cap H^1_0(\Omega)$. This is, however, not true if $\Omega$ is concave. To describe $\mathcal D(\Delta)$ in the case when $\Omega$ is concave, let us notice that the map \begin{equation} \Delta : \mathcal K_{2}^{2}(\Omega) \cap H^1_0(\Omega) \to L^2(\Omega) \end{equation} is Fredholm and its index is the negative of the number of re-entrant corners by \cite{Kondratiev67}. Let $P$ be such a re-entrant corner with angle $\alpha_{P} > \pi$.
Also, let $(r, \theta)$ be polar coordinates at $P$ and consider the function $\phi_{P} = r^{\pi/\alpha_{P}} \sin(\pi \theta/\alpha_{P}) \chi_{P}$, where $\chi_{P}$ is the function considered in Theorem \ref{theorem.interface}. Let $V_s$ be the space of linear combinations of the functions $\phi_{P}$, with $P$ a re-entrant corner. Then one has that \begin{equation} \Delta : \mathcal K_{2}^{2}(\Omega) \cap H^1_0(\Omega) + V_s \to L^2(\Omega) \end{equation} has index zero, is injective, and hence bijective. This proves the following result. \begin{theorem} The domain of the Friedrichs extension of the Laplace operator with zero boundary conditions on a polygon $\Omega \subset \RR^2$ is \begin{equation*} \mathcal D(\Delta) = \mathcal K_{2}^{2}(\Omega) \cap H^1_0(\Omega) + V_s. \end{equation*} \end{theorem} A similar description is available for other types of boundary conditions. This result immediately leads to a maximal regularity result for the heat equation on polygonal domains. See also \cite{GM} and \cite{GM2} for related results on Friedrichs extensions of second order elliptic operators on manifolds with conical points. \section{Anisotropic weighted Sobolev spaces and regularity}\label{sec.two} The well-posedness results of the previous section are not enough to establish quasi-optimal rates of convergence in 3D. We need additional regularity along the edges, as follows. Let $u$ be the solution of problem \eqref{eq.BVP} with $f \in H^{m-1}(\Omega)$ and $g=0$. We observe that this assumption is stronger than assuming that $f$ is in a weighted Sobolev space of the form $\mathcal K^{m-1}_{a-1}$ for $|a|$ small. We will need to take advantage of this additional regularity of $f$, which leads to improved regularity for $u$ along the edges. We encode this additional regularity by introducing new {\em anisotropically} weighted spaces.
We assume first that the domain $\Omega$ is a dihedral angle with axis along the $z$-coordinate axis, $ D_\alpha = \{0 < \theta < \alpha\}$, using cylindrical coordinates $(r,\theta,z)$. We further assume that $f \in H^{m-1}(D_\alpha)$. Then $f \in \Kond{m-1}{a-1}(D_\alpha)$, and hence \begin{equation}\label{eq.stage1} u \in \Kond{m+1}{a+1}(D_\alpha) \end{equation} for positive and small enough $a$, by Theorem \ref{theorem.ext}. Hence, $\pa_z u \in \Kond{m}{{a}}(D_\alpha).$ However, we also have $\Delta \pa_z u = \pa_z \Delta u = \pa_z f \in H^{m-2}(D_\alpha).$ Then, using Theorem \ref{theorem.main}, which extends to this setting, we also obtain that \begin{equation}\label{eq.stage2} \pa_z u \in \Kond{m}{a+1}(D_\alpha), \end{equation} a better estimate than in Equation \eqref{eq.stage1}. These calculations suggest that we introduce a scale of spaces $\mathcal D_a^m$, $m\in \mathbb Z_+$, as follows: \begin{align*} \mathcal D_a^1(D_\alpha) &:= \Kond{1}{1}(D_\alpha), \\ \mathcal D_a^m(D_\alpha) &:=\{u \in \Kond{m}{a}(D_\alpha),\ \pa_z u \in \mathcal D_a^{m-1}(D_\alpha) \}. \end{align*} The spaces $\mathcal D_a^1$ are thus independent of $a$. We assume next that the domain $\Omega$ is a cone $\mathcal C$ centered at the origin. We let $\rho(x) = |x|$, the distance from $x$ to the origin, and define \begin{equation*} \mathcal D_a^1(\mathcal C) := {\rho^{a-1}}\Kond{1}{1}(\mathcal C) =\{{\rho^{a-1}}v,\ v \in \Kond{1}{1}(\mathcal C)\}. \end{equation*} To introduce the spaces $\mathcal D_a^m(\mathcal C)$ for $m \ge 2$, we shall need to consider the vector field $\rho \pa_\rho := x \pa_x + y \pa_y + z \pa_z$, which is the infinitesimal generator of dilations centered at the vertex of the cone. We then define by induction \begin{equation*} \mathcal D_a^m(\mathcal C) := \{u \in \Kond{m}{a}(\mathcal C),\ {\rho \pa_ \rho} (u) \in \mathcal D^{m-1}_{a}(\mathcal C) \}, \quad m \ge 2.
\end{equation*} For a general bounded polyhedral domain $ {\Omega }$, we define the {\em anisotropic weighted Sobolev spaces} $\mathcal D_a^m(\Omega)$ by localizing around vertices and edges, using as models cones and dihedral angles, respectively, such that away from the edges these spaces coincide with the usual Sobolev spaces $H^m$. Then, we have the following regularity result \cite{3D2}: \begin{theorem}\label{theorem.anisotropic} Let $f \in H^{m-1}(\Omega)$, with $m \ge 1$. Then there exists $\eta_{\Omega} > 0$ such that the Poisson problem \eqref{eq.BVP} with $g=0$ has a unique solution $u \in \mathcal D^{m+1}_{a+1}(\Omega) $ for any $0 \le a < \eta_\Omega$ and \begin{equation*} \|u\|_{\mathcal D_{a+1}^{m+1}(\Omega)} \le C_{\Omega, a} \|f\|_{H^{m-1}(\Omega)}. \end{equation*} \end{theorem} See \cite{BKP, BCD, CDN, kellogg1} for related results. \section{Quasi-optimal $h^m$-mesh refinement}\label{sec.three} We describe in this section a strategy to obtain quasi-optimal $h^m$-mesh refinement. We follow \cite{3D2}, from where the pictures are taken. The theoretical justification of this construction is based on the anisotropic regularity result of the previous section, Theorem \ref{theorem.anisotropic}. Given a bounded polyhedral domain $\Omega$ and a parameter $\kappa \in (0, 1/2]$, we will provide a sequence $\mathcal T_n$ of decompositions of $\Omega$ into finitely many tetrahedra, such that, if $S_n$ is the finite element space of continuous, piecewise polynomials on $\mathcal T_n$, then the Lagrange interpolant of $u$ of order $m$, $u_{I, n}$, has ``quasi-optimal'' approximability properties. The result can be formulated as follows: \begin{theorem}\label{theorem.interp}\ Let $a \in (0, 1/2]$ and $0 < \kappa \le 2^{-m/a}$.
Then there exists a sequence of meshes $\mathcal T_n$ and a constant $C > 0$ such that, for the corresponding sequence of finite element spaces $S_n$, we have \begin{equation*} |u - u_{I,n}|_{H^1(\Omega)} \le C 2^{-nm} \|u\|_{\mathcal D_{a+1}^{m+1}(\Omega)}, \end{equation*} for any $u \in \mathcal D_{a+1}^{m+1}(\Omega)$, $u\vert_{\pa\Omega} = 0$, and for any $n \in \mathbb Z_+$. \end{theorem} The quasi-optimal rate of convergence \eqref{eq.optimal.rate} is now a direct consequence of Theorem \ref{theorem.interp}. \subsection{Refinement Strategy} Our refinement strategy will first generate a sequence of decompositions $\mathcal T_n'$ of $\Omega$ in tetrahedra and triangular prisms, while our meshes $\mathcal T_n$ will be obtained by further dividing each prism in $\mathcal T_n'$ into three tetrahedra. We now explain how the divisions $\mathcal T_n'$ are defined inductively, $\mathcal T_{n+1}'$ being a refinement of $\mathcal T_{n}'$ in which each edge is divided into two (possibly unequal) edges. To define the way each edge of $\mathcal T_n'$ is divided, we need to establish a hierarchy of the nodes of $\mathcal T_n'$. Therefore, given a point $P \in \overline{\Omega}$, we shall say that $P$ is of {\em type} {\bf V} if it is a vertex of $\Omega$; we shall say that $P$ is of {\em type} {\bf E} if it is on an open edge of $\Omega$. Otherwise, we shall say that it is of {\em type} {\bf S} (that is, a ``smooth'' point). The type of a point depends only on $\Omega$ and not on any partition or meshing. The initial \ttra\ will consist of edges of type {\bf VE, VS, ES, EE:=E${}^2$}, and {\bf S${}^2$}. We shall assume that our initial decomposition and initial \ttra\ were defined so that no edges of type {\bf V${}^2$ := VV} are present. The points of type {\bf V} will be regarded as more singular than the points of type {\bf E}, and the points of type {\bf E} will be regarded as more singular than the points of type {\bf S}.
All the resulting triangles will hence be of one of the types {\bf VES, VSS, ESS}. Let us notice that once our initial refinement is fine enough, the edges of our domain will be decomposed into segments of type $VE$ and $EE$, and the segments of type $EE$ will be contained in triangular prisms. Therefore, we can assume that there are no triangles of type {\bf EES}. Our refinement procedure depends on the choice of a constant $\kappa \in (0, 2^{-m/a})$, where $a> 0$ is as in Theorem \ref{theorem.anisotropic} and $\kappa \le 1/2$. We can improve our construction by considering different values of $\kappa$ associated to different vertices or edges. This generalization can easily be carried out by the reader. See \cite{HMN} for example. Let $AB$ be a generic edge in the decompositions $\mathcal T_n'$. Then, as part of $\mathcal T_{n+1}'$, this edge will be decomposed into two segments, $AC$ and $CB$, such that $|AC|= \kappa |AB|$ if $A$ is more singular than $B$ ({\em i.e., } if $AB$ is of type {\bf VE, VS}, or {\bf ES}). Except when $\kappa = 1/2$, $C$ will be closer to the more singular point. This procedure is as in \cite{BNZ1}. See Figure \ref{fig:edge}. \begin{figure}[!htb] \begin{tabular}{cc} {\includegraphics[width=2in]{r1.pdf}} & { \includegraphics[width=2in]{r2.pdf} }\\ { A more singular than B} & {A and B equally singular}\\ {$|AC|=\kappa |AB|, \ \kappa=1/4$} & {$|AC|=|CB|$} \end{tabular} \caption{ Edge decomposition} \label{fig:edge} \end{figure} The above strategy to refine the edges induces a natural strategy for refining the triangular faces. If $ABC$ is a triangle in the decomposition $\mathcal T_n'$, then in $\mathcal T_{n+1}'$ the triangle $ABC$ will be divided into four other triangles, in accordance with the edge strategy. The decomposition of triangles of type {\bf S${}^3$} is obtained for $\kappa = 1/2$. The type\ {\bf VSS} triangle decomposition is described in Figure \ref{fig:VER} (a).
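The edge-division rule above is simple enough to sketch in a few lines of code; the function below is not from \cite{3D2} and is hypothetical, meant only to show how the graded one-dimensional meshes arise along an edge.

```python
# Illustrative sketch (not the authors' code) of the edge-refinement
# rule: an edge AB is split at C with |AC| = kappa * |AB| when A is
# more singular than B, and at the midpoint when A and B are equally
# singular.  Here we grade toward a single singular endpoint of [0, 1].

def refine(nodes, singular, kappa):
    """One refinement step on a sorted partition `nodes`: the segment
    touching the singular endpoint is split in the ratio kappa; all
    other segments are bisected."""
    new = []
    for a, b in zip(nodes, nodes[1:]):
        ratio = kappa if a == singular else 0.5
        new.extend([a, a + ratio * (b - a)])
    new.append(nodes[-1])
    return new

# Two refinements of [0, 1] with kappa = 1/4, singular endpoint 0:
mesh = [0.0, 1.0]
for _ in range(2):
    mesh = refine(mesh, 0.0, 0.25)
# mesh is now [0.0, 0.0625, 0.25, 0.625, 1.0]: the elements shrink
# geometrically (ratio kappa) toward the singular point, while the
# elements away from it are halved, exactly as in the text.
```

After $n$ steps the element touching the singular point has length $\kappa^n$, while the mesh away from the singularity has size comparable to $2^{-n}$, which is the grading that compensates the corner singularity of $u$.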
In the case when $ABC$ is of type {\bf VES}, however, we shall use a different construction. Namely, in this case we remove the newly introduced segment that is opposite to $B$, see Figure \ref{fig:VER} (b), and divide $ABC$ into two triangles and a quadrilateral. The resulting quadrilateral will belong to a prism in $\mathcal T_{n+1}'$. \begin{figure}[!htb] \begin{tabular}{cc} \includegraphics[width=1.8in]{r3.pdf} & \includegraphics[height=1.4in]{r4.pdf}\\ {(a) $A$ of type {\bf V} or {\bf E},} & {(b) VER decomposition: $\angle E=90^o$} \\ {$B$ and $C$ of type {\bf S}, $ |A'B|=|A'C| $ } & {$|VC'|=\kappa |VE|,\ |VB'|=\kappa |VR|$} \\ { $|AC'|=\kappa |AB|,\ |AB'|=\kappa |AC|$} & {$ |EA'|=\kappa |ER|$, $A'C' $ was removed} \end{tabular} \caption{Triangle decomposition, $\kappa =1/4$ } \label{fig:VER} \end{figure} \subsection{Divisions in tetrahedra and prisms} We now describe the construction of the sequence of the decompositions $\mathcal T_n'$ for $n \ge 0$. The required sequence of meshes $\mathcal T_n$ will be defined by dividing all the prisms in $\mathcal T_n'$ into tetrahedra. More details on the first level of semi-uniform refinement of a prism are presented in \cite{3D2}. \begin{figure}[!htb] \label{fig:TWO} \begin{tabular}{cc} \includegraphics[width=2.2in]{tet4.pdf} & \includegraphics[width=1.8in]{r6.pdf}\\ (a) Initial decomposition. & (b) Marking a prism: $BC' = mark$,\\ & $AA'\ ||\ BB'\ || \ CC' \perp ABC$ and $A'B'C' $ \end{tabular} \caption{The initial decomposition $\mathcal T_0'$ of $\Omega$.} \end{figure} We start with an initial division $\mathcal T_0'$ of $\Omega$ into straight triangular prisms and tetrahedra of types {\bf VESS} and {\bf VS${}^3$}, having a vertex in common with ${\Omega }$, and an interior region $\Lambda_0$. See Figure \ref{fig:TWO} (a), where we have assumed that our domain $\Omega$ is a tetrahedron.
For each of the prisms we choose a diagonal (called a {\em mark}) which will be used to uniquely define a partition of the triangular prism into tetrahedra. We then divide the interior region $\Lambda_0$ into tetrahedra that will match the marks. Also, we assume that the marks on adjacent prisms are compatible, so that the resulting meshes are conforming. We can further assume that some of the edge points (as in Figure \ref{fig:TWO} (b)) have been moved along the edges so that the prisms become straight triangular prisms, {\em i.e.}, the edges are perpendicular to the bases. The decompositions $\mathcal T_n'$ are then obtained by induction following the Steps {\bf 1} through {\bf 3} explained next. We assume that the decomposition $\mathcal T_n'$ was defined and we proceed to define the decomposition $\mathcal T_{n+1}'$. \medskip \noindent {\bf Step 1.} The tetrahedra of type {\bf $S^4$} are refined uniformly by dividing along the planes given by \ ${x_i + x_j = k/2^n}, {1 \le k \le 2^n}$, where ${x_j}$ are affine barycentric coordinates. This refinement is compatible with the already defined refinement procedure for the faces. See Figure \ref{fig:THREE} (a) for $n=1$. \smallskip \noindent {\bf Step 2.} We perform semi-uniform refinement for the prisms in our decomposition $\mathcal T_{n}'$ (all these prisms will have an edge in common with $\Omega$). This procedure is shown in Figure \ref{fig:THREE} (b). \begin{figure}[!htb] \label{fig:THREE} \begin{tabular}{cc} \includegraphics[width=2.2in]{t.pdf} & \includegraphics[width=1.3in]{r5.pdf} \\ (a) First level of uniform refinement & (b) First level of semi-uniform refinement\\ & of a prism, $CD = mark$ \end{tabular} \caption{First refinement $\mathcal T_1'$.} \end{figure} \noindent {\bf Step 3.} We perform non-uniform refinement for the tetrahedra of type {\bf VS${}^3$} and {\bf VESS}.
More precisely, we divide a tetrahedron of type {\bf VS${}^3$} into 12 tetrahedra as in the uniform strategy, with the edges through the vertex of type {\bf V} divided in the ratio ${\kappa }$. We thus obtain one tetrahedron of type {\bf VS${}^3$} and 11 tetrahedra of type {\bf S${}^4$}. (At the next step, which yields $\mathcal T_{n+2}'$, we iterate this procedure for the small tetrahedron of type {\bf VS${}^3$}, while the tetrahedra of type {\bf S${}^4$} are divided uniformly.) See Figure \ref{fig:FOUR} (a). On the other hand, a tetrahedron of type {\bf VESS} will be divided into 6 tetrahedra of type {\bf S${}^4$}, one tetrahedron of type {\bf VS${}^3$}, and a triangular prism. The vertex of type {\bf E} will belong only to the prism. See Figure \ref{fig:FOUR} (b). This refinement is compatible with the earlier refinement of the faces. \begin{figure}[!htb] \label{fig:FOUR} \begin{tabular}{cc} \includegraphics[width=1.8in]{vs3.pdf} & \includegraphics[width=1.8in]{vess.pdf} \\ (a) Vertex A of type {\bf V}, & (b) Vertex A of type {\bf V}, B of type {\bf E},\\ B, C, D of type {\bf S} & C, D of type {\bf S} and $D_1D'=$ mark \\ & for the prism $BD_1 C_1 D' C_1B'$ \end{tabular} \caption{Refinement of tetrahedra of type {\bf VS${}^3$} and {\bf VESS}.} \end{figure} The description of our refinement procedure is now complete. \medskip \subsection{Intrinsic local refinement} We see that one of the main features of our refinement is that each edge, each triangle, and each quadrilateral that appears in a tetrahedron or prism in the decomposition $\mathcal T_n'$ is divided in the decomposition $\mathcal T_{n+1}'$ in an intrinsic way that depends only on the type of the vertices of that edge, triangle, or quadrilateral. In particular, the way that a face in $\mathcal T_n'$ is divided to yield $\mathcal T_{n+1}'$ does not depend on the type of the other vertices of the tetrahedron or prism to which it belongs.
This ensures that the \ttra\ $\mathcal T_{n+1}$, which is obtained from $\mathcal T_{n+1}'$ by dividing each prism into three tetrahedra, is a conforming mesh. \section{Hardy-Poincar\'e inequality and regularity: a glimpse at the proofs} \label{sec.four} There are two main ingredients in the proofs of the well-posedness results stated in the first section. One is the Hardy-Poincar\'e inequality, which yields solvability (more precisely, well-posedness) in the $H^1$-type spaces, and the second one is a regularity result, which allows us then to obtain well-posedness in higher regularity spaces. A third, more technical ingredient is to describe the trace spaces at the boundary. For this, we use the same ideas as the ones used in the proof of regularity. We now discuss these ingredients. \subsection{The Hardy-Poincar\'e inequality} Let us denote by $r_{\Omega}(x)$ the distance from $x$ to the set of singular points in the boundary of $\Omega$. Recall that these singular points consist not just of the edge points, but also of the points where the boundary conditions change and the points where the interface touches the boundary. The following inequality is then proved by induction \cite{BMNZ} (see \cite{3D1} for the three dimensional case; the two dimensional case was well known, see \cite{NP} for example). \begin{proposition} \label{prop.HP} Let $\Omega$ be a polyhedral domain in $\RR^n$. We assume that either $\Omega$ is bounded, or that it is a cone or a dihedral angle. Let us assume that the Neumann part of the boundary $\pa_N \Omega := \pa \Omega \smallsetminus \pa_D \Omega$ contains no adjacent faces of $\Omega$.
Then there exists a constant $C_{\Omega} > 0$, which depends only on $\Omega$ and the choice of boundary conditions, such that the following {\em Hardy-Poincar\'e} inequality holds: \begin{equation*} \int_{\Omega} \frac{|u|^2}{r_\Omega^2} dx \le C_{\Omega} \int_{\Omega} |\nabla u|^2 dx \end{equation*} for any function $u \in H^1(\Omega)$ that is zero on $\pa_D \Omega$. \end{proposition} Let us assume that $\Omega$ is bounded. A simple consequence of the Hardy-Poincar\'e inequality of Proposition \ref{prop.HP} is that the spaces \begin{equation} H_D^1(\Omega) = \{ u\in H^1(\Omega), u = 0 \mbox{ on } \pa_D \Omega \} \end{equation} and \begin{equation} \mathcal K_{1}^{1}(\Omega) \cap \{ u\in H_{loc}^1(\Omega), u = 0 \mbox{ on } \pa_D \Omega \} \end{equation} are the same and their respective norms are equivalent. Neither this result nor the Hardy-Poincar\'e inequality is true if there exist two adjacent faces with Neumann boundary conditions. This is the reason we needed a different approach in Section~\ref{sec.one}. \subsection{Sobolev spaces and regularity} Our definition of weighted Sobolev spaces, Equation \eqref{eq.def.wSsp0}, is elementary. However, for the purpose of establishing the needed properties of these spaces, it is convenient to identify them with the usual Sobolev spaces associated to a different metric on $\Omega$. To this end, let us recall from \cite{BMNZ} that a {\em stratified curvilinear polyhedral domain} $\Omega$ is an open subset of a Riemannian manifold $(M, g)$ of dimension $d$ together with a stratification \begin{equation} \overline{\Omega} = \Omega^{(d)} \supset \Omega^{(d-1)} \supset \ldots \supset \Omega^{(1)} \supset \Omega^{(0)}. \end{equation} We then define stratified curvilinear polyhedral domains by induction as follows. For $d = 0$, $\Omega$ is just a finite set of points. For $d = 1$, $\Omega$ is a finite set of intervals.
The stratum $S_0$ for $d = 1$ will contain all the boundary points of the intervals, but may contain also other points. For $d > 1$, we require our domain $\Omega$ to satisfy the following conditions: for every point $p \in \pa \Omega$, there exists a neighborhood $V_p \subset M$ such that, if $p\in \Omega^{(l)}\setminus \Omega^{(l-1)}$, $l=1,\dots, d-1$, then there is a stratified curvilinear polyhedral domain $\omega_p \subset S^{d-l-1}$, $\overline{\omega_p} \neq S^{d-l-1}$, and a diffeomorphism $\phi_p : V_p \to B^{d-l} \times B^{l}$ such that $\,\phi_p(p)=0$ and \begin{equation}\label{eq.cond.poly} \phi_p(\Omega \cap V_p) = \{r x',\, 0 < r < 1,\, x' \in \omega_p\} \times B^{l}, \end{equation} inducing a homeomorphism $\overline{\Omega} \cap V_p \to \{r x',\, 0 \le r < 1,\, x' \in \overline{\omega_p}\} \times B^{l}$ of stratified spaces that is a diffeomorphism on each stratum. The set of singular points of $\Omega$ then consists of $\Omega^{(d-2)}$ and is given as part of the definition of $\Omega$, but it must contain all the geometric, intrinsic singular points of $\pa \Omega$. Although we shall not need this definition here, let us mention nevertheless that the {\em desingularization} of $\Omega$, denoted $\Sigma(\Omega)$, is obtained by gluing in a natural way all the sets $[0, 1) \times \overline{\omega_p} \times B^{l}$ as in Equation \eqref{eq.cond.poly}. The resulting set $\Sigma(\Omega)$ is then a manifold with corners that has a natural structure of a Lie manifold with boundary, in the sense of \cite{AIN}. Then $\Sigma(\Omega) \to \Omega$ is a differentiable map that is a diffeomorphism outside the set of singular points, the set of singular points in $\Sigma(\Omega)$ being the set of points belonging to a face of codimension at least two. Let $\tilde r_0(x) \ge 0$ be the distance from $x$ to the set $\Omega^{(0)}$.
In general, the function $\tilde r_0$ will not be smooth; we therefore replace $\tilde r_0$ with an equivalent function $r_0$ that is smooth outside $\Omega^{(0)}$. Thus, we also have that $r_0(x) > 0$ for $x \notin \Omega^{(0)}$, and that $\tilde r_0/r_0$ and $r_0/\tilde r_0$ are bounded functions. We shall say that $r_0$ is the {\em smoothed distance} to $\Omega^{(0)}$. We then replace the metric $g =: g_0$ with $g_1 := r_0^{-2}g$. We repeat this construction for the remaining non-empty strata in increasing order of the dimension of the strata, each time measuring distances in the new metric. Thus $r_k$ is the smoothed distance to $\Omega^{(k)}$ in the metric $g_k$, and we let $g_{k+1} := r_{k}^{-2}g_{k}$, $k \le d - 2$. One can prove that $g_{d-1}$ is a compatible metric on the desingularization $\Sigma(\Omega)$ \cite{aln1, BMNZ}, and hence we can use the results on Sobolev spaces from those papers. Let $\rho := r_0 r_1 \ldots r_{d-2}$. Let us denote by $\Gamma(\overline{\Omega}, TM)$ the space of restrictions to $\overline{\Omega}$ of smooth vector fields on $M$. The resulting structural Lie algebra of vector fields on $\Sigma(\Omega)$ is simply $\mathcal V = {\mathcal C}^{\infty}(\Sigma(\Omega)) \rho \Gamma(\overline{\Omega}, TM)$. Thus a basis of $\mathcal V$ over ${\mathcal C}^{\infty}(\Sigma(\Omega))$ is given by $\{\rho \pa_i\}$. The resulting Sobolev spaces are therefore \begin{equation*} \mathcal K_{a}^{m} (\Omega) := \{ u, \rho^{|\alpha| - a} \pa^\alpha u \in L^2(\Omega),\ |\alpha| \le m \} = \rho^{a - d/2} H^m(\Omega, g_{d-1}), \end{equation*} where the space $H^m(\Omega, h)$ is the Sobolev space associated to the metric $h$. Let $r_{\Omega}(x)$ denote the distance from $x$ to $\Omega^{(d-2)}$. One can prove by induction that $r_{\Omega}/\rho$ and $\rho/r_{\Omega}$ are both bounded, so in the above definition of Sobolev spaces we can replace $\rho$ with $r_{\Omega}$. See \cite{BMNZ} for details.
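To see in the simplest situation why these weighted spaces are plain Sobolev spaces for the conformally rescaled metric, consider (as an illustration only) a polygon $\Omega \subset \RR^2$ with a single singular vertex at the origin, so that $\rho \simeq r$ near the vertex, and take $m = 0$. In the metric $g_1 = r^{-2}(dr^2 + r^2 d\theta^2) = (d\ln r)^2 + d\theta^2$, a neighborhood of the vertex becomes a half-infinite cylinder, and the substitutions $t = \ln r$ and $u = r^{a - 1} v$ give
\begin{equation*}
\|u\|_{\mathcal K^0_a(\Omega)}^2 = \int_\Omega r^{-2a} |u|^2 \, dx = \iint r^{2 - 2a} |u|^2 \, dt \, d\theta = \iint |v|^2 \, dt \, d\theta,
\end{equation*}
so that $u \in \mathcal K^0_a(\Omega)$ exactly when $v$ is square integrable on the cylinder, in agreement with the identification $\mathcal K^0_a(\Omega) = \rho^{a-1} L^2(\Omega, g_1)$, the case $m = 0$, $d = 2$ of the formula above.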
The fact that the Sobolev spaces $\mathcal K_{a}^{m}$ are associated to a Lie manifold guarantees that the Laplacian $\Delta$ satisfies elliptic regularity in the scale of spaces $\mathcal K_{a}^{m}(\Omega)$. To this end, one also needs to establish that $\rho^2 \Delta - \Delta_{g_{d-1}}$ is a lower order differential operator generated by $\mathcal V$ and ${\mathcal C}^{\infty}(\Sigma(\Omega))$. We also obtain as a byproduct the fact that the traces at the boundary of the spaces $\mathcal K_{a}^{m}(\Omega)$ can be described in terms of the Sobolev spaces on $\pa \Omega$ associated to the conformally equivalent metric $h$. The Hardy-Poincar\'e inequality can also be interpreted in the setting of the desingularized metric. Indeed, there exists $C > 0$ such that every point $x$ is at a distance $\le C$ to the {\em Dirichlet part} of the boundary of $\Sigma(\Omega)$ if, and only if, there exist no two adjacent faces with Neumann boundary conditions. Then, once we know that every point is at a distance $\le C$ to the Dirichlet boundary, we can prove the Hardy-Poincar\'{e} inequality in the usual way.
https://arxiv.org/abs/1106.1022
Bohman-Frieze processes at criticality and emergence of the giant component
The evolution of the usual Erd\H{o}s-R\'enyi random graph model on $n$ vertices can be described as follows: at time $0$ start with the empty graph, with $n$ vertices and no edges. At each time $k$, choose two vertices uniformly at random and attach an edge between them. Let ${\bf G}_n(k)$ be the graph obtained at step $k$. Refined analysis in random graph theory shows that for fixed $t\in \mathbb{R}$, when $k(n) = n/2 + n^{2/3}t/2$, the sizes of the components in ${\bf G}_n(k(n))$ scale like $n^{2/3}$ and the rescaled component sizes converge to the standard multiplicative coalescent at time $t$. The last decade has seen variants of this process introduced, under the name Achlioptas processes, to understand the effect of simple changes in the edge formation scheme on the emergence of the giant component. Stimulated by a question of Achlioptas, one of the simplest and most popular of such models is the Bohman-Frieze (BF) model wherein at each stage $k$, two edges $e_1(k)=(v_1,v_2)$ and $e_2(k) = (v_3, v_4)$ are chosen uniformly at random. If at this time $v_1, v_2$ are both isolated then the edge $e_1$ is added, otherwise $e_2$ is added. Then \cite{bohman2001avoiding} (and further analysis in \cite{spencer2007birth}) show that once again there is a critical parameter, which is larger than $1$, above and below which the asymptotic behavior is as in the Erd\H{o}s-R\'enyi setting. While an intense study of this and related models seems to suggest that at criticality this model should be in the same universality class as the original Erd\H{o}s-R\'enyi process, a precise mathematical treatment of the dynamics in the critical window has to date escaped analysis. In this work we study the component structure of the BF model in the critical window and show that at criticality the sizes of components, properly rescaled and re-centered, converge to the standard multiplicative coalescent.
\section{Introduction} \label{sec-int} Random graph models of various systems in the real world have witnessed a tremendous growth in the last decade. The availability of a large amount of empirical data on real world networks and systems, such as road and rail networks, bio-chemical networks, social networks and data transmission networks such as the internet, has stimulated an inter-disciplinary effort to formulate models to understand such systems, ranging from (largely) static systems such as road and rail networks to more dynamic systems such as social networks, information networks and models of coagulation and aggregation in physics and colloidal chemistry. The classical Erd\H{o}s-R\'{e}nyi (ER) random graph can be thought of as a network evolving in time via the following prescription: start with the empty graph at time zero; then at each discrete time point, choose an edge uniformly at random and place it in the network (irrespective of whether it was present before or not). This simple model exhibits a tremendous amount of complexity, see \cite{bollobas-rg-book,janson-luczak-bk}. As one adds more and more edges, the system eventually transitions from a ``sub-critical'' regime, wherein the largest component is of order $\log{n}$, to the super-critical regime, wherein there exists a unique giant component of order $n$ and the next largest component is $O(\log{n})$. Understanding the emergence of this giant component and the properties of the near-critical Erd\H{o}s-R\'{e}nyi random graph has stimulated an enormous amount of work, see e.g.\ \cite{nachmias2007component,bollobas2010asymptotic,ding2010diameters} and the references therein. This model has been modified in various ways in the last few years, to understand the effect of choice on the emergence of the giant component. 
For example, in \cite{achlioptas2009explosive} and \cite{d2010local} simulation-based studies are carried out for an attachment scheme where at each stage one chooses two edges uniformly at random and then uses the edge which delays the emergence of a giant component (for example, by choosing the edge that minimizes the product of the sizes of the two adjoining components). A number of fascinating conjectures about the ``explosive emergence'' of the giant component are stated in these articles. Such modifications of the ER random graph model have generated a tremendous amount of interest in many different communities, ranging from combinatorics to statistical physics. Although a number of such models have been analyzed in the sub-critical and super-critical regimes, see for example the marvelous and comprehensive \cite{spencer2007birth}, understanding how the giant component emerges, even for very simple modifications of the ER setting, for example for the Bohman-Frieze model (which we describe below), has so far escaped a rigorous mathematical treatment. The aim of this paper is to prove the first rigorous results about such models at criticality and obtain precise asymptotics for the merging dynamics in the critical window that lead to the emergence of the giant component. The Bohman-Frieze (BF) model is as follows: Let ${\bf 0}_n$ be the empty graph on the vertex set $[n] \stackrel{\scriptscriptstyle def}{=} \{1,2,\ldots, n\}$ with no edges. The random graph process $\{{\bf{G}}_n^{\scriptscriptstyle BF}(k)\}_{k \in \mathbb{N}}\equiv \{{\bf{G}}_n(k)\}_{k \in \mathbb{N}}$ evolves as follows: \begin{itemize} \item At time $k=0$ let ${\bf{G}}_n(0) = {\bf0}_n$. \item The process evolves in discrete steps. For $k \geq 0$ let the state of the graph at time $k$ be ${\bf{G}}_n(k)$. Then at time $k+1$, the graph ${\bf{G}}_n(k+1)$ is constructed as follows: Choose two edges $e_1= (v_1, v_2)$ and $e_2 = (v_3, v_4)$ uniformly at random amongst all the possible ${n\choose 2}$ edges. 
If $v_1, v_2$ are isolated vertices then let ${\bf{G}}_n(k+1)$ be the graph ${\bf{G}}_n(k)$ with the edge $e_1$ added, else let ${\bf{G}}_n(k+1)$ be the graph ${\bf{G}}_n(k)$ with the edge $e_2$ added. \end{itemize} Note that the standard Erd\H{o}s-R\'{e}nyi (ER) process can be thought of as a variant of the above process wherein we add the edge $e_1$ at each stage (irrespective of whether its end points are isolated vertices or not). We shall use ${\bf{G}}_n^{\scriptscriptstyle ER}(k)$ to denote the usual Erd\H{o}s-R\'{e}nyi process. For the standard Erd\H{o}s-R\'{e}nyi process, classical results (see e.g.\ \cite{er-1}, \cite{er-2}, \cite{bollobas-rg-book}) tell us that by scaling time by $n$ and looking at the process ${\bf{G}}_n^{\scriptscriptstyle ER}(\lfloor nt/2 \rfloor) $, a phase transition occurs at $t_c(er)=1$, namely: \\(a) {\bf Subcritical regime:} For fixed $t<1 $, the largest component in ${\bf{G}}_n^{\scriptscriptstyle ER}(\lfloor nt/2 \rfloor)$ is $O(\log{n})$. \\(b) {\bf Supercritical regime:} For fixed $t> 1$, the largest component has size $\sim f(t)n$ for some positive function $f$ (namely, there is a giant component) while the second largest component is of size $O(\log{n})$. \\(c) {\bf Critical regime:} For $t=1$, the first and the second largest components are both of size $\Theta(n^{2/3})$. For the BF model, the original work in \cite{bohman2001avoiding} shows that there exists $t_0 > 1 $ such that the size of the largest component in ${\bf{G}}_n^{\scriptscriptstyle BF}( \lfloor nt_0/2 \rfloor)$ is $o_p(n)$. Thus this model ``avoids'' the giant component for longer than the original Erd\H{o}s-R\'{e}nyi model. The comprehensive paper \cite{spencer2007birth} showed, for the BF model and for a number of other related processes, that there exists a critical value $t_c$ such that conclusions (a) and (b) hold for these models as well, with $1$ replaced by $t_c$. 
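As an illustration (not part of the paper's argument), the discrete BF rule above can be simulated with a union--find structure that tracks component sizes; the function names and parameter choices here are ours:

```python
import random


class DSU:
    """Union-find over [n] with component sizes (union by size)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru != rv:
            if self.size[ru] < self.size[rv]:
                ru, rv = rv, ru
            self.parent[rv] = ru
            self.size[ru] += self.size[rv]


def bohman_frieze(n, steps, seed=0):
    """Run `steps` rounds of the discrete BF rule; return component sizes,
    largest first.  (Multi-edges are irrelevant for component sizes.)"""
    rng = random.Random(seed)
    dsu = DSU(n)
    for _ in range(steps):
        v1, v2 = rng.sample(range(n), 2)  # candidate edge e_1
        v3, v4 = rng.sample(range(n), 2)  # candidate edge e_2
        # BF rule: accept e_1 iff both its endpoints are still isolated
        if dsu.size[dsu.find(v1)] == 1 and dsu.size[dsu.find(v2)] == 1:
            dsu.union(v1, v2)
        else:
            dsu.union(v3, v4)
    roots = {dsu.find(v) for v in range(n)}
    return sorted((dsu.size[r] for r in roots), reverse=True)
```

For example, with $n = 2000$ and $\lfloor nt/2\rfloor = 1000$ steps (i.e.\ $t = 1 < t_c$), the largest component stays far below macroscopic size, consistent with the subcritical regime described above.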
For the BF model, the supercritical regime, namely the regime $t> t_c\equiv t_c(bf)$, was studied further in \cite{janson2010phase}. Numerical calculations from this paper show that $t_c(bf)\approx 1.17$. Understanding what happens in the critical regime for these models is of tremendous interest for many different fields, as it turns out that much of the ``action'' happens at criticality. For the Erd\H{o}s-R\'{e}nyi process, Aldous's result in \cite{aldous1997brownian} shows that if we denote the component size vector (components arranged in decreasing order) in the graph ${\bf{G}}_n^{\scriptscriptstyle ER}(\lfloor nt/2 \rfloor)$ by \[\boldsymbol{C}_n^{\scriptscriptstyle ER}(t)=(\mathcal{C}_n^{\scriptscriptstyle(i)}(t): i\geq 1),\] then, for any fixed $\lambda\in {\mathbb{R}}$, as $n \to \infty$, \begin{equation} \bar \boldsymbol{C}_n^{\scriptscriptstyle ER}(\lambda) \stackrel{\scriptscriptstyle def}{=} n^{-2/3}\boldsymbol{C}_n^{\scriptscriptstyle ER}\left( 1+\frac{\lambda}{n^{1/3}}\right)\convd \boldsymbol{X}(\lambda),\label{ins1629n}\end{equation} where the process $(\boldsymbol{X}(\lambda): -\infty< \lambda < \infty)$ is a Markov process called the {\it eternal standard multiplicative coalescent} on the space \begin{equation} l^2_{\downarrow} = \{(x_1,x_2,\ldots): x_1\geq x_2\geq \cdots \geq 0, \sum_i x_i^2< \infty\}. \label{eqn:ldown} \end{equation} The space $l^2_{\downarrow}$ is endowed with the topology inherited from $l^2$, and $\convd$ above denotes weak convergence in this space. In fact, the paper \cite{aldous1997brownian} shows weak convergence of $\bar \boldsymbol{C}_n^{\scriptscriptstyle ER}$ to $\boldsymbol{X}$ in $\mathcal{D}((-\infty, \infty), l^2_{\downarrow})$ (the space of RCLL functions from $(-\infty, \infty)$ to $l^2_{\downarrow}$ endowed with the usual Skorohod topology). We defer the full definition of the above Markov process to Section \ref{sec:results}. 
Despite the tremendous amount of interest in the critical regime of such models and the significant progress in analyzing these models above and below criticality (see e.g.\ \cite{spencer2007birth} for a general treatment of such models and \cite{janson2010phase} for an analysis of the BF model), the analysis of the component sizes at criticality, and an understanding of the dynamic behavior of these components and of how they merge through the critical scaling window for the giant component to emerge, has remained an extremely challenging problem for settings beyond the classical Erd\H{o}s-R\'{e}nyi model. The technique most commonly used to obtain refined results about the sizes of the largest components at criticality, namely showing that the breadth-first exploration processes of components converge to a certain ``inhomogeneous Brownian motion'', does not seem to extend to such settings, and one needs to develop rather different machinery. In this work we take a significant step towards the understanding of the behavior of such models by treating the asymptotics in the critical regime for the Bohman-Frieze process. Although this model makes only a simple modification to the attachment scheme of the Erd\H{o}s-R\'{e}nyi setting, the analysis requires many new constructions and mathematical ideas. Our main result, Theorem \ref{theo:main}, shows that at criticality the sizes of components, properly rescaled and re-centered, converge, as for the original Erd\H{o}s-R\'{e}nyi process, to the eternal standard multiplicative coalescent. The proof proceeds through a sequence of approximations that reduce the study to that of a certain inhomogeneous random graph model, which allows us to bring to bear techniques from the theory of multitype branching processes on general type spaces and integral operators on Hilbert spaces. These culminate in an estimate, obtained in Proposition \ref{prop:com-size}, on the size of the largest component through the ``subcritical'' window. 
Using this estimate and other careful estimates on quadratic variations of certain martingales associated with the Bohman-Frieze process, we then obtain (see Proposition \ref{prop:main}) precise asymptotics for the sums of squares and cubes of component sizes just below the critical window. The latter result is a key ingredient of the proof, as it allows us to show that just before the critical window the component sizes satisfy the regularity conditions required for convergence to the standard multiplicative coalescent, and thus allows us to couple the BF process with a standard multiplicative coalescent through the critical window to prove the main result. Although not pursued here, the techniques developed in this work for establishing the asymptotic behavior in Proposition \ref{prop:main} are potentially of use for the analysis of the critical regime of a very general class of Achlioptas processes (called bounded size rules). {\bf Organization of the paper:} Due to the length of the paper, let us now briefly guide the reader through the rest of this work. We begin in Section \ref{sec:cont-time-bf} with the construction of the continuous time version of the BF process and then state the main result (Theorem \ref{theo:main}) in Section \ref{sec:results}. Next, in Section \ref{sec:disc} we give a wide-ranging discussion of related work and the relevance of our results. We then start the proof of the main result by giving an intuitive sketch in Section \ref{sec:proof-idea}, wherein we also provide details on the organization of the various steps of the proof carried out in Sections 5--7. Finally, Section \ref{sec:proof-main} combines all these ingredients to complete the proof. \section{The Bohman-Frieze process} \label{sec:cont-time-bf} Denote the vertex set by $[n]=\{1,2,\ldots, n\}$ and the edge set by $\mathcal{E}_n=\{\{v_1,v_2\}: v_1\neq v_2\in [n]\}$. To simplify notation we shall suppress $n$ in the notation unless required. 
Denote by ${\bf {BF}}(t)={\bf {BF}}_n(t)$, $t \in [0,\infty)$, the continuous time Bohman-Frieze random graph process, constructed as follows:\\ Let $\mathcal{E}^2=\mathcal{E} \times \mathcal{E}$ be the set of all ordered pairs of edges. For every ordered pair of edges ${\bf e} = (e_1,e_2) \in \mathcal{E}^2$ let $\mathcal{P}_{\bf e}$ be a Poisson process on $[0,\infty)$ with rate $2/n^3$, and let these processes be independent as ${\bf e}$ ranges over $\mathcal{E}^2$. We order the points generated by all the ${n \choose 2} \times {n \choose 2}$ Poisson processes by their natural order as $0 < t_1<t_2<\cdots$. Then we can define the BF-process iteratively as follows:\\ (a) When $t \in [0, t_1) $, ${\bf {BF}}(t)={\bf 0}_n$, the empty graph with $n$ vertices; \\ (b) Consider $t \in [t_k, t_{k+1})$, $k \in \mathbb{N}^+$, where $t_k$ is a point of $\mathcal{P}_{\bf e}$ and ${\bf e} = (e_1,e_2) =(\{v_1,v_2\}, \{v_3,v_4\})$. If $v_1$, $v_2$ are both singletons (i.e., not connected to any other vertex) in ${\bf {BF}}(t_k-)$, then ${\bf {BF}}(t)={\bf {BF}}(t_k-) \cup \{e_1\}$, else let ${\bf {BF}}(t)={\bf {BF}}(t_k-) \cup \{e_2\}$. \\ Note that multiple edges are allowed between two given vertices; however, this has no significance in our analysis, which is primarily concerned with component sizes.\\ Consider the same construction but with the modification that we always add $e_1$ to the graph and disregard the second edge $e_2$. Note that the total rate of adding new edges is $${n\choose 2}\times {n \choose 2} \frac{2}{n^3} \approx \frac{n}{2}.$$ Then this random graph process is just a continuous time version of the standard Erd\H{o}s-R\'{e}nyi process and the convergence in (\ref{ins1629n}) continues to hold, where $\boldsymbol{C}_n^{\scriptscriptstyle ER}(t)$ now represents the component size vector at time $t$ for this continuous time ER process and, once more, $t_c=1$ is the critical parameter for the model. 
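The rate computation above can be checked directly (a small sanity check of our own; the function name is ours): the superposition of the ${n \choose 2}^2$ independent rate-$2/n^3$ clocks has total rate exactly $(n-1)^2/(2n) \sim n/2$.

```python
from math import comb


def total_event_rate(n):
    """Total rate of the superposition of the C(n,2)^2 independent
    Poisson clocks of rate 2/n^3 each; equals (n-1)^2 / (2n)."""
    return comb(n, 2) ** 2 * 2 / n ** 3


print(total_event_rate(10))                    # 9^2 / 20 = 4.05
print(total_event_rate(10 ** 4) / (10 ** 4 / 2))  # close to 1 for large n
```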
As proved in \cite{spencer2007birth}, the Bohman-Frieze model also displays a phase transition, with critical time $t_c \approx 1.1763$. We now summarize some results from the latter paper that characterize this critical parameter in terms of the behavior of certain differential equations. The following notation and definitions mostly follow \cite{janson2010phase}. Let $\mathcal{C}_n^{\scriptscriptstyle (i)}(t)$ denote the size of the $i^{th}$ largest component in ${\bf {BF}}_n(t)$, and $\boldsymbol{C}_n(t)=(\mathcal{C}_n^{\scriptscriptstyle (i)}(t) : i \ge 1)$ the component size vector. For convenience, we define $\mathcal{C}_n^{\scriptscriptstyle (i)}(t)=0$ whenever $t < 0$. For fixed time $t \ge 0$, let $X_n(t)$ denote the number of singletons at this time and $\bar{x}(t)=X_n(t)/n$ the density of singletons. For simplicity, we have suppressed the dependence on $n$ in the notation. For $ k = 2,3$, let \begin{equation} \mathcal{S}_k(t)=\sum_{i \ge 1} (\mathcal{C}_n^{\scriptscriptstyle (i)}(t))^k \label{ins2141}\end{equation} and let $\bar{s}_k(t)=\mathcal{S}_k(t)/n$. Then, by \cite{spencer2007birth}, there exist deterministic functions $x(t), s_2(t), s_3(t)$ such that for each fixed $t\geq 0$: $$\bar{x}(t)\convp x(t),\qquad \bar{s}_k(t) \convp s_k(t) \qquad \mbox{for } k=2,3,$$ as $n\to\infty$. The limiting function $x(t)$ is continuous and differentiable for all $t\in {\mathbb{R}}_+$. For $k=2,3$, there exists $1<t_c< \infty$ such that $s_k(t)$ is finite, continuous and differentiable for $0\le t< t_c$, and $s_k(t) = \infty$ for $t\geq t_c$. Furthermore, $x, s_2, s_3$ solve the following differential equations. 
\begin{align} x'(t)&=-x^2(t)-(1-x^2(t))x(t) \qquad &\mbox{ for } t \in [0,\infty), \qquad x(0)=1 \label{eqn:xss-def}\\ s_2'(t)&=x^2(t)+(1-x^2(t))s_2^2(t) \qquad &\mbox{ for } t \in [0,t_c), \qquad s_2(0)=1\label{ins-s2}\\ s_3'(t)&=3x^2(t)+3(1-x^2(t))s_2(t)s_3(t) \qquad &\mbox{ for } t \in [0,t_c), \qquad s_3(0)=1.\label{ins-s3} \end{align} This constant $t_c = t_c(bf)$ is the critical time such that whp, for $t< t_c$, the size of the largest component in ${\bf {BF}}(t)$ is $O(\log{n})$, while for $t> t_c$ there exists a giant component of size $\Theta(n)$ in ${\bf {BF}}(t)$. Furthermore, by \cite{janson2010phase} (Theorem 3.2), there exist constants \begin{align} \alpha &= (1-x^2(t_c))^{-1} \approx 1.063 \label{eqn:alpha-def}\\ \beta &=\lim_{t\uparrow t_c} \frac{s_3(t)}{[s_2(t)]^3}\approx 0.764 \label{eqn:beta-def} \end{align} such that as $t\uparrow t_c$ \begin{align} s_2(t) &\sim \frac{\alpha}{t_c-t} \label{eqn:s2-scaling-crit} \\ s_3(t) &\sim \beta (s_2(t))^3 \sim \beta \frac{\alpha^3}{(t_c-t)^3}. \label{eqn:s3-scaling-crit} \end{align} \subsection{Main results} \label{sec:results} Our goal in this work is to establish a limit theorem of the form in (\ref{ins1629n}) for the BF process. We begin with a precise definition of the standard eternal multiplicative coalescent process $\boldsymbol{X}$ introduced in Section \ref{sec-int}. For $x \in l^2$, let $\mbox{ord}(x) \in l^2_{\downarrow}$ be the reordering of the vector $x$ in decreasing order. For $x\in l^2_{\downarrow}$, $1\le i< j<\infty$, let \[x^{ij}=\mbox{ord}(x + \langle x,e_j\rangle (e_i-e_j)),\] where $\{e_i\}$ is the canonical basis in $l^2$. Namely, $x^{ij}$ is the vector obtained by merging the $i$-th and $j$-th `components' in $x$ and reordering the vector. 
Aldous \cite{aldous1997brownian} showed that there is a Feller Markov process with sample paths in $\mathcal{D}([0,\infty):l^2_{\downarrow})$ with infinitesimal generator \begin{equation} \mathcal{A}_{\mbox{\tiny{MC}}} f(x) = \sum_{i<j}x_ix_j (f(x^{ij})-f(x)), \; x \in l^2_{\downarrow}, \; f: l^2_{\downarrow} \to \mathbb{R}.\label{ins1723n}\end{equation} This Markov process describes a coalescence dynamics where at any time instant the $i$-th and $j$-th clusters merge at rate equal to the product of the sizes of the two clusters. One special choice of initial distribution for this Markov process is particularly relevant for the study of asymptotics of random graph models; we now describe this distribution. Let $\{W(t)\}_{t\ge 0}$ be a standard Brownian motion and, for fixed $\lambda \in {\mathbb{R}}$, define \[W_\lambda(t) = W(t)+\lambda t-\frac{t^2}{2},\; t \ge 0.\] Let $\bar W_{\lambda}$ denote the reflected version of $W_{\lambda}$, i.e., \begin{equation} \bar{W}_\lambda(t) = W_\lambda(t) - \min_{0\leq s\leq t} W_\lambda(s), \; t \ge 0. \label{eqn:inh-ref-bm} \end{equation} Define an excursion of $\bar W_\lambda$ to be an interval $(l,u) \subset [0,+\infty)$ such that $\bar W_\lambda(l)=\bar W_\lambda(u)=0$ and $\bar W_\lambda(t)>0$ for all $t \in (l,u)$, and define $u-l$ to be the size of the excursion. Order the sizes of the excursions of $\bar W_\lambda$ as \[\xi_1(\lambda)> \xi_2(\lambda)> \xi_3(\lambda)> \cdots\] and write $\Xi(\lambda) = (\xi_i(\lambda):i\geq 1).$ Then $\Xi(0)$ defines an $l^2_{\downarrow}$-valued random variable. Denote by $\{\boldsymbol{X}(\lambda)\}_{\lambda \ge 0}$ the $l^2_{\downarrow}$-valued Markov process with initial distribution the probability law of $\Xi(0)$ and infinitesimal generator as in (\ref{ins1723n}). Then \cite{aldous1997brownian} shows that for each $\lambda \in (0,\infty)$, $\boldsymbol{X}(\lambda)$ has the same law as $\Xi(\lambda)$. 
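To build intuition for the excursion construction of $\Xi(\lambda)$, one can approximate $\bar W_\lambda$ on a grid. The following rough simulation sketch is ours and plays no role in the argument; the discretization (step size, horizon, and the convention that an excursion closes when the path sets a new running minimum) is an ad hoc choice.

```python
import random


def excursion_sizes(lam, T=4.0, dt=1e-3, seed=1):
    """Euler scheme for W_lam(t) = W(t) + lam*t - t**2/2 on [0, T]; the path
    is reflected at its running minimum and the lengths of the excursions of
    the reflected path are returned in decreasing order."""
    rng = random.Random(seed)
    w, running_min, start = 0.0, 0.0, 0.0
    lengths = []
    for k in range(1, int(T / dt) + 1):
        t = k * dt
        # increment of W_lam: Gaussian step plus drift (lam - t) dt
        w += rng.gauss(0.0, dt ** 0.5) + (lam - t) * dt
        if w <= running_min:      # reflected path is back at zero
            running_min = w
            if t - start > dt:    # record the excursion just completed
                lengths.append(t - start)
            start = t
    if T - start > dt:            # final, possibly incomplete, excursion
        lengths.append(T - start)
    return sorted(lengths, reverse=True)
```

On a single run this produces a decreasing vector of excursion lengths whose total is at most the horizon $T$, a crude finite-$T$ stand-in for $\Xi(\lambda)$.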
In fact, \cite{aldous1997brownian} shows that the process $\boldsymbol{X}$ can be extended for $\lambda \in (-\infty, \infty)$, namely there is a process with sample paths in $\mathcal{D}((-\infty,\infty):l^2_{\downarrow})$, denoted once more as $\boldsymbol{X}$, such that for every $\lambda \in (-\infty, \infty)$, $\boldsymbol{X}(\lambda) =_d \Xi(\lambda)$ and $\{\boldsymbol{X}(\lambda+t)\}_{t\ge 0}$ is a Markov process with generator $\mathcal{A}_{\mbox{\tiny{MC}}}$. This description uniquely characterizes a probability measure on $\mathcal{D}((-\infty,\infty):l^2_{\downarrow})$ representing the probability law of $\boldsymbol{X}$. The process $\boldsymbol{X}$ is called the {\em eternal standard multiplicative coalescent}. We are now in a position to state our main result. For the rest of this work $t_c=t_c(bf)$ will denote the critical point for the continuous time Bohman-Frieze process as defined above. Recall the constants $\alpha$ and $\beta$ defined in \eqref{eqn:alpha-def} and \eqref{eqn:beta-def}. \begin{Theorem} \label{theo:main} For $\lambda \in {\mathbb{R}}$, let \begin{equation} \label{eqn:res-com-def} {\bar \boldsymbol{C}}_n^{\scriptscriptstyle BF}(\lambda) = \left(\frac{\beta^{1/3}}{n^{2/3}}\mathcal{C}^{\scriptscriptstyle (i)}_n \left(t_c+ \beta^{2/3} \alpha\frac{\lambda}{n^{1/3}} \right): i\geq 1\right) \end{equation} be the rescaled component sizes of the Bohman-Frieze process at time $t_c+\alpha\beta^{2/3} \frac{\lambda}{n^{1/3}} $. Then \[\bar {\boldsymbol{C}}_n^{\scriptscriptstyle BF} \stackrel{d}{\longrightarrow} \boldsymbol{X}\] as $n\to\infty$, where $\boldsymbol{X}$ is the eternal standard multiplicative coalescent, and $\convd$ denotes weak convergence in the space $\mathcal{D}((-\infty, \infty):l^2_{\downarrow})$. 
In particular, for each fixed $\lambda \in \mathbb{R}$, the rescaled component sizes in the critical window satisfy ${\bar \boldsymbol{C}}_n^{\scriptscriptstyle BF}(\lambda)\convd \Xi(\lambda)$, where $\Xi(\lambda)$ are the sizes of the excursions of the reflected inhomogeneous Brownian motion in \eqref{eqn:inh-ref-bm}. \end{Theorem} \section{Discussion} \label{sec:disc} Let us now give a wide-ranging discussion of the implications of the above result and of our proof techniques. \renewcommand{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item {\bf Dynamic network models:} The formulation and study of dynamic network models, such as the Bohman-Frieze process, wherein one has both randomness and choice, are relatively new. Motivated by a question of Achlioptas as to whether simple modifications of the Erd\H{o}s-R\'{e}nyi random graph model could allow one to avoid the giant component for a longer amount of time, such models now fall loosely under the term {\it Achlioptas processes} (see e.g.\ \cite{bohman2001avoiding,bohman2004avoidance,spencer2007birth,krivelevich2010hamiltonicity}). The models considered in the above papers use {\it bounded size} rules, wherein there is a fixed $K$ such that, when making a choice between the two randomly selected edges, all component sizes greater than $K$ are treated identically (e.g.\ $K=1$ for the model considered in this study). Much interest has been generated by models formulated in \cite{achlioptas2009explosive}, wherein interesting examples of {\it unbounded size} rules have been analyzed. Simulations of such models seem to suggest a new phenomenon called ``explosive percolation'', wherein the phase transition from a subcritical regime to a supercritical regime with a giant component appears much more ``abruptly'', in that the largest component seems to increase from a size smaller than $\sqrt{n}$ to a size larger than $n/2$ in a very small scaling window. 
Examples of such rules include the product rule, wherein one chooses two edges at random and connects the edge that \emph{minimizes} the product of the component sizes of its two end points. See \cite{d2010local,chen2010explosive} for additional models of this type. Recently, in \cite{riordan2011achlioptas} it was shown that for a number of such models the phase transition is actually ``continuous'', so for such models one expects behavior similar to that in the Erd\H{o}s-R\'{e}nyi random graph model. However, understanding what happens at criticality and the behavior of the scaling window is a challenging mathematical program. The techniques in this paper have the potential to be extended to the analysis of a number of such models, which we attempt to do in work in progress. \item {\bf Recent results on the Bohman-Frieze model:} After this work was submitted for publication, we came across the interesting preprint \cite{bf-spencer-perkins-kang}, announced around the same time. The latter paper studies the Bohman-Frieze process in the cases $t=t_c-\varepsilon$ and $t=t_c+\varepsilon$, for fixed $\varepsilon>0$. In \cite{bf-spencer-perkins-kang} the largest and second largest components are studied, and bounds on the sizes of these components, as well as on the number of surplus edges amongst all components in these regimes, are derived. The techniques used in \cite{bf-spencer-perkins-kang} for understanding the size of the largest component in the subcritical regime are very different from those used in the current paper. In particular, our work requires an understanding of the fine scale asymptotics of the entire vector of component sizes at criticality (properly rescaled). 
For identifying the scaling window and how the giant component emerges via the merger of small components through the scaling window, we need more refined results at the critical value; in particular, we need to study the behavior as $\varepsilon = \varepsilon(n)\to 0$ reasonably quickly -- see Proposition \ref{prop:main}, where precise asymptotic results for $\varepsilon= 1/n^{\gamma}$, $\gamma \in (1/6,1/5)$, are obtained. Conjecture 1 in \cite{bf-spencer-perkins-kang} states that one expects an upper bound of $\varepsilon^{-2}\log n$ on the largest component at the time instant $t_c-\varepsilon$. This should be compared with the $(\log n)^4/\varepsilon^2$ upper bound for times $t_c-\varepsilon$ established in Proposition \ref{prop:com-size} of the current work, not just for fixed $\varepsilon$ but for $\varepsilon=\varepsilon(n)\to 0$. In fact, the proposition establishes an estimate that is {\em uniform} over the time interval $(0, t_c - n^{-\gamma})$, $\gamma \in (0, 1/5)$. This proposition is at the heart of our analysis, and its proof requires significant work and new ideas and techniques for general random graph models with immigrating vertices and for near-critical multi-type branching processes with general state spaces; see Sections \ref{sec:related-models} and \ref{sec:largest-com}. \item {\bf Multiplicative coalescent:} As shown in the current work and several other papers, see e.g.\ \cite{aldous2000random,bhamidi-hofstad-van,bhamidi2009novel,nachmias-peres}, the multiplicative coalescent arises as the limit object in a large number of random graph models at criticality. Thus the asymptotics of large random graphs have intimate connections with the general theory of coalescent (and fragmentation) processes, which is currently a very active area in probability, see e.g.\ \cite{aldous1999deterministic,bertoin-frag-book,pitman-book}. 
\item {\bf Starting from an arbitrary configuration:} In this work we have only considered the case where we start with the empty configuration ${\bf 0}_n$. One can imagine starting from a different configuration and then attempting to analyze the emergence of the giant component. Along the lines of \cite{aldous1998entrance}, under the assumption that the starting configuration satisfies suitable regularity conditions, it should be possible to study the asymptotic behavior of this model in terms of the entrance boundary of the standard multiplicative coalescent. This will be pursued in future work. \item {\bf Proof techniques:} The study of critical random graphs has generated an enormous amount of interest in the probabilistic combinatorics community, and a large number of techniques have been developed for the fine scale asymptotics of such models, ranging from generating function and counting arguments (see e.g.\ \cite{janson1994birth}, \cite{bollobas-rg-book}); branching processes exploring local neighborhoods of the graph (see e.g.\ \cite{janson-luczak-bb}, \cite{molloy-reed-crit}, and the references therein); multi-type branching processes (see e.g.\ \cite{bollobas-riordan-janson} and the references therein); differential equation based techniques (see e.g.\ \cite{spencer2007birth}, \cite{bohman2001avoiding} and, for a comprehensive survey, \cite{wormald1999differential}); and the breadth-first walk approach coupled with the martingale central limit theorem (see e.g.\ \cite{karp1990transitive}, \cite{martin-lof} for two of the first studies using this technique; \cite{aldous1997brownian}, where this was used to explore the fine scale structure of the classical Erd\H{o}s-R\'{e}nyi random graph at criticality and its connections with the multiplicative coalescent; and \cite{nachmias-peres}, \cite{bhamidi-hofstad-van} for further results using this technique). 
Our methods are inspired by \cite{aldous2000random}, which uses estimates on the size of the largest component in the subcritical window to analyze the asymptotic behavior of the sum of squares and cubes of component sizes near criticality. Estimates on the latter in turn allow the verification of the sufficient conditions given in \cite{aldous1997brownian} for convergence to the standard multiplicative coalescent. Although the general outline of our proof is similar to that of \cite{aldous2000random} as described above, it turns out that for general Achlioptas processes completing this program is a rather challenging problem, and for the BF model treated in the current work we need to develop quite a bit of machinery, which is done in Sections \ref{sec:related-models} and \ref{sec:largest-com}, in order to obtain the required estimates. \item {\bf Other structural properties of components:} In this study we focus on the sizes of the components in the critical scaling window. Developing probabilistic methodology for understanding the actual structure of the large components is also of great interest; see for example \cite{braunstein2003optimal} for one of the first studies, at a non-rigorous level, exploring the connection between the structure of these components at criticality and the internal structure of the minimal spanning tree in various random graph models (the ``strong disorder'' regime in statistical physics). In particular, it was conjectured in \cite{braunstein2003optimal} that the diameter of the minimal spanning trees in various random graph models scales like $n^{1/3}$. Rigorous studies have now been carried out for the Erd\H{o}s-R\'{e}nyi random graph model and fascinating connections have been discovered between the critical random graphs and the famous continuum random tree of Aldous (see \cite{addario2009critical}, \cite{addario2009continuum}). 
Proving such structural convergence of the entire components in dynamic network settings, such as Achlioptas processes at criticality, to random fractals such as continuum random trees would be of great interest in a number of different fields. \end{enumerate} \section{Proof idea} \label{sec:proof-idea} Let us now give an idea of the proof. We begin by showing in Proposition \ref{prop:main} below that, just before the critical window, the configuration of the components satisfies some important regularity properties. This proposition will be used in Section \ref{sec:proof-main} in order to apply a result of \cite{aldous1997brownian} that gives sufficient conditions for convergence to the multiplicative coalescent. \begin{Proposition} \label{prop:main} Let $\gamma \in (1/6,1/5)$ and define $t_n = t_c- n^{-\gamma}$. Then we have \begin{align} \frac{n^2 \mathcal{S}_3(t_n)}{\mathcal{S}_2^3(t_n)} &\convp \beta \label{eqn:s3-s2}\\ \frac{n^{4/3}}{\mathcal{S}_2(t_n)} - \frac{n^{-\gamma+1/3}}{\alpha} &\convp 0 \label{eqn:s2-tn-n-alpha}\\ \frac{n^{2/3}\mathcal{C}_n^{\scriptscriptstyle (1)}(t_n)}{\mathcal{S}_2(t_n)} &\convp 0. \label{eqn:max-s2} \end{align} \end{Proposition} Now note that $t_n$ can be written as \[t_n = t_c+ \beta^{2/3}\alpha\frac{\lambda_n}{n^{1/3}}\] where \[\lambda_n = -\frac{n^{-\gamma+1/3}}{\alpha\beta^{2/3}} \to -\infty\] as $n\to\infty$. The above proposition implies that the configuration of rescaled component sizes, for large $n$ at time ``$-\infty$'', satisfies the regularity conditions for the standard multiplicative coalescent (see Proposition 4 in \cite{aldous1997brownian}). Once the above has been proved, the second step is to show that, through the critical window, the component sizes merge as in the multiplicative coalescent, at rate proportional to the product of the rescaled component sizes. This, together with arguments similar to \cite{aldous2000random}, will complete the proof of the main result. 
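For completeness, the rewriting of $t_n$ in terms of $\lambda_n$ above is a direct substitution:
\[
t_c+ \beta^{2/3}\alpha\frac{\lambda_n}{n^{1/3}} = t_c - \beta^{2/3}\alpha \cdot \frac{n^{-\gamma+1/3}}{\alpha\beta^{2/3}} \cdot \frac{1}{n^{1/3}} = t_c - n^{-\gamma} = t_n.
\]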
\\ Let us now outline the framework of the proof: \begin{itemize} \item The bound on the largest component $\mathcal{C}_n^{\scriptscriptstyle (1)}(t)$ when $t \uparrow t_c$ (Proposition \ref{prop:com-size}) plays a crucial role in proving the statements in Proposition \ref{prop:main}. In order to achieve this, we introduce a series of related models from Section \ref{sec:model-equiv} through Section \ref{sec:model-irg}. \item Section \ref{sec:largest-com} uses these models to prove asymptotically tight bounds on the size of the largest component through the subcritical window. The main goal of this section is to prove Proposition \ref{prop:com-size}. \item Section \ref{sec:analysis-s2s3} uses the bounds on the largest component from Proposition \ref{prop:com-size} to analyze the sum of squares and cubes of component sizes near the critical window. As one can imagine, when working so close to the critical window one needs rather careful estimates. Our arguments are based on a precise analysis of quadratic variation processes for certain martingales associated with the BF model. This section completes the proof of Proposition \ref{prop:main}. \item Finally, in Section \ref{sec:proof-main} we use Proposition \ref{prop:main} and a coupling with the standard multiplicative coalescent, in a manner similar to \cite{aldous2000random}, to prove the main result. \end{itemize} Without further ado, let us now start with the proofs. \section{An estimate on the largest component} \label{sec:related-models} The following estimate on the largest component is the key ingredient in our analysis. Recall that $t_c$ denotes the critical time for the BF process. \begin{Proposition} \label{prop:com-size} Let $\gamma \in (0, 1/5)$ and let $I_n(t) \equiv \mathcal{C}_n^{\scriptscriptstyle (1)}(t)$ be the largest component of ${\bf {BF}}_n(t)$. 
Then, for some $B\equiv B(\gamma) \in (0, \infty)$, $$\mathbb{P} \{ I_n(t) \le m(n,t), \forall t < t_c-n^{-\gamma}\} \to 1, \mbox{ as } n \to \infty,$$ where \begin{equation} \label{ins1711} m(n,t)= B \frac{(\log n)^4}{(t_c-t)^2}. \end{equation} \end{Proposition} The proof of Proposition \ref{prop:com-size} will be completed in Section \ref{sec:proof-prop-reduced}. In the current section we will give constructions of some auxiliary random graph processes that are key to our analysis. Although not pursued here, we believe that analogous constructions will be key ingredients in the treatment of more general random graph models as well. The section is organized as follows. \begin{itemize} \item In Section \ref{sec:notation-con} we give the basic notation and mathematical conventions used in this paper.\\ \item In Section \ref{sec:model-equiv} we will carry out a preliminary analysis of the BF process and identify three deterministic maps $a_0, b_0, c_0$ from $[0, \infty)$ to $[0,1]$ that play a fundamental role in our analysis.\\ \item Guided by these deterministic maps, in Section \ref{sec:model-rgiva} we will define a random graph process with immigrating vertices and attachments (RGIVA) which is simpler to analyze than, and is suitably `close' to, the Bohman-Frieze process. A precise estimate on the approximation error introduced through this model is obtained in Section \ref{sec:proof-prop-reduced}.\\ \item In Section \ref{sec:model-irg} we will introduce an inhomogeneous random graph (IRG) model associated with a given RGIVA model such that the two have identical component volumes at all times. This allows for certain functional analytic techniques to be used in estimating the maximal component size. We will also make an additional approximation to the IRG model which will facilitate the analysis. 
\\ \item In Section \ref{sec:summary-model} we summarize connections between the various models introduced above.\\ \end{itemize} \subsection{Notation} \label{sec:notation-con} \subsubsection{Graphs and random graphs} A graph ${\bf{G}}=\{\mathcal{V}, \mathcal{E}\}$ consists of a vertex set $\mathcal{V}$ and an edge set $\mathcal{E}$, where $\mathcal{V}$ is a subset of some type space $\mathcal{X}$ and $\mathcal{E}$ is a subset of all possible edges $\{ \{v_1,v_2\}:v_1 \ne v_2 \in \mathcal{V}\}$. An example of a type space is $[n]=\{1,2,\ldots,n\}$. Frequently we will assume $\mathcal{X}$ to have additional structure, for example to be a measure space $(\mathcal{X},\mathcal{T},\mu)$. When $\mathcal{V}$ is a finite set, we write $|\mathcal{V}|$ for its cardinality. ${\bf{G}}$ is called the \textbf{null graph} if $\mathcal{V}=\emptyset$, and we write ${\bf{G}}=\emptyset$. ${\bf{G}}$ is called an \textbf{empty graph} if $|\mathcal{V}|=n$ and $\mathcal{E}=\emptyset$, and we write ${\bf{G}} = {\bf0}_n$. Given two graphs ${\bf{G}}_i=\{\mathcal{V}_i,\mathcal{E}_i\}$, $i=1,2$, ${\bf{G}}_1$ is said to be a \textbf{subgraph} of ${\bf{G}}_2$ if and only if $\mathcal{V}_1 \subset \mathcal{V}_2$ and $\mathcal{E}_1 \subset \mathcal{E}_2$, and we denote this as ${\bf{G}}_1 \le {\bf{G}}_2$ (or equivalently ${\bf{G}}_2 \ge {\bf{G}}_1$). We write ${\bf{G}}_1 ={\bf{G}}_2$ if ${\bf{G}}_1 \le {\bf{G}}_2$ and ${\bf{G}}_1 \ge {\bf{G}}_2$. A connected component $\mathcal{C}=\{\mathcal{V}_0,\mathcal{E}_0\}$ of a graph ${\bf{G}}=\{\mathcal{V}, \mathcal{E}\}$ is a subgraph which is connected (i.e.\ there is a path between any two vertices in $\mathcal{C}$). The number of vertices in $\mathcal{C}$ will be called the size of the component, and frequently we will denote the size and the component by the same symbol. Let $\mathcal{G}$ be the set of all possible graphs $(\mathcal{V}, \mathcal{E})$ on a given type space $\mathcal{X}$. 
When $\mathcal{V}$ is countable, we will consider $\mathcal{G}$ to be endowed with the discrete topology and the corresponding Borel sigma field and refer to a random element of $\mathcal{G}$ as a random graph. All random graphs in this work are given on a fixed probability space $(\Omega, \mathcal{F}, \mathbb{P})$ which will usually be suppressed in our proofs. \subsubsection{Probability and analysis} All the unspecified limits are taken as $n \to +\infty$. Given a sequence of events $\{E_n\}_{n\ge 1}$, we say $E_n$ (or $E$) occurs with high probability (whp) if $\mathbb{P}\{E_n\} \to 1$. For functions $f, g: \mathbb{N} \to \mathbb{R}$, we write $g=O(f)$ if for some $C \in (0, \infty)$, $\limsup g(n)/f(n) < C$ and $g=\Theta(f)$ if $g=O(f)$ and $f=O(g)$. Given two sequences of random variables $\{\xi_n\}$ and $\{\zeta_n\}$, we say $\xi_n=O(\zeta_n)$ whp if there is a $C \in (0, \infty) $ such that $\xi_n < C \zeta_n$ whp, and write $\xi_n=\Theta(\zeta_n)$ whp if there exist $0 < C_1 \le C_2 < \infty$ such that $C_1 \zeta_n<\xi_n<C_2 \zeta_n$ whp. Occasionally, when clear from the context, we suppress `whp' in the statements. We also use the following little $o$ notation: For a sequence of real numbers $g(n)$, we write $g=o(f)$ if $\limsup|g(n)/f(n)|=0$. For a sequence of random variables $\xi_n$, we write ``$\xi_n=o_p(f)$'' if $\xi_n/f(n)$ converges to $0$ in probability. For a real measurable function $\psi$ on a measure space $(\mathcal{X},\mathcal{T},\mu)$, the norms $\|\psi\|_2$ and $\|\psi\|_\infty$ are defined in the usual way. We use $\convp$ and $\convd$ to denote the convergence in probability and in distribution respectively. We use $=_d$ to denote the equality of random elements in distribution. Suppose that $(S, \mathcal{S})$ is a measurable space and we are given a partial ordering on $S$. 
Given two $S$-valued random variables $\xi_1, \xi_2$, we say a pair of $S$-valued random variables $\xi_1^*, \xi_2^*$ given on a common probability space defines a coupling of $(\xi_1, \xi_2)$ if $\xi_i =_d\xi_i^*$, $i=1,2$. We say the $S$-valued random variable $\xi_1$ \textbf{stochastically dominates} $\xi_2$, and write $\xi_1 \ge_d \xi_2$, if there exists a coupling of the two random variables, say $\xi_1^*$ and $\xi_2^*$, such that $\xi_1^* \ge \xi_2^*$ a.s. For two sequences of $S$-valued random elements $\xi_n$ and $\tilde \xi_n$, we say ``$\xi_n \le_d \tilde \xi_n$ whp'' if there exists a coupling of $\xi_n$ and $\tilde \xi_n$ for each $n$ (denoted by $\xi_n^*$ and $\tilde \xi_n^*$) such that $\xi_n^* \le \tilde \xi_n^*$ whp. Two examples of $S$ that are relevant to this work are $\mathcal{D}([0,T]: \mathbb{R})$ and $\mathcal{D}([0,T]: \mathcal{G})$ with the natural associated partial ordering. \subsubsection{Other conventions} We always use $n, m, k, i, j$ to denote non-negative integers unless specified otherwise. We use $s, t, T$ to denote the time parameter for continuous time (stochastic) processes. The scaling parameter is denoted by $n$. Throughout this work $T = 2t_c$, which is a convenient upper bound for the time parameters of interest. We use $d_1, d_2, \ldots$ for constants whose specific values are not important. Some of them may appear several times and the values might not be the same. We use $C_1, C_2, \ldots$ for constants that appear in the statements of theorems. \subsection{A preliminary analysis of the Bohman-Frieze process} \label{sec:model-equiv} Recall that ${\bf {BF}}_n(t)$ denotes the BF process at time $t$ and note that ${\bf {BF}}_n$ defines a stochastic process with sample paths in $\mathcal{D}([0, T]:\mathcal{G})$. 
Also recall that $\mathcal{C}_n^{\scriptscriptstyle (i)}(t)$ denotes the size of the $i^{th}$ largest component in ${\bf {BF}}_n(t)$, $\boldsymbol{C}_n(t)=(\mathcal{C}_n^{\scriptscriptstyle (i)}(t) : i \ge 1)$ is the vector of component sizes and $X_n(t)$ denotes the number of singletons in ${\bf {BF}}_n(t)$. We let $\mathcal{F}_t \equiv \mathcal{F}^n_t = \sigma \{ {\bf {BF}}_n(s), s \le t\}$ and refer to it as the natural filtration for the BF process. At any fixed time $t>0$, let $\mathcal{COM}(t)$ denote the collection of all non-singleton components \[\mathcal{COM}(t)= \set{\mathcal{C}_n^{\scriptscriptstyle (i)}(t): |\mathcal{C}_n^{\scriptscriptstyle (i)}(t)|\geq 2 }.\] Recall that $\bar{x}(t)=X_n(t)/n$. We will now do an informal calculation of the rate at which an edge $e=\{v_1,v_2\}$ is added to the graph ${\bf {BF}}(t)$. There are three different ways an edge can be added: (i) both $v_1$ and $v_2$ are singletons, (ii) only one of them is a singleton, (iii) neither of them is a singleton. \\ \textbf{Analysis of the three types of events:} \\ (i) Both $v_1$ and $v_2$ are singletons. We will refer to a component formed by connecting two singletons as a \textbf{doubleton}. This will happen at rate \begin{equation} \frac{2}{n^3} \left[{X_n(t) \choose 2} {n \choose 2} +\left({n \choose 2}-{X_n(t) \choose 2}\right){X_n(t) \choose 2}\right] \stackrel{\scriptscriptstyle def}{=} n \cdot a_n^*(\bar{x}(t)). \label{eqn:ins609} \end{equation} The first product in the square brackets is the count of all possible ${\bf e}=(e_1,e_2) \in \mathcal{E}^2$ such that $e_1$ joins two singletons and thus will be added to the graph, while the second product is the count of all ${\bf e}=(e_1,e_2) \in \mathcal{E}^2$ such that the first edge $e_1$ does {\bf not} connect two singletons while $e_2$ connects two singletons and will be added. 
\\ Define $a_0: [0,1] \to [0,1]$ as \begin{equation} a_0(y) = 2\left(\frac{y^2}{2} \cdot \frac{1}{2}+\left(\frac{1}{2}-\frac{y^2}{2}\right) \frac{y^2}{2}\right)=\frac{1}{2} (y^2+(1-y^2)y^2). \label{eqn:a-def} \end{equation} It is easy to check that \begin{equation} \label{eqn:a-astar} a_n^*(\bar{x}(t)) = a_0(\bar{x}(t))+r_a(t), \mbox{ where } \sup_{t}|r_a(t)| \le 5/n. \end{equation} Recall that $x(t)$ is the solution of the differential equation \eqref{eqn:xss-def}. To simplify notation we will write $a_n^*(\bar{x} (t))=a^*(\bar{x})=a^*(t)=a^*$ and $a_0(t)=a_0(x(t))$ interchangeably. Similar conventions will be followed for the functions $c_n^*, c_0$ and $b_n^*, b_0$ that will be introduced below. We shall later show that $\sup_{t\le T}|\bar{x}_n(t) - x(t)| \to 0$ in probability (see Lemma \ref{lemma:error-diff-eqn} of this paper; see also \cite{spencer2007birth}). This in particular implies that $\sup_{t\le T}|a^*_n(t) - a_0(t)| \to 0$ in probability. \\ (ii) Only one of them is a singleton: This will happen if and only if $e_1$ does not connect two singletons while $e_2$ connects a singleton and a non-singleton, thus at the rate \begin{equation} \frac{2}{n^3} \left({n \choose 2}-{X_n(t) \choose 2}\right) (n-X_n(t)) X_n(t). \label{eqn:ins633} \end{equation} We are also interested in the rate at which a given non-singleton vertex (say, $v_0$) is connected to any singleton, which is \begin{equation} \frac{2}{n^3} \left({n \choose 2}-{X_n(t) \choose 2}\right) X_n(t) \stackrel{\scriptscriptstyle def}{=} c^*_n(\bar{x}(t)). \end{equation} Thus at time $t$ a singleton will be added to $\mathcal{COM}(t)$ during the small time interval $(t, t+dt]$, by attaching to a given vertex $v_0 \in \mathcal{COM}(t)$, at rate $c^*(t)$.\\ Define $c_0: [0,1] \to [0,1]$ as \begin{equation} c_0(y)=(1-y^2)y, \; y \in [0,1]. \label{eqn:c-def} \end{equation} Then \begin{equation} c^*(\bar{x}(t))=c_0(\bar{x}(t))+r_c(t) \qquad \mbox{ and } \sup_{t}|r_c(t)| \le 2/n. 
\label{eqn:c-def1} \end{equation} (iii) Neither of them is a singleton: This will happen at the rate \begin{equation} \frac{2}{n^3} \left({n \choose 2}-{X_n(t) \choose 2}\right) {n-X_n(t) \choose 2}. \end{equation} Also, the event that two fixed non-singleton vertices are connected occurs at the rate \begin{equation} \frac{2}{n^3} \left({n \choose 2}-{X_n(t) \choose 2}\right) \stackrel{\scriptscriptstyle def}{=} \frac{1}{n} b^*_n(\bar{x}(t)). \label{ins1516} \end{equation} Let $b_0: [0,1] \to [0, 1]$ be defined as \begin{equation} b_0(y)=1-y^2, \; y \in [0,1]. \label{eqn:b-def} \end{equation} Then \begin{equation} b^*(\bar{x}(t))=b_0(\bar{x}(t))+r_b(t) \qquad \mbox{ and } \sup_{t}|r_b(t)| \le 2/n. \label{eqn:b-def1} \end{equation} Note that for the study of the largest component one may restrict attention to the subgraph $\mathcal{COM}(t)$. The evolution of this subgraph is described in terms of the stochastic processes $a^*(\bar x(t)), b^*(\bar x(t))$ and $c^*(\bar x(t))$. In the next subsection, we will introduce a random graph process that is ``close'' to $\mathcal{COM}(t)$ but easier to analyze. Intuitively, we replace $a^*(t), b^*(t), c^*(t)$ with deterministic functions $a(t),b(t),c(t)$ which are close to $a_0(t),b_0(t),c_0(t)$ (and thus, from Lemma \ref{lemma:error-diff-eqn}, whp close to $a^*(\bar x(t)), b^*(\bar x(t)), c^*(\bar x(t))$) and construct a random graph with dynamics similar to those of $\mathcal{COM}(t)$. \\ \subsection{The random graph process with immigrating vertices and attachment} \label{sec:model-rgiva} In this subsection, we introduce a random graph process with immigrating vertices and attachment (RGIVA). This construction is inspired by \cite{aldous2000random}, where a random graph with immigrating vertices (RGIV) is constructed -- we generalize this construction by including attachments. 
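The error bounds $\sup_t|r_a(t)| \le 5/n$, $\sup_t|r_b(t)| \le 2/n$ and $\sup_t|r_c(t)| \le 2/n$ quoted in the previous subsection can also be checked numerically. The following minimal Python sketch (the function names are ours, for illustration only) evaluates the exact finite-$n$ rates, read off from the binomial event counts above, against the limiting rate functions $a_0, b_0, c_0$:

```python
from math import comb

def rates_exact(n, X):
    """Exact finite-n rates a_n^*, b_n^*, c_n^* when X of the n vertices are
    singletons, read off from the binomial event counts for the BF process."""
    cn2, cx2 = comb(n, 2), comb(X, 2)
    a = 2.0 / n**4 * (cx2 * cn2 + (cn2 - cx2) * cx2)  # doubleton creation (total rate n * a_n^*)
    b = 2.0 / n**2 * (cn2 - cx2)                      # edge between two fixed non-singletons (rate b_n^*/n)
    c = 2.0 / n**3 * (cn2 - cx2) * X                  # singleton attaching to a fixed non-singleton vertex
    return a, b, c

def rates_limit(y):
    """Limiting rate functions a_0, b_0, c_0 evaluated at y = X/n."""
    return 0.5 * (y**2 + (1 - y**2) * y**2), 1 - y**2, (1 - y**2) * y

n = 1000
err = [0.0, 0.0, 0.0]
for X in range(n + 1):
    exact, limit = rates_exact(n, X), rates_limit(X / n)
    for i in range(3):
        err[i] = max(err[i], abs(exact[i] - limit[i]))

print(err)  # each entry is of order 1/n
```

Running this for moderate $n$ confirms that the worst-case discrepancies are of order $1/n$ and comfortably within the stated constants $5/n$ and $2/n$.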
The RGIVA process will be governed by three continuous maps $a,b,c$ from $[0, T]$ to $[0, 1]$ (referred to as {\bf rate functions}) and the graph at time $t$ will be denoted by ${\bf{IA}}_n(t)={\bf{IA}}_n(a,b,c)_t$. When $(a,b,c)$ is sufficiently close to $(a_0, b_0, c_0)$, the RGIVA model well approximates the BF model in a sense that will be made precise in Section \ref{sec:proof-prop-reduced}.\\ \textbf{The RGIVA process} ${\bf{IA}}_n(t)={\bf{IA}}_n(a,b,c)_t$. Given the rate functions $a,b,c$, define ${\bf{IA}}_n(t)$ as follows:\\ (a) ${\bf{IA}}_n(0)=\emptyset$, the null graph;\\ (b) For $t \in [0,T)$, conditioned on ${\bf{IA}}_n(t)$, during the small time interval $(t,t+dt]$, \begin{itemize} \item (immigration) a doubleton (consisting of two vertices and a joining edge) will be born at rate $n \cdot a(t)$, \item (attachment) for any given vertex $v_0$ in ${\bf{IA}}_n(t)$, a new vertex will be created and connected to $v_0$ at rate $c(t)$, \item (edge) for any given pair of vertices $v_1,v_2$ in ${\bf{IA}}_n(t)$, an edge will be added between them at rate $\frac{1}{n} \cdot b(t)$. \end{itemize} The events listed above occur independently of each other.\\ In the special case where $a(t)\equiv b(t) \equiv 1$, $c(t) \equiv 0$, and doubletons are replaced by singletons, the above model reduces to the RGIV model of \cite{aldous2000random}. We note that the above construction closely follows our analysis of the three types of events in Section \ref{sec:model-equiv}, replacing the stochastic processes $a^*(\bar x_n(t)), b^*(\bar x_n(t)), c^*(\bar x_n(t))$ with deterministic maps $a(t), b(t), c(t)$. The following lemma establishes a connection between the Bohman-Frieze process and the RGIVA process. Recall the partial order on the space $\mathcal{D}([0,T]: \mathcal{G})$. \begin{Lemma} \label{lemma:couple-bfia} Let $(a_L,b_L,c_L)$ and $(a_U,b_U,c_U)$ be rate functions. 
Further, let $U \equiv U_n$ be the event $\{ a^*(t)\le a_U(t), b^*(t)\le b_U(t), c^*(t)\le c_U(t) \mbox{ for all } t \in [0,T]\}$ and $L \equiv L_n$ be the event $\{ a^*(t) \ge a_L(t), b^*(t) \ge b_L(t), c^*(t) \ge c_L(t) \mbox{ for all } t \in [0,T]\}$. Define for $t \in [0,T]$ \[ \mathcal{COM}_n^U(t) = \left\{ \begin{array}{l l} \emptyset & \quad \text{on $U^C$}\\ \mathcal{COM}_n(t) & \quad \text{on $U$}\\ \end{array} \right. ;\;\; \mathcal{COM}_n^L(t) = \left\{ \begin{array}{l l} {\bf{IA}}_n(a_L,b_L,c_L)_T & \quad \text{on $L^C$}\\ \mathcal{COM}_n(t) & \quad \text{on $L$}\\ \end{array} \right.\] Then\\ (i) Upper bound: $\mathcal{COM}_n^U \le_d {\bf{IA}}_n^U \equiv {\bf{IA}}_n(a_U,b_U,c_U)$.\\ (ii) Lower bound: $\mathcal{COM}_n^L \ge_d {\bf{IA}}_n^L \equiv {\bf{IA}}_n(a_L,b_L,c_L)$. \end{Lemma} \textbf{Proof:} We only argue the upper bound. The lower bound is proved similarly. Construct ${\bf{IA}}_n^U(t)$ iteratively on $[0, T]$ as described in the definition, and construct $\mathcal{COM}_n^U(t)$ simultaneously by rejecting the proposed change on the graph with probabilities $(1-a^*/a_U)^+$, $(1-b^*/b_U)^+$ and $(1-c^*/c_U)^+$ according to the three types of events. Let $\tau= \inf\{0\le t \le T: a^*(t)>a_U(t) \mbox{ or } b^*(t)>b_U(t) \mbox{ or } c^*(t)>c_U(t) \}$ and set $\mathcal{COM}_n^U(t)$ to be the null graph whenever $t \ge \tau$. This construction defines a coupling of ${\bf{IA}}_n^U$ and $\mathcal{COM}_n^U$ such that $\mathcal{COM}_n^U \le {\bf{IA}}_n^U$ a.s. The result follows. \ \ \rule{1ex}{1ex}\\ \subsection{An inhomogeneous random graph with a weight function} \label{sec:model-irg} In this section we introduce an inhomogeneous random graph (IRG) associated with ${\bf{IA}}_n(a,b,c)$ for given rate functions $a,b,c$. For a general treatment of IRG models we refer the reader to \cite{bollobas-riordan-janson}, which our presentation largely follows. 
We generalize the setting of \cite{bollobas-riordan-janson} somewhat by including a weight function and considering the volume of a component instead of the number of vertices of a component. We begin with a description and some basic definitions for a general IRG model.\\ A \textbf{type space} is a measure space $(\mathcal{X}, \mathcal{T}, \mu)$ where $\mathcal{X}$ is a complete separable metric space (i.e. a Polish space), $\mathcal{T}$ is the Borel $\sigma$-field and $\mu$ is a finite measure. A \textbf{kernel} on the type space $(\mathcal{X}, \mathcal{T}, \mu)$ is a measurable function $\kappa :\mathcal{X}\times \mathcal{X} \to [0, \infty)$. The kernel $\kappa$ is said to be symmetric if $\kappa({\bf{x}}, {\bf{y}})=\kappa({\bf{y}},{\bf{x}})$ for all ${\bf{x}},{\bf{y}} \in \mathcal{X}$. We will also use $x, y$ instead of ${\bf{x}}, {\bf{y}}$ for elements in $\mathcal{X}$ when there is no confusion between an $x \in \mathcal{X}$ and the function $x(t)$ defined in \eqref{eqn:xss-def}. A \textbf{weight function} $\phi$ is a measurable, non-negative function on $(\mathcal{X}, \mathcal{T}, \mu)$. 
A \textbf{basic structure} is a triplet $\{ (\mathcal{X}, \mathcal{T}, \mu), \kappa, \phi\}$, which consists of a type space, a kernel and a weight function.\\ \textbf{The IRG model:} Given a type space $(\mathcal{X}, \mathcal{T}, \mu)$, symmetric kernels $\{\kappa_n\}_{n\ge 1}$, and a weight function $\phi$, a random graph ${\bf{RG}}_n(\kappa_n)$ ($\equiv {\bf{RG}}_n(\kappa_n, \mu) \equiv {\bf{RG}}_n(\kappa_n, \mu, \phi)$), for any integer $n>0$, is constructed as follows:\\ (a) The vertex set $\mathcal{V}$ is the set of points of a Poisson point process on $(\mathcal{X},\mathcal{T})$ with intensity $n\cdot \mu$.\\ (b) Given $\mathcal{V}$, for any two vertices $x,y \in \mathcal{V}$, place an edge between them with probability $\left(\frac{1}{n} \cdot \kappa_n(x,y) \right)\wedge 1$.\\ One can similarly define an IRG model associated with a basic structure $\{ (\mathcal{X}, \mathcal{T}, \mu), \kappa, \phi\}$, where $\kappa$ is a symmetric kernel, by letting $\kappa_n=\kappa$ for all $n$ in the above definition. The weight function $\phi$ is used in defining the volume of a connected component in the above construction of a random graph. Given a component of ${\bf{RG}}_n(\kappa, \mu, \phi)$ whose vertex set is $\mathcal{V}_0$, define $\sum_{x\in \mathcal{V}_0}\phi(x)$ as the \textbf{volume} of the component. One can associate $\kappa$ with an integral operator $\mathcal{K} : L^2(\mu) \to L^2(\mu)$ defined as \begin{equation} \mathcal{K} f (x)=\int_{\mathcal{X}} \kappa(x,y)f(y)\mu(dy). \label{eqn:calk-def} \end{equation} Denote by $\rho = \rho(\kappa)$ the operator norm of $\mathcal{K}$. Then $\rho=\rho(\kappa)=\|\mathcal{K}\|=\sup_{\|f\|_2=1}\|\mathcal{K} f\|_2$. \\ Given rate functions $a,b,c$, there is a natural basic structure and the corresponding IRG model associated with ${\bf{IA}}_n(a,b,c)$, which we now describe. \\ Fix $t \in [0,T]$. 
Then the following two-stage construction describes an equivalent (in law) procedure for obtaining ${\bf{IA}}_n(a,b,c)_t$:\\ \textbf{Stage I}: Recall that transitions in ${\bf{IA}}_n(a,b,c)$ are caused by three types of events: {\em immigration}, {\em attachment} (to an existing vertex) and {\em edge formation} (between existing vertices). Consider the random graph obtained by including all the immigration and attachment events until time $t$ but ignoring the edge formation events. We call the components resulting from this construction clusters. Note that each cluster consists of exactly one doubleton (which starts the formation of the cluster) and possibly other vertices obtained through later attachments. Note that doubletons immigrate at rate $n \cdot a(s)$ and, supposing that a doubleton is born at time $s$, the size of the cluster at time $u$, $s \le u \le t$, denoted by $w(u)$, evolves according to an integer-valued time-inhomogeneous jump Markov process starting at $w(s)=2$ and with infinitesimal generator $\mathcal{A}(u)$ given by \begin{equation} \label{eqn:ws-def}\mathcal{A}(u) f(r) = c(u) r \cdot \left(f(r+1) - f(r)\right), \; f: \mathbb{N} \to \mathbb{R}, s \le u \le t.\end{equation} We set $w(u) = 0$ for $0 \le u < s$ and denote this cluster, which starts at instant $s$, by $(s,w)$. \textbf{Stage II:} Given a realization of the random graph of Stage I, we add edges to the graph. Each pair of vertices will be connected during $(s,s+ds]$ at rate $\frac{1}{n}b(s)$. Thus the number of edges between two clusters ${\bf{x}}=(s, w), {\bf{y}}=(r, \tilde w)$ at time instant $t$ is a Poisson random variable with mean $\frac{1}{n} \int_0^t w(u)\tilde w(u)b(u)du$. Consequently, \begin{align} \mathbb{P}\{ {\bf{x}} \mbox{ and } {\bf{y}} \mbox{ are connected } | \mbox{ Stage I} \} &=1-\exp \{ -\frac{1}{n} \int_0^t w(u)\tilde w(u)b(u)du \}\\ &\le \frac{1}{n} \int_0^t w(u)\tilde w(u)b(u)du. 
\label{eqn:approx-ker} \end{align} \\ It is easy to see that the graph resulting from this two stage construction has the same distribution as ${\bf{IA}}_n(a,b,c)_t$. We now introduce an IRG model associated with the above construction in which each cluster is treated as a single point in a suitable type space and the size of the cluster is recorded using an appropriate weight function. Let $\mathcal{X} = [0,T] \times \mathcal{W}$, where $\mathcal{W}=\mathcal{D}([0,T]: \mathbb{N})$ is the Skorohod $D$-space with the usual Skorohod topology. Denote by $\mathcal{T}$ the Borel sigma field on $[0,T] \times \mathcal{W}$. For future use, we will refer to this particular choice of type space $(\mathcal{X}, \mathcal{T})$ as the {\em cluster space}. For a fixed time $t\geq 0$, consider a weight function defined as \begin{equation} \label{eqn:phi-def}\phi_t({\bf{x}}) = w(t), \;\; {\bf{x}} = (s,w) \in [0,T] \times \mathcal{W}. \end{equation} Then this weight function associates with each `cluster' ${\bf{x}}$ its size at time $t$. We now describe the finite measure $\mu$ that governs the intensity of the Poisson point process $\mathcal{P}_t(a,b,c)$ of clusters (regarded as points in $\mathcal{X}$). Denote by $\nu_s$ the unique probability measure on the space $\mathcal{W}$ under which, a.s., $w(u) = 0$ for all $u < s$, $w(s) = 2$ and $w(u), u \in [s, T]$ has the probability law of the time inhomogeneous Markov process with generator $\{\mathcal{A}(u), s \le u \le T\}$ defined in \eqref{eqn:ws-def}. Let $\mu$ be a finite measure on $\mathcal{X}$ defined as $\mu(ds dw) = \nu_s(dw) a(s) ds$, namely, for a non-negative real measurable function $f$ on $\mathcal{X}$ $$\int_{\mathcal{X}} f({\bf{x}}) d\mu({\bf{x}}) = \int_0^T a(s)\left (\int_{\mathcal{W}} f(s,w) d\nu_s(w) \right) ds.$$ We also define for each $t \in [0, T]$, a finite measure $\mu_t$ on $\mathcal{X}$ by the relation $\mu_t(A) = \mu(A \cap ([0,t] \times \mathcal{W}))$. 
Then for $f$ as above, \begin{equation} \label{eqn:mu-def} \int_{\mathcal{X}} f({\bf{x}}) d\mu_t({\bf{x}}) = \int_0^t a(s)\left (\int_{\mathcal{W}} f(s,w) d\nu_s(w) \right) ds.\end{equation} The measure $\mu_t$ will be the intensity of the Poisson point process on $\mathcal{X}$ which will be used in our construction of the IRG model associated with ${\bf{IA}}_n(a,b,c)_t$. Now we describe the kernel that will govern the edge formation amongst the points. Define \begin{equation} \kappa_{n,t}({\bf{x}}, {\bf{y}}) =\kappa_{n,t}((s,w),(r,\tilde w))=n\left(1-\exp \{ -\frac{1}{n} \int_0^t w(u)\tilde w(u)b(u)du\}\right). \label{eqn:kappan-def}\end{equation} We will also use the following modification of the kernel $\kappa_{n,t}$. \begin{equation} \kappa_t({\bf{x}}, {\bf{y}}) =\kappa_t((s,w),(r,\tilde w))=\int_0^t w(u)\tilde w(u)b(u)du. \label{eqn:kappa-def} \end{equation} With the above definitions we can now define IRG models ${\bf{RG}}_n(\kappa_{n,t}, \mu_t, \phi_t)$ and ${\bf{RG}}_n(\kappa_{t}, \mu_t, \phi_t)$ associated with the type space $ (\mathcal{X}, \mathcal{T}, \mu_t)$. Denote the size of the largest component [resp. the component containing the first immigrating doubleton] in ${\bf{IA}}_n(a,b,c)_t$ by $\mathcal{C}^{(1)}(a,b,c)_t$ [resp. $\mathcal{C}^{(0)}(a,b,c)_t$]. Also, denote the volume of the largest component [resp. the component containing the first cluster] in ${\bf{RG}}_n(\kappa_t, \mu_t, \phi_t)$ by $\mathcal{C}^{(1)}(\kappa_t, \mu_t, \phi_t)$ [resp. $\mathcal{C}^{(0)}(\kappa_t, \mu_t, \phi_t)$]. Define $\mathcal{C}^{(1)}(\kappa_{n,t}, \mu_t, \phi_t)$, $\mathcal{C}^{(0)}(\kappa_{n,t}, \mu_t, \phi_t)$ in a similar fashion. The following is an immediate consequence of the above construction. 
\begin{Lemma} \label{lemma:rgiva-irg-def} We have \[(\mathcal{C}^{(1)}(a,b,c)_t, \mathcal{C}^{(0)}(a,b,c)_t) =_d (\mathcal{C}^{(1)}(\kappa_{n,t}, \mu_t, \phi_t), \mathcal{C}^{(0)}(\kappa_{n,t}, \mu_t, \phi_t))\] and $$\mathcal{C}^{(1)}(\kappa_{n,t}, \mu_t, \phi_t) \le_d \mathcal{C}^{(1)}(\kappa_{t}, \mu_t, \phi_t), \; \mathcal{C}^{(0)}(\kappa_{n,t}, \mu_t, \phi_t) \le_d \mathcal{C}^{(0)}(\kappa_{t}, \mu_t, \phi_t).$$ \end{Lemma} For future use we will write ${\bf{RG}}_n(\kappa_t, \mu_t, \phi_t) \equiv {\bf{RG}}_{n,t}(a,b,c)$. \subsection{A summary of the models} \label{sec:summary-model} As noted earlier, the key step in the proof of Proposition \ref{prop:main} is a good estimate on the size of the largest component in the Bohman-Frieze process ${\bf {BF}}_n(t)$ as in Proposition \ref{prop:com-size}. For this we have introduced a series of approximating models. We summarize the relationship between these models below.\\ \begin{itemize} \item We can decompose the Bohman-Frieze process as ${\bf {BF}}_n= \mathcal{COM}_n\cup X_n$, namely into the non-singleton components and the singleton components at any time $t$. \item We shall show that $\mathcal{COM}_n \approx {\bf{IA}}_n(a_0,b_0,c_0)$, where $a_0, b_0, c_0$ are defined in \eqref{eqn:a-def}, \eqref{eqn:b-def}, \eqref{eqn:c-def}. More precisely, we shall show that as $n\to\infty$, for any fixed $\delta>0$, we have, whp, $${\bf{IA}}_n((a_0-\delta)^+,(b_0-\delta)^+,(c_0-\delta)^+) \le_d \mathcal{COM}_n \le_d {\bf{IA}}_n((a_0+\delta)\wedge 1,(b_0+\delta)\wedge 1,(c_0+\delta)\wedge 1). $$ This is a consequence of Lemma \ref{lemma:couple-bfia}. 
\item Given rate functions $(a,b,c)$, for all $t \in [0, T]$, $$\mathcal{C}^{(i)}(a,b,c)_t =_d \mathcal{C}^{(i)}(\kappa_{n,t}, \mu_t, \phi_t) \le_d \mathcal{C}^{(i)}(\kappa_t, \mu_t, \phi_t), \; i = 0,1.$$ Here $\kappa_{n,t}, \kappa_t, \mu_t, \phi_t$ and $a,b,c$ are related through \eqref{eqn:kappan-def}, \eqref{eqn:kappa-def}, \eqref{eqn:mu-def} (see also \eqref{eqn:ws-def}), \eqref{eqn:phi-def}, respectively. \end{itemize} \section{Analysis of the largest component at sub-criticality: Proof of Proposition \ref{prop:com-size}} \label{sec:largest-com} This section proves Proposition \ref{prop:com-size}. The section is organized as follows: \begin{itemize} \item In Section \ref{sec:largest-first} we reduce the problem to proving Proposition \ref{prop:reduced}, and we give the proof of Proposition \ref{prop:com-size} using this result. The rest of Section \ref{sec:largest-com} is devoted to the proof of Proposition \ref{prop:reduced}.\\ \item In preparation for this proof, in Section \ref{sec:approx-error} we present some key lemmas that allow us to estimate the errors between the various models summarized in Section \ref{sec:summary-model}. The proofs of some of these lemmas (Lemmas \ref{theo:irg-first-com}, \ref{theo:error-cont-norm} and \ref{theo:error-cont-int}) are deferred to later sections. \\ \item Using these lemmas, in Section \ref{sec:proof-prop-reduced} we prove the key proposition, Proposition \ref{prop:reduced}. The rest of Section \ref{sec:largest-com} proves the supporting Lemmas \ref{theo:irg-first-com}, \ref{theo:error-cont-norm} and \ref{theo:error-cont-int}.\\ \item In Section \ref{sec:model-branching} we introduce a branching process related to the IRG model, and prove Lemma \ref{theo:irg-first-com}.
A key step in the proof is Lemma \ref{theo:branching}, whose proof is left to Section \ref{sec:proof-branching}.\\ \item Section \ref{sec:analysis-ker} analyzes the kernel $\kappa_t$ associated with ${\bf{RG}}_{n,t}(a,b,c)$ and proves Lemma \ref{theo:error-cont-norm}.\\ \item Finally, in Section \ref{sec:measure-theoretic} we give the proof of Lemma \ref{theo:error-cont-int}. \end{itemize} \subsection{From the largest component to the first component} \label{sec:largest-first} In this section we reduce the problem of proving the estimate on the largest component in Proposition \ref{prop:com-size} to an estimate on the {\em first} component as in Proposition \ref{prop:reduced}. This reduction, although somewhat different, is inspired by a similar idea used in \cite{aldous2000random}. \\ Recall that $\mathcal{C}_n^{\scriptscriptstyle (1)}(t) \equiv I_n(t)$ denotes the largest component in ${\bf {BF}}_n(t)$. Let $\mathcal{C}_n^s(t)$, $0 \le s \le t$, denote the component whose first doubleton is born at time $s$ in ${\bf {BF}}_n(t)$. In particular $\mathcal{C}_n^s(t) = \emptyset$ if no doubleton is born at time $s$. Without loss of generality, we assume that the first doubleton is born at time $0$. Then $\mathcal{C}_n^0(t)$ denotes the component of the \emph{first doubleton} at time $t$ of the BF process. The following lemma estimates the size of the largest component $I_n(t)$ in terms of the size of the first component. \begin{Lemma} \label{lem2105} For any $n \in \mathbb{N}$, $t_0 \in [0,T]$ and deterministic function $\alpha: [0,T] \to [0, \infty)$, $$\mathbb{P}\{ I_n(t) > \alpha(t), \mbox{ for some } t < t_0\} \le n T\mathbb{P}\{ \mathcal{C}_n^0(t) > \alpha(t), \mbox{ for some } t < t_0 \}.$$ \end{Lemma} \textbf{Proof:} Let $\{{\bf {BF}}_n^{(i)}(t), t \ge 0\}_{i \in \mathbb{N}_0}$ be an i.i.d.\ family of copies of the process $\{{\bf {BF}}_n(t), t \ge 0\}$ on the same vertex set $[n]$. Let $N$ be a rate $n$ Poisson process independent of the above collection.
Denote by $\{\tau_i, i \in \mathbb{N}\}$ the jump times of the Poisson process. Set $\tau_0=0$. Denote the first component of ${\bf {BF}}_n^{(i)}$ at time $t$ by $\mathcal{J}_n^{(i)}(t)$. Consider the random graph $${\bf{G}}_n^t = \cup_{i\in \mathbb{N}_0: \tau_i \le t} \mathcal{J}_n^{(i)}(t)$$ and let $I_n^{{\bf{G}}}(t)$ denote the size of the largest component in ${\bf{G}}_n^t$. Then since $a_n^*(t) \le 1$ for all $t$, $I_n \le_d I_n^{{\bf{G}}}$. Thus \begin{eqnarray*} \mathbb{P}\{ I_n(t) > \alpha(t), \mbox{ for some } t < t_0\} &\le & \mathbb{P}\{ I_n^{{\bf{G}}}(t) > \alpha(t), \mbox{ for some } t < t_0\}\\ &= & \sum_{k \in \mathbb{N}_0} \mathbb{P}\{ I_n^{{\bf{G}}}(t) > \alpha(t), \mbox{ for some } t < t_0, N(T) = k\}\\ & \le & \sum_{k \in \mathbb{N}_0} \mathbb{P}\{ \mathcal{J}_n^{(i)}(t) > \alpha(t), \mbox{ for some } t < t_0, \mbox{ for some } i \le k\} \mathbb{P}\{N(T) = k\}\\ & \le & \sum_{k \in \mathbb{N}_0} k \mathbb{P}\{ \mathcal{C}_n^0(t) > \alpha(t), \mbox{ for some } t < t_0\} \mathbb{P}\{N(T) = k\}. \end{eqnarray*} The result follows. \ \ \rule{1ex}{1ex} \\ Next, in the following lemma, we reduce an estimate on the probability of the event $\{ \mathcal{C}_n^0(t) > \alpha(t), \mbox{ for some } t < t_0 \}$ to an estimate on $\sup_{t \in [0, t_0]}\alpha(t)\mathbb{P}\{ \mathcal{C}_n^0(t) > \alpha(t)\}$. \begin{Lemma} \label{lem2103} There exists an $N_0 \in \mathbb{N}$ such that for all $n \ge N_0$, $t_0 \in [0,T]$ and continuous $\alpha: [0,T] \to [0, \infty)$ \begin{equation} \mathbb{P} \{ \mathcal{C}_n^0(t) >2 \alpha(t), \mbox{ for some } 0< t \le t_0 \} \le 16nT^2 \sup_{0 \le s \le t_0} \left \{\alpha(s)\mathbb{P} \{ \mathcal{C}_n^0(s) > \alpha(s)\} \right \}.\end{equation} \end{Lemma} \textbf{Proof:} Fix $N_0 \in \mathbb{N}$ such that for all $n \ge N_0$, $\sup_{s \in [0, T]}\{a_n^*(s) \vee b_n^*(s)\} \le 2$. Consider now $n \ge N_0$. Define $\tau = \inf \{ t >0: \mathcal{C}_n^0(t) > 2\alpha(t) \}$. 
Then \begin{equation} \label{ins1614} \mathbb{P}\{\mathcal{C}_n^0(t) > 2\alpha(t) \mbox{ for some } t \in [0, t_0]\} = \mathbb{P}\{\tau \le t_0\}. \end{equation} Denote by $\mathcal{C}^0_n \leftrightarrow_t \mathcal{C}^s_n$ the event that the components $\mathcal{C}^0_n$ and $\mathcal{C}^s_n$ merge at time $t$. By convention, this event is taken to be empty if no doubleton is born at time $s$. Then $$ \{\tau = t\} = \{\mathcal{C}^0_n(t-) < 2 \alpha(t)\} \cap \{\mathcal{C}^0_n(t-) + \mathcal{C}_n^s(t-) \ge 2 \alpha(t); \mathcal{C}^0_n \leftrightarrow_t \mathcal{C}^s_n, \mbox{ for some } s < t \}.$$ Next note the following. \begin{itemize} \item Since $a_n^*(s) \le 2$, the rate at which doubletons are born is bounded by $2n$. \item Given that a doubleton was born at time $s$, the event $\{\mathcal{C}^0_n \leftrightarrow_u \mathcal{C}^s_n, \mbox{ for some } u \in (t, t+dt]\}$ occurs, conditionally on $\mathcal{F}_t$, with probability $\frac{1}{n} \mathcal{C}^0_n(t)\mathcal{C}^s_n(t)b^*_n(t) dt$. Using the facts that $b^*_n(t) \le 2$ and $\mathcal{C}_n^s(t) \le n$, this probability is bounded by $4\alpha(t) dt$ on the event $\{\mathcal{C}^0_n(t) < 2 \alpha(t)\}$. \item $\mathbb{P} \{\mathcal{C}^0_n(t) + \mathcal{C}_n^s(t) \ge 2 \alpha(t)\}$ is bounded by $2\mathbb{P} \{\mathcal{C}^0_n(t) \ge \alpha(t)\}$. \end{itemize} Using these observations we have the following estimate: \begin{eqnarray*} \mathbb{P}\{\tau \le t_0\} &\le& \mathbb{E}\int_{[0, t_0]} {\bf 1}_{\{\mathcal{C}^0_n(t) < 2 \alpha(t)\}}\left[ \int_{[0,t]} na^*_n(s) \cdot (\frac{1}{n}\mathcal{C}^0_n(t)\mathcal{C}^s_n(t)) \cdot (b^*_n(t)) ds\right] dt\\ &\le&\int_{[0, t_0]} \left[ \int_{[0,t]} 2n \cdot 2\mathbb{P}(\mathcal{C}^0_n(t) \ge \alpha(t)) \cdot (4\alpha(t)) ds\right] dt\\ &\le& \int_{[0, t_0]} (2nt) \cdot 2\mathbb{P}(\mathcal{C}^0_n(t) \ge \alpha(t)) \cdot (4\alpha(t)) dt \\ & \le & 16nT^2 \sup_{t \in [0, t_0]}\left\{\alpha(t)\mathbb{P}\{ \mathcal{C}_n^0(t) > \alpha(t)\}\right\}.
\end{eqnarray*} The result follows on combining this estimate with \eqref{ins1614}. \ \ \rule{1ex}{1ex}\\ The following proposition will be proved in Section \ref{sec:proof-prop-reduced}. \begin{Proposition} \label{prop:reduced} Given $\eta \in (0, \infty)$ and $\gamma \in (0, 1/5)$, there exist $B, C, N_1 \in (0, \infty)$ such that for all $n \ge N_1$ \begin{equation} \mathbb{P} \set{ \mathcal{C}_n^0(t) \ge m(n,t)/2 } \le Cn^{-\eta} \mbox{ for all } 0<t<t_c-n^{-\gamma},\label{ins2046n} \end{equation} where $m(n,t)$ is as defined in \eqref{ins1711}. \end{Proposition} \textbf{Remark:} Intuitively, in the subcritical regime, i.e.\ when $t< t_c$, one expects that $\mathbb{P}\{ \mathcal{C}_n^0(t) > m \} < d_1 e^{-d_2 m}$ for some constants $d_1,d_2$. This suggests a bound as in \eqref{ins2046n} for each fixed $t < t_c$. However, the constants $d_1$ and $d_2$ depend on $t$, and in fact one expects that $d_2(t) \to 0$ as $t \uparrow t_c$. On the other hand, in order to prove the above proposition one requires estimates that are uniform over all $t < t_c -n^{-\gamma}$ as $n\to\infty$. This analysis is substantially more delicate, as will be seen in subsequent sections. \\ We now prove Proposition \ref{prop:com-size} using the above results.\\ \textbf{Proof of Proposition \ref{prop:com-size}:} Fix $\gamma \in (0, 1/5)$ and $\eta > 2+ 2\gamma$. Let $B,C, N_1$ be as determined in Proposition \ref{prop:reduced} for this choice of $\eta, \gamma$ and let $m(n,t)$ be as defined in \eqref{ins1711}. Without loss of generality we can assume that $N_1 \ge N_0$, where $N_0$ is as in Lemma \ref{lem2103}.
Then applying Lemmas \ref{lem2105} and \ref{lem2103} with $t_0 = t_c - n^{-\gamma}$ and $\alpha(t)=m(n,t)$, we have \begin{eqnarray*} \mathbb{P} \{I_n(t) \ge m(n,t), \mbox{ for some } 0< t < t_c-n^{-\gamma} \} &\le & nT \mathbb{P} \{\mathcal{C}^0_n(t) \ge m(n,t), \mbox{ for some } 0< t < t_c-n^{-\gamma} \}\\ &\le & 16 n^2T^3 \sup_{s \in [0, t_c-n^{-\gamma}]}\left \{m(n,s) \mathbb{P}\{\mathcal{C}^0_n(s) \ge m(n,s)/2\}\right \}\\ &\le & 16CBn^{2-\eta + 2\gamma}T^3 (\log n)^4. \end{eqnarray*} Since $\eta > 2+ 2\gamma$, the above probability converges to $0$ as $n \to \infty$. The result follows. \ \ \rule{1ex}{1ex} \\ \subsection{Some Preparatory Results} \label{sec:approx-error} This section collects some results that are helpful in estimating the errors between the various models described in Section \ref{sec:summary-model}. The first lemma estimates the error between $\bar{x}_n(t)\equiv \bar{x}(t) = X_n(t)/n$ and its deterministic limit $x(t)$ defined in \eqref{eqn:xss-def}. \begin{Lemma} \label{lemma:error-diff-eqn} For any $T> 0$, there exists a $C(T) \in (0, \infty)$ such that, for all $\gamma_1\in [0, 1/2)$, \[\mathbb{P} \{ \sup_{0\leq t\leq T}|\bar{x}_n(t) - x(t)| > \frac{1}{n^{\gamma_1}} \} \leq \exp \{ -C(T) n^{1-2\gamma_1}\}.\] \end{Lemma} \textbf{Proof:} Recall that $[n] = \set{1,2,\ldots, n}$. Let $E_n = n^{-1} [n]$ and let $E=[0,1]$. Recall the three types of events described in Section \ref{sec:model-equiv} that lead to edge formation in the BF model. Of these, only events of type (i) and (ii) lead to a change in the number of singletons. For events of type (i), i.e.\ when a doubleton is created, $\bar{x}$ decreases by $2/n$. Two key functions (see \eqref{eqn:ins609}) for this case are \begin{align*} f^*_{-2}(y) &= a^*_n(y)\\ f_{-2}(y) &=a_0(y)= \frac{1}{2} \left(y^2+ (1-y^2)y\right). \end{align*} For events of type (ii), i.e.\ when a singleton attaches to a non-singleton component, $\bar{x}$ decreases by $1/n$.
Two key functions (see \eqref{eqn:ins633}) for this case are \begin{align*} f^*_{-1}(y) &= (1-y)c_n^*(y)\\ f_{-1}(y)&= (1-y)c_0(y) =y(1-y^2)(1-y). \end{align*} Note that $0\le f^*_{l}(\bar{x})\le 1$ for $l=-1,-2$, and that $\bar{x}(t)$ is a Markov process on the state space $E_n$ for which at time $t$ we have the transitions $\bar{x}(t)\leadsto \bar{x}(t)-1/n$ at rate $n f_{-1}^*(\bar{x}(t))$ and $\bar{x}(t)\leadsto\bar{x}(t) -2/n$ at rate $n f_{-2}^*(\bar{x}(t))$. Furthermore, \begin{equation} |f_{-1}^*(y) - f_{-1}(y)|\leq \frac{2}{n} \qquad |f_{-2}^*(y) - f_{-2}(y)| \leq \frac{5}{n}, \; \mbox { for all } y \in [0,1]. \label{eqn:f1n-approx} \end{equation} Let $Y_{-1}(\cdot), Y_{-2}(\cdot)$ be independent rate one Poisson processes. Then the process $\bar{x}(t)$ started with $\bar{x}(0)=1$ can be constructed (see e.g.\ \cite{kurtz-density, ethier-kurtz}) as the unique solution of the stochastic equation \begin{equation} \label{eqn:stoc-xn-integ} \bar{x}(t) = 1 - \frac{1}{n} Y_{-1}\left(n\int_0^t f_{-1}^*(\bar{x}(s)) ds\right)-\frac{2}{n} Y_{-2}\left(n\int_0^t f_{-2}^*(\bar{x}(s)) ds\right). \end{equation} By \eqref{eqn:xss-def}, the limiting function $x(\cdot)$ is the unique solution of the integral equation \begin{equation} x(t) = 1- \int_0^t f_{-1}(x(s)) ds - \int_0^t 2 f_{-2}(x(s)) ds. \label{eqn:xt-integ} \end{equation} Also note that, for all $y,z\in E$, \begin{equation} |(f_{-1}(y)+2f_{-2}(y)) - (f_{-1}(z)+2f_{-2}(z)) |\leq 6|y-z|. \label{eqn:f-sum-lip} \end{equation} Using \eqref{eqn:xt-integ} and \eqref{eqn:stoc-xn-integ} we get \begin{align*} |\bar{x}(t) - x(t)| \leq A_1^n(t)+ A_2^n(t)+ A_3^n(t) \end{align*} where \begin{align*} A_1^n(t) = \left|\sum_{l=-1,-2}l \left[\frac{1}{n} Y_{l}\left(n\int_0^t f_{l}^*(\bar{x}(s)) ds\right)- \int_0^t f_{l}^*(\bar{x}(s))ds\right]\right| \leq 4 \sup_{l=-1,-2}\sup_{t\le T} \left|\frac{Y_{l}(n t)}{n} -t\right|.
\end{align*} By \eqref{eqn:f1n-approx}, \begin{align*} A_2^n(t) =\left| \int_0^t \sum_{l=-1,-2} l \left[f_{l}^*(\bar{x}(s)) - f_{l}(\bar{x}(s)) \right] ds \right| \leq \frac{12T}{n}. \end{align*} Finally, by \eqref{eqn:f-sum-lip}, \begin{align*} A_3^n(t) & = \left|\int_0^t \sum_{l=-1,-2}l \left[f_{l}(\bar{x}(s)) - f_{l}(x(s))\right] ds\right|\\ &\leq 6\int_0^t |\bar{x}(s)-x(s)| ds. \end{align*} Combining these estimates we get \[|\bar{x}(t)-x(t)| \leq \left(\frac{12T}{n}+ 4 \sup_{l=-1,-2}\sup_{t\le T} \left|\frac{Y_{l}(n t)}{n} -t\right|\right) + 6\int_0^t |\bar{x}(s)-x(s)| ds. \] This implies, by Gronwall's lemma (see e.g.\ \cite{ethier-kurtz}, p.\ 498), \[\sup_{s\leq T}|\bar{x}(s)-x(s)| \leq \left(\frac{12T}{n}+ 4 \sup_{l=-1,-2}\sup_{t\le T} \left|\frac{Y_{l}(n t)}{n} -t\right|\right) e^{6T}. \] The proof is completed using standard large deviation estimates for Poisson processes. \ \ \rule{1ex}{1ex}\\ In the next lemma we note some basic properties of the integral operator associated with a kernel $\kappa$ on a finite measure space. \begin{Lemma} \label{lemma:prop-kernel} Let $\kappa$, $\kappa'$ be kernels on a finite measure space $(\mathcal{X},\mathcal{T},\mu)$. Assume that $\kappa, \kappa' \in L^2(\mu\times \mu)$. Denote the associated integral operators by $\mathcal{K}$ and $\mathcal{K}'$ (see \eqref{eqn:calk-def}) and their norms by $\rho(\kappa), \rho(\kappa')$ respectively. Then\\ (i) $\mathcal{K}$ is a compact operator.
In particular, $\rho(\kappa) =\|\mathcal{K}\| \le \|\kappa\|_2 = \left(\int_{\mathcal{X} \times \mathcal{X}} \kappa^2(x,y) \mu(dx)\mu(dy) \right )^{1/2} <\infty$.\\ (ii) If $\kappa \le \kappa'$, then $\rho(\kappa) \le \rho(\kappa')$.\\ (iii) $\rho(\kappa+\kappa') \le \rho(\kappa)+\rho(\kappa')$ and $\rho(t \kappa)=t \rho(\kappa)$ for $t \ge 0$.\\ (iv) $|\rho(\kappa)-\rho(\kappa')| \le \rho(|\kappa-\kappa'|)$.\\ (v) $\rho(\kappa) \le \|\kappa\|_{\infty} \mu(\mathcal{X}).$\\ \end{Lemma} \textbf{Proof:} (i) is a standard result, see Theorem VI.23 of \cite{reed-simon-fna}.\\ (ii) For any nonnegative $f$ in $L^2(\mu)$, $\mathcal{K} f (x) \le \mathcal{K}' f (x)$ pointwise. Thus for such $f$, $\|\mathcal{K} f\|_2\le \|\mathcal{K}' f\|_2$. The result follows on observing that, for each of $\|\mathcal{K} f\|_2$ and $\|\mathcal{K}' f\|_2$, the supremum over $\{f \in L^2: \|f\|_2=1\}$ is the same as the supremum over $\{f \in L^2: \|f\|_2=1, f \ge 0 \}$.\\ (iii) This follows immediately from the facts that $\|(\mathcal{K}+\mathcal{K}')f\|_2 \le \|\mathcal{K} f\|_2 + \|\mathcal{K}' f\|_2$ and $\mathcal{K}(tf) =t \mathcal{K} f$.\\ (iv) Note that $\kappa \le \kappa' + |\kappa - \kappa'|$ and $\kappa' \le \kappa + |\kappa - \kappa'|$. The result follows on combining this observation with (ii) and (iii).\\ (v) This follows immediately from (i) and the fact that $\|\kappa\|_2 \le \|\kappa\|_\infty \mu(\mathcal{X})$.\\ \ \ \rule{1ex}{1ex} \\ We now present some auxiliary estimates for the IRG model from Section \ref{sec:model-irg}. The following lemma will be proved in Section \ref{sec:model-branching}. Recall the definition of a {\em basic structure} from Section \ref{sec:model-irg}. \begin{Lemma} \label{theo:irg-first-com} Let $\{ (\mathcal{X}, \mathcal{T}, \mu), \kappa, \phi\}$ be a basic structure, where $\kappa$ is symmetric. Suppose that $\mu$ is non-atomic and $\rho(\kappa)=\|\mathcal{K}\|<1$.
For fixed $x_0 \in \mathcal{X}$, denote by $\mathcal{C}_n^{\scriptscriptstyle RG}(x_0)$ the volume of the component of ${\bf{RG}}_n(\kappa)$ that contains $x_0$. Define $\mathcal{C}_n^{\scriptscriptstyle RG}(x_0) = 0$ if $x_0$ is not a vertex in ${\bf{RG}}_n(\kappa)$. Then for all $m\in \mathbb{N}$,\\ \begin{equation} \label{ins1531} \mathbb{P} \{ \mathcal{C}_n^{\scriptscriptstyle RG}(x_0) > m\} < 2 \exp \{-C_1\Delta^2 m\}, \end{equation} where \begin{equation} \label{ins2134} \Delta=1-\rho(\kappa), \;\; C_1=\frac{1}{8\|\phi\|_{\infty}(1 + 3\|\kappa\|_{\infty}\mu(\mathcal{X}))}.\end{equation} \end{Lemma} The above result will be useful for estimating the size of a given component in ${\bf{RG}}_{n,t}(a,b,c)$. One difficulty in directly using this result is that the kernel $\kappa_t$ and the weight function $\phi_t$ defined in \eqref{eqn:kappa-def} and \eqref{eqn:phi-def} are not bounded. We will overcome this by using a truncation argument. In order to control the error caused by truncation, the following two results will be useful. For the rest of this subsection the type space $(\mathcal{X}, \mathcal{T})$ will be taken to be the {\em cluster space} introduced above \eqref{eqn:phi-def}. \begin{Lemma} \label{theo:error-wst} Given rate functions $(a,b,c)$ and $t \in [0, T]$, let $\mu_t$ be the finite measure on $(\mathcal{X}, \mathcal{T})$ defined as in \eqref{eqn:mu-def}. Let $\mathcal{P}_n$ be a Poisson point process on $(\mathcal{X}, \mathcal{T})$ with intensity $n \cdot \mu_t$. Define $$ Y_n \stackrel{\scriptscriptstyle def}{=} \sup_{(s,w) \in \mathcal{P}_n} w(t).$$ Then for every $A \in (0, \infty)$, \begin{equation*} \mathbb{P}\{Y_n > A\} < 2T \cdot n(1-e^{-T})^{A/2}. \end{equation*} \end{Lemma} \textbf{Proof:} Let $N$ be the number of points in $\mathcal{P}_n$; then $N$ is Poisson with mean $\int_0^t na(s)ds \le nT$.
Let $\{Z_2^{(i)}\}_{i \ge 1}$ be independent copies of $Z_2$ (also independent of $N$), where $Z_2$ is a pure jump Markov process on $\mathbb{N}$ with initial condition $Z_2(0) = 2$ and infinitesimal generator $\mathcal{A}_0$ defined as $$\mathcal{A}_0f(k) = k(f(k+1) - f(k)), k \in \mathbb{N}, \;\; f: \mathbb{N} \to \mathbb{R}. $$ Thus $Z_2$ is just a Yule process started with two individuals at time zero. Note that $$Y_n \le \sup_{(s,w) \in \mathcal{P}_n} w(T) \le_d \sup_{1 \le i \le N} Z_2^{(i)}(T),$$ where the first inequality holds a.s and the second inequality uses the fact that $c \le 1$. Standard facts about the Yule process (see e.g.\cite{norris-mc}) imply that $Z_2^{(i)}(T)$ is distributed as sum of two independent $\mbox{Geom}\{e^{-T}\}$. Thus \begin{align*} \mathbb{P}\{Y_n > A\} &\le \mathbb{E}(N) \cdot \mathbb{P} \{ Z_2(T) > A \}\\ &\le nT \cdot 2 (1-e^{-T})^{A/2}. \end{align*} This completes the proof of the lemma. \ \ \rule{1ex}{1ex}\\ The following corollary follows on taking $A = C\log n$ in the above lemma. \begin{Corollary}\label{cor1210} Let $Y_n$ be as in the above lemma and fix $\eta \in (0, \infty)$. Then there exist $C_1(\eta), C_2(\eta) \in (0, \infty)$ such that for any rate functions $(a,b,c)$ $$\mathbb{P}\{Y_n > C_1(\eta) \log n\} < C_2(\eta) n^{-\eta}, \;\; \mbox{ for all } n \in \mathbb{N} .$$ \end{Corollary} From Section \ref{sec:model-equiv}, recall the definitions of the functions $a_0, b_0, c_0$ associated with the BF model. The following lemma will allow us to argue that ${\bf{RG}}_n(a_0,b_0,c_0)$ is well approximated by ${\bf{RG}}_n(a,b,c)$ if the rate functions $(a,b,c)$ are sufficiently close to $(a_0, b_0, c_0)$. Let $((\mathcal{X}, \mathcal{T}, \mu_t), \kappa_t, \phi_t)$ be the basic structure associated with rate functions $(a,b,c)$. Let $\mathcal{K}_t$ be the integral operator defined by \eqref{eqn:calk-def}, replacing $(\mu, \kappa)$ there by $(\mu_t, \kappa_t)$. Let $\rho_t = \rho(\kappa_t)$. 
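The operator norm $\rho(\kappa_t)$ just introduced is the quantity that governs sub- and super-criticality in what follows. For a symmetric kernel it can be approximated numerically by discretizing the integral operator; the sketch below does this for a toy kernel $\kappa(x,y)=xy$ on $[0,1]$ with $\mu$ the Lebesgue measure (the kernel, grid and measure are illustrative assumptions, not the $\kappa_t$, $\mu_t$ of the model).

```python
import numpy as np

# Sketch: approximate rho(kappa) = ||K||, the L^2(mu)-operator norm of
# (K f)(x) = int kappa(x, y) f(y) mu(dy), by discretizing on a midpoint
# grid.  Toy kernel kappa(x, y) = x*y on [0, 1] with mu = Lebesgue;
# these choices are assumptions for illustration only.
def operator_norm(kappa, grid, weights):
    K = np.array([[kappa(x, y) for y in grid] for x in grid])
    # Conjugating the discretized operator K diag(w) by diag(sqrt(w))
    # gives S_ij = kappa_ij * sqrt(w_i w_j): symmetric whenever kappa is,
    # with the same eigenvalues as K diag(w).
    S = K * np.sqrt(np.outer(weights, weights))
    return float(np.max(np.abs(np.linalg.eigvalsh(S))))

m = 400
grid = (np.arange(m) + 0.5) / m          # midpoint rule on [0, 1]
weights = np.full(m, 1.0 / m)            # mu = Lebesgue measure

rho = operator_norm(lambda x, y: x * y, grid, weights)
# kappa(x, y) = x*y yields a rank-one operator with norm
# int_0^1 x^2 dx = 1/3, so rho should be close to 1/3.
```

The monotonicity and subadditivity properties of $\rho$ recorded in Lemma \ref{lemma:prop-kernel} can be checked numerically in the same way.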
In order to emphasize the dependence on the rate functions $(a,b,c)$, we will sometimes write $\rho_t=\rho_t(a,b,c)$. Similar notation will be used for $\kappa_t, \mu_t, \phi_t$ and $\mathcal{K}_t$. \begin{Lemma} \label{theo:error-cont-norm} Fix rate functions $(a,b,c)$. Suppose that $\inf_{s \in [0, T]} a(s) > 0$ and for some $\theta \in (0, \infty)$, $ c(s)\ge \theta s$, for all $s \in [0, T]$. Given $\delta > 0$ and $t \in [0, T]$, let $$\rho_{+,t} = \rho_t((a+\delta)\wedge 1, (b+\delta)\wedge 1, (c+\delta)\wedge 1),\; \rho_{-,t} = \rho_t((a-\delta)^+, (b-\delta)^+, (c-\delta)^+).$$ Then there exist $C_2 \in (0, \infty)$ and $\delta_0 \in (0, 1)$ such that for all $\delta \le \delta_0$ and $t \in [0, T]$, $$ \max \{ |\rho_t- \rho_{+,t}|, |\rho_t-\rho_{-,t}|\} \le C_2 (-\log \delta)^3 \delta^{1/2}.$$ \end{Lemma} The proof of the above lemma is quite technical and is deferred to Section \ref{sec:analysis-ker}. \\ The next lemma gives some basic properties of $\rho_t(a_0, b_0, c_0)$. Recall that $t_c$ denotes the critical time for the emergence of the giant component in the BF model. \begin{Lemma} \label{theo:error-cont-int} Let $\rho(t)=\rho_t(a_0, b_0, c_0)$. Then:\\ (i) $\rho(t)$ is strictly increasing in $t \in [0, T]$;\\ (ii) $ \rho(t_c)=1$;\\ (iii) $\lim_{s \to 0^+} (\rho(t_c)-\rho(t_c-s))/s = \rho'_- (t_c) >0$. \end{Lemma} The proof of the lemma is given in Section \ref{sec:measure-theoretic}. \\ \subsection{Proof of Proposition \ref{prop:reduced}} \label{sec:proof-prop-reduced} This section is devoted to the proof of Proposition \ref{prop:reduced}. Fix $\eta \in (0, \infty)$ and $\gamma \in (0, 1/5)$.\\ \textbf{Step 1: from ${\bf {BF}}_n$ to ${\bf{IA}}_{n,\delta}$ }\\ Let $\gamma_1 = 2/5$ and define $E_n = \{ \sup_{0 \le t \le T} |\bar{x}_n(t)-x(t)| \le n^{-\gamma_1}\}$. From Lemma \ref{lemma:error-diff-eqn}, \begin{equation} \mathbb{P} \{ E_n^c \} \leq \exp \{ -C(T) n^{1-2\gamma_1}\} = \exp \{ -C(T) n^{1/5}\}.
\label{eqn:ineq-step1} \end{equation} From \eqref{eqn:a-astar} and recalling that the Lipschitz norm of $a_0$ is bounded by 2 (see \eqref{eqn:a-def}), we have that on $E_n$, $$|a_n^*(t) - a_0(t)| \le 5n^{-1} + 2 n^{-\gamma_1}, \mbox{ for all } t \in [0, T].$$ Similar bounds can be shown to hold for $b_n^*$ and $c_n^*$. Thus we can find $n_1 \in \mathbb{N}$ and $d_1 \in (0, \infty)$ such that, for $n \ge n_1$, on $E_n$, $$a_n^*(t)\le a_0(t)+\delta_n, b_n^*(t)\le b_0(t)+\delta_n, c_n^*(t)\le c_0(t)+\delta_n, \mbox{ for all } t \in [0, T],$$ where $\delta_n = d_1n^{-\gamma_1}$. Since $a_n^*, b_n^*, c_n^*$ are all bounded by $1$, setting $a_{n,\delta}(t) = (a_0(t)+\delta_n)\wedge 1$ and similarly defining $b_{n,\delta}, c_{n,\delta}$, we in fact have that $$a_n^*(t) \le a_{n,\delta}(t), b_n^*(t)\le b_{n,\delta}(t), c_n^*(t) \le c_{n,\delta}(t), \mbox{ for all } t \in [0, T].$$ Let $\mathcal{C}_{n,\delta}^{\scriptscriptstyle IA}(t)$ denote the size of the first component in ${\bf{IA}}_n(a_{n,\delta}, b_{n,\delta}, c_{n,\delta})_t$. From Lemma \ref{lemma:couple-bfia}, we have for any $m \in \mathbb{N}$, \begin{equation} \mathbb{P}\{ \mathcal{C}_n^{0}(t) > m, E_n\} \le \mathbb{P}\{ \mathcal{C}_{n,\delta}^{\scriptscriptstyle IA}(t) > m, E_n \} \le \mathbb{P}\{ \mathcal{C}_{n,\delta}^{\scriptscriptstyle IA}(t) > m\}. \label{eqn:prob-step1} \end{equation} \textbf{Step 2: from ${\bf{IA}}_{n,\delta}$ to ${\bf{RG}}_{n,\delta,A}$}\\ For $t \in [0, T]$ and rate functions $a_{n,\delta}, b_{n,\delta}, c_{n,\delta}$, consider the IRG model ${\bf{RG}}_n(\kappa_{t,\delta},\mu_{t,\delta}, \phi_t)$, where $\kappa_{t, \delta} = \kappa_t(a_{n,\delta}, b_{n,\delta}, c_{n,\delta})$ and $\mu_{t,\delta}$ is the measure for the IRG model corresponding to these rate functions, as defined in \eqref{eqn:mu-def}. Let $A_n = C_1(\eta) \log n$, where $C_1(\eta)$ is as in Corollary \ref{cor1210}.
Consider the following truncation of the kernel $\kappa_{t,\delta}$ and weight function $\phi_t(s,w) = w(t)$: $$\kappa_{t,\delta,A}({\bf{x}},{\bf{y}})=\kappa_{t,\delta} ({\bf{x}}, {\bf{y}}) {\bf1}_{\{w(T) \le A_n\} }{\bf1}_{\{\tilde w(T) \le A_n\} }, \; {\bf{x}} = (s,w), {\bf{y}} = (r, \tilde w)$$ and $$\phi_{t,A}(s,w)=\phi_t(s,w) {\bf1}_{\{w(T) \le A_n\} }.$$ Then $\|\phi_{t,A}\|_\infty \le A_n$ and $\|\kappa_{t,\delta,A}\|_\infty \le T A_n^2$.\\ Recall the Poisson point process $\mathcal{P}_t(a,b,c)$ associated with rate functions $(a,b,c)$, introduced below \eqref{eqn:phi-def}, and write $\mathcal{P}_{t,\delta} = \mathcal{P}_t(a_{n,\delta}, b_{n,\delta}, c_{n,\delta})$. Let $Y_{n,\delta} = \sup_{(s,w) \in \mathcal{P}_{t,\delta}}w(T)$. From Corollary \ref{cor1210}, \begin{equation} \mathbb{P}\{Y_{n,\delta} > A_n\} <C_2(\eta) n^{-\eta}. \label{eqn:ineq-step2} \end{equation} Let $\mathcal{C}_{n,\delta}^{\scriptscriptstyle RG}(t) = \mathcal{C}_{n,t}^{\scriptscriptstyle RG}(a_{n,\delta}, b_{n,\delta}, c_{n,\delta})$ be the volume of the `first' component in ${\bf{RG}}_{n,t}(a_{n,\delta}, b_{n,\delta}, c_{n,\delta})\equiv {\bf{RG}}_{n}(\kappa_{t,\delta}, \mu_{t,\delta}, \phi_{t})$. Then from Lemma \ref{lemma:rgiva-irg-def}, \begin{equation} \mathbb{P}\{\mathcal{C}_{n,\delta}^{\scriptscriptstyle IA}(t) > m\} \le \mathbb{P}\{ \mathcal{C}_{n,\delta}^{\scriptscriptstyle RG}(t) >m\}.\label{ins1632} \end{equation} Let $\mathcal{C}_{n,\delta, A}^{\scriptscriptstyle RG}(t)$ denote the volume of the first component in ${\bf{RG}}_{n}(\kappa_{t,\delta, A}, \mu_{t,\delta}, \phi_{t,A})$, namely the random graph formed using the truncated kernel.
Then \begin{eqnarray} \mathbb{P}\{ \mathcal{C}_{n,\delta}^{\scriptscriptstyle RG}(t) >m\} &\le& \mathbb{P}\{Y_{n,\delta} > A_n\}+ \mathbb{P}\{ \mathcal{C}_{n,\delta}^{\scriptscriptstyle RG}(t) >m, Y_{n,\delta} \le A_n\}\nonumber \\ &=& \mathbb{P}\{Y_{n,\delta} > A_n\}+ \mathbb{P}\{ \mathcal{C}_{n,\delta,A}^{\scriptscriptstyle RG}(t) >m, Y_{n,\delta} \le A_n\} \label{eqn:trunc-no-effect}\nonumber\\ &\le& \mathbb{P}\{Y_{n,\delta} > A_n\}+ \mathbb{P}\{ \mathcal{C}_{n,\delta,A}^{\scriptscriptstyle RG}(t) >m\}. \label{eqn:prob-step2} \end{eqnarray} \textbf{Step 3: Estimating $\mathcal{C}_{n,\delta,A}^{\scriptscriptstyle RG}$}\\ We will apply Lemma \ref{theo:irg-first-com}, replacing $\{(\mathcal{X}, \mathcal{T}, \mu), \kappa, \phi\}$ there by $\{(\mathcal{X}, \mathcal{T}, \mu_{t,\delta}), \kappa_{t,\delta, A}, \phi_{t,A}\}$, where $t \in (0, t_c - n^{-\gamma})$. From \eqref{ins1531} we have \begin{equation} \label{ins1532} \mathbb{P}\{ \mathcal{C}_{n,\delta,A}^{\scriptscriptstyle RG}(t) >m\} \le 2 \exp \{-C_1\Delta^2m\}, \end{equation} where $$C_1 = \frac{1}{8\|\phi_{t,A}\|_{\infty}(1+3\|\kappa_{t,\delta, A}\|_{\infty}\mu_{t,\delta}(\mathcal{X}))}, $$ and $\Delta = 1 - \rho(\kappa_{t,\delta, A})$. We now estimate $\rho(\kappa_{t,\delta, A})$. Since $ \kappa_{t,\delta, A} \le \kappa_{t,\delta}$, by (ii) of Lemma \ref{lemma:prop-kernel} we have $\rho(\kappa_{t,\delta, A}) \le \rho(\kappa_{t,\delta})$. Note that the rate functions $(a_0, b_0, c_0)$ satisfy the conditions of Lemma \ref{theo:error-cont-norm}. Thus, recalling that $\delta_n = d_1 n^{-2/5}$, we have from this result that, for some $d_2 \in (0, \infty)$, $\rho(\kappa_{t,\delta})< \rho(\kappa_t)+ d_2(\log n)^3 n^{-1/5}$ for all $t \le T$. Here $\kappa_t = \kappa_t(a_0, b_0, c_0)$. Next, by Lemma \ref{theo:error-cont-int}, there exists $d_3 \in (0, \infty)$ such that $\rho(\kappa_t)<1-d_3(t_c-t)$ for all $t \in [0,t_c)$.
Combining these estimates, we have, for $t < t_c-n^{-\gamma}$, \begin{align*} \rho(\kappa_{t,\delta, A}) &< 1-d_3(t_c-t) +d_2(\log n)^3 n^{-1/5}. \end{align*} Recalling that $\gamma \in (0, 1/5)$, we have that, for some $n_2 \in (n_1, \infty)$ and $d_4 \in (0, \infty)$, $$ \rho(\kappa_{t,\delta, A}) \le 1-d_4 (t_c-t), \mbox { for all } t \in (0, t_c-n^{-\gamma}) \mbox{ and } n \ge n_2.$$ Using this estimate in \eqref{ins1532} and recalling that $\|\phi_{t,A}\|_\infty \le A_n$ and $\|\kappa_{t,\delta,A}\|_\infty \le T A_n^2$, we have that for some $d_5 \in (0, \infty)$, \begin{equation} \label{ins1625} \mathbb{P}\{ \mathcal{C}_{n,\delta,A}^{\scriptscriptstyle RG}(t) >m\} \le 2 \exp \{ -\frac{d_5}{(\log n)^3} (t_c-t)^2 m\}, \mbox{ for all } m \in \mathbb{N}, t \in (0, t_c-n^{-\gamma}) \mbox{ and } n \ge n_2.\end{equation} \textbf{Step 4: Collecting estimates:}\\ Combining \eqref{eqn:ineq-step1}, \eqref{eqn:prob-step1}, \eqref{ins1632}, \eqref{eqn:ineq-step2}, \eqref{eqn:prob-step2} and \eqref{ins1625}, we have \begin{eqnarray} \mathbb{P}\{ \mathcal{C}_n^{0}(t) > m\} &\le& \mathbb{P}\{ E_n^c\} +\mathbb{P}\{Y_{n,\delta} > A_n\} +\mathbb{P}\{ \mathcal{C}_{n,\delta,A}^{\scriptscriptstyle RG}(t) >m \}\nonumber \\ &\le & e^{-C(T) n^{1/5}} + C_2(\eta) n^{-\eta} +2 \exp\{-d_5 \frac{(t_c-t)^2}{(\log n)^3} m\}. \end{eqnarray} Finally, the result follows on replacing $m$ in the above display with $\frac{\eta (\log n)^4}{d_5 (t_c-t)^2}$. \ \ \rule{1ex}{1ex} \\ The following lemma will be used in the proof of Lemma \ref{theo:error-cont-int}. We will use notation and arguments similar to those in the proof of Proposition \ref{prop:reduced} above. \begin{Lemma} \label{lemma:logn-bound} Let $(a,b,c)$ be rate functions. Fix $t \in [0, T]$. Let $I_n^{\scriptscriptstyle IA}(t)$ denote the size of the largest component in ${\bf{IA}}_n(a,b,c)_t$. Suppose that $\rho_t(a,b,c)< 1$.
Then for some $C_0 \in (0, \infty)$, $$\mathbb{P}\{I_n^{\scriptscriptstyle IA}(t) > C_0 (\log n)^4\} \to 0 \mbox{ as } n \to \infty.$$ \end{Lemma} \textbf{Proof: } Let $\mathcal{C}_n^{\scriptscriptstyle IA}(t)$ be the first component of ${\bf{IA}}_n(a,b,c)_t$. Then an elementary argument (cf.\ the proof of Lemma \ref{lem2105}) shows that for $m > 0$, $$\mathbb{P}\{I_n^{\scriptscriptstyle IA}(t) > m \} \le T n \mathbb{P}\{ \mathcal{C}_n^{\scriptscriptstyle IA}(t) > m\}.$$ By an argument as in \eqref{eqn:prob-step2}, we have $$ \mathbb{P}\{\mathcal{C}_n^{\scriptscriptstyle IA}(t) > m\} \le \mathbb{P}\{ \mathcal{C}_n^{\scriptscriptstyle RG}(t) > m\}\\ \le \mathbb{P}\{ Y_n > A_n\}+\mathbb{P}\{ \mathcal{C}_{n,A}^{\scriptscriptstyle RG}(t) > m\}, $$ where $\mathcal{C}_n^{\scriptscriptstyle RG}$, $Y_n$ and $\mathcal{C}_{n,A}^{\scriptscriptstyle RG}$ correspond to $\mathcal{C}_{n,\delta}^{\scriptscriptstyle RG}$, $Y_{n,\delta}$ and $\mathcal{C}_{n,\delta,A}^{\scriptscriptstyle RG}$ introduced above in the proof of Proposition \ref{prop:reduced}, with $(a_{n,\delta}, b_{n,\delta}, c_{n,\delta})$ replaced by $(a,b,c)$. From Corollary \ref{cor1210} we can find $d_1 \in (0, \infty)$ such that $\mathbb{P}(Y_n \ge d_1 \log n) = O(n^{-2})$. Let $A_n = d_1 \log n$. Then, recalling that $\rho_t(a,b,c) < 1$, we have by Lemma \ref{theo:irg-first-com} that, for some $d_2 \in (0, \infty)$, $$ \mathbb{P}\{ \mathcal{C}_{n,A}^{\scriptscriptstyle RG}(t) > m\} < 2 \exp \{ - d_2m/ (\log n)^3\}.$$ Taking $m= \frac{2}{d_2} (\log n)^4$, we have $ \mathbb{P}\{ \mathcal{C}_{n,A}^{\scriptscriptstyle RG}(t) > m\}=O(n^{-2})$. Combining the above estimates we have $\mathbb{P}\{I_n^{\scriptscriptstyle IA}(t) > \frac{2}{d_2} (\log n)^4 \}=O(n^{-1})$.
The result follows.\ \ \rule{1ex}{1ex} \\ \subsection{Proof of Lemma \ref{theo:irg-first-com}: A branching process construction} \label{sec:model-branching} The key idea in the proof of Lemma \ref{theo:irg-first-com} is the coupling of the breadth first exploration of components in the IRG model with a certain continuous type branching process. This coupling will reduce the problem of establishing the estimate in Lemma \ref{theo:irg-first-com} to a similar bound on the total volume of the branching process (Lemma \ref{theo:branching}). We refer the reader to \cite{bollobas-riordan-janson}, where a similar coupling is constructed using a finite-type branching process in a setting where the type space $\mathcal{X}$ is finite. In this subsection we will give the proof of Lemma \ref{theo:irg-first-com} using Lemma \ref{theo:branching}. The proof of the latter result is given in Section \ref{sec:proof-branching}.\\ Throughout this section we will fix a basic structure $\{(\mathcal{X},\mathcal{T},\mu), \kappa, \phi\}$, where $\kappa$ is a symmetric kernel, and an $x_0 \in \mathcal{X}$. Let ${\bf{RG}}_n(\kappa)$ be the IRG constructed using this structure as in Section \ref{sec:model-irg}. We now describe a branching process associated with the above basic structure. The process starts in the $0$-th generation with a single vertex of type $x_0 \in \mathcal{X}$, and in the $k$-th generation a vertex $x$ has offspring, independently of the remaining $k$-th generation vertices, according to a Poisson point process on $\mathcal{X}$ with intensity $\kappa(x,y)\mu(dy)$; these offspring form the $(k+1)$-th generation. We denote this branching process by ${\bf {BP}}(x_0)$. \\ Denote by $\{\xi_i^{(k)}\}_{i=1}^{N_k} \subset \mathcal{X}$ the $k$-th generation of the branching process. Define the volume of the $k$-th generation as $G_k=\sum_{i=1}^{N_k} \phi(\xi_i^{(k)})$.
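For intuition, the generation-by-generation construction just described can be simulated directly. The sketch below uses toy assumptions, none of which are the model's $\kappa_t$, $\mu_t$, $\phi_t$: type space $[0,1]$, $\mu = \mbox{Unif}[0,1]$, a constant subcritical kernel $\kappa \equiv c < 1$, and weight $\phi \equiv 1$, so that $G_k = N_k$ and each vertex has a $\mbox{Poisson}(c)$ number of offspring with uniformly distributed types.

```python
import math
import random

# Toy simulation of BP(x0): a vertex of type x has offspring given by a
# Poisson point process with intensity kappa(x, y) mu(dy).  Illustrative
# assumptions (not the model's kappa_t, mu_t, phi_t): type space [0, 1],
# mu = Unif[0, 1], kappa(x, y) = c constant, phi = 1, so G_k = N_k and
# each vertex has Poisson(c) offspring with uniform types.

def poisson_sample(lam, rng):
    # Knuth's method; adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def generation_volumes(c, x0, rng, max_gen=10_000):
    gen, volumes = [x0], []
    for _ in range(max_gen):
        if not gen:
            break
        volumes.append(len(gen))        # G_k, since phi = 1
        gen = [rng.random()             # child types drawn from mu
               for _x in gen for _ in range(poisson_sample(c, rng))]
    return volumes

rng = random.Random(0)
# Sum of the generation volumes over many runs; for a subcritical constant
# kernel c < 1 the expected total progeny is 1/(1 - c), here 2 for c = 0.5.
totals = [sum(generation_volumes(0.5, 0.3, rng)) for _ in range(4000)]
mean_total = sum(totals) / len(totals)
```

With these choices the sum $\sum_k G_k$ of the generation volumes is the total progeny of a subcritical Galton--Watson process with Poisson offspring, whose mean $1/(1-c)$ is finite precisely in the subcritical case; the exponential tail bound stated below is the quantitative refinement of this finiteness.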
The \textbf{total volume} of ${\bf {BP}}(x_0)$ is defined as $G=G(x_0)=\sum_{k=0}^{\infty} G_k$.\\ The following lemma, proved at the end of the section, shows that $\mathcal{C}_n^{\scriptscriptstyle RG}(x_0)$ is stochastically dominated by $G(x_0)$. \begin{Lemma} \label{lemma:irg-branching} For all $m_1 \in \mathbb{N}$, $$\mathbb{P}\{ \mathcal{C}_n^{\scriptscriptstyle RG}(x_0) > m_1\} \le \mathbb{P}\{ G(x_0) > m_1 \}.$$ \end{Lemma} The next lemma, proved in Section \ref{sec:proof-branching}, shows that the estimate in Lemma \ref{theo:irg-first-com} holds with $\mathcal{C}_n^{\scriptscriptstyle RG}(x_0)$ replaced by $G(x_0)$. \begin{Lemma} \label{theo:branching} Suppose that $\rho(\kappa)=\|\mathcal{K}\|<1$. Then for all $m\in \mathbb{N}$ \begin{equation} \label{ins1531b} \mathbb{P} \{ G > m\} < 2 \exp \{-C_1\Delta^2 m\}, \end{equation} where $\Delta$ and $C_1$ are as in \eqref{ins2134}. \end{Lemma} Using the above lemmas we can now complete the proof of Lemma \ref{theo:irg-first-com}.\\ \noindent \textbf{Proof of Lemma \ref{theo:irg-first-com}: } The proof is immediate from Lemmas \ref{theo:branching} and \ref{lemma:irg-branching}. \ \ \rule{1ex}{1ex} \\ We conclude this section with the proof of Lemma \ref{lemma:irg-branching}.\\ \noindent {\bf Proof of Lemma \ref{lemma:irg-branching}:} Without loss of generality assume that $\mathcal{C}_n^{\scriptscriptstyle RG}(x_0)\neq 0$. We now explore the component $\mathcal{C}_n^{\scriptscriptstyle RG}(x_0)$ in the standard breadth-first manner. Define the sequence of \textbf{unexplored sets} $\{U_m\}_{m \ge 0}$ and the set of \textbf{removed vertices} $\{R_m\}_{m \ge 0}$ iteratively as follows: Let $R_0=\emptyset, U_0=\{x_0\}$ and $y_1=x_0$. Suppose we have defined $R_j, U_j$, $j = 0, 1, \cdots, m-1$ and $U_{m-1}= \{y_m, y_{m+1}, \cdots, y_{t_m}\}$. 
Then set \begin{align*} R_m &=R_{m-1} \cup \{y_m\}\\ U_m &=U_{m-1} \cup E_m \setminus \{y_m\} \end{align*} where \[E_m=\{ x \in \mathcal{X} : x \text{ is a neighbor of $y_m$ in ${\bf{RG}}_n(\kappa)$ (i.e. $\{x,y_m\}$ is an edge) and } x \notin R_{m-1} \cup U_{m-1} \}.\] If $U_{m-1}=\emptyset$ we set $U_j=E_j = \emptyset$ and $R_j=R_{m-1}$ for all $j \ge m$. Thus $U_m$ are the vertices at step $m$ that have been revealed by the exploration but whose neighbors have not been explored yet. Note that the number of vertices in $R_{m-1}\cup U_{m-1}$ equals $t_m$. Label the vertices in $E_m$ as $y_{t_m+1}, y_{t_m+2}, \ldots y_{t_m+|E_m|}$. With this labeling we have a well defined specification of the sequence $\{R_j, U_j, E_{j+1}\}_{j \in \mathbb{N}_0}$. Note that $\mathcal{C}_n^{\scriptscriptstyle RG}(x_0)= m_0$ if and only if $U_{m_0-1}\neq \emptyset, U_{m_0}=\emptyset$ and $|R_{m_0}|=m_0$. We will now argue that for every $m \in \mathbb{N}$, conditioned on $\{U_{m-1}, R_{m-1}\}$, $E_m$ is a Poisson point process on the space $\mathcal{X}$ with intensity $$\Lambda_m^*(dx)=\beta_m(x) (\kappa(y_m, x)\wedge n) \mu(dx),$$ where $\beta_m: \mathcal{X} \to [0, 1]$ is given as $\beta_1 \equiv 1$ and, for $m >1$, $$\beta_m(x)=\Pi_{y \in R_{m-1}}\left[1- \left(\frac{\kappa(y,x)}{n}\wedge 1\right)\right], \; x \in \mathcal{X} .$$ Consider first the case $m=1$. Denote the Poisson point process on $(\mathcal{X}, \mathcal{T})$ used in the construction of ${\bf{RG}}_n(\kappa,\mu)$ by $N_n(\kappa,\mu)$. From the complete independence property of Poisson point processes and the non-atomic assumption on $\mu$, conditioned on the existence of a vertex $x_0$ in $N_n(\kappa,\mu)$, $N_n(\kappa,\mu)\setminus \{x_0\}$ is once again a Poisson point process with intensity $n \cdot \mu(dx)$ on $\mathcal{X}$. Also, conditioned on $N_n(\kappa,\mu)$, a given type $x$ vertex in $N_n(\kappa,\mu)$ would be connected to $x_0$ with probability $(\kappa(x_0,x)/n)\wedge 1 $. 
Thus the neighbors of $x_0$, namely $E_1$, define a Poisson point process with intensity $(\kappa(x_0,x)\wedge n)\mu(dx)$. This proves the above statement on $E_m$ with $m=1$. Consider now $m > 1$. Since $\mu$ is non-atomic and $U_{m-1}\cup R_{m-1}$ consists of only finitely many elements, it follows that conditioned on vertices in $R_{m-1} \cup U_{m-1}$ belonging to $N_n(\kappa,\mu)$, $N_n(\kappa,\mu)\setminus (R_{m-1} \cup U_{m-1})$ is once again a Poisson point process on $\mathcal{X}$ with intensity $n \cdot \mu(dx)$. Note that a vertex $x \in N_n(\kappa,\mu)\setminus (R_{m-1} \cup U_{m-1})$ is in $E_m$ if and only if $x$ is a neighbor of $y_m$ and $x$ is not a neighbor of any vertex in $R_{m-1}$. So conditioned on $\{R_{m-1}, U_{m-1}\}$, the probability that $x$ is in $E_m$ equals \[(\kappa(y_m, x)/n \wedge 1) \cdot \Pi_{y \in R_{m-1}} [1- (\kappa(y,x)/n) \wedge 1].\] From this and the fact that the edges in ${\bf{RG}}_n(\kappa, \mu)$ are placed in a mutually independent fashion, it follows that the points in $E_m$, conditioned on $\{R_{m-1}, U_{m-1}\}$, describe a Poisson point process with intensity \begin{align*} & n \mu(dx) \cdot (\kappa(y_m, x)/n \wedge 1) \cdot \Pi_{y \in R_{m-1}} [1- (\kappa(y,x)/n) \wedge 1]\\ &= \Pi_{y \in R_{m-1}} [1- (\kappa(y,x)/n) \wedge 1] \cdot (\kappa(y_m,x) \wedge n)\mu(dx)\\ &=\Lambda_m^*(dx). \end{align*} Thus conditioned on $\{R_{m-1}, U_{m-1}\}$, $E_m$ is a Poisson point process with the claimed intensity. Next note that one can carry out an analogous breadth first exploration of ${\bf {BP}}(x_0)$. Denoting the corresponding vertex sets once more by $\{R_j, U_j, E_{j+1}\}_{j \in \mathbb{N}_0}$ we see that conditioned on $\{R_{m-1}, U_{m-1}\}$, $E_m$ is a Poisson point process with intensity $\kappa(y_m,x)\mu(dx)$. 
As $ 0 \le \beta_m(x) \le 1$ and $\kappa \wedge n \le \kappa$, we can now construct a coupling between ${\bf {BP}}(x_0)$ and $\mathcal{C}_n^{RG}(x_0)$ by first constructing ${\bf {BP}}(x_0)$ and then by iteratively rejecting each offspring of type $x$ in $E_m$ (and all of its descendants) with probability $$1-\frac{\beta_m(x)(\kappa(y_m,x)\wedge n)}{\kappa(y_m,x)}.$$ The lemma is now immediate. \ \ \rule{1ex}{1ex} \\ \subsection{Proof of Lemma \ref{theo:branching}} \label{sec:proof-branching} Assume throughout this subsection, without loss of generality, that $\max\{\|\phi\|_{\infty}, \|\kappa\|_{\infty}, \mu(\mathcal{X})\} < \infty$. Recall that $\kappa$ is a symmetric kernel. Define, for $k \in \mathbb{N}$, the kernels $\kappa^{(k)}$ recursively as follows. $\kappa^{(1)}=\kappa$ and for all $k \ge 1$ $$\kappa^{(k+1)}(x,y)=\int_{\mathcal{X}} \kappa^{(k)}(x,u)\kappa(u,y) \mu(du).$$ Recall that $\{\xi_i^{(k)}\}_{i=1}^{N_k}$ denotes the $k$-th generation of ${\bf {BP}}(x_0)$ and note that it describes a Poisson point process with intensity $\kappa^{(k)}(x_0,y) \mu(dy)$. This observation allows us to compute exponential moments of the form in the lemmas below. \begin{Lemma} \label{theo:bp-onestep} Let $g: \mathcal{X} \to \mathbb{R}_+$ be a bounded measurable map. Fix $\delta>0$ and let $0<\epsilon < \log(1+\delta)/\|g\|_{\infty}$. Then $$\mathbb{E} \exp\{\epsilon \sum_{i=1}^{N_1} g(\xi_i^{(1)})\} \le \exp\{ \epsilon (1+\delta) (\mathcal{K} g)(x_0)\}.$$ \end{Lemma} \textbf{Proof:} Fix $\delta, \epsilon$ as in the statement of the lemma. 
By standard formulas for Poisson point processes \begin{align*} \mathbb{E} \exp\{\epsilon \sum_{i=1}^{N_1} g(\xi_i^{(1)})\} &= \exp \{ \int_\mathcal{X} \kappa(x_0,u) (e^{\epsilon g(u)}-1) \mu(du)\}\\ &\le \exp \{ \int_\mathcal{X} \kappa(x_0,u) (1+\delta) \epsilon g(u) \mu(du) \}\\ &= \exp\{ \epsilon (1+\delta) (\mathcal{K} g)(x_0)\}, \end{align*} where the middle inequality follows on noting that $e^{\epsilon g(u)}-1 \le (1+\delta) \epsilon g(u)$, whenever $\epsilon g(u) < \log(1+\delta)$.\\ \ \ \rule{1ex}{1ex}\\ Using the above lemma and a recursive argument, we obtain the following result. Recall that $G_k = \sum_{i=1}^{N_k} \phi(\xi_i^{(k)})$ denotes the volume of generation $k$, where volume is measured using the function $\phi$. \begin{Lemma} \label{theo:bp-keybound} Fix $k \in \mathbb{N}$ and $\delta > 0$. Given a weight function $\phi$, define $\phi_0=\phi + \sum_{i=1}^k (1+\delta)^i \mathcal{K}^i \phi$. Then for all $\epsilon \in (0, \frac{\log(1+\delta)}{\|\phi_0\|_{\infty}})$ \begin{equation} \label{ins2253}\mathbb{E} \exp \{ \epsilon \sum_{i=0}^k G_i \} \le \exp \{ \epsilon [ \phi(x_0) + \sum_{i=1}^k (1+\delta)^i \mathcal{K}^i \phi (x_0) ] \} = \exp \{\epsilon \phi_0(x_0)\}.\end{equation} \end{Lemma} \textbf{Proof:} Define $\{\phi_i\}_{i=0}^k$ using a backward recursion, as follows. Let $\phi_k=\phi$. For $0 \le i < k$ $$\phi_{i}=\phi + (1+\delta) \mathcal{K} \phi_{i+1}.$$ Let $\mathcal{F}_l = \sigma \{ \{\xi_i^{(k)}\}_{i=1}^{N_k}, k = 1, \cdots, l\}$. We will show recursively, as $l$ goes from $k$ to $0$, that \begin{equation} \mathbb{E} [ \exp \{ \epsilon \sum_{i=l}^k G_i \} | \mathcal{F}_l] \le \exp \{ \epsilon \sum_{i=1}^{N_l} \phi_l (\xi_i^{(l)}) \}. \label{eqn:bp-induction} \end{equation} The lemma is then immediate on setting $l=0$ in the above equation.\\ When $l=k$, \eqref{eqn:bp-induction} is in fact an equality, and so \eqref{eqn:bp-induction} holds trivially for $k$. 
Suppose now that \eqref{eqn:bp-induction} is true for $l+1$, for some $l \in \{0, 1, \cdots, k-1\}$. Then \begin{align*} \mathbb{E} [ \exp \{ \epsilon \sum_{i=l}^k G_i \} | \mathcal{F}_l] &= \exp \{ \epsilon G_l\} \mathbb{E}[ \mathbb{E} [ \exp \{ \epsilon \sum_{i=l+1}^k G_i \} | \mathcal{F}_{l+1}] | \mathcal{F}_l]\\ &\le \exp \{ \epsilon G_l\} \mathbb{E}[ \exp \{ \epsilon \sum_{i=1}^{N_{l+1}} \phi_{l+1} (\xi_i^{(l+1)}) \}| \mathcal{F}_l] \hspace{.5in}\\ &\le \exp \{ \epsilon G_l\} \exp \{ \epsilon (1+\delta) \sum_{i=1}^{N_l} \mathcal{K} \phi_{l+1}(\xi_i^{(l)})\} \hspace{.5in} \\ &= \exp \{ \epsilon \sum_{i=1}^{N_l} \phi (\xi_i^{(l)})\} \exp \{ \epsilon (1+\delta) \sum_{i=1}^{N_l} \mathcal{K} \phi_{l+1}(\xi_i^{(l)})\}\\ &= \exp \{ \epsilon \sum_{i=1}^{N_l} [\phi (\xi_i^{(l)}) + (1+\delta)\mathcal{K} \phi_{l+1}(\xi_i^{(l)})]\} \\ &= \exp \{ \epsilon \sum_{i=1}^{N_l} \phi_l (\xi_i^{(l)}) \}. \end{align*} For the first inequality above we have used the fact that by assumption \eqref{eqn:bp-induction} holds for $l+1$, and for the second inequality we have applied Lemma \ref{theo:bp-onestep} along with the observation that $\epsilon \|\phi_l\|_{\infty} < \log(1+\delta)$ holds for all $l=1,2,\ldots,k$, since for all $l$, $\phi_l \le \phi_0$ and $\epsilon \|\phi_0\|_{\infty} < \log(1+\delta)$. This completes the recursion and the result follows. \ \ \rule{1ex}{1ex}\\ To emphasize that $\phi_0$ in the above lemma depends on $\delta$ and $k$, write $\phi_0 = \phi^{(k)}_{\delta}$. Note that $\phi^{(k)}_{\delta}$ is increasing in $k$. Let $\phi^*_{\delta} = \lim_{k \to \infty} \phi^{(k)}_{\delta}$. The following corollary follows on sending $k \to \infty$ in \eqref{ins2253}. \begin{Corollary} \label{cor2310} Fix $\delta > 0$ and $ \epsilon \in (0, \log(1+\delta)/\|\phi^*_{\delta}\|_{\infty})$. Then \begin{equation} \mathbb{E} \exp\{ \epsilon G\} \le \exp\{ \epsilon \phi^*_{\delta}(x_0)\}. 
\label{eqn:bp-mct} \end{equation} \end{Corollary} \begin{Lemma} \label{theo:bp-contraction} For $n \in \mathbb{N}$ and $x \in \mathcal{X}$ $$\mathcal{K}^n \phi (x) \le \rho^{n-1} \|f_x\|_2 \|\phi\|_2, \mbox{ where } f_x(\cdot)=\kappa(x,\cdot) \mbox{ and } \rho = \rho(\kappa).$$ \end{Lemma} \textbf{Proof:} Note that \begin{eqnarray*} \mathcal{K}^n \phi (x) &=&\int_\mathcal{X} \kappa^{(n)}(x,u)\phi(u) \mu(du) \le \| \int_\mathcal{X} f_x(u) \kappa^{(n-1)}(u,\cdot) \mu(du)\|_2 \|\phi\|_2\\ &=&\| \mathcal{K}^{n-1} f_x\|_2 \|\phi\|_2 \le \rho^{n-1} \|f_x\|_2 \|\phi\|_2. \end{eqnarray*} \ \ \rule{1ex}{1ex} \\ Now we can finish the proof of Lemma \ref{theo:branching}.\\ \textbf{Proof of Lemma \ref{theo:branching}:} Observing that $\|\phi\|_2 \le \|\phi\|_{\infty} \mu(\mathcal{X})^{1/2} $ and $\|f_x\|_2 < \|\kappa\|_{\infty}\mu(\mathcal{X})^{1/2}$, we have for $\delta \in (0, \infty)$ such that $(1+\delta)\rho<1$, and $x \in \mathcal{X}$ \begin{align*} \phi^*_{\delta}(x) &= \phi(x)+\sum_{i=1}^{\infty} (1+\delta)^i \mathcal{K}^i \phi (x)\\ &\le \|\phi\|_{\infty} + \|f_x\|_2 \|\phi\|_2 (\sum_{i=1}^{\infty} (1+\delta)^i \rho^{i-1}) \hspace{.5in} \\ &\le \|\phi\|_{\infty} + \|\kappa\|_{\infty}\|\phi\|_{\infty}\mu(\mathcal{X})\frac{(1+\delta)}{1-(1+\delta)\rho},\end{align*} where the first inequality above follows from Lemma \ref{theo:bp-contraction}. Setting $\delta = \frac{\Delta}{2}$, we see $$(1+\delta)\rho=(1+\Delta/2)(1-\Delta)<1-\Delta/2.$$ Using this and that $\Delta < 1$, we have $$ \phi^*_{\delta}(x)\le \|\phi\|_{\infty} \left(1+ \frac{3\|\kappa\|_{\infty}\mu(\mathcal{X})}{\Delta} \right)\equiv d_1.$$ Let $\epsilon = \log(1+\delta)/(2d_1)$. Clearly $\epsilon \in (0, \log(1+\delta)/\|\phi^*_{\delta}\|_{\infty})$. Using Corollary \ref{cor2310} we now have that \begin{align*} \mathbb{P}\{ G > m \} &\le \exp\{-\epsilon m\} \exp\{\epsilon \phi^*_{\delta}(x_0)\} \\ &\le \exp\{-\epsilon m\} \exp \{\frac{\log(1+\delta)}{2}\}\\ &\le2 \exp \{-\frac{\log(1+\delta)}{2d_1}m\}. 
\end{align*} Finally, noting that $\log(1+\delta) \ge \frac{\delta}{2}$, we have $$\frac{\log(1+\delta)}{2d_1} \ge \frac{\Delta^2}{8\|\phi\|_{\infty}(1 + 3\|\kappa\|_{\infty}\mu(\mathcal{X}))}.$$ The result follows. \ \ \rule{1ex}{1ex} \\ \subsection{Proof of Lemma \ref{theo:error-cont-norm}} \label{sec:analysis-ker} We begin with a general result for integral operators on general type spaces. \begin{Lemma} \label{lemma: change-measure} Let $\nu, \mu$ be two mutually absolutely continuous finite measures on a measure space $(\mathcal{X}, \mathcal{T})$. Let $g=d\nu/d\mu$. Let $\kappa: \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ be a kernel. Define another kernel $\kappa': \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ as $$\kappa'(x,y)=\sqrt{\frac{g(x)}{g(y)}}\kappa(x,y), \; x,y \in \mathcal{X} . $$ Denote by $\mathcal{K}$ [ resp. $\mathcal{K}'$] the integral operator associated with $\kappa$ [resp. $\kappa'$] on $L^2(\mathcal{X}, \mathcal{T}, \nu)$ [resp. $L^2(\mathcal{X}, \mathcal{T}, \mu)$]. Then $\|\mathcal{K}\|_{L^2(\nu)} = \|\mathcal{K}'\|_{L^2(\mu)}$. \end{Lemma} \textbf{Proof:} Note that the operator $\mathcal{A}: L^2(\mathcal{X},\mathcal{T},\nu) \to L^2(\mathcal{X}, \mathcal{T},\mu)$ defined as $(\mathcal{A} f) =\sqrt{g} f$, $f \in L^2(\nu)$, is an isometry. Also, for $f \in L^2(\mu)$ $$( \mathcal{A} \mathcal{K} \mathcal{A} ^{-1} f)(x)= \sqrt{g(x)} \int_\mathcal{X} \kappa(x,y)\frac{1} {\sqrt{g(y)}} f(y) \mu(dy) = \int_\mathcal{X} \kappa'(x,y)f(y)\mu(dy).$$ Thus $\mathcal{A} \mathcal{K} \mathcal{A} ^{-1} = \mathcal{K}'$. The result now follows on noting that $\|\mathcal{A} \mathcal{K} \mathcal{A} ^{-1}\|_{L^2(\mu)} = \|\mathcal{K}\|_{L^2(\nu)}$. \ \ \rule{1ex}{1ex} \\ For the rest of this subsection we will take $(\mathcal{X}, \mathcal{T})$ to be the cluster space introduced in Section \ref{sec:model-irg} (see above \eqref{eqn:phi-def}). 
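As a finite-dimensional sanity check of the change of measure lemma above (with illustrative numbers, and with the operators realized as in the displayed computation in its proof, i.e.\ through integration against $\mu(dy)$), the following sketch verifies, on a two-point type space, both the similarity $\mathcal{A} \mathcal{K} \mathcal{A}^{-1} = \mathcal{K}'$ and the equality of the two weighted operator norms.

```python
import math

def spectral_norm(M):
    # largest singular value of a 2x2 matrix, via the eigenvalues of M^T M
    P = [[sum(M[k][i] * M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    tr = P[0][0] + P[1][1]
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return math.sqrt((tr + math.sqrt(max(tr * tr - 4.0 * det, 0.0))) / 2.0)

mu = [0.3, 0.7]                         # reference measure (illustrative)
g = [2.0, 0.5]                          # g = d(nu)/d(mu)
nu = [g[i] * mu[i] for i in range(2)]
kappa = [[1.0, 2.0], [2.0, 3.0]]        # symmetric kernel (illustrative)
kappa1 = [[math.sqrt(g[i] / g[j]) * kappa[i][j] for j in range(2)] for i in range(2)]

# matrices of the two operators, integrating against mu as in the proof above
K = [[kappa[i][j] * mu[j] for j in range(2)] for i in range(2)]
K1 = [[kappa1[i][j] * mu[j] for j in range(2)] for i in range(2)]

# conjugating by the isometry f -> sqrt(weight) * f turns each weighted
# L^2 operator norm into a plain spectral norm
norm_K = spectral_norm([[math.sqrt(nu[i]) * K[i][j] / math.sqrt(nu[j])
                         for j in range(2)] for i in range(2)])
norm_K1 = spectral_norm([[math.sqrt(mu[i]) * K1[i][j] / math.sqrt(mu[j])
                          for j in range(2)] for i in range(2)])
```

Here `norm_K` and `norm_K1` agree (up to floating-point rounding), as the lemma asserts.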
Given rate functions $(a,b,c)$, $\mu_t(a,b,c), \kappa_t(a,b,c), \phi_t(a,b,c), \mathcal{K}_t(a,b,c), \rho_t(a,b,c)$ are as introduced above Lemma \ref{theo:error-cont-norm}. \begin{Lemma} \label{lemma:RN-derivative} Let $(a_i, b_i, c_i)$, $i=1,2$ be two sets of rate functions. Suppose that $a_1, c_1$ are strictly positive on $(0, T]$ and $a_2 \le a_1, c_2 \le c_1$ on $[0, T]$. Also suppose that for some $ \delta \in (0, e^{-T}/T)$, $c_1 \le c_2+\delta$ on $[0, T]$. Fix $t \in [0, T]$. Let $\mu_i = \mu_t(a_i, b_i, c_i)$, $i=1,2$. Then $\mu_2 \ll \mu_1$ and \begin{equation*} \frac{d\mu_2}{d\mu_1} (s,w)=\frac{a_2(s)}{a_1(s)} \times \exp \{ -\int_{s}^{T} w(u)[ c_2(u)-c_1(u)]du \} \times \Pi_{i=1}^{w(T)-2} \left(\frac{c_2(\tau_i)}{c_1(\tau_i)} \right), \; (s,w) \in [0, t] \times \mathcal{W} . \end{equation*} where $\tau_i=\tau_i(s, w)$ is ($\mu_1$ a.s.) the $i^{th}$ jump of $w$ after time $s$. \end{Lemma} \textbf{Proof:} Recall the probability measure $\nu_s$ on $\mathcal{W}$ introduced below \eqref{eqn:phi-def}. Write $\nu^i_s = \nu_s(a_i, b_i, c_i)$, $i=1,2$. Note that (see \eqref{eqn:mu-def}), for $s \in [0, t]$, $\mu_i(ds\, dw) = \nu^i_s(dw) a_i(s) ds$, $i=1,2$. Thus to prove the result it suffices to show that, for all $s \in [0, t]$, $\nu^2_s \ll \nu^1_s$ and $\frac{d\nu^2_s}{d\nu^1_s} = L_s^T$, where, for $t \in [s, T]$, \[ L_s^t(w)= \exp \{ -\int_{s}^{t} w(u)[ c_2(u)-c_1(u)]du \} \times \Pi_{i \ge1}\left(\frac{c_2(\sigma_i)}{c_1(\sigma_i)} {\bf1}_{\{\sigma_i \le t\}}\right),\] and $\sigma_i(w)$ is ($\nu_s^1$ a.s.) the $i^{th}$ jump of $w$. For this, it suffices in turn to show that $\int_{\mathcal{W}} L_s^t(w) \nu_s^1(dw) = 1$ for all $t \in [s, T]$ (see eg. Theorem T3, p.166 of \cite{Bremaud}). 
The process $\{L_s^t\}_{ t \in [s, T]}$ on $\mathcal{W}$ with the canonical filtration is a local martingale under $\nu_s^1$ (see Theorem T2, p.166, \cite{Bremaud}) so to check the martingale property, it suffices to check (see (2.4) on page 166, and Theorem T8 on page 27 of \cite{Bremaud}), that \[ \int_{\mathcal{W}} [\int_s^T L_s^u(w) |c_1(u)-c_2(u)| du ] \nu_s^1(dw)< + \infty. \] Note that $L_s^u(w) \le \exp \{ T w(T) \delta\} $ since $c_1- \delta \le c_2 \le c_1$. Thus \begin{align*} \int_{\mathcal{W}} [\int_s^T L_s^u(w) |c_1(u)-c_2(u)| du ] \nu_s^1(dw) &\le 2T \int_{\mathcal{W}} e^{ T w(T) \delta } \nu_0^1(dw). \end{align*} Note that, under $\nu_0^1$, $w(T)$ is stochastically bounded by the sum of two independent Geom($e^{-T}$) (see proof of Lemma \ref{theo:error-wst}). Thus the last integral is bounded by $(\mathbb{E} e^{TZ\delta})^2$, where $Z$ is a Geom($e^{-T}$) random variable. This expectation is finite since $T\delta < e^{-T}$. The result follows. \ \ \rule{1ex}{1ex} \\ Proof of Lemma \ref{theo:error-cont-norm} will make use of parts (iv) and (v) of Lemma \ref{lemma:prop-kernel}. In order to use (v) we will need the kernels to be suitably bounded. For that we use a truncation of the kernels, the associated error of which is estimated through the following lemma.\\ \begin{Lemma} \label{lemma:kappa-trunc1} For $A \in (0, \infty)$, $t \in [0, T]$ and rate functions $(a,b,c)$, define the kernel $\kappa_{A,t}(a,b,c) \equiv \kappa_{A,t}: \mathcal{X} \times \mathcal{X} \to [0, \infty)$ as $$\kappa_{A,t}({\bf{x}}, {\bf{y}}) = \kappa_{t}({\bf{x}}, {\bf{y}}) 1_{\{w(T) \le A, \tilde w(T) \le A\}},\; \mbox{ where } \kappa_t = \kappa_t(a,b,c), {\bf{x}} = (s,w), {\bf{y}} = (r, \tilde w).$$ Denote by $\mathcal{K}_{A,t}$ the integral operator corresponding to $\kappa_{A,t}$ on $L^2(\mathcal{X}, \mathcal{T}, \mu_t)$ and $\rho(\kappa_{A,t})$ its norm, where $\mu_t = \mu_t(a,b,c)$. 
Then there exist $A_0, C_3, C_4 \in (0, \infty)$ such that $$\rho(\kappa_t) - C_3e^{-C_4A} \le \rho(\kappa_{A,t}) \le \rho(\kappa_t),$$ for all rate functions $(a,b,c)$, $t \in [0, T]$ and $A \ge A_0$. \end{Lemma} \textbf{Proof:} We will suppress $t$ in the notation. Since $\kappa_A \le \kappa$, from Lemma \ref{lemma:prop-kernel} (ii) and (iii) we have $$\rho(\kappa) - \rho (\kappa-\kappa_A) \le \rho(\kappa_A) \le \rho(\kappa).$$ Consequently, $$\rho(\kappa) - \rho (\kappa_A) \le \rho (\kappa-\kappa_A) \le \| \kappa-\kappa_A\|_2,$$ where $\|\kappa\|_2$ denotes the $L^2(\mu \times \mu)$ norm of the kernel $\kappa$. Note that for ${\bf{x}}, {\bf{y}} \in \mathcal{X}$ \begin{equation} \label{eqn:kappa-bound} \kappa({\bf{x}},{\bf{y}})=\int_0^t w(u)\tilde w(u) b(u)du \le T w(T)\tilde w(T), \mbox{ for } \mu\times\mu \mbox{ a.e. } ({\bf{x}}, {\bf{y}}) = ((s,w), (r,\tilde w)). \end{equation} Let $\Lambda=\{ (s, w)\in \mathcal{X} : w(T) > A\}$. Then for fixed ${\bf{x}}=(s, w) \in \Lambda^c$ \begin{align*} \int_\mathcal{X} (\kappa({\bf{x}},{\bf{y}})-\kappa_A({\bf{x}},{\bf{y}}))^2 \mu(d{\bf{y}}) &\le \int_\mathcal{X} (\kappa({\bf{x}},{\bf{y}}) {\bf1}_\Lambda ({\bf{y}}) )^2 \mu(d{\bf{y}})\\ &\le\int_{\mathcal{X}} [ T^2 w^2(T)\tilde w^2(T){\bf1}_{\{\tilde w(T) > A \}}] \mu(dr d\tilde w)\\ &\le T^3 w^2(T) \mathbb{E}_0 [w^2(T){\bf1}_{\{w(T) > A \}}], \end{align*} where in the second line we have used \eqref{eqn:kappa-bound} and $\mathbb{E}_0$ in the third line denotes the expectation corresponding to the probability measure $\nu_0$ on $\mathcal{W}$. Next, for fixed ${\bf{x}}=(s, w) \in \Lambda$, we have in a similar manner \begin{equation*} \int_\mathcal{X} (\kappa({\bf{x}},{\bf{y}})-\kappa_A({\bf{x}},{\bf{y}}))^2 \mu(d{\bf{y}}) \le T^3 w^2(T) \mathbb{E}_0 [w^2(T)]. 
\end{equation*} Thus for any ${\bf{x}}=(s,w) \in \mathcal{X}$, \begin{equation*} \int_\mathcal{X} (\kappa({\bf{x}},{\bf{y}})-\kappa_A({\bf{x}},{\bf{y}}))^2 \mu(d{\bf{y}}) \le T^3 w^2(T) \mathbb{E}_0 [w^2(T){\bf1}_{\{w(T) > A \}}] + {\bf1}_{\{w(T) > A \}} T^3 w^2(T) \mathbb{E}_0 [w^2(T)]. \end{equation*} Integrating with respect to ${\bf{x}} \in \mathcal{X}$, and noting that $\nu_s(w(T) \ge \alpha) \le \nu_0(w(T) \ge \alpha)$, $\alpha \ge 0$, we have \begin{align*} \|\kappa-\kappa_A\|_2^2 &=\int_\mathcal{X} \int_\mathcal{X} (\kappa({\bf{x}},{\bf{y}})-\kappa_A({\bf{x}},{\bf{y}}))^2 \mu(d{\bf{y}}) \mu(d{\bf{x}})\\ &\le T^3 \mathbb{E}_0 [w^2(T){\bf1}_{\{w(T) > A \}}] \int_0^t a(s) \mathbb{E}_0 [ w^2(T) ]ds+ T^3 \mathbb{E}_0 [w^2(T)]\int_0^t a(s) \mathbb{E}_0[{\bf1}_{\{w(T) > A \}} w^2(T)] ds\\ &\le 2 T^4 \mathbb{E}_0 [w^2(T)] \mathbb{E}_0 [w^2(T){\bf1}_{\{w(T) > A \}}]. \end{align*} As noted in the proof of Lemma \ref{theo:error-wst}, under $\nu_0$, $w(T)$ is stochastically dominated by $ Z^*_1 +Z^*_2$, where $Z^*_1, Z^*_2$ are two independent copies of Geom($e^{-T}$). Therefore $$ \mathbb{E}_0 [w^2(T)] \le \mathbb{E} (Z^*_1 +Z^*_2)^2 =2 e^{2T}(3-e^{-T})<6e^{2T}$$ and for a suitable $A_0 \in (0, \infty)$ \begin{equation*} \mathbb{E}_0 [w^2(T){\bf1}_{\{w(T) > A \}}] \le \sum_{k>A} k^2 \times (k-1) e^{-2T}(1-e^{-T})^{k-2} \le (1-e^{-T})^{A/2} \end{equation*} for all $A \ge A_0$. Combining the above estimates, for all $A \ge A_0$ $$\|\kappa-\kappa_A\|_2^2 < 12 T^4 e^{2T}(1-e^{-T})^{A/2}.$$ The result follows. \ \ \rule{1ex}{1ex} \\ In the proof of Lemma \ref{theo:error-cont-norm} we will apply Lemma \ref{lemma: change-measure} to measures $\mu_1, \mu_2$ of the form in Lemma \ref{lemma:RN-derivative}. This latter lemma shows that (under the conditions of the lemma) $\mu_2\ll\mu_1$. However, to use Lemma \ref{lemma: change-measure} we need the two measures to be mutually absolutely continuous. 
To treat this difficulty we will use an additional truncation introduced in the lemma below and the elementary fact in Lemma \ref{elemz}. \begin{Lemma} \label{lemma:kappa-trunc2} Given rate functions $(a,b,c)$ and $A \in (0, \infty)$ and $t \in [0, T]$, let $\kappa_t, \kappa_{A,t}$ be as in Lemma \ref{lemma:kappa-trunc1}. For $(s,w) \in \mathcal{X}$, let $\tau(s, w) = \inf \{u > s: w(u)-w(u-) \neq 0\}$. For $\delta > 0$, let $\Lambda_{\delta} = \{(s,w) \in \mathcal{X}: s \le \delta \mbox{ and } \tau(s, w) \le s+ \delta \}$. Define the kernel $\kappa_{A, \delta, t}$ as $$ \kappa_{A, \delta, t}({\bf{x}}, {\bf{y}})=\kappa_{A,t}({\bf{x}},{\bf{y}}) {\bf 1}_{\Lambda_{\delta}^c}({\bf{x}}){\bf 1}_{\Lambda_{\delta}^c}({\bf{y}}).$$ Then there exists $C_5 \in (0, \infty)$ such that for all rate functions $(a,b,c)$ and $\delta, A \in (0, \infty)$ $$\rho(\kappa_{A,t}) -C_5 A^2 \delta \le \rho(\kappa_{A,\delta,t}) \le \rho(\kappa_{A,t}).$$ \end{Lemma} \textbf{Proof:} Once more we will suppress $t$ from the notation. As in the proof of Lemma \ref{lemma:kappa-trunc1}, we have $$\rho(\kappa_A) -\|\kappa_A-\kappa_{A,\delta}\|_2 \le \rho(\kappa_{A,\delta}) \le \rho(\kappa_A).$$ For ${\bf{x}} \in \Lambda_{\delta}^c$, \begin{align*} \int_\mathcal{X} ( \kappa_A({\bf{x}},{\bf{y}})-\kappa_{A,\delta} ({\bf{x}},{\bf{y}}))^2 \mu(d{\bf{y}}) &\le \int_{\mathcal{X}} \kappa^2_A({\bf{x}},(r,\tilde w)) {\bf1}_{\{\tau \le r+\delta \}} {\bf1}_{\{r \le \delta \}} \mu(dr d\tilde w)\\ &\le T^2A^4 \int_0^\delta \nu_r \{ \tau(r,w) \le r+ \delta \} dr, \end{align*} where the above inequality uses the bound $\kappa_A \le TA^2$. 
Also, for fixed ${\bf{x}} \in \Lambda_{\delta}$, $$\int_\mathcal{X} ( \kappa_A({\bf{x}},{\bf{y}})-\kappa_{A,\delta} ({\bf{x}},{\bf{y}}))^2 \mu(d{\bf{y}}) \le T^2A^4 \cdot T.$$ Thus for all ${\bf{x}} \in \mathcal{X}$, $$\int_\mathcal{X} ( \kappa_A({\bf{x}},{\bf{y}})-\kappa_{A,\delta} ({\bf{x}},{\bf{y}}))^2 \mu(d{\bf{y}}) \le T^2A^4 \int_0^\delta \nu_r \{ \tau(r,w) \le r+ \delta \} dr + {\bf1}_{\Lambda_{\delta}}({\bf{x}}) T^3 A^4.$$ Finally $$\|\kappa_A-\kappa_{A,\delta}\|_2^2 = \int_ \mathcal{X} \int_\mathcal{X} ( \kappa_A({\bf{x}},{\bf{y}})-\kappa_{A,\delta} ({\bf{x}},{\bf{y}}))^2 \mu(d{\bf{y}}) \mu(d{\bf{x}}) \le 2T^3A^4 \int_0^\delta \nu_r \{ \tau(r,w) \le r+ \delta \} dr.$$ The result follows on observing that $\nu_r \{ \tau(r,w) \le r+ \delta \}=1-\exp\{-\int_r^{r+\delta} 2c(u)du \} \le 2 \delta$. \ \ \rule{1ex}{1ex} \\ We will use the following elementary lemma. \begin{Lemma} \label{elemz} Let $\gamma_0$, $\gamma_1$ be finite measures on a measure space $(\mathcal{X}, \mathcal{T})$ such that $\gamma_0 \ll \gamma_1$. Let $G \in \mathcal{T}$ be such that $\{{\bf{x}}: d\gamma_0/ d\gamma_1 > 0 \} \supset G$. For $i=0,1$, let $\gamma_i^G$ be the restriction of $\gamma_i$ to $G$: $\gamma_i^G(\cdot) = \gamma_i(\cdot \cap G)$. Then $\gamma_0^G$ and $\gamma_1^G$ are mutually absolutely continuous with $\frac{d\gamma_0^G}{ d\gamma_1^G}({\bf{x}}) = \frac{d\gamma_0}{ d\gamma_1}({\bf{x}}) {\bf 1}_G({\bf{x}})$ and $\frac{d\gamma_1^G}{ d\gamma_0^G}({\bf{x}}) = (\frac{d\gamma_0}{d\gamma_1}({\bf{x}}))^{-1} {\bf 1}_G({\bf{x}})$, a.s. $\gamma_1^G$ and $\gamma_0^G$. \end{Lemma} Lemma \ref{theo:error-cont-norm} requires establishing an estimate of the form $C_2(-\log \delta)^3 \delta^{1/2}$ for both $|\rho_t- \rho_{+,t}|$ and $|\rho_t-\rho_{-,t}|$. Proofs for the two cases are similar, and so we only provide details for $|\rho_t-\rho_{-,t}|$ and leave the other case to the reader. 
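The operator-norm perturbation bounds used above (for instance $\rho(\kappa) - \rho(\kappa_A) \le \|\kappa-\kappa_A\|_2$ in the proofs of Lemmas \ref{lemma:kappa-trunc1} and \ref{lemma:kappa-trunc2}) have a transparent matrix analogue: the difference of spectral norms of two kernels is at most the Hilbert--Schmidt (Frobenius) norm of their difference. The following sketch, with illustrative $2 \times 2$ kernels, checks this numerically.

```python
import math

def spectral_norm(M):
    # largest singular value of a 2x2 matrix, via the eigenvalues of M^T M
    P = [[sum(M[k][i] * M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    tr = P[0][0] + P[1][1]
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return math.sqrt((tr + math.sqrt(max(tr * tr - 4.0 * det, 0.0))) / 2.0)

def frobenius_norm(M):
    return math.sqrt(sum(M[i][j] ** 2 for i in range(2) for j in range(2)))

kappa = [[1.0, 2.0], [2.0, 3.0]]        # a symmetric "kernel" (illustrative)
kappa_trunc = [[1.0, 0.0], [0.0, 3.0]]  # its "truncation"
diff = [[kappa[i][j] - kappa_trunc[i][j] for j in range(2)] for i in range(2)]
gap = abs(spectral_norm(kappa) - spectral_norm(kappa_trunc))
```

Here `gap` is bounded by `frobenius_norm(diff)`, the matrix form of the $\|\cdot\|_2$ control used repeatedly in this subsection.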
Given $\delta \in (0, \infty)$ and rate functions $(a,b,c)$ as in the statement of Lemma \ref{theo:error-cont-norm}, we denote $(a_{\delta}, b_{\delta}, c_{\delta}) = ((a-\delta)^+, (b-\delta)^+, (c-\delta)^+)$. Note that \begin{eqnarray} |\rho_t-\rho_{-,t}| &=& |\rho_t(a, b, c)-\rho_t(a_{\delta},b_{\delta},c_{\delta})|\nonumber \\ & \le& |\rho_t(a, b, c)-\rho_t(a_{\delta},b,c_{\delta})|+|\rho_t(a_{\delta}, b, c_{\delta})-\rho_t(a_{\delta},b_{\delta},c_{\delta})|.\label{eqn:only-use-once1} \end{eqnarray} We treat the first term on the right side in Lemma \ref{lemtria} while the second term is estimated in Lemma \ref{lemtrib}. \begin{Lemma}\label{lemtria} Let $(a,b,c)$ be rate functions as in the statement of Lemma \ref{theo:error-cont-norm}. There exist $C_6, \delta_2 \in (0, \infty)$ such that for all $\delta \in (0,\delta_2)$ and $t \in [0, T]$ $$ |\rho_t(a,b,c)-\rho_t(a_{\delta},b,c_{\delta})| < C_6 (-\log \delta)^3 \delta^{1/2}.$$ \end{Lemma} \textbf{Proof:} Since the kernel $\kappa_t(a,b,c)$ does not depend on $a,c$, $\kappa_t(a,b,c) = \kappa_t(a_{\delta},b,c_{\delta})=\kappa_t$. Henceforth we suppress $t$ from the notation. Let $\rho = \rho_t(a,b,c)$ and $\rho_{\delta} = \rho_t(a_{\delta},b,c_{\delta})$. For $\varepsilon, A > 0$, let $D \subset \mathcal{X}$ be defined as $$D\equiv D_{\varepsilon} =\{(s,w): w(T)>A \mbox{ or } (\tau(s,w) \le s+\varepsilon \mbox{ and } s \le \varepsilon )\}^c$$ and define the kernel $\kappa_{D}$ as $$\kappa_{D}({\bf{x}},{\bf{y}})=\kappa({\bf{x}},{\bf{y}}) {\bf1}_{D}({\bf{x}}){\bf1}_{D}({\bf{y}}).$$ Using Lemmas \ref{lemma:kappa-trunc1} and \ref{lemma:kappa-trunc2}, $$ \rho-C_3 e^{-C_4 A}-C_5 A^2 \varepsilon \le \rho(\kappa_D) \le \rho$$ and $$ \rho_{\delta} -C_3 e^{-C_4 A}-C_5 A^2 \varepsilon \le \rho_{\delta}(\kappa_D) \le \rho_{\delta}, $$ where $\rho(\kappa_D)$ [resp. $\rho_{\delta}(\kappa_D)$] is the norm of the corresponding integral operator on $L^2(\mu)$ [resp. 
$L^2(\mu_\delta)$], where $\mu = \mu(a,b,c)$ and $\mu_{\delta} = \mu(a_{\delta},b,c_{\delta})$. Thus we have \begin{equation} \label{eqn:only-use-once} |\rho-\rho_\delta| < 2C_3 e^{-C_4 A} + 2C_5 A^2 \varepsilon + |\rho(\kappa_D)-\rho_\delta(\kappa_D)|. \end{equation} We now estimate $|\rho(\kappa_D)-\rho_\delta(\kappa_D)|$. By Lemma \ref{lemma:RN-derivative}, $\mu_\delta \ll \mu$ and \begin{equation*} g (s,w)=_{\scriptscriptstyle def}\frac{d\mu_\delta}{d\mu}(s,w)=\frac{a_\delta(s)}{a(s)} \times \exp \{ -\int_{s}^{T} w(u)[ c_\delta(u)-c(u)]du \} \times \Pi_{i=1}^{w(T)-2} \left(\frac{c_\delta(\tau_i)}{c(\tau_i)} \right), \end{equation*} $\mu$ a.s., where $\tau_i$ are as in the statement of Lemma \ref{lemma:RN-derivative}. For $\mu$ a.e. $(s,w) \in D$ we have $$w(t) \le w(T) \le A \mbox{ and } \tau_1(s,w) > \varepsilon, $$ and consequently $c_{\delta}(\tau_i) \ge (\theta \varepsilon - \delta)^+$ for all $i$, where $\theta$ is as in the statement of Lemma \ref{theo:error-cont-norm}. Also, since $a$ is bounded away from $0$, we can find $d_1 \in (0, \infty)$ such that $a_{\delta}(s) \ge (d_1- \delta)^+$. Thus for $\mu$ a.e. ${\bf{x}} \in D$ \begin{equation}\label{ins2144} (d_1- \delta)^+ \left((\theta \varepsilon - \delta)^+\right)^A< g({\bf{x}}) < \exp\{ T A \delta \}. \end{equation} Denote by $\mu^D$ [resp. $\mu_{\delta}^D$] the restrictions of $\mu$ [resp. $\mu_{\delta}$] to $D$. Then from Lemma \ref{elemz}, whenever $\delta < \delta_0 = \min \{\theta \varepsilon , d_1\}$, $\mu^D$ and $\mu_{\delta}^D$ are mutually absolutely continuous and $$\frac{d\mu_\delta^D}{d\mu^D}({\bf{x}}) = g({\bf{x}}) {\bf 1}_D({\bf{x}}), \; \mbox{ a.e. } d\mu^D . $$ For the rest of the proof we consider only $\delta < \delta_0$. Then, \begin{equation}\label{ins1119} (1- \frac{\delta}{d_1}) \left(1 - \frac{\delta}{\theta \varepsilon}\right)^A< g({\bf{x}}) < \exp\{ T A \delta \}. 
\end{equation} Note that $\rho(\kappa_D, \mu) = \rho(\kappa_D, \mu^D)$ and $\rho(\kappa_D, \mu_{\delta}) = \rho(\kappa_D, \mu^D_{\delta})$. Also by Lemma \ref{lemma: change-measure}, $\rho(\kappa_D, \mu_{\delta}^D) = \rho(\kappa'_D, \mu^D)$, where $$ \kappa'_D({\bf{x}}, {\bf{y}}) = \kappa_D({\bf{x}} , {\bf{y}}) \sqrt{\frac{g({\bf{x}})}{g({\bf{y}})}}{\bf 1}_D({\bf{x}}){\bf 1}_D({\bf{y}}), \; {\bf{x}}, {\bf{y}} \in \mathcal{X} . $$ Thus using Lemma \ref{lemma:prop-kernel} \begin{eqnarray*} |\rho(\kappa_D)-\rho_\delta(\kappa_D)| &=&|\rho(\kappa_D, \mu)-\rho(\kappa_D, \mu_\delta)|\\ &=& |\rho(\kappa_D, \mu^D)-\rho(\kappa_D, \mu_\delta^D)|\\ &=& |\rho(\kappa_D, \mu^D)-\rho(\kappa'_D, \mu^D)|\\ &\le& \rho(| \sqrt{\frac{g({\bf{x}})}{g({\bf{y}})}}-1|\kappa_D({\bf{x}},{\bf{y}}), \mu^D) \\ &\le& T\sup_{{\bf{x}},{\bf{y}} \in \mathcal{X}}\left(| \sqrt{\frac{g({\bf{x}})}{g({\bf{y}})}}-1|\kappa_D({\bf{x}},{\bf{y}})\right).\end{eqnarray*} From \eqref{ins1119} we see that whenever $\delta \le \delta_0$, \begin{equation*} \sup_{{\bf{x}},{\bf{y}} \in D}\left( \sqrt{\frac{g({\bf{x}})}{g({\bf{y}})} }\vee \sqrt{\frac{g({\bf{y}})}{g({\bf{x}})}} \right) < (1+\frac{2}{d_1} \delta) \times \exp\{ T A \delta \} \times (1+ \frac{2\delta}{\theta \epsilon})^A \equiv d(\delta, \varepsilon, A). \end{equation*} Noting that $\kappa_D \le TA^2$, we have \[ \sup_{{\bf{x}},{\bf{y}} \in \mathcal{X}} \left( | \sqrt{\frac{g({\bf{x}})}{g({\bf{y}})}}-1| \kappa_D({\bf{x}},{\bf{y}})\right) \le TA^2 |d(\delta, \varepsilon, A)-1|. \] Thus $$ |\rho(\kappa_D) - \rho_{\delta}(\kappa_D)| \le T^2 A^2 |d(\delta, \varepsilon, A)-1|. $$ Combining this with \eqref{eqn:only-use-once}, we have \begin{equation} \label{ins1144} |\rho-\rho_\delta| < 2C_3e^{-C_4 A} + 2C_5A^2 \epsilon + TA^2 |d(\delta, \varepsilon, A)-1|. \end{equation} Take $A= -\frac{1}{C_4} \log \delta$ and $\epsilon = \delta^{1/2}$. 
Note that when $\delta$ is sufficiently small, $\delta \le \frac{1}{2} \min\{\theta \varepsilon , d_1\}$, and so the above inequality holds for such $\delta$. Also, with this choice, we can find $\delta_1, d_2 \in (0, \infty)$ such that the sum of the first two terms on the right side in \eqref{ins1144} is bounded by $$ 2C_3\delta + \frac{2C_5}{C_4^2} (-\log \delta)^2 \delta^{1/2} \le d_2(-\log \delta)^2\delta^{1/2} \; \mbox{ for all } \delta \le \delta_1.$$ Also note that with the above choice of $\varepsilon, A$, $d(\delta, \varepsilon, A) \to 1$ as $\delta \to 0$. Furthermore $$ d(\delta, \varepsilon, A) = (1+ O(\delta))(1+O(\delta)) (1+ O(\delta^{1/2} (-\log \delta))).$$ Thus, we can find $d_3, \delta_2 \in(0, \infty)$ such that whenever $\delta \le \delta_2$ $$ TA^2 |d(\delta, \varepsilon, A)-1| \le d_3(-\log \delta)^3\delta^{1/2}.$$ The result follows on combining the above estimates. \ \ \rule{1ex}{1ex} \\ We now estimate the second term in \eqref{eqn:only-use-once1}. \begin{Lemma} \label{lemtrib} There exist $C_7 \in (0, \infty)$ and $\delta_3 \in (0,1)$ such that for all $t \in [0, T]$ and $\delta \in (0, \delta_3)$ $$ |\rho_t(a_{\delta},b,c_{\delta})-\rho_t(a_{\delta},b_{\delta},c_{\delta})| < C_7 (-\log \delta)^2 \delta.$$ \end{Lemma} \textbf{Proof:} Denote $\kappa_t(a_{\delta}, b_{\delta},c_{\delta}) = \kappa_{\delta,t}=\kappa_{\delta}$ and recall that $\kappa_t(a_{\delta},b,c_{\delta})=\kappa_t=\kappa$. We suppress $t$ in the rest of the proof. For $A > 0$, let $\kappa_{A}$ [resp. $\kappa_{\delta, A}$] be the truncated kernels as in Lemma \ref{lemma:kappa-trunc1} associated with $\kappa$ [resp. $\kappa_{\delta}$]. 
Clearly $$|\kappa_{A}-\kappa_{\delta, A}| \le TA^2\delta.$$ Using this bound and Lemma \ref{lemma:kappa-trunc1}, for all $A \ge A_0$ we have \begin{eqnarray*} |\rho(a_{\delta},b,c_{\delta})-\rho(a_{\delta},b_{\delta},c_{\delta})| &=& |\rho(\kappa, \mu_{\delta}) - \rho(\kappa_{\delta}, \mu_{\delta})|\\ &\le & TA^2\delta + |\rho(\kappa, \mu_{\delta}) - \rho(\kappa_{A}, \mu_{\delta})|+ |\rho(\kappa_{\delta}, \mu_{\delta}) - \rho(\kappa_{\delta, A}, \mu_{\delta})|\\ & \le & TA^2\delta + 2C_3 \exp\{-C_4A\}, \end{eqnarray*} where $\mu_{\delta} = \mu(a_{\delta}, b_{\delta}, c_{\delta}) = \mu(a_{\delta}, b, c_{\delta})$ is as in Lemma \ref{lemtria}. The result follows on taking $A = (-\log \delta)/C_4$ and $\delta$ sufficiently small. \ \ \rule{1ex}{1ex}\\ \ \\ {\bf Proof of Lemma \ref{theo:error-cont-norm}.} The proof is immediate on using Lemmas \ref{lemtria} and \ref{lemtrib} in \eqref{eqn:only-use-once1}. \ \ \rule{1ex}{1ex} \\ \subsection{Proof of Lemma \ref{theo:error-cont-int}} \label{sec:measure-theoretic} We begin with an elementary lemma which allows one to regard the operators $\mathcal{K}_t(a,b,c)$, $t \in [0, T]$, as being defined on a common Hilbert space. Recall that for a kernel $\kappa$ on $\mathcal{X} \times \mathcal{X}$ and a finite measure $\mu$ on $\mathcal{X}$, we denote by $\rho(\kappa, \mu)$ the norm of the integral operator associated with $\kappa$ on $L^2(\mathcal{X}, \mathcal{T}, \mu)= L^2(\mu)$. \begin{Lemma} \label{lemcom} Let $(a,b,c)$ be rate functions. Then for all $t \in [0, T]$ $$\rho(\kappa_t(a,b,c), \mu_t(a,b,c))=\rho(\kappa_t(a,b,c),\mu_T(a,b,c)).$$ \end{Lemma} \textbf{Proof:} Write $\kappa_t(a,b,c) = \kappa_t$, $\mu_t(a,b,c)= \mu_t$. Denote the integral operator corresponding to $\kappa_t$ on $L^2(\mu_t)$ [resp. $L^2(\mu_T)$] by $\mathcal{K}_t$ [resp. $\mathcal{K}_t^T$]. Let $\mathcal{X}_t=[0,t] \times \mathcal{W}$; then $\mu_t$ is supported on $\mathcal{X}_t$ and $\kappa_t$ is supported on $\mathcal{X}_t \times \mathcal{X}_t$.
Thus for any $\psi \in L^2(\mu_T)$, $\mathcal{K}_t^T \psi=\mathcal{K}_t \psi$ is also supported on $\mathcal{X}_t$, and this implies $\|\mathcal{K}_t^T\| \le \|\mathcal{K}_t\|$. On the other hand, for any $\psi \in L^2(\mu_t)$, $\mathcal{K}_t \psi = \mathcal{K}_t^T (\psi{\bf 1}_{[0, t]\times \mathcal{W}})$ and thus $$\|\mathcal{K}_t \psi\|_{L^2(\mu_t)} \le \|\mathcal{K}_t^T\| \|\psi{\bf 1}_{[0, t]\times \mathcal{W}}\|_{L^2(\mu_T)} = \|\mathcal{K}_t^T\| \|\psi\|_{L^2(\mu_t)}.$$ Thus $\|\mathcal{K}_t^T\| \ge \|\mathcal{K}_t\|$. \ \ \rule{1ex}{1ex} \\ The following theorem concerning IRG models on a general type space is a corollary of Theorem 3.1 and Theorem 3.12 in \cite{bollobas-riordan-janson}. \begin{Theorem}\cite{bollobas-riordan-janson} \label{theo:bollobas} Let $(\mathcal{X}, \mathcal{T}, \mu)$ be a type space. Consider the weight function $\phi \equiv 1$ on this space. Let $\kappa_n, \kappa$ be symmetric kernels on $\mathcal{X} \times \mathcal{X}$ such that $$ x_n \to x \mbox{ and } y_n \to y \mbox{ implies } \kappa_n(x_n,y_n) \to \kappa(x,y).$$ Let $\mathcal{C}^{(1)}_n(\kappa)$ denote the size of the largest component in ${\bf{RG}}_n(\kappa, \mu, \phi)$. Then\\ (i) If $\rho(\kappa,\mu) \le 1$, then $\mathcal{C}^{(1)}_n(\kappa_n)/n \convp 0$. Furthermore, if $\rho(\kappa,\mu) < 1$ and $\|\kappa\|_{\infty} < +\infty$, then $\mathcal{C}^{(1)}_n(\kappa_n) = O (\log n)$.\\ (ii) If $\rho(\kappa,\mu) > 1$, then $\mathcal{C}^{(1)}_n(\kappa_n) = \Theta (n)$. \end{Theorem} \textbf{Proof of Lemma \ref{theo:error-cont-int}:} Let $\kappa_t(a_0, b_0, c_0) = \kappa_t$ and $\mu_t(a_0, b_0, c_0) = \mu_t$. Note that for $0 \le t_1 \le t_2 \le T$, since $\kappa_{t_1} \le \kappa_{t_2}$, we have from Lemma \ref{lemcom} that $$\rho(t_1) = \rho(\kappa_{t_1}, \mu_{t_1}) = \rho(\kappa_{t_1}, \mu_{T}) \le \rho(\kappa_{t_2}, \mu_{T}) = \rho(\kappa_{t_2}, \mu_{t_2})=\rho(t_2).$$ Thus $\rho(t)$ is nondecreasing in $t$. Next note that, since $w(u)\tilde w(u) b_0(u)$ is nondecreasing in $u$ for $\mu_T \times \mu_T$ a.e.
$((s,w), (r, \tilde w))$, $\kappa_t({\bf{x}}, {\bf{y}})$ is convex in $t$ for a.e. ${\bf{x}}, {\bf{y}}$, i.e. for $\mu_T \times \mu_T$ a.e. $({\bf{x}} , {\bf{y}}) \in \mathcal{X} \times \mathcal{X}$, and all $t_1, t_2 \in [0, T]$, and $\alpha, \beta \in [0,1]$, $\alpha+\beta = 1$, $$\kappa_{\alpha t_1+ \beta t_2}({\bf{x}}, {\bf{y}}) \le \alpha \kappa_{t_1}({\bf{x}}, {\bf{y}}) +\beta \kappa_{t_2}({\bf{x}}, {\bf{y}}).$$ Thus \begin{align*} \rho(\alpha t_1 +\beta t_2) &=\rho( \kappa_{\alpha t_1 +\beta t_2}, \mu_T) \\ &\le \rho(\alpha \kappa_{t_1}+\beta \kappa_{t_2}, \mu_T) \\ &\le \rho(\alpha \kappa_{t_1}, \mu_T)+\rho(\beta \kappa_{t_2}, \mu_T) \\ &= \alpha \rho(\kappa_{t_1}, \mu_T)+\beta \rho(\kappa_{t_2}, \mu_T) \\ &=\alpha\rho( t_1) +\beta \rho(t_2), \end{align*} where lines 3 and 4 above use parts (ii) and (iii) of Lemma \ref{lemma:prop-kernel} and line 2 uses the convexity of $\kappa_{\cdot}$. Thus $\rho$ is convex on $[0, T]$. Also since $\rho(0) = 0$ and $\rho(t) > 0$ for $t > 0$, we have that $\rho$ is strictly increasing on $[0, T]$ and has a strictly positive left derivative on $(0, T]$. This proves parts (i) and (iii) of Lemma \ref{theo:error-cont-int}. We now consider part (ii). For $\delta > 0$, let $$\rho^{\delta,+}(t)=\rho_t((a_0+\delta)\wedge 1,(b_0+\delta)\wedge 1,(c_0+\delta)\wedge 1), \; \rho^{\delta,-}(t)=\rho_t((a_0-\delta)^+,(b_0-\delta)^+,(c_0-\delta)^+).$$ Similarly define $\mu_t^{\delta, -}$, $\kappa_t^{\delta, -}$, $\mu_t^{\delta, +}$ and $\kappa_t^{\delta, +}$. We will argue by contradiction.\\ Suppose first that $\rho(t_c) > 1$. Then, by Lemma \ref{theo:error-cont-norm} and the continuity of $\rho(t)$, there exist $\epsilon, \delta >0$ such that $\rho^{\delta,-}(t_c-\epsilon)>1$. Denote ${\bf{RG}}^{\delta, -}_t(\kappa_n) = {\bf{RG}}_n(\kappa^{\delta, -}_{n,t}, \mu_t^{\delta,-}, \phi_t)$, where $\kappa^{\delta, -}_{n,t}$ is defined as in \eqref{eqn:kappan-def}, replacing $b$ there with $(b_0-\delta)^+$.
Since $\kappa^{\delta, -}_{n,t}$ converges to $\kappa^{\delta, -}_{t}$ uniformly on compact subsets of $\mathcal{X} \times \mathcal{X}$, by Theorem \ref{theo:bollobas}, the size of the largest component of ${\bf{RG}}^{\delta, -}_t(\kappa_n)$, whp, is $\Theta(n)$ and consequently the volume of the largest component of ${\bf{RG}}^{\delta, -}_t(\kappa_n)$ is, whp, at least $\Theta(n)$. By Lemma \ref{lemma:couple-bfia}, and as $\bar{x}_n(t) \to x(t)$, we have whp $$\mathcal{COM}_n(t_c-\epsilon) \ge_d {\bf{IA}}_n((a_0-\delta)^+,(b_0-\delta)^+,(c_0-\delta)^+)_{t_c-\epsilon}$$ and from Lemma \ref{lemma:rgiva-irg-def} the largest component in ${\bf{IA}}_n((a_0-\delta)^+,(b_0-\delta)^+,(c_0-\delta)^+)_{t_c-\epsilon}$ has the same distribution as that in ${\bf{RG}}^{\delta, -}_{t_c-\epsilon}(\kappa_n)$. However, by Theorem 1.1 of \cite{spencer2007birth} the largest component size of the Bohman-Frieze model for $t < t_c$ is, whp, $\Theta( \log n)$, which contradicts the fact that the volume of the largest component of ${\bf{RG}}^{\delta, -}_{t_c-\epsilon}(\kappa_n)$ is, whp, at least $\Theta(n)$. Thus we have shown that $\rho(t_c) \le 1$. Suppose now that $\rho(t_c) < 1$. Then there exist $\epsilon, \delta >0$ such that $\rho^{\delta,+}(t_c+\epsilon) <1$. Then a similar argument as above shows that, whp, $$\mathcal{COM}_n(t_c+\epsilon) \le_d {\bf{IA}}_n((a_0+\delta)\wedge 1,(b_0+\delta)\wedge 1,(c_0+\delta)\wedge 1)_{t_c+\epsilon}=_{def} {\bf{IA}}_{n,t_c+\epsilon} ^{\delta}.$$ Lemma \ref{lemma:logn-bound} implies that whp the largest component in ${\bf{IA}}_{n,t_c+\epsilon}^\delta$ is $O(\log^4 n)$. However from Theorem 1.1 of \cite{spencer2007birth}, for $t > t_c$, the largest component size of the Bohman-Frieze model is whp $\Theta(n)$. This contradiction shows that $\rho(t_c) \ge 1$. Combining the above arguments we have $\rho(t_c)=1$.
\ \ \rule{1ex}{1ex}\\ \section{Proof of Proposition \ref{prop:main}} \label{sec:analysis-s2s3} We shall now study the asymptotics of $\mathcal{S}_2$ and $\mathcal{S}_3$, namely the sums of squares and cubes of the component sizes. We first analyze $\mathcal{S}_2$ since the asymptotics for this will be required in the analysis of $\mathcal{S}_3$. \subsection{Analysis of $\bar{s}_2(\cdot)$ near criticality} \label{s2sec} Let us start with the sum of squares. Recall from \eqref{ins2141} that $\mathcal{S}_2(t)$ denotes the sum of squares of the component sizes in ${\bf {BF}}(t)$ and $\bar{s}_2(t) = \mathcal{S}_2(t)/n$. Also recall the limiting functions $s_k(t)$, $k=2,3$, introduced in \eqref{ins-s2} and \eqref{ins-s3}. Note that these functions are non-decreasing and they blow up at the critical point $t_c$ (see \eqref{eqn:s2-scaling-crit} and \eqref{eqn:s3-scaling-crit}). Let $y(t) = 1/s_2(t)$. Then this function satisfies the differential equation \begin{equation} \label{eqn:yt-diff} y^\prime(t) = -x^2(t) y^2(t) - (1-x^2(t)), \; y(0)= 1, \; t \in [0, t_c]. \end{equation} Note that $y$ is a monotonically decreasing function with $y(t)\to 0$ as $t\to t_c$ and as shown in Theorem 3.2 of \cite{janson2010phase}, the scaling behavior near $t_c$ is \begin{equation} y(t) = \frac{1}{\alpha}(t_c - t)+ O((t_c-t)^2) \label{eqn:yt-scaling} \end{equation} as $t \uparrow t_c$, where $\alpha$ is as in \eqref{eqn:alpha-def}. Let $Y_n(t) = 1/\bar{s}_2(t)$. To simplify notation, we suppress the dependence of the process $Y$ on $n$ when convenient. Note that \eqref{eqn:s2-tn-n-alpha} is equivalent to showing \begin{equation} n^{1/3}\left|Y(t_n)-\frac{1}{\alpha n^\gamma}\right|\convp 0. \label{eqn:ynt-to-show} \end{equation} Here, and throughout Section \ref{sec:analysis-s2s3}, $t_n=t_c - 1/n^\gamma$ with $\gamma\in (1/6,1/5)$.
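To build intuition for \eqref{eqn:yt-diff} and the blow-up of $s_2 = 1/y$ at $t_c$, the following sketch integrates the ODE with a forward Euler scheme. The curve $x(t)$ used here is a toy placeholder (the true singleton density for the BF process solves its own differential equation, which is not reproduced in this section), so the computed vanishing time of $y$ is purely illustrative.

```python
import numpy as np

def integrate_y(x_of_t, T, dt=1e-4):
    """Forward Euler for y'(t) = -x(t)^2 y(t)^2 - (1 - x(t)^2), y(0) = 1.

    Stops at the first time y reaches 0, a numerical stand-in for the
    blow-up time t_c of s_2 = 1/y."""
    t, y = 0.0, 1.0
    ts, ys = [t], [y]
    while y > 0.0 and t < T:
        x2 = x_of_t(t) ** 2
        y += dt * (-x2 * y * y - (1.0 - x2))
        t += dt
        ts.append(t)
        ys.append(max(y, 0.0))  # clamp the final overshoot at 0
    return np.array(ts), np.array(ys)

# Toy placeholder for the singleton density x(t); illustration only.
ts, ys = integrate_y(lambda t: np.exp(-t), T=5.0)
```

Since the drift is strictly negative while $y > 0$, the trajectory decreases monotonically and hits $0$ in finite time, mirroring the behavior $y(t) \to 0$ as $t \to t_c$ described above.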
From \eqref{eqn:yt-scaling} \begin{equation} \label{ins1607}\left|y(t_n) - \frac{1}{\alpha n^\gamma}\right| = O\left(\frac{1}{n^{2\gamma}}\right) =o\left(\frac{1}{n^{1/3}}\right).\end{equation} Thus to show \eqref{eqn:ynt-to-show} it is enough to prove the following \begin{Proposition} \label{prop:s2-crit} As $n\to\infty$ \[ n^{1/3}\sup_{s\leq t_n}\left|Y(s) - y(s)\right| \convp 0. \] \end{Proposition} We shall prove this via a sequence of lemmas. We begin with some notation. Recall that $\boldsymbol{C}_n^{\scriptscriptstyle BF}(t) \equiv (\mathcal{C}_n^{\scriptscriptstyle (i)}(t) : i \ge 1)\equiv (\mathcal{C}_i(t) : i \ge 1)$ is the component size vector, $I_n(t)$ the size of the largest component, and $X_n(t)$ the number of singletons, in ${\bf {BF}}_n(t)$. Let $\sum_i$ denote the summation over all components and $\sum_{i<j}$ denote the summation over all pairs of components $(i,j)$ with $i<j$. The first Lemma identifies the semimartingale decomposition of the process $Y(\cdot)$ as well as the predictable quadratic variation $\langle M\rangle$ of the martingale $M$ in the decomposition. Recall the natural filtration associated with the BF process introduced in Section \ref{sec:model-equiv}. \begin{Lemma} \label{lemma:mart-s2} The process $Y_n(\cdot)$ can be decomposed as \begin{equation} Y_n(t) = 1+ \int_0^t A_n(s) ds + M_n(t), \; t \in [0, t_c] \label{ins1006}\end{equation} where \\(a) $M_n$ is a RCLL martingale with respect to the natural filtration $\{\mathcal{F}_t\}_{t\geq 0}$ of the BF process. \\(b) The process $A_n = A_1^n+R_1^n$ where (suppressing $n$) \begin{equation} \label{eqn:gnu} A_1(u) = -Y^2(u)\bar{x}^2(u) -(1-\bar{x}^2(u)) + (1-\bar{x}^2(u))\frac{Y^2(u)}{n^2}\sum_i \mathcal{C}_i^4(u), \;\;u \le t_c \end{equation} and for some $C_7 \in (0, \infty)$, \begin{equation} |R_1^n(u)| \le C_7 \left( \frac{1}{n} + \frac{I_n^2(u)}{n} \right), \mbox{ for all } n \in \mathbb{N} \mbox{ and } u \le t_c. 
\label{eqn:r1s} \end{equation} \\(c) Predictable quadratic variation of $M_n$ is given as \[\langle M_n\rangle(t) = \int_0^t B_n(u) du \] where $B_n$ is such that \begin{equation} B_n(u)\le \frac{4}{n} + \frac{4 Y_n^2(u) I_n^2(u)}{n}, \; u \le t_c. \label{eqn:yi-bound} \end{equation} \end{Lemma} {\bf Proof:} We will suppress $n$ from the notation when convenient. Note that $$Y(t) = 1 + \sum_{s \le t} \Delta Y(s), \mbox{ where } \Delta Y(s) = Y(s)- Y(s-).$$ We now analyze the possible jumps of $Y$. Note that any jump in $Y$ corresponds to a jump of one of the Poisson processes $\mathcal{P}_{\bf e}$, ${\bf e} = (e_1,e_2) \in \mathcal{E}^2$ (recall the notation from Section \ref{sec:cont-time-bf}). A jump of $\mathcal{P}_{\bf e}$ at a time instant $u$ could result in two different kinds of jumps in $Y$.\\ (i) {\bf Merger caused by the first edge $e_1$:} In this case $\Delta \mathcal{S}_2(u)= \mathcal{S}_2(u) - \mathcal{S}_2(u-) =2$ which implies $$\Delta Y(u) \equiv \alpha_1(u-) = -\frac{2Y^2(u-)}{n}\left[ 1-O\left( \frac{2Y(u-)}{n}\right)\right] .$$ (ii) {\bf Merger caused by the second edge $e_2$:} In this case, suppose components $i$ and $j$ merge, then $\Delta \mathcal{S}_2(u)=2\mathcal{C}_i(u-) \mathcal{C}_j(u-)$ and thus \[\Delta Y_n(u) \equiv \alpha_2^{i,j}(u-)= -2\frac{\mathcal{C}_i(u-)\mathcal{C}_j(u-)}{n}Y_n^2(u-)\left[1- O\left(2\frac{\mathcal{C}_i(u-)\mathcal{C}_j(u-) Y_n(u-)}{n}\right)\right].\] With these observations we can represent $Y$ in terms of stochastic integrals with respect to $\mathcal{P}_{\bf e}$ as follows.
Define $$ \mathcal{H}_1(u) = \{{\bf e} = (e_1, e_2) \in \mathcal{E}^2: e_1 = (v_1, v_2) \mbox{ where both } v_1, v_2 \mbox{ are singletons at time } u\},$$ $$ \mathcal{H}_2^{(i,j)}(u) = \{{\bf e} = (e_1, e_2) \in \mathcal{E}^2\setminus \mathcal{H}_1(u): e_2 = (v_1, v_2) \mbox{ where one vertex is in } \mathcal{C}^i(u) \mbox{ while the other is in } \mathcal{C}^j(u) \}.$$ Also let $$\mathcal{U}_{{\bf e}}(u) = \alpha_1(u) {\bf 1}_{\mathcal{H}_1(u)}({\bf e}), \; \mathcal{U}_{{\bf e}}^{i,j}(u) = \alpha_2^{i,j}(u){\bf 1}_{\mathcal{H}_2^{(i,j)}(u)}({\bf e}).$$ Then \begin{equation} \label{ins2023} Y(t) = 1 + \sum_{{\bf e} \in \mathcal{E}^2} \int_{(0, t]} \left (\mathcal{U}_{{\bf e}}(s-) + \sum_{i< j} \mathcal{U}_{{\bf e}}^{i,j}(s-)\right) \mathcal{P}_{{\bf e}}(ds).\end{equation} Recalling that $\mathcal{P}_{{\bf e}}$ is a rate $2/n^3$ Poisson process, one can write $Y$ as $$Y(t) = 1 + \int_{[0,t]} A(s) ds + M(t),$$ where $$ A(s) = \frac{2}{n^3} \sum_{{\bf e} \in \mathcal{E}^2}\left (\mathcal{U}_{{\bf e}}(s) + \sum_{i< j} \mathcal{U}_{{\bf e}}^{i,j}(s)\right).$$ Note that \begin{eqnarray} \sum_{{\bf e} \in \mathcal{E}^2}{\bf 1}_{\mathcal{H}_1(s)}({\bf e}) &=& {X_n(s) \choose 2}{n \choose 2}=\frac{n^4}{4}\bar{x}_n^2(s) \cdot (1+O(1/n)),\label{ins2029a}\\ \sum_{{\bf e} \in \mathcal{E}^2}{\bf 1}_{\mathcal{H}_2^{(i,j)}(s)}({\bf e}) & = & \left[{n\choose 2}-{X_n(s)\choose 2}\right] \mathcal{C}_i(s)\mathcal{C}_j(s) = \frac{n^2}{2}(1-\bar{x}_n^2(s)) \mathcal{C}_i(s)\mathcal{C}_j(s) \cdot (1+O(1/n)).\label{ins2029b} \end{eqnarray} Thus we get $A(s) = A_1(s) + R_1(s)$, where $A_1$ represents the leading order terms: \begin{align*} A_1(s) &= - \frac{n}{2}\bar{x}_n^2(s) \cdot \frac{2Y^2(s)}{n} -\sum_{i<j} \frac{1}{n}(1-\bar{x}_n^2(s))\mathcal{C}_i(s) \mathcal{C}_j (s)\cdot \frac{2Y^2(s) \mathcal{C}_i(s) \mathcal{C}_j(s)}{n}\\ &= -\bar{x}_n^2(s) Y^2(s)-(1-\bar{x}_n^2(s)) Y^2(s)\cdot \frac{1}{n^2}\sum_{i<j} 2\mathcal{C}_i^2(s)\mathcal{C}_j^2(s)\\ &= -\bar{x}_n^2(s) 
Y^2(s)-(1-\bar{x}_n^2(s)) Y^2(s)\cdot \frac{1}{n^2}[(\sum_{i} \mathcal{C}_i^2(s))^2-\sum_i \mathcal{C}_i^4(s)]\\ &= -\bar{x}_n^2(s) Y^2(s)-(1-\bar{x}_n^2(s)) +(1-\bar{x}_n^2(s)) Y^2(s)\cdot \frac{1}{n^2}\sum_i \mathcal{C}_i^4(s) \end{align*} and the last line follows from the fact that $Y=\frac{n}{\sum_i \mathcal{C}_i^2}$. The term $R_1$ consists of the lower order terms; using the observations $Y \le 1$, $\bar{x}_n \le 1$ and $|A_1| \le \bar{x}_n^2 Y^2+(1-\bar{x}_n^2) \le 2$, it can be estimated as follows. \begin{align*} |R_1(u)| &\le |A_1(u)| \cdot \frac{d_1}{n} + \frac{n}{2}\bar{x}_n^2(u) \cdot \frac{2Y^2(u)}{n} \cdot \frac{d_2 Y(u)}{n}\\ &+ (1-\bar{x}_n^2(u)) Y^2(u)\cdot \frac{1}{n^2}\sum_{i<j} \left[2\mathcal{C}_i^2(u)\mathcal{C}_j^2(u) \cdot \frac{d_3 \mathcal{C}_i(u)\mathcal{C}_j(u) Y(u)}{n}\right]\\ &\le \frac{2d_1}{n} + \frac{d_2}{n} + \frac{d_3 I^2(u)}{n} \cdot \frac{Y^2(u)}{n^2} \sum_{i<j} 2\mathcal{C}_i^2(u)\mathcal{C}_j^2(u)\\ &\le \frac{2d_1+d_2}{n} + \frac{d_3 I^2(u)}{n}. \end{align*} Next, from \eqref{ins2023} and using independence of the Poisson processes $\mathcal{P}_{{\bf e}}$, $$ \langle M\rangle(t) = \frac{2}{n^3} \sum_{{\bf e} \in \mathcal{E}^2} \int_{(0, t]} \left ((\mathcal{U}_{{\bf e}}(s))^2 + \sum_{i< j} (\mathcal{U}_{{\bf e}}^{i,j}(s))^2 \right) ds \equiv \int_{(0,t]} B(s) ds,$$ where, using \eqref{ins2029a} and \eqref{ins2029b} once more, we can estimate $B$ as follows. \begin{align*} B(u) &\le \frac{n}{2}\bar{x}_n^2(u) \cdot 2 \cdot \left(\frac{2 Y^2(u)}{n}\right)^2 + (1-\bar{x}_n^2(u)) \cdot 2 \cdot \sum_{i<j} \frac{\mathcal{C}_i(u)\mathcal{C}_j(u)}{n} \left(\frac{2Y^2(u)\mathcal{C}_i(u)\mathcal{C}_j(u)}{n}\right)^2\\ &\le \frac{4}{n} + \frac{4 Y^2(u) I^2(u)}{n} \cdot \frac{Y^2(u)}{n^2}\sum_{i<j} 2\mathcal{C}_i^2(u)\mathcal{C}_j^2(u)\\ &\le \frac{4}{n} + \frac{4 Y^2(u) I^2(u)}{n}. \end{align*} This completes the proof of the lemma. \ \ \rule{1ex}{1ex} The following result bounds the difference $Y-y$ through an application of Gronwall's lemma.
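The expansion used above in passing from the pairwise sum to $A_1$, namely $\sum_{i<j} 2\mathcal{C}_i^2\mathcal{C}_j^2 = (\sum_i \mathcal{C}_i^2)^2 - \sum_i \mathcal{C}_i^4$, can be sanity-checked on an arbitrary component-size vector; the snippet below does so on a randomly generated toy configuration (an illustration, not part of the proof).

```python
import random

random.seed(0)
C = [random.randint(1, 50) for _ in range(200)]  # toy component sizes

# Pairwise sum over i < j, as it appears in the drift computation.
lhs = sum(2 * C[i] ** 2 * C[j] ** 2
          for i in range(len(C)) for j in range(i + 1, len(C)))

# Closed form: expand the square and subtract the diagonal terms.
rhs = sum(c ** 2 for c in C) ** 2 - sum(c ** 4 for c in C)
assert lhs == rhs
```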
\begin{Lemma} \label{lemma:gronwall} For every $n \in \mathbb{N}$, \[\sup_{s\leq t_n}|Y_n(s)-y(s)|\leq \varepsilon_n e^{2t_c}\] where \[\varepsilon_n= 4t_c\sup_{s\leq t_c}|\bar{x}(s)-x(s)|+\sup_{s\leq t_n}|M_n(s)|+C_8\int_0^{t_n} \frac{I^2_n(s)}{n}ds,\] with $C_8 = 2C_7+1$ and $C_7$, $M_n$ as in Lemma \ref{lemma:mart-s2}. \end{Lemma} {\bf Proof:} Since $y$ solves \eqref{eqn:yt-diff} and $Y$ satisfies \eqref{ins1006}, using the fact that $\bar{x}, x$ take values in $[0,1]$, we have for any fixed $t\leq t_n$ \begin{align} |Y(t)-y(t)| \leq\left|\int_0^t \left (Y^2(s)\bar{x}^2(s)-y^2(s)x^2(s)\right) ds \right|&+ \int_0^t|\bar{x}^2(s)-x^2(s)|ds +\int_0^t |R_1(s)|ds \nonumber \\ &+\int_0^t \left(\frac{Y^2(s)}{n^2} \sum_{i}\mathcal{C}_i^4(s) \right)ds +\sup_{s\leq t_n} |M(s)|. \label{ins1012} \end{align} Let us now analyze each term individually. Writing \[Y^2(s)\bar{x}^2(s)-y^2(s)x^2(s) =\bar{x}^2(s)(Y^2(s)-y^2(s)) + y^2(s)(\bar{x}^2(s)-x^2(s))\] and using the fact that $Y,y,x, \bar x$ take values in the interval $[0,1]$ we get \[\left|\int_0^t \left(Y^2(s)\bar{x}^2(s)-y^2(s)x^2(s)\right) ds \right|\leq 2\int_0^t|Y(s)-y(s)|ds+ 2t_c\sup_{s\leq t_c}|\bar{x}(s)-x(s)|.\] Similarly the second term in \eqref{ins1012} can be estimated as \[\int_0^t|\bar{x}^2(s)-x^2(s)|ds \leq 2t_c\sup_{s\leq t_c}|\bar{x}(s)-x(s)|. \] From \eqref{eqn:r1s} the integrand in the third term in \eqref{ins1012} can be bounded as $$ |R_1(s)| \le C_7 \left( \frac{1}{n} + \frac{I_n^2(s)}{n} \right) \le 2C_7\frac{I_n^2(s)}{n}. $$ Similarly, the integrand in the fourth term in \eqref{ins1012} can be bounded as \[\frac{Y^2(s)}{n^2} \sum_{i} \mathcal{C}_i^4(s) \leq \frac{I^2_n(s)}{n} Y^2(s)\frac{1}{n} \sum_{i} \mathcal{C}_i^2(s).\] Noting that by definition $Y(s) \frac{1}{n} \sum_{i} \mathcal{C}_i^2(s)=1$ and $Y(s)\leq 1$ we get \[\int_0^t \left(\frac{Y^2(s)}{n^2} \sum_{i} \mathcal{C}_i^4(s) \right )ds \leq \int_0^{t_n} \frac{I^2_n(s)}{n}ds.
\] Combining, we get, for all $t \le t_n$, \[|Y(t)-y(t)|\leq \varepsilon_n+\int_0^t 2|Y(s)-y(s)|ds.\] Thus Gronwall's lemma (see Theorem 5.1, Appendix in \cite{ethier-kurtz}) proves the result. \ \ \rule{1ex}{1ex} One last ingredient in the proof of Proposition \ref{prop:s2-crit} is the following estimate on the martingale in Lemma \ref{lemma:mart-s2}. \begin{Lemma} \label{prop:n1-sup-M} As $n \to \infty$, \begin{equation} n^{1/3}\sup_{s\leq t_n} |M(s)| \convp 0. \notag \end{equation} \end{Lemma} The proof of Lemma \ref{prop:n1-sup-M} is given in Section \ref{secins1049}.\\ \ \\ {\bf Proof of Proposition \ref{prop:s2-crit}: } In view of Lemma \ref{lemma:gronwall} it is enough to show $n^{1/3}\varepsilon_n\convp 0$ as $n\to\infty$, where $\varepsilon_n$ is as defined in Lemma \ref{lemma:gronwall}. Let us analyze each of the terms in $\varepsilon_n$. First note that by Lemma \ref{lemma:error-diff-eqn}, for any $\vartheta<1/2$, and in particular for $\vartheta=1/3$, \[\sup_{s\leq t_c} n^{\vartheta}|\bar{x}(s)-x(s)|\convp 0, \; \mbox{ as } n\to\infty.\] Next, from Proposition \ref{prop:com-size}, $$ \{ I_n(t) \le m(n,t), \; \forall t < t_c-n^{-\gamma}\} \mbox{ holds whp as } n \to \infty,$$ where $ m(n,t)= B \frac{(\log n)^4}{(t_c-t)^2}$. Thus, recalling that $\gamma \in (1/6, 1/5)$, we have, whp, \begin{align} \int_0^{t_n} \frac{I^2_n(s)}{n}ds &\leq \frac{B^2(\log {n})^8}{n}\int_0^{t_n} \frac{1}{(t_c-s)^4} ds \nonumber \\ &\le \frac{B^2n^{3\gamma}(\log {n})^8}{n}=o\left(\frac{1}{n^{1/3}}\right). \label{ins1602} \end{align} The result now follows on combining these estimates with Lemma \ref{prop:n1-sup-M}. \ \ \rule{1ex}{1ex} \subsection{Proof of Lemma \ref{prop:n1-sup-M}} \label{secins1049} We shall prove the lemma by first showing that $n^{\vartheta}\sup_{s\leq t_n}|M(s)| \convp 0$ for any $\vartheta < 1/5$ and then sharpening the estimates to allow for any $\vartheta \le 1/3$.
\begin{Lemma} \label{lemma:mart-bound-1} As $n \to \infty$, \begin{equation} \mathbb{P}\left(\sup_{s\leq t_n} |M_n(s)|> \frac{n^{3\gamma/2}(\log{n})^6}{\sqrt{n}} \right) \to 0 \label{eqn:mart-bound-1} \end{equation} and \[\mathbb{P}(Y_n(t)\leq 2 y(t) \; \forall t< t_n)\to 1.\] \end{Lemma} {\bf Proof:} Fix $\gamma_1 \in (1/3, 1/2)$ and define stopping times $\tau_i$, $i=1,2$ by \[\tau_1 = \inf\{t: I_n(t) > m(n,t) \}, \;\; \tau_2= \inf\{t: |\bar{x}(t) - x(t)|> n^{-\gamma_1} \}.\] From Proposition \ref{prop:com-size} and Lemma \ref{lemma:error-diff-eqn} \begin{equation} \mathbb{P}(\tau_1\wedge \tau_2 > t_n) \to 1 \label{eqn:tau-tn} \end{equation} as $n\to \infty$. Let $\tau^* = t_n\wedge \tau_1 \wedge \tau_2$. For \eqref{eqn:mart-bound-1}, in view of \eqref{eqn:tau-tn}, it suffices to prove the statement with $t_n$ replaced by $\tau^*$. By Doob's maximal inequality we have \[\mathbb{E}(\sup_{s\leq \tau^*}|M(s)|^2) \leq 4 \mathbb{E}(\langle M\rangle(\tau^*)) = 4 \mathbb{E} \int_0^{\tau^*} B(u) du.\] Furthermore, from \eqref{eqn:yi-bound}, \begin{align*} B(s) \leq \frac{4}{n}+ \frac{4 Y^2(s) I^2_n(s)}{n} \leq \frac{4}{n}+ \frac{4 I^2_n(s)}{n}. \notag \end{align*} Since $I_n(t) \leq B(\log{n})^4/(t_c-t)^2$ for $t< \tau^*$ (see \eqref{ins1711}), we have $$ \mathbb{E} [\langle M\rangle(\tau^*)] \leq \int_0^{t_n}\left ( \frac{4}{n}+ \frac{4B^2(\log{n})^8}{n(t_c-t)^4}\right) dt \leq d_1\frac{n^{3\gamma} (\log{n})^{10}}{n}. $$ Combining the estimates, we have \[\mathbb{E}(\sup_{s\leq \tau^*}|M(s)|) \leq d_1^{1/2}\frac{n^{3\gamma/2} (\log{n})^5}{\sqrt{n}}.\] A simple application of Markov's inequality and \eqref{eqn:tau-tn} gives \eqref{eqn:mart-bound-1}.
To get the final assertion in the lemma, note that on the set \[B_n = \set{\tau_1> t_n} \cap \set{\tau_2> t_n} \cap \set{\sup_{s\leq \tau^*} |M(s)| < \frac{n^{3\gamma/2}(\log{n})^6}{\sqrt{n}}}\] $|\bar{x}(t) - x(t)|< n^{-\gamma_1}$ and (see \eqref{ins1602}) $$ \int_0^{t_n} \frac{I^2_n(s)}{n}ds =o\left(\frac{1}{n^{1/3}}\right). $$ Therefore, since $\gamma <1/5$, the error $\varepsilon_n$ in Lemma \ref{lemma:gronwall} satisfies whp \[\varepsilon_n <\frac{n^{3\gamma/2}(\log{n})^7}{\sqrt{n}} = o\left(\frac{1}{n^\gamma}\right).\] Noting that $y(t)$ is a monotonically decreasing function, the above along with \eqref{ins1607} implies that $Y(t) = (1+o_p(1))y(t)$ for $t\leq t_n$ on the set $B_n$. Since $\mathbb{P}(B_n)\to 1$, the result follows. \ \ \rule{1ex}{1ex} \\ \ \\ {\bf Proof of Lemma \ref{prop:n1-sup-M}:} Along with the stopping times $\tau_1, \tau_2$ introduced in Lemma \ref{lemma:mart-bound-1}, consider the stopping time \[\tau_3 = \inf\set{t: Y(t)> 2y(t)}.\] Then Lemma \ref{lemma:mart-bound-1} and \eqref{eqn:tau-tn} imply that \[\mathbb{P}(\tau_1\wedge \tau_2\wedge \tau_3 > t_n) \to 1\] as $n\to\infty$. Thus to complete the proof, it is enough to show that for the stopping time $\tau = t_n\wedge \tau_1\wedge \tau_2\wedge \tau_3$, $n^{1/3}\mathbb{E}(\sup_{s\leq \tau} |M(s)|) \to 0$ as $n\to\infty$. Once again by Doob's maximal inequality it is enough to show that \begin{equation} \label{ins1632b} n^{2/3}\mathbb{E}(\langle M\rangle(\tau)) \to 0\qquad \mbox{as } n\to \infty. \end{equation} Now note that \eqref{eqn:yi-bound} implies that \begin{align*} \mathbb{E}[\langle M\rangle(\tau)] &\leq \mathbb{E} \int_0^{\tau} \left ( \frac{4}{n}+ \frac{4 Y^2(s) I^2_n(s) }{n} \right )ds \\ &\leq \frac{4t_n}{n} + \frac{4}{n}\int_0^{t_n} 4y^2(s) \frac{2B^2(\log{n})^8}{(t_c-s)^4} ds \\ &\le d_1 \left ( \frac{1}{n} + \frac{1}{n}\int_0^{t_n} \frac{B^2(\log{n})^8}{\alpha^2(t_c-s)^2} ds + \frac{(\log{n})^8}{n}\right).
\end{align*} In the second line of the above display we have used the fact that $I_n(t) \le m(n,t)$ for all $t \le \tau_1$ and in the last line we have used \eqref{eqn:yt-scaling}. Thus \[\mathbb{E}[n^{2/3}\langle M\rangle(\tau)] \leq d_2 \left (\frac{1}{n^{1/3}} + \frac{n^{2/3+\gamma}(\log{n})^8}{n} \right).\] Since $\gamma <1/5$ we have \eqref{ins1632b} and this completes the proof. \ \ \rule{1ex}{1ex} \subsection{Analysis of $\bar{s}_3$ near criticality} We will now analyze the sum of cubes of component sizes near criticality and consequently prove \eqref{eqn:s3-s2}. Define \[z(t) = \frac{s_3(t)}{s_2^3(t)}, \; t \in [0, t_c).\] Then the differential equations \eqref{ins-s2} and \eqref{ins-s3} imply (see \cite{janson2010phase}) that $z$ solves the differential equation \begin{equation} \label{eqn:diff-z} z^\prime(t) = 3 x^2(t) y^3(t) - 3 x^2(t) y(t) z(t), \; z(0)= 1, \; t \in [0, t_c) \end{equation} and furthermore $z(t)\to \beta$ as $t\to t_c$. Now consider the process \[Z_n(t)= \frac{\mathcal{S}_3(t)/n}{(\mathcal{S}_2(t)/n)^3}= Y^3_n(t) \frac{\mathcal{S}_3(t)}{n}.\] Then to show \eqref{eqn:s3-s2}, it is enough to show the following proposition: \begin{Proposition} \label{prop:zt-convg} Fix any $\gamma\in (1/6,1/5)$ and let, as before, $t_n = t_c-n^{-\gamma}$. Then \[|Z(t_n)-z(t_n)|\convp 0\] as $n\to \infty$. \end{Proposition} The analysis is similar to that for $\mathcal{S}_2(\cdot)$ as carried out in Section \ref{s2sec}. We begin by writing the semimartingale decomposition for $Z_n$ and identifying the predictable quadratic variation $\langle \tilde M\rangle$ of the martingale $\tilde M$ in the decomposition. \begin{Lemma} \label{lemma:mart-decomp-z} The process $Z_n$ can be decomposed as \begin{equation} Z_n(t) = 1+ \int_0^t \tilde A_n(s) ds + \tilde M_n(t), \; t \in [0, t_c] \label{ins1006s3}\end{equation} where \\(a) $\tilde M_n$ is a RCLL martingale with respect to the natural filtration $\{\mathcal{F}_t\}_{t\geq 0}$ of the BF process.
\\(b) The process $\tilde A_n = \tilde A_1^n+\tilde R_1^n + \tilde R_2^n$ where (suppressing $n$), for $u \in [0, t_c]$, \[\tilde A_1(u)= 3 \bar{x}^2(u)Y^3(u) - 3Z(u)Y(u)\bar{x}^2(u),\] \begin{equation} |\tilde R_1(u)|=\left| 3 (1-\bar{x}^2(u))\left[ Z(u)Y(u)\sum_{i}\frac{\mathcal{C}_i^4(u)}{n^2}-Y^3(u)\sum_{i}\frac{\mathcal{C}_i^5(u)}{n^2}\right]\right| \le \frac{6I^3(u)Y^2(u)}{n} \label{ins1110} \end{equation} and \[ |\tilde R_2(u)|\leq \frac{30Y^2(u)I^3_n(u)}{n}+ \frac{12 Y^3(u)I^5_n(u)}{n^2}. \] \\(c) Predictable quadratic variation of $\tilde M_n$ is given as \[\langle \tilde M_n\rangle(t) = \int_0^t \tilde B_n(u) du, \] and the process $\tilde B_n$ can be bounded as \begin{equation} \tilde B(u)\leq C_9 \left(\frac{Y^4(u) I^4(u)}{n} + \frac{ Y^5(u)I^6(u)}{n^2}+ \frac{Y^6(u) I^8(u)}{n^3}+ \frac{Y^7(u) I^{10}(u)}{n^4} + \frac{Y^8(u) I^{12}(u)}{n^5}\right) \label{eqn:zr3-bound}\end{equation} for some $C_9 \in (0, \infty)$. \end{Lemma} {\bf Proof:} We will proceed as in the proof of Lemma \ref{lemma:mart-s2}. Note that $$Z(t) = 1 + \sum_{s \le t} \Delta Z(s), \mbox{ where } \Delta Z(s) = Z(s)- Z(s-).$$ Next note that if $\Delta \mathcal{S}_2(u) = a$ and $\Delta \mathcal{S}_3(u) = b$, then $$ Z(u) = \frac{\bar{s}_3(u-)+b/n}{(\bar{s}_2(u-)+ a/n)^3} = \frac{Y^3(u-)(\bar{s}_3(u-)+b/n)}{(1+aY(u-)/n)^3}.$$ Using the estimate $|(1+x)^{-3}- (1-3x)|\leq 6 x^2$, for $0<x<1$, we get $$Z(u) = Y^3(u-)\left(\bar{s}_3(u-)+\frac{b}{n}\right)\left(1-\frac{3aY(u-)}{n}+\tilde{R}(a,u-)\right), $$ where \[|\tilde{R}(a,u-)|\leq \frac{6 a^2 Y^2(u-)}{n^2}.\] Thus \[\Delta Z(u)\equiv \tilde \zeta (a,b,u-)= -\frac{3aZ(u-)Y(u-) }{n}+ \frac{Y^3(u-)b}{n} + \tilde R_{a,b}(u-),\] where the remainder term \begin{align*} |\tilde R_{a, b}(u-)|&\leq Z(u-)|\tilde{R}(a,u-)| + \frac{3abY^4(u-)}{n^2}+\frac{bY^3(u-)|\tilde{R}(a,u-)|}{n}\\ &\leq \frac{6 a^2 Z(u-) Y^2(u-)}{n^2} + \frac{3abY^4(u-)}{n^2}+\frac{6a^2bY^5(u-)}{n^3}. 
\end{align*} Any jump in $\mathcal{S}_2, \mathcal{S}_3$ or $Z$ corresponds to a jump of one of the Poisson processes $\mathcal{P}_{\bf e}$, ${\bf e} = (e_1,e_2) \in \mathcal{E}^2$. A jump of $\mathcal{P}_{\bf e}$ at a time instant $u$ could result in the following different values for $a,b$.\\ (i) {\bf Merger caused by the first edge $e_1$:} In this case $a=2$ and $b=6$ and $\tilde R_{a,b}$ can be estimated as $$ |\tilde R_{2,6}(u-)| \le \frac{24 Y^4(u-) I(u-)}{n^2} + \frac{36 Y^4(u-)}{n^2}+\frac{144 Y^5(u-)}{n^3} \le \frac{204 Y^4(u-) I(u-)}{n^2}.$$ (ii) {\bf Merger caused by the second edge $e_2$:} In this case, suppose components $i$ and $j$ merge, then $$a\equiv \theta_{i,j}(u-) = 2\mathcal{C}_i(u-)\mathcal{C}_j(u-) , \; b \equiv \eta_{i,j}(u-)=3\mathcal{C}_i^2(u-)\mathcal{C}_j(u-)+ 3\mathcal{C}_i(u-)\mathcal{C}_j^2(u-)$$ and noting that $a \le 2 I^2$, $b \le 6 I^3$ and \begin{equation} Z(u)=Y^3(u) \bar{s}_3(u)\leq Y^2(u) I_n(u), \label{eqn:zu-bound} \end{equation} we have $$ |\tilde R_{a, b}(u-)| \leq \frac{6 a^2 Z(u-) Y^2(u-)}{n^2} + \frac{3abY^4(u-)}{n^2}+\frac{6a^2bY^5(u-)}{n^3} \leq \frac{30 a Y^4(u-) I^3(u-)}{n^2}+\frac{72 a Y^5(u-) I^5(u-)}{n^3}. $$ With these observations we can represent $Z$ in terms of stochastic integrals with respect to $\mathcal{P}_{\bf e}$ as follows. Recall $ \mathcal{H}_1,\mathcal{H}_2^{(i,j)}$ introduced in the proof of Lemma \ref{lemma:mart-s2}. Define $\tilde \alpha_1(u) = \tilde \zeta(2,6,u)$ and $\tilde \alpha_2^{i,j}(u) = \tilde \zeta (\theta_{i,j}(u),\eta_{i,j}(u),u)$. 
Also, let $$\tilde \mathcal{U}_{{\bf e}}(u) = \tilde \alpha_{1}(u) {\bf 1}_{\mathcal{H}_1(u)}({\bf e}), \; \tilde \mathcal{U}_{{\bf e}}^{i,j}(u) = \tilde \alpha_2^{i,j}(u){\bf 1}_{\mathcal{H}_2^{(i,j)}(u)}({\bf e}).$$ Then \begin{equation} \label{ins2023s3} Z(t) = 1 + \sum_{{\bf e} \in \mathcal{E}^2} \int_{(0, t]} \left (\tilde \mathcal{U}_{{\bf e}}(s-) + \sum_{i< j} \tilde \mathcal{U}_{{\bf e}}^{i,j}(s-)\right) \mathcal{P}_{{\bf e}}(ds).\end{equation} Recalling that $\mathcal{P}_{{\bf e}}$ is a rate $2/n^3$ Poisson process, one can write $Z$ as $$Z(t) = 1 + \int_{[0,t]} \tilde A(s) ds + \tilde M(t),$$ where $$ \tilde A(s) = \frac{2}{n^3} \sum_{{\bf e} \in \mathcal{E}^2}\left (\tilde \mathcal{U}_{{\bf e}}(s) + \sum_{i< j} \tilde \mathcal{U}_{{\bf e}}^{i,j}(s)\right).$$ Also, once again using independence of Poisson processes $\mathcal{P}_{{\bf e}}$, $$ \langle \tilde M\rangle(t) = \frac{2}{n^3}\sum_{{\bf e} \in \mathcal{E}^2} \int_{(0, t]} \left ((\tilde \mathcal{U}_{{\bf e}}(s))^2 + \sum_{i< j} (\tilde \mathcal{U}_{{\bf e}}^{i,j}(s))^2 \right) ds \equiv \int_{(0,t]} \tilde B(s) ds.$$ The proof is now completed upon using \eqref{ins2029a} and \eqref{ins2029b} as for Lemma \ref{lemma:mart-s2}. \ \ \rule{1ex}{1ex}\\ As is clear from the above lemma, a precise analysis of $Z$ will involve considering several terms of the form $I^\theta Y^\vartheta$. The following lemma shows that such terms are asymptotically negligible for suitable $\theta, \vartheta$. \begin{Lemma} \label{lemma:powers-bd} For any $\theta, \vartheta\geq 0$ and $p > 0$ satisfying $\gamma(2\theta-\vartheta -1)< p$ \[\int_0^{t_n}\frac{I^\theta_n(u) Y^\vartheta_n(u)}{n^p} du \convp 0,\] as $n\to\infty$. \end{Lemma} {\bf Proof:} From Proposition \ref{prop:com-size}, $$ \{I_n(t) \leq B(\log n)^4/(t_c-t)^{2} \mbox{ for all } t\leq t_n \} \mbox{ occurs whp}. 
$$ Also, by Proposition \ref{prop:s2-crit} and \eqref{ins1607} we know that \[\sup_{t\leq t_n}|Y(t)- y(t)|=o_p(y(t_n))\] and from \eqref{eqn:yt-scaling}, for $t$ near $t_c$, $y(t)\sim (t_c-t)/\alpha$. Using these bounds in the above integral proves the result. \ \ \rule{1ex}{1ex} \\ \ \\ {\bf Proof of Proposition \ref{prop:zt-convg}:} By the integral representation of the process $Z$ given in Lemma \ref{lemma:mart-decomp-z} and the fact that $z$ solves the differential equation \eqref{eqn:diff-z}, we have that \begin{align} |Z(t)-z(t)|\leq 3 \int_0^{t} |\bar{x}^2(u)Y^3(u)&-x^2(u)y^3(u)|du+ 3\int_0^{t} \left|Z(u)Y(u)\bar{x}^2(u)-z(u)y(u)x^2(u)\right|du \nonumber \\ &+ \int_0^{t_n} |\tilde R_1(u)|du + \int_0^{t_n} |\tilde R_2(u)|du +\sup_{t\leq t_n} |\tilde M(t)|. \label{ins1057} \end{align} Now the integrand in the first term can be bounded as \begin{align} 3|\bar{x}^2(u)Y^3(u)-x^2(u)y^3(u)|&\leq 3Y^3(u)|\bar{x}^2(u)-x^2(u)|+ 3\bar{x}^2(u)|Y^3(u)-y^3(u)| \notag\\ &\leq 6|\bar{x}(u)-x(u)|+ 9|Y(u)-y(u)|.\label{eqn:xy-bound} \end{align} The integrand in the second term in \eqref{ins1057} can be decomposed as \begin{align*} z(u)y(u)x^2(u)- Z(u)Y(u)\bar{x}^2(u) = x^2(u)&y(u)(z(u)-Z(u))\\ & + y(u)Z(u)(x^2(u)-\bar{x}^2(u))+ Z(u)\bar{x}^2(u)(y(u)-Y(u)). \end{align*} Thus the second integral in \eqref{ins1057} can be bounded by \begin{equation} 3 \int_0^t |z(u)-Z(u)|du+ 6 \int_0^{t_n} Z(u)|x(u)-\bar{x}(u)|du + 3 \int_0^{t_n}Z(u)|Y(u)-y(u)|du. \label{eqn:integ-z-bd} \end{equation} Combining \eqref{eqn:integ-z-bd} and \eqref{eqn:xy-bound} we get that \[|Z(t)-z(t)| \leq \varepsilon_n+ 3 \int_0^t |Z(u)-z(u)|du\] where \begin{align*} \varepsilon_n = 9t_c\sup_{s\leq t_n}\left(|\bar{x}(s)-x(s)|+|Y(s)-y(s)|\right) &+ 6 \int_0^{t_n} Z(u)|x(u)-\bar{x}(u)|du + 3 \int_0^{t_n}Z(u)|Y(u)-y(u)|du \\ &+ \int_0^{t_n} |\tilde R_1(u)|du + \int_0^{t_n} |\tilde R_2(u)|du +\sup_{t\leq t_n} |\tilde M(t)|\\ &\hspace{-2in} = \eta_1+\eta_2+\eta_3+\eta_4+ \eta_5 + \eta_6.
\end{align*} By Gronwall's lemma, it is enough to show that $\varepsilon_n\to 0$ in probability as $n\to\infty$. Let us show that each of the six constituents of $\varepsilon_n$ converges to $0$ in probability. By Lemma \ref{lemma:error-diff-eqn} and Proposition \ref{prop:s2-crit}, $\eta_1\to 0$ in probability. Again by Lemma \ref{lemma:error-diff-eqn}, for any $\vartheta< 1/2$, whp, $$ \eta_2 \leq \frac{6}{n^\vartheta}\int_0^{t_n} Z(u) du \leq \frac{6}{n^\vartheta} \int_0^{t_n} Y^2(u) I_n(u) du.$$ Using Lemma \ref{lemma:powers-bd}, the last term converges to $0$ in probability as $n \to \infty$. Thus $\eta_2 \to 0$ in probability. An identical argument, using Proposition \ref{prop:s2-crit} instead of Lemma \ref{lemma:error-diff-eqn}, shows that $\eta_3\to 0$ in probability. For $\eta_4$, note that from \eqref{ins1110}, \[|\tilde R_1(u)|\leq \frac{6 I_n^3(u)Y^2(u)}{n}.\] Lemma \ref{lemma:powers-bd} now shows that $\eta_4\to 0$ in probability. A similar argument, using the bounds in Lemma \ref{lemma:mart-decomp-z} on $\tilde R_2(u)$, establishes that $\eta_5\to 0$ in probability. For $\eta_6$, note that for an arbitrary stopping time $\tau$, \[ \mathbb{E} \sup_{t \le t_n\wedge \tau} |\tilde M(t)|^2 \le 4 \mathbb{E}[\langle \tilde M\rangle(t_n\wedge \tau)] = 4\mathbb{E} \int_0^{t_n\wedge \tau} \tilde B(u) du.\] The bound on $\tilde B(u)$ in \eqref{eqn:zr3-bound} along with Lemma \ref{lemma:powers-bd} and a localization argument similar to the one used in the proof of Lemma \ref{prop:n1-sup-M} now shows that $\eta_6$ converges to $0$ in probability. \ \ \rule{1ex}{1ex} \subsection{Proof of Proposition \ref{prop:main}} We now complete the proof of Proposition \ref{prop:main}. Proof of \eqref{eqn:s2-tn-n-alpha} follows from Proposition \ref{prop:s2-crit} and the discussion immediately above the proposition. Proof of \eqref{eqn:s3-s2} is immediate from Proposition \ref{prop:zt-convg}. Finally we consider \eqref{eqn:max-s2}.
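The verification of \eqref{eqn:max-s2} that follows is pure exponent bookkeeping: $\mathcal{S}_2(t_n)$ is of order $n^{1+\gamma}$ while $n^{2/3}\mathcal{C}_n^{\scriptscriptstyle (1)}(t_n)$ is of order $n^{2/3+2\gamma}$ up to polylogarithmic factors, so the ratio decays like $n^{\gamma-1/3}$. A mechanical check of this count (illustrative only; the sample value $2/11$ is an arbitrary interior point of the window $(1/6,1/5)$):

```python
from fractions import Fraction

def ratio_exponent(gamma):
    # exponent of n in n^(2/3) * n^(2*gamma) / n^(1 + gamma),
    # ignoring polylogarithmic corrections
    return Fraction(2, 3) + 2 * gamma - (1 + gamma)

for gamma in (Fraction(1, 6), Fraction(2, 11), Fraction(1, 5)):
    assert ratio_exponent(gamma) == gamma - Fraction(1, 3)
    assert ratio_exponent(gamma) < 0  # the ratio vanishes throughout (1/6, 1/5)
```

Exact rational arithmetic avoids any floating-point ambiguity in the comparison.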
From Proposition \ref{prop:s2-crit} and \eqref{ins1607}, \begin{equation} \frac{\mathcal{S}_2(t_n)}{\alpha n^{1+\gamma}} \convp 1. \label{ins2038} \end{equation} Also, from Proposition \ref{prop:com-size}, \[\mathbb{P}\left(\frac{\mathcal{C}_n^{\scriptscriptstyle(1)}(t_n)}{ n^{2\gamma}\log^4{n}}\leq B\right)\to 1\] as $n\to\infty$. Combining, and recalling that $\gamma\in (1/6, 1/5)$, we have \[\frac{n^{2/3}\mathcal{C}_n^{\scriptscriptstyle (1)}(t_n)}{\mathcal{S}_2(t_n)} \convp 0.\] This completes the proof of Proposition \ref{prop:main}. \ \ \rule{1ex}{1ex} \section{Proof of Theorem \ref{theo:main}} \label{sec:proof-main} We will now complete the proof of Theorem \ref{theo:main}. As always, we write the component sizes as $$\boldsymbol{C}_n^{\scriptscriptstyle BF}(t)\equiv (\mathcal{C}_n^{\scriptscriptstyle (i)}(t) : i \ge 1)\equiv (\mathcal{C}_i(t): i\ge1 );$$ and write the scaled component sizes as \begin{equation} \label{ins1407} \bar \boldsymbol{C}_n^{\scriptscriptstyle BF}(\lambda)\equiv \left(\frac{\beta^{1/3}}{n^{2/3}}\mathcal{C}^{\scriptscriptstyle (i)}_n \left(t_c+ \beta^{2/3} \alpha\frac{\lambda}{n^{1/3}} \right) : i \ge 1 \right) \equiv \left(\bar{\mathcal{C}}_i(\lambda) : i\ge 1 \right). \end{equation} Then Proposition \ref{prop:main} shows that with \[\lambda_n = -\frac{n^{-\gamma+1/3}}{\alpha \beta^{2/3}}\] and $\gamma \in (1/6,1/5)$ we have, as $n \to \infty$, \begin{equation} \label{ins1409} \frac{\sum_i \left( \bar \mathcal{C}_i(\lambda_n)\right)^3}{\left[\sum_i \left( \bar \mathcal{C}_i (\lambda_n)\right)^2\right]^3} \convp 1, \; \frac{1}{\sum_i \left( \bar \mathcal{C}_i (\lambda_n)\right)^2} +\lambda_n \convp 0, \; \frac{ \bar \mathcal{C}_1(\lambda_n)}{\sum_i \left( \bar \mathcal{C}_i(\lambda_n)\right)^2} \convp 0. \end{equation} We shall now give an idea of the proof of the main result, and postpone precise arguments to the next two sections.
The first step is to observe that the asymptotics in \eqref{ins1409} imply that the $\bar {\boldsymbol{C}}_n^{\scriptscriptstyle BF}$ process at time $\lambda_n$ satisfies the regularity conditions of Proposition 4 of \cite{aldous1997brownian}. The second key observation is that the scaled components merge in the critical window at a rate close to that for the multiplicative coalescent. Indeed, note that for any given time $t$, components $i<j$ of ${\bf {BF}}_n(t)$ merge in a small time interval $[t, t+dt)$ at rate \[\frac{1}{n}(1-\bar{x}^2(t)) \mathcal{C}_i(t)\mathcal{C}_j(t).\] Thus letting $\lambda = (t-t_c)n^{1/3} / (\alpha \beta^{2/3})$ be the scaled time parameter, in the time interval $[\lambda, \lambda+d\lambda)$, these two components merge at rate \begin{align*} \gamma_{ij}(\lambda)&=\frac{(1-\bar{x}^2(t_c+ \beta^{2/3} \alpha\frac{\lambda}{n^{1/3}}))}{n} \frac{\beta^{2/3} \alpha}{n^{1/3}} \mathcal{C}_i\left(t_c+ \frac{ \beta^{2/3} \alpha\lambda}{n^{1/3}}\right)\mathcal{C}_j \left(t_c+ \frac{ \beta^{2/3} \alpha\lambda}{n^{1/3}}\right) \\ &= \alpha \left(1-\bar{x}^2\left(t_c+ \beta^{2/3} \alpha\frac{\lambda}{n^{1/3}}\right)\right) \bar \mathcal{C}_i(\lambda) \bar \mathcal{C}_j(\lambda). \end{align*} Now since, for large $n$, \[\bar{x}^2\left(t_c+ \beta^{2/3} \alpha\frac{\lambda}{n^{1/3}}\right)\approx x^2(t_c)\] and from \cite{janson2010phase}, $\alpha(1-x^2(t_c)) =1$ (see \eqref{eqn:alpha-def}), we get \[\gamma_{ij}(\lambda) \approx \bar \mathcal{C}_i(\lambda) \bar \mathcal{C}_j(\lambda),\] which is exactly the rate of merger for the multiplicative coalescent. The above two facts allow us to complete the proof using ideas similar to those in \cite{aldous2000random}. Let us now make these statements precise. As before, throughout this section $t_n=t_c-n^{-\gamma}=t_c+\beta^{2/3} \alpha\frac{\lambda_n}{n^{1/3}}$, where $\gamma$ is fixed in $(1/6, 1/5)$.
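As an illustrative aside (not part of the argument), the multiplicative-coalescent dynamics just described, in which clusters of masses $x_i, x_j$ merge at rate $x_ix_j$, can be sketched as a crude discrete-time simulation; the step size and initial masses below are arbitrary choices, and the Euler discretization only approximates the continuous-time construction.

```python
import random

def mult_coalescent_step(x, dt, rng):
    # One Euler step of the multiplicative coalescent: each surviving pair
    # (i, j) merges with probability approximately x_i * x_j * dt.
    x = list(x)
    merged = [False] * len(x)
    out = []
    for i in range(len(x)):
        if merged[i]:
            continue
        size = x[i]
        for j in range(i + 1, len(x)):
            if not merged[j] and rng.random() < x[i] * x[j] * dt:
                size += x[j]
                merged[j] = True
        out.append(size)
    return sorted(out, reverse=True)

rng = random.Random(0)
x = [1.0, 0.8, 0.5, 0.5, 0.2]
total = sum(x)
for _ in range(50):
    x = mult_coalescent_step(x, 0.05, rng)

# Mass is conserved and clusters only coalesce, so their number never grows.
assert abs(sum(x) - total) < 1e-9
assert 1 <= len(x) <= 5
```

Keeping $dt$ small makes the probability of two merges involving the same cluster within one step negligible, which is what the single-pair merge rate describes.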
We will first show that $\bar \boldsymbol{C}_n^{\scriptscriptstyle BF}(\lambda) \convd \boldsymbol{X}(\lambda)$ in $l^2_{\downarrow}$ for each $\lambda \in \mathbb{R}$ and at the end of the section show that, in fact, $\bar \boldsymbol{C}_n^{\scriptscriptstyle BF} \convd \boldsymbol{X}$ in $\mathcal{D}((-\infty, \infty): l^2_{\downarrow})$. Now fix $\lambda \in \mathbb{R}$. By choosing $n$ large enough we can ensure that $\lambda \ge \lambda_n$. Henceforth consider only such $n$. Recall that $\mathcal{COM}_n(t)$ denotes the subgraph of ${\bf {BF}}_n(t)$ obtained by deleting all the singletons. Let $ \sum_{i \in \mathcal{COM}}$ denote the summation over all components in $\mathcal{COM}_n$, and $\sum_i$ denote the summation over all components in ${\bf {BF}}_n$. Since \begin{equation} \sum_i \left ( \bar \mathcal{C}_i(\lambda) \right)^2 - \sum_{i \in \mathcal{COM}} \left ( \bar \mathcal{C}_i(\lambda) \right)^2 \le \frac{d_1}{n^{4/3}} \sum_{i=1}^{X_n(t)} 1 = O(1/n^{1/3}), \label{ins1503}\end{equation} it suffices to prove Theorem \ref{theo:main} and verify Proposition \ref{prop:main} with ${\bf {BF}}_n(t)$ replaced by $\mathcal{COM}_n(t).$ We write $\sum_i$ instead of $\sum_{i \in \mathcal{COM}}$ for simplicity of notation from now on. We begin in Section \ref{ins2122} by constructing a coupling of $\{\mathcal{COM}_n(t)\}_{t \ge t_n}$ with two other random graph processes, sandwiching our process between these two processes, and proving statements analogous to those in Theorem \ref{theo:main} for the scaled component vectors associated with these processes.
Proof of Theorem \ref{theo:main} will then be completed in Section \ref{ins2121}.\\ \ \\ \subsection{Coupling with the multiplicative coalescent} \label{ins2122} \textbf{Lower bound coupling:} Let, for $t\ge t_n$, $\mathcal{COM}_n^-(t)$ be a modification of $\mathcal{COM}_n(t)$ such that $\mathcal{COM}_n^-(t_n)=\mathcal{COM}_n(t_n)$, and when $t > t_n$, we change the dynamics of the random graph to the Erd\H{o}s-R\'{e}nyi type. More precisely, recall from Section \ref{sec:model-equiv} that a jump in ${\bf {BF}}_n(t)$ can be produced by three different kinds of events. These are described in items (i), (ii) and (iii) in Section \ref{sec:model-equiv}. $\mathcal{COM}^-_n(t)$, $t \ge t_n$, is constructed from $\mathcal{COM}^-_n(t_n)$ by erasing events of type (i) and (ii) (i.e. immigrating doubletons and attaching singletons) and changing the probability of edge formation between two non-singletons (from that given in \eqref{ins1516}) to the fixed value $b_n^*(t_n)/n$. Since $b_n^*(t)$ is nondecreasing in $t$, we have that $\mathcal{COM}_n(t_n + \cdot) \ge_d \mathcal{COM}_n^-(t_n + \cdot)$. Denote by $\bar{\boldsymbol{C}}_n^-(\lambda) = \left( \bar \mathcal{C}_i^-(\lambda): i\geq 1\right)$ the scaled (as in \eqref{ins1407}) component size vector for $\mathcal{COM}_n^-(t)$. From Proposition 4 of \cite{aldous1997brownian}, it follows that for any $\lambda \in {\mathbb{R}}$, \begin{equation} \label{ins1320} \bar {\boldsymbol{C}}_n^-(\lambda) \convd \boldsymbol{X}(\lambda) \end{equation} in $l^2_{\downarrow}$. Indeed, note that the first and third convergence statements in \eqref{ins1409} hold with $\bar \mathcal{C}_i$ replaced with $\bar \mathcal{C}_i^-$ since the contribution made by singletons to the scaled sum of squares is $O(n^{-1/3})$ (see \eqref{ins1503}) and that to the sum of cubes is even smaller. This shows that the first and third requirements in Proposition 4 of \cite{aldous1997brownian} (see equations (8), (10) therein) are met.
To show the second requirement in Proposition 4 of \cite{aldous1997brownian}, using the second convergence in \eqref{ins1409}, \begin{eqnarray} &&\lim_{n\to \infty} \left ( \left ( n^{2/3}\beta^{-1/3}\right)^2 \frac{b_n^*(t_n)}{n} \frac{\beta^{2/3}\alpha (\lambda - \lambda_n)}{n^{1/3}} - \frac{1}{\sum_i \left( \bar \mathcal{C}_i^-(\lambda_n)\right)^2}\right)\label{ins1608}\\ &=& \lim_{n\to \infty} \left(\alpha b_n^*(t_n)\lambda - \lambda_n (\alpha b_n^*(t_n) - 1)\right)\nonumber\\ &=& \lambda - \lim_{n\to \infty} \lambda_n (\alpha b_n^*(t_n) - 1),\nonumber \end{eqnarray} where the last equality follows on observing that, as $n \to \infty$, $b_n^*(t_n) \convp 1-x^2(t_c)$ and $\alpha(1-x^2(t_c))=1$. Also, \begin{eqnarray*} \lim_{n\to \infty} |\lambda_n|\, |\alpha b_n^*(t_n) - 1| &=& \lim_{n\to \infty}\frac{n^{-\gamma+1/3}}{\beta^{2/3}} |b_n^*(t_n) - \alpha^{-1}|\\ &=& \lim_{n\to \infty}\frac{n^{-\gamma+1/3}}{\beta^{2/3}} |b_0(\bar x(t_n)) - b_0(x(t_c))| \\ &\le & d_1 \lim_{n\to \infty}n^{-\gamma+1/3} |\bar x(t_n)- x(t_c)|\\ &\le & \lim_{n\to \infty} d_2 \left (n^{-\gamma+1/3} |\bar x(t_n) - x(t_n)| + n^{-\gamma+1/3}|t_n - t_c| \right), \end{eqnarray*} where the second equality follows from \eqref{eqn:b-def1}. The first term on the last line converges to $0$ using Lemma \ref{lemma:error-diff-eqn}. For the second term note that $n^{-\gamma+1/3}|t_n-t_c| = n^{-\gamma+1/3}n^{-\gamma}$, which converges to $0$ since $\gamma > 1/6$. Thus we have shown that the expression in \eqref{ins1608} converges to $\lambda$ as $n \to \infty$, and therefore the second requirement in Proposition 4 of \cite{aldous1997brownian} (see equation (9) therein) is met as well. This proves that $\bar{\boldsymbol{C}}_n^-(\lambda) \convd \boldsymbol{X}(\lambda)$ in $l^2_{\downarrow}$, for every $\lambda \in \mathbb{R}$.
Although Proposition 4 of \cite{aldous1997brownian} only proves convergence at any fixed point $\lambda$, from the Feller property of the multiplicative coalescent process proved in Proposition 6 of the same paper it now follows that, in fact, $\bar{\boldsymbol{C}}_n^- \convd \boldsymbol{X}$ in $\mathcal{D}((-\infty, \infty): l^2_{\downarrow})$.\\ \ \\ \textbf{Upper bound coupling:} Let us construct $\{\mathcal{COM}_n^+(t): t \ge t_n \}$ in the following way. Let $t_n^+ = t_c + n^{-\gamma}$ and let $$\lambda_n^+ = (t_n^+ - t_c) n^{1/3}/(\alpha \beta^{2/3}) = n^{1/3 - \gamma}/(\alpha \beta^{2/3}).$$ Let $\mathcal{COM}_n^+(t_n)$ be the graph obtained by adding to $\mathcal{COM}_n(t_n)$ all immigrating doubletons and attachments during time $t \in [t_n, t_n^+]$, along with all the attachment edges. Namely, we construct $\mathcal{COM}_n^+(t_n)$ by including in $\mathcal{COM}_n(t_n)$ all events of type (i) and (ii) of Section \ref{sec:model-equiv} that occur over $[t_n, t_n^+]$. For $t > t_n$ the graph evolves in the Erd\H{o}s-R\'{e}nyi way such that edges are added between each pair of vertices at the fixed rate $b_n^*(t_n^+)/n$. The coupling between $\mathcal{COM}_n^+(\cdot + t_n)$ and $\mathcal{COM}_n(\cdot + t_n)$ can be achieved as follows: Construct a realization of $\{\mathcal{COM}_n(t): t_n \le t \le t_n^+ \}$ first, then use $b_n^*(t_n^+)-b_n^*(t)$ to make up for all the additional edges in $\mathcal{COM}_n^+(t)$ for $t_n \le t \le t_n^+$. Note that $\mathcal{COM}_n(t_n+ \cdot) \le_d \mathcal{COM}_n^+(t_n+\cdot)$ over $[0, t_n^+-t_n]$. \\ Let $\bar \boldsymbol{C}_n^+(\lambda) = \left(\bar \mathcal{C}_i^+(\lambda): i\geq 1\right)$ be the scaled (as in \eqref{ins1407}) component size vector for $\mathcal{COM}_n^+$. We will once more apply Proposition 4 of \cite{aldous1997brownian}. We first show that the three convergence statements in \eqref{ins1409} hold with $\bar \mathcal{C}_i$ replaced with $\bar \mathcal{C}_i^+$.
For this it will be convenient to consider processes under the original time scale. Write $\mathcal{C}_{n}^{\scriptscriptstyle(i)}(t_n) \equiv \mathcal{C}_i$. Also denote by $\{\mathcal{C}_i^+\}$ the component vector obtained by adding all events of type (ii) only, to $\mathcal{COM}_n(t_n)$ (i.e. attachment of singletons to components in $\mathcal{COM}_n(t_n)$), over $[t_n, t_n^+]$. Since $c^*$ is bounded by $1$, $\mathcal{C}_i^+$ is stochastically dominated by the sum of $\mathcal{C}_i$ independent copies of Geometric($p$), with $p=e^{t_n-t_n^+}=e^{-2n^{-\gamma}}$. Thus \[ u_i \stackrel{\scriptscriptstyle def}{=} \mathcal{C}_i^+-\mathcal{C}_i \le_d \text{Negative-binomial}(r,p)\text{ with } r=\mathcal{C}_i, p=e^{-2n^{-\gamma}}. \] The random graph $\mathcal{COM}_n^+(t_n)$ contains components other than $\{\mathcal{C}_i^+\}$. These additional components correspond to the ones obtained from doubletons immigrating over $[t_n, t_n^+]$. Since there are at most $n$ vertices, the number $N$ of such doubletons is bounded by $n/2$. Denote by $\{\tilde \mathcal{C}_i^+\}_{i=1}^N$ the components corresponding to such doubletons. Once again using the fact that $c^* \le 1$, we have that \[ \tilde \mathcal{C}_i^+ \le_d 2+ \text{Negative-binomial}(2,p)\text{ with } p=e^{-2n^{-\gamma}}.\] Write $$\mathcal{S}_k=\sum_i (\mathcal{C}_i)^k,\, \mathcal{S}_k^+=\sum_i (\mathcal{C}_i^+)^k+ \sum_{i=1}^N (\tilde \mathcal{C}_i^+)^k \mbox{ for } k=2, 3 \mbox{ and } I = \max_i \mathcal{C}_i, I^+=\max\{ \max_i \mathcal{C}_i^+, \max_i \tilde \mathcal{C}_i^+\}.$$ The following proposition shows that Proposition \ref{prop:main} holds with $(\mathcal{S}_2(t_n), \mathcal{S}_3(t_n), \mathcal{C}_n^{\scriptscriptstyle (1)}(t_n))$ replaced with $(\mathcal{S}_2^+, \mathcal{S}_3^+, I^+)$.
\begin{Proposition} As $n \to \infty$, \begin{align*} I^+ &= \Theta(I)\\ \frac{\mathcal{S}_2^+}{\mathcal{S}_2} &\convp 1\\ \frac{\mathcal{S}_3^+}{\mathcal{S}_3} &\convp 1\\ n^{4/3}\left( \frac{1}{\mathcal{S}_2}-\frac{1}{\mathcal{S}_2^+} \right) &\convp 0. \end{align*} \end{Proposition} \textbf{Proof:} An elementary calculation shows that if $U$ is $\text{Negative-binomial}(r,e^{-2n^{-\gamma}})$ then for some $d_1 \in (0, \infty)$, $$\mathbb{P} (U \ge 3\gamma^{-1} r) \le \frac{d_1}{n^3},$$ and thus, as $n \to \infty$, $$\mathbb{P}(\max_i \mathcal{C}_i^+ \ge (1+ 3\gamma^{-1})I) \le \mathbb{P}(u_i \ge 3\gamma^{-1}\mathcal{C}_i \mbox{ for some } i = 1, \ldots, n) \to 0.$$ A similar calculation shows that, for some $d_2 \in (0, \infty)$, as $n \to \infty$, $$\mathbb{P}(\max_{i= 1, \ldots, N} \tilde \mathcal{C}_i^+ \ge d_2) \to 0.$$ The first statement in the proposition now follows on combining the above two displays. Next, note that for Negative-binomial($r,p$), the first, second and third moments are \begin{align*} M_1 &=\frac{1}{p} r(1-p)\\ M_2 &=\frac{1}{p^2}[r^2(1-p)^2+r(1-p)]\\ M_3 &=\frac{1}{p^3}[r^3(1-p)^3+3r^2(1-p)^2+r(1-p)(2-p)]. \end{align*} From \eqref{ins2038} and \eqref{eqn:s3-s2} it follows that $\mathcal{S}_2 = \Theta(n^{1+\gamma})$ and $\mathcal{S}_3=\Theta(n^{1+3\gamma})$. Also, clearly, $\sum_i \mathcal{C}_i =O(n)$. Write $D_2 \stackrel{\scriptscriptstyle def}{=} \mathcal{S}_2^+-\mathcal{S}_2=\sum_{i=1}^N (\tilde \mathcal{C}_i^+)^2 + \sum_i(2\mathcal{C}_i u_i + u_i^2)$; then $$\mathbb{E}[D_2 |\{ \mathcal{C}_i\}_i] \le d_3\left( n\cdot n^{-\gamma}+\sum_i[ (\mathcal{C}_i)^2 n^{-\gamma} +(\mathcal{C}_i)^2 n^{-2\gamma}+\mathcal{C}_i n^{-\gamma} ] \right)=O(n),$$ and thus $D_2/\mathcal{S}_2 \convp 0$ and consequently $\mathcal{S}_2^+/\mathcal{S}_2 \convp 1$. Write $D_3 \stackrel{\scriptscriptstyle def}{=} \mathcal{S}_3^+ -\mathcal{S}_3=\sum_{i=1}^N (\tilde \mathcal{C}_i^+)^3 + \sum_i[3(\mathcal{C}_i)^2 u_i + 3 \mathcal{C}_i u_i^2 + u_i^3 ].
One can similarly show that $$ \mathbb{E}[D_3 | \{\mathcal{C}_i\}_i] =O(n^{1+2\gamma}),$$ and thus $D_3/\mathcal{S}_3 \convp 0$ and so $\mathcal{S}_3^+/\mathcal{S}_3 \convp 1$. To prove the third convergence, it suffices to prove \begin{equation} \label{ins2046} \frac{n^{4/3} D_2}{(\mathcal{S}_2)^2} \convp 0. \end{equation} By the asymptotics shown above, we have $$\frac{n^{4/3} D_2}{(\mathcal{S}_2)^2} = O( n^{4/3+1-2(1+\gamma) })=O(n^{1/3-2\gamma}).$$ As $\gamma > 1/6$, \eqref{ins2046} follows and thus the proof is completed. \ \ \rule{1ex}{1ex}\\ For the scaled component size vector of $\mathcal{COM}_n^+$, the above proposition shows that the statements in \eqref{ins1409} hold with $\bar \mathcal{C}_i$ replaced with $\bar \mathcal{C}_i^+$. In particular, the first and third requirements in Proposition 4 of \cite{aldous1997brownian} are met by $\{\bar \mathcal{C}_i^+\}$. Also, using the second convergence in \eqref{ins1409}, a calculation similar to that for \eqref{ins1608} shows that $$ \lim_{n\to \infty} \left ( \left ( n^{2/3}\beta^{-1/3}\right)^2 \frac{b_n^*(t_n^+)}{n} \frac{\beta^{2/3}\alpha (\lambda - \lambda_n)}{n^{1/3}} - \frac{1}{\sum_i \left( \bar \mathcal{C}_i^+(\lambda_n)\right)^2}\right) \to \lambda .$$ Therefore the second requirement in Proposition 4 of \cite{aldous1997brownian} is satisfied. This proves that \begin{equation} \label{ins1318} \bar \boldsymbol{C}_n^+(\lambda) \convd \boldsymbol{X}(\lambda) \end{equation} in $l^2_{\downarrow}$, for every $\lambda \in \mathbb{R}$. Using Proposition 6 of \cite{aldous1997brownian} once again, it now follows that $\bar \boldsymbol{C}_n^+ \convd \boldsymbol{X}$ in $\mathcal{D}((-\infty, \infty): l^2_{\downarrow})$. \subsection{Completing the proof of Theorem \ref{theo:main}}\label{ins2121} By \cite{aldous1998entrance, aldous2000random}, there is a natural partial order $\preceq$ on $l^2_{\downarrow}$.
Informally, interpreting an element of $l^2_{\downarrow}$ as a sequence of cluster sizes, for ${\bf{x}},{\bf{y}} \in l^2_{\downarrow}$ we have ${\bf{x}} \preceq {\bf{y}}$ if ${\bf{y}}$ can be obtained from ${\bf{x}}$ by adding new clusters and coalescing clusters together. The coupling constructed in Section \ref{ins2122} gives that, for every $\lambda \in (\lambda_n, \lambda_n^+)$, $$ \bar{\boldsymbol{C}}_n^-(\lambda) \preceq \bar\boldsymbol{C}_n^{\scriptscriptstyle BF}(\lambda) \preceq \bar \boldsymbol{C}_n^+(\lambda).$$ Since, as $n \to \infty$, $\lambda_n \to - \infty$ and $\lambda_n^+ \to +\infty$, \eqref{ins1320}, \eqref{ins1318} along with Lemma 15 of \cite{aldous2000random} yield that $$\bar\boldsymbol{C}_n^{\scriptscriptstyle BF}(\lambda) \convd \boldsymbol{X}(\lambda)$$ for all $\lambda\in \mathbb{R}$. Finally we argue convergence in $\mathcal{D}((-\infty, \infty):l^2_{\downarrow})$. For ${\bf{x}}, {\bf{y}} \in l^2_{\downarrow}$, let ${\bf d}^2({\bf{x}}, {\bf{y}}) = \sum_{i=1}^{\infty} (x_i-y_i)^2$, ${\bf{x}} = \{x_i\}$, ${\bf{y}} = \{y_i\}$. Then ${\bf d}^2({\bf{x}},{\bf{y}}) \le \sum_i y_i^2 -\sum_i x_i^2$ whenever ${\bf{x}} \preceq {\bf{y}}$. To prove that $\bar\boldsymbol{C}_n^{\scriptscriptstyle BF} \convd \boldsymbol{X}$ in $\mathcal{D}((-\infty, \infty):l^2_{\downarrow})$ it suffices to prove that \begin{equation} \label{ins1338} \sup_{\lambda \in [\lambda_1, \lambda_2]} {\bf d}(\bar \boldsymbol{C}_n^{\scriptscriptstyle BF}(\lambda), \bar{\boldsymbol{C}}_n^-(\lambda)) \convp 0, \mbox{ for all } -\infty < \lambda_1 < \lambda_2 < \infty .\end{equation} Fix $\lambda_1, \lambda_2$ as above. Then \begin{equation} \label{ab1339} \sup_{\lambda \in [\lambda_1, \lambda_2]} {\bf d}^2(\bar \boldsymbol{C}_n^{\scriptscriptstyle BF}(\lambda), \bar{\boldsymbol{C}}_n^-(\lambda)) \le \sup_{\lambda \in [\lambda_1, \lambda_2]} \left[\sum_i (\bar \mathcal{C}_i^+(\lambda))^2-\sum_i (\bar \mathcal{C}_i^-(\lambda))^2\right].
\end{equation} Let, for $\lambda \in \mathbb{R}$, $$\mathcal{U}_+(\lambda) = \sum_i (\bar \mathcal{C}_i^+(\lambda))^2, \; \mathcal{U}_-(\lambda) = \sum_i (\bar \mathcal{C}_i^-(\lambda))^2 \mbox{ and } \mathcal{V}(\lambda) = \mathcal{U}_+(\lambda)- \mathcal{U}_-(\lambda).$$ From Lemma 15 of \cite{aldous2000random}, $\mathcal{V}(\lambda) \convp 0$ for every $\lambda \in \mathbb{R}$. Thus it suffices to show that $\mathcal{V}$ is tight in $\mathcal{D}((-\infty, \infty): \mathbb{R}_+)$. Note that both $\mathcal{U}_+$ and $\mathcal{U}_-$ are tight in $\mathcal{D}((-\infty, \infty): \mathbb{R}_+)$. Although, in general, the difference of relatively compact sequences in the $\mathcal{D}$-space need not be relatively compact, in the current setting, due to properties of the multiplicative coalescent, this difficulty does not arise. Indeed, if $\{\boldsymbol{X}^{\bf{x}}(t), t \ge 0\}$ denotes the multiplicative coalescent on the positive real line with initial condition ${\bf{x}} \in l^2_{\downarrow}$, then, for $\delta$ sufficiently small, $$\sup_{\tau \in \mathcal{T}(\delta)}\mathbb{E} \left({\bf d}^2(\boldsymbol{X}^{\bf{x}}(\tau) , {\bf{x}})\wedge 1\right)\le \mathbb{E} \left[\sum_i(X^{\bf{x}}_i(\delta))^2 -\sum_i x_i^2\right] \le 2\sum_{i<j} \delta x_ix_j \cdot 2 x_i x_j \le 2\delta||{\bf{x}}||^4,$$ where $||{\bf{x}}||= (\sum_i x_i^2)^{1/2}$ and $\mathcal{T}(\delta)$ is the family of all stopping times (with respect to the natural filtration) bounded by $\delta$. Using the above property, the Markov property of the coalescent process and the tightness of $\sup_{\lambda \in [\lambda_1, \lambda_2]}\mathcal{U}_+(\lambda)$, $\sup_{\lambda \in [\lambda_1, \lambda_2]}\mathcal{U}_-(\lambda)$, one can verify Aldous' tightness criterion (see Theorem VI.4.5 in \cite{jacod-shiryaev}) for $\mathcal{V}$, thus proving the desired tightness.
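The final two inequalities in the display above are elementary and can be checked for any concrete mass vector: the bound $2\sum_{i<j}\delta x_ix_j\cdot 2x_ix_j = 4\delta\sum_{i<j}x_i^2x_j^2$ never exceeds $2\delta\|{\bf{x}}\|^4$, since $2\sum_{i<j}a_ia_j\le(\sum_i a_i)^2$ with $a_i=x_i^2$. A minimal sketch (the vector and $\delta$ below are arbitrary):

```python
def increase_bound(x, delta):
    # 2 * sum_{i<j} (delta x_i x_j) * (2 x_i x_j): probability of a pairwise
    # merge on [0, delta] times the resulting jump of the sum of squares
    return sum(2 * delta * x[i] * x[j] * 2 * x[i] * x[j]
               for i in range(len(x)) for j in range(i + 1, len(x)))

x = [0.9, 0.7, 0.4, 0.3, 0.1]
delta = 0.01
norm_sq = sum(v * v for v in x)
assert increase_bound(x, delta) <= 2 * delta * norm_sq ** 2  # 2*delta*||x||^4
```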
\ \ \rule{1ex}{1ex} \vspace{1in} {\bf Acknowledgements} AB and XW have been supported in part by the National Science Foundation (DMS-1004418), the Army Research Office (W911NF-0-1-0080, W911NF-10-1-0158) and the US-Israel Binational Science Foundation (2008466). SB's research has been supported in part by the UNC research council and a UNC Junior Faculty development award. SB would like to thank SAMSI for many interesting discussions with the participants of the complex networks program held at SAMSI in the year 2010-2011. \bibliographystyle{plain}
% Source metadata: arXiv:1106.1022 (June 2011), "Bohman-Frieze processes at criticality and emergence of the giant component", subjects: Probability (math.PR); Combinatorics (math.CO).
https://arxiv.org/abs/1311.0892
Equidistribution of polynomial sequences in function fields, with applications
We prove a function field analog of Weyl's classical theorem on equidistribution of polynomial sequences. Our result covers the case in which the degree of the polynomial is greater than or equal to the characteristic of the field, which is a natural barrier when applying the Weyl differencing process to function fields. We also discuss applications to van der Corput, intersective and Glasner sets in function fields.
\section{Introduction} \label{sec:intro} Equidistribution theory started with Weyl's seminal paper \cite{weyl2}. We recall that a sequence $(a_{n})_{n=1}^{\infty}$ of real numbers is said to be \textit{equidistributed} $(\mod 1)$ if for any interval $[\alpha,\beta] \subset [0,1)$, we have \[\lim_{N \rightarrow \infty }\frac{\#\left\{a_n: 1\le n \leq N\,\, \textup{and}\,\, \{ a_n \} \in [\alpha, \beta] \right\}}{N}= \beta-\alpha ,\] where $\{a\}$ is the fractional part of a real number $a$. \textit{Weyl's criterion} says that the sequence $(a_{n})_{n=1}^{\infty}$ is equidistributed $(\mod 1)$ if and only if for any integer $m \neq 0$, we have \[ \lim_{N \rightarrow \infty} \frac{1}{N} \left|\sum_{n=1}^{N} e(m a_{n})\right| =0,\] where $e(x)=e^{2\pi i x}$. Let $f(u)=\sum_{r=0}^{k} \alpha_r u^r$ be a polynomial with coefficients in ${\mathbb R}$ and degree $k$. Weyl made the important observation that by squaring the sum $\big| \sum_{n=1}^{N} e(f(n)) \big|$, one can estimate it in terms of other exponential sums involving the shift $(f(u+h)-f(u))$, which is, for each $h\in {\mathbb Z}^+$, a polynomial of degree $(k-1)$. This process is called \textit{Weyl's differencing}. If one continues the differencing process, then the polynomial in question becomes linear after $(k-1)$ steps. Using this observation, Weyl \cite{weyl2} proved that the sequence $(f(n))_{n=1}^{\infty}$ is equidistributed $(\mod 1)$ if and only if at least one of the coefficients $\alpha_1, \ldots, \alpha_k$ of $f$ is irrational. The proof of this result was later simplified with the help of \textit{van der Corput's difference theorem} \cite{vdc}, which says that if for any $h\in {\mathbb Z}^+$, the sequence $(a_{n+h}-a_{n})_{n=1}^\infty$ is equidistributed $(\mod 1)$, then the sequence $(a_{n})_{n=1}^\infty$ is also equidistributed $(\mod 1)$. Using van der Corput's difference theorem, Weyl's equidistribution theorem for polynomials follows easily by induction on the degree of the polynomial. 
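Weyl's criterion is easy to probe numerically in this classical setting. The sketch below (the choice $\alpha=\sqrt2$, the rational comparison value $1/3$, and the numerical thresholds are illustrative, not from the paper) computes the normalized exponential sums for $f(n)=\alpha n^2$:

```python
import cmath
import math

def normalized_weyl_sum(alpha, N, m=1, degree=2):
    # (1/N) |sum_{n=1}^{N} e(m * alpha * n^degree)|, the quantity appearing
    # in Weyl's criterion for the sequence (alpha * n^degree)
    s = sum(cmath.exp(2j * math.pi * m * alpha * n ** degree)
            for n in range(1, N + 1))
    return abs(s) / N

# Irrational leading coefficient: the normalized sum is small, consistent
# with equidistribution of (sqrt(2) * n^2) mod 1.
assert normalized_weyl_sum(math.sqrt(2), 20000) < 0.1

# Rational leading coefficient 1/3: the sum stays bounded away from 0,
# since (n^2 / 3) mod 1 takes only finitely many values.
assert normalized_weyl_sum(1 / 3, 20000) > 0.4
```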
This remains to date the standard proof of Weyl's result. Let ${\mathbb F}_q$ be the finite field of $q$ elements whose characteristic is $p$. Let ${\mathbb F}_q[t]$ be the polynomial ring over ${\mathbb F}_q$. Since ${\mathbb Z}$ and ${\mathbb F}_q[t]$ share many similarities from analytic and number-theoretic points of view, it is natural to study equidistribution in the latter setting. Let ${\mathbb K} ={\mathbb F}_q(t)$ be the field of fractions of ${\mathbb F}_q[t]$. For $f/g \in {\mathbb K}$, we define a norm $|f/g|=q^{\deg f - \deg g}$ (with the convention that $\deg 0=-\infty$). The completion of ${\mathbb K}$ with respect to this norm is ${\mathbb K}_\infty = {\mathbb F}_q((1/t))$, the field of formal Laurent series in $1/t$. In other words, every element $\alpha \in {\mathbb K}_\infty$ can be written as $\alpha = \sum_{i=-\infty}^{n} a_i t^{i}$ for some $n \in {\mathbb Z}$ and $a_i \in {\mathbb F_{q}}$ $(i\le n)$. Therefore, ${\mathbb F_{q}}[t], {\mathbb K}, {\mathbb K}_\infty$ play the roles of ${\mathbb Z}, {\mathbb Q}, {\mathbb R}$ respectively. Let \[ {\mathbb T}= \left\{ \sum_{i=-\infty}^{-1} a_i t^{i}: a_i \in {\mathbb F_{q}}\,\, (i \le -1)\right\}.\] This is the analog of the unit interval $[0,1)$ and is a compact group. Let $\lambda$ be a normalized Haar measure on ${\mathbb T}$ such that $\lambda({\mathbb T})=1$. Let $I=(c_1, \ldots, c_{k})$ be a finite sequence of elements of ${\mathbb F}_q$. We refer to a set of the form \[\mathcal{C}_I=\left\{ \sum_{i=-\infty}^{-1} a_i t^{i} \in {\mathbb T}: a_{-i}=c_i \,\, (1\le i \le k)\right\} \] as a \textit{cylinder set}. The topology on ${\mathbb T}$ induced by the norm $|\cdot|$ is generated by cylinder sets, and if $\mathcal{C}_I$ is defined as above, then $\lambda(\mathcal{C}_I)=q^{-k}$. Therefore, cylinder sets play the role of intervals. For $\alpha = \sum_{i=-\infty}^{n} a_it^{i}\in {\mathbb K}_\infty$, if $a_n\neq 0$, we define $\textup{ord}\, \alpha = n$.
Therefore, $|\alpha | = q^{\textup{ord}\, \alpha}$. We say $\alpha$ is \textit{rational} if $\alpha \in {\mathbb K}$ and \textit{irrational} if $\alpha \not \in {\mathbb K}$. We define $\{ \alpha \} = \sum_{i=-\infty}^{-1} a_i t^{i}\in {\mathbb T}$ to be the {\it fractional part} of $\alpha$, and we refer to $a_{-1}$ as the \textit{residue} of $\alpha$, denoted by $\text{res}\,\alpha$. We now define the exponential function on ${\mathbb K}_\infty$. Let $\text{ tr}:{\mathbb F}_q\rightarrow {\mathbb F}_p$ denote the familiar trace map. There is a non-trivial additive character $e_q: {\mathbb F}_q\rightarrow {\mathbb C}^\times$ defined for each $a\in {\mathbb F}_q$ by taking $e_q(a) =e(\text{tr}(a)/p)$. This character induces a map $e:{\mathbb K}_\infty\rightarrow {\mathbb C}^\times$ by defining, for each element $\alpha\in {\mathbb K}_\infty$, the value of $e(\alpha )$ to be $e_q(\text{res}\,\alpha )$. For $N \in {\mathbb Z}^+$, we write ${\mathbb G_{N}}$ for the set of all polynomials in ${\mathbb F_{q}}[t]$ whose degree is less than $N$. The following notion of equidistribution was first introduced by Carlitz in \cite{carlitz} (see also \cite[Chapter 5, Section 3]{kn}). \begin{defn} Let $(a_x)_{ x \in {\mathbb F_{q}}[t]}$ be a sequence indexed by ${\mathbb F_{q}}[t]$ and taking values in ${\mathbb K}_{\infty}$. We say that the sequence $(a_x)_{ x \in {\mathbb F_{q}}[t]}$ is {\it equidistributed} in ${\mathbb T}$ if for any cylinder set $\mathcal{C}\subset{\mathbb T}$, we have \[ \lim_{N \rightarrow \infty} \frac{\# \left\{ a_x : x \in {\mathbb G_{N}}\,\, \textup{and}\,\, \{ a_x \} \in \mathcal{C} \right\}}{q^N} = \lambda(\mathcal{C}). \] \end{defn} Since one can prove exact analogs of Weyl's criterion and van der Corput's difference theorem in function fields, one expects to establish an ${\mathbb F_{q}}[t]$ analog of Weyl's equidistribution theorem for polynomial sequences.
Let $f(u)=\sum_{r=0}^{k} \alpha_r u^r$ be a polynomial with coefficients in ${\mathbb K}_\infty$ and degree $k$. All earlier works on equidistribution in ${\mathbb T}$ have been restricted to the case when $k<p$. Under this condition, Carlitz \cite{carlitz} proved an exact analog of Weyl's equidistribution theorem for the sequence $(f(x))_{x \in {\mathbb F}_q[t]}$. Dijksma \cite{dijksma} also established the same result for another stronger notion of equidistribution, subject to the same constraint $k<p$. In Carlitz's and Dijksma's work, the use of Weyl's differencing produces a factor of $k!$. When $k\ge p$, the factor is $0$, and hence the differencing method becomes ineffective in producing a desirable result. Actually, the following example, already known to Carlitz \cite[(6.8)]{carlitz}, shows that a direct ${\mathbb F_{q}}[t]$ analog of Weyl's equidistribution theorem is not always true when $k \ge p$. \begin{ex} \label{ex:1} For $\alpha = \sum_{i=-\infty}^{n} a_i t^{i} \in {\mathbb K}_\infty$, we define \begin{equation}\label{eq:t} T(\alpha)=a_{-1}t^{-1} + a_{-p-1}t^{-2} + a_{-2p-1}t^{-3} + \cdots. \end{equation} Then $T$ is a linear map from ${\mathbb K}_\infty$ to ${\mathbb T}$ (this map will be used in Section \ref{sec:equidistribution}). For any $x=\sum_{i=0}^{m} x_{i} t^{i} \in {\mathbb F_{q}}[t]$, the coefficient of $t^{-1}$ in $\alpha x^{p}$ is \[ a_{-1}x_0^{p} + a_{-p-1}x_1^p + a_{-2p-1}x_2^p+ \cdots, \] which is $0$ if $T(\alpha)=0$. Therefore, the sequence $(\alpha x^p)_{x \in {\mathbb F_{q}}[t]}$ is not equidistributed in ${\mathbb T}$ if $T(\alpha)=0$. Without difficulty, we can find an irrational element $\alpha\in {\mathbb K}_\infty$ with $T(\alpha)=0$. \end{ex} It is desirable to give a complete description of all polynomials $f(u) \in {\mathbb K}_{\infty}[u]$ for which the sequence $(f(x))_{x \in {\mathbb F_{q}}[t]}$ is equidistributed in ${\mathbb T}$. 
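Example \ref{ex:1} can be verified by direct computation. The sketch below takes $p=q=3$, represents $\alpha$ by a truncated list of its Laurent coefficients (the truncation depth and the particular coefficient values are illustrative choices, not from the paper), and checks that $\textup{res}(\alpha x^p)=0$ for every $x$ of degree at most $2$ once $T(\alpha)=0$:

```python
from itertools import product

p = 3  # characteristic; x -> x^p is the Frobenius, hence additive

def polymul(a, b):
    # product of polynomials over F_p, coefficients listed by increasing degree
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def residue(alpha, x):
    # coefficient of t^{-1} in alpha * x^p, where alpha is a truncated
    # Laurent series given as a dict {exponent: coefficient in F_p}
    xp = polymul(polymul(x, x), x)  # x^3 over F_3
    return sum(alpha.get(-1 - j, 0) * c for j, c in enumerate(xp)) % p

# T(alpha) = 0 means a_{-1} = a_{-p-1} = a_{-2p-1} = ... = 0; only these
# coefficients can reach t^{-1}, since x^p is supported on powers t^{ip}.
alpha0 = {-2: 1, -3: 2, -5: 1, -6: 2, -8: 1, -9: 2}
assert all(residue(alpha0, list(x)) == 0 for x in product(range(p), repeat=3))

# Turning on a_{-p-1} destroys the obstruction: some x gives nonzero residue.
alpha1 = dict(alpha0)
alpha1[-4] = 1
assert any(residue(alpha1, list(x)) != 0 for x in product(range(p), repeat=3))
```

Since $\deg x \le 2$ here, $x^3$ has degree at most $6$ and only the coefficients of $\alpha$ down to $t^{-7}$ matter; the extra entries are harmless.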
However, in view of Example \ref{ex:1}, such a description may be complicated and not easy to state in arithmetic terms such as irrationality. In particular, equidistribution could fail if the degree of $f(u)$ is divisible by $p$. Furthermore, for a polynomial like $(\alpha x^p + \beta x)$, it is impossible to say anything about equidistribution if one has information on $\alpha$ or $\beta$ alone, since the terms $x^p$ and $x$ ``interfere'' with each other, as the map $x \mapsto x^p$ is linear (see also \cite[(6.9)]{carlitz}). Nevertheless, one may suspect that the only pathologies that prevent equidistribution are the ones described above (i.e., exponents divisible by $p$ and interfering exponents). Thus one can make the following conjecture, which is the best possible as far as a single coefficient is concerned. \begin{conj} \label{conj} Let $f(u)=\sum_{r \in {\mathcal K}\cup\{0\}} \alpha_r u^{r}$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^+$ with coefficients in ${\mathbb K}_\infty$. Suppose that $\alpha_k$ is irrational for some $k \in {\mathcal K}$ satisfying $p \nmid k$ and $p^v k \not \in {\mathcal K}$ for any $v\in {\mathbb Z}^+$. Then the sequence $(f(x))_{x \in {\mathbb F_{q}}[t]}$ is equidistributed in ${\mathbb T}$. \end{conj} In this paper, we make some progress towards this conjecture. Given a set ${\mathcal K}$, we define the \textit{shadow} of ${\mathcal K}$, $\mathcal{S}({\mathcal K})$, to be \[\mathcal{S}({\mathcal K}) = \left\{ j \in {\mathbb Z}^+ : p\nmid \binom{r}{j}\textup{ for some } r \in {\mathcal K}\right\}.\] Below is our equidistribution result, which has no restriction on the degree of $f(u)$. \begin{thm} \label{th:main3} Let $f(u)=\sum_{r \in {\mathcal K}\cup\{0\}}\alpha_r u^{r}$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^+$ with coefficients in ${\mathbb K}_\infty$.
Suppose that $\alpha_k$ is irrational for some $k \in {\mathcal K}$ satisfying $p \nmid k$ and $p^v k \not \in \mathcal{S}({\mathcal K})$ for any $v\in {\mathbb Z}^+$. Then the sequence $(f(x))_{x \in {\mathbb F_{q}}[t]}$ is equidistributed in ${\mathbb T}$. \end{thm} \begin{ex} If $p \nmid k$ and $\alpha$ is irrational, then Theorem \ref{th:main3} implies that the sequence $(\alpha x^k)_{x \in {\mathbb F_{q}}[t]}$ is equidistributed in ${\mathbb T}$. More generally, let $f(u)=\sum_{r=0}^{k} \alpha_r u^r\in {\mathbb K}_\infty[u]$ and suppose that $\alpha_r$ is irrational for some $r$ with $p \nmid r$ and $r > k/p$. As a direct consequence of Theorem \ref{th:main3}, the sequence $(f(x))_{x \in {\mathbb F_{q}}[t]}$ is equidistributed in ${\mathbb T}$. \end{ex} \begin{ex} \label{example-3} Let $p>3$ and $\alpha, \beta , \gamma\in {\mathbb K}_\infty$ with $\beta $ irrational. Theorem \ref{th:main3} does not directly imply the equidistribution of the sequence $(\alpha x + \beta x^3 + \gamma x^{3p+1})_{x \in {\mathbb F_{q}}[t]}$, as $3p \in \mathcal{S}({{\mathcal K}})$. However, we will prove a more general form of Theorem \ref{th:main3} (Proposition \ref{prop:varmain3}), from which we can conclude that the above sequence is equidistributed in ${\mathbb T}$. In contrast, we are not able to confirm whether the sequence $(\beta x^3 + \gamma x^{4p})_{x \in {\mathbb F_{q}}[t]}$ is equidistributed in ${\mathbb T}$, though Conjecture \ref{conj} suggests that it is. \end{ex} Our proof of Theorem \ref{th:main3} is based on a ``minor arc estimate'' of the sum $|\sum _{x \in{\mathbb G_{N}}} e(f(x))|$. By combining the large sieve inequality with a generalized Vinogradov's mean value theorem in ${\mathbb F}_q[t]$, we obtain a Weyl-type estimate, which avoids the problematic use of Weyl's differencing. This approach allows us to surmount the barriers that previously obstructed viable conclusions when the degree of $f(u)$ equals or exceeds $p$.
The idea of using minor arc estimates to prove equidistribution was already known to Vinogradov, when he established the equidistribution of the sequence $(p \alpha )_{p: \text{ prime}}$ for any irrational number $\alpha \in {\mathbb R}$ (see \cite[Chapter XI]{vinogradov} or \cite[Chapter 9]{ellison}). In his proof, information on the major arcs is also required. In contrast, by relying on properties of continued fractions, we obtain our result exclusively from a minor arc estimate. The assumption $p^vk \not \in \mathcal{S}({\mathcal K})$ in Theorem \ref{th:main3} comes from the use of Weyl's shift in our minor arc estimate. Such a ``shift'' produces terms whose degrees are elements not only of ${\mathcal K}$, but also of $\mathcal{S}({\mathcal K})$ (see (\ref{f-shift}) in Section \ref{sec:weyl}). Therefore, we need to consider a mean value estimate whose indices are elements of $\mathcal{S}({\mathcal K})$. Such an ``extension of indices'' is a common theme in the study of Diophantine problems. For example, to establish an asymptotic formula in Waring's problem, one relates an equation of $k$th powers to Vinogradov's system of equations whose degrees range from $1$ to $k$ (see \cite[Section 5.3]{vaughan} for more details). The extension process produces an extra factor of $k$ in the bound $\widetilde{G}(k)$ (for the definition, see \cite[Section 10]{vw}) for Waring's problem, and in our case, it requires the stronger assumption $p^vk \not \in \mathcal{S}({\mathcal K})$, instead of $p^vk \not \in {\mathcal K}$. Although we are unable to prove Conjecture \ref{conj}, we can confirm it in the special case when $q=p$, which follows from a more general form of Theorem \ref{th:main3}. We defer to Section \ref{sec:equidistribution} for the precise statements of the results (Proposition \ref{prop:varmain3} and Corollary \ref{cor:q=p}).
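Since the hypothesis $p^v k \not\in \mathcal{S}({\mathcal K})$ recurs throughout the paper, it may help to see it computed. The sketch below (Python, with ad hoc names, not from the paper) uses Lucas' theorem, by which $p \nmid \binom{r}{j}$ exactly when every base-$p$ digit of $j$ is at most the corresponding digit of $r$, to compute $\mathcal{S}({\mathcal K})$ and the admissible exponents; it confirms the claim $3p \in \mathcal{S}({\mathcal K})$ made in Example \ref{example-3} (taking $p=5$):

```python
# Lucas-theorem check of the shadow condition; all names here are ad hoc.

def digits(n, p):
    """Base-p digits of n, least significant first."""
    out = []
    while n:
        out.append(n % p)
        n //= p
    return out

def preceq(j, r, p):
    """True iff p does not divide binom(r, j), by Lucas' theorem."""
    dj, dr = digits(j, p), digits(r, p)
    return len(dj) <= len(dr) and all(a <= b for a, b in zip(dj, dr))

def shadow(K, p):
    """The shadow S(K): all j >= 1 with p not dividing binom(r, j) for some r in K."""
    return {j for r in K for j in range(1, r + 1) if preceq(j, r, p)}

def admissible(K, p):
    """Exponents k in K with p not dividing k and p^v k outside S(K) for all v >= 1."""
    S = shadow(K, p)
    good = set()
    for k in K:
        if k % p != 0:
            m = p * k
            while m <= max(S) and m not in S:
                m *= p
            if m > max(S):
                good.add(k)
    return good

# Example with p = 5: for K = {1, 3, 3p+1} = {1, 3, 16}, the element
# 3p = 15 lies in S(K), so the theorem does not apply to the exponent 3.
assert 15 in shadow({1, 3, 16}, 5)
assert admissible({1, 3, 16}, 5) == {16}

# With p = 2 and K = {9, 5, 3, 1}, the exponents 9, 5, 3 all satisfy the
# hypothesis (an example revisited in Section 3).
assert admissible({9, 5, 3, 1}, 2) == {9, 5, 3}
```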
Given our equidistribution result, we will study some special sets in ${\mathbb F_{q}}[t]$ which are closely related to equidistribution and at present less well understood than their integer counterparts. These are van der Corput and intersective sets. In particular, we will prove the following result. \begin{thm}\label{th:sakozy} Let $\Phi(u) = \sum_{r \in {\mathcal K}\cup \{0\}}a_r u^{r}$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^{+}$ with coefficients in ${\mathbb F}_q[t]$. Suppose that $\Phi(u)$ has a root (mod $g$) for any $g \in {\mathbb F_{q}}[t] \setminus \{0\}$. Suppose further that $a_k \neq 0$ for some $k \in {\mathcal K}$ satisfying $p\nmid k$ and $p^vk\not\in \mathcal{S}({\mathcal K})$ for any $v \in {\mathbb Z}^+$. Then for any subset $\mathcal{A}$ of ${\mathbb F_{q}}[t]$ of positive upper density, there exist distinct elements $a, a'$ of $\mathcal{A}$ such that $a-a'=\Phi(x)$ for some $x \in {\mathbb F_{q}}[t]$. \end{thm} The above theorem is an ${\mathbb F_{q}}[t]$ analog of a result of S\'{a}rk\"{o}zy \cite{sarkozy1}. Previously, such a result with no restriction on the degree of $\Phi$ was not available, except in the case $\Phi(0)=0$ \cite{blm}. We defer to Section \ref{sec:vdc} for an introduction to these notions and the statement of our results. The paper is organized as follows. In Section \ref{sec:prelim}, we introduce some preliminaries that are needed to prove our results. We prove a minor arc estimate in Section \ref{sec:weyl} and derive its generalization in Section \ref{sec:weyl2}. Then we use these results to prove Theorem \ref{th:main3} in Section \ref{sec:equidistribution}. Finally, in Section \ref{sec:vdc}, we discuss applications of our equidistribution result to van der Corput and intersective sets in ${\mathbb F_{q}}[t]$. \section{Preliminaries} \label{sec:prelim} We begin this section by reviewing an orthogonality relation for the exponential function $e(\cdot)$ that is defined in Section 1.
For $\alpha\in {\mathbb K}_{\infty}$, we have \cite[Lemma 7]{Ku}: \[\sum_{x \in {\mathbb G_{N}}} e(x\alpha)= \left\{ \begin{array}{ll} q^N, & \hbox{if $\textup{ord}\, \{\alpha \}<-N$,} \\ 0, & \hbox{if $\textup{ord}\, \{\alpha \}\ge -N$.} \end{array} \right.\] Therefore, for any polynomials $a, g\in {\mathbb F_{q}}[t]$ with $g\neq 0$, we have \begin{equation}\label{eq:orthogonal2} \sum_{x \in {\mathbb G}_{\textup{ord}\, g}} e\left(\frac{xa}{g}\right)= \left\{ \begin{array}{ll} |g|, & \textup{if}\,\, a \equiv 0\,\, (\mod\,g), \\ 0, & \hbox{otherwise.} \end{array} \right. \end{equation} To simplify notation in the proofs of the paper, we need to introduce additional definitions. Given $j, r \in {\mathbb Z}^+$, we write $j \preceq_p r$ if $p\nmid \binom{r}{j}$. By Lucas' theorem, this happens precisely when all the digits of $j$ in base $p$ are less than or equal to the corresponding digits of $r$. From this characterization, it is easy to see that the relation $\preceq_p$ defines a partial order on ${\mathbb Z}^+$. If $j \preceq_p r$, then we necessarily have $j \leq r$. Let ${\mathcal K} \subset {\mathbb Z}^+$. We say an element $k \in {\mathcal K}$ is {\it maximal} if it is maximal with respect to $\preceq_p$, that is, for any $r \in {\mathcal K}$, either $r \preceq_p k$ or $r$ and $k$ are not comparable. We recall that \[ \mathcal{S}({\mathcal K}) = \left\{ j \in {\mathbb Z}^+ : j \preceq_p r \textup{ for some } r \in {\mathcal K}\right\}.\] Define \begin{equation} \label{eq:k-star} {\mathcal K}^{*} = \left\{ k \in {\mathcal K}: p \nmid k\,\, \textup{and}\,\, p^v k \not \in \mathcal{S}({\mathcal K})\,\, \textup{for any}\,\, v\in {\mathbb Z}^+ \right\}. \end{equation} We have the following facts about the partial ordering $\preceq_p$. \begin{lem} \label{lem:shadow} For ${\mathcal K} \subset {\mathbb Z}^{+}$, we have \vspace{-.3cm} \begin{enumerate} \item if $k$ is maximal in ${\mathcal K}$, then $k$ is maximal in $\mathcal{S}({\mathcal K})$. 
\item ${\mathcal K}^* \subset \mathcal{S}({\mathcal K})^*$. \end{enumerate} \end{lem} \begin{proof} The first part of the lemma is immediate from the definition of $\mathcal{S}({\mathcal K})$. The second part follows from the observation that $\mathcal{S}(\mathcal{S}({\mathcal K})) = \mathcal{S}({\mathcal K})$. \end{proof} \begin{lem} \label{lem:shadow2} Let ${\mathcal K} \subset {\mathbb Z}^{+}$ and $k \in {\mathcal K}^*$. If $j \in {\mathcal K}$ satisfies $ k\preceq_pj$, then $j \in {\mathcal K}^*$. \end{lem} \begin{proof} We have $p \nmid k$ and $p \nmid \binom{j}{k}$. By Lucas' theorem, it follows that $p \nmid j$. Again, by Lucas' theorem, for any $v\in {\mathbb Z}^+$, we have $p^v k \preceq_p p^v j$. It follows that $p^v j \not \in \mathcal{S}({\mathcal K})$, and hence $j \in {\mathcal K}^*$. \end{proof} We will apply the following large sieve inequality to get a minor arc estimate. Given a set $\Gamma \subset {\mathbb K}_\infty$, if for any distinct elements $\gamma_1,\gamma _2 \in \Gamma$, we have $\textup{ord}\, \{(\gamma _1-\gamma_2)\} \ge \delta$, then we say that the points $\{\gamma:\gamma \in \Gamma\}$ are {\it spaced at least $q^\delta$ apart in ${\mathbb T}$}. \begin{thm} \label{largesieve} $($\textup{Hsu} \cite[Theorem 2.4]{hsu}$)$ Given $K\in {\mathbb Z}^+$, let $\Gamma\subset {\mathbb K}_\infty$ be a set whose elements are spaced at least $q^{-K}$ apart in ${\mathbb T}$. Let $(b_x)_{x \in {\mathbb F}_q[t]}$ be a sequence of complex numbers. For $\beta \in {\mathbb K}_\infty$, define \[ \mathcal{S}(\beta) = \displaystyle \sum_{x \in {\mathbb G_{N}}}\, b_x\, e(x\beta).\] Then we have \[ \displaystyle \sum_{\gamma \in \Gamma}\, |\mathcal{S}(\gamma)|^2 \le \max\left\{q^N, q^{K-1}\right\} \displaystyle \sum_{x\in {\mathbb G_{N}} }\, |b_x|^2.\] \end{thm} In the following, we state a mean value theorem whose indices are elements of $\mathcal{S}({\mathcal K})$. 
For $j \in \mathcal{S}({\mathcal K})$, by the definition of $\mathcal{S}({\mathcal K})$, if $i \in {\mathbb Z}^+$ satisfies $i\preceq_pj$, then $i \in \mathcal{S}({\mathcal K})$. Therefore, the set $\mathcal{S}({\mathcal K})$ satisfies Condition* which is defined in \cite[Section 1]{klz}. For $N \in {\mathbb Z}^+$, let $J_s(\mathcal{S}({\mathcal K});N)$ denote the number of solutions of the system \[ u_1^j + \cdots + u_s^j = v_1^j + \cdots + v_s^j \quad (j \in \mathcal{S}({\mathcal K}))\] with $u_r, v_r \in {\mathbb G_{N}}$ $(1 \le r \le s)$. Since $(u_1+\cdots +u_s)^p = u_1^p + \cdots + u_s^p$, the above equations are not always independent. To obtain independence, we consider the set \begin{equation}\label {S(K)-prime} \mathcal{S}({\mathcal K})' = \left\{i \in {\mathbb Z}^+: p \nmid i \,\, \textup{and}\,\, p^v i \in \mathcal{S}({\mathcal K})\,\, \textup{for some}\,\, v \in {\mathbb Z}^+\right\}. \end{equation} We note that for $j = p^vi$ with $p\nmid i$, we have $u_1^j + \cdots + u_s^j = ( u_1^i + \cdots + u_s^i)^{p^v}$. Therefore, $J_s(\mathcal{S}({\mathcal K});N)$ also counts the number of solutions of the system \[ u_1^i + \cdots + u_s^i = v_1^i + \cdots + v_s^i \quad (i \in \mathcal{S}({\mathcal K})')\] with $u_r, v_r \in {\mathbb G_{N}}$ $(1 \le r \le s)$. The following result gives an upper bound of $J_s(\mathcal{S}({\mathcal K}); N)$. \begin{thm}\label{VMT} \label{vmt}$($\textup{Liu \& Wooley \cite{lw}; see also \cite[Theorem 1.1]{klz}}$)$ Let $\psi =\#\mathcal{S}({\mathcal K})'$, $\phi = \max_{i \in \mathcal{S}({\mathcal K})'} i$ and $\kappa = \sum_{i \in \mathcal{S}({\mathcal K})'}i$. Suppose that $\phi \ge 2$ and $s \ge (\psi\phi+ \psi)$. Then for any $\epsilon>0$, there exists a constant $C_1 = C_1(s; {\mathcal K}; \epsilon; q)>0$ such that \[ J_s(\mathcal{S}({\mathcal K});N) \le C_1(q^N)^{2s-\kappa + \epsilon}. 
\] \end{thm} We now recall some facts about continued fractions in ${\mathbb K}_\infty$ which are needed in the proof of Theorem \ref{th:main3}. For any $\alpha \in {\mathbb K}_{\infty}$, we can write \[ \alpha = b_0 + \frac{1}{b_1+\frac{1}{b_2+\cdots}} = [b_0, b_1, b_2, \ldots],\] where $b_i \in {\mathbb F_{q}}[t]$ and $\textup{ord}\, b_i >0$ $(i\ge 1)$. We note that $\alpha$ is irrational if and only if its continued fraction expansion is infinite. In contrast with the real case, where rational numbers have two continued fraction expansions (e.g., $1/3=[0,3]=[0,2,1]$), continued fraction expansions in ${\mathbb K}_{\infty}$ are unique. We define two sequences $(a_n)_{n \geq -2}$ and $(g_n)_{n \geq -2}$ in ${\mathbb F_{q}}[t]$ recursively by putting $a_{-2}=0, g_{-2}=1, a_{-1}=1, g_{-1}=0$, and for all $n \ge 0$, \[ a_n = b_n a_{n-1} + a_{n-2} \qquad \textup{and} \qquad g_n = b_n g_{n-1} + g_{n-2}. \] Then for all $n \geq 0$, we have \[ g_n a_{n-1} - a_{n} g_{n-1} = (-1)^n \qquad \textup{and} \qquad [b_0, b_1, \ldots, b_n] = a_n/g_n. \] The fractions $a_n/g_n$ $(n \ge 0)$ are called the \textit{convergents} of $\alpha$. One can also show by induction that the sequence $(\textup{ord}\, g_n)_{n\ge 0}$ is strictly increasing. \begin{prop} \label{prop:cont} $($\cite[Section 1]{schmidt}$)$ Let $a_n/g_n$ $(n \ge 0)$ be convergents of $\alpha$. We have \vspace{-.3cm} \begin{enumerate} \item $\textup{ord}\, (g_n\alpha -a_n) = -\textup{ord}\, g_{n+1}$ $(n \ge 0)$. \item \textup{(Legendre's theorem)} If $a, g \in {\mathbb F_{q}}[t]$ satisfy $\textup{ord}\, (g\alpha - a) < -\textup{ord}\, g$, then $a/g$ is a convergent of $\alpha$. \end{enumerate} \end{prop} The following lemma is about elements in ${\mathbb K}_\infty$ that are well approximated by rationals.
\begin{lem}\label{lem:diophantine} Suppose that $\alpha \in {\mathbb K}_{\infty}$ satisfies the following condition: there exists a constant $\kappa >1$, such that, for all $N$ sufficiently large, there exist $a, g \in {\mathbb F_{q}}[t]$ with $\textup{ord}\, (g\alpha -a) \le -\kappa N$ and $\textup{ord}\, g < N$. Then $\alpha$ is rational. \end{lem} \begin{proof} Suppose that $\alpha$ is irrational. Let $a_n/g_n$ $(n \ge 0)$ be the convergents of $\alpha$. Since $\alpha$ is irrational, we have $\lim_{n \rightarrow \infty} \textup{ord}\, g_n = \infty$. Let $n$ be sufficiently large and $N=\textup{ord}\,{g_n}$. By hypothesis, there exist $a, g\in {\mathbb F_{q}}[t]$ such that $\textup{ord}\, g< N$ and \[\textup{ord}\, (g\alpha - a) \le -\kappa N< -\textup{ord}\, g_n.\] By Proposition \ref{prop:cont}(2), $a/g$ is a convergent of $\alpha$. Since $\textup{ord}\, g< N=\textup{ord}\, g_n$ and the sequence $(\textup{ord}\, g_n)_{n\ge 0}$ is strictly increasing, there exists $m \in {\mathbb Z}^+\cup\{0\}$ with $m<n$ such that $a=a_m$ and $g=g_m$. By Proposition \ref{prop:cont}(1), \[ \textup{ord}\, (g\alpha - a) = \textup{ord}\,(g_m\alpha - a_m)= -\textup{ord}\, (g_{m+1}) \ge -\textup{ord}\, g_n,\] which contradicts the previous inequality. Therefore, $\alpha$ is rational. \end{proof} We end this section by recalling Weyl's criterion in ${\mathbb F_{q}}[t]$. \begin{thm} \textup{(Carlitz {\cite[Theorem 4]{carlitz}})} \label{th:weyl} The sequence $(a_x)_{x \in {\mathbb F_{q}}[t]} \subset {\mathbb K}_{\infty}$ is equidistributed in ${\mathbb T}$ if and only if for any $m \in {\mathbb F_{q}}[t] \setminus \{0\}$, we have \[ \lim_{N \rightarrow \infty} \frac{1}{q^N} \left|\sum_{x \in {\mathbb G_{N}}} e(ma_x) \right|=0.\] \end{thm} \section{A Weyl-type estimate} \label{sec:weyl} In this section, we will establish the following minor arc estimate. 
\begin{thm} \label{th:main1} Let $f(u) = \sum_{r \in {\mathcal K}\cup \{0\}} \alpha_r u^{r}$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^{+}$ with coefficients in ${\mathbb K}_\infty$. Suppose that $k\in {\mathcal K}^*$ (defined as in (\ref {eq:k-star})) is maximal in ${\mathcal K}$. Then there exist constants $c,C>0$, depending only on ${\mathcal K}$ and $q$, such that the following holds: suppose that for some $0 < \eta \le cN$, we have \[\left|\sum_{x \in {\mathbb G_{N}}} e(f(x)) \right| \geq q^{N -\eta }.\] Then for any $\epsilon>0$ and $N$ sufficiently large in terms of ${\mathcal K}$, $\epsilon$ and $q$, there exist $a,g \in{\mathbb F_{q}}[t]$ such that \[\textup{ord}\, (g\alpha_k - a) <-kN + \epsilon N+C\eta \qquad \textrm{and} \qquad \textup{ord}\, g \leq \epsilon N+C\eta.\] \end{thm} \pagebreak \noindent{\bf Remark.} \vspace{-.3cm} \begin{itemize} \item In Theorem \ref{th:main1}, the coefficient $\alpha_k$ plays the role of the leading coefficient of the polynomial. This is, in a sense, the ``true'' ${\mathbb F_{q}}[t]$ analog of the leading coefficient. \item Clearly, if $k$ is the greatest element in ${\mathcal K}$, then $k$ is maximal in ${\mathcal K}$. However, a set may have more than one maximal element. For example, if $p=2$ and ${\mathcal K} = \{9, 5, 3, 1 \}$ then $9, 5, 3$ are maximal elements of ${\mathcal K}$ and they all satisfy the hypothesis of Theorem \ref{th:main1}. \end{itemize} The following two lemmas are needed in our proof of Theorem \ref{th:main1}. \begin{lem} \textup{(Weyl's shift)} \label{weylshift} Let $\mathcal{A}$ be a subset of ${\mathbb G_{N}}$. 
We have \[ \displaystyle \sum_{x \in {\mathbb G_{N}}} e(f(x))= (\#\mathcal{A})^{-1}\displaystyle \sum_{x \in {\mathbb G_{N}}} \displaystyle \sum_{y \in \mathcal{A}} e(f(y-x)).\] \end{lem} \begin{proof} For $y \in \mathcal{A} \subset {\mathbb G_{N}}$, we have \[ \sum_{x \in {\mathbb G_{N}}}e(f(x)) = \sum_{y-x \in {\mathbb G_{N}}} e(f(y-x))=\sum_{x \in {\mathbb G_{N}}} e(f(y-x)).\] It follows that \[ \#\mathcal{A}\cdot \sum_{x \in {\mathbb G_{N}}}e(f(x))= \displaystyle \sum_{y \in \mathcal{A}} \sum_{x \in {\mathbb G_{N}}} e(f(y-x)) = \sum_{x \in {\mathbb G_{N}}}\displaystyle \sum_{y \in \mathcal{A}}e(f(y-x)).\] \end{proof} For ${\mathcal K} \subset {\mathbb Z}^{+}$, let $\mathcal{S}({\mathcal K})$ be its shadow. Let $f(u) = \sum_{r \in {\mathcal K}\cup \{0\}}\alpha_r u^{r}$ be a polynomial supported on ${\mathcal K}$ with coefficients in ${\mathbb K}_\infty$. For any $r \in {\mathcal K}$, we have \[ (y-x)^r = \displaystyle \sum_{j \preceq_p r} \binom{r}{j} y^j(-x)^{r-j} + (-x)^r.\] Therefore, for a fixed $x \in {\mathbb G_{N}}$, if $k$ is maximal in ${\mathcal K}$, then there exist $\gamma_j =\gamma_j(\{\alpha_r\}_{r \in {\mathcal K}}; x) \in \mathbb{K}_\infty$ $(j \in \mathcal{S}({\mathcal K}) \setminus \{k\})$ and $\gamma=\gamma(\{\alpha_r\}_{r \in {\mathcal K}\cup\{0\}}; x) \in \mathbb{K}_\infty$ such that \begin{equation}\label{f-shift} f(y-x)= \alpha_k (y-x)^k + \displaystyle \sum_{r \in {\mathcal K}\setminus \{k\}}\alpha_r (y-x)^r+\alpha_0 = \alpha_k y^k + \displaystyle \sum_{j \in \mathcal{S}({\mathcal K}) \setminus \{k\}} \gamma_j y^j + \gamma. \end{equation} \begin{lem}\label{spacing} Let $M \in {\mathbb Z}^+$ with $M \le N$. Let $k \in {\mathbb Z}^+$ with $p\nmid k$ and $\alpha _k \in \mathbb{K}_\infty$. Suppose that $a, g\in {\mathbb F}_q[t]$ with $(a,g)=1$, $\textup{ord}\,(g\alpha_k - a) < -kM$ and either $\textup{ord}\,(g\alpha_k - a)\ge(M-kN)$ or $\textup{ord}\, g >M$. 
Let $\mathcal{L}_0$ be a set of monic irreducible polynomials of degree $M$, such that, for any distinct elements $l_1, l_2 \in \mathcal{L}_0$, we have $l_1^k \equiv l_2^k$ $(\mod \,g)$ if and only if $l_1 \equiv l_2$ $(\mod\, g)$. Then the points $\{\alpha_k l^k:l \in \mathcal{L}_0\}$ are spaced at least $\min\{|g|^{-1}, q^{k(M-N)}\}$ apart in $\mathbb{T}$. \end{lem} \begin{proof} Let $l_1, l_2 \in \mathcal{L}_0$ with $l_1 \not \equiv l_2$ (mod $g$). Then by the property of $\mathcal{L}_0$, we have $l_1^k \not \equiv l_2^k$ (mod $g$). Write $\alpha_k = a/g + \beta$. Then \[ \textup{ord}\, \{\alpha_k(l_1^k - l_2^k) \} = \textup{ord}\, \{a(l_1^k - l_2^k)/g + \beta(l_1^k - l_2^k)\}.\] Since $\textup{ord}\, g\beta < -kM$ and $\textup{ord}\, l_1 = \textup{ord}\, l_2= M$, we have \[ \textup{ord}\, \{\beta(l_1^k - l_2^k) \} <-kM - \textup{ord}\, g + kM = -\textup{ord}\, g.\] Since $l_1^k \not \equiv l_2^k$ (mod $g$) and $(a,g) = 1$, we have \[ \textup{ord}\, \{a(l_1^k - l_2^k)/g\} \ge -\textup{ord}\, g.\] Therefore, it follows that \begin{equation}\label{spacing1} \textup{ord}\, \{\alpha_k(l_1^k - l_2^k) \} = \textup{ord}\, \{a(l_1^k - l_2^k)/g \}\ge -\textup{ord}\, g. \end{equation} We now divide into cases, depending on the size of $\textup{ord}\, g$. \noindent \textbf{Case 1.} Suppose that $\textup{ord}\, g> M$. In this case, the elements of $\mathcal{L}_0$ are distinct (mod $g$). By (\ref{spacing1}), the points $\alpha_kl^k$ are spaced at least $|g|^{-1}$ apart in $\mathbb{T}$. \noindent \textbf{Case 2.} Suppose that $\textup{ord}\, g\le M$. Then by the assumption, we have $\textup{ord}\, (g\alpha_k - a) \ge (M-kN)$. If $l_1, l_2 \in \mathcal{L}_0$ satisfy $l_1 \not \equiv l_2$ (mod $g$), then it follows from (\ref{spacing1}) that $\alpha_k l_1^k$ and $\alpha_k l_2^k$ are spaced at least $|g|^{-1}$ apart in $\mathbb{T}$.
If $l_1 \equiv l_2$ (mod $g$), since $\textup{ord}\, (g\alpha_k - a) < -kM$ and $\textup{ord}\,(g\alpha_k - a) \ge (M-kN)$, we have \begin{equation}\label{spacing2} \begin{split} \textup{ord}\, \{\alpha_k(l_1^k - l_2^k) \} &= \textup{ord}\, \{(\alpha_k-a/g)(l_1^k - l_2^k) \}\\ &= \textup{ord}\, \big((\alpha_k-a/g)(l_1^k - l_2^k)\big)\\ & \ge M - kN - \textup{ord}\, g + \textup{ord}\, (l_1^k - l_2^k). \end{split} \end{equation} We note that \[ \textup{ord}\, (l_1^k - l_2^k) = \textup{ord}\, (l_1-l_2) + \textup{ord}\, (l_1^{k-1} + l_1^{k-2}l_2 + \cdots + l_2^{k-1}).\] If $l_1 \neq l_2$ and $l_1 \equiv l_2$ (mod $g$), we have \[ \textup{ord}\, (l_1 -l_2) \ge \textup{ord}\, g.\] Furthermore, since the elements of $\mathcal{L}_0$ are monic and of degree $M$, the term $(l_1^{k-1} +l_1^{k-2}l_2 + \cdots + l_2^{k-1})$ is of degree $(k-1)M$ with leading coefficient $k$. Since $p\nmid k$, we have \[ \textup{ord}\, (l_1^{k-1} +l_1^{k-2}l_2 + \cdots + l_2^{k-1}) = (k-1)M.\] On combining the above two estimates, we have \[ \textup{ord}\, (l_1^k - l_2^k) \ge \textup{ord}\, g + (k-1)M,\] and hence by (\ref{spacing2}) we have \[ \textup{ord}\, \{\alpha_k(l_1^k - l_2^k) \} \ge k(M-N).\] In this case, therefore, $\alpha_k l_1^k$ and $\alpha_k l_2^k$ are spaced at least $q^{k(M-N)}$ apart in $\mathbb{T}$. Combining the above two cases, we see that for any distinct elements $l_1, l_2 \in\mathcal{L}_0$, the points $\alpha_k l_1^k$ and $\alpha_k l_2^k$ are spaced at least $\min\{|g|^{-1}, q^{k(M-N)}\}$ apart in ${\mathbb T}$. \end{proof} We are now ready to prove Theorem \ref{th:main1}. \vspace{-0.3cm} \begin{proof}[Proof of Theorem \ref{th:main1}] We first note that if Theorem \ref{th:main1} holds for $f(u)-\alpha_0 = \sum_{r \in {\mathcal K}}\alpha_ru^r$, then it holds for $f(u)$. Therefore, without loss of generality, we can assume $\alpha_0=0$. Let $k$ be a maximal element of ${\mathcal K}$ which satisfies $p\nmid k$ and $p^vk \not \in \mathcal{S}({\mathcal K})$ for any $v\in {\mathbb Z}^+$.
Let $\alpha_k \in {\mathbb K}_\infty$ and $M \in {\mathbb Z}^+$ with $2M \le N$. By Dirichlet's theorem in ${\mathbb F}_q[t]$ \cite[Lemma 3]{Ku}, there exist $a, g \in {\mathbb F}_q[t]$ with $(a,g)=1$, $\textup{ord}\,(g\alpha_k-a) <-kM$ and $\textup{ord}\, g \le kM$. Suppose that either $\textup{ord}\,(g\alpha_k-a) \ge (M-kN)$ or $\textup{ord}\, g > M$. We will show that, for $M$ suitably chosen, such an assumption leads to an upper bound for $\big|\sum_{x \in {\mathbb G_{N}}}e(f(x))\big|$, which contradicts the lower bound stated in the theorem. Let $\mathcal{L}$ be the set of monic irreducible polynomials $l$ satisfying $\textup{ord}\, l=M$ and $(l,g)=1$. Since $\textup{ord}\, g\le kM$, $g$ has at most $k$ irreducible factors of degree $M$. Therefore, by the prime number theorem in ${\mathbb F}_q[t]$, for $M$ sufficiently large, in terms of $k$ (thus ${\mathcal K}$) and $q$, we have $\#\mathcal{L}\ge q^M/(2M)$. Let ${\mathcal A}$ be the multiset \[ \mathcal{A} = \big\{y = lw: l \in \mathcal{L}\,\, \textup{and}\,\,w \in {\mathbb F}_q[t]\,\, \textup{with}\,\,w \in {\mathbb G}_{(N-M)}\big\},\] where the multiplicity of each $y$ is the number of its representations $y=lw$. Then $\mathcal{A} \subseteq {\mathbb G_{N}}$ and \[ \#\mathcal{A} \geq q^M/(2M)\cdot q^{(N-M)} = q^N/(2M).\] By Lemma \ref{weylshift}, (\ref{f-shift}) and the bound $\#\mathcal{A} \ge q^N/(2M)$, we have \begin{eqnarray*} \Big|\displaystyle \sum_{x \in {\mathbb G_{N}}}e(f(x))\Big| \nonumber &=& (\#\mathcal{A})^{-1}\Big|\displaystyle \sum_{x \in {\mathbb G_{N}}} \displaystyle \sum_{y \in \mathcal{A}} e\Big(\alpha_k y^k + \displaystyle \sum_{j \in \mathcal{S}({\mathcal K}) \setminus \{k\}} \gamma_j(\{\alpha _r\}_{r\in{\mathcal K}};x) y^j + \gamma(\{\alpha _r\}_{r\in {\mathcal K}};x) \Big)\Big| \nonumber \\ &\le& 2M\max_{x \in {\mathbb G_{N}}}\left|\displaystyle \sum_{y \in \mathcal{A}} e\Big(\alpha_k y^k + \displaystyle \sum_{j \in \mathcal{S}({\mathcal K}) \setminus \{k\}} \gamma_j(\{\alpha _r\}_{r\in{\mathcal K}};x) y^j\Big)\right|.
\end{eqnarray*} Let $\gamma_{j}= \gamma_j(\{\alpha _r\}_{r\in{\mathcal K}};x)\in {\mathbb K}_\infty$ ($j \in {\mathcal S} ({\mathcal K}) \setminus \{k\}$) correspond to the choice of $x$ which maximizes the above expression, and we fix them from now on. Let $s\in {\mathbb Z}^+ $ with $s\geq (\psi\phi+\psi)$, where $\psi$ and $\phi$ are defined as in Theorem \ref{vmt}. By H\"older's inequality, \[ \left|\displaystyle \sum_{x \in {\mathbb G_{N}}}e(f(x))\right|^{2s} \le (2M)^{2s}(q^M)^{2s-1}\displaystyle \sum_{l \in \mathcal{L}}\left|\displaystyle \sum_{w \in {\mathbb G}_{(N-M)}}e\Big(\alpha_k (lw)^k + \displaystyle \sum_{j \in \mathcal{S}({\mathcal K}) \setminus \{k\}} \gamma_j (lw)^j\Big)\right|^{2s}.\] Let $\epsilon>0$ be arbitrary. For $h, g \in {\mathbb F}_q[t]$ with $(h,g)=1$, by Hensel's lemma, there exists $C_2=C_2(\epsilon; q)>0$ such that (see \cite[Corollary 7.2 and (12.4)]{lw-waring} for more details) \[ \#\big\{z \in {\mathbb F}_q[t]: z^k \equiv h\, (\mod g)\,\, \textup{and}\,\, \textup{ord}\, z<\textup{ord}\, g\big\} \le C_2|g|^\epsilon.\] Therefore, there exists $L \in {\mathbb Z}^+$ satisfying $L \le C_2|g|^\epsilon$ with the following property: the set $\mathcal{L}$ can be divided into $L$ classes, $\mathcal{L}_1, \ldots, \mathcal{L}_L$, such that, for any distinct elements $l_1, l_2 \in \mathcal{L}_r$ $(1 \le r \le L)$, we have $l_1^k \equiv l_2^k$ (mod $g$) if and only if $l_1 \equiv l_2$ (mod $g$). Then there exists $r\in {\mathbb Z}^+$ with $ r \le L$ for which \[\left|\displaystyle \sum_{x \in {\mathbb G_{N}}}e(f(x))\right|^{2s} \le (2M)^{2s}(q^M)^{2s-1} C_2|g|^\epsilon\displaystyle \sum_{l \in \mathcal{L}_r}\left|\displaystyle \sum_{w \in {\mathbb G}_{(N-M)}}e\Big(\alpha_k (lw)^k + \displaystyle \sum_{j \in \mathcal{S}({\mathcal K})\setminus \{k\}} \gamma_j (lw)^j\Big)\right|^{2s}.\] Let $\mathcal{S}({\mathcal K})'$ be defined as in (\ref{S(K)-prime}). 
For ${\bf h} = (h_i)_{i \in \mathcal{S}({\mathcal K})'}$ with $h_i \in {\mathbb F}_q[t]$, let $b(\bf{h})$ denote the number of solutions of the system \[ w_1^i + \cdots + w_s^i = h_i\quad (i \in \mathcal{S}({\mathcal K})')\] with $ w_r\in {\mathbb G}_{(N-M)}$ $(1 \le r \le s)$. For $i \in \mathcal{S}({\mathcal K})'$, we have $h_i\in {\mathbb G}_{i(N-M)}$. Furthermore, for $j= p^vi\in \mathcal{S}({\mathcal K})$ with $i \in \mathcal{S}({\mathcal K})'$ and $v \in {\mathbb Z}^+$, we have $w_1^j + \cdots + w_s^j = h_i^{p^v}$. Therefore, by defining $h_j =h_i^{p^v}$, we see that $b(\bf{h})$ also counts the number of solutions of the system \[ w_1^j + \cdots + w_s^j = h_j\quad (j \in \mathcal{S}({\mathcal K}))\] with $w_r\in {\mathbb G}_{(N-M)}$ $(1 \le r \le s)$. We remark here that since $p\nmid k$, we have $k \in \mathcal{S}({\mathcal K})'$. Moreover, since $p^vk \not \in \mathcal{S}({\mathcal K})$ for any $v\in {\mathbb Z}^+$, a sum over $h_k$ is independent of another $h_j$ $(j \in \mathcal{S}({\mathcal K})\setminus\{k\})$. Therefore, we have \begin{equation*} \begin{split} & \quad \left|\displaystyle \sum_{x \in {\mathbb G_{N}}}e(f(x))\right|^{2s} \\ & \le (2M)^{2s}(q^M)^{2s-1}C_2|g|^\epsilon\displaystyle \sum_{l \in \mathcal{L}_r}\left|\displaystyle \sum_{\substack{h_i \in {\mathbb G}_{i(N-M)} \\ i \in \mathcal{S}({\mathcal K})'} }b({\bf h})e\Big(\alpha_kh_k l^k + \displaystyle \sum_{j \in \mathcal{S}({\mathcal K})\setminus \{k\}} \gamma_j h_jl^j\Big)\right|^2\\ & \le (2M)^{2s}(q^M)^{2s-1}C_2|g|^\epsilon(q^{(N-M)})^{\sum_{i \in \mathcal{S}({\mathcal K})'\setminus \{k\}}i}\displaystyle \sum_{\substack{h_i \in {\mathbb G}_{i(N-M)}\\i \in \mathcal{S}({\mathcal K})'\setminus\{k\}}}\displaystyle \sum_{l \in \mathcal{L}_r}\left|\displaystyle \sum_{h_k\in {\mathbb G}_{k(N-M)}} b({\bf h})e(\alpha_kh_k l^k)\right|^2. 
\end{split} \end{equation*} Since $p\nmid k$, by Theorem \ref{largesieve} and Lemma \ref{spacing}, we have \[ \displaystyle \sum_{l \in \mathcal{L}_r}\left|\displaystyle \sum_{h_k\in {\mathbb G}_{k(N-M)}} b({\bf h})e(\alpha_kh_k l^k)\right|^2 \le \big(|g| + q^{k(N-M)}\big) \displaystyle \sum_{h_k\in{\mathbb G}_{ k(N-M)}}|b({\bf h})|^2 .\] Furthermore, by considering the underlying equations and applying Theorem \ref{vmt}, there exists a constant $C_1=C_1(s; {\mathcal K}; \epsilon; q)>0$ such that \[ \displaystyle \sum_{\substack{h_i \in {\mathbb G}_{i(N-M)}\\i \in \mathcal{S}({\mathcal K})'\setminus\{k\}}}\displaystyle \sum_{h_k\in {\mathbb G}_{k(N-M)}}|b({\bf h})|^2 \le J_s(\mathcal{S}({\mathcal K}); (N-M)) \le C_1 (q^{N-M})^{2s-\sum_{i \in \mathcal{S}({\mathcal K})'}i+(k+1)\epsilon}.\] Combining the above three estimates, it follows that \[ \left|\displaystyle \sum_{x \in {\mathbb G_{N}}}e(f(x))\right|^{2s} \le (2M)^{2s}C_1(q^N)^{2s} (q^M)^{-1}C_2|g|^\epsilon\big(|g|q^{k(M-N)}+1\big)\big(q^{(N-M)}\big)^{(k+1)\epsilon}.\] Since $\textup{ord}\, g \le kM$ and $2M\le N$, we have \[ \left|\displaystyle \sum_{x \in {\mathbb G_{N}}}e(f(x))\right|\le 2Mq^N \big(2C_1C_2(q^M)^{-1}(q^{kM})^\epsilon\big(q^{(N-M)}\big)^{(k+1)\epsilon}\big)^{1/2s}.\] Therefore, there exists a constant $C_3=C_3(s; {\mathcal K}; \epsilon; q)>0$ such that for $M$ sufficiently large, in terms of ${\mathcal K}$, $\epsilon$ and $q$, \[ \left|\displaystyle \sum_{x \in {\mathbb G_{N}}}e(f(x))\right|\le q^N\big(C_3(q^M)^{-1}(q^N)^{(k+1)\epsilon}\big)^{1/2s}.\] We now make the specific choice \[M=[\log_qC_3 + N(k+1)\epsilon +2s\eta+1].\] Then it follows that \[ \left|\displaystyle \sum_{x \in {\mathbb G_{N}}}e(f(x))\right|< q^{N-\eta },\] which contradicts the assumption of Theorem \ref{th:main1}.
This implies that there exist $a,g\in {\mathbb F}_q[t]$ such that \[ \textup{ord}\,(g\alpha_k-a) < -kN+M \qquad \textup{and} \qquad \textup{ord}\, g \le M.\] By assuming $\epsilon < 1/(4(k+1))$, we see that the requirement $2M \leq N$ is satisfied when $0 <\eta \leq N/(8s)$ and $N$ is sufficiently large, in terms of ${\mathcal K}$, $\epsilon$ and $q$. In addition, for $N$ sufficiently large, we have \[ M \leq N(k+2)\epsilon + 2s \eta.\] Take $s=(\psi\phi + \psi)$. Since $\epsilon>0$ is arbitrary, by taking $c = 1/(8s)$ and $C=2s$, which are constants depending only on ${\mathcal K}$ and $q$, Theorem \ref {th:main1} follows. \end{proof} \section{Extending the Weyl-type estimate to other coefficients} \label{sec:weyl2} In this section, we will extend Theorem \ref{th:main1} to indices which are not maximal. \begin{thm} \label{th:main2} Let $f(u)=\sum_{r \in {\mathcal K}\cup\{0\}}\alpha_r u^{r}$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^+$ with coefficients in ${\mathbb K}_\infty$. Then for any $k \in {\mathcal K}^{*}$ (defined as in (\ref{eq:k-star})), there exist constants $c_k,C_k>0$, depending only on ${\mathcal K}$ and $q$, such that the following holds: suppose that for some $0<\eta \le c_kN$, we have \[\left|\sum_{x \in {\mathbb G_{N}}} e(f(x)) \right| \geq q^{N-\eta }.\] Then for any $\epsilon>0$ and $N$ sufficiently large in terms of ${\mathcal K}$, $\epsilon$ and $q$, there exist $a_k,g_k \in {\mathbb F_{q}}[t]$ such that \[\textup{ord}\, (g_k\alpha_k-a_k) <-kN+\epsilon N+C_k\eta \qquad \textrm{and} \qquad \textup{ord}\, g_k \leq \epsilon N+C_k\eta.\] \end{thm} \begin{proof} Without loss of generality, we can assume $\alpha_0 = 0$. We prove this theorem by downward induction on $k \in {\mathcal K}^*$ with respect to the partial order $\preceq_{p}$. If $k$ is maximal in ${\mathcal K}$, then the statement follows from Theorem \ref{th:main1}. 
Suppose that the theorem is established for any $h \in {\mathcal K}^*$ with $k \preceq_p h$ and $h \neq k$. Define \begin{equation}\label{k0k1} {\mathcal K}_0 = \{h \in {\mathcal K} : k \preceq_p h\,\, \textup{and}\,\, h \ne k\} \qquad \text{and} \qquad {\mathcal K}_1 = {\mathcal K} \setminus {\mathcal K}_0. \end{equation} By Lemma \ref{lem:shadow2}, ${\mathcal K}_0 \subset {\mathcal K}^*$. For $h \in {\mathcal K}_0$, let $c_h, C_h$ be the constants provided by Theorem \ref{th:main2} for the index $h$, which are available by the induction hypothesis. Let \[ c = \min\big\{c_h: h \in{\mathcal K}_0\big\}\qquad \textup{and} \qquad C=\sum_{h \in {\mathcal K}_0}C_h.\] Suppose that for some $0 \leq \eta \leq cN$, we have \begin{equation}\label{upperbound} \left|\sum_{x \in {\mathbb G_{N}}} e(f(x)) \right| \geq q^{N-\eta}. \end{equation} Let $\epsilon>0$ be arbitrary. By the induction hypothesis, for any $h \in {\mathcal K}_0$ and $N$ sufficiently large, in terms of ${\mathcal K}$, $\epsilon$ and $q$, there exist $a_h, g_h \in {\mathbb F_{q}}[t]$ ($h \in {\mathcal K}_0$) such that \[ \textup{ord}\,(g_h \alpha_h - a_h) < -h N + (\#{\mathcal K}_0)^{-1}\epsilon N+C_h \eta\qquad \text{and} \qquad \textup{ord}\, g_h \le (\#{\mathcal K}_0)^{-1}\epsilon N+ C_h \eta .\] Define \[ g = \prod_{h \in {\mathcal K}_0} g_h \qquad \textup{and} \qquad b_h = a_h \prod_{j \in {\mathcal K}_0\setminus\{h\} } g_j.\] Then we have \[ \textup{ord}\,(g\alpha_h - b_h) < -hN + \epsilon N+C\eta\qquad \text{and} \qquad \textup{ord}\, g \le \epsilon N+ C\eta .\] Let $M\in {\mathbb Z}^+$ with $M< (N-\textup{ord}\, g)$.
We can rewrite the set ${\mathbb G_{N}}$ as follows: \begin{equation*} \begin{split} {\mathbb G_{N}} &= \big\{gv + w: \textup{ord}\, v < (N-\textup{ord}\, g)\,\, \textup{and}\,\, \textup{ord}\, w < \textup{ord}\, g\big\}\\ &= \big\{g(t^Mz+y)+w: \textup{ord}\, z < (N-\textup{ord}\, g-M), \,\, \textup{ord}\, y < M \,\, \textup{and}\,\, \textup{ord}\, w < \textup{ord}\, g\big\}\\ &= \big\{gy + (gt^Mz+w): \textup{ord}\, z < (N-\textup{ord}\, g -M),\,\, \textup{ord}\, y <M \,\, \textup{and}\,\, \textup{ord}\, w < \textup{ord}\, g\big\}. \end{split} \end{equation*} Let $s=(gt^Mz+w)$ with $z \in {\mathbb G}_{N-\textup{ord}\, g -M}$ and $w \in {\mathbb G}_{\textup{ord}\, g}$. Then $\textup{ord}\, s < N$ and the set ${\mathbb G_{N}}$ can be partitioned into $q^{N-M}$ blocks of the form \[\mathcal{B}_s = \big\{gy + s: \textup{ord}\, y < M\big\}.\] Then (\ref{upperbound}) implies that there exists a block $\mathcal{B}_s$ such that \begin{equation}\label{th2-1} \left| \sum_{x \in \mathcal{B}_s} e(f(x)) \right| = \left| \sum_{y \in {\mathbb G_{M}}} e(f(gy + s)) \right| \geq q^{N-\eta}\big(q^{N-M}\big)^{-1} = q^{M-\eta }. \end{equation} We have \begin{align}\label{th2-2} & \quad \left|\sum_{y \in {\mathbb G_{M}}} e(f(gy + s)) \right| \nonumber \\ & = \left| \sum_{y \in {\mathbb G_{M}}} e \left( \sum_{h \in {\mathcal K}} \alpha_h (gy+s)^h \right) \right| \nonumber \\ & = \left| \sum_{y \in {\mathbb G_{M}}} e\left( \sum_{h \in {\mathcal K}_0} \alpha_h (gy+s)^h + \sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h \right) \right| \nonumber \\ & = \left| \sum_{y \in {\mathbb G_{M}}} e\left( \sum_{h \in {\mathcal K}_0} \left( \alpha_h - b_h/g\right) \left( (gy+s)^h - s^h \right)+ \sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h \right) \right|, \end{align} where the last equality holds since $e\left( \sum_{h \in {\mathcal K}_0}\alpha_h(-s^h)\right)$ is a constant of modulus one independent of $y$ and, since $g$ divides $(gy+s)^h - s^h$, we have $e\left( \sum_{h \in {\mathcal K}_0}-b_h/g \left( (gy+s)^h - s^h \right)\right)=1$.
For any $y \in {\mathbb G_{M}}$ and $h \in {\mathcal K}_0$, we have \begin{equation*} \begin{split} \textup{ord}\, \big((gy+s)^h - s^h \big) & \leq \textup{ord}\, (gy) + (h-1) \cdot \max\big\{\textup{ord}\, (gy), \textup{ord}\, s\big\} \\ & < \textup{ord}\, g+ M+ (h-1)N. \end{split} \end{equation*} It follows that \begin{equation*} \begin{split} & \quad \textup{ord}\, \left(\alpha_h - b_h/g\right)\big( (gy+s)^h - s^h\big)\\ &< (-hN + \epsilon N+C\eta - \textup{ord}\, g)+ (\textup{ord}\, g + M + (h-1)N)\\ & = \epsilon N + C \eta + M-N. \end{split} \end{equation*} We now make the specific choice \[ M=[(1-\epsilon )N - C\eta-1].\] Then it follows that \[ \epsilon N + C \eta +M-N\le -1,\] and hence \[ \textup{ord}\, \left(\alpha_h - b_h/g\right)\big( (gy+s)^h - s^h\big)< -1.\] Therefore, we have \begin{equation}\label{th2-3} e\left( \sum_{h \in {\mathcal K}_0} \left( \alpha_h - b_h/g\right) \left( (gy+s)^h - s^h \right) + \sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h \right) = e\left( \sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h \right). \end{equation} Combining (\ref{th2-1}), (\ref{th2-2}) and (\ref{th2-3}), we have \begin{equation*} \left|\sum_{y \in {\mathbb G_{M}}} e\left( \sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h \right)\right|\geq q^{M-\eta}. \end{equation*} We note here that since $\textup{ord}\, g \le (\epsilon N +C\eta)$, for $N$ sufficiently large, the above choice of $M$ satisfies $0<M<(N-\textup{ord}\, g)$. The polynomial $\sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h$ is supported on $\mathcal{S}({\mathcal K}_1)$. Since $k\in {\mathcal K}^*$ is maximal in ${\mathcal K}_1$, by Lemma \ref{lem:shadow}, $k$ is maximal in $\mathcal{S}({\mathcal K}_1)$ and $k \in \mathcal{S}({\mathcal K}_1)^*$. Furthermore, the coefficient of $y^k$ in $\sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h$ is $\alpha_kg^k$.
By Theorem \ref{th:main1}, there exist constants $d_k, D_k >0$ and $\widetilde{a}_k, \widetilde{g}_k \in {\mathbb F_{q}}[t]$, such that, for $0 < \eta \le d_kN$ and $N$ sufficiently large, \[\textup{ord}\, (\widetilde{g}_k\alpha_kg^k-\widetilde{a}_k) <-kM+\epsilon M+D_k\eta \qquad \textup{and} \qquad \textup{ord}\, \widetilde{g}_k \leq \epsilon M+D_k\eta .\] Let $g_k = \widetilde{g}_kg^k$ and $a_k = \widetilde{a}_k$. Since $((1-\epsilon )N - C\eta-2)< M\le N$, for $N$ sufficiently large, we have \begin{equation*} \begin{split} \textup{ord}\, (g_k\alpha_k - a_k) & < -k \big((1-\epsilon )N - C\eta-2\big) + \epsilon N + D_k \eta \\ &<- kN+ \epsilon (k+2)N+ \left(kC+D_k\right) \eta \end{split} \end{equation*} and \[ \textup{ord}\, g_k \le (\epsilon M+D_k\eta ) + k(\epsilon N+ C \eta) \le \epsilon (k+1)N + (kC+D_k) \eta.\] Since $\epsilon>0$ is arbitrary, by taking $c_k = \min\{c, d_k\}$ and $C_k =(kC+D_k)$, Theorem \ref{th:main2} follows. \end{proof} One can extend Theorem \ref{th:main2} to indices that are not in ${\mathcal K}^*$. Let ${\mathcal K}_0 = {\mathcal K}$, and for any $n \ge 1$, let \[ {\mathcal K}_{n} = {\mathcal K}_{n-1} \setminus {\mathcal K}_{n-1}^*.\] Define \begin{equation} \label{eq:ktilde} { \widetilde{\mathcal K}} = \bigcup_{n=0}^{\infty} {\mathcal K}_{n}^*. \end{equation} Then by induction on $n$, one can apply the method of the proof of Theorem \ref{th:main2} to obtain the following result. \begin{prop} \label{prop:varmain2} Let $f(u)=\sum_{r \in {\mathcal K}\cup\{0\}}\alpha_r u^{r}$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^+$ with coefficients in ${\mathbb K}_\infty$.
Then for any $k \in { \widetilde{\mathcal K}}$, there exist constants $c_k,C_k>0$, depending only on ${\mathcal K}$ and $q$, such that the following holds: suppose that for some $0<\eta \le c_kN$, we have \[\left|\sum_{x \in {\mathbb G_{N}}} e(f(x)) \right| \geq q^{N-\eta }.\] Then for any $\epsilon>0$ and $N$ sufficiently large, in terms of ${\mathcal K}$, $\epsilon$ and $q$, there exist $a_k,g_k \in {\mathbb F_{q}}[t]$ such that \begin{equation*} \textup{ord}\, (g_k\alpha_k-a_k) <-kN+\epsilon N+C_k\eta \qquad \textrm{and} \qquad \textup{ord}\, g_k \leq \epsilon N+C_k\eta. \end{equation*} \end{prop} It seems that there is no simple description of $\widetilde{{\mathcal K}}$. In many cases, $\widetilde{{\mathcal K}}$ is strictly larger than ${\mathcal K}^*$. For example, if $p>3$ and ${\mathcal K}=\{1,3, 3p+1\}$ (as in the first case of Example \ref{example-3}), then ${\mathcal K}^*=\{3p+1\}$, but $\widetilde{{\mathcal K}}={\mathcal K}$. More generally, if $(k,p)=1$ for any $k \in {\mathcal K}$, then it can be proved by induction that ${ \widetilde{\mathcal K}}={\mathcal K}$. On the other hand, if $p>3$ and ${\mathcal K} = \{3, 4p\}$ (as in the second case of Example \ref{example-3}), then ${\mathcal K}^* = \emptyset$, and hence ${ \widetilde{\mathcal K}} = \emptyset$. Therefore, we cannot go as far as proving Conjecture \ref{conj} by using this method. \section{Equidistribution of polynomial sequences} \label{sec:equidistribution} In this section, we prove Theorem \ref{th:main3}. Then we discuss a variant of the theorem. The following lemma is essential for our proof of Theorem \ref{th:main3}. \begin{lem} \label{pre-main3} Let $f(u)=\sum_{r \in {\mathcal K}\cup \{0\}}\alpha_r u^{r}$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^+$ with coefficients in ${\mathbb K}_\infty$. For $k \in {\mathcal K}^*$ (defined as in (\ref{eq:k-star})), suppose that $k$ is maximal in ${\mathcal K}$ and $\alpha_k$ is irrational.
Then for any fixed $\eta >0$, there exists $N_0\in {\mathbb Z}^+$, such that, for any $s \in {\mathbb F_{q}}[t]$, we have \[\left| \sum_{y \in {\mathbb G}_{N_0}} e(f(y+s)) \right| < q^{N_0 - \eta}.\] \end{lem} \begin{proof} To prove the lemma, we suppose the contrary. Then for any $N\in {\mathbb Z}^+$, there exists $s_N \in {\mathbb F_{q}}[t]$ such that \[\left| \sum_{y \in {\mathbb G}_{N}} e(f(y+s_N)) \right| \geq q^{N- \eta}.\] We note that for each $s\in {\mathbb F_{q}}[t]$, the polynomial $f(y+s)$ is supported on ${\mathcal S}({\mathcal K})$. Since $k\in {\mathcal K}^*$ is maximal in ${\mathcal K}$, by Lemma \ref{lem:shadow}, $k$ is maximal in ${\mathcal S}({\mathcal K})$ and $k \in {\mathcal S}({\mathcal K})^*$. Furthermore, the coefficient of $y^k$ in $f(y+s)$ is $\alpha_k$. Applying Theorem \ref{th:main1} with $\epsilon=1/3$, we obtain a constant $C>0$ such that, for any $N$ sufficiently large, in terms of ${\mathcal K}$ and $q$, there exist $a, g\in {\mathbb F_{q}}[t]$ with \[\textup{ord}\, (g\alpha_k -a) \leq -kN+N/3 + C \eta \qquad \textrm{and} \qquad \textup{ord}\, g < N/3 + C\eta .\] For $M\in {\mathbb Z}^+$, we apply the above inequalities with $N=[3(M- C \eta)]$. Then for $M$ sufficiently large, we have \[\textup{ord}\, (g\alpha_k -a) \leq (-k+1/3) 3M +(3kC\eta +k-1/3)\leq -3M/2 \qquad \textup{and} \qquad \textup{ord}\, g < M. \] By Lemma \ref{lem:diophantine}, the above inequalities imply that $\alpha_k$ is rational, which is a contradiction. This completes the proof of the lemma. \end{proof} We are now ready to prove Theorem \ref{th:main3}. \vspace{-0.3cm} \begin{proof}[Proof of Theorem \ref{th:main3}] Without loss of generality, we can assume $\alpha_0 = 0$. Let $k \in {\mathcal K}^*$ and suppose that $\alpha_k$ is irrational. We prove Theorem \ref{th:main3} by downward induction on $k$ with respect to the partial order $\preceq_{p}$. Suppose that $k$ is maximal in ${\mathcal K}$.
Let $\eta $ and $N_0$ be defined as in Lemma \ref{pre-main3}. For any $N\geq N_0$, we can partition the set ${\mathbb G_{N}}$ into $q^{N-N_0}$ blocks of the form \[ \mathcal{B}_s = \left\{y+s: \textup{ord}\, y < N_0\right\},\] where $s=t^{N_0}z$ for some $z \in {\mathbb G}_{N-N_0}$. Therefore, it follows from Lemma \ref{pre-main3} that \[\left| \sum_{x \in {\mathbb G}_{N}} e(f(x)) \right| < q^{N-N_0}q^{N_0-\eta}= q^{N- \eta}.\] Since $\eta>0$ is arbitrary, it follows that \[\lim_{N \rightarrow \infty} \frac{1}{q^N} \left|\sum_{x \in {\mathbb G}_{N}} e(f(x))\right| =0,\] which establishes Theorem \ref{th:main3} in the special case when $k$ is maximal in ${\mathcal K}$. Suppose that the theorem is established for any $h \in {\mathcal K}^*$ with $k \preceq_p h$ and $h \neq k$. Let ${\mathcal K}_0$ and ${\mathcal K}_1$ be defined as in (\ref{k0k1}). We note that if there exists $h \in {\mathcal K}_0$ such that $\alpha_h$ is irrational, then Theorem \ref{th:main3} follows from the induction hypothesis. Therefore, it suffices to consider the case that all the $\alpha_h$ $(h \in {\mathcal K}_0)$ are rational. Let $g$ be a common denominator of the $\alpha_h$ $(h \in {\mathcal K}_0)$ and let $s \in {\mathbb F_{q}}[t]$ be arbitrary. For any $M \in {\mathbb Z}^+$, we have \begin{align*} \left|\sum_{y \in {\mathbb G_{M}}} e(f(gy + s)) \right| & = \left| \sum_{y \in {\mathbb G_{M}}} e \left( \sum_{h \in {\mathcal K}} \alpha_h (gy+s)^h \right) \right| \\ & = \left| \sum_{y \in {\mathbb G_{M}}} e\left( \sum_{h \in {\mathcal K}_0} \alpha_h (gy+s)^h + \sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h \right) \right| \\ & = \left| \sum_{y\in {\mathbb G_{M}}} e\left( \sum_{h \in {\mathcal K}_0} \alpha_h \left( (gy+s)^h -s^h \right)+ \sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h \right) \right|, \end{align*} where the last equality follows since $e\big(\sum_{h \in {\mathcal K}_0} \alpha_h(-s^h)\big)$ is a constant of modulus one independent of $y$.
By the definition of $g$, we have \[ e\left( \sum_{h \in {\mathcal K}_0} \alpha_h \left( (gy+s)^h -s^h \right)\right) = 1.\] It follows that \begin{equation}\label{egy} \begin{split} \left|\sum_{y \in {\mathbb G_{M}}} e(f(gy + s)) \right| &= \left| \sum_{y \in {\mathbb G_{M}}} e \left( \sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h \right) \right|. \end{split} \end{equation} For $N \in {\mathbb Z}^+$ with $N>\textup{ord}\, g$, we write $N=M+\textup{ord}\, g$ for some $M \in {\mathbb Z}^+$. Then we can partition the set ${\mathbb G}_N$ into $q^{N-M}$ blocks of the form \[ \mathcal{B}_s = \left\{gy+s: \textup{ord}\, y< M\right\},\] where $s \in {\mathbb G}_{\textup{ord}\, g}$. It follows from (\ref{egy}) that \begin{equation}\label{main3-equal} \begin{split} \left|\sum_{x \in {\mathbb G_{N}}}e(f(x)) \right| & \le q^{N-M}\max_{s \in {\mathbb G}_{\textup{ord}\, g}} \left|\sum_{y \in {\mathbb G_{M}}} e(f(gy + s)) \right|\\ &= q^{N-M}\max_{s \in {\mathbb G}_{\textup{ord}\, g}} \left| \sum_{y \in {\mathbb G_{M}}} e \left( \sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h \right) \right|. \end{split} \end{equation} The polynomial $\sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h$ is supported on ${\mathcal S}({\mathcal K}_1)$. Since $k\in {\mathcal K}^*$ is maximal in ${\mathcal K}_1$, by Lemma \ref{lem:shadow}, $k$ is maximal in ${\mathcal S}({\mathcal K}_1)$ and $k \in {\mathcal S}({\mathcal K}_1)^*$. Furthermore, the coefficient of $y^k$ in $\sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h$ is $\alpha_k g^k$, which is irrational since $\alpha_k$ is irrational.
By the first part of the proof, we have \[ \lim_{M \to \infty} \frac{1}{q^M}\left|\sum_{y \in {\mathbb G_{M}}} e \left( \sum_{h \in {\mathcal K}_1} \alpha_h (gy+s)^h \right)\right| =0.\] Then it follows from (\ref{main3-equal}) that \[\lim_{N \rightarrow \infty} \frac{1}{q^N} \left|\sum_{x \in {\mathbb G}_{N}} e(f(x))\right| =0.\] By Theorem \ref{th:weyl}, it follows that the sequence $(f(x))_{x \in {\mathbb F_{q}}[t]}$ is equidistributed in ${\mathbb T}$. \end{proof} By an observation similar to the one following the proof of Theorem \ref{th:main2}, one can apply the method of the proof of Theorem \ref{th:main3} to obtain the following result. \begin{prop}\label{prop:varmain3} Let $f(u)=\sum_{r \in {\mathcal K}\cup \{0\}}\alpha_r u^{r}$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^+$ with coefficients in ${\mathbb K}_\infty$. Suppose that $\alpha_k$ is irrational for some $k \in{ \widetilde{\mathcal K}}$ (defined as in (\ref{eq:ktilde})). Then the sequence $(f(x))_{x \in {\mathbb F_{q}}[t]}$ is equidistributed in ${\mathbb T}$. \end{prop} Of notable significance is the case when $(k,p)=1$ for all $k \in {\mathcal K}$, in which case we have ${ \widetilde{\mathcal K}}={\mathcal K}$. We will now show that the above proposition implies Conjecture \ref{conj} in the special case $q=p$. For the rest of this section, we assume that $q=p$. Let $T: {\mathbb K}_\infty \rightarrow {\mathbb T}$ be defined as in (\ref{eq:t}). Using the fact that $a^p = a$ for any $a \in {\mathbb F}_p$, one can show that for any $x \in {\mathbb F_{p}}[t]$, \[e \left( \alpha x^p \right) = e \left( T(\alpha) x \right).\] Therefore, for any $x \in {\mathbb F_{p}}[t]$ and $v \in {\mathbb Z}^+\cup\{0\}$, we have \begin{equation} \label{eq:t2} e \left( \alpha x^{p^v} \right) = e \left( T^v(\alpha) x \right), \end{equation} where $T^v$ is the $v$-fold composition of $T$.
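As a simple illustration of (\ref{eq:t2}), consider the (hypothetical) polynomial $f(u)=\alpha u^{p}+\beta u$ with $\alpha,\beta \in {\mathbb K}_\infty$, supported on ${\mathcal K}=\{1,p\}$. Since $e$ is additive, for any $x \in {\mathbb F_{p}}[t]$ we have \[ e\left( \alpha x^{p}+\beta x \right) = e\left( T(\alpha) x\right)e\left( \beta x \right) = e\left( \left( T(\alpha)+\beta \right) x \right),\] so the exponential sum attached to $f$ agrees with that of a linear polynomial with coefficient $T(\alpha)+\beta$. Note that for $m \in {\mathbb F_{p}}[t]\setminus\{0\}$ the corresponding coefficient for $mf$ is $T(m\alpha)+m\beta$, which need not equal $m\left(T(\alpha)+\beta\right)$, since $T$ does not commute with multiplication by $m$.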
Let $f(u)=\sum_{r \in {\mathcal K}\cup \{0\}}\alpha_r u^r\in {\mathbb K}_\infty[u]$, and let \begin{equation}\label{cali} {\mathcal I} = \{ k \in {\mathbb Z}^+ : (k,p)=1\,\, \textup{and}\,\, p^v k \in {\mathcal K}\,\, \textup{for some}\,\,v \in {\mathbb Z}^+\cup\{0\}\}. \end{equation} For each $k \in {\mathcal I}$, define \begin{equation} \label{eq:s} S_k(f)= \sum_{\substack{v \geq 0\\p^v k \in {\mathcal K}}} T^v (\alpha_{p^v k}). \end{equation} Then it follows from (\ref{eq:t2}) that for any $x \in {\mathbb F_{p}}[t]$, \begin{equation}\label{esf} e \left( f(x) \right) = e \left( \sum_{k \in {\mathcal I}} S_k(f) x^k+\alpha_0\right). \end{equation} Since $(k,p)=1$ for any $k \in {\mathcal I}$, we have $\widetilde{{\mathcal I}} = {\mathcal I}$. By Proposition \ref{prop:varmain3}, if there exists $k \in {\mathcal I}$ such that $S_k(f)$ is irrational, then \begin{equation} \label{eq:limit} \lim_{N \rightarrow \infty} \frac{1}{q^N} \left|\sum_{x \in {\mathbb G_{N}}} e(f(x))\right| = \lim_{N \rightarrow \infty} \frac{1}{q^N} \left|\sum_{x \in {\mathbb G_{N}}} e \left( \sum_{k \in {\mathcal I}} S_k(f) x^k +\alpha_0\right)\right| =0. \end{equation} We note that for any $m \in {\mathbb F}_p[t]\setminus\{0\}$, the above equalities hold with $f$ replaced by $mf$, where $mf$ is the polynomial $mf(u) = \sum_{r \in {\mathcal K}\cup \{0\}} m\alpha_r u^r$. Therefore, by Theorem \ref{th:weyl}, we obtain the following corollary. \begin{cor} \label{cor:q=p} Let $f(u)=\sum_{r \in {\mathcal K}\cup\{0\}} \alpha_r u^{r}$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^+$ with coefficients in ${\mathbb K}_\infty$. Suppose that for some $k \in {\mathcal I}$, we have \begin{equation} \label{eq:irrational} S_k(mf) \textup{ is irrational for any } m \in {\mathbb F_{p}}[t] \setminus \{0\}. \end{equation} Then the sequence $(f(x))_{x \in {\mathbb F_{p}}[t]}$ is equidistributed in ${\mathbb T}$.
\end{cor} We remark that since the map $T$ does not commute with multiplication by $m$, the condition (\ref{eq:irrational}) may not admit a simpler description. It may also not be necessary for the equidistribution of $(f(x))_{x \in {\mathbb F_{p}}[t]}$. Regardless, suppose that $k \in {\mathcal K}$ and $p^v k \not \in {\mathcal K}$ for any $v\in {\mathbb Z}^+$. Then $S_k(f)=\alpha_k$ and $S_k(mf)=m\alpha_k$ for any $m \in {\mathbb F_{p}}[t]\setminus\{0\}$. Therefore, if $\alpha_k$ is irrational, then the condition (\ref{eq:irrational}) is satisfied. This simple observation establishes Conjecture \ref{conj} in the special case $q=p$. More precisely, we have \begin{cor} Let $f(u)=\sum_{r \in {\mathcal K}\cup\{0\}} \alpha_r u^{r}$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^+$ with coefficients in ${\mathbb K}_\infty$. Suppose that $\alpha_k$ is irrational for some $k \in {\mathcal K}^*$ (defined as in (\ref{eq:k-star})). Then the sequence $(f(x))_{x \in {\mathbb F_{p}}[t]}$ is equidistributed in ${\mathbb T}$. \end{cor} \section{Van der Corput and intersective sets in ${\mathbb F_{q}}[t]$} \label{sec:vdc} \subsection{Background and statement of results.} For a set $\mathcal{A} \subset {\mathbb Z}^+$, we define its {\it upper density} \[\overline{d}(\mathcal{A}) = \limsup_{N \rightarrow \infty} \frac{\#\big(\mathcal{A} \cap \{1, \ldots, N\}\big)}{N}.\] We say $\mathcal{A}$ is \textit{dense} if $\overline{d}(\mathcal{A})>0$. A set $\mathcal{H} \subset {\mathbb Z}^+$ is called \textit{intersective} if for any dense subset $\mathcal{A}\subset {\mathbb Z}^+$, there exist $a, a' \in \mathcal{A}$ such that $a-a' \in \mathcal{H}$. In other words, we have $\mathcal{H}\cap (\mathcal{A}-\mathcal{A}) \ne \emptyset$. In the late 1970s, S\'{a}rk\"{o}zy \cite{sarkozy1} and Furstenberg \cite{f2} proved independently that the set $\{n^2 : n \in {\mathbb Z}^+\}$ is intersective. Their proofs use the circle method and ergodic theory, respectively.
S\'{a}rk\"{o}zy went on and proved that the sets $\{n^2 -1 : n \in {\mathbb Z}^+\setminus\{1\}\}$ and $\{p-1: p\in {\mathbb Z}\,\, \textup{is prime} \}$ are also intersective \cite{sarkozy3}. We refer the reader to a survey paper of the first author \cite{le} for results and open problems regarding intersective sets. In a seemingly unrelated context, motivated by van der Corput's difference theorem, Kamae and Mend\`{e}s France \cite{km} made the following definition. A set $\mathcal{H} \subset {\mathbb Z}^+$ is said to be \textit{van der Corput} if the sequence $(a_n)_{n=1}^\infty$ is equidistributed $(\mod 1)$ whenever the sequence $(a_{n+h} - a_n )_{n=1}^\infty$ is equidistributed $(\mod 1)$ for each $h \in \mathcal{H}$. Therefore, van der Corput's difference theorem says that ${\mathbb Z}^+$ is van der Corput, but there are sparser sets which are van der Corput. In \cite{km}, Kamae and Mend\`{e}s France proved that any van der Corput set is intersective. Their result gives another approach to intersective sets. The converse of their theorem is not true. In \cite{bourgain}, Bourgain constructed a set that is intersective but not van der Corput. Let $\Phi(u) \in {\mathbb Z}[u]$ and consider the set $\{\Phi(n): n \in {\mathbb Z}\}\cap {\mathbb Z}^+$. We note that for any $g \in {\mathbb Z}^+$, the set of all multiples of $g$ is dense. Therefore, if the set $\{\Phi(n): n \in {\mathbb Z}\}\cap {\mathbb Z}^+$ is van der Corput (hence intersective), then $g$ divides $\Phi(n)$ for some $n \in {\mathbb Z}$. The following result of Kamae and Mend\`{e}s France \cite{km} shows that the divisibility condition is not only necessary, but also sufficient. \begin{prop} \label{prop:intersective} For $\Phi(u)\in {\mathbb Z}[u]\setminus\{0\}$, suppose that $\Phi$ has a root $(\mod \,g)$ for any $g \in {\mathbb Z}^+$. Then the set $\left\{\Phi(n): n \in {\mathbb Z}\right\} \cap {\mathbb Z}^+$ is van der Corput (hence intersective). 
\end{prop} Given the similarity of ${\mathbb Z}$ and ${\mathbb F_{q}}[t]$, it is natural to study analogous notions in ${\mathbb F_{q}}[t]$. For a set $\mathcal{A} \subset {\mathbb F_{q}}[t]$, we define its {\it upper density} \[\overline{d}(\mathcal{A})=\limsup_{N \rightarrow \infty}\frac{\#\big(\mathcal{A} \cap {\mathbb G_{N}}\big)}{q^N}.\] We say a set $\mathcal{A}$ is {\it dense} if $\overline{d}(\mathcal{A})>0$. A set $\mathcal{H} \subset {\mathbb F_{q}}[t] \setminus \{0\}$ is called \textit{intersective} if for any dense subset $\mathcal{A} \subset {\mathbb F}_q[t]$, we have $\mathcal{H} \cap (\mathcal{A}-\mathcal{A})\neq \emptyset$. A set $\mathcal{H} \subset {\mathbb F_{q}}[t] \setminus \{0\}$ is said to be \textit{van der Corput} if the sequence $(a_{x})_{x \in {\mathbb F_{q}}[t]}$ is equidistributed in ${\mathbb T}$ whenever the sequence $(a_{x+h}-a_{x})_{x \in {\mathbb F_{q}}[t]}$ is equidistributed in ${\mathbb T}$ for each $h \in \mathcal{H}$. Many characterizations of intersective and van der Corput sets in ${\mathbb Z}$ carry over to ${\mathbb F_{q}}[t]$, and we refer the reader to the Ph.D. thesis of the first author \cite[Chapter 2]{lethesis} for an exposition. In particular, in \cite[Theorem 2.3.5]{lethesis}, it was proved that any van der Corput set in ${\mathbb F_{q}}[t]$ is intersective. It remains an open problem to construct a set in ${\mathbb F_{q}}[t]$ that is intersective but not van der Corput (Bourgain's construction in ${\mathbb Z}$ is very specific to the real numbers). We now consider explicit examples of intersective and van der Corput sets in ${\mathbb F_{q}}[t]$ that are of arithmetic interest, similar to the results of S\'{a}rk\"{o}zy and Furstenberg. In our work \cite{ll}, we obtained intersectivity, in a quantitative sense, for the set $\left\{ x^2: x \in {\mathbb F_{q}}[t]\right\}\setminus\{0\}$.
In a joint work of the first author with Spencer \cite{ls}, intersectivity is also established, in a quantitative sense, for the set $\left\{l+r: l \in {\mathbb F_{q}}[t]\,\,\textup{is monic and irreducible}\right\}$ for any fixed $r\in {\mathbb F}_q\setminus\{0\}$. Motivated by Proposition \ref{prop:intersective}, one is led to the following conjecture. \begin{conj} \label{conj:intersective} For $\Phi(u)\in {\mathbb F_{q}}[t][u]\setminus\{0\}$, suppose that $\Phi$ has a root $(\mod \,g)$ for any $g \in {\mathbb F_{q}}[t]\setminus\{0\}$. Then the set $\left\{\Phi(x): x \in {\mathbb F_{q}}[t]\right\} \setminus\{0\}$ is van der Corput (hence intersective). \end{conj} Again, the divisibility condition is easily seen to be necessary. Quite surprisingly, the conjecture remains an open problem when the degree of $\Phi$ is at least $p$. When $\Phi(0)=0$, it follows from the polynomial Szemer\'edi theorem for modules over countable integral domains, proved by Bergelson, Leibman and McCutcheon \cite{blm}, that the set $\left\{ \Phi(x): x \in {\mathbb F_{q}}[t]\right\} \setminus \{0\}$ is intersective. Given our equidistribution theorem, in this section, we make some progress towards Conjecture \ref{conj:intersective}. We will prove the following theorem, which is slightly stronger than Theorem \ref{th:sakozy}. \begin{thm} \label{th:vdc1} Let $\Phi(u) = \sum_{r \in {\mathcal K}\cup \{0\}}a_r u^r$ be a polynomial supported on a set ${\mathcal K} \subset {\mathbb Z}^+$ with coefficients in ${\mathbb F}_q[t]$. Suppose that $\Phi$ has a root $(\mod\, g)$ for any $g \in {\mathbb F_{q}}[t] \setminus \{ 0\}$. Suppose further that $a_k \neq 0$ for some $k \in {\mathcal K}^*$ (defined as in (\ref{eq:k-star})). Then the set $\{ \Phi(x): x \in {\mathbb F_{q}}[t] \} \setminus \{0\}$ is van der Corput (hence intersective).
\end{thm} \noindent{\bf Remark.} \vspace{-.3cm} \begin{itemize} \item As a direct consequence of Theorem \ref{th:vdc1}, we see that Conjecture \ref{conj:intersective} is true whenever the degree of $\Phi$ is coprime to $p$. \item In view of Proposition \ref{prop:varmain3}, the condition $a_k \neq 0$ for some $k \in {\mathcal K}^*$ can be relaxed to $a_k \neq 0$ for some $k \in { \widetilde{\mathcal K}}$, where ${ \widetilde{\mathcal K}}$ is defined as in (\ref{eq:ktilde}). \end{itemize} By assuming the stronger conditions that $q=p$ and $\Phi(0)=0$, we will prove the following result. \begin{thm} \label{th:vdc2} Let $\Phi(u) \in {\mathbb F_{p}}[t][u]\setminus\{0\}$ with $\Phi(0)=0$. Then the set $\{ \Phi(x): x \in {\mathbb F_{p}}[t] \} \setminus \{0\}$ is van der Corput (hence intersective). \end{thm} We remark here that the minor arc estimate in Theorem \ref{th:main2} can be used to prove intersectivity of the set $\{ \Phi(x): x \in {\mathbb F_{q}}[t] \} \setminus \{0\}$ in Theorem \ref{th:vdc1} in a quantitative sense, similar to \cite [Theorem 3]{ll}. However, we opt to use Theorem \ref{th:main3} since the deduction is quicker, and the van der Corput property is a stronger notion than intersectivity. \subsection{Proofs of Theorem \ref{th:vdc1} and Theorem \ref{th:vdc2}} Among the many characterizations of van der Corput sets in ${\mathbb F_{q}}[t]$, we will be using the following \cite[Theorem 2.4.5 (2)]{lethesis}. Let $\mu$ be a finite measure on ${\mathbb T}$. We say $\mu $ is {\it continuous} at 0 if $\mu(\{0\}) = 0$. For any $h \in {\mathbb F}_q[t]$, the {\it Fourier transform}, $\widehat{\mu }$, of $\mu$ is defined by \[ \widehat{\mu}(h) = \int_{{\mathbb T}} e(-h\alpha) d\mu(\alpha).\] We say $\widehat{\mu}$ {\it vanishes} on a set $\mathcal{H}\subset {\mathbb F}_q[t]$ if $\widehat{\mu}(h) = 0$ for all $h \in \mathcal{H}$. 
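For instance, for the Dirac measure $\mu = \delta_0$ concentrated at $0 \in {\mathbb T}$, we have \[ \widehat{\delta_0}(h) = \int_{{\mathbb T}} e(-h\alpha)\, d\delta_0(\alpha) = e(0) = 1 \qquad \textup{for all}\,\, h \in {\mathbb F}_q[t],\] so $\widehat{\delta_0}$ vanishes on no nonempty set, while $\delta_0(\{0\})=1$, so $\delta_0$ is not continuous at $0$; this is consistent with the following criterion.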
\begin{thm} \textup{(Kamae \& Mend\`{e}s France, Ruzsa)} A set $\mathcal{H} \subset {\mathbb F_{q}}[t] \setminus \{ 0 \}$ is van der Corput if and only if any finite measure $\mu$ on ${\mathbb T}$, with $\widehat{\mu}$ vanishing on $\mathcal{H}$, is continuous at 0. \end{thm} \begin{proof} [Proof of Theorem \ref{th:vdc1}] Suppose that $\Phi(u) = \sum_{r \in {\mathcal K}\cup \{0\}}a_r u^r\in {\mathbb F_{q}}[t][u]$ has a root (mod $g$) for any $g\in {\mathbb F}_q[t]\setminus\{0\}$. Suppose further that $a_k \neq 0$ for some $k \in {\mathcal K}^{*}$. Let \[ \mathcal{H}=\{ \Phi(x): x \in {\mathbb F_{q}}[t] \} \setminus \{0\}.\] Let $\alpha \in {\mathbb T}$ be irrational and $g, s \in {\mathbb F_{q}}[t]$ with $g \neq 0$. By (\ref{eq:orthogonal2}), we have \begin{eqnarray*} \frac{1}{q^N} \sum_{\substack{x \in {\mathbb G_{N}}\\ x \equiv s\, (\mod g)}} e \left( \alpha \Phi (x) \right) &=& \frac{1}{q^N} \sum_{x \in {\mathbb G_{N}}} e \left( \alpha \Phi (x) \right) \frac{1}{|g|} \sum_{y \in {\mathbb G}_{\textup{ord}\, g}} e \left( \frac{y(x-s)}{g} \right) \nonumber \\ &=& \frac{1}{|g|} \sum_{y \in {\mathbb G}_{\textup{ord}\, g}} \frac{1}{q^N} \sum_{x \in {\mathbb G_{N}}} e \left( \alpha \Phi (x) + \frac{y(x-s)}{g}\right). \end{eqnarray*} We observe that the coefficient of $x^k$ in $(\alpha \Phi (x) + y(x-s)/g)$ is $\alpha a_k$ or $(\alpha a_k + y/g)$, depending on whether $k\neq 1$ or $k =1$, which in any case is irrational. Therefore, by Theorem \ref{th:main3}, for any $y \in {\mathbb G}_{\textup{ord}\, g}$, we have \[\lim_{N \rightarrow \infty} \frac{1}{q^N} \left|\sum_{x \in {\mathbb G_{N}}} e \left( \alpha \Phi (x) + \frac{y(x-s)}{g}\right)\right| =0.
\] Combining the above two equations, it follows that for any irrational $\alpha \in {\mathbb T}$ and $g, s \in {\mathbb F}_q[t]$ with $g\neq 0$, \begin{equation}\label{eq:detector2} \lim_{N \rightarrow \infty} \frac{1}{q^N}\left| \sum_{x \in {\mathbb G_{N}}} e \left( \alpha \Phi (gx+s) \right)\right| = \lim_{N \rightarrow \infty} \frac{1}{q^N} \left|\sum_{\substack{x \in {\mathbb G_{N}}\\ x \equiv s \,(\mod g)}} e \left( \alpha \Phi (x) \right)\right| = 0. \end{equation} For any $M \in {\mathbb Z}^+$, let $g_M$ be the product of all monic polynomials in ${\mathbb G}_M$. Let $s_M \in {\mathbb F_{q}}[t]$ be a root of $\Phi $ (mod $g_M$). For $\alpha \in {\mathbb T}$, let \[ T_{M,N}(\alpha) = \frac{1}{q^N} \sum_{x \in {\mathbb G_{N}}} e( \alpha \Phi\left(g_M x + s_M \right) ).\] We now analyze $T_{M,N}(\alpha)$, depending on the rationality of $\alpha$. \\ {\bf Case 1.} Suppose that $\alpha\in {\mathbb T}$ is irrational. By (\ref{eq:detector2}), for any $M\in {\mathbb Z}^+$ and any irrational $\alpha \in {\mathbb T}$, we have \[ \lim_{N \rightarrow \infty} T_{M,N}(\alpha) = 0.\] {\bf Case 2.} Suppose that $\alpha \in {\mathbb T}$ is rational. Since $|T_{M,N}(\alpha)|\le 1$ and the set $\{(\alpha , M): \alpha \in {\mathbb T}\,\, \textup{is rational and}\,\, M \in {\mathbb Z}^+\}$ is countable, by a diagonalization process, we can extract a subsequence $N_i \subset {\mathbb Z}^+$ such that the limit $\lim_{i \rightarrow \infty} T_{M,N_i}(\alpha)$ exists, for any $M \in {\mathbb Z}^+$ and any rational $\alpha \in {\mathbb T}$. Since $s_M$ is a root of $\Phi$ (mod $g_M$), $\Phi(g_Mx+s_M)$ is divisible by $g_M$. Therefore, for $M$ sufficiently large such that $g_M$ absorbs the denominator of $\alpha$, we have $T_{M,N}(\alpha) = 1$. 
\\ Combining the above two cases, it follows that \begin{equation*} \lim_{M \rightarrow \infty} \lim_{i \rightarrow \infty} T_{M,N_i}(\alpha) = \left\{ \begin{array}{ll} 0, & \hbox{if $\alpha$ is irrational,} \\ 1, & \hbox{if $\alpha$ is rational.} \end{array} \right. \end{equation*} Let $\mu$ be a finite measure on ${\mathbb T}$. By applying the dominated convergence theorem twice, we have \begin{equation*} \begin{split} \lim_{M \rightarrow \infty} \lim_{i \rightarrow \infty} \int_{{\mathbb T}} T_{M,N_i}(\alpha ) d\mu (\alpha) &= \int_{{\mathbb T}}\lim_{M \rightarrow \infty} \lim_{i \rightarrow \infty} T_{M,N_i}(\alpha ) d\mu (\alpha) = \sum_{\alpha \in {\mathbb T}, \, \alpha \,\textup{rational}}\mu (\{\alpha\}) \geq \mu(\{ 0\}). \end{split} \end{equation*} Suppose that $\widehat{\mu }$ vanishes on $\mathcal{H}$. We note that by the definition of $T_{M,N}$, $\widehat{T_{M,N}} (h) \neq 0$ only if $h \in \mathcal{H} \cup\{0\}$. Therefore, \[ \left| \int_{{\mathbb T}} T_{M,N}(\alpha) d\mu (\alpha)\right| = \left| \sum_{x \in {\mathbb G_{N}}} \widehat{T_{M,N}}(x) \widehat{\mu} (x) \right| = \left|\widehat{T_{M,N}}(0) \widehat{\mu} (0) \right| = \left| \widehat{T_{M,N}}(0) \right| \mu({\mathbb T}) \leq \frac{\mu({\mathbb T})}{q^N}.\] Combining the above two inequalities, it follows that $\mu(\{0\})=0$ for any finite measure $\mu $ on ${\mathbb T}$ with $\widehat{\mu}$ vanishing on $\mathcal{H}$. Therefore, $\mathcal{H}$ is van der Corput. \end{proof} \begin{proof} [Proof of Theorem \ref{th:vdc2}] Suppose that $q=p$ and $\Phi(u) = \sum_{r \in {\mathcal K}} a_r u^r\in{\mathbb F}_p[t][u]$. Let \[ \mathcal{H}=\{ \Phi(x): x \in {\mathbb F_{p}}[t] \} \setminus \{0\}.\] Let ${\mathcal I}$ and $S_k(\Phi)$ ($k \in {\mathcal I}$) be defined as in (\ref{cali}) and (\ref{eq:s}), respectively. We have seen in (\ref{esf}) that \[e(\alpha \Phi(x)) = e \left( \sum_{k \in {\mathcal I}} S_k(\alpha \Phi) x^k\right).
\] For any $M\in {\mathbb Z}^+$, let $g_M$ be the product of all monic polynomials in ${\mathbb G}_M$. For $\alpha \in {\mathbb T}$, let \[ T_{M,N}(\alpha) = \frac{1}{q^N} \sum_{x \in {\mathbb G_{N}}} e(\alpha \Phi (g_M x)) = \frac{1}{q^N} \sum_{x \in {\mathbb G_{N}}} e \left( \sum_{k \in {\mathcal I}} S_k(\alpha \Phi) (g_M x)^k \right).\] Let \[{\mathcal Q} = \{ \alpha \in {\mathbb T}: S_k(\alpha \Phi) \textrm{ is irrational for some } k \in {\mathcal I} \}.\] From (\ref{eq:limit}), for any $\alpha \in {\mathcal Q}$, we have \[ \lim_{N \rightarrow \infty} T_{M,N} (\alpha) = 0.\] On the other hand, if $\alpha \not \in {\mathcal Q}$, then $S_k(\alpha \Phi)$ is rational for any $k \in {\mathcal I}$. Since the rationals are countable, the set of all polynomials of the form $\sum_{k \in {\mathcal I}} S_k(\alpha \Phi) y^k$ ($\alpha \not \in {\mathcal Q}$) is countable (${\mathbb T} \setminus {\mathcal Q}$ need not be countable). Since $|T_{M,N}(\alpha)|\le1$, by a diagonalization process, we can extract a subsequence $N_i \subset {\mathbb Z}^+$ such that the limit $\lim_{i \rightarrow \infty} T_{M,N_i} (\alpha)$ exists for any $M \in {\mathbb Z}^+$ and any $\alpha \not \in {\mathcal Q}$. Then similarly to Case 2 of the proof of Theorem \ref{th:vdc1}, for $M$ sufficiently large, we have $T_{M,N}(\alpha) = 1$ for any $\alpha \not \in {\mathcal Q}$. It follows that \[ \lim_{M \rightarrow \infty} \lim_{i \rightarrow \infty} T_{M,N_i}(\alpha) = \left\{ \begin{array}{ll} 0, & \hbox{if $\alpha \in {\mathcal Q}$ ,} \\ 1, & \hbox{if $\alpha \not \in {\mathcal Q}$.} \end{array} \right.\] By arguing as in the proof of Theorem \ref{th:vdc1}, we see that $\mu \left( \{0 \} \right)=0$ for any finite measure $\mu$ on ${\mathbb T}$ with $\widehat{\mu }$ vanishing on $\mathcal{H}$. Therefore, $\mathcal{H}$ is van der Corput. \end{proof}
https://arxiv.org/abs/1207.3750
Use of MAX-CUT for Ramsey Arrowing of Triangles
In 1967, Erdős and Hajnal asked the question: Does there exist a $K_4$-free graph that is not the union of two triangle-free graphs? Finding such a graph involves solving a special case of the classical Ramsey arrowing operation. Folkman proved the existence of these graphs in 1970, and they are now called Folkman graphs. Erdős offered \$100 for deciding if one exists with less than $10^{10}$ vertices. This problem remained open until 1988 when Spencer, in a seminal paper using probabilistic techniques, proved the existence of a Folkman graph of order $3\times 10^9$ (after an erratum), without explicitly constructing it. In 2008, Dudek and Rödl developed a strategy to construct new Folkman graphs by approximating the maximum cut of a related graph, and used it to improve the upper bound to 941. We improve this bound first to 860 using their approximation technique and then further to 786 with the MAX-CUT semidefinite programming relaxation as used in the Goemans-Williamson algorithm.
\section{Introduction} Given a simple graph $G$, we write $G \rightarrow (a_1,\dots,a_k)^e$ and say that $G$ \emph{arrows} $(a_1,\dots,a_k)^e$ if for every edge $k$-coloring of $G$, a monochromatic $K_{a_i}$ is forced for some color $i \in \{1,\dots,k\}$. Likewise, for graphs $F$ and $H$, $G\rightarrow (F,H)^e$ if for every edge 2-coloring of $G$, a monochromatic $F$ is forced in the first color or a monochromatic $H$ is forced in the second. Define $\mathcal{F}_e(a_1,\dots,a_k;p)$ to be the set of all graphs that arrow $(a_1,\dots,a_k)^e$ and do not contain $K_p$; they are often called Folkman graphs. The edge Folkman number $F_e(a_1,\dots,a_k;p)$ is the smallest order of a graph that is a member of $\mathcal{F}_e(a_1,\dots,a_k;p)$. In 1970, Folkman \cite{Folkman1970} showed that for $k > \max{\{s,t\}}$, $F_e(s,t;k)$ exists. The related problem of vertex Folkman numbers, where vertices are colored instead of edges, is more studied \cite{Luczak2001,Nenov2003} than edge Folkman numbers, but we will not be discussing them. Therefore, we will skip the use of the superscript $e$ when discussing arrowing, as it is usually used to distinguish between edge and vertex colorings. In 1967, Erd\H{o}s and Hajnal \cite{Erdos1967} asked the question: Does there exist a $K_4$-free graph that is not the union of two triangle-free graphs? This question is equivalent to asking for the existence of a $K_4$-free graph such that in any edge 2-coloring, a monochromatic triangle is forced. After Folkman proved the existence of such a graph, the question then became to find how small this graph could be, or using the above notation, what is the value of $F_e(3,3;4)$. Prior to this paper, the best known bounds for this case were $19 \leq F_e(3,3;4) \leq 941$ \cite{Radziszowski2007, Dudek2008}. 
Folkman numbers are related to Ramsey numbers $R(s,t)$, which are defined as the least positive $n$ such that any 2-coloring of the edges of $K_n$ yields a monochromatic $K_s$ in the first color or a monochromatic $K_t$ in the second color. Using the arrowing operator, it is clear that $R(s,t)$ is the smallest $n$ such that $K_n \rightarrow (s,t)$. The known values and bounds for various types of Ramsey numbers are collected and regularly updated by the second author \cite{Radziszowski2011}. We will be using standard graph theory notation: $V(G)$ and $E(G)$ for the vertex and edge sets of graph $G$, respectively. A \emph{cut} is a partition of the vertices of a graph into two sets, $S \subset V(G)$ and $\overline{S}=V(G)\setminus S$. The \emph{size} of a cut is the number of edges that join the two sets, that is, $\abs{ \{ \{u,v\} \in E(G)\; | \; u \in S \text{ and } v \in \overline{S} \} }$. MAX-CUT is a well-known \textbf{NP}-hard combinatorial optimization problem which asks for the maximum size of a cut of a graph. \newpage \section{History of $F_e(3,3;4)$} \begin{table}[h] \centering { \renewcommand{\arraystretch}{1.2} \begin{tabular}{ l@{$\;\;\;$} | r@{ -- } l| l r }\hline Year & \multicolumn{2}{m{2.7cm}|}{\centering Lower/Upper Bounds} & Who/What & Ref.\\ \hline 1967 & \multicolumn{2}{c|}{any?$\quad$} & Erd\H{o}s-Hajnal &\cite{Erdos1967} \\ 1970 & \multicolumn{2}{c|}{exist$\quad$} & Folkman &\cite{Folkman1970}\\ 1972 & 10 & & Lin &\cite{Lin1972}\\ 1975 & & $10^{10}$? & Erd\H{o}s offers \$100 for proof &\\ 1986 & & $8 \times 10^{11}$ & Frankl-R\"{o}dl &\cite{Frankl1986}\\ 1988 & & $3\times 10^9$ & Spencer &\cite{Spencer1988} \\ 1999 & $\;\; 16$ & & Piwakowski et al. (implicit) &\cite{Piwakowski1999}\\ 2007 & 19 & & Radziszowski-Xu &\cite{Radziszowski2007}\\ 2008 & & 9697 & Lu &\cite{Lu2008}\\ 2008 & & 941 & Dudek-R\"{o}dl &\cite{Dudek2008} \\ 2012 & & 786 & this work &\\ 2012 & & 100? 
& Graham offers \$100 for proof &\\ \hline \end{tabular} } \caption{Timeline of progress on $F_e(3,3;4)$.} \label{tab:hist} \end{table} Table \ref{tab:hist} summarizes the events surrounding $F_e(3,3;4)$, starting with Erd\H{o}s and Hajnal's \cite{Erdos1967} original question of existence. After Folkman \cite{Folkman1970} proved the existence, Erd\H{o}s, in 1975, offered \$100 for deciding if $F_e(3,3;4)<10^{10}$. This question remained open for over 10 years. Frankl and R\"{o}dl \cite{Frankl1986} nearly met Erd\H{o}s' request in 1986 when they showed that $F_e(3,3;4) < 7.02 \times 10^{11}$. In 1988, Spencer \cite{Spencer1988}, in a seminal paper using probabilistic techniques, proved the existence of a Folkman graph of order $3\times 10^9$ (after an erratum by Hovey), without explicitly constructing it. In 2008, Lu \cite{Lu2008} showed that $F_e(3,3;4)\leq 9697$ by constructing a family of $K_4$-free circulant graphs (which we discuss in Section \ref{sec:lu}) and showing that some such graphs arrow $(3,3)$ using spectral analysis. Later, Dudek and R\"{o}dl reduced the upper bound to the best known to date, $941$. Their method, which we have pursued further with some success, is discussed in the next section. The lower bound for $F_e(3,3;4)$ was much less studied than the upper bound. Lin \cite{Lin1972} obtained a lower bound of $10$ in 1972 without the help of a computer. All 659 graphs on 15 vertices witnessing $F_e(3,3;5)=15$ \cite{Piwakowski1999} contain $K_4$, thus giving the bound $16 \leq F_e(3,3;4)$. In 2007, two of the authors of this paper gave a computer-free proof of $18 \leq F_e(3,3;4)$ and improved the lower bound further to $19$ with the help of computations \cite{Radziszowski2007}. The long history of $F_e(3,3;4)$ is not only interesting in itself but also gives insight into how difficult the problem is. 
Finding good bounds on the smallest order of any Folkman graph (with fixed parameters) seems to be difficult, and some related Ramsey graph coloring problems are \textbf{NP}-hard or lie even higher in the polynomial hierarchy. For example, Burr \cite{Burr1976} showed that arrowing $(3,3)$ is $\mathbf{coNP}$-complete, and Schaefer \cite{Schaefer2001} showed that for general graphs $F$, $G$, and $H$, deciding $F\rightarrow (G,H)$ is $\mathbf{\Pi^{\mathrm{P}}_2}$-complete. \section{Arrowing via MAX-CUT} Building off Spencer's and other methods, Dudek and R\"{o}dl \cite{Dudek2008} in 2008 showed how to construct a graph $H_G$ from a graph $G$, such that the maximum size of a cut of $H_G$ determines whether or not $G \rightarrow (3,3)$. They construct the graph $H_G$ as follows. The vertices of $H_G$ are the edges of $G$, so $\abs{V(H_G)}=\abs{E(G)}$. For $e_1,e_2 \in V(H_G)$, if the edges $\{e_1,e_2,e_3\}$ form a triangle in $G$ for some edge $e_3$, then $\{e_1,e_2\}$ is an edge in $H_G$. Let $t_\triangle(G)$ denote the number of triangles in graph $G$. Clearly, $\abs{E(H_G)}=3t_\triangle(G)$. Let $MC(H)$ denote the MAX-CUT value of graph $H$. \bigskip \begin{theorem}[Dudek and R\"{o}dl \cite{Dudek2008}] \label{th:mc} $G \rightarrow (3,3)$ if and only if\\ $MC(H_G) < 2t_\triangle(G)$. \end{theorem} There is a clear intuition behind Theorem \ref{th:mc} that we will now describe. Any edge $2$-coloring of $G$ corresponds to a bipartition of the vertices of $H_G$. If a triangle of the colored $G$ is not monochromatic, then its three edges, which are vertices of $H_G$, will be separated in the bipartition. If we treat this bipartition as a cut, then the size of the cut counts each such triangle twice, once for each of the two $H_G$-edges of that triangle that cross the cut. Since at most one triangle of a graph contains two given edges, the size of the cut is exactly twice the number of non-monochromatic triangles. 
Therefore, if it is possible to find a cut that has size equal to $2t_\triangle(G)$, then such a cut defines an edge coloring of $G$ that has no monochromatic triangles. However, if $MC(H_G)<2t_\triangle(G)$, then in each coloring, all three edges of some triangle are in one part and thus, $G \rightarrow (3,3)$. A benefit of converting the problem of arrowing $(3,3)$ to MAX-CUT is that the latter is well-known and has been studied extensively in computer science and mathematics (see for example \cite{Commander2009}). The decision problem MAX-CUT$(H,k)$ asks whether or not $MC(H)\geq k$. It is known that MAX-CUT is \textbf{NP}-hard and this decision problem was one of Karp's 21 \textbf{NP}-complete problems \cite{Karp1972}. In our case, $G\rightarrow (3,3)$ if and only if MAX-CUT$\left(H_G,2t_\triangle(G)\right)$ doesn't hold. Since MAX-CUT is \textbf{NP}-hard, an attempt is often made to approximate it, such as in the approaches presented in the next two sections. \subsection{Minimum Eigenvalue Method} A method exploiting the minimum eigenvalue was used by Dudek and R\"{o}dl \cite{Dudek2008} to show that some large graphs are members of $\mathcal{F}_e(3,3;4)$. The following upper bound \eqref{eq:mineig} on $MC(H_G)$ can be found in \cite{Dudek2008}, where $\lambda_{\text{min}}$ denotes the minimum eigenvalue of the adjacency matrix of $H_G$. \begin{equation}\label{eq:mineig} MC(H_G) \leq \frac{\abs{E(H_G)}}{2} - \frac{\lambda_{\text{min}}\abs{V(H_G)}}{4}. \end{equation} For positive integers $r$ and $n$, if $-1$ is an $r$-th residue modulo $n$, then let $G(n,r)$ be a circulant graph on $n$ vertices with the vertex set $\mathds{Z}_n$ and the edge set $E(G(n,r)) = \{ \{u,v\} \; | \; u \neq v \text{ and } u-v \equiv \alpha^r \bmod{n}, \text{ for some } \alpha \in \mathds{Z}_n \}$. The graph $G_{941}=G(941,5)$ has 707632 triangles. 
Using the MATLAB \cite{MATLAB2011} {\tt eigs} function, Dudek and R\"{o}dl \cite{Dudek2008} computed \begin{equation*} MC(H_{G_{941}})\leq 1397484 < 1415264=2t_\triangle(G_{941}). \end{equation*} Thus, by Theorem \ref{th:mc}, $G_{941} \rightarrow (3,3)$. \vspace*{1em} In an attempt to improve $F_e(3,3;4) \leq 941$, we tried removing vertices of $G_{941}$ to see if the minimum eigenvalue bound would still show arrowing. We applied multiple strategies for removing vertices, including removing neighborhoods of vertices, randomly selected vertices, and independent sets of vertices. Most of these strategies were successful, and led to the following theorem: \bigskip \begin{theorem}\label{th:gc} $F_e(3,3;4) \leq 860$. \end{theorem} \vspace*{.5em} \noindent \textbf{Proof.} For a graph $G$ with vertices $\mathds{Z}_n$, define $C=C(d,k) = \{ v \in V(G)\;|\; v=id \bmod{n}, \text{ for } 0 \leq i < k\}$. Let $G=G_{941}$, $d=2$, $k=81$, and $G_C$ be the graph induced on $V(G) \setminus C(d,k)$. Then $G_C$ has 860 vertices, 73981 edges and 542514 triangles. Using the MATLAB {\tt eigs} function, we obtain $\lambda_{\text{min}} \approx -14.663012$. Setting $\lambda_{\text{min}} > -14.664$ in \eqref{eq:mineig} gives \begin{equation} MC(H_{G_C}) < 1084985 < 1085028=2t_\triangle(G_C). \end{equation} \noindent Therefore, $G_C \rightarrow (3,3)$. $\Box$ \vspace*{1em} None of the methods used allowed for $82$ or more vertices to be removed without the upper bound on $MC$ becoming larger than $2t_\triangle$. \subsection{Goemans-Williamson Method} The Goemans-Williamson MAX-CUT approximation algorithm \cite{Goemans1995} is a well-known, polynomial-time algorithm that relaxes the problem to a semi-definite program (SDP). It involves the first use of SDP in combinatorial approximation and has since inspired a variety of other successful algorithms (see for example \cite{Karloff1997,Frieze1997}). This randomized algorithm returns a cut with expected size at least 0.87856 of the optimal value. 
However, in our case, all that is needed is a feasible solution to the SDP, as it gives an upper bound on $MC(H)$. A brief description of the Goemans-Williamson relaxation follows. The first step in relaxing MAX-CUT is to represent the problem as a quadratic integer program. Given a graph $H$ with $V(H)=\{1,\dots,n\}$ and nonnegative weights $w_{i,j}$ for each pair of vertices $\{i,j\}$, we can write $MC(H)$ as the following objective function: \begin{align} \text{Maximize}\quad &\frac{1}{2}\sum_{i<j}w_{i,j}(1 - y_iy_j) \label{eq:quadratic}\\ \text{subject to:}\quad &y_i \in \{-1,1\} \quad \text{for all } i \in V(H). \notag \end{align} Define one part of the cut as $S=\{i\; |\; y_i = 1\}$. Since in our case all graphs are unweighted, we will use \begin{equation*} w_{i,j} = \begin{cases} 1 & \text{if } \{i,j\} \in E(H),\\ 0 & \text{otherwise}. \end{cases} \end{equation*} Next, the integer program \eqref{eq:quadratic} is relaxed by extending the problem to higher dimensions. Each $y_i \in \{-1,1\}$ is now replaced with a vector on the unit sphere $\mathbf{v}_i \in \mathds{R}^n$, as follows: \begin{align} \text{Maximize}\quad &\frac{1}{2}\sum_{i<j}w_{i,j}(1 - \mathbf{v}_i\cdot \mathbf{v}_j) \label{eq:relax}\\ \text{subject to:}\quad &\norm{\mathbf{v}_i} = 1 \quad \text{for all } i \in V(H). \notag \end{align} If we define a matrix $Y$ with the entries $y_{i,j}=\mathbf{v}_i\cdot\mathbf{v}_j$, that is, the Gram matrix of $\mathbf{v}_1,\dots,\mathbf{v}_n$, then $y_{i,i}=1$ and $Y$ is positive semidefinite. Therefore, \eqref{eq:relax} is a semidefinite program. \vspace*{1em} \subsection{Some Cases of Arrowing} \label{sec:lu} Using the Goemans-Williamson approach, we tested a wide variety of graphs for arrowing by finding upper bounds on MAX-CUT. These graphs included the $G(n,r)$ graphs tested by Dudek and R\"{o}dl, similar circulant graphs based on the Galois fields $GF(p^k)$, and random graphs. 
Various modifications of these graphs were also considered, including the removal and/or addition of vertices and/or edges, as well as copying or joining multiple candidate graphs together in various ways. We tested the graph $G_C$ of Theorem \ref{th:gc} and obtained the upper bound $MC(H_{G_C}) \leq 1077834$, a significant improvement over the bound $1084985$ obtained from the minimum eigenvalue method. This provides further evidence that $G_C \rightarrow (3,3)$, and is an example of when \eqref{eq:relax} yields a much better upper bound. Multiple SDP solvers that were designed \cite{Burer2003,Helmberg2000} to handle large-scale SDP and MAX-CUT problems were used for the tests. Specifically, we made use of a version of {\tt SDPLR} by Samuel Burer \cite{Burer2003}, a solver that uses low-rank factorization. The version {\tt SDPLR-MC} includes specialized code for the MAX-CUT SDP relaxation. {\tt SBmethod} by Christoph Helmberg \cite{Helmberg2000} implements a spectral bundle method and was also applied successfully in our experiments. In all cases where more than one solver was used, the same results were obtained. The type of graph that led to the best results was described by Lu \cite{Lu2008}. For positive integers $n$ and $s$, $s<n$, $s$ relatively prime to $n$, define set $S = \{ s^i \bmod{n} \; | \; i=0,1,\dots,m-1\}$, where $m$ is the smallest positive integer such that $s^m \equiv 1 \bmod{n}$. If $-1 \bmod{n} \in S$, then let $L(n,s)$ be a circulant graph on $n$ vertices with $V(L(n,s))=\mathds{Z}_n$. For vertices $u$ and $v$, $\{u,v\}$ is an edge of $L(n,s)$ if and only if $u-v \in S$. Note that the condition that $-1 \bmod{n} \in S$ implies that if $u-v \in S$ then $v-u \in S$. In Table 1 of \cite{Lu2008}, a set of potential members of $\mathcal{F}_e(3,3;4)$ of the form $L(n,s)$ were listed, and the graph $L(9697,4)$ was shown to arrow $(3,3)$. 
Lu gave credit to Exoo for showing that $L(17,2)$, $L(61,8)$, $L(79,12)$, $L(421,7)$, and $L(631,24)$ do not arrow $(3,3)$. \vspace*{1em} We tested all graphs from Table 1 of \cite{Lu2008} of order less than 941 with the MAX-CUT method, using both the minimum eigenvalue and SDP upper bounds. Table \ref{tab:results} lists the results. Note that although none of the computed upper bounds of the $L(n,s)$ graphs imply arrowing $(3,3)$, all SDP bounds match those of the minimum eigenvalue bound. This is distinct from other families of graphs, including those in \cite{Dudek2008}, as the SDP bound is usually tighter. Thus, these graphs were given further consideration. \begin{table}[h] \centering { \renewcommand{\arraystretch}{1.2} \begin{tabular}{ | c | r | r | r | } \hline $G$ & \multicolumn{1}{c|}{$2t_\triangle(G)$} & \multicolumn{1}{c|}{$\lambda_{\text{min}}$} & \multicolumn{1}{c|}{SDP}\\ \hline\hline $L(127,5)$ & 19558 & 20181 & 20181 \\ $L(457,6)$ & 347320 & 358204 & 358204 \\ $L(761,3)$ & 694032 & 731858 & 731858 \\ $L(785,53)$ & 857220 & 857220 & 857220 \\ \hline $G_{786}$ & 857762 & 857843 & 857753 \\ \hline \end{tabular} } \caption{Potential $\mathcal{F}_e(3,3;4)$ graphs $G$ and upper bounds on $MC(H_G)$, where ``$\lambda_{\text{min}}$'' is the bound \eqref{eq:mineig} and ``SDP'' is the solution of \eqref{eq:relax} from {\tt SDPLR-MC} and {\tt SBmethod}. $G_{786}$ is the graph of Theorem \ref{th:786}.} \label{tab:results} \end{table} $L(127,5)$ was given particular attention, as it is the same graph as $G_{127}$, where $V(G_{127})=\mathds{Z}_{127}$ and $E(G_{127})=\{ \{x,y\} \; | \; x-y \equiv \alpha^3 \bmod{127}, \text{ for some } \alpha \in \mathds{Z}_{127} \}$ (that is, the graph $G(127,3)$ as defined in the previous section). It has been conjectured by Exoo that $G_{127} \rightarrow (3,3)$. He also suggested that subgraphs of $G_{127}$ induced on fewer than 100 vertices may arrow $(3,3)$ as well. For more information on $G_{127}$ see \cite{Radziszowski2007}. 
Numerous attempts were made at modifying these graphs in hopes that one of the MAX-CUT methods would be able to prove arrowing. Indeed, we were able to do so with $L(785,53)$. Notice that all of the upper bounds for $MC(H_{L(785,53)})$ are $857220$, the same as $2t_\triangle\left(L(785,53)\right)$. Our goal was then to slightly modify $L(785,53)$ so that this value becomes smaller. Let $G_{786}$ denote the graph $L(785,53)$ with one additional vertex connected to the following 60 vertices: \vspace*{1em} \begin{minipage}{\textwidth} \centering { \small \begin{verbatim} { 0, 1, 3, 4, 6, 7, 9, 10, 12, 13, 15, 16, 18, 19, 21, 22, 24, 25, 27, 28, 30, 31, 33, 34, 36, 37, 39, 40, 42, 43, 45, 46, 48, 49, 51, 52, 54, 55, 57, 58, 60, 61, 63, 66, 69, 201, 204, 207, 210, 213, 216, 219, 222, 225, 416, 419, 422, 630, 642, 645 } \end{verbatim} } \end{minipage} \vspace*{1em} $G_{786}$ is still $K_4$-free, has 61290 edges, and has 428881 triangles. The upper bound computed from the SDP solvers for $MC(H_{G_{786}})$ is 857753. We did not find a nice description for the vectors of this solution. Software implementing {\tt SpeeDP} by Grippo et al. \cite{Grippo2010b}, an algorithm designed to solve large MAX-CUT SDP relaxations, was used by Rinaldi (one of the authors of \cite{Grippo2010b}) to analyze this graph. He was able to obtain the bounds $857742 \leq MC(H_{G_{786}}) \leq 857750$, which agrees with, and improves over our upper bound computation. Since $2t_\triangle(G_{786}) = 857762$, we have both from our tests and his {\tt SpeeDP} test that $G_{786} \rightarrow (3,3)$, and the following main result. \bigskip \begin{theorem}\label{th:786} $F_e(3,3;4) \leq 786.$ \end{theorem} We note that finding a lower bound on MAX-CUT, such as the $857742 \leq MC(H_{G_{786}})$ bound from {\tt SpeeDP}, follows from finding an actual cut of a certain size. This method may be useful, as finding a cut of size $2t_\triangle(G)$ shows that $G \not\rightarrow (3,3)$. 
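On instances small enough for exhaustive search, Theorem \ref{th:mc} and the closing remark can be checked directly. The hypothetical Python sketch below (illustrative only, not the computations used in this paper) builds $H_G$ for complete graphs: $K_5$, which does not arrow $(3,3)$ since $R(3,3)=6$, admits a cut of size exactly $2t_\triangle$, while for $K_6$ every cut falls strictly below $2t_\triangle$:

```python
from itertools import combinations

def H_of_complete(n):
    """Dudek-Rodl auxiliary graph H_G for G = K_n: vertices of H are the
    edges of K_n; two are joined iff they lie in a common triangle, which
    for a complete graph means iff they share an endpoint."""
    edges_G = list(combinations(range(n), 2))
    idx = {e: i for i, e in enumerate(edges_G)}
    H = [(idx[e], idx[f]) for e, f in combinations(edges_G, 2)
         if set(e) & set(f)]
    return len(edges_G), H

def max_cut(nv, edges):
    """Exhaustive MAX-CUT, feasible for nv <= ~20; vertex nv-1 is fixed on
    one side since cuts are unordered bipartitions."""
    return max(sum(((m >> u) ^ (m >> v)) & 1 for u, v in edges)
               for m in range(1 << (nv - 1)))

# t_triangle(K_5) = 10 and t_triangle(K_6) = 20, so the thresholds
# 2*t_triangle are 20 and 40, respectively.
nv5, H5 = H_of_complete(5)
nv6, H6 = H_of_complete(6)
```

A maximum cut of $H_{K_5}$ of size $20=2t_\triangle(K_5)$ corresponds to the familiar pentagon/pentagram $2$-coloring of $E(K_5)$ with no monochromatic triangle.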
\section{Tasks to Complete} Improving the current upper bound $F_e(3,3;4) \leq 786$ is the main challenge. The question of whether $G_{127} \rightarrow (3,3)$ is still open, and any method that could solve it would be of much interest. During the 2012 SIAM Conference on Discrete Mathematics in Halifax, Nova Scotia, Ronald Graham announced a \$100 award for determining if $F_e(3,3;4) < 100$. Another open question is the lower bound on $F_e(3,3;4)$: it is quite puzzling that the best known lower bound is only $19$. Even an improvement to $20 \leq F_e(3,3;4)$ would be good progress. \section{Acknowledgments} The third author is supported by the Guangxi Natural Science Foundation (2011GXNSFA018142). We would like to thank Giovanni Rinaldi and Luigi Grippo for their enthusiastic aid in the computation of MAX-CUT bounds with their {\tt SpeeDP} algorithm \cite{Grippo2010b}. We would also like to thank the referee for the helpful comments. \bibliographystyle{plain}
https://arxiv.org/abs/2202.04551
Shortest Paths without a Map, but with an Entropic Regularizer
In a 1989 paper titled "shortest paths without a map", Papadimitriou and Yannakakis introduced an online model of searching in a weighted layered graph for a target node, while attempting to minimize the total length of the path traversed by the searcher. This problem, later called layered graph traversal, is parametrized by the maximum cardinality $k$ of a layer of the input graph. It is an online setting for dynamic programming, and it is known to be a rather general and fundamental model of online computing, which includes as special cases other acclaimed models. The deterministic competitive ratio for this problem was soon discovered to be exponential in $k$, and it is now nearly resolved: it lies between $\Omega(2^k)$ and $O(k2^k)$. Regarding the randomized competitive ratio, in 1993 Ramesh proved, surprisingly, that this ratio has to be at least $\Omega(k^2 / \log^{1+\epsilon} k)$ (for any constant $\epsilon > 0$). In the same paper, Ramesh also gave an $O(k^{13})$-competitive randomized online algorithm. Since 1993, no progress has been reported on the randomized competitive ratio of layered graph traversal. In this work we show how to apply the mirror descent framework on a carefully selected evolving metric space, and obtain an $O(k^2)$-competitive randomized online algorithm, nearly matching the known lower bound on the randomized competitive ratio.
\section{Introduction} \paragraph{Our results.} In this paper we present a randomized $O(k^2)$-competitive online algorithm for width $k$ layered graph traversal. The problem, whose history is discussed in detail below, is an online version of the conventional shortest path problem (see~\cite{Sch12}), and of Bellman's dynamic programming paradigm. It is therefore a very general framework for decision-making under uncertainty of the future, and it includes as special cases other celebrated models of online computing, such as metrical task systems. The following 1989 words of Papadimitriou and Yannakakis~\cite{PY89} still resonate today: ``the techniques developed [for layered graph traversal] will add to the scarce rigorous methodological arsenal of Artificial Intelligence.'' Our new upper bound on the competitive ratio nearly matches the lower bound of $\Omega(k^2/\log^{1+\epsilon} k)$, for all $\epsilon > 0$, given by Ramesh~\cite{Ram93}. It improves substantially over the previously known $O(k^{13})$ upper bound from the same paper. \paragraph{Problem definition.} In layered graph traversal, a searcher attempts to find a short path from a starting node $a$ to a target node $b$ in an undirected graph $G$ with non-negative integer edge weights $w:E(G)\rightarrow{\mathbb{N}}$. It is assumed that the vertices of $G$ are partitioned into layers such that edges exist only between vertices of consecutive layers. The searcher knows initially only an upper bound $k$ on the number of nodes in a layer (this is called the {\em width} of $G$). The search begins at $a$, which is the only node in the first layer of the graph (indexed layer $0$). We may assume that all the nodes of $G$ are reachable from $a$. 
When the searcher first reaches a node in layer $i$, the next layer $i+1$, the edges between layer $i$ and layer $i+1$, and the weights of these edges, are revealed to the searcher.\footnote{So, in particular, when the search starts at $a$, the searcher knows all the nodes of layer $1$, all the edges connecting $a$ to a node in layer $1$, and all the weights of such edges.} At this point, the searcher has to move forward from the current position (a node in layer $i$) to a new position (a node in layer $i+1$). This can be done along a shortest path through the layers revealed so far.\footnote{Notice that this is not necessarily the shortest path in $G$ between the nodes, because such a path may have to go through future layers that have not been revealed so far. In fact, some nodes in layer $i+1$ might not even be reachable at this point.} The target node $b$ occupies the last layer of the graph; the number of layers is unknown to the searcher. The target gets revealed when the preceding layer is reached. The goal of the searcher is to traverse a path of total weight as close as possible to the shortest path connecting $a$ and $b$.\footnote{Note that the traversed path is not required to be simple or level-monotone.} The searcher is said to be $C$-competitive for some $C=C(k)$ iff for every width $k$ input $(G,w)$, the searcher's path weight is at most $C\cdot w_G(a,b)$, where $w_G(a,b)$ denotes the distance under the weight function $w$ between $a$ and $b$. The competitive ratio of the problem is the best $C$ for which there is a $C$-competitive searcher. Evidently, rational weights can be converted to integer weights if we know (or even update online) a common denominator, and irrational weights can be approximated to within any desired accuracy using rational weights. Moreover, any bipartite graph is layered, and any graph can be converted into a bipartite graph by subdividing edges. 
Thus, the layered structure of the input graph is intended primarily to parametrize the way in which the graph is revealed over time to the online algorithm. Also notice that if the width $k$ is not known, it can be estimated, for instance by doubling the guess each time it is refuted. Therefore, the assumptions made in the definition of the problem are not overly restrictive. \paragraph{Motivation and past work.} The problem has its origins in a paper of Baeza-Yates et al.~\cite{BCR88}, and possibly earlier in game theory~\cite{Gal80}. In~\cite{BCR88}, motivated by applications in robotics, the special case of a graph consisting of $k$ disjoint paths is proposed, under the name {\em the lost cow problem}. The paper gives a deterministic $9$-competitive algorithm for $k=2$, and more generally a deterministic $2\frac{k^k}{(k-1)^{k-1}}+1\approx 2ek+1$ competitive algorithm for arbitrary $k$. A year later, Papadimitriou and Yannakakis introduced the problem of layered graph traversal~\cite{PY89}. Their paper shows that for $k=2$ the upper bound of $9$ still holds, and that the results of~\cite{BCR88} are optimal for disjoint paths. Unfortunately, the upper bound of $9$ in~\cite{PY89} is the trivial consequence of the observation that for $k=2$, the general case reduces to the disjoint paths case. This is not true for general $k$. Indeed, Fiat et al.~\cite{FFKRRV91} give a $2^{k-2}$ lower bound, and an $O(9^k)$ upper bound on the deterministic competitive ratio in the general case. The upper bound was improved by Ramesh~\cite{Ram93} to $O(k^3 2^k)$, and further by Burley~\cite{Bur96} to $O(k 2^k)$. Thus, currently the deterministic case is nearly resolved, asymptotically: the competitive ratio lies in $\Omega(2^k)\cap O(k2^k)$. 
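For intuition about the lost cow bound above, the $k=2$ case is handled by the classical doubling strategy: the searcher alternates between the two rays with excursions of length $1, 2, 4, \dots$, returning to the origin after each. The following sketch (hypothetical illustrative code, restricted to targets at distance $d \geq 1$) simulates the strategy and confirms that the ratio stays below $9$ on sample inputs:

```python
def lost_cow(ray, d):
    """Total distance walked by the doubling searcher until it reaches a
    target at distance d >= 1 on ray 0 or ray 1. Excursion i travels 2^i
    along ray (i mod 2) and, if the target is not found, walks back."""
    traveled, step, side = 0.0, 1, 0
    while True:
        if side == ray and d <= step:
            return traveled + d        # target reached during this excursion
        traveled += 2 * step           # walk out and back to the origin
        step, side = 2 * step, 1 - side

# The worst case approaches (but never attains) ratio 9 for d >= 1.
ratio = max(lost_cow(r, d) / d
            for r in (0, 1) for d in (1.0, 2.0, 4.01, 64.5, 1000.0))
```

The worst sampled inputs are distances just beyond an excursion endpoint, where the searcher has wasted almost $8d$ on prior excursions before walking the final $d$.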
Investigation of the randomized competitive ratio was initiated in the aforementioned~\cite{FFKRRV91} that gives a $\frac{k}{2}$ lower bound for the general case, and asymptotically tight $\Theta(\log k)$ upper and lower bounds for the disjoint paths case. In the randomized case, the searcher's strategy is a distribution over moves, and it is $C$-competitive iff for every width $k$ input $(G,w)$, the expected weight of the searcher's path is at most $C\cdot w_G(a,b)$. It is a ubiquitous phenomenon of online computing that randomization improves the competitive ratio immensely, often guaranteeing exponential asymptotic improvement (as happens in the disjoint paths case of layered graph traversal). To understand why this might happen, one can view the problem as a game between the designer of $G$ and the searcher in $G$. The game alternates between the designer introducing a new layer and the searcher moving to a node in the new layer. The designer is oblivious to the searcher's moves. Randomization obscures the predictability of the searcher's moves, and thus weakens the power of the designer.\footnote{This can be formalized through an appropriate definition of the designer's information sets.} Following the results in~\cite{FFKRRV91} and the recurring phenomenon of exponential improvement, a natural conjecture would have been that the randomized competitive ratio of layered graph traversal is $\Theta(k)$. However, this natural conjecture was rather quickly refuted in the aforementioned~\cite{Ram93} that surprisingly improves the lower bound to $\Omega(k^2/\log^{1+\epsilon} k)$, which holds for all $\epsilon > 0$. Moreover, the same paper also gives an upper bound of $O(k^{13})$. Thus, even though for the general case of layered graph traversal the randomized competitive ratio cannot be logarithmic in the deterministic competitive ratio, it is polylogarithmic in that ratio. 
The results of~\cite{Ram93} on randomized layered graph traversal have not since been improved prior to our current paper. Computing the optimal offline solution, a shortest path from source to target in a weighted layered graph, is a simple example and also a generic framework for dynamic programming~\cite{Bel57}. The online version has applications to the design and analysis of hybrid algorithms. In particular, the disjoint paths case has applications in derandomizing online algorithms~\cite{FRRS94}, in the design of divide-and-conquer online algorithms~\cite{FRR90,ABM93}, and in the design of advice and learning augmented online algorithms~\cite{LV21,ACEPS20,BCKPV22}. In this context, Kao et al.~\cite{KRT93} resolve exactly the randomized competitive ratio of width $2$ layered graph traversal: it is roughly $4.59112$, precisely the solution for $x$ in the equation $\ln(x-1) = \frac{x}{x-1}$; see also~\cite{CL91}. For more in this vein, see also Kao et al.~\cite{KMSY94}. Moreover, layered graph traversal is a very general model of online computing. For example, many online problems can be represented as chasing finite subsets of points in a metric space. This problem, introduced by Chrobak and Larmore~\cite{CL91,CL93} under the name {\em metrical service systems}, is {\bf equivalent} to layered graph traversal~\cite{FFKRRV91}. The width $k$ of the layered graph instance corresponds to the maximum cardinality of any request of the metrical service systems instance. (See Section~\ref{sec: applications}.) Metrical service systems are a special case of metrical task systems, introduced by Borodin et al.~\cite{BLS87}. 
Width $k$ layered graph traversal includes as a special case metrical task systems in $k$-point metric spaces.\footnote{Indeed, this implies that while metrical service systems on $k$-point metrics are a special case of metrical task systems on $k$-point metrics, also metrical task systems on $k$-point metrics are a special case of metrical service systems using $k$-point requests.} There is a tight bound of $2k-1$ on the deterministic competitive ratio of metrical task systems in any $k$-point metric~\cite{BLS87}, and the randomized competitive ratio lies between an $\Omega(\log k/\log\log k)$ lower bound (Bartal et al.~\cite{BBM01,BLMN03}) and an $O(\log^2 k)$ upper bound (Bubeck et al.~\cite{BCLL19}). Thus, width $k$ layered graph traversal is strictly a more general problem than $k$-point metrical task systems. Another closely related problem is the $k$-taxi problem, whose best known lower bound for deterministic algorithms is obtained via a reduction from layered graph traversal~\cite{CK19}. \paragraph{Our methods.} Our techniques are based on the method of online mirror descent with entropic regularization that was pioneered by Bubeck et al.~\cite{BCLLM18,BCLL19} in the context of the $k$-server problem and metrical task systems, and further explored in this context in a number of recent papers~\cite{CL19,EL21,BC21}. It is known that layered graph traversal is equivalent to its special case where the input graph is a tree~\cite{FFKRRV91}. Based on this reduction, we reduce width $k$ layered graph traversal to a problem that we name the (depth $k$) \emph{evolving tree game}. In this game, one player, representing the algorithm, occupies a (non-root) leaf in an evolving rooted, edge weighted, bounded degree tree of depth $\le k$. 
Its opponent, the adversary, is allowed to change the metric and topology of the tree using the following repertoire of operations: ($i$) increase the weight of an edge incident to a leaf; ($ii$) delete a leaf and the incident edge, and smooth the tree at the parent if its degree is now $2$;\footnote{Smoothing is the reverse operation of subdividing. In other words, smoothing is merging the two edges incident to a degree $2$ node. We maintain w.l.o.g. the invariant that the tree has no degree $2$ node.} ($iii$) create two (or more) new leaves and connect them with weight $0$ edges to an existing leaf whose combinatorial depth is strictly smaller than $k$. The algorithm may move from leaf to leaf at any time, incurring \emph{movement cost} equal to the weight of the path between the leaves. If the algorithm occupies a leaf at the endpoint of a growing weight edge, it pays the increase in weight. We call this the {\em service cost} of the algorithm. If the algorithm occupies a leaf that is being deleted, it must move to a different leaf prior to the execution of the topology change. If the algorithm occupies a leaf that is being converted into an internal node (because new leaves are appended to it), the algorithm must move to a leaf after the execution of the topology change. At the end of the game, the total (movement + service) cost of the algorithm is compared against the adversary's cost, which is the weight of the lightest root-to-leaf path.\footnote{In fact, our algorithm can handle also the operation of reducing the weight of an edge, under the assumption that this operation incurs on both players a cost equal to the reduction in weight, if performed at their location.} Mirror descent is used to generate a fractional online solution to the evolving tree game. The algorithm maintains a probability distribution on the leaves. A fractional solution can be converted easily on-the-fly into a randomized algorithm. 
As in~\cite{BCLLM18,BCLL19}, the analysis of our fractional algorithm for the evolving tree game is based on a potential function that combines (in our case, a modification of) the Bregman divergence associated with the entropic regularizer with a weighted depth potential. The Bregman divergence is used to bound the algorithm's {\em service cost} against the adversary's cost. The weighted depth potential is used to bound the algorithm's {\em movement cost} against its own service cost. However, in our setting, in contrast to~\cite{BCLLM18,BCLL19}, the metric on the set of leaves, and even the topology of the underlying tree, change dynamically. This poses a few new challenges to the approach. In particular, the potential function that works for metrical task systems is not invariant under the topology changes that are needed here. We resolve this problem by working with revised edge weights that slightly over-estimate the true edge weights. When a topology change would lead to an increase of the potential function (by reducing the combinatorial depth of some vertices), we prevent such an increase by downscaling the affected revised edge weights appropriately. Even so, the extra cost incurred by the perturbation of entropy, which is required to handle distributions close to the boundary, cannot be handled in the same manner as in \cite{BCLLM18,BCLL19}. This issue is fixed by modifying both the Bregman divergence and the control function of the mirror descent dynamic. The latter damps down the movement of the algorithm when it incurs service cost at a rate close to $0$. In the competitive ratio of $O(k^2)$, one factor $k$ comes from the maximal depth $k$ of the tree. The other factor $k$ is due to the fact that the perturbation that we require is exponentially small in $k$.
We note that implementing the mirror descent approach in evolving trees is a major challenge to the design and analysis of online algorithms for online problems in metric spaces (e.g., the $k$-server problem, see~\cite{Lee18}, where an approach based on mirror descent in an evolving tree is also studied). Our ideas may prove applicable to other problems. \paragraph{Organization.} The rest of this paper is organized as follows. In Section~\ref{sec: evolving tree} we define and analyze the evolving tree game. In Section~\ref{sec: motivation} we motivate the evolving tree algorithm and analysis. In Section~\ref{sec: applications} we discuss the application to layered graph traversal/small set chasing. \section{The Evolving Tree Game}\label{sec: evolving tree} For a rooted edge-weighted tree $T=(V,E)$, we denote by $r$ its root, by $V^0:= V\setminus\{r\}$ the set of non-root vertices and by $\mathcal{L}\subseteq V^0$ the set of leaves. For $u\in V^0$, we denote by $p_u$ the parent of $u$ and by $w_u$ the length of the edge connecting $u$ to $p_u$. The evolving tree game is a two person continuous time game between an adversary and an algorithm. The adversary grows a rooted edge-weighted tree $T=(V,E)$ of bounded degree. Without loss of generality, we enforce that the root $r$ always has degree $1$, and we denote its single child by $c_r$. Initially $V = \{r,c_r\}$, and the two nodes are connected by a zero-weight edge. The root $r$ will be fixed throughout the game, but the identity of its child $c_r$ may change as the game progresses. The game has continuous steps and discrete steps. \begin{itemize} \item {\bf Continuous step:} The adversary picks a leaf $\ell$ and increases the weight $w_\ell$ of the edge incident on $\ell$ at a fixed rate of $w'_\ell = 1$ for a finite time interval. 
\item {\bf Discrete step:} There are two types of discrete steps: \begin{itemize} \item {\bf Delete step:} The adversary chooses a leaf $\ell\in\mathcal{L}$, $\ell\ne c_r$, and deletes $\ell$ and its incident edge from $T$. If the parent $p_\ell$ of $\ell$ remains with a single child $c$, the adversary smooths $T$ at $p_\ell$ as follows: it merges the two edges $\{c,p_\ell\}$ and $\{p_\ell,p_{p_\ell}\}$ into a single edge $\{c,p_{p_\ell}\}$, removing the vertex $p_\ell$, and assigns $w_c\gets w_c + w_{p_\ell}$. \item {\bf Fork step:} The adversary generates two or more new nodes and connects all of them to an existing leaf $\ell\in\mathcal{L}$ with edges of weight $0$. Notice that this removes $\ell$ from $\mathcal{L}$ and adds the new nodes to $\mathcal{L}$. \end{itemize} \end{itemize} The continuous and discrete steps may be interleaved arbitrarily by the adversary. A pure strategy of the algorithm maps the timeline to a leaf of $T$ that exists at that time. Thus, the start of the game (time $0$ of step $1$) is mapped to $c_r$, and at all times the algorithm occupies a leaf and may move from leaf to leaf. If the algorithm occupies a leaf $\ell$ continuously while $w_\ell$ grows, it pays the increase in weight (we call this the \emph{service cost}). If the algorithm moves from a leaf $\ell_1$ to a leaf $\ell_2$, it pays the total weight of the path in $T$ between $\ell_1$ and $\ell_2$ at the time of the move (we call this the \emph{movement cost}). A mixed strategy/randomized algorithm is, as usual, a probability distribution over pure strategies. A fractional strategy maps the timeline to probability distributions over the existing leaves. 
Writing $x_u$ for the total probability of the leaves in the subtree of $u$, this means that a fractional strategy maintains at all times a point in the changing polytope \begin{equation}\label{eq: polytope} K(T):=\left\{x\in\mathbb{R}_+^{V^0}\colon \sum_{v \colon p_v = u} x_v = x_u\,\forall u\in V\setminus\mathcal{L}\right\}, \end{equation} where we view $x_r:=1$ as a constant. Notice that the tree $T$, the weight function $w$, the point $x\in K(T)$, and the derived parameters are all functions of the adversary's step and the time $t$ within a continuous step. Thus, we shall use henceforth the following notation: $T(j,0)$, $w(j,0)$, $x(j,0)$, etc. to denote the values of these parameters at the start of step $j$ of the adversary. If step $j$ is a continuous step of duration $\tau$, then for $t\in [0,\tau]$, we use $T(j,t)$, $w(j,t)$, $x(j,t)$, etc. to denote the values of these parameters at time $t$ since the start of step $j$. If it is not required to mention the parameters $(j,t)$ for clarity, we omit them from our notation. We require that for a fixed continuous step $j$, the function $x(j,\cdot)\colon[0,\tau]\to K(T)$ is absolutely continuous, and hence differentiable almost everywhere. Notice that the polytope $K(T)$ is fixed during a continuous step, so this requirement is well-defined. We denote by $x'$ the derivative of $x$ with respect to $t$, and similarly we denote by $w'$ the derivative of $w$ with respect to $t$. The cost of the algorithm during the continuous step $j$ is $$ \int_0^\tau \sum_{v\in V^0} \left(w'_v(j,t) x_v(j,t) + w_v(j,t) \left|x'_v(j,t)\right|\right) dt. $$ Notice that the first summand (the \emph{service cost}) is non-zero only at the single leaf $\ell$ for which $w_\ell(j,t)$ is growing. In a discrete step $j$, the topology of the tree and thus the polytope $K(T)$ changes. The old tree is $T(j,0)$ and the new tree is $T(j+1,0)$.
In a delete step, when a leaf $\ell$ is deleted, the algorithm first has to move from its old position $x(j,0)\in K(T(j,0))$ to some position $x\in K(T(j,0))$ with $x_\ell=0$. The cost of moving from $x(j,0)$ to $x$ is given by $$ \sum_{v\in V^0} w_v(j,0) \left|x_v(j,0) - x_v\right|. $$ The new state $x(j+1,0)$ is the projection of $x$ onto the new polytope $K(T(j+1,0))$, where the $\ell$-coordinate and possibly (if smoothing happens) the $p_\ell$-coordinate are removed. In a fork step, the algorithm chooses as its new position any point in $K(T(j+1,0))$ whose projection onto $K(T(j,0))$ is the old position of the algorithm. No cost is incurred here (since the new leaves are appended at distance $0$). The following lemma is analogous to Lemma~\ref{lm: fractional to mixed}. Its proof is very similar. It is omitted here; we actually do not need this claim to prove the main result of the paper. \begin{lemma}\label{lm: DTG frac to mix} For every fractional strategy of the algorithm there is a mixed strategy incurring the same cost in expectation. \end{lemma} Our main result in this section is the following theorem. \begin{theorem}\label{thm: main} For every $k\in{\mathbb{N}}$ and for every $\epsilon > 0$ there exists a fractional strategy of the algorithm with the following performance guarantee. For every pure strategy of the adversary that grows trees of depth at most $k$, the cost $C$ of the algorithm satisfies $$ C\le O(k^2\log d_{\max})\cdot(\opt + \epsilon), $$ where $\opt$ is the minimum distance in the final tree from the root to a leaf, and $d_{\max}$ is the maximum degree of a node at any point during the game. \end{theorem} Notice that for every strategy of the adversary, there exists a pure strategy that pays exactly $\opt$ service cost and zero movement cost. We will refer to this strategy as the {\em optimal play}. The algorithm cannot in general choose the optimal play because the evolution of the tree only gets revealed step-by-step. 
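To make the adversary's repertoire concrete, the following sketch (ours; all identifiers are hypothetical) maintains the tree under the grow, fork, and delete operations described above, including the smoothing rule at a parent that drops to degree $2$.

```python
class EvolvingTree:
    """Bookkeeping for the adversary's operations (sketch; identifiers
    are ours). Node 'r' is the root; edge weights are stored at the
    child endpoint, as in the text (w_u is the weight of {u, p_u})."""

    def __init__(self):
        self.parent = {'c_r': 'r'}
        self.weight = {'c_r': 0.0}
        self.children = {'r': ['c_r'], 'c_r': []}

    def leaves(self):
        return [u for u, ch in self.children.items() if u != 'r' and not ch]

    def grow(self, leaf, amount):
        """A continuous step, integrated over its duration."""
        self.weight[leaf] += amount

    def fork(self, leaf, new_leaves):
        """Attach q >= 2 fresh leaves to an existing leaf by weight-0 edges."""
        assert len(new_leaves) >= 2 and not self.children[leaf]
        for v in new_leaves:
            self.parent[v], self.weight[v], self.children[v] = leaf, 0.0, []
        self.children[leaf] = list(new_leaves)

    def delete(self, leaf):
        """Delete a leaf != c_r; smooth its parent if it drops to degree 2."""
        assert not self.children[leaf]
        p = self.parent.pop(leaf)
        del self.weight[leaf], self.children[leaf]
        self.children[p].remove(leaf)
        if p != 'r' and len(self.children[p]) == 1:
            c = self.children[p][0]            # merge {c,p} and {p,p_p}
            self.weight[c] += self.weight.pop(p)
            gp = self.parent.pop(p)
            del self.children[p]
            self.parent[c] = gp
            self.children[gp][self.children[gp].index(p)] = c
```

The sketch omits the depth bound $h_u < k$ required before a fork; it only tracks the tree itself, not the searcher's state.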
\subsection{Additional notation} Let $j$ be any step, and let $t$ be a time in that step (so, if $j$ is a discrete step then $t=0$, and if $j$ is a continuous step of duration $\tau$ then $t\in [0,\tau]$). For a vertex $u\in V(j,t)$ we denote by $h_u(j,t)$ its combinatorial depth, i.e., the number of edges on the path from $r$ to $u$ in $T(j,t)$. Instead of the actual edge weights $w_u(j,t)$, our algorithm will be based on revised edge weights defined as \begin{align*} \tilde w_u(j,t) := \frac{2k-1}{2k-h_u(j,t)}\left(w_u(j,t) + \epsilon 2^{-j_u} \right), \end{align*} where $j_u\in\mathbb N$ is the step number when $u$ was created (or $0$ if $u$ existed in the initial tree). The purpose of the term $\epsilon 2^{-j_u}$ is to ensure that $\tilde w_u(j,t)$ is strictly positive. For $u\in V^0(j,t)$, we also define a shift parameter by induction on $h_u(j,t)$, as follows. For $u=c_r(j,t)$, $\delta_u(j,t) = 1$. For other $u$, $\delta_u(j,t) = \delta_{p_u}(j,t) / (d_{p_u}(j,t)-1)$, where $p_u = p_u(j,t)$, and $d_{p_u}(j,t)$ is the degree of $p_u$ in $T(j,t)$ (i.e., $d_{p_u}(j,t)-1$ is the number of children of $p_u$ in $T(j,t)$; note that every non-leaf node in $V^0(j,t)$ has degree at least $3$). Observe that by definition $\delta(j,t)\in K(T(j,t))$. As mentioned earlier, we often omit the parameters $(j,t)$ from our notation, unless they are required for clarity. \subsection{The algorithm}\label{sec:algo} We consider four distinct types of steps: continuous steps, fork steps, deadend steps, and merge steps. A delete step is implemented by executing a deadend step, followed if needed by a merge step. It is convenient to examine the two operations required to implement a delete step separately. \paragraph{Continuous step.} In a continuous step, the weight $w_\ell$ of some leaf $\ell\in\mathcal{L}$ grows continuously (and thus $\tilde w_\ell'> 0$).
In this case, for $u\in V^0$ we update the fractional mass in the subtree below $u$ at rate \begin{equation}\label{eq: dynamic} x_u' = - \frac{2 x_u}{\tilde w_u}\tilde w_u' + \frac{x_u+\delta_u}{\tilde w_u}\left(\lambda_{p_u} - \lambda_{u}\right), \end{equation} where $\lambda_u=0$ for $u\in\mathcal{L}$ and $\lambda_u\ge 0$ for $u\in V\setminus\mathcal{L}$ are chosen such that $x$ remains in the polytope $K(T)$. We will show in Section~\ref{sec:MDExistence} that such $\lambda$ exists (as a function of time).\footnote{The dynamic of $x$ corresponds to running mirror descent with regularizer $\Phi_t(z) = \sum_{u\in V^0} \tilde w_u(z_u+\delta_u)\log(z_u+\delta_u)$, using the growth rate $\tilde w'$ of the revised weights as cost function, and scaling the rate of movement by a factor $\frac{2x_{\ell}}{x_{\ell}+\delta_{\ell}}$ when $\ell$ is the leaf whose edge grows. See Section~\ref{sec: motivation}.} \paragraph{Fork step.} In a fork step, new leaves $\ell_1,\ell_2,\dots,\ell_q$ (for some $q\ge 2$) are spawned as children of a leaf $u$ (so that $u$ is no longer a leaf). They are ``born'' with $w_{\ell_1}=w_{\ell_2}=\cdots = w_{\ell_q}=0$ and $x_{\ell_1}=x_{\ell_2}=\cdots =x_{\ell_q}=x_u/q$. \paragraph{Deadend step.} In a deadend step, we delete a leaf $\ell\ne c_r$. To achieve this, we first compute the limit of a continuous step where the weight $\tilde w_\ell$ grows to infinity, ensuring that the mass $x_\ell$ tends to $0$. This, of course, changes the mass at other nodes, and we update $x$ to be the limit of this process. Then, we remove the leaf $\ell$ along with the edge $\{\ell,p_\ell\}$ from the tree. Notice that this changes the degree $d_{p_\ell}$. Therefore, it triggers a discontinuous change in the shift parameter $\delta_u$ for every vertex $u$ that is a descendant of $p_\ell$. \paragraph{Merge step.} A merge step immediately follows a deadend step if, after the removal of the edge $\{\ell,p_\ell\}$, the vertex $v=p_\ell$ has only one child $c$ left.
Notice that $v\ne r$. We merge the two edges $\{c,v\}$ of weight $w_c$ and $\{v, p_v\}$ of weight $w_v$ into a single edge $\{c,p_v\}$ of weight $w_c+w_v$. The two edges that were merged and the vertex $v$ are removed from $T$. This decrements by $1$ the combinatorial depth $h_u$ of every vertex $u$ in the subtree rooted at $c$. Thus, it triggers a discontinuous change in the revised weight $\tilde w_u$, for every vertex $u$ in this subtree. \subsection{Competitive analysis} The analysis of the algorithm is based on a potential function argument. Let $y\in K(T)$ denote the state of the optimal play. Note that as the optimal play is a pure strategy, the vector $y$ is simply the indicator function for the nodes on some root-to-leaf path. We define a potential function $P = P_{k,T,w}(x,y)$, where $x\in K(T)$ and we prove that the algorithm's cost plus the change in $P$ is at most $O(k^2\log d_{\max})$ times the optimal cost, where $d_{\max}$ is the maximum degree of a node of $T$. This, along with the fact that $P$ is bounded, implies $O(k^2\log d_{\max})$ competitiveness. In Section~\ref{sec: motivation} we motivate the construction of the potential function $P$. Here, we simply define it as follows: \begin{equation}\label{eq: potential} P := 2\sum_{u\in V^0}\tilde w_u \left(4k y_u \log \frac{1+\delta_u}{x_u+\delta_u} + (2k-h_u)x_u\right). \end{equation} We now consider the cost and potential change for each of the different steps separately. \subsubsection{Continuous step}\label{sec:growth} \paragraph{Bounding the algorithm's cost.} Let $\ell$ be the leaf whose weight $w_\ell$ is growing, and recall that $c_r$ is the current neighbor of the root $r$. By definition of the game, the algorithm pays two types of cost. Firstly, it pays for the mass $x_\ell$ at the leaf $\ell$ moving away from the root at rate $w_\ell'$. Secondly, it pays for moving mass from $\ell$ to other leaves. Notice that $x_{c_r} = 1$ stays fixed. 
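As a quick numerical sanity check of the shift parameters $\delta_u$, the revised weights $\tilde w_u$, and the potential \eqref{eq: potential}, here is a toy computation of ours on a depth-$2$ tree; all concrete numbers are hypothetical.

```python
import math

# Toy instance: r -- c_r -- {a, b}, depth bound k = 2, perturbation eps.
# All concrete numbers below are hypothetical, for illustration only.
k, eps = 2, 1e-3
parent = {'c_r': 'r', 'a': 'c_r', 'b': 'c_r'}
children = {'r': ['c_r'], 'c_r': ['a', 'b'], 'a': [], 'b': []}
w = {'c_r': 1.0, 'a': 2.0, 'b': 0.5}   # current true edge weights
birth = {'c_r': 0, 'a': 1, 'b': 1}     # step j_u at which u was created

def depth(u):                          # combinatorial depth h_u
    return 0 if u == 'r' else 1 + depth(parent[u])

def delta(u):                          # shift parameter, by induction on h_u
    p = parent[u]
    if p == 'r':
        return 1.0
    deg_p = len(children[p]) + 1       # children of p plus the edge to p's parent
    return delta(p) / (deg_p - 1)

def w_tilde(u):                        # revised edge weight
    return (2 * k - 1) / (2 * k - depth(u)) * (w[u] + eps * 2.0 ** (-birth[u]))

def potential(x, y):                   # the potential P of the analysis
    return 2 * sum(
        w_tilde(u) * (4 * k * y[u] * math.log((1 + delta(u)) / (x[u] + delta(u)))
                      + (2 * k - depth(u)) * x[u])
        for u in parent)
```

In particular, one can check that $\delta$ lies in $K(T)$: the shifts of the two children of $c_r$ sum to $\delta_{c_r}=1$.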
Let $C = C(j,t)$ denote the total cost that the algorithm accumulates in the current step $j$, up to the current time $t$. \begin{lemma}\label{lm: growth cost} The rate at which $C$ increases with $t$ is $C' \le 3\tilde w_\ell'x_\ell + 2\sum_{u\in V^0} (x_u+\delta_u)\lambda_{u}$. \end{lemma} \begin{proof} We have \begin{eqnarray} C' = w_\ell'x_\ell\, + \sum_{u\in V^0\setminus \{c_r\}} w_u |x_u'| &\le& \tilde w_\ell'x_\ell\, + \sum_{u\in V^0\setminus \{c_r\}} \tilde w_u |x_u'|\label{eq: growth w to tilde w}\\ & = & \tilde w_\ell'x_\ell\, + \sum_{u\in V^0\setminus \{c_r\}} \left|-2x_u \tilde w_u' + (x_u+\delta_u)(\lambda_{p_u} - \lambda_u)\right|\label{eq: growth movement}\\ &\le& 3\tilde w_\ell'x_\ell\, + \sum_{u\in V^0\setminus \{c_r\}} (x_u+\delta_u)(\lambda_{p_u} + \lambda_u)\label{eq: growth triangle} \\ &\le& 3\tilde w_\ell'x_\ell + 2\sum_{u\in V^0} (x_u+\delta_u)\lambda_{u}.\label{eq:growthCost} \end{eqnarray} Inequality~\eqref{eq: growth w to tilde w} uses the fact that $w_u\le \tilde w_u$ and $w_\ell'\le \tilde w_\ell'$. Equation~\eqref{eq: growth movement} uses the definition of the dynamic in Equation~\eqref{eq: dynamic}. Inequality~\eqref{eq: growth triangle} uses the triangle inequality. Finally, Inequality~\eqref{eq:growthCost} uses the fact that $x,\delta\in K(T)$, so $\sum_{v \colon p_v = u} (x_v+\delta_v) = x_u+\delta_u$ for all $u\in V\setminus\mathcal{L}$. \end{proof} \paragraph{Change of potential.} We decompose the potential function as $P= 4kD - 2\Psi$, where \begin{align*} D := \sum_{u\in V^0}\tilde w_u\left(2y_u\log\frac{1+\delta_u}{x_u+\delta_u} \,\,\,+\,\,\, x_u\right) \end{align*} and \begin{align*} \Psi:= \sum_{u\in V^0} h_u \tilde w_u x_u. \end{align*} We first analyze the rate of change of $\Psi$. \begin{lemma}\label{lm: Psi change} The rate at which $\Psi$ changes satisfies: $\Psi' \ge -k \tilde w_\ell' x_\ell + \sum_{u\in V^0} \lambda_u (x_u+\delta_u)$. 
\end{lemma} \begin{proof} We have \begin{eqnarray} \Psi' & = & h_\ell \tilde w_\ell' x_\ell + \sum_{u\in V^0\setminus\{c_r\}} h_u \tilde w_u x_u'\nonumber\\ & = & h_\ell \tilde w_\ell' x_\ell+\sum_{u\in V^0\setminus\{c_r\}} h_u \left(-2x_u \tilde w_u' + (x_u+\delta_u)(\lambda_{p_u}-\lambda_{u})\right)\nonumber\\ & = & - h_\ell \tilde w_\ell' x_\ell + \sum_{u\in V^0\setminus\{c_r\}} h_u (x_u+\delta_u)(\lambda_{p_u}-\lambda_{u})\nonumber\\ &\ge& - k \tilde w_\ell' x_\ell + \sum_{u\in V^0} \lambda_u \left((h_u+1)\sum_{v\colon p_v=u} \!\! (x_v+\delta_v) \quad-\quad h_u (x_u+\delta_u)\right)\label{eq: psi ineq}\\ & = & - k \tilde w_\ell' x_\ell + \sum_{u\in V^0} \lambda_u (x_u+\delta_u).\label{eq:growthWdepth} \end{eqnarray} Here, Inequality~\eqref{eq: psi ineq} uses the fact that $h_\ell\le k$ and, for $u=p_v$, $h_v=h_u+1$. Equation~\eqref{eq:growthWdepth} uses the previously noted fact that, as $x,\delta\in K(T)$, then for all $u\notin \mathcal{L}$, $\sum_{v \colon p_v = u} (x_v+\delta_v) = x_u+\delta_u$ (and if $u\in\mathcal{L}$, then $\lambda_u=0$). \end{proof} Next, we analyze the rate of change of $D$. \begin{lemma}\label{lm: D change} The rate at which $D$ changes satisfies: $D' \le -\tilde w_\ell'x_\ell + 2(2+k\log d_{\max}) y_\ell\tilde w_\ell'$.
\end{lemma} \begin{proof} We have \begin{eqnarray} D' & = & \tilde w_\ell' \left(2y_\ell\log\frac{1+\delta_\ell}{x_\ell+\delta_\ell}+x_\ell\right) + \sum_{u\in V^0\setminus\{c_r\}} \tilde w_ux_u'\left(\frac{-2y_u}{x_u+\delta_u} + 1\right)\nonumber\\ & = & \tilde w_\ell' \left(2y_\ell\log\frac{1+\delta_\ell}{x_\ell+\delta_\ell}+x_\ell\right) + \sum_{u\in V^0\setminus\{c_r\}} \left(-2x_u \tilde w_u' + (x_u+\delta_u)(\lambda_{p_u} - \lambda_u)\right)\left(\frac{-2y_u}{x_u+\delta_u} + 1\right)\nonumber\\ & = & -\tilde w_\ell'x_\ell + 2y_\ell\tilde w_\ell' \left(\log\frac{1+\delta_\ell}{x_\ell+ \delta_\ell} + \frac{2 x_\ell}{x_\ell+\delta_\ell}\right) + \sum_{u\in V^0\setminus \{c_r\}}(\lambda_{p_u}-\lambda_u)(-2y_u+x_u+\delta_u)\nonumber\\ &\le& -\tilde w_\ell'x_\ell + 2 y_\ell\tilde w_\ell'(2+k\log d_{\max}) + \sum_{u\in V^0} \lambda_u \left(2y_u-x_u-\delta_u - \sum_{v\colon p_v=u}(2y_v-x_v-\delta_v)\right)\label{eq: D ineq}\\ & = & -\tilde w_\ell'x_\ell + 2 y_\ell\tilde w_\ell'(2+k\log d_{\max}).\label{eq:growthBregman} \end{eqnarray} Inequality~\eqref{eq: D ineq} uses the fact that $\delta_\ell\ge (d_{\max})^{1-h_\ell}\ge (d_{\max})^{1-k}$ and $2y_{c_r}-x_{c_r}-\delta_{c_r}=0$. Equation~\eqref{eq:growthBregman} uses the fact that $x,y,\delta\in K(T)$, hence for $u\in V\setminus\mathcal{L}$, $2y_u-x_u-\delta_u = \sum_{v\colon p_v=u} (2y_v-x_v-\delta_v)$ (and for $u\in\mathcal{L}$, $\lambda_u=0$). \end{proof} We obtain the following lemma, which bounds the algorithm's cost and change in potential against the service cost of the optimal play. \begin{lemma}\label{lm: P change} For every $k\ge 2$, it holds that $C' + P' \le O(k^2\log d_{\max}) w_\ell' y_\ell$. \end{lemma} \begin{proof} Combine Equations~\eqref{eq:growthCost},~\eqref{eq:growthWdepth}, and~\eqref{eq:growthBregman}, and recall that $P=4kD-2\Psi$. 
We get \begin{equation}\label{eq:growthMainBound} C' + P' \le (2k+3-4k)\tilde w_\ell' x_\ell + 8k(2+k\log d_{\max})\tilde w_\ell' y_\ell \le O(k^2\log d_{\max}) w_\ell' y_\ell, \end{equation} where in the last inequality we use $\tilde w_\ell' < 2w_\ell'$. \end{proof} \subsubsection{Fork step} Fork steps may increase the value of the potential function $P$, because the new edges have revised weight $> 0$. The following lemma bounds this increase. \begin{lemma}\label{lm: fork cost} The total increase in $P$ due to all fork steps is at most $\epsilon \cdot O(k^2\log d_{\max})$. \end{lemma} \begin{proof} Consider a fork step that attaches new leaves $\ell_1,\dots,\ell_q$ to a leaf $u$. The new leaves are born with revised edge weights $\frac{2k-1}{2k-h_u-1}\epsilon 2^{-j}\le \epsilon 2^{-j+1}$, where $j$ is the current step number. Since $\sum_{i=1}^q y_{\ell_i}=y_u\le 1$ and $\sum_{i=1}^q x_{\ell_i}=x_u\le 1$, the change $\Delta P$ in $P$ satisfies \begin{eqnarray*} \Delta P &\le& \epsilon 2^{-j+2}\cdot\left(4k\log\frac{1+\delta_u/q}{\delta_u/q} + 2k-h_{u}-1\right) \\ &\le& \epsilon 2^{-j+2}\cdot (2k+4k^2\log d_{\max}), \end{eqnarray*} where the last inequality follows from $\delta_u/q\ge (d_{\max})^{1-k}$. As the step number $j$ is different in all fork steps, the total increase in $P$ over all fork steps is at most $\epsilon\cdot(2k+4k^2\log d_{\max})\sum_{j=1}^{\infty} 2^{-j+2} = \epsilon\cdot O(k^2\log d_{\max})$. \end{proof} \subsubsection{Deadend step} Recall that when a leaf $\ell$ is deleted, we first compute the limit of a continuous step as the weight $\tilde w_\ell$ grows to infinity. Let $\bar x$ be the mass distribution that the algorithm converges to when $\tilde w_\ell$ approaches infinity. \begin{lemma}\label{lm: limit 0} The limit $\bar x$ satisfies $\bar x_{\ell}=0$. Hence, $\bar x$ with the $\ell$-coordinate removed is a valid mass distribution in the new polytope $K(T)$.
Also, a deadend step decreases $P$ by at least the cost the algorithm incurs to move to $\bar x$. \end{lemma} \begin{proof} Note that $y_\ell=0$ for the ``dying'' leaf $\ell$. Thus, by Lemma~\ref{lm: P change}, the cost of the algorithm during the growth of $\tilde w_\ell$ is bounded by the decrease of $P$ during that time. Clearly, $P$ can only decrease by a finite amount (as it remains non-negative) and thus the algorithm's cost is finitely bounded. But this means that the mass at $\ell$ must tend to $0$, since otherwise the service cost would be infinite. Moreover, notice that the growth of $\tilde w_\ell$ is just a simulation and the algorithm does not pay the service cost, only the cost of moving from its state $x$ at the start of the simulation to the limit state $\bar x$. However, this movement cost is at most the total cost to the algorithm during the simulation, and $P$ decreases by at least the total cost. Finally, at $\bar x$, the term in $P$ for $\ell$ equals $0$, so removing it does not increase $P$. Also, for every vertex $u$ in a subtree rooted at a sibling of $\ell$, the term $\delta_u$ increases (as the degree $d_{p_\ell}$ decreases by $1$). However, this too cannot increase $P$ (as $x_u\le 1$). \end{proof} \subsubsection{Merge step} \begin{lemma}\label{lm: merge} A merge step does not increase $P$. \end{lemma} \begin{proof} Let $j$ be the step number in which the merge happens. Substituting the expression for the revised weights, the potential $P$ can be written as \begin{align*} P&= 2\sum_{u\in V^0}\left(w_u + \epsilon 2^{-j_u} \right) \left(\frac{2k-1}{2k-h_u}4k y_u \log \frac{1+\delta_u}{x_u+\delta_u} + (2k-1)x_u\right). \end{align*} Consider the two edges $\{c,v\}$ and $\{v, p_v\}$ that are to be merged, where $v=p_c(j,0)$. Firstly, for each vertex $u$ in the subtree of $c$ (including $c$ itself), its depth $h_u$ decreases by $1$. This cannot increase $P$.
Notice also that as $d_v = 2$, we have $\delta_c = \delta_v$ and the merge does not change any $\delta_u$. The new value $h_c(j+1,0)$ equals the old value $h_v(j,0)$. Note also that $y_c=y_v$ and $x_c=x_v$ because $c$ is the only child of $v$. Thus, merging the two edges of lengths $w_c$ and $w_v$ into a single edge of length $w_c+w_v$, and removing vertex $v$, only leads to a further decrease in $P$ resulting from the disappearance of the $\epsilon 2^{-j_v}$ term. \end{proof} \subsubsection{Putting it together} \begin{proofof}{Theorem~\ref{thm: main}} By Lemmas~\ref{lm: P change},~\ref{lm: fork cost},~\ref{lm: limit 0} and~\ref{lm: merge}, $$ C\le O(k^2\log d_{\max}) \opt + P_0 - P_f + \epsilon\cdot O(k^2\log d_{\max}), $$ where $P_0$ and $P_f$ are the initial and final values of $P$, respectively. Now, observe that $P_0 = \epsilon\cdot O(k)$ and $P_f\ge 0$. \end{proofof} \section{Derivation of Algorithm and Potential Function}\label{sec: motivation} We now describe how we derived the algorithm and potential function from the last section, and justify the existence of $\lambda$. \subsection{Online mirror descent} Our algorithm is based on the online mirror descent framework of \cite{BCLLM18,BCLL19}. In general, an algorithm in this framework is specified by a convex body $K\subset \mathbb{R}^n$, a suitable strongly convex function $\Phi\colon K\to \mathbb{R}$ (called \emph{regularizer}) and a map $f\colon [0,\infty)\times K\to \mathbb{R}^n$ (called \emph{control function}). The algorithm corresponding to $K$, $\Phi$ and $f$ is the (usually unique) solution $x\colon[0,\infty)\to K$ to the following differential inclusion: \begin{align} \nabla^2\Phi(x(t))\cdot x'(t) \in f(t,x(t)) - N_K(x(t)),\label{eq:MD} \end{align} where $\nabla^2\Phi(x)$ denotes the Hessian of $\Phi$ at $x$ and \[N_K(x):=\{\mu\in\mathbb{R}^n\colon \langle \mu, y-x\rangle \le 0, \,\forall y\in K\}\] is the normal cone of $K$ at $x$.
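For a feel of the dynamic \eqref{eq:MD} in the simplest case (a toy instance of ours, not the tree algorithm of Section~\ref{sec:algo}): on the probability simplex with the unshifted entropic regularizer $\Phi(x)=\sum_i x_i\log x_i$ and control $f=-c$, the Hessian is $\mathrm{diag}(1/x_i)$, and at interior points the normal cone is spanned by the all-ones vector, so \eqref{eq:MD} reduces to the replicator-type dynamic $x_i' = x_i(\lambda - c_i)$ with $\lambda=\sum_j x_j c_j$.

```python
def md_entropic_step(x, c, dt):
    """One explicit-Euler step of the mirror descent inclusion on the
    probability simplex with Phi(x) = sum_i x_i log x_i and control
    f = -c. The Hessian is diag(1/x_i); at interior points the normal
    cone contributes a single multiplier lam that keeps sum(x) = 1,
    giving x_i' = x_i * (lam - c_i) with lam = sum_j x_j c_j."""
    lam = sum(xi * ci for xi, ci in zip(x, c))
    return [xi + dt * xi * (lam - ci) for xi, ci in zip(x, c)]
```

One Euler step shifts mass multiplicatively away from costly coordinates while preserving total mass, which is the behavior the normal cone term is there to enforce.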
Intuitively, \eqref{eq:MD} means that $x$ tries to move in direction $f(t,x(t))$, with the normal cone term $N_K(x(t))$ ensuring that $x(t)\in K$ can be maintained, and multiplication by the positive definite matrix $\nabla^2\Phi(x(t))$ corresponding to a distortion of the direction in which $x$ is moving. A benefit of the online mirror descent framework is that there exists a default potential function for its analysis, namely the Bregman divergence associated to $\Phi$, defined as \begin{align*} D_\Phi(y\| x):=\Phi(y)-\Phi(x)+\langle \nabla \Phi(x), x-y\rangle \end{align*} for $x,y\in K$. Plugging in $x=x(t)$, the change of the Bregman divergence as a function of time is \begin{align} \frac{d}{dt}D_\Phi(y\| x(t)) &= \langle \nabla^2\Phi(x(t))\cdot x'(t), x(t)-y\rangle\label{eq:chainRule}\\ &= \langle f(t,x(t))-\mu(t), x(t)-y\rangle\qquad\qquad\text{for some $\mu(t)\in N_K(x(t))$}\label{eq:plugMD}\\ &\le \langle f(t,x(t)), x(t)-y\rangle,\label{eq:BregmanBound} \end{align} where \eqref{eq:chainRule} follows from the definition of $D_\Phi$ and the chain rule, \eqref{eq:plugMD} follows from~\eqref{eq:MD}, and \eqref{eq:BregmanBound} follows from the definition of $N_K(x(t))$. \subsection{Charging service cost for evolving trees} In the evolving tree game, we have $K=K(T)$. For a continuous step, it would seem natural to choose $f(t,x(t))=-w'(t)$, so that \eqref{eq:BregmanBound} implies that the online service cost $\langle w'(t), x(t)\rangle$ plus change in the potential $D_\Phi(y\| x(t))$ is at most the offline service cost $\langle w'(t), y\rangle$. For the regularizer $\Phi$ (which should be chosen in a way that also allows to bound the movement cost later), the choice analogous to \cite{BCLL19} would be \begin{align*} \Phi(x):= \sum_{u\in V^0} w_u(x_u+\delta_u)\log(x_u+\delta_u). 
\end{align*} However, since $\Phi$ (and thus $D_\Phi$) depends on $w$, the evolution of $w$ leads to an additional change of $D_\Phi$, which the bound~\eqref{eq:BregmanBound} does not account for as it assumes the regularizer $\Phi$ to be fixed. To determine this additional change, first observe that by a simple calculation \begin{align*} D_\Phi(y\|x) = \sum_{u\in V^0} w_u\left[(y_u+\delta_u)\log \frac{y_u+\delta_u}{x_u+\delta_u} + x_u-y_u\right]. \end{align*} When $w_\ell$ increases at rate $1$, this potential increases at rate $(y_\ell+\delta_\ell)\log \frac{y_\ell+\delta_\ell}{x_\ell+\delta_\ell} + x_\ell-y_\ell$. The good news is that the part $y_\ell\log \frac{y_\ell+\delta_\ell}{x_\ell+\delta_\ell}\le y_\ell \cdot O(\log\frac{1}{\delta_\ell})\le O(k) y_\ell$ can be charged to the offline service cost, which increases at rate $y_\ell$. The term $-y_\ell$ also does no harm as it is non-positive. The term $x_\ell$ might seem to be a slight worry, because it is equal to the online service cost, which is precisely the quantity that the change in potential is supposed to cancel. It means that effectively we have to cancel two times the online service cost, which can be achieved by accelerating the movement of the algorithm by a factor $2$ (by multiplying the control function by a factor $2$). The main worry is the remaining term $\delta_\ell\log \frac{y_\ell+\delta_\ell}{x_\ell+\delta_\ell}$, which does not seem controllable. We would therefore prefer to have a potential that has the $\delta_u$ terms only inside but not outside the $\log$. 
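As a quick numerical sanity check of the closed form of $D_\Phi$ displayed above (a throwaway sketch; the weights, deltas and points are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.5, 2.0, 5)          # stand-ins for the weights w_u
d = rng.uniform(0.1, 0.5, 5)          # stand-ins for the delta_u
x = rng.uniform(0.1, 1.0, 5)
y = rng.uniform(0.1, 1.0, 5)

def Phi(z):
    # Phi(z) = sum_u w_u (z_u + delta_u) log(z_u + delta_u)
    return np.sum(w * (z + d) * np.log(z + d))

grad = w * (np.log(x + d) + 1.0)      # gradient of Phi at x
bregman = Phi(y) - Phi(x) + grad @ (x - y)
closed = np.sum(w * ((y + d) * np.log((y + d) / (x + d)) + x - y))
assert abs(bregman - closed) < 1e-12  # matches the displayed formula
```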
Removing this term (and, for simplicity, omitting the $y_u$ at the end of the potential, which does not play any important role), our desired potential would then be a sum of the following two terms $L(t)$ and $M(t)$: \begin{align*} L(t) &:= \sum_{u\in V^0} w_u(t) y_u\log \frac{y_u+\delta_u}{x_u(t)+\delta_u}\\ M(t) &:= \sum_{u\in V^0} w_u(t) x_u(t) \end{align*} Let us study again why these terms are useful as part of the classical Bregman divergence potential by calculating their change. Dropping $t$ from the notation, and using that $\nabla^2\Phi(x)$ is the diagonal matrix with entries $\frac{w_u}{x_u+\delta_u}$, we have \begin{align*} L' &= \langle w',y\rangle O(k) - \langle y, \nabla^2\Phi(x)\cdot x'\rangle \\ &= \langle w',y\rangle O(k) - \langle y, f-\mu\rangle \end{align*} and \begin{align*} M' &= \langle w', x\rangle + \langle w, x'\rangle\\ &= \langle w', x\rangle + \langle x+\delta, \nabla^2\Phi(x) \cdot x'\rangle\\ &= \langle w', x\rangle + \langle x+\delta, f-\mu\rangle \end{align*} for some $\mu\in N_K(x)$. For a convex body $K$ of the form $K=\{x\in\mathbb{R}^n\colon Ax\le b\}$ where $A\in\mathbb{R}^{m\times n}$, $b\in \mathbb{R}^m$, the normal cone is given by \begin{align*} N_K(x) = \{A^T\lambda\mid \lambda\in\mathbb{R}_+^m, \langle \lambda, Ax-b\rangle=0\}. \end{align*} The entries of $\lambda$ are called \emph{Lagrange multipliers}. In our case, we will have $x_u>0$ for all $u\in V^0$, so the Lagrange multipliers corresponding to the constraints $x_u\ge 0$ will be zero. So the only tight constraints are the equality constraints, and the normal cone corresponding to \emph{any} such $x$ is given by \begin{align} N_K(x) = \{(\lambda_{u}-\lambda_{p(u)})_{u\in V^0}\mid \lambda\in\mathbb{R}^{V}, \lambda_u=0\,\forall u\in\mathcal{L}\}.\label{eq:normalCone} \end{align} Since $\delta\in K$ and $\delta_u>0$ for all $u$, we thus have $N_K(x)=N_K(\delta)$.
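The representation~\eqref{eq:normalCone} can be checked on a toy tree: a vector $\mu$ with $\mu_u=\lambda_u-\lambda_{p(u)}$ and $\lambda$ vanishing on the leaves is orthogonal to the difference of any two points satisfying the equality constraints of $K$. A small sketch (the tree, the normalization $x_v=1$ at the child of the root, and all names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# toy tree: root r -> internal v -> leaves a, b;  V^0 = {v, a, b}
parent = {"v": "r", "a": "v", "b": "v"}

def feasible():
    # equality constraints of K: x_v = 1 (child of the root), x_a + x_b = x_v
    t = rng.uniform(0.0, 1.0)
    return {"v": 1.0, "a": t, "b": 1.0 - t}

# mu_u = lambda_u - lambda_{p(u)} with lambda = 0 on the leaves
lam = {"r": rng.normal(), "v": rng.normal(), "a": 0.0, "b": 0.0}
x, y = feasible(), feasible()
inner = sum((lam[u] - lam[parent[u]]) * (y[u] - x[u]) for u in parent)
assert abs(inner) < 1e-12   # mu is orthogonal to differences of feasible points
```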
Hence, we can cancel the $\mu$ terms in $L'$ and $M'$ by taking the potential $D=2L+M$, so that \begin{align*} D'=2L' + M' &= \langle w',y\rangle O(k) + \langle w', x\rangle + \langle x+\delta - 2y, f-\mu\rangle\\ &\le \langle w',y\rangle O(k) + \langle w', x\rangle + \langle x+\delta - 2y, f\rangle, \end{align*} where the inequality uses that $\mu\in N_K(x)$ and $\mu\in N_K(\delta)$. Recalling that $w_u'=\1_{u=\ell}$, and choosing $f=-2w'$ as the control function, we get \begin{align} \label{eq:justforbelow} D' &\le y_\ell O(k) - x_\ell, \end{align} i.e., the potential charges the online service cost to $O(k)$ times the offline service cost. Indeed, the potential function $D$ used in Section~\ref{sec:growth} is given by $D=2L+M$, up to the replacement of $w$ by $\tilde w$. Moreover we note that \eqref{eq:justforbelow} remains true with the control function $f=-\frac{2x_\ell}{x_\ell+\delta_\ell}w'$, which will be helpful for the movement as we discuss next. \subsection{Bounding movement via damped control and revised weights} Besides the service cost, we also need to bound the movement cost. By \eqref{eq:MD} and \eqref{eq:normalCone} and since $\nabla^2\Phi(x)$ is the diagonal matrix with entries $\frac{w_u}{x_u+\delta_u}$, the movement of the algorithm satisfies \begin{align} w_u x_u'&=(x_u+\delta_u)(f_u + \lambda_{p(u)} - \lambda_{u})\nonumber\\ &= -2x_u w_u' + (x_u+\delta_u)(\lambda_{p(u)} - \lambda_{u}),\label{eq:boundMovementMotivation} \end{align} where the last equation uses $f=-\frac{2x_\ell}{x_\ell+\delta_\ell}w'$. Up to the discrepancy between $w$ and $\tilde w$, this is precisely Equation~\eqref{eq: dynamic}. Here, damping the control function $f$ by the factor $\frac{x_\ell}{x_\ell+\delta_\ell}$ is crucial: Otherwise there would be additional movement of the form $\delta_\ell w_\ell'$. 
Although a similar $\delta$-induced term in the movement exists also in~\cite{BCLL19}, the argument in \cite{BCLL19} to control such a term relies on $w$ being fixed and would therefore fail in our case. Scaling by $\frac{x_\ell}{x_\ell+\delta_\ell}$ prevents such movement from occurring in the first place. To bound the movement cost,~\cite{BCLL19} employs a \emph{weighted depth potential} defined as \begin{align*} \Psi = \sum_{u\in V^0}h_u w_u x_u. \end{align*} Our calculation in Lemma~\ref{lm: Psi change} suggests that we can use the same $\Psi$ here, choosing the overall potential function as $P= 4kD - 2\Psi$. But now the problem is that the combinatorial depths $h_u$ can change during merge steps, which would lead to an increase of the overall potential $P$. To counteract this, we use the revised weights $\tilde w_u$: The scaling by $\frac{2k-1}{2k-h_u}$ in their definition means that $\tilde w_u$ slightly increases whenever $h_u$ decreases, and overall this ensures that the potential $P$ does not increase in a merge step. Since $\frac{2k-1}{2k-h_u}\in[1,2]$, such scaling loses only a constant factor in the competitive ratio. The additional term $2^{-j_u}$ in the definition of the revised weights only serves to ensure that $\tilde w_u>0$, so that $\Phi$ is strongly convex as required by the mirror descent framework. \subsection{Existence of the mirror descent path for time-varying $\Phi_t$}\label{sec:MDExistence} To justify the existence of our algorithm, we need the following theorem, which generalizes~\cite[Theorem 2.2]{BCLLM18} to the setting where a fixed regularizer $\Phi$ is replaced by a time-varying regularizer $\Phi_t$. \begin{theorem}\label{thm: existence of x} Let $K\subset \mathbb{R}^n$ be compact and convex, $f\colon[0,\infty)\times K\to \mathbb{R}^n$ continuous, $\Phi_t\colon K\to\mathbb{R}$ strongly convex for $t\ge 0$ and such that $(x,t)\mapsto \nabla^2\Phi_t(x)^{-1}$ is continuous. 
Then for any $x_0\in K$ there is an absolutely continuous solution $x\colon[0,\infty)\to K$ satisfying \begin{align*} \nabla^2\Phi_t(x(t))\cdot x'(t)&\in f(t,x(t))-N_K(x(t))\\ x(0)&=x_0. \end{align*} If $(x,t)\mapsto\nabla^2\Phi_t(x)^{-1}$ is Lipschitz and $f$ locally Lipschitz, then the solution is unique. \end{theorem} \begin{proof} It suffices to consider a finite time interval $[0,\tau]$. It was shown in \cite[Theorem~5.7]{BCLLM18} that for $C\subset\mathbb{R}^n$ compact and convex, $H\colon C\to\{A\in\mathbb{R}^{n\times n}\mid A\succ 0\}$ continuous, $g\colon[0,\tau]\times C\to\mathbb{R}^n$ continuous and $y_0\in C$, there is an absolutely continuous solution $y\colon[0,\tau]\to C$ satisfying \begin{align*} y'(t)&\in H(y(t))\cdot(g(t,y(t))-N_C(y(t)))\\ y(0)&=y_0. \end{align*} We choose $C=[-1,\tau+1]\times K$, \begin{align*} H(t,x)= \begin{bmatrix} 1& 0 \\ 0 & \nabla^2\Phi_t(x)^{-1} \end{bmatrix}, \end{align*} $g(t,(s,x))=(1,f(t,x))$ and $y_0=(0,x_0)$. Decomposing the solution as $y(t)=(s(t),x(t))$ for $s(t)\in [-1,\tau+1]$ and $x(t)\in K$, and noting that for $s(t)\in[0,\tau]$ we have $N_C(y(t))= \{0\}\times N_K(x(t))$, we get \begin{align*} s(t)&=t\\ x'(t)&\in \nabla^2\Phi_t(x(t))^{-1}\cdot(f(t,x(t))-N_K(x(t)))\\ x(0)&=x_0. \end{align*} By \cite[Theorem 5.9]{BCLLM18} the solution is unique provided $H$ is Lipschitz and $g$ locally Lipschitz, which is satisfied under the additional assumptions of the theorem. \end{proof} For every continuous step, $\Phi_t=\sum_{u\in V^0}\tilde w_u(t)(x_u+\delta_u)\log (x_u+\delta_u)$ and $f(t,x)=-\frac{2x_\ell(t)}{x_\ell(t)+\delta_\ell} w'(t)$ satisfy the assumptions of the theorem. By the calculation in~\eqref{eq:boundMovementMotivation} (with $w$ replaced by $\tilde w$), the corresponding well-defined algorithm is the one from equation \eqref{eq: dynamic}. Note that Lagrange multipliers for the constraints $x_u\ge 0$ are indeed not needed (see below).
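The resulting dynamics can also be simulated directly. The following Euler sketch runs a continuous step on a toy instance with two sibling leaves below a single internal vertex $v$ (so $x_1+x_2=1$ and only $\lambda_v$ is needed); the values of the $\delta_u$ are placeholders rather than the ones defined in the paper. It also illustrates the nonnegativity of the multiplier:

```python
def simulate(w1=1.0, w2=1.0, d1=0.25, d2=0.25, T=1.0, steps=20000):
    """Euler sketch of a continuous step on two sibling leaves with
    x_1 + x_2 = 1.  Leaf 1 grows (w_1' = 1), lambda vanishes at leaves,
    and lambda_v is the unique multiplier keeping x_1' + x_2' = 0 in
        w_u x_u' = -2 x_u w_u' + (x_u + delta_u) * lambda_v.
    The deltas are placeholder values, not the ones defined in the paper.
    """
    x1, dt = 0.5, T / steps
    for _ in range(steps):
        x2 = 1.0 - x1
        lam_v = (2 * x1 / w1) / ((x1 + d1) / w1 + (x2 + d2) / w2)
        assert lam_v >= 0               # the multiplier is nonnegative
        x1 += dt * (-2 * x1 + (x1 + d1) * lam_v) / w1
        w1 += dt                        # the weight of leaf 1 grows at rate 1
    return x1

x1 = simulate()
assert 0 < x1 < 0.5                     # mass drains away from the growing leaf
```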
\paragraph{Sign of Lagrange multipliers.} We stipulated in Section~\ref{sec:algo} that $\lambda_u\ge 0$ for $u\in V\setminus\mathcal{L}$, and we do not have any Lagrange multipliers for the constraints $x_u\ge 0$. To see this, it suffices to show that $\lambda_u\ge 0$ for $u\in V\setminus\mathcal{L}$ in the case that the constraints $x_u\ge 0$ are removed from $K$: If this is true, then \eqref{eq: dynamic} shows for any leaf $u\in\mathcal{L}$ that $x_u'<0$ is possible only if $x_u>0$ (since $\lambda_u=0$ when $u$ is a leaf, recalling~\eqref{eq:normalCone}). Hence, $x_u\ge 0$ holds automatically for any leaf $u$, and thus also for internal vertices $u$ due to the constraints of the polytope. Consequently, we do not need Lagrange multipliers for constraints $x_u\ge 0$. The proof that $\lambda_u\ge 0$ for $u\in V\setminus\mathcal{L}$ is completely analogous to~\cite[Lemma~12]{BC21}. As an alternative proof, one can also see this by replacing in $K$ the constraints $\sum_{v \colon p(v) = u} x_v = x_u$ by $\sum_{v \colon p(v) = u} x_v \ge x_u$, which directly gives $\lambda_u\ge 0$ in~\eqref{eq:normalCone}; this still yields a feasible solution (with the constraints satisfied with equality) by arguments completely analogous to~\cite[Lemma~3.2]{BCLL19}. \section{Reductions and Applications}\label{sec: applications} In this section we show that layered graph traversal and small set chasing (a.k.a. metrical service systems) reduce to the evolving tree game. The reductions imply the following new bounds on the competitive ratio for these problems, the main result of this section. \begin{theorem}\label{thm: main layered} There are randomized $O(k^2)$-competitive online algorithms for traversing width $k$ layered graphs, as well as for chasing sets of cardinality $k$ in any metric space. \end{theorem} \subsection{Layered graph traversal} Recall the definition of the problem in the introduction. We will introduce useful notation. 
Let $V_0 = \{a\}, V_1, V_2, \dots, V_n = \{b\}$ denote the layers of the input graph $G$, in consecutive order. Let $E_1,E_2,\dots,E_n$ be the partition of the edge set of $G$, where for every $i=1,2,\dots,n$, every edge $e\in E_i$ has one endpoint in $V_{i-1}$ and one endpoint in $V_i$. Also recall that $w: E\rightarrow{\mathbb{N}}$ is the weight function on the edges, and $k = \max\{|V_i|\colon i=0,1,2,\dots,n\}$ is the {\em width} of $G$. The input $G$ is revealed gradually to the searcher. Let $G_i = (V_0\cup V_1\cup\cdots\cup V_i, E_1\cup E_2\cup\cdots\cup E_i)$ denote the subgraph that is revealed up to and including step $i$. The searcher, currently at a vertex $v_{i-1}\in V_{i-1}$, chooses a path in $G_i$ from $v_{i-1}$ to a vertex $v_i\in V_i$. Let $w_{G_i}(v_{i-1},v_i)$ denote the total weight of a shortest path from $v_{i-1}$ to $v_i$ in $G_i$. (Clearly, the searcher has no good reason to choose a longer path.) Formally, a pure strategy (a.k.a. deterministic algorithm) of the searcher is a function that maps, for all $i=1,2,\dots$, a layered graph $G_i$ (given including its partition into a sequence of layers) to a vertex in $V_i$ (i.e., the searcher's next move). A mixed strategy (a.k.a. randomized algorithm) of the searcher is a probability distribution over such functions. \subsubsection{Fractional strategies} Given a mixed strategy $S$ of the searcher, we can define a sequence $P_0,P_1,P_2,\dots$, where $P_i$ is a probability distribution over $V_i$. For every $v\in V_i$, $P_i(v)$ indicates the probability that the searcher's mixed strategy $S$ chooses to move to $v$ (i.e., $v_i = v$). A fractional strategy of the searcher is a function that maps, for all $i=1,2,\dots$, a layered graph $G_i$ to a probability distribution $P_i$ over $V_i$. For a fractional strategy choosing probability distributions $P_0,P_1,P_2,\dots,P_n$, we define its cost as follows.
For $i=1,2,\dots,n$, let $\tau_i$ be a probability distribution over $V_{i-1}\times V_i$, with marginals $P_{i-1}$ on $V_{i-1}$ and $P_i$ on $V_i$, that minimizes $$ w_{G_i,\tau_i}(P_{i-1},P_i) = \sum_{u\in V_{i-1}}\sum_{v\in V_i} w_{G_i}(u,v) \tau_i(u,v). $$ The cost of the strategy is then defined as $\sum_{i=1}^n w_{G_i,\tau_i}(P_{i-1},P_i)$. The following lemma can be deduced through the reduction to small set chasing discussed later, the fact that small set chasing is a special case of metrical task systems, and a similar known result for metrical task systems. Here we give a straightforward direct proof. \begin{lemma}\label{lm: fractional to mixed} For every fractional strategy of the searcher there is a mixed strategy incurring the same cost. \end{lemma} \begin{proof} Fix any fractional strategy of the searcher, and suppose that the searcher plays $P_0,P_1,P_2,\dots,P_n$ against a strategy $G_n$ of the designer. That is, the designer chooses the number of rounds $n$, and plays in round $i$ the last layer of $G_i = (\{a\}\cup V_1\cup V_2\cup\cdots\cup V_i,E_1\cup E_2\cup\cdots\cup E_i)$. The searcher responds with $P_i$, which is a function of $G_i$. Notice that when the designer reveals $G_i$, the searcher can compute $\tau_i$, because that requires only the distance function $w_{G_i}$ and the marginal probability distributions $P_{i-1}$ and $P_i$. We construct a mixed strategy of the searcher inductively as follows. It is sufficient to define, for every round $i$, the conditional probability distribution on the searcher's next move $v_i\in V_i$, given any possible play so far. Initially, at the start of round $1$, the searcher is deterministically at $a$. Suppose that the searcher reached a vertex $v_{i-1}\in V_{i-1}$. Then, we set $\Pr[v_i = v\in V_i\mid v_{i-1}] = \frac{\tau_i(v_{i-1},v)}{P_{i-1}(v_{i-1})}$. Notice that the searcher can move from $v_{i-1}$ to $v_i$ along a path in $G_i$ of length $w_{G_i}(v_{i-1},v_i)$.
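The rounding rule just defined is easy to check in code: the conditional kernel $\tau_i(u,\cdot)/P_{i-1}(u)$ maps $P_{i-1}$ to $P_i$ exactly. A toy example (the distributions and the coupling are made up, subject only to the marginal constraints):

```python
import numpy as np

P_prev = np.array([0.5, 0.3, 0.2])        # P_{i-1}, distribution on V_{i-1}
tau = np.array([[0.3, 0.2, 0.0],          # a coupling tau_i(u, v)
                [0.1, 0.1, 0.1],
                [0.0, 0.0, 0.2]])
assert np.allclose(tau.sum(axis=1), P_prev)
P_next = tau.sum(axis=0)                  # P_i, the other marginal

# conditional rule of the mixed strategy: Pr[v_i = v | v_{i-1} = u]
cond = tau / P_prev[:, None]
assert np.allclose(cond.sum(axis=1), 1.0)  # a probability kernel

# the resulting distribution on V_i is exactly P_i
assert np.allclose(P_prev @ cond, P_next)
```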
We now analyze the cost of the mixed strategy thus defined. We prove by induction over the number of rounds that in round $i$, for every pair of vertices $u\in V_{i-1}$ and $v\in V_i$, the probability that the searcher's chosen pure strategy (which is a random variable) reaches $v$ is $P_i(v)$ and the probability that this strategy moves from $u$ to $v$ is $\tau_i(u,v)$ (the latter assertion is required to hold for $i > 0$). The base case is $i=0$, which is trivial, as the searcher's initial position is $a$, $P_0(a) = 1$, and the statement about $\tau$ is vacuous. So, assume that the statement is true for $i-1$. By the definition of the strategy, in round $i$, for every $v\in V_i$, $$ \Pr[v_i = v] = \sum_{u\in V_{i-1}} \Pr[v_{i-1} = u] \cdot \Pr[v_i = v\mid v_{i-1} = u] = \sum_{u\in V_{i-1}} P_{i-1}(u)\cdot \frac{\tau_i(u,v)}{P_{i-1}(u)} = P_i(v), $$ where the penultimate equality uses the induction hypothesis, and the final equality uses the condition on the marginals of $\tau_i$ at $V_i$. Similarly, \begin{eqnarray*} & & \Pr[\hbox{the searcher moves from } u \hbox{ to } v] \\ & = & \Pr[v_{i-1} = u]\cdot \Pr[\hbox{the searcher moves from } u \hbox{ to } v\mid v_{i-1} = u] \\ & = & P_{i-1}(u)\cdot \frac{\tau_i(u,v)}{P_{i-1}(u)} \\ & = & \tau_i(u,v). \end{eqnarray*} Thus, by linearity of expectation, the searcher's expected total cost is $$ \sum_{i=1}^n \sum_{u\in V_{i-1}} \sum_{v\in V_i} \tau_i(u,v)\cdot w_{G_i}(u,v), $$ and this is by definition equal to the cost of the searcher's fractional strategy. \end{proof} \subsubsection{Layered trees} We now discuss special cases of layered graph traversal whose solution implies a solution to the general case. We begin with a definition. \begin{definition} A rooted layered tree is an acyclic layered graph, where every vertex $v\ne a$ has exactly one neighbor in the preceding layer. We say that $a$ is the root of such a tree. 
\end{definition} \begin{theorem}[{Fiat et al.~\cite[Section 2]{FFKRRV91}}]\label{thm: layered trees suffice} Suppose that the designer is restricted to play a width $k$ rooted layered tree with edge weights in $\{0,1\}$, and suppose that there is a $C$-competitive (pure or mixed) strategy of the searcher for this restricted game. Then, there is a $C$-competitive (pure or mixed, respectively) strategy of the searcher for the general case, where the designer can play any width $k$ layered graph with non-negative integer edge weights. \end{theorem} A width $k$ rooted layered tree is binary iff every vertex has at most two neighbors in the following layer. (Thus, the degree of each node is at most $3$.) \begin{corollary}\label{cor: layered binary trees suffice} The conclusion of Theorem~\ref{thm: layered trees suffice} holds if there is a $C$-competitive strategy of the searcher for the game restricted to the designer using width $k$ rooted layered binary trees with edge weights in $\{0,1\}$. Moreover, the conclusion holds if in addition we require that between two adjacent layers there is at most one edge of weight $1$. \end{corollary} \begin{proof} Suppose that the designer plays an arbitrary width $k$ layered tree. The searcher converts the tree on-the-fly into a width $k$ layered binary tree, uses the strategy for binary trees, and maps the moves back to the input tree. The conversion is done as follows. Between every two layers that the designer generates, the searcher simulates $\lceil \log_2 k \rceil-1$ additional layers. If a vertex $u\in V_{i-1}$ has $m\le k$ neighbors $v_1,v_2,\dots,v_m\in V_i$, the searcher places in the simulated layers between $V_{i-1}$ and $V_i$ a layered binary tree rooted at $u$ with $v_1,v_2,\dots,v_m$ as its leaves. Notice that this can be done simultaneously for all such vertices in $V_{i-1}$ without violating the width constraint in the simulated layers. 
The lengths of the new edges are all $0$, except for the edges touching the leaves. For $j=1,2,\dots,m$, the edge touching $v_j$ inherits the length $w(\{u,v_j\})$ of the original edge $\{u,v_j\}$. Clearly, any path traversed in the simulated tree corresponds to a path traversed in the input tree that has the same cost---simply delete from the path in the simulated tree the vertices in the simulated layers; the edges leading to them all have weight $0$. The additional requirement is easily satisfied by now making the following change. Between every two layers $i-1,i$ of the rooted layered binary tree insert $k-1$ simulated layers. Replace the $j$-th edge between layer $i-1,i$ (edges are indexed arbitrarily) by a length $k$ path. If the original edge has weight $0$, all the edges in the path have weight $0$. If the original edge has weight $1$, then all the edges in the path have weight $0$, except for the $j$-th edge that has weight $1$. \end{proof} \subsection{Small set chasing} This two-person game is defined with respect to an underlying metric space ${\cal M} = (X,\mathrm{dist})$. The game alternates between the adversary and the algorithm. The latter starts at an arbitrary point $x_0\in X$. The adversary decides on the number of rounds $n$ that the game will be played (this choice is unknown to the algorithm until after round $n$). In round $i$ of the game, the adversary chooses a finite set $X_i\subset X$. The algorithm must then move to a point $x_i\in X_i$. The game is parametrized by an upper bound $k$ on $\max_{i=1}^n |X_i|$. The algorithm pays $\sum_{i=1}^n \mathrm{dist}(x_{i-1},x_i)$ and the adversary pays $$ \min\left\{\sum_{i=1}^n \mathrm{dist}(y_{i-1},y_i):\ y_0=x_0\wedge y_1\in X_1\wedge\cdots\wedge y_n\in X_n\right\}. 
$$ \begin{theorem}[{Fiat et al.~\cite[Theorem 18]{FFKRRV91}}] For every $k\in{\mathbb{N}}$ and for every $C = C(k)$, there exists a pure (mixed, respectively) $C$-competitive online algorithm for cardinality $k$ set chasing in every metric space with integral distances iff there exists a pure (mixed, respectively) $C$-competitive online algorithm for width $k$ layered graph traversal. \end{theorem} \subsection{Reduction to evolving trees} The main result of this section, Theorem~\ref{thm: main layered}, is implied by the following reduction. \begin{lemma}\label{lm: LGT to DTG reduction} Let $k\in{\mathbb{N}}$, let $C = C(k)$, and let $\epsilon > 0$. Suppose that there exists a (pure, mixed, fractional, respectively) $C$-competitive strategy for the evolving tree game on binary trees of maximum depth $k$ that always pays a cost of at most $C\cdot (\opt + \epsilon)$. Then, there exists a (pure, mixed, fractional, respectively) $C$-competitive strategy for traversing width $k$ layered graphs with minimum non-zero edge weight at least $\epsilon$.\footnote{In particular, if the edge weights are integers, one can take $\epsilon = 1$.} \end{lemma} \begin{proof} Consider first fractional strategies. By Lemma~\ref{lm: fractional to mixed} and Corollary~\ref{cor: layered binary trees suffice}, we can restrict our attention to designing fractional strategies on width $k$ rooted layered binary trees with edge weights in $\{0,1\}$. Now, suppose that we have a fractional $C$-competitive strategy for the depth $k$ evolving tree game. We use it to construct a fractional $C$-competitive strategy for the traversal of width $k$ rooted layered binary trees as follows. To simplify the proof, add a virtual layer $-1$ containing a single node $p(a)$ connected to the source $a$ with an edge of weight $0$. We construct the layered graph strategy by induction over the current layer.
Our induction hypothesis is that in the current state: \begin{enumerate} \item The evolving tree is homeomorphic to the layered subtree spanned by the paths from $p(a)$ to the nodes in the current layer. \item In this homeomorphism, $r$ is mapped to $p(a)$ and the leaves of the evolving tree are mapped to the leaves of the layered subtree, which are the nodes in the current layer. \item In this homeomorphism, each edge of the evolving tree is mapped to a path of the same weight in the layered subtree. \item The probability assigned to a leaf by the fractional strategy for the evolving tree is equal to the probability assigned to its homeomorphic image (a node in the current layer) by the fractional strategy for the layered tree. \end{enumerate} Initially, the traversal algorithm occupies the source node $a$ with probability $1$. The evolving tree consists of the two initial nodes $r$ and $c_r$, with a $0$-weight edge connecting them. The homeomorphism maps $r$ to $p(a)$ and $c_r$ to $a$. The evolving tree algorithm occupies $c_r$ with probability $1$. Hence, the induction hypothesis is satisfied at the base of the induction. For the inductive step, consider the current layer $i-1$, the new layer $i$ and edges between them. If a node in layer $i-1$ has no child in layer $i$, we delete the homeomorphic preimage (which must be a leaf and cannot be $c_r$) in the evolving tree. If a node $v$ in layer $i-1$ has two children in layer $i$, we execute a fork step where we generate two new leaves and connect them to the preimage of $v$ (a leaf) in the evolving tree, and extend the homeomorphism to the new leaves in the obvious way. Otherwise, if a node $v$ in layer $i-1$ has a single child in layer $i$, we modify the homeomorphism to map the preimage of $v$ to its child in layer $i$. 
After executing as many such discrete steps as needed, if there is a weight $1$ edge connecting a node $u$ in layer $i-1$ to a node $v$ in layer $i$, we execute a continuous step, increasing the weight of the edge incident on the homeomorphic preimage of $v$ in the evolving tree (which must be a leaf) for a time interval of length $1$. After executing all these steps, we simply copy the probability distribution on the leaves of the evolving tree to the homeomorphic images in layer $i$ of the layered tree. This clearly satisfies all the induction hypotheses at layer $i$. Notice that since the target $b$ is assumed to be the only node in the last layer, when we reach it, the evolving tree is reduced to a single edge connecting $r$ to the homeomorphic preimage of $b$. The weight of this edge equals the weight of the path in the layered tree from $p(a)$ to $b$, which is the same as the weight of the path from $a$ to $b$ (because the edge $\{p(a),a\}$ has weight $0$). Moreover, the fractional strategy that is induced in the layered tree does not pay more than the fractional strategy in the evolving tree. Hence, it is $C$-competitive. Finally, deterministic strategies are fractional strategies restricted to probabilities in $\{0,1\}$, hence the claim for deterministic strategies is a corollary. This also implies the claim for mixed strategies, as they are probability distributions over pure strategies. \end{proof} We note that the depth $k$ evolving tree game is strictly more general than width $k$ layered graph traversal. In particular, the evolving binary tree corresponding to the width $k$ layered graph game has depth at most $k$ and also at most $k$ leaves. However, in general a depth $k$ binary tree may have $2^{k-1}$ leaves. Our evolving tree algorithm and analysis apply without further restriction on the number of leaves. \newpage \bibliographystyle{plainurl}
https://arxiv.org/abs/1510.06353
Localization of Quantum States and Landscape Functions
Eigenfunctions in inhomogeneous media can have strong localization properties. Filoche \& Mayboroda showed that the function $u$ solving $(-\Delta + V)u = 1$ controls the behavior of eigenfunctions $(-\Delta + V)\phi = \lambda\phi$ via the inequality $$|\phi(x)| \leq \lambda u(x) \|\phi\|_{L^{\infty}}.$$ This inequality has proven to be remarkably effective in predicting localization, and recently Arnold, David, Jerison, Mayboroda \& Filoche connected $1/u$ to decay properties of eigenfunctions. We aim to clarify properties of the landscape: the main ingredient is a localized variation estimate obtained from writing $\phi(x)$ as an average over Brownian motion $\omega(\cdot)$ started in $x$ $$\phi(x) = \mathbb{E}_{x}\left(\phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \right).$$ This variation estimate will guarantee that $\phi$ has to change by at least a factor of $2$ in a small ball, which implicitly creates a landscape whose relationship with $1/u$ we discuss.
\section{Introduction} \subsection{The Landscape function.} It is well known that physical systems comprised of inhomogeneous materials can exhibit peculiar vibration properties: let $\Omega \subset \mathbb{R}^n$ be open, bounded and \begin{align*} (-\Delta + V)\phi = \lambda \phi \qquad \mbox{in~}\Omega ~\mbox{with Dirichlet boundary conditions,} \end{align*} where $V:\Omega \rightarrow \mathbb{R}_{\geq 0}$ is a real-valued, nonnegative potential. Anderson \cite{anderson} noticed that for some potentials the low-lying eigenfunctions tend to strongly localize in a subregion of space in a very complicated manner. It seemed difficult to get any information about the localization behavior of these first few eigenfunctions without explicitly computing them. \\ In a truly remarkable contribution, Filoche \& Mayboroda \cite{fil} have given a simple but astonishingly effective method to predict the behavior of low-energy eigenfunctions. Their approach is based on the following inequality (originally due to Moler \& Payne \cite{mol}): if we associate to the problem a \textit{landscape function} $u:\Omega \rightarrow \mathbb{R}$ given as the solution of \begin{align*} (-\Delta + V)u = 1 \qquad \mbox{in~}\Omega \subset \mathbb{R}^n~~~~~ \mbox{with Dirichlet boundary conditions,} \end{align*} then there is the inequality $$ |\phi(x)| \leq \lambda u(x) \|\phi\|_{L^{\infty}(\Omega)} .$$ The regions where $u$ is small will be of particular interest because an eigenfunction $\phi$ can only localize in $\left\{x: \lambda u(x) \geq 1\right\} \subset \Omega$. The landscape function turns out to be more effective than that: it is instructive to regard the graph of $u(x)$ as a landscape comprised of 'peaks' and 'valleys'; the valleys may then be understood as inducing a partition of the domain. Numerical experiments \cite{fil} suggest that low-lying eigenfunctions respect that partition and favor localization in one or at most a few elements in that partition.
Moreover, these localized eigenfunctions are 'almost' compactly supported in the sense that, in crossing from one element of the partition to another, eigenfunctions seem to experience exponential decay when crossing the valley (see \cite{fil}). \subsection{The effective potential.} This exponential drop in the size of an eigenfunction when crossing a valley was recently made precise by Arnold, David, Jerison, Mayboroda \& Filoche \cite{arnold}, who point out that the inverse of the landscape function $1/u(x)$ acts as an effective potential responsible for the exponential decay of the localized states (the connection being that $u(x)$ is small in valleys, which makes $1/u(x)$ large, and large potentials induce large decay). Their approach is based on writing an eigenfunction as $\phi = u \psi$ for some unknown function $\psi$. The equation $$ \left( - \Delta + V \right) \phi = \lambda \phi$$ then transforms into $$ -\frac{1}{u^2} \mbox{div}(u^2 \nabla \psi) + \frac{1}{u} \psi = \lambda \psi.$$ The new dominating potential $W \equiv 1/u$ is now responsible for the underlying dynamics. The next step is to build an Agmon distance $$ \rho(r_1, r_2) = \min_{\gamma}\left( \int_{\gamma}{\sqrt{(W(r) - \lambda)_{+}} ds} \right),$$ where $\gamma$ ranges over all paths from $r_1$ to $r_2$, and to use Agmon's inequality \cite{agmon} to deduce that for eigenfunctions $\phi$ localized in $r_0 \in \Omega$ $$ |\phi(r)| \lesssim e^{-\rho(r_0, r)}.$$ This indicates that $W \equiv 1/u$ plays a distinguished role. The paper \cite{arnold} also gives convincing numerical evidence that $W - \lambda$ predicts decay more accurately than the classical quantity $V-\lambda$. This might seem surprising because clearly $V$ determines the behavior of the eigenfunctions.
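As a concrete illustration of the landscape bound $|\phi(x)| \leq \lambda u(x) \|\phi\|_{L^{\infty}}$ discussed above, one can discretize $(-\Delta + V)u = 1$ on an interval with Dirichlet boundary conditions and check the inequality for the computed eigenfunctions. The sketch below is not from the original paper; the oscillatory potential and all parameters are arbitrary choices for illustration.

```python
import numpy as np

# Discretize (-d^2/dx^2 + V) on (0,1) with Dirichlet boundary conditions.
n, L = 400, 1.0
h = L / (n + 1)
x = np.linspace(h, L - h, n)
V = 8000.0 * np.sin(9 * np.pi * x) ** 2     # arbitrary oscillatory potential

# Standard 3-point stencil for -Delta (positive definite convention).
lap = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h ** 2
H = lap + np.diag(V)

# Landscape function: solve (-Delta + V) u = 1.
u = np.linalg.solve(H, np.ones(n))

# Eigenpairs of the same operator.
eigvals, eigvecs = np.linalg.eigh(H)

# Check |phi(x)| <= lam * u(x) * ||phi||_inf for the lowest few eigenfunctions.
for k in range(5):
    lam, phi = eigvals[k], eigvecs[:, k]
    assert np.all(np.abs(phi) <= lam * u * np.abs(phi).max() + 1e-9)
```

In the discrete setting the bound in fact holds exactly: $H$ is an M-matrix, so $H^{-1}$ is entrywise nonnegative, which is the discrete analogue of the maximum principle behind the inequality.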
\subsection{Organization.} The purpose of our paper is to further clarify these observations and the interplay between an eigenfunction doubling its size in a small ball and the landscape function; the main tool is an identity following from the Feynman-Kac formula. More precisely, we \begin{itemize} \item derive and discuss the relevant identity, \item use it to prove a variation estimate localized in a small ball, \item show how, with respect to decay, $V$ is not as important as a suitable mollification of $V$, \item compare how $1/u(x)$ fits into that framework \item and discuss some refinements of the landscape function $u(x)$. \end{itemize} We always assume that $\Omega \subset \mathbb{R}^n$ is bounded with a smooth boundary and that $V \in C^2(\Omega)$. Technically, this excludes 'block potentials' (which are only $L^{\infty}$), but it is clear that the first few eigenfunctions hardly change if a potential is replaced by a suitable mollification and therefore the assumption is without loss of generality. \section{Local analysis of the heat flow} \subsection{The torsion function.} The landscape function arising from $V = 0$, i.e. the solution of $$ -\Delta v = 1 \qquad \mbox{in~}\Omega ~\mbox{with Dirichlet boundary conditions,}$$ is a classical object in shape optimization called the \textit{torsion function}. It appears in elasticity theory \cite{bandle}, heat conduction \cite{vandenberg} and geometry \cite{mark}. A version of a landscape function with potential already appeared in the context of homogenization in work of Coifman \& Meyer (unpublished, but see the application to parabolic operators by S. Wu \cite{wu}). The most prominent role of the torsion function in the field of shape optimization (see e.g. \cite{ban}) is that $v(x)$ gives the expected lifetime of Brownian motion started in $x$ until it hits the boundary.
This suggests interpreting the landscape function in that language; the idea of using Brownian motion to analyze decay properties of eigenfunctions is classical and was used very successfully in seminal papers by Carmona \cite{carmona}, Carmona \& Simon \cite{simon1}, Carmona, Masters \& Simon \cite{simon2}, Simon \cite{simon3} and others; recently, a similar technique was used by the author \cite{stein} to obtain bounds on the size of nodal sets of Laplacian eigenfunctions $\left\{x : \phi(x) = 0\right\}$ on compact manifolds. \subsection{The idea.} The crucial ingredient is a simple equation representing an eigenfunction $\phi(x)$ as a localized average over local Brownian motion paths running for a short time. This equation is not new and has been used earlier for very similar purposes, see e.g. Carmona \& Simon \cite{simon1}. It is obtained by looking at the effect of the semigroup $e^{t(\Delta - V)}$ on the eigenfunction. Since eigenfunctions diagonalize the semigroup, we have that if $$ (-\Delta + V)\phi = \lambda \phi \qquad \mbox{then} \qquad e^{t(\Delta - V)}\phi = e^{-\lambda t} \phi.$$ At the same time, there is another interpretation of the action of the semigroup in terms of Brownian motion. The \textit{Feynman-Kac formula} states that for an arbitrary function $f$ $$ e^{t(\Delta - V)}f(x) = \mathbb{E}_{x}\left(f(\omega(t)) e^{-\int_{0}^{t}{V(\omega(z))dz}} \right),$$ where the expectation $\mathbb{E}_x$ is taken with respect to Brownian motion $\omega(\cdot)$ started in $x$, running for time $t$ and destroyed upon impact on the boundary.
Combining these two equations, we get $$ \forall t \geq 0 \qquad \phi(x) = \mathbb{E}_{x}\left(\phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \right).$$ This equation describes a complicated relationship between $\phi(x)$, $\lambda$ and $V$; however, it is perfectly suited for establishing a variation estimate in a small ball: assuming the eigenfunction $\phi$ to be essentially constant on a small scale allows us to move the eigenfunction $\phi$ out of the expectation at the cost of a very small error. We sketch a non-rigorous version of the argument. \begin{center} \begin{figure}[h!] \begin{tikzpicture}[scale=1.2] \coordinate [label=left:$x_0$] (x) at (0,0); \coordinate [] (y) at (1,1); \node (D) [name path=D,draw,circle through=(y),label=left:$B$] at (x) {}; \draw (0,0) \foreach \x in {1,...,400}{--++(rand*0.08,rand*0.08)}; \end{tikzpicture} \caption{Brownian motion started in $x_0$.} \end{figure} \end{center} Assume w.l.o.g. that $\phi(x_0) > 0$. Let $B = B(x_0, r)$ be the ball centered at $x_0$ with maximal radius $r>0$ such that $$ \forall x \in B: \frac{1}{2} |\phi(x_0)| \leq |\phi(x)| \leq 2 |\phi(x_0)|.$$ Assume furthermore that $V(x_0) \geq \lambda$ and that $V$ is essentially constant on $B$. We now consider the equation above for $t = c_n r^2$ with $c_n$ a small universal constant depending only on the dimension of $\Omega$. By making $c_n$ sufficiently small, we can ensure that 99\% of all Brownian paths are fully contained in $B$ up to time $t$.
Furthermore, $$ \phi(x_0) = \mathbb{E}_{x_0}\left(\phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \right) \sim \phi(x_0) \mathbb{E}_{x_0}\left(e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \right).$$ In order for this to hold, we clearly require $$ \mathbb{E}_{x_0}e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \sim 1 \qquad \mbox{and thus} \qquad -1 \ll \lambda t-\int_{0}^{t}{V(\omega(z))dz} \leq 0.$$ However, this last quantity can be approximated by $$ 0 \geq \lambda t-\int_{0}^{t}{V(\omega(z))dz} \sim t\left(\lambda - V(x_0)\right) \sim r^2\left(\lambda - V(x_0)\right) \gg -1,$$ which is a statement about the maximal size of $r$ depending on $\lambda - V(x_0)$. We will now make this heuristic sketch precise and phrase it in classical terms. However, we emphasize that the most useful way of thinking about the setup is in terms of path integrals: the condition $V(x) - \lambda \geq c$ could, for example, be replaced by $V(x) - \lambda \geq 0$ together with $V(x) - \lambda \geq c$ holding 'on average'. \begin{theorem}[Variation estimate] Suppose $(-\Delta + V)\phi = \lambda \phi$ for $V \geq 0$. There exists a universal constant $c_n$ (depending only on the dimension) such that the following holds: if, for some $c > 0$, $$ V(x) - \lambda \geq c \qquad \mbox{or} \qquad V(x) - \lambda \leq -c$$ uniformly on the ball $$ B = B\left(x_0, \frac{c_n}{\sqrt{c}} \sqrt{\frac{\lambda}{c} + \log{ \left(c_n \frac{\|\phi \|_{L^{\infty}}}{| \phi(x_0)|} \right) }} \right) \subset \Omega,$$ then we have $$ \frac{\sup_{x \in B}{|\phi(x)|}}{\inf_{x \in B} |\phi(x)|} \geq 2.$$ \end{theorem} Opposite statements (especially for the case $V=0$) are usually called 'doubling estimates' and guarantee that an eigenfunction can at most double its size in a certain region of space, which then bounds the order with which it can vanish around a root (see e.g. Bakri \cite{bakri} or the survey of Zelditch \cite{zeld}). Let us first discuss the case $V(x) - \lambda \geq c$.
Then the result has the expected scaling and translates into the classical $\sqrt{(V - \lambda)_{+}}$ factor in the Agmon metric. The proof of Theorem 1 yields a stronger result: the uniform estimate $V - \lambda \geq c$ is not necessary (it is only required 'w.r.t. path integrals') and we can rephrase the condition. \begin{quote} \textit{Local.} $\phi$ varies locally by a constant factor on scale $\sim 1/\sqrt{(V(x)-\lambda)_{+}}$.\\ \textit{Nonlocal.} $\phi$ varies locally by a constant factor on the scale $\sim \sqrt{t_x}$, where $$ t_x = \inf_{} \left\{t > 0: \mathbb{E}_{x}~e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \leq \frac{1}{2} \right\}.$$ \end{quote} The non-locality of the second formulation incorporates the diffusive action of the partial differential equation. Moreover, the nonlocal formulation allows for $t$ to be large. At the crudest level, the two estimates coincide because $$ \mathbb{E}_{x}~e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \sim 1+\left(\lambda - V(x) \right)t.$$ This estimate is only correct up to first order; one might suspect that it should hold up to second order because Brownian motion is isotropic and therefore, by symmetry, $$ \qquad \mathbb{E}_x \left\langle \nabla V(x), \omega(t) -x\right\rangle = 0.$$ However, there is a curious contribution: Brownian motion moves to distance $\sim \sqrt{t}$ within time $t$, and this has the effect of turning the local geometry of $V$ at second order into a first order contribution in time $$ \qquad \frac{1}{2}\,\mathbb{E}_x \left\langle \omega(t)-x, (D^2V)(x)\, (\omega(t)-x) \right\rangle = t \Delta V(x).$$ This gives $$\mathbb{E}_{x}\ e^{-\int_{0}^{t}{V(\omega(z))dz}} = 1 - V(x) t + \frac{t^2}{2} \left(V(x)^2 - \Delta V(x)\right) + o(t^2).$$ A geometric interpretation would be as follows: local convexity ($\Delta V(x) > 0$) leads to a stronger decay of the expectation and thus enforces a stronger decay of the eigenfunction; conversely, local concavity enforces slightly less decay (compared to
flat potentials at the same numerical scale). \begin{center} \begin{figure}[h!] \begin{tikzpicture}[scale=1] \draw [ultra thick] (-3.5 ,0.5) -- (-2.5,0.5); \node at (-3,0.8) {$V(x)$}; \draw [dashed, thick, xshift=0cm] plot [smooth, tension=1] coordinates { (-2,1) (0,0) (2,1)}; \draw [thick, xshift=0cm] plot [smooth, tension=1] coordinates { (-1,0.3) (0,2.3) (1,0.3)}; \draw [dashed, thick, xshift=5cm] plot [smooth, tension=1] coordinates { (-2,0) (0,1) (2,0)}; \draw [thick, xshift=5cm] plot [smooth, tension=1] coordinates { (-1.7,0.3) (0,2.3) (1.7,0.3)}; \end{tikzpicture} \caption{Two potentials of comparable numerical value. Local convexity enforces stronger decay of the eigenfunction, local concavity flatter decay.} \end{figure} \end{center} \subsection{The case $V \leq \lambda$.} The description above only discusses the case where locally $V \geq \lambda$ (in order to stress the similarity to Agmon's inequality); however, it is clear that a similar argument applies whenever $V \leq \lambda$. In that case the expectation over the path integrals will grow and this will imply variation at scale $\sqrt{t_x}$, where $$ t_x = \inf_{} \left\{t > 0: \mathbb{E}_{x}~e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \geq 2\right\}.$$ Put differently, if $V \leq \lambda$ in such a way that there is local growth in $t$ of $\mathbb{E}_{x}~e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}}$, then the only way for $$ \phi(x) = \mathbb{E}_{x}\left(\phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \right) $$ to be valid is for $\phi(\omega(t))$ to be, on average, smaller than $\phi(x)$, which indicates doubling of size on the length scale $\sqrt{t_x}$. In particular, the nonlocal formulation also applies to the setting where the classical $\sqrt{(V-\lambda)_{+}}$ yields no more information.
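The quantity $\mathbb{E}_{x}\,e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}}$ appearing in both definitions of $t_x$ equals $e^{\lambda t}\,(e^{t(\Delta - V)}\mathbf{1})(x)$ and can therefore be evaluated by diagonalizing the discretized operator. The sketch below (illustrative only; the step potential, the eigenvalue $\lambda$ and all parameters are arbitrary choices, not from the paper) confirms the dichotomy: the expectation grows where $V < \lambda$ and decays where $V > \lambda$.

```python
import numpy as np

# E_x exp(lam*t - int_0^t V) = exp(lam*t) * (exp(t(Delta - V)) 1)(x),
# computed via the eigendecomposition of H = -Delta + V on (0,1), Dirichlet.
n = 300
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
V = np.where(x < 0.5, 0.0, 500.0)          # step potential: a 'valley' and a 'hill'
lap = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h ** 2
H = lap + np.diag(V)
w, Q = np.linalg.eigh(H)

def expectation(t, lam):
    """e^{lam t} * (e^{-tH} 1) as a vector over the grid points x."""
    return np.exp(lam * t) * (Q @ (np.exp(-t * w) * (Q.T @ np.ones(n))))

lam = 100.0                                # between the two potential levels
f = expectation(1e-4, lam)
i_low, i_high = n // 4, 3 * n // 4         # V = 0 < lam there; V = 500 > lam there
assert f[i_low] > 1.0                      # growth where V < lam
assert f[i_high] < 1.0                     # decay where V > lam
```

For such short times the boundary and the step are effectively invisible from the two sample points, so the expectation is close to $e^{(\lambda - V(x))t}$, in line with the first-order approximation above.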
Using this formulation, we can recover the classical intuition for Laplacian eigenfunctions: if $-\Delta \psi = \kappa \psi$, then one expects $\psi$ to oscillate on the wavelength $\sim \kappa^{-1/2}$. In our case, if $V \ll \lambda$, then $$-\Delta \psi = (\lambda - V)\psi \sim \lambda \psi$$ and the variation estimate guarantees oscillation on scale $\sqrt{\lambda}/c$, where $c \sim \lambda - V \sim \lambda$ and thus $\sqrt{\lambda}/c \sim \lambda^{-1/2}.$ Another way of seeing this is that we expect a doubling on the scale $\sim \sqrt{t_x}$ and, whenever $V \ll \lambda$, $$ t_x = \inf_{} \left\{t > 0: \mathbb{E}_{x}~e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \geq 2\right\} \sim \inf_{} \left\{t > 0: \mathbb{E}_{x}~e^{\lambda t} \geq 2\right\} \sim \frac{1}{\lambda}.$$ \subsection{Landscape from variation.} So far, we have only discussed the variation estimate; the relationship with landscapes is easily visualized: pick a sequence of points $x_1, x_2, \dots \in \Omega$, compute $t_{x_j}$ and draw circles with radius $\sqrt{t_{x_j}}$ around the points. Smaller circles correspond to variation by a factor on a smaller spatial scale (i.e. faster growth/decay). The 'valleys' of the landscape correspond to regions with smaller circles (crossing them causes a variation by a factor of 2 and thus frequent crossing is equivalent to either large growth or large decay) whereas 'hills' correspond to regions with larger circles. This essentially reproduces the landscape generated by $u$ since both $\sqrt{t_x}$ and $u$ may be regarded as mollifications of $1/V$. \begin{figure}[h!]
\centering \includegraphics[width=0.4\textwidth]{bubble.jpg} \caption{An implicit description of the landscape by $\sqrt{t_{x_j}}$ balls around points $x_j$.} \end{figure} The computation of $t_{x_j}$ is clearly nontrivial; however, up to first order it is easy, since $$ \mathbb{E}_{x} e^{-\int_{0}^{t}{V(\omega(z))dz}} \sim 1 - \mathbb{E}_{x} \int_{0}^{t}{V(\omega(z))dz}$$ and, by linearity, we can exchange expectation and integration $$ \mathbb{E}_{x} \int_{0}^{t}{V(\omega(z))dz} = \int_{0}^{t}{ \mathbb{E}_{x} V(\omega(z))dz} \sim \int_{0}^{t}{ (e^{z\Delta} V)(x)dz} = \left[ \left( \int_{0}^{t}{e^{z \Delta}dz} \right)*V\right](x). $$ This is essentially accurate as long as $d(x, \partial \Omega) \gg \sqrt{t}$, while it overestimates the quantity as soon as $d(x, \partial \Omega) \lesssim \sqrt{t}$. In contrast, the landscape function $\lambda u(x)$ has a simpler and more convenient linear scaling in the eigenvalue $\lambda$. Indeed, $\lambda u(x)$ arises naturally from the following heuristic, which nicely clarifies how $u(x)$ interacts locally with the semigroup induced by $(-\Delta + V)$. \\ \textit{Heuristic.} We compute an expansion of $e^{t (\Delta - V)} u(x)$ for $t$ small in two different ways. Note that $$ (-\Delta + V)u = 1 \qquad \mbox{and thus, for $t$ small,} \qquad e^{t (\Delta - V)} u(x) = u(x) - t + o(t).$$ However, the semigroup may also be expanded using Feynman-Kac and for $t$ small \begin{align*} (e^{t (\Delta - V)} u)(x) &= \mathbb{E}_{x}\left(u(\omega(t)) e^{-\int_{0}^{t}{V(\omega(z))dz}} \right) \\ &= \mathbb{E}_{x}\left(u(\omega(t)) \left(1-\int_{0}^{t}{V(\omega(z))dz}\right) \right)\\ & \sim u(x) - u(x)\mathbb{E}_{x}\int_{0}^{t}{V(\omega(z))dz}.
\end{align*} By matching the coefficients of the linear term, we get that $$ u(x)\mathbb{E}_{x}\int_{0}^{t}{V(\omega(z))dz} \sim t.$$ Therefore, for $t$ sufficiently small, $$ \mathbb{E}_{x}\int_{0}^{t}{V(\omega(z))dz} \sim \frac{t}{u(x)}$$ and thus $$ \mathbb{E}_{x}e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \sim 1 + \left( \lambda - \frac{1}{u(x)} \right) t.$$ This heuristic naturally recovers decay in the region $\left\{x:1/u(x) > \lambda \right\}$. The only inaccuracy is that we actually have $ \mathbb{E}_{x} u(\omega(t)) \sim u(x) + t\Delta u(x)$. Indeed, unless $|\Delta u(x)|$ is very big, we get from $$ (-\Delta + V)u(x) = 1 \qquad \mbox{that} \qquad u(x) \sim \frac{1}{V(x)}.$$ In that case it is not very surprising that $1/u(x)$ is effective at predicting decay. However, the relationship runs deeper than that: using again Feynman-Kac, we get \begin{align*} e^{t(\Delta - V)}1 &= \mathbb{E}_{x}e^{-\int_{0}^{t}{V(\omega(z))dz}} \\ e^{t(\Delta - V)^{-1}}1 &= 1 - tu(x) +o(t). \end{align*} Furthermore, information up to second order appears with the correct sign. We can rewrite $(-\Delta + V)u=1$ as $$ u = \frac{1}{V} + \frac{\Delta u}{V}.$$ Assuming $\Delta u \sim 0$ gave the first-order approximation $u \sim 1/V$. The next natural step is to iterate this: $$ u \sim \frac{1}{V} + \frac{\Delta \left(\frac{1}{V}\right)}{V} = \frac{1}{V} + 2\frac{|\nabla V|^2}{V^4} - \frac{\Delta V}{V^3}.$$ Restricting to local extrema, we see that (up to lower order terms) $1/u$ is bigger than $V$ in local minima and smaller than $V$ in local maxima (thus recreating the behavior from above). \section{New landscape functions} \subsection{Nonlocal refinement.} From now on, a 'landscape function' refers to any function $h(x)$ with the property that for a fixed eigenvalue $\lambda$ the eigenfunction $(-\Delta + V)\phi = \lambda \phi$ satisfies $$ |\phi(x)| \leq h(x) \| \phi \|_{L^{\infty}}.$$ The function $h(x) = 1$ is trivially admissible.
We will continue to use $u(x)$ to denote the classical landscape function given as the solution of $(- \Delta + V)u = 1$. It satisfies the inequality with $h(x) = \lambda u(x)$. \begin{theorem}[Landscape bootstrapping] Suppose $(-\Delta + V)\phi = \lambda \phi$ and $h(x)$ satisfies $$ |\phi(x)| \leq h(x) \| \phi \|_{L^{\infty}},$$ then the same inequality holds for $h(x)$ replaced by \begin{align*} h_1(x) &= \inf_{t \geq 0}{ \mathbb{E}_x \left( h(\omega(t)) e^{\lambda t -\int_{0}^{t}{V(\omega(s))ds} } \right) } \\ &= \inf_{t \geq 0}{ e^{\lambda t} e^{t(\Delta - V)} h(x) }. \end{align*} \end{theorem} By letting $t \rightarrow 0$, we see that $h_1(x) \leq h(x)$. There is a delicate balance between two terms: $e^{t(\Delta - V)} h(x)$ is a diffusion semigroup inducing exponential decay, which is counteracted by the exponential growth $e^{\lambda t}$. It is interesting to note that the function $\lambda u(x)$ plays a distinguished role and is characterized by the fact that no purely local (i.e. $t \rightarrow 0$) considerations have any effect.\\ \begin{proposition} Suppose $f \in C^2(\Omega)$ satisfies $$ \frac{d}{dt} e^{\lambda t} e^{t(\Delta - V)} f(x) \big|_{t=0} = 0 \qquad \mbox{then} \qquad f(x) = \lambda u(x).$$ \end{proposition} \begin{proof} The argument is immediate. Note that for $t$ small \begin{align*} e^{\lambda t} e^{t(\Delta - V)} f(x) &= (1 + \lambda t + \mathcal{O}(t^2))(1 + t (\Delta - V) f(x) + \mathcal{O}(t^2))\\ &= 1 +t ( \lambda + (\Delta - V)f(x) )+ \mathcal{O}(t^2) \end{align*} and therefore $ f(x) = (-\Delta + V)^{-1}\lambda = \lambda u(x).$ \end{proof} \subsection{Computational tricks.} The main results from the previous section allow for the generation of new landscape functions out of $\lambda u(x)$; however, one should keep in mind that evaluating $e^{t(\Delta - V)}u(x)$ may be as hard as or harder than directly computing eigenfunctions. The purpose of this section is to suggest a cheap way of creating computationally feasible improvements.
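The bootstrapping of Theorem 2 can also be carried out numerically. The following sketch (illustrative choices throughout, not from the paper) starts from $h = \lambda u$ on a discretized interval, evaluates $e^{\lambda t} e^{t(\Delta - V)} h$ on a grid of times via the eigendecomposition of $H = -\Delta + V$, and checks that the pointwise infimum is both an improvement over $\lambda u$ and still a valid landscape function for the ground state.

```python
import numpy as np

# Landscape bootstrapping (Theorem 2), discretized on (0,1) with Dirichlet data:
# h1(x) = inf_t e^{lam t} (e^{t(Delta - V)} h)(x), starting from h = lam * u.
n = 300
hstep = 1.0 / (n + 1)
x = np.linspace(hstep, 1.0 - hstep, n)
V = 5000.0 * np.sin(7 * np.pi * x) ** 2    # arbitrary multi-well potential
lap = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / hstep ** 2
H = lap + np.diag(V)
w, Q = np.linalg.eigh(H)

u = np.linalg.solve(H, np.ones(n))         # classical landscape function
lam, phi = w[0], Q[:, 0]                   # ground state
h0 = lam * u                               # initial landscape bound

# Evaluate e^{lam t} e^{-tH} h0 on a grid of times and take the pointwise inf.
ts = np.linspace(0.0, 2e-3, 50)
h1 = np.min([np.exp(lam * t) * (Q @ (np.exp(-t * w) * (Q.T @ h0))) for t in ts],
            axis=0)

assert np.all(h1 <= h0 + 1e-9)             # bootstrapping can only improve
assert np.all(np.abs(phi) <= h1 * np.abs(phi).max() + 1e-9)   # still a bound
```

Here the infimum over $t \geq 0$ is approximated by a minimum over a finite grid of times that includes $t = 0$, so $h_1 \leq h$ holds by construction, while the second assertion reflects that each fixed $t$ already yields a valid landscape function.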
The original proofs \cite{fil, mol} demonstrating $$|\phi(x)| \leq \lambda u(x) \|\phi\|_{L^{\infty}}$$ use Green's functions. A very simple argument that we could not find in the literature is as follows. \begin{proof} $\phi$ is an eigenfunction and $-\Delta + V$ is an elliptic operator. The maximum principle yields $$ \phi(x) = (-\Delta + V)^{-1} \lambda \phi(x) \leq (-\Delta + V)^{-1} \lambda \|\phi\|_{L^{\infty}} = \lambda \|\phi\|_{L^{\infty}} (-\Delta + V)^{-1}1,$$ where the last term is precisely $u(x)$. \end{proof} This very simple proof immediately suggests two improvements. The first is to iterate the inequality: let $k \in \mathbb{N}$ be arbitrary. Then we have the inequality $$ \phi(x) = (-\Delta + V)^{-k} \lambda^k \phi(x) \leq (-\Delta + V)^{-k} \lambda^k \|\phi\|_{L^{\infty}} = \lambda^k \|\phi\|_{L^{\infty}} (-\Delta + V)^{-k}1.$$ This variant was already known to Filoche \& Mayboroda; its downside is that the increased power on the eigenvalue tends to make the bounds worse for higher eigenfunctions. There is an elementary improvement that preserves linear scaling in the eigenvalue. \begin{proposition} If $(-\Delta + V) \phi = \lambda \phi$, then $$|\phi(x)| \leq \left( \lambda(-\Delta + V)^{-1} \min(\lambda u(x), 1) \right) \|\phi\|_{L^{\infty}}.$$ \end{proposition} \begin{proof} We bootstrap the original inequality and have \begin{align*} \phi= (-\Delta + V)^{-1} \lambda \phi &\leq \lambda (-\Delta + V)^{-1} |\phi| \leq \lambda (-\Delta + V)^{-1} \min(\lambda u(x),1) \|\phi\|_{L^{\infty}}. \end{align*} \end{proof} \begin{figure}[h!]
\centering \includegraphics[width=0.6\textwidth]{iterate.pdf} \captionsetup{width=0.8\textwidth} \caption{The profile $\phi_1/\|\phi_1\|_{L^{\infty}}$ (blue), the landscape function $u(x)$ (purple) and the first two iterations (yellow, green) of Proposition 3.} \end{figure} The reason why such a simple argument can indeed be effective is that the landscape function is sometimes bigger than 1; in that case we can perform a simple cut-off at 1 and iterate to get additional information; alternatively, one could use the bound from the previous section in a neighborhood to propagate the gain from the cutoff to nearby regions. Clearly, this simple argument can also be iterated and, by the same token, $$|\phi(x)| \leq \left( \lambda(-\Delta + V)^{-1} \min \left(1, \lambda(-\Delta + V)^{-1} \min(\lambda u(x), 1) \right) \right) \|\phi\|_{L^{\infty}}.$$ Like the original landscape function, these improvements can only be effective for $\lambda$ small: for $\lambda$ large, we will have $ \lambda u(x) \geq 1$ everywhere except for small regions close to the boundary, and thus $\min(\lambda u(x), 1) \sim 1$ and the argument will merely recreate $u$. \section{Proofs} \subsection{Proof of Theorem 1.} The main idea has already been outlined above. We use $$ \phi(x_0) = \mathbb{E}_{x_0}\left(\phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \right)$$ for a suitable time $t$. Almost all paths are contained within a ball of radius $\sim \sqrt{t}$ (suggesting the choice $t=1/c$). However, some Brownian motion paths will actually leave that ball and may give a large additional contribution, but the likelihood of that happening is small.
The standard reflection principle for one-dimensional Brownian motion $B(t)$ states that $$ \mathbb{P}\left( \max_{0 < s < t}{B(s)} \geq a \right) = 2 \mathbb{P}(B(t) \geq a).$$ This implies that the likelihood of a Brownian motion leaving a ball of fixed radius at some time $0 < s < t$ can be bounded by the likelihood of having left the ball at time $t$. More precisely, the standard Gaussian heat kernel estimate yields, for some universal constants $c_1, c_2 > 0$ (depending only on the dimension) and fixed $t > 0$, $$ \mathbb{P}(\|\omega(t) - \omega(0)\| \geq \delta t^{1/2}) \leq c_1 e^{-c_2 \delta^2}.$$ This holds at a much greater level of generality \cite{fern}. The proof consists of verifying, by doing the algebra, that all the quantities work together as described. \begin{proof} We assume that $$ \forall x \in B: \frac{1}{2} |\phi(x_0)| \leq |\phi(x)| \leq 2 |\phi(x_0)|$$ with $$ B = B\left(x_0, \frac{c_3}{\sqrt{c}} \sqrt{\frac{\lambda}{c} + \log{ \left(c_3 \frac{\|\phi \|_{L^{\infty}}}{| \phi(x_0)|} \right) }} \right) \subset \Omega,$$ where $c_3$ is some constant we are allowed to choose depending on $c_1, c_2$. We start by assuming that $$ V - \lambda \geq c \qquad \mbox{uniformly on}~B.$$ The case $V(x) - \lambda \leq -c$ is similar and will be described afterwards. We can assume without loss of generality that $\phi(x_0) > 0$. The time scale of the argument will be $ t = 1/c$.
For a Brownian motion $\omega(s)$ started in $x_0$ we distinguish two cases: \begin{itemize} \item \textbf{Case 1 (generic).} $ \left\{ \omega(s): 0 \leq s \leq t \right\} \subset B$ \item \textbf{Case 2 (rare).} $ \left\{ \omega(s): 0 \leq s \leq t \right\} \not\subset B.$ \end{itemize} Case 1 is very easy to deal with: in that case, we can bound any single path fully via \begin{align*} \phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} &\leq 2 \phi(x_0) e^{ \int_{0}^{t}{( \lambda - V(\omega(z)))dz}} \\ &\leq2e^{- t c} \phi(x_0), \end{align*} and the same argument holds in expectation for all paths conditioned on Case 1. It remains to consider Case 2. If it occurs, then we only have the trivial bound (using $V \geq 0$) $$ \phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \leq \|\phi \|_{L^{\infty}} e^{\lambda t},$$ which is large, but the likelihood of that event is small. Combining this with the upper bound on the likelihood of Case 2 gives \begin{align*} \phi(x_0) = \mathbb{E}_{x_0}\left(\phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \right) \leq 2e^{- tc}\phi(x_0) + c_1 e^{-c_2 \delta^2} e^{\lambda t} \|\phi \|_{L^{\infty}}. \end{align*} We derive a contradiction by setting $t = 1/c$, which gives $$ \delta = c_3\sqrt{\log{ \left(c_3 \frac{e^{\frac{\lambda}{c}} \|\phi \|_{L^{\infty}}}{| \phi(x_0)|} \right)} }.$$ Simple algebra allows us to reformulate the inequality as $$ \phi(x_0) \leq 2 e^{-1} \phi(x_0) + \varepsilon_{c_1, c_2, c_3} \phi(x_0),$$ where $$ \varepsilon_{c_1, c_2, c_3} = \frac{c_1}{c_3^{c_2 c_3^2}} e^{-\frac{\lambda}{c}(c_2 c_3^2 - 1)} \left( \frac{|\phi(x_0)|}{\|\phi\|_{L^{\infty}}} \right)^{c_2 c_3^2 - 1} $$ can be made arbitrarily small by making $c_3$ sufficiently large (depending only on $c_1, c_2$). This gives a contradiction since $2e^{-1} < 1$ and concludes the first case.
The other case to consider is $$ V - \lambda \leq -c \qquad \mbox{uniformly on}~B.$$ Here, essentially all signs are reversed and we show that there is too much local growth. We again distinguish Case 1 and Case 2 and get for Case 1 that \begin{align*} \phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} &\geq \frac{1}{2} \phi(x_0) e^{ \int_{0}^{t}{( \lambda - V(\omega(z)))dz}} \\ &\geq \frac{1}{2}e^{t c} \phi(x_0). \end{align*} The second case is again completely without control and we can only use the trivial estimate $$ \phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \geq -e^{\lambda t}\|\phi\|_{L^{\infty}}.$$ Altogether, this yields \begin{align*} \phi(x_0) = \mathbb{E}_{x_0}\left(\phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \right) &\geq \mathbb{P}(\mbox{Case 1}) \frac{1}{2}e^{tc}\phi(x_0) - c_1 e^{-c_2 \delta^2} e^{\lambda t} \|\phi \|_{L^{\infty}}. \end{align*} For $c_3$ sufficiently large, we can ensure that $\mathbb{P}(\mbox{Case 1}) \geq 1/2$. Thus, for $c_3$ sufficiently large, \begin{align*} \phi(x_0) = \mathbb{E}_{x_0}\left(\phi(\omega(t)) e^{\lambda t-\int_{0}^{t}{V(\omega(z))dz}} \right) &\geq \frac{1}{4}e^{tc}\phi(x_0) - c_1 e^{-c_2 \delta^2} e^{\lambda t} \|\phi \|_{L^{\infty}}. \end{align*} Plugging things in as before yields (using again $t = 1/c$ and therefore the same value of $\delta$ as before) $$ \phi(x_0) \geq \frac{e^{\delta}}{4}\phi(x_0) - \varepsilon_{c_1, c_2, c_3} \phi(x_0),$$ where, by the same computation as above, $e^{\delta}/4$ can be made bigger than 2 by choosing $c_3$ sufficiently large and $$ \varepsilon_{c_1, c_2, c_3} = \frac{c_1}{c_3^{c_2 c_3^2}} e^{-\frac{\lambda}{c}(c_2 c_3^2 - 1)} \left( \frac{|\phi(x_0)|}{\|\phi\|_{L^{\infty}}} \right)^{c_2 c_3^2 - 1} $$ can be made arbitrarily small by making $c_3$ sufficiently large. \end{proof} \subsection{Proof of Theorem 2.} \begin{proof} The proof uses the identity, valid for all $t \geq 0$, to introduce an infimum.
\begin{align*} \phi(x) = (e^{\lambda t}e^{t (\Delta - V)}\phi)(x) &= \mathbb{E}_x \left( \phi(\omega(t)) e^{\lambda t -\int_{0}^{t}{V(\omega(s))ds} } \right) \\ &= \inf_{t \geq 0}{ \mathbb{E}_x \left( \phi(\omega(t)) e^{\lambda t -\int_{0}^{t}{V(\omega(s))ds} } \right)}. \end{align*} By assumption $\phi$ is dominated by a landscape function $h(x)$ and thus \begin{align*} \inf_{t \geq 0}{ \mathbb{E}_x \left( \phi(\omega(t)) e^{\lambda t -\int_{0}^{t}{V(\omega(s))ds} } \right)} &\leq \|\phi\|_{L^{\infty}} \inf_{t \geq 0}{ \mathbb{E}_x \left( h(\omega(t)) e^{\lambda t -\int_{0}^{t}{V(\omega(s))ds} } \right)} \\ &= \|\phi\|_{L^{\infty}} \inf_{t \geq 0}{ e^{\lambda t} e^{t(\Delta - V)} h(x)}. \end{align*} This concludes the argument. \end{proof} \subsection{Refined variation estimate} Since Brownian motion is isotropic, the proof of the variation estimate yields a stronger result: simply put, the variation estimate is true because of \textit{curvature} of the graph. More generally, we can show that if $ h(x): \Omega \rightarrow \mathbb{R}$ is another function satisfying $h(x_0) = 0$ and $$ \forall x \in B \qquad \left| e^{\lambda t}e^{t (\Delta -V)}h(x)\right| \leq \frac{1}{100}\phi(x)$$ for the value of $t$ for which we wish to apply the variation estimate, then the function $$ \phi(x) - h(x) \qquad \mbox{doubles its size on} ~ B.$$ The best example is perhaps given by $V=0$ (eigenfunctions of the Laplace operator $-\Delta \phi = \lambda \phi$) and $h(x)$ being the best linear approximation of $\phi$ in $x_0$ $$ h(x) = \left\langle \nabla \phi(x_0), x- x_0 \right\rangle.$$ This function is essentially invariant under the heat flow for $d(x_0, \partial \Omega) \gg \sqrt{t}$ (with a negligible contribution from $|\nabla \phi(x_0)|$ that can be made precise).
The variation estimate implies that $\phi(x) - h(x)$ doubles its size and, since we have removed the tangent plane, this implies that $\phi(x) - h(x)$ cannot be too small because there is curvature in the graph. This also explains why the variation estimate does not apply when $V \sim \lambda$: then the functions have no guaranteed curvature $$-\Delta \phi = (\lambda - V)\phi \sim 0$$ and the doubling in size, if present at all, could possibly vanish once an affine function is removed.\\ \textbf{Acknowledgement.} I am grateful to Ronald R. Coifman and Peter W. Jones for extensive discussions. The author was partially supported by an AMS-Simons travel grant and INET grant $\#$INO15-00038.
https://arxiv.org/abs/1002.0519
Coincidence isometries of a shifted square lattice
We consider the coincidence problem for the square lattice that is translated by an arbitrary vector. General results are obtained about the set of coincidence isometries and the coincidence site lattices of a shifted square lattice by identifying the square lattice with the ring of Gaussian integers. To illustrate them, we calculate the set of coincidence isometries, as well as generating functions for the number of coincidence site lattices and coincidence isometries, for specific examples.
\section{Introduction} The sublattice of finite index formed by the points of intersection of a lattice and a rotated copy of the same lattice is called a coincidence site lattice or CSL. It was Friedel in 1911 who first recognized the use of CSLs in describing and classifying grain boundaries in crystals \cite{Fr}. Since then, CSLs have proven to be an indispensable tool in the study of grain boundaries and interfaces \cite{KW, R,Bo,P}. This prompted various authors to examine the CSLs of several lattices including cubic and hexagonal ones \cite{GBW,G,G2}. The discovery of quasicrystals triggered a renewed interest in CSLs. This led to the analysis of CSLs from a more mathematical point of view. Known results for lattices were again considered and reformulated so that they may be readily extended to aperiodic situations. This was necessary since the first stage in solving the coincidence problem for quasicrystals involved calculating the coincidence site modules (CSMs) of the underlying translation modules, such as modules with 5, 8, 10, and 12-fold symmetry (see \cite{PBR,B} and references therein, see also \cite{W,WL,OWL,WL2}). Hence, coincidences of lattices and modules in dimensions $d\leq 4$ were investigated in \cite{B,Z,Z2,BZ,BGHZ}. Recent results include the decomposition of coincidence isometries of lattices and modules in Euclidean $n$-space as a product of at most $n$ coincidence reflections \cite{Zo,H} and the relationship between the sets of coincidence and similarity isometries of lattices and modules \cite{Gl,Gl2}. The mathematical treatment of the coincidence problem is very often restricted to linear coincidence isometries, that is, rotations and improper rotations, whereas isometries containing a translational part are ignored -- as we did in the first two paragraphs above. Nevertheless, general (affine) isometries are important in crystallography. 
Indeed, the situation where one shifts the two component crystals against each other has been investigated in \cite{GC,F} and references therein. It was shown that these shifts are needed to minimize the grain boundary energy, thus they are often referred to as ``rigid relaxations''. However, some authors claim that minimizing the energy may require shifts that destroy all coincidence sites. Even though the idea of introducing a shift after applying a linear coincidence isometry has already been dealt with in the physical literature, not much can be found in the mathematical literature, where a systematic treatment of the subject is still missing. Thus, we now aim to generalize the notion of a CSL and CSM, respectively, that is, we investigate the possible intersections of two lattices (modules) that are related by an isometry. To simplify the discussion, we restrict our attention here to the lattice case though most of the results also work in the module case. In fact, some steps in the general direction have been made in \cite{PBR}. There, the authors have considered coincidence rotations around certain points which are not lattice (module) points. For instance, they determined the set of coincidence rotations about the center of a Delaunay cell of the square lattice and calculated the corresponding indices. In this paper we discuss a related and special case: the coincidence problem for shifted lattices. That is, after translating the lattice by some vector and upon rotation of this shifted lattice (with respect to the origin), we consider its intersection with the shifted lattice. This should be useful in the context of bicrystallography \cite{GP, PV}. Similar to the approach in \cite{PBR, B}, we start our investigation with the square lattice and provide solutions that, when modified appropriately, also apply to planar modules. The purpose of this paper is to shed further light on the geometry of CSLs. 
It is beyond the scope of this paper to discuss the actual grain boundary energy, which would require considering the actual Hamiltonians. In particular, the paper is not intended to determine which translations are the most favourable ones in terms of energy. \section{The coincidence problem for lattices} Let $\Gamma\subseteq \mathbb{R}^d$ be a $d$-dimensional lattice and $R\in O(d)$. We say that $R$ is a \emph{(linear) coincidence isometry} of $\Gamma$ if $\Gamma(R):=\Gamma\cap R\Gamma$ is a sublattice of finite index in $\Gamma$ and we call $\Gamma(R)$ a \emph{coincidence site lattice} (CSL) of $\Gamma$. The \emph{coincidence index} of a coincidence isometry $R$, denoted by $\Sigma(R)$, is given by $\Sigma(R)=[\Gamma:\Gamma(R)]=\frac{\operatorname{vol}(\Gamma(R))}{\operatorname{vol}(\Gamma)}$. Geometrically, $\Sigma(R)$ is equal to the ratio of the volume of a fundamental domain of $\Gamma(R)$ to the volume of a fundamental domain of $\Gamma$. We denote the set of (linear) coincidence isometries of $\Gamma$ by $OC(\Gamma)$ and the set of coincidence rotations of $\Gamma$, that is, $OC(\Gamma)\cap SO(d)$, by $SOC(\Gamma)$. The set $OC(\Gamma)$ forms a group having $SOC(\Gamma)$ as a subgroup \cite{B}. We summarize here the known results for the square lattice $\mathbb{Z}^2$ (see \cite{PBR} for details). The group of coincidence rotations of $\mathbb{Z}^2$ is $SOC(\mathbb{Z}^2)= SO(2,\mathbb{Q})$, that is, the coincidence rotations of $\mathbb{Z}^2$ are the special orthogonal matrices having rational entries. In determining the structure of this group, the square lattice is identified with the ring of Gaussian integers $\Gamma=\mathbb{Z}[i]$, where $i=\sqrt{-1}$, embedded in the set of complex numbers $\mathbb{R}[i]=\mathbb{C}$. 
In this setting, a coincidence rotation $R$ by an angle of $\theta$ corresponds to multiplication by the complex number $e^{i\theta}$, where \begin{equation}\label{coincrot} e^{i\theta}=\varepsilon\cdot \prod_{p\equiv 1(4)}{\left(\frac{\omega_p}{\overline{\omega_p}}\right)}^{n_p} \end{equation} with $n_p\in\mathbb{Z}$ and only a finite number of $n_p\neq 0$, $\varepsilon$ is a unit in $\mathbb{Z}[i]$, $p$ runs over the rational primes $p\equiv 1\imod{4}$, and $\omega_p$ and its complex conjugate $\overline{\omega_p}$ are the Gaussian prime factors of $p=\omega_p\cdot\overline{\omega_p}$. If we denote by $z$ the numerator of $e^{i\theta}$, that is, \begin{equation}\label{num} z=\prod_{\stackrel{p\equiv 1(4)}{n_p>0}}{\omega_p}^{n_p}\cdot\prod_{\stackrel{p\equiv 1(4)}{n_p<0}}{{\left(\overline{{\omega_p}}\right)}^{\,-n_p}}, \end{equation} then the coincidence index of $R$ is the number theoretic norm of $z$, that is, $\Sigma(R)=N(z)=z\cdot\overline{z}$, and the CSL obtained from $R$, $\Gamma(R)$, is the principal ideal $(z):=z\mathbb{Z}[i]$. Consequently, the group of coincidence rotations of the square lattice is given by $SOC(\mathbb{Z}^2)=SOC(\Gamma)\cong C_4\times\mathbb{Z}^{(\aleph_0)}$, where $C_4$ is the cyclic group of order 4 with generator $i$, and $\mathbb{Z}^{(\aleph_0)}$ is the direct sum of countably many infinite cyclic groups each of which is generated by $\omega_p/\overline{\omega_p}$ with $p\equiv 1\imod{4}$. Every coincidence isometry $T\in OC(\Gamma)\setminus SOC(\Gamma)$ can be written as $T=R\cdot T_r$, where $R\in SOC(\Gamma)$ and $T_r$ is the reflection along the real axis (complex conjugation). Here, $\Sigma(T)=\Sigma(R)$ and $\Gamma(T)=\Gamma(R)$. Finally, $OC(\mathbb{Z}^2)=OC(\Gamma)\cong SOC(\Gamma)\rtimes C_2$ (semi-direct product), where $C_2$ is the cyclic group of order 2 generated by $T_r$. The possible coincidence indices and the number of CSLs for a given index $m$ may be described by means of a generating function. 
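Before turning to the counting of CSLs, the index formula $\Sigma(R)=N(z)$ can be checked numerically (this is our illustration, not part of the original text). Take $z=2+i$ and count the points of $\mathbb{Z}[i]\cap R\,\mathbb{Z}[i]$ in one period block: since $w\overline{z}/z=w\overline{z}^{\,2}/N(z)$, a Gaussian integer $w$ lies in $R\,\mathbb{Z}[i]$ precisely when $N(z)$ divides $w\overline{z}^{\,2}$, which keeps the whole computation in exact integer arithmetic.

```python
def gmul(p, q):
    """Multiply Gaussian integers given as (real, imag) integer pairs."""
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

# Coincidence rotation R = multiplication by z/conj(z) with z = 2 + i.
z, zbar = (2, 1), (2, -1)
n = z[0] ** 2 + z[1] ** 2          # N(z) = 5, the predicted index Sigma(R)

def in_rotated_copy(w):
    # w in R*Z[i]  iff  w*conj(z)/z in Z[i]  iff  N(z) divides w*conj(z)^2,
    # since w*conj(z)/z = w*conj(z)^2 / N(z).
    v = gmul(w, gmul(zbar, zbar))
    return v[0] % n == 0 and v[1] % n == 0

# The CSL contains n*Z^2 (because n = z*conj(z) lies in (z)), so an n-by-n
# block of Z[i] is a full period of the CSL:
csl_points = sum(in_rotated_copy((a, b)) for a in range(n) for b in range(n))
sigma = n * n // csl_points
print(sigma)  # -> 5
```

The count recovers $\Sigma(R)=N(2+i)=5$; the same brute-force check works for any numerator $z$ as in \eqref{num}.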
Let $\hat{f}(m)$ be the number of coincidence rotations of $\Gamma$ and $f(m)$ be the number of CSLs of $\Gamma$, for a given index $m$. Then $\hat{f}(m)=4f(m)$, where the factor 4 stems from the fact that there are four symmetry rotations. The function $f(m)$ is multiplicative (that is, $f(1)=1$ and $f(mn)=f(m)f(n)$ when $m$, $n$ are relatively prime), and $f(p^r)=2$ for primes $p\equiv 1\imod{4}$ whereas $f(p^r)=0$ for primes $p\equiv 2,3\imod{4}$, where $r\in\mathbb{N}$. We write the generating function for $f(m)$ as a Dirichlet series $\Phi(s)$ given by \begin{alignat*}{2} \Phi(s)&=\sum_{m=1}^{\infty}{\frac{f(m)}{m^s}}=\prod_{p\equiv 1(4)}{\frac{1+p^{-s}}{1-p^{-s}}}\\ &=1+\tfrac{2}{5^s}+\tfrac{2}{13^s}+\tfrac{2}{17^s}+\tfrac{2}{25^s}+\tfrac{2}{29^s}+\tfrac{2}{37^s}+ \tfrac{2}{41^s}+\tfrac{2}{53^s}+\tfrac{2}{61^s}+\tfrac{4}{65^s}+\tfrac{2}{73^s}+\ldots. \end{alignat*} \section{Coincidences of shifted lattices} We now turn our attention to lattices $\Gamma$ in $\mathbb{R}^d$ that are shifted by some vector $x\in\mathbb{R}^d$ and we look at intersections of the form $(x+\Gamma)\cap R(x+\Gamma)$, where $R\in O(d)$. We remark that we actually only need to consider values of $x$ in a fundamental domain of $\Gamma$. An $R\in O(d)$ is said to be a \emph{(linear) coincidence isometry of the shifted lattice} $x+\Gamma$ if $(x+\Gamma)\cap R(x+\Gamma)$ is a sublattice of $x+\Gamma$ of finite index and we also call $(x+\Gamma)\cap R(x+\Gamma)$ a CSL of the shifted lattice $x+\Gamma$. We denote the set of all coincidence isometries of $x+\Gamma$ by $OC(x+\Gamma)$. The following theorem characterizes the set $OC(x+\Gamma)$ and relates the CSLs of $x+\Gamma$ with the CSLs of $\Gamma$ \cite{LZ}. \begin{theorem}\label{cor3} Let $\Gamma$ be a lattice in $\mathbb{R}^d$ and $x\in\mathbb{R}^d$. 
\begin{enumerate} \item $OC(x+\Gamma)=\set{R\in OC(\Gamma):Rx-x\in\Gamma +R\Gamma}$ \item If $R\in OC(x+\Gamma)$ with $Rx-x=t+Rs$ for some $t,s\in\Gamma$, then \[(x+\Gamma)\cap R(x+\Gamma)=x+(t+\Gamma(R)).\] \end{enumerate} \end{theorem} Theorem \ref{cor3} tells us that a CSL of $x+\Gamma$ obtained from $R\in OC(x+\Gamma)$ is just a translate of the CSL $\Gamma(R)$ in $\Gamma$. Consequently, $[x+\Gamma: (x+\Gamma)\cap R(x+\Gamma)]=\Sigma(R)$ which means that shifting the lattice does not give rise to new values of coincidence indices. In addition, we see that $OC(x+\Gamma)$ is a subset of $OC(\Gamma)$. The set $OC(x+\Gamma)$ is non-empty because the identity $1\in OC(x+\Gamma)$. Also, $OC(x+\Gamma)$ is closed under inverses, that is, $R^{-1}\in OC(x+\Gamma)$ whenever $R\in OC(x+\Gamma)$. However, given $R_1$, $R_2\in OC(x+\Gamma)$, $R_2R_1$ is not necessarily in $OC(x+\Gamma)$. In fact, $OC(x+\Gamma)$ is not a group in general \cite{LZ}. \section{The coincidence problem for the shifted square lattice} From this point onwards, we take $\Gamma=\mathbb{Z}[i]$, the square lattice viewed as the ring of Gaussian integers, and $x\in \mathbb{C}$. From \eqref{coincrot} and \eqref{num}, we see that we can associate each $R\in SOC(\Gamma)$ to $(z,\varepsilon)$, and we will write this as $R(z,\varepsilon)$. That is, $R(z,\varepsilon)\in SOC(\Gamma)$ stands for multiplication by the complex number $\varepsilon\frac{z}{\overline{z}}$. We may assume that $\frac{z}{\overline{z}}$ is reduced, that is, $z$ and $\overline{z}$ have no factors in common. In addition, we shall simply set $z=1$ whenever $R(z,\varepsilon)\in P(\Gamma)$, where $P(\Gamma)$ denotes the point group of $\Gamma$. Let $SOC(x+\Gamma):=OC(x+\Gamma)\cap SO(d)$. We start with the following lemma. \begin{lemma}\label{lem3} Let $\Gamma=\mathbb{Z}[i]$, $x\in\mathbb{C}$, $R=R(z,\varepsilon)\in SOC(\Gamma)$, and $T=RT_r$. 
\begin{enumerate} \item $R\in SOC(x+\Gamma)$ if and only if $(\varepsilon z-\overline{z})x\in\mathbb{Z}[i]$ \item $T\in OC(x+\Gamma)$ if and only if $\varepsilon z\overline{x}-\overline{z}x\in\mathbb{Z}[i]$ \end{enumerate} \end{lemma} \begin{proof} Recall that $\Gamma$ is a principal ideal domain. Since $\varepsilon$ is a unit in $\Gamma$ and $z$, $\overline{z}$ are relatively prime, \[\Gamma+R\Gamma=\Gamma+\varepsilon\frac{z}{\overline{z}}\Gamma=\frac{1}{\overline{z}}(\overline{z}\Gamma+z\Gamma)=\frac{1}{\overline{z}}\gcd(z,\overline{z})\Gamma=\frac{1}{\overline{z}}\Gamma.\] By Theorem \ref{cor3}, $R\in SOC(x+\Gamma)\Leftrightarrow Rx-x\in\Gamma+R\Gamma\Leftrightarrow \varepsilon\frac{z}{\overline{z}}x-x\in\frac{1}{\overline{z}}\Gamma\Leftrightarrow (\varepsilon z-\overline{z})x\in\mathbb{Z}[i].$ Similarly, $\Gamma+T\Gamma=\frac{1}{\overline{z}}\Gamma$. Applying Theorem \ref{cor3} again, we obtain the second statement. \end{proof} We now obtain the following results about $SOC(x+\Gamma)$ and $OC(x+\Gamma)$. \begin{theorem}\label{SOCgroup} If $\Gamma=\mathbb{Z}[i]$ and $x\in\mathbb{C}$ then $SOC(x+\Gamma)$ is a subgroup of $SOC(\Gamma)$. \end{theorem} \begin{proof} We have already mentioned that $1\in SOC(x+\Gamma)$ and $SOC(x+\Gamma)$ is closed under inverses. Let $R_j(z_j,\varepsilon_j)\in SOC(x+\Gamma)$ for $j=1,2$ and $g=\gcd(z_1,\overline{z_2})$. By Lemma \ref{lem3}, $(\varepsilon_jz_j-\overline{z_j})x\in\mathbb{Z}[i]$ for $j=1,2$. Write $z_1=h_1g$ and $\overline{z_2}=\overline{h_2}g$; then $h_1$ and $\overline{h_2}$ are relatively prime. This means that $R_1R_2$ corresponds to $(h_1h_2,\varepsilon_1\varepsilon_2)$, so that, by Lemma \ref{lem3}, $R_1R_2\in SOC(x+\Gamma)$ if $\left(\varepsilon_1\varepsilon_2h_1h_2-\overline{h_1}\overline{h_2}\,\right)x\in\mathbb{Z}[i]$. 
Now, \begin{alignat*}{2} (\varepsilon_1\varepsilon_2h_1h_2-\overline{h_1}\overline{h_2}\,)x&=\frac{1}{g}(\varepsilon_1\varepsilon_2z_1h_2-\overline{h_1z_2}\,)x\\ &=\frac{1}{g}[(\varepsilon_1\varepsilon_2z_1h_2-\varepsilon_2h_2\overline{z_1})+(\varepsilon_2\overline{h_1}z_2-\overline{h_1z_2}\,)]x\\ &=\frac{1}{g}[\varepsilon_2h_2\underbrace{(\varepsilon_1z_1-\overline{z_1})x}_{\in\;\mathbb{Z}[i]}+\overline{h_1}\underbrace{(\varepsilon_2z_2-\overline{z_2})x}_ {\in\;\mathbb{Z}[i]}]\in \frac{1}{g}\Gamma. \end{alignat*} Similarly, we also obtain that $(\varepsilon_1\varepsilon_2h_1h_2-\overline{h_1}\overline{h_2}\,)x\in \frac{1}{\overline{g}}\,\Gamma$. Hence, \[(\varepsilon_1\varepsilon_2h_1h_2-\overline{h_1}\overline{h_2}\,)x\in\frac{1}{g}\Gamma\cap\frac{1}{\overline{g}}\Gamma=\frac{1}{g\overline{g}}(g\Gamma\cap\overline{g}\Gamma) =\frac{1}{g\overline{g}}\lcm{(g,\overline{g})}\Gamma=\mathbb{Z}[i]\] since $g$, $\overline{g}$ are relatively prime. \end{proof} For $OC(x+\Gamma)$, the situation is more complicated. One can show the following results (see \cite{LZ}). \begin{theorem}\label{prop5} Let $\Gamma=\mathbb{Z}[i]$ and $x\in\mathbb{C}$. \begin{enumerate} \item The set $OC(x+\Gamma)$ is a subgroup of $OC(\Gamma)$ if and only if for any $T_1$, $T_2\in$ \mbox{$OC(x+\Gamma)\setminus SOC(x+\Gamma)$}, $T_1T_2\in SOC(x+\Gamma)$. \item If $OC(x+\Gamma)$ contains a reflection $T\in P(\Gamma)$ then $OC(x+\Gamma)$ is a subgroup of $OC(\Gamma)$. Also, $OC(x+\Gamma)=SOC(x+\Gamma)\rtimes\langle T\rangle$, where $\langle T\rangle=\set{1,T}\cong C_2$ is the group generated by $T$. \item Suppose $OC(x+\Gamma)$ does not contain a reflection $T\in P(\Gamma)$. If $RT_r\in OC(x+\Gamma)$ where $R(z,\varepsilon_1)\in SOC(\Gamma)$ then for any unit $\varepsilon_2$, $R_2=R_2(z,\varepsilon_2)\notin SOC(x+\Gamma)$. 
\end{enumerate} \end{theorem} When computing $OC(x+\Gamma)$, we see from Theorem \ref{prop5} that it is convenient to determine whether there is a reflection $T\in P(\Gamma)$ in $OC(x+\Gamma)$. If such a reflection $T$ exists, then $OC(x+\Gamma)$ is a group and it is the semi-direct product of $SOC(x+\Gamma)$ and $\langle T\rangle$. Otherwise, we need to check whether $RT_r\in OC(x+\Gamma)$ only for those reflections $RT_r$ for which $R(z,\varepsilon)\in SOC(\Gamma)$ and $R'=R'(z,\varepsilon')\notin SOC(x+\Gamma)$ for all units $\varepsilon'$. \section{Specific examples} For the rest of the discussion, we shall assume that $R(z,\varepsilon)\in SOC(\Gamma)$. The following theorem completely solves the case when $x$ has an irrational component \cite{LZ}. \begin{theorem} Let $x=a+bi\in\mathbb{C}$. If $a$ or $b$ is irrational then $OC(x+\Gamma)$ is a group of at most two elements. In particular, if \begin{enumerate} \item $a$ is irrational and $b$ is rational then $OC(x+\Gamma)=\left\{\begin{aligned} \langle T_r\rangle &\;\text{if }2b\in\mathbb{Z}\\ \set{1} &\;\text{otherwise}. \end{aligned}\right.\;$ \item $a$ is rational and $b$ is irrational then $OC(x+\Gamma)=\left\{\begin{aligned} \langle T\rangle &\;\text{if }2a\in\mathbb{Z}\\ \set{1} &\;\text{otherwise}, \end{aligned}\right.$\\ where $T$ is the reflection along the imaginary axis. \item both $a$ and $b$ are irrational, and \begin{enumerate} \item $a$, $b$ are rationally independent then $OC(x+\Gamma)=\set{1}$. \item $a=\frac{p_1}{q_1}+\frac{p_2}{q_2}b$ where $p_j$, $q_j\in\mathbb{Z}$, $p_j$ and $q_j$ are relatively prime (for $j=1,2$) with \begin{enumerate} \item $p_2q_2$ even, then $OC(x+\Gamma)=\left\{\begin{aligned} \langle R&T_r\rangle &&\text{if }q_1|2q_2\\ \{&1\} &&\text{otherwise}, \end{aligned}\right.$\\where $R=R(p_2+q_2i,1)\in SOC(\Gamma)$. 
\item $p_2q_2$ odd, then $OC(x+\Gamma)=\left\{\begin{aligned} \langle R&T_r\rangle &&\text{if }q_1|q_2\\ \{&1\} &&\text{otherwise}, \end{aligned}\right.$\\where $R=R(\frac{p_2+q_2}{2}-\frac{p_2-q_2}{2}i, i)\in SOC(\Gamma)$. \end{enumerate} \end{enumerate} \end{enumerate} \end{theorem} \begin{ex} \begin{enumerate} \item[] \item Suppose $x=\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}i$. We immediately see that we cannot write $\frac{1}{\sqrt{2}}=c+d\frac{1}{\sqrt{3}}$ where $c,d\in\mathbb{Q}$ since $\sqrt{2}\notin\mathbb{Q}\left(\sqrt{3}\,\right)$. Hence, $OC(x+\Gamma)=\set{1}$. \item Let $x=\sqrt{2}-\frac{\sqrt{2}}{2}i$. We have $\sqrt{2}=\frac{0}{1}+\left(\frac{-2}{1}\right)\left(-\frac{\sqrt{2}}{2}\right)$, and since $1|(2\cdot 1)$, $OC(x+\Gamma)=\langle RT_r\rangle$ where $R=R(-2+i,1)\in SOC(\Gamma)$. \end{enumerate} \end{ex} It only remains to consider the case when both $a$ and $b$ are rational. We now consider $x=a+bi\in\mathbb{Q}(i)$ and write $x=\frac{p}{q}$ where $p$, $q\in\mathbb{Z}[i]$, and $p$, $q$ are relatively prime (in $\mathbb{Z}[i]$). The following lemma tells us that $SOC(x+\Gamma)$ depends only on the denominator $q$ of $x$. \begin{lemma}\label{prop8} Let $\Gamma=\mathbb{Z}[i]$, $x=\frac{p}{q}\in\mathbb{Q}(i)$ where $p$, $q\in\mathbb{Z}[i]$, with $p$, $q$ relatively prime, and $R(z,\varepsilon)\in SOC(\Gamma)$. Then $R\in SOC(x+\Gamma)$ if and only if $q$ divides $\varepsilon z-\overline{z}$. Furthermore, $SOC(x+\Gamma)=SOC(\frac{1}{q}+\Gamma)$. \end{lemma} \begin{proof} We know from Lemma \ref{lem3} that if $R\in SOC(x+\Gamma)$ then $(\varepsilon z-\overline{z})x=\frac{(\varepsilon z-\overline{z})p}{q}\in\mathbb{Z}[i]$. Since $p$ and $q$ are relatively prime, $q|(\varepsilon z-\overline{z})$. The second statement follows from the first. \end{proof} \begin{lemma}\label{lemfunddom} Suppose $\Gamma=\mathbb{Z}[i]$ and $x\in\mathbb{C}$. 
If $x'=Qx$ for some $Q\in P(\Gamma)$ then \[OC(x'+\Gamma)=Q[OC(x+\Gamma)]Q^{-1}.\] \end{lemma} \begin{proof} Since $SOC(\Gamma)$ is a normal subgroup of $OC(\Gamma)$, $R\in SOC(\Gamma)$ if and only if $QRQ^{-1}\in SOC(\Gamma)$. Thus, if $R\in SOC(\Gamma)$ then it follows from Theorem \ref{cor3} that \begin{alignat*}{2} R\in OC(x'+\Gamma)&\Leftrightarrow Rx'-x'\in\Gamma+R\Gamma\\ &\Leftrightarrow Q(Q^{-1}RQx-x)\in Q(\Gamma+Q^{-1}RQ\Gamma)\\ &\Leftrightarrow Q^{-1}RQx-x\in \Gamma+Q^{-1}RQ\Gamma\\ &\Leftrightarrow Q^{-1}RQ\in OC(x+\Gamma)\\ &\Leftrightarrow R\in Q[OC(x+\Gamma)]Q^{-1}.\qedhere \end{alignat*} \end{proof} Recall that we only need to consider values of $x=a+bi$ in a fundamental domain of $\Gamma$. A fundamental domain of $\Gamma$ is $\set{a+bi\in\mathbb{C}:-\frac{1}{2}\leq a,b<\frac{1}{2}}$ (see Figure \ref{funddom}). Observe that every point $x'$ in the chosen fundamental domain can be written as $x'=Qx$ where $Q\in P(\Gamma)$ and $x\in\set{a+bi\in\mathbb{C}: 0\leq b\leq a\leq\frac{1}{2}}$ (a fundamental domain of the symmetry group of $\Gamma$ which is a crystallographic group of type $p4m$). Hence, it follows from Lemma \ref{lemfunddom} that we only need to compute $OC(x+\Gamma)$ for values of $x=a+bi$, where $0\leq b\leq a\leq\frac{1}{2}$ (see Figure \ref{funddom}). 
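The reduction just described -- translate by a Gaussian integer, then apply an element of $P(\Gamma)$ -- can be sketched in a few lines of exact rational arithmetic. This is an illustrative helper with names of our choosing, not code from the paper:

```python
from fractions import Fraction
from math import floor

def to_symmetry_domain(a, b):
    """Map x = a + bi (a, b rational) into the fundamental domain
    {a + bi : 0 <= b <= a <= 1/2} of the symmetry group p4m of Z[i]."""
    # Translate by a Gaussian integer into the unit cell [-1/2, 1/2)^2.
    a -= floor(a + Fraction(1, 2))
    b -= floor(b + Fraction(1, 2))
    # The eight point-group images of a + bi are (+/-a) + (+/-b)i and
    # (+/-b) + (+/-a)i; taking absolute values and ordering them picks
    # the representative lying in the triangle 0 <= b <= a <= 1/2.
    a, b = abs(a), abs(b)
    return (a, b) if a >= b else (b, a)

print(to_symmetry_domain(Fraction(-13, 10), Fraction(1, 10)))
# -> (Fraction(3, 10), Fraction(1, 10))
```

Thanks to Lemma \ref{lemfunddom}, computing $OC$ for the returned representative suffices.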
\begin{figure}[ht] \setlength{\unitlength}{1.5in} \begin{center} \begin{picture}(1.3,1.3)(-0.65,-0.65) \linethickness{0.15pt} \put(-0.65,0){\vector(1,0){1.3}} \put(0,-0.65){\vector(0,1){1.3}} \put(0.005,-0.60){-$\frac{1}{2}$} \put(-0.6,-0.1){-$\frac{1}{2}$} \put(0.02,0.55){$\frac{1}{2}$} \put(0.52,-0.1){$\frac{1}{2}$} \put(-0.5,-0.5){\line(1,0){1}} \put(-0.5,-0.5){\line(0,1){1}} \multiput(-0.5,0.5)(0.105,0){10}{\line(1,0){0.055}} \multiput(0.5,0.5)(0,-0.105){10}{\line(0,-1){0.055}} \thicklines \put(0,0){\line(1,0){0.5}} \put(0.5,0){\line(0,1){0.5}} \put(0,0){\line(1,1){0.5}} \end{picture} \end{center} \caption{A fundamental domain of $\Gamma$, or unit cell, and a fundamental domain of the symmetry group of $\Gamma$ (black triangle)}\label{funddom} \end{figure} \begin{lemma}\label{lemgroup} If $\Gamma=\mathbb{Z}[i]$ and $x=a+bi\in\mathbb{C}$, then $OC(x+\Gamma)$ is a subgroup of $OC(\Gamma)$ if one of the following conditions is satisfied: $a\in\frac{1}{2}\mathbb{Z}$, $b\in\frac{1}{2}\mathbb{Z}$ or $a\pm b\in\mathbb{Z}$. Furthermore, $OC(x+\Gamma)=SOC(x+\Gamma)\rtimes \langle RT_r\rangle$ where $R=R(1,\varepsilon)\in SOC(\Gamma)$, and \[\varepsilon=\left\{\begin{aligned} 1 & \;\text{if }b\in\tfrac{1}{2}\mathbb{Z}\\ -1 & \;\text{if }a\in\tfrac{1}{2}\mathbb{Z}\\ i & \;\text{if }a-b\in\mathbb{Z}\\ -i & \;\text{if }a+b\in\mathbb{Z}. \end{aligned}\right.\] \end{lemma} \begin{proof} Consider the reflection $RT_r\in P(\Gamma)$. By Lemma \ref{lem3}, $RT_r\in OC(x+\Gamma)$ if and only if $\varepsilon\overline{x}-x\in\mathbb{Z}[i]$. The result follows by applying Theorem \ref{prop5}. \end{proof} In particular, for values of $a$ and $b$ for which $0\leq b\leq a\leq \frac{1}{2}$, $OC(x+\Gamma)$ is a subgroup of $OC(\Gamma)$ when $a=\frac{1}{2}$, $b=0$, or $a=b$ (the boundaries of the triangle in Figure \ref{funddom}). 
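The criterion $\varepsilon\overline{x}-x\in\mathbb{Z}[i]$ from the proof above is easy to test in exact arithmetic. The sketch below (function name ours, not the paper's) returns the units $\varepsilon$ for which the reflection $R(1,\varepsilon)T_r$ lies in $OC(x+\Gamma)$, for a rational shift $x=a+bi$:

```python
from fractions import Fraction

UNITS = {"1": (1, 0), "-1": (-1, 0), "i": (0, 1), "-i": (0, -1)}

def point_group_reflections(a, b):
    """Units eps with R(1, eps)*T_r in OC(x + Z[i]) for x = a + bi, i.e.
    those eps for which eps*conj(x) - x is a Gaussian integer."""
    hits = []
    for name, (u, v) in UNITS.items():
        # eps*conj(x) - x  with  eps = u + vi  and  conj(x) = a - bi:
        re = u * a + v * b - a
        im = v * a - u * b - b
        if re.denominator == 1 and im.denominator == 1:
            hits.append(name)
    return hits

# a in (1/2)Z gives eps = -1; a = b gives eps = i, as in the lemma:
print(point_group_reflections(Fraction(1, 2), Fraction(1, 3)))   # -> ['-1']
print(point_group_reflections(Fraction(1, 4), Fraction(1, 4)))   # -> ['i']
```

For $x=\frac{1}{2}+\frac{1}{3}i$ only $\varepsilon=-1$ survives, and for $x=\frac{1}{4}+\frac{1}{4}i$ only $\varepsilon=i$, matching the case distinction in the lemma.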
Before looking at some examples, we note that given $R(z,\varepsilon)\in SOC(\Gamma)$, we have \begin{equation}\label{ezmcz} \varepsilon z-\overline{z}=\left\{\begin{aligned} 2\,&\Im(z) &&\;\text{if }\varepsilon=1\\ -2\,&\Re(z) &&\;\text{if }\varepsilon=-1\\ -[\Re(z)+&\Im(z)](1-i) &&\;\text{if }\varepsilon=i\\ -i[\Re(z)-&\Im(z)](1-i) &&\;\text{if }\varepsilon=-i\;. \end{aligned}\right. \end{equation} In addition, we see from \eqref{coincrot} and \eqref{num} that $\Re(z)$ and $\Im(z)$ are relatively prime and of different parity (that is, one is odd and the other is even). We will also exhibit the number of possible coincidence rotations and CSLs of a given index $m$ of the shifted lattice $x+\Gamma$ by means of generating functions. We shall denote by $\hat{f}_x(m)$ the number of coincidence rotations of $x+\Gamma$ of index $m$, and $f_x(m)$ the number of CSLs of $x+\Gamma$ of index $m$. \begin{ex} $x=\frac{1}{2}+\frac{1}{2}i=\frac{1}{1-i}$ \setlength{\unitlength}{1.25in} \begin{center} \begin{picture}(0.65,0.80)(0,-0.15) \linethickness{0.10pt} \put(0,0){\vector(1,0){0.65}} \put(0,0){\vector(0,1){0.65}} \put(-0.1,0.48){$\frac{1}{2}$} \put(0.48,-0.15){$\frac{1}{2}$} \multiput(0,0.5)(0.105,0){5}{\line(1,0){0.055}} \thicklines \put(0,0){\line(1,0){0.5}} \put(0.5,0){\line(0,1){0.5}} \put(0,0){\line(1,1){0.5}} \put(0.5,0.5){\circle*{0.08}} \put(0.6,0.5){\makebox(0,0){$x$}} \end{picture} \end{center} The denominator of $x$ is $q=1-i$ and we see from \eqref{ezmcz} that $q|(\varepsilon z-\overline{z})$ for all $z$, $\varepsilon$. Lemmas \ref{prop8} and \ref{lemgroup} imply that $SOC(x+\Gamma)=SOC(\Gamma)\cong C_4\times{\mathbb{Z}}^{(\aleph_0)}$ and $OC(x+\Gamma)=OC(\Gamma)\cong SOC(x+\Gamma)\rtimes C_2$. Clearly, $OC(x+\Gamma)$ is a subgroup of $OC(\Gamma)$ in this case. Also, $\hat{f}_x(m)=\hat{f}(m)$ and $f_x(m)=f(m)$. These results agree with the results obtained in the Appendix of \cite{PBR} (just shift the center of the Delaunay cell into the origin). 
We also note here that $(S)OC(x+\Gamma)=(S)OC(\Gamma)$ if and only if $x=\frac{m}{2}+\frac{n}{2}i$, where $m$, $n$ are odd integers. \end{ex} \begin{ex} $x=\frac{1}{2}$ \setlength{\unitlength}{1.25in} \begin{center} \begin{picture}(0.65,0.80)(0,-0.15) \linethickness{0.10pt} \put(0,0){\vector(1,0){0.65}} \put(0,0){\vector(0,1){0.65}} \put(-0.1,0.48){$\frac{1}{2}$} \put(0.46,-0.18){$\frac{1}{2}$} \multiput(0,0.5)(0.105,0){5}{\line(1,0){0.055}} \thicklines \put(0,0){\line(1,0){0.5}} \put(0.5,0){\line(0,1){0.5}} \put(0,0){\line(1,1){0.5}} \put(0.5,0){\circle*{0.08}} \put(0.57,0.07){\makebox(0,0){$x$}} \end{picture} \end{center} The denominator of $x$ is $q=2$. Since the sum of $\Re(z)$ and $\Im(z)$ is odd, we obtain from \eqref{ezmcz} that for all $z$, $q|(\varepsilon z-\overline{z})$ if and only if $\varepsilon=\pm 1$. Hence, by Lemma \ref{prop8}, \[SOC(x+\Gamma)=\set{R(z,\varepsilon)\in SOC(\Gamma):\varepsilon=\pm 1}\cong C_2\times{\mathbb{Z}}^{(\aleph_0)}.\] From Lemma \ref{lemgroup}, $OC(x+\Gamma)=SOC(x+\Gamma)\rtimes\langle T_r\rangle$ and is a subgroup of $OC(\Gamma)$ of index 2. In this case, we have $f_x(m)=f(m)$ but $\hat{f}_x(m)=2f_x(m)$. \end{ex} \begin{ex} $x_0=\frac{1}{3}$ and $x_1=\frac{1}{3}+\frac{1}{3}i$ \setlength{\unitlength}{1.25in} \begin{center} \begin{picture}(0.65,0.80)(0,-0.15) \linethickness{0.10pt} \put(0,0){\vector(1,0){0.65}} \put(0,0){\vector(0,1){0.65}} \put(-0.1,0.48){$\frac{1}{2}$} \put(0.46,-0.18){$\frac{1}{2}$} \multiput(0,0.5)(0.105,0){5}{\line(1,0){0.055}} \thicklines \put(0,0){\line(1,0){0.5}} \put(0.5,0){\line(0,1){0.5}} \put(0,0){\line(1,1){0.5}} \put(0.33,0){\circle*{0.08}} \put(0.33,-0.1){\makebox(0,0){$x_0$}} \put(0.33,0.33){\circle*{0.08}} \put(0.23,0.33){\makebox(0,0){$x_1$}} \end{picture} \end{center} Both $x_0$ and $x_1$ have denominator $q=3$. It is easy to see that if both rational integers $m$, $n$ are not divisible by $q$, then either $m+n$ or $m-n$ is divisible by $q$. 
Hence, by \eqref{ezmcz}, there is a unique $\varepsilon$ so that $q|(\varepsilon z-\overline{z})$ for all $z$. We conclude from Lemma \ref{prop8} that $SOC(x_j+\Gamma)\cong{\mathbb{Z}}^{(\aleph_0)}$ for $j=0,1$. In addition, by Lemma \ref{lemgroup}, $OC(x_j+\Gamma)=SOC(x_j+\Gamma)\rtimes\langle RT_r\rangle$, where $R=R(1,i^j)\in P(\Gamma)$, and $OC(x_j+\Gamma)$ is a subgroup of $OC(\Gamma)$ of index 4. Finally, we have $\hat{f}_{x_j}(m)=f_{x_j}(m)=f(m)$. \end{ex} \begin{ex} $x_0=\frac{1}{5}$, $\frac{2}{5}$ and $x_1=\frac{1}{5}+\frac{1}{5}i$, $\frac{2}{5}+\frac{2}{5}i$ \setlength{\unitlength}{1.25in} \begin{center} \begin{picture}(0.65,0.8)(0,-0.15) \linethickness{0.10pt} \put(0,0){\vector(1,0){0.65}} \put(0,0){\vector(0,1){0.65}} \put(-0.1,0.48){$\frac{1}{2}$} \put(0.46,-0.15){$\frac{1}{2}$} \multiput(0,0.5)(0.105,0){5}{\line(1,0){0.055}} \thicklines \put(0,0){\line(1,0){0.5}} \put(0.5,0){\line(0,1){0.5}} \put(0,0){\line(1,1){0.5}} \put(0.2,0){\circle*{0.08}} \put(0.4,0){\circle*{0.08}} \put(0.2,0.2){\circle{0.08}} \put(0.4,0.4){\circle{0.08}} \put(0.75,0.4){\circle*{0.08}} \put(0.75,0.2){\circle{0.08}} \put(0.88,0.4){\makebox(0,0){$x_0$}} \put(0.88,0.2){\makebox(0,0){$x_1$}} \end{picture} \end{center} We have the denominator $q=5$ for $x_j$, $j=0$, $1$. By considering each possible combination of $\Re(z)$ and $\Im(z)$ modulo $q$, it can be verified that for all $z$ with $5\nmid N(z)$, there is a unique $\varepsilon$ such that $R(z,\varepsilon)\in SOC(\frac{1}{5}+\Gamma)$. This means that $SOC(x_j+\Gamma)\cong{\mathbb{Z}}^{(\aleph_0)}$ for $j=0,1$ by Lemma \ref{prop8}. Also, it follows from Lemma \ref{lemgroup} that $OC(x_j+\Gamma)$ is a subgroup of $OC(\Gamma)$ and $OC(x_j+\Gamma)=SOC(x_j+\Gamma)\rtimes\langle RT_r\rangle$, where $R=R(1,i^j)\in P(\Gamma)$. 
Furthermore, $\hat{f}_{x_j}(m)=f_{x_j}(m)$ where the Dirichlet series generating function for $f_{x_j}(m)$ is given by \begin{alignat*}{2} \Phi_{x_j}(s)&=\sum_{m=1}^{\infty}{\frac{f_{x_j}(m)}{m^s}}=\prod_{\stackrel{p\equiv 1(4)}{p\neq 5}}\frac{1+p^{-s}}{1-p^{-s}}\\ &=1+\tfrac{2}{13^s}+\tfrac{2}{17^s}+\tfrac{2}{29^s}+\tfrac{2}{37^s}+\tfrac{2}{41^s}+\tfrac{2}{53^s}+\tfrac{2}{61^s}+\tfrac{2}{73^s} +\tfrac{2}{89^s}+\tfrac{2}{97^s}+\tfrac{2}{101^s}+\tfrac{2}{109^s}+\tfrac{2}{113^s}+\\ &\quad\tfrac{2}{137^s}+ \tfrac{2}{149^s}+\tfrac{2}{157^s}+\tfrac{2}{169^s}+\tfrac{2}{173^s}+\tfrac{2}{181^s}+ \tfrac{2}{193^s}+\tfrac{2}{197^s}+\tfrac{4}{221^s}+\tfrac{2}{229^s}+\ldots.\,. \end{alignat*} \end{ex} \begin{ex} $x=\frac{2}{5}+\frac{1}{5}i=\frac{i}{1+2i}$ \setlength{\unitlength}{1.25in} \begin{center} \begin{picture}(0.65,0.80)(0,-0.15) \linethickness{0.10pt} \put(0,0){\vector(1,0){0.65}} \put(0,0){\vector(0,1){0.65}} \put(-0.1,0.48){$\frac{1}{2}$} \put(0.46,-0.18){$\frac{1}{2}$} \multiput(0,0.5)(0.105,0){5}{\line(1,0){0.055}} \thicklines \put(0,0){\line(1,0){0.5}} \put(0.5,0){\line(0,1){0.5}} \put(0,0){\line(1,1){0.5}} \put(0.4,0.2){\circle*{0.08}} \put(0.3,0.2){\makebox(0,0){$x$}} \end{picture} \end{center} Since the denominator $q=1+2i$ of $x$ does not divide $1-i$, we see from \eqref{ezmcz} that $q|(\varepsilon z-\overline{z})$ if and only if $5|(\varepsilon z-\overline{z})$. Hence, $SOC(x+\Gamma)=SOC(\frac{1}{5}+\Gamma)\cong{\mathbb{Z}}^{(\aleph_0)}$ by Lemma \ref{prop8}. Applying Theorem \ref{prop5}, if $RT_r\in OC(x+\Gamma)$ where $R(z,\varepsilon)\in SOC(\Gamma)$ then $5|N(z)$ because $OC(x+\Gamma)$ does not contain a reflection in $P(\Gamma)$. Observe that given $z$ with $5|N(z)$, we must find either $1+2i$ or $1-2i$ (and not both) in the factorization of $z$ into primes in $\mathbb{Z}[i]$. 
If $(1-2i)|z$ then $z\overline{x}\in\mathbb{Z}[i]$ and $\varepsilon z\overline{x}-\overline{z}x=\varepsilon z\overline{x}-\overline{z\overline{x}}\in\mathbb{Z}[i]$, $\forall\varepsilon$. Lemma \ref{lem3} then implies that \[OC(x+\Gamma)=SOC(x+\Gamma)\cup\set{RT_r: R(z,\varepsilon)\in SOC(\Gamma)\text{ with }(1-2i)|z}.\] Here, we have an example of a set $OC(x+\Gamma)$ that is not a group. Indeed, let $T_k=R_kT_r\in OC(x+\Gamma)\setminus SOC(x+\Gamma)$ where $R_k=R_k(z,\varepsilon_k)\in SOC(\Gamma)$, for $k=1,2$, with $\varepsilon_1\neq\varepsilon_2$. We obtain that $T_1T_2$ is not the identity with $T_1T_2=R_1{R_2}^{-1}\in P(\Gamma)$ which means that $T_1T_2\notin OC(x+\Gamma)$. By Theorem \ref{prop5}, $OC(x+\Gamma)$ is not a subgroup of $OC(\Gamma)$. The Dirichlet series generating function for $f_{x}(m)$ is given by \begin{alignat*}{2} \Phi_{x}(s)&=\sum_{m=1}^{\infty}{\frac{f_{x}(m)}{m^s}}=\frac{1}{1-5^{-s}}\cdot\prod_{\stackrel{p\equiv 1(4)}{p\neq 5}}\frac{1+p^{-s}}{1-p^{-s}}\\ &=1+\tfrac{1}{5^s}+\tfrac{2}{13^s}+\tfrac{2}{17^s}+\tfrac{1}{25^s}+\tfrac{2}{29^s}+\tfrac{2}{37^s}+\tfrac{2}{41^s}+\tfrac{2}{53^s}+\tfrac{2}{61^s}+\tfrac{2}{65^s}+ \tfrac{2}{73^s}+\ldots\,. \end{alignat*} Also, if we denote by $\hat{F}_x(m)$ the number of (linear) coincidence isometries of $x+\Gamma$ of index $m$, we obtain that \[\hat{F}_{x}(m)=\left\{\begin{aligned} f_{x}(m) &\;\text{if } 5 \nmid\, m\\ 4\,f_{x}(m) &\;\text{if } 5\,|\, m.\\ \end{aligned}\right.\] Hence, the Dirichlet series generating function for $\hat{F}_{x}(m)$ is given by \begin{alignat*}{2} \Psi_{x}(s)&=\sum_{m=1}^{\infty}{\frac{\hat{F}_{x}(m)}{m^s}}=\frac{1+3\cdot 5^{-s}}{1-5^{-s}}\cdot\prod_{\stackrel{p\equiv 1(4)}{p\neq 5}}\frac{1+p^{-s}}{1-p^{-s}}\\ &=1+\tfrac{4}{5^s}+\tfrac{2}{13^s}+\tfrac{2}{17^s}+\tfrac{4}{25^s}+\tfrac{2}{29^s}+\tfrac{2}{37^s}+\tfrac{2}{41^s}+\tfrac{2}{53^s}+\tfrac{2}{61^s}+\tfrac{8}{65^s}+ \tfrac{2}{73^s}+\ldots\,. 
\end{alignat*} \end{ex} \section{Conclusion and outlook} We have seen that the coincidence isometries of a shifted lattice are also coincidence isometries of the original lattice. Moreover, the CSLs of the shifted lattice are merely translations of CSLs of the original lattice. Thus, no new values of coincidence indices $\Sigma$ are obtained by shifting the lattice, and some $\Sigma$-values even disappear or their multiplicity is reduced. The coincidences of a shifted square lattice were examined in this paper by identifying the lattice with the ring of Gaussian integers. The problem was completely solved for the case when the shift has an irrational component. For the remaining case, that is, when the shift may be written as a quotient of two Gaussian integers that are relatively prime, one needs to compute the set of coincidence rotations for each possible denominator via some divisibility condition. Partial results are given here and in \cite{LZ} on how to obtain the coincidence isometries and indices for any given denominator and corresponding numerator. General results in this direction will depend on the arithmetic of the Gaussian integers. It should be emphasized that rotations and translations of a lattice do not commute in general. In this paper, we compare the shifted lattice with its rotated copy, that is, the translation (say by $x$) is applied before the rotation. This corresponds to the situation where the lattice is first rotated by $R$ and is shifted afterwards by the vector $Rx-x$, which is equivalent to a rotation of the lattice about a different point ($-x$), thus keeping at least one point ($-x$) fixed. In particular, the CSLs of a shifted lattice are shifted copies of the intersection of a lattice with a rotated, then translated version of the same lattice (see \cite{LZ}). The next step is to extend these results to planar modules by identifying the modules with rings of cyclotomic integers (\cite{PBR}). 
General results for lattices will of course also hold in three dimensions, but the approach in this case will not be via complex numbers but via quaternions. Finally, it is expected that the ideas behind the study of a shifted lattice may be applied to crystals where there is more than one atom per primitive unit cell, as described in \cite{GP,PV}. A related mathematical problem is the following: Suppose the coincidence problem for a sublattice (of finite index) of a given lattice has already been solved. What can be deduced about the coincidence indices of the original lattice? A possible approach to answer this question involves looking at the coincidences of the corresponding cosets, which are just shifted copies of the sublattice. \subsection*{Acknowledgements} The authors are grateful to the referee for his valuable remarks on the manuscript. M. Loquias would like to thank the Deutscher Akademischer Austausch Dienst (DAAD) for financial support during his stay in Germany. This work was supported by the German Research Council (DFG), within the CRC~701.
arXiv:1002.0519 [math.MG, math.CO]: "Coincidence isometries of a shifted square lattice", https://arxiv.org/abs/1002.0519 (2010).
https://arxiv.org/abs/2205.12249
Competitive Algorithms for Block-Aware Caching
We study the block-aware caching problem, a generalization of classic caching in which fetching (or evicting) pages from the same block incurs the same cost as fetching (or evicting) just one page from the block. Given a cache of size $k$, and a sequence of requests from $n$ pages partitioned into given blocks of size $\beta\leq k$, the goal is to minimize the total cost of fetching to (or evicting from) cache. We show the following results:

$\bullet$ For the eviction cost model, we show an $O(\log k)$-approximate offline algorithm, a $k$-competitive deterministic online algorithm, and an $O(\log^2 k)$-competitive randomized online algorithm.

$\bullet$ For the fetching cost model, we show an integrality gap of $\Omega(\beta)$ for the natural LP relaxation of the problem, and an $\Omega(\beta + \log k)$ lower bound for randomized online algorithms. The strategy of ignoring the block structure and running a classical paging algorithm trivially achieves an $O(\beta)$ approximation and an $O(\beta \log k)$ competitive ratio respectively for the offline and online-randomized setting.

$\bullet$ For both fetching and eviction models, we show improved bounds for the $(h,k)$-bicriteria version of the problem. In particular, when $k=2h$, we match the performance of classical caching algorithms up to constant factors.

Our results establish a separation between the tractability of the fetching and eviction cost models, which is interesting since fetching/eviction costs are the same up to an additive term for classic caching. Previous work only studied online deterministic algorithms for the fetching cost model when $k > h$. Our insight is to relax the block-aware caching problem to a submodular covering LP. The main technical challenge is to maintain a competitive fractional solution, and to round it with bounded loss, as the constraints of this LP are revealed online.
\section{Appendix} \label{sec:appendix} \subsection{Deferred Proofs} \label{sec:extra_proofs} \betaoff* \begin{proof} Consider the following instance. For any $\bsize$, let $n = 2\bsize^2$ pages be organized into $2\bsize$ blocks of size $\bsize$. Let $P$ be the first $\bsize$ blocks and $Q$ be the second $\bsize$ blocks. We set $k=\bsize^2$, and fill the cache initially with all the pages of the $P$ blocks. The request sequence consists of rounds. For $i=1,\dots,\bsize$, in round $i$ request the first $\bsize-i$ pages of each $P$ block, and the first $i$ of the $Q$ blocks in their entirety, and repeat this sequence $L$ times within the round. For sufficiently large constant $L$, the optimal solution must have precisely the requested pages of a round in its cache. Thus, in round $i$ it evicts the $(\bsize-i+1)^{th}$ page of each $P$ block and fetches the $i^{th}$ $Q$ block in its entirety. The fetching cost of this solution is $\bsize$, while the eviction cost is $\bsize^2$. To see the other direction, observe that if we instead start the cache with the pages of the $Q$ blocks, and for $i=1,\dots,\bsize$ we make round $i$ request precisely the pages \emph{not} requested in round $i$ above (and once again repeat this sequence $L$ times), the optimal solution will always evict one $Q$ block in its entirety and fetch a single page from each $P$ block in each round. Here the fetching cost is $\bsize^2$ and the eviction cost is $\bsize$. \end{proof} \fsubmod* \begin{proof} Consider the function $g^\tau$, where \begin{align*} g^\tau(S) &:= \left|\{p : p \text{ is \textit{missing} at time } \tau \text{ according to } S\}\right| \\ &= \left| \bigcup_{\phi \in S} \{p : p \text{ is \textit{missing} at time } \tau \text{ according to } \phi\}\right|.\end{align*} $g^\tau$ is a coverage function, and hence it is submodular. The function $f^\tau$ is the minimum of $g^\tau$ and the constant function $n-k$, and so $f^\tau$ is also submodular. 
\end{proof} \smaxint* \begin{proof} It suffices to show that if $\phi$ violates a constraint $(S,\tau)$, and $(B_0,t_0)$ is such that $\phi_{B_0}^{t_0} = 1$, then $\phi$ also violates $(S \cup \{(B_0,t_0)\}, \tau)$. Since the constraint $(S,\tau)$ is violated, \begin{align*} n - k - f_\tau(S) &> \sum_{B,t} f_\tau((B,t) \mid S) \cdot \phi_{B}^t \\ &= \sum_{(B,t) \neq (B_0,t_0)} f_\tau((B,t) \mid S) \cdot \phi_{B}^t + f_\tau((B_0, t_0) \mid S)\\ &\geq \sum_{(B,t) \neq (B_0,t_0)} f_\tau((B,t) \mid S \cup \{(B_0, t_0)\}) \cdot \phi_{B}^t \\ & \quad + f_\tau((B_0, t_0) \mid S) \end{align*} where the second inequality above used submodularity. Rearranging gives that \[\sum_{(B,t) \neq (B_0,t_0)} f_\tau((B,t) \mid S \cup \{(B_0, t_0)\}) \cdot \phi_{B}^t < n - k - f_\tau(S \cup \{(B_0, t_0)\}). \qedhere\] \end{proof} \structsol* \begin{proof} To guarantee the first property, every time a page from a block $B$ is evicted to extent $\nicefrac{1}{2}$, evict the entire block $B$ for a cost of $c_B$. Charge this eviction to the evictions that caused this page to go from $0$ to $\nicefrac{1}{2}$, which cost at least $c_B/2$. To ensure the second property, consider the following online algorithm. \begin{algorithm} \caption{Structure Solution} \label{alg:structsol} \begin{algorithmic}[1] \State Define solution $\widetilde \phi$ such that $\widetilde \phi_B^t = \phi_B^t + \mathbbm{1} \{\phi_B^t \geq \nicefrac{1}{2}\} \cdot (1 - \phi_B^t)$. \For{block $B$} \State Set $t_B \leftarrow 0$. \For{time $t \in [T]$} \State Let $\Delta = \sum_{t' = t_B + 1}^t \widetilde \phi_{B}^{t'}$. \If{$\Delta \geq \nicefrac{1}{4k^2}$} \State $\varphi_B^t \leftarrow \Delta$. \State $t_B \leftarrow t$. \label{line:tb_set} \EndIf \EndFor \EndFor \State Output $\widetilde \varphi = \min(2 \cdot \varphi, 1)$. \end{algorithmic} \end{algorithm} The structural guarantee that every nonzero coordinate has $\varphi_B^t \geq \nicefrac{1}{4k^2}$ holds by construction. The cost is also less than $2 \cdot c(\phi)$ by construction.
It remains to show $\varphi$ is feasible. Consider any constraint of the form: \[\sum_{B,t} f_\tau((B,t) \mid S) \cdot \phi_B^t \geq n - k - f_\tau(S)\] For a block $B$, let $\tau_B$ be the last time before $\tau$ that $t_B$ was set to in \cref{line:tb_set}. By construction \begin{align*} \sum_{t = \tau_B+1}^\tau \phi_B^t \leq \frac{1}{4k^2}. \end{align*} Since $f_\tau((B,t) \mid S) \leq k$, and by the property that every $p$ has $x_p^t \in [0,\nicefrac{1}{2}]\cup\{1\}$, there are at most $2k$ blocks containing a page that is fractionally in cache. This also means \begin{align*} \sum_B \sum_{t = \tau_B+1}^\tau f_\tau((B,t) \mid S) \cdot \phi_B^t \leq \frac{1}{2} \end{align*} and hence \begin{align*} \sum_{B,t} f_\tau ((B,t) \mid S) \cdot \varphi_B^t \geq n - k - f_\tau(S) - \frac{1}{2}. \end{align*} If $n - k - f_\tau(S) = 0$, then the constraint is also trivially satisfied by $\varphi$. Else $n - k - f_\tau(S) \geq 1$. To conclude, note that since $S$ is maximal-integral, and $\phi$ has no coordinates $\phi_B^t \in (\nicefrac{1}{2}, 1)$: \begin{align*} \sum_{t\leq \tau} \sum_B f_\tau((B,t) \mid S) \cdot \varphi_B^t &= 2\sum_{t\leq \tau} \sum_B f_\tau((B,t) \mid S) \cdot \phi_B^t \\ &\geq 2\left(n - k - f_\tau(S) - \frac{1}{2} \right) \\ &\geq n - k - f_\tau(S) \end{align*} If $\varphi$ satisfies all maximal-integral constraints, \cref{claim:S_max} implies it also satisfies all other constraints, and the lemma statement follows. \end{proof} \loading* \begin{proof} Let $\overline c(z)$ and $\overline c_{\textsc{Fetch}}(z)$ be the classic paging eviction/fetching cost of $z$, i.e.\ the cost if page fetches/evictions cannot be batched in blocks. The difference between the total fetching cost and eviction cost paid for a single page is at most $c(B(p))$, and hence $\overline c(z) = \overline c_{\textsc{Fetch}}(z) \pm \sum_{B \in \mathcal{B}} c_B \cdot \bsize$. On the other hand, $\overline c(z) \leq \bsize \cdot c(z)$.
Combining these observations: \[c_{\textsc{Fetch}}(z) \leq \overline{c}_{\textsc{Fetch}}(z) \leq \overline{c}(z) + \bsize \cdot \sum_{B \in \mathcal{B}} c_B \leq \bsize \cdot \left(c(z) + \sum_{B \in \mathcal{B}} c_B\right). \qedhere\] \end{proof} \subsection{The Natural LP has $\Omega(\beta)$ Integrality Gap} \label{sec:basic_LP_formulation} Consider the following simple LP formulation, where $\sigma \in \{-1, 1\}$ is a fixed constant. We use $x_p^t$ for the fraction of page $p$ missing from the cache at time $t$. \begin{align} \label{eq:naive_lp} \begin{array}{|rl|} \hline & \\ \displaystyle \min_{\phi,x} & \displaystyle \sum_{B,t} c_B \cdot \phi_B^t \\ \text{subject to} & \\ & \\ \forall t \in [T]: & x_{p(t)}^t = 0 \\ & \\ \forall t \in [T], \forall B \in \mathcal{B}, \forall p \in B: & \phi_B^t \geq \sigma \left(x_p^t - x_p^{t-1}\right) \\ & \\ \forall t \in [T]: & \sum_{p} x_p^t \geq n-k \\ & \\ \forall t \in [T], \forall B \in \mathcal{B}: & \phi_B^t \in [0,1] \\ \forall t \in [T], \forall p \in [n]: & x_p^t \in [0,1] \\ & \\ \hline \end{array} \end{align} If $\sigma = 1$, this is the eviction cost model and $\phi_B^t$ denote the fractional extent to which $B$ is evicted at time $t$; if $\sigma = -1$, this is the fetching cost model and $\phi_B^t$ denote the fractional extent to which $B$ is fetched at time $t$. Unfortunately, for both fetching and eviction cost models, this LP has an integrality gap of $\Omega(\bsize)$. Consider the following instance in which $n=2\bsize$ pages are divided into two blocks $B_1$ and $B_2$. The cache is of size $k=2\bsize-1$, and is initially empty. The request sequence repeats for several rounds. In each round, it requests first all pages from $B_1$ and then all pages from $B_2$. The integral algorithm must pay at least $1$ per round, since $2\bsize$ pages are requested and the cache is of size $2\bsize - 1$. On the other hand, the fractional solution begins by loading both blocks to extent $(\bsize-1)/\bsize$. 
Subsequently, when $B_i$ is requested for $i \in \{1,2\}$, it loads $B_i$ to extent $1$ and $B_j$ (where $j\neq i$) to extent $(\bsize-1)/\bsize$. Hence both the fractional fetching and eviction costs per round are $2/\bsize$. We summarize this observation in the following theorem. \begin{theorem} The simple LP relaxation for block-aware caching with both fetching/eviction cost models has an integrality gap of $\Omega(\bsize)$. \end{theorem} \section{Conclusion and Open Questions} \label{sec:conclusion} In this work we significantly expand on known results of \cite{beckmann2021brief} for the block-aware caching problem. For the eviction cost model, we give the first set of algorithms avoiding the trivial multiplicative $\bsize$ overhead over their classical paging counterparts. We also give $(h,k)$-bicriteria algorithms for both fetching and eviction cost models, which we in turn use to adapt the lower bound of \cite{beckmann2021brief} for the fetching cost model to randomized algorithms. We leave the following as interesting open questions. \begin{question} Can our $O(\log^2 k)$ competitive ratio for block-caching in the eviction cost model be improved to $O(\log k)$ to match the lower bound from classical paging? \end{question} Note that even for the special case of generalized caching, the first result of \cite{BBN08} achieved competitive ratio $O(\log^2 k)$. The subsequent improvement due to \cite{ACER19} required non-trivial ideas which seem difficult to adapt to the more general block-aware caching problem. \begin{question} Is there an algorithm for block-caching in the fetching cost model matching the lower bound of $\Omega(\bsize+\log k)$, or can the lower bound be strengthened to match the trivial $O(\bsize \log k)$ upper bound? \end{question} \begin{question} Are there constant-approximation \emph{offline} algorithms for block-caching in either the fetching or eviction cost models?
\end{question} \section{Eviction Cost} \label{sec:eviction} In this section we show our algorithmic results for block-aware caching with respect to eviction costs. Our proof uses the (online) primal-dual method and hence requires an LP for the eviction cost model. It is straightforward to write a simple LP relaxation; unfortunately, the na\"{\i}ve relaxation has an integrality gap of $\Omega(\bsize)$ (see \cref{sec:basic_LP_formulation}), and recall that our goal is to beat the trivial algorithm's linear dependence on $\bsize$. \subsection{Submodular Cover LP Formulation} \label{sec:subcov_form} To circumvent the na\"{\i}ve LP barrier, we strengthen the formulation using ideas from Wolsey's submodular cover LP. We start with some notation. A \emph{flush} is a tuple $(B,t) \in \mathcal{B} \times [T]_0$. The flush $(B,t)$ corresponds to the event of evicting all cached pages of block $B$ at time $t$. (There is no reason to only evict some of them, since they can be fetched back for free.) Let $S$ be a set of flushes. We say that a page $p$ is \textit{missing} at time $\tau$ according to $S$ if there exists $t$ with $r(p,\tau) < t \leq \tau$ such that $(B(p),t) \in S$.\footnote{Adding to $S$ all flushes of the form $(B,0)$ ensures that also never-requested pages are missing by this definition.} Crucially, this definition ensures that the page $p_t$ requested at time $t$ is not missing at time $t$. We say that an algorithm is \emph{induced} by a set of flushes $S$ if the algorithm evicts all pages of block $B$ (except $p_t$) at time $t$ if and only if $(B,t) \in S$, and always loads $p_t$ at time $t$. Let $n_t$ be the number of pages requested up until time $t$.
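As an illustration, the ``missing'' predicate and the cache trajectory of the induced algorithm can be sketched in a few lines of Python (a toy model of our own, not code from this work; pages, blocks and the request encoding are made up, and we encode $r(p,\tau)=-\infty$ as $-1$ so that the time-$0$ flushes of the footnote take effect):

```python
# Toy model (illustration only): pages are ints, block_of maps page -> block
# id, reqs[t-1] is the page requested at time t, and S is a set of flushes
# (block, t) with t in {0, ..., T}.

def last_request(reqs, p, tau):
    """r(p, tau): the last time t' <= tau with reqs[t'-1] == p; -1 if p was
    never requested, so that a time-0 flush (B, 0) makes p missing."""
    for t in range(tau, 0, -1):
        if reqs[t - 1] == p:
            return t
    return -1

def is_missing(p, tau, S, reqs, block_of):
    """p is missing at time tau according to S iff some flush of its block
    happened strictly after its last request and at or before tau."""
    r = last_request(reqs, p, tau)
    return any(B == block_of[p] and r < t <= tau for (B, t) in S)

def induced_cache(tau, S, reqs, block_of, pages):
    """Cache content at time tau of the algorithm induced by S: load the
    requested page each step, and flush blocks as dictated by S."""
    cache = set(pages)                    # start full; the (B, 0) flushes clear it
    for t in range(0, tau + 1):
        p_t = reqs[t - 1] if t >= 1 else None
        if t >= 1:
            cache.add(p_t)
        for (B, tf) in S:
            if tf == t:                   # evict block B, but never p_t itself
                cache -= {q for q in cache if block_of[q] == B and q != p_t}
    return cache
```

On any such toy instance, the set $\{p : p$ is missing at $\tau\}$ coincides with the complement of `induced_cache(tau, ...)`, which is the consistency the definition is engineered to provide; in particular the page requested at time $\tau$ is never missing at time $\tau$.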
We use the above to define a set function $f_\tau: 2^{\mathcal{B} \times [T]_0} \rightarrow \mathbb{Z}$ on sets of flushes: \begin{align*} f_\tau(S) &:= \min(n-k, \ \left|\{p : p \text{ is \textit{missing} at time } \tau \text{ according to } S\}\right|) \end{align*} In words, $f_\tau(S)$ is the number of pages that are outside of the cache at time $\tau$ for the algorithm induced by $S$, where this number is capped at $n-k$. The algorithm induced by a set of flushes $S$ is feasible iff $f_\tau(S) \geq n-k$ for all $\tau$. We show the following simple fact in \cref{sec:extra_proofs}: \begin{restatable}{claim}{fsubmod} \label{claim:fsubmod} For every $\tau$, the function $f_\tau$ is submodular. \end{restatable} \begin{figure} \begin{mdframed} \centering \scalebox{0.65}{\input{ftau_figure.tikz}} \caption{Illustration of the function $f_\tau$. Each line represents a page, and each horizontal bar within the line represents an interval in which the page is not requested. Pages are grouped into their corresponding blocks. The solid vertical lines represent flushes. Suppose $n = 8$ and $k= 4$. Then $f_\tau(\{(B_1, t_1)\}) = 2$, $f_\tau(\{(B_2, t_2)\}) = 3$, but $f_\tau(\{(B_1, t_1), (B_2, t_2)\}) = 4$.} \label{fig:ftau} \end{mdframed} \end{figure} With the notation above, we can reformulate the block-aware caching problem with eviction cost as the solution to\footnote{This formulation is reminiscent of online and dynamic submodular cover problems \cite{gupta2020online,dynamicsubmod} in which the goal is also to maintain a feasible submodular cover while the underlying submodular function changes over time. However the cost models in these other works are very different.} \begin{align} \label{eq:block_caching} \begin{array}{|rl|} \hline & \\ \displaystyle \min_{S \subseteq \mathcal{B} \times [T]_0} & \displaystyle \sum_{\substack{(B,t) \in S \\ t\geq 1}} c_B \\ \text{subject to} & \\ & \\ \forall \tau \in [T]: & f_\tau(S) \geq n-k.
\\ & \\ \hline \end{array} \end{align} Note that because $f_\tau(S)$ counts the number of pages evicted by $S$ that are \emph{not} $p_\tau$, this single constraint captures both that the algorithm must flush at least $n-k$ pages in order to respect the cache size limit, and that the cache must contain page $p_t$ at time $t$. Note as well that we allow the algorithm to perform flushes at time $0$, but only charge the cost for flushes performed from time $1$ onwards. This conveniently allows the algorithm to clear the cache initially at no extra cost. Finally, we are ready to write our LP, which is the intersection of the submodular cover LPs of \eqref{eq:wolsey_lp} for the functions $f_\tau$, across all time steps $\tau \in [T]$. \begin{align} \begin{array}{|rl|} \multicolumn{2}{c}{\text{Primal}} \\ \hline & \\ \min & \displaystyle \sum_{B,t\geq 1} c_B \cdot \phi_B^t \\ \text{subject to} & \\ & \\ \begin{array}{c} \forall S\subseteq \mathcal{B} \times [T]_0, \\ \forall \tau \in [T] \end{array}: & \begin{array}{l} \displaystyle \sum_{B,t} f_\tau((B,t) \mid S) \cdot \phi_B^t \geq n-k-f_\tau(S) \end{array} \\ & \\ \forall B \in \mathcal{B}, t \in [T]_0: & \phi_B^t \geq 0 \\ & \\ \hline \end{array} \label{eq:lp_p} \tag{P} \end{align} We will require the dual of this program, which is \begin{align} \begin{array}{|rl|} \multicolumn{2}{c}{\text{Dual}} \\ \hline & \\ \max & \displaystyle \sum_{S, \tau} \left(n-k - f_\tau(S)\right) \cdot y^\tau_S \\ \text{subject to} & \\ & \\ \forall B \in \mathcal{B}, t \in [T]: & \displaystyle \sum_{S, \tau} f_\tau((B,t) \mid S) \cdot y^\tau_S \leq c_B \\ & \\ \begin{array}{c} \forall S\subseteq \mathcal{B} \times [T], \\ \forall \tau \in [T] \end{array}: & y_S^\tau \geq 0 \\ \hline \end{array} \label{eq:lp_d} \tag{D} \end{align} That \hyperref[eq:lp_p]{(P)} is a valid relaxation of \eqref{eq:block_caching} follows from \cref{lem:wolsey_integerpts}.
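For concreteness, the coefficients $f_\tau((B,t) \mid S)$ appearing in \hyperref[eq:lp_p]{(P)} can be computed directly from the definition of $f_\tau$. The following Python sketch (ours, on made-up instance data; not code from this work) does so and exhibits the role of the truncation at $n-k$:

```python
# Toy model (illustration only) of f_tau and its marginals.

def f(tau, S, reqs, block_of, pages, k):
    """f_tau(S): number of pages missing at time tau according to S,
    truncated at n - k."""
    def last_request(p):              # r(p, tau); -1 encodes "never requested"
        for t in range(tau, 0, -1):
            if reqs[t - 1] == p:
                return t
        return -1
    missing = {p for p in pages
               if any(B == block_of[p] and last_request(p) < t <= tau
                      for (B, t) in S)}
    return min(len(pages) - k, len(missing))

def marginal(tau, flush, S, reqs, block_of, pages, k):
    """The LP coefficient f_tau(flush | S) = f_tau(S + flush) - f_tau(S)."""
    return (f(tau, S | {flush}, reqs, block_of, pages, k)
            - f(tau, S, reqs, block_of, pages, k))
```

On a small instance the truncation makes the marginal of a flush strictly smaller in the presence of other flushes, which is exactly the submodularity of \cref{claim:fsubmod}.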
\begin{claim}[Corollary of \cref{lem:wolsey_integerpts}] \label{lem:integerpts} A set $S$ has $f_\tau(S) = n-k$ for all $\tau$ if and only if $\chi_S$, the characteristic vector of $S$, is a feasible integer solution to \hyperref[eq:lp_p]{(P)}. \end{claim} We note for intuition's sake that even the constraints \[\sum_{B,t} f_\tau((B,t)) \cdot \phi_B^t \geq n-k\] alone already avoid the bad integrality gap example of \cref{sec:basic_LP_formulation}. One reason is that truncating $f_\tau$ at $n-k$ prevents the LP from overestimating how much space will be saved by evictions. Given an LP solution $\phi$, we also define the fractional value of a page $p$ missing from cache at time $t$ to be \begin{align}x_p^t := \begin{cases} 1 & \text{if } r(p,t) = -\infty. \\ \min\left\{1,\sum_{u =r(p,t)+1}^{t} \phi_{B(p)}^u\right\} & \text{otherwise.} \end{cases} \label{eq:xpt_def}\end{align} Intuitively, whenever some fraction $\delta$ of a flush $(B,t)$ is chosen, we imagine increasing the fractional amount by which each page in $B$ is evicted to extent $\delta$. On the other hand when a page $p_t$ is requested at time $t$, we reset its fractional value to $x^t_{p_t} = 0$. \subsection{A $k$-Competitive Deterministic Online Algorithm} \label{sec:deterministic} Our first algorithmic result is a $k$-competitive deterministic online algorithm, which beats the trivial $k\bsize$ competitive ratio obtained by running the deterministic online algorithm for classical paging. Our deterministic algorithm is given in \cref{alg:onlinedet}. The algorithm constructs simultaneously a primal solution $x$ and a dual solution $y$ to the LPs~\eqref{eq:lp_p} and \eqref{eq:lp_d}. We will ensure that these solutions satisfy all constraints known up to that time. We use $C(\tau)$ for the set of pages in cache at time $\tau$, and we use $S$ for the set of flushes performed by the algorithm so far. 
At the start of the algorithm, $S$ is initialized as the set of all flushes of all blocks at time $0$; this amounts to clearing the initial cache. The primal solution $\phi$ is set to the characteristic vector of $S$, and the dual solution $y$ is initialized as the all-$0$-vector. At time $\tau$, we first add the requested page $p_\tau$ to the cache. If this violates the cache constraint, we continuously increase the dual variable $y_S^\tau$ corresponding to the current set $S$ and the current time $\tau$ until the dual constraint corresponding to some $(B,t)$ with $f_{\tau}((B,t)\mid S) \geq 1$ becomes tight. Once this happens, we evict all pages of block $B$ that are in cache (except $p_\tau$, in case it belongs to this block) and update $\phi$ and $S$ to reflect that the flush $(B,\tau)$ has been performed. \begin{algorithm} \caption{Deterministic Online Algorithm} \label{alg:onlinedet} \begin{algorithmic}[1] \State $S \leftarrow \{(B, 0) : B \in \mathcal{B} \}$\label{line:initS} \State $\phi \leftarrow \chi_S$, \ $y \leftarrow \vec 0$ \For{time $\tau = 1, \ldots, T$} \State $C(\tau) \leftarrow C(\tau-1) \cup \{p_{\tau} \}$ \If{$|C(\tau)| > k $} \State Increase $y^\tau_S$ until the dual constraint corresponding to some $(B,t)$ for which ${f_{\tau}((B,t)\mid S) \geq 1}$ is tight. \State $C(\tau) \leftarrow C(\tau)\setminus(B\setminus\{p_{\tau}\})$. \State $\phi_B^{\tau} \leftarrow 1$. \label{line:det_setto1} \State $S \leftarrow S \cup \{(B,\tau)\}$. \EndIf \EndFor \end{algorithmic} \end{algorithm} We show that both primal and dual solutions are feasible, and that the cost of the primal is at most $k$ times the cost of the dual. By weak duality, this implies: \begin{theorem} \cref{alg:onlinedet} is $k$-competitive. \end{theorem} We begin by showing feasibility. \begin{lemma} \label{det_primal_feasible} \cref{alg:onlinedet} terminates. Upon termination, $\phi$ is feasible for~\eqref{eq:lp_p} and $y$ is feasible for~\eqref{eq:lp_d}.
\end{lemma} \begin{proof} We first show that the algorithm terminates and the primal is feasible. Assume by induction that the algorithm maintains a feasible cache for every time step strictly less than $\tau$ (it is trivially feasible at time $0$). Since exactly one page is requested per time step, if the cache is not feasible at the beginning of time step $\tau$, then $|C(\tau)| = k+1$. If this is the case, then there must exist some $(B,t)$ such that $f_\tau((B,t) \mid S) \geq 1$, in which case the dual constraint corresponding to such a $(B,t)$ will become tight after $y_S^\tau$ is increased sufficiently (in particular, the increase of $y_S^\tau$ terminates). Note that $t\le \tau$ in this case. Then $f_\tau((B,\tau) \mid S) \geq f_\tau((B,t) \mid S) \geq 1$, so at least one page is evicted upon performing the flush $(B,\tau)$, and thus feasibility is restored at time $\tau$. Hence the algorithm maintains a feasible cache state for every time $\tau$, and by \cref{lem:integerpts} it follows that the primal is feasible for \eqref{eq:lp_p}. We now show feasibility of the dual. Clearly $y_S^\tau\ge 0$. Increasing $y_S^\tau$ could lead to a violation of the dual constraint corresponding to $(B,t)$ only if $f_\tau((B,t)\mid S)\ge 1$, but in this case we stop once the constraint becomes tight. Thus, dual constraints are never violated. Note that dual variables $y_{S}^\tau$ corresponding to future time steps $\tau$ are $0$, so it does not matter that the coefficients $f_\tau((B,t)\mid S)$ of future time steps are not known yet. \end{proof} Finally, we relate the primal and dual costs. \begin{lemma} The cost of the primal is at most $k$ times the cost of the dual. \end{lemma} \begin{proof} The algorithm sets $\phi_B^{\tau} = 1$ if and only if $(B,\tau)\in S$, so the primal cost is \begin{align*} P &= \sum_{B, \tau} c_B \cdot \phi_B^{\tau} = \sum_{(B,\tau)\in S} c_B. 
\end{align*} The algorithm adds $(B,\tau)$ to $S$ only if a constraint corresponding to some $(B,t)=(B,t_\tau)$ with $f_\tau((B,t_\tau)\mid S)\ge 1$ becomes tight at time $\tau$. Thus, \begin{align} c_B = \sum_{S', u} f_{u}((B,t_\tau) \mid S') \cdot y_{S'}^{u}.\label{eq:tightConstr} \end{align} To account for our primal cost, for every flush $(B,\tau)\in S$, we charge $f_u((B,t_\tau)\mid S')\cdot y^u_{S'}$ of the cost of the flush to the dual variable $y^u_{S'}$. Since every non-zero dual variable $y_{S'}^u$ in our final solution has its coefficient in the objective $n-k-f_u(S') \geq 1$ (otherwise it would never have been increased), it suffices to argue that each dual variable $y_{S'}^u$ receives a total charge of at most $k \cdot y_{S'}^u$. To see this, note that $(B,\tau)$ only charges its cost to variables $y_{S'}^u$ for which $u \in [t_\tau,\tau]$. Indeed, $f_u((B,t_\tau)\mid S')>0$ only if $t_\tau\le u$; moreover, when the constraint in \eqref{eq:tightConstr} becomes tight at time $\tau$ we have $y_{S'}^u=0$ for $u>\tau$ and all $S'$, and such $y_{S'}^u$ could subsequently increase only if $f_{u}((B,t_\tau) \mid S')=0$ since otherwise the dual constraint would become violated. After $(B,\tau)$ is added to $S$, for all $t' \le \tau$ and all $\tau' \geq \tau$ the multiplier $f_{\tau'}((B,t') \mid S) = 0$, and so $y_{S'}^u$ is charged at most once by every block $B$. Furthermore, if flush $(B,\tau)$ charges dual variable $y_{S'}^u$ then the coefficient $f_u((B,t_\tau)\mid S')$ is at most the number of pages from block $B$ that were in cache at the end of time step $u-1$. Since the total number of pages in cache at the end of time step $u-1$ was at most $k$, the total amount charged to $y_{S'}^u$ is at most $k \cdot y_{S'}^u$. \end{proof} \subsection{An $O(\log k)$-Competitive Monotone-Incremental Fractional Algorithm} \label{sec:fractional} In this section we give a competitive fractional algorithm for block caching with eviction cost.
For simplicity of presentation, our algorithm is not online in the strict sense, as at time $\tau$ we allow it to change the value of $\phi_B^t$ for $t < \tau$. However, it has the crucial property that it only increases LP variables $\phi_B^t$. We call an algorithm with this property \emph{monotone-incremental}. This property suffices for our rounding procedure in \cref{sec:rounding} to yield an online algorithm. We prove: \begin{theorem} \label{thm:montone_incr} There is an $O(\log k)$-competitive monotone-incremental fractional algorithm for block-aware caching with eviction cost. \end{theorem} To describe the algorithm, we define a flush $(B,t)$ to be \emph{alive} at time $\tau$ if $t=r(p,\tau)+1$ for some $p\in B$. Intuitively, for an offline algorithm it is most beneficial to flush a block $B$ only at time steps directly after some page from $B$ was requested. Accordingly, our fractional algorithm will increase $\phi_B^t$ only for $(B,t)$ that are alive. The algorithm is given in \cref{alg:onlinerand}. It starts by initializing $S$ as the set of all flushes at time $0$, $\phi$ as the corresponding characteristic vector, and $y$ as the all-$0$-vector. At time $\tau$, when some constraint $(S',\tau)$ is violated, we will show that this will also be the case for some $S'\supseteq S$, so that the condition of the while-loop will be true. We then increase the corresponding dual variable as well as primal variables corresponding to all alive flushes according to \eqref{eq:incrPhi}. While doing so, we occasionally add new flushes to the set $S$. As we will show later, all flushes $(B,t)$ added to the set $S$ will satisfy $\phi_B^t=1$, i.e., they are chosen integrally by the fractional algorithm. \begin{algorithm} \caption{$O(\log k)$-competitive monotone-incremental fractional algorithm} \label{alg:onlinerand} \begin{algorithmic}[1] \State $S \leftarrow \{(B, 0) : B \in \mathcal{B} \}$. \State $\phi \leftarrow \chi_S$, \ $y \leftarrow \vec 0$. 
\For{time $\tau = 1, \ldots, T$} \While{$\exists S' \supseteq S$ s.t. primal constraint of $(S',\tau)$ violated} \label{alg:frac_startwhile} \State Increase $y^\tau_{S'}$ continuously, and meanwhile for every alive $(B,t)$ increase $\phi_B^{t}$ at rate \begin{align}\frac{d\phi_B^t}{dy_{S'}^\tau} = \frac{\ln(k \cdot \bsize+1)}{c_B} \cdot f_\tau ((B,t) \mid S') \cdot \left(\phi_B^{t} + \frac{1}{k\cdot \bsize}\right) \label{eq:incrPhi} \end{align} until the dual constraint corresponding to some alive $(B_0,t_0)$ with $f_\tau((B_0,t_0)\mid S')\ge 1$ becomes tight. \label{line:frac_roc} \State $S \leftarrow S \cup \{(B_0,t_0)\}$. \label{alg:frac_endwhile} \EndWhile \EndFor \end{algorithmic} \end{algorithm} \begin{lemma} \cref{alg:onlinerand} terminates. \end{lemma} \begin{proof} If the condition of the while-loop is true, then $f_\tau(S')<n-k$, and thus there exists some alive $(B_0,t_0)$ with $f_\tau((B_0,t_0)\mid S')\ge 1$. Increasing $y_{S'}^\tau$ sufficiently will eventually tighten a corresponding constraint, so each iteration of the while-loop terminates. When a new element is added to $S$ at the end of an iteration, $f_\tau(S)$ increases by at least $1$. Once $f_\tau(S)$ reaches the value $n-k$ (if not earlier), the condition of the while-loop can no longer hold. \end{proof} \begin{lemma} \label{lem:frac_pd_feasible} At the end of \cref{alg:onlinerand}, $\phi$ is feasible for~\eqref{eq:lp_p} and $y$ is feasible for~\eqref{eq:lp_d}. \end{lemma} Before proving \cref{lem:frac_pd_feasible}, we need the following claim, whose proof we leave for \cref{sec:extra_proofs}. \begin{definition} Given a fractional solution $\phi$, we say that the constraint $(S, \tau)$ is maximal-integral if for any $(B,t)$ such that $\phi_B^t = 1$ it holds that $(B,t) \in S$. \end{definition} \begin{restatable}{claim}{smaxint} \label{claim:S_max} If a fractional solution $\phi$ to \eqref{eq:lp_p} has no violated maximal-integral constraints, then $\phi$ is feasible.
\end{restatable} \begin{proof}[Proof of \cref{lem:frac_pd_feasible}] We first show feasibility of the dual. Suppose the dual constraint corresponding to some $(B,t)$ gets violated when $y_{S'}^\tau$ is increased. Let $t_0\le t$ be maximal such that $(B,t_0)$ is alive at time $\tau$. (If no such $t_0$ exists, then any pages evicted by the flush $(B,t)$ are requested again in $[t,\tau]$; but then $f_\tau((B,t)\mid S')=0$, so increasing $y_{S'}^\tau$ would not have led to a violation of the constraint corresponding to $(B,t)$.) Since no pages of $B$ are requested at times in $[t_0,t)$, we have $f_\tau((B,t_0)\mid S'')=f_\tau((B,t)\mid S'')$ for any $S''$, meaning that the dual constraint of $(B,t_0)$ would become violated at the same time during the increase of $y_{S'}^\tau$. But then $f_\tau((B,t_0)\mid S')\ge 1$ (otherwise, increasing $y_{S'}^\tau$ would not increase the left-hand side of constraint $(B,t_0)$) and therefore we would have stopped increasing $y_{S'}^\tau$ when the constraint got tight. To see that the primal is feasible, we will show that for all $(B,t)\in S$ we have $\phi_B^t=1$. It then follows from \cref{claim:S_max} in the appendix that if some primal constraint $(S'',\tau)$ is violated, then so is the constraint $(S',\tau)$ for some $S'\supseteq S$; thus, the algorithm would not have terminated. Consider the differential equation $dz / dy = \eta\cdot(z + \delta)$ for some constants $\eta\ge 0$ and $\delta>0$. When $y$ increases from $a$ to $b$, we have \begin{align} \ln(z(b) + \delta) - \ln(z(a) + \delta) = \eta\cdot(b-a).\label{eq:frac_ode} \end{align} For some $(B,t)$ that eventually gets added to $S$, consider the dynamics of $\phi_B^t$. It starts at $0$, and increases with every $y_{S'}^\tau$ according to \eqref{eq:incrPhi}.
Applying \eqref{eq:frac_ode} for every such $y_{S'}^\tau$ and summing, we have \begin{align*} &\ln\left(\phi_B^t + \frac{1}{k\cdot \bsize}\right) - \ln \left(\frac{1}{k\cdot \bsize}\right) = \sum_{S',\tau} \frac{\ln(k \cdot \bsize+1)}{c_B} \cdot f_\tau((B,t) \mid S') \cdot y_{S'}^\tau. \\ \intertext{Exponentiating and solving for $\phi_B^{t}$, we have that} \phi_B^{t} &= \frac{1}{k\cdot \bsize}\cdot \left( \exp\left(\frac{\ln(k \cdot \bsize+1)}{c_B} \sum_{S',\tau} f_\tau((B,t) \mid S') \cdot y_{S'}^\tau\right) -1 \right). \end{align*} In particular, when constraint $(B,t)$ becomes tight and is added to $S$, the value of $\phi_B^t$ is $1$. \end{proof} \begin{lemma} \label{lem:frac_cost} The cost of the primal is at most $O(\log k)$ times the cost of the dual. \end{lemma} \begin{proof} Consider the algorithm at a fixed time $\tau$ during a step in which $y_{S'}^\tau$ is increased by an infinitesimal amount $dy_{S'}^\tau$. The dual profit is $dy_{S'}^\tau \cdot (n-k-f_\tau(S'))$, and so it suffices to bound the corresponding increase in primal cost. The primal cost increase is: \begin{align*} &\sum_{B,t} c_B \cdot \frac{1}{c_B} \ln(k \cdot \bsize+1) \cdot f_\tau((B,t) \mid S') \cdot \left(\phi_B^t + \frac{1}{k\cdot \bsize}\right) \cdot dy_{S'}^\tau \\ &= \sum_{B,t} \ln(k \cdot \bsize+1)\cdot f_\tau((B,t) \mid S') \cdot \left(\phi_B^t + \frac{1}{k\cdot \bsize}\right) \cdot dy_{S'}^\tau. \end{align*} Since we only increase $y_{S'}^\tau$ if the corresponding constraint in the primal is not satisfied, we have \begin{align} \sum_{B,t} f_\tau((B,t) \mid S') \cdot \phi_B^t &< n - k - f_\tau(S'). \label{eq:frac_primalunsat} \\ \intertext{We claim that} \sum_{(B,t)\text{ alive}} \frac{f_\tau((B,t) \mid S')}{k\cdot \bsize} &\leq n - k - f_\tau(S').
\label{eq:frac_1overkl} \end{align} Inequalities \eqref{eq:frac_primalunsat} and \eqref{eq:frac_1overkl} together imply that the increase in the primal cost is at most $2\ln(k \cdot \bsize+1)\cdot(n-k-f_\tau(S'))\cdot dy_{S'}^\tau$, which in turn implies the lemma statement since $\bsize\le k$. To prove \eqref{eq:frac_1overkl}, we first show that \begin{align}\sum_B \frac{f_\tau ((B,\tau) \mid S')}{k} \leq n - k - f_{\tau}(S').\label{eq:frac_divminineq}\end{align} Note that $\sum_B f_\tau ((B,\tau) \mid S') \leq n - f_{\tau}(S') - 1$. In the case that $n - f_{\tau}(S') \geq k+1$, \eqref{eq:frac_divminineq} holds by the fact that $(z-1)/k \leq z - k$ for every $z\geq k+1$. Otherwise, when $n - f_{\tau}(S') < k+1$, then $f_\tau(S') = n-k$ (since $f_\tau$ is integer valued and truncated at $n-k$). Then \eqref{eq:frac_divminineq} holds with both sides equal to $0$. We now obtain \eqref{eq:frac_1overkl} via \begin{align*} \sum_{(B,t)\text{ alive}} \frac{f_\tau((B,t) \mid S')}{k\cdot \bsize} &\leq \sum_{(B,t)\text{ alive}} \frac{f_\tau((B,\tau) \mid S')}{k\cdot \bsize} \\ &\leq \sum_{B} \frac{f_\tau((B,\tau) \mid S')}{k}\\ &\leq n - k - f_\tau(S') \end{align*} where the second inequality uses that there are at most $\bsize$ flushes alive at any time, and the last inequality is \eqref{eq:frac_divminineq}. \end{proof} \subsection{An $O(\log (k\Delta))$-Competitive Online Randomized Rounding Scheme} \label{sec:rounding} Finally, we show an online $O(\log (k\Delta))$-competitive randomized rounding scheme for our block caching LP \eqref{eq:lp_p}. Recall the definition of the aspect ratio $\Delta$ from \cref{prelim} and note that in the standard unweighted setting, $\Delta = 1$. At time $\tau$, the algorithm evicts block $B$ with probability $\min(1, \gamma \cdot \phi_B^\tau)$, where $\gamma = O(\log (k\Delta))$.
If the cache is still infeasible at time $\tau$, evict an arbitrary block so long as at least one of its pages has $x_p^\tau > 0$ (recall from the definition \eqref{eq:xpt_def} that $x_p^t$ is the amount missing from page $p$ at time $t$). For clarity of exposition, the rounding procedure is written as if the underlying fractional solution $(x,\phi)$ is computed online. However, the procedure can be carried out so long as the solution is monotone-incremental; at time $\tau$, if the fractional solution increases any $\phi_B^t$ for $t < \tau$ by some amount $\delta_t$, we can evict $B$ at time $\tau$ with probability $\min(1, \gamma\cdot(\phi_B^\tau + \sum_{t < \tau} \delta_t))$. Thus, together with \cref{thm:montone_incr}, our rounding scheme implies: \begin{theorem} For block-aware caching with eviction cost, there exists an $O(\log k \log (k\Delta))$-competitive algorithm. \end{theorem} Furthermore, using the round-or-separate procedure of \cite{gupta2020online}, one can simultaneously solve and round the \subcov LP \eqref{eq:wolsey_lp} offline in polynomial time. Using the analysis of this section, this implies: \begin{theorem} For block-aware caching with eviction cost, there exists an $O(\log (k\Delta))$-approximation algorithm. \end{theorem} We perform the rounding assuming a few key structural properties of the fractional solution, which we show can be ensured (online) without changing our asymptotic guarantees. \begin{restatable}{lemma}{structsol} \label{lem:structured_solution} Let $(x,\phi)$ be a fractional solution for LP \eqref{eq:lp_p}. Up to an additional constant multiplicative factor in the competitive ratio, we can assume that $(x,\phi)$ has the following properties: \begin{itemize} \item For every time $t$, every page $p$ has $x_p^t \in [0,\nicefrac{1}{2}] \cup \{1\}$. \item Every nonzero coordinate has $\phi_B^t \geq \nicefrac{1}{4k^2}$. \end{itemize} \end{restatable} We defer the proof to \cref{sec:extra_proofs}.
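To make the two structural properties concrete, here is a small Python sketch (our own illustration, not the deferred proof) that post-processes a fractional solution so that both bullets hold. The clamping rule and helper names are assumptions: rounding a large missing-amount $x_p^t$ up to $1$ only increases eviction mass, while dropping tiny $\phi$-coordinates is exactly the step whose feasibility and constant cost blowup the appendix proof has to account for.

```python
def enforce_structure(x, phi, k):
    """Post-process a fractional solution so the two lemma properties hold:
    every x-value lies in [0, 1/2] or equals 1, and every surviving
    phi-coordinate is at least 1/(4k^2).  Illustrative sketch only; the
    deferred proof must also argue feasibility and an O(1) cost blowup."""
    x2 = {key: (1.0 if v > 0.5 else v) for key, v in x.items()}       # round large x up to 1
    phi2 = {key: v for key, v in phi.items() if v >= 1 / (4 * k * k)}  # drop tiny phi mass
    return x2, phi2

def has_structure(x, phi, k):
    """Check the two bullet properties of the lemma."""
    ok_x = all(v <= 0.5 or v == 1.0 for v in x.values())
    ok_phi = all(v >= 1 / (4 * k * k) for v in phi.values())
    return ok_x and ok_phi
```

For instance, with $k=2$ the threshold for $\phi$ is $1/16$, so a coordinate of value $10^{-9}$ is dropped while $0.5$ survives.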
A key technical tool in this section is a lemma from \cite{gupta2020online}, which in turn relies on a relationship between continuous extensions of submodular functions proven by \cite{vondrak2007submodularity}. \begin{lemma}[Lemma 2.5 of \cite{gupta2020online}] \label{lem:gl_rounding} Let $x \in [0,1]^n$ be a feasible solution to \eqref{eq:wolsey_lp}. Let $R$ be a set obtained by performing randomized rounding according to $\min(1,\gamma \cdot x)$. Then: \[\expectover{R}{f(R)} \geq f(\mathcal{N}) - e^{-\gamma} f(\mathcal{N}).\] \end{lemma} We can now present our rounding scheme. \begin{algorithm} \caption{$O(\log (k\Delta))$-Approximate Rounding} \label{alg:onlineround} \begin{algorithmic}[1] \For {time $\tau \in [T]$} \State For every block $B$ evict the set $\{p\in B \mid x_p^\tau > 0\}$ with probability $\min(1, \gamma \cdot \phi_B^\tau)$. \label{line:r-round-step} \State Fetch $p_\tau$ if it is missing from the cache. \While{the cache is infeasible} \label{line:evict_loop} \State Let $B$ be an arbitrary block in the cache that has a page $p$ with $x_p^\tau > 0$, and evict the set of pages $\{p\in B \mid x_p^\tau > 0\}$. \label{line:evict_loop2} \EndWhile \EndFor \end{algorithmic} \end{algorithm} Note that we assume that the fractional solution on which \cref{alg:onlineround} executes has the properties given by \cref{lem:structured_solution}. We now prove our main rounding lemma. \begin{lemma} \label{lem:rounding2} For $\gamma = \log (4k^2 \bsize\Delta)$, given a feasible fractional solution $(x,\phi)$ with cost $c(\phi)$, \cref{alg:onlineround} produces a feasible integral cache policy of cost $O(\log (k\Delta)) \cdot c(\phi)$. \end{lemma} To prove \cref{lem:rounding2}, we charge the cost of the algorithm to the fetching cost of the fractional solution. To relate this fractional fetching cost to the fractional eviction cost, we need a claim, which we prove in \cref{sec:extra_proofs}.
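One pass of the rounding-with-alterations scheme above can be sketched in Python as follows. This is our own illustrative simplification, not the paper's implementation: all block costs are taken to be $1$, the cache starts empty, and the helper names are hypothetical.

```python
import random

def round_with_alterations(requests, blocks, x, phi, k, gamma, seed=0):
    """Sketch of randomized rounding with alterations for the eviction
    cost model.  requests: list of pages; blocks: dict name -> set of
    pages; x[(p, t)]: fraction of page p missing at time t;
    phi[(b, t)]: fractional eviction mass of block b at time t.
    Returns (eviction cost, final cache), assuming every c_B = 1."""
    rng = random.Random(seed)
    page_block = {p: b for b, ps in blocks.items() for p in ps}
    cache, cost = set(), 0
    for t, p_req in enumerate(requests):
        # Randomized eviction step: flush the fractionally-missing part of b.
        for b, pages in blocks.items():
            if rng.random() < min(1.0, gamma * phi.get((b, t), 0.0)):
                evict = {p for p in pages if x.get((p, t), 0.0) > 0}
                if evict & cache:
                    cache -= evict
                    cost += 1  # unit block cost c_B = 1 assumed
        if p_req not in cache:
            cache.add(p_req)  # fetches are free in the eviction cost model
        # Alteration loop: evict blocks with fractionally-missing pages
        # until the cache is feasible again.
        while len(cache) > k:
            cand = sorted(p for p in cache if x.get((p, t), 0.0) > 0)
            if not cand:  # cannot happen for a feasible (x, phi); see the proof
                break
            b = page_block[cand[0]]
            cache -= {p for p in blocks[b] if x.get((p, t), 0.0) > 0}
            cost += 1
    return cost, cache
```

With $\phi \equiv 0$ the run is deterministic: only the alteration loop ever evicts, mirroring the cost that \cref{lem:rounding2} has to control.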
\begin{restatable}{claim}{loading} \label{claim:loading} Let $c_{\textsc{Fetch}}(z)$ be the fetching cost of a fractional solution $z$. Then \[c_{\textsc{Fetch}}(z) \leq \bsize \left( c(z) + \sum_{B \in \mathcal{B}} c_B\right).\] \end{restatable} \begin{proof}[Proof of \cref{lem:rounding2}] Let $(x,\phi)$ be a fractional solution given by \cref{lem:structured_solution}, and let $S$ be the set of flushes performed by our algorithm. The algorithm produces a feasible cache policy by construction, as we always fetch $p_t$ and we always run the eviction loop in \cref{line:evict_loop,line:evict_loop2} until the cache is feasible. Note that there is always a block with a page $p$ in cache that has $x_p > 0$ to evict; otherwise, $x$ is integral and is the characteristic vector of the pages the algorithm has in cache, in which case the algorithm's cache is already feasible since the fractional solution is feasible. Furthermore, the expected cost of the evictions due to the randomized rounding step at \cref{line:r-round-step} is at most $\gamma \cdot c(\phi) = O(\log (k\Delta))\cdot c(\phi)$. It remains to show that the total cost due to alterations in the eviction loop in \cref{line:evict_loop,line:evict_loop2} is bounded; we now show that it is at most $c(\phi) + \sum_{B \in \mathcal{B}} c_B$. Let $\Lambda$ be the set of times $\tau$ such that $p_\tau$ is not already fully in the fractional cache. Our algorithm maintains the invariant that if $x_p^t = 0$, then $p$ is also in the cache of the integral solution produced by our algorithm at time $t$. This means that at times $\tau \not\in \Lambda$, neither the fractional solution nor the rounding algorithm incurs a cost increase. Hence we focus on the case where $\tau \in \Lambda$. For every $\tau$, the solution $\phi$ is feasible for the LP \eqref{eq:wolsey_lp} with the function $f_\tau$, so by \cref{lem:gl_rounding} \[\expectation\expectarg{f_\tau(S)} \geq n - k -\frac{1}{4k^2 \bsize\Delta}.\] In particular, this holds for all $\tau \in \Lambda$.
In words, the expected number of pages in cache is bounded by $k + \frac{1}{4k^2 \bsize\Delta}$. Since every eviction due to \cref{line:evict_loop2} costs at most $\cmax$ and evicts at least one page, the expected cost of the alteration while loop at time $\tau$ is bounded by $\cmax / (4k^2 \bsize\Delta) = \cmin / (4k^2 \bsize)$. On the other hand, since $\tau \in \Lambda$, the page $p_\tau$ is not fully in cache, and since by \cref{lem:structured_solution} the fractional solution evicts pages in increments of at least $1/(4k^2)$, it holds that $x_{p_\tau}^\tau \geq 1/(4k^2)$. This means that the \emph{fetching} cost of the fractional solution at time $\tau$ is at least $\cmin / (4k^2)$. Hence the expected cost of the alteration step at time $\tau$ is at most the fractional fetching cost at time $\tau$, divided by $\bsize$. Summing this inequality over time, the total cost paid by the algorithm over all time due to \cref{line:evict_loop2} is at most $c_{\textsc{Fetch}}(\phi) / \bsize$. By \cref{claim:loading}, the fetching cost $c_{\textsc{Fetch}}(\phi) \leq \bsize(c(\phi) + \sum_{B \in \mathcal{B}} c_B)$, and hence the total cost of alterations is at most $c(\phi) + \sum_{B \in \mathcal{B}} c_B$. This completes the proof. \end{proof} \section{Introduction} Caching (also known as paging) has been extensively studied since the early days of online computation and competitive analysis, establishing itself as a cornerstone problem in this field, see e.g.~\cite{ST85,F+,Y91,Y98,GS,BBN07,BBN08,ACN,AAK99,BBK,BFT96,I98,I97,I02}. Recent years have witnessed increased activity on non-standard caching models, e.g., elastic caches~\cite{GuptaK0P19}, caching with time windows~\cite{GKP20}, caching with dynamic weights \cite{EvenMR18}, caching with machine learning predictions~\cite{LykourisV18}, and writeback-aware caching \cite{beckmann2020writeback, bansal2021efficient}.
Many of the recent developments in competitive analysis, e.g., the online primal-dual method, projections, and mirror descent~\cite{BNsurvey, BCN14, BCLLM18}, are all rooted in online paging. We study here {\em block-aware caching}, a non-standard caching model studied recently, as well as its generalizations. In the (classic) weighted paging problem there is a universe of $n$ pages, a cache that can hold up to $k$ pages, and each page is associated with a weight (fetch cost). At each time step a page is requested; if the requested page is already in cache then no cost is incurred, otherwise the page must be fetched into the cache, incurring a cost equal to its weight. The goal is to minimize the total cost incurred. This problem is well studied and understood, and we briefly mention the main results known for it. Sleator and Tarjan \cite{ST85}, in their seminal paper on competitive analysis, showed that any deterministic algorithm is at least $k$-competitive, and that LRU (Least Recently Used) is precisely $k$-competitive for unweighted paging (i.e., all weights are equal). The $k$-competitive bound was later generalized to weighted paging as well \cite{CL91,Y94}. When randomization is allowed, Fiat et al.~\cite{F+} gave the elegant Randomized Marking algorithm for unweighted paging, which is $\Theta(\log k)$-competitive against an oblivious adversary. For weighted paging, Bansal et al.~\cite{BBN07} gave an $O(\log k)$-competitive randomized algorithm using the online primal-dual framework~\cite{BNsurvey, AAABN03}. It uses a two-step approach. First, a deterministic competitive algorithm is designed for a fractional version of the problem. Then, a randomized online algorithm is obtained by {\em rounding} the deterministic fractional solution online.
\paragraph{Block-aware caching.} Real storage systems operate by constructing a hierarchy of memory levels, starting from a very fast and small memory (e.g., an SRAM cache) to a very large and slow memory (e.g., flash or disk). The data items in each level are typically organized in {\em blocks}, and fetching (or evicting) data items from the same block incurs the same cost as fetching (or evicting) just a single item from the block. Using fetching costs models scenarios in which data is read-only; using eviction costs models scenarios in which data must be written to slow memory upon eviction, and the writing cost dominates the reading cost (see e.g. \cite{BGHM20,bansal2021efficient}). Thus, a natural question is how one can optimize cache performance by taking advantage of granularity changes across different storage hierarchy levels. This question was recently raised by Beckmann et al. \cite{beckmann2021brief}, who defined the {\em block-aware caching problem}, generalizing the classic paging problem, as follows. Given a cache of size $k$, and a sequence of requests from $n$ pages that are partitioned into given blocks of size $\bsize\leq k$, minimize the total cost of fetching (or evicting) from the cache so as to serve the requests\footnote{We note that Beckmann et al. \cite{beckmann2021brief} considered block-aware caching only in the fetching cost model.}. Block-aware caching also arises in web and cloud settings, where data items can be aggregated into chunks (i.e., blocks) of data, such that accessing a whole chunk incurs the same cost as accessing just a single item. Consider a distributed cluster of servers, where a common cache of data items is maintained. One such example is the ZFS distributed file system that aggregates different devices into a single storage pool acting as an arbitrary data store. When accessing a server for a specific data item, the main cost paid (e.g., latency) is for accessing the server.
The notion of a block of data in this setting corresponds to the largest chunk of data items that can be fetched from (or evicted to) a server, while ensuring that the cost of this operation is dominated by the cost of accessing the server. Web caching is another example of block-aware caching. Consider a content delivery network (CDN) that maintains a cache of data items and suppose the CDN connects to a website so as to access a data item (see, e.g., \cite{HasanGDS14,SongBLL20}). Typically, TCP/IP provides a time window for connecting to the website and accessing the data item. Hence, it might be beneficial to fetch (or evict) many data items that belong to the website, and not just the particular data item that is currently accessed. Thus, the notion of a block of data in this setting corresponds to the maximum number of such data items that can be sent without increasing the travel time. In the generalized caching problem \cite{BBN08,ACER19}, pages are associated with both a size and a cost. At any point of time, the sum of the sizes of the pages in the cache cannot exceed the cache size. In the offline setting, generalized caching is known to be NP-hard, and in the online setting the known competitive factors for generalized caching \cite{BBN08,ACER19} match those of weighted caching. It is not hard to see that block-aware caching captures generalized caching as a special case. Replace a page $p$ of size $s$ by a block $B$ of size $s$ containing page $p$ partitioned into unit-size ``slices''. The cost of accessing each slice is equal to the cost of $p$. Now, a request to page $p$ is replaced by many requests to the slices in $B$. Thus, an optimal solution to the generated block-aware caching instance has to fetch the full block $B$ into the cache.
Clearly, for a given request sequence, optimal eviction and fetching costs of serving the requests can differ by at most an additive constant that only depends on the initial contents of the cache. However, this is not the case for block-aware caching, as optimal eviction and fetching costs can differ significantly, separating the two cost models. (We provide an example in Section \ref{prelim}.) As discussed above, the two cost models are practically motivated for block-aware caching, and we thus study both of them in this paper. We note that in the eviction cost model we are able to circumvent known lower bounds \cite{beckmann2021brief} that hold in the fetching cost model. \subsection{Results and Techniques} Observe that if an algorithm is $r$-competitive for classical paging, then it is at most $\bsize \cdot r$-competitive for block-caching in both fetching/eviction cost models; the reason is simply that \opt can be simulated by a classical paging algorithm that performs any single batched fetch/eviction in at most $\bsize$ rounds. With this in mind, our goal in this work is to beat this trivial linear dependence on $\bsize$. Indeed, for the eviction cost model, we give the first set of algorithms avoiding a trivial multiplicative $\beta$ overhead over their classical paging counterparts. We also give $(h,k)$-bicriteria\footnote{In $(h,k)$ paging, an online algorithm with cache size $k$ competes against an offline cache of size $h$, where $k>h$.} algorithms for both fetching and eviction cost models, which we in turn use to adapt the lower bound of \cite{beckmann2021brief} for the fetching cost model to randomized algorithms. \paragraph{Eviction cost.} We start in \cref{sec:eviction} with our main contributions: competitive algorithms for the eviction cost model. We show the following theorem. \begin{theorem} For the block-aware caching problem with eviction cost, there exist: \begin{itemize} \item a $k$-competitive deterministic online algorithm. 
\item an $O(\log^2 k)$-competitive randomized (integral) online algorithm. \item an $O(\log k)$-approximate randomized offline algorithm. \end{itemize} \end{theorem} In fact, we study a more general version than the one introduced in \cite{beckmann2021brief} in which every block $B$ may have a separate cost $c_B$. For this more general weighted setting we get competitive ratios of $k$, $O(\log k \log (k \Delta))$ and $O(\log (k\Delta))$ for the deterministic online, randomized online, and randomized offline settings respectively (where $\Delta$ is the aspect ratio, i.e., the maximum cost ratio between any two blocks). A first main technical ingredient is a linear programming relaxation for block-caching. It is tempting to use a formulation with a variable $x_p^t$ for each page $p$ and time $t$ indicating whether $p$ is present in cache in step $t$. However, in this case the eviction cost becomes a complicated non-linear function of the $x_p^t$. Instead, we define a variable $\phi_B^t$ for each block $B$ and time step $t$ indicating whether we evict $B$ at $t$. This is reminiscent of the linear program for classical paging of \cite{BBN07} in which every variable represents whether a page is present in cache between two subsequent requests to that page, except that it may now be necessary to evict pages at any point between subsequent requests. A na\"{\i}ve linear programming formulation has an integrality gap of $\bsize$ (see \cref{sec:basic_LP_formulation}): this is unsurprising since the na\"{\i}ve LP exhibits this gap even for the special case of generalized paging. To get around this, we express feasibility as the constraint that a particular sequence of monotone, submodular functions is maximized. We then make use of good (albeit exponential-size) LP relaxations for these submodular set function constraints, which were discovered by Wolsey \cite{Wolsey1982}.
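As a sanity check on this approach, the snippet below verifies by brute force that a coverage-style function truncated at a fixed level — the shape used for our feasibility functions, with the truncation level playing the role of $n-k$ — is monotone and submodular. The function and helper names here are our own illustration, not notation from the paper.

```python
from itertools import chain, combinations

def truncated_coverage(sets, cap):
    """f(S) = min(|union of the chosen sets|, cap): a monotone submodular
    function of the same truncated shape as the feasibility functions
    discussed above."""
    def f(S):
        covered = set().union(*(sets[i] for i in S)) if S else set()
        return min(len(covered), cap)
    return f

def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def is_monotone_submodular(f, ground):
    """Brute-force check of monotonicity and diminishing returns."""
    for A in map(set, powerset(ground)):
        for B in map(set, powerset(ground)):
            if A <= B:
                if f(A) > f(B):
                    return False  # monotonicity fails
                for v in ground - B:
                    if f(A | {v}) - f(A) < f(B | {v}) - f(B):
                        return False  # diminishing returns fails
    return True
```

Truncating a monotone submodular function at a constant preserves both properties, which is what the check confirms on a small instance.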
Our formulation may be viewed as a generalization of the strengthened LP relaxation due to \cite{BBN08} for generalized caching, which used the so-called knapsack cover (KC) inequalities. We first use the relaxation to give a $k$-competitive deterministic online algorithm in \cref{sec:deterministic}. Next, we develop an $O(\log k)$-competitive fractional algorithm in \cref{sec:fractional}, followed by an $O(\log (k\Delta))$-competitive online randomized rounding procedure in \cref{sec:rounding}. Combined, these results imply our $O(\log k \log (k\Delta))$-competitive randomized online algorithm. It is natural to try to adapt the continuous online primal-dual framework of \cite{BBN07,BBN08}; however, owing to the increased complexity of our LP, there are several technical roadblocks. For one, our formulation now has primal variables corresponding to the eviction of every block at every point in time, and a na\"{\i}ve adaptation of the continuous dynamics of \cite{BBN08} incurs loss that depends on the length of the request sequence. Nevertheless, we show how to carefully set the rate of increase of primal variables (with respect to the dual rate of increase) to construct a feasible solution with our claimed guarantee. For convenience, we do not present the fractional algorithm as online in the strict sense as we allow it to change decisions made in the past. However, it has the crucial property that it only increases LP variables, which suffices for the online rounding procedure. For the rounding step, we forgo maintaining an explicit distribution over cache states as in previous work \cite{BBN07,BBN08,ACER19}, since it is unclear how to control the cost of the rebalancing stage when updating the distribution in each time step. Instead, we use the method of random rounding with alterations in a similar spirit to \cite{bansal2021efficient}.
A key difference in our work is the added difficulty of working with the submodular cover formulation: this introduces additional challenges to the analysis of the rounding (as well as to the maintenance of the fractional solution). We make use of recent work on online submodular cover \cite{gupta2020online}; interestingly, we are able to charge our alteration cost to the fractional \emph{fetching} cost, even though this may be a factor $\bsize$ larger than the eviction cost. \paragraph{Fetching cost.} We turn in \cref{sec:loading} to the fetching cost model, where we show strong lower bounds, implying that the integrality gap of $\Omega(\bsize)$ of the natural LP formulation cannot be circumvented. We prove the following theorem for $(h,k)$ block-aware caching\footnote{Beckmann et al. \cite{beckmann2021brief} showed several deterministic lower bounds on the competitive factor achievable for $(h,k)$ block-aware caching, when $k\geq h+\bsize-1$.}. \begin{theorem} When $k=O(h)$, no randomized online algorithm has competitive ratio better than $\Omega(\bsize + \log k)$ for block-aware caching with fetching costs. \end{theorem} Our main idea here is an online deterministic rounding procedure for fractional algorithms that incurs constant blowup in both cache usage and cost. This implies an online derandomization procedure for any randomized algorithm, which in turn strengthens the lower bounds for deterministic algorithms of \cite{beckmann2021brief} to apply to randomized algorithms as well. Our lower bound implies that beating the trivial linear dependence on $\bsize$ is \emph{not} possible for the fetching cost model. Our deterministic rounding procedure immediately implies improved bounds for the offline $(h,k)$ block-aware caching problem. In particular, when $k=2h$, we match the performance of classical caching algorithms up to constant factors.
\section{Model and Preliminaries} \label{prelim} \subsection{Problem Definition} In the block-aware caching problem, there is a cache of size $k$ and $n$ pages which are partitioned into blocks. Let $\mathcal{B}$ be the partition of the pages into blocks. Each block contains at most $\bsize$ pages, for some $\bsize\in[k]$. For a block $B \in \mathcal{B}$, we denote by $c_B>0$ its cost. At each time-step $t$, a page $p_t$ is requested. To serve the request, the page $p_t$ must be fetched into the cache if it is missing from the cache. The goal is to obtain a feasible cache policy while minimizing the total cost. We consider two different cost functions. \paragraph{Eviction cost model.} In this model, fetching into the cache is free, while evictions have a cost that can be aggregated: Evicting any nonempty subset of a block $B$ in a single time-step has a cost of $c_B$. The goal is to minimize the total eviction cost. \paragraph{Fetching cost model.} In this model, evicting pages from the cache is free, while fetching pages has a cost that can be aggregated: Fetching any nonempty subset of a block $B$ in a single time-step has a cost of $c_B$. The goal is to minimize the total fetching cost. Unlike classic paging and its variants, the fetching cost and eviction cost models are not equivalent in block-aware caching. We show that the optimal fetching and eviction costs for the same request sequence may be off by a factor of $\bsize$ (in either direction!), and this bound is tight. \begin{restatable}{claim}{betaoff} \label{claim:beta_off} There exist instances of block-aware caching for which the optimal fetching cost is a factor $\bsize$ larger than the optimal eviction cost, and there exist instances where the optimal eviction cost is a factor $\bsize$ larger than the optimal fetching cost. \end{restatable} See \cref{sec:extra_proofs} for the proof.
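The aggregation rule behind both cost models is easy to state in code. The following sketch (our own helper, with per-step move lists as an assumed representation) computes both objectives for a given cache schedule: within one time-step, moving any nonempty subset of a block $B$ is charged $c_B$ exactly once.

```python
def schedule_costs(moves, cost):
    """moves: one entry per time-step, each a pair (evicted, fetched) of
    lists of (block, page) moves; cost: dict block -> c_B.
    Returns (total eviction cost, total fetching cost) under the two
    models defined above."""
    evict_total = fetch_total = 0
    for evicted, fetched in moves:
        # a block is charged once per step, no matter how many of its
        # pages move in that step
        evict_total += sum(cost[b] for b in {b for b, _ in evicted})
        fetch_total += sum(cost[b] for b in {b for b, _ in fetched})
    return evict_total, fetch_total
```

For example, evicting two pages of the same block in one step is charged once, while evicting them in two different steps would be charged twice.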
For a page $p$, define $B(p)$ to be the block containing $p$, and let $r(p,t)$ be the time of the last request to $p$ up until (and including) time $t$; if there is no such request, then $r(p,t):=-\infty$. Define the aspect ratio $\Delta := \cmax/\cmin$ where $\cmax := \max_{B \in \mathcal{B}} c_B$ and $\cmin := \min_{B \in \mathcal{B}} c_B$. For convenience we will use the notation $[\ell] = \{1, \ldots, \ell\}$ and $[\ell]_0 = \{0, 1, \ldots, \ell\}$. Our work relies on the theory of submodular functions, which we introduce now for completeness. \paragraph{Submodularity.} We consider set functions of the form $f: 2^{\unvrs} \rightarrow \mathbb{R}^+$, where $\unvrs$ is a set. For $A \subseteq B \subseteq \unvrs$, let $f(A \mid B) := f(A \cup B) - f(B)$. For convenience, if $A=\{v\}$ is a singleton we also write $f(v\mid B):=f(\{v\}\mid B)$. We call $f$ \textit{submodular} if for all $v\in\unvrs$, $A\subseteq B\subseteq\unvrs$ we have $f(v\mid A)\ge f(v\mid B)$. A simple result is that if a set function $f$ is submodular, then $f(\,\cdot\mid B)$ is also submodular for any $B \subseteq \unvrs$. If for all $A \subseteq B \subseteq \unvrs$ we have that $f(A) \leq f(B)$, then we say that $f$ is \textit{monotone}. \paragraph{Submodular Cover.} Wolsey \cite{Wolsey1982} introduced the following problem known as \emph{submodular cover}. Given a monotone, submodular function $f$ over ground set $\mathcal{N}$, and cost function $c: \mathcal{N} \rightarrow \mathbb{R}^+$ on the ground set, output a minimum cost subset $S \subseteq \mathcal{N}$ such that $f(S) \geq f(\mathcal{N})$.
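For intuition, Wolsey's classical greedy algorithm for submodular cover repeatedly picks the element with the best marginal-gain-to-cost ratio; for integer-valued monotone submodular $f$ it is roughly $H(\max_v f(\{v\}))$-approximate. A minimal Python sketch with our own naming:

```python
def greedy_submodular_cover(ground, f, cost):
    """Greedy for submodular cover: while f(S) < f(ground), add the
    element maximizing marginal gain per unit cost.  f maps a set to a
    number; cost maps each ground element to a positive cost."""
    S = set()
    target = f(ground)
    while f(S) < target:
        gains = {v: f(S | {v}) - f(S) for v in ground - S}
        best = max((v for v, g in gains.items() if g > 0),
                   key=lambda v: gains[v] / cost[v], default=None)
        if best is None:
            break  # target unreachable; cannot happen for monotone f
        S.add(best)
    return S
```

On a small coverage instance the greedy choice of the largest set first, then the set covering the remaining elements, already reaches the target value.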
Wolsey gave the following LP relaxation: \begin{align} \label{eq:wolsey_lp} \begin{array}{|rl|} \hline & \\ \min & \displaystyle \sum_{v \in \mathcal{N}} c(v) \cdot x_v \\ \text{subject to} & \\ & \\ \forall S\subseteq \mathcal{N}: & \displaystyle \sum_{v \not \in S} f(v \mid S) \cdot x_v \geq f(\mathcal{N}) - f(S) \\ \forall v \in \mathcal{N}: & x_v \geq 0 \\ & \\ \hline \end{array} \end{align} The constraints of this LP may be viewed as knapsack cover inequalities for a linearized version of the function $f$. Wolsey proved: \begin{claim}[Proposition 2 of \cite{Wolsey1982}] \label{lem:wolsey_integerpts} A set $S$ has $f(S) = f(\mathcal{N})$ if and only if $\chi_S$, the characteristic vector of $S$, is a feasible integer solution to \eqref{eq:wolsey_lp}. \end{claim} Furthermore, Wolsey showed that this LP has an integrality gap of $\log(\max_{v\in \mathcal{N}} f(v))+1$ when $f$ is integer valued. \section{Fetching Cost} \label{sec:loading} We present our $\Omega(\beta)$ lower bound against randomized algorithms for online block-aware caching with fetching costs. We first present a bicriteria rounding algorithm for the na\"{\i}ve LP of \cref{sec:basic_LP_formulation}. We then argue that this procedure can be used to derandomize any randomized algorithm for block-aware caching with fetching costs. Together with the lower bound against deterministic algorithms given by \cite{beckmann2021brief}, this implies a lower bound against randomized algorithms. \subsection{Bicriteria Online Rounding Algorithm} \label{sec:hk_rounding} Consider the following deterministic online rounding scheme. For every page $p$, evict $p$ from the cache at time $t$ if $x_p^t > \nicefrac{1}{2}$. If a page $p_t$ is not in cache upon request at time $t$, then at time $t$ fetch all pages from $B(p_t)$ such that $x_p^t \leq 1/2$. 
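The scheme is simple enough to state directly in code. The sketch below is our own illustration with hypothetical helper names; pages absent from $x$ are treated as fully missing, and the feasibility of $x$ guarantees that the requested page has $x_{p_t}^t = 0$ and is therefore fetched.

```python
def bicriteria_round(requests, blocks, x, T):
    """Deterministic online rounding described above: at each time t,
    drop any cached page p with x_p^t > 1/2; on a miss, fetch every page
    of the requested page's block whose x-value is at most 1/2.
    Returns the sequence of integral cache contents."""
    page_block = {p: b for b, ps in blocks.items() for p in ps}
    cache, history = set(), []
    for t in range(T):
        p_req = requests[t]
        cache = {p for p in cache if x.get((p, t), 1.0) <= 0.5}  # evictions
        if p_req not in cache:
            b = page_block[p_req]
            cache |= {p for p in blocks[b] if x.get((p, t), 1.0) <= 0.5}
        history.append(frozenset(cache))
    return history
```

Note that a page with $x_p^t > \nicefrac{1}{2}$ is never brought along with its block, which is exactly what bounds the integral cache usage by twice the fractional usage.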
\begin{theorem} \label{thm:hk_rounding} Given a feasible fractional solution $x$ to the block-aware caching problem, the procedure above produces an integral solution that uses at most $2k$ cache space at any point in time, and whose fetching cost is at most twice the fetching cost of $x$. \end{theorem} \begin{proof} The procedure produces a feasible solution by construction, since $x_{p_t}^t = 0 \leq \nicefrac{1}{2}$. It also violates the cache size constraint by at most a factor of $2$, since no page is present in the integral cache unless $x_p^t \leq \nicefrac{1}{2}$, meaning the fractional cache usage is at least half the integral cache usage. Finally, to justify that the integral solution has cost at most twice the fractional cost, charge the cost of integrally loading $B(p_t)$ to the fractional decrease of $x_{p_t}$ since the last time $t'$ at which $B(p_t)$ was loaded. Since $x_{p_t}^{t'} > \nicefrac{1}{2}$ (otherwise we would have loaded $p_t$ earlier) and $x_{p_t}^t = 0$, the fractional cost incurred since time $t'$ was at least $\nicefrac{1}{2} \cdot c_{B(p_t)}$. \end{proof} \begin{corollary} When $k=2h$, there is a 2-competitive offline algorithm for block-aware caching with fetching cost. \end{corollary} We mention briefly that a similar rounding procedure produces a cache policy that is $2$-competitive with the eviction cost of the fractional solution, and also uses at most a factor $2$ more space. If $p_t$ is not in cache at time $t$, fetch it. On the other hand, if any page $p$ in cache at time $t$ has $x_p^t > \nicefrac{1}{2}$, evict all of $B(p)$.
\begin{theorem}[Theorem 4.1 of \cite{beckmann2021brief}] \label{thm:charlie_lb} The competitive ratio of any deterministic online policy for block-aware caching with fetching costs is at least \[\frac{k+(\bsize-1)(h-1)}{k-h+1}\] for $h \leq k-\bsize + 1$. \end{theorem} We now show how to use the online deterministic rounding procedure of \cref{sec:hk_rounding} to derandomize any online algorithm for block-aware caching with fetching costs. This proves the main claim of this section: \begin{theorem} The competitive ratio of any randomized policy for block-aware caching with fetching costs is at least \[\frac{2k+(\bsize-1)(h-1)}{4k-2h+2}\] for $h \leq k-\bsize + 1$. \end{theorem} \begin{proof} Suppose there is a randomized online algorithm $\mathcal{R}$ for $(h,k)$-block-aware caching with fetching costs with expected cost $c_\mathcal{R}$. Then we can convert this randomized cache policy online to a fractional solution $x$. To do so, set $x^t_p$ to be the expected value of the indicator of whether page $p$ is missing from the cache at time $t$. Note that these expectations can be computed using only the sequence of requests up to and including time $t$. This solution $x$ is feasible for the simple fetching cost LP \eqref{eq:naive_lp}, and furthermore has LP cost $c_\mathcal{R}$. Applying \cref{thm:hk_rounding} to the fractional solution $x$ produces an integral cache policy of cost at most $2 \cdot c_\mathcal{R}$ using space at most $2k$. The claim follows by using the lower bound on the cost of any such policy given by \cref{thm:charlie_lb}, and solving for $c_{\mathcal{R}}$. \end{proof} Combining this with the well-known $\Omega(\log k)$ lower bound for randomized algorithms for classical paging, we obtain the following consequence. \begin{corollary} When $k= O(h)$, no randomized algorithm has competitive ratio better than $\Omega(\bsize + \log k)$. \end{corollary}
https://arxiv.org/abs/cs/0702113
Fast Computation of Small Cuts via Cycle Space Sampling
We describe a new sampling-based method to determine cuts in an undirected graph. For a graph (V, E), its cycle space is the family of all subsets of E that have even degree at each vertex. We prove that with high probability, sampling the cycle space identifies the cuts of a graph. This leads to simple new linear-time sequential algorithms for finding all cut edges and cut pairs (a set of 2 edges that form a cut) of a graph. In the model of distributed computing in a graph G=(V, E) with O(log V)-bit messages, our approach yields faster algorithms for several problems. The diameter of G is denoted by Diam, and the maximum degree by Delta. We obtain simple O(Diam)-time distributed algorithms to find all cut edges, 2-edge-connected components, and cut pairs, matching or improving upon previous time bounds. Under natural conditions these new algorithms are universally optimal --- i.e. an Omega(Diam)-time lower bound holds on every graph. We obtain an O(Diam+Delta/log V)-time distributed algorithm for finding cut vertices; this is faster than the best previous algorithm when Delta, Diam = O(sqrt(V)). A simple extension of our work yields the first distributed algorithm with sub-linear time for 3-edge-connected components. The basic distributed algorithms are Monte Carlo, but they can be made Las Vegas without increasing the asymptotic complexity. In the model of parallel computing on the EREW PRAM, our approach yields a simple algorithm with optimal time complexity O(log V) for finding cut pairs and 3-edge-connected components.
\section{Introduction}\label{sec:intro} Let $\ensuremath{G} = (V, E)$ be a connected undirected graph. A set of vertices or edges of $\ensuremath{G}$ is said to be a \emph{cut} if, after deleting it from $\ensuremath{G}$, the remaining graph is disconnected. We use the following terminology: \begin{itemize} \item A \emph{cut vertex} is a vertex $v$ such that $\{v\}$ is a cut. \item A \emph{cut edge} is an edge $e$ such that $\{e\}$ is a cut (i.e., a bridge). \item A \emph{cut pair} is a cut consisting of two edges $e, f$ such that neither $e$ nor $f$ is a cut edge. \end{itemize} For brevity, we call all of these objects \emph{small cuts}. In a network (e.g., for communication or transportation), the small cuts are relevant because they represent the critical points where local failures can cause global disruption. Our primary motivation is to efficiently find all small cuts of an undirected graph. We study this problem in the sequential, distributed, and parallel models of computation. The fundamentally new idea in this paper is to identify cuts by sampling the cycle space. For a graph $(V, E)$ we say that $\phi \subseteq E$ is a \emph{binary circulation} if every vertex has even degree in $(V, \phi)$; the \emph{cycle space} of graph $(V, E)$ is the set of all its binary circulations. For $S \subseteq V$, let $\delta(S)$ denote the edges with exactly one end in $S$. An \emph{induced edge cut} is a set of the form $\delta(S)$ for some $S$; cut edges and cut pairs are induced edge cuts\footnote{Our convention is that $\delta(\varnothing)=\delta(V)=\varnothing$ is an induced edge cut --- so we don't in general assume $\delta(S)$ is a cut.}. The family of all induced edge cuts is called the \emph{cut space} of a graph. The cycle space and cut space are orthogonally complementary vector subspaces of $\ensuremath{\mathbb{Z}}_2^E$ (see \prettyref{sec:rc}), which implies that the intersection of any binary circulation and induced edge cut is of even size. 
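The even-intersection property just stated is easy to confirm by brute force on a small example. The following self-contained Python sketch (graph and helper names are ours) enumerates every edge subset of a five-edge graph, collects the binary circulations, and checks that each one meets each induced edge cut $\delta(S)$ in an even number of edges:

```python
from itertools import chain, combinations

def subsets(xs):
    # all subsets of xs, as tuples
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_circulation(vertices, edge_subset):
    # a binary circulation has even degree at every vertex
    deg = {v: 0 for v in vertices}
    for u, v in edge_subset:
        deg[u] += 1
        deg[v] += 1
    return all(d % 2 == 0 for d in deg.values())

def induced_cut(edges, S):
    # delta(S): edges with exactly one end in S
    return {e for e in edges if (e[0] in S) != (e[1] in S)}

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]  # two triangles sharing edge (2, 0)

circulations = [set(phi) for phi in subsets(E) if is_circulation(V, phi)]
# |E| - |V| + 1 = 2, so there are 2^2 = 4 binary circulations,
# and each meets each induced edge cut evenly:
assert len(circulations) == 4
assert all(len(phi & induced_cut(E, set(S))) % 2 == 0
           for phi in circulations for S in subsets(V))
```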
At a high level, our algorithms depend on a probabilistic converse (\prettyref{prop:nonsep-indep}): if $F\subset E$ is \emph{not} an induced edge cut, the number of edges of $F$ intersecting a uniformly random binary circulation is even with probability exactly 1/2. This specific observation seems new, although it is a simple consequence of standard results on the cut and cycle spaces. Using this observation, we give efficient algorithms to sample a uniformly random binary circulation in the sequential, parallel, and distributed models of computing. {\bf The Distributed Model.} Our approach improves several known time bounds in the \emph{distributed} computing model with \emph{congestion}. The precise model, denoted $\mathcal{CONGEST}$~\cite[\S 2.3]{p2000}, works as follows. The computation takes place in the graph $G = (V, E)$ where each vertex is a computer and each edge is a bidirectional communication link; i.e., we study the problem of having a network compute the small cuts of its own topology. There is no globally shared memory, only local memory at each vertex. Initially only local topology is known: each vertex knows its ID value, which is unique, and its neighbours' IDs. Time elapses in discrete \emph{rounds}. In each round, every vertex performs local computations and may send one message to each of its neighbours, to be received at the start of the next round. The \emph{time complexity} of a distributed algorithm is the number of rounds that elapse, and the \emph{message complexity} is the total number of messages that are sent. In the $\mathcal{CONGEST}$ model, every message must be at most $O(\log V)$ bits long. The model does not bound the memory capacity or computational power of the vertices, although our algorithms use time and space polynomial in $|V|$ at each vertex. Let $\ensuremath{\mathcal{D}}$ denote the diameter of $(V, E)$, i.e. $\ensuremath{\mathcal{D}} := \max_{u, v \in V}\mathrm{dist}_G(u, v)$. 
The message size bound, in addition to making the algorithms more practical, affects what is possible in the model, as the following example from \cite{L+06} shows. On the one hand, if messages are allowed to be arbitrarily long, any graph property whatsoever can be trivially computed in $\ensuremath{\mathcal{D}}$ time\footnote{In $\ensuremath{\mathcal{D}}$ rounds each vertex broadcasts its local topology to all other vertices, then each vertex deduces the global topology and solves the problem with a local computation.}. On the other hand, Lotker et al.\ gave a family of graphs with $\ensuremath{\mathcal{D}}=3$, such that in $\mathcal{CONGEST}$ on this family, an $\Omega(\sqrt[4]{|V|}/\sqrt{\log |V|})$-time lower bound holds to find the minimum spanning tree (MST). A distributed time complexity faster than $\Theta(V)$ on some graphs is called \emph{sub-linear}. Determining whether a task in this model can be accomplished in sub-linear time, or better yet $O(\ensuremath{\mathcal{D}})$ time, is a fundamental problem. E.g.\ one breakthrough was a sub-linear MST algorithm \cite{sublinear-mst} which was later improved \cite{kp-mst} to time complexity $O(\ensuremath{\mathcal{D}}+\sqrt{|V|}\log^* |V|)$ --- here $\log^* x$ is the number of times $\log$ must be iteratively applied to $x$ before obtaining a number less than 1. Our breakthroughs in this regard are $O(\ensuremath{\mathcal{D}})$ time algorithms for cut pairs, cut edges, and 2-edge-connected components, and a sub-linear algorithm for 3-edge-connected components. \subsection{Existing Results} Our results apply to three common models of computation: sequential, distributed, and parallel. Abusing notation for readability, we sometimes abbreviate $|V|$ to $V$ and $|E|$ to $E$. 
{\bf Sequential.} In the usual sequential (RAM) model of computing, Tarjan was the first to obtain linear-time ($O(V+E)$-time) algorithms to find all cut vertices \cite{tarjandfs}, cut edges \cite{tarjandfs}, and cut vertex-pairs (cuts $C \subseteq V$ with $|C|=2$) \cite{tri-tarjan}. These algorithms are based on depth-first search (DFS). Galil \& Italiano \cite{galil-ital} gave the first linear-time algorithm to compute all cut pairs, by reducing to the cut vertex-pair problem. {\bf Distributed.} Here we only mention results valid in $\mathcal{CONGEST}$, ignoring results with $\Omega(V)$ message size such as that of \cite{chang}. {\bf Cut Edges/Vertices.} Two early distributed algorithms for cut edges and vertices, in \cite{ahujazhu} and \cite{hohberg}, use DFS. The smallest time complexity of any known distributed DFS algorithm is $\Theta(V)$; as such, the algorithms of Ahuja \& Zhu and Hohberg have $\Omega(V)$ time complexity. Huang \cite{huang} gave a non-DFS-based algorithm with $\Theta(V)$ time complexity. The first sub-linear distributed algorithms for any type of small cuts appear in \cite{Thur97}; using an MST subroutine, Thurimella obtained time complexity $O(\ensuremath{\mathcal{D}}+\sqrt{V}\log^* V)$ for both cut edges and cut vertices. {\bf Cut Pairs.} For cut pairs, \cite{JM96} gave a distributed algorithm with worst-case time and message complexity $\Theta(V^3)$, and \cite{2006tsin} obtained a DFS-based algorithm with improved time complexity $O(\ensuremath{\mathcal{D}}^2+V)$. {\bf Distributed Optimality.} Distributed $\Theta(V)$-time algorithms for cut edges are optimal (up to a constant factor) on some graphs: e.g.\ it is straightforward to see that, even when it is guaranteed that $G$ is either a $|V|$-cycle or a $|V|$-path, not all edges can determine whether they are cut edges in fewer than $|V|/2-2$ rounds. One term for this property is \emph{existentially optimal}, due to \cite{sublinear-mst}. 
However, as Thurimella's algorithm \cite{Thur97} showed, there are some graphs on which $\Theta(V)$ time is not asymptotically optimal. The stronger term \emph{universally optimal}~\cite{sublinear-mst} applies to algorithms which, on \emph{every} graph, have running time within a constant factor of the minimum possible. {\bf Parallel.} In the PRAM model, an optimal $O(\log V)$-time and $O(V+E)$-work Las Vegas algorithm for cut edges and vertices was obtained in \cite{tarjan-parallel} (provided that, for spanning forests, the recent work of \cite{HZ01} is used). For cut pairs, it may be possible to combine a 3-vertex-connectivity algorithm of \cite{FRT93} with the reduction of \cite{galil-ital} (and spanning forest routines from \cite{HZ01}) to yield a time- and work-optimal EREW algorithm. This is mentioned as a ``future application'' by Halperin \& Zwick. However, this approach appears not to have been fully analyzed and is fairly complicated. \subsection{Our Contributions}\label{sec:contributions} Since our algorithms are randomized, we differentiate between two types of algorithms: \emph{Monte Carlo} ones have deterministically bounded running time but may be incorrect with probability at most $1/V$, and \emph{Las Vegas} ones are always correct and have bounded \emph{expected} running time\footnote{More generally, our algorithms can obtain error probability $\leq 1/V^c$ for any constant $c$ without changing the asymptotic complexity.}. (Note that a Las Vegas algorithm can always be converted to Monte Carlo, so Las Vegas is generally better). {\bf Sequential.} The random circulation approach yields \emph{new linear-time algorithms to compute all cut edges and cut pairs} of the Las Vegas type. As far as we are aware, our linear-time cut pair algorithm is the first one that does not rely on either DFS (e.g., see references in \cite{2005tsin}) or open ear decomposition (e.g., see references in \cite{FRT93}). 
{\bf Distributed.} We remark that all existing distributed algorithms mentioned for finding small cuts are deterministic. The random circulation approach yields \emph{faster distributed algorithms for small cuts} of the Las Vegas type. For cut edges and pairs, we obtain $O(\ensuremath{\mathcal{D}})$-time algorithms. Compared to the previous best time of $O(\ensuremath{\mathcal{D}}+\sqrt{V}\log^* V)$ for cut edges, we remove the dependence on $|V|$. Compared to the previous best time of $O(\ensuremath{\mathcal{D}}^2+V)$ for cut pairs, we obtain a quadratic speedup on every graph. For cut vertices, we obtain an $O(\ensuremath{\mathcal{D}}+\Delta/\log V)$-time algorithm where $\Delta$ is the maximum degree. Compared to the previous best time of $O(\ensuremath{\mathcal{D}}+\sqrt{V}\log^* V)$ for cut vertices, this is faster on graphs with $\Delta, \ensuremath{\mathcal{D}} = O(\sqrt{V})$. We also obtain the first sub-linear distributed algorithm for 3-edge-connected components, using a connected components subroutine of \cite{Thur97}. In Table \ref{tab1} we depict our main results and earlier work, showing both time and message complexity. {\bf Universal Optimality.} If we assume distributed algorithms must act globally in a natural sense --- either by initiating at a single vertex, or by reporting termination --- then an $\Omega(\ensuremath{\mathcal{D}})$-time lower bound holds for the problems of finding cut edges or cut pairs, on any graph. Hence, under natural conditions, our $O(\ensuremath{\mathcal{D}})$-time algorithms for cut edges and cut pairs are universally optimal. {\bf Parallel.} In the PRAM model, we obtain a Las Vegas algorithm for cut pairs and 3-edge-connected components with time complexity $O(\log V+T(E))$, space complexity $O(E+S(E))$, and work complexity $O(E + W(E))$, where $T(n), S(n), W(n)$ are respectively the time, space, and work complexity to sort $n$ numbers of length $O(\log n)$ bits. 
E.g.\ on the EREW PRAM, we can implement our algorithm in $O(\log V)$ time, $O(E)$ space and $O(E \log E)$ work using a sorting subroutine of \cite{KRS90}, or in $O(\log V)$ time, $O(E^{1+\epsilon})$ space and $O(E \sqrt{\log E})$ work using a subroutine of \cite{HS02}. \begin{table}[ht] \begin{center} \begin{tabular}{cccc} \hline \hline & \phantom{X} Cuts Found \phantom{X} & Time & Messages\\ \hline \cite{ahujazhu} & Vertices \& Edges & $\Oh{V}$ & $\Oh{E}$ \\ \cite{Thur97} & Vertices \& Edges & $\Oh{\ensuremath{\mathcal{D}} + \sqrt{V} \log^* V}$ & $\Oh{E \cdot (\ensuremath{\mathcal{D}} + \sqrt{V} \log^* V)}$ \\ \cite{2006tsin} & Pairs & $\Oh{V+\ensuremath{\mathcal{D}}^2}$ & $\Oh{E+V\cdot\ensuremath{\mathcal{D}}}$ \\ \prettyref{thm:dce}\dag & Edges & $\Oh{\ensuremath{\mathcal{D}}}$ & $\Oh{E}$ \\ \prettyref{thm:dcpair}\dag & Pairs & $\Oh{\ensuremath{\mathcal{D}}}$ & $\Oh{\min\{V^2, E \cdot \ensuremath{\mathcal{D}}\}}$ \\ \prettyref{thm:dcv}\dag & Vertices & $\Oh{\ensuremath{\mathcal{D}} + \Delta / \log V}$ & $\Oh{E (1 +\Delta / \log V )}$ \\ \hline \hline \end{tabular} \caption{Comparison of our three main distributed results (denoted by \dag) to the best previously known algorithms. }\label{tab1} \end{center} \end{table} \subsection{Other Related Work} Randomized algorithms appear in other literature related to the cut and cycle spaces. For example, \cite{BL03} computes the genus of an embedded graph $G$ while ``observing'' part of it. They use random perturbation and balancing steps to compute a \emph{near-circulation} on $G$ and the dual graph of $G$. Their computational model is quite different from the one here, e.g.\ they allow a face to modify the values of all its incident edges in a single time step. A slow bridge-finding algorithm based on random walks is given in \cite{PV06}, which inspires this paper. Random sampling is a fruitful technique to quickly compute so-called \emph{minimum cycle bases} of the cycle space, e.g.~see the survey~\cite{KL+09}. 
\subsection{Organization of the Paper} In \prettyref{sec:rc} we define random circulations and show how to construct them efficiently. In \prettyref{sec:app} we show how random circulations yield algorithms for small cuts and give sequential implementations. In \prettyref{sec:impl} we precisely define the assumptions in our distributed model and give the Monte Carlo algorithms; we introduce a technique called \emph{fundamental cycle-cast} which may be of independent interest. In \prettyref{sec:cc} we discuss 2- and 3-edge-connected components. In \prettyref{sec:lasvegas} we give the Las Vegas analogues of our distributed algorithms. We give $\Omega(\ensuremath{\mathcal{D}})$ distributed time lower bounds under precise assumptions in \prettyref{sec:lowerbounds}. We give the parallel cut pair algorithm in \prettyref{sec:parallel}. \section{Preliminaries on Circulations} \label{sec:rc} The cut space and cycle space over $\ensuremath{\mathbb{Z}}$ in directed graphs have been studied for quite some time \cite{BM76}. For our purposes it is convenient to work modulo 2; then, informally, we can deal with undirected graphs since $+1 \equiv -1 \pmod{2}$. For the sake of completeness, we prove the needed results. See also \cite{Diestel} which proves material equivalent to Propositions \ref{prop:vs}, \ref{prop:orth}, and \ref{prop:coolio}. For notational convenience we identify any subset $S$ of $E$ with its \emph{characteristic vector} $\chi^S \in \ensuremath{\mathbb{Z}}_2^E$ defined by $\chi^S_e = 1$ for $e \in S$ and $\chi^S_e = 0$ for $e \not\in S$. We use $\oplus$ to stand for vector addition modulo 2, so in accordance with our notational convention, for $S, T \subset E$ the expression $S \oplus T$ denotes the symmetric difference of $S$ and $T$. 
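In code, this identification is convenient: fixing an ordering of $E$, an edge subset becomes a bitmask, and vector addition modulo 2 becomes bitwise XOR, i.e.\ symmetric difference. A small Python illustration (helper names are ours):

```python
# Characteristic vectors over Z_2^E as bitmasks: bit i records whether
# edges[i] is in the subset; the sum mod 2 of two vectors is bitwise XOR.
def to_mask(subset, edges):
    return sum(1 << i for i, e in enumerate(edges) if e in subset)

def from_mask(mask, edges):
    return {e for i, e in enumerate(edges) if mask >> i & 1}

edges = [('a', 'b'), ('b', 'c'), ('c', 'a')]
S = {('a', 'b'), ('b', 'c')}
T = {('b', 'c'), ('c', 'a')}
# S (+) T computed in mask form equals the symmetric difference of the sets
S_plus_T = from_mask(to_mask(S, edges) ^ to_mask(T, edges), edges)
```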
As mentioned earlier, $\phi \subseteq E$ is a \emph{binary circulation} if in $(V, \phi)$ every vertex has even degree; the \emph{cycle space} of graph $(V, E)$ is the set of all its binary circulations; $\delta(S)$ denotes the edges of $G$ with exactly one end in $S$; an \emph{induced edge cut} is a set of the form $\delta(S)$ for some $S$; and the family of all induced edge cuts is called the \emph{cut space} of a graph. For $v \in V$ we use $\delta(v)$ as short for $\delta(\{v\})$. \begin{prop}\label{prop:vs} The cut space and cycle space are vector subspaces of $\ensuremath{\mathbb{Z}}_2^E$. \end{prop} \begin{proof} Note that it suffices to show each space contains $\varnothing$ and is closed under $\oplus$. For the cut space, this holds since $\delta(\varnothing)=\varnothing$ and $\delta(S \oplus T) = \delta(S) \oplus \delta(T)$. For the cycle space, clearly $(V, \varnothing)$ has even degree at each vertex; and if $(V, \phi_1)$ and $(V, \phi_2)$ have even degree at each vertex, then the degree of vertex $v$ in $(V, \phi_1 \oplus \phi_2)$ is $\deg_{\phi_1}(v) + \deg_{\phi_2}(v) - 2\deg_{\phi_1 \cap \phi_2}(v) \equiv 0 + 0 - 0 \pmod{2}$, so $\phi_1 \oplus \phi_2$ is a binary circulation. \end{proof} \begin{prop}\label{prop:orth} The cut space and cycle space are orthogonal. \end{prop} \begin{proof} Precisely, we need to show that for any binary circulation $\phi$ and any $S \subset V$, the dot product $\phi \cdot \delta(S) \equiv 0 \pmod{2}$, or equivalently that $|\phi \cap \delta(S)|$ is even. Now $\sum_{s \in S} \deg_\phi(s) = \sum_{s \in S} |\phi \cap \delta(s)|$ and the former quantity is even since $\phi$ is a circulation. The latter sum counts every edge of $\phi \cap \delta(S)$ once, every edge of $\phi$ with both ends in $S$ twice, and every other edge zero times. Since this sum is even, $|\phi \cap \delta(S)|$ is even. \end{proof} In the next proposition, we assume $G$ is connected, and hence has a spanning tree $T$. 
We need to define the \emph{fundamental cuts} and \emph{fundamental cycles} of $T$. For each edge $e$ of $E \backslash E(T)$, we define the \emph{fundamental cycle} $C_e$ to be the unique cycle in $T \cup \{e\}$. Note that cycles are binary circulations. For each edge $e$ of $T$, we define $S_e$ to be one of the two connected components of $T \backslash e$, and define the \emph{fundamental cut of $e$} to be $\delta(S_e)$ (note $\delta(S_e)$ does not depend on which connected component we chose). \begin{prop}\label{prop:coolio} (a) The cut space and cycle space are orthogonal complements. (b) The cycle space has dimension $|E|-|V|+1$ and the cut space has dimension $|V|-1$. (c) For any spanning tree $T$ of $G$, its fundamental cycles form a basis of the cycle space, and its fundamental cuts form a basis of the cut space.\end{prop} \begin{proof} We will show that the $|E|-|V|+1$ fundamental cycles are linearly independent in the cycle space and the $|V|-1$ fundamental cuts are linearly independent in the cut space. Basic linear algebra shows that the sum of the dimensions of two orthogonal subspaces of $\ensuremath{\mathbb{Z}}_2^E$ is at most $|E|$, with equality only if they are orthogonal complements; thus by \prettyref{prop:orth}, \prettyref{prop:coolio}(a) and (b) follow, and so does (c). We use the following claim. \begin{clm}\label{clm:tech} Let $H \subset E$ and consider a family of vectors $\{x^e\}_{e \in H}$ over $\ensuremath{\mathbb{Z}}_2^E$. If $x^e_e = 1$ for all $e \in H$, and $x^e_f = 0$ for all distinct $e, f \in H$, then $\{x^e\}_{e \in H}$ is linearly independent. \end{clm} \begin{proof} Suppose for the sake of contradiction that $\bigoplus_{e \in H} a_e x^e$ is the zero vector, where $a_e \in \{0,1\}$ for each $e$ and not all $a_e$ are zero. Pick $f$ such that $a_f=1$, then $\sum_{e \in H} a_e x^e_f = 1,$ a contradiction. 
\end{proof} Note that $e \in C_e$ but for any other edge $f$ of $E \backslash E(T)$, $f \not\in C_e$, so by \prettyref{clm:tech} with $H=E \backslash E(T)$ and $x^e = C_e$, these vectors are linearly independent. Note that $e \in \delta(S_e)$ but for any other edge $f$ of $T$, $f \not\in \delta(S_e)$, so by \prettyref{clm:tech} with $H=E(T)$ and $x^e = \delta(S_e)$, these vectors are linearly independent. This completes the proof of \prettyref{prop:coolio}. \end{proof} \subsection{Random Circulations}\label{sec:randcirc} Next we show why uniform sampling of the cycle space is useful for identifying cuts. \begin{prop}\label{prop:nonsep-indep} Let $F \subset E$ be a set that is not an induced edge cut. If $\phi$ is a uniformly random binary circulation, then $\Pr[|F \cap \phi| \textrm{ is even}] = 1/2.$ \end{prop} \begin{proof} Since $F$ is not in the cut space, by \prettyref{prop:coolio}(a) it is not orthogonal to the cycle space, i.e.\ there is a binary circulation $\phi_F$ with $|F \cap \phi_F|$ odd. Now we pair up each binary circulation $\psi$ on $G$ with the binary circulation $\psi' := \psi \oplus \phi_F$. This yields a pairing of all binary circulations on $G$ since for all $\psi$, $\psi' \neq \psi$ and $\psi'' = \psi$. Modulo 2, $|F \cap \psi'| \equiv |F \cap \psi| + |F \cap \phi_F| \equiv |F \cap \psi|+1$, so in each pair, exactly one of the two binary circulations has even intersection with $F$. Thus, exactly half of all binary circulations have even intersection with $F$, which proves the result. \end{proof} Next we give a method for constructing binary circulations (it is an undirected version of \cite[Ex.\ 12.1.1]{BM76}). Given a spanning tree $T$ and subset $\psi$ of $E \backslash E(T)$, we say that $\phi$ is a \emph{completion} of $\psi$ if $\phi$ is a binary circulation and $\phi \cap (E \backslash E(T)) = \psi$. \begin{prop} \label{prop:cotree-basis} For any $\psi \subseteq E \backslash E(T)$, $\psi$ has a unique completion $\phi$. 
\end{prop} \begin{proof} First, we give a succinct proof sketch. By \prettyref{prop:coolio}(c) the cycle space can be expressed as $\{\bigoplus_{e \in E \backslash E(T)} a_e C_e \mid a \in \ensuremath{\mathbb{Z}}_2^{E \backslash E(T)}\}$. For which $a$ does this yield a completion of $\psi$? From the observations in the proof of \prettyref{prop:coolio}, for $f \in E \backslash E(T)$, the coordinate of $\bigoplus_{e \in E \backslash E(T)} a_e C_e$ at index $f$ is just $a_f$, hence the unique completion of $\psi$ is the one in which $a$ is the indicator vector of $\psi$, i.e.\ the unique completion is $\phi = \bigoplus_{e \in \psi} C_e$. Explicitly, for $f\in T$, we have $f \in \phi$ iff $f$ appears in an odd number of the fundamental cycles $\{C_e \mid e \in \psi\}$. This completes the proof, but we now give a second, algorithmic proof, which is needed later. For a leaf node $v$ incident to $e \in E(T)$, since the degree of $(V, \phi)$ at $v$ must be even, notice that we must have $e \in \phi$ if $|\psi \cap \delta(v)|$ is odd, and $e \not\in \phi$ if $|\psi \cap \delta(v)|$ is even. Iterating this argument on $T \backslash v$ yields \prettyref{alg:complete}; we will show it constructs the unique completion of $\psi$. 
\begin{algorithm}[ht] \caption{Given $G, T$ and $\psi \subset E \backslash E(T)$, construct binary circulation $\phi$ such that $\phi \backslash E(T) = \psi$.}\label{alg:complete} \begin{algorithmic}[1] \State Initialize $\phi := \psi, S := T$ \Comment{$S$ is the subtree of $T$ where $\phi$ is not yet defined} \State {\bf while} $S$ has any edges, \State \quad Let $v$ be any leaf of $S$ and $e$ be the unique incident edge of $v$ in $S$ \State \quad {\bf if} $|\delta(v) \cap \phi|$ is odd {\bf then} $\phi := \phi \cup \{e\}$ \label{line:conserv} \Comment{Satisfy degree constraint at $v$} \label{line:0flow} \State \quad Delete $v$ from $S$ \State Output $\phi$ \end{algorithmic} \end{algorithm} See Figure~\ref{fig:example0} for an illustration of \prettyref{alg:complete}. Now we prove \prettyref{prop:cotree-basis} using \prettyref{alg:complete}. It is clear that every vertex of $(V, \phi)$ has even degree except possibly the last vertex left in $S$. However, by the handshake lemma, no graph can have exactly one vertex of odd degree, so $\phi$ is indeed a binary circulation. To show uniqueness, suppose for the sake of contradiction that $\psi$ has two distinct completions $\phi, \phi'$. Then $\phi \oplus \phi' \subset E(T)$, and as such the nonempty forest $\phi \oplus \phi'$ has at least one vertex of degree 1. This contradicts the fact that $\phi \oplus \phi'$ is a binary circulation. 
\end{proof} \begin{figure}[t] \begin{center} \leavevmode \begin{pspicture}(0,0)(4, 2.4) \psset{unit=1.2} \pnode(0,0){a} \pnode(2,0){b} \pnode(1,1){c} \pnode(2,1){d} \pnode(3,1){e} \pnode(2,2){f} \psset{linewidth=2pt,linestyle=dashed} \ncline{*-*}{f}{c} \ncline{*-*}{f}{d} \ncline{*-*}{f}{e} \ncline{*-*}{c}{a} \ncline{*-*}{d}{b} \psset{linewidth=1pt} \psset{linestyle=solid} \ncline{*-*}{b}{e} \ncline{*-*}{a}{d} \psset{linestyle=dotted} \ncline{*-*}{c}{d} \ncline{*-*}{a}{b} \end{pspicture} \begin{pspicture}(0,0)(4, 2.4) \psset{unit=1.2} \pnode(0,0){a}\uput[180](0,0){$v$} \pnode(2,0){b} \pnode(1,1){c} \pnode(2,1){d} \pnode(3,1){e} \pnode(2,2){f} \psset{linewidth=2pt,linestyle=dashed} \ncline{*-*}{f}{c} \ncline{*-*}{f}{d} \ncline{*-*}{f}{e} \ncline{*-*}{d}{b} \psset{linewidth=2pt,linestyle=solid} \ncline{*-*}{c}{a}\Bput{$e$} \psset{linewidth=1pt} \ncline{*-*}{a}{d} \ncline{*-*}{b}{e} \psset{linewidth=1pt,linestyle=dotted} \ncline{*-*}{a}{b} \ncline{*-*}{c}{d} \end{pspicture} \begin{pspicture}(0,0)(4, 2.4) \psset{unit=1.2} \pnode(0,0){a} \pnode(2,0){b} \pnode(1,1){c} \pnode(2,1){d} \pnode(3,1){e} \pnode(2,2){f} \psset{linewidth=2pt,linestyle=solid} \ncline{*-*}{f}{c} \ncline{*-*}{f}{e} \ncline{*-*}{d}{b} \ncline{*-*}{c}{a} \psset{linewidth=2pt,linestyle=dotted} \ncline{*-*}{f}{d} \psset{linewidth=1pt,linestyle=solid} \ncline{*-*}{a}{d} \ncline{*-*}{b}{e} \psset{linewidth=1pt,linestyle=dotted} \ncline{*-*}{a}{b} \ncline{*-*}{c}{d} \end{pspicture} \end{center} \caption{Completing a binary circulation. The spanning tree $T$ is given by thick edges. Solid edges are in the circulation, dotted edges will not be in the circulation, and dashed edges are undecided. Left: the initial value of $\phi$ (which equals $\psi$). Middle: we ensure a leaf vertex $v$ has even degree. 
Right: repeating the previous step yields the completed circulation $\phi$.} \label{fig:example0} \end{figure} We now give the method for constructing uniformly random binary circulations, illustrated in \prettyref{alg:rc}: pick a uniformly random subset of $E \backslash E(T)$ and then compute its completion. \begin{algorithm}[ht] \caption{Given $G$ and spanning tree $T$, output a uniformly random binary circulation.}\label{alg:rc} \begin{algorithmic}[1] \State {\bf for} each $e$ in $E \backslash E(T)$, put $e$ in $\psi$ with independent probability 1/2 \label{line:rcast} \State Return the completion of $\psi$, using \prettyref{alg:complete} \end{algorithmic} \end{algorithm} \begin{thm}\label{thm:urand} \prettyref{alg:rc} outputs a uniformly random binary circulation. \end{thm} \begin{proof}By \prettyref{prop:coolio}(b) the cycle space contains exactly $2^{|E|-|V|+1}$ elements. \prettyref{alg:rc} makes one of $2^{|E|-|V|+1}$ choices of $\psi$ each with probability $2^{-|E|+|V|-1}$, and each distinct choice of $\psi$ leads to a distinct binary circulation.\end{proof} To increase the probability of identifying a particular cut beyond 1/2, our algorithms will sample multiple independent random circulations. For this reason it is convenient to introduce notation that incorporates multiple circulations into a single object. Let $\ensuremath{\mathbb{Z}}_2^b$ denote the set of $b$-bit binary strings. For $\phi: E \to \ensuremath{\mathbb{Z}}_2^b$, let $\phi_i(e)$ denote the $i$th bit of $\phi(e)$. \begin{defn} $\phi: E \to \ensuremath{\mathbb{Z}}_2^b$ is a \emph{$b$-bit circulation} if for each $1 \leq i \leq b$, $\{e \mid \phi_i(e)=1\}$ is a binary circulation. \end{defn} Hence, to say that $\phi$ is a uniformly random $b$-bit circulation is the same as saying that $\{\phi_i\}_{i=1}^b$ are mutually independent, uniformly random binary circulations. 
For brevity, we use the phrase \emph{random $b$-bit circulation} to stand for ``uniformly random $b$-bit circulation'' in the rest of the paper. Let {\bf 0} denote the all-zero vector and $\oplus$ denote addition of vectors modulo 2. Using \prettyref{prop:orth} and \prettyref{prop:nonsep-indep} we obtain the following corollary. \begin{cor}\label{cor:nonsep-indep} Let $\phi$ be a random $b$-bit circulation and $F \subseteq E$. Then $$\Pr\left[\bigoplus_{e \in F} \phi(e) = {\bf 0}\right] = \begin{cases}1, &\textrm{ if $F$ is an induced edge cut;} \\ 2^{-b}, &\textrm{ otherwise.}\end{cases}$$ \end{cor} To generate a random $b$-bit circulation, it suffices to modify Algorithms \ref{alg:complete} and \ref{alg:rc} slightly so as to operate independently on each of $b$ positions at once: on \prettyref{line:rcast} of \prettyref{alg:rc} we set $\phi(e)$ to an independent, uniformly random $b$-bit string, and on \prettyref{line:conserv} of \prettyref{alg:complete} we set $\phi(e) := \bigoplus_{f \in \delta(v) \backslash e} \phi(f)$. We denote the resulting algorithm by \textsc{Rand-$b$-Bit-Circ}\ and illustrate it in Figure \ref{fig:example}. Under the standard assumption that the machine word size is $\Theta(\log V)$, the running time of \textsc{Rand-$b$-Bit-Circ}\ in the sequential model of computing is $O(E\lceil \frac{b}{\log V} \rceil)$. 
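A sequential Python sketch of \textsc{Rand-$b$-Bit-Circ}\ (our own illustrative implementation: edges are given as ordered pairs, a $b$-bit string is a machine integer, and an explicit leaf-peeling queue replaces the shrinking subtree $S$ of \prettyref{alg:complete}):

```python
import random

def random_b_bit_circulation(vertices, tree_edges, nontree_edges, b, rng=random):
    """Assign independent random b-bit labels to non-tree edges, then peel
    leaves of the spanning tree, fixing each tree edge's label so that the
    XOR of the labels around every vertex is zero in all b bit positions."""
    phi = {e: rng.getrandbits(b) for e in nontree_edges}
    deg = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, w in tree_edges:
        adj[u].append((u, w))
        adj[w].append((u, w))
        deg[u] += 1
        deg[w] += 1
    acc = {v: 0 for v in vertices}  # XOR of already-labelled edges at v
    for (u, w), val in phi.items():
        acc[u] ^= val
        acc[w] ^= val
    leaves = [v for v in vertices if deg[v] == 1]
    done = set()
    while leaves:
        v = leaves.pop()
        if deg[v] != 1:  # the last remaining vertex needs no edge fixed
            continue
        e = next(x for x in adj[v] if x not in done)
        phi[e] = acc[v]  # the unique label making v even in every bit
        done.add(e)
        u = e[0] if e[1] == v else e[1]
        acc[u] ^= phi[e]
        deg[v] -= 1
        deg[u] -= 1
        if deg[u] == 1:
            leaves.append(u)
    return phi
```

The loop touches each tree edge exactly once, in line with the stated sequential running time; on a tree with no non-tree edges the output is the all-zero labelling, as it must be since the cycle space of a tree is trivial.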
\begin{figure}[t] \begin{center} \leavevmode \begin{pspicture}(0,0)(4, 2.4) \psset{unit=1.2} \pnode(0,0){a} \pnode(2,0){b} \pnode(1,1){c} \pnode(2,1){d} \pnode(3,1){e} \pnode(2,2){f} \psset{linewidth=2pt} \ncline{*-*}{f}{c} \ncline{*-*}{f}{d} \ncline{*-*}{f}{e} \ncline{*-*}{c}{a} \ncline{*-*}{d}{b} \psset{linewidth=1pt} \ncline{*-*}{c}{d}\mput*{010} \ncline{*-*}{a}{d}\mput*{100} \ncline{*-*}{a}{b}\mput*{011} \ncline{*-*}{b}{e}\mput*{111} \end{pspicture} \begin{pspicture}(0,0)(4, 2.4) \psset{unit=1.2} \pnode(0,0){a}\uput[180](0,0){$v$} \pnode(2,0){b} \pnode(1,1){c} \pnode(2,1){d} \pnode(3,1){e} \pnode(2,2){f} \psset{linewidth=2pt} \ncline{*-*}{f}{c} \ncline{*-*}{f}{d} \ncline{*-*}{f}{e} \ncline{*-*}{c}{a}\mput*{111}\Bput{$e$} \ncline{*-*}{d}{b} \psset{linewidth=0.8pt} \ncline{*-*}{c}{d}\mput*{010} \ncline{*-*}{a}{d}\mput*{100} \ncline{*-*}{a}{b}\mput*{011} \ncline{*-*}{b}{e}\mput*{111} \end{pspicture} \begin{pspicture}(0,0)(4, 2.4) \psset{unit=1.2} \pnode(0,0){a} \pnode(2,0){b} \pnode(1,1){c} \pnode(2,1){d} \pnode(3,1){e} \pnode(2,2){f} \psset{linewidth=2pt} \ncline{*-*}{f}{c}\mput*{101} \ncline{*-*}{f}{d}\mput*{010} \ncline{*-*}{f}{e}\mput*{111} \ncline{*-*}{c}{a}\mput*{111} \ncline{*-*}{d}{b}\mput*{100} \psset{linewidth=0.8pt} \ncline{*-*}{c}{d}\mput*{010} \ncline{*-*}{a}{d}\mput*{100} \ncline{*-*}{a}{b}\mput*{011} \ncline{*-*}{b}{e}\mput*{111} \end{pspicture} \end{center} \caption{Constructing a random 3-bit circulation; thick edges are tree edges and thin edges are non-tree edges. Left: we assign random $\phi$ values to the non-tree edges. Middle: we set $\phi(e) := \bigoplus_{f \in \delta(v) \backslash e} \phi(f)$ for a leaf vertex $v$. Right: repeating the previous step yields the completed circulation $\phi$.} \label{fig:example} \end{figure} \section{Basic Algorithms}\label{sec:app} In this section we show how to use random circulations to probabilistically determine the cut edges, cut pairs, and cut vertices of a graph. 
These are the Monte Carlo versions of the algorithms. \subsection{Finding All Cut Edges}\label{sec:cutedgealg} We provide pseudocode in \prettyref{alg:edge} and then prove its correctness. It is based on the easy fact that $e$ is a cut edge if and only if $\{e\}$ is an induced edge cut, which we state without proof. \begin{algorithm}[ht] \caption{Given a connected graph $\ensuremath{G},$ compute the cut edges of $\ensuremath{G}.$}\label{alg:edge} \begin{algorithmic}[1] \State Let $b = \lceil \log_2 VE \rceil$ and let $\phi$ be a random $b$-bit circulation on $\ensuremath{G}$. \State Output all edges $e$ for which $\phi(e)={\bf 0}$ \label{line:edgecheck} \end{algorithmic} \end{algorithm} \begin{thm}\prettyref{alg:edge} correctly determines the cut edges with probability at least $1-1/V$ and can be implemented in $O(E)$ sequential time.\label{thm:sedge} \end{thm} \begin{proof} Using the fact above, \prettyref{cor:nonsep-indep}, and a union bound, the probability of error is at most $E/2^b \leq 1/V$. The subroutine \textsc{Rand-$b$-Bit-Circ}\, as well as \prettyref{line:edgecheck} of \prettyref{alg:edge}, each take $O(E)$ sequential time. \end{proof} \subsection{Finding All Cut Pairs and Cut Classes}\label{sec:cutpairs} \prettyref{prop:foo}, whose easy proof we omit, leads to our approach for finding cut pairs. \begin{prop}[Cut pairs are induced]\label{prop:foo} Let $e$ and $f$ be edges that are not cut edges. Then $\{e, f\}$ is a cut pair if and only if $\{e, f\}$ is an induced edge cut. \end{prop} \ignore{\begin{proof} Since $e$ is not a cut edge $G \backslash \{e\}$ is connected, and so $G \backslash \{e, f\}$ must have exactly two connected components. Let $U$ be the vertex set of one of them, and so the other is $V \backslash U$. Note, $f$ must be incident on both $U$ and $V \backslash U$; indeed, otherwise $G \backslash \{e\}$ would not be connected. 
A similar claim holds for $e$; i.e., both $e$ and $f$ lie in $\delta(U).$ No other edges can lie in $\delta(U)$ since this would contradict the fact that $U$ is a connected component of $G \backslash \{e, f\}$. Hence $\delta(U) = \{e, f\}$. \end{proof}} With \prettyref{cor:nonsep-indep} we immediately obtain the following. \begin{cor} \label{cor:seppair} Let $\phi$ be a random $b$-bit circulation and let $e, f$ be two distinct edges that are not cut edges. Then $\Pr[\phi(e) = \phi(f)] = 1$ if $\{e, f\}$ is a cut pair, and $2^{-b}$ otherwise. \end{cor} This yields a cute probabilistic proof of the following basic fact. \begin{cor}[Transitivity of cut pairs] \label{cor:cutpairclass} If $\{e, f\}$ and $\{f, g\}$ are cut pairs, then so is $\{e, g\}$. \end{cor} \begin{proof} Note that $e, f, g$ are not cut edges. Let $\phi$ be a random 1-bit circulation on $\ensuremath{G}$. By \prettyref{cor:seppair}, $\phi(e)=\phi(f)$ and $\phi(f)=\phi(g)$. So $\phi(e)=\phi(g)$ with probability 1, and since $1 \neq 2^{-1}$, \prettyref{cor:seppair} implies that $\{e, g\}$ must be a cut pair. \end{proof} \begin{defn}A \emph{cut class} is an inclusion-maximal subset $K$ of $E$ such that $|K|>1$ and every pair $\{e, f\} \subseteq K$ is a cut pair.\end{defn} We illustrate a cut class in Figure~\ref{fig:cc}. Note the cut class has a natural cyclic order.
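The detection principles of \prettyref{alg:edge} and \prettyref{cor:seppair} can be checked on a small example. The following Python sketch (the code and all names are ours, not the paper's) reports $e$ as a cut edge when $\phi(e)={\bf 0}$ and reports a pair of distinct non-cut edges as a cut pair when their $\phi$ values agree, comparing against brute-force edge deletion.

```python
import random

def rand_circulation(n, edges, b, rng):
    # Uniformly random b-bit circulation, as in Rand-b-Bit-Circ
    # (sequential sketch; function and variable names are ours).
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    par, order, stack = {0: None}, [0], [0]
    while stack:
        u = stack.pop()
        for v, i in adj[u]:
            if v not in par:
                par[v] = i
                order.append(v)
                stack.append(v)
    tree = {par[v] for v in order[1:]}
    phi = [rng.randrange(2 ** b) if i not in tree else 0
           for i in range(len(edges))]
    for v in reversed(order[1:]):   # children first: the parent edge is forced
        x = 0
        for _, i in adj[v]:
            if i != par[v]:
                x ^= phi[i]
        phi[par[v]] = x
    return phi

def connected_without(n, edges, removed):
    # Is the graph still connected after deleting the edge indices in `removed`?
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        if i not in removed:
            adj[u].append(v)
            adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

def detect(n, edges, rng, b=40):
    # phi(e) = 0 flags cut edges; equal values on non-cut edges flag cut pairs.
    phi = rand_circulation(n, edges, b, rng)
    m = len(edges)
    cut_edges = {i for i in range(m) if phi[i] == 0}
    cut_pairs = {frozenset({i, j})
                 for i in range(m) for j in range(i + 1, m)
                 if phi[i] == phi[j]
                 and i not in cut_edges and j not in cut_edges}
    return cut_edges, cut_pairs
```

Cut edges and cut pairs are always flagged (the corresponding XORs vanish deterministically); false positives occur only on a $2^{-b}$-probability collision.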
\begin{figure}[t] \begin{center} \leavevmode \begin{pspicture}(-1.5,-0.6)(3,2.1) \psset{unit=0.5cm} \psline[linewidth=1.2pt,linestyle=dashed](-2,3)(-2,1) \psline[linewidth=1.2pt,linestyle=dashed](-2,1)(-1,-1) \psline(-1,-1)(0,-2) \psline(0,-2)(1,-2) \psline(1,-2)(1,-1) \psline(1,-1)(0,0) \psline(-1,-1)(0,0) \psline(0,0)(0,-2) \psline[linewidth=1.2pt,linestyle=dashed](1,-1)(3,0) \psline[linewidth=1.2pt,linestyle=dashed](3,0)(3,2) \psline(3,0)(4,1) \psline(4,1)(5,0) \psline(5,0)(4,-1) \psline(4,-1)(3,0) \psline(3,2)(4,3) \psline(4,3)(3,4) \psline(3,4)(1,4) \psline(1,4)(1,3) \psline(1,3)(3,2) \psline(3,4)(1,3) \psline(3,2)(3,4) \psline[linewidth=1.2pt,linestyle=dashed,dash=4pt 4pt](1,4)(-1,4) \psline(-1,4)(-2,4) \psline(-2,4)(-2,3) \psline(-2,3)(-1,4) \psdots(-2,3) \psdots(-1,-1) \psdots(0,-2) \psdots(1,-2) \psdots(1,-1) \psdots(0,0) \psdots(3,0) \psdots(3,2) \psdots(4,1) \psdots(5,0) \psdots(4,-1) \psdots(4,3) \psdots(3,4) \psdots(1,4) \psdots(1,3) \psdots(-1,4) \psdots(-2,4) \psdots(-2,1) \end{pspicture} \end{center} \caption{A graph is shown with one cut class highlighted using dashed edges. Deleting any two dashed edges disconnects the graph.} \label{fig:cc} \end{figure} \prettyref{cor:cutpairclass} implies that any two distinct cut classes are disjoint. Hence, even though there may be many cut pairs, we can describe them all compactly by listing all cut classes of the graph. We now give our simple linear-time algorithm to find all cut classes\ignore{The idea is to compute a random $b$-bit circulation for large enough $b$ that $\phi(e)={\bf 0}$ only for cut edges, and so that $\phi$ labels the cut classes of other edges.}, with pseudocode given in \prettyref{alg:pair}. 
\begin{algorithm}[ht] \caption{Given a connected graph $\ensuremath{G},$ compute the cut classes of $\ensuremath{G}.$}\label{alg:pair} \begin{algorithmic}[1] \State Let $b = \lceil \log_2 (VE^2) \rceil$ and let $\phi$ be a random $b$-bit circulation on $\ensuremath{G}$ \label{line:sort} \State {\bf for} each $x \in \ensuremath{\mathbb{Z}}_2^b \backslash \{\bf 0\}$ such that $|\{e \in E \mid \phi(e) = x\}| \geq 2,$ output the cut class $\{e \in E \mid \phi(e) = x\}$\label{line:val-loop} \end{algorithmic} \end{algorithm} \begin{thm}\prettyref{alg:pair} correctly determines the cut pairs with probability at least $1-1/V$ and can be implemented in $O(E)$ sequential time.\label{thm:cpair}\end{thm} \begin{proof} There are $|E|$ edges and the analysis in \prettyref{sec:cutedgealg} shows that $\Pr[\phi(e) = {\bf 0}] \leq 1/2^b$ for each non-cut edge $e$. There are at most $\tbinom{E}{2}$ pairs $\{e, f\}$ of non-cut edges that are not cut pairs and \prettyref{cor:seppair} shows that $\Pr[\phi(e) = \phi(f)] \leq 1/2^b$ for each such pair. Hence, by a union bound, the total probability of error is at most $E/2^b+\tbinom{E}{2}/2^b \leq 1/V$. The subroutine \textsc{Rand-$b$-Bit-Circ}\ has time complexity $O(E)$. It remains to implement \prettyref{line:val-loop} of \prettyref{alg:pair} in $O(E)$ time. To do this, we sort all edges $e$ according to the key $\phi(e)$ using a three-pass \emph{radix sort}. I.e., we consider each value in $\ensuremath{\mathbb{Z}}_2^b$ as a three-digit number in base $2^{b/3} = O(E)$ --- see \cite[\S 9.3]{CLR90} --- then the sort takes $O(E)$ time. \end{proof} \subsection{Finding All Cut Vertices}\label{sec:cutvertexalg} The following characterization of cut vertices underlies our approach. \begin{prop}\label{prop:cvchar} The cut $\delta(v)$ properly contains a nonempty induced edge cut if and only if $v$ is a cut vertex. \end{prop} \begin{proof} First, suppose $v$ is a cut vertex. 
Let $V_1$ be the vertex set of one of the connected components of $\ensuremath{G} \backslash \{v\}.$ Then $\delta(v)$ properly contains the nonempty induced edge cut $\delta(V_1)$. Second, suppose $v$ is not a cut vertex, so there is a spanning tree $T'$ of $G \backslash \{v\}$. Suppose $S \subset V$ has $\delta(S) \subseteq \delta(v)$. Without loss of generality (by complementing $S$ if necessary) we assume $v \in S$. Since no edge of $T'$ is incident to $v$, no edge of $T'$ lies in $\delta(S) \subseteq \delta(v)$; hence $S$ either contains all of $V \backslash \{v\}$ or none of $V \backslash \{v\}$. Thus either $S = V$ in which case $\delta(S)$ is empty, or $S = \{v\}$, in which case $\delta(S)$ is not a proper subset of $\delta(v)$. \end{proof} Using \prettyref{prop:cvchar}, the essential idea in our approach to find cut vertices is to detect for each vertex $v$ whether $\delta(v)$ properly contains any nonempty induced edge cuts. As usual we detect induced edge cuts via \prettyref{cor:nonsep-indep}, this time rephrasing the detection problem as one of finding linearly dependent columns of a matrix. Hence we need the following fact, when $\ensuremath{\mathbb{Z}}_2$ is viewed as a field. \begin{fact} \label{fact:binsum} In a matrix over $\ensuremath{\mathbb{Z}}_2,$ a set $C$ of columns is linearly dependent if and only if some nonempty subset of $C$ sums to the zero column vector $(\bmod\,2)$. \end{fact} Our approach works as follows; note that it does not have a very efficient sequential implementation, but it yields an efficient distributed algorithm. We generate a random $b$-bit circulation $\phi$ for some suitably large $b$; denote the $i$th bit of $\phi(e)$ by $\phi_i(e)$. Let $d(v) := |\delta(v)|$, the \emph{degree} of $v$. Let $\Delta$ denote the maximum degree. For each vertex $v$, let $M^{[v]}$ be a matrix with $b$ rows indexed $1, \dotsc, b$, and $d(v)$ columns indexed by $\delta(v)$; then fill the entries of $M^{[v]}$ according to $M^{[v]}_{ie} = \phi_i(e)$. The following two complementary claims validate our approach.
\begin{clm} \label{clm:sleep} If $v$ is a cut vertex then $\mathop{\mathrm{rank}}(M^{[v]}) \leq d(v)-2$. \end{clm} \begin{proof} Let $V_1$ be the vertex set of one of the connected components of $\ensuremath{G} \backslash \{v\}.$ Note that $\delta(v)$ can be partitioned into two induced edge cuts $\delta(V_1)$ and $\delta(\{v\} \cup V_1).$ By \prettyref{cor:nonsep-indep} the set of columns of $M^{[v]}$ corresponding to $\delta(V_1)$ adds to zero, and by \prettyref{fact:binsum} these columns are linearly dependent. Similarly, the remaining columns, indexed by $\delta(\{v\} \cup V_1)$, are linearly dependent. So $M^{[v]}$ has at least 2 columns that are linearly dependent on the others, and the result follows.\end{proof} \begin{clm} \label{clm:apa} Let $v \in V$ and assume that $v$ is not a cut vertex. Let $\varnothing \subsetneq D \subsetneq \delta(v).$ The probability that the columns of $M^{[v]}$ indexed by $D$ sum to the zero vector $(\bmod\,2)$ is $2^{-b}.$ \end{clm} \begin{proof} By \prettyref{prop:cvchar}, $D$ is not an induced edge cut, and the result follows from \prettyref{cor:nonsep-indep}. \end{proof} Next we show that for $b = \lceil \Delta + 2 \log_2 V \rceil$, it is very likely that $\mathop{\mathrm{rank}}(M^{[v]})<d(v)-1$ iff $v$ is a cut vertex. Thus our approach, with pseudocode given in \prettyref{alg:vertex}, is correct with high probability. \begin{algorithm}[ht] \caption{Given a connected graph $\ensuremath{G},$ compute the cut vertices of $\ensuremath{G}.$}\label{alg:vertex} \begin{algorithmic}[1] \State Let $b = \lceil \Delta + 2 \log_2 V \rceil$ and let $\phi$ be a random $b$-bit circulation on $G$ \State {\bf for} each vertex $v$ of $\ensuremath{G}$, {\bf if} $\mathop{\mathrm{rank}}(M^{[v]}) < d(v)-1$ {\bf then} output $v$ \end{algorithmic} \end{algorithm} \begin{thm}\prettyref{alg:vertex} correctly determines the cut vertices with probability at least $1-1/V$. 
\label{thm:svert} \end{thm} \begin{proof} \prettyref{clm:sleep} shows that all cut vertices are output. Consider a vertex $v$ that is not a cut vertex and let $D$ be a subset of $\delta(v)$ of size $d(v)-1$. By \prettyref{clm:apa}, \prettyref{fact:binsum}, and a union bound, the probability that the columns of $M^{[v]}$ corresponding to $D$ are linearly dependent is at most $2^{d(v)-1}2^{-b} \leq 1/V^2;$ so with probability at least $1-V^{-2},$ we have $\mathop{\mathrm{rank}}(M^{[v]}) \geq |D| = d(v)-1$ and $v$ is not output. By another union bound, the probability that any vertex is misclassified by \prettyref{alg:vertex} is at most $V/V^2=1/V.$ \end{proof} \section{Distributed Implementation}\label{sec:impl} Our algorithms make the following three assumptions: first, the network is synchronous; second, there is a distinguished \emph{leader} vertex at the start of computation; third, every node begins with a unique $O(\log V)$-bit ID. These assumptions are standard in the sense that they are made by the best previous distributed algorithms \cite{ahujazhu,Thur97,2006tsin} for small cuts. Nonetheless, these assumptions can be removed at a cost if desired, e.g. using the synchronizer of \cite{synch} at a polylog($V$) factor increase in complexity, Peleg's \cite{pelegopt} $O(\ensuremath{\mathcal{D}})$-time leader election algorithm, or by randomly assigning IDs in the range $\{1, \dotsc, V^3\}$ (resulting in additional failure probability at most $\tbinom{V}{2}/V^3$ due to ID collisions). Although only vertices can store data in the distributed model, we maintain data for each edge $e$ (e.g., to represent a tree) by having both endpoints of $e$ store the data. At the end of the algorithm, we require that the correct result is known locally, so each node stores a boolean variable indicating whether it is a cut node, and similarly for edges. 
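Before turning to the distributed implementations, the rank test of \prettyref{alg:vertex} can be sanity-checked sequentially. The following Python sketch (the code and all names are ours, not the paper's) builds $M^{[v]}$ from a random circulation, computes its rank over $\ensuremath{\mathbb{Z}}_2$ by Gaussian elimination on integer-encoded columns, and compares against a brute-force vertex-deletion check.

```python
import random

def rand_circulation(n, edges, b, rng):
    # Uniformly random b-bit circulation (sequential sketch; names are ours).
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    par, order, stack = {0: None}, [0], [0]
    while stack:
        u = stack.pop()
        for v, i in adj[u]:
            if v not in par:
                par[v] = i
                order.append(v)
                stack.append(v)
    tree = {par[v] for v in order[1:]}
    phi = [rng.randrange(2 ** b) if i not in tree else 0
           for i in range(len(edges))]
    for v in reversed(order[1:]):   # children first: the parent edge is forced
        x = 0
        for _, i in adj[v]:
            if i != par[v]:
                x ^= phi[i]
        phi[par[v]] = x
    return phi

def gf2_rank(cols):
    # Gaussian elimination over Z_2; each column is an int (bit i = row i).
    basis = {}                      # leading-bit position -> reduced vector
    for v in cols:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

def cut_vertices_monte_carlo(n, edges, rng, b=64):
    # v is reported iff rank(M^[v]) < d(v) - 1, where the columns of M^[v]
    # are the phi values of the edges incident to v.
    phi = rand_circulation(n, edges, b, rng)
    incident = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        incident[u].append(i)
        incident[v].append(i)
    return {v for v in range(n)
            if gf2_rank([phi[i] for i in incident[v]]) < len(incident[v]) - 1}

def cut_vertices_brute(n, edges):
    # Ground truth: v is a cut vertex iff deleting it disconnects the rest.
    out = set()
    for v in range(n):
        adj = [[] for _ in range(n)]
        for (a, c) in edges:
            if v not in (a, c):
                adj[a].append(c)
                adj[c].append(a)
        start = (v + 1) % n
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) != n - 1:
            out.add(v)
    return out
```

Encoding each column as an integer makes the $\ensuremath{\mathbb{Z}}_2$ elimination a few lines of XOR arithmetic; a cut vertex is always reported, since the columns of each of the two induced cuts partitioning $\delta(v)$ sum to zero.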
To indicate cut pairs, each edge must know whether it is in any cut pair, and in addition we must give every cut class a distinct label. Previous work also essentially uses these representations. When stating distributed algorithms, the assumptions of a leader, synchrony, unique IDs, and $O(\log V)$-bit messages are implicit. Our algorithms use a breadth-first search (BFS) tree with a root $r$ as the basis for communication. One reason that BFS trees are useful is that they can be constructed quickly (e.g., see \cite[\S 5.1]{p2000}), as follows. \begin{prop}\label{prop:bfs} There is a distributed algorithm to construct a BFS tree in $O(\ensuremath{\mathcal{D}})$ time and $O(E)$ messages. \end{prop} \noindent For a tree $T$, the \emph{level} $l(v)$ of $v \in V$ is the distance in $T$ between $v$ and $r$. The \emph{height} $h(T)$ of tree $T$ is the maximum vertex level in $T$. Any BFS tree $T$ has $h(T) \leq \ensuremath{\mathcal{D}}$, and this is important because several fundamental algorithms based on passing information up or down the tree take $O(h(T))$ time. The \emph{parent} of $u$ is denoted $p(u)$. The \emph{level of tree edge $\{u, p(u)\}$} is the level of $u.$ \subsection{Random Circulations and Cut Edges} \label{sec:cutedgeimpl} When we construct a random circulation, we require at termination that each $v$ knows $\phi(e)$ for each $e \in \delta(v)$. \begin{thm}\label{thm:drc}\label{thm:zdrc} There is a distributed algorithm to sample a random $b$-bit circulation in $O(\ensuremath{\mathcal{D}})$ time and $O(E)$ messages, when $b = O(\log V)$. \end{thm} \begin{proof} We implement \textsc{Rand-$b$-Bit-Circ}\ distributively. The size bound ensures that $b$-bit strings can be sent in a message. We compute a BFS tree $T$, using \prettyref{prop:bfs}. Then for each non-tree edge $e$ in parallel, the endpoint with the higher ID picks a random $b$-bit value for $\phi(e)$ and sends it to the other endpoint.
In the following $h(T)$ rounds, for $i=h(T)$ down to 1, each level-$i$ vertex $v$ computes $\phi(\{v, p(v)\}) := \bigoplus_{f \in \delta(v) \backslash \{v, p(v)\}} \phi(f)$ and sends this value to $p(v)$. The complexity is $O(\ensuremath{\mathcal{D}}+h(T))=O(\ensuremath{\mathcal{D}})$ time and $O(E+E)$ messages. \end{proof} \prettyref{thm:drc} yields our distributed cut edge algorithm. \begin{thm}\label{thm:dce} There is a distributed algorithm to compute all cut edges with probability at least $1-1/V$ in $O(\ensuremath{\mathcal{D}})$ time and using $O(E)$ messages. \end{thm} \begin{proof} We implement \prettyref{alg:edge} distributively, obtaining the required correctness probability by \prettyref{thm:sedge}. We use \prettyref{thm:drc} to compute a random $\lceil \log_2 VE \rceil$-bit circulation in the required complexity bounds. Then we identify $e$ as a cut edge if $\phi(e)={\bf 0}$. \end{proof} \subsection{Pipelining and Cut Vertices} Our cut vertex algorithm requires a circulation on $\Theta(\Delta+\log V)$ bits, and in order to construct such a circulation quickly, we use a \emph{pipelining} technique. Let $\pi$ be a distributed algorithm in which for each edge $e$, the total number of messages sent on $e$ by $\pi$ is bounded by some universal constant $C_0$. The messages' content may be random but the message-passing schedule must be deterministic. To \emph{pipeline $s$ instances of $\pi$} means to execute $s$ instances $\{\pi_i\}_{i=1}^s$ of $\pi$, each one delayed by a unit time step from the previous. When multiple instances need to simultaneously send messages along the same edge we concatenate them, increasing the message sizes by a factor of at most $C_0$. Compared to $\pi$, pipelining adds $s-1$ to the time complexity and increases the message complexity by a factor of $s.$ A straightforward implementation of \prettyref{alg:vertex} results in our cut vertex algorithm, as follows.
\begin{thm}\label{thm:dcv} There is a distributed algorithm to compute all cut vertices with probability at least $1-1/V$ in $O(\ensuremath{\mathcal{D}} + \Delta / \log V)$ time and using $O(E (1+ \Delta / \log V ))$ messages. \end{thm} \begin{proof} We implement \prettyref{alg:vertex} distributively, obtaining failure probability at most $1/V$ by \prettyref{thm:svert}. Let $b = \lceil \Delta + 2 \log_2 V \rceil.$ \prettyref{thm:zdrc} gives an algorithm $\pi$ to construct a random $O(\log V)$-bit circulation; note $\pi$ sends a constant number of messages along each edge. We pipeline $\lceil b/\log V \rceil$ instances of $\pi$ to construct a random $b$-bit circulation. Then, each vertex $v$ locally computes the rank of $M^{[v]}$ to determine if it is a cut vertex. Since $\pi$ takes $O(\ensuremath{\mathcal{D}})$ rounds and sends $O(E)$ messages, and $b = O(\Delta + \log V),$ the implementation takes $O(\ensuremath{\mathcal{D}} + \Delta / \log V)$ time and $O(E (1+ \Delta / \log V ))$ messages.\end{proof} \subsection{Fundamental Cycle-Cast (fc-cast)} \label{sec:fccast}\label{sec:fc} We now define a new distributed technique, needed for our cut pair algorithm. A \emph{non-tree edge} is an edge $e \in E \backslash E(T)$. For a spanning tree $T$ and non-tree edge $e,$ the unique cycle in $T \cup \{e\}$ is called \emph{the fundamental cycle of $T$ and $e$}, and we denote it by $C_e$. We call our new technique \emph{fundamental cycle-cast}, or \emph{fc-cast} for short, and informally it allows simultaneous processing on all fundamental cycles. Let each vertex $v$ store some data $\dta{v}$ of length $O(\log V)$ bits. We assume that $\dta{v}$ includes the ID, level, and parent ID of $v$, since this information can be appended to $\dta{v}$ while increasing its length by at most $O(\log V)$ bits.
At the end of the fc-cast, each non-tree edge $e$ will know $\dta{u}$ for every vertex $u$ in the fundamental cycle of $T$ and $e.$ \begin{thm}\label{thm:fc-cast} There is a distributed algorithm {\sc Fc-Cast} using $O(h(T))$ time and $O(\min\{E\cdot h(T), V^2\})$ messages that, for each non-tree edge $e$, for each $v \in C_e$, sends $\dta{v}$ to both endpoints of $e$. \end{thm} As a subroutine, we need a tree broadcast adapted from \cite[\S 3.2]{p2000}. \begin{prop}\label{prop:tree-cast} There is a distributed algorithm {\sc Tree-Broadcast} using $O(h(T))$ time and $O(V \cdot h(T))$ messages that sends $\dta{v}$ to $u$ for each $v \in V$ and each descendant $u$ of $v$. \end{prop} \begin{proof} Let $\pi$ be a generic distributed algorithm that sends one message from $p(v)$ to $v$ at time $l(v);$ in particular, $\pi$ takes $O(V)$ messages, $O(h(T))$ time, and sends at most one message on each edge. Define instances $\{\pi_i\}_{i=0}^{h(T)}$ of $\pi$ so that for every vertex $v$ at level $i$, and for every descendant $u$ of $v$, instance $\pi_i$ is responsible for propagating $\dta{v}$ to $u$. Each instance $\pi_i$ sends empty messages for the first $i$ rounds, and in round $t > i$, for each $v$ with $l(v)=i$, propagates $\dta{v}$ down the level-$t$ tree edges descending from $v$. Since there are $h(T)+1$ pipelined instances and $\pi$ takes $O(h(T))$ time and $O(V)$ messages, the complexity follows. \end{proof} \begin{proof}[Proof of \prettyref{thm:fc-cast}] An fc-cast has two steps. First, we execute {\sc Tree-Broadcast}, and as a result we may assume that each vertex has a \emph{list} of the data of all its ancestors. In the second step, for each non-tree edge $\{v, w\}$ in parallel, $v$ sends its list to $w$ and vice-versa. Note that each non-tree edge $e$ can determine its fundamental cycle with $T$ by comparing its endpoints' lists. (More precisely, either endpoint of $e$ can make this determination.)
Each list has at most $1+h(T)$ items, each of which is $O(\log V)$ bits long and can be sent in a single message, so both steps in the fc-cast take $O(h(T))$ time. The message complexity of the second step as just described is $O(E\cdot h(T))$, but now we give a refinement that achieves $O(\min\{E\cdot h(T), V^2\})$ message complexity. The essential idea is for all $u, v \in V$, we want to avoid sending $\dta{u}$ to $v$ more than once. Implement the second step of the fc-cast so that each vertex $v$ sends one $\dta{\cdot}$ value per round, and in the order $\dta{v}$ first, then $\dta{p(v)},$ etc., with the data of the root last. When a vertex $u$ receives $\dta{x}$ for the second time for some $x$, $u$ asks the sender to stop sending its list. Likewise, if $u$ receives $\dta{x}$ from multiple neighbors at the same time, $u$ asks all but one to stop sending their lists. Along each edge, at most one redundant message and one stop request can be sent in each direction. There can only be $V^2$ non-redundant messages; hence the total number of messages sent in this step is $O(V^2+E)$. Considering the tree-broadcast as well, the total message complexity is $O(V \cdot h(T) + \min\{E\cdot h(T), V^2 + E\}) = O(\min\{E\cdot h(T), V^2\})$ as claimed. \end{proof} We can implement fc-cast in $O(h(T))$ time with message complexity even smaller than $\min\{E\cdot h(T), V^2\}$ using a nearest common ancestor labeling scheme of \cite{AGKR02}. We only sketch the idea since the precise improved complexity is somewhat awkward to state (seemingly cannot be expressed in terms of parameters $V, E, \Delta, h(T)$) and does not seem universally optimal. If $uw$ is an edge not in $T$, call $w$ a \emph{non-tree neighbour} of $u$ and vice-versa. 
The general idea behind the optimized implementation is that, while the implementation in \prettyref{thm:fc-cast} sends $\dta{v}$ to each descendant of $v$ and each non-tree neighbour of a descendant of $v$, we can actually send $\dta{v}$ to a smaller subset of these nodes while meeting the definition of a fundamental cycle-cast. In more detail, the scheme of \cite{AGKR02} gives each vertex an $O(\log V)$-bit label such that given \emph{just the labels} of any two nodes, we can also compute the label of their \emph{nearest common ancestor} (with a deterministic algorithm independent of $T$). Alstrup et al.\ do not work in any specific distributed model, but their scheme is built out of standard primitives like the number of descendants of a given node, and as such can be implemented in the model we consider in $O(h(T))$ time and $O(E)$ messages. The first step of our new implementation is to compute these labels. Then, in unit time and $2|E|$ messages, we have each node inform each of its neighbours of its label. At a high level, the labeling scheme allows the implementation to be optimized as follows. In the first step we send $\dta{v}$ down to its descendant $u$ only if there is some fundamental cycle containing both $u$ and $v$; in the second step each $u$ asks for $\dta{\cdot}$ values from its non-tree neighbours in such a way that $u$ receives each $\dta{\cdot}$ value at most once, and only asks for $\dta{v}$ from $w$ if $C_{uw}$ contains $v$. Implementing these steps requires that nodes have some knowledge about the relative position of their neighbours in the tree, which is accomplished using the labels. There are some slightly complicated details in implementing the first step, for which a pipelined \emph{convergecast} (see \prettyref{prop:con-cast}) suffices. \subsection{Distributed Cut Pair Algorithm}\label{sec:cutpairimpl} When computing the cut pairs, it helps if we assume that $\ensuremath{G}$ has no cut edges, i.e.\ $G$ is 2-edge-connected.
To make this assumption without loss of generality, for our input graph $G$, we compute the set $E_C$ of cut edges using \prettyref{thm:lvedge} and then report the cut pairs of the \emph{2-edge-connected components}, which are the connected components of $G \backslash E_C$ (we elaborate in \prettyref{sec:cc}). It is straightforward to show that the cut pairs of $G$ are the cut pairs of these components, that each component has no cut edge, and that no component has diameter greater than that of $G$. It is not obvious how to implement our sequential cut pair algorithm (\prettyref{alg:pair}) distributively: although the cut classes are properly labeled with high probability by $\phi$, in order for edge $e$ to know whether it belongs to any cut pair, it needs to determine if any other edge $f$ has $\phi(e)=\phi(f)$, and this cannot be done using local information (i.e., in $O(1)$ rounds). We use fc-cast to overcome this obstacle. The following claims are used to relate fundamental cycles to cut classes. (The first is fairly intuitive given \prettyref{fig:cc}.) \begin{lmma} \label{lmma:ccc} If a cycle $C$ and a cut class $K$ satisfy $K \cap C \neq \varnothing$ then $K \subseteq C.$ \end{lmma} \begin{proof} Suppose that $e \in K \cap C$ but $f \in K \backslash C.$ Then by \prettyref{prop:foo}, $\{e, f\}$ is an induced edge cut. But then $|\{e, f\} \cap C|=1$, contradicting \prettyref{prop:orth} (the orthogonality of the cut space and cycle space). \end{proof} \begin{clm}\label{clm:responsible} Let $K$ be a cut class. Then $K \subset C_e$ for some $e \in E \backslash E(T)$. \end{clm} \begin{proof} First we claim $K$ contains at most one non-tree edge. Suppose otherwise, for the sake of contradiction, that $K$ contains two non-tree edges $e$ and $f$. Then $\{e, f\}$ is a cut pair and so $\ensuremath{G} \backslash \{e, f\}$ is not connected. However, this contradicts the fact that $\ensuremath{G} \backslash \{e, f\}$ contains the spanning tree $T$.
The definition of a cut class implies $|K|>1$, so $K$ contains at least one tree edge $e$. Since $e$ is not a cut edge, $\ensuremath{G} \backslash \{e\}$ is connected, and hence there is a non-tree edge $f$ that connects the two connected components of $T \backslash \{e\}.$ The fundamental cycle $C_f$ of $f$ and $T$ thus contains $e,$ and by \prettyref{lmma:ccc}, all of $K.$ \end{proof} To describe our cut pair algorithm we introduce a variant of a standard technique, the \emph{convergecast} (e.g., see \cite[\S 4.2]{p2000}). Informally, it allows each node to independently query its descendants. In this paper we take the convention that $v$ is always a descendant of itself. Let $Desc(v)$ denote the set of $v$'s descendants. For each $v \in V$, and each $u \in Desc(v)$, let $\mathop{\tt w}[u, v]$ be a variable of length $\Theta(\log V)$ stored at $u$. \begin{prop}\label{prop:con-cast} There is a distributed algorithm {\sc Converge-Cast} that uses $O(h(T))$ time and $O(V \cdot h(T))$ messages so that each $v \in V$ determines $\max \{\mathop{\tt w}[u, v] \mid u \in Desc(v)\}.$ \end{prop} \begin{proof} We assume some familiarity with the basic implementation of convergecast in order to gloss over some routine details; see \cite[\S 4.2]{p2000}. We use $\pi$ to represent a generic distributed algorithm that sends messages from leaves to the root in level-synchronized fashion. The ``standard" convergecast uses $\pi$ to compute $\max \{\mathop{\tt w}[u, r] \mid u \in V\}$ at $r$; in round $i$, for $i$ from $h(T)$ down to 1, every level-$i$ node passes up the largest value that it knows about to its parent. A slight modification yields instances $\{\pi_i\}_{i=0}^{h(T)}$ of $\pi$ so that for every vertex $v$ at level $i$, instance $\pi_i$ propagates $\max \{\mathop{\tt w}[u, v] \mid u \in Desc(v)\}$ to $v$. Since there are $h(T)+1$ pipelined instances and $\pi$ takes $O(h(T))$ time and $O(V)$ messages, the complexity follows.
\end{proof} \ignore{Finally, we explain the details of convergecast. \begin{proof}[Proof of \prettyref{prop:con-cast}: {\sc Converge-Cast}] Let $\pi$ be a generic distributed algorithm that sends one message from $v$ to $p(v)$ at time $1+h(T)-l(v);$ in particular, $\pi$ takes $O(V)$ messages, $O(h(T))$ time, and sends at most one message on each edge. Define instances $\{\pi_i\}_{i=0}^{h(t)}$ of $\pi$ so that for every vertex $v$ at level $i$, instance $\pi_i$ is responsible for propagating $\bigvee_{u \in Desc(v)} \mathop{\tt w}[u, v]$ to $v$. We implement $\pi_i$ as follows. For $v' \in Desc(v)$, define $x[v', v] := \bigvee_{u \in Desc(v')} \mathop{\tt w}[u, v]$. Each level-$h(T)$ vertex $v'$ can immediately compute $x[v', v]$ for all its ancestors $v$. In $h(T)$ rounds, for $j$ from $h(T)$ down to 1, for each vertex $v'$ at level $j$ and each ancestor $v$ of $v'$ at level $i$, $v'$ computes $x[v', v]$ and sends $x[v', v]$ to $p(v')$. Observe that $$x[v', v] = \mathop{\tt w}[v', v] \vee \bigvee_{v'' \textrm{ a child of $v'$}} x[v'', v].$$ Hence, $x[v', v]$ can be computed by $v'$ by taking the OR of $\mathop{\tt w}[v', v]$ and all values sent up to $v'$ from its children in the previous round. Since there are $h(T)+1$ pipelined instances and $\pi$ takes $O(h(T))$ time and $O(V)$ messages, the complexity follows. \end{proof}} \begin{thm}\label{thm:dcpair} There is a distributed algorithm to compute all cut classes with probability at least $1-1/V$ in $O(\ensuremath{\mathcal{D}})$ time and using $O(\min\{E\cdot \ensuremath{\mathcal{D}}, V^2\})$ messages. \end{thm} \begin{proof} As in \prettyref{alg:pair}, for $b = \lceil \log_2 (VE^2) \rceil$ we compute a random $b$-bit circulation $\phi$ on $\ensuremath{G}$, using \prettyref{thm:zdrc}. Denote the following assumption by \eqref{eq:ass2}. \comment{\begin{equation}\textrm{For all edges $e$, $\phi(e)={\bf 0}$ if and only if $e$ is a cut edge}. 
\tag{\ensuremath{\star}} \label{eq:ass1}\end{equation} \vspace{-0.7cm}} \begin{equation} \textrm{For all edges $e, f$, $\phi(e)=\phi(f)$ if and only if $\{e, f\}$ is a cut pair}. \tag{\ensuremath{\star}} \label{eq:ass2}\end{equation} \noindent By the analysis in the proof of \prettyref{thm:cpair}, we may assume that \eqref{eq:ass2} holds without violating the required bound of $1/V$ on the probability of error. It remains only for each edge to determine whether it is a member of any cut pair, since then $\phi$ labels the cut classes. For each vertex $v \neq r$ let $\dta{v} := \phi(\{v, p(v)\}).$ We run {\sc Fc-Cast}, and as a result, the endpoints of each non-tree edge $e$ can compute the multiset $\Phi_e := \{\phi(f) \mid f \in C_e\}$. The following claim, which follows immediately from \prettyref{clm:responsible}, lets each non-tree edge determine if it is a member of any cut pair. \begin{clm}\label{clm:phish1} A non-tree edge $e$ is in a cut pair if and only if $\phi(e)$ occurs multiple times in $\Phi_e$. \end{clm} To deal with tree edges, for each $v \in V$ and each $u \in Desc(v)$, define $$\mathop{\tt w}[u, v] := | \{ e \in \delta(u) \backslash E(T) \mid \{v, p(v)\} \in C_e \textrm{ \& $\phi(\{v, p(v)\})$ occurs $\ge 2$ times in }\Phi_e \} |.$$ \noindent and note that $\mathop{\tt w}[u, v]$ can be determined by $u$ after the fc-cast. We run {\sc Converge-Cast}. \begin{clm}\label{clm:phish2} Tree edge $\{v, p(v)\}$ is in a cut pair if and only if $\exists u \in Desc(v)$ such that $\mathop{\tt w}[u, v]>0$. \end{clm} \begin{proof} If $\{v, p(v)\}$ lies in a cut pair then by \prettyref{clm:responsible} there is a fundamental cycle $C_e$ containing that cut pair. It is easy to see that one endpoint $u$ of $e$ is a descendant of $v$ and has $\mathop{\tt w}[u, v]>0$. \end{proof} By \prettyref{prop:con-cast}, after the convergecast, each tree edge can use \prettyref{clm:phish2} to determine if it is a member of any cut pair. 
Adding up the complexity associated with constructing a BFS tree and a random circulation, the fc-cast, and the converge-cast, we obtain $O(\ensuremath{\mathcal{D}}+\ensuremath{\mathcal{D}}+\ensuremath{\mathcal{D}}+\ensuremath{\mathcal{D}})$ time and $O(E+E+\min\{E\ensuremath{\mathcal{D}}, V^2\}+V\ensuremath{\mathcal{D}}) = O(\min\{E\ensuremath{\mathcal{D}}, V^2\})$ messages, as claimed.\end{proof} \ignore{\subsubsection{Proofs of Claims}\label{sec:phish} \begin{proof}[Proof of \prettyref{clm:phish1}] If $e$ is not in any cut pair, then by \eqref{eq:ass2}, $\phi(e) \neq \phi(f)$ for every $f \neq e$, so $\phi(e)$ occurs once in $\Phi_e$. If $e$ is in a cut pair $\{e, f\}$, \prettyref{clm:responsible} implies that $\{e, f\} \in C_e$ (because no fundamental cycle but $C_e$ contains $e$). By \eqref{eq:ass2} $\phi(e) = \phi(f)$. Since $\{e, f\} \subset C_e$, $\phi(e)$ occurs at least twice in $\Phi_e$. \end{proof} \begin{proof}[Proof of \prettyref{clm:phish2}] If $\{v, p(v)\}$ does not lie in any cut pair, then by \eqref{eq:ass2}, $\phi(\{v, p(v)\})$ cannot occur multiple times in any $\Phi_e$. Hence, $\mathop{\tt w}[u, v] = 0$ for all $u$. If $\{v, p(v)\}$ lies in some cut pair, then by \prettyref{clm:responsible} there is some non-tree edge $e$ so that $C_e$ contains the cut class of $\{v, p(v)\}$. Let $u$ be either endpoint of $e$; since $\{v, p(v)\} \in C_e$, $u$ is indeed a descendant of $v$. Due to \eqref{eq:ass2}, $\mathop{\tt w}[u, v] > 0$. \end{proof}} \section{Computing $\{2,3\}$-Edge-Connected Components}\label{sec:cc} Let $E_{C}$ denote the set of all cut edges, and $E_{CP}$ denote the set of all edges in any cut pair. \begin{defn} The \emph{$2$-edge-connected components} are the connected components of $G \backslash E_C$. The \emph{$3$-edge-connected components} are the connected components of $G \backslash (E_{CP} \cup E_C)$. \end{defn} In the sequential model, connected components of a graph can be computed in linear time. 
Hence we immediately see that our linear-time sequential cut edge and cut pair algorithms yield linear-time algorithms for 2- and 3-edge-connected components. In the distributed model, we first discuss 2-edge-connected components. Let $T$ denote a spanning tree and $r$ its root. The desired representation is for each vertex $v$ to store a label $\tau(v)$ so that $\tau(u)=\tau(v)$ iff $u, v$ are in the same 2-edge-connected component. Observe that $E_C \subset E(T)$, since if $e \not\in T$, then $G \backslash e \supset T$ is connected. Furthermore, the following holds. \begin{clm} If $u, v$ are in the same $2$-edge-connected component, there are no cut edges on the unique $u$-$v$ path in $T$. \end{clm} \begin{proof} Suppose such a cut edge $e = \{u', v'\}$ exists, where $u'$ is the end of $e$ closer to $u$ along the $u$-$v$ path in $T$. Then in $G \backslash \{e\}$, the remainder of the tree path connects $u$ to $u'$ and $v$ to $v'$. Since $u, v$ are in the same 2-edge-connected component, $u$ and $v$ are connected in $G \backslash \{e\}$. Thus $u'$ and $v'$ are connected in $G \backslash \{e\}$, contradicting the fact that $e = \{u', v'\}$ is a cut edge of $G$. \end{proof} \begin{cor}\label{cor:2ecc-struct} $T \backslash E_C$ is a spanning forest of the $2$-edge-connected components. \end{cor} In particular, for each 2-edge-connected component $H$, there is a subtree $T_H$ of $T \backslash E_C$ spanning $H$. The idea is to label the vertices of $H$ by the ID of the root of $T_H$. \begin{thm}\label{thm:2ecc} There is a distributed algorithm to compute all $2$-edge-connected components with probability at least $1-1/V$ in $O(\ensuremath{\mathcal{D}})$ time and using $O(E)$ messages. \end{thm} \begin{proof} Note that for a vertex $v$ with 2-edge-connected component $H$, $v$ is the root of $T_H$ if and only if either $v$ is the root $r$ of $T$, or $\{v, p(v)\}$ is a cut edge. Otherwise, $v$ and $p(v)$ are in the same 2-edge-connected component.
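Centrally, this labeling rule can be sketched as follows (a minimal sketch under our own naming; the distributed implementation pipelines the same rule down the tree):

```python
def bfs_tree(n, edges, root=0):
    """BFS spanning tree: parent map plus a root-first vertex order."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    parent, order = {root: None}, [root]
    for v in order:
        for w in adj[v]:
            if w not in parent:
                parent[w] = v; order.append(w)
    return parent, order

def label_2ecc(n, edges, cut_edges, root=0):
    """tau(root) = root's ID; every other v takes its own ID if {v, p(v)}
    is a cut edge, and inherits tau(p(v)) otherwise."""
    parent, order = bfs_tree(n, edges, root)
    tau = {}
    for v in order:                      # parents are visited before children
        p = parent[v]
        if p is None or frozenset((v, p)) in cut_edges:
            tau[v] = v                   # using the vertex itself as its ID
        else:
            tau[v] = tau[p]
    return tau

# two triangles joined by the bridge (2, 3); the bridge is the only cut edge
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (5, 3)]
tau = label_2ecc(6, edges, {frozenset((2, 3))})
assert tau == {0: 0, 1: 0, 2: 0, 3: 3, 4: 3, 5: 3}
```

Vertices $0,1,2$ inherit the root's label, while $3$ starts a fresh label below the cut edge and passes it to $4$ and $5$.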
First we compute the cut edges, using \prettyref{thm:dce}. Vertex $r$ sets $\tau(r)$ equal to its ID. In the following $h(T)$ rounds, for $i=1$ to $h(T)$, for all level-$i$ tree edges $\{v, p(v)\}$ in parallel, vertex $p(v)$ sends $\tau(p(v))$ to $v$. Upon receiving this message, $v$ sets $\tau(v) := ID(v)$ if $\{v, p(v)\}$ is a cut edge, and $\tau(v) := \tau(p(v))$ otherwise. The labeling takes $O(h(T))$ time and $|V|-1$ messages, and the result follows. \end{proof} Now we discuss 3-edge-connected components. In the distributed model, we can represent a subgraph $(V, F)$ of $(V, E)$ by using a local boolean variable for each edge. For this representation, \cite{Thur97} gave a distributed connected components algorithm in $O(\ensuremath{\mathcal{D}}+\sqrt{V}\log^* V)$ time, using an MST subroutine in which the weight of edge $e$ is 1 for $e \not\in F$ and 0 for $e \in F$. Hence we have the following corollary to our cut pair algorithm, \prettyref{thm:dcpair}. \begin{cor}\label{cor:dce} There is a distributed algorithm to compute all 3-edge-connected components with probability at least $1-1/V$ in $O(\ensuremath{\mathcal{D}}+\sqrt{V}\log^* V)$ time and using $O(E(\ensuremath{\mathcal{D}}+\sqrt{V}\log^* V))$ messages. \end{cor} \section{Las Vegas Distributed Implementation} \label{sec:lasvegas} In this section we describe how to turn our Monte Carlo distributed algorithms into Las Vegas algorithms, by giving a \emph{verifier} for each one. Given the output of the Monte Carlo algorithm, the verifier determines whether the output is correct or not; we re-run the Monte Carlo algorithm until the output is verified correct. For each of our verifiers, the time complexity is no more than the time complexity of the corresponding Monte Carlo algorithm; this fact and the fact that our algorithms work with high probability together imply that the resulting Las Vegas algorithms have the same asymptotic complexity as the Monte Carlo ones.
See \cite[\S 1.2]{randalgs} for more details. Here is a high-level description of the three verifiers. The cut edge verifier works by labeling vertices according to their 2-edge-connected component; the cut vertex verifier works by labeling edges according to their \emph{blocks}; the cut pair verifier works by exploiting relations between cut classes and fundamental cycles. All three of the verifiers rely on the fact that our Monte Carlo algorithms have one-sided error. \subsection{Verifier for Cut Edges}\label{lv:cut edge} Recall that \prettyref{alg:edge} always outputs all cut edges, but may erroneously output some non-cut edges. Observe that a non-tree edge cannot be a cut edge; so we may assume the Monte Carlo algorithm outputs a set $E'_C$ such that $E(T) \supseteq E'_C \supseteq E_C$, by having the verifier reject any output containing a non-tree edge. Here is the key idea: we compute the connected components of $T \backslash E'_C$. We only need to show how to determine if $E'_C \backslash E_C$ is nonempty; this can be done using the following proposition and its converse, which follows. \begin{prop}\label{prop:vce}If $E'_C \backslash E_C$ is nonempty, there is a non-tree edge joining vertices in different connected components of $T \backslash E'_C$. \end{prop} \begin{proof} Let $e$ be any element of $E'_C \backslash E_C$. Since $e$ is not a cut edge, there is another edge $f \in E$ connecting the two connected components of $T \backslash e.$ The endpoints of $f$ lie in different connected components of $T \backslash E'_C$. \end{proof} \begin{prop}\label{prop:covce} If $E'_C \backslash E_C$ is empty, then the connected components of $T \backslash E'_C$ are the 2-edge-connected components, and every non-tree edge has its endpoints in the same connected component of $T \backslash E'_C$. \end{prop} \begin{proof} \prettyref{cor:2ecc-struct} guarantees that the connected components of $T \backslash E'_C$ are the 2-edge-connected components of $G$. 
Since each non-tree edge lies in at least one cycle (e.g.\ its fundamental cycle with $T$), its endpoints lie in the same 2-edge-connected component. \end{proof} \begin{thm}\label{thm:lvedge} There is a Las Vegas distributed algorithm to compute all cut edges in $O(\ensuremath{\mathcal{D}})$ time and using $O(E)$ messages, in expectation. \end{thm} \begin{proof} We run the $O(\ensuremath{\mathcal{D}})$-time, $O(E)$-message Monte Carlo cut edge algorithm from \prettyref{thm:dce}, and as remarked earlier, we know its output $E'_C$ satisfies $E'_C \supseteq E_C$. Then we run the following verifier, terminating if it accepts, and restarting from scratch (i.e., re-running the Monte Carlo algorithm) as long as it rejects. If $E'_C$ contains a non-tree edge, we reject. Otherwise (if $E'_C \subseteq E(T)$) we compute the connected components of $T \backslash E'_C$ using an implementation like that in the proof of \prettyref{thm:2ecc}, which takes $O(V)$ messages and $O(\ensuremath{\mathcal{D}})$ time. If any non-tree edge has its endpoints in different components we reject, otherwise the verifier accepts; this can be checked in unit time and $O(E)$ messages. It follows from Propositions \ref{prop:vce} and \ref{prop:covce} that the verifier accepts if and only if $E'_C = E_C$. Since the probability of acceptance is $\Omega(1)$, the expected time complexity is $O(\ensuremath{\mathcal{D}}+\ensuremath{\mathcal{D}}+1)$ and the expected message complexity is $O(E+V+E)$. \end{proof} \subsection{Verifier for Cut Pairs}\label{sec:vcp} As in \prettyref{sec:cutpairimpl} we assume without loss of generality in this section that $G$ is 2-edge-connected. Consider the output of our Monte Carlo cut pair algorithm, \prettyref{alg:pair}. The sense in which its output is one-sided is that every cut class is a subset of one of its output classes; the verifier must ensure that no cut class is ``too big." To explain our approach, we define a notion of ``wanting."
Recall $\Phi_e$, the multiset $\{\phi(f) \mid f \in C_e\}$ defined in \prettyref{sec:cutpairimpl}; if the value $x$ appears more than once in $\Phi_e$, say that $e$ \emph{wants} the set $\{f \in C_e \mid \phi(f) = x\}$. With high probability, the wanted sets are precisely the cut classes. First, our verifier checks that whenever an edge lies in two wanted sets, those sets are the same; second, we use the following proposition to verify that no wanted set is ``too big." \begin{prop} \label{prop:outsuf} Let $T$ be any spanning tree and $e, f$ be edges that are not cut edges. If $\{e, f\}$ is not a cut pair, then some fundamental cycle of $T$ contains exactly one of $e$ and $f.$ \end{prop} \begin{proof} We prove the contrapositive; hence we assume that the characteristic vector of $\{e, f\}$ has even dot product with every fundamental cycle. By \prettyref{prop:coolio}(c) the fundamental cycles form a basis of the cycle space; so $\{e, f\}$ is orthogonal to the cycle space, and by \prettyref{prop:coolio}(a), lies in the cut space. Thus $\{e, f\}$ is an induced edge cut, and so (by \prettyref{prop:foo}) a cut pair. \end{proof} In order to apply \prettyref{prop:outsuf}, we count the size of all wanted sets, since then each non-tree edge can determine if its fundamental cycle is ``missing" some members. Our strategy uses a modified {\sc Converge-Cast} (\prettyref{prop:con-cast}) where we interpret $\max$ as lexicographic comparison of data. We need to give each edge a distinct $O(\log V)$-bit name, e.g.\ by concatenating the IDs of its endpoints. When $e$ wants $S$, it sends the ordered pair $(e, |S|)$ towards all of $S.$ (Concretely, for each tree edge $\{v, p(v)\}$ in $S$, this data is sent to $v$.) If two pairs $(e, k)$ and $(e', k')$ such that $k \neq k'$ are sent to the same location, the verifier rejects. Otherwise, each tree edge takes the label $(e, k)$ where $e$ is the lexicographically-maximal edge that wants it. 
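The wanted-set bookkeeping just described can be sketched centrally in Python (a toy sketch; in the actual algorithm the multiset $\Phi_e$ arrives via the fc-cast, and the edge names and helper names here are ours):

```python
from collections import Counter

def wanted_sets(fund_cycles, phi):
    """fund_cycles maps each non-tree edge e to its fundamental cycle C_e
    (a set of edges).  Edge e 'wants' {f in C_e : phi(f) == x} for every
    value x appearing more than once in the multiset {phi(f) : f in C_e}."""
    wants = []
    for e, cyc in fund_cycles.items():
        counts = Counter(phi[f] for f in cyc)
        for x, k in counts.items():
            if k >= 2:
                wants.append((e, frozenset(f for f in cyc if phi[f] == x)))
    return wants

def consistent(wants):
    """First verifier check: an edge may not lie in two distinct wanted sets."""
    owner = {}
    return all(owner.setdefault(f, S) == S for _, S in wants for f in S)

# toy instance: a triangle hung off the spanning tree; all three triangle
# edges carry the same circulation value, so the non-tree edge 'e' wants
# the whole triangle
phi = {'e': 7, 't1': 7, 't2': 7}
fund_cycles = {'e': {'e', 't1', 't2'}}
wants = wanted_sets(fund_cycles, phi)
assert wants == [('e', frozenset({'e', 't1', 't2'}))]
assert consistent(wants)
```

In the high-probability case the wanted sets are exactly the cut classes, and the size check against each fundamental cycle (via \prettyref{prop:outsuf}) rules out classes that are ``too big."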
We run another fc-cast with the new labels; then each non-tree edge $f$ checks, for each distinct label $(e, k)$ occurring in $C_f$, that there are exactly $k$ edges in $C_f$ with label $(e, k)$. The complexity of the verifier is dominated by the fc-cast, and we thereby obtain the following theorem. \begin{thm}\label{thm:lvdcpair} There is a Las Vegas distributed algorithm to compute all cut classes in $O(\ensuremath{\mathcal{D}})$ time and using $O(\min\{E\cdot \ensuremath{\mathcal{D}}, V^2\})$ messages, in expectation. \end{thm} \subsection{Verifier for Cut Vertices and Blocks} For edges $e, f$ in $E(G)$, define $e \sim f$ if either $e=f,$ or $e \neq f$ and there is a cycle that contains both $e$ and $f.$ It is well-known that $\sim$ is an equivalence relation on $E$; its equivalence classes are called the \emph{blocks} of $G$. A vertex is a cut vertex iff it is incident to more than one block. The overall strategy is to try to label the edges according to the blocks, and then check via a generating relation that our labeling is correct. The strategy for this verifier is more involved than for the other two, and a high-level description is as follows. Given two equivalence relations $R$ and $R'$ on the same set, we say that \emph{$R$ refines $R'$} if every equivalence class of $R$ is a subset of some equivalence class of $R'$. Note that $R$ refines $R'$ and $R'$ refines $R$ if and only if $R = R'$. We use the notion of \emph{local blocks}: \begin{defn} The \emph{local-block relation at $v$}, denoted $\sim_v$, is the equivalence relation on $\delta(v)$ obtained by restricting $\sim$ to $\delta(v)$: namely, we write $e \sim_v f$ iff $e, f \in \delta(v)$ and $e \sim f$. Its equivalence classes are the \emph{local blocks at $v$}.\end{defn} An analogue of \prettyref{clm:apa} will show that with high probability, the linear dependencies amongst columns of $M^{[v]}$ correspond to the local blocks at $v$.
We hence compute equivalence relations $\sim'_v$ on $\delta(v)$, for each $v$, with the following properties: \begin{itemize} \item $\sim'_v$ always refines $\sim_v$ \item we can collect the local relations $\sim'_v$ into a global equivalence relation $\sim'$ on $E$ \item $\sim'$ always refines $\sim$ \item with high probability, $\sim'_v = \sim_v$ for all $v$ \item if $\sim'_v = \sim_v$ for all $v$, then $\sim' = \sim$ \end{itemize} Finally, we need to check whether $\sim' = \sim$. To perform this check, we adapt an approach from work of \cite{tarjan-parallel} and \cite{Thur97}, exemplified in the following proposition, which we will prove in \prettyref{sec:zion}. \begin{prop}\label{prop:sim0}\label{prop:aaa} In $O(\ensuremath{\mathcal{D}})$ time and $O(E)$ messages we can compute a relation $\sim_0$ on $E$ so that (1) whenever $e \sim_0 f$, $e$ and $f$ meet at a vertex, and (2) the symmetric reflexive transitive closure of $\sim_0$ is $\sim$. \end{prop} Some logical manipulation shows that $\sim \textrm{ refines } \sim'$ if and only if $$\forall v : (\forall u, w \textrm{ adjacent to } v: \{u, v\} \sim_0 \{v, w\} \Rightarrow \{u, v\} \sim' \{v, w\})$$ and as a result, local checks complete the verification. We now give the details. \subsubsection{Computing $\sim'_v$}\label{sec:iron} What do the local blocks look like? It is not hard to see that the local blocks at $v$ correspond to the connected components of $G \backslash v$, in the sense that $\{u, v\} \sim_v \{w, v\}$ if and only if $u$ and $w$ are connected in $G \backslash v$. It is also straightforward to see that $F \subset \delta(v)$ is an induced edge cut if and only if $F$ is a disjoint union of equivalence classes of $\sim_v$. We take $b = \lceil\Delta + 2 \log_2 V\rceil$ and just as in \prettyref{clm:apa}, with probability $1-O(1/V^2)$, the following ``good" case holds: the minimal sets of linearly dependent columns of $M^{[v]}$ correspond to the parts of $\sim_v$. 
(Notice that $C$ is a minimal set of linearly dependent columns iff $C$'s sum is the zero vector and no proper nonempty subset of $C$ adds to the zero vector.) This leads to a simple idea, but we need to use some finesse in order that the $\sim'_v$ we compute from $M^{[v]}$ always refines $\sim_v$. Our starting point is to compute an arbitrary partition $\pi$ of the columns of $M^{[v]}$ into minimal zero-sum sets (such a partition exists because the sum of all columns is zero). It is possible that such a partition does not refine $\sim_v$; so we need to check an additional property of $\pi$, namely that each pair of parts of $\pi$ has mutually orthogonal span. (If this property does not hold, the verifier rejects and we re-start the Monte Carlo algorithm.) This property ensures that the only zero-sum sets of columns are unions of parts of $\pi$; since each equivalence class of $\sim_v$ is a zero-sum set of columns, each class is then a union of parts of $\pi$, which shows that $\pi$ refines $\sim_v$. (Moreover, this property holds in the ``good" case.) So we obtain $\sim'_v$ from $\pi$ by replacing each column by its index in $\delta(v)$. \subsubsection{Computing $\sim'$ from $\sim'_v$}\label{sec:lion} For the rest of the section we consider the spanning tree $T$ upon which our algorithms operate as fixed; hence when we say ``fundamental cycle of $e$" we mean with respect to $T$. We assume $T$ is rooted at the leader vertex $r$ and we let $p(v)$ denote the parent of $v$ in $T.$ In collecting the local relations into a global relation, it is instructive to consider the interaction between $T$ and the blocks of the graph; Figure \ref{fig:tblock} gives an illustration. It is not hard to argue that the intersection of $T$ with any given block $B$ is a subtree of $T$; we define the \emph{root} $r(B)$ of the block to be the root of this subtree. For example, in Figure \ref{fig:tblock}, $r$ and $u$ are each the root of two blocks, and $w$ is the root of one block.
In general, the blocks for which $v$ is the root correspond to the equivalence classes of $\sim_v$ not containing $\{v, p(v)\}$ (if $v = r$, all equivalence classes of $\sim_v$). \begin{figure}[t] \begin{center} \leavevmode \begin{pspicture}(0,0)(7, 4.2) \pnode(2,4){a}\rput(2,4.4){$r$} \pnode(0,2){b} \pnode(1,2){c} \pnode(3,3){d} \pnode(5,3){e} \pnode(0,0){f} \pnode(3,2){g} \pnode(5,2){h}\rput(4.7,2.2){$u$} \pnode(4,1){i} \pnode(7,1){j} \pnode(5,1){k}\rput(4.7,1.1){$w$} \pnode(7,0){l} \pnode(4.5,0){m} \pnode(5.5,0){n} \psset{linewidth=2.4pt} \ncline{*-*}{a}{b} \ncline{*-*}{a}{c} \ncline{*-*}{a}{d} \ncline{*-*}{a}{e} \ncline{*-*}{c}{f} \ncline{*-*}{d}{g} \ncline{*-*}{e}{h} \ncline{*-*}{g}{i} \ncline{*-*}{h}{j} \ncline{*-*}{h}{k} \ncline{*-*}{j}{l} \ncline{*-*}{k}{m} \ncline{*-*}{k}{n} \psset{linewidth=0.8pt} \ncline{*-*}{b}{c} \ncline{*-*}{b}{f} \ncline{*-*}{a}{g} \ncline{*-*}{h}{i} \ncline{*-*}{h}{l} \ncline{*-*}{m}{n} \psset{linestyle=dashed} \psccurve(-0.2,-0.2)(2,3)(2,4)(1.9,4.2)(-0.2,2) \psccurve(2,3)(2,4)(2.1,4.2)(5.1,3.1)(5,2)(4,1) \psccurve(5,1)(5.2,1.5)(5,2)(4.8,1.5) \psccurve(5,1)(5.7,0)(4.3,0) \psccurve(5,2)(7,1.1)(7,-0.1)(5.9,1) \end{pspicture} \end{center} \caption{The interaction between a spanning tree and the blocks of a graph. Thick lines are tree edges, thin lines are non-tree edges, and the dashed regions indicate the five blocks of the graph.} \label{fig:tblock} \end{figure} We now define $\sim'$. For computational purposes, assign each equivalence class $X$ of $\sim_v$ a number $i_v(X)$, using the numbers $1, 2, \dotsc$ for each $v$. Then assign each block $B$ the label $(r(B), i_{r(B)}(X))$ where the equivalence class $X$ is the intersection of $\delta(r(B))$ with $B$. At a high level, to compute $\sim$ from $\sim_v$, within each block, we broadcast its label starting from the block's root. Now given $\sim'_v$ instead of $\sim_v$, we can mimic this strategy so as to compute a global relation $\sim'$.
We give pseudocode in \prettyref{alg:blabel}; the phrase ``$v$ sets directed label $(v, u)$ to $\ell$" means that $v$ stores $\ell$ as the label of $\{v, u\}$ and notifies $u$ of this fact with a message. \begin{algorithm}[ht] \caption{Given local relations $\sim'_v,$ compute a global relation $\sim'.$}\label{alg:blabel} \begin{algorithmic}[1] \State {\bf at} each vertex $v$, number the equivalence classes of $\sim'_v$ by $1, 2, \dotsc$ \State {\bf at} each vertex $v$, {\bf for} each equivalence class $X$ of $\sim'_v$ not containing $\{v, p(v)\}$, {\bf for} each $\{v, u\} \in X$, set directed label $(v, u)$ to $(v, i_v(X))$ \State {\bf when} vertex $w$ sets directed label $(w, v)$ to $\ell$, {\bf if} the label of $(v, w)$ exists and is not equal to $\ell$ then FAIL, {\bf else if} directed label $(v, w)$ is unassigned, {\bf for} each $\{v, u\} \sim'_v \{v, w\}$, set directed label $(v, u)$ to $\ell$ \State take the edge labels to identify the equivalence classes of $\sim'$ \end{algorithmic} \end{algorithm} Any pair of $\sim'$-wise related edges is connected by a path of edges related pairwise by local $\sim'_v$ relations; since $\sim'_v$ refines $\sim_v$ which is a restriction of $\sim$, we see that $\sim'$ refines $\sim$. When $\sim'_v = \sim_v$ for all $v$, the preceding discussion implies that $\sim' = \sim$. The message complexity of \prettyref{alg:blabel} is $O(E)$. When $\sim'_v = \sim_v$ for all $v$, the time complexity is at most $\ensuremath{\mathcal{D}}$ rounds; if more rounds than this elapse we restart the Las Vegas algorithm. \subsubsection{The Generating Relation $\sim_0$}\label{sec:zion} In order to define $\sim_0$ we need a few preliminaries. Let $pre(v)$ denote the label of $v$ in a preorder traversal of $T$ starting from the root, and for each vertex $v$, let $desc(v)$ denote the number of descendants of $v$.
Thus the set of descendants of $v$ is the set of vertices with preorder labels in $\{pre(v), \dotsc, pre(v)+desc(v)-1\}.$ The \emph{subtree-neighbourhood} of $v$ is defined to be $v$'s descendants, in addition to every other vertex that is adjacent to a descendant of $v$ via a non-tree edge. For each vertex $v$ let the values $low(v)$ and $high(v)$ denote the minimum and maximum preorder label in the subtree-neighbourhood of $v.$ Tarjan \cite{tarjan74} introduced these $low$ and $high$ functions; they have been used in several biconnectivity algorithms \cite{tarjan-parallel,Thur97}. \begin{defn} \label{defn:nice} The relation $\{w, v\}\sim_1\{v,p(v)\}$ holds if and only if $\{w, v\} \not\in T$ and either $pre(w) < pre(v)$ or $pre(w) \geq pre(v)+desc(v)$ (i.e., if $w$ is not a descendant of $v$). The relation $\{v, p(v)\} \sim_2 \{p(v),p(p(v))\}$ holds if and only if either $low(v) < pre(p(v))$ or $high(v) \geq pre(p(v))+desc(p(v))$ (i.e., if the subtree-neighbourhood of $v$ is not contained in the descendants of $p(v)$). Define $\sim_0$ to be the union of $\sim_1$ and $\sim_2$. \end{defn} We illustrate these relations in Figure \ref{fig:sim0}. Earlier work \cite{tarjan-parallel,Thur97} uses a different generating relation for $\sim$; ours is simpler and also has the crucial property that every two edges related by $\sim_0$ have a common vertex. 
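As a sanity check on the definition of $\sim_0$, the following centralized Python sketch (our own illustration; the helper names and the example graph are ours) computes $pre$, $desc$, $low$ and $high$ over a BFS tree, generates $\sim_1$ and $\sim_2$, and takes the closure with a union-find; on a small example it recovers the blocks, as \prettyref{prop:sim0} predicts.

```python
def bfs_tree(n, edges, root=0):
    """BFS spanning tree: parent map plus a root-first vertex order."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    parent, order = {root: None}, [root]
    for v in order:
        for w in adj[v]:
            if w not in parent:
                parent[w] = v; order.append(w)
    return parent, order

def blocks_via_sim0(n, edges, root=0):
    """Generate ~1 and ~2 from pre/desc/low/high and return the classes
    of the closure, which should be the blocks of the graph."""
    parent, order = bfs_tree(n, edges, root)
    children = {v: [] for v in range(n)}
    for v in order[1:]:
        children[parent[v]].append(v)
    pre, stack = {}, [root]                   # preorder labels of T
    while stack:
        v = stack.pop()
        pre[v] = len(pre)
        stack.extend(reversed(children[v]))
    desc = {}                                 # subtree sizes
    for v in reversed(order):
        desc[v] = 1 + sum(desc[c] for c in children[v])
    tree = {frozenset((v, parent[v])) for v in order[1:]}
    nontree = [frozenset(e) for e in edges if frozenset(e) not in tree]
    nt_adj = {v: [] for v in range(n)}
    for e in nontree:
        a, b = tuple(e)
        nt_adj[a].append(b); nt_adj[b].append(a)
    low = {v: pre[v] for v in range(n)}       # min/max preorder label in the
    high = {v: pre[v] for v in range(n)}      # subtree-neighbourhood of v
    for v in reversed(order):                 # children are folded in first
        for w in nt_adj[v]:
            low[v] = min(low[v], pre[w]); high[v] = max(high[v], pre[w])
        if parent[v] is not None:
            p = parent[v]
            low[p] = min(low[p], low[v]); high[p] = max(high[p], high[v])
    uf = {e: e for e in map(frozenset, edges)}  # union-find over edges
    def find(x):
        while uf[x] != x:
            uf[x] = uf[uf[x]]; x = uf[x]
        return x
    def union(x, y):
        uf[find(x)] = find(y)
    def is_desc(w, v):                        # is w a descendant of v?
        return pre[v] <= pre[w] < pre[v] + desc[v]
    for e in nontree:                         # the relation ~1
        a, b = tuple(e)
        for v, w in ((a, b), (b, a)):
            if parent[v] is not None and not is_desc(w, v):
                union(e, frozenset((v, parent[v])))
    for v in order:                           # the relation ~2
        p = parent[v]
        if p is None or parent[p] is None:
            continue
        if low[v] < pre[p] or high[v] >= pre[p] + desc[p]:
            union(frozenset((v, p)), frozenset((p, parent[p])))
    classes = {}
    for e in map(frozenset, edges):
        classes.setdefault(find(e), set()).add(e)
    return {frozenset(B) for B in classes.values()}

# two triangles joined by the bridge (2, 3): the blocks are the two
# triangles and the bridge by itself
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (5, 3)]
blocks = blocks_via_sim0(6, edges)
assert blocks == {frozenset({frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))}),
                  frozenset({frozenset((2, 3))}),
                  frozenset({frozenset((3, 4)), frozenset((4, 5)), frozenset((3, 5))})}
```

Here $\sim_1$ merges each non-tree edge with the parent edge of its non-ancestral endpoint, and $\sim_2$ merges consecutive tree edges whose lower subtree-neighbourhood escapes the parent's subtree; the closure yields exactly the three blocks.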
\begin{figure}[t] \begin{center} \leavevmode \begin{pspicture}(0,0.4)(10,3) \psset{arrowsize=4pt} \pnode(2,2.5){lpv}\uput[90](2,2.5){$p(v)$} \pnode(2,1.5){lv}\uput[135](2,1.5){$v$} \pnode(3,1.5){lw}\uput[90](3,1.5){$w$} \pspolygon(2,1.5)(2.5,0.5)(1.5,0.5) \ncline[linewidth=2.4pt]{*-*}{lpv}{lv}\mput{\rnode{le}{}} \ncline[linewidth=0.8pt]{*-*}{lv}{lw}\mput{\rnode{lf}{}} \pnode(3,2.5){lh}\rput(3,2.5){$\sim_1$} \ncline[linestyle=dotted]{->}{lh}{le} \ncline[linestyle=dotted]{->}{lh}{lf} \pnode(7,3){rppv}\uput[0](7,3){$p(p(v))$} \pnode(7,2){rpv}\uput[0](7,2){$p(v)$} \pnode(7,1){rv}\uput[0](7,1){$v$} \pnode(7,0.7){rs} \pnode(5.5,0.7){rt} \pspolygon(7,2)(6,0)(8,0) \pspolygon(7,1)(7.3,0.4)(6.7,0.4) \ncline[linewidth=2.4pt]{*-*}{rppv}{rpv}\mput{\rnode{re}{}} \ncline[linewidth=2.4pt]{*-*}{rpv}{rv}\mput{\rnode{rf}{}} \ncline[linewidth=0.8pt]{*-*}{rs}{rt} \pnode(6,2){rh}\rput(6,2){$\sim_2$} \ncline[linestyle=dotted]{->}{rh}{re} \ncline[linestyle=dotted]{->}{rh}{rf} \end{pspicture} \end{center} \caption{Schematic illustrations of the relations $\sim_1$ (left) and $\sim_2$ (right). Thick edges are tree edges, thin edges are non-tree edges, and triangles depict sets of descendants. Dotted arrows indicate pairs of edges related by $\sim_i$.} \label{fig:sim0} \end{figure} From now on, given a relation $R$, let $R^*$ denote the equivalence relation obtained by taking the reflexive symmetric transitive closure of $R$. We now prove the key property of $\sim_0$. \begin{proof}[Proof of $\sim^*_0 = \sim$ (\prettyref{prop:sim0})] First, we argue that $\sim_0$ refines $\sim$; for this it suffices to show that when $e \sim_i f$ for $i \in \{1, 2\}$, $e$ and $f$ lie in the same block. If $\{w, v\} \sim_1 \{v,p(v)\}$, the fundamental cycle of $\{v, w\}$ contains $\{v, p(v)\}$, so $\{v, w\} \sim \{v, p(v)\}$ as needed. 
If $\{v, p(v)\} \sim_2 \{p(v), p(p(v))\}$ then there is an edge from a descendant of $v$ to a non-descendant of $p(v)$; the fundamental cycle of this edge contains both $\{v, p(v)\}$ and $\{p(v), p(p(v))\}$, as needed. Second, we must show that $\sim$ refines $\sim^*_0$. Define $e \sim_{FC} f$ if $e$ and $f$ lie on a common fundamental cycle. Tarjan \& Vishkin \cite[Thm.~1]{tarjan-parallel} show that $\sim_{FC}^* = \sim.$ So it suffices to show that when $e \sim_{FC} f$, $e \sim_0^* f$ holds. In other words, we need to show that each fundamental cycle lies in a single equivalence class of $\sim_0^*$. We provide a pictorial argument of this fact in Figure \ref{fig:zz}.\end{proof} \begin{figure}[t] \begin{center} \leavevmode \begin{pspicture}(0,0)(10,3) \psset{arrowsize=4pt} \pnode(3,3){a} \pnode(2,2){b} \pnode(1,1){c} \pnode(0,0){d} \ncline[linewidth=2pt]{*-*}{a}{b}\mput{\rnode{e}{}} \ncline[linewidth=2pt]{*-*}{b}{c}\mput{\rnode{f}{}} \ncline[linewidth=2pt]{*-*}{c}{d}\mput{\rnode{g}{}} \nccurve[angleA=-45,angleB=0]{a}{d}\lput(0.5){\rnode{h}{}}\Aput{$e$} \nccurve[linestyle=dotted,angleA=135,angleB=90]{<->}{e}{f}\mput*{$\sim_2$} \nccurve[linestyle=dotted,angleA=135,angleB=90]{<->}{f}{g}\mput*{$\sim_2$} \nccurve[linestyle=dotted,angleA=0,angleB=180]{<->}{g}{h}\mput*{$\sim_1$} \ncline[linewidth=2pt]{*-*}{a}{b}\mput{\rnode{e}{}} \ncline[linewidth=2pt]{*-*}{b}{c}\mput{\rnode{f}{}} \ncline[linewidth=2pt]{*-*}{c}{d}\mput{\rnode{g}{}} \nccurve[angleA=-45,angleB=0]{a}{d}\lput(0.5){\rnode{h}{}}\Aput{$e$} \pnode(8,3){A} \pnode(7,2){B} \pnode(6,1){C} \pnode(5,0){D} \pnode(9,2){E} \pnode(10,1){F} \ncline[linewidth=2pt]{*-*}{A}{B}\mput{\rnode{G}{}} \ncline[linewidth=2pt]{*-*}{B}{C}\mput{\rnode{H}{}} \ncline[linewidth=2pt]{*-*}{C}{D}\mput{\rnode{I}{}} \ncline[linewidth=2pt]{*-*}{A}{E}\mput{\rnode{L}{}} \ncline[linewidth=2pt]{*-*}{E}{F}\mput{\rnode{K}{}} \nccurve[angleA=0,angleB=210]{D}{F}\lput(0.5){\rnode{J}{}}\Bput{$e$}
\nccurve[linestyle=dotted,angleA=135,angleB=90]{<->}{G}{H}\mput*{$\sim_2$} \nccurve[linestyle=dotted,angleA=135,angleB=90]{<->}{H}{I}\mput*{$\sim_2$} \nccurve[linestyle=dotted,angleA=45,angleB=45]{<->}{K}{L}\mput*{$\sim_2$} \nccurve[linestyle=dotted,angleA=-15,angleB=135]{<->}{I}{J}\mput*{$\sim_1$} \nccurve[linestyle=dotted,angleA=60,angleB=210]{<->}{J}{K}\mput*{$\sim_1$} \ncline[linewidth=2pt]{*-*}{A}{B}\mput{\rnode{G}{}} \ncline[linewidth=2pt]{*-*}{B}{C}\mput{\rnode{H}{}} \ncline[linewidth=2pt]{*-*}{C}{D}\mput{\rnode{I}{}} \ncline[linewidth=2pt]{*-*}{A}{E}\mput{\rnode{L}{}} \ncline[linewidth=2pt]{*-*}{E}{F}\mput{\rnode{K}{}} \end{pspicture} \end{center} \caption{The fundamental cycle $C_e$ in the proof of \prettyref{prop:aaa}. Edges of $T$ are thick lines and $e$ is labeled. The left diagram shows the case that one of $e$'s endpoints is a $T$-descendant of the other, while the right diagram shows the case that $e$'s endpoints are unrelated. Dotted arrows indicate pairs of edges related by $\sim_i$.} \label{fig:zz} \end{figure} We now recap the distributed implementation of our cut vertex verifier. \begin{thm}\label{thm:lvdcv} There is a Las Vegas distributed algorithm to compute all cut vertices in $O(\ensuremath{\mathcal{D}} + \Delta / \log V)$ time and using $O(E (1+ \Delta / \log V ))$ messages, in expectation. \end{thm} \begin{proof} We compute a random $b$-bit circulation for $b = \lceil \Delta + 2\log_2 V \rceil$ and use the resulting values to compute local relations $\sim'_v$. (As mentioned in \prettyref{sec:lion} the verifier may reject at this stage.) We then combine this information into a global labeling $\sim'$ of edges (and again, the verifier may reject at this stage). There is a straightforward distributed protocol to compute $pre(v), desc(v), low(v)$ and $high(v)$ at each $v$ in $O(h(T))=O(\ensuremath{\mathcal{D}})$ time and using $O(E)$ messages; see e.g.\ \cite{dp-thesis,Thur97}. 
After this, each vertex sends these four values to all of its neighbours, with communication taking place along all edges in parallel; this takes $O(1)$ time and $O(E)$ messages. At this point, for each pair $e, f$ of edges that are related by $\sim_0$, their common endpoint $v$ checks that $e \sim' f$ holds. If there is a violation at any vertex, the verifier rejects, and if not, the verifier accepts. The labels $\sim'$ give the blocks; vertex $v$ is a cut vertex iff at least two blocks meet at $v$. Computing $\phi$ dominates the time and message complexity; each other step takes $O(\ensuremath{\mathcal{D}})$ time and $O(E)$ messages. Noting that the verifier accepts each time with probability at least $1-1/V$, \prettyref{thm:lvdcv} follows. \end{proof} \section{Lower Bounds on Distributed Time}\label{sec:lowerbounds} In this section we give precise assumptions under which our distributed cut edge and cut pair algorithms achieve universal optimality. Let $r$ denote the unique leader vertex in the graph. A vertex is \emph{quiescent} in a given round if it does not send any messages or modify its local memory in that round. We adopt the following terminology from \cite[\S 3.4 \& Ch.\ 24]{p2000}. \begin{defn} A distributed algorithm has \emph{termination detection} if $r$ has a local boolean variable {\tt done}, initialized to \textsc{false}, so that {\tt done} is set to \textsc{true} exactly once, in the last round of the algorithm. A distributed algorithm has \emph{a single initiator} if, except for $r$, every vertex is quiescent until it receives a message. \end{defn} The \emph{state} of a vertex means the contents of its memory. We omit the straightforward inductive proof of the following standard proposition. \begin{prop}\label{prop:speed} Let two graphs both contain a vertex $v$ and have the same graph topology and node IDs in the distance-$d$ neighbourhood of $v$. 
If the same deterministic distributed algorithm is run on both graphs, the state of $v$ is the same in both instances for the first $d-1$ rounds. For a randomized algorithm, the distribution over states of $v$ is the same. \end{prop} For a graph $G$, a vertex $v \in V(G)$ and an integer $\ell \geq 3$, we now define graphs $G_c$ and $G_p$ that implicitly depend on $\ell$ and $v$. Specifically, let $G_c$ denote the graph obtained from $G$ by attaching a $\ell$-edge cycle to $G$ at $v$, and let $G_p$ denote the graph obtained from $G$ by attaching a $(\ell-1)$-edge path to $G$ at $v$, as shown in Figure \ref{fig:lb}. Give corresponding vertices $v_i$ in the two graphs the same ID. \begin{figure}[t] \begin{center} \leavevmode \begin{pspicture}(0,0)(6,2.3) \psellipse[fillstyle=vlines](1,1)(1,1) \rput*(1,1){$G$} \psline[arrows=*-*](2,1)(2.3,1.7) \psline[arrows=*-*](2.3,1.7)(3,2) \psline[arrows=*-*](3,2)(3.7,1.7) \rput*(4,1.1){$\vdots$} \psline[arrows=*-*](2,1)(2.3,0.3) \psline[arrows=*-*](2.3,0.3)(3,0) \psline[arrows=*-*](3,0)(3.7,0.3) \psline[arrows=-](3.7,0.3)(3.85,0.65) \psline[arrows=-](3.7,1.7)(3.85,1.35) \uput[0](2,1){$v$} \uput[90](2.3,1.7){$v_1$} \uput[90](3,2){$v_2$} \uput[90](3.7,1.7){$v_3$} \uput[-90](2.3,0.3){$v_{\ell-1}$} \uput[-90](3,0){$v_{\ell-2}$} \uput[-90](3.7,0.3){$v_{\ell-3}$} \psline[arrows=*-*](0,1)(0,1) \uput[180](0,1){$r$} \end{pspicture} \begin{pspicture}(0,0)(4,2) \psellipse[fillstyle=vlines](1,1)(1,1) \rput*(1,1){$G$} \psline[arrows=*-*](2,1)(2.3,1.7) \psline[arrows=*-*](2.3,1.7)(3,2) \psline[arrows=*-*](3,2)(3.7,1.7) \rput*(4,1.1){$\vdots$} \psline[arrows=*-*](2.3,0.3)(3,0) \psline[arrows=*-*](3,0)(3.7,0.3) \psline[arrows=-](3.7,0.3)(3.85,0.65) \psline[arrows=-](3.7,1.7)(3.85,1.35) \uput[0](2,1){$v$} \uput[90](2.3,1.7){$v_1$} \uput[90](3,2){$v_2$} \uput[90](3.7,1.7){$v_3$} \uput[-90](2.3,0.3){$v_{\ell-1}$} \uput[-90](3,0){$v_{\ell-2}$} \uput[-90](3.7,0.3){$v_{\ell-3}$} \psline[arrows=*-*](0,1)(0,1) \uput[180](0,1){$r$} \end{pspicture} 
\end{center} \caption{Left: the graph $G_c$. Right: the graph $G_p$.} \label{fig:lb} \end{figure} \begin{thm}\label{thm:tdlower} Any deterministic distributed algorithm for finding all cut edges that has termination detection takes at least $\ensuremath{\mathcal{D}}/2$ rounds on every graph. \end{thm} \begin{proof} Consider for the sake of contradiction a graph $G$ upon which the algorithm terminates in $t < \ensuremath{\mathcal{D}}/2$ rounds. Let $v$ be any vertex at distance at least $\ensuremath{\mathcal{D}}/2$ from $r$, and let $\ell = 2t + 2$. By \prettyref{prop:speed}, the algorithm also sets ${\tt done} := \textsc{true}$ at $r$ on $G_p$ and $G_c$ in $t$ rounds, so the algorithm terminates at that time in both instances. Now consider $v_{\ell/2}$; using \prettyref{prop:speed} again, we see that its state is the same at termination in both instances. Since the edges incident to $v_{\ell/2}$ are cut edges in $G_p$ but not in $G_c$, they must have been incorrectly classified at $v_{\ell/2}$ in at least one instance. \end{proof} If we assume that the algorithm has a single initiator instead of assuming termination detection, a similar argument works. We use the following lemma, whose easy inductive proof is omitted. \begin{lmma}\label{lmma:speed2} In an algorithm with a single initiator, every vertex at distance $t$ from $r$ is quiescent for the first $t$ rounds. \end{lmma} \begin{thm}\label{thm:sing-lb} Any deterministic distributed algorithm for finding all cut edges that has a single initiator takes at least $\ensuremath{\mathcal{D}}/2$ rounds on every graph. \end{thm} \begin{proof} Suppose the algorithm terminates in $t<\ensuremath{\mathcal{D}}/2$ rounds on a graph $G$. Let $v$ be any vertex at distance at least $\ensuremath{\mathcal{D}}/2$ from $r$. Then by \prettyref{prop:speed} the algorithm also terminates in $t$ rounds on $G_c$ and $G_p$ (taking $\ell = 3$).
By \prettyref{lmma:speed2} vertex $v_1$ is quiescent during the entire execution of the algorithm on these new graphs; hence the incident edges cannot be correctly classified in both instances. \end{proof} For randomized algorithms we have the following lower bound. \begin{thm} Any randomized distributed algorithm with error probability less than $1/4$ for finding all cut edges takes at least $\ensuremath{\mathcal{D}}/4$ rounds in expectation, if it has a single initiator or termination detection. \end{thm} \begin{proof} We use the same setup as in the proofs of Theorems \ref{thm:tdlower} and \ref{thm:sing-lb}. Markov's inequality shows that when running the algorithm on $G$, the time of termination $t$ satisfies $\Pr[t \leq \ensuremath{\mathcal{D}}/2] \geq 1/2.$ The distribution on the state of the crucial vertex --- $v_{\ell/2}$ for termination detection, $v_1$ for a single initiator --- is the same on both $G_c$ and $G_p$ at time $\ensuremath{\mathcal{D}}/2$. So of the probability mass of at least $1/2$ on executions terminating before time $\ensuremath{\mathcal{D}}/2$, a mass of at least $1/4$ must either incorrectly classify the new edges of $G_c$ as cut edges, or incorrectly classify the new edges of $G_p$ as non-cut edges. \end{proof} The same lower bounds hold for finding 2-edge-connected components and cut pairs, since the new edges of $G_c$ are in cut pairs, while the new edges of $G_p$ are not. It is straightforward to verify that our distributed algorithms can be implemented so as to have a single initiator and termination detection; their universal optimality then follows. If we do not require a single initiator or termination detection, and if we change our input model to allow additional parameters of $G$ to be initially known at each node, the \emph{neighbourhood cover} techniques of \cite{Elkin06} can be combined with our techniques to yield even faster algorithms for certain graph classes. Elkin used these techniques to obtain distributed MST algorithms faster than $O(\ensuremath{\mathcal{D}})$ on some graphs.
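For concreteness, the gadget graphs $G_c$ and $G_p$ used throughout these proofs can be built mechanically. The following Python sketch (adjacency-set representation; the helper name and vertex naming scheme are hypothetical, not from the paper) constructs both from a base graph:

```python
def attach_gadget(adj, v, ell, closed):
    """Return a copy of adj (a {vertex: set_of_neighbours} dict) with a path
    v, w_1, ..., w_{ell-1} of fresh vertices attached at v.  If closed, the
    extra edge {w_{ell-1}, v} is added, turning the path into an ell-edge
    cycle (giving G_c); otherwise the result is G_p, which attaches an
    (ell-1)-edge path."""
    new = {u: set(nbrs) for u, nbrs in adj.items()}
    chain = [v] + [("w", i) for i in range(1, ell)]       # fresh vertices
    for a, b in zip(chain, chain[1:]):
        new.setdefault(a, set()).add(b)
        new.setdefault(b, set()).add(a)
    if closed:                                            # close the cycle
        new[chain[-1]].add(v)
        new[v].add(chain[-1])
    return new

G = {"r": {"v"}, "v": {"r"}}                              # tiny base graph
G_c = attach_gadget(G, "v", 4, closed=True)               # 4-edge cycle at v
G_p = attach_gadget(G, "v", 4, closed=False)              # 3-edge path at v
```

The two results have identical vertex sets and agree everywhere except on the single edge closing the cycle, which is what makes them indistinguishable to $r$ for the first $\ell/2$ rounds.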
\section{Parallel Cut Pairs on the EREW PRAM} \label{sec:parallel} In this section we give a parallel cut pair algorithm of time complexity $O(\log V)$ for the EREW PRAM. Computing the OR of $n$ bits has a lower bound of $\Omega(\log n)$ time in this model; from this an easy combinatorial reduction yields an $\Omega(\log V)$ time lower bound for finding all cut pairs of a graph, so our algorithm is time-optimal. As in \prettyref{sec:cutpairimpl} we assume without loss of generality in this section that $G$ is 2-edge-connected. We will require several common subroutines. First, we need a Las Vegas randomized spanning forest subroutine taking $O(V+E)$ work and space and $O(\log V)$ time, due to \cite{HZ01}. An \emph{ear decomposition} can be computed in the same randomized complexity using the approaches in \cite{MSV86,MR92} and plugging in the result of \cite{HZ01} for the spanning forest subroutine. \emph{Expression evaluation} of an $n$-node tree can be accomplished deterministically in $O(n)$ work and space and $O(\log n)$ time (e.g.\ see the book \cite[Ch.\ 3]{JaJa92}). We let $T(n), S(n), W(n)$ denote the time, space, and work complexity of sorting $n$ numbers of length $O(\log n)$ bits; we give references to the best known algorithms for this problem in Section \ref{sec:contributions} (they are deterministic). We first give our Monte Carlo cut pair algorithm. \begin{thm} There is a parallel algorithm to compute all cut pairs with probability at least $1-1/V$ in $O(\log V + T(E))$ time, $O(E+S(E))$ space, and $O(E+W(E))$ work. \end{thm} \begin{proof} We implement \prettyref{alg:pair} on the PRAM.
First, we claim we can implement the subroutine \textsc{Rand-$b$-Bit-Circ}\ in parallel to generate a random $O(\log V)$-bit circulation in logarithmic time and linear work; the completion steps (\prettyref{alg:complete}) are accomplished via a call to expression evaluation in which we compute the expression $\phi(e) := \bigoplus_{f \in \delta(v) \backslash e} \phi(f)$ for each tree edge $e = \{v, p(v)\}$. We implement \prettyref{line:val-loop} of \prettyref{alg:pair} via a sort. \end{proof} \subsection{Las Vegas Cut Pair Algorithm} The verifier for our parallel cut pair algorithm works by attempting to construct the \emph{2-cactus} of $G$, which acts as a certificate for all of the cut pairs. Our terminology is derived from a more general sort of cactus, originally due to \cite{DKL76}, that acts as a certificate for all minimum edge cuts. Say that $u \equiv v$ in $G$ if the edge-connectivity between $u$ and $v$ is at least 3; it is easy to show (e.g.\ using the max-flow min-cut theorem) that $\equiv$ is an equivalence relation. We now define how to \emph{contract\footnote{When we ``contract," we may identify \emph{non-adjacent} vertices; this contrasts with the more common meaning of ``contract" in the context of graph minors.} a graph by an equivalence relation}. Contraction may introduce parallel edges and/or loops; for this reason, in the rest of the paper, when we say a graph, we mean a \emph{multigraph}, which may have parallel edges and/or loops. Given a graph $G$ and an equivalence relation $R$ on the vertices of $G$, the \emph{contraction} denoted $G/R$ is another (multi)graph. For each equivalence class $C$ of $R$, $G/R$ has a vertex labelled $C$. For each edge $e = \{u, v\}$ of $G$, $G/R$ has an edge from the vertex labelled by the equivalence class of $u$ to the vertex labelled by the equivalence class of $v$.
Since the edges of $G$ correspond bijectively to the edges of $G/R$, we will speak of the graphs as having the same edge set (alternatively, one may think of each edge of $G$ having a distinct label which is inherited by the corresponding edge of $G/R$). \begin{defn} The \emph{2-cactus} $\mathsf{Ca}(G)$ of $G$ is $G/\!\equiv$. \end{defn} \ignore{\begin{figure}[t] \begin{center} \leavevmode \begin{pspicture}(0,0)(8, 2.2) \pnode(0,0){i}\uput[225](0,0){$i$} \pnode(0,1){e}\uput[180](0,1){$e$} \pnode(0,2){a}\uput[135](0,2){$a$} \pnode(1,0){j}\uput[-90](1,0){$j$} \pnode(2,-0.5){k}\uput[0](2,-0.5){$k$} \pnode(2,0.5){h}\uput[0](2,0.5){$h$} \pnode(1,1){f}\uput[135](1,1){$f$} \pnode(2,1){g}\uput[135](2,1){$g$} \pnode(2.5,1.5){d}\uput[0](2.5,1.5){$d$} \pnode(1,2){b}\uput[90](1,2){$b$} \pnode(2,2){c}\uput[90](2,2){$c$} \ncline{*-*}{i}{e} \ncline{*-*}{i}{j} \ncline{*-*}{j}{h} \ncline{*-*}{j}{k} \ncline{*-*}{k}{h} \ncline{*-*}{e}{f} \ncline{*-*}{e}{a} \ncline{*-*}{j}{f} \ncline{*-*}{f}{g} \ncline{*-*}{g}{d} \ncline{*-*}{g}{c} \ncline{*-*}{c}{d} \ncline{*-*}{b}{a} \ncline{*-*}{b}{f} \ncline{*-*}{b}{c} \pnode(6,0){i2}\uput[225](6,0){$i$} \pnode(7,0.5){j2}\uput[-90](7,0.5){$j$} \pnode(8,0){k2}\uput[-45](8,0){$k$} \pnode(8,1){h2}\uput[45](8,1){$h$} \pnode(6,1){bef}\uput[225](6,1){$b,e,f$} \pnode(6,2){a2}\uput[135](6,2){$a$} \pnode(7,2){cg}\uput[90](7,2){$c,g$} \pnode(8,2){d2}\uput[135](8,2){$d$} \ncline{*-*}{i2}{j2} \ncline{*-*}{i2}{bef} \ncline{*-*}{j2}{bef} \ncline{*-*}{j2}{k2} \ncline{*-*}{h2}{k2} \ncline{*-*}{j2}{h2} \ncline{*-*}{cg}{cg} \ncline{*-*}{a2}{a2} \ncline{*-*}{d2}{d2} \nccurve[angleA=15,angleB=165]{cg}{d2} \nccurve[angleA=-15,angleB=195]{cg}{d2} \nccurve[angleA=30,angleB=240]{bef}{cg} \nccurve[angleA=60,angleB=210]{bef}{cg} \nccurve[angleA=105,angleB=-105]{bef}{a2} \nccurve[angleA=75,angleB=-75]{bef}{a2} \psccurve(6,1)(5.7,1.03)(5.7,0.97)(6,1) \psccurve(6,1)(6.3,1.03)(6.3,0.97)(6,1) \psccurve(7,2)(6.97,1.7)(7.03,1.7)(7,2) \end{pspicture} \end{center} \caption{A graph 
(left) and its 2-cactus (right).} \label{fig:2cactus} \end{figure}} \newcommand{\pnodeput}[2]{\pnode(#1){#2}\ncline[arrows=*-*]{#2}{#2}} \begin{figure}[t] \begin{center} \leavevmode \begin{pspicture}(-0.4,-0.4)(5.2,5.2) \psset{unit=0.8cm} \pnodeput{0,0}{a} \cnodeput(2,0){b}{4} \pnodeput{6,0}{c} \pnodeput{3,1}{d} \cnodeput(5,1){e}{4} \pnodeput{0,2}{f} \cnodeput(1,2){g}{1} \cnodeput(3,2){h}{3} \cnodeput(4,2){i}{3} \pnodeput{6,2}{j} \cnodeput(5,3){k}{2} \cnodeput(1,4){l}{1} \pnodeput{2,4}{m} \cnodeput(4,4){n}{2} \cnodeput(3,5){p}{2} \pnodeput{4,5}{q} \cnodeput(2,6){s}{1} \pnodeput{3,6}{t} \cnodeput(4,6){u}{2} \pnodeput{5,6}{v} \pnodeput{5,4.5}{o} \psset{linewidth=2pt} {\psset{linecolor=green} \ncline{a}{b \ncline{a}{h \ncline{e}{i } {\psset{linecolor=blue} \ncline{b}{d \ncline{e}{d } {\psset{linecolor=red} \ncline{b}{c \ncline{e}{c } \ncline{h}{i} {\psset{linecolor=cyan} \ncline{h}{g \ncline{i}{j \ncline{j}{k \ncline{s}{p } {\psset{linecolor=lightgray} \ncline{f}{g \ncline{l}{f } \ncline{l}{g} {\psset{linecolor=magenta} \ncline{l}{m \ncline{m}{s } \ncline{l}{s} \ncline{p}{n} \ncline{n}{k} {\psset{linecolor=colcite} \ncline{p}{t \ncline{t}{u } {\psset{linecolor=yellow} \ncline{n}{q \ncline{q}{u } {\psset{linecolor=brown} \ncline{k}{o \ncline{o}{v \ncline{v}{u } \psset{linewidth=1pt} \pnodeput{0,0}{za} \cnodeput*(2,0){zb}{4} \pnodeput{6,0}{zc} \pnodeput{3,1}{zd} \cnodeput*(5,1){ze}{4} \pnodeput{0,2}{zf} \cnodeput*(1,2){zg}{1} \cnodeput*(3,2){zh}{3} \cnodeput*(4,2){zi}{3} \pnodeput{6,2}{zj} \cnodeput*(5,3){zk}{2} \cnodeput*(1,4){zl}{1} \pnodeput{2,4}{zm} \cnodeput*(4,4){zn}{2} \cnodeput*(3,5){zp}{2} \pnodeput{4,5}{zq} \cnodeput*(2,6){zs}{1} \pnodeput{3,6}{zt} \cnodeput*(4,6){zu}{2} \pnodeput{5,6}{zv} \pnodeput{5,4.5}{zo} \end{pspicture} \begin{pspicture}(-2.4,-0.4)(4,5.2) \psset{unit=0.8cm} \pnodeput{0,0}{a} \cnodeput(2,0){b}{4} \pnodeput{3.5,0}{c} \pnodeput{3,1}{d} \cnodeput(0,2){e}{3} \pnodeput{2,2}{f} \cnodeput(0,4){g}{1} \pnodeput{-1.5,4}{h} \pnodeput{0,5.5}{i} 
\cnodeput(2,4){j}{2} \pnodeput{2,5.5}{k} \pnodeput{3,5}{l} \pnodeput{3.5,4}{m} \pnodeput{3,3}{n} \psset{linewidth=2pt} {\psset{linecolor=green} \ncline{a}{b \ncline{a}{e \ncline{e}{b } {\psset{linecolor=blue} \nccurve[angleA=-150,angleB=60]{d}{b \nccurve[angleA=-120,angleB=30]{d}{b } {\psset{linecolor=red} \nccurve[angleA=15,angleB=165]{b}{c} \nccurve[angleA=-15,angleB=-165]{b}{c} } {\psset{linecolor=cyan} \ncline{e}{f \ncline{j}{f \ncline{g}{j \ncline{e}{g } {\psset{linecolor=lightgray} \nccurve[angleA=15,angleB=165]{h}{g} \nccurve[angleA=-15,angleB=-165]{h}{g} } {\psset{linecolor=magenta} \nccurve[angleA=75,angleB=-75]{g}{i} \nccurve[angleA=105,angleB=-105]{g}{i} } {\psset{linecolor=colcite} \nccurve[angleA=75,angleB=-75]{j}{k} \nccurve[angleA=105,angleB=-105]{j}{k} } {\psset{linecolor=yellow} \nccurve[angleA=30,angleB=-120]{j}{l} \nccurve[angleA=60,angleB=-150]{j}{l} } {\psset{linecolor=brown} \ncline{j}{m \ncline{n}{m \ncline{j}{n } \psccurve(0.2,4.2)(0.5,4.4)(0.4,4.5)(0.2,4.2) \psccurve(0.2,2.2)(0.5,2.4)(0.4,2.5)(0.2,2.2) \psccurve(1.8,4.2)(1.5,4.4)(1.6,4.5)(1.8,4.2) \psccurve(1.8,3.8)(1.5,3.6)(1.6,3.5)(1.8,3.8) \psccurve(-0.2,3.8)(-0.5,3.6)(-0.4,3.5)(-0.2,3.8) \psset{linewidth=1pt} \pnodeput{0,0}{za} \cnodeput*(2,0){zb}{4} \pnodeput{3.5,0}{zc} \pnodeput{3,1}{zd} \cnodeput*(0,2){ze}{3} \pnodeput{2,2}{zf} \cnodeput*(0,4){zg}{1} \pnodeput{-1.5,4}{zh} \pnodeput{0,5.5}{zi} \cnodeput*(2,4){zj}{2} \pnodeput{2,5.5}{zk} \pnodeput{3,5}{zl} \pnodeput{3.5,4}{zm} \pnodeput{3,3}{zn} \end{pspicture} \end{center} \caption{Left: a graph. For ease of visualization, every node is labelled by its equivalence class of $\equiv$, except for nodes in singleton classes; every edge is coloured according to its cut class, except for edges in no cut pair, which are black. Right: the 2-cactus $\mathsf{Ca}(G)$ defined equal to $G/\!\equiv$.} \label{fig:2cactus} \end{figure} An example of a 2-cactus is given in Figure \ref{fig:2cactus}. 
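The contraction operation is simple to realize when edges carry distinct labels, as the text allows. The following Python sketch (illustrative only, with hypothetical names) contracts a labelled multigraph by an equivalence relation, keeping loops and parallel edges:

```python
def contract(edges, classes):
    """Contract a labelled multigraph by an equivalence relation.
    edges:   {label: (u, v)}      -- each edge has a distinct label
    classes: {vertex: class_id}   -- the equivalence relation R
    Returns {label: (class_u, class_v)}; loops and parallel edges survive,
    and the edge labels of G are inherited by G/R as in the text."""
    return {lab: (classes[u], classes[v]) for lab, (u, v) in edges.items()}

# Toy example: a 4-cycle a-b-c-d in which b and d are identified.
edges = {"e1": ("a", "b"), "e2": ("b", "c"), "e3": ("c", "d"), "e4": ("d", "a")}
classes = {"a": 0, "b": 1, "c": 2, "d": 1}    # b ~ d, everything else alone
H = contract(edges, classes)
# H has two parallel edges between classes 0 and 1, and two between 1 and 2.
```

Because the labels are preserved, the edge sets of $G$ and $G/R$ can be identified, exactly as the preceding paragraph assumes.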
\begin{prop}\label{prop:cpp} (a) For any equivalence relation $R$ on $V$, every cut pair of $G/R$ is a cut pair of $G$. (b) Every cut pair of $G$ is a cut pair of $\mathsf{Ca}(G)$. \end{prop} \begin{proof} Both results use a common observation. As before let $\delta(Z)$ denote the set of edges with exactly one endpoint in $Z$. Let $X$ be a set of equivalence classes of $R$ (which we may view as a vertex set in $G/R$) and let $\cup X$ be the union of those classes (which is a vertex set in $G$). Then from the definition of contraction it is easy to see the following: \begin{equation}\label{eq:footz} \textrm{The edge set $\delta(\cup X)$ of $G$ is the same as the edge set $\delta(X)$ of $G/R$.}\tag{\ddag}\end{equation} To prove (a), let $\delta(X) = \{e, f\}$ be a cut pair of $G/R$, where $X$ is a set of equivalence classes of $R$ (vertices of $G/R$). Let $\cup X$ be the union of these classes. By \eqref{eq:footz}, in $G$ the edge set $\delta(\cup X)$ is precisely $\{e, f\}$ giving the needed fact that $\{e, f\}$ is a cut pair of $G$. To prove (b), let $\delta(S) = \{e, f\}$ be a cut pair of $G$. By the (weak) max-flow min-cut theorem $s \not\equiv t$ holds for each $s \in S, t \not\in S$. So for any $s \in S$ and for any $t \equiv s$ we have $t \in S$, i.e.~$S$ is a union of some equivalence classes of $\equiv$. Call this set of classes $X$ (so in the earlier notation, $S = \cup X$). By \eqref{eq:footz}, in $G / \!\equiv$ (which is $\mathsf{Ca}(G)$) we have $\delta(X) = \{e, f\}$, so $\{e, f\}$ is a cut pair of $\mathsf{Ca}(G)$ as needed. \end{proof} Now we recall the earlier convention (from \prettyref{sec:randcirc}) of using $\phi$ to denote a (random) $b$-bit binary circulation, wherein for each $i$ the sets $\{e \mid \phi_i(e)=1\}$ are independent binary circulations selected uniformly at random. 
Recall also \prettyref{cor:seppair} which (together with the assumption that $G$ is 2-edge-connected) says that $\phi(e)=\phi(f)$ always holds when $\{e, f\}$ is a cut pair, and holds with probability $1/2^b$ otherwise. Define an \emph{illusory cut pair} to be a pair of edges $\{e, f\}$ that has $\phi(e)=\phi(f)$ but is not a cut pair. Our strategy for the parallel algorithm will be to define another relation $\equiv'$ so that $\equiv$ and $\equiv'$ will agree when there are no illusory cut pairs; we will then use \prettyref{prop:cpp} to verify that there are no illusory cut pairs. \subsubsection{Cactuslike Graphs} The relation $\equiv'$ must provide an alternate way of constructing $\mathsf{Ca}(G)$, when there are no illusory cut pairs. To this end we examine the properties of $\mathsf{Ca}(G)$ in more detail. We fix some terminology: a \emph{closed walk} has distinct edges but may repeat vertices; a \emph{simple cycle} is any closed walk without repeated vertices. Thus a simple cycle of length 1 is a loop, and a simple cycle of length 2 is a parallel pair of non-loop edges. An \emph{ear decomposition} of $G$ is a sequence of graphs $G_0 \subset G_1 \subset G_2 \subset \dotsb \subset G_k = G$ such that $G_0$ is just a vertex and each $G_i$ is obtained from $G_{i-1}$ by attaching a simple cycle (a \emph{closed ear}) or path with both endpoints in $G_{i-1}$ (an \emph{open ear}). It is well-known that a graph is 2-edge-connected if and only if it admits an ear decomposition. The path or cycle added to $G_{i-1}$ to get $G_i$ is called the $i$th \emph{ear} of $G$, and it is denoted $E_i$. We omit the straightforward proof of \prettyref{prop:2c2c}. \begin{prop}\label{prop:2c2c} Every pair of nodes in $\mathsf{Ca}(G)$ has edge-connectivity equal to 2. \end{prop} Call a graph \emph{cactuslike} if every pair of nodes has edge-connectivity equal to 2. 
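For small examples, the cactuslike property can be checked directly from this definition. The sketch below (a brute-force illustration with hypothetical names, exponential in the number of edges and so suitable only for tiny graphs) tests that every pair of vertices has edge-connectivity exactly 2:

```python
from itertools import combinations

def components(vertices, edges):
    """Union-find connected components; edges are (label, (u, v)) pairs."""
    parent = {x: x for x in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _, (u, v) in edges:
        parent[find(u)] = find(v)
    return {x: find(x) for x in vertices}

def is_cactuslike(vertices, edges):
    """True iff every pair of vertices has edge-connectivity exactly 2."""
    # Connectivity >= 2 everywhere: connected, and no single edge is a cut.
    if len(set(components(vertices, edges).values())) > 1:
        return False
    for i in range(len(edges)):
        rest = edges[:i] + edges[i + 1:]
        if len(set(components(vertices, rest).values())) > 1:
            return False
    # Connectivity <= 2 everywhere: some pair of edges separates each pair.
    for u, v in combinations(sorted(vertices), 2):
        separable = False
        for cut in combinations(edges, 2):
            cm = components(vertices, [e for e in edges if e not in cut])
            if cm[u] != cm[v]:
                separable = True
                break
        if not separable:
            return False
    return True

# A triangle is cactuslike: each pair of vertices is exactly 2-edge-connected.
tri = [("e1", ("a", "b")), ("e2", ("b", "c")), ("e3", ("c", "a"))]
assert is_cactuslike({"a", "b", "c"}, tri)
```

Note that the second phase only needs to try cuts of size exactly 2: once the first phase rules out cuts of size 0 or 1, a separating cut of size at most 2 exists iff one of size exactly 2 does.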
\begin{prop}\label{prop:2cactus} The following are equivalent for any graph: (a) it is cactuslike; (b) in some ear decomposition, all its ears are closed; (c) in every ear decomposition, all its ears are closed; (d) every edge lies in exactly one simple cycle. \end{prop} \begin{proof} Trivially, (c) implies (b). It is easy to see that (b) implies (a) by induction on the ears. We now prove the contrapositive of (a) $\Rightarrow$ (d). If (d) is false there are two simple cycles $C_1, C_2$ both containing an edge $e$; since the $C_i$ are different there is another edge $f$ with (WLOG) $f \in C_1, f \not\in C_2$. Let $P$ be the inclusion-minimal subpath of $C_1$ containing $f$ with both its endpoints in $C_2$. Then we get three edge-disjoint paths connecting the endpoints of $P$: $P$ itself plus two in $C_2$. By the definition of cactuslike, (a) is false, and we are done. We also prove the contrapositive of (d) $\Rightarrow$ (c). If (c) is false there is an ear decomposition with an open ear $E_i$ with endpoints $u, v$. Since $G_{i-1}$ is 2-edge-connected, there are two different simple $u$-$v$ paths in $G_{i-1}$. Combining these paths in turn with $E_i$ gives two simple cycles having at least one common edge, so (d) is false. \end{proof} From \prettyref{prop:2cactus}(c) we obtain the following corollary by induction on the ears. \begin{cor}\label{cor:ccp} In a cactuslike graph, for any ear decomposition, the cut classes are the same as the nonsingleton ears. \end{cor} We know from \prettyref{prop:2c2c}, along with the definitions of $\mathsf{Ca}(G)$ and cactuslike, that $\mathsf{Ca}(G)$ is a cactuslike contraction of $G$, and by \prettyref{prop:cpp} the cut classes of $\mathsf{Ca}(G)$ are the same as the cut classes of $G$. The following converse will be very useful. \begin{lmma}\label{lmma:2way} Let $R$ be an equivalence relation for which $G/R$ is cactuslike, and such that the cut classes of $G/R$ and the cut classes of $G$ are the same.
Then $R$ is the same as $\equiv$. \end{lmma} \begin{proof} First, we establish that $\equiv$ refines $R$. Suppose otherwise, that there are vertices $u, v$ related by $\equiv$ but not by $R$. Then the vertices corresponding to the equivalence classes of $u$ and $v$ in $G/R$ are different. Since $G/R$ is cactuslike, there is a cut pair separating those vertices. But then \eqref{eq:footz} shows this cut pair also separates $u$ from $v$ in $G$, contradicting the fact that $u \equiv v$. Now, we establish that $R$ refines $\equiv$. Suppose otherwise, that there are vertices $u, v$ related by $R$ but not by $\equiv$. Since $u \not\equiv v$ there is a cut pair $\{e, f\}$ in $G$ separating $u$ from $v$. Thus $G \backslash \{e, f\}$ has two connected components, one containing $u$ and one containing $v$, and since $u R v$, we see $G/R \backslash \{e, f\}$ is connected. So $G$ and $G/R$ have different cut pairs, hence different cut classes, which is the needed contradiction. \end{proof} \subsubsection{Pinching Ears} In this subsection we develop an algorithmic ear-based tool to construct $\equiv$. First we need the following. \begin{lmma}\label{lmma:ecc} Every cut class of $G$ lies within a single ear of any ear decomposition of $G$. \end{lmma} \begin{proof} Suppose otherwise, that there is a cut pair $\{e, f\}$ with $e \in E_i$ and $f \in G_{i-1}$. Since $G_{i-1}$ is 2-edge-connected, $G_{i-1} \backslash f$ is connected. But $G_i \backslash \{e, f\}$ is obtained by attaching at most two paths to $G_{i-1} \backslash f$, and so is connected. By induction on the remaining ears we see that each $G_j \backslash \{e, f\}$ for $j \geq i$ is connected; in particular for $j = k$ this means $G \backslash \{e, f\}$ is connected, a contradiction. \end{proof} The cut classes of $\mathsf{Ca}(G)$ and $G$ agree (\prettyref{prop:cpp}), and by \prettyref{cor:ccp} the cut classes of $\mathsf{Ca}(G)$ are the same as its non-loop simple cycles.
We reiterate for future reference: \begin{equation}\label{eq:gg} \textrm{The cut classes of $G$ are the same as the non-loop simple cycles in $\mathsf{Ca}(G)$.}\tag{\ensuremath{\diamondsuit}} \end{equation} We now define a \emph{pinching} operation whose effect is also to turn a cut class into a cycle. Let a given ear $E_i$ have vertices and edges $v_0, e_1, v_1, e_2, v_2, \dotsc, e_z, v_z$ in that order, where $v_0 = v_z$ iff the ear is closed. For a given subset $U = \{e_{j(1)}, e_{j(2)}, \dotsc, e_{j(t)}\}$ of the ear's edges, indexed such that $j(1) < j(2) < \dotsb < j(t)$, let $R_U$ denote the equivalence relation on the ear's vertices consisting exactly of the pairs $$R_U := \bigl\{\{v_{j(1)}, v_{j(2)-1}\}, \{v_{j(2)}, v_{j(3)-1}\}, \dotsc, \{v_{j(t-1)}, v_{j(t)-1}\}, \{v_{j(t)}, v_{j(1)-1}\}\bigr\}.$$ Thus in the ``pinched" ear $E_i / R_U$, the set $U$ forms a simple cycle. For example, if $U$ consists of just one edge, then that edge becomes a loop in $E_i / R_U$. \begin{defn} Denote the cut classes contained in ear $E_i$ as $U[1], U[2], \dotsc, U[s]$; then define $\equiv_i$ to be the equivalence relation defined by the transitive closure $(R_{U[1]} \cup R_{U[2]} \cup \dotsb \cup R_{U[s]})^*$. \end{defn} In other words, $\equiv_i$ simultaneously pinches all cut classes appearing in ear $E_i$. \begin{lmma}\label{lmma:refinally} For each $i$, $\equiv_i$ refines $\equiv$. \end{lmma} \begin{proof} Using \prettyref{lmma:ecc} and the preceding definitions, it is necessary and sufficient to show that for each cut class $U$, every pair of vertices related by $R_U$ is also related by $\equiv$. It is not hard to see that we may use $U$ to decompose $G$ into $|U|$ 2-edge-connected graphs linked in a cycle by the edges of $U$. We illustrate this on the left side of Figure~\ref{fig:refinally}; compare with \prettyref{fig:cc}. Moreover, it is clear that the order of appearance of $U$ in $E_i$ is the same as the cyclic order of $U$ in this decomposition.
Therefore, adopting the notation in the definition of $R_U$, we label the 2-edge-connected graphs $G_1, G_2, \dotsc, G_{|U|}$ such that edge $e_{j(x)}$ joins $G_{x-1}$ to $G_x$ (or if $x=1$, $G_{|U|}$ to $G_1$). The notation is illustrated on the right side of Figure \ref{fig:refinally}. \begin{figure}[t] \begin{center} \leavevmode \begin{pspicture}(-2,-2)(4,2) \psset{unit=0.4} \psellipse[fillstyle=vlines](3,3)(1.5,1.5) \psellipse[fillstyle=vlines](-3,3)(1.5,1.5) \psellipse[fillstyle=vlines](-3,-3)(1.5,1.5) \psellipse[fillstyle=vlines](3,-3)(1.5,1.5) \psline[arrows=*-*](-1.5,3)(1.5,3) \psline[arrows=*-*](-1.5,-3)(1.5,-3) \psline[arrows=*-*](3,-1.5)(3,1.5) \psline[arrows=*-*](-3,-1.5)(-3,1.5) \end{pspicture} \begin{pspicture}(-4,-2)(2,2) \psset{unit=0.4} \psellipse[fillstyle=vlines](3,3)(1.5,1.5) \psellipse[fillstyle=vlines](-3,3)(1.5,1.5) \psellipse[fillstyle=vlines](-3,-3)(1.5,1.5) \psellipse[fillstyle=vlines](3,-3)(1.5,1.5) \psline[arrows=*-*](-1.5,3)(1.5,3) \psline[arrows=*-*](-1.5,-3)(1.5,-3) \psline[arrows=*-*](3,-1.5)(3,1.5) \psline[arrows=*-*](-3,-1.5)(-3,1.5) \rput*(-3,3){$G_4$} \rput*(3,3){$G_1$} \rput*(3,-3){$G_2$} \rput*(-3,-3){$G_3$} \uput[-30](3,1.5){$v_{j(2)-1}$} \uput[30](3,-1.5){$v_{j(2)}$} \rput*(3,0){$e_{j(2)}$} \uput[-210](-3,-1.5){$v_{j(4)-1}$} \uput[210](-3,1.5){$v_{j(4)}$} \rput*(-3,0){$e_{j(4)}$} \rput*(0,-3){$e_{j(3)}$} {\psset{labelsep=0.75} \uput[75](-1.5,-3){$v_{j(3)}$} \uput[-115](1.5,-3){$v_{j(3)-1}$}} \rput*(0,3){$e_{j(1)}$} {\psset{labelsep=0.75} \uput[115](1.5,3){$v_{j(1)}$} \uput[-60](-1.5,3){$v_{j(1)-1}$}} \end{pspicture} \end{center} \caption{Left: the structure of a 2-edge-connected graph with respect to a cut class (shown with a cut class of size 4); each hatched region indicates a 2-edge-connected subgraph (possibly a single vertex). 
Right: the notation used in the proof of \prettyref{lmma:refinally}.} \label{fig:refinally} \end{figure} With this setup, we now show that every pair of vertices related by $R_U$ is also related by $\equiv$. First, $v_{j(x)} \equiv v_{j(x+1)-1}$ for $1 \le x < |U|$ since these two vertices are 2-edge-connected in $G_x$ and also are linked by a third disjoint path going the long way around the cycle (using all of $U$). Establishing that $v_{j(t)} \equiv v_{j(1)-1}$ is similar. \end{proof} \begin{prop} The equivalence relation $(\cup_i \equiv_i)^*$ is the same as $\equiv$.\label{prop:tar} \end{prop} \begin{proof} This is where we make use of \prettyref{lmma:2way}. The main part of the proof is to show that the simple cycles in $G/(\cup_i \equiv_i)^*$ are the same as the simple cycles in $G / \mathord{\equiv}$ (which by \eqref{eq:gg} are the same as $G$'s cut classes). Supposing we can prove this, it follows by two applications of \prettyref{prop:2cactus}(d) that $G/(\cup_i \equiv_i)^*$ is cactuslike; then \prettyref{cor:ccp} shows that $G/(\cup_i \equiv_i)^*$ has the same cut classes as $G$, and applying \prettyref{lmma:2way} we are done. Using \prettyref{lmma:ecc}, any cut class $U$ of $G$ becomes a simple cycle in $G/R_U$ and hence a closed walk in $G/(\cup_i \equiv_i)^*$ (because that graph is a contraction of $G/R_U$). Moreover, we know by \eqref{eq:gg} that every cut class of $G$ is a simple cycle of $G/\mathord{\equiv}$, and since $(\cup_i \equiv_i)^*$ refines $\equiv$, it follows that every cut class of $G$ becomes a simple cycle in $G/(\cup_i \equiv_i)^*$. Reiterating, every cut class of $G$ is a simple cycle of $G/(\cup_i \equiv_i)^*$. Could there be any other simple cycles in $G/(\cup_i \equiv_i)^*$ --- one which is not just a cut class of $G$? We will show the answer is no, which will complete the proof. Suppose that $C$ is any simple cycle in $G/(\cup_i \equiv_i)^*$.
Since $(\cup_i \equiv_i)^*$ refines $\equiv$, we may view $G/\mathord{\equiv}$ as a contraction of $G/(\cup_i \equiv_i)^*$, thus the image of $C$ in $G/\mathord{\equiv}$ is a closed walk. But any closed walk is an edge-disjoint union of simple cycles (this is just a decomposition of an Eulerian graph), and the simple cycles in $G/\mathord{\equiv}$ are the cut classes of $G$; so $C$ is a union of cut classes of $G$. But the previous paragraph establishes that each cut class of $G$ becomes a simple cycle in $G/(\cup_i \equiv_i)^*$; and in $G/(\cup_i \equiv_i)^*$ it is impossible for $C$ to be simultaneously a single simple cycle and a union of more than one simple cycle. Thus $C$ is just a single cut class of $G$, as needed. \end{proof} \subsubsection{Detecting Errors} In the algorithm we are designing, we don't know the cut pairs; rather, we have computed $\phi$ and know that with high probability, $\phi$ labels edges by their cut class. We compute the following instead. \begin{defn} For the $i$th ear $E_i$, enumerate $\{\phi(e) \mid e \in E_i\}$ as $\{x_1, x_2, \dotsc, x_s\}$. Let $W[k]$ denote the set $\{e \in E_i \mid \phi(e) = x_k\}$ and define $\equiv'_i$ to be the equivalence relation defined by the transitive closure $(R_{W[1]} \cup R_{W[2]} \cup \dotsb \cup R_{W[s]})^*$. \end{defn} In other words, $\equiv'_i$ simultaneously pinches all sets of common $\phi$-value in ear $E_i$. Note that if there are no illusory cut pairs, then the sets of common $\phi$-value are the same as the cut classes, and so $\equiv'_i$ is the same as $\equiv_i$. Define the equivalence relation $\equiv'$ to be equal to $(\cup_i \equiv'_i )^*.$ \begin{thm}\label{thm:lvpcv} There is a Las Vegas parallel algorithm to compute all cut pairs in $O(\log V + T(E))$ time, $O(E+S(E))$ space, and $O(E+W(E))$ work, in expectation. \end{thm} \begin{proof} Our algorithm computes $H := G / \mathord{\equiv'}$ and tries to verify all cut pairs.
To compute the relations $R_{W[k]}$ on each ear, we sort the edges on that ear lexicographically according to the pair $(\phi(e), pos(e))$, where $pos(e)$ is the position along the ear. Then, to compute the transitive closure $\equiv'$ of their union, we build an auxiliary graph on vertex set $V$ and draw an edge for each pair of vertices that is related by some $R_{W[k]}$ on some ear; the equivalence classes of $\equiv'$ are then the connected components of this auxiliary graph. This can be done using the connected components routine of \cite{HZ96}. From this, computing the contracted multigraph $H$ takes constant time and linear work. Now we check whether $H$ is cactuslike, by computing an ear decomposition and seeing if all ears are closed. First, if $H$ is not cactuslike, then by \prettyref{prop:tar} the verifier can reject, since there is an illusory cut pair (because $\equiv \neq \equiv'$). Second, if $H$ is cactuslike, then its cut classes are the same as its nonsingleton ears. Using this fact, and sorting all edges by their $\phi$ value, the verifier accepts iff every pair $\{e, f\}$ with $\phi(e)=\phi(f)$ is a cut pair of $H$. By \prettyref{prop:cpp}(a), the verifier will reject when there is an illusory cut pair, and accept otherwise. \end{proof} \section{Future Work} At the most basic level, it would be interesting to push further and find efficient algorithms for higher types of connectivity, such as finding all 3-edge-cuts in $O(E)$ sequential time or $O(\ensuremath{\mathcal{D}})$ distributed time. The state of the art for this problem in the sequential model is $O(V^2)$ time \cite{galil-ital,KR91}. It would also be interesting to reduce the complexity of our parallel cut pairs algorithm to linear work and logarithmic time; it seems plausible that another approach could avoid radix sort. It is possible to deterministically compute the cut edges in the distributed model using $O(\ensuremath{\mathcal{D}})$ time and $O(E)$ messages, as was shown in the thesis \cite{dp-thesis}.
(The approach is based on the observation that $\{v, p(v)\}$ is a cut edge if and only if $low(v) \geq v$ and $high(v) < v + desc(v)$.) However, we do not know of any deterministic analogues of our distributed cut pair or cut vertex algorithms. It would be interesting to know if our distributed cut vertex algorithm could be synthesized with the cut vertex algorithm of \cite{Thur97} to yield further improvement. Alternatively, a lower bound showing that no $O(\ensuremath{\mathcal{D}})$-time algorithm is possible for finding cut vertices would be very interesting. \subsection*{Acknowledgments}We would like to thank Graeme Kemkes, Jochen K\"onemann, and the referees from ICALP and ACM Trans.~Alg.~for valuable feedback.
https://arxiv.org/abs/1301.6469
Weighted Fejér Constants and Fekete Sets
We give the connections among the Fekete sets, the zeros of orthogonal polynomials, $1(w)$-normal point systems, and the nodes of a stable and most economical interpolatory process via the Fejér constants. Finally the convergence of a weighted Grünwald interpolation is proved.
\section{Introduction} L. Fej\'er introduced the so-called Hermite-Fej\'er interpolatory process, and in 1934 he gave the definition of normal and $\varrho$-normal systems of nodes, for which the Hermite-Fej\'er interpolation is a positive interpolatory process. The surprisingly nice convergence properties of the Lagrange, Hermite and Hermite-Fej\'er operators on $\varrho$-normal systems were proved by L. Fej\'er, G. Gr\"unwald, and others. On the other hand, considerations from electrostatics suggest a system of nodes, the Fekete set, which has uniform distribution in a certain sense, so it should be a good set for interpolation. The system of zeros of orthogonal polynomials has very similar properties, as is well known. From another point of view, Egerv\'ary and Tur\'an asked whether it is possible to find an interpolatory process together with a system of nodes such that the interpolatory polynomial has minimal degree and the operator has minimal norm. The above-mentioned point systems can serve as suitable systems of nodes for an interpolatory process in the general sense and also with respect to the Egerv\'ary-Tur\'an problem. The primary aim of this note is to revisit the connections among these sets of nodes and the interpolatory problems investigated e.g. in \cite{f}, \cite{h1}, \cite{h2}, \cite{i}, \cite{j}, \cite{rs}. In the next section we summarize and reformulate these results, and complete them where the original statements were proved only in the classical cases. It will be pointed out that in these equivalences the so-called Fej\'er constants (see (3)) play the key role, that is, the characterization of these special systems of nodes is ensured by the Fej\'er constants. \medskip As an application of the results of the second section, in the third section we prove a convergence theorem for a Gr\"unwald interpolatory process on the real line for Freud-type weights. As it turns out, determining the weighted Fekete sets with respect to a fixed weight is difficult. 
(However, there are several methods of constructing approximate Fekete sets.) The zeros of orthogonal polynomials are Fekete sets for some varying weights. Unfortunately these varying weights tend to zero locally uniformly, so interpolation on Fekete sets in this sense gives only trivial (convergent) processes. The investigation of these weights at infinity leads us to define a weighted Gr\"unwald operator (see (11)), which has rather nice convergence properties. Comparing this result with the previous ones of \cite{lu}, \cite{ssz}, it turns out that the convergence is valid here for a wider function class. \section{Connections} First we give the definitions of the classes of weights in question. \begin{defi} Let $\Sigma \subset \mathbb{C}$ be a closed set. $w$ is quasi-admissible on $\Sigma$ if it is nonnegative, upper semi-continuous, and, if $\Sigma$ is unbounded, $\lim_{|z| \to \infty \atop z \in \Sigma}|z|w(z)=0$. It is admissible if $\mathrm{cap} \{z\in \Sigma: w(z)>0\}>0$. We call an admissible weight approximating on $(a,b)\subset \mathbb{R}$ if it has finite moments, it is twice differentiable with $\left(\log\frac{1}{w}\right)^{''}\geq 0$ on $(a,b)$, and if $a$ is finite, then $\lim_{ x\to a+ }\frac{w(x)}{x-a}=0$, and if $b$ is finite, then $\lim_{ x\to b-} \frac{w(x)}{b-x}=0$. \end{defi} \begin{defi}\cite{st}III.1 Let $w$ be a quasi-admissible weight on a closed set $\Sigma \subset \mathbb{C}$. The sets $\mathcal{F}_{n,w}$ are called $n$-th weighted Fekete sets associated with $w$ if the supremum below is attained at the set $\mathcal{F}_{n,w}=\{x_1,\dots ,x_n\}$. $$d_{n,w}=\sup_{z_1, \dots ,z_n \in \Sigma }d_{n,w}(z_1, \dots ,z_n)$$ \begin{equation}=\sup_{z_1, \dots ,z_n \in \Sigma }\left(\prod_{1\leq i<j\leq n}|z_i-z_j|w(z_i)w(z_j)\right)^{\frac{2}{n(n-1)}}\end{equation}\end{defi} Usually these points are not unique, but in one dimension, under some restrictions on the weight, uniqueness can be proved. 
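The classical, unweighted case ($w\equiv 1$) of the definition above can be illustrated numerically. The following Python sketch is our illustration, not part of the original text; it uses the classical facts that for $n=4$ on $[-1,1]$ the extremal configuration is symmetric and contains the endpoints, so it suffices to maximize the pairwise-distance product over configurations $\{-1,-a,a,1\}$:

```python
import numpy as np

def vandermonde_product(pts):
    """Product of pairwise distances |x_i - x_j|, i < j (unweighted case w = 1)."""
    pts = np.asarray(pts)
    prod = 1.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            prod *= abs(pts[i] - pts[j])
    return prod

# For the symmetric configuration {-1, -a, a, 1} the product has the
# closed form 4 a (1 - a^2)^2; maximize it over a in [0, 1]:
a_grid = np.linspace(0.0, 1.0, 200001)
values = 4.0 * a_grid * (1.0 - a_grid**2) ** 2
a_best = a_grid[np.argmax(values)]

print(a_best)   # ≈ 1/sqrt(5) ≈ 0.4472
print(vandermonde_product([-1.0, -a_best, a_best, 1.0]))
```

The maximizer $a=1/\sqrt5$ recovers the known classical fact that the Fekete points of $[-1,1]$ are the Gauss-Lobatto points, i.e. the zeros of $(1-x^2)P_{n-1}^{'}(x)$ (here $P_3^{'}$ vanishes at $\pm 1/\sqrt5$).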
In the classical, unweighted case on $[-1,1]$, the result was proved by Popoviciu (cf. \cite{sze} Ch 6.7 p. 139., and the references therein). In the weighted case, under some restrictions on the weight, a representation of the Fekete points was given by M. E. H. Ismail (\cite{i} Thms. 2.1, 2.4), which ensures the uniqueness of the Fekete sets as well. In what follows the one-dimensional case will be investigated. \medskip Now let us deal with the weighted Lagrange interpolatory polynomials on a system of nodes $X=\{x_{k,n}, k=1,\dots ,n; n\in \mathbb{N}\}$. Let $l_k(x)=\frac{\omega(x)}{\omega^{'}(x_k)(x-x_k)},$ where $\omega(x)=\prod_{k=1}^n(x-x_k)$ (denoting $x_k=x_{k,n}, k=1, \dots ,n$), be the fundamental polynomials of the Lagrange interpolation, and let $w(x)=e^{-Q(x)}$ be an approximating weight. The properties of $L_{k,w,X}(x)=L_{k,w}(x)=w(x)\frac{l_k^2(x)}{w(x_k)}$ will be investigated. It is clear that $L_{k,w}(x_k)=1$, that is, the $\sup$-norm of this weighted polynomial is at least 1. If this $\sup$-norm is equal to one, then $L_{k,w}$ has a maximum at the point $x_k$, that is, using $l_k(x_k)=1$ and $2l_k^{'}(x_k)=\frac{\omega^{''}}{\omega^{'}}(x_k)$, $$(L_{k,w})^{'}(x_k)=\left.w(x)\frac{l_k^2(x)}{w(x_k)}\left(\frac{w^{'}(x)}{w(x)}+\frac{2l_k^{'}(x)}{l_k(x)}\right)\right|_{x=x_k}$$ \begin{equation}=-Q^{'}(x_k)+\frac{\omega^{''}}{\omega^{'}}(x_k)=0,\end{equation} which ensures that \begin{equation}C_{k,w}:=C_{k,w,X}=\frac{\omega^{''}}{\omega^{'}}(x_k)+\frac{w^{'}}{w}(x_k)=0.\end{equation} This is the case when $X$ is a Fekete set with respect to $w^{\frac{1}{2(n-1)}}$, namely $$L_{k,w,X}=\frac{\prod_{1\leq l \leq n \atop l\ne k}\left((x-x_l)^2w^{\frac{2}{2(n-1)}}(x)w^{\frac{2}{2(n-1)}}(x_l)\right)}{\prod_{1\leq l \leq n \atop l\ne k}\left((x_k-x_l)^2w^{\frac{2}{2(n-1)}}(x_k)w^{\frac{2}{2(n-1)}}(x_l)\right)}$$ $$\times\frac{\prod_{1\leq i<j \leq n \atop i,j \ne k}\left((x_i-x_j)^2w^{\frac{2}{2(n-1)}}(x_i)w^{\frac{2}{2(n-1)}}(x_j)\right)}{\prod_{1\leq i<j \leq n \atop i,j \ne k}\left((x_i-x_j)^2w^{\frac{2}{2(n-1)}}(x_i)w^{\frac{2}{2(n-1)}}(x_j)\right)}\leq 1,$$ because the denominator contains $d_{n,w^{\frac{1}{2(n-1)}}}^{n(n-1)}$. \medskip It will turn out in what follows that the behavior of the constants $C_{k,w}$, as an indicator, shows the properties of the point systems, interpolatory systems and operators. Emphasizing the importance of these constants, let us call them "{\bf Fej\'er constants}". \medskip Following carefully the proof of the above-mentioned theorem of Ismail (\cite{i}, Thm. 2.1), we get the following \begin{proposition}Let $w$ be an approximating weight on an interval $(a,b)$. Then $d_{n,w^{\frac{1}{2(n-1)}}}^{n(n-1)}(z_1, \dots ,z_n)$ attains its maximum on $(a,b)$ at a unique set $\mathcal{F}_{n,w^{\frac{1}{2(n-1)}}}$, for which the following characterization is valid. \begin{equation} \mathcal{F}_{n,w^{\frac{1}{2(n-1)}}}=\{x_1, \dots ,x_n\} \hspace{4pt} \hspace{4pt} \mbox{if and only if} \hspace{4pt} \hspace{4pt} C_{k,w}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n. \end{equation}\end{proposition} First we have to note that finite moments are not necessary in this statement. According to Ismail \cite{i}, the proof of this theorem is the following: taking the partial derivatives of $\log d_{n,w^{\frac{1}{2(n-1)}}}^{n(n-1)}$, it turns out that \\ $\frac{\partial}{\partial x_j}\log d_{n,w^{\frac{1}{2(n-1)}}}^{n(n-1)}(x_1, \dots ,x_n) =0$ for $j=1, \dots , n$, if and only if $C_{k,w}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n$. Computing the Hessian $H$, it can be seen that $-H$ is always positive definite, so, recalling the boundary condition on $w$, we get that the maximum set is unique, that is, it is the unique solution of the equation system $C_{k,w}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n$. Independently of the previous chain of ideas, an elementary proof of uniqueness can be given. \begin{proposition}Let $w$ be an admissible, continuous weight on $\mathbb{R}$ such that $\log\frac{1}{w}$ is convex. 
Then the associated weighted Fekete sets are unique.\end{proposition} \noindent {\bf Proof: } Suppose, to the contrary, that $\{x_i\}_{i=1}^n$ and $\{y_i\}_{i=1}^n$ are Fekete points with respect to $w$, enumerated in increasing order, and let $z_i=\frac{x_i+y_i}{2}$. Then, because of the ordering of the points and the log-convexity of the weight, by the arithmetic-geometric mean inequality $$|z_i-z_j|w(z_i)w(z_j)=\left|\frac{(x_i-x_j)+(y_i-y_j)}{2}\right|w\left(\frac{x_i+y_i}{2}\right)w\left(\frac{x_j+y_j}{2}\right)$$ $$=\frac{|x_i-x_j|+|y_i-y_j|}{2}w\left(\frac{x_i+y_i}{2}\right)w\left(\frac{x_j+y_j}{2}\right)$$ $$\geq \sqrt{|x_i-x_j|}\sqrt{|y_i-y_j|}\sqrt{w(x_i)}\sqrt{w(y_i)}\sqrt{w(x_j)}\sqrt{w(y_j)},$$ where the inequality is an equality if and only if $x_i=y_i$ for all indices, which establishes the uniqueness. \medskip For special weights, the Fekete sets are the zeros of some orthogonal polynomials (cf. \cite{i}, \cite{h2}). Before stating the precise result we need some definitions. \begin{defi} Let $w=e^{-Q}$ be an approximating weight on $(a,b)$. Let \begin{equation}A_n(x)=\varrho_n\int_a^bp_{n,w}^2(t)w(t)\frac{Q^{'}(t)-Q^{'}(x)}{t-x}dt, \end{equation} where $p_{n,w}=\gamma_nx^n+\dots$ is the $n^{th}$ orthonormal polynomial with respect to $w$, and $\varrho_n=\frac{\gamma_{n-1}}{\gamma_n}$. \end{defi} Now we can define our weights: \begin{defi}Let $w$ be as in the previous definition. \begin{equation}w_n(x)=\frac{w(x)\varrho_n}{A_n(x)}\end{equation}\end{defi} In the following investigations the constant $\varrho_n$ plays no role, but it will come into the picture in connection with a convergence theorem in the next section. 
Let us see some examples for $\frac{A_n(x)}{\varrho_n}$ (\cite{i}), which in the classical cases differ only in normalization from the weights $w_1$ (\cite{rs}), for which the derivatives of the $p_{n,w}$-s are orthogonal: \noindent{\bf Example:} \noindent (1) If $w=e^{-x^2}$, $$\frac{A_n(x)}{\varrho_n}=2,$$ that is $w_n=\frac{1}{2}w$ independently of $n$ and $x$, and here $w_1=w=2w_n$. \medskip \noindent (2) If $w=x^{\alpha}e^{-x}$, $$\frac{A_n(x)}{\varrho_n}=\frac{1}{x},$$ that is $w_n=x^{\alpha+1}e^{-x}=xw$ independently of $n$, and here $w_1=w_n$. \medskip \noindent (3) If $w=(1-x)^{\alpha}(1+x)^{\beta}$, $$\frac{A_n(x)}{\varrho_n}=\frac{\alpha+\beta+1+2n}{1-x^2},$$ that is $w_n=\frac{1}{\alpha+\beta+1+2n}(1-x)^{\alpha+1}(1+x)^{\beta+1}$, and here $w_1=(\alpha+\beta+1+2n)w_n$. \medskip \noindent (4) If $w=e^{-x^4}$, $$\frac{A_n(x)}{\varrho_n}=2(x^2+\varrho_n^2+\varrho_{n+1}^2),$$ that is $w_n=\frac{1}{2(x^2+\varrho_n^2+\varrho_{n+1}^2)}w$. \medskip From another point of view $w_n$ also has an importance. Setting $z_n=p_n\sqrt{w_n}$, it satisfies the following differential equation with some $\Phi_n$ (cf. \cite{mh}, Th. 3.6.): \begin{equation}z_n^{''}(x)+\Phi_n(x)z_n(x)=0\end{equation} \medskip In the next statement we reformulate the results of Ismail, Rutka and Smarzewski (cf. \cite{i}, \cite{rs}). \begin{proposition} Let $w_n$ be as in the definitions above, and let us assume that $w_n$ is an approximating weight. Then \begin{equation} C_{k,w_n}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n \hspace{4pt}\ws\mbox{if and only if} \hspace{4pt} \hspace{4pt} \{x_k\} \hspace{4pt}\ws \mbox{are the zeros of} \hspace{4pt}\ws p_{n,w}\end{equation}\end{proposition} The proof of this statement depends on the differential equation of the orthogonal polynomials. The equation system on the $C_k$-s means that the differential equation is satisfied at the points $x_k, \hspace{4pt} k=1, \dots , n $. 
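The proposition can be checked numerically in the classical case of Example (1): for $w(x)=e^{-x^2}$ one has $w_n=\frac12 w$, so $\frac{w_n^{'}}{w_n}=\frac{w^{'}}{w}=-2x$, and at the zeros of the Hermite polynomial $H_n$ the Fejér constants vanish by the differential equation $H_n^{''}-2xH_n^{'}+2nH_n=0$ (at a zero $x_k$ of $H_n$ it gives $\frac{\omega^{''}}{\omega^{'}}(x_k)=\frac{H_n^{''}}{H_n^{'}}(x_k)=2x_k$). The following short Python check is our illustration, not part of the original:

```python
import numpy as np
from numpy.polynomial import hermite as H

n = 6
# "Physicists'" Hermite polynomial H_n, orthogonal w.r.t. w(x) = exp(-x^2);
# omega(x) = prod (x - x_k) is proportional to H_n, so omega''/omega' = H_n''/H_n'.
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0
nodes = H.hermroots(coeffs)

d1 = H.hermder(coeffs, 1)   # H_n'
d2 = H.hermder(coeffs, 2)   # H_n''

# Fejér constants C_k = omega''/omega'(x_k) + (w'/w)(x_k), with w'/w = -2x:
C = H.hermval(nodes, d2) / H.hermval(nodes, d1) - 2.0 * nodes
print(np.max(np.abs(C)))   # ~ 0 up to machine precision
```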
In the classical cases it is a Sturm-Liouville equation, that is, there are polynomials of degree $n$ in the differential equation, which is realized at $n$ points. In the general case uniqueness is used. \medskip Normal and $\varrho$-normal point systems were introduced on $[-1,1]$ by L. Fej\'er in 1934 (\cite{f}). The weighted analogue of this definition was given in \cite{h1}. The original aim of these definitions was to ensure the positivity of the Hermite-Fej\'er interpolatory operator. The limit case $\varrho=1$ was investigated on the weighted real line in \cite{h2}. Only this last definition is cited here. \begin{defi} Let $w$ be an approximating weight on $(a,b)$. A system of nodes $X=\{x_{k,n}, k=1,\dots ,n; n\in \mathbb{N}\}$ is $1(w)$-normal, if there is an $L>1$ such that \begin{equation}|x_{k,n}|<La_n,\end{equation} where $a_n$ is the Mhaskar-Rahmanov-Saff (M-R-S) number, and \begin{equation}w(x)\sum_{k=1}^n\frac{l_k^2(x)}{w(x_k)} \leq 1, \hspace{4pt}\ws x\in \mathbb{R},\end{equation} where the $l_k(x)$-s are the fundamental polynomials of the Lagrange interpolation.\end{defi} In this definition the kernel function of the Gr\"unwald operator appears. Here we will follow the notations of \cite{lu} and \cite{ssz}, that is, the weighted Gr\"unwald operator on the nodes $\{x_k\}_{k=1}^n$ with respect to an $f$ is \begin{equation}w(x)Y_n(f,x)=w(x)\sum_{k=1}^n l_k^2(x)f(x_k)\end{equation} Usually the boundedness of the operator norm ensures the convergence of the interpolatory process. Boundedness by one is a very special criterion. This is the case, for instance, when the reciprocal of the weight function has nonnegative even derivatives, and then the Gr\"unwald operator coincides with the Hermite-Fej\'er one. In this chain of ideas, too, the Fej\'er constants play the key role. More precisely, with the notations above, the weighted Hermite interpolatory polynomial (with some weight $w$) of a differentiable function can be expressed as (cf. 
\cite{h2}) $$w(x)H_n(f,f^{'},x)=w(x)\sum_{k=1}^n\frac{(1-C_{k,w}(x-x_k))l_k^2(x)}{w(x_k)}(fw)(x_k)$$ \begin{equation}+w(x)\sum_{k=1}^n\frac{(x-x_k)l_k^2(x)}{w(x_k)}(fw)^{'}(x_k),\end{equation} and the corresponding weighted Hermite-Fej\'er operator is \begin{equation}w(x)H_{n,w}(f,x)=w(x)\sum_{k=1}^n\frac{(1-C_{k,w}(x-x_k))l_k^2(x)}{w(x_k)}(fw)(x_k),\end{equation} which coincides with the weighted function at the nodes $\{x_k\}_{k=1}^n$, and which has zero derivatives at the nodes. Furthermore, by the definition of the Fej\'er constants, $H_{n,w}\left(\frac{1}{w},x\right)$ is the (unweighted) Hermite interpolatory polynomial of $\frac{1}{w}$. So when the Fej\'er constants are zero, $$Y_{n,w}(x):=w(x)Y_n\left(\frac{1}{w},x\right)=w(x)\sum_{k=1}^n\frac{l_k^2(x)}{w(x_k)}=w(x)H_{n,w}\left(\frac{1}{w},x\right)$$ \begin{equation}=w(x)\sum_{k=1}^n\frac{(1-C_{k,w}(x-x_k))l_k^2(x)}{w(x_k)}=w(x)H_n\left(\frac{1}{w},\left(\frac{1}{w}\right)^{'},x\right)\end{equation} is the Hermite interpolatory polynomial of $\frac{1}{w}$ (multiplied by $w$) with respect to the nodes $\{x_k\}_{k=1}^n$. So the following connections are established. \begin{proposition} Let $w$ be a weight as above. If a system of nodes $\{x_{k}\}$ is $1(w)$-normal, then $C_{k,w}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n $. On the other hand, let us suppose further that $\left(\frac{1}{w}\right)^{(2n)}\geq 0$ on $|x|\leq La_n$. Now if $C_{k,w}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n $, then the system of nodes is $1(w)$-normal.\end{proposition} \noindent {\bf Proof: } If $\{x_{k}\}$ is $1(w)$-normal, then $w(x)\frac{l_k^2(x)}{w(x_k)}\leq 1, \hspace{4pt} k=1, \dots , n$ (see (10)), so $C_{k,w}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n $. According to (14), by the error formula of the Hermite interpolation it is clear that $1-w(x)\sum_{k=1}^n\frac{l_k^2(x)}{w(x_k)}\geq 0$ when $\left(\frac{1}{w}\right)^{(2n)} \geq 0$ on $|x|\leq La_n$. \medskip The Egerv\'ary-Tur\'an interpolatory problem (cf. 
\cite{rs}, and the references therein) is to find an interpolatory process of lowest degree and of smallest norm. Below we denote by $\hat{l}_k(x)$ any polynomial of arbitrary degree for which $\hat{l}_k(x_i)=\delta_{ki}, \hspace{4pt} i=1, \dots , n$. \begin{defi} Let $w$ be as in Definition 5. The interpolatory system of polynomials $\hat{l}_k(x), \hspace{4pt} k=1, \dots , n$ is $w$-stable on $(a,b)$ if for all $y_1, \dots , y_n \geq 0$ \begin{equation} 0\leq w(x)\sum_{k=1}^n\frac{\hat{l}_k(x)}{w(x_k)}y_k \leq \max_ky_k, \hspace{4pt}\ws x\in (a,b).\end{equation} A $w$-stable interpolatory system on $(a,b)$ is most economical if \begin{equation} \sum_{k=1}^n \deg \left(\hat{l}_k(x)\right)\end{equation} is minimal.\end{defi} Let us remark that if the weight function tends to zero quickly at the boundary points of the fundamental interval, then the $w$-stability of the Gr\"unwald operator coincides with the $1(w)$-normality of the nodes. It is proved for all the classical weights (cf. \cite{rs}, Thm. 2.3) that an interpolatory system is $w_n$-stable and most economical if and only if it is the Gr\"unwald operator on the zeros of $p_{n,w}$. From the previous investigations, similarly to the classical cases, we can state the parallel theorem for general weights. Let us denote $$I_{n,w}(x):=w(x)\sum_{k=1}^n\frac{\hat{l}_k(x)}{w(x_k)}$$ \begin{proposition} Let $w$ be an approximating weight on an interval $(a,b)$. \noindent If $I_n(x)$ is $w$-stable and most economical, then \begin{equation}C_{k,w}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n.\end{equation} \noindent Let us assume further that $\left(\frac{1}{w}\right)^{(2n)} \geq 0$ on $|x|\leq La_n$. \noindent If $C_{k,w}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n ,$ then \begin{equation} I_n(x)=Y_{n,w}(x)\hspace{4pt}\ws\mbox{is $w$-stable and most economical}\end{equation} \end{proposition} \noindent {\bf Proof: } As it was pointed out, e.g., 
in \cite{rs}, if an interpolatory process $I_n(x)$ is $w$-stable and most economical, it must be the Gr\"unwald operator, because by the positivity of the operator $\hat{l}_k(x)$ has zeros of even multiplicity at the points $x_i, i=1,\dots ,n, i\neq k$, that is $\sum_{k=1}^n \deg \left(\hat{l}_k(x)\right)\geq 2n(n-1)$. This is realized by $Y_{n,w}$. As it was shown in Proposition 4, if $Y_{n,w}$ has maxima at the $x_k$-s, then the $C_k$-s are zero. The opposite direction also follows from Proposition 4. \medskip Finally, enumerating the properties discussed above, we can summarize these results as follows. $$\begin{array}{ll} \mbox{(\bf{A})} \hspace{4pt}\ws\hspace{4pt} C_{k,w}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n \hspace{4pt}\ws\hspace{4pt} (\mathrm{\bf{A}^{'}}) \hspace{4pt}\ws\hspace{4pt} C_{k,w_n}=0,\hspace{4pt} \hspace{4pt} k=1, \dots , n \\ \mbox{(\bf{B})}\hspace{4pt}\ws\hspace{4pt} \mathcal{F}_{n,w^{\frac{1}{2(n-1)}}}=\{x_1, \dots ,x_n\} \hspace{4pt}\ws\hspace{4pt} (\mathrm{\bf{B}^{'}}) \hspace{4pt}\ws\hspace{4pt} \mathcal{F}_{n,w_n^{\frac{1}{2(n-1)}}}=\{x_1, \dots ,x_n\}\\ \mbox{(\bf{C})}\hspace{4pt}\ws\hspace{4pt} p_{n,w}(x_k)=0,\hspace{4pt}\ws k=1, \dots , n\\ \mbox{(\bf{D})} \hspace{4pt}\ws\hspace{4pt} Y_{n,w}(x) \hspace{4pt}\ws\mbox{is $w$-stable and most economical}\\ (\mathrm{\bf{D}^{'}}) \hspace{4pt}\ws\hspace{4pt} Y_{n,w_n}(x) \hspace{4pt}\ws\mbox{is $w_n$-stable and most economical}\\ \mbox{(\bf{E})} \hspace{4pt}\ws\hspace{4pt} \{x_1, \dots ,x_n\} \hspace{4pt}\ws\mbox{is $1(w)$-normal}\hspace{4pt}\ws\hspace{4pt} (\mathrm{\bf{E}^{'}}) \hspace{4pt}\ws\hspace{4pt} \{x_1, \dots ,x_n\} \hspace{4pt}\ws\mbox{is $1(w_n)$-normal}\end{array} $$ Through the equivalence of all the above-mentioned properties with property ({\bf A}) (or $(\mathrm{\bf{A}^{'}})$), that is, that the Fej\'er constants are zero, one can get \noindent {\bf Corollary:} Let $w$ be an admissible, approximating weight on an interval $(a,b)$. 
If $\left(\frac{1}{w}\right)^{(2n)} \geq 0$ on $(a,b)$, then ({\bf A}), ({\bf B}), ({\bf D}), ({\bf E}) are equivalent, and if $\left(\frac{1}{w_n}\right)^{(2n)} \geq 0$ on $(a,b)$, then $(\mathrm{\bf{A}^{'}}), (\mathrm{\bf{B}^{'}}),$ ({\bf C}), $(\mathrm{\bf{D}^{'}}), (\mathrm{\bf{E}^{'}})$ are equivalent. \medskip Let us give an example for the second assumption. \noindent{\bf Example:} Let \begin{equation} Q(x)=\sum_{k=0}^m d_k x^{2k}, \hspace{1cm} d_k \geq 0, \hspace{4pt} k=1, \dots ,m, \end{equation} and let $w(x)=e^{-Q(x)}$. For these special Freud-type weights $\left(\frac{1}{w_n}\right)^{(2n)} \geq 0$ on $\mathbb{R}$ for all $n \in \mathbb{N}$. According to the Leibniz rule it is enough to show that $\left(\frac{A_n}{\varrho_n}\right)^{(j)}\left(\frac{1}{w}\right)^{(2n-j)}>0$ for $j=1, \dots ,2n$. Because, with $\bar{Q}(t,x)=\frac{Q^{'}(t)-Q^{'}(x)}{t-x}$, $$\frac{\partial^j\left(\bar{Q}(t,x)\right)}{\partial x^j}= \sum_{k=1}^m 2kd_k\frac{\partial^j\left(\frac{t^{2k-1}- x^{2k-1}}{t-x}\right)}{\partial x^j}=\sum_{k=\lceil\frac{j}{2}\rceil+1}^m2kd_k\sum_{l=j}^{2k-2}b_lt^{2k-2-l}x^{l-j},$$ where the $b_l$-s are positive, taking into consideration that $w$ is an even weight function (and so $p_n^2(w)$ is also even), one can see that $$\left(\frac{A_n}{\varrho_n}\right)^{(j)}=\sum_{k=\lceil\frac{j}{2}\rceil+1}^m 2kd_k\sum_{l=j}^{2k-2}b_l\int_{\mathbb{R}}p_n^2(w,t)w(t)t^{2k-2-l}dt\, x^{l-j}$$ is a polynomial in $x$ with nonnegative coefficients, and all the exponents of this polynomial are even if $j$ is even and odd if $j$ is odd. By a simple induction one can see that $$\left(\frac{1}{w(x)}\right)^{(j)} = p(j,x)e^{Q(x)},$$ where $p(j,x)$ is a polynomial having the same properties as the previous one. Because $j$ and $2n-j$ have the same parity, $$\left(\frac{1}{w_n}\right)^{(2n)}(x)=p(x)e^{Q(x)},$$ where $p(x)$ is a polynomial with even exponents and positive coefficients, so it is positive on the real line for all $n\in \mathbb{N}$. 
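The simplest instance of the example, $Q(x)=x^2$ (so $w=e^{-x^2}$ and $\left(\frac{1}{w}\right)^{(2n)}=(e^{x^2})^{(2n)}\geq 0$), can also be checked numerically: the Fejér constants vanish at the zeros of $H_n$, so by the equivalences above these nodes should be $1(w)$-normal, i.e. the Grünwald kernel $w(x)\sum_k l_k^2(x)/w(x_k)$ in (10) is bounded by $1$. The following Python sketch is our numerical illustration, not part of the original text:

```python
import numpy as np
from numpy.polynomial import hermite as H

n = 7
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0
nodes = H.hermroots(coeffs)          # zeros of H_n, w(x) = exp(-x^2)

def lagrange_basis(x, nodes, k):
    """Fundamental polynomial l_k(x) = prod_{j != k} (x - x_j)/(x_k - x_j)."""
    lk = np.ones_like(x)
    for j, xj in enumerate(nodes):
        if j != k:
            lk *= (x - xj) / (nodes[k] - xj)
    return lk

x = np.linspace(-5.0, 5.0, 4001)
# Grünwald kernel w(x) * sum_k l_k(x)^2 / w(x_k):
kernel = np.exp(-x**2) * sum(
    lagrange_basis(x, nodes, k) ** 2 * np.exp(nodes[k] ** 2) for k in range(n)
)

print(kernel.max())   # stays below 1; the value 1 is attained at the nodes
```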
\medskip Finally we have to remark that the assumption $\left(\frac{1}{w}\right)^{(2n)} \geq 0$ seems to be asymmetric; it is needed only because of the method of the proof, via Hermite interpolation. The question whether it can be weakened or not is still open. \section{Interpolation} In this section, let $w=e^{-Q}$ be a three times continuously differentiable Freud weight on $\mathbb{R}$, that is, we suppose that $Q$ is even, $Q^{'}>0$ on $(0,\infty)$, and for some $A,B\geq 2$, $A\leq\frac{(xQ^{'}(x))^{'}}{Q^{'}(x)}\leq B$ on $(0,\infty)$; moreover there is a constant $c$ such that for every $|x|\geq 1$, $\left|\frac{xQ^{(3)}(x)}{Q^{''}(x)}\right|\leq c$. By these assumptions there is a $d \geq 1$ such that $Q^{''}(x) \geq \frac{1-(B-1)^2-c(B-1)}{x^2}$, when $|x|\geq d$. Now we can define \begin{defi} With $d>1$ given above, let \begin{equation} \tilde{w}(x)=\left\{ \begin{array}{ll} w(x), & |x| \leq 1\\ w(x)\frac{x}{Q^{'}(x)}, & |x| \geq d\\ \mbox{twice continuously differentiable}, & \mbox{elsewhere}\end{array}\right.\end{equation} Furthermore we assume that $\log\frac{1}{\tilde{w}}$ has positive and continuous first and second derivatives on $(0,\infty)$. \end{defi} Let us remark first that $Q^{'}(1)\leq Q^{'}(d)+\frac{d}{Q^{'}(d)}\left(\frac{Q^{'}(x)}{x}\right)^{'}(d)$, because $Q^{'}$ is increasing, and the second term on the right-hand side is positive when $A\geq 2$. Thus a suitable connection can be defined between the two parts of $\log\frac{1}{\tilde{w}}$. As usual we define \begin{defi} $$C_{\tilde{w}}=\left\{f\in C(\mathbb{R})\, \middle| \,\lim_{|x| \to \infty} (f\tilde{w})(x) =0\right\}$$\end{defi} Let $Y_n(f,\cdot)$ be as in (11), the Gr\"unwald operator on the zeros of $p_{n,w}$. Now we have the following \begin{theorem} Let $f \in C_{\tilde{w}}$. Then \begin{equation}\lim_{n\to \infty}\|(Y_n(f)-f)\tilde{w}\|=0\end{equation}\end{theorem} Comparing this theorem with Cor. 2. 
of \cite{ssz}, we can see that there are two different weights in this theorem, but when $A\geq 2$, the function class here is wider, that is, the functions can grow more quickly at infinity. The above definition of the weight was inspired by the next lemma. Investigating the weights $w_n$ from the previous section, it turns out that although $w_n$ tends to zero locally uniformly as $n$ tends to infinity, the behavior of the $w_n$-s at infinity is the same. It means that the Gr\"unwald operator on Fekete points with respect to the varying weights $w_n$ has trivial convergence properties, but it allows us to find a non-trivial process, as given in the theorem. The following estimate of $A_n$ is valid. \begin{lemma} Let $w$ be as above, and let $A\geq 2$. Let $L_0$ be a constant such that $\frac{L_0}{2}a_n>a_{2n+[A-1]+1}$. For every $L>L_0$ \begin{equation}\frac{A_n(x)}{\varrho_n}\sim\left\{\begin{array}{ll}\frac{n}{a_n^2}, \hspace{4pt}\mbox{if}\hspace{4pt} |x| \leq La_n\\ \frac{Q^{'}(x)}{x},\hspace{4pt}\mbox{if}\hspace{4pt} |x| \geq La_n\end{array}\right.,\end{equation} where the constants in $"\sim"$ depend only on $L$ and are independent of $n$.\end{lemma} \noindent {\bf Proof: } First we have to note that such an $L_0$ exists by \cite{lelu} 5.9. The first line of the estimate was proved by H. N. Mhaskar (\cite{mh}, Prop. 3.7.). To prove the second line we divide the integral into several parts. Since $A_n$ is even we can choose $x>La_n$. 
$$\frac{A_n(x)}{\varrho_n}=\int_{|t|\leq \frac{x}{2}}p_n^2(t)w(t)\frac{Q^{'}(t)-Q^{'}(x)}{t-x}dt+\int_{|t|>\frac{x}{2}}(\cdot)dt=I_1+I_2$$ If $|t|\leq \frac{x}{2}$, using that $Q$ is convex on $\mathbb{R}$, and estimating the denominator as $\frac{x}{2} \leq |t-x| \leq \frac{3}{2}x$, $$\frac{2}{3}\frac{\left(2^{A-1}-1\right)}{2^{A-1}}\frac{Q^{'}(x)}{x}\leq \frac{2}{3}\frac{Q^{'}(x)-Q^{'}\left(\frac{x}{2}\right)}{x}\leq \frac{Q^{'}(t)-Q^{'}(x)}{t-x}$$ $$\leq 2\frac{|Q^{'}(t)|+Q^{'}(x)}{x}\leq 4\frac{Q^{'}(x)}{x},$$ where in the first inequality we used the properties of $Q^{'}$, cf. \cite{lelu} 5.3. So \begin{equation}I_1 \sim\frac{Q^{'}(x)}{x}\int_{|t|\leq \frac{x}{2}}p_n^2w\sim\frac{Q^{'}(x)}{x}\int_{\mathbb{R}}p_n^2w\sim\frac{Q^{'}(x)}{x},\end{equation} where in the second "$\sim$" the lower estimate follows from \cite{mh}, (2.6), say. For $|t|>\frac{L}{2}a_n>4a_n$ \begin{equation}I_2=\int_{|t|>\frac{x}{2}}p_n^2(t)w(t)\frac{Q^{'}(t)-Q^{'}(x)}{t-x}dt =\int_{|t|>\frac{x}{2}\atop |x-t|\leq 1}(\cdot)+\int_{|t|>\frac{x}{2}\atop |x-t|>1}(\cdot) =I_3+I_4\end{equation} By the properties of Freud weights, and by \cite{mh}, (2.6), $$I_3 =\int_{|t|>\frac{x}{2}\atop|x-t|\leq 1}p_n^2(t)w(t)Q^{''}(\xi(x,t))dt$$ \begin{equation}\leq c \frac{Q^{'}(x)}{x}\int_{ |t|>\frac{x}{2}}p_n^2(t)w(t)dt\leq c_1e^{-c_2n}\frac{Q^{'}(x)}{x}\end{equation} $$I_4=\int_{-\infty}^{-\frac{x}{2}}p_n^2(t)w(t)\frac{Q^{'}(|t|)+Q^{'}(x)}{|t|+x}dt+\int_{\frac{x}{2}}^{x-1}p_n^2(t)w(t)\frac{Q^{'}(t)-Q^{'}(x)}{t-x}dt$$ \begin{equation}+\int_{x+1}^{2x}p_n^2(t)w(t)\frac{Q^{'}(t)-Q^{'}(x)}{t-x}dt+\int_{2x}^{\infty}(\cdot)=I_5+I_6+I_7+I_8\end{equation} $$I_5\leq \int_{-\infty}^{-\frac{x}{2}}p_n^2(t)w(t)\frac{Q^{'}(|t|)}{|t|}dt+\int_{-\infty}^{-\frac{x}{2}}p_n^2(t)w(t)\frac{Q^{'}(x)}{x}dt$$ According to \cite{lelu}, 5.2, $$I_5\leq \int_{\frac{x}{2}}^{\infty}p_n^2(t)w(t)t^{[A-1]+1}dt+c_1e^{-c_2n}\frac{Q^{'}(x)}{x}$$ According to \cite{mh} 2.7, $$\int_{\frac{x}{2}}^{\infty}p_n^2(t)w(t)t^{[A-1]+1}dt\leq 
c_1e^{-c_2n}\int_{|t|\leq a_{2n+[A-1]+1}}p_n^2(t)w(t)t^{[A-1]+1}dt$$ Because $A\geq 2$, $\frac{Q^{'}(t)}{t}$ is increasing, so by \cite{lelu} 5.2 and 5.9 $$\int_{|t|\leq a_{2n+[A-1]+1}}p_n^2(t)w(t)t^{[A-1]+1}dt\leq \frac{Q^{'}(x)}{x}\int_{|t|\leq a_{2n+[A-1]+1}}p_n^2(t)w(t)t^2dt$$ $$\leq c a_n^2\frac{Q^{'}(x)}{x}$$ that is \begin{equation}I_5\leq c_3e^{-c_2n}\frac{Q^{'}(x)}{x}\end{equation} If $t\in \left(\frac{x}{2},x-1\right)$ we can write $x=\lambda t$, where $1<\lambda\leq 2$; thus according to \cite{lelu}, 5.3, recalling that $B\geq A\geq 2$, $\frac{Q^{'}(x)-Q^{'}(t)}{x-t}\leq \frac{Q^{'}(t)}{t}\frac{\lambda^{B-1}-1}{\lambda-1}\leq c(B) \frac{Q^{'}(t)}{t}\leq c(B) \frac{Q^{'}(x)}{x}$, where in the last step we used that $\frac{Q^{'}(t)}{t}$ is increasing. So \begin{equation}I_6 \leq c \frac{Q^{'}(x)}{x}\int_{\frac{x}{2}}^{x-1}p_n^2(t)w(t)dt \leq c_1e^{-c_2n}\frac{Q^{'}(x)}{x}\end{equation} As in the previous case, when $x+1<t<2x$, $\frac{Q^{'}(t)-Q^{'}(x)}{t-x}\leq \frac{Q^{'}(x)}{x}\frac{\lambda^{B-1}-1}{\lambda-1}$, so \begin{equation}I_7 \leq c(B)\frac{Q^{'}(x)}{x} \int_{x+1}^{2x}p_n^2(t)w(t)dt\leq c_1e^{-c_2n}\frac{Q^{'}(x)}{x}\end{equation} Because $t-x>\frac{t}{2}$ in $I_8$, $$I_8\leq 2 \int_{2x}^{\infty}p_n^2(t)w(t)\frac{Q^{'}(t)}{t}dt \leq 2\int_{2x}^{\infty}p_n^2(t)w(t)t^{[A-1]+1}dt$$ So, as in the case of $I_5$, \begin{equation}I_8\leq c_1e^{-c_2n}\int_{|t|\leq a_{2n+[A-1]+1}}p_n^2(t)w(t)t^{[A-1]+1}dt\leq c_1e^{-c_2n}\frac{Q^{'}(x)}{x}\end{equation} That is, $I_1 \sim \frac{Q^{'}(x)}{x}$ and $I_2\leq c_1e^{-c_2n}\frac{Q^{'}(x)}{x},$ which proves the second line of the lemma. \medskip For the proof of the Theorem we need the following \begin{lemma} If $f \in C_{\tilde{w}}$, then, with the notation of (13), \begin{equation} \|Y_{n,\tilde{w}}\|=O(1)\end{equation}\end{lemma} \noindent {\bf Proof: } First let $\frac{a_n}{2}\leq |x|\leq La_n$. Here, by \cite{lelu} 5.5, $\frac{x}{Q^{'}(x)}\sim \frac{a_n^2}{n}$. 
Using that $\frac{Q^{'}(x)}{x}$ is even and increasing on $\mathbb{R}_+$ when $A\geq 2$, we have $$Y_{n,\tilde{w}}(x)=\frac{x}{Q^{'}(x)}w(x)\sum_{k=1}^n\frac{l_k^2(x)}{w(x_k)}\frac{Q^{'}(x_k)}{x_k}$$ \begin{equation}\leq c \frac{a_n^2}{n}w(x)\sum_{k=1}^n\frac{l_k^2(x)}{w(x_k)}\frac{Q^{'}(a_n)}{a_n}\leq c w(x)\sum_{k=1}^n\frac{l_k^2(x)}{w(x_k)}=O(1), \end{equation} For the last equality cf. \cite{sz} (39). When $|x|>La_n>|x_k|$, then $A\geq 2$ yields that $\frac{x}{Q^{'}(x)}\frac{Q^{'}(x_k)}{x_k}\leq 1$. That is \begin{equation}Y_{n,\tilde{w}}(x)\leq c w(x)\sum_{k=1}^n\frac{l_k^2(x)}{w(x_k)}=O(1), \end{equation} as in the previous case. Let $d<|x|<\frac{a_n}{2}$. $$Y_{n,\tilde{w}}(x)=\tilde{w}(x)\sum_{k \atop |x_k|\leq 2|x|}(\cdot)+\tilde{w}(x)\sum_{k \atop |x_k|> 2|x|}(\cdot)=\Sigma_1+\Sigma_2$$ Because, as previously, when $|x_k|\leq 2|x|$, $\frac{x}{Q^{'}(x)}\frac{Q^{'}(x_k)}{x_k}<c(B)$, \begin{equation}\Sigma_1 = O(1)\end{equation} $$\Sigma_2 = w(x)p_n^2(x)\frac{x}{Q^{'}(x)}\sum_{k \atop |x_k|> 2|x|}\frac{Q^{'}(x_k)}{w(x_k)x_k(x-x_k)^2p_n^{'2}(x_k)}$$ Since $\frac{1}{p_n^{'2}(x_k)w(x_k)}\sim \frac{a_n^2}{n}\Delta x_k$ (cf. \cite{cr} 4.11, 4.17), $$\Sigma_2\leq c w(x)p_n^2(x)\frac{x}{Q^{'}(x)}\frac{a_n^2}{n}\sum_{k \atop |x_k|> 2|x|}\frac{Q^{'}(x_k)}{x_k(x-x_k)^2}\Delta x_k$$ $$ \leq cw(x)p_n^2(x)\frac{x}{x^2Q^{'}(x)}\frac{a_n^2}{n}\int_{2d}^{a_n}\frac{Q^{'}(t)}{t}dt \leq c \frac{a_n}{n}\frac{1}{xQ^{'}(x)}$$ \begin{equation}\times \left(\frac{n}{a_n^2}+\int_{2d}^{a_n}\frac{Q(t)}{t^2}dt\right) \leq c\frac{a_n}{n}\frac{1}{xQ^{'}(x)}Q(a_n)\leq c\frac{1}{dQ^{'}(d)}=O(1)\end{equation} Here we used that $w(x)p_n^2(x)\leq \frac{c}{a_n}$ for $|x| \leq \frac{a_n}{2}$, cf. \cite{cr} 4.6. Finally let $|x|\leq d$. Let us remark that $\frac{\tilde{w}(x)}{w(x)}$ is between two constants on $[1,d]$. 
$$Y_{n,\tilde{w}}(x) \leq c w(x)\sum_{k, x_k<2d}\frac{l_k^2(x)}{w(x_k)}+ cw(x)\sum_{k, x_k\geq 2d}\frac{l_k^2(x)}{w(x_k)}\frac{Q^{'}(x_k)}{x_k}=\Sigma_3+\Sigma_4$$ As previously, \begin{equation}\Sigma_3 = O(1)\end{equation} Similarly to the estimate of $\Sigma_2$, \begin{equation}\Sigma_4\leq c\frac{a_n}{n}\sum_{k, x_k\geq 2d}\frac{Q^{'}(x_k)}{x_k(x-x_k)^2}\Delta x_k \leq \frac{c}{d^2}\frac{a_n}{n}\left(\frac{n}{a_n^2}+Q(a_n)\right)=O(1)\end{equation} \medskip {\bf Proof (of the Theorem):} Since polynomials obviously belong to $C_{\tilde{w}}$, the previous lemma, combined with the Banach-Steinhaus theorem, ensures the result.
https://arxiv.org/abs/1009.3637
Lines on projective varieties and applications
The first part of this note contains a review of basic properties of the variety of lines contained in an embedded projective variety and passing through a general point. In particular we provide a detailed proof that for varieties defined by quadratic equations the base locus of the projective second fundamental form at a general point coincides, as a scheme, with the variety of lines. The second part concerns the problem of extending embedded projective manifolds, using the geometry of the variety of lines. Some applications to the case of homogeneous manifolds are included.
\section*{Introduction} The {\it principle} that the Hilbert scheme of lines contained in a (smooth) projective variety $X\subset{\mathbb P}^N$ and passing through a (general) point can inherit intrinsic and extrinsic geometrical properties of the variety has emerged recently. This principle has made it possible to attack some problems in a {\it unified way}, has provided non-trivial connections between different theories and has put some basic questions in a new light. A typical example is the Hartshorne Conjecture on complete intersections, see \cite{HC, DD} and also \cite{QEL1, QEL2}. The technique of studying, or even reconstructing, $X$ from the {\it variety of minimal rational tangents} (a generalization of the Hilbert scheme of lines passing through a point), introduced in the work of Hwang, Mok and others, was applied to the theory of Fano manifolds (see e.g. \cite{HM, HM2, HM3, Hwang, HK, FHw}). On the other hand, Landsberg and others investigated some possible characterizations of special homogeneous manifolds via the projective second fundamental form (see e.g. \cite{Lan3, Lan4, HY, LR}). The Hilbert schemes of lines through a general point of many homogeneous varieties with notable geometrical properties are also somehow {\it nested}, or {\it part of a matrioska}; see Tables \eqref{hermitian} and \eqref{contact}. For this class of varieties, or more generally for classes where the principle holds, one starts an induction process which sometimes stops after only a few steps, see e.g. \cite[Theorem 2.8, Corollary 3.1 and 3.2]{QEL1} and also \cite{FHw}. An example of this kind is the following: if $X\subset{\mathbb P}^N$ is an $LQEL$-manifold of type $\delta\geq 3$, then the Hilbert scheme of lines $\mathcal L_{x,X}\subset{\mathbb P}^{n-1}$, $n=\dim(X)$, passing through a general point $x\in X$ is a $QEL$-manifold of type $\delta-2$, \cite[Theorem 2.3]{QEL1}.
Then starting the induction with $X\subset{\mathbb P}^N$ an $LQEL$-manifold of type $\frac{n}{2}$, one deduces immediately $n=2,4,8$ or 16, yielding as a consequence a quick proof that Severi varieties appear only in these dimensions (see \cite[Corollary 3.2]{QEL1}, also for the definitions of $(L)QEL$-variety and of Severi variety, introduced by Zak, see e.g. \cite{Zak}). The Hilbert scheme of lines through a point is closely related to the base locus of the (projective) second fundamental form, a classical tool used in projective differential geometry and reconsidered in modern algebraic geometry by Griffiths and Harris (see \cite{GH} and also \cite{IL}). In this theory one tries to reconstruct a (homogeneous) variety from its second fundamental form (see e.g. \cite{Lan3, Lan4, HY, LR}) by integrating local differential equations and obtaining global results. We note that the base locus of the second fundamental form at a general point of a smooth variety is typically not smooth, while the Hilbert scheme of lines through a general point of a smooth variety is smooth, see Proposition \ref{Yx}.
Thus for quadratic manifolds, if $\mathcal L_{x,X}$ is also irreducible, a beautiful {\it matrioska} naturally appears. From this point of view, a quadratic manifold $X\subset{\mathbb P}^N$ with $3n>2N$ is a complete intersection {\it because} $\mathcal L_{x,X}\subset{\mathbb P}^{n-1}$ is a smooth irreducible non-degenerate complete intersection, defined exactly by $c$ quadratic equations, so that it has the {\it right dimension}, \cite[Theorems 4.8 and 2.4]{DD} and Remark \ref{ci}. The aim of this note is twofold: In Section~\ref{Lx} we study in detail the intrinsic and extrinsic properties of the Hilbert scheme of lines passing through a smooth point of an equidimensional connected variety $X\subset{\mathbb P}^N$, providing an almost self-contained treatment. In Section~2 we illustrate another incarnation of the principle presented above by studying the problem of extending smooth varieties uniruled by lines as hyperplane sections of irreducible varieties. First we describe the possible singularities of $\mathcal L_{x,X}$, proving that a singular point of the Hilbert scheme of lines passing through a general point $x$ of an irreducible variety produces a line joining $x$ to a singular point of $X$, a stronger condition than the mere existence of a singular point on $X$, see Proposition \ref{Yx}. Then we relate the equations defining $X\subset{\mathbb P}^N$ with those of $\mathcal L_{x,X}\subset{\mathbb P}((t_xX)^*)$, see \eqref{eqLxE}. This is applied to quadratic varieties, showing that the Hilbert scheme of lines passing through a smooth point is a quadratic scheme, which coincides with the projectivized tangent cone at $x$ to the scheme $T_xX\cap X$, see Proposition \ref{quadraticLx}.
After introducing the base locus of the second fundamental form of $X$ at $x$, $B_{x,X}\subset {\mathbb P}((t_xX)^*)$, we show that in general $\mathcal L_{x,X}\subseteq B_{x,X}$ as schemes, with equality holding, as schemes, if $X\subset{\mathbb P}^N$ is quadratic, see Corollary \ref{quadraticformLx}, \cite[Theorem 2.4 and Section~4]{DD} and also Proposition \ref{settheoretically} here. Then we recall some results about lines on prime Fano manifolds to illustrate further how geometric properties of $X\subset{\mathbb P}^N$ are transferred to $\mathcal L_{x,X}\subset{\mathbb P}((t_xX)^*)$, see Proposition \ref{Fano} and Example \ref{excomp}. In Section~\ref{ext} we consider the classical problem of the existence of projective extensions $X\subset{\mathbb P}^{N+1}$ of a subvariety $Y\subset {\mathbb P}^N\subset{\mathbb P}^{N+1}$. It is well known that some special manifolds cannot be hyperplane sections of smooth varieties and that in some cases only the trivial extensions exist. These are given by cones over $Y$ with vertex a point $p\in{\mathbb P}^{N+1}\setminus{\mathbb P}^N$ (see e.g. \cite{CSegre}, \cite{Scorza1}, \cite{Scorza2}, \cite{Terracini} and also Section~\ref{ext} for precise definitions). The interest in the above problem (and further generalizations of it) has recently been renewed. Complete references, many results and a lot of interesting connections with other areas, such as deformation theory of isolated singularities, can be found in the monograph \cite{Badescu}; Chapters 1 and 5 are especially relevant for this problem. One could also look at the survey \cite{Zakdual}. Many sufficient conditions for the non-existence of non-trivial extensions of smooth varieties are known. These conditions are usually expressed, in the more general setting of extensions as ample divisors, by the vanishing of (infinitely many) cohomology groups of the twisted tangent bundle of $Y$ (or of its normal bundle in ${\mathbb P}^N$).
These results are general and cover a wide range of applications, see {\it loc. cit.}, but even in the simplest cases the computation of these cohomology groups can be quite complicated. In any case their geometrical meaning is not so obvious to the non-expert in the field. Here we prove a simple geometrical sufficient condition for non-extendability, Theorem \ref{criterion}, for smooth projective complex varieties uniruled by lines. The simplest version states that $Y\subset{\mathbb P}^N$ admits only trivial extensions $X\subset{\mathbb P}^{N+1}$ as soon as $\mathcal L_{y,Y}\subset {\mathbb P}((t_yY)^*)$ admits no smooth extension (a weaker condition than the thesis!). Indeed, one easily shows in Proposition \ref{extlines}, via the results of Section~1, that also $\mathcal L_{y, X}\subset{\mathbb P}((t_yX)^*)$ is a projective extension of $\mathcal L_{y,Y}\subset{\mathbb P}((t_yY)^*)$ for $y\in Y$ general. Under the hypothesis of Theorem \ref{criterion} one then deduces the existence of a line through $y$ and a singular point $p_y\in X$. Since $X$ has at most a finite number of singular points, $p_y=p$ does not vary with $y\in Y$ general, so that $X\subset{\mathbb P}^{N+1}$ is a cone with vertex $p$. The range of applications of Theorem \ref{criterion} is quite wide, see Corollaries \ref{Segre}, \ref{exthermitian} and \ref{extcontact}, allowing us to recover some results previously obtained differently, see \cite{Badescu} and Remark \ref{dualhom}. We were led to the analysis of the problem of extending smooth varieties by the desire to understand geometrically why in some well-known examples the geometry of $Y\subset{\mathbb P}^N$ forces every extension to be trivial, and by the curiosity to construct explicitly the cones extending $Y$.
Moreover, this approach reveals that Scorza's result about the non-extendability of ${\mathbb P}^a\times{\mathbb P}^b\subset {\mathbb P}^{ab+a+b}$ for $a+b\geq 3$, originally proved in \cite{Scorza2} and recovered later by many authors (see e.g. \cite{Badescu} and Corollary \ref{Segre} here), implies the non-extendability of a lot of homogeneous varieties via the description of their Hilbert scheme of lines. From this perspective the Pl\"ucker embedding of $Y=\mathbb G(r,m)$, with $1\leq r<m-1$ and for $r=1$ with $m\geq 4$, admits only trivial extensions because $\mathcal L_{y,Y}={\mathbb P}^r\times{\mathbb P}^{m-r-1}$ admits only trivial extensions (see \cite{Fiore} for an ad-hoc proof following Scorza's approach). Besides the applications contained in Corollaries \ref{Segre} and \ref{exthermitian}, we also show that our analysis can be used to provide a direct proof that $\nu_2({\mathbb P}^n)\subset{\mathbb P}^{\frac{n(n+3)}{2}}$ admits only trivial extensions, see Proposition \ref{Veronese}, a well-known classical fact originally proved by Scorza in \cite{Scorza1} and later obtained differently by many authors. {\bf Acknowledgements}. I am indebted to Paltin Ionescu for his useful remarks and comments leading to an improvement of the exposition and especially for various discussions on these subjects. Giovanni Staglian\`o carefully read the text and made useful comments on a preliminary version. Special thanks to Prof.\ Markus Brodmann for his invitation to give a talk at the Oberseminar at Z\"urich University in May 2010, for his kind hospitality and for his interest in my work. On that occasion I began to organize the material contained in Section~1.
\section{Geometry of (the Hilbert scheme of) lines contained in a variety and passing through a (general) point}\label{Lx} \subsection{Notation, definitions and preliminary results}\label{prel} Let $X\subset{\mathbb P}^N$ be a (non-degenerate) connected equidimensional projective variety of dimension $n\geq 1$, defined over a fixed algebraically closed field of characteristic zero, which from now on will be simply called a {\it projective variety}. If $X$ is smooth and irreducible, we shall call $X$ a {\it manifold}. Let $X_{\operatorname{reg}}=X\setminus\operatorname{Sing}(X)$ be the smooth locus of $X$. Let $t_xX$ denote {\it the affine tangent space to $X$ at $x$}, let $T_xX\subset{\mathbb P}^N$ denote {\it the projective tangent space at $x$ of $X\subset{\mathbb P}^N$} and for an arbitrary scheme $Z$ and for a closed point $z\in Z$ let $C_zZ$ denote {\it the affine tangent cone to $Z$ at $z$}. Let $\mathcal L_{x,X}$ denote the Hilbert scheme of lines contained in $X$ and passing through the point $x\in X$. For a line $L\subset X$ passing through $x$, we let $[L]\in \mathcal L_{x,X}$ be the corresponding point. Let $\pi_x: \mathcal{H}_x \to \mathcal{L}_{x,X}$ denote the universal family and let $\phi_x: \mathcal{H}_x \to X$ be the tautological morphism. From now on we shall always suppose that $x\in X_{\operatorname{reg}}$. Note that $\pi_x$ admits a section $s_x:\mathcal L_{x,X}\to {\mathcal E}_x\subset\mathcal H_x$, which is contracted by $\phi_x$ to the point $x$. Consider the blowing-up $\sigma_x: \operatorname{Bl}_xX\to X$ of $X$ at $x$. For every $[L]\in\mathcal L_{x,X}$ the line $L=\phi_x(\pi_x^{-1}([L]))$ is smooth at $x$ so that \cite[Lemma 4.3]{IN} and the universal property of the blowing-up ensure the existence of a morphism $\psi_x : {\mathcal H}_x \to \operatorname{Bl}_xX$ such that $\sigma_x \circ \psi_x = \phi_x$.
So we have the following diagram \begin{equation}\label{joindiagram1}\raisebox{.7cm}{\xymatrix{ &\mathcal{H}_x \ar[d]_{\pi_x} \ar[dr]^{\phi_x}\ar[r]^{\psi_x}&\operatorname{Bl}_xX\ar[d]^{\sigma_x}\\ &\mathcal{L}_{x,X}&X. }} \end{equation} In particular, $\psi_x$ maps the section ${\mathcal E}_x$ to $E_x$, the exceptional divisor of $\sigma_x$. Let $\widetilde \psi _x : {\mathcal E}_x \to E_x$ be the restriction of $\psi_x$ to $\mathcal E_x$. We can define the morphism \begin{equation}\label{taux} \tau_x=\tau_{x,X}=\widetilde \psi_x\circ s_x:\mathcal L_{x,X}\to {\mathbb P}((t_xX)^*)=E_x={\mathbb P}^{n-1}, \end{equation} which associates to each line $[L]\in \mathcal L_{x,X}$ the corresponding tangent direction through $x$, i.e. $\tau_x([L])={\mathbb P}((t_xL)^*)$. The morphism $\tau_x$ is clearly injective and we claim that $\tau_x$ is a closed immersion. Indeed, by taking in the previous construction $X={\mathbb P}^N$ the corresponding morphism $\tau_{x,{\mathbb P}^N}:\mathcal L_{x,{\mathbb P}^N}\to {\mathbb P}((t_x{\mathbb P}^N)^*)={\mathbb P}^{N-1}$ is an isomorphism between $\mathcal L_{x,{\mathbb P}^N}$ and the exceptional divisor of $\operatorname{Bl}_x{\mathbb P}^N$. By definition the inclusion $X\subset{\mathbb P}^N$ induces a closed embedding $i_x:\mathcal L_{x,X}\to\mathcal L_{x,{\mathbb P}^N}$. If $j_x:{\mathbb P}((t_xX)^*)\to {\mathbb P}((t_x{\mathbb P}^N)^*)$ is the natural closed embedding, then we have the following commutative diagram \begin{equation}\label{diagram2}{\xymatrix{ &\mathcal{L}_{x,X} \ar[d]_{i_x} \ar[r]^{\tau_{x,X}}&{\mathbb P}((t_xX)^*)\ar[d]^{j_x}\\ &\mathcal{L}_{x,{\mathbb P}^N}\ar[r]^{\tau_{x,{\mathbb P}^N}}&{\mathbb P}((t_x{\mathbb P}^N)^*), }} \end{equation} proving the claim. 
For $x\in X_{\operatorname{reg}}$ such that $\mathcal L_{x,X}\neq\emptyset$, we shall always identify $\mathcal L_{x,X}$ with $\tau_x(\mathcal L_{x,X})$ and we shall naturally consider $\mathcal L_{x,X}$ as a subscheme of ${\mathbb P}^{n-1}={\mathbb P}((t_xX)^*)$. We denote by $\mathcal C_x$ the scheme theoretic image of $\mathcal H_x$, that is $\phi_x(\mathcal H_x)=\mathcal C_x\subset X$. Via \eqref{joindiagram1} we deduce the following relation: \begin{equation}\label{tangentconeCx} {\mathbb P}(C_x(\mathcal C_x))=\mathcal L_{x,X}, \end{equation} as subschemes of ${\mathbb P}((t_xX)^*)$, where ${\mathbb P}(C_x(\mathcal C_x))$ is the {\it projectivized tangent cone to $\mathcal C_x$ at $x$}, see \cite[II,Section~ 3]{Mumford}. \subsection{Singularities of $\mathcal L_{x,X}$}\label{sing} We begin by studying the intrinsic geometry of $\mathcal L_{x,X}\subset{\mathbb P}^{n-1}$. When it is clear from the context which variety $X\subset{\mathbb P}^N$ we are considering we shall write $\mathcal L_x$ instead of $\mathcal L_{x,X}$. The normal bundle $N_{L/X}$ is locally free being a subsheaf of the locally free sheaf $N_{L/{\mathbb P}^N}\simeq\O_{{\mathbb P}^1}(1)^{N-1}$. If $L\cap X_{\operatorname{reg}}\neq\emptyset$, then $N_{L/X}$ is locally free of rank $n-1$ and more precisely \begin{equation}\label{ai1} N_{L/X}\simeq\bigoplus_{i=1}^{n-1} \O_{{\mathbb P}^1}(a_i), \end{equation} with $a_i\leq 1$ since $N_{L/X}\subset N_{L/{\mathbb P}^N}$. If $N_{L/X}$ is also generated by global sections, then \begin{equation}\label{split} N_{L/X}\simeq \O_{{\mathbb P}^1}(1)^{s(L,X)}\oplus\O_{{\mathbb P}^1}^{n-1-s(L,X)}. \end{equation} Therefore if $N_{L/X}$ is generated by global sections, then $\mathcal L_x$ is unobstructed at $[L]$, that is $h^1(N_{L/X}(-1))=0$, $\mathcal L_x$ is smooth at $[L]$ and $\dim_{[L]}(\mathcal L_x)=h^0(N_{L/X}(-1))=s(L,X)$, where $s(L,X)\geq 0$ is the integer defined in \eqref{split}. 
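A standard illustration of \eqref{ai1} and \eqref{split}, added here for concreteness (this example is not discussed in the surrounding text): a line $L$ through a point $x$ of a smooth quadric hypersurface $Q=Q^n\subset{\mathbb P}^{n+1}$, $n\geq 2$.

```latex
% Normal bundle sequence for a line L on a smooth quadric Q = Q^n in P^{n+1}:
0 \to N_{L/Q} \to N_{L/\mathbb{P}^{n+1}} \simeq \mathcal{O}_{\mathbb{P}^1}(1)^{\oplus n}
  \to N_{Q/\mathbb{P}^{n+1}}|_L \simeq \mathcal{O}_{\mathbb{P}^1}(2) \to 0.
% The classical computation gives
N_{L/Q} \simeq \mathcal{O}_{\mathbb{P}^1}(1)^{\oplus (n-2)} \oplus \mathcal{O}_{\mathbb{P}^1},
\qquad s(L,Q) = n-2,
\qquad \dim_{[L]}(\mathcal{L}_x) = h^0(N_{L/Q}(-1)) = n-2,
```

in agreement with the classical fact that $\mathcal L_{x,Q}\subset{\mathbb P}^{n-1}$ is a smooth quadric of dimension $n-2$.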
For $x\in X_{\operatorname{reg}}$, let $$S_x=S_{x,X}=\{[L]\in \mathcal L_x\text{ such that } L\cap \operatorname{Sing}(X)\neq \emptyset\;\}\subseteq \mathcal L_x.$$ Then $S_{x,X}$ has a natural scheme structure and the previous inclusion holds at the scheme theoretic level. If $X$ is smooth, then $S_{x,X}=\emptyset$. Moreover, if $L\subset X$ is a line passing through $x\in X_{\operatorname{reg}}$, clearly $[L]\not\in S_{x,X}$ if and only if $L\subset X_{\operatorname{reg}}$. \medskip We now prove that a singular point of $\mathcal L_x$ produces a line passing through $x$ and through a singular point of $X$, a stronger condition than the mere existence of a singular point on $X$. These results are well known to experts, at least for manifolds, see \cite[Proposition 1.5]{Hwang} and also \cite[Proposition 2.2]{QEL1}. In \cite{DG}, the singularities of the Hilbert scheme of lines contained in a projective variety are related to some geometrical properties of the variety. \medskip \begin{Proposition}\label{Yx} Let notation be as above and let $X\subset{\mathbb P}^N$ be an irreducible projective variety of dimension $n\geq 2$. Then for $x\in X_{\operatorname{reg}}$ general: \begin{enumerate} \item $\mathcal L_x\subset{\mathbb P}^{n-1}$ is smooth outside $S_{x,X}$, that is $\operatorname{Sing}(\mathcal L_x)\subseteq S_x.$ In particular if $X\subset{\mathbb P}^N$ is smooth and if $x\in X$ is general, then $\mathcal L_x\subset{\mathbb P}^{n-1}$ is a smooth variety. \medskip \item If $\mathcal L_x^j$, $j=1,\ldots,m$, are the irreducible components of $\mathcal L_x$ and if \[\dim(\mathcal L_x^l)+\dim(\mathcal L_x^p)\geq n-1 \quad \mbox{for some } l\neq p,\] then $\mathcal L_x$ is singular, $X$ is singular and there exists a line $[L]\in \mathcal L_x$ such that $L\cap \operatorname{Sing}(X)\neq\emptyset$. 
\end{enumerate} \end{Proposition} \begin{proof} There exists an open dense subset $U\subseteq X_{\operatorname{reg}}$ such that for every line $L\subset X_{\operatorname{reg}}$ such that $L\cap U\neq \emptyset$ the normal bundle $N_{L/X}$ is generated by global sections, see for example \cite[Proposition 4.14]{Debarre}. Combining this result with the above discussion, we deduce that for every $x\in U$ the variety $\mathcal L_x\subset{\mathbb P}^{n-1}$ is smooth outside $S_x$, proving the first assertion. The condition on the dimensions of two irreducible components of $\mathcal L_x$ in (2) ensures that these components have to intersect in ${\mathbb P}^{n-1}$. A point of intersection is a singular point of $\mathcal L_x\subset{\mathbb P}^{n-1}$. This forces $X$ to be singular by the first part and also the existence of a line $[L]\in S_x$, which by definition cuts $\operatorname{Sing}(X)$. \end{proof} \subsection{Equations for $\mathcal L_{x,X}\subset{\mathbb P}((t_xX)^*)$}\label{equations} \medskip We now follow and expand the treatment outlined in \cite[Theorem 2.4]{DD} by looking at the equations defining $\mathcal L_x\subset{\mathbb P}^{n-1}$ for $x\in X_{\operatorname{reg}}$. Let $$ X=V(f_1,\ldots, f_m)\subset{\mathbb P}^{N}\hskip 3cm (\ast),$$ be a projective equidimensional connected variety, not necessarily irreducible, let $x\in X_{\operatorname{reg}}$, let $n=\dim(X)$ and let $c=\operatorname{codim}(X)=N-n$. Thus we are assuming that $X\subset{\mathbb P}^N$ is scheme theoretically the intersection of $m\geq 1$ hypersurfaces of degrees $d_1\geq d_2\geq\ldots\geq d_m\geq 2.$ Moreover it is implicitly assumed that $m$ is minimal, i.e. none of the hypersurfaces contains the intersection of the others. 
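A standard example of a presentation $(\ast)$, added here for illustration, in which the minimal $m$ exceeds the codimension $c$: the twisted cubic curve.

```latex
% The twisted cubic C in P^3 (n = 1, c = 2) is scheme-theoretically (indeed
% ideal-theoretically) cut out by the 2x2 minors of a 2x3 matrix of linear forms:
C = V(f_1, f_2, f_3) \subset \mathbb{P}^3, \qquad
f_1 = x_0x_2 - x_1^2,\quad f_2 = x_0x_3 - x_1x_2,\quad f_3 = x_1x_3 - x_2^2.
% Here m = 3 is minimal: the intersection of any two of these quadrics is C
% together with a line (e.g. V(f_1, f_2) = C \cup V(x_0, x_1), of total degree
% 4 = 3 + 1, as predicted by Bezout), so no two of them suffice;
% thus m = 3 > c = 2, with d_1 = d_2 = d_3 = 2.
```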
Define, following \cite{DD}, the integer $$d:=\min\{\sum_{i=1}^{c}(d_i-1)\text{ for expressions $(\ast)$ as above}\}\geq c.$$ With these definitions $X\subset{\mathbb P}^N$ (or more generally a scheme $Z\subset{\mathbb P}^N$) is called {\it quadratic} if it is scheme theoretically an intersection of quadrics, which means that we can assume $d_1=2$. In particular $X\subset{\mathbb P}^N$ is quadratic if and only if $d=c$. \medskip We can choose homogeneous coordinates $(x_0:\ldots:x_N)$ on ${\mathbb P}^N$ such that $x=(1:0:\ldots:0),$ $T_xX=V(x_{n+1},\ldots, x_N).$ Let $\mathbb A^N={\mathbb P}^N\setminus V(x_0)$ with affine coordinates $(y_1,\ldots, y_N)$, that is $y_l=\frac{x_l}{x_0}$ for every $l=1,\ldots, N$. Let $\widetilde {\mathbb P}^N=\operatorname{Bl}_x{\mathbb P}^N$ with exceptional divisor $E'\simeq{\mathbb P}((t_x{\mathbb P}^N)^*)={\mathbb P}^{N-1}$ and let $\widetilde X=\operatorname{Bl}_xX$ with exceptional divisor $E={\mathbb P}((t_xX)^*)={\mathbb P}^{n-1}$. Looking at the graph of the projection from $x$ onto $V(x_0)$ we can naturally identify the projectivization of $\mathbb A^N\setminus \mathbf 0=\mathbb A^N\setminus x$ with $E'$ and with the projective hyperplane $V(x_0)={\mathbb P}^{N-1}$. Let $f_i=f_i^1+f_i^2+\cdots+f_i^{d_i}$, with $f_i^j$ homogeneous of degree $j$ in the variables $(y_1,\ldots, y_N)$. So $f_1^1=\ldots=f_m^1=0$ are the equations of $t_xX=T_xX\cap\mathbb A^N\subset\mathbb A^N$, which reduce to $y_{n+1}=\ldots=y_N=0$ by the previous choice of coordinates, yielding $$V(f_1^1,\cdots, f_m^1)={\mathbb P}((t_xX)^*)\subset {\mathbb P}(( t_x{\mathbb P}^N)^*)={\mathbb P}^{N-1}.$$ With the previous identifications $\mathcal L_{x,{\mathbb P}^N}=E'={\mathbb P}^{N-1}={\mathbb P}((t_x{\mathbb P}^N)^*)$. We now write a set of equations defining $\mathcal L_x\subset E\subset E'$ as a subscheme of $E'$ and of $E$. By definition $\mathbf y=(y_1:\ldots:y_n)$ are homogeneous coordinates on $E\subset E'$. 
For every $i=1,\ldots, m$ and for every $j=2,\ldots, d_i$, let $$\widetilde f_i^j(\mathbf y)=f_i^j(y_1,\ldots,y_n,0,0,\ldots,0,0).$$ Then we have that $\mathcal L_x\subset E'$ is the scheme $$ V(f_1^1,f_1^2,\cdots,f_1^{d_1}, \cdots, f_m^1,f_m^2,\cdots,f_m^{d_m})\subset E',$$ while $\mathcal L_x\subset E$ is the scheme \begin{equation}\label{eqLxE} V(\widetilde f_1^2,\cdots,\widetilde f_1^{d_1}, \cdots, \widetilde f_m^2,\cdots,\widetilde f_m^{d_m}), \end{equation} so that it is scheme theoretically defined by at most $\sum_{i=1}^m(d_i-1)$ equations. The equations of $T_xX\cap X\cap \mathbb A^N=t_xX\cap X\cap \mathbb A^N$, as a subscheme of $\mathbb A^N$, are $$V(f_1^1, \ldots, f_m^1, f_1^1+f_1^2+\cdots+f_1^{d_1},\ldots, f_m^1+f_m^2+\cdots+f_m^{d_m})=$$ \begin{equation}\label{eqTxAN} V(f_1^1, \ldots, f_m^1, f_1^2+\cdots+f_1^{d_1},\ldots, f_m^2+\cdots+f_m^{d_m})\subset\mathbb A^N. \end{equation} Thus the equations of $T_xX\cap X\cap\mathbb A^N=t_xX\cap X\cap\mathbb A^N$ as a subscheme of $t_x(X\cap \mathbb A^N)=t_xX$ are \begin{equation}\label{eqTxtx} V(\widetilde f_1^2+\cdots+\widetilde f_1^{d_1},\ldots, \widetilde f_m^2+\cdots+\widetilde f_m^{d_m})\subset t_xX=\mathbb A^n. \end{equation} Let $I=\langle \widetilde f_1^2+\cdots+\widetilde f_1^{d_1},\ldots, \widetilde f_m^2+\cdots+\widetilde f_m^{d_m}\rangle\subset\mathbb C[y_1,\ldots,y_n]=S$ and let $I^*$ be the ideal generated by the {\it initial forms} of elements of $I$. Remark that if $I$ is homogeneous and generated by forms of the same degree, then clearly $I=I^*$. Then the affine tangent cone to $T_xX\cap X$ at $x$ is $C_x(T_xX\cap X)={\rm Spec}(\frac{S}{I^*})$ so that \begin{equation}\label{exp1} {\mathbb P}(C_x(T_xX\cap X))={\rm Proj} (\frac{S}{I^*}), \end{equation} see \cite[III, Section~3]{Mumford}.
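A minimal worked instance of \eqref{eqLxE}, added here for illustration: the lines through a point of a smooth quadric surface.

```latex
% X = V(x_0x_3 - x_1x_2) in P^3, x = (1:0:0:0), so T_xX = V(x_3); in the affine
% coordinates y_l = x_l/x_0 the dehomogenized equation decomposes as
f = y_3 - y_1y_2, \qquad f^1 = y_3, \qquad f^2 = -y_1y_2.
% Setting y_3 = 0 (restriction to t_xX) gives \widetilde{f}^2(\mathbf{y}) = -y_1y_2, so
\mathcal{L}_{x,X} = V(y_1y_2) \subset E = \mathbb{P}((t_xX)^*) = \mathbb{P}^1,
% a length-two scheme: its points (1:0) and (0:1) correspond to the two lines
% of the rulings through x, namely V(x_2, x_3) and V(x_1, x_3).
```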
Let $J\subset S$ be the homogeneous ideal generated by the polynomials in \eqref{eqLxE} defining $\mathcal L_{x,X}$ scheme theoretically, that is $\mathcal L_{x,X}={\rm Proj}(\frac{S}{J})\subset{\mathbb P}((t_xX)^*)$. Clearly $I^*\subseteq J$, yielding the closed embedding of schemes \begin{equation}\label{inclLxBx} \mathcal L_{x,X}\subseteq {\mathbb P}(C_x(T_xX\cap X)). \end{equation} If $X\subset{\mathbb P}^N$ is quadratic, then $I=I^*=J$. In conclusion we have proved the following results. \medskip \begin{Proposition}\label{quadraticLx} Let $X\subset{\mathbb P}^N$ be a (non-degenerate) projective variety, let $x\in X_{\operatorname{reg}}$ be a point and let notation be as above. If $X\subset{\mathbb P}^N$ is quadratic, then \begin{equation}\label{fund0} T_xX\cap X\cap\mathbb A^N=C_x(T_xX\cap X)\subset t_xX \end{equation} and \begin{equation}\label{fund} \mathcal L_{x,X}={\mathbb P}(C_x(T_xX\cap X))\subset{\mathbb P}((t_xX)^*). \end{equation} In particular if $X\subset{\mathbb P}^N$ is quadratic, then the scheme $\mathcal L_{x,X}\subset{\mathbb P}((t_xX)^*)$ is quadratic. \end{Proposition} \subsection{$\mathcal C_x$ versus $T_xX\cap X$ for a quadratic variety}\label{conesex} The closed embedding \eqref{inclLxBx} holds at the scheme theoretic level. If $\mathcal L_{x,X}$ were reduced, or better smooth, it would be enough to prove that there exists an inclusion as sets. Since $x\in X_{\operatorname{reg}}$ was arbitrary we cannot control a priori the structure of $\mathcal L_{x,X}$ even if $X\subset{\mathbb P}^N$ is a manifold. Recall that by Proposition \ref{Yx} $\mathcal L_{x,X}$ is smooth as soon as $X$ is a manifold and $x\in X$ is a general point. {\it From now on we shall suppose $X\subset{\mathbb P}^N$ quadratic}. 
Then \begin{enumerate} \item $(\mathcal C_x)_{\operatorname{red}}=(T_xX\cap X)_{\operatorname{red}}$; \item if $X\subset{\mathbb P}^N$ is a manifold and if $x\in X$ is a general point, then $\mathcal C_x=(T_xX\cap X)_{\operatorname{red}}$; \item the strict transforms of $\mathcal C_x$ and of $T_xX\cap X$ on $\operatorname{Bl}_xX$ cut the exceptional divisor $E={\mathbb P}((t_xX)^*)$ of $\operatorname{Bl}_xX$ in the same scheme $\mathcal L_{x,X}$ (see \eqref{tangentconeCx} and \eqref{fund}); \item if $x\in X$ is a general point on a quadratic manifold $X\subset{\mathbb P}^N$ and if $I^*$ is saturated, then $T_xX\cap X$ is reduced in a neighborhood of $x$ so that it coincides with $\mathcal C_x$ in a neighborhood of $x$. Indeed, since $T_xX\cap X\cap \mathbb A^n={\rm Spec}(\frac{S}{I})$, with $I=I^*=J$ homogeneous and saturated, it follows that $T_xX\cap X$ is reduced at $x$; therefore it is also reduced in a neighborhood of $x$. \end{enumerate} Already for quadratic manifolds there exist many important differences between ${\mathbb P}(C_x(T_xX\cap X))\subset{\mathbb P}((t_xX)^*)$ and $C_x(T_xX\cap X)=T_xX\cap X\cap \mathbb A^N\subset t_xX$ and also between $T_xX\cap X$ and the cone $\mathcal C_x\subseteq T_xX\cap X$. We shall discuss some examples in order to analyze more closely these important schemes, which contain a lot of geometrical information. \begin{Example} ({\it $T_xX\cap X$ non-reduced only at $x$})\label{exscrolls} Remark that $t_x(T_xX\cap X)=t_xX$ so that $\langle C_x(T_xX\cap X)\rangle=t_xX$, while in some cases ${\mathbb P}(C_x(T_xX\cap X))$ is degenerate in ${\mathbb P}((t_xX)^*)$. Consider a rational normal scroll $X\subset{\mathbb P}^N$, different from the Segre varieties ${\mathbb P}^1\times{\mathbb P}^{n-1}$, $n\geq 2$, and a general point $x\in X$. It is well known that $X\subset{\mathbb P}^N$ is quadratic so that $\mathcal L_{x,X}={\mathbb P}(C_x(T_xX\cap X))\subset{\mathbb P}((t_xX)^*)$ by \eqref{fund}.
On the other hand, if ${\mathbb P}^{n-1}_x$ is the unique ${\mathbb P}^{n-1}$ of the ruling passing through $x\in X$, it is easy to see, with notation as above, that in this case $$\mathcal L_{x,X}={\mathbb P}({\mathbb P}^{n-1}_x\cap\mathbb A^n)={\mathbb P}^{n-2}\subset{\mathbb P}((t_xX)^*)={\mathbb P}^{n-1}.$$ This is possible because in this example $T_xX\cap X$ and $C_x(T_xX\cap X)$ are not reduced at $x$. Indeed, the point $x\in C_x(T_xX\cap X)$ corresponds to the irrelevant ideal of $S$, and $I^*$ is not saturated: the equation defining the hyperplane $\mathcal L_{x,X}$ belongs to the saturation of $I^*$, but not to $I^*$ itself ($I^*$ is generated by quadratic polynomials!). \end{Example} \medskip In the case of rational normal scrolls discussed in Example \ref{exscrolls} we saw that $T_xX\cap X\setminus x=\mathcal C_x\setminus x$ as schemes: the affine tangent cones are different affine schemes, but the projectivized tangent cones coincide. By choosing suitable quadrics $Q_1,\ldots, Q_c$ we shall see in subsection \ref{implicit} that the complete intersection $Y=Q_1\cap\ldots\cap Q_c$ coincides locally with $X$ around $x$. Thus $T_xY\cap Y$ and $T_xX\cap X$ coincide locally around $x$. In particular the intersection of their strict transforms on $\operatorname{Bl}_xX$ with the exceptional divisor is the same, so that $\mathcal L_{x,X}=\mathcal L_{x,Y}$ and the latter scheme can be defined scheme theoretically by $r\leq c$ linearly independent quadrics by \eqref{eqLxE}. In any case the double nature of $T_xX\cap X$ as a subscheme of $T_xX$ and $X$ plays a central role for its infinitesimal properties at $x$, measured exactly by ${\mathbb P}(C_x(T_xX\cap X))\subset{\mathbb P}((t_xX)^*)$. It is useful to think of ${\mathbb P}(C_x(T_xX\cap X))\subset{\mathbb P}((t_xX)^*)$ as being the base locus scheme of the restriction to the exceptional divisor over $x$ of the projection of $X$ from $T_xX$, as we shall do in the next section.
We shall provide in this way another reason why $\mathcal L_{x,X}$ can be defined scheme theoretically by at most $c$ quadratic equations for an arbitrary point $x\in X_{\operatorname{reg}}$. \subsection{Tangential projection and second fundamental form}\label{second} There are several possible equivalent definitions of the projective second fundamental form $|II_{x,X}|\subseteq{\mathbb P}(S^2(t_xX))$ of a connected equidimensional projective variety $X\subset{\mathbb P}^N$ at $x\in X_{\operatorname{reg}}$, see for example \cite[3.2 and end of Section~3.5]{IL}. We use the one related to tangential projections, as in \cite[Remark 3.2.11]{IL}. Suppose $X\subset{\mathbb P}^N$ is non-degenerate, as always, let $x\in X_{\operatorname{reg}}$ and consider the projection from $T_xX$ onto a disjoint ${\mathbb P}^{c-1}$ \begin{equation}\label{tangentdef} \pi_x:X\dasharrow W_x\subseteq{\mathbb P}^{c-1}. \end{equation} The map $\pi_x$ is not defined along the scheme $T_xX\cap X$, which contains $x$, and it is associated to the linear system of hyperplane sections cut out by hyperplanes containing $T_xX$, or equivalently by the hyperplane sections singular~at~$x$. Let $\phi:\operatorname{Bl}_xX\to X$ be the blow-up of $X$ at $x$, let \[E={\mathbb P}((t_xX)^*)={\mathbb P}^{n-1}\subset\operatorname{Bl}_xX\] be the exceptional divisor and let $H$ be a hyperplane section of $X\subset{\mathbb P}^N$. The induced rational map $\widetilde{\pi}_x:\operatorname{Bl}_xX\dasharrow{\mathbb P}^{c-1}$ is defined as a rational map along $E$ since $X\subset{\mathbb P}^N$ is not a linear space, see also the discussion below. The restriction of $\widetilde{\pi}_x$ to $E$ is given by a linear system in $|\phi^*(H)-2E|_{|E}\subseteq|-2E_{|E}|=|\O_{{\mathbb P}((t_xX)^*)}(2)|={\mathbb P}(S^2(t_xX))$, whose base locus scheme will be denoted by $B_{x,X}$. Consider the strict transform scheme of $T_xX\cap X$ on $\operatorname{Bl}_xX$, denoted from now on by $\widetilde T=\operatorname{Bl}_x(T_xX\cap X)$. 
Then $\widetilde T$ is the base locus scheme of $\widetilde{\pi}_x$ and the restriction of $\widetilde{\pi}_x$ to $E$ has base locus scheme equal to \begin{equation}\label{Base} \widetilde T\cap E={\mathbb P}(C_x(T_xX\cap X))=B_{x,X}\subset {\mathbb P}((t_xX)^*). \end{equation} \begin{Definition} The {\it second fundamental form} $|II_{x,X}|\subseteq{\mathbb P}(S^2(t_xX))$ of a connected equidimensional non-degenerate projective variety $X\subset{\mathbb P}^N$ of dimension $n\geq 2$ at a point $x\in X_{\operatorname{reg}}$ is the non-empty linear system of quadric hypersurfaces in ${\mathbb P}((t_xX)^*)$ defining the restriction of $\widetilde{\pi}_x$ to $E$, and $B_{x,X}\subset{\mathbb P}((t_xX)^*)$ is the so-called {\it base locus scheme of the second fundamental form of $X$ at $x$}. \end{Definition} Clearly $\dim(|II_{x,X}|)\leq c-1$ and $\widetilde{\pi}_x(E)\subseteq W_x\subseteq{\mathbb P}^{c-1}$. Let $\widetilde I\subset S$ be the homogeneous ideal generated by the $r\leq c$ linearly independent quadratic forms in the second fundamental form of $X$ at $x$. Then via \eqref{Base} we obtain \begin{equation}\label{projBx} \operatorname{Proj}\Big(\frac{S}{\widetilde I}\Big)=B_{x,X}={\mathbb P}(C_x(T_xX\cap X))=\operatorname{Proj}\Big(\frac{S}{I^*}\Big)\subset{\mathbb P}((t_xX)^*). \end{equation} In conclusion, we have proved the following result by combining \eqref{Base} with \eqref{fund} and \eqref{projBx}. \medskip \begin{Corollary}\label{quadraticformLx} Let $X\subset{\mathbb P}^N$ be a non-degenerate projective variety, let $x\in X_{\operatorname{reg}}$ be a point and let notation be as above. Then: \begin{enumerate} \item $\mathcal L_{x,X}\subseteq B_{x,X}$; \item if $X\subset{\mathbb P}^N$ is quadratic, then equality holds and $\mathcal L_{x,X}\subset{\mathbb P}((t_xX)^*)$ can be defined scheme theoretically by the $r\leq c$ quadratic equations defining the second fundamental form of $X$ at $x$.
\end{enumerate} \end{Corollary} \medskip \begin{Remark}\label{ci} The previous result has many important applications. We recall that, as proved in \cite{DD}, if $X\subset{\mathbb P}^N$ is a quadratic manifold and if $c\leq\frac{n-1}{2}$, then, for $x\in X$ general, $\mathcal L_{x,X}\subset{\mathbb P}((t_xX)^*)$ is the complete intersection of the $c$ linearly independent quadratic polynomials defining $|II_{x,X}|$. Then $\mathcal L_{x,X}$ has dimension $n-1-c$, from which it follows that $X\subset{\mathbb P}^N$ is a complete intersection. This proves the Hartshorne Conjecture on complete intersections in the quadratic case and also leads to the classification of quadratic Hartshorne manifolds, see \cite[Theorem 2.4 and Section 4]{DD} for details. The paper \cite{PR} also considers irreducible projective varieties $X\subset{\mathbb P}^{2n+1}$ which are 3--covered by twisted cubics, i.e.\ such that through three general points of $X\subset{\mathbb P}^{2n+1}$ there passes a twisted cubic contained in $X$. A key remark for the classification of these varieties is \cite[Theorem 5.2]{PR}, which among other things shows that for such an $X$ the equality $\mathcal L_{x,X}=B_{x,X}$ holds for $x\in X$ general. A posteriori all the known examples of varieties 3--covered by twisted cubics are projectively equivalent to the so-called {\it twisted cubics over Jordan algebras}, which are quadratic, see {\it loc.\ cit.} for definitions and details. This fact also has many important consequences for the theory of Jordan algebras and for the classification of {\it quadro-quadric} Cremona transformations, as shown in the forthcoming paper \cite{PR2}. \end{Remark} \subsection{Approach to $B_{x,X}=\mathcal L_{x,X}$ via \cite{BEL}}\label{implicit} For manifolds $X\subset{\mathbb P}^N$ there is another approach based on a construction of \cite{BEL} elaborating and generalizing an idea due to Severi, see {\it loc.
cit.} It can be used to give a proof of a weaker form of Corollary \ref{quadraticformLx} (in the sense that we shall prove it only for $x\in X$ general); this approach illustrates the local nature of the second fundamental form. Let us remark that the treatment in the general setting developed in the previous sections is unavoidable because the point $x\in X$ is not necessarily general on the complete intersection $Y\supseteq X$ we now construct. It was proved in \cite{BEL} that given a manifold $X=V(f_1,\ldots, f_m)\subset{\mathbb P}^N$ as above, we can choose $g_i\in H^0({\mathcal I}_X(d_i))$, $i=1,\ldots, c$ such that \begin{equation}\label{YX} Y=V(g_1,\ldots, g_c)=X\cup X', \end{equation} where $X'$ (if nonempty) meets $X$ in a divisor $D$. Moreover from \eqref{YX} it follows \begin{equation}\label{chernD} \O_X(D)\simeq\det(\frac{{\mathcal I}_X}{{\mathcal I}_X^2})\otimes\O_X(\sum_{i=1}^{c}d_i)\simeq \O_X(d-n-1)\otimes\omega_X^*, \end{equation} see also \cite[pg. 597]{BEL}. We now illustrate the usefulness of this construction by proving some facts and results contained in \cite[Theorem 2.4]{DD}. \medskip Suppose that $X\subset {\mathbb P}^N$ is a quadratic manifold and consider a point $x\in U=X\setminus\operatorname{Supp}(D)$. By definition $Y\setminus\operatorname{Supp} (D)=U\amalg V$, where $V=X'\setminus\operatorname{Supp}(D)$. Consider the two schemes $T_xX\cap X\cap U$ and $T_xY\cap Y\cap U$. Since $t_xX=t_xY$ and since $Y\cap U=X\cap U$ by the above construction, we obtain the equality as schemes $$C_x(T_xX\cap X)=C_x(T_xX\cap X\cap U)=C_x(T_xY\cap Y\cap U)=C_x(T_xY\cap Y).$$ Via \eqref{fund} we deduce the following equality as subschemes of ${\mathbb P}((t_xX)^*)$: \begin{equation}\label{asinDD} \mathcal L_{x,Y}={\mathbb P}(C_x(T_xY\cap Y))={\mathbb P}(C_x(T_xX\cap X))=\mathcal L_{x,X}. 
\end{equation} Since $\mathcal L_{x,Y}$ can be scheme-theoretically defined by $r\leq c$ linearly independent quadratic equations, the same is true for $\mathcal L_{x,X}$. Dropping now the assumption that $X$ is quadratic: since $x\in X$ is general, $\mathcal L_{x,X}$ is smooth and hence reduced. Clearly a line $L$ passing through $x$ is contained in $X$ if and only if it is contained in $Y$, yielding $\mathcal L_{x,X}=(\mathcal L_{x,Y})_{\operatorname{red}}$, see \cite[Theorem 2.4]{DD}. We have proved: \medskip \begin{Proposition}\label{settheoretically} Let $X\subset{\mathbb P}^N$ be a manifold, let notation be as above and let $x\in U$ be a general point. Then: \begin{enumerate} \item $\mathcal L_{x,X}=(\mathcal L_{x,Y})_{\operatorname{red}}$, so that $\mathcal L_{x,X}$ can be defined set theoretically by the $r\leq d$ equations defining $\mathcal L_{x,Y}$ scheme theoretically. In particular, if $d\leq n-1$, then $\mathcal L_{x,X}\neq \emptyset$. \item If $X\subset{\mathbb P}^N$ is quadratic, then $\mathcal L_{x,X}=\mathcal L_{x,Y}$, so that $\mathcal L_{x,X}\subset{\mathbb P}((t_xX)^*)$ is a quadratic manifold defined scheme theoretically by at most $c$ quadratic equations. \end{enumerate} \end{Proposition} \subsection{Lines on prime Fano manifolds}\label{Fano} Let $X\subset{\mathbb P}^N$ be a (non-degenerate) manifold of dimension $n\geq 2$. For a general point $x\in X$ we know that $\mathcal L_x\subset{\mathbb P}^{n-1}$ is smooth, by Proposition \ref{Yx}. There are well-known examples in which $\mathcal L_x\subset{\mathbb P}^{n-1}$ is not irreducible, such as $X={\mathbb P}^a\times{\mathbb P}^b\subset{\mathbb P}^{ab+a+b}$ Segre embedded, and also examples where $\mathcal L_x\subset{\mathbb P}^{n-1}$ is degenerate, see Example \ref{exscrolls} and also Table \eqref{contact} below.
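\medskip For the Segre example just mentioned, the reducibility of $\mathcal L_x$ can be made explicit: a line through $x$ contained in $X={\mathbb P}^a\times{\mathbb P}^b$ necessarily lies in one of the two fibers through $x$ of the projections onto the factors, so that $$\mathcal L_x=\mathcal L_x^1\amalg \mathcal L_x^2={\mathbb P}^{a-1}\amalg{\mathbb P}^{b-1}\subset{\mathbb P}^{a+b-1}={\mathbb P}^{n-1},$$ both components being linearly embedded. This description will be used again in the proof of Corollary \ref{Segre}.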
A relevant class of manifolds for which the properties of smoothness, irreducibility and non-degeneracy of $X\subset{\mathbb P}^N$ are transferred to $\mathcal L_x\subset{\mathbb P}^{n-1}$ consists of the prime Fano manifolds of high index, which we now define. A manifold $X\subset{\mathbb P}^N$ is called a {\it prime Fano manifold} if $-K_X$ is ample and if $\operatorname{Pic}(X)\simeq\mathbb Z\langle\O(1)\rangle$. The {\it index of $X$} is the positive integer $i(X)$ defined by $-K_X=i(X)H$, with $H$ a hyperplane section of $X\subset{\mathbb P}^N$. Let us recall some fundamental facts. Part (1) below is well known and follows from the previous discussion, except for a fundamental theorem of Mori which implies that for prime Fano manifolds of index greater than $\frac{n+1}{2}$ necessarily $\mathcal L_x\neq\emptyset$, see \cite{Mori} and \cite[Theorem V.1.6]{Kollar}. \medskip \begin{Proposition} Let $X\subset{\mathbb P}^N$ be a projective manifold and let $x\in X$ be a general point. Then \begin{enumerate} \item If $\mathcal L_x\neq\emptyset$, then for every $[L]\in\mathcal L_x$ we have $\dim_{[L]}(\mathcal L_x)=-K_X\cdot L-2.$ In particular for prime Fano manifolds of index $i(X)\geq \frac{n+3}{2}$ the variety $\mathcal L_x\subset{\mathbb P}^{n-1}$ is irreducible (and in particular non-empty!). \item {\rm (\cite{Hwang})} If $X\subset{\mathbb P}^N$ is a prime Fano manifold of index $i(X)\geq \frac{n+3}{2}$, then $\mathcal L_x\subset{\mathbb P}^{n-1}$ is a non-degenerate manifold of dimension $i(X)-2$. \end{enumerate} \end{Proposition} \medskip Let us finish this section by looking at another significant example in which meaningful geometric properties of $X\subset{\mathbb P}^N$ are reflected in similar properties of $\mathcal L_x\subset{\mathbb P}^{n-1}$, when this is non-empty. \medskip \begin{Example}\label{excomp} Let $X\subset{\mathbb P}^N$ be a smooth complete intersection of type $(d_1,d_2,\ldots,d_c)$ with $d_c\geq 2$.
Then: \begin{itemize} \item if $n+1-d>0,$ then $X$ is a Fano manifold and $i(X)=n+1-d$; \item if $n\geq 3$, then $\operatorname{Pic}(X)\simeq\mathbb Z\langle\O(1)\rangle$; \item if $i(X)\geq 2$, then $\mathcal L_x\neq\emptyset$ and for every $[L]\in\mathcal L_x$ we have $$\dim_{[L]}(\mathcal L_x)=(-K_X\cdot L)-2=i(X)-2=n-1-d\geq 0,$$ so that $\mathcal L_x\subset{\mathbb P}^{n-1}$ is a smooth complete intersection of type $$(2,\ldots,d_1; 2,\ldots, d_2;\ldots;2,\ldots, d_{c-1}; 2,\ldots,d_c)$$ since it is scheme theoretically defined by the $d$ equations in \eqref{eqLxE}. \end{itemize} \end{Example} \section{A condition for non-extendability}\label{ext} \begin{Definition} Let us consider $H={\mathbb P}^N$ as a hyperplane in ${\mathbb P}^{N+1}$. Let $Y\subset{\mathbb P}^N=H$ be a smooth (non-degenerate) irreducible variety of dimension $n\geq 1$. An irreducible variety $X\subset{\mathbb P}^{N+1}$ will be called {\it an extension of $Y$} if \medskip \begin{enumerate} \item $\dim(X)=\dim(Y)+1$; \medskip \item $Y=X\cap H$ as a scheme. \end{enumerate} \end{Definition} \medskip For every $p\in {\mathbb P}^{N+1}\setminus H$, the irreducible cone $$X=S(p,Y)=\bigcup_{y\in Y}<p,y>\subset{\mathbb P}^{N+1}$$ is an extension of $Y\subset{\mathbb P}^N=H$, which will be called {\it trivial}. Let us observe that for any extension $X\subset{\mathbb P}^{N+1}$ of $Y\subset{\mathbb P}^N$ we necessarily have $\#(\operatorname{Sing}(X))<\infty$, since $X$ is smooth along the very ample divisor $Y=X\cap H$. We also remark that in our definition $Y$ is a fixed hyperplane section. In the classical approach it was usually required that $Y=X\cap H$ be a general hyperplane section of $X$, see for example \cite{Scorza1}. Under these more restrictive hypotheses one can always suppose that a general point on $Y$ is also a general point on $X$.
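\medskip In suitable homogeneous coordinates the trivial extensions are completely transparent: if $H=V(x_{N+1})\subset{\mathbb P}^{N+1}$, if $Y=V(f_1,\ldots,f_m)\subset H$ with $f_1,\ldots,f_m$ homogeneous forms in $x_0,\ldots,x_N$ generating the homogeneous ideal of $Y$, and if $p=[0:\cdots:0:1]$, then $$S(p,Y)=V(f_1,\ldots,f_m)\subset{\mathbb P}^{N+1},$$ the same forms now regarded as equations on ${\mathbb P}^{N+1}$ not involving $x_{N+1}$. In particular $S(p,Y)\cap H=Y$ as schemes and $\dim(S(p,Y))=\dim(Y)+1$, so that the cone is indeed an extension of $Y$.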
\medskip \subsection{Extensions of $\mathcal L_{x,Y}\subset{\mathbb P}^{n-1}$ via $\mathcal L_{x,X}\subset{\mathbb P}^n$} Let $y\in Y$ be a general point and let us consider an extension $X\subset{\mathbb P}^{N+1}$ of $Y$ and an irreducible component $\mathcal L_{y,Y}^j$ of $\mathcal L_{y,Y}\subset{\mathbb P}^{n-1}$, which is a smooth irreducible variety by Proposition \ref{Yx}. The results of Section~1 show that the property of being an extension translates immediately into a statement about Hilbert schemes of lines. Indeed we deduce the following result, where part (4) requires an ad hoc proof since in our hypotheses the point $y\in Y$ is general on $Y$, but not necessarily on $X$, so that we cannot apply Proposition \ref{Yx}. \medskip \begin{Proposition}\label{extlines} Let $X\subset{\mathbb P}^{N+1}$ be an irreducible projective variety which is an extension of the non-degenerate manifold $Y\subset{\mathbb P}^N$. Let $n=\dim(Y)\geq 1$ and let $y\in Y$ be an arbitrary point such that $\mathcal L_{y,Y}\neq\emptyset$. Then: \begin{enumerate} \item $\mathcal L_{y,X}\cap {\mathbb P}((t_yY)^*)=\mathcal L_{y,Y}$ as schemes. \item if $y\in Y$ is general, then $\dim_{[L]}(\mathcal L_{y,X})=\dim_{[L]}(\mathcal L_{y,Y})+1$ and $[L]$ is a smooth point of $\mathcal L_{y,X}$ for every $[L]\in\mathcal L_{y,Y}.$ \item if $y\in Y$ is general and if $\mathcal L_{y,Y}^j$ is an irreducible component of positive dimension, then there exists an irreducible component $\mathcal L_{y,X}^j$ such that $\mathcal L_{y,Y}^j=\mathcal L_{y,X}^j\cap{\mathbb P}((t_yY)^*)$ as schemes. \item If $y\in Y$ is general, then $\operatorname{Sing}(\mathcal L_{y,X})\subseteq S_{y,X}$. \end{enumerate} \end{Proposition} \begin{proof} Let $Y=X\cap H$, with $H={\mathbb P}^N\subset{\mathbb P}^{N+1}$ a hyperplane and let notation be as in subsection \ref{equations}. The conclusion in (1) immediately follows from \eqref{eqLxE}.
Let us pass to (2) and consider an arbitrary line $[L]\in \mathcal L_{y,Y}^j$, an irreducible component of the smooth, not necessarily irreducible, variety $\mathcal L_{y,Y}$. We have an exact sequence of normal bundles \begin{equation}\label{tangentspace} 0\to N_{L/Y}\to N_{L/X}\to N_{Y/X|L}\simeq\O_{{\mathbb P}^1}(1)\to 0. \end{equation} Since $y\in Y$ is general, $N_{L/Y}$ is generated by global sections, see the proof of Proposition \ref{Yx}, so that \eqref{split} yields \begin{equation}\label{normale} N_{L/X}\simeq N_{L/Y}\oplus\O_{{\mathbb P}^1}(1)\simeq \O_{{\mathbb P}^1}(1)^{s(L,Y)+1}\oplus\O_{{\mathbb P}^1}^{n-s(L,Y)-1}. \end{equation} Thus also $N_{L/X}$ is generated by global sections, $\mathcal L_{y,X}$ is smooth at $[L]$ and $\dim_{[L]}(\mathcal L_{y,X})=\dim_{[L]}(\mathcal L_{y,Y})+1$, proving (2). Therefore if $y\in Y$ is general, there exists a unique irreducible component of $\mathcal L_{y,X}\subset{\mathbb P}((t_yX)^*)$, let us say $\mathcal L_{y,X}^j$, containing $[L]$, and by the previous calculation $\dim(\mathcal L^j_{y,X})=s(L,Y)+1=\dim(\mathcal L_{y,Y}^j)+1$. Recall that by part (1) we have $t_{[L]}\mathcal L_{y,Y}=t_{[L]}\mathcal L_{y,X}\cap {\mathbb P}((t_yY)^*)$ so that \begin{equation}\label{extYx} \mathcal L_{y,Y}^j\subseteq \mathcal L_{y,X}^j\cap {\mathbb P}((t_yY)^*)\subseteq \mathcal L_{y,Y}\subset {\mathbb P}^{n-1}={\mathbb P}((t_yY)^*), \end{equation} yielding that $\mathcal L_{y,Y}^j$ is an irreducible component of $\mathcal L_{y,X}^j\cap {\mathbb P}((t_yY)^*)$ as well as an irreducible component of the smooth variety $\mathcal L_{y,Y}$. Hence, if $\dim(\mathcal L_{y,Y}^j)\geq 1$, we have the equality $\mathcal L^j_{y,Y}= \mathcal L_{y,X}^j\cap {\mathbb P}((t_yY)^*)$ as schemes, i.e.
under this hypothesis $\mathcal L_{y,X}^j\subset{\mathbb P}((t_yX)^*)$ (or better $(\mathcal L_{y,X}^j)_{\operatorname{red}}$) is a projective extension of the smooth positive dimensional irreducible variety $\mathcal L_{y,Y}^j\subset{\mathbb P}((t_yY)^*)$. Indeed, $\dim(\mathcal L^j_{y,Y})\geq 1$ forces $\dim(\mathcal L_{y,X}^j)\geq 2$ so that it is sufficient to recall that $\mathcal L_{y,X}$ is smooth along $\mathcal L_{y,Y}$ by the previous discussion and also that an arbitrary hyperplane section of the irreducible variety $(\mathcal L_{y,X}^j)_{\operatorname{red}}$ is connected by the Fulton-Hansen Theorem, \cite{FH}. More precisely, if $\dim(\mathcal L_{y,Y}^j)\geq 1$, then equality as schemes holds in \eqref{extYx}, proving part (3). By \cite[Proposition 4.9]{Debarre} there exists a non-empty open subset $U\subseteq X$ such that $N_{\widetilde L/X}$ is generated by global sections for every line $\widetilde L\subset X_{\operatorname{reg}}$ intersecting $U$. If $U\cap Y\neq \emptyset$, then (4) clearly holds. Suppose $Y\cap U=\emptyset$. Let $[\widetilde L]\in \mathcal L_{y,X}\setminus S_{y,X}$. If $\widetilde L\cap U\neq \emptyset$, then $[\widetilde L]$ is a smooth point of $\mathcal L_{y,X}$ by the previous analysis. If $\widetilde L\cap U=\emptyset$, then $\widetilde L\subset Y$ by the generality of $y\in Y$ and $N_{\widetilde L/X}$ is generated by global sections by \eqref{normale}, concluding the proof of (4). \end{proof} \medskip Now we are in a position to prove the main result of this section and to deduce some applications. \medskip \begin{Theorem}\label{criterion} Let notation be as above and let $y\in Y$ be a general point. Then: \begin{enumerate} \item Suppose there exist two distinct irreducible components $\mathcal L_{y,X}^1$ and $\mathcal L_{y,X}^2$ of $\mathcal L_{y,X}\subset{\mathbb P}((t_yX)^*)$, extending two irreducible components $\mathcal L_{y,Y}^1$, respectively $\mathcal L_{y,Y}^2$, of $\mathcal L_{y,Y}$ in the sense specified above.
If $\mathcal L_{y,X}^1\cap \mathcal L_{y,X}^2\neq\emptyset$, then $X\subset{\mathbb P}^{N+1}$ is a cone over $Y\subset{\mathbb P}^N$ of vertex a point $p\in {\mathbb P}^{N+1}\setminus {\mathbb P}^N$. \item If $\mathcal L_{y,Y}\subset{\mathbb P}((t_yY)^*)$ is a manifold whose extensions are singular, then every extension of $Y\subset{\mathbb P}^N$ is trivial. \end{enumerate} \end{Theorem} \begin{proof} By the above discussion, in both cases the variety $S_{y,X}\subseteq \mathcal L_{y,X}$ is not empty for $y\in Y$ general, so that there exists a line $L_y\subseteq X$ passing through $y$ and through a singular point $p_y\in L_y\cap \operatorname{Sing}(X)$. Since $Y$ is irreducible and since $\operatorname{Sing}(X)$ consists of a finite number of points, there exists $p\in \operatorname{Sing}(X)$ such that $p\in L_y$ for $y\in Y$ general. This implies that $X=S(p,Y)$ is a cone over $Y$ with vertex $p$. \end{proof} \medskip The first easy consequence is a result due to Scorza (see \cite{Scorza2} and also \cite{Zakdual}, \cite{Badescu}), proved by him under the stronger assumption that $Y=X\cap H$ is a general hyperplane section of $X$. Under these more restrictive hypotheses, the analysis before the proof of Theorem \ref{criterion} could be simplified via Proposition \ref{Yx}, since we may assume that the general point $y\in Y$ is also general on $X$. \medskip \begin{Corollary}\label{Segre} Let $1\leq a\leq b$ be integers, let $n=a+b\geq 3$ and let $Y\subset{\mathbb P}^{ab+a+b}$ be a smooth irreducible variety projectively equivalent to the Segre embedding ${\mathbb P}^a\times{\mathbb P}^b\subset{\mathbb P}^{ab+a+b}$. Then every extension of $Y$ in ${\mathbb P}^{ab+a+b+1}$ is trivial.
\end{Corollary} \begin{proof} For $y\in Y$ general, it is well known that $\mathcal L_{y,Y}=\mathcal L_{y,Y}^1\amalg \mathcal L_{y,Y}^2\subset{\mathbb P}^{a+b-1}={\mathbb P}^{n-1}$ with $\mathcal L_{y,Y}^1={\mathbb P}^{a-1}$ and $\mathcal L_{y,Y}^2={\mathbb P}^{b-1}$, both linearly embedded. Observe that $b-1\geq 1$. By \eqref{extYx} and the discussion following it, there exist two irreducible components $\mathcal L_{y,X}^j$, $j=1,2$, of $\mathcal L_{y,X}\subset{\mathbb P}^{n}={\mathbb P}^{a+b}$ with $\dim(\mathcal L_{y,X}^1)=a$ and $\dim(\mathcal L_{y,X}^2)=b$. If $a\neq b$ then clearly $\mathcal L_{y,X}^1\neq \mathcal L_{y,X}^2$. If $a=b\geq 2$, then $\mathcal L_{y,X}^1\neq \mathcal L_{y,X}^2$ because an arbitrary hyperplane section of a variety of dimension at least 2 is connected, see \cite{FH}. Since $a+b=n$, $\mathcal L_{y,X}^1\cap \mathcal L_{y,X}^2\neq\emptyset$ and the conclusion follows from the first part of Theorem \ref{criterion}. \end{proof} \medskip The previous result has some interesting consequences via iterated applications of the second part of Theorem \ref{criterion}. Indeed, let us consider the following homogeneous varieties (also known as irreducible hermitian symmetric spaces), in their homogeneous embedding, and the description of the Hilbert scheme of lines passing through a general point, see \cite[Section~1.4.5]{Hwang} and also \cite{Strick}.
\medskip \begin{equation}\label{hermitian} \begin{tabular}{|c|c|c|c|} \hline &$Y$ & $\mathcal L_{y,Y}$ & $\tau_y:\mathcal L_{y,Y}\to {\mathbb P}((t_yY)^*)$\\ \hline 1 &$\mathbb G(r,m)$ & ${\mathbb P}^r\times{\mathbb P}^{m-r-1}$ & \text{Segre embedding}\\ \hline 2 &$SO(2r)/U(r)$& $\mathbb G(1,r-1)$ & \text{Pl\"ucker embedding}\\ \hline 3 &$E_6$ & $SO(10)/U(5)$& \text{minimal embedding}\\ \hline 4 &$E_7/E_6\times U(1)$ & $E_6$ & \text{Severi embedding}\\ \hline 5 &$Sp(r)/U(r)$ & ${\mathbb P}^{r-1}$ & \text{quadratic Veronese embedding}\\ \hline \end{tabular} \end{equation} \medskip There are also the following homogeneous contact manifolds with Picard number one associated to a complex simple Lie algebra $\mathbf g$, whose Hilbert scheme of lines passing through a general point is known. Let us observe that in these examples the variety $\mathcal L_{y,Y}\subset{\mathbb P}^{n-1}={\mathbb P}((t_yY)^*)$ is degenerate and its linear span is exactly ${\mathbb P}((D_y)^*)={\mathbb P}^{n-2}$, where $D_y$ is the tangent space at $y$ of the distribution associated to the contact structure on $Y$, i.e.\ there is the following factorization $\tau_y:\mathcal L_{y,Y}\to {\mathbb P}((D_y)^*)\subset{\mathbb P}((t_yY)^*)$. For more details one can consult \cite[Section~1.4.6]{Hwang}.\medskip \begin{equation}\label{contact} \begin{tabular}{|c|c|c|c|} \hline &$\mathbf g$ & $ \mathcal L_{y,Y}$ & $\tau_y:\mathcal L_{y,Y}\to {\mathbb P}((D_y)^*)$\\ \hline 6 &$F_4$ & $Sp(3)/U(3)$ & \text{Segre embedding}\\ \hline 7 &$E_6$ & $\mathbb G(2,5)$& \text{Pl\"ucker embedding}\\ \hline 8 & $E_7$ & $SO(12)/U(6)$ & \text{minimal embedding}\\ \hline 9 &$E_8$ & $E_7/E_6\times U(1)$& \text{minimal embedding}\\ \hline 10&${\mathbf so}_{m+4}$& ${\mathbb P}^1\times Q^{m-2}$ & \text{Segre embedding}\\ \hline \end{tabular} \end{equation} \medskip By case 1') we shall denote a variety as in 1) of \eqref{hermitian} satisfying the following numerical conditions: $r<m-1$; if $r=1$, then $m\geq 4$.
By 2') we shall denote a variety as in 2) with $r\geq 5$. \medskip \begin{Corollary}\label{exthermitian} Let $Y\subset{\mathbb P}^N$ be a manifold as in Examples 1'), 2'), 3), 4), 7), 8), 9) above. Then every extension of $Y$ is trivial. \end{Corollary} \begin{proof} In cases 2'), 3), 4) and 9) in the statement the variety $\mathcal L_{y,Y}\subset{\mathbb P}^{n-1}$ of one example is the variety $Y\subset{\mathbb P}^N$ occurring in the next one. Thus for these cases, by the second part of Theorem \ref{criterion}, it is sufficient to prove the result for case 1'). For this variety the conclusion follows from Corollary \ref{Segre}. For the remaining cases, the variety $\mathcal L_{y,Y}\subset{\mathbb P}^{n-1}$ is either as in case 1') with $(r,m)=(2,5)$ or as in case 2) with $r=6$ and the conclusion follows once again by the second part of Theorem \ref{criterion}. \end{proof} \medskip The next result is also classical and well-known but we provide a direct geometric proof. Under the assumption that the hyperplane section $H\cap X=Y$ is general, it was proved by C. Segre for $n=2$ in \cite{CSegre} and by Scorza in \cite{Scorza1}, see also \cite{Terracini}, for arbitrary $n\geq 2$ (and also for arbitrary Veronese embeddings $\nu_d({\mathbb P}^n)\subset{\mathbb P}^{N(d)}$, with $n\geq 2$ and $d\geq 2$; modern proofs of this general case are contained in \cite{Badescu} and in \cite{Zakdual}). \medskip \begin{Proposition}\label{Veronese} Let $n\geq 2$ and let $Y\subset{\mathbb P}^{\frac{n(n+3)}{2}}$ be a manifold projectively equivalent to the quadratic Veronese embedding $\nu_2({\mathbb P}^n)\subset{\mathbb P}^{\frac{n(n+3)}{2}}$. Then every extension of $Y$ is trivial. \end{Proposition} \begin{proof} Let $y\in Y$ be a general point and let $N=\frac{n(n+3)}{2}$. 
Since $\mathcal L_{y,Y}=\emptyset$, the variety $\mathcal L_{y,X}\subset{\mathbb P}^n$, if not empty, consists of at most a finite number of points, and through $y\in X$ there pass at most finitely many lines contained in $X$. Consider a conic $C\subset Y$ passing through $y$. Then $N_{C/Y}\simeq\O_{{\mathbb P}^1}(1)^{n-1}$. The exact sequence of normal bundles $$0\to N_{C/Y}\to N_{C/X}\to N_{Y/X|C}\simeq\O_{{\mathbb P}^1}(2)\to 0$$ yields $$N_{C/X}\simeq N_{C/Y}\oplus\O_{{\mathbb P}^1}(2)\simeq \O_{{\mathbb P}^1}(1)^{n-1}\oplus\O_{{\mathbb P}^1}(2).$$ Thus there exists a unique irreducible component $\mathcal C_{y,X}$ of the Hilbert scheme of conics contained in $X\subset{\mathbb P}^{N+1}$ passing through $y\in X$ to which $[C]$ belongs. Moreover $\dim(\mathcal C_{y,X})=n+1$ and the conics parametrized by $\mathcal C_{y,X}$ cover $X$. Hence there exists a one dimensional family of conics through $y$ and a general point $x\in X$. By Bend and Break, see for example \cite[Proposition 3.2]{Debarre}, there is at least one singular conic through $y$ and $x$. Since $X\subset{\mathbb P}^{N+1}$ is not a linear space, there exists no line joining $y$ and a general $x$, i.e.\ the singular conics through $x$ and $y$ are reduced. Thus given a general point $x$ in $X$, there exists a line $L_x\subset X$ through $x$, not passing through $y$, and a line $L_y\subset X$ through $y$ such that $L_y\cap L_x\neq \emptyset$. Since only finitely many lines contained in $X$ pass through $y$, we can conclude that there exist a fixed line $\widetilde{L}_y$ passing through $y$ and, for a general point $x\in X$, a line $L_x$ through $x$ such that $L_x\cap \widetilde{L}_y\neq \emptyset.$ Moreover, a general conic $[C_{x,y}]\in\mathcal C_{y,X}$ passing through a general point $x$ is irreducible, does not pass through the finite set $\operatorname{Sing}(X)$ and has ample normal bundle satisfying $h^0(N_{C_{x,y}/X}(-1))=h^0(N_{C/X}(-1))=n+1$.
This means that the deformations of $C_{x,y}$ keeping $x$ fixed cover an open subset of $X$ and also that through general points $x_1,x_2\in X$ there passes a one dimensional family of irreducible conics. The plane spanned by one of these conics contains $x_1$ and $x_2$ so that it has to vary with the conic. Otherwise the fixed plane would be contained in $X$ and $X\subset{\mathbb P}^{N+1}$ would be a linear space, which is contrary to our assumptions. In conclusion, through a general point $z\in <x_1,x_2>$ there passes at least a one dimensional family of secant lines to $X$ so that \begin{equation}\label{dimSX} \dim(SX)\leq 2(n+1)-1=2n+1<N+1=\frac{n(n+3)}{2}+1, \end{equation} yielding $SX\subsetneq{\mathbb P}^{N+1}$. Suppose the point $p_x=\widetilde{L}_y\cap L_x$, for $x\in X$ general, varies on $\widetilde{L}_y$. Then the linear span of two general tangent spaces $T_{x_1}X$ and $T_{x_2}X$ would contain the line $\widetilde{L}_y$. Since $T_zSX=<T_{x_1}X,T_{x_2}X>$ by the Terracini Lemma, we deduce that a general tangent space to $SX$ contains $\widetilde{L}_y$ and a fortiori $y$. Since $SX\subsetneq{\mathbb P}^{N+1}$, the variety $SX\subset{\mathbb P}^{N+1}$ would be a cone whose vertex, which is a linear space, contains $\widetilde{L}_y$ and a fortiori $y\in Y$. By the generality of $y\in Y$ we would deduce that $Y\subset{\mathbb P}^N$ is degenerate, a contradiction. Thus $p_x=\widetilde{L}_y\cap L_x$ does not vary with $x\in X$ general. Let us denote this point by $p$. Then clearly $X\subset{\mathbb P}^{N+1}$ is a cone with vertex $p$ over $Y$. \end{proof} \medskip \begin{Corollary}\label{extcontact} Let $Y\subset{\mathbb P}^N$ be a manifold either as in 5) above with $r\geq 3$ or as in 6) above. Then every extension of $Y$ is trivial.
\end{Corollary} \begin{proof} By \eqref{hermitian} we know that in case 5) with $r\geq 3$ we have $n-1=\frac{(r-1)(r+2)}{2}$ and the variety $\mathcal L_{y,Y}\subset{\mathbb P}^{n-1}$ is projectively equivalent to $\nu_2({\mathbb P}^{r-1})\subset{\mathbb P}^{\frac{(r-1)(r+2)}{2}}$. To conclude we apply Proposition \ref{Veronese} and the second part of Theorem \ref{criterion}. Case 6) follows from case 5) with $r=3$ by the second part of Theorem \ref{criterion}. \end{proof} \medskip \begin{Remark}\label{dualhom} There is a different and interesting approach to Corollary \ref{exthermitian} and Corollary \ref{extcontact} based on the theory of dual varieties and proposed by Zak in \cite{Zakdual}, which also avoids direct computations of vanishing of cohomology groups in each case. This approach is less direct and less elementary than ours and it is based on the following facts. By a result of Kempf the dual variety of any homogeneous variety is normal, see e.g. \cite[Theorem III.1.2]{Zak}. Then in \cite[Corollary 1]{Zakdual} it is stated that a smooth variety $X\subset{\mathbb P}^N$ whose dual variety is normal admits only trivial extensions. As far as we know, to establish this result one first shows that the normality of $X^*$ implies its linear normality, which seems to follow from some well known but not trivial results. Finally one applies \cite[Theorem ]{Zakdual}, which is a general criterion for admitting only trivial extensions. For us Theorem \ref{criterion} is simply another incarnation of the Principle described in the Introduction while Corollary \ref{exthermitian} and Corollary \ref{extcontact}, surely well known to everybody, were included only to show that they are immediate consequences of Scorza's result in \cite{Scorza2}, a fact which seems to have been overlooked till now. \end{Remark} \def\bibaut#1{{\sc #1}}
https://arxiv.org/abs/0903.0840
Real loci of based loop groups
Let $(G,K)$ be a Riemannian symmetric pair of maximal rank, where $G$ is a compact simply connected Lie group and $K$ the fixed point set of an involutive automorphism $\sigma$. This induces an involutive automorphism $\tau$ of the based loop space $\Omega(G)$. There exists a maximal torus $T\subset G$ such that the canonical action of $T\times S^1$ on $\Omega(G)$ is compatible with $\tau$ (in the sense of Duistermaat). This allows us to formulate and prove a version of Duistermaat's convexity theorem. Namely, the images of $\Omega(G)$ and $\Omega(G)^\tau$ (fixed point set of $\tau$) under the $T\times S^1$ moment map on $\Omega(G)$ are equal. The space $\Omega(G)^\tau$ is homotopy equivalent to the loop space $\Omega(G/K)$ of the Riemannian symmetric space $G/K$. We prove a stronger form of a result of Bott and Samelson which relates the cohomology rings with coefficients in $\mathbb{Z}_2$ of $\Omega(G)$ and $\Omega(G/K)$. Namely, the two cohomology rings are isomorphic, by a degree-halving isomorphism (Bott and Samelson had proved that the Betti numbers are equal). A version of this theorem involving equivariant cohomology is also proved. The proof uses the notion of conjugation space in the sense of Hausmann, Holm, and Puppe.
\section{Introduction} Let $G$ be a compact connected simply connected Lie group. Consider the space $$\Omega(G):=\{\gamma : S^1 \to G \ : \ \gamma \ {\rm of \ Sobolev \ class \ } H^1, \gamma(1)=e\}$$ of all based loops in $G$ (here $S^1$ is the unit circle in the complex plane). It is known that $\Omega(G)$ is an infinite dimensional symplectic manifold which behaves in many respects like a {\it compact} symplectic manifold. For example, let us consider the canonical action of the product $T\times S^1$ on $\Omega(G)$, where $T\subset G$ is a maximal torus and $S^1$ a circle (for more details, see Section \ref{duis} below). One can show that this action is Hamiltonian. Moreover, by the convexity theorem of Atiyah and Pressley \cite{At-Pr}, the image of the corresponding moment map is a convex unbounded polyhedron (by {\it convex polyhedron} we always mean in this paper the convex hull of an infinite but discrete collection of points). Another instance of the same phenomenon is that the $T\times S^1$-equivariant cohomology of $\Omega(G)$ can be computed by Goresky-Kottwitz-MacPherson type formulas (this has been obtained by Harada, Henriques, and Holm in \cite{Ha-He-Ho}). Let $\sigma$ be a Lie group automorphism of $G$ with the following properties: \begin{itemize} \item $\sigma \circ \sigma ={\rm id}_G$, that is, $\sigma$ is an involution \item there exists a maximal torus $T\subset G$ such that $\sigma(t)=t^{-1}$ for all $t\in T$. \end{itemize} It is known (cf. e.g. \cite[Chapter VI, Theorem 4.2]{Lo}) that any simply connected compact Lie group $G$ admits such an automorphism $\sigma$. This $\sigma$ is unique up to an inner automorphism of $G$. For example, if $G=SU(n)$, $\sigma$ is given by $$\sigma((a_{k\ell})_{1\le k,\ell\le n})=(\bar{a}_{k\ell})_{1\le k,\ell\le n},$$ for any special unitary $n\times n$ matrix $(a_{k\ell})_{1\le k,\ell\le n}$ (the bar indicates the complex conjugate).
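One verifies immediately that this $\sigma$ has the two required properties: $\sigma\circ\sigma={\rm id}_{SU(n)}$ since $\overline{\bar{a}_{k\ell}}=a_{k\ell}$, and on the maximal torus $T\subset SU(n)$ of diagonal matrices we have $$\sigma\left({\rm diag}(e^{i\theta_1},\ldots,e^{i\theta_n})\right)={\rm diag}(e^{-i\theta_1},\ldots,e^{-i\theta_n})=\left({\rm diag}(e^{i\theta_1},\ldots,e^{i\theta_n})\right)^{-1}.$$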
Examples of such involutions for other Lie groups are presented in Section \ref{lasts}. The automorphism $\sigma$ gives rise to the involution $\tau$ of $\Omega(G)$ given by \begin{equation}\label{taug}\tau(\gamma)(z)=\sigma(\gamma(\bar{z})),\end{equation} for all $\gamma\in \Omega(G)$ and $z \in S^1$. One can see that $\tau$ is an anti-symplectic automorphism of $\Omega(G)$, that is, it satisfies $\tau^*(\omega)=-\omega$, where $\omega$ is the symplectic form of $\Omega(G)$ (cf. \cite{Ko}). The automorphism $\tau$ of $\Omega(G)$ is compatible with the $T\times S^1$ action mentioned above: that is, we have \begin{equation}\label{co}\tau((t,z).\gamma) =(t^{-1},z^{-1}).\tau(\gamma), \end{equation} for all $(t,z)\in T\times S^1$ and all $\gamma\in \Omega(G)$ (see Proposition \ref{propo} below). Real loci of compact (finite dimensional) symplectic manifolds with compatible torus actions have been investigated by several authors, like Duistermaat \cite{Du}, O'Shea and Sjamaar \cite{OS-Sj}, Biss, Guillemin, and Holm \cite{Bi-Gu-Ho}, and Hausmann, Holm, and Puppe \cite{Ha-Ho-Pu}. The loop space $\Omega(G)$ is infinite dimensional, thus we cannot directly apply the results in the papers above. The goal of our paper is to show that the following two results can be extended to $\Omega(G)$: the Duistermaat convexity theorem (cf. \cite{Du}, see also Theorem \ref{dui} below) and a more recent result of Hausmann, Holm, and Puppe which relates the (equivariant) cohomology rings of the manifold and of the fixed point set of the involutive automorphism (cf. \cite{Ha-Ho-Pu}). More precisely, we prove Theorems \ref{firstmain} and \ref{secmain} below. The first theorem concerns the moment map of the $T\times S^1$ action on $\Omega(G)$, which is a map $\Omega(G)\to ({\rm Lie}(T)\oplus \mathbb{R})^*$. The explicit description of this map is given in Section \ref{convex}. 
It turns out that it is more convenient to describe it by endowing ${\rm Lie}(G)$ with an ${\rm Ad}(G)$-invariant inner product and restricting it to ${\rm Lie}(T)$, and then endowing $\mathbb{R}$ with the canonical inner product: we identify in this way $ ({\rm Lie}(T)\oplus \mathbb{R})^*= {\rm Lie}(T)\oplus \mathbb{R}$. \begin{theorem}\label{firstmain} If $\Phi : \Omega(G)\to {\rm Lie}(T) \oplus \mathbb{R}$ is the moment map of the $T\times S^1$ action, then we have $$\Phi(\Omega(G))=\Phi(\Omega(G)^\tau).$$ Here $\Omega(G)^\tau$ denotes the fixed point set of $\tau$. \end{theorem} \noindent {\bf Remarks.} 1. Let us consider the more general situation when $\sigma$ is an {\it arbitrary} involutive Lie group automorphism of $G$. The differential map $d\sigma_e$ is an involutive Lie algebra automorphism of $\frak{g}:={\rm Lie}(G)$. Let $\frak{k},\frak{p}\subset \frak{g}$ denote the corresponding $+1$, respectively $-1$ eigenspaces. We have $\frak{g}=\frak{k}\oplus\frak{p}$. Let $\frak{a}$ be a maximal abelian subspace of $\frak{p}$. Then $A:=\exp(\frak{a})$ is a torus in $G$ (cf. e.g. \cite[Chapter VII]{He}). Let $T\subset G$ be a maximal torus such that $A\subset T$. Consider again the involution $\tau$ of $\Omega(G)$ given by Equation (\ref{taug}). Again, $\tau$ is an antisymplectic automorphism of $\Omega(G)$. The action of $A\times S^1$ (which is a subgroup of $T\times S^1$) on $\Omega(G)$ is compatible with $\tau$. Let $\Phi_A : \Omega(G)\to \frak{a}\oplus \mathbb{R}$ be the moment map of the $A\times S^1$ action (as before, we make the identification $(\frak{a}\oplus \mathbb{R})^*=\frak{a}\oplus \mathbb{R}$). It is not known whether $\Phi_A(\Omega(G)^\tau)=\Phi_A(\Omega(G))$: this would be a version of Duistermaat's convexity theorem stronger than Theorem \ref{firstmain} above.
Note that both $\Phi_A(\Omega(G))$ and $\Phi_A(\Omega(G)^\tau)$ are convex polyhedra in $\frak{a}\oplus \mathbb{R}$: the first by Atiyah and Pressley's theorem mentioned above, the second by the convexity theorem of Terng \cite[Theorem 1.6]{Te-Convex} for infinite dimensional isoparametric submanifolds (for more details, see Section \ref{secterng} below). 2. It is probably also worth investigating whether the result in Theorem \ref{firstmain} remains true if instead of loops of Sobolev class $H^1$ we consider other classes of loops. For example, let us consider the space $\Omega_{\rm alg}(G)$ of {\it algebraic loops} in $G$ (see Section \ref{equivc} for the exact definition of this notion). Note that $\Omega_{\rm alg}(G)$ is a $T\times S^1$ invariant subspace of $\Omega(G)$. Atiyah and Pressley \cite{At-Pr} showed that we have $\Phi(\Omega(G))= \Phi(\Omega_{\rm alg}(G))$ (thus the latter set is also an unbounded convex polyhedron). The automorphism $\tau$ leaves $\Omega_{\rm alg}(G)$ invariant. We do not know whether $\Phi(\Omega_{\rm alg}(G)^\tau)=\Phi(\Omega_{\rm alg}(G))$. An important step towards the proof of this conjecture would be to consider the Bruhat cells in $\Omega_{\rm alg}(G)$ (see Section \ref{equivc} below) and their closures, which are finite dimensional projective varieties. They are both $T\times S^1$ and $\tau$ invariant. One should first verify whether for any such variety $X$ we have $\Phi(X)=\Phi(X^\tau)$. 3. The proof of Theorem \ref{firstmain} will be given in Section \ref{convex}. The main ingredients of the proof are as follows: first, the set $\Phi(\Omega(G)^\tau)$ is a convex subset of $\frak{t}\oplus \mathbb{R}$ (as already mentioned in Remark 1 above, a slightly more general result will be proved in Section \ref{secterng}); second, the vertices of Atiyah and Pressley's polyhedron $\Phi(\Omega(G))$ are of the form $\Phi(\lambda)$, where $\lambda : S^1 \to T$ is a group homomorphism (see \cite[Section 1, Remark 2]{At-Pr}).
The following is the second main result of the paper. \begin{theorem}\label{secmain} One has the following two ring isomorphisms: \begin{align*} {}&(a) \ H^{2*}(\Omega(G);\mathbb{Z}_2)\simeq H^* (\Omega(G)^\tau;\mathbb{Z}_2)\\ {}& (b) \ H^{2*}_{T\times S^1}(\Omega(G);\mathbb{Z}_2)\simeq H^*_{T_2\times \mathbb{Z}_2} (\Omega(G)^\tau;\mathbb{Z}_2), \end{align*} where $T_2\times \mathbb{Z}_2:=\{(t,z)\in T \times S^1 \ : \ t^2 =1 \ {\it and} \ z=\pm 1\}$. \end{theorem} Note that the right-hand side of equation (b) above is well defined: by the compatibility condition (\ref{co}), the group $T_2\times \mathbb{Z}_2$ leaves $\Omega(G)^\tau$ invariant. This theorem is related to a result of Bott and Samelson \cite{Bo-Sa} concerning the space of loops in a symmetric space. To be more precise, let $G$ be, as before, a compact simply connected Lie group and $\sigma$ a group automorphism of $G$ with the property that $\sigma\circ\sigma ={\rm id}_G$ (the assumption that $\sigma(t)=t^{-1}$ for all $t\in T$ is temporarily dropped). Then $K=G^\sigma$ (the fixed point set of $\sigma$) is a connected closed subgroup of $G$ and the homogeneous space $G/K$ has a natural structure of a Riemannian symmetric space. Explicit formulas for the $\mathbb{Z}_2$ Betti numbers of the loop space $\Omega(G/K)$ are given in \cite[Corollary 3.10]{Bo-Sa}. This result also gives the $\mathbb{Z}_2$ Betti numbers of $\Omega(G)^\tau$, since the latter space is homotopy equivalent to $\Omega(G/K)$ (see for instance Proposition \ref{equiv} below). Let us now reinstate the assumption that $\sigma(t)=t^{-1}$ for all $t\in T$. Then $G/K$ is a symmetric space of maximal rank (that is, ${\rm rank} \ G/K={\rm rank} \ G$). Under this assumption, Bott and Samelson proved that \begin{equation}\label{botsa}{\rm dim} \ H^{2q}(\Omega(G);\mathbb{Z}_2)={\rm dim} \ H^q(\Omega(G/K);\mathbb{Z}_2),\end{equation} for all $q\ge 0$ (see \cite[Proposition 4.1]{Bo-Sa}).
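Equation (\ref{botsa}) can be illustrated in the simplest case: take $G=SU(2)$ and $\sigma$ the complex conjugation described in the introduction, so that $K=SO(2)$ and $G/K$ is the sphere $S^2$, a symmetric space of maximal rank. On one hand, ${\rm dim} \ H^{2q}(\Omega(S^3);\mathbb{Z}_2)=1$ for all $q\ge 0$, and the odd dimensional cohomology of $\Omega(S^3)$ vanishes. On the other hand, the homotopy equivalence $\Omega(S^2)\simeq S^1\times \Omega(S^3)$ (which can be deduced from the Hopf fibration $S^1\to S^3\to S^2$) shows that ${\rm dim} \ H^{q}(\Omega(S^2);\mathbb{Z}_2)=1$ for all $q\ge 0$, in agreement with Equation (\ref{botsa}).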
The homotopy equivalence between $\Omega(G)^\tau$ and $\Omega(G/K)$ mentioned above is $(T_2\times \mathbb{Z}_2)$-equivariant with respect to a certain natural action of $T_2\times \mathbb{Z}_2$ on $\Omega(G/K)$ (see Section \ref{two} below, especially Proposition \ref{equiv}). Consequently, $\Omega(G)^\tau$ and $\Omega(G/K)$ have the same cohomology rings, both equivariant and non-equivariant. In this way, the following result can be deduced from Theorem \ref{secmain}. Before stating it, we note that it is a stronger form of the result given by Equation (\ref{botsa}). \begin{corollary}\label{bottsam} If $G/K$ is a symmetric space of maximal rank, then one has the following two ring isomorphisms: \begin{align*} {}&(a) \ H^{2*}(\Omega(G);\mathbb{Z}_2)\simeq H^* (\Omega(G/K);\mathbb{Z}_2)\\ {}& (b) \ H^{2*}_{T\times S^1}(\Omega(G);\mathbb{Z}_2)\simeq H^*_{T_2\times \mathbb{Z}_2} (\Omega(G/K);\mathbb{Z}_2). \end{align*} \end{corollary} \noindent {\bf Remarks.} 1. The following result was also proved by Bott and Samelson. As usual, $G/K$ is a Riemannian symmetric space of maximal rank. Take $x \in \frak{t}$ and the orbits ${\rm Ad}_G(G)x=G/G_x$ and ${\rm Ad}_G(K)x=K/K_x$. We have $${\rm dim} \ H^{2q}(G/G_x;\mathbb{Z}_2) = {\rm dim} \ H^q(K/K_x;\mathbb{Z}_2),$$ for all $q\ge 0$ (see \cite[Proposition 4.3]{Bo-Sa}). Stronger forms of this result have been obtained by Hausmann, Holm, and Puppe in \cite{Ha-Ho-Pu}. Namely, they proved the following ring isomorphisms: \begin{align} {}&\ H^{2*}(G/G_x;\mathbb{Z}_2)\simeq H^*\label{unu} (K/K_x;\mathbb{Z}_2)\\ {}& \ H^{2*}_{T}(G/G_x;\mathbb{Z}_2)\simeq H^*_{T_2} (K/K_x;\mathbb{Z}_2)\label{doi}, \end{align} where $T_2:=\{t\in T \ : \ t^2=1\}$. 
The main idea of their proof is that $\sigma$ induces an anti-symplectic involutive automorphism of $G/G_x$, which is compatible with the $T$ action and whose fixed point set is $K/K_x$; the upshot is that this automorphism together with the Schubert cell decomposition makes $G/G_x$ into a {\it spherical conjugation complex}, and this automatically implies the isomorphisms (\ref{unu}) and (\ref{doi}). Our proof of Theorem \ref{secmain} uses a similar argument. Namely, we use the Bruhat cell decomposition of the space $\Omega_{\rm alg}(G)$ in order to show that this, together with the involution $\tau$, is a spherical conjugation complex. (We take this opportunity to note that these arguments show that Theorem \ref{secmain} is also valid if $\Omega(G)$ is replaced by $\Omega_{\rm alg}(G)$.) Finally, we use a theorem which says that the inclusion $ \Omega_{\rm alg}(G)\hookrightarrow \Omega(G)$ is a homotopy equivalence. The details can be found in Section \ref{lastsec}. 2. The following result can also be proved by using the methods of our paper, combined with a theorem of Franz and Puppe (see \cite[Theorem 1.3]{Fr-Pu}). Let $\kappa$ denote either of the two isomorphisms given at points (a) and (b) of Corollary \ref{bottsam}, which are maps from the (equivariant) cohomology of $\Omega(G)$ to the (equivariant) cohomology of $\Omega(G/K)$. Then we have $$\kappa \circ {\rm Sq}^{2q} = {\rm Sq}^q \circ \kappa$$ for all $q\ge 0$. Here $ {\rm Sq}^{2q}$ and ${\rm Sq}^q$ denote the Steenrod squaring operations on the (equivariant) cohomology rings of $\Omega(G)$, respectively $\Omega(G/K)$. \noindent {\bf Note.} By $S^1$ we will interchangeably denote the unit circle in the complex plane and the quotient space $\mathbb{R}/2\pi \mathbb{Z}$. It will be clear from the context which of these two presentations is used. \noindent {\bf Acknowledgements.} We would like to thank the referees for reading carefully previous versions of the manuscript and making several valuable suggestions.
\section{The image of $\Omega(G)^\tau$ under the moment map}\label{convex} \subsection{Duistermaat type convexity for $(\Omega(G),\tau,T\times S^1)$}\label{duis} Duistermaat proved the following theorem: \begin{theo}\label{dui}{\rm (\cite{Du})} Let $M$ be a compact symplectic manifold equipped with a Hamiltonian action of a torus ${\mathcal T}$ and an antisymplectic involution $\rho$ which are compatible, in the sense that \begin{equation}\label{compa} \rho(tx)=t^{-1}\rho(x), \end{equation} for all $t\in{\mathcal T}$ and all $x\in M$. If $\mu : M \to {\rm Lie}({\mathcal T})^*$ is the moment map of the ${\mathcal T}$ action, then we have $$\mu(M)=\mu(M^\rho),$$ where $M^\rho$ is the fixed point set of $\rho$. \end{theo} Our Theorem \ref{firstmain} is an extension of this result. In this section we prove Theorem \ref{firstmain}. The considerations made in the introduction right before stating this theorem are in force here. We denote by $\frak{g}$ the Lie algebra of $G$ and choose an ${\rm Ad}(G)$ invariant inner product on $\frak{g}$ (e.g. the negative of the Killing form): if $X\in \frak{g}$ then $|X|$ denotes the length of $X$. We consider the action of $T$ on $\Omega(G)$ given by pointwise conjugation of loops, that is, \begin{equation}\label{conju}(t.\gamma)(\theta) =t\gamma(\theta) t^{-1},\end{equation} for all $\gamma\in \Omega(G)$, $t\in T$, and $\theta\in S^1$. There is also an action of $S^1$ on $\Omega(G)$, given by the rotation of loops. Concretely, if $e^{i\varphi}\in S^1$ and $\gamma\in \Omega(G)$, then \begin{equation}\label{ei}(e^{i\varphi}.\gamma)(\theta):=\gamma(\theta+\varphi)\gamma(\varphi)^{-1}\end{equation} for all $\theta\in S^1$. The details concerning the following results can be found for instance in \cite{At-Pr}.
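Note that both formulas above indeed produce based loops: by (\ref{conju}) we have $(t.\gamma)(0)=t\gamma(0)t^{-1}=tet^{-1}=e$, and by (\ref{ei}) we have $(e^{i\varphi}.\gamma)(0)=\gamma(\varphi)\gamma(\varphi)^{-1}=e$; thus the actions of $T$ and $S^1$ are well defined on $\Omega(G)$.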
First, the moment map of the $T$ action on $\Omega(G)$ is $p: \Omega(G) \to \frak{t}$ given by\footnote{The factors $\frac{1}{4\pi}$ in Equation (\ref{energy}) and $\frac{1}{2\pi}$ in Equation (\ref{projection}) are due to a canonical choice of the symplectic form on $\Omega(G)$, cf. e.g. \cite{At-Pr}.} $$p(\gamma) = \frac{1}{2\pi}\int_0^{2\pi} {\rm pr}_\frak{t} (\gamma(\theta)^{-1}\gamma'(\theta))d\theta ={\rm pr}_\frak{t}\left(\frac{1}{2\pi}\int_0^{2\pi} \gamma(\theta)^{-1}\gamma'(\theta)d\theta \right) ,$$ where $\frak{t}$ is the Lie algebra of $T$ and ${\rm pr}_\frak{t} : \frak{g} \to \frak{t}$ is the orthogonal projection. Second, the moment map of the $S^1$ action on $\Omega(G)$ is the energy functional $E:\Omega(G)\to \mathbb{R}$, \begin{equation}\label{energy}E(\gamma)=\frac{1}{4\pi}\int_0^{2\pi}|\gamma(\theta)^{-1}\gamma'(\theta)|^2 d\theta.\end{equation} The actions of $T$ and $S^1$ commute with each other. The moment map of the $T\times S^1$ action is $$\Phi=p\times E : \Omega(G)\to \frak{t} \oplus \mathbb{R}.$$ The following theorem was proved by Atiyah and Pressley in \cite{At-Pr}: \begin{theo}\label{atpr} {\rm (\cite{At-Pr})} We have $$\Phi(\Omega(G)) ={\rm cvx}\{\Phi(\lambda) \ : \ \lambda :S^1 \to T \ {\it is \ a \ group \ homomorphism } \}$$ where {\rm cvx} stands for convex hull. \end{theo} We note that the group homomorphisms $S^1 \to T$ are precisely the elements of $\Omega(G)$ which are fixed by the $T\times S^1$ action. An important ingredient of this section is the following result, which is a consequence of the convexity theorem of Terng (see \cite{Te-Convex}). We postpone its proof to Section \ref{secterng} below. \begin{theo}\label{terng} The space $\Phi(\Omega(G)^\tau)$ is a convex subset of $\frak{t}\oplus \mathbb{R}$. \end{theo} We are now ready to prove Theorem \ref{firstmain}. 
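Before giving the proof, let us record the value of $\Phi$ at a group homomorphism $\lambda : S^1\to T$. Writing $\lambda(\theta)=\exp(\theta X)$, where $X\in \frak{t}$ satisfies $\exp(2\pi X)=e$, we have $\lambda(\theta)^{-1}\lambda'(\theta)=X$ for all $\theta$, hence $$\Phi(\lambda)=\left(X, \frac{1}{2}|X|^2\right).$$ In particular, the points $\Phi(\lambda)$, and hence the vertices of the polyhedron $\Phi(\Omega(G))$, lie on a paraboloid in $\frak{t}\oplus \mathbb{R}$.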
\noindent{\it Proof of Theorem \ref{firstmain}.} First note that $$\Phi(\Omega(G)^\tau)\subset \Phi(\Omega(G)).$$ To prove the opposite inclusion, we note that if $\lambda : S^1\to T$ is a group homomorphism, then $\lambda \in \Omega(G)^\tau$. Indeed, for any $z \in S^1$ we have $$\tau(\lambda)(z)=\sigma(\lambda(\bar{z})) =\sigma(\lambda(z^{-1}))=\sigma(\lambda(z)^{-1}) =\lambda(z).$$ From Theorem \ref{atpr} we deduce that $\Phi(\Omega(G))$ is the convex hull of some points which are in $\Phi(\Omega(G)^\tau)$. Since the latter set is convex (by Theorem \ref{terng}), we deduce that $\Phi(\Omega(G))\subset \Phi(\Omega(G)^\tau)$. This finishes the proof. \hfill $\square$ \subsection{Convexity for $(\Omega(G)^\tau,A\times S^1)$}\label{secterng} The goal of this subsection is to prove Theorem \ref{terng}. In fact we will prove a stronger form of it. Namely, we consider the situation described in Remark 1 following Theorem \ref{firstmain} and we prove the following: \begin{theo}\label{terng2} The space $\Phi_A(\Omega(G)^\tau)$ is a convex subset of $\frak{a}\oplus \mathbb{R}$. \end{theo} The following expression of the moment map $\Phi_A: \Omega(G)\to \frak{a}\oplus \mathbb{R}$ will be needed in the proof (it can be deduced immediately from the description of $\Phi: \Omega(G)\to \frak{t}\oplus \mathbb{R}$ given in the previous subsection): we have $\Phi_A=p_A\times E$, where \begin{equation}\label{projection}p_A(\gamma) = \frac{1}{2\pi}\int_0^{2\pi} {\rm pr}_\frak{a} (\gamma(\theta)^{-1}\gamma'(\theta))d\theta ={\rm pr}_\frak{a}\left(\frac{1}{2\pi}\int_0^{2\pi}\gamma(\theta)^{-1}\gamma'(\theta)d\theta \right).\end{equation} We also need the following considerations, which can be found in \cite{Te-PolarActions}.
We consider the loop group $$L(G)=\{\gamma : S^1 \to G \ : \ \gamma \ {\rm of \ Sobolev \ class \ } H^1\}.$$ It acts by ``gauge transformations'' on the Hilbert space $H^0(S^1,\frak{g})$, by \begin{equation}\label{gam}\gamma \star u =\gamma u\gamma^{-1}-\gamma'\gamma^{-1},\end{equation} for all $\gamma\in L(G)$ and $u\in H^0(S^1,\frak{g})$. The stabilizer of the constant loop $0\in H^0(S^1,\frak{g})$ consists of all $\gamma \in L(G)$ with $\gamma'\gamma^{-1}=0$, which means that $\gamma$ is a constant loop in $G$. We deduce that the $L(G)$ orbit of $0$ can be identified with $L(G)/G$, which is the same as $\Omega(G)$. Henceforth we will make the identification \begin{equation}\label{ident}\Omega(G)=L(G)\star 0=\{\gamma'\gamma^{-1} \ : \ \gamma\in \Omega(G)\},\end{equation} which is a subspace of $H^0(S^1,\frak{g})$: more precisely, any based loop $\gamma:S^1\to G$ of Sobolev class $H^1$ is identified with its right logarithmic derivative $\gamma'\gamma^{-1}$, which is an element of $H^0(S^1,\frak{g})$. In this way, the moment map corresponding to the $T\times S^1$ action on $\Omega(G)$ is $\Phi : \Omega(G)\to \frak{t}\oplus \mathbb{R}$, \begin{equation}\label{phiu}\Phi(u) =(P_\frak{t}(u), \frac{1}{2}\|u\|^2),\end{equation} for all $u\in \Omega(G)$. Here we regard $\frak{t}$ as a subspace of $H^0(S^1,\frak{g})$ (consisting of constant loops) and we denote by $P_\frak{t} : H^0(S^1,\frak{g})\to \frak{t}$ the orthogonal projection with respect to the canonical inner product on $H^0(S^1,\frak{g})$. We recall that this is given by \begin{equation}\label{product}( u,v) = \frac{1}{2\pi}\int_0^{2\pi}\langle u(\theta),v(\theta)\rangle d\theta,\end{equation} for all $u,v\in H^0(S^1,\frak{g})$ (here $\langle \ , \ \rangle$ is the ${\rm Ad}(G)$ invariant inner product on $\frak{g}$ we chose at the beginning of this section). By $\| \cdot \|$ we denote the corresponding norm on $H^0(S^1,\frak{g})$.
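One can check directly that Equation (\ref{gam}) defines a left action: for all $\gamma_1,\gamma_2\in L(G)$ and $u\in H^0(S^1,\frak{g})$ we have \begin{align*} \gamma_1\star(\gamma_2\star u)&=\gamma_1(\gamma_2 u\gamma_2^{-1}-\gamma_2'\gamma_2^{-1})\gamma_1^{-1}-\gamma_1'\gamma_1^{-1}\\ &=(\gamma_1\gamma_2)u(\gamma_1\gamma_2)^{-1}-(\gamma_1'\gamma_2+\gamma_1\gamma_2')(\gamma_1\gamma_2)^{-1}=(\gamma_1\gamma_2)\star u. \end{align*}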
To justify Equation (\ref{phiu}), we show that \begin{align}{}&P_\frak{t}(u)=\frac{1}{2\pi}\int_0^{2\pi}{\rm pr}_\frak{t}(u(\theta))d\theta\label{uno}\\ {}& \|u\|^2=\frac{1}{2\pi}\int_0^{2\pi}|u(\theta)|^2d\theta\label{due}, \end{align} for all $u\in H^0(S^1,\frak{g})$ (see also Equations (\ref{energy}) and (\ref{projection})). Equation (\ref{due}) follows immediately from (\ref{product}). To prove (\ref{uno}), we consider an orthonormal basis $e_1,\ldots, e_r$ of $\frak{t}$, in the sense that $\langle e_i,e_j\rangle =\delta_{ij}$, for all $1\le i,j\le r$ (here $\delta_{ij}$ is the Kronecker delta). By using Equation (\ref{product}), we deduce that $(e_i,e_j)=\delta_{ij}$, for all $1\le i,j\le r$. Thus \begin{align*}{}&P_\frak{t}(u)=\sum_{i=1}^r(u,e_i)e_i = \sum_{i=1}^r\left(\frac{1}{2\pi}\int_0^{2\pi}\langle u(\theta),e_i\rangle d\theta\right) e_i\\{}& \ \ \ \ \ \ \ =\frac{1}{2\pi}\int_0^{2\pi}\sum_{i=1}^r\langle u(\theta),e_i\rangle e_i d\theta =\frac{1}{2\pi}\int_0^{2\pi}{\rm pr}_\frak{t}(u(\theta))d\theta. \end{align*} Equation (\ref{phiu}) is now completely justified. We recall now that $\sigma$ is an involution of $G$ whose fixed point set is $K$. We denote $$\hat{K}:=\{\gamma\in L(G) \ : \ \gamma(-\theta)=\sigma(\gamma(\theta)), \ {\rm for\ all \ } \theta\in S^1\}.$$ This is a subgroup of $L(G)$ which leaves invariant the closed vector subspace $$\hat{\frak{p}}(\frak{g},\sigma):=\{u\in H^0(S^1,\frak{g}) \ : \ u(-\theta) =-d\sigma_e(u(\theta)), \ {\rm for\ all \ } \theta\in S^1\}$$ of $H^0(S^1,\frak{g})$. As before, $\frak{a}$ is a maximal abelian subspace of $\frak{p}$. It can be made into a subspace of $\hat{\frak{p}}(\frak{g},\sigma)$ by regarding every element of $\frak{a}$ as a constant loop. In what follows we will need the notion of {\it isoparametric submanifold} in Hilbert space. 
By definition, this is a finite codimensional Riemannian submanifold for which the normal vector bundle is flat relative to the normal connection and satisfies some other assumptions: for instance, if $v$ is a parallel normal vector field on the manifold, then the shape operators $A_{v(p)}$ and $A_{v(q)}$ corresponding to any two points $p$ and $q$ on the manifold are orthogonally conjugate. For the exact definition we refer the reader to \cite[Section 6]{Te-Proper} (see also Chapter 7 of the monograph \cite{Pa-Te}). We note that any isoparametric submanifold induces a foliation of Hilbert space by parallel submanifolds\footnote{These are not necessarily isoparametric submanifolds.}, which we will call below the isoparametric foliation. \begin{propo}\label{isopara} (a) The orbits of the $\hat{K}$ action on $\hat{\frak{p}}(\frak{g},\sigma)$ are elements of an isoparametric foliation of the Hilbert space $\hat{\frak{p}}(\frak{g},\sigma)$. (b) There exists $a\in \frak{a}$ such that the orbit $\hat{K}\star a$ is an isoparametric submanifold of $\hat{\frak{p}}(\frak{g},\sigma)$. The normal space at $a$ to this submanifold is $\frak{a}$. \end{propo} \begin{proof} We use the following identifications (see also \cite[Remark 3.4]{Te-Variational}): \begin{align*}{}&\hat{K}=\{\gamma:[0,\pi]\to G \ : \ \gamma(0),\gamma(\pi)\in K\} =:P(G,K\times K)\\ {}&\hat{\frak{p}}(\frak{g},\sigma)=H^0([0,\pi], \frak{g}). \end{align*} By \cite[Theorem 1.2]{Te-PolarActions}, the action of $P(G,K\times K)$ on $H^0([0,\pi],\frak{g})$ given by (\ref{gam}) is polar (by definition, which can be found in full detail in \cite{Te-PolarActions}, this means essentially that there exists a section of this action, that is, a submanifold of $H^0([0,\pi],\frak{g})$ which meets all orbits of the action and meets them orthogonally). By \cite[Theorem 8.10]{Te-Proper}, the orbits of this action are an isoparametric foliation. In particular, the principal orbits are isoparametric submanifolds. 
We are looking for such orbits. To find them, we recall that the action of $K\times K$ on $G$ given by $$(k_1,k_2).g=k_1gk_2^{-1},$$ for all $k_1,k_2\in K$ and $g\in G$ is polar; a section of this action is $A=\exp(\frak{a})$ (cf. e.g. \cite{Co}). From \cite[Theorem 1.2]{Te-PolarActions} we deduce that $\frak{a}$ (the space of constant maps from $[0,\pi]$ to $\frak{a}$) is a section of the $P(G,K\times K)$ action on $H^0([0,\pi],\frak{g})$. To prove our proposition, we only need to pick $a\in \frak{a}$ a regular point (that is, one whose orbit is principal). Such a point exists due to the following criterion (see \cite[Theorem 1.2, (6)]{Te-PolarActions}): a point $a\in\frak{a}$ is regular for the $P(G,K\times K)$ action on $H^0([0,\pi],\frak{g})$ if and only if $\exp(a)$ is regular for the $K\times K$ action on $G$. Moreover, a general result says that any section of a polar action of a compact Lie group on a simply connected compact manifold contains regular points (see e.g. \cite[Theorem 1.6]{Te-PolarActions}). This finishes the proof. \end{proof} In order to prove Theorem \ref{terng} we will show that, via the identification (\ref{ident}), $\Omega(G)^\tau$ is the same as the element $\hat{K}\star 0$ of the isoparametric foliation in the previous proposition. Then we use the convexity theorem for isoparametric foliations of Terng \cite{Te-Convex}. For the moment, we will prove the following lemma. \begin{lemma} Take $\gamma\in \Omega(G)$ and denote $\gamma_0=\tau(\gamma)$. Then we have $${\gamma_0}'(\theta){\gamma_0}^{-1}(\theta)= -d\sigma_e(\gamma'(-\theta)\gamma^{-1}(-\theta))$$ for all $\theta\in S^1$. \end{lemma} \begin{proof} If $g\in G$, then the tangent space to $G$ at $g$ consists of vectors of the form $Xg=dR(g)_e(X)$, where $X\in T_eG$. Here $R(g):G\to G$ is the right multiplication by $g$. 
Moreover, we have $$d\sigma_g(Xg)=d\sigma_e(X)\sigma(g).$$ Indeed, $$d\sigma_g(Xg)=d\sigma_g(dR(g)_e(X)) =d(\sigma \circ R(g))_e(X) =d(R(\sigma(g))\circ \sigma)_e(X) =d\sigma_e(X)\sigma(g).$$ We deduce that \begin{align*}{}&{\gamma_0}'(\theta){\gamma_0}^{-1}(\theta) =d\sigma_{\gamma(-\theta)}(-\gamma'(-\theta)) \sigma(\gamma(-\theta)^{-1})\\ {}& =-d\sigma_{\gamma(-\theta)}(\gamma'(-\theta)\gamma(-\theta)^{-1}\gamma(-\theta)) \sigma(\gamma(-\theta)^{-1}) =-d\sigma_{e}(\gamma'(-\theta)\gamma(-\theta)^{-1}).\end{align*} \end{proof} From this lemma we deduce \begin{equation}\label{o}\Omega(G)^\tau=\{u\in \Omega(G) \ : \ -d\sigma_e(u(\theta))=u(-\theta) \ {\rm for \ all \ } \theta\in S^1\} =\Omega(G)\cap\hat{\frak{p}}(\frak{g},\sigma).\end{equation} This space is the same as the orbit $\hat{K}\star 0$, as the following lemma shows. \begin{lemma}\label{fixq} We have $$\Omega(G)^\tau=\hat{K}\star 0.$$ \end{lemma} \begin{proof} The inclusion $\hat{K}\star 0\subset \Omega(G)^\tau$ is clear, because $\hat{K}\star 0$ is a subset of both $\hat{\frak{p}}(\frak{g},\sigma)$ and $L(G)\star 0$. We now prove the reverse inclusion. Take $\gamma\in \Omega(G)^\tau$: by identifying it with the element $\gamma\star 0=\gamma'\gamma^{-1}$ of $H^0(S^1,\frak{g})$ and taking into account Equation (\ref{o}), we have $$d\sigma_e(\gamma'(\theta)\gamma^{-1}(\theta)) =-\gamma'(-\theta)\gamma^{-1}(-\theta),$$ for all $\theta\in S^1$. We show that $\gamma\in \hat{K}$, as follows. We have \begin{align*}{}&\frac{d}{d\theta}\sigma(\gamma(\theta)) \\=&d\sigma_{\gamma(\theta)}(\gamma'(\theta)) \\=&d\sigma_e(\gamma'(\theta)\gamma^{-1}(\theta))\sigma(\gamma(\theta)) \\=&-\gamma'(-\theta)\gamma^{-1}(-\theta)\sigma(\gamma(\theta)), \end{align*} which implies $$\frac{d}{d\theta}(\sigma(\gamma(\theta)))\sigma(\gamma(\theta))^{-1} =\frac{d}{d\theta}(\gamma(-\theta))\gamma(-\theta)^{-1}.$$ We deduce that the loops $\theta \mapsto \sigma(\gamma(\theta))$ and $\theta\mapsto \gamma(-\theta)$ are equal, since they satisfy the same ordinary differential equation and both take the value $e$ at $\theta=0$.
Thus $\gamma(-\theta)=\sigma(\gamma(\theta))$ for all $\theta\in S^1$; in other words, $\gamma \in \hat{K}$, and hence $\gamma\star 0\in \hat{K}\star 0$. \end{proof} We are now ready to prove our main result. \noindent {\it Proof of Theorem \ref{terng2}.} By the convexity theorem of Terng (see \cite[Theorem 1.6]{Te-Convex}), the image of the map $\Psi_A:\hat{K}\star 0 \to \frak{a} \oplus \mathbb{R}$ given by $$ \Psi_A(u)= (P_\frak{a}(u), \|u\|^2) $$ is a convex polyhedron in $\frak{a}\oplus \mathbb{R}$ (we are also using Proposition \ref{isopara}). Here $P_\frak{a} : \hat{\frak{p}}(\frak{g},\sigma)\to \frak{a}$ is the orthogonal projection with respect to the Hilbert space metric. By Lemma \ref{fixq}, $\Psi_A(\Omega(G)^\tau)$ is a convex polyhedron. If we now compare the map $\Psi_A$ with the moment map $\Phi_A=p_A \times E$ (see Equations (\ref{energy}) and (\ref{projection})), we note that the two maps are essentially the same. More specifically, by taking into account the identification given by (\ref{ident}), we have $$\Phi_A(u)=\left(P_\frak{a}(u), \frac{1}{2}\|u\|^2\right), $$ for all $u\in \Omega(G)^\tau$ (this can be proved in the same way as equation (\ref{phiu})). We deduce that the set $\Phi_A(\Omega(G)^\tau)$ is obtained from $\Psi_A(\Omega(G)^\tau)$ by the automorphism of $\frak{a}\oplus \mathbb{R}$ given by $$(a,r)\mapsto \left(a, \frac{1}{2}r\right)$$ for all $(a,r)\in \frak{a}\oplus \mathbb{R}$. Thus $\Phi_A(\Omega(G)^\tau)$ is a convex polyhedron as well. This finishes the proof. \hfill $\square$ \section{(Equivariant) cohomology ring of $\Omega(G/K)$}\label{lastsec} \subsection{(Equivariant) cohomology of $\Omega(G)^\tau$}\label{equivc} In this subsection we will prove Theorem \ref{secmain}. An important ingredient of the proof will be the space $\Omega_{\rm alg}(G)$ of algebraic loops in $G$.
By definition, this is $$\Omega_{\rm alg}(G)=L_{\rm alg}(G^{\mathbb{C}})\cap \Omega(G).$$ Here $G^{\mathbb{C}}$ denotes the complexification of the Lie group $G$ and $L_{\rm alg}(G^{\mathbb{C}})$ is the set of all (free) loops $\gamma : S^1\to G^\mathbb{C}$ which are restrictions of algebraic maps from $\mathbb{C}^*$ to $G^{\mathbb{C}}$. In the case when $G^{\mathbb{C}}$ is a subgroup of some general linear group $GL_n(\mathbb{C})$, elements of $L_{\rm alg}(G^{\mathbb{C}})$ are Laurent series of the form \begin{equation}\label{degree}\gamma(z)=\sum_{p=-k}^k z^pA_p,\end{equation} for some $k\ge 0$, where $A_p$ are elements of ${\rm Mat}^{n\times n}(\mathbb{C})$. For a fixed $k$, the space of all maps $\gamma$ of the form (\ref{degree}) is equipped with the standard metric topology which comes from its identification with $\left({\rm Mat}^{n\times n}(\mathbb{C})\right)^{2k+1}$; we denote by $\Omega_{\rm alg}^k(G)$ the space of all $\gamma$ of type (\ref{degree}) which map $S^1$ to $G$, and equip it with the subspace topology. We endow $\Omega_{\rm alg}(G)$ with the direct limit topology coming from the filtration $\{\Omega_{\rm alg}^k(G)\}_{k\ge 0}$. The following theorem has been proved by Mitchell in \cite{Mi} (see Theorem 4.1 and the theorem in the introduction of his paper, where the result is attributed to Quillen). Another proof can be found in \cite{Ko} (see Theorem 3.1.4 of that paper). \begin{theo}\label{mitc}{\rm (\cite{Mi}, \cite{Ko})} (a) The inclusion map $\Omega_{\rm alg}(G) \to \Omega(G)$ is a homotopy equivalence. (b) The automorphism $\tau$ of $\Omega(G)$ leaves $\Omega_{\rm alg}(G)$ invariant and the inclusion map $\Omega_{\rm alg}(G)^\tau \to \Omega(G)^\tau$ is a homotopy equivalence. \end{theo} The advantage of dealing with $\Omega_{\rm alg}(G)$ instead of $\Omega(G)$ is that the former space has a natural CW-decomposition. 
Its elements are the Bruhat cells, which are described in what follows (the details of this construction can be found in \cite[Sections 2 and 3]{Mi}). First we make the identification \begin{equation}\label{ome}\Omega_{\rm alg}(G)=L_{\rm alg}(G^{\mathbb{C}})/L^+_{\rm alg}(G^{\mathbb{C}})\end{equation} where $L^+_{\rm alg}(G^{\mathbb{C}})$ is the subgroup of $L_{\rm alg}(G^{\mathbb{C}})$ consisting of loops of the form (\ref{degree}) for some $k\ge 0$, where $A_p=0$ for all $p< 0$. We consider the roots of $G$ with respect to $T$, which are linear functions $\frak{t} \to \mathbb{R}$. The root space decomposition of $\frak{g}^{\mathbb{C}}:=\frak{g}\otimes \mathbb{C}$ is $$\frak{g}^{\mathbb{C}} = \frak{t}^\mathbb{C} \oplus \sum_\alpha \frak{g}^{\mathbb{C}}_\alpha,$$ where the sum runs over all the roots of $G$ with respect to $T$. We fix a simple root system $\alpha_1,\ldots,\alpha_\ell$ and denote by $B^-$ the (Borel) connected subgroup of $G^{\mathbb{C}}$ whose Lie algebra is $\frak{t}^{\mathbb{C}} \oplus \sum _\alpha \frak{g}^{\mathbb{C}}_{\alpha}$, where the sum runs over all negative roots $\alpha$. The Bruhat decomposition of $\Omega_{\rm alg}(G)$ is \begin{equation}\label{cw}\Omega_{\rm alg}(G)=\bigsqcup_\lambda {\mathcal B}\lambda \end{equation} where the union runs over all group homomorphisms $\lambda: S^1\to T$ such that $\lambda'(0)$ is in the closure of the fundamental Weyl chamber of $\frak{t}$. Here ${\mathcal B}$ is the subgroup of $L^+_{\rm alg}(G^{\mathbb{C}})$ consisting of all loops $\gamma$ of the form (\ref{degree}) for some $k\ge 0$, where $A_p=0$ for all $p< 0$ and $A_0 \in B^-$. The decomposition described by (\ref{cw}) is a CW decomposition (cf. e.g. \cite[Section 3]{Mi}). The orbits ${\mathcal B}\lambda$ are the Bruhat cells. Any Bruhat cell is homeomorphic to some complex vector space. Proposition \ref{mitch} below gives a more precise description of this homeomorphism. In order to state it, we need to make some more considerations.
First we note that the set of group homomorphisms $\lambda: S^1 \to T$ can be identified with the integer lattice $I=\ker(\exp : \frak{t} \to T)$. Let $W$ be the Weyl group of $G$. We recall that this is the group of linear transformations of $\frak{t}$ generated by the reflections about the hyperplanes $\ker\alpha_1$, $\ker\alpha_2,\ldots,\ker\alpha_\ell$; let us denote these reflections by $s_1,s_2, \ldots,s_\ell$. The affine Weyl group $\tilde{W}$ is the semidirect product $W\ltimes I$. It is the same as the group of affine transformations of $\frak{t}$ generated by $s_1,s_2,\ldots,s_\ell$, and $s_0$. Here $s_0$ is the reflection about the affine hyperplane $\{x\in \frak{t} \ : \ \alpha_0(x)=1\}$, where $\alpha_0$ is the highest root of $G$. To any $s\in \{s_0,s_1,\ldots,s_\ell\}$ we assign the subgroup $U_s$ of $L_{\rm alg}(G^{\mathbb{C}})$, as follows: \begin{itemize} \item For $j\in \{1,\ldots, \ell\}$ we have $U_{s_j}:=\exp (\frak{g}^{\mathbb{C}}_{\alpha_j})$ (its elements are constant loops in $G^{\mathbb{C}}$). Since $U_{s_j}$ is a unipotent group, the exponential map is an isomorphism between $U_{s_j}$ and its Lie algebra $\frak{g}^{\mathbb{C}}_{\alpha_j}$. More precisely, by fixing $E_{\alpha_j}$ a non-zero vector in $\frak{g}^{\mathbb{C}}_{\alpha_j}$, the map $\mathbb{C}\to U_{s_j}$ given by $x\mapsto \exp(x E_{\alpha_j})$ is a homeomorphism. \item $U_{s_0}$ consists of loops of the form $z\mapsto \exp(z^{-1} X)$, $z\in S^1$, where $X\in \frak{g}^{\mathbb{C}}_{-\alpha_0}$. Again, since $U_{s_0}$ is a unipotent group, the exponential map is an isomorphism between $U_{s_0}$ and $\frak{g}^{\mathbb{C}}_{-\alpha_0}$. By fixing again $E_{-\alpha_0}$ a non-zero vector in $\frak{g}^{\mathbb{C}}_{-\alpha_0}$, the map $\mathbb{C}\to U_{s_0}$ which assigns to $x\in \mathbb{C}$ the loop $z\mapsto \exp (z^{-1}xE_{-\alpha_0})$ is a homeomorphism.
\end{itemize} We mention without any further explanations that the groups $U_s$ are the root subgroups of $L_{\rm alg}(G^{\mathbb{C}})$ corresponding to a certain canonical simple affine root system of $G$ (note that the Lie algebra of $L_{\rm alg}(G^{\mathbb{C}})$ has a root decomposition labeled by the affine roots). Take $\lambda \in I =\tilde{W}/W$ and consider the element $\tilde{w}$ of $\tilde{W}$ which has minimal length (with respect to the generating set $s_0,s_1,\ldots,s_\ell$) and satisfies $\lambda = \tilde{w}W$. Let $\tilde{w}=s_{i_1}\ldots s_{i_k}$ be any reduced decomposition of $\tilde{w}$, where $i_1,\ldots,i_k\in \{0,1,\ldots,\ell\}$. The following result has been proved by Mitchell in \cite{Mi}: \begin{propo}{\rm (\cite{Mi})}\label{mitch} The map \begin{align}{}\label{iden}{}&\mathbb{C}^k=U_{s_{i_1}}\times \ldots \times U_{s_{i_k}}\to L_{\rm alg}(G^{\mathbb{C}})/L^+_{\rm alg}(G^{\mathbb{C}})=\Omega_{\rm alg}(G)\\{}& (u_1,\ldots,u_k)\mapsto u_1 \ldots u_k L^+_{\rm alg}(G^{\mathbb{C}})\nonumber \end{align} is a homeomorphism onto the Bruhat cell ${\mathcal B}\lambda$. \end{propo} Let $\sigma$ be the automorphism of $G$ defined in the introduction. We note that the involutive automorphism $\tau$ of $\Omega(G)$ given by (\ref{taug}) leaves $\Omega_{\rm alg}(G)$ invariant. To understand this, we first extend $\sigma$ to a group automorphism of $G^{\mathbb{C}}$, namely the one whose differential at the identity element is the anti-complex linear extension of the differential of the original $\sigma$. That is, we have \begin{equation}\label{conj}d\sigma_e(X+iY)=d\sigma_e(X)-id\sigma_e(Y),\end{equation} for all $X,Y\in \frak{g}$. Then we extend $\tau$ to a group automorphism of $L_{\rm alg}(G^{\mathbb{C}})$, namely the one described by Equation (\ref{taug}) with $\gamma$ in $L_{\rm alg}(G^{\mathbb{C}})$. 
This map leaves $L^+_{\rm alg}(G^{\mathbb{C}})$ invariant and induces the original automorphism $\tau$ of $\Omega_{\rm alg}(G)$ via the identification (\ref{ome}). We now consider the decomposition $\frak{g}=\frak{k}\oplus \frak{p}$, where $\frak{k}=\{X\in \frak{g} \ : \ d\sigma_e(X)=X\}$ and $\frak{p}=\{X\in \frak{g} \ : \ d\sigma_e(X)=-X\}$. Note that $\frak{t}$ is a subset of $\frak{p}$. The automorphism $d\sigma_e$ of $\frak{g}^{\mathbb{C}}$ has fixed point set equal to $\frak{g}_0:=\frak{k}+i\frak{p}$. The latter space is a real form of $\frak{g}^\mathbb{C}$. Any root $\alpha$ of $\frak{g}^\mathbb{C}$ with respect to $\frak{t}\otimes \mathbb{C}$ takes real values on the subspace $i\frak{t}$ of $\frak{g}_0$. This means that $\frak{g}_0$ is a split real form of $\frak{g}^{\mathbb{C}}$ (cf. e.g \cite[Section 26.1]{Fu-Ha}). We deduce that we have the splitting $$\frak{g}_0=i\frak{t} \oplus \sum \mathbb{R} E_\alpha,$$ where the sum runs over all the roots $\alpha$ of $G$ with respect to $T$ and $E_\alpha$ is a (nonzero) root vector for any such root $\alpha$. In constructing the groups $U_{s_0},U_{s_1},\ldots, U_{s_\ell}$ (see above) we use the vectors $E_{-\alpha_0}, E_{\alpha_1},\ldots, E_{\alpha_\ell}$ in the previous equation. We will prove the following result (see also \cite[Proof of Theorem 5.9]{Mi}). \begin{propo}\label{pr} Any Bruhat cell ${\mathcal B}\lambda$ in $\Omega_{\rm alg}(G)$ remains invariant under $\tau$. Moreover, via the homeomorphism $\mathbb{C}^k\simeq {\mathcal B}\lambda$ described by equation (\ref{iden}), $\tau$ acts on ${\mathcal B}\lambda$ by complex conjugation. \end{propo} \begin{proof} We have already seen that if $\lambda: S^1\to T$ is a group homomorphism then $\tau(\lambda)=\lambda$ (see the proof of Theorem \ref{firstmain} at the end of Section \ref{duis}). The automorphism $\tau$ leaves ${\mathcal B}$ invariant: this follows from the definition of ${\mathcal B}$ and the fact that the Borel subgroup $B^-$ is $\sigma$-invariant. 
Consequently, $\tau$ leaves the orbit ${\mathcal B}\lambda$ invariant. The homeomorphism $ U_{s_{i_1}}\times \ldots \times U_{s_{i_k}}\to {\mathcal B}\lambda$ described by Equation (\ref{iden}) is $\tau$-equivariant, where $\tau$ acts diagonally on the domain of the map. The reason is that $\tau$ is a group automorphism of $L_{\rm alg}(G^{\mathbb{C}})$. The last statement in the proposition follows from the fact that $\tau$ leaves $U_{s_j}$ invariant, for any $j\in \{0,1,\ldots,\ell\}$; moreover, via the identification $U_{s_j}=\frak{g}^{\mathbb{C}}_{\alpha_j}=\mathbb{C}$ (see above), $\tau$ acts as the complex conjugation. Indeed, if $j\neq 0$ then $\frak{g}^{\mathbb{C}}_{\alpha_j}=\mathbb{C} E_{\alpha_j}$ and by Equation (\ref{conj}), for any $x\in \mathbb{C}$ we have $$\tau(\exp(xE_{\alpha_j}))=\sigma(\exp(xE_{\alpha_j}))= \exp(d\sigma_e(xE_{\alpha_j}))= \exp(\overline{x}E_{\alpha_j});$$ for $j=0$, we use that for any complex number $x$, the loop $z\mapsto \exp (z^{-1}xE_{-\alpha_0})$ is mapped by $\tau$ to $$z\mapsto \sigma(\exp(zxE_{-\alpha_0}))=\exp(\overline{zx}E_{-\alpha_0}) =\exp(z^{-1}\overline{x}E_{-\alpha_0}).$$ \end{proof} Our proof of Theorem \ref{secmain} uses the notion of spherical conjugation complex, defined in \cite{Ha-Ho-Pu}. By definition, a spherical conjugation complex is a (finite or infinite) cell complex $X$ equipped with an involutive automorphism $\rho$ with the following properties: \begin{itemize} \item each cell in $X$ is a complex cell, that is, it is homeomorphic to $\mathbb{C}^k$, for some $k \in \mathbb{Z}$, $k\ge 0$ \item $\rho$ leaves each cell $\mathbb{C}^k$ invariant, acting on it as the complex conjugation. That is, we have $$\rho(z_1,\ldots, z_k) =(\bar{z}_1, \ldots, \bar{z}_k),$$ for all $(z_1,\ldots, z_k) \in \mathbb{C}^k$. \end{itemize} The following theorem has been proved in \cite[Sections 5 and 7]{Ha-Ho-Pu}.
\begin{theo}\label{hhp} {\rm (\cite{Ha-Ho-Pu})} Let $(X,\rho)$ be a spherical conjugation complex and denote by $X^\rho$ the fixed point set of $\rho$. Then we have as follows: (a) There exists a degree-halving ring isomorphism $H^{2*}(X;\mathbb{Z}_2) \simeq H^*(X^\rho;\mathbb{Z}_2)$. (b) Let ${\mathcal T}$ be a compact torus acting on $X$ such that the action is compatible with $\rho$, in the sense that $$\rho(tx)=t^{-1}\rho(x)$$ for all $t\in {\mathcal T}$ and all $x\in X$. Then there exists a degree-halving ring isomorphism $H^{2*}_{{\mathcal T}}(X;\mathbb{Z}_2)\simeq H^{*}_{{\mathcal T}_2}(X^\rho;\mathbb{Z}_2)$. Here ${\mathcal T}_2$ denotes the set of all $t \in {\mathcal T}$ with $t^2=1$. \end{theo} Without any further comments we mention that the key point of this theorem is that a spherical conjugation complex is a conjugation space (for the definition of this notion, see \cite{Ha-Ho-Pu}). We are now ready to give the desired proof: \noindent {\it Proof of Theorem \ref{secmain}.} By Proposition \ref{pr}, $\Omega_{\rm alg}(G)$ together with the involution $\tau$ is a spherical conjugation complex. Theorem \ref{hhp} (a) implies that we have a ring isomorphism $$H^{2*}(\Omega_{\rm alg}(G)) \simeq H^*(\Omega_{\rm alg}(G)^\tau).$$ Combined with Theorem \ref{mitc}, this implies point (a) of Theorem \ref{secmain}. Point (b) follows from the fact that the actions of $T\times S^1$ and $\tau$ on $\Omega_{\rm alg}(G)$ are compatible, see Proposition \ref{propo} below. We use Theorem \ref{hhp} (b) and again Theorem \ref{mitc}. \hfill $\square$ \subsection{The Bott-Samelson theorem for $\Omega(G/K)$}\label{two} Throughout this subsection $G$ will be a compact connected simply connected Lie group and $\sigma$ an arbitrary involutive automorphism of $G$. The notations established in Remark 1 following Theorem \ref{firstmain} are in force. 
We consider again the group $K=G^\sigma$ and the homogeneous space $G/K$, which has a canonical structure of a Riemannian symmetric space. Let us also consider the loop space\footnote{The reason why the loops in this definition are defined on $[0,\pi]$, and not on $[0,2\pi]$ or $S^1=\mathbb{R}/2\pi \mathbb{Z}$, as usual, will be understood later (see the proof of Proposition \ref{equiv}).} $$\Omega(G/K):=\{\mu : [0,\pi] \to G/K \ : \ \mu {\rm \ is \ of \ Sobolev \ class \ }H^1 \ {\rm and \ } \mu(0)=\mu(\pi)=eK\} $$ where $eK$ denotes the coset of $e$ in $G/K$. Consider the group $$A_2:=\{a\in A \ : \ a^2=e\}.$$ In this subsection we will define $A_2\times \mathbb{Z}_2$ actions on $\Omega(G/K)$ and $\Omega(G)^\tau$, show that these two spaces are equivariantly homotopy equivalent, and finally prove Corollary \ref{bottsam}. We first note that $A_2=A\cap K.$ This can be justified as follows: if $a\in A$ then $a=\exp (X)$ where $X\in \frak{a}$, so $\sigma(a)=a^{-1}$; consequently $$a\in K \Leftrightarrow \sigma(a)=a \Leftrightarrow a^{-1}=a \Leftrightarrow a^2=e.$$ The group $A_2$ acts on $\Omega(G/K)$ by pointwise multiplication of the loops from the left: $$(a.\mu)(\theta) = a\mu(\theta),$$ for all $a\in A_2$, $\mu\in \Omega(G/K)$ and $\theta \in [0,\pi]$. There is also an action of $\mathbb{Z}_2$ on $\Omega(G/K)$, which is more subtle. It is determined by the involutive automorphism $\mu\mapsto \tilde{\mu}$ of $\Omega(G/K)$, defined below. We first prove a lemma: \begin{lemma}\label{lemmafirst}Any loop $\mu \in \Omega(G/K)$ can be written as $$\mu(\theta) = \gamma(\theta)K,$$ where $\gamma : [0,\pi]\to G$ is an $H^1$ map such that $\gamma(0)=e$ and $\gamma(\pi)\in K$. \end{lemma} \begin{proof} We use the Path Lifting Theorem (cf. e.g. \cite[Theorem 3.4.30]{Ab-Ma-Ra}) for the locally trivial bundle $G\to G/K$. 
\end{proof} \begin{definition}\label{dfirst} Let $\mu\in \Omega(G/K)$ be of the form $\mu(\theta)=\gamma(\theta)K$, $\theta\in [0,\pi]$, like in the previous lemma. We define $\tilde{\mu}$ by $$\tilde{\mu}(\theta):=\sigma(\gamma(\pi-\theta))K,$$ $\theta\in [0,\pi]$. \end{definition} We first verify that the map $\mu\mapsto \tilde{\mu}$ is independent of the choice of $\gamma$: if $\gamma_1$ is another representative of $\mu$, that is, if $\gamma_1(\theta)=\gamma(\theta)k$, for some $k\in K$, then $$\sigma(\gamma_1(\pi-\theta))K=\sigma(\gamma(\pi-\theta)k)K =\sigma(\gamma(\pi-\theta))\sigma(k)K =\sigma(\gamma(\pi-\theta))kK =\sigma(\gamma(\pi-\theta))K.$$ Next we verify that the map $\mu\mapsto \tilde{\mu}$ is involutive, that is $\tilde{\tilde{\mu}}=\mu$. To do this, we write $$\tilde{\mu}(\theta):=\sigma(\gamma(\pi-\theta))\gamma(\pi)^{-1}K,$$ and deduce that $$\tilde{\tilde{\mu}}(\theta):=\sigma(\sigma(\gamma(\pi-(\pi-\theta))\gamma(\pi)^{-1}))K =\gamma(\theta)K=\mu(\theta).$$ In this way we have defined our $\mathbb{Z}_2$ action on $\Omega(G/K)$. \begin{lemma}\label{commute} The $A_2$ and $\mathbb{Z}_2$ actions on $\Omega(G/K)$ defined above commute with each other and thus define an action of $A_2\times \mathbb{Z}_2$. \end{lemma} \begin{proof} Take $a\in A_2$ and $\mu\in \Omega(G/K)$ of the form $\mu(\theta)=\gamma(\theta)K$, as in Lemma \ref{lemmafirst}. Since $a\in K$, we can write $$(a.\mu)(\theta) = a\mu(\theta) = a\gamma(\theta)K = a\gamma(\theta)a^{-1}K.$$ Then \begin{align*}{}&\widetilde{(a\mu)}(\theta) = \sigma(a\gamma(\pi-\theta)a^{-1})K =\sigma(a)\sigma(\gamma(\pi-\theta))\sigma(a^{-1})K \\ {}& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =a\sigma(\gamma(\pi-\theta))a^{-1}K=a\sigma(\gamma(\pi-\theta))K =a\tilde{\mu}(\theta).\end{align*} \end{proof} We consider again the action of $A\times S^1$ on $\Omega(G)$ given by Equations (\ref{conju}) and (\ref{ei}). 
We also recall (see Equation (\ref{taug})) that $\tau$ is the involutive automorphism of $\Omega(G)$ given by $$\tau(\gamma)(\theta) = \sigma(\gamma(-\theta)),$$ $\theta\in S^1$. The following proposition shows that the $A\times S^1$ action and the involution $\tau$ are compatible in the sense of Duistermaat \cite{Du}. \begin{propo}\label{propo} We have $$\tau((a,z).\gamma)=(a^{-1},z^{-1}).\tau(\gamma)$$ for any $\gamma\in \Omega(G)$ and any $(a,z)\in A\times S^1$. \end{propo} \begin{proof} We treat the $A$ and $S^1$ actions separately. First, if $a\in A$ then we have $\sigma(a)=a^{-1}$, thus $$\tau(a.\gamma)(\theta) =\sigma(a\gamma(-\theta)a^{-1}) = \sigma(a)\sigma(\gamma(-\theta))\sigma(a^{-1}) = a^{-1}\sigma(\gamma(-\theta))a = (a^{-1}.\tau(\gamma))(\theta).$$ Second, if $z=e^{i\varphi}$, then \begin{align*}\tau(z.\gamma)(\theta) &= \sigma(\gamma(-\theta+\varphi)\gamma(\varphi)^{-1}) = \sigma(\gamma(-\theta+\varphi))\sigma(\gamma(\varphi)^{-1}) \\ &= \tau(\gamma)(\theta-\varphi)\tau(\gamma)(-\varphi)^{-1} = (z^{-1}.\tau(\gamma))(\theta).\end{align*} \end{proof} We immediately deduce the following. \begin{coro}\label{coro} The fixed point set $$\Omega(G)^\tau:=\{\gamma\in \Omega(G) \ : \ \sigma(\gamma(\theta))=\gamma(-\theta), \theta\in S^1\}$$ is invariant under the action of $A_2\times \mathbb{Z}_2=\{(a,z)\in A\times S^1 \ : \ a^2=1, z=1 \ {\rm or} \ z=-1\}$. \end{coro} The following proposition makes the connection between the spaces $\Omega(G)^\tau$ and $\Omega(G/K)$. It is an equivariant version of a result whose origins go back to Bott and Samelson \cite{Bo-Sa} (see also \cite{Mi}, \cite{Ko}). \begin{propo}\label{equiv} There is a homotopy equivalence between $\Omega(G/K)$ and $\Omega(G)^\tau$ which is equivariant with respect to the $A_2\times \mathbb{Z}_2$ actions defined in Lemma \ref{commute} and Corollary \ref{coro}.
\end{propo} \begin{proof} We use the idea of \cite[Proposition 3.1.3]{Ko} (see also \cite[Section 5]{Mi}). The homotopy equivalence is the map $F:\Omega(G)^\tau \to \Omega(G/K)$ given by $$F(\gamma):=\gamma|_{[0,\pi]}K.$$ This map is well defined since if $\gamma$ is in $\Omega(G)^\tau$ then $\gamma(\pi)=\sigma(\gamma(\pi))$, thus $\gamma(\pi)\in K$ and consequently $\gamma(\pi)K=\gamma(0)K = eK.$ To prove that $F$ is a homotopy equivalence, we note that we can identify $\Omega(G)^\tau$ with the space of all paths $\beta:[0,\pi]\to G$ with $\beta(0)=e$ and $\beta(\pi)\in K$. The map $F$ is given by $\beta\mapsto \beta K$, for all paths $\beta$ as above. This is a principal bundle whose fiber is the group $\{\beta: [0,\pi]\to K\ : \ \beta(0)=e\}$. Since the latter space is contractible, $F$ is a homotopy equivalence, as desired. It remains to show that $F$ is $A_2\times \mathbb{Z}_2$ equivariant. Only the $\mathbb{Z}_2$-equivariance is non-trivial. Let us consider $\gamma\in \Omega(G)^\tau$ and verify that $$F((-1).\gamma)=\widetilde{F(\gamma)}.$$ Here the loop $(-1).\gamma$ is given by $$((-1).\gamma)(\theta) =\gamma(\theta+\pi)\gamma(\pi)^{-1},$$ for all $\theta\in S^1$, see Equation (\ref{ei}). Thus we have $$F((-1).\gamma)(\theta) = \gamma(\theta+\pi)\gamma(\pi)^{-1}K =\gamma(\theta+\pi)K,$$ since $\gamma(\pi)\in K$ (see above). On the other hand, for any $\theta\in S^1$ we have $$\widetilde{F(\gamma)}(\theta) = \sigma(\gamma(\pi-\theta))K =\gamma(\theta-\pi)K=\gamma(\theta+\pi)K.$$ Here we have used that $\tau(\gamma)=\gamma$, which implies that $\sigma(\gamma(\pi-\theta)) =\gamma(\theta-\pi)$. \end{proof} Finally we can spell out the details of the proof of Corollary \ref{bottsam}: it follows from Theorem \ref{secmain} by using Proposition \ref{equiv} above (in the particular situation when $A=T$). 
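As a small illustration of Proposition \ref{equiv} and Theorem \ref{secmain} (this example is included only for orientation and is not taken from the text above), consider $G=SU(2)$ with $\sigma$ the complex conjugation, so that $K=SO(2)$ and $G/K=S^2$. Theorem \ref{secmain} then predicts a degree-halving ring isomorphism $$H^{2*}(\Omega S^3;\mathbb{Z}_2)\simeq H^{*}(\Omega S^2;\mathbb{Z}_2),$$ which is consistent with the $\mathbb{Z}_2$ Poincar\'e series $(1-t^2)^{-1}$ of $\Omega S^3$ and $(1+t)(1-t^2)^{-1}=(1-t)^{-1}$ of $\Omega S^2$: the first space has a one-dimensional mod 2 cohomology group in each even degree, the second in every degree.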
\section{Examples and counterexamples}\label{lasts} \subsection{Examples} The basic assumption of this paper is that the involutive automorphism $\sigma$ of the simply connected and compact Lie group $G$ satisfies $\sigma(t)=t^{-1}$ for all $t$ in a maximal torus $T\subset G$. In other words, if $K$ denotes the fixed point set of $\sigma$, the Riemannian symmetric pair $(G,K)$ is of maximal rank: by this we mean that the rank of the symmetric space $G/K$ is equal to the rank of $G$ (not to be confused with the situation when the homogeneous space $G/K$ has maximal rank, which means ${\rm rank} \ G={\rm rank } \ K$). Each Lie group $G$ as above has essentially one such involution $\sigma$. In the following table we describe $\sigma$ when $G$ is one of the classical simply connected compact Lie groups: in each case it is sufficient to describe the automorphism $\theta:=d\sigma_e$ of $\frak{g}$. We are using \cite[Chapter X, Section 2, Subsection 3]{He}. \begin{tabular}{|l|l|l|} \hline $G$& $\frak{g}$ & $\theta:=d\sigma_e$ \\ \hline $SU(n)$& $\frak{s}\frak{u}(n)$: $n\times n$ complex skew-Hermitean\newline & $\theta(X)=\overline{X}$ \\ & matrices $X$ & (complex conjugation)\\ \hline $Spin(2n)$& $\frak{s}\frak{o}(2n)$: $2n\times 2n$ real skew-symmetric & $\theta(X)=I_{n,n}XI_{n,n}$,\\ & matrices $X$ & where $I_{n,n}:= \left( \begin{array}{ccc} -I_n & 0\\ 0 & I_n \end{array} \right)$ \\ \hline $Spin(2n+1)$& $\frak{s}\frak{o}(2n+1)$: $(2n+1)\times (2n+1)$ real & $\theta(X)=I_{n+1,n}X I_{n+1,n}$,\\ &skew-symmetric matrices $X$ &where $I_{n+1,n}:= \left( \begin{array}{ccc} -I_{n+1} & 0\\ 0 & I_{n} \end{array} \right)$ \\ \hline $Sp(n)$ &$\frak{s}\frak{p} (n)$: $X= \left( \begin{array}{ccc} Z_{11} & Z_{12}\\ -\overline{Z}_{12} & Z_{22} \end{array} \right),$& $\theta(X)=J_nXJ_n^{-1}$, \\ & where $Z_{ij}$ are $n\times n$ complex matrices, &where $J_{n}:= \left( \begin{array}{cccc} 0 & I_n\\ -I_n & 0 \end{array} \right)$ \\ & $Z_{11}$ and $Z_{22}$ skew-Hermitean,& \\ & and 
$Z_{12}$ symmetric& \\ \hline \end{tabular} \noindent The pairs $(G,\sigma)$ in the table above correspond to the symmetric spaces $G/K$ of type $AI$, $BDI$ (with $p=q$ or $p=q+1$), and $CI$: for the meaning of these types, that is, for the classification of the irreducible Riemannian symmetric spaces, we refer the reader to \cite[Table V, p. 518]{He} or \cite[Table 2, pp. 312-313]{Be}. For the exceptional Lie groups, one can also consult the last two tables: the maximal rank types are $EI, EV, EVIII, FI,$ and $G$. \subsection{Counterexamples} In the remaining part of this section we will show that the hypothesis which says that the pair $(G,K)$ is of maximal rank is essential for the two main results of the paper. The notations established in Remark 1 following Theorem \ref{firstmain} are in force here. Let us start with Theorem \ref{firstmain}. We show that there exist simply connected compact Lie groups $G$ with an involution $\sigma$ and a maximal torus $T\subset G$ such that $\Phi(\Omega(G)^\tau)$ is strictly contained in $\Phi(\Omega(G))$. We first recall that, in general, the vertices of the polyhedron $\Phi(\Omega(G))$ in $\frak{t} \oplus \mathbb{R}$ are $\Phi(\gamma_\xi)=(\xi,\frac{1}{2}|\xi|^2)$, where $\xi $ is in the integral lattice $I$ of $T$ and $\gamma_\xi:S^1\to T$, $\gamma_\xi(\theta)=\exp(\theta\xi)$, for all $\theta\in S^1$, is the corresponding group homomorphism (see \cite[Section 1, Remark 2]{At-Pr}). Pick $\xi_0$ in the integer lattice $I$ such that $d\sigma_e(\xi_0)\neq -\xi_0$ (we will comment below on the existence of such $\xi_0$). Let $\gamma_0:S^1\to T$, $\gamma_0(\theta)=\exp(\theta\xi_0)$ be the corresponding group homomorphism and consider $\Phi(\gamma_0)=(\xi_0,\frac{1}{2}|\xi_0|^2)$. Assume that there exists $\gamma \in \Omega(G)^\tau$ such that $\Phi(\gamma) =\Phi(\gamma_0)$. 
Then $\Phi(\gamma)$ lies on the paraboloid of equation $E=\frac{1}{2}|p|^2$ in $\frak{t} \oplus \mathbb{R}$, hence $\gamma$ must be a group homomorphism $S^1\to T$ (by \cite[Section 1, Remark 3]{At-Pr}). Thus $\gamma$ is of the form $\gamma(\theta)=\exp(\theta \xi)$, for all $\theta\in S^1$, where $\xi\in I$. A simple calculation shows that the condition $\tau(\gamma)=\gamma$ implies $d\sigma_e(\xi)=-\xi$, thus $\xi\in \frak{a}$: indeed, $\tau(\gamma)(\theta)=\sigma(\exp(-\theta\xi))=\exp(-\theta \, d\sigma_e(\xi))$, which equals $\gamma(\theta)=\exp(\theta\xi)$ for all $\theta$ if and only if $d\sigma_e(\xi)=-\xi$. From $\Phi(\gamma)=\Phi(\gamma_0)$ we deduce $$(\xi,\frac{1}{2}|\xi|^2)=(\xi_0,\frac{1}{2}|\xi_0|^2)$$ thus $\xi=\xi_0$, which contradicts $d\sigma_e(\xi_0)\neq -\xi_0$. One can easily find examples of symmetric spaces $G/K$ for which there exists $\xi_0\in I$ with $d\sigma_e(\xi_0)\neq -\xi_0$. For example, one can take $$\mathbb{C} P^{n-1}=SU(n)/S(U(1)\times U(n-1)).$$ This is a rank 1 symmetric space (cf. e.g. \cite[Chapter X, Section 6, Table V]{He} or \cite[Example 6.6]{Mi}). Recall that the rank of a general symmetric space $G/K$ is equal to the dimension of $\frak{a}$ (cf. e.g. \cite[Chapter V, Section 6]{He}, see also Remark 1 following Theorem \ref{firstmain}). Thus, in the case at hand we have $\dim \frak{a}=1$. We can extend $\frak{a}$ to a maximal abelian subspace of ${\rm Lie}(SU(n))$, call it $\frak{t}$, which is $d\sigma_e$ invariant and such that $$\frak{a}=\{x\in \frak{t} \ : \ d\sigma_e(x)=-x\}.$$ Put $T=\exp(\frak{t})$, which is a maximal torus in $SU(n)$. It is clear that if $n\ge 3$, then $\dim \frak{t}=n-1$ is at least 2, and so not all elements of the integer lattice of $T$ lie in $\frak{a}$. We note that the pair $(SU(n), S(U(1)\times U(n-1)))$ is far from being of maximal rank, as ${\rm rank } \ SU(n)=n-1$, whereas ${\rm rank} \ \mathbb{C} P^{n-1}=1$. An even more extreme example is given by the pair $(Sp(n), U(n))$ (see \cite[Chapter X, Section 2, Subsection 3]{He}). This pair is of maximal rank: indeed, ${\rm rank} \ Sp(n)/U(n)= {\rm rank} \ Sp(n)=n$.
Hence there exists a torus $T\subset Sp(n)$ with $\sigma(t)=t^{-1}$, for all $t\in T$: Theorem \ref{firstmain} applies in this situation. However, we also have ${\rm rank} \ Sp(n)={\rm rank} \ U(n) =n$, thus there exists another maximal torus in $Sp(n)$, call it $T'$, such that $T'\subset U(n)$. This implies that $d\sigma_e(\xi)=\xi$, for all $\xi \in {\rm Lie}(T')$; thus $d\sigma_e(\xi)\neq -\xi$, unless $\xi=0$. Let us now turn to Theorem \ref{secmain}. This time we show that there exists a simply connected compact Lie group $G$ with an involution $\sigma$ such that $\dim H^{2q}(\Omega(G);\mathbb{Z}_2)\neq \dim H^q(\Omega(G)^\tau;\mathbb{Z}_2)$, for some $q\ge 0$. Indeed, let us consider again the pair $(SU(n), S(U(1)\times U(n-1)))$: the corresponding symmetric space is $SU(n)/S(U(1)\times U(n-1))=\mathbb{C} P^{n-1}$ (see above). The $\mathbb{Z}_2$ Poincar\'e series of $\Omega(SU(n))^\tau$ and $\Omega(\mathbb{C} P^{n-1})$ are the same, being equal to ${(1+t)}{(1-t^{2n-2})^{-1}}$ (see \cite[Section 6, Example 6.6]{Mi}). The $\mathbb{Z}_2$ Poincar\'e series of $\Omega(SU(n))$ is $[(1-t^2)(1-t^4) \ldots (1-t^{2n-2})]^{-1}$ (cf. e.g. \cite[Equation (13.2)]{Bo-Sa}). Thus if $n\ge 3$, then we have $\dim H^4(\Omega(SU(n));\mathbb{Z}_2)=2$, whereas $\dim H^2(\Omega(SU(n))^\tau;\mathbb{Z}_2)=0$. \bibliographystyle{abbrv}
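As a quick sanity check of the last dimension count, one can expand both $\mathbb{Z}_2$ Poincar\'e series as truncated power series and compare coefficients for $n=3$. The short script below is an illustrative aside (the helper names are ours, not part of the paper).

```python
def geometric(k, N):
    # Coefficients of the power series 1/(1 - t^k), truncated at degree N.
    c = [0] * (N + 1)
    for j in range(0, N + 1, k):
        c[j] = 1
    return c

def mult(a, b, N):
    # Product of two truncated power series, up to degree N.
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j <= N:
                c[i + j] += ai * bj
    return c

n, N = 3, 8
# Z_2 Poincare series of Omega(SU(n)): prod_{k=1}^{n-1} 1/(1 - t^{2k}).
omega = [1] + [0] * N
for k in range(1, n):
    omega = mult(omega, geometric(2 * k, N), N)
# Z_2 Poincare series of Omega(SU(n))^tau: (1 + t)/(1 - t^{2n-2}).
fixed = mult([1, 1] + [0] * (N - 1), geometric(2 * n - 2, N), N)

print(omega[4], fixed[2])  # 2 0: dim H^4 of Omega(SU(3)) vs dim H^2 of the fixed point set
```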
Real loci of based loop groups, arXiv:0903.0840 (math.DG; math.SG), https://arxiv.org/abs/0903.0840, 2009.
https://arxiv.org/abs/2002.03001
A Novel Evolution Strategy with Directional Gaussian Smoothing for Blackbox Optimization
We propose an improved evolution strategy (ES) using a novel nonlocal gradient operator for high-dimensional black-box optimization. Standard ES methods with $d$-dimensional Gaussian smoothing suffer from the curse of dimensionality due to the high variance of Monte Carlo (MC) based gradient estimators. To control the variance, Gaussian smoothing is usually limited in a small region, so existing ES methods lack nonlocal exploration ability required for escaping from local minima. We develop a nonlocal gradient operator with directional Gaussian smoothing (DGS) to address this challenge. The DGS conducts 1D nonlocal explorations along $d$ orthogonal directions in $\mathbb{R}^d$, each of which defines a nonlocal directional derivative as a 1D integral. We then use Gauss-Hermite quadrature, instead of MC sampling, to estimate the $d$ 1D integrals to ensure high accuracy (i.e., small variance). Our method enables effective nonlocal exploration to facilitate the global search in high-dimensional optimization. We demonstrate the superior performance of our method in three sets of examples, including benchmark functions for global optimization, and real-world science and engineering applications.
\section{Introduction} \label{sec:intro} Evolution strategy (ES) is a type of evolutionary algorithms for black-box optimization, where we search for the optima of a $d$-dimensional loss function $F(\bm x)$ given access to only its function queries. This is motivated by several applications where the loss function's gradient is inaccessible, e.g., in optimizing neural network architecture \cite{Real_ICML17,Miikkulainen_ICML17}, reinforcement learning \cite{Houthooft18,Ha_Schmidhuber,SHCS17}, and design of adversarial attacks to deep networks \cite{10.1145/3128572.3140448}. There are several types of evolutionary algorithms, including genetic algorithms \cite{10.5555/534133}, differential evolution \cite{DAS20161}, natural evolution strategies \cite{Wierstra_NES_14}, neuroevolution \cite{Stanley_neuroevolution}, and covariance matrix adaptation ES (CMA-ES) \cite{Hansen_Ostermeier_CMA_01,Hansen_CMA, 1554902,8410043, 10.1145/2908812.2908863}. In this work, we consider a particular class of ES that is based on Gaussian smoothing (GS) (e.g., \cite{SHCS17, Mania2018SimpleRS, Maheswaranathan_GuidedES, Choromanski_ES-Active}). GS-based ES first smooths the landscape of the loss function with $d$-dimensional Gaussian convolution and then estimates the gradient of the {smoothed} loss function using a random population generated by Monte Carlo (MC) sampling. GS-based ES is a promising strategy to handle loss landscapes possessing many local minima. Theoretically, the Gaussian convolution with a large smoothing radius enables {nonlocal} exploration that reduces the local minima effect and improves global structure characterization. However, practically, such nonlocal exploration is limited in low- to medium-dimensional settings because of the high variance (i.e., low accuracy) of MC estimation. When GS has a large smoothing radius in a high-dimensional space, the exploration domain (i.e., the high probability region) will be significantly large. 
To reduce the variance and thus ensure estimation accuracy, MC sampling requires either a prohibitively large number of function evaluations or a very small smoothing radius. Given the computing constraint of the former, the latter is usually applied, but this results in local exploration. Therefore, GS-based ES has been mostly used to estimate {\em local} gradients in high-dimensional black-box optimization. Several studies have been performed to improve the gradient estimation in GS-based ES. Most of them center on enhancing MC estimators, such as by variance reduction strategies \cite{MWDS18,CRSTW18,10.5555/3326943.3326962}, exploiting historical data \cite{Maheswaranathan_GuidedES,Meier_OPTRL_2019}, employing active subspaces \cite{Choromanski_ES-Active}, and searching on latent low-dimensional manifolds \cite{Sener2020Learning}. Despite some improvements, these techniques do not fundamentally resolve the limitations of GS-based ES caused by MC estimation. We develop a novel nonlocal gradient operator based on {directional Gaussian smoothing} (DGS), and use it to replace the standard GS operator in ES to enhance nonlocal exploration in high dimensions. We call our new operator the DGS gradient and the DGS-based ES algorithm the DGS-ES method. The key idea of the DGS gradient is to conduct 1D nonlocal explorations along $d$ orthogonal directions in $\mathbb{R}^d$, each of which defines a nonlocal directional derivative as a 1D integral. Then we use Gauss-Hermite quadrature, instead of MC sampling, to estimate the $d$ 1D integrals to ensure high accuracy. Next, the estimated directional derivatives are assembled to form the DGS gradient. Compared with existing methods, our DGS-ES approach can achieve long-range exploration by using a large smoothing radius, while maintaining high gradient estimation accuracy through accurate integral evaluation.
The proposed DGS gradient is a new {\em operator} for identifying search directions, not an estimator of the local gradient. In the local setting (the smoothing radius approaching zero), we verify that the DGS-gradient operator is consistent with the GS-based gradient. However, in the nonlocal setting, the DGS gradient differs significantly from the GS-based gradient in that an accurate estimator of the DGS gradient remains feasible even with a large smoothing radius, enabling nonlocal exploration. \textbf{Summary of contributions.} Our contribution in this paper is threefold: (1) we develop the DGS gradient operator for effective nonlocal exploration in ES, which advances the global search in high-dimensional black-box optimization; (2) we develop an accurate estimator for the DGS gradient using Gauss-Hermite quadrature, which accelerates the convergence of ES; and (3) we demonstrate the superior performance of our method on both high-dimensional, non-convex benchmark optimization problems, and two real-world science and engineering applications. \textbf{Related works.} The literature on black-box optimization is extensive. We review three types of methods that are closely related to this work (see \cite{RiosSahinidis13,Larson_et_al_19} for thorough reviews). (1) \textit{Random search}. These methods randomly generate the next search direction and estimate the corresponding directional derivative, or perform direct search, for the updates. Examples are two-point approaches \cite{FKM05,NesterovSpokoiny15,Duchi2015OptimalRF, MAL-024}, coordinate-descent algorithms \cite{10.5555/2999325.2999433}, three-point methods \cite{Bergou2019}, and binary search with adaptive radius \cite{Golovin2020GradientlessDH}.
From a theoretical perspective, an analysis of two-point schemes based on GS is presented in the seminal paper \cite{NesterovSpokoiny15}, extended in \cite{doi:10.1137/120880811} to non-convex and in \cite{10.5555/3122009.3153008} to non-smooth loss functions. Existing studies also focus on estimating local derivatives rather than nonlocal exploration. (2) {\em Local gradient estimation}. The most straightforward way is to use finite differences. An alternative is to use linear interpolation in a small neighborhood of the current state to estimate local gradients \cite{BCCS19b}. Another way is to estimate the local gradient by averaging multiple directional estimates obtained by two-point schemes; the GS-based ES methods \cite{SHCS17,Mania2018SimpleRS, Maheswaranathan_GuidedES,Choromanski_ES-Active} belong to this category. It is possible to augment ES by integrating the estimated gradient with new gradient-based algorithms, such as ADMM \cite{2017arXiv171007804L}, the adaptive momentum method \cite{Chen2019ZOAdaMMZA}, and conditional gradient \cite{10.5555/3327144.3327264}. A comparison of local gradient estimation methods can be found in \cite{BCCS19}. (3) \textit{Smoothing techniques}. Sphere smoothing is a method similar to GS and was discussed in \cite{FKM05}. An analysis of GS applied to step functions is presented in \cite{doi:10.1080/10556780500140029}. Other strategies to transform a nonconvex and noisy optimization problem into a convex or more tractable version are the $p$-th power transformation \cite{Li-convexification} and $\ell^2$ regularization \cite{Carlsson19}. An algorithm for estimating the computational noise affecting a smooth simulation is developed in \cite{Mor2011EstimatingCN}.
\section{Black-box optimization}\label{sec:setting} We are interested in solving the following black-box optimization problem \begin{equation}\label{e1} \min_{\bm x \in \mathbb{R}^d} F(\bm x), \end{equation} where $\bm x = (x_1, \ldots, x_d) \in \mathbb{R}^d$ consists of $d$ tuning parameters, and $F: \mathbb{R}^d \rightarrow \mathbb{R}$ is a $d$-dimensional black-box loss function. We assume that the gradient $\nabla F(\bm x)$ is unavailable, and $F(\bm x)$ is only accessible via function evaluations. We briefly recall the class of ES methods \cite{SHCS17} that use GS \cite{FKM05,NesterovSpokoiny15} to estimate local gradients. The smoothed loss is defined by $ F_{\sigma}(\bm x) = \mathbb{E}_{\bm u \sim \mathcal{N}(0, \mathbf{I}_d)} \left[F(\bm x + \sigma \bm u) \right], $ where $\mathcal{N}(0, \mathbf{I}_d)$ is the $d$-dimensional standard Gaussian distribution, and $\sigma > 0$ is the smoothing radius. $F_{\sigma}(\bm x)$ inherits many characteristics from $F(\bm x)$, e.g., convexity and the Lipschitz constant. Moreover, for any $\sigma >0$, $F_{\sigma}$ is always differentiable even if $F$ is not. The standard ES method \cite{SHCS17} represents $\nabla F_\sigma(\bm x)$ as an expectation and estimates it by drawing $M$ random samples $\{\bm u_m\}_{m=1}^M$ from $\mathcal{N}(0,\mathbf{I}_d)$, i.e., \begin{equation}\label{e40} \nabla F_{\sigma}(\bm x) = \frac{1}{\sigma}\mathbb{E}_{\bm u \sim \mathcal{N}(0, \mathbf{I}_d)} \left[F(\bm x + \sigma \bm u)\, \bm u\right] \approx \frac{1}{M\sigma}\sum_{m=1}^M F(\bm x + \sigma \bm u_m)\bm u_m. \end{equation} Then, the MC estimator is substituted into any gradient-based algorithm to update the state $\bm x$. \section{New method: an evolution strategy with directional Gaussian smoothing} \label{sec:DGS-ES} We present our main contributions in this section. We start by introducing the DGS gradient in \S \ref{sec:grad}.
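As a concrete sketch of the MC estimator in Eq.~\eqref{e40} (a hedged illustration; the function name and parameter values are ours, not from the paper's code), the one-point estimates are averaged over $M$ Gaussian samples. For $F(\bm x) = \|\bm x\|^2$ the smoothed gradient is exactly $2\bm x$ for any $\sigma$, which the estimate approaches for large $M$:

```python
import numpy as np

def gs_gradient_mc(F, x, sigma, M, rng):
    """Monte Carlo estimator of the Gaussian-smoothed gradient:
    (1/(M*sigma)) * sum_m F(x + sigma*u_m) u_m with u_m ~ N(0, I_d)."""
    u = rng.standard_normal((M, x.size))
    fvals = np.array([F(x + sigma * ui) for ui in u])
    return (fvals[:, None] * u).sum(axis=0) / (M * sigma)

# For F(x) = ||x||^2, grad F_sigma(x) = 2x for any sigma, so the MC
# estimate at x = (1, 2) should be close to (2, 4) for large M.
grad = gs_gradient_mc(lambda z: np.dot(z, z), np.array([1.0, 2.0]),
                      sigma=0.1, M=200000, rng=np.random.default_rng(0))
```

Note the high variance of this one-point estimator: even with $2\times 10^5$ samples in two dimensions, the estimate is only accurate to roughly one decimal place, which motivates the quadrature-based estimator developed in the next section.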
In \S \ref{sec:ada_DGS-ES}, we introduce an accurate estimator of the DGS gradient and describe the proposed DGS-ES algorithm in detail. The proposed method was inspired by the following key idea: {\em {\bf Key idea}: The DGS gradient conducts 1D nonlocal explorations along $d$ orthogonal directions in $\mathbb{R}^d$, each of which defines a nonlocal directional derivative as a 1D integral. The Gauss-Hermite quadrature, instead of MC sampling, is used to estimate the $d$ 1D integrals to achieve high accuracy.} \subsection{The nonlocal DGS gradient operator}\label{sec:grad} To proceed, we first define a \emph{one-dimensional} cross section of $F(\bm x)$ as \begin{equation*} G(y \,| \,{\bm x, \bm \xi}) = F(\bm x + y\, \bm \xi), \;\; y \in \mathbb{R}, \end{equation*} where $\bm x$ is the current state of $F(\bm x)$ and $\bm \xi$ is a unit vector in $\mathbb{R}^d$. Note that $\bm x$ and $\bm \xi$ can be viewed as parameters of the function $G$. We define the Gaussian smoothing of $G(y)$, denoted by $G_\sigma(y)$, by \begin{equation} \label{eq10} G_{\sigma}(y \,| \,{\bm x, \bm \xi}) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} G(y + \sigma v\, |\, \bm x, \bm \xi)\, {\rm e}^{-\frac{v^2}{2}}\, dv = \mathbb{E}_{v \sim \mathcal{N}(0, 1)} \left[G(y + \sigma v\, |\, \bm x, \bm \xi) \right], \end{equation} which is the Gaussian smoothing of $F(\bm x)$ along the direction $\bm \xi$ in the neighbourhood of $\bm x$. The derivative of $G_{\sigma}(y|\bm x,\bm \xi)$ at $y = 0$ can be represented by a one-dimensional expectation \begin{equation}\label{e4} \mathscr{D}[G_{\sigma}(0 \,|\, \bm x, \bm \xi)] = \frac{1}{\sigma}\,\mathbb{E}_{v \sim \mathcal{N}(0,1)} \left[G(\sigma v \, | \, \bm x, \bm \xi)\, v\right], \end{equation} where $\mathscr{D}[\cdot]$ denotes the differential operator.
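The representation in Eq.~\eqref{e4} can be verified numerically. As a hedged sketch (the test function is our choice for illustration), take $G(y) = \sin(y)$: its Gaussian smoothing is $G_\sigma(y) = e^{-\sigma^2/2}\sin(y)$, so $\mathscr{D}[G_\sigma](0) = e^{-\sigma^2/2}$ in closed form, and the expectation on the right-hand side of Eq.~\eqref{e4} reproduces this value:

```python
import numpy as np

# Check Eq. (e4): D[G_sigma](0) = (1/sigma) E_{v~N(0,1)}[G(sigma*v) v]
# for G(y) = sin(y), whose exact smoothed derivative at 0 is exp(-sigma^2/2).
sigma = 0.7
v, w = np.polynomial.hermite.hermgauss(40)  # 40-point Gauss-Hermite rule
# Change of variables: E_{N(0,1)}[h(v)] = (1/sqrt(pi)) * sum_m w_m h(sqrt(2) v_m)
est = np.sum(w * np.sin(sigma * np.sqrt(2) * v) * np.sqrt(2) * v) \
      / (np.sqrt(np.pi) * sigma)
exact = np.exp(-sigma**2 / 2)
```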
The difference between the directional derivative of $F_\sigma(\bm x)$ and Eq.~\eqref{e4} is that $\mathscr{D}[G_{\sigma}(0 \,|\, \bm x, \bm \xi)]$ only involves the directionally smoothed function in Eq.~\eqref{eq10}. For a matrix $\bm \Xi := (\bm \xi_1, \ldots, \bm \xi_d)$ consisting of $d$ orthonormal vectors, we can define $d$ directional derivatives like those in Eq.~\eqref{e4} and assemble our DGS gradient as \begin{equation}\label{dev_smooth_func} \hspace{-0.8cm} \text{\bf The DGS gradient:}\quad {\nabla}_{\sigma, \bm \Xi}[F](\bm x) := \Big[\mathscr{D}[G_{\sigma}(0 \, |\, \bm x, \bm \xi_1)], \cdots, {\mathscr{D}}[G_{\sigma}(0\, |\, \bm x, \bm \xi_d)]\Big]\, \bm \Xi, \end{equation} where the orthogonal system $\bm \Xi$ and the smoothing radius $\sigma$ can be adjusted during an optimization process. Next, we describe how to integrate the DGS gradient into the ES framework. \subsection{The DGS-ES algorithm}\label{sec:ada_DGS-ES} The key step in integrating the DGS gradient in Eq.~\eqref{dev_smooth_func} into ES is to develop an accurate estimator. We exploit the fact that each component of ${\nabla}_{\sigma, \bm \Xi}[F](\bm x)$ only involves a 1D integral, such that the Gauss-Hermite quadrature rule \cite{2013JSV...332.4403B,Handbook} can be used to approximate the integrals with high accuracy (shown in Eq.~\eqref{GH_error}). By doing a simple change of variable in Eq.~\eqref{e4}, the GH rule can be directly used to obtain the following estimator for $\mathscr{D}[G_{\sigma}(0 \,|\, \bm x, \bm \xi)]$, i.e., \begin{align} \widetilde{\mathscr{D}}^M[G_\sigma(0 \, | \, \bm x, \bm \xi)] = \frac{1}{\sqrt{\pi}\sigma} \sum_{m = 1}^M w_m \,F(\bm x + \sqrt{2}\sigma v_m \bm \xi)\sqrt{2}v_m, \label{e8} \end{align} where $\{v_m\}_{m=1}^M$ are the roots of the $M$-th order Hermite polynomial and $\{w_m\}_{m=1}^M$ are the quadrature weights.
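A minimal Python sketch of the single-direction estimator in Eq.~\eqref{e8} (the helper name is ours; NumPy supplies the Gauss-Hermite nodes and weights). For a quadratic loss, the directionally smoothed derivative at $\bm x$ along $\bm\xi$ is exactly $2\bm x\cdot\bm\xi$ for any $\sigma$, and a 7-point rule integrates this low-degree polynomial exactly:

```python
import numpy as np

def gh_directional_derivative(F, x, xi, sigma, M=7):
    """GH-quadrature estimator of the smoothed directional derivative:
    (1/(sqrt(pi)*sigma)) * sum_m w_m F(x + sqrt(2)*sigma*v_m*xi) * sqrt(2)*v_m."""
    v, w = np.polynomial.hermite.hermgauss(M)  # Hermite roots and weights
    fvals = np.array([F(x + np.sqrt(2) * sigma * vm * xi) for vm in v])
    return np.sum(w * fvals * np.sqrt(2) * v) / (np.sqrt(np.pi) * sigma)

# For F(x) = ||x||^2 at x = (1, 2) along xi = (1, 0), the exact value is
# 2 * (x . xi) = 2, independent of sigma.
val = gh_directional_derivative(lambda z: np.dot(z, z),
                                np.array([1.0, 2.0]),
                                np.array([1.0, 0.0]), sigma=1.0)
```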
Both $v_m$ and $w_m$ can be found online\footnote{Nodes and weights for GH quadrature: \url{https://keisan.casio.com/exec/system/1281195844}} or in \cite{Handbook}. Compared with MC sampling, the error of Eq.~\eqref{e8} can be bounded by \begin{align} \label{GH_error} \hspace{-0.1cm}\big|(\widetilde{\mathscr{D}}^M- \mathscr{D})[G_\sigma] \big| \le C\frac{M\,!\sqrt{\pi}}{2^M(2M)\,!} \sigma^{2M-1}, \end{align} where $M!$ is the factorial of $M$ and the constant $C>0$ is independent of $M$ and $\sigma$. Applying the GH quadrature rule $\widetilde{\mathscr{D}}^M$ to each component of ${\nabla}_{\sigma, \bm \Xi}[F](\bm x)$ in Eq.~\eqref{dev_smooth_func}, we define the following estimator: \begin{equation}\label{e5} \hspace{-0.2cm} \text{\bf The DGS estimator:}\quad \widetilde{\nabla}^M_{\sigma, \bm \Xi}[F](\bm x) = \Big[\widetilde{\mathscr{D}}^M[G_{\sigma}(0 \, |\, \bm x, \bm \xi_1)], \cdots, \widetilde{\mathscr{D}}^M[G_{\sigma}(0\, |\, \bm x, \bm \xi_d)]\Big]\, \bm \Xi. \end{equation} \begin{wrapfigure}{r}{0.5\textwidth} \vspace{-0.4cm} \hspace{-0.25cm} \includegraphics[scale = 0.45]{./fig4.pdf}\vspace{-0.2cm} \caption{Illustration of the nonlocal exploration capability of our DGS gradient. In the central plot, the blue arrow points to the {local} gradient direction and the red arrow points to the DGS gradient direction. The top and right plots show the directionally smoothed functions along the two axes. Because the DGS gradient captures the nonlocal features of $F$, it can point to a direction much closer to the global minimum than the local gradient.} \label{fig0} \vspace{-1.4cm} \end{wrapfigure} The DGS estimator has the following features: \vspace{-0.1cm} \begin{itemize}[leftmargin=13pt] \item {\bf \em Nonlocality}: The directional smoothing allows for a large radius $\sigma$ to capture global structures of loss landscapes and help escape from local minima (illustrated in Figure \ref{fig0}). 
% \item {\bf \em Accuracy}: The GH quadrature with the error bounded in Eq.~\eqref{GH_error} provides an estimator with much higher accuracy than MC, even when a large smoothing radius $\sigma$ is used. % \item {\bf \em Portability}: The DGS gradient can be integrated into the majority of gradient-based algorithms, e.g., gradient descent, Adam, and those with constraints (shown in \S\ref{sec:ex_3}). % \item {\bf \em Scalability}: The DGS estimator in Eq.~\eqref{e5} requires $M\times d$ evaluations of $F(\bm x)$, and these evaluations are completely parallelizable, as in random sampling. % \end{itemize} \paragraph{Random perturbation of $\bm \Xi$ and $\sigma$.} The estimator in Eq.~\eqref{e5} is deterministic for a fixed $\bm \Xi$ and $\sigma$, leaving our approach short of random exploration. To alleviate this issue, we add random perturbations to $\bm \Xi$ and $\sigma$. First, we add a small random rotation $\Delta \bm\Xi$ to $\bm \Xi$. To make $\bm \Xi + \Delta \bm\Xi$ orthonormal, i.e., $ \mathbf{I}_d = (\bm \Xi + \Delta \bm\Xi)^{\top} (\bm \Xi + \Delta \bm\Xi), $ we generate $\Delta \bm \Xi$ as a random skew-symmetric matrix $\Delta \bm \Xi = -\Delta \bm \Xi^{\top}$ with small-value entries (controlled by $\alpha>0$), which cancels out the first-order terms in $(\bm \Xi + \Delta \bm\Xi)^{\top} (\bm \Xi + \Delta \bm\Xi)$. The Gram-Schmidt operation is then used to eliminate the second-order term $\Delta \bm \Xi^{\top} \Delta \bm \Xi$ to ensure the orthonormality of $\bm \Xi + \Delta \bm\Xi$. The perturbation of $\sigma$ is conducted by drawing $d$ random samples (one for each direction) from a uniform distribution $\mathcal{U}(r-\beta, r+\beta)$ with $\beta \ll r$. The random perturbation can be triggered by various types of indicators, e.g., the magnitude of the DGS gradient or the number of iterations completed since the last perturbation. The DGS-ES method with standard gradient descent is summarized in Algorithm 1.
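The complete loop of Algorithm 1 can be sketched as follows (a hedged rendition: the hyper-parameter defaults are illustrative, and the random rotation is re-orthonormalized with a QR factorization, playing the role of the Gram-Schmidt step described above):

```python
import numpy as np

def perturb_basis(d, alpha, rng):
    """I_d plus a small random skew-symmetric matrix, re-orthonormalized
    via QR (QR plays the role of the Gram-Schmidt step)."""
    A = alpha * rng.standard_normal((d, d))
    Q, _ = np.linalg.qr(np.eye(d) + (A - A.T))
    return Q

def dgs_es(F, x0, T=100, lr=0.1, r=1.0, beta=0.1, alpha=0.01, gamma=1e-3,
           M=7, seed=0):
    """Sketch of Algorithm 1: gradient descent driven by the DGS estimator."""
    rng = np.random.default_rng(seed)
    d = x0.size
    Xi, sigmas, x = np.eye(d), np.full(d, r), x0.astype(float)
    v, w = np.polynomial.hermite.hermgauss(M)
    for _ in range(T):
        dirs = np.zeros(d)
        for i in range(d):  # GH estimate of each smoothed directional derivative
            fvals = np.array([F(x + np.sqrt(2) * sigmas[i] * vm * Xi[i])
                              for vm in v])
            dirs[i] = np.sum(w * fvals * np.sqrt(2) * v) \
                      / (np.sqrt(np.pi) * sigmas[i])
        g = dirs @ Xi                      # assemble the DGS gradient
        x = x - lr * g
        if np.linalg.norm(g) < gamma:      # trigger random perturbation
            Xi = perturb_basis(d, alpha, rng)
            sigmas = rng.uniform(r - beta, r + beta, size=d)
    return x

# On a smooth quadratic the DGS gradient is exact for any sigma and Xi,
# so the iterates contract toward the minimizer at the origin.
x_min = dgs_es(lambda z: np.dot(z, z), np.array([3.0, -2.0]), T=200, r=0.5)
```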
\paragraph{Asymptotic consistency.} The DGS gradient in Eq.~\eqref{dev_smooth_func} is not an estimator for $\nabla F_{\sigma}(\bm x)$ or $\nabla F(\bm x)$. In fact, it is designed to be used in the nonlocal setting for non-convex optimization. In nonlocal modeling, a common practice is to study the asymptotic consistency between local and nonlocal gradients (see \cite{doi:10.1137/19M1296720}). A more direct question is: ``{\em Does the DGS gradient estimator converge to the local gradient as $\sigma$ approaches zero?}'' This question can be answered easily for $F(\bm x) \in \mathcal{C}^{1,1}(\mathbb{R}^d)$. In this case, there exists $L> 0 $ such that $ \|\nabla F(\bm x + \bm \xi) - \nabla F(\bm x)\| \le L\|\bm \xi\|, \, \forall \bm x ,\, \bm \xi\in \mathbb{R}^d $ ($\|\cdot\|$ denotes the $L^2$ norm in this work). Then, the difference between $\widetilde{\nabla}^M_{ \sigma, \bm \Xi}[F]$ and $\nabla F$ can be bounded by \begin{align*} & \left\| \widetilde{\nabla}^M_{ \sigma, \bm \Xi}[F] - {\nabla}F \right\|^2 \le \frac{2 C^2 {\pi} d (M\,!)^2}{4^M((2M)\,!)^2} \sigma^{4M-2} + 32 d L^2 \sigma^2, \notag \end{align*} where the first term on the right-hand side comes from the GH quadrature and the second term measures the difference between $\nabla F$ and $\nabla_{\sigma, \bm \Xi}[F]$.
It is easy to see the asymptotic consistency, i.e., \[ \lim_{\sigma \rightarrow 0} \big| \nabla F(\bm x) - \widetilde{\nabla}_{\sigma, \bm \Xi}^M[F](\bm x) \big| = 0 \]% \begin{wraptable}{r}{0.52\textwidth} \vspace{-0.2cm} \footnotesize \begin{tabular}{p{0.48\textwidth}}\\ \toprule {\;\;\bf Algorithm 1: The DGS-ES algorithm} \\\midrule \begin{algorithmic}[1] \setstretch{1.12} \STATE{\bf Hyper-parameters}:$M$: \# GH quadrature points; $\lambda_t$: learning rate; $\alpha$: the scaling factor for the rotation $\Delta \bm \Xi$; $r, \beta$: the mean and radius for sampling $\sigma$; $\gamma$: the tolerance for triggering random perturbation.\\ \STATE{\bfseries Input:} The initial state $\bm x_0$ \STATE{\bfseries Output:} The final state $\bm x_T$ \STATE Set $\bm \Xi = \mathbf{I}_d$, and $\sigma_i = r$ for $i = 1, \ldots, d$ \FOR{$t=0, \ldots T-1$} \STATE Evaluate $\{G(\sqrt{2}\sigma_i v_m \, | \, \bm x_t, \bm \xi_i)\}^{i=1, \ldots, d}_{m =1, \ldots, M}$ \FOR{$i = 1, \ldots, d$} \STATE Compute $\widetilde{\mathscr{D}}^M[G_{\sigma_i}(0\,|\, \bm x_t, \bm \xi_i)]$ in Eq.~(\ref{e8}) \ENDFOR \STATE Assemble $\widetilde{\nabla}^M_{\sigma, \bm \Xi}[F](\bm x_t)$ in Eq.~(\ref{e5}) \STATE Set $\bm x_{t+1} = \bm x_t - \lambda_t \widetilde{\nabla}^M_{\bm \sigma, \bm \Xi}[F](\bm x_t) $ \IF{$\|\widetilde{\nabla}^M_{\sigma, \bm \Xi}[F](\bm x_t)\|_2 < \gamma$} \STATE Generate $\Delta \bm \Xi$ and update $\bm \Xi = \mathbf{I}_d + \Delta \bm \Xi$ \STATE Generate $\sigma_i$ from $\mathcal{U}(r-\beta, r+\beta)$ \ENDIF \ENDFOR \end{algorithmic}\\\bottomrule \end{tabular} \vspace{-1.7cm} \end{wraptable} for $M>2$ regardless of the choice of $\bm \Xi$. Consequently, to achieve $\| \widetilde{\nabla}^M_{ \sigma, \bm \Xi}[F] - {\nabla}F \| \le \varepsilon$ for a fixed $\varepsilon >0$, we need $\sigma \le {\varepsilon}/(4L\sqrt{d})$ and $M \ge \log({2d}/{\varepsilon^2})$, which means the total number of function evaluations should be bigger than $d\log({2d}/{\varepsilon^2})$. 
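The consistency statement can also be observed numerically. In the following hedged sketch (the test function and evaluation point are our choices, with $\bm\Xi = \mathbf{I}_d$), the error of the DGS estimator against the true gradient shrinks as $\sigma$ decreases:

```python
import numpy as np

def dgs(F, x, sigma, M=7):
    """DGS estimator with Xi = I_d: GH quadrature along each coordinate axis."""
    v, w = np.polynomial.hermite.hermgauss(M)
    return np.array([
        np.sum(w * np.array([F(x + np.sqrt(2) * sigma * vm * e) for vm in v])
               * np.sqrt(2) * v) / (np.sqrt(np.pi) * sigma)
        for e in np.eye(x.size)])

F = lambda z: np.sin(z[0]) + np.cos(z[1])
x = np.array([0.3, 1.1])
true_grad = np.array([np.cos(0.3), -np.sin(1.1)])
# The smoothing bias shrinks with sigma, so the small-radius error is smaller.
err_large = np.linalg.norm(dgs(F, x, sigma=0.5) - true_grad)
err_small = np.linalg.norm(dgs(F, x, sigma=0.05) - true_grad)
```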
We remark that to acquire the same consistency, the MC estimator for $\nabla F_\sigma$ in Eq.~\eqref{e40} requires $\sigma\le {\varepsilon}/(Ld)$ and the number of function evaluations to be $O({d \|{\nabla}F(\bm x) \|^2 }/{\varepsilon^2})$ (see \cite{BCCS19}). Thus, the DGS estimator requires fewer function evaluations to find a search direction even in the local setting. Moreover, it does not seem easy to relax this dependence of the MC estimator on $d$ and $\varepsilon$ with variance reduction techniques, e.g., \cite{CRSTW18,10.5555/3326943.3326962}. \begin{comment} Next, we show convergence results of DGS-ES in optimizing strongly convex functions. \begin{thm} \label{thm:main} Let $F$ be a strongly convex function in $C^{1,1}(\mathbb{R}^d)$, $\{\bm x_t\}_{t\ge 0}$ be generated by Algorithm 1 with $\lambda = {1}/{8L}$ and $\bm x^*$ be the global minimizer of $F$. Then, for any $t\ge 0$, we have \begin{align*} & F(\bm x_t) - F(\bm x^*) \le\, \frac{1}{2}L \left[\delta_{\sigma} + \left(1- \frac{\tau}{16 L}\right)^t ( \|\bm x_0 - \bm x^*\|^2 - \delta_{\sigma}) \right], \end{align*} where $ \delta_{\sigma} = \left(\frac{128}{\tau^2} + \frac{16}{\tau L} \right) d L^2 \sigma^{2} + \, \left(\frac{8}{\tau^2} + \frac{1}{2\tau L} \right)\frac{ C_0^2 (M\,!)^2{\pi}d}{4^M((2M)\,!)^2} \ \sigma^{2}. $ \end{thm} \begin{corollary} \label{cor:main} Let $\varepsilon >0$. If $ \sigma \le O\left(\sqrt{\frac{\varepsilon}{d}}\right),\ T = O(\log\frac{1}{\varepsilon}), \ \text{and }M\ge O\left(\log\left(\frac{d}{\varepsilon}\right)\right)$ (i.e., total number of function evaluations $\ge O\left(d\log\left(\frac{d}{\varepsilon}\right)\right) $, then $F(\bm x_T) - F(\bm x^*) \le \varepsilon$. \end{corollary} These results also hold with random perturbation of $\bm \Xi$ and $\bm \sigma$ after each iterate, as long as $\bm \Xi$ remains orthonormal and $\|\bm \sigma\|$ is fixed.
Corollary \ref{cor:main} indicates that the number of iterations required by our approach is completely independent of the dimension, while the total number of function evaluations is only slightly higher than that of nonparallelizable random search approaches, e.g., \cite{NesterovSpokoiny15,Bergou2019}. \end{comment} \section{Experiments}\label{sec:ex} We present the experimental results using three sets of problems. All experiments were implemented in Python 3.6 and conducted on a set of cloud servers with Intel Xeon E5 CPUs. We compare the DGS-ES method with the following baselines: (a) {\bf ES-Bpop}: the standard OpenAI evolution strategy in \cite{SHCS17} with a big population (i.e., using the same number of samples as DGS-ES), (b) {\bf ASEBO}\footnote{We do not report comparisons with some recent work on ES methods, e.g., \cite{10.1145/2908812.2908863,8410043}, because they underperform ASEBO, as shown in \cite{Choromanski_ES-Active}. The ASEBO code is available at \url{https://github.com/jparkerholder/ASEBO}.}: Adaptive ES-Active Subspaces for Blackbox Optimization \cite{Choromanski_ES-Active} with a population of size $4+3\log(d)$, (c) {\bf IPop-CMA}: the restart covariance matrix adaptation evolution strategy with increased population size \cite{1554902}, (d) {\bf Nesterov}: the random search method in \cite{NesterovSpokoiny15}, and (e) {\bf FD}: the classical central difference scheme. Information on the code used for the baselines is provided in the Appendix. \subsection{Tests on benchmark functions for global optimization}\label{sec:ex_2} We test the DGS-ES performance on six 2000D benchmark functions \cite{10.1145/1830761.1830794,Jamil2013ALS} for global optimization, i.e., $F_1(\bm x)$: Sphere, $F_2(\bm x)$: Sharp Ridge, $F_3(\bm x)$: Ackley, $F_4(\bm x)$: Rastrigin, $F_5(\bm x)$: Schaffer, and $F_6(\bm x)$: Schwefel. Their definitions and properties are described in the Appendix. We performed a grid search to tune the hyper-parameters of all the algorithms to ensure a fair comparison.
A description of the hyper-parameter tuning can also be found in the Appendix. The results are shown in Figure \ref{fig2} and Table \ref{sample-table1}. DGS-ES has the best performance overall. In particular, DGS-ES demonstrates significantly superior performance in optimizing the highly non-convex functions $F_3$, $F_4$ and $F_5$. We attribute this to the two advantages of DGS-ES. {\bf \em Nonlocal exploration}. We use the averaged cosine distance in Table \ref{sample-table1} to compare the nonlocal exploration performance of each method in minimizing the six benchmark functions, \begin{equation}\label{cos_dist} {\rm Cos\_Dist} = \frac{1}{T}\sum_{t=1}^T \left(1 - \dfrac{\langle \bm x_t-\bm x_{t-1}, \bm x^*-\bm x_{t-1}\rangle}{\|\bm x_t-\bm x_{t-1}\| \|\bm x^*-\bm x_{t-1}\|}\right), \end{equation} where $\bm x^*$ is the global minimum and $T$ is the number of iterations of an optimization path\footnote{The Cos\_Dist is generated along different paths for different methods.}. The Cos\_Dist values for the test cases in Figure \ref{fig2} are shown in Table \ref{sample-table1}. We have the following findings: (1) DGS-ES provides the smallest Cos\_Dist in most cases, which demonstrates that the DGS gradient is very close to the direction pointing to the global minimum (even for functions with many local minima, e.g., $F_3$, $F_4$, $F_5$). (2) FD achieves performance similar to that of DGS-ES for $F_1$ and $F_2$, where the local gradients also point to the global minimum, but FD is trapped in local minima for the non-convex $F_3$, $F_4$, $F_5$. (3) As Nesterov randomly selects the search direction, it is expected that most search directions are nearly perpendicular to $\bm x^*-\bm x_{t-1}$. (4) ES-Bpop, ASEBO and IPop-CMA have too few random samples to capture the optimal direction $\bm x^*-\bm x_{t-1}$.
\begin{figure} \centering \includegraphics[scale = 0.33]{./func_2000D.pdf} \caption{Comparison of the loss decay w.r.t.~\# function evaluations for the 6 benchmark functions in 2000-dimensional spaces. Each curve was generated by averaging 20 independent trials with random initial states. The global minimum is $F(\bm x)=0$ for all the six functions. DGS-ES has the best performance overall, especially for the highly non-convex functions $F_3$, $F_4$, $F_5$. All the methods fail to find the global minimum of $F_6$ which has no global structure to exploit.} \label{fig2} \end{figure} \begin{table}[h!] \vspace{-0.0cm} \centering \small \begin{tabular}{p{1.6cm}p{1.1cm}p{1.5cm}p{1.1cm}p{1.5cm}p{1.1cm}p{1.5cm}} \toprule & \multicolumn{2}{c}{{{$F_1$: Sphere}}} & \multicolumn{2}{c}{{{$F_2$: Sharp Ridge}}} & \multicolumn{2}{c}{{{$F_3$: Ackley}}}\\ \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} & {Cos\_Dist} & {Grad\_Norm} & {Cos\_Dist} & {Grad\_Norm}& {Cos\_Dist} & {Grad\_Norm}\\ \midrule DGS-ES & {\bf 1.86e-9} & 8.09e+1 & 1.48e-1 & 4.11e+1 & {\bf 7.71e-2} & {\bf 5.49e-2}\\ ES-Bpop & 2.91e-1 & 1.08e+2 & 3.23e-1 & 3.88e+2 & 7.03e-1 & 1.37e-1\\ ASEBO & 9.25e-1 & 8.08e+2 & 9.25e-1 & 2.56e+2 & 1.00e+0 & 8.13e-1\\ Nesterov & 9.82e-1 & 2.75e+3 & 9.82e-1 & 2.70e+3 & 9.99e-1 & 3.28e+0\\ FD & 1.81e-1 & {\bf 8.04e+1} & {\bf 9.64e-2} & {\bf 2.60e+1} & 9.82e-1 & 7.38e-2\\ IPop-CMA & 1.07e+0 & N/A & 1.07e+0 & N/A & 9.99e-1 & N/A\\ \toprule & \multicolumn{2}{c}{{{$F_4$: Rastrigin}}} & \multicolumn{2}{c}{{{$F_5$: Schaffer}}} & \multicolumn{2}{c}{{{$F_6$: Schwefel}}}\\ \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} & {Cos\_Dist} & {Grad\_Norm} & {Cos\_Dist} & {Grad\_Norm}& {Cos\_Dist} & {Grad\_Norm}\\ \midrule DGS-ES & {\bf 3.01e-5} & {\bf 5.76e+1} & {\bf 4.85e-1} & {\bf 1.50e+1} & 1.04e+0 & 8.45e+1\\ ES-Bpop & 9.27e-1 & 6.46e+2 & 9.19e-1 & 5.06e+2 & 1.00e+0 & 1.23e+3\\ ASEBO & 9.93e-1 & 6.27e+3 & 9.98e-1 & 3.88e+4 & {\bf 9.94e-1} & 3.05e+4\\ Nesterov & 9.99e-1 & 4.31e+4 & 
9.95e-1 & 5.50e+4 & 9.99e-1 & 1.79e+3\\ FD & 1.02e+0 & 7.25e+1 & 9.78e-1 & 2.28e+2 & 1.08e+0 & {\bf 6.33e+1}\\ IPop-CMA & 1.00e+0 & N/A & 1.01e+0 & N/A & 1.03e+0 & N/A\\ \bottomrule \end{tabular} \vspace{0.1cm} \caption{The average cosine distance in Eq.~\eqref{cos_dist} and the standard deviation of the gradient's $L^2$ norm in Eq.~\eqref{grad_norm} for all the test cases shown in Figure \ref{fig2}. The cosine distance is in the range $[0,2]$. The smaller the Cos\_Dist and the Grad\_Norm, the better the performance of a method.} \label{sample-table1} \vspace{-0.6cm} \end{table} {\bf \em Variation of gradient estimators}. We use the standard deviation of the gradient estimators' $L^2$ norms in Table \ref{sample-table1} to compare the variance of the estimators, \begin{equation}\label{grad_norm} \text{Grad\_Norm} = \left(\frac{1}{T} \sum_{t=1}^T (\|\nabla_t\| - \mu)^2\right)^{1/2}\; \text{ and }\; \mu = \frac{1}{T} \sum_{t=1}^T \|\nabla_t\|, \end{equation} where $T$ is the number of iterations and $\nabla_t$ denotes a different estimator for each method\footnote{Since CMA does not have a gradient or directional derivative, we do not include IPop-CMA in this comparison.}. We have the following findings: (1) DGS-ES provides the smallest Grad\_Norm for most test cases due to the good smoothing effect with a large $\sigma$ and the high accuracy of the GH quadrature rule. (2) FD achieves a Grad\_Norm similar to that of DGS-ES for $F_3$ and $F_4$ because it is trapped in local minima. (3) ES-Bpop and ASEBO cannot control the Grad\_Norm well because of the high variance of the MC-based gradient estimator. (4) The Grad\_Norm of Nesterov is the largest for $F_1,F_2,F_3,F_4,F_5$ because the norms of the gradients of these functions fluctuate strongly, and the Nesterov method does not provide as good a smoothing effect as DGS-ES to reduce the fluctuation.
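For reference, the two diagnostics in Eq.~\eqref{cos_dist} and Eq.~\eqref{grad_norm} can be computed as follows (a hedged sketch; the function names are ours). A path that moves straight toward $\bm x^*$ has a cosine distance of zero, and a sequence of gradient estimates of constant norm has a Grad\_Norm of zero:

```python
import numpy as np

def cos_dist(path, x_star):
    """Average cosine distance along an optimization path (Eq. cos_dist)."""
    total = 0.0
    for t in range(1, len(path)):
        step = path[t] - path[t - 1]
        to_opt = x_star - path[t - 1]
        total += 1.0 - step @ to_opt / (np.linalg.norm(step)
                                        * np.linalg.norm(to_opt))
    return total / (len(path) - 1)

def grad_norm_std(grads):
    """Standard deviation of the gradient estimators' L2 norms (Eq. grad_norm)."""
    norms = np.linalg.norm(grads, axis=1)
    return np.sqrt(np.mean((norms - norms.mean()) ** 2))

# A path heading straight for x* = 0 gives cosine distance 0; two gradient
# estimates of equal norm (both 5 here) give Grad_Norm 0.
path = [np.array([4.0, 4.0]), np.array([2.0, 2.0]), np.array([1.0, 1.0])]
d0 = cos_dist(path, np.zeros(2))
s0 = grad_norm_std(np.array([[3.0, 4.0], [5.0, 0.0]]))
```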
\subsection{Constrained topology optimization for architecture design}\label{sec:ex_3} We demonstrate the portability of the DGS gradient to constrained optimization using a real-world topology optimization (TO) problem. TO has many applications in engineering \cite{bendsoe2013topology, aage2017giga} and has recently attracted attention in machine learning \cite{hoyer2019neural,yu2019deep,oh2019deep,wu2015system,liu2018narrow,li2020hybrid}. We use DGS-based TO to design a 2D vertical cross section of a bridge from random initial guesses (see Figure \ref{fig4}a). \begin{wrapfigure}{r}{0.36\textwidth} \vspace{-0.3cm} \hspace{-0.15cm}\includegraphics[scale = 0.3]{./3d_bridge.png} \caption{Good conceptual design.}\label{fig3} \vspace{-0.3cm} \end{wrapfigure} The design domain is meshed with $120\times40$ elements, each of which is a design variable ranging from 0 (void) to 1 (solid). By assuming the bridge is symmetric, the total number of independent design variables is $2400~(60 \times 40)$, which is the dimension of the optimization problem. The constraints include (i) a $20$\% volume constraint, i.e., the volume of solid material (black pixels) in Figure \ref{fig4} cannot exceed $480~(2400\times 0.2)$, and (ii) a unit uniform load on the top and a fixed support at the bottom. The goal is to optimize the material layout to achieve the maximum load-carrying capability of the bridge. A conceptually good design is shown in Figure \ref{fig3}. \begin{figure}[h!] \centering \includegraphics[scale = 0.65]{./step2.png} \caption{\footnotesize Illustration of the DGS-based TO design process from a random initial guess. The topology of the bridge architecture becomes increasingly clear as the loss function value $C(\times10^5)$ decreases.}\label{fig4} \end{figure} \begin{figure}[h!]
\vspace{-0.4cm} \parbox{\textwidth}{ \centering \begin{minipage}{.6\textwidth} \centering \begin{center} \includegraphics[width=0.95\textwidth]{./figure_topopt2.png} \end{center} \caption{\small Comparison of the final topologies. The DGS-based design shows a strong hierarchical tree feature that matches the conceptual design in Figure \ref{fig3}. IPop-CMA tends toward a blurry topology. The other algorithms show many local/minor features that have negative impacts on load-carrying capability and bridge construction.}\label{fig5} \end{minipage} \begin{minipage}{.39\textwidth} \hspace{-0.1cm}\includegraphics[scale=0.42]{./figure_iter5.pdf} \vspace{-0.1cm} \caption{\small Loss decay for DGS-based TO.}\label{fig6} \end{minipage} } \end{figure} \vspace{-0.1cm} The challenges in TO include highly non-convex and multi-modal loss functions and rigid constraints. Extensive research efforts have been devoted to developing dedicated constrained optimization algorithms for TO. The state of the art is the Method of Moving Asymptotes (MMA)~\cite{svanberg1987method}, which is a gradient-based method. However, MMA is limited to seeking optima using local gradients, obtained either via the adjoint method or FD. Here, we address this issue by inserting the DGS gradient into the MMA framework and exploiting the nonlocal exploration ability of the DGS gradient to find a better design. The hyper-parameters for the DGS gradient are given in the Appendix. Figure \ref{fig4} shows the iterative optimization procedure using the DGS-based MMA optimizer. Figure \ref{fig6} summarizes the results\footnote{All the baselines except IPop-CMA can be inserted into MMA; the source code can be found at \url{https://github.com/arjendeetman/TopOpt-MMA-Python}.}. We ran each algorithm 5 times with random initial guesses and plot the mean loss decay. The DGS gradient leads to faster convergence and a better final design than the baselines. FD converges fast initially but is quickly trapped in a local minimum.
ASEBO and ES-Bpop perform similarly to FD. Nesterov may perform well eventually but converges slowly. IPop-CMA has the worst performance because the simple Lagrangian penalty is insufficient to enforce the constraints. These performances are also demonstrated by the final topologies in Figure \ref{fig5}. \subsection{Inference of hydraulic conductivity field in subsurface environments}\label{sec:ex_4} \begin{wrapfigure}{r}{0.6\textwidth} \vspace{-0.6cm} \includegraphics[scale = 0.3]{./K_field.png} \hspace{-0.4cm} \includegraphics[scale = 0.3]{./Modflow.pdf} \vspace{-0.1cm} \caption{(Left): the target hydraulic conductivity field and the 50 locations (black dots) for collecting hydraulic head data. (Right): comparison of the loss decay w.r.t.~\# function evaluations for predicting the hydraulic conductivity field using hydraulic head data.} \label{fig7} \vspace{-0.3cm} \end{wrapfigure} We demonstrate the superior performance of DGS-ES in solving an inference problem in groundwater modeling. Hydraulic conductivity, which measures the ease of liquid flow through porous media, is an important parameter in predicting contaminant transport in groundwater. However, hydraulic conductivity is very difficult to measure and is typically inferred from hydraulic heads (which are easier to measure). In this work, we use a fully connected neural network (FNN) to approximate a 2D hydraulic conductivity field (Figure \ref{fig7} (left)). The FNN has one hidden layer with 64 neurons. The input is the 2D spatial coordinates, and the output is the hydraulic conductivity value. $\tanh(\cdot)$ is used as the activation function. The training data are hydraulic head samples randomly selected at 50 locations. To map the output of the FNN to the training data space, we need to run the black-box groundwater simulator MODFLOW \cite{modflow}, which solves a second-order parabolic partial differential equation.
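The conductivity parameterization described above can be sketched as a small NumPy network. This is a hedged illustration only: the layer shapes follow the description (2D input, one 64-neuron hidden layer, $\tanh$ activation, scalar output), while the initialization and function names are our assumptions, and the coupling to MODFLOW and the training loss are not reproduced:

```python
import numpy as np

def init_fnn(rng, hidden=64):
    """Illustrative initialization of a 2 -> hidden -> 1 tanh network
    (the scale 0.1 is an assumption, not from the paper)."""
    return {"W1": 0.1 * rng.standard_normal((2, hidden)),
            "b1": np.zeros(hidden),
            "W2": 0.1 * rng.standard_normal((hidden, 1)),
            "b2": np.zeros(1)}

def conductivity(params, xy):
    """Evaluate the FNN at spatial coordinates xy of shape (n, 2)."""
    h = np.tanh(xy @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"]).ravel()

# One conductivity value per query point; the flattened parameter vector
# would be the search variable handed to DGS-ES.
params = init_fnn(np.random.default_rng(0))
k = conductivity(params, np.array([[0.1, 0.2], [0.5, 0.9]]))
```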
The loss function is defined as the mean squared error between the predicted hydraulic heads and the training data\footnote{The training data are generated by running MODFLOW with the true hydraulic conductivity field.}. The hyper-parameters for all the methods and the parameter values for MODFLOW are given in the Appendix. The results in Figure \ref{fig7} (right) clearly demonstrate the much faster convergence of our DGS-ES method compared to the other baselines. \section{Conclusion and discussion} High-dimensional black-box optimization is an important topic in several machine learning areas, such as reinforcement learning (RL), variational inference and adversarial attacks. We developed the DGS-ES algorithm, which uses a novel nonlocal gradient operator with directional Gaussian smoothing to alleviate several challenges in global optimization. Experiments demonstrated that DGS-ES outperforms several baseline algorithms on both benchmark functions and two scientific problems. {\bf Limitations}. There are several limitations in the current version of the DGS-ES algorithm, including: (1) \emph{Naive random perturbation strategy.} As the DGS estimator is deterministic, a more effective random exploration strategy is critical to the robustness of the algorithm. (2) \emph{Hyper-parameter tuning.} The most sensitive hyper-parameter is the smoothing radius $\sigma$. If $\sigma$ is too small, the loss function will be insufficiently smoothed, so the optimizer may be trapped in a local minimum. In contrast, if $\sigma$ is too big, the loss function is overly smoothed, and convergence becomes much slower. How to adaptively adjust the smoothing radius is still an open question. (3) {\em Sub-optimal solutions for loss functions without global structure.} Figure \ref{fig2} shows that DGS-ES cannot find the global minimum of the Schwefel function, which does not have a global structure to exploit. This can also happen in real-world applications.
For example, even though our method outperforms the baselines in solving the TO problem, we cannot verify that the design obtained by our method is globally optimal. {\bf Future work}. We plan to implement a distributed DGS-ES version to accelerate the time to solution for computationally expensive black-box training problems and to demonstrate its strong scalability and dimension-independence property on distributed deep learning frameworks, such as Ray \cite{moritz2018ray}. We also plan to extend the scalable DGS-ES to RL research to help reduce RL training costs from days to hours or even minutes. That would be a major improvement for the whole RL community. \section*{Appendix}
https://arxiv.org/abs/2102.08052
On a List Variant of the Multiplicative 1-2-3 Conjecture
The 1-2-3 Conjecture asks whether almost all graphs can be (edge-)labelled with $1,2,3$ so that no two adjacent vertices are incident to the same sum of labels. In the last decades, several aspects of this problem have been studied in the literature, including more general versions and slight variations. Notable such variations include the List 1-2-3 Conjecture variant, in which edges must be assigned labels from dedicated lists of three labels, and the Multiplicative 1-2-3 Conjecture variant, in which labels~$1,2,3$ must be assigned to the edges so that adjacent vertices are incident to different products of labels. Several results obtained towards these two variants led to the observation of behaviours that are distant from those of the original conjecture. In this work, we consider the list version of the Multiplicative 1-2-3 Conjecture, proposing the first study dedicated to this very problem. In particular, given any graph $G$, we investigate the minimum~$k$ such that $G$ can be labelled as desired when its edges must be assigned labels from dedicated lists of size~$k$. Exploiting a relationship between our problem and the List 1-2-3 Conjecture, we provide upper bounds on~$k$ when $G$ belongs to particular classes of graphs. We further improve some of these bounds through dedicated arguments.
\section{Introduction} Let $G$ be a graph and $\ell$ be a \textit{$k$-labelling} of $G$, i.e., an assignment $\ell : E(G) \rightarrow \{1,\dots,k\}$ of labels $1,\dots,k$ to the edges of $G$. For every vertex $v$ of $G$, one can compute, as a colour, the \textit{sum} $\sigma_\ell(v)$ of labels assigned by $\ell$ to the edges incident to $v$, that is $$\sigma_\ell(v)= \sum_{w \in N(v)} \ell(vw).$$ We say that $\ell$ is \textit{s-proper} if $\sigma_\ell$ is a proper vertex-colouring of $G$, i.e., if, for every edge $uv$ of $G$, we have $\sigma_\ell(u) \neq \sigma_\ell(v)$. We denote by $\chi_\Sigma(G)$ the smallest $k \geq 1$, if any, such that $G$ admits s-proper $k$-labellings. It turns out that $\chi_\Sigma(G)$ is defined, i.e., that $G$ admits s-proper labellings, if and only if $G$ has no connected component isomorphic to $K_2$. For this reason, when investigating s-proper labellings, we generally focus on so-called \textit{nice graphs}, which are those graphs with no connected component isomorphic to $K_2$, i.e., having their parameter $\chi_\Sigma$ being properly defined. The \textbf{1-2-3 Conjecture}, introduced in~\cite{KLT04} by Karo\'nski, {\L}uczak and Thomason in 2004, presumes that the maximum value of $\chi_\Sigma(G)$ for a nice graph $G$ should never exceed~$3$; that is: \begin{123c} If $G$ is a nice graph, then $\chi_\Sigma(G) \leq 3$. \end{123c} Several aspects towards this conjecture have been investigated to date. For an in-depth review of most of our knowledge on the problem, we refer the reader to the survey~\cite{Sea12} by Seamone. Let us mention, as notable evidence towards the 1-2-3 Conjecture, that it is known to hold for $3$-colourable graphs~\cite{KLT04}, and that $\chi_\Sigma(G) \leq 5$ holds for every nice graph $G$~\cite{KKP10}. \medskip Our investigations in this work are primarily related to two variants of the 1-2-3 Conjecture, being its \textbf{Multiplicative} and \textbf{List variants}, which we recall in what follows. 
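As a quick sanity check of these definitions (our own brute-force sketch, not part of the paper), one can search over all labellings with labels $1,\dots,k$ and recover, e.g., that $\chi_\Sigma(K_3)=3$: in a triangle, any s-proper labelling must use three pairwise distinct labels.

```python
from itertools import product

def sigma_proper(edges, labelling):
    """Check whether a labelling is s-proper: adjacent vertices
    must receive distinct sums of incident labels."""
    col = {}
    for (u, v), lab in zip(edges, labelling):
        col[u] = col.get(u, 0) + lab
        col[v] = col.get(v, 0) + lab
    return all(col[u] != col[v] for (u, v) in edges)

def chi_sigma(edges, kmax=5):
    """Smallest k such that some labelling with labels 1..k is s-proper."""
    for k in range(1, kmax + 1):
        if any(sigma_proper(edges, lab)
               for lab in product(range(1, k + 1), repeat=len(edges))):
            return k

K3 = [(0, 1), (0, 2), (1, 2)]
print(chi_sigma(K3))  # 3: the three labels of a triangle must be pairwise distinct
```

Exhaustive search is of course only feasible for tiny graphs; it is used here purely to illustrate the parameter $\chi_\Sigma$.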
As the name suggests, the Multiplicative variant is related to products of labels rather than to sums of labels. The terminology is as follows. Let $G$ be a graph and $\ell$ be a labelling of $G$. This time, for every vertex $v$ of $G$, we compute, as a colour, the \textit{product} $\pi_\ell(v)$ of labels assigned by $\ell$ to the edges incident to $v$, that is $$\pi_\ell(v)= \prod_{w \in N(v)} \ell(vw).$$ We say that $\ell$ is \textit{p-proper} if $\pi_\ell$ is a proper vertex-colouring of $G$, i.e., if, for every edge $uv$ of $G$, we have $\pi_\ell(u) \neq \pi_\ell(v)$. Assuming $G$ is nice, we denote by $\chi_\Pi(G)$ the smallest $k$ such that $G$ admits p-proper $k$-labellings. Introduced in~\cite{SK12} by Skowronek-Kazi\'ow in 2012, the \textbf{Multiplicative 1-2-3 Conjecture} stands as the straight product counterpart to the original 1-2-3 Conjecture: \begin{m123c} If $G$ is a nice graph, then $\chi_\Pi(G) \leq 3$. \end{m123c} The Multiplicative 1-2-3 Conjecture has received, to date, much less attention than the original 1-2-3 Conjecture. Towards it, the main results we know of are that the conjecture holds for $4$-colourable graphs~\cite{BHLS20}, and that $\chi_\Pi(G) \leq 4$ holds for every nice graph $G$~\cite{SK12}. The List variant of the 1-2-3 Conjecture is a generalisation where edges must be assigned labels from fixed-size lists that might be different from $\{1,2,3\}$. This is made formal through the following definitions. Let $G$ be a graph and $L$ be a \textit{$k$-list assignment} (to the edges) of $G$, i.e., an assignment of sets of $k$ real numbers to the edges. An \textit{$L$-labelling} $\ell$ of $G$ is a labelling where each edge is assigned a label from its list, i.e., $\ell(e) \in L(e)$ for every $e \in E(G)$. Note that the notion of s-proper labellings can be extended naturally to $L$-labellings. Now, for a nice graph $G$, we define ${\rm ch}_\Sigma(G)$ as the smallest $k$ such that $G$ admits an s-proper $L$-labelling for every $k$-list assignment $L$.
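Analogously to the sum version, a brute-force search (again our own illustrative sketch, not from the paper) recovers $\chi_\Pi(K_3)=3$: since labels are nonzero here, $\pi_\ell(u)=\pi_\ell(v)$ for adjacent $u,v$ of the triangle exactly when the two edges not shared by $u$ and $v$ carry equal labels, so all three labels must again be pairwise distinct.

```python
from itertools import product

def pi_proper(edges, labelling):
    """Check whether a labelling is p-proper: adjacent vertices
    must receive distinct products of incident labels."""
    col = {}
    for (u, v), lab in zip(edges, labelling):
        col[u] = col.get(u, 1) * lab
        col[v] = col.get(v, 1) * lab
    return all(col[u] != col[v] for (u, v) in edges)

def chi_pi(edges, kmax=5):
    """Smallest k such that some labelling with labels 1..k is p-proper."""
    for k in range(1, kmax + 1):
        if any(pi_proper(edges, lab)
               for lab in product(range(1, k + 1), repeat=len(edges))):
            return k

K3 = [(0, 1), (0, 2), (1, 2)]
print(chi_pi(K3))  # 3: labels 1, 2, 3 give products 2, 3, 6
```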
The \textbf{List 1-2-3 Conjecture}, introduced in 2009 by Bartnicki, Grytczuk and Niwczyk in~\cite{BGN09}, is the straight analogue of the 1-2-3 Conjecture to the previous notions: \begin{l123c} If $G$ is a nice graph, then ${\rm ch}_\Sigma(G) \leq 3$. \end{l123c} The List 1-2-3 Conjecture is of course much stronger than the original conjecture, and, as a matter of fact, there is still no known general constant upper bound on ${\rm ch}_\Sigma$. To date, the best bound we know of is that ${\rm ch}_\Sigma(G) \leq \Delta(G)+1$ holds for every nice graph $G$~\cite{DDWWWYZ19} (where $\Delta(G)$ denotes the maximum degree of $G$). Constant upper bounds were established for some classes of graphs; see later Section~\ref{section:cn-and-connections} for more details. For now, let us just mention that most of these results were established through an approach imagined by Bartnicki, Grytczuk and Niwczyk in~\cite{BGN09}, which is reminiscent of studies on the choosability of graphs, and relies on attacking the problem from an algebraic point of view. This will also be described further in a later section (Subsection~\ref{subsection:algebraic-tools}), as this is an important point behind our investigations. \medskip This work is dedicated to investigating a new problem inspired by the previous ones above, holding, essentially, as a \textbf{List Multiplicative 1-2-3 Conjecture}. Note that, for an $L$-labelling of a nice graph $G$, the notion of p-properness adapts naturally, and from this we can define ${\rm ch}_\Pi(G)$ as the smallest $k$ such that $G$ admits a p-proper $L$-labelling for every $k$-list assignment $L$ to its edges. This parameter ${\rm ch}_\Pi$ is precisely the one we study throughout this work. To the best of our knowledge, this parameter was, to date, only discussed briefly by Seamone in his survey~\cite{Sea12}, in which he suggests a few of its properties.
This parameter is also somewhat close to other studied parameters, such as the notions of \textit{product irregularity strength}~\cite{Anh09} (related to labellings for which all vertices, not only the adjacent ones, must be incident to distinct products of labels) and \textit{neighbour-product-distinguishing index}~\cite{LQWY17} (related to labellings for which the labels assigned to the edges must form a proper edge-colouring). The current paper consists of two main sections. Section~\ref{section:cn-and-connections} stands as a preliminary section in which we raise first observations on the parameter ${\rm ch}_\Pi$. In that section, we also explore the connections between our problem and the List 1-2-3 Conjecture, from which we get first systematic upper bounds on ${\rm ch}_\Pi$. We also get to describing the algebraic approach through which we improve some of these first bounds. These improved bounds are gathered in Section~\ref{section:improved-bounds}, and are about both general graphs (Subsection~\ref{subsection:general-bounds}) and particular classes of graphs, such as trees, planar graphs with large girth, and subcubic graphs (Subsection~\ref{subsection:bounds-particular-classes}). \section{Preliminaries, tools, and connections with the sum variant}\label{section:cn-and-connections} We here introduce all tools and preliminary materials needed to establish the results in later Section~\ref{section:improved-bounds}. In Subsection~\ref{subsection:remarks-chp}, we first state a few easy observations on the parameter ${\rm ch}_\Pi$. In Subsection~\ref{subsection:connections-sum-variant}, we establish and exploit a relationship between the two parameters ${\rm ch}_\Sigma$ and ${\rm ch}_\Pi$, from which we deduce first constant bounds on ${\rm ch}_\Pi$ for several graph classes. Finally, in Subsection~\ref{subsection:algebraic-tools}, we recall algebraic tools from which improved bounds will be obtained, later in Section~\ref{section:improved-bounds}. 
For transparency, let us mention that some of the results from this section, mostly from Subsection~\ref{subsection:remarks-chp}, were already suggested by Seamone in~\cite{Sea12}, in which the parameter ${\rm ch}_\Pi$ is discussed very briefly. To make our contribution clear, we notify properly, through what follows, every remark also mentioned in~\cite{Sea12}. \subsection{Early remarks on the parameter ${\rm ch}_\Pi$}\label{subsection:remarks-chp} As remarked in~\cite{Sea12}, note, given an edge $uv$ of a graph $G$, that if $\ell(uv)=0$ by a labelling $\ell$ of $G$, then $\ell$ cannot be p-proper, since this would imply $\pi_\ell(u)=\pi_\ell(v)=0$. Thus, for any list assignment $L$ of $G$, a p-proper $L$-labelling is actually a p-proper $L^*$-labelling, where $L^*$ is the list assignment of $G$ verifying $L^*(e)=L(e) \setminus \{0\}$ for every edge $e \in E(G)$. Therefore, throughout this work, we consider list assignments not assigning label~$0$ to the edge lists. To catch this point, we refine the parameter ${\rm ch}_\Pi(G)$ of a graph $G$ to the parameter ${\rm ch}_\Pi^*(G)$, which is the smallest $k \geq 1$, if any, such that $G$ admits p-proper $L$-labellings for every $k$-list assignment $L$ not assigning label~$0$. By the previous remarks, obviously the following holds: \begin{observation}\label{observation:nozero} If $G$ is a nice graph, then ${\rm ch}_\Pi(G)={\rm ch}_\Pi^*(G)+1$. \end{observation} We note that if $L$ is the $1$-list assignment of $G$ where $L(e) = \{1\}$ for every edge $e$, then $G$ admits no p-proper $L$-labellings, since every such labelling $\ell$ would verify $\pi_\ell(u)=\pi_\ell(v)=1$ for every edge $uv \in E(G)$. Thus: \begin{observation} There is no graph $G$ verifying ${\rm ch}_\Pi^*(G)=1$. \end{observation} Analogous conclusions can be reached regarding graphs $G$ with ${\rm ch}_\Pi^*(G)=2$. Here, consider the $2$-list assignment $L$ of $G$ where $L(e)=\{-1,1\}$ for every edge $e$. 
Then, by an $L$-labelling $\ell$ of $G$, we have $\pi_\ell(v) \in \{-1,1\}$ for every vertex $v \in V(G)$. This implies that $\ell$ is p-proper if and only if $\pi_\ell$ is a proper $2$-vertex-colouring of $G$. In turn, this yields the following (also mentioned in~\cite{Sea12}): \begin{observation} If $G$ is a graph with ${\rm ch}_\Pi^*(G)=2$, then $G$ is bipartite. \end{observation} The previous condition is not sufficient, however, as nice connected bipartite graphs $G$ with ${\rm ch}_\Pi^*(G)=2$ must fulfil an additional property. \begin{proposition} Let $G$ be a connected bipartite graph with bipartition $(A,B)$. If ${\rm ch}_\Pi^*(G)=2$, then at least one of $|A|$ and $|B|$ must be even. \end{proposition} \begin{proof} Assume the claim is wrong, and let $G$ be a connected bipartite graph with ${\rm ch}_\Pi^*(G)=2$ in which the two parts $A$ and $B$ are of odd size. Consider $L$, the $2$-list assignment of $G$ where $L(e)=\{-1,1\}$ for every edge $e \in E(G)$. As mentioned earlier, by every $L$-labelling $\ell$ of $G$, we have $\pi_\ell(v) \in \{-1,1\}$ for every vertex $v \in V(G)$. Thus, because $G$ is connected, for such an $\ell$ to be p-proper we must have, say, $\pi_\ell(a) = -1$ for every $a \in A$ and $\pi_\ell(b) = 1$ for every $b \in B$. For the first condition to occur, for every $a \in A$ there must be an odd number of incident edges labelled $-1$ by $\ell$. Since $|A|$ is odd, this means that we must have an odd number of edges of $G$ labelled~$-1$ by $\ell$. For the second condition to occur, for every $b \in B$ there must be an even number of incident edges labelled $-1$ by $\ell$. For that, we must have an even number of edges of $G$ labelled~$-1$ by $\ell$, which is a contradiction. \end{proof} Thus, connected graphs $G$ with ${\rm ch}_\Pi^*(G)=2$ are connected bipartite graphs with at least one part of even cardinality. This condition is necessary but still not sufficient, however, even in simple graph classes such as trees. 
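Both necessary conditions above lend themselves to exhaustive verification over $\{-1,1\}$-labellings; the check below is our own sketch, not part of the paper. The (non-bipartite) triangle admits no p-proper labelling from such lists, and neither does the path on six vertices, whose two bipartition classes both have odd size, whereas the cycle $C_4$ (both classes of even size) does admit one.

```python
from itertools import product

def pi_proper(edges, labelling):
    # adjacent vertices must receive distinct products of incident labels
    col = {}
    for (u, v), lab in zip(edges, labelling):
        col[u] = col.get(u, 1) * lab
        col[v] = col.get(v, 1) * lab
    return all(col[u] != col[v] for (u, v) in edges)

def admits_pm1_labelling(edges):
    # does some labelling from the constant lists {-1, 1} work?
    return any(pi_proper(edges, lab)
               for lab in product([-1, 1], repeat=len(edges)))

K3 = [(0, 1), (0, 2), (1, 2)]          # not bipartite
P6 = [(i, i + 1) for i in range(5)]    # bipartition {0,2,4} / {1,3,5}: both odd
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]  # bipartition {0,2} / {1,3}: both even

print(admits_pm1_labelling(K3), admits_pm1_labelling(P6), admits_pm1_labelling(C4))
# False False True
```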
To see this is true, consider the following easy remarks. Suppose we have a graph $G$ with a pending path $wvu$ of length~$2$, where $d(u)=1$ and $d(v)=2$, and suppose $L$ is a $2$-list assignment to the edges of $G$. Assume more particularly that $L(wv)=\{1,a\}$ for some $a \neq 1$. Then, note that, in any p-proper $L$-labelling $\ell$ of $G$, we cannot have $\ell(vw)=1$, as otherwise we would have $\pi_\ell(v)=\pi_\ell(u)$ whatever $\ell(vu)$ is, a contradiction. In other words, the label of $wv$ by a p-proper $L$-labelling of $G$ is forced to $a$. From this, we can construct arbitrarily many trees $T$ with ${\rm ch}_\Pi^*(T)=3$ and any wanted cardinality parity for the parts of its bipartition. As an illustration (which admits obvious generalisations), consider the tree $T$ with vertex set $V(T)=\{v_1,\dots,v_8\}$ and edge set $E(T)=\{v_1v_2,v_2v_5,v_3v_4,v_4v_5,v_5v_6,v_6v_7,v_7v_8\}$, and note that $T$ has no p-proper $L$-labelling for any list assignment $L$ where $L(v_6v_7)=\{1,a^2\}$ and $L(v_2v_5)=L(v_4v_5)=\{1,a\}$ (for some $a \not \in \{1,-1\}$). \subsection{Connections with the sum variant, and first bounds on ${\rm ch}_\Pi$}\label{subsection:connections-sum-variant} As suggested by Seamone in~\cite{Sea12}, there is a straight connection between the parameters ${\rm ch}_\Sigma$ and ${\rm ch}_\Pi^*$, which follows from the product rule of logarithms. Despite this fact being easy to visualise, we give a detailed proof to establish the precise relationship between the two. \begin{theorem}\label{theorem:chs-to-chp} If $G$ is a nice graph, then ${\rm ch}_\Pi^*(G) \leq 2{\rm ch}_\Sigma(G)-1$. \end{theorem} \begin{proof} Assume we have ${\rm ch}_\Sigma(G) \leq k$ for some nice graph $G$ and $k \geq 2$. We prove that ${\rm ch}_\Pi^*(G) \leq 2k-1$. Let $L$ be a $(2k-1)$-list assignment to the edges of $G$, where none of the $L(e)$'s contains label~$0$. 
For every $e \in E(G)$, since $|L(e)|=2k-1$, there must be $S(e) \subset L(e)$ such that $|S(e)|=k$ and no two elements of $S(e)$ have the same absolute value. We set $X(e) = \{|x| : x \in S(e)\}$ and $L'(e) = \{\log(x) : x \in X(e)\}$\footnote{Throughout this work, any used $\log$ function can be in any fixed base.}. Then $L'$ is a $k$-list assignment of $G$ where each edge $e$ is associated $k$ real values that are logarithms of values of $L(e)$ with different absolute values. Our original assumption ${\rm ch}_\Sigma(G) \leq k$ implies that $G$ admits an s-proper $L'$-labelling $\ell'$. We now consider an $L$-labelling $\ell$ of $G$ obtained as follows. We consider every edge $e$ of $G$, and we choose, as $\ell(e)$, any label from $L(e)$ that resulted in $L'(e)$ containing $\ell'(e)$. By how $L'$ was obtained, note that, indeed, one such value belongs to $L(e)$. Thus, $\ell$ is an $L$-labelling. As a result, for every $v \in V(G)$ with incident edges $e_1, \dots, e_d$, we get $$\sigma_{\ell'}(v)=\sum_{i=1}^d \ell'(e_i)=\sum_{i=1}^d \left(\log |\ell(e_i)|\right) =\log\left(\prod_{i=1}^d |\ell(e_i)|\right)= \log\left(|\pi_\ell(v)|\right).$$ In particular, since $\ell'$ is s-proper, adjacent vertices get distinct values of $\log |\pi_\ell|$, hence distinct products, and $\ell$ is p-proper. \end{proof} The connection between ${\rm ch}_\Sigma$ and ${\rm ch}_\Pi^*$ in Theorem~\ref{theorem:chs-to-chp} implies that, for any constant upper bound on ${\rm ch}_\Sigma$ for some graph class, we deduce a constant upper bound on ${\rm ch}_\Pi^*$ as well. In the next result, we have listed some constant bounds on ${\rm ch}_\Sigma$ from the literature, together with the bounds on ${\rm ch}_\Pi^*$ we get as a consequence. It is worth emphasising that we do not claim this list to be exhaustive in any way. Namely, we only list the bounds that seem the most significant to us, and the interested reader should be aware that more results of the sort below can be established from results mentioned in the references below.
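The pigeonhole step of this proof — among $2k-1$ nonzero reals, at least $k$ have pairwise distinct absolute values, since each absolute value has at most two preimages $\pm x$ — is easy to check numerically. The function below is our own illustrative sketch of the construction of $L'$ from $L$, not code from the paper.

```python
from math import log

def to_sum_list(L):
    """Mimic the proof: keep one label per absolute value, then take logs.
    L is assumed to be a list of nonzero reals."""
    by_abs = {}
    for x in L:
        by_abs.setdefault(abs(x), x)   # one representative per absolute value
    return [log(a) for a in by_abs]

L = [-3, -2, -1, 2, 3]        # a (2k-1)-list with k = 3, label 0 excluded
Lp = to_sum_list(L)
assert len(Lp) >= 3           # at least k pairwise distinct logarithms survive
assert len(set(Lp)) == len(Lp)
```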
\begin{corollary}\label{corollary:existing-bounds} Let $G$ be a nice connected graph. \begin{itemize} \item ${\rm ch}_\Sigma(G) \leq \Delta(G)+1$ (see \cite{DDWWWYZ19}); thus ${\rm ch}_\Pi^*(G) \leq 2\Delta(G)+1$. \item If $G$ is complete, complete bipartite, or a tree, then ${\rm ch}_\Sigma(G) \leq 3$ (see \cite{BGN09}); thus ${\rm ch}_\Pi^*(G) \leq 5$. \item If $G$ is $2$-degenerate and non-bipartite, then ${\rm ch}_\Sigma(G) \leq 3$ (see \cite{WZ18}); thus ${\rm ch}_\Pi^*(G) \leq 5$. \item If $G$ is a wheel, then ${\rm ch}_\Sigma(G) \leq 3$ (see \cite{PY13}); thus ${\rm ch}_\Pi^*(G) \leq 5$. \item If ${\rm mad}(G) \leq \frac{11}{4}$, then ${\rm ch}_\Sigma(G) \leq 3$ (see \cite{LWZ18}); thus ${\rm ch}_\Pi^*(G) \leq 5$. \item If $G$ is outerplanar, then ${\rm ch}_\Sigma(G) \leq 4$ (see \cite{PY13}); thus ${\rm ch}_\Pi^*(G) \leq 7$. \item If $\Delta(G) \leq 4$, then ${\rm ch}_\Sigma(G) \leq 4$ (see \cite{LLM20}); thus ${\rm ch}_\Pi^*(G) \leq 7$. \item If $G$ is $2$-connected and chordal, or a line graph, then ${\rm ch}_\Sigma(G) \leq 5$ (see \cite{Won21}); thus ${\rm ch}_\Pi^*(G) \leq 9$. \item If $G$ is a planar graph, then ${\rm ch}_\Sigma(G) \leq 7$ (see \cite{WZ18}); thus ${\rm ch}_\Pi^*(G) \leq 13$. \end{itemize} \end{corollary} A consequence of the first item in Corollary~\ref{corollary:existing-bounds} is that the List 1-2-3 Conjecture itself makes plausible the existence of a general constant upper bound on ${\rm ch}_\Pi^*$. In particular, we currently have no evidence that the following, which would be a legitimate guess, might be false: \begin{lm123c} If $G$ is a nice graph, then ${\rm ch}_\Pi^*(G) \leq 3$. \end{lm123c} Recall that observations raised at the end of Subsection~\ref{subsection:remarks-chp} establish that this conjecture, if true, would actually be tight.
\subsection{Algebraic tools}\label{subsection:algebraic-tools} To improve, in next Section~\ref{section:improved-bounds}, some of the bounds from Corollary~\ref{corollary:existing-bounds}, we will adapt and employ an algebraic approach that was first designed by Bartnicki, Grytczuk and Niwczyk to deal with the List 1-2-3 Conjecture in~\cite{BGN09}, and which is inspired by polynomial methods developed to deal with list colouring of graphs. Consider a graph $G$ with edges $e_1,\dots,e_m$, and a list assignment $L$ to the edges of $G$. For a vertex $u$ and an edge $e$ of $G$, we write $e \sim u$ if $e$ is incident to $u$. Let $\vec{G}$ be any orientation of $G$. To each edge $e_i$ of $G$, we associate a variable $x_i$. Now, we associate to $G$ (through $\vec{G}$) a polynomial $Q_{\vec{G}}$ with variables $x_1,\dots,x_m$, being $$Q_{\vec G}(x_1, \dots x_m)= \prod_{\vec{uv} \in A(\vec{G})} \left(\sum_{e_i \sim u} x_i - \sum_{e_i \sim v} x_i\right).$$ It is easy to see that $G$ has an s-proper $L$-labelling if and only if there are values $l_1 \in L(e_1),\dots,l_m \in L(e_m)$ such that $Q_{\vec{G}}(l_1,\dots,l_m)$ does not vanish. From this point of view, a powerful tool is the so-called \textit{Combinatorial Nullstellensatz} of Alon~\cite{Alon99}, which provides sufficient conditions, in terms of the sizes of the lists $L(e_1),\dots,L(e_m)$, for such values $l_1,\dots,l_m$ to be choosable. \begin{cn} Let $\mathbb{F}$ be an arbitrary field, and let $f=f(x_1,\dots,x_n)$ be a polynomial in $\mathbb{F}[x_1,\dots,x_n]$. Suppose the total degree of $f$ is $\sum_{i=1}^n t_i$, where each $t_i$ is a nonnegative integer, and suppose the coefficient of $\prod_{i=1}^n x_i^{t_i}$ is nonzero. If $S_1, \dots, S_n$ are subsets of $\mathbb{F}$ with $|S_i|>t_i$, then there are $s_1 \in S_1, \dots, s_n \in S_n$ so that $f(s_1,\dots,s_n) \neq 0$. 
\end{cn} Thus, bounds on ${\rm ch}_\Sigma(G)$ can be obtained via the Combinatorial Nullstellensatz through studying the monomials in the expansion of $Q_{\vec{G}}$, more precisely monomials with nonzero coefficient, maximum degree, and, preferably, low exponent values. Note that all the monomials of $Q_{\vec{G}}$ share the very convenient property that they are all of maximum degree $m$, which is one of the prerequisites for the Combinatorial Nullstellensatz to work. The tricky part, actually, is about anticipating the coefficients of the monomials of $Q_{\vec{G}}$ (the nonzero ones, particularly), which are far from being obvious in general. In~\cite{BGN09}, the authors developed a very nice dedicated approach, which is based on studying the permanent of a particular matrix representing $Q_{\vec{G}}$. \medskip A similar polynomial approach can of course be applied for deducing bounds on ${\rm ch}_\Pi(G)$. The main difference is that, this time, we have to consider the products of labels incident to the vertices, instead of their sums. More precisely, the polynomial of interest is here $$P_{\vec G}(x_1, \dots x_m)= \prod_{\vec{uv} \in A(\vec{G})} \left(\prod_{e_i \sim u} x_i - \prod_{e_i \sim v} x_i\right).$$ Compared to the polynomial $Q_{\vec{G}}$, a big difference is that, in the expansion of $P_{\vec{G}}$, the monomials are likely to have different degrees, which means that the Combinatorial Nullstellensatz might apply to a few of them only. Even worse is that the degree of $P_{\vec{G}}$ is generally bigger than that of $Q_{\vec{G}}$, and, in particular, the exponents of the monomials generally tend to be bigger too. Note indeed that the degree of $Q_{\vec{G}}$ is precisely $m$, while the degree of $P_{\vec{G}}$ can be as large as $\sum_{uv \in E(G)} \max\{d(u),d(v)\}$ (which can be reached, e.g. when no two adjacent vertices of $G$ have the same degree). 
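To make the degree comparison concrete, one can expand $Q_{\vec{G}}$ and $P_{\vec{G}}$ for the path $v_1v_2v_3$ with edges $e_1=v_1v_2$, $e_2=v_2v_3$ and arcs $\vec{v_1v_2}$, $\vec{v_2v_3}$. Here $Q_{\vec{G}}=(x_1-(x_1+x_2))((x_1+x_2)-x_2)=-x_1x_2$ is homogeneous of degree $m=2$, while $P_{\vec{G}}=(x_1-x_1x_2)(x_1x_2-x_2)$ has monomials of degrees $2$ up to $4$, its maximum-degree monomial $-x_1^2x_2^2$ having all exponents at most $2\Delta(G)-1=3$. The toy expansion routine below is our own sketch, not from the paper.

```python
from itertools import product
from collections import defaultdict

def expand(factors, nvars):
    """Multiply out a product of polynomials, each given as a list of
    (coefficient, exponent-tuple) terms; returns {exponents: coefficient}."""
    poly = defaultdict(int)
    for terms in product(*factors):
        coeff, expo = 1, [0] * nvars
        for c, e in terms:
            coeff *= c
            expo = [a + b for a, b in zip(expo, e)]
        poly[tuple(expo)] += coeff
    return {e: c for e, c in poly.items() if c != 0}

# Q = (x1 - (x1 + x2)) * ((x1 + x2) - x2)
Q = expand([[(1, (1, 0)), (-1, (1, 0)), (-1, (0, 1))],
            [(1, (1, 0)), (1, (0, 1)), (-1, (0, 1))]], 2)
# P = (x1 - x1*x2) * (x1*x2 - x2)
P = expand([[(1, (1, 0)), (-1, (1, 1))],
            [(1, (1, 1)), (-1, (0, 1))]], 2)

print(sorted(Q.items()))  # [((1, 1), -1)]: homogeneous of degree m = 2
print(sorted(P.items()))  # four monomials, of degrees 2 up to 4
```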
For these reasons, as will be seen in next Section~\ref{section:improved-bounds}, deducing bounds on ${\rm ch}_\Pi^*$ via the Combinatorial Nullstellensatz alone seems to be viable in particular contexts only. \section{Improved bounds on ${\rm ch}_\Pi^*$ for some graph classes} \label{section:improved-bounds} We here improve some of the bounds on ${\rm ch}_\Pi^*$ from Corollary~\ref{corollary:existing-bounds}. We first consider graphs in general, in Subsection~\ref{subsection:general-bounds}. We then focus, in Subsection~\ref{subsection:bounds-particular-classes}, on particular classes of graphs, including trees, planar graphs with large girth, and subcubic graphs. In the latter subsection, the exhibited improved bounds are optimal, or close to optimal. \subsection{General graphs}\label{subsection:general-bounds} The bounds on ${\rm ch}_\Pi^*$ we establish in this section are expressed as functions of the maximum degree, our goal being to improve the bound of the first item of Corollary~\ref{corollary:existing-bounds}. We start off by improving that bound slightly for all nice graphs. From the bound we provide, we deduce, towards the List Multiplicative 1-2-3 Conjecture, that the List 1-2-3 Conjecture, if verified, would imply that ${\rm ch}_\Pi^*(G) \leq 5$ holds for every nice graph $G$. \begin{theorem}\label{theorem:upper-bound-cn} If $G$ is a nice graph, then ${\rm ch}_\Pi^*(G) \leq 2\Delta(G)-1$. \end{theorem} \begin{proof} Let us denote by $e_1,\dots,e_m$ the edges of $G$, and, for every $i \in \{1,\dots,m\}$, let $x_i$ be a variable associated to $e_i$.
Now, let $\vec{G}$ be any orientation of $G$, and $P_{\vec{G}}$ be the polynomial with variables $x_1,\dots,x_m$ defined as $$P_{\vec G}(x_1,\dots,x_m)= \prod_{\vec{uv} \in A(\vec{G})} \left(\prod_{e_i \sim u} x_i - \prod_{e_i \sim v} x_i\right).$$ As described in Subsection~\ref{subsection:algebraic-tools}, if $L$ is a list assignment of $G$, and $P_{\vec{G}}(l_1,\dots,l_m) \neq 0$ for some $l_1 \in L(e_1),\dots,l_m\in L(e_m)$, then clearly we deduce a p-proper $L$-labelling of $G$. Let $M=cx_1^{t_1}\dots x_m^{t_m}$ be a monomial of maximum degree from the expansion of $P_{\vec{G}}$ with $c \neq 0$. Such an $M$ has to exist, since $\mathbb{R}[x_1,\dots,x_m]$ is an integral domain. We note that for every $i \in \{1,\dots,m\}$, we have $t_i \leq 2\Delta(G)-1$. This is because variable $x_i$ appears in at most $2\Delta(G)-1$ factors of $P_{\vec{G}}$: once due to the edge $e_i$, and at most $2\Delta(G)-2$ times due to the other edges incident to the two ends of $e_i$. By earlier arguments, the Combinatorial Nullstellensatz, due to the existence of $M$, now implies that ${\rm ch}_\Pi(G) \leq 2\Delta(G)$, thus our conclusion on ${\rm ch}_\Pi^*$ by Observation~\ref{observation:nozero}. \end{proof} The next bound is a significant improvement over Theorem~\ref{theorem:upper-bound-cn}, in the case of graphs having vertices with convenient neighbourhood properties. \begin{theorem}\label{theorem:better-bound-triangle-free} If $G$ is a nice graph with a vertex $u$ such that $d(u) \geq 2$, $N(u)$ is a stable set, and ${\rm ch}_\Pi^*(G-u) \leq \Delta(G-u)+3$, then ${\rm ch}_\Pi^*(G) \leq \Delta(G-u)+3$. \end{theorem} \begin{proof} Let $L$ be a $(\Delta(G-u)+3)$-list assignment to the edges of $G$, and let $L'$ be the restriction of $L$ to the edges of $G'=G-u$. Since ${\rm ch}_\Pi^*(G') \leq \Delta(G')+3$, there is a p-proper $L'$-labelling $\ell'$ of $G'$.
Our aim is to extend $\ell'$ to a p-proper $L$-labelling $\ell$ of $G$, by considering the edges $uv_1,\dots,uv_d$ ($d \geq 2$) incident to $u$, and, for each one $uv_i$ of them, assigning it a label from $L(uv_i)$ so that, eventually, no conflict arises. For every $i \in \{1, \dots, d\}$, we denote by $z_i$ the current product of $v_i$ (i.e., by $\ell'$). Because $N(u)$ is a stable set, note that assigning a label to any $uv_i$ completely determines the product of $v_i$, in the sense that all edges incident to $v_i$ get labelled. Thus, when labelling $uv_i$, we must ensure that $v_i$ does not get in conflict with its neighbours different from $u$. Since $|N(v_i) \setminus \{u\}| \leq \Delta(G-u)$, and because $|L(uv_i)| = \Delta(G-u)+3$, there are at least three distinct values in $L(uv_i)$ that can be assigned to $uv_i$ without causing any conflict between $v_i$ and its neighbours different from $u$. For every $i \in \{1,\dots,d\}$, we denote by $S_i$ this subset of ``safe'' values of $L(uv_i)$ for $uv_i$. Because $|S_i| \geq 3$, there are in $S_i$ at least two values $a_i,b_i$ such that $|a_i| \neq |b_i|$. We will be done if we can find an assignment of $a_i$'s and $b_i$'s to the $uv_i$'s for which $u$ gets in conflict with none of the $v_i$'s. Such an assignment is actually guaranteed to exist by the Combinatorial Nullstellensatz. Indeed, for every $i \in \{1, \dots, d\}$, let $x_i$ be a variable associated to the edge $uv_i$. Consider the polynomial $$P(x_1,\dots,x_d)=\prod_{i=1}^d \left(\prod_{j = 1}^d x_j - x_iz_i\right).$$ For every $i \in \{1, \dots, d\}$, set $y_i=\log(|x_i|)$.
Note that considering $P$ is similar to considering $$P'(y_1,\dots,y_d)=\prod_{i=1}^d \left(\sum_{\substack{j=1 \\ j\neq i}}^d y_j - \log(|z_i|)\right),$$ which, because the $z_i$'s are constants, in the current context is similar to studying $$P''(y_1,\dots,y_d)=\prod_{i=1}^d \sum_{\substack{j=1 \\ j\neq i}}^d y_j.$$ We remark that the monomial $y_1 \dots y_d$ in the expansion of $P''$ has strictly positive coefficient. Thus, by the Combinatorial Nullstellensatz, we can assign values to the $y_i$'s so that $P''$ (and thus $P'$) does not vanish, assuming we have at least two possible values to choose from for each of them. From this, we get that we can assign values to the $x_i$'s so that $P$ does not vanish as long as we have at least two possible values with distinct absolute values to choose from, for each of them. Particularly, this means that we can assign a label from $\{a_i,b_i\}$ to every $uv_i$, in such a way that a p-proper $L$-labelling of $G$ results. \end{proof} Together with checking a few base cases (which is done through results in next Subsection~\ref{subsection:bounds-particular-classes}), Theorem~\ref{theorem:better-bound-triangle-free} implies the following: \begin{corollary} If $G$ is a nice triangle-free graph, then ${\rm ch}_\Pi^*(G) \leq \Delta(G)+3$. \end{corollary} \subsection{Particular classes of graphs}\label{subsection:bounds-particular-classes} \subsubsection*{Paths and cycles} Note that Theorem~\ref{theorem:upper-bound-cn} implies that the List Multiplicative 1-2-3 Conjecture holds for nice graphs $G$ with $\Delta(G) \leq 2$, i.e., paths and cycles. In such simple cases, this can actually be refined to a tight result. In the sequel, for an $n \geq 2$ (or an $n \geq 3$ in the case of a cycle), we denote by $P_n$ and $C_n$ the path and cycle, respectively, of length~$n$.
\begin{theorem}\label{theorem:cycles} For an $n \geq 2$, we have: \begin{itemize} \item ${\rm ch}_\Pi^*(P_n)=2$ if $n$ is even or $n=3$; \item ${\rm ch}_\Pi^*(P_n)=3$ otherwise. \end{itemize} For an $n \geq 3$, we have: \begin{itemize} \item ${\rm ch}_\Pi^*(C_n)=2$ if $n \equiv 0 \bmod 4$; \item ${\rm ch}_\Pi^*(C_n)=3$ otherwise. \end{itemize} \end{theorem} \begin{proof} We deal with cycles first. Let us denote by $e_0,\dots,e_{n-1}$ the successive edges of $C_n$, and by $v_0,\dots,v_{n-1}$ its successive vertices, where $e_i=v_iv_{i+1}$ for every $i \in \{0,\dots,n-1\}$ (where, here and further, operations over the indices are understood modulo~$n$). For any two adjacent vertices $v_i$ and $v_{i+1}$, note that, in order to get $\pi_\ell(v_i) \neq \pi_\ell(v_{i+1})$ by a labelling $\ell$ of $C_n$, we must have $\ell(v_{i-1}v_i) \neq \ell(v_{i+1}v_{i+2})$. Thus, for $\ell$ to be p-proper, any two edges of $C_n$ at distance~$2$ apart must be assigned different labels. Now consider $G$, the graph constructed from $C_n$ by adding one vertex $v_{e_i}$ in $G$ for every edge $e_i$ of $C_n$, and adding an edge $v_{e_i}v_{e_j}$ between any two vertices $v_{e_i},v_{e_j}$ of $G$ if $e_i$ and $e_j$ are at distance exactly~$2$ in $C_n$. By a remark above, we have ${\rm ch}_\Pi^*(C_n)={\rm ch}(G)$ (where ${\rm ch}(G)$ refers to the usual choice number of $G$). Note that $G$ is an odd-length cycle when $n$ is odd, a union of two odd-length cycles when $n \equiv 2 \bmod 4$, and a union of two even-length cycles when $n \equiv 0 \bmod 4$. Since even-length cycles have choice number~$2$ and odd-length cycles have choice number~$3$ (see e.g.~\cite{ERT79}), the result follows. \medskip Regarding paths, remark first that if $n \equiv 1 \bmod 4$, then $P_n$ is a bipartite graph in which the two parts of the bipartition have odd cardinality.
As described at the end of Subsection~\ref{subsection:remarks-chp}, we must have ${\rm ch}_\Pi^*(P_n)>2$ in such a situation, and we actually have ${\rm ch}_\Pi^*(P_n) = 3$ by Theorem~\ref{theorem:upper-bound-cn}. Let us now consider the remaining values of $n$. For a given $n \geq 2$, similarly as in the case of cycles, let us denote by $e_1, \dots, e_n$ the successive edges of $P_n$, and by $v_1, \dots, v_{n+1}$ its vertices, where $e_i=v_iv_{i+1}$ for every $i \in \{1,\dots,n\}$. Note that, contrary to the case of cycles, labelling $P_n$ is not similar to colouring $G$, the constraint graph of the edges at distance~$2$ in $P_n$, because, when labelling $P_n$, we must also guarantee that $e_2$ and $e_{n-1}$ are not assigned label~$1$, so that $v_1$ and $v_2$ are not in conflict, and similarly for $v_n$ and $v_{n+1}$. Note that $G$ is here the union of two (possibly empty) paths; the new labelling constraint is similar to forbidding one colour for each of $v_{e_2}$ and $v_{e_{n-1}}$. A problem is when these two vertices are the ends of the same path of $G$. Indeed, from this remark, we note that in the cases where $n \equiv 3 \bmod 4$ as well, there are $2$-list assignments $L$ of $P_n$ such that $P_n$ has no p-proper $L$-labelling. Indeed, note that the set of edges $\{e_2,e_4,\dots,e_{n-1}\}$ has odd cardinality due to the value of $n$. Recall that any two edges of this set at distance~$2$ in $P_n$ must receive distinct labels by a p-proper labelling. Then note that if $L(e_2)=\{1,a\}$ for some $a \not \in \{-1,1\}$ and $L(e_{2k})=\{a,b\}$ for some $b \not \in \{1,a\}$ for every other edge $e_{2k} \not \in \{e_2,e_{n-1}\}$, then, depending on the value of $n$, for either $L(e_{n-1})=\{1,a\}$ or $L(e_{n-1})=\{1,b\}$ label~$1$ must be assigned to one of $e_2$ and $e_{n-1}$ by a p-proper $L$-labelling, which creates a conflict, a contradiction.
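Both counterexample mechanisms in this proof can be confirmed by exhaustive search; the check below is our own sketch, instantiated with the concrete values $a=2$, $b=3$. It verifies that $C_6$ ($n \equiv 2 \bmod 4$) fails for the constant $2$-list $\{2,3\}$ while $C_8$ ($n \equiv 0 \bmod 4$) succeeds, and that $P_7$ ($n \equiv 3 \bmod 4$) fails for lists of the shape just described.

```python
from itertools import product

def pi_proper(edges, labelling):
    # adjacent vertices must receive distinct products of incident labels
    col = {}
    for (u, v), lab in zip(edges, labelling):
        col[u] = col.get(u, 1) * lab
        col[v] = col.get(v, 1) * lab
    return all(col[u] != col[v] for (u, v) in edges)

def admits(edges, lists):
    return any(pi_proper(edges, lab) for lab in product(*lists))

C6 = [(i, (i + 1) % 6) for i in range(6)]
C8 = [(i, (i + 1) % 8) for i in range(8)]
# n = 6: edges at distance 2 form two triangles, so 2 labels cannot suffice
assert not admits(C6, [[2, 3]] * 6)
# n = 8: edges at distance 2 form two 4-cycles, which are 2-colourable
assert admits(C8, [[2, 3]] * 8)

P7 = [(i, i + 1) for i in range(7)]   # edges e1..e7 of the path of length 7
lists = [[2, 3], [1, 2], [2, 3], [2, 3], [2, 3], [1, 3], [2, 3]]
assert not admits(P7, lists)          # e2 forced to 2, e6 forced to 3, e4 stuck
```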
Then ${\rm ch}_\Pi^*(P_n)>2$ for such a value of $n$, and we have ${\rm ch}_\Pi^*(P_n) = 3$ by Theorem~\ref{theorem:upper-bound-cn}. Let us now consider the remaining cases, i.e., those where $n=3$ or $n$ is even. Let $L$ be a $2$-list assignment of $P_n$. We deduce a p-proper $L$-labelling $\ell$ of $P_n$ in the following way: \begin{itemize} \item If $n=3$, then first assign to $e_2$ a label from $L(e_2)$ different from~$1$, before assigning distinct labels from $L(e_1)$ and $L(e_3)$ to $e_1$ and $e_3$, respectively. Clearly, $\ell$ is p-proper. \item Assume $n$ is even. If $n=2$, then, clearly, we are done when assigning labels from $L(e_1)$ and $L(e_2)$ different from~$1$ to $e_1$ and $e_2$, respectively. So assume $n \geq 4$. We first label the edges $e_1,e_3,\dots,e_{n-1}$ with odd index with labels from their respective lists, in such a way that 1) $\ell(e_{n-1}) \neq 1$, and that 2) no two of these edges at distance~$2$ are assigned the same label. These conditions can clearly be achieved by labelling these edges one by one following the ordering $e_{n-1}, e_{n-3}, \dots, e_1$. We then achieve the same thing for the edges $e_2,e_4,\dots,e_n$ with even index, so that 1) $\ell(e_2) \neq 1$, and that 2) no two of these edges at distance~$2$ are assigned the same label. Again, this can be easily achieved, e.g. by labelling these edges following the ordering $e_2,e_4,\dots,e_n$. By the arguments above, the resulting $\ell$ is p-proper. \qedhere \end{itemize} \end{proof} \subsubsection*{Trees} We now prove an upper bound on ${\rm ch}_\Pi^*$ in the case of trees. The exhibited bound is optimal in general, due to some of the remarks at the end of Subsection~\ref{subsection:remarks-chp}. Even some paths attain the upper bound; recall Theorem~\ref{theorem:cycles}. \begin{theorem}\label{theorem:trees} If $T$ is a nice tree, then ${\rm ch}_\Pi^*(T) \leq 3$. \end{theorem} \begin{proof} The proof is by induction on the number of vertices and edges of $T$.
The base case is when $T$ is a path of length~$2$, in which situation the claim holds by Theorem~\ref{theorem:cycles}. Thus, we can focus on proving the general case. Let $L$ be a $3$-list assignment to the edges of $T$. We can assume that $T$ has no pending path of length at least~$3$, i.e., a path $uvwx$ such that $d(u)=1$, $d(v)=d(w)=2$, and $d(x) \geq 2$. Indeed, assume $T$ has such a path. Let $T'=T-\{u,v\}$. Clearly $T'$ is nice (as otherwise $T$ would be a path, a case for which Theorem~\ref{theorem:cycles} yields the desired conclusion), and thus $T'$ admits a p-proper $L'$-labelling $\ell'$, where $L'$ denotes the restriction of $L$ to the edges of $T'$. To extend $\ell'$ to a p-proper $L$-labelling of $T$, we have to assign to $uv$ and $vw$ labels from their lists, so that no conflict arises. To that aim, we first assign to $vw$ a label different from $1$ and from $\frac{\pi_{\ell'}(x)}{\ell'(xw)}$ so that $w$ does not get in conflict with $x$. Note that this is possible since $|L(vw)|=3$. Note that, now, because $\ell(vw) \neq 1$, whatever label we assign to $uv$, we cannot get a conflict between $u$ and $v$. Thus, when labelling $uv$, we just need to make sure that $v$ does not get in conflict with $w$, which can easily be ensured since $|L(uv)|=3$. We may also assume that $T$ has branching vertices, i.e., vertices with degree at least~$3$. Indeed, if $T$ has no branching vertex, then $T$ is a path, $\Delta(T)=2$, and the claim follows from Theorem~\ref{theorem:cycles}. So assume that $T$ has branching vertices. Root $T$ at any branching vertex $r$. This defines the usual root-to-leaf orientation, through which every non-root vertex has a unique \textit{parent}, i.e., a neighbour that is closer to $r$, and every non-leaf vertex $v$ has \textit{sons}, i.e., neighbours that are farther from $r$, and, more generally, \textit{descendants}, i.e., vertices for which the unique path to $r$ goes through $v$.
Let $u$ be a branching vertex of $T$ that is at the farthest distance from $r$. Note that we have $u = r$ if $r$ is the unique branching vertex of $T$. By this choice, $u$ has at least two descendants, all of which have degree at most~$2$. In other words, the descendants of $u$ form $k \geq 2$ disjoint pending paths, none of which has length more than~$2$, as mentioned earlier. There are then $k=p+q \geq 2$ pending paths attached at $u$ formed by its descendants, where $p \geq 0$ of these paths have length~$2$, while $q \geq 0$ of them have length~$1$. We denote by $v_1,\dots,v_p, w_1, \dots, w_q$ the sons of $u$, where $v_1, \dots, v_p$ belong to pending paths of length~$2$, while $w_1,\dots,w_q$ are leaves. We also denote by $v_1', \dots, v_p'$ the neighbours of $v_1,\dots,v_p$, respectively, different from $u$. Thus, the $v_i$'s have degree~$2$, while the $v_i'$'s and the $w_i$'s have degree~$1$. Lastly, we denote by $t$ the parent of $u$, if it exists (recall that we have $u=r$ when $T$ has only one branching vertex, in which case $u$ has no parent). Let $T'=T-\set{v_1,\dots,v_p,v'_1,\dots,v'_p,w_1,\dots,w_q}$. The tree $T'$ is nice, because either $r$ is a branching vertex (case where $u \neq r$) or $T'$ consists of only one vertex (case where $u=r$), and thus $T'$ admits a p-proper $L'$-labelling $\ell'$, where $L'$ denotes the restriction of $L$ to the edges of $T'$. To extend $\ell'$ to a p-proper $L$-labelling of $T$, we just have to assign labels from their lists to the edges incident to the descendants of $u$, so that no conflict arises. We distinguish several cases, based mainly on the value of $q$. \begin{itemize} \item Suppose that $q =0$. Label every edge $uv_i$ with $i \in \{1, \dots, p-1\}$ with an arbitrary label from $L(uv_i)$ different from $1$. Now, label $uv_p$ with a label from $L(uv_p)$ different from~$1$ so that $u$ does not get in conflict with $t$, if it exists (in case it does not, just assign any label different from $1$ to $uv_p$).
Note that this is possible since $|L(uv_p)|=3$. Lastly, consider every edge $v_iv_i'$. Since $\ell(uv_i) \neq 1$, note that $v_i$ and $v_i'$ cannot get in conflict, whatever label from $L(v_iv_i')$ is assigned to $v_iv_i'$. Thus, when labelling $v_iv_i'$, we just need to ensure that $v_i$ and $u$ do not get in conflict, which is possible since $|L(v_iv_i')|=3$. \item Suppose now that $q=1$. Recall that $p \geq 1$ since $k=p+q \geq 2$. We start by labelling, for every $i \in \{1, \dots, p-1\}$, the edge $uv_i$ with any label different from~$1$, chosen from $L(uv_i)$. We then consider $uv_p$, and assign to this edge a label from $L(uv_p)$ different from~$1$ so that the resulting partial product of $u$ is different from~$1$. Note that this is possible since $|L(uv_p)|=3$. Now, note that, by this choice of $\ell(uv_p)$, no matter what $\ell(uw_1)$ is, we cannot get a conflict between $u$ and $w_1$. We then assign to $uw_1$ a label from $L(uw_1)$ so that $u$ does not get in conflict with $t$ (if it exists). Lastly, we consider every $i \in \{1, \dots, p\}$, and, to every edge $v_iv_i'$, we assign a value from $L(v_iv_i')$ so that $v_i$ and $u$ do not get in conflict. This results in $\ell$ being p-proper. Recall, in particular, that no $v_i$ and $v_i'$ can be in conflict since $\ell(uv_i)\neq 1$. \end{itemize} Suppose now that $q \geq 2$. We start by stating the following general claim: \begin{claim}\label{claim:star-cn} Let $S$ be a star with center $u$ and $q+1 \geq 3$ leaves $t,w_1,\dots,w_q$. Assume we have a partial labelling $\ell'$ of $S$ in which $ut$ is the only labelled edge, with label $a$, and that $t$ has (virtual) product $\pi_{\ell'}(t)=A$. If $L$ is a $3$-list assignment to the $uw_i$'s, then, for every $i \in \{1,\dots,q\}$, we can assign a label from $L(uw_i)$ to $uw_i$, so that $\ell'$ is extended to a labelling $\ell$ of $S$ verifying $\pi_\ell(u) \not \in \{A,\pi_\ell(w_1),\dots,\pi_\ell(w_q)\}$.
\end{claim} \begin{proofclaim} Suppose first that $q=2$. We first assign to $uw_1$ a label from $L(uw_1)$ different from $1/a$. This way, no matter what label is assigned to $uw_2$, note that $u$ and $w_2$ cannot get in conflict. We now assign a label from $L(uw_2)$ to $uw_2$ so that the resulting product of $u$ is different from $A$ and the product of $w_1$. This is possible since $|L(uw_2)|=3$. Assume now that $q \geq 3$. We distinguish the following cases: \begin{itemize} \item Assume, w.l.o.g., that the three values in $L(uw_1)$ have pairwise distinct absolute values. To each edge $uw_i$, we associate a variable $x_i$, and we consider the polynomial $$P(x_1,\dots,x_q)= \left(a \prod_{i=1}^q x_i - A \right) \cdot \prod_{i=1}^q \left(a \prod_{j=1}^q x_j-x_i \right).$$ For every $i \in \{1,\dots,q\}$, we set $y_i=\log x_i$. Now the polynomial $P$ becomes equivalent to $$P'(y_1,\dots,y_q)=\left(\log(a)+\sum_{i=1}^q y_i - \log(A)\right) \cdot \prod_{i=1}^q \left(\log(a) + \sum_{j=1}^q y_j - y_i\right).$$ Note that, in the expansion of $P'$, the monomial $y_1^2 y_2 \dots y_q$ has strictly positive coefficient. Thus, by the Combinatorial Nullstellensatz, we can assign values to the $y_i$'s so that $P'$ does not vanish, as long as we are given a set of at least three possible distinct values for $y_1$, and a set of at least two possible distinct values for each of $y_2,\dots,y_q$. In turn, this means we can assign values to the $x_i$'s so that $P$ does not vanish, as long as we have a set of at least three possible values with pairwise distinct absolute values for $x_1$, and a set of at least two possible values with distinct absolute values for each of $x_2,\dots,x_q$. Recall that we made the assumption that the three values in $L(uw_1)$ have pairwise distinct absolute values, while, for every $i \in \{2,\dots,q\}$, there must be at least two values in $L(uw_i)$ with distinct absolute values, since $|L(uw_i)|=3$.
Thus, $\ell'$ can correctly be extended to $\ell$, in the desired way. \item Now assume that every $L(uw_i)$ is of the form $\{\alpha_i,\beta_i,-\beta_i\}$, where $\alpha_i$ and $\beta_i$ are distinct values with the same sign. Let us start from the labelling $\psi$ of $S$ obtained from $\ell'$ after setting $\psi(uw_i)=\alpha_i$ for every $i \in \{1,\dots,q\}$. We denote by $s \in \{-,+\}$ the sign of $\pi_\psi(u)$, while, for every sign $\epsilon \in \{-,+\}$, we denote by $W^\epsilon$ the set of vertices $w_i$ for which the sign of $\pi_\psi(w_i)$ (thus, of $\alpha_i$ and $\beta_i$) is $\epsilon$. Note that $W^-$ and $W^+$ partition the $w_i$'s. To conclude the proof, we consider two last main cases. \begin{itemize} \item Suppose that $s=+$ and $W^- = \emptyset$. We start by assigning label $-\beta_1$ from $L(uw_1)$ to $uw_1$. Note that, as long as each $uw_i$ with $i \in \{2,\dots,q\}$ is assigned a label from $\{\alpha_i,\beta_i\}$, we cannot get a conflict between $u$ and $w_i$ due to their products having different signs. Thus, under that convention, the only conflicts we must pay attention to are along the edges $uw_1$ and, possibly, $ut$ (in case $A$ is negative). Here, we assign a variable $x_i$ to each edge $uw_i$ with $i \in \{2, \dots, q\}$, and consider $$P(x_2,\dots,x_q)= \left(-\beta_1 a \prod_{i=2}^q x_i - A\right) \cdot \left(-\beta_1 a \prod_{i=2}^q x_i - \beta_1\right).$$ For every $i \in \{2,\dots,q\}$, we again set $y_i=\log x_i$. Then $P$ is equivalent to $$P'(y_2,\dots,y_q)= \left(\log(-\beta_1a)+ \sum_{i=2}^q y_i - \log(A)\right) \cdot \left(\log(-\beta_1a)+ \sum_{i=2}^q y_i - \log(\beta_1)\right).$$ Recall that $q \geq 3$. Then, whatever $q$ is, in the expansion of $P'$ the monomial $y_2y_3$ has strictly positive coefficient.
The Combinatorial Nullstellensatz then implies that we can assign values to $y_2,\dots,y_q$ so that $P'$ does not vanish, assuming we have at least two values to choose from for each of $y_2$ and $y_3$, and at least one value to choose from for each of $y_4, \dots, y_q$. From this, we deduce that we can assign values to $uw_2,\dots,uw_q$ from $\{\alpha_2,\beta_2\}, \{\alpha_3,\beta_3\}, \{\alpha_4\}, \{\alpha_5\}, \dots, \{\alpha_q\}$, respectively, so that $u$ is in conflict with none of $w_1$ and $t$. Recall that the resulting sign of $\pi_\ell(u)$ is negative, while the sign of the product of every vertex $w_i$ with $i \in \{2,\dots,q\}$ is positive. Thus, these vertices also cannot be in conflict. \item Suppose that $s=+$ and $W^- \neq \emptyset$. Assume w.l.o.g. that $w_1 \in W^-$. Recall that, as long as $u$ and $w_1$ get products with different signs by a labelling, they cannot be in conflict. Thus, we here get our conclusion through the Combinatorial Nullstellensatz, by not modelling the possible conflict between $u$ and $w_1$. The precise details are as follows. For every $i \in \{1,\dots,q\}$, let $x_i$ be a variable associated to $uw_i$. We consider the polynomial $$P(x_1,\dots,x_q)= \left(a \prod_{i=1}^q x_i - A \right) \cdot \prod_{i=2}^q \left(a \prod_{j=1}^q x_j-x_i \right).$$ For every $i \in \{1,\dots,q\}$, we set $y_i=\log x_i$. Then $P$ is equivalent to $$P'(y_1,\dots,y_q)=\left(\log(a)+\sum_{i=1}^q y_i - \log(A)\right) \cdot \prod_{i=2}^q \left(\log(a) + \sum_{j=1}^q y_j - y_i\right).$$ In the expansion of $P'$, the monomial $y_1 \dots y_q$ has strictly positive coefficient, and, thus, by the Combinatorial Nullstellensatz, we can assign labels from $\{\alpha_1,\beta_1\},\dots,\{\alpha_q,\beta_q\}$ to $uw_1,\dots,uw_q$, respectively, resulting in a labelling $\ell$ of $S$ where $u$ gets in conflict with none of $w_2,\dots,w_q,t$. Proceeding that way, recall that the sign of $\pi_\ell(u)$ is positive, while that of $\pi_\ell(w_1)$ is negative.
Then $u$ and $w_1$ also cannot be in conflict, and $\ell$ is p-proper. \end{itemize} \end{itemize} To conclude the proof, let us point out that the cases where $s=-$ can be treated in a symmetric way, by considering whether $W^+$ is empty or not. \end{proofclaim} We are now ready to conclude the proof of Theorem~\ref{theorem:trees}. Recall that we have obtained a labelling $\ell'$ of $T'=T-\set{v_1,\dots,v_p,v'_1,\dots,v'_p,w_1,\dots,w_q}$ by induction, and that we are in the case where $u$ is adjacent to $q \geq 2$ leaves (and, possibly, $p$ $v_i$'s and one parent $t$). We start extending $\ell'$ to $T$ by considering every edge $uv_i$ (if such edges exist) and assigning to it a label from $L(uv_i)$ different from~$1$. This is clearly possible, since $|L(uv_i)|=3$. We now apply Claim~\ref{claim:star-cn} to the $uw_i$'s to get all edges incident to $u$ labelled, in such a way that $u$ is not in conflict with any of $t$ (if it exists; if it does not, then note that the claim applies in a very similar way) and the $w_i$'s. The main difference here is that, although we do not have to care about possible conflicts between $u$ and the $v_i$'s for now, the claim must be applied taking into account the contribution of the $uv_i$'s to the product of $u$. Lastly, it remains to label every $v_iv_i'$ with a label from $L(v_iv_i')$ so that $v_i$ and $u$ do not get into conflict, which is possible since we have three possible labels. Recall in particular that $v_i$ and $v_i'$ cannot be in conflict since $\ell(uv_i) \neq 1$. In the end, $\ell$ is p-proper, as desired. \end{proof} \subsubsection*{Planar graphs with large girth} Recall that a \textit{planar graph} is a graph that can be embedded in the plane so that no two edges cross, and that, for any graph $G$, the \textit{girth} $g(G)$ of $G$ refers to the length of a shortest cycle. In case $G$ has no cycle, we set $g(G)=\infty$.
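Returning briefly to the Combinatorial Nullstellensatz steps in the proof of Claim~\ref{claim:star-cn}: the positivity of the monomial coefficients can be checked mechanically for small $q$ by expanding only the top-degree homogeneous parts of the polynomials $P'$ (constant terms do not affect top-degree coefficients). A sketch in Python, with helper names ours:

```python
from collections import defaultdict

def mul(P, Q):
    # product of two sparse polynomials {exponent tuple: coefficient}
    R = defaultdict(int)
    for ea, ca in P.items():
        for eb, cb in Q.items():
            R[tuple(x + y for x, y in zip(ea, eb))] += ca * cb
    return dict(R)

def linear(coeffs):
    # the linear form sum_i coeffs[i]*y_i as a sparse polynomial
    q = len(coeffs)
    return {tuple(int(j == i) for j in range(q)): c
            for i, c in enumerate(coeffs) if c}

def top_part(q, start=0):
    # top-degree part of P': (y_1+...+y_q) times the factors
    # (sum_j y_j - y_i) for the (0-indexed) variables i = start, ..., q-1
    P = linear([1] * q)
    for i in range(start, q):
        P = mul(P, linear([1 - int(j == i) for j in range(q)]))
    return P

for q in (3, 4, 5):
    # first case of the proof: coefficient of y_1^2 y_2 ... y_q,
    # with factors over all i
    assert top_part(q)[(2,) + (1,) * (q - 1)] > 0
    # last case of the proof: coefficient of y_1 y_2 ... y_q,
    # with factors over i >= 2 only
    assert top_part(q, start=1)[(1,) * q] > 0
```

All factors here have nonnegative coefficients, so exhibiting a single positive contribution already yields positivity for general $q$; the code confirms the small cases exactly.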
Planar graphs with large enough girth are known to be $2$-degenerate and to have low maximum average degree. Thus, the third and fifth items of Corollary~\ref{corollary:existing-bounds} establish~$5$ as a constant upper bound on ${\rm ch}_\Pi^*(G)$ when $G$ is a nice planar graph with large girth. In what follows, we improve this upper bound down to~$4$ when $g(G) \geq 16$, getting closer to the List Multiplicative 1-2-3 Conjecture for this class of graphs. Our proof involves arguments that are reminiscent of those used to prove Theorem~\ref{theorem:trees}, combined with the following structural result: \begin{theorem}[e.g. Ne\v{s}et\v{r}il, Raspaud, Sopena~\cite{NRS97}]\label{theorem:threads} If $G$ is a planar graph with girth $g(G) \geq 5\ell +1$ for some $\ell \geq 1$, then either: \begin{itemize} \item $\delta(G)=1$, or \item $G$ contains an \textit{$\ell$-thread}, i.e., a path $uv_1\dots v_\ell w$ where $d(u),d(w) \geq 2$, and $d(v_i)=2$ for every $i \in \{1,\dots,\ell\}$. \end{itemize} \end{theorem} We are now ready to prove our result. \begin{theorem}\label{theorem:planar-girth16} If $G$ is a nice planar graph with girth $g(G) \geq 16$, then ${\rm ch}_\Pi^*(G) \leq 4$. \end{theorem} \begin{proof} Assume the claim is wrong, and let $G$ be a minimal counterexample to the claim. We may assume that $G$ is connected, and, due to Theorems~\ref{theorem:cycles} and~\ref{theorem:trees}, that $\Delta(G) \geq 3$ and that $G$ is not a tree. Let $L$ be a $4$-list assignment to the edges of $G$. We prove the result by contradicting the existence of $G$, i.e., by showing that $G$ admits a p-proper $L$-labelling, whatever $L$ is. If $\delta(G) \geq 2$, then, by Theorem~\ref{theorem:threads}, we can find a $3$-thread $uv_1v_2v_3w$ in $G$. In that case, we consider $G'=G-v_2$.
Note that $G'$ may consist of up to two connected components, each of which has at least two edges (since $d(u),d(w) \geq 2$, by the assumption on $\delta(G)$) and girth at least~$16$ (in case there is only one connected component, $G'$ might be a tree; in that case, $g(G')=\infty$, and the girth condition remains true). So $G'$ is nice and planar, and, by minimality of $G$, there is a p-proper $L'$-labelling $\ell'$ of $G'$, where $L'$ denotes the restriction of $L$ to the edges of $G'$. To obtain a contradiction, it now suffices to extend $\ell'$ to a p-proper $L$-labelling of $G$, and, for this, we just have to assign labels from $L(v_1v_2)$ and $L(v_2v_3)$ to $v_1v_2$ and $v_2v_3$, respectively, so that no conflict arises. This can clearly be done since $|L(v_1v_2)|=|L(v_2v_3)| = 4$, by first assigning to $v_1v_2$ a label different from $\ell'(v_3w)$ for which $v_1$ and $u$ get different partial products, and then assigning to $v_2v_3$ a label so that $v_1$ and $v_2$ are not in conflict, and similarly for $v_3$ and $w$. We may thus assume that $\delta(G)=1$. Since $G$ is not a tree, this means that, by repeatedly removing vertices of degree~$1$ while there are some, we end up with a planar connected graph $G^-$ such that $\delta(G^-) \geq 2$ and $g(G^-) \geq 16$. More precisely, for every $v \in V(G) \cap V(G^-)$, we can denote by $T_v$ the pending tree rooted at $v$ in $G$, which, if $d_G(v)=d_{G^-}(v)$, is reduced to the single vertex $v$. Then $G^-$ is obtained from $G$ by contracting every $T_v$ to $v$. For every $v \in V(G) \cap V(G^-)$, we deal, in $G$, with $T_v$ through the terminology introduced in the proof of Theorem~\ref{theorem:trees} (in particular, the notions of parent, son, descendant and branching vertex have the exact same meaning). Since $g(G^-) \geq 16$, by Theorem~\ref{theorem:threads} we deduce that $G^-$ has a $3$-thread $P=uv_1v_2v_3w$.
Note that $P$ also exists back in $G$, the difference being that $v_1,v_2,v_3$ might each be the root of a pending tree (denoted $T_{v_1},T_{v_2},T_{v_3}$, respectively, following our terminology) that might have edges. In case we have $V(T_{v_i})=\{v_i\}$ for every $i \in \{1,2,3\}$, then note that $P$ is actually a $3$-thread in $G$, in which case a contradiction can be obtained in a similar way as in the previous case $\delta(G) \geq 2$. Thus, in what follows, we assume that some of $T_{v_1},T_{v_2},T_{v_3}$ are not reduced to a single vertex. By arguments similar to some used in the proof of Theorem~\ref{theorem:trees}, we may assume that none of $T_{v_1},T_{v_2},T_{v_3}$ has 1) a non-root branching vertex, or 2) a pending path of length at least~$3$ (recall, in particular, that in the current context there is even more room for labelling extensions, due to $L$ being a $4$-list assignment). This means that each $T_{v_i}$ is a subdivided star with center $v_i$, where the pending paths attached to $v_i$ (if any) have length~$1$ or $2$. We start by handling a very particular case, which is when every $T_{v_i}$ has only one edge $v_iv_i'$, i.e., is a star with a single edge $v_iv_i'$. In this case, we consider $G'=G-v_2$. A p-proper $L'$-labelling of $G'$ (where, again, $L'$ denotes the restriction of $L$ to $G'$), which exists by minimality of $G$, can then be extended to a p-proper $L$-labelling of $G$, a contradiction, by first labelling $v_1v_2$ with a label from $L(v_1v_2)$ so that no conflict between $v_1$ and its two neighbours different from $v_2$ arises, then labelling $v_2v_3$ with a label from $L(v_2v_3)$ so that 1) no conflict between $v_3$ and its two neighbours different from $v_2$ arises, and 2) $v_2$ gets partial product different from~$1$; and lastly labelling the edge $v_2v_2'$ of $T_{v_2}$ with a label from $L(v_2v_2')$ so that no conflict between $v_2$ and its two neighbours different from $v_2'$ arises.
Recall, in particular, that $v_2$ and $v_2'$ cannot be in conflict due to how $v_2v_3$ was labelled. Note also that lists of four labels are indeed sufficient to achieve this whole process. In the more general case, let us consider the graph $G'=G-(V(T_{v_1}) \setminus \{v_1\})-V(T_{v_2})-(V(T_{v_3})\setminus\{v_3\})$ (obtained by removing the non-root vertices of $T_{v_1}$ and $T_{v_3}$, and the whole of $T_{v_2}$). By arguments used earlier in the case where $\delta(G) \geq 2$, there is a p-proper $L'$-labelling $\ell'$ of $G'$, where $L'$ denotes the restriction of $L$ to the edges of $G'$. Our goal, to get a final contradiction, is to extend $\ell'$ in a p-proper way to the edges $v_1v_2$, $v_2v_3$ and those in $T_{v_1},T_{v_2},T_{v_3}$, assigning labels from their respective lists, so that a p-proper $L$-labelling of $G$ results. We start by assigning labels from $L(v_1v_2)$ and $L(v_2v_3)$ to $v_1v_2$ and $v_2v_3$, respectively, in such a way that, for the resulting partial products of $v_1,v_2,v_3$, 1) $v_2$ is in conflict with none of $v_1$ and $v_3$, 2) $v_1$ is not in conflict with $u$, 3) $v_3$ is not in conflict with $w$, and 4) no vertex $v_i$ among $v_1,v_2,v_3$ for which $T_{v_i}$ contains only one edge gets partial product~$1$ as a result. This is possible to achieve since $|L(v_1v_2)|=|L(v_2v_3)|=4$. More precisely, this can be achieved by labelling $v_1v_2$ first and $v_2v_3$ second if $T_{v_1}$ has only one edge, or by labelling $v_2v_3$ first and $v_1v_2$ second otherwise. Recall, in particular, that we have treated separately the case where all of $T_{v_1},T_{v_2},T_{v_3}$ have only one edge, so we are not in that case; the fourth condition must thus be fulfilled for at most two of the $v_i$'s. It now remains to label the edges from the $T_{v_i}$'s.
We achieve this by considering $T_{v_1}$, $T_{v_2}$ and $T_{v_3}$ in turn, so that, once every $T_{v_i}$ has been treated, no vertex in $V(T_{v_1}) \cup \dots \cup V(T_{v_i})$ is involved in conflicts, and none of the vertices in $V(T_{v_{i+1}}) \cup \dots \cup V(T_{v_3})$ had its product altered. This way, the desired p-proper $L$-labelling of $G$ will result once $T_{v_3}$ has been treated. In what follows, we focus on $T_{v_1}$, but the arguments apply similarly for $T_{v_2}$ and $T_{v_3}$. Recall that $T_{v_1}$ consists of some (possibly none) pending paths of length~$1$ or~$2$ attached to $v_1$. Let us assume that $p \geq 0$ of these paths have length~$2$, while $q \geq 0$ of them have length~$1$. We denote by $b_1,\dots,b_p$ the sons of $v_1$ that belong to the pending paths of length~$2$, while we denote by $c_1,\dots,c_q$ those from the pending paths of length~$1$. Finally, for every $i \in \{1,\dots,p\}$, we denote by $b_i'$ the son of $b_i$ in $T_{v_1}$. By how $v_1v_2$ was labelled earlier, note that we already have the desired conclusion around $v_1$ if $p=q=0$. We thus focus on the cases where $p+q>0$. \begin{itemize} \item The cases where $q \in \{0,1\}$ can be treated quite similarly as the cases $q=0$ and $q=1$ in the proof of Theorem~\ref{theorem:trees}. Namely, we first label the edges $v_1b_1,\dots,v_1b_{p-1}$ (if any) with labels different from~$1$ from their respective lists. If $q=0$, then we label $v_1b_p$ with a label different from~$1$ from its list, making sure that the resulting product of $v_1$ is different from that of $u$ and $v_2$. Otherwise, if $q=1$, then we label $v_1b_p$ with a label different from~$1$ from its list, making sure that the resulting partial product of $v_1$ is not equal to~$1$ (if $p=0$, then recall that this property is already verified at $v_1$, due to how $v_1v_2$ and $v_2v_3$ have been labelled).
Still in the case where $q=1$, this guarantees that $v_1$ and $c_1$ cannot get in conflict no matter how $v_1c_1$ is labelled; thus, we can label $v_1c_1$ with a label from its list so that $v_1$ does not get in conflict with $u$ and $v_2$. Note that lists of size~$4$ are sufficient to achieve these conditions in all cases. We lastly label every edge $b_ib_i'$ (if any) with a label from its list, making sure that $b_i$ does not get in conflict with $v_1$. Because $v_1b_i$ was assigned a label different from~$1$, recall that $b_i$ and $b_i'$ cannot be in conflict. \item The case where $q=2$ can be treated quite similarly. Start by labelling every edge $v_1b_i$ (if any) with a label different from~$1$ from its list. Then, label $v_1c_1$ with a label from its list, so that the resulting partial product of $v_1$ is not equal to~$1$. Lastly, label $v_1c_2$ with a label from its list, so that $v_1$ gets in conflict with none of $u$, $v_2$ and $c_1$. Note that this is possible, since we do not have to care about a possible conflict between $v_1$ and $c_2$, and $|L(v_1c_2)|=4$. To conclude, we can finally label the $b_ib_i'$'s just as in the previous case. \end{itemize} The general case is when $q \geq 3$. We need a generalisation of Claim~\ref{claim:star-cn} to the current context. \begin{claim}\label{claim:star-planar} Let $S$ be a star with center $u$ and $q+2 \geq 5$ leaves $t,t',w_1,\dots,w_q$. Assume we have a partial labelling $\ell'$ of $S$ in which $ut$ and $ut'$ are the only labelled edges, with labels $a$ and $a'$, respectively, and that $t$ and $t'$ have (virtual) products $\pi_{\ell'}(t)=A$ and $\pi_{\ell'}(t')=A'$. If $L$ is a $4$-list assignment to the $uw_i$'s, then, for every $i \in \{1,\dots,q\}$, we can assign a label from $L(uw_i)$ to $uw_i$, so that $\ell'$ is extended to a labelling $\ell$ of $S$ verifying $\pi_\ell(u) \not \in \{A, A', \pi_\ell(w_1),\dots,\pi_\ell(w_q)\}$.
\end{claim} \begin{proofclaim} Note that each $L(uw_i)$ contains two, three or four values with pairwise distinct absolute values. We consider several cases based on that fact. \begin{itemize} \item Assume, w.l.o.g., that the four values in $L(uw_1)$ have pairwise distinct absolute values. To each edge $uw_i$, we associate a variable $x_i$, and we consider the polynomial $$P(x_1,\dots,x_q)= \left(aa' \prod_{i=1}^q x_i - A \right) \cdot \left(aa' \prod_{i=1}^q x_i - A'\right) \cdot \prod_{i=1}^q \left(aa' \prod_{j=1}^q x_j-x_i \right).$$ For every $i \in \{1,\dots,q\}$, we set $y_i=\log x_i$. Then $P$ becomes equivalent to \begin{equation*} \begin{split} P'(y_1,\dots,y_q)=&\left(\log(aa')+\sum_{i=1}^q y_i - \log(A)\right) \cdot \left(\log(aa')+\sum_{i=1}^q y_i - \log(A')\right) \\ & \cdot \prod_{i=1}^q \left(\log(aa') + \sum_{j=1}^q y_j - y_i\right). \end{split} \end{equation*} In the expansion of $P'$, the monomial $y_1^3 y_2 \dots y_q$ has strictly positive coefficient. Thus, by the Combinatorial Nullstellensatz, we can assign values to the $y_i$'s so that $P'$ does not vanish, as long as we are given a set of at least four possible distinct values for $y_1$, and a set of at least two possible distinct values for each of $y_2,\dots,y_q$. Regarding $P$, this implies we can assign values to the $x_i$'s so that $P$ does not vanish, assuming we have a set of at least four possible values with pairwise distinct absolute values for $x_1$, and a set of at least two possible values with distinct absolute values for each of $x_2,\dots,x_q$. This is met in the current case, since $L(uw_1)$ is assumed to have four values with pairwise distinct absolute values, and $|L(uw_i)|=4$ for every $i \in \{2,\dots,q\}$. Thus, $\ell'$ can be extended to $\ell$ as desired. \item Assume now that, w.l.o.g., both $L(uw_1)$ and $L(uw_2)$ include three values with pairwise distinct absolute values.
Then the same conclusion as in the previous case can be reached from considering the monomial $y_1^2y_2^2y_3\dots y_q$ in the expansion of $P'$. \item We can thus assume that none of the two previous cases applies, i.e., that, w.l.o.g., $L(uw_1)$ includes two or three values with pairwise distinct absolute values, while $L(uw_2),\dots,L(uw_q)$ each include exactly two values with pairwise distinct absolute values. In other words, we have $L(uw_i)=\{\alpha_i,-\alpha_i,\beta_i,-\beta_i\}$ for every $i \in \{2,\dots,q\}$, for some distinct $\alpha_i,\beta_i$, while $L(uw_1)=\{\alpha_1,-\alpha_1,\beta_1,-\beta_1\}$ or $L(uw_1)=\{\alpha_1,-\alpha_1,\beta_1,\gamma_1\}$, for some distinct $\alpha_1,\beta_1,\gamma_1$. To conclude the proof, we consider a few more cases: \begin{itemize} \item Assume first that $A$ and $A'$ have the same sign $s \in \{-,+\}$. For every $i \in \{1,\dots,q-2\}$, let us assign to $uw_i$ a label with sign $s$ from its list. Then: \begin{itemize} \item If $s$ and the sign of the partial product of $u$ are the same, then we assign to $uw_{q-1}$ a label with sign $s$ from its list, chosen so that the partial product of $u$ becomes different from~$1$. Note that this is possible, since $L(uw_{q-1})$ contains two values with sign $s$. This guarantees that $u$ and $w_q$ cannot be in conflict, whatever the label of $uw_q$ is. We then assign to $uw_{q}$ a label with sign $-s$ from its list, so that all edges are labelled and no conflict remains. In particular, $u$ gets product with sign $-s$, while only $w_q$ has this property. \item Otherwise, i.e., if $s$ and the sign of the partial product of $u$ are different, then we assign to $uw_{q-1}$ and $uw_q$ a label with sign $s$ from their lists. As a result, no conflict remains, since $u$ is the only vertex with product being of sign $-s$. \end{itemize} \item Now assume that $A$ and $A'$ have different signs, say $A$ is positive while $A'$ is negative.
We here start by assigning, for every $i \in \{1,\dots,q-2\}$, a positive label to $uw_i$ from its list $L(uw_i)$. Now: \begin{itemize} \item If currently $u$ has negative product, then we assign to $uw_{q-1}$ and $uw_q$ a positive label from their respective lists, making sure that the product of $u$ becomes different from $A'$. This is possible since $L(uw_{q-1})$ and $L(uw_q)$ have two positive values each. Since only $u$ and $t'$ have negative product, no conflict remains. \item Otherwise, i.e., if $u$ currently has positive product, then we first assign a positive label to $uw_{q-1}$ from its list, chosen so that the current product of $u$ does not become equal to~$1$. This is possible, since $L(uw_{q-1})$ contains two positive values. This guarantees that $u$ and $w_q$ cannot get in conflict. We then assign to $uw_q$ a negative label from $L(uw_q)$, chosen so that $u$ gets product different from~$A'$. This is possible since $L(uw_q)$ contains two negative values. Since only $A'$ and the products of $u$ and $w_q$ are negative, no conflict remains. \end{itemize} \end{itemize} \end{itemize} In all cases, we end up with the desired labelling $\ell$, which concludes the proof. \end{proofclaim} We can now conclude the case $q \geq 3$ of the proof of Theorem~\ref{theorem:planar-girth16}, thus proving the whole statement. We start by labelling every edge $v_1b_i$ (if any) with any label different from~$1$ from its list $L(v_1b_i)$. We now apply Claim~\ref{claim:star-planar} to get all $v_1c_i$'s labelled with labels from their lists, so that $v_1$ is not in conflict with any of $u$, $v_2$ and the $c_i$'s.
This can be done by applying Claim~\ref{claim:star-planar} with $v_1$, $u$ and $v_2$ playing the role of $u$, $t$ and $t'$, respectively, $\pi_{\ell'}(u)$ and $\pi_{\ell'}(v_2)$ playing the role of $A$ and $A'$, respectively, $\ell'(uv_1)\prod_{i=1}^p \ell(v_1b_i)$ and $\ell'(v_1v_2)$ playing the role of $a$ and $a'$, respectively, and the $c_i$'s playing the role of the $w_i$'s. It remains to label the $b_ib_i'$'s (if any), and, for each such edge $b_ib_i'$, it suffices to assign a label from its list so that $b_i$ and $v_1$ do not get in conflict. Recall that we do not have to worry about a possible conflict between $b_i$ and $b_i'$, since $\ell(v_1b_i) \neq 1$. \end{proof} \subsubsection{Subcubic graphs} We now consider subcubic graphs, i.e., graphs with maximum degree~$3$. Note that, at this point, the best upper bound we have on ${\rm ch}_\Pi^*$ for these graphs is~$5$, obtained from Theorem~\ref{theorem:upper-bound-cn}. We get one step closer to the List Multiplicative 1-2-3 Conjecture for this class of graphs, by lowering the upper bound down to~$4$ in the next result. \begin{theorem}\label{theorem:subcubic} If $G$ is a nice subcubic graph, then ${\rm ch}_\Pi^*(G) \leq 4$. \end{theorem} \begin{proof} Assume the claim is wrong, and consider a minimal counterexample $G$ to the claim. Clearly, $G$ is connected. Let $L$ be a $4$-list assignment to the edges of $G$. We prove below that $G$ admits a p-proper $L$-labelling whatever $L$ is, a contradiction. To that aim, we first show that $G$ is cubic: \begin{itemize} \item Assume first that $\delta(G)=1$, and consider $u$ a degree-$1$ vertex of $G$ with unique neighbour $v$. \begin{itemize} \item Assume first that $d(v)=2$, and let $w$ denote the second neighbour of $v$. Set $G'=G-\{u,v\}$. We can assume that $G'$ is nice, as otherwise $G$ would be the path of length~$3$, in which case even ${\rm ch}_\Pi^*(G) \leq 3$ holds by Theorem~\ref{theorem:cycles}, a contradiction.
Then, by minimality of $G$, there is a p-proper $L'$-labelling $\ell'$ of $G'$, where $L'$ denotes the restriction of $L$ to the edges of $G'$. We extend $\ell'$ to a p-proper $L$-labelling of $G$, getting a contradiction, by correctly assigning labels to $uv$ and $vw$ from their respective lists. We first label $vw$, by assigning a label from $L(vw)$ that is different from~$1$, and so that $w$ does not get in conflict with any of its at most two other neighbours different from $v$. Note that this is possible since $|L(vw)|=4$. We can now extend the labelling to $uv$ by assigning a label from $L(uv)$ so that $v$ does not get in conflict with $w$. Note that by how $vw$ was labelled, $u$ and $v$ cannot get in conflict. \item Assume now that $d(v)=3$, and let $w_1,w_2$ denote the two neighbours of $v$ different from $u$. Set $G'=G -\{u,v\}$. We can assume that $G'$ is nice, as otherwise either 1) one of the $w_i$'s is a degree-$2$ vertex adjacent to a degree-$1$ vertex, or 2) $w_1w_2$ exists and both $w_1$ and $w_2$ have degree~$2$. In the former case, we fall into the previous case (where $d(v)=2$), which we have already handled. In the latter case, $G$ has only four edges and the claim can be checked by hand. So $G'$ is nice, and, by minimality of $G$, there is a p-proper $L'$-labelling $\ell'$ of $G'$, where $L'$ denotes the restriction of $L$ to the edges of $G'$. To extend it to one of $G$, thus getting a contradiction, we proceed as follows. For every $i \in \{1,2\}$, note that there are at least two values $a_i,b_i \in L(vw_i)$ that can be assigned to $vw_i$ without causing any conflict between $w_i$ and its at most two neighbours different from $v$. We assign labels to $vw_1$ and $vw_2$ from $\{a_1,b_1\}$ and $\{a_2,b_2\}$, respectively, so that the product of these two labels is different from~$1$. It then suffices to assign to $uv$ a label from~$L(uv)$ so that $v$ gets in conflict with neither $w_1$ nor $w_2$, which is possible since $|L(uv)|=4$.
Again, $u$ and $v$ cannot be in conflict due to how $vw_1$ and $vw_2$ have been labelled. \end{itemize} \item Assume now that $\delta(G)=2$, and consider $u$ a degree-$2$ vertex of $G$ with neighbours $v_1,v_2$. By the minimum degree assumption, each of $v_1$ and $v_2$ has one or two neighbours different from $u$. We here consider $G'=G-u$. We can assume that $G'$ is nice, as, because $\delta(G)=2$, otherwise it would mean that $v_1v_2$ is the only other edge, thus that $G$ is $C_3$, the cycle of length~$3$, in which case ${\rm ch}_\Pi^*(G) \leq 3$ holds by Theorem~\ref{theorem:cycles}, a contradiction. So $G'$ admits a p-proper $L'$-labelling $\ell'$, where $L'$ is the restriction of $L$ to the edges of $G'$. We show that this p-proper labelling can be extended to $uv_1$ and $uv_2$ by assigning labels from their lists, thereby getting a contradiction. Let $x_1,x_2$ be variables associated to $uv_1$ and $uv_2$, respectively. Let us denote by $y_1,y_2$ the values $\pi_{\ell'}(v_1),\pi_{\ell'}(v_2)$, respectively. Let us now consider the polynomial $$P(x_1,x_2)=(x_1x_2-x_1y_1) \cdot (x_1x_2-x_2y_2) \cdot \prod_{w \in N(v_1) \setminus \{u\}}(x_1y_1-\pi_{\ell'}(w)) \cdot \prod_{w \in N(v_2) \setminus \{u\}}(x_2y_2-\pi_{\ell'}(w)).$$ If $x_1$ and $x_2$ can be assigned values in $L(uv_1)$ and $L(uv_2)$, respectively, so that $P$ does not vanish, then we get a p-proper $L$-labelling of $G$. Since $x_1$ and $x_2$ are the only variables of $P$, it is easy to see that, in the expansion of $P$, the monomial $M$ with largest degree is either $x_1^4x_2^4$ (when $d(v_1)=d(v_2)=3$), $x_1^3x_2^4$ (when $d(v_1)=2$ and $d(v_2)=3$), $x_1^4x_2^3$ (when $d(v_1)=3$ and $d(v_2)=2$) or $x_1^3x_2^3$ (when $d(v_1)=d(v_2)=2$). 
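For instance, when $d(v_1)=d(v_2)=3$, the coefficient of $M=x_1^4x_2^4$ can be made explicit. Writing the first two factors of $P$ as $x_1(x_2-y_1)$ and $x_2(x_1-y_2)$, the unique term of total degree~$8$ in the expansion of $P$ is obtained by taking the top-degree term of each factor, namely
$$x_1x_2 \cdot x_1x_2 \cdot (x_1y_1)^2 \cdot (x_2y_2)^2 = y_1^2y_2^2\, x_1^4x_2^4.$$
Since $\ell'$ is p-proper, none of its labels is~$0$ (a $0$-labelled edge would give both its endpoints product~$0$), so $y_1,y_2 \neq 0$ and the coefficient $y_1^2y_2^2$ of $M$ is nonzero; the three other cases can be checked similarly.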
In all cases, since $M$ has nonzero coefficient, by the Combinatorial Nullstellensatz, desired values for $x_1$ and $x_2$ can be chosen from lists of size at least~$5$, thus from lists of size at least~$4$ if we are guaranteed that they do not include~$0$ (due to the first two factors of $P$). From this, we deduce that a p-proper $L$-labelling of $G$ can be obtained from $\ell'$, a contradiction. \end{itemize} Thus, from now on, $G$ can be assumed to be cubic. Let $C = u_1\dots u_pu_1$ be a smallest induced cycle of $G$. For every $i \in \{1,\dots,p\}$, we denote by $u'_i$ the neighbour of $u_i$ which does not belong to $C$. Let $G' = G - E(C)$. Note that $G'$ is nice, since the $u_i$'s have degree~$1$ and are not adjacent in $G'$, while all other vertices have degree~$3$. Thus, by minimality of $G$, there is a p-proper $L'$-labelling $\ell'$ of $G'$, where $L'$ denotes the restriction of $L$ to the edges of $G'$. Our goal is to extend it to the edges of $C$ so as to obtain a p-proper $L$-labelling of $G$, thereby getting a final contradiction. To ease the exposition of the upcoming arguments, let us introduce some notation. For every $i \in \{1,\dots,p\}$, we set $L_i = L(u_iu_{i+1})$, $a'_i = \ell'(u_iu'_i)$ and $A'_i = \frac{\pi_{\ell'}(u'_i)}{a'_i}$ (where, here and further, we set $u_{p+1}=u_1$ and $u_0=u_p$). For some set $X$ of values and $\lambda \in \mathbb{R}^*$, we define $\lambda X = \set{\lambda x : x \in X}$ and $\frac{\lambda}{X} = \set{\frac{\lambda}{x} : x \in X}$. For two sets $X$ and $Y$, we define $X Y = \set{xy : x \in X, y \in Y}$. The proof goes by distinguishing several cases depending on some of the lists assigned by $L$ and on the structure of $G$. In each considered case, it is implicitly assumed that none of the previous cases applies. \begin{enumerate} \item \emph{There are $i_0 \in \{1,\dots,p\}$ and $\alpha \in L_{i_0-1}$ such that, for all $\alpha' \in L_{i_0}$, we have $\alpha\alpha' \neq A'_{i_0}$.} W.l.o.g., assume that $i_0 =1$.
The assumption implies that $u_1$ and $u'_1$ can never be in conflict in an extension of $\ell'$ assigning label $\alpha$ to $u_{p}u_1$. Let us thus start by assigning label $\alpha$ to $u_pu_1$. We then consider the other edges $u_{p-1}u_p, u_{p-2}u_{p-1},\dots,u_1u_2$ of $C$ one by one, following this exact ordering. For every edge $u_iu_{i+1}$ considered that way, we assign a label from $L(u_iu_{i+1})$ chosen in the following manner: \begin{itemize} \item If $i \in \{3,\dots,p-1\}$, then we assign to $u_iu_{i+1}$ a label so that $u_{i+1}$ is in conflict with neither ${u_{i+2}}$ nor $u_{i+1}'$. Note that this is possible since $|L(u_iu_{i+1})|=4$. In the case where $i=p-1$, we note that $u_{i+2}=u_1$ is a vertex whose product is not fully determined yet; the conflict between $u_p$ and $u_1$ will actually be taken care of in a later stage of the extension process. \item If $i=2$, then we assign to $u_2u_3$ a label so that $u_3$ is in conflict with neither $u_4$ nor $u_3'$, and the resulting partial product of $u_2$ gets different from the partial product of $u_1$. This is possible, since $|L(u_2u_3)|=4$. In case $p=3$ and, thus, $u_4=u_1$, the possible conflict between $u_3$ and $u_1$ will be handled during the next step of the process. \item If $i=1$, then we assign to $u_1u_2$ a label so that $u_2$ gets in conflict with neither $u_3$ nor $u_2'$, and $u_1$ and $u_p$ are not in conflict. Again, this is possible because $|L(u_1u_2)|=4$. Recall further that $u_1$ and $u_2$ cannot be in conflict due to the choice of the label assigned to $u_2u_3$. Also, $u_1$ and $u_1'$ cannot be in conflict by the initial assumption on $\alpha$. \end{itemize} Thus, once the whole process has been carried out, we get an $L$-labelling of $G$ which is p-proper, a contradiction.
\label{subcubic:item:1} \end{enumerate} Since Case~1 does not apply, then, throughout what follows, for every $i \in \set{1,\dots,p}$, we have \begin{equation} L_{i-1} = \frac{A'_i}{L_{i}} \text{ and } L_{i} = \frac{A'_i}{L_{i-1}}. \label{eq:subcubic-1} \end{equation} \begin{enumerate}[resume] \item \emph{There are $i_0 \in \{1,\dots,p\}$ and $\alpha \in L_{i_0}$ such that, for all $\alpha' \in L_{i_0+2}$, we have $\alpha a'_{i_0+1} \neq \alpha' a'_{i_0+2}$.} W.l.o.g., assume that $i_0 =1$. The assumption implies that $u_2$ and $u_3$ can never be in conflict in an extension of $\ell'$ assigning label $\alpha$ to $u_1u_2$. Let us thus assign label $\alpha$ to $u_1u_2$. We then consider the other edges of $C$, and label them with labels from their respective lists so that no conflict arises. We consider a special value of $p$, before considering the general case. \begin{itemize} \item Assume first that $p=3$, i.e., $C$ is a triangle. We start by assigning a label from $L(u_2u_3)$ to $u_2u_3$ so that $u_2$ does not get in conflict with $u_2'$, and the partial product of $u_3$ gets different from the partial product of $u_1$. Note that this is possible since $|L(u_2u_3)|=4$. We then assign a label from $L(u_1u_3)$ to $u_1u_3$ so that $u_1$ gets in conflict with neither $u_1'$ nor $u_2$, and $u_3$ does not get in conflict with $u_3'$. Again, such a label exists since $|L(u_1u_3)|=4$. Recall that $u_1$ and $u_3$ cannot be in conflict due to how $u_2u_3$ was labelled. Also, $u_2$ and $u_3$ cannot be in conflict by the assumption on $\alpha$. \item Otherwise, i.e., $p \geq 4$, we start by assigning a label from $L(u_2u_3)$ to $u_2u_3$ so that $u_2$ and $u_2'$ do not get in conflict. We then consider the remaining edges $u_pu_1,u_{p-1}u_p,\dots,u_3u_4$ of $C$ one by one, following this exact ordering. 
For every edge $u_iu_{i+1}$ considered that way, we assign a label from $L(u_iu_{i+1})$ chosen in the following way: \begin{itemize} \item If $i \in \{5,\dots,p\}$, then we assign to $u_iu_{i+1}$ a label chosen so that $u_{i+1}$ gets in conflict with neither $u_{i+2}$ nor $u_{i+1}'$. This is possible since $|L(u_iu_{i+1})|=4$. \item If $i=4$, then we assign to $u_4u_5$ a label chosen so that $u_5$ gets in conflict with neither $u_6$ nor $u_5'$, and the partial product of $u_4$ does not get equal to the partial product of $u_3$. This is possible since $|L(u_4u_5)|=4$. \item If $i = 3$, then we assign to $u_3u_4$ a label so that $u_4$ gets in conflict with neither $u_5$ nor $u_4'$, and $u_3$ does not get in conflict with $u_3'$. Again, this is possible since $|L(u_3u_4)|=4$. Recall that $u_4$ and $u_3$ cannot be in conflict due to how $u_4u_5$ has been labelled. Also, $u_2$ and $u_3$ cannot be in conflict by the assumption on $\alpha$. \end{itemize} \end{itemize} Thus, in all cases, we get a p-proper $L$-labelling of $G$, a contradiction. \end{enumerate} Since Case~2 does not apply in what follows, then, for every $i \in \set{1,\dots,p}$, we have \begin{equation} L_i = \frac{a'_{i+2}}{a'_{i+1}} L_{i+2}. \label{eq:subcubic-2} \end{equation} \begin{enumerate}[resume] \item \emph{$G$ is $K_4$, the complete graph on four vertices.} Here, $C$ is a cycle $u_1u_2u_3u_1$ of length~$3$, and we have $u' = u'_1 = u'_2 = u'_3$. Also, $\ell'$ assigns labels to the three edges incident to $u'$, since $G'$ is a star. Note that, as long as we label the edges of $C$ last and handle all conflicts at that point, then, prior to labelling $C$, we might actually change the labels assigned to $u_1u',u_2u',u_3u'$ by $\ell'$ for other labels from their respective lists. 
Note now that, for any choice of label $a_3'$ from $L(u_3u')$ assigned to $u_3u'$, Identity~(\ref{eq:subcubic-2}) must apply, i.e., we must have $L_1 = \frac{a'_{3}}{a'_{2}} L_{3}$, as otherwise previous Case~2 would apply the very same way. This implies that $|L_3| \geq 5$, a contradiction, by the following arguments. Since $|L(u_3u')| = 4$, there are at least two values $x,y \in L(u_3u')$ with distinct absolute values, say $\abs{x}<\abs{y}$. Start by assigning label $x$ to $u_3u'$; because Identity~(\ref{eq:subcubic-2}) applies, we deduce that for every $\alpha \in L_1$ we have $\frac{a'_2}{x}\alpha \in L_3$. The other way around, we have $L_3=L_3'=\set{\frac{a'_2}{x}\alpha : \alpha \in L_1}$ and $|L_3'|=|L_3|=4$. Now change the label of $u_3u'$ to $y$. Because $\abs{x}<\abs{y}$, we deduce that, for an $\alpha \in L_1$ with smallest absolute value, $\frac{a'_2}{y}\alpha \not \in L_3'$. This implies that $L_3$ must contain a fifth value not in $L_3'$ for Identity~(\ref{eq:subcubic-2}) to apply with $y$. \item \emph{$p=3$ and $C$ shares an edge with another triangle.} Assume $u_1u_2$ belongs to a triangle $u'u_1u_2u'$ different from $C$, where $u' = u'_1 = u'_2$ is the common neighbour of $u_1$ and $u_2$ different from $u_3$. Because we are not in Case~3, we have $u_3'\neq u'$, and $u'$ has a neighbour $w \not \in V(C)$. Note that, by $\ell'$, there are actually three possible values in $L(u_2u')$ that can be assigned to $u_2u'$ without causing $u'$ to be in conflict with $w$, thus two such values $x,y$, with, say, $|x|<|y|$. Start by setting $a_2'=y$. By an application of Identity~(\ref{eq:subcubic-2}) (which applies as otherwise Case 2 would), we deduce that $L_1 = \frac{a'_{3}}{a'_{2}} L_{3}$, which reveals the exact four values in $L_3$. Now, just as in previous Case~3, we note that by changing the value of $a_2'$ to $x$ and applying Identity~(\ref{eq:subcubic-2}) again, we deduce that $L_3$ must contain a fifth value not among the previous four revealed ones. This is a contradiction.
\end{enumerate} At this point, note that if we modify the label $a'_i$ assigned to any edge $u_iu_i'$ by $\ell'$, then this has no impact on the value $A_{i+1}'$ (and, symmetrically, on $A_{i-1}'$). Indeed, if modifying $a_i'$ also modified $A_{i+1}'$, then this would imply that $u_iu_i'$ is incident to $u_{i+1}'$, thus that $u'_i=u'_{i+1}$. But, in this case, we would deduce that $u_iu_{i+1}u'_iu_i$ is a triangle sharing an edge with $C$, thereby getting a contradiction to the fact that none of Cases~3 and~4 applies. By manipulating Identities~(\ref{eq:subcubic-1}) and~(\ref{eq:subcubic-2}), note that we can establish the relationship \begin{equation} L_i = \frac{a'_{i+2}A'_{i+2}}{a'_{i+1}A'_{i+1}} L_{i} = \frac{a'_{i+1}A'_{i+1}}{a'_{i+2}A'_{i+2}} L_{i}\label{eq:subcubic-3} \end{equation} between any list $L_i$ and some of the $a_i'$'s and $A_i'$'s. For every $i \in \{1,\dots,p\}$, we define $\lambda_i = \frac{A'_{i+1}}{a'_{i+2}A'_{i+2}}$; then, $L_i=a_{i+1}'\lambda_iL_i$ by the above. \begin{enumerate}[resume] \item \emph{There are $i \in \{1,\dots,p\}$ and a p-proper $L$-labelling $\ell$ of $G'$ matching $\ell'$ on all edges but possibly $u_{i+1}u'_{i+1}$, and such that $\abs{\ell(u_{i+1}u'_{i+1}) \lambda_i} \neq 1$.} The definition of $\ell$ and the fact that previous Cases~3 and~4 do not apply imply that $A'_{i+1}$, $A'_{i+2}$ and $a'_{i+2}$ are the same by both $\ell'$ and $\ell$. From Identity~(\ref{eq:subcubic-3}), we deduce that $L_i = \ell(u_{i+1}u'_{i+1}) \lambda_i L_i$, where $\lambda_i$ is the same by both $\ell'$ and $\ell$. Now consider $x_0 \in L_i$; from what we have just deduced, we now get that $$ \set{(\ell(u_{i+1}u'_{i+1})\lambda_i)^j x_0 }_{j \in \mathbb{N}} \subseteq L_i.$$ Because $\abs{\ell(u_{i+1}u'_{i+1}) \lambda_i} \neq 1$, the elements of the set $\set{(\ell(u_{i+1}u'_{i+1})\lambda_i)^j x_0 }_{j \in \mathbb{N}}$ have pairwise distinct absolute values; we then deduce that this set has infinite cardinality and is included in $L_i$, which has size~$4$; a contradiction.
\end{enumerate} Note that, by $\ell'$, there are actually at least two values in $L(u_iu'_i)$ that could be assigned to $u_iu_i'$ without breaking p-properness. This is because $|L(u_iu_i')|=4$, and, when labelling $u_iu_i'$, we only have to make sure that $u_i'$ gets product different from that of its at most two neighbours different from $u_i$ in $G'$ (in particular, note that we must have $A_i' \neq 1$ by $\ell'$ so that $\pi_{\ell'}(u_i) \neq \pi_{\ell'}(u_i')$, and thus we do not have to care about $u_i$ and $u_i'$ getting in conflict when relabelling $u_iu_i'$). Because Case~5 does not apply, this actually implies that there are exactly two such values from every $L(u_iu_i')$, and that these two values are precisely $a'_i$ and $-a'_i$. \begin{enumerate}[resume] \item \emph{There exists $i \in \{1,\dots,p\}$ such that $L_i$ is not of the form $\set{\alpha,-\alpha,\beta,-\beta}$ for distinct $\alpha,\beta \in \mathbb{R}^*$.} Let us consider the identity $L_i = a'_{i+1}\lambda_i L_{i}$ again. Since Case~5 does not apply, we have $\abs{\ell'(u_{i+1}u'_{i+1})\lambda_i} = 1$ for any possible value of $\ell'(u_{i+1}u'_{i+1})$ from $L(u_{i+1}u_{i+1}')$. Since $u'_{i+1}$ has, in $G'$, two neighbours different from $u_{i+1}$, there are, in $L(u_{i+1}u_{i+1}')$, at least two possible values for $u_{i+1}u'_{i+1}$ that prevent $u_{i+1}'$ from being in conflict with these two neighbours, and these possibilities must include $a'_{i+1}$ and $-a'_{i+1}$. Now, by considering the p-proper $L'$-labelling of $G'$ obtained from $\ell'$ by changing the label of $u_{i+1}u'_{i+1}$ to $-a'_{i+1}$, the same reasoning process leads us to deduce that $L_i = -a'_{i+1}\lambda_i L_{i}$. This implies that $L_i = - L_i$, hence that $L_i$ is of the form $\set{\alpha,-\alpha,\beta,-\beta}$, a contradiction. \end{enumerate} We are now ready to conclude the proof, by considering a few cases on the length of $C$.
The crucial points to keep in mind from now on are that $L$ verifies, for every $i \in \{1, \dots, p\}$, that 1) $a_i',-a_i' \in L(u_iu_i')$ and, in $\ell'$, changing the label of $u_iu_i'$ from $a_i'$ to $-a_i'$ cannot raise a conflict in $G'$, and that 2) there are nonzero real numbers $\alpha_i,\beta_i$ such that $L_i = \{\alpha_i,-\alpha_i,\beta_i,-\beta_i\}$. \begin{enumerate}[resume] \item \emph{$p$ is even.} For every $i \in \{1,\dots,p\}$, we associate a variable $x_i$ to the edge $u_iu_{i+1}$. We consider the polynomial $$P(x_1, \dots, x_p)= \prod_{i=1}^p \left(x_{i-1}x_i - A_i'\right),$$ which is equivalent to considering $$P'(y_1,\dots,y_p)=\prod_{i=1}^p \left(y_{i-1} + y_i - \log\abs{A_i'}\right)$$ where $y_i = \log \abs{x_i}$ for every $i \in \{1,\dots,p\}$. Note that the monomial $y_1\dots y_p$ has maximum degree and nonzero coefficient in the expansion of $P'$ (its coefficient is~$2$, since a choice of one variable per factor covering all the $y_i$'s must pick $y_i$ in the $i$th factor for every $i$, or $y_{i-1}$ in the $i$th factor for every $i$). Thus, by the Combinatorial Nullstellensatz, we can assign values to the $y_i$'s so that $P'$ does not vanish, assuming we have at least two possible values to choose from for each of the $y_i$'s. This implies that we can assign values to the $x_i$'s so that $P$ does not vanish, assuming we have at least two possible values with distinct absolute values to choose from, for each of the $x_i$'s. Particularly, since $|L(u_iu_{i+1})|=4$ for every edge $u_iu_{i+1}$, this implies that $\ell'$ can be extended to the edges of $C$, resulting in an $L$-labelling $\ell$ of $G$ where $\pi_{\ell}(u_i)$ and $\pi_{\ell}(u_i')$ have distinct absolute values for every $i \in \{1,\dots,p\}$. Now, the only possible remaining conflicts are between the $u_i$'s. Due to all the assumptions made this far, recall, for every $i \in \{1,\dots,p\}$, that $\ell$ assigns label $a_i'$ to every edge $u_iu_i'$, that $-a_i' \in L(u_iu_i')$, and that switching $\ell(u_iu_i')$ from $a_i'$ to $-a_i'$ cannot raise a conflict between $u_i'$ and its neighbours.
Thus, to get a p-proper $L$-labelling of $G$, we can just consider each of the $u_iu_i'$'s in turn, and for each $u_iu_i'$ of them, switch, if necessary, its label to $-a_i'$ so that $u_i$ gets positive product if $i$ is, say, even, or negative product otherwise. \item \emph{$p=3$.} Because Cases~3 and~4 do not apply, recall that $u_1',u_2',u_3'$ are pairwise different. We extend $\ell'$ as follows. We start by assigning any label from $L(u_1u_2)$ to $u_1u_2$. Next, we assign to $u_3u_1$ a label from $L(u_3u_1)$ so that no conflict between $u_1$ and $u_1'$ arises, and the resulting partial products of $u_2$ and $u_3$ have different absolute values. Note that this is possible, since $L_3$ is of the form $\{\alpha,-\alpha,\beta,-\beta\}$. We finally assign to $u_2u_3$ a label from $L(u_2u_3)$ so that there is no conflict between $u_2$ and $u_2'$, $u_3$ and $u_3'$, and $u_1$ and $u_3$. Recall that $u_2$ and $u_3$ cannot be in conflict due to how $u_3u_1$ was labelled. Thus, the only potential conflict that can remain is between $u_2$ and $u_1$, and, if it occurs, then we can get rid of it by simply changing the label of $u_2u_2'$ from $a_2'$ to $-a_2'$. Recall that this cannot make $u_2'$ get in conflict with its neighbours different from $u_2$, and that $u_2$ and $u_2'$ also cannot get in conflict unless they already were before switching the label of $u_2u_2'$. \item \emph{$p$ is odd and at least~$5$.} We first use the Combinatorial Nullstellensatz similarly to Case~7, to label the edges of $C$ in such a way that, for certain pairs of vertices, the resulting products have distinct absolute values. More precisely, we want to achieve this for the pairs $\{u_1,u_1'\}$, $\{u_1,u_2\}$, $\{u_2,u_3\}$, $\{u_3,u_3'\}$, $\{u_4,u_4'\}$, $\{u_5,u_5'\}$, $\dots$, $\{u_{p-2},u_{p-2}'\}$ and $\{u_p,u_p'\}$. We denote by $\mathcal{S}$ the set of those pairs.
In order to show that such an extension exists, for every $i \in \{1,\dots,p\}$ we associate a variable $x_i$ to the edge $u_iu_{i+1}$, and consider the polynomial \begin{equation*} \begin{split} P(x_1, \dots, x_p) = & \left(x_px_1 - A_1' \right) \cdot \left(x_pa_1' - x_2a_2' \right) \cdot \left(x_1a_2' - x_3a_3' \right) \\ & \cdot \left(\prod_{i=3}^{p-2} \left(x_{i-1}x_i - A_i' \right)\right) \cdot \left(x_{p-1}x_p - A_p' \right), \end{split} \end{equation*} which, if $y_i=\log \abs{x_i}$ for every $i \in \{1,\dots,p\}$, is the same as considering \begin{equation*} \begin{split} P'(y_1, \dots, y_p)= &\left(y_p + y_1 - \log\abs{A_1'} \right) \cdot \left(y_p + \log\abs{a_1'} - y_2 - \log\abs{a_2'} \right) \cdot \left(y_1 + \log\abs{a_2'} - y_3 - \log\abs{a_3'} \right) \\ & \cdot \left(\prod_{i=3}^{p-2} \left(y_{i-1} + y_i - \log\abs{A_i'} \right)\right) \cdot \left(y_{p-1} + y_p - \log\abs{A_p'} \right). \end{split} \end{equation*} It can be checked that, in the expansion of $P'$, the monomial $y_1\dots y_p$ has maximum degree and nonzero coefficient $-2$. Thus, by the Combinatorial Nullstellensatz we deduce that there is a way to label the edges of $C$ with labels from their respective lists, so that the desired conflicts (between the adjacent vertices in the pairs of $\mathcal{S}$) are avoided. In particular, this is possible because all these lists are of the form $\{\alpha,-\alpha,\beta,-\beta\}$, and thus contain two values with distinct absolute values. The resulting labelling might not be p-proper, and, to turn it into a p-proper one, we will \textit{switch} some edges incident to the vertices in $C$, and, by that, we mean changing the current label $l$ of an edge to $-l$. More particularly, we will switch edges of the form $u_iu_{i+1}$ and $u_iu_i'$; due to some of the assumptions made this far, recall that for every such edge $e$ with current label $l$, we do have $-l \in L(e)$.
We start by switching, if necessary, $u_2u_2'$ and $u_{p-1}u_{p-1}'$ so that the products of $u_2'$ and $u_{p-1}'$ get positive and negative, respectively. Next, we switch $u_1u_2$, if necessary, so that the product of $u_2$ gets negative. Now, we consider the edges $u_3u_4, u_4u_5, \dots, u_pu_1$ one by one following this ordering, and, for every such considered edge $u_iu_{i+1}$, we switch it, if necessary, so that the product of $u_i$ gets negative if $i$ is odd, and positive otherwise. Lastly, we switch $u_1u_1'$, if necessary, so that the product of $u_1$ gets negative. We claim that the eventual labelling of $G$ is p-proper, our final contradiction. First recall, as mentioned earlier, that the switching operation guarantees that the resulting labelling is an $L$-labelling. Its p-properness follows from the following arguments. First, for all the pairs of adjacent vertices in $\mathcal{S}$, the products are different due to distinct absolute values (preserved under the switching operation). Regarding the two adjacent vertices in the pair $\{u_{p-1},u_{p-1}'\}$, the products have different signs and are thus different. Now, for every two adjacent vertices in the pairs $\{u_3,u_4\}, \{u_4,u_5\}, \dots, \{u_p,u_1\}$, the products are different due to their signs being different. \qedhere \end{enumerate} \end{proof} \section{Conclusion} In this work, we have considered a problem being a combination of the Multiplicative 1-2-3 Conjecture and of the List 1-2-3 Conjecture, standing as a List Multiplicative 1-2-3 Conjecture. In particular, we have exhibited a few bounds on the parameter ${\rm ch}_\Pi^*$, both for graphs in general and for more specific classes of graphs. While some of these bounds are tight, some others remain a bit distant from what we believe should be optimal. An interesting point stemming from our proofs concerns the methods we have used to establish our bounds.
In the context of the List 1-2-3 Conjecture, the algebraic approach, through, in particular, the polynomial method and tools such as the Combinatorial Nullstellensatz, is definitely the best approach we know of at the moment to establish bounds on ${\rm ch}_\Sigma$. As described notably in Subsection~\ref{subsection:algebraic-tools}, and seen throughout this work, the potential of this method is a bit less obvious for exhibiting bounds on ${\rm ch}_\Pi^*$. Recall that, in the current work, we have mainly exploited the connection between ${\rm ch}_\Sigma$ and ${\rm ch}_\Pi^*$ established in Theorem~\ref{theorem:chs-to-chp}. It might be, however, that there are dedicated ways to better exploit the algebraic approach, and get better bounds on ${\rm ch}_\Pi^*$. As a main perspective for further work on the topic, it would be nice to obtain a constant upper bound on ${\rm ch}_\Pi^*$ for graphs in general. Recall that, due to Theorem~\ref{theorem:chs-to-chp}, this could be obtained through establishing a constant upper bound on ${\rm ch}_\Sigma$. This apart, it would be interesting to verify the List Multiplicative 1-2-3 Conjecture for more classes of graphs. For instance, it would be interesting to improve any of the upper bounds in Corollary~\ref{corollary:existing-bounds}, some of which we have already improved in Subsection~\ref{subsection:bounds-particular-classes}. Notably, it is worth mentioning that the arguments used to prove Theorems~\ref{theorem:planar-girth16} and~\ref{theorem:subcubic} are tight, and, as a result, it seems that our proofs would be hard to improve to lower the bound of $4$. From this, we would be interested in having a proof of the List Multiplicative 1-2-3 Conjecture for planar graphs with girth at least~$16$ or for subcubic graphs.
https://arxiv.org/abs/1504.01721
Rainbow connection in some digraphs
An edge-coloured graph $G$ is {\it rainbow connected} if any two vertices are connected by a path whose edges have distinct colours. This concept was introduced by Chartrand et al. in \cite{ch01}, and it was extended to oriented graphs by Dorbec et al. in \cite{DI}. In this paper we present some results regarding this extension, mostly for the case of circulant digraphs.
\section{Introduction} Given a connected graph $G=(V(G), E(G))$, an edge-coloring of $G$ is called {\it rainbow connected} if for every pair of distinct vertices $u, v$ of $G$ there is a $uv$-path all whose edges received different colors. The {\it rainbow connectivity number of} $G$ is the minimum number $rc(G)$ such that there is a rainbow connected edge-coloring of $G$ with $rc(G)$ colors. Similarly, an edge-coloring of $G$ is called {\it strong rainbow connected} if for every pair $u, v\in V(G)$ there is a $uv$-path of minimal length (a $uv$-geodesic) all whose edges received different colors. The {\it strong rainbow connectivity number of} $G$ is the minimum number $src(G)$ such that there is a strong rainbow connected edge-coloring of $G$ with $src(G)$ colors. The concepts of rainbow connectivity and strong rainbow connectivity of a graph were introduced by Chartrand et al. in \cite{ch01} and, connectivity being a fundamental notion in Graph Theory, it is not surprising that several works around these concepts have been carried out since then (see for instance \cite{cha,ch02,sc02,sc03,sc01,kri,li3,li4,li2}). For a survey on this topic, see \cite{li1}. A natural extension of these notions is that of the {\it rainbow connection} and {\it strong rainbow connection} in oriented graphs, which was introduced by Dorbec et al. in \cite{DI}. Let $D=(V(D), A(D))$ be a strongly connected digraph and $\Gamma : A(D)\rightarrow \{1, \dots, k\}$ be an arc-coloring of $D$. Given $x, y\in V(D)$, a directed $xy$-path $T$ in $D$ will be called {\it rainbow} if no two arcs of $T$ receive the same color. $\Gamma$ will be called {\it rainbow connected} if for every pair of vertices $x, y \in V(D)$ there is a rainbow $xy$-path and a rainbow $yx$-path. The {\it rainbow connection number of} $D$, denoted as $rc^*(D)$, is the minimum number $k$ such that there is a rainbow connected arc-coloring of $D$ with $k$ colors.
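As a quick illustration of this definition, note that for the complete biorientation $\overleftrightarrow{K_n}$ of $K_n$ with $n\geq 2$, every ordered pair of distinct vertices is joined by an arc, so the arc-coloring assigning a single color to all arcs is rainbow connected and $rc^*(\overleftrightarrow{K_n})=1$. At the other extreme, in the directed cycle $\overrightarrow{C_n}$ with $n\geq 3$, every pair of vertices is joined by a unique directed path, and any two arcs lie together on one of these paths (each path of length $n-1$ omits exactly one arc); hence all $n$ arcs must receive pairwise distinct colors, and $rc^*(\overrightarrow{C_n})=n$.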
Given a pair of vertices $x,y\in V(D)$, an $xy$-path $T$ will be called an {\it $xy$-geodesic} if the length of $T$ is the distance, $d_D(x,y)$, from $x$ to $y$ in $D$. An arc-coloring of $D$ will be called {\it strongly rainbow connected} if for every pair of distinct vertices $x, y$ of $D$ there is a rainbow $xy$-geodesic and a rainbow $yx$-geodesic. The {\it strong rainbow connection number of} $D$, denoted as ${src^*}(D)$, is the minimum number $k$ such that there is a strong rainbow connected arc-coloring of $D$ with $k$ colors. In this paper we present some results regarding this problem, mainly for the case of circulant digraphs. For general concepts we refer the reader to \cite{bang}. \section{Some remarks and basic results on biorientations of graphs} Let $D=(V(D), A(D))$ be a strongly connected digraph of order $n$ and let ${\hbox{diam}}(D)$ be the diameter of $D$. As shown in \cite{DI}, $${\hbox{diam}}(D)\le rc^*(D)\le {src^*}(D)\leq n.$$ Also, it is not hard to see that if $H$ is a strong spanning subdigraph of $D$, then $rc^*(D)\leq rc^*(H)$. However, as in the graph case (see \cite{cha}), this is not true for the strong rainbow connection number, as we see in the next lemma. \begin{lem}\label{srcGH} There is a digraph $D$ and a spanning subdigraph $H$ of $D$ such that ${src^*}(D)>{src^*}(H)$. \end{lem} \begin{proof} Let $H$ be as in Figure \ref{fig1}, where $D$ is obtained from $H$ by adding the arc $a_1a_2$. It is not hard to see that the colouring in Figure \ref{fig1} is a strong rainbow 6-colouring of $H$, thus ${src^*}(H)\leq 6$. We will show that ${src^*}(D)\geq 7$. Suppose there is a strong rainbow 6-colouring $\rho$ of $D$. First notice that, for each $i$ and $j$, the $u_iv_j$-geodesic is unique and contains the arcs $u_iv_i$ and $u_jv_j$; hence no two arcs of the type $u_iv_i$ share the same colour. Without loss of generality let $\rho(u_iv_i)=i$ for $1\leq i \leq 4$.
By an analogous argument, since $P_i=u_iv_ia_1a_2u_4v_4$ is the only $u_iv_4$-geodesic for $i\le3$, and $a_1a_2, a_2u_4\in A(P_i)$, we can suppose that such arcs have colours 5 and 6, respectively. If we assign any of the six colours to the arc $v_1a_1$, we see that for some $j\geq 2$ the unique $u_1v_j$-geodesic is not rainbow, contradicting the choice of $\rho$. Hence ${src^*}(D)\ge7$ and the result follows. \end{proof} \begin{figure}[ht] \begin{center} \begin{tikzpicture}[every circle node/.style ={circle,draw,fill=black,minimum size= 5pt,inner sep=0pt, outer sep=0pt}, every rectangle node/.style ={}]; \begin{scope}[scale=0.7, xshift=-5cm] \node [circle] (0) at (-3,4)[label=90:$u_1$]{}; \node [circle] (1) at (-5,3)[label=90:$v_1$]{}; \node [circle] (2) at (-6,1)[label=180:$u_2$]{}; \node [circle] (3) at (-6,-1)[label=180:$v_2$]{}; \node [circle] (4) at (-5,-3)[label=270:$u_3$]{}; \node [circle] (5) at (-3,-4)[label=270:$v_3$]{}; \node [circle] (6) at (-2,0)[]{}; \node [rectangle] (a1) at (-1.8,0.9) {$a_1$}; \node [circle] (7) at (2,3)[label=90:$b_1$]{}; \node [circle] (8) at (2,1)[label=90:$b_2$]{}; \node [circle] (9) at (2,-1)[label=90:$b_3$]{}; \node [circle] (10) at (6,0)[]{}; \node [rectangle] (a2) at (6.2,0.6) {$a_2$}; \node [circle] (11) at (9,1.5)[label=0:$u_4$]{}; \node [circle] (12) at (9,-1.5)[label=0:$v_4$]{}; \foreach \from/\to in {0/1} \draw [->, shorten <=3pt, shorten >=3pt, >=stealth, line width=.7pt] (\from) to (\to); \foreach \from/\to in {2/3} \draw [->, shorten <=3pt, shorten >=3pt, >=stealth, line width=.7pt] (\from) to (\to); \foreach \from/\to in {4/5} \draw [->, shorten <=3pt, shorten >=3pt, >=stealth, line width=.7pt] (\from) to (\to); \foreach \from/\to in {11/12} \draw [->, shorten <=3pt, shorten >=3pt, >=stealth, line width=.7pt] (\from) to (\to); \foreach \from/\to in {1/6,3/6,5/6,12/10} \draw [->, shorten <=3pt, shorten >=3pt, >=stealth, line width=.7pt] (\from) to (\to); \foreach \from/\to in {6/0,6/2,6/4,10/11} \draw [->, shorten
<=3pt, shorten >=3pt, >=stealth, line width=.7pt] (\from) to (\to); \foreach \from/\to in {6/7,7/6,10/8,8/10} \draw [->, shorten <=3pt, shorten >=3pt, >=stealth, line width=.7pt] (\from) to [bend right=10] (\to); \foreach \from/\to in {6/8,8/6,10/9,9/10} \draw [->, shorten <=3pt, shorten >=3pt, >=stealth, line width=.7pt] (\from) to [bend right=10] (\to); \foreach \from/\to in {6/9,9/6,10/7,7/10} \draw [->, shorten <=3pt, shorten >=3pt, >=stealth, line width=.7pt] (\from) to [bend right=10] (\to); \draw[->, shorten <=5pt, shorten >=5pt, >=stealth, line width=.7pt, black, dashed] (-2,0) .. controls (0,-3) and (4,-3) .. (6, 0); \node [rectangle] () at (-4.1,3.9) {1}; \node [rectangle] () at (-0.3,2) {1}; \node [rectangle] () at (0,1.5) {1}; \node [rectangle] () at (3,0.2) {1}; \node [rectangle] () at (3,1.3) {1}; \node [rectangle] () at (-6.3,0) {2}; \node [rectangle] () at (1.2,0.2) {2}; \node [rectangle] () at (1.2,1.3) {2}; \node [rectangle] () at (3.6,-0.1) {2}; \node [rectangle] () at (4,-0.5) {2}; \node [rectangle] () at (-4.1,-3.9) {3}; \node [rectangle] () at (0.4,-0.1) {3}; \node [rectangle] () at (0,-0.5) {3}; \node [rectangle] () at (4,1.5) {3}; \node [rectangle] () at (4.4,2) {3}; \node [rectangle] () at (9.3,0) {4}; \node [rectangle] () at (-3.6,2) {5}; \node [rectangle] () at (-4.3,-0.3) {5}; \node [rectangle] () at (-2.8,-2.2) {5}; \node [rectangle] () at (7.7,-0.5) {5}; \node [rectangle] () at (-2.3,2.4) {6}; \node [rectangle] () at (-4,0.8) {6}; \node [rectangle] () at (-4,-1.5) {6}; \node [rectangle] () at (7.4,1.1) {6}; \node [rectangle] () at (2,-2) {7}; \end{scope} \end{tikzpicture} \end{center}\caption{The digraphs $D$ and $H$ from Lemma \ref{srcGH}.}\label{fig1} \end{figure} Given a pair $v, u\in V(D)$, if the arcs $uv$ and $vu$ are in $D$, then we say that $uv$ and $vu$ are \emph{symmetric} arcs. When every arc of $D$ is symmetric, $D$ is called a \emph{symmetric} digraph. 
Given a graph $G=(V(G), E(G))$, its {\it biorientation} is the symmetric digraph $\stackrel{\overleftrightarrow{\hspace{.3cm}}}{G}$ obtained from $G$ by replacing each edge $uv$ of $G$ by the pair of symmetric arcs $uv$ and $vu$. Given a graph $G$ and a (strong) rainbow connected edge-coloring of $G$, it is not hard to see that the arc-coloring of $\stackrel{\overleftrightarrow{\hspace{.3cm}}}{G}$ obtained by assigning the color of the edge $uv$ to both arcs $uv$ and $vu$ is a (strong) rainbow connected arc-coloring of $\stackrel{\overleftrightarrow{\hspace{.3cm}}}{G}$. Thus $rc^*(\stackrel{\overleftrightarrow{\hspace{.3cm}}}{G})\le rc(G)$ and ${src^*}(\stackrel{\overleftrightarrow{\hspace{.3cm}}}{G})\le src(G)$. Although for some graphs and their biorientations these values coincide (for instance, as we will see, for $n\geq 4$, $rc(C_n) = src(C_n)= rc^*(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n})={src^*}(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n})$), for other graphs and their biorientations the difference between those values is unbounded, as we see in the case of the stars: for each $n\geq 2$, $rc(K_{1,n})=n$ (the two edges of each path between terminal vertices must receive different colors, hence all $n$ edges receive distinct colors), while $rc^*(\stackrel{\overleftrightarrow{\hspace{.7cm}}}{K_{1,n}})={src^*}(\stackrel{\overleftrightarrow{\hspace{.7cm}}}{K_{1,n}})=2$ (the colouring that assigns color 1 to the in-arcs of the ``central'' vertex and color 2 to its out-arcs is a strong rainbow coloring). \begin{teo}\label{srcKn} Let $D$ be a nontrivial digraph. Then \begin{enumerate} \item[(a)] ${src^*}(D)=1$ if and only if $rc^*(D)=1$ if and only if, for some $n\geq 2$, $D= \stackrel{\overleftrightarrow{\hspace{.3cm}}}{K_n} $; \item[(b)] $rc^*(D)=2$ if and only if ${src^*}(D)=2$. \end{enumerate} \end{teo} \begin{proof} First observe that since $D$ is nontrivial, $rc^*(D)\geq 1$ and therefore if ${src^*}(D)=1$ then $rc^*(D)=1$.
If $rc^*(D)= 1$ then ${\hbox{diam}}(D) =1$ and hence $D=\stackrel{\overleftrightarrow{\hspace{.3cm}}}{K_n}$ for some $n\geq 2$. On the other hand, if $D= \stackrel{\overleftrightarrow{\hspace{.3cm}}}{K_n}$ it follows that every 1-colouring of $D$ is a strong rainbow colouring. Thus $1\geq {src^*}(D)\geq rc^*(D)\geq 1$ and (a) follows. For (b), if ${src^*}(D) = 2$ then, by (a), $rc^*(D)>1$ and hence $rc^*(D) = 2$. If $rc^*(D) = 2$, then $D$ has a rainbow 2-colouring and, by (a), $D\not=\stackrel{\overleftrightarrow{\hspace{.3cm}}}{K_n}$. Therefore for every pair $u, v\in V(D)$ with $d(u,v)\geq 2$ there exists a rainbow $uv$-path of length 2, which is also a geodesic. Hence ${src^*}(D)=2$ and (b) follows. \end{proof} \begin{teo}\label{srcCn} \begin{enumerate} \item[(a)] For $n\geq 2$, $rc^*(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{P_n})={src^*}(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{P_n})=n-1$; \item[(b)] For $n\geq 4$, $rc^*(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n})={src^*}(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n})=\lceil n/2\rceil$; \item[(c)] Let $k\ge2$. If $\stackrel{\overleftrightarrow{\hspace{.3cm}}}{K}_{n_1,n_2,\dots,n_k}$ is the complete $k$-partite digraph where $n_i\ge2$ for some $i$, then $rc^*(\stackrel{\overleftrightarrow{\hspace{.3cm}}}{K}_{n_1,n_2,\dots,n_k})={src^*}(\stackrel{\overleftrightarrow{\hspace{.3cm}}}{K}_{n_1,n_2,\dots,n_k})=2$. \end{enumerate} \end{teo} \begin{proof} In \cite{ch01} it is shown that $src(C_n)=\lceil \frac{n}{2}\rceil$ for every $n\geq 4$, and $src(P_n) = n-1$ for every $n\geq 2$. Since ${\hbox{diam}}(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{P_n})= n-1$ it follows that $n-1\leq rc^*(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{P_n})\leq {src^*}(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{P_n})\leq src(P_n) = n-1$ and the first part of the theorem follows.
In an analogous way, if $n$ is even, $\lceil \frac{n}{2}\rceil={\hbox{diam}}(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n})\leq rc^*(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n})$ and, since ${src^*}(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n})\le src(C_n)=\lceil \frac{n}{2}\rceil$, we get $rc^*(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n})={src^*}(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n})=\lceil \frac{n}{2}\rceil$. Let $n =2k+1$ with $k\ge2$ and let us suppose there is a rainbow $k$-colouring $\rho$ of $\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n}$. Observe that for every $0\leq i \leq n-1$, $(v_i, v_{i+1}, \dots, v_{i+k})$ is the only $v_iv_{i+k}$-path of length $d(v_i, v_{i+k}) = k$ in $\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n}$, and therefore all $k$ colours of $\rho$ occur in each such geodesic path. Thus $\rho(v_iv_{i+1}) = \rho(v_{i+k}v_{i+k+1})$ for each $0\leq i \leq n-1$, which, since $\gcd(k, 2k+1)=1$, implies that all the arcs $v_iv_{i+1}$ in $\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n}$ receive the same colour, a contradiction. Thus $rc^*(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n}) \geq k +1 = \lceil \frac{n}{2}\rceil$ and (b) follows. For (c), since $n_i\ge2$ for some $i$, $\stackrel{\overleftrightarrow{\hspace{.3cm}}}{K}_{n_1,n_2,\dots,n_k}$ is not a complete digraph, hence $rc^*(\stackrel{\overleftrightarrow{\hspace{.3cm}}}{K}_{n_1,n_2,\dots,n_k})\ge2$. Let $V_1,V_2,\dots,V_k$ be the partition of $V(\stackrel{\overleftrightarrow{\hspace{.3cm}}}{K}_{n_1,n_2,\dots,n_k})$ into independent sets and, for each arc $uv$ with $u\in V_i$ and $v\in V_j$, assign colour 1 to $uv$ if $i<j$ and colour 2 if $i>j$. Since ${\hbox{diam}}(\stackrel{\overleftrightarrow{\hspace{.3cm}}}{K}_{n_1,n_2,\dots,n_k})=2$, it is not hard to see that this is a strong rainbow connected 2-colouring and therefore ${src^*}(\stackrel{\overleftrightarrow{\hspace{.3cm}}}{K}_{n_1,n_2,\dots,n_k})\le2$.
\end{proof} \begin{teo}\label{teo3} Let $D$ be a spanning strongly connected subdigraph of $\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n}$ with $k \geq 1$ asymmetric arcs. Then $$rc^*(D)=\left\{\begin{array}{ll} n-1 & \text{if }k\leq2\text{;} \\ n & \text{if }k\geq3\text{.}\end{array}\right.$$ Moreover, if $k\geq 3$, then $rc^*(D)= {src^*}(D) = n$. \end{teo} \begin{proof} Let $V(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{C_n}) = \{v_0, \dots , v_{n-1}\}$ and suppose $v_0v_{n-1}\not\in A(D)$. Since $D$ is strongly connected, the $v_0v_{n-1}$-path $T= (v_0, v_1, \dots , v_{n-1})$ is contained in $D$, thus ${\hbox{diam}}(D)\geq n-1$. Therefore $n-1\leq rc^*(D)\leq n$. If $k=1$ we see that $\stackrel{\overleftrightarrow{\hspace{.5cm}}}{P_n}$ is a spanning subdigraph of $D$, hence $n-1\leq rc^*(D)\leq rc^*(\stackrel{\overleftrightarrow{\hspace{.5cm}}}{P_n})$, which by Theorem \ref{srcCn} (a) implies that $rc^*(D)=n-1$. Let $k\geq 2$. If $v_{n-1}v_0\not\in A(D)$, since $D$ is strongly connected it follows that $D$ is isomorphic to $\stackrel{\overleftrightarrow{\hspace{.5cm}}}{P_n}$, which has no asymmetric arcs, a contradiction. Therefore $v_{n-1}v_0\in A(D)$. If there is an $(n-1)$-rainbow coloring $\rho$ of $D$, since $v_{n-1}v_0\in A(D)$, the directed cycle $C$ induced by $A(T)\cup \{v_{n-1}v_0\}$ is a spanning subdigraph of $D$ and therefore there are two arcs $v_iv_{i+1}, v_jv_{j+1}\in A(C)$ such that $\rho(v_iv_{i+1}) = \rho(v_jv_{j+1})$. Since $\rho$ is a rainbow coloring, there is a rainbow $v_iv_{j+1}$-path and a rainbow $v_jv_{i+1}$-path in $D$. Thus the paths $(v_i, v_{i-1}, \dots, v_{j+2}, v_{j+1})$ and $(v_j, v_{j-1}, \dots, v_{i+2}, v_{i+1})$ must be contained in $D$ and therefore the number of asymmetric arcs in $D$ is at most 2. Thus, if $k\geq 3$ then $rc^*(D)\geq n$ and hence $rc^*(D) = n$.
Finally, if $k=2$, let $\rho$ be the $(n-1)$-arc-coloring of $D$ which assigns the same color to the two asymmetric arcs and, using the remaining $n-2$ colors, assigns the same color to each of the remaining $n-2$ pairs of symmetric arcs. It is not hard to see that $\rho$ is a rainbow coloring of $D$, thus $rc^*(D)\leq n-1$ and the first part of the theorem follows. The second part follows directly from the first part of the theorem and from the fact that ${src^*}(D)\leq n$. \end{proof} As a direct corollary of the previous result we have \begin{cor} Let $D$ be a strongly connected digraph with $m\geq 3$ arcs. Then $rc^*(D)= {src^*}(D)=m$ if and only if $D = \stackrel{\rightarrow}{C_m} $.\end{cor} \section{Circulant digraphs} For an integer $n\geq2$ and a set $S\subseteq\{1, 2,\dots, n-1\}$, the \emph{circulant digraph} $C_n(S)$ is defined as follows: $V(C_n(S))=\{v_0,v_1,\dots,v_{n-1}\}$ and $$A(C_n(S))=\{v_iv_j : j-i\stackrel{n}{\equiv}s,\ s\in S\},$$ where $a\stackrel{n}{\equiv}b$ means that $a$ is congruent to $b$ modulo $n$. The elements of $S$ are called \emph{generators}, and an arc $v_iv_j$, where $j-i\stackrel{n}{\equiv}s\in S$, will be called an {\it $s$-jump}. If $s\in S$ we denote by $C_{(s)}$ the spanning subdigraph of $C_n(S)$ induced by all the $s$-jumps. Observe that for every pair of vertices $v_i$ and $v_j$ there is at most one $v_iv_j$-path in $C_{(s)}$. If such a $v_iv_j$-path in $C_{(s)}$ exists, it will be denoted by $v_iC_{(s)}v_j$. From now on the subscripts of the vertices are taken modulo $n$. Given an integer $k\geq 1$, let $[k]=\{1,2,\dots,k\}$. \begin{teo}\label{Cn[k]} If $1\leq k\leq n-2$, then $rc^*(C_n([k]))={src^*}(C_n([k]))=\lceil \frac{n}{k}\rceil$. \end{teo} \begin{proof} Let $D = C_n([k])$. The case $k=1$ is proved in Theorem \ref{teo3}. Let $2\leq k\leq n-2$ and $V(D) = \{v_0, \dots ,v_{n-1}\}$.
By definition, for every pair $0\leq i \le j\leq n-1$, $d(v_i, v_j) = d(v_0, v_{j-i})$ and $d(v_j,v_i) = d(v_0, v_{i+n-j})$. Also, it is not hard to see that for every $0\leq i\leq n-1$, $d(v_0, v_i)= \lceil\frac{i}{k}\rceil$. From here it follows that ${\hbox{diam}}(D)= \lceil\frac{n-1}{k}\rceil$. Let $P=\{V_1, V_2, \dots, V_{\lceil\frac{n}{k}\rceil}\}$ be a partition of $V(D)$ such that for each $i$, with $1\leq i \leq \lfloor\frac{n}{k}\rfloor$, $V_i=\{v_j :(i-1)k\leq j\leq ik-1\}$ and, if $\lceil\frac{n}{k}\rceil \not= \lfloor\frac{n}{k}\rfloor$, $V_{\lceil\frac{n}{k}\rceil} =\{ v_j : k\lfloor\frac{n}{k}\rfloor\leq j \leq n-1 \}$. \noindent {\bf Claim 1.} For every pair $v_i, v_j\in V(D)$ there is a $v_iv_j$-geodesic path $T$ such that for every $V_p\in P$, $|V_p\cap V(T\setminus v_j)|\leq 1$. \noindent Let $v_{rk+i}, v_{sk+j}\in V(D)$. If $r\not =s$, let $0\leq q\leq k-1$ and let $t$ be the minimum integer such that $(r+t)k + i +q \stackrel{n}{\equiv} sk+j$, and let $$T=(v_{rk+i}, v_{(r+1)k+i}, \dots, v_{(r+t)k+i}, v_{(r+t)k+i+q})$$ be a $v_{rk+i}v_{sk+j}$-path. Since $t$ is minimum and $0\leq q\leq k-1$, it follows that $T$ is a $v_{rk+i}v_{sk+j}$-geodesic path and, since $|V_p|\leq k$ for every $V_p\in P$, we get $|V_p\cap V(T\setminus v_{sk+j})|\leq 1$ for every $V_p\in P$. If $r=s$ and $i\leq j$, it follows that $v_{rk+i}v_{sk+j}\in A(D)$ and $T=(v_{rk+i}, v_{sk+j})$ is a $v_{rk+i}v_{sk+j}$-geodesic path with the desired properties. So, let us suppose $i\geq j+1$. Thus $$d(v_{rk+i}, v_{sk+j})=\lceil\frac{n-k(r-s)-(i-j)}{k}\rceil = \lceil\frac{n-(i-j)}{k}\rceil.$$ Let $t$ be the maximum integer such that $(r+t)k+i\leq n-1$. If $v_{(r+t)k+i}v_j\in A(D)$, then $$T=(v_{rk+i}, v_{(r+1)k+i}, \dots, v_{(r+t)k+i}, v_j, v_{k+j}, \dots, v_{sk+j})$$ is a $v_{rk+i}v_{sk+j}$-geodesic path such that for every $V_p\in P$, $|V_p\cap V(T\setminus v_{sk+j})|\leq 1$.
If $v_{(r+t)k+i}v_j\not\in A(D)$, since $i\geq j+1$ and $t$ is maximum, it follows that $v_{(r+t)k + i}\in V_{\lceil\frac{n}{k}\rceil-1}$ and $v_{(r+t)k + i}v_{n-1}\in A(D)$. Therefore $$T=(v_{rk+i}, \dots, v_{(r+t)k+i}, v_{n-1}, v_j, v_{k+j}, \dots, v_{sk+j})$$ is a $v_{rk+i}v_{sk+j}$-geodesic path such that for every $V_p\in P$, $|V_p\cap V(T\setminus v_{sk+j})|\leq 1$, and the claim follows.\\ Let $\rho:A(D)\rightarrow \{1, 2, \dots, \lceil\frac{n}{k}\rceil\}$ be the arc-coloring of $D$ defined as follows: for every $v_iv_j\in A(D)$, $\rho(v_iv_j)= p$ if and only if $v_i\in V_p$. Given $v_i, v_j\in V(D)$, from Claim 1 we see there is a $v_iv_j$-geodesic path $T$ such that for every $V_p\in P$, $|V_p\cap V(T\setminus v_j)|\leq 1$, which, by definition of $\rho$, is a rainbow path. From here it follows that $\rho$ is a strong rainbow coloring of $D$. Thus ${src^*}(D)\leq \lceil\frac{n}{k}\rceil$, and since ${\hbox{diam}}(D)= \lceil\frac{n-1}{k}\rceil$, for every $n$ such that $\lceil\frac{n}{k}\rceil = \lceil\frac{n-1}{k}\rceil$ we have $rc^*(D) = {src^*}(D) = \lceil\frac{n}{k}\rceil$. Hence, to end the proof it just remains to verify the case $n=kt+1$. Suppose there is a $t$-rainbow coloring $\rho$ of $D$, and consider $C_{(k)}$, the spanning subdigraph of $D$ induced by the $k$-jumps. Since $\gcd(k, kt+1)=1$ it follows that $C_{(k)}$ is a cycle, and each $v_iv_{i+tk}$-path in $C_{(k)}$ is the only $v_iv_{i+tk}$-path of length $t$ in $D$. Thus, since $\rho$ is a $t$-rainbow coloring, all $t$ colors must appear in every $v_iv_{i+tk}$-path in $C_{(k)}$. Therefore, for every $0\leq i\leq n-1$, $\rho(v_iv_{i+k}) = \rho(v_{i+kt}v_{i+k(t+1)})$, which, since $\gcd(k, kt+1)=1$, implies that every arc in $C_{(k)}$ receives the same color, a contradiction. Therefore $rc^*(D)\geq t+1 = \lceil\frac{n}{k}\rceil$ and, since ${src^*}(D)\leq \lceil\frac{n}{k}\rceil$, the theorem follows.
\end{proof} Now we turn our attention to circulant digraphs with a pair of generators $\{1, k\}$, with $2\leq k\leq n-1$. Observe that for every circulant digraph $C_n(\{a_1,a_2\})$, if $\gcd(a_1, n)= 1$ and $b\in {\mathbb{Z}}_n$ is the solution of $a_1x\stackrel{n}{\equiv}1$, then $C_n(\{1,ba_2\})\cong C_n(\{a_1,a_2\})$. From here, we obtain the following. \begin{cor}\label{srcC.n.1,2} For $k\ge1$, $rc^*(C_{2k+1}(\{1,k+1\}))={src^*}(C_{2k+1}(\{1,k+1\}))=k+1$. \end{cor} \begin{proof} By Theorem \ref{Cn[k]}, for every $n\geq 4$, $rc^*(C_{n}([2]))={src^*}(C_{n}([2]))= \lceil\frac{n}{2}\rceil$. Since $\gcd(k+1, 2k+1) =1$ and 2 is the solution of $(k+1)x\stackrel{2k+1}{\equiv}1$, then $C_{2k+1}(\{1,k+1\})\cong C_{2k+1}(\{1, 2\}) = C_{2k+1}([2])$ and the result follows.\end{proof} Observe that given any circulant digraph $C_n(\{1,k\})$, for every pair $v_i, v_j \in C_n(\{1,k\})$ we have $d(v_i,v_j)=d(v_0,v_{j-i})$ (where $j-i$ is taken modulo $n$). Thus ${\hbox{diam}}(C_n(\{1,k\}))= \max\{ d(v_0, v_i) : v_i\in V(C_{n}(\{1,k\}))\}$. Given two positive integers $i, k$, let us denote by $re(i, k)$ the residue of $i$ modulo $k$. \begin{lem}\label{diametro} Let $C_n(\{1,k\})$ be a circulant digraph and $V= \{v_0, \dots, v_{n-1}\}$ its set of vertices. If $n\geq (k-1)\lceil\frac{n}{k}\rceil$ then for every $v_i\in V$, $d(v_0, v_i) = \lfloor\frac{i}{k}\rfloor + re(i, k)$. Moreover, ${\hbox{diam}}(C_n(\{1,k\})) = \lfloor\frac{n-1}{k}\rfloor +\max\{ re(n-1, k), k-2\}$. \end{lem} \begin{proof} Let $v_i\in V$, let $P=(v_0=u_0, u_1, \dots, u_s=v_i)$ be a $v_0v_i$-geodesic path with a minimum number of $k$-jumps, and suppose $P$ has $p$ $k$-jumps and $q$ $1$-jumps. We may also suppose that the first $p$ steps of $P$ are $k$-jumps and the last $q$ are $1$-jumps. Thus $d(v_0, v_i)= p+q$. Since $P$ is geodesic, it follows that $q\leq k-1$ and therefore $p\geq \lfloor\frac{i}{k}\rfloor$.
Hence $v_{k\lfloor\frac{i}{k}\rfloor}=u_{\lfloor\frac{i}{k}\rfloor}\in V(P)$ and the subpath $$Q=(v_{k\lfloor\frac{i}{k}\rfloor}=u_{\lfloor\frac{i}{k}\rfloor}, \dots, u_j,\dots,u_s=v_i)$$ is a $v_{k\lfloor\frac{i}{k}\rfloor}v_i$-geodesic path with $p'= p-\lfloor\frac{i}{k}\rfloor$ $k$-jumps and $d(v_{k\lfloor\frac{i}{k}\rfloor}, v_i)= p'+q\leq i-k\lfloor\frac{i}{k}\rfloor= re(i, k)$. If $p>\lfloor\frac{i}{k}\rfloor$ then $q< re(i,k)$ and, since $re(i, k) < k$, it follows that $p'\geq \lceil\frac{n}{k}\rceil$. Therefore, if $m= k\lceil\frac{n}{k}\rceil-n$, then $v_{k\lfloor\frac{i}{k}\rfloor+m}=u_{\lfloor\frac{i}{k}\rfloor+\lceil\frac{n}{k}\rceil}\in V(Q)$ and the subpath $$(v_{k\lfloor\frac{i}{k}\rfloor}=u_{\lfloor\frac{i}{k}\rfloor}, \dots, u_j,\dots,u_{\lfloor\frac{i}{k}\rfloor+\lceil\frac{n}{k}\rceil}=v_{k\lfloor\frac{i}{k}\rfloor+m})$$ is a $v_{k\lfloor\frac{i}{k}\rfloor}v_{k\lfloor\frac{i}{k}\rfloor+m}$-geodesic path of $\lceil\frac{n}{k}\rceil$ $k$-jumps and $d(v_{k\lfloor\frac{i}{k}\rfloor}, v_{k\lfloor\frac{i}{k}\rfloor+m})= \lceil\frac{n}{k}\rceil\leq m$. Since $n\geq (k-1)\lceil\frac{n}{k}\rceil$ it follows that $\lceil\frac{n}{k}\rceil\geq k\lceil\frac{n}{k}\rceil-n=m$ and therefore $d(v_{k\lfloor\frac{i}{k}\rfloor}, v_{k\lfloor\frac{i}{k}\rfloor+m})= m$. Thus, replacing in $P$ the subpath $$(v_{k\lfloor\frac{i}{k}\rfloor}=u_{\lfloor\frac{i}{k}\rfloor}, \dots, u_j,\dots,u_{\lfloor\frac{i}{k}\rfloor+\lceil\frac{n}{k}\rceil}=v_{k\lfloor\frac{i}{k}\rfloor+m})$$ by the subpath $$(v_{k\lfloor\frac{i}{k}\rfloor},v_{k\lfloor\frac{i}{k}\rfloor+1}, \dots,v_{k\lfloor\frac{i}{k}\rfloor+m})$$ we obtain a $v_0v_i$-geodesic path with fewer $k$-jumps than $P$, which is a contradiction. Thus $p=\lfloor\frac{i}{k}\rfloor$ and therefore $q=re(i,k)$, which implies that $d(v_0, v_i) = \lfloor\frac{i}{k}\rfloor + re(i, k)$, and the first part of the result follows.
For the second part, first observe that $d(v_0, v_{n-1})=\lfloor\frac{n-1}{k}\rfloor + re(n-1, k)$ and $d(v_0, v_{(k\lfloor\frac{n-1}{k}\rfloor) -1})=\lfloor\frac{n-1}{k}\rfloor + k-2 $, thus ${\hbox{diam}}(C_n(\{1,k\})) \geq \lfloor\frac{n-1}{k}\rfloor +\max\{ re(n-1, k), k-2\}$. If there were $v_i\in V$ such that $d(v_0, v_i)> \lfloor\frac{n-1}{k}\rfloor + k-2$, it would follow that $n-1\geq i\geq k\lfloor\frac{n-1}{k}\rfloor $, but then $d(v_0, v_i)\leq d(v_0, v_{n-1})=\lfloor\frac{n-1}{k}\rfloor + re(n-1, k)$, and the result follows. \end{proof} \begin{teo}\label{srcC.2k.1,k,k+1} For every integer $k\ge 2$\\ \hspace*{1.15cm}$(i)$ $rc^*(C_{2k}(\{1,k\}))={src^*}(C_{2k}(\{1,k\}))=k$.\\ \hspace*{1cm}$(ii)$ $rc^*(C_{2k}(\{1,k+1\}))={src^*}(C_{2k}(\{1,k+1\}))=k$. \end{teo} \begin{proof} Let $V=\{v_0, \dots, v_{2k-1}\}$ be the set of vertices of $C_{2k}(\{1,k\})$. By Lemma \ref{diametro} we see that $k={\hbox{diam}}(C_{2k}(\{1,k\}))$ and therefore $k\leq rc^*(C_{2k}(\{1,k\}))$. Let $\{V_0,\dots,V_{k-1}\}$ be a partition of $V$, where $V_r=\{v_r,v_{r+k}\}$ for $0\le r\le k-1$, and define a $k$-colouring $\rho$ such that for every $0\leq r\leq k-1$, $(u,u')\in\rho^{-1}(r)$ if $u\in V_r$. Let $v_i, v_j\in V$ and suppose $i+q+pk \stackrel{n}{\equiv} j$, where $d(v_i, v_j) = p+q$ and $q\leq k-1$. Observe that, since $q<k$, $v_iC_{(1)}v_{i+q}C_{(k)}v_{i+pk+q}$ is a rainbow $v_iv_j$-path and, by Lemma \ref{diametro}, it is a $v_iv_j$-geodesic. Therefore ${src^*}(C_{2k}(\{1,k\}))\le k$ and (i) follows. For (ii), let $V =\{v_0, \dots, v_{2k-1}\}$ be the set of vertices of $C_{2k}(\{1,k+1\})$ and let $\{V_0,\dots,V_{k-1}\}$ be as before. By Lemma \ref{diametro} it follows that ${\hbox{diam}}(C_{2k}(\{1,k+1\}))=k$, which implies $k\leq rc^*(C_{2k}(\{1,k+1\}))$. Now let $\rho$ be a $k$-colouring such that $(u,u')\in\rho^{-1}(r)$ if $u\in V_r$.
Since $N^+(u)=V_{r+1}$ for each $u\in V_r$ (taking $r+1$ modulo $k$), it follows that every path of length at most $k$ is rainbow; in particular every geodesic path is rainbow. Thus $k\geq {src^*}(C_{2k}(\{1,k+1\}))$ and (ii) follows. \end{proof} \begin{teo} For every integer $k\ge3$ we have $${src^*}(C_{(k-1)^2}(\{1,k\}))=rc^*(C_{(k-1)^2}(\{1,k\}))=2k-4.$$ \end{teo} \begin{proof} By Lemma \ref{diametro} we see that ${\hbox{diam}}(C_{(k-1)^2}(\{1,k\}))=2k-4$ and therefore $rc^*(C_{(k-1)^2}(\{1,k\}))\geq 2k-4$. Let $V =\{v_0, \dots , v_{(k-1)^2-1}\}$ be the set of vertices of $C_{(k-1)^2}(\{1,k\})$ and, for each $i$ with $0\leq i < (k-1)^2 $, identify the vertex $v_i$ with the pair $\langle \lfloor\frac{i}{k-1}\rfloor, re(i, k-1)\rangle$. Let $\mathcal{V}= \{V_0,\dots,V_{k-2}\}$ be a partition of $V$, where $V_r=\{\langle r,s\rangle\mid 0\le s\le k-2\}$ for $0\le r\le k-2$, and let $\rho$ be a $(2k-4)$-colouring defined as follows: for each $r$ with $0\le r\leq k-1$, \begin{enumerate} \item the arc $(\langle r,s\rangle\langle r+1, s\rangle)$, with $0\leq s\leq k-2$, receives colour $r$; \item the arcs $(\langle r,0\rangle\langle r,1\rangle)$ and $(\langle r,k-2\rangle\langle r+1,0\rangle)$ receive colour $r$; \item the arc $(\langle r,s\rangle\langle r,s+1\rangle)$, with $1\leq s\leq k-3$, receives colour $k-2+s$. \end{enumerate} Observe that every path of length at most $k-1$ in $C_{(k)}$ is rainbow and, except for those paths of length $k-1$ in $C_{(1)}$ starting at $\langle r, 0\rangle$ (with $0\leq r<k-1$), every path in $C_{(1)}$ of length at most $k-1$ is rainbow. From the structure of $\rho$ we see that to prove that $\rho$ is a strong rainbow colouring we just need to show that for every $v\in V_0$ and every $w\in V$ there is a rainbow $vw$-geodesic path. Let $\langle 0, s_0\rangle \in V_0$ and $\langle r, s\rangle\in V_r$.
Since $\langle 0, s_0\rangle = v_{s_0}$ and $\langle r, s\rangle = v_{r(k-1)+s}$, by Lemma \ref{diametro}, $$d(v_{s_0}, v_{r(k-1)+s}) = \lfloor \frac{(k-1)r+ s-s_0}{k}\rfloor + re((k-1)r+s-s_0, k)$$ (taking $(k-1)r+s -s_0$ modulo $(k-1)^2$). Thus, if $t= \lfloor \frac{(k-1)r+ s-s_0}{k}\rfloor$, $$P = \langle 0, s_0\rangle C_{(k)}\langle \lfloor\frac{s_0+tk}{k-1}\rfloor , re(s_0+tk, k-1)\rangle C_{(1)} \langle r, s \rangle$$ is a geodesic path. The subpath in $C_{(k)}$ receives colours $j$, with $0\leq j \leq \lfloor \frac{s_0+tk}{k-1}\rfloor -1\leq k-2$, and the subpath in $C_{(1)}$ receives colours $i$, with $k-1\leq i\leq 2k-3$ or $i=\lfloor \frac{s_0+tk}{k-1}\rfloor $. Thus, if $P$ is not rainbow, then the subpath in $C_{(1)}$ must be of length $k-1$, $\langle\lfloor\frac{s_0+tk}{k-1}\rfloor , re(s_0+tk, k-1)\rangle = \langle r-1, 0\rangle$ and $\langle r, s\rangle = \langle r, 0\rangle$. If $r-1= 0$ it follows that $\langle 0, s_0\rangle = \langle 0, 0\rangle$, and the path $Q=\langle 0, 0\rangle C_{(k)}\langle 1, 0\rangle$ of $k$-jumps, of length $k-1$, is a rainbow geodesic. If $r-1=1$, then $(\langle 0, s_0\rangle\langle 1, 0\rangle)$ would have to be a $k$-jump, which is not possible. If $r-1\geq 2$, let $Q$ be the rainbow geodesic obtained by the concatenation of the paths $\langle 0, s_0\rangle C_{(k)}\langle r-3, k-2\rangle$ (which receives colours between $0$ and $r-4$); the arcs $(\langle r-3, k-2\rangle, \langle r-2, 0\rangle)$ and $(\langle r-2, 0\rangle, \langle r-1, 1\rangle)$ (with colours $r-3$ and $r-2$, respectively); and $\langle r-1, 1\rangle C_{(1)}\langle r, 0\rangle$ (which receives the colours $r-1$ and $k-1, \dots, 2k-3$).
Hence $P$ or $Q$ is a rainbow $\langle 0, s_0\rangle\langle r, s\rangle$-geodesic, and the theorem follows.\end{proof} \begin{teo}\label{src.n=ak.rest} If $n=a_nk$ with $a_n\ge k-1\ge2$, then $${src^*}(C_n(\{1,k\}))=rc^*(C_n(\{1,k\}))=a_n+k-2.$$ \end{teo} \begin{proof} By Lemma \ref{diametro} we see that ${\hbox{diam}}(C_n(\{1,k\})) = a_n + k-2$, so to prove the result it just remains to show that ${src^*}(C_n(\{1,k\}))\leq a_n+k-2$. Let $V = \{v_0, \dots, v_{n-1}\}$ be the set of vertices of $C_n(\{1,k\})$ and, for each $i$ with $0\leq i < n $, identify the vertex $v_i$ with the pair $\langle \lfloor\frac{i}{k}\rfloor, re(i, k)\rangle$. Let $\{V_0,\dots,V_{a_n-1}\}$ be a partition of $V$, where $V_r=\{ \langle r, s\rangle : 0\leq s< k\}$ for $0\le r < a_n$, and let $\rho$ be an $(a_n+k-2)$-colouring defined as follows: for each $r$ with $0\le r\le a_n-1$, \begin{enumerate} \item the arc $(\langle r,s\rangle\langle r+1,s\rangle)$, with $0\leq s <k$, receives colour $r$; \item if $r\geq k-2$, the arcs $(\langle r,0\rangle\langle r,1\rangle)$ and $(\langle r,k-1\rangle\langle r+1,0\rangle)$ receive colour $r$ and, for each $1\le j\le k-2$, the arc $(\langle r,j\rangle\langle r,j+1\rangle)$ receives colour $a_n-1+j$; \item if $r\leq k-3$, the arc $(\langle r,k-2-r\rangle\langle r,k-1-r\rangle)$ receives colour $r$; for each $0\leq j\leq k-3-r$ the arc $(\langle r,j\rangle\langle r,j+1\rangle)$ receives colour $a_n+r+j$; for each $k-1-r\leq j\leq k-2$ the arc $(\langle r,j\rangle\langle r,j+1\rangle)$ receives colour $a_n+j-(k-1-r)$; and the arc $(\langle r, k-1\rangle\langle r+1, 0\rangle)$ receives colour $a_n+r$. \end{enumerate} Observe that for every pair $1\leq r, r' < a_n$ the path $\langle r, s\rangle C_{(k)}\langle r' , s\rangle$ is a rainbow path with colours $r, r+1,\dots , r'-1$ (taking the sequence modulo $a_n$). Also, every path $P$ of length at most $k-1$ in $C_{(1)}$ is rainbow.
Moreover, if $V(P)\subseteq V_r$ for some $0\leq r < a_n$, then the colours appearing in $P$ are contained in $\{a_n, \dots, a_n+(k-3)\}\cup \{r\}$; and if $P$ starts in $V_r$ and ends in $V_{r+1}$, the colours of $P$ are in $\{a_n, \dots, a_n+(k-3)\}\cup \{r, r+1\}$. Let $\langle r,s\rangle$ and $\langle r',s'\rangle$ be distinct vertices of $C_n(\{1,k\})$. If $r\neq r'$ it is not hard to see that either $\langle r,s\rangle C_{(k)}\langle r',s\rangle C_{(1)}\langle r',s'\rangle$ (if $s\leq s'$) or $\langle r,s\rangle C_{(k)}\langle r'-1,s\rangle C_{(1)}\langle r',s'\rangle$ (if $s> s'$) is a rainbow $\langle r,s\rangle\langle r',s'\rangle$-path. If $r=r'$ and $s< s'$ we see that $\langle r,s\rangle C_{(1)}\langle r,s'\rangle$ is a rainbow path. Let us suppose $r=r'$ and $s> s'$. If no arc $(\langle r,t\rangle\langle r,t+1\rangle)$, with $0\leq t < s'$, receives colour $r$, then the path $\langle r-1,s\rangle C_{(1)}\langle r,s'\rangle$ receives only colours in $\{a_n, \dots, a_n+(k-3)\}\cup \{r-1\}$, and therefore $\langle r,s\rangle C_{(k)}\langle r-1,s\rangle C_{(1)}\langle r,s'\rangle$ is a rainbow path. If some arc $(\langle r,t\rangle\langle r,t+1\rangle)$, with $0\leq t < s'$, receives colour $r$, by definition of $\rho$ it must be either $(\langle r,0\rangle\langle r,1\rangle)$ (if $r\geq k-2$) or $(\langle r,k-2-r\rangle\langle r,k-1-r\rangle)$ (if $r\leq k-3$). In the first case, in the path $P=\langle r,s\rangle C_{(k)}\langle a_n-1,s\rangle C_{(1)}\langle 0,s'\rangle C_{(k)}\langle r,s'\rangle$ the $k$-jumps receive colours $\{0, \dots, r, \dots, a_n-2\}$ and, by definition of $\rho$, the only $1$-jump of colour $0$ is $(\langle 0, k-2\rangle\langle 0, k-1\rangle)$. Thus, since $s'<s\leq k-1$, the colours appearing in $\langle a_n-1,s\rangle C_{(1)}\langle 0,s'\rangle$ are contained in $\{a_n, \dots, a_n+(k-3)\}\cup \{a_n-1\}$ and therefore $P$ is rainbow.
In the second case, in the path $P=\langle r,s\rangle C_{(k)}\langle k-2-s,s\rangle C_{(1)}\langle k-1-s,s'\rangle C_{(k)}\langle r,s'\rangle $ the $k$-jumps receive colors $\{0, \dots, k-3-s, k-1-s, \dots, a_n-1\}$ and, since $s>s'>t\geq 0$, we have $k-1-s\leq k-3$ and therefore the only $1$-jump of color $k-1-s$ is $(\langle k-1-s, s-1\rangle\langle k-1-s, s\rangle)$. Thus the colors in $\langle k-2-s,s\rangle C_{(1)}\langle k-1-s,s'\rangle$ are contained in $\{a_n, \dots, a_n+(k-3)\}\cup \{k-2-s\}$ and $P$ is a rainbow path. In all cases, Lemma \ref{diametro} shows that the paths above are geodesic, and the result follows. \end{proof}
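The colouring $\rho$ defined in the proof above is given by explicit rules, so it can be checked computationally. The following Python sketch (the function name and the arc representation are ours) builds $\rho$ for $C_n(\{1,k\})$ with $n = a_nk$, identifying $v_i$ with $\langle \lfloor i/k\rfloor, re(i,k)\rangle$, and verifies that every arc is coloured and that exactly $a_n+k-2$ colours are used.

```python
# Sketch: construct the colouring rho from the proof above and count colours.
# Vertex v_i is identified with the pair (r, s) = (i // k, i % k).
def rho(an, k):
    """Return a dict mapping each arc ((r, s), (r2, s2)) to its colour."""
    colour = {}
    for r in range(an):
        # k-jumps: the arc (<r,s>, <r+1,s>) receives colour r (rule 1).
        for s in range(k):
            colour[((r, s), ((r + 1) % an, s))] = r
        if r >= k - 2:  # rule 2
            colour[((r, 0), (r, 1))] = r
            colour[((r, k - 1), ((r + 1) % an, 0))] = r
            for j in range(1, k - 1):
                colour[((r, j), (r, j + 1))] = an - 1 + j
        else:  # rule 3, rows r <= k - 3
            colour[((r, k - 2 - r), (r, k - 1 - r))] = r
            for j in range(0, k - 2 - r):
                colour[((r, j), (r, j + 1))] = an + r + j
            for j in range(k - 1 - r, k - 1):
                colour[((r, j), (r, j + 1))] = an + j - (k - 1 - r)
            colour[((r, k - 1), ((r + 1) % an, 0))] = an + r
    return colour

for an, k in [(4, 3), (5, 4), (6, 3)]:   # each satisfies a_n >= k - 1 >= 2
    c = rho(an, k)
    assert len(c) == 2 * an * k                 # all arcs of C_n({1,k}) coloured
    assert len(set(c.values())) == an + k - 2   # exactly a_n + k - 2 colours
```

Running the checks for a few admissible pairs $(a_n, k)$ confirms the colour count claimed in the theorem.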
https://arxiv.org/abs/2101.09711
Testing for subsphericity when $n$ and $p$ are of different asymptotic order
We extend a classical test of subsphericity, based on the first two moments of the eigenvalues of the sample covariance matrix, to the high-dimensional regime where the signal eigenvalues of the covariance matrix diverge to infinity and either $p/n \rightarrow 0$ or $p/n \rightarrow \infty$. In the latter case we further require that the divergence of the eigenvalues is suitably fast in a specific sense. Our work can be seen to complement that of Schott (2006) who established equivalent results in the case $p/n \rightarrow \gamma \in (0, \infty)$. As our second main contribution, we use the test to derive a consistent estimator for the latent dimension of the model. Simulations and a real data example are used to demonstrate the results, providing also evidence that the test might be further extendable to a wider asymptotic regime.
\section{Introduction}\label{sec:intro} The objective of principal component analysis (PCA), and dimension reduction in general, is to extract a low-dimensional signal from noise-corrupted observed data. The most basic statistical model for the problem is as follows. Assume that $S_n$ is the sample covariance matrix of a random sample from a $p$-variate normal distribution whose covariance matrix has the eigenvalues $\lambda_1 \geq \cdots \geq \lambda_d > \sigma^2 = \cdots = \sigma^2$, exhibiting a ``spiked'' structure. The data can thus be seen to be generated by contaminating a random sample residing in a $d$-dimensional subspace with independent normal noise having the covariance matrix $\sigma^2 I_p$. This signal subspace can be straightforwardly estimated with PCA as long as one knows its dimension $d$, which is, however, usually unknown in practice. Numerous procedures for determining the dimension have been proposed, see \cite{jolliffe2002principal} for a review and, e.g., \cite{schott2006high,nordhausen2016asymptotic,virta2019estimating} for asymptotic tests and \cite{beran1985bootstrap,dray2008number,luo2016combining} for bootstrap- and permutation-based techniques. Perhaps the simplest of these methods is the test of subsphericity based on the test statistics, \begin{align*} T_{n,j} = \frac{m_{2,p-j}(S_n)}{m_{1,p-j}(S_n)^2} - 1, \quad j = 0, \ldots, p - 1, \end{align*} where $m_{\ell,r}(A)$ denotes the $\ell$th sample moment of the last $r$ eigenvalues of the symmetric matrix $A$. Under the null hypothesis $H_{0k}: d = k$ that the signal dimension equals $k$, the limiting null distribution of $T_{n,k}$ is \begin{align}\label{eq:low_dim_convergence} \frac{1}{2} n (p - k) T_{n,k} \rightsquigarrow \chi^2_{\frac{1}{2}(p - k)(p - k + 1) - 1}, \end{align} as $n \rightarrow \infty$, see, e.g., \cite{schott2006high}.
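For concreteness, the statistic $T_{n,j}$ is cheap to compute from the spectrum of $S_n$. The following Python sketch (function name is ours) evaluates it and sanity-checks its basic behaviour: $T_{n,j}$ is a normalised spectral variance of the retained eigenvalues, so it vanishes exactly when they are all equal and is positive otherwise.

```python
import numpy as np

def T_stat(S, j):
    """T_{n,j} = m_{2,p-j}(S) / m_{1,p-j}(S)^2 - 1, computed from the
    p - j smallest eigenvalues of the symmetric matrix S."""
    lam = np.sort(np.linalg.eigvalsh(S))[: S.shape[0] - j]
    return np.mean(lam ** 2) / np.mean(lam) ** 2 - 1

# A spherical covariance gives T = 0 exactly ...
assert abs(T_stat(np.eye(5), 0)) < 1e-12
# ... a single spike inflates T at j = 0, and dropping it restores sphericity.
S = np.diag([10.0, 1.0, 1.0, 1.0, 1.0])
assert T_stat(S, 0) > 1
assert abs(T_stat(S, 1)) < 1e-12
```

Under $H_{0k}$ one would then compare $n(p-k)T_{n,k}/2$ to the $\chi^2$ quantiles of \eqref{eq:low_dim_convergence} (fixed $p$, large $n$).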
Hence, the dimension $d$ can in practice be determined by testing the sequence of null hypotheses $H_{00}, H_{01}, \ldots$ and taking the estimate of $d$ to be the smallest $k$ for which $H_{0k}$ is not rejected. By examining the power of the tests, \cite{nordhausen2016asymptotic} concluded that this procedure yields a consistent estimate of $d$ (with a suitable choice of test levels). The previous test assumes a fixed dimension $p$ and, in the face of modern large and noisy data sets with great room for dimension reduction, it is desirable to extend the test to the high-dimensional regime where $p = p_n$ is a function of $n$ and we have $p_n \rightarrow \infty$ as $n \rightarrow \infty$. This is discussed in Section \ref{sec:theory} where our first main contribution, extending the test based on \eqref{eq:low_dim_convergence} to the high-dimensional regime where either the sample size or the dimension asymptotically dominates the other, is also presented. Section \ref{sec:power} introduces our second main contribution, a power study of the test, using which we construct a consistent estimator for the true latent dimension. In Section \ref{sec:simulation} we demonstrate our results using simulations and a real data example and, in Section \ref{sec:discussion}, we finally conclude with some discussion. \section{High-dimensional testing of subsphericity}\label{sec:theory} The behaviour of most high-dimensional statistical procedures depends crucially on the interplay between $n$ and $p_n$ and the most common approach in the literature is to assume that their growth rates are proportional in the sense that $p_n/n \rightarrow \gamma \in (0, \infty)$ as $n \rightarrow \infty$, see, e.g., \cite{yao2015sample}. The limiting ratio $\gamma$ is also known as the \textit{concentration} of the regime. 
In \cite{schott2006high}, the test of subsphericity discussed in Section \ref{sec:intro} is extended to this asymptotic regime under the following two assumptions (note that in Assumption \ref{assu:eigenvalues} the signal dimension $d$ is a constant not depending on $n$). \begin{assumption}\label{assu:distribution} The observations $x_1, \ldots , x_n$ are a random sample from $\mathcal{N}_{p_n}(\mu_n, \Sigma_n)$ for some $\mu_n \in \mathbb{R}^{p_n}$ and some positive-definite $\Sigma_n \in \mathbb{R}^{p_n \times p_n}$. \end{assumption} \begin{assumption}\label{assu:eigenvalues} The eigenvalues of the matrix $\Sigma_n$ are $\lambda_{n1} \geq \cdots \geq \lambda_{nd} > \sigma^2 = \cdots = \sigma^2$ for some $\sigma^2 > 0$. Moreover, the eigenvalues $\lambda_{nk}$, $k = 1, \ldots , d$, satisfy $\lambda_{nk} \rightarrow \infty$. \end{assumption} In fact, \cite{schott2006high} additionally required that the quantities $\lambda_{nk}/\mathrm{tr}(\Sigma_n)$ converge to positive constants summing to less than unity, but applying our Lemma \ref{lem:wishart_2} in the proof of their Theorem 4 reveals that this condition is unnecessary, see \ref{sec:schott} for details. Hence, denoting by $S_n$ the sample covariance matrix of the observations, under Assumptions \ref{assu:distribution} and \ref{assu:eigenvalues} and $\gamma \in (0, \infty)\setminus\{ 1\} $ (see \ref{sec:schott} for more details on the exclusion of the case $\gamma = 1$), Theorem~4 in \cite{schott2006high} establishes that the test statistic, \begin{align*} T_{n, j} := \frac{m_{2,p_n - j}(S_n)}{m_{1,p_n - j}(S_n)^2} - 1, \end{align*} satisfies $ (n - d - 1) T_{n, d} - (p_n - d) \rightsquigarrow \mathcal{N}(1, 4)$ where $d$ is the signal dimension. 
As remarked by \cite{schott2006high}, this limiting result is consistent with its low-dimensional equivalent \eqref{eq:low_dim_convergence} in the sense that, as $p \rightarrow \infty$, \begin{align*} \frac{2}{p - d}\chi^2_{\frac{1}{2}(p - d)(p - d + 1) - 1} - (p - d) \rightsquigarrow \mathcal{N}(1, 4). \end{align*} A crucial condition that allows the above limiting result is the divergence of the spike eigenvalues $\lambda_{n1}, \ldots , \lambda_{nd}$ of the covariance matrix to infinity in Assumption \ref{assu:eigenvalues}. Indeed, the spikes are usually taken to be constant in the literature on high-dimensional PCA, see, e.g., \cite{baik2006eigenvalues,johnstone2018pca}. However, requiring the spikes to diverge to infinity is rather natural and reflects the idea that only a few principal components are sufficient to recover a large proportion of the total variance even in high dimensions. See, for example, \cite{yata2018test}, who use cross-data-matrices to detect spiked principal components with divergent variance, and the references therein. As our first contribution, we extend the result of \cite{schott2006high} outside of the regime $p_n/n \rightarrow \gamma \in (0, \infty)$, to the extreme cases $\gamma \in \{ 0 , \infty \} $. The latter have been less studied in the high-dimensional literature, but see, for example, \cite{karoui2003largest,birke2005note,yata2009pca,jung2009pca}, the last of which considers the extreme asymptotic scenario where the dimension diverges to infinity but the sample size remains fixed. In our treatment of the case $\gamma = \infty $, we require the additional condition that $p_n/(n \sqrt{\lambda_{nd}}) \rightarrow 0$ as $n \rightarrow \infty$, i.e., the dimension must not diverge too fast compared to the sample size and the magnitude of the spike $\lambda_{nd}$ corresponding to the weakest signal.
Assumptions of this form are rather common in high-dimensional PCA when the spikes are taken to diverge, see, e.g., \cite{shen2016general} who saw $n$, $\lambda_{nk}$ and $p_n$ as three competing forces affecting the consistency properties of PCA, $n$ and $\lambda_{nk}$ contributing information about the signals and $p_n$ decreasing the relative share of information in the sample by introducing more noise to the model. The condition $p_n/(n \sqrt{\lambda_{nd}}) \rightarrow 0$ can thus be interpreted as requiring that even the weakest of the spike principal components has asymptotically strong enough signal to be detected. The extension of the test to the previous regimes is given below in Theorem~\ref{theo:goes_to_zero}. The main line of proof is based on extending the work of \cite{birke2005note}, who considered testing of sphericity in the cases $\gamma \in \{ 0 , \infty \}$, to testing of subsphericity. In this sense, our work is to \cite{birke2005note} what \cite{schott2006high} is to \cite{ledoit2002some}, who studied tests of sphericity in the case where $\gamma \in (0, \infty) $ and on whose work \cite{schott2006high} based their proof. \begin{theorem}\label{theo:goes_to_zero} Under Assumptions \ref{assu:distribution} and \ref{assu:eigenvalues}, if, as $n \rightarrow \infty$, either \begin{enumerate} \item[i)] $p_n/n \rightarrow 0$, or, \item[ii)] $p_n/n \rightarrow \infty $ and $p_n/(n \sqrt{\lambda_{nd}}) \rightarrow 0$, then, \end{enumerate} \begin{align*} (n - d - 1) T_{n, d} - (p_n - d) \rightsquigarrow \mathcal{N}(1, 4). \end{align*} \end{theorem} \section{Power analysis and dimension estimation}\label{sec:power} A natural question is whether the test of subsphericity can be used to consistently estimate the latent dimension $d$ under the high-dimensional Gaussian model. 
In a low-dimensional setting, this is accomplished by chaining together tests for $H_{0k}: d = k$ for different values of $k$ in some specific order: In forward testing one sequentially tests for $H_{00}, H_{01}, \ldots$ and takes as the estimate of $d$ the smallest $k$ for which $H_{0k}$ is not rejected. In backward testing, the order is $H_{0(p-1)}, H_{0(p-2)}, \ldots$ and the estimate is the largest $k$ for which $H_{0(k-1)}$ is rejected. The two strategies can also be combined into a ``divide-and-conquer'' approach where one starts from the middle of the search interval and subsequently halves it with each test, this process often terminating in fewer tests than either forward or backward testing. However, in the high-dimensional setting where our working assumption is that the number of latent signals is diminutive compared to the overall dimensionality (finite $d$ vs. $p_n \rightarrow \infty$), the most economical choice is likely forward testing. In the following we show that this strategy indeed leads, under suitable assumptions, to a consistent estimate of the dimension $d$ in various high-dimensional regimes. Even though the equivalent of Theorem \ref{theo:goes_to_zero} for $ \gamma \in (0, \infty)\setminus\{ 1\} $ was established already in \cite{schott2006high}, the following results are novel also in that case. We use the notation $g_{n,k} := (n - k - 1) T_{n, k} - (p_n - k)$, $k = 0, \ldots , p_n - 1$, for the test statistic.
\begin{theorem}\label{theo:power_for_small_k} Under Assumptions \ref{assu:distribution} and \ref{assu:eigenvalues}, if, as $n \rightarrow \infty$, either \begin{enumerate} \item[i)] $p_n/n \rightarrow \gamma \in [0, \infty) \setminus \{ 1 \}$ and $p_n/\lambda_{nd}^2 \rightarrow 0$, or, \item[ii)] $p_n/n \rightarrow \infty $, $p_n/(n \sqrt{\lambda_{nd}}) \rightarrow 0$ and $p_n/(\sqrt{n} \lambda_{nd}) \rightarrow 0$, then, \end{enumerate} we have, for each $k = 0, \ldots , d - 1$ and for all $ M > 0 $, that \begin{align*} \mathbb{P}( g_{n,k}/n \leq M ) \rightarrow 0. \end{align*} \end{theorem} Theorem \ref{theo:power_for_small_k} shows that the test for $H_{0k}$ is consistent under the alternative hypothesis that the true dimension $d > k$ (the power of the test in the opposite case $d < k$ plays no role in the forward testing and, hence, is not studied here). As a straightforward corollary we then obtain the consistency of the forward testing. \begin{corollary}\label{cor:dimension_estimation} Under the assumptions of Theorem \ref{theo:power_for_small_k}, let $c_n$ be any sequence of real numbers satisfying $c_n \rightarrow \infty$ and $c_n = \mathcal{O}(n)$ as $n \rightarrow \infty$. Then, \begin{align*} \hat{d} := \min \{ k = 0, \ldots , p_n - 1: g_{n, k} \leq c_n \} \rightarrow_p d. \end{align*} \end{corollary} Choosing a sequence $c_n$ for which the forward testing estimator $\hat{d}$ performs well in finite samples is a highly non-trivial task and, thus, we advocate using in practice the alternative estimator, \begin{align}\label{eq:practical_estimator} \hat{d} := \min \{ k = 0, \ldots , p_n - 1: | (g_{n, k} - 1)/2 | \leq z_{1-\alpha/2} \}, \end{align} where $z_{1-\alpha/2}$ is the upper $\alpha/2$ quantile of the standard normal distribution, see, e.g., \cite{nordhausen2016asymptotic} for a similar modification. 
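A minimal Python sketch of the forward-testing estimator \eqref{eq:practical_estimator} could look as follows. The function names and the simulated example are ours, and the constant $1.959964$ stands in for $z_{0.975}$ (i.e., $\alpha = 0.05$).

```python
import numpy as np

def T_stat(eigs, k):
    """T_{n,k} = m2 / m1^2 - 1 over the p - k smallest eigenvalues."""
    lam = np.sort(eigs)[: len(eigs) - k]
    return np.mean(lam ** 2) / np.mean(lam) ** 2 - 1

def forward_dim(X, z=1.959964):
    """Smallest k with |(g_{n,k} - 1)/2| <= z_{1-alpha/2}, where
    g_{n,k} = (n - k - 1) T_{n,k} - (p - k)."""
    n, p = X.shape
    eigs = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    for k in range(p):
        g = (n - k - 1) * T_stat(eigs, k) - (p - k)
        if abs((g - 1) / 2) <= z:
            return k
    return p - 1

# Illustrative data: two strong spikes (variances 200 and 100) on top of
# unit noise, with n = 200 observations in dimension p = 50.
rng = np.random.default_rng(1)
scales = np.sqrt(np.array([200.0, 100.0] + [1.0] * 48))
X = rng.standard_normal((200, 50)) * scales
d_hat = forward_dim(X)
assert d_hat >= 2   # the two spikes are far too large to be accepted as noise
```

With spikes of this magnitude the statistics $g_{n,0}$ and $g_{n,1}$ are in the thousands, so the procedure never stops before $k = 2$.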
The resulting procedure has asymptotically zero probability of underestimating the dimension (by Theorem \ref{theo:power_for_small_k}) and carries a Type I error probability of $\alpha$ of overestimating the dimension (by Theorem \ref{theo:goes_to_zero}). Finally, we briefly discuss the assumptions of Corollary \ref{cor:dimension_estimation} which, while stricter than those of Theorem \ref{theo:goes_to_zero}, can nevertheless be seen to be very natural. That is, regardless of the regime, the assumptions ask that the weakest of the signals is strong enough not to be masked by the noise (similarly as in part \textit{ii)} of Theorem~\ref{theo:goes_to_zero}). To gain a more concrete idea of the severity of the assumptions, let $p_n = c n^\alpha$ and $\lambda_{nd} = n^\beta$ for some $c \neq 1$ and $\alpha, \beta > 0$. Then, the feasible values of $(\alpha, \beta)$ form a polygon in $\mathbb{R}^2$ that is illustrated in the range $0 < \alpha \leq 2$ as the grey area in Figure~\ref{fig:rev_plot_1}. The plot reveals the intuitive fact that the effect of the dimension on the minimal feasible growth rate for the signal is stronger the faster the dimension increases (the slope of the curve is for $\alpha > 1.5$ four times higher than for $\alpha \in (0, 1)$). \begin{figure} \centering \includegraphics[width = 0.75\textwidth]{rev_plot_1} \caption{Assuming $p_n = c n^\alpha$ and $\lambda_{nd} = n^\beta$ for some $c \neq 1$ and $\alpha, \beta > 0$, the grey area in the plot contains the values of $(\alpha, \beta)$ for which the assumptions of Corollary \ref{cor:dimension_estimation} hold. The points S1--S4 correspond to the four settings used in the simulation study in Section \ref{sec:simulation}.} \label{fig:rev_plot_1} \end{figure} \section{Numerical examples}\label{sec:simulation} We first demonstrate the result of Theorem \ref{theo:goes_to_zero} using simulated data.
We consider four different settings, each of which assumes a sample of size $n$ from $\mathcal{N}_{p_n}(0, \Sigma_{n})$ where $\Sigma_n = \mathrm{diag}(\lambda_{n1}, \ldots , \lambda_{nd}, 1, \ldots , 1)$. Note that this simplified form of the normal distribution (zero location, unit noise variance and diagonal covariance) is without loss of generality as our test statistic is location, scale and rotation invariant. The settings are as follows: \begin{enumerate} \item $d = 3$, $n = 216$, $p_n = n^{3/4}$, $\lambda_{n1} = 3 n$, and $\lambda_{n2} = \lambda_{n3} = n^{1/2}$, \item $d = 3$, $n = 216$, $p_n = n^{3/4}$, $\lambda_{n1} = 3 n^{1/2}$, and $\lambda_{n2} = \lambda_{n3} = n^{1/4}$, \item $d = 2$, $n = 36$, $p_n = n^{3/2}$, $\lambda_{n1} = 2 n^2$ and $\lambda_{n2} = n^{3/2}$, \item $d = 2$, $n = 36$, $p_n = n^{3/2}$, $\lambda_{n1} = 2 n^2$ and $\lambda_{n2} = n^{1/4}$. \end{enumerate} Settings 1 and 2 fall within the case $\gamma = 0$, and their only difference is in the growth rates of the spikes. Settings 3 and 4 explore the case $\gamma = \infty$, the former satisfying the conditions of Theorem \ref{theo:goes_to_zero} and the latter not (again the only difference between them is in the growth rates of the spikes). In each case, we compute 10000 replicates of the test statistic $g_{n,d} = (n - d - 1) T_{n, d} - (p_n - d)$ and plot the obtained histogram superimposed with the density of the limiting distribution $\mathcal{N}(1, 4)$.
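The regime condition separating Settings 3 and 4 can be made concrete numerically: with $p_n = n^{3/2}$, the ratio $p_n/(n\sqrt{\lambda_{nd}})$ equals $n^{-1/4}$ when $\lambda_{nd} = n^{3/2}$ (Setting 3) but $n^{3/8}$ when $\lambda_{nd} = n^{1/4}$ (Setting 4). A few lines of Python (the function is an illustrative helper of ours) verify the opposite trends.

```python
def ratio(n, lam_exp):
    """p_n / (n * sqrt(lambda_{nd})) with p_n = n^(3/2), lambda_{nd} = n^lam_exp."""
    p = n ** 1.5
    lam = n ** lam_exp
    return p / (n * lam ** 0.5)

# Setting 3: lambda_{nd} = n^(3/2)  ->  ratio = n^(-1/4), which shrinks;
# Setting 4: lambda_{nd} = n^(1/4)  ->  ratio = n^(3/8), which diverges.
for n1, n2 in [(36, 360), (360, 3600)]:
    assert ratio(n2, 1.5) < ratio(n1, 1.5)
    assert ratio(n2, 0.25) > ratio(n1, 0.25)
```

This is why Theorem \ref{theo:goes_to_zero} applies to Setting 3 but not to Setting 4.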
\begin{figure} \centering \includegraphics[width = 1\textwidth]{rev_plot_2} \caption{The histograms of 10000 independent replicates of the test statistic $g_{n, d} = (n - d - 1) T_{n, d} - (p_n - d)$ under the four different settings, with the density of the limiting distribution $\mathcal{N}(1, 4)$ overlaid.} \label{fig:rev_plot_2} \end{figure} The results are shown in Figure \ref{fig:rev_plot_2} where we immediately make three observations: the convergence to the limiting distribution is (at least visually) rather fast in Settings 1--3, with the histograms exhibiting the Gaussian shape and being only slightly shifted to the left from their limiting density; Setting 1 does not appear to be significantly closer to Gaussianity than Setting 2 despite the increased amount of information in the former (in the form of more rapidly growing spike eigenvalues); in Setting 4 where the condition $p_n/(n \sqrt{\lambda_{nd}}) \rightarrow 0$ required by Theorem \ref{theo:goes_to_zero} is being violated, the histogram visibly has the correct shape and scale, but clearly underestimates the location. The difference between the true mean and the mean of the replicates in Setting 4 is approximately 1.35 and some testing (not shown here) reveals that, at least with the current parameter choices, the difference seems to stay roughly constant when $n$ is increased. Based on this, it seems possible that, even when $p_n/(n \sqrt{\lambda_{nd}}) \nrightarrow 0$, the limiting distribution of $g_{n,d}$ could be made to equal $\mathcal{N}(1, 4)$ with a suitable additive correction term $a_n$, which vanishes, $a_n \rightarrow 0$ as $n \rightarrow \infty$, when the conditions of Theorem \ref{theo:goes_to_zero} are satisfied. Next, we demonstrate how forward testing, as defined in \eqref{eq:practical_estimator}, can be used to estimate the signal dimension $d$ with a chain of hypothesis tests for the null hypotheses $H_{0k}: d = k$. 
That is, we sequentially test the null hypotheses $H_{00}, H_{01}, \ldots$ using, respectively, the test statistics $g_{n, 0}, g_{n, 1}, \ldots$ and take our estimate of the dimension to be the smallest $k$ for which $H_{0k}$ is not rejected. For each test, we use $\alpha = 0.05$, i.e., the two-sided $95\%$ critical regions of the limiting $\mathcal{N}(1, 4)$-distribution. We consider the same four settings as in the first simulation, but include an additional, larger sample size for each. Of the four settings, only the first and the third satisfy the assumptions of Corollary \ref{cor:dimension_estimation}, see Figure \ref{fig:rev_plot_1} on how the four settings are located with respect to the ``feasibility region'' of the assumptions. For simplicity, we report in Table \ref{tab:results_1} the rejection rates (over 10000 replicates) of the null hypotheses corresponding to the true dimension and the neighbouring dimensions only (the columns corresponding to the true dimension are shaded grey). In Settings 1 and 3 where the assumptions of Corollary \ref{cor:dimension_estimation} are satisfied, the test achieves rather accurately the nominal level at the true dimension and shows extremely good power at the smaller dimensions, as expected. Interestingly, the same conclusions are reached also in Setting 2 where the assumptions of Corollary \ref{cor:dimension_estimation} are not satisfied, implying that the assumptions, while sufficient, are not necessary for the consistency of the forward testing estimator. Finally, as expected, the procedure reaches neither a sufficient level nor power in Setting~4 where the conditions of Theorem \ref{theo:goes_to_zero} and Corollary \ref{cor:dimension_estimation} are not satisfied. \begin{table}[ht] \caption{The subtables give the observed rejection rates for different null hypotheses over 10000 independent replicates under each of the four settings. Two different sample sizes are considered for each setting. 
The columns corresponding to the true dimension are shaded grey.} \label{tab:results_1} \centering \begin{minipage}{.45\linewidth} \begin{tabular}{p{0.6cm}ccc} \multicolumn{4}{c}{Setting 1}\\ \hline $n$ & $H_{02}$ & $H_{03}$ & $H_{04}$ \\ \hline 216 & 1.000 & \cellcolor{black!15} 0.053 & 0.115 \\ 512 & 1.000 & \cellcolor{black!15} 0.051 & 0.138 \\ \hline \end{tabular} \end{minipage} \begin{minipage}{.45\linewidth} \begin{tabular}{p{0.6cm}ccc} \multicolumn{4}{c}{Setting 2}\\ \hline $n$ & $H_{02}$ & $H_{03}$ & $H_{04}$ \\ \hline 216 & 1.000 & \cellcolor{black!15} 0.054 & 0.122 \\ 512 & 1.000 & \cellcolor{black!15} 0.051 & 0.131 \\ \hline \end{tabular} \end{minipage} \begin{minipage}{.45\linewidth} \begin{tabular}{p{0.6cm}ccc} \multicolumn{4}{c}{Setting 3}\\ \hline $n$ & $H_{01}$ & $H_{02}$ & $H_{03}$ \\ \hline 36 & 1.000 & \cellcolor{black!15} 0.051 & 0.102 \\ 64 & 1.000 & \cellcolor{black!15} 0.053 & 0.124 \\ \hline \end{tabular} \end{minipage} \begin{minipage}{.45\linewidth} \begin{tabular}{p{0.6cm}ccc} \multicolumn{4}{c}{Setting 4}\\ \hline $n$ & $H_{01}$ & $H_{02}$ & $H_{03}$ \\ \hline 36 & 0.059 & \cellcolor{black!15} 0.091 & 0.211 \\ 64 & 0.058 & \cellcolor{black!15} 0.093 & 0.261 \\ \hline \end{tabular} \end{minipage} \end{table} We conclude with a brief application of the procedure to the \texttt{phoneme} data set in the R-package \texttt{ElemStatLearn} \citep{RElemStatLearn}. The data consists of a total of $4509$ log-periodograms of length $p = 256$, each corresponding to a single utterance of one of several phonemes. For simplicity, we consider only the phoneme ``sh'' and, moreover, take only the first utterances of it by the first 64 speakers in the data set. This yields a data matrix with the dimensions $n = 64$ and $p = 256$, meaning that the experiment can be embedded, for example, to either of the regimes $p_n = 4n$ and $p_n = n^{4/3}$. 
To gain some idea on the possible Gaussianity of the data, we ran separate univariate Shapiro-Wilk tests for each of the $p$ variables using the Bonferroni correction and the significance level 0.05. Based on the tests, 4 out of the 256 variables were deemed as non-normal, implying that the assumption of Gaussianity might indeed be warranted in the current context. We then applied the forward testing estimator \eqref{eq:practical_estimator} with $\alpha = 0.05$ to the data and obtained the estimate $\hat{d} = 14$, implying that there is indeed great room for dimension reduction in the data set. As an alternative, ``naive'' approach we also considered forward testing based on a sequence of tests of the form \eqref{eq:low_dim_convergence} that assume $p$ to be finite. It turned out that each of the tests was rejected (with $\alpha = 0.05$), giving the maximal estimate $\hat{d} = \min\{n, p\} = 64$. As the sample size is most likely too small for the finite-dimension asymptotics to kick in (unlike for the high-dimensional asymptotics, which are in Table~\ref{tab:results_1} seen to be good approximations already for sample sizes and dimensions comparable to the current situation), we conclude that ignoring the high-dimensional nature of the data led to a gross overestimation of the latent dimension. \section{Discussion}\label{sec:discussion} In this short note, we showed that a classical test of subsphericity is valid also in the less often studied high-dimensional Gaussian regimes where the concentration $\gamma$ is allowed to take the extreme values $0$ and $\infty$, as long as the spikes themselves diverge to infinity. The case $\gamma = \infty$ further requires the condition that $p_n/(n \sqrt{\lambda_{nd}}) \rightarrow 0$, limiting the growth rate of the dimension $p_n$ in terms of the signal strength $\lambda_{nd}$. 
And even though, by our simulation study, it seems plausible that the test could be extended outside of this condition, several key arguments in our proof of Theorem~\ref{theo:goes_to_zero} hinge on it, meaning that any extensions should use a different technique of proof. Additionally, we derived sufficient conditions for the consistent estimation of the latent dimension $d$ with the forward testing procedure that chains together tests for the hypotheses $H_{00}, H_{01}, \ldots$. While the conditions are rather natural, again requiring that $p_n$ does not grow too fast compared to $\lambda_{nd}$, our simulation study gives indication that there is still room for improvement. Finally, the main limiting factor of the presented results is the assumption of Gaussianity. This requirement could possibly be weakened by showing that the so-called \textit{universality phenomenon} applies to our scenario; in high-dimensional statistics, a result derived under the Gaussian assumption is said to exhibit universality if it continues to hold when the normal distribution is replaced with some other distribution that is close to it in some suitable sense, see \cite{johnstone2018pca} for a review of such results. In the current situation concerning the limiting behavior of second-order quantities, it seems reasonable to conjecture that our main results continue to hold if the normal distribution is replaced with a distribution that shares its first four moments with the normal distribution. While the actual theoretical study of this claim goes beyond the scope of the current work (our proofs rely heavily on several pre-existing results for Wishart matrices), we nevertheless did quick experiments in Settings 1-4 described in Section \ref{sec:simulation}, with the normal distribution replaced by the symmetric Laplace mixture $(1/2) \mathcal{L}(-\mu, b) + (1/2) \mathcal{L}(\mu, b)$ having the dispersion parameter $b = \sqrt{3/2} - 1$ and the mean $\mu = \sqrt{1 - 2 b^2}$. 
The resulting distribution then has identical moments with the standard normal up to the fourth one. The resulting rejection rates are shown in Table \ref{tab:results_2} and they indeed match very closely with those in Table \ref{tab:results_1}, giving plausibility to the universality claim. \begin{table}[h] \caption{The subtables give the observed rejection rates for different null hypotheses over 10000 independent replicates under each of the four settings when the data are drawn from the symmetric Laplace mixture. The columns corresponding to the true dimension are shaded grey.} \label{tab:results_2} \centering \begin{minipage}{.45\linewidth} \begin{tabular}{p{0.6cm}ccc} \multicolumn{4}{c}{Setting 1}\\ \hline $n$ & $H_{02}$ & $H_{03}$ & $H_{04}$ \\ \hline 216 & 1.000 & \cellcolor{black!15} 0.055 & 0.121 \\ 512 & 1.000 & \cellcolor{black!15} 0.054 & 0.137 \\ \hline \end{tabular} \end{minipage} \begin{minipage}{.45\linewidth} \begin{tabular}{p{0.6cm}ccc} \multicolumn{4}{c}{Setting 2}\\ \hline $n$ & $H_{02}$ & $H_{03}$ & $H_{04}$ \\ \hline 216 & 1.000 & \cellcolor{black!15} 0.055 & 0.124 \\ 512 & 1.000 & \cellcolor{black!15} 0.054 & 0.138 \\ \hline \end{tabular} \end{minipage} \begin{minipage}{.45\linewidth} \begin{tabular}{p{0.6cm}ccc} \multicolumn{4}{c}{Setting 3}\\ \hline $n$ & $H_{01}$ & $H_{02}$ & $H_{03}$ \\ \hline 36 & 1.000 & \cellcolor{black!15} 0.052 & 0.109 \\ 64 & 1.000 & \cellcolor{black!15} 0.049 & 0.128 \\ \hline \end{tabular} \end{minipage} \begin{minipage}{.45\linewidth} \begin{tabular}{p{0.6cm}ccc} \multicolumn{4}{c}{Setting 4}\\ \hline $n$ & $H_{01}$ & $H_{02}$ & $H_{03}$ \\ \hline 36 & 0.057 & \cellcolor{black!15} 0.091 & 0.213 \\ 64 & 0.061 & \cellcolor{black!15} 0.090 & 0.253 \\ \hline \end{tabular} \end{minipage} \end{table}
https://arxiv.org/abs/1807.00393
Adaptive Optimal Transport
An adaptive, adversarial methodology is developed for the optimal transport problem between two distributions $\mu$ and $\nu$, known only through a finite set of independent samples $(x_i)_{i=1..N}$ and $(y_j)_{j=1..M}$. The methodology automatically creates features that adapt to the data, thus avoiding reliance on a priori knowledge of the data distribution. Specifically, instead of a discrete point-by-point assignment, the new procedure seeks an optimal map $T(x)$ defined for all $x$, minimizing the Kullback-Leibler divergence between $(T(x_i))$ and the target $(y_j)$. The relative entropy is given a sample-based, variational characterization, thereby creating an adversarial setting: as one player seeks to push forward one distribution to the other, the second player develops features that focus on those areas where the two distributions fail to match. The procedure solves local problems matching consecutive, intermediate distributions between $\mu$ and $\nu$. As a result, maps of arbitrary complexity can be built by composing the simple maps used for each local problem. Displaced interpolation is used to guarantee global from local optimality. The procedure is illustrated through synthetic examples in one and two dimensions.
\section*{\small Acknowledgments} \end{center} The authors would like to thank Yongxin Chen for connecting our variational formulation of the Kullback-Leibler divergence with the Donsker-Varadhan formula. This work has been partially supported by a grant from the Morse-Sloan Foundation. The work of Tabak was partially supported by NSF grant DMS-1715753 and ONR grant N00014-15-1-2355. The work of Essid was partially supported by NSF grant DMS-1311833. \bigskip \bibliographystyle{spmpsci} \section{Algorithm}\label{sec:algorithm} In order to complete the description of the algorithm proposed, we need to specify the functional spaces from which $g$ and $\phi$ are drawn and the procedure used for solving the minimax problem for the Lagrangian $L(g,\phi)$. \subsection{Choice of functional spaces}\label{subsec:functionalspace} Since any two consecutive distributions $\mu,\nu$ in the procedure are close to each other, the optimal map is a perturbation of the identity. The potential $\phi$ will, thus, be chosen in the form: \begin{equation} \phi(x) = \frac{1}{2} \|x\|^2 + \psi(x) \end{equation} where $\psi$ has a Hessian with a spectral radius less than $1$. No such centering is required for $g(x)$, as at optimality $g(x) = \log\left(1\right) = 0$. One basic capability that one should require of the functional spaces for $g$ and $\phi$ is that of detecting and correcting global displacements and scalings --not necessarily isotropic-- between two distributions. Thus one should have \[ \phi(x) = \frac{1}{2} x^{\top}(I+A_0)x + a_1 \cdot x + \phi_{nl}(x) \] and \[ g(z) = \frac{1}{2} z^{\top} B_0 z + b_1 \cdot z + b_2 + g_{nl}(z), \] where $A_0,B_0$ are symmetric matrices in $\R^{d\times d}$, $a_1, b_1$ are vectors in $\mathbb{R}^d$, $b_2 \in \R$ is a scalar, and $\phi_{nl}$ and $g_{nl}$ stand for additional non-linear features discussed below. The quadratic polynomial in $\phi$ allows for global translations and dilations. 
Correspondingly, the quadratic polynomial in $g$ allows for the detection of any mismatch in the mean and covariance of the two distributions. One can easily check that, with these basic functions available, the procedure yields the exact solution to the optimal transport problem between arbitrary Gaussians. If these are the only features available, then there is no advantage in dividing the global problem into local ones, as the composition of linear maps is also linear, thereby providing no additional richness to the single step scenario. The natural element to add is an adaptive feature that could perform --and detect the need of-- local mass displacements. In one dimension, a natural choice is provided by one or more Gaussians of the form $$ \phi_{nl}^k = \alpha_k \exp \left( - \frac{[v_k (x - \bar{x}_k)]^2}{2} \right), \quad g_{nl}^k = \beta_k \exp \left( - \frac{[s_k (z - \bar{z}_k)]^2}{2} \right), $$ where the index $k$ labels the Gaussian feature when more than one is used. The Gaussians in $\phi$ allow for local stretching/compression around $\bar{x}_k$ with scale $|v_k|^{-1}$ and amplitude $\alpha_k$, while each Gaussian in $g$ detects local discrepancies between the two distributions, as opposed to the global scale and positioning provided by its quadratic component. The parameters $v$, $\bar{x}$, $s$ and $\bar{z}$ appear nonlinearly in $\phi$ and $g$, moving us away from the linear feature spaces of \cite{kuang2017sample} and into the realm of adaptability, as the parameters automatically select the location and scale of the changes required by the data. 
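As a concrete illustration, the one-dimensional map $T(x)=\phi'(x)$ induced by this quadratic-plus-Gaussian parametrization can be evaluated in closed form. The following sketch (our own illustration; parameter names are not taken from the paper's code) differentiates $\phi$ analytically:

```python
import numpy as np

def transport_map_1d(x, a0=0.0, a1=0.0, alpha=0.0, v=1.0, xbar=0.0):
    """T(x) = phi'(x) for
    phi(x) = 0.5*(1 + a0)*x**2 + a1*x + alpha*exp(-(v*(x - xbar))**2 / 2).
    Parameter names are illustrative only."""
    u = v * (x - xbar)
    gauss = np.exp(-0.5 * u ** 2)
    # chain rule on the Gaussian bump contributes -alpha * v**2 * (x - xbar) * gauss
    return (1.0 + a0) * x + a1 - alpha * v ** 2 * (x - xbar) * gauss

x = np.linspace(-3.0, 3.0, 7)
assert np.allclose(transport_map_1d(x), x)                 # all features off: identity
assert np.allclose(transport_map_1d(x, a1=0.5), x + 0.5)   # a1 shifts mass globally
```

Setting $\alpha\neq 0$ stretches or compresses mass locally around $\bar{x}$ on the scale $|v|^{-1}$, exactly the role attributed to the Gaussian feature above.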
There are at least four alternative ways to bring these Gaussian features to higher dimensions: \begin{enumerate} \item Adopt general Gaussians of the form % $$ \phi_{nl} = \alpha \exp \left( - \frac{\|V (x - \bar{x})\|^2}{2} \right) , $$ % with $\bar{x}$ a vector and $V$ a matrix (it is more convenient to write the Gaussian in terms of a general matrix $V$ in this way, rather than in terms of the inverse covariance matrix $C^{-1} = V^T V$, as we would need to require the latter to be positive definite); \item adopt isotropic Gaussians % $$ \phi_{nl} = \alpha \exp \left( - \frac{v \|x - \bar{x}\|^2}{2} \right) , $$ % with $v$ a scalar; \item adopt one-dimensional Gaussians along arbitrary directions % $$ \phi_{nl} = \alpha \exp \left( - \frac{[v\cdot(x - \bar{x})]^2}{2} \right) , $$ % with $v$ a vector; and \item adopt a Gaussian with diagonal covariance % $$ \phi_{nl} = \alpha \exp \left( - \frac{\|D (x - \bar{x})\|^2}{2} \right) , $$ % with $D$ a diagonal matrix, \end{enumerate} and similarly for $g_{nl}$ in all four cases. The first choice has the advantage of generality but may be prone to overfitting in high dimensions, unless it is severely penalized. The second approximates a general function $\phi$ by the composition of isotropic bumps; an appropriate image is that of hammering a sheet of metal into any desired shape. Yet it would resolve poorly local, one-dimensional changes. The third choice excels at these but will fare poorly for more isotropic local changes. Finally, the fourth choice is attached to the coordinate axes, which would make sense only if these correspond to variables that are assumed to change independently. A natural question is how many Gaussians to include in the functional space proposed. 
We have used two in the examples below, but one Gaussian would have sufficed: in the adversarial multistep method proposed, it is enough that the player with strategy $g(y)$ has a ``lens'' (the Gaussian) to identify the area where the two distributions least agree, and the player with strategy $\phi(x)$ has the capability to perform local moves to correct this misfit. Since the center and width of the Gaussian are free parameters, both assertions hold. With a single Gaussian feature, both players can focus only on one local misfit at a time. However, the algorithm has multiple steps, so effectively the total number of features available is the product of the features per step times the number of steps. \subsection{Local Algorithm}\label{subsec:localAlgorithm} We will use vectors $\alpha \in \mathbb{R}^a, \beta \in \mathbb{R}^b$ to parametrize $\phi(x) = \phi_{\alpha}(x)$ and $g(y) = g_{\beta}(y)$. We are seeking to solve the minimax problem in $\alpha \in \R^a, \beta \in \R^b$ for the Lagrangian: \[ L[\alpha,\beta] = \frac{1}{n} \sum_{i=1}^n g_{\beta}(\nabla \phi_{\alpha}(x_i)) - \frac{1}{m} \sum_{j=1}^m e^{g_{\beta}(y_j)} + P(\alpha,\beta) \] where $P$ is a penalization function that will be described in Section \ref{subsec:penalty}. In practice, one could use any available minimax solver to find a critical point of the above Lagrangian. Yet, to our knowledge, there is no available efficient method suitable for a non-convex/non-concave landscape. A naive algorithm would simultaneously implement gradient descent in $\alpha$ and gradient ascent in $\beta$, with updates given at each step $s$ by: \begin{align*} \alpha^{s+1} &= \alpha^s - \eta \nabla_{\alpha}L[\alpha^s, \beta^s] \\ \beta^{s+1} &= \beta^s + \eta \nabla_{\beta}L[\alpha^s, \beta^s], \end{align*} with a step size $\eta$ that may change at each iteration. 
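To see why this naive scheme can fail, consider the toy saddle $L(\alpha,\beta)=\alpha\beta$, whose unique critical point is the origin: simultaneous updates rotate around the saddle while $\alpha^2+\beta^2$ grows by a factor $1+\eta^2$ at every step. A minimal sketch (a toy example of ours, not the paper's Lagrangian):

```python
import numpy as np

def naive_gda(grad_a, grad_b, a, b, eta=0.1, steps=100):
    """Simultaneous gradient descent in a / gradient ascent in b."""
    for _ in range(steps):
        ga, gb = grad_a(a, b), grad_b(a, b)
        a, b = a - eta * ga, b + eta * gb
    return a, b

# L(a, b) = a*b: grad_a L = b, grad_b L = a; start away from the saddle (0, 0)
a, b = naive_gda(lambda a, b: b, lambda a, b: a, 1.0, 0.0)
# the iterates spiral away from the equilibrium instead of converging to it
assert a ** 2 + b ** 2 > 2.0
```

Each pair of updates multiplies $\alpha^2+\beta^2$ by exactly $1+\eta^2$, so after $100$ steps with $\eta=0.1$ the squared norm has grown from $1$ to roughly $2.7$.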
From a game-theory perspective, this corresponds to two myopic players that plan their next move based only on their current position, without anticipating what the other player might do. Instead, more insightful players will choose their next move based on the future position of their opponents. This yields a second-order algorithm, which we will refer to as \emph{implicit} gradient descent, with updates given by: \begin{align*} \alpha^{s+1} &= \alpha^s - \eta \nabla_{\alpha}L[\alpha^{s+1}, \beta^{s+1}] \\ \beta^{s+1} &= \beta^s + \eta \nabla_{\beta}L[\alpha^{s+1}, \beta^{s+1}] . \end{align*} A simple Taylor expansion gives: \begin{align*} \nabla_{\alpha}L[\alpha^{s+1}, \beta^{s+1}] &\approx \nabla_{\alpha}L^s + \nabla^2_{\alpha \alpha} L^s \cdot (\alpha^{s+1}- \alpha^s) + \nabla^2_{\alpha \beta} L^s \cdot (\beta^{s+1}- \beta^s) \\ \nabla_{\beta}L[\alpha^{s+1}, \beta^{s+1}] &\approx \nabla_{\beta}L^s + \nabla^2_{\beta \alpha} L^s \cdot (\alpha^{s+1}- \alpha^s) + \nabla^2_{\beta \beta} L^s \cdot (\beta^{s+1}- \beta^s) . \end{align*} Defining the \emph{twisted} gradient $G^s$ and \emph{twisted} Hessian $H^s$ by \[ G^s = \begin{pmatrix} \nabla_{\alpha} L^s \\ - \nabla_{\beta} L^s \end{pmatrix}, \quad H^s = \begin{pmatrix} \nabla^2_{\alpha \alpha} L^s & \nabla^2_{\alpha \beta} L^s \\ - \nabla^2_{\beta \alpha} L^s & -\nabla^2_{\beta \beta} L^s \end{pmatrix} \] and $\gamma^s =\begin{pmatrix} \alpha^s \\ \beta^s \end{pmatrix}$, one obtains the second-order updating scheme: \begin{equation} \gamma^{s+1} = \gamma^s - \eta \left(I + \eta H^s \right)^{-1} G^s . \end{equation} Notice that as $\eta \to 0$, the scheme is equivalent to classical gradient descent. On the other hand, as $\eta \to +\infty$, the scheme converges to Newton iterations. At each iteration, we are allowed to update $\eta$ in order to accelerate convergence. 
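On the same bilinear toy saddle $L(\alpha,\beta)=\alpha\beta$ where simultaneous updates diverge, the implicit update contracts toward the equilibrium: one can check by hand that it shrinks $\alpha^2+\beta^2$ by the factor $1/(1+\eta^2)$ per step. A minimal sketch (toy problem of ours, not the SBLOT Lagrangian):

```python
import numpy as np

def implicit_step(gamma, G, H, eta):
    """One implicit update: gamma <- gamma - eta * (I + eta*H)^{-1} G,
    with G, H the twisted gradient and Hessian."""
    n = len(gamma)
    return gamma - eta * np.linalg.solve(np.eye(n) + eta * H, G)

# Toy saddle L(a, b) = a*b: twisted gradient G = (b, -a),
# twisted Hessian H = [[0, 1], [-1, 0]] (constant for this L)
H = np.array([[0.0, 1.0], [-1.0, 0.0]])
gamma = np.array([1.0, 0.0])
for _ in range(50):
    a, b = gamma
    G = np.array([b, -a])
    gamma = implicit_step(gamma, G, H, eta=0.5)
# the implicit iteration contracts toward the saddle point (0, 0)
assert np.dot(gamma, gamma) < 1e-3
```

For this toy problem the first step maps $(1,0)$ to $(0.8,\,0.4)$, and the squared norm decays geometrically at rate $1/(1+\eta^2)$.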
Ongoing research \cite{minimax2018essid} addresses the correct rules to update $\eta$, as well as the convergence of the algorithm to a critical point of the Lagrangian. This minimax solver is robust in two senses: it guarantees both convergence to a local minimax point and constant improvement. The latter has to do with the subtlety of minimax problems, as opposed to regular minimization where enforcing a decrease of the objective function is enough. In each step of our implicit procedure for $\min_x \max_y L(x,y)$, if $L[x^{s+1}, y^{s+1}]$ is either bigger than $L[x^s, y^{s+1}]$ or smaller than $L[x^{s+1}, y^s]$, we reject the step and adopt a smaller learning rate. Because of this, the solution will always improve over the starting identity map. If computing the twisted Hessian $H$ becomes too costly, one can resort to Hessian approximation techniques such as BFGS or its variations \cite{wright1999numerical,pavon2017variational}. To conclude, the algorithm for finding the optimal match between two consecutive distributions, which we denote sample based local optimal transport (SBLOT), is summarized in Algorithm \ref{algo:SBLOT}. \begin{algorithm} \caption{Sample Based Local Optimal Transport Algorithm (SBLOT)} \label{algo:SBLOT} \begin{algorithmic}[0] \Procedure{SBLOT}{$(x_i),(y_j)$} \State Initialize $\gamma$ \State Compute the twisted gradient and Hessian $G, H$ \For{$n=1,\ldots,MaxIter$} \If{$||G|| < tolerance$} \State break \EndIf \State $\gamma \gets \gamma - \eta (I + \eta H)^{-1} G$ \State Recompute the twisted gradient and Hessian $G, H$ at $\gamma$ \State Update $\eta$ \EndFor \State \textbf{return} ${\displaystyle \nabla \phi_{\gamma[1:a]}(x)}$ \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Penalization}\label{subsec:penalty} Transforming Problem \ref{minimax} into Problem \ref{sampleMinimax} amounts to replacing the theoretical measures with their empirical estimates: \[ \rho \approx \hat{\rho} = \frac{1}{n} \sum_{i=1}^n \delta_{\nabla 
\phi(x_i)}, \quad \nu \approx \hat{\nu} = \frac{1}{m} \sum_{j=1}^m \delta_{y_j} \] Even if $\rho \ll \nu$, this will not hold for their estimates. Allowing maximum freedom for the function $g$ will result in an infinite Kullback-Leibler divergence. For instance, if one allows functions $g$ with support including some $\nabla \phi(x_i)$ but none of the $y_j$, the Lagrangian will grow unboundedly, since the exponential term that regularly inhibits this growth is now constant. One way to avoid this problem is to use the relative entropy not between $T(X)$ and $Y$ but between $T(X)$ and $(1-\epsilon) Y + \epsilon T(X)$, as then the law of $T(X)$ is always absolutely continuous w.r.t. the law of $(1-\epsilon)Y + \epsilon T(X)$, eliminating the possibility of blowup in $g$, and the minimum is still reached when $T(X)=Y$. Another general simple way to avoid this kind of scenario is through the addition to the Lagrangian of terms that penalize overfitting. For our particular choice of functional spaces, it is only the coefficients in the argument of the exponentials that require penalization, as those are the only ones that involve spatial scales. In particular, for a component of $g$ or $\phi$ of the form $$ a e^{-(b \cdot (x-c))^2} , $$ we add penalization terms proportional to $$ e^{(\epsilon \|b\|)^2}, $$ with $\epsilon$ as defined above, to avoid resolving scales smaller than $\epsilon$, to $$ \frac{1}{(D\|b\|)^2}, $$ where $D$ measures the diameter of the support of the data, to avoid having Gaussians so broad that they are indistinguishable from the quadratic components of the functional space, to $$ \left\|\frac{c}{D}\right\|^2,$$ to avoid centering the Gaussian away from the data, and, when more than one Gaussian is used, to $$ \frac{\epsilon^2}{\|c_i-c_j\|^2}, $$ for every pair $(i,j)$ of Gaussians, to avoid possible degeneracies in the functional space when two Gaussians become nearly indistinguishable. 
All these terms are added and multiplied by a tunable parameter $\lambda$. Yet one more consideration is required for the penalization of the parameters of the potential $\phi$: since in the Lagrangian, $\phi$ appears only as an argument of $g$, for a fixed $\lambda$, the penalization terms and the core Lagrangian can easily become unbalanced. In particular, at the exact solution, $g$ is zero, so only the penalization terms will remain. To correct for such imbalance, we multiply the corresponding penalization terms by the average value of $\|\nabla g\|$ over all current $\nabla\phi(x_i)$. \section{Discussion and Conclusions}\label{sec:conclusion} We have developed an adaptive methodology for the sample-based optimal transport problem under the standard quadratic cost function. The main advantage of the new procedure is that it does not require any external input on the form of the distributions that one seeks to match, or any expert knowledge on the type, location and size of the features in which the source and target distributions may differ. Even though the map $\nabla \phi$ and test function $g$ used at each step are parametric, by using the composition of many simple maps and having at one's disposal a ``lens'' within $g$ that can focus on any individual local mismatch at each step, the resulting procedure can be thought of as effectively free of parameters, except for the number of intermediate distributions to use, a stopping criterion, and a couple of constants associated with the penalization of the nonlinear features. Thus, it has the potential to form the basis for a universal tool that can be transferred painlessly across fields. Two main ingredients allow for the procedure to capture arbitrary variability without making use of a huge dictionary of candidate features (in its current version, it uses only three: a linear feature for global displacements, a quadratic feature for global scalings, and a Gaussian feature for localized displacements). 
One ingredient, borrowed from prior work in \cite{kuang2017sample}, is the factorization of the potentially quite complex global map into a sequence of much simpler local maps between nearby distributions. The optimality of the composed map is guaranteed through the use of displacement interpolation. The second ingredient is the formulation of the local problem as a two-player game where the first player seeks to push forward one distribution into the other, while the second player develops features that show where the push-forward condition fails. The variational characterization of the relative entropy between distributions that gives rise to this game-theory formulation has the additional advantage of being sample-friendly, as it involves the two distributions only through the expected values of functions, which can be naturally replaced by empirical means. Because the map between any two consecutive distributions is close to the identity, local optimality is guaranteed by requiring this map to be the gradient of a potential. Topics for future research include the extension of the algorithm to transportation costs different from the squared distance and, for the purpose of more efficient computability, the optimization of the minimax solver and the parallelization of the computation of the local maps. Most of all, we believe, the use of the new methodology in real applications will shed light on the issues that require further work, which may include the development of features and penalizations suitable for efficiently capturing sharp edges or removed objects. \section{Experiments}\label{sec:experiments} This section illustrates the algorithm through some simple examples. 
First we use a one-dimensional example --simplest for visualization-- and a direct solver between initial and final distributions to display the way in which the function $g$ adapts, creating features that point to those areas where transport is still deficient, thus guiding $\phi$ to correct them. The two distributions in the first example are relatively close, so that they can be matched without involving intermediate distributions. A second set of one-dimensional examples follows, involving more significant changes and hence requiring the use of interpolated distributions. Then we perform some two-dimensional examples, involving Gaussians, Gaussian mixtures and a distribution uniform within an annulus. Finally, we use an example built so that we know the exact answer, to perform an empirical analysis of convergence. All the examples presented are intended for illustration and use synthetic data; applications to real data, particularly to change detection, will be presented in field-specific articles currently under development. \subsection{Adversarial behavior of $\phi$ and $g$} This section shows, through a simple experiment, the competitive behavior exhibited by the two players $\phi$ and $g$ in the local algorithm (Algorithm \ref{algo:SBLOT}). To this end, we create data where the initial and final distributions are not very far from each other, so that the local algorithm can be used as a stand-alone routine. More specifically, we map one single Gaussian distribution to a Gaussian mixture, where the two components of the mixture overlap significantly, so that they do not differ too markedly from the source. 
\begin{figure}[h] \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=\textwidth]{local2start}\\ Iteration 0 \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=\textwidth]{local2mid}\\ After $8$ iterations \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=\textwidth]{local2end}\\ After $17$ iterations \end{minipage} \caption[Plot at three different iteration times of Algorithm \ref{algo:SBLOT}]{Plot at three different iteration times of Algorithm \ref{algo:SBLOT}. Histograms of the source samples and their transforms are in red, and of the target samples in blue. The black curve corresponds to $g(x)$, vertically rescaled for visualization. The green curve represents the displacement $T(x)-x$.} \label{fig:Adversarial} \end{figure} Figure \ref{fig:Adversarial} shows steps in the solution to the corresponding sample based OT problem, with the source samples $(x)_i$ from a Gaussian --and their transforms-- in red and the samples $(y)_j$ from a mixture of two Gaussians in blue. Point samples are represented through histograms. The figure on the left represents the initial configuration, the one in the middle the configuration after $8$ iterations of Algorithm \ref{algo:SBLOT}, and the one on the right the final configuration, after $17$ iterations. On top of the histograms, we display the function $g(x)$ in black, scaled vertically to be in the interval $[-1,1]$ for easier comparison with the data, and the displacement $\nabla \phi(x)-x$ in green, representing the map that sends the initial sample (in red, in the left figure) to the current sample (in red, in the middle or right figure). 
The initial displacement, being $0$, was not represented at initialization, but we initialize the function $g(z)$ at the purely quadratic function: \begin{equation}\label{eq:ggauss} \frac{1}{2} z^T \left( \hat{\Sigma}_y^{-1} - \hat{\Sigma}_x^{-1} \right) z + \left( \hat{\Sigma}_x^{-1} \hat{x} - \hat{\Sigma}_y^{-1} \hat{y} \right)^T z + \frac{1}{2} \left( \hat{y}^T \hat{\Sigma}_y^{-1} \hat{y} - \hat{x}^T \hat{\Sigma}_x^{-1} \hat{x}\right) \end{equation} where $\hat{x}, \hat{y}$ are the empirical means of the samples $(x)_i,(y)_j$, and $\hat{\Sigma}_x,\hat{\Sigma}_y$ their empirical covariance matrices. Equation \ref{eq:ggauss} represents the optimal $g$ for two Gaussian measures. More generally, starting with this expression as the initial guess for $g$ instructs $\phi$ to shift the samples as well as to stretch/compress them, in order to match the first and second moments of the two distributions. The left image of Figure \ref{fig:Adversarial} shows how $g$ highlights the lack of variance in $(x)_i$; its maximum is at $0$, and it has smaller values at the edges. This forces $\phi$ to adapt accordingly, by applying a linear map to stretch $(x)_i$. When the variance of the $(\nabla \phi(x))_i$ exceeds the variance of the $(y)_j$, the shape of $g$ is inverted. In the middle image of Figure \ref{fig:Adversarial}, we can see that $\nabla \phi$ corrected the mismatches highlighted by $g$ and even started to slightly separate the mass in the middle. However, there is still too much red mass around $0$ and too little red mass around the two peaks of the blue Gaussian mixture. This is well detected by $g$, which has a local maximum within the area of red mass excess and two local minima within the area of red mass deficit. In the right image of Figure \ref{fig:Adversarial}, we observe that $\nabla \phi$ adapted accordingly and starts yielding satisfactory results. 
At this point, $g$ is very close to $0$ ($||g||_{\infty}\sim 10^{-5}$), although this is not apparent in the figure due to the normalization we applied for plotting. \subsection{The global algorithm in dimension one} Figures \ref{fig:gto2g} and \ref{fig:gto3g} represent inputs and outputs of Algorithm \ref{algo:SBOT}, where $(x)_i$ is sampled from a Gaussian and $(y)_j$ from a mixture of two and three Gaussians respectively. \begin{figure}[h] \begin{center} \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{globalstart}\\ Starting configuration \end{minipage} \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{globalend}\\ Final configuration \end{minipage} \end{center} \caption{Algorithm \ref{algo:SBOT} pushing forward a Gaussian to a mixture of two Gaussians, in 1D. The source samples and their transforms are depicted through histograms in red, and the target samples in blue.} \label{fig:gto2g} \end{figure} \begin{figure}[h] \begin{center} \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{1gto3gstart}\\ Starting configuration \end{minipage} \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{1gto3gend}\\ Final configuration \end{minipage} \end{center} \caption{Same as figure \ref{fig:gto2g} but with a mixture of three Gaussians as target.} \label{fig:gto3g} \end{figure} These results were obtained by generating $\sim 200$ samples for the source and target measures, and using the functional spaces defined in Section \ref{subsec:functionalspace} in the local algorithm (Algorithm \ref{algo:SBLOT}), with a general quadratic form for both $\phi$ and $g$, plus one adaptive Gaussian for $\phi$ and two for $g$. A total of $N=10$ and $N=20$ intermediary measures were adopted for the first and second example, respectively. 
As one can see, even though each local map can only perform one local deformation, the composition of many creates all the complexity required to move one single Gaussian to a mixture of two or three. \subsection{Two-dimensional examples} Switching to two dimensions, Figure \ref{fig:gtoa} represents the results of mapping a Gaussian distribution to a uniform distribution within an annulus. An isotropic Gaussian was used for $\phi_{nl}$ and two for $g_{nl}$ in the functional space of Algorithm \ref{algo:SBLOT}, and $N=30$ intermediary distributions were used in Algorithm \ref{algo:SBOT}. Figure \ref{fig:gtoadisp} represents the displacement interpolants at $t=k/5$ for $ k=1,\ldots ,5$, obtained from running Algorithm \ref{algo:SBOT} on the example in Figure \ref{fig:gtoa}. In addition to mass spreading from the isotropic Gaussian, the linear and quadratic part of $\phi$ translated and stretched the red sample accordingly. \begin{figure}[hb!] \begin{center} \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{ex2start}\\ Starting configuration \end{minipage} \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{ex2end}\\ Final configuration \end{minipage} \end{center} \caption{Algorithm \ref{algo:SBOT} from a displaced Gaussian to an annulus, in 2D} \label{fig:gtoa} \end{figure} \begin{figure}[hb!] \begin{center} \includegraphics[scale=0.6]{ex2disp} \end{center} \caption[Interpolants given by Algorithm \ref{algo:SBOT} from a Gaussian to an annulus, in 2D]{Interpolants given by Algorithm \ref{algo:SBOT} from a Gaussian to an annulus, in 2D. The top left figure (red) corresponds to the original sample. Time flows from left to right, and from top to bottom. Subsequently represented are the interpolants at time $t=k/5$ for $k=1,\ldots,5$. 
} \label{fig:gtoadisp} \end{figure} Similarly, Figure \ref{fig:gto2g2} represents the initial and final configurations obtained from running Algorithm \ref{algo:SBOT} to transport a two-dimensional Gaussian distribution to a mixture of two Gaussians. A diagonal covariance was used in the non-linearity $\phi_{nl}$ for the functional space in Algorithm \ref{algo:SBLOT}, and $N=30$ intermediary steps were used in Algorithm \ref{algo:SBOT}. This type of non-linearity is well adapted to separate samples along the horizontal and vertical axes. Figure \ref{fig:gto2gdisp} represents the displacement interpolants at $t=k/5$ for $ k=1,\ldots ,5$, obtained from running Algorithm \ref{algo:SBOT} on the example in Figure \ref{fig:gto2g2}. \begin{figure}[htb!] \begin{center} \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{ex3start}\\ Starting configuration \end{minipage} \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{ex3end}\\ Final configuration \end{minipage} \end{center} \caption{Algorithm \ref{algo:SBOT} from a Gaussian to a mixture of 2 Gaussians, in 2D} \label{fig:gto2g2} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[scale=0.6]{ex3disp} \end{center} \caption[Interpolants given by Algorithm \ref{algo:SBOT} from a Gaussian to a mixture of two Gaussians, in 2D]{Interpolants given by Algorithm \ref{algo:SBOT} from a Gaussian to a mixture of two Gaussians, in 2D. The top left figure (red) corresponds to the original sample. Time flows from left to right, and from top to bottom. Subsequently represented are the interpolants at time $t=k/5$ for $k=1,\ldots,5$. } \label{fig:gto2gdisp} \end{figure} \subsection{Empirical analysis of convergence} In this subsection, we empirically analyze the convergence of the algorithm in a situation where the generating distributions, as well as the optimal map, are known: $(x_i)_{i=1,\cdots,n}$ are i.i.d. 
samples of a standard Gaussian distribution, $(y_j)_{j=1,\cdots,m}$ are obtained through $y_j = \phi'(x_j)$ for $\phi(x) = |x|^{1+\epsilon}$ ($\epsilon= 1/4$). Brenier's theorem guarantees that, since $\phi$ is convex, $\phi'$ is the optimal map for the quadratic Wasserstein problem. In a first set of experiments, we keep the number of samples constant at $n=m=500$, and we vary the number of intermediary steps $K$ in the global algorithm, ranging through $K=1,2,3,5,10$. In a second set of experiments, we keep the number of intermediary steps in the global algorithm constant at $K=10$, and vary the number of sample points, using $n=m=25,50,100,200,500$. In both sets, we compute the experimental map $\nabla \phi_{exp}$ by \eqref{eq:compMaps}, and compare it to the optimal $\nabla \phi^*$ defined by: \[ \nabla \phi^* (x) = (1 + \epsilon) x |x|^{\epsilon - 1}. \] In each experiment, two numerical quantities are computed: \begin{enumerate} \item The weighted $L^2$ norm ${\displaystyle \int |\nabla \phi_{exp}(x) - \nabla \phi^*(x)|^2 \mu(x)dx} \approx \frac{1}{n} \sum_i |\nabla \phi_{exp}(x_i) - \nabla \phi^*(x_i)|^2 $, \item The $L^{\infty}$ norm between $\nabla \phi_{exp}$ and $\nabla \phi^*$. \end{enumerate} For illustrative purposes, we show in Figure \ref{fig:compK} the differences between $\nabla \phi_{exp}$ and $\nabla \phi^*$ for various sets of parameters. Tables \ref{table:varK} and \ref{table:varN} summarize the results. 
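The ground truth of this experiment is easy to reproduce. The sketch below (our own illustration of the setup; the identity map stands in for $\nabla\phi_{exp}$) generates the synthetic sample pairs and evaluates both error metrics:

```python
import numpy as np

eps = 0.25
def grad_phi_star(x):
    """Exact optimal map: derivative of phi(x) = |x|**(1 + eps)."""
    return (1 + eps) * x * np.abs(x) ** (eps - 1)

rng = np.random.default_rng(0)       # seed is arbitrary
x = rng.standard_normal(500)         # source samples
y = grad_phi_star(x)                 # target samples, by construction

def error_metrics(T_x, x):
    """Density-weighted L2 error and L-infinity error of a candidate map,
    evaluated at the source samples."""
    d = T_x - grad_phi_star(x)
    return float(np.mean(d ** 2)), float(np.max(np.abs(d)))

l2, linf = error_metrics(x, x)       # identity map as a crude baseline
assert l2 > 0.0 and linf > 0.0
```

Since $\phi$ is convex, $\phi'$ is increasing, which can be verified numerically on the sorted samples.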
\begin{table}[H] \begin{center} \begin{tabular}{|l|l|l|l|l|l|} \hline $K=$ & \multicolumn{1}{l|}{1} & \multicolumn{1}{l|}{2} & \multicolumn{1}{l|}{3} & \multicolumn{1}{l|}{5} & \multicolumn{1}{l|}{10} \\ \hline $\E[|\nabla \phi^*(X) - \nabla \phi_{exp}(X)|^2]$ & 0.74 & 0.55 & 8.3 $\cdot 10^{-1}$ & 1.7 $\cdot 10^{-2}$ & 8.7 $\cdot 10^{-3}$ \\ \hline $||\nabla \phi^* - \nabla \phi_{exp}||_{L^{\infty}}$ & 0.53 & 0.22 & 9.9 $\cdot 10^{-2}$ & 8.7 $\cdot 10^{-2}$ & 6.2 $\cdot 10^{-2}$ \\ \hline \end{tabular} \end{center} \caption{Convergence as a function of the number $K$ of intermediary steps} \label{table:varK} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{|l|l|l|l|l|l|} \hline $n=m=$ & \multicolumn{1}{l|}{25} & \multicolumn{1}{l|}{50} & \multicolumn{1}{l|}{100} & \multicolumn{1}{l|}{200} & \multicolumn{1}{l|}{500} \\ \hline $\E[|\nabla \phi^*(X) - \nabla \phi_{exp}(X)|^2]$ & 1.4 & 0.35 & 7.1 $\cdot 10^{-2}$ & 2.1 $\cdot 10^{-2}$ & 8.7 $\cdot 10^{-3}$ \\ \hline $||\nabla \phi^* - \nabla \phi_{exp}||_{L^{\infty}}$ & 1.3 & 0.49 & 0.16 & 0.11 & 6.2 $\cdot 10^{-2}$ \\ \hline \end{tabular} \end{center} \caption{Convergence as a function of the number of samples $n$} \label{table:varN} \end{table} In practice, setting a number of samples less than $15$ in this example leads to poor convergence due to the extreme sparsity of data. 
\begin{figure}[h] \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=\textwidth]{K1}\\ $K=1$ \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=\textwidth]{K3}\\ $K=3$ \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=\textwidth]{K5}\\ $K=5$ \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=\textwidth]{K10}\\ $K=10$ \end{minipage} \caption[Comparison between $\nabla \phi^*$ and $\nabla \phi_{exp}$ for different values of $K$]{Comparison between $\nabla \phi^*$ (blue) and $\nabla \phi_{exp}$ (orange) for different values of intermediary steps $K$.} \label{fig:compK} \end{figure} Figure \ref{fig:compK} compares the optimal map $\nabla \phi^*$ with the computed map $\nabla \phi_{exp}$. Note that the one-step algorithm does not provide a monotone solution, i.e. it is not the gradient of a convex function: the source and target distributions are not close enough to guarantee that. This is corrected through the introduction of intermediate steps, which brings the source and target distributions for each step closer to each other via displacement interpolation. For the example under consideration, the computed map is monotone for any value of $K$ bigger than $4$. Notice also that, for $K=10$ and $n=500$, the solution approximates the exact one very accurately in the bulk of the distribution, as captured by the density-weighted $L^2$ norm of their difference. On the other hand, the $L^{\infty}$ norm is dominated by the behavior at the tails, where little data is present to guide the algorithm. 
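The monotonicity just discussed can be checked directly from samples: a one-dimensional map is the gradient of a convex potential exactly when it is non-decreasing. A minimal diagnostic (a helper of our own, not part of the algorithm):

```python
import numpy as np

def is_monotone_1d(x, Tx):
    """Check whether a sample-based 1D map x_i -> Tx_i is non-decreasing,
    i.e. whether it could be the gradient of a convex potential (a Brenier map)."""
    order = np.argsort(x)
    return bool(np.all(np.diff(Tx[order]) >= 0))

x = np.array([-1.0, 0.0, 2.0, 0.5])
assert is_monotone_1d(x, x ** 3)        # x^3 is increasing, hence admissible
assert not is_monotone_1d(x, -x)        # -x is decreasing, hence not
```

Applied to the pairs $(x_i, \nabla\phi_{exp}(x_i))$, such a check flags the non-monotone one-step solutions visible in the figure.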
\section{Introduction} The optimal transport problem consists of finding, from among all transformations $y = T(x)$ that push forward a source distribution $\mu(x)$ to a target $\nu(y)$, the map that minimizes the expected transportation cost: \begin{equation} \min_T \int c\left(x, T(x)\right) \mu(x) \ dx, \quad T_{\#} \mu = \nu, \label{eq:MOT} \end{equation} where $c(x,y)$ is the externally provided cost of moving a unit of mass from $x$ to $y$ \cite{villani2003topics}. The application for which Monge formulated the optimal transport problem was the actual transportation of material between two sites at minimal cost \cite{monge1781memoire}. Two centuries later, starting with Kantorovich and Koopmans \cite{kantorovich1942v}, the problem was relaxed from maps to couplings, and applied to more general matching problems, such as matching supply and demand or positions and employees. More recently, the optimal transport problem has become a central tool in many computer and data science applications, as well as in analysis and partial differential equations. Among the many applications for which optimal transport could be used, the particular one that drove the methodology proposed in this article is change detection, for which one seeks a correspondence between two point clouds (from remote sensing data -- either imagery or laser scanning) in order to identify differences between them. The numerical solution of optimal transportation problems has been an active area of research for some years. When the two measures $\mu$ and $\nu$ have discrete support, the relaxation of optimal transport due to Kantorovich \cite{kantorovich1942v} becomes a linear programming problem, which can be solved effectively for problems of small and medium size. When the size of the problem grows, its solution can be accelerated significantly through the addition of an entropic regularization and a Sinkhorn-type iterative algorithm \cite{Cuturi,peyre2018computational}. 
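For reference, the Sinkhorn-type iteration mentioned above takes only a few lines. The sketch below implements the standard entropically regularized scheme for discrete histograms (a textbook version in the spirit of \cite{Cuturi}, unrelated to the sample-based method developed in this paper):

```python
import numpy as np

def sinkhorn(C, a, b, reg=0.5, iters=500):
    """Entropy-regularized discrete OT between histograms a and b with
    cost matrix C; returns the transport plan diag(u) K diag(v)."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(iters):          # alternate scaling of the two marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2      # quadratic cost on a 1D grid
a = np.full(5, 0.2)
b = np.full(5, 0.2)
P = sinkhorn(C, a, b)
# the marginals of the plan match the prescribed histograms
assert np.allclose(P.sum(axis=1), a, atol=1e-6)
assert np.allclose(P.sum(axis=0), b, atol=1e-6)
```

The output is a coupling (a transference plan), not a map, which is precisely the point-by-point assignment viewpoint that the present paper moves away from.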
This regularized problem, both in the discrete and the continuous versions, is equivalent to the Schr\"odinger bridge \cite{leo2,CGP2016}. When the space underlying the two measures $\mu$ and $\nu$ is continuous and the distributions are known in closed form, one can --in low-dimensional problems-- discretize them on a grid or a graph before applying these techniques. Then their solution provides a point-by-point assignment between the source and the target measures. However, in most data science applications, the distributions underlying the source and/or target samples are unknown. Moreover, those samples are often embedded in a high dimensional space, and the data are relatively scarce. Density estimation techniques using this scarce data will yield a poor representation of the source and target measures. Hence the transport map or transference plan provided by these techniques will be either inaccurate or highly over-fitted, which leads to very poor predictive power when predicting the targets of new sample points from the source. In order to provide a more flexible framework for data science applications, sample-based techniques to solve the OT problem were developed in \cite{tabak2017conditional, kuang2017sample, tabak2018explanation}. A central question to address when posing sample-based OT problems is the meaning of the push-forward condition $T_{\#} \mu = \nu$ when $\mu$ and $\nu$ are only known through samples $\left\{x_i\right\}$, $\left\{y_j\right\}$. In the formulations in \cite{tabak2017conditional, kuang2017sample, tabak2018explanation}, this condition was relaxed to the equality of the empirical means of a pre-determined set of functions or ``features'' over the two sample sets; a relaxation that appears naturally in the dual formulation of the problem. This raises the feature selection problem of finding the set of features best suited to each application.
The associated challenges are particularly apparent in the change detection problem, where elements in two point clouds may differ for instance in size, color, shape, data distribution or location, may be large or small, may have appeared, disappeared, have been displaced, deformed, broken, consolidated\ldots Thus the development of a robust, application-independent feature-selection methodology is far from trivial. The methodology proposed in this article incorporates feature selection into the formulation of the optimal transport problem itself, through an adversarial approach. This involves three main steps: \begin{enumerate} \item Borrowing from the methodology developed in \cite{kuang2017sample}, we subdivide the transportation problem between $\mu$ and $\nu$ into finding $N$ local maps $T_t$ pushing forward $\rho_{t-1}$ to $\rho_{t}$, with $\rho_0 = \mu$ and $\rho_{N}=\nu$. The global map $T$ results from the composition of these local maps: $T = T_{N} \circ T_{N-1} \circ \ldots \circ T_1$, and global optimality is guaranteed by requiring that the $\rho_t$ are McCann's displacement interpolants \cite{mccann1997convexity} between $\mu$ and $\nu$. This decomposition achieves two goals: \begin{itemize} \item Because every pair of successive $\rho_t$ are close to each other, the corresponding maps $T_t$ are close to the identity, which is the gradient of the strictly convex function $\frac{1}{2} \|x\|^2$. This permits relaxing the requirement that $\phi_t$ be convex in the optimality condition $T_t = \nabla \phi_t$ for the standard quadratic transportation cost. \item Arbitrarily complex maps $T$ can be built through the composition of quite simple maps $T_t$. Thus, the maps over which to optimize each local problem can be reduced to a suitable family depending on just a handful of parameters. 
\end{itemize} \item We formulate the push-forward condition $T_{t\: \#} \rho_{t-1} = \rho_t$ not in terms of the empirical expectation of features but as the minimization of the relative entropy between $T_{t\: \#} \rho_{t-1}$ and $\rho_t$. One advantage of this formulation is that it is a natural relaxation of the push-forward condition when $T_t$ is restricted to a small family of maps, which makes a perfect match between $T_{t\: \#} \rho_{t-1}$ and $\rho_t$ impossible. \item We use a variational characterization of the relative entropy, as the maximizer of a suitable functional over functions $g(x)$. This formulation has three critical properties: \begin{enumerate} \item Since the variational characterization involves expected values of functions over $\rho_{t-1}$ and $\rho_t$, it can be immediately extended to a sample-based scenario, thereby replacing those expected values by empirical means. \item Replacing ``all'' functions $g(x)$ by a suitable family of functions provides a natural relaxation in the presence of finite sample sets. We show that, unlike the maps $T_t$, which produce the global map $T$ via composition, it is the sum of the functions $g_t$ that approximates the global $g$. Moreover, we prove that, if the families of $T_t$ and $g_t$ are built through the linear superposition of a predetermined set of functions, we recover the solution in \cite{kuang2017sample}. \item Each local problem has now been given a minimax formulation (minimize over $T$, maximize over $g$). This has a natural adversarial interpretation: while the ``player'' with strategy $T$ seeks to minimize the discrepancies between $T_{\#} \rho_{t-1}$ and $\rho_t$, its adversary with strategy $g$ develops features to prove that the two distributions have not been perfectly matched. This provides the desired adaptability: the user does not need to provide features adapted to the problem at hand, as these will emerge automatically from its solution.
This facilitates applications across a broad range of problems, including problems with significant features at various, possibly unknown scales. \end{enumerate} \end{enumerate} This paper is organized as follows. After this introduction, section \ref{sec:AOT} describes the methodology and its theoretical underpinning. Subsection \ref{subsec:adv} introduces the variational characterization of the relative entropy that the algorithm uses and concludes with the sample-based minimax formulation of the local optimal transport problem. Subsection \ref{subsec:conn} shows that, when the functions $g$ and potentials $\phi$ are drawn from finite-dimensional linear functional spaces, the solution to the problem agrees with the one obtained in \cite{kuang2017sample} with pre-determined features. Subsection \ref{subsec:duality} proves that the order of minimization and maximization does not matter --that is, that there is no duality gap-- and explains the intuition behind the adversarial nature of the game, by detailing how each player reacts to the other's strategy. Subsection \ref{globalalgo} integrates the local algorithm just described into a global algorithm for the full optimal transport between $\mu$ and $\nu$. Section \ref{sec:algorithm} details the algorithm further. Subsection \ref{subsec:functionalspace} specifies the functional spaces chosen for $g$ and $\phi$, subsection \ref{subsec:localAlgorithm} the procedure used for solving the minimax problem, and subsection \ref{subsec:penalty} the additional penalization terms required for the non-linear components of the functional spaces. Finally, section \ref{sec:experiments} performs some illustrative numerical experiments, applying the new methodology to synthetic low-dimensional data. The focus of these experiments is to display in action, in easy-to-visualize scenarios, the adversarial nature of the formulation.
\section{Adaptive optimal transport} \label{sec:AOT} \subsection{Formulation of the problem: an adversarial approach} \label{subsec:adv} We are given two sample sets $(x_i)_{i=1,..,n}$, $(y_j)_{j=1,..,m} \subset \R^d$ with $n$ and $m$ sample points respectively, independent realizations of two random variables with unknown distributions $\mu$ and $\nu$. Both distributions are assumed to be absolutely continuous with respect to the Lebesgue measure on $\R^d$ and have finite second order moments. By a slight abuse of notation, we will identify the measures and their densities. In this case, Brenier's theorem \cite[p. 66]{villani2003topics} guarantees the existence of a map $T$ pushing forward $\mu$ to $\nu$ and minimizing the transportation cost \begin{equation} \int \left\|T(x) - x\right\|^2 \mu(x) \ dx. \end{equation} From the samples provided, we seek a map $T$ that would perform the transport well when applied to other independent realizations of the unknown distributions $\mu,\nu$. We can assume that the source and target distributions are close: \begin{remark} Solving the problem for nearby distributions is the building block of a general procedure for arbitrary distributions and for finding the Wasserstein barycenter of distributions \cite{kuang2017sample}. This more general procedure is presented in Section \ref{globalalgo}. \end{remark} The OT problem has two main ingredients: the push-forward condition that $(T(x_i))$ and $(y_j)$ have the same distribution and the minimization of the cost. \begin{remark} For the quadratic cost, the optimal solution is the gradient of a convex function $\phi(x)$, $y = T(x) = \nabla\phi(x)$, a convenient characterization. More general cost functions of the type $\ell(x-y)$ would only require modifying $\nabla \phi$ into $x - \nabla \ell^*(\nabla \phi)$, where $\ell^*$ represents the Legendre-Fenchel transform of the strictly convex function $\ell$, in the algorithm presented below.
\end{remark} In \cite{kuang2017sample}, the push-forward condition was formulated in terms of the equality of the empirical expected values of a pre-determined set of feature functions. Instead, we propose a broader and adaptive formulation, in terms of the relative entropy between the two distributions. This introduces some significant improvements: \begin{enumerate} \item Of the two characterizations of equality of distributions: that all test-functions within a broad enough class agree and that their relative entropy vanishes, the latter is far more succinct and easier to enforce. \item Replacing ``all'' test functions by a finite set, though a sensible approximation in the presence of finite sample-sizes, leads to questions of robustness and feature selection. To address this, we will use a variational characterization of the relative entropy, which automatically selects the ``best'' features within a given class. \item For finite sample sets, one would expect the empirical expected values of test functions on the two distributions to agree only in a statistical sense, so requiring their strict equality is somewhat artificial. By contrast, in the new formulation, rather than requiring the relative entropy to vanish, which may be unrealistic for finite sample-sizes and a limited family of maps $T$, we seek to minimize it. \end{enumerate} \begin{definition} For two probability measures $\rho, \nu \in P(\R^d)$, the \emph{Kullback-Leibler divergence} of $\rho$ with respect to $\nu$ --also called their relative entropy-- is defined as \begin{equation} D_{KL}(\rho||\nu) = \int \log\left( \frac{d\rho}{d\nu}\right)d\rho \end{equation} if $\rho$ is absolutely continuous with respect to $\nu$ ($\rho \ll \nu$), and $+\infty$ otherwise.
\end{definition} Solving the optimal transport problem is equivalent to minimizing a Kullback-Leibler divergence, as the following proposition shows: \begin{proposition}\label{prop:KLmin} Let $\mu,\nu \in P(\R^d)$, with $\mu$ absolutely continuous with respect to the Lebesgue measure $m$ on $\R^d$. Let $\mathcal{C}$ be the set of convex functions from $\R^d \to \R$. Define the minimization problem \begin{equation}\label{eq:KLOPT}\tag{KLopt} \inf_{\phi \in \mathcal{C}} D_{KL}(\nabla \phi_{\#} \mu ||\nu) \end{equation} where $\nabla \phi_{\#} \mu (A) = \mu((\nabla \phi)^{-1}(A))$ \footnote{$\nabla \phi$ is well defined $m$-a.e.\ by \cite[Theorem 25.5]{rockafellar1970Convex}, and hence $\mu$-a.e.}, for any Borel measurable set $A$. Then there exists a unique minimizer $\phi$ (up to zero measure sets), which coincides with the minimizer of the 2-Wasserstein distance between $\mu$ and $\nu$: \[ \phi = \arg \inf_{\psi \in \mathcal{C}}D_{KL}(\nabla \psi_{\#} \mu ||\nu) \] and \[ W_2^2(\mu,\nu) = \int |\nabla \phi(x)-x|^2 d\mu(x). \] \end{proposition} \begin{proof} By Brenier's theorem, there exists a unique minimizer $\phi$ (up to zero measure sets) for the 2-Wasserstein problem. The potential $\phi$ is a proper lower semi-continuous convex function and $\nabla \phi_{\#} \mu = \nu$. One easily sees that $\phi$ is the minimizer for the Kullback-Leibler divergence optimization problem \eqref{eq:KLOPT}, since for any measure $\rho \in P(\R^d)$ one has, \begin{equation}\label{eq:DKL} D_{KL}(\rho||\nu) \geq 0 \end{equation} with equality if and only if $\rho = \nu$ almost everywhere (in $\nu$). Inequality \eqref{eq:DKL} is easy to prove: if $\rho$ is not absolutely continuous w.r.t. $\nu$, the Kullback-Leibler divergence is infinite, so the statement is true.
Otherwise, we have \begin{align*} D_{KL}(\rho ||\nu) &= \int \log\left( \frac{d\rho}{d\nu}\right)d\rho\\ &= \int \log\left( \frac{d\rho}{d\nu}\right) \frac{d\rho}{d\nu} d\nu \\ &\geq \left( \int \frac{d\rho}{d\nu} d\nu \right) \log\left(\int \frac{d\rho}{d\nu} d\nu \right) = 0, \end{align*} where we used Jensen's inequality and the convexity of $x \mapsto x \log(x)$. So the equality $D_{KL}(\rho||\nu)=0$ will hold if and only if Jensen's inequality becomes an equality, i.e. if and only if $\frac{d\rho}{d\nu} \equiv 1$, or $\rho = \nu$. In particular, the solution to the optimal transport problem satisfies $\nabla \phi_{\#} \mu = \nu$. Hence \[ D_{KL}(\nabla \phi_{\#} \mu||\nu) = 0, \] which shows that $\phi$ is a minimizer of the optimal transport problem and \eqref{eq:KLOPT}. As for uniqueness, let $\phi_1, \phi_2 \in \mathcal{C}$ be two minimizers. Then $\nabla {\phi_1}_{\#} \mu = \nabla {\phi_2}_{\#} \mu = \nu$ from the statement above. By Brenier's theorem, they both solve the quadratic cost optimal transportation problem, which has a unique solution up to zero measure sets. \end{proof} Recently there has been a push in machine learning to replace the Kullback-Leibler divergence by Wasserstein distances in order to penalize differences in data sets \cite{frogner2015learning, peyre2018computational}. Unlike the Kullback-Leibler divergence, the Wasserstein distance defines a proper distance, enjoys regularity and symmetry properties, and is computationally tractable. Nonetheless, the Kullback-Leibler divergence is well suited to measure the dissimilarities between measures that we are trying to detect. In particular, the asymmetry between the two measures under the Kullback-Leibler divergence is well within the spirit of the problem, as we seek a convex function $\phi$ that makes the transported distribution $\nabla \phi_{\#} \mu$ indistinguishable from the target reference $\nu$. 
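The two facts used in the proof above --nonnegativity of $D_{KL}$, with equality only for identical measures, and the value $+\infty$ in the absence of absolute continuity-- can be checked directly in the discrete case. The helper below is an illustrative sketch of our own, not part of the algorithm.

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions; returns +inf if p is
    not absolutely continuous w.r.t. q (some q_i = 0 where p_i > 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.any((q == 0) & (p > 0)):
        return np.inf
    mask = p > 0                       # convention: 0 log 0 = 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

d0 = kl_divergence([0.5, 0.5], [0.5, 0.5])   # identical measures: 0
d1 = kl_divergence([0.3, 0.7], [0.5, 0.5])   # distinct: strictly positive
d2 = kl_divergence([0.5, 0.5], [1.0, 0.0])   # rho not << nu: infinite
```

Note the asymmetry: swapping the arguments of `d2` gives a finite value, which is exactly the asymmetry exploited in the text, with the target $\nu$ always in the second slot.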
Also, as we shall see, the minimization of the relative entropy captures the differences between the two sample sets far more deftly than does a predefined finite set of test functions. Thus, the biggest drawback in using the Kullback-Leibler divergence appears to be the difficulty in its numerical evaluation, particularly when we do not have access to a closed form expression for $\mu$ and $\nu$, but merely to a finite set of independent samples from each of these distributions. One could resort to density estimation techniques \cite{sheather1991reliable,silverman2018density} to approximate $\mu$ and $\nu$ and then proceed to numerical integration. Instead, we use a variational characterization of the Kullback-Leibler divergence of $\rho$ with respect to $\nu$, in the form of a sample-friendly expression: \begin{proposition}\label{prop:KLvar} Let $\rho, \nu \in P(\R^d)$. Then \[ D_{KL}(\rho||\nu) = 1 + \sup_{g} \left\lbrace \int g d\rho - \int e^g d\nu \right\rbrace \] where the supremum is taken over all Borel measurable functions $g: \R^d \to \R$. \end{proposition} \begin{proof} If we do not have $\rho \ll \nu$, there exists a set $A \subset \R^d$ such that $\rho(A)>0$ and $\nu(A) = 0$. Then \[ 1 +\sup_{g} \left\lbrace \int g d\rho - \int e^g d\nu \right\rbrace \] is infinite, as it can be made arbitrarily large by picking functions of the type $g = c \mathbbm{1}_A$, $c \in \R$. $D_{KL}(\rho||\nu)$ is also infinite in this case. Hence their values agree. When $\rho \ll \nu$, notice that for $\nu$-almost every $x \in \R^d$, \[ g \in \R \mapsto g \frac{d \rho}{d \nu}(x) - e^g \] is concave and maximized for $g(x) = \log \left( \frac{d\rho}{d\nu}(x) \right)$ (note that the Radon-Nikodym derivative $\frac{d\rho}{d\nu}$ is non-negative, $\nu$-a.e.).
Thus, for almost every $x \in \R^d$ and any choice of $g(x) \in \R$, we have: \[ 1 + g(x)\frac{d \rho}{d \nu}(x) - e^{g(x)} \leq 1 + \frac{d\rho}{d\nu}(x) \left[ \log \left( \frac{d\rho}{d\nu}(x) \right) - 1 \right] \] with equality if and only if $g(x) = \log \left( \frac{d\rho}{d\nu}(x) \right)$. Integrating over the measure $\nu$ yields \[ 1 + \int_{\R^d} \left(g(x)\frac{d \rho}{d \nu}(x) - e^{g(x)} \right) d\nu(x) \leq \int_{\R^d} \log \left( \frac{d\rho}{d\nu}(x) \right) d\rho(x) = D_{KL}(\rho||\nu) \] and, thus, one has \[ 1 + \sup_{g} \left\lbrace \int_{\R^d} g(x) d \rho(x) - \int_{\R^d} e^{g(y)} d \nu(y) \right\rbrace = D_{KL}(\rho||\nu) \] since we have equality for \[ g = \log \left( \frac{d\rho}{d\nu} \right) \quad \text{ on the support of } \nu. \] \end{proof} \begin{remark} \begin{enumerate} \item The variational reformulation of the Kullback-Leibler divergence is a consequence of the convexity of $x \mapsto - \log(x)$. Indeed, computing its Legendre-Fenchel transform twice yields: \[ - \log(x) = \sup_{y<0} \left\lbrace x y + 1 - \log\left(-\frac{1}{y} \right) \right\rbrace = \sup_{g \in \R} \{ g - x e^g \} + 1 \] This approach extends to a broader set of f-divergences, yielding similar variational formulations, see \cite{nguyen2010estimating} and \cite{nowozin2016f}. \item A very similar variational formulation was developed in \cite{suzuki2008approximating} to estimate the likelihood of two samples being generated from independent sources. 
\item Note that the variational formulation presented above is very similar to the Donsker-Varadhan formula \cite{donsker1975asymptotic}: \[ \sup_{g} \left\lbrace \int_{\R^d} g(x) d \rho(x) - \log \left(\int_{\R^d} e^{g(y)} d \nu(y) \right) \right\rbrace \] Indeed, $\log(x) \leq x -1$ yields: \[ \sup_{g} \left\lbrace \int_{\R^d} g(x) d \rho(x) - \int_{\R^d} e^{g(y)} d \nu(y) \right\rbrace + 1 \leq \sup_{g} \left\lbrace \int_{\R^d} g(x) d \rho(x) - \log \left(\int_{\R^d} e^{g(y)} d \nu(y) \right) \right\rbrace \] and equality is achieved for the same maximizer $g = \log\left(\frac{d\rho}{d\nu}\right)$, if $\rho \ll \nu$ (otherwise, they are both infinite). The formula in Proposition~\ref{prop:KLvar} can be considered as a linearization of the Donsker-Varadhan formula, easier to implement numerically. \end{enumerate} \end{remark} Given two random variables $Z \sim \rho$ and $Y \sim \nu$ with $\rho \ll \nu$, we can equivalently express the formula in Proposition \ref{prop:KLvar} as: \[ D_{KL}(\rho||\nu) = 1 + \max_{g} \left\lbrace \E[g(Z)] - \E[e^{g(Y)}] \right\rbrace \] If instead, we are given \emph{independent samples} $z_1,...,z_n$ of $Z$, and $y_1,..,y_m$ of $Y$, we can approximate the above reformulation by its empirical counterpart: \[ D_{KL}(\rho||\nu) \approx 1 + \max_{g} \left\lbrace \frac{1}{n} \sum_i g\left(z_i\right) - \frac{1}{m} \sum_j e^{g(y_j)} \right\rbrace \] where the maximization is sought over a suitable class of functions $g$. Theorem 1 of \cite{nguyen2010estimating} shows that if this class of functions \begin{enumerate} \item contains the optimizer $g^* = \log \left( \frac{d \rho}{d \nu} \right)$, \item satisfies the envelope conditions \cite[16a, 16b]{nguyen2010estimating} (e.g. $g$ is bounded), \item satisfies the entropy conditions \cite[17a, 17b]{nguyen2010estimating} (e.g.
Sobolev spaces $\mathcal{W}^{k,2}$ on a compact space), \end{enumerate} then we have Hellinger consistency of this estimator, that is \begin{equation}\label{eq:Hellinger} \int \left( \sqrt{\exp(g^*)} - \sqrt{\exp(g_{n,m})} \right)^2 d\nu \xrightarrow[n,m \to +\infty]{} 0 \end{equation} where $g_{n,m} = \arg\max \left\lbrace \frac{1}{n} \sum_i g\left(z_i\right) - \frac{1}{m} \sum_j e^{g(y_j)} \right\rbrace$. We deduce from Propositions \ref{prop:KLmin} and \ref{prop:KLvar} the following reformulation of the optimal transport problem between $\mu$ and $\nu$, under a quadratic cost, expressed as a minimax problem: \begin{problem}[Minimax reformulation]\label{minimax} \begin{equation*} \min_{\phi} \max_{g} L[\phi,g] \equiv \E\left[g(\nabla \phi(X))\right] - \E\left[e^{g(Y)}\right] \end{equation*} \end{problem} Note that the \emph{Lagrangian} $L$ is concave in the maximization variable $g$, but not necessarily convex in the minimization variable $\phi$. The sample-based version of Problem \ref{minimax} is given by: \begin{problem}[Sample based minimax reformulation]\label{sampleMinimax} \begin{equation*} \min_{\phi} \max_{g} L[\phi,g] \approx \min_{\phi} \max_{g} \left\lbrace \frac{1}{n} \sum_i g\left(\nabla\phi(x_i) \right) - \frac{1}{m} \sum_j e^{g(y_j)} \right\rbrace \end{equation*} \end{problem} over \emph{suitable} function spaces for $\phi(x)$ and $g(y)$, as detailed in Section \ref{sec:algorithm}. This is an adversarial setting, in which the player with strategy $\phi$ attempts to minimize the discrepancies between the distributions underlying the sample sets $\left\{\nabla \phi(x_i)\right\}$ and $\left\{y_j\right\}$, while the player with strategy $g$ attempts to show that the two distributions are in fact different. Thus $g$ would point to those areas where the two distributions differ the most, and $\phi$ would correct those discrepancies. We will see this competition in action in the examples in section \ref{sec:experiments}. 
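The inner maximization over $g$ can be sketched on samples by restricting $g$ to affine functions $g(x) = a \cdot x + b$ and running gradient ascent on the empirical objective $\frac{1}{n} \sum_i g(z_i) - \frac{1}{m} \sum_j e^{g(y_j)}$. This tiny family is an illustrative assumption of our own (though for a unit-variance Gaussian shift it happens to contain the optimizer $\log(d\rho/d\nu)$, which is affine), as are the function names and the toy data.

```python
import numpy as np

def kl_estimate(z, y, steps=3000, lr=0.1):
    """Variational KL estimate from samples z ~ rho, y ~ nu, maximizing
    1/n sum g(z_i) - 1/m sum exp(g(y_j)) over affine g(x) = a . x + b."""
    z, y = z.reshape(len(z), -1), y.reshape(len(y), -1)
    a, b = np.zeros(z.shape[1]), 0.0
    for _ in range(steps):             # gradient ascent; objective is concave
        w = np.exp(y @ a + b)          # e^{g(y_j)}
        a += lr * (z.mean(0) - (w[:, None] * y).mean(0))
        b += lr * (1.0 - w.mean())
    return 1.0 + (z @ a + b).mean() - np.exp(y @ a + b).mean()

rng = np.random.default_rng(0)
y = rng.normal(size=4000)              # samples of nu = N(0, 1)
z = y + 0.5                            # samples of rho = N(0.5, 1); true KL = 0.125
est = kl_estimate(z, y)                # close to 0.125
est0 = kl_estimate(y, y)               # identical sample sets: exactly 0
```

In the full algorithm this maximization is the $g$-player's move for a fixed map; the $\phi$-player then moves the samples $z_i = \nabla\phi(x_i)$ to drive the estimate back toward zero.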
This saddle point optimization problem is reminiscent of the ones encountered in the Generative Adversarial Networks (GAN) literature \cite{nowozin2016f}. Broadly speaking, a GAN learns how to generate a sample from an unknown distribution. To do so, a two-player game is introduced; a parameterized \emph{generator} $Q$ aims to produce samples as `close' as possible to the samples in the training set. This is quantified by the use of an f-divergence (e.g. Kullback-Leibler, Jensen-Shannon, or `GAN' divergence), which is given a variational formulation in the exact same way as is done in Proposition~\ref{prop:KLvar}. This in turn introduces a \emph{discriminator}, whose role is to prove that the generator has not done the right job. Formulated as such, our optimization problem is quite similar to a GAN. Indeed, the generator $Q$ is a distribution which is usually induced by the pushforward of a generic distribution (e.g. standard Gaussian) by a map $T$. This map, as well as the discriminator, is calibrated using neural networks. This is well within the spirit of the method we use to generate the optimal transport map, as well as the function $g$ (see Section \ref{globalalgo}). The main differences between the algorithm presented in \cite{nowozin2016f} and ours are: \begin{enumerate} \item Our map $T$ is restricted to the form $\nabla \phi$ where $\phi$ is convex, in order to solve the quadratic Wasserstein problem. To our knowledge, there are no restrictions on the map in the GAN problem, \item We use a variational formulation of the Kullback-Leibler divergence instead of the `GAN' divergence, \item Instead of using a batch gradient descent for the optimization algorithm, we use what we call `implicit gradient descent', which is described in Section~\ref{subsec:localAlgorithm}. \item Although our method builds the map $T$ and the `discriminant' $g$ through sums or compositions of many non-linear maps, we do not directly use neural networks.
\end{enumerate} \subsection{Connection with the pre-determined features case} \label{subsec:conn} In \cite{kuang2017sample}, a set of `features' $f_1,..,f_K$ serve as test functions to evaluate the statement $\rho = \nu$ for $\rho,\nu \in P(\R^d)$, when we only have sample points $(z_i)_{i=1,..,n}$ and $(y_j)_{j=1,..,m}$ generated from $Z \sim \rho$, $Y \sim \nu$. As in \cite{kuang2017sample}, we will assume that $\mu,\nu$ are `close'. The general case with more distant measures can be reduced to the solution of many local problems, as shown in Algorithm \ref{algo:TOT} below, also borrowed from \cite{kuang2017sample}. \begin{definition} The samples $(z_i)_{i=1,..,n}$ and $(y_j)_{j=1,..,m}$ generated from random variables $Z \sim \rho$, $Y \sim \nu$ are equivalent for the set of features $f_1,..,f_K$ if \[ \frac{1}{n} \sum_{i=1}^n f_k(z_i) = \frac{1}{m} \sum_{j=1}^m f_k(y_j), \quad \forall k=1,..,K \] \end{definition} The definition above is a relaxation of the equivalence $\mu = \nu \Leftrightarrow \E[f(Z)] = \E[f(Y)]$ for all test functions $f \in C_b(\R^d)$. Then solving the transport problem between the samples $(x_i)$ and $(y_j)$ is reduced to finding a map $T$ such that $(T(x_i))_i$ is equivalent to $(y_j)$, for the features $f_1,..,f_K$. In \cite{kuang2017sample}, $T$ is chosen to be of the type : \[ T(x) = \nabla \phi(x) = x + \sum_{k} \alpha_k \nabla \phi_k(x) \] for some pre-determined functions $\phi_1,..,\phi_K$ and constants $\alpha_1,...,\alpha_K$. In fact, the potentials $\phi_k$ adopted in \cite{kuang2017sample} agree with the features $f_k$, but our proposition below applies to more general choices. It shows that the procedure to solve the sample-based optimal transport problem with pre-determined features is a particular instance of Problem \ref{sampleMinimax}. A specific choice of functional space for $g$ will yield this result. Before introducing it, we need a set of compatibility conditions for the choices of possible $\phi$ and $g$. 
\begin{definition} The features $f_k, \: k=1,..,K$ are said to be compatible with the potentials $\phi_k, \: k=1,..,K$ for the sample $(x_i)_{i=1,..,n}$, if the matrix $C \in \R^{K \times K}$ defined as \[ C_{kk'} = \frac{1}{n} \sum_{i=1}^n \nabla \phi_{k}(x_i) \cdot \nabla f_{k'}(x_i) \] is non-singular. \end{definition} This compatibility assumption essentially guarantees the non-degeneracy of the choice of functions, as it restricts the average displacement to affect the features in an independent fashion. It can be summarized by the requirement that $C = \E[J_{\phi} J_f^{\top}]$ is non-singular, where $J_{\phi}, J_f$ are the Jacobian matrices of $\phi,f$. \begin{proposition}\label{predeterminedFeatures} Given a compatible set of features $f_1,..,f_K$ and potentials $\phi_1,..,\phi_K$ for the sample $(x_i)_{i=1,..,n}$, consider Problem \ref{sampleMinimax} using the functional spaces: \[ g(z) = \sum_{k=1}^K \beta_k f_k(z), \quad \phi(x) = \frac{|x|^2}{2} + \sum_{k=1}^K \alpha_k \phi_k(x) \] for $\beta \in \R^K$, $\alpha \in \R^K$ in a small-enough neighborhood of zero. Then the optimizer $\phi$ of Problem \ref{sampleMinimax} for two sample sets close to each other solves the sample-based optimal transport problem with predetermined features, meaning that $(\nabla \phi(x_i))$ is equivalent to $(y_j)$ for the features $f_1,..,f_K$.
That is, \[ \frac{1}{n} \sum_{i=1}^n f_k(\nabla \phi(x_i)) = \frac{1}{m} \sum_{j=1}^m f_k(y_j), \quad \forall k=1,..,K \] \end{proposition} \begin{proof} The Lagrangian $L$ as a function of $\alpha,\beta$ is given by \begin{equation*} \frac{1}{n} \sum_i \left[\sum_{k=1}^K \beta_k f_k\left(x_i + \sum_{l=1}^K \alpha_l \nabla \phi_l(x_i) \right) \right] - \left[\frac{1}{m} \sum_j e^{\sum_{k=1}^K \beta_k f_k(y_j)}\right] \end{equation*} Taking the first order conditions at optimality yields: $$ \nabla_{\alpha} L = C(\alpha) \beta, \quad \text{ where } C(\alpha)_{kk'} = \frac{1}{n} \sum_i \left[\nabla \phi_k(x_i)\cdot\nabla f_{k'}\left(x_i + \sum_{l=1}^K \alpha_l \nabla \phi_l(x_i) \right)\right]. $$ Since the features and potentials are compatible, the matrix $C$ is non-singular; and since $\alpha$ lies in a neighborhood of zero, $C(\alpha)$ is a small perturbation of $C$ and hence non-singular itself. Therefore $$ \nabla_{\alpha} L = 0 \Rightarrow \beta = 0. $$ Moreover, the second optimality condition evaluated at $\beta = 0$ yields, for all $k$: $$ \partial_{\beta_k} L = \frac{1}{n} \sum_i f_k\left(x_i + \sum_{l=1}^K \alpha_l \nabla \phi_l(x_i) \right) - \frac{1}{m} \sum_j f_k(y_j)$$ Hence $\nabla_{\beta} L = 0$ at $\beta = 0$ implies that \[ \frac{1}{n} \sum_i f_k\left(x_i + \sum_{l=1}^K \alpha_l \nabla \phi_l(x_i) \right) = \frac{1}{m} \sum_j f_k(y_j) \] Notice that the closeness of the two sample sets and the compatibility between the potential and features guarantee that this problem has a solution with a small $\alpha$ (in fact, this can be taken as a feature-dependent characterization of what it means for two sample sets to be close to each other). This result means that the empirical expected values of the $f_k$ agree on $\left\{T(x_i)\right\}$ and $\left\{y_j\right\}$, i.e. the samples are equivalent for the features $f_1,...,f_K$. Hence $T=\nabla \phi$ solves the sample-based optimal transport problem with pre-determined features.
\end{proof} Note that we are restricting the maps $\nabla \phi$ to be `small' perturbations of the identity, by choosing $\alpha$ in a neighborhood of $0$. This is because the optimal transport procedure will only be applied to measures or samples that are `close' to each other. In this paper, we will allow $g$ to be more general than a simple linear combination of features, thus greatly expanding the procedure in \cite{kuang2017sample}. This added flexibility yields better adaptability to the most important characteristics of the data. \subsection{Duality} \label{subsec:duality} \subsubsection{No duality gap} Given the Lagrangian $L$ introduced in Problem \ref{minimax}, the primal objective functional to minimize is, according to Proposition \ref{prop:KLvar}: \begin{equation}\label{eq:primal} D[\phi] = \max_g L[\phi,g] = D_{KL} \left( \nabla \phi_{\#}\mu || \nu \right) - 1 \end{equation} The proof in Proposition \ref{prop:dualitygap} shows that the dual objective functional to be maximized is: \begin{equation}\label{eq:dual} d[g] = \min_{\phi} L[\phi,g] = \left(\min_{y \in \R^d} g(y) \right) - \mathbb{E} \left[ e^{g(Y)} \right] \end{equation} A desired property of the adversarial game, defined by the formulation in Problem \ref{minimax}, is the absence of an irreversible advantage or penalty a player gets from playing first. In other words, we do not want a duality gap. This is the content of the following proposition: \begin{proposition}[Absence of duality gap]\label{prop:dualitygap} \[ \min_{\phi} \max_{g} L[\phi,g] = \min_{\phi} D[\phi] = \max_{g} d[g] = \max_{g} \min_{\phi} L[\phi,g] \] \end{proposition} \begin{proof} From Proposition \ref{prop:KLmin}, we know that \[ \min_{\phi} D_{KL}(\nabla \phi_{\# }\mu||\nu) = 0 \] with the minimum attained at the solution of the transport problem.
Hence we get in Equation \eqref{eq:primal} \[ \min_{\phi} \max_{g} L[\phi,g] = \min_{\phi} D[\phi] = -1 \] On the other hand, maximizing Equation \eqref{eq:dual} yields: \[ \max_{g} \min_{\phi} L[\phi, g] = \max_g \left\lbrace \min_{\phi} \mathbb{E}[g(\nabla \phi(X))] - \mathbb{E}\left[e^{g(Y)}\right] \right\rbrace \] Note that the inner minimum is reached for the convex function $\phi(x) = y_{min} \cdot x$ where $\min_{y} g(y) = g(y_{min}) \equiv g_{min}$. In the case where the minimum of $g$ is not reached, take a minimizing sequence $y_{min}^{n}$ such that $g(y_{min}^n) \to \inf_{y \in \R^d} g(y) \equiv g_{min}$. Then a minimizing sequence for the inner minimum in $\phi$ is given by $\phi^n(x) = y_{min}^n \cdot x$. In both cases, \[ \min_{\phi} \mathbb{E}[g(\nabla \phi(X))] = g_{min} \] We are, thus, left with maximizing the dual problem \[ \max_g d[g] = \max_g \left\lbrace g_{min} - \mathbb{E} \left[ e^{g(Y)} \right] \right\rbrace \] Since $\mathbb{E}\left[e^{g(Y)}\right] \geq e^{g_{min}}$, we can always choose $g$ to be the constant function $g_{min}$. We are then left with maximizing \[ \max_{g_{min}} g_{min} - e^{g_{min}} \] which is achieved for $g \equiv g_{min} = 0$. Hence we also have that \[ \max_g d[g] = \max_{g} \min_{\phi}L(\phi, g) = -1 \] \end{proof} \subsubsection{An adversarial view of duality} The optimality conditions for the minimax problem are given by \[ \begin{cases} \nabla \phi \text{ moves mass to where } g \text{ is smallest } \\ g(y) = \log \left( \frac{\nabla \phi_{\#} \mu (y)}{ \nu(y)} \right) \end{cases} \] Examining the primal and dual problems in light of these conditions explains the behavior of the competing players $\phi$ and $g$: \begin{itemize} \item Given a function $g$, $\phi$ will try to move mass from the areas where $g$ is large (i.e. $\nabla \phi_{\#} \mu (y) \ge \nu(y)$) to those where $g$ is small (i.e. $\nabla \phi_{\#} \mu (y) \le \nu(y)$). 
Following this strategy allows this player to minimize the impact of $g$ on the Lagrangian.
\item Given a function $\phi$, $g$ will adapt to get closer to the function $\log \left( \frac{\nabla \phi_{\#} \mu(y)}{ \nu(y)} \right)$, which is large where the push-forward mass is in excess ($\nabla \phi_{\#} \mu (y) \ge \nu(y)$) and vice versa. Following this strategy allows the second player to increase the Lagrangian by focusing on those areas where the push-forward condition has not been fully achieved. \end{itemize}
The game concludes when $g$ becomes constant (necessarily 0) on the support of the distributions. Then $\phi$ does not need to move mass anymore, as it then receives no new directive from $g$.

\subsection{Global algorithm} \label{globalalgo}

One could attempt to directly use a procedure based on Problem \ref{sampleMinimax} to solve the OT problem for any samples $(x_i)$ and $(y_j)$. Such a direct approach, however, would not be universally efficient for the following reasons:
\begin{itemize}
\item If the distributions underlying $(x_i)$ and $(y_j)$ are considerably different, one would require a very rich family of potentials to build a $\phi$ that can perform an accurate transfer.
\item One would also require a rich functional space from which to draw $g$ in order to properly characterize all significant differences in the two data samples.
\item Depending on the parametrization of $\phi$ and $g$, the Lagrangian can be non-convex in the variables parametrizing $\phi$, and non-concave in the variables parametrizing $g$. With distributions that are far apart, this could make the numerical solution depend on the initialization of those parameters.
\item The condition that $\phi$ is a convex function is typically hard to enforce. For nearby distributions, on the other hand, it is satisfied automatically, as $\phi(x)$ is close to the convex potential $\frac{1}{2} \|x\|^2$ corresponding to the identity map.
\end{itemize}
For these reasons, we will solve multiple local optimal transport problems, instead of one global one. More precisely, we will apply Algorithm \ref{algo:TOT}, adapted from Algorithms 2 and 7 in \cite{kuang2017sample}.
\begin{algorithm}
\caption{Theoretical Global Optimal Transport Algorithm (TGOT)} \label{algo:TOT}
\begin{algorithmic}[0]
\Procedure{TGOT}{$\mu,\nu$}
\State $\triangleright$ \textit{Step 1: Initialize intermediate nodes}
\State $N \gets$ number of intermediary steps
\State $\rho_0 \gets \mu, \quad \rho_N \gets \nu$
\For{$t=1,..,N-1$}
\State $\rho_t \gets \frac{N-t}{N}\mu+\frac{t}{N}\nu$ \Comment{ or any arbitrary measure}
\EndFor
\While{\textit{not converged}}
\State $\triangleright$ \textit{Step 2: Forward step}
\For{$t=1,..,N$}
\State Solve the optimal transport problem between $\rho_{t-1}$ and $\rho_t$, as defined in Problem \ref{minimax}. This yields a `local' optimal map $\nabla \phi_t$.
\EndFor
\State $\nabla \phi \gets \nabla \phi_N \circ \nabla \phi_{N-1} \circ \dots \circ \nabla \phi_1$
\State $\triangleright$ \textit{Step 3: Backward step}
\For{$t=1,..,N-1$}
\State $\rho_t \gets (\frac{N-t}{N}Id + \frac{t}{N} \nabla \phi)_{\#} \mu$
\EndFor
\EndWhile
\State \textbf{return} $\nabla \phi$
\EndProcedure
\end{algorithmic}
\end{algorithm}
Theorem 2.4 in \cite{kuang2017sample} proves the convergence of Algorithm \ref{algo:TOT} to the solution of the OT problem. In Algorithm \ref{algo:TOT}, the forward step consists of solving multiple, small, optimal transport problems, addressed in Section \ref{subsec:localAlgorithm}. The backward step back-propagates the final sample computed in the forward pass to all the intermediate samples using McCann's displacement interpolants.
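The backward step only evaluates McCann's displacement interpolants between each source point and its image under the composed map. A minimal one-dimensional sketch (the map `grad_phi` below is an arbitrary monotone stand-in for the composed optimal map, and `backward_step` is our own illustrative helper, not part of the algorithms above):

```python
def backward_step(x, grad_phi, N):
    """Recompute intermediate samples as displacement interpolants
    z_t = ((N-t)/N) * x + (t/N) * grad_phi(x), for t = 0, ..., N."""
    images = [grad_phi(xi) for xi in x]
    return [[(N - t) / N * xi + t / N * yi for xi, yi in zip(x, images)]
            for t in range(N + 1)]

# Illustration: a monotone map (hence the gradient of a convex potential in 1d).
grad_phi = lambda u: 2.0 * u + 1.0
x = [0.0, 0.5, 1.0]
z = backward_step(x, grad_phi, N=4)
assert z[0] == x                            # t = 0 recovers the source sample
assert z[4] == [grad_phi(xi) for xi in x]   # t = N is the fully transported sample
```

Each intermediate sample thus lies on the straight line between a point and its final image, which is exactly the geodesic structure the local problems exploit.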
This procedure, reminiscent of the neural networks of machine learning with their ``hidden layers'' replaced by local optimal transport problems, introduces several advantages:
\begin{itemize}
\item The global solution will be obtained by composition of the local maps:
\begin{equation}\label{eq:compMaps} \nabla \phi = \nabla \phi_N \circ \nabla \phi_{N-1} \circ \dots \circ \nabla \phi_1 \end{equation}
Hence one can choose a small family of maps to solve each local optimal transport problem, and still span a rich family of maps for the global displacement. Note that in our two-player game, we would theoretically have at optimality $\nabla \phi_{\#} \mu = \nu$ and hence the optimal $g$ would be equal to $\log(\nabla \phi_{\#} \mu / \nu) = 0$.
\item If $\rho_t$ and $\rho_{t+1}$ are close, the local OT problem has a solution $\nabla \phi$ that is a small perturbation of the identity, i.e. the gradient of a strictly convex potential. Starting from the identity, the numerical algorithm will explore a small neighborhood around it. If the solution that we seek is in this neighborhood, convexity will be preserved.
\end{itemize}
The global algorithm for finding the optimal map between two distributions known through the samples $(x_i)$ and $(y_j)$ is summarized in Algorithm \ref{algo:SBOT}.
\begin{algorithm}
\caption{Sample Based Global Optimal Transport Algorithm (SBGOT)} \label{algo:SBOT}
\begin{algorithmic}[0]
\Procedure{SBGOT}{$(x_i),(y_j)$}
\State $\triangleright$ \textit{Step 1: Initialize intermediate nodes}
\State $N \gets$ number of intermediary steps
\State $z_0 \gets x, \quad z_N \gets y$
\For{$t=1,..,N-1$}
\State $z_{t,i} \gets \frac{N-t}{N}x_i+\frac{t}{N} y_{\sigma(i)}$ \Comment{ for some $\sigma:\{1,..,n\} \to \{1,..,m\}$ (or any arbitrary samples)}
\EndFor
\While{\textit{not converged}}
\State $\triangleright$ \textit{Step 2: Forward step}
\For{$t=1,..,N$}
\State $z_t \gets SBLOT(z_{t-1},z_t)$ \Comment{see Algorithm \ref{algo:SBLOT}}
\EndFor
\State $\triangleright$ \textit{Step 3: Backward step}
\For{$t=1,..,N-1$}
\State $z_t \gets \frac{N-t}{N}x + \frac{t}{N}z_N$
\EndFor
\EndWhile
\State \textbf{return} $z_N$
\EndProcedure
\end{algorithmic}
\end{algorithm}
Algorithm \ref{algo:SBLOT} in Section \ref{sec:algorithm} further details the procedure that solves the sample-based local optimal transport problem.
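As a numerical sanity check on the duality discussion in Section \ref{subsec:duality}: for discrete distributions, the variational identity $\max_g \{\mathbb{E}_\rho[g] - \mathbb{E}_\nu[e^{g}]\} = D_{KL}(\rho\,||\,\nu) - 1$, with pointwise maximizer $g = \log(\rho/\nu)$, can be verified directly. A minimal sketch with arbitrary illustrative distributions:

```python
import math
import random

def dual_value(rho, nu, g):
    """Dual objective: E_rho[g] - E_nu[exp(g)]."""
    return (sum(r * gi for r, gi in zip(rho, g))
            - sum(n * math.exp(gi) for n, gi in zip(nu, g)))

def kl(rho, nu):
    """Kullback-Leibler divergence D_KL(rho || nu) for discrete distributions."""
    return sum(r * math.log(r / n) for r, n in zip(rho, nu))

rho = [0.2, 0.5, 0.3]
nu = [0.4, 0.4, 0.2]

# The pointwise maximizer is g*(i) = log(rho_i / nu_i), attaining D_KL - 1.
g_star = [math.log(r / n) for r, n in zip(rho, nu)]
assert abs(dual_value(rho, nu, g_star) - (kl(rho, nu) - 1.0)) < 1e-12

# Any perturbation of g* can only decrease the (concave) dual objective.
random.seed(0)
for _ in range(100):
    g = [gi + random.uniform(-1.0, 1.0) for gi in g_star]
    assert dual_value(rho, nu, g) <= dual_value(rho, nu, g_star) + 1e-12
```

The second loop reflects the concavity of $g \mapsto \mathbb{E}_\rho[g] - \mathbb{E}_\nu[e^g]$, which is what makes the inner maximization over $g$ well behaved.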
https://arxiv.org/abs/2212.12401
Bakry-Émery curvature sharpness and curvature flow in finite weighted graphs. II. Implementation
In this second part of a sequence of two papers, we discuss the implementation of a curvature flow on weighted graphs based on the Bakry-Émery calculus. This flow can be adapted to preserve the Markovian property and its limits as time goes to infinity turn out to be curvature sharp weighted graphs. After reviewing some of the main results of the first paper concerned with the theoretical aspects, we present various examples (random graphs, paths, cycles, complete graphs, wedge sums and Cartesian products of complete graphs, hypercubes) and exhibit further properties of this flow. One particular aspect in our investigations is asymptotic stability and instability of curvature flow equilibria. The paper ends with a description of the available Python functions and routines available in the ancillary file. We hope that the explanations of the Python implementation via examples will help users to carry out their own curvature flow experiments.
\section{Introduction} This paper is concerned with computational aspects of a curvature flow on weighted graphs based on the Bakry-\'Emery calculus. This curvature flow was introduced in our first paper \cite{CKLMPS-22}. A \emph{weighted graph} in this paper is a finite simple mixed combinatorial graph $G=(V,E)$ with vertex set $V$ and edge set $E = E^1 \cup E^2$ of one- and two-sided edges, together with a weighting scheme of transition rates $p_{xy} \ge 0$ for $x,y \in V$ (which can be represented by a generally non-symmetric matrix $P$ after an enumeration of the vertices). Transition rates $p_{xy}$ can only be positive if $x=y$ or if there is a one- or two-sided edge from $x$ to $y$. One-sided edges are denoted by ordered pairs $(x,y) \in E^1 \subset V^2$, and two-sided edges are denoted by sets $\{x,y\} \in E^2$. One- or two-sided edges $(x,y) \in E^1$ or $\{x,y\} \in E^2$ with vanishing transition rates $p_{xy}=0$ are called \emph{degenerate}, and a weighted graph $(G,P)$ is called \emph{non-degenerate} if it does not have degenerate edges. A weighted graph $(G,P)$ is called \emph{Markovian} if we have $\sum_{y \in V} p_{xy} = 1$ for all $x \in V$. In this case, $P$ is a stochastic matrix and we can view the transition rates $p_{xy}$ as transition probabilities of a lazy random walk (with non-zero laziness if there exists a vertex $x \in V$ with $p_{xx} > 0$). Our curvature flow does not affect the underlying combinatorial graph $G$, but it changes the weighting scheme. We will focus on a version of our flow which preserves the Markovian property. In other words, starting with an initial Markovian weighted graph $(G,P_0)$, this flow will provide a family of Markovian weighting schemes $\{ P(t) \}_{t \ge 0}$ with $P(0) = P_0$, depending smoothly on the continuous time parameter $t$. Before we introduce our curvature flow, we need to briefly discuss the relevant Bakry-\'Emery curvature background.
This curvature notion is based on the weighted Laplacian $\Delta = \Delta_P$, acting on functions $f: V \to \mathbb{R}$ as follows: $$ \Delta_P f(x) = \sum_{y \in V} p_{xy}(f(y)-f(x)). $$ The Laplacian gives rise to the following symmetric bilinear ``carr{\'e} du champ operators'' $\Gamma$ and $\Gamma_2$: \begin{eqnarray*} 2 \Gamma(f,g) &=& \Delta(fg) - f \Delta g - g \Delta f, \\ 2 \Gamma_2(f,g) &=& \Delta \Gamma(f,g) - \Gamma(f,\Delta g) - \Gamma(g,\Delta f). \end{eqnarray*} \subsection{Bakry-\'Emery curvature and curvature sharpness} Bakry-\'Emery curvature depends on a dimension parameter $N$ and is well-defined at every vertex $x \in V$ which is \emph{not isolated}, that is, there exists another vertex $y \in V$ with $p_{xy} > 0$. For isolated vertices, there is some ambiguity in how to define their curvature, and we decided to assign to such a vertex the curvature value $0$.\footnote{Another natural choice of curvature for an isolated vertex $x \in V$ would be $K_N(x) = \infty$ for all $N \in (0,\infty]$. An argument for that choice is that an isolated vertex can be viewed as a discrete analogue of a limit of round spheres with radii shrinking to $0$, whose curvatures would diverge to infinity.} The definition of Bakry-\'Emery curvature reads as follows: \begin{defin}[Bakry-\'Emery curvature] The \emph{Bakry-\'Emery curvature} of a \emph{non-isolated} vertex $x \in V$ for a fixed dimension $N \in (0,\infty]$ is the supremum of all values $K \in \mathbb{R}$, satisfying the \emph{curvature-dimension inequality} \begin{equation} \label{eq:cd-ineq} \Gamma_2(f)(x) \ge \frac{1}{N} (\Delta f(x))^2 + K\, \Gamma(f)(x) \end{equation} for all functions $f: V \to \mathbb{R}$. We use the simplified notation $\Gamma(f) = \Gamma(f,f)$ and $\Gamma_2(f) = \Gamma_2(f,f)$. We denote the curvature at $x \in V$ by $K_N(x) = K_{P,N}(x)$. If $x \in V$ is isolated, that is, we have $p_{xy} = 0$ for all $y \in V \setminus \{x\}$, we set $K_N(x) = K_{P,N}(x)= 0$ for all $N \in (0,\infty]$.
\end{defin} This curvature notion is motivated by Bochner's identity (see, e.g., \cite[Prop. 4.15]{GHL-04}), a fundamental pointwise formula in the smooth setting of $n$-dimensional Riemannian manifolds involving gradients, Laplacians, Hessians and Ricci curvature. We refer readers to \cite{BE-84, Elw-91, Schm-99, LY-10} for the Bakry-\'Emery calculus and its application in the graph theoretic setting. By definition, the inequality \begin{equation} \label{eq:cd-ineq-f} \Gamma_2(f)(x) \ge \frac{1}{N} (\Delta f(x))^2 + K_N(x)\, \Gamma(f)(x) \end{equation} holds for every function $f$, and therefore also for the combinatorial distance function $d(x,\cdot)$. Here, $d(x,y)$ is the length of a shortest directed path from $x$ to $y$ (if there is no such path, we set $d(x,y) = \infty$). If we have equality at $x$ in \eqref{eq:cd-ineq-f} for this particular function $f=d(x,\cdot)$, we say that the vertex $x \in V$ is $N$-curvature sharp. Curvature sharpness will be particularly important in our considerations; it was originally introduced in \cite[Definition 1.4]{CLP-20}. The (equivalent) definition given in this paper is inspired by \cite[Proof of Theorem 1.2]{KKRT-16}. For more details about relations between different curvature sharpness definitions see Section 3 of our first paper \cite{CKLMPS-22}. \begin{defin}[Curvature sharpness] Let $(G,P)$ be a weighted graph and $N \in (0,\infty]$. A vertex $x \in V$ is called \emph{$N$-curvature sharp} if we have \begin{equation} \label{eq:cd-up-bd} \Gamma_2(f)(x) = \frac{1}{N} (\Delta f(x))^2 + K_N(x)\, \Gamma(f)(x) \end{equation} for the distance function $f = d(x, \cdot)$. Moreover, a vertex $x \in V$ is \emph{curvature sharp} if it is curvature sharp for some dimension $N \in (0,\infty]$. A weighted graph $(G,P)$ is called \emph{curvature sharp} if every vertex of $G$ is curvature sharp.
\end{defin} Note that each function $f: V \to \mathbb{R}$ with $\Gamma(f)(x) \neq 0$ gives rise to an upper curvature bound $K_{P,N}^{f}(x)$ via the inequality \eqref{eq:cd-ineq-f}. Namely, we have \begin{equation} \label{eq:KNf-bd} K_N(x) \le K_{P,N}^{f}(x):= \frac{1}{\Gamma(f)(x)} \left( \Gamma_2(f)(x) - \frac{1}{N}(\Delta f(x))^2 \right). \end{equation} A vertex $x \in V$ is therefore $N$-curvature sharp if its Bakry-\'Emery curvature $K_N(x)$ agrees with the specific upper curvature bound $K_{P,N}^{d(x,\cdot)}(x)$. We would also like to mention the following monotonicity property of curvature sharpness: if $x \in V$ is $N$-curvature sharp, then this vertex is also $N'$-curvature sharp for every dimension $N' \le N$ (see \cite[Prop. 3.1]{CKLMPS-22}). In the next subsection, we present a reformulation of Bakry-\'Emery curvature at $x \in V$ using a specific matrix $Q(x)$, which will be central to the definition of the curvature flow. \subsection{Reformulation of curvature via a Schur complement} The combinatorial distance function allows us to define distance spheres and distance balls, \begin{eqnarray*} S_r(x) &=& \{ z \in V: d(x,z) = r \}, \\ B_r(x) &=& \{ z \in V: d(x,z) \le r \}. \end{eqnarray*} Let $x \in V$ be a non-isolated vertex. It turns out that the Bakry-\'Emery curvature $K_N(x)$ is determined locally, that is, can be derived solely from the information about the $2$-ball $$B_2(x) = \{x\} \cup S_1(x) \cup S_2(x).
$$ More precisely, denoting $S_1(x) = \{y_1,\dots,y_m\}$ and $S_2(x) = \{z_1,\dots,z_n\}$, there exist a column vector $\Delta(x)$ and a symmetric matrix $\Gamma(x)$ of size $m$ and a symmetric matrix $\Gamma_2(x)$ of size $m+n$ such that, for functions $f,g: V \to \mathbb{R}$ with $f(x) = g(x) = 0$, \begin{eqnarray*} \Delta f(x) &=& \Delta(x)^\top \vec{f}_m, \\ \Gamma(f,g)(x) &=& \vec{f}_m^\top \Gamma(x) \vec{g}_m, \\ \Gamma_2(f,g)(x) &=& \vec{f}_{m+n}^\top \Gamma_2(x) \vec{g}_{m+n}, \end{eqnarray*} where $\vec{f}_m = (f(y_1),\dots,f(y_m))^\top$ and $$ \vec{f}_{m+n} = (f(y_1),\dots,f(y_m),f(z_1),\dots,f(z_n))^\top, $$ and $\vec{g}_m, \vec{g}_{m+n}$ accordingly. Using the $(m+n)$-block decomposition $$ \Gamma_2(x) = \begin{pmatrix} \Gamma_2(x)_{S_1} & \Gamma_2(x)_{S_1,S_2} \\ \Gamma_2(x)_{S_2,S_1} & \Gamma_2(x)_{S_2} \end{pmatrix} $$ and employing the Schur complement $$ Q(x) = \Gamma_2(x)_{S_1} - \Gamma_2(x)_{S_1,S_2} \Gamma_2(x)_{S_2}^\dagger \Gamma_2(x)_{S_2,S_1} $$ for matrices $\Gamma_2(x)$ with positive semidefinite $\Gamma_2(x)_{S_2}$-blocks with $A^\dagger$ the pseudoinverse\footnote{For a given matrix $A \in \mathbb{R}^{N \times M}$, its pseudoinverse $A^\dagger \in \mathbb{R}^{M \times N}$ is defined by the following conditions: $A A^\dagger A = A$, $A^\dagger A A^\dagger = A^\dagger$, and $A A^\dagger \in \mathbb{R}^{N\times N}$ and $A^\dagger A \in \mathbb{R}^{M \times M}$ are both symmetric matrices.} of $A$, we can reformulate Bakry-\'Emery curvature at $x \in V$ for dimension $N \in (0,\infty]$ as follows (see Section 2.4 in \cite{CKLMPS-22}): \medskip \fbox{\parbox{\textwidth}{ Let $x \in V$ be a non-isolated vertex. $K_N(x)$ is then the maximum of all $K \in \mathbb{R}$ such that $$ Q(x) - \frac{1}{N} \Delta(x)\Delta(x)^\top - K \Gamma(x) \succeq 0, $$ where $A \succeq B$ means that $A-B$ is positive semidefinite.
}} \medskip This curvature translation was motivated originally by the aim to reformulate the computation of Bakry-\'Emery curvature as an eigenvalue problem (see \cite{Sic-20,Sic-21} and \cite{CKLP-22}). The symmetric matrix $Q(x)$ of a non-isolated vertex $x \in V$ is of size $m$ and is -- in the non-degenerate case -- closely related to another symmetric matrix $A_\infty(x)$ which, in turn, can be viewed as a discrete counterpart of the Ricci curvature tensor at a point $x \in M$ of a Riemannian manifold $(M,g)$ (see formula (1.2) and Section 7 in \cite{CKLP-22}). In the case of a Markovian weighted graph, curvature sharpness at a vertex $x \in V$ can also be alternatively expressed with the help of the matrix $Q(x)$ as follows. \begin{thm}[see Theorem 1.3 in \cite{CKLMPS-22}] \label{thm:main} Let $(G,P)$ be a Markovian weighted graph and $x \in V$ be a non-isolated vertex with $S_1(x) = \{y_1,\dots,y_m\}$. Then the following statements are equivalent: \begin{itemize} \item[(1)] $x$ is curvature sharp, \item[(2)] $x$ is curvature sharp for dimension $N=2$, \item[(3)] We have \begin{equation} \label{eq:curv-sharp-Q} Q(x) {\bf{1}}_m = \frac{1}{2} K_{P,\infty}^{d(x,\cdot)}(x) {\bf{p}}_x, \end{equation} where ${\bf{p}}_x = (p_{xy_1},\dots,p_{xy_m})^\top$ and ${\bf{1}}_m$ is the all-one column vector of size $m$. \end{itemize} \end{thm} Note that the term $K_\infty^{d(x,\cdot)}(x)$ in \eqref{eq:curv-sharp-Q} is the upper curvature bound introduced in \eqref{eq:KNf-bd} (in the special case $N = \infty$), that is, $$ K_{P,\infty}^{d(x,\cdot)}(x) = \frac{\Gamma_2(d(x,\cdot))(x)}{\Gamma(d(x,\cdot))(x)}. $$ \subsection{Curvature flow} Let $(G,P_0)$ be a fixed initial weighted graph with $N = |V|$. 
For every non-isolated vertex $x \in V$, the size of the corresponding symmetric matrix $Q(x)$ agrees with the degree of the vertex $x$, that is, the number of one- and two-sided edges emanating from $x$, and the entries of $Q(x)$ are determined by the transition rates of edges of the $2$-ball $B_2(x)$. Our curvature flow associates to this initial data a smooth matrix valued function $P: [0,\infty) \to \mathbb{R}^{N \times N}$ with $P(0) = P_0$. The corresponding symmetric $Q$-matrices at time $t \in [0,\infty)$ depend on the weighting schemes $P(t)$, and we denote them henceforth by $Q_x(t)$ for all $x \in V$. Our curvature flow is now defined as follows. \begin{defin}[Curvature flow] Let $(G,P_0)$ be a finite weighted graph. The associated \emph{curvature flow} is given by the following differential equations for all non-isolated vertices $x \in V$ and all $t \ge 0$: \begin{eqnarray} p_{xx}'(t) &=& 0, \label{eq:laziness} \\ {\bf{p}}_x'(t) &=& - 4Q_x(t) {\bf{1}}_m + 2 C_x(t) {\bf{p}}_x(t), \label{eq:flowdiffeq} \end{eqnarray} where $S_1(x) = \{y_1,\dots,y_m\}$ and $$ {\bf{p}}_x(t) = (p_{xy_1}(t),\dots,p_{xy_m}(t))^\top. $$ In the case of an isolated vertex $x \in V$, its curvature flow is given by the simple equation $p_{xx}'(t) = 0$, that is $p_{xx}(t)$ is a constant function in $t$. \end{defin} We note that the curvature flow equation \eqref{eq:laziness} guarantees that the diagonal entries of the weighting scheme do not change. The functions $C_x(t)$ in the curvature flow equation \eqref{eq:flowdiffeq} play the role of a normalization since, for the choice $C_x \equiv 0$, various transition rates will be unbounded as the time parameter $t \ge 0$ increases. Note that in the smooth case of closed Riemannian manifolds $(M,g_0)$, a suitable normalization leads to volume preservation of $(M,g_t)$ under the Ricci curvature flow.
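For experimenting with such flows numerically, the operators $\Delta_P$, $\Gamma$ and $\Gamma_2$ from the introduction are straightforward to evaluate on a concrete weighted graph. The following minimal sketch (the triangle graph and its Markovian weights are arbitrary illustrative choices, and the helper functions are ours, not part of the ancillary file) implements the defining identities and checks the explicit formula $\Gamma(f)(x) = \frac{1}{2}\sum_y p_{xy}(f(y)-f(x))^2$:

```python
def laplacian(P, f):
    """Weighted Laplacian: (Delta f)(x) = sum_y p_xy (f(y) - f(x))."""
    n = len(P)
    return [sum(P[x][y] * (f[y] - f[x]) for y in range(n)) for x in range(n)]

def gamma(P, f, g):
    """Carre du champ: 2 Gamma(f,g) = Delta(fg) - f Delta g - g Delta f."""
    fg = [fi * gi for fi, gi in zip(f, g)]
    dfg, df, dg = laplacian(P, fg), laplacian(P, f), laplacian(P, g)
    return [0.5 * (dfg[x] - f[x] * dg[x] - g[x] * df[x]) for x in range(len(P))]

def gamma2(P, f, g):
    """Iterated operator: 2 Gamma_2(f,g) = Delta Gamma(f,g) - Gamma(f, Delta g) - Gamma(g, Delta f)."""
    d_gamma = laplacian(P, gamma(P, f, g))
    a = gamma(P, f, laplacian(P, g))
    b = gamma(P, g, laplacian(P, f))
    return [0.5 * (d_gamma[x] - a[x] - b[x]) for x in range(len(P))]

# A Markovian weighting scheme without laziness on a triangle (rows sum to 1).
P = [[0.0, 0.6, 0.4],
     [0.3, 0.0, 0.7],
     [0.5, 0.5, 0.0]]
f = [1.0, -2.0, 3.0]

# Gamma(f)(x) must equal (1/2) sum_y p_xy (f(y) - f(x))^2 at every vertex.
for x in range(3):
    explicit = 0.5 * sum(P[x][y] * (f[y] - f[x]) ** 2 for y in range(3))
    assert abs(gamma(P, f, f)[x] - explicit) < 1e-12
```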
Our aim is to preserve the Markovian property, and it was shown in our first paper that the curvature flow preserves this property if we choose the normalization functions \begin{equation} \label{eq:CxMark} C_x(t) = K_{P(t),\infty}^{d(x,\cdot)}(x). \end{equation} Let us give the explicit formulas for the curvature flow equation \eqref{eq:flowdiffeq} for this particular choice of $C_x(t)$, where $y,y',y''$ always represent vertices in $S_1(x)$ (see \cite[formula (66)]{CKLMPS-22}): \begin{multline} \label{eq:flowdiffeq-explicit} p_{xy}'(t) = \\ p_{xy}(t)\left( -4p_{yx}(t) - 2 \sum_{y' \neq y} p_{yy'}(t) + \frac{4}{D_x} \sum_{y'} p_{xy'}(t)p_{y'x}(t) + \frac{1}{D_x}\sum_{y',y''} p_{xy'}(t)p_{y'y''}(t) - p_{yy}(t) \right) \\+ \underbrace{\sum_{y' \neq y} p_{xy'}(t)p_{y'y}(t)}_{(*)}. \end{multline} Here we use $D_x = \sum_{y'} p_{xy'}(t) = 1 - p_{xx}(t)$. Note that \eqref{eq:laziness} guarantees that $D_x \le 1$ is independent of the time parameter $t$. The following theorem collects some fundamental properties of the normalized curvature flow. \begin{thm}[see Theorem 1.5 and Prop. 1.6 in \cite{CKLMPS-22}] \label{thm:curvflowprops} Let $(G,P_0)$ be a Markovian weighted graph. Then the curvature flow $(G,P(t))_{t \ge 0}$ associated to $(G,P_0)$ with normalization \eqref{eq:CxMark} is well defined for all $t \ge 0$ and preserves the Markovian property. If $(G,P_0)$ is non-degenerate, then $(G,P(t))$ is also non-degenerate for all $t \ge 0$. Moreover, if the flow converges for $t \to \infty$ to $P^\infty = \lim_{t \to \infty} P(t)$, then the weighted graph $(G,P^\infty)$ is curvature sharp. \end{thm} We would like to emphasize that, even in the Markovian case, a flow limit $P^\infty = \lim_{t \to \infty} P(t)$ of a non-degenerate weighted graph $(G,P_0)$ is in most cases no longer non-degenerate, despite the fact that all weighting schemes $P(t)$ for finite $t \ge 0$ are non-degenerate.
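The explicit right-hand side \eqref{eq:flowdiffeq-explicit} can be evaluated directly from the local transition rates. The following sketch (with arbitrary illustrative rates; the helper `flow_rhs` is ours and not part of the ancillary file) checks that the derivatives $p_{xy}'(t)$ sum to zero over $y \in S_1(x)$, which is precisely what preserves the Markovian property along the flow:

```python
def flow_rhs(p_x, p_yx, P_S1):
    """Right-hand side of the normalized curvature flow at a vertex x.

    p_x[i]     = p_{x y_i}, rates from x to its neighbours y_1, ..., y_m,
    p_yx[i]    = p_{y_i x}, rates back to x,
    P_S1[i][j] = p_{y_i y_j}, rates among the neighbours (diagonal = laziness).
    """
    m = len(p_x)
    D = sum(p_x)                                     # D_x = 1 - p_xx
    A = sum(p_x[j] * p_yx[j] for j in range(m))      # sum_{y'} p_xy' p_y'x
    B = sum(p_x[j] * P_S1[j][k]                      # sum_{y',y''} p_xy' p_y'y''
            for j in range(m) for k in range(m))
    dp = []
    for i in range(m):
        bracket = (-4 * p_yx[i]
                   - 2 * sum(P_S1[i][j] for j in range(m) if j != i)
                   + 4 * A / D
                   + B / D
                   - P_S1[i][i])
        inflow = sum(p_x[j] * P_S1[j][i] for j in range(m) if j != i)
        dp.append(p_x[i] * bracket + inflow)
    return dp

# Arbitrary positive rates with sum(p_x) = D_x <= 1.
p_x = [0.3, 0.5, 0.2]
p_yx = [0.25, 0.4, 0.1]
P_S1 = [[0.0, 0.3, 0.2],
        [0.1, 0.0, 0.4],
        [0.2, 0.5, 0.0]]
assert abs(sum(flow_rhs(p_x, p_yx, P_S1))) < 1e-12   # row sum of derivatives vanishes
```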
In other words, some transition probabilities converge to zero under our normalized curvature flow, as time tends to infinity. \section{Curvature flow examples} \label{sec:curv-flow-ex} In this section, we investigate normalized curvature flows on some unmixed combinatorial graphs $G = (V,E)$. By \emph{unmixed} we mean that $G$ does not have one-sided edges. We assume in all examples and statements in this section that our graphs $G=(V,E)$ are finite, simple, unmixed and connected and that our initial weighting schemes $P_0= P(0)= (p_{xy}(0))_{x,y \in V}$ are non-degenerate Markovian without laziness (even if we do not mention this). Curvature flow limits $(G,P^\infty)$ with $P^\infty = \lim_{t \to \infty} P(t)$ are necessarily curvature sharp by Theorem \ref{thm:curvflowprops} above. We do not know of any initial Markovian weighted graph whose curvature flow does not converge as $t \to \infty$. Moreover, we know that every finite connected graph with at least two vertices admits many curvature sharp Markovian weighting schemes without laziness (see \cite[Theorem 1.10]{CKLMPS-22}). These facts give rise to the following conjecture. \begin{conj}[see Conjecture 1.7 in \cite{CKLMPS-22}] The curvature flow $(G,P(t))_{t \ge 0}$ with normalization \eqref{eq:CxMark} converges for any initial condition $(G,P_0)$ as $t \to \infty$. \end{conj} Let us now address some practical aspects of the curvature flow implementation. The solution of the curvature flow is computed numerically by the Runge-Kutta (RK4) method, which is based on a time discretization with time increments $dt > 0$. In the following examples we choose the step sizes $dt = 0.1$ and $dt = 0.3$. In order to distinguish between the theoretical curvature flow and its implementation, we refer to the latter as the \emph{numerical curvature flow}. Since a numerical curvature flow cannot run forever, a suitable numerical convergence criterion needs to be introduced.
Our convergence criterion is based on the parameter ${\rm{lim}}_{\rm{tolerance}} > 0$. We say that a numerical curvature flow solution $(P(t))_{t \ge 0}$ has converged numerically at time $t$ (with respect to the parameter ${\rm{lim}}_{\rm{tolerance}}$), if all the entries of $P(t)$ differ from the corresponding entries of $P(t+10)$ and $P(t+20)$ by less than ${\rm{lim}}_{\rm{tolerance}}$. The numerical flow limit is then defined to be $P(t)$. In all examples to follow we set ${\rm{lim}}_{\rm{tolerance}} = 0.001$. In this section, we are particularly interested in numerical flow limits which are not \emph{numerically totally degenerate}. An unmixed weighted graph $(G,P)$ is called \emph{numerically totally degenerate} if there are no edges with numerical non-zero transition rates in both directions, where we consider a transition rate $p_{xy}$ as numerically non-zero (with respect to a parameter ${\rm{threshold}} > 0$), if and only if $p_{xy} \ge {\rm{threshold}}$. In all examples to follow we set ${\rm{threshold}} = 0.001$. \subsection{Random graphs} In this subsection we investigate the numerical curvature flow for random weighted graphs $(G,P_0)$ with vertex set $V$. The edge set $E$ of $G$ is generated by an Erd\H{o}s-R\'enyi process, that is, any pair of vertices is independently and randomly connected by a two-sided edge with a probability $p \in [0,1]$. Similarly, we choose a random initial weighting scheme $P_0$ with the property that all non-zero transition rates $p_{vv'}(0)$ lie in the interval $[{\rm{threshold}},1]$, for some positive parameter ${\rm{threshold}} > 0$. We are interested in properties of numerical flow limits of these graphs. If the parameter $p > 0$ in the Erd\H{o}s-R\'enyi process is too small, these flow limits are always numerically totally degenerate. A reasonable choice to obtain flow limits which are not numerically totally degenerate for such random graphs with, say, $10$ vertices in roughly half of the cases is $p = 0.7$.
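The construction of such random initial data can be sketched as follows (the routine below is a simplified stand-in for the actual functions `rand_adj_mat` and `randomizer` from the ancillary file; note that the row normalization may in principle push a rate slightly below the threshold, a corner case we ignore here):

```python
import random

def random_markov_weights(n, p, threshold=0.001, seed=0):
    """Erdos-Renyi graph on n vertices together with a random Markovian
    weighting scheme without laziness (zero diagonal, rows summing to 1)."""
    rng = random.Random(seed)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:               # two-sided edge with probability p
                A[i][j] = A[j][i] = 1
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        raw = [rng.uniform(threshold, 1.0) if A[i][j] else 0.0 for j in range(n)]
        s = sum(raw)
        if s > 0:                              # isolated vertices keep a zero row
            P[i] = [r / s for r in raw]
    return A, P

A, P = random_markov_weights(10, 0.7)
for i, row in enumerate(P):
    assert row[i] == 0.0                       # no laziness
    if any(A[i]):
        assert abs(sum(row) - 1.0) < 1e-12     # Markovian row
```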
\begin{ex}[A random graph with $10$ vertices] \label{ex:random-graph} Let $(G,P_0)$ be the unmixed weighted Markovian graph with vertex set $V = \{v_0,\dots,v_9\}$ and
$$ P_0 = \begin{pmatrix}
0& 0.1& 0.08& 0.17& 0& 0.28& 0.21& 0.08& 0& 0.08\\
0.08& 0& 0& 0.16& 0& 0.2& 0.07& 0.3& 0.04& 0.15\\
0.27& 0& 0& 0& 0& 0& 0.3& 0& 0& 0.43\\
0.02& 0.19& 0& 0& 0.17& 0.17& 0.11& 0.34& 0& 0\\
0& 0& 0& 1& 0& 0& 0& 0& 0& 0\\
0.04& 0.21& 0& 0.41& 0& 0& 0.34& 0& 0& 0\\
0.06& 0.29& 0.14& 0.12& 0& 0.3& 0& 0.09& 0& 0\\
0.08& 0.31& 0& 0.19& 0& 0& 0.23& 0& 0.19& 0\\
0& 0.13& 0& 0& 0& 0& 0& 0.25& 0& 0.62\\
0.1& 0.33& 0.38& 0& 0& 0& 0& 0& 0.19& 0
\end{pmatrix} $$
This randomly generated initial graph is illustrated on the left hand side of Figure \ref{fig:random-graph}. The numerical curvature flow of $(G,P_0)$ has numerical convergence time $t_{\rm{max}} = 20.7$ (with respect to ${\rm{lim}}_{\rm{tolerance}} = 0.001$). The numerical flow limit $(G,P(t_{\rm{max}}))$ is presented on the right hand side of Figure \ref{fig:random-graph}. Let us briefly explain the illustration of the edges of this flow limit: Edges with numerical non-zero transition rates in both directions are displayed in green. Edges with only one-sided numerical non-zero transition rates are displayed as dashed red lines with arrows. Edges whose transition rates shrink in both directions numerically to zero under the curvature flow are displayed as dotted black lines. The corresponding non-zero transition rates are written along these edges. The vertices of the green edges of the flow limit in Figure \ref{fig:random-graph} are given by $$ W = \{ v_0, v_1, v_3, v_5, v_6, v_7 \}. $$ Since there are no numerical non-zero transition rates from these vertices to the other vertices $v_2, v_4, v_8, v_9 \in V \setminus W$, the vertex set $W$ together with the green edges and their transition rates represents a highly connected non-degenerate Markovian weighted subgraph $(G_W,P_W)$.
The combinatorial graph $G_W$ with vertex set $W$ can be viewed as a double cone over the complete graph $K_4$ of the four vertices $v_0, v_1, v_3, v_6$ with the vertices $v_5,v_7$ as its cone tips. The transition rates of the weighting scheme $P_W$ towards all vertices in $K_4$ are $0.25 = 1/4$, all transition rates towards $v_5$ are $x=0.05$ and all transition rates towards $v_7$ are $y=0.2$. Note that such a weighted double cone over $K_4$ with these transition rates for any choice of $x,y > 0$ satisfying $x+y = 1/4$ is curvature sharp (this follows readily from the geometric criterion in \cite[Theorem 3.15]{CKLMPS-22}, since the weighting scheme is volume homogeneous in all vertices and reversible with $\pi(v_5) = 4x/5$, $\pi(v_7) = 4y/5$ and $\pi\vert_{K_4} \equiv 1/5$). Therefore, not only the flow limit itself is curvature sharp by Theorem \ref{thm:curvflowprops} but also the highly connected non-degenerate weighted subgraph $(G_W,P_W)$. Figure \ref{fig:random-graph-trans-rates} presents the transition rates of the vertices $v_3, v_7$ under the curvature flow as functions over the interval $[0,t_{\rm{max}}]$. While most transition rates converge to strictly positive limits, the transition rates $p_{v_3v_4}(t)$ and $p_{v_8v_9}(t)$ shrink to zero. Consequently, the corresponding edges on the right hand side of Figure \ref{fig:random-graph} are represented by a dashed red line and a black dotted line, respectively. Let us finally consider the curvatures $t \mapsto K_{P(t),N}(v_j)$ of the vertices $v_j$ under the curvature flow. We focus, by way of example, on the vertices $v_1$ and $v_9$ and the dimension parameter $N = \infty$. Figure \ref{fig:random-graph-curvatures} presents the $\infty$-curvature (in blue) and upper curvature bound $K_{P(t),\infty}^{d(v,\cdot)}(v)$ (in orange) of $v \in \{v_1,v_9\}$ as functions over the interval $[0,t_{\rm{max}}]$.
Note that the absence of laziness implies that the upper curvature bounds $K_{P(t),\infty}^{d(v,\cdot)}(v)$ are all $\ge 0$ (see \cite[(19)]{CKLMPS-22}). For both vertices, the curvature and the upper curvature bound approach each other numerically as $t \to t_{\rm{max}}$, indicating that $v_1$ and $v_9$ of the flow limit are $\infty$-curvature sharp. (In fact, all vertices in this example are $\infty$-curvature sharp with respect to the flow limit.) This is not always the case, but Theorem \ref{thm:main} confirms that all vertices of a flow limit $(G,P^\infty)$ are at least $N$-curvature sharp for dimension $N=2$. Moreover, the initial and final $\infty$-curvatures of all vertices under the curvature flow are presented in Table \ref{tab:random-graph-curvatures}. The final curvatures of all vertices in $K_4$ assume the highest values $0.875$, followed by the final curvature values $0.773$ and $0.476$ of the vertices $v_7$ and $v_5$, respectively. All other vertices in $V \setminus W$ have much lower final curvatures with values $\le 0.125$.
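The reversibility claim for the double cone over $K_4$ is easy to verify directly: with $\pi(v_5) = 4x/5$, $\pi(v_7) = 4y/5$ and $\pi \equiv 1/5$ on $K_4$, the detailed balance relations $\pi(u)p_{uv} = \pi(v)p_{vu}$ hold for every pair of vertices. A minimal check of our own (not part of the ancillary file) for $x = 0.05$, $y = 0.2$, with vertices $0,\dots,3$ forming the $K_4$, vertex $4$ playing the role of $v_5$ and vertex $5$ that of $v_7$:

```python
x, y = 0.05, 0.2   # cone-tip rates with x + y = 1/4
n = 6              # vertices 0..3: the K_4, vertex 4: v_5, vertex 5: v_7
P = [[0.0] * n for _ in range(n)]
for u in range(4):
    for v in range(4):
        if u != v:
            P[u][v] = 0.25         # rates within K_4
    P[u][4], P[u][5] = x, y        # rates towards the cone tips
for v in range(4):
    P[4][v] = P[5][v] = 0.25       # rates from the tips into K_4

pi = [0.2, 0.2, 0.2, 0.2, 4 * x / 5, 4 * y / 5]

assert abs(sum(pi) - 1.0) < 1e-12                     # pi is a probability measure
for u in range(n):
    assert abs(sum(P[u]) - 1.0) < 1e-12               # P is Markovian without laziness
    for v in range(n):
        assert abs(pi[u] * P[u][v] - pi[v] * P[v][u]) < 1e-12   # detailed balance
```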
\begin{figure}[h] \includegraphics[width=0.49\textwidth]{initial-random-weighted-graph.png} \includegraphics[width=0.49\textwidth]{curvature-flow-limit-random-graph.png} \caption{Curvature flow of a random graph with $10$ vertices with initial weighting scheme $P_0$ (left hand side) and final weighting scheme $P(t_{\rm{max}})$ (right hand side)} \label{fig:random-graph} \end{figure} \begin{figure}[h] \includegraphics[width=0.49\textwidth]{ptv3.png} \includegraphics[width=0.49\textwidth]{ptv8.png} \caption{Transition rates of vertices $v_3$ and $v_8$ of a random graph with $10$ vertices under the curvature flow} \label{fig:random-graph-trans-rates} \end{figure} \begin{figure}[h] \includegraphics[width=0.49\textwidth]{curv1.png} \includegraphics[width=0.49\textwidth]{curv9.png} \caption{Curvatures (blue) and upper curvature bounds (orange) of vertices $v_1$ and $v_9$ of a random graph with $10$ vertices for the dimension $\infty$ under the curvature flow} \label{fig:random-graph-curvatures} \end{figure} \begin{table}[h] \begin{minipage}[h]{0.49\textwidth} \begin{center} \begin{tabular}{l|ll} $j$ & $K_{P(0),\infty}(v_j)$ & $K_{P(t_{\rm{max}}),\infty}(v_j)$ \\ \hline\\[-0.4cm] $0$ & $0.406$ & $0.875$ \\ $1$ & $0.293$ & $0.875$ \\ $2$ & $0.168$ & $0.125$ \\ $3$ & $0.346$ & $0.875$ \\ $4$ & $0.34$ & $0$ \end{tabular} \end{center} \medskip \end{minipage} \begin{minipage}[h]{0.49\textwidth} \begin{center} \begin{tabular}{l|ll} $j$ & $K_{P(0),\infty}(v_j)$ & $K_{P(t_{\rm{max}}),\infty}(v_j)$ \\ \hline\\[-0.4cm] $5$ & $0.527$ & $0.476$ \\ $6$ & $0.404$ & $0.875$ \\ $7$ & $0.202$ & $0.773$ \\ $8$ & $0.246$ & $0.11$ \\ $9$ & $0.236$ & $0.125$ \end{tabular} \end{center} \medskip \end{minipage} \caption{Vertex curvatures for the dimension $\infty$ of a random graph with $10$ vertices at the beginning and the end of the curvature flow} \label{tab:random-graph-curvatures} \end{table} \end{ex} \FloatBarrier Example \ref{ex:random-graph} and its illustrations were generated by the
following code. \begin{lstlisting}[language=Python]{}
dt = 0.1 # time increment in the Runge-Kutta (RK4) algorithm
stoch_corr = False # automatic stochastic correction in the RK4 algorithm
norm_tolerance = 0.001 # threshold to apply stochastic correction
threshold = 0.001 # threshold to consider a number numerically as zero
lim_tolerance = 0.001 # threshold to define flow convergence
t_lim = 10000 # maximal flow time
p = 0.7 # Erdoes-Renyi probability parameter
k = 1 # time multiplier for consecutive curvature computations
is_Markov = True # curvatures are w.r.t. a Markovian weighting scheme
N = inf # dimension parameter for curvature computations
laziness = False # Boolean in the generation of random weighting schemes

A = rand_adj_mat(10, 0.7, False)
P_0 = randomizer(A, threshold, laziness)

limit = norm_curv_flow_lim(A, P_0, dt, stoch_corr, norm_tolerance, lim_tolerance, t_lim)
t_max = limit[1]
# limit[1] is the convergence time
# as calculated by norm_curv_flow_lim

flow = norm_curv_flow(A, P_0, t_max, dt, stoch_corr, norm_tolerance)

display_weighted_graph(A, P_0, "Initial random weighted graph", threshold)
display_weighted_graph(A, limit[0], "Curvature flow limit", threshold)
display_trans_rates(A, flow, dt, [3, 8])

curvs = calc_curvatures(A, flow, N, k)
curv_bound = calc_curv_upper_bound(A, flow, N, k)
display_curvatures(curvs, dt, is_Markov, N, k, curv_bound, [1, 9])
\end{lstlisting} Let us explain the commands of this program in detail. After initializing various parameters in lines 1-11, a random graph $G=(V,E)$ is generated in line 13 via an Erd\H{o}s--R\'enyi process with probability $p=0.7$. A corresponding random non-degenerate Markovian weighting scheme without laziness is generated in line 14, with each non-zero transition rate satisfying $p_{vv'}(0) \in [{\rm{threshold}},1]$. The numerical convergence time $t_{\rm{max}} \ge 0$ is determined in lines 16-17. In most cases, the convergence time does not exceed $100$.
(If convergence is not achieved by $t_{\rm{lim}} = 10000$, the curvature flow computation stops at that time and notifies the user.) The numerical curvature flow on the interval $[0,t_{\rm{max}}]$ is solved again in line 21. Initial and final weighting schemes are displayed by the commands in lines 23 and 24, respectively, providing the illustrations given in Figure \ref{fig:random-graph}. The transition rates of the vertices $v_3$ and $v_8$ are displayed by the command in line 25, providing the illustrations given in Figure \ref{fig:random-graph-trans-rates}. The curvatures and upper curvature bounds at the times $j \cdot k \cdot dt$, $j=0,1,\dots$, are computed in lines 27 and 28, respectively, with the choice $k=1$. They are displayed for the vertices $v_1$ and $v_9$ via the command in line 29, providing the illustrations given in Figure \ref{fig:random-graph-curvatures}. \medskip When running this program, users may be faced with the following message: \medskip \texttt{`norm_tolerance' has been exceeded at one or more vertices, at time t = ... Would you like to:} \\ \texttt{A = Stop calculation and return list of P-matrices so far} \\ \texttt{B = Apply manual normalization now, and apply it again when necessary without asking (you will still be notified when it is applied)} \\ \texttt{C = Apply manual normalization now, and ask again before reapplying it} \\ \texttt{Please enter A, B or C here:} \medskip The reason behind this message is the following. The computation of the numerical curvature flow is based on a time discretization. Therefore, the solution will increasingly depart from the Markovian property after each time increment $dt = 0.1$. If the sum of entries of one of the rows of $P(t)$ at time $t$ differs from one by more than ${\rm{norm}}_{\rm{tolerance}}=0.001$, the program informs the user that a normalization of the weighting scheme is needed for the continuation of the flow calculations.
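The correction itself is a simple row renormalization. The following is a minimal sketch of such a step in plain Python (a hypothetical helper; the actual routine in our package may differ in details):

```python
def stochastic_correction(P, norm_tolerance=0.001):
    """Rescale every row of P whose entries have drifted away from summing
    to one by more than norm_tolerance; return whether a correction was
    applied anywhere."""
    corrected = False
    for row in P:
        s = sum(row)
        if abs(s - 1.0) > norm_tolerance:
            for j in range(len(row)):
                row[j] /= s       # renormalize the row to sum to one
            corrected = True
    return corrected
```

Applied after a Runge-Kutta step, such a correction restores the Markovian property whenever the discretization error exceeds the tolerance.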
After choosing the option \texttt{'B'}, the program will continue with its flow calculations without further interruptions, and the user is simply notified about the times at which the program applies further artificial normalizations of the transition rates. The user can suppress this message entirely by changing line 2 of the program into "\texttt{stoch_corr = True}", in which case the program applies stochastic corrections automatically, each time with the message \medskip \texttt{Transition rates have been artificially normalized at time t = ...} \medskip The numerical observations of Example \ref{ex:random-graph} suggest that similar properties may also hold in general in the theoretical setting. Firstly, we call an edge $\{x,y\} \in E$ in an unmixed weighted graph $(G,P)$ \emph{non-degenerate} if $p_{xy}, p_{yx} > 0$. An unmixed weighted graph $(G,P)$ is called \emph{totally degenerate} if it does not have any non-degenerate edges. Secondly, we denote the vertices of all non-degenerate edges of a flow limit $(G,P^\infty)$ by $W \subset V$, and $G_W$ denotes the subgraph of $G$ consisting of the vertices $W$ and all non-degenerate edges of $(G,P^\infty)$. Moreover, $P_W$ denotes the restriction of the weighting scheme $P^\infty$ to the vertex set $W$. We call $G_W$ the non-degenerate subgraph of $(G,P^\infty)$ and we conjecture the following: \begin{conj} If the normalized curvature flow of a non-degenerate unmixed Markovian weighted graph $(G,P_0)$ converges to a not totally degenerate limit $(G,P^\infty)$, then the non-degenerate subgraph $G_W$ coincides with the induced subgraph (of $G$) of the subset $W \subset V$ and all transition rates from $W$ to $V \setminus W$ are zero. Moreover, $(G_W,P_W)$ is a non-degenerate Markovian weighted graph which is itself curvature sharp. \end{conj} Figuratively speaking, the curvature flow converges towards the non-degenerate Markovian subgraph $(G_W,P_W)$ consisting of highly connected components.
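Numerically, the set $W$ and the edges of $G_W$ can be read off from a limit matrix by treating transition rates below the numerical threshold as zero. The following is a minimal sketch in plain Python (a hypothetical helper, not part of our package):

```python
def nondegenerate_subgraph(P, threshold=0.001):
    """Return the vertex set W and the list of non-degenerate edges {x, y}
    (those with p_xy and p_yx both above the numerical threshold) of a
    flow limit given by the matrix P."""
    n = len(P)
    edges = [(x, y) for x in range(n) for y in range(x + 1, n)
             if P[x][y] > threshold and P[y][x] > threshold]
    W = sorted({v for e in edges for v in e})
    return W, edges
```

Applied to the limit matrix of Example \ref{ex:random-graph}, this would recover the weighted double cone and its green edges.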
Moreover, each vertex of $V \setminus W$ is usually connected to the set $W$ by a sequence of edges with one-sided non-zero transition rates pointing towards the set $W$, and we generally expect that the $\infty$-curvature values of the vertex set $W$ in $(G,P^\infty)$ are significantly larger than the $\infty$-curvature values of the set $V \setminus W$ in $(G,P^\infty)$. \medskip In Example \ref{ex:random-graph}, the weighted Markovian subgraph $(G_W,P_W)$ has only one connected component, but we will see in the next subsection in the case of paths and cycles that $(G_W,P_W)$ may be composed of more than just one connected component. \subsection{Paths and cycles} \label{subsec:paths-cyles} Let $G=(V,E)$ be a path of length $N \ge 2$, that is $V = \{v_0,\dots,v_{N-1} \}$ with a two-sided edge between $v_i$ and $v_j$ if and only if $|i-j|=1$. If $N=2$, $G$ is a trivial case of a star graph and any Markovian weighting scheme satisfying $p_{01}=p_{21}=1$ and $p_{10}+p_{12}=1$ is curvature sharp (see \cite[Example 4.3]{CKLMPS-22}). For that reason we consider only paths of lengths $N \ge 3$. A cycle of length $3$ is the complete graph $K_3$, and a full list of all curvature sharp weighting schemes was given in \cite[Prop. 1.9]{CKLMPS-22}, so we consider only cycles of length $N \ge 4$. The following example provides insights into some features of curvature flow limits of weighted paths and cycles. \begin{ex}[A path and a cycle of length $12$] \label{ex:path-cycle} Figure \ref{fig:path-cycle-graph} presents numerical curvature flow limits of a Markovian weighted path with $12$ vertices (left hand side) and of a weighted cycle with $12$ vertices (right hand side).
The non-zero transition rates of the initial weighting scheme $P_0 = (p_{ij})_{0\le i,j \le 11}$ for the path limit in Figure \ref{fig:path-cycle-graph} were chosen as follows: \medskip \begin{center} \begin{tabular}{lllllllllll} $p_{0,1}$ & $p_{1,2}$ & $p_{2,3}$ & $p_{3,4}$ & $p_{4,5}$ & $p_{5,6}$ & $p_{6,7}$ & $p_{7,8}$ & $p_{8,9}$ & $p_{9,10}$ & $p_{10,11}$ \\ $1$ & $0.25$ & $0.72$ & $0.46$ & $0.23$ & $0.19$ & $0.84$ & $0.71$ & $0.62$ & $0.9$ & $0.55$ \\[.2cm] $p_{1,0}$ & $p_{2,1}$ & $p_{3,2}$ & $p_{4,3}$ & $p_{5,4}$ & $p_{6,5}$ & $p_{7,6}$ & $p_{8,7}$ & $p_{9,8}$ & $p_{10,9}$ & $p_{11,10}$ \\ $0.75$ & $0.28$ & $0.54$ & $0.77$ & $0.81$ & $0.16$ & $0.29$ & $0.38$ & $0.1$ & $0.45$ & $1$ \\ \end{tabular} \end{center} \medskip The non-degenerate subgraph $G_W$ of the path limit consists of $W = \{v_1,v_2,v_3,v_9,v_{10},v_{11}\}$ as its vertex set together with the four green edges. Moreover, $G_W$ has two paths of length $2$ as its connected components. For each vertex $v \in V \backslash W$ there exists a directed path to one of the components of $G_W$ via a sequence of one-sided transition rates. For example, the vertices $v_5,v_6,v_7,v_8$ are connected to the vertex $v_9 \in W$ via such one-sided paths and $v_4,v_5$ are connected to the vertex $v_3$ via such one-sided paths. The cycle limit on the right hand side of Figure \ref{fig:path-cycle-graph} is totally degenerate with no green edges, and all one-sided non-zero transition rates are oriented in a clockwise direction. Experiments show that Figure \ref{fig:path-cycle-graph} exhibits generic limit properties: flow limits of weighted paths are never totally degenerate and their non-degenerate subgraphs $G_W$ consist of disjoint paths of length $\le 2$. Such types of limits appear also in the case of weighted cycles. However, in contrast to the path case, sometimes a cycle limit is totally degenerate with all its one-sided transition rates oriented either clockwise or anti-clockwise.
The code for running the curvature flow for paths and cycles of length $N$ reads as follows: \begin{lstlisting}[language=Python]{}
N = 12
A = path(N) # A = cycle(N)
P = randomizer(A)
limit = norm_curv_flow_lim(A, P)[0]
display_weighted_graph(A, P, "Initial weighted graph")
display_weighted_graph(A, limit, "Curvature flow limit")
\end{lstlisting} \medskip \begin{figure}[h] \includegraphics[width=0.49\textwidth]{path-limit.png} \includegraphics[width=0.49\textwidth]{cycle-limit.png} \caption{Examples of numerical curvature flow limits of a path and of a cycle of length $12$.} \label{fig:path-cycle-graph} \end{figure} \end{ex} \FloatBarrier Before we provide the following result about path limits, let us first introduce the notion of a two-sided degenerate edge of a weighted graph $(G,P)$: An edge $\{x,y\} \in E$ of $G$ is called \emph{two-sided degenerate} if its transition rates vanish in both directions, that is, $p_{xy} = p_{yx} = 0$. Similarly, an edge is called \emph{numerically two-sided degenerate} if its transition rates in both directions are $< \rm{threshold}$. Such edges are displayed in the routine {\texttt{display_weighted_graph}} as dotted black lines (see, e.g., the edges $\{v_2,v_9\}$ and $\{v_8,v_9\}$ on the right hand side of Figure \ref{fig:random-graph}). \begin{prop}[Flow limits of paths of length $\ge 3$] Let $(G,P_0)$ be a weighted path of length $N \ge 3$ with consecutive vertices $v_0,\dots,v_{N-1}$. Let $P(t)$ be its corresponding curvature flow converging to a limit $P^\infty = \lim_{t \to \infty} P(t)$, such that $(G,P^\infty)$ does not have two-sided degenerate edges. Then this limit is neither totally degenerate nor non-degenerate (that is, it contains both green and dashed red edges). Moreover, the components of the non-degenerate subgraph $G_W$ are paths of lengths $\le 2$ and they are separated from each other by at least two degenerate edges.
If a component of $G_W$ is a path of length $1$, that is, just one edge $\{v_j,v_{j+1}\}$, then we have $p_{j,j+1}^\infty=p_{j+1,j}^\infty = 1$. \end{prop} \begin{proof} By the last statement of Theorem \ref{thm:curvflowprops}, we only need to prove these statements for curvature sharp weighting schemes $P = (p_{ij})_{0\le i,j \le N-1}$ of paths of length $N \ge 3$. Such weighting schemes can never be non-degenerate by \cite[Prop. 1.11]{CKLMPS-22}. For the proof that curvature sharp weighting schemes can never be totally degenerate, we note that we cannot have two consecutive degenerate edges $\{v_j,v_{j+1}\},\{v_{j+1},v_{j+2}\} \in E$ with $p_{j,j+1}=1=p_{j+2,j+1}$ (since this would imply $p_{j+1,j} = p_{j+1,j+2} = 0$, contradicting $p_{j+1,j} + p_{j+1,j+2} = 1$). Therefore, since $p_{01}=1$, any totally degenerate weighting scheme would require $p_{j,j+1}=1$ for any $j \ge 0$, in particular, $p_{N-2,N-1}=1$, but this contradicts the fact that we also have $p_{N-1,N-2}=1$ and that the edge $\{v_{N-1},v_{N-2}\}$ needs to be degenerate. A curvature sharp weighting scheme cannot have more than two consecutive non-degenerate edges. This can be seen as follows: At any vertex $v_j$ with $p_{j,j-1},p_{j,j+1} > 0$ we must have $p_{j-1,j} = p_{j+1,j}$. This follows from the arguments in the proof of \cite[Lemma 4.1]{CKLMPS-22} (namely, since the vertex $v_j$ is not contained in any triangle, we have $p_{j-1,j} = p_{j+1,j} = p_{j,j-1}p_{j-1,j} + p_{j,j+1}p_{j+1,j}$). If $0 < p_{j-1,j} = p_{j+1,j} < 1$, we could iterate this argument backward and forward and would end up with the fact that all entries of the matrix $P$ above the diagonal and below the diagonal would lie in $(0,1)$, which contradicts $p_{01}=1$. So we must have $p_{j-1,j}=p_{j+1,j} \in \{0,1\}$. Therefore, we cannot have two consecutive indices $j \in \{0,\dots,N-1\}$ with $0 < p_{j-1,j} = p_{j+1,j} < 1$, which would exist in the case of three consecutive non-degenerate edges.
So the components of $G_W$ are paths of length $\le 2$. Moreover, any gap between two consecutive non-degenerate edges must be at least two edges: this follows from the fact that if a non-degenerate edge $\{v_j,v_{j+1}\}$ is followed by a degenerate edge $\{v_{j+1},v_{j+2}\}$, then we must have $p_{j+1,j+2}=0$: if we had $p_{j+1,j+2}>0$, then we would have $p_{j+1,j},p_{j+1,j+2} > 0$ and, therefore, $0 < p_{j,j+1} = p_{j+2,j+1}$, so $\{v_{j+1},v_{j+2}\}$ would be non-degenerate, which is a contradiction. Similarly, if a degenerate edge $\{v_k,v_{k+1}\}$ is followed by a non-degenerate edge $\{v_{k+1},v_{k+2}\}$, we must have $p_{k+1,k} = 0$. Combining both facts implies that there cannot be a single degenerate edge separating two components of $G_W$. These arguments show also that components of $G_W$ which are single edges $\{v_j,v_{j+1}\}$ must satisfy $p_{j,j+1}=p_{j+1,j}=1$ since the adjacent degenerate edges have one-sided transition rates pointing towards this component. \end{proof} Arguments similar to those above can be used to prove for cycles that the components of the non-degenerate subgraph $G_W$ of any flow limit of a weighted cycle are again paths of length $\le 2$, unless the limit is non-degenerate. Such non-degenerate limits exist for cycles, namely, the simple random walks, but experiments show that simple random walks are very unstable stationary solutions of the curvature flow. Small perturbations of simple random walks do not converge back to the simple random walk (unless our cycle is $K_3$) but usually converge to a degenerate limit. Finally, if a totally degenerate cycle limit does not have two-sided degenerate edges, then its transition rates must all be oriented either clockwise or anti-clockwise, for otherwise we would necessarily have a vertex with the transition rates of both incident degenerate edges pointing towards this vertex. This would fail to satisfy the Markovian condition at this vertex.
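The clockwise oriented totally degenerate schemes on cycles mentioned above can be written down explicitly. The following is a small standalone Python sketch (not part of our package), with vertices $0,\dots,n-1$ in cyclic order:

```python
def oriented_cycle_scheme(n):
    """Totally degenerate Markovian weighting scheme on a cycle with n
    vertices, all transition rates oriented clockwise."""
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        P[i][(i + 1) % n] = 1.0   # full rate towards the clockwise neighbour
    return P

P = oriented_cycle_scheme(6)
assert all(sum(row) == 1.0 for row in P)        # Markovian at every vertex
assert all(not (P[i][j] > 0 and P[j][i] > 0)    # no non-degenerate edge
           for i in range(6) for j in range(6))
```

Every vertex has exactly one outward directed edge, so the Markovian condition is satisfied everywhere.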
\bigskip Paths and cycles of length $N \ge 3$ have the property that no edge is contained in a triangle. We would like to finish this section with a general statement about the curvature flow for edges not contained in triangles. \begin{prop} \label{prop:zeronotriangle} Let $(G,P_0)$ be a weighted Markovian graph without laziness and $(P(t))_{t \ge 0}$ be its associated normalized curvature flow. If we have, for some $t_0 \ge 0$ and an edge $e = \{x,y\} \in E$, $p_{xy}(t_0)=0$ and $e$ is not contained in a triangle of $G$, then we have $$ p_{xy}(t) = 0 \quad \text{for all $t \ge t_0$}. $$ \end{prop} \begin{proof} This proposition is an easy consequence of the flow equation \eqref{eq:flowdiffeq-explicit}. Since we assume no laziness, we have $p_{yy}(t)=0$, and since $e$ is not contained in a triangle, the last term of \eqref{eq:flowdiffeq-explicit}, denoted by $(*)$, is zero and the statement follows now from the uniqueness of the solution satisfying $p_{xy}(t_0)=0$. \end{proof} \subsection{Complete graphs} The simple random walk on any complete graph is a non-degenerate curvature sharp Markovian weighting scheme. Our experiments show that any non-degenerate initial weighting scheme $P_0$ on $K_n$ converges to the simple random walk, that is $p_{jk}^\infty = \frac{1}{n-1}$. The following example shows that convergence to the simple random walk appears even if the initial weighting scheme is degenerate. This is not in contradiction to Proposition \ref{prop:zeronotriangle} since, in a complete graph $K_n$, $n \ge 3$, every edge is contained in a triangle. Example \ref{ex:complete} is the only exception in this section where we allow an initial weighting scheme to have degenerate edges.
\begin{ex}[A degenerate weighted complete graph with 6 vertices] \label{ex:complete} Let $(G,P_0)$ be the complete weighted Markovian graph with vertex set $V=\{v_0,\dots,v_5\}$ and $$ P_0 = \begin{pmatrix} 0 & 0.2 & 0.1 & 0.2 & 0 & 0.5 \\ 0.1 & 0 & 0.3 & 0.25 & 0.25 & 0.1 \\ 0.2 & 0 & 0 & 0.3 & 0.15 & 0.35 \\ 0.3 & 0.5 & 0.1 & 0 & 0.1 & 0 \\ 0.2 & 0.3 & 0.3 & 0.2 & 0 & 0 \\ 0.6 & 0.1 & 0.1 & 0.2 & 0 & 0 \end{pmatrix}, $$ as illustrated in Figure \ref{fig:complete-graph}. Note that the edges $\{v_0,v_4\}, \{v_1,v_2\}, \{v_3,v_5\}$ and $\{v_4,v_5\}$ of this initial weighting scheme are degenerate, with the latter being two-sided degenerate. The numerical curvature flow has numerical convergence time $t_{\rm{max}} = 18.5$ (with respect to ${\rm{lim}}_{\rm{tolerance}} = 0.001$) with the simple random walk as its numerical flow limit. Figure \ref{fig:complete-graph-trans-rates} presents the transition rates of the vertices $v_2$ and $v_5$. Particularly interesting are the functions $p_{2,1}(t)$ and $p_{5,4}(t)$ for $t \in [0,t_{\rm{max}}]$, since their initial values are zero. \begin{figure}[h] \includegraphics[width=0.49\textwidth]{initial-complete-graph.png} \includegraphics[width=0.49\textwidth]{final-complete-graph.png} \caption{Curvature flow of a complete graph with $6$ vertices with degenerate initial weighting scheme $P_0$ (left hand side) and numerical flow limit $P(t_{\rm{max}})$, the simple random walk (right hand side)} \label{fig:complete-graph} \end{figure} \FloatBarrier \begin{figure}[h] \includegraphics[width=0.49\textwidth]{complete-transitions-v2.png} \includegraphics[width=0.49\textwidth]{complete-transitions-v5.png} \caption{Transition rates of vertices $v_2$ and $v_5$ of a complete graph with $6$ vertices under the curvature flow} \label{fig:complete-graph-trans-rates} \end{figure} \end{ex} Based on our experiments, we conjecture the following, which is a strengthening of \cite[Conjecture 1.8]{CKLMPS-22}.
\begin{conj}[Curvature flow of complete graphs] \label{conj:complgraph} Let $P_0$ be a Markovian weighting scheme without laziness on a complete graph $K_n=(V,E)$ with $n \ge 2$ such that, for every proper subset $W \subset V$, there exist $x,x' \in W$ and $y,y' \in V \setminus W$ with $p_{xy} > 0$ and $p_{y'x'} > 0$. Then the curvature flow has a limit $P^\infty$ which is the simple random walk. \end{conj} \FloatBarrier \subsection{Wedge sums of complete graphs} Let $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ be two combinatorial graphs and $x_1 \in V_1$ and $x_2 \in V_2$. By merging the vertices $x_1$ and $x_2$ into one new vertex $x$ which inherits the incident edges of both vertices $x_1$ and $x_2$, we obtain a new combinatorial graph, which we call the wedge sum of $G_1$ and $G_2$. In this subsection, we consider flow limits of wedge sums of complete graphs. \begin{ex}[A wedge sum of a $K_4$, $K_5$, $K_2$ and $K_3$] We consider the curvature flow on the wedge sum $G=(V,E)$ of complete graphs presented in Figure \ref{fig:wedge-sum} with random initial weighting schemes $P = P_0$.
The adjacency matrix $A$ of this graph is generated with our code in the following way: \begin{lstlisting}[language=Python]{}
A1 = wedge_sum(complete(4), complete(5), 2, 1)
A2 = wedge_sum(A1, complete(2), 6, 0)
A = wedge_sum(A2, complete(3), 8, 0)
\end{lstlisting} \begin{figure}[h] \includegraphics[width=.5\textwidth]{wedgesum.png} \caption{A wedge sum of a $K_4$, $K_5$, $K_2$ and a $K_3$} \label{fig:wedge-sum} \end{figure} \begin{figure} \begin{minipage}[h]{0.49\textwidth} {\scriptsize{\begin{center} \begin{multline*} P_0 = \\ \begin{pmatrix} 0 & 0.4 & 0.5 & 0.1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.1 & 0 & 0.4 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.1 & 0.1 & 0 & 0.3 & 0.1 & 0.1 & 0.1 & 0.2 & 0 & 0 & 0 \\ 0.2 & 0.5 & 0.3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.1 & 0 & 0 & 0.1 & 0.3 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0.1 & 0 & 0.1 & 0 & 0.4 & 0.4 & 0 & 0 & 0 \\ 0 & 0 & 0.2 & 0 & 0.3 & 0.1 & 0 & 0.1 & 0.3 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0.1 & 0.3 & 0.1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0.1 & 0 & 0 & 0.8 & 0.1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0 & 0.5 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.7 & 0.3 & 0 \end{pmatrix} \end{multline*} \end{center}}} \medskip \end{minipage} \begin{minipage}[h]{0.49\textwidth} \begin{center} \includegraphics[width=\textwidth]{k3-winner-limit.png} \end{center} \medskip \end{minipage} \caption{Initial weighting scheme (left) and numerical flow limit, a simple random walk on $K_3$ (right)} \label{fig:k3-winner} \end{figure} \begin{figure} \begin{minipage}[h]{0.49\textwidth} {\scriptsize{\begin{center} \begin{multline*} P_0 = \\ \begin{pmatrix} 0 & 0.3 & 0.3 & 0.4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.1 & 0 & 0.4 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.3 & 0.2 & 0 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0 & 0 & 0 \\ 0.4 & 0.4 & 0.2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.3 & 0 & 0 & 0.1 & 0.1 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0.2 & 0 & 0.2 & 0 & 0.5 & 0.1 & 0 & 0 & 0 \\ 0 & 0 & 0.4 & 0 & 0.1 & 0.1 & 0 & 0.3 & 0.1 & 0 & 0 \\ 0 & 0 & 0.7 & 0 & 0.1 & 0.1 &
0.1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0 & 0 & 0.4 & 0.1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.4 & 0 & 0.6 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.1 & 0.9 & 0 \end{pmatrix} \end{multline*} \end{center}}} \medskip \end{minipage} \begin{minipage}[h]{0.49\textwidth} \begin{center} \includegraphics[width=\textwidth]{k4-winner-limit.png} \end{center} \medskip \end{minipage} \caption{Initial weighting scheme (left) and numerical flow limit, a simple random walk on $K_4$ (right)} \label{fig:k4-winner} \end{figure} \FloatBarrier Our experiments show that, depending on the initial weighting scheme $P_0$, the curvature flow converges to a limit which is concentrated in one of the complete graphs. More precisely, the limit weighting scheme $P^\infty$ represents a simple random walk on one of the complete graphs while, from all other vertices, there is a directed path of $\{0,1\}$ transition rates towards this particular complete graph. Figures \ref{fig:k3-winner}, \ref{fig:k4-winner} and \ref{fig:k5-winner} show the numerical curvature flow limits of various random initial weighting schemes. We carried out several runs of $100000$ numerical curvature flows with random initial weighting schemes to describe the limit behaviour of these flows quantitatively. The results are presented in Table \ref{tab:k4k5k2k3-statistics}. While more than $80\%$ of limits concentrate on the largest clique $K_5$, it is somewhat surprising that more limits concentrate on $K_3$ than on the larger subgraph $K_4$. Not a single flow limit ended up concentrating on $K_2$. The mean numerical convergence time is shortest for $K_3$, followed by $K_4$ and $K_5$ (with respect to ${\rm{lim}}_{\rm{tolerance}} = 0.001$). While most convergence times are below 100, there were maximal numerical convergence times well above 500.
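In such runs, the constituent on which a limit concentrates can be detected mechanically as the clique whose internal rates agree with the simple random walk. The following is a sketch of such a classifier in plain Python (a hypothetical helper; the vertex sets of the cliques are passed explicitly):

```python
def concentrating_clique(P, cliques, threshold=0.001):
    """Return the index of the clique whose internal transition rates in
    the limit matrix P agree with the simple random walk (up to the
    numerical threshold), or None if no clique qualifies."""
    for idx, clique in enumerate(cliques):
        srw_rate = 1.0 / (len(clique) - 1)
        if all(abs(P[u][v] - srw_rate) < threshold
               for u in clique for v in clique if u != v):
            return idx
    return None
```

Tallying the returned indices over many random runs yields statistics of the kind presented in Table \ref{tab:k4k5k2k3-statistics}.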
\begin{figure} \begin{minipage}[h]{0.49\textwidth} {\scriptsize{\begin{center} \begin{multline*} P_0 = \\ \begin{pmatrix} 0 & 0.2 & 0.2 & 0.6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.2 & 0 & 0.3 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.1 & 0.1 & 0 & 0.2 & 0.1 & 0.1 & 0.2 & 0.2 & 0 & 0 & 0 \\ 0.3 & 0.3 & 0.4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.1 & 0 & 0 & 0.2 & 0.3 & 0.4 & 0 & 0 & 0 \\ 0 & 0 & 0.2 & 0 & 0.4 & 0 & 0.2 & 0.2 & 0 & 0 & 0 \\ 0 & 0 & 0.3 & 0 & 0.2 & 0.1 & 0 & 0.2 & 0.2 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0.1 & 0.3 & 0.1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0.6 & 0 & 0 & 0.2 & 0.2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0 & 0.5 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.3 & 0.7 & 0 \end{pmatrix} \end{multline*} \end{center}}} \medskip \end{minipage} \begin{minipage}[h]{0.49\textwidth} \begin{center} \includegraphics[width=\textwidth]{k5-winner-limit.png} \end{center} \medskip \end{minipage} \caption{Initial weighting scheme (left) and numerical flow limit, a simple random walk on $K_5$ (right)} \label{fig:k5-winner} \end{figure} \FloatBarrier \begin{table}[h] \centering \begin{tabular}{l|llll} & $K_4$ & $K_5$ & $K_2$ & $K_3$ \\ \hline\\[-.3cm] concentration of flow limits & $8.7\%$ & $80.5\%$ & $0\%$ & $10.8\%$ \\ mean convergence time & $18.9$ & $24.7$ & -- & $17.7$ \\[.2cm] \end{tabular} \caption{Statistics of flow limit concentrations on the complete subgraphs $K_4,K_5,K_2$ and $K_3$ of their wedge sum, together with mean convergence times} \label{tab:k4k5k2k3-statistics} \end{table} \end{ex} The limit behaviour described in the above example seems to be common for many wedge sums of complete graphs. It is, however, not always true that flow limits concentrate on just one of the constituents of a wedge sum.
A path of length $\ge 2$ can be viewed as a wedge sum of consecutive $K_2$'s, and we have seen in Subsection \ref{subsec:paths-cyles} that flow limits will concentrate on more than only one of these $K_2$'s (see left hand side of Figure \ref{fig:path-cycle-graph}). Another special case of a wedge sum is a dumbbell, which is our next example. \begin{ex}[A symmetric weighted dumbbell] The weighted graph $(G,P_0)$ in this example is a wedge sum of a $K_5$, $K_2$ and another $K_5$, together with a simple random walk as initial weighting scheme (see line 4 in the code). This situation can be set up by the following code: \begin{lstlisting}[language=Python]{}
A1 = complete(5)
A2 = complete(5)
A = bridge_at(A1,A2,0,0)
P_0 = srw(A)
\end{lstlisting} The numerical convergence time is $79$ and the limit of the numerical curvature flow concentrates on the ``bridge'' $K_2$ between the two $K_5$'s, as illustrated in Figure \ref{fig:dumbbell-graph}. (To obtain the flow limit illustrated at the right hand side of this figure, users should choose ${\rm{lim}}_{\rm{tolerance}} = 0.0001$.) \begin{figure}[h] \includegraphics[width=0.49\textwidth]{initial-dumbbell.png} \includegraphics[width=0.49\textwidth]{final-dumbbell.png} \caption{Curvature flow of a dumbbell as a wedge sum of a $K_5$, $K_2$ and another $K_5$ with simple random walk initial weighting scheme $P_0$ (left hand side) and numerical flow limit concentrated on $K_2$ (right hand side)} \label{fig:dumbbell-graph} \end{figure} \FloatBarrier This limit could have been predicted assuming that the symmetry of the initial weighted graph across the ``bridge'' is preserved under the curvature flow and that the limit concentrates on only one of the complete graphs $K_5, K_2, K_5$. If, instead of the simple random walk, a random initial weighting scheme had been chosen on the dumbbell $G$, the limit would usually have concentrated on one of the two $K_5$'s.
\end{ex} \subsection{Cartesian products of complete graphs}\label{sec:cartprodcompl} We have the following result for Cartesian products of complete graphs: \begin{thm} Let $G$ be the Cartesian product of two complete graphs $K_{n+1}$ and $K_{m+1}$ with the non-lazy simple random walk $P$ as initial weighting scheme. Then the curvature flow $P(t)$ converges to a limit as $t \to \infty$ with limit transition rates \begin{eqnarray*} a^\infty &=& \frac{m+3}{2nm+3n+3m}, \\ b^\infty &=& \frac{n+3}{2nm+3n+3m}, \end{eqnarray*} where $a^\infty$ are the transition rates along edges between $(x,x')$ and $(y,x')$ with $x,y \in K_{n+1}$, $x \sim y$ and $x' \in K_{m+1}$ (``horizontal edges''), and $b^\infty$ are the transition rates along edges between $(x,x')$ and $(x,y')$ with $x \in K_{n+1}$ and $x',y' \in K_{m+1}$, $x' \sim y'$ (``vertical edges''). \end{thm} \begin{proof} By symmetry of the configuration, we have only two types $a(t)$ and $b(t)$ of transition rates for the curvature flow at time $t \ge 0$ (for horizontal and vertical edges, respectively) with $$ a(0) = b(0) = \frac{1}{n+m} $$ and $$ na(t) + mb(t) = 1, $$ due to the Markovian property. This implies that $b(t) = \frac{1-na(t)}{m}$ and $b'(t) = -\frac{na'(t)}{m}$, and we only need to consider an ordinary differential equation for $a$ with initial condition $a(0) = \frac{1}{n+m}$. We can also assume without loss of generality that $n \le m$. We derive from the explicit description \eqref{eq:flowdiffeq-explicit} of the curvature flow that \begin{multline} \label{eq:diffeqprodknkm} a'(t) = \\ a(t) \left( -4a(t) - 2(n-1)a(t) + 4 (na^2(t)+mb^2(t)) + n(n-1)a^2(t) + m(m-1)b^2(t) \right) \\ + (n-1)a^2(t) = \\ n(n+3)a^3(t) + m(m+3)a(t) \left( \frac{1-na(t)}{m} \right)^2 -(n+3)a^2(t) = \\ a(t)\left( n\left( 2n+3+\frac{3n}{m}\right)a^2(t) - \left( 3n+3+\frac{6n}{m} \right)a(t) + 1 + \frac{3}{m} \right).
\end{multline} If this differential equation converges as $t \to \infty$, its limit $a^\infty$ must satisfy $$ a^\infty \in \left\{ 0,\frac{1}{n},\frac{m+3}{2nm+3n+3m} \right\}, $$ that is $$ (a^\infty,b^\infty) \in \left\{ \left(0,\frac{1}{m}\right),\left(\frac{1}{n},0\right),\left(\frac{m+3}{2nm+3n+3m},\frac{n+3}{2nm+3n+3m}\right) \right\}, $$ with $$ b^\infty = \lim_{t \to \infty} b(t) = \lim_{t \to \infty} \frac{1-na(t)}{m} = \frac{1-na^\infty}{m}. $$ Our assumption $n \le m$ implies that we have $$ \frac{1}{n+m} \le \frac{m+3}{2nm+3n+3m} < \frac{1}{n} $$ and that the right hand side of \eqref{eq:diffeqprodknkm} is strictly positive for $a(t)$ in the interval $$ \left[ \frac{1}{n+m},\frac{m+3}{2nm+3n+3m} \right), $$ zero at $a(t) = \frac{m+3}{2nm+3n+3m}$, and strictly negative for $a(t)$ on the interval $$ \left( \frac{m+3}{2nm+3n+3m},\frac{1}{n} \right). $$ These monotonicity properties force the function $a(t)$ to converge to the limit $a^\infty = \frac{m+3}{2nm+3n+3m}$. \end{proof} \begin{ex}[Flow limit of $K_3 \times K_4$ with simple random walk] The following code computes the numerical flow limit for $K_3 \times K_4$ with the simple random walk as initial weighting scheme. \begin{lstlisting}[language=Python]{}
n,m = 2,3
A = cart_prod(complete(n+1),complete(m+1))
P = srw(A)
limit = norm_curv_flow_lim(A, P)[0]
display_weighted_graph(A, P, "Initial weighting scheme of K3 x K4")
display_weighted_graph(A, limit, title="Curvature flow limit")
\end{lstlisting} \end{ex} The initial transition rates are all equal to $1/5=0.2$, and the transition rates of the numerical limit are $0.22\dots \approx 2/9$ and $0.18\dots \approx 5/27$, as predicted by the above theorem. \subsection{Totally degenerate flow limits} As mentioned before, many curvature flow limits are totally degenerate. Therefore, it is worth investigating properties of those particular limits.
\begin{ex}[The octahedron with a totally degenerate flow limit] \label{ex:octahedron} Let $G=(V,E)$ with $V = \{v_0,\dots,v_5\}$ be the unmixed graph representing the octahedron, as illustrated in Figure \ref{fig:octahedron} (left hand side). While the simple random walk without laziness is a stationary solution of the normalised curvature flow, any small perturbation of this initial weighting scheme leads to another curvature sharp limit, which is totally degenerate. For example, the initial weighting scheme \begin{equation} \label{eq:P0-octahedron} P_0 = \begin{pmatrix} 0&0.26&0&0.24&0.25&0.25\\ 0.25&0&0.25&0&0.25&0.25\\ 0&0.25&0&0.25&0.25&0.25\\ 0.25&0&0.25&0&0.25&0.25\\ 0.25&0.25&0.25&0.25&0&0\\ 0.25&0.25&0.25&0.25&0&0 \end{pmatrix} \end{equation} converges to the totally degenerate limit illustrated in Figure \ref{fig:octahedron} (right hand side) under the numerical curvature flow. \begin{figure}[h] \includegraphics[width=0.49\textwidth]{octahedron.png} \includegraphics[width=0.49\textwidth]{octahedron-limit.png} \caption{Left hand side: An octahedron with vertices $v_0,\dots,v_5$. Right hand side: A numerical flow limit of the octahedron with a small perturbation of the simple random walk as initial weighting scheme} \label{fig:octahedron} \end{figure} \end{ex} \FloatBarrier The following considerations show that the limit in Example \ref{ex:octahedron} is essentially the only totally degenerate curvature sharp weighting scheme without two-sided degenerate edges for the octahedron (see Proposition \ref{prop:octahedron} below). We start with a general unmixed combinatorial graph $G=(V,E)$ without isolated vertices. A totally degenerate weighting scheme $P$ assigns to each edge $\{x,y\} \in E$ either a direction ($x \to y$ if $p_{xy} >0$ and $y \to x$ if $p_{yx} >0$, illustrated by a red dashed line with an arrow) or the edge is two-sided degenerate (that is $p_{xy} = p_{yx}=0$, illustrated by a black dotted line).
A first observation is that none of the vertices $x \in V$ can be a sink: the Markovian property requires that at least one edge incident to $x$ must be outward directed. The following lemma presents a useful property of triangles. \begin{lemma} \label{lem:totdegencurvsharp} Let $(G,P)$ be a totally degenerate curvature sharp Markovian weighted graph without laziness. Then a triangle $T = \{x,y,z\} \subset V$ without two-sided degenerate edges cannot be cyclically oriented, that is, its edges cannot have the orientations $x \to y \to z \to x$ or $x \to z \to y \to x$. \end{lemma} \begin{proof} Assume that a totally degenerate curvature sharp Markovian weighting scheme without laziness contains a triangle $T = \{x,y,z\}$ with $p_{xz} , p_{zy}, p_{yx} > 0$, that is, we have an orientation $x \to z \to y \to x$. This means, in particular, that $p_{xy}=0$. We can read off from the flow equation \eqref{eq:flowdiffeq-explicit} that curvature sharpness of a totally degenerate Markovian weighting scheme means \begin{equation} \label{eq:curvsharptotdeg} p_{xy}(-2\sum_{y'\neq y} p_{yy'} + \sum_{y',y''} p_{xy'}p_{y'y''}) + \sum_{y'\neq y} p_{xy'}p_{y'y} = 0. \end{equation} Since we have $p_{xy} = 0$, this means $$ 0 \le p_{xz} p_{zy} \le \sum_{y' \neq y} p_{xy'}p_{y'y} = 0, $$ in contradiction to $p_{xz}, p_{zy} > 0$. The orientation $x \to y \to z \to x$ can be ruled out similarly. \end{proof} The main tool in the proof of the following proposition can be found in \cite[Exercise 25.14]{Pak-10}: Assume a tessellation of the $2$-dimensional sphere carries an orientation along all its edges. For every vertex $v$ of the tessellation let ${\rm{ind}}(v) = 1-c(v)/2$, where $c(v)$ is the number of changes in the orientation of edges adjacent to $v$ (in cyclic order). For a face $f$, let ${\rm{ind}}(f) = 1-c(f)/2$, where $c(f)$ is the number of changes in the orientation (clockwise vs. anti-clockwise) of edges of $f$.
Then we have $$ \sum_v {\rm{ind}}(v) + \sum_{f} {\rm{ind}}(f) = 2. $$ \begin{prop} \label{prop:octahedron} Let $G=(V,E)$ be the octahedron, as illustrated in Figure \ref{fig:octahedron} (left hand side). Then $G$ has essentially only one totally degenerate non-lazy curvature sharp Markovian weighting scheme without two-sided degenerate edges, namely, we have (up to a permutation of the vertices corresponding to a graph automorphism) $$ p_{v_0,v_1} = p_{v_1,v_2} = p_{v_2,v_3} = p_{v_3,v_0} = 1 $$ and $$ p_{v_4,v_0} = p_{v_4,v_1} = p_{v_4,v_2} = p_{v_4,v_3} = 1/4, \quad p_{v_5,v_0} = p_{v_5,v_1} = p_{v_5,v_2} = p_{v_5,v_3} = 1/4. $$ \end{prop} \begin{proof} We can think of the octahedron as a tessellation of the sphere by $8$ triangles, none of them cyclically oriented, by Lemma \ref{lem:totdegencurvsharp}. This means that ${\rm{ind}}(f)=0$ for all faces of the octahedron. Therefore, we must have $$ \sum_v {\rm{ind}}(v) = 2, $$ that is, at least two vertices of the octahedron must have ${\rm{ind}}(v)=1$, which means that each of them must be a source or a sink. The Markovian property rules out sinks, and two sources cannot be adjacent (otherwise they would be connected by a non-degenerate edge). So the octahedron must have two sources at distance $2$, which we denote by $v_4,v_5$. Their edges are all directed towards a cycle of length $4$. The edges in this cycle must be consistently oriented, for otherwise there would be a sink in this cycle, which is not possible. We denote this oriented cycle by $v_0 \to v_1 \to v_2 \to v_3 \to v_0$, and we must have $$ p_{01} = p_{12} = p_{23} = p_{30} = 1, $$ where we used the notation $p_{ij} = p_{v_i,v_j}$, for simplicity. Applying formula \eqref{eq:curvsharptotdeg} to $(x,y)=(v_4,v_0)$ yields $$ p_{40} (-2 (p_{01}+p_{03}) + \sum_{i=0}^3 p_{4i}) + p_{41}p_{10} + p_{43}p_{30} = p_{40} (-2 + 1) + p_{43} = 0, $$ that is $p_{40}=p_{43}$.
Similarly, we can show that all transition rates $p_{4i}$ must coincide and, therefore, $p_{40}=p_{41}=p_{42}=p_{43}=1/4$. The same arguments apply to the vertex $v_5$, finishing the proof of the proposition. \end{proof} \begin{conj}[Flow limits of the octahedron] The normalized curvature flow on the octahedron for any non-degenerate initial weighting scheme without laziness different from the simple random walk always converges to a limit described in Proposition \ref{prop:octahedron}. \end{conj} \section{Asymptotically stable and unstable curvature sharp Markovian weighting schemes} Recall that every curvature sharp Markovian weighting scheme on a given combinatorial graph is a stationary solution of the normalized curvature flow. It is natural to ask whether such a stationary solution $P^s$ is \emph{asymptotically stable}, that is, whether any nearby Markovian weighting scheme $P$ converges back to this equilibrium $P^s$ as $t \to \infty$. This can be decided via the linearization of the curvature flow equations around such an equilibrium. In the next subsection, we will describe this linearization in full detail before we consider various examples in the following subsection. \subsection{Linearization of the curvature flow equations at equilibria} Let $P^s$ be a curvature sharp Markovian weighting scheme of a finite simple mixed combinatorial graph $G = (V,E)$. We consider Markovian weighting schemes $P$ near $P^s$ with the same laziness, that is $p_{xx} = p_{xx}^s$ for all $x \in V$. Let $E^{\rm{dir}} = \{ (x,y) \in V \times V: d(x,y) = 1 \}$.
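For concreteness, the directed edge set $E^{\rm{dir}}$ can be read off directly from an adjacency matrix. The following is a minimal Python sketch (assuming NumPy; the function name is ours and not part of the module described later):

```python
import numpy as np

def directed_edges(A):
    # E^dir = {(x, y) : d(x, y) = 1}: one pair per directed edge; for an
    # unmixed graph every combinatorial edge appears in both directions.
    A = np.asarray(A)
    return [(x, y) for x in range(A.shape[0])
            for y in range(A.shape[1]) if x != y and A[x, y] == 1]
```

For the unmixed complete graph $K_3$ this yields the six directed pairs $(0,1),(0,2),(1,0),(1,2),(2,0),(2,1)$, matching the edge enumeration used below.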
The linearization of $F$ at the equilibrium $P^s$ of the normalized curvature flow is given for each component function $F_{xy}$, $(x,y) \in E^{\rm{dir}}$, by \begin{multline*} DF_{xy}( P^s ) \left( (q_{uv})_{(u,v) \in E^{\rm{dir}}} \right) = \\ \left( -4 p_{yx}^s - 2 \sum_{y' \neq y} p_{yy'}^s + \frac{4}{D_x} \sum_{y'} p_{xy'}^s p_{y'x}^s + \frac{1}{D_x}\sum_{y',y''} p_{xy'}^s p_{y'y''}^s - p_{yy}^s \right) q_{xy}\\ + p_{xy}^s \left( -4 q_{yx} - 2 \sum_{y' \neq y} q_{yy'} + \frac{4}{D_x} \sum_{y'} \left(p_{xy'}^s q_{y'x} + p_{y'x}^s q_{xy'}\right) + \frac{1}{D_x} \sum_{y',y''} \left( p_{xy'}^s q_{y'y''} + p_{y'y''}^s q_{xy'} \right) \right) = \\ \sum_{y' \in S_1(x)} B_{xy}(xy') q_{xy'} + \sum_{y' \in S_1(x)} \sum_{z \in B_1(x)} B_{xy}(y'z) q_{y'z}. \end{multline*} Here $y',y''$ are vertices in $S_1(x)$ and the potentially non-zero $B$-coefficients are given by \begin{eqnarray} B_{xy}(xy) &=& -4 p_{yx}^s - p_{yy}^s - 2 \sum_{y' \neq y} p_{yy'}^s \nonumber \\ && + \frac{1}{D_x} \left( 4p_{xy}^sp_{yx}^s + 4\sum_{y'} p_{xy'}^s p_{y'x}^s + p_{xy}^s \sum_{y'} p_{yy'}^s + \sum_{y',y''} p_{xy'}^s p_{y'y''}^s \right), \label{eq:Bxy} \\ B_{xy}(xy') &=& \frac{4}{D_x} p_{xy}^s p_{y'x}^s + \frac{1}{D_x} p_{xy}^s \sum_{y''} p_{y'y''}^s + p_{y'y}^s \qquad \qquad\, \text{if $y' \neq y$}, \label{eq:Bxy'} \\ B_{xy}(yx) &=& 4 p_{xy}^s \left( \frac{1}{D_x} p_{xy}^s-1\right), \label{eq:Byx} \\ B_{xy}(y'x) &=& \frac{4}{D_x} p_{xy}^s p_{xy'}^s \qquad \qquad \qquad \qquad \qquad \qquad \qquad \,\,\,\,\,\, \text{if $y' \neq y$}, \label{eq:By'x} \\ B_{xy}(yy') &=& p_{xy}^s \left( \frac{1}{D_x} p_{xy}^s - 2 \right) \qquad \qquad \qquad \qquad \qquad \,\,\,\,\,\,\,\,\,\, \text{if $y' \neq y$}, \label{eq:Byy'} \\ B_{xy}(y'y) &=& p_{xy'}^s\left( \frac{1}{D_x} p_{xy}^s + 1 \right) \qquad \qquad \qquad \qquad \qquad \quad\,\,\, \text{if $y' \neq y$}, \label{eq:By'y} \\ B_{xy}(y'y'') &=& \frac{1}{D_x} p_{xy}^s p_{xy'}^s\qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \text{if $y' \neq y$ and $y'' \neq
y,y'$}. \label{eq:By'y''} \end{eqnarray} All other $B$-coefficients are chosen to be zero. Note, however, that the transition probabilities $(p_{xy})_{y \in S_1(x)}$ are not independent, and therefore the choice of the $B$-coefficients is not unique, as explained in the following remark. \begin{rmk} \label{rmk:compl-freedom-asymp} Since we have $\sum_{v \in S_1(u)} q_{uv} = 0$ for all $u \in V$, there is a degree of freedom in the choice of the $B$-coefficients. For example, in the case that $G$ is an unmixed complete graph, we can replace $B_{xy}(u,v)$ by $B'_{xy}(u,v) = C_u + B_{xy}(u,v)$ with arbitrary constants $C_u$. This allows us to modify the $B$-coefficients $B_{xy}(yy')$ and $B_{xy}(y'y'')$ in \eqref{eq:Byy'} and \eqref{eq:By'y''} to vanish, and we can use instead $$ DF_{xy}( P^s ) \left( (q_{uv})_{(u,v) \in E^{\rm{dir}}} \right) = \sum_{y'\neq x} B'_{xy}(xy')q_{xy'} + \sum_{y' \neq x} B'_{xy}(y'x)q_{y'x} + \sum_{y'\neq x,y} B'_{xy}(y'y)q_{y'y} $$ with \begin{eqnarray*} B'_{xy}(xy) &=& -4 p_{yx}^s - p_{yy}^s - 2 \sum_{y' \neq y} p_{yy'}^s + \frac{1}{D_x} \left( 4p_{xy}^sp_{yx}^s + 4\sum_{y'} p_{xy'}^s p_{y'x}^s + p_{xy}^s \sum_{y'} p_{yy'}^s + \sum_{y',y''} p_{xy'}^s p_{y'y''}^s \right), \\ B'_{xy}(xy') &=& \frac{4}{D_x} p_{xy}^s p_{y'x}^s + \frac{1}{D_x} p_{xy}^s \sum_{y''} p_{y'y''}^s + p_{y'y}^s \qquad \qquad\, \text{if $y' \neq x,y$}, \nonumber \\ B'_{xy}(yx) &=& p_{xy}^s \left( \frac{3}{D_x} p_{xy}^s-2\right), \\ B'_{xy}(y'x) &=& \frac{3}{D_x} p_{xy}^s p_{xy'}^s \qquad \qquad \qquad \qquad \qquad \qquad \qquad \,\,\,\,\,\, \text{if $y' \neq x,y$}, \\ B'_{xy}(y'y) &=& p_{xy'}^s \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \text{if $y' \neq x,y$}. \end{eqnarray*} Note that in the case of the unmixed complete graph we have $S_1(x) = V \setminus \{x\}$ and, in the formulas for the $B'$-coefficients, $y'$ and $y''$ represent arbitrary vertices different from $x$, as before.
\end{rmk} To end up with a uniquely defined Jacobi matrix of $F$, we need to restrict to transition probabilities which are independent. For that we introduce the subset $E^{\rm{ess}} \subset E^{\rm{dir}}$ of ``essential'' transition probabilities by removing, for each $x \in V$ with outgoing directed edges, that is $S_1(x) \neq \emptyset$, one pair $(x,y)$ from $E^{\rm{dir}}$. The cardinality of $E^{\rm{ess}}$ is $M := |E^1| + 2 |E^2| - |V_0|$, where $V_0 \subset V$ is the subset of vertices $x \in V$ for which we have $S_1(x) \neq \emptyset$. Any choice $(p_{uv})_{(u,v) \in E^{\rm{ess}}}$ then determines a weighting scheme $P$ by setting $p_{uv} = D_u - \sum_{v' \in S_1(u) \setminus \{v \}} p_{uv'}$ for the directed edge $(u,v) \not\in E^{\rm{ess}}$. Similarly, any choice $(q_{uv})_{(u,v) \in E^{\rm{ess}}}$ determines also the parameters $q_{uv}$ with $(u,v) \not\in E^{\rm{ess}}$ by setting $q_{uv} = - \sum_{v' \in S_1(u) \setminus \{v \}} q_{uv'}$. Then $DF((p_{uv}^s)_{(u,v) \in E^{\rm{ess}}})$ is a square matrix of size $M$, and the weighting scheme $P^s$ corresponding to $(p_{uv}^s)_{(u,v) \in E^{\rm{ess}}}$ is \emph{asymptotically stable} if and only if the real parts of all eigenvalues of this square matrix are negative, and the weighting scheme $P^s$ is \emph{unstable} if and only if at least one of the real parts of these eigenvalues is positive. Let us reformulate this restriction in terms of matrix multiplications. We start by enumerating the vertices of the graph $G = (V,E)$: $V = \{ v_0,\dots,v_N \}$. We also introduce the following enumeration on the directed edges in $E^{\rm{dir}}$: Let $1 = j_0$ and $a_{j_0},\dots,a_{k_0}$ be the edges of the type $(v_0,*)$ (where second vertices are chosen with increasing indices) in $E^{\rm{dir}}$, $j_1 = k_0+1$ and $a_{j_1},\dots,a_{k_1}$ be the edges of the type $(v_1,*)$ in $E^{\rm{dir}}$, and so on. For all vertices $v_l \in V$ with $S_1(v_l) = \emptyset$, we set $j_l = k_{l-1}+1$ and $k_l = k_{l-1}$.
We remove the edges $a_{k_0}, a_{k_1}, \dots, a_{k_N}$ from $E^{\rm{dir}}$ to obtain $E^{\rm{ess}}$. For simplicity, we use the notation $p_j^s$ and $q_j$ for $p_{a_j}^s$ and $q_{a_j}$, and we can write for all $a_j \in E^{\rm{ess}}$, $$ DF_{a_j}((p_k^s)_{a_k \in E^{\rm{ess}}})((q_k)_{a_k \in E^{\rm{ess}}} ) = \sum_{l=0}^{N} \sum_{k=j_l}^{k_l} B_{a_j}(a_k) q_k = \sum_{l=0}^{N} \sum_{k=j_l}^{k_l-1} (B_{a_j}(a_k)-B_{a_j}(a_{k_l})) q_k, $$ where the last expression involves only parameters $q_k$ corresponding to essential directed edges $a_k \in E^{\rm{ess}}$. Consequently, the Jacobi matrix $DF((p_k^s)_{a_k \in E^{\rm{ess}}})$ can be written as \begin{equation} DF((p_k^s)_{a_k \in E^{\rm{ess}}}) = P_1 B P_2 \end{equation} where $B$ is the square matrix of size $k_N$ with $B_{jk} = B_{a_j}(a_k)$ for $a_j,a_k \in E^{\rm{dir}}$, $P_1$ is obtained from the identity matrix $I_{k_N}$ by removing the rows $k_0, k_1,\dots, k_N$, and $P_2 = P_1^\top - P_3$ with $P_3$ a $k_N \times M$ matrix whose first $k_0-1$ columns are all the standard basis vector $e_{k_0}$, the next $k_1-j_1$ columns are all the standard basis vector $e_{k_1}$, and so on. \subsection{Examples of asymptotically stable and unstable equilibria} In this subsection we investigate curvature sharp weighting schemes of various examples of unmixed combinatorial graphs. \begin{ex}[Curvature sharp weighting schemes on a cycle] Let $C_N = (V,E)$ be a cycle of length $N \ge 4$, that is $V= \{v_0,v_1,\dots,v_{N-1}\}$ and $v_i \sim v_{i+1}$ with indices $i$ modulo $N$. For simplicity, we refer to vertex $v_i$ henceforth as $i$.
We assume $P^s$ to be a non-lazy curvature sharp weighting scheme on $C_N$, and we remove the directed edges $(i,i+1) \in E^{\rm{dir}}$ to obtain $E^{\rm{ess}}$. The only coefficients $B_{a_j}(a_k)$ with $a_j \in E^{\rm{ess}}$ and $a_k \in E^{\rm{dir}}$ which are potentially non-zero are (see \eqref{eq:Bxy}, \eqref{eq:Bxy'}, \eqref{eq:Byx} and \eqref{eq:By'x}) \begin{eqnarray*} B_{i,i-1}(i,i-1) &=& -4p_{i,i-1}^s + 8p_{i,i-1}^s p_{i-1,i}^s + 4p_{i,i+1}^s p_{i+1,i}^s, \\ B_{i,i-1}(i,i+1) &=& 4 p_{i,i-1}^s p_{i+1,i}^s, \\ B_{i,i-1}(i-1,i) &=& 4(p_{i,i-1}^s)^2-4p_{i,i-1}^s, \\ B_{i,i-1}(i+1,i) &=& 4p_{i,i-1}^s p_{i,i+1}^s. \end{eqnarray*} This implies \begin{multline*} DF_{i,i-1}((p_k^s)_{a_k \in E^{\rm{ess}}})((q_k)_{a_k \in E^{\rm{ess}}}) \\ = (B_{i,i-1}(i,i-1)-B_{i,i-1}(i,i+1)) q_{i,i-1} + B_{i,i-1}(i+1,i) q_{i+1,i} - B_{i,i-1}(i-1,i) q_{i-1,i-2} \\ = 4( (p_{i,i-1}^s)^2-p_{i,i-1}^s ) q_{i-1,i-2} + 4(-p_{i,i-1}^s + 2 p_{i,i-1}^s p_{i-1,i}^s + p_{i,i+1}^s p_{i+1,i}^s - p_{i,i-1}^s p_{i+1,i}^s) q_{i,i-1} + 4 p_{i,i-1}^s p_{i,i+1}^s q_{i+1,i}. \end{multline*} In the case of the simple random walk $p_{i,i-1}^s=p_{i,i+1}^s = 1/2$, $i \in \{0,\dots,N-1\}$, this simplifies to $$ DF_{i,i-1}((p_k^s)_{a_k \in E^{\rm{ess}}})((q_k)_{a_k \in E^{\rm{ess}}}) = q_{i-1,i-2} + q_{i+1,i},$$ and the corresponding matrix $DF((p_k^s)_{a_k \in E^{\rm{ess}}})$ coincides with the adjacency matrix of $C_N$ whose largest eigenvalue is $2$. Therefore the simple random walk on $C_N$, $N \ge 4$, is an unstable equilibrium. This is in contrast to the simple random walk on $C_3 = K_3$, which is asymptotically stable, as we will see in Example \ref{ex:srwcompletestable}.
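This instability can be checked independently: at the simple random walk, the matrix $DF((p_k^s)_{a_k \in E^{\rm{ess}}})$ is the adjacency matrix of $C_N$, whose spectrum is well known. A quick NumPy check (our own snippet, not part of the module):

```python
import numpy as np

N = 6  # any cycle length N >= 4
# Linearized flow matrix at the SRW on C_N = adjacency matrix of C_N
DF = np.zeros((N, N))
for i in range(N):
    DF[i, (i + 1) % N] = DF[i, (i - 1) % N] = 1
# Eigenvalues are 2*cos(2*pi*k/N), k = 0, ..., N-1; the largest is 2 > 0
print(round(max(np.linalg.eigvalsh(DF)), 6))  # -> 2.0, hence unstable
```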
In the case of the totally degenerate clockwise weighting scheme $p_{i,i-1}=1$ and $p_{i,i+1}=0$ (see right hand side of Figure \ref{fig:path-cycle-graph}), we have $$ DF_{i,i-1}((p_k^s)_{a_k \in E^{\rm{ess}}})((q_k)_{a_k \in E^{\rm{ess}}}) = - 4 q_{i,i-1}, $$ that is, $DF((p_k^s)_{a_k \in E^{\rm{ess}}}) = - 4 {\rm{Id}}_N$, and this curvature sharp Markovian weighting scheme is asymptotically stable. This agrees with the fact that many initial weighting schemes end up in this limit under the curvature flow. The same holds true for the corresponding totally degenerate anti-clockwise weighting scheme with $p_{i,i+1}=1$ and $p_{i,i-1}=0$. \end{ex} \begin{ex}[Flow limits of the octahedron] We know from Example \ref{ex:octahedron} that stationary solutions of the normalized curvature flow on the octahedron are the simple random walk without laziness as well as the totally degenerate weighting scheme given as matrix $P^s$ in the following code (see also the right hand side of Figure \ref{fig:octahedron}). The linearization $DF(P^s)$ is analyzed in line 10 of the program, and the function \textcolor{blue}{\texttt{equilibrium\_type}} with the parameters chosen in lines 6 and 7 returns one of the values $-1,0,1$ (corresponding to ``asymptotically stable'', ``undecided'', ``unstable'', respectively), followed by a list of its eigenvalues. That is, after execution of line 10, {\tt{result[0]}} is one of the values $-1,0,1$ and {\tt{result[1]}} is a list of the 18 eigenvalues $\lambda_j$ of $DF(P^s)$.
\begin{lstlisting}[language=Python]{}
A = [[0,1,0,1,1,1],[1,0,1,0,1,1],[0,1,0,1,1,1],
[1,0,1,0,1,1],[1,1,1,1,0,0],[1,1,1,1,0,0]]
# Ps = srw(A)
Ps = [[0,1,0,0,0,0],[0,0,1,0,0,0],[0,0,0,1,0,0],
[1,0,0,0,0,0],[1/4,1/4,1/4,1/4,0,0],[1/4,1/4,1/4,1/4,0,0]]
eigenvalues = True
jacobi_matrix = False
threshold = 0.001
norm_tolerance = 0.001
result = equilibrium_type(A,Ps,eigenvalues,jacobi_matrix,norm_tolerance,threshold)
print("Flow dynamics eigenvalues:")
for j in range(18):
    print(np.around(result[1][j],3))
\end{lstlisting}
The program provides us with the following list of complex eigenvalues: \medskip \begin{center} \begin{tabular}{lrrrrrrrr} $\lambda_j$ & $-1$ & $-1+i$ & $-1-i$ & $-2$ & $-2+i$ & $-2-i$ & $-3$ & $-4$ \\ \hline multiplicity & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $4$ \end{tabular} \end{center} \medskip This shows that the totally degenerate curvature sharp weighting scheme $P^s$ on the octahedron is asymptotically stable. This is expected since the initial weighting scheme $P_0$ in \eqref{eq:P0-octahedron} of Example \ref{ex:octahedron} converges to this limit under the normalized numerical curvature flow. Running the same code for the simple random walk without laziness instead (by uncommenting line 3 and commenting out lines 4 and 5 in the above code) shows that this second curvature sharp weighting scheme is unstable. The eigenvalues in this case are all real valued, one of them $0.5$, and given as follows: \medskip \begin{center} \begin{tabular}{lrrrrr} $\lambda_j$ & $0.5$ & $0$ & $-0.75$ & $-1$ & $-1.5$ \\ \hline multiplicity & $3$ & $3$ & $2$ & $6$ & $4$ \end{tabular} \end{center} \end{ex} \begin{ex}[Simple random walk on a complete graph] \label{ex:srwcompletestable} Let $K_{n+1}=(V,E)$ be the complete unmixed graph with $n+1\ge 3$ vertices. Instead of the $B$-coefficients, we make use of the $B'$-coefficients introduced in Remark \ref{rmk:compl-freedom-asymp}.
The simple random walk $p_{xy} = \frac{1}{n}$ for $x \neq y$ is a curvature sharp Markovian weighting scheme, and the non-zero $B'$-coefficients are then given by $$ B'_{xy}(xy) = \frac{(n+1)(3-n)}{n^2}, \, B'_{xy}(xy') = \frac{2n+3}{n^2}, \, B'_{xy}(yx) = \frac{3-2n}{n^2}, \, B'_{xy}(y'x) = \frac{3}{n^2}, \, B'_{xy}(y'y) = \frac{1}{n}, $$ where $y' \in V$ is an arbitrary vertex different from $x,y$. Let $\{ v_0, v_1, \dots, v_n \}$ be the vertex set of $K_{n+1}$ and, as in the previous example, we refer to vertex $v_i$ as $i$, for simplicity. Let us now consider the case $n=2$. The process of removing edges from $E^{\rm{dir}}$ described earlier leads to the following remaining edges in $E^{\rm{ess}}$: $$ a_1 = (0,1), \quad a_3 = (1,0), \quad a_5 = (2,0). $$ Choosing the simple random walk $p_{01}^s = p_{10}^s = p_{20}^s = 1/2$, we have \begin{multline} DF(p_{01}^s,p_{10}^s,p_{20}^s) = \begin{pmatrix} e_1^\top \\ e_3^\top \\ e_5^\top \end{pmatrix} B' \begin{pmatrix} e_1-e_2 & e_3-e_4 & e_5 - e_6 \end{pmatrix} \\ = \begin{pmatrix} B_{01}'(01) - B'_{01}(02) & B_{01}'(10) - B_{01}'(12) & B_{01}'(20) - B_{01}'(21) \\ B_{10}'(01) - B_{10}'(02) & B_{10}'(10) - B_{10}'(12) & B_{10}'(20) - B_{10}'(21) \\ B_{20}'(01) - B_{20}'(02) & B_{20}'(10) - B_{20}'(12) & B_{20}'(20) - B_{20}'(21) \end{pmatrix} = \begin{pmatrix} -1 & \frac{1}{4} & \frac{1}{4} \\ \frac{1}{4} & -1 & \frac{1}{4} \\ \frac{1}{4} & \frac{1}{4} & -1 \end{pmatrix}, \end{multline} which is a negative definite matrix with eigenvalues $-\frac{1}{2}$, $-\frac{5}{4}$, $-\frac{5}{4}$, showing that the simple random walk on $K_3$ is an asymptotically stable equilibrium. 
This result can be numerically verified via the following code:
\begin{lstlisting}[language=Python]{}
n = 2
A = complete(n+1)
P = srw(A)
result = equilibrium_type(A, P, True, True)
print()
print("Eigenvalues:")
print(np.around(result[1], 3))
print()
print("Linearised flow matrix at equilibrium:")
print(np.around(result[2], 3))
\end{lstlisting}
The program returns the eigenvalues of the linearized flow matrix of $K_3$ at the simple random walk. Note, however, that the computed matrix is based here on the $B$-coefficients instead of the $B'$-coefficients, so this matrix is slightly different from the one given above, while the eigenvalues are the same. In the case $n=3$, we have $p_{01}^s=p_{02}^s=p_{10}^s=p_{12}^s=p_{20}^s=p_{21}^s=p_{30}^s=p_{31}^s=1/3$ and the matrix returned by the program (after changing {\tt{n=2}} into {\tt{n=3}} in line 1 of the code) is as follows: $$ DF(p_{01}^s,p_{02}^s,p_{10}^s,p_{12}^s,p_{20}^s,p_{21}^s,p_{30}^s,p_{31}^s) = \begin{pmatrix} -1 & 0 & -1/3 & 0 & 1/3 & 1/3 & 1/3 & 1/3 \\ 0 & -1 & 1/3 & 1/3 & -1/3 & 0 & 0 & -1/3 \\ -1/3 & 0 & -1 & 0 & 1/3 & 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 0 & -1 & 0 & -1/3 & -1/3 & 0 \\ 0 & -1/3 & 1/3 & 1/3 & -1 & 0 & 0 & -1/3 \\ 1/3 & 1/3 & 0 & -1/3 & 0 & -1 & -1/3 & 0 \\ 1/3 & 1/3 & 0 & -1/3 & 0 & -1/3 & -1 & 0 \\ 0 & -1/3 & 1/3 & 1/3 & -1/3 & 0 & 0 & -1 \end{pmatrix}. $$ Its eigenvalues are $-\frac{2}{3}$ with multiplicity $6$ and $-2$ with multiplicity $2$. This shows that the simple random walk on $K_4$ is again an asymptotically stable equilibrium. \end{ex} Since numerical experiments show that all non-degenerate Markovian weighting schemes without laziness on $K_{n+1}$ converge to the simple random walk, we expect that the simple random walk on $K_{n+1}$ is an asymptotically stable equilibrium for all $n \ge 2$ (see also our Conjecture \ref{conj:complgraph}, which is an even stronger statement).
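Before turning to larger $n$, the explicit $8 \times 8$ matrix above can also be double-checked directly. A quick NumPy verification of its spectrum (our own snippet, not part of the module):

```python
import numpy as np

t = 1/3
# DF at the simple random walk on K_4, copied from the display above
DF = np.array([
    [-1,  0, -t,  0,  t,  t,  t,  t],
    [ 0, -1,  t,  t, -t,  0,  0, -t],
    [-t,  0, -1,  0,  t,  t,  t,  t],
    [ t,  t,  0, -1,  0, -t, -t,  0],
    [ 0, -t,  t,  t, -1,  0,  0, -t],
    [ t,  t,  0, -t,  0, -1, -t,  0],
    [ t,  t,  0, -t,  0, -t, -1,  0],
    [ 0, -t,  t,  t, -t,  0,  0, -1],
])
eigs = np.sort(np.linalg.eigvals(DF).real)
print(np.round(eigs, 6))  # -2 (twice) and -2/3 (six times), all real
```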
To this end, our code provides the following numerical results: The eigenvalues of $DF(P^s)$ for the simple random walk without laziness on the complete graph $K_{n+1}$ are real valued and given by \begin{itemize} \item $-\frac{7}{16}$ with multiplicity $4$, $-\frac{3}{4}$ with multiplicity $6$, and $-\frac{7}{4}$ with multiplicity $5$ for $n=4$, \item $-\frac{8}{25}$ with multiplicity $5$, $-\frac{4}{5}$ with multiplicity $10$, and $-\frac{8}{5}$ with multiplicity $9$ for $n=5$, \item $-\frac{1}{4}$ with multiplicity $6$, $-\frac{5}{6}$ with multiplicity $15$, and $-\frac{3}{2}$ with multiplicity $14$ for $n=6$, and \item $-\frac{10}{49}$ with multiplicity $7$, $- \frac{6}{7}$ with multiplicity $21$, and $-\frac{10}{7}$ with multiplicity $20$ for $n=7$. \end{itemize} These results give rise to the following conjecture: \begin{conj} The non-lazy simple random walk $P^s$ on the unmixed complete graph $K_{n+1}$, $n \ge 2$, is asymptotically stable and the eigenvalues of $DF(P^s)$ are given by \begin{eqnarray*} - \frac{n-1}{n} && \qquad \text{with multiplicity ${n \choose 2}$,} \\ - \frac{n+3}{n^2} && \qquad \text{with multiplicity $n$,} \\ - \frac{n+3}{n} && \qquad \text{with multiplicity ${n \choose 2}-1$.} \end{eqnarray*} As $n \to \infty$, we have $- \frac{n-1}{n} \to -1$, $- \frac{n+3}{n^2} \approx -\frac{1}{n} \to 0$ and $- \frac{n+3}{n} \to -1$. \end{conj} \begin{ex}[Simple random walks on hypercubes] We know from \cite[Corollary 1.14]{CKLMPS-22} that the simple random walk without laziness on a hypercube $Q^d = (K_2)^d$ is curvature sharp, and it is the only non-degenerate curvature sharp weighting scheme without laziness if and only if $d$ is odd.
For $d=2$, a non-degenerate curvature sharp weighting scheme on $Q^2$ different from the simple random walk is given in line 4 of the following code:
\begin{lstlisting}[language=Python]{}
A = hypercube(2)
p = rand()
q = 1-p
P = [[0,p,q,0],[p,0,0,q],[p,0,0,q],[0,p,q,0]]
result = equilibrium_type(A,P,True)
print("flow dynamics eigenvalues:", result[1])
\end{lstlisting}
All eigenvalues $\lambda_j$ of the corresponding linearized flow matrix provided by this code via line 6 seem to be real. Two of them are numerically zero and the other two are given by $\pm \lambda$ with a non-zero real $\lambda$. So this equilibrium is unstable. Our experiments for arbitrary weighting schemes on $Q^d$, $d \ge 2$, close to the simple random walk show that the numerical flow usually converges to a degenerate limit. This agrees with our observation that the linearized flow matrix at the simple random walk always seems to have real eigenvalues, some of them positive. These eigenvalues can be obtained via the following code:
\begin{lstlisting}[language=Python]{}
d = 3
A = hypercube(d)
P = srw(A)
result = equilibrium_type(A,P,True)
print("Flow dynamics eigenvalues at the simple random walk:")
for j in range((d-1)*(2**d)):
    print(np.around(result[1][j],4))
\end{lstlisting}
\end{ex} Let us finish this section with a conjecture which we verified numerically for $d=2,3,\dots,9$: \begin{conj} The non-lazy simple random walk $P^s$ on the hypercube $Q^d$, $d \ge 2$, is unstable and the eigenvalues of the corresponding linearized curvature flow matrix are all real, with largest eigenvalue $$ \lambda_{\max} = \frac{4}{d}. $$ \end{conj} \section{Implementation of the curvature flow and useful tools} This section provides a short description of a program, written in Python, which allows users to carry out their own curvature flow experiments.
This program is provided as an ancillary file and can be used in combination with the ``Graph Curvature Calculator'', which can be freely accessed at \begin{center} \textcolor{blue}{\url{http://www.mas.ncl.ac.uk/graph-curvature}} \end{center} The Graph Curvature Calculator is a powerful but easy to use interactive Web tool to draw graphs and to compute various types of curvatures like Bakry-\'Emery curvature on its vertices or Ollivier Ricci curvature on its edges (for details see \cite{CKLLS-19}). This Web tool allows users to obtain the adjacency matrix of the graph under consideration which can then be used as input for the curvature flow code. \smallskip The curvature flow code provides functions and routines which can be divided into six categories: \begin{enumerate} \item Functions related to graphs and their adjacency matrices \item Functions related to weighting schemes \item Functions testing combinatorial and weighted graphs \item Curvature flow computation routines \item Curvature computation routines \item Display routines \end{enumerate} Each category is discussed in one of the following subsections. \subsection{Functions related to graphs and their adjacency matrices} Recall that the topology of a given Markov chain $(G,P)$ is contained in the combinatorial graph $G=(V,E)$. This information is provided by the adjacency matrix $A = A_G$ of the graph. Firstly, we go through some useful functions performing operations with these adjacency matrices. Every graph generated by one of the functions below is returned as such an adjacency matrix, in particular as a NumPy array. 
The names of these functions are as follows: \begin{itemize} \item \textcolor{blue}{\texttt{rand\_adj\_mat(n, p, connected=False)},} \item \textcolor{blue}{\texttt{complete(n)},} \item \textcolor{blue}{\texttt{path(n)},} \item \textcolor{blue}{\texttt{cycle(n)},} \item \textcolor{blue}{\texttt{wedge\_sum(A, B, i, j)},} \item \textcolor{blue}{\texttt{bridge\_at(A, B, i, j)},} \item \textcolor{blue}{\texttt{hypercube(n)},} \item \textcolor{blue}{\texttt{cart\_prod(A, B)},} \item \textcolor{blue}{\texttt{onespheres(A)}.} \end{itemize} \textcolor{blue}{\texttt{rand\_adj\_mat(n, p)}} returns a random adjacency matrix with \texttt{n} vertices, where \texttt{p} is the probability that an edge exists between any two vertices. Therefore higher values of \texttt{p} usually lead to better connected graphs. Choosing \texttt{connected=True} instead guarantees that the returned adjacency matrix provides a connected combinatorial graph. The functions \textcolor{blue}{\texttt{complete(n)}}, \textcolor{blue}{\texttt{path(n)}} and \textcolor{blue}{\texttt{cycle(n)}} return the adjacency matrix of a complete graph with \texttt{n} vertices, a path of length $n$ and a cycle of length $n$, respectively. \textcolor{blue}{\texttt{wedge\_sum(A, B, i, j)}} produces an adjacency matrix of a graph that is the wedge sum of two pointed graphs represented by the adjacency matrices \texttt{A} and \texttt{B} and the \texttt{i}th vertex of \texttt{A} and the \texttt{j}th vertex of \texttt{B}, that is, the new graph is obtained from the disjoint union of these two graphs by only identifying these two vertices and keeping all other edges and vertices disjoint. There is also \textcolor{blue}{\texttt{bridge\_at(A, B, i, j)}}, which returns an adjacency matrix of the graph formed by connecting the graphs represented by \texttt{A} and \texttt{B} by a single edge at the \texttt{i}th vertex of \texttt{A} and the \texttt{j}th vertex of \texttt{B}.
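To illustrate the conventions, here are hypothetical one-line reimplementations of a few of these constructors (our own sketches, not the module code; \texttt{cart\_prod} uses the standard Kronecker-sum identity for adjacency matrices of Cartesian products):

```python
import numpy as np

def complete(n):
    # complete graph K_n: all-ones matrix minus the diagonal
    return np.ones((n, n)) - np.eye(n)

def cycle(n):
    # cycle C_n: vertex i adjacent to i+1 and i-1 (mod n)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

def cart_prod(A, B):
    # Cartesian product adjacency: A x I_m + I_n x B (Kronecker sum)
    n, m = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)
```

For instance, `cart_prod(complete(2), complete(2))` is (a relabelling of) the cycle $C_4$, in line with $Q^2 = (K_2)^2$.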
The function \textcolor{blue}{\texttt{hypercube(n)}} returns the adjacency matrix of the \texttt{n}-dimensional hypercube. The Cartesian product of two graphs represented by adjacency matrices \texttt{A} and \texttt{B} is returned by the function \textcolor{blue}{\texttt{cart\_prod(A, B)}}. (The $n$-dimensional hypercube can alternatively be generated as the Cartesian product of $n$ complete graphs $K_2$.) Finally, for a graph $G = (V,E)$ given by the adjacency matrix \texttt{A} and $V = \{0,1,\dots,n-1\}$, the function \textcolor{blue}{\texttt{onespheres(A)}} returns a list of lists whose $i$-th entry is a list of all neighbours of vertex $i \in V$, and whose $n$-th entry is a list of the combinatorial degrees of the vertices in $V$. This function is mainly used internally by the other functions. \subsection{Functions related to weighting schemes} The weighting scheme of a Markov chain $(G,P)$ is provided via the weighted matrix $P = P_G$. The functions in this category are the following: \goodbreak \begin{itemize} \item \textcolor{blue}{\texttt{randomizer(A, threshold=0.001, laziness=False)},} \item \textcolor{blue}{\texttt{srw(A, laziness=False)},} \item \textcolor{blue}{\texttt{cart\_prod\_prob(P, Q, p, q)}.} \end{itemize} \textcolor{blue}{\texttt{randomizer(A)}} and \textcolor{blue}{\texttt{srw(A)}} are two useful functions that can be used to generate weighted matrices from an adjacency matrix. The function \textcolor{blue}{\texttt{randomizer(A)}} returns a weighting scheme for the graph $G$ with random, numerically non-degenerate transition rates, that is, no transition rate is chosen to be below the parameter \texttt{threshold}. Usually, the returned weighting schemes are without laziness, but choosing \texttt{laziness=True} returns weighting schemes with all vertices having laziness $\ge$ \texttt{threshold}.
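A possible implementation of such a randomized weighting scheme, as a rough sketch (our own code, not the module's; after normalization the rates are only guaranteed to stay well away from zero for modest vertex degrees):

```python
import numpy as np

def randomizer_sketch(A, threshold=0.001, rng=None):
    # Sketch of a random non-lazy Markovian weighting scheme on the edges
    # of A: each row is a random probability vector supported on the
    # neighbours, with raw weights bounded below by the threshold.
    rng = np.random.default_rng() if rng is None else rng
    A = np.asarray(A, dtype=float)
    P = np.zeros_like(A)
    for i in range(A.shape[0]):
        nbrs = np.nonzero(A[i])[0]
        w = threshold + rng.random(len(nbrs))
        P[i, nbrs] = w / w.sum()  # normalize to a Markovian row
    return P
```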
\textcolor{blue}{\texttt{srw(A)}} returns the weighting scheme corresponding to the non-lazy simple random walk on the graph $G = (V,E)$ represented by the adjacency matrix \texttt{A}. Choosing \texttt{laziness=True}, the transition rates of a vertex $v \in V$ with degree $n$ to its neighbours are chosen to be $\frac{1}{n+1}$ and its laziness is also chosen to be $\frac{1}{n+1}$. The function \textcolor{blue}{\texttt{cart\_prod\_prob(P, Q, p, q)}} is a ``weighted'' analogue of the function \textcolor{blue}{\texttt{cart\_prod}} from the previous subsection for the Cartesian product of two weighting schemes \texttt{P}, \texttt{Q} with weights \texttt{p}, \texttt{q}, with $\texttt{p}+\texttt{q} = 1$. If \texttt{P} and \texttt{Q} are of size $n$ and $m$, respectively, this function returns the matrix $\texttt{p}\, \texttt{P} \otimes I_m + \texttt{q}\, I_n \otimes \texttt{Q}$ of size $nm$, where $I_n$ is the identity matrix of size $n$ and $A \otimes B$ is the Kronecker product of $A$ and $B$. \subsection{Functions testing combinatorial and weighted graphs} For combinatorial graphs given by their adjacency matrices \texttt{A} and weighted graphs given additionally by their weighting scheme \texttt{P}, we have the following test functions: \begin{itemize} \item \textcolor{blue}{\texttt{is\_connected(A)},} \item \textcolor{blue}{\texttt{is\_weakly\_connected(P, threshold=0.001)},} \item \textcolor{blue}{\texttt{is\_totally\_degenerate(A, P, threshold=0.001)},} \item \textcolor{blue}{\texttt{is\_markovian(P, norm\_tolerance=0.001)},} \item \textcolor{blue}{\texttt{is\_curvature\_sharp(A, P, norm\_tolerance=0.001, threshold=0.001)},} \item \textcolor{blue}{\texttt{equilibrium\_type(A, P, eigenvalues=False, jacobi\_matrix=False,\\ norm\_tolerance=0.001, threshold=0.001)}.} \end{itemize} The function \textcolor{blue}{\texttt{is\_connected(A)}} returns \texttt{True} if and only if the adjacency matrix \texttt{A} represents a connected graph.
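A connectivity test along these lines might look as follows (a sketch via breadth-first search from vertex $0$; the program's internal method may differ):

```python
import numpy as np
from collections import deque

def is_connected(A):
    # breadth-first search over the adjacency matrix A, starting at vertex 0
    A = np.asarray(A)
    n = A.shape[0]
    seen = {0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in np.nonzero(A[v])[0]:
            w = int(w)
            if w not in seen:
                seen.add(w)
                queue.append(w)
    # the graph is connected iff every vertex is reachable from vertex 0
    return len(seen) == n
```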
The function \textcolor{blue}{\texttt{is\_weakly\_connected(P)}} is a ``weighted'' analogue, which returns \texttt{True} if and only if the weighted matrix \texttt{P} represents a weakly connected graph. It does this by forming an adjacency matrix with a \texttt{1} in the $(\texttt{i}, \texttt{j})$th entry if and only if \texttt{P[i, j] > threshold} or \texttt{P[j, i] > threshold} and then testing for connectedness. Recall that a weighted graph $(G,P)$ is called \emph{numerically totally degenerate} if there are no two-sided edges with numerically non-zero transition rates in both directions and no one-sided edges with numerically non-zero transition rate, where we consider a transition rate $p_{xy}$ as numerically non-zero if and only if $p_{xy} \ge$ \texttt{threshold}. The function \textcolor{blue}{\texttt{is\_totally\_degenerate(A,P)}} tests this property. The function \textcolor{blue}{\texttt{is\_markovian(P)}} tests whether the entries of each of the rows of $P$ add up numerically to $1$ up to an error $\le \texttt{norm\_tolerance}$. Numerical curvature sharpness (up to an error $\le \texttt{threshold}$) of Markovian weighted graphs given by \texttt{(A,P)} is tested by \textcolor{blue}{\texttt{is\_curvature\_sharp(A,P)}}. If \texttt{(A,P)} fails to be Markovian (with respect to \texttt{norm\_tolerance}), this function returns \texttt{None} and notifies the user by an error message. For dynamical investigations of curvature flow equilibria, we have the function \textcolor{blue}{\texttt{equilibrium\_type(A,P)}}, which always returns a list of length three. The function first checks whether \texttt{(A,P)} satisfies the Markovian property and is numerically curvature sharp. If this is not the case, it returns a list of three \texttt{None} values. Otherwise, the function investigates the real parts of the eigenvalues $\lambda_j$ of the linearized curvature flow matrix at the equilibrium \texttt{P}.
The first entry of the return list is $-1, 0$ or $1$ depending on the maximum $\max_j {\rm{Re}}(\lambda_j)$. If this maximum is $\ge \texttt{threshold}$, the return value is $1$ (for ``unstable''), and if this maximum is $\le - \texttt{threshold}$, the return value is $-1$ (for ``asymptotically stable''). Otherwise, the dynamical nature of the equilibrium cannot be numerically decided and the function returns the value $0$. The following two entries of the return list are usually \texttt{None} unless the user made the choices \texttt{eigenvalues=True} or \texttt{jacobi\_matrix=True}. In the first case, the second entry of the return list is a list of all eigenvalues of the linearized curvature flow matrix, and in the second case the third entry of the return list is the linearized curvature flow matrix itself. \subsection{Curvature flow computation routines} At the heart of the program are the curvature flow routines solving the initial value problems for the ordinary differential equations \eqref{eq:laziness} and \eqref{eq:flowdiffeq}. The relevant routines are the following: \begin{itemize} \item \textcolor{blue}{\texttt{curv\_flow(A, P, t\_max, dt=0.3, C=zeroes)},} \item \textcolor{blue}{\texttt{norm\_curv\_flow(A, P, t\_max, dt=0.3, stoch\_corr=True, norm\_tolerance=0.001)},} \item \textcolor{blue}{\texttt{norm\_curv\_flow\_lim(A, P, dt=0.3, stoch\_corr=True, norm\_tolerance=0.001,\\ lim\_tolerance=0.001, t\_lim=10000)}.} \end{itemize} The initial Markov chain $(G,P_0)$ with $G=(V,E)$ is entered by the adjacency matrix \texttt{A} describing the topology of the graph $G$ and the weighting scheme \texttt{P} containing the initial probability transitions $p_{xy}(0)$. The first routine \textcolor{blue}{\texttt{curv\_flow(A,P)}} computes the non-normalized numerical curvature flow with coefficients $C_x(t)=0$ in \eqref{eq:flowdiffeq}. If users decide to choose other coefficient functions, they need to modify the input parameter \texttt{C=zeroes}.
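As we understand the interface, a coefficient function passed via \texttt{C} takes the pair \texttt{(A, P)} and returns a list of length $|V|$ of real values; a hypothetical user-defined example (the name \texttt{constant\_coefficients} is ours, not part of the program):

```python
def constant_coefficients(A, P):
    # hypothetical coefficient choice C_x(t) = 0.1 for every vertex x;
    # any callable with this signature and return format could be passed as C
    return [0.1] * len(A)
```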
Note that \textcolor{blue}{\texttt{zeroes(A,P)}} is a function returning simply a list of zeroes of length $|V|$. Users can investigate modifications of the curvature flow by choosing their own coefficient functions with input values \texttt{(A,P)} and returning a list of length $|V|$ of real values. Using the discretization parameter \texttt{dt=0.3} for the discrete time steps starting at \texttt{t=0}, the curvature flow routine creates a list \textcolor{blue}{\texttt{P\_list}} of weighting schemes (represented by NumPy arrays of size $|V| \times |V|$) at each time increment using the classical fourth-order Runge--Kutta algorithm RK4. There are two internal subroutines involved which we would like to mention briefly. \textcolor{blue}{\texttt{Pvecs\_to\_P}} translates a weighting scheme given by a list of lists (where each inner list contains the transition rates of the corresponding vertex) into the corresponding NumPy array. \textcolor{blue}{\texttt{Pvecs\_prime}} computes, for a given weighting scheme \texttt{P}, the right-hand side of the ordinary differential equations describing the curvature flow. The representation of this right-hand side is again a list of lists, as described before. The computation stops just before the discrete time steps exceed the limit time \texttt{t\_max} and the routine returns \texttt{P\_list}. Note that in this general setting, transition rates can assume arbitrary values, even negative ones, or diverge to infinity in finite time, which may lead to system error messages. Users need to be aware of this possibility. The normalized curvature flow, using the coefficients $C_x(t) = K_{P(t),\infty}^{d(x,\cdot)}(x)$ (see \eqref{eq:CxMark}), is numerically computed by the routine \textcolor{blue}{\texttt{norm\_curv\_flow(A,P)}}. This special flow is the main focus of this paper and could also be mimicked by choosing \texttt{C=K\_inf} in \textcolor{blue}{\texttt{curv\_flow}}.
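The RK4 update applied at each time increment takes the familiar form; a generic sketch for a right-hand-side function \texttt{f} acting on scalars or NumPy arrays (not the program's actual stepping code):

```python
def rk4_step(f, P, dt):
    # one classical fourth-order Runge-Kutta step for dP/dt = f(P)
    k1 = f(P)
    k2 = f(P + 0.5 * dt * k1)
    k3 = f(P + 0.5 * dt * k2)
    k4 = f(P + dt * k3)
    return P + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

For the test equation $dP/dt = -P$ with \texttt{dt=0.3}, a single step agrees with the exact solution $e^{-0.3}$ to about $10^{-5}$.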
The function \textcolor{blue}{\texttt{K\_inf(A,P)}} returns a list of upper curvature bounds for all vertices of the Markovian weighted graph represented by \texttt{(A,P)} (see \cite[formula (66)]{CKLMPS-22} for an explicit expression of this function in terms of transition rates). During the numerical computations of subsequent time steps, the Markovian property of the corresponding weighting schemes may be slightly violated. If this violation exceeds the threshold \texttt{norm\_tolerance}, the routine prepares for a potential correction according to the Boolean variable \texttt{stoch\_corr}. If \texttt{stoch\_corr=False}, the program stops with a message to the user as discussed in Example \ref{ex:random-graph}. Otherwise the program carries out the following automatic Markovian renormalization of the currently considered weighting scheme: while the diagonal entries (the laziness values) are unchanged, the off-diagonal entries of every row are rescaled by the same factor to guarantee that the resulting matrix becomes stochastic again. As before, this routine returns a list \texttt{P\_list} of consecutive weighting schemes up to the time limit \texttt{t\_max}. While the user needs to specify the time limit \texttt{t\_max} in the above two routines, the third routine \textcolor{blue}{\texttt{norm\_curv\_flow\_lim(A,P)}} continues computing the normalized numerical curvature flow until a numerical flow limit is reached. This limit is determined by the parameter \texttt{lim\_tolerance}. The details for this numerical limit are explained in the introductory part of Section \ref{sec:curv-flow-ex}. This routine returns a list of length two: the limiting weighting scheme as a NumPy array, followed by the numerical convergence time. Since it may happen that a normalized numerical flow does not converge at all (even though we are not aware of any such example), the parameter \texttt{t\_lim} provides an upper time limit beyond which the routine will not continue.
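Our reading of the Markovian renormalization step described above can be sketched as follows (a sketch assuming the row-stochastic convention; diagonal laziness entries are kept fixed and the off-diagonal entries of each row are rescaled so the row sums to $1$ again):

```python
import numpy as np

def stochastic_correction(P):
    # rescale the off-diagonal entries of each row so that the row sums
    # to 1 again, keeping the diagonal (laziness) entries unchanged
    P = np.array(P, dtype=float)
    for x in range(P.shape[0]):
        off = P[x].sum() - P[x, x]
        if off > 0:
            scale = (1.0 - P[x, x]) / off
            for y in range(P.shape[0]):
                if y != x:
                    P[x, y] *= scale
    return P
```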
The parameters \texttt{stoch\_corr} and \texttt{norm\_tolerance} play the same role as in the routine \textcolor{blue}{\texttt{norm\_curv\_flow}}. \subsection{Curvature computation routines} The functions in this section calculate Bakry-\'Emery curvatures and curvature upper bounds of graphs with given weighting schemes at all vertices. \begin{itemize} \item \textcolor{blue}{\texttt{curvatures(A, P, N=inf, onesps=[], q=None)},} \item \textcolor{blue}{\texttt{calc\_curvatures(A, P\_list, N=inf, k=1)},} \item \textcolor{blue}{\texttt{K\_inf(A, P)},} \item \textcolor{blue}{\texttt{calc\_curv\_upper\_bound(A, P\_list, N=inf, k=1)}.} \end{itemize} The routine \textcolor{blue}{\texttt{curvatures(A,P)}} computes, for a weighted graph $(G,P)$ with $G=(V,E)$ represented by \texttt{(A,P)}, the curvatures of all vertices for dimension \texttt{N} $=\infty$ and returns them as a list of length $|V|$. If users are interested in curvatures for other dimensions, they need to change the parameter \texttt{N=inf}. There are two other inputs which can speed up the curvature calculations: the number of vertices of $V$ can be specified via the input \texttt{q} to avoid its repeated recalculation, for example during a curvature flow process. Similarly, if \textcolor{blue}{\texttt{onespheres(A)}} has already been calculated earlier, this information can be communicated to the routine via the input variable \texttt{onesps}. After a curvature flow computation with corresponding list \texttt{P\_list} of consecutive weighting schemes, the routine \textcolor{blue}{\texttt{calc\_curvatures(A,P\_list)}} computes the corresponding evolution of vertex curvatures by calling \textcolor{blue}{\texttt{curvatures(A,P\_list[j])}} and returns it as a list of lists. Here the $j$-th inner list contains the curvature evolution of the $j$-th vertex of the graph $G=(V,E)$ represented by \texttt{A}. The dimension parameter \texttt{N=inf} plays the same role as before.
\textcolor{blue}{\texttt{calc\_curvatures}} computes curvatures only of each \texttt{k}-th weighting scheme provided by \texttt{P\_list}. Where appropriate, this can help to reduce computation time. The routine \textcolor{blue}{\texttt{K\_inf(A,P)}} was already discussed in the previous subsection and provides a list of upper curvature bounds for all vertices of the weighted graph $(G,P)$ represented by \texttt{(A,P)}. \textcolor{blue}{\texttt{calc\_curv\_upper\_bound}} is completely analogous to \textcolor{blue}{\texttt{calc\_curvatures}}, but it calls \textcolor{blue}{\texttt{K\_inf}} instead of \textcolor{blue}{\texttt{curvatures}}. \subsection{Display routines} The main display routines for users are the evolution of curvatures at various vertices during the curvature flow, the evolution of transition rates of edges emanating from vertices and the display of individual weighted graphs with vertices arranged in a circle. The relevant routines are the following: \begin{itemize} \item \textcolor{blue}{\texttt{display\_curvatures(curv, dt=0.3, is\_Markovian=True, N=inf, k=1,\\ curv\_bound=[], vertex\_list=[])},} \item \textcolor{blue}{\texttt{display\_trans\_rates(A, P\_list, dt=0.3, vertex\_list=[])},} \item \textcolor{blue}{\texttt{display\_weighted\_graph(A, P, title=None, threshold=10**(-3), \\display\_options=[10, True, 2, []], laziness=False)}.} \end{itemize} Given the evolution of vertex curvatures during a curvature flow process via a list of lists, where the $j$-th inner list is the curvature evolution of the $j$-th vertex, \textcolor{blue}{\texttt{display\_curvatures(curv)}} displays the curvature evolution for each consecutive vertex separately, as illustrated, for example, in Figure \ref{fig:random-graph-curvatures}. If this information should only be given for specific vertices, this can be specified by the input parameter \texttt{vertex\_list}.
The time step \texttt{dt=0.3} and the value of \texttt{k} together determine the labelling of the horizontal time axis. For the role of \texttt{k} we refer readers to our explanation about the routine \textcolor{blue}{\texttt{calc\_curvatures}}. Upper curvature bounds can be inserted into the displays by the input parameter \texttt{curv\_bound}, which needs to be given in the same format as the vertex curvatures. Constant lower and upper curvature bounds $-1$ and $2$ are plotted alongside if the Boolean \texttt{is\_Markovian} is chosen to be \texttt{True} and if the dimension parameter \texttt{N} is $\ge 2$. These bounds appear, for example, in the illustrations given in Figure \ref{fig:random-graph-curvatures}. Given the evolution of weighting schemes during a curvature flow process on a graph with adjacency matrix \texttt{A} by a list \texttt{P\_list} of NumPy arrays, \textcolor{blue}{\texttt{display\_trans\_rates(A,P\_list)}} displays the evolution of transition rates of emanating edges for each consecutive vertex separately, as illustrated, for example, in Figure \ref{fig:random-graph-trans-rates}. The input parameters \texttt{dt} and \texttt{vertex\_list} play the same role as in the previous routine. Finally, there is the routine \textcolor{blue}{\texttt{display\_weighted\_graph(A,P)}} with input parameters \texttt{A} and \texttt{P} representing a weighted graph $(G,P)$. This routine produces a Matplotlib plot of this weighted graph, with the vertices arranged counter-clockwise in a circle. The plot uses the following convention to illustrate different types of edges: \begin{itemize} \item \textcolor{OliveGreen}{green, solid lines} represent numerically non-degenerate edges, that is $\{x, y\} \in E$ with both $p_{xy}, p_{yx} \ge$ \texttt{threshold}, \item \textcolor{red}{red, dashed lines} with an arrow represent numerically degenerate edges, that is $\{x,y\} \in E$ with exactly one of $p_{xy}$ and $p_{yx}$ strictly less than \texttt{threshold},
\item black, dotted lines represent edges with numerically vanishing transition rates in both directions, that is $\{x,y\} \in E$ with both $p_{xy}, p_{yx}$ strictly less than \texttt{threshold}. \end{itemize} Users can add a title to the display by specifying the input parameter \texttt{title}. For Markovian weighted graphs with non-vanishing laziness, the option \texttt{laziness=True} labels each vertex with its corresponding laziness. It remains to discuss the input parameter \texttt{display\_options} of the \textcolor{blue}{\texttt{display\_weighted\_graph}} routine. This parameter is a list of four entries. The first entry determines the size of the plot. Usually, the transition rates are printed above the edges, but if the second entry is chosen to be \texttt{False}, this information about the transition rates is omitted. Otherwise, the transition rates are given to a number of decimal places determined by the third entry. The default positions of these transition rates are $\frac{1}{6}$ of the way along the edges, with the number closest to the vertex $x$ in the edge $(x, y)$ being $p_{xy}$, but this can be altered manually to avoid overlapping by specification in the fourth entry of \texttt{display\_options}. For example, if one wishes the $p_{45}$ label to be moved to a position $\frac{1}{4}$ of the way from vertex $4$ to vertex $5$ and the $p_{62}$ label to be moved to a position $\frac{1}{5}$ of the way from vertex $6$ to vertex $2$, this fourth entry should be chosen to be \texttt{[[4, 5, 1/4, 1/6], [6, 2, 1/5, 1/6]]}. This completes the description of the functions and routines in the accompanying Python program to this article. \medskip {\bf{Acknowledgement:}} Shiping Liu is supported by the National Key R\&D Program of China 2020YFA0713100 and the National Natural Science Foundation of China (No. 12031017).
We would like to thank the London Mathematical Society for their support of Ben Snodgrass via the Undergraduate Research Bursary URB-2021-02, during which the curvature flow was implemented and which led to many of the research results presented in this paper. David Cushing is supported by the Leverhulme Trust Research Project Grant number RPG-2021-080. {\footnotesize \bibliographystyle{amsalpha}
\bigskip

\begin{center} {\bf Linear spaces with a line-transitive point-imprimitive automorphism group and Fang-Li parameter $\gcd(k,r)$ at most eight} \\ \texttt{https://arxiv.org/abs/math/0701629} \end{center}

{\bf Abstract.} In 1991, Weidong Fang and Huiling Li proved that there are only finitely many non-trivial linear spaces that admit a line-transitive, point-imprimitive group action, for a given value of $\gcd(k,r)$, where $k$ is the line size and $r$ is the number of lines on a point. The aim of this paper is to make that result effective. We obtain a classification of all linear spaces with this property having $\gcd(k,r)$ at most $8$. To achieve this we collect together existing theory, and prove additional theoretical restrictions of both a combinatorial and group theoretic nature. These are organised into a series of algorithms that, for $\gcd(k,r)$ up to a given maximum value, return a list of candidate parameter values and candidate groups. We examine in detail each of the possibilities returned by these algorithms for $\gcd(k,r)$ at most $8$, and complete the classification in this case.
\section{Introduction} A {\em finite linear space} ${\cal S}=({\cal P},{\cal L})$ consists of a finite set ${\cal P}$ of points and a non-empty set ${\cal L}$ of distinguished subsets of ${\cal P}$ called lines such that any two points lie in exactly one line and each line contains at least two points. A linear space is said to be \emph{trivial} if it has only one line, or if all its lines have only two points; otherwise it is called non-trivial. The {\em automorphism group} ${\rm Aut}({\cal S})$ of ${\cal S}$ consists of all permutations of ${\cal P}$ that leave ${\cal L}$ invariant. We are interested in {\em line-transitive} linear spaces ${\cal S},$ that is, those for which ${\rm Aut}({\cal S})$ acts transitively on ${\cal L}$. In this case, the size of lines is constant, say $k$, and to avoid trivial cases we assume that $2<k<v$. Also, by a result of Richard Block~\cite{Block67},\cite{Block68}, if a subgroup $G \le {\rm Aut}({\cal S})$ is transitive on the lines of ${\cal S},$ then it is also transitive on points. It is possible for a line-transitive group $G$ to leave invariant a non-trivial partition of the point set (see Subsection~\ref{sub:imprim}), and in this case we say that $G$ is \emph{point-imprimitive} on ${\cal S}$. Let $v,$ $b$ and $r$ be the number of points, the number of lines, and the number of lines through a point, respectively. In 1991, Weidong Fang and Huiling Li~\cite{FangLi91,FangLi93} proved that for a given value of $k^{(r)}:=\gcd(k,r)$, there are only finitely many line-transitive, point-imprimitive linear spaces. Moreover, in~\cite{LiLiu01}, Huiling Li and Weijun Liu proved that, if $G$ is line-primitive and $k^{(r)}\leq 10$, then $G$ is point-primitive. The aim of this paper is to demonstrate that the `Fang-Li bound' can be made effective without the additional assumption of line-primitivity. 
We present a collection of tests, based on the theory of point-imprimitive, line-transitive linear spaces ${\cal S}$, to restrict both the parameters of such spaces, and the structure of a line-transitive, point-imprimitive subgroup $G$. These tests were organised into a series of algorithms, which we describe in the paper. The algorithms were then applied to produce a list of all candidate parameters and groups for pairs $({\cal S}, G)$, in the case where $G$ is line-transitive and point-imprimitive on ${\cal S}$, and the Fang-Li parameter $k^{(r)}$ is at most eight. We then dealt with each of these possibilities and classified all line-transitive, point-imprimitive linear spaces with $k^{(r)}\leq8$. \begin{theorem}\label{main} Suppose that ${\cal S}$ is a linear space with $v$ points, line size $k$, and $r$ lines on each point, that admits a line-transitive, point-imprimitive subgroup of automorphisms, and is such that the Fang-Li parameter $k^{(r)}\leq8$. Then ${\cal S}$ is the Desarguesian projective plane ${\rm PG}(2,4)$ or ${\rm PG}(2,7)$, or the Mills design or Colbourn-McCalla design, both with $(v,k)=(91,6)$, or one of the $467$ designs constructed in {\rm\cite{NNOPP}}, all with $(v,k)=(729,8)$. \end{theorem} This result may also be viewed as a strengthening of a classification of Camina and Mischke~\cite{CaminaMischke96}: we replace their condition $k\leq 8$ with the much weaker condition gcd$(k,r)\leq 8$ and prove that no additional examples arise. \section{Basic facts about line-transitive, point-imprimitive linear spaces} \label{sect:basic} The following equalities and inequalities are considered basic, for a linear space with parameters $v,r,k$ and $b\geq2$, as defined above. 
\begin{align} vr &= kb, \label{eqn:vrkb} \\ v-1 &= r(k-1), \label{eqn:vrk} \\ b &= \binom{v}{2} / \binom{k}{2} \label{eqn:b} \\ \intertext{and} v &\le b \text{\;\; (Fisher's inequality), \;\;} \label{eqn:fisher} \end{align} with equality if and only if ${\cal S}$ is a {\em projective plane,} that is, a space where $v =n^2+n+1=b$ and $k=r=n+1$ for some integer $n.$ \subsection{Point partitions and the Delandtsheer--Doyen parameters} \label{sub:imprim} A partition ${\mathfrak C}$ of a finite set $X$ is a set of non-empty pairwise disjoint subsets whose union equals the set $X.$ We write ${\mathfrak C} = \{ C_1,\ldots, C_d \}$ and call the $C_i$ the {\em classes } of ${\mathfrak C}.$ A group $G$ acting on a set $X$ is said to {\em leave the partition ${\mathfrak C}$ invariant} if, for all $g \in G$ and all $C \in {\mathfrak C},$ the image $C^g$ also is a class of ${\mathfrak C}.$ A transitive group $G$ of permutations of $X$ is said to act {\em imprimitively} on $X$ if there exists a $G$-invariant partition of $X$ which is not trivial, that is to say, the partition ${\mathfrak C}$ neither consists of only one class nor does it contain only one-element classes. Otherwise, the group is said to be {\em primitive} on $X.$ \bigskip Linear spaces admitting a line-transitive, point-imprimitive automorphism group deserve special attention due to the following result, which shows that the number of points is bounded above by a function of the line size $k.$ \smallskip \begin{theorem}[Delandtsheer and Doyen~\cite{DelandtsheerDoyen89}]\label{thm:DD} Let ${\cal S} = ( {\cal P}, {\cal L} )$ be a linear space admitting a line-transitive point-imprimitive automorphism group $G$. Let ${\mathfrak C} = \{C_1, \ldots , C_d\}$ be a $G$-invariant partition of ${\cal P}$ with $d$ classes of size $c$. Let $x$ be the number of inner pairs of a line $\lambda,$ that is, pairs of points $\{\alpha,\beta\} \subseteq \lambda$ such that $\alpha$ and $\beta$ lie in the same class of ${\mathfrak C}$. 
Then there exists another positive integer $y$ such that \begin{align}\label{eqn:DD} c =\frac{\binom{k}{2}-x}{y} \quad \text{and} \quad d=\frac{\binom{k}{2}-y}{x}. \end{align} \end{theorem} We call the pair $(x,y)$ the {\em Delandtsheer-Doyen parameters} corresponding to~${\mathfrak C}.$ Note that the above equalities are equivalent to \begin{align}\label{eqn:DDreversed} x = \frac{\binom{k}{2}(c-1)}{cd-1} \quad \text{and} \quad y = \frac{\binom{k}{2}(d-1)}{cd-1}. \end{align} Hence the parameters $(c,d,k)$ and $(x,y,k)$ mutually determine each other. Moreover, if $(c,d,k)$ corresponds to $(x,y,k)$, then the triple $(d,c,k)$ corresponds to $(y,x,k)$. We emphasise that this second correspondence is purely arithmetic: if there is a line-transitive point-imprimitive linear space with Delandtsheer--Doyen parameters $(x,y)$ corresponding to some point partition, there may or may not exist an example for which $(y,x)$ are the Delandtsheer--Doyen parameters. \subsection{The Fang-Li parameters of linear spaces}\label{sub:fangli} In 1991, Fang and Li introduced some more parameters of linear spaces. Let ${\cal S} = ({\cal P},{\cal L})$ be a linear space admitting a line-transitive automorphism group $G.$ Note that (\ref{eqn:vrk}) implies that $v$ and $r$ are coprime. Put \begin{align*} k^{(v)} := \gcd(k,v), \quad k^{(r)} := \gcd(k,r), \quad b^{(v)} := \gcd(b,v), \quad b^{(r)} := \gcd(b,r). \end{align*} Then the parameters $v, k, b, r$ factorise naturally, as can be seen from equations (\ref{eqn:vrkb}) -- (\ref{eqn:b}). 
\begin{lemma}\label{prop:fl1} \begin{enumerate} \item $k = k^{(v)} \cdot k^{(r)}$ and $b=b^{(v)} \cdot b^{(r)}$ and \item $v = b^{(v)} \cdot k^{(v)}$ and $r = b^{(r)} \cdot k^{(r)}.$ \end{enumerate} \end{lemma} Fisher's inequality in terms of the Fang-Li parameters becomes: \begin{align}\label{fisherfangli} k^{(v)} \le b^{(r)}, \end{align} with equality if and only if the linear space is a projective plane, and in this case we have in fact that $k^{(v)}=b^{(r)}=1$. \bigskip If $G$ preserves a non-trivial partition ${\mathfrak C}$ of the point set ${\cal P}$ with $d$ classes of size $c,$ much more can be said: \smallskip \begin{proposition}[Fang and Li~\cite{FangLi91}]\label{prop:fl2} There exist positive integers $\gamma$ and $\delta$ such that \begin{enumerate} \item \label{flcd} $c=\gamma b^{(r)} + 1$ and $d = \delta b^{(r)} + 1;$ \item $x = \gamma k^{(v)} / 2$ and $y = \delta k^{(v)} / 2$; In particular, $(\gamma, c-1, x)=\gamma(1,b^{(r)},\frac{k^{(v)}}{2})$ and $(\delta, d-1, y)=\delta(1,b^{(r)},\frac{k^{(v)}}{2})$; \item $\gamma + \delta + \gamma\delta b^{(r)} = k^{(r)} (k-1)$ and $\gamma \delta < {k^{(r)}}^2$; \item $k^{(v)}$ divides $(\gamma + k^{(r)})(\delta + k^{(r)})$. \end{enumerate} \end{proposition} \bigskip The parameters $k^{(v)}, k^{(r)}, b^{(v)}, b^{(r)}, \gamma, \delta$ are called the {\em Fang-Li parameters} of the linear space corresponding to the partition ${\mathfrak C}$. Note that parts (iii) and (iv) of Proposition~\ref{prop:fl2} imply that $k$ is bounded above by a function of $k^{(r)}$, and then Theorem~\ref{thm:DD} implies that $v$ also is bounded above by a function of $k^{(r)}$. 
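These arithmetic relations are easy to check numerically. Below is a small illustration with the parameters of a projective plane of order $4$ (so $v=b=21$ and $k=r=5$), together with a purely arithmetic Delandtsheer--Doyen computation for a hypothetical partition into $d=7$ classes of size $c=3$; whether a line-transitive point-imprimitive example with given parameters actually exists is exactly the kind of question addressed by the algorithms in this paper.

```python
from math import comb, gcd

# linear-space parameters of a projective plane of order n = 4
v = b = 21
k = r = 5
assert v * r == k * b and v - 1 == r * (k - 1)   # basic counting identities
assert b == comb(v, 2) // comb(k, 2)

# Fang-Li factorisations (the Lemma uses that gcd(v, r) = 1)
kv, kr, bv, br = gcd(k, v), gcd(k, r), gcd(b, v), gcd(b, r)
assert k == kv * kr and b == bv * br
assert v == bv * kv and r == br * kr
assert kv == br == 1   # Fisher equality in Fang-Li form: a projective plane

# Delandtsheer-Doyen parameters for a hypothetical partition c = 3, d = 7
c, d = 3, 7
x = comb(k, 2) * (c - 1) // (c * d - 1)
y = comb(k, 2) * (d - 1) // (c * d - 1)
assert c == (comb(k, 2) - x) // y and d == (comb(k, 2) - y) // x
```

Here one finds $(x,y)=(1,3)$, and the reversed formulas recover $(c,d)=(3,7)$ as claimed.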
\subsection{Partition refinements}\label{sub:partitions} Recall the ordering of (unordered) partitions of sets: A partition ${\mathfrak C}$ of a set $X$ is {\em contained} in a partition ${\mathfrak C}'$ if every class of ${\mathfrak C}$ is contained in a class of ${\mathfrak C}'.$ In this case, we say that ${\mathfrak C}$ {\em refines} ${\mathfrak C}'$ and that ${\mathfrak C}'$ is {\em coarser} than ${\mathfrak C}.$ The set of all partitions of $X$ equipped with this ordering forms a lattice, called the {\em partition lattice} of $X$. In this paper, partitions left invariant by an imprimitive group $G$ acting on $X$ are of special importance. Note that the join and meet of partitions invariant under a group $G$ are again $G$-invariant. Hence the set of $G$-invariant partitions forms a sublattice of the partition lattice of $X$. We use the following notation: for a group $G$ acting on a set $X$ and a subset $C\subseteq X$, we define the setwise and pointwise stabilisers of $C$ in $G$ by $G_C$ and $G_{(C)}$ respectively. Note that $G_C$ acts on $C$ with kernel $G_{(C)}$, and we use $G^C$ to denote the permutation group on $C$ induced by $G_C$. Clearly, $G^C\cong G_C/G_{(C)}$. A transitive group $G$ of permutations of a set $X$ leaving invariant a non-trivial partition ${\mathfrak C}$ of $X$ gives rise to two further permutation groups. For a class $C \in {\mathfrak C},$ we have the group $G^C$ induced on $C$ (this group is sometimes called the {\em bottom group} and is independent of the choice of $C$ up to permutational isomorphism). The permutation group which $G$ induces on the set of classes is denoted by $G^{\mathfrak C}$ (this group is sometimes called the {\em top group}). By a standard result about permutation groups, $G$ can be embedded in the wreath product $G^C\wr G^{\mathfrak C}$, and this wreath product is the largest subgroup of permutations on $X$ leaving ${\mathfrak C}$ invariant, and inducing the same top group and bottom group as $G$. 
We will therefore assume that $G\leq G^C\wr G^{\mathfrak C}$. In any family of partitions of $X$ that contains the two trivial partitions, we call a partition {\em minimal} if the only strict refinement in the family is the discrete partition with all classes of size 1; and a partition ${\mathfrak C}$ in the family is {\em maximal} if the only partition in the family that is strictly coarser than ${\mathfrak C}$ is the all-in-one partition (with a single class). For a minimal $G$-invariant partition, the group $G^C$ induced on a class $C$ is primitive. For a maximal $G$-invariant partition ${\mathfrak C}$, the group $G^{\mathfrak C}$ induced on the set of classes is primitive. If ${\mathfrak C}$ is both minimal and maximal as a $G$-invariant partition of $X$, then we say that the action of $G$ on $X$ is 2-\emph{step imprimitive} relative to ${\mathfrak C}$. Minimality and maximality of a point-partition ${\mathfrak C}$ invariant under the action of a line-transitive automorphism group of a linear space can sometimes be inferred from the parameters, as demonstrated by the next two results from \cite{PraegerTuan02}. \begin{theorem}[Praeger and Tuan~\cite{PraegerTuan02}, Theorem 1.2]\label{thm:refinement} Suppose that $G$ is a line-transit\-ive point-im\-pri\-mi\-tive automorphism group of a linear space.
Let ${\mathfrak C}$ be a non-trivial $G$-invariant partition with $d$ classes of size $c$ and let $x$ and $y$ be the Delandtsheer--Doyen parameters with respect to ${\mathfrak C}.$ Suppose there is another $G$-invariant partition ${\mathfrak C}'$ refining ${\mathfrak C}$ and let this partition have $d'$ classes of size $c'$ with Delandtsheer--Doyen parameters $x'$ and $y'.$ Then \begin{enumerate} \item $c'$ divides $x - x'$ and $d$ divides $y' - y$; \item $x \ge 3x' + 1$ and $y' \ge 3y + 1$; \item $2r (xy - x'y') = (x'-x + y' - y) k;$ \item $\displaystyle \frac{2r}{k} = \frac{c-c'}{x-x'}.$ \end{enumerate} \end{theorem} \begin{proof} Parts (i)--(iii) are proved in~\cite[Theorem 1.2]{PraegerTuan02}. (iv) Double counting the pairs of points lying in the same class of ${\mathfrak C}$ but in different classes of ${\mathfrak C}'$ (the ${\mathfrak C}$-inner, ${\mathfrak C}'$-outer pairs) gives $b(x-x')=d'\frac{c'(c-c')}{2}=v\frac{c-c'}{2}$. It follows that $\frac{2r}{k} =\frac{2b}{v}= \frac{c-c'}{x-x'}$. \end{proof} \bigskip \begin{corollary}[Praeger and Tuan~\cite{PraegerTuan02}]\label{cor:maxmin} If $G$ is a line-transitive point-imprimitive automorphism group of a linear space and ${\mathfrak C}$ is a $G$-invariant partition of the point set with Delandtsheer--Doyen parameters $x$ and $y,$ then \begin{enumerate} \item if $\binom{k}{2} > (x-2) xy + x + y$ then $G^C$ is primitive; \item if $\binom{k}{2} > (y-2) xy + x + y$ then $G^{\mathfrak C}$ is primitive. \end{enumerate} In particular, if $x \le 4$ then $G^C$ is primitive, and if $y \le 4$ then $G^{\mathfrak C}$ is primitive. \end{corollary} \subsection{Normal partitions and quasiprimitive groups}\label{sub:normal} The kernel of $G$ on ${\mathfrak C}$ is the subgroup $G_{({\mathfrak C})}$ of elements $g \in G$ with $C^g = C$ for each $C\in{\mathfrak C}$. Thus $G^{\mathfrak C} \simeq G/G_{({\mathfrak C})}$. 
We say that ${\mathfrak C}$ is {\em $G$-normal} if $G_{({\mathfrak C})}$ is transitive on each of the classes of ${\mathfrak C}.$ Note that the set of orbits of a normal subgroup $N$ of $G$ always forms an invariant partition with $N \le G_{({\mathfrak C})}$ transitive on each of the classes. However, in general, not every $G$-invariant partition arises as the set of orbits of a normal subgroup. If no non-trivial $G$-normal partition exists, then every non-trivial normal subgroup of $G$ acts transitively. In this case we say that $G$ is {\em quasiprimitive} on the set $X.$ Then, for every non-trivial $G$-invariant partition ${\mathfrak C}$, we have $G^{\mathfrak C} \simeq G$, since the kernel $G_{({\mathfrak C})}$ of the action on the set of classes is trivial. Summarising the above remarks:\ \emph{for an imprimitive permutation group $G$ on $X$, either $G$ is quasiprimitive on $X$, or there is a non-trivial $G$-normal partition of $X$.} We now turn to the context in which $X$ is the point set ${\cal P}$ of a linear space and $G$ is a line-transitive, point-imprimitive group of automorphisms. In a study in \cite{CaminaPraeger01} of the case where $G$ was assumed to be point-quasiprimitive (that is, quasiprimitive on ${\cal P}$), it was proved that $G$ must be {\em almost simple}, that is, there is a non-abelian simple group $T$ such that $T \le G \le {\rm Aut}(T).$ However, no examples of this kind are known. In fact, in all the known examples, there is a non-trivial $G$-normal point-partition (see \cite[Section 5]{CaminaPraeger01}). \begin{theorem}[Camina and Praeger~\cite{CaminaPraeger01}]\label{thm:CP01} Let $G$ be a line-transitive point-imprimitive group of automorphisms of a linear space. Then either \begin{enumerate} \item there exists a non-trivial $G$-normal point-partition; \ \ or \item $G$ is point-quasiprimitive and almost simple. 
\end{enumerate} \end{theorem} In an earlier study of the $G$-normal case, Camina and Praeger showed that an intransitive normal subgroup $N$ must act faithfully on each of its orbits $C$, that is to say, $N_{(C)}=1$ so that $N\simeq N^C$. \begin{theorem}[Camina and Praeger~\cite{CaminaPraeger93}]\label{thm:CP93} If $G$ is a line-transitive group of automorph\-isms of a linear space ${\cal S} = ({\cal P}, {\cal L})$, and $N$ is a normal subgroup that is intransitive on ${\cal P}$, then, for each $N$-orbit $C$ in ${\cal P}$, \begin{enumerate} \item $N$ acts faithfully on $C$; \ \ and in particular, \item if $N$ is abelian then $|N|=|C|$ is odd. \end{enumerate} \end{theorem} \section{Outline of the search strategy}\label{sect:outline} In this section we describe briefly our approach to the search for line-transitive, point-imprimitive linear spaces for which the Fang-Li parameter $k^{(r)}$ is at most some maximum value $k^{(r)}_{\max}$. In particular, we will apply this approach in the case where $k^{(r)}_{\max}=8$. Throughout the rest of the paper we assume the following. \medskip\noindent {\sc Hypothesis.}\label{hyp}\quad Let ${\cal S}=({\cal P},{\cal L})$ be a non-trivial linear space, admitting a line-transitive point-imprimit\-ive automorphism group $G\leq{\rm Aut} ({\cal S}).$ Let $v, k, b, r$ be as in (\ref{eqn:vrkb})--(\ref{eqn:fisher}). Let ${\mathfrak C}$ be a non-trivial $G$-invariant partition of ${\cal P}$ with $d$ classes of size $c$, and let $x$ and $y$ be the Delandtsheer--Doyen parameters, and $\gamma, \delta$ be the Fang-Li parameters, corresponding to ${\mathfrak C}$. \smallskip First we make explicit our aims and the nature of the output, and then we describe how the search proceeds in five broad steps. 
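The counting constraints in the {\sc Hypothesis} can be checked mechanically. The following Python fragment (an illustrative sketch of ours, independent of the algorithms developed below) computes $r$ and $b$ from $v$ and $k$ via $v-1=r(k-1)$ and $vr=bk$, checks the Fisher-type inequality $b\geq v$, and recovers the Delandtsheer--Doyen parameters from the relations $c=2xr/k+1$ and $d=2yr/k+1$ recorded in Theorem~\ref{thm:CNP} below:

```python
# Illustrative sketch: feasibility of the basic parameters of a
# line-transitive, point-imprimitive linear space.
from fractions import Fraction

def basic_parameters(v, k):
    """r and b from v-1 = r(k-1) and vr = bk, or None if these are
    not integers or if the Fisher-type inequality b >= v fails."""
    r = Fraction(v - 1, k - 1)
    b = v * r / k
    if r.denominator != 1 or b.denominator != 1 or b < v:
        return None
    return int(r), int(b)

def delandtsheer_doyen(v, k, c, d):
    """x and y for a partition into d classes of size c, from
    c = 2xr/k + 1 and d = 2yr/k + 1; None if not integral."""
    assert v == c * d and 1 < c < v and 1 < d < v
    rb = basic_parameters(v, k)
    if rb is None:
        return None
    r, _ = rb
    x = Fraction((c - 1) * k, 2 * r)
    y = Fraction((d - 1) * k, 2 * r)
    if x.denominator != 1 or y.denominator != 1:
        return None
    return int(x), int(y)

# The parameters of the well-known examples with 91 points and line size 6
# (so r = 18, b = 273), with 13 classes of size 7, give (x, y) = (1, 2):
assert basic_parameters(91, 6) == (18, 273)
assert delandtsheer_doyen(91, 6, 7, 13) == (1, 2)
```

As a consistency check against Theorem~\ref{thm:CNP}(i): for these values $r/k=3$ and $\big(\binom{6}{2}-x-y\big)/(2xy)=(15-3)/4=3$ as well.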
\subsection{Aim and nature of the output} \bigskip \noindent {\sc Aim:}\quad Find each linear space ${\cal S}=({\cal P},{\cal L})$ such that the Fang-Li parameter $k^{(r)}\leq k^{(r)}_{\max}$, and there exists a group $G\leq {\rm Aut}({\cal S})$ that is line-transitive and point-imprimitive. \bigskip By Theorem~\ref{thm:CP01}, the search must take into account the following two cases. \smallskip\noindent {\sc Case 1.}\quad $G$ is point-quasiprimitive and almost simple; and \noindent {\sc Case 2.}\quad $G$ leaves invariant a non-trivial $G$-normal point-partition. \medskip If $G$ is point-quasiprimitive then $G$ is faithful on all non-trivial $G$-invariant partitions, so we search only for the maximal ones. Thus the output from searching these cases will be as follows. \smallskip\noindent {\sc Output from Case 1.}\quad all $({\cal S}, G, {\mathfrak C})$, where $G$ is point-quasiprimitive and almost simple, and ${\mathfrak C}$ is a maximal $G$-invariant point-partition. \noindent {\sc Output from Case 2.}\quad all $({\cal S}, G, {\mathfrak C})$, where ${\mathfrak C}$ is a non-trivial $G$-normal partition. Note that the output from these two cases will yield all pairs $({\cal S}, G)$ where $G$ is line-transitive and point-imprimitive on a linear space ${\cal S}$. The search will not necessarily identify every non-trivial $G$-invariant partition for a given $({\cal S}, G)$. In particular, if $G$ is not point-quasiprimitive but acts faithfully on a non-trivial point-partition ${\mathfrak C}$ then this partition will not be identified. See also Remark~\ref{rem:1}. In the first part of the search we treat these two cases together, since the tests we apply are valid whether or not $G$ is point-quasiprimitive. 
\subsection{Descriptions of {\sc Steps} in Search} Suppose that ${\cal S}=({\cal P},{\cal L})$ is a non-trivial linear space, $G$ is a line-transitive group of automorphisms, and ${\mathfrak C}$ is a non-trivial $G$-invariant partition of ${\cal P}$ with $d$ classes of size $c$. In Section~\ref{sect:params} we define some additional parameters that give extra information and restrictions, and extract a series of tests that form Algorithm~\ref{alg1}. We also present Algorithm~\ref{alg2} that tests for certain sufficient conditions under which the partition ${\mathfrak C}$ is guaranteed to be minimal or maximal. \smallskip\noindent {\sc Step 1. (Fang-Li Parameter Sift)}\quad \\ Apply Algorithm~\ref{alg1} and output {\sc ParameterList}$(k^{(r)}_{\max})$, a list of parameter values that pass all these tests with $k^{(r)}\leq k^{(r)}_{\max}$. Each {\sc Line} of {\sc ParameterList}$(k^{(r)}_{\max})$ gives values of the following parameters: \begin{enumerate} \item $d$ and $c,$ and hence $v = d c;$ \item the Fang-Li parameters $k^{(v)}$, $k^{(r)}$, $b^{(v)},$ $b^{(r)}$, as in Subsection~\ref{sub:fangli}, and hence $k,$ $r,$ $b$; \item the Delandtsheer--Doyen parameters $x$ and $y$ as in (\ref{eqn:DDreversed}); \item the Fang-Li parameters $\gamma$ and $\delta$, as in Proposition~\ref{prop:fl2}; \item the intersection type $(0^{d_0},1^{d_1},\ldots, k^{d_k})$ and hence also ${\rm spec}\,{\cal S}$, as defined in Subsection~\ref{sub:inter}; \item an upper bound $t_{\max}$ for the transitivity of the `top group' $G^{\mathfrak C}$, as defined in Definition~\ref{def:tmax} (that is to say, if $G^{\mathfrak C}$ is $t$-transitive then $t\leq t_{\max}$). \end{enumerate} \smallskip\noindent {\sc Step 2. 
(Two-Step Imprimitivity Test)}\quad To each {\sc Line} of {\sc ParameterList}$(k^{(r)}_{\max})$ apply Algorithm~\ref{alg2}, and partition the list from {\sc Step} 1 into two parts: {\sc ParameterListA}$(k^{(r)}_{\max})$ consists of those {\sc Lines} for which $G$ is guaranteed to be 2-step imprimitive relative to ${\mathfrak C}$, while {\sc ParameterListB}$(k^{(r)}_{\max})$ contains all the remaining {\sc Lines}. \smallskip It turned out that for all the {\sc Lines} in {\sc ParameterList}$(8)$, the partition ${\mathfrak C}$ was both minimal and maximal, that is to say, {\sc ParameterListB}$(8)$ was empty. Thus in each {\sc Line} of {\sc ParameterListA}$(8)$ we have that the top group $H:=G^{\mathfrak C}$ is primitive and also the bottom group $L:=G^C$ (where $C\in{\mathfrak C}$) is primitive, and $G\leq L\wr H$. Next we enhanced the information contained in each {\sc Line} of {\sc ParameterListA}$(k^{(r)}_{\max})$ by applying tests that restricted the possibilities for the groups $H$ and $L$. In some cases these tests eliminated the {\sc Line} as no possibilities for one of $H$, $L$ remained. These tests are described in Section~\ref{sect:groups} and form Algorithms~\ref{alg:topgroup} and \ref{alg:bottomgroup}. These algorithms may also be applied to candidate parameter values in {\sc ParameterListB}$(k^{(r)}_{\max})$ to determine candidate top groups in the case of a maximal $G$-invariant partition, or candidate bottom groups in the case of a minimal $G$-invariant partition. The remaining {\sc Steps} 4 and 5, however, focus on {\sc ParameterListA}$(k^{(r)}_{\max})$. Some of the tests depend on the availability of a complete list of all primitive groups of a given degree, and we had available to us, through the computer system {\sf GAP} \cite{GAP4}, such lists for degrees less than 2,500. 
Thus the next step could be applied completely only in those cases where both the number $d$ of classes and the class size $c$ were less than 2,500. \smallskip\noindent {\sc Step 3. (Top and Bottom Group Sifts and Grid Test)}\quad Apply Algorithm~\ref{alg:topgroup} to give information in a list called {\sc TopGroups} about the primitive `top group' $H=G^{{\mathfrak C}}$. For each {\sc Line} in {\sc ParameterListA}$(k^{(r)}_{\max})$ for which a list {\sc Prim}$(d)$ of all primitive permutation groups of degree $d$ is available, {\sc TopGroups} will contain an explicit list of candidates for $H$. If {\sc Prim}$(d)$ is not available then {\sc TopGroups} will record this fact, and possibly some restrictions on $H$. Apply Algorithm~\ref{alg:bottomgroup} to give information in a list called {\sc BottomGroups} about the primitive `bottom group' $L=G^{C}$. For each {\sc Line} in {\sc ParameterListA}$(k^{(r)}_{\max})$ for which a list {\sc Prim}$(c)$ of all primitive permutation groups of degree $c$ is available, {\sc BottomGroups} will contain an explicit list of candidates for $L$. If {\sc Prim}$(c)$ is not available then {\sc BottomGroups} will record this fact, and possibly some restrictions on $L$. Then, in Algorithm~\ref{alg:step3and4} we do the following. If, for a {\sc Line} of {\sc ParameterListA}$(k^{(r)}_{\max})$, one of {\sc TopGroups} or {\sc BottomGroups} is empty then this {\sc Line} is removed from the list {\sc ParameterListA}$(k^{(r)}_{\max})$. At this point we will have, in {\sc ParameterListA}$(k^{(r)}_{\max})$, {\sc Lines} representing all candidate parameters for any $({\cal S}, G, {\mathfrak C})$ satisfying the {\sc Hypothesis}. We make one further pass through this list before dividing our search into the two {\sc Cases}. 
For each {\sc Line} representing a possible $({\cal S},G, {\mathfrak C})$, we define {\sc PossibleGrid} to be `yes' if either $c=d$, or there is another {\sc Line} in {\sc ParameterListA}$(k^{(r)}_{\max})$ corresponding to the same value of $k$ and to a point-partition with $c$ classes of size $d$. Otherwise the value of {\sc PossibleGrid} is defined as `no'. We add the value of {\sc PossibleGrid} to the {\sc Line} (see Remark~\ref{rem:1}). This completes {\sc Step 3}. \smallskip In the final {\sc Steps} we divide the search into the two {\sc Cases}. For each {\sc Line} of the list {\sc ParameterListA}$(k^{(r)}_{\max})$ for which an explicit list {\sc TopGroups} is available (which means for us, $d<2,500$), Algorithm~\ref{alg:qptopgroup} produces candidates for almost simple point-quasiprimitive groups $G$, and for each {\sc Line} of {\sc ParameterListA}$(k^{(r)}_{\max})$ for which an explicit list {\sc BottomGroups} is available (for us this means $c<2,500$), Algorithm~\ref{alg:gnormalbottomgroup} produces candidates for the primitive bottom group in the case where ${\mathfrak C}$ is a $G$-normal partition. \smallskip\noindent {\sc Step 4. (Quasiprimitive and $G$-Normal Sifts)}\quad Apply Algorithm~\ref{alg:qptopgroup} to each {\sc Line} of {\sc ParameterListA}$(k^{(r)}_{\max})$ to obtain a list {\sc QuasiprimTopGroups}: if {\sc TopGroups} contains an explicit list of groups, then {\sc QuasiprimTopGroups} is a list of candidate primitive top groups in the case where $G$ is quasiprimitive, and otherwise is a list containing restrictions for this case. Apply Algorithm~\ref{alg:gnormalbottomgroup} to each {\sc Line} to obtain a list {\sc GNormalBottomGroups}: if {\sc BottomGroups} contains an explicit list of groups, then {\sc GNormalBottomGroups} is a list of candidate bottom groups in the case where ${\mathfrak C}$ is $G$-normal, and otherwise is a list containing restrictions for this case. \medskip We state our final step only for $k^{(r)}_{\max}=8$. 
\smallskip\noindent {\sc Step 5. (Analysing Remaining Lines)}\quad We make a {\sc Line}-by-{\sc Line} consideration of the output {\sc ParameterListA}$(8)$ of {\sc Step} 4 to determine all the possible linear spaces, and thereby complete the proof of Theorem~\ref{main}. \begin{remark}\label{rem:1}{\rm Finally we make some comments on the need for the parameter {\sc PossibleGrid}, and on its name. Consider a situation in which a group $G$ leaves invariant two non-trivial partitions of ${\cal P}$, namely a partition ${\mathfrak C}$ with $d$ classes of size $c$, and a partition ${\mathfrak C}'$ with $c$ classes of size $d$. Suppose also that the associated groups $G^{\mathfrak C}, G^{{\mathfrak C}'}$, and the groups induced on classes are all primitive. Then each class of ${\mathfrak C}'$ consists of one point from each of the classes of ${\mathfrak C}$, and each class of ${\mathfrak C}$ consists of one point from each of the classes of ${\mathfrak C}'$. Thus the point set ${\cal P}$ can be identified with the Cartesian product ${\mathfrak C}\times{\mathfrak C}'$, and $G$ is a subgroup of the full stabiliser ${\rm Sym}({\mathfrak C})\times {\rm Sym}({\mathfrak C}')$ of this `grid structure'. It is possible that the group $G$ acts faithfully on one of these partitions, say ${\mathfrak C}'$, and is not faithful on the other partition ${\mathfrak C}$. (For a simple example in the group setting, take $|{\mathfrak C}|=2$ and $|{\mathfrak C}'|=5$. Then $S_2 \times S_5$ contains a transitive subgroup isomorphic to $S_5$ with these properties.) In this case, $G$ is not quasiprimitive, and ${\mathfrak C}$ is $G$-normal but ${\mathfrak C}'$ is not $G$-normal. Thus after the next steps the triple $({\cal S}, G, {\mathfrak C})$ will be identified in {\sc Case} 2, but the triple $({\cal S}, G,{\mathfrak C}')$ will not appear in either {\sc Case}. This is in line with our aim to find all possible ${\cal S}$. 
However, when identifying the $G$-normal triple $({\cal S}, G, {\mathfrak C}')$, the information that there was a possible grid structure could be useful at {\sc Step 5}. In other {\sc Lines} it could also be important at {\sc Step 5} to know that it is impossible for such a grid structure to be preserved by the group. }\end{remark} \section{Parameter restrictions and 2-step imprimitivity}\label{sect:params} Let $({\cal S}, G, {\mathfrak C})$ be as in the {\sc Hypothesis}. In this section we present several results that restrict the parameters and provide sufficient conditions for 2-step imprimitivity. These results are used to design the {\sc Fang-Li Parameter Sift} and the {\sc Two-Step Imprimitivity Test} in Subsections~\ref{sec:flsift} and \ref{sub:2step}. \bigskip First we record some equalities and inequalities relating these parameters. The last statement of part (ii) was proved in \cite{DelandtsheerNiemeyeretal01}. \begin{theorem}[Praeger and Tuan~\cite{PraegerTuan02}, Theorem 1.1]\label{thm:CNP} Assume that the {\sc Hypothesis} holds. Then \begin{enumerate} \item $c = \frac{2xr}{k} + 1,$ $d = \frac{2yr}{k} + 1$, and $\frac{b}{v} = \frac{r}{k} = \frac{\binom{k}{2} - x - y}{2xy}.$ \item $\binom{k}{2} \ge 2xy + x + y,$ $c \ge 2x + 1$ and $d \ge 2y + 1,$ and equality holds in one of these relations if and only if equality holds in all three, and this occurs if and only if ${\cal S}$ is a projective plane. In particular, if $c = 3$ or $d=3$ then ${\cal S}$ is a projective plane. \item At least one of $k \ge 2x$ and $k \ge 2y$ holds. 
Moreover \begin{enumerate} \item if $k \ge 2x$ then $k - 2x \ge 2,$ $y \le \binom{k - 2x}{2},$ and $d \ge 2k -2x - 1 > k;$ \item if $k \ge 2y$ then $k - 2y \ge 2,$ $x \le \binom{k - 2y}{2},$ and $c \ge 2k -2y - 1 > k.$ \end{enumerate} In particular $k < \max\{c,d\}.$ \end{enumerate} \end{theorem} \smallskip Under additional conditions, equality holds in the bound for $y$ in (iii)(a), namely, $y = \binom{k-2x}{2}.$ See Theorem~\ref{thm:normal}. \subsection{Intersection numbers and spectrum}\label{sub:inter} Let $\lambda \in {\cal L}.$ Define the intersection numbers \begin{equation}\label{eqn:di} d_i = \Big| \big\{ \; C \in {\mathfrak C}: \; |C \cap \lambda|=i \big\}\Big|, \end{equation} which, by the line-transitivity of $G$, are independent of the choice of the line $\lambda$. The {\em intersection type} is the vector $(0^{d_0},1^{d_1},\ldots, k^{d_k})$ and the {\em spectrum} is the set of non-zero intersection sizes \begin{equation}\label{eqn:spec} {\rm spec}\,{\cal S} := \{ i > 0 \mid d_i \neq 0 \}. \end{equation} We sometimes write ${\rm spec}_{\mathfrak C} \, {\cal S}$ if we need to specify the partition ${\mathfrak C}.$ \smallskip Of particular interest are the smallest and the largest elements of this set: \begin{eqnarray*} i_{\min} &:=& \min({\rm spec} \, {\cal S}), \qquad \text{and} \qquad i_{\max} := \max({\rm spec} \, {\cal S}). \end{eqnarray*} \smallskip For a point $\alpha$, $C(\alpha)$ denotes the class of ${\mathfrak C}$ containing $\alpha$. A point $\alpha$ and a line $\lambda$ are said to be \emph{$i$-incident} if $|\lambda \cap C(\alpha) | = i$. 
A class $C$ and a line $\lambda$ are \emph{$i$-incident} if $|C \cap \lambda | = i.$ Transitivity of $G$ on classes and on points implies that the following numbers are well defined, that is, they do not depend on the choice of the class $C \in {\mathfrak C}$ and the choice of the point $\alpha \in {\cal P}:$ \begin{align*} b_i & = \text{ the number of lines which are $i$-incident with a class $C$}, \\ r_i & = \text{ the number of lines which are $i$-incident with a point $\alpha$}. \end{align*} \smallskip Recall that for two points $\alpha$ and $\beta,$ $\lambda(\alpha,\beta)$ denotes the unique line joining $\alpha$ with $\beta$. \smallskip Next we give a few properties of ${\rm spec}\,{\cal S}$. As was noted in \cite{Delandtsheer89}, part (i) of Theorem~\ref{thm:HM} below can be derived from the proof of \cite[Proposition 3]{HigmanMcLaughlin61}, but for completeness we give a short proof. For any subset $F \subseteq {\cal P}$ containing at least two points, define \begin{equation}\label{eqn:induced} {\cal L}|_F := \{ \lambda \cap F \;:\; \lambda \in {\cal L}, \; |\lambda \cap F| \ge 2\}. \end{equation} Then the induced incidence structure ${\cal S}|_F := (F, {\cal L}|_F)$ is a (possibly trivial) linear space, called the {\em linear space induced on $F.$} Under certain conditions on the parameters, ${\cal S}|_F$ has constant line size. \begin{theorem}\label{thm:HM} Assume that the {\sc Hypothesis} holds. Then \begin{enumerate} \item $|{\rm spec} \, {\cal S}| \ge 2;$ \item $c\not\in{\rm spec}\,{\cal S}$; \item if ${\rm spec} \, {\cal S} = \{ 1,h\}$ with $h \ge 2,$ then the induced linear space ${\cal S}|_C$ has constant line size $h$. (If $h=2$, this structure is essentially the complete graph on $c$ vertices.) \end{enumerate} \end{theorem} \begin{proof} Suppose that ${\rm spec}\,{\cal S} = \{h\}$. Since any two points of $C$ lie on a common line, $h\geq2$. 
For a point $\alpha\in C$, the set $\{\lambda_1,\dots,\lambda_r\}$ of lines containing $\alpha$ gives rise to a partition $\{ (C\cap\lambda_i)\setminus\{\alpha\}\,:\,1\leq i\leq r\}$ of $C\setminus\{\alpha\}$, whence $c =r (h-1) + 1$. Hence $v=r(k-1)+1 = d(r(h-1)+1)$, which implies that $r\big((k-1)-d(h-1)\big) = d-1$. Since $d-1\geq1$, the factor $(k-1)-d(h-1)$ is at least $1$, so $r\leq d-1$; moreover, as $h\geq2$, we have $k-1\geq d(h-1)+1\geq d+1$. Hence $r \leq d-1 < d \leq k-1$, so that $r < k$, contradicting (\ref{eqn:vrkb}) and (\ref{eqn:fisher}). Thus part (i) is proved. Part (ii) is proved in \cite[Corollary 2.2]{DelandtsheerNiemeyeretal01}, and part (iii) is proved in \cite[Proposition 2.3(iii)]{DelandtsheerNiemeyeretal01}. \end{proof} \smallskip The next lemma is an immediate consequence of a result of Camina and Siemons~\cite[Lemma 2]{CaminaSiemons89a}. It gives sufficient conditions for ${\cal S}|_F$ (as defined above by (\ref{eqn:induced})) to be line-transitive. \begin{lemma}[Camina and Siemons~\cite{CaminaSiemons89a}]\label{lem:CaminaSiemons} Assume that ${\cal S}=({\cal P},{\cal L})$ is a non-trivial linear space admitting a line-transitive automorphism group $G$. Let $\lambda \in{\cal L}$ and $H\leq G_{\lambda}$ such that, for $F:={\rm Fix}_{{\cal P}}(H)$, \begin{enumerate} \item $2\leq |F\cap \lambda| <|F|$, and \item if $K\leq G_\lambda$ and $|{\rm Fix}_{{\cal P}}(K)\cap \lambda|\geq 2$, and $H$ and $K$ are conjugate in $G$, then $H$ and $K$ are conjugate in $G_\lambda$. \end{enumerate} Then ${\cal S}|_F$ has constant line size and $N_G(H)$ acts line-transitively on ${\cal S}|_F$. \end{lemma} \begin{corollary} \label{cor:CaminaSiemons} Assume that the {\sc Hypothesis} holds. Let $\lambda\in{\cal L}$ and $p$ be a prime dividing $|G_{\lambda}|$. Let $P$ be a Sylow $p$-subgroup of $G_\lambda$ and $F:={\rm Fix}_{{\cal P}}(P)$. Suppose that $2\leq |F\cap\lambda|<|F|$. 
Then \begin{enumerate} \item $N_G(P)$ is line-transitive on ${\cal S}|_{F}$; \item ${\mathfrak C}|_F:=\{C\cap F:C\in{\mathfrak C},C\cap F\ne\emptyset\}$ is an $N_G(P)$-invariant partition of $F$; \item $|F|=f\cdot|C\cap F|$ where $f=\big|{\mathfrak C}|_F\big|$ and $C\cap F\in{\mathfrak C}|_F$, and $|C\cap F|\geq 3$. \end{enumerate} \end{corollary} \begin{proof} Part (i) follows from Lemma~\ref{lem:CaminaSiemons}, and so by \cite{Block68}, $N_G(P)$ is transitive on $F$. Consider $C\cap F$, for some class $C\in{\mathfrak C}$ such that $C\cap F\ne\emptyset$. Then $(C\cap F)^g=C^g\cap F^g=C^g\cap F$ for any $g\in N_G(P)$. Since $C$ is a block of imprimitivity for $G$, $C^g=C$ or $C^g\cap C=\emptyset$, and so $(C\cap F)^g=C\cap F$ or $(C\cap F)^g\cap(C\cap F)=\emptyset$. Thus ${\mathfrak C}|_F$ forms an $N_G(P)$-invariant partition of $F$, and in particular, $|F|=f\cdot|C\cap F|$ where $f$ is the number of classes in ${\mathfrak C}$ that contain fixed points of $P$, and hence are fixed by $P$. Moreover, $|C\cap F|\ne 2$ by Theorem~\ref{thm:CNP}(ii) so $|C\cap F|\geq 3$. \end{proof} Finally in this subsection we collect some useful arithmetical relationships between these parameters. Some parts were proved in~\cite{DelandtsheerNiemeyeretal01} and \cite{PraegerTuan02}. \smallskip \begin{sloppypar} \begin{proposition}\label{prop:intersection} The following all hold. 
\begin{enumerate} \item $\displaystyle d = \sum_{i=0}^k d_i,$ $\displaystyle b = \sum_{i=0}^k b_i,$ $\displaystyle cr = \sum_{i=1}^k ib_i,$ $\displaystyle r = \sum_{i=1}^k r_i,$ $\displaystyle k = \sum_{i=1}^k i d_i,$ $\displaystyle x = \sum_{i=2}^k \binom{i}{2}d_i,$ \item $\displaystyle c-1 = \sum_{i=1}^k (i-1)r_i,$ $\displaystyle v-c = \sum_{i=1}^k (k-i)r_i,$ $\displaystyle \binom{c}{2} = \sum_{i=1}^k \binom{i}{2} b_i,$ $\displaystyle \binom{v-c}{2} = \sum_{i=0}^k \binom{k-i}{2} b_i,$ $\displaystyle c (v-c) = \sum_{i=1}^{k-1} i (k-i)b_i,$ \item $\displaystyle b d_i = d b_i$, $\displaystyle c r_i=i b_i $, and $\displaystyle v r_i = b i d_i$ for $0 \le i \le k,$ \item $\displaystyle \frac{c d_i}{k^{(v)}} = \frac{b_i}{b^{(r)}}$ and $\displaystyle r_i' := \frac{r_i}{b^{(r)}} = \frac{i d_i}{k^{(v)}}$ are integers for $0 \le i \le k,$ \item $\displaystyle \beta := \frac{d}{\gcd(d,b^{(v)})} \; \big| \; d_i$ for $0 \le i \le k,$ hence for all these $i,$ $\displaystyle d_i' := \frac{d_i}{\beta},$ $\displaystyle \frac{k}{\beta} = \sum_{i=0}^k i \cdot d_i',$ and $\displaystyle \frac{2 x}{\beta} = \sum_{i=2}^k i (i-1) d_i'$ are all integers. \item $\displaystyle \alpha_i := \frac{k^{(v)}}{\gcd(i,k^{(v)})} \;\Big|\; d_i$ for $0\le i\le k,$ and hence $\displaystyle \frac{\alpha_i }{\gcd(\alpha_i, \beta)} \;\Big|\; d_i'.$ \item $i_{\max} < \min \{ \sqrt{c} + \frac{1}{2},\; \sqrt{2x} + 1\}.$ \item Given a point $\alpha,$ the number of points $\beta \neq \alpha$ such that $\lambda(\alpha,\beta)$ is $1$-incident with both $\alpha$ and $\beta$ is $r_1(d_1-1) = \frac{r}{k}d_1(d_1 - 1)$ (we call such configurations 1-incident point-line-point triples). 
\item $d_1 \ge k - 2x$ with equality if and only if ${\rm spec} \, {\cal S} = \{ 1,2\},$ that is, if and only if the intersection type is $(0^{d-k+x},1^{k-2x}, 2^x, 3^0, \ldots, k^0).$ In particular, if $k \ge 2x$ then $d_1 > 0.$ \end{enumerate} \end{proposition} \end{sloppypar} \begin{proof} (i)\quad The equations follow from the definitions of the $d_i$, $b_i$ and $r_i$ (also see~\cite{DelandtsheerNiemeyeretal01}). (ii)\quad The first equality is proved in~\cite[Proposition 2.4(iv)]{DelandtsheerNiemeyeretal01}. The second one follows from this and the fact that $v-1=r(k-1)$. We obtain the third equality by counting, for a fixed class $C$, the number of pairs $(\{\alpha,\beta\},\lambda)$ for which $\alpha,\beta\in C\cap \lambda$. For the fourth equation, we count, for a fixed class $C$, the number of pairs $(\{\alpha,\beta\},\lambda)$ where $\alpha,\beta\in \lambda$ and $\alpha,\beta\in {\cal P}\setminus C$. Similarly, counting, for a fixed class $C$, the number of pairs $(\{\alpha,\beta\},\lambda)$ where $\alpha,\beta\in \lambda$, and $\alpha \in C$, $\beta\in {\cal P}\setminus C$ in two ways yields the last equality. (iii) and (iv) are proved in~\cite[Proposition 2.3]{DelandtsheerNiemeyeretal01}. (v)\quad By (iii), $\displaystyle\frac{db_i}{b}=d_i$. Noting that $d$ (as a divisor of $v$) is coprime to $b^{(r)}$, we must have that $\beta:=\displaystyle\frac{d}{\gcd(d,b^{(v)})}$ divides $d_i$ for $0\leq i\leq k$. Note that this number does not depend on $i$. The equation $k=\sum_{i=1}^{k}id_i$ from (i) reduces to $\frac{k}{\beta}=\sum_{i=1}^{k}id_i'$. Similarly, the equation $2x=\sum_{i=2}^{k}i(i-1)d_i$ of (i) becomes $\frac{2x}{\beta}=\sum_{i=2}^{k}i(i-1)d_i'$. (vi)\quad By (iv), $r_i'=\frac{id_i}{k^{(v)}}$ is an integer, and therefore $\frac{k^{(v)}}{\gcd(k^{(v)},i)}$ is a divisor of $d_i$. (vii) is proved in~\cite[Lemma 3.1]{DelandtsheerNiemeyeretal01}. (viii) is proved in~\cite[Lemma 2.3]{PraegerTuan02}. 
(ix) By part (i), $k-2x=\sum_{i=1}^kid_i-\sum_{i=2}^ki(i-1)d_i =d_1-\sum_{i=2}^ki(i-2)d_i\leq d_1$. Equality holds if and only if $d_i=0$ for all $i\geq3$, that is, if and only if ${\rm spec}\,{\cal S}\subseteq\{1,2\}$. Since $|{\rm spec}\,{\cal S}|\geq2$, by Theorem~\ref{thm:HM}~(i), this condition is equivalent to ${\rm spec}\,{\cal S}=\{1,2\}$, which in turn is equivalent to the intersection type being $(0^{d-k+x},1^{k-2x}, 2^x, 3^0, \ldots, k^0).$ Now suppose that $k\geq 2x$. If $k>2x$ then $d_1\geq k-2x>0$, so suppose that $k=2x$. If $d_1=0$ then $d_1=k-2x=0$, and we have just shown that in this case the intersection type is $(0^{d-k+x},1^{k-2x}, 2^x, 3^0, \ldots, k^0)$, which implies that $|{\rm spec}\,{\cal S}|=1$, contradicting Theorem~\ref{thm:HM}~(i). \end{proof} \bigskip Hence the intersection type $(0^{d_0},1^{d_1},\ldots, k^{d_k})$ determines the numbers $r_i$ and $b_i$ via (iii): \begin{align}\label{eqn:ribifromdi} r_i = \frac{b i d_i}{v} = \frac{r i d_i}{k}, \quad \text{and} \quad b_i = \frac{cr_i}{i} = \frac{cr}{k} d_i. \end{align} In particular, the numbers $b_i$ are proportional to the $d_i.$ \subsection{Parameter restrictions using the top group} Here we derive an upper bound for the transitivity of the top group $G^{\mathfrak C}$ that depends only on the intersection type $(0^{d_0}, 1^{d_1}, \ldots, k^{d_k})$. Recall the definitions of the $d_i$ and the spectrum ${\rm spec}\, {\cal S}$ in (\ref{eqn:di}) and (\ref{eqn:spec}). \begin{definition}\label{def:tmax}{\rm For a given intersection type $(0^{d_0},1^{d_1},\ldots, k^{d_k})$, and non-empty subset $S\subseteq {\rm spec} \, {\cal S}$, set $d(S):= \sum_{i\in S} d_i.$ Define $t_{\max}$ to be the largest positive integer $t$ such that, for all non-empty $S\subseteq {\rm spec} \, {\cal S}$ and all positive integers $h\leq\min\{t, d(S)\}$, \[ \prod_{j=0}^{h-1} (d-j) \quad \mbox{divides} \quad b \prod_{j=0}^{h-1} (d(S)-j). 
\] } \end{definition} For $h=1$ and any non-empty subset $S$ of ${\rm spec} \, {\cal S}$, the displayed condition in Definition~\ref{def:tmax} is simply `$d$ divides $b\,d(S)$', and the truth of this follows immediately from the definition of $d(S)$ and from the equalities $d b_i=b d_i$ which hold for all $i\in S$ by Proposition~\ref{prop:intersection}~(iii). Hence the condition in Definition~\ref{def:tmax} holds for $t=1$, and so $t_{\max}$ is well-defined. We prove next that $t_{\max}$ is an upper bound for the transitivity of $G^{\mathfrak C}$. \begin{lemma}\label{lem:maxtrans} If $G^{\mathfrak C}$ is $t$-transitive then $t\leq t_{\max}$. \end{lemma} \begin{proof} As remarked above, $t_{\max}\geq1$, so if $t=1$ then the result holds. Assume that $t\geq2$. Let $S$ be a non-empty subset of ${\rm spec} \, {\cal S}$, set $s:=d(S)$ as in Definition~\ref{def:tmax}, and let $h$ be a positive integer with $h\leq\min\{t, s\}$. We double count the set \[ {\cal M} = \Big\{ ((C_1,\ldots,C_h), \lambda) \in {\mathfrak C}^h \times {\cal L} \;:\; C_i \neq C_j \;\; \text{for} \;\; i \neq j, \;\; |C_i \cap \lambda| \in S \;\; \text{for} \;\; i=1,\ldots,h \Big\}. \] Since $G^{\mathfrak C}$ is $t$-transitive and $h\leq t$, the group $G$ acts transitively on the set of $h$-tuples of pairwise distinct classes. Thus the number of possibilities for the line $\lambda$ in the second component, for a given $h$-tuple $(C_1,\dots,C_h)$, does not depend on the choice of the $h$-tuple. Call this number $n,$ so that the cardinality of the set ${\cal M}$ is $d(d-1)\cdots (d-h+1)\cdot n$. 
On the other hand, given any line $\lambda,$ the number of choices of $h$-tuples of pairwise distinct classes $(C_1,...,C_h),$ each intersecting $\lambda$ in a number of points belonging to $S$, is $s(s-1)\cdots (s-h+1),$ so $ |{\cal M}|= b \cdot \prod_{j=0}^{h-1}(s-j).$ Hence $\prod_{j=0}^{h-1}(d-j)$ divides $b \cdot \prod_{j=0}^{h-1}(s-j).$ Thus the displayed condition of Definition~\ref{def:tmax} holds for $t$, and so $t\leq t_{\max}$. \end{proof} \bigskip Although the next result will not be used until we consider more detailed group theoretic properties, we derive here the following extension of Lemma~\ref{lem:maxtrans}. \begin{lemma}\label{lem:altd} If $G^{\mathfrak C} \ge {\rm Alt}_d,$ then $\displaystyle{\mathrm{lcm}_{i=1}^k \binom{d}{d_i}}$ divides $b.$ \end{lemma} \begin{proof} By assumption, $G^{\mathfrak C}$ is $t$-transitive, where $t=d-2$ if $G^{\mathfrak C}$ is ${\rm Alt}_d$, and $t=d$ if $G^{\mathfrak C}={\rm Sym}_d$. Let $i$ be such that $0<i\leq k$. If $d_i=0$ then $\binom{d}{d_i}=1$ divides $b$ trivially, so we may assume that $d_i>0$, that is, $i\in{\rm spec}\,{\cal S}$. By Theorem~\ref{thm:HM}~(i), $d_i<d$. If $d_i=d-1$, then by Proposition~\ref{prop:intersection}~(iii), $d$ divides $bd_i=b(d-1)$, and hence $d=\binom{d}{d_i}$ divides $b$. Suppose now that $d_i\leq d-2$. Choose $S=\{i\}\subseteq {\rm spec}\,{\cal S}$ so that $d(S)=d_i\leq d-2\leq t$, and choose $h =d_i= \min\{t,d(S)\}$. Then, by Lemma~\ref{lem:maxtrans} and Definition~\ref{def:tmax}, ${ \prod_{j=0}^{d_i-1} (d-j)}$ divides ${ b\prod_{j=0}^{d_i-1} (d_i-j)}$. Now ${ \prod_{j=0}^{d_i-1} (d-j) = \frac{d!}{(d-d_i)!}}$ and ${ b\prod_{j=0}^{d_i-1} (d_i-j)} =bd_i!.$ It follows that $\binom{d}{d_i}$ divides $b$. As this is the case for all $i$ with $0<i\leq k$, the result follows. \end{proof} Finally in this subsection we record a test for determining $t_{\max}$, which essentially just checks the condition of Definition~\ref{def:tmax}. \begin{algorithm}\label{algtmax}{\rm ({\sc TMax})\\ {\sc Input:}\quad The values $d,c,k$ (from which $v=cd$, $r=(v-1)/(k-1)$ and $b=vr/k$ are computed) and an intersection type $(0^{d_0},1^{d_1},\ldots, k^{d_k})$. 
\smallskip\noindent {\sc Output:}\quad $t_{\max}$ and ${\rm spec}\,{\cal S}$. \smallskip\noindent compute\ ${\rm spec}\,{\cal S} :=\{i>0\;:\;d_i\ne0\}$;\\ compute \ $\mathcal{T} :=\{ \sum_{i\in S}d_i \;:\; S\subseteq{\rm spec}\,{\cal S},\,S\ne\emptyset\}$;\\ set $T:=0$;\\ for $t=1,\ldots,d$\\ \phantom{mm} if, for some $s\in\mathcal{T}$ with $s\geq t$, \\ \phantom{mm} \phantom{mm} $d(d-1)\ldots(d-t+1)$ does not divide $bs(s-1)\ldots(s-t+1)$\\ \phantom{mm} \phantom{mm} skip to instruction $(\ast)$; \\ \phantom{mm} else set $T:=T+1$;\\ ($\ast$) set $t_{\max}:=T$ and return $t_{\max}$ and ${\rm spec}\,{\cal S}$. } \end{algorithm} \begin{lemma}\label{lem:algtmax} Let $d,c,k,b$ be given, and let $(0^{d_0},1^{d_1},\ldots, k^{d_k})$ be a corresponding intersection type with $\sum_i id_i=k$. Then, along with ${\rm spec}\,{\cal S}$, either Algorithm~\ref{algtmax} returns $0$ and there are no linear spaces satisfying the {\sc Hypothesis} with these parameters, or it returns the correct value of $t_{\max}$. \end{lemma} \begin{proof} Let ${\rm spec}\,{\cal S}$ and $\mathcal{T}$ be as in Algorithm~\ref{algtmax}. Suppose that $t'$ is the integer returned by Algorithm~\ref{algtmax}. If $t'=0$ then the parameter $T$ is not increased during the run of the `if loop' with $t=1$, and hence, for some $s\in\mathcal{T}$, $d$ does not divide $bs$, contradicting Proposition~\ref{prop:intersection}~(iii) (see the discussion preceding Lemma~\ref{lem:maxtrans}). Thus in this case there is no linear space with these parameters satisfying the {\sc Hypothesis}. Suppose now that $t'>0$. This means that each run of the `if loop' with parameter $t\leq t'$ finishes with $T$ being increased by $1$, so that at the end of all these runs of the `if loop' we have $T=t'$. Also, if $t'<d$, then in the run of the `if loop' with $t=t'+1$ the value of $T$ is not increased and some instance of the divisibility condition fails.
Let $s\in\mathcal{T}$, let $h$ satisfy $1\leq h\leq\min\{t',s\}$, and let $c(s,h)$ denote the condition `$d(d-1)\ldots(d-h+1)$ divides $bs(s-1)\ldots(s-h+1)$'. Then in the run of the `if loop' with $t=h$ we would have verified that $c(s,h)$ holds. Thus $t'\leq t_{\max}$ by Definition~\ref{def:tmax}. If $t'=d$ then, since $t_{\max}\leq d$, we must have $t'=t_{\max}$; so suppose that $t'<d$. As mentioned above, in the run of the `if loop' with $t=t'+1$, $c(s,t'+1)$ fails for some $s\in\mathcal{T}$ with $s\geq t'+1$. Hence $t_{\max}\leq t'$, and so $t'=t_{\max}$ in this case also. \end{proof} \subsection{{\sc Fang--Li Parameter Sift}}\label{sec:flsift} In this subsection we describe Algorithm~\ref{alg1}, which uses the results presented so far to sift for feasible parameter sets for line-transitive, point-imprimitive linear spaces satisfying the {\sc Hypothesis} given at the beginning of this section. Applying this algorithm will complete {\sc Step} 1 of the search procedure outlined in Section~\ref{sect:outline}. We apply tests to determine feasible values for the parameters $k, d, c$ and also the Fang--Li and the Delandtsheer--Doyen parameters. In addition, we compute feasible intersection types, and the value of $t_{\max}$. We restrict to the cases where \begin{align}\label{restrictions} k^{(r)} \le k^{(r)}_{\max} := 8. \end{align} The output of Algorithm~\ref{alg1} is a list {\sc ParameterList}$(k^{(r)}_{\max})$ of parameter sequences, called {\sc Lines}, that satisfy the conditions given so far on the parameters of line-transitive, point-imprimitive linear spaces whose Fang--Li parameter $k^{(r)}$ is at most a given upper bound $k^{(r)}_{\max}$. \begin{algorithm}\label{alg1}{\rm ({\sc ParameterList})\\ {\sc Input:}\quad An upper bound $k^{(r)}_{\max}$ for the Fang--Li parameter $k^{(r)}$. \smallskip\noindent {\sc Output:}\quad The list {\sc ParameterList}$(k^{(r)}_{\max})$.
\smallskip\noindent set {\sc ParameterList}$(k^{(r)}_{\max}):=$ an empty list; \\ for $k^{(r)} = 1 , \ldots, k^{(r)}_{\max}$ \\ \phantom{mm} for $\gamma = 1, \ldots , {k^{(r)}}^2-1$ (Proposition~\ref{prop:fl2}(iii))\\ \phantom{mm} \phantom{mm} for $\delta = 1, \ldots$ with $\gamma\delta < {k^{(r)}}^2$ (Proposition~\ref{prop:fl2}(iii)) \\ \phantom{mm} \phantom{mm} \phantom{mm} for all $k^{(v)}$ dividing $(\gamma + k^{(r)})(\delta + k^{(r)})$ (Proposition~\ref{prop:fl2}(iv))\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $k := k^{(v)} \cdot k^{(r)}$ (Lemma~\ref{prop:fl1}(i)); \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $2 \nmid \gamma k^{(v)}$ skip to next $k^{(v)}$;\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $x := \frac{\gamma k^{(v)}}{2}$ (Proposition~\ref{prop:fl2}(ii));\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $2 \nmid \delta k^{(v)}$ skip to next $k^{(v)}$;\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $y := \frac{\delta k^{(v)}}{2}$ (Proposition~\ref{prop:fl2}(ii));\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $\gamma\delta \nmid (k^{(r)} (k - 1) - \gamma - \delta)$ skip to next $k^{(v)}$;\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $b^{(r)} := \frac{k^{(r)} (k - 1) - \gamma - \delta}{\gamma\delta}$ (Proposition~\ref{prop:fl2}(iii));\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $k^{(v)} > b^{(r)}$ skip to next $k^{(v)}$ (Fisher's inequality (\ref{fisherfangli})); \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $v := (\gamma + \delta + \gamma\delta b^{(r)}) \cdot b^{(r)} + 1$;\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $k^{(v)} \nmid v$ skip to next $k^{(v)}$;\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $b^{(v)} := \frac{v}{k^{(v)}}$ (Lemma~\ref{prop:fl1}(i));\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $b := b^{(v)} \cdot b^{(r)}$ (Lemma~\ref{prop:fl1}(i));\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $r := k^{(r)} \cdot 
b^{(r)}$ (Lemma~\ref{prop:fl1}(ii));\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $c := \gamma b^{(r)} + 1$ (Proposition~\ref{prop:fl2}(i));\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $d := \delta b^{(r)} + 1$ (Proposition~\ref{prop:fl2}(i));\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $k \ge 2x$ \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $y > \binom{k - 2x}{2}$ skip to next $k^{(v)}$ (Theorem~\ref{thm:CNP}(c)(i)); \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $k \ge 2y$ \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $x > \binom{k - 2y}{2}$ skip to next $k^{(v)}$ (Theorem~\ref{thm:CNP}(c)(ii)); \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} compute all intersection types $(0^{d_0},1^{d_1},\ldots, k^{d_k})$ \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} with respect to Proposition~\ref{prop:intersection}(v)-(vii) and Theorem~\ref{thm:HM};\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if there are no intersection types, skip to next $k^{(v)}$; \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} for each intersection type $(0^{d_0},1^{d_1},\ldots, k^{d_k})$\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} compute $t_{\max}$ and ${\rm spec}\,{\cal S}$ (Algorithm~\ref{algtmax}); \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $t_{\max}=0$ skip to next intersection type (Lemma~\ref{lem:algtmax});\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} insert in {\sc ParameterList}$(k^{(r)}_{\max})$ the {\sc Line} \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} $(d,c,x,y,\gamma,\delta,k^{(v)},k^{(r)},b^{(v)},b^{(r)},(0^{d_0},1^{d_1},\ldots,k^{d_k}),t_{\max},{\rm spec}\,{\cal S})$ \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} according to the value of $v$;\\ return {\sc ParameterList}$(k^{(r)}_{\max})$.
} \end{algorithm} Applying Algorithm~\ref{alg1} with $k^{(r)}_{\max}=8$ produced a list {\sc ParameterList}(8) containing 1207 {\sc Lines}. We refer to {\sc Line} $i$ of {\sc ParameterList}(8) as {\sc ParamList}${}_8(i)$. As noted in {\sc Step} 1 in Section~\ref{sect:outline}, {\sc ParamList}${}_8(i)$ will give values of the following parameters: \[ (d,c,x,y,\gamma,\delta,k^{(v)}, k^{(r)}, b^{(v)}, b^{(r)}, (0^{d_0},1^{d_1},\ldots, k^{d_k}), t_{\max}, {\rm spec}\, {\cal S}). \] We give a shortened version of the list in Appendix~\ref{appx}, namely we omit from each {\sc Line} the intersection type $(0^{d_0},1^{d_1},\ldots, k^{d_k})$, $t_{\max}$ and ${\rm spec}\, {\cal S}$. For those {\sc Lines} that survive the further tests of {\sc Steps} $2-4$, the information about the intersection type, $t_{\max}$ and ${\rm spec}\, {\cal S}$ is needed for the final analysis. Full information about these {\sc Lines} is presented in Section~\ref{sect:results}. \subsection{Testing for $2$-step imprimitivity}\label{sub:2step} As we noted in Subsection~\ref{sub:partitions}, information in Theorem~\ref{thm:refinement} and Corollary~\ref{cor:maxmin} provides sufficient conditions for the partition ${\mathfrak C}$ to be maximal or minimal, and hence also provides sufficient conditions for $G$ to be 2-step imprimitive relative to ${\mathfrak C}$ (that is, ${\mathfrak C}$ is both maximal and minimal). We formalise these sufficient conditions in Algorithm~\ref{alg2}. Applying this algorithm will complete {\sc Step} 2 of the search procedure described in Section~\ref{sect:outline}, the {\sc Two-Step Imprimitivity Test}. We need only the values for $(d,c,x,y,k^{(v)},k^{(r)})$ for each {\sc Line} of {\sc ParameterList}$(k^{(r)}_{\max})$, so we will denote this subsequence of parameters of {\sc ParamList}${}_{k^{(r)}_{\max}}(i)$ by {\sc ParamList}${}_{k^{(r)}_{\max}}'(i)$. \begin{algorithm}\label{alg2}{\rm ({\sc TwoStepImprimitive})\\ {\sc Input:}\quad {\sc ParameterList}$(k^{(r)}_{\max})$. 
\smallskip\noindent {\sc Output:}\quad {\sc ParameterListA}$(k^{(r)}_{\max})$ containing those {\sc Lines} where $G$ is guaranteed to be 2-step imprimitive relative to ${\mathfrak C}$, and {\sc ParameterListB}$(k^{(r)}_{\max})$ containing the remaining {\sc Lines} of {\sc ParameterList}$(k^{(r)}_{\max})$ (sometimes with the information that ${\mathfrak C}$ is minimal or maximal). \smallskip\noindent set {\sc ParameterListA}$(k^{(r)}_{\max}):=$ an empty list;\\ set {\sc ParameterListB}$(k^{(r)}_{\max}):=$ {\sc ParameterList}$(k^{(r)}_{\max})$;\\ let $n$ be the number of {\sc Lines} of {\sc ParameterListB}$(k^{(r)}_{\max})$;\\ for $i=1,\dots,n$ apply the following tests to {\sc ParamList}${}_{k^{(r)}_{\max}}'(i)=(d,c,x,y,k^{(v)},k^{(r)})$\\ \phantom{mm} if at least one of the following conditions holds\\ \phantom{mm} \phantom{mm} (i)\ $x\leq 4$,\\ \phantom{mm} \phantom{mm} (ii)\ $\binom{k}{2} > (x-2)xy+x+y$,\\ \phantom{mm} \phantom{mm} (iii)\ there is no $j\leq n$ such that, if {\sc ParamList}${}_{k^{(r)}_{\max}}'(j)=(d',c',x',y',k^{(v)'},k^{(r)'})$,\\ \phantom{mm} \phantom{mm} \phantom{mm} then $k'=k$, $c'$ is a proper divisor of $c$, and $d'=cd/c'$,\\ \phantom{mm} then record that ${\mathfrak C}$ is minimal for {\sc ParamList}${}_{k^{(r)}_{\max}}(i)$;\\ \phantom{mm} if at least one of the following conditions holds\\ \phantom{mm} \phantom{mm} (i)\ $y\leq 4$,\\ \phantom{mm} \phantom{mm} (ii)\ $\binom{k}{2} > (y-2)xy+x+y$,\\ \phantom{mm} \phantom{mm} (iii)\ there is no $j\leq n$ such that, if {\sc ParamList}${}_{k^{(r)}_{\max}}'(j)=(d',c',x',y',k^{(v)'},k^{(r)'})$,\\ \phantom{mm} \phantom{mm} \phantom{mm} then $k'=k$, $d'$ is a proper divisor of $d$, and $c'=cd/d'$,\\ \phantom{mm} then record that ${\mathfrak C}$ is maximal for {\sc ParamList}${}_{k^{(r)}_{\max}}(i)$;\\ \phantom{mm} if ${\mathfrak C}$ is both minimal and maximal for {\sc ParamList}${}_{k^{(r)}_{\max}}(i)$ \\ \phantom{mm} \phantom{mm} then remove {\sc ParamList}${}_{k^{(r)}_{\max}}(i)$ from {\sc 
ParameterListB}$(k^{(r)}_{\max})$\\ \phantom{mm} \phantom{mm} and add it to {\sc ParameterListA}$(k^{(r)}_{\max})$;\\ return {\sc ParameterListA}$(k^{(r)}_{\max})$ and {\sc ParameterListB}$(k^{(r)}_{\max})$. } \end{algorithm} \begin{lemma}\label{lem:test2step} The output of Algorithm~{\rm\ref{alg2}} is correct. \end{lemma} \begin{proof} If ${\mathfrak C}$ is not minimal then there exists a proper $G$-invariant refinement ${\mathfrak C}'$ of ${\mathfrak C}$ with $d'$ classes of size $c'$ where $c'$ is a proper divisor of $c$. Similarly if ${\mathfrak C}$ is not maximal then ${\mathfrak C}$ is a proper refinement of a $G$-invariant partition ${\mathfrak C}'$ with $d'$ classes of size $c'$ where $d'$ is a proper divisor of $d$. Thus parts (iii) are sufficient conditions for minimality and maximality respectively. The sufficiency of parts (i) and (ii) follows from Corollary~\ref{cor:maxmin}. \end{proof} \bigskip Algorithm~\ref{alg2} was applied to {\sc ParameterList}$(8)$ (obtained from Algorithm~\ref{alg1}). It turned out that the list {\sc ParameterListB}$(8)$ returned was empty, and hence that for every {\sc Line} of {\sc ParameterList}$(8)$, the group $G$ was guaranteed to be 2-step imprimitive relative to ${\mathfrak C}$. \begin{theorem}\label{thm:2step} If the {\sc Hypothesis} holds and $k^{(r)}\leq8$, then the group $G$ is $2$-step imprimitive relative to ${\mathfrak C}$. \end{theorem} \section{Sifting for primitive top and bottom groups}\label{sect:groups} In this section we give some additional properties of the top group $H=G^{\mathfrak C}$ and the bottom group $L=G^C$. In Subsection~\ref{sub:topbottom} we present three algorithms for determining possibilities for these groups in the cases where ${\mathfrak C}$ is maximal or minimal respectively, and determining the value of the parameter {\sc PossibleGrid}. 
Applying these algorithms will complete {\sc Step} 3 of the search procedure described in Section~\ref{sect:outline}, the {\sc Top and Bottom Group Sifts}. \subsection{Subdegrees} Assume $G$ is a transitive group acting on a set $X$ and let $\alpha\in X$. The orbits of the stabiliser $G_\alpha$ in $X$ are called the {\em suborbits of $G$ relative to $\alpha$} and all suborbits apart from $\{\alpha\}$ are called non-trivial (even if they have length 1). The lengths of the suborbits are the {\em subdegrees} of $G$, and a subdegree is called non-trivial if it is the length of a non-trivial suborbit. The subdegrees are independent of the choice of $\alpha$ because of the transitivity of $G$. The {\em rank} of the group, abbreviated as ${\rm rk} \, G,$ is the number of suborbits. We have upper bounds on the ranks of $G^{\mathfrak C}$ and $G^C$ in terms of the parameters of ${\cal S}$. \smallskip \begin{lemma}[Delandtsheer, Niemeyer and Praeger~\cite{DelandtsheerNiemeyeretal01}, Proposition 2.6] \label{lem:subdegree} Assume\\ that the {\sc Hypothesis} holds. \begin{enumerate} \item The number $b^{(r)}$ divides each non-trivial subdegree of $G^{\mathfrak C}$, and in particular, ${\rm rk} \, G^{\mathfrak C} \le 1 + \delta.$ \item The number $b^{(r)}$ divides each non-trivial subdegree of $G^C$, and in particular, ${\rm rk} \, G^C \le 1 + \gamma.$ \item Moreover, for $\alpha\in{\cal P}$, $b^{(r)}$ divides each non-trivial subdegree of $G^{\cal P}$ and each orbit length of $G_\alpha$ in $\{\lambda\in{\cal L}\;:\;\alpha\in\lambda\}$. \end{enumerate} \end{lemma} \bigskip Next we give a link between the transitivity of the bottom group and the spectrum. A permutation group $G$ on $X$ is {\it $t$-homogeneous} if it acts transitively on the set of $t$-element subsets of $X$. For all $t$, a $t$-transitive group is $t$-homogeneous, while for $t\geq 2$, a transitive $t$-homogeneous group is $(t-1)$-transitive (see~\cite[Section 9.4]{DixonMortimer}). 
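To make the homogeneity notion concrete, the following Python sketch (illustrative only; it is not part of the GAP computations used for the searches in this paper) tests whether a small permutation group, given by generating permutations, is $t$-homogeneous by enumerating the group and computing the orbit of a single $t$-subset.

```python
from itertools import product
from math import comb

def generate_group(gens):
    """All elements of the permutation group generated by `gens`
    (a permutation is a tuple g with g[i] the image of point i)."""
    n = len(gens[0])
    identity = tuple(range(n))
    group, frontier = {identity}, {identity}
    while frontier:
        # multiply the newest elements by the generators (apply g, then s)
        frontier = {tuple(s[g[i]] for i in range(n))
                    for g, s in product(frontier, gens)} - group
        group |= frontier
    return group

def is_t_homogeneous(gens, t):
    """True iff the generated group is transitive on t-element subsets:
    the orbit of one t-subset must hit all C(n, t) of them."""
    n = len(gens[0])
    group = generate_group(gens)
    orbit = {frozenset(g[x] for x in range(t)) for g in group}
    return len(orbit) == comb(n, t)

# Sym(3) on {0,1,2} is 2-homogeneous; the cyclic group of order 4 on
# {0,1,2,3} is not (the pairs {0,2} and {1,3} form a separate orbit).
sym3 = [(1, 0, 2), (1, 2, 0)]   # a transposition and a 3-cycle
c4 = [(1, 2, 3, 0)]             # a 4-cycle
print(is_t_homogeneous(sym3, 2), is_t_homogeneous(c4, 2))  # True False
```

This brute-force enumeration is adequate only for small degrees; the actual sifts below rely on stored libraries of primitive groups.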
\begin{lemma}\label{lem:twotrans} Assume that the {\sc Hypothesis} holds. \begin{enumerate} \item If $G^C$ is $3$-homogeneous then ${\rm spec} \, {\cal S} = \{1,2\}.$ \item If $G^C$ is $2$-homogeneous then ${\rm spec} \, {\cal S} = \{1,h\}$ with $h \ge 2.$ \end{enumerate} \end{lemma} \begin{proof} (i) Consider the linear space ${\cal S}|_C$ defined at (\ref{eqn:induced}), which admits $G^C$ as an automorphism group. In any linear space, three points are either collinear or they form a triangle, that is, the three pairs lie on three different lines. Since $G^C$ is transitive on 3-subsets of points of $C,$ only one of these types of three point sets is possible. If all sets of three points were collinear, then all points of $C$ would be contained in one line of ${\cal L}$, and hence $c\in{\rm spec}\, {\cal S}$, contradicting Theorem~\ref{thm:HM}(ii). Hence no line of ${\cal L}|_C$ contains more than two points, which means that ${\rm spec} \, {\cal S}$ is a subset of $\{1,2\}.$ On the other hand, by Theorem~\ref{thm:HM}(i), the cardinality of ${\rm spec} \, {\cal S}$ is at least $2,$ so ${\rm spec} \, {\cal S} = \{1,2\}.$ (ii) Here, $G^C$ is an automorphism group of ${\cal S}|_C$ which is transitive on unordered pairs of points of $C.$ Since any two points of $C$ determine a line of ${\cal L}|_C$, and since $G^C$ is transitive on pairs from $C$, all lines of ${\cal L}|_C$ have the same size, say $h$, with $h \ge 2.$ Thus ${\rm spec} \, {\cal S} = \{1,h\}$ with $h \ge2.$ \end{proof} \subsection{Sifting for primitive top and bottom groups}\label{sub:topbottom} In this subsection we describe separately procedures for restricting the possible top groups $H=G^{\mathfrak C}$ and bottom groups $L=G^C$ in the cases where ${\mathfrak C}$ is maximal and minimal respectively. Recall that the $G$-invariance of ${\mathfrak C}$ implies that $G\leq L\wr H$, which is why we refer to $H$ and $L$ as the top and bottom groups of the wreath product. 
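Both of the sifts below reject a candidate group when $b^{(r)}$ fails to divide some non-trivial subdegree (Lemma~\ref{lem:subdegree}). Purely as an illustration (our computations used the primitive groups libraries in GAP, not this code), the subdegrees of a small permutation group given by generators can be computed by enumerating the group, extracting a point stabiliser, and listing its orbit lengths; the dihedral group below is hypothetical test data.

```python
from itertools import product

def generate_group(gens):
    """All elements of the permutation group generated by `gens`
    (a permutation is a tuple g with g[i] the image of point i)."""
    n = len(gens[0])
    identity = tuple(range(n))
    group, frontier = {identity}, {identity}
    while frontier:
        frontier = {tuple(s[g[i]] for i in range(n))
                    for g, s in product(frontier, gens)} - group
        group |= frontier
    return group

def subdegrees(gens, alpha=0):
    """Sorted orbit lengths of the stabiliser of `alpha`, i.e. the
    subdegrees (the trivial suborbit {alpha} contributes a 1)."""
    n = len(gens[0])
    stabiliser = [g for g in generate_group(gens) if g[alpha] == alpha]
    seen, lengths = set(), []
    for x in range(n):
        if x not in seen:
            # the stabiliser is a full subgroup, so its images of x
            # already form the whole orbit of x
            orbit = {g[x] for g in stabiliser}
            seen |= orbit
            lengths.append(len(orbit))
    return sorted(lengths)

# Dihedral group of order 10 on 5 points: a 5-cycle and the reflection
# fixing the point 0. The stabiliser of 0 has orbits {0}, {1,4}, {2,3}.
rotation = (1, 2, 3, 4, 0)
reflection = (0, 4, 3, 2, 1)
print(subdegrees([rotation, reflection]))  # [1, 2, 2]
```

In this example the non-trivial subdegrees are both $2$, so the divisibility test of Lemma~\ref{lem:subdegree} would only pass when $b^{(r)}$ divides $2$.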
The reason for separating these procedures is that the procedures are applicable for searches over larger ranges of values of $k^{(r)}$, and in particular in circumstances where it is not known that both the top group and the bottom group are primitive. We then present Algorithm~\ref{alg:step3and4} that completes {\sc Step 3} of the overview of the search described in Section~\ref{sect:outline}. Our procedures use tables of primitive groups of a given degree, if available. Currently such lists are available in the computer algebra package GAP \cite{GAP4} for all degrees less than $2,500$. A few tests derived from the theoretical results presented so far in this paper do not require these lists, but if the relevant list is available, then additional tests are applied to each primitive group in the list. Recall that the parameters and the intersection type are fixed at this stage. First we give the algorithm to sift for $G^{\mathfrak C}$ (the top group) in the case where ${\mathfrak C}$ is maximal, that is, where $G^{\mathfrak C}$ is a primitive group of degree $d$. By the \emph{transitivity} of a primitive permutation group $H$ we mean the maximum integer $t$ such that $H$ is $t$-transitive. \begin{algorithm}\label{alg:topgroup}{\rm ({\sc TopGroupSift})\\ {\sc Input:}\quad A {\sc Line} from {\sc ParameterList}$(k^{(r)}_{\max})$ for which ${\mathfrak C}$ is maximal, and the list {\sc Prim}$(d)$ of all primitive permutation groups of degree $d$, if available. \smallskip\noindent {\sc Output:}\quad {\sc TopGroups}; this is a list of candidate primitive groups $G^{\mathfrak C}$ for that {\sc Line} if {\sc Prim}$(d)$ is available, and otherwise contains an entry `{\sc Prim}$(d)$ unavailable' (and possibly some other information about $G^{\mathfrak C}$). 
\smallskip\noindent set {\sc TopGroups}$:=$ an empty list;\\ if {\sc Prim}$(d)$ is not available\\ \phantom{mm} set {\sc TopGroups}[1]\,:= `{\sc Prim}$(d)$ unavailable';\\ \phantom{mm} if ${\mathrm{lcm}_{i>0} \binom{d}{d_i}} \nmid b$ add to {\sc TopGroups}\ \ `$G^{{\mathfrak C}}\ne {\rm Alt}_d,{\rm Sym}_d$' (Lemma~\ref{lem:altd});\\ else for each $H\in$ {\sc Prim}$(d)$ \\ \phantom{mm} let $t$ be the transitivity of $H$; \\ \phantom{mm} if $t > t_{\max}$ skip to next $H$ (Lemma~\ref{lem:maxtrans}); \\ \phantom{mm} if $b^{(r)}$ does not divide all non-trivial subdegrees of $H$ skip to next $H$ (Lemma~\ref{lem:subdegree}(i)); \\ \phantom{mm} if $H \ge {\rm Alt}_d$ and ${\mathrm{lcm}_{i>0} \binom{d}{d_i}} \nmid b$ skip to next $H$ (Lemma~\ref{lem:altd});\\ \phantom{mm} add $H$ to {\sc TopGroups};\\ return {\sc TopGroups}. } \end{algorithm} \bigskip Now we give the algorithm to sift for $G^C$ (the bottom group) in the case where ${\mathfrak C}$ is minimal, that is, where $G^C$ is a primitive group of degree $c$. \begin{algorithm}\label{alg:bottomgroup}{\rm ({\sc BottomGroupSift})\\ {\sc Input:}\quad A {\sc Line} from {\sc ParameterList}$(k^{(r)}_{\max})$ for which ${\mathfrak C}$ is minimal, and the list {\sc Prim}$(c)$ of all primitive permutation groups of degree $c$, if available. \smallskip\noindent {\sc Output:}\quad {\sc BottomGroups}; this is a list of candidate primitive groups $G^C$ for that {\sc Line} if {\sc Prim}$(c)$ is available, and otherwise contains an entry `{\sc Prim}$(c)$ unavailable' (and possibly some other information about $G^C$). 
\smallskip\noindent set {\sc BottomGroups}$:=$ an empty list;\\ if {\sc Prim}$(c)$ is not available\\ \phantom{mm} set {\sc BottomGroups}[1]\,:= `{\sc Prim}$(c)$ unavailable';\\ \phantom{mm} if ${\rm spec}\,{\cal S}\ne\{1,h\}$ with $h\geq2$\\ \phantom{mm} \phantom{mm} add to {\sc BottomGroups}\ \ `$G^C$ is not $2$-homogeneous' (Lemma~\ref{lem:twotrans});\\ \phantom{mm} if ${\rm spec}\,{\cal S}=\{1,h\}$ for some $h\geq3$\\ \phantom{mm} \phantom{mm} add to {\sc BottomGroups}\ \ `$G^C$ is not $3$-homogeneous' (Lemma~\ref{lem:twotrans});\\ else for each $L\in$ {\sc Prim}$(c)$ \\ \phantom{mm} if $b^{(r)}$ does not divide all non-trivial subdegrees of $L$ skip to next $L$ (Lemma~\ref{lem:subdegree}(ii)); \\ \phantom{mm} let $t$ be the transitivity of $L$; \\ \phantom{mm} if $t \ge 3$ and ${\rm spec} \, {\cal S} \neq \{ 1,2\}$ skip to next $L$ (Lemma~\ref{lem:twotrans});\\ \phantom{mm} if $t \ge 2$ then \\ \phantom{mm} \phantom{mm} if $|{\rm spec} \, {\cal S}| \neq 2$ skip to next $L$ (Lemma~\ref{lem:twotrans}); \\ \phantom{mm} \phantom{mm} if $1 \not\in {\rm spec} \, {\cal S}$ skip to next $L$ (Lemma~\ref{lem:twotrans}); \\ \phantom{mm} add $L$ to {\sc BottomGroups};\\ return {\sc BottomGroups}. } \end{algorithm} To complete {\sc Step} 3 of the search, we apply Algorithms~\ref{alg:topgroup} and \ref{alg:bottomgroup} to each of the {\sc Lines} of {\sc ParameterListA}$(k^{(r)}_{\max})$, and remove a {\sc Line} if at least one of the lists {\sc TopGroups} or {\sc BottomGroups} is empty for that {\sc Line}. Then we determine the value of {\sc PossibleGrid} for each of the {\sc Lines} remaining. These tests are recorded in the algorithm below. \begin{algorithm}\label{alg:step3and4}{\rm ({\sc Top\&BottomGroups}) \\ {\sc Input:}\quad {\sc ParameterListA}$(k^{(r)}_{\max})$ and lists {\sc Prim}$(n)$ (where available) of primitive\\ permutation groups of degree $n$, for all $n$ equal to an entry $d$ or $c$ in some {\sc Line} of \\ {\sc ParameterListA}$(k^{(r)}_{\max})$. 
\smallskip\noindent {\sc Output:}\quad An enhanced {\sc ParameterListA}$(k^{(r)}_{\max})$ containing lists {\sc TopGroups} and {\sc BottomGroups}, and the value of {\sc PossibleGrid} for each surviving {\sc Line}. \smallskip\noindent for each {\sc Line} of {\sc ParameterListA}$(k^{(r)}_{\max})$\\ \phantom{mm} compute {\sc TopGroups} (Algorithm~\ref{alg:topgroup});\\ \phantom{mm} \phantom{mm} if {\sc TopGroups} is empty then\\ \phantom{mm} \phantom{mm} \phantom{mm} remove this {\sc Line} from {\sc ParameterListA}$(k^{(r)}_{\max})$ and skip to next {\sc Line};\\ \phantom{mm} \phantom{mm} else append {\sc TopGroups} to this {\sc Line};\\ \phantom{mm} compute {\sc BottomGroups} (Algorithm~\ref{alg:bottomgroup});\\ \phantom{mm} \phantom{mm} if {\sc BottomGroups} is empty then\\ \phantom{mm} \phantom{mm} \phantom{mm} remove this {\sc Line} from {\sc ParameterListA}$(k^{(r)}_{\max})$ and skip to next {\sc Line};\\ \phantom{mm} \phantom{mm} else append {\sc BottomGroups} to this {\sc Line};\\ for each (surviving) {\sc Line} of {\sc ParameterListA}$(k^{(r)}_{\max})$\\ \phantom{mm} set {\sc PossibleGrid} to be `no';\\ \phantom{mm} if either $c=d$, or there is another (surviving) {\sc Line} of {\sc ParameterListA}$(k^{(r)}_{\max})$\\ \phantom{mm} \phantom{mm} corresponding to the same value of $k$ and a partition with $c$ classes of size $d$, then\\ \phantom{mm} \phantom{mm} reset {\sc PossibleGrid} to be `yes';\\ \phantom{mm} append {\sc PossibleGrid} to this {\sc Line};\\ return {\sc ParameterListA}$(k^{(r)}_{\max})$. } \end{algorithm} Note that at this point, in each surviving {\sc Line} of {\sc ParameterListA}$(k^{(r)}_{\max})$, the entries {\sc TopGroups} and {\sc BottomGroups} are both non-empty. \section{Quasiprimitive and $G$-normal sifts}\label{sect:stepfour} This section comprises three subsections which together enable us to complete {\sc Step} 4 of the search strategy described in the overview in Section~\ref{sect:outline}. 
In Subsection~\ref{subqp} we present an algorithm to search for candidate groups $G$ in the cases where $G$ is quasiprimitive. In Subsection~\ref{sec:normal} we prove several theorems concerning groups in the $G$-normal case, and in Subsection~\ref{sub:normalsift} we bring together the restrictions these theorems provide into an algorithm to refine the lists of feasible bottom groups in the case of a $G$-normal partition. \subsection{Sifting for primitive top groups in the quasiprimitive case} \label{subqp} In this subsection we give an algorithm that imposes further restrictions on the possibilities for top groups in the case where $G$ is quasiprimitive on ${\cal P}$ and ${\mathfrak C}$ is maximal. The algorithm produces, for each {\sc Line} of {\sc ParameterListA}$(k^{(r)}_{\max})$ for which {\sc TopGroups} contains an explicit list of groups, a (possibly empty) list {\sc QuasiprimTopGroups} of feasible top groups in the case where $G$ is quasiprimitive; in other cases it gives restrictions on such a list. This is the first part of {\sc Step} 4. Recall that, for a quasiprimitive group $G$, the kernel $G_{({\mathfrak C})}$ of the action of $G$ on ${\mathfrak C}$ is trivial, and therefore $G \simeq G^{\mathfrak C}.$ \begin{algorithm}\label{alg:qptopgroup}{\rm ({\sc QuasiprimitiveSift})\\ {\sc Input:}\quad {\sc ParameterListA}$(k^{(r)}_{\max})$ from Algorithm~\ref{alg:step3and4}. \smallskip\noindent {\sc Output:}\quad An enhanced {\sc ParameterListA}$(k^{(r)}_{\max})$ such that each {\sc Line} has an extra entry {\sc QuasiprimTopGroups} appended: if {\sc TopGroups} has first entry `{\sc Prim}$(d)$ unavailable', then the list {\sc QuasiprimTopGroups} contains entries `{\sc Prim}$(d)$ unavailable' and `$G^{\mathfrak C}$ almost simple'; otherwise {\sc QuasiprimTopGroups} is a (possibly empty) list of candidate quasiprimitive groups $G$ for this {\sc Line}.
\smallskip\noindent for each {\sc Line} from {\sc ParameterListA}$(k^{(r)}_{\max})$ \\ \phantom{mm} set {\sc QuasiprimTopGroups}\ $:=$ {\sc TopGroups};\\ \phantom{mm} if the first entry of {\sc QuasiprimTopGroups} is `{\sc Prim}$(d)$ unavailable' then \\ \phantom{mm} \phantom{mm} add to {\sc QuasiprimTopGroups} `$G^{\mathfrak C}$ almost simple' \\ \phantom{mm} \phantom{mm} skip to instruction $(\ast)$ ({\sc Case} 1 in Section~\ref{sect:outline}); \\ \phantom{mm} else for each $H\in $ {\sc TopGroups} \\ \phantom{mm} \phantom{mm} if $H$ is not almost simple then remove $H$ from {\sc QuasiprimTopGroups} \\ \phantom{mm} \phantom{mm} \phantom{mm} skip to next $H$ ({\sc Case} 1 in Section~\ref{sect:outline});\\ \phantom{mm} \phantom{mm} if $v \nmid |H|$ then remove $H$ from {\sc QuasiprimTopGroups} \\ \phantom{mm} \phantom{mm} \phantom{mm} skip to next $H$ (since $H = G^{\mathfrak C} \simeq G$ is point transitive);\\ \phantom{mm} $(\ast)$ append {\sc QuasiprimTopGroups} to this {\sc Line};\\ return {\sc ParameterListA}$(k^{(r)}_{\max})$. } \end{algorithm} The result of applying Algorithm~\ref{alg:qptopgroup} in the case $k^{(r)}_{\max}=8$ was a list of only 19 {\sc Lines} in which {\sc QuasiprimTopGroups} was not an empty list. These {\sc Lines} are given in Section~\ref{sect:results}, with those parameter {\sc Lines} with $k\leq 8$ distinguished (for reasons discussed there). See Table~\ref{tab:outputqp} and {\sc Line} 2 of Table~\ref{tab:outputkleq8}. \subsection{Further restrictions for the $G$-normal case} \label{sec:normal} Here we give some additional restrictions on the group $G$ in the case where ${\mathfrak C}$ is $G$-normal. The algorithm given at the end of this subsection will complete {\sc Step} 4 described in the overview in Section~\ref{sect:outline}. First we give several results, the first of which is from \cite{PraegerTuan02}. Recall that $d_1$ is the number of classes that meet a line in exactly one point. 
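The notions of semiregular and regular permutation groups recalled in the next paragraph are easy to test directly for small groups. The following Python sketch (illustrative only, not part of the paper's computations) does so by brute-force enumeration from a generating set.

```python
from itertools import product

def generate_group(gens):
    """All elements of the permutation group generated by `gens`
    (a permutation is a tuple g with g[i] the image of point i)."""
    n = len(gens[0])
    identity = tuple(range(n))
    group, frontier = {identity}, {identity}
    while frontier:
        frontier = {tuple(s[g[i]] for i in range(n))
                    for g, s in product(frontier, gens)} - group
        group |= frontier
    return group

def is_semiregular(gens):
    """True iff only the identity element fixes a point."""
    n = len(gens[0])
    identity = tuple(range(n))
    return all(all(g[i] != i for i in range(n))
               for g in generate_group(gens) if g != identity)

def is_regular(gens):
    """Regular = transitive + semiregular."""
    group = generate_group(gens)
    n = len(gens[0])
    orbit = {g[0] for g in group}           # transitivity check
    return len(orbit) == n and is_semiregular(gens)

# A 4-cycle generates a regular (hence semiregular) cyclic group on 4
# points; adding the transposition (0 1) yields Sym(4), which has
# non-identity elements with fixed points.
print(is_semiregular([(1, 2, 3, 0)]))                 # True
print(is_semiregular([(1, 2, 3, 0), (1, 0, 2, 3)]))   # False
```

For a semiregular group all point stabilisers are trivial, so a transitive semiregular (i.e.\ regular) group has order exactly the number of points, as used repeatedly in the arguments below.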
A permutation group is {\it semiregular} if only the identity element fixes a point; it is {\it regular} if it is both transitive and semiregular. \begin{theorem}[Praeger and Tuan~\cite{PraegerTuan02}, Theorems 1.5 and 1.6] \label{thm:semiregular} Assume that the {\sc Hypothesis} holds and that ${\mathfrak C}$ is $G$-normal. \begin{enumerate} \item[(a)] If $k > 2x + \frac{3}{2} + \sqrt{4x - \frac{7}{4}},$ then $G_{({\mathfrak C})}$ is semiregular on points and lines, $|G_{({\mathfrak C})}| = c$ is odd, and $d_1 > 0.$ \item[(b)] If $x\leq 8$, then $G_{({\mathfrak C})}$ has an abelian subgroup $S$ of index at most $2$ such that $S$ is normal in $G$, semiregular on points, and $|S|=c$ is odd. \item[(c)] If either of the conditions of (a) or (b) holds, and if ${\mathfrak C}$ is minimal, then $c$ is an odd prime power and $G^C$ is affine. \end{enumerate} \end{theorem} \begin{proof} Parts (a) and (b) follow from Theorems 1.5 and 1.6 of \cite{PraegerTuan02} respectively. Suppose now that ${\mathfrak C}$ is minimal and in part (a) set $S:=G_{({\mathfrak C})}$. Then $S^C\cong S$ by \cite[Theorem 1]{CaminaPraeger93}, and hence $S^C$ is an odd order semiregular normal subgroup of the primitive group $G^C$. Thus $S^C$ is elementary abelian, $c$ is a prime power and $G^C$ is affine. \end{proof} Recall the definition in (\ref{eqn:induced}) of the linear space ${\cal S}|_F= (F,{\cal L}|_F)$ induced on a subset $F\subseteq {\cal P}$ of size at least two. Under certain conditions, $G_{({\mathfrak C}),\alpha}$ fixes exactly one point of each class of the partition ${\mathfrak C}$ and, for $F:= {\rm Fix}_{\cal P}\,(G_{({\mathfrak C}),\alpha})$, the induced linear space ${\cal S}|_F$ has constant line size. The next result extends a result of Praeger and Tuan~\cite{PraegerTuan02}. The proof of part (a) uses the fact that, for a line-transitive, point-imprimitive group $G$, every involution in $G$ fixes at least one point. 
Indeed, if an involution in a line-transitive group $G$ has no fixed point, then $k$ divides $v$ by \cite[Lemma 4]{CaminaSiemons89a}, and then, by \cite{CaminaGagen84}, $G$ is flag-transitive and hence primitive on ${\cal P}$. The proof also uses the fact that a group of odd order is soluble. Recall that a point $\alpha$ and a line $\lambda$ are called $i$-incident if $\lambda$ meets the class of ${\mathfrak C}$ containing $\alpha$ in exactly $i$ points; also $C\in{\mathfrak C}$ is said to be $i$-incident with $\lambda$ if $|\lambda\cap C|=i$. \begin{theorem}\label{thm:normal} Assume that the {\sc Hypothesis} holds and that ${\mathfrak C}$ is $G$-normal and minimal. Let $\alpha\in C\in{\mathfrak C}$, and set $F:={\rm Fix}_{\cal P}(G_{({\mathfrak C}),\alpha})$. Then \begin{enumerate} \item[(a)] either \begin{enumerate} \item[(i)] $G_{({\mathfrak C})}=Z_p^a$ is semiregular of order $c=p^a$ for some odd prime $p$; or \item[(ii)] $F\cap C=\{\alpha\}$, and the set of classes of ${\mathfrak C}$ that contain some point of $F$ is a block of imprimitivity for $G^{\mathfrak C}$. In particular, if in addition ${\mathfrak C}$ is maximal, then either \begin{enumerate} \item[(ii-1)] $F=\{\alpha\}$, and $d_1\leq 1$; or \item[(ii-2)] $F$ consists of exactly one point from each class of ${\mathfrak C}$. \end{enumerate} \end{enumerate} \item[(b)] In particular, if $k \ge 2x$, then $c = p^a$ for some odd prime $p$, and either \begin{enumerate} \item[(i)] $G_{({\mathfrak C})}=Z_p^a$ is semiregular, or \item[(ii)] $G_{({\mathfrak C})} = Z_p^a \cdot Z_2,$ ${\rm spec}\,{\cal S} = \{1,2\},$ (a)(ii-2) holds for $F$, and ${\cal S}|_F = (F,{\cal L}|_F)$ is a linear space with lines of size $d_1$ admitting a line-transitive action by $N_G(G_{({\mathfrak C}),\alpha})$.
Moreover $d_1 = k - 2x \ge 2,$ $y = \binom{d_1}{2},$ $y$ divides $\binom{d}{2}$, and $d_1 - 1$ divides $d - 1.$ Also, for any pair $C,D$ of distinct classes of ${\mathfrak C},$ $G_{C,D}$ fixes setwise disjoint subsets of ${\mathfrak C} \setminus \{ C,D\}$ of sizes $d_1 - 2$ and $x.$ In particular if $d_1 = 2$ then $G^{\mathfrak C}$ is $2$-homogeneous. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} Part (b) was proved in \cite[Theorem 1.4]{PraegerTuan02}. Thus we only need to prove part (a). Let $N:= G_{({\mathfrak C})}$, so $N$ is a non-trivial normal subgroup of $G$ and ${\mathfrak C}$ is the set of $N$-orbits in ${\cal P}$. By the minimality of ${\mathfrak C}$, the permutation group $G_C^C$ is primitive on $C$, and $N^C$ is a transitive normal subgroup. Moreover, by Theorem~\ref{thm:CP93}, $N$ is faithful on $C$. Let $\alpha\in C$. Consider first the case where $N^C$ is regular. Then $N_\alpha=1$, and so $N$ is semiregular on ${\cal P}$. As discussed above, all involutions in $G$ fix at least one point, and it follows that $|N|$ is odd. Hence $N$ is soluble. Since $N^C\cong N$ is a soluble regular normal subgroup of the primitive group $G_C^C$, it follows that $G_C^C$ is of affine type and $N^C$ is elementary abelian of order $|C|=p^a$ for some odd prime $p$. Thus (a)~(i) holds. Suppose now that $N^C$ is not regular. Since $N^C$ is a normal subgroup of the transitive group $G_C^C$, ${\rm Fix}_C(N_\alpha)$ is a block of imprimitivity for $G_C^C$, and since $G_C^C$ is primitive it follows that ${\rm Fix}_C(N_\alpha)=\{\alpha\}$. Therefore $F={\rm Fix}_{\cal P}(N_\alpha)$ consists of at most one point from each class of ${\mathfrak C}$. If $\beta\in F$ then $N_\alpha\subseteq N_\beta$, and since $|N_\alpha|=|N|/c=|N_\beta|$ we have $N_\alpha=N_\beta$. Moreover, if $g\in G$ is such that $\alpha^g=\beta$, then $(N_\alpha)^g=N_{\alpha^g}=N_\beta=N_\alpha$, that is $g\in N_G(N_\alpha)$.
It follows that $M:=N_G(N_\alpha)$ is transitive on ${\rm Fix}_{\cal P}(N_\alpha)$. Since $G_\alpha\leq M<G$, we deduce that ${\rm Fix}_{\cal P}(N_\alpha)$ is a block of imprimitivity for $G$ in ${\cal P}$ and its setwise stabiliser is $M$. Let $\mathcal{D}$ be the subset of ${\mathfrak C}$ consisting of those classes containing a point of $F$, and let $D:=\bigcup_{C'\in\mathcal{D}}C'$. Since $N$ fixes each class of ${\mathfrak C}$ setwise, $D$ is $N$-invariant, and since $F$ is $M$-invariant, $D$ is also $M$-invariant. Thus $NM$ leaves $D$ invariant, and since $N$ is transitive on each class of ${\mathfrak C}$ and $M$ is transitive on $F$, it follows that $NM$ is transitive on $D$. Again, since $G_\alpha<NM\leq G$ it follows that $D$ is also a block of imprimitivity for $G$ in ${\cal P}$. It then follows easily that $\mathcal{D}$ is a block of imprimitivity for the action of $G$ on ${\mathfrak C}$. If ${\mathfrak C}$ is maximal then $G$ is primitive on ${\mathfrak C}$, so either $\mathcal{D}=\{C\}$ or $\mathcal{D}={\mathfrak C}$, that is, either $D=C$ or $D={\cal P}$, and hence either $F=\{\alpha\}$ or (a)~(ii-2) holds respectively. Finally if $F=\{\alpha\}$, we claim that $d_1\leq1$. If $d_1\geq2$ and $\alpha,\beta$ are 1-incident with a line $\lambda$, then $N_\lambda$ fixes both $\alpha$ and $\beta$. Moreover, by \cite[Corollary 4.2]{PraegerTuan02}, $N_\lambda=N_\alpha=N_\beta$, and so $F$ contains both $\alpha$ and $\beta$ contrary to our assumption.
\begin{enumerate} \item[(a)] Then the $G_{({\mathfrak C})}$-orbits on lines have equal length $c/z$ where $z$ divides $\gcd(c,m)$, and each prime divisor of $|G_{({\mathfrak C}),\alpha}|$ is at most $m(m-1)/z$. \item[(b)] If $m=2$, then $G_{({\mathfrak C}),\alpha}$ is a Sylow $2$-subgroup of $G_{({\mathfrak C})}$, and ${\rm Fix}_\mathcal{P}(G_{({\mathfrak C}),\alpha})$ consists of exactly one point from each class of ${\mathfrak C}$. Moreover either \begin{enumerate} \item[(i)] ${\rm Soc}(G_{({\mathfrak C})})=Z_p^a$ (so $G^C$ is affine) and $c=p^a$ for some odd prime $p$ and $a\geq1$, or \item[(ii)] ${\rm Soc}(G_{({\mathfrak C})})={\rm PSL}(2,q)^a$, where $a\geq 2$ and $(c,q)=(21^a, 7)$ or $(45^a,9)$. \end{enumerate} \item[(c)] If $m=3$ or $4$, then $G_{({\mathfrak C}),\alpha}$ is a $\{2,3\}$-group, and $G_{({\mathfrak C}),\alpha}$ has an orbit in $C$ of length $\ell$, where $\ell>1$ and $\ell$ divides $m(m-1)$. \end{enumerate} \end{theorem} \begin{proof} Set $N:=G_{({\mathfrak C})}$. By Theorem~\ref{thm:normal}, ${\rm Fix}_C(N_\alpha)=\{\alpha\}$. Suppose that $\beta\in C\setminus\{\alpha\}$ and that the (unique) line $\lambda$ containing $\alpha$ and $\beta$ is $m$-incident with $C$. By \cite[Proposition 4.1]{DelandtsheerNiemeyeretal01}, the $N$-orbits on lines have constant length $|N:N_\lambda|= c/z$ for some divisor $z$ of $c$. Since $|\lambda\cap C|=m$, we have, for each $\gamma\in \lambda\cap C$, that $m_\gamma:=|N_\lambda: N_{\lambda\,\gamma}|\leq m$. Thus $c=|N:N_\gamma|$ divides $m_\gamma c/z$, and so $z$ divides $m_\gamma$. Since $\lambda\cap C$ is $N_\lambda$-invariant, it is a disjoint union of $N_\lambda$-orbits, of lengths $m_\gamma$ each divisible by $z$, and hence $z$ divides $m=|\lambda\cap C|$. Now $N_{\alpha\,\beta}$ fixes $\lambda$, and hence $N_{\alpha\,\beta} \leq N_{\lambda\,\alpha}\leq N_\alpha$. Thus the length $\ell$ of the $N_\alpha$-orbit containing $\beta$ is $\ell=|N_\alpha:N_{\alpha\,\beta}|=\ell_1\ell_2$, where $\ell_1=|N_\alpha:N_{\lambda\,\alpha}|$ and $\ell_2= |N_{\lambda\,\alpha}:N_{\alpha\,\beta}|$.
Now $\ell_2$ is the length of the $N_{\lambda\,\alpha}$-orbit containing $\beta$. This is at most the length $m_\beta$ of the $N_\lambda$-orbit containing $\beta$ which, in turn, is contained in $(\lambda\cap C)\setminus\{\alpha\}$, whence $\ell_2\leq m_\beta\leq m-1$. Also \[ \ell_1=|N_\alpha:N_{\lambda\,\alpha}|=\frac{|N:N_\lambda|\,|N_\lambda:N_{\lambda\,\alpha}|}{|N:N_\alpha|} =\frac{1}{z}\, |N_\lambda: N_{\lambda\,\alpha}|\leq\frac{m}{z} \] and hence $\ell=\ell_1\ell_2\leq m(m-1)/z$. By Theorem~\ref{thm:CP93}, $N$ is faithful on $C$, and hence $N_\alpha^C\cong N_\alpha\ne 1$. Thus $N^C$ is a non-regular normal subgroup of the primitive group $G_C^C$. By \cite[Corollary~2.2\,(c)]{BettenDelandtsheeretal02}, each prime $p$ dividing $|N_\alpha|$ also divides the order of the permutation group induced by $N_\alpha$ on any of its orbits in $C\setminus\{\alpha\}$. Therefore $p$ is at most the minimum of the lengths of the $N_\alpha$-orbits in $C\setminus\{\alpha\}$, and in particular $p\leq m(m-1)/z$. This completes the proof of part (a). We now use this strategy a little more carefully to prove the other parts. Suppose that $m=2, 3$ or 4. Since $N_{\alpha\,\beta} \leq N_{\lambda\,\alpha}\leq N_\alpha$, the permutation group induced by $N_\alpha$ on its orbit containing $\beta$ is a subgroup of $S_{\ell_2}\wr S_{\ell_1}$, and since $\ell_1\leq m/z\leq 4$ and $\ell_2\leq m-1\leq 3$, it follows that the only primes dividing the order of this induced permutation group are 2 or 3. It follows from \cite[Corollary~2.2\,(c)]{BettenDelandtsheeretal02} that $N_\alpha$ is a $\{2,3\}$-group. Since ${\rm Fix}_C(N_\alpha)=\{\alpha\}$, all $N_\alpha$-orbits in $C\setminus\{\alpha\}$ have length at least 2, and the $N_\alpha$-orbit containing $\beta$ has length $\ell=\ell_1\ell_2\leq m(m-1)/z$. If $m=2$, then we must have $\ell=2$. 
In this case, by \cite[Theorem~2.3\,(c)]{BettenDelandtsheeretal02}, $N_\alpha$ is a Sylow 2-subgroup of $N$ and (since $N\cong N^C$) one of (b)~(i) or (b)~(ii) holds (but possibly with $a=1$ in the case of (b)~(ii)). By Sylow's Theorems, the stabiliser in $N$ of an arbitrary point of ${\cal P}$ is conjugate to $N_\alpha$, and hence $N_\alpha$ fixes a point from each class of ${\mathfrak C}$. Thus (b) is proved, except for proving that $a\geq 2$ in case (b)~(ii) (see below for the proof). Suppose next that $m=3$. Then $\ell_1\leq 3$, $\ell_2\leq 2$, and $\ell=\ell_1\ell_2\leq 6/z$. Thus either $\ell$ divides 6 or $\ell_1=\ell_2=2$ and $z=1$. However in the latter case, $|N_\lambda:N_{\lambda\,\alpha}|=|N_\alpha:N_{\lambda\,\alpha}|=\ell_1=2$, and since $|\lambda\cap C|=m=3$ it follows that $N_\lambda$ fixes a point of $(\lambda\cap C)\setminus\{\alpha\}$, and hence that $\ell_2=|N_{\lambda\,\alpha}:N_{\alpha\,\beta}|\leq m-2$, contradicting the fact that $\ell_2=2$. Now assume that $m=4$. Here $\ell_1\leq 4/z, \ell_2\leq \min\{3, m_\beta\}$ and $\ell=\ell_1\ell_2\leq 12/z$. Hence $\ell$ divides 12, or $\ell_1=4,\,\ell_2=2,\,z=1$, or $\ell_1=\ell_2=3,\,z=1$. In the third case, we obtain a contradiction using the same argument as for $m=3$. Thus we may assume that $\ell_1=4,\,\ell_2=2,\,z=1$. In this case $|N_\lambda:N_{\lambda\,\alpha}|= |N_\alpha:N_{\lambda\,\alpha}|=\ell_1=4$ so $N_\lambda$ is transitive on $\lambda\cap C$. Since $|N_{\lambda\,\alpha}:N_{\alpha\,\beta}|= \ell_2=2$ it follows that $N_{\lambda\,\alpha}$ fixes two points of $\lambda\cap C$ and interchanges the remaining two points. Replacing $\beta$ with the point in $(\lambda\cap C)\setminus\{\alpha\}$ fixed by $N_{\lambda\,\alpha}$, we obtain a new $N_\alpha$-orbit of length equal to the new value of $\ell$, namely 4. This completes the proof of part (c). It remains to prove that $a\geq2$ in part (b)~(ii). Suppose then that (b)~(ii) holds with $a=1$.
We showed above that the $N_\alpha$-orbit $\Delta:=\{\beta^g\,|\,g\in N_\alpha\}$ has length 2. Let $\lambda_0$ be the (unique) line containing $\Delta$. Then $N_\alpha$ fixes $\lambda_0$ setwise, and hence $\lambda_0$ is a union of some $N_\alpha$-orbits. Let $\mathcal{D}:=\{\Delta^g\,|\,g\in G\}$ and $\mathcal{D}_C:=\{\Delta^g\,|\,g\in G_C\}$. By \cite[Lemma 3.2\,(c)]{BettenDelandtsheeretal02}, $|\mathcal{D}_C|=2c$ and it follows that $|\mathcal{D}|=2cd=2v$. Now the setwise stabiliser of $\Delta$ satisfies $G_\Delta\leq G_{\lambda_0}<G$, and since $G$ is line-transitive it follows that each line contains exactly $s:=|G_{\lambda_0}:G_\Delta|$ elements of $\mathcal{D}$, and that the stabiliser of a line acts transitively on the $s$ elements of $\mathcal{D}$ it contains. This implies that $bs=|\mathcal{D}|=2v\leq 2b$ (by Fisher's inequality (\ref{eqn:fisher})), so $s\leq 2$. Substituting $bs=2v$ into (\ref{eqn:b}) yields $v-1=2k(k-1)/s$. Since $N_\alpha$ fixes a unique point in each class of ${\mathfrak C}$, it follows from \cite[Lemma 3.1]{BettenDelandtsheeretal02} that $N_\alpha$ has $2d$ orbits of length $2$ (all members of $\mathcal{D}$), $2d$ orbits of length $4$ (each the union of two disjoint members of $\mathcal{D}$), and either $d$ (if $q=7$) or $4d$ (if $q=9$) orbits of length $8$. Moreover, if $q=9$ then exactly $2d$ of the $N_\alpha$-orbits of length $8$ are unions of pairwise disjoint members of $\mathcal{D}$. Also, the union $D$ of a fixed point, say $\gamma$, of $N_\alpha$ and an $N_\alpha$-orbit of length 2 lying in the class $C(\gamma)$ is a block of imprimitivity for $N$ in $C(\gamma)$, and $D$ contains three members of $\mathcal{D}$. Thus, since $s\leq2$, $\lambda_0$ contains no $N_\alpha$-orbits of length 4, and in the case $q=9$, $\lambda_0$ contains none of the $N_\alpha$-orbits of length 8 which are unions of elements of $\mathcal{D}$. 
Also $\lambda_0$ does not contain an $N$-block of imprimitivity of length 3 in any class of ${\mathfrak C}$, and in particular $\alpha\not\in\lambda_0$. If $s=2$ then the second element $\Delta'$ of $\mathcal{D}$ contained in $\lambda_0$ is also fixed setwise by $N_\alpha$. Since $N_\alpha$ fixes exactly one point in each class of ${\mathfrak C}$ and since $\Delta'$ is contained in some class, it follows that $\Delta'$ is an $N_\alpha$-orbit of length 2. Since $M:=N_G(N_\alpha)$ is transitive on $F:={\rm Fix}_{\cal P}(N_\alpha)$, it follows that $M$ is transitive on ${\mathfrak C}$, and hence $M$ permutes the $N_\alpha$-orbits of length $8$ in orbits of length $d$ (or possibly $2d$ if $q=9$). Now $G_\alpha\leq M$ and $M$ is transitive on the $d$ fixed points of $N_\alpha$, and so $|G:M|=c$. Also $M$ is transitive on the set of $2d$ elements of $\mathcal{D}$ which are $N_\alpha$-orbits. Thus the $M$-orbit $\lambda_0^M$ containing $\lambda_0$ consists of $2d/s$ lines. Let $\lambda_0$ contain $a_i$ of the $N_\alpha$-orbits of length $i$, for $i=1$ and $i=8$. Then $k=a_1+2s+8a_8$. Since at most $2d$ orbits of $N_\alpha$ of length 8 (or at most $d$ if $q=7$) lie in lines in $\lambda_0^M$, and since each such orbit is contained in at most one line, it follows that $a_8|\lambda_0^M|\leq 2d$ (at most $d$ if $q=7$), and hence $a_8\leq s$. We deduce that either (i) $a_8=0$, or (ii) $a_8=1$ and $s=2$, or (iii) $q=9$ and $a_8=s\leq 2$. In particular $k\leq a_1+10s$. Suppose first that $\lambda_0^M$ contains all the lines fixed by $N_\alpha$. Any line $\lambda'$ containing a pair of points of $F={\rm Fix}_{\cal P}(N_\alpha)$ is fixed by $N_\alpha$, and by assumption $\lambda'\in\lambda_0^M$. It follows that $\lambda_0$ must contain at least one pair of points from $F$, so $a_1\geq2$. There are $d(d-1)$ such ordered pairs of points, and each of the $2d/s$ lines of $\lambda_0^M$ contains $a_1(a_1-1)$ of them. Therefore $a_1(a_1-1)=(d-1)s/2$.
We showed above that $v-1=2k(k-1)/s$, and so \begin{eqnarray*} 21d-1\leq v-1&=&\frac{2}{s} k(k-1)\leq \frac{2}{s}(a_1+10s)(a_1+10s-1)\\ &=&\frac{2}{s}a_1(a_1-1) + 20(2a_1-1)+200s\\ &=&d-1 + 40a_1 -20 + 200s. \end{eqnarray*} Hence $d\leq 2a_1+10s-1$, so $a_1^2-a_1=(d-1)s/2\leq s(a_1+5s-1)$. This implies that $a_1\leq 3$ if $s=1$ and $a_1\leq 6$ if $s=2$. For $s=1$, we compute the possibilities for $a_1=2$ or 3, $d=2a_1(a_1-1)+1, k=a_1+2+8a_8$ (with $a_8=0$ or 1), and $c=(2k(k-1)+1)/d$. In no case do we find $c=21$ or 45, so we have a contradiction. Hence $s=2$ and so $d=a_1(a_1-1)+1, k=a_1+4+8a_8$ and $c=(k(k-1)+1)/d$. We compute all possibilities for these parameters with $a_1=2,\dots,6$ and $a_8=0, 1, 2$. The only case for which $c$ turns out to be 21 or 45 is $(a_1,a_8,c)=(6,2,21)$. However we showed above that $a_8=2$ is only possible if $q=9$ and $c=45$. Thus we have a contradiction. Therefore there is a line $\lambda'$ fixed by $N_\alpha$ and not lying in $\lambda_0^M$, and hence containing no $N_\alpha$-orbit of length 2. The $s$ (at most 2) elements of $\mathcal{D}$ contained in $\lambda'$ are fixed setwise by $N_\alpha$, and therefore $s=2$ and $N_\alpha$ interchanges the two elements of $\mathcal{D}$ in $\lambda'$. Thus $\lambda'$ contains a unique $N_\alpha$-orbit of length 4. By \cite[Lemma 3.1\,(a)]{BettenDelandtsheeretal02}, there are $2d$ such orbits and they are permuted transitively by $M$, and hence $(\lambda')^M$ forms a second $M$-orbit of lines fixed by $N_\alpha$, and $|(\lambda')^M|=2d$, while $|\lambda_0^M|=2d/s=d$. Since any line fixed by $N_\alpha$ and not lying in $\lambda_0^M$ contains an $N_\alpha$-orbit of length 4, these two $M$-orbits contain all of the lines fixed by $N_\alpha$. Suppose that $\lambda'$ contains $e_i$ of the $N_\alpha$-orbits of length $i$, for $i=1, 8$. Then $k=e_1+4+8e_8$. Each line containing two points of $F$ is fixed by $N_\alpha$ and so lies in $\lambda_0^M$ or in $(\lambda')^M$.
Thus $d(d-1)=da_1(a_1-1)+2de_1(e_1-1)$, that is, \[ d-1=a_1(a_1-1)+2e_1(e_1-1). \] Also $k=a_1+4+8a_8=e_1+4+8e_8$. If $\lambda'\supseteq F$, then $\lambda'$ would be fixed setwise by $M$, which is not the case. Hence $\lambda'\cap F\ne F$ so $e_1\leq d-1$. Similarly $a_1\leq d-1$. If $e_8=0$ then, using $v-1=2k(k-1)/s=k(k-1)$, we obtain \begin{eqnarray*} 21d-1\leq v-1&=& k(k-1)= (e_1+4)(e_1+3)\\ &=&e_1(e_1-1) + 8e_1+12\\ &\leq&\frac{d-1}{2} + 8(d-1)+12 \end{eqnarray*} which is a contradiction. Hence $e_8\geq 1$, which implies that there are $2de_8$ orbits of $N_\alpha$ of length 8 contained in lines in $(\lambda')^M$. If $a_8=0$, then a similar argument leads to a contradiction. Hence $a_8\geq 1$, which implies that there are $da_8$ orbits of $N_\alpha$ of length 8 contained in lines in $\lambda_0^M$. By \cite[Lemma 3.1\,(a)]{BettenDelandtsheeretal02}, there are $d$, $4d$ orbits of $N_\alpha$ of length 8, for $q=7,9$ respectively. Hence $q=9$, $e_8=1$, and $1\leq a_8\leq 2$. However, this means that there are at least $3d$ of the $N_\alpha$-orbits of length 8 contained in lines in $(\lambda')^M$ or $\lambda_0^M$, and by \cite[Lemma 3.1\,(c)]{BettenDelandtsheeretal02}, some of these are unions of elements of $\mathcal{D}$. This contradicts the fact that each line in $(\lambda')^M$ or $\lambda_0^M$ contains only two elements of $\mathcal{D}$. \end{proof} Our last result in this subsection is applied in particular in $G$-normal situations when the bottom group is 2-transitive. \begin{theorem}\label{not28} Assume that the {\sc Hypothesis} holds and that ${\mathfrak C}$ is $G$-normal and minimal such that $L:=G^C$ is almost simple and $S={\rm Soc}(L)$ has at most two conjugacy classes of subgroups isomorphic to $S_\alpha$ (where $\alpha\in C$). (In particular this holds if $L$ is $2$-transitive and almost simple.) Then there is a second $G$-invariant partition ${\mathfrak C}'$ of $\mathcal{P}$ with $c$ classes of size $d$ such that $S\cong{\rm Soc}(G^{{\mathfrak C}'})$. 
Moreover, each class of ${\mathfrak C}'$ meets each class of ${\mathfrak C}$ in exactly one point. \end{theorem} \begin{proof} Since the socle of an almost simple 2-transitive permutation group of degree $c$ has at most two pairwise inequivalent transitive representations of degree $c$, the condition on ${\rm Soc}(L)$ holds if $L$ is almost simple and 2-transitive. (This follows from the classification of finite 2-transitive groups, see~\cite{Cam}.) Since ${\mathfrak C}$ is $G$-normal, $G_{({\mathfrak C})}\ne1$. Let $S:={\rm Soc}(G_{({\mathfrak C})})$, so $S$ is normal in $G$. Since ${\mathfrak C}$ is minimal, the classes of ${\mathfrak C}$ are the orbits of $S$ in ${\cal P}$. Hence by Theorem~\ref{thm:CP93}, $S$ is faithful on $C$. Since $L$ is almost simple and $S^C$ is normal in $L$, it follows that $S^C={\rm Soc}(L)$ and so $S\cong S^C$ is simple. Let $\alpha\in C$, and let $F$ denote the set of fixed points in ${\cal P}$ of the stabiliser $S_\alpha$. Since $S^C$ is normal in the primitive group $L$, $F\cap C$ is a block of imprimitivity for $L$ in $C$, and so $F\cap C$ is either $C$ or $\{\alpha\}$. However, since $L$ is primitive and almost simple, its socle $S^C$ is not regular, and hence $F\cap C=\{\alpha\}$. Moreover, since $S$ is normal in $G$, $F$ is a block of imprimitivity for $G$ in ${\cal P}$, and so determines a non-trivial $G$-invariant partition, ${\mathfrak C}'$ say. We claim that $F$ consists of exactly one point from each class of ${\mathfrak C}$. Define a relation on the classes of ${\mathfrak C}$ by $C_1 \sim C_2$ if, for $\alpha_1\in C_1$, $S_{\alpha_1}$ fixes a point in $C_2$. Clearly this is an equivalence relation and there are at most two equivalence classes by the assumption on ${\rm Soc}(L)$. Moreover, it follows, from the definition of $\sim$, that $\sim$ is $G$-invariant.
Suppose that there are two $\sim$-equivalence classes, and consider the two-class partition of ${\cal P}$ where each class is the union of the ${\mathfrak C}$-classes in a $\sim$-equivalence class. This is a $G$-invariant point partition with two classes, contradicting Theorem~\ref{thm:CNP}~(ii). Thus there is only one $\sim$-equivalence class, and hence $F$ contains at least one point from each class of ${\mathfrak C}$. The argument in the previous paragraph shows conversely that $|F\cap C| \leq1$ for each class $C$, and hence $F$ consists of exactly one point from each class of ${\mathfrak C}$, proving the claim. Finally, $L$ acts on ${\mathfrak C}'$ in the same way that it acts on $C$. Then, as $S\triangleleft\, G$, we must have that $G^{{\mathfrak C}'}$ is almost simple with socle $S^{{\mathfrak C}'}\cong S$. \end{proof} \bigskip \subsection{Sifting for primitive bottom groups in the $G$-normal case}\label{sub:normalsift} In this subsection we describe an algorithm for restricting the possibilities for the bottom group $L=G^C$ in the case where the partition ${\mathfrak C}$ is $G$-normal and minimal. This will complete {\sc Step} 4 of the overview in Section~\ref{sect:outline}. \begin{algorithm}\label{alg:gnormalbottomgroup}{\rm ({\sc GNormalSift})\\ {\sc Input:}\quad {\sc ParameterListA}$(k^{(r)}_{\max})$ from Algorithm~\ref{alg:step3and4}. 
\smallskip\noindent {\sc Output:}\quad An enhanced {\sc ParameterListA}$(k^{(r)}_{\max})$ such that each {\sc Line} has an extra entry {\sc GNormalBottomGroups} appended: the algorithm may prove that ${\mathfrak C}$ is not $G$-normal and in this case {\sc GNormalBottomGroups} is an empty list; if this is not the case then if {\sc BottomGroups} has first entry `{\sc Prim}$(c)$ unavailable', then {\sc GNormalBottomGroups} contains `{\sc Prim}$(c)$ unavailable' and possibly some other information about the bottom groups in the case when ${\mathfrak C}$ is $G$-normal; otherwise {\sc GNormalBottomGroups} is a (possibly empty) list of candidate bottom groups for this {\sc Line} in the case when ${\mathfrak C}$ is $G$-normal. \smallskip\noindent for each {\sc Line} from {\sc ParameterListA}$(k^{(r)}_{\max})$ \\ \phantom{mm} set {\sc GNormalBottomGroups}$:=$ {\sc BottomGroups};\\ \phantom{mm} if $k \ge 2x$ and $c$ is not an odd prime power then \\ \phantom{mm} \phantom{mm} reset {\sc GNormalBottomGroups}$:=$ an empty list \\ \phantom{mm} \phantom{mm} skip to instruction $(\ast)$ (Theorem~\ref{thm:normal}~(b));\\ \phantom{mm} if the first entry of {\sc GNormalBottomGroups} is `{\sc Prim}$(c)$ unavailable' then\\ \phantom{mm} \phantom{mm} if $k > 2x + \frac{3}{2}+\sqrt{4x - \frac{7}{4}}$,\quad or if $x\leq 8$\\ \phantom{mm} \phantom{mm} \phantom{mm} add to {\sc GNormalBottomGroups}\ `$L$ is affine' (Theorem~\ref{thm:semiregular});\\ \phantom{mm} \phantom{mm} if $2\in{\rm spec}\,{\cal S}$ \\ \phantom{mm} \phantom{mm} \phantom{mm} if $c=c_0^a$ with $a\geq2$ and $c_0=45$ or $21$ \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} add to {\sc GNormalBottomGroups}\ `${\rm Soc}(L)={\rm PSL}(2,q)^a$';\\ \phantom{mm} \phantom{mm} \phantom{mm} or if $c$ is an odd prime power \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} add to {\sc GNormalBottomGroups}\ `$L$ is affine'; \\ \phantom{mm} \phantom{mm} \phantom{mm} else reset {\sc GNormalBottomGroups}$:=$ an empty list \\ 
\phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} skip to instruction $(\ast)$ (Theorems~\ref{thm:normal}~(a) and~\ref{notL27}~(b));\\ \phantom{mm} \phantom{mm} if $3$ or $4\in{\rm spec}\,{\cal S}$ \\ \phantom{mm} \phantom{mm} \phantom{mm} add to {\sc GNormalBottomGroups} `${\rm Soc}(L)_\alpha$ is a $\{2,3\}$-group' (Theorem~\ref{notL27}~(c));\\ \phantom{mm} else \\ \phantom{mm} \phantom{mm} for each $L\in ${\sc BottomGroups} \\ \phantom{mm} \phantom{mm} \phantom{mm} if $L$ is not affine \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $k > 2x + \frac{3}{2}+\sqrt{4x - \frac{7}{4}}$, \quad or if $x\leq 8$\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} remove $L$ from {\sc GNormalBottomGroups}\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} skip to next $L$ (Theorem~\ref{thm:semiregular});\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $2\in{\rm spec}\,{\cal S}$ \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $c$ is not $c_0^a$ with $a\geq 2$ and $c_0=45$ or $21$ \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} remove $L$ from {\sc GNormalBottomGroups} \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} skip to next $L$ (Theorem~\ref{notL27}~(b)); \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if $3$ or $4\in{\rm spec}\,{\cal S}$ \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if the point stabilizer ${\rm Soc}(L)_\alpha$ is not a $\{2,3\}$-group \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} remove $L$ from {\sc GNormalBottomGroups} \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} skip to next $L$ (Theorem~\ref{notL27}~(c), since ${\rm Soc}(L)_\alpha\leq G^C_{({\mathfrak C}),\alpha}$);\\ \phantom{mm} \phantom{mm} \phantom{mm} if $L$ is almost simple \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if {\sc PossibleGrid} is `no' 
\\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} if ${\rm Soc}(L)$ has $\leq 2$ conjugacy classes of subgroups isomorphic to ${\rm Soc}(L)_\alpha$ \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} remove $L$ from {\sc GNormalBottomGroups} \\ \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} \phantom{mm} skip to next $L$ (Theorem~\ref{not28});\\ \phantom{mm} $(\ast)$ append {\sc GNormalBottomGroups} to this {\sc Line};\\ return {\sc ParameterListA}$(k^{(r)}_{\max})$. } \end{algorithm} \smallskip The result of applying Algorithm~\ref{alg:gnormalbottomgroup} in the case $k^{(r)}_{\max}=8$ was a list of 36 {\sc Lines} in which {\sc GNormalBottomGroups} was not an empty list. These {\sc Lines} are given in Section~\ref{sect:results}, where those {\sc Lines} with $k\leq 8$ are distinguished (for reasons discussed there). See Table~\ref{tab:outputgnormal} and Table~\ref{tab:outputkleq8}. \section{Results of algorithms applied for $k^{(r)}\leq 8$}\label{sect:results} In this section we present summaries of the output of Algorithms~\ref{alg:step3and4},~\ref{alg:qptopgroup} and \ref{alg:gnormalbottomgroup} in the case $k^{(r)}_{\max}=8$. In \cite{CaminaMischke96}, Camina and Mischke classified all line-transitive and point-imprimitive linear spaces with $k\leq 8$, extending the classification of \cite{NNOPP, KPP93}. We decided to include these cases in the parameter sift so that it could provide a check of our sifting process. In light of their result, we present the surviving {\sc Lines} from {\sc ParameterListA}$(8)$ in three separate tables: (i) those {\sc Lines} with $k\leq 8$; (ii) those {\sc Lines} with $k>8$ and which are potentially quasiprimitive; and (iii) those {\sc Lines} with $k>8$ and which are potentially $G$-normal. The entry in the column labeled {\sc Line} in each of these tables is the line number from {\sc ParameterList}$(8)$, which is given in increasing order of $v$.
The column labeled ``int type'' records the intersection type $(0^{d_0},1^{d_1},\ldots, k^{d_k})$ by omitting the entry $0^{d_0}$ and any entries $i^{d_i}$ for which $d_i=0$. This means that ${\rm spec}\,{\cal S}$ can be read off immediately by ignoring the superscripts $d_i$ of the remaining entries in the intersection type. Following each table is a summary of relevant information obtained from the algorithms concerning the candidate top and bottom groups in each case. \subsection{Surviving {\sc Lines} with $k\leq 8$}\label{subsec:outputkleq8} Table~\ref{tab:outputkleq8} displays the output of Algorithms~\ref{alg:step3and4},~\ref{alg:qptopgroup} and \ref{alg:gnormalbottomgroup} with the restriction $k\leq 8$. All of these {\sc Lines} occur in the output of Algorithm~\ref{alg:gnormalbottomgroup}, and hence are potentially $G$-normal. The information obtained concerning the candidate groups is displayed in Table~\ref{tab:outputgnormalgrpskleq8}. Algorithm~\ref{alg:qptopgroup} also produced {\sc Line} 2 as the unique (potentially quasiprimitive) output, with candidate top group $G\cong G^{\mathfrak C}\cong{\rm PSL}(3,2)<S_7$ and bottom group $G^C$ either $A_3$ or $S_3$.
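The arithmetic constraints on each {\sc Line} can be verified mechanically. The following sketch (in Python; the function names are ours) assumes only the identities $v=dc$, $k=k^{(v)}k^{(r)}$, $b=b^{(v)}b^{(r)}$, $bk(k-1)=v(v-1)$, Fisher's inequality $b\geq v$, the Delandtsheer--Doyen equalities $xd=\binom{k}{2}-y$ and $yc=\binom{k}{2}-x$, and the counting identities $\sum_i i\,d_i=k$ and $\sum_i\binom{i}{2}d_i=x$ for the intersection type; it checks {\sc Lines} 1 and 3 of Table~\ref{tab:outputkleq8}.

```python
from math import comb

def check_line(d, c, x, y, kv, kr, bv, br):
    # Basic consistency checks for one parameter Line:
    # v = d*c points, k = kv*kr, b = bv*br.
    v, k, b = d * c, kv * kr, bv * br
    pairs = comb(k, 2)
    return (b * k * (k - 1) == v * (v - 1)   # b = v(v-1)/(k(k-1))
            and b >= v                        # Fisher's inequality
            and x * d == pairs - y            # d = (C(k,2)-y)/x
            and y * c == pairs - x)           # c = (C(k,2)-x)/y

def check_int_type(d, k, x, int_type):
    # int_type maps i to d_i for the non-omitted entries with i >= 1.
    # A line carries k points and x inner pairs, and meets at most d classes.
    return (sum(i * di for i, di in int_type.items()) == k
            and sum(comb(i, 2) * di for i, di in int_type.items()) == x
            and sum(int_type.values()) <= d)

# Line 1: d.c = 3.7, (x,y) = (3,1), k = 1*5, b = 21*1, int type (1^2,3)
assert check_line(3, 7, 3, 1, 1, 5, 21, 1)
assert check_int_type(3, 5, 3, {1: 2, 3: 1})
# Line 3: d.c = 5.5, (x,y) = (1,1), k = 1*4, b = 25*2, int type (1^2,2)
assert check_line(5, 5, 1, 1, 1, 4, 25, 2)
assert check_int_type(5, 4, 1, {1: 2, 2: 1})
```

The same checks pass, for instance, on {\sc Line} 23 of Table~\ref{tab:outputqp}, where $b\,k(k-1)=1860\cdot 132=245520=496\cdot 495=v(v-1)$.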
\begin{table*}[!h] \begin{center} \begin{footnotesize} \begin{tabular}{|cr@{\,$\cdot$\,}lr@{\,,\,}lr@{\,,\,}lr@{\,$\cdot$\,}lr@{\,$\cdot$\,}lcc|} \hline {\sc Line} & $d$ & $c$ & $(x$ & $y)$ & $(\gamma$ & $\delta)$ & $k^{(v)}$ & $k^{(r)}$ & $b^{(v)}$ & $b^{(r)}$ & int type & $t_{\max}$ \cr \hline 1& 3&7 & (3&1) & (6&2) & 1&5 & 21&1 & $(1^2,3)$ &3\\ 2& 7&3 & (1&3) & (2&6) & 1&5 & 21&1 & $(1^3,2)$ &2\\ 3& 5&5 & (1&1) & (2&2) & 1&4 & 25&2 & $(1^2,2)$ &5\\ 4& 3&19 & (9&1) & (18&2) & 1&8 & 57&1 & $(1,3,4)$ &3\\ 5& 19&3 & (1&9) & (2&18) & 1&8 & 57&1 & $(1^6,2)$ &2\\ 6& 9&9 & (1&1) & (2&2) & 1&5 & 81&4 & $(1^3,2)$ &2\\ 7& 5&17 & (4&1) & (8&2) & 1&7 & 85&2 & $(1^2,2,3)$ &5\\ 8& 17&5 & (1&4) & (2&8) & 1&7 & 85&2 & $(1^5,2)$ &1\\ 9& 7&13 & (2&1) & (4&2) & 1&6 & 91&3 & $(1^2,2^2)$ &2\\ 10& 13&7 & (1&2) & (2&4) & 1&6 & 91&3 & $(1^4,2)$ &2\\ 12& 13&13 & (2&2) & (4&4) & 1&8 & 169&3 & $(1^4,2^2)$ &1\\ 14& 9&25 & (3&1) & (6&2) & 1&8 & 225&4 & $(1^2,2^3)$ &2\\ 15& 9&25 & (3&1) & (6&2) & 1&8 & 225&4 & $(1^5,3)$ &2\\ 16& 25&9 & (1&3) & (2&6) & 1&8 & 225&4 & $(1^6,2)$ &2\\ 35& 27&27 & (1&1) & (2&2) & 1&8 & 729&13 & $(1^6,2)$ &2\\ \hline \end{tabular} \caption{Surviving {\sc Lines} with $k\leq 8$\label{tab:outputkleq8}} \end{footnotesize} \end{center} \end{table*} \begin{table*}[!h] \begin{center} \begin{footnotesize} \begin{tabular}{|c|l|l|} \hline {\sc Line} & $G^{\mathfrak C}$ & $G^C$ \cr \hline 1& $A_{3}, S_{3}$ & ${\rm PSL}(3,2), C_7, D_{14}, 7:3, {\rm AGL}(1,7)$ \\ 2& ${\rm PSL}(3,2), C_7, D_{14}, 7:3, {\rm AGL}(1,7)$ & $A_{3}, S_{3}$ \\ 3& $A_5, S_5, D_{10}, {\rm AGL}(1,5)$ & $D_{10}, {\rm AGL}(1,5)$ \\ 4& $A_{3}, S_{3}$ & $C_{19}, D_{38}, 19:3, 19:6, 19:9$ \\ 5& $C_{19}, D_{38}, 19:3, 19:6, 19:9, {\rm AGL}(1,19)$ & $A_{3}, S_{3}$ \\ 6& affine & affine \\ 7& $A_5, S_5, D_{10}, {\rm AGL}(1,5)$ & $D_{34}, 17:4, 17:8$ \\ 8& $D_{34}, 17:4, 17:8$ & $D_{10}, {\rm AGL}(1,5)$ \\ 9& ${\rm PSL}(3,2), 7:3, {\rm AGL}(1,7)$ & $13:3, 13:6, {\rm AGL}(1,13)$ \\ 10& ${\rm PSL}(3,3), 13:3, 13:6, 
{\rm AGL}(1,13)$ & $7:3, {\rm AGL}(1,7)$ \\ 12& $13:3, 13:6$ & $13:3, 13:6, {\rm AGL}(1,13)$ \\ 14& affine & affine \\ 15& affine & affine; $(A_5\times A_5):2 \leq G^C \leq (S_5\times S_5):2$ \\ 16& affine; $(A_5\times A_5):2 \leq G^{\mathfrak C} \leq (S_5\times S_5):2$ & affine \\ 35& $3^3:13, {\rm AGL}(1,27), 3^3.13.3, {\rm A\Gamma L}(1,27),$ & $3^3:13, {\rm AGL}(1,27), 3^3.13.3, {\rm A\Gamma L}(1,27),$ \\ & ${\rm ASL}(3,3), {\rm AGL}(3,3)$ & ${\rm ASL}(3,3), {\rm AGL}(3,3)$ \\ \hline \end{tabular} \caption{Candidate groups for surviving potentially $G$-normal {\sc Lines} with $k\leq8$\label{tab:outputgnormalgrpskleq8}} \end{footnotesize} \end{center} \end{table*} \subsection{Surviving potentially quasiprimitive {\sc Lines} with $k>8$}\label{subsec:outputqp} Tables~\ref{tab:outputqp} and \ref{tab:outputqpgrps} display the output of Algorithm~\ref{alg:qptopgroup} with the restriction that $k>8$. In particular, Table~\ref{tab:outputqp} displays the parameters of those {\sc Lines} in {\sc ParameterListA}$(8)$ with $k>8$ for which {\sc QuasiprimTopGroups} was not an empty list in the output of Algorithm~\ref{alg:qptopgroup} (including those for which $d\geq 2500$), with Table~\ref{tab:outputqpgrps} displaying information about the candidate top and bottom groups for each surviving {\sc Line}. Recall that $G\cong G^{\mathfrak C}$ is almost simple (see Section~\ref{sect:outline}). 
\begin{table*}[!h] \begin{center} \begin{footnotesize} \begin{tabular}{|cccccccc|} \hline {\sc Line} & $d\cdot c$ & $(x,y)$ & $(\gamma,\delta)$ & $k^{(v)}\cdot k^{(r)}$ & $b^{(v)}\cdot b^{(r)}$ & int type & $t_{\max}$ \cr \hline 23& $31\cdot16$ & $(2,4)$ & $(1,2)$ & $4\cdot3$ & $124\cdot15$ & $(1^8,2^2)$ &2\\ 45& $133\cdot12$ & $(3,36)$ & $(1,12)$ & $6\cdot5$ & $266\cdot11$ & $(1^{24},2^3)$ &2\\ 146& $528\cdot32$ & $(11,187)$ & $(1,17)$ & $22\cdot5$ & $768\cdot31$ & $(1^{88},2^{11})$ &1\\ 670& $3112\cdot184$ & $(32,544)$ & $(1,17)$ & $64\cdot7$ & $8947\cdot183$ & $(1^{384},2^{32})$ &1\\ 675& $3501\cdot176$ & $(36,720)$ & $(1,20)$ & $72\cdot7$ & $8558\cdot175$ & $(1^{432},2^{36})$ &1\\ 707& $3510\cdot320$ & $(36,396)$ & $(1,11)$ & $72\cdot7$ & $15600\cdot319$ & $(1^{432},2^{36})$ &1\\ 710& $7410\cdot240$ & $(76,2356)$ & $(1,31)$ & $152\cdot7$ & $11700\cdot239$ & $(1^{912},2^{76})$ &1\\ 736& $3520\cdot783$ & $(144,648)$ & $(2,9)$ & $144\cdot7$ & $19140\cdot391$ & $(1^{720},2^{144})$ &2\\ 743& $5720\cdot603$ & $(234,2223)$ & $(2,19)$ & $234\cdot7$ & $14740\cdot301$ & $(1^{1170},2^{234})$ &1\\ 747& $5083\cdot848$ & $(52,312)$ & $(1,6)$ & $104\cdot7$ & $41446\cdot847$ & $(1^{624},2^{52})$ &2\\ 1179& $6256\cdot696$ & $(64,576)$ & $(1,9)$ & $128\cdot7$ & $34017\cdot695$ & $(1^{768},2^{64})$ &2\\ 1184& $3910\cdot1304$ & $(40,120)$ & $(1,3)$ & $80\cdot7$ & $63733\cdot1303$ & $(1^{480},2^{40})$ &2\\ 1188& $9775\cdot544$ & $(100,1800)$ & $(1,18)$ & $200\cdot7$ & $26588\cdot543$ & $(1^{1200},2^{100})$ &1\\ 1194& $10166\cdot536$ & $(104,1976)$ & $(1,19)$ & $208\cdot7$ & $26197\cdot535$ & $(1^{1248},2^{104})$ &1\\ 1197& $3519\cdot1760$ & $(36,72)$ & $(1,2)$ & $72\cdot7$ & $86020\cdot1759$ & $(1^{432},2^{36})$ &2\\ 1205& $17595\cdot464$ & $(180,6840)$ & $(1,38)$ & $360\cdot7$ & $22678\cdot463$ & $(1^{2160},2^{180})$ &1\\ 1206& $3128\cdot3128$ & $(32,32)$ & $(1,1)$ & $64\cdot7$ & $152881\cdot3127$ & $(1^{384},2^{32})$ &2\\ 1207& $3128\cdot3128$ & $(32,32)$ & $(1,1)$ & $64\cdot7$ & 
$152881\cdot3127$ & $(1^{408},2^8,3^8)$ &2\\ \hline \end{tabular} \caption{Surviving potentially quasiprimitive {\sc Lines} with $k>8$\label{tab:outputqp}} \end{footnotesize} \end{center} \end{table*} \begin{table*}[!h] \begin{center} \begin{footnotesize} \begin{tabular}{|c|l|l|} \hline {\sc Line} & $G^{\mathfrak C}$ & $G^C$ \cr \hline 23& ${\rm PSL}(3,5), {\rm PSL}(5,2)$ & $A_{16}, S_{16}$, affine \\ 45& ${\rm PSL}(3,11)$ & $M_{11}, M_{12}, {\rm PSL}(2,11), {\rm PGL}(2,11), A_{12}, S_{12}$ \\ 146& $A_{33}, S_{33}$ on pairs & ${\rm PSL}(2,31), {\rm PGL}(2,31), A_{32}, S_{32}$, affine \\ 675& $\ngeq A_d$ & $HS, A_{176}, S_{176}$ \\ 1206& $\ngeq A_d$ & no information obtained \\ 1207& $\ngeq A_d$ & not 2-homogeneous \\ 710,1184,1197,1205& $\ngeq A_d$ & ${\rm PSL}(2,c-1), {\rm PGL}(2,c-1), A_{c}, S_{c}$ \\ 670,707,736,743,747, & $\ngeq A_d$ & $A_{c}, S_{c}$ \\ 1179,1188,1194 &&\\ \hline \end{tabular} \caption{Candidate groups for surviving potentially quasiprimitive {\sc Lines} with $k>8$\label{tab:outputqpgrps}} \end{footnotesize} \end{center} \end{table*} \subsection{Surviving potentially $G$-normal {\sc Lines} with $k>8$}\label{subsec:outputgnormal} Tables~\ref{tab:outputgnormal} and \ref{tab:outputgnormalgrps} display the output of Algorithm~\ref{alg:gnormalbottomgroup} with the restriction that $k>8$. In particular, Table~\ref{tab:outputgnormal} displays the parameters of those {\sc Lines} in {\sc ParameterListA}$(8)$ with $k>8$ for which {\sc GNormalBottomGroups} was not an empty list in the output of Algorithm~\ref{alg:gnormalbottomgroup} (including those for which $c\geq 2500$), with Table~\ref{tab:outputgnormalgrps} displaying information about the candidate top and bottom groups for each surviving {\sc Line}. 
\begin{table*}[!h] \begin{center} \begin{footnotesize} \begin{tabular}{|cccccccc|} \hline {\sc Line} & $d\cdot c$ & $(x,y)$ & $(\gamma,\delta)$ & $k^{(v)}\cdot k^{(r)}$ & $b^{(v)}\cdot b^{(r)}$ & int type & $t_{\max}$ \cr \hline 22& $16\cdot31$ & $(4,2)$ & $(2,1)$ & $4\cdot3$ & $124\cdot15$ & $(1^4,2^4)$ &2\\ 46& $21\cdot81$ & $(28,7)$ & $(8,2)$ & $7\cdot5$ & $243\cdot10$ & $(2^7,3^7)$ &2\\ 56& $18\cdot137$ & $(24,3)$ & $(8,1)$ & $6\cdot5$ & $411\cdot17$ & $(2^6,3^6)$ &2\\ 60& $24\cdot139$ & $(18,3)$ & $(6,1)$ & $6\cdot5$ & $556\cdot23$ & $(1^{12},3^6)$ &2\\ 64& $85\cdot43$ & $(5,10)$ & $(2,4)$ & $5\cdot6$ & $731\cdot21$ & $(1^{20},2^5)$ &2\\ 91& $40\cdot157$ & $(60,15)$ & $(12,3)$ & $10\cdot7$ & $628\cdot13$ & $(1^{30},4^{10})$ &2\\ 127& $32\cdot373$ & $(48,4)$ & $(12,1)$ & $8\cdot7$ & $1492\cdot31$ & $(1^8,3^{16})$ &2\\ 128& $32\cdot373$ & $(48,4)$ & $(12,1)$ & $8\cdot7$ & $1492\cdot31$ & $(1^{24},4^8)$ &2\\ 156& $64\cdot379$ & $(24,4)$ & $(6,1)$ & $8\cdot7$ & $3032\cdot63$ & $(1^8,2^{24})$ &2\\ 157& $64\cdot379$ & $(24,4)$ & $(6,1)$ & $8\cdot7$ & $3032\cdot63$ & $(1^{32},3^8)$ &2\\ 185& $192\cdot383$ & $(8,4)$ & $(2,1)$ & $8\cdot7$ & $9192\cdot191$ & $(1^{40},2^8)$ &2\\ 673& $176\cdot3501$ & $(720,36)$ & $(20,1)$ & $72\cdot7$ & $8558\cdot175$ & $(3^{120},6^{24})$ &2\\ 674& $176\cdot3501$ & $(720,36)$ & $(20,1)$ & $72\cdot7$ & $8558\cdot175$ & $(3^{144},9^8)$ &2\\ 709& $240\cdot7410$ & $(2356,76)$ & $(31,1)$ & $152\cdot7$ & $11700\cdot239$ & $(4^{152},5^{76},19^4)$ &2\\ 1198& $464\cdot17595$ & $(6840,180)$ & $(38,1)$ & $360\cdot7$ & $22678\cdot463$ & $(6^{360},9^{40})$ &2\\ 1199& $464\cdot17595$ & $(6840,180)$ & $(38,1)$ & $360\cdot7$ & $22678\cdot463$ & $(3^{120},5^{144},6^{120},10^{72})$ &2\\ 1200& $464\cdot17595$ & $(6840,180)$ & $(38,1)$ & $360\cdot7$ & $22678\cdot463$ & $(3^{120},6^{240},9^{80})$ &2\\ 1201& $464\cdot17595$ & $(6840,180)$ & $(38,1)$ & $360\cdot7$ & $22678\cdot463$ & $(5^{360},10^{72})$ &2\\ 1202& $464\cdot17595$ & $(6840,180)$ & $(38,1)$ & 
$360\cdot7$ & $22678\cdot463$ & $(5^{216},6^{120},9^{80})$ &2\\ 1203& $464\cdot17595$ & $(6840,180)$ & $(38,1)$ & $360\cdot7$ & $22678\cdot463$ & $(3^{120},5^{216},9^{120})$ &2\\ 1204& $464\cdot17595$ & $(6840,180)$ & $(38,1)$ & $360\cdot7$ & $22678\cdot463$ & $(5^{432},15^{24})$ &2\\ \hline \end{tabular} \caption{Surviving potentially $G$-normal {\sc Lines} with $k>8$\label{tab:outputgnormal}} \end{footnotesize} \end{center} \end{table*} \begin{table*}[!h] \begin{center} \begin{footnotesize} \begin{tabular}{|c|l|l|} \hline {\sc Line} & $G^{\mathfrak C}$ & $G^C$ \cr\hline 22& affine $\leq {\rm AGL}(4,2)$ & $31:[15a]\leq {\rm AGL}(1,31),\,\, a\mid 2$ \\ 46& $A_7, S_7, {\rm PSL}(3,4), {\rm P\Sigma L}(3,4),$ & affine $\leq {\rm AGL}(4,3)$ \\ & ${\rm PGL}(3,4), {\rm P\Gamma L}(3,4)$ & \\ 56& ${\rm PSL}(2,17)$ & $137:[17a]\leq {\rm AGL}(1,137),\,\, a\mid 4$ \\ 60& ${\rm PSL}(2,23)$ & $139:[23a]\leq {\rm AGL}(1,139),\,\, a\mid 6$ \\ 64& ${\rm PSL}(4,4), {\rm P\Sigma L}(4,4)$ & $43:[21a]\leq {\rm AGL}(1,43),\,\, a\mid 2$ \\ 91& ${\rm PSL}(4,3), {\rm PGL}(4,3)$ & $157:[13a]\leq {\rm AGL}(1,157),\,\, a\mid 12$ \\ 127,128& ${\rm PSL}(2,31), {\rm AGL}(1,32), {\rm A\Gamma L}(1,32)$ & $373:[31a]\leq {\rm AGL}(1,373),\,\, a\mid 12$ \\ 156,157& affine $\leq {\rm AGL}(6,2)$ & $379:[63a]\leq {\rm AGL}(1,379),\,\, a\mid 6$ \\ 185& ${\rm PSL}(2,191)$ & $383:[191a]\leq {\rm AGL}(1,383),\,\, a\mid 2$ \\ 673,674& $HS$ & not 3-homogeneous, $G^C_\alpha$ is a $\{2,3\}$-group \\ 709& ${\rm PSL}(2,239)$ & not 2-homogeneous, $G^C_\alpha$ is a $\{2,3\}$-group \\ 1202& ${\rm PSL}(2,463)$ & not 2-homogeneous \\ 1198,1201,1204& ${\rm PSL}(2,463)$ & not 3-homogeneous \\ 1199,1200,1203& ${\rm PSL}(2,463)$ & not 2-homogeneous, $G^C_\alpha$ is a $\{2,3\}$-group \\ \hline \end{tabular} \caption{Candidate groups for surviving potentially $G$-normal {\sc Lines} with $k>8$\label{tab:outputgnormalgrps}} \end{footnotesize} \end{center} \end{table*} \clearpage 
\section{Analysing remaining {\sc Lines}}\label{sec:proofs} In this section we consider each of the surviving cases $({\cal S},G)$ appearing in the tables of Section~\ref{sect:results} and complete the proof of Theorem~\ref{main}. Before treating the three distinguished situations, we give two useful lemmas. \begin{lemma}[Davies~\cite{Davies}]\label{lem:Davies} Let $g$ be a non-trivial automorphism of a linear space ${\cal S}$ with constant line size $k$ and $r$ lines through a point. Let $g$ have prime order $p$. Then $g$ has at most $\max(r+k-p-1,r)$ fixed points. Moreover, $|{\rm Fix}(h)|\leq k+r-3$ for any non-identity automorphism $h$ of such a linear space. \end{lemma} \begin{lemma} \label{lem:KSXY} Assume that the {\sc Hypothesis} holds and that ${\mathfrak C}$ is $G$-normal and minimal. Let $K:= G_{({\mathfrak C})}$, $S:={\rm Soc}(K)$, $X:=C_G(K)$, and $Y:=C_G(S)$. Then the following hold. \begin{enumerate} \item[(a)] Either (i) $Y\cap K=1$ and $S$ is nonabelian, or (ii) $Y\cap K=S$ and $S$ is elementary abelian. \item[(b)] Either (i) $X\cap K=1$, or (ii) $X\cap K=K=S$ and $S$ is elementary abelian. \item[(c)] Suppose in addition that ${\mathfrak C}$ is maximal. If there is a non-trivial intransitive subgroup $N\vartriangleleft G$ such that $N\cap S=1$, then there is a second $G$-normal partition ${\mathfrak C}'$ with $c$ classes of size $d$ such that $|C\cap C'|=1$ for each $C\in{\mathfrak C},C'\in{\mathfrak C}'$. \end{enumerate} \end{lemma} \begin{proof} (a) If $Y\cap K=1$ then in particular $S$ is nonabelian. So suppose $Y\cap K\ne 1$ and note that $Y\cap K=C_K(S)$. Since ${\mathfrak C}$ is minimal, $G^C$ is primitive. By Theorem~\ref{thm:CP93}, $K\cong K^C$ where $C\in{\mathfrak C}$. Then $Y\cap K\cong(Y\cap K)^C$, a non-trivial normal subgroup that centralises $S\cong S^C\subseteq{\rm Soc}(G^C)$. 
Since $G^C$ is primitive, the normal subgroups $(Y\cap K)^C$ and $S^C$ are both transitive, and since the centraliser in ${\rm Sym}(C)$ of a transitive group is semiregular, it follows that both $(Y\cap K)^C$ and $S^C$ are regular and are isomorphic to each other. Moreover in this case $Y\cap K\subseteq{\rm Soc}(K)=S$, and hence $Y\cap K=S$ is elementary abelian. (b) Since $S\leq K$, we have $X\leq Y$. Thus if $Y\cap K=1$ then $X\cap K=1$. So we may assume by part (a) that $Y\cap K=S$ and $S$ is elementary abelian. As in the previous paragraph, $S^C\cong S$ is self-centralising in ${\rm Sym}(C)$ and in fact $S^C$ is a minimal normal subgroup of $G^C$, for $C\in{\mathfrak C}$. Since $X\cap K=C_K(K)=Z(K)$, we have $X\cap K\cong(X\cap K)^C$ centralising $S\cong S^C$, and so $X\cap K\leq S$. By the minimality of $S^C$, either $X\cap K=1$ or $X\cap K=S$. Since also $X\cap K=Z(K)$, in the case $X\cap K=S$ we have $S=K$. (c) Suppose such a subgroup $N$ exists. We claim that $N\cap K=1$. If not, then $N\cap K$ is a non-trivial normal subgroup of $K$ and so contains some minimal normal subgroup $U$ of $K$. Then $U\leq N\cap S$, which is a contradiction. Hence $N\cap K=1$ and so $N^{\mathfrak C}\cong N$. Also $N^{\mathfrak C}$ is normal in the primitive group $G^{{\mathfrak C}}$ (since $N$ is normal, and ${\mathfrak C}$ is maximal), and thus $N^{{\mathfrak C}}$ is transitive. Let $C'$ be an $N$-orbit in $\cal P$. Then $C'$ contains an equal number of points from each class of ${\mathfrak C}$ and for $C\in{\mathfrak C}$, $C\cap C'$ is an orbit for $N_C$ in $C$. Also, since $N^C\vartriangleleft G^C$, $C\cap C'$ is a block of imprimitivity for the primitive group $G^C$. Hence either $C\cap C'=C$ or $|C\cap C'|=1$. Since $N$ is intransitive it follows that $|C\cap C'|=1$ and hence $N$ has $c$ orbits of size $d$, and so the set of $N$-orbits in ${\cal P}$ forms a $G$-normal partition with $c$ classes of size $d$. 
\end{proof} \subsection{{\sc Lines} with $k\leq 8$}\label{subsec:proofskleq8} As mentioned above, line-transitive, point-imprimitive linear spaces with $k\leq 8$ have been classified by Camina and Mischke~\cite{CaminaMischke96}. We apply this result in our proof of the following proposition. \begin{proposition}\label{prop:k<9} Suppose $k\leq 8$ so that one of {\sc Lines} $1-10,\, 12,\, 14-16,\, 35$ holds, as in Table~\ref{tab:outputkleq8}. Then one of the following holds. \begin{enumerate} \item[(a)] ${\cal S}={\rm PG}(2,q)$, $G$ lies in $Z_{q^2+q+1}.Z_3$, the normaliser of a Singer cycle, and either $q=4$ and {\sc Line} $1$ or $2$ holds, or $q=7$ and {\sc Line} $4$ or $5$ holds; \item[(b)] ${\cal S}$ is the Colbourn-McCalla design, $Z_{91}:Z_3\leq G\leq Z_{91}:Z_{12}$, and {\sc Line} $9$ or $10$ holds; \item[(c)] ${\cal S}$ is the Mills design, $G=Z_{91}:Z_3$, and {\sc Line} $9$ or $10$ holds; \item[(d)] ${\cal S}$ is one of the $467$ linear spaces constructed in \cite{NNOPP}, $G={\rm Aut}({\cal S})$, and {\sc Line} $35$ holds. \end{enumerate} In part (d) (see \cite{NNOPP,KPP93}) all the automorphism groups ${\rm Aut}({\cal S})$ are of the form $N.Z$ with $|N|=3^6$ and $|Z|=13$. There are three possibilities for $N$, namely $Z_3^6,Z_9^3$, and a special $3$-group of exponent $3$; these correspond to $27,13$ and $427$ linear spaces respectively. \end{proposition} \begin{proof} First we prove that ${\mathfrak C}$ is $G$-normal. Suppose this is not so. Then we are in {\sc Case} 1 and $G$ is quasiprimitive. From Subsection~\ref{subsec:outputkleq8}, {\sc Line} 2 of Table~\ref{tab:outputkleq8} holds and the only possibility for $G$ is $G\cong G^{\mathfrak C}\cong{\rm PSL}(3,2)$. By \cite{CaminaMischke96}, ${\cal S}={\rm PG}(2,4)$. However ${\rm PSL}(3,2)$ is not transitive on the points of ${\rm PG}(2,4)$ (see \cite{ATLAS}). Thus ${\mathfrak C}$ is $G$-normal. 
Next we apply the classification in \cite{CaminaMischke96} to deduce that {\sc Lines} $3,6-8,12,14-16$ of Table~\ref{tab:outputkleq8} do not occur. We consider the remaining {\sc Lines}. Suppose that {\sc Line} 1 or 2 of Table~\ref{tab:outputkleq8} holds. By \cite{CaminaMischke96}, ${\cal S}$ is a projective plane, and by \cite[p232]{KPP93}, ${\cal S}={\rm PG}(2,4)$ and $G=Z_{21}$ or $Z_{21}.Z_3$ as in (a) and there are two $G$-normal partitions satisfying {\sc Lines} 1 and 2. Next suppose that {\sc Line} 4 or 5 of Table~\ref{tab:outputkleq8} holds. By \cite{CaminaMischke96}, ${\cal S}$ is a projective plane, and as there is a unique projective plane of order 7 (see \cite{Hall,Hall-correction}), ${\cal S}={\rm PG}(2,7)$. If {\sc Line} 4 holds then $N={\rm Soc}(K)\cong Z_{19}$ is normal in $G$. On the other hand if {\sc Line} 5 holds then $S={\rm Soc}(K)\cong Z_3$ by Theorem~\ref{thm:CP93} (and see Table~\ref{tab:outputgnormalgrpskleq8}). Then by Lemma~\ref{lem:KSXY}(a), $Y=C_G(S)$ satisfies $Y\cap K=S$. Since $G/Y\leq{\rm Aut}(S)=Z_2$, $1\ne Y^{\mathfrak C}\leq G^{\mathfrak C}\leq{\rm AGL}(1,19)$ and so $G$ has a normal subgroup $M$ such that $S<M\leq Y$ and $M/S\cong Z_{19}$. In this case the unique Sylow 19-subgroup $N$ of $M$ is normal in $G$. Thus in {\sc Line} 4 or {\sc Line} 5 we have a normal subgroup $N\cong Z_{19}$ of $G$. Hence $G\leq N_{{\rm PGL}(3,7)}(N)=Z_{57}.Z_3$ as in (a) and there are two $G$-normal partitions satisfying {\sc Lines} 4 and 5. Now suppose that one of {\sc Lines} 9 or 10 of Table~\ref{tab:outputkleq8} holds. By \cite{CaminaMischke96,ST}, ${\cal S}$ is either the Colbourn-McCalla design, which has automorphism group $Z_{91}:Z_{12}$, or the Mills design, which has automorphism group $Z_{91}:Z_3$. Both designs have two invariant partitions, one satisfying {\sc Line} 9 and the other satisfying {\sc Line} 10. For the Colbourn-McCalla design, the line-transitive subgroups $G$ are all subgroups containing $Z_{91}:Z_3$. 
Thus (b) or (c) holds. Finally suppose that {\sc Line} 35 holds. Then part (d) follows from \cite{NNOPP},\cite{KPP93}. \end{proof} \subsection{Potential quasiprimitive cases with $k>8$}\label{subsec:proofsqp} In this section we discuss the quasiprimitive cases. In each case we have $K=G_ {({\mathfrak C})}= 1$ and $G\cong G^{\mathfrak C}$ is almost simple. \begin{proposition}\label{prop:qp} There are no line-transitive point-imprimitive linear spaces with a quasi\-primitive action corresponding to any of the {\sc Lines} of Table~\ref{tab:outputqp}, i.e., {\sc Lines} $23$, $45$, $146$, $670$, $675$, $707$, $710$, $736$, $743$, $747$, $1179$, $1184$, $1188$, $1194$, $1197$, $1205$, $1206$, $1207$ cannot occur. \end{proposition} \begin{proof} Suppose one of the {\sc Lines} of Table~\ref{tab:outputqp} holds. We first deal with some special cases, before giving a general argument for the remaining cases. {\sc Line} 23: Here $G^{\mathfrak C}$ is either ${\rm PSL}(3,5)$ or ${\rm PSL}(5,2)$, and $G^C$ contains either $Z_2^4$ or $A_{16}$. Suppose that $G^{\mathfrak C}={\rm PSL}(3,5)$. Then $G_C\cong G^{{\mathfrak C}}_C=5^2:{\rm GL}(2,5)$. However, no quotient of this group has a normal subgroup $Z_2^4$ or $A_{16}$. Hence $G^{\mathfrak C}={\rm PSL}(5,2)$, and this possibility will be considered below in the general argument. {\sc Line} 675: Here $G^{\mathfrak C}$ does not contain $A_{3501}$, and $G^C$ is either $HS$, $A_{176}$ or $S_{176}$. Suppose that $G^C\cong HS$. Now $G^{{\mathfrak C}}$ is a primitive group of degree $9 \times 389$, so let $p=389$ and $g$ be an element in $G^{{\mathfrak C}}$ of order $p$. Suppose $g$ has some fixed points in ${\mathfrak C}$. Then it has at least 389 fixed points and $q$ cycles of length $p$, where $q\leq 8$. By~\cite[Theorem 13.10]{Wielandt64}, since $G^{{\mathfrak C}}$ is not alternating or symmetric, the number of fixed points should be at most $4q-4$. 
Thus $g$ fixes no element of ${\mathfrak C}$, and the almost simple group $G^{{\mathfrak C}}$ satisfies the hypothesis (*) in ~\cite{LiebeckSaxl}. However, by checking the list in~\cite[Theorem 1.1(iii), Table 3]{LiebeckSaxl}, we find no primitive group of degree $9\times 389$. Hence $G^C$ may only be $A_{176}$ or $S_{176}$, and these possibilities will be considered below in the general argument. {\sc Lines} 1206, 1207: Here $\gamma=\delta=1$, and by Lemma~\ref{lem:subdegree} it follows that $G^{\mathfrak C}$ and $G^C$ are both 2-transitive of degree 3128. Since 3128 is not a prime power, these groups are almost simple. Now 3127 is not a prime power, and 3128 is not of the form $2^{n-1}(2^{n}\pm1)$ or $(q^n-1)/(q-1)$ for a prime power $q$, and hence the only 2-transitive groups of degree 3128 are $A_{3128}$ and $S_{3128}$ (see \cite{Cam}). However this contradicts the fact that $t_{\max}=2$. Thus {\sc Lines} 1206,1207 cannot occur. We will now treat all remaining quasiprimitive cases in a somewhat uniform manner. The approach is to construct an induced linear space on the fixed points of a certain subgroup, and then determine contradictions in the resulting parameters, hence ruling out the remaining possibilities in Table~\ref{tab:outputqp}. With reference to Table~\ref{tab:someqp}, we have for each {\sc Line} a prime $p$ which divides $|G|$, but which does not divide the number of lines $b$. Thus, in each case, a Sylow $p$-subgroup $P$ of $G$ will fix some line, $\lambda$ say. Let $F:={\rm Fix}_{\cal P}(P)$. For the line size $k$, define $k'$ to be the integer such that $0\leq k'<k$ and $k\equiv k'\pmod{p}$, and similarly define $d_i'$ for each $i\in{\rm spec}\,{\cal S}$. Since $P$ fixes $\lambda$ setwise, $|F\cap\lambda|\geq k'$, and these values are displayed in Table~\ref{tab:someqp}. If $k'=0$ or $1$, then we consider the sizes of the intersections of the classes with $\lambda$. 
Since $P$ fixes $\lambda$, $P$ must preserve the intersection type, and so fixes the set ${\mathfrak C}_i$ of $d_i$ classes $C$ such that $|\lambda\cap C|=i$, for each $i\in{\rm spec}\,{\cal S}$. Within each ${\mathfrak C}_i$, $P$ must fix setwise at least $d_i'$ classes, and within each of these fixed classes, $P$ must fix $i$ points, since $i<p$ for each {\sc Line}. So $|F\cap\lambda|\geq\sum_{i\in{\rm spec}\,{\cal S}}id_i'$, and these values are displayed in Table~\ref{tab:someqp} where necessary. For each {\sc Line} we have that $|F\cap\lambda|\geq 2$. Since for each {\sc Line} $v-k\not\equiv 0\pmod{p}$, there exist points not on $\lambda$ which are fixed by $P$, and so $F\nsubseteq\lambda$. Thus we may apply Corollary~\ref{cor:CaminaSiemons} to obtain, for each case, an induced linear space ${\cal S}|_F=(F,{\cal L}|_F)$ upon which $N_G(P)$ acts line-transitively. Then ${\cal S}|_F$ has $v_0=|F|$ points, and the lines have size $k_0=|F\cap\lambda|$. The lower bound for $k_0$ determined previously can be used to determine a lower bound for $v_0$, since by considering the lines through a point $\alpha_0$ not on some line $\lambda_0$, we see that $v_0-1\geq k_0(k_0-1)$. However by Lemma~\ref{lem:Davies}, we have that $v_0=|F|\leq k+r-3$. These inequalities, whose values are displayed in Table~\ref{tab:someqp} for each {\sc Line}, lead to a contradiction in all but the following cases: {\sc Line} 23 with $G^{\mathfrak C}={\rm PSL}(5,2)$, {\sc Line} 710 with $G^C\geq {\rm PSL}(2,239)$, {\sc Line} 1184 with $G^C\geq {\rm PSL}(2,1303)$, {\sc Line} 1205 with $G^C\geq {\rm PSL}(2,463)$. 
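To make the method concrete, the bound computation can be replayed for one of the {\sc Lines} it eliminates. The following Python sketch (our own verification, with all variable names ours) reproduces the Table~\ref{tab:someqp} entries for {\sc Line} 45 from its parameters and exhibits the required contradiction.

```python
# Worked instance of the method above for Line 45 (all names ours).
# Parameters: v = d*c, k = k_v * k_r, b = b_v * b_r, and the prime p = 5.
d_, c_ = 133, 12
k, b, p = 6 * 5, 266 * 11, 5
v = d_ * c_                          # 1596 points
r = b * k // v                       # r = bk/v = 55 lines on each point
assert b % p != 0                    # so a Sylow 5-subgroup fixes some line

k_prime = k % p                      # k' = 0, so use the intersection type
d1, d2 = 24, 3                       # intersection type (1^24, 2^3)
k0 = 1 * (d1 % p) + 2 * (d2 % p)     # |F meet lambda| >= sum_i i*d_i' = 10
print(k_prime, k0, k0 * (k0 - 1) + 1, k + r - 3)   # 0 10 91 82
```

Since $91>82$, the inequalities $k_0(k_0-1)+1\leq v_0\leq k+r-3$ cannot both hold, matching the entries for {\sc Line} 45 in Table~\ref{tab:someqp}.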
\begin{table*}[!h] \begin{center} \begin{footnotesize} \begin{tabular}{|c|l|c|c|c|c|c|c|} \hline & & & \multicolumn{2}{c|}{$k_0\geq$} & $v-k$ && \\ {\sc Line} & Group & $p$& $k'$ & $\sum id_i'$ & mod $p$ & $k_0(k_0-1)+1$ & $k+r-3$ \\ \hline 23 &\mbox{$G^{{\mathfrak C}}={\rm PSL}(5,2)$}&7&5&&1&21&54\\ 45 &\mbox{$G^{{\mathfrak C}}={\rm PSL}(3,11)$}&5&0&10&1&91&82\\ 146 &\mbox{$G^{{\mathfrak C}}=A_{33},S_{33}$ on pairs}&29&23&&24&507&262\\ 670 &\mbox{$G^C\geq A_{184}$}&181&86&&19&7311&1726\\ 675 &\mbox{$G^C\geq A_{176}$}&173&158&&138&24807&1726\\ 707 &\mbox{$G^C\geq A_{320}$}&317&187&&199&34783&2734\\ 710 &\mbox{$G^C\geq A_{240}$} &233&132&&12&17293&2734\\ &\mbox{$G^C\geq{\rm PSL}(2,239)$}&17&10&&3&91&2734\\ 736 &\mbox{$G^C\geq A_{783}$}&773&235&&180&54991&3742\\ 743 &\mbox{$G^C\geq A_{603}$}&601&436&&186&189661&3742\\ 747 &\mbox{$G^C\geq A_{848}$}&839&728&&552&529257&6654\\ 1179 &\mbox{$G^C\geq A_{696}$}&691&205&&671&41821&5758\\ 1184& \mbox{$G^C\geq A_{1304}$}&1301&560&&762&313041&9678\\ &\mbox{$G^C\geq {\rm PSL}(2,1303)$}&31&2&&6&6&9678\\ 1188 &\mbox{$G^C\geq A_{544}$}&541&318&&334&100807&5198\\ 1194 &\mbox{$G^C\geq A_{536}$}&523&410&&475&167691&5198\\ 1197 &\mbox{$G^C\geq A_{1760}$}&1753&504&&1340&253513&12814\\ &\mbox{$G^C\geq {\rm PSL}(2,1759)$}&293&211&&88&44311&12814\\ 1205 &\mbox{$G^C\geq A_{464}$}&461&215&&16&46011&5758\\ &\mbox{$G^C\geq {\rm PSL}(2,463)$}&7&0&14&1&183&5758\\ \hline \end{tabular} \caption{Relevant information for some potentially quasiprimitive cases\label{tab:someqp}} \end{footnotesize} \end{center} \end{table*} For {\sc Line} 23 ($G^{\mathfrak C}={\rm PSL}(5,2)$), we have $k_0=5$ and $21\leq v_0\leq 54$ (else $k_0(k_0-1)+1>v_0$ or $v_0>k+r-3$). Now $v_0\equiv v\pmod{p}$ and also $(k_0-1)\mid (v_0-1)$. So $v_0\equiv 6\pmod{7}$ and $v_0-1\equiv 0\pmod{4}$, and thus $v_0=41$. Now $N_G(P)$ is transitive on $F$, and so $v_0\mid |N_G(P)|$. However $41\nmid |G|=|G^{\mathfrak C}|$, and we have a contradiction. 
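The final congruence step for {\sc Line} 23 and the divisibility claim $41\nmid|{\rm PSL}(5,2)|$ are easy to confirm mechanically; the following check (ours, not part of the original argument) does so.

```python
# Our check of the Line 23 endgame: v = 31*16 = 496 gives v_0 ≡ 6 (mod 7),
# and k_0 - 1 = 4 must divide v_0 - 1, with 21 <= v_0 <= 54.
from math import prod

candidates = [v0 for v0 in range(21, 55)
              if v0 % 7 == 496 % 7 and (v0 - 1) % 4 == 0]
print(candidates)                      # [41]

# |PSL(5,2)| = |GL(5,2)|, since determinant and centre are trivial over F_2
order_psl52 = prod(2**5 - 2**i for i in range(5))
print(order_psl52, order_psl52 % 41)   # 9999360 34, so 41 does not divide |G|
```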
For {\sc Line} 710 ($G^C\geq{\rm PSL}(2,239)$), {\sc Line} 1184 ($G^C\geq{\rm PSL}(2,1303)$), and {\sc Line} 1205 ($G^C\geq{\rm PSL}(2,463)$) we have $d\not\equiv 0\pmod{p}$, and so in each case $P$ fixes some class setwise, $C$ say. So $P\leq G_C$, and by Corollary~\ref{cor:CaminaSiemons}, $|F|=|{\rm Fix}_{\mathfrak C}(P)|\cdot |{\rm Fix}_C(P)|$ with $|{\rm Fix}_C(P)|\geq 3$. Now $G^C\geq{\rm PSL}(2,c-1)$, and in each {\sc Line}, $c-1$ is prime, so $P^C<G^C\leq{\rm PGL}(2,c-1)$. Since ${\rm PGL}(2,c-1)$ is sharply 3-transitive, the nontrivial subgroup $P^C$ can fix at most 2 points of $C$, contradicting $|{\rm Fix}_C(P)|\geq 3$. \end{proof} \subsection{Potential $G$-normal cases with $k>8$}\label{subsec:proofsgnormal} In this section we discuss the $G$-normal {\sc Lines}. Set $K:=G_{({\mathfrak C})}$ so that $K\ne 1$. \begin{proposition}\label{prop:Gnormal} There are no line-transitive point-imprimitive linear spaces with an action preserving a $G$-normal partition corresponding to any of the {\sc Lines} of Table~\ref{tab:outputgnormal}, i.e., {\sc Lines} $22$, $46$, $56$, $60$, $64$, $91$, $127$, $128$, $156$, $157$, $185$, $673$, $674$, $709$, $1198-1204$ cannot occur. \end{proposition} \begin{proof} We give a general group theoretic argument for the {\sc Lines} of Table~\ref{tab:outputgnormal} where $c$ is a prime (with a slight variation for {\sc Line} 46, where $c$ is a prime power), to construct a non-trivial intransitive normal subgroup which gives rise to a second $G$-normal partition with $c$ classes of size $d$. In each case there is no corresponding {\sc Line} in Table~\ref{tab:outputgnormal}, and hence these {\sc Lines} are ruled out by Lemma~\ref{lem:KSXY}(c). The remaining {\sc Lines} of Table~\ref{tab:outputgnormal} are then also dealt with in a somewhat uniform manner. Let $K,S,X,Y$ be defined as in Lemma~\ref{lem:KSXY}. Suppose one of {\sc Lines} $22$, $56$, $60$, $64$, $91$, $127$, $128$, $156$, $157$, $185$ holds. 
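The fixed-point bound used in the last step is classical: as ${\rm PGL}(2,q)$ is sharply $3$-transitive on the projective line, a non-identity element fixes at most two points. A brute-force check is infeasible for $q=239$, $1303$ or $463$, but the statement is uniform in $q$; the following sketch (ours) verifies it exhaustively for $q=7$ as a sanity check.

```python
# Exhaustive check for q = 7 (our own small-scale verification) that a
# non-identity Moebius map in PGL(2,q) fixes at most 2 points of PG(1,q).
q = 7
INF = 'inf'
points = list(range(q)) + [INF]

def moebius(a, b, c, d, x):
    """Image of x under x -> (a x + b) / (c x + d) over F_q union {inf}."""
    if x == INF:
        return a * pow(c, -1, q) % q if c else INF
    num, den = (a * x + b) % q, (c * x + d) % q
    return num * pow(den, -1, q) % q if den else INF

max_fixed = 0
for a in range(q):
    for b in range(q):
        for c in range(q):
            for d in range(q):
                if (a * d - b * c) % q == 0:
                    continue                    # not an element of PGL(2,q)
                fixed = sum(moebius(a, b, c, d, x) == x for x in points)
                if fixed < len(points):         # skip scalar (identity) maps
                    max_fixed = max(max_fixed, fixed)
print(max_fixed)  # 2
```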
Then the class size $c$ is a prime, $1\ne K^C\trianglelefteq G^C$ and \mbox{$Z_c:Z_u$}$\cong G^C\leq {\rm AGL}(1,c)$ with $u\mid (c-1)$ as in Table~\ref{tab:someGnorm}. Also, by Theorem~\ref{thm:CP93}, $K=K^C$, so $S={\rm Soc}(K)\cong Z_c$, and $G/Y\leq {\rm Aut}(S)\cong Z_{c-1}$. For all these {\sc Lines}, $G/K\cong G^{\mathfrak C}$ is non-cyclic and hence $Y\nleq K$. Thus $Y^{\mathfrak C}\ne 1$. Since $Y\vartriangleleft G$, the induced group $Y^{\mathfrak C}$ is normal in the primitive group $G^{\mathfrak C}$ and hence $Y^{\mathfrak C}\geq {\rm Soc}(G^{\mathfrak C})$. By Lemma~\ref{lem:KSXY}(a), $Y\cap K=S$. Thus there exists a normal subgroup $M$ of $G$ such that $S<M\leq Y$ and $M/S\cong M^{\mathfrak C}={\rm Soc}(G^{\mathfrak C})$. Suppose that $G^{\mathfrak C}$ is affine (that is, in {\sc Lines} 22, 156, 157, and sometimes in {\sc Lines} 127,128). Then $M/S=Z_2^{c'}$ where $d=2^{c'}$. Since $M$ centralises $S$, it follows in these cases that $M=N\times S$ with $N$ the unique Sylow 2-subgroup of $M$. Thus $N\vartriangleleft G$, $N$ is intransitive on ${\cal P}$ with $c$ orbits of size $d$, and $N\cap S=1$. For the remaining cases, $M/S\cong {\rm Soc}(G^{\mathfrak C})={\rm PSL}(n,q)$ where $d=\frac{q^n-1}{q-1}$. Since the prime $c$ does not divide the order of the Schur multiplier of ${\rm PSL}(n,q)$ in any of these cases (see \cite[page 302]{Gorenstein}), it follows that $M=N\times S$ with $N\cong {\rm Soc}(G^{\mathfrak C})$, $N\vartriangleleft G$, and $N\cap S=1$. Moreover $c\nmid |N|$, so $N$ is intransitive with $c$ orbits of size $d$. Thus for each {\sc Line}, by Lemma~\ref{lem:KSXY}(c), there is a $G$-normal partition with $c$ classes of size $d$. This is a contradiction since there are no such {\sc Lines} in Table~\ref{tab:outputgnormal}. 
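The intransitivity of $N$ rests on the fact that the prime $c$ does not divide $|{\rm Soc}(G^{\mathfrak C})|$. For the affine cases this is clear since $c$ is odd, and for the ${\rm PSL}(n,q)$ cases it can be checked directly; the snippet below (our own verification, with the pairs $(n,q)$ and $c$ read off Table~\ref{tab:someGnorm}, and with the ${\rm PSL}(2,31)$ branch used for {\sc Lines} 127,128) does so.

```python
# Our verification that the prime c never divides |PSL(n,q)| in the
# non-affine cases above; keys are Line numbers, values are (n, q, c).
from math import gcd, prod

def order_psl(n, q):
    gl = prod(q**n - q**i for i in range(n))     # |GL(n,q)|
    return gl // ((q - 1) * gcd(n, q - 1))       # |PSL(n,q)|

cases = {56: (2, 17, 137), 60: (2, 23, 139), 64: (4, 4, 43),
         91: (4, 3, 157), 127: (2, 31, 373), 185: (2, 191, 383)}
for line, (n, q, c) in cases.items():
    assert order_psl(n, q) % c != 0, line
print("c divides |PSL(n,q)| in none of these cases")
```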
\begin{table*}[!h] \begin{center} \begin{footnotesize} \begin{tabular}{|c|l|l|l|l|} \hline {\sc Line} & $G^{{\mathfrak C}}$ & $d$ & $G^C=c:u$ &int type \\ \hline 22 & \mbox{$\leq {\rm AGL}(4,2)$} & 16 & \mbox{31:[15a],\, $a\mid 2$} & $(1^4,2^4)$ \\ 56 & \mbox{${\rm PSL}(2,17)$} & 18 & \mbox{137:[17a],\, $a\mid 4$} & $(2^6,3^6)$ \\ 60 & \mbox{${\rm PSL}(2,23)$} & 24 & \mbox{139:[23a],\, $a\mid 6$} & $(1^{12},3^6)$ \\ 64 & \mbox{$\geq{\rm PSL}(4,4)$} & 85 & \mbox{43:[21a],\, $a\mid 2$} & $(1^{20},2^5)$ \\ 91 & \mbox{$\geq{\rm PSL}(4,3)$} & 40 & \mbox{157:[13a],\, $a\mid 12$} & $(1^{30},4^{10})$ \\ 127,128 & \mbox{$2^5:[31a]$,\, $a\mid 5$} & 32 & \mbox{373:[31a],\, $a\mid 12$} & \mbox{$(1^8,3^{16})$,$(1^{24},4^8)$} \\ & \mbox{or {\rm PSL}(2,31)} & & & \\ 156,157 & \mbox{$\leq {\rm AGL}(6,2)$} & 64 & \mbox{379:[63a],\, $a\mid 6$} & \mbox{$(1^{8},2^{24})$, $(1^{32},3^{8})$} \\ 185 & \mbox{${\rm PSL}(2,191)$} & 192 & \mbox{383:[191a],\, $a\mid 2$} & $(1^{40},2^8)$ \\ \hline \end{tabular} \caption{Relevant information for some potentially $G$-normal cases\label{tab:someGnorm}} \end{footnotesize} \end{center} \end{table*} Now consider {\sc Line} 46. We follow the proof above (with $S,Y$ as there). Here $K\cong K^C\leq {\rm AGL}(4,3)$ so $S={\rm Soc}(K)\cong Z_3^4$ and $G/Y\leq GL(4,3)$. This means that $7\nmid |G/Y|$ and hence $Y\nleq K$. Thus $Y^{\mathfrak C}\ne 1$. Arguing as in the proof above, $Y\cap K=S$ and we have a normal subgroup $M$ of $G$ such that $S<M\leq Y$, and $M/S \cong {\rm Soc}(G^{{\mathfrak C}})\cong A_7$ or ${\rm PSL}(3,4)$. If $Z(M')\ne 1$ then $Z(M')$ lies in the 3-part of the Schur multiplier of ${\rm Soc}(G^{{\mathfrak C}})$, and hence $Z(M')\cong Z_3$. This means that $Z(M')<S$, and $Z(M')\vartriangleleft G$, which is a contradiction since $G$ (even $G_C$) acts irreducibly on $S$. Thus $Z(M')=1$ and so $M=M'\times S$ with $M'$ intransitive, $M'\cap S=1$ and $M'\vartriangleleft G$. 
By Lemma~\ref{lem:KSXY}(c), there is a $G$-normal partition with 81 classes of size 21, and this is a contradiction. In each of the remaining {\sc Lines} $673,674,709,1198-1204$, we have a prime $p$ (displayed in Table~\ref{tab:specialGnorm}) such that $p\mid |G^{\mathfrak C}|$ and $p\nmid b$. Thus a Sylow $p$-subgroup $P$ of $G$ fixes some line, $\lambda$ say. Consider {\sc Lines} $673,674$. Here $p=3$. Let $Q:=P\cap K$. Then $Q<G_\lambda$ and $Q$ is a Sylow 3-subgroup of $K$. Since $9\mid c$, all $Q$-orbits in $C$ (for any $C\in {\mathfrak C}$) have length divisible by 9. However in each {\sc Line} there is a class $C$ such that $|\lambda\cap C|=3$, and $\lambda\cap C$ is a $Q$-invariant subset of $C$, which gives a contradiction. Thus one of the other {\sc Lines} holds, and in particular $p$ divides $d-2$. Since ${\rm Soc}(G^{\mathfrak C})={\rm PSL}(2,d-1)$, it follows that $P$ fixes exactly two classes of ${\mathfrak C}$ setwise, say $C_1$ and $C_2$. Consider {\sc Line} 709. Here $p=17$ and as $P\leq G_\lambda$, $P$ fixes setwise each of the four classes $C$ such that $|\lambda\cap C|=19$. This is a contradiction. Hence $p=11$, and if the number $d_i$ of classes $C$ such that $|\lambda\cap C|=i$ is nonzero then $d_i$ must be congruent to 0,1 or 2 modulo 11 (as otherwise $P$ would fix more than 2 classes setwise). In each of {\sc Lines} $1198-1204$ there is an $i$ for which the condition fails. 
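The two counting conditions just invoked for {\sc Lines} 709 and $1198-1204$ can be verified directly against the intersection types recorded in Table~\ref{tab:outputgnormal}; the following check is ours.

```python
# Our check of the final counting steps, using the intersection types
# recorded for Lines 709 and 1198-1204 (d_i = number of classes meeting
# the fixed line lambda in i points).
int_types = {
    709:  {4: 152, 5: 76, 19: 4},
    1198: {6: 360, 9: 40},
    1199: {3: 120, 5: 144, 6: 120, 10: 72},
    1200: {3: 120, 6: 240, 9: 80},
    1201: {5: 360, 10: 72},
    1202: {5: 216, 6: 120, 9: 80},
    1203: {3: 120, 5: 216, 9: 120},
    1204: {5: 432, 15: 24},
}
# Line 709 (p = 17): the d_19 = 4 classes meeting lambda in 19 points are
# permuted by a 17-group, so all 4 are fixed setwise -- more than 2.
assert 2 < int_types[709][19] < 17
# Lines 1198-1204 (p = 11): some d_i is not congruent to 0, 1 or 2 mod 11.
for line in range(1198, 1205):
    bad = [i for i, di in int_types[line].items() if di % 11 not in (0, 1, 2)]
    assert bad, line
print("every Line fails the required congruence condition")
```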
\begin{table*}[!h] \begin{center} \begin{footnotesize} \begin{tabular}{|l|l|l|c|l|} \hline {\sc Line} & $G^{{\mathfrak C}}$ & $d$ & $p$ & int type \\ \hline 673, 674 & $HS$ & 176 & 3 & \mbox{$(3^{120},6^{24})$,$(3^{144},9^8)$} \\ 709 & ${\rm PSL}(2,239)$ & 240 & 17 & $(4^{152},5^{76},19^{4})$ \\ 1198-1204 & ${\rm PSL}(2,463)$ & 464 & 11 & \mbox{see Table~\ref{tab:outputgnormal}} \\ \hline \end{tabular} \caption{Relevant information for some potentially $G$-normal cases\label{tab:specialGnorm}} \end{footnotesize} \end{center} \end{table*} \end{proof} \subsection{Proof of Theorem~\ref{main} and concluding remarks} Theorem~\ref{main} follows from Propositions~\ref{prop:k<9}, \ref{prop:qp} and~\ref{prop:Gnormal}. Our choice of $k^{(r)}_{\max}=8$ was ambitious, but we made it in order to obtain at least a characterisation of line-transitive, point-imprimitive linear spaces that included all the known examples apart from Desarguesian projective planes. It turned out that a significant range of new theory was needed to complete this classification. Disappointingly, no new linear spaces were discovered in the project. However the outcomes include a set of computational tools for searches for such linear spaces, underpinned by a broad range of combinatorial and group theoretic results. The algorithms have been implemented as follows: Algorithms 1-3 in C and Algorithms 4-8 in GAP 4~\cite{GAP4}. The algorithms can easily be modified for searches over different ranges of parameters, and additional group theoretic restrictions, when available, may be added. For example, these tools are being used to extend the Camina-Mischke classification to all line sizes up to and including 12.
https://arxiv.org/abs/2209.10028
Lines in quasi-metric spaces with four points
A set of n non-collinear points in the Euclidean plane defines at least n different lines. Chen and Chvátal in 2008 conjectured that the same result is true in metric spaces for an adequate definition of line. More recently, this conjecture was studied in the context of quasi-metric spaces. In this work we prove that there is a quasi-metric space on four points a, b, c and d whose betweenness is B={(c,a,b),(a,b,c),(d,b,a),(b,a,d)}. Then, this space has only three lines, none of which has four points. Moreover, we show that the betweenness of any quasi-metric space on four points with this property is isomorphic to B. Since B is not metric, we get that Chen and Chvátal's conjecture is valid for any metric space on four points.
\section{Introduction} Chen and Chvátal's conjecture, introduced in 2008 (see \cite{CC}), asserts that every finite metric space $M$ with $n\geq 2$ points has the so-called \emph{de Bruijn and Erdös} (DBE) property: \begin{equation} M \text{ has a \emph{line} containing all the points or at least $n$ different \emph{lines}} \label{dbe}\tag{DBE} \end{equation} This conjecture generalizes to metric spaces a well-known result known as de Bruijn and Erdös's Theorem (see \cite{chvatal_2018} for a more detailed account), proved by Erdös in \cite{erdos1943} and generalized by de Bruijn and Erdös in \cite{dbe}, which says that every set of $n\geq 2$ points in the Euclidean plane determines at least $n$ distinct lines unless all the points lie on the same line. Lines are defined in metric spaces as a natural generalization of their definition in the plane. If $d$ is the metric of a metric space, then given two distinct points $x$ and $y$, the \emph{segment} defined by $x$ and $y$, denoted by $[xy]$, is the set defined by $$[xy]=\{z\mid d(x,y)=d(x,z)+d(z,y)\}.$$ Similarly, the \emph{line} defined by $x$ and $y$, denoted by $\ovd{xy}$, is the set defined by $$\ovd{xy}=\{z\mid x\in [zy] \lor z\in [xy]\lor y\in [xz]\}.$$ The set of all lines in a metric space $M=(V,d)$ is denoted by $\mathcal{L}(M)$. That is, $$\mathcal{L}(M)=\{\ovd{xy}\mid x,y\in V\}.$$ In \cite{AM20}, the DBE property was studied in the context of \emph{quasi-metric spaces}, first studied by Wilson in \cite{Wilson1931}, and defined as a pair $(V,d)$ where $d$ is a quasi-metric on $V$, that is, it satisfies $\forall x,y\in V, d(x,y)=0 \iff x=y$, and $\forall x,y,z\in V, d(x,y)\leq d(x,z)+d(z,y)$. 
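To illustrate these definitions, the segments and lines of a small metric space can be computed directly. The following Python sketch (our own toy example, not taken from the cited works) does this for the shortest-path metric of the $4$-cycle, where every line turns out to be universal.

```python
# Toy example (ours): segments and lines for the shortest-path metric of
# the 4-cycle on {0, 1, 2, 3}.
from itertools import combinations

V = range(4)

def d(x, y):
    return min((x - y) % 4, (y - x) % 4)   # graph distance on C_4

def segment(x, y):
    return {z for z in V if d(x, y) == d(x, z) + d(z, y)}

def line(x, y):
    return frozenset(z for z in V
                     if x in segment(z, y) or z in segment(x, y)
                     or y in segment(x, z))

lines = {line(x, y) for x, y in combinations(V, 2)}
print(lines)  # {frozenset({0, 1, 2, 3})}: every line is universal
```

Here every pair of vertices already generates a line containing all the points, so the DBE property holds for this space in the strongest possible way.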
As far as we know, the DBE property is valid for metric spaces with distances in $\{0,1,2\}$ \cite{ChCh11, Chvatal2}, with bounded diameter and a large enough number of points \cite{metricSpace} and for metric spaces defined by chordal graphs \cite{BBCCCCFZ}, distance-hereditary graphs \cite{AK}, graphs such that every induced subgraph is either a chordal graph, has a cut-vertex or a non-trivial module \cite{AMRZ}, bisplit graphs \cite{BKR}, $(q,q-4)$-graphs \cite{SS} and graphs without induced house or hole \cite{ABMZ}. It is also valid for metric spaces defined by points in the plane with the Manhattan metric, when no two points lie on a horizontal or on a vertical line \cite{KP2013}. Additionally, any quasi-metric space induced by tournaments or by bipartite tournaments of diameter at most three \cite{AM20} has the DBE property. Stronger results were proved in \cite{MZ2020}. In this work we study quasi-metric spaces on four points. It is worth noticing that the number of quasi-metric spaces on a given number of points is infinite, since we have no restriction on the distances other than that given by the triangle inequality. This is also true even when we consider only non-isomorphic quasi-metric spaces. However, in the study of lines, even some non-isomorphic quasi-metric spaces can be regarded as the same. In fact, it is clear that by knowing the segments defined by pairs of distinct points we also know all the lines defined by them. This fact is captured by the \emph{betweenness relation} first studied by Menger \cite{menger1928} in the context of metric spaces. The betweenness of a quasi-metric space $Q$, denoted by $\mathcal{B}(Q)$, is the set of all triples $(x,y,z)$ with $x,y,z$ three distinct points in $Q$ such that $y\in [xz]$. In this work, to ease the presentation, we denote a triple $(x,y,z)$ by the word $xyz$.
Then, the betweenness of a quasi-metric space is given by: $$\mathcal{B}(Q)=\{xyz\in V^3\mid x,y,z \text{ distinct},\ d(x,z)=d(x,y)+d(y,z)\}.$$ When $Q$ has $n$ points we have that $|\mathcal{B}(Q)|\leq n(n-1)(n-2)$, and thus the number of betweenness relations of quasi-metric spaces on $n$ points is at most $2^{n(n-1)(n-2)}$. The upper bound $n(n-1)(n-2)$ is too pessimistic since when a triple $xyz$ belongs to $\mathcal{B}(Q)$ we know that the triples $yxz$ and $xzy$ do not belong to $\mathcal{B}(Q)$. Moreover, two quasi-metric spaces $Q=(V,d)$ and $Q'=(V',d')$ whose betweenness relations are different could still be considered as equal when their betweenness relations are (\emph{point-wise}) isomorphic. That is, when there is a bijection $f$ between $V$ and $V'$ such that $$\forall x,y,z\in V, xyz\in \mathcal{B}(Q) \iff f(x)f(y)f(z)\in \mathcal{B}(Q'). $$ When $n=3$, we can show that, up to isomorphism, there are only five betweennesses of quasi-metric spaces. The betweennesses of these spaces on the set $\{a,b,c\}$ are: $$\emptyset, \{abc\}, \{abc,cba\}, \{abc,bca\} \text{ and }\{abc,bca,cab\}.$$ To each of them we can associate a quasi-metric space defined by a directed graph (see Figure \ref{f:3digraphs}). If $\mathcal{B}$ is a betweenness of a metric space, then $xyz\in \mathcal{B}$ implies that $zyx\in \mathcal{B}$. Hence, only the empty set and $\{abc,cba\}$ are betweennesses of metric spaces on three points.
\begin{figure} \centering \begin{tabular}{ccccc} \begin{tikzpicture}[scale=1] \node[vertex][circle, minimum size=4pt](1) at (-1,0) {}; \node[vertex][circle, minimum size=4pt](2) at (1,0) {}; \node[vertex][circle, minimum size=4pt](3) at (0,1) {}; \node (4) at (-1.3,-0.3) {$a$}; \node (5) at (1.3,-0.3) {$b$}; \node (6) at (0,1.4) {$c$}; \draw[-{Latex}] (1) -- (2); \draw[-{Latex}] (2) to[out=-160,in=-20] (1); \draw[-{Latex}] (2) -- (3); \draw[-{Latex}] (3) to[out=-10,in=100] (2); \draw[-{Latex}] (3) -- (1); \draw[-{Latex}] (1) to[out=80,in=-160] (3); \end{tikzpicture} & \begin{tikzpicture} \node[vertex][circle, minimum size=4pt](1) at (-1,0) {}; \node[vertex][circle, minimum size=4pt](2) at (1,0) {}; \node[vertex][circle, minimum size=4pt](3) at (0,1) {}; \node (4) at (-1.3,-0.3) {$a$}; \node (5) at (1.3,-0.3) {$b$}; \node (6) at (0,1.4) {$c$}; \draw[-{Latex}] (1) -- (2); \draw[-{Latex}] (2) to[out=-160,in=-20] (1); \draw[-{Latex}] (2) -- (3); \draw[-{Latex}] (3) to[out=-10,in=100] (2); \draw[-{Latex}] (3) -- (1); \end{tikzpicture} & \begin{tikzpicture} \node[vertex][circle, minimum size=4pt](1) at (-1,0) {}; \node[vertex][circle, minimum size=4pt](2) at (1,0) {}; \node[vertex][circle, minimum size=4pt](3) at (0,1) {}; \node (4) at (-1.3,-0.3) {$a$}; \node (5) at (1.3,-0.3) {$b$}; \node (6) at (0,1.4) {$c$}; \draw[-{Latex}] (1) -- (2); \draw[-{Latex}] (2) -- (3); \draw[-{Latex}] (3) to[out=-10,in=100] (2); \draw[-{Latex}] (3) -- (1); \end{tikzpicture} & \begin{tikzpicture} \node[vertex][circle, minimum size=4pt](1) at (-1,0) {}; \node[vertex][circle, minimum size=4pt](2) at (1,0) {}; \node[vertex][circle, minimum size=4pt](3) at (0,1) {}; \node (4) at (-1.3,-0.3) {$a$}; \node (5) at (1.3,-0.3) {$b$}; \node (6) at (0,1.4) {$c$}; \draw[-{Latex}] (1) -- (2); \draw[-{Latex}] (2) to[out=-160,in=-20] (1); \draw[-{Latex}] (2) -- (3); \draw[-{Latex}] (3) to[out=-10,in=100] (2); \end{tikzpicture} & \begin{tikzpicture} \node[vertex][circle, minimum size=4pt](1) at (-1,0) {};
\node[vertex][circle, minimum size=4pt](2) at (1,0) {}; \node[vertex][circle, minimum size=4pt](3) at (0,1) {}; \node (4) at (-1.3,-0.3) {$a$}; \node (5) at (1.3,-0.3) {$b$}; \node (6) at (0,1.4) {$c$}; \draw[-{Latex}] (1) -- (2); \draw[-{Latex}] (2) -- (3); \draw[-{Latex}] (3) -- (1); \end{tikzpicture} \\ $\mathcal{B}=\emptyset$ & $\mathcal{B}=\{abc\}$ & $\mathcal{B}=\{abc,bca\}$& $\mathcal{B}=\{abc,cba\}$ & $\mathcal{B}=\{abc,bca,cab\}$ \end{tabular} \caption{Directed graphs with three vertices defining quasi-metric spaces on three points with non-isomorphic betweenness.}\label{f:3digraphs} \end{figure} The lines of these quasi-metric spaces are the following. \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $\mathcal{B}$ & $\ovd{ab}$ & $\ovd{ba}$ & $\ovd{ac}$ & $\ovd{ca}$ & $\ovd{bc}$ & $\ovd{cb}$ & $|{\cal L}(Q)|$ \\ \hline $\emptyset$ & $\{a,b\}$ & $\{a,b\}$ & $\{a,c\}$ & $\{a,c\}$ & $\{b,c\}$ & $\{b,c\}$ & 3\\ \hline $\{abc\}$ & $\{a,b,c\}$ & $\{a,b\}$ & $\{a,b,c\}$ & $\{a,c\}$ & $\{a,b,c\}$ & $\{b,c\}$ & 4\\ \hline $\{abc,bca\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{b,c\}$ & 2\\ \hline $\{abc,cba\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & 1\\ \hline $\{abc,bca,cab\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & $\{a,b,c\}$ & 1\\ \hline \end{tabular} \end{center} Hence, all have the DBE property. Moreover, each betweenness on three points can be defined by a quasi-metric space defined by a directed graph. We show below a quasi-metric space $Q(4)$, on four points, with distances in $\{0,1,2,3\}$, which does not have the DBE property, showing that the Chen and Chvátal conjecture does not extend to the class of quasi-metric spaces with distances in $\{0,1,2,3\}$.
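Since, for $z\notin\{x,y\}$, membership $z\in\ovd{xy}$ holds exactly when one of the triples $zxy$, $xzy$, $xyz$ is in the betweenness, the line counts in the last column of the table can be recomputed from the betweenness relations alone. A short Python sketch (the helper `lines` is ours, not part of the paper):

```python
from itertools import permutations

def lines(points, B):
    """All lines of a quasi-metric space with point set `points` and
    betweenness relation `B` (triples xyz with y in the segment [xz])."""
    out = set()
    for x, y in permutations(points, 2):
        line = {x, y}
        for z in points:
            if z not in (x, y) and ((z, x, y) in B or (x, z, y) in B
                                    or (x, y, z) in B):
                line.add(z)
        out.add(frozenset(line))
    return out

P = ("a", "b", "c")
cases = {
    "empty":         set(),
    "{abc}":         {("a", "b", "c")},
    "{abc,bca}":     {("a", "b", "c"), ("b", "c", "a")},
    "{abc,cba}":     {("a", "b", "c"), ("c", "b", "a")},
    "{abc,bca,cab}": {("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")},
}
# Number of distinct lines in each of the five three-point cases
counts = {name: len(lines(P, B)) for name, B in cases.items()}
print(counts)
```

Running this reproduces the counts $3,4,2,1,1$ of the table above.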
However, it is essentially the only exception: our main result in this work is that any quasi-metric space on four points whose betweenness is not isomorphic to $\mathcal{B}(Q(4))$ has the DBE property. Moreover, we prove that $\mathcal{B}(Q(4))$ is neither isomorphic to the betweenness of a quasi-metric space with distances in $\{0,1,2\}$, nor to that of a space defined by a directed graph, nor to that of a metric space. As a consequence, any metric space on four points has the DBE property. The quasi-metric of the space $Q(4)$ and its representation as a weighted directed graph are given below. \begin{tabular}[b]{c c c} \begin{minipage}[b]{6cm} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $d(\cdot,\cdot)$ & $p$ & $s$ & $q$ & $r$ \\ \hline $p$ & $0$ & $1$ & $1$ & $3$ \\ \hline $s$ & $3$ & $0$ & $2$ & $3$ \\ \hline $q$ & $1$ & $2$ & $0$ & $2$ \\ \hline $r$ & $1$ & $1$ & $2$ & $0$\\ \hline \end{tabular} \end{center} $Q(4)$: quasi-metric space without Property (DBE). \end{minipage} & \ \ \ & \begin{minipage}[b]{7cm} \begin{center} \begin{tikzpicture}[scale=0.8] \node[vertex][circle, minimum size=8pt](1) at (0,2) {}; \node[vertex][circle, minimum size=8pt](2) at (-1.5,0) {}; \node[vertex][circle, minimum size=8pt](3) at (1.5,0) {}; \node[vertex][circle, minimum size=8pt](4) at (0,4) {}; \node (5) at (-0.6,4) {$p$}; \node () at (2.0,0) {$s$}; \node () at (-2.0,0) {$r$}; \node () at (-0.0,1.5) {$q$}; \node () at (0.2,2.8) {$1$}; \node () at (-1.6,2.4) {$3$}; \node () at (-1.1,2.4) {$1$}; \node () at (1.1,2.4) {$1$}; \node () at (1.6,2.4) {$3$}; \node () at (0.5,0.8) {$2$}; \node () at (-0.5,0.8) {$2$}; \node () at (0,0.3) {$3$}; \node () at (0,-0.6) {$1$}; \draw[-latex] (3) to[out=80,in=-40] (4); \draw[-latex] (4) to[out=-60,in=100] (3); \draw[-latex] (4) to[out=-140,in=100] (2); \draw[-latex] (2) to[out=80,in=-120] (4); \draw[latex-latex] (1) to (4); \draw[-latex] (3) to[out=180,in=0] (2); \draw[-latex] (2) to[out=-20,in=-160] (3); \draw[latex-latex]
(1) --(3); \draw[latex-latex] (1) -- (2); \end{tikzpicture} \end{center} Representation of $Q(4)$ as a weighted directed graph. \end{minipage} \end{tabular} \begin{proposition}\label{p:qmnotdbe} The quasi-metric space $Q(4)$ has three lines, none of which is universal. Moreover, its betweenness is neither isomorphic to that of a metric space, nor to that of a quasi-metric space with distances in $\{0,1,2\}$, nor to that of a quasi-metric space defined by a directed graph. \end{proposition} \begin{proof} One can check that the betweenness of $Q(4)$ is given by: $$\mathcal{B}(Q(4))=\{pqr,rpq,sqp,qps\}.$$ Then, the lines of $Q(4)$ are given by: $$\ovd{pq}=\ovd{pr}=\ovd{qr} = \ovd{rp}=\ovd{rq}= \{ p,q,r\},$$ $$\ovd{qp}=\ovd{sq}=\ovd{sp}=\ovd{qs}=\ovd{ps} = \{p,q,s\}$$ and $$\ovd{rs}=\ovd{sr}=\{r,s\},$$ proving the first statement. Since $pqr\in \mathcal{B}(Q(4))$ and $rqp\notin \mathcal{B}(Q(4))$, the betweenness $\mathcal{B}(Q(4))$ is not isomorphic to one of a metric space. If $\mathcal{B}(Q(4))$ is isomorphic to the betweenness of a quasi-metric space $Q=(\{a,b,c,d\},\rho)$, with $\rho:\{a,b,c,d\}^2\to \{0,1,2\}$, then without loss of generality we can assume that this isomorphism is given by $f(a)=p$, $f(b)=q$, $f(c)=r$ and $f(d)=s$ and then $\mathcal{B}(Q)=\{abc,cab,bad,dba\}$. Hence, $$\rho(a,b)=\rho(b,a)=\rho(b,c)=\rho(c,a)=\rho(d,b)=\rho(a,d)=1 $$ and $$\rho(a,c)=\rho(c,b)=\rho(d,a)=\rho(b,d)=2.$$ Since $cad\notin \mathcal{B}(Q)$ we have that $\rho(c,d)<\rho(c,a)+\rho(a,d)=2$ and then $\rho(c,d)=1$. This implies the contradiction $bcd\in \mathcal{B}(Q)$. Therefore, $\mathcal{B}(Q(4))$ is not isomorphic to the betweenness of a quasi-metric space with distances in $\{0,1,2\}$. From this we deduce that $\mathcal{B}(Q(4))$ is not isomorphic to the betweenness of a quasi-metric space defined by a directed graph. Indeed, the diameter of a directed graph with four vertices is at most three.
When it is less than three, the directed graph defines a quasi-metric space with distances in $\{0,1,2\}$ and we have shown that in this case its betweenness is not isomorphic to $\mathcal{B}(Q(4))$. When it is three, a shortest path realizing a distance-three pair passes through the two remaining vertices, so the line defined by that pair is universal, and thus its betweenness cannot be isomorphic to $\mathcal{B}(Q(4))$ either. \end{proof} We now prove our main result. \begin{theorem}\label{p:unique} Let $P=(\{a,b,c,d\},\rho)$ be a quasi-metric space without universal lines and with fewer than four lines. Then $\mathcal{B}(P)$ is isomorphic to $\mathcal{B}(Q(4))$. \end{theorem} \begin{proof} Let $\mathcal{B}=\mathcal{B}(P)$ be the betweenness of $P$. We can assume that $\mathcal{B} \neq \emptyset$ as otherwise, for each pair of points $x,y$ of $P$ the line $\ovd{xy}$ contains only $x$ and $y$ and then $P$ would have six different lines. Without loss of generality we assume that $abc\in \mathcal{B}$. Then, $a\in \ovd{bc}$, $b\in \ovd{ac}$ and $c\in \ovd{ab}$. Since $P$ has four points and no universal line, we deduce that $$\ovd{ab}=\ovd{ac}=\ovd{bc}=\{a,b,c\}=:\ell_1 .$$ The point $d$ does not belong to any of these lines. In terms of the betweenness this fact can be expressed as follows. \begin{eqnarray} d \notin \ovd{ab} \iff \{dab,adb,abd\} \cap \mathcal{B} = \emptyset \label{eq:ab} \\ d \notin \ovd{bc} \iff \{dbc,bdc,bcd\} \cap \mathcal{B} = \emptyset \label{eq:bc} \\ d \notin \ovd{ac} \iff \{dac,adc,acd\} \cap \mathcal{B} = \emptyset. \label{eq:ac} \end{eqnarray} If the point $d$ appears in no triple of $\mathcal{B}$, then each line $\ovd{dx}$ contains only $d$ and $x$, for each $x\in\{a,b,c\}$, which would imply that $P$ has more than three lines. Thus, there is a triple in $\mathcal{B}$ containing the point $d$. From (\ref{eq:ab}), (\ref{eq:bc}) and (\ref{eq:ac}) the only options for this triple are those associated with $d\in \ovd{ca}\cup \ovd{cb}\cup \ovd{ba}$. We first prove that $d\notin \ovd{ca}$.
For the sake of contradiction, let us assume that $d\in \ovd{ca}$. Then, $\ovd{ca}\neq \ell_1$ and since $P$ has no universal line we get that $$\ell_2:=\ovd{ca}=\{a,c,d\}.$$ Moreover, $b\notin \ell_2=\ovd{ca}$ and $d\notin \ell_1$ imply $\ovd{db},\ovd{bd}\notin \{\ell_1,\ell_2\}$. It also implies that $a\notin \ovd{cb}$, since $cab,cba$ are not in $\mathcal{B}$, and, as $abc\in \mathcal{B}$, we also know that $acb\notin \mathcal{B}$. From the fact that $a\notin \ovd{cb}$, we get that $\ovd{cb}\notin \{\ell_1, \ell_2\}$ and then, $$\ell_3=\{b,c,d\}=\ovd{cb}=\ovd{bd}=\ovd{db}.$$ This implies that $c\in \ovd{bd}$ which in turn implies that $cbd\in \mathcal{B}$, since $d\notin \ovd{bc}$. As $a\notin \ell_3$ we get that $\ell_2=\ovd{ad}=\ovd{da}=\ovd{ca}$. This implies that $c\in \ovd{ad}$ which, as before, implies that $cad\in \mathcal{B}$, since $d\notin \ovd{ac}$. But, $cbd,cad\in \mathcal{B}$ implies that $\ovd{cd}=\{a,b,c,d\}$ which establishes the contradiction. We know that $d\notin \ovd{ac}$ and we have just proved that $d\notin \ovd{ca}$. Then, no triple in $\mathcal{B}$ contains $a$, $c$ and $d$. Thus, $c\notin \ovd{ad}\cup \ovd{da}$ and $a\notin \ovd{cd}\cup \ovd{dc}$ which implies that the three lines in $P$ are \begin{eqnarray*} \ell_1& =& \{a,b,c\}=\ovd{ab}=\ovd{ac}=\ovd{bc}=\ovd{ca} \\ \ell_2 & = & \ovd{da}=\ovd{ad}, c\notin \ell_2 \\ \ell_3 & = & \ovd{dc}=\ovd{cd}, a\notin \ell_3 \end{eqnarray*} Our analysis now splits into two cases: $b\in \ell_2$ or $b\in \ell_3$. If $b\in \ell_2=\ovd{ad}$, then at least one of the triples $bad,abd$ or $adb$ belongs to $\mathcal{B}$. We know that $d\notin \ovd{ab}$ which implies that the triples $dab,adb,abd$ are not in $\mathcal{B}$. Thus, we get that $bad\in \mathcal{B}$. Similarly, as $\ell_2$ is also equal to $\ovd{da}$ we get that the triple $dba$ belongs to $\mathcal{B}$ and that the triples $bda$ and $dab$ do not belong to $\mathcal{B}$.
Hence $bad$ and $dba$ are the only triples in $\{a,b,d\}^3$ which belong to $\mathcal{B}$. From the fact that the triple $bad$ belongs to $\mathcal{B}$ we get that $\ovd{ba}=\ovd{bd}=\ovd{db}=\ell_2$. Now, $b\in \ovd{ca}$ if and only if one of the triples $bca$, $cba$ or $cab$ belongs to $\mathcal{B}$. But, as $c\notin \ell_2=\ovd{ba}$ we get that the triple $cab$ is in $\mathcal{B}$ and that the triples $cba,bca$ are not in $\mathcal{B}$. Since we are assuming that $abc\in \mathcal{B}$ we also get that the triples $bac$ and $acb$ are not in $\mathcal{B}$. Hence, we get that $\mathcal{B}\cap \{a,b,c\}^3=\{abc,cab\}$. As above, from the fact that the triple $cab$ belongs to $\mathcal{B}$ we get that $a\in \ovd{cb}=\{a,b,c\}=\ell_1$. Then, $d\notin \ovd{cb}\cup \ovd{bc}$ which proves that $\mathcal{B}$ contains no triple containing $b,c$ and $d$. This shows that $\ell_3=\{c,d\}$ and that $$\mathcal{B}=\{abc,bad,cab,dba\}.$$ It is easy to see that the mapping associating $p$ to $a$, $q$ to $b$, $r$ to $c$ and $s$ to $d$ shows that $\mathcal{B}(Q(4))$ and $\mathcal{B}$ are isomorphic. A similar argument can be applied when $b\in \ell_3=\ovd{dc}$. In this situation we get that $\mathcal{B}\cap \{b,c,d\}^3=\{dcb,cbd\}$ which implies that $\ovd{cb}=\{b,c,d\}=\ell_3$. We also get that $\mathcal{B}\cap \{a,b,c\}^3=\{abc,bca\}$ which shows that $\ovd{ba}=\{a,b,c\}=\ell_1$ from which we conclude that $\mathcal{B}$ contains no triple containing $a$ and $d$. Therefore, $\mathcal{B}=\{abc,bca,cbd,dcb\}$ and the mapping associating $p$ to $b$, $q$ to $c$, $r$ to $a$ and $s$ to $d$ shows that $\mathcal{B}$ and $\mathcal{B}(Q(4))$ are indeed isomorphic. \end{proof} \begin{corollary} The DBE property is valid for every quasi-metric space on four points which is metric, has distances in $\{0,1,2\}$ or is defined by a directed graph. \end{corollary} \begin{proof} Direct from the previous theorem and Proposition \ref{p:qmnotdbe}. \end{proof}
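The computations behind Proposition \ref{p:qmnotdbe} can be checked mechanically from the distance table of $Q(4)$. A minimal Python sketch (the helper code is ours, not part of the paper) recovers the betweenness and the three lines:

```python
from itertools import permutations

# Distance table of Q(4): rows are the first argument, columns the second
d = {
    ("p", "p"): 0, ("p", "s"): 1, ("p", "q"): 1, ("p", "r"): 3,
    ("s", "p"): 3, ("s", "s"): 0, ("s", "q"): 2, ("s", "r"): 3,
    ("q", "p"): 1, ("q", "s"): 2, ("q", "q"): 0, ("q", "r"): 2,
    ("r", "p"): 1, ("r", "s"): 1, ("r", "q"): 2, ("r", "r"): 0,
}
V = ("p", "q", "r", "s")

# d is indeed a quasi-metric: triangle inequality for all ordered triples
assert all(d[x, y] <= d[x, z] + d[z, y] for x in V for y in V for z in V)

# Betweenness: triples xyz of distinct points with d(x,z) = d(x,y) + d(y,z)
B = {(x, y, z) for x, y, z in permutations(V, 3)
     if d[x, z] == d[x, y] + d[y, z]}
assert B == {("p","q","r"), ("r","p","q"), ("s","q","p"), ("q","p","s")}

# Lines, computed from the betweenness as in the definition of ovd{xy}
lines = set()
for x, y in permutations(V, 2):
    line = {x, y}
    for z in V:
        if z not in (x, y) and ((z, x, y) in B or (x, z, y) in B
                                or (x, y, z) in B):
            line.add(z)
    lines.add(frozenset(line))

# Three lines, none universal (none contains all four points)
assert len(lines) == 3 and all(len(l) < 4 for l in lines)
```

The resulting lines are exactly $\{p,q,r\}$, $\{p,q,s\}$ and $\{r,s\}$, as in the proof.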
https://arxiv.org/abs/2209.10028
Lines in quasi-metric spaces with four points
https://arxiv.org/abs/2006.11402
Subspace controllability of multi-partite spin networks
In a network of spin 1/2 particles, controlled through an external electro-magnetic field, the gyromagnetic ratio of each spin is a parameter that characterizes the interaction of the spin with the external control field. Multipartite networks are such that the spins are divided into subsets according to their gyromagnetic ratio, and spins in one set interact in the same way with all spins in another set. Due to the presence of symmetries in this type of system, the underlying Hilbert state space splits into invariant subspaces for the dynamics. Subspace controllability is verified if every unitary evolution can be generated by the dynamics on these subspaces. We give an exact characterization, in terms of graph theoretic conditions, of subspace controllability for multipartite quantum spin networks. This extends and unifies previous results.
\section{Introduction and statement of main result} The dynamics of quantum mechanical systems, subject to a control electromagnetic field, can often be described by the Schr\"odinger equation in the form \be{Scro1} \dot \psi=A \psi+\sum_{j=1}^mB_j u_j \psi, \end{equation} where $u_j$, $j=1,...,m$, are the control variables and $\{A,B_1,...,B_m\}$ are given operators, with $\psi$ denoting the state of the quantum system, varying in the underlying Hilbert space ${\cal H}$. In finite dimensions, the controllability properties of system (\ref{Scro1}) are usually assessed using the {\it Lie algebra rank condition} (see, e.g., \cite{HT}, \cite{JS}). One calculates the Lie algebra ${\cal G}$ generated by the matrices $\{A,B_1,...,B_m\}$, which is called the {\bf dynamical Lie algebra}. Denoting by $e^{\cal G}$ the connected Lie group associated with ${\cal G}$, assumed compact, the condition says that the reachable set ${\cal R}_{\psi_0}$ for (\ref{Scro1}) starting from $\psi_0$ is given by $$ {\cal R}_{\psi_0}=\{ X \psi_0 \, | \, X \in e^{\cal G}\}. $$ In the case of large systems, it is important to find ways to assess controllability which avoid the repeated calculation of commutators of very large matrices in (\ref{Scro1}). Such controllability criteria should be easily related to the physical structure of the system under consideration. One example of a large system is given by networks of $n$ interacting spin $\frac{1}{2}$ particles, where the dimension of the Hilbert space ${\cal H}$ grows exponentially with $n$, as $2^n$. In some cases, {\it graph theoretic conditions} have been given to assess the controllability of quantum systems (see, e.g., \cite{NoiLAAold}, \cite{Turinici}), and this paper has this objective as well.
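On small examples, the dynamical Lie algebra can be computed numerically by closing the generating set under commutators until the real linear span stabilizes. The following Python/NumPy sketch is illustrative only: the closure routine and the sample parameters (two Ising-coupled spin-$\frac{1}{2}$ particles with gyromagnetic ratios $1$ and $2$ and unit coupling, with collective $x,y,z$ controls) are our own choices, not part of the paper.

```python
import numpy as np

# Spin-1/2 operators: Pauli matrices divided by 2 (sign convention as in
# the text below for sigma_y).
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, 1j], [-1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def lie_closure(generators, tol=1e-9):
    """Close a list of matrices under commutators; return a basis of the
    generated real Lie algebra (Gram-Schmidt on vectorized matrices)."""
    basis, vecs = [], []
    def try_add(m):
        v = np.concatenate([m.real.ravel(), m.imag.ravel()])
        for u in vecs:                   # project out the current span
            v = v - np.dot(u, v) * u
        n = np.linalg.norm(v)
        if n < tol:
            return False
        vecs.append(v / n)
        basis.append(m)
        return True
    new = [g for g in generators if try_add(g)]
    while new:
        fresh = []
        for a in new:
            for b in list(basis):
                if try_add(a @ b - b @ a):
                    fresh.append(basis[-1])
        new = fresh
    return basis

# Sanity check: {i sigma_x, i sigma_y} generates su(2) (dimension 3)
assert len(lie_closure([1j * sx, 1j * sy])) == 3

# Two Ising-coupled spins, gyromagnetic ratios 1 and 2, coupling i sz (x) sz
gens = [1j * (1 * np.kron(s, I2) + 2 * np.kron(I2, s)) for s in (sx, sy, sz)]
gens.append(1j * np.kron(sz, sz))
assert len(lie_closure(gens)) == 15    # = dim su(4): full controllability
```

For larger networks this brute-force closure quickly becomes impractical, which is precisely the motivation for structural, graph theoretic criteria.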
In the presence of a group of symmetries $G$, i.e., a (discrete) group of matrices commuting with the matrices $\{A,B_1,...,B_m\}$ in (\ref{Scro1}), the underlying Hilbert space ${\cal H}$ for the system splits into the direct sum of invariant subspaces for the dynamics (\ref{Scro1}) and, in an appropriate basis, the matrices $\{A,B_1,...,B_m\}$ in (\ref{Scro1}) take a block diagonal form \cite{ConJonas}. Transitions from one subspace to the other are forbidden and therefore controllability is lost. For a network of $n$ spins, the topology of the network itself often suggests the symmetries to be considered, which typically are subgroups of the permutation groups leaving the network unchanged. For example, for the network of Figure \ref{F1} the group of permutations on the three spins in the set $Cl_3$ is a symmetry group for the system. In these cases, it is of interest to investigate whether one has controllability within the invariant subspaces. This property is called {\bf subspace controllability} and it has been investigated in several recent papers (see, e.g., \cite{NoiLAA}, \cite{Xinhua1}, \cite{Dan1}, \cite{Dan2}). We shall see that, in general, this property is subspace-dependent, that is, for the same decomposition, there might be some subspaces of dimension $D$ where the restriction of the dynamical Lie algebra is the {\it full} special unitary Lie algebra $su(D)$ and some others where it is not. In the first case subspace controllability is verified, in the second case it is not. In this paper, we shall explore subspace controllability for networks of spin $\frac{1}{2}$ particles in the multipartite configuration. This means that spin particles are collected in sets, which we shall call {\it clusters}, according to the value of their {\it gyromagnetic ratio}, that is, the parameter which models the interaction with an external control field. Spins in the same cluster interact in the same way with spins in another cluster.
These systems present a group of symmetries given by permutations of the spins within the same cluster. We shall give in Theorem \ref{MainT} a necessary and sufficient condition for subspace controllability for such systems in graph theoretic terms. This result will extend the result of \cite{NoiLAA}, which only dealt with the bipartite case and with bounds on the number of spins in one cluster. We remove such bounds. The technique we shall use is different from the one in \cite{NoiLAA}, which was based on a direct computation of the dynamical Lie algebra. Here we shall use techniques of representation theory and, in particular, the {\it Clebsch-Gordan decomposition} (see, e.g., \cite{FH}, \cite{Ramond}, \cite{Woit}) of the tensor product representation of $su(2)$. Our result also generalizes the result of \cite{NoiLAAold}, which is found as a special case when all the spins have different gyromagnetic ratios. \subsection{Basic notations} { We recall the definition of the Pauli matrices \be{PauliMat} \sigma_x:=\frac{1}{2} \begin{pmatrix} 0 & 1 \cr 1 & 0 \end{pmatrix}, \qquad \sigma_y:= \frac{1}{2} \begin{pmatrix} 0 & i \cr -i & 0 \end{pmatrix}, \qquad \sigma_z:= \frac{1}{2} \begin{pmatrix} 1 & 0 \cr 0 & -1\end{pmatrix}, \end{equation} which satisfy the basic commutation relations \be{commurel} [i\sigma_x,i\sigma_y]=i\sigma_z, \qquad [i\sigma_y,i\sigma_z]=i\sigma_x, \qquad [i\sigma_z,i\sigma_x]=i\sigma_y, \end{equation} and \be{basic2} \sigma_x^2=\sigma_y^2=\sigma_z^2=\frac{1}{4}{\bf 1}, \qquad \{ \sigma_x ,\sigma_y\}=\{ \sigma_y, \sigma_z\}=\{ \sigma_z, \sigma_x\}=0, \end{equation} where $\{A, B\}$ is the {\it anticommutator} of $A$ and $B$, that is, $\{A,B\}:=AB+BA$. Here and in the following ${\bf 1}$ always denotes the identity matrix or operator, the dimension being understood from the context.
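The relations (\ref{commurel}) and (\ref{basic2}) are easy to confirm numerically; a quick NumPy check (with the sign convention for $\sigma_y$ used above):

```python
import numpy as np

# The sigma matrices of the text: Pauli matrices divided by 2, with the
# paper's sign convention for sigma_y.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, 1j], [-1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):  return a @ b - b @ a   # commutator [A, B]
def acomm(a, b): return a @ b + b @ a   # anticommutator {A, B}

# Cyclic su(2) commutation relations
assert np.allclose(comm(1j * sx, 1j * sy), 1j * sz)
assert np.allclose(comm(1j * sy, 1j * sz), 1j * sx)
assert np.allclose(comm(1j * sz, 1j * sx), 1j * sy)

# Squares equal to 1/4 of the identity; anticommutators vanish
for s in (sx, sy, sz):
    assert np.allclose(s @ s, 0.25 * np.eye(2))
for a, b in ((sx, sy), (sy, sz), (sz, sx)):
    assert np.allclose(acomm(a, b), np.zeros((2, 2)))
print("all relations verified")
```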
The matrices $\{i\sigma_x, i\sigma_y, i\sigma_z\}$ along with the commutation relations (\ref{commurel}) form an irreducible 2-dimensional representation of $su(2)$, the {\it standard representation}. Given a certain positive integer $\tilde{n}$, which is usually determined by the context, the matrices $S_{x,y,z}$ are defined as the sums of $\tilde{n}$ terms where each term is the tensor product of $\tilde n$ factors, each being the $2 \times 2$ identity except the $l$-th one which is $\sigma_{x,y,z}$, for $l=1,2,...,\tilde{n}$. So, for example, for $\tilde{n}=3$, $$ S_x=\sigma_x \otimes {\bf 1} \otimes {\bf 1}+ {\bf 1} \otimes \sigma_x \otimes {\bf 1} + {\bf 1} \otimes {\bf 1} \otimes {\sigma_x}. $$ We denote by $I_{gb}$ the sum of matrices where each term of the sum is the tensor product of $\tilde{n}$ identities except for one position occupied by $\sigma_g$ and one occupied by $\sigma_b$, and vice versa. The sum extends over all possible pairs of locations and therefore contains $\tilde{n}(\tilde{n}-1)$ terms. For example, for $\tilde{n}=3$, $I_{xy}$ is equal to $$ I_{xy}=\sigma_x \otimes \sigma_y \otimes {\bf 1}+ \sigma_y\otimes \sigma_x \otimes {\bf 1}+ \sigma_x \otimes {\bf 1} \otimes \sigma_y + \sigma_y\otimes {\bf 1} \otimes \sigma_x + {\bf 1} \otimes \sigma_x \otimes \sigma_y + {\bf 1} \otimes \sigma_y \otimes \sigma_x. $$ { We consider a network of spin $\frac{1}{2}$ particles grouped in $N$ {\bf clusters} of indistinguishable spins. Clusters are defined as sets of spin particles which have the same gyromagnetic ratios. Moreover, we assume that each spin in a cluster interacts in the same way with spins in a different cluster, while spins in the same cluster do not interact with each other. Any permutation of the spins belonging to the same cluster will leave the dynamics unchanged.
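The operators $S_{x,y,z}$ and $I_{xy}$ for $\tilde{n}=3$ can be built with Kronecker products; the sketch below (helper names are ours) also confirms that the collective operators satisfy the same $su(2)$ commutation relations, which is the content of (\ref{commurel2}) below.

```python
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, 1j], [-1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(factors):
    """Tensor (Kronecker) product of a list of 2x2 factors."""
    out = np.eye(1, dtype=complex)
    for f in factors:
        out = np.kron(out, f)
    return out

n = 3  # the example with n~ = 3 spins

def S(sig):
    """Collective operator: sigma in one slot, identity elsewhere, summed."""
    return sum(embed([sig if l == k else I2 for l in range(n)])
               for k in range(n))

Sx, Sy, Sz = S(sx), S(sy), S(sz)

# I_xy: sigma_x in one slot, sigma_y in another, over all ordered pairs
Ixy = sum(embed([sx if l == j else (sy if l == k else I2) for l in range(n)])
          for j in range(n) for k in range(n) if j != k)  # n(n-1) = 6 terms

assert Sx.shape == (8, 8)              # dimension 2^3
assert np.allclose(Ixy, Ixy.conj().T)  # sum of Hermitian terms
# the su(2) relations survive the tensor-product construction:
assert np.allclose((1j*Sx) @ (1j*Sy) - (1j*Sy) @ (1j*Sx), 1j * Sz)
```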
} If a network has $N$ clusters, with the $k$-th cluster having $n_k$ spin particles, we denote by $A^{j}$ the matrix which is the tensor product of $N$ factors, where the factor in position $k$ is the identity of dimension $2^{n_k}$, except the factor in position $j$, which is the matrix $A$, of dimension $2^{n_j} \times 2^{n_j}$. Examples of these types of matrices we shall often use are $S_{x,y,z}^j$, { $j=1,\ldots,N$. So, for example: \[S^2_x={\bf 1} \otimes S_x\otimes {\bf 1} \otimes\cdots \otimes {\bf 1},\] where $S_x$ has dimension $2^{n_2}$ and the identity matrix in the first position has dimension $2^{n_1}$, the one in the third position has dimension $2^{n_3}$, and so on. } Extending this notation, the matrices of the form $A^{j} B^k$, with $j\not=k$, $j,k \in \{1,2,...,N\}$, can be seen as the product of $A^j$ and $B^k$ but also as tensor products of identities, with various dimensions, except in the positions $j$ and $k$ occupied by $A$ and $B$, respectively, of dimensions $2^{n_j}$ and $2^{n_k}$. This notation is naturally extended to any number of factors in the product besides two. \subsection{The model}\label{TM} The quantum control system model we shall study in this paper is a network of spin $\frac{1}{2}$ particles interacting with each other. We have grouped the spins in $N$ clusters of indistinguishable spins, each interacting with the same coupling constant with spins in other clusters. The interaction is assumed to be of the Ising $Z-Z$ form {($S^j_z S^k_z$) (although the results will be extended to every other type of two-body interaction (coupling) in Section \ref{Exte})}. The network is represented by a {\bf connectivity graph} where each node represents a cluster of equivalent spins and there is an edge connecting two nodes if there is a nonzero interaction between spins in the corresponding clusters. We assume that the interactions between spins in two different clusters are all equal.
For example, the network of Figure \ref{F1} consists of a total of eight spin $\frac{1}{2}$ particles, two of them in the first cluster ($Cl_1$), two of them in the second one ($Cl_2$), three of them in the third one ($Cl_3$) and one in the fourth one ($Cl_4$). The lines represent nonzero interactions which are assumed to be the same for spins belonging to the same pair of clusters. The connectivity graph for such a network is given in Figure \ref{F2}. \begin{figure}[h] \centerline{\includegraphics[width=4.8in,height=2.5in]{fromFra1}} \caption{Example of a multipartite spin network} \label{F1} \end{figure} \begin{figure}[h] \centerline{\includegraphics[width=4.8in,height=2.5in]{fromFra1-b}} \caption{Connectivity graph for the network of Figure \ref{F1}} \label{F2} \end{figure} The Schr\"odinger equation which models the dynamics takes the form (\ref{Scro1}) with \be{A} iA:=\sum_{1\leq j < k \leq N} A_{j,k}S_z^jS_z^k, \end{equation} with $A_{j,k}$ the {\it coupling constants} and \be{B} iB_{x,y,z}=\sum_{j=1}^N\gamma_j S_{x,y,z}^j, \end{equation} where $\gamma_j$ are the {\it gyromagnetic ratios} of the spins in the cluster $j$, assuming an isotropic type of interaction with the three components of the electro-magnetic field $u_{x,y,z}$. We assume that some of the coupling constants $A_{j,k}$ are different from zero so that the {\em {connectivity graph associated with the network is connected.}} This is done without loss of generality since, if the graph has several connected components, we can repeat the analysis we shall perform on each one of them. The dynamical Lie algebra ${\cal G}$, for this type of system, is generated by $\{A,B_{x}, B_y, B_z \}$ in (\ref{A}) and (\ref{B}). A crucial observation for our development is that, with ${n}$ spin particles, $\{ iS_x, iS_y, iS_z\}$ span a $2^{{n}}$-dimensional representation of $su(2)$ since they satisfy (cf.
(\ref{commurel})) \be{commurel2} [iS_x,iS_y]=iS_z, \qquad [iS_y,iS_z]=iS_x, \qquad [iS_z,iS_x]=iS_y. \end{equation} This representation coincides with the tensor product of ${n}$ copies of the standard representations (see, e.g., \cite{Woit}) as it will be further elaborated upon in the following. By using (\ref{commurel2}), we have that \[ [B_{x},B_{y}]=i\sum_{j=1}^N\gamma^2_j S_{z}^j, \quad [B_{y},B_{z}]=i\sum_{j=1}^N\gamma^2_j S_{x}^j, \quad [B_{z},B_{x}]=i\sum_{j=1}^N\gamma^2_j S_{y}^j, \] belong to ${\cal G}$, and by iterating the Lie brackets, we have that all \[ i\sum_{j=1}^N\gamma^l_j S_{z}^j, \qquad i\sum_{j=1}^N\gamma^l_j S_{x}^j, \qquad i\sum_{j=1}^N\gamma^l_j S_{y}^j, \] for $l \geq 1$, belong to ${\cal G}$. Using a Vandermonde determinant type of argument and assuming, as we will, that the $\gamma_{j}$'s are all different from zero (besides being different from each other), it follows that $iS_{x,y,z}^j$ for $j=1,...,N$, also belong to ${\cal G}$. Therefore, the dynamical Lie algebra ${\cal G}$ is generated by $$ {\cal S}:=\{ iS_x^j,i S_y^j, iS_z^j |\, j=1,...,N\}, \qquad A=-i \sum_{1\leq j < k \leq N} A_{j,k}S_z^jS_z^k. $$ We also have \bl{Lemma1} The Lie algebra ${\cal G}$ is the same as the one generated by ${\cal S}$ and by all the $ i S^j_z S_z^k $ such that $A_{j,k}\not=0$. \end{lemma} \begin{proof} Without loss of generality, set $j=1$ and $k=2$, and assume $A_{1,2}\not=0$. We want to show that $i S^1_zS^2_z$ belongs to ${\cal G}$. Start with $[A, i S^1_x]$ to obtain $H_1:=-i \sum_{l>1}A_{1l} S_y^1 S_z^l$. Then take $[H_1, i S^1_x]$ to obtain $H_2=i \sum_{l>1}A_{1l} S_z^1 S_z^l$. Then take $[H_2, i S_x^2]$ to obtain $H_3=-iA_{12} S_z^1 S_y^2$. Then take $[H_3,i S_x^2]$ to obtain $iA_{12} S_z^1 S_z^2$. Since $A_{12}\not=0$, we obtain the result. \end{proof} \subsection{Decomposition in invariant subspaces and subspace controllability} Let $n_j$ denote the number of spins in the $j$-th cluster.
According to the postulates of quantum mechanics, the subsystem corresponding to the $j$-th cluster lives in a Hilbert space $(V^1)^{\otimes n_j}$, where $V^1$ denotes the two dimensional (spin $\frac{1}{2}$) carrier of the standard representation of $su(2)$. The full Hilbert state space is therefore \be{fullspace} {\cal H}=(V^1)^{\otimes n_1} \otimes (V^1)^{\otimes n_2} \otimes \cdots \otimes (V^1)^{\otimes n_N}. \end{equation} Extending the above notation, let us denote by $V^l$ the spin $\frac{l}{2}$ irreducible representation of $su(2)$. Here $V^l$ has (complex) dimension $l+1$. Using (iteratively) the {\bf Clebsch-Gordan decomposition} (see, e.g., \cite{Woit}) we have that $(V^1)^{\otimes n_j}$ decomposes into the direct sum of a number of (possibly repeated) subspaces $V^{n_j}$, $V^{n_j-2}$,..., where the last term is $V^0$ or $V^1$ according to whether $n_j$ is even or odd, respectively. It is not important for our purposes how many copies of the same $V^l$ are present. This will be determined on a case by case basis according to the iteration for the given cluster. { For a fixed cluster $j$, the matrices $S^j_{x,y,z}$ act on each space $V^l$ as the $\frac{l}{2}$ irreducible representation of $su(2)$. In particular, when $l=0$ they are zero. This will be used in the following.} \bex{fromeex} Consider the network of spins of Figure \ref{F1} and the first cluster, for which the Clebsch-Gordan decomposition gives $V^1\otimes V^1=V^{1+1}\oplus V^{1+1-2}=V^2 \oplus V^0$. For the third cluster the Clebsch-Gordan decomposition gives $$ V^1 \otimes V^1 \otimes V^1=(V^2 \oplus V^0) \otimes V^1=(V^2 \otimes V^1) \oplus (V^0 \otimes V^1)= V^3\oplus V^1 \oplus V^1. $$ For the second cluster, we have $V^1 \otimes V^1=V^2\oplus V^0$ and for the fourth cluster, we have $V^1$.
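These iterated decompositions can also be reproduced mechanically. The following Python sketch (an illustration of ours, not part of the model; all function names are our own) implements the elementary Clebsch-Gordan step $V^a \otimes V^1=V^{a+1}\oplus V^{a-1}$ (with $V^0 \otimes V^1=V^1$) and recovers the decompositions above:

```python
# Iterated Clebsch-Gordan decomposition of (V^1)^{x n}:
# V^l denotes the spin-l/2 irrep of su(2), of dimension l + 1.
from collections import Counter

def tensor_with_spin_half(labels):
    """One CG step: tensor a multiset of irreps V^a with V^1.
    V^a (x) V^1 = V^{a+1} (+) V^{a-1} for a >= 1, and V^0 (x) V^1 = V^1."""
    out = Counter()
    for a, mult in labels.items():
        if a == 0:
            out[1] += mult
        else:
            out[a + 1] += mult
            out[a - 1] += mult
    return out

def decompose_cluster(n):
    """Decompose (V^1)^{x n} into irreps V^l with multiplicities."""
    labels = Counter({1: 1})
    for _ in range(n - 1):
        labels = tensor_with_spin_half(labels)
    return labels

# Third cluster of Figure 1 (n = 3): V^3 (+) V^1 (+) V^1
print(decompose_cluster(3))   # Counter({1: 2, 3: 1})
```

A dimension check, $\sum_l (l+1)\cdot\mathrm{mult}(V^l)=2^{n_j}$, confirms that no summand is lost in the iteration.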
\end{example} We consider as {\em{ invariant }} subspaces of the full system of $N$ clusters of spins the spaces \be{subspacesF} S =F_1 \otimes F_2 \otimes \cdots \otimes F_N, \end{equation} where $F_j$, $j=1,...,N$, is one of the spaces $V^{n_j}$, $V^{n_j-2}$,.... The spaces (\ref{subspacesF}) are indeed invariant under the dynamical Lie algebra ${\cal G}$ since they are invariant under the generators. {We shall see later (see Remark \ref{minimality}) that they are {\it minimal} invariant, that is, they contain no proper nontrivial invariant subspaces. In the language of representation theory, they carry {\it irreducible} representations of the dynamical Lie algebra ${\cal G}$.} {\color{black} As a result of the Clebsch-Gordan decomposition on each factor corresponding to a cluster, the full Hilbert space ${\cal H}$ in (\ref{fullspace}) decomposes into the direct sum of invariant spaces of the form (\ref{subspacesF}). We can then take a basis of the full Hilbert space ${\cal H}$ by putting together the (orthogonal) bases of the subspaces of the type (\ref{subspacesF})}. In this basis the dynamical Lie algebra ${\cal G}$ takes a block diagonal form. The dimension of each subspace $S$ in (\ref{subspacesF}) is \be{dimensioni} D^S:=\dim(F_1)\dim(F_2)\cdots \dim(F_N). \end{equation} { Subspace controllability} is a feature of each invariant subspace in (\ref{subspacesF}). \bd{SCdef} An invariant subspace (\ref{subspacesF}) is said to be {\bf subspace controllable} if and only if, for every $M$ in $su(D^S)$, there exists a matrix in ${\cal G}$ such that its restriction to $S=F_1 \otimes F_2 \otimes \cdots \otimes F_N$ in (\ref{subspacesF}) is equal to $M$. The full system is called subspace controllable if every invariant subspace is subspace controllable.
More generally, we define a {\bf subspace dynamical Lie algebra} ${\cal G}_S$ for the subspace (\ref{subspacesF}) as the largest Lie subalgebra of $su(D^S)$ such that for every matrix $M \in {\cal G}_S$ there exists an element in ${\cal G}$ whose restriction to $S=F_1 \otimes F_2 \otimes \cdots \otimes F_N$ is equal to $M$. Subspace controllability is verified when ${\cal G}_S=su(D^S)$. \end{definition} \subsection{Statement of main result} The subspace dynamical Lie algebra, and therefore subspace controllability, can be assessed using a graph associated with the invariant subspace (\ref{subspacesF}), which we shall call the {\bf associated graph}. Such a graph is obtained from the connectivity graph of the spin network by removing the nodes corresponding to values of $j$ such that $F_j=V^0$ in (\ref{subspacesF}) and all the edges having such nodes as an endpoint. Even if the original connectivity graph was connected (as we have assumed), the resulting associated graph for a subspace (\ref{subspacesF}) might not be connected, and, in general, it will have a number $m_c$ of connected components ${\cal C}_1$, ${\cal C}_2$,...,${\cal C}_{m_c}$. We define the dimension associated with the $h$-th connected component as (cf., (\ref{dimensioni})) \be{dimensioni2} D^S_h:=\prod_{j \in {\cal C}_h} \dim(F_j). \end{equation} In the special case where $m_c=1$, we have only $D^S_1$, which coincides with $D^S$ in (\ref{dimensioni}). \bex{fromeex2} Reconsider the network of Example \ref{fromeex} and Figures \ref{F1} and \ref{F2}, for which we have calculated the decompositions for each cluster as $V^2 \oplus V^0,$ $V^2 \oplus V^0,$ $V^3 \oplus V^1 \oplus V^1, $ $V^1.
$ The possible invariant subspaces (\ref{subspacesF}) are $T_{2,2,3,1}:=V^2\otimes V^2 \otimes V^3 \otimes V^1$, $T_{2,2,1,1}:=V^2\otimes V^2 \otimes V^1 \otimes V^1$, $T_{2,0,3,1}:=V^2\otimes V^0 \otimes V^3 \otimes V^1$, $T_{2,0,1,1}:=V^2\otimes V^0 \otimes V^1\otimes V^1$, $T_{0,2,3,1}:=V^0\otimes V^2 \otimes V^3 \otimes V^1$, $T_{0,2,1,1}:=V^0\otimes V^2 \otimes V^1 \otimes V^1$, $T_{0,0,3,1}:=V^0\otimes V^0 \otimes V^3 \otimes V^1$, $T_{0,0,1,1}:=V^0\otimes V^0 \otimes V^1 \otimes V^1$. In Figure \ref{Fig2} we report the associated graphs for $T_{2,2,3,1}$ (which coincides with the connectivity graph), $T_{2,0,3,1}$, and $T_{0,2,1,1}$. \end{example} \begin{figure}[h] \centerline{\includegraphics[width=4.8in,height=3.5in]{Assgraph-corrected}} \caption{Associated graphs for invariant subspaces $T_{2,2,3,1}$ (b), $T_{2,0,3,1}$ (c), $T_{0,2,1,1}$ (d), as compared with the connectivity graph of the network in part (a). } \label{Fig2} \end{figure} \vspace{0.25cm} The following result is the main theorem of this paper. It allows us to characterize the subspace dynamical Lie algebra, and therefore subspace controllability, in every case. \bt{MainT} Consider an invariant subspace of the form (\ref{subspacesF}) and its associated graph with $m_c$ connected components. Then, the subspace dynamical Lie algebra ${\cal G}_S$ has the form of a direct sum \be{formaSDLA} {\cal G}_S={\cal G}_1 \otimes {\bf 1} \otimes \cdots \otimes {\bf 1}+{\bf 1} \otimes {\cal G}_2 \otimes {\bf 1} \otimes \cdots \otimes {\bf 1}+\cdots + {\bf 1} \otimes \cdots \otimes {\bf 1} \otimes {\cal G}_{m_c}, \end{equation} where ${\cal G}_h$ is a Lie algebra acting on the space given by $ {\large{\otimes}}_{j \in {\cal C}_h} F_j. $ This space has dimension $D^S_h$ in (\ref{dimensioni2}) and corresponds to the $h$-th connected component of the graph associated with (\ref{subspacesF}).
In (\ref{formaSDLA}) ${\cal G}_h$, $h=1,...,m_c$, is (modulo multiples of the identity) \begin{enumerate} \item Equal to the $\dim(F_j)$-dimensional irreducible representation of $su(2)$ if ${\cal C}_h$ only contains one node, the node $j$. \item Equal to $su(D^S_h)$ if ${\cal C}_h$ contains more than one node. \end{enumerate} \end{theorem} From the above theorem the following exact characterization of subspace controllability follows. \bc{exact} A subspace (\ref{subspacesF}) is subspace controllable if and only if the associated graph is connected and contains at least two nodes. \end{corollary} \bex{continuedz} Consider the subspaces of Example \ref{fromeex2} with the associated graphs reported in Figure \ref{Fig2}. According to Corollary \ref{exact}, subspace controllability is verified in the cases $T_{2,2,3,1}$ and $T_{0,2,1,1}$. It is not verified in the case of $T_{2,0,3,1}$. In this case, on the given subspace, the subspace dynamical Lie algebra is the direct sum of two subalgebras, one subalgebra given by the irreducible representation of $su(2)$ on $V^2$, i.e., a representation of dimension $3$, and a subalgebra given by $su(D)$, with $D=\dim(V^3) \dim(V^1)=4 \times 2=8$, acting on the invariant space associated with the clusters $3$ and $4$. \end{example} {\color{black} \br{minimality} In Theorem \ref{MainT}, every invariant subspace $S=F_1 \otimes F_2 \otimes \cdots \otimes F_N$ in (\ref{subspacesF}) is written in the form $E_1 \otimes E_2 \otimes \cdots \otimes E_{m_c}$, where each subspace $E_h={\large{\otimes}}_{j \in {\cal C}_h} F_j$ refers to one connected component of the associated graph. On this invariant subspace, the action of the dynamical Lie algebra (and of the associated group of possible evolutions, which is a subgroup of the unitary group) is a tensor product action.
Moreover, such a decomposition into invariant subspaces is {\it minimal} in the following sense: given $E_1 \otimes E_2 \otimes \cdots \otimes E_{m_c}$, there is no other invariant subspace $E_1^{'} \otimes E_2^{'} \otimes \cdots \otimes E_{m_c}^{'}$, with $E^{'}_h \subseteq E_h$, $h=1,...,m_c$, where the strict inclusion holds for at least one $h$. This is due to the fact that every Lie algebra ${\cal G}_h$ in (\ref{formaSDLA}) is an {\it irreducible representation}, either of $su(2)$ or of $su(D^S_h)$, the latter being the standard representation for the given dimension $D^S_h$, which is also irreducible. \end{remark} } \section{Proof of Theorem \ref{MainT}} \label{prova} \subsection{Casimir operators} An important operator in what follows is the {\bf Casimir operator} $C^j$ (on the $j$-th space $(V^1)^{\otimes n_j}$ in (\ref{fullspace})) defined as \be{Casimir} C^j:=(S^j_x)^2+(S^j_y)^2+(S^j_z)^2, \end{equation} which is scalar on each irreducible representation $V^{f}$, with value on $V^{f}$ given by $\frac{f}{2}(\frac{f}{2}+1)$ \cite{Woit}. {In particular, it is zero on (and only on) $V^0$.} In the following, operators will appear which are products of certain powers of the Casimir operator at certain locations in $\{1,...,N\}$ and other operators at other locations. For example, $C^jS^k_x$ is the product of the Casimir operator at location $j$ with $S_x$ at location $k$, with $j\not=k$. Another example would be $(C^j)^2C^lS^k_x$, with $j,k,l$ all different, which is the square of $C^j$ together with $C^l$ and $S^k_x$. Another example is an operator $A^j$ by itself, where all the powers of the Casimir operators are zero. Linear combinations of powers of Casimir operators form a (unital) commutative algebra. Therefore, their behavior in Lie bracket calculations when generating a given Lie algebra is easy to control. We shall denote by $\Upsilon$ a general operator which is the product of Casimir operators.
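The scalar action of the Casimir operator can be verified numerically. The following Python sketch (our own illustration, not part of the model; it uses the standard physics convention $[S_x,S_y]=iS_z$, which differs from (\ref{commurel2}) only by signs) builds the spin-$\frac{f}{2}$ matrices on $V^f$ via the usual ladder-operator construction and checks that $C=\frac{f}{2}(\frac{f}{2}+1){\bf 1}$:

```python
import numpy as np

def spin_matrices(f):
    """Spin-f/2 irreducible representation of su(2) on V^f (dimension f+1)."""
    j = f / 2.0
    m = np.arange(j, -j - 1, -1)               # weights j, j-1, ..., -j
    Sz = np.diag(m).astype(complex)
    # raising operator: <j, m+1 | S_+ | j, m> = sqrt(j(j+1) - m(m+1))
    Sp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1).astype(complex)
    Sx = (Sp + Sp.conj().T) / 2
    Sy = (Sp - Sp.conj().T) / (2 * 1j)
    return Sx, Sy, Sz

# Casimir C = Sx^2 + Sy^2 + Sz^2 acts as (f/2)(f/2 + 1) * Identity on V^f;
# in particular it vanishes exactly on V^0 (f = 0).
for f in range(5):
    Sx, Sy, Sz = spin_matrices(f)
    C = Sx @ Sx + Sy @ Sy + Sz @ Sz
    assert np.allclose(C, (f / 2) * (f / 2 + 1) * np.eye(f + 1))
```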
If we write $\Upsilon A^{j_1}B^{j_2} \cdots L^{j_r}$, we mean an operator which is $A$ in location $j_1$, $B$ in location $j_2$,...,$L$ in location $j_r$, and unspecified powers of Casimir operators in the remaining locations. If we want to point out the fact that these latter factors might be different from one operator to the other, we use $\Upsilon_1 A^{j_1}B^{j_2} \cdots L^{j_r}$ and $\Upsilon_2 A^{j_1}B^{j_2} \cdots L^{j_r}$, for example. \subsection{Reduction of the problem} We first prove that we can reduce ourselves to the following special case. \bp{Specialcase} Assume that no subspace $F_j$ in (\ref{subspacesF}) is equal to $V^0$ and that the connectivity graph of the network is connected. Then, if $N=1$, ${\cal G}_S$ is the representation of $su(2)$ associated with $F_1$. If $N \geq 2$ then ${\cal G}_S=su(D^S)$, with $D^S$ in (\ref{dimensioni}). \end{proposition} {Notice that if $F_j\neq V^0$ for all $j=1,\ldots,N$, then the connectivity graph of the network coincides with the associated graph relative to the invariant subspace.} To see that the general case can be reduced to the special case of Proposition \ref{Specialcase}, write the tensor product $S$ in (\ref{subspacesF}) by placing the $V^0$ spaces in the first $\bar N$ positions, i.e., as \be{firstspaces} S=V^0 \otimes V^0 \otimes \cdots \otimes V^0 \otimes F_{\bar N+1} \otimes F_{\bar N+2}\otimes \cdots \otimes F_{N}, \end{equation} where $F_j=V^{r_j}$ with $r_j \geq 1$, for $j=\bar N+1,\ldots,N$. The dynamical Lie algebra ${\cal G}$ is generated by all the $S^j_{x,y,z}$ and by all the $S^j_z S^k_z$ for which the coupling constants $A_{j,k}$ are different from zero (Lemma \ref{Lemma1}). However, on the subspace (\ref{firstspaces}), $S^j_{x,y,z}$, $j=1,...,\bar N$, are all zero, { since, as we have mentioned when we introduced the Clebsch-Gordan decomposition, $S_{x,y,z}^j$ is zero on the $V^0$ representation of $su(2)$.
For the same reason, $S^j_zS^k_z$, with $j<k$ and with $j=1,...,\bar N$, are also zero. } Moreover, so are all their (repeated) Lie brackets. As a consequence, on these spaces, the dynamical Lie algebra is the one generated by $iS^j_{x,y,z} $ and $iS^j_zS^k_z$, $j<k$, for all pairs $j$ and $k$ such that $A_{j,k} \not=0$ and $j=\bar N+1,...,N$. The connectivity graph of the network of $N-\bar N$ clusters of spins is not necessarily connected and coincides with the graph associated with the subspace (\ref{firstspaces}), i.e., the one obtained by removing the first $\bar N$ nodes and corresponding edges. Now, by collecting in $ F_{\bar N+1} \otimes F_{\bar N+2}\otimes \cdots \otimes F_{N}$ the elements corresponding to the same connected components in order, we notice that the elements $S^j_zS^k_z$ and $S^j_{x,y,z}$ corresponding to pairs $(j,k)$ in the same connected component generate a subalgebra which commutes with the ones corresponding to the other connected components. Therefore the whole subspace dynamical Lie algebra ${\cal G}_S$ takes the form in (\ref{formaSDLA}). Each term corresponds to one connected component of the associated graph, and if we restrict ourselves to only one connected component the proof is reduced to the case of Proposition \ref{Specialcase}. The case $N=1$ of Proposition \ref{Specialcase} follows immediately because if $N=1$ there is no interaction matrix of the form $S^j_zS_z^k$, and the Lie algebra is generated only by the matrices $iS^1_{x,y,z}$, which indeed form a representation of $su(2)$. The type of representation depends on the nature of the space $F_1$. The next subsections are devoted to proving the case $N \geq 2$ of Proposition \ref{Specialcase}. \subsection{Generation of terms $S_{z}^j S_{z}^k$} Lemma \ref{Lemma1} shows that the matrices $iS^j_zS^{k}_z$ belong to the dynamical Lie algebra ${\cal G}$ for every pair of clusters $j,k$ with nonzero coupling.
The following lemma shows that, for a connected connectivity graph, ${\cal G}$ contains matrices of the form $i \Upsilon S_{z}^j S_{z}^k$ for {\it any} pair of clusters $j, k$ (recall that $\Upsilon$ indicates a general operator which is the product of Casimir operators). { \bl{connect} Assume the connectivity graph of the network is connected. Then, for every pair $j < k \in \{1,2,...,N\}$, there exists in the dynamical Lie algebra ${\cal G}$ a matrix \be{newform} i \Upsilon S_{z}^j S_{z}^k. \end{equation} \end{lemma} } \begin{proof} Fix two nodes $1\leq j<k\leq N$. Given the connectedness assumption on the graph, we know that there exists a path of length $r\geq 1$ of nodes $\hat n_i$, $i=0, \ldots, r$, with $\hat n_0=j$ and $\hat n_r=k$, such that $A_{\hat n_i,\hat n_{i+1}}\neq 0$. The claim will be proved by induction on the length $r$ of the path joining the two nodes. If $r=1$, the claim follows from Lemma \ref{Lemma1}. Assume $r>1$. Since the nodes $\hat n_0=j$ and $\hat n_{r-1}$ are connected by a path of length $r-1$, by the inductive assumption we know that the dynamical Lie algebra ${\cal G}$ contains a matrix of the type \be{uno-connect} i\Upsilon S^j_zS^{\hat n_{r-1}}_z. \end{equation} Moreover, since $A_{\hat n_{r-1},k}\neq 0$, we know from Lemma \ref{Lemma1} that the matrix \be{due-connect} iS^{\hat n_{r-1}}_zS^k_z \end{equation} is in the dynamical Lie algebra ${\cal G}$ as well. Since all the matrices of the type $iS^l_{x,y,z}$ are in ${\cal G}$, for any $l=1,...,N$, by taking Lie brackets of the matrices in (\ref{uno-connect}) and (\ref{due-connect}) with these matrices, we get that ${\cal G}$ contains all matrices of the type \be{tre-connect} i\Upsilon S^j_{x,y,z} S^{\hat n_{r-1}}_{x,y,z} \end{equation} and \be{quattro-connect} iS^{\hat n_{r-1}}_{x,y,z}S^k_{x,y,z}, \end{equation} respectively. Notice that all $\Upsilon$ operators appearing in (\ref{tre-connect}) are the same.
Now, we calculate (using (\ref{commurel2})) \be{previous1} \left[ i\Upsilon S^j_{z} S^{\hat n_{r-1}}_{x}, iS^{\hat n_{r-1}}_{y}S^k_{z} \right ]=i \Upsilon S^j_{z}S^{\hat n_{r-1}}_{z}S^k_{z}, \end{equation} which belongs to ${\cal G}$ as well. Again, since all the matrices of the type $i S^l_{x,y,z}$ are in ${\cal G}$, by taking Lie brackets of these matrices with the one in (\ref{previous1}) we get that \be{cinque-connect} i \Upsilon S^j_{x,y,z}S^{\hat n_{r-1}}_{x,y,z}S^k_{x,y,z} \in {\cal{G}}, \end{equation} for all possible choices of $x,$ $y$ and $z$. Now, we use matrices of type (\ref{tre-connect}) and (\ref{cinque-connect}), and we get \[ \left[ i \Upsilon S^j_{x} S^{\hat n_{r-1}}_{x}, i \Upsilon S^j_{y}S^{\hat n_{r-1}}_{x}S^k_{z} \right]=i \Upsilon_1 S^j_{z}(S^{\hat n_{r-1}}_{x})^2 S^k_{z} \in {\cal G}. \] By using $S^{\hat n_{r-1}}_{y,z}$ instead of $S^{\hat n_{r-1}}_x$ in the previous computation, we get that the three matrices $$ i \Upsilon_1 S^j_{z}(S^{\hat n_{r-1}}_{x,y,z})^2 S^k_{z} $$ are all in ${\cal G}$, with the same value of $\Upsilon_1$ for $x$, $y$, and $z$. By summing these matrices, using the definition of the Casimir operator (\ref{Casimir}), we get \[ i \Upsilon_2 S^j_{z} S^k_{z}\in {\cal G}, \] which is the claim of the lemma. \end{proof} \subsection{Generation of terms $I_{zz}^j-I_{yy}^j$ and $I_{yy}^j-I_{xx}^j$} \bl{First} For every cluster $j=1,...,N$, there exists a matrix $ i \Upsilon (I^j_{zz}- I^j_{yy})$ and a matrix $i \Upsilon (I^j_{yy}- I^j_{xx})$ in the dynamical Lie algebra ${\cal G}$. \end{lemma} In the case where the $j$-th cluster contains only one spin, the $I^{j}_{(x,y,z)(x,y,z)}$ are taken equal to zero, so the statement is trivially true. \begin{proof} Let us set $j=1$ (without loss of generality) and $k=2$.
Taking the Lie brackets of $i \Upsilon S_z^1 S^2_z$ (from Lemma \ref{connect}) with $iS_{x,y}^1$ and $iS_{x,y}^2$, we obtain all possible $i \Upsilon S_{x,y}^1 S^2_{x,y}$, and in fact, taking possibly one extra Lie bracket with $iS_{x,y}^1$ or $iS_{x,y}^2$, we obtain all possible matrices \be{SxyzSxyz} i \Upsilon S^1_{x,y,z} S^2_{x,y,z} \in {\cal G}. \end{equation} Also observe from the calculation that the unspecified powers of Casimir operators in (\ref{SxyzSxyz}), which are collected in the term $\Upsilon$, are the same for all the matrices in (\ref{SxyzSxyz}). Now consider \be{partialeq} \left[ i \Upsilon S_z^1 S_x^2, i \Upsilon S_z^1 S_y^2 \right ]=i\Upsilon_1 {(S_z^1)^2} S_z^2= i\Upsilon_1 ({\frac{n_1}{4}{\bf 1}^1+2 I_{zz}^1}) S_z^2, \end{equation} {\color{black} since, as is easily seen by induction, on a space of $n_1$ spin $\frac{1}{2}$ particles, of dimension $2^{n_1}$, \be{basicrel} (S_g)^2=\frac{n_1}{4}{\bf 1}+2 I_{gg}, \qquad \text{for} \qquad g=x,y,z. \end{equation} Now, {by using $S^1_y$ instead of $S^1_z$ in (\ref{partialeq}) we obtain the matrix $i\Upsilon_1 {(S_y^1)^2} S_z^2= i\Upsilon_1 ({\frac{n_1}{4}{\bf 1}^1+2 I_{yy}^1}) S_z^2$. Taking the difference between this matrix and the one in (\ref{partialeq}), we obtain that $i \Upsilon_2 ( I_{zz}^1- I_{yy}^1)(S_z^2)^2$ belongs to ${\cal G}$}.} With analogous calculations, replacing $S_z^2$ with $S_x^2$ or $S_y^2$, we obtain also $i \Upsilon_2 (I_{zz}^1-I_{yy}^1) (S_x^2)^2$ and $i \Upsilon_2 (I_{zz}^1-I_{yy}^1) (S_y^2)^2$. It is important to notice at this point that, since the omitted Casimir operators in (\ref{SxyzSxyz}) are all equal and the sequence of calculations is the same in all three cases (with $x,y,$ or $z$ on the right hand side), the omitted Casimir operators (in the operator $\Upsilon_2$) are the same in all three cases.
We can therefore sum these three matrices and obtain, using the definition of the Casimir operator (\ref{Casimir}), that $i \Upsilon_3 ( I_{zz}^1- I_{yy}^1)$ belongs to ${\cal G}$, for some $\Upsilon_3$ operator. A completely analogous calculation gives that $i \Upsilon ({ I^1}_{yy}- I^1_{xx})$ also belongs to ${\cal G}$, for some $\Upsilon$ operator. \end{proof} \subsection{Lie subalgebra of $u(2^n)$ commuting with the symmetric group} We now need to recall some general facts on the Lie subalgebra of $u(2^n)$ of matrices commuting with the permutation group $P_n$. Denote this subalgebra by $u^{P_n}(2^n)$. Its dimension is given by (cf. \cite{NoiJMP}) \be{dimen} \dim\left( u^{P_n}(2^n) \right)={{n+3}\choose{n}}. \end{equation} One of the main results of \cite{NoiJMP} is the following. \bt{fromJMP} $\{iI_{zz}, iS_{x,y,z} , i{\bf 1} \}$ generate $u^{P_n}(2^n)$, and $\{iI_{zz}, iS_{x,y,z} \}$ generate $u^{P_n}(2^n)\cap su(2^n)$. \end{theorem} As we already recalled, the space $(V^1)^{\otimes n}$ decomposes, according to the (iterated) Clebsch-Gordan decomposition of a tensor product representation, into the direct sum of (possibly repeated) $V^n$, $V^{n-2}$,..., irreducible representations of $su(2)$. Since $S_{x,y,z}$ and $I_{zz}$ leave such subspaces invariant,\footnote{To see this for $I_{zz}$ recall that $S_z^2=\frac{n}{4}{\bf 1} +2 I_{zz}$ (from (\ref{basicrel})) so that $I_{zz}=\frac{1}{2}(S_z^2-\frac{n}{4} {\bf 1} )$.} these spaces are invariant for $u^{P_n}(2^n)$ as well, because of Theorem \ref{fromJMP}. Therefore, in coordinates given by the bases of these spaces, the matrices of $u^{P_n}(2^n)$ take a block diagonal form.\footnote{In \cite{ConJonas} such a block diagonal form was described using a different approach based on Young symmetrizers.} Consider two subspaces in the decomposition of the form $V^f$ for some $f$, i.e., two subspaces of the same dimension, say $V^f_1$ and $V^f_2$.
A basis for these spaces can be obtained starting with the {\it highest weight vector} and then successively applying the lowering operator, as described for example in \cite{Woit}. The operators $S_{x,y,z}$, and therefore $I_{zz}=\frac{1}{2}(S_z^2-\frac{n}{4} {\bf 1})$ as well as the identity $i{\bf 1}$, act in the same way on these bases, and therefore (by induction) so does each repeated Lie bracket of them. Therefore we can take a basis such that the blocks of $u^{P_n}(2^n)$ of the same dimension are equal to each other. Furthermore, each block of dimension $f+1$ can take any value in $u(f+1)$ independently of the other blocks of different dimensions, that is, for each block of dimension $f+1$ there are $(f+1)^2$ degrees of freedom. If this were not the case for one block, we would have a total number of degrees of freedom, {\it which is the dimension of $ u^{P_n}(2^n)$}, strictly less than $T_n$, where $T_n$ is defined, for $n$ odd, as \be{Tnodd} T_n=2^2+4^2+\cdots + (n+1)^2, \end{equation} and, for $n$ even, as \be{Tneven} T_n=1^2+3^2+\cdots + (n+1)^2. \end{equation} However, in both cases, $n$ odd in (\ref{Tnodd}) and $n$ even in (\ref{Tneven}), an induction argument shows that $$ T_n={{n+3}\choose{n}}, $$ which is, from (\ref{dimen}), the dimension of $ u^{P_n}(2^n)$. So we obtain a contradiction. Therefore, we have the following form of Theorem \ref{fromJMP}, which will be useful for us. \bc{fromJMP2} The restrictions of $\{iI_{zz}, iS_{x,y,z}, i{\bf 1} \}$ to every irreducible representation $V^f$ of $su(2)$ generate $u(f+1)$. \end{corollary} \subsection{Controllability on a single factor in (\ref{subspacesF})} We now establish a notion of controllability on each factor $F_j$ in (\ref{subspacesF}). Recall that each of these factors is assumed to be of the form $V^f$, with $f\geq 1$, in Proposition \ref{Specialcase}, although the next lemma can be stated without restrictions on $f$.
\bl{Oneextrastep} Fix any $j \in \{1,...,N\}$ with $F_j$ in (\ref{subspacesF}) equal to $F_j=V^f$, so that $f+1=\dim(F_j)$. Then for every $M \in su(f+1)$ the dynamical Lie algebra ${\cal G}$ contains a matrix $i\Upsilon A^j$ such that the restriction of $iA^j$ to $F_j$ is equal to $M$. \end{lemma} \begin{proof} As we have done above, to simplify notation, we set, without loss of generality, $j=1$. The statement is trivially true (and not useful for us, because we are assuming in Proposition \ref{Specialcase} that all $F_j$ have dimensions strictly larger than $1$) if $\dim(F_1)=1$, and it is also true in the case $\dim(F_1)=2$, since $iS^j_{x,y,z}$ belong to ${\cal G}$. It is useful to use the notation $\langle B_1,...,B_s \rangle$ for the Lie algebra generated by certain matrices $\{B_1,...,B_s \}$, so that, for instance, the first statement of Theorem \ref{fromJMP} reads as $\langle iI_{zz}, iS_{x,y,z}, i{\bf 1} \rangle=u^{P_n}(2^n)$. Denote by $n_1$ the number of spin $\frac{1}{2}$ particles in the first cluster. Consider the matrix $Q^1:=I^1_{xx}+I^1_{yy}+I^1_{zz}=\frac{1}{2}(C^1-3\frac{n_1}{4}{\bf 1}^1)$, with the Casimir operator (\ref{Casimir}) on the first cluster, which commutes with every matrix in $\{i( I^1_{zz}-I_{yy}^1), i( I^1_{yy}-I_{xx}^1), iS_{x,y,z}^1 \}$ (and therefore with each repeated Lie bracket of them). Then we have, by Theorem \ref{fromJMP}, \be{basicCompu} \left( su(2^{n_1}) \cap u^{P_{n_1}}(2^{n_1}) \right) \otimes{\bf 1} \otimes {\bf 1} \otimes \cdots \otimes {\bf 1} = \langle iI_{zz}^1, iS_{x, y, z}^1 \rangle \subseteq \langle i( I^1_{zz}-I_{yy}^1), i( I^1_{yy}-I_{xx}^1), iS_{x,y, z}^1, iQ^1 \rangle = \end{equation} $$ \langle i(I^1_{zz}-I_{yy}^1), i( I^1_{yy}-I_{xx}^1), i S_{x,y, z}^1 \rangle + \texttt{span}(i Q^1) \subseteq u^{P_{n_1}}(2^{n_1}) \otimes {\bf 1} \otimes {\bf 1} \otimes \cdots \otimes {\bf 1}. $$ In the first equality we used Theorem \ref{fromJMP}, and in the second equality we used the commutativity of $Q^1$.
Now, consider relation (\ref{basicCompu}) in the basis where matrices are block diagonal, and in particular on the block corresponding to $F_1 \otimes F_2 \otimes \cdots \otimes F_N$ in (\ref{subspacesF}). Restricting to this block, we notice that $\texttt{span}(Q^1)$ is included in the span of the identity on it (it commutes with an irreducible representation of $su(2)$ given by the restriction of $\texttt{span} \{ iS_x^1, i S_y^1,i S_z^1 \}$, and therefore it must be a multiple of the identity according to Schur's lemma (see, e.g., \cite{Woit})). Consider now the block diagonal form of the relation (\ref{basicCompu}), and its form on the block corresponding to $F_1 \otimes F_2 \otimes \cdots \otimes F_N$. The first Lie algebra on the left is $su(f+1) \otimes {\bf 1} \otimes {\bf 1} \otimes \cdots \otimes {\bf 1}$, the second to last Lie algebra is the restriction of $\langle i( I^1_{zz}-I_{yy}^1), i( I^1_{yy}-I_{xx}^1), i(S_{x,y,z}^1) \rangle$ to $F_1$ plus the span of the identity, everything tensored by the identity $N-1$ times. The last Lie algebra is $u(f+1) \otimes {\bf 1} \otimes {\bf 1} \otimes \cdots \otimes {\bf 1}$. Now, using the fact from Lemma \ref{First} that ${\cal G}$ contains $\{i\Upsilon(I^1_{zz}- I_{yy}^1), i \Upsilon( I^1_{yy}- I_{xx}^1), i S_{x,y,z}^1 \}$ and that the Casimir operators are {\it all nonzero} on the subspaces $F_2,F_3,...,F_N$ because of our assumption on the dimension, it follows that we can generate every element of the restriction of $\langle i( I^1_{zz}-I_{yy}^1), i( I^1_{yy}-I_{xx}^1), iS_x^1, i S_y^1,i S_z^1 \rangle=su(2^{n_1}) \cap u^{P_{n_1}}(2^{n_1})$ to $F_1$. This concludes the proof. \end{proof} \subsection{Maximal subalgebras in $su(rs)$} Now that we know that ${\cal G}$ acts as any desired element of $su(f+1)$ on any factor in (\ref{subspacesF}), we need to show that from these elements we can generate all of $su(D^S)$, with $D^S$ in (\ref{dimensioni}).
Recall that ${\cal G}$ also contains $i \Upsilon S_z^j S_z^k$ for every pair $j,k$, according to Lemma \ref{connect}. Denote by $f_j+1$, $j=1,...,N$, the dimension of $F_j$. According to Lemma \ref{Oneextrastep}, we have on $F_1 \otimes F_2 \otimes \cdots \otimes F_N$, $su(f_1+1) \otimes {\bf 1} \otimes {\bf 1} \otimes \cdots \otimes {\bf 1}$, $ {\bf 1} \otimes su(f_2+1) \otimes {\bf 1} \otimes \cdots \otimes {\bf 1}$,..., $ {\bf 1} \otimes {\bf 1} \otimes \cdots \otimes {\bf 1} \otimes su(f_N+1)$, besides the restriction of $i \Upsilon S_z^j S_z^k$. We will iteratively apply the following result. \bt{fromDynkin} For each pair $r,s \geq 2$, the Lie algebra which is the direct sum of $su(r) \otimes {\bf 1}$ and ${\bf 1} \otimes su(s)$ is a maximal subalgebra of $su(rs)$. \end{theorem} A maximal subalgebra ${\cal L} \subseteq su(rs)$ is by definition such that for every element $A \in su(rs)$ with $A \notin {\cal L}$, $\langle A, {\cal L} \rangle =su(rs)$. Theorem \ref{fromDynkin} was proved by E.B. Dynkin in \cite{Dynkinpaper} (Theorem 1.3 in that paper). We only need a simpler version of it, which says that for each $A \otimes B$ with $A\otimes B \notin su(r) \otimes {\bf 1}$ and $A\otimes B \notin {\bf 1} \otimes su(s)$, $\langle A\otimes B, su(r) \otimes {\bf 1}, {\bf 1} \otimes su(s)\rangle =su(rs)$. In order to see this, consider $$ {+}_{m=0}^\infty ad_{{\bf 1} \otimes su(s)}^m A \otimes B= A\otimes \left( {+}_{m=0}^\infty ad_{su(s)}^m B \right). $$ Since ${+}_{m=0}^\infty ad_{su(s)}^m B$ is a nonzero ideal in $su(s)$ and $su(s)$ is simple, it must be equal to $su(s)$. Therefore for every matrix $C \in su(s)$ we have that $A \otimes C$ belongs to the generated Lie algebra. Fixing $C$, and doing the same thing on the left, we have that for every $E \in su(r)$, $E\otimes C$ also belongs to the generated Lie algebra.
Therefore, in conclusion, such a Lie algebra contains all the matrices of the form $E \otimes C$ with $E \in su(r)$ and $C \in su(s)$, besides $su(r) \otimes {\bf 1}$ and ${\bf 1} \otimes su(s)$. Putting these together, they span $su(rs)$. \subsection{Conclusion of the proof} The proof of Proposition \ref{Specialcase}, and therefore of the theorem, is completed as follows. On the space $F_1 \otimes F_2$, we have $su(f_1+1) \otimes {\bf 1}$ and $ {\bf 1} \otimes su(f_2+1) $, along with the restriction of $i\Upsilon S_z^1 S_z^2$, which is nonzero because the restrictions of all the Casimir operators are nonzero multiples of the identity, and which is neither in $su(f_1+1) \otimes {\bf 1}$ nor in $ {\bf 1} \otimes su(f_2+1) $. Therefore, using Theorem \ref{fromDynkin}, we have that ${\cal G}$ contains matrices that are equal to $M$, for any $M \in su((f_1+1)(f_2+1))$, on $F_1 \otimes F_2$ and equal to the identity on the other factors in (\ref{subspacesF}). Then we iterate this argument by using $i \Upsilon S_z^2 S_z^3$ to show this fact for $M \in su((f_1+1)(f_2+1)(f_3+1))$, then $i \Upsilon S_z^3 S_z^4$, and so on, up to $i \Upsilon S_z^{N-1} S_z^N$ for $M \in su(D^S)$. \vspace{0.25cm} { \section{Discussion and Extensions}\label{Exte} We now discuss several possible extensions of the result of Theorem \ref{MainT} to networks different from the {\it multipartite} case with Ising coupling treated above. \subsection{Networks with different couplings between spins} The Ising coupling between spins in two different clusters, $A_{j,k} S_z^jS_z^k$, can be replaced by a more general two body coupling, so that $A$ in (\ref{A}) is replaced by $\hat A$ with \be{newinter} i\hat A=\sum_{1\leq j < k \leq N} A_{j,k}S_z^jS_z^k+B_{j,k} S_x^jS_x^k+C_{j,k} S_y^jS_y^k. \end{equation} The result of Theorem \ref{MainT} is still valid as long as the interaction between the $j$-th and the $k$-th cluster is considered nonzero whenever $(A_{j,k}, B_{j,k}, C_{j,k}) \not= (0,0,0)$.
In order to see this, notice that the subspaces (\ref{subspacesF}) are still invariant for the dynamics if the interaction takes the more general form (\ref{newinter}), and that the reduction to the case of Proposition \ref{Specialcase} still holds. If there is only one cluster in the network, there is no term of the two-body form (\ref{newinter}) and so the result of the proposition holds. If there is more than one cluster in a connected network, we have proven subspace controllability in the Ising $Z$-$Z$ case. Let us see why this is true in the general case of the interaction (\ref{newinter}). By taking repeated Lie brackets of the interaction (\ref{newinter}) with matrices of the form $iS_{x,y,z}^j$ we can obtain (as long as the coupling is nonzero) the Ising terms $iS_z^j S_z^k$. Therefore, the dynamical Lie algebra generated by replacing the Ising interaction (\ref{A}) with the more general two body interaction (\ref{newinter}) is larger than or equal to the one obtained with the Ising interaction. Since in the latter case we have subspace controllability, the same is true for the more general interaction (\ref{newinter}). \subsection{Coupling between spins in the same cluster} If we add to the interaction $A$ in (\ref{A}) a term modeling interactions between spins in the same cluster, the coupling takes the more general form \be{iAgen} iA_{gen} =iA+\sum_{j=1}^NH_0^j, \end{equation} where $A$ is the same as in (\ref{A}) (or (\ref{newinter})) and $H_0^j$ models these `internal' interactions. By using the form of the interaction $A$ in (\ref{A}) and taking repeated Lie brackets of (\ref{iAgen}) with $S_{x,y,z}^j$, $j=1,...,N$, we can separate $iA$ from $iA_{gen}$ in (\ref{iAgen}); therefore the dynamical Lie algebra in this case is generated by the dynamical Lie algebra calculated above for the case without internal interactions, together with the matrix $\sum_{j=1}^NiH_0^j$ (which can also be separated into pieces with the same technique).
Therefore the dynamical Lie algebra will in general be larger and the spaces (\ref{subspacesF}) will in general not be invariant anymore. \subsection{Different couplings for spins in the same cluster} As is intuitive, if we allow spins of the same cluster to interact differently with the same spin in another cluster, we increase the controllability of the system, in that some of the subspaces (\ref{subspacesF}) will not be invariant anymore and larger invariant subspaces have to be considered. We illustrate this fact with a simple example. \bex{Exaplus1} Consider first two spin $\frac{1}{2}$ particles with the same gyromagnetic ratio interacting in the same way with one spin $\frac{1}{2}$ particle with a different gyromagnetic ratio. We have two clusters, with two and one spin respectively. On the first cluster, the Hilbert space $V^1 \otimes V^1$ splits according to the Clebsch-Gordan decomposition as $V^1 \otimes V^1=V^2 \oplus V^0$, so that the full space $(V^1 \otimes V^1) \otimes V^1$ splits as $(V^2 \otimes V^1) \oplus (V^0 \otimes V^1)$. Therefore the spaces $V^2 \otimes V^1$ and $V^0 \otimes V^1\simeq V^1$ are the ones to be considered in (\ref{subspacesF}). In the first case the associated graph coincides with the connectivity graph of the network, the dimension $D^S$ in (\ref{dimensioni}) is equal to $D^S=6$, and the dynamical Lie algebra acting on this invariant space coincides with $su(6)$. In the second case the associated graph only has the node corresponding to the second cluster. The dynamical Lie algebra on the given subspace coincides with $su(2)$ (in its irreducible standard representation). Therefore, in the appropriate basis, the full dynamical Lie algebra ${\cal G}$ can be written in block diagonal form, with blocks of dimension $6$ and $2$. However, if the coupling constants are different in absolute value, a direct calculation of the dynamical Lie algebra shows that it is equal to $su(8)$.
Therefore, there is no nontrivial invariant subspace and the system is controllable as a whole. The two subspaces above are included in a single invariant subspace equal to the whole space. \end{example} We now want to obtain some insight into the mechanism of increase in controllability and enlargement of the invariant subspaces when the coupling constants differ, which we have seen in the previous example. We start with the basic situation of the type of networks considered in the previous sections and then perturb some coupling constants. Consider, in particular, a network with $N$ clusters as in the previous sections, each cluster with uniform coupling with any other cluster. Consider then an associated invariant subspace as in (\ref{subspacesF}). Assume now that the coupling constants of one of the clusters, say the cluster $N-1$, with another cluster, say the $N$-th cluster, {split}. A subcluster of the $(N-1)$-th cluster has coupling constant with the $N$-th cluster equal to $W$ and another subcluster has coupling constant $Y$ (assuming for simplicity that there are only two values of the coupling constants; furthermore assume the stronger condition $|Y|\not=|W|$). The matrix $A$ in (\ref{A}) can then be written as \be{newA} iA:=\sum_{1\leq j < k \leq N, \, (j,k)\not= (N-1,N)} A_{j,k}S_z^jS_z^k + WS_{z,1}^{N-1}S_z^N+YS_{z,2}^{N-1} S_z^N, \end{equation} where we have split $S_z^{N-1}$ into two parts, $S_{z,1}^{N-1}$ and $S_{z,2}^{N-1}$, according to their interaction with the $N$-th cluster. Now, if $F_N=V^0$, the last two terms in (\ref{newA}), as well as all the couplings $S^j_zS^N_z$ and also $S_{x,y,z}^N$, give zero; hence the graph associated to the subspace (\ref{subspacesF}) only contains the first $N-1$ nodes. The splitting of the coupling constants in the cluster $N-1$ plays no role and the situation is equivalent to the one we considered in the previous sections but with the first $N-1$ clusters only.
If, however, $F_N\not=V^0$, by taking (repeated) Lie brackets of $A$ in (\ref{newA}) with $iS^N_{x,y,z}$ and $iS_{x,y,z}^{N-1}$ we obtain all matrices of the form $ i WS_{x,y,z,1}^{N-1} S_{x,y,z}^N + i YS_{x,y,z,2}^{N-1} S_{x,y,z}^N $, where we have split $S_{x,y,z}^{N-1}$ as $S_{x,y,z}^{N-1}=S_{x,y,z,1}^{N-1}+S_{x,y,z,2}^{N-1}$, generalizing what we have done above. Taking the Lie bracket of $ i WS_{x,1}^{N-1} S_{x}^N + i YS_{x,2}^{N-1} S_{x}^N $ with $ i WS_{y,1}^{N-1} S_{x}^N + i YS_{y,2}^{N-1} S_{x}^N $, we obtain $ \left(i W^2S_{z,1}^{N-1} + i Y^2S_{z,2}^{N-1} \right) (S_{x}^N)^2 $. Analogously we obtain $ \left(i W^2S_{z,1}^{N-1} + i Y^2S_{z,2}^{N-1} \right) (S_{y}^N)^2 $ and $ \left(i W^2S_{z,1}^{N-1} + i Y^2S_{z,2}^{N-1} \right) (S_{z}^N)^2 $, and summing these we obtain \be{obtain3} \left(i W^2S_{z,1}^{N-1} + i Y^2S_{z,2}^{N-1} \right) C^N, \end{equation} where $C^N$ is the Casimir operator. Analogously, we can obtain (\ref{obtain3}) with $z$ replaced by $x$ and $y$, respectively. Since $C^N$ is a multiple of the identity on $F_N$, we effectively obtain $W^2 S_{x,y,z,1}+Y^2S_{x,y,z,2}$, and since we already had $S_{x,y,z}=S_{x,y,z,1}+S_{x,y,z,2}$ and $|W|\not=|Y|$, we obtain the two matrices $S_{x,y,z,1}$ and $S_{x,y,z,2}$. We have effectively split the cluster $N-1$ into two subclusters. The subspace $F_{N-1}$ is not invariant anymore. If we reconsider the separation of the $(N-1)$-th cluster into the two subclusters as above, we can apply the Clebsch-Gordan decomposition to each subcluster. {Assume that the first subcluster has $m_1$ spins and the second $m_2$ (thus $n_{N-1}=m_1+m_2$); then we have a decomposition of $(V^1)^{\otimes m_1}$ for the first subcluster and a decomposition of $(V^1)^{\otimes m_2}$ for the second subcluster. } Pick a space in the first decomposition, say $V^{f_1}$, and a space in the second decomposition, say $V^{f_2}$, which carry the irreducible representations of $su(2)$ corresponding to $f_1$ and $f_2$, respectively.
To $V^{f_1} \otimes V^{f_2}$ we can apply the Clebsch-Gordan decomposition into a direct sum of invariant subspaces. The original invariant subspace $F_{N-1}$ was selected among such spaces. However, with the division into two subclusters above, the tensor product $V^{f_1} \otimes V^{f_2}$ has to be considered as a whole, therefore giving a larger invariant space. \section*{Acknowledgement} D. D'Alessandro's research was supported by the NSF under Grant ECCS 1710558.
https://arxiv.org/abs/2006.11402
Subspace controllability of multi-partite spin networks
In a network of spin 1/2 particles, controlled through an external electro-magnetic field, the gyromagnetic ratio of each spin is a parameter that characterizes the interaction of the spin with the external control field. Multipartite networks are such that the spins are divided into subsets according to their gyromagnetic ratio and spins in one set interact in the same way with all spins in another set. Due to the presence of symmetries in this type of systems, the underlying Hilbert state space splits into invariant subspaces for the dynamics. Subspace controllability is verified if every unitary evolution can be generated by the dynamics on these subspaces. We give an exact characterization, in term of graph theoretic conditions, of subspace controllability for multipartite quantum spin networks. This extends and unifies previous results.
https://arxiv.org/abs/2005.13081
Decomposition of Topological Azumaya Algebras
Let $\mathcal{A}$ be a topological Azumaya algebra of degree $mn$ over a CW complex $X$. We give conditions for the positive integers $m$ and $n$, and the space $X$ so that $\mathcal{A}$ can be decomposed as the tensor product of topological Azumaya algebras of degrees $m$ and $n$. Then we prove that if $m<n$ and the dimension of $X$ is higher than $2m+1$, $\mathcal{A}$ may not have such decomposition.
\section{Introduction} The classical theory of central simple algebras over a field was generalized by Azumaya \cite{Azu1951} and Auslander-Goldman \cite{AG1960} by introducing the concept of Azumaya algebra over a local commutative ring and over an arbitrary commutative ring, respectively. This concept was generalized by Grothendieck \cite{GroI1966}*{1.1} to the notion of topological Azumaya algebra. \begin{rem} In its widest generality, the notion of Azumaya algebra can be defined over any locally ringed topos, \cite{GroI1966}. \end{rem} \begin{defn} A \textit{topological Azumaya algebra of degree $n$} over a topological space $X$ is a bundle of complex algebras over $X$ that is locally isomorphic to the matrix algebra $\M_{n}(\C)$. \end{defn} \begin{rem} For a deeper discussion of topological Azumaya algebras and the topological Brauer group, we refer to \cite{AWtwisted2014}. \end{rem} The tensor product of complex algebras can be extended to topological Azumaya algebras by performing the operation fiberwise. Saltman asked in \cite{Sdiv1999}*{page 35} whether there is a prime decomposition for topological Azumaya algebras under the tensor product operation, as there is for central simple algebras over a field. Antieau--Williams answered this question in \cite{AW2x32014}*{Corollary 1.3} by showing the following result: \begin{theorem} For $n>1$ an odd integer, there exist a six-dimensional CW complex $X$ and a topological Azumaya algebra $\Az{A}$ on $X$ of degree $2n$ and period $2$ such that $\Az{A}$ has no decomposition $\Az{A}\iso\Az{A}_{2}\tensor\Az{A}_{n}$ for topological Azumaya algebras of degrees $2$ and $n$, respectively. \end{theorem} The aim of this paper is to provide conditions on a positive integer $n$ and a topological space $X$ such that a topological Azumaya algebra of degree $n$ on $X$ has a prime decomposition.
The main result of this paper is the following theorem: \begin{thm}\label{main} Let $m$ and $n$ be positive integers such that $m$ and $n$ are relatively prime and $m<n$. Let $X$ be a CW complex such that $\dim(X)\leq 2m$. If $\Az{A}$ is a topological Azumaya algebra of degree $mn$ over $X$, then there exist topological Azumaya algebras $\Az{A}_{m}$ and $\Az{A}_{n}$ of degrees $m$ and $n$, respectively, such that $\Az{A}\iso \Az{A}_{m}\tensor\Az{A}_{n}$. \end{thm} Theorem \ref{main} is a corollary of a more general result. Let $a$ and $m$ be positive integers. Let $\mu_{m}\subset \SU_{am}$ be the central subgroup of $m$-th roots of unity. There is a short exact sequence of Lie groups: \begin{equation*} \begin{tikzcd} 1 \arrow{r}& \mu_{a} \arrow{r}& \SU_{am}/\mu_{m} \arrow[r,"\rho"]& \PU_{am} \arrow{r}& 1. \end{tikzcd} \end{equation*} The homomorphism $\rho$ induces a map of classifying spaces $\B\SU_{am}/\mu_{m} \rightarrow \B\PU_{am}$. In this way, a map from a topological space $X$ to the space $\B\SU_{am}/\mu_{m}$ determines a degree-$am$ topological Azumaya algebra over $X$. We prove in Theorem \ref{abpq/pq} that a map $X\rightarrow \B\SU_{abmn}/\mu_{mn}$ can be lifted to $\B\SU_{am}/\mu_{m}\times \B\SU_{bn}/\mu_{n}$ when the dimension of $X$ is less than $2am+1$ and the positive integers $a$, $b$, $m$ and $n$ are such that $am$ is relatively prime to $bn$ and $am<bn$. The proof of Theorem \ref{abpq/pq} relies significantly on the description of the homomorphisms induced on homotopy groups by the $r$-fold direct sum of matrices $\mif{\bigoplus^{r}}{\U_{n}}{\U_{rn}}$ in the range $\{0,1,\dots,2n\}$. We call this set ``the stable range'' for $\U_{n}$. This paper is organized as follows. The second section presents some preliminaries on operations on unitary groups and the description of the homomorphisms these operations induce on homotopy groups. The third section is devoted to the proof of Theorem \ref{abpq/pq}.
We also explain why the decomposition in Theorem \ref{main} is not unique up to isomorphism. \subsection*{Acknowledgements} The author would like to express her deep gratitude to Ben Williams, her thesis advisor, for having proposed this research topic, pointing out relevant references, and having devoted a great deal of time to discuss details of the research with the author. \subsection*{Notation} Throughout this paper, all topological spaces will be CW complexes. We fix basepoints for connected topological spaces, and for topological groups we take the identities as basepoints. We write $\pi_{i}(X)$ in place of $\pi_{i}(X,x_{0})$. If $f:X\rightarrow Y$ is a continuous map, the induced homomorphism on homotopy groups is denoted by $\pi_{i}(f):\pi_{i}(X)\rightarrow \pi_{i}(Y)$, for all $i\in \N$. \section{Stabilization of operations on \texorpdfstring{$\U_{n}$}{Un}} Let $m, n \in \N$. We consider the following matrix operations: \begin{enumerate} \item The \textit{direct sum of matrices}, $\mif{\bigoplus}{\U_{m}\times \U_{n}}{\U_{m+n}}$ defined by \[ A\oplus B= \begin{pmatrix} A & 0\\ 0 & B \end{pmatrix}. \] \item The \textit{$r$-fold direct sum}, $\mif{\bigoplus^{r}}{\U_{n}}{\U_{rn}}$ given by $A^{\oplus r}=\underbrace{A\oplus \cdots \oplus A}_{r\text{-times}}$. \item The \textit{tensor product of matrices}, $\mif{\bigotimes}{\U_{m}\times\U_{n}}{\U_{mn}}$ defined by \[ A\tensor B = \begin{pmatrix} a_{11}B & \cdots & a_{1m}B\\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mm}B \end{pmatrix}, \] for $A=(a_{ij}) \in \U_{m}$. \item The \textit{$r$-fold tensor product}, $\mif{\bigotimes^{r}}{\U_{n}}{\U_{n^{r}}}$ given by $A^{\tensor r} =\underbrace{A\tensor \cdots \tensor A}_{r\text{-times}}$. \end{enumerate} The homomorphisms of homotopy groups induced by the operations above will be denoted by $\bigoplus_{*}$, $\bigoplus^{r}_{*}$, $\bigotimes_{*}$ and $\bigotimes^{r}_{*}$, respectively.
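Two elementary identities about these matrix operations, used repeatedly in what follows, can be checked numerically with small random unitaries: $I_{m}\otimes B=B^{\oplus m}$, and the fact that $A\otimes I_{n}$ and $I_{n}\otimes A$ agree up to conjugation by a permutation matrix (the perfect shuffle), which is the mechanism behind Lemma \ref{st} below. A minimal numpy sketch (the code and names are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(k):
    # The Q factor of a complex Gaussian matrix is unitary.
    q, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
    return q

m, n = 2, 3
A, B = random_unitary(m), random_unitary(n)

# I_m ⊗ B equals the m-fold direct sum B ⊕ ... ⊕ B.
blocks = np.zeros((m * n, m * n), dtype=complex)
for j in range(m):
    blocks[j * n:(j + 1) * n, j * n:(j + 1) * n] = B
assert np.allclose(np.kron(np.eye(m), B), blocks)

# A ⊗ I_n and I_n ⊗ A are conjugate by the perfect-shuffle permutation P,
# which sends the basis vector e_i ⊗ f_j to f_j ⊗ e_i.
P = np.zeros((m * n, m * n))
for i in range(m):
    for j in range(n):
        P[j * m + i, i * n + j] = 1
assert np.allclose(P @ np.kron(A, np.eye(n)) @ P.T, np.kron(np.eye(n), A))
print("identities verified")
```

Since a permutation matrix is a product of elementary row interchanges, $P$ and $P^{\mathsf T}$ are exactly of the form of the matrices obtained by the row and column operations used in the proof of Lemma \ref{st}.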
We recall the homotopy groups of $\U_{n}$ and $\SU_{n}$ in low degrees, and compute the homotopy groups of $\U_{am}/\mu_{m}$ and $\SU_{am}/\mu_{m}$ in low degrees. The homotopy groups of the unitary group can be calculated by using Bott periodicity. Bott \cite{bott1958} proved that \[ \pi_{i}(\U_{n})\iso \begin{cases} 0 &\text{if $i<2n$ is even,}\\ \Z &\text{if $i<2n$ is odd,}\\ \Z/n! &\text{if $i=2n$.} \end{cases} \] Since there is a fibration $\SU_{n}\hookrightarrow \U_{n} \xrightarrow{\det} S^{1}$, we can use the long exact sequence associated to it to see that \[ \pi_{i}(\SU_{n})\iso \begin{cases} 0 &\text{if $i=1$,}\\ \pi_{i}(\U_{n}) &\text{otherwise.} \end{cases} \] Since $\SU_{am}$ is a simply connected $m$-fold cover of $\SU_{am}/\mu_{m}$, we deduce that \[ \pi_{i}(\SU_{am}/\mu_{m})\iso \begin{cases} \Z/m &\text{if $i=1$,}\\ \pi_{i}(\SU_{am}) &\text{otherwise}. \end{cases} \] Consider diagram \eqref{hg} below: \begin{equation}\label{hg} \begin{tikzcd}[column sep=large] \mu_{m} \arrow[equal,r] \arrow[hookrightarrow,d] & \mu_{m} \arrow[r] \arrow[hookrightarrow,d] & \{1\} \arrow[d]\\ \SU_{am} \arrow[hookrightarrow,r] \arrow[twoheadrightarrow,d] & \U_{am} \arrow[twoheadrightarrow,r,"\det"] \arrow[twoheadrightarrow,d] & S^{1} \arrow[equal,d]\\ \SU_{am}/\mu_{m} \arrow[r] & \U_{am}/\mu_{m} \arrow[r,"\det"] & S^{1} . \end{tikzcd} \end{equation} All columns, as well as the two top rows, of diagram \eqref{hg} are exact. The nine-lemma implies that the bottom row is also exact. Therefore, \[ \pi_{i}(\U_{am}/\mu_{m})\iso \pi_{i}(\SU_{am}/\mu_{m}) \quad \text{ for all } i>1. \] It remains to compute the fundamental group of $\U_{am}/\mu_{m}$. By exactness of the bottom row of diagram \eqref{hg}, the induced sequence on fundamental groups is exact, \begin{equation}\label{fgU/m} 0 \rightarrow \pi_{1}\left(\SU_{am}/\mu_{m}\right) \rightarrow \pi_{1}\left(\U_{am}/\mu_{m}\right) \rightarrow \pi_{1}\left(S^{1}\right) \rightarrow 0.
\end{equation} Since $\pi_{1}\left(S^{1}\right)\iso\Z$ is free, the sequence \eqref{fgU/m} splits. As $\pi_{1}\left(\SU_{am}/\mu_{m}\right)\iso\Z/m$, we conclude that $\pi_{1}\left(\U_{am}/\mu_{m}\right)\iso \Z \oplus \Z/m$. \subsection{Stabilization} For $n \in \N$, the standard inclusion of unitary groups $\U_{n} \hookrightarrow \U_{n+1}$ is $2n$-connected. \begin{lemma}\label{esta} Let $m,n \in \N$ with $m \leq n$, and let $\mif{\esta_{m,n}}{\U_{m}}{\U_{m+n}}$ be the map defined by \begin{equation*} \esta_{m,n}(A)= \begin{pmatrix} A & 0\\ 0 & I_{n} \end{pmatrix}. \end{equation*} The map $\mif{\esta_{m,n}}{\U_{m}}{\U_{m+n}}$ is $2m$-connected. \end{lemma} \begin{proof} The map $\esta_{m,n}$ is equal to the composite of a series of standard inclusions $\U_{m} \hookrightarrow \U_{m+1} \hookrightarrow \U_{m+2}\hookrightarrow \cdots \hookrightarrow \U_{m+n}$, each of which is at least $2m$-connected. \end{proof} \begin{lemma}\label{sj} Let $n,r \in \N$. For all $j=1,\dots,r$ define $\mif{\s_{j}}{\U_{n}}{\U_{rn}}$ by \[ \s_{j}(A)=\diag(I_{n},\dots,I_{n},A,I_{n},\dots,I_{n}), \] where $A$ is in the $j$-th position. The map $\s_{j}$ is $2n$-connected for all $j=1,\dots,r$. \end{lemma} \begin{proof} The essential observation is that for all $j=1,\dots,r-1$ the maps $\s_{j}$ and $\s_{j+1}$ are homotopic. For this reason, the induced homomorphisms on homotopy groups are all equal, and it suffices to study the homomorphism induced by $\s_{1}$. Note that $\s_{1}=\esta_{n,(r-1)n}$. By applying Lemma \ref{esta} the proof is completed. \end{proof} \begin{rem}\label{EH} The matrix multiplication $\mif{\m}{\U_{n}\times\U_{n}}{\U_{n}}$ induces a homomorphism of homotopy groups $\mif{\pi_{i}(\m)}{\pi_{i}(\U_{n}\times\U_{n})}{\pi_{i}(\U_{n})}$ by pre-composition, for all $i \in \N$.
By using the isomorphism of homotopy groups \begin{equation*} \begin{tikzcd}[row sep=tiny, column sep=large] \pi_{i}(X)\times\pi_{i}(Y) \arrow[r,rightarrow,"\iso"] & \pi_{i}(X\times Y)\\ \bigl(\alpha,\beta\bigr) \arrow[r,mapsto] & \alpha \times \beta, \end{tikzcd} \end{equation*} we can define a group operation $\mif{\m_{*}}{\pi_{i}(\U_{n})\times\pi_{i}(\U_{n})}{\pi_{i}(\U_{n})}$. Therefore, $\pi_{i}(\U_{n})$ has two operations, the usual sum and the one induced by matrix multiplication. The Eckmann-Hilton argument implies these operations are equal, that is, $\m_{*}(\alpha,\beta)=\alpha+\beta$. \end{rem} \subsection{Operations} \begin{prop}\label{DS} For all $i\in \N$, the homomorphism $\mif{\bigoplus_{*}}{\pi_{i}(\U_{m})\times \pi_{i}(\U_{n})}{\pi_{i}(\U_{m+n})}$ is equal to the composite of $\pi_{i}(\esta_{m,n})\times \pi_{i}(\esta_{n,m})$ and $\m_{*}$. \end{prop} \begin{proof} It is enough to observe that the direct sum factors as \begin{equation*} \begin{tikzcd}[row sep=tiny, column sep=large] \U_{m}\times\U_{n} \arrow[r,rightarrow,"(\esta_{m,n}\text{,}\esta_{n,m})"] & \U_{m+n}\times\U_{m+n} \arrow[r,"\m"] &\U_{m+n}\\ (A,B) \arrow[r,mapsto] & \left( \begin{pmatrix} A & 0\\ 0 & I_{n} \end{pmatrix}, \begin{pmatrix} I_{m} & 0\\ 0 & B \end{pmatrix} \right) \arrow[r,mapsto] & \begin{pmatrix} A & 0\\ 0 & B \end{pmatrix}. \end{tikzcd} \end{equation*} \end{proof} \begin{cor}\label{cDS} If $m<n$, then $\bigoplus_{*}(x,y)=x+y$ for all $x \in \pi_{i}(\U_{m})$, $y \in \pi_{i}(\U_{n})$ and $i<2m$. \end{cor} \begin{proof} By Lemma \ref{esta} and Remark \ref{EH}, the homomorphisms $\pi_{i}(\esta_{m,n})$ and $\pi_{i}(\esta_{n,m})$ are bijective for $i<2m$ and $i<2n$, respectively. Hence, $\bigoplus_{*}(x,y)=\m_{*}(x,y)=x+y$ for $x \in \pi_{i}(\U_{m})$, $y \in \pi_{i}(\U_{n})$ and $i<2m$.
\end{proof} \begin{prop}\label{nBS} For all $i\in \N$, the homomorphism $\mif{\bigoplus^{r}_{*}}{\pi_{i}(\U_{n})}{\pi_{i}(\U_{rn})}$ is equal to the composite of the product of the stabilization maps $\mif{\pi_{i}(\s_{j})}{\pi_{i}(\U_{n})}{\pi_{i}(\U_{rn})}$, $j=1,\dots,r$, and multiplication by $r$. \end{prop} \begin{proof} The $r$-fold direct sum factors as \begin{equation*} \begin{tikzcd}[row sep=tiny, column sep=normal] \U_{n} \arrow[r,rightarrow,"\Delta"] & (\U_{n})^{\times r} \arrow[r,rightarrow,"\prod \s_{j}"] & (\U_{rn})^{\times r} \arrow[r,rightarrow,"\m"] & \U_{rn} \\ A \arrow[r,mapsto] & (A,\dots,A) \arrow[r,mapsto] & \bigl(\s_{1}(A),\dots,\s_{r}(A)\bigr) \arrow[r,mapsto] & \s_{1}(A)\cdots \s_{r}(A)=A^{\oplus r}. \end{tikzcd} \end{equation*} By the Eckmann-Hilton argument $\mif{\m_{*}}{\pi_{i}(\U_{rn})^{r}}{\pi_{i}(\U_{rn})}$ is given by \begin{equation*} \m_{*}(a_{1},\dots,a_{r})=a_{1}+\cdots+a_{r}, \;\;\; \text{for $a_{j} \in \pi_{i}(\U_{rn})$ and $j=1,\dots,r$}. \end{equation*} As a result, $\bigoplus_{*}^{r}$ takes the form \begin{equation*} \begin{tikzcd}[row sep=tiny, column sep=scriptsize] \pi_{i}(\U_{n}) \arrow[r,rightarrow,"\Delta"] & \pi_{i}(\U_{n})^{\times r} \arrow[r,rightarrow,"\prod \pi_{i}(\s_{j})"] & \pi_{i}(\U_{rn}) ^{\times r} \arrow[r,rightarrow,"\m"] & \pi_{i}(\U_{rn}) \\ a \arrow[r,mapsto] & (a,\dots,a) \arrow[r,mapsto] & \bigl(\s_{*}(a),\dots,\s_{*}(a)\bigr) \arrow[r,mapsto] & \s_{*}(a)+\cdots+\s_{*}(a)=r\s_{*}(a), \end{tikzcd} \end{equation*} where $\s_{*}$ denotes $\pi_{i}(\s_{j})$ for $j=1,\dots,r$. This notation makes sense because Lemma \ref{sj} yields the equality $\pi_{i}(\s_{1})=\cdots=\pi_{i}(\s_{r})$. This proves the statement. \end{proof} \begin{cor}\label{cnBS} If $i<2n$, then $\bigoplus_{*}^{r}(x)=rx$ for all $x \in \pi_{i}(\U_{n})$.
\end{cor} \begin{proof} By Lemma \ref{sj}, $\mif{\pi_{i}(\s_{j})}{\pi_{i}(\U_{n})}{\pi_{i}(\U_{rn})}$ is an isomorphism for all $j=1,\dots,r$ and $i<2n$; hence $\mif{\prod \pi_{i}(\s_{j})}{\pi_{i}(\U_{n})^{\times r}}{\pi_{i}(\U_{rn})^{\times r}}$ is an isomorphism for all $i<2n$. By Proposition \ref{nBS} we conclude $\bigoplus_{*}^{r}(x)=rx$ for $x \in \pi_{i}(\U_{n})$ and $i<2n$. \end{proof} \begin{lemma}\label{st} The homomorphisms \[ \mif{-\tensor I_{n}}{\U_{m}}{ \U_{mn}} \quad \text{ and } \quad \mif{ I_{n}\tensor -}{\U_{m}}{ \U_{mn}} \] are homotopic. \end{lemma} \begin{proof} Let $A\in \U_{m}$. Let $E_{ij}$ denote the matrix obtained by swapping the $i$-th row and the $j$-th row of $I_{mn}\in \M_{mn}(\C)$. Observe that after performing a finite number of row and column interchanges on the matrix $I_{n}\otimes A \in \U_{mn}$ we obtain the matrix $A\otimes I_{n}$. In other words, $A\otimes I_{n}=R(I_{n}\otimes A)C$, where $R$ and $C$ are products of the elementary matrices $E_{ij}$. Since $\U_{mn}$ is path-connected, there exist paths $\alpha_{R}$ and $\alpha_{C}$ in $\U_{mn}$ from $I_{mn}$ to $R$ and $C$, respectively. We define a homotopy $\mif{H}{\U_{m}\times[0,1]}{\U_{mn}}$ between $-\tensor I_{n}$ and $I_{n}\tensor -$ by $H(A,t)=\alpha_{R}(t)(I_{n}\otimes A)\alpha_{C}(t)$.
\end{prop} \begin{proof} Consider the composites \begin{equation*} \begin{tikzcd}[row sep=tiny,column sep=small] \U_{m} \arrow[r,hookrightarrow] & \U_{m}\times\{I_{n}\} \arrow[r,"\bigotimes"] &\U_{mn} & \text{and} & \U_{n} \arrow[r,hookrightarrow] & \{I_{m}\}\times \U_{n} \arrow[r,"\bigotimes"] &\U_{mn} \\ A \arrow[r,mapsto] & (A,I_{n}) \arrow[r,mapsto] & A\tensor I_{n} & & B \arrow[r,mapsto] & (I_{m},B) \arrow[r,mapsto] & I_{m}\tensor B. \end{tikzcd} \end{equation*} Since $I_{m}\tensor B=B^{\oplus m}$, the second composite is equal to $\bigoplus^{m}$. By Lemma \ref{st}, the first composite is homotopic to the map $A\mapsto I_{n}\otimes A$, and hence to $\bigoplus^{n}$. From this we get the commutative diagram below \begin{equation*} \begin{tikzcd}[row sep=large,column sep=huge] \U_{m}\times\{I_{n}\} \arrow[hookrightarrow,r] & \U_{m}\times\U_{n} \arrow[d,"\bigotimes"] & \{I_{m}\}\times \U_{n} \arrow[hookrightarrow,l] \arrow[dl,"\bigoplus^{m}"] \\ & \U_{mn}. \arrow[leftarrow,ul,"\bigoplus^{n}"]& \end{tikzcd} \end{equation*} Thus the induced diagram on homotopy groups takes the form \begin{equation*} \begin{tikzcd}[row sep=large,column sep=huge] \pi_{i}(\U_{m}) \times\{0\} \arrow[hookrightarrow,r] &\pi_{i}(\U_{m})\times\pi_{i}(\U_{n}) \arrow[d,"\bigotimes_{*}"] & \{0\}\times\pi_{i}(\U_{n}) \arrow[hookrightarrow,l] \arrow[dl,"\bigoplus^{m}_{*}"] \\ & \pi_{i}(\U_{mn}). \arrow[leftarrow,ul,"\bigoplus^{n}_{*}"]& \end{tikzcd} \end{equation*} \end{proof} \begin{cor}\label{cTP} If $m<n$, then $\bigotimes_{*}(x,y)=nx+my$ for $i<2m$. \end{cor} \begin{proof} The statement follows from Proposition \ref{TP} and Corollary \ref{cnBS}. \end{proof} \begin{prop}\label{nBT} Let $i\in \N$. Then the homomorphism $\mif{\bigotimes^{r}_{*}}{\pi_{i}(\U_{n})}{\pi_{i}(\U_{n^{r}})}$ is given by $\bigotimes^{r}_{*}(x)=r\bigoplus^{n^{r-1}}_{*}(x) $ for $x \in \pi_{i}(\U_{n})$. \end{prop} \begin{proof} This can be proven by induction on $r$, using Proposition \ref{TP}.
\end{proof} \begin{cor}\label{cnBT} If $i<2n$, then $\bigotimes^{r}_{*}(x)=rn^{r-1}x$. \end{cor} \begin{proof} Proposition \ref{nBT} and Corollary \ref{cnBS} yield the result. \end{proof} \subsubsection{Tensor product on the quotient} Let $a$, $b$, $m$ and $n$ be positive integers. The tensor product operation $\mif{\bigotimes}{\U_{am}\times\U_{bn}}{\U_{abmn}}$ sends the group $\mu_{m}\times \mu_{n}$ to $\mu_{mn}$. In consequence, the operation descends to the quotient \begin{equation}\label{q} \mif{\otimes}{\U_{am}/\mu_{m}\times\U_{bn}/\mu_{n}}{\U_{abmn}/\mu_{mn}}. \end{equation} \begin{prop}\label{TP1} Let $i>1$. Then the homomorphism \[ \mif{\otimes_{*}}{\pi_{i}(\U_{am}/\mu_{m})\times\pi_{i}(\U_{bn}/\mu_{n})}{\pi_{i}(\U_{abmn}/\mu_{mn})} \] is given by $\bigotimes_{*}(x,y)=\bigoplus^{bn}_{*}(x)+\bigoplus^{am}_{*}(y)$ for $x \in \pi_{i}(\U_{am}/\mu_{m})$ and $y\in \pi_{i}(\U_{bn}/\mu_{n})$. \end{prop} \begin{proof} There exists a map of fibrations \begin{equation}\label{mf} \begin{tikzcd} \mu_{m}\times\mu_{n} \arrow[r,hookrightarrow] \arrow[d,"\m"] & \U_{am}\times\U_{bn} \arrow[r] \arrow[d,"\bigotimes"] & \U_{am}/\mu_{m}\times\U_{bn}/\mu_{n} \arrow[d,"\bigotimes"]\\ \mu_{mn} \arrow[r,hookrightarrow] & \U_{abmn} \arrow[r] & \U_{abmn}/\mu_{mn}, \end{tikzcd} \end{equation} where $\m$ is the matrix multiplication. Then there exists a homomorphism between the long exact sequences associated to the fibrations in diagram \eqref{mf}. We obtain a commutative square \begin{equation*} \begin{tikzcd} \pi_{i}(\U_{am})\times\pi_{i}(\U_{bn}) \arrow[r,"\iso"] \arrow[d,"\bigotimes_{*}"] & \pi_{i}(\U_{am}/\mu_{m})\times\pi_{i}(\U_{bn}/\mu_{n}) \arrow[d,"\bigotimes_{*}"]\\ \pi_{i}(\U_{abmn}) \arrow[r,"\iso"] & \pi_{i}(\U_{abmn}/\mu_{mn}). \end{tikzcd} \end{equation*} Then the result follows from Proposition \ref{TP}.
\end{proof} \begin{prop}\label{TP2} The homomorphism \[ \mif{\otimes_{*}}{\pi_{1}(\U_{am}/\mu_{m})\times\pi_{1}(\U_{bn}/\mu_{n})}{\pi_{1}(\U_{abmn}/\mu_{mn})} \] is given by $\bigotimes_{*}(x+\alpha,y+\beta)=bnx+amy+\alpha\beta$ for $x+\alpha \in \Z\oplus \Z/m$, $y+\beta \in \Z\oplus \Z/n$. \end{prop} \begin{proof} There exists a map of fibrations similar to the one in diagram \eqref{mf}, but with the spaces $\SU_{am}$ and $\SU_{bn}$ instead of $\U_{am}$ and $\U_{bn}$, respectively. In this case we obtain the commutative square \begin{equation*} \begin{tikzcd} \pi_{1}(\SU_{am}/\mu_{m})\times\pi_{1}(\SU_{bn}/\mu_{n}) \arrow[r,"\iso"] \arrow[d,"\bigotimes_{*}"] & \mu_{m}\times\mu_{n} \arrow[d,"\m"]\\ \pi_{1}(\SU_{abmn}/\mu_{mn}) \arrow[r,"\iso"] & \mu_{mn}. \end{tikzcd} \end{equation*} Thus the homomorphism $\otimes_{*}:\pi_{1}(\SU_{am}/\mu_{m})\times\pi_{1}(\SU_{bn}/\mu_{n})\rightarrow \pi_{1}(\SU_{abmn}/\mu_{mn})$ is given by multiplication. Consider the map of fibrations \begin{equation* \begin{tikzcd}[column sep=huge] \SU_{am}/\mu_{m}\times\SU_{bn}/\mu_{n} \arrow[r,hookrightarrow] \arrow[d,"\bigotimes"] & \U_{am}/\mu_{m}\times\U_{bn}/\mu_{n} \arrow[r,"\det\times\det"] \arrow[d,"\bigotimes"] & S^{1}\times S^{1} \arrow[d,"\phi"]\\ \SU_{abmn}/\mu_{mn} \arrow[r,hookrightarrow] & \U_{abmn}/\mu_{mn} \arrow[r,"\det"] & S^{1}, \end{tikzcd} \end{equation*} where $\phi(x,y)=x^{bn}y^{am}$. From this diagram we have a map of short exact sequences, \begin{equation*} \begin{tikzcd}[column sep=scriptsize] \pi_{1}(\SU_{am}/\mu_{m})\times\pi_{1}(\SU_{bn}/\mu_{n}) \arrow[r,hookrightarrow,"i"] \arrow[d,"\m"] & \pi_{1}(\U_{am}/\mu_{m})\times\pi_{1}(\U_{bn}/\mu_{n}) \arrow[r,twoheadrightarrow] \arrow[d,"\bigotimes_{*}"] & \pi_{1}(S^{1})\times\pi_{1}(S^{1}) \arrow[d,"\phi_{*}"]\\ \pi_{1}(\SU_{abmn}/\mu_{mn}) \arrow[r,hookrightarrow,"j"] & \pi_{1}(\U_{abmn}/\mu_{mn}) \arrow[r,twoheadrightarrow]& \pi_{1}(S^{1}).
\end{tikzcd} \end{equation*} Note that the homomorphisms induced by $\det\times\det$ and by $\phi$ have sections. Let $s$ and $\psi$ denote these sections, respectively. Define $t:\pi_{1}(S^{1})\rightarrow \pi_{1}(\U_{abmn}/\mu_{mn})$ by $t\coloneqq \tensor_{*}\circ s\circ \psi$. The homomorphism $t$ is a section of $\pi_{1}(\U_{abmn}/\mu_{mn}) \rightarrow \pi_{1}(S^{1})$ and makes the square below commute \begin{equation*} \begin{tikzcd} \pi_{1}(S^{1})\times\pi_{1}(S^{1}) \arrow[r,"s"] \arrow[d,"\phi_{*}"] & \pi_{1}(\U_{am}/\mu_{m})\times\pi_{1}(\U_{bn}/\mu_{n}) \arrow[d,"\bigotimes_{*}"]\\ \pi_{1}(S^{1}) \arrow[r,"t"] & \pi_{1}(\U_{abmn}/\mu_{mn}). \end{tikzcd} \end{equation*} Therefore, the short exact sequences above split and \begin{equation*} \begin{tikzcd} \pi_{1}(\U_{am}/\mu_{m})\times\pi_{1}(\U_{bn}/\mu_{n}) \arrow[d,"\bigotimes_{*}"] \arrow[r,"\iso"] & \im i \oplus \im s \arrow[d,"\m+\phi_{*}"]\\ \pi_{1}(\U_{abmn}/\mu_{mn}) \arrow[r,"\iso"] & \im j \oplus \im t. \end{tikzcd} \end{equation*} That is, $\bigotimes_{*}(x+\alpha,y+\beta)=\phi_{*}(x,y)+\m(\alpha,\beta)$ for $x+\alpha \in \Z\oplus \Z/m$, $y+\beta \in \Z\oplus \Z/n$. By Remark \ref{EH} and Corollary \ref{cTP}, $\bigotimes_{*}(x+\alpha,y+\beta)=bnx+amy+\alpha\beta$. \end{proof} \begin{rem} Observe that we can also define operations on $\SU_{m}\times\SU_{n}$ that satisfy properties analogous to those proved for the operations we defined on $\U_{m}\times\U_{n}$. \end{rem} \section{Proof of Theorem \ref{main}} We start this section by recalling some facts about topological Azumaya algebras. There is a natural bijective correspondence between isomorphism classes of topological Azumaya algebras of degree $n$ over $X$ and isomorphism classes of principal $\PU_{n}$-bundles over $X$. Therefore, a topological Azumaya algebra on $X$ of degree $n$ will be considered as a homotopy class in $[X,\B\PU_{n}]$.
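The integer identity that drives the construction in the next subsection, positive integers $u$ and $v$ with $(bn)^{n+1}v-(am)^{m+1}u=\pm1$, which makes a certain $2\times 2$ integer matrix invertible over $\Z$, can be produced with the extended Euclidean algorithm. The following is a sketch for the sample values $a=b=1$, $m=2$, $n=3$ (our choice, for illustration only):

```python
from math import gcd

def ext_gcd(x, y):
    # Returns (g, s, t) with s*x + t*y == g == gcd(x, y).
    if y == 0:
        return x, 1, 0
    g, s, t = ext_gcd(y, x % y)
    return g, t, s - (x // y) * t

a, b, m, n = 1, 1, 2, 3               # sample values with gcd(am, bn) = 1
am, bn = a * m, b * n
assert gcd(am, bn) == 1 and am < bn

p, q = am ** (m + 1), bn ** (n + 1)   # coprime, since am and bn are
g, s, t = ext_gcd(q, p)               # s*q + t*p == 1
assert g == 1
v, u = s, -t                          # v*(bn)^{n+1} - u*(am)^{m+1} == 1
while v <= 0 or u <= 0:               # shift to make both positive;
    v += p                            # (v+p)*q - (u+q)*p == v*q - u*p
    u += q
assert v * q - u * p == 1

# The 2x2 integer matrix ((bn, am), (u*(am)^m, v*(bn)^n)) has det ±1:
det = bn * (v * bn ** n) - am * (u * am ** m)
assert det in (1, -1)

N = u * am ** m + v * bn ** n
print("u =", u, "v =", v, "N =", N, "det =", det)
```

For these sample values one obtains $u=10$, $v=1$, and $N=u(am)^{m}+v(bn)^{n}=67$, with determinant $1$.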
Consider the space $\B\SU_{am}/\mu_{m}$ and the map $\B\SU_{am}/\mu_{m} \rightarrow \K(\Z/m,2)$ which is the projection of $\B\SU_{am}/\mu_{m}$ onto the first non-trivial stage of its Postnikov tower. Given a map $\mif{\Az{A}}{X}{\B\SU_{am}/\mu_{m}}$, we define the \textit{Brauer class} of $\Az{A}$ as follows. Let $\chi_{m}$ denote the composite of $\Az{A}$ and $\B\SU_{am}/\mu_{m} \rightarrow \K(\Z/m,2)$. The \textit{Brauer class} of $\Az{A}$ is $\cl(\Az{A})=\RBn_{m}(\chi_{m})$, where $\RBn_{m}:\K(\Z/m,2)\rightarrow \K(\Z,3)$ is the reduced Bockstein map, as illustrated in diagram \eqref{clA}. \begin{equation}\label{clA} \begin{tikzcd}[column sep=large] & \B\SU_{am}/\mu_{m} \arrow[d] &\\ X \arrow[ru,"\Az{A}"] \arrow[r,"\chi_{m}" below=3pt] & \K(\Z/m,2) \arrow[r,"\RBn_{m}"] & \K(\Z,3). \end{tikzcd} \end{equation} \subsection{A left homotopy inverse} Let $a$, $b$, $m$ and $n$ be positive integers. By applying the classifying-space functor to the homomorphism \eqref{q} we get a map \begin{align*} F_{\tensor}: \B\U_{am}/\mu_{m}\times\B\U_{bn}/\mu_{n} \rightarrow \B\U_{abmn}/\mu_{mn}. \end{align*} Similarly, we define $F_{\tensor}: \B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n} \rightarrow \B\SU_{abmn}/\mu_{mn}$. When $a=b=1$, we write $f_{\tensor}$ instead of $F_{\tensor}$. \begin{prop}\label{UN} Let $a$, $b$, $m$ and $n$ be positive integers such that $am$ and $bn$ are relatively prime and $am<bn$. There exist $N>0$ and a homomorphism $\mif{\Tr}{\U_{am}\times \U_{bn}}{\U_{N}}$ such that \begin{enumerate} \item The homomorphism $\Tr$ descends to $\mif{\widetilde{\Tr}}{\U_{am}/\mu_{m}\times \U_{bn}/\mu_{n}}{\U_{N}}$. \item The map $\mif{\bigl(F_{\tensor},\B\widetilde{\Tr}\bigr)}{\B\U_{am}/\mu_{m}\times\B\U_{bn}/\mu_{n}}{\B\U_{abmn}/\mu_{mn}\times\B\U_{N}}$ is $(2am+1)$-connected. \end{enumerate} \end{prop} \begin{proof} \textit{Existence of $\Tr$.} Since $am$ and $bn$ are relatively prime, so are $(am)^{m+1}$ and $(bn)^{n+1}$.
Hence there exist positive integers $u$ and $v$ such that $(bn)^{n+1}v-(am)^{m+1}u=\pm1$. Let $N$ denote $u(am)^{m}+v(bn)^{n}$. We define $\Tr$ using the operations described in Section 2, as the composite \begin{equation*} \begin{tikzcd}[column sep=large] \Tr:\U_{am}\times\U_{bn} \arrow[r,"(\tensor^{m}\text{,}\tensor^{n})"] & \U_{(am)^{m}}\times\U_{(bn)^{n}} \arrow[r,"(\oplus^{u}\text{,}\oplus^{v})"] & \U_{u(am)^{m}}\times\U_{v(bn)^{n}} \arrow[r,"\oplus"] & \U_{N} \end{tikzcd} \end{equation*} \textit{The homomorphism $\Tr$ descends to $\mif{\widetilde{\Tr}}{\U_{am}/\mu_{m}\times \U_{bn}/\mu_{n}}{\U_{N}}$.} We must show that $\mu_{m}\times \mu_{n}$ is contained in $\Ker(\Tr)$. Let $\alpha$ and $\beta$ be $m$-th and $n$-th roots of unity, respectively. Note that the element at the leftmost side of diagram \eqref{desc} is sent to the identity matrix in $\U_{N}$. \begin{equation}\label{desc} \bigl(\alpha I_{am},\beta I_{bn}\bigr) \mapsto \bigl(\alpha^{m}I_{(am)^{m}},\beta^{n}I_{(bn)^{n}}\bigr) \mapsto \bigl(I_{u(am)^{m}},I_{v(bn)^{n}}\bigr) \mapsto I_{N}. \end{equation} \textit{The map $\bigl(F_{\tensor},\B\widetilde{\Tr}\bigr)$ is $(2am+1)$-connected.} We want to prove that the induced homomorphism on homotopy groups \begin{equation}\label{FB} \mif{\bigl(F_{\tensor},\B\widetilde{\Tr}\bigr)_{i}}{\pi_{i}(\B\U_{am}/\mu_{m})\times \pi_{i}(\B\U_{bn}/\mu_{n})}{\pi_{i}(\B\U_{abmn}/\mu_{mn})\times\pi_{i}(\B\U_{N})} \end{equation} is bijective for all $i<2am+1$ and surjective for $i=2am+1$. It suffices to prove that $\bigl(F_{\tensor},\B\widetilde{\Tr}\bigr)_{i}$ is a bijection for all even $i<2am+1$, since the homotopy groups involved vanish in odd degrees in this range. \textbf{Case 1:} Let $i<2am+1$ be even with $i\neq 2$. The homomorphism \eqref{FB} takes the form \[ \mif{\bigl(F_{\tensor},\B\widetilde{\Tr}\bigr)_{i}}{\Z\times\Z}{\Z\times\Z}. \] Proposition \ref{TP1} and Corollary \ref{cTP} show that $\bigl(F_{\tensor}\bigr)_{i}(x,y)=\bigotimes_{*}(x,y)=bnx+amy$.
Proposition \ref{TP1} and Corollaries \ref{cDS}, \ref{cnBS} and \ref{cnBT} imply that $\mif{\B\widetilde{\Tr}_{i}}{\pi_{i}(\B\U_{am}/\mu_{m})\times \pi_{i}(\B\U_{bn}/\mu_{n})}{\pi_{i}(\B\U_{N})}$ is given by $\B\widetilde{\Tr}_{i}(x,y)=u(am)^{m}x+v(bn)^{n}y$. Therefore, the homomorphism \eqref{FB} is represented by the matrix \[ \begin{pmatrix} bn & am\\ u(am)^{m} & v(bn)^{n} \end{pmatrix}, \] which is invertible over $\Z$ since its determinant is $v(bn)^{n+1}-u(am)^{m+1}=\pm1$. This proves that $\bigl(F_{\tensor},\B\widetilde{\Tr}\bigr)_{i}$ is bijective. \textbf{Case 2:} When $i=2$, the homomorphism \eqref{FB} takes the form \[ \mif{\bigl(F_{\tensor},\B\widetilde{\Tr}\bigr)_{2}}{(\Z\oplus\Z/m)\times(\Z\oplus\Z/n)}{(\Z\oplus\Z/mn)\times\Z}. \] The homomorphism $\mif{\B\widetilde{\Tr}_{2}}{\pi_{2}(\B\U_{am}/\mu_{m})\times\pi_{2}(\B\U_{bn}/\mu_{n})}{\pi_{2}(\B\U_{N})}$ satisfies $\B\widetilde{\Tr}_{2}(x+\alpha,y+\beta)=u(am)^{m}x+v(bn)^{n}y$ for all $x+\alpha \in \Z\oplus\Z/m$ and $y+\beta \in \Z\oplus\Z/n$. To see this, we use the fact that $\mu_{m}\times\mu_{n}\subset\Ker(\Tr)$ and an argument similar to the one used in Proposition \ref{TP2}. Thus $\bigl(F_{\tensor},\B\widetilde{\Tr}\bigr)_{2}(x+\alpha,y+\beta) =\bigl(bnx+amy+\alpha\beta,\;u(am)^{m}x+v(bn)^{n}y\bigr)$. Consequently, $(F_{\tensor},\B\widetilde{\Tr})_{2}$ is bijective. \end{proof} \begin{prop}\label{SUN} Let $a$, $b$, $m$ and $n$ be positive integers such that $ma$ and $nb$ are relatively prime and $am<bn$. There exist $N>0$ and a homomorphism $\mif{\Tr}{\SU_{am}\times \SU_{bn}}{\SU_{N}}$ such that \begin{enumerate} \item The homomorphism $\Tr$ descends to $\mif{\widetilde{\Tr}}{\SU_{am}/\mu_{m}\times \SU_{bn}/\mu_{n}}{\SU_{N}}$. \item The map $\mif{(F_{\tensor},\B\widetilde{\Tr})}{\B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n}}{\B\SU_{abmn}/\mu_{mn}\times\B\SU_{N}}$ is $(2am+1)$-connected. \end{enumerate} \end{prop} \begin{proof} By proceeding as in the proof of Proposition \ref{UN} and using Propositions \ref{TP1} and \ref{TP2}, the result follows.
\end{proof} \begin{cor} Let $m$ and $n$ be relatively prime positive integers. There exist $N>0$ and a homomorphism $\mif{\tau}{\SU_{m}\times \SU_{n}}{\SU_{N}}$ such that \begin{enumerate} \item The homomorphism $\tau$ descends to $\mif{\widetilde{\tau}}{\PU_{m}\times \PU_{n}}{\SU_{N}}$. \item The map $\mif{(f_{\tensor},\B\widetilde{\tau})}{\B\PU_{m}\times \B\PU_{n}}{\B\PU_{mn}\times\B\SU_{N}}$ is $(2m+1)$-connected. \end{enumerate} \end{cor} \subsection{Factorization through \texorpdfstring{$F_{\tensor}: \B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n} \rightarrow \B\SU_{abmn}/\mu_{mn}$}{F:BSUam/m x BSUbn/n-->BSUabmn/mn}} \begin{thm}\label{abpq/pq} Let $a$, $b$, $m$ and $n$ be positive integers such that $ma$ and $nb$ are relatively prime and $am<bn$. Let $X$ be a CW complex such that $\dim(X)\leq 2am$. Every map $\Az{A}:X \rightarrow \B\SU_{abmn}/\mu_{mn}$ can be lifted to $\B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n}$ along the map $F_{\tensor}$. \end{thm} \begin{proof} Diagrammatically speaking, we want to find a map $\Az{A}_{m}\times\Az{A}_{n}:X \rightarrow \B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n}$ such that diagram \eqref{lpF} commutes up to homotopy \begin{equation}\label{lpF} \begin{tikzcd}[execute at begin picture={\useasboundingbox (-4.5,-1) rectangle (4.5,1);},row sep=large,column sep=huge] & \B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n} \arrow[d,"F_{\tensor}"] \\ X \arrow[r,"\Az{A}"] \arrow[ur,dotted,"\Az{A}_{m}\times\Az{A}_{n}"] & \B\SU_{abmn}/\mu_{mn}. \end{tikzcd} \end{equation} Proposition \ref{SUN} yields a map $\mif{J}{\B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n}}{\B\SU_{N}}$, where $N$ is some positive integer. Observe that $F_{\tensor}$ factors through $\B\SU_{abmn}/\mu_{mn}\times\B\SU_{N}$, so we can write $F_{\tensor}$ as the composite of $(F_{\tensor},J)$ and the projection $\proj_{1}$ shown in diagram \eqref{FJ}.
\begin{equation}\label{FJ} \begin{tikzcd}[row sep=large,column sep=huge] \B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n} \arrow[r,"(F_{\tensor} \text{,} J)"] \arrow[d,"F_{\tensor}"] & \B\SU_{abmn}/\mu_{mn}\times\B\SU_{N} \\ \B\SU_{abmn}/\mu_{mn} \arrow[ur,leftarrow,"\proj_{1}"]& \end{tikzcd} \end{equation} Since $(F_{\tensor},J)$ is $(2am+1)$-connected and $\dim(X)<2am+1$, Whitehead's theorem implies that \[ (F_{\tensor},J)_{\#}:[X, \B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n}] \rightarrow [X,\B\SU_{abmn}/\mu_{mn}\times\B\SU_{N}] \] is a bijection \cite{SpaAT2012}*{Corollary 7.6.23}. Let $s$ denote a section of $\proj_{1}$. The bijectivity of $(F_{\tensor},J)_{\#}$ implies that $s\circ \Az{A}$ has a unique preimage $\Az{A}_{m}\times\Az{A}_{n}:X \rightarrow \B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n}$ such that $(F_{\tensor},J)\circ (\Az{A}_{m}\times\Az{A}_{n})\simeq s\circ \Az{A}$. Commutativity of diagram \eqref{lpF} then follows from commutativity of diagram \eqref{FJ}. \end{proof} \begin{rem} If $\dim(X)=2am+1$, the map $(F_{\tensor},J)_{\#}$ is only a surjection. Then $\Az{A}_{m}\times\Az{A}_{n}$ still exists, although it need not be unique, even up to homotopy. \end{rem} \subsection{Factorization through \texorpdfstring{$f_{\tensor}: \B\PU_{am}\times\B\PU_{bn} \rightarrow \B\PU_{abmn}$}{f:BPUam x BPUbn-->BPUabmn}} \begin{thm}\label{abpq} Let $a$, $b$, $m$ and $n$ be positive integers such that $ma$ and $nb$ are relatively prime and $am<bn$. Let $X$ be a CW complex such that $\dim(X)\leq 2am$. If $\Az{A}$ is a topological Azumaya algebra of degree $abmn$ such that $\cl(\Az{A})$ has period $mn$, then there exist topological Azumaya algebras $\Az{A}_{m}$ and $\Az{A}_{n}$ of degrees $am$ and $bn$, respectively, such that $\per(\cl(\Az{A}_{m}))=m$, $\per(\cl(\Az{A}_{n}))=n$ and $\Az{A}\iso \Az{A}_{m}\tensor\Az{A}_{n}$.
\end{thm} \begin{proof} In this case we want to solve the lifting problem shown in diagram \eqref{MLP}, up to homotopy, with $\per(\cl(\Az{A}_{m}))=m$ and $\per(\cl(\Az{A}_{n}))=n$. \begin{equation}\label{MLP} \begin{tikzcd}[row sep=large, column sep=huge] & \B\PU_{am}\times\B\PU_{bn} \arrow[d,"f_{\tensor}"] \\ X \arrow[r,"\Az{A}"] \arrow[ur,dotted,"\Az{A}_{m}\times\Az{A}_{n}"] & \B\PU_{abmn}. \end{tikzcd} \end{equation} By \cite{Geight2019}*{Proposition 4.3} there exists a map $\Az{A}':X \rightarrow \B\SU_{abmn}/\mu_{mn}$ such that $\per(\cl(\Az{A}'))=\per(\cl(\Az{A}))=mn$. Then, by Theorem \ref{abpq/pq}, there exists a map $\mif{\Az{A}'_{m}\times\Az{A}'_{n}}{X}{\B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n}}$ lifting $\Az{A}'$ along $F_{\tensor}$. \begin{claim}\label{tecRBn} $\per(\cl(\Az{A}'_{m}))=m$ and $\per(\cl(\Az{A}'_{n}))=n$. \end{claim} By Claim \ref{tecRBn} and \cite{Geight2019}*{Proposition 4.3} there exists a map $\mif{\Az{A}_{m}\times\Az{A}_{n}}{X}{\B\PU_{am}\times\B\PU_{bn}}$ such that $\per(\cl(\Az{A}_{m}))=m$ and $\per(\cl(\Az{A}_{n}))=n$. It remains to show that diagram \eqref{MLP} commutes. Consider the diagram below \begin{equation}\label{ss} \begin{tikzcd}[row sep=normal,column sep=huge] \B\SU_{am}/\mu_{m}\times\B\SU_{bn}/\mu_{n} \arrow[rd,leftarrow,"\Az{A}'_{m}\times\Az{A}'_{n}"] \arrow[rr] & &\B\PU_{am}\times\B\PU_{bn} \arrow[dd,"f_{\tensor}"]\\ & X \arrow[ru,"\Az{A}_{m}\times\Az{A}_{n}"] \arrow[rd,"\Az{A}"]& \\ \B\SU_{abmn}/\mu_{mn} \arrow[uu,leftarrow,"F_{\tensor}"] \arrow[ru,leftarrow,"\Az{A}'"] \arrow[rr] & & \B\PU_{abmn} \end{tikzcd} \end{equation} Observe that the square, as well as the top, bottom and left triangles, of diagram \eqref{ss} commute. Hence, the right triangle commutes. \end{proof} \begin{proof}[Proof of Claim \ref{tecRBn}] Let $\red_{m}:\Z/mn \rightarrow \Z/m$ be the reduction homomorphism, which induces a map $\red_{m}^{*}:\K(\Z/mn,2) \rightarrow \K(\Z/m,2)$. Observe that $\chi'_{m}=\red_{m}^{*}\chi'_{mn}$.
Moreover, there is a commutative diagram \begin{equation*} \begin{tikzcd}[row sep=normal,column sep=huge] \K(\Z/mn,2) \arrow[r,"\red_{m}^{*}"] \arrow[d,"\RBn_{mn}"]& \K(\Z/m,2) \arrow[d,"\RBn_{m}"]\\ \K(\Z,3) \arrow[r,"\times n"] & \K(\Z,3). \end{tikzcd} \end{equation*} Thus, $\cl(\Az{A}'_{m})=n\cl(\Az{A}')$. Given that $\cl(\Az{A}')$ has period $mn$, we have $\per(\cl(\Az{A}'_{m}))=m$. A similar argument shows $\per(\cl(\Az{A}'_{n}))=n$. \end{proof} Theorem \ref{main} is a corollary of Theorem \ref{abpq}. \begin{rem} The topological Azumaya algebras $\Az{A}_{m}$ and $\Az{A}_{n}$ in Theorem \ref{main} are not unique up to isomorphism. In order to see this, we consider the relative Postnikov tower of the map $f_{\tensor}$: \begin{equation}\label{Pt} \begin{tikzcd}[row sep=small] & \B\PU_{m}\times\B\PU_{n} \arrow{d} &\\ & \vdots \arrow{d} &\\ F_{5} \arrow[r] & Y[5] \arrow{d} \\ F_{4} \arrow[r] & Y[4] \arrow[d] \arrow[r,"k_{4}"] & \K(\pi_{5}F_{5},6)\\ & \B\PU_{mn} \arrow[r,"k_{3}"]& \K(\pi_{4}F_{4},5) \end{tikzcd} \end{equation} where $k_{i-1}:Y[i-1]\rightarrow \K\bigl(\pi_{i}F_{i},i+1\bigr)$ is the $k$-invariant that classifies the fiber sequence $F_{i}\rightarrow Y[i] \rightarrow Y[i-1]$. Let $X$ be a CW complex with $\dim(X)\leq 6$. Let $m$ and $n$ be as in the hypothesis of Theorem \ref{main}, with $m>3$. Let $\Az{A}$ be a topological Azumaya algebra of degree $mn$. Since $m>3$, we can use the properties of the relative Postnikov tower of $f_{\tensor}$ to simplify the tower in diagram \eqref{Pt}. For instance, there are homotopy equivalences $Y[4]\simeq Y[5]$ and $Y[6]\simeq Y[7]$, and the homotopy groups of the homotopy fibers $F_{4}$ and $F_{6}$ are both isomorphic to the integers.
\begin{equation}\label{Pt2} \begin{tikzcd}[row sep=small] &&\B\PU_{m}\times\B\PU_{n} \arrow{d} & \\ && \vdots \arrow{d} & \\ && Y[4]\simeq\B\PU_{mn}\times \K(\Z,4) \arrow{d} \arrow[r,"k_{4}"] & \K(\Z,7)\\ X \arrow[rr,"\Az{A}"] \arrow[rru,bend left=10,"{(\Az{A},\xi)}" near end] \arrow[rruuu,bend left,"\Az{A}_{m}\times\Az{A}_{n}"] & &\B\PU_{mn} \arrow[r,"k_{3}"] & \K(\Z,5)\\ \end{tikzcd} \end{equation} Observe that the $k$-invariant $k_{3}$ is nullhomotopic because $\Hy^{5}(\B\PU_{mn};\Z)$ is trivial. Hence there is no obstruction to lifting $\Az{A}$ to $Y[4]$. Similarly, we can lift the identity map $\mathrm{id}_{\B\PU_{mn}}$ to $Y[4]$; in this case we obtain the splitting $Y[4]\simeq \B\PU_{mn}\times\K(\Z,4)$. Then the lifting of $\Az{A}$ takes the form $(\Az{A},\xi):X\rightarrow \B\PU_{mn}\times\K(\Z,4)$. The cohomology groups of $X$ vanish in all degrees greater than $6$, since $\dim(X)\leq 6$. Thus $(\Az{A},\xi)$ can be lifted up the Postnikov tower to $\B\PU_{m}\times\B\PU_{n}$; see diagram \eqref{Pt2}. This proves that $\Az{A}$ can be decomposed as $\Az{A}_{m}\tensor\Az{A}_{n}$. The lifting $(\Az{A},\xi)$ is not necessarily unique. In fact, every cohomology class $\xi \in \Hy^{4}(X;\Z)$ gives rise to a lifting $(\Az{A},\xi)$. \end{rem} \newpage \bibliographystyle{alpha}
https://arxiv.org/abs/1709.00404
Unbiased Hamiltonian Monte Carlo with couplings
We propose a methodology to parallelize Hamiltonian Monte Carlo estimators. Our approach constructs a pair of Hamiltonian Monte Carlo chains that are coupled in such a way that they meet exactly after some random number of iterations. These chains can then be combined so that resulting estimators are unbiased. This allows us to produce independent replicates in parallel and average them to obtain estimators that are consistent in the limit of the number of replicates, instead of the usual limit of the number of Markov chain iterations. We investigate the scalability of our coupling in high dimensions on a toy example. The choice of algorithmic parameters and the efficiency of our proposed methodology are then illustrated on a logistic regression with 300 covariates, and a log-Gaussian Cox point processes model with low to fine grained discretizations.
\section{Introduction \label{sec:Context-and-goal}} \subsection{Parallel computation with Hamiltonian Monte Carlo \label{subsec:Parallelizing-Hamiltonian-Monte}} Hamiltonian Monte Carlo is a Markov chain Monte Carlo method to approximate integrals with respect to a target probability distribution $\pi$ on $\mathbb{R}^{d}$. Originally proposed by \citet{Duane:1987} in the physics literature, it was later introduced in statistics by \citet{Neal:1993} and is now widely adopted as a standard sampling tool \citep{brooks2011handbook,lelievre2012}. Various aspects of its theoretical properties have been studied: see \citet{betancourtetal:2017} and \citet{betancourt:2017} for its geometric properties, \citet{Livingstone:2016} and \citet{durmus2017convergence} for ergodicity results, and \citet{beskos2013optimal}, \citet{Mangoubi:2017} and \citet{bourabee:2018} for scaling results with respect to the dimension $d$. These results suggest that Hamiltonian Monte Carlo compares favorably to other Markov chain Monte Carlo algorithms, such as random walk Metropolis--Hastings and Metropolis-adjusted Langevin algorithms, in high dimensions. In practice, Hamiltonian Monte Carlo is at the core of the No-U-Turn sampler \citep{hoffman2014no}, which is implemented in the software Stan \citep{carpenter2016stan}. If one could initialize from the target distribution, usual estimators based on any Markov chain Monte Carlo algorithm would be unbiased, and one could simply average over independent chains \citep{rosenthal2000parallel}. Except in certain applications where this can be achieved with perfect simulation methods \citep{casella:lavine:robert:2001,huber2016perfect}, Markov chain Monte Carlo estimators are only consistent in the limit of the number of iterations. Algorithms that rely on such asymptotics face the risk of becoming obsolete if computational power continues to increase through the number of available processors rather than through clock speed.
Several methods have been proposed to address this limitation with varying generality \citep{mykland1995regeneration,neal2002circularly,glynn2014exact}. Our approach builds upon recent work by \citet{jacob2017unbiased}, which introduces unbiased estimators based on Metropolis--Hastings algorithms and Gibbs samplers. The present article describes how to design unbiased estimators for Hamiltonian Monte Carlo and some of its variants \citep{girolami2011riemann}. The proposed methodology is widely applicable and involves a simple coupling between a pair of Hamiltonian Monte Carlo chains. Coupled chains are run for a random but almost surely finite number of iterations, and combined in such a way that the resulting estimators are unbiased. One can produce independent copies of these estimators in parallel and average them to obtain consistent approximations in the limit of the number of replicates. This also yields confidence intervals that are valid in the limit of the number of replicates, through the central limit theorem; see also \citet{Glynn1991} for central limit theorems parametrized by the number of processors or the time budget. We begin by introducing some preliminary notation in Section \ref{subsec:Notation} and recapitulating the unbiased estimation framework of \citet{jacob2017unbiased} in Section \ref{subsec:Context:-unbiased-estimation}. \subsection{Notation\label{subsec:Notation}} Given a sequence $(x_n)_{n\geq0}$ and integers $k< m$, we use the convention that $\sum_{n=m}^k x_n=0$. The set of natural numbers is denoted by $\mathbb{N}$ and the set of non-negative real numbers by $\mathbb{R}_{+}$. The $d$-dimensional vector of zeros is denoted by $0_{d}$ and the $d\times d$ identity matrix by $I_d$. The Euclidean norm of a vector $x\in\mathbb{R}^{d}$ is written as $|x|=(\sum_{i=1}^{d}x_{i}^{2})^{1/2}$. Given a subset $A\subseteq\varOmega$, the indicator function $\mathbb{I}_A:\varOmega\rightarrow\{0,1\}$ is defined as $\mathbb{I}_A(x)=1$ if $x\in A$, and $0$ if $x\in\varOmega\setminus A$.
For a smooth function $f:\mathbb{R}^d\rightarrow\mathbb{R}$, we denote its gradient by $\nabla f:\mathbb{R}^d\rightarrow\mathbb{R}^d$ and its Hessian by $\nabla^2f:\mathbb{R}^d\rightarrow\mathbb{R}^{d\times d}$. The gradients of a function $(x,y)\mapsto f(x,y)$ with respect to the variables $x$ and $y$ are denoted by $\nabla_{x}f$ and $\nabla_{y}f$, respectively. Given functions $f:\mathbb{R}^n\rightarrow\mathbb{R}^m$ and $g:\mathbb{R}^d\rightarrow\mathbb{R}^n$, we define the composition $f\circ g:\mathbb{R}^d\rightarrow\mathbb{R}^m$ as $(f\circ g)(x)=f\{g(x)\}$ for all $x\in\mathbb{R}^d$. The Borel $\sigma$-algebra of $\mathbb{R}^{d}$ is denoted by $\mathcal{B}(\mathbb{R}^{d})$; on the product space $\mathbb{R}^d\times\mathbb{R}^d$, $\mathcal{B}(\mathbb{R}^{d})\times\mathcal{B}(\mathbb{R}^{d})$ denotes the product $\sigma$-algebra. The Gaussian distribution on $\mathbb{R}^d$ with mean vector $\mu$ and covariance matrix $\Sigma$ is denoted by $\mathcal{N}(\mu,\Sigma)$, and its density by $x\mapsto\mathcal{N}(x;\mu,\Sigma)$. The uniform distribution on $[0,1]$ is denoted as $\mathcal{U}[0,1]$. We use the shorthand $X\sim\eta$ to refer to a random variable with distribution $\eta$. On a measurable space $(\varOmega,\mathcal{F})$, given a measurable function $\varphi:\varOmega\rightarrow\mathbb{R}$, a probability measure $\eta$, and a Markov transition kernel $M$, we define the integral $\eta(\varphi)=\int_{\varOmega}\varphi(x)\eta(dx)$ and the function $M(\varphi)(x)=\int_{\varOmega}\varphi(y)M(x,dy)$ for $x\in\varOmega$. \subsection{Unbiased estimation with couplings \label{subsec:Context:-unbiased-estimation}} Suppose $h:\mathbb{R}^d\rightarrow\mathbb{R}$ is a measurable function of interest and consider the task of approximating the integral $\pi(h)=\int h(x)\pi(dx)<\infty$.
Following \citet{glynn2014exact} and \citet{jacob2017unbiased}, we will construct a pair of coupled Markov chains $X=(X_{n})_{n\geq0}$ and $Y=(Y_{n})_{n\geq0}$ with the same marginal law, associated with an initial distribution $\pi_{0}$ and a $\pi$-invariant Markov transition kernel $K$ defined on $\{\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d)\}$. To do so, we introduce a Markov transition kernel $\bar{K}$ on $\{\mathbb{R}^d\times\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d)\times\mathcal{B}(\mathbb{R}^d)\}$ that admits $K$ as its marginals, i.e. $\bar{K}\{(x,y),A\times\mathbb{R}^d\}=K(x,A)$ and $\bar{K}\{(x,y),\mathbb{R}^d\times A\}=K(y,A)$ for all $x,y\in\mathbb{R}^d$ and $A\in\mathcal{B}(\mathbb{R}^d)$. After initializing $(X_0,Y_0)\sim\bar{\pi}_0$ with a coupling that has $\pi_0$ as its marginals, we then simulate $X_1\sim K(X_0,\cdot)$ and $(X_{n+1},Y_n)\sim\bar{K}\{(X_n,Y_{n-1}),\cdot\}$ for every integer $n\geq 1$. We will write pr to denote the law of the coupled chain $(X_{n},Y_{n})_{n\geq0}$, and $E$ to denote expectation with respect to pr. We now consider the following assumptions. \begin{assumption}[Convergence of marginal chain]\label{ass:convergence} As $n\to\infty$, we have $E\{h(X_{n})\}\to\pi(h)$. Furthermore, there exist $\kappa_1>0$ and $C_1<\infty$ such that $E\{h(X_{n})^{2+\kappa_1}\}<C_1$ for every integer $n\geq0$. \end{assumption} \begin{assumption}[Tail of meeting time]\label{ass:tail} The meeting time $\tau=\inf\{ n\geq1:\;X_{n}=Y_{n-1}\}$ satisfies a geometric tail condition of the form pr$(\tau>n)\leq C_2\kappa_2^{n}$ for some constants $C_2\in\mathbb{R}_+,\kappa_2\in(0,1)$ and every integer $n\geq0$. \end{assumption} \begin{assumption}[Faithfulness]\label{ass:faithfulness} The coupled chains are faithful \citep{rosenthal1997faithful}, i.e. $X_{n}=Y_{n-1}$ for every integer $n\geq\tau$.
\end{assumption} Under these assumptions, the random variable defined as \begin{align} H_{k}(X,Y)=h(X_{k})+\sum_{n=k+1}^{\tau-1}\left\{ h(X_{n})-h(Y_{n-1})\right\}\label{eq:Hk} \end{align} for any integer $k\geq0$, is an unbiased estimator of $\pi(h)$ with finite variance \citep[Proposition 3.1]{jacob2017unbiased}. Computation of (\ref{eq:Hk}) can be performed with $\tau-1$ applications of $\bar{K}$ and $\max(1,k+1-\tau)$ applications of $K$; thus the compute cost has a finite expectation under Assumption \ref{ass:tail}. The first term, $h(X_{k})$, is in general biased since the chain $(X_{n})_{n\geq0}$ might not have reached stationarity by iteration $k$. The second term acts as a bias correction and is equal to zero when $k\geq\tau-1$. As the estimators $H_{k}(X,Y)$, for various values of $k$, can be computed from a single realization of the coupled chains, this prompts the definition of a time-averaged estimator $H_{k:m}(X,Y)=(m-k+1)^{-1}\sum_{n=k}^mH_n(X,Y)$ for integers $k\leq m$. The latter inherits the unbiasedness and finite variance properties, and can be rewritten as \begin{align} H_{k:m}(X,Y)= M_{k:m}(X) + \sum_{n=k+1}^{\tau-1}\min\left(1,\frac{n-k}{m-k+1}\right)\left\{ h(X_{n})-h(Y_{n-1})\right\}\label{eq:Hkm} \end{align} where $M_{k:m}(X)=(m-k+1)^{-1}\sum_{n=k}^{m}h(X_{n})$ can be viewed as the usual Markov chain estimator with $m$ iterations and a burn-in period of $k-1$. As before, the second term plays the role of bias correction and is equal to zero when $k\geq\tau-1$. Hence if the value of $k$ is sufficiently large, we can expect the variance of $H_{k:m}(X,Y)$ to be close to that of $M_{k:m}(X)$. Moreover, the cost of computing (\ref{eq:Hkm}), which involves $\tau-1$ applications of $\bar{K}$ and $\max(1,m+1-\tau)$ applications of $K$, becomes comparable to $m$ iterations under $K$ for sufficiently large $m$. 
Therefore we can expect the asymptotic inefficiency of $H_{k:m}(X,Y)$ in the limit of our computational budget, given by the product of the expected compute cost and the variance of $H_{k:m}(X,Y)$ \citep{glynn1992asymptotic}, to approach the asymptotic variance of the underlying Markov chain as $m$ increases. We refer to \citet[Section 3.1]{jacob2017unbiased} for a more detailed discussion on the impact of $k$ and $m$, and recall their proposed guideline of having $k$ as a large quantile of the meeting time $\tau$ and $m$ as a large multiple of $k$. In practice, our proposed methodology involves simulating $R$ pairs of coupled Markov chains $(X^{(r)},Y^{(r)})=(X_{n}^{(r)},Y_{n}^{(r)})_{n\geq0},r=1,\ldots,R$ completely in parallel, with each pair taking a random compute time depending on their meeting time. As this produces $R$ independent replicates $H_{k:m}(X^{(r)},Y^{(r)}), r=1,\ldots,R$ of the unbiased estimator (\ref{eq:Hkm}), one can compute the average $R^{-1}\sum_{r=1}^RH_{k:m}(X^{(r)},Y^{(r)})$ to approximate $\pi(h)$. By appealing to the usual central limit theorem for independent and identically distributed random variables, confidence intervals that are justified as $R\rightarrow\infty$ can also be constructed. Explicit constructions of coupled chains satisfying Assumptions \ref{ass:convergence}--\ref{ass:faithfulness} for Markov kernels $K$ that are defined by Metropolis--Hastings algorithms and Gibbs samplers are given in \citet[Section 4]{jacob2017unbiased} and \citet{jacob_smoothing2018}. The focus of this article is to propose a coupling strategy that is tailored for Hamiltonian Monte Carlo chains, so as to enable the use of unbiased estimators (\ref{eq:Hk})--(\ref{eq:Hkm}). We will illustrate in Section \ref{sec:Numerical-illustrations} that this approach applies to realistic settings and retains the benefits of Hamiltonian Monte Carlo in terms of scaling with dimension. 
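To make the combination of coupled chains concrete, the estimators (\ref{eq:Hk}) and (\ref{eq:Hkm}) can be computed as follows once a pair of chains and their meeting time have been realized. This is a minimal Python sketch of ours, not part of the original methodology's software: the chains below are placeholder arrays satisfying the faithfulness property, rather than draws from an actual coupled kernel.

```python
import numpy as np

def H_k(h, X, Y, tau, k):
    # Unbiased estimator of pi(h): h(X_k) plus a bias correction term,
    # which vanishes when k >= tau - 1.
    return h(X[k]) + sum(h(X[n]) - h(Y[n - 1]) for n in range(k + 1, tau))

def H_km(h, X, Y, tau, k, m):
    # Time-averaged estimator: the usual MCMC average over iterations
    # k, ..., m plus a weighted bias correction term.
    mcmc = np.mean([h(X[n]) for n in range(k, m + 1)])
    bc = sum(min(1.0, (n - k) / (m - k + 1)) * (h(X[n]) - h(Y[n - 1]))
             for n in range(k + 1, tau))
    return mcmc + bc
```

Here `X` holds $X_0,\ldots,X_m$, `Y` holds $Y_0,\ldots,Y_{m-1}$, and `tau` is the meeting time; faithfulness ensures both correction sums involve at most $\tau-1$ terms. Averaging `H_km` over independent replicates then approximates $\pi(h)$ in the limit of the number of replicates.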
\section{Hamiltonian dynamics \label{sec:Hamilton's-equations}} \subsection{Hamiltonian flows \label{subsec:Hamiltonian-dynamics}} Suppose that the target distribution has the form $\pi(dq)\propto\exp\{-U(q)\}dq$, where the potential function $U:\mathbb{R}^{d}\rightarrow\mathbb{R}_{+}$ satisfies the following assumptions. \begin{assumption}[Regularity and growth of potential]\label{ass:potential} The potential $U$ is twice continuously differentiable and its gradient $\nabla U:\mathbb{R}^d\rightarrow\mathbb{R}^d$ is globally $\beta$-Lipschitz, i.e. there exists $\beta>0$ such that $|\nabla U(q)-\nabla U(q')|\leq\beta|q-q'|$ for all $q,q'\in\mathbb{R}^{d}$. \end{assumption} These assumptions imply at most quadratic growth of the potential, or equivalently that the tails of the target distribution are no lighter than Gaussian. We now introduce Hamiltonian flows on the phase space $\mathbb{R}^{d}\times\mathbb{R}^d$, which consists of position variables $q\in\mathbb{R}^{d}$ and momentum variables $p\in\mathbb{R}^{d}$. We will be concerned with a Hamiltonian function $\mathcal{E}:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}_{+}$ of the form $\mathcal{E}(q,p)=U(q)+|p|^{2}/2$. We note the use of the identity mass matrix here and will rely on preconditioning in Section \ref{subsec:Cox-Process} to incorporate curvature properties of $\pi$. The time evolution of a particle $\{q(t),p(t)\}_{t\in\mathbb{R}_{+}}$ under Hamiltonian dynamics is described by the ordinary differential equations \begin{align}\label{eq:hamilton_ode} \frac{d}{dt}q(t) &= \nabla_{p}\mathcal{E}\{q(t),p(t)\} = p(t),\quad \frac{d}{dt}p(t) = -\nabla_{q}\mathcal{E}\{q(t),p(t)\} = -\nabla U\{q(t)\}. \end{align} Under Assumption \ref{ass:potential}, (\ref{eq:hamilton_ode}) with an initial condition $\{q(0),p(0)\}=(q_{0},p_{0})\in\mathbb{R}^{d}\times\mathbb{R}^{d}$ admits a unique solution globally on $\mathbb{R}_{+}$ \citep[p. 14]{lelievre2012}. 
Therefore the flow map $\Phi_{t}(q_{0},p_{0})=\{q(t),p(t)\}$ is well-defined for any $t\in\mathbb{R}_{+}$, and we will write its projection onto the position and momentum coordinates as $\Phi_{t}^{\circ}(q_{0},p_{0})=q(t)$ and $\Phi_{t}^{*}(q_{0},p_{0})=p(t)$ respectively. It is worth recalling that Hamiltonian flows have the following properties. \begin{property}[Reversibility]\label{property:reversibility} For any $t\in\mathbb{R}_{+}$, the inverse flow map satisfies $\Phi_{t}^{-1}=M\circ\Phi_{t}\circ M$, where $M(q,p)=(q,-p)$ denotes momentum reversal. \end{property} \begin{property}[Energy conservation]\label{property:energy} The Hamiltonian function satisfies $\mathcal{E}\circ\Phi_{t}=\mathcal{E}$ for any $t\in\mathbb{R}_{+}$. \end{property} \begin{property}[Volume preservation]\label{property:volume} For any $t\in\mathbb{R}_{+}$ and $A\in\mathcal{B}(\mathbb{R}^{2d})$, we have $\mathrm{Leb}_{2d}\{\Phi_{t}(A)\}=\mathrm{Leb}_{2d}(A)$, where $\mathrm{Leb}_{2d}$ denotes the Lebesgue measure on $\mathbb{R}^{2d}$. \end{property} These properties imply that the extended target distribution on phase space $\tilde{\pi}(dq,dp)\propto\exp\{-\mathcal{E}(q,p)\}dqdp$ is invariant under the Markov semi-group induced by the flow, i.e. for any $t\in\mathbb{R}_{+}$, the pushforward measure $\Phi_{t}\sharp\tilde{\pi}$, defined as $\Phi_{t}\sharp\tilde{\pi}(A)=\tilde{\pi}\{\Phi_{t}^{-1}(A)\}$ for $A\in\mathcal{B}(\mathbb{R}^{2d})$, is equal to $\tilde{\pi}$. \subsection{Coupled Hamiltonian dynamics \label{subsec:Coupled-Hamiltonian-dynamics}} We now consider the coupling of two particles $\{q^{i}(t),p^{i}(t)\}_{t\in\mathbb{R}_{+}},\ (i=1,2)$ evolving under (\ref{eq:hamilton_ode}) with initial conditions $\{q^{i}(0),p^{i}(0)\}=(q_{0}^{i},p_{0}^{i}),\ (i=1,2)$. We first draw some insights from a Gaussian example. \begin{example} Let $\pi$ be a Gaussian distribution on $\mathbb{R}$ with mean $\mu\in\mathbb{R}$ and variance $\sigma^{2}>0$. 
In this case, we have $U(q)=(q-\mu)^{2}/(2\sigma^{2}), \nabla U(q)=(q-\mu)/\sigma^{2}$ and the solution of (\ref{eq:hamilton_ode}) is \begin{align*} \Phi_{t}(q_{0},p_{0})=\left(\begin{array}{c} \mu+(q_{0}-\mu)\cos\left(\frac{t}{\sigma}\right)+\sigma p_{0}\sin\left(\frac{t}{\sigma}\right)\\ p_{0}\cos\left(\frac{t}{\sigma}\right)-\frac{1}{\sigma}(q_{0}-\mu)\sin\left(\frac{t}{\sigma}\right) \end{array}\right). \end{align*} Hence the difference between particle positions is \begin{align*} q^{1}(t)-q^{2}(t) &= (q_{0}^{1}-q_{0}^{2})\cos\left(\frac{t}{\sigma}\right)+\sigma(p_{0}^{1}-p_{0}^{2})\sin\left(\frac{t}{\sigma}\right). \end{align*} If we set $p_{0}^{1}=p_{0}^{2}$, then $|q^{1}(t)-q^{2}(t)|=|\cos(t/\sigma)|\,|q_{0}^{1}-q_{0}^{2}|$, so for any non-negative integer $n$, the particles meet exactly whenever $t=(2n+1)\pi\sigma/2$, and contraction occurs for any $t\neq\pi n\sigma$. \end{example} This example motivates a coupling that simply assigns particles the same initial momentum. Moreover, it also reveals that certain trajectory lengths will result in larger contraction than others. We now examine the utility of this approach more generally. Define $\Delta(t)=q^{1}(t)-q^{2}(t)$ as the difference between particle locations and note that \begin{align*} \frac{1}{2}\frac{d}{dt}|\Delta(t)|^{2} = \Delta(t)^{\top}\left\lbrace p^{1}(t)-p^{2}(t)\right\rbrace. \end{align*} Therefore by imposing that $p^{1}(0)=p^{2}(0)$, the function $t\mapsto|\Delta(t)|$ admits a stationary point at time $t=0$. This is geometrically intuitive as the trajectories at time zero are parallel to one another for an infinitesimally small amount of time. To characterize this stationary point, we compute \begin{align*} \frac{1}{2}\frac{d^{2}}{dt^{2}}|\Delta(t)|^{2}=-\Delta(t)^{\top}\left[\nabla U\{q^{1}(t)\}-\nabla U\{q^{2}(t)\}\right] + |p^{1}(t)-p^{2}(t)|^{2} \end{align*} and consider the following assumption. 
\begin{assumption}[Local convexity of potential]\label{ass:convexity} There exists a compact set $S\in\mathcal{B}(\mathbb{R}^{d})$, with positive Lebesgue measure, such that the restriction of $U$ to $S$ is $\alpha$-strongly convex, i.e. there exists $\alpha>0$ such that $\left(q-q'\right)^{\top}\left\lbrace\nabla U(q)-\nabla U(q')\right\rbrace\geq\alpha|q-q'|^{2}$ for all $q,q'\in S$. \end{assumption} Under Assumption \ref{ass:convexity}, we have \begin{align*} \frac{1}{2}\frac{d^{2}}{dt^{2}}|\Delta(0)|^{2}\leq-\alpha|\Delta(0)|^{2}+|p^{1}(0)-p^{2}(0)|^{2} \end{align*} if $q_{0}^{1},q_{0}^{2}\in S$ and $q_{0}^{1}\neq q_{0}^{2}$. Therefore by taking $p^{1}(0)=p^{2}(0)$, it follows from the second derivative test that $t=0$ is a strict local maximum point. Continuity of $t\mapsto|\Delta(t)|^{2}$ implies that there exists a trajectory length $T>0$ such that for any $t\in(0,T]$, there exists $\rho\in[0,1)$ satisfying \begin{align}\label{eq:exact_contract_small_t} |\Phi_{t}^{\circ}(q_{0}^{1},p_{0})-\Phi_{t}^{\circ}(q_{0}^{2},p_{0})|\leq\rho|q_{0}^{1}-q_{0}^{2}|. \end{align} We note the dependence of $T$ on the initial positions $q_{0}^{1}, q_{0}^{2}$ and momentum $p_{0}$. We now strengthen the above claim. \begin{lemma} \label{lem:exact_contraction}Suppose that the potential $U$ satisfies Assumptions \ref{ass:potential}--\ref{ass:convexity}. For any compact set $A\subset S\times S\times\mathbb{R}^{d}$, there exists a trajectory length $T>0$ such that for any $t\in(0,T]$, there exists $\rho\in[0,1)$ satisfying (\ref{eq:exact_contract_small_t}) for all $(q_{0}^{1},q_{0}^{2},p_{0})\in A$. \end{lemma} Although the qualitative result in Lemma \ref{lem:exact_contraction} is sufficient for our purposes, we note that more quantitative results of this type have been established recently by \citet[Theorem 6]{Mangoubi:2017} and \citet[Theorem 2.1]{bourabee:2018} to study the mixing time of Hamiltonian Monte Carlo. 
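The behaviour of the common-momentum coupling can be checked explicitly in the one-dimensional Gaussian example, where the flow is available in closed form. A minimal Python sketch, with function names of our own choosing:

```python
import math

def gaussian_flow(q0, p0, t, mu=0.0, sigma=1.0):
    # Exact Hamiltonian flow for the potential U(q) = (q - mu)^2 / (2 sigma^2).
    c, s = math.cos(t / sigma), math.sin(t / sigma)
    q = mu + (q0 - mu) * c + sigma * p0 * s
    p = p0 * c - (q0 - mu) * s / sigma
    return q, p
```

With a common initial momentum, the distance between two positions evolves as $|\cos(t/\sigma)|$ times the initial distance, so the particles meet exactly at $t=\pi\sigma/2$ and the flow is contractive for any $t$ that is not an integer multiple of $\pi\sigma$; the Hamiltonian is conserved exactly along the flow.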
The preceding results show that the trajectory length $T$ yielding contraction of the coupled system and the corresponding contraction rate $\rho$ do not depend on $d$ but only on the constants $\alpha$ and $\beta$ of Assumptions \ref{ass:potential}--\ref{ass:convexity}. This suggests that such a coupling strategy can be effective in high dimension as long as the Hessian of $U$ is sufficiently well-conditioned. \section{Coupled Hamiltonian Monte Carlo\label{sec:Hamiltonian-Monte-Carlo}} \subsection{Leap-frog integrator \label{subsec:Leap-frog-integrator}} As the flow defined by (\ref{eq:hamilton_ode}) is typically intractable, time discretizations are required. The leap-frog symplectic integrator is a standard choice as it preserves Properties \ref{property:reversibility} and \ref{property:volume}. Given a step size $\varepsilon>0$ and a number of leap-frog steps $L\in\mathbb{N}$, this scheme initializes at $(q_{0},p_{0})\in\mathbb{R}^{d}\times\mathbb{R}^{d}$ and iterates \begin{align*} p_{\ell+1/2} & =p_{\ell}-\frac{\varepsilon}{2}\nabla U(q_{\ell}),\quad q_{\ell+1} =q_{\ell}+\varepsilon p_{\ell+1/2}, \quad p_{\ell+1} =p_{\ell+1/2}-\frac{\varepsilon}{2}\nabla U(q_{\ell+1}), \end{align*} for $\ell=0,\ldots,L-1$. We write the leap-frog iteration as $\hat{\Phi}_{\varepsilon}(q_{\ell},p_{\ell})=(q_{\ell+1},p_{\ell+1})$ and the corresponding approximation of the flow as $\hat{\Phi}_{\varepsilon,\ell}(q_{0},p_{0})=(q_{\ell},p_{\ell})$ for $\ell=0,\ldots,L$. As before, we denote by $\hat{\Phi}_{\varepsilon,\ell}^{\circ}(q_{0},p_{0})=q_{\ell}$ and $\hat{\Phi}_{\varepsilon,\ell}^{*}(q_{0},p_{0})=p_{\ell}$ the projections onto the position and momentum coordinates respectively. It can be established that the leap-frog scheme is of order two \citep[Theorem 3.4]{hairer:2005}, i.e. 
for sufficiently small $\varepsilon$, we have \begin{align} |\hat{\Phi}_{\varepsilon,L}(q_{0},p_{0})-\Phi_{\varepsilon L}(q_{0},p_{0})|& \leq C_{3}(q_0,p_0,L)\varepsilon^{2},\label{eq:leapfrog_traj_error} \\ |\mathcal{E}\{\hat{\Phi}_{\varepsilon,L}(q_{0},p_{0})\}-\mathcal{E}(q_{0},p_{0})| & \leq C_{4}(q_0,p_0,L)\varepsilon^{2},\label{eq:leapfrog_hamiltonian_error} \end{align} for some positive constants $C_{3}$ and $C_{4}$ that depend continuously on the initial condition $(q_0,p_0)$ for any number of leap-frog iterations $L$. To simplify our exposition and focus on the proposed methods, we will assume throughout the article that (\ref{eq:leapfrog_traj_error})--(\ref{eq:leapfrog_hamiltonian_error}) hold. We refer to the book by \citet{hairer:2005} on geometric numerical integration and to the survey by \citet{bou2018geometric} for additional assumptions under which these error bounds hold. We now discuss how the above constants behave with dimension and integration length. Firstly, under the simplified setting of a target distribution with independent and identical marginals and appropriate growth conditions on the potential, the results of \citet[Propositions 5.3 and 5.4]{beskos2013optimal} indicate that these constants would scale as ${d}^{1/2}$. Hence if we scale the step size $\varepsilon$ as $d^{-1/4}$, as advocated by \citet{beskos2013optimal} in this setting, we can expect these errors to be stable in high dimensions. Secondly, while the constant associated with the pathwise error bound (\ref{eq:leapfrog_traj_error}) will typically grow exponentially with $L$ \citep[Section 2.2.3]{Leimkuhler:2005}, the constant of the Hamiltonian error bound (\ref{eq:leapfrog_hamiltonian_error}), on the other hand, can remain stable over exponentially long time intervals $\varepsilon L$ \citep[Theorem 8.1]{hairer:2005}. Although the Hamiltonian is not conserved exactly under time discretization, one can employ a Metropolis--Hastings correction as described in the following section. 
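For reference, the leap-frog recursion displayed above can be implemented in a few lines. The sketch below is our own illustration, using the standard Gaussian potential $U(q)=|q|^{2}/2$, whose exact flow is a rotation; all names and numerical values are assumptions made for the example. It also checks the second-order behaviour in (\ref{eq:leapfrog_traj_error}) numerically.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, L):
    # L leap-frog steps of size eps, mirroring the displayed recursion.
    for _ in range(L):
        p = p - 0.5 * eps * grad_U(q)   # half kick
        q = q + eps * p                 # drift
        p = p - 0.5 * eps * grad_U(q)   # half kick
    return q, p

grad_U = lambda q: q                    # U(q) = |q|^2 / 2
q0, p0 = np.array([1.0, -0.5]), np.array([0.3, 0.2])
T = 1.0                                 # total integration time eps * L
errors = []
for L in (10, 20, 40):
    q, _ = leapfrog(q0, p0, grad_U, T / L, L)
    q_exact = q0 * np.cos(T) + p0 * np.sin(T)   # exact flow at time T
    errors.append(np.linalg.norm(q - q_exact))
# halving eps reduces the endpoint error roughly fourfold (order two)
assert errors[0] / errors[1] > 3.5 and errors[1] / errors[2] > 3.5
```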
\subsection{Coupled Hamiltonian Monte Carlo kernel \label{subsec:Coupled-HMC-kernel}} Hamiltonian Monte Carlo \citep{Duane:1987,Neal:1993} is a Metropolis--Hastings algorithm that targets $\pi$ using time discretized Hamiltonian dynamics as proposals. In view of Section \ref{subsec:Coupled-Hamiltonian-dynamics}, we consider coupling two Hamiltonian Monte Carlo chains $(Q_{n}^{1},Q_{n}^{2})_{n\geq0}$ by initializing $(Q_0^1,Q_0^2) \sim \bar{\pi}_0$ and evolving the chains jointly according to the following procedure. \begin{algorithm} \caption{Coupled Hamiltonian Monte Carlo step given $(Q_{n-1}^1,Q_{n-1}^2)$.} \label{alg:coupled_hmc} \begin{tabbing} Sample momentum $P_{n}^{*}\sim\mathcal{N}(0_{d},I_{d})$ and $U_n\sim\mathcal{U}[0,1]$ independently \\ For $i=1,2$ \\ \qquad Set $(q_{0}^i,p_{0}^i)=(Q_{n-1}^i,P_{n}^{*})$\\ \qquad Perform leap-frog integration to obtain $(q_{L}^i,p_{L}^i)=\hat{\Phi}_{\varepsilon,L}(q_{0}^i,p_{0}^i)$ \\ \qquad If $U_n < \alpha\{(q_{0}^i,p_{0}^i),(q_{L}^i,p_{L}^i)\}$, set $Q_{n}^i=q_{L}^i$ \\ \qquad Otherwise set $Q_{n}^i=Q_{n-1}^i$ \\ Output $(Q_{n}^1,Q_{n}^2)$ \end{tabbing} \end{algorithm} Since the leap-frog integrator preserves Properties \ref{property:reversibility} and \ref{property:volume}, the Metropolis--Hastings acceptance probability is \begin{align} \alpha\left\lbrace(q,p),(q',p')\right\rbrace=\min\left[1,\exp\left\lbrace \mathcal{E}(q,p)-\mathcal{E}(q',p')\right\rbrace\right],\label{eq:MH_acceptance} \end{align} for $(q,p),(q',p')\in\mathbb{R}^{d}\times\mathbb{R}^{d}$. Iterating the above yields two marginal chains $(Q_{n}^1)_{n\geq0}$ and $(Q_{n}^2)_{n\geq0}$ that are $\pi$-invariant. Algorithm \ref{alg:coupled_hmc} amounts to running two Hamiltonian Monte Carlo chains with common random numbers; this has been considered in \citet{neal2002circularly} to remove the burn-in bias, and in \citet{Mangoubi:2017} and \citet{bourabee:2018} to analyze mixing properties. 
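Algorithm \ref{alg:coupled_hmc} can be sketched compactly; the following is our own illustration on a standard Gaussian target, where the function names, the target and the tuning values are assumptions made for the example rather than recommendations.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, L):
    # L leap-frog steps of size eps.
    for _ in range(L):
        p = p - 0.5 * eps * grad_U(q)
        q = q + eps * p
        p = p - 0.5 * eps * grad_U(q)
    return q, p

def coupled_hmc_step(q1, q2, U, grad_U, eps, L, rng):
    # Common momentum and common uniform variable for both chains.
    p0 = rng.standard_normal(q1.shape)
    log_u = np.log(rng.uniform())
    new_positions = []
    for q in (q1, q2):
        qL, pL = leapfrog(q, p0, grad_U, eps, L)
        # log Metropolis--Hastings ratio: energy at start minus at proposal
        log_accept = U(q) + 0.5 * p0 @ p0 - U(qL) - 0.5 * pL @ pL
        new_positions.append(qL if log_u < log_accept else q)
    return new_positions[0], new_positions[1]

# Standard Gaussian target on R^4: U(q) = |q|^2 / 2.
U = lambda q: 0.5 * q @ q
grad_U = lambda q: q
rng = np.random.default_rng(1)
q1, q2 = np.full(4, 2.0), np.full(4, -2.0)
for _ in range(100):
    q1, q2 = coupled_hmc_step(q1, q2, U, grad_U, 0.02, 50, rng)
assert np.linalg.norm(q1 - q2) < 1e-4   # the coupled chains have contracted
```

In this sketch a mixed accept/reject event could transiently increase the distance, but once the chains are close their acceptance probabilities differ only by terms proportional to that distance, so contraction dominates.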
We denote the associated coupled Markov transition kernel on the position coordinates as $\bar{K}_{\varepsilon,L}\{(q^{1},q^{2}),A^{1}\times A^{2}\}$ for $q^{1},q^{2}\in\mathbb{R}^{d}$ and $A^{1},A^{2}\in\mathcal{B}(\mathbb{R}^{d})$. Marginally we have $\bar{K}_{\varepsilon,L}\{(q^{1},q^{2}),A^{1}\times\mathbb{R}^{d}\}=K_{\varepsilon,L}(q^{1},A^{1})$ and $\bar{K}_{\varepsilon,L}\{(q^{1},q^{2}),\mathbb{R}^{d}\times A^{2}\}=K_{\varepsilon,L}(q^{2},A^{2})$, where $K_{\varepsilon,L}$ denotes the Markov transition kernel of the marginal Hamiltonian Monte Carlo chain. If we supplement Assumption \ref{ass:potential} with the existence of a local minimum of $U$, then aperiodicity, Lebesgue-irreducibility and Harris recurrence of $K_{\varepsilon,L}$ follow from \citet[Theorem 2]{durmus2017convergence}; see also \citet{Cances:2007} and \citet{Livingstone:2016} for earlier work. Hence ergodicity follows from \citet[Theorem 13.0.1]{meyn:tweedie:2009} and Assumption \ref{ass:convergence} is satisfied for test functions satisfying $\pi(h^{2+\kappa_1})<\infty$ for some $\kappa_1>0$. We will write the law of the coupled Hamiltonian Monte Carlo chain as pr$_{\varepsilon,L}$, and denote by ${E}_{\varepsilon,L}$ expectation with respect to pr$_{\varepsilon,L}$. The following result establishes that the relaxed meeting time $\tau_{\delta}=\inf\{ n\geq 0:|Q_{n}^{1}-Q_{n}^{2}|\leq\delta\}$, for any $\delta>0$, has geometric tails. \begin{theorem}\label{thm:relaxed_meeting} Suppose that the potential $U$ satisfies Assumptions \ref{ass:potential}--\ref{ass:convexity}. 
Assume also that there exists $\tilde{\varepsilon}>0$ such that for any $\varepsilon\in(0,\tilde{\varepsilon})$ and $L\in\mathbb{N}$, there exist a measurable function $V:\mathbb{R}^d\rightarrow[1,\infty)$, $\lambda\in(0,1)$ and $b<\infty$ such that \begin{align}\label{eqn:drift} K_{\varepsilon,L}(V)(q) \leq \lambda V(q) + b \end{align} for all $q\in\mathbb{R}^d$, $\pi_0(V)<\infty$ and $\{q\in \mathbb{R}^d: V(q)\leq \ell_1\}\subseteq\{q\in S : U(q)\leq \ell_0\}$ for some $\ell_0\in(\inf_{q\in S}U(q), \sup_{q\in S}U(q))$ and $\ell_1>1$ satisfying $\lambda + 2b(1-\lambda)^{-1}(1+\ell_1)^{-1} < 1$. Then for any $\delta>0$, there exist $\varepsilon_0\in(0,\tilde{\varepsilon})$ and $L_0\in\mathbb{N}$ such that for any $\varepsilon\in(0,\varepsilon_0)$ and $L\in\mathbb{N}$ satisfying $\varepsilon L<\varepsilon_0 L_0$, we have \begin{align}\label{eqn:relaxedmeeting_tails} \mathrm{pr}_{\varepsilon,L}(\tau_{\delta}>n)\leq C_0\kappa_0^n \end{align} for some $C_0\in\mathbb{R}_+, \kappa_0\in(0,1)$ and all integers $n\geq 0$. \end{theorem} The proof of Theorem \ref{thm:relaxed_meeting} proceeds by first showing that the relaxed meeting can take place within finitely many iterations whenever both chains enter a region of the state space where the target distribution is strongly log-concave. As suggested in \citet{neal2002circularly}, one can expect good coupling behaviour if the chains spend enough time in this region of the state space; the second part of the proof makes this intuition precise by controlling excursions with the geometric drift condition (\ref{eqn:drift}). The latter can be established under additional assumptions on the potential $U$ \citep[Theorem 9]{durmus2017convergence}. As Theorem \ref{thm:relaxed_meeting} implies that the coupled chains can get arbitrarily close with sufficient frequency, one could potentially employ the unbiased estimation framework of \citet{glynn2014exact}, which introduces a truncation variable. 
To verify Assumption \ref{ass:tail}, which requires exact meetings, in the next section we combine the coupled Hamiltonian Monte Carlo kernel with another coupled kernel designed to trigger exact meetings when the two chains are close. \section{Unbiased Hamiltonian Monte Carlo \label{sec:Proposed-estimators}} \subsection{Coupled random walk Metropolis--Hastings kernel \label{subsec:Making-chains-meet}} Let $K_{\sigma}$ denote the $\pi$-invariant Gaussian random walk Metropolis--Hastings kernel with proposal covariance $\sigma^2I_d$. The following describes a coupling of $K_{\sigma}(x,\cdot)$ and $K_{\sigma}(y,\cdot)$ that results in exact meetings with high probability when $x,y\in\mathbb{R}^d$ are close \citep{Johnson:1998,jacob2017unbiased} and $\sigma$ is appropriately chosen. We begin by sampling the proposals $X^*\sim\mathcal{N}(x,\sigma^2I_d)$ and $Y^*\sim\mathcal{N}(y,\sigma^2I_d)$ from the maximal coupling of these two Gaussian distributions \citep[Section 4.1]{jacob2017unbiased}. Under the maximal coupling, the probability of $\{X^*\neq Y^*\}$ is equal to the total variation distance between the distributions $\mathcal{N}(x,\sigma^{2}I_{d})$ and $\mathcal{N}(y,\sigma^{2}I_{d})$. Analytical tractability in the Gaussian case allows us to write that distance as $\mathrm{pr}(2\sigma |Z|\leq\delta)$, where $Z\sim\mathcal{N}(0,1)$ and $\delta=|x-y|$. By approximating the folded Gaussian cumulative distribution function \citep{pollard:2005}, we obtain \begin{align}\label{eqn:TV_approx} \mathrm{pr}(X^{*}=Y^{*})=\mathrm{pr}(2\sigma |Z|>\delta) =1-(2\pi)^{-1/2}\frac{\delta}{\sigma} +\mathcal{O}\left(\frac{\delta^{2}}{\sigma^{2}}\right) \end{align} as $\delta/\sigma\rightarrow 0$. Hence to achieve pr$(X^{*}=Y^{*})=\theta$ for some desired probability $\theta$, $\sigma$ should be chosen approximately as $\delta/\{(2\pi)^{1/2}\left(1-\theta\right)\}$. 
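For concreteness, the maximal coupling of $\mathcal{N}(x,\sigma^{2}I_{d})$ and $\mathcal{N}(y,\sigma^{2}I_{d})$ can be sampled by rejection; the sketch below (our own illustration; the function name and the numerical values are arbitrary choices) also checks the expansion (\ref{eqn:TV_approx}) by Monte Carlo.

```python
import numpy as np

def maximal_coupling_gaussians(x, y, sigma, rng):
    # Rejection sampler for a maximal coupling of N(x, sigma^2 I)
    # and N(y, sigma^2 I); unnormalized log-densities suffice here
    # because both Gaussians share the same normalizing constant.
    logp = lambda z, m: -0.5 * np.sum((z - m) ** 2) / sigma ** 2
    X = x + sigma * rng.standard_normal(x.shape)
    if np.log(rng.uniform()) + logp(X, x) <= logp(X, y):
        return X, X                      # meeting: X* = Y*
    while True:                          # otherwise sample the residual
        Y = y + sigma * rng.standard_normal(y.shape)
        if np.log(rng.uniform()) + logp(Y, y) > logp(Y, x):
            return X, Y

rng = np.random.default_rng(0)
x, y = np.zeros(2), np.full(2, 0.01)
sigma, n = 0.1, 10000
meets = sum(np.array_equal(*maximal_coupling_gaussians(x, y, sigma, rng))
            for _ in range(n))
# meeting frequency matches 1 - (2 pi)^{-1/2} delta / sigma
delta = np.linalg.norm(x - y)
assert abs(meets / n - (1 - delta / (sigma * np.sqrt(2 * np.pi)))) < 0.02
```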
The proposed values $X^{*}$ and $Y^{*}$ are then accepted according to Metropolis--Hastings acceptance probabilities, i.e. if $U^*\leq\min\{1,\pi(X^{*})/\pi(x)\}$ and $U^*\leq\min\{1,\pi(Y^{*})/\pi(y)\}$ respectively, where a common uniform random variable $U^*\sim\mathcal{U}[0,1]$ is used for both chains. We denote the resulting coupled Markov transition kernel on $\{\mathbb{R}^d\times\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d)\times\mathcal{B}(\mathbb{R}^d)\}$ as $\bar{K}_{\sigma}$. If $\sigma$ is small relative to the spread of the target distribution, the probability of accepting both proposals will be high. On the other hand, (\ref{eqn:TV_approx}) shows that $\sigma$ needs to be large compared to $\delta$ for the event $\{X^{*}=Y^{*}\}$ to occur with high probability. This leads to a trade-off; in practice, one can monitor acceptance probabilities of random walk Metropolis--Hastings chains from preliminary runs to guide how small $\sigma$ should be. Although most simulations in Section \ref{sec:Numerical-illustrations} will employ $\sigma=10^{-3}$ as the default value, the sensitivity of our proposed methodology to the choice of $\sigma$ will be investigated in Sections \ref{subsec:Logistic-regression} and \ref{subsec:Cox-Process}. \subsection{Combining coupled kernels \label{subsec:Proposed-algorithm}} We now combine the coupled Hamiltonian Monte Carlo kernel $\bar{K}_{\varepsilon,L}$ with the coupled random walk Metropolis--Hastings kernel $\bar{K}_{\sigma}$, introduced in Sections \ref{subsec:Coupled-HMC-kernel} and \ref{subsec:Making-chains-meet} respectively, using the following mixture \begin{align}\label{eqn:coupled_mixture} \bar{K}_{\varepsilon,L,\sigma}\{(x,y),A\times B\} = (1-\gamma)\bar{K}_{\varepsilon,L}\{(x,y),A\times B\} + \gamma\bar{K}_{\sigma}\{(x,y),A\times B\} \end{align} for $x,y\in\mathbb{R}^d$ and $A,B\in\mathcal{B}(\mathbb{R}^d)$, where $\gamma\in(0,1), \varepsilon>0, L\in\mathbb{N},\sigma>0$ are appropriately chosen. 
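A full transition of the coupled kernel $\bar{K}_{\sigma}$ described above can be sketched as follows (our own illustration; the standard Gaussian target and the numerical values are assumptions made for the example).

```python
import numpy as np

def coupled_rwmh_step(x, y, logpi, sigma, rng):
    # Propose (X*, Y*) from a maximal coupling of N(x, sigma^2 I) and
    # N(y, sigma^2 I), then accept both with a common uniform variable.
    logq = lambda z, m: -0.5 * np.sum((z - m) ** 2) / sigma ** 2
    Xs = x + sigma * rng.standard_normal(x.shape)
    if np.log(rng.uniform()) + logq(Xs, x) <= logq(Xs, y):
        Ys = Xs                                  # proposals coincide
    else:
        while True:                              # sample from the residual
            Ys = y + sigma * rng.standard_normal(y.shape)
            if np.log(rng.uniform()) + logq(Ys, y) > logq(Ys, x):
                break
    logu = np.log(rng.uniform())                 # common uniform variable
    x_new = Xs if logu < logpi(Xs) - logpi(x) else x
    y_new = Ys if logu < logpi(Ys) - logpi(y) else y
    return x_new, y_new

logpi = lambda z: -0.5 * np.sum(z ** 2)          # standard Gaussian target
rng = np.random.default_rng(2)
x, y = np.zeros(3), np.full(3, 1e-6)             # chains already close
for _ in range(10):
    x, y = coupled_rwmh_step(x, y, logpi, 1e-3, rng)
    if np.array_equal(x, y):
        break
assert np.array_equal(x, y)                      # exact meeting occurred
```

Once the chains coincide, faithfulness keeps them together: identical states receive identical proposals and the common uniform variable yields identical accept/reject decisions.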
The rationale for the mixture (\ref{eqn:coupled_mixture}) is to enable exact meetings using the coupled random walk Metropolis--Hastings kernel when the chains are brought close together by the coupled Hamiltonian Monte Carlo kernel. To address the choice of $\gamma$, in light of the efficiency considerations in Section \ref{subsec:Context:-unbiased-estimation}, we should understand how $\gamma$ impacts both the average meeting time, which we will investigate in Sections \ref{subsec:Logistic-regression} and \ref{subsec:Cox-Process}, and the asymptotic inefficiency of the marginal kernel ${K}_{\varepsilon,L,\sigma}=(1-\gamma)K_{\varepsilon,L}+\gamma K_{\sigma}$. We now compare the asymptotic inefficiency of ${K}_{\varepsilon,L,\sigma}$ to that of ${K}_{\varepsilon,L}$. Assuming that evaluation of the potential and its gradient have the same cost, the inefficiency of ${K}_{\varepsilon,L}$ is given by the product of its cost $L+2$ and its asymptotic variance $v(h,K_{\varepsilon,L})=\lim_{n\rightarrow\infty}\mathrm{var}_{\varepsilon,L}\{n^{-1/2}\sum_{i=1}^nh(X_i)\}$ where $X_0\sim\pi$ and $X_n\sim K_{\varepsilon,L}(X_{n-1},\cdot)$ for all integers $n\geq 1$. Noting that the expected cost of ${K}_{\varepsilon,L,\sigma}$ is $(1-\gamma)(L+2)+\gamma$, we now consider its asymptotic variance $v(h,K_{\varepsilon,L,\sigma})$. By Peskun's ordering \citep{peskun1973optimum}, we have $v(h,K_{\varepsilon,L,\sigma})\leq v(h,P_{\varepsilon,L})$ where $P_{\varepsilon,L}=(1-\gamma)K_{\varepsilon,L}+\gamma I$ with the identity kernel defined as $I(x,A)=\mathbb{I}_A(x)$ for $x\in\mathbb{R}^d$ and $A\in\mathcal{B}(\mathbb{R}^d)$. We then apply \citet[Corollary 1]{latuszynski2013clts} to obtain $v(h,K_{\varepsilon,L,\sigma})\leq \gamma(1-\gamma)^{-1}\mathrm{var}_{\pi}\{h(X)\}+(1-\gamma)^{-1}v(h,K_{\varepsilon,L})$. 
In summary, the relative asymptotic inefficiency can be upper bounded by \begin{align}\label{eqn:relative_compare_mixture} \left\lbrace 1 + \gamma(1-\gamma)^{-1}(L+2)^{-1}\right\rbrace \left[ 1 + \gamma\{1+\Psi(h,K_{\varepsilon,L})\}^{-1}\right], \end{align} where $\Psi(h,K_{\varepsilon,L})=2\sum_{n=1}^{\infty}\mathrm{Corr}_{\varepsilon,L}\{h(X_0),h(X_n)\}$, so that $1+\Psi(h,K_{\varepsilon,L})$ is the integrated auto-correlation time of a stationary Hamiltonian Monte Carlo chain. In view of (\ref{eqn:relative_compare_mixture}), we advocate small values of $\gamma$ to reduce the loss of efficiency of the marginal chain; most simulations in Section \ref{sec:Numerical-illustrations} will employ $\gamma=1/20$ as the default value. We will write $Q_{\sigma}(x,A)=\int_{A}\mathcal{N}(y;x,\sigma^2I_d)dy, x\in\mathbb{R}^d, A\in\mathcal{B}(\mathbb{R}^d)$ as the Markov transition kernel of the Gaussian random walk, the law of the resulting coupled chain $(X_n,Y_n)_{n\geq 0}$ as pr$_{\varepsilon,L,\sigma}$, and denote by ${E}_{\varepsilon,L,\sigma}$ expectation with respect to pr$_{\varepsilon,L,\sigma}$. The following algorithm details the simulation of $(X_n,Y_n)_{n\geq 0}$ to compute the unbiased estimators described in Section \ref{subsec:Context:-unbiased-estimation}. \begin{algorithm} \caption{Compute unbiased estimator $H_{k:m}(X,Y)$ of $\pi(h)$ } \label{alg:coupled_mixture} \begin{tabbing} Initialize $(X_0,Y_0)\sim\bar{\pi}_0$ from a coupling with $\pi_0$ as marginals \\ With probability $\gamma$, sample $X_1\sim K_{\sigma}(X_0,\cdot)$; otherwise sample $X_1\sim K_{\varepsilon,L}(X_0,\cdot)$ \\ Set $n=1$. 
While $n<\max(m,\tau)$\\ \qquad With probability $\gamma$, sample $(X_{n+1},Y_n)\sim\bar{K}_{\sigma}\{(X_{n},Y_{n-1}),\cdot\}$\\ \qquad Otherwise sample $(X_{n+1},Y_n)\sim\bar{K}_{\varepsilon,L}\{(X_{n},Y_{n-1}),\cdot\}$\\ \qquad If $X_{n+1}=Y_n$ set $\tau = n+1$\\ \qquad Increment $n \leftarrow n+1$\\ Compute $H_{k:m}(X,Y)$ using (\ref{eq:Hkm}) \end{tabbing} \end{algorithm} The mixture kernel $K_{\varepsilon,L,\sigma}$ inherits ergodicity properties from either of its components; therefore, Assumption \ref{ass:convergence} can be satisfied following the discussion in Section \ref{subsec:Coupled-HMC-kernel}. Noting that the faithfulness property in Assumption \ref{ass:faithfulness} holds by construction, we now turn our attention to Assumption \ref{ass:tail}. \begin{theorem}\label{thm:exact_meeting} Suppose that the potential $U$ satisfies Assumptions \ref{ass:potential}--\ref{ass:convexity}. Assume also that there exist $\tilde{\varepsilon}>0$ and $\tilde{\sigma}>0$ such that for any $\varepsilon\in(0,\tilde{\varepsilon}),L\in\mathbb{N}$ and $\sigma\in(0,\tilde{\sigma})$, there exist a measurable function $V:\mathbb{R}^d\rightarrow[1,\infty)$, $\lambda\in(0,1), b<\infty$ and $\mu>0$ such that \begin{align}\label{eqn:new_drift} K_{\varepsilon,L}(V)(x) \leq \lambda V(x) + b\quad\mbox{and}\quad Q_{\sigma}(V)(x)\leq \mu\{V(x)+1\} \end{align} for all $x\in\mathbb{R}^d$, $\pi_0(V)<\infty$, $\lambda_0=(1-\gamma)\lambda+\gamma(1+\mu)<1$ and $\{x\in \mathbb{R}^d: V(x)\leq \ell_1\}\subseteq\{x\in S : U(x)\leq \ell_0\}$ for some $\ell_0\in(\inf_{x\in S}U(x), \sup_{x\in S}U(x))$ and $\ell_1>1$ satisfying $\lambda_0 + 2\{(1-\gamma)b+\gamma\mu\}(1-\lambda_0)^{-1}(1+\ell_1)^{-1} < 1$. 
Then there exist $\varepsilon_0\in(0,\tilde{\varepsilon}), L_0\in\mathbb{N}$ and $\sigma_0>0$ such that for any $\varepsilon\in(0,\varepsilon_0), L\in\mathbb{N}$ satisfying $\varepsilon L<\varepsilon_0 L_0$ and $\sigma\in(0,\sigma_0)$, we have \begin{align}\label{eqn:exact_meetingtime} \mathrm{pr}_{\varepsilon,L,\sigma}(\tau>n)\leq C_0\kappa_0^n \end{align} for some $C_0\in\mathbb{R}_+, \kappa_0\in(0,1)$ and all integers $n\geq 0$. \end{theorem} The proof of the above result proceeds in two parts as in Theorem \ref{thm:relaxed_meeting}, but requires slightly stronger assumptions to ensure that the mixture kernel still satisfies a geometric drift condition. The assumptions of Theorems \ref{thm:relaxed_meeting}--\ref{thm:exact_meeting} can be verified for target distributions given by multivariate Gaussian distributions and posterior distributions arising from Bayesian logistic regression; see Section \ref{sec:check_assumptions} of the supplement. Although the above discussion guarantees validity of the unbiased estimator computed by Algorithm \ref{alg:coupled_mixture} for a range of tuning parameters, its efficiency will depend on the distribution of the meeting time $\tau$ induced by the coupling, and on the mixing properties of the marginal kernel $K_{\varepsilon,L,\sigma}$. \section{Numerical illustrations \label{sec:Numerical-illustrations}} \subsection{Preliminaries}\label{sec:preliminaries} In practice, we will run Algorithm \ref{alg:coupled_mixture} $R$ times independently in parallel to obtain the unbiased estimators $H_{k:m}(X^{(r)},Y^{(r)}), r=1,\ldots,R$. Following the framework of \citet{glynn1992asymptotic}, we define the asymptotic inefficiency in the limit of our computational budget as $i(h,\bar{\pi}_0,\bar{K}_{\varepsilon,L,\sigma})=E_{\varepsilon,L,\sigma}\{2(\tau-1)+\max(1,m+1-\tau)\}\,\mathrm{var}_{\varepsilon,L,\sigma}\{H_{k:m}(X,Y)\}$, assuming that applying $\bar{K}_{\varepsilon,L,\sigma}$ costs twice as much as $K_{\varepsilon,L,\sigma}$. 
This measure of efficiency accounts for the fact that, with a given compute budget, one can average over more estimators if each is cheaper to compute. We will approximate this inefficiency by empirical averages over the $R$ realizations. For comparison, the asymptotic variance $v(h,K_{\varepsilon,L})$ of the standard Hamiltonian Monte Carlo estimator will be approximated with the \texttt{spectrum0.ar} function of the \texttt{coda} R package \citep{codapackage} using $10,000$ iterations after a burn-in of $1,000$ for all examples. We will consider estimating first and second moments, i.e. set $h_{i}(x)=x_i$ and $h_{d+i}(x)=x_i^2$ for $i=1,\ldots,d$, and compare $i(\bar{\pi}_0,\bar{K}_{\varepsilon,L,\sigma})=\sum_{i=1}^{2d}i(h_{i},\bar{\pi}_0,\bar{K}_{\varepsilon,L,\sigma})$ with $v(K_{\varepsilon,L})=\sum_{i=1}^{2d}v(h_i,K_{\varepsilon,L})$ at possibly different parameter configurations. An important point to be illustrated in the following is that the parameters $\varepsilon$ and $L$ minimizing the asymptotic inefficiency $(L+2)v(K_{\varepsilon,L})$ need not be suitable for our proposed estimator. Lastly, we will employ the guideline of taking $k$ as the $90\%$ sample quantile of meeting times, obtained from a small number of preliminary runs, and setting $m = 10k$. \subsection{Toy examples \label{subsec:toy_example}} We first investigate the scalability of the proposed approach in high dimensions on a standard Gaussian target distribution on $\mathbb{R}^d$, by examining the average meeting time of stationary coupled chains generated by (\ref{eqn:coupled_mixture}). For simplicity, the parameters $\sigma=10^{-3}$ and $\gamma=1/20$ are taken as their default values. To ensure stable acceptance probabilities as $d\rightarrow\infty$ \citep{beskos2013optimal}, we scale the step size as $\varepsilon=Cd^{-1/4}$ and select different constants $C>0$ to induce a range of acceptance probabilities. 
The number of leap-frog steps is taken as $L=1 + \lfloor \varepsilon^{-1} \rfloor$, which fixes the integration time $\varepsilon L$ at approximately one. For comparison, we consider (\ref{eqn:coupled_mixture}) with $L=1$, as this corresponds to the Metropolis-adjusted Langevin algorithm, and adopt the scaling $\varepsilon^2=C^2d^{-1/3}$ \citep{roberts1998optimal}; see also Section \ref{sec:coupling_mala} of the supplementary material for an alternative coupling. Lastly, we also consider coupled chains generated solely by the coupled random walk Metropolis--Hastings kernel described in Section \ref{subsec:Making-chains-meet}, with proposal variance scaled as $\sigma^2=C^2d^{-1}$ \citep{roberts1997weak}. The results displayed in Fig. \ref{fig:gaussian} demonstrate the effectiveness of our coupling strategy in high dimensions, and illustrate the appeal of Hamiltonian Monte Carlo kernels in such settings. \begin{figure} \begin{centering} \begin{minipage}{0.3\textwidth} \includegraphics[scale=0.35]{rwmh_meetingtime.eps} \end{minipage}\hspace{0.5cm} \begin{minipage}{0.3\textwidth} \includegraphics[scale=0.35]{mala_meetingtime.eps} \end{minipage}\hspace{0.5cm} \begin{minipage}{0.3\textwidth} \includegraphics[scale=0.35]{hmc_meetingtime.eps} \end{minipage} \par\end{centering} \caption{Gaussian example in Section \ref{subsec:toy_example}. Scaling of average meeting time with dimension for $1,000$ coupled chains based on random walk Metropolis--Hastings (left), Metropolis-adjusted Langevin algorithm (middle) and Hamiltonian Monte Carlo (right). The symbols and lines correspond to $C=1$ (dot-solid), $C=1.5$ (triangle-small dashes) and $C=2$ (square-dashes).} \label{fig:gaussian} \end{figure} Next we consider a banana-shaped target distribution on $\mathbb{R}^2$, whose potential is given by the Rosenbrock function $U(x_1,x_2)=(1-x_1)^2+10(x_2-x_1^2)^2$ for $(x_1,x_2)\in\mathbb{R}^2$. 
The aim here is to examine the utility of our proposed coupling for a highly non-convex potential, and to explore the use of a new coupling for Hamiltonian Monte Carlo introduced by \citet[Section 2.3.2]{bourabee:2018}. In contrast to Algorithm \ref{alg:coupled_hmc}, which assigns the same initial momentum to both chains, the latter samples an initial momentum $P_n^1\sim\mathcal{N}(0_d,I_d)$ for the first chain, and sets the initial momentum for the second chain as \begin{align*} P_n^2 = \begin{cases} P_n^1+\kappa\Delta_{n-1}, &\mbox{with probability } \frac{\mathcal{N}\left(\bar{\Delta}_{n-1}^{\top}P_n^1+\kappa|\Delta_{n-1}|; 0,1\right)}{\mathcal{N}\left(\bar{\Delta}_{n-1}^{\top}P_n^1;0,1\right)},\\ P_n^1-2(\bar{\Delta}_{n-1}^{\top}P_n^1)\bar{\Delta}_{n-1}, &\mbox{otherwise}, \end{cases} \end{align*} where $\kappa>0$ is a tuning parameter, $\Delta_{n-1}=Q_{n-1}^1-Q_{n-1}^2$ denotes the difference between the chains at iteration $n-1$, and $\bar{\Delta}_{n-1}=\Delta_{n-1}/|\Delta_{n-1}|$ the normalized difference. Leap-frog integration and Metropolis--Hastings acceptance of the output are then performed in the same way as in Algorithm \ref{alg:coupled_hmc}; the resulting coupled Hamiltonian Monte Carlo kernel is then employed in the mixture (\ref{eqn:coupled_mixture}). We simulate $1,000$ coupled chains, initialized independently from the uniform distribution on $[-5,5]^2$, using this new coupling with $\kappa=1$ and the previous one, which corresponds to $\kappa=0$. Employing the same parameters $(\varepsilon,L,\sigma,\gamma)=(1/500, 500, 10^{-3}, 1/20)$ for both couplings, we observe that the new coupling reduces the average meeting time from $158$ to $52$. This example illustrates that the proposed methodology can be used beyond convex potentials, and that alternative couplings can result in significantly shorter meeting times. 
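This momentum coupling can be sketched as follows (our own illustration; the acceptance step applies the displayed density ratio, capped at one by comparison with a uniform variable). Marginally both momenta remain standard normal, which the Monte Carlo check below verifies.

```python
import numpy as np

def coupled_momentum(q1, q2, kappa, rng):
    # Draw P^1 ~ N(0, I) and set P^2 by the translation-or-reflection
    # coupling: shift along the difference direction with the stated
    # probability, otherwise reflect in the hyperplane normal to it.
    delta = q1 - q2
    dbar = delta / np.linalg.norm(delta)
    p1 = rng.standard_normal(q1.shape)
    a = dbar @ p1
    # log of the standard normal density ratio in the displayed probability
    log_ratio = 0.5 * a ** 2 - 0.5 * (a + kappa * np.linalg.norm(delta)) ** 2
    if np.log(rng.uniform()) < log_ratio:
        p2 = p1 + kappa * delta
    else:
        p2 = p1 - 2.0 * a * dbar
    return p1, p2

# Check that P^2 is marginally N(0, I) by Monte Carlo.
rng = np.random.default_rng(3)
q1, q2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
draws = np.array([coupled_momentum(q1, q2, 1.0, rng)[1] for _ in range(20000)])
assert np.all(np.abs(draws.mean(axis=0)) < 0.05)
assert abs(draws.var(axis=0).mean() - 1.0) < 0.05
```

Only the component of the momentum along $\bar{\Delta}_{n-1}$ is modified; the orthogonal components are shared by the two chains in both branches.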
\subsection{Logistic regression \label{subsec:Logistic-regression}} We now consider a Bayesian logistic regression on the classic German credit dataset, as in \citet{hoffman2014no}. After including all pairwise interactions and performing standardization, the design matrix has $1,000$ rows and $300$ columns. Given covariates $x_i\in\mathbb{R}^{300}$, intercept $a\in\mathbb{R}$ and coefficients $b\in\mathbb{R}^{300}$, each observation $y_i\in\{0,1\}$ is modelled as an independent Bernoulli random variable with probability of success $\{1+\exp(-a-b^{\top}x_i)\}^{-1}$. The prior is specified as $a|s^2\sim\mathcal{N}(0,s^2), b|s^2\sim\mathcal{N}(0_{300},s^2I_{300})$ independently, and an Exponential distribution with rate $0.01$ for the variance parameter $s^2$. The target $\pi$ is the posterior distribution of parameters $(a,b,\log s^2)$ on $\mathbb{R}^d$ with $d=302$. Initializing coupled chains independently from $\pi_0=\mathcal{N}(0_d,I_d)$, for each parameter configuration $(\varepsilon,L)\in\{0.01,0.0125,\ldots,0.04\}\times\{10,20,30\}$, we run $5$ pairs of coupled Hamiltonian Monte Carlo chains for $1,000$ iterations. This computation can be done independently in parallel for each configuration and repetition; the output is displayed in the left panel of Fig. \ref{fig:logistic}. Although multiple configurations lead to contractive chains, this is not the case for $(\varepsilon,L)=(0.03,10)$, the optimal parameters for standard Hamiltonian Monte Carlo. For configurations that yield distances that are less than $10^{-10}$, we simulate $100$ meeting times in parallel using the mixture kernel (\ref{eqn:coupled_mixture}) with $\sigma=10^{-3}$ and $\gamma=1/20$. We then select the parameter configuration $(\varepsilon,L)=(0.0125,10)$ that gave the smallest average compute cost, taken as $L+2$ times the average meeting time. 
To illustrate the impact of $\sigma$ and $\gamma$, we fix $(\varepsilon,L)=(0.0125,10)$ and examine the distribution of meeting times as $\sigma$ or $\gamma$ varies. Decreasing $\sigma$ leads to larger meeting times: conservatively small values of $\sigma$ require more iterations before the chains get close enough for the maximal coupling to propose the same value with high probability. On the other hand, if $\sigma$ were too large, large meeting times would be observed as random walk proposals would be rejected with high probability. The middle panel of Fig. \ref{fig:logistic} suggests that the effectiveness of our coupling is not highly sensitive to the choice of $\sigma$, provided that it is small enough. Similarly, the right panel of Fig. \ref{fig:logistic} also shows stable meeting times for the range of values of $\gamma$ considered. Finally, we produce $R=1,000$ coupled chains in parallel with $(\varepsilon,L,\sigma,\gamma)=(0.0125,10,10^{-3},1/20)$ and compare the inefficiency of our estimator with the asymptotic variance of the optimal Hamiltonian Monte Carlo estimator for various choices of $k$ and $m$. The results, summarized in Table \ref{table:logistic}, illustrate that bias removal comes at a cost of increased variance, and that this can be reduced with appropriate choices of $k$ and $m$. Our guideline for $k$ and $m$ results in a relative inefficiency of $1.05$ at an average compute cost of $3518$ applications of $K_{\varepsilon,L,\sigma}$, or approximately $5$ minutes of computing time with our implementation. Therefore, thanks to unbiasedness, we can safely average over independent copies of an estimator whose expected cost is of the order of a few thousand Hamiltonian Monte Carlo iterations. 
\begin{figure} \begin{centering} \begin{minipage}{0.3\textwidth} \includegraphics[scale=0.35]{logistic_contraction.eps} \end{minipage}\hspace{0.5cm} \begin{minipage}{0.3\textwidth} \includegraphics[scale=0.35]{logistic_meetingtime.eps} \end{minipage}\hspace{0.5cm} \begin{minipage}{0.3\textwidth} \includegraphics[scale=0.35]{logistic_gamma_meetingtime.eps} \end{minipage} \par\end{centering} \caption{Logistic regression example in Section \ref{subsec:Logistic-regression}. Average distance between coupled chains at iteration $1,000$ against integration time $\varepsilon L$ (left). The symbols and lines correspond to $L=10$ (dot-solid), $L=20$ (triangle-small dashes) and $L=30$ (square-dashes). Boxplot of meeting times as parameter $\sigma$ (middle) or $\gamma$ (right) varies.} \label{fig:logistic} \end{figure} \begin{table} \def~{\hphantom{0}} \caption{Relative inefficiency of proposed estimator in logistic regression example\label{table:logistic} } \begin{center} \begin{tabular}{ccccc} $k$ & $m$ & Cost & Variance & Relative inefficiency \\[5pt] $1$ & $k$ & $436$ & $4.0\times10^2$ & 1989.07 \\ $1$ & $5k$ & $436$ & $3.4\times10^2$ & 1671.93 \\ $1$ & $10k$ & $436$ & $2.8\times10^2$ & 1403.28 \\ $\mathrm{median}(\tau)$ & $k$ & 458 & $7.4\times10^0$ & 38.22 \\ $\mathrm{median}(\tau)$ & $5k$ & 1258 & ~$1.1\times10^{-1}$ & 1.58 \\ $\mathrm{median}(\tau)$ & $10k$ & 2298 & ~$4.5\times10^{-2}$ & 1.18 \\ $90\%\,\mathrm{quantile}(\tau)$ & $k$ & 553 & $6.0\times10^0$ & 38.11 \\ $90\%\,\mathrm{quantile}(\tau)$ & $5k$ & 1868 & ~$5.8\times10^{-2}$ & 1.23 \\ $90\%\,\mathrm{quantile}(\tau)$ & $10k$ & 3518 & ~$2.6\times10^{-2}$ & 1.05 \\ \end{tabular} \caption*{Cost refers to the expected compute cost, variance denotes the sum of variances when estimating first and second moments, and relative inefficiency is the ratio of the asymptotic inefficiency $i(\bar{\pi}_0,\bar{K}_{\varepsilon,L,\sigma})$ with parameters $(\varepsilon,L,\sigma,\gamma)=(0.0125,10,10^{-3},1/20)$, to the 
asymptotic variance $v(K_{\varepsilon,L})$ with optimal parameters $(\varepsilon,L)=(0.03,10)$. These quantities were computed using $R=1,000$ independent runs, while the median and $90\%$ quantile of the meeting time were computed with $100$ preliminary runs.} \end{center} \end{table} \subsection{Log-Gaussian Cox point processes \label{subsec:Cox-Process}} We end with a challenging high dimensional application of Bayesian inference for log-Gaussian Cox point processes on a dataset concerning the locations of $126$ Scot pine saplings in a natural forest in Finland \citep{Moller:1998}. After discretizing the plot into an $n \times n$ regular grid, the counts $y_i\in\mathbb{N}$ in the grid cells are assumed to be conditionally independent given a latent intensity process $\Lambda_i, i\in\{1,\ldots,n\}^2$, and modelled as Poisson distributed with mean $a\Lambda_i$, where $a=n^{-2}$ is the area of each grid cell. The prior is specified by $\Lambda_i=\exp(X_i)$, where $X_i,i\in\{1,\ldots,n\}^2$ is a Gaussian process with mean $\mu\in\mathbb{R}$ and exponential covariance function $\Sigma_{i,j}=s^2\exp\{-|i-j|/(nb)\}$ for $i,j\in\{1,\ldots,n\}^2$. We will adopt the parameter values $s^2=1.91,b=1/33$ and $\mu=\log(126)-s^2/2$ estimated by \citet{Moller:1998} and infer the posterior distribution of the latent process $X_i,i\in\{1,\ldots,n\}^2$ given the count data and these hyperparameter values. We will consider three discretizations with $n\in\{16, 32, 64\}$, which correspond to target distributions $\pi$ on $\mathbb{R}^d$ with $d\in\{256, 1024, 4096\}$. Owing to the high dimensionality of this model, the mixing of random walk Metropolis--Hastings is known to be prohibitively slow \citep{Christensen:2002}, while the Metropolis-adjusted Langevin algorithm requires a computationally costly reparameterization to be effective \citep{Christensen:2005}. 
We will consider the use of Hamiltonian Monte Carlo and Riemann manifold Hamiltonian Monte Carlo with metric tensor $\Sigma^{-1}+a\exp(\mu+s^2/2)I_d$ \citep{girolami2011riemann}. We proceed as in Section \ref{subsec:Logistic-regression} to seek parameter configurations $(\varepsilon,L)\in\{0.05,0.07,\ldots,0.45\}\times\{10,20,30\}$ that yield contractive coupled chains with small compute cost, when initialized independently from the prior distribution. Although both algorithms have multiple configurations that result in contractive chains, the parameters $\varepsilon$ and $L$ that were optimal for these methods only led to contractive coupled Riemann manifold Hamiltonian Monte Carlo chains for all three discretizations. By simulating $100$ meeting times with $\sigma=10^{-3}$ and $\gamma=1/20$ for configurations that yield distances of less than $10^{-10}$, for $d\in\{256, 1024, 4096\}$ respectively, we select $(\varepsilon,L)\in\{(0.11,10),(0.15,10),(0.17,10)\}$ for Hamiltonian Monte Carlo, and $(\varepsilon,L)\in\{(0.11,10),(0.11,10),(0.13,10)\}$ for Riemann manifold Hamiltonian Monte Carlo, which gave the smallest average compute cost for each algorithm. The corresponding meeting times in the left panel of Fig. \ref{fig:coxprocess} show the effectiveness of our coupling strategy even in high dimensions. The middle and right panels of Fig. \ref{fig:coxprocess}, which display the meeting times of coupled Riemann manifold Hamiltonian Monte Carlo chains for the finest discretization, also illustrate the robustness of our coupling to the choice of $\sigma$ and $\gamma$. With the above parameters and the guideline for choosing $k$ and $m$, we computed $R=1,000$ coupled chains in parallel for each algorithm and discretization. For $d\in\{256, 1024, 4096\}$ respectively, the relative inefficiency was found to be $11.00, 5.43, 2.73$ for Hamiltonian Monte Carlo, and $11.68, 7.85, 3.72$ for Riemann manifold Hamiltonian Monte Carlo. 
For the finest discretization, the average compute time was approximately $90$ and $20$ minutes with our implementation. Despite some loss of efficiency, the benefits of exploiting parallel computation for this problem are apparent since one can only run $4439$ and $714$ iterations of these algorithms respectively for the same compute time. \begin{figure} \begin{centering} \begin{minipage}{0.3\textwidth} \includegraphics[scale=0.35]{coxprocess_dim_meetingtime.eps} \end{minipage}\hspace{0.5cm} \begin{minipage}{0.3\textwidth} \includegraphics[scale=0.35]{coxprocess_sigma_meetingtime.eps} \end{minipage}\hspace{0.5cm} \begin{minipage}{0.3\textwidth} \includegraphics[scale=0.35]{coxprocess_gamma_meetingtime.eps} \end{minipage}\hspace{0.5cm} \par\end{centering} \caption{Cox process example in Section \ref{subsec:Cox-Process}. Boxplot of meeting times for both algorithms and all three discretizations (left), and as parameter $\sigma$ (middle) or $\gamma$ (right) varies. } \label{fig:coxprocess} \end{figure} \section{Discussion} Construction of couplings could be explored for other variants of the Hamiltonian Monte Carlo method, such as the use of partial momentum refreshment \citep{horowitz1991generalized}, the adaptation of tuning parameters \citep{hoffman2014no}, different choices of kinetic energy \citep{livingstone2017kinetic}, and in combination with new sampling paradigms \citep{pollock2016scalable,fearnhead2016piecewise,vanetti2017piecewise}. Other ways of leveraging parallel hardware for Hamiltonian Monte Carlo include the work in \citet{calderhead2014general}, which builds on \citet{tjelmeland2004using} and focuses on parallel computation at each iteration of the algorithm. \section*{Acknowledgement} The computations in this article were run on the Odyssey cluster supported by the FAS Division of Science, Research Computing Group at Harvard University. Pierre E. Jacob gratefully acknowledges support by the National Science Foundation through grant DMS-1712872. 
Both authors gratefully acknowledge support by the Army Research Office through grant W911NF-15-1-0172. \section*{Supplementary material} \label{SM} An R package is available at \href{https://github.com/pierrejacob/debiasedhmc}{github.com/pierrejacob/debiasedhmc} and contains the scripts used to produce the figures of this article. The supplementary material (available below) includes an alternative coupling for the Metropolis-adjusted Langevin algorithm, additional simulation results on truncated Gaussian distributions, the proofs of Lemma \ref{lem:exact_contraction} and Theorems \ref{thm:relaxed_meeting}--\ref{thm:exact_meeting}, and notes on verifying the assumptions of Theorems \ref{thm:relaxed_meeting}--\ref{thm:exact_meeting} for target distributions given by posterior distributions of Bayesian logistic regression. \bibliographystyle{abbrvnat} \phantomsection\addcontentsline{toc}{section}{\refname}
% Source: https://arxiv.org/abs/2301.07872
\title{Automorphisms of weighted projective hypersurfaces}
\begin{abstract}
We prove several results concerning automorphism groups of quasismooth complex weighted projective hypersurfaces; these generalize and strengthen existing results for hypersurfaces in ordinary projective space. First, we establish a Grothendieck-Lefschetz type theorem for these hypersurfaces. We next provide a characterization of when their linear automorphism group is finite and find explicit uniform upper bounds on the size of this group. Finally, we describe the automorphisms of a generic quasismooth hypersurface with given weights and degree.
\end{abstract}
\section{Introduction} Hypersurfaces in projective space are among the most well-studied varieties. In particular, a great deal is known about their automorphism groups, due to landmark theorems of Grothendieck-Lefschetz \cite{SGA2}, Matsumura-Monsky \cite{MM}, and others. In this paper, we generalize and strengthen several of these results to hypersurfaces in any weighted projective space over ${\mathbb C}$. Given a collection of positive integers $a_0,\ldots,a_{n+1}$, the {\it weighted projective space} $\mathbb{P} \coloneqq \mathbb{P}(a_0,\ldots,a_{n+1})$ is defined to be the projective quotient variety $(\mathbb{A}^{n+2} \setminus \{ 0 \})/\mathbb{G}_{\on{m}}$. Here the multiplicative group $\mathbb{G}_{\on{m}}$ acts by the formula $t \cdot (x_0,\ldots,x_{n+1}) = (t^{a_0}x_0,\ldots,t^{a_{n+1}}x_{n+1})$. When all weights $a_i$ are equal to $1$, we recover ordinary projective space $\mathbb{P}^{n+1}$. Unlike $\mathbb{P}^{n+1}$, weighted projective spaces are usually singular. Hypersurfaces in weighted projective space are an extremely useful class of algebraic varieties. Indeed, many of their properties are determined combinatorially by the choice of degree and weights, and they exhibit a large range of behavior. In particular, there is significant evidence that weighted projective hypersurfaces are flexible enough to solve a diverse range of optimization problems in algebraic geometry (see, for example, \cite{ETW,CEW,ETWindex,Totaro}). It is therefore desirable to understand basic properties of their automorphism groups. The outline of the paper is as follows: in \Cref{background}, we discuss the necessary preliminaries on the objects of study. In \Cref{linearity}, we prove that all automorphisms of well-formed quasismooth weighted projective hypersurfaces $X \subset \mathbb{P}$ extend to the ambient weighted projective space if $\dim(X) \geq 3$ or $\dim(X) = 2$ and $X$ has non-trivial canonical class (\Cref{main-theorem}). 
This generalizes the same statement for hypersurfaces in $\mathbb{P}^{n+1}$, which is a consequence of the Grothendieck-Lefschetz theorem. An automorphism of $X$ that extends to $\mathbb{P}$ is called {\it linear}. We prove several results on the size of the linear automorphism group $|\mathrm{Lin}(X)|$ of a well-formed quasismooth $X$ in \Cref{bounds}. In particular, we give an exact characterization of when this group is finite in terms of the degree and weights of $X$ (\Cref{finite}). When it is finite, we prove that there is an upper bound $$|\mathrm{Lin}(X)| \leq C_n \frac{d^{n+2}}{a_0 \cdots a_{n+1}},$$ where $C_n$ depends only on the dimension $n$ (\Cref{bound}). We give a procedure for effectively calculating a value $C_n$ for which the inequality above holds using the Jordan constants of general linear groups over ${\mathbb C}$. We also compute the optimal value for $C_1$. Finally, in \Cref{verygeneral}, we consider automorphisms of a very general quasismooth hypersurface $X$ with a given degree and weights. We prove that when $d \geq 5 \max\{a_0,\ldots,a_{n+1}\}$, the group $\mathrm{Lin}(X)$ is contained in the center of $\mathrm{Aut}(\mathbb{P})$ (\Cref{generic}). When $\mathbb{P} = \mathbb{P}^{n+1}$, the center of $\mathrm{Aut}(\mathbb{P}^{n+1}) = \mathrm{PGL}_{n+2}$ is trivial, so we recover the usual statement that $\mathrm{Lin}(X) = 1$ for $X$ very general. However, this stronger statement is not always true for other choices of weights and degree. We give several examples illustrating new phenomena that arise for generic automorphisms of weighted projective hypersurfaces that don't occur in ordinary projective space. \noindent{\it Acknowledgements. }The author was supported by NSF grant DMS-2054553. Thanks to Alex Duncan and Burt Totaro for useful conversations. 
\section{Background on Weighted Projective Hypersurfaces} \label{background} Throughout the paper, we'll work over the complex numbers, though some of the statements below remain true over other fields. For a complete introduction to weighted projective hypersurfaces, see \cite{Iano-Fletcher}. We say that $\mathbb{P}$ is {\it well-formed} if the action of $\mathbb{G}_{\on{m}}$ on $\mathbb{A}^{n+2}$ has trivial stabilizers in codimension $1$. This holds exactly when $\gcd(a_0,\ldots,\widehat{a_i},\ldots,a_{n+1}) = 1$ for each $i$. Every weighted projective space is isomorphic as a variety to a well-formed one, so we will only consider well-formed spaces $\mathbb{P}$. If $S = {\mathbb C}[x_0,\ldots,x_{n+1}]$ is the graded polynomial ring with the weight of each $x_i$ equal to $a_i$, then $\mathrm{Proj}(S) \cong \mathbb{P}(a_0,\ldots,a_{n+1})$. The space $\mathbb{P}$ is equipped with a reflexive sheaf $\mathcal{O}(d)$ for every integer $d \in {\mathbb Z}$ from the Proj construction, which is the sheaf associated to a Weil divisor of degree $d$. It is a line bundle if and only if $d$ is divisible by every weight $a_i$. Let $f$ be a weighted homogeneous polynomial of degree $d$. Then $f$ defines a hypersurface $X = \{f = 0\} \subset \mathbb{P}(a_0,\ldots,a_{n+1})$ of dimension $n$. The {\it affine cone} $C_X$ over $X$ is the subvariety of $\mathbb{A}^{n+2}$ defined by the same equation $f$, while the {\it punctured} affine cone is $C_X^* \coloneqq C_X \setminus \{0\}$. A hypersurface $X$ is {\it quasismooth} if the punctured affine cone is smooth, and it is {\it well-formed} if $\mathbb{P}$ is well-formed and $X$ does not contain any codimension $2$ singular stratum of $\mathbb{P}$. When $X$ is well-formed and quasismooth, the canonical divisor satisfies the adjunction formula $K_X = \mathcal{O}_X(d-a_0 - \cdots -a_{n+1})$. 
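Both conditions just introduced are elementary arithmetic on the weights. As a quick illustration (a sketch; the function names are ours), one can test well-formedness of $\mathbb{P}(a_0,\ldots,a_{n+1})$ and compute the adjunction degree $d - \sum_i a_i$:

```python
from functools import reduce
from math import gcd

def is_well_formed(weights):
    """P(a_0, ..., a_{n+1}) is well-formed iff omitting any single weight
    leaves a tuple with gcd equal to 1."""
    return all(reduce(gcd, weights[:i] + weights[i + 1:]) == 1
               for i in range(len(weights)))

def canonical_degree(d, weights):
    """The integer r with K_X = O_X(r) for a well-formed quasismooth X_d,
    via the adjunction formula r = d - (a_0 + ... + a_{n+1})."""
    return d - sum(weights)

assert is_well_formed([1, 1, 1, 1, 1])          # ordinary P^4
assert not is_well_formed([2, 2, 2, 2, 2])      # every gcd is 2
assert is_well_formed([2, 1, 1, 1, 1])
assert canonical_degree(4, [1, 1, 1, 1]) == 0   # quartic surface in P^3
```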
Quasismooth hypersurfaces of degree $d$ in $\mathbb{P}$ only exist for certain choices of weights and degree, according to the following criterion, which holds in characteristic $0$: \begin{proposition}{\cite[Theorem 8.1]{Iano-Fletcher}} \label{qsmooth} There exists a quasismooth hypersurface $X$ of degree $d$ in the weighted projective space $\mathbb{P}(a_0,\ldots, a_{n+1})$ if and only if one of the following properties holds: \begin{enumerate} \item $a_i = d$ for some $i \in \{0,\ldots,n+1\}$, or \item for each nonempty subset $I$ of $\{0,\ldots ,n+1\}$, either \begin{enumerate} \item $d$ is an $\mathbb{N}$-linear combination of the weights $a_i$ for $i \in I$, or \item there are at least $|I|$ indices $j \notin I$ such that $d-a_j$ is an $\mathbb{N}$-linear combination of the numbers $a_i$ with $i \in I$. \end{enumerate} \end{enumerate} If (1) or (2) holds, then the general hypersurface of degree $d$ is quasismooth. \end{proposition} Here, ``$\mathbb{N}$-linear combination" means a linear combination with nonnegative integer coefficients. We'll frequently use the following version of the proposition applied to singleton sets $I = \{ i \}$: \begin{proposition} \label{monomialexistence} Suppose $X = \{f = 0\} \subset \mathbb{P}$ is a quasismooth hypersurface of degree $d$ in $\mathbb{P}$. Then for each $i = 0,\ldots,n+1$, there exists a monomial of degree $d$ with nonzero coefficient in $f$ having the form either (a) $x_i^k$, or (b) $x_i^k x_j$ for some $j \neq i$. \end{proposition} \begin{proof} If such a monomial does not exist for some $i$, then the function $f$ and all its derivatives would vanish at the coordinate point $e_i \in \mathbb{A}^{n+2}$, contradicting the smoothness of the punctured affine cone $C_X^*$. \end{proof} As mentioned above, $\mathbb{P}(a_0,\ldots,a_{n+1}) = \mathrm{Proj}(S)$, where $S = {\mathbb C}[x_0,\ldots,x_{n+1}]$ is the graded polynomial ring with variables of the given weights. 
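The condition in \Cref{monomialexistence} is a finite check on the weights and degree. The following sketch (with names of our choosing) tests, for each index $i$, whether a degree-$d$ monomial of the required shape exists:

```python
def has_required_monomial(i, d, weights):
    """Necessary condition for quasismoothness at the coordinate point e_i:
    some degree-d monomial of the form (a) x_i^k or (b) x_i^k * x_j, j != i."""
    a = weights
    if d % a[i] == 0:                 # case (a): x_i^k with k = d / a_i
        return True
    for j, aj in enumerate(a):        # case (b): x_i^k * x_j for some j != i
        if j != i:
            rem = d - aj
            if rem > 0 and rem % a[i] == 0:
                return True
    return False

# Degree 4g+2 hyperelliptic model in P^2(2g+1, 2, 1), here with g = 2:
assert all(has_required_monomial(i, 10, [5, 2, 1]) for i in range(3))
# Degree 5 in P^2(1, 1, 3) fails at the third coordinate point:
assert not has_required_monomial(2, 5, [1, 1, 3])
```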
We also use the notation $S_m$ to denote the $m$th graded piece of $S$. Any graded automorphism of $S$ induces an automorphism of weighted projective space; let $\mathrm{Aut}(S)$ denote the group of graded automorphisms. In fact, every automorphism of $\mathbb{P}$ comes from $\mathrm{Aut}(S)$. This is proven in \cite[Section 8]{Amrani}, but for completeness, we provide a proof here: \begin{lemma} \label{autP} Let $\mathbb{P} \coloneqq \mathbb{P}(a_0,\ldots,a_{n+1})$ be a well-formed weighted projective space and $S \coloneqq {\mathbb C}[x_0,\ldots,x_{n+1}]$ the graded polynomial ring with the weight of $x_i$ equal to $a_i$. Then the natural map $\mathrm{Aut}(S) \rightarrow \mathrm{Aut}(\mathbb{P})$ is surjective. The kernel is isomorphic to ${\mathbb C}^*$, where the isomorphism associates to any $t \in {\mathbb C}^*$ the automorphism mapping $x_i \mapsto t^{a_i} x_i$ for each $i$. \end{lemma} \begin{proof} Suppose that $f: \mathbb{P} \rightarrow \mathbb{P}$ is an automorphism. Pullback by $f$ yields an isomorphism of class groups $f^*: \mathrm{Cl}(\mathbb{P}) \rightarrow \mathrm{Cl}(\mathbb{P})$. Since $\mathrm{Cl}(\mathbb{P}) \cong {\mathbb Z}$ and ampleness of classes must be preserved, $f^* \mathcal{O}(1) = \mathcal{O}(1)$. It follows that for every $m$ we have an isomorphism $f^*: H^0(\mathbb{P},\mathcal{O}(m)) \rightarrow H^0(\mathbb{P},\mathcal{O}(m))$ and that these isomorphisms are compatible with tensor product. Since $H^0(\mathbb{P},\mathcal{O}(m))$ is the $m$th graded piece of $S$, these assemble to a graded isomorphism $f^*: S \rightarrow S$, which induces the original $f$. It's straightforward to check that only maps of the form $x_i \mapsto t^{a_i}x_i$ for all $i$ are in the kernel, and that every $t \neq 1$ gives a non-identity map $S \rightarrow S$ by well-formedness. 
\end{proof} Given a fixed embedding of a weighted projective hypersurface $X \subset \mathbb{P}$, we'll define the subgroup $\mathrm{Lin}(X) \subset \mathrm{Aut}(X)$ of {\it linear automorphisms} to consist of those automorphisms of $X$ which extend to $\mathbb{P}$ (or, equivalently, to some graded automorphism of $S$). We retain the terminology ``linear" by analogy with ordinary projective space, but it's important to note that the images of the variables $x_i$ under a graded automorphism of $S$ need not be linear as polynomials. For example, if $S = {\mathbb C}[x_0,x_1,x_2]$ and $x_0,x_1,x_2$ have weights $4,3,$ and $1$, respectively, we could define an element of $\mathrm{Aut}(S)$ by $x_0 \mapsto x_0 + x_1 x_2 - x_2^4$, $x_1 \mapsto 2x_1 + x_2^3$, $x_2 \mapsto - x_2$. We'll need the following fact about automorphisms of graded polynomial rings in Sections \ref{bounds} and \ref{verygeneral}. We say that a group $G \subset \mathrm{Aut}(S)$ is {\it diagonalizable} if, after conjugating $G$ by an automorphism of $S$, each element of $G$ sends each variable $x_i$ to some scalar multiple of itself. Any such automorphism also sends an arbitrary monomial to a scalar multiple of itself. \begin{lemma} \label{diagonalizable} Let $S = {\mathbb C}[x_0,\ldots,x_{n+1}]$ be a weighted polynomial ring with weights $a_0, \ldots, a_{n+1}$ and $\mathrm{Aut}(S)$ the group of graded automorphisms. If $G \subset \mathrm{Aut}(S)$ is a finite abelian group, then $G$ is diagonalizable. \end{lemma} This is, of course, a generalization of the well-known fact that any finite abelian group in $\mathrm{GL}_{n+2}({\mathbb C})$ is diagonalizable. \begin{proof} The group $\mathrm{Aut}(S)$ embeds naturally in $\bigoplus_{b \in \mathcal{B}} \mathrm{GL}(S_b)$, where $\mathcal{B} = \{b : b = a_i \text{ for some } i\}$ is the set of integers that appear as a weight of $S$. (In particular, $\mathrm{Aut}(S)$ is a linear algebraic group.) 
To diagonalize a finite abelian group $G \subset \mathrm{Aut}(S)$, we'll focus on the representation on each of these pieces $S_b$ in turn. Of course, within each $S_b$, we can diagonalize $G$ using some change of coordinates in $S_b$, but we must prove that we can do this simultaneously for all $b$ using conjugation by an element $\gamma$ in $\mathrm{Aut}(S)$. To do this, we put the integers in $\mathcal{B}$ in increasing order and construct $\gamma$ inductively. For the smallest integer $b_0 \in \mathcal{B}$, there is a basis of $S_{b_0}$ given by $\{x_i: a_i = b_0 \}$. Define $\gamma$ on the smallest weight variables in such a way that the representation of $G$ on $S_{b_0}$ becomes diagonal in the given basis (this is no problem because all monomials of degree $b_0$ are generators of $S$). By the inductive hypothesis, we now assume $G$ acts by multiplication by a scalar on all variables $x_i$ of weight $a_i < b$ and consider the action on $S_b$. Write a basis for $S_b$ beginning with the variables of weight $b$, followed by the other monomials of degree $b$. By the inductive hypothesis, we've already constructed a $\gamma$ so that the representation of $G$ after changing coordinates in only the smaller variables is of the form $$ \begin{pmatrix} A(g) & 0 \\ B(g) & D(g) \end{pmatrix}.$$ Here $D(g)$ is a diagonal sum of characters of $G$, because each $g \in G$ acts on monomials in the smaller weight variables by scalar multiplication. Since the entire space $S_b$ must also be a direct sum of one-dimensional characters of $G$, we can find a change of coordinates affecting only coordinates in the first part of the basis (variables of weight $b$) so that the representation becomes diagonal. This finishes the definition of the automorphism $\gamma$ on variables of weight up to $b$. By induction, the proof is complete. \end{proof} \section{Linearity of Hypersurface Automorphisms} \label{linearity} Let $X$ be a smooth hypersurface in $\mathbb{P}^{n+1}$. 
Then, automorphisms of $X$ extend to $\mathbb{P}^{n+1}$ whenever $n \geq 1$ and $d \geq 3$ unless $(n,d) = (1,3)$ or $(2,4)$. When $n \geq 3$, this is a consequence of the Grothendieck-Lefschetz theorem \cite[Expos\'{e} XII, Corollaire 3.6]{SGA2}, which holds even for arbitrarily singular hypersurfaces in projective space. For $n = 2$, $d \neq 4$, this is a theorem due to Matsumura and Monsky \cite{MM}; when $n = 1, d \geq 4$, it is due to Chang \cite{Chang}. In this section, we prove a generalization to hypersurfaces in weighted projective space. We will deduce this statement from the local version of Grothendieck's Lefschetz theorems, but some care is needed. One complication is that a hypersurface in weighted projective space need not be a Cartier divisor (this is usually assumed even for variants of the global Lefschetz theorems that deal with singular varieties, e.g. \cite{RS}). \begin{theorem} \label{main-theorem} Let $X \subset \mathbb{P}(a_0,\ldots,a_{n+1})$ and $X' \subset \mathbb{P}(a_0',\ldots,a_{n+1}')$ be two complex weighted projective hypersurfaces of weighted degrees $d$ and $d'$, respectively. Suppose further that $X$ and $X'$ are well-formed and quasismooth, $X$ is not a linear cone, and one of the following holds: \begin{enumerate} \item $n \geq 3$; or \item $n = 2$ and $a_0 + a_1 + a_2 + a_3 \neq d$. \end{enumerate} Then, if $g: X' \rightarrow X$ is an isomorphism, we have $d = d'$, the $a_i$ coincide with the $a_i'$ up to rearrangement, and $g$ is induced by an automorphism of $\mathbb{P}(a_0,\ldots,a_{n+1})$. \end{theorem} \begin{remark} \begin{enumerate} \item The assumption of well-formedness is necessary (note that a quasismooth hypersurface in a well-formed weighted projective space of dimension at least $3$ is automatically itself well-formed \cite[Theorem 6.17]{Iano-Fletcher}). 
For example, any hypersurface $X_4 \subset \mathbb{P}^4(2,2,2,2,2)$ is isomorphic as a variety to a hypersurface $X_2 \subset \mathbb{P}^4(1,1,1,1,1)$ with the same equation. \item A hypersurface is a {\it linear cone} if the equation $f$ contains a linear term $x_i$ for some $i$. In this case, the hypersurface $f = 0$ is isomorphic to a weighted projective space of smaller dimension, and the theorem above fails rather trivially: for example, $\{x_0 = 0 \} \subset \mathbb{P}^4(1,1,1,1,1)$ and $\{x_0 = 0\} \subset \mathbb{P}^4(2,1,1,1,1)$ are both isomorphic to $\mathbb{P}^3$. We avoid the linear cone case to exclude this pathology. \end{enumerate} \end{remark} \begin{proof} Let $S = {\mathbb C}[x_0,\ldots,x_{n+1}]$ be the graded polynomial ring in variables $x_i$ of degrees $a_i$, and define $S'$ in the same way, so that $\mathbb{P} \coloneqq \mathbb{P}(a_0,\ldots,a_{n+1}) = \mathrm{Proj}(S)$ and $\mathbb{P}' \coloneqq \mathbb{P}(a_0',\ldots,a_{n+1}') = \mathrm{Proj}(S')$. If $f$ and $f'$ are the homogeneous polynomials of weighted degrees $d$, $d'$ defining $X$ and $X'$, respectively, then $X = \on{Proj}(S/(f))$ and $X' = \on{Proj}(S'/(f'))$. We'll first show that the isomorphism $g: X' \rightarrow X$ comes from an isomorphism of graded rings $g^*: S/(f) \rightarrow S'/(f')$. This is non-trivial since not every isomorphism of two Proj's comes from an underlying isomorphism of the graded rings. For each integer $m$, we may twist the ideal sheaf sequence for $X$ by $\mathcal{O}(m)$ to obtain $$0 \rightarrow \mathcal{O}_{\mathbb{P}}(m-d) \xrightarrow{\cdot f} \mathcal{O}_{\mathbb{P}}(m) \rightarrow \mathcal{O}_X(m) \rightarrow 0.$$ The first map is multiplication by $f$. By \cite[Section 1.4]{Dolgachev}, $H^1(\mathbb{P},\mathcal{O}_{\mathbb{P}}(m-d)) = 0$ for all $m$ because the dimension of the weighted projective space $\mathbb{P}$ is at least $2$. This is the analog of projective normality for weighted projective hypersurfaces. 
Therefore, we arrive at the exact sequence of global sections $$0 \rightarrow H^0(\mathbb{P},\mathcal{O}_{\mathbb{P}}(m-d)) \xrightarrow{\cdot f} H^0(\mathbb{P},\mathcal{O}_{\mathbb{P}}(m)) \rightarrow H^0(X,\mathcal{O}_X(m)) \rightarrow 0.$$ Since $S_m \cong H^0(\mathbb{P},\mathcal{O}_{\mathbb{P}}(m))$ and the image of the first map is $f S_{m-d}$, we may identify $H^0(X,\mathcal{O}_X(m))$ with the $m$th graded piece of $S/(f)$. Therefore, $$S/(f) = \bigoplus_{m=0}^{\infty} H^0(X,\mathcal{O}_X(m)),$$ and similarly with $S'/(f')$. Thus, it will be enough to find maps $H^0(X,\mathcal{O}_X(m)) \rightarrow H^0(X',\mathcal{O}_{X'}(m))$ for each $m$ to build our desired map $S/(f) \rightarrow S'/(f')$. Since $X$ is quasismooth, the punctured affine cone $C_X^* \subset \mathbb{A}^{n+2} \setminus \{0\}$ is smooth. The hypersurface $X$ is a quotient of $C_X^*$ by the action of $G \coloneqq \mathbb{G}_{\on{m}}$ on $\mathbb{A}^{n+2} \setminus \{0\}$; we'll denote the quotient morphism by $q: C_X^* \rightarrow X$. Using the assumption of well-formedness of $X$, $\on{Sing}(X)$ has codimension at least $2$ in $X$. $\on{Sing}(X)$ is also the image of the locus in $C_X^*$ where the $G$-action has nontrivial stabilizers. Thus, we have $\on{Cl}(X) \cong \on{Cl}(X_{\on{sm}})$, where $X_{\on{sm}} = X \setminus \on{Sing}(X)$ is the smooth locus. Because $X_{\on{sm}}$ is smooth, $\on{Cl}(X_{\on{sm}}) = \on{Pic}(X_{\on{sm}})$. Further, $\on{Pic}(X_{\on{sm}})$ is isomorphic to the group $\on{Pic}^G(q^{-1}(X_{\on{sm}}))$ of $G$-equivariant line bundles on $q^{-1}(X_{\on{sm}})$, since $q^{-1}(X_{\on{sm}}) \rightarrow X_{\on{sm}}$ is the quotient by a free group action. Finally, the complement of $q^{-1}(X_{\on{sm}})$ has codimension at least $2$ in $C_X^*$, so $G$-equivariant line bundles on $q^{-1}(X_{\on{sm}})$ extend to all of $C_X^*$. 
In summary, $$\on{Cl}(X) \cong \on{Pic}(X_{\on{sm}}) \cong \on{Pic}^G(q^{-1}(X_{\on{sm}})) \cong \on{Pic}^G(C_X^*).$$ The isomorphism $g : X' \rightarrow X$ induces an isomorphism $X'_{\on{sm}} \rightarrow X_{\on{sm}}$, which also gives a pullback map $\on{Pic}(X_{\on{sm}}) \xrightarrow{\cong} \on{Pic}(X'_{\on{sm}})$. We'll identify $\mathrm{Cl}(X)$ and $\mathrm{Pic}^G(C_X^*)$ below without further comment and use the same notation $\mathcal{O}_X(m)$ for the corresponding elements in either of these groups. \begin{proposition} \label{preservegenerator} Suppose that $X$ and $X'$ satisfy the conditions of \Cref{main-theorem}. Then an isomorphism $g: X' \rightarrow X$ induces an isomorphism $\on{Cl}(X) \xrightarrow{\cong} \on{Cl}(X')$ which maps $\mathcal{O}_X(1)$ to $\mathcal{O}_{X'}(1)$. \end{proposition} \begin{proof} We argue differently depending on the dimension of $X$. When $\dim(X) \geq 3$, we claim that $\on{Pic}^G(C_X^*) \cong {\mathbb Z} \cdot \mathcal{O}_X(1)$ (and analogously for $X'$). Indeed, if we forget the $G$-equivariant structure, all line bundles on the smooth variety $C_X^*$ are trivial when $\dim(X) \geq 3$. This is because the local ring $\mathcal{O}_{C_X,0}$ is a complete intersection ring of dimension at least $4$, regular outside its maximal ideal, so it is a parafactorial ring \cite[Expos\'{e} XI, Th\'{e}or\`{e}me 3.13]{SGA2}. It follows that $\on{Pic}(C_X) \rightarrow \on{Pic}(C_X^*)$ is an isomorphism; furthermore, $\on{Pic}(C_X)$ is trivial \cite[Section 3.2.2]{Dolgachev}. From $\mathrm{Pic}(C_X^*) = 0$, we deduce that the group of $G$-equivariant line bundles on $C_X^*$ is naturally isomorphic to the character group of $G$, namely ${\mathbb Z}$ \cite[Lemma 4.1.7]{Brion}. It's straightforward to check that the linearization of the trivial bundle by the character $t \mapsto t^m$ coincides with the $G$-equivariant bundle $\mathcal{O}_X(m)$. Therefore, $\on{Cl}(X) = {\mathbb Z} \cdot \mathcal{O}_X(1)$. 
Since $X$ and $X'$ are isomorphic, $\dim(X') \geq 3$ also, so identical reasoning shows $\on{Cl}(X') = {\mathbb Z} \cdot \mathcal{O}_{X'}(1)$. The pullback of an ample divisor by an isomorphism is still ample, so the isomorphism $\on{Cl}(X) \xrightarrow{\cong} \on{Cl}(X')$ must send $\mathcal{O}_X(1)$ to $\mathcal{O}_{X'}(1)$. This proves the proposition when $n \geq 3$. If $\dim(X) = 2$, we use a different argument, since we may not have $\on{Cl}(X) \cong {\mathbb Z}$ in this case. The canonical class of $X$ is $K_X = \mathcal{O}_X(r)$, where $r = d - \sum_i a_i \neq 0$ by assumption. We also have $K_{X'} = \mathcal{O}_{X'}(r')$ with $r' = d' - \sum_i a_i'$ having the same sign as $r$, depending on whether $X$ and $X'$ have ample or anti-ample canonical class. The canonical class is preserved by isomorphism, so pullback sends $\mathcal{O}_X(r)$ to $\mathcal{O}_{X'}(r')$. We'll prove that $r = r'$ and moreover that $\mathcal{O}_X(1)$ maps to $\mathcal{O}_{X'}(1)$. \begin{lemma} \label{fundamentalgroup} Let $V$ be a connected scheme of finite type over ${\mathbb C}$. If $\pi_1(V) = 1$, then ${\on{Pic}}(V)$ is torsion-free. \end{lemma} \begin{proof} For any positive integer $\ell$, we have the following Kummer exact sequence of sheaves of abelian groups on $V$ in the \'{e}tale topology: $$1 \rightarrow \mu_{\ell} \rightarrow \mathbb{G}_{\on{m},V} \xrightarrow{x \mapsto x^{\ell}} \mathbb{G}_{\on{m},V} \rightarrow 1.$$ The associated long exact sequence in cohomology gives $$\cdots \rightarrow H^1(V,\mu_{\ell}) \rightarrow \on{Pic}(V) \xrightarrow{L \mapsto L^{\ell}} \on{Pic}(V) \rightarrow \cdots.$$ Since $V$ is connected, $H_1(V,{\mathbb Z})$ is the abelianization of $\pi_1(V)$; hence $H_1(V,{\mathbb Z}) = 0$. The universal coefficient theorem then gives that $H^1(V,\mu_{\ell}) \cong H^1(V,{\mathbb Z}/{\ell}) = 0$, so that the $\ell$th tensor power map on $\on{Pic}(V)$ is injective. Since this holds for any positive integer $\ell$, $\on{Pic}(V)$ is torsion-free. 
\end{proof} \begin{commentA} Here's another, more geometric proof of \Cref{fundamentalgroup}. We use the cyclic covering trick. Let $L$ be a torsion element of ${\on{Pic}}(V)$ of order $\ell$, so that there is an isomorphism $s: L^{\ell} \cong \mathcal{O}_V$ given by some nowhere vanishing global section of $L^{\ell}$. We define a finite covering $$\underline{\mathrm{Spec}}_V \bigoplus_{i = 0}^{\ell-1} L^{i} \rightarrow V.$$ Multiplication in the sheaf of $\mathcal{O}_V$-algebras $\mathcal{A} \coloneqq \oplus_{i = 0}^{\ell-1} L^{i}$ is defined by the usual multiplication $L^{a} \otimes L^{b} \rightarrow L^{a+b}$ when $a + b < \ell$ and by the composition $L^a \otimes L^b \rightarrow L^{a+b} \xrightarrow{s} L^{a+b-\ell}$ if $a + b \geq \ell$. Then this is an \'etale cover of $V$. However, since the fundamental group of $V$ is trivial, this covering must simply be a disjoint union of copies of $V$. This means that $L \cong \mathcal{O}_V$, as required. \end{commentA} Now we'll conclude the proof of \Cref{preservegenerator}. By \cite[Section 3.2.2]{Dolgachev}, the fundamental group of the punctured affine cone $C_X^*$ vanishes whenever $X$ is a quasismooth hypersurface of dimension at least $2$. Therefore, the (non-equivariant) Picard group $\on{Pic}(C_X^*)$ is torsion-free. Next, we use the exact sequence $$0 \rightarrow {\mathbb Z} \rightarrow \on{Pic}^G(C_X^*) \rightarrow \on{Pic}(C_X^*),$$ where the first map sends $1 \mapsto \mathcal{O}_X(1)$ and the second map forgets the linearization. Exactness of this sequence follows from \cite[Theorem 4.2.2]{Brion} plus the observation that ${\mathbb Z} \rightarrow \on{Pic}^G(C_X^*)$ is injective by ampleness of $\mathcal{O}_X(1)$ on $X$. It follows that $\on{Pic}^G(C_X^*)$ is torsion-free, since any torsion element must map to zero in $\on{Pic}(C_X^*)$, and hence be in the image of ${\mathbb Z} \rightarrow \on{Pic}^G(C_X^*)$. 
In addition, we see that $\mathcal{O}_X(k) \in \on{Pic}^G(C_X^*)$ is not divisible by any integer other than the factors of $k$, because otherwise the cokernel of the first map would contain torsion elements. This implies $r = r'$ above. Indeed, if $g^*(\mathcal{O}_X(r)) \cong \mathcal{O}_{X'}(r')$, then the image $L \in \on{Pic}^G(C_{X'}^*)$ of $\mathcal{O}_X(1)$ satisfies $L^r = \mathcal{O}_{X'}(r')$. Since $\mathcal{O}_{X'}(r')$ is divisible by $r$, $r$ is a factor of $r'$ by the above. Arguing symmetrically with the inverse isomorphism gives $r = r'$. In particular, we then have that the image of $\mathcal{O}_X(1)$ differs from $\mathcal{O}_{X'}(1)$ by a torsion element of order dividing $r$. Because we also saw that $\on{Pic}^G(C_{X'}^*)$ is torsion-free, this proves that $g^*(\mathcal{O}_X(1)) \cong \mathcal{O}_{X'}(1)$. \end{proof} Finally, we're ready to complete the proof of \Cref{main-theorem}. We've now seen that under the assumptions of the theorem, pullback by the isomorphism $g$ sends $\mathcal{O}_X(m)$ to $\mathcal{O}_{X'}(m)$ for any $m \in {\mathbb Z}$. Therefore, $g$ induces isomorphisms $$g^*: H^0(X,\mathcal{O}_{X}(m)) \rightarrow H^0(X',\mathcal{O}_{X'}(m)).$$ These maps respect the tensor product of sections, so we may assemble them into an isomorphism of graded rings $g^*: S/(f) \rightarrow S'/(f')$. In particular, $g^*$ gives an isomorphism of the maximal irrelevant ideal $\mathfrak{m}$ of elements of positive degree in $S/(f)$ with $\mathfrak{m}' \subset S'/(f')$. The number of generators of $\mathfrak{m}$ and their degrees (up to reordering) coincide with those of $\mathfrak{m'}$. This is because if $m$ is the smallest positive index with the graded piece $\mathfrak{m}_m$ nonzero, the number of generators of degree $m$ equals $\dim(\mathfrak{m}_m)$, and then we can factor out by these generators and repeat inductively. 
Since $X$ is quasismooth and not a linear cone, we have that $d$, the degree of the smallest relation among the $x_i$, is strictly greater than all weights $a_i$. It follows that $x_0,\ldots,x_{n+1}$ are a minimal system of $n+2$ generators for the homogeneous ideal $\mathfrak{m}$. Similarly, $x_0',\ldots,x_{n+1}'$ generate $\mathfrak{m}'$, which is isomorphic to $\mathfrak{m}$, so this set of generators must also be minimal and the collection $a_0,\ldots,a_{n+1}$ must be the same as $a_0',\ldots,a_{n+1}'$, up to reordering. Dimension counting then shows that the relations also occur in the same degree, so we have $d = d'$. Lastly, we observe that the isomorphism $g^* \colon S/(f) \rightarrow S'/(f')$ is induced by an isomorphism $S \rightarrow S'$. Indeed, for $m < d$, the $m$th graded piece of $S$ is isomorphic to that of $S/(f)$, so $g^*$ gives isomorphisms $S_m \rightarrow S'_m$. All the generators of $S$ are contained in the graded pieces $S_m$ with $m < d$, so this gives rise to a homomorphism $S \rightarrow S'$. The inverse of $g$ similarly gives a morphism $S' \rightarrow S$ whose composition with the previous map is the identity on generators, and hence on all of $S$. Therefore, the homomorphism $S \rightarrow S'$ is an isomorphism. By our work above, both $\on{Proj}(S)$ and $\on{Proj}(S')$ are isomorphic to $\mathbb{P}(a_0,\ldots,a_{n+1})$, so the isomorphism $S \rightarrow S'$ gives an automorphism of this weighted projective space inducing the original isomorphism $g \colon X' \rightarrow X$ of hypersurfaces, as claimed. \end{proof} This theorem fails if we weaken the hypotheses on dimension. As mentioned above, two smooth plane curves $C$ and $C'$ in $\mathbb{P}^2$ of degree at least $4$ are isomorphic if and only if they differ by an isomorphism of the projective plane \cite{Chang}. 
However, the situation for weighted projective curves is considerably more complicated: there exist many curves of genus at least $2$ which can be embedded as well-formed quasismooth hypersurfaces in different weighted projective spaces. \begin{example}[Hyperelliptic Curves] Let $C$ be a hyperelliptic curve of genus $g \geq 2$, $p: C \rightarrow \mathbb{P}^1$ a $2:1$ cover, and $P \in C$ a ramification point of the cover. Then $C$ is isomorphic to $\mathrm{Proj} (R(C,P))$, where $$R(C,P) \coloneqq \bigoplus_{k=0}^{\infty} H^0(C,kP).$$ This ring has generators $x,y$, and $z$ in degrees $1$, $2$, and $2g+1$, respectively, and a single relation in degree $4g+2$. It is possible to choose the generator $z$ so that the relation has the form $f(x,y,z) \coloneqq z^2 - h(x^2,y) = 0$, where $h$ is a polynomial of degree $2g+1$. This embeds $C$ as a quasismooth hypersurface of degree $4g+2$ in $\mathbb{P}^2(2g+1,2,1)$. However, if we instead use $R(C,2P) \coloneqq \bigoplus_{k=0}^{\infty} H^0(C,2kP)$, we have another embedding of the same curve as a degree $2g+2$ hypersurface in $\mathbb{P}^2(g+1,1,1)$. \end{example} \begin{example} There are also non-hyperelliptic curves exhibiting similar behavior. For $C$ a smooth non-hyperelliptic curve of genus $3$, we have the canonical embedding of $C$ in $\mathbb{P}^2 = \mathbb{P}^2(1,1,1)$ as a degree $4$ plane curve. Suppose further that $C$ has the property that there is a line $\ell \subset \mathbb{P}^2$ tangent to $C$ at a point $P$ with multiplicity $4$. Then we have $4P \sim K_C$. One can show that the ring $R(C,P)$ has generators in degrees $1$, $3$, and $4$, giving an embedding $C \subset \mathbb{P}^2(1,3,4)$ as a hypersurface of degree $12$. There are many similar examples for curves of higher genus. \end{example} When $\dim X = 2$ but $\sum_i a_i = d$, \Cref{main-theorem} also fails. This is because the resulting hypersurfaces are K3 surfaces in this case, which frequently have infinite automorphism group. 
An example of Fano and Severi of a quartic surface in $\mathbb{P}^3$ with infinite automorphism group is described in the proof of Theorem 4 in \cite{MM}, for instance. However, we'll see below in \Cref{finite} that the linear automorphism groups of weighted projective surfaces with $\sum_i a_i = d$ are always finite. Hence, some automorphisms of these surfaces aren't linear. \begin{comment} For surfaces, at least part of the theorem above remains true: \begin{proposition} Using the same notation and assumptions as in \Cref{main-theorem}, suppose that $X$ and $X'$ are Calabi-Yau weighted projective surfaces (so that $n = 2$, $\sum_i a_i = d$, and $\sum_i a_i' = d'$) with the property that $X \cong X'$. Then $X$ and $X'$ belong to the same weighted projective $\mathbb{P}^3$. \end{proposition} -CAN I EXTEND TO BIRATIONAL? \begin{proof} Using Reid's famous list of the 95 families of (well-formed, quasismooth) K3 hypersurfaces in weighted projective space \cite[Section 13.3]{Iano-Fletcher}, we'll show that no two members from different families could possibly be isomorphic. First, each family comes with a certain basket of cyclic quotient singularities. Any two quasismooth hypersurfaces of the same degree in the same weighted projective space have the same singularities, \'{e}tale-locally \cite[Lemma 2.5]{ETW}. It follows that an isomorphism between two families can only occur if the baskets are the same. This only occurs for two pairs among the 95 hypersurfaces: $X_4 \subset \mathbb{P}(1,1,1,1)$ and $X_6 \subset \mathbb{P}(1,1,1,3)$ are both smooth, while $X_5 \subset \mathbb{P}(1,1,1,2)$ and $X_{12} \subset \mathbb{P}(1,1,4,6)$ both have a single $A_1$ singularity. How do we distinguish these? Can we use something about principal polarizations? 
\end{proof} \end{comment} \section{Bounds on Linear Automorphism Groups} \label{bounds} Recall that the linear automorphism group $\on{Lin}(X) \subset \on{Aut}(X)$ of a hypersurface $X \subset \mathbb{P}$ is the subgroup of automorphisms that extend to $\mathbb{P}$. As long as the degree of $X$ exceeds all weights of $\mathbb{P}$, the only automorphism of $\mathbb{P}$ that fixes $X$ pointwise is the identity (use the same argument on extending morphisms of graded rings as above), so in this case we may consider $\on{Lin}(X)$ as a subgroup of both $\mathrm{Aut}(X)$ and $\mathrm{Aut}(\mathbb{P})$. \Cref{main-theorem} shows that $\on{Lin}(X) = \mathrm{Aut}(X)$ whenever $\dim(X) \geq 3$ or $\dim(X) = 2$ and $K_X \ncong \mathcal{O}_X$. In this section, we prove several results on the size of $\on{Lin}(X)$. These will imply the same results for $\mathrm{Aut}(X)$ in most dimensions in light of the last section. First, we give a criterion in terms of the degree $d$ and the weights $a_0,\ldots,a_{n+1}$ which determines whether or not $\on{Lin}(X)$ is finite. \begin{theorem} \label{finite} Let $X \subset \mathbb{P}(a_0,\ldots,a_{n+1})$ be a well-formed, quasismooth weighted projective hypersurface of degree $d$, where $n \geq 1$. Then $\on{Lin}(X)$ is finite if and only if one of the following two conditions holds: \begin{enumerate} \item $d > 2 \max \{a_0,\ldots,a_{n+1}\}$; or \item $d = 2 \max \{a_0,\ldots,a_{n+1}\}$ and only a single weight achieves the maximum. \end{enumerate} Further, if neither (1) nor (2) holds (so that $\mathrm{Lin}(X)$ is infinite), then $X$ is rational. \end{theorem} \begin{remark} This generalizes a theorem of Matsumura and Monsky \cite[Theorem 1]{MM}, which shows that the linear automorphism group of a smooth hypersurface of degree $d$ in $\mathbb{P}^{n+1}$ over an algebraically closed field $k$ is finite if $n \geq 2$ and $d \geq 3$. 
For $k = {\mathbb C}$, this result was known in some form at least as far back as 1880, when it appeared in a work of Jordan \cite{Jordan2} (see also \cite[Section 6]{OS} for further historical remarks on this theorem). Note also that if $d < 3$, $X$ is a hyperplane or a quadric. These always have infinite linear automorphism groups and are rational for $n \geq 1$. \end{remark} \begin{proof} First, assume that one of the two conditions on $d$ in \Cref{finite} holds. We'll show that $\on{Lin}(X)$ is finite, using generally the same approach as in \cite{MM}. By \Cref{autP}, any automorphism of $\mathbb{P}$ comes from a graded automorphism of $S = {\mathbb C}[x_0,\ldots,x_{n+1}]$, so we perform most of our analysis on the level of graded ring automorphisms. For any graded homomorphism $S \rightarrow S$, the image of each generator $x_i$ is contained in the finite-dimensional vector space $S_{a_i}$. Thus, the endomorphism monoid of $S$ is isomorphic to $\mathbb{A}^{N}$ as a variety, where $N \coloneqq \sum_i \dim(S_{a_i})$. The linear algebraic group $\mathrm{Aut}(S)$ is an open subvariety of $\mathbb{A}^N$. This is a generalization of the fact that $\mathrm{GL}_{n+2}({\mathbb C})$ is an open subvariety of $\mathbb{A}^{(n+2)^2}$. We saw in \Cref{autP} that $\mathrm{Aut}(S)$ acts on $\mathbb{P}(a_0,\ldots,a_{n+1})$ with kernel isomorphic to ${\mathbb C}^*$, where $t \in {\mathbb C}^*$ acts as $t \cdot x_i = t^{a_i} x_i$ for each $i$. Let $G \subset \mathrm{Aut}(S)$ be the subgroup of elements mapping the polynomial $f$ defining $X$ to a multiple of itself. Then $\on{Lin}(X) = G/{\mathbb C}^*$. Since $\on{Lin}(X)$ is an algebraic group, if it has trivial Lie algebra, then it must be finite. We'll show that the Lie algebra $\mathfrak{g}$ of $G$ equals that of the subgroup $\ker(\mathrm{Aut}(S) \rightarrow \mathrm{Aut}(\mathbb{P})) \cong {\mathbb C}^*$; this implies that the Lie algebra of the quotient $\mathrm{Lin}(X) = G/{\mathbb C}^*$ is trivial, as required. 
The tangent space to $\mathrm{Aut}(S)$ at the identity is naturally isomorphic to $S_{a_0} \oplus \cdots \oplus S_{a_{n+1}}$, where an element $z \coloneqq (z_0,\ldots,z_{n+1})$ corresponds to the infinitesimal automorphism $x_i \mapsto x_i + \epsilon z_i$. Our aim is to show that if $z \in \mathfrak{g}$, then in fact $z$ is a multiple of $(a_0 x_0, \ldots, a_{n+1}x_{n+1})$, which is the derivative of the function ${\mathbb C}^* \rightarrow G$ given by $t \mapsto (x_i \mapsto t^{a_i} x_i)$ at $t = 1$. Every $z$ in the Lie algebra of $\mathrm{Aut}(S)$ defines a derivation $D_z: S \rightarrow S$ by the formula $h \mapsto \frac{d}{d \epsilon} h(x + \epsilon z)|_{\epsilon = 0}$. The fact that $z \in \mathfrak{g}$ means that $D_z(f) = c f$ for some constant $c$. But we may express $D_z(f)$ in terms of partial derivatives $f_i \coloneqq \frac{\partial f}{\partial x_i}$ as: $$D_z(f) = \sum_i f_i z_i.$$ Therefore, we have $$\sum_i f_i z_i = cf = \sum_i \frac{ca_i x_i}{d}f_i.$$ In this equation, the last equality follows from the following weighted variant of Euler's formula for homogeneous polynomials. Namely, for $f$ homogeneous of weighted degree $d$ in variables $x_i$ with weights $a_i$: $$\sum_i a_i x_i f_i = df.$$ (By linearity, it suffices to check this on a single monomial $x_0^{m_0} \cdots x_{n+1}^{m_{n+1}}$ of weighted degree $d$, for which $x_i f_i = m_i f$ and hence $\sum_i a_i x_i f_i = \left( \sum_i a_i m_i \right) f = df$.) Rearranging the equation above now gives \begin{equation} \label{lincombo} \sum_i \left(z_i - \frac{ca_i x_i}{d} \right)f_i = 0. \end{equation} Since $X$ is a quasismooth hypersurface, its punctured affine cone in $\mathbb{A}^{n+2} \setminus \{0\}$ is smooth, so that the only common zero of the partial derivatives $f_0,\ldots,f_{n+1}$ is at the origin $x = 0$ in $\mathbb{A}^{n+2}$. The ring $S$ is Cohen-Macaulay, each $f_i$ is a homogeneous element of positive degree in a graded ring, and these $n+2$ polynomials cut out the subvariety $\{0\}$ of codimension $n+2$ in $\mathbb{A}^{n+2}$. Therefore, it follows that the $f_i$, taken in any order, form a regular sequence in the ring $S$. 
A particular consequence of that fact is that $f_i$ is not a zero divisor in the ring $S/I_i$, where $I_i \coloneqq (f_0,\ldots,\hat{f}_i,\ldots,f_{n+1})$. The equation \eqref{lincombo} then implies that $z_i - ca_i x_i/d \in I_i$ for each $i$. That element is homogeneous of weighted degree $a_i$, so we can guarantee that it is zero if every nonzero homogeneous element of $I_i$ has degree greater than $a_i$. Each partial derivative $f_j$ has degree $d - a_j$, so we can conclude that $z_i - ca_i x_i/d = 0$ if $d-a_j$ is greater than $a_i$ for all $j \neq i$. Either of the two conditions in the theorem guarantees that this criterion is met for all $i$. Therefore, if we assume one of these conditions, then $z_i = ca_i x_i/d = (c/d) a_i x_i$ for all $i$, as required. Next, we'll show the converse: if either $d < 2 \max \{a_0,\ldots,a_{n+1}\}$ or $d = 2 \max \{a_0,\ldots,a_{n+1}\}$ and there are multiple weights equal to the maximum, then $\on{Lin}(X)$ is infinite. Furthermore, we'll prove that $X$ is rational. It's helpful to consider a few distinct cases. Throughout, we'll assume the weights are arranged in decreasing order, so in particular $a_0 = \max \{a_0,\ldots,a_{n+1}\}$. We note that either assumption on degree above guarantees that $d < a_0 + \cdots + a_{n+1}$, so that the hypersurface is Fano. \begin{commentA} Here's a full proof of the fact that hypersurfaces failing (1) and (2) in \Cref{finite} are Fano. \begin{proposition} Let $X = \{f = 0\} \subset \mathbb{P}(a_0,\ldots,a_{n+1})$ be a quasismooth hypersurface of dimension at least $1$ which does not satisfy either of the conditions (1), (2) of \Cref{finite}. Then $X$ is Fano. \end{proposition} \begin{proof} Suppose first that $d < 2 \max \{a_0,\ldots,a_{n+1}\}$. For quasismooth $X$, the minimum possible value of $d$ is $\max \{a_0,\ldots,a_{n+1}\}$, which corresponds to the case of a linear cone. The hypersurface $X$ is then isomorphic to a weighted projective space, and is hence Fano. 
If $d$ doesn't equal this minimum, suppose without loss of generality that $a_0$ is a maximum weight. Then quasismoothness of $X$ means (by \Cref{monomialexistence}) that there must be a monomial $x_0 x_i$ of degree $d$ for some $i$. Thus, $a_0 + a_i = d$. Since $n \geq 1$, $d - \sum_i a_i < d- a_0 - a_i = 0$, so $X$ is Fano. Similarly, if $d = 2 \max \{a_0,\ldots,a_{n+1}\}$, condition (2) of \Cref{finite} fails only if we have (at least) two weights $a_0$ and $a_1$ which equal $d/2$. In this case $d - \sum_i a_i < d - a_0 - a_1 = 0$, so $X$ is again Fano. \end{proof} \end{commentA} If $d = \max \{a_0,\ldots,a_{n+1}\} = a_0$, then $X$ is a linear cone, and hence isomorphic to $\mathbb{P}(a_1,\ldots,a_{n+1})$, which is rational. We may also assume that the equation of $X$ is $x_0 + f(x_1,\ldots,x_{n+1}) = 0$, where $f$ is homogeneous of degree $a_0$. Under the automorphism $x_0 \mapsto x_0 - f$ of $\mathbb{P}$, this becomes $x_0 = 0$. Every automorphism of $\{x_0 = 0\} = \mathbb{P}(a_1,\ldots,a_{n+1})$ extends to $\mathbb{P}(a_0,\ldots,a_{n+1})$ and this group is infinite since $n \geq 1$. Now suppose $\max \{a_0,\ldots,a_{n+1}\} < d < 2 \max \{a_0,\ldots,a_{n+1}\}$. In order for $X$ to be quasismooth, its equation must contain a monomial $x_0 x_l$ (for some $l \neq 0$) with nonzero coefficient by \Cref{monomialexistence}. By a transformation of $x_l$, we may assume that $x_0 x_l$ is the only term involving $x_0$. The equation then looks like $$x_0 x_l + x_l f(x_1,\ldots,x_{n+1}) + g(x_1,\ldots,\hat{x}_l,\ldots,x_{n+1}),$$ where $f$ is homogeneous of degree $d - a_l = a_0$ and may include $x_l$, while $g$ is homogeneous of degree $d$ and consists of terms not containing $x_0$ or $x_l$. Composing with $x_0 \mapsto x_0 - f$ then eliminates the middle term. After these transformations, it's clear that $X$ admits an infinite group of automorphisms: for each $t \in {\mathbb C}^*$, the automorphism mapping $x_0 \mapsto t x_0$, $x_l \mapsto \frac{1}{t}x_l$, and fixing all other variables. 
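Explicitly, here is the one-line check that each such map preserves the reduced equation: since $g$ involves neither $x_0$ nor $x_l$,
\[
x_0 x_l + g \;\longmapsto\; (t x_0)\left(\tfrac{1}{t} x_l\right) + g = x_0 x_l + g,
\]
so every member of this one-parameter family restricts to an automorphism of $X$.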
To show that $X$ is rational, note that the $g$ term above is nonzero since $X$ is quasismooth. On the open set $x_l \neq 0$, we may isolate $x_0 = -g/x_l$, so that the projection forgetting $x_0$ is a birational map to the rational toric variety $\mathbb{P}(a_1,\ldots,a_{n+1})$. Finally, suppose $d = 2 \max \{a_0,\ldots,a_{n+1}\}$ and that both $a_0$ and $a_1$ equal $\frac{d}{2}$. A similar series of reductions to the equation of $X$ works here: we can change variables so that the quadratic part in $x_0$ and $x_1$ equals $x_0 x_1$ and eliminate any other terms involving $x_0$ or $x_1$. The equation $x_0x_1 + f(x_2,\ldots,x_{n+1}) = 0$ has the same infinite family $x_0 \mapsto t x_0$, $x_1 \mapsto \frac{1}{t}x_1$ of automorphisms, and the projection forgetting the first coordinate is again a birational map. This completes the proof. \end{proof} Next, we consider bounds on the size of the linear automorphism groups of hypersurfaces when they are finite. Some results in this direction are known for hypersurfaces of degree $d$ in ordinary projective space $\mathbb{P}^{n+1}$, which we know have finite linear automorphism groups when $d \geq 3$. An unpublished work of Bott and Tate from around 1961 showed that there is a bound on $|\mathrm{Lin}(X)|$ in terms of $d$ and $n$ (see \cite{OS} for an exposition of these ideas). Around twenty years later, Howard and Sommese \cite{HS} showed that there is a constant $k_n$ for every dimension $n$ such that $|\mathrm{Lin}(X)| < k_n d^{n+1}$ for any $d \geq 3$. We'll prove an even stronger theorem in the setting of weighted projective hypersurfaces. 
\begin{theorem} \label{bound} For every positive integer $n$, there exists a constant $C_n$ depending only on $n$ with the following property: for any well-formed, quasismooth hypersurface $X \subset \mathbb{P}(a_0,\ldots,a_{n+1})$ of degree $d$ and dimension $n$, if $\on{Lin}(X)$ is finite, then \begin{equation} \label{bound_eq} |\on{Lin}(X)| \leq C_n \frac{d^{n+1}}{a_0 \cdots a_{n+1}}. \end{equation} \end{theorem} In particular, the same constant $C_n$ works for hypersurfaces in {\it any} weighted projective space of dimension $n$. The comments following the proof of \Cref{Jordan_const} describe an explicit procedure for effectively producing a constant $C_n$ for which the theorem holds. We'll need the following definitions during the proof. \begin{definition} \label{Jordan_def} Let $\mathcal{G}$ be a group. We say that $\mathcal{G}$ has the {\it Jordan property} if there exists a constant $J(\mathcal{G})$ such that for every finite subgroup $H \subset \mathcal{G}$, there exists a normal abelian subgroup $A \subset H$ with index $[H:A] \leq J(\mathcal{G})$. The minimum $J(\mathcal{G})$ with this property is called the {\it Jordan constant} of $\mathcal{G}$. The {\it weak Jordan constant} $\bar{J}(\mathcal{G})$ of $\mathcal{G}$ is the minimum constant such that every finite $H \subset \mathcal{G}$ has a (not necessarily normal) abelian subgroup $A \subset H$ with $[H:A] \leq \bar{J}(\mathcal{G})$. \end{definition} An 1878 result by Jordan \cite{Jordan} shows that $\mathrm{GL}_N({\mathbb C})$ has the Jordan property for all $N$ (for a modern exposition of his original proof and subsequent developments, see \cite{Breuillard}). The explicit values of the Jordan constants $J_N \coloneqq J(\mathrm{GL}_N({\mathbb C}))$ were not computed until much later: Collins \cite{Collins} calculated all the $J_N$ and in particular showed that when $N \geq 71$, $J_N = (N+1)!$; this index is achieved by the $N$-dimensional standard representation of the symmetric group $S_{N+1}$. 
His proof relies on the classification of finite simple groups. Weak Jordan constants have not been as well studied, but it follows from a theorem of A. Chermak and A. Delgado that for any group $\mathcal{G}$ with the Jordan property, $\bar{J}(\mathcal{G}) \leq J(\mathcal{G}) \leq \bar{J}(\mathcal{G})^2$ (see \cite[Theorem 1.41]{Isaacs} and \cite[Remark 1.2.2]{PS}). The precise values of $\bar{J}_N \coloneqq \bar{J}(\mathrm{GL}_N({\mathbb C}))$ for small $N$ are computed in \cite{PS}, but to the author's knowledge there has been no complete calculation of all the $\bar{J}_N$. \begin{proof}[Proof of \Cref{bound}] We'll prove the theorem in the following three steps: \begin{enumerate} \item[{\bf Step 1}:] Show that $\mathrm{Lin}(X)$ is the image of a finite group of graded ring automorphisms which fixes the polynomial $f$ defining $X$ and has order $d|\mathrm{Lin}(X)|$. \item[{\bf Step 2}:] Find a uniform bound $C_n$ on the weak Jordan constants $\bar{J}(\mathrm{Aut}(S))$ of the graded automorphism groups of weighted polynomial rings $S$ in $n+2$ variables. \item[{\bf Step 3}:] Show that the order of an abelian group of graded ring automorphisms fixing $f$ is at most $d^{n+2}/(a_0 \cdots a_{n+1})$. \end{enumerate} \noindent {\bf Step 1}: Suppose that $G \coloneqq \on{Lin}(X)$ is a finite group, for a quasismooth hypersurface $X$ of degree $d$ in $\mathbb{P} = \mathbb{P}(a_0,\ldots,a_{n+1})$. Let $S = {\mathbb C}[x_0,\ldots,x_{n+1}]$ be the weighted polynomial ring with weights $a_i$, $f$ be the polynomial defining the hypersurface $X$, and $\pi$ be the natural quotient homomorphism $\pi: \mathrm{Aut}(S) \rightarrow \mathrm{Aut}(\mathbb{P})$. If $g \in \pi^{-1}(G) \subset \mathrm{Aut}(S)$, the induced automorphism of $\mathbb{P}$ preserves the hypersurface $X$, so $g \cdot f = cf$ for some constant $c \in {\mathbb C}^*$. Define $H$ to be the subgroup of $\pi^{-1}(G)$ of elements $g$ which satisfy the stronger condition $g \cdot f = f$. 
We claim that $\pi|_H: H \rightarrow G$ is a surjective homomorphism with kernel of order $d$, so that in particular $H$ is a finite group with $|H| = d |G|$. It's clear that $\pi|_H: H \rightarrow G$ is surjective because we can compose any automorphism in $\pi^{-1}(G)$ with an element of $\ker(\pi) \cong \mathbb{C}^*$ (see \Cref{autP}) to scale the factor $c$ to $1$. An element of the kernel of $\pi|_H$ is a $t \in {\mathbb C}^*$ with $f(t^{a_0}x_0,\ldots,t^{a_{n+1}}x_{n+1}) = t^d f = f$, so $t$ is a $d$th root of unity. This proves $|H| = d |G|$. To bound the order of $G$, we can therefore analyze the group $H \subset \mathrm{Aut}(S)$ instead. \noindent {\bf Step 2}: Next, we reduce to only considering abelian groups by computing the weak Jordan constant for the group $\mathrm{Aut}(S)$ of graded automorphisms. For any weighted polynomial ring $S$, $\mathrm{Aut}(S)$ is a linear algebraic group. This implies that $\mathrm{Aut}(S)$ has the Jordan property. However, even for a fixed number of variables, $\mathrm{Aut}(S)$ can have arbitrarily large dimension as an algebraic group: for example, if $S = {\mathbb C}[x_0,x_1,x_2]$ with weights $a$, $1$, and $1$, respectively, then $\dim(\mathrm{Aut}(S)) = \dim(S_1) + \dim(S_1) + \dim(S_a) = a + 6$. Despite this, we'll prove that the Jordan constant of $\mathrm{Aut}(S)$ is uniformly bounded among polynomial rings $S$ with a fixed number of variables. Following the notation used in \Cref{diagonalizable}, we let $\mathcal{B} \coloneqq \{b : b = a_i \text{ for some } i\}$ be the set of positive integers that occur as a weight of the polynomial ring $S$. For each $b \in \mathcal{B}$, $N_b$ is the number of weights equal to $b$. Recall that $\bar{J}_N \coloneqq \bar{J}(\mathrm{GL}_N({\mathbb C}))$. \begin{lemma} \label{Jordan_const} Let $S = {\mathbb C}[x_0,\ldots,x_{n+1}]$ be a weighted polynomial ring with weights $a_0,\ldots,a_{n+1}$. Then $\bar{J}(\mathrm{Aut}(S)) = \prod_{b \in \mathcal{B}} \bar{J}_{N_b}$. 
In particular, for any integer $n$, there is a uniform upper bound $C_n$ on the weak Jordan constants of all groups $\mathrm{Aut}(S)$ where $S$ has $n+2$ variables. \end{lemma} \begin{proof} Inside each graded piece $S_b$ with $b \in \mathcal{B}$, there is a subspace $V_b$ of dimension $N_b$ spanned by the variables of weight $b$, and a complementary subspace $W_b$ spanned by the remaining monomials of weighted degree $b$. The direct sum $\bigoplus \mathrm{GL}(V_b)$ embeds as a subgroup of $\mathrm{Aut}(S)$, consisting of all automorphisms that don't ``mix'' variables of different weights. We'll show that any finite group $G \subset \mathrm{Aut}(S)$ is conjugate to a subgroup of $\bigoplus \mathrm{GL}(V_b) \subset \mathrm{Aut}(S)$. To do this, we'll construct the necessary coordinate change inside each $S_b$. Since finite groups are linearly reductive in characteristic zero, the representation of $G$ on $S_b$ splits into a direct sum of irreducible representations. In particular, since $W_b$ is $G$-invariant, we can find a complementary $G$-invariant subspace $V'_b$ inside $S_b$. Define the change of coordinates on the variables of weight $b$ in such a way that the span of those variables becomes $V'_b$. We can construct this change of coordinates independently within each $S_b$ and arrive at an automorphism of the entire graded ring $S$. By construction, elements $g \in G$ don't mix variables of different weights in the new coordinates. This proves that any finite group $G$ that appears as a subgroup of $\mathrm{Aut}(S)$ also embeds in $\bigoplus \mathrm{GL}(V_b)$. Therefore, $$\bar{J}(\mathrm{Aut}(S)) = \bar{J} \left( \bigoplus_{b \in \mathcal{B}} \mathrm{GL}(V_b) \right) = \prod_{b \in \mathcal{B}} \bar{J}_{N_b}.$$ Here the last equality comes from the general fact that $\bar{J}(G_1 \times G_2) = \bar{J}(G_1) \bar{J}(G_2)$ (this is one convenient property of weak Jordan constants that doesn't hold for regular Jordan constants). 
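To illustrate the formula, consider again the ring $S = {\mathbb C}[x_0,x_1,x_2]$ from the example above, with weights $a$, $1$, and $1$ (assuming $a > 1$). Then $\mathcal{B} = \{a, 1\}$ with $N_a = 1$ and $N_1 = 2$, so
$$\bar{J}(\mathrm{Aut}(S)) = \bar{J}_{N_a} \cdot \bar{J}_{N_1} = \bar{J}_1 \cdot \bar{J}_2 = \bar{J}_2,$$
which is independent of $a$, even though $\dim(\mathrm{Aut}(S)) = a + 6$ grows with $a$. (Here $\bar{J}_1 = 1$, since every finite subgroup of $\mathrm{GL}_1({\mathbb C}) = {\mathbb C}^*$ is already abelian.)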
Because there are $n+2$ weights total, we have $\sum_{b \in \mathcal{B}} N_b = n+2$. There are only finitely many possibilities for the collection of positive integers $N_b$ for a fixed $n$, so there is a uniform upper bound $C_n$ on the weak Jordan constant of $\mathrm{Aut}(S)$ depending only on $n$. \begin{comment}[Proof in the case that we know \bar{J}_N \leq N!] By the result of Collins we know $\bar{J}_N \leq N!$ for large $N$ (in particular, $N \geq 71$ suffices). Thus, for large enough $n$, $(n+2)!$ is greater than or equal to any product $\prod \bar{J}_{N_b}$ with $\sum_{b \in \mathcal{B}} N_b = n+2$. This is straightforward to see when each $\bar{J}_{N_b} \leq N_b!$, since the factorial of a sum is larger than the product of the individual factorials. Even though $\bar{J}_N \leq N!$ is violated for a finite number of small values of $N$, we still have that the product is smaller than $(n+2)!$ for $n$ large. This gives the last statement. Here are the details on how to prove that $(n+2)!$ is indeed an upper bound for all products of weak Jordan constants $\bar{J}_{N_b}$ with $\sum N_b = n+2$. First, note the following lemma: \begin{lemma} Let $k = k_1 + \cdots + k_r$ be a sum of positive integers. Then $k! \geq k_1 ! \cdots k_r !$. \end{lemma} \begin{proof} This follows using induction from the case of $r = 2$: $k = k_1 + k_2$. Then $k!/k_1! = k(k-1) \cdots (k_2 + 1)$. This is greater than $k_2! = k_2 (k_2-1) \cdots \cdot 1$ because there are $k_2$ terms in each product and $k > k_2, k-1 > k_2 - 1, \ldots, k_2 + 1 > 1$. \end{proof} Choose a constant $B$ so that $B^N > \bar{J}_N$ for $N < 71$. Now consider the product $\prod_b \bar{J}_{N_b}$, where $\sum N_b = n+2$. We can split the product into two parts: one containing the terms with $N_b \geq 71$, and the remaining terms. Let $R$ be the sum of the $N_b$ with $N_b \geq 71$. By the lemma and the definition of $B$ $$\prod_b \bar{J}_{N_b} \leq B^{n+2-R} R!.$$ Suppose that $n$ is large enough so that $(n+2)! 
> B^{n+2}$. Then we claim that $B^{n+2-R} R! < (n+2)!$. Indeed, if $R \leq B$, then this follows automatically from $(n+2)! > B^{n+2}$. On the other hand, if $R \geq B$, then $$\frac{(n+2)!}{R!} = (n+2)(n+1) \cdots (R+1)$$ is a product of $n+2-R$ integers greater than $B$, so it is of course larger than $B^{n+2-R}$. This completes the proof. This argument can be used to give a crude estimate on the necessary bound above which we can take $C_n = (n+2)!$ in \Cref{bound}. However, it is likely far from optimal. In fact, it's possible that $n \geq 2$ already suffices (see \Cref{curvebounds} and the comments afterward). \end{comment} \end{proof} We may explicitly compute a value for $C_n$ for a particular $n$ by considering all partitions of $n+2$ as a sum of positive integers $N_b$ and multiplying the corresponding values of $J_{N_b}$ computed by Collins \cite{Collins} (since $\bar{J}_N \leq J_N$). \noindent {\bf Step 3}: Using \Cref{Jordan_const}, we now only need to bound the order of abelian subgroups of automorphisms $A \subset \mathrm{Aut}(S)$, where $S$ has $n+2$ variables. Suppose that $A \subset H$ is an abelian subgroup of smallest index and assume we've changed coordinates on $S$ so that the action of $A$ is diagonal, using \Cref{diagonalizable}. Now suppose that $f$ is a sum of $s$ monomials with nonzero coefficients and write it as $$f = \sum_{i=1}^s \prod_{j=0}^{n+1} K_{ij} x_j^{m_{ij}}.$$ Package the exponents $m_{ij}$ into an $s \times (n+2)$ matrix $M$. Each row corresponds to a monomial in $f$. We can use \Cref{monomialexistence} to pick a distinguished collection of $n+2$ monomials in $f$: indeed, for each $i$, select a monomial of the form $x_i^{b_i}$ or $x_i^{b_i} x_j$, $j \neq i$ which has a nonzero coefficient in $f$. Take only the $n+2$ rows of $M$ corresponding to these, and assemble them into a square $(n+2) \times (n+2)$ minor $B$ of $M$ in such a way that the monomial associated to $x_i$ goes in the $i$th row. 
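Before bounding $\det(B)$ in general, it may help to see the matrix $B$ in the simplest case. For the Fermat hypersurface $f = x_0^d + \cdots + x_{n+1}^d$ in $\mathbb{P}^{n+1}$ (all weights $a_i = 1$), the distinguished monomial associated to each $i$ is $x_i^d$, so $m_{ii} = d$ and the selected rows have no off-diagonal entries:
$$B = d \cdot I_{n+2}, \qquad \det(B) = d^{n+2} = \frac{d^{n+2}}{a_0 \cdots a_{n+1}},$$
so the Fermat case attains the determinant bound below with equality.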
\begin{lemma} \label{determinant} The matrix $B$ constructed above is invertible and has determinant satisfying $$0 < \mathrm{det}(B) \leq \frac{d^{n+2}}{a_0 \cdots a_{n+1}}.$$ \end{lemma} \begin{proof} We first note the following properties of $B$. First, every entry $b_i$ on the main diagonal is a positive integer satisfying $2 \leq b_i \leq d/a_i$. (The lower bound is by the criterion in \Cref{finite}, while the upper bound is because $x_i^{b_i}$ or $x_i^{b_i}x_j$ is a monomial of weighted degree $d$.) Second, each row of $B$ contains at most one nonzero element off the main diagonal; if it does, this element must be a $1$. We can begin to compute the determinant by expanding along any rows or columns that have only one nonzero entry, namely the diagonal entry. At each such step, the diagonal entry $b_i$ is a positive integer which is at most $d/a_i$, where $i$ is the index of the row in question. After removing the $i$th row and column, the resulting minor always has the same properties as $B$, so it suffices to prove the inequality in the lemma with one copy of $d$ in the numerator and the $a_i$ in the denominator removed. Continuing in this way, we may assume $B$ has exactly one off-diagonal $1$ in each row and column. Up to a permutation of the indices, we can now further assume that $B$ is block diagonal with blocks of the form $$\begin{pmatrix} b_0 & 1 & 0 & \cdots & 0 \\ 0 & b_1 & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & b_{r-1} & 1 \\ 1 & 0 & \cdots & 0 & b_r \end{pmatrix}.$$ It now suffices to prove the lemma in the case that $B$ is a single block of the form above (so $r = n+1$). It's straightforward to compute that this ``loop matrix'' has determinant $b_0 \cdots b_{n+1} + (-1)^{n+1} \neq 0$, so it is invertible (here we use $b_i \geq 2$). As for the bound on the determinant, it automatically holds when $n$ is even since $b_0 \cdots b_{n+1} -1 < b_0 \cdots b_{n+1}$ and each $b_i \leq \frac{d}{a_i}$. 
When $n$ is odd, use the series of equations $b_0 = (d-a_1)/a_0, b_1 = (d-a_2)/a_1, \ldots, b_{n+1} = (d-a_0)/a_{n+1}$ to compute \begin{align*} \det(B) & = b_0 \cdots b_{n+1} + 1 = \frac{(d-a_0)(d-a_1) \cdots (d-a_{n+1})}{a_0 \cdots a_{n+1}} + 1 \\ & = \frac{d^{n+2}-d^{n+1}s_1 + d^n s_2 - \cdots + d s_{n+1}}{a_0 \cdots a_{n+1}} < \frac{d^{n+2}}{a_0 \cdots a_{n+1}}. \end{align*} Here $s_l \coloneqq s_l(a_0,\ldots,a_{n+1})$ is the degree $l$ elementary symmetric polynomial in the weights $a_0,\ldots,a_{n+1}$. In the last equality, the final term $-s_{n+2}/(a_0 \cdots a_{n+1}) = -1$ in the expansion cancelled with the $1$. To justify the last inequality, note that it is equivalent to $(d-a_0)(d-a_1) \cdots (d-a_{n+1}) + a_0 \cdots a_{n+1} < d^{n+2}$; since $0 < a_i < d$ for all $i$, we have $(d-a_0) \cdots (d-a_{n+1}) < (d-a_0)\,d^{n+1}$ and $a_0 \cdots a_{n+1} \leq a_0\, d^{n+1}$, and the two right-hand sides sum to $d^{n+2}$. \end{proof} With these properties of $B$ in hand, we return to the proof of the theorem. We'll show that for any diagonal automorphism $x_j \mapsto c_j x_j$ in $A$, the scalars $c_j$ satisfy $|c_j| = 1$. Indeed, since this automorphism preserves $f$, it preserves each monomial individually, and $$\prod_j K_{ij}(c_j x_{j})^{m_{ij}} = \prod_j K_{ij}x_j^{m_{ij}},$$ for each $i = 1,\ldots,s$. Therefore, $\prod_j c_j^{m_{ij}} = 1$. Taking logarithms of the $|c_j|$, this means that $( \ln |c_0|, \ldots, \ln |c_{n+1}|) \in \ker M$. But $\ker M \subset \ker B = \{0\}$ since $B$ is invertible, so $|c_j| = 1$ for each $j$. Therefore, any element of $A$ can be represented as an $(n+2)$-tuple $(\theta_0,\ldots,\theta_{n+1})$ of elements of ${\mathbb Q}/{\mathbb Z}$, where $c_j = e^{2 \pi i \theta_j}$. The condition that $(\theta_0,\ldots,\theta_{n+1})$ preserves $f$ can be expressed as $M (\theta_0,\ldots,\theta_{n+1})^\mathsf{T} \in {\mathbb Z}^s$. We can obtain an upper bound for the order of $A$ by considering the weaker condition $B (\theta_0,\ldots,\theta_{n+1})^\mathsf{T} \in {\mathbb Z}^{n+2}$ instead (this says that at least the $n+2$ selected monomials in $f$ are preserved by the automorphism). 
The number of distinct solutions for $(\theta_0,\ldots,\theta_{n+1})$ modulo ${\mathbb Z}^{n+2}$ to this latter equation is the index of ${\mathbb Z}^{n+2}$ in the superlattice spanned by $B^{-1} e_0, \ldots, B^{-1} e_{n+1}$, where $e_0,\ldots,e_{n+1}$ are the standard basis vectors in ${\mathbb Z}^{n+2}$. This index equals $\det(B)$, so $$|A| \leq |\det(B)| \leq \frac{d^{n+2}}{a_0 \cdots a_{n+1}}$$ by \Cref{determinant}. The original group $H$ which contained $A$ as a smallest index abelian subgroup therefore has order $$|H| \leq \bar{J}(\mathrm{Aut}(S))\frac{d^{n+2}}{a_0 \cdots a_{n+1}} \leq C_n \frac{d^{n+2}}{a_0 \cdots a_{n+1}}.$$ Finally, we have the desired bound $$|\on{Lin}(X)| = |G| = \frac{|H|}{d} \leq C_n \frac{d^{n+1}}{a_0 \cdots a_{n+1}}.$$ \end{proof} \begin{example}[Fermat Hypersurfaces] For any positive integer $n$ and degree $d \geq 3$, the {\it Fermat hypersurface} of dimension $n$ and degree $d$ in $\mathbb{P}^{n+1}$ is $X \coloneqq \{x_0^d + \cdots + x_{n+1}^d = 0\} \subset \mathbb{P}^{n+1}$. Then $\on{Lin}(X)$ contains a copy of the symmetric group $S_{n+2}$ acting by permutation of the variables $x_i$ and the diagonal automorphisms given by multiplying each $x_i$ by an arbitrary $d$th root of unity (modulo the scalar transformations). Therefore, $|\on{Lin}(X)| \geq (n+2)! d^{n+1}$. In fact, a computation of Shioda \cite{Shioda} shows that $|\on{Lin}(X)| = (n+2)! d^{n+1}$ when $X$ is defined over an algebraically closed field of characteristic zero (see also \cite{Kontogeorgis} for another proof and some generalizations of this result). Note that a Fermat hypersurface can have extra automorphisms in positive characteristic. \end{example} \begin{comment} The result in \Cref{bound} is related to a theorem of Hacon-M\textsuperscript{c}Kernan-Xu \cite{HMX}, which proves that for any dimension $n$, the ratio $\mathrm{Aut}(X)/\mathrm{vol}(X,K_X)$ is uniformly bounded above for all varieties $X$ of general type with at worst canonical singularities. 
When $X$ is a weighted projective hypersurface of degree $d$ in $\mathbb{P}(a_0,\ldots,a_{n+1})$, the volume is $$\mathrm{vol}(X,K_X) = \frac{d(d-a_0- \cdots - a_{n+1})^n}{a_0 \cdots a_{n+1}}.$$ In \cite[Section 1]{HMX}, it's noted that the Fermat hypersurface $X$ of degree $d = n+3$ in $\mathbb{P}^{n+1}$ achieves a ratio $$\frac{\mathrm{Aut}(X)}{\mathrm{vol}(X,K_X)} = (n+2)!(n+3)^n.$$ In general, the strongest estimates for this ratio seem to come from varieties which are ``barely'' of general type. Indeed, results of the author with Totaro and Wang \cite{ETW} show that there exist canonical weighted projective hypersurfaces of general type in dimension $n$ with $\mathrm{vol}(X,K_X) < 1/2^{2^{n/2}}$, so that the bound on this ratio must grow at least doubly exponentially with $n$. However, if we instead choose a Fermat hypersurface of very large degree, the ratio approaches $(n+2)!$, matching the constant in \Cref{bound} for large $n$. \end{comment} This example shows that the order of growth with respect to the degree $d$ in the estimate of \Cref{bound} is optimal and that we must have $C_n \geq (n+2)!$ for all $n$. A natural question is whether we may actually take $C_n = (n+2)!$ for all $n$. We'll show that this is nearly true for $n = 1$, but not quite. \begin{comment}[If we prove better bound on weak Jordan constants, we get the corollary] In particular, we have the following corollary for hypersurfaces in $\mathbb{P}^{n+1}$, which to the author's knowledge has not appeared in the literature before. \begin{corollary} For sufficiently large dimension $n$ and any $d \geq 3$, the Fermat hypersurface of degree $d$ in $\mathbb{P}^{n+1}$ has the largest possible automorphism group of any smooth degree $d$ hypersurface. \end{corollary} \end{comment} \begin{proposition} \label{curvebounds} Let $X$ be a well-formed quasismooth weighted projective curve of degree $d$ in $\mathbb{P}(a,b,c)$ with finite linear automorphism group. 
Then $$|\mathrm{Lin}(X)| \leq \frac{6d^2}{abc},$$ unless $a = b = c = 1$ and $X$ is projectively equivalent to one of the following two plane curves in $\mathbb{P}^2 = \mathbb{P}(1,1,1)$: \begin{enumerate} \item The {\it Klein quartic} $xy^3 + yz^3 + zx^3 = 0$, with automorphism group isomorphic to $\mathrm{PSL}_2(\mathbb{F}_7)$ of order $168$; \item The {\it Wiman sextic} $10x^3 y^3 + 9x^5 z + 9y^5 z - 45x^2 y^2 z^2 - 135 xyz^4 + 27 z^6 = 0$, with automorphism group isomorphic to $A_6$ of order $360$. \end{enumerate} \end{proposition} \begin{proof} In order for a weighted projective curve to be well-formed, we must have that the weights $a, b$ and $c$ are pairwise relatively prime, and that each weight divides $d$. Indeed, if some weight does not divide $d$, then the intersection $X \cap \mathbb{P}_{\mathrm{sing}}$ contains the corresponding coordinate point, which has codimension $1$ in $X$. There are three cases to consider, depending on which of $a$, $b$, or $c$ coincide. Suppose first that $a$, $b$, and $c$ are all distinct. Then, any finite subgroup of $\mathrm{Aut}(\mathbb{P}(a,b,c))$ is abelian. This follows from \Cref{Jordan_const}, which shows that $\bar{J}(\mathrm{Aut}(S)) = 1$ in this case. Abelian subgroups of $\mathrm{Aut}(S)$ fixing the defining polynomial $f$ of $X$ have order at most $d^3/(abc)$ by {\bf Step 3} of the proof of \Cref{bound}; hence by {\bf Step 1}, $|\mathrm{Lin}(X)| \leq d^2/(abc)$. Now suppose that $b = c$, but that $a$ is distinct from the other two weights. Since $\mathbb{P}(a,b,c)$ is well-formed, we must have $b = c = 1$. The weak Jordan constant of $\mathrm{Aut}(S)$ is $\bar{J}_1 \bar{J}_2 = \bar{J}_2 = 12$ in this case \cite[Section 2.2]{PS}. Suppose our hypersurface is given by $X = \{f = 0\}$, where $f$ has weighted degree $d$. 
In order for $|\mathrm{Lin}(X)|$ to exceed $6d^2/(abc) = 6d^2/a$, we would have to have (after conjugation) a finite subgroup $G$ of $\mathrm{GL}_1({\mathbb C}) \oplus \mathrm{GL}_2({\mathbb C}) \subset \mathrm{Aut}(S)$ fixing $f$ of order exceeding $6d^3/a$. The maximal possible order of an abelian subgroup preserving $f$ is $d^3/a$, so we require our hypothetical $G$ to have no abelian subgroup of index less than or equal to $6$. The image of $G$ under the projection $$\mathrm{GL}_1({\mathbb C}) \oplus \mathrm{GL}_2({\mathbb C}) \xrightarrow{p_2} \mathrm{GL}_2({\mathbb C})$$ would also have no abelian subgroup of index less than or equal to $6$. All finite subgroups of $\mathrm{GL}_2({\mathbb C})$ are central extensions of cyclic groups, dihedral groups, $A_4$, $S_4$, or $A_5$. Of these, only $A_5$ has the required property: the largest abelian subgroup has index $12$. Therefore, the image of $G$ in $\mathrm{PGL}_2({\mathbb C})$ is isomorphic to $A_5$. It follows that $p_2(G)$ is a central extension of $A_5$ in $\mathrm{GL}_2({\mathbb C})$. Since $X$ is quasismooth, the polynomial $f$ is of the form $$f = x^{d/a} + x^{d/a-1}g_a(y,z) + x^{d/a-2}g_{2a}(y,z) + \cdots + g_d(y,z),$$ for some polynomials $g_a, g_{2a}, \ldots, g_{d}$ of the indicated degrees in $y$ and $z$. Here $g_d(y,z)$ must be nonzero since $f$ is irreducible. Each of the terms must be individually preserved by the action of $G$ because that action is block diagonal; in particular, $g_d(y,z)$ is an invariant polynomial under the action of $p_2(G)$. But this means that the intersection of $p_2(G)$ with the center of $\mathrm{GL}_2({\mathbb C})$ has order at most $d$ (primitive roots of unity of higher degree could not preserve this polynomial), so $|p_2(G)| \leq |A_5|d = 60d$. Similarly, $|\ker(p_2) \cap G| \leq d/a$, so $$|G| \leq \frac{60d^2}{a}.$$ The combination of the inequalities $|G| \leq 60d^2/a$ and $|G| > 6d^3/a$ means that $d < 10$. 
However, we've already seen that $g_d(y,z)$ is a polynomial invariant of the action of the binary icosahedral group in $\mathrm{GL}_2({\mathbb C})$. The homogeneous generators for that invariant ring have degrees $12$, $20$, and $30$ by a result of Klein \cite{Klein2}, contradicting the bound on the degree. It follows that no weighted projective curve of this form has more than $6d^2/a$ linear automorphisms. The last possibility is that $a = b = c$, so that $X$ is a smooth plane curve. In this case, the problem of finding the largest possible automorphism groups in different degrees is well studied. For $d \geq 4$, recall that all plane curve automorphisms are linear \cite{Chang}. Klein \cite{Klein} computed the linear automorphism group of the quartic curve in \Cref{curvebounds}; it has the largest possible automorphism group of any curve of genus $g = 3$ by the Hurwitz bound $|\mathrm{Aut}(X)| \leq 84(g-1)$. Wiman \cite{Wiman} first computed that the sextic in the proposition has automorphism group $A_6$. Later work showed that the Wiman sextic is the unique degree $6$ curve with largest automorphism group up to projective equivalence \cite{DIK} and that the Fermat curve has the same property for various other $d \leq 20$ \cite{KMP1,KMP2}. Finally, Harui \cite[Theorem 2.5]{Harui} proved that the two curves listed in \Cref{curvebounds} are the only ones with $|\mathrm{Lin}(X)| > 6d^2$ for {\it any} degree $d$. This proves the proposition. \end{proof} This classification shows that we may take $C_1 = \frac{21}{2}$ in \Cref{bound}. The author is unaware of any counterexamples to the theorem with $C_n = (n+2)!$ for $n \geq 2$. By analogy with Collins' computations of Jordan constants, we might expect that unusual behavior such as in the $n = 1$ case occurs only for small $n$. \begin{question} \label{Cnquestion} Does \Cref{bound} hold with $C_n = (n+2)!$ for $n \geq 2$? 
In particular, does the Fermat hypersurface have the largest automorphism group of any smooth hypersurface of degree $d$ in $\mathbb{P}^{n+1}$ when $n \geq 2$ and $d \geq 3$? \end{question} Many partial results in this direction are known for smooth hypersurfaces in ordinary projective space. For instance, we have a fairly complete picture of the possible {\it orders} of automorphisms that can occur \cite{GAL,Zheng}. The possible automorphism groups of smooth cubic surfaces over an algebraically closed field of arbitrary characteristic were classified by Dolgachev and Duncan \cite{DD}. Moreover, the linear automorphism groups of smooth cubic threefolds and smooth quintic threefolds were classified by works of Wei and Yu \cite{WY} and Oguiso and Yu \cite{OY}, respectively. The Fermat cubic fourfold is also known to have the largest possible automorphism group by a result of Laza and Zheng \cite{LZ}. In summary, the second part of \Cref{Cnquestion} is known to have an affirmative answer at least for the following pairs $(n,d)$: $(2,3), (3,3), (3,5),$ and $(4,3)$. \section{Automorphisms of a Very General Hypersurface} \label{verygeneral} Another result of Matsumura and Monsky is that the automorphism group of a {\it very general} hypersurface in $\mathbb{P}^{n+1}$ with $n \geq 2$ and degree $d \geq 3$ is trivial \cite[Theorem 5]{MM}. One might hope that the same result holds for weighted projective hypersurfaces whenever the conditions of \Cref{finite} are met. This turns out to be false, but we have the following result under a slightly stronger assumption on the degree. \begin{theorem} \label{generic} Suppose that there exists a hypersurface $X \subset \mathbb{P}(a_0,\ldots,a_{n+1})$ of degree $d$ which is quasismooth and well-formed, where $n \geq 1$ and $d \geq 5 \max\{a_0,\ldots,a_{n+1} \}$. Then for a very general such $X$, $\on{Lin}(X)$ is contained in the center of $\mathrm{Aut}(\mathbb{P})$ and is toric. In particular, $\on{Lin}(X)$ is abelian. 
\end{theorem} \begin{proof} Fix a very general hypersurface $X = \{f = 0\}$ with the given weights and degree. Any element of $\on{Lin}(X)$ comes from an automorphism $\alpha: S \rightarrow S$ of the graded ring $S = {\mathbb C}[x_0,\ldots,x_{n+1}]$. The fact that $\alpha$ descends to $X$ means that $\alpha(f) = cf$ for some constant $c \in {\mathbb C}$. The conditions of \Cref{generic} on the weights and degree are strictly stronger than those of \Cref{finite}, so we know that $\mathrm{Lin}(X)$ is finite. In particular, the automorphism $\alpha$ has finite order. It follows from \Cref{diagonalizable} that after conjugating by some automorphism of the graded ring $S$, $\alpha$ becomes diagonal, i.e., maps each $x_i$ to a scalar multiple of itself. Let $\gamma: S \rightarrow S$ be such an automorphism that brings $\alpha$ into diagonal form. Define $\beta \coloneqq \gamma \alpha \gamma^{-1}$ and $g \coloneqq \gamma (f)$, so that $\beta (g) = cg$. For each $i$, let $c_i$ be the scalar such that $\beta(x_i) = c_i x_i$. Next, let $G \coloneqq \mathrm{Aut}(S)$ and $H \coloneqq C_G(\beta)$ be the centralizer of the element $\beta$ in $G$. We will show that unless $G = H$, that is, unless $\beta$ is actually contained in the center of $\mathrm{Aut}(S)$, the fact that $\{g = 0\}$ has automorphism $\beta$ forces more than $\dim(G) - \dim(H)$ monomials of degree $d$ in the polynomial $g$ to vanish. This would contradict the assumption that $f$ was originally chosen to be very general, since the space of degree $d$ polynomials with $\beta$ as an automorphism would have codimension greater than $\dim(G/H)$. The homogeneous space $G/H$, in turn, is isomorphic to the orbit of $\beta$ under conjugation. This is the same idea used in Matsumura and Monsky's proof \cite[Theorem 5]{MM} of the analogous fact for hypersurfaces in $\mathbb{P}^n$. 
(In that paper, they considered both diagonalizable and unipotent automorphisms; since we are working over ${\mathbb C}$ instead of an arbitrary algebraically closed field, we only need to consider the former type.) We've already seen that the dimension of $G = \mathrm{Aut}(S)$ is $\dim(G) = \dim(S_{a_0}) + \cdots + \dim(S_{a_{n+1}})$. To compute the dimension of the centralizer $H$, it suffices to compute the dimension of its Lie algebra; we may do this by seeing which infinitesimal transformations commute with $\beta$. Indeed, let $\sigma: x \mapsto x + \epsilon z$ be such a transformation, where $z = (z_0,\ldots,z_{n+1})$ is an $(n+2)$-tuple of homogeneous polynomials with $z_i \in S_{a_i}$. We have that $$\sigma \beta (x_i) = \sigma (c_i x_i) = c_i \sigma(x_i) = c_i (x_i + \epsilon z_i),$$ while $$\beta \sigma (x_i) = \beta (x_i + \epsilon z_i) = c_i x_i + \epsilon \beta(z_i).$$ Comparing these two equations, we have that $\beta$ and $\sigma$ commute if and only if $\beta(z_i) = c_i z_i$; that is, if and only if each monomial in $z_i$ is multiplied by $c_i$ when applying $\beta$. Therefore, $\dim(H)$ is equal to the cardinality of the set of ordered $2$-tuples $(i,y)$ such that $i \in \{0,\ldots,n+1\}$ and $y$ is a monomial of degree $a_i$ with $\beta(y) = c_i y$. We may also describe the dimension of the entire group $G = \mathrm{Aut}(S)$ in a similar way: it is just the size of the set of {\it all} $2$-tuples $(i,y)$ with $y$ a monomial of degree $a_i$. Therefore, $\dim(G/H) = |\Gamma|$, where $\Gamma$ is the set $$\Gamma \coloneqq \{(i,y): \beta(y) \neq c_i y\}.$$ If $\Gamma$ is empty, then $G = H$, which is what we want. Assuming that it is nonempty, we will now exhibit a vanishing monomial in $g$ for each $(i,y) \in \Gamma$, plus exactly one extra. This would show that the number of vanishing monomials is greater than $\dim(G/H)$, as required. We'll begin by finding one vanishing monomial for each $(i,y) \in \Gamma$. 
Since $X$ is quasismooth, \Cref{monomialexistence} guarantees that we may choose a monomial of degree $d$ of one of the following two forms: $x_i^k$ or $x_i^k x_j$, $i \neq j$, for some positive integer $k$. By default, we'll always choose the form $x_i^k$ when $a_i$ actually divides $d$. By the assumption on degree, we have $k \geq 5$ in either case (if $k < 5$ and $ka_i + a_j = d$ for some $i, j$, then we must have $k = 4$, $a_i = a_j$ by the assumption on degree, so $5a_i = d$ and we can choose $x_i^5$ instead). If $a_i$ divides $d$, consider the pair of monomials $\{ x_i^{k-1} y, x_i^{k-2} y^2 \}$. Since $(i,y) \in \Gamma$, $\beta(y) = c_y y$ where $c_y \neq c_i$. But then, our two monomials are multiplied by $c_i^{k-1} c_y$ and $c_i^{k-2} c_y^2$, respectively, under $\beta$. These constants cannot be equal or else we would have $c_y = c_i$. If both monomials had nonzero coefficients in $g$, that would contradict the fact that $\beta(g) = cg$. Therefore, at least one monomial of the pair vanishes in $g$. The same reasoning works for the pair $\{ x_i^{k-1} y x_j,x_i^{k-2} y^2 x_j \}$ in the event that we began with $x_i^k x_j$ of degree $d$ instead. This argument exhibits exactly $|\Gamma| = \dim(G/H)$ distinct vanishing monomials in the polynomial $g$. They are all distinct because any two pairs of monomials chosen above are disjoint. This follows from the fact that we can recover the pair $(i,y)$ uniquely from either monomial of the pair. This works as follows: given a monomial $x^I$ belonging to the pair we created from $(i,y) \in \Gamma$, find an index $i'$ such that: (1) the corresponding weight $a_{i'}$ is maximal among all variables appearing in $x^I$ with exponent at least $2$, and (2) the exponent of $x_{i'}$ is itself maximal among variables with indices satisfying the first condition. Examining the forms of the monomials we chose above, one can show that since $k-2 > 2$, the index $i'$ identified by this procedure must be unique and equal to $i$. 
If we have two elements $(i,y_1)$ and $(i,y_2)$ in $\Gamma$ with $y_1 \neq y_2$, it's clear that the chosen pairs of monomials are disjoint. Thus, $y$ is also uniquely determined. The final step is to find just one extra monomial in $g$ that vanishes. To do this, we'll make a slight modification to the list of pairs above, without breaking the disjointness property of the previous paragraph. Since $\Gamma$ is nonempty, we can fix a particular element $(i,y) \in \Gamma$. Depending on the properties of $i$ and $y$, we find two vanishing monomials associated to $(i,y)$ rather than just one as follows: \begin{itemize} \item If we chose $x_i^k$ with degree $d$ (here $k \geq 5$) and $y$ is not equal to some other variable $x_{i'}$, replace the pair $\{ x_i^{k-1} y,x_i^{k-2} y^2 \}$ with the two pairs $\{ x_i^k,x_i^{k-1} y \}$ and $\{ x_i^{k-2} y^2, x_i^{k-3} y^3 \}$. We can do the same modification when we have $x_i^k x_j$ of degree $d$ (and $k \geq 5$, $y$ not linear) instead. None of the new monomials we've introduced can repeat among the ones we previously found for other elements in $\Gamma$. We may now find two vanishing monomials for $(i,y)$ instead of one. \item If $x_i^k$ has degree $d$ with $k \geq 5$ and $y = x_{i'}$, then we still replace the pair $ \{ x_i^{k-1} x_{i'}, x_i^{k-2}x_{i'}^2 \}$ with the two pairs $\{ x_i^k,x_i^{k-1} x_{i'} \}$ and $\{ x_i^{k-2} x_{i'}^2, x_i^{k-3} x_{i'}^3 \}$. However, the latter pair overlaps with the one we found for $(i',x_i) \in \Gamma$ in the special case that $k = 5$. To remedy the issue in this one case, also replace the pair $\{ x_i x_{i'}^4, x_i^2 x_{i'}^3 \}$ associated to $(i',x_i)$ with $\{ x_{i'}^5,x_i x_{i'}^4 \}$. As before, the process would be the same if we had started with $x_i^k x_j$ of degree $d$ in the beginning; no repeats are introduced. \end{itemize} By contradiction, we've now shown that $G = H$ so that $\beta$ is in the center of $\mathrm{Aut}(S)$. 
This means that $\alpha = \gamma^{-1} \beta \gamma = \beta$, and more generally that $\alpha$ is diagonal in any choice of coordinates. The induced automorphism of $\mathbb{P}(a_0,\ldots,a_{n+1})$ is therefore always toric, as claimed. \end{proof} For hypersurfaces satisfying the condition $d \geq 5 \max \{a_0,\ldots,a_{n+1}\}$ in \Cref{generic}, the stronger statement that $\mathrm{Lin}(X) = \{1\}$ for $X$ very general is not always true. \begin{example} Consider the family of hypersurfaces of degree $180$ in $\mathbb{P}^3(36,31,30,25)$. The general $X$ in this family is quasismooth and well-formed. Furthermore, the weights and degree satisfy the hypothesis of \Cref{generic}. However, since the only monomial of degree $180$ involving the variable $x_0$ of weight $36$ is $x_0^5$, any quasismooth $X$ has a non-trivial automorphism of order $5$ given by $x_0 \mapsto \zeta x_0$ for $\zeta$ a primitive fifth root of unity. As predicted by the theorem, this automorphism is in the center of $\mathrm{Aut}(\mathbb{P})$. \end{example} \begin{commentA} \begin{example} Here's another example where the degree is not in the range of \Cref{generic} but there is still a non-trivial automorphism in the center of $\mathrm{Aut}(\mathbb{P})$. Consider the family of hypersurfaces of degree $48$ in $\mathbb{P}(16,13,12,9)$. The general $X$ in this family is quasismooth and well-formed, but the only monomial of degree $48$ involving $x_0$ is $x_0^3$. Therefore, $X$ always has a non-trivial automorphism of order $3$ given by $x_0 \mapsto \zeta x_0$ for $\zeta$ a primitive third root of unity. \end{example} \end{commentA} In a similar way, one can construct examples where $d$ is arbitrarily large relative to the maximum of the weights, but non-trivial automorphisms still exist for any quasismooth $X$. By having multiple ``isolated'' weights, generic automorphism groups can be made to contain any abelian group. 
Further, in the range where the conditions of \Cref{finite} apply but those of \Cref{generic} do not (i.e., when the degree satisfies $2 \max\{a_0,\ldots,a_{n+1}\} \leq d < 5 \max\{a_0,\ldots,a_{n+1}\}$), there are many examples of hypersurfaces with generic automorphisms outside the center of $\mathrm{Aut}(\mathbb{P})$. These show that \Cref{generic} is close to optimal. \begin{example}[Hyperelliptic curves, revisited] We saw above that hyperelliptic curves of genus $g$ naturally embed via the canonical map as $X_{2g+2} \subset \mathbb{P}^2(g+1,1,1)$. Conversely, the general hypersurface of this degree is a hyperelliptic curve; up to a transformation of weighted projective space, a general equation becomes $x_0^2 + f(x_1,x_2) = 0$ and gives a double cover of $\mathbb{P}^1$. The hyperelliptic involution is given by $x_0 \mapsto -x_0$, which is a nontrivial automorphism of $\mathbb{P}^2(g+1,1,1)$ that descends to $X$. Further, this automorphism is not in the center of $\mathrm{Aut}(\mathbb{P})$. Similar reasoning works for other families of hypersurfaces $X_d \subset \mathbb{P}(a_0,\ldots,a_{n+1})$ in higher dimensions whenever $a_0 = d/2$; $X$ always has the involution given by the double cover, and this involution is often not in the center of $\mathrm{Aut}(\mathbb{P})$. \end{example} \begin{example} It's well known that any cubic plane curve has non-trivial linear automorphisms. Indeed, any smooth complex cubic plane curve $X_3 \subset \mathbb{P}^2(1,1,1) = \mathbb{P}^2$ is projectively equivalent to a curve in {\it Hesse normal form} $$x^3 + y^3 + z^3 = 3C xyz,$$ where $C \in {\mathbb C}$, $C^3 \neq 1$. This result dates back at least to the late 19th century \cite[v.3, p.22]{Weber}. The curve $X$ defined by this equation has a linear automorphism group of order at least $18$ (in fact, the order equals $18$ except when $C$ takes one of a handful of special values \cite[Corollary 3.10]{BM}). 
For general $C$, this group is generated by permutations of $x,y,z$ and the transformation $(x:y:z) \mapsto (x:\zeta y: \zeta^2 z)$ for $\zeta$ a primitive third root of unity. It acts transitively on the nine flex points of the curve, and is furthermore non-abelian. Since $\mathrm{Aut}(\mathbb{P}^2) = \mathrm{PGL}_3({\mathbb C})$ is centerless, all non-trivial automorphisms in this group must be outside the center. In the world of weighted projective hypersurfaces, we can bootstrap this example to any number of dimensions by looking at families such as $X_{15} \subset \mathbb{P}^4(5,5,5,3,3)$. The variables corresponding to weights of $3$ and $5$ never mix, so we can again change coordinates so that the equation $f$ defining $X$ assumes Hesse normal form in the first three variables. Then, the transformations above are still automorphisms of $f$, leaving the variables of weight $3$ unchanged. \end{example} \begin{example} Examples of generic non-central automorphisms with $d = 4 \max \{a_0,\ldots,a_{n+1}\}$ also exist because of the fact that any quartic ``hypersurface'' in $\mathbb{P}^1$, that is, a collection of four general points, has nontrivial linear automorphism group. More precisely, let $p_1,p_2,p_3,p_4 \in \mathbb{P}^1$ be four general points. Any automorphism of $\mathbb{P}^1$ must preserve the cross-ratio of these four points, and the stabilizer of the cross-ratio under the permutation action of $S_4$ on these points is the Klein four-group $K = \{\mathrm{id}, (12)(34),(13)(24),(14)(23)\}$. Further, since $\mathrm{PGL}_2({\mathbb C})$ is three-transitive, there is an automorphism mapping $p_1$, $p_2$ and $p_3$ according to any permutation $\sigma \in K$; by cross-ratio considerations, it also acts as $\sigma$ on the fourth. It follows that the subgroup of $\mathrm{PGL}_2({\mathbb C})$ preserving this set of four points is isomorphic to $K \cong {\mathbb Z}/2 \oplus {\mathbb Z}/2$. 
We can pick matrices in $\mathrm{GL}_2({\mathbb C})$ which descend to these transformations and preserve the quartic equation in two variables defining the given set of four points. This construction allows us to find many positive-dimensional examples with non-central automorphisms. For instance, let $X_{20} \subset \mathbb{P}^3(5,5,4,4)$ be very general. Then $X$ has nontrivial automorphisms defined by the same linear transformations as above in the first two variables and the identity on the variables of weight $4$. \end{example}
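The cross-ratio invariance underlying the last example can be illustrated numerically with four concrete rational points (a toy sketch; all helper names are ours):

```python
from fractions import Fraction as F

def cross_ratio(p1, p2, p3, p4):
    # (p1, p2; p3, p4) for four distinct points of the affine line
    return (p1 - p3) * (p2 - p4) / ((p1 - p4) * (p2 - p3))

pts = [F(0), F(1), F(3), F(7)]        # four "general" points
base = cross_ratio(*pts)

# the double transpositions (12)(34), (13)(24), (14)(23) fix the cross-ratio
klein = [(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)]
for s in klein:
    assert cross_ratio(*(pts[i] for i in s)) == base

# a single transposition generically changes it
assert cross_ratio(pts[1], pts[0], pts[2], pts[3]) != base
```

Exact rational arithmetic avoids any floating-point ambiguity in the equality checks.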
https://arxiv.org/abs/2211.11819
Descent modulus and applications
The norm of the gradient $\nabla f(x)$ measures the maximum descent of a real-valued smooth function $f$ at $x$. For (nonsmooth) convex functions, this is expressed by the distance $\mathrm{dist}(0, \partial f(x))$ of the subdifferential to the origin, while for general real-valued functions defined on metric spaces by the notion of metric slope $|\nabla f|(x)$. In this work we propose an axiomatic definition of descent modulus $T[f](x)$ of a real-valued function $f$ at every point $x$, defined on a general (not necessarily metric) space. The definition encompasses all above instances as well as average descents for functions defined on probability spaces. We show that a large class of functions are completely determined by their descent modulus and corresponding critical values. This result is already surprising in the smooth case: one-dimensional information (the norm of the gradient) turns out to be almost as powerful as the knowledge of the full gradient mapping. In the nonsmooth case, the key element for this determination result is the break of symmetry induced by a downhill orientation, in the spirit of the definition of the metric slope. The particular case of functions defined on finite spaces is studied in the last section. In this case, we obtain an explicit classification of descent operators that are, in some sense, typical.
\section{Introduction\label{sec01:Intro}} In \cite{BCD2018} the following surprising result was obtained: two $\mathcal{C}^{2}$-smooth convex bounded from below functions defined on a Hilbert space $\mathcal{H}$ are equal up to an additive constant, provided they have the same modulus of derivative at every point. In other words, for this class of functions, equality of moduli of derivatives ($\Vert \nabla f\Vert =\Vert \nabla g\Vert $) implies equality of the derivatives ($\nabla f=\nabla g$). An alternative way to state this result is to say that the operator \begin{equation} f\mapsto \Vert \nabla f\Vert \label{eq:d1} \end{equation} determines the function $f$ (modulo a constant) for the class of $\mathcal{C}^{2}$-smooth convex and bounded from below functions defined on the Hilbert space $\mathcal{H}$.\smallskip The above result has been extended in \cite{PSV2021} to the class of convex continuous bounded from below functions on a Hilbert space $\mathcal{H}$. A further extension for functions defined on an arbitrary Banach space $X$ has been achieved in \cite{TZ2022}. In both cases, the operator \begin{equation} f\mapsto d(0,\partial f(x))\quad \text{(remoteness of the subdifferential)} \label{eq:d2} \end{equation} determines the function $f$ (modulo a constant) for the class of convex continuous and bounded from below functions on a Banach space $X$.\smallskip In \cite{DS2022} the authors worked in an arbitrary metric space $(X,d)$. Using the notion of metric slope $|\nabla f|$, they established the following result: two continuous coercive functions $f,g:X\rightarrow~\mathbb{R}$ are equal, provided they have the same metric slope ($|\nabla f|=|\nabla g|$) and coincide on the (common) critical set $S=|\nabla f|^{-1}(0)=|\nabla g|^{-1}(0).$ (We refer to Section~\ref{sec02:Pre} for notation and relevant definitions; see also Subsection~\ref{ssec:2.1} for a more detailed description of the above results.) 
Denoting by $\mathcal{K}(X)$ the class of continuous coercive functions on $X$ (the exact definition of coercivity will be given in~\eqref{sec02:eq:Coercive}), we consider the following equivalence relation: $f\sim g$ if and only if $f$ and $g$ have the same (metric) critical set and their values coincide there up to a constant, that is, \begin{equation*} f\sim g \quad \iff \quad S=|\nabla f|^{-1}(0)=|\nabla g|^{-1}(0)\text{ and } f\big|_S - g\big|_S = c, \,\,\text{for some }c\in\mathbb{R}. \end{equation*} Then, the aforementioned result of \cite{DS2022} asserts that the operator \begin{equation} f\mapsto |\nabla f| \label{eq:d3} \end{equation} is injective on $\mathcal{K}(X)/\!\!\sim $, that is, it determines functions $f\in \mathcal{K}(X)$ modulo the equivalence relation~$\sim$. \smallskip We refer to all above results as determination results on a specific class of functions (modulo a natural equivalence relation). Although the last result is formulated in an abstract metric space and is quite general, we will show in this work that a deeper result is hidden. Namely, the metric structure is ostensibly required to define the determining operator, but it is not really essential: the quantities $\Vert \nabla f(x)\Vert $ (in the smooth case) and $|\nabla f|(x)$ (in a metric setting) express the steepest descent of $f$ at a given point $x,$ however, this is not the only possible choice to deal with descent properties. For instance, one can also consider average descent (based on some probability measure on the space $X$) and free the notion from any dependence on the metric structure. The above leads to a definition of an abstract descent operator (which does not rely on a distance or even a topology). This abstract scheme, developed in Section~\ref{sec03:Axiomatic}, encompasses several instances of descent-type operators, in particular both paradigms of steepest descent and average descent. 
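As a small illustration of slope-type operators outside the smooth setting (our own toy sketch, not taken from the paper), the global slope $\mathscr{G}[f](x)=\sup_{y\neq x}\{f(x)-f(y)\}_+/d(y,x)$, recalled in the preliminaries below, is directly computable on a finite metric space:

```python
def global_slope(points, f, d):
    # G[f](x) = sup_{y != x} (f(x) - f(y))_+ / d(x, y) on a finite metric space
    return {x: max((max(f[x] - f[y], 0) / d(x, y) for y in points if y != x),
                   default=0.0)
            for x in points}

# f(x) = |x| on the five-point space {-2, -1, 0, 1, 2} with the usual distance
points = [-2, -1, 0, 1, 2]
f = {x: abs(x) for x in points}
G = global_slope(points, f, lambda x, y: abs(x - y))

assert G[0] == 0.0                      # G[f](x) = 0 iff x minimizes f
assert all(G[x] == 1.0 for x in points if x != 0)
```

The critical set $\mathscr{G}[f]^{-1}(0)=\{0\}$ here is exactly the set of global minimizers, in line with the remark on $\mathscr{G}$ made in Section~\ref{sec02:Pre}.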
In Section~\ref{sec04:Averaged} we study general diffusion processes in metric spaces and show that asymmetrization (via downhill orientation) is the key property to obtain a determination result, in a complete analogy to the asymmetric definitions of~\eqref{eq:d2}--\eqref{eq:d3}. In the last section we consider the particular case of descent operators in finite dimensional spaces and obtain an explicit classification of a broad subfamily of these operators. \section{Notation and Preliminaries}\label{sec02:Pre} We set $\overline{\mathbb{R}} = \mathbb{R} \sqcup \{-\infty,+\infty\}$ and $\overline{\mathbb{R}}_+ = \mathbb{R}_+\sqcup\{+\infty\}$. For any $a\in\mathbb{R}$ we set $a_+=\max\{a,0\}$. For two real numbers $r,s\in \mathbb{R}$, we write $r\wedge s := \min\{ r,s \}$ and $r\vee s := \max\{r,s \}$. Given a nonempty set $X$ and a function $f:X\to \overline{\mathbb{R}}$ we define the domain of $f$ as follows: \[ \mathrm{dom }(f) :=\{ x\in X:\, f(x)<+\infty \}\,. \] Given $\alpha\in \mathbb{R}$, we write \begin{align*} [f\leq \alpha] &:= \{ x\in X\ :\ f(x) \leq \alpha \}\\ [f < \alpha] &:= \{ x\in X\ :\ f(x) < \alpha \} \end{align*} to denote the sublevel set and strict sublevel set of $f$ at value $\alpha$. The sets $[f = \alpha]$, $[f\geq \alpha]$ and $[f>\alpha]$ are defined analogously. \smallskip \newline We shall often equip the set $X$ with a topology, denoted by $\tau$. In this case, we denote by $\mathcal{B}(X)$ the $\sigma$-algebra of the Borel subsets of the topological space $(X,\tau)$. \smallskip \newline We say that a function $f:X\to\mathbb{R}\cup\{+\infty\}$ is \textit{$\tau$-lower semicontinuous} if all sublevel sets $[ f\leq a]$, $a\in\mathbb{R}$, are $\tau$-closed subsets of $X$. The function $f$ is called \textit{$\tau$-coercive} if \begin{equation}\label{sec02:eq:Coercive} \text{ for every } \alpha\in (-\infty, \sup f),\quad \text{the sublevel set } \,\, [f\leq \alpha]\text{ is $\tau$-compact}. 
\end{equation} We simply call $f$ lower semicontinuous (respectively, coercive), when no ambiguity on the topology occurs. Notice that the above definition of coercivity encompasses in particular all constant functions. \smallskip We further denote by \begin{equation} \mathcal{C}(X) \text{ the space of continuous functions from } X \text{ to } \mathbb{R} \end{equation} and we define the subclass of coercive continuous functions by \begin{equation} \mathcal{K}(X) := \{ f\in \mathcal{C}(X)\ :\ f\,\text{ is }\tau\text{-coercive}\}. \end{equation} If $(X,\tau)$ is compact, then every lower semicontinuous function is coercive and $\mathcal{K}(X) = \mathcal{C}(X)$.\smallskip\newline Let $\mathcal{L}_n$ stand for the usual Lebesgue measure over $\mathbb{R}^n$ and let $B_n(x,r)$ (respectively, $\overline{B}_n(x,r)$) be the open (respectively, closed) ball centered at $x\in\mathbb{R}^n$ of radius $r>0$. We also denote by $\mathbb{B}_n$ (respectively, $\mathbb{S}_n$) the closed unit ball (respectively, unit sphere) of $\mathbb{R}^n$. If there is no ambiguity, we omit the subscript $n$ for each of the elements above. It is well known (see, e.g., \cite{SV1989}) that the $n$-dimensional volume of the ball of radius $r>0$ in $\mathbb{R}^n$ is given by \begin{equation}\label{eq:VolumeNBall} \mathcal{L}_n(B(0,r)) = \frac{\pi^{n/2}}{\Gamma\left( \tfrac{n}{2} + 1 \right)}r^n\,, \end{equation} where $\Gamma$ stands for the gamma function. In particular, the volume of the $n$-dimensional ball $B(0,r)$ is proportional to $r^n$. For any (affine) subspace $W$ of $\mathbb{R}^n$, we denote by $\dim(W)$ its (affine) dimension. \smallskip\newline We say that a family $\mathcal{F}$ of real-valued functions is a \textit{cone} if for every $f\in\mathcal{F}$ and $r\geq 0$ we have $rf \in \mathcal{F}$.
In addition, we say that $\mathcal{F}$ is a \emph{translation cone} if it is closed under translations (that is, for every $f\in\mathcal{F}$ and every constant $c\in\mathbb{R}$, we have that $f+c\in\mathcal{F}$). Clearly, the set $\mathcal{K}(X)$ of coercive continuous functions is a translation cone in $\mathcal{C}(X)$. \smallskip \newline For an operator $T:\mathcal{F}\to (\overline{\mathbb{R}}_+)^X$, we define its \textit{domain} \[ \dom(T):=\{ f\in \mathcal{F}: \quad T[f](x)<+\infty,\,\,\text{for all } x\in X\}. \] If $(X,d)$ is a metric space, we define the metric slope $|\nabla f|(x)$ of an extended real-valued function $f:X\rightarrow \mathbb{R}\cup \{+\infty \}$ at the point $x\in \mathrm{dom}(f)$ as follows: \begin{equation}\label{eq:metslo} |\nabla f|(x):=\left\{ \begin{array}{cl} \underset{y\rightarrow x}{\limsup }\frac{\left\{ f(x)-f(y)\right\} _{+}}{d(y,x)}, & \text{ if }x\text{ is not isolated,} \\ 0, & \text{ otherwise.} \end{array} \right. \end{equation} In the same setting, the global slope $\mathscr{G}[f](x)$ is defined as follows: \begin{equation*} \mathscr{G}[f](x):=\,\underset{y\in X\setminus\{x\}}{\sup }\frac{\left\{ f(x)-f(y)\right\} _{+}}{d(y,x)}. \end{equation*} Notice that $\mathscr{G}[f](x)=0$ if and only if $x\in \argmin f$ (\textit{i.e.}, $x$ is a global minimum of $f$), while $|\nabla f|(x)=0$ whenever $x$ is a local minimum of $f$. The notions of metric slope (also known as strong slope) and global slope are well known in the literature (see, e.g., \cite{AGS2008} and the references therein). \smallskip\newline Let us now assume that $X$ is a Banach space with dual $X^{\ast }.$ It is well-known that if $f:X\rightarrow \mathbb{R}$ is a smooth function, then \begin{equation*} |\nabla f|(x)=\Vert \nabla f(x)\Vert .
\end{equation*} In the nonsmooth setting, if $f:X\rightarrow \mathbb{R}\cup \{+\infty \}$ is a lower semicontinuous \textit{convex} function, its (Fenchel-Moreau) subdifferential $\partial f(x)$ at $x\in \mathrm{dom}(f)$ is defined as follows: \begin{equation*} \partial f(x)=\{p\in X^{\ast }:\forall y\in X,\,f(y)\geq f(x)+\langle p,y-x\rangle \}. \end{equation*} It is well-known that $\partial f(x)$ is a closed convex set and it is nonempty if $x$ is a point of continuity of $f$ (see, e.g., \cite{RockafellarWets1998}). Moreover, it is known (see, e.g., \cite[Proposition 1.4.4]{AGS2008}) that for any lower semicontinuous convex function over a Banach space, one has \begin{equation} |\nabla f|(x)=\mathscr{G}[f](x)=d(0,\partial f(x)). \label{eq:global-local} \end{equation} (For instance, for $f(x)=|x|$ on $X=\mathbb{R}$ one has $\partial f(0)=[-1,1]$ and $\partial f(x)=\{\mathrm{sign}(x)\}$ for $x\neq 0$, so that all three quantities in \eqref{eq:global-local} equal $1$ for $x\neq 0$ and vanish at $x=0$.) \subsection{State-of-the-art}\label{ssec:2.1} The derivative of a smooth function recovers, up to an additive constant, the function through integration. In the nonsmooth case, Rockafellar \cite{Rock70} showed that every lower semicontinuous convex function can be represented through its subdifferential by means of a \textit{nonsmooth integration}. This result has been refined in \cite{BD2015} for Banach spaces with the Radon-Nikodym property, provided the function satisfies a mild coercivity property (namely, the asymptotic cone of its epigraph is epi-pointed). In this latter case, a partial knowledge of the subdifferential mapping $\partial f$ is sufficient to recover the function up to a constant.\smallskip Historically, this integration result was first stated as a determination result: for any two proper convex lower semicontinuous functions $f,g:X\to\mathbb{R}$ over a Banach space $X$, one has that \begin{equation} (\partial f(x) = \partial g(x), \forall x\in X) \implies f = g + c,\,\, \text{for some }c\in\mathbb{R}. \end{equation} This result was first obtained in Hilbert spaces by Moreau \cite{Moreau1965}, and generalized to Banach spaces one year later by Rockafellar \cite{Rockafellar1966}.
A more general result was established by Brezis for monotone operators \cite{BrezisOperateurs1973}, where the same determination result can be obtained in Hilbert spaces only in terms of the element of minimal norm of the subdifferential, that is, \begin{equation} ( \mathrm{proj}(0,\partial f(x)) = \mathrm{proj}(0,\partial g(x)), \forall x\in \mathcal{H}) \implies f = g + c,\,\, \text{for some }c\in\mathbb{R}, \end{equation} where $\mathrm{proj}(x,A)$ denotes the metric projection of $x$ onto the set $A$. Notice that knowledge of a full gradient $\nabla f(x)$ (respectively, subdifferential $\partial f(x)\subset X^{\ast }$, or $ \mathrm{proj}(0,\partial f(x))\in\mathcal{H}$) at many (all) $x\in X$ is already rich information: at every such point $x$ we need to know a vector (respectively, a set of vectors). Notwithstanding, it has recently become clear that much less information (namely, a scalar) is often sufficient if our objective is only to determine functions (rather than recovering them via an explicit formula). This is summarized below: \subsubsection{Determination of convex functions\label{ssec:3.1}} Let $\mathcal{H}$ be a Hilbert space and $f:\mathcal{H}\rightarrow \mathbb{R}$ be a $\mathcal{C}^{2}$-smooth convex and bounded from below function. Set $V(x)=\frac{1}{2}\Vert \nabla f(x)\Vert ^{2}$ and consider the second-order dynamical system \begin{equation} \ddot{x}(t)=\nabla V(x(t)), \label{eq:DS2} \end{equation}% with initial condition $x(0)=x_{0}\in \mathcal{H}$. It has been shown in \cite{BCD2018} that every evanescent solution of~\eqref{eq:DS2} (that is, every solution satisfying $\Vert \dot{x}(t)\Vert \rightarrow 0$ and $\Vert \nabla V(x(t))\Vert \rightarrow 0,$ as $t\rightarrow \infty $) is a solution of the first-order gradient system: \begin{equation} \left\{ \begin{array}{l} \dot{x}(t)=-\nabla f(x(t))\smallskip \\ x(0)=x_{0}. \end{array} \right.
\label{eq:DS1} \end{equation} On the other hand, the $\mathcal{C}^{2}$-smoothness assumption guarantees that~\eqref{eq:DS1} has a unique solution. The fact that $f$ is bounded from below ensures that this solution is evanescent. By straightforward differentiation we deduce that this solution is also a solution of the second-order system~\eqref{eq:DS2}, therefore \eqref{eq:DS2} and \eqref{eq:DS1} have the same solutions. Since \eqref{eq:DS2} depends only on $\Vert \nabla f\Vert $ (rather than on $\nabla f$), the following conclusion has been obtained: \begin{itemize} \item If $f,g:\mathcal{H}\rightarrow \mathbb{R}$ are two $\mathcal{C}^{2}$-smooth, convex, bounded from below functions, then \begin{equation} \Vert \nabla f\Vert =\Vert \nabla g\Vert \,\Longleftrightarrow \,\nabla f=\nabla g\,\Longleftrightarrow \,f=g+c,\,\text{for some }c\in \mathbb{R}. \label{eq:tahar} \end{equation} \end{itemize} Notice that $\mathcal{C}^{2}$-smoothness was required in order to define the system \eqref{eq:DS2}. However, this assumption can be relaxed to $\mathcal{C}^{1}$-smoothness, assuming the existence of minimizers \cite{Baillon2018}. This is based on the remark that \begin{equation} \Vert \nabla f\Vert =\Vert \nabla g\Vert \,\,\Longleftrightarrow \,\,\langle \nabla (f+g),\nabla (f-g)\rangle = 0 \label{eq:baillon} \end{equation} (indeed, $\Vert \nabla f\Vert ^{2}-\Vert \nabla g\Vert ^{2}=\langle \nabla (f+g),\nabla (f-g)\rangle $), which ensures in turn that the function $f-g$ is constant along the gradient orbits of the (convex) function $f+g.$ Since each such orbit \textit{lands} on the (common) set $S$ of minimizers of the functions $f,$ $g$ and $f+g,$ and since $f-g$ is constant there (with value $\min f-\min g$), the result follows.\smallskip \newline A generalization of \eqref{eq:tahar} has been carried out in \cite{PSV2021}, where the smoothness assumption has been replaced by continuity.
\begin{itemize} \item If $f,g:\mathcal{H}\rightarrow \mathbb{R}$ are two convex, continuous and bounded from below functions, then \begin{equation} \begin{aligned} d(0,\partial f(x))=d(0,\partial g(x)),\;\text{for all }x\in \mathcal{H}\,\,&\Longleftrightarrow \,\,\partial f=\partial g\\ &\Longleftrightarrow \,\,f=g+c,\,\,\text{for some }c\in\mathbb{R}. \end{aligned} \label{eq:salas} \end{equation} \end{itemize} To achieve the above result, the authors studied the subgradient system $\dot{x}(t)\in -\partial f(x(t))$ and showed that in this case the assumption $d(0,\partial f(\cdot ))\geq d(0,\partial g(\cdot ))$ yields $f\geq g+c$. The proof is based on two key observations. First, the solution $x(t)$ is not only a minimizing curve for $f$ (i.e., $f(x(t))\to \inf f$ as $t\to +\infty$), but it is also a minimizing curve for $g$. Second, the chain rule of the convex subdifferential entails that $(f-g)$ is nonincreasing along $x(t)$. Thus, one can consider $c = \inf f - \inf g$. After proving this comparison principle, \eqref{eq:salas} follows by symmetry. \subsubsection{Determination in metric spaces} \label{ssec:3.2} The convexity assumption was important for the proofs of \eqref{eq:tahar} and \eqref{eq:salas}. However, the proof outlined via \eqref{eq:baillon} used convexity for two purposes: to conclude that every steepest descent curve lands on a critical point (\textit{i.e.}, has an accumulation point in the set of critical points), and to guarantee that every critical point is a global minimizer, that is, \begin{equation}\label{eq:stalin} \mathrm{Crit\ }f=\left\{ x\in X:\ \nabla f(x)=0\right\} =\arg \min f. \end{equation} Assuming~\eqref{eq:stalin} and some coercivity condition (instead of convexity), the same argument leads to the following result: \begin{itemize} \item If $f,g:\mathcal{H}\rightarrow \mathbb{R}$ are two $\mathcal{C}^{1}$-smooth coercive functions, then: \begin{equation*} \left.
\begin{array}{c} \Vert \nabla f\Vert =\Vert \nabla g\Vert \text{ \quad and}\medskip \\ \mathrm{Crit\ }f=\arg \min f=\arg \min g\neq \emptyset \end{array} \right\} \,\Longrightarrow \;\,f=g+c,\,\,\text{for some }c\in\mathbb{R}. \end{equation*} \end{itemize} On the other hand, all results mentioned in the previous subsection are strongly based on (sub)gradient dynamical systems and the Hilbertian structure of the space. Quite surprisingly, it turns out that this structure is actually not required. Indeed, in the recent work \cite{TZ2022}, the result \eqref{eq:salas} has been extended to arbitrary Banach spaces, through a completely different approach, based on the notion of \textit{countable orderable sets} introduced in \cite{GZ1992}. In that work the authors establish that two continuous and bounded from below functions $f,g:X\rightarrow \mathbb{R}$ defined on a metric space $(X,d)$ and with finite global slopes are equal up to a constant, provided they have the same global slope at every point. In other words: \begin{equation} \mathscr{G}[f]=\mathscr{G}[g]<+\infty \,\Longrightarrow \,f=g+c,\text{ for some }c\in\mathbb{R}. \label{eq:Glodet} \end{equation} The key technique to achieve such a result is the construction of a minimizing sequence by means of the global slope. The construction is based on the following result (proved in \cite{TZ2022}): for every sequence $\{x_i\}_i$ of the metric space $(X,d)$ and for every proper extended real-valued function $f:X\to \mathbb{R}\cup\{+\infty\}$ the following holds: \begin{equation} \left( \lim_{i\to \infty}\mathscr{G}[f](x_i) = 0 \text{ and }\sum_{i=1}^\infty \mathscr{G}[f](x_i)d(x_i,x_{i+1}) < \infty\right) \implies \liminf_{i\to \infty} f(x_i ) = \inf_X f. \end{equation} Although the setting is quite general (metric spaces), the notion of global slope is rather restrictive, since it does not coincide with the modulus of the derivative in the smooth case.
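This discrepancy is easy to observe numerically (a toy check added for illustration; the function and sample grid are assumed choices): for the smooth but nonconvex function $f(x)=(x^{2}-1)^{2}$, the derivative vanishes at the local maximum $x=0$, while the global slope there does not.

```python
# Toy check (assumed example): for a smooth nonconvex function, the global slope
#   G[f](x) = sup_{y != x} (f(x) - f(y))_+ / |x - y|
# need not coincide with |f'(x)|.  Here f'(0) = 0, yet G[f](0) >= 1.

def f(x):
    return (x * x - 1.0) ** 2               # local max at 0, global minima at -1, 1

def global_slope_1d(f, x, grid):
    """Discrete approximation of the global slope of f at x over a sample grid."""
    return max((max(f(x) - f(y), 0.0) / abs(x - y) for y in grid if y != x),
               default=0.0)

grid = [i / 100.0 for i in range(-200, 201)]  # sample points in [-2, 2]

G0 = global_slope_1d(f, 0.0, grid)
assert G0 >= 1.0                              # e.g. y = 1 gives (f(0) - f(1))/|0 - 1| = 1

# By contrast, the difference quotients near 0 are small: the local (metric)
# slope at 0 equals |f'(0)| = 0.
near = max(max(f(0.0) - f(y), 0.0) / abs(y) for y in (-0.01, 0.01))
assert near < 0.05
```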
But this notion is a very good fit for convex functions defined on a general Banach space $X.$ In this case, combining \eqref{eq:global-local} with \eqref{eq:Glodet} yields a generalization of~\eqref{eq:salas}.\smallskip In an independent work~\cite{DS2022}, the authors considered the local notion of metric slope and established the following comparison result for the class of continuous coercive functions. In what follows we denote by \begin{equation*} \mathrm{Crit }f=\left\{ x\in X:\ |\nabla f|(x)=0\right\} \end{equation*} the set of (metrically) critical points. \begin{proposition}[slope comparison] \label{prop:metric-comp} Let $(X,d)$ be a metric space and $f,g:X\rightarrow \mathbb{R}$ be two continuous coercive functions. Assume that \begin{itemize} \item[(i).] $|\nabla f|(x)>|\nabla g|(x)$, for all $x\in X\setminus \mathrm{Crit }f$;\quad and \item[(ii).] $f(x)>g(x)$, for all $x\in \mathrm{Crit }f$. \end{itemize} Then, $f>g$. \end{proposition} The proof was obtained by contradiction, using discrete iterations and transfinite induction. The following determination result was then obtained as a consequence of Proposition~\ref{prop:metric-comp}. \begin{theorem}[Determination in metric spaces] \label{thm:metric-det}Let $(X,d)$ be a metric space and $f,g:X\rightarrow \mathbb{R}$ be two continuous coercive functions. Assume that \begin{itemize} \item[(i).] $|\nabla f|(x)=|\nabla g|(x)<+\infty$, for all $x\in X$;\quad and \item[(ii).] $f(x)=g(x)$, for all $x\in \mathrm{Crit\ }f$. \end{itemize} Then, $f=g$. \end{theorem} The above result asserts that information on the metric slope $|\nabla f|$ and critical values is sufficient to determine every continuous coercive function with finite slope (in particular, every Lipschitz continuous coercive function).
Taking into account the pathologies prevalent among Lipschitz functions, the above statement appears close to optimal: in \cite{DS2022} several counterexamples are presented to show the pertinence of the assumptions. This being said, there is still room for improvement: indeed, assuming $X$ is a complete metric space, it seems plausible to relax the coercivity/compactness assumption (which is required in the current proof) to an alternative assumption ensuring the existence of appropriate descent (generalized) sequences linking any point to the critical set. \subsection{Description of the current work} Revisiting the arguments employed in \cite{DS2022} for the proof of Proposition~\ref{prop:metric-comp} and Theorem~\ref{thm:metric-det}, we observe that continuity and coercivity are topological notions, while the metric structure of $(X,d)$ is only required in order to define the metric slope, see~\eqref{eq:metslo}. In particular, one can assume continuity and coercivity with respect to another topology (not related to the given distance $d$) and the topological part of the proof can be completely decoupled.\smallskip \newline In this work we show that a result similar to Theorem~\ref{thm:metric-det} holds for any topological space equipped with a Borel probability measure $\mu $, if we replace the metric slope $|\nabla f|(x)$ (corresponding to the steepest descent at $x$) by the $\mu $-\textit{average descent }$T_{\mu }(f)(x)$ \textit{at }$x$ given by \begin{equation*} T_{\mu }(f)(x):=\int_{X}\left\{ f(x)-f(y)\right\} _{+}d\mu (y)=\int_{[f\leq f(x)]}\left[ f(x)-f(y)\right] d\mu (y). \end{equation*} More generally, we introduce an abstract descent operator $T[f]$ (see Definition~\ref{sec03:def:ModulusOfDescent}) that encompasses both metric and global slopes (in metric spaces) and average descent (in probability spaces) as well as many other instances, see Subsection~\ref{sec03-03:Stability} for further examples and stability properties of this operator.
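For intuition, the $\mu$-average descent can be computed by hand on a finite probability space. The following sketch (a toy illustration with assumed data, not part of the analysis) evaluates $T_{\mu}$ and checks the behaviour one expects of a descent-type quantity:

```python
# Toy sketch (assumed data): the mu-average descent on a finite state space,
#   T_mu[f](x) = sum_y mu(y) * (f(x) - f(y))_+ .

def average_descent(f, mu):
    """Return T_mu[f] as a dict over the (finite) common state space."""
    return {x: sum(m * max(f[x] - f[y], 0.0) for y, m in mu.items())
            for x in f}

X = ["a", "b", "c", "d"]                   # illustrative state space
mu = {x: 0.25 for x in X}                  # uniform probability measure
f = {"a": 0.0, "b": 1.0, "c": 2.0, "d": 3.0}

T = average_descent(f, mu)
assert T["a"] == 0.0                       # no descent at the global minimum
assert average_descent({x: f[x] + 5.0 for x in X}, mu) == T   # translation invariance
T2 = average_descent({x: 2.0 * f[x] for x in X}, mu)
assert all(T2[x] > T[x] for x in X if T[x] > 0.0)             # scaling increases descent
```

On this data $T_{\mu}$ takes the values $0$, $0.25$, $0.75$ and $1.5$ at the four states, vanishing exactly at the global minimizer.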
We then establish an abstract determination result (Theorem~\ref{sec03:thm:Determination}) revealing that the metric structure is neither a minimal nor an optimal framework, as hinted by the topological and metric decoupling observed in \cite{DS2022}. In Section~\ref{sec04:Averaged} we study general stochastic processes in metric spaces and define adequate oriented operators (particular instances of Definition~\ref{sec03:def:ModulusOfDescent}) that allow us to obtain determination results. Finally, in Section~\ref{sec:5} we introduce an equivalence relation among descent moduli for functions $f\in \mathbb{R}^{\mathcal{V}}$ defined on finite spaces $\mathcal{V}$ and show that a typical descent modulus is equivalent to a steepest descent with respect to a prescribed active neighborhood system (see Theorem~\ref{critmap}). \section{Descent modulus: definition, properties and main examples} \label{sec03:Axiomatic} Let $\mathcal{F}$ be a family of functions from a nonempty set $X$ to $\mathbb{R}$. For an operator $T:\mathcal{F}\rightarrow (\overline{\mathbb{R}}_{+})^{X}$, we define its \textit{domain} \begin{equation}\label{eq:domT} \dom(T):=\{f\in \mathcal{F}:\quad T[f](x)<+\infty ,\,\,\text{for all }x\in X\}. \end{equation} We also define the set $\mathcal{Z}_{T}(f)$ of \textit{$T$-critical points} of $f\in \mathcal{F}$ as follows: \begin{equation} \mathcal{Z}_{T}(f)=\{x\in X\ :\ T[f](x)=0\}. \label{eq:ZTf} \end{equation} (Note that every $T$-critical point of $f$ is a global minimizer for the function $T[f]$.)\smallskip\newline In this section we give an axiomatic definition of an abstract descent operator, that is, an operator $T$ acting on (a certain class of) functions $f$ from $X$ to $\mathbb{R}$. This operator associates to each point $x\in X$ an extended nonnegative number $T[f](x)\in \mathbb{R}\cup \{+\infty \}$ which corresponds to an abstract measure of descent (henceforth called \textit{descent modulus}) of $f$ at $x$.
\smallskip \newline The required properties of this abstract definition will be kept minimal to encompass several instances stemming from classical and variational analysis, metric geometry and stochastic processes: in particular, the metric slope (used in~\cite{DS2022}), the global slope (used in~\cite{TZ2022}) and the notion of average descent (that will be discussed later in this work) are all captured by the proposed abstract scheme. \subsection{Axiomatic definition\label{sec03-01:Construction}} Let $\mathcal{F}$ be a translation cone in the space of functions from a nonempty set $X$ to $\mathbb{R}$. \begin{definition} [Abstract descent modulus]\label{sec03:def:ModulusOfDescent} Let $T:\mathcal{F}\rightarrow(\overline{\mathbb{R}}_{+})^{X}$ be a nonlinear operator.\\ We say that:\smallskip\newline\textrm{(D1).} $T$ \textit{preserves global minima}, if for every function $f\in\mathcal{F}$ and $x\in X$ we have \[ x\in\argmin f\implies x\in\mathcal{Z}_{T}(f)\,. \] \textrm{(D2).} $T$ \textit{is monotone at $x$}, if for every $f,g\in\mathcal{F}$ we have: \begin{equation} \forall z\in X:\,(f(x)-f(z))_{+}\geq(g(x)-g(z))_{+}\,\,\implies\,\,T[f](x)\geq T[g](x).\label{eq:ad} \end{equation} \textrm{(D3).} $T$ \textit{is scalar-monotone at $x$}, if for every function $f\in\mathcal{F}$ and $r>1$, we have \[ 0<T[f](x)<+\infty\,\implies\,T[f](x)<T[rf](x). \] The operator $T$ is called (scalar) monotone, if it is (scalar) monotone at every $x\in X$. \smallskip\newline We say that $T$ is a \textit{descent modulus} for the class $\mathcal{F}$ if properties (\textrm{D1})--(\textrm{D3}) hold, that is, if $T$ is monotone, scalar-monotone and preserves global minima. 
\end{definition} Before we proceed, let us briefly discuss the above properties:\medskip \newline Property \textrm{(D1)} states that there is \textit{no descent at global minima}; thus $T[f](x)=0$ holds at every $x\in\argmin f$.\smallskip\newline Property \textrm{(D2)} expresses the fact that the \textit{amount of descent} of $f$ at a point $x$ depends only on the sublevel set $[f\leq f(x)]$ and is captured by the function $z\mapsto(f(x)-f(z))_{+}$. Accordingly, for a fixed $x\in X$, the relation \begin{equation*}\label{eq:PreorderDef} g\preceq_{x}f\quad\underset{\text{def}}{\iff}\quad\forall z\in X:\quad(g(x)-g(z))_{+}\leq (f(x)-f(z))_{+} \end{equation*} is a preorder relation on $\mathcal{F}$ which roughly reads as follows: ``$f$ has more descent than $g$ at $x$''. Under this terminology, (\textrm{D2}) requires the mapping $\mathcal{F}\ni f\mapsto T[f](x)$ to be monotone with respect to $\preceq_{x}$. \smallskip\newline Notice further that \eqref{eq:ad} yields the following: if $g\preceq_x f$, then \begin{equation}\label{eq:ad2} z\notin[ f< f(x)]\quad\Longrightarrow\quad g(z)\geq g(x)\quad\text{(that is, } \,z\notin[ g<g(x)]\text{)}. \end{equation} This means that $g\preceq_x f$ implies that $[g< g(x)]\subset [f< f(x)]$. Finally, scalar-monotonicity in {\rm (D3)} expresses the fact that the descent of the function $g=rf$ should be larger than that of~$f$, when $r>1$. \smallskip\newline In conclusion, the above axioms (D1)--(D3) are natural requirements for an abstract notion of descent of a function $f$ at a point $x$. The following proposition reveals further properties that can be derived from the axioms of Definition~\ref{sec03:def:ModulusOfDescent}. \begin{proposition}[Properties of descent moduli]\label{sec03:prop:EquivalentProperties} Let $\mathcal{F}\subset\mathcal{C}(X)$ (as before) and $T:\mathcal{F}\to (\overline{\mathbb{R}}_+)^X$ be an operator.
Then: \smallskip\newline $\mathrm{(a)}.$ {\rm (one-step descent property)} $T$ is monotone if and only if for every $f,g\in \mathcal{F}$ and $x\in X$ \begin{equation}\label{sec03:eq:one-stepDescent} T[f](x) > T[g](x)\quad \implies \quad \exists z\in [f<f(x)]:\,\, f(x) - f(z) > g(x) - g(z). \end{equation} $\mathrm{(b)}.$ {\rm (translation-invariance)} If $T$ is a descent modulus for $\mathcal{F}$, then for every $c\in \mathbb{R}$ and $f\in \mathcal{F}$ we have: $$T[f] = T[f+c].$$ $\mathrm{(c)}.$ {\rm (strict monotonicity)} Let $T$ be monotone at $x\in X$. Then the following are equivalent:\smallskip\newline -- $\mathrm{(c_1)}.$ $T$ is scalar-monotone at $x$.\smallskip\newline -- $\mathrm{(c_2)}.$ For every $f,g\in \mathcal{F}$ with $T[f](x)>0$, $T[g](x)<+\infty$ and $[g\leq g(x)]\subset [f\leq f(x)]$, the condition $$\exists \delta>0:\, \forall z \in [g\leq g(x)],\quad f(x) - f(z)\geq (1+\delta)(g(x) - g(z)),$$ implies $$T[f](x) > T[g](x).$$ -- $\mathrm{(c_3)}.$ For every $f\in \mathcal{F}$, $x\in X$ and $r\in (1,+\infty)$ such that $0<T[f](x)$ and $T[rf](x) <+\infty$, the mapping $$ [0,r-1]\ni \delta\,\,\mapsto \,\, T[(1+\delta)f](x)$$ is strictly increasing. \end{proposition} \begin{proof} Let us show the above properties separately: \smallskip\newline $\mathrm{(a)}.$ (\textit{sufficiency}) Arguing by contradiction, assume that $T$ verifies the one-step property but is not monotone. Then, there exist $f,g\in\mathcal{F}$ and $x\in X$ with $(f(x)-f(\cdot))_+\geq (g(x)-g(\cdot))_+$ but with $T[f](x)<T[g](x)$. By the one-step descent property \eqref{sec03:eq:one-stepDescent}, there exists $z\in [g< g(x)]$ such that \[ g(x) - g(z) > f(x) - f(z). \] However, since $z\in[g<g(x)]$, the inequality $(f(x)-f(z))_+\geq (g(x)-g(z))_+ = g(x)-g(z)>0$ yields $f(x)-f(z)\geq g(x)-g(z)$, which contradicts the displayed inequality. \smallskip (\textit{necessity}) Assume that $T$ is monotone but the one-step descent property does not hold.
Then, there exist $f,g\in\mathcal{F}$ and $x\in X$ with $T[f](x) > T[g](x)$ such that for all $z\in X$ we either have $f(x)\leq f(z)$ or $f(x)-f(z)\leq g(x)-g(z)$. It is not hard to see that for every $z\in X$ one has that \[ (f(x)-f(z))_+ = \begin{cases} \, f(x)-f(z) \leq g(x)-g(z)\,,\quad&\text{ if }f(x)>f(z)\medskip \\ \,0\,, &\text{ otherwise.} \end{cases} \] Thus, in any case, we get that $(f(x)-f(z))_+\leq (g(x)-g(z))_+$. Then, monotonicity yields that $T[f](x)\leq T[g](x)$, which is a contradiction. \medskip\newline $\mathrm{(b)}.$ We show that for every $f\in \mathcal{F}$ and $c\in \mathbb{R}$, we have $T[f] = T[f+c]$. \smallskip\newline Notice that $\left[ (f(x)+c) - (f(z)+c)\right]_+ \geq \left[f(x)-f(z)\right]_+$ holds trivially for all $x,z\in X$. By monotonicity we deduce that $T[f]\leq T[f+c]$. Replacing now $f$ by $f' = f+c$ and $c$ by $c' = -c$, we obtain the reverse inequality, and hence equality. \smallskip\newline $\mathrm{(c)}.$ Let us show first that $(c_1)\Rightarrow (c_2)$:\smallskip\newline Arguing by contradiction, assume that there exist $f,g\in\mathcal{F}$ and $\delta>0$ satisfying the hypotheses of the statement and $x\in X$ with $0 < T[f](x)\leq T[g](x)< +\infty$. Then for all $z\in X$ it holds: \[ (f(x) - f(z))_+\geq ((1+\delta)g(x) - (1+\delta)g(z))_+\,. \] By monotonicity, we deduce that $T[(1+\delta)g] \leq T[f]$. Further, using scalar-monotonicity, we get \[ 0 < T[g](x)< +\infty\, \implies \, T[g](x) < T[(1+\delta) g](x) \leq T[f](x) \leq T[g](x), \] which is a contradiction.\smallskip\newline Let us now show that $(c_2)\Rightarrow (c_3)$: \smallskip\newline Let $f\in \mathcal{F}$, $x\in X$ and $r>1$ be such that $0<T[f](x)$ and $T[rf](x)<+\infty$. Fix $\delta_1,\delta_2 \in [0,r-1]$ with $\delta_1<\delta_2$. We set $ g = (1+\delta_1)f$ and $h = (1+\delta_2)f$. Monotonicity yields that $0<T[f](x)\leq T[h](x)$ and $T[g](x)\leq T[rf](x)<+\infty$.
Then, setting $$\delta = \frac{1+\delta_2}{1+\delta_1} - 1$$ we have that for all $y\in X$, $[g\leq g(y)] = [h\leq h(y)]$, and for all $y,z\in X$: \[ h(y) - h(z) = (1+\delta)(g(y) - g(z)). \] Thus, $\mathrm{(c_2)}$ yields that $T[h](x)>T[g](x)$. \smallskip\newline Let us finally establish that $(c_3)\Rightarrow (c_1)$. \smallskip\newline To this end, let $f\in \mathcal{F}$, $r>1$ and $x\in X$ such that $0<T[f](x)<+\infty$. We need to show that $T[f](x) < T[rf](x)$. This holds trivially if $T[rf](x) = +\infty$, therefore we can assume that $T[rf](x) <+\infty$. Since $T$ is monotone, we have that $T[rf](x) \geq T[f](x)$ already, and in particular, $T[rf](x)>0$. Then, by hypothesis, the mapping $$[0,r-1]\ni \delta \mapsto T[(1+\delta)f](x)$$ is strictly increasing, which leads us directly to the desired inequality. \end{proof} \subsection{Determination in topological spaces}\label{sec03-02:GeneralDetermination} Let $(X,\tau )$ be a topological space and let $T$ be a descent modulus for $\mathcal{K}(X)$. Let us define the following equivalence relation on the class $\mathcal{K}(X)$ of continuous coercive functions: we say that the functions $f,g\in \mathcal{K}(X)$ are equivalent (and we denote $f\thicksim g$) if they have the same $T$-critical set and they are equal there.\\ In other words: \begin{equation*} f\thicksim g\qquad \Longleftrightarrow \qquad \mathcal{Z}_T(f)=\mathcal{Z}_T(g)=S\quad \text{and}\quad f|_{S}=g|_{S}. \end{equation*} In this section, borrowing from techniques developed in \cite{DS2022}, we show that properties (D1)--(D3) of the descent modulus (\textit{cf.} Definition~\ref{sec03:def:ModulusOfDescent}) are sufficient to guarantee that the mapping $f\mapsto T[f]$ is injective on $\mathcal{K}(X)$, modulo the above equivalence relation. Therefore, according to our terminology, the descent modulus \textit{determines} the class $\mathcal{K} (X)$.
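To see this equivalence relation at work in the simplest possible setting, consider the following toy computation (an illustrative sketch with assumed data): on a finite metric space every real-valued function is continuous and coercive, and the global slope is a natural descent-type quantity; two functions with the same slope function and the same critical set may differ by a constant, which is exactly the ambiguity that $\thicksim$ factors out.

```python
# Toy illustration (assumed data): the global slope on a finite metric space,
#   G[f](x) = max_{y != x} (f(x) - f(y))_+ / d(x, y).
# Functions with the same slope and the same critical set can still differ
# by a constant -- the equivalence relation described above.

def global_slope(f, dist):
    """Global slope of f on a finite metric space (both given as dict/callable)."""
    return {x: max(((f[x] - f[y]) / dist(x, y) for y in f if f[y] < f[x]),
                   default=0.0)
            for x in f}

X = range(4)                                   # points 0,1,2,3 on the real line
dist = lambda x, y: float(abs(x - y))
f = {x: float(x * x) for x in X}               # f(x) = x^2
g = {x: f[x] + 7.0 for x in X}                 # g = f + 7

Gf, Gg = global_slope(f, dist), global_slope(g, dist)
assert Gf == Gg                                # same slope function,
assert [x for x in X if Gf[x] == 0.0] == [0]   # same critical set {0},
assert all(g[x] - f[x] == 7.0 for x in X)      # and f, g differ by a constant.
```

Here $f$ and $g$ share the slope function and the critical set, so no descent information can separate them; they are identified by $\thicksim$.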
At this stage, let us also outline the topological nature of this result: no linear or metric structure is required. \smallskip \newline The results of this section will be stated in a slightly more general framework. We assume, similarly to the previous section, that $\mathcal{F}\subset\mathcal{K}(X)$ is a translation cone.\smallskip \newline We start with the following lemma. \begin{lemma}[strict domination of descent modulus] \label{sec03:lemma:strictComparison} Let $T$ be a descent modulus for the class~$\mathcal{F}$. Let $f,g\in \dom(T)$ be such that \begin{equation*} \forall x\in X\setminus \mathcal{Z}_T(f),\quad T[f](x) > T[g](x). \end{equation*} Then, for all $x\in X$, we have that \begin{equation*} f(x) \geq g(x) + \mu(x), \end{equation*} where \begin{equation*} \mu(x) := \inf\{(f-g)(z)\ :\ z\in [f\leq f(x)]\cap \mathcal{Z}_T(f)\}\in \mathbb{R}\cup \{-\infty\}. \end{equation*} \end{lemma} \begin{proof} Since the set of global minimizers of every function $f\in\mathcal{F}$ is nonempty and the abstract descent modulus $T$ preserves global minima, we deduce that $\mathcal{Z}_T(f)\neq \emptyset$ and consequently, $ \mu(x) <+\infty$. Let us assume, towards a contradiction, that there exists $x\in X$ such that $f(x)<g(x) + \mu(x)$. Then, clearly $ \mu(x)>-\infty$, which readily yields $x\in X\setminus \mathcal{Z}_T(f)$ (indeed, if $x\in \mathcal{Z}_T(f)$, then $\mu(x)\leq (f-g)(x)$, that is, $f(x)\geq g(x)+\mu(x)$). Therefore, by assumption, $T[f](x) > T[g](x)$. Applying the one-step descent property~\eqref{sec03:eq:one-stepDescent} of~$T$, we infer that there exists $z_0\in X$ such that \[ f(z_0)<f(x)\quad\text{and}\quad(g-f)(z_0) = c > (g-f)(x)> - \mu(x). \] Since $z_0$ is not a $T$-critical point, we can repeat the above argument to obtain $z_1\notin\mathcal{Z}_{T}(f)$ such that $f(z_1)<f(z_0)$ and $(g-f)(z_1) > c = (g-f)(z_0)$.
Following the strategy of \cite[Proposition~2.2]{DS2022}, we construct (by means of a transfinite induction over the ordinals) a generalized sequence $\{z_{\alpha}\}_{\alpha}\subset [f\leq f(z_0)]$ such that $\{f(z_{\alpha})\}_{\alpha}$ is decreasing and $\{(g-f)(z_\alpha)\}_{\alpha}$ is increasing: \smallskip\newline -- If $\alpha = \beta+1$ is a successor ordinal then, since $z_{\beta}\notin \mathcal{Z}_T(f)$ and $g(z_{\beta}) \geq f(z_{\beta}) +c$, the one-step descent property~\eqref{sec03:eq:one-stepDescent} yields $z_{\beta+1}$ such that \[ f(z_{\beta +1})< f(z_{\beta}) \leq f(z_0)\quad\text{ and }\quad(g-f)(z_{\beta+1})> (g-f)(z_\beta)\geq c. \] -- If $\alpha$ is a limit ordinal and $\{z_{\beta}\}_{\beta <\alpha}\subset [f\leq f(z_0)]$ is defined accordingly, then since the sublevel set $[f\leq f(z_0)]$ is compact, the $\omega$-limit set \[ A = \bigcap_{\beta < \alpha} \overline{\{ z_{\eta}\ :\ \beta\leq\eta < \alpha \}}, \] is nonempty. Pick any $z_{\alpha}\in A$. Clearly, $z_{\alpha}\in [f\leq f(z_0)]$, $f(z_{\alpha})\leq f(z_{\beta})$ for all $\beta\leq \alpha$ and, by continuity, for every $\beta<\alpha$, \[ (g-f)(z_{\beta}) = \inf \{ (g-f)(z_{\eta})\ :\ \beta\leq\eta < \alpha \} \leq (g-f)(z_{\alpha}). \] Notice that the above construction never meets a $T$-critical point of $f$. Indeed, if $z_{\alpha}\in \mathcal{Z}_T(f)$ for some ordinal $\alpha$, then since $f(z_{\alpha})<f(x)$ we would have that \begin{align*} -\mu(x) &= \sup\{ (g-f)(z)\ :\ z\in [f\leq f(x)]\cap \mathcal{Z}_T(f) \}\\ &\geq (g-f)(z_{\alpha}) \geq c > -\mu(x), \end{align*} which is a contradiction. Due to a cardinality obstruction, we necessarily deduce that $z_{\alpha}=z_{\beta}$ for some ordinals $\alpha$, $\beta$ with $\alpha>\beta$. This yields \[ (g-f)(z_{\beta+1})>(g-f)(z_{\beta})= (g-f)(z_{\alpha})\geq (g-f)(z_{\beta+1}), \] which is clearly a contradiction. The conclusion follows.
\end{proof} \begin{theorem}[Comparison principle] \label{sec03:thm:ComparisonPrinciple} Let $T$ be a descent modulus for $\mathcal{F}$ and let $f,g\in \dom(T)$ and $c\in \mathbb{R}$ be such that \begin{itemize} \item[\textrm{(i).}] $T[f](x) \geq T[g](x)$, for all $x\in X$; and \item[\textrm{(ii).}] $f(\bar x)\geq g(\bar x)+c$, for all $\bar x\in \mathcal{Z}_T(f)$. \end{itemize} Then, $f\geq g+c$. \end{theorem} \begin{proof} Let $x \in X\setminus\mathcal{Z}_T(f)$ be arbitrarily chosen. Fix $\varepsilon>0$, set $f_{\varepsilon } = (1+\varepsilon)f$ and notice that monotonicity of $T$ yields that $\mathcal{Z}_T(f_{\varepsilon})\subset \mathcal{Z}_T(f)$. Let $z\in X\setminus \mathcal{Z}_T(f_{\varepsilon})$. We have two cases: \begin{itemize} \item[Case 1:] $z\in X\setminus\mathcal{Z}_T(f)$. Then scalar-monotonicity of $T$ yields \[ T[f_{\varepsilon}](z) = T[(1+\varepsilon)f](z) > T[f](z) \geq T[g](z). \] \item[Case 2:] $z\in \mathcal{Z}_T(f)\setminus\mathcal{Z}_T(f_{\varepsilon})$. Then $T[f_{\varepsilon}](z)>0 = T[f](z)\geq T[g](z)$. \end{itemize} In both cases, $T[f_{\varepsilon}](z)> T[g](z)$. Thus, by Lemma~\ref{sec03:lemma:strictComparison}, we have that \begin{align*} f_{\varepsilon}(x) &\geq g(x) + \inf\{ (f_{\varepsilon}-g)(z)\ :\ z\in \mathcal{Z}_T(f_{\varepsilon})\cap [f_{\varepsilon}\leq f_{\varepsilon}(x)] \}\\ &\geq g(x) + \inf\{ (f_{\varepsilon}-g)(z)\ :\ z\in \mathcal{Z}_T(f)\cap [f\leq f(x)] \}\\ &\geq g(x) + c + \varepsilon \inf\{ f(z) :\ z\in \mathcal{Z}_T(f)\cap [f\leq f(x)]\}\\ &\geq g(x) + c + \varepsilon\min f. \end{align*} Finally, by taking $\varepsilon\to 0$, we deduce that $f(x)\geq g(x) + c$. Since for $x\in \mathcal{Z}_T(f)$ the inequality holds directly by assumption (ii), the proof is complete. \end{proof} Applying Theorem~\ref{sec03:thm:ComparisonPrinciple} twice, we easily deduce the following determination result. \begin{theorem}[Determination of continuous coercive functions] \label{sec03:thm:Determination} Let $T$ be a descent modulus for a translation cone $\mathcal{F}\subset\mathcal{K}(X)$.
Let $f,g\in \dom(T)$ and $c\in \mathbb{R}$ be such that \begin{itemize} \item[(i).] $T[f](x) = T[g](x)$ for all $x\in X$ (whence $\mathcal{Z}_T(f)= \mathcal{Z}_T(g)$); and \item[(ii).] $f(x) = g(x) + c$, for all $x\in \mathcal{Z}_T(f)$. \end{itemize} Then, $f = g + c$. \end{theorem} \begin{remark} A descent modulus $T$ for a class $\mathcal{F}$ is meant to assign a quantified measure of descent, at every point, to each $f\in\mathcal{F}$. This quantity is allowed to be infinite at some points of some functions, and whenever this happens the determination result cannot apply. Therefore, $T$ does not determine the whole class $\mathcal{F}$, but only the functions in $\dom(T)\subset \mathcal{F}$. \end{remark} \subsection{Stability properties of descent moduli and examples\label{sec03-03:Stability}} The metric slope (used, e.g., in \cite{AGS2008,GMT1980}) is a natural instance of an abstract descent modulus, and the results of the previous section can be seen as a minimal axiomatic presentation of the slope determination result given in \cite{DS2022}. In this section, we show that the axiomatic descent modulus also captures the notion of global slope (used in \cite{TZ2022}), as well as several natural adaptations of the notion of slope to topological spaces, emancipating it from the metric framework. \smallskip \newline Throughout this section, $\mathcal{F}$ will denote a translation cone of $\mathcal{C}(X)$. \begin{proposition}[$m$-slope] \label{prop-m-slope} Let $m:X\times X\to \mathbb{R}_+$ be a mapping satisfying: \begin{equation*} m(x,y) = 0 \quad\iff \quad x= y \qquad\text{(separation axiom)}. \end{equation*} Let further $\mathcal{D} = \{ \mathcal{D}_x \}_{ x\in X}$ be a family of subsets of $X$ satisfying $x\in \mathcal{D}_x$ for every $x\in X$. Then, the $m$-slope \begin{equation*} s_f(x) := \left\{ \begin{array}{cl} \displaystyle\limsup_{y\to x} \Delta_f^+(x,y) & \text{ if }x\text{ is not isolated,} \\ 0 & \text{ otherwise,} \end{array} \right.
\end{equation*} and the semiglobal $(\mathcal{D},m)$-slope \begin{equation}\label{eq:D-global} \mathscr{G}_{\mathcal{D}}[f](x) = \sup_{y\in \mathcal{D}_x} \Delta_f^+(x,y) \end{equation} are moduli of descent for the class $\mathcal{F}$, where \begin{equation} \Delta_f^+(x,y) = \left\{ \begin{array}{cc} \frac{(f(x) - f(y))_+}{m(x,y)} & \text{ if }y\neq x, \\ & \\ 0 & \text{ if }y=x. \end{array} \right. \end{equation} \end{proposition} \begin{proof} Let us show that the above operators of (local) $m$-slope and (semiglobal) $(\mathcal{D},m)$-slope satisfy axioms (D1)--(D3) of Definition~\ref{sec03:def:ModulusOfDescent}. It is straightforward to see that (D1) (preservation of global minima) is fulfilled. Axiom (D3) (scalar monotonicity) is also fulfilled, since for every $f\in \mathcal{F}$ and $r>0$ we have \[ s_{rf}(x) = r s_f(x)\quad\text{ and }\quad \mathscr{G}_{\mathcal{D}}[rf](x) = r\mathscr{G}_{\mathcal{D}}[f](x). \] It remains to show that both operators also satisfy axiom (D2) (monotonicity). To this end, let $f,g\in \mathcal{F}$ and $x\in X$ be such that \[ (f(x) - f(z))_+ \geq (g(x) - g(z))_+,\quad\text{for all }z\in X. \] Then for every $y\in X$ we have $\Delta_f^+(x,y)\geq \Delta_g^+(x,y)$, which readily yields that $s_f(x) \geq s_g(x)$ and $\mathscr{G}_{\mathcal{D}}[f](x)\geq \mathscr{G}_{\mathcal{D}}[g](x)$. The proof is complete. \end{proof} \begin{remark} When $(X,\tau)$ is a metric space and $m$ is the distance function, the $m$-slope $s_f(x)$ coincides with the usual metric slope $|\nabla f |(x)$ and the main result of~\cite{DS2022} follows directly from Theorem~\ref{sec03:thm:Determination}. Taking now $\mathcal{D}_x=X$ for all $x\in X$, the semiglobal $(\mathcal{D},m)$-slope $\mathscr{G}_{\mathcal{D}}[f](x)$ coincides with the global slope $\mathscr{G}[f](x)$ (see, e.g., \cite[Definition~1.2.4]{AGS2008}) which was used in~\cite{TZ2022}.
\end{remark} Notice that the semiglobal slope $\mathscr{G}_{\mathcal{D}}[f]$ is intrinsically different from the metric slope (or the norm of the gradient $\|\nabla f\|$ in the differentiable case), which already reveals that Definition~\ref{sec03:def:ModulusOfDescent} represents a much more general setting. The next proposition shows that we can go even further. \begin{proposition}[Constructing descent moduli] \label{sec03:prop:StabilityModuli} $\mathrm{(i).}$ Let $T_1$, $T_2$ be descent moduli for the class~$\mathcal{F}$. Then $T_1+ T_2$ is also a descent modulus for $\mathcal{F}$, where \begin{equation*} (T_1+ T_2)[f](x):= T_1[f](x) + T_2[f](x), \quad \text{for all }\, f\in \mathcal{F} \,\, \text{and }\, x\in \dom{f}. \end{equation*} $\mathrm{(ii)}.$ Let $T$ be a descent modulus for $\mathcal{F}$ and let $\phi:\mathbb{R}_+\to\mathbb{R}_+$ be a strictly increasing function with $\phi(0) = 0$ and $\lim_{t\to+\infty}\phi(t) = +\infty$. Then \begin{equation*} (\phi T)[f](x):=(\phi\circ T[f])(x), \quad \text{for all }\, f\in \mathcal{F} \,\, \text{and }\, x\in \dom{f}, \end{equation*} is also a descent modulus for $\mathcal{F}$, under the convention $\phi(+\infty) = \lim_{t\to+\infty}\phi(t)=+\infty$. \smallskip\newline In particular, $r T$, $r\geq 0$ is a descent modulus for $\mathcal{F}$, where \begin{equation*} (r T)[f](x):= r\cdot T[f](x), \end{equation*} under the convention $r\!\cdot\! (+\infty) = +\infty$ for $r>0$, and $0\!\cdot\! (+\infty) = 0$. \end{proposition} \begin{proof} Let $T_1$, $T_2$, $T$ and $\phi$ be as in statements (i) and (ii). We show that axioms (D1)--(D3) of Definition~\ref{sec03:def:ModulusOfDescent} are fulfilled:\smallskip\newline -- (D1) (\textit{Preservation of global minima}) Let $f\in \mathcal{F}$ and $x\in \argmin f$. Then $$T_1[f](x) = T_2[f](x) = T[f](x) = 0$$ and consequently $(T_1 + T_2)[f](x) = 0$\, and \,$\phi(T[f](x)) = \phi(0) = 0.$ Therefore, $T_1+T_2$ and $\phi T$ preserve global minima.
\medskip\newline -- (D2) (\textit{Monotonicity}) Let $f,g\in\mathcal{F}$ and $x\in X$ be such that \[ (f(x) - f(z))_+ \geq (g(x) - g(z))_+,\quad \forall z\in X. \] Then, since $T_1$ and $T_2$ are monotone, we have that \[ (T_1+T_2)[f](x) = T_1[f](x)+T_2[f](x) \geq T_1[g](x) + T_2[g](x) = (T_1+T_2)[g](x). \] Similarly, since $T$ is monotone and $\phi$ is non-decreasing, we get that \[ (\phi T)[f](x) = \phi(T[f](x))\geq \phi(T[g](x)) = (\phi T)[g](x). \] Thus, $T_1+T_2$ and $\phi T$ are monotone. \medskip\newline -- (D3) (\textit{Scalar monotonicity}) Let $f\in \mathcal{F}$, $x\in X$ and $r>1$, and assume $0 < (T_1+T_2)[f](x) < +\infty$. Exchanging the roles of $T_1$ and $T_2$ if necessary, we may assume $0<T_1[f](x) < +\infty$. Then, using the scalar monotonicity of $T_1$ and the monotonicity of $T_2$, we deduce \begin{align*} (T_1+T_2)[rf](x) &= T_1[rf](x) + T_2[rf](x) \, > \, T_1[f](x) + T_2[rf](x)\smallskip \\ &\geq T_1[f](x) + T_2[f](x) = (T_1+T_2)[f](x). \end{align*} Thus, $(T_1+T_2)$ is scalar-monotone. \smallskip\newline Let us now assume $0 < (\phi T)[f](x) < +\infty$. Since $\phi(0) = 0$ and $\phi(+\infty) = +\infty$, we obtain again $0<T[f](x)<+\infty$. Thus, $T[rf](x)>T[f](x)$ and \[ (\phi T)[rf](x) = \phi(T[rf](x))>\phi(T[f](x))=(\phi T)[f](x), \] yielding that $(\phi T)$ is scalar-monotone. We conclude that both $(T_1+T_2)$ and $(\phi T)$ are descent moduli for $\mathcal{F}$. \end{proof} Notice that the family of descent moduli for the class $\mathcal{F}$ has the structure of a convex cone (that is, a cone closed under addition), with the sum and the scalar multiplication being defined as in Proposition~\ref{sec03:prop:StabilityModuli}. \smallskip \newline The following proposition provides other types of operations, based on \textit{truncations}, that preserve descent moduli. \begin{proposition}[Truncated descents] \label{prop:trunc} Let $T$ be a descent modulus for the class $\mathcal{F} $. Then: \smallskip\newline (i).
For every $\varepsilon>0$, the operator $T_{\varepsilon}$ given by \begin{equation*} T_{\varepsilon}[f](x) = \left\{ \begin{array}{cl} T[f](x), & \text{ if }f(x) > \inf f + \varepsilon \smallskip \\ 0, & \text{ otherwise,} \end{array} \right. \end{equation*} is a descent modulus for $\mathcal{F}$.\smallskip\newline (ii). For every $K\subset X$, the operator $T\big|_K$ given by \begin{equation*} T\big|_K[f](x) = \left\{ \begin{array}{cl} T[f](x), & \text{ if }x\in K \smallskip \\ 0, & \text{ otherwise,} \end{array} \right. \end{equation*} is a descent modulus for $\mathcal{F}$. \end{proposition} \begin{proof} Let $T$, $\varepsilon>0$ and $K\subset X$ be as in the statement of the proposition. We will show that the operators $T_{\varepsilon}$ and $T\big|_{K}$ satisfy properties (D1)--(D3) of Definition~\ref{sec03:def:ModulusOfDescent}. Notice that for every $f\in \mathcal{F}$ and $x\in X$ we have $T[f](x)\geq T_{\varepsilon}[f](x)$ and $T[f](x) \geq T\big|_K[f](x)$. Therefore, if $T[f](x) = 0$, the above readily yields $T_{\varepsilon}[f](x) = T\big|_{K}[f](x) = 0$, and (D1) holds trivially.\smallskip\newline Let us now prove (D2). To this end, let $f,g\in\mathcal{F}$ and $x\in X$ be such that \[ (f(x) - f(z))_+ \geq (g(x) - g(z))_+,\quad \forall z\in X. \] Let us first deal with $T_{\varepsilon}$: if $f(x) > \inf f + \varepsilon$, then $T_{\varepsilon}[f](x) = T[f](x) \geq T[g](x) \geq T_{\varepsilon}[g](x)$. On the other hand, if $f(x) \leq \inf f + \varepsilon$, then $[f(x) - f(z)]_+\leq \varepsilon$ for all $z\in X$, whence $g(x) \leq \inf g + \varepsilon$ and $T_{\varepsilon}[f](x) = T_{\varepsilon}[g](x) = 0$ (by definition of $T_{\varepsilon}$). We conclude that $T_{\varepsilon}[f](x) \geq T_{\varepsilon}[g](x)$. \smallskip\newline Let us now deal with $T\big|_K$: If $x\in K$, then $T\big|_K[f](x) = T[f](x) \geq T[g](x) = T\big|_K[g](x)$, while if $x\in X\setminus K$, then $T\big|_K[f](x) = T\big|_K[g](x) = 0$.
In both cases $T\big|_K[f](x) \geq T\big|_K[g](x)$.\medskip\newline It remains to prove (D3). Let $f\in \mathcal{F}$, $x\in X$ and $r>1$. If $\inf f=-\infty$, then $T_{\varepsilon}[f]=T[f]$ and the result follows. Therefore, we may assume $\inf f>-\infty$ and $0<T_{\varepsilon}[f](x) < + \infty$. This yields $f(x)> \inf f + \varepsilon$ and consequently, $T_{\varepsilon}[f](x) = T[f](x)$. Noting that $$rf(x) > r(\inf f + \varepsilon) > \inf rf + \varepsilon,$$ since $r>1$, we conclude that $T_{\varepsilon}[rf](x) = T[rf](x)$ as well. Then, since $T_{\varepsilon}[f](x) = T[f](x)<T[rf](x) = T_{\varepsilon}[rf](x)$, we conclude that $T_{\varepsilon}$ is scalar-monotone.\smallskip\newline Let us now assume $0<T\big|_K[f](x) < + \infty$. This yields in particular that $x\in K$ and so $T\big|_K[f](x) = T[f](x)$ and $T\big|_K[rf](x) = T[rf](x)$. Then, since $T\big|_K[f](x) = T[f](x)<T[rf](x) = T\big|_K[rf](x)$, we conclude that $T\big|_K$ is scalar-monotone. \end{proof} The last stability property that we study is the pointwise limit. In general, this operation does not preserve moduli of descent, since scalar-monotonicity can be lost in the limit process, as the following example reveals. \begin{example}[Axiom (D3) is not preserved under pointwise limits] \label{sec03-03:ex:LimitNotModulus} Let $X=\mathbb{R}^n$ and consider the class $\mathcal{F}= \mathcal{C}^1(\mathbb{R}^n)$ of $\mathcal{C}^1$-smooth functions. Let us further consider the sequence of descent moduli \begin{equation*} T_{n}[f](x) = \sqrt[n]{\|\nabla f(x)\|}, \quad n\in \mathbb{N}, \end{equation*} and its pointwise limit operator: \begin{equation*} T[f](x) = \lim_{n\to \infty} T_n[f](x) = \begin{cases} \phantom{jo}0\,,\quad & \text{ if }\nabla f(x) = 0, \\ \phantom{jo}1\,, & \text{ otherwise.} \end{cases} \end{equation*} The operator $T$ preserves global minima and is monotone. However, it is not scalar-monotone (and it clearly fails to determine functions in the sense of Theorem~\ref{sec03:thm:Determination}).
$\hfill\Diamond$ \end{example} The following definition introduces a large subclass of abstract descent moduli which provides a remedy to the above situation. \begin{definition}[Homogeneous descent moduli] \label{def-homog} Let $\mathcal{F}\subset\mathcal{C}(X)$ be a translation cone, and let $p\in (0,+\infty )$. An operator $T:\mathcal{F}\rightarrow (\overline{\mathbb{R}}_{+})^{X}$ is said to be \begin{itemize} \item[$(i).$] \textit{$p$--homogeneous} if $T[rf](x) = r^p\,T[f](x)$, for every $f\in \mathcal{F}$ and $r>0$. \item[$(ii).$] \textit{$p$--superhomogeneous} if $T[rf](x) \geq r^p\,T[f](x)$, for every $f\in \mathcal{F}$ and $r>0$. \end{itemize} \end{definition} Clearly, all $p$--homogeneous and all $p$--superhomogeneous operators are also scalar-monotone. The interest of this class is that every operator $T$ which is defined as a pointwise limit of a sequence of $p$--(super)homogeneous descent moduli $\{T_n\}_{n\in\mathbb{N}}$, that is, \begin{equation*} T[f](x)=\lim_{n\to +\infty}T_n[f](x),\qquad\text{for all } f\in\mathcal{F} \,\,\text{and }\, x\in \dom(f), \end{equation*} is itself a $p$--(super)homogeneous descent modulus. In other words, axiom (D3) (scalar-monotonicity) is preserved in this context. One can also observe that, up to a composition with the strictly increasing function $\varphi(t):=t^{1/p}$, $p$--(super)homogeneity reduces to $1$--(super)homogeneity. \begin{proposition} Let $(\Lambda, \preccurlyeq)$ be a directed set, $p\in (0,+\infty)$ and $(T_{\alpha})_{\alpha\in\Lambda}$ be a generalized sequence of $p$--(super)homogeneous descent moduli for the class $\mathcal{F}$.
Then the following operators, defined for every $f\in \mathcal{F}$ and $x\in \dom(f)$, are descent moduli for the class $\mathcal{F}$:\medskip\newline $(i).$ $\left(\underset{\alpha\in\Lambda}{\limsup } \,T_{\alpha} \right)[f](x) := \underset{\alpha\in\Lambda}{\limsup }\, T_{\alpha}[f](x)$; \smallskip\newline $(ii).$ $\left(\underset{\alpha\in\Lambda}{\sup }\, T_{\alpha}\right)[f](x) := \underset{\alpha\in\Lambda}{\sup } \,T_{\alpha}[f](x)$;\smallskip\newline $(iii).$ $\left(\underset{\alpha\in\Lambda}{\liminf }\, T_{\alpha}\right)[f](x) := \underset{\alpha\in\Lambda}{\liminf }\, T_{\alpha}[f](x)$;\smallskip\newline $(iv).$ $\left(\underset{\alpha\in\Lambda}{\inf }\, T_{\alpha}\right)[f](x) := \underset{\alpha\in\Lambda}{\inf}\, T_{\alpha}[f](x)$. \end{proposition} \medskip \begin{proof} Let us verify that $T := \limsup_{\alpha} T_{\alpha}$ satisfies axioms (D1)--(D3) of Definition~\ref{sec03:def:ModulusOfDescent}. (A similar reasoning applies to the other three operators.)\smallskip\newline -- (D1) (\textit{Preservation of global minima}) Choose $f\in \mathcal{F}$ and $x\in\argmin f$. Then, $T_{\alpha}[f](x) = 0$ for all $\alpha\in \Lambda$ and so $T[f](x) = 0$. Thus, $T$ preserves global minima. \smallskip\newline -- (D2) (\textit{Monotonicity}). Let $f,g\in \mathcal{F}$ and $x\in X$ be such that $ (f(x) - f(z))_+\geq (g(x) - g(z))_+,$ for all $z\in X$. Then, $T_{\alpha}[f](x)\geq T_{\alpha}[g](x)$ for each $\alpha\in \Lambda$. Thus, $T[f](x)\geq T[g](x)$ as well, showing that $T$ is monotone.\smallskip\newline -- (D3) (\textit{Scalar-monotonicity}): Let $f\in \mathcal{F}$, $x\in X$ and $r>0$. We readily deduce from $p$-superhomogeneity that $T[rf](x) = \limsup_{\alpha} T_{\alpha}[rf](x)\,\geq\, r^p\, \limsup_{\alpha} T_{\alpha}[f](x) = \, r^p\,T[f](x)$. It follows that $T$ is also $p$--superhomogeneous, therefore, in particular, scalar-monotone. \smallskip\newline The proof is complete.
\end{proof} \subsection{Slope-like operators that are not descent moduli} We finish this section by discussing two examples in the literature that have been introduced as ``slope operators'' on a metric space $(X,d)$, but fail to verify Definition~\ref{sec03:def:ModulusOfDescent} of a descent modulus. The first concept is the so-called \textit{weak slope}, introduced in \cite{DM1994Critical,CDM1993Deformation}. For a continuous function $f:X\to \mathbb{R}$, the weak slope at a point $x\in X$, denoted by $|df|(x)$, is defined as the supremum of $\sigma\in\mathbb{R}_+$ such that there exist $\delta>0$ and a continuous map $\mathcal{H}:[0,\delta]\times B(x,\delta)\to X$ such that \begin{equation}\label{eq:CondWeakSlope} \forall s\in [0,\delta],\,\forall y\in B(x,\delta),\quad d(\mathcal{H}(s,y),y)\leq s\,\, \text{ and } \, f(\mathcal{H}(s,y)) \leq f(y) - \sigma s. \end{equation} Notice that $|df|(x)\geq \sigma$ whenever it is possible to find a continuous deformation $\mathcal{H}$ over a neighborhood of $x$, such that the descent of $f$ through that deformation is at least $\sigma$ for every point $y$ over which $\mathcal{H}$ is acting. Thus, one might interpret the weak slope as the slowest descent around $x$. This concept has been extensively studied in the setting of nonsmooth variational analysis and critical point theory. The second concept is the \textit{limiting slope} (see, e.g., \cite[Definition 8.4]{Ioffe2017Variational}), which is defined as the lower semicontinuous envelope (or closure) of the strong slope $|\nabla f|$. That is, for a lower semicontinuous function $f:X\to \mathbb{R}$ and a point $x\in X$, the limiting slope of $f$ at $x$ is given by \begin{equation} \overline{|\nabla f|}(x) := \lim_{\varepsilon \to 0}\inf\left\{|\nabla f |(y)\ :\ d(x,y)\leq \varepsilon,\text{ and }f(y)\leq f(x)+\varepsilon\right\}. \end{equation} Since the slope can be very ill-behaved, the limiting slope provides a regularized alternative.
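To see concretely how ill-behaved the strong slope can be, here is a small numerical sketch (in Python, outside the formal development), based on the function $f(t)=\mathfrak{c}(t)+t$ built from the Cantor staircase $\mathfrak{c}$, which also appears in the example below: difference quotients of $f$ stay equal to $1$ off the Cantor set, while at a Cantor point such as $t=1/3$ they blow up along the radii $h=3^{-k}$.

```python
# Numerical sketch (illustration only): the strong slope of f(t) = c(t) + t
# is 1 off the Cantor set and +infinity on part of the Cantor set.

def cantor(t, iters=60):
    """Cantor staircase via base-3 digits: stop at the first digit equal to 1."""
    y = 0.0
    for i in range(1, iters + 1):
        t *= 3
        d = int(t)
        t -= d
        if d == 1:
            return y + 2.0 ** (-i)
        y += (d // 2) * 2.0 ** (-i)
    return y

f = lambda t: cantor(t) + t

# Off the Cantor set (t = 1/2 lies in the flat middle third), quotients equal 1:
q_flat = (f(0.5) - f(0.5 - 0.01)) / 0.01

# At the Cantor point t = 1/3, the quotient along h = 3^{-k} equals (3/2)^k + 1:
k = 10
h = 3.0 ** (-k)
q_cantor = (f(1.0 / 3.0) - f(1.0 / 3.0 - h)) / h
# q_flat ≈ 1, while q_cantor ≈ (3/2)**10 + 1 ≈ 58.7 and grows without bound in k
```

The blow-up at $t=1/3$ reflects the fact that $\mathfrak{c}$ gains a mass of $2^{-k}$ over an interval of length $3^{-k}$ there, which is exactly why the strong slope takes the value $+\infty$ on part of the Cantor set.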
It is worth mentioning that, using this notion, Drusvyatskiy, Ioffe and Lewis were able to deal with the long-standing problem of existence of steepest descent curves \cite{DIL2015}. \smallskip\newline The following example shows that the weak slope and the limiting slope are not descent moduli for $\mathcal{K}(X)$, since they fail to determine coercive continuous functions even on the interval $[0,1]$. \begin{example} Let $\mathfrak{c}:[0,1]\to[0,1]$ be the well-known Cantor staircase function and let us consider the function $f:[0,1]\to \mathbb{R}$ given by $f(t) = \mathfrak{c}(t) + t$. By construction, it is not hard to see that $|\nabla f|(t) \in \{ 1,+\infty\}$ for every $t\in (0,1]$, that $|\nabla f|(0) = 0$ (since $0\in\argmin f$), and that the slope is $+\infty$ only on a subset of the Cantor set. Thus, \[ \overline{|\nabla f|}(t) = \mathds{1}_{(0,1]}(t):=\begin{cases} 1,\quad&\text{ if }t\in(0,1]\\ 0,\quad&\text{ if }t = 0. \end{cases} \] Similarly, we claim that $|df|(t)$ takes the same values as $\overline{|\nabla f|}(t)$. Clearly $|df|(0) = 0$ and $|df|(t)\geq 1$ for all $t\in (0,1]$. Now, fix $\bar{t} \in (0,1]$ and take any $\sigma>0$, $\delta>0$ and $\mathcal{H}$ satisfying \eqref{eq:CondWeakSlope}. Since $f$ is strictly increasing, $\mathcal{H}(s,t)<t$ for every $t\in B(\bar{t},\delta)$ and every $s\in (0,\delta]$. In particular, $0<t-\mathcal{H}(s,t)= d(\mathcal{H}(s,t),t)\leq s$. Whence $t-s\leq \mathcal{H}(s,t)$ and consequently, $f(t-s)\leq f(\mathcal{H}(s,t))$. Since the Cantor set is totally disconnected, there exists $t\in (\bar{t}-\delta,\bar{t})$ such that $|\nabla f|(t) = 1$. Then, \[ |\nabla f|(t) \geq \limsup_{s\to 0^+}\, \frac{f(t) - f(t-s)}{s}\,\geq \,\limsup_{s\to 0^+} \frac{f(t) - f(\mathcal{H}(s,t))}{s}\geq \sigma. \] Thus, $\sigma\leq 1$, which proves that $|df|(\bar{t})=1$. This proves the claim.
By taking $g:[0,1]\to \mathbb{R}$ given by $g(t) = t$, we get that $\overline{|\nabla g|}(t) = |dg|(t) = \mathds{1}_{(0,1]}(t)$, and so the conclusion of Theorem~\ref{sec03:thm:Determination} fails to hold for both the weak and the limiting slope. Since clearly both operators preserve global minima and are scalar-monotone (by homogeneity), we conclude that both operators fail to be monotone in the sense of Definition~\ref{sec03:def:ModulusOfDescent}.\hfill$\diamond$ \end{example} \section{The paradigm of averaged descent} \label{sec04:Averaged} It was shown in \cite[Theorem~3.8]{BCD2018} that two $\mathcal{C}^2$-smooth, convex and bounded from below functions $f,g$ defined on a Hilbert space $\mathcal{H}$ are equal up to a constant, provided $\|\nabla f(x)\| = \|\nabla g(x)\|$, for all $x\in \mathcal{H}$. In other words, the operator \begin{equation}\label{eq:gamma} f\mapsto \Gamma[f] := \|\nabla f \|^2 \end{equation} is injective, modulo the constant functions, on the class of $\mathcal{C}^2$-smooth, convex and bounded from below functions. Notice that the $\Gamma$-operator defined in \eqref{eq:gamma} (also known as the \textit{carr\'e-du-champ} operator) is strongly related to the Wiener diffusion process generated by the Laplacian operator. This hints at an important new instance of a descent modulus, namely the averaged descent, giving rise to a determination result of a probabilistic nature. This will be developed in this section, in full generality.
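The averaged-descent point of view admits a quick numerical sanity check (a Monte-Carlo sketch, not part of the formal development): for a linear function on $\mathbb{R}^2$, averaging the squared difference quotients over a small ball and multiplying by the dimension recovers $\|\nabla f(x)\|^2$, in line with the integro-differential representation recalled in the next subsection. The point $x$, the radius and the sample size below are arbitrary illustrative choices.

```python
# Monte-Carlo sanity check (illustration only): for a linear f on R^2,
# n * (average over B(x,eps) of ((f(x)-f(y))/|x-y|)^2) equals ||grad f||^2.
import math
import random

def gamma_estimate(f, x, eps, n_samples, rng):
    """Estimate (n/|B|) * integral over B(x,eps) of ((f(x)-f(y))/|x-y|)^2 dy, n = 2."""
    acc = 0.0
    for _ in range(n_samples):
        r = eps * math.sqrt(rng.random())        # uniform radius in the disk
        th = 2.0 * math.pi * rng.random()        # uniform angle
        y = (x[0] + r * math.cos(th), x[1] + r * math.sin(th))
        d = math.hypot(y[0] - x[0], y[1] - x[1])
        if d > 0.0:
            acc += ((f(x) - f(y)) / d) ** 2
    return 2.0 * acc / n_samples                 # factor n(x) = 2

f = lambda p: 3.0 * p[0] + 4.0 * p[1]            # grad f = (3, 4), ||grad f||^2 = 25
rng = random.Random(0)
est = gamma_estimate(f, (0.2, -0.1), 1e-3, 200_000, rng)
# est ≈ 25.0
```

For a linear $f$ the difference quotient equals the directional derivative exactly, so the estimate is unbiased for every radius; for a general $\mathcal{C}^1$ function the same average converges to $\|\nabla f(x)\|^2$ only as the radius shrinks.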
\subsection{Extension of dispersion measures \label{sec04-01:extension}} We first recall that for a $\mathcal{C}^1$-smooth function $f:\mathbb{R}^n\to\mathbb{R}$ the following formula holds: \begin{equation}\label{sec04:eq:IntegroDiff-formula} \|\nabla f(x)\|^2 = \lim_{\varepsilon\to 0} \frac{n}{\mathcal{L}_n(B_n(x,\varepsilon))}\int_{B_n(x,\varepsilon)}\left[ \frac{f(x) - f(y)}{\|x-y\|} \right]^2 \mathcal{L}_n(dy), \end{equation} where, as mentioned in Section~\ref{sec02:Pre}, $\mathcal{L}_n$ stands for the usual Lebesgue measure on $\mathbb{R}^n$. The above formula is well-known and can be deduced from the following (also well-known) lemma, for which we provide a simple proof for completeness. \begin{lemma} For any $k\geq 1$, any $r>0$ and $V\in \mathbb{R}^k$ it holds: \begin{equation} \|V\|^{2}\,=\,\frac{k}{\mathcal{L}_{k}(B_{k}(0,r))}\int_{B_{k}(0,r)} \left\langle V,\frac{u}{\|u\|}\right\rangle ^{2}du. \label{eq:jaime} \end{equation} \end{lemma} \begin{proof} The proof is a consequence of the rotation invariance of the ball. Let $(e_i)_{i=1}^k$ be the usual orthonormal basis of $\mathbb{R}^k$. By symmetry, we can restrict to the case where $V = \| V\| \cdot e_1$, so that \begin{eqnarray*} \int_{B_{k}(0,r)} \left\langle V,\frac{u}{\|u\|}\right\rangle ^{2}\, du&=&\| V\|^2\int_{B_{k}(0,r)} \left\langle e_1,\frac{u}{\|u\|}\right\rangle ^{2}\,du\\ &=&\| V\|^2\int_{B_{k}(0,r)} \left\langle e_i,\frac{u}{\|u\|}\right\rangle ^{2}\,du \end{eqnarray*} for any $i\in\{1, \ldots, k\}$. We deduce \begin{eqnarray*} \int_{B_{k}(0,r)} \left\langle V,\frac{u}{\|u\|}\right\rangle ^{2}\, du&=&\frac{\| V\|^2}{k}\int_{B_{k}(0,r)}\sum_{i=1}^k \left\langle e_i,\frac{u}{\|u\|}\right\rangle ^{2}\,du\\ &=&\frac{\| V\|^2}{k}\int_{B_{k}(0,r)}\left \| \frac{u}{\|u\|} \right\|^2\, du\\ &=&\frac{\| V\|^2}{k}\int_{B_{k}(0,r)}1\, du\\ &=&\frac{\| V\|^2\mathcal{L}_{k}(B_{k}(0,r))}{k}\end{eqnarray*} leading to the desired equality.
\end{proof} Based on equation \eqref{sec04:eq:IntegroDiff-formula}, we propose an extension of the $\Gamma$-operator \eqref{eq:gamma}, that we call the \textit{dispersion operator}, for functions defined on a topological space $(X,\tau)$. \smallskip\newline To this end, we consider the family $\beta = \{\beta_x\}_{x\in X}$ of neighborhood bases: $\beta_x$ is a neighborhood base at $x$ of the topology $\tau$. We further denote by $$\mu: X\times \mathcal{B}(X)\to \mathbb{R}_+$$ a mapping that associates with every $x\in X$ a locally finite measure $\mu(x,\cdot)\equiv \mu_x$ (that is, for every $y\in X$, $\mu_x$ is finite on a neighborhood $V_y$ of $y$), which assigns positive measure to every element of $\beta_x$. Let further $m:X\times X\to \mathbb{R}_+$ be as in Proposition~\ref{prop-m-slope}, that is, $$m(x,y) = 0 \iff x = y.$$ Finally, let us consider the \textit{local dimension} mapping $n:X\to \mathbb{R}_+$, where we interpret $n(x)$ to be the local dimension of $X$ at $x$. (Obviously, if $X=\mathbb{R}^n$ or if $X$ is a manifold of dimension $n$, then $n(x)\equiv n$, for all $x\in X$.) \smallskip\newline We are now ready to give the following definition: \begin{definition}[Dispersion operator]\label{sec04:def:ExtendendedDiffusionOp} Let $p\in(0,+\infty)$. We define the $p$-dispersion operator $T_{\mu}$ (depending also on $\beta$ and $n:X\to \mathbb{R}_+$) as follows: \begin{equation}\label{sec04:eq:ExtensionDiffusionOp} T_{\mu}[f](x) := \limsup_{B\in \beta_x} \frac{n(x)}{\mu_x(B)} \int_{B}\left|\Delta_f(x,y) \right|^p \mu_x(dy) \end{equation} where the limit-superior is taken over the directed set $\beta_x$ endowed with the partial order of reverse inclusion, and \begin{equation}\label{eq:Def-Deltaf} \Delta_f(x,y) := \left\{\begin{array}{cc} \frac{f(x) - f(y)}{m(x,y)}, &\text{ if }y\neq x,\\ \\ 0,&\text{ if }y=x. \end{array}\right.
\end{equation} \end{definition} \begin{remark} $\mathrm{(i).}$ We kept the notation simple and denoted the above dispersion operator by $T_{\mu}$ (rather than $T_{\mu,\beta,m,n,p}$) in order to emphasize that $T_{\mu}$ is the limit-superior of integral operators. The action at $x$ in these operators is integrated by the measure $\mu_{x}$.\smallskip\newline $\mathrm{(ii).}$ Definition~\ref{sec04:def:ExtendendedDiffusionOp} is inspired by a construction used in~\cite{Sturm1998} to extend diffusion processes to metric spaces. The ``$\limsup$'' ensures that $T_{\mu}$ is always well-defined, with possibly $+\infty$--values. When $X$ is a metric space and $m$ is the distance function, the domain $\dom(T_{\mu})$ contains at least all (locally) Lipschitz functions. This makes the dispersion operator a nontrivial extension of \eqref{eq:gamma} beyond the differentiable setting. \smallskip\newline $\mathrm{(iii).}$ The family $\beta$ in Definition~\ref{sec04:def:ExtendendedDiffusionOp} encompasses several natural choices when the structure of the space $(X,\tau)$ is known. For example, if $(X,\tau)$ is a (pseudo)metric space, then we can take the set of corresponding balls $\beta_x = \{ B(x,r)\}_{r>0}$, for all $x\in X$. More generally, if the topological space $(X,\tau)$ is first-countable, then a natural choice is $\beta_x = \{ \mathcal{V}_n\}_{n\in\mathbb{N}}$, where $\{\mathcal{V}_n\}_{n\in \mathbb{N}}$ is any countable neighborhood basis at~$x$.\smallskip\newline If $X = \mathbb{R}^n$, then our default choice will be $\beta_x := \{ B(x, r)\}_{r>0}$. \end{remark} We denote by $\mathcal{S}_n^+$ the set of ($n\times n$)--positive semidefinite matrices, and let us consider a map $R:\mathbb{R}^{n}\rightarrow\mathcal{S}_n^{+}$.
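Before turning to the matrix-valued case, the scalar one-dimensional instance of Definition~\ref{sec04:def:ExtendendedDiffusionOp} can be checked numerically (a Python sketch, outside the formal development): with $X=\mathbb{R}$, $m(x,y)=|x-y|$, $\mu_x$ the Lebesgue measure, $n(x)=1$ and $p=2$, quadrature of the averaged squared quotient over shrinking intervals approaches $f'(x)^2$ for a smooth $f$. The test function and evaluation point below are illustrative choices.

```python
# One-dimensional sketch of the dispersion operator (illustration only):
# midpoint-rule quadrature of
#   (1/(2*eps)) * integral over (x-eps, x+eps) of ((f(x)-f(y))/|x-y|)^2 dy
# approaches f'(x)^2 as eps shrinks.

def dispersion_1d(f, x, eps, n_pts=2000):
    h = 2.0 * eps / n_pts
    total = 0.0
    for i in range(n_pts):
        y = x - eps + (i + 0.5) * h              # midpoints avoid y = x exactly
        total += ((f(x) - f(y)) / abs(x - y)) ** 2
    return total / n_pts                          # = (1/(2*eps)) * integral

f = lambda t: t * t                               # f'(0.5)^2 = 1
vals = [dispersion_1d(f, 0.5, eps) for eps in (0.1, 0.01, 0.001)]
# vals decrease towards 1.0: for this f the exact average is 1 + eps^2/3
```

For $f(t)=t^2$ the integrand simplifies to $(x+y)^2$, so the average can be computed in closed form, which is what makes this a convenient consistency check.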
The following proposition shows that the operators of the form $$\Gamma_R [f](x) = \|R(x)\nabla f(x)\|^2, \,\, x\in \mathbb{R}^n$$ can be obtained as particular cases of~\eqref{sec04:eq:ExtensionDiffusionOp}, under suitable choices of the parameter $p>0$, the separation map $m$, the measure map $\mu:\mathbb{R}^n\times\mathcal{B}(\mathbb{R}^n)\to \mathbb{R}_+$ and a local dimension map $x\mapsto n(x)$. \smallskip\newline In what follows, $\mathrm{supp}\left(\mu_{x}\right)$ stands for the support of the measure $\mu_{x}:=\mu(x,\cdot)$. We say that a measure $\mu$ is absolutely continuous with respect to $\nu$ (and write $\mu\ll\nu$) if both measures are defined on the same measurable space $(X,\mathcal{B})$ and it holds: $$\nu(A)=0\,\Longrightarrow\,\mu(A)=0,\quad\text{ for all } \,A\in \mathcal{B}.$$ We are now ready to state and prove the following result: \begin{proposition}\label{sec04:prop:extensionDifussionToMetric} Let $R:\mathbb{R}^{n}\rightarrow \mathcal{S}_{n}^{+}$ and set $W_{x} :=x+\mathrm{Ker}(R(x))^{\perp}$, for each $x\in \mathbb{R}^n$. Then, for $m(x,y):=\Vert x-y\Vert$ and $p=2$, there exist a measure map $\mu :\mathbb{R}^{n}\times \mathcal{B}(\mathbb{R}^{n})\rightarrow \mathbb{R}_{+}$ and a dimension map $n:X\to \mathbb{R}_+$ such that $\mathrm{supp}\left( \mu _{x}\right) \subset W_{x}$ for all $x\in \mathbb{R}^{n}$ and \begin{equation*} T_{\mu }[f](x)=\Vert R(x)\nabla f(x)\Vert ^{2},\quad \text{for every }f\in \mathcal{C}^{1}(\mathbb{R}^{n}). \end{equation*} \end{proposition} \begin{proof} Let us fix $x\in \mathbb{R}^{n}$. We are going to define a positive real value $n(x)$ and a measure $\mu _{x}$ whose support is contained in $W_{x}$, in such a way that: \begin{equation} T_{\mu }[f](x)=\limsup_{r\to 0^{+}}\frac{n(x)}{\mu _{x}(B(x,r))}\int_{B(x,r)}\Delta_{f}(x,y)^{2}\mu _{x}(dy)=\Vert R(x)\nabla f(x)\Vert ^{2}.
\label{eq:ortega} \end{equation} Set $k=\dim \left( \mathrm{Ker\ }R(x)\right) ^{\perp },$ $0\leq k\leq n.$\smallskip\newline If $k=0,$ then $\mathrm{Ker\ }R(x)=\mathbb{R}^{n},$ $W_{x}=\{x\}$ and $R(x)\nabla f(x)=0.$ Then (\ref{eq:ortega}) holds trivially by setting $\mu_{x}=\delta _{x}$ (the Dirac measure at $x$) and using the fact that $\Delta_{f}(x,x)=0$ (\textit{cf.} (\ref{eq:Def-Deltaf})).\smallskip \newline Let us now assume that $1\leq k\leq n.$ Let $\{e_{j}\}_{j=1}^{n}$ be an orthonormal basis of $\mathbb{R}^{n}$ such that \begin{equation*} \left( \mathrm{Ker\ }R(x)\right) ^{\perp }=\mathrm{span\ } (e_{j})_{j=1}^{k}=\mathbb{R}^{k} \end{equation*} and \begin{equation*} \mathrm{Ker\ }R(x)\equiv\mathbb{R}^{n-k}=\left\{ \begin{tabular}{cc} $\mathrm{span\ }(e_{j})_{j=k+1}^{n},$ & \quad for $k<n$\smallskip \\ $\{0\}$ & \quad for $k=n$. \end{tabular} \right. \end{equation*} Then there exists $R\in \mathcal{S}_{k}^{+}$ (the restriction of $R(x)\in \mathcal{S}_{n}^{+}$ to the subspace $\mathbb{R}^{k}\times\{0\}^{n-k}$ of $\mathbb{R}^n$) such that, decomposing $z\in \mathbb{R}^{n}$ as $z=(v,w)\in \mathbb{R}^{k}\times \mathbb{R}^{n-k},$ it holds $R(x)z=Rv.$ \smallskip \newline Let $\Psi :\mathbb{R}^{k}\rightarrow \mathbb{R}^{k}$ be given by \begin{equation}\label{eq:ort-3} \Psi (u)=\left\{ \begin{array}{cl} \frac{\Vert u\Vert }{\Vert Ru\Vert }Ru & \text{ if }u\neq 0 \smallskip \\ 0 & \text{ otherwise}. \end{array} \right. \end{equation} Clearly $\Psi $ is a norm-preserving bijection of $\mathbb{R}^{k}$ (depending on $x$, which is fixed) with inverse: \begin{equation*} \Psi ^{-1}(v)=\left\{ \begin{array}{cl} \frac{\Vert v\Vert }{\Vert R^{-1}v\Vert }R^{-1}v & \text{ if }v\neq 0\smallskip \\ 0 & \text{ otherwise}. \end{array} \right. \end{equation*} In particular $\Psi (B_{k}(0,r))=B_{k}(0,r),$ for every $r>0$. Furthermore, $\Psi $ is a $\mathcal{C}^{1}$-diffeomorphism of $\mathbb{R}^{k}\setminus \{0\}$.
Following the notation of \cite[Chapter 3]{EG2015Measure}, let us define the Jacobian operator as \begin{equation} J\Psi(u) = |\det (D\Psi(u))|, \end{equation} where $D\Psi$ is the derivative of $\Psi$. We define $h_{k}:\mathbb{R}^{k}\rightarrow \mathbb{R}$ (depending on $\Psi$, hence on $x$) by \begin{equation*} h_{k}(v)=\frac{\Vert R\Psi ^{-1}(v)\Vert ^{2}}{\Vert \Psi ^{-1}(v)\Vert ^{2}} \,\left[J\Psi (\Psi ^{-1}(v))\right]^{-1},\quad v\in \mathbb{R}^{k}. \end{equation*} Notice that for $v=\Psi (u)$ the above yields: \begin{equation} h_{k}(\Psi (u))=\frac{\Vert Ru\Vert ^{2}}{\Vert u\Vert ^{2}}\,[J\Psi (u)] ^{-1},\quad u\in \mathbb{R}^{k}. \label{eq:ort-1} \end{equation} We set \begin{equation} \left\{ \begin{array}{l} h:\mathbb{R}^{n}\rightarrow \mathbb{R}\medskip \\ h(z):=\,h_{k}(v),\quad \text{for }z=(v,w)\in \mathbb{R}^{n} \end{array} \right. \label{eq:ort0} \end{equation} and consider the measure $\lambda :\mathcal{B}(\mathbb{R}^{n})\rightarrow \mathbb{R}_{+}$ (depending on $k=\dim(W_x)$) given by the formula \begin{equation} \lambda (A)=\mathcal{L}_{k}\left(A\cap \left(\mathbb{R}^{k}\times \{0\}^{n-k}\right)\right),\qquad \text{for all }A\in \mathcal{B}(\mathbb{R}^{n}). \label{eq:ort-4} \end{equation} Let $\pi_k$ denote the projection of $\mathbb{R}^n$ onto the first $k$ coordinates. Notice that $\lambda $ is the trivial extension to $\mathcal{B}(\mathbb{R} ^{n})$ of the Lebesgue measure $\mathcal{L}_{k}$ on $\mathcal{B}(\mathbb{R} ^{k})$. \smallskip Let us define \begin{equation} \kappa:=\kappa(x) =\mathcal{L}_k(B_k(0,1))^{-1}\left( \int_{B_k(0,1)} \frac{\|Ru\|^2}{\|u\|^2}\mathcal{L}_k(du)\right), \end{equation} where $B_{k}(0,r)=\pi _{k}\left( B(0,r)\cap \left(\mathbb{R}^{k}\times \{0\}^{n-k}\right)\right)$.
We finally set $n(x) := \kappa(x)\dim(W_x)$ and define the measure $\mu _{x}:\mathcal{B}(\mathbb{R}^{n})~\rightarrow ~\mathbb{R}_{+}$ as follows: \begin{equation} \mu _{x}(A):=\int_{A-x} h(z)\lambda (dz)\equiv \int_{\pi _{k}\left( (A-x)\cap \left(\mathbb{R}^{k}\times \{0\}^{n-k}\right)\right)} h_{k}(v)\,\mathcal{L}_{k}(dv),\qquad \text{for all }A\in \mathcal{B}(\mathbb{R}^{n}). \label{eq:ort1} \end{equation} This operation eliminates the last $n-k$ coordinates (which are equal to $0$ for all elements of $(A-x)\cap \left(\mathbb{R}^{k}\times \{0\}^{n-k}\right)$), adjusting vectors to the right dimension for integration. By means of a change of variables induced by $\Psi$ (see, e.g., \cite[Theorem~3.9]{EG2015Measure}), together with \eqref{eq:ort-1} and the $0$-homogeneity of the integrand, we deduce \begin{align*} \mu_x(B(x,r))& = \int_{B_k(0,r)}h_k(v)\mathcal{L}_{k}(dv) = \int_{B_k(0,r)} h_k(\Psi(u))J\Psi(u)\mathcal{L}_{k}(du) = \int_{B_k(0,r)} \frac{\| Ru\|^2}{\|u\|^2}\mathcal{L}_{k}(du)\\ &= r^{k}\int_{B_k(0,1)} \frac{\| Ru\|^2}{\|u\|^2} \mathcal{L}_{k}(du) = \kappa \mathcal{L}_k(B_k(0,1))r^k = \kappa \mathcal{L}_k(B_k(0,r)). \end{align*} Now, using the first-order Taylor approximation of $f$ at $x,$ we deduce from \eqref{eq:Def-Deltaf} that \begin{equation*} \Delta _{f}(x,y)^{2}=\left[ \left\langle \nabla f(x),\frac{y-x}{\|y-x\|}\right\rangle +\varepsilon (\|y-x\|)\right] ^{2},\qquad \text{where}\ \lim_{r\rightarrow 0}\,\varepsilon (r)=0. \end{equation*} For any $r>0$ we deduce from \eqref{eq:ort1} that: \begin{align*} \int_{B(x,r)}\Delta _{f}(x,y)^{2}\mu _{x}(dy)& =\int_{B(0,r)}\left[ \langle \nabla f(x),\frac{z}{\|z\|}\rangle\, +\,\varepsilon (\|z\|) \right] ^{2}h(z)\lambda (dz) \\ & =\int_{B(0,r)}\langle \nabla f(x),\frac{z}{\|z\|}\rangle ^{2}\,h(z)\,\lambda (dz)\,\,+\\ & \phantom{ds}+ \int_{B(0,r)}2\,\langle \nabla f(x),\frac{z}{\|z\|}\rangle\, \varepsilon (\|z\|)\,h(z)\,\lambda (dz) +\,\int_{B(0,r)}\varepsilon (\|z\|)^{2}\,h(z)\,\lambda (dz). \end{align*} Let $M>0$ be an upper bound of the function $h$ on $B(0,1)$.
Since $\mu_{x}(B(x,r))= \kappa\mathcal{L}_{k}(B_{k}(0,r))$ and $n(x)=k\cdot\kappa$, it follows that
\begin{align*}
\frac{2n(x)}{\mu _{x}(B(x,r))}\int_{B(0,r)}\left\langle \nabla f(x),\frac{z}{\|z\|}\right\rangle \varepsilon (\|z\|)h(z)\lambda (dz)& \leq 2\,k\,M\,\Vert \nabla f(x)\Vert \,\varepsilon (r)\longrightarrow 0\quad \text{and} \\
\frac{n(x)}{\mu _{x}(B(x,r))}\int_{B(0,r)}\varepsilon (\|z\|)^{2}h(z)\lambda (dz)& \leq \,k\,M\,\varepsilon (r)^{2}\longrightarrow 0\quad \text{(as }r\rightarrow 0\text{).}
\end{align*}
Denoting by $[\nabla f(x)]_{k}\in \mathbb{R}^{k}$ the vector consisting of the first $k$ coordinates of $\nabla f(x)$ and recalling the decomposition $z=(v,w)\in \mathbb{R}^{k}\times \mathbb{R}^{n-k}$ we deduce from \eqref{eq:ort0}:
\begin{align*}
\int_{B(0,r)}\left\langle \nabla f(x),\frac{z}{\|z\|}\right\rangle^{2}\, h(z)\lambda (dz)&\,= \int_{B_{k}(0,r)}\left\langle \nabla f(x),\frac{(v,0)}{\|(v,0)\|}\right\rangle^{2} h_{k}(v)\,\mathcal{L}_{k}(dv).
\end{align*}
Using the change of variables $v=\Psi (u)$ (recall that $\Psi (B_{k}(0,r))=B_{k}(0,r)$ for every $r>0$) we obtain from \eqref{eq:ort-3} and \eqref{eq:ort-1}
\begin{align*}
&\int_{B(0,r)}\left\langle \nabla f(x),\frac{z}{\|z\|}\right\rangle ^{2}h(z)\lambda (dz)\\
=&\int_{B_{k}(0,r)}\left\langle [\nabla f(x)]_{k},\frac{\Psi (u)}{||\Psi (u)||}\right\rangle ^{2}\frac{\Vert Ru\Vert ^{2}}{\Vert u\Vert ^{2}}\,[J\Psi (u)]^{-1}J\Psi (u)du\, \\
=& \int_{B_{k}(0,r)}\left\langle [\nabla f(x)]_{k},\frac{Ru}{\|u\|}\right\rangle ^{2}du\,=\int_{B_{k}(0,r)}\left\langle R[\nabla f(x)]_{k},\frac{u}{\|u\|}\right\rangle ^{2}du.
\end{align*}
Therefore we deduce from \eqref{eq:jaime} and from the definitions of $n(x)$ and $\mu_x$:
\begin{align*}
T_{\mu }[f](x)&=\limsup_{r\to 0}\frac{n(x)}{\mu_x(B(x,r))} \int_{B_{k}(0,r)}\left\langle R[\nabla f(x)]_{k},\frac{u}{\|u\|} \right\rangle ^{2}du\\
&=\limsup_{r\to 0}\frac{k}{\mathcal{L}_{k}(B_{k}(0,r))} \int_{B_{k}(0,r)}\left\langle R[\nabla f(x)]_{k},\frac{u}{\|u\|} \right\rangle ^{2}du\\
&=\|R[\nabla f(x)]_{k}\|^{2}\equiv\Vert R(x)\nabla f(x)\Vert ^{2}.
\end{align*}
The proof is complete.
\end{proof}

\subsection{Oriented dispersion operators \label{sec04-02:OrientedDiffusion}}

The operator $T_{\mu}$ defined in \eqref{sec04:eq:ExtensionDiffusionOp} fails to determine continuous coercive functions, and consequently is not a descent modulus outside the differentiable setting. The reason for this failure is illustrated by the following example.

\begin{example}\label{sec04:example:DiffusionNotDetermining}
Let $X = [-1,1]$ and let $m$ be its usual metric. For each $x\in [-1,1]$, let $\mu(x,\cdot)$ be the usual Lebesgue measure over $[-1,1]$ and $n(x) = 1$. Set $p=2$ and consider the functions
\[
f(x) = x^2\qquad\text{ and }\qquad g(x) = -x^2.
\]
By \eqref{sec04:eq:IntegroDiff-formula}, we have that
\[
\forall x\in (-1,1),\quad T_{\mu}[g](x) = T_{\mu}[f](x) = |2x|^2.
\]
Furthermore, it is not hard to see that at $x = \pm 1$, we have that
$$
T_{\mu}[g](x) = T_{\mu}[f](x) = \lim_{\varepsilon\to 0}\frac{1}{\varepsilon} \int_{1-\varepsilon}^1 \left(\frac{1 - t^2}{1-t}\right)^2dt =\lim_{\varepsilon\to 0}\frac{1}{\varepsilon} \int_{1-\varepsilon}^1 (1+t)^2dt = 4.
$$
Since the only $T$-critical point of $g$ is $0$, we deduce that $T$ does not preserve global minima and so it is not a descent modulus. Furthermore, since the only $T$-critical point of $f$ is $0$ as well, we have constructed two different functions with $T_{\mu}[f] = T_{\mu}[g]$ and that coincide over $\mathcal{Z}_{T}(f)$.
In conclusion, $T$ fails to determine continuous coercive functions in general metric spaces, in the sense of Theorem~\ref{sec03:thm:Determination}. \hfill$\Diamond$
\end{example}

In the above example, the points $x=-1$ and $x=1$ should have been critical for the function $g(x) = -x^2$, since they are global minimizers. However, this fails to be the case because the operator $T_{\mu}$ is not oriented. This leads to the following definition, which introduces an asymmetry between descent and ascent directions (by discarding the latter). As we shall see, this is particularly relevant in nonsmooth settings.

\begin{definition}[Oriented dispersion operator] \label{def-oriented}
Let $\mu$, $\beta$, $m$, $n$ and $p$ be as in Definition~\ref{sec04:def:ExtendendedDiffusionOp}. We define the oriented dispersion operator, denoted by $T^+_{\mu}$, as
\begin{align*}
T^+_{\mu}[f](x) &:= \limsup_{B\in \beta_x}\frac{n(x)}{\mu_x(B)}\int_{B\cap [f\leq f(x)]} \left[\Delta_f(x,y)\right]^p\mu(x,dy)\\
&= \limsup_{B\in \beta_x}\frac{n(x)}{\mu_x(B)}\int_{B} \left[\Delta_f^+(x,y)\right]^p\mu(x,dy),
\end{align*}
where
\begin{equation}\label{eq:Delta+}
\Delta_f^+(x,y) := \left\{\begin{array}{cl} \frac{ (f(x) - f(y))_+}{m(x,y)}&\text{ if }x\neq y\\ \\ 0&\text{ if }x=y. \end{array}\right.
\end{equation}
\end{definition}

The value $T_{\mu}^+[f](x)$ corresponds to the dispersion of $f$ at $x$ that is exclusively due to the directions of descent. In the smooth case, the value of the oriented dispersion $T_{\mu}^+[f](x)$ is half the value of the non-oriented dispersion $T_{\mu}[f](x)$, as expected by symmetry. This is the content of the following proposition.

\begin{proposition}
Let $X = \mathbb{R}^n$, $n(x) \equiv n$, $\beta_x = \{ B(x,\varepsilon)\ :\ \varepsilon>0 \}$ and $\mu_x$ be the $n$-dimensional Lebesgue measure for every $x\in\mathbb{R}^n$. Take $p=2$ and $m(x,y) = \|x-y\|$.
Then
\[
T^+_{\mu}[f](x) = \frac{1}{2}\|\nabla f(x)\|^2,\quad \text{for every }f\in \mathcal{C}^{1}(\mathbb{R}^{n}).
\]
\end{proposition}
\begin{proof}
If $\|\nabla f(x)\| = 0$, then $0\leq T_{\mu}^+[f](x) \leq T_{\mu}[f](x) = \|\nabla f(x)\|^2 = 0$ and the conclusion follows trivially. \smallskip \newline
Let us now consider the case $\|\nabla f(x)\| \neq 0$. By a change of coordinates, we may assume that $x= 0$, $f(0) = 0$ and $\nabla f(0) = \alpha\,e_n$, where $\alpha = \|\nabla f(0)\|>0$ and $e_n$ is the $n$-th vector of an orthonormal basis of $\mathbb{R}^n$. In this setting, we denote
$$S := [f\leq f(0)]\quad \text{and }\quad \mathbb{R}^{n-1}:=\mathrm{span} \{e_{j}\}_{j=1}^{n-1}\, \equiv \,\{z\in\mathbb{R}^n:\, \langle e_n,z\rangle = 0\}.$$
Following a similar development as in the proof of Proposition~\ref{sec04:prop:extensionDifussionToMetric}, for the particular case $R(x)=\mathbb{I}_n$ (the identity map on $\mathbb{R}^n$), we deduce
\[
T_{\mu }^+[f](x)\,=\,\limsup_{r\to 0}\frac{n}{\mathcal{L}_{n}(B(0,r))} \int_{B(0,r)\cap S}\left\langle \nabla f(0),\frac{u}{\|u\|} \right\rangle ^{2}du.
\]
Consider the half-space $H = \{z\in\mathbb{R}^n:\,\langle e_n, z \rangle\leq 0\}$. Then we have:
\[
\left|\, \int_{B(0,r)\cap S}\left\langle \nabla f(0), \frac{u}{\|u\|} \right\rangle^2 du - \int_{B(0,r)\cap H}\left\langle \nabla f(0), \frac{u}{\|u\|}\right\rangle^2 du \,\right| \,\leq\, \int_{B(0,r)\cap (S\triangle H)}\left\langle \nabla f(0), \frac{u}{\|u\|}\right\rangle^2 du\,.
\]
In what follows we show that $\mathcal{L}_n(B(0,r)\cap (S\triangle H))$ is small, where $S\triangle H$ denotes the symmetric difference between $S$ and $H$. To this end, it is easy to see that
\[
B(0,r)\cap (S\triangle H) \subset B_{n-1}(0,r)\times [-d(r),d(r)],
\]
where $d(r)$ stands for the maximal distance between the subspace $\mathbb{R}^{n-1}\times \{0\}$ and the elements of the following set (see Figure~\ref{fig1})
\[
D(r)=\{ (y,z)\in B(0,r):\ f(y,z) = 0\}.
\]
\begin{figure}[h!]\label{fig1}
\centering
\begin{tikzpicture}[>=stealth',scale=1]
\begin{axis}[ xmin=-2,xmax=2, ymin=-2,ymax=2, yticklabels=\empty, xticklabels = \empty, axis line style={draw=none}, tick style={draw=none} ]
\addplot[name path=f,-,thick] expression[domain=-2:2,samples=100,smooth]{x^3+0.5*x^2};
\path[name path=axis] (axis cs:-2,0) -- (axis cs:2,0);
\addplot [ thick, color=gray, fill=gray, fill opacity=0.3 ] fill between[ of=f and axis, soft clip={domain=0:0.9}, ];
\addplot [ thick, color=gray, fill=gray, fill opacity=0.3 ] fill between[ of=axis and f, soft clip={domain=-1.125:0}, ];
\draw[-,very thick] (axis cs:{-2},{0})--(axis cs:{2},{0});
\draw[thick] (axis cs:{0},{0}) circle (2.2cm);
\node at (axis cs:{-1.3},{1.5}) {\small $B(0,r)$};
\fill (axis cs:{0},{0}) circle (0.05cm) node[above right]{\small $0$};
\draw[->,very thick](axis cs:{0},{0})--(axis cs:{0},{1}) node[midway,left]{\small $\nabla f(0)$};
\draw[thick,dashed] (axis cs:{-1.3},{-0.9^3-0.5*0.9^2})--(axis cs:{-1.3},{0.9^3+0.5*0.9^2})--(axis cs:{1.3},{0.9^3+0.5*0.9^2}) -- (axis cs:{1.3},{-0.9^3-0.5*0.9^2})--cycle;
\node at (axis cs:{0.5},{1.8}) {\small $[f= f(0)]$};
\draw [thick, decorate,decoration={brace,amplitude=2pt,mirror},xshift=21.4pt,yshift=-0.4pt](axis cs:{0.9},{0})--(axis cs:{0.9},{0.9^3+ 0.5*0.9^2}) node[black,midway,xshift=0.4cm] {\tiny $d(r)$};
\draw[fill] (axis cs:{0.9},{0.9^3+0.5*0.9^2}) circle (0.05cm);
\node at (axis cs:{1.8},{-0.15}) {\small $\mathbb{R}^{n-1}$};
\end{axis}
\end{tikzpicture}
\caption{ The gray area corresponds to the symmetric difference $S\triangle H$.
The dashed line outlines the set $B_{n-1}(0,r)\times [-d(r),d(r)]$, where the dot depicts the farthest point of the set $D(r)$ from the linear subspace $\mathbb{R}^{n-1}$.}
\end{figure}
Using the Implicit Function Theorem, we deduce the existence of an open subset $\mathcal{U}\subset \mathbb{R}^{n-1}$ containing~$0$, an open set $\mathcal{V}\subset \mathbb{R}^n$ containing $0$ and a function $\varphi:\mathcal{U}\to \mathbb{R}$ of class $\mathcal{C}^1$ such that its graph coincides with $[f = 0]\cap \mathcal{V}$ and
\[
\nabla \varphi(0) = - (\partial_n f(0))^{-1} \begin{pmatrix} \partial_1 f(0)\\ \vdots\\ \partial_{n-1}f(0) \end{pmatrix} = \mathbf{0}_{n-1}.
\]
Therefore, for $r>0$ sufficiently small, we have $B_{n-1}(0,r)\subset \mathcal{U}$ and $B(0,r)\subset \mathcal{V}$, which yields
\[
\{ (y,z)\in B_{n-1}(0,r)\times \mathbb{R} : \, f(y,z) = 0\} = \{ (y,\varphi(y)):\, y\in B_{n-1}(0,r) \}.
\]
Therefore $d(r) = \sup \{ |\varphi(y)|:\, y\in B_{n-1}(0,r) \}$. Invoking the mean value theorem we deduce
\[
\sup\{ |\varphi(y)|: \, y\in B_{n-1}(0,r) \} \,\leq \, r\cdot \sup\{ \|\nabla\varphi(y)\|:\, y\in B_{n-1}(0,r) \}.
\]
By continuity of $\nabla \varphi$ and recalling that $\nabla\varphi(0) = 0$ we deduce that $d(r) = o(r)$. Recalling formula~\eqref{eq:VolumeNBall} for the volume of the ($n-1$)-dimensional ball $B_{n-1}(0,r)$, we set
$$ K = \frac{2\pi^{(n-1)/2}}{\Gamma\left( \frac{n-1}{2} + 1 \right)} $$
and we obtain:
\[
\mathcal{L}_n (B(0,r)\cap (S\triangle H)) \leq \mathcal{L}_{n-1}(B_{n-1}(0,r))\cdot 2d(r) = (K\cdot r^{n-1})\,o(r) \equiv o(r^{n}).
\]
Therefore,
\begin{align*}
\frac{n}{\mathcal{L}_n(B(0,r))} \int_{B(0,r)\cap (S\triangle H)}\left\langle \nabla f(0), \frac{u}{\|u\|}\right\rangle^2 du\, &\leq n\,\|\nabla f(0)\|^2 \,\frac{\mathcal{L}_n (B(0,r)\cap (S\triangle H))}{\mathcal{L}_n (B(0,r))} \smallskip\\
&= \frac{n\,\|\nabla f(0)\|^2}{\mathcal{L}_n(B(0,1))}\,\frac{o(r^n)}{r^n} \xrightarrow{r\to 0} 0.
\end{align*}
Since $B(0,r)\cap H$ is the lower half of the ball $B(0,r)$, a symmetry argument ensures
\begin{align*}
T^+_{\mu}[f](0) &= \limsup_{r\to 0} \frac{n}{\mathcal{L}_n(B(0,r))} \, \int_{B(0,r)\cap H}\left\langle \nabla f(0), \frac{u}{\|u\|}\right\rangle^2 du \smallskip \\
&= \frac{1}{2}\,\limsup_{r\to 0}\frac{n}{\mathcal{L}_n(B(0,r))}\,\int_{B(0,r)}\left\langle \nabla f(0), \frac{u}{\|u\|}\right\rangle^2 du\, = \frac{1}{2} \|\nabla f(0)\|^2.
\end{align*}
The proof is complete.
\end{proof}

\begin{remark}
The above arguments can be easily adapted to show that when $\mu_x$ and $n(x)$ are as in Proposition~\ref{sec04:prop:extensionDifussionToMetric}, then the oriented dispersion operator $T_{\mu}^+$ (\textit{cf.} Definition~\ref{def-oriented}) satisfies:
\[
T_{\mu}^+[f](x) = \frac{1}{2}\|R(x)\nabla f(x)\|^2, \qquad\text{for all } f \in \mathcal{C}^1.
\]
\end{remark}

The following result justifies the introduction of the oriented dispersion in a nonsmooth setting. Given a metric space $(X,d)$ we denote by $\mathrm{Lip}(X)$ the class of real-valued Lipschitz continuous functions on $X$.

\begin{theorem}\label{sec04:thm:DispersionIsModDescent}
Let $(X,d)$ be a metric space, let $\mu:X\times\mathcal{B}(X)\to\mathbb{R}_+$ be a measure mapping such that for every $x\in X$, $\mu_x$ is a locally finite measure which is positive on open sets, and let $\beta=\{\beta_x\}_{x\in X}$ be any family of neighborhood bases. Then, the oriented dispersion operator $T^+_{\mu}$ is a descent modulus for $\mathcal{K}(X)$ and satisfies $\mathcal{K}(X)\cap\mathrm{Lip}(X) \subset \dom(T^+_{\mu})$.
\end{theorem}

\begin{proof}
The conditions over $\mu$ ensure that $\mathcal{K}(X)\cap\mathrm{Lip}(X)\subset \dom(T^+_{\mu})$. Clearly the operator $T^+_{\mu}$ preserves global minima and is scalar-monotone. Let us now show that it is monotone. Let $f,g\in\mathcal{K}(X)$ and let $x\in X$ be such that
\[
(f(x) - f(z))_+ \geq (g(x) - g(z))_+,\quad \text{for all }z\in X.
\]
Then, $\Delta_f^+(x,z) \geq \Delta_g^+(x,z)$ for all $z\in X$ and the conclusion follows.
\end{proof}

\begin{remark}
The measure map $\mu:X\times\mathcal{B}(X)\to\mathbb{R}_+$ is assumed to be locally finite, which yields in particular that each measure $\mu_x$ is finite on the compact sets of $(X,\tau)$. Apart from this assumption and the existence of a neighborhood system $\{\beta_x \}_{x\in X}$ where $\mu_x$ takes nonzero values, no other property is required. In this setting, the superior limits in Definition~\ref{sec04:def:ExtendendedDiffusionOp} and Definition~\ref{def-oriented} are well defined, yielding that the dispersion operators are descent moduli. Even less is required to define the nonlocal operators of the next subsection, namely that $\mu_x$ be finite on compact sets.
\end{remark}

\subsection{Oriented nonlocal operators}

Apart from the diffusion operators, which are of local nature, one can also consider nonlocal operators. The latter serve to model jump dynamics, see \textit{e.g.} \cite{EK1986}. We shall now define dispersion measures for these processes.

\begin{definition}[Nonlocal dispersion operators]\label{sec04:def:nonlocalDispersion}
Let $\mu:X\times\mathcal{B}(X)\to\mathbb{R}_+$ be a measure mapping such that for every $x\in X$, $\mu_x$ is finite on all compact sets, and let $\phi:\mathbb{R}_+\to\mathbb{R}_+$ be a strictly increasing function with $\phi(0) = 0$. We define the nonlocal dispersion operator induced by $\phi$ and $\mu$ as
\[
T_{\phi,\mu}[f](x) = \int_X \phi(|f(x) - f(y)|)\mu(x,dy).
\]
\end{definition}

By construction, $T_{\phi,\mu}$ is finite for every measurable bounded function with compact support. When $X=V$ is a finite set, the nonlocal operators are particularly relevant, due to the fact that all points are isolated and so diffusion is not possible.
In this setting, the measure map $\mu$ can be represented by a matrix $L:V\times V\to\mathbb{R}_+$, in the form
\[
T_{\phi,\mu}[f](x) = \sum_{y\in V} L(x,y) \,\phi(|f(x) - f(y)|).
\]

\begin{remark}
In the context of Markov generators, the nonlocal operators are of the form
$$L[f](x) = \int_X (f(y) - f(x))\mu_x(dy),$$
where $\mu$ is assumed to be regular in the sense that $x\mapsto \mu_x(A)$ is measurable for every $A\in\mathcal{B}(X)$. When $\phi(t) = t^2$, the dispersion operator satisfies $T_{\phi,\mu}[f] = \Gamma[f]$, where $\Gamma$ is the carr\'e-du-champ operator associated to $L$, which in full generality is defined by the identity $\Gamma[f] = L[f^2] - 2fL[f]$ (as soon as $f,f^2\in \dom (L)$, see e.g. \cite{BGL2014}).
\end{remark}

In general, a nonlocal operator $T_{\phi,\mu}$ might fail to be a descent modulus and to determine functions in the sense of Theorem~\ref{sec03:thm:Determination}.

\begin{example}\label{example:CarreFailsDetermination}
Let $\phi(t):= t^2$. Fix an even $N\in\mathbb{N}$, and set $\mathcal{V}:= \mathbb{Z}_{N}\cup\{\bar{0}\}$, where $\bar{0}\notin \mathbb{Z}_N$ and $\mathbb{Z}_N =\{0,1,\ldots, N-1\}$ stands for the usual cyclic additive group modulo $N$.
We define a nonlocal operator $L$ as follows:
\[
L(x,y) = \begin{cases} \phantom{jo}1/2,\qquad & \text{ if }x\in \mathbb{Z}_N\setminus\{0\}\text{ and }y = x \pm 1,\\ \phantom{jo}1/3,\qquad &\text{ if }x = 0\text{ and }y\in\{\bar{0},1,-1\},\\ \phantom{jo}1,\qquad &\text{ if }x=\bar{0} \text{ and }y = 0,\\ \phantom{jo}0,&\text{ otherwise.} \end{cases}
\]
\begin{figure}[h]
\fontsize{6pt}{6pt}\selectfont
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=0pt,auto,node distance=2cm, semithick]
\node[state] (A){ $\bar{0}$};
\node[state] (B) [right of=A] {$0$};
\node[state] (C) [above right of=B] {$1$};
\node[state] (D) [right of =C] {$2$};
\node[state] (E) [below right of=D] {$3$};
\node[state] (F) [below left of=E] {$4$};
\node[state] (G) [left of=F] {$5$};
\path (A) [bend right=20] edge node[below]{$1$} (B)
(B) edge node[above]{$\tfrac{1}{3}$} (A) %
(B) edge node[below]{$\tfrac{1}{3}$} (C)
(C) edge node[above]{$\tfrac{1}{2}$} (B) %
(C) edge node[below]{$\tfrac{1}{2}$} (D)
(D) edge node[above]{$\tfrac{1}{2}$} (C) %
(D) edge node[below]{$\tfrac{1}{2}$} (E)
(E) edge node[above]{$\tfrac{1}{2}$} (D) %
(E) edge node[above]{$\tfrac{1}{2}$} (F)
(F) edge node[below]{$\tfrac{1}{2}$} (E) %
(F) edge node[above]{$\tfrac{1}{2}$} (G)
(G) edge node[below]{$\tfrac{1}{2}$} (F) %
(G) edge node[above]{$\tfrac{1}{2}$} (B)
(B) edge node[below]{$\tfrac{1}{3}$} (G) ;
\end{tikzpicture}
\caption{Case $N=6$.}
\label{fig:ExampleZN}
\end{figure}
Now, choose two functions $f_1,f_2\in\mathbb{R}^\mathcal{V}$ satisfying
\[
f_i(\bar{0}) = f_i(0) = 0\quad\text{ and }\quad |f_i(x\pm 1) - f_i(x)| = 1,\quad\forall x\in \mathbb{Z}_N.
\]
There are at least ${ N\choose N/2} > 1$ functions satisfying the above requirements, so we can take $f_1\neq f_2$.
However, it is not hard to see that for the measure map $\mu$ associated with $L$, the nonlocal operator $T_{\mu}$ satisfies
\[
T_{\mu}[f_i] (x) = \begin{cases} 0\quad&\text{ if }x=\bar{0},\\ 2/3\quad&\text{ if }x= 0,\\ 1&\text{ otherwise,} \end{cases}
\]
for $i=1,2$. Thus, $T_{\mu}$ does not preserve the global minima since either $\argmin f_i \supseteq \{ 0,\bar{0}\}$ or $\argmin f_i \subset \mathcal{V}\setminus \{ 0,\bar{0}\}$, and in both cases $f_i$ attains its global minimum at a point where $T_{\mu}[f_i]>0$. Furthermore, $T_{\mu}$ fails the determination theorem even for functions with $\mathcal{Z}_{T_{\mu}}(f)\neq \emptyset$. \hfill$\Diamond$
\end{example}

\smallskip

Example~\ref{example:CarreFailsDetermination} illustrates the following point: when $\phi(t) = t^2$, nonlocal operators do not preserve global minima in general. Indeed, if $\mathcal{V}$ is a finite state space, $T_{\mu}[f](x)$ measures the dispersion around the point $x\in \mathcal{V}$, where $L(x,y)$ represents the rate of jumping from the point~$x$ to the point~$y$. Therefore it is natural for $T_{\mu}[f](x)$ to be strictly positive. However, by imposing $f(0)=f(\bar 0)$ in Example~\ref{example:CarreFailsDetermination} we are forcing a point with no dispersion: starting from $x=\bar{0}$, the only possibility is to jump to~$0$.

\begin{definition}[Oriented nonlocal operators]\label{sec04:def:OrientedNonlocal}
Let $\phi$ and $\mu$ be as in Definition~\ref{sec04:def:nonlocalDispersion}. We define the oriented nonlocal operator induced by $\phi$ and $\mu$ as
\begin{equation}
T_{\phi,\mu}^+[f](x) = \int_{[f\leq f(x)]} \phi(f(x) - f(y))\mu_x(dy) = \int_{X} \phi((f(x) - f(y))_+)\mu_x(dy).
\end{equation}
\end{definition}

Similarly to the (local) oriented dispersion operator, the above operator is a descent modulus for $\mathcal{K}(X)$, and it determines the functions of a suitable subclass of the continuous coercive functions $\mathcal{K}(X)$. In the nonlocal case, we do not need to assume Lipschitz continuity, and consequently, $X$ can be a mere topological space.
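On a finite state space the integrals above reduce to finite sums, so the contrast between $T_{\phi,\mu}$ and $T^+_{\phi,\mu}$ can be checked directly. The following Python sketch (illustrative only; it instantiates the rates of Example~\ref{example:CarreFailsDetermination} with $N=6$ and two admissible functions $f_1\neq f_2$) confirms that the non-oriented operator cannot separate $f_1$ from $f_2$, while the oriented one vanishes at each function's global minimizer:

```python
from fractions import Fraction as F

N = 6
BAR0 = "bar0"                      # the extra state, written 0-bar in the text
V = list(range(N)) + [BAR0]

def L(x, y):
    # jump rates of the generator in the example (cyclic group plus extra state)
    if x == BAR0:
        return F(1) if y == 0 else F(0)
    if x == 0:
        return F(1, 3) if y in (BAR0, 1, N - 1) else F(0)
    return F(1, 2) if y in ((x + 1) % N, (x - 1) % N) else F(0)

phi = lambda t: t * t              # phi(t) = t^2

def T(f):                          # non-oriented nonlocal dispersion
    return {x: sum(L(x, y) * phi(abs(f[x] - f[y])) for y in V) for x in V}

def T_plus(f):                     # oriented version: descent directions only
    return {x: sum(L(x, y) * phi(max(f[x] - f[y], 0)) for y in V) for x in V}

# two distinct functions with f(0) = f(bar0) = 0 and unit increments on Z_6
f1 = {0: 0, 1: 1, 2: 2, 3: 1, 4: 0, 5: -1, BAR0: 0}
f2 = {0: 0, 1: -1, 2: -2, 3: -1, 4: 0, 5: 1, BAR0: 0}

print(T(f1) == T(f2))                 # True: T cannot distinguish f1 from f2
print(T_plus(f1)[5], T_plus(f2)[2])   # 0 0: T+ vanishes at the minimizers
```

The oriented operator also vanishes at $\bar{0}$, which is consistent with the definition of a descent modulus: it must vanish at global minima, but may vanish elsewhere as well.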
The relevant class is that of the \textit{strictly coercive} functions, given by
\begin{equation}\label{eq:StrictlyCoercive}
\mathcal{K}_s(X) = \left\{ f:X\to \mathbb{R}\ :\ \forall x\in X,\, [f\leq f(x)]\text{ is compact} \right\}.
\end{equation}
The main difference between $\mathcal{K}_s(X)$ and $\mathcal{K}(X)$ is that the latter class admits functions attaining their maximum value, since the set $[f \leq \max_X f]$ does not have to be compact. If $X$ is compact, then the classes $\mathcal{K}(X)$ and $\mathcal{K}_s(X)$ coincide; if $X$ is noncompact, however, functions in $\mathcal{K}_s(X)$ cannot attain their supremum.

\begin{theorem}\label{thm:CarreNonlocalDetermining}
Let $\mu:X\times\mathcal{B}(X)\to\mathbb{R}_+$ be a measure mapping such that $\mu_x$ is finite on all compact sets, for every $x\in X$. Then the oriented nonlocal operator $T_{\phi,\mu}^+$ is a descent modulus for $\mathcal{K}(X)$ and satisfies $\mathcal{K}_s(X)\subset \dom(T_{\phi,\mu}^+)$.
\end{theorem}

\begin{proof}
Since $\mu_x$ is finite on all compact sets, for each $x\in X$, we deduce that $\mathcal{K}_s(X)\subset \dom(T_{\phi,\mu}^+)$. Furthermore, since $\phi(0) = 0$, it is clear that $T_{\phi,\mu}^+$ preserves global minima. Let us now show that $T_{\phi,\mu}^+$ is monotone: let $f,g\in \mathcal{K}(X)$ and $x\in X$ be such that
\[
(f(x)-f(z))_+\geq (g(x) - g(z))_+,\quad \forall z\in X.
\]
Then, since $\phi$ is non-decreasing, we have that
\[
T_{\phi,\mu}^+[f](x) = \int_X\phi( (f(x)-f(z))_+ )\mu_x(dz) \geq \int_X\phi( (g(x)-g(z))_+ )\mu_x(dz) = T_{\phi,\mu}^+[g](x).
\]
We conclude that $T_{\phi,\mu}^+$ is monotone. Finally, let us show that $T_{\phi,\mu}^+$ is scalar-monotone. Let $f\in \mathcal{K}(X)$, let $r>1$ and let $x\in X$ be such that $0<T_{\phi,\mu}^+[f](x)<+\infty$. By monotonicity, we have that $T_{\phi,\mu}^+[rf](x) \geq T_{\phi,\mu}^+[f](x)>0$.
Let us now define the sets
\[
A_{n} = \left\{ z\in X: \, f(x)- f(z) \geq \frac{1}{n} \right \}\, \bigcap\, \left \{ z\in X\ :\ \phi(r(f(x)- f(z))) - \phi(f(x)- f(z)) \geq \frac{1}{n} \right \}.
\]
Clearly $\{A_n\}_n$ is an increasing sequence of $\mu_x$-measurable sets satisfying:
$$\bigcup_{n\geq 1} A_n = [f<f(x)].$$
Thus, by the monotone convergence theorem, we have that
\[
\lim_{n} \int_{A_n} \phi(f(x) - f(z))\mu_x(dz) = \int_{[f<f(x)]}\phi(f(x) - f(z))\mu_x(dz) = T_{\phi,\mu}^+[f](x)>0.
\]
Now choose $n\in\mathbb{N}$ such that $\int_{A_n} \phi(f(x) - f(z))\mu_x(dz)> 0$. Then $\mu_x(A_n)>0$ and
\begin{align*}
T_{\phi,\mu}^+[rf](x) &= \int_X \phi(r(f(x) - f(z))_+)\mu_x(dz)\\
&= \int_{A_{n}} \phi(r(f(x) - f(z)))\mu_x(dz) + \int_{X\setminus A_n} \phi(r(f(x) - f(z))_+)\mu_x(dz)\\
&\geq \int_{A_{n}} \left[\phi(f(x) - f(z)) + \frac{1}{n}\right]\mu_x(dz) + \int_{X\setminus A_n} \phi((f(x) - f(z))_+)\mu_x(dz)\\
&= \int_X \phi((f(x) - f(z))_+)\mu_x(dz) +\frac{1}{n}\,\mu_x(A_n) > T_{\phi,\mu}^+[f](x).
\end{align*}
All three properties of Definition~\ref{sec03:def:ModulusOfDescent} are satisfied and the proof is complete.
\end{proof}

\begin{remark}
Any $\Gamma$-operator (carr\'e-du-champ operator) coming from a regular Markov generator in $\mathbb{R}^n$ (with the euclidean distance) has the form
\[
\Gamma[f](x) = \limsup_{r\to 0} \frac{n(x)}{\mu_{1,x}(B(x,r))} \int_{B(x,r)}\left[ \Delta_f(x,y) \right]^2 \mu_{1,x}(dy) + \int_X (f(x) - f(y))^2\mu_{2,x}(dy).
\]
The above operator measures the dispersion of the function $f$ around a point $x$, when the point evolves following a local diffusion process linked to $(\mu_{1,x})_x$ and a nonlocal jump process given by $(\mu_{2,x})_x$.
The oriented dispersion takes into account only the descent directions and has the form
\[
\Gamma^+[f](x) = \limsup_{\varepsilon\to 0} \frac{n(x)}{\mu_{1,x}(B(x,\varepsilon))} \int_{B(x,\varepsilon)}\left[ \Delta_f^+(x,y) \right]^2 \mu_{1,x}(dy) + \int_X [(f(x) - f(y))_+]^2\mu_{2,x}(dy).
\]
The above oriented operator is a descent modulus for $\mathcal{K}(X)$. Thus, if we know the oriented dispersion of a continuous coercive function $f$ (with finite oriented dispersion), and we know its values on the critical points (that is, points with zero oriented dispersion), we completely determine the function $f$, in the spirit of Theorem~\ref{sec03:thm:Determination}.
\end{remark}

\section{Descent moduli over finite sets}\label{sec:5}

Finite state spaces provide a simple and experimental framework to investigate further properties of moduli of descent. We shall use the terminology \textquotedblleft \textit{finite descent modulus}" to refer to a descent modulus over a finite set. In this section we study two particular features of finite descent moduli:
\begin{itemize}
\item an alternative proof, based on a probabilistic approach, of (an enhanced version of) the determination theorem for descent moduli mimicking Markov generators; and
\item a characterization of \textit{homogeneous} finite descent moduli, up to a natural equivalence relation based on the corresponding critical map.
\end{itemize}
\medskip

We have already encountered a finite descent modulus in Example~\ref{example:CarreFailsDetermination}. Let us present a general procedure generating finite descent moduli: on a finite state space $\mathcal{V}$ (neither empty nor a singleton), consider a Markov generator $L:=(L(x,y))_{x,y\in \mathcal{V}}$, namely a matrix satisfying
\begin{equation*}
\left\{
\begin{array}{lcl}
\forall x,y\in \mathcal{V}:\,\,\quad x\neq y\,\,\Longrightarrow \,\,L(x,y) & \geq & 0 \\ [2mm]
\forall x\in \mathcal{V}:\qquad \phantom{salas}\sum_{y\in \mathcal{V}}L(x,y) & = & 0
\end{array}
\right.
\end{equation*} Such a generator acts linearly on any function $f\in \mathbb{R}^{\mathcal{V}}$ (which coincides with $\mathcal{K}(\mathcal{V})$) via \begin{equation} \forall \ x\in \mathcal{V},\qquad L[f](x)=\sum_{y\in \mathcal{V}}L(x,y)(f(y)-f(x)) \label{eq:star} \end{equation} By analogy to Definition~\ref{def-oriented} and Definition~\ref{sec04:def:OrientedNonlocal}, we consider the non-linear operator $T_{L}$ acting on any function $f\in \mathbb{R}^{\mathcal{V}}$ via \begin{equation}\label{TL} \forall \ x\in \mathcal{V},\qquad T_{L}[f](x)=\sum_{y\in \mathcal{V}}L(x,y)(f(x)-f(y))_{+} \end{equation} From Theorem \ref{thm:CarreNonlocalDetermining}, $T_{L}$ is a descent modulus. In Subsection~\ref{apa}, we will recover the determination theorem for this kind of descent modulus via a probabilistic approach.\smallskip More generally, for any $m>0$, one can consider $T_{L,m}$ given by \begin{equation} \forall \ x\in \mathcal{V},\qquad T_{L,m}[f](x)=\left( \sum_{y\in \mathcal{V}}L(x,y)((f(x)-f(y))_{+})^{m}\right) ^{1/m} \label{eq:TLp} \end{equation} as well as its limit $T_{L,\infty }$ as $m$ goes to infinity: \begin{equation} \forall \ x\in \mathcal{V},\qquad T_{L,\infty }[f](x)=\max \{(f(x)-f(y))_{+}\,:\,y\in D_{x}\} \label{eq:TLi} \end{equation} where for every $x\in \mathcal{V}$ we set: \begin{equation}\label{eq:act} D_{x}:=\{x\}\sqcup \{y\in \mathcal{V}\,:\,L(x,y)>0\} \end{equation} Let us recall (see Definition~\ref{def-homog} for $p=1$) that a descent modulus $T$ is said to be \textit{homogeneous} (or 1--\textit{homogeneous}) if for all $r\geq 0$ and $f\in \mathbb{R}^{\mathcal{V}}$ we have: $T[rf]=rT[f]$.\smallskip\newline All the above operators $T_{L,m}$, $m\in (0,+\infty ],$ are homogeneous descent moduli. It should be noticed that there are many more homogeneous descent moduli: for instance in \eqref{eq:TLp} we can allow the exponent $m$ to depend on $x\in \mathcal{V}$. 
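These operators are easy to experiment with numerically. The following Python sketch (a toy nearest-neighbour generator on $\mathbb{Z}_4$ with off-diagonal rates $1/2$, chosen purely for illustration) checks the $1$-homogeneity of $T_{L,m}$ and the convergence of $T_{L,m}$ to $T_{L,\infty}$ as $m\to\infty$:

```python
N = 4

def L(x, y):
    # nearest-neighbour Markov generator on Z_4 (off-diagonal rates only)
    return 0.5 if y in ((x + 1) % N, (x - 1) % N) else 0.0

def T_Lm(f, x, m):
    # T_{L,m}[f](x) = ( sum_y L(x,y) ((f(x)-f(y))_+)^m )^(1/m)
    return sum(L(x, y) * max(f[x] - f[y], 0.0) ** m for y in range(N)) ** (1.0 / m)

def T_Linf(f, x):
    # T_{L,infty}[f](x) = max over D_x of the positive drops (f(x)-f(y))_+
    D_x = [x] + [y for y in range(N) if L(x, y) > 0]
    return max(max(f[x] - f[y], 0.0) for y in D_x)

f = [3.0, 1.0, 0.0, 2.0]

# 1-homogeneity: T_{L,m}[r f] = r T_{L,m}[f] for r >= 0
rf = [5.0 * v for v in f]
print(abs(T_Lm(rf, 0, 3) - 5.0 * T_Lm(f, 0, 3)) < 1e-9)   # True

# convergence to the largest positive drop over the neighbourhood D_x
print(abs(T_Lm(f, 0, 200.0) - T_Linf(f, 0)) < 0.01)       # True
```

The same experiment with a non-linear $\phi$ in place of the power $t\mapsto t^m$ breaks the homogeneity identity, in line with the characterization stated below.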
Moreover, given $n$ homogeneous descent moduli $T_{1}$, ..., $T_{n}$, and positive numbers $a_{1},...,a_{n}>0$, the weighted sum $a_{1}T_{1}+\cdots +a_{n}T_{n}$ is again a homogeneous descent modulus. Even fancier constructions are possible. This being said, there exist non-homogeneous descent moduli. Indeed, for any non-decreasing mapping $\phi \,:\,\mathbb{R}_{+}\rightarrow \mathbb{R}_{+}$ with $\phi (0)=0$, the descent modulus $T_{\phi }$ defined by
\begin{equation*}
\forall \ x\in \mathcal{V},\qquad T_{\phi}[f](x)=\sum_{y\in \mathcal{V} }L(x,y)\phi ((f(x)-f(y))_{+})
\end{equation*}
is homogeneous if and only if $\phi $ is linear, as long as $L\neq 0$.\smallskip

Given a descent modulus $T$ we recall from (\ref{eq:ZTf}) the \textit{critical map} $\mathcal{Z}_{T}$, which associates to every function $f\in \mathbb{R}^{\mathcal{V}}$ its set of critical points $\mathcal{Z}_{T}(f)=(T[f])^{-1}(0)$. Notice that the critical maps $\mathcal{Z}_{T_{L,m}}$ related to the moduli $T_{L,m}$ in \eqref{eq:TLp}--\eqref{eq:TLi} are all the same as $m$ varies in $(0,+\infty ]$.\smallskip

In Subsection~\ref{cm} we introduce an equivalence relation among homogeneous descent moduli, using the critical maps. Under this relation, all moduli $T_{L,m}$ in \eqref{eq:TLp} turn out to be equivalent to each other (for different values of $m\in \mathbb{N}$) and also equivalent to $T_{L,\infty }$. The main result of this section shows that every homogeneous descent modulus on a general finite set $\mathcal{V}$ (without a generator $L$) is still of the form \eqref{eq:TLi} for some family $\mathcal{D}=\{\mathcal{D}_{x}\}$ which is naturally associated to~$T$, provided it satisfies a (necessary and sufficient) mild condition.

\subsection{A probabilistic approach} \label{apa}

Let $L:=(L(x,y))_{x,y\in \mathcal{V}}$ be a Markov generator on the finite set $\mathcal{V}$.
\smallskip

For every $f\in \mathbb{R}^{\mathcal{V}}$ the associated $f$-oriented Markov generator $L^{f}:=(L^{f}(x,y))_{x,y\in \mathcal{V}}$ is defined for $x,y\in \mathcal{V}$ with $x\neq y$ as follows:
\begin{equation*}
L^{f}(x,y):=\left\{
\begin{array}{ll}
L(x,y), & \hbox{if $f(y)\leq f(x)$} \\[2mm]
\phantom{dav}0, & \hbox{otherwise.}
\end{array}
\right.
\end{equation*}
The values $L^{f}(x,x)$ on the diagonal are determined by the requirement that the row sums $\sum_{y\in\mathcal{V}}L^{f}(x,y)$ should vanish.\smallskip

Let $T:\mathbb{R}^{\mathcal{V}}\rightarrow \mathbb{R}^{\mathcal{V}}$ be defined for every $f\in \mathbb{R}^{\mathcal{V}}\ $and $x\in \mathcal{V}$ as follows:
\begin{equation*}
T[f](x):=-L^{f}[f](x)=-\sum_{y\in \mathcal{V}}L^{f}(x,y)(f(y)-f(x))=\sum_{y\in \mathcal{V}}L(x,y)(f(x)-f(y))_{+}.
\end{equation*}
This non-linear operator $T$ coincides with $T_L$ defined in \eqref{TL} and is a descent modulus. For every $f\in \mathbb{R}^{\mathcal{V}}$ the set of $T$-critical points is given by the formula
\begin{equation}
\mathcal{Z}_{T}(f):=\{x\in \mathcal{V}\,:\,T[f](x)=0\}. \label{eq:ZT}
\end{equation}
Given $x,y\in \mathcal{V}$, an $L$-path from $x$ to $y$ is a finite sequence $\{x_{k}\}_{0\leq k\leq N}$ with $N\geq 0$, $x_{0}=x$, $x_{N}=y$ and such that for all $0\leq k<N$, $L(x_{k},x_{k+1})>0$. This path is called an $L^{f} $-path from $x$ to $y$ if in addition $\{f(x_{k})\}_{0\leq k\leq N}$ is a non-increasing finite sequence. We write $x\overset{f}{\rightarrow }y$ to indicate that there exists an $L^{f}$-path from $x$ to $y$. We set:
\begin{equation*}
x\succeq _{f}y\,\Longleftrightarrow \,x\overset{f}{\rightarrow }y\qquad \text{and}\qquad x\approx _{f}y\,\Longleftrightarrow \,\left\{
\begin{array}{c}
x\overset{f}{\rightarrow }y \\
y\overset{f}{\rightarrow }x
\end{array}
\right.
\end{equation*}
It is straightforward to check that $\succeq _{f}$ is a preorder on $\mathcal{V}$ and $\approx _{f}$ is its corresponding equivalence relation ($x\approx _{f}y$ if and only if $x\succeq _{f}y$ and $y\succeq _{f}x$). The set of minima of $\succeq _{f}$ is defined as follows:
\begin{equation}\label{eq:M}
M(f):=\left\{ \bar{x}\in \mathcal{V}:\;\,\forall x\in \mathcal{V},\;\left( \bar{x}\succeq _{f}x\,\Rightarrow \bar{x}\approx _{f}x\right) \right\} .
\end{equation}
Notice that $\bar{x}\in M(f)$ if and only if for any $y\in \mathcal{V}$ with $f(y)<f(\bar{x})$ and any $L$-path $\{x_{k}\}_{0\leq k\leq N}$ from $\bar{x}$ to $y$, we have $\max_{0\leq k\leq N}f(x_{k})>f(\bar{x})$. Moreover, we always have $M(f)\subset \mathcal{Z}_{T}(f)$ and the inclusion may be strict.

\begin{example}
Let $\mathcal{V} = \mathbb{Z}_9$, and set $L$ such that
\[
L(x,y) >0 \iff y=x\pm 1.
\]
Consider $f = (1,0,0,1,2,1,1,2,1)$. The set $\mathcal{V}$, its connections through $L$ and the level sets of $f$ are depicted in Figure~\ref{fig:ExampleLF}.
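Both $\mathcal{Z}_T(f)$ and $M(f)$ depend only on which rates $L(x,y)$ are positive, so they can be computed for this example with a short Python sketch (illustrative only; all positive rates are set to $1$):

```python
N = 9
f = [1, 0, 0, 1, 2, 1, 1, 2, 1]
nbrs = lambda x: [(x + 1) % N, (x - 1) % N]  # L(x,y) > 0  iff  y = x +- 1

# T_L[f](x) = sum_y L(x,y) (f(x)-f(y))_+, with all positive rates set to 1
T = [sum(max(f[x] - f[y], 0) for y in nbrs(x)) for x in range(N)]
Z_T = sorted(x for x in range(N) if T[x] == 0)

def reach(x):
    """Nodes reachable from x through L^f-paths (f non-increasing jumps)."""
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        for v in nbrs(u):
            if f[v] <= f[u] and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

# x belongs to M(f) iff every y with x >=_f y also satisfies y >=_f x
M = sorted(x for x in range(N) if all(x in reach(y) for y in reach(x)))

print(Z_T)  # [1, 2, 5, 6, 8]
print(M)    # [1, 2, 5, 6]
```

In particular the script exhibits the strict inclusion $M(f)\subsetneq \mathcal{Z}_T(f)$ claimed in the text.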
\begin{figure}[h] \fontsize{6pt}{6pt}\selectfont \centering \begin{tikzpicture}[->,>=stealth',shorten >=0pt,auto,node distance=1.5cm, semithick] \node (L1) {}; \node (L2) [node distance = 1.06 cm, above of = L1]{}; \node (L0) [node distance = 1.06 cm, below of = L1]{}; \node (L1end) [node distance = 13.3cm,right of = L1]{$f = 1$}; \node (L2end) [node distance = 13.3cm,right of = L2]{$f = 2$}; \node (L0end) [node distance = 13.3cm,right of = L0]{$f = 0$}; \path (L1) edge [-,dashed] (L1end) (L0) edge [-,dashed] (L0end) (L2) edge [-,dashed] (L2end) ; \node[state] (A)[fill = white, right of = L1,node distance = 1cm]{ $0$}; \node[state] (B) [fill = white,below right of=A] {$1$}; \node[state] (C) [fill = white,right of=B] {$2$}; \node[state] (D) [fill = white,above right of =C] {$3$}; \node[state] (E) [fill = white,above right of=D] {$4$}; \node[state] (F1) [fill = white,below right of=E] {$5$}; \node[state] (F2) [fill = white,right of=F1] {$6$}; \node[state] (G) [fill = white,above right of=F2] {$7$}; \node[state] (H) [fill = white,below right of=G] {$8$}; \node[state] (I) [fill = white, right of=H] {$0$}; \path (A) [bend right=20] edge node{} (B) (B) edge node{} (A) % (B) edge node {}(C) (C) edge node{} (B) % (C) edge node{} (D) (D) edge node{} (C) % (D) edge node{} (E) (E) edge node{} (D) % (E) edge node{} (F1) (F1) edge node{} (E) % (F1) edge node{} (F2) (F2) edge node{} (F1) % (F2) edge node{} (G) (G) edge node{} (F2) % (G) edge node{} (H) (H) edge node{} (G) % (H) edge node{} (I) (I) edge node{} (H) % ; \end{tikzpicture} \caption{Node $0$ has been replicated at the beginning and end of the representation. Only connections $(x,y)$ with $L(x,y)>0$ have been drawn. } \label{fig:ExampleLF} \end{figure} Here, $M(f) = \left\{1,2,5,6\right\}$ and $\mathcal{Z}_T(f) = \{ 1,2,5,6,8 \}$. The node $8$ is critical since $L$ does not allow to jump to any node with smaller value in one step. However, the path $8\to0\to1$ is an $L^f$-path leading to a point with smaller $f$-value. 
Note that $5$ and $6$ are in $M(f)$ since there is no $L^f$-path emanating from any of them and landing at a different node with a smaller $f$-value.\hfill$\diamond$ \end{example} \smallskip For $x\in \mathcal{V}$, let $X_{x}^{f}:=(X_{x}^{f}(t))_{t\geq 0}$ stand for a Markov process starting from $x$ with generator $L^{f}$. For such a process, the function \begin{equation*} \mathbb{R}_{+}\ni t\mapsto f(X_{x}^{f}(t)) \end{equation*} is almost surely non-increasing and bounded, thus converging. Furthermore, the finite Markov process $X_{x}^{f}(t)$ converges in law, as $t\to +\infty $, toward a distribution which may depend on the initial point $x$ and whose support is included in the set $M(f)$.\medskip Fix $f,g\in \mathbb{R}^{\mathcal{V}}$. Since $\mathcal{V}$ is finite, the functions $f,g$ are trivially continuous and coercive. Therefore, Theorem~\ref{sec03:thm:Determination} directly yields: \begin{equation} \left. \begin{array}{c} T[f]=T[g] \smallskip \\ f=g\text{ \ on }\mathcal{Z}_{T}(f) \end{array} \right\} \Longrightarrow f=g. \label{eq:dm} \end{equation} In what follows, we obtain~\eqref{eq:dm} via a probabilistic approach, in a slightly enhanced version, namely replacing the set $\mathcal{Z}_{T}(f)=\mathcal{Z}_{T}(g)$ (where $f$ and $g$ are assumed to be equal) by the (potentially smaller) set $M(f)\cup M(g)$. The technical ingredient of the proof is contained in the following lemma. \begin{lemma} \label{comp} For any $f,g\in \mathbb{R}^{\mathcal{V}}$ with $T[f]\geq T[g]$, we have $L^{f}[g]\geq L^{f}[f]$.
\end{lemma} \begin{proof} Indeed, for any $x\in \mathcal{V}$, we have \begin{eqnarray*} -L^{f}[g](x) &=&\sum_{y\,:\,f(y)\leq f(x)}L(x,y)(g(x)-g(y)) \\ &=&\!\!\!\!\!\sum_{y\,:\,f(y)\leq f(x),\,g(y)\leq g(x)}\!\!\!\!\!L(x,y)(g(x)-g(y))+\!\!\!\!\!\sum_{y\,:\,f(y)\leq f(x),\,g(y)>g(x)}\!\!\!\!\!L(x,y)(g(x)-g(y)) \\ &\leq &\sum_{y\,:\,f(y)\leq f(x),\,g(y)\leq g(x)}L(x,y)(g(x)-g(y))\leq \sum_{y\,:\,g(y)\leq g(x)}L(x,y)(g(x)-g(y)) \\ &=&T[g](x)\leq T[f](x)=-L^{f}[f](x). \end{eqnarray*} \end{proof} We are now ready to give a probabilistic proof of the following comparison result. (Recall from~\eqref{eq:M} the definition of $M(f)$.) \begin{proposition} Let $f,g\in \mathbb{R}^{\mathcal{V}}$ be two functions satisfying: \smallskip\newline {\rm (i).} $T[f](x)\geq T[g](x)$, for all $x\in\mathcal{V}$; and \smallskip\newline {\rm (ii).} $f(x)\geq g(x)$, for all $x\in M(f)$. \medskip\newline Then $f\geq g$. \end{proposition} \begin{proof} Due to the martingale problem characterization of $X_{x}^{f}$ (see e.g.~\cite{EK1986}), there exists a martingale $\{M_{g}^{f}(t)\}_{t\geq 0}$ starting from $0$ such that \begin{equation*} g(X_{x}^{f}(t))=g(x)+\int_{0}^{t}L^{f}[g](X_{x}^{f}(s))\,ds+M_{g}^{f}(t),\qquad\text{for all } t\geq 0. \end{equation*} Taking expectations, we get \begin{equation} \mathbb{E}[g(X_{x}^{f}(t))]=g(x)+\int_{0}^{t} \mathbb{E}[L^{f}[g](X_{x}^{f}(s))]\,ds,\qquad\text{for all } t\geq 0. \label{eq:t} \end{equation} Denote by $\pi ^{f}$ the limit law of the distributions of $X_{x}^{f}(t)$ as $t\to +\infty $. Then $\pi ^{f}$ is supported on $M(f)$ and \begin{equation*} \lim_{t\rightarrow +\infty }\mathbb{E}[g(X_{x}^{f}(t))]=\pi ^{f}[g]. \end{equation*} In particular, the integral on the right-hand side of \eqref{eq:t} converges as $t\to +\infty $ and it holds: \begin{equation}\label{eq:star} \pi ^{f}[g]\,=\,g(x)\,+\int_{0}^{+\infty }\mathbb{E}[L^{f}[g](X_{x}^{f}(s))]\,ds.
\end{equation} Applying the above arguments with $g$ replaced by $f$, we also obtain \begin{eqnarray} \label{r2} \pi^f[f]\,=\,f(x)+\int_0^{+\infty} \mathbb{E}[L^f[f](X^f_x(s))]\, ds. \end{eqnarray} Assumption (ii) yields $\pi ^{f}[f]\geq \pi ^{f}[g]$. On the other hand, from Lemma~\ref{comp}, we have \begin{equation*} \mathbb{E}[L^{f}[g](X_{x}^{f}(s))]\geq \mathbb{E} [L^{f}[f](X_{x}^{f}(s))], \qquad\text{for all } s\geq 0. \end{equation*} Combining the above with~\eqref{eq:star}, we deduce \begin{equation*} \pi^{f}[f]\,\geq \,\pi^{f}[g]= g(x)+\int_{0}^{+\infty }\mathbb{E}[L^{f}[g](X_{x}^{f}(s))]\,ds\,\geq\,g(x)+\int_{0}^{+\infty }\mathbb{E}[L^{f}[f](X_{x}^{f}(s))] \,ds. \end{equation*} Comparing the above inequality with~\eqref{r2} yields $f(x)\geq g(x)$ and the result follows. \end{proof} \bigskip By symmetry we obtain the following corollary: \begin{corollary} Let $f,g\in \mathbb{R}^{\mathcal{V}}$ be such that \smallskip\newline {\rm (i).} $T[f](x)= T[g](x)$, for all $x\in\mathcal{V}$; and \smallskip\newline {\rm (ii).} $f(x)= g(x)$, for all $x\in M(f)\cup M(g)$. \smallskip\newline Then $f = g$. \end{corollary} \subsection{Classification of descent moduli on $\mathbb{R}^{\mathcal{V}}$} \label{cm} Denote by $\mathcal{P}(\mathcal{V})^{\ast }$ the family of nonempty subsets of $\mathcal{V}$. Given a descent modulus $T$ on $\mathcal{V}$ we recall from \eqref{eq:ZT} the critical map \begin{equation*} \mathcal{Z}_{T}\,:\,\mathbb{R}^{\mathcal{V}}\rightarrow \mathcal{P(V)}^{\ast}. \end{equation*} \begin{definition}[equivalence of descent moduli] Let $T$, $S$ be two descent moduli on $\mathcal{V}$. We say that the moduli $T$ and $S$ are \textit{equivalent} (and denote $T\sim S$) if $\mathcal{Z}_{T}=\mathcal{Z}_{S}$.
\end{definition} Notice that if $T\sim S$, then $T$ and $S$ determine the same functions via Theorem \ref{sec03:thm:Determination}.\medskip A family $\mathcal{D}:=\{\mathcal{D}_{x}\}_{x\in \mathcal{V}}$ is called an \textit{active neighborhood system}, provided $x\in \mathcal{D}_{x}\subset \mathcal{V}$ for every $x\in \mathcal{V}$. An example of such a system has been defined in~\eqref{eq:act} in the particular case where the set $\mathcal{V}$ is equipped with a generator $L$.\smallskip\newline We henceforth denote by $\mathcal{E}(\mathcal{V})$ the set of active neighborhood systems on $\mathcal{V}$. To any such system $\mathcal{D}\in \mathcal{E}(\mathcal{V})$ we then associate a descent modulus $T_{\mathcal{D}}$ defined for $f\in \mathbb{R}^{\mathcal{V}}$ and $x\in \mathcal{V}$ as follows (compare with~\eqref{eq:D-global} in Proposition~\ref{prop-m-slope}): \begin{equation} T_{\mathcal{D}}[f](x):=\max_{y\in \mathcal{D}_{x}}\, (f(x)-f(y))_{+}. \label{eq:21} \end{equation} Conversely, given any (abstract) descent modulus $T$ we set: \begin{equation} \left\{ \begin{array}{rcl} \mathcal{K}_{x}(T) & \df & \left\{ K\subset \mathcal{V}\,:\,x\in K\cap \mathcal{Z}_{T}(\mathds{1}_{K})\right\} \\[2mm] D_{x}(T) & \df & \bigcap_{K\in \mathcal{K}_{x}(T)}K \end{array} \right. \label{eq:22} \end{equation} where $\mathds{1}_{K}$ denotes the characteristic function of the set $K,$ that is: \begin{equation*} \mathds{1}_{K}(x)=\left\{ \begin{tabular}{ll} $1,$ & if $x\in K$ \\ $0,$ & if $x\notin K.$ \end{tabular} \right. \end{equation*} The interest of these notions is illustrated by the following result: \begin{theorem}[classification of moduli] \label{caract} If a homogeneous descent modulus $T$ satisfies \begin{equation} \forall \ x\in \mathcal{V}:\quad D_{x}(T) \in \mathcal{K}_{x}(T) \tag{$\mathcal{H}$} \end{equation} then there exists a family $\mathcal{D}\in \mathcal{E}(\mathcal{V})$ such that $T$ is equivalent to $T_{\mathcal{D}}$ given in (\ref{eq:21}).
\end{theorem} \medskip Before we proceed, let us introduce the following definition. \begin{definition} For a critical map $\mathcal{Z}:\,\mathbb{R}^{\mathcal{V}}\rightarrow \mathcal{P(V)}^{\ast }$ and for each $x\in \mathcal{V}$, we define \begin{equation} \left\{ \begin{array}{rcl} \mathcal{K}_{x}(\mathcal{Z}) & \df & \{K\subset \mathcal{V}\,:\,x\in K\cap \mathcal{Z}(\mathds{1}_{K})\} \\[2mm] \mathcal{D}_{x}(\mathcal{Z}) & \df & \bigcap_{K\in \mathcal{K}_{x}}K \end{array} \right. \label{CD} \end{equation} If there is no confusion, we might simply write $\mathcal{K}_{x}$ and $\mathcal{D}_{x}$, respectively. \end{definition} The proof of Theorem~\ref{caract} is based on a characterization of those maps $\mathcal{Z}:\,\mathbb{R}^{\mathcal{V}}\rightarrow \mathcal{P(V)}^{\ast}$ for which there exists a descent modulus $T$ such that $\mathcal{Z}=\mathcal{Z}_{T}$. More precisely, let $\mathcal{Z}\,:\,\mathbb{R}^{\mathcal{V}}\rightarrow \mathcal{P(\mathcal{V})}^{\ast }$ be an \textit{abstract critical map} satisfying the following conditions:\medskip \newline (Z1) for every $f\in \mathbb{R}^{\mathcal{V}}$\ and $r\in \mathbb{R}$: $\mathcal{Z}[f+r]=\mathcal{Z}[f]$\medskip \newline (Z2) for every $f\in \mathbb{R}^{\mathcal{V}}$ and $r>0$:\ $\mathcal{Z}[rf]=\mathcal{Z}[f]$\medskip \newline (Z3) for every $f\in \mathbb{R}^{\mathcal{V}}$\ and $r\in \mathbb{R}$: \ \ $\mathcal{Z}[f]=\left( \mathcal{Z}[\phi _{r}(f)]\cap \lbrack f\leq r]\right) \sqcup \left( \mathcal{Z}[\varphi _{r}(f)]\cap \lbrack f>r]\right) $\smallskip \newline where \begin{equation*} \forall \ r\in \mathbb{R},\,\forall \ s\in \mathbb{R},\qquad \left\{ \begin{array}{rcl} \phi _{r}(s) & \df & r\wedge s \\ \varphi _{r}(s) & \df & r\vee s. \end{array} \right.
\end{equation*} We also assume\medskip \newline (Z4) for every $K\subset \mathcal{V}$ we have: $K^{\mathrm{c}}\subset \mathcal{Z}(\mathds{1}_{K});$ and\medskip \newline (Z5) for every $x\in \mathcal{V}:$ \begin{equation*} \mathcal{K}_{x}=\{K\subset \mathcal{V}\,:\,\mathcal{D}_{x}\subset K\}. \end{equation*} The announced characterization of critical maps is the following: \begin{theorem}[characterization of critical maps] \label{critmap} An abstract critical map $\mathcal{Z}\,:\,\mathbb{R}^{\mathcal{V}}\rightarrow \mathcal{P(\mathcal{V})}^{\ast }$ is associated to some homogeneous descent modulus $T$ (that is, $\mathcal{Z}=\mathcal{Z}_{T}$) if and only if conditions $(Z1)$--$(Z5)$ hold. \newline In this case $\mathcal{Z}=\mathcal{Z}_{T_{\mathcal{D}}}$, where $T_{\mathcal{D}}$ is defined by~\eqref{eq:21} for $\mathcal{D}:=\{\mathcal{D}_{x}\}_{x\in \mathcal{V}}$ constructed in~\eqref{CD}. \end{theorem} The last assertion of Theorem~\ref{critmap} implicitly assumes that $\mathcal{D}\in \mathcal{E}(\mathcal{V})$. The following lemma confirms that this is indeed the case: \begin{lemma}\label{fac} Let $\mathcal{Z}\,:\,\mathbb{R}^{\mathcal{V}}\rightarrow \mathcal{P(\mathcal{V})}^{\ast }$ be an abstract critical mapping that satisfies conditions $(Z1)$--$(Z4)$. Let further $\mathcal{D}:=\{\mathcal{D}_{x}\}_{x\in \mathcal{V}}$ be constructed as in~\eqref{CD}. Then $\mathcal{Z}[\mathds{1}_{\mathcal{V}}]=\mathcal{V}$ and $\mathcal{D}\in \mathcal{E}(\mathcal{V})$. \end{lemma} \begin{proof} Applying (Z4) with $K=\emptyset$, we get $\mathcal{V}\subset \mathcal{Z}(\mathds{1}_{\emptyset })=\mathcal{Z}[\boldsymbol{0}]$, where $\boldsymbol{0}$ denotes the null function on $\mathcal{V}$. We deduce from (Z1) (for $r=1$) that $\mathcal{Z}[\mathds{1}_{\mathcal{V}}]=\mathcal{Z}[\boldsymbol{0}]=\mathcal{V}$.
Recall that the family $\mathcal{D}:=(\mathcal{D}_{x})_{x\in \mathcal{V}}$ belongs to $\mathcal{E}(\mathcal{V})$ if and only if it satisfies \begin{equation*} x\in \mathcal{D}_{x},\,\, \forall x\in \mathcal{V}. \end{equation*} Fix $x\in \mathcal{V}$. Since $\mathcal{Z}[\mathds{1}_{\mathcal{V}}]=\mathcal{V}$, we have $x\in \mathcal{Z}[\mathds{1}_{\mathcal{V}}]$, which in conjunction with $x\in \mathcal{V}$ yields $\mathcal{V}\in \mathcal{K}_{x}$. It follows that $\mathcal{K}_{x}\neq \emptyset $. By definition of $\mathcal{K}_{x}$, for any $K\in \mathcal{K}_{x}$, we have $x\in K$, so that $x\in \bigcap_{K\in \mathcal{K}_{x}}K=\mathcal{D}_{x}$. Therefore $\mathcal{D}\in \mathcal{E}(\mathcal{V})$. \end{proof} Let us postpone for a while the proof of Theorem~\ref{critmap} and show instead that Theorem~\ref{critmap} implies Theorem~\ref{caract}. To this end, let $T$ be a homogeneous descent modulus on $\mathcal{V}$. Then using Theorem~\ref{critmap}, it is easy to see that $T$ is equivalent to $T_{\mathcal{D}}$ for some active neighborhood system $\mathcal{D}\in \mathcal{E}(\mathcal{V})$ provided the following proposition is proven: \begin{proposition} \label{verif} Under Assumption ($\mathcal{H}$), the critical map $\mathcal{Z}_{T}$ satisfies $(Z1)$--$(Z5)$. \end{proposition} \begin{proof} We verify successively that conditions (Z1)--(Z5) hold. Indeed, condition (Z1) comes from the translation invariance property of $T$, see Proposition~\ref{sec03:prop:EquivalentProperties}, while (Z2) is a consequence of the homogeneity assumption for $T$. \smallskip\newline Verifying (Z3) requires some extra work: fix $f\in \mathbb{R}^{\mathcal{V}}$ and $r\in \mathbb{R}$. Since $(Z3)$ holds trivially for constant functions, we may assume that $f$ takes at least two different values.
Then for any $s,s^{\prime }\in \mathbb{R}$ we have \begin{eqnarray*} (\phi _{r}(s)-\phi_r (s^{\prime }))_+ &\leq &(s-s^{\prime })_+ \\ (\varphi _{r}(s)-\varphi_r (s^{\prime }))_+ &\leq &(s-s^{\prime })_+ \end{eqnarray*} and monotonicity yields \begin{eqnarray*} T[\phi _{r}(f)] &\leq &T[f] \\ T[\varphi _{r}(f)] &\leq &T[f]. \end{eqnarray*} Consequently: \begin{equation*} \mathcal{Z}_{T}[f]\subset \mathcal{Z}_{T}[\phi _{r}(f)]\cap \mathcal{Z}_{T}[\varphi _{r}(f)], \end{equation*} so that \begin{align*} \mathcal{Z}_{T}[f] &=\left(\mathcal{Z}_{T}[f]\cap [f\leq r]\right)\sqcup \left(\mathcal{Z}_{T}[f]\cap [f> r]\right)\\ & \subset \left( \mathcal{Z}_{T}[\phi _{r}(f)]\cap \lbrack f\leq r]\right) \sqcup \left( \mathcal{Z}_{T}[\varphi _{r}(f)]\cap \lbrack f>r]\right) . \end{align*} To get the reverse inclusion, consider $x\in \mathcal{V}$ with $f(x)\leq r$, in particular $\phi _{r}(f(x))=f(x)$. For any $z\in \mathcal{V}$ with $\phi _{r}(f(z))<\phi _{r}(f(x))$, we have $\phi _{r}(f(z))=f(z)$, so that \begin{equation*} (\phi _{r}(f(x))-\phi _{r}(f(z)))_{+}=(f(x)-f(z))_{+}. \end{equation*} For any $z\in \mathcal{V}$ with $\phi _{r}(f(z))\geq \phi _{r}(f(x))$, we must have $f(z)\geq f(x)$, thus \begin{equation*} (\phi _{r}(f(x))-\phi _{r}(f(z)))_{+}=0\ =\ (f(x)-f(z))_{+}. \end{equation*} From the monotonicity property, we deduce $T[\phi _{r}(f)](x)=T[f](x)$. These considerations show that \begin{equation} \mathcal{Z}_{T}[\phi _{r}(f)]\cap \lbrack f\leq r]\subset \mathcal{Z}_{T}[f]. \label{ZTsub} \end{equation} Finally, consider $x\in \mathcal{V}$ with $f(x)>r$, in particular $\varphi _{r}(f(x))=f(x)$. If $f(x)=\min f$, then $x$ is a global minimum of $f$, so $T[f](x)=0$ and $x\in \mathcal{Z}_{T}[f]$ directly; we may therefore assume $f(x)>\min f$. Since $f$ is not constant, we can then define \begin{equation*} a:=\frac{f(x)-(r\vee \min f)}{\max f-\min f} > 0.
\end{equation*} Now, on the one hand, for any $z\in \mathcal{V}$ with $\varphi _{r}(f(z))\geq \varphi _{r}(f(x))$, we must have $f(z)\geq f(x)$, thus \begin{equation*} (\varphi _{r}(f(x))-\varphi _{r}(f(z)))_{+}=0\ =\ (f(x)-f(z))_{+} =\ (af(x)-af(z))_{+}. \end{equation*} On the other hand, consider $z\in \mathcal{V}$ with $\varphi _{r}(f(z))<\varphi _{r}(f(x))$. If $f(z)>r$, then $\varphi _{r}(f(z))=f(z)$ and, since $0<a\leq 1$ (because $f(x)\leq \max f$ and $r\vee \min f\geq \min f$), \begin{align*} \varphi _{r}(f(x))-\varphi _{r}(f(z)) = f(x)-f(z) \geq a(f(x)-f(z)). \end{align*} If instead $f(z)\leq r$, then $\varphi _{r}(f(z))=r=r\vee \min f$ and \begin{align*} \varphi _{r}(f(x))-\varphi _{r}(f(z)) = f(x)-(r\vee \min f) = a(\max f-\min f) \geq a(f(x)-f(z)). \end{align*} We deduce that \begin{equation*} \forall \ z\in \mathcal{V},\qquad (\varphi _{r}(f(x))-\varphi _{r}(f(z)))_{+}\geq (af(x)-af(z))_{+}, \end{equation*} and monotonicity yields $T[\varphi _{r}(f)](x)\geq T[af](x)$, which equals $aT[f](x)$ by homogeneity. It follows that \begin{equation*} \mathcal{Z}_{T}[\varphi _{r}(f)]\cap \lbrack f>r]\subset \mathcal{Z}_{T}[f]. \end{equation*} Combining with \eqref{ZTsub}, we get the reverse inclusion \begin{equation*} \mathcal{Z}_{T}[f]\supset \left( \mathcal{Z}_{T}[\phi _{r}(f)]\cap \lbrack f\leq r]\right) \sqcup \left( \mathcal{Z}_{T}[\varphi _{r}(f)]\cap \lbrack f>r]\right), \end{equation*} therefore (Z3) holds. \smallskip\newline Condition (Z4) is a consequence of the preservation of global minima, since the set of global minima of $\mathds{1}_{K}$ coincides with $K^{\mathrm{c}}$. \smallskip\newline It remains to show (Z5). Set $$\widetilde{\mathcal{K}_x} =\{K\subset \mathcal{V}:\,\mathcal{D}_x\subset K\}.$$ Then for every $K\in\mathcal{K}_x$ we have $\mathcal{D}_x\subset K$, that is, $K\in\widetilde{\mathcal{K}_x}$; hence $\mathcal{K}_x\subset\widetilde{\mathcal{K}_x}$. To prove the reverse inclusion, consider $K\subset \mathcal{V}$ with $\mathcal{D}_{x}\subset K$. We need to verify that $K\in \mathcal{K}_{x}$. Since $x\in \mathcal{D}_{x}$, we get $x\in K$.
Furthermore, for every $z\in \mathcal{V}$ we have: \begin{equation*} \left (\mathds{1}_{\!K}(x)-\mathds{1}_{\!K}(z)\right )_{+}=1-\mathds{1}_{\!K}(z)\leq 1-\mathds{1}_{\!\mathcal{D}_{x}}(z)=(\mathds{1}_{\!\mathcal{D}_{x}}(x)-\mathds{1}_{\!\mathcal{D}_{x}}(z))_{+} \end{equation*} and thus by monotonicity, $T[\mathds{1}_{\!K}](x)\leq T[\mathds{1}_{\!\mathcal{D}_{x}}](x)=0$, where the last equality is obtained via~($\mathcal{H}$). It follows that $x\in \mathcal{Z}_{T}(\mathds{1}_{\!K})$, whence $K\in \mathcal{K}_{x}$. \smallskip\newline The proof is complete. \end{proof} For $\mathcal{D}\in \mathcal{E}(\mathcal{V})$, denote for simplicity by $\mathcal{Z}_{\mathcal{D}}$ the critical map $\mathcal{Z}_{T_{\mathcal{D}}}$ associated to the homogeneous descent modulus $T_{\mathcal{D}}$. The following lemma shows how to recover the active neighborhood system $\mathcal{D}=\{\mathcal{D}_{x}\}_{x}$ from $\mathcal{Z}_{\mathcal{D}}$: \begin{lemma} \label{cKD} For any $x\in \mathcal{V}$, we have \begin{equation*} \mathcal{D}_{x}=\bigcap_{K\in \mathcal{K}_{x}}K \end{equation*} where \begin{equation*} \mathcal{K}_{x}:=\{K\subset \mathcal{V}\,:\,x\in K\cap \mathcal{Z}_{\mathcal{D}}(\mathds{1}_{\!K})\}. \end{equation*} \end{lemma} \begin{proof} For any $f\in\mathbb{R}^\mathcal{V}$, recall that \begin{eqnarray*} \mathcal{Z}_{\mathcal{D}}[f]&=&\{x\in \mathcal{V}\,:\, T_{\mathcal{D}}[f](x)=0\} \,=\, \{x\in \mathcal{V}\,:\, \max_{y\in \mathcal{D}_x}(f(x)-f(y))_+=0\}\\ &=&\{x\in \mathcal{V}\,:\, \forall\ y\in \mathcal{D}_x,\, f(y)\geq f(x)\}. \end{eqnarray*} \par In particular, taking $f=\mathds{1}_{K}$ with $K\in\mathcal{P}(\mathcal{V})^*$, we get \[ \mathcal{Z}_{\mathcal{D}}(\mathds{1}_{\!K})=\{x\in \mathcal{V}\,:\, \forall\ y\in \mathcal{D}_x,\, \mathds{1}_{K}(y)\geq \mathds{1}_{K}(x)\} =\{x\in K\,:\, \mathcal{D}_x\subset K\} \cup K^c. \] \par Fix $x\in \mathcal{V}$ and consider $K\in\mathcal{K}_x$.
Since $x\in K$ and $x\in \mathcal{Z}_{\mathcal{D}}(\mathds{1}_{\!K})$, we deduce that $\mathcal{D}_x\subset K$. It follows that \begin{eqnarray*} \mathcal{D}_x&\subset&\bigcap_{K\in\mathcal{K}_x} K.\end{eqnarray*} \par To get the reverse inclusion, it is sufficient to check that $\mathcal{D}_x\in\mathcal{K}_x$. Note that \[ \mathcal{Z}_{\mathcal{D}}[\mathds{1}_{\mathcal{D}_x}]=\{y\in \mathcal{D}_x:\, \mathcal{D}_y\subset \mathcal{D}_x\}\cup \mathcal{D}_x^c. \] Therefore, $x\in\mathcal{Z}_\mathcal{D}[\mathds{1}_{\mathcal{D}_x}]$. Since we also have $x\in \mathcal{D}_x$, we deduce that $\mathcal{D}_x\in\mathcal{K}_x$. \end{proof} \medskip The above lemma justifies the introduction in \eqref{CD}, by analogy with \eqref{eq:22}, of the objects $\mathcal{K}_{x}$ and $\mathcal{D}_{x}$, $x\in \mathcal{V}$, for an arbitrary mapping $\mathcal{Z}:\,\mathbb{R}^{\mathcal{V}}\rightarrow \mathcal{P}(\mathcal{V})^{\ast }$. Denote by $\mathcal{\hat{Z}}$ the set of mappings $\mathcal{Z}\,:\,\mathbb{R}^{\mathcal{V}}\rightarrow \mathcal{P}(\mathcal{V})^{\ast }$ satisfying $\mathcal{Z}[\mathds{1}_{\mathcal{V}}]=\mathcal{V}$ and by $\mathcal{\hat{Z}}_{\mathcal{E}}$ the set of critical maps $\mathcal{Z}_{\mathcal{D}}$ associated to $T_{\mathcal{D}}$ with $\mathcal{D}\in \mathcal{E}(\mathcal{V})$. Let $\mathcal{Q}$ be the mapping $\mathcal{\hat{Z}}\ni \mathcal{Z}\mapsto \mathcal{Z}_{\mathcal{D}}\in \mathcal{\hat{Z}}_{\mathcal{E}}$ considered above. Lemma~\ref{cKD} shows that $\mathcal{Q}^{2}=\mathcal{Q}$, that is, $\mathcal{Q}$ is a kind of non-linear projection.\smallskip Let us show that in Proposition~\ref{verif} we do not need to assume ($\mathcal{H}$) if the critical map is of the form $\mathcal{Z}_\mathcal{D}$: \begin{lemma} \label{vD} For any $\mathcal{D}\in \mathcal{E}(\mathcal{V})$, $\mathcal{Z}_{\mathcal{D}}$ satisfies $(\mathcal{H})$ and thus $(Z5)$.
\end{lemma} \begin{proof} Thanks to Lemma~\ref{cKD}, the family $\{\bigcap_{K\in \mathcal{K}_{x}}K\}_{x\in \mathcal{V}}$ constructed in \eqref{CD} coincides with the active neighborhood system $\mathcal{D}=\{\mathcal{D}_{x}\}_{x\in \mathcal{V}}$ in the definition of $\mathcal{Z}_{\mathcal{D}}$. Thus to check $(\mathcal{H})$, it suffices to show that $x\in \mathcal{Z}_{\mathcal{D}}[\mathds{1}_{\mathcal{D}_{x}}]$, for any $x\in \mathcal{V}$, or equivalently $T_{\mathcal{D}}[\mathds{1}_{\!\mathcal{D}_{x}}](x)=0$. A direct computation gives: \begin{equation*} T_{\mathcal{D}}[\mathds{1}_{\!\mathcal{D}_{x}}](x)=\max_{z\in \mathcal{D}_{x}}\,(\mathds{1}_{\!\mathcal{D}_{x}}(x)-\mathds{1}_{\!\mathcal{D}_{x}}(z))_{+}=0. \end{equation*} Condition (Z5) then follows from Proposition~\ref{verif}. \end{proof} Here is the first step towards Theorem~\ref{critmap}: \begin{proposition} \label{prop:ReductionToSets} Let $\mathcal{Z}:\,\mathbb{R}^{\mathcal{V}}\rightarrow \mathcal{P(\mathcal{V})}^{\ast }$ be a mapping satisfying $(Z1)$--$(Z3)$ and $\mathcal{Z}[\mathds{1}_{\mathcal{V}\!}]=\mathcal{V}$. Let $\mathcal{D}$ be constructed as in \eqref{CD} and let $\mathcal{Z}_{\mathcal{D}}$ be the critical map associated to $T_{\mathcal{D}}$ given in \eqref{eq:21}. Assume that \begin{equation*} \forall \ K\subset \mathcal{V},\qquad \mathcal{Z}(\mathds{1}_{K})=\mathcal{Z}_{\mathcal{D}}(\mathds{1}_{K}). \end{equation*} Then we have $\mathcal{Z}=\mathcal{Z}_{\mathcal{D}}$. \end{proposition} \begin{proof} From Proposition~\ref{verif} and Lemma~\ref{fac}, $\mathcal{Z}_{\mathcal{D}}$ also satisfies (Z1)--(Z3) and $\mathcal{Z}_{\mathcal{D}}[\mathds{1}_{\mathcal{V}\!}]=\mathcal{V}$. Let $f\in \mathbb{R}^{\mathcal{V}}$. We prove that $\mathcal{Z}[f]= \mathcal{Z}_{\mathcal{D}}[f]$ by induction on the number $n\in \mathbb{N}$ of values taken by $f$. $\bullet $ We begin with the case where $n=1$, that is, $f$ is constant. Denote by $a\in \mathbb{R}$ the value of $f$.
Taking into account condition (Z1) and the fact that $\mathcal{Z}[\mathds{1}_{\mathcal{V}\!}]=\mathcal{V}$, we obtain \begin{equation*} \mathcal{Z}[f]=\mathcal{Z}[f-a+1]=\mathcal{Z}[\mathds{1}_{\mathcal{V}}]=\mathcal{Z}_{\mathcal{D}}[\mathds{1}_{\mathcal{V}}]=\mathcal{V}=\mathcal{Z}_{\mathcal{D}}[f]. \end{equation*} $\bullet $ Consider the case where $n=2$ and let $f(\mathcal{V})=\{a,b\}$ with $a<b$. Set $K:=[f=b]$. Using (Z1) and (Z2), we get \begin{eqnarray*} \mathcal{Z}[f] =\mathcal{Z}\left[ \frac{f-a}{b-a}\right] =\mathcal{Z}(\mathds{1}_{K})=\mathcal{Z}_{\mathcal{D}}(\mathds{1}_{K})= \mathcal{Z}_{\mathcal{D}}[f]. \end{eqnarray*} $\bullet $ Consider the case where $n>2$, assuming that $\mathcal{Z}[g]= \mathcal{Z}_{\mathcal{D}}[g]$ for all $g\in \mathbb{R}^{\mathcal{V}}$ taking at most $n-1$ values. Write $f_{1}<f_{2}<\cdots <f_{n}$ for the values taken by $f$. Take $k=\lfloor\frac{n+1}{2}\rfloor$ (integer part), set $r=f_{k}$ and \begin{eqnarray*} g_{-} &\df&\phi _{r}\circ f \\ g_{+} &\df&\varphi _{r}\circ f \end{eqnarray*} By the choice of $r$, both $g_-$ and $g_+$ take at most $n-1$ values. Condition (Z3) then yields \begin{eqnarray*} \mathcal{Z}[f] &=&\left( \mathcal{Z}[g_{-}]\cap \lbrack f\leq r]\right) \sqcup \left( \mathcal{Z}[g_{+}]\cap \lbrack f>r]\right) \\ &=&\left( \mathcal{Z}_{\mathcal{D}}[g_{-}]\cap \lbrack f\leq r]\right) \sqcup \left( \mathcal{Z}_{\mathcal{D}}[g_{+}]\cap \lbrack f>r]\right) = \mathcal{Z}_{\mathcal{D}}[f], \end{eqnarray*} as desired. \end{proof} Having established Proposition~\ref{prop:ReductionToSets}, the following result finishes the proof of Theorem~\ref{critmap}: \begin{proposition} Let $\mathcal{Z}:\,\mathbb{R}^{\mathcal{V}}\rightarrow \mathcal{P(\mathcal{V})}^{\ast }$ be a mapping satisfying $(Z1)$--$(Z5)$ and let $\mathcal{D}=(\mathcal{D}_{x})_{x\in \mathcal{V}}\in \mathcal{E}(\mathcal{V})$ be as in~\eqref{CD}.
Then we have \begin{equation*} \forall \ K\subset \mathcal{V}:\qquad \mathcal{Z}[\mathds{1}_{K}]=\mathcal{Z}_{\mathcal{D}}[\mathds{1}_{K}]. \end{equation*} \end{proposition} \begin{proof} From (Z4), we have \begin{equation*} K^{\mathrm{c}}\cap \mathcal{Z}(\mathds{1}_{K})=K^{\mathrm{c}}=K^{\mathrm{c}}\cap \mathcal{Z}_{\mathcal{D}}(\mathds{1}_{K}). \end{equation*} Now, let $K^{\prime }=\{x\in K\ :\ \mathcal{D}_{x}\subset K\}$. For every $x\in K^{\prime }$, due to (Z5), we have $K\in \mathcal{K}_{x}$, so $x\in \mathcal{Z}[\mathds{1}_{K}]$. Since this is true for any $x\in K^{\prime }$, we get $K^{\prime }\subset \mathcal{Z}[\mathds{1}_{K}]$. According to Lemma~\ref{vD}, $\mathcal{Z}_{\mathcal{D}}$ also satisfies (Z5), and it follows as above that $K^{\prime }\subset \mathcal{Z}_{\mathcal{D}}[\mathds{1}_{K}]$. We deduce \begin{equation*} K^{\prime }\cap \mathcal{Z}(\mathds{1}_{K})=K^{\prime }\ =\ K^{\prime }\cap \mathcal{Z}_{\mathcal{D}}(\mathds{1}_{K}). \end{equation*} Finally, note that $K\diagdown K^{\prime }=\{x\in K\ :\ \mathcal{D}_{x}\setminus K\neq \emptyset \}$. For every $x\in K\diagdown K^{\prime }$, since $\mathcal{D}_{x}\not\subset K$, we have $K\notin \mathcal{K}_{x}$, due to the definition of $\mathcal{K}_{x}$ in \eqref{CD}. Since $x\in K$, the only possibility is that $x\notin \mathcal{Z}(\mathds{1}_{K})$. The same reasoning applies to $\mathcal{Z}_{\mathcal{D}}$ (recalling Lemma~\ref{cKD}) and we get \begin{equation*} (K\diagdown K^{\prime })\cap \mathcal{Z}(\mathds{1}_{K})=\emptyset \ =\ (K\diagdown K^{\prime })\cap \mathcal{Z}_{\mathcal{D}}(\mathds{1}_{K}). \end{equation*} Since $\mathcal{V}=K^{\mathrm{c}}\sqcup K^{\prime }\sqcup( K\diagdown K^{\prime })$, we conclude that \begin{equation*} \mathcal{Z}(\mathds{1}_{K})=K^{\mathrm{c}}\sqcup K^{\prime }=\mathcal{Z}_{\mathcal{D}}(\mathds{1}_{K}), \end{equation*} finishing the proof. \end{proof} \begin{remark} Note that (Z5) was only used to prove that $K^{\prime }\subset \mathcal{Z}(\mathds{1}_{K})$.
Thus, when (Z5) is not verified, the constructed modulus of descent $T_{\mathcal{D}}$ might enlarge the critical map $\mathcal{Z}_{\mathcal{D}}$ with respect to $\mathcal{Z}$, as illustrated by Example~\ref{exafin} below. \end{remark} The following two examples show that homogeneity and~(Z5) are necessary assumptions. \begin{example}[A non-homogeneous descent modulus failing (Z2)] Let $\varepsilon >0$ and consider the operator $T_{\varepsilon }:\mathbb{R}^{\mathcal{V}}\rightarrow \mathbb{R}_{+}^{\mathcal{V}}$ given by \[ T_{\varepsilon }[f](x) \df \left\{ \begin{array}{cl} f(x)-\min f & \text{ if }f(x)>\min f+\varepsilon \\[2mm] 0 & \text{ if }f(x)\leq \min f+\varepsilon . \end{array} \right. \,\,=\,\,\phi _{\varepsilon }(f(x)-\min f) \] where the mapping $\phi _{\varepsilon }$ is defined for $r\geq 0$ by \begin{equation*} \phi _{\varepsilon }(r)\df\left\{ \begin{array}{ll} 0,\; & \text{if }r\in \lbrack 0,\varepsilon ] \\ r,\; & \text{if }r>\varepsilon. \end{array} \right. \end{equation*} We claim that $T_{\varepsilon }$ is a descent modulus. \begin{itemize} \item Let $f\in \mathbb{R}^{\mathcal{V}}$. For every $x\in \argmin f$, we have that $T_{\varepsilon }[f](x)=0$, and so $T_{\varepsilon }$ preserves global minima. \item Let $f,g\in \mathbb{R}^{\mathcal{V}}$ and $x\in \mathcal{V}$ such that \begin{equation*} (f(x)-f(z))_{+}\geq (g(x)-g(z))_{+},\quad \forall z\in \mathcal{V}. \end{equation*} On the one hand, if $f(x)\leq \min f + \varepsilon$, we have that \[ \varepsilon\geq (f(x)-f(z))_{+}\geq (g(x)-g(z))_{+},\quad \forall z\in \mathcal{V}, \] and so, $g(x)\leq \min g + \varepsilon$ as well. Then $T_{\varepsilon }[f](x)=0=T_{\varepsilon }[g](x)$. On the other hand, if $f(x)> \min f + \varepsilon$, by taking $z^{\ast }\in \argmin g$, we have that \begin{align*} T_{\varepsilon }[f](x)& =f(x)-\min f\geq (f(x)-f(z^{\ast }))_{+} \\ & \geq (g(x)-g(z^{\ast }))_{+}=g(x)-\min g \geq T_{\varepsilon }[g](x). \end{align*} Thus, $T_{\varepsilon }$ is monotone.
\item Let $f\in \mathbb{R}^{\mathcal{V}}$ and $r>1$. Then, for every $x\in \mathcal{V}$, \begin{equation*} T_{\varepsilon }[f](x)>0\implies \varepsilon \leq T_{\varepsilon }[f](x)=f(x)-\min f<r(f(x)-\min f)=rf(x)-\min rf=T_{\varepsilon }[rf](x). \end{equation*} Thus, $T_{\varepsilon }$ is scalar-monotone. \end{itemize} Then, by definition, $T_{\varepsilon }$ is a descent modulus. However, choose $f\in \mathbb{R}^{\mathcal{V}}$ such that $\alpha =\max f-\min f>\varepsilon $, and choose $r=\frac{\varepsilon }{\alpha }$. Then $\mathcal{Z}_{T_{\varepsilon }}(f)\neq \mathcal{V}$ but \begin{equation*} \forall x\in \mathcal{V},\quad rf(x)-\min rf\leq r(\max f-\min f)=\varepsilon \implies \mathcal{Z}_{T_{\varepsilon }}(rf)=\mathcal{V}, \end{equation*} so that (Z2) fails. \hfill$\diamond$ \end{example} \begin{example}[A family of homogeneous moduli of descent failing~(Z5)] \label{exafin} Let $\mathcal{D}^{\prime }=(\mathcal{D}_{x}^{\prime })_{x\in \mathcal{V}}\in \mathcal{E}(\mathcal{V})$ be such that there exists $\bar{x}\in \mathcal{V}$ with $|\mathcal{D}_{\bar{x}}^{\prime }|\geq 3$, and let $T:\mathbb{R}^{\mathcal{V} }\rightarrow \mathbb{R}_{+}^{\mathcal{V}}$ be the operator given by \begin{equation*} T[f](x)\df\left\{ \begin{array}{cl} \left( f(x)-\displaystyle\max_{y\in \mathcal{D}_{x}^{\prime }\setminus \{x\}}f(y)\right) _{+} & \text{ if }\mathcal{D}_{x}^{\prime }\neq \{x\} \\ 0 & \text{ if }\mathcal{D}_{x}^{\prime }=\{x\}. \end{array} \right. \end{equation*} Clearly $T$ is homogeneous and preserves global minima. Let us prove that $T$ is monotone: let $f,g\in \mathbb{R}^{\mathcal{V}}$ and $x\in \mathcal{V}$ such that \begin{equation*} (f(x)-f(z))_{+}\geq (g(x)-g(z))_{+},\quad \forall z\in \mathcal{V}. \end{equation*} If $\mathcal{D}_{x}^{\prime }=\{x\}$, then $T[f](x)=0\geq 0=T[g](x)$. If $\mathcal{D}_{x}^{\prime }\neq \{x\}$, then there exists $y^{\ast }\in \mathcal{D}_{x}^{\prime }\setminus \{x\}$ such that $f(y^{\ast })=\max_{y\in \mathcal{D}_{x}^{\prime }\setminus \{x\}}f(y)$.
Then, \begin{equation*} T[f](x)=(f(x)-f(y^{\ast }))_{+}\geq (g(x)-g(y^{\ast }))_{+}\geq \left( g(x)-\max_{y\in \mathcal{D}_{x}^{\prime }\setminus \{x\}}g(y)\right) _{+}=T[g](x). \end{equation*} Thus, $T$ is monotone and therefore it is a descent modulus. Now, let $(\mathcal{K}_{x})_{x\in \mathcal{V}}$ and $\mathcal{D}=(\mathcal{D}_{x})_{x\in \mathcal{V}}$ be constructed as in \eqref{CD} for $\mathcal{Z}_{T}$. Then: \begin{itemize} \item If $\mathcal{D}_{x}^{\prime }=\{x\}$, then $\mathcal{K}_{x}=\{K\in \mathcal{P}(\mathcal{V})^*\ :\ x\in K\}$ and so $\mathcal{D}_{x}=\mathcal{D}_{x}^{\prime }$. \item If $\mathcal{D}_{x}^{\prime }=\{x,y\}$ for some $y\neq x$, then $\mathcal{K}_{x}=\{K\subset \mathcal{V}\,:\,\{x,y\}\subset K\}$. Indeed, consider $K\in \mathcal{K}_{x}$: we have $x\in K$ and $x\in \mathcal{Z}_{T}(\mathds{1}_{\!K})$. We compute \begin{equation*} T(\mathds{1}_{\!K})(x)=(\mathds{1}_{\!K}(x)-\mathds{1}_{\!K}(y))_{+}=1-\mathds{1}_{\!K}(y), \end{equation*} so for this expression to vanish, we must have $y\in K$. Conversely, if $\{x,y\}\subset K$, then $x\in K$ and \begin{equation*} T(\mathds{1}_{\!K})(x)=(\mathds{1}_{\!K}(x)-\mathds{1}_{\!K}(y))_{+}=0, \end{equation*} so $K\in \mathcal{K}_{x}$. We deduce $\mathcal{D}_{x}=\{x,y\}$ and so $\mathcal{D}_{x}=\mathcal{D}_{x}^{\prime }$. \item If $|\mathcal{D}_{x}^{\prime }|\geq 3$, then there are $y,z\in \mathcal{D}_{x}^{\prime }\setminus \{x\}$ with $y\neq z$, and as above $\{x,y\},\{x,z\}\in \mathcal{K}_{x}$. Thus $\mathcal{D}_{x}\subset \{x,y\}\cap \{x,z\}=\{x\}$, and since $x\in \mathcal{D}_{x}$ it follows that $\mathcal{D}_{x}=\{x\}\neq \mathcal{D}_{x}^{\prime }$. Furthermore, $\{x\}\notin \mathcal{K}_{x}$ since $T[\mathds{1}_{\!\{x\}}](x)=1$.
\end{itemize} Thus, since $|\mathcal{D}_{\bar{x}}^{\prime }|\geq 3$, the critical map $\mathcal{Z}_{T}$ fails (Z5).\hfill$\diamond$ \end{example} \noindent\rule{4cm}{1.2pt} \medskip \textbf{Acknowledgement} A major part of this work has been realized during a research stay of the first and third authors at the Toulouse School of Economics, France (May 2022). These two authors wish to thank J. Bolte and the host institute for their hospitality. The first author also acknowledges support from the Austrian Science Fund (FWF, P-36344-N). \begin{center} \noindent\rule{4cm}{1.4pt} \end{center}
https://arxiv.org/abs/2302.12355
Fundamental Bounds on Online Strategic Classification
We study the problem of online binary classification where strategic agents can manipulate their observable features in predefined ways, modeled by a manipulation graph, in order to receive a positive classification. We show this setting differs in fundamental ways from non-strategic online classification. For instance, whereas in the non-strategic case, a mistake bound of $\ln|H|$ is achievable via the halving algorithm when the target function belongs to a known class $H$, we show that no deterministic algorithm can achieve a mistake bound $o(\Delta)$ in the strategic setting, where $\Delta$ is the maximum degree of the manipulation graph (even when $|H|=O(\Delta)$). We obtain an algorithm achieving mistake bound $O(\Delta\ln|H|)$. We also extend this to the agnostic setting and obtain an algorithm with a $\Delta$ multiplicative regret, and we show no deterministic algorithm can achieve $o(\Delta)$ multiplicative regret. Next, we study two randomized models based on whether the random choices are made before or after agents respond, and show they exhibit fundamental differences. In the first model, at each round the learner deterministically chooses a probability distribution over classifiers inducing expected values on each vertex (probabilities of being classified as positive), which the strategic agents respond to. We show that any learner in this model has to suffer linear regret. On the other hand, in the second model, while the adversary who selects the next agent must respond to the learner's probability distribution over classifiers, the agent then responds to the actual hypothesis classifier drawn from this distribution. Surprisingly, we show this model is more advantageous to the learner, and we design randomized algorithms that achieve sublinear regret bounds against both oblivious and adaptive adversaries.
\section{Supplementary Materials} \subsection{Proof of~\Cref{thm:biased-weighted-maj-vote-mistake-bound}} \medskip \noindent\textbf{\Cref{thm:biased-weighted-maj-vote-mistake-bound}} (Restated)\textbf{.}\emph{ \Cref{alg:biased-weighted-maj-vote} makes at most $e(\Delta+2)(\ln|\mathcal{H}|+\mathsf{OPT})$ mistakes against any adversary. } \medskip \begin{proof} To begin with, we show that if a mistake is made in round $t$, then the weights get updated such that $W_{t+1}\leq W_t\big(1-\gamma/(\Delta+2)\big)$. Moreover, the algorithm penalizes an expert only if that expert made a mistake; in other words, the algorithm never over-penalizes experts that do not make a mistake. First, suppose a mistake is made on a true negative. In this case, $v_t$ is labeled as positive by $h_t$, so the total weight of experts predicting positive on $v_t$ is at least $W_t/(\Delta+2)$, and each of their weights is decreased by a factor of $\gamma$. As a result, we have $W_{t+1}\leq W_t\big(1-\gamma/(\Delta+2)\big)$. Moreover, for each classifier $h$ that gets penalized, we have $h(v_t)=+1$, so $v_t$ belongs to the positive region $S_h$, which implies that the initial node $u_t\in N[v_t]$ is able to reach the positive region $S_h$. Therefore, our previous observation indicates that $u_t$ would have ended up being predicted as positive had it best responded to $h$, so $h$ has also made a mistake. Next, consider the case of making a mistake on a true positive. Similar to the realizable case, we argue that the agent has not moved from a different location to $v_t$ to get classified as negative, so $v_t=u_t$. Since the agent did not move, none of the vertices in $N[v_t]$ was labeled positive by the algorithm, implying that for each $x\in N[v_t]$, the total weight of experts labeling $x$ as positive is less than $W_t/(\Delta+2)$.
Therefore, taking the union over all $x\in N[v_t]$, we conclude that the total weight of experts predicting negative on all $x\in N[v_t]$ is at least $W_t\Big(1-(\Delta+1)/(\Delta+2)\Big) = W_t/(\Delta+2)$. All these experts are making a mistake, as $v_t=u_t$ cannot reach the positive region of any of them, so they all end up classifying agent $u_t$ as negative. As a result, the algorithm cuts their weight by a factor of $\gamma$, resulting in $W_{t+1}\leq W_t-(\gamma W_t)/(\Delta+2)$. Let $M=\mathsf{Mistake}(T)$ denote the number of mistakes made by the algorithm. Since the initial weights are all set to 1, we have $W_0=|\mathcal{H}|$. Together with the property that $W_{t+1}\leq W_t\left(1-\frac{\gamma}{\Delta+2}\right)$ on each mistake, we have $W_T\leq |\mathcal{H}|\left(1-\frac{\gamma}{\Delta+2}\right)^M$. Now we show that $W_T\ge \gamma^{\mathsf{OPT}}$. We have proved that whenever the algorithm decreases the weight of an expert, that expert must have made a mistake. However, it can be the case that an expert makes a mistake but the algorithm does not detect it. In other words, the algorithm may under-penalize an expert, but it never over-penalizes. Let $h^\star\in\mathcal{H}$ denote the best expert, achieving the minimum number of mistakes $\mathsf{OPT}$. Suppose the algorithm detects $q$ of the rounds where $h^\star$ makes a mistake; then $q\leq \mathsf{OPT}$. Therefore, after $T$ rounds, $W_T\ge w_T(h^\star)=\gamma^{q}\geq \gamma^{\mathsf{OPT}}$, since $0<\gamma\leq 1$.
Finally, we have: \begin{align*} &\gamma^{\mathsf{OPT}}\leq W_T\leq |\mathcal{H}|\left(1-\frac{\gamma}{\Delta+2}\right)^M\\ \Rightarrow\ &\mathsf{OPT}\cdot\ln{\gamma}\leq \ln{|\mathcal{H}|}+M\ln{\Big(1-\frac{\gamma}{\Delta+2}\Big)}\leq \ln{|\mathcal{H}|}-M\frac{\gamma}{\Delta+2}\\ \Rightarrow\ & M\leq \frac{\Delta+2}{\gamma}\ln{|\mathcal{H}|}-\frac{\ln{\gamma}(\Delta+2)}{\gamma}\mathsf{OPT}\\ \end{align*} By setting $\gamma=1/e$, we bound the total number of mistakes as $M\leq e(\Delta+2)(\ln{|\mathcal{H}|}+\mathsf{OPT})$. \end{proof} \subsection{Improving the Upper Bound} \label{sec:improving-upper-bound} In this section, we propose a pre-processing step to improve the mistake bound of~\Cref{alg:halving} in some cases, depending on the structure of the underlying manipulation graph. We leave it open to get a general mistake bound that depends on other characteristics of the manipulation graph besides the maximum degree. Consider the case where the manipulation graph $G(\mathcal{X}, \mathcal{E})$ is a complete graph, and the hypothesis class $\mathcal{H}$ includes all possible labelings of $\mathcal{X}$, i.e. $|\mathcal{H}|=2^{|\mathcal{X}|}$. However,~\Cref{prop:effective-hypothesis-class-complete-graphs} shows that all the examples $(u_t,y_t)$ arriving over time get labeled the same: either all positive or all negative. Therefore, the size of the \emph{effective} hypothesis class is $2$. \begin{proposition} \label{prop:effective-hypothesis-class-complete-graphs} If the manipulation graph $G(\mathcal{X},\mathcal{E})$ is a complete undirected graph, then all the examples arriving over time are labeled the same, i.e. all positive or all negative. \end{proposition} \begin{proof} Consider a hypothesis $h$ that labels at least one node $v\in \mathcal{X}$ as positive. Then any example $u_t$ arriving at time-step $t$ can reach $v$ and get classified as positive. Hence, $h$ classifies all the examples as positive. 
On the other hand, if $h$ labels all the nodes $v\in \mathcal{X}$ as negative, then it would classify all the examples arriving over time as negative. \end{proof} \Cref{alg:halving} has a mistake bound of $\mathcal{O}(\Delta \ln|\mathcal{H}|)$ in the realizable case. However, when the manipulation graph is complete, we can get a mistake bound of $1$ as follows: initially starting with an all-positive classifier, if a mistake happens, switch to an all-negative classifier. The case of complete graphs shows that, depending on the underlying manipulation graph, there can be a large gap between the upper bound given by~\Cref{alg:halving} and the best achievable bound. ~\Cref{alg:improvement-halving} is a pre-processing step to narrow this gap. \begin{algorithm} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetNoFillComment \Input{$G(\mathcal{X},\mathcal{E})$, hypothesis class $\mathcal{H}$} \For{$t=1,\cdots,T$}{ Commit to $h_t$ that labels all nodes as positive\; \tcc{Observe $(v_t,y_t)$} \If{$y_t\neq h_t(v_t)$}{\tcc{when the first mistake happens, remove all the hypotheses that make a mistake} $\mathcal{H}'\gets\mathcal{H}\setminus \{h:\ \exists v\in N[v_t],\ h(v)=+1\}$\; Break\; } } Run~\Cref{alg:halving} on $(G, \mathcal{H}')$\; \caption{A pre-processing step to improve the mistake bound of~\Cref{alg:halving}} \label[algo]{alg:improvement-halving} \end{algorithm} \Cref{alg:improvement-halving} initially starts with an all-positive classifier. When the first mistake happens on a node $v_t$, it means that $v_t$ and all its neighbors need to be classified as negative. Hence, we exclude from $\mathcal{H}$ all the hypotheses $h\in \mathcal{H}$ that classify any node $v\in N[v_t]$ as positive. After filtering $\mathcal{H}$, we run~\Cref{alg:halving} on the new set $\mathcal{H}'$. We now restate the guarantee of \Cref{alg:improvement-halving} that we presented in \Cref{thm:mistake-bound-improved-halving}, and show its proof.
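As a toy illustration of this pre-processing step, the following sketch filters the hypothesis class after the first mistake. The encoding is an assumption for illustration only: a hypothesis is the set of nodes it labels positive, and the graph is an adjacency dictionary.

```python
from itertools import combinations

# Hypothetical encoding: a hypothesis h is the frozenset of nodes it labels
# positive; G is an adjacency dict. After the first mistake at v_t under the
# all-positive classifier, discard every h labeling a node of N[v_t] positive.

def closed_neighborhood(G, v):
    return {v} | set(G[v])

def preprocess(G, H, v_t):
    Nv = closed_neighborhood(G, v_t)
    # keep only hypotheses labeling all of N[v_t] negative
    return [h for h in H if not (h & Nv)]

# complete graph on 4 nodes; H = all 2^4 labelings
X = {0, 1, 2, 3}
G = {v: X - {v} for v in X}
H = [frozenset(s) for r in range(len(X) + 1) for s in combinations(sorted(X), r)]
H_prime = preprocess(G, H, v_t=0)
# On a complete graph N[v_t] = X, so only the all-negative hypothesis
# survives, matching the mistake bound of 1 discussed above.
print(len(H), len(H_prime))  # 16 1
```

On a complete graph the filtered class collapses to a single hypothesis, which is the extreme case of the gap the pre-processing step exploits.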
\medskip \noindent\textbf{\Cref{thm:mistake-bound-improved-halving}} (Restated)\textbf{.}\emph{ \Cref{alg:improvement-halving} makes at most $\min\{n-\delta, 1+\Delta\cdot \min\{\ln|\mathcal{H}|, n-\delta-1\}\}$ mistakes, where $n=|\mathcal{X}|$ and $\delta$ is the minimum degree of $G(\mathcal{X},\mathcal{E})$. } \medskip \begin{proof} After the first mistake happens on $v_t$,~\Cref{alg:improvement-halving} only keeps the hypotheses that label all the nodes in $N[v_t]$ as negative. Since $\big|N[v_t]\big|\geq \delta+1$, the number of such hypotheses is at most $2^{n-(\delta+1)}$. Therefore, the filtered hypothesis set $\mathcal{H}'$ satisfies $|\mathcal{H}'|\leq \min\{|\mathcal{H}|, 2^{n-\delta-1}\}$. Hence, the number of mistakes that~\Cref{alg:halving} makes on the filtered hypothesis set is at most: \[1+\Delta\cdot\ln|\mathcal{H}'|\leq 1+\Delta\cdot\min\{\ln|\mathcal{H}|, \ln(2^{n-\delta-1})\} \leq 1+\Delta\cdot\min\{\ln|\mathcal{H}|, n-\delta-1\}\] Alternatively, the number of mistakes is also at most $n-\delta$: after the first mistake happens on $v_t$ and the labels of $N[v_t]$ get flipped to negative, the labels of the at most $n-\delta-1$ remaining nodes in $G\setminus N[v_t]$ can get flipped one by one whenever a mistake is observed. Therefore, the total number of mistakes is at most: \[\min\{n-\delta, 1+\Delta\cdot \min\{\ln|\mathcal{H}|, n-\delta-1\}\}\] \end{proof} \begin{remark} When the manipulation graph is dense, the mistake bound in \Cref{thm:mistake-bound-improved-halving} can greatly outperform that given in \Cref{thm:baseline-realizable-upper-bound}. For instance, in complete graphs where both the minimum degree and the maximum degree are $n-1$, \Cref{thm:mistake-bound-improved-halving} guarantees that \Cref{alg:improvement-halving} makes at most one mistake, whereas \Cref{alg:halving} could end up making $n$ mistakes in total, one on each vertex.
\end{remark} \subsection{Extension of the Deterministic Model to Directed Manipulation Graphs} \label{sec:directed-graphs} Suppose that the manipulation graph $G(\mathcal{X},\mathcal{E})$ is a directed graph. We show how to modify~\Cref{alg:halving,alg:biased-weighted-maj-vote} to work in the case of directed manipulation graphs and get a regret bound that depends on $\Delta_{\text{out}}$ instead of $\Delta$, where $\Delta_{\text{out}}$ is the maximum out-degree over all the nodes $v\in\mathcal{X}$. \begin{proposition} In the realizable case,~\Cref{alg:halving} can be modified to make at most $(\Delta_{\text{out}}+2)\ln |\mathcal{H}|$ mistakes. \end{proposition} \begin{proof} First, we need to change the threshold of the majority vote for classifying a node as positive from $1/(\Delta+2)$ to $1/(\Delta_{\text{out}}+2)$. Now, if a mistake on a true negative happens, then a $1/(\Delta_{\text{out}}+2)$ fraction of the remaining hypotheses gets discarded, namely the experts that predict the observable node as positive. On the other hand, if a mistake on a true positive happens, it means that the agent was classified as negative and did not move. Therefore, all the nodes in the reachable out-neighborhood were classified as negative by the algorithm. The number of nodes reachable from the starting node is at most $\Delta_{\text{out}}+1$, and for each of them, fewer than $(1/(\Delta_{\text{out}}+2))|\mathcal{H}|$ experts classified them as positive. Therefore, a total of $|\mathcal{H}|\Big(1-(\Delta_{\text{out}}+1)/(\Delta_{\text{out}}+2)\Big)=(1/(\Delta_{\text{out}}+2))|\mathcal{H}|$ remaining hypotheses classify the entire reachable set as negative, and they are all making a mistake. As a result, whenever a mistake happens, a $1/(\Delta_{\text{out}}+2)$ fraction of the hypotheses can get discarded. This results in a mistake bound of $(\Delta_{\text{out}}+2)\ln{|\mathcal{H}|}$.
\end{proof} Similarly, we can show that \Cref{alg:biased-weighted-maj-vote} can be modified to get a mistake bound that depends on $\Delta_{\text{out}}$ instead of $\Delta$, as shown in the following proposition. \begin{proposition} In the unrealizable case,~\Cref{alg:biased-weighted-maj-vote} can be modified to make at most $e(\Delta_{\text{out}}+2)(\ln |\mathcal{H}|+\mathsf{OPT})$ mistakes. \end{proposition} \subsection{Regret bound of $\widetilde{\mathcal{O}}\left(T^{\frac{3}{4}}\ln^{\frac{1}{4}} |\mathcal{H}|\right)$ against an adaptive adversary} \label{sec:adaptive-adversaries} In this section, we present an algorithm (\Cref{alg:reduction-adaptive}) based on the idea of full-information acceleration, and prove a regret bound of $\widetilde{\mathcal{O}}\left(T^{\frac{3}{4}}\ln^{\frac{1}{4}} |\mathcal{H}|\right)$ against general adaptive adversaries in \Cref{thm:regret-alg-adaptive-reduction}. The proof of this theorem requires a more careful analysis of the difference between the estimated loss sequence and the actual loss sequence using martingale difference sequences, which borrows similar ideas from \citet{mcmahan2004online}. 
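Before the pseudocode, the core exploration-based update can be sketched as a minimal simulation: with probability $\gamma$ the learner plays the all-positive classifier $h^+$, observes the unmanipulated agent, charges every expert an importance-weighted loss estimate, and updates the weights multiplicatively. The loss table, seed, and constants below are illustrative assumptions, not quantities from the paper.

```python
import math
import random

# Minimal sketch of the estimator hat-ell_t(h) = ell_t(h) * 1{h_t = h+} / gamma
# followed by the multiplicative weight update. On non-exploration rounds the
# estimate is 0 and the weights are unchanged.

def hedge_with_exploration(losses, gamma, eta, rng):
    n = len(losses[0])
    w = [1.0] * n                      # w_1(h) = 1 for every expert
    for loss_t in losses:
        if rng.random() < gamma:       # h_t = h+: exploration round
            for h in range(n):
                w[h] *= math.exp(-eta * loss_t[h] / gamma)
    return w

rng = random.Random(0)
T, n = 200, 4
# illustrative losses: expert 0 is perfect, all others always err
losses = [[0.0 if h == 0 else 1.0 for h in range(n)] for _ in range(T)]
w = hedge_with_exploration(losses, gamma=0.3, eta=0.05, rng=rng)
best = max(range(n), key=lambda h: w[h])
print(best)  # 0: the zero-loss expert keeps weight 1 while the rest decay
```

The point of the simulation is that the estimator only touches the weights on exploration rounds, exactly the rounds on which the true, unmanipulated loss of every expert is observable.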
\begin{algorithm}[!ht] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetNoFillComment Initialize $w_1(h)\gets1,\ \forall h\in\mathcal{H}$\; Initialize step size $\eta\gets \sqrt{\frac{8\ln|\mathcal{H}|}{T}}$, exploration coefficient $\gamma\gets T^{-\frac{1}{4}}\ln^{\frac{1}{4}}(T|\mathcal{H}|)$\; Let $h^+$ be an all-positive classifier\; \For{$t\in [T]$}{ \tcc{Commit to a distribution $\mathcal{D}_t$ defined as follows, then draw classifier $h_t\sim\mathcal{D}_t$} Let $\mathcal{D}_t$ be a distribution over $\mathcal{H}\cup\{h^+\}$ specified by probabilities $p_t(h^+)=\gamma$, and $p_t(h)=(1-\gamma)\frac{w_t(h)}{W_t}$ for all $h\in\mathcal{H}$, where $W_t=\sum_{h'\in\mathcal{H}} w_t(h')$\; \tcc{Observe agent $(v_t,y_t)$.} \tcc{Construct an estimated loss vector and use it to update the weights:} \For{$h\in \mathcal{H}$}{ $\hat{\ell}_t(h)\gets\frac{\ell(h,\mathsf{BR}_h(v_t),y_t)\cdot\indicator{h_t=h^+}}{\gamma}$\; $w_{t+1}(h)\leftarrow w_t(h) e^{-\eta \cdot\hat{\ell}_t(h)}$. } } \caption{Randomized algorithm against adaptive adversaries} \label[algo]{alg:reduction-adaptive} \end{algorithm} \begin{theorem} \Cref{alg:reduction-adaptive} achieves a regret of $\widetilde{\mathcal{O}}\left(T^{\frac{3}{4}}\ln^{\frac{1}{4}} |\mathcal{H}|\right)$ against any adaptive adversary. \label{thm:regret-alg-adaptive-reduction} \end{theorem} \begin{proof} Similar to the proof of \Cref{thm:regret-alg-oblivious}, we first show that at every round $t$ and for all experts $h\in\mathcal{H}$, $\hat{\ell}_t(h)$ is an unbiased estimate of $\ell_t(h)$, where $\ell_t(h)=\ell(h,\mathsf{BR}_{h}(u_t),y_t)$ is the true loss of $h$.
Since we are dealing with adaptive adversaries, we show that for every $h\in\mathcal{H}$, $\left(\hat{\ell}_t(h)-\ell_t(h)\right)_{t=1}^T$ is a Martingale Difference Sequence: let $\mathcal{F}_t$ denote the $\sigma$-algebra generated by the randomness up to time $t$, then \begin{align} \E\left[\left.\hat{\ell_t}(h)-\ell_t(h)\right|\mathcal{F}_{t-1}\right]=&\E\left[\left.\gamma\cdot \frac{\ell(h,\mathsf{BR}_h(v_t),y_t)}{\gamma} -\ell(h,\mathsf{BR}_h(u_t),y_t)\ \right|\mathcal{F}_{t-1}\right]=0. \label{eq:adaptive-unbiased} \end{align} Here, the first equality is because $\hat{\ell}_t\neq0$ only when $h_t=h^+$ is an all-positive classifier, which happens with probability $\gamma$. The second equality is because the agent would not move under $h^+$, resulting in $u_t=v_t$. Moreover, from the definition of $\hat{\ell}_t$, the term $\hat{\ell}_t(h)-\ell_t(h)$ is bounded in absolute value by $\frac{1}{\gamma}$. Now we calculate the expected cumulative loss of \Cref{alg:reduction-adaptive}. \begin{align} \E\left[\sum_{t=1}^T \ell_t(h_t)\right]=& \E\left[\sum_{t=1}^T \E\left[\ell_t(h_t)|\mathcal{F}_{t-1}\right] \right]\label{eq:tower-property}\\ =&\E\left[\sum_{t=1}^T \E\left[\left.\gamma\cdot\indicator{y_t\neq1}+ (1-\gamma)\cdot \E_{h\sim \frac{w_t(\cdot)}{W_t}}\left[\ell_t(h)\right]\ \right|\ \mathcal{F}_{t-1}\right] \right]\nonumber\\ \le &\E\left[\sum_{t=1}^T \gamma+\E_{h\sim p_t'}\left[\ell_t(h)\right] \right],\qquad\qquad\qquad\small{ \text{where }p_t'(h)\triangleq \frac{w_t(h)}{W_t},\ \forall h\in\mathcal{H}}; \label{eq:red-tmp-1}\\ =&\gamma T+ \E\left[\sum_{t=1}^T \E_{h\sim p_t'}\left[\hat{\ell}_t(h)\right] \right]+\E\left[\sum_{t=1}^T \E_{h\sim p_t'}\left[\ell_t(h)-\hat{\ell}_t(h)\right] \right]. 
\label{eq:red-tmp-2} \end{align} In the above equations, \Cref{eq:tower-property} is from the tower property of conditional expectations, \Cref{eq:red-tmp-1} is because $\indicator{y_t\neq1}\le1$ and $1-\gamma\le1$, where we also use the tower property to remove the conditional expectations. Finally, \Cref{eq:red-tmp-2} follows by adding and subtracting the second term. Now, for the third term in \eqref{eq:red-tmp-2}, note that $p_t'$ is defined on $\mathcal{F}_{t-1}$, so $\left(\E_{h\sim p_t'}\left[\ell_t(h)-\hat{\ell}_t(h)\right]\right)_{t=1}^T$ is also a martingale difference sequence with respect to the filtration $\left(\mathcal{F}_{t}\right)_{t=1}^T$. Again, from the tower property, this term is always zero: \begin{align} \E\left[\sum_{t=1}^T \E_{h\sim p_t'}\left[\ell_t(h)-\hat{\ell}_t(h)\right] \right]=\E\left[\sum_{t=1}^T \E\left[\E_{h\sim p_t'}\left[\left.\ell_t(h)-\hat{\ell}_t(h)\ \right|\mathcal{F}_{t-1}\right]\right] \right]=0. \label{eq:concentration-loss} \end{align} Since $p_1',\cdots,p_T'$ are exactly the same as the strategies generated by running Hedge on the estimated loss sequence $\hat{\ell}_1,\cdots,\hat{\ell}_T$, and the magnitudes of the losses are all bounded by $\frac{1}{\gamma}$, we have the following regret guarantee from~\citet{freund1997decision}: \begin{align} \E\left[\sum_{t=1}^T \E_{h\sim p_t'}\left[\hat{\ell}_t(h)\right] -\min_{h^\star\in\mathcal{H}}\sum_{t=1}^T \hat{\ell}_t(h^\star)\right]\le {\mathcal{O}}\left(\frac{1}{\gamma}\sqrt{T\ln|\mathcal{H}|}\right).\label{eq:hedge-guarantee} \end{align} Putting \Cref{eq:red-tmp-2,eq:concentration-loss,eq:hedge-guarantee} together gives us the bound on expected loss: \begin{align} \E\left[\sum_{t=1}^T \ell_t(h_t) \right]\le &\E\left[\min_{h^\star\in\mathcal{H}}\sum_{t=1}^T \hat{\ell}_t(h^\star)\right] +{\mathcal{O}}\left(\gamma T+\frac{1}{\gamma}\sqrt{T\ln|\mathcal{H}|}\right).\label{eq:tmp10} \end{align} We define \begin{align*}
\widehat{\mathsf{OPT}}\triangleq\min_{h^\star\in\mathcal{H}}\sum_{t=1}^T \hat{\ell}_t(h^\star), \end{align*} then the above inequality \eqref{eq:tmp10} implies \begin{align} \E[\mathsf{Regret}]=\E\left[\sum_{t=1}^T \ell_t(h_t) -\mathsf{OPT}\right]\le \E\left[\widehat{\mathsf{OPT}}-\mathsf{OPT}\right]+{\mathcal{O}}\left(\gamma T+\frac{1}{\gamma}\sqrt{T\ln|\mathcal{H}|}\right). \label{eq:tmp23} \end{align} Now, the last step is to bound the expected difference between $\widehat{\mathsf{OPT}}$ and the true optimum $\mathsf{OPT}=\min_{h^\star\in\mathcal{H}}\sum_{t=1}^T\ell_t(h^\star)$. We have: \begin{align*} \E\left[\widehat{\mathsf{OPT}}-\mathsf{OPT}\right] =\E\left[\min_{\hat{h}}\max_{h}\sum_{t=1}^T \hat{\ell}_t(\hat{h})-\ell_t(h)\right] \le\E\left[\max_{h\in\mathcal{H}} \sum_{t=1}^T\hat{\ell}_t(h)-\ell_t(h)\right]. \end{align*} Since $\left(\hat{\ell}_t(h)-\ell_t(h)\right)_{t=1}^T$ is a martingale difference sequence for any fixed $h$, we use the Azuma--Hoeffding inequality together with the union bound to obtain \begin{align*} \Pr\left[\max_{h\in\mathcal{H}} \sum_{t=1}^T\hat{\ell}_t(h)-\ell_t(h)\ge\frac{1}{\gamma}\sqrt{2T\ln\left(\frac{1}{\delta}\right)}\right]\le\delta|\mathcal{H}|. \end{align*} Setting $\delta=\frac{1}{T|\mathcal{H}|}$ and using the fact that each difference is bounded by $\frac{1}{\gamma}$ gives us \begin{align} \E\left[\widehat{\mathsf{OPT}}-\mathsf{OPT}\right]\le\E\left[\max_{h\in\mathcal{H}} \sum_{t=1}^T\hat{\ell}_t(h)-\ell_t(h)\right]\le \frac{1}{\gamma}\sqrt{2T\ln\left(\frac{1}{\delta}\right)}+\frac{\delta|\mathcal{H}|T}{\gamma}\le\mathcal{O}\left(\frac{1}{\gamma}\sqrt{2T\ln\left(T|\mathcal{H}|\right)}\right). \label{eq:opt-concentration} \end{align} Finally, by putting \Cref{eq:tmp23,eq:opt-concentration} together, and setting $\gamma=T^{-\frac{1}{4}}\ln^{\frac{1}{4}}(T|\mathcal{H}|)$, we derive the desired regret bound: \begin{align*} \E[\mathsf{Regret}]=\E\left[\sum_{t=1}^T \ell_t(h_t) -\mathsf{OPT}\right]\le \mathcal{O}\left(T^{\frac{3}{4}}\ln^{\frac{1}{4}}(T|\mathcal{H}|)\right).
\end{align*} \end{proof} \subsection{Strategic online linear classification} \label{sec:strategic-perceptron} In this section, we propose an algorithm for the problem of online linear classification in the presence of strategic behavior. In this setting, each original example $\mathbf{z}_t$ can move an $\ell_2$ distance of at most $\alpha$ and reach a new observable state $\mathbf{x}_t$; an example moves the minimum distance that results in a positive classification. \citet{ahmadi2021strategic} propose an algorithm for the case that original examples are linearly separable; in the case of inseparable examples, they get a mistake bound in terms of the hinge loss of \emph{manipulated} examples, and leave it as an open problem to obtain a mistake bound in terms of the hinge loss of \emph{original} examples. In this section, we propose an algorithm for the inseparable case that obtains a bound in terms of the hinge loss of \emph{original} examples. However, our mistake bound has an additional $\mathcal{O}(\sqrt{T})$ additive term compared to the bound obtained by \citet{ahmadi2021strategic} in the separable case. The idea behind this algorithm is to use an all-positive classifier at random time steps to observe the un-manipulated examples. Using the un-manipulated examples, the standard Perceptron algorithm suffices to deal with inseparable data. For simplicity, we present the algorithm for oblivious adversaries and remark that a similar bound could be obtained for the case of adaptive adversaries using similar techniques as in \Cref{sec:adaptive-adversaries}.
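The block-sampling idea can be sketched as follows: split the $T$ rounds into $K$ blocks, play the all-positive classifier on one uniformly random round per block so the agent has no reason to move, and run a standard Perceptron update on the revealed original example. The toy stream and dimensions below are assumptions for illustration; this is a sketch of the sampling idea, not the exact pseudocode that follows.

```python
import random

# Sketch: on the sampled round tau_j the all-positive classifier is played,
# so the observed features equal the original features z, and a standard
# Perceptron update can be applied to (z, y).

def block_perceptron(stream, K, rng):
    d = len(stream[0][0])
    w = [0.0] * d
    block = len(stream) // K
    for j in range(K):
        tau = rng.randrange(j * block, (j + 1) * block)
        z, y = stream[tau]                     # all-positive round: x = z
        margin = y * sum(wi * zi for wi, zi in zip(w, z))
        if margin <= 0:                        # Perceptron update on a mistake
            w = [wi + y * zi for wi, zi in zip(w, z)]
    return w

rng = random.Random(1)
# toy stream: the label is the sign of the first coordinate
stream = [((x, 1.0), +1 if x > 0 else -1)
          for x in [1.0, -2.0, 0.5, -1.0, 2.0, -0.5, 1.5, -1.5]]
w = block_perceptron(stream, K=4, rng=rng)
print(w[0] > 0)  # True: every update adds y*z, and here y*z[0] = |z[0]| > 0
```

Whichever rounds are sampled, the updates here only ever see original (unmanipulated) examples, which is precisely what lets the standard Perceptron analysis go through in the proof.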
\begin{algorithm}[!ht] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetNoFillComment Partition the timeline $1,\cdots, T$ into $K$ consecutive blocks $B_1,\cdots,B_K$ where $B_j=[\frac{(j-1)T}{K}+1,\frac{jT}{K}]$\; Initialize $\mathbf{w}_1\gets\mathbf{0}$\; \For{$j\in[K]$}{ Sample $\tau_j \in B_j$ uniformly at random\; \For{$t\in B_j$}{ \eIf{$t=\tau_j$}{ Use classifier $h_t\gets h^+$, where $h^+(\mathbf{x})=+1\ \forall \mathbf{x}$\; } { Use classifier $h_t\gets h^j$, where $h^j(\mathbf{x})=\text{sgn}\left(\frac{\mathbf{w}_j^\mathsf{T} \mathbf{x}}{|\mathbf{w}_j|}-\alpha\right)$\; } \tcc{Observe example $(\mathbf{x}_t,y_t)$} } \If{$y_{\tau_j}\neq h_{\tau_j}(\mathbf{x}_{\tau_j})$}{ $\mathbf{w}_{j+1}\gets {\mathbf{w}_j}+y_{\tau_j}\mathbf{x}_{\tau_j}$\; } } \caption{Algorithm for online linear strategic classification when original examples are inseparable} \label[algo]{alg:reduction-MAB-FIB-hinge-loss} \end{algorithm} \begin{theorem} Let $S=\{(\mathbf{z}_t,y_t)\}_{t=1}^T$ be the set of original data points, where $\max_t|\mathbf{z}_t|\le R $. For any $\mathbf{w}^\star$, \Cref{alg:reduction-MAB-FIB-hinge-loss} with parameter $K=\sqrt{T} R\|\mathbf{w}^\star\|$ satisfies \begin{align} \E\left[\mathsf{Mistake}(T)\right]\le 2L_{\text{hinge}}(\mathbf{w}^\star,S)+2\sqrt{T}R\|\mathbf{w}^\star\|, \end{align} where the hinge loss is defined as $$L_{\text{hinge}}(\mathbf{w}^\star,S)\triangleq \sum_{(\mathbf{z}_t,y_t)\in S} \max\left\{0,1-y_t (\mathbf{z}_t^\mathsf{T} \mathbf{w}^\star)\right\}.$$ \end{theorem} \begin{proof} We use $\ell_t(h)=\indicator{y_t\neq h(\mathsf{BR}_{h}(\mathbf{z}_t))}$ to denote the loss of classifier $h$ had agent $(\mathbf{z}_t,y_t)$ best responded to $h$. In each block $B_j$, we have $\ell_{\tau_j}(h_{\tau_j})=\indicator{y_{\tau_j}\neq +1}\le1$ on the all-positive step $\tau_j$.
On the other steps $t\neq \tau_j$, since $h^j$ is obtained by shifting the boundary $\mathbf{w}_j$ by $\alpha$, an agent observed at $\mathbf{x}_t$ can reach the positive region of $h^j$ if and only if its original features $\mathbf{z}_t$ have a nonnegative inner product with $\mathbf{w}_j$. Thus we have $$\ell_t(h^j)=\indicator{h^j(\mathbf{x}_t)\neq y_{t}}=\indicator{\text{sgn}\left(\frac{\mathbf{w}_j^\mathsf{T} \mathbf{x}_t}{|\mathbf{w}_j|}-\alpha\right)\neq y_{t}} =\indicator{\text{sgn}\left(\frac{\mathbf{w}_j^\mathsf{T} \mathbf{z}_t}{|\mathbf{w}_j|}\right)\neq y_t}=\indicator{\text{sgn}\left({\mathbf{w}_j^\mathsf{T} \mathbf{z}_t}\right)\neq y_t}.$$ As a result, we can bound the number of mistakes as follows: \begin{align} \E\left[\mathsf{Mistake}(T) \right] =& \E\left[\sum_{j=1}^K \sum_{t\in B_j}\ell_t(h_t)\right] \le\E\left[\sum_{j=1}^K\left(1+ \sum_{t\in B_j} \ell_t(h^j) \right)\right]\nonumber\\ =& K+\sum_{j=1}^K \sum_{t\in B_j} \indicator{\text{sgn}\left({\mathbf{w}_j^\mathsf{T} \mathbf{z}_t}\right)\neq y_t}\nonumber\\ \le& K+\frac{T}{K}\sum_{j=1}^K \E_{\tau_j\sim B_j}\indicator{\text{sgn}\left({\mathbf{w}_j^\mathsf{T} \mathbf{z}_{\tau_j}}\right)\neq y_{\tau_j}},\label{tmp::5} \end{align} where the last step is because $\tau_j$ is sampled uniformly at random from $B_j$. Note that $\mathbf{w}_1,\cdots,\mathbf{w}_K$ is obtained from running the standard Perceptron algorithm on examples $S_\tau\triangleq\{(\mathbf{x}_{\tau_1},y_{\tau_1}),\cdots, (\mathbf{x}_{\tau_K},y_{\tau_K})\}$. Since at each $\tau_j$, the learner uses an all-positive classifier to stop the agents from moving, we have $\mathbf{x}_{\tau_j}=\mathbf{z}_{\tau_j}$, and $S_\tau=\{(\mathbf{z}_{\tau_1},y_{\tau_1}),\cdots, (\mathbf{z}_{\tau_K},y_{\tau_K})\}$.
From \citet{block1962perceptron}, we have $$\sum_{j=1}^K \indicator{\text{sgn}\left({\mathbf{w}_j^\mathsf{T} \mathbf{z}_{\tau_j}}\right)\neq y_{\tau_j}}\le R^2\|\mathbf{w}^\star\|^2+2L_{\text{hinge}}(\mathbf{w}^\star,S_\tau).$$ Taking the expectation over $\tau_1,\cdots,\tau_K$, we have the following mistake bound on the standard perceptron algorithm: \begin{align} \frac{T}{K}\sum_{j=1}^K \E_{\tau_j\sim B_j}\indicator{\text{sgn}\left({\mathbf{w}_j^\mathsf{T} \mathbf{z}_{\tau_j}}\right)\neq y_{\tau_j}}\le& \frac{T}{K}R^2\|\mathbf{w}^\star\|^2+2 \frac{T}{K}\sum_{j=1}^K \E_{\tau_j\sim B_j} L_{\text{hinge}}(\mathbf{w}^\star,(\mathbf{z}_{\tau_j},y_{\tau_j}))\nonumber\\ =&\frac{T}{K}R^2\|\mathbf{w}^\star\|^2+2 \frac{T}{K}\sum_{j=1}^K \frac{1}{|B_j|}\sum_{t\in B_j}L_{\text{hinge}}(\mathbf{w}^\star,(\mathbf{z}_t,y_t))\label{tmp::8}\\ =&\frac{T}{K}R^2\|\mathbf{w}^\star\|^2+2 \sum_{j=1}^K \sum_{t\in B_j}L_{\text{hinge}}(\mathbf{w}^\star,(\mathbf{z}_t,y_t))\label{tmp::4}\\ =&\frac{T}{K}R^2\|\mathbf{w}^\star\|^2+2 L_{\text{hinge}}(\mathbf{w}^\star,S).\label{tmp::3} \end{align} In the above inequalities, \Cref{tmp::8} follows from the fact that $\tau_j$ is distributed uniformly at random in block $B_j$, and \Cref{tmp::4} is because every block has size $|B_j|=\frac{T}{K}$. Now we plug \Cref{tmp::3} back into \eqref{tmp::5} and obtain \begin{align*} \E\left[\mathsf{Mistake}(T) \right]\le K+\frac{T}{K}R^2\|\mathbf{w}^\star\|^2+2L_{\text{hinge}}(\mathbf{w}^\star,S). \end{align*} Finally, letting $K=\sqrt{T} R\|\mathbf{w}^\star\|$ yields the desired bound. 
\end{proof} \input{two-populations} \subsection*{Acknowledgements} This work was supported in part by the National Science Foundation under grants CCF-2212968 and CCF-2145898, by the Simons Foundation under the Simons Collaboration on the Theory of Algorithmic Fairness, by the Defense Advanced Research Projects Agency under cooperative agreement HR00112020003, by a C3.AI Digital Transformation Institute grant, and by a Berkeley AI Research (BAIR) Commons award. The views expressed in this work do not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred. Approved for public release; distribution is unlimited. \bibliographystyle{plainnat} \section{Conclusion and Open Problems} \label{sec:open-problems} In this paper, we studied the problem of online strategic classification under manipulation graphs. We showed fundamental differences between strategic and non-strategic settings in both deterministic and randomized models. In the deterministic model, we show that, in contrast to the nonstrategic setting where an $O(\ln|\mathcal{H}|)$ bound is achievable by the simple $\mathsf{Halving}$ algorithm, in the strategic setting mistake and regret bounds are closely characterized by the maximum degree $\Delta$, even when $|\mathcal{H}|=O(\Delta)$. In the randomized model, we show that unlike the nonstrategic setting, where withholding random bits can benefit the learner, in the strategic setting hiding the random choices forces the learner to suffer $\Omega(\Delta)$-multiplicative regret, whereas revealing the random choices to the strategic agents provably bypasses this barrier. We also design generic deterministic algorithms that achieve $O(\Delta)$-multiplicative regret and randomized algorithms that achieve $o(T)$ regret against both oblivious and adaptive adversaries. Our work suggests several open problems.
The first is to design a deterministic algorithm in the realizable setting that achieves a mistake bound in terms of generic characteristics of the manipulation graph other than the maximum degree. Recall that our upper bound of $O(\Delta\ln|\mathcal{H}|)$ and lower bound of $\Omega(\Delta)$ are not matching, so it would be interesting to tighten either the upper or lower bound in this setting. The second open question is to incorporate the graph structure into randomized algorithms and achieve an $o(T)$ regret bound that depends on characteristics of the graph, such as the maximum degree. \section{Deterministic Classifiers} \label{sec:deterministic} \subsection{Realizable Case} \label{sec:realizable} In the realizable case, we assume that there exists a perfect expert $h^\star\in\mathcal{H}$ with zero mistakes, i.e., $\mathsf{OPT}=0$. This implies that for all time steps $t\in [T]$, we have $\ell(h^\star,\mathsf{BR}_{h^\star}(u_t),y_t)=0$. In this case, our goal of bounding the Stackelberg regret coincides with the mistake bound: \begin{align} \mathsf{Mistake}(T)\triangleq\sum_{t=1}^T \ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t). \end{align} For notational convenience, let $S^\star$ denote the set of nodes in $\mathcal{X}$ with positive labels under $h^\star$, namely $S^\star\triangleq\left\{u\in\mathcal{X}:\ h^\star(u)=+1\right\}$. Then realizability implies that $S^\star$ must satisfy two properties: (1) all the true positives can reach $S^\star$ within at most one hop; (2) no true negatives can reach $S^\star$ in one hop. We formalize these two properties in \Cref{prop:dominating-set}. \begin{proposition} \label{prop:dominating-set} In the realizable case, there exists a subset of nodes $S^\star\subseteq \mathcal{X}$ such that $S^\star$ is a \emph{dominating set} for all the true positives $u_t$, i.e. $\mathsf{dist}(u_t, S^\star)\leq 1$. Additionally, none of the true negatives $u_t$ are dominated by $S^\star$, i.e.
$\mathsf{dist}(u_t, S^\star)>1$, where $\mathsf{dist}(u, S^\star)$ represents the minimum distance from node $u$ to the set $S^\star$. \end{proposition} \subsubsection{The failure of vanilla Halving} In the problem of nonstrategic online classification with expert advice, the well-known $\mathsf{Halving}$ algorithm achieves a mistake bound of $\mathsf{Mistake}(T)=\mathcal{O}(\ln{|\mathcal{H}|})$. In each iteration, $\mathsf{Halving}$ uses the majority vote of remaining experts to make predictions on the next instance, which ends up reducing the number of remaining experts by at least half on each mistake. Since there are $|\mathcal{H}|$ experts at the beginning and at least one expert at the end, the total number of mistakes is bounded by $\mathcal{O}(\ln{|\mathcal{H}|})$. However, in the following example, we show that when agents are strategic, the vanilla $\mathsf{Halving}$ algorithm may suffer from an infinite number of mistakes, as do two extensions of vanilla $\mathsf{Halving}$ that consider the best response function before taking majority votes. Moreover, our construction indicates that these algorithms fail even when the sequence of agents is chosen by an oblivious adversary. 
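For contrast, the nonstrategic guarantee described above is easy to verify in code. The following minimal sketch uses threshold experts on a toy domain (an illustrative encoding, not the paper's setting): on every mistake, the majority-vote rule eliminates at least half of the remaining experts, giving at most $\log_2|\mathcal{H}|$ mistakes.

```python
# Minimal nonstrategic Halving: predict with the majority vote of the
# remaining consistent experts; on a mistake, discard every expert that
# agreed with the (wrong) prediction -- at least half of them.

def halving(H, stream):
    H = list(H)
    mistakes = 0
    for x, y in stream:
        positives = sum(1 for h in H if h[x] == +1)
        pred = +1 if 2 * positives >= len(H) else -1
        if pred != y:
            mistakes += 1
            H = [h for h in H if h[x] == y]   # drop all mistaken experts
    return mistakes

# illustrative experts: thresholds on {0,...,7}; the target is threshold 5
X = list(range(8))
H = [{x: (+1 if x >= k else -1) for x in X} for k in range(9)]
stream = [(x, +1 if x >= 5 else -1) for x in [0, 7, 3, 5, 4, 6, 2, 1]]
m = halving(H, stream)
print(m)  # 1 mistake here, well within the log2(9) < 4 guarantee
```

The construction in the example below shows exactly how this halving progress breaks down once agents best-respond: a mistake can occur while every remaining expert is still consistent.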
\begin{figure} \centering \begin{tikzpicture} \node[draw, circle] at (360:0mm) (ustar) {$x_0$}; \node at (352:2cm*0.3) {\textcolor{red}{$-$}}; \foreach \i [count=\ni from 0] in {\Delta,1,2,3,4}{ \node[draw, circle] at ({120-\ni*36}:2cm) (u\ni) {$x_{\i}$}; \node at ({116-\ni*36}:2cm*1.3) {\textcolor{red}{$-$}}; \draw[thick] (ustar)--(u\ni); } \node[draw, circle] at ({225}:2cm) (ui) {$x_{i}$}; \node at ({225}:2cm*1.3) {\textcolor{red}{+}}; \draw[thick] (ustar)--(ui); \foreach \i in {5,6,8,9}{ \node[circle] at ({120-\i*36}:2cm) (aux) {\phantom{$u_{5}$}}; \draw[dotted, thick, shorten >=1mm, shorten <=2mm] (ustar)--(aux); } \draw[dotted, semithick, red] (-40:2cm/2) arc[start angle=-40, end angle=-120, radius=2cm/2]; \draw[dotted, semithick, red] (-150:2cm/2) arc[start angle=-150, end angle=-230, radius=2cm/2]; \end{tikzpicture} \caption{Expert $h^i$} \label{fig:lower-bound-deterministic} \end{figure} \begin{example} \label{example:halving-fails} Consider the following manipulation graph $G(\mathcal{X},\mathcal{E})$ and hypothesis class $\mathcal{H}$: $G(\mathcal{X},\mathcal{E})$ is a star that includes a central node $x_0$ and $\Delta$ leaves $x_1,\cdots,x_{\Delta}$. The hypothesis class is $\mathcal{H}=\{h^1,\cdots,h^{\Delta}\}$, where each $h^i\in \mathcal{H}$ assigns positive to $x_i$ and negative to all other nodes in $\mathcal{X}$ (see \Cref{fig:lower-bound-deterministic}). The perfect expert is $h^\star=h^j\in \mathcal{H}$ for some $j\in[\Delta]$ unknown to the learner. \end{example} Now consider the vanilla $\mathsf{Halving}$ algorithm and two strategic variants of it that take the best-response function into account. \begin{enumerate} \item \textbf{Vanilla $\mathsf{Halving}$}. Consider the following sequence of agents: at every time $t$, the same agent with initial position $u_t=x_0$ and label $y_t=+1$ arrives.
We claim that the $\mathsf{Halving}$ algorithm makes mistakes on each agent regardless of the total number of rounds executed. First, note that this sequence is realizable with respect to class $\mathcal{H}$: for all $h^i\in \mathcal{H}$, we have $\mathsf{BR}_{h^i}(x_0)=x_i$ and $h^i(x_i)=+1$, so each $h^i$ classifies $(x_0,+1)$ correctly in isolation. Therefore, any expert in $\mathcal{H}$ achieves zero mistakes on this sequence of agents. Now consider the vanilla $\mathsf{Halving}$ algorithm. Initially, for each node $x\in\mathcal{X}$, there is at most one expert in $\mathcal{H}$ that labels it as positive. Therefore, the majority vote classifier of $\mathcal{H}$ labels every node as negative. In response to this all-negative majority vote classifier, the first agent $(x_0,+1)$ stays put and is mistakenly classified as negative. However, we know that each classifier $h^i$ predicts correctly on $(x_0,+1)$. As a result, none of the experts get discarded. Therefore, a mistake is made by the learner, but no progress is made in terms of shrinking the set $\mathcal{H}$. The same agent appears at every round, so the $\mathsf{Halving}$ algorithm makes a mistake in every round. \item \textbf{A strategic variant of $\mathsf{Halving}$}. Now consider a different voting rule for taking the majority-vote classifier based on the best-response function: Let $\overline{h}(u)=h(\mathsf{BR}_{h}(u))$ for all $h\in\mathcal{H}$ and $u\in\mathcal{X}$, and suppose that the learner runs $\mathsf{Halving}$ on the hypothesis class $\overline{\mathcal{H}}=\{\overline{h^1},\cdots,\overline{h^\Delta}\}$. Specifically, for each $h^i\in \mathcal{H}$, $\overline{h^i}(x_0)=h^i(\mathsf{BR}_{h^i}(x_0))=h^i(x_i)=+1$; therefore, the majority-vote classifier predicts positive on $x_0$. On the other hand, the majority-vote classifier predicts negative on all the leaves.
Now, suppose the adversary secretly chooses $j\in[\Delta]$ and constructs a sequence in which $h^j$ is realizable as follows: at each time step $t$, the adversary selects an example with true label $y_t=-1$ and initial position $u_t=x_i\in \mathcal{X}\setminus\{x_0,x_j\}$. Note that all classifiers in ${\mathcal{H}}$ except ${h^i}$ will classify $(u_t,y_t)$ correctly. However, the majority-vote classifier will make a mistake because $u_t$ can manipulate to $x_0$ and get classified as positive. Once the mistake is made, had the true location $u_t=x_i$ been observable, the learner could have shrunk the size of $\mathcal{H}$ by discarding $h^i$. However, $u_t$ is hidden from the learner, so the learner does not know which classifier made the mistake. Therefore, it cannot make progress by excluding at least one expert from $\mathcal{H}$ in each round. \item \textbf{Another strategic variant of $\mathsf{Halving}$.} The positive region of $h^{\text{maj}}$ in the previous variant can be reached by all the nodes in the graph, which makes gaming too easy for the agents. Now, suppose the learner's goal is to shrink the positive region of $h^{\text{maj}}$ and obtain a new classifier $h$ such that the positive region of $h$ can be reached by the true positives under $h^{\text{maj}}$, but by none of the true negatives. We use the same example as above to show the failure of this algorithm, because such an $h$ does not exist. Recall that the positive region of $h^{\text{maj}}$ contains only the central node $x_0$. Suppose such an $h$ exists; then $x_0$ cannot belong to the positive region of $h$, because it can be reached by all leaf nodes $x_i$, which are true negatives under $h^{\text{maj}}$. In addition, no leaf node should be included in the positive region of $h$ either. This implies that the positive region of $h$ is empty, which contradicts the assumption that the true positive node $x_0$ can reach it. For this reason, the learner is unable to find an $h$ satisfying this property.
\end{enumerate} \Cref{example:halving-fails} indicates that taking majority votes fails in the strategic setting. One crucial point is that the leaves do not meet the threshold for a \emph{majority}, and therefore they are always negative under the majority-vote classifier (whether we consider the best-response function or not) and thus indistinguishable, weakening the learner's leverage to identify the optimal expert. In fact, in this example, the only evidence for removing an expert is a false positive agent at the corresponding leaf node, so the learner should classify the leaves as positive in order to make progress. Therefore, one needs to lower the threshold for majority votes to increase the likelihood of false positives and make more room for improvement. In the next section, we propose an algorithm based on the idea of a \emph{biased majority vote} in favor of positive predictions, which provably achieves a finite mistake bound against any adversarial sequence of strategic agents. We show that compared to the nonstrategic setting, the extra number of mistakes made by the learner is closely characterized by the maximum degree of the manipulation graph. \subsubsection{Upper Bound: Biased Majority-Vote Algorithm} In this section, we propose a biased version of the majority-vote algorithm for the realizable strategic setting. The algorithm proceeds in rounds as follows: At each round $t$, a new agent arrives and gets observed as $v_t$. From the remaining set of experts, if at least a $1/(\Delta+2)$ fraction of them classify $v_t$ as positive, then the algorithm predicts positive. If the algorithm makes a mistake, all the experts that predicted positive get removed from $\mathcal{H}$. If fewer than a $1/(\Delta+2)$ fraction of the experts classify $v_t$ as positive, the algorithm predicts negative. If the prediction was wrong, then each expert that labeled all the vertices in the neighborhood of $v_t$, i.e. $N[v_t]$, as negative gets removed from $\mathcal{H}$.
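The update rules just described can be sketched as a short simulation. The following is our own illustrative code, not the paper's artifact; the node encoding, the tie-breaking inside \texttt{best\_response}, and the particular adversarial stream are assumptions made for the sake of the example.

```python
import math

# Sketch (ours) of the biased majority-vote rule on the star graph of the
# running example. Experts map nodes to +1/-1; the learner predicts +1 on a
# node v iff at least a 1/(Delta+2) fraction of the remaining experts do.

def committed_classifier(experts, nodes, delta):
    thresh = len(experts) / (delta + 2)
    return {v: +1 if sum(h[v] == +1 for h in experts) >= thresh else -1
            for v in nodes}

def best_response(adj, h, u):
    # The agent moves one hop to a positive neighbor if u itself is
    # negative (tie-breaking among positive neighbors is arbitrary here).
    if h[u] == +1:
        return u
    return next((v for v in adj[u] if h[v] == +1), u)

delta = 4
nodes = list(range(delta + 1))  # node 0 is the center, 1..delta the leaves
adj = {0: list(range(1, delta + 1)), **{i: [0] for i in range(1, delta + 1)}}
experts = [{v: (+1 if v == i else -1) for v in nodes}
           for i in range(1, delta + 1)]  # expert h^i: positive only on leaf i
j = 2
h_star = experts[j - 1]  # hidden perfect expert h^j

# One (assumed) adversarial stream: true positives at the center, then true
# negatives at the leaves other than x_j, repeated twice.
stream = [(0, +1)] * 3 + [(i, -1) for i in range(1, delta + 1) if i != j] * 2
mistakes = 0
for u, y in stream:
    h = committed_classifier(experts, nodes, delta)
    v = best_response(adj, h, u)
    if h[v] != y:
        mistakes += 1
        if y == -1:   # drop experts that label v_t positive
            experts = [e for e in experts if e[v] == -1]
        else:         # drop experts that label all of N[v_t] negative
            experts = [e for e in experts
                       if any(e[x] == +1 for x in [v] + adj[v])]

assert h_star in experts                          # the perfect expert survives
assert mistakes <= (delta + 2) * math.log(delta)  # |H| = delta
```

On this stream the learner errs only on the false-positive leaves, in line with the bound proved below.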
We present this algorithm in \Cref{alg:halving} and analyze its mistake bound in \Cref{thm:baseline-realizable-upper-bound}. \begin{algorithm}[!ht] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetNoFillComment \Input{$G(\mathcal{X},\mathcal{E})$, hypothesis class $\mathcal{H}$} \For{$t=1,2,\cdots$}{ \tcc{learner commits to a classifier $h_t$ that is constructed as follows:} \For{$v\in \mathcal{X}$}{ \eIf{$|\{h\in \mathcal{H}:h(v)=+1\}|\geq |\mathcal{H}|/(\Delta+2)$}{ $h_t(v)\leftarrow+1$\; } { $h_t(v)\leftarrow-1$\; } } \tcc{example $v_t$ is observed.} predict according to $h_t(v_t)$\; \tcc{If there was a mistake:} \If{$h_t(v_t)\neq y_t$}{ \eIf{$y_t=-1$}{ $\mathcal{H}\leftarrow \mathcal{H}\setminus \{h\in \mathcal{H}:h(v_t)=+1\}$\tcp*{at least $|\mathcal{H}|/(\Delta+2)$ experts are removed.} } { $\mathcal{H}\leftarrow \mathcal{H}\setminus \{h\in \mathcal{H}: \forall x\in N[v_t], h(x)=-1\}$\tcp*{at least $|\mathcal{H}|/(\Delta+2)$ experts are removed.} } } } \caption{Biased majority-vote algorithm.} \label[algo]{alg:halving} \end{algorithm} \begin{theorem} \label{thm:baseline-realizable-upper-bound} If there exists at least one perfect expert under manipulation, \Cref{alg:halving} makes at most $(\Delta+2)\ln{|\mathcal{H}|}$ mistakes. \end{theorem} \begin{proof} We show that whenever a mistake is made, at least a $1/(\Delta+2)$ fraction of the remaining experts get excluded from $\mathcal{H}$, but the realizable classifier $h^\star$ is never excluded. First, consider the case of making a mistake on a true negative, i.e. $y_t=-1$. In this case, at least $|\mathcal{H}|/(\Delta+2)$ of the experts are predicting positive on $v_t$, and all of them are excluded from $\mathcal{H}$. On the other hand, according to \Cref{prop:dominating-set}, all neighbors of $u_t$ are labeled as negative by $h^\star$. Since $v_t\in N[u_t]$, this implies that $h^\star$ must have labeled $v_t$ as negative, so $h^\star$ will not be excluded.
Next, consider the case of making a mistake on a true positive, i.e. $y_t=+1$. Since the algorithm predicts negative on $v_t$, the agent would not have moved from a different location to $v_t$ only to be classified as negative; hence, it must be the case that $v_t=u_t$. Since the agent did not move, none of the vertices in its neighborhood is labeled positive by the algorithm, which means each vertex in $N[v_t]$ is labeled positive by fewer than $|\mathcal{H}|/(\Delta+2)$ of the experts. Since there are at most $(\Delta+1)$ vertices in $N[v_t]$, at least $|\mathcal{H}|(1-(\Delta+1)/(\Delta+2)) = |\mathcal{H}|/(\Delta+2)$ experts predict negative on all vertices in $N[v_t]$, all of which will be excluded. On the other hand, by \Cref{prop:dominating-set} again, $u_t=v_t$ is dominated by the positive region of $h^\star$, so at least one vertex in $N[u_t]$ is labeled positive by $h^\star$, which implies that $h^\star$ will not be excluded from $\mathcal{H}$. In either case, when a mistake is made, at least a $1/(\Delta+2)$ fraction of the remaining experts get excluded, but the perfect expert never gets excluded. Therefore, the total number of mistakes $M=\mathsf{Mistake}(T)$ can be bounded as follows: \begin{align*} &\left(1-\frac{1}{\Delta+2}\right)^M|\mathcal{H}|\ge 1 \quad\Rightarrow\quad M\leq (\Delta+2)\ln|\mathcal{H}|. \end{align*} \end{proof} \paragraph{Improving the Upper Bound} In~\Cref{sec:improving-upper-bound}, we propose a pre-processing step (\Cref{alg:improvement-halving}) that improves the mistake bound of \Cref{alg:halving} when the underlying manipulation graph is dense, i.e., when the minimum degree of the vertices is large.
We achieve the following upper bound: \begin{theorem}[Improving the number of mistakes] \label{thm:mistake-bound-improved-halving} \Cref{alg:improvement-halving} makes at most $\min\{n-\delta, 1+\Delta\cdot \min\{\ln|\mathcal{H}|, n-\delta-1\}\}$ mistakes, where $n=|\mathcal{X}|$ and $\delta$ is the minimum degree of $G(\mathcal{X},\mathcal{E})$. \end{theorem} We leave it open to obtain a general instance-dependent upper bound that potentially depends on other characteristics of the manipulation graph besides the maximum/minimum degree. \subsection{Unrealizable Case} In the unrealizable (agnostic) case, we remove the assumption that there exists a perfect classifier under manipulation. Our goal is to design an adaptive algorithm that does not make too many mistakes compared to $\mathsf{OPT}$ (the minimum number of mistakes achieved by any classifier in $\mathcal{H}$), without a priori knowledge of the value of $\mathsf{OPT}$ or the optimal classifier that achieves this value. Before presenting our algorithm, we first observe that for any classifier $h$ with positive region $S_h$, $h$ makes a mistake on a true positive agent $u$ if and only if $u$ cannot reach $S_h$, and $h$ makes a mistake on a true negative agent $u$ if and only if $u$ can reach $S_h$. \subsubsection{Upper Bound: Biased Weighted Majority-Vote Algorithm} \label{sec:unrealizable} Next, we propose an algorithm for the unrealizable setting. The algorithm is adapted from the \emph{weighted majority vote} algorithm, which maintains a weight for each hypothesis in $\mathcal{H}$, initially set to $1$. Similar to~\Cref{alg:halving}, at each round $t$, a new example arrives and gets observed as $v_t$. Let $W_+^t$ and $W_-^t$ denote the sums of weights of experts that predict $v_t$ as positive and negative, respectively, and let $W_t = W_+^t+W_-^t$. If $W_+^t\geq W_t/(\Delta+2)$, the algorithm predicts positive; otherwise it predicts negative.
If the algorithm makes a mistake on a true negative, then we decrease the weights of all experts that predicted $v_t$ as positive by a factor of $\gamma$. If the algorithm makes a mistake on a true positive, then we decrease the weights of all experts that labeled all the vertices in $N[v_t]$ as negative by a factor of $\gamma$. We formally present this algorithm in \Cref{alg:biased-weighted-maj-vote} and its mistake bound guarantee in \Cref{thm:biased-weighted-maj-vote-mistake-bound}. \begin{algorithm}[!ht] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetNoFillComment \Input{$G(\mathcal{X},\mathcal{E})$, $\mathcal{H}$} Set weights $w_0(h)\leftarrow 1$ for all classifiers $h\in \mathcal{H}$\; Let $\gamma=\frac{1}{e}$\; \For{$t=1,2,\cdots$}{ \tcc{the learner commits to a classifier $h_t$ that is constructed as follows:} \For{$v\in \mathcal{X}$}{ Let $W_t^+(v) = \sum_{h\in\mathcal{H}:h(v)=+1}w_t(h)$, $W_t^-(v) = \sum_{h\in\mathcal{H}:h(v)=-1}w_t(h)$, and $W_t = W_t^+(v)+W_t^-(v) = \sum_{h\in\mathcal{H}}w_t(h)$\; \eIf{$W_t^+(v)\geq W_t/(\Delta+2)$}{ $h_t(v)\leftarrow +1$\; } { $h_t(v)\leftarrow -1$\; } } \tcc{example $v_t$ is observed.} output prediction $h_t(v_t)$\; \tcc{If there was a mistake:} \If{$h_t(v_t)\neq y_t$}{ \eIf{$y_t=-1$}{ \tcc{ penalize the experts that label $v_t$ as positive.} $\mathcal{H'}\leftarrow \{h\in \mathcal{H}: h(v_t)=+1\}$\; } { \tcc{ penalize the experts that label all nodes in $N[v_t]$ as negative.} $\mathcal{H'}\leftarrow \{h\in \mathcal{H}: \forall x\in N[v_t], h(x)=-1\}$\; } if $h\in \mathcal{H}'$, then $w_{t+1}(h)\leftarrow\gamma\cdot w_t(h)$; otherwise, $w_{t+1}(h)\gets w_t(h)$\; } } \caption{Biased weighted majority-vote algorithm.} \label[algo]{alg:biased-weighted-maj-vote} \end{algorithm} \begin{theorem} \label{thm:biased-weighted-maj-vote-mistake-bound} \Cref{alg:biased-weighted-maj-vote} makes at most $e(\Delta+2)(\ln|\mathcal{H}|+\mathsf{OPT})$ mistakes against any adversary.
\end{theorem} \subsection{Lower Bound} \label{sec:lower-bound-deterministic} In this section, we show lower bounds on the number of mistakes made by any \emph{deterministic} learner against an adaptive adversary in both the realizable and agnostic settings. We present the lower bounds in \Cref{thm:deterministic-lower-bound}. \begin{theorem} \label{thm:deterministic-lower-bound} There exists a manipulation graph $G(\mathcal{X},\mathcal{E})$, a hypothesis class $\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}$, and an adaptive adversary, such that any deterministic learning algorithm has to make at least $\Delta-1$ mistakes in the realizable setting and $\Delta\cdot\mathsf{OPT}$ mistakes in the agnostic setting, where $\mathsf{OPT}$ is the minimum number of mistakes made by any classifier in the hypothesis class $\mathcal{H}$. \end{theorem} \begin{proof} Here, we use the same manipulation graph $G$ and expert class $\mathcal{H}$ as in \Cref{example:halving-fails}. The manipulation graph $G(\mathcal{X},\mathcal{E})$ is a star that includes a central node $x_0$ and $\Delta$ leaves $x_1,\cdots,x_{\Delta}$. The hypothesis set is $\mathcal{H}=\{h^1,\cdots,h^{\Delta}\}$, where each $h^i\in \mathcal{H}$ assigns $+1$ to $x_i$ and $-1$ to all other nodes in $G$ (\Cref{fig:lower-bound-deterministic}). In the agnostic setting, we construct an adaptive adversary that, upon observing $h_t$, can always pick a bad example $(u_t,y_t)$ such that $h_t$ fails to classify this example correctly (i.e., $\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)=1$), but this example can be successfully classified by all but one expert. The detailed construction is as follows: \begin{enumerate} \item If $h_t(x_0)=+1$, then the adversary picks $(u_t = x_j,y_t=-1)$ for an arbitrary $j\in[\Delta]$. Since $u_t$ can move to $x_0$ and get classified as positive by $h_t$, we have $\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)=1$. On the other hand, all experts except for $h^j$ classify this example correctly.
\item If $h_t(x)=-1$ for all nodes $x\in\mathcal{X}$, then the adversary picks $(u_t=x_0,y_t=+1)$. In this case, $\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)=1$. However, for every $h^i\in \mathcal{H}$, we have $\mathsf{BR}_{h^i}(u_t)=x_i$, so this example receives a positive classification and therefore $\ell(h^i,\mathsf{BR}_{h^i}(u_t),y_t)=0$. \item If $h_t(x_0)=-1$ and there exists $j\in[\Delta]$ such that $h_t(x_j)=+1$, then the adversary picks $(u_t=x_j,y_t=-1)$. In this case, $h_t$ will classify this example as a false positive. On the other hand, all experts except for $h^j$ will correctly classify it as negative. \end{enumerate} Following the above construction, the learner is forced to make a mistake in every round; however, in each round, at most one of the experts makes a mistake, implying that the total number of mistakes made by all experts is at most $T$. Since the number of experts is $\Delta$, by the pigeonhole principle there exists an expert that makes at most $T/\Delta$ mistakes. Therefore, $\mathsf{OPT}\le T/\Delta$, implying a mistake lower bound of $\Delta\cdot\mathsf{OPT}$. In the realizable setting, we use the same construction but only focus on the first $\Delta-1$ time steps, so that the learner is forced to make $\Delta-1$ mistakes while at least one expert has made no mistake so far; say $h^i$ is one such expert. After the first $\Delta-1$ steps, the adversary keeps showing the same agent $(x_i,+1)$ to the learner, so that expert $h^i$ remains realizable. \end{proof} We remark that \Cref{thm:deterministic-lower-bound} implies that no deterministic algorithm can achieve $o(\Delta)$ mistakes in the realizable setting or $o(\Delta)$ multiplicative regret in the agnostic setting.
Moreover, the construction shows that any deterministic algorithm is forced to err at every round in the worst-case agnostic setting, resulting in $\Omega(T)$ regret as long as $\Delta\ge2$. \section{Fractional Classifiers} \label{sec:fractional-model} \subsection{Model} In this section, we consider the randomized model where the learner uses a deterministic algorithm to output a probability distribution over classifiers at each round. After the learner commits to a distribution, an agent $(u_t,y_t)$ (chosen by an adversary) best responds to this distribution by selecting the $v_t$ that maximizes their expected utility. In particular, let $P_{h_t}(v)\in[0,1]$ denote the induced probability of $h_t$ classifying node $v$ as positive; then the agent's best response function can be written as: \begin{align} v_t\in \mathsf{BR}_{h_t}(u_t)\triangleq\arg\max_{v\in \mathcal{X}} \Big[P_{h_t}(v)-\mathsf{Cost}(u_t,v)\Big].\label{eq:fractional-agent-util} \end{align} As a result of manipulation, the observable feature $v_t\in \mathsf{BR}_{h_t}(u_t)$ is revealed to the learner, and the learner suffers an expected loss of \begin{align} \E\left[\ell(h_t,v_t,y_t)\right]=\Pr\left[{y_t\neq h_t(v_t)}\right]=\begin{cases} P_{h_t}(v_t),&\text{if }{y_t=-1};\\ 1-P_{h_t}(v_t),&\text{if }{y_t=+1}. \end{cases} \label{eq:fractional-learner-util} \end{align} From \Cref{eq:fractional-agent-util,eq:fractional-learner-util}, we can see that the set of induced probabilities $P_{h_t}(u)$ for $u\in\mathcal{X}$ serves as a sufficient statistic for both the learner and the agent. Therefore, instead of committing to a distribution and having the agents calculate the set of induced probabilities, the learner can directly commit to a \emph{fractional classifier} $h_t$ that explicitly specifies the probabilities $P_{h_t}(u)\in[0,1]$ for each $u\in\mathcal{X}$.
Then, after the agent best responds to these fractions and reaches $v_t$, the random label $h_t(v_t)$ is realized according to the proposed probability $P_{h_t}(v_t)$. We remark that deterministic classifiers are special cases of fractional classifiers in which $P_h(u)\in\{0,1\}$. Since the experts in $\mathcal{H}$ are all deterministic, the benchmark $\mathsf{OPT}$, which is the minimum number of mistakes achieved by the best expert in hindsight, is still a deterministic value. In this setting, we consider two cost functions: the \emph{weighted-graph} cost function, where the manipulation cost from $u$ to $v$ is defined as the total weight on the shortest path from $u$ to $v$; and the \emph{free-edges} cost function, where the first hop is free and any further hop costs infinity. Recall that agents break ties by preferring features with higher expected values, so the agents in the \emph{free-edges} cost model will move to a neighbor $v_t\in N[u_t]$ with the highest probability of getting classified as positive. In \Cref{sec:fractional-lower-bound}, we show that this type of randomness is of limited help, as it can only reduce the $\Delta$-multiplicative regret by a constant factor. This is evidenced by our lower bounds in \Cref{thm:fractional-onehop-lower-bound,thm:fractional-multi-hops-lower-bound}, which state that any algorithm using this type of randomness must suffer $\frac{\Delta}{2}$-multiplicative regret in the free-edges model and $\frac{{\Delta}}{4}$-multiplicative regret in the weighted-graph model. We also complement this result by providing nearly-matching upper bounds in \Cref{sec:fractional-upper-bound}.
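The best response in \Cref{eq:fractional-agent-util} under the weighted-graph cost can be sketched concretely as follows. This is our own illustrative code, not part of the paper's construction; the Dijkstra helper, the node encoding, and the specific fractions are assumptions chosen to mirror the star instance used in the lower bounds below.

```python
import heapq

# Sketch (ours) of the fractional best response on a weighted star graph:
# the agent at u picks argmax_v P(v) - Cost(u, v), where Cost is the
# shortest-path distance; ties favor the higher fraction.

def shortest_path_costs(adj, src):
    """Dijkstra: adj[u] is a list of (v, weight) pairs."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def fractional_best_response(adj, fractions, u):
    cost = shortest_path_costs(adj, u)
    return max(cost, key=lambda v: (fractions[v] - cost[v], fractions[v]))

# Star with Delta = 3 leaves and uniform edge weight w = 0.5 + eps, as in
# the weighted-graph lower bound; node 0 is the center.
w = 0.51
adj = {0: [(1, w), (2, w), (3, w)], 1: [(0, w)], 2: [(0, w)], 3: [(0, w)]}
fractions = {0: 0.9, 1: 0.1, 2: 0.6, 3: 0.6}

# Leaf 1 has fraction below p - w, so it gains by moving to the center;
# leaf 2 has fraction at least p - w, so it stays put.
assert fractional_best_response(adj, fractions, 1) == 0
assert fractional_best_response(adj, fractions, 2) == 2
```
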
\subsection{Lower Bound} \label{sec:fractional-lower-bound} \begin{theorem} \label{thm:fractional-onehop-lower-bound} In the model of ``free edges'' cost functions, for any sequence of fractional classifiers chosen by a deterministic algorithm, there exists an adaptive adversary such that the learner must make at least $\frac{\Delta}{2}\cdot \mathsf{OPT}$ mistakes in expectation. \end{theorem} \begin{proof} Consider a manipulation graph $G(\mathcal{X},\mathcal{E})$ that is a star with a central node $x_0$, and $\Delta$ leaves $x_1,\cdots,x_{\Delta}$. Hypothesis set $\mathcal{H}=\{h^1,\cdots,h^{\Delta}\}$, where each $h^i\in \mathcal{H}$ assigns positive to $x_i$ and negative to all other nodes in $G$, as shown in \Cref{fig:lower-bound-deterministic}. We construct an adversary that picks $(u_t,y_t)$ upon receiving the fractional classifier $h_t$ at each round, such that $h_t$ makes a mistake with probability at least 0.5 whereas all but one expert predicts correctly. Our detailed construction is as follows: \begin{enumerate} \item If $P_{h_t}(x_0)\geq 0.5$, then the adversary picks $(u_t = x_j,y_t=-1)$ for an arbitrary $j\in[\Delta]$. Since $x_0\in N[u_t]$ and $v_t$ is the node in $N[u_t]$ that achieves the largest success probability, we have $\E[\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)]=P_{h_t}(v_t)\ge P_{h_t}(x_0)\ge0.5$. On the other hand, only $h^j\in \mathcal{H}$ makes a mistake on $(x_j,-1)$, and all other experts classify it correctly. \item If $P_{h_t}(x_0)<0.5$ but there exists $j\in[\Delta]$ such that $P_{h_t}(x_j)\geq 0.5$, then the adversary picks $(u_t=x_j,y_t=-1)$. Since the closed neighborhood of $x_j$ only contains $\{x_j,x_0\}$, we have $v_t=u_t$ and $\E[\ell(h_t,v_t,y_t)]\geq 0.5$. In addition, all experts but $h^j$ classify this example correctly. \item If neither of the above two conditions holds, i.e., $P_{h_t}(v)<0.5$ for all nodes $v\in \mathcal{X}$, then the adversary picks $(u_t=x_0,y_t=+1)$. 
In this case, no matter how the agent chooses $v_t$, the probability $P_{h_t}(v_t)$ cannot exceed 0.5. As a result, the learner suffers an expected loss of $\E[\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)]=1-P_{h_t}(v_t)\ge 0.5$. On the other hand, all experts classify this example correctly because $x_0$ can move to the corresponding leaf node and get classified as positive. \end{enumerate} As a result, the learner has an expected loss of at least $0.5$ in each round, which implies $$\E[\mathsf{Mistake}(T)]=\E\left[\sum_{t=1}^T \ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)\right]\geq T/2.$$ However, in each round, at most one expert makes a mistake. Following the same arguments as in \Cref{thm:deterministic-lower-bound}, we conclude that $\mathsf{OPT}\le \frac{T}{\Delta}$. Putting everything together, the expected number of mistakes made by the learner is at least $$\E[\mathsf{Mistake}(T)]\ge\frac{T}{2}=\frac{\Delta}{2}\cdot\frac{T}{\Delta}\ge \frac{\Delta}{2}\cdot \mathsf{OPT}.$$ \end{proof} \begin{theorem} In weighted graphs, for any sequence of fractional classifiers chosen by a deterministic algorithm, there exists an adaptive adversary such that the learner must make at least $\frac{{\Delta}}{4}\cdot \mathsf{OPT}$ mistakes in expectation. \label{thm:fractional-multi-hops-lower-bound} \end{theorem} \begin{proof} Again, we consider the same graph structure as in \Cref{thm:fractional-onehop-lower-bound}, where $G(\mathcal{X},\mathcal{E})$ is a star with central node $x_0$ and leaf nodes $x_1,\cdots,x_{\Delta}$. Assume each edge $e\in\mathcal{E}$ has the same weight $w(e)=w$, where $w\triangleq 0.5+\epsilon$ for an arbitrarily small constant $\epsilon>0$. Note that in this graph, no agent has an incentive to travel more than one edge, because doing so would cost more than $1$. We work with the hypothesis set $\mathcal{H}=\{h^1,\cdots,h^{\Delta}\}$, assuming each $h^i\in \mathcal{H}$ assigns positive to $x_i$ and negative to all other nodes in $\mathcal{X}$.
We construct an adversary that picks $(u_t,y_t)$ upon receiving the fractional classifier $h_t$ as follows, such that $h_t$ makes a mistake with probability at least $\frac{1}{4}$, whereas all but one expert predict correctly. Let $p=\max_{x\in\mathcal{X}}P_{h_t}(x)$ denote the maximum fraction on any node. If $p<w$, then the adversary can simply pick $(u_t=x_0,y_t=+1)$, so that the learner suffers an expected loss of $$\E[\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)]=1-P_{h_t}(v_t)\ge 1-p>1-w>\frac{1}{4}.$$ As for the experts, all of them classify this agent correctly. Therefore, it suffices to consider the case $p\ge w$ for the rest of the proof. We consider two cases depending on where $p$ is achieved: \begin{enumerate} \item If $p$ is achieved at a leaf node $x_i$ (i.e., $P_{h_t}(x_i)=p$) for some $i\in[\Delta]$, then the adversary chooses $(u_t=x_i,y_t=-1)$. We claim that $u_t=v_t=x_i$: the agent is already placed at the node with the highest fraction, so it has no incentive to pay a positive cost to reach a node with a smaller fraction. As a result, we have $\E[\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)]=P_{h_t}(x_i)=p\ge w>\frac{1}{4}$. On the other hand, all experts but $h^i$ classify this agent correctly. \item If $p$ is achieved at the central node $x_0$, i.e., $P_{h_t}(x_0)=p$, then every leaf node has a fraction of at most $p$. First suppose at least one leaf node $x_i$ ($i\in[\Delta]$) satisfies $P_{h_t}(x_i)< p-w$. In this case, the adversary chooses $(u_t=x_i,y_t=-1)$. Since $P_{h_t}(x_0)-\mathsf{Cost}(u_t,x_0)=p-w > P_{h_t}(u_t)-\mathsf{Cost}(u_t,u_t)$, the agent will select $v_t=x_0$ as the best response and achieve a success probability of $P_{h_t}(x_0)=p$. Therefore, the learner has expected loss $\E[\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)]=p\ge w>\frac{1}{4}$. On the other hand, all experts but $h^i$ label this agent correctly.
\item Finally, consider the last case, where $p$ is achieved at the central node $x_0$ and all the leaf nodes have fractions of at least $p-w$. In this case, no agent has an incentive to move, regardless of their initial position. The adversary can select the next agent as follows: if $1-p\ge p-w$, then choose $(u_t=x_0,y_t=+1)$ and make the learner err with probability $1-P_{h_t}(x_0)=1-p$; otherwise, choose $(u_t=x_i,y_t=-1)$ for an arbitrary $i\in[\Delta]$ and make the learner err with probability $P_{h_t}(x_i)\ge p-w$. In either case, the learner suffers an expected loss of $\E[\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)]\ge\max\{1-p,p-w\}\ge\frac{1-w}{2}=\frac{1}{4}-\frac{\epsilon}{2}$. As for the experts, at most one of them makes a mistake. \end{enumerate} Putting together all the cases and letting $\epsilon\to0$, the learner is forced to make a mistake with probability at least $\frac{1}{4}$ in each round, i.e., $\sum_{t=1}^T \E[\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)]\geq T/4$. However, in each round, at most one of the experts makes a mistake, implying that $\mathsf{OPT}\le\frac{T}{\Delta}$ as proved in \Cref{thm:deterministic-lower-bound}. As a result, the total loss of the learner is bounded as $$\E[\mathsf{Mistake}(T)]=\E\left[\sum_{t=1}^T \ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)\right]\geq \frac{T}{4} \ge\frac{\Delta}{4}\cdot\mathsf{OPT}.$$ The proof is thus complete. \end{proof} \begin{remark} In a related work, \citet{Braverman2020TheRO} showed that introducing randomization in the classification rule can increase the learner's classification accuracy, and that the optimal randomized classifier has the structure that agents are better off not manipulating. In case (3) of the proof of \Cref{thm:fractional-multi-hops-lower-bound}, we show that even when the learner chooses such an ``optimal'' classifier, under which agents have no incentive to manipulate, the adversary is still able to impose a high misclassification error.
This example shows the limitations of fractional classifiers. \end{remark} \subsection{Upper Bound} \label{sec:fractional-upper-bound} In this section, we show how to use the idea of \Cref{alg:biased-weighted-maj-vote} to obtain upper bounds in the randomized classifiers model. \begin{proposition} In the free-edges model, \Cref{alg:biased-weighted-maj-vote} achieves a mistake bound of \[\mathsf{Mistake}(T)\le e(\Delta+2)(\ln|\mathcal{H}|+\mathsf{OPT}).\] \end{proposition} \begin{proof} To prove this proposition, it suffices to show that if the learner uses deterministic classifiers as a special case of fractional classifiers, then the \emph{free-edges} cost model and the \emph{unweighted-graph} cost model induce the same best response functions. In fact, in both models, agents manipulate their features if and only if their original node is labeled as negative and there exists a neighbor that is labeled as positive. Therefore, the two cost models yield the same best response behaviors to deterministic classifiers. As a result, \Cref{alg:biased-weighted-maj-vote} achieves the mistake bound of $e(\Delta+2)(\ln|\mathcal{H}|+\mathsf{OPT})$. \end{proof} Now we consider weighted manipulation graphs. In this setting, we can run \Cref{alg:biased-weighted-maj-vote} on the expanded manipulation graph $\Tilde{G}$, an unweighted graph constructed from $G$ by connecting all pairs of vertices $u,v$ in $G$ such that $\mathsf{Cost}(u,v)\leq 1$. As a result, we obtain a mistake bound in terms of $\Tilde{\Delta}$ instead of $\Delta$, where $\Tilde{\Delta}$ is the maximum degree of $\Tilde{G}$. \begin{proposition} \label{prop:fractional-multi-hop-upper-bound} Given a weighted manipulation graph $G$, running \Cref{alg:biased-weighted-maj-vote} on the expanded graph $\tilde{G}$ achieves a mistake bound of $\mathsf{Mistake}(T)\le e(\Tilde{\Delta}+2)(\ln|\mathcal{H}|+\mathsf{OPT})$, where $\Tilde{\Delta}$ is the maximum degree of $\Tilde{G}$.
\end{proposition} \begin{proof} After constructing $\Tilde{G}$, we can see that under any deterministic classifier, a manipulation from $u$ to $v$ happens in the weighted graph $G$ if and only if the same manipulation happens in the unweighted graph $\Tilde{G}$. Therefore, by running \Cref{alg:biased-weighted-maj-vote} on $\Tilde{G}$, we obtain a mistake bound in the original manipulation graph $G$ of $e(\Tilde{\Delta}+2)(\ln|\mathcal{H}|+\mathsf{OPT})$, where $\Tilde{\Delta}$ is the maximum degree of $\Tilde{G}$. \end{proof} \section{Introduction} \emph{Strategic classification} concerns the problem of learning classifiers that are robust to gaming by self-interested agents~\cite{10.1145/2020408.2020495,Hardt2016}. An example is deciding who should qualify for a loan and who should be rejected. Since applicants would like to be approved for a loan, they may spend effort on activities that do not truly change their underlying loan-worthiness but may cause the classifier to label them as positive. An example of such efforts is holding multiple credit cards. Such gaming behaviors have nothing to do with their true qualification but could increase their credit score and therefore their chance of getting a loan. Strategic classification is particularly challenging in the \emph{online} setting where data points arrive in an online manner. In this scenario, the way that examples manipulate depends on the \emph{current classifier}. Therefore, the examples' behavior changes over time and may differ from that of examples with similar features observed in the previous rounds. Additionally, there is no useful source of unmanipulated data since there is no assumption that the unmanipulated data comes from an underlying distribution. Strategic agents are modeled as having bounded manipulation power, and a goal of receiving a positive classification. The set of plausible manipulations has been characterized in two different ways in the literature.
The first model considers a geometric setting where each example is a point in the space that can move in a ball of bounded radius (e.g.,~\citet{dong2018strategic,chen2020learning,haghtalab-ijcai2020,ahmadi2021strategic,ghalme2021strategic}). Another model is an abstraction of feasible manipulations using a \emph{manipulation graph} that was first introduced by \citet{zhang2021incentive}. We follow the second model and formulate possible manipulations using a graph. Each possible feature vector is modeled as a node in this graph, and an edge $\vec{x}\rightarrow \vec{x'}$ in the manipulation graph implies that an agent with feature vector $\vec{x}$ may modify their features to $\vec{x'}$ if it helps them to receive a positive classification. We consider the problem of online strategic classification given an underlying manipulation graph. Our goal is to minimize the \emph{Stackelberg regret}, which is the difference between the learner's cumulative loss and the cumulative loss of the best fixed hypothesis against the same sequence of agents, but best-responding to this fixed hypothesis. In this paper, we consider three models with different levels of randomization. First, we consider the scenario where the learner can pick \emph{deterministic} classifiers. A well-known deterministic algorithm in the context of online learning is the \emph{halving} algorithm, which classically makes at most $O(\ln|\mathcal{H}|)$ mistakes when the target function belongs to class $\mathcal{H}$. We first show that when agents are strategic, the \emph{halving} algorithm fails completely and may end up making mistakes at every round even in this realizable case. Moreover, we show that no deterministic algorithm can achieve a mistake bound of $o(\Delta)$ in the strategic setting, where $\Delta$ is the maximum degree of the manipulation graph, even when $|\mathcal{H}|=O(\Delta)$.
We complement this result with a general algorithm achieving a mistake bound of $O(\Delta\ln|\mathcal{H}|)$ in the strategic setting. We further extend this algorithm to achieve $O(\Delta)$ multiplicative regret bounds in the non-realizable (agnostic) strategic setting, giving matching lower bounds as well. Our next model is a {\em fractional} model where at each round the learner chooses a probability distribution over classifiers, inducing expected values on each vertex (the probability of each vertex being classified as positive), which the strategic agents respond to. The agents' goal is to maximize their utility: their chance of being classified as positive minus their modification cost. For this model, we show regret upper and lower bounds similar to the deterministic case. In the last model, the learner again picks a probability distribution over classifiers, but now, while the adversary who selects the next agent must respond to this probability distribution, the agent responds to the actual classifier drawn from this distribution. That is, in this model, the random draw occurs after the adversary's selection of the agent but before the agent responds, whereas in the fractional model the random draw occurs after the agent responds. Surprisingly, we show this model is not only more transparent to the agents, but also more advantageous to the learner than the fractional model. We argue that transparency can make the learner and agents cooperate against the adversary in a way that would be more beneficial to both parties, which is an interesting phenomenon that differentiates the strategic setting from the nonstrategic one. In this model, we design randomized algorithms that achieve sublinear regret bounds against both oblivious and adaptive adversaries. We give a detailed overview of our results in~\Cref{sec:overview-results}. \section{Randomized Algorithms} In this section, we propose another model of randomization.
We show that, unlike the fractional classifiers model discussed in \Cref{sec:fractional-model}, this randomized model induces a different type of manipulation behavior, which success probabilities (fractions) no longer suffice to characterize. In this model, the interaction between the classifier, adversary, and agents proceeds as follows: At each round $t$, the learner commits to a probability distribution $\mathcal{D}_t$ over a set of deterministic classifiers $\{h:\mathcal{X}\to\mathcal{Y}\}$ and promises to use $h_t\sim\mathcal{D}_t$. Based on this mixed strategy $\mathcal{D}_t$ (and before the random classifier $h_t$ gets realized), the adversary specifies the next agent to be $(u_t,y_t)$. Then comes the most important step that differentiates this model from the fractional classifiers setting: the learner samples $h_t\sim \mathcal{D}_t$ and \emph{releases it to the agent}, who then best responds to the true $h_t$ by modifying their features from $u_t$ to $v_t$. The learner aims to minimize the (pseudo) regret with respect to class $\mathcal{H}$: \begin{align} \E\left[\mathsf{Regret}(T)\right]\triangleq \E\left[\sum_{t=1}^T\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)\right]-\min_{h^\star\in\mathcal{H}}\left[\sum_{t=1}^T \ell(h^\star,\mathsf{BR}_{h^\star}(u_t),y_t)\right]. \end{align} We show that, surprisingly, releasing the random choices to the agents can help the learner to surpass the $\Omega(\Delta\cdot\mathsf{OPT})$ lower bound. In this model, we propose three algorithms that achieve $o(T)$ regret, which does not depend on $\mathsf{OPT}$ or $\Delta$. \subsection{Between bandit and full-information feedback} Before presenting our algorithms, we first investigate the feedback information available to the learner at the end of each round. After the agents respond, the learner observes not only the loss of the realized expert ($\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)$), but also the best response state $v_t=\mathsf{BR}_{h_t}(u_t)$ and the true label $y_t$.
However, because the original state $u_t$ is hidden, the losses of other experts $\ell(h',\mathsf{BR}_{h'}(u_t),y_t)$ for $h'\neq h_t$ are not fully observable. The feedback structure is thus intermediate: potentially richer than bandit feedback, which only contains $\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)$ for the realized expert $h_t$; but sparser than full-information feedback, which contains $\ell(h',\mathsf{BR}_{h'}(u_t),y_t)$ for all $h'\in\mathcal{H}$. Nevertheless, we remark that the learner is capable of going beyond the bandit feedback using the additional information $(v_t,y_t)$. For instance, if $h'$ fully agrees with the realized $h_t$ on the entire 2-hop neighborhood of $v_t$, then $\ell(h',\mathsf{BR}_{h'}(u_t),y_t)=\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)$. Another scenario is when the agent ends up reporting truthfully ($u_t=v_t$), so the learner can explicitly calculate the best response $\mathsf{BR}_{h'}(u_t)$ and the loss $\ell(h',\mathsf{BR}_{h'}(u_t),y_t)$ for all $h'\in\mathcal{H}$. In \Cref{sec:mixed-strategy-adaptive}, we consider a learning algorithm that discards the additional information and uses only bandit feedback, which achieves $\mathcal{O}\left(\sqrt{T |\mathcal{H}|\ln|\mathcal{H}|}\right)$ regret. To remove the polynomial dependency on $|\mathcal{H}|$, we propose a generic algorithmic idea that uses an all-positive classifier at random time steps to encourage the truthful reporting of agents. In this way, the learner can obtain full-information feedback on these time steps, which accelerates the learning process. In \Cref{sec:mixed-strategy-oblivious}, we use this idea to achieve $\mathcal{O}\left(T^{\frac{2}{3}}\ln^{\frac{1}{3}} |\mathcal{H}|\right)$ regret against any oblivious adversary. In \Cref{sec:adaptive-adversaries}, we extend this idea to the general case of adaptive adversaries and obtain a bound of $\widetilde{\mathcal{O}}\left(T^{\frac{3}{4}}\ln^{\frac{1}{4}} |\mathcal{H}|\right)$.
We also show that this framework could be useful in other strategic settings. For example, in \Cref{sec:strategic-perceptron}, we apply it to the setting of strategic online linear classification and obtain a mistake bound in terms of the hinge loss of the original examples when the original data points are not linearly separable. \subsection{Algorithm based on bandit feedback} \label{sec:mixed-strategy-adaptive} As a warmup, we show that the learner can use the vanilla EXP3 algorithm~\citep{auer2002nonstochastic}, which is a standard multi-armed bandit algorithm, to obtain sublinear regret. This algorithm works by maintaining a distribution over $\mathcal{H}$ from which classifier $h_t$ is sampled, where the weight of each expert is updated according to $$p_{t+1}(h)\propto p_t(h)\cdot \exp\left(-\eta\cdot\frac{\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)\cdot\indicator{h=h_t}}{p_t(h)}\right),\ \forall h\in\mathcal{H}.$$ It is known that running EXP3 with learning rate $\eta=\sqrt{\frac{2\ln|\mathcal{H}|}{|\mathcal{H}|T}}$ will achieve a regret bound of $O(\sqrt{T|\mathcal{H}|\ln|\mathcal{H}|})$; see \citet{auer2002nonstochastic} for a proof. \subsection{Algorithm based on full-information acceleration} \label{sec:mixed-strategy-oblivious} In this section, we provide an algorithm with $\mathcal{O}\left(T^{\frac{2}{3}}\ln^{\frac{1}{3}} |\mathcal{H}|\right)$ regret against \emph{oblivious adversaries}. An oblivious adversary is one who chooses the sequence of agents $\{(u_t,y_t)\}_{t=1}^T$ before the interaction starts, irrespective of the learner's decisions during the game. Our algorithm (\Cref{alg:reduction-MAB-FIB}) uses a reduction from the partial-information model to the full-information model, which is similar in spirit to \citet{awerbuch2004adaptive} and \citet[Chapter 4.6]{blum_mansour_2007}.
The main idea is to divide the timeline $1,\cdots, T$ into $K$ consecutive blocks $B_1,\cdots,B_K$, where $B_j=\{(j-1)(T/K)+1,\cdots,j(T/K)\}$, and simulate a full-information online learning algorithm (Hedge) with each block representing a single step. Within each block $B_j$, our algorithm uses the same distribution over the experts, except that it will also pick one time-step $\tau_j\sim B_j$ uniformly at random, and assign an all-positive classifier to $\tau_j$. The intention for this time step $\tau_j$ is to prevent the agent from manipulating and simultaneously obtain the loss of every expert. This observed loss then serves as an unbiased loss estimate for the average loss over the same block. In the remainder of this section, we formally present this algorithm in \Cref{alg:reduction-MAB-FIB} and provide its regret guarantee in \Cref{thm:regret-alg-oblivious}. \begin{algorithm}[!ht] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetNoFillComment $K\gets T^{\frac{2}{3}}\ln^{\frac{1}{3}} |\mathcal{H}|$\; Partition the timeline $\{1,\cdots, T\}$ into $K$ consecutive blocks $B_1,\cdots,B_K$ where $B_j=\left\{(j-1)\cdot\frac{T}{K}+1,\cdots, j\cdot\frac{T}{K}\right\}$\; Initialize $w_1(h)\gets1,\ \forall h\in\mathcal{H}$\; \For{$1\leq j\leq K$}{ Sample $\tau_j \in B_j$ uniformly at random\; \For{$t\in B_j$}{ \tcc{Commit to drawing classifier $h_t\sim\mathcal{D}_t$, with $\mathcal{D}_t$ defined as follows:} \eIf{$t=\tau_j$}{ $\mathcal{D}_t$ puts all weight on a classifier that labels every node as positive\; } { $\mathcal{D}_t$ is a distribution over $\mathcal{H}$, where $p_j(\cdot)=\frac{w_j(\cdot)}{W_j}$, $W_j=\sum_{h\in\mathcal{H}} w_j(h)$\; } \tcc{Observe agent $(v_t,y_t)$.} } \tcc{Update the distribution at the end of $B_j$} \For{$h\in \mathcal{H}$}{ $w_{j+1}(h)\leftarrow w_j(h) e^{-\eta \cdot\hat{\ell}_j(h)}$, where $\hat{\ell}_j(h)=\ell(h,\mathsf{BR}_h(v_{\tau_j}),y_{\tau_j})$\; } } \caption{Randomized algorithm against oblivious adversaries}
\label[algo]{alg:reduction-MAB-FIB} \end{algorithm} \begin{theorem} \Cref{alg:reduction-MAB-FIB} with parameter $K=T^{\frac{2}{3}}\ln^{\frac{1}{3}} |\mathcal{H}|$ achieves a regret bound of $\mathcal{O}\left(T^{\frac{2}{3}}\ln^{\frac{1}{3}} |\mathcal{H}|\right)$ against any oblivious adversary. \label{thm:regret-alg-oblivious} \end{theorem} \begin{proof} For notational convenience, we denote the average loss over block $B_j$ as $\bar{\ell}_j(h)=\frac{\sum_{t\in B_j}\ell_t(h)}{|B_j|}$, where for each expert $h\in\mathcal{H}$, $\ell_t(h)=\ell(h,\mathsf{BR}_h(u_t),y_t)$. We first claim that for all $h$, $\hat{\ell}_j(h)=\ell(h,\mathsf{BR}_h(v_{\tau_j}),y_{\tau_j})$\footnote{Note that here $v_{\tau_j}$ is the best response to $h_{\tau_j}$ and $\mathsf{BR}_h(v_{\tau_j})$ is the best response to $h$ when the agent is at location $v_{\tau_j}$ (which we will show is the same as $u_{\tau_j}$).} is an unbiased estimator of the average loss $\bar{\ell}_j(h)$, i.e., $\E_{\tau_j\sim B_j}[\hat{\ell}_j(h)]=\bar{\ell}_j(h)$. This is because the algorithm predicts positive on every state at time $\tau_j$, so the agent reports truthfully ($u_{\tau_j}=v_{\tau_j}$), thus $\hat{\ell}_j(h)=\ell(h,\mathsf{BR}_h(u_{\tau_j}),y_{\tau_j})=\ell_{\tau_j}(h)$ can be observed for any expert $h$. Since $\tau_j$ is sampled from $B_j$ uniformly at random, we have $$\E_{\tau_j\sim B_j}[\hat{\ell}_j(h)]=\E_{\tau_j\sim B_j}[\ell_{\tau_j}(h)]=\bar{\ell}_j(h).$$ Since $\tau_j$ is sampled independently after the distribution $p_j$ is chosen, the above claim implies that for any block $B_j$ and any $p_j$: \begin{align} \E_{h\sim p_j}\left[\bar{\ell}_j(h)\right] = \E_{\tau_j}\E_{h\sim p_j}\left[\hat{\ell}_j(h)\right].
\label{eq:claim-unbiased} \end{align} Therefore, inside each block $B_j$ and conditioning on the history before block $B_j$, the total loss of \Cref{alg:reduction-MAB-FIB} can be bounded as follows: \begin{align} \E\left[\sum_{t\in B_j}\ell_t(h_t)\right]=&\indicator{\Tilde{y}_{\tau_j}\neq y_{\tau_j}}+\sum_{t\in B_j,\ t\neq\tau_j}\E_{h\sim p_j}\left[{\ell}_t(h)\right]\nonumber\\ \le&1+\sum_{t\in B_j}\E_{h\sim p_j}\left[{\ell}_t(h)\right]=1+|B_j|\cdot\E_{h\sim p_j}\left[\bar{\ell}_j(h)\right],\label{eq:reduction-tmp1}\\ =&1+\frac{T}{K}\E_{\tau_j}\E_{h\sim p_j}\left[\hat{\ell}_j(h)\right],\label{eq:reduction-tmp2} \end{align} where $\Tilde{y}_{\tau_j}$ denotes the prediction of the all-positive classifier used at round $\tau_j$, the inequality \eqref{eq:reduction-tmp1} holds because $\indicator{\Tilde{y}_{\tau_j}\neq y_{\tau_j}}\le1$ and the loss $\ell_t$ is always nonnegative, and \eqref{eq:reduction-tmp2} follows from the claim in \eqref{eq:claim-unbiased}. Summing over $K$ blocks and taking the expectation over $\tau_1,\cdots,\tau_K$, we obtain an upper bound on the expected total loss of \Cref{alg:reduction-MAB-FIB}: \begin{align} \E\left[\sum_{t=1}^T\ell_t(h_t)\right]=&\sum_{j=1}^K\E\left[\sum_{t\in B_j}\ell_t(h_t)\right] \le K+\frac{T}{K}\E_{\tau_1,\cdots,\tau_K}\left[\sum_{j=1}^K\E_{h\sim p_j}\left[\hat{\ell}_j(h)\right]\right].\label{eq:reduction-tmp6} \end{align} From the regret guarantee of Hedge applied to the loss sequence $\hat{\ell}_1,\cdots,\hat{\ell}_K$, we have \begin{align} {\sum_{j=1}^K \E_{h\sim p_j}\left[\hat{\ell}_j(h)\right]} -{\min_{h^\star \in \mathcal{H}}\sum_{j=1}^K \hat{\ell}_j(h^\star)} \le \mathcal{O}\Big(\sqrt{K\ln|\mathcal{H}|}\Big).
\end{align} Therefore, taking the expectation over $\tau_1,\cdots,\tau_K$, we obtain \begin{align} \E_{\tau_1,\cdots,\tau_K}\left[{\sum_{j=1}^K \E_{h\sim p_j}\left[\hat{\ell}_j(h)\right]}\right] \le& \E_{\tau_1,\cdots,\tau_K}\left[{\min_{h^\star \in \mathcal{H}}\sum_{j=1}^K \hat{\ell}_j(h^\star)}\right] + \mathcal{O}\Big(\sqrt{K\ln|\mathcal{H}|}\Big)\nonumber\\ \le& \min_{h^\star \in \mathcal{H}}\E_{\tau_1,\cdots,\tau_K}\left[{\sum_{j=1}^K \hat{\ell}_j(h^\star)}\right] + \mathcal{O}\Big(\sqrt{K\ln|\mathcal{H}|}\Big)\label{eq:reduction-tmp3}\\ =&\min_{h^\star \in \mathcal{H}}\left[{\sum_{j=1}^K \bar{\ell}_j(h^\star)}\right] + \mathcal{O}\Big(\sqrt{K\ln|\mathcal{H}|}\Big).\label{eq:reduction-tmp4} \end{align} In the above equations, \eqref{eq:reduction-tmp3} is due to Jensen's inequality and \eqref{eq:reduction-tmp4} is from the unbiasedness property established in \Cref{eq:claim-unbiased}. Finally, putting \Cref{eq:reduction-tmp6,eq:reduction-tmp4} together, and using the definition of the average loss $\bar{\ell}_j$, we conclude that \begin{align*} \E\left[\sum_{t=1}^T\ell_t(h_t)\right]\le& K+\frac{T}{K}\left(\min_{h^\star \in \mathcal{H}}\left[{\sum_{j=1}^K \bar{\ell}_j(h^\star)}\right] + \mathcal{O}\Big(\sqrt{K\ln|\mathcal{H}|}\Big)\right)\\ =&\min_{h^\star\in \mathcal{H}}\sum_{j=1}^K\sum_{t\in B_j}{\ell}_t(h^\star)+\mathcal{O}\left(T\sqrt{\frac{\ln|\mathcal{H}|}{K}}\right)+K. \end{align*} Setting $K=T^{\frac{2}{3}}\ln^{\frac{1}{3}} |\mathcal{H}|$ gives the final regret bound of $\mathcal{O}\left( T^{2/3}\ln^{1/3}|\mathcal{H}|\right)$. \end{proof} \subsection{Discussion on transparency} \label{sec:discussion-transparency} In the sections above, we have shown that making random choices \emph{fully transparent} to strategic agents can provably help the learner to achieve sublinear regret. This is in contrast to the fractional model, where we have lower bound examples showing that keeping random choices \emph{fully opaque} to the agents leads to linear regret.
The contrasting results in these two models reveal a fundamental difference between strategic and non-strategic (adversarial) settings: unlike the adversarial setting where learners benefit more from hiding the randomness, in the strategic setting, the learner benefits more from being transparent. At a high level, this is because the relationship between the learner and strategic agents is not completely opposing: instead, the utility of the learner and agents can align to a certain degree. To be more specific, in our online strategic classification setting, there are three effective players in the game: the learner who selects the classification rule, the adversary who chooses the initial features of agents, and the strategic agents who best respond to the classification rule. From the learner's perspective, the only malicious player is the adversary, whereas the agent has a known, controllable best response rule. In the fractional classifiers model, both the adversary and the agents face the same amount of information (which is the set of fractions). Although the opacity can prevent the adversary from selecting worst-case agents that force the learner to err with probability $1$ (as in the lower bound examples of \Cref{thm:deterministic-lower-bound}), it also reduces the learner's control over the strategic behavior of agents. As a result, the potentially rich structure of randomness collapses to the set of deterministic fractional values, so the learner is still forced to make mistakes with a constant probability. On the contrary, in the randomized algorithms model, the learner can increase her leverage in controlling the agents' best response behavior, and simultaneously reduce the adversary's ability to use the strategic nature of agents to hide information from the learner. Both are achieved by giving the agents more information (i.e., the realized classifiers).
In other words, the learner benefits from ``colluding'' with the agents and competing against the malicious adversary in unity. This idea is demonstrated by \Cref{alg:reduction-MAB-FIB,alg:reduction-adaptive}, where the learner occasionally uses an all-positive classifier to encourage the truthful reporting of agents, thus making the adversary unable to benefit from hiding the true features from the learner. \section{Model} \subsection{Strategic Classification} Let $\mathcal{X}$ denote the space of agents' features, and $\mathcal{Y}=\{+1,-1\}$ denote the space of labels. We consider the task of sequential classification where the learner aims to classify a sequence of agents $\{(u_t,y_t)\}_{t=1}^T$ that arrive in an online fashion. Here, we assume $u_t\in\mathcal{X}$ is the true feature set of agent $t$ and $y_t\in\mathcal{Y}$ is the true label. We call agents with $y_t=+1$ \emph{true positives}, and the ones with $y_t=-1$ \emph{true negatives}. Importantly, we make minimal assumptions on the sequence of agents, and our results apply to the case of adversarially chosen sequences. A hypothesis $h:\mathcal{X}\rightarrow \mathcal{Y}$ (also called a {\em classifier} or an {\em expert}) is a function that assigns labels to the agents $u\in\mathcal{X}$. Given a hypothesis class $\mathcal{H}\subseteq\{h:\mathcal{X}\to\mathcal{Y}\}$, our goal is to bound the total number of mistakes made by the learner, compared to the best classifier $h^\star\in\mathcal{H}$ in hindsight. To model the gaming behavior in real-life classification tasks, we work with the setting of \emph{strategic classification}, in which agents have the ability to modify their features at a given cost and reach a new observable state. Formally, strategic classification can be described as a repeated Stackelberg game between the learner (leader) and the agents (followers). At each step $t\in[T]$, the learner first publicly commits to a classifier $h_t$.
Then, the $t$-th agent $(u_t,y_t)$ arrives and responds to $h_t$ by modifying their features from $u_t$ to $v_t$. As a result of manipulation, only $v_t$ (instead of $u_t$) is observable to the learner. We assume that $v_t$ is chosen as a best response ($\mathsf{BR}$) to the announced rule $h_t$, such that the agent's utility is maximized: \begin{align} v_t\in\mathsf{BR}_{h_t}(u_t)\triangleq\arg\max_{v\in \mathcal{X}} \Big[\mathsf{Value}(h_t(v))-\mathsf{Cost}(u_t,v)\Big]. \end{align} Here, $\mathsf{Value}(h_t(v))$ indicates the value of outcome $h_t(v)$, which is a binary function that takes the value of $1$ when $h_t(v)=+1$, and $0$ when $h_t(v)=-1$. In~\Cref{sec:fractional-model}, we consider the generalization of agents best responding to a probability distribution over classifiers, where $\mathsf{Value}(h_t(v))$ becomes the induced expectation on node $v$, i.e., the probability of $v$ getting classified as positive by $h_t$. Equivalently, we refer to $h_t$ as a \emph{fractional classifier} and the induced probabilities as \emph{fractions}. $\mathsf{Cost}(u_t,v)$ is a known, non-negative cost function that captures the cost of modifying features from $u_t$ to $v$. It is natural to assume $\mathsf{Cost}(u,u)=0$ for all $u\in \mathcal{X}$. We use $v_t \in \mathsf{BR}_{h_t}(u_t)$ to denote the best response of agent $u_t$ at time-step $t$. Ties are broken by always preferring features with higher $\mathsf{Value}(h_t(\cdot))$, and preferring to stay put, i.e., $u_t=v_t$, if $u_t$ is among the set of best responses that achieve the highest value. \textbf{Learner's Objective:} The learner's loss is defined as the misclassification error on the observable state: $\ell(h_t,v_t,y_t)=\indicator{y_t\neq h_t(v_t)}$. Since $v_t \in \mathsf{BR}_{h_t}(u_t)$ and has the highest value of $h_t(\cdot)$ according to the tie breaking rule, we also abuse the notation and write $\ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)=\indicator{y_t\neq\Big.
\max\left\{ h_t(v):\ {v\in \mathsf{BR}_{h_t}(u_t)}\right\}}$. The learner's goal is to minimize the Stackelberg regret with respect to the best hypothesis $h^\star\in\mathcal{H}$ in hindsight, had the agents best responded to $h^\star$: \begin{align} \mathsf{Regret}(T)\triangleq\sum_{t=1}^T \ell(h_t,\mathsf{BR}_{h_t}(u_t),y_t)-\min_{h^\star\in\mathcal{H}}\sum_{t=1}^T \ell(h^\star,\mathsf{BR}_{h^\star}(u_t),y_t). \end{align} For notational convenience, we use $\mathsf{OPT}$ to denote the optimal loss achieved by the best hypothesis: \begin{align} \mathsf{OPT}\triangleq\min_{h^\star\in\mathcal{H}}\sum_{t=1}^T \ell(h^\star,\mathsf{BR}_{h^\star}(u_t),y_t). \end{align} When $\mathsf{OPT}=0$, we call the sequence of agents \emph{realizable}, meaning that there exists a perfect classifier that never makes a mistake had agents best responded to it. Otherwise, when $\mathsf{OPT}>0$, we call it \emph{unrealizable} or \emph{agnostic}. \subsection{Manipulation Graph} We use a graph $G(\mathcal{X},\mathcal{E})$ to characterize the set of \emph{plausible manipulations}. In graph $G$, each node in $\mathcal{X}$ corresponds to a state (i.e., features), and each edge $e=(u,v)\in\mathcal{E}$ captures a plausible manipulation from $u$ to $v$. The cost function $\mathsf{Cost}(u,v)$ is defined as the sum of costs on the shortest path from $u$ to $v$, or $+\infty$ if such a path does not exist. We present our results for the case of undirected manipulation graphs and show how they can be extended to the case of directed graphs (\Cref{sec:directed-graphs}). To model the cost of each edge, we consider \emph{weighted graphs} in which each edge $e\in \mathcal{E}$ is associated with a nonnegative weight $w(e)\in[0,1]$. As a special case of the weighted graphs, we also consider \emph{unweighted graphs}, in which each edge takes unit cost, i.e., $w(e)=1$.
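As an illustration of these definitions, the shortest-path cost $\mathsf{Cost}(u,v)$ and the expanded graph $\tilde{G}$ used in \Cref{prop:fractional-multi-hop-upper-bound} can be computed as in the following Python sketch (the dictionary-based graph representation and function names are our own, not part of the paper):

```python
import heapq

def manipulation_cost(adj, u, v):
    """Cost(u, v): total weight of a cheapest path from u to v in the
    weighted manipulation graph, computed via Dijkstra's algorithm;
    returns infinity if v is unreachable from u."""
    dist = {u: 0.0}
    pq = [(0.0, u)]
    while pq:
        d, x = heapq.heappop(pq)
        if x == v:
            return d
        if d > dist.get(x, float("inf")):
            continue  # stale queue entry
        for y, w in adj.get(x, []):
            nd = d + w
            if nd < dist.get(y, float("inf")):
                dist[y] = nd
                heapq.heappush(pq, (nd, y))
    return float("inf")

def expanded_graph(adj):
    """Unweighted expanded graph: connect every pair u, v with Cost(u, v) <= 1."""
    nodes = list(adj)
    return {
        u: [v for v in nodes if v != u and manipulation_cost(adj, u, v) <= 1]
        for u in nodes
    }
```

For example, on a path a--b--c with edge weights $0.5$, the expanded graph connects a directly to c, since the two-hop cost is $1.0\le1$.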
We remark that in unweighted graphs, agents will move at most one hop because manipulating the features can increase the value of classification outcomes by at most $1$. To be specific, let $N[u]$ denote the closed neighborhood of state $u\in\mathcal{X}$; then agent $u$ responds to classifier $h$ as follows: if $h(u)$ is negative and there exists a neighbor $v\in N[u]$ with positive $h(v)$, then $u$ will move to $v$; otherwise, $u$ does not move. As a result, the loss function in unweighted graphs can be equivalently expressed as \[ \ell(h,\mathsf{BR}_h(u),y)= \begin{cases} 1 &\quad y=+1 \text{, } \forall v\in N[u]: h(v)=-1;\\ 1 &\quad y=-1 \text{, } \exists v\in N[u]: h(v)=+1;\\ 0 &\quad \text{otherwise}. \end{cases} \label{def-loss} \] When fractional classifiers are used, we also consider the \emph{free-edges} manipulation model. In this model, we restrict the agent to only moving one hop, where the cost of moving is zero. Specifically, each pair of nodes $(u,v)\in\mathcal{X}^2$ has zero manipulation cost if $(u,v)\in\mathcal{E}$, otherwise the cost is infinity. When agents best respond to classifier $h$ under this cost function, they will move to a one-hop neighbor of their initial state that has the highest probability of getting classified as positive. \subsection{Related Work} Our work builds upon a growing line of research, initiated by \citet{10.1145/1014052.1014066,dekel2008incentive,10.1145/2020408.2020495}, that studies learning from data provided by strategic agents.~\citet{Hardt2016} differentiated the field of strategic classification from the more general area of learning under adversarial perturbations; they introduced the problem of \emph{strategic classification} and modeled it as a sequential game between a jury that deploys a classifier and an agent that best responds to the classifier by modifying their features at a cost.
Following the framework of \citet{Hardt2016}, recent works have focused on both the offline setting where examples come from underlying distributions~\citep{zhang2021incentive,sundaram2021pac,lechner2022learning,pmlr-v119-perdomo20a} and the online setting where examples are chosen by an adversary in a sequential manner~\citep{dong2018strategic,chen2020learning,ahmadi2021strategic}. \citet{milli-etal,10.1145/3287560.3287597} extend the setting considered by \citet{Hardt2016} to the case where heterogeneous sub-populations of strategic agents have different manipulation costs and study other objectives such as social burden and fairness. A number of other works focus on incentivizing agents to take improvement actions that increase their true qualification as opposed to gaming actions~\citep{10.1145/3417742,Alon2020MultiagentEM,haghtalab-ijcai2020,ahmadi_et_al:LIPIcs.FORC.2022.3,bechavod2022information}. The works by \citet{pmlr-v119-shavit20a,pmlr-v119-perdomo20a,bechavod2021gaming} study causal relationships between observable features and outcomes in strategic classification.~\citet{levanon2021strategic} provide a practical learning framework for strategic classification. Recent works relax the assumption that strategic agents best respond to the classifiers and consider alternative assumptions such as noisy response~\citep{jagadeesan2021alternative}, learning agents~\citep{zrnic2021leads}, and non-myopic agents~\citep{haghtalab2022learning}. Our work is most closely related to that of~\citet{zhang2021incentive,lechner2022learning}, which captures the set of plausible manipulations using an underlying \emph{manipulation graph}, where each edge $\vec{x}\rightarrow \vec{x'}$ represents a plausible manipulation from features $\vec{x}$ to $\vec{x}'$. This formulation is in contrast to a geometric model where agents' features are vectors in a $d$-dimensional space, with manipulation cost captured by some distance metric.
As a result, agents in the geometric setting move in a ball of bounded radius~\citep{dong2018strategic,chen2020learning,haghtalab-ijcai2020,ahmadi2021strategic,ghalme2021strategic,sundaram2021pac}. However, the work of \citet{zhang2021incentive,lechner2022learning} focuses on the \emph{offline} PAC learning setting. Our work can be considered a generalization of their model to the \emph{online learning} setting. Our work is also connected to the line of work that studies randomness and transparency in strategic classification. In terms of \emph{classification accuracy} in the offline setting, \citet{Braverman2020TheRO} shows that in a one-dimensional feature space, both committed randomness (probabilistic classifiers) and noisy features under deterministic classifiers can improve accuracy, and the optimal randomized classifier has a structure where agents are better off not manipulating. On the other hand, \citet{ghalme2021strategic} gives sufficient conditions under which \emph{transparency} is the recommended policy for improving predictive accuracy. Our paper combines the insights of both papers in the online setting, where we show that randomness is necessary against the adversary that selects agents, but transparency is more advantageous when it comes to the strategic agents themselves (see \Cref{sec:discussion-transparency} for more discussion). In addition to accuracy, there are also studies about the societal implications of randomization and transparency in the presence of multiple subpopulations, such as information discrepancy~\citep{bechavod2022information} and fairness~\citep{immorlica2019access,kannan2019downstream,frankel2022improving,Braverman2020TheRO}.
\section{Overview of Results} \label{sec:overview-results} \begin{table} \centering \footnotesize \begin{tabular}{|c|c|c|c|} \hline \parbox[c][][c]{2cm}{Type of \\Randomness} &\parbox[c][][c]{2cm}{Manipulation\\ Graph} & {Upper Bound} &Lower Bound\\\hhline{====} \parbox[c][][c]{1.9cm}{Deterministic}& {Unweighted} & \begin{tabular}{l|l} \multirow{2}{*}{Realizable} & $O(\Delta\ln|\mathcal{H}|)$\\ &\Cref{alg:halving} (\Cref{thm:baseline-realizable-upper-bound})\\\hline \multirow{2}{*}{Agnostic} & $O(\Delta\cdot\mathsf{OPT}+\Delta\ln|\mathcal{H}|)$ \\ & \Cref{alg:biased-weighted-maj-vote} (\Cref{thm:biased-weighted-maj-vote-mistake-bound}) \end{tabular} & \begin{tabular}{l|c} \multirow{2}{*}{Realizable} & $\Delta-1$\\ &\Cref{thm:deterministic-lower-bound}\\\hline \multirow{2}{*}{Agnostic} & $\Delta\cdot\mathsf{OPT}$ \\ & \Cref{thm:deterministic-lower-bound} \end{tabular} \\\hhline{====} \multirow{2}{*}{\parbox[c][][c]{1.9cm}{Fractional \\Classifiers \\{\tiny (random choice after agents respond)}}}&Free-edges& \parbox[c][1cm][c]{4cm}{$O(\Delta\cdot\mathsf{OPT}+\Delta\ln|\mathcal{H}|)$ \\ \Cref{alg:biased-weighted-maj-vote} (\Cref{thm:biased-weighted-maj-vote-mistake-bound})} & \parbox[c][1cm][c]{3cm}{ $\frac{\Delta}{2}\cdot\mathsf{OPT}$\\ \Cref{thm:fractional-onehop-lower-bound} }\\\hhline{~---} &Weighted& \parbox[c][1cm][c]{4cm}{$O(\tilde{\Delta}\cdot\mathsf{OPT}+\tilde{\Delta}\ln|\mathcal{H}|)$ \\ \Cref{prop:fractional-multi-hop-upper-bound}} & \parbox[l][1cm][c]{3cm}{ {$\frac{{\Delta}}{4}\cdot\mathsf{OPT}\ \left(\frac{\Tilde{\Delta}}{4}\cdot\mathsf{OPT}\right)$ }\\ \Cref{thm:fractional-multi-hops-lower-bound} }\\\hhline{====} \parbox[c][][c]{1.9cm}{Randomized\\ Algorithms \\{\tiny (random choice before agents respond)}}&{ Unweighted}& \begin{tabular}{l|l} \multirow{2}{*}{Oblivious} & $O\left(T^{\frac{2}{3}}\ln^{\frac{1}{3}}|\mathcal{H}|\right)$\\ &\Cref{alg:reduction-MAB-FIB} (\Cref{thm:regret-alg-oblivious})\\\hline \multirow{4}{*}{Adaptive} & 
$\widetilde{O}\left(T^{\frac{3}{4}}\ln^{\frac{1}{4}}|\mathcal{H}|\right)$ \\ & \Cref{alg:reduction-adaptive} (\Cref{thm:regret-alg-adaptive-reduction})\\\hhline{~-} & $\widetilde{O}\left(\sqrt{T|\mathcal{H}|\ln|\mathcal{H}|}\right)$ \\ & Vanilla EXP3 Algorithm \end{tabular} &Open\\\hline \end{tabular} \caption{\small{This table summarizes the main results of this paper for the models of deterministic classifiers, fractional classifiers, and randomized algorithms, respectively. We use $\Delta$ to denote the maximum degree of the manipulation graph, and $\tilde{\Delta}$ to denote the maximum degree of the expanded manipulation graph, which is constructed from a weighted graph by connecting all pairs of nodes $(u,v)\in \mathcal{X}^2$ where $\mathsf{Cost}(u,v)\leq 1$. Although the table is presented in terms of undirected graphs, we remark that all the upper and lower bounds can be extended to the setting of directed graphs, with the degrees replaced by the corresponding out-degrees; see \Cref{sec:directed-graphs} for an example in the setting of deterministic classifiers. $\mathsf{OPT}$ stands for the optimal number of mistakes achieved by the best hypothesis in the class $\mathcal{H}$.}} \label{table-of-results} \end{table} Our work considers three models of randomness: deterministic classifiers, fractional classifiers, and randomized algorithms. In the \emph{deterministic} model, the learner is constrained to using deterministic algorithms to output a sequence of deterministic classifiers. In the \emph{fractional classifiers} model, the learner is allowed to output a probability distribution over classifiers at every round, inducing a fraction on every node that represents its probability of being classified as positive. The agents best respond to these fractions before the random labels are realized.
In the last \emph{randomized algorithms} model, the learner outputs a probability distribution over classifiers as in the fractional model, and the adversary may pick $u_t$ based on these probabilities, but now the agents respond to the true realized classifier in selecting $v_t$. We summarize our main results in \Cref{table-of-results}. \textbf{Deterministic Classifiers:} In the case of deterministic classifiers, we model strategic manipulations by unweighted graphs that have unit cost on all edges. We first consider the realizable setting where the perfect classifier lies in a finite hypothesis class $\mathcal{H}$, and show fundamental differences between the non-strategic and strategic settings. In the non-strategic setting, the deterministic algorithm $\mathsf{Halving}$ achieves an $O(\ln|\mathcal{H}|)$ mistake bound. However, in the strategic setting, we show in \Cref{example:halving-fails} that the same algorithm can suffer from an infinite number of mistakes. In \Cref{sec:deterministic}, we analyze the strategic setting and provide upper and lower bounds on the mistake bound, both characterized by the \emph{maximum degree} of vertices in the manipulation graph, which we denote by $\Delta$. On the lower bound side, we show in \Cref{thm:deterministic-lower-bound} that no deterministic algorithm is able to achieve an $o(\Delta)$ mistake bound, and this barrier exists even when $|\mathcal{H}|=O(\Delta)$. On the upper bound side, we propose \Cref{alg:halving}, which achieves a mistake bound of $O(\Delta\ln|\mathcal{H}|)$ by incorporating the graph structure into the vanilla $\mathsf{Halving}$ algorithm. We then move to the agnostic strategic setting and propose \Cref{alg:biased-weighted-maj-vote}, which achieves a mistake bound of $O(\Delta\cdot\mathsf{OPT}+\Delta\ln|\mathcal{H}|)$, where $\mathsf{OPT}$ denotes the minimum number of mistakes made by the best classifier in $\mathcal{H}$.
This bound is a factor of $\Delta$ larger than the bound achieved by the weighted majority vote algorithm in the non-strategic setting. We complement this result with a lower bound showing that no deterministic algorithm can achieve an $o(\Delta\cdot\mathsf{OPT})$ mistake bound. In order to overcome the $\Delta$-multiplicative barrier, we study the use of randomization in the next two models. \textbf{Fractional Classifiers:} In this setting, we consider two models of cost function: the \emph{free-edges} cost model, where traversing one edge is cost-free but a second edge costs infinity, and the \emph{weighted graph} model, where agents can travel multiple edges and pay the sum of the edge costs. In the free-edges model, we show that no learner can beat the mistake lower bound of $\frac{\Delta}{2}\cdot\mathsf{OPT}$, and provide an upper bound of $O(\Delta\cdot\mathsf{OPT}+\Delta\ln|\mathcal{H}|)$ based on~\Cref{alg:biased-weighted-maj-vote}. In the weighted graph model, we show a mistake lower bound of $\frac{{\Delta}}{4}\cdot\mathsf{OPT}$, and an upper bound of $O(\Tilde{\Delta}\cdot\mathsf{OPT}+\Tilde{\Delta}\ln|\mathcal{H}|)$, which is obtained by running \Cref{alg:biased-weighted-maj-vote} on the \emph{expanded manipulation graph} $\Tilde{G}$, constructed by connecting all pairs of nodes $(u,v)\in \mathcal{X}^2$ with $\mathsf{Cost}(u,v)\leq 1$; here $\Tilde{\Delta}$ denotes the maximum degree of $\Tilde{G}$. In particular, our construction for the lower bound satisfies $\Tilde{\Delta}=\Delta$, so this result also implies that no learner is able to achieve better than $\frac{\Tilde{\Delta}}{4}$-multiplicative regret. Our results in this setting indicate that using fractional classifiers cannot help the learner achieve $o(\Delta\cdot\mathsf{OPT})$ regret. To resolve this issue, we move on to the randomized algorithms model, in which the learner's random choices are realized transparently before the agents respond.
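Returning to the weighted-graph model, the expanded manipulation graph $\Tilde{G}$ can be computed by one shortest-path pass per node: connect $u$ and $v$ whenever the cheapest path between them has total cost at most $1$. A minimal sketch in Python (the graph encoding and function names are ours, not from the paper):

```python
import heapq

def expanded_graph(nodes, wedges):
    """Connect u,v whenever the cheapest manipulation path costs at most 1.

    wedges: dict mapping node -> dict of neighbor -> edge cost (weighted graph G).
    Returns the adjacency of the expanded graph G~ and its maximum degree.
    """
    def dijkstra(src):
        # Standard Dijkstra over the weighted manipulation graph.
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, x = heapq.heappop(pq)
            if d > dist.get(x, float("inf")):
                continue
            for y, c in wedges.get(x, {}).items():
                nd = d + c
                if nd < dist.get(y, float("inf")):
                    dist[y] = nd
                    heapq.heappush(pq, (nd, y))
        return dist

    adj = {u: set() for u in nodes}
    for u in nodes:
        for v, d in dijkstra(u).items():
            if v != u and d <= 1.0:
                adj[u].add(v)
    return adj, max(len(adj[u]) for u in nodes)
```

For example, on a path $a-b-c$ with edge costs $0.4$ and $0.5$, the pair $(a,c)$ is connected in $\Tilde{G}$ since the two-edge path costs $0.9\le 1$.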
\textbf{Randomized Algorithms:} In this setting, the learner uses randomized algorithms that produce a probability distribution over deterministic classifiers at each round. The key difference from the fractional classifiers setting is that, although the adversary still chooses the agent $(u_t,y_t)$ based on the distribution, the agent best responds to the actual classifier after it is sampled from the distribution. Surprisingly, we show that revealing the random choices to the agents can make the interaction more fruitful for both the agents and the learner, as the learner is now able to achieve vanishing regret without the multiplicative dependency on $\Delta$ or $\Tilde{\Delta}$. This demonstrates an interesting difference between strategic and non-strategic settings from the learner's perspective: whereas delaying the realization of random bits is helpful in non-strategic settings, it is more helpful to realize the random choices \emph{before agents respond} in the strategic setting. We refer the readers to \Cref{sec:discussion-transparency} for more discussion of this difference. As for algorithms and upper bounds in this setting, we first show that the vanilla EXP3 algorithm on the expert set $\mathcal{H}$ gives a regret upper bound of $O\left(\sqrt{T|\mathcal{H}|\ln|\mathcal{H}|}\right)$. To improve the dependency on $|\mathcal{H}|$, we design two algorithms that simultaneously observe the loss of all experts by using an all-positive classifier at random time steps to stop the manipulations. In particular, \Cref{alg:reduction-MAB-FIB} achieves a regret upper bound of $O\left(T^{\frac{2}{3}}\ln^\frac{1}{3}|\mathcal{H}|\right)$ against oblivious adversaries, and \Cref{alg:reduction-adaptive} achieves a regret bound of $\widetilde{O}\left(T^{\frac{3}{4}}\ln^{\frac{1}{4}}|\mathcal{H}|\right)$ against general adaptive adversaries.
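For reference, the vanilla EXP3 baseline over the expert set $\mathcal{H}$ can be sketched as follows. This is a standard textbook exponential-weights implementation with one common learning-rate choice, not the paper's \Cref{alg:reduction-MAB-FIB} or \Cref{alg:reduction-adaptive}; all names here are ours:

```python
import math
import random

def exp3(K, T, loss, eta=None, seed=0):
    """Vanilla EXP3 over K arms (experts): sample an arm from exponential
    weights, observe only that arm's loss, and update with an
    importance-weighted loss estimate. Returns the total loss incurred."""
    rng = random.Random(seed)
    if eta is None:
        eta = math.sqrt(2.0 * math.log(K) / (T * K))  # one standard tuning
    est = [0.0] * K  # cumulative importance-weighted loss estimates
    total = 0.0
    for t in range(T):
        w = [math.exp(-eta * e) for e in est]
        s = sum(w)
        p = [x / s for x in w]
        i = rng.choices(range(K), weights=p)[0]
        l = loss(t, i)       # adversary's loss for the pulled arm, in [0, 1]
        est[i] += l / p[i]   # unbiased estimate of arm i's loss this round
        total += l
    return total
```

With one zero-loss arm among $K=3$ and $T=2000$, the total loss should stay near the $O(\sqrt{TK\ln K})$ regret scale rather than growing linearly in $T$.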
We also extend this algorithmic idea to the linear classification setting, where the original examples are inseparable, and obtain an upper bound in terms of the hinge loss of the original data points, resolving an open problem proposed by \citet{ahmadi2021strategic}. However, our mistake bound has an extra $O(\sqrt{T})$ additive term compared to their bound for the case where the original data points are separable. \textbf{Two Populations:} We propose an extension to our model in which agents are divided into two populations with heterogeneous manipulation power: group $A$ agents face a cost of $0.5$ on each edge, whereas group $B$ agents face a cost of $1$. We assume that group membership is a protected feature, and is observable only after the classifier is published. In \Cref{sec:two-populations}, we present an algorithm with a $\min\left\{\Delta+1+\frac{1}{\beta},\ \Delta^2+2\right\}$-multiplicative regret, where $\beta$ is the probability that agents are assigned to group $B$. \subsection{Two populations} \label{sec:two-populations} In this section, we study extensions of the unit-edge cost function in our baseline model. We assume there are two populations with different manipulation costs: agents of group $A$ face a cost of $0.5$ on each edge, whereas agents of group $B$ face a cost of $1$. As a result, in response to deterministic classifiers, agents from group $A$ move within their two-hop neighborhood, whereas agents from group $B$ only move inside their one-hop neighborhood. We suppose each agent has fixed probabilities of belonging to each group, regardless of the initial position and the label chosen by the adversary. In other words, at every round $t$, after the adversary picks the next agent $(u_t,y_t)$, we assume nature independently assigns this agent to group $c_t=B$ with probability $\beta$ and to group $c_t=A$ with probability $\alpha=1-\beta$.
The agent's best response to classifier $h_t$ is a function of $u_t$ and $c_t$: \begin{align*} v_t\in\mathsf{BR}_{h_t}(u_t,c_t)\triangleq\begin{cases} \arg\max_{v\in \mathcal{X}} \Big[\mathsf{Value}(h_t(v))-\mathsf{Cost}_A(u_t,v)\Big], & \text{if } c_t=A\\ \arg\max_{v\in \mathcal{X}} \Big[\mathsf{Value}(h_t(v))-\mathsf{Cost}_B(u_t,v)\Big], & \text{if } c_t=B. \end{cases} \end{align*} As a result of manipulation, the learner suffers loss $\ell(h_t,v_t,y_t)=\ell(h_t,\mathsf{BR}_{h_t}(u_t,c_t),y_t)$ and observes $(v_t,y_t)$ together with group membership $c_t$. The learner's goal is to bound the expected number of mistakes in terms of the optimal number of mistakes in expectation, where the expectations are taken over the random group assignments and the possible randomness in the learning algorithm and the adversary's choices. \begin{align*} \E[\mathsf{Mistake}(T)]=\E\left[\sum_{t=1}^T\ell(h_t,\mathsf{BR}_{h_t}(u_t,c_t),y_t)\right],\quad \E[\mathsf{OPT}]=\min_{h\in\mathcal{H}}\E\left[\sum_{t=1}^T\ell(h,\mathsf{BR}_{h}(u_t,c_t),y_t)\right]. \end{align*} We propose \Cref{alg:two-populations}, which is based on the idea of biased weighted majority vote (\Cref{alg:biased-weighted-maj-vote}), with a \emph{group-independent} threshold for the biased majority votes and a \emph{group-dependent} way of penalizing experts. We state the mistake bound guarantee in \Cref{thm:two-populations}.
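The two-population best response above can be made concrete on an unweighted manipulation graph. In this Python sketch (the encoding and tie-breaking rule are our assumptions: the agent weakly prefers manipulating, breaking ties toward a positive label and then toward fewer hops), $\mathsf{Value}(+1)=1$, $\mathsf{Value}(-1)=0$, and the per-edge cost is $0.5$ for group $A$ and $1$ for group $B$:

```python
from collections import deque

def hop_dist(adj, u):
    """BFS hop distances from u in the (unweighted) manipulation graph."""
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def best_response(adj, u, group, h, value=1.0):
    """Agent at u best responds to classifier h (dict node -> +1/-1).

    Group A pays 0.5 per edge, group B pays 1; utility is the value of
    the obtained label minus the manipulation cost. Ties are broken
    toward a positive label, then fewer hops (an assumption of ours).
    """
    per_edge = 0.5 if group == "A" else 1.0
    best = None
    for v, d in hop_dist(adj, u).items():
        util = (value if h[v] == +1 else 0.0) - per_edge * d
        key = (util, h[v], -d)   # tie-breaks: positive label, then closer
        if best is None or key > best[0]:
            best = (key, v)
    return best[1]
```

On a path $0-1-2$ that is classified positive only at node $2$, a group $B$ agent at $0$ stays put, while a group $A$ agent at $0$ can afford the two hops.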
\begin{algorithm}[t] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetNoFillComment \Input{$G(\mathcal{X},\mathcal{E})$, $\mathcal{H}$} Set initial weights $w_1(h)\leftarrow 1$ for all experts $h\in \mathcal{H}$\; Set discount factor $\gamma=\frac{1}{e}$, threshold $\theta=\max\left\{\frac{1}{\Delta+1+\frac{1}{\beta}},\ \frac{1}{\Delta^2+2}\right\}$\; \For{$t=1,2,\cdots$}{ \tcc{The learner commits to a classifier $h_t$ that is constructed as follows:} \For{$v\in \mathcal{X}$}{ Let $W_t^+(v) = \sum_{h\in\mathcal{H}:h(v)=+1}w_t(h)$, $W_t^-(v) = \sum_{h\in\mathcal{H}:h(v)=-1}w_t(h)$, and $W_t= W_t^+(v)+W_t^-(v)$\; \eIf{$W_t^+(v)\geq \theta\cdot W_t$}{ $h_t(v)\leftarrow +1$\; } { $h_t(v)\leftarrow -1$\; } } \tcc{Unlabeled example $v_t$ is observed.} Output prediction $h_t(v_t)$\; \tcc{The true label $y_t$ and group membership $c_t$ are observed.} \tcc{If there was a mistake:} \If{$h_t(v_t)\neq y_t$}{ \eIf{$h_t(v_t)=+1$}{ for all $h\in \mathcal{H}:h(v_t)=+1$, $w_{t+1}(h)\leftarrow \gamma\cdot w_t(h)$\tcp*{false positive} } { \eIf{$c_t=A$}{$\mathcal{H'}\leftarrow \{h\in \mathcal{H}: \forall x\in N^2[v_t], h(x)=-1\}$\tcp*{$N^2[\cdot]$ is the 2-hop neighborhood}} {$\mathcal{H'}\leftarrow \{h\in \mathcal{H}: \forall x\in N[v_t], h(x)=-1\}$} If $h\in \mathcal{H}'$, $w_{t+1}(h)\leftarrow\gamma\cdot w_t(h)$, otherwise $w_{t+1}(h)\leftarrow w_t(h)$.\tcp*{false negative} } } } \caption{Biased weighted majority-vote algorithm for two populations.} \label[algo]{alg:two-populations} \end{algorithm} \begin{theorem} \label{thm:two-populations} In the two-population setting where population $B$ has probability $\beta$, \Cref{alg:two-populations} achieves the following expected mistake bound: \begin{align*} \E[\mathsf{Mistake}(T)]\le e\cdot\min\left\{\Delta+1+\frac{1}{\beta},\ \Delta^2+2\right\}\left(\ln|\mathcal{H}|+\E[\mathsf{OPT}]\right).
\end{align*} \end{theorem} \begin{remark} In \Cref{thm:two-populations}, when all agents can make two hops (i.e., $\beta=0$), the mistake bound reduces to the guarantee provided by \Cref{thm:biased-weighted-maj-vote-mistake-bound} with $\Delta^2$ playing the role of the maximum degree. In this case, \Cref{alg:two-populations} is equivalent to \Cref{alg:biased-weighted-maj-vote} running on the expanded neighborhood graph $\widetilde{G}$ in which every two nodes of distance at most two are connected by an edge. Here, $\Delta^2$ is an upper bound on the maximum degree of $\widetilde{G}$. In contrast, when all agents can only make one hop (i.e., $\beta=1$), the problem reduces to the baseline model, and \Cref{thm:two-populations}'s guarantee becomes the same as that of \Cref{thm:biased-weighted-maj-vote-mistake-bound} with the same set of parameters. For values of $\beta$ between 0 and 1, the mistake bound smoothly interpolates between the guarantees of the two extreme cases. \end{remark} \begin{proof}[Proof of~\Cref{thm:two-populations}] We show that whenever a mistake is made, the total weight of experts ($W_t$) decreases by a constant fraction in expectation. First, consider the case of a false positive. Since $h_t(v_t)=+1$, the total weight of experts that predict positive on $v_t$ is at least $\theta W_t$, and the weight of each of these experts is reduced by a $\lambda$ fraction, where $\lambda\triangleq 1-\gamma$ is the fraction of weight removed when an expert is discounted by $\gamma$. Let $\mathcal{F}_t$ be the $\sigma$-algebra generated by the random variables up to time $t$; then we have \begin{align} \E[W_{t+1}\ |\ \mathcal{F}_{t-1},\text{ false positive}]\le W_t (1-\lambda \theta). \label{eq:cut-false-positive} \end{align} Next, consider the case of a false negative. Since $h_t(v_t)=-1$, we know that the agent did not move, i.e., $v_t=u_t$.
The algorithm updates as follows: if $c_t=B$, it reduces the weight of experts who predict negative on all the nodes in the one-hop neighborhood of $v_t$, i.e., $N[v_t]$; if $c_t=A$, it reduces the weight of experts who predict negative on all the nodes in the two-hop neighborhood of $v_t$, i.e., $N^2[v_t]$. We claim that: \begin{align*} &\frac{\Pr(c_t=B\ |\ \mathcal{F}_{t-1},\text{false negative})}{\beta}\ge \frac{\Pr(c_t=A\ |\ \mathcal{F}_{t-1},\text{false negative})}{1-\beta}\\ \Rightarrow\ &\Pr(c_t=B\ |\ \mathcal{F}_{t-1},\text{false negative})\ge\beta. \end{align*} To see this, we use Bayes' rule to calculate the conditional probability of group assignments: for $X\in \{A,B\}$, we have \begin{align} \Pr(c_t=X\ |\ \mathcal{F}_{t-1},\text{false negative})=\frac{\Pr(\text{false negative}\ |\ \mathcal{F}_{t-1},c_t=X)\cdot\Pr(c_t=X\ |\ \mathcal{F}_{t-1})}{\Pr(\text{false negative}\ |\ \mathcal{F}_{t-1})}. \end{align} Since the group membership $c_t$ is independently realized after the adversary chooses $(u_t,y_t)$, we have \begin{align*} &\frac{\Pr(c_t=B\ |\ \mathcal{F}_{t-1},\text{false negative})}{\beta}-\frac{\Pr(c_t=A\ |\ \mathcal{F}_{t-1},\text{false negative})}{1-\beta}\\ =&\frac{1}{\Pr(\text{false negative}\ |\ \mathcal{F}_{t-1})}\Big(\Pr(\text{false negative}\ |\ \mathcal{F}_{t-1},c_t=B)-\Pr(\text{false negative}\ |\ \mathcal{F}_{t-1},c_t=A)\Big)\ge0, \end{align*} where the last step holds because agents of population $A$ have more manipulation power: under every possible classifier, a group $A$ agent is able to get classified as positive whenever a group $B$ agent is, so group $A$ agents are less likely to produce a false negative. We have thus established the claim. Now we turn to the total weight that is reduced in this scenario. If $c_t=A$, then there are at most $(\Delta^2+1)$ nodes in the two-hop neighborhood, all of which are predicted negative.
Therefore, the total weight of experts who predict negative on all of them is at least $W_t(1-\theta(\Delta^2+1))_+$. On the other hand, if $c_t=B$, then the total weight of experts who predict negative on the one-hop neighborhood is at least $W_t(1-\theta(\Delta+1))_+$. Putting the two cases together and conditioning on a false negative, the fraction of the total weight that gets reduced is at least \begin{align} &\Pr(c_t=B\ |\ \text{false negative},\mathcal{F}_{t-1})\cdot {(1-(\Delta+1)\theta)_+} \nonumber\\&\qquad\qquad +\Pr(c_t=A\ |\ \text{false negative},\mathcal{F}_{t-1})\cdot (1-(\Delta^2+1)\theta)_+\nonumber\\ \ge& \max\left\{\beta(1-(\Delta+1)\theta)_+,(1-(\Delta^2+1)\theta)_+\right\}, \label{eq:tttmp} \end{align} where the first term in \eqref{eq:tttmp} is due to the claim we just established, and the second term follows from $(1-(\Delta+1)\theta)_+\ge(1-(\Delta^2+1)\theta)_+$ together with $\Pr(c_t=B\ |\ \text{false negative},\mathcal{F}_{t-1})+\Pr(c_t=A\ |\ \text{false negative},\mathcal{F}_{t-1})=1$. From \Cref{eq:tttmp}, we obtain \begin{align} \E[W_{t+1}\ |\ \mathcal{F}_{t-1},\text{ false negative}]\le W_t\left(1-\lambda\cdot\max\left\{\beta(1-(\Delta+1)\theta)_+,(1-(\Delta^2+1)\theta)_+\right\}\right).\label{eq:cut-false-negative} \end{align} Finally, we optimize the threshold $\theta$ to equalize the decrease in the cases of a false positive (\Cref{eq:cut-false-positive}) and a false negative (\Cref{eq:cut-false-negative}).
As a result, the optimal $\theta$ is obtained by solving the following equation: \begin{align} \underbrace{\theta}_{f(\theta)}=\max\left\{\underbrace{\beta(1-(\Delta+1)\theta)}_{f_1(\theta)},\,\underbrace{1-(\Delta^2+1)\theta}_{f_2(\theta)},\,0\right\}.\nonumber \end{align} Since $f$, $f_1$, and $f_2$ are all linear functions, with $f_1,f_2$ having negative slopes and $f$ a positive slope, the intersection of $f$ with $\max\{f_1,f_2\}$ is the larger of the intersection of $f$ with $f_1$ and the intersection of $f$ with $f_2$. Moreover, $\theta=0$ is not a valid solution because the other two intersections have strictly positive values. Thus we obtain \begin{align*} \theta\triangleq\max\left\{\frac{1}{\Delta+1+\frac{1}{\beta}},\ \frac{1}{\Delta^2+2}\right\}. \end{align*} Correspondingly, on each mistake, the expected decrease in the total weight satisfies \begin{align*} \E\left[\left.\frac{W_{t+1}}{W_t}\ \right|\ \mathcal{F}_{t-1},\text{ mistake}\right]\le \min\left\{1-\frac{\lambda}{\Delta+1+\frac{1}{\beta}},\ 1-\frac{\lambda}{\Delta^2+2}\right\}. \end{align*} By Jensen's inequality, we further obtain that if a mistake is made at time $t$, then \begin{align} \E\left[\ln \left.\frac{W_{t+1}}{W_t}\ \right|\ \mathcal{F}_{t-1},\text{ mistake}\right]\le&\ln\E\left[\left.\frac{W_{t+1}}{W_t}\ \right|\ \mathcal{F}_{t-1},\text{ mistake}\right]\nonumber\\ \le &\ln\left(\min\left\{1-\frac{\lambda}{\Delta+1+\frac{1}{\beta}},\ 1-\frac{\lambda}{\Delta^2+2}\right\}\right).\label{eq:decrease:mistake} \end{align} The last step is to telescope \Cref{eq:decrease:mistake} over all mistakes. Note that the algorithm only penalizes the experts that make mistakes, so the same argument as in \Cref{thm:biased-weighted-maj-vote-mistake-bound} implies that $W_T\ge\gamma^{\mathsf{OPT}}$.
Thus we have \begin{align*} \E[\ln(\gamma^{\mathsf{OPT}})-\ln|\mathcal{H}|] \le \E[\ln W_T-\ln|\mathcal{H}|] \le \E[\mathsf{Mistake}(T)]\cdot\ln\left(\min\left\{1-\frac{\lambda}{\Delta+1+\frac{1}{\beta}},\ 1-\frac{\lambda}{\Delta^2+2}\right\}\right). \end{align*} Rearranging the above inequality and using $\ln(1-x)\leq -x$ together with $\lambda=1-\gamma=1-\frac{1}{e}\geq \frac{1}{e}$ gives us an expected mistake bound of \begin{align*} \E[\mathsf{Mistake}(T)]\le e\cdot\min\left\{\Delta+1+\frac{1}{\beta},\ \Delta^2+2\right\}\left(\ln|\mathcal{H}|+\E[\mathsf{OPT}]\right). \end{align*} This completes the proof. \end{proof}
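The closing optimization of $\theta$ admits a quick numerical sanity check: $\theta=\max\{1/(\Delta+1+1/\beta),\,1/(\Delta^2+2)\}$ should be a fixed point of $\theta\mapsto\max\{\beta(1-(\Delta+1)\theta),\,1-(\Delta^2+1)\theta,\,0\}$, and the resulting multiplicative constant in the mistake bound is $e\cdot\min\{\Delta+1+1/\beta,\,\Delta^2+2\}$. A small Python sketch (helper names are ours):

```python
import math

def threshold(delta, beta):
    """Threshold theta used by the two-population algorithm."""
    return max(1.0 / (delta + 1 + 1.0 / beta), 1.0 / (delta ** 2 + 2))

def fixed_point_gap(delta, beta):
    """|theta - max{f1(theta), f2(theta), 0}|; should vanish."""
    th = threshold(delta, beta)
    rhs = max(beta * (1 - (delta + 1) * th), 1 - (delta ** 2 + 1) * th, 0.0)
    return abs(th - rhs)

def mistake_constant(delta, beta):
    """Multiplicative constant in the expected mistake bound."""
    return math.e * min(delta + 1 + 1.0 / beta, delta ** 2 + 2)
```

For instance, with $\Delta=3$ and $\beta=1$ (the baseline one-hop model), $\theta=1/5$ and the constant is $5e$.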
https://arxiv.org/abs/1907.00268
The valley version of the Extended Delta Conjecture
The Shuffle Theorem of Carlsson and Mellit gives a combinatorial expression for the bigraded Frobenius characteristic of the ring of diagonal harmonics, and the Delta Conjecture of Haglund, Remmel and the second author provides two generalizations of the Shuffle Theorem to the delta operator expression $\Delta'_{e_k} e_n$. Haglund et al. also propose the Extended Delta Conjecture for the delta operator expression $\Delta'_{e_k} \Delta_{h_r}e_n$, which is analogous to the rise version of the Delta Conjecture. Recently, D'Adderio, Iraci and Wyngaerd proved the rise version of the Extended Delta Conjecture in the case $t=0$. In this paper, we propose a new valley version of the Extended Delta Conjecture. Then, we work on the combinatorics of extended ordered multiset partitions to prove that the two conjectures for $\Delta'_{e_k} \Delta_{h_r}e_n$ are equivalent when $t$ or $q$ equals 0, thus proving the valley version of the Extended Delta Conjecture when $t$ or $q$ equals 0.
\section{Introduction} Let $X=\{x_1,x_2,\ldots,x_n\}$ and $Y=\{y_1,y_2,\ldots,y_n\}$ be two sets of $n$ commuting variables. The {\em ring of diagonal harmonics} consists of those polynomials in $\mathbb{Q}[X,Y]$ which satisfy the following system of differential equations $$ \partial_{x_1}^a\partial_{y_1}^b\,f(X,Y)+\partial_{x_2}^a\partial_{y_2}^b\,f(X,Y)+\ldots+\partial_{x_n}^a\partial_{y_n}^b\,f(X,Y)=0 $$ for each pair of integers $a$ and $b$ such that $a+b>0$. Haiman \cite{Haiman} proved that the {\em bigraded Frobenius characteristic} of the $\Sn{n}$-module of diagonal harmonics, $DH_n(X;q,t)$, is given by \begin{equation} DH_n(X;q,t)=\nabla e_n, \end{equation} where $\nabla$, $e_n$, and other symmetric function notation will be defined in Section 2. The {\em Classical Shuffle Conjecture} proposed by Haglund, Haiman, Loehr, Remmel and Ulyanov \cite{HHLRU} gives a well-studied combinatorial expression for the bigraded Frobenius characteristic of the ring of diagonal harmonics. The Shuffle Conjecture was proved by Carlsson and Mellit \cite{CM} and is now the \emph{Shuffle Theorem}, stated as follows; again, relevant notation will be given in Section 2. \begin{theorem}[Carlsson and Mellit]\label{theorem:shuffle} For any integer $n\geq 0$, \begin{equation} \nabla e_n=\sum_{\mathrm{PF}\in\mathcal{WPF}_n}t^{\mathrm{area}(\mathrm{PF})}q^{\mathrm{dinv}(\mathrm{PF})} X^{\mathrm{PF}}, \end{equation} \end{theorem} \noindent which says that the Frobenius characteristic of diagonal harmonics can be written as a generating function of combinatorial objects called word parking functions.
As a generalization of the Shuffle Theorem, the Delta Conjecture can be stated as follows. \begin{conjecture}[Haglund, Remmel and Wilson] For any integers $n> k\geq 0$, \begin{eqnarray} \Delta'_{e_k}e_n&=&\sum_{\mathrm{PF}\in\mathcal{WPF}_n}t^{\mathrm{area}(\mathrm{PF})}q^{\mathrm{dinv}(\mathrm{PF})} X^{\mathrm{PF}} \prod_{i\in \mathrm{Rise}(\mathrm{PF})} (1+\frac{z}{t^{a_i(\mathrm{PF})}})\bigg|_{z^{n-k-1}}\label{delta1}\\ &=&\sum_{\mathrm{PF}\in\mathcal{WPF}_n}t^{\mathrm{area}(\mathrm{PF})}q^{\mathrm{dinv}(\mathrm{PF})} X^{\mathrm{PF}} \prod_{i\in \mathrm{Val}(\mathrm{PF})} (1+\frac{z}{q^{d_i(\mathrm{PF})+1}})\bigg|_{z^{n-k-1}}.\label{delta2} \end{eqnarray} \end{conjecture} The Delta Conjecture has two versions, the \emph{rise version} (Equation \ref{delta1}) and the \emph{valley version} (Equation \ref{delta2}), which are different generating functions over parking functions. The Delta Conjecture is still open, but several cases have been proved. The conjecture for $\Delta_{e_1}e_n$ was proved by Haglund, Remmel and the second author \cite{HRW}; the rise version of the Delta Conjecture at $q = 1$ was proved by Romero \cite{Romero}; the ``Catalan'' case of the conjecture was proved by Zabrocki \cite{Zabdelta}. The Delta Conjecture in the case when $t$ or $q$ equals $0$ was proved by Garsia, Haglund, Remmel and Yoo \cite{Delta0}; the second author \cite{wilson}; Rhoades \cite{Rhoades}; and Haglund, Rhoades and Shimozono \cite{Delta01}. In \cite{HRW}, the authors also conjectured a combinatorial formula for the expression $\Delta'_{e_k} \Delta_{h_r}e_n$, which we call the \emph{Extended Delta Conjecture}; its combinatorial side is a generating function over the set of \emph{extended word parking functions with blank valleys}. The \emph{Extended Delta Conjecture} of Haglund, Remmel and the second author \cite{HRW} is as follows.
\begin{conjecture}[Rise version of the Extended Delta Conjecture \cite{HRW}]\label{conjecture:deltar1} For any positive integers $n$, $k$, and $r$ with $k<n$, $$ \Delta'_{e_k}\Delta_{h_r}e_{n}=\sum_{\mathrm{PF}\in\mathcal{WPF}_{n;r}}t^{\mathrm{area}(\mathrm{PF})}q^{\mathrm{dinv}(\mathrm{PF})}x^\mathrm{PF} \prod_{i\in \mathrm{Rise}(\mathrm{PF})}\left(1+\frac{z}{t^{a_i(\mathrm{PF})}}\right) \bigg|_{z^{n-k-1}}, $$ \end{conjecture} \noindent which is analogous to the rise version of the Delta Conjecture. By defining a contractible valley set for parking functions with blank valleys, we conjecture the following. \begin{conjecture}[Valley version of the Extended Delta Conjecture]\label{conjecture:deltar2} For any positive integers $n$, $k$, and $r$ with $k<n$, $$ \Delta'_{e_k}\Delta_{h_r}e_{n}=\sum_{\mathrm{PF}\in\mathcal{WPF}_{n;r}}t^{\mathrm{area}(\mathrm{PF})}q^{\mathrm{dinv}(\mathrm{PF})}x^\mathrm{PF} \prod_{i\in \mathrm{Val}(\mathrm{PF})}\left(1+\frac{z}{q^{d_i(\mathrm{PF})+1}}\right) \bigg|_{z^{n-k-1}}. $$ \end{conjecture} \noindent We call \cref{conjecture:deltar2} the \emph{valley version of the Extended Delta Conjecture}. Very recently, D'Adderio, Iraci and Wyngaerd \cite{Michele} proved the rise version (\cref{conjecture:deltar1}) in the case $t=0$. However, the valley version conjecture for $\Delta'_{e_k} \Delta_{h_r}e_n$ is new and has not appeared anywhere before. We believe that the valley version conjecture is true: we have verified it for $n\leq 10$ using Maple programs, and we have also proved it in the case when $t$ or $q$ is zero. The organization of this paper is as follows. In Section 2, we introduce some background about symmetric functions and parking functions related to the Delta Conjecture. In Section 3, we introduce ordered multiset partitions, extended ordered multiset partitions and their connections to the Delta Conjectures. In Section 4, we prove that the statistics inv, maj and dinv are equi-distributed via three insertion algorithms.
In Section 5, we prove that the statistics inv and minimaj are equi-distributed by generalizing a method of Rhoades \cite{Rhoades}, which completes a proof of the valley version conjecture for $\Delta'_{e_k} \Delta_{h_r}e_n$ when $t$ or $q$ equals 0. In Section 6, we give a brief summary of this paper and point out some future directions of this research. \section{Background} We shall introduce some algebraic and combinatorial background about symmetric functions and parking functions that is involved in the Delta Conjecture. We start with definitions about symmetric functions. The \emph{symmetric group} $\mathcal{S}_{n}$ is the set of permutations of size $n$. Given any permutation $\sigma=\sigma_1\cdots\sigma_n\in\mathcal{S}_{n}$, the \emph{descent number} of $\sigma$ is defined to be $\mathrm{des}(\sigma):=|\{i:\sigma_i>\sigma_{i+1}\}|$, and the \emph{major index} of $\sigma$ is $\mathrm{maj}(\sigma):=\sum_{\sigma_i>\sigma_{i+1}} i$. For any integer $n$, a weakly decreasing sequence of positive integers $\lambda=(\lambda_1,\ldots,\lambda_k)$ is a \emph{partition} (or an \emph{integer partition}) of $n$ if $\sum_{i=1}^{k} \lambda_i = n$, written $\lambda\vdash n$. We let $|\lambda|=n$ and $\ell(\lambda)=k$ denote the size and length (number of parts) of the partition $\lambda$. A \emph{weak composition} of an integer $n$ is defined to be a sequence of \emph{non-negative} integers $\alpha=(\alpha_1,\ldots,\alpha_k)$ such that $\sum_{i=1}^{k} \alpha_i = n$, written $\alpha\vDash n$; and a \emph{strong composition} of $n$ is defined to be a sequence of \emph{positive} integers $\alpha=(\alpha_1,\ldots,\alpha_k)$ such that $\sum_{i=1}^{k} \alpha_i = n$, written $\alpha\vDash_{\mathrm{strong}} n$. We let $|\alpha|=n$ and $\ell(\alpha)=k$ denote the size and the length of the composition $\alpha$, respectively.
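The statistics des and maj are easy to compute and sanity-check; a short Python sketch (function names are ours), together with the classical fact that $\sum_{\sigma\in\mathcal{S}_3}q^{\mathrm{maj}(\sigma)}=[3]_q!=1+2q+2q^2+q^3$:

```python
from itertools import permutations
from collections import Counter

def des(sigma):
    """Descent number: number of positions i (1-indexed) with sigma_i > sigma_{i+1}."""
    return sum(1 for i in range(len(sigma) - 1) if sigma[i] > sigma[i + 1])

def maj(sigma):
    """Major index: sum of the descent positions."""
    return sum(i + 1 for i in range(len(sigma) - 1) if sigma[i] > sigma[i + 1])

# Distribution of maj over S_3: the coefficients of [3]_q! = 1 + 2q + 2q^2 + q^3.
dist = Counter(maj(s) for s in permutations((1, 2, 3)))
```

For example, $\sigma = 3142$ has descents at positions $1$ and $3$, so $\mathrm{des}(\sigma)=2$ and $\mathrm{maj}(\sigma)=4$.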
For each partition $\lambda=(\lambda_1,\ldots,\lambda_k)\vdash n$, we can associate to the partition a \emph{Ferrers diagram} in French notation, which is a diagram with $n$ squares such that there are $\lambda_i$ squares in the \thn{i} row, counting from bottom to top. For each cell $c\in\lambda$, we let the \emph{coarm} of $c$, $a_{\lambda}'(c)$, be the number of cells to the right of $c$; the \emph{coleg} of $c$, $\ell_{\lambda}'(c)$, be the number of cells below $c$. We often abbreviate the notations to $a'(c)$ and $\ell'(c)$, and we let $(a_{\lambda}'(c), \ell_{\lambda}'(c))$ denote the coordinate of $c$. \fref{Ferrers} shows an example of the Ferrers diagram of a partition $\lambda=(7,7,5,3,3)\vdash 25$. \begin{figure}[ht!] \centering \vspace{-1mm} \begin{tikzpicture}[scale=0.5] \draw[help lines] (0,0) grid (5,3); \draw[help lines] (0,0) grid (3,4); \draw[help lines] (0,0) grid (7,2); \fillll{3}{3}{c}; \draw[<->,thick] (2.5,0) -- (2.5,2); \draw[<->,thick] (0,2.5) -- (2,2.5); \node at (3.8,.8) {\footnotesize $\ell_{\lambda}'(c)$}; \node at (1,1.8) {\footnotesize $a_{\lambda}'(c)$}; \node at (9,2) {\small \begin{tabular}{r@{\hskip 1.5mm}l} $a_{\lambda}'(c)= 2$\\ $\ell_{\lambda}'(c)= 2$\\ \end{tabular}}; \end{tikzpicture} \caption{The Ferrers diagram of the partition $\lambda=(7,7,5,3,3)$.} \label{fig:Ferrers} \end{figure} Now let $\lambda$ be a partition of $n$. We can fill the cells of the Ferrers diagram of $\lambda$ with positive integers to obtain a \emph{tableau} $T$. The set of tableaux of shape $\lambda$ is denoted by $\mathrm{Tab}(\lambda)$. We also use $T$ to denote the multiset of the filled integers, and we write $X^T:=\prod_{i\in T} x_i$. Let $\Lambda$ denote the ring of symmetric functions with coefficients in $\mathbb{C}(q,t)$, and let $\Lambda^{(n)}$ denote the elements of $\Lambda$ that are homogeneous of degree $n$. 
The \emph{elementary symmetric function basis} $\{e_\mu\}_{\mu\vdash n}$ of $\Lambda^{(n)}$ is defined by $$ e_k:=\sum_{i_1<\cdots<i_k}x_{i_1}\cdots x_{i_k},\quad\mbox{and}\quad e_\mu:=e_{\mu_1}\cdots e_{\mu_{\ell(\mu)}}, $$ and the \emph{homogeneous symmetric function} basis $\{h_{\mu}\}_{\mu\vdash n}$ is defined by $$ h_k:=\sum_{i_1\leq\cdots\leq i_k}x_{i_1}\cdots x_{i_k},\quad\mbox{and}\quad h_\mu:=h_{\mu_1}\cdots h_{\mu_{\ell(\mu)}}. $$ Macdonald \cite{Macbook} introduced a family of orthogonal symmetric functions known as \emph{Macdonald polynomials}, which have nice mathematical and physical properties. Macdonald polynomials have several variant forms; the one we use is the \emph{modified Macdonald polynomial} $\widetilde{H}_{\mu}[X;q,t]$, indexed by partitions $\mu\vdash n$. One combinatorial way to define $\widetilde{H}_\mu[X;q,t]$ is due to the work of Haglund, Haiman and Loehr \cite{HHL}: $$ \widetilde{H}_\mu[X;q,t] := \sum_{T\in\mathrm{Tab}(\mu)} q^{\mathrm{inv}(T)}t^{\mathrm{maj}(T)}X^T, $$ where inv and maj are two statistics defined on the tableau $T$. We shall often abbreviate $\widetilde{H}_\mu[X;q,t]$ to $\widetilde{H}_\mu$. The symmetric function operators nabla ($\nabla$), delta ($\Delta$), and delta prime ($\Delta'$), introduced by Bergeron and Garsia \cite{GB}, are eigenoperators of the modified Macdonald polynomials. For any partition $\mu\vdash n$, we let \begin{equation*} B_\mu := \sum_{c \in\mu} q^{a'(c)} t^{\ell'(c)} \quad\mbox{and}\quad T_\mu := \prod_{c \in\mu} q^{a'(c)} t^{\ell'(c)} \end{equation*} be polynomials defined from the Ferrers diagram of $\mu$. Given a modified Macdonald polynomial $\widetilde{H}_{\mu}[X;q,t]$, the operator \emph{nabla} acts by \begin{equation*} \nabla \widetilde{H}_{\mu}:= T_\mu \widetilde{H}_{\mu} \end{equation*} and we extend by scalars to obtain a symmetric function operator. 
Given a symmetric function $f$, the operators $\Delta_f$ and $\Delta'_f$ are defined by \begin{equation*} \Delta_f \widetilde{H}_{\mu}:= f[B_\mu] \widetilde{H}_{\mu}, \quad \Delta'_f \widetilde{H}_{\mu}:= f[B_\mu-1] \widetilde{H}_{\mu}, \end{equation*} where $f[B_\mu]$ and $f[B_\mu-1]$ are plethystic expressions which can be thought of as substitutions. For example, for the partition $\mu=(3,1)\vdash 4$, we can first draw its Ferrers diagram, and fill each cell $c\in\mu$ with the weight $q^{a'(c)} t^{\ell'(c)}$. This process is pictured in \fref{ptnmu}. \begin{figure}[ht!] \centering \begin{tikzpicture}[scale=.6] \draw[thick] (3,1) grid (0,0) grid (1,2); \fillll{1}{1}{1};\fillll{2}{1}{q};\fillll{3.1}{1.1}{q^2};\fillll{1}{2}{t}; \end{tikzpicture} \caption{A partition $\mu=(3,1)$.} \label{fig:ptnmu} \end{figure} By definition, we have $B_{(3,1)}=1+q+q^2+t$, $T_{(3,1)}=q^3t$, and $\nabla \widetilde{H}_{(3,1)}= q^3t\: \widetilde{H}_{(3,1)}$. Setting $f=e_2$, we obtain \begin{equation*} \Delta_{e_2} \widetilde{H}_{(3,1)} = e_2[1+q+q^2+t]\: \widetilde{H}_{(3,1)}= (q+q^2+t+q^3+qt+q^2t)\: \widetilde{H}_{(3,1)}, \end{equation*} and \begin{equation*} \Delta'_{e_2} \widetilde{H}_{(3,1)} = e_2[q+q^2+t]\: \widetilde{H}_{(3,1)}= (q^3+qt+q^2t)\: \widetilde{H}_{(3,1)}. \end{equation*} Note that for $\mu\vdash n$, $e_n[B_\mu] = e_{n-1}[B_\mu -1] = T_\mu$, thus for any $F \in \Lambda^{(n)}$, $\nabla F = \Delta_{e_n} F = \Delta'_{e_{n-1}} F$. Furthermore, since $e_k[X+1]=e_k[X]+e_{k-1}[X]$, we have the following relation between the operators $\Delta$ and $\Delta'$: \begin{equation} \Delta_{e_k} = \Delta'_{e_k} +\Delta'_{e_{k-1}}. \end{equation} For integers $n$ and $k$, we define the $q$-analogues of $n$, $n!$ and $\binom{n}{k}$ to be $$ [n]_q:=\frac{1-q^n}{1-q}, \quad [n]_q!: = [1]_q [2]_q \cdots [n]_q, \quad\mbox{and}\quad \qbinom{n}{k} := \frac{\qn{n}!}{\qn{k}!\qn{n-k}!}. $$ We also define several combinatorial objects that are related to the Delta Conjecture. 
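Since $B_\mu$ is a plain sum of monomials, a plethystic evaluation such as $e_k[B_\mu]$ can be checked mechanically: $e_k$ of a sum of monomials is the sum over $k$-subsets of their products. The short Python sketch below (our own illustration; monomials $q^a t^b$ are encoded as exponent pairs $(a,b)$, and multiplication adds exponents) verifies the two computations for $\mu=(3,1)$:

```python
from collections import Counter
from itertools import combinations

def e_pleth(k, monomials):
    """e_k evaluated on a sum of monomials: sum over k-subsets of products.
    Each monomial q^a t^b is an exponent pair (a, b); a product adds exponents."""
    return Counter(
        (sum(m[0] for m in sub), sum(m[1] for m in sub))
        for sub in combinations(monomials, k)
    )

B = [(0, 0), (1, 0), (2, 0), (0, 1)]          # B_(3,1) = 1 + q + q^2 + t
# e_2[B_mu] = q + q^2 + t + q^3 + qt + q^2 t
assert e_pleth(2, B) == Counter([(1, 0), (2, 0), (0, 1), (3, 0), (1, 1), (2, 1)])
# e_2[B_mu - 1] = q^3 + qt + q^2 t
assert e_pleth(2, B[1:]) == Counter([(3, 0), (1, 1), (2, 1)])
```

This encoding is only valid when the plethystic argument is a sum of monomials with coefficient $1$, which is the case for $B_\mu$ and $B_\mu-1$.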
Let $n$ be a positive integer. An $(n,n)$-{\em Dyck path} $P$ is a lattice path from $(0,0)$ to $(n,n)$ which always remains weakly above the main diagonal $y=x$. The number of Dyck paths of size $n$ is given by the \thn{n} Catalan number $C_n=\frac{1}{n+1}\binom{2n}{n}$. We let $\mathcal{D}_n$ denote the set of Dyck paths of size $n$. For a Dyck path $P\in\mathcal{D}_n$, the cells that are cut through by the main diagonal are called \emph{diagonal cells}, and the cells between the diagonal cells and the path are called \emph{area cells}. We call the main diagonal the \thn{0} diagonal; we call the line parallel to and above the main diagonal at distance $i$ the \thn{i} diagonal. Given an $(n,n)$-Dyck path $P$, an \emph{$(n,n)$-word parking function} (or a \emph{labeled Dyck path}) PF is obtained by labeling the north steps of $P$ with positive integers such that the labels (called \emph{cars}) are strictly increasing along each column of $P$. We let $\ell_i(\mathrm{PF})$ be the \thn{i} row label of PF. We let $\mathcal{WPF}_n$ denote the set of $(n,n)$-word parking functions. We shall also omit ``word" to call $\mathrm{PF}\in\mathcal{WPF}_n$ a \emph{parking function} in this paper. For a parking function $\mathrm{PF}\in\mathcal{WPF}_n$, let $a_i(\mathrm{PF})$ be the number of area cells in the \thn{i} row, counting rows from bottom to top, and let \begin{multline*} d_i(\mathrm{PF}):=\big|\{j:i<j,\ a_i(\mathrm{PF})=a_j(\mathrm{PF})\textnormal{ and }\ell_i(\mathrm{PF})<\ell_j(\mathrm{PF})\}\\ \cup\{j:i<j,\ a_i(\mathrm{PF})=a_j(\mathrm{PF})+1\textnormal{ and }\ell_i(\mathrm{PF})>\ell_j(\mathrm{PF})\}\big|, \end{multline*} then $\mathrm{area}(\mathrm{PF}):=\sum_{i=1}^{n}a_i(\mathrm{PF})$ is the \emph{area} of PF and $\mathrm{dinv}(\mathrm{PF}):=\sum_{i=1}^{n}d_i(\mathrm{PF})$ is the \emph{dinv} of PF. \fref{areadinv} gives an example of a $(7,7)$-parking function with area 13 and dinv 2. \begin{figure}[ht!] 
\centering \begin{tikzpicture}[scale=0.5] \fillshade{1/1,2/2,3/3,4/4,5/5,6/6,7/7} \Dpath{0,0}{7}{7}{0,0,0,1,1,3,3,-1}; \PFtext{0,0}{0/2,0/3,0/5,1/2,1/7,3/3,3/7}; \PFad{8,0}{1/0/0,2/1/0,3/2/0,4/2/1,5/3/1,6/2/0,7/3/0}; \end{tikzpicture} \caption{A $(7,7)$-parking function with area 13 and dinv 2.} \label{fig:areadinv} \end{figure} For a word parking function $\mathrm{PF}\in\mathcal{WPF}_n$, we define the \emph{label weight} (or \emph{car weight}) of PF to be $$ X^{\mathrm{PF}} := \prod_{i=1}^{n} x_{\ell_i(\mathrm{PF})}. $$ Then all statistics involved in the Shuffle Theorem have been defined. The Delta Conjecture also requires the following combinatorial terminology about parking functions. For a parking function $\mathrm{PF}\in\mathcal{WPF}_{n}$, we define \begin{eqnarray*} \mathrm{valley}(\mathrm{PF}) &:=& \{i : a_i(\mathrm{PF})\leq a_{i-1}(\mathrm{PF})\},\\ \mathrm{Rise}(\mathrm{PF}) &:=& \{i : a_i(\mathrm{PF}) = a_{i-1}(\mathrm{PF})+1\},\quad\mbox{and}\\ \mathrm{Val}(\mathrm{PF}) &:=& \{i : a_i(\mathrm{PF})< a_{i-1}(\mathrm{PF})\mbox{ or }a_i(\mathrm{PF})=a_{i-1}(\mathrm{PF})\mbox{ and }\ell_i(\mathrm{PF})>\ell_{i-1}(\mathrm{PF})\} \end{eqnarray*} to be the sets of \emph{valleys}, \emph{double rises} and \emph{contractible valleys} of PF. We denote the right hand sides of Equations (\ref{delta1}) and (\ref{delta2}) by $\mathrm{Rise}_{n,k}[X;q,t]$ and $\mathrm{Val}_{n,k}[X;q,t]$: \begin{eqnarray*} \mathrm{Rise}_{n,k}[X;q,t]&=&\sum_{\mathrm{PF}\in\mathcal{WPF}_n}t^{\mathrm{area}(\mathrm{PF})}q^{\mathrm{dinv}(\mathrm{PF})} X^{\mathrm{PF}} \prod_{i\in \mathrm{Rise}(\mathrm{PF})} \left(1+\frac{z}{t^{a_i(\mathrm{PF})}} \right)\bigg|_{z^{n-k-1}}\\ \mathrm{Val}_{n,k}[X;q,t]&=&\sum_{\mathrm{PF}\in\mathcal{WPF}_n}t^{\mathrm{area}(\mathrm{PF})}q^{\mathrm{dinv}(\mathrm{PF})} X^{\mathrm{PF}} \prod_{i\in \mathrm{Val}(\mathrm{PF})} \left(1+\frac{z}{q^{d_i(\mathrm{PF})+1}} \right)\bigg|_{z^{n-k-1}}. 
\end{eqnarray*} Consider the factor $t^{\mathrm{area}(\mathrm{PF})}\prod_{i\in \mathrm{Rise}(\mathrm{PF})} \left(1+\frac{z}{t^{a_i(\mathrm{PF})}} \right)\Big|_{z^{n-k-1}}$ in $\mathrm{Rise}_{n,k}[X;q,t]$. Each term in the expansion of this factor is a power of $t$, whose exponent is $\mathrm{area}(\mathrm{PF})$ minus a sum of $n-k-1$ of the row-areas $a_i(\mathrm{PF})$ over double rise rows. Similarly, in the factor $q^{\mathrm{dinv}(\mathrm{PF})}\prod_{i\in \mathrm{Val}(\mathrm{PF})} (1+\frac{z}{q^{d_i(\mathrm{PF})+1}})\Big|_{z^{n-k-1}}$ in $\mathrm{Val}_{n,k}[X;q,t]$, each term is a power of $q$, whose exponent is $\mathrm{dinv}(\mathrm{PF})$ minus a sum of $n-k-1$ of the row-dinvs $(d_i(\mathrm{PF})+1)$ over contractible valley rows. Thus, if we define \begin{eqnarray*} \mathcal{WPF}_{n,k}^{\mathrm{Rise}} &:=& \{(\mathrm{PF},R): \mathrm{PF}\in\mathcal{WPF}_n, R\subseteq \mathrm{Rise}(\mathrm{PF}), |R|=k \},\\ \mathcal{WPF}_{n,k}^{\mathrm{Val}} &:=& \{(\mathrm{PF},V): \mathrm{PF}\in\mathcal{WPF}_n, V\subseteq \mathrm{Val}(\mathrm{PF}), |V|=k \} \end{eqnarray*} and let \begin{eqnarray*} \mathrm{area}^-(\mathrm{PF},R)&:=&\sum_{i\in [n]\backslash R} a_i(\mathrm{PF}),\\ \mathrm{dinv}^-(\mathrm{PF},V)&:=&\sum_{i\in [n]\backslash V} d_i(\mathrm{PF}) - |V|, \end{eqnarray*} then \begin{eqnarray*} \mathrm{Rise}_{n,k}[X;q,t]&=&\sum_{(\mathrm{PF},R)\in\mathcal{WPF}_{n,n-k-1}^{\mathrm{Rise}}}t^{\mathrm{area}^-(\mathrm{PF},R)}q^{\mathrm{dinv}(\mathrm{PF})} X^{\mathrm{PF}},\\ \mathrm{Val}_{n,k}[X;q,t]&=&\sum_{(\mathrm{PF},V)\in\mathcal{WPF}_{n,n-k-1}^{\mathrm{Val}}}t^{\mathrm{area}(\mathrm{PF})}q^{\mathrm{dinv}^-(\mathrm{PF},V)} X^{\mathrm{PF}}, \end{eqnarray*} and the Delta Conjecture can be stated as $$ \Delta'_{e_k}e_n=\mathrm{Rise}_{n,k}[X;q,t]=\mathrm{Val}_{n,k}[X;q,t] $$ for any integers $n> k\geq 0$. 
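For a quick check of such computations, area and dinv can be evaluated from the area vector $(a_1,\ldots,a_n)$ and the car labels $(\ell_1,\ldots,\ell_n)$ alone. The following Python sketch (our own illustration) reproduces the values for the parking function of \fref{areadinv}:

```python
def area(a):
    """area(PF) = sum of the row-areas a_i(PF)."""
    return sum(a)

def dinv(a, labels):
    """dinv(PF): pairs i < j with a_i = a_j and l_i < l_j,
    or a_i = a_j + 1 and l_i > l_j (rows read bottom to top)."""
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (a[i] == a[j] and labels[i] < labels[j])
               or (a[i] == a[j] + 1 and labels[i] > labels[j]))

# The (7,7)-parking function of the figure, rows listed bottom to top:
a = [0, 1, 2, 2, 3, 2, 3]          # area vector a_i(PF)
labels = [2, 3, 5, 2, 7, 3, 7]     # cars l_i(PF)
assert area(a) == 13
assert dinv(a, labels) == 2
```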
We call a pair $(\mathrm{PF},R)\in\mathcal{WPF}_{n,k}^{\mathrm{Rise}}$ (or $(\mathrm{PF},V)\in\mathcal{WPF}_{n,k}^{\mathrm{Val}}$) a \emph{rise-decorated} (or \emph{valley-decorated}) parking function, which can be seen as a parking function PF with $k$ rows in $\mathrm{Rise}(\mathrm{PF})$ (or $\mathrm{Val}(\mathrm{PF})$) marked with a star $\ast$. \fref{MWPF} shows examples of rise-decorated and valley-decorated parking functions. \begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.5] \fillshade{1/1,2/2,3/3,4/4,5/5,6/6,7/7} \Dpath{0,0}{7}{7}{0,0,0,1,1,3,4,-1}; \PFtext{0,0}{0/2,0/3,0/4,1/2,1/5,3/1,4/3}; \fillsome{0/2/\ast,1/5/\ast} \end{tikzpicture} \qquad\qquad \begin{tikzpicture}[scale=0.5] \fillshade{1/1,2/2,3/3,4/4,5/5,6/6,7/7} \Dpath{0,0}{7}{7}{0,0,0,1,1,3,4,-1}; \PFtext{0,0}{0/2,0/3,0/4,1/2,1/5,3/1,4/3}; \fillsome{3/6/\ast,4/7/\ast} \end{tikzpicture} \caption{Examples: parking functions in $\mathcal{WPF}_{7,2}^{\mathrm{Rise}}$ and $\mathcal{WPF}_{7,2}^{\mathrm{Val}}$.} \label{fig:MWPF} \end{figure} Now we shall give details of the Extended Delta Conjecture. Given an $(n,n)$-Dyck path $P$, recall that the valley set of $P$ is defined to be $$ \mathrm{valley}(P):=\{i:a_i(P)\leq a_{i-1}(P)\}. $$ We say that a word-labeling of a Dyck path \emph{has $r$ blank valleys} if there are $r$ valleys not receiving a label. Such labeled Dyck paths are called \emph{extended word parking functions}. We let $\mathcal{WPF}_{n;r}$ denote the set of extended word parking functions of size $n+r$ with $r$ blank valleys. \fref{blankvalley} shows an example of a parking function in the set $\mathcal{WPF}_{5;2}$. 
\begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.5] \fillshade{1/1,2/2,3/3,4/4,5/5,6/6,7/7} \Dpath{0,0}{7}{7}{0,0,0,1,1,3,4,-1}; \PFtext{0,0}{0/2,0/3,0/4,1/{\ },1/5,3/3}; \PFad{8,0}{1/0/0,2/1/0,3/2/0,4/2/1,5/3/2,6/2/0,7/2/0}; \end{tikzpicture} \caption{A $(7,7)$-extended parking function with $2$ blank valleys.} \label{fig:blankvalley} \end{figure} A more convenient way to draw an extended word parking function is to fill the blank valleys with 0's; thus an extended word parking function is a parking function with labels in $\mathbb{Z}_{\geq 0}$ such that $0$ does not appear in the first row (since the first row is not a valley). With 0's in the blank valley positions, we can define the area and dinv components $a_i(\mathrm{PF})$ and $d_i(\mathrm{PF})$ on each parking function in $\mathcal{WPF}_{n;r}$ in the same way. We still let $\mathrm{Rise}(\mathrm{PF})=\{i: a_i(\mathrm{PF}) = a_{i-1}(\mathrm{PF})+1\}$ denote the double rise set. Since the blank valleys are labeled with 0's, we can define the contractible valley set $\mathrm{Val}(\mathrm{PF})$ in the same way as for normal word parking functions. Further, we can define the set of \emph{rise-decorated} (or \emph{valley-decorated}) \emph{parking functions with blank valleys}. The set of {rise-decorated} (or {valley-decorated}) {parking functions with $n$ cars, $r$ blank valleys} and $k$ marked double rises (or contractible valleys) is denoted by $\mathcal{WPF}_{r;n,k}^{\mathrm{Rise}}$ (or $\mathcal{WPF}_{r;n,k}^{\mathrm{Val}}$). We let $\mathrm{Rise}_{r;n,k}[X;q,t]$ denote the combinatorial side of \cref{deltar1} and $\mathrm{Val}_{r;n,k}[X;q,t]$ denote the combinatorial side of \cref{deltar2}. 
Notice that the combinatorial sides of the two conjectures could also be written as generating functions of the sets $\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Rise}}$ and $\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Val}}$, i.e.\ we have \begin{eqnarray*} \mathrm{Rise}_{r;n,k}[X;q,t]&=&\sum_{(\mathrm{PF},R)\in\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Rise}}}t^{\mathrm{area}^-(\mathrm{PF},R)}q^{\mathrm{dinv}(\mathrm{PF})}x^\mathrm{PF} ,\\ \mathrm{Val}_{r;n,k}[X;q,t]&=&\sum_{(\mathrm{PF},V)\in\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Val}}}t^{\mathrm{area}(\mathrm{PF})}q^{\mathrm{dinv}^-(\mathrm{PF},V)}x^\mathrm{PF}. \end{eqnarray*} \section{Extended ordered multiset partitions} \subsection{Ordered set partitions and ordered multiset partitions} Let $n\geq 0$ be any integer. A {\em set partition} $\pi$ of the set $[n]= \{1, \ldots, n\}$ is a family of nonempty, pairwise disjoint subsets $B_1,B_2,\ldots,B_k$ of $[n]$ called {\em parts} (or {\em blocks}) such that $\cup^k_{i=1}B_i=[n]$. We let $\ell(\pi)$ denote the number of parts in $\pi$ and $|\pi|=n$ denote the size of $\pi$. We let $\min(B_i)$ and $\max(B_i)$ denote the minimum and maximum elements of $B_i$ and we use the convention that we order the parts so that $\min(B_1)< \cdots < \min(B_k)$. To simplify notation, we shall write $\pi$ as $B_1/ \cdots /B_k$. Thus we would write $\pi =134/268/57$ for the set partition $\pi$ of $[8]$ with parts $B_1 = \{1,3,4\}$, $B_2 = \{2,6,8\}$ and $B_3=\{5,7\}$. An {\em ordered set partition} with underlying set partition $\pi$ is just a permutation of the parts of $\pi$, i.e.\ $\delta =B_{\sigma_1}/ \cdots /B_{\sigma_k}$ for some permutation $\sigma$ in the symmetric group $\Sn{k}$. For example, $\delta =57/134/268$ is an ordered set partition of the set $[8]$ with underlying set partition $\pi =134/268/57$. Let $\pi=B_1/ \cdots /B_k$ be an ordered set partition of $[n]$. The strong composition $\lambda(\pi)=(|B_1|,\ldots,|B_k|)$ is called the \emph{shape} of $\pi$. 
We let $\mathcal{OP}_n$ denote the set of ordered set partitions of $[n]$, and $\mathcal{OP}_{n,k}$ denote the set of ordered set partitions of $[n]$ with $k$ parts. Further, we let $\mathcal{OP}_{n,\alpha}$ denote the set of ordered set partitions of $[n]$ with shape $\alpha$. More generally, for a weak composition $\beta = (\beta_1,\ldots,\beta_\ell) \vDash n$, an \emph{ordered multiset partition} with \emph{content} $\beta$ is defined to be a partition of the multiset $A(\beta)=\{i^{\beta_i}:1\leq i\leq \ell\}$ into an ordered sequence of sets called \emph{blocks}, where \emph{repetition is not allowed} in each block. We denote the set of ordered multiset partitions with content $\beta$ by $\mathcal{OP}_{\beta}$. Similarly, we have $\mathcal{OP}_{\beta,k}$ and $\mathcal{OP}_{\beta,\alpha}$. For example, $\pi=234/26/123$ is an ordered multiset partition in $\mathcal{OP}_{(1,3,2,1,0,1),(3,2,3)}$. We shall define 4 statistics: inv, maj, dinv and minimaj on ordered multiset partitions. Given $\pi=B_1/\cdots/B_k\in\mathcal{OP}_{\beta,k}$, the \emph{inversion} statistic $\mathrm{inv}(\pi)$ is defined to be the number of pairs $a>b$ such that $b$ is the minimum of its block, and $a$ is in some block that is strictly left of $b$'s block. Such pairs are called \emph{inversion pairs}. For example, $\pi =134/268/57$ has 4 inversions, and the inversion pairs are $(3,2),(4,2),(6,5),(8,5)$. For an ordered partition $\pi=B_1/\cdots/B_k\in\mathcal{OP}_{\beta,k}$, let $B_i^h$ denote the \thn{h} smallest element in part $B_i$, then the \emph{diagonal inversion} of $\pi$ is defined to be $$ \mathrm{dinv}(\pi):=|\{(h,i,j):i<j,\ B_i^h>B_j^h\}|+|\{(h,i,j):i<j,\ B_i^{h+1}>B_j^h\}|, $$ where the triples counted by the first set are called \emph{primary dinvs}, and the triples counted by the second set are called \emph{secondary dinvs}. For example, $\pi =134/268/57$ has 3 dinvs, which are all secondary dinvs: $(1,1,2),(1,2,3),(2,2,3)$. 
For a partition $\pi\in\mathcal{OP}_{\beta,k}$, we let $\sigma=\sigma(\pi)$ be the word obtained by writing each block $B_i$ in decreasing order and concatenating the blocks for $i=1,\ldots,k$. We also define the index word $\mathrm{ind}(\pi) = 0^{|B_1|} 1^{|B_2|}\cdots(k-1)^{|B_k|}$. Then the \emph{major index} of $\pi$ is $$ \mathrm{maj}(\pi) := \sum_{i:\sigma_i>\sigma_{i+1}} \mathrm{ind}(\pi)_{i+1}. $$ For example, if $\pi=134/268/57$, then $\sigma=43186275$, $\mathrm{ind}(\pi)=00011122$ and $\mathrm{maj}(\pi)=0+0+1+1+2=4$. Given $\pi = B_1/\cdots/B_k \in\mathcal{OP}_{\beta,\alpha}$ where $\alpha=(\alpha_1,\ldots,\alpha_k)$, we first construct a word $\mathrm{miniword}(\pi)$ by reorganizing the elements within each block and listing the organized blocks $B_1,\ldots,B_k$. We first organize the numbers in $B_k$ in increasing order. Then, supposing that block $B_{i+1}$ has been processed, we organize the numbers in $B_i$ by first placing the numbers strictly bigger than the first number of the organized $B_{i+1}$ in increasing order, followed by the remaining numbers, also in increasing order; we then place the organized numbers to the left of the existing sequence. For example, if $\pi=2/34/13/13/2$, then $\mathrm{miniword}(\pi)=23413312$. The \emph{minimum major index} of $\pi$ is defined by $$ \mathrm{minimaj}(\pi):=\mathrm{maj}(\mathrm{miniword}(\pi)). $$ The four statistics are closely related to the Delta Conjecture. 
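The statistics inv, maj and minimaj can likewise be computed mechanically from the block presentation. The following Python sketch (our own illustration; function names are ours) reproduces the worked examples above:

```python
def inv(blocks):
    """Pairs a > b with b the minimum of its block and a strictly to the left."""
    return sum(1 for j in range(len(blocks)) for i in range(j)
               for a in blocks[i] if a > min(blocks[j]))

def maj(blocks):
    """maj: sum of ind-values at (1-based) positions i+1 with sigma_i > sigma_{i+1}."""
    word, ind = [], []
    for i, b in enumerate(blocks):
        word += sorted(b, reverse=True)   # each block written in decreasing order
        ind += [i] * len(b)               # index word 0^{|B1|} 1^{|B2|} ...
    return sum(ind[p + 1] for p in range(len(word) - 1) if word[p] > word[p + 1])

def miniword(blocks):
    """Organize blocks right to left as in the minimaj construction."""
    out = sorted(blocks[-1])
    for b in reversed(blocks[:-1]):
        big = sorted(x for x in b if x > out[0])
        small = sorted(x for x in b if x <= out[0])
        out = big + small + out
    return out

def word_maj(w):
    """Ordinary major index of a word (1-based descent positions)."""
    return sum(p + 1 for p in range(len(w) - 1) if w[p] > w[p + 1])

pi = [[1, 3, 4], [2, 6, 8], [5, 7]]
assert inv(pi) == 4
assert maj(pi) == 4
assert miniword([[2], [3, 4], [1, 3], [1, 3], [2]]) == [2, 3, 4, 1, 3, 3, 1, 2]
assert word_maj([2, 3, 4, 1, 3, 3, 1, 2]) == 9   # minimaj of 2/34/13/13/2
```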
Let $$ D_{\beta,k}^{\mathrm{stat}}(q) := \sum_{\pi\in\mathcal{OP}_{\beta,k}}q^{\mathrm{stat}(\pi)} $$ where stat is one of the statistics \emph{inv, maj, dinv, minimaj}. Haglund, Remmel and the second author \cite{HRW} proved the following theorem. \begin{theorem}[Haglund, Remmel and Wilson]\label{theorem:combo} For any integers $n,k$ and weak composition $\beta$, \begin{eqnarray} \mathrm{Rise}_{n,k}[X;q,0]|_{M_\beta} &=& D_{\beta,k+1}^{\mathrm{dinv}}(q),\\ \mathrm{Rise}_{n,k}[X;0,q]|_{M_\beta} &=& D_{\beta,k+1}^{\mathrm{maj}}(q),\\ \mathrm{Val}_{n,k}[X;q,0]|_{M_\beta} &=& D_{\beta,k+1}^{\mathrm{inv}}(q),\\ \mathrm{Val}_{n,k}[X;0,q]|_{M_\beta} &=& D_{\beta,k+1}^{\mathrm{minimaj}}(q). \end{eqnarray} \end{theorem} They proved \tref{combo} by constructing 4 bijections of the form $\gamma^{\mathrm{stat}}$ for $\mathrm{stat}=\mathrm{dinv},$ $\mathrm{maj},\mathrm{inv}$ and $\mathrm{minimaj}$ between ordered multiset partitions and word parking functions. We present the four bijections in Appendix A. As mentioned in Appendix A, it is a \textbf{fact} that for any ordered multiset partition $\pi$, \emph{each bijection $\gamma^{\mathrm{stat}}$ maps the minimum element in the last part of $\pi$ to the car in the first row of the parking function $\gamma^{\mathrm{stat}}(\pi)$}. We are going to use this fact when we prove \tref{combor}. On the combinatorial side, the second author \cite{wilson} and Rhoades \cite{Rhoades} proved the following theorem: \begin{theorem}[Rhoades and Wilson]\label{theorem:equi} For any integers $n,k$, \begin{equation} \mathrm{Rise}_{n,k}[X;q,0]=\mathrm{Rise}_{n,k}[X;0,q]=\mathrm{Val}_{n,k}[X;q,0]=\mathrm{Val}_{n,k}[X;0,q]. \end{equation} \end{theorem} \subsection{Extended permutations, extended ordered set and multiset partitions} We shall generalize the definitions of permutations, ordered set partitions and ordered multiset partitions in the way that the number 0 is allowed to be an entry. 
Let $\beta = (\beta_1,\ldots,\beta_\ell)\vDash n$ be a weak composition and $A(\beta)=\{i^{\beta_i}:1\leq i\leq \ell\}$ be its corresponding multiset. A permutation of $A(\beta)$ is an ordering of the entries in the multiset $A(\beta)$. We let $\Sn{\beta}$ denote the set of permutations of $A(\beta)$. Given a weak composition $\beta\vDash n$ and an integer $r\geq 0$, an \emph{extended permutation} (or a \emph{tail positive permutation}) is a permutation of the multiset $A(\beta)\cup \{0^r\}$ such that the last entry is not $0$. We let $\Sn{r;\beta}$ denote the set of extended permutations of $A(\beta)\cup \{0^r\}$. Clearly, $\Sn{0;\beta}=\Sn{\beta}$. In a similar way, one can define extended ordered set and multiset partitions. We let $\mathcal{OP}_{1;n}$ denote the set of \emph{extended ordered set partitions}, which are ordered set partitions of the set $\{0\}\cup\{1,\ldots,n\}$ such that the number $0$ is not contained in the last block. Similar to the definition of $\mathcal{OP}_{n,k}$ and $\mathcal{OP}_{n,\alpha}$, we have $\mathcal{OP}_{1;n,k}$ and $\mathcal{OP}_{1;n,\alpha}$. An \emph{extended ordered multiset partition} with content $\beta\vDash n$ and $r$ 0's is an ordered multiset partition of the set $A(\beta)\cup \{0^r\}$ such that $0$ is not contained in the last block. We let $\mathcal{OP}_{r;\beta}$ denote the set of all such extended ordered multiset partitions. Similarly, we have $\mathcal{OP}_{r;\beta,k}$ and $\mathcal{OP}_{r;\beta,\alpha}$. The three new combinatorial objects above are all defined by the same idea, namely that they do not end with $0$; extended ordered multiset partitions in particular have nice combinatorial properties. It is easy to check that all four statistics inv, maj, dinv and minimaj are well defined on the set $\mathcal{OP}_{r;\beta,\alpha}$. Let $$ D_{r;\beta,k}^{\mathrm{stat}}(q) := \sum_{\pi\in\mathcal{OP}_{r;\beta,k}}q^{\mathrm{stat}(\pi)} $$ where stat is one of the statistics \emph{inv, maj, dinv, minimaj}. 
We can prove the following theorem: \begin{theorem}\label{theorem:combor} For any integers $n,k,r$ and weak composition $\beta$, \begin{eqnarray} \mathrm{Rise}_{r;n,k}[X;q,0]|_{M_\beta} &=& D_{r;\beta,k+1}^{\mathrm{dinv}}(q),\\ \mathrm{Rise}_{r;n,k}[X;0,q]|_{M_\beta} &=& D_{r;\beta,k+1}^{\mathrm{maj}}(q),\\ \mathrm{Val}_{r;n,k}[X;q,0]|_{M_\beta} &=& D_{r;\beta,k+1}^{\mathrm{inv}}(q),\\ \mathrm{Val}_{r;n,k}[X;0,q]|_{M_\beta} &=& D_{r;\beta,k+1}^{\mathrm{minimaj}}(q). \end{eqnarray} \end{theorem} \begin{proof} Similar to the definition of $\mathcal{OP}_{r;\beta}$, we shall let $\mathcal{OP}^{\mathrm{all}}_{r;\beta}$ denote the set of ordered multiset partitions of the set $A(\beta)\cup \{0^r\}$, but there is no restriction of the placement of 0 (i.e.\ 0 is allowed to be in the last block). Similarly, we have $\mathcal{OP}^{\mathrm{all}}_{r;\beta,k}$ and $\mathcal{OP}^{\mathrm{all}}_{r;\beta,\alpha}$. Haglund et al.\ proved \tref{combo} by constructing 4 bijections $\gamma^\mathrm{dinv}, \gamma^\mathrm{maj}, \gamma^\mathrm{inv}, \gamma^\mathrm{minimaj}$ between ordered multiset partitions and decorated word parking functions: \begin{eqnarray*} \gamma^\mathrm{dinv}:\mathcal{OP}_{\beta,k+1} &\rightarrow& \{(\mathrm{PF},R)\in\mathcal{WPF}_{n,n-k-1}^{\mathrm{Rise}},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{area}^-(\mathrm{PF},R)=0\}, \\ \gamma^\mathrm{maj}:\mathcal{OP}_{\beta,k+1} &\rightarrow& \{(\mathrm{PF},R)\in\mathcal{WPF}_{n,n-k-1}^{\mathrm{Rise}},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{dinv}(\mathrm{PF})=0\}, \\ \gamma^\mathrm{inv}:\mathcal{OP}_{\beta,k+1} &\rightarrow& \{(\mathrm{PF},V)\in\mathcal{WPF}_{n,n-k-1}^{\mathrm{Val}},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{area}(\mathrm{PF})=0\}, \\ \gamma^\mathrm{minimaj}:\mathcal{OP}_{\beta,k+1} &\rightarrow& \{(\mathrm{PF},V)\in\mathcal{WPF}_{n,n-k-1}^{\mathrm{Val}},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ 
\mathrm{dinv}^-(\mathrm{PF},V)=0\}. \end{eqnarray*} The details can be found in Appendix A. If we allow 0 as an element of an ordered multiset partition, then the four maps can be naturally generalized to the set $\mathcal{OP}^{\mathrm{all}}_{r;\beta,k}$, and the ranges of the maps consist of parking functions that allow 0 as a car, i.e.\ if we let $\mathcal{WPF}_{r;n,k}^{\mathrm{Rise}+}$ and $\mathcal{WPF}_{r;n,k}^{\mathrm{Val}+}$ be the sets of rise and valley decorated word parking functions with $r$ 0's (car 0 is allowed in the first row), then we have bijections \begin{eqnarray*} \gamma^\mathrm{dinv}:\mathcal{OP}^{\mathrm{all}}_{r;\beta,k+1} &\rightarrow& \{(\mathrm{PF},R)\in\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Rise}+},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{area}^-(\mathrm{PF},R)=0\}, \\ \gamma^\mathrm{maj}:\mathcal{OP}^{\mathrm{all}}_{r;\beta,k+1} &\rightarrow& \{(\mathrm{PF},R)\in\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Rise}+},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{dinv}(\mathrm{PF})=0\}, \\ \gamma^\mathrm{inv}:\mathcal{OP}^{\mathrm{all}}_{r;\beta,k+1} &\rightarrow& \{(\mathrm{PF},V)\in\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Val}+},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{area}(\mathrm{PF})=0\}, \\ \gamma^\mathrm{minimaj}:\mathcal{OP}^{\mathrm{all}}_{r;\beta,k+1} &\rightarrow& \{(\mathrm{PF},V)\in\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Val}+},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{dinv}^-(\mathrm{PF},V)=0\}. \end{eqnarray*} We have mentioned the fact below \tref{combo} and in Appendix A that each bijection $\gamma^{\mathrm{stat}}$ maps the minimum element in the last part of $\pi$ to the car in the first row of $\gamma^{\mathrm{stat}}(\pi)$. 
Since the set $\mathcal{OP}_{r;\beta,k}$ consists of those ordered multiset partitions in $\mathcal{OP}^{\mathrm{all}}_{r;\beta,k}$ whose last block does not contain 0, the restriction of each map $\gamma^{\mathrm{stat}}$ to the set $\mathcal{OP}_{r;\beta,k}\subseteq\mathcal{OP}^{\mathrm{all}}_{r;\beta,k}$ is a bijection between $\mathcal{OP}_{r;\beta,k}$ and the corresponding set of parking functions with $r$ 0's in which 0 is not allowed in the first row, which exactly matches the set $\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Rise}}$ or $\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Val}}$. That is, the restrictions of the maps $\gamma^{\mathrm{stat}}$ are bijections: \begin{eqnarray*} \gamma^\mathrm{dinv}:\mathcal{OP}_{r;\beta,k+1} &\rightarrow& \{(\mathrm{PF},R)\in\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Rise}},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{area}^-(\mathrm{PF},R)=0\}, \\ \gamma^\mathrm{maj}:\mathcal{OP}_{r;\beta,k+1} &\rightarrow& \{(\mathrm{PF},R)\in\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Rise}},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{dinv}(\mathrm{PF})=0\}, \\ \gamma^\mathrm{inv}:\mathcal{OP}_{r;\beta,k+1} &\rightarrow& \{(\mathrm{PF},V)\in\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Val}},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{area}(\mathrm{PF})=0\}, \\ \gamma^\mathrm{minimaj}:\mathcal{OP}_{r;\beta,k+1} &\rightarrow& \{(\mathrm{PF},V)\in\mathcal{WPF}_{r;n,n-k-1}^{\mathrm{Val}},\ X^{\mathrm{PF}} = \prod_{i=1}^{\ell(\beta)}x_i^{\beta_i},\ \mathrm{dinv}^-(\mathrm{PF},V)=0\}. \end{eqnarray*} \tref{combor} follows from the fact that $\gamma^{\mathrm{stat}}$ maps the statistic $\mathrm{stat}$ to the parking function statistic $\mathrm{dinv}$, $\mathrm{area}^-$, $\mathrm{dinv}^-$ or $\mathrm{area}$, respectively. 
\end{proof} Thus, the combinatorial sides of the conjectures about the expression $\Delta'_{e_k} \Delta_{h_r}e_n$ in the case when $q$ or $t$ equals $0$ become generating functions of extended ordered multiset partitions. We shall show in the following two sections that the statistics inv, maj, dinv, minimaj are equi-distributed on $\mathcal{OP}_{r;\beta,k}$. \section{The identity $D_{r;\beta,k}^{\mathrm{dinv}}(q) =D_{r;\beta,k}^{\mathrm{maj}}(q) =D_{r;\beta,k}^{\mathrm{inv}}(q)$} Recall that we let $\mathcal{OP}^{\mathrm{all}}_{r;\beta}$ denote the set of ordered multiset partitions of the set $A(\beta)\cup \{0^r\}$ in which 0 is allowed to be in the last block. We also have $\mathcal{OP}^{\mathrm{all}}_{r;\beta,k}$ and $\mathcal{OP}^{\mathrm{all}}_{r;\beta,\alpha}$. In fact, $\mathcal{OP}^{\mathrm{all}}_{r;\beta,k}$ only enlarges the alphabet of $\mathcal{OP}_{\beta,k}$ from $\mathbb{Z}_+$ to $\mathbb{Z}_{\geq 0}$, and it inherits all the properties of $\mathcal{OP}_{\beta,k}$. For a composition $\beta=(\beta_1,\ldots,\beta_n)$ and integers $r,k$, we let $$ D_{r;\beta,k}^{\mathrm{stat} +}(q) := \sum_{\pi\in\mathcal{OP}^{\mathrm{all}}_{r;\beta,k}}q^{\mathrm{stat}(\pi)} $$ where stat is one of the statistics \emph{inv, maj, dinv, minimaj}, then clearly \begin{equation*} D_{r;(\beta_1,\ldots,\beta_n),k}^{\mathrm{stat} +}(q) =D_{(r,\beta_1,\ldots,\beta_n),k}^{\mathrm{stat}}(q), \end{equation*} since we can add 1 to all the entries of an ordered multiset partition in $\mathcal{OP}^{\mathrm{all}}_{r;(\beta_1,\ldots,\beta_n),k}$ to get an ordered multiset partition in $\mathcal{OP}_{(r,\beta_1,\ldots,\beta_n),k}$. It follows from \tref{equi} that, \begin{corollary}\label{corollary:Dplus} For any integers $n,k,r$ and composition $\beta$, $$ D_{r;\beta,k}^{\mathrm{inv} +}(q) = D_{r;\beta,k}^{\mathrm{maj} +}(q) = D_{r;\beta,k}^{\mathrm{dinv} +}(q) = D_{r;\beta,k}^{\mathrm{minimaj} +}(q). 
$$ \end{corollary} For a composition $\beta=(\beta_1,\ldots,\beta_n)$, we let $\beta^-=(\beta_1,\ldots,\beta_{n-1})$ be the composition obtained by removing the last part of $\beta$. We also let $[0,\ell]$ be the set $\{0,1,\ldots,\ell\}$. Further, for a set $S$, we let $\binom{S}{k}$ be the set of size-$k$ subsets of $S$, and $\multiset{S}{k}$ be the set of size-$k$ multisets with elements in $S$. In order to prove the result about ordered multiset partitions that $ D_{\beta,k}^{\mathrm{inv}}(q) = D_{\beta,k}^{\mathrm{maj}}(q) = D_{\beta,k}^{\mathrm{dinv}}(q) $, the second author in \cite{wilson} constructed 3 \emph{insertion} maps: $$ \phi^{\mathrm{stat}}_{\beta,k,\ell}: \mathcal{OP}_{\beta^-,\ell} \times \binom{[0,\ell-1]}{\beta_n-k+\ell} \times \multiset{[0,\ell]}{k-\ell} \rightarrow \mathcal{OP}_{\beta,k}, $$ where stat is one of the statistics \emph{inv, maj, dinv}, and he proved that $$ \mathrm{stat}\left(\phi^{\mathrm{stat}}_{\beta,k,\ell}(\pi,U,B)\right) = \mathrm{stat}(\pi)+\sum_{u\in U} u+\sum_{b\in B} b $$ for all three statistics. In this section, we shall generalize the insertion maps in \cite{wilson} to extended ordered multiset partitions to prove the identity that $$D_{r;\beta,k}^{\mathrm{dinv}}(q) =D_{r;\beta,k}^{\mathrm{maj}}(q) =D_{r;\beta,k}^{\mathrm{inv}}(q).$$ This identity was also proved independently by D'Adderio, Iraci and Wyngaerd \cite{Michele}. 
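Before constructing the insertion maps, the target identity can be sanity-checked by brute force on small cases; here we only compare inv and maj. The Python sketch below (our own illustration; the enumeration is naive and all function names are ours) generates extended ordered multiset partitions, where the 0's play the role of blank valleys and may not occur in the last block, and compares the two distributions:

```python
from collections import Counter
from itertools import combinations, product

def omps(multiset, k):
    """All ordered multiset partitions of `multiset` (a dict value -> multiplicity)
    into k blocks, each block a set (no repeats), all blocks nonempty."""
    values = sorted(multiset)
    choices = [combinations(range(k), multiset[v]) for v in values]
    for pick in product(*choices):
        blocks = [set() for _ in range(k)]
        for v, idxs in zip(values, pick):
            for i in idxs:
                blocks[i].add(v)
        if all(blocks):
            yield [sorted(b) for b in blocks]

def extended(multiset, k):
    """Extended ordered multiset partitions: 0 may not appear in the last block."""
    return [pi for pi in omps(multiset, k) if 0 not in pi[-1]]

def inv(blocks):
    return sum(1 for j in range(len(blocks)) for i in range(j)
               for a in blocks[i] if a > min(blocks[j]))

def maj(blocks):
    word, ind = [], []
    for i, b in enumerate(blocks):
        word += sorted(b, reverse=True)
        ind += [i] * len(b)
    return sum(ind[p + 1] for p in range(len(word) - 1) if word[p] > word[p + 1])

# inv and maj are equidistributed on these tiny extended cases:
for ms in ({0: 1, 1: 1, 2: 1}, {0: 1, 1: 2, 2: 1}):
    pis = extended(ms, 2)
    assert Counter(map(inv, pis)) == Counter(map(maj, pis))
```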
\subsection{The insertion map for inv} We shall generalize the map $\phi^{\mathrm{inv}}_{\beta,k,\ell}$ in \cite{wilson} to the extended case as \begin{align*} \phi^{\mathrm{inv}}_{r;\beta,k,\ell}&: \mathcal{OP}_{r;\beta^-,\ell} \times \binom{[0,\ell-1]}{\beta_n-k+\ell} \times \multiset{[0,\ell]}{k-\ell} \\ &+ \left(\mathcal{OP}^{\mathrm{all}}_{r;\beta^-,\ell} - \mathcal{OP}_{r;\beta^-,\ell}\right) \times \binom{[0,\ell-1]}{\beta_n-k+\ell} \times \multiset{[0,\ell]}{k-\ell-1} \rightarrow \mathcal{OP}_{r;\beta,k} \end{align*} such that \begin{equation}\label{inv} \mathrm{inv}\left(\phi^{\mathrm{inv}}_{r;\beta,k,\ell}(\pi,U,B)\right) = \mathrm{inv}(\pi)+\sum_{u\in U} u+\sum_{b\in B} b. \end{equation} Given $\beta=(\beta_1,\ldots,\beta_n)$ and $\pi\in\mathcal{OP}_{r;\beta^-,\ell}$, we label each block of $\pi$, together with the space to the left of $\pi$, from right to left with the numbers $0,1,\ldots,\ell$. Then for any $U\in \binom{[0,\ell-1]}{\beta_n-k+\ell}$ and $B\in \multiset{[0,\ell]}{k-\ell}$, we construct $\phi^{\mathrm{inv}}_{r;\beta,k,\ell}(\pi,U,B)$ as follows. We repeatedly remove the largest number $i$ from the multiset $U\cup B$, taking from $U$ first if the largest numbers are equal. If $i\in U$, then we place an $n$ in the block with label $i$; if $i\in B$, then we add a new singleton block $\{n\}$ to the right of the block with label $i$. This process constructs all the ordered multiset partitions in $\mathcal{OP}_{r;\beta,k}$ such that the last block which is not a singleton $n$ does not contain a 0. In order to construct the remaining ordered partitions in $\mathcal{OP}_{r;\beta,k}$, those whose last block which is not a singleton $n$ does contain a 0, we take ordered multiset partitions $\pi$ in the set $\left(\mathcal{OP}^{\mathrm{all}}_{r;\beta^-,\ell} - \mathcal{OP}_{r;\beta^-,\ell}\right)$ (which means the last block of $\pi$ contains 0).
Then for any $U\in \binom{[0,\ell-1]}{\beta_n-k+\ell}$ and $B'\in \multiset{[0,\ell]}{k-\ell-1}$, we set the multiset $B=B'\cup \{0\}$, and we construct $\phi^{\mathrm{inv}}_{r;\beta,k,\ell}(\pi,U,B)$ by repeatedly inserting the numbers in the multiset $U\cup B$ in the same way. The 0 in $B$ ensures that the result no longer has any 0's in its rightmost block. One can check easily that this gives all the ordered multiset partitions in $\mathcal{OP}_{r;\beta,k}$, and the inv statistic increases by $i$ each time we insert an $i$; thus Equation (\ref{inv}) follows. For example, suppose that $r=2$, $\beta = (2,3,2,4)$, and $k = 5$, $\ell=3$, and \begin{align*} \pi &= 23/0123/012 \in\left(\mathcal{OP}^{\mathrm{all}}_{r;\beta^-,\ell} - \mathcal{OP}_{r;\beta^-,\ell}\right), \\ U &= \{0, 2\} \in \binom{[0,\ell-1]}{\beta_n-k+\ell}, \\ B' &= \{3\} \in \multiset{[0,\ell]}{k-\ell-1}. \end{align*} Block 012 is labeled 0, block 0123 is labeled 1, and block 23 is labeled 2. The space to the far left receives label 3. First we take $i=3$ from $B$ and insert a new singleton 4 block at the far left, yielding $4/23/0123/012$. Next we take $i=2$ from $U$, so we insert a 4 into the 23 block and get $4/234/0123/012$. Then we take $i=0$ from $U$ and obtain $4/234/0123/0124$. Finally, since $B = B' \cup \{0\}$, we insert a singleton 4 block at the far right to get $4/234/0123/0124/4$. \subsection{The insertion map for maj} In order to define the map $\phi^{\mathrm{maj}}_{r;\beta,k,\ell}$, we shall introduce the \emph{descent-starred permutation notation} of an ordered partition. For any ordered partition $\pi = B_1/\cdots/B_n$, we write the numbers of each block in decreasing order, remove the slashes and add stars at the descent positions that are entirely contained in some block of $\pi$. This permutation with stars is called the \emph{descent-starred permutation notation} of $\pi$.
The set of positions with stars is denoted by $S(\pi)$, and the permutation is denoted by $\sigma(\pi)$, as introduced in Section 3.1. For example, if $\pi = 134/47/23$, then $\sigma(\pi)=4317432$, $S(\pi)=\{1,2,4,6\}$ and $4_\ast 3_\ast 17_\ast 43_\ast 2$ is the corresponding descent-starred permutation. The map \begin{align*} \phi^{\mathrm{maj}}_{r;\beta,k,\ell}&: \mathcal{OP}_{r;\beta^-,\ell} \times \binom{[0,\ell-1]}{\beta_n-k+\ell} \times \multiset{[0,\ell]}{k-\ell} \\ &+ \left(\mathcal{OP}^{\mathrm{all}}_{r;\beta^-,\ell} - \mathcal{OP}_{r;\beta^-,\ell}\right) \times \binom{[0,\ell-1]}{\beta_n-k+\ell} \times \multiset{[0,\ell]}{k-\ell-1} \rightarrow \mathcal{OP}_{r;\beta,k} \end{align*} is defined as follows. Given $\beta=(\beta_1,\ldots,\beta_n)$ and $\pi\in\mathcal{OP}_{r;\beta^-,\ell}$, we write $\pi$ in descent-starred notation and let $\sigma=\sigma(\pi)$. With labels $0,\ldots,\ell$, we first label the rightmost position, then the unstarred descent positions of $\pi$ from right to left, and then the unstarred non-descent positions (including the leftmost position) from left to right. For any $U\in \binom{[0,\ell-1]}{\beta_n-k+\ell}$ and $B\in \multiset{[0,\ell]}{k-\ell}$, we construct $\phi^{\mathrm{maj}}_{r;\beta,k,\ell}(\pi,U,B)$ by setting $U^+ = \{u+1:u\in U\}$ and then repeatedly removing the largest number $i$ from the multiset $U^+ \cup B$, taking from $B$ first if the largest numbers are equal. The algorithm for inserting $i$ is as follows: \begin{enumerate} \item Insert the number $n$ at the position with label $i$. \item Move each star that appears to the right of the new $n$ one descent to the left. \item If $i\in U^+$, then star the rightmost descent. \item Relabel the starred permutation as before, stopping at $i$ if $i\in B$ and at $i-1$ if $i\in U^+$. \end{enumerate} This process constructs all the ordered multiset partitions in $\mathcal{OP}_{r;\beta,k}$ such that the last block which is not a singleton $n$ does not contain a 0.
In order to construct the remaining ordered partitions in $\mathcal{OP}_{r;\beta,k}$, we take ordered multiset partitions $\pi$ in the set $\left(\mathcal{OP}^{\mathrm{all}}_{r;\beta^-,\ell} - \mathcal{OP}_{r;\beta^-,\ell}\right)$ such that the last block contains 0. Then for any $U\in \binom{[0,\ell-1]}{\beta_n-k+\ell}$ and $B'\in \multiset{[0,\ell]}{k-\ell-1}$, we set $U^+ = \{u+1:u\in U\}$ and $B=B'\cup \{0\}$, and we construct $\phi^{\mathrm{maj}}_{r;\beta,k,\ell}(\pi,U,B)$ by repeatedly inserting the numbers in the multiset $U^+\cup B$ in the same way. One can check easily that this gives all the ordered multiset partitions in $\mathcal{OP}_{r;\beta,k}$. The second author \cite{wilson} proved that the maj statistic increases by $i$ each time we insert an $i$ in the non-extended case, and the proof carries over naturally to the extended case; thus we have \begin{equation}\label{maj} \mathrm{maj}\left(\phi^{\mathrm{maj}}_{r;\beta,k,\ell}(\pi,U,B)\right) = \mathrm{maj}(\pi)+\sum_{u\in U} u+\sum_{b\in B} b. \end{equation} Consider again the example $r=2$, $\beta = (2,3,2,4)$, and $k = 5$, $\ell=3$, and \begin{align*} \pi &= 23/0123/012 \in\left(\mathcal{OP}^{\mathrm{all}}_{r;\beta^-,\ell} - \mathcal{OP}_{r;\beta^-,\ell}\right) ,\\ U &= \{0, 2\} \in \binom{[0,\ell-1]}{\beta_n-k+\ell}, \\ B' &= \{3\} \in \multiset{[0,\ell]}{k-\ell-1}. \end{align*} As a descent-starred permutation, we write $\pi$ as $\sigma(\pi) = 3_{\ast}23_{\ast}2_{\ast}1_{\ast}02_{\ast}1_{\ast}0$. The labeling of $\sigma(\pi)$ is \begin{align*} _1 3_{\ast}2_2 3_{\ast}2_{\ast}1_{\ast}0_3 2_{\ast}1_{\ast}0_0 \end{align*} We take a 3 from $B$ and, after inserting a 4 at the position with label 3 and shifting stars to the left, we get \begin{align*} 3_{\ast}23_{\ast}2_{\ast}1_{\ast}04_{\ast}2_{\ast}10 \end{align*} increasing the major index by 3.
We relabel and continue with a 3 from $U^{+}$, obtaining \begin{align*} 3_{\ast}2 4_{\ast} 3_{\ast}2_{\ast}1_{\ast}04_{\ast}21_{\ast}0 \end{align*} and starring the rightmost descent since this 3 comes from $U^{+}$. We take the 1 from $U^{+}$ and get \begin{align*} 3_{\ast}2 4_{\ast} 3_{\ast}2_{\ast}1_{\ast}04_{\ast}24_{\ast}1_{\ast}0 . \end{align*} Finally, we take the 0 we added to $B$ to obtain \begin{align*} 3_{\ast}2 4_{\ast} 3_{\ast}2_{\ast}1_{\ast}04_{\ast}24_{\ast}1_{\ast}04 = 23 / 01234 / 24 / 014 / 4 \in \mathcal{OP}_{r;\beta,k} . \end{align*} \subsection{The insertion map for dinv} We define a map \begin{multline*} \phi^{\mathrm{dinv}}_{r;\beta,k,\ell}: \mathcal{OP}_{r;\beta^-,\ell} \times \binom{[0,\ell-1]}{\beta_n-k+\ell} \times \multiset{[0,\ell]}{k-\ell} \\ + \left(\mathcal{OP}^{\mathrm{all}}_{r;\beta^-,\ell} - \mathcal{OP}_{r;\beta^-,\ell}\right) \times \binom{[0,\ell-1]}{\beta_n-k+\ell} \times \multiset{[0,\ell]}{k-\ell-1} \rightarrow \mathcal{OP}_{r;\beta,k}. \end{multline*} Given $\beta=(\beta_1,\ldots,\beta_n)$ and $\pi\in\mathcal{OP}_{r;\beta^-,\ell}$, we label the $\ell+1$ spaces of $\pi$ (the spaces between blocks as well as the spaces at the two ends) from right to left with the numbers $0,1,\ldots,\ell$, which we call the \emph{gap labels}. Next, we label the blocks from longest to shortest (from left to right for each length) with the numbers $0,1,\ldots,\ell-1$, which we call the \emph{block labels}. For any $U\in \binom{[0,\ell-1]}{\beta_n-k+\ell}$ and $B\in \multiset{[0,\ell]}{k-\ell}$, we can construct $\phi^{\mathrm{dinv}}_{r;\beta,k,\ell}(\pi,U,B)$ by inserting an $n$ into each block whose label is in $U$ and inserting a singleton block $\{n\}$ at the gap $b$ for each $b\in B$. This process constructs all the ordered multiset partitions in $\mathcal{OP}_{r;\beta,k}$ such that the last block which is not a singleton $n$ does not contain a 0.
In order to construct the remaining ordered partitions in $\mathcal{OP}_{r;\beta,k}$, we take ordered multiset partitions $\pi$ in $\left(\mathcal{OP}^{\mathrm{all}}_{r;\beta^-,\ell} - \mathcal{OP}_{r;\beta^-,\ell}\right)$. Then for any $U\in \binom{[0,\ell-1]}{\beta_n-k+\ell}$ and $B'\in \multiset{[0,\ell]}{k-\ell-1}$, we set the multiset $B=B'\cup \{0\}$, and we construct $\phi^{\mathrm{dinv}}_{r;\beta,k,\ell}(\pi,U,B)$ in the same way. One can check easily that this gives all the ordered multiset partitions in $\mathcal{OP}_{r;\beta,k}$, and the dinv statistic increases by $i$ each time we insert an $i$; thus we have \begin{equation}\label{dinv} \mathrm{dinv}\left(\phi^{\mathrm{dinv}}_{r;\beta,k,\ell}(\pi,U,B)\right) = \mathrm{dinv}(\pi)+\sum_{u\in U} u+\sum_{b\in B} b. \end{equation} Consider once more the example $r=2$, $\beta = (2,3,2,4)$, and $k = 5$, $\ell=3$, and \begin{align*} \pi &= 23/0123/012 \in\left(\mathcal{OP}^{\mathrm{all}}_{r;\beta^-,\ell} - \mathcal{OP}_{r;\beta^-,\ell}\right), \\ U &= \{0, 2\} \in \binom{[0,\ell-1]}{\beta_n-k+\ell} ,\\ B' &= \{3\} \in \multiset{[0,\ell]}{k-\ell-1}. \end{align*} We take a 3 from $B$ and insert a singleton 4 at the far left, obtaining $4 / 23 / 0123/ 012$. Then we take a 2 from $U$ and add a 4 to the 23 block to get $4 / 234 / 0123 / 012$. We take a 0 from $U$ and add a 4 to the 0123 block to get $4 /234 / 01234 / 012$. Finally, we take the 0 we added to $B$ and add a 4 at the far right to obtain $4 /234 / 01234 / 012 / 4 \in \mathcal{OP}_{r; \beta, k}$. According to the definitions of the maps $\phi^{\mathrm{inv}}_{r;\beta,k,\ell}$, $\phi^{\mathrm{maj}}_{r;\beta,k,\ell}$, $\phi^{\mathrm{dinv}}_{r;\beta,k,\ell}$ and Equations (\ref{inv}), (\ref{maj}) and (\ref{dinv}), one can conclude the following. \begin{theorem}\label{theorem:eq1} For any integers $n,r,k$ and composition $\beta$, $$ D_{r;\beta,k}^{\mathrm{inv}}(q) = D_{r;\beta,k}^{\mathrm{maj}}(q) = D_{r;\beta,k}^{\mathrm{dinv}}(q).
$$ \end{theorem} \noindent We mention the common recursion shared by these polynomials in Section 6. \section{The identity $D_{r;\beta,k}^{\mathrm{inv}}(q) =D_{r;\beta,k}^{\mathrm{minimaj}}(q)$} The goal of this section is to generalize the $(\mathrm{inv}, \mathrm{minimaj})$ equi-distribution theorem of Rhoades \cite{Rhoades} from the set $\mathcal{OP}_{\beta,k}$ to the set $\mathcal{OP}_{r;\beta,k}$. For convenience, we shall abbreviate $D^{\mathrm{inv}}$ and $D^{\mathrm{minimaj}}$ to $I$ and $M$; that is, we use the notation \begin{eqnarray*} &&I_{\beta,k}(q) = D_{\beta,k}^{\mathrm{inv}}(q),\quad I_{\beta,\alpha}(q) = D_{\beta,\alpha}^{\mathrm{inv}}(q), \quad I_{r;\beta,k}(q) = D_{r;\beta,k}^{\mathrm{inv}}(q), \quad I_{r;\beta,\alpha}(q) = D_{r;\beta,\alpha}^{\mathrm{inv}}(q), \\ &&M_{\beta,k}(q) = D_{\beta,k}^{\mathrm{minimaj}}(q), \quad M_{\beta,\alpha}(q) = D_{\beta,\alpha}^{\mathrm{minimaj}}(q),\\ &&M_{r;\beta,k}(q) = D_{r;\beta,k}^{\mathrm{minimaj}}(q), \quad M_{r;\beta,\alpha}(q) = D_{r;\beta,\alpha}^{\mathrm{minimaj}}(q). \end{eqnarray*} Further, we let \begin{eqnarray*} &&I^{\mathrm{all}}_{r;\beta,k}(q) = D_{r;\beta,k}^{\mathrm{inv}+}(q), \quad I^{\mathrm{all}}_{r;\beta,\alpha}(q) = D_{r;\beta,\alpha}^{\mathrm{inv}+}(q), \\ && M^{\mathrm{all}}_{r;\beta,k}(q) = D_{r;\beta,k}^{\mathrm{minimaj}+}(q), \quad \mbox{and}\quad M^{\mathrm{all}}_{r;\beta,\alpha}(q) = D_{r;\beta,\alpha}^{\mathrm{minimaj}+}(q) \end{eqnarray*} denote the generating functions that allow 0 in the last block. \subsection{The recursion for inv} For any integer $m$ and set $S\subseteq [m]$, we let $\chi_S = (\chi_S(1),\ldots,\chi_S(m))$ be the sequence such that $\chi_S(i)=\chi(i\in S)$, where $\chi$ of a statement is 1 if the statement is true and 0 if it is false. For two sequences $\gamma_1$ and $\gamma_2$ of the same length, we write $\gamma_1\leq\gamma_2$ if each entry of $\gamma_1$ is less than or equal to the corresponding entry of $\gamma_2$.
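The recursion for inv established in this subsection, Equation (\ref{eqI1}), can be sanity-checked computationally on small cases. The following Python sketch is illustrative only (the function names are ours, and inv is computed via the pair-counting reading of the Section 3 definition): it compares a brute-force enumeration of $I_{r;\beta,k}(q)$ against the right-hand side of the recursion, which peels off the last block $S$.

```python
from itertools import product, combinations
from collections import Counter

def omps(r, beta, k, allow_zero_last=False):
    """Ordered multiset partitions of {0^r} together with A(beta) into k
    nonempty set-blocks; 0 is excluded from the last block unless
    allow_zero_last (the OP^all variant)."""
    letters = [0] * r + [i + 1 for i, b in enumerate(beta) for _ in range(b)]
    if not letters:
        if k == 0:
            yield ()
        return
    seen = set()
    for assign in product(range(k), repeat=len(letters)):
        blocks = [set() for _ in range(k)]
        for letter, pos in zip(letters, assign):
            if letter in blocks[pos]:          # blocks are sets: no repeats
                break
            blocks[pos].add(letter)
        else:
            if all(blocks) and (allow_zero_last or 0 not in blocks[-1]):
                key = tuple(tuple(sorted(b)) for b in blocks)
                if key not in seen:
                    seen.add(key)
                    yield key

def inv(blocks):
    """Letters in earlier blocks exceeding the minimum of a later block."""
    return sum(1 for j in range(1, len(blocks)) for i in range(j)
               for x in blocks[i] if x > min(blocks[j]))

def lhs(r, beta, k):
    """I_{r;beta,k}(q) by direct enumeration, as {power: coefficient}."""
    return Counter(inv(mu) for mu in omps(r, beta, k))

def rhs(r, beta, k):
    """Right-hand side of the recursion: sum over last blocks S."""
    m, total = len(beta), Counter()
    for size in range(1, m + 1):
        for S in combinations(range(1, m + 1), size):
            chi = [1 if i in S else 0 for i in range(1, m + 1)]
            if any(c > b for b, c in zip(beta, chi)):
                continue                       # need chi_S <= beta
            power = sum(beta[i - 1] - chi[i - 1]
                        for i in range(min(S) + 1, m + 1))
            rest = tuple(b - c for b, c in zip(beta, chi))
            inner = Counter(inv(mu) for mu in omps(r, rest, k - 1, True))
            for p, c in inner.items():
                total[p + power] += c
    return total

for r, beta, k in [(1, (1, 1), 2), (0, (2, 1), 2), (2, (1, 2), 3)]:
    assert lhs(r, beta, k) == rhs(r, beta, k)
print("inv recursion verified on small cases")
```

The `rhs` function mirrors the proof below: removing the last block $S$ costs one inversion for each remaining letter exceeding $\min(S)$, which is exactly the exponent $\sum_{i=\min(S)+1}^{m}(\beta_i-\chi_S(i))$.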
Given an integer $n$, a weak composition $\beta=(\beta_1,\ldots,\beta_m)\vDash n$ and a strong composition $\alpha=(\alpha_1,\ldots,\alpha_k)\vDash_{\mathrm{strong}} n$, we still use the notation $\alpha^-=(\alpha_1,\ldots,\alpha_{k-1})$ for the composition of $n-\alpha_k$ obtained by removing the last part of $\alpha$. Recall that by definition, $\mathcal{OP}_{r;\beta,\alpha}$ is the set of extended ordered multiset partitions of the multiset $A(\beta)\cup \{0^r\}$ and shape $\alpha$ such that $0$ is not contained in the last block, while $\mathcal{OP}^{\mathrm{all}}_{r;\beta,\alpha}$ allows $0$ in the last block. Their generating functions tracking the statistic inv are $I_{r;\beta,\alpha}(q)$ and $I^{\mathrm{all}}_{r;\beta,\alpha}(q)$ respectively. Then we have the following theorem, which is analogous to Lemma 3.2 in \cite{Rhoades}. \begin{theorem}\label{theorem:Ialpha} The generating function $I_{r;\beta,\alpha}(q)$ satisfies the following equation: \begin{equation}\label{eqI} I_{r;\beta,\alpha}(q) = \sum_{\substack{S\subseteq[m],\ |S|=\alpha_k,\\\chi_S\leq \beta}} q^{\sum_{i=\min(S)+1}^{m}(\beta_i-\chi_S(i))} I^{\mathrm{all}}_{r;\beta-\chi_S,\alpha^-}(q). \end{equation} \end{theorem} \begin{proof} Consider an ordered multiset partition $\mu=B_1/\cdots/B_k\in\mathcal{OP}_{r;\beta,\alpha}$. Writing $S=B_k$, we have that $B_1/\cdots/B_{k-1}\in\mathcal{OP}^{\mathrm{all}}_{r;\beta-\chi_S,\alpha^-}$. Since each element in the ordered partition $B_1/\cdots/B_{k-1}$ that is bigger than $\min(S)$ creates an inversion with the last block, Equation (\ref{eqI}) follows immediately. \end{proof} Summing over all the strong compositions $\alpha$ of $n$ with $k$ parts, we have the following corollary.
\begin{corollary}\label{corollary:Irecursion} The generating function $I_{r;\beta,k}(q)$ satisfies the following equation: \begin{equation}\label{eqI1} I_{r;\beta,k}(q) = \sum_{\substack{S\subseteq[m],\ \chi_S\leq \beta}} q^{\sum_{i=\min(S)+1}^{m}(\beta_i-\chi_S(i))} I^{\mathrm{all}}_{r;\beta-\chi_S,k-1}(q). \end{equation} \end{corollary} We shall prove a similar result about the statistic minimaj in the following subsection. \subsection{The recursion for minimaj} In our new notation, \coref{Dplus} shows that \begin{equation}\label{IMeq} I^{\mathrm{all}}_{r;\beta,k}(q) = M^{\mathrm{all}}_{r;\beta,k}(q). \end{equation} In this subsection we shall prove the following. \begin{theorem}\label{theorem:Mrecursion} The generating function $M_{r;\beta,k}(q)$ satisfies the following equation: \begin{equation}\label{eqM} M_{r;\beta,k}(q) = \sum_{\substack{S\subseteq[m],\ \chi_S\leq \beta}} q^{\sum_{i=\min(S)+1}^{m}(\beta_i-\chi_S(i))} M^{\mathrm{all}}_{r;\beta-\chi_S,k-1}(q). \end{equation} \end{theorem} Then as a consequence of \coref{Irecursion}, \tref{Mrecursion} and Equation (\ref{IMeq}), we have \begin{theorem}\label{theorem:eq2} For any integers $n,r,k$ and composition $\beta$, $$ D_{r;\beta,k}^{\mathrm{inv}}(q) = D_{r;\beta,k}^{\mathrm{minimaj}}(q). $$ \end{theorem} In order to prove \tref{Mrecursion}, we need to state some combinatorial actions and properties related to the statistic minimaj. Throughout, for integers $n,r$, we consider ordered multiset partitions of the form $\mu=B_1/\cdots/B_k\in\mathcal{OP}_{r;\beta,\alpha}$, where $\beta=(\beta_1,\ldots,\beta_m)\vDash n$ is a weak composition and $\alpha=(\alpha_1,\ldots,\alpha_k)\vDash_{\mathrm{strong}} n$ is a strong composition. We let $\alpha^-=(\alpha_1,\ldots,\alpha_{k-1})$. A \emph{$k$-segmented word} is a pair $(w,\alpha)$ such that $w=w_1\cdots w_n$ is a word of length $n$ and $\alpha$ is a strong composition of $n$ with $k$ parts.
We write such a $k$-segmented word in the form of the word $w$ with dots after $w_{\alpha_1},w_{\alpha_1+\alpha_2},\ldots,w_{\alpha_1+\cdots+\alpha_{k-1}}$. The components of the word separated by the dots are called \emph{segments}. For example, the 3-segmented word $(3342412,(2,3,2))$ can be written as $33\cdot 424\cdot 12$. For an ordered multiset partition $\mu=B_1/\cdots/B_k\in\mathcal{OP}_{r;\beta,\alpha}$ where $B_i =\{j_1^{(i)}<\ldots<j_{\alpha_i}^{(i)}\}$, we let $w(\mu)=\mu[1]\cdot\mu[2]\cdot\ \cdots\ \cdot\mu[k]$ denote the \emph{$k$-segmented word} obtained in the following way: we let the last segment $\mu[k]$ be the increasing word $j_1^{(k)}\cdots j_{\alpha_k}^{(k)}$. For $1\leq i\leq k-1$, assume that the $(i+1)$st segment $\mu[i+1]$ is defined and let $x$ be the first letter of $\mu[i+1]$. Let $j_1^{(i)},\ldots,j_m^{(i)}$ be the numbers that are less than or equal to $x$, and let $j_{m+1}^{(i)},\ldots,j_{\alpha_i}^{(i)}$ be the numbers that are greater than $x$; then we define $\mu[i]=j_{m+1}^{(i)} \cdots j_{\alpha_i}^{(i)} j_1^{(i)}\cdots j_m^{(i)}$. We also refer to $w(\mu)$ as the permutation component of the segmented word when this causes no ambiguity. Note that $w(\mu)$ as a permutation coincides with our definition of $\mathrm{miniword}(\mu)$. Thus we have the following lemma: \begin{lemma} Let $\mu$ be an ordered multiset partition, then $\mathrm{minimaj}(\mu) = \mathrm{maj}(w(\mu))$. \end{lemma} Rhoades in \cite{Rhoades} defined an action on ordered multiset partitions $\mu$ that interchanges the numbers of $i$'s and $(i+1)$'s in $\mu$, called the \emph{$t_i$-switch map}. Let $s_i$ be the action on a sequence that interchanges its \thn{i} and $(i+1)$st components; then Rhoades proved the following theorem: \begin{theorem}[Rhoades]\label{theorem:tswitch} There exists a bijective map $$ t_i: \quad \mathcal{OP}_{\beta,k} \rightarrow \mathcal{OP}_{s_i\cdot \beta, k} $$ such that $\mathrm{minimaj}(t_i(\mu))=\mathrm{minimaj}(\mu)$.
\end{theorem} Recalling that we can add 1 to all the entries of an ordered multiset partition in $\mathcal{OP}^{\mathrm{all}}_{r;(\beta_1,\ldots,\beta_n),k}$ to get an ordered multiset partition in $\mathcal{OP}_{(r,\beta_1,\ldots,\beta_n),k}$, we can naturally generalize \tref{tswitch} to the set $\mathcal{OP}^{\mathrm{all}}_{r;\beta,k}$, which allows us to rearrange the components of $\beta$ and the number $r$: \begin{corollary}\label{corollary:perms} Let $(\gamma_0,\gamma_1,\ldots,\gamma_m)$ be any rearrangement of the sequence $(r,\beta_1,\ldots,\beta_m)$; then there is a minimaj-preserving bijection $\psi$ between the sets $\mathcal{OP}^{\mathrm{all}}_{\gamma_0;(\gamma_1,\ldots,\gamma_m),k}$ and $\mathcal{OP}^{\mathrm{all}}_{r;(\beta_1,\ldots,\beta_m),k}$. \end{corollary} It is clear that for an ordered multiset partition, the contribution of the last block to minimaj only depends on the minimum element of the last block. Thus we have the following lemma. \begin{lemma} Let $B_1/\cdots/B_k$ be an ordered multiset partition. Then \begin{equation} \mathrm{minimaj}(B_1/\cdots/B_k) = \mathrm{minimaj}(B_1/\cdots/\min(B_k)). \end{equation} \end{lemma} Rhoades in \cite{Rhoades} defined an action of the group $\mathbb{Z}_m = \langle c \rangle$ on $\mathcal{OP}_{\beta,\alpha}$ by decrementing all the letters by 1 modulo $m$. Analogously, we define the group action of $\mathbb{Z}_{m+1} = \langle c \rangle$ on $\mathcal{OP}^{\mathrm{all}}_{r;\beta,\alpha}$ by decrementing all the letters by 1 modulo $m+1$. Rhoades in \cite{Rhoades} proved the following: \begin{lemma}[Lemma 3.4 in \cite{Rhoades}]\label{lemma:33rho} If the last component of $\alpha$ is 1, then $w(c.\mu) = c.w(\mu)$ for any $\mu\in\mathcal{OP}_{\beta,\alpha}$. \end{lemma} Recall that there is a bijective correspondence between $\mathcal{OP}^{\mathrm{all}}_{r;(\beta_1,\ldots,\beta_m),\alpha}$ and $\mathcal{OP}_{(r,\beta_1,\ldots,\beta_m),\alpha}$.
It follows from \lref{33rho} and our new group action of $\mathbb{Z}_{m+1}$ that \begin{lemma}\label{lemma:33new} If the last component of $\alpha$ is 1, then $w(c.\mu) = c.w(\mu)$ for any $\mu\in\mathcal{OP}^{\mathrm{all}}_{r;\beta,\alpha}$. \end{lemma} Another property of the action $c$ is summarized in the following lemma: \begin{lemma}\label{lemma:34new} For any word $w=w_1\cdots w_n$ with content $\{0^r, 1^{\beta_1},\ldots,m^{\beta_m}\}$ such that $w_n\neq 0$, we have $\mathrm{maj}(c.w)=\mathrm{maj}(w)+r$. \end{lemma} \begin{proof} Under $c$, the letters $0$ become the maximal letter $m$ while all other letters decrease by 1, preserving their relative order. Hence the map $c$ moves the descent occurring just before a maximal contiguous run of $0$'s in $w$ to the position at the end of this run (or creates a descent there if the run starts the word), increasing maj by the length of the run; summing over all such runs gives a total increase of $r$. \end{proof} Now we can prove the following lemma. \begin{lemma}\label{lemma:lm36new} Given integers $n$ and $r$, let $\alpha=(\alpha_1,\ldots,\alpha_k)\vDash_{\mathrm{strong}} n$ be a strong composition with $\alpha_k=1$ and let $\beta=(\beta_1,\ldots,\beta_m)\vDash n$ be a weak composition. We have \begin{equation}\label{l36eq} M_{r;\beta,\alpha}(q) = \sum_{\beta_i>0}q^{\beta_{i+1}+\cdots+\beta_m} M^{\mathrm{all}}_{r;(\beta_{i+1},\ldots,\beta_m,\beta_1,\ldots,\beta_{i}-1),\alpha^-}(q). \end{equation} \end{lemma} \begin{proof} We shall prove the recursion above for the generating function $M_{r;\beta,\alpha}(q)$ where $\alpha_k=1$. Without loss of generality, we assume that $\beta$ is a strong composition. Consider an ordered multiset partition $\mu\in\mathcal{OP}_{r;\beta,\alpha}$. If the last block of $\mu$ is a singleton $\{m\}$, then clearly it does not contribute anything to $\mathrm{minimaj}(\mu)$. Writing $\mu=\mu'/m$, we have $\mathrm{minimaj}(\mu)=\mathrm{minimaj}(\mu')$. Next consider the case when $\mu=\mu'/(m-i)$ ends with $m-i$ for some $i\in\{1,\ldots,m-1\}$; then $\mu'\in\mathcal{OP}^{\mathrm{all}}_{r;(\beta_1,\ldots,\beta_{m-i}-1,\ldots,\beta_m),\alpha^-}$.
Then, as a consequence of \lref{33new} and \lref{34new}, we have \begin{eqnarray*} \mathrm{minimaj}(\mu'/(m-i))& =& \mathrm{minimaj}(c^i.((c^{-i}.\mu')/m))\\ &=& \mathrm{minimaj}((c^{-i}.\mu')/m)+ \beta_{m-i+1}+\cdots +\beta_m \end{eqnarray*} where $c^{-i}.\mu'\in\mathcal{OP}^{\mathrm{all}}_{\beta_{m-i+1};(\beta_{m-i+2},\ldots,\beta_{m},r,\beta_{1},\ldots,\beta_{m-i}-1),\alpha^-}$, and we have \begin{equation}\label{eqntemp} M_{r;\beta,\alpha}(q) = \sum_{\beta_{m-i}>0}q^{\beta_{{m-i}+1}+\cdots+\beta_m} M^{\mathrm{all}}_{\beta_{{m-i}+1};(\beta_{{m-i}+2},\ldots,\beta_m,r,\beta_1,\ldots,\beta_{{m-i}}-1),\alpha^-}(q). \end{equation} Equation (\ref{l36eq}) follows immediately from Equation (\ref{eqntemp}) and \coref{perms}, since we can permute $r$ and the components of $\beta$. \end{proof} Now we are ready to prove \tref{Mrecursion}. \\ \textbf{Proof of \tref{Mrecursion}.} Let $\mu=B_1/\cdots/B_k\in\mathcal{OP}_{r;\beta,\alpha}$. For the case when $\alpha=(\alpha_1,\ldots,\alpha_{k-1},1)$, we have the following recursion as a consequence of \lref{lm36new}: \begin{eqnarray} \sum_{\alpha^-} M_{r;\beta,\alpha}(q) &=& \sum_{\alpha^-} \sum_{\beta_i>0}q^{\beta_{i+1}+\cdots+\beta_m} M^{\mathrm{all}}_{r;(\beta_{i+1},\ldots,\beta_m,\beta_1,\ldots,\beta_{i}-1),\alpha^-}(q)\nonumber\\ &=& \sum_{\beta_i>0}\sum_{\alpha^-} q^{\beta_{i+1}+\cdots+\beta_m} M^{\mathrm{all}}_{r;(\beta_{i+1},\ldots,\beta_m,\beta_1,\ldots,\beta_{i}-1),\alpha^-}(q)\nonumber\\ &=& \sum_{\beta_i>0} q^{\beta_{i+1}+\cdots+\beta_m} M^{\mathrm{all}}_{r;(\beta_{i+1},\ldots,\beta_m,\beta_1,\ldots,\beta_{i}-1),k-1}(q)\nonumber\\ &=& \sum_{\beta_i>0} q^{\beta_{i+1}+\cdots+\beta_m} M^{\mathrm{all}}_{r;(\beta_1,\ldots,\beta_{i}-1,\ldots,\beta_m),k-1}(q).\label{resultM} \end{eqnarray} The first line is Equation (\ref{l36eq}) summed over all compositions $\alpha^-\vDash_{\mathrm{strong}} (n-1)$ with $k-1$ parts; the second line interchanges the order of the two summations; the third line evaluates the inner
sum over all possible $\alpha^-$'s; the last line is an application of \coref{perms}. More generally, if the last block is of size $\alpha_k\geq 1$, then the following equation follows as a consequence of Equation (\ref{resultM}): \begin{equation} M_{r;\beta,k}(q) = \sum_{\substack{B_k\subseteq[m],\ \chi_{B_k}\leq \beta}} q^{\sum_{i=\min(B_k)+1}^{m}(\beta_i-\chi_{B_k}(i))} M^{\mathrm{all}}_{r;\beta-\chi_{B_k},k-1}(q), \end{equation} which proves \tref{Mrecursion}. \hfill\qed \section{Conclusion and future directions} In this section, we give a brief summary of the results in our paper and discuss directions for future work. \subsection{The shared distribution} In Sections 4 and 5, we proved the equi-distribution of the statistics inv, maj, dinv and minimaj on the set of extended ordered multiset partitions. \begin{corollary}\label{corollary:newDun} For any integers $n,r,k$ and composition $\beta$, $$ D_{r;\beta,k}^{\mathrm{inv}}(q) = D_{r;\beta,k}^{\mathrm{maj}}(q) = D_{r;\beta,k}^{\mathrm{dinv}}(q) = D_{r;\beta,k}^{\mathrm{minimaj}}(q). $$ \end{corollary} Given the work of D'Adderio, Iraci and Wyngaerd \cite{Michele}, we have the following. \begin{theorem}[D'Adderio, Iraci and Wyngaerd]\label{theorem:michele} For any integers $n,k,r\geq 0$, we have the equality \begin{equation} \mathrm{Rise}_{r;n,k}[X;q,0]= \mathrm{Rise}_{r;n,k}[X;0,q]= \Delta'_{e_k}\Delta_{h_r}e_n|_{t=0} = \Delta'_{e_k}\Delta_{h_r}e_n|_{q=0, t=q}. \end{equation} \end{theorem} These results can be combined as follows. \begin{corollary} For any integers $n,k,r\geq 0$, we have the equality \begin{multline} \mathrm{Rise}_{r;n,k}[X;q,0]= \mathrm{Rise}_{r;n,k}[X;0,q]= \mathrm{Val}_{r;n,k}[X;q,0]= \mathrm{Val}_{r;n,k}[X;0,q] \\= \Delta'_{e_k}\Delta_{h_r}e_n|_{t=0} = \Delta'_{e_k}\Delta_{h_r}e_n|_{q=0, t=q}.
\end{multline} \end{corollary} Define the \emph{Mahonian distribution} on $\mathcal{OP}_{r;\beta,k}$ to be the polynomial $$ D_{r;\beta,k}(q) := D_{r;\beta,k}^{\mathrm{inv}}(q) = D_{r;\beta,k}^{\mathrm{maj}}(q) = D_{r;\beta,k}^{\mathrm{dinv}}(q) = D_{r;\beta,k}^{\mathrm{minimaj}}(q) $$ and let $ D^+_{r;\beta,k}(q) := D_{r;\beta,k}^{\mathrm{inv}+}(q) = D_{r;\beta,k}^{\mathrm{maj}+}(q) = D_{r;\beta,k}^{\mathrm{dinv}+}(q) = D_{r;\beta,k}^{\mathrm{minimaj}+}(q) $. Then $D_{r;\beta,k}(q)$ generalizes the \emph{Mahonian distribution on ordered multiset partitions} $D_{\beta,k}(q)$ in \cite{wilson} in the sense that \begin{equation*} D_{\beta,k}(q)=D_{0;\beta,k}(q). \end{equation*} By any of Equations (\ref{inv}), (\ref{maj}) and (\ref{dinv}), we have the base cases $D_{0;\emptyset,0}(q)=1$ and $D_{r;\emptyset,k}(q)=0$ for $r+k>0$, together with the recursion: \begin{eqnarray*} D_{r;\beta,k}(q) & =& \sum_{\ell = 0}^{k} q^{\binom{\beta_n-k+\ell}{2}} {\ell \brack \beta_n-k+\ell}_q {k \brack \ell}_q D_{r;\beta^-,\ell}(q)\\ && \qquad + q^{\binom{\beta_n-k+\ell}{2}} {\ell \brack \beta_n-k+\ell}_q {k-1 \brack \ell}_q \left(D^+_{r;\beta^-,\ell}(q)-D_{r;\beta^-,\ell}(q)\right)\\ & =& \sum_{\ell = 0}^{k} q^{\binom{\beta_n-k+\ell}{2}} {\ell \brack \beta_n-k+\ell}_q \left( {k-1 \brack \ell-1}_q D_{r;\beta^-,\ell}(q) + {k-1 \brack \ell}_q D^+_{r;\beta^-,\ell}(q) \right). \end{eqnarray*} Note that $D_{r;\beta,k}(q)$ is a generalization of the \emph{$q$-Stirling number} $S_{n,k}(q)$ defined by \begin{equation} S_{n,k}(q)=S_{n-1,k-1}(q) + [k]_q S_{n-1,k}(q) \end{equation} as a consequence of the following equation due to the work of the second author \cite{wilson}: \begin{equation} D_{0;1^n,k}(q) = S_{n,k}(q). \end{equation} \subsection{Schur positivity} By fixing the positions of the zero valleys in a particular Dyck path, one obtains an LLT polynomial \cite{LLT}.
As a result, the combinatorial side of the Extended Delta Conjecture must be Schur positive, although there is no known Schur expansion of these polynomials. The original Delta Conjecture has two explicit (and not obviously equivalent) Schur expansions at $q$ or $t = 0$, via analysis of the major index statistic \cite{wilson} and the minimaj statistic \cite{crystal}. The latter work actually gives two proofs of Schur positivity at $q$ or $t = 0$, one using the theory of crystals and one using a bijection with skew Schur functions. The skew Schur function bijection is refined enough to carry over to the Extended Delta Conjecture case. Given an extended ordered set partition $\pi \in \mathcal{OP}_{r;n,k}$, recall from Subsection 3.1 that $\mathrm{miniword}(\pi)$ is the word obtained by rearranging the parts of $\pi$ to minimize the major index of the resulting word. Given a nonnegative integer $\ell$, sets $D \subseteq [n+r-1]$ and $I \subseteq [k-1]$ of size $\ell$, and words $z \in \mathbb{N}^{\ell+1}$, $w \in \{0,1\}^{\ell+1}$, we let $M(D, I, z, w)$ be the set of $\pi \in \mathcal{OP}_{r;n,k}$ such that \begin{itemize} \item $\mathrm{miniword}(\pi)$ has descents exactly at the positions in $D$, \item the $j$th entry of $I$ gives the block containing the $j$th descent in $\mathrm{miniword}(\pi)$, \item the $j$th weakly increasing run in $\mathrm{miniword}(\pi)$ begins with $z_j$ zeros, and \item $z_j - w_j$ of the $z_j$ zeros that begin the $j$th weakly increasing run in $\mathrm{miniword}(\pi)$ occur at the beginning of a block. \end{itemize} The map from Proposition 3.1 in \cite{crystal} gives a bijection from $M(D, I, z, w)$ to fillings of a certain collection of skew shapes, all but one of which are vertical strips, where the zeros in $\pi$ get mapped to \emph{fixed} positions outside the skew shapes. We depict an example of this map in Figure \ref{fig:skew}, and refer the reader to \cite{crystal} for a detailed description.
\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.5] \node at (0,-2.5) {${\color{blue} 0} {\color{red}24} |{\color{blue}4 }.{\color{orange} 2}|{\color{blue}3}| {\color{blue}3} {\color{orange}6}.{\color{violet} 0}|{\color{blue} 2} {\color{violet}3}. {\color{teal}0}| {\color{blue}1}{\color{teal}35}$}; \node at (5,-2.5) {$\mapsto$}; \draw (7,0) grid (8,-2); \draw (9,-1) grid (10,-2); \draw (11,-1) grid (12,-3); \draw (13,-4) grid (14,-6); \draw (14,-2) grid (15,-4); \draw (15,0) grid (16,-3); \node[color=orange] at (7.5,-0.5) {2}; \node[color=orange] at (7.5,-1.5) {6}; \node[color=violet] at (9.5,-0.5) {0}; \node[color=violet] at (9.5,-1.5) {3}; \node[color=teal] at (11.5,-0.5) {0}; \node[color=teal] at (11.5,-1.5) {3}; \node[color=teal] at (11.5,-2.5) {5}; \node[color=blue] at (13.5,-3.5) {0}; \node[color=red] at (13.5,-4.5) {2}; \node[color=red] at (13.5,-5.5) {4}; \node[color=blue] at (14.5,-2.5) {3}; \node[color=blue] at (14.5,-3.5) {4}; \node[color=blue] at (15.5,-0.5) {1}; \node[color=blue] at (15.5,-1.5) {2}; \node[color=blue] at (15.5,-2.5) {3}; \end{tikzpicture} \end{center} \caption{An example of the bijection from \cite{crystal} applied to extended ordered set partitions. Here $\pi \in \mathcal{OP}_{3;12,6}$ is written in minimaj order and its descents are denoted by periods. We have $D = \{4,8,11\}$, $I=\{2,4,5\}$, $z = (1,0,1,1)$, and $w = (0, 0, 1, 1)$. The leading entries in blocks and the entries in the first weakly increasing run in $\mathrm{miniword}(\pi)$ get sent to the ribbon shape, while other entries get sent to vertical strips.} \label{fig:skew} \end{figure}
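The minimaj order used in Figure \ref{fig:skew} can be made algorithmic. The following Python sketch (illustrative only; the function names are ours) renders the segmented-word construction $w(\mu)$ from Subsection 5.2: the last block is written increasingly, and each earlier block is split around the first letter of the segment to its right; minimaj is then the major index of the resulting word.

```python
def miniword(blocks):
    """w(mu): build segments right to left. Each block B_i is written as
    (letters > x, ascending) followed by (letters <= x, ascending),
    where x is the first letter of the segment to its right."""
    segs = [sorted(blocks[-1])]            # last segment: increasing order
    for b in reversed(blocks[:-1]):
        x = segs[0][0]                     # first letter of segment i+1
        small = sorted(v for v in b if v <= x)
        big = sorted(v for v in b if v > x)
        segs.insert(0, big + small)
    return [v for seg in segs for v in seg]

def maj(word):
    """Major index: sum of positions i (1-indexed) with word[i] > word[i+1]."""
    return sum(i + 1 for i in range(len(word) - 1) if word[i] > word[i + 1])

def minimaj(blocks):
    return maj(miniword(blocks))

# For mu = 23/12/13: segments 32 . 21 . 13, descents at positions 1 and 3
print(miniword([{2, 3}, {1, 2}, {1, 3}]), minimaj([{2, 3}, {1, 2}, {1, 3}]))
```

For instance, $\mu = 23/12/13$ yields $w(\mu) = 32\cdot 21\cdot 13$, whose major index, and hence $\mathrm{minimaj}(\mu)$, is $1+3=4$.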
By our equi-distribution results, the other three statistics also have Schur positive distributions over $\mathcal{OP}_{r;n,k}$. \begin{problem} Provide an RSK proof \cite{wilson} or a crystal-theoretic proof \cite{crystal} that the distribution of our statistics over $\mathcal{OP}_{r;n,k}$ is Schur positive. \end{problem} \subsection{The Extended Delta Conjecture} Though a number of cases of the Extended Delta Conjecture for $\Delta'_{e_k} \Delta_{h_r}e_n$ have been proved, the Extended Delta Conjecture in the general case is still open. The main goal of this study is: \begin{problem} Prove the Extended Delta Conjecture in general. \end{problem} \noindent This includes the original Delta Conjecture. \tref{equi} and \coref{newDun} show that the two versions of the Delta Conjecture and the Extended Delta Conjecture are equivalent in the case when $q$ or $t$ is $0$. However, there is no proof that the combinatorial sides of the two versions are equivalent in general. \begin{problem} Prove that \begin{equation} \mathrm{Rise}_{r;n,k}[X;q,t] = \mathrm{Rise}_{r;n,k}[X;t,q] = \mathrm{Val}_{r;n,k}[X;q,t]. \end{equation} \end{problem} \noindent This includes the problem of showing that $\mathrm{Rise}_{n,k}[X;q,t] = \mathrm{Rise}_{n,k}[X;t,q] = \mathrm{Val}_{n,k}[X;q,t]$. \subsection{Other potential conjectures} Finally, the Delta operator satisfies $\Delta_{h_r e_k}=\Delta_{s_{r,1^k}}+\Delta_{s_{r+1,1^{k-1}}}$ and $\Delta_{h_r e_k}=\Delta'_{e_k}\Delta_{h_r}+\Delta'_{e_{k-1}}\Delta_{h_r}$, so the Extended Delta Conjecture can be amended to involve sums of consecutive hook-shaped Schur functions in the subscript. It would be nice to have a conjecture for when just a single hook-shaped Schur function appears. \begin{problem} Give a combinatorial conjecture for the expression $\Delta_{s_\lambda} e_n$, where $\lambda\vdash n$ is of hook shape. \end{problem} \bigskip\bigskip\bigskip
https://arxiv.org/abs/1907.00268
The valley version of the Extended Delta Conjecture
The Shuffle Theorem of Carlsson and Mellit gives a combinatorial expression for the bigraded Frobenius characteristic of the ring of diagonal harmonics, and the Delta Conjecture of Haglund, Remmel and the second author provides two generalizations of the Shuffle Theorem to the delta operator expression $\Delta'_{e_k} e_n$. Haglund et al. also propose the Extended Delta Conjecture for the delta operator expression $\Delta'_{e_k} \Delta_{h_r}e_n$, which is analogous to the rise version of the Delta Conjecture. Recently, D'Adderio, Iraci and Wyngaerd proved the rise version of the Extended Delta Conjecture in the case when $t=0$. In this paper, we propose a new valley version of the Extended Delta Conjecture. Then, we work on the combinatorics of extended ordered multiset partitions to prove that the two conjectures for $\Delta'_{e_k} \Delta_{h_r}e_n$ are equivalent when $t$ or $q$ equals 0, thus proving the valley version of the Extended Delta Conjecture when $t$ or $q$ equals 0.
https://arxiv.org/abs/2006.01434
Resummed Wentzel-Kramers-Brillouin Series: Quantization and Physical Interpretation
The Wentzel-Kramers-Brillouin (WKB) perturbative series, a widely used technique for solving linear waves, is typically divergent and, at best, asymptotic, thus impeding predictions beyond the first few leading-order effects. Here, we report a closed-form formula that exactly resums the perturbative WKB series to all orders for the two-turning-point problem. The formula is elegantly interpreted as the action evaluated using the product of the spatially varying wavenumber and a coefficient related to the wave transmissivity; unit transmissivity yields the Bohr-Sommerfeld quantization.
\section{Introduction} Linear waves are ubiquitous in the world of physics with applications ranging from quantum mechanics \cite{berry1972} to electromagnetism, fluid dynamics, and astrophysics \cite{iyer1987}. The properties of these waves are encoded in their dispersion relations, which reveal the nature of the medium they traverse, as well as their generating source \cite{iyer1987,fuller2015} (e.g., gravitational waves). The problem of obtaining the dispersion relation is traditionally addressed with the Wentzel-Kramers-Brillouin (WKB) perturbative series \cite{benderorszag}, which, however, is typically divergent and, at best, asymptotic \cite{berry1972, robnik1997}. It thus presents challenges in predicting phenomena beyond the leading-order effects. Obtaining an expression for this series to all orders in perturbation theory, preferably in closed form, is therefore desirable. Despite its usefulness for uncovering previously unknown physical interpretations of the fully quantized wave, such an expression has remained elusive. Here, we accomplish both tasks: finding a closed-form formula and assigning a physical meaning to it. In the quest to develop a closed-form quantization condition (dispersion relation) for linear waves, several insightful but hitherto unsuccessful attempts have been undertaken: investigating the structure of higher-order expressions in the WKB series \cite{bender1977, barclay1994}, and employing the supersymmetric WKB method \cite{susy1985,susy1995}, the complex WKB method \cite{voros1983}, and the phase-integral method \cite{froman1965}. We present here a simple and insightful method that achieves this aim. The unexpected simplicity (both in mathematical structure and in geometric-optical interpretation) of the closed-form formula reported here, arising out of the unwieldy WKB series, is what we believe to be most striking about this work.
Our principal result is that the one-dimensional wave equation\footnote{ The book-keeping parameter $\epsilon$ can be set equal to $1$ at the outset or at the end of the perturbative calculations.} (or Schr\"{o}dinger equation) \begin{equation} \label{eq:WKB_diff} \epsilon^2 \frac{d^2 \psi(x)}{dx^2} = Q\left(x\right) \psi(x),\hspace{1.3cm} \psi\left(\pm \infty \right)=0, \end{equation} with $Q\left(x\right) = - k^2(x) = 2m[V(x)-E]/\hslash^2$, where $k(x)$ is the local wavenumber, $V(x)$ is the potential, $E$ is the energy eigenvalue, and $m$ and $\hslash$ are the mass and reduced Planck constant, respectively, for the case of two turning points [locations where $Q(z)=0$, with $z$ being a complex variable] has an exact closed-form quantization condition \begin{equation} \label{eq:quantpot} \oint_{\Gamma} k(z) \cdot \tau(z) dz = \left(K +\frac{1}{2}\right) 2\pi, \hspace{0.8cm} (K = 0,1,2,...), \end{equation} where $T(z)=\tau^2(z)$ is the wave-traversing medium's transmissivity of a layer of width $1/k(z)$, given as \cite{bremmer1951} \begin{equation} \label{eq:transmissivitydefn} \tau(z) = \sqrt{1 -\left(\frac{1}{2} \frac{d (k^{-1})}{dz}\right)^{2}} = \sqrt{1+ \left(\frac{S_{1}'}{S_{0}'}\right)^{2}}. \end{equation} The contour $\Gamma$ encircles the two turning points in the anticlockwise direction. ($S_{0}'$ and $S_{1}'$ are explained immediately below, but are presented here due to their elegant appearance.) Unit transmissivity reproduces the commonly known leading-order WKB approximation. \section{Conventional WKB} To begin with, consider the traditional transformation that is applied to the Schr\"{o}dinger equation \eqref{eq:WKB_diff}, \begin{subequations} \begin{align} \label{eq:sprime} \psi(z,\epsilon)&=\exp\left[ \frac{1}{\epsilon} S(z,\epsilon) \right],\\ \mathrm{or,\ } S'(z,\epsilon)&=\frac{\epsilon}{\psi(z,\epsilon)} \frac{d\psi(z,\epsilon)}{dz}, \end{align} \end{subequations} to obtain the Riccati equation: $\left(S'\right)^2 + \epsilon S^{''}=Q(z)$.
The quantization condition for the energy eigenvalue $E$ in Eq.~\eqref{eq:WKB_diff} is given in terms of the WKB eigenfunction's exponent in Eq.~\eqref{eq:sprime} as \footnote{ One way to derive this is by integrating $dS/dz = S'(z,\epsilon)$ in Eq.~\eqref{eq:sprime} along a contour $\Gamma$ such that it encloses, for the $K^{\mathrm{th}}$ energy level, all $K$ zeros of the eigenfunction on the real axis, between the classical turning points. This leads to $\oint_\Gamma S'(z,\epsilon) dz = \oint_\Gamma \epsilon \frac{d\mathrm{\ ln}\psi(z)}{dz} dz= \epsilon \mathrm{\ ln}\psi(z) \Big|_{\mathrm{evaluated\ once\ around\ }\Gamma}= K \cdot 2 \pi i \epsilon$.}: \begin{equation} \label{eq:logintegrated} \frac{1}{\epsilon} \oint_\Gamma S'(z,\epsilon) dz = K \cdot 2 \pi i. \end{equation} This equation, although exact, is not useful unless we know what $S'(z,\epsilon)$ is [or can solve the above Riccati equation or Eq.~\eqref{eq:WKB_diff} exactly]. We, therefore, proceed with the perturbative method to compute $S'(z,\epsilon)$ as \begin{equation} \label{eq:pertWKB} S'(z,\epsilon) = \sum_{n=0}^{\infty} \epsilon^n S_{n}'(z). \end{equation} Substituting this ansatz in the aforementioned Riccati equation and equating like powers of $\epsilon$, one finds the $S_{n}'(z)$ to obey the recurrence relation \begin{align} \label{eq:recur} &S_{0}'(z) = \sqrt{ Q(z)} = i k(z) ,\\ \label{eq:recurS1} &S_{1}'(z) = -\frac{1}{2} \frac{d}{dz} \mathrm{ln} [S_0^{'}(z)],\\ \label{eq:recur2} &2 S_{0}' S_{n}' + \sum_{j = 1}^{n-1} S_{j}' S_{n-j}' + S_{n-1}'' = 0, \hspace{0.8cm} (n \geq 2 ). \end{align} To all orders in perturbation theory, Eq.~\eqref{eq:logintegrated} thus becomes (it was first written in this form by Dunham \cite{dunham1932}): \begin{equation} \label{eq:Dunham1} \frac{1}{2 i \epsilon} \oint_{\Gamma} \sum_{n=0}^{\infty} \epsilon^n S_n'\left( z \right) dz = K \pi, \hspace{1cm} (K = 0,1,2,...). \end{equation} The series on the left-hand side (LHS) is now to be summed up.
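The recurrence above is easy to iterate with a computer algebra system. The following sketch (an illustration, not part of the original derivation) builds the first few $S_n'$ for the concrete choice $Q(z)=z^2-E$ and checks them against the textbook expressions $S_1'=-Q'/(4Q)$ and $S_2'=Q''/(8Q^{3/2})-5Q'^2/(32Q^{5/2})$:

```python
import sympy as sp

z, E = sp.symbols('z E')
Q = z**2 - E  # concrete (harmonic-oscillator-type) choice, for illustration only

# S'_0 and S'_1 as above; higher orders from 2 S'_0 S'_n + sum S'_j S'_{n-j} + S''_{n-1} = 0
Sp = [sp.sqrt(Q), -sp.Rational(1, 2) * sp.diff(sp.log(sp.sqrt(Q)), z)]
for n in range(2, 5):
    rhs = -(sum(Sp[j] * Sp[n - j] for j in range(1, n)) + sp.diff(Sp[n - 1], z))
    Sp.append(sp.simplify(rhs / (2 * Sp[0])))

# textbook checks: S'_1 = -Q'/(4Q) and S'_2 = Q''/(8 Q^{3/2}) - 5 Q'^2/(32 Q^{5/2})
assert sp.simplify(Sp[1] + sp.diff(Q, z) / (4 * Q)) == 0
S2_expected = (sp.diff(Q, z, 2) / (8 * Q**sp.Rational(3, 2))
               - 5 * sp.diff(Q, z)**2 / (32 * Q**sp.Rational(5, 2)))
assert sp.simplify(Sp[2] - S2_expected) == 0
```

The same loop generates $S_3'$, $S_4'$, and beyond, whose rapidly growing complexity is exactly the unwieldiness the resummation below tames.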
However, it is typically a divergent asymptotic series \cite{brezin1977, stone1978}. One way to circumvent this challenge is to employ the Borel summation technique and assign a physical meaning to such a series. When employed, the analytic continuation of the Borel transform, however, presents another difficulty -- singularities on the integration contour (see, e.g., \cite{zj1981, zj1984, unsal2012, aniceto2015}). Avoiding these with a contour deformation yields ambiguous imaginary terms that plague the energy eigenvalues $E$. Significant progress has been made in addressing such problems, more recently via the exact-WKB and uniform-WKB methods, following advances in resurgence theory developed by \'Ecalle and others in the 1980s \cite{ecalle1981, voros1983, sueishi2020, dunnetransseries2014}. In such works, the ambiguous imaginary terms arising from the Borel summation are made to cancel each other systematically to all orders by considering a ``resurgent trans-series" for the energy eigenvalues \cite{dunnetransseries2014}, as opposed to a perturbative series for them as we have done here. Such a path, although very insightful and useful, will not be pursued here, as we wish to present an alternative, simpler way to resum the divergent series of Eq.~\eqref{eq:Dunham1}, for a class of potentials, and to assign a physical meaning to the resummed series. Note that, in Eq.~\eqref{eq:Dunham1}, the term of first order in $\epsilon$ [i.e., $S'_1(z)$] can be integrated exactly \cite{benderorszag}: \begin{equation} \label{eq:piover2} \begin{aligned} &\frac{1}{2 i} \oint_{\Gamma} dz S_1'\left( z \right) = -\frac{1}{8 i} \oint_{\Gamma} dz \frac{d}{dz} \mathrm{ln}[Q(z)] \\ &= -\left.
\frac{1}{8 i} \mathrm{ln}\mathrm{\ }Q(z)\right\rvert_{\substack{\mathrm{evaluated\ once} \\ {\mathrm{around\ contour\ } \Gamma }}} = - \frac{1}{8 i} \left(2 \cdot 2\pi i \right) = -\frac{\pi}{2}, \end{aligned} \end{equation} where evaluating the logarithmic function around the contour $\Gamma$, enclosing the two turning points of $Q(z)$, yields $4 \pi i$. This total contribution of $-\pi/2$ on the LHS of Eq.~\eqref{eq:Dunham1} correctly accounts for the zero-point energy of the simple harmonic oscillator. The series in Eq.~\eqref{eq:Dunham1}, truncated at the first order, is the Bohr-Sommerfeld quantization relation \cite{berry1972}. The simple harmonic oscillator has been regarded as an exceptional case in that all other higher-order terms turn out to be zero. In general, however, this is not the case. Fr\"{o}man and Fr\"{o}man \cite{froman1965} have shown that all \textit{other} higher \textit{odd}-order terms in the WKB series, Eq.~\eqref{eq:Dunham1}, can be written as exact derivatives, regardless of the type of potential, and these vanish upon contour integration. Setting $\epsilon=1$, we can, therefore, rewrite Eq.~\eqref{eq:Dunham1} as \begin{equation} \label{eq:Dunham2} \frac{1}{2 i} \oint_{\Gamma} \sum_{n=0}^{\infty} S_{2n}'\left( z \right) dz = \left(K +\frac{1}{2}\right) \pi, \qquad (K = 0,1,2,...). \end{equation} Several attempts have been made in the past \cite{bender1977, romanovski2000} to infer the general expression for $S'_{2n}$ with the expectation of summing up the series afterwards. It has, however, heretofore proved insurmountable. It is the objective of this work to present such a summation in an exact manner for an arbitrary potential with two turning points. (Note that such a route has been possible only for a very few special kinds of potentials, e.g., the Eckart and Morse potentials \cite{bender1977, romanovski2000}.)
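The $-\pi/2$ contribution in Eq.~\eqref{eq:piover2} is also easy to confirm numerically. The sketch below (illustrative only) evaluates $\frac{1}{2i}\oint_\Gamma S_1'\,dz$ for the concrete choice $Q(z)=z^2-E$ on a circle enclosing both turning points, using the trapezoidal rule for the contour integral:

```python
import cmath, math

E = 2.0                                      # any E > 0; turning points at z = +/- sqrt(E)
S1p = lambda z: -2 * z / (4 * (z * z - E))   # S'_1 = -Q'/(4Q) with Q(z) = z^2 - E

# contour: circle |z| = 3 enclosing both turning points; trapezoidal rule in theta
N = 2000
I = 0j
for n in range(N):
    th = 2 * math.pi * n / N
    zc = 3 * cmath.exp(1j * th)
    I += S1p(zc) * 1j * zc * (2 * math.pi / N)   # dz = i z dtheta

# (1/2i) * I is the first-order contribution; it should equal -pi/2
assert abs(I / (2j) - (-math.pi / 2)) < 1e-10
```

Since the integrand is analytic on the contour, the trapezoidal rule converges exponentially fast here.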
Next, we outline our method of summing up the WKB series to all orders and interpret its physical meaning thereafter. Let us recast Eq.~\eqref{eq:Dunham2} as \begin{equation} \label{eq:Bohrcorrection2} \frac{1}{2 i} \oint_{\Gamma} S_0'\left( z \right) \cdot \sum_{n=0}^{\infty} T_{2n}\left( z \right) dz = \left(K +\frac{1}{2}\right) \pi, \end{equation} where $T_{2n}(z) = S_{2n}'\left( z \right)/S_0'\left( z \right)$ and the summation over $T_{2n}$ will be achieved below. We introduce the economical notation $L(z) \equiv 1/S_0'(z)$. ($L(z)$ can be regarded as having the dimension of length, as found using Eqs.~\eqref{eq:recur} and \eqref{eq:WKB_diff}, and so does $D^{-1}$, where $D \equiv d/dz$.) Dividing both sides of Eq.~\eqref{eq:recur2} by $\left(S_0'\right)^2$ to rewrite it in the new notation of $T$, $D,$ and $L$, we find \begin{align} &2 T_{n} + \sum_{j = 1}^{n-1} T_{j} T_{n-j} + L^2 \frac{d}{dz} \left[ \frac{S_{n-1}'}{S_0'}\cdot \frac{1}{L} \right]= 0,\\ \label{eq:recur3} &2 T_{n} = - \sum_{j = 1}^{n-1} T_{j} T_{n-j} - LDT_{n-1} + T_{n-1} DL. \end{align} \section{Pattern Searching Campaign} Note that $T_0=1$ and $T_1=DL/2$. This allows us to cast Eq.~\eqref{eq:recur3} in a neat final form as \begin{align} \label{eq:recur4} & T_{n} = T_{n-1} T_1 - \frac{1}{2}\sum_{j = 1}^{n-1} T_{j} T_{n-j} - \frac{1}{2}LDT_{n-1}, \qquad \hspace{0.3cm} (n \geq 2) .
\end{align} We provide below the expressions for $T_n$.\footnote{It is worth highlighting that $T_{n}$ is a dimensionless function as it has exactly the same number of $D$'s and $L$'s; see Ref.~\cite{appendix:equalnoofLandD}.} Notice the appearance of $L$ below, \begin{subequations} \begin{alignat}{3} \label{eq:T2} T_2 &= &&\textrm{\ \ \ }\frac{T_1^2}{2} \quad &&- L \times \left[\frac{DT_1}{2}\right], \\ T_3 &= && \quad &&-L\times \left[\frac{DT_2}{2}\right],\\ T_4 &= &&-\frac{T_1^4}{8} \quad &&- L \times \left[\frac{DT_3}{2} - \frac{T_1^2\cdot DT_1}{4} + \frac{LDT_1 \cdot DT_1}{8} \right],\\ T_5 &= && \quad &&-L \times \left[ \frac{DT_4}{2} - \frac{T_2 \cdot DT_2}{2} \right],\\ T_6 &= &&\textrm{\ \ \ }\frac{T_1^6}{16} \quad &&- L\times \left[...\right], \end{alignat} \end{subequations} where an ellipsis within square brackets, $\left[...\right]$, represents a collection of functions of lower-order $T_n$. Note that all terms of odd order (in $n$) of $T_n$ necessarily begin with $L$: when these expressions are substituted in Eq.~\eqref{eq:Bohrcorrection2}, such an $L$ cancels against the $L$ in the denominator of $S_0'=1/L$. The remaining part of the integrand can be shown to be (the sum of) products of exact derivatives, or expressions that can be changed into them \cite{appendixB}. Such an integrand, being a product of exact derivatives, is trivially zero upon contour integration \cite{appendixC} because the factors are single-valued functions on the defined contour path (no logarithmic derivatives are involved here, as the $L$ in the denominator of $S_0'=1/L$ has been cancelled out). This is tantamount to stating that all odd-order $T_n$ contribute only to the wavefunction's amplitude (i.e., they play no role in the quantization condition \cite{bender1977}), while the even-order $T_n$ modulate the phase of the wavefunction -- thus only they enter the quantization condition.
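The recurrence in Eq.~\eqref{eq:recur4} is easy to spot-check. For the (artificial) choice of a linear $L(z)$, every $T_n$ is a constant, so all derivative terms drop out and the recurrence reduces to scalar arithmetic; the surviving even-order entries should then reproduce exactly the pure powers of $T_1$ displayed above. A minimal sketch, assuming nothing beyond Eq.~\eqref{eq:recur4}:

```python
from fractions import Fraction

# linear L(z) => DL = 1 and every T_n is constant, so DT terms vanish and
# Eq. (recur4) reduces to: T_n = T_{n-1} T_1 - (1/2) sum_{j=1}^{n-1} T_j T_{n-j}
T = [Fraction(1), Fraction(1, 2)]        # T_0 = 1, T_1 = DL/2 = 1/2
for n in range(2, 9):
    T.append(T[n - 1] * T[1] - sum(T[j] * T[n - j] for j in range(1, n)) / 2)

T1 = Fraction(1, 2)
assert T[2] == T1**2 / 2                 # matches T_2 = T_1^2/2
assert T[4] == -T1**4 / 8                # matches T_4 = -T_1^4/8
assert T[6] == T1**6 / 16                # matches T_6 = T_1^6/16
assert T[3] == T[5] == T[7] == 0         # odd orders begin with L x [...] and vanish
assert T[8] == -5 * T1**8 / 128          # next term in the alternating binomial pattern
```

Exact rational arithmetic (via `fractions.Fraction`) is used so the comparisons are free of rounding error.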
To reiterate, for $n \geq 1$, $T_{\mathrm{odd\ order}} = T_{2n+1}= -L \times \left[...\right]$ and $T_{\mathrm{even\ order}} = T_{2n} = ...T_1^{2n} - L \times \left[...\right]$. After explicitly computing higher-order terms (e.g., $T_8 = -5T_1^8/128 - L \times \left[...\right]$), we recognize a completely unexpected but instructive pattern, based upon which we propose the following hypothesis and prove it subsequently.\footnote{We are very grateful to Michael V. Berry for questions that prompted us to propose this hypothesis.} \section{Inductive Hypothesis} Proposition: \textit{For any $n \in \mathbb {N}$,} \begin{equation} \label{eq:proposition} \mathrm{P}(n)\mathrm{:\ } T_{2n} = \binom{n-\frac{3}{2}}{-\frac{3}{2}} \left(\frac{T_1}{i}\right)^{2n} - L \times \left[...\right], \end{equation} \textit{where $ \mathrm{\binom{n-\frac{3}{2}}{-\frac{3}{2}}} $ is the binomial coefficient and $i^2=-1$.} This statement is not challenging to prove using the principle of mathematical induction (see Section A in the Appendix for the proof). Although not immediately obvious, the inductive hypothesis, presented in the precise form above, has a paramount consequence. In the proposition, Eq.~\eqref{eq:proposition}, the first term of $T_{2n}$ on the right-hand side lacks an $L$ in front of it, unlike the second term, and hence, when substituted in Eq.~\eqref{eq:Bohrcorrection2}, it yields a function that does not vanish upon contour integration (as terms like $\oint_\Gamma dz S_0' \cdot T_1^{2n} \sim \oint_\Gamma dz (DL)^{2n}/L$ contribute a logarithmic derivative of $L$, and the contour thus encloses poles of a logarithmic function, which are the zeros of $Q(z)$).
By contrast, the second term of $T_{2n}$, which begins with $L$ and is written as $L \times \left[...\right]$, contributes exactly zero upon contour integration, as the cancellation of this $L$ with $S_0'=1/L$ in Eq.~\eqref{eq:Bohrcorrection2} turns the WKB integrand into (a sum of) products of exact derivatives (importantly, without any logarithmic derivative) \cite{appendixB}. The resulting products of exact derivatives, lacking logarithmic terms, amount to zero upon contour integration; the reasoning follows that given above for the odd-order terms in the WKB series \cite{appendixC, appendixD}. Thus this campaign of separating terms that begin with $L$ from those that do not is unexpectedly helpful. We shall, therefore, deal with only the power series in $T_1$ in Eq.~\eqref{eq:Bohrcorrection2}. By straightforward summation of this special series to all orders in perturbation theory, we obtain a closed-form expression for Eq.~\eqref{eq:Bohrcorrection2}, as presented below in Eq.~\eqref{eq:summedupWKB}. \begin{widetext} \begin{equation} \label{eq:summedupWKB} \begin{aligned} \hspace{-0.2cm}\left(K +\frac{1}{2}\right) \pi = \frac{1}{2 i} \oint_{\Gamma} \frac{dz}{L} \left[1+ \sum_{n=1,2,..}^{\infty} \binom{n-\frac{3}{2}}{-\frac{3}{2}} \left(\frac{DL}{2 i}\right)^{2n} \right] =\frac{1}{2 i}\oint_{\Gamma} \frac{dz}{L\left( z \right)} \sqrt{1+\left(\frac{DL}{2}\right)^2} &= \frac{1}{2} \oint_{\Gamma} k(z) \cdot \sqrt{1+ \left(\frac{S_{1}'}{S_{0}'}\right)^{2}} dz\\ &\hspace{-0.5cm}=\frac{1}{2} \oint_{\Gamma} k(z) \cdot \sqrt{1 -\left(\frac{1}{2} \frac{d (k^{-1})}{dz}\right)^{2}} dz. \end{aligned} \end{equation} \end{widetext} We emphasize that our summation in Eq.~\eqref{eq:summedupWKB} involves a power series, which is also the case in all the special problems for which the WKB series has been summed up exactly \cite{romanovski2000, salasnich1997}.
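The resummation step itself -- that $1+\sum_{n\geq1}\binom{n-\frac{3}{2}}{-\frac{3}{2}}(T_1/i)^{2n}=\sqrt{1+T_1^2}$ -- can be verified numerically for $|T_1|<1$. A quick sketch (illustrative only), with the generalized binomial coefficient computed via Gamma functions:

```python
import math

def binom(a, b):
    # generalized binomial coefficient Gamma(a+1) / [Gamma(b+1) Gamma(a-b+1)]
    return math.gamma(a + 1) / (math.gamma(b + 1) * math.gamma(a - b + 1))

t = 0.6                                   # plays the role of T_1 = DL/2; |t| < 1
total = 1.0
for n in range(1, 60):
    total += binom(n - 1.5, -1.5) * (-1)**n * t**(2 * n)  # (T_1/i)^{2n} = (-1)^n T_1^{2n}

assert abs(total - math.sqrt(1 + t * t)) < 1e-12
```

The partial sums converge geometrically for $|T_1|<1$, consistent with the series being an ordinary binomial expansion of $\sqrt{1+T_1^2}$.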
We believe this to be the reason why the first few terms of the WKB expansion often approximate the correct eigenvalues despite the full series being divergent (this is, in part, also an answer to why the WKB expansion is asymptotic). We are also able to physically interpret the expressions involved on the last line of Eq.~\eqref{eq:summedupWKB} using the notion of geometric optics; the equation in its entirety, however, is non-trivial to elucidate, possibly owing to the quantum effects embodied in all orders (of perturbation theory). \section{Geometric-Optical Meaning} We borrow here the illustration by Bremmer \cite{bremmer1951}, who demonstrates, \textit{mutatis mutandis}, the similarity of each order of the WKB series (consider its $n^{\mathrm{th}}$ order) to the transmitted waves in an infinitesimally discretized inhomogeneous medium that undergo $n$ reflections. At each reflection, the waves change their direction by $180^{{\circ}}$. Thus a wave that begins from a point far to the left gets continually transmitted to the right while suffering a reflection at each discretized boundary (consider for now only the directly transmitted waves -- not the doubly, quadruply, or otherwise even-number-of-times reflected ones that also eventually transmit to the right). The resulting wavefunction at the rightmost end yields the $1^{\mathrm{st}}$-order WKB approximation \cite{bremmer1951}. Each of the above-mentioned reflected waves can undergo further reflection(s) while continually transmitting to the right. The more reflections a wave suffers before arriving at the rightmost end, the higher the order of the WKB expansion to which it belongs. Referring the reader to the original paper \cite{bremmer1951} for additional interesting details, we now present heuristically how Bremmer arrives at the reflection coefficient of a layer of width $1/k(z)$.
The well-known reflection coefficient in one dimension is (Bremmer \cite{bremmer1951} uses the notation $R$, which we shall reserve for the reflectivity to avoid potential confusion), \begin{equation} r(z) = \frac{k_s - k_{s+1}}{k_s + k_{s+1}} \approx \frac{k_s - k_{s+1}}{2 k_s} \approx-\frac{dk/d\xi}{2k}, \end{equation} where $s$ is the layer number in the infinitesimally discretized inhomogeneous medium (within which $k_s$ remains constant) and $\xi$ is proportional to the number of wavelengths over which the wavelength (or $k$) changes appreciably (from $k_s$ to $k_{s+1}$). Following Bremmer \cite{bremmer1951}, $d\xi = k(z) dz$. Therefore, the transmissivity of a layer of width $1/k(z)$ is \begin{equation} T(z) = 1-r^2(z) = 1-\left(-\frac{dk/d\xi}{2k}\right)^2=1 -\left(\frac{1}{2} \frac{d (k^{-1})}{dz}\right)^{2}, \end{equation} which is exactly what appears on the last line of Eq.~\eqref{eq:summedupWKB}. Note that the wavenumber obtained via resummation of the WKB series to all orders [i.e., the integrand in Eq.~\eqref{eq:summedupWKB}] vanishes even before we reach the classical turning points. Interestingly, for \textit{all} potentials, it vanishes exactly at the locations where $p_{\mathrm{cl}}(x) \cdot x = \hslash/2$, where $p_{\mathrm{cl}} = \sqrt{2m[E-V(x)]}$ (cf. the Heisenberg uncertainty principle). \section{Applications} Our novel closed-form quantization condition yields exact energy eigenvalues for potentials with two turning points. We demonstrate the efficacy of our formula in an example below and present other cases, such as the simple harmonic oscillator, the $3$-dimensional harmonic oscillator, the Coulomb potential, the Eckart potential, and the Morse potential, in Section D of the Appendix. We also find that, in the case of $3$-dimensional spherically symmetric potentials, the Langer correction factor \cite{langer1937, koike2009} appears naturally upon performing the contour integration of our formula, which resums the perturbative effects to all orders.
This corroborates the previous claim that the Langer modification comes from the higher-order corrections in the WKB series \cite{salasnich1997}. Consider an asymmetric Rosen-Morse potential, $V(x)$, with $\hslash^2=2m$: $V(x)=-U_0 \textrm{sech}^2\left( x/a \right) +U_1\tanh\left( x/a \right)$. Let $z$ be a complex variable such that $z=\tanh\left( x/a\right)$. Then, $z_a$ and $z_b$ are the two classical turning points, satisfying $z_a+z_b = -U_1/U_0$; $z_a z_b = -\left(E+U_0\right)/U_0$. Using Eq.~\eqref{eq:quantpot}, \begin{equation} \label{eq:example1} \frac{1}{2 i}\oint_{\Gamma} \frac{\sqrt{16 \left[ V(z)-E \right]^{3}+\left[V'(z)\right]^{2} }}{4 \left[ V(z)-E \right]} dz = \left(K +\frac{1}{2}\right)\pi, \end{equation} where $K=0,1,2,...$ labels the energy levels. The poles of the integrand are at $z=z_a,z_b,1,-1,\textrm{ and } \infty$. Calculating the residue at each pole, with a proper principal value, yields \begin{align} \begin{split} -\frac{1}{4}+\frac{1}{4} &{-} \frac{a\sqrt{(z_a-1)(z_b-1)U_0}}{2} {-} \frac{a\sqrt{(z_a+1)(z_b+1)U_0}}{2}\\ &\hspace{2cm}+ \frac{\sqrt{1+4a^2 U_0}}{2} = K+\frac{1}{2}, \end{split} \end{align} \begin{equation} \begin{split} \therefore \frac{1}{2}\sqrt{\frac{-E-U_1}{U_0}} {+} \frac{1}{2}\sqrt{\frac{-E+U_1}{U_0}} = -\frac{1}{a\sqrt{U_0}} \left( K+\frac{1}{2} \right) \\ + \frac{\sqrt{1+4a^2U_0 }}{2a\sqrt{U_0}}, \end{split} \end{equation} which agrees with Ma \& Xu \cite{maxu2005}; the individual terms manifest directly from the residues at the poles, whose locations are precisely predicted by Eq.~\eqref{eq:example1}. Finally, we remark that, for reasons not fully understood, the proposed quantization relation works only for two-turning-point problems. Nevertheless, understanding the nature of the exactly quantized action in the two-turning-point problems considered here is likely to benefit the extension of our geometric-optically interpretable equation to problems with multiple turning points.
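Before moving on, a quick numerical sanity check of the Rosen-Morse result above (a sketch, not from the paper): in the symmetric limit $U_1=0$ the quantization condition reduces to $\sqrt{-E/U_0}=[\sqrt{1+4a^2U_0}-(2K+1)]/(2a\sqrt{U_0})$, and for $U_0=\lambda(\lambda+1)/a^2$ this reproduces the textbook P\"oschl-Teller spectrum $E_K=-(\lambda-K)^2/a^2$ (in units $\hslash^2=2m=1$):

```python
import math

def E_symmetric(K, U0, a):
    # U1 = 0 limit of the quantization condition derived above:
    # sqrt(-E/U0) = [sqrt(1 + 4 a^2 U0) - (2K + 1)] / (2 a sqrt(U0))
    root = (math.sqrt(1 + 4 * a**2 * U0) - (2 * K + 1)) / (2 * a * math.sqrt(U0))
    return -U0 * root**2

a, lam = 1.0, 3                  # U0 = lam(lam+1)/a^2 gives an exactly solvable well
U0 = lam * (lam + 1) / a**2
for K in range(lam):             # bound states K = 0, 1, 2
    assert abs(E_symmetric(K, U0, a) - (-(lam - K)**2 / a**2)) < 1e-12
```

For $\lambda=3$ this yields $E_0=-9$, $E_1=-4$, $E_2=-1$, in agreement with the exact spectrum.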
Such interesting extensions will be investigated in a future study. Our proposed quantization condition might also engender a rethinking of quantization in higher dimensions in terms of geometry \cite{stone2005}. \section{Conclusion} This article presents an exact closed-form quantization relation obtained by summing the WKB series to all orders in perturbation theory for arbitrary one-dimensional potentials having two turning points. The new resummation procedure utilized herein reveals an unexpectedly simple pattern in the general term of the WKB series, leading to an inductive hypothesis, which we prove by the principle of mathematical induction. The resulting formula is then physically interpreted as the action of a wave whose wavenumber is corrected by a factor related to the wave transmissivity. Unit transmissivity recovers the Bohr-Sommerfeld quantization relation. This closed-form expression for the quantization might also be useful in problems with more than two turning points, where non-perturbative effects that give rise to tunneling phenomena come into play, i.e., spectral curves with non-zero genus \cite{basar2017}. For such problems, some of the terms neglected in the series resummation appear to be necessary. In the light of resurgent perturbative/nonperturbative relations \cite{dunne2014, Gahramanov2016, basar2017}, collecting such terms seems interesting, and further investigation is merited. This will, however, be left for the future, as it is beyond the objective of the present article. Arguably the most important advancement through this work is the discovery of an elegant and physically interpretable equation emerging from a myriad of complicated terms in the WKB series that become increasingly unmanageable at each higher order of the series.
It is gratifying to find that the spectral problem with genus-0 spectral curve (i.e., with two turning points and no tunneling phenomenon) can be reduced to an exact equation as simple and economical as Eq.~\eqref{eq:quantpot}. Analyzing this equation might lead to a deeper understanding of quantum geometry \cite{basar2017}. \begin{acknowledgments} I am pleased to thank Dhrubaditya Mitra for enkindling interest in this work and for offering discussions and guidance; it is difficult to imagine NORDITA (Sweden) without his friendly teaching style and advocacy for perturbation methods. I am also indebted to Michael V. Berry and Carl M. Bender for their appraisals of this work that greatly improved the manuscript. Thanks are also due to Paul W. Terry, MJ Pueschel, and Dibyendu Nandi for their valuable feedback and suggestions. Correspondences with Mithat \"{U}nsal and Luca Salasnich helped to find connections of this work with others. I also acknowledge support from the Van Vleck Fellowship in Physics at UW-Madison and the conducive environment herein. \end{acknowledgments} \begin{widetext} \vspace{50cm} \section{Appendix} \subsection{Proof by the principle of mathematical induction} Below, we present the proof for the proposition put forth for $T_{2n}$ in Eq.~\eqref{eq:proposition}. The statement, P($n$), is trivially satisfied for $n=1$ [cf. Eq.~\eqref{eq:proposition} with Eq.~\eqref{eq:T2}]. Now, we assume it to be valid for an arbitrary $n$ and prove that it implies the proposition, in Eq.~\eqref{eq:proposition}, is true for $n+1$ as well [i.e., P$(n+1)$ is true]. We begin with the WKB recurrence relation, Eq.~\eqref{eq:recur4}, \begin{subequations} \begin{align} T_{2(n+1)} &= - \frac{1}{2}\sum_{j = 2}^{2(n+1)-2} T_{j} T_{2(n+1)-j} - \frac{LDT_{2(n+1)-1}}{2}\\ &= -\sum_{\substack{j = 2,4,... 
\\ \mathrm{\textbf{even}}}}^{2n} \frac{T_{j} T_{2n+2-j}}{2} - \sum_{\substack{j = 3,5,...\\ \mathrm{\textbf{odd}}}}^{2n-1} \frac{T_{j} T_{2n+2-j}}{2} - \frac{LDT_{2n+1}}{2}\\ &= \left\{-\sum_{j/2 = 1,2,...}^{n} \binom{\frac{j}{2}-\frac{3}{2}}{-\frac{3}{2}} \binom{n+1-\frac{j}{2}}{-\frac{3}{2}} \left(\frac{T_{1}}{i}\right)^{2n+2} - L \times \left[...\right] \right\} - \sum_{\substack{j = 3,5,...\\ \mathrm{\textbf{odd}}}}^{2n-1} \frac{T_{j} T_{2n+2-j}}{2} - \frac{LDT_{2n+1}}{2}\\ &= -\sum_{j/2 = 1,2,...}^{n} \binom{\frac{j}{2}-\frac{3}{2}}{-\frac{3}{2}} \binom{n+1-\frac{j}{2}}{-\frac{3}{2}} \left(\frac{T_{1}}{i}\right)^{2n+2} - L \times \left[...\right] -L \times \left[...\right] - \frac{LDT_{2n+1}}{2} \hspace{0.5cm} \left[ \because T_{\mathrm{odd}} \mathrm{\ begins\ with\ } L \right] \\ &= \binom{n+1-\frac{3}{2}}{-\frac{3}{2}} \left(\frac{T_1}{i}\right)^{2(n+1)} - L \times \left[...\right] \QEDA \end{align} \end{subequations} This proves the proposition. \subsection{$T_n$ beginning with $L$ convertible to a product of exact derivatives} Here, we detail the recipe of casting any expression in $T_n$ that begins with $L$ into a product of exact derivatives (without a logarithmic derivative) \cite{appendixB}. Consider the term in $T_n$ that begins with $L$: \begin{equation} \label{eq:appB1} \begin{aligned} \oint_\Gamma dz S_0' T_n = \oint_\Gamma dz S_0' L \times \left[...\right] &= \oint_\Gamma dz \times \frac{1}{L} \times L \times \left[...\right] \\ &= \oint_\Gamma dz \times \frac{1}{L} \times L \times \left[\left(D^{p_1}L^{q_1}D^{p_2}L^{q_2}...\right) \times ... \times \left(D^{p_{j-1}}L^{q_{j-1}}D^{p_{j}}L^{q_{j}}...\right)\right]\\ &= \oint_\Gamma dz \left[\left(D^{p_1}L^{q_1}D^{p_2}L^{q_2}...\right) \times ... 
\times \left(D^{p_{j-1}}L^{q_{j-1}}D^{p_{j}}L^{q_{j}}...\right)\right], \end{aligned} \end{equation} where $D^{p_1}L^{q_1}D^{p_2}L^{q_2}...$ represents $D^{p_1}\left(L^{q_1}D^{p_2}\left(L^{q_2}...\right)\right)$; $p_1, q_1, p_2, q_2, ..., p_j, q_j, ...$ are all integers in between (and including) $0$ and $n$, satisfying the constraints: $p_1+p_2+...+p_j+... = n$ (for $n$ number of $D$'s) and $q_1+q_2+...+q_j+... = n-1$ (for $n-1$ number of $L$'s); and one extra $L$ lies in the very beginning of $T_n$, which has gotten cancelled with $S_0'=1/L$. This makes $T_n$ to have $n$ number of $D$'s and $L$'s, as argued in Ref.~\cite{appendix:equalnoofLandD}. If all $p_1, p_2, ..., p_j, ...$ are equal to or greater than $1$, this integrand is already a product of exact derivatives (without a logarithmic derivative as $S_0'=1/L$ has already been cancelled with $L$ of $T_n$). If any of $p_1,p_2, ...,p_j, ..$ is zero, the integrand in Eq.~\eqref{eq:appB1}, using chain rule, becomes (say $p_1=0$): \begin{equation} \begin{aligned} \mathrm{integrand\ } &= \left(D^{0}L^{q_1}D^{p_2}L^{q_2}...\right) \times ... \times \left(D^{p_{j-1}}L^{q_{j-1}}D^{p_{j}}L^{q_{j}}...\right) \\ &= \hspace{0.5cm}\left(L^{q_1}D^{p_2}L^{q_2}...\right) \times ... \times \left(D^{p_{j-1}}L^{q_{j-1}}D^{p_{j}}L^{q_{j}}...\right)\\ &= D\left[\left(L^{q_1}D^{p_2}L^{q_2}...\right) \times ... \times \left(D^{p_{j-1}}L^{q_{j-1}}D^{p_{j}}L^{q_{j}}...\right)\right] \bm{-} D\left[\left(L^{q_1}D^{p_2}L^{q_2}...\right) \times ... \right] \times \left(D^{p_{j-1}-1}L^{q_{j-1}}D^{p_{j}}L^{q_{j}}...\right), \end{aligned} \end{equation} thus turning the integrand into (a product of) exact derivatives. This process can be repeated if $p_{j-1}-1$ also happens to be $0$ (and if several $L$ multiplies each other, its exponent can be raised to abridge this procedure). 
It should be emphasized that such a transformation is guaranteed to exist, since there are exactly the same number of $D$'s and $L$'s in any $T_n$ \cite{appendix:equalnoofLandD}. In this regard, searching for a general expression for $T_{2n}$ as stated in the proposition in Eq.~\eqref{eq:proposition} is surprisingly helpful. \subsection{Product of exact derivatives yields zero} We demonstrate here that the product of exact derivatives (without a logarithmic function) vanishes under contour integration, expounding on Ref.~\cite{appendixC}. Let us consider two functions $f$ and $g$ (without a logarithm). Then, $\oint_\Gamma dz ~ df/dz = 0$. Now, using the Cauchy integral formula to represent the exact derivatives and thereafter employing the partial fraction decomposition, \begin{subequations} \begin{align} \oint_\Gamma dz \frac{df}{dz} \frac{dg}{dz} &= \oint_\Gamma dz \left\{\frac{1}{2\pi i} \oint_{\gamma_1} \frac{f(u)}{\left(u-z\right)^{1+1}} du \right\} \left\{\frac{1}{2\pi i} \oint_{\gamma_2} \frac{g(v)}{\left(v-z\right)^{1+1}} dv \right\} \hspace{1cm} [\gamma_1\ \& \ \gamma_2\ \mathrm{enclose\ the\ pole\ at\ } z]\\ &= -\frac{1}{4 \pi^2} \oint_{\gamma_1} \oint_{\gamma_2} du dv f(u) g(v) \oint_\Gamma dz {\left[\frac{1}{(u-z)(v-z)}\right]}^2\\ &= -\frac{1}{4 \pi^2} \oint_{\gamma_1} \oint_{\gamma_2} du dv f(u) g(v) \oint_\Gamma dz {\left[\frac{1}{v-u} \left(\frac{1}{u-z}-\frac{1}{v-z} \right)\right]}^2\\ &= -\frac{1}{4 \pi^2} \oint_{\gamma_1} \oint_{\gamma_2} du dv \frac{f(u) g(v)}{\left(v-u\right)^2} \oint_\Gamma dz { \left(\frac{1}{u-z}-\frac{1}{v-z} \right)}^2\\ &= -\frac{1}{4 \pi^2} \oint_{\gamma_1} \oint_{\gamma_2} du dv \frac{f(u) g(v)}{\left(v-u\right)^2} \left[ \oint_\Gamma dz \frac{1}{\left(u-z\right)^2} -2\oint_\Gamma dz \frac{1}{\left(u-z\right)\left(v-z\right)} + \oint_\Gamma dz \frac{1}{\left(v-z\right)^2} \right]\\ &= -\frac{1}{4 \pi^2} \oint_{\gamma_1} \oint_{\gamma_2} du dv \frac{f(u)
g(v)}{\left(v-u\right)^2} \left[ 0 -2\oint_\Gamma dz \frac{1}{\left(u-z\right)\left(v-z\right)} + 0 \right]\\ &= \frac{1}{2 \pi^2} \oint_{\gamma_1} \oint_{\gamma_2} du dv \frac{f(u) g(v)}{\left(v-u\right)^2} \oint_\Gamma dz \frac{1}{v-u} \left(\frac{1}{u-z}- \frac{1}{v-z} \right)\\ &= \frac{1}{2 \pi^2} \oint_{\gamma_1} \oint_{\gamma_2} du dv \frac{f(u) g(v)}{\left(v-u\right)^3} \left[ \oint_\Gamma dz \frac{1}{u-z}- \oint_\Gamma dz \frac{1}{v-z} \right]\\ &= \frac{1}{2 \pi^2} \oint_{\gamma_1} \oint_{\gamma_2} du dv \frac{f(u) g(v)}{\left(v-u\right)^3} \left[- 2 \pi i+ 2 \pi i\right]\\ &=0. \end{align} \end{subequations} It is straightforward to prove, in a similar manner, that the product of three (or more) exact derivatives (without a logarithmic function) under contour integration is also zero. We are thus left with a power series in $\left(T_1\right)^{2n}$, which we sum up in Eq.~\eqref{eq:summedupWKB}. \end{widetext} \subsection{More Examples} Let us now test the validity of our novel formula. We find that this formula gives the exact energy eigenvalues $E$ for all the following potentials and many more which are not listed here (but all having exactly two turning points). We choose, without loss of generality, $\hslash^2 = 2m$ in all of the following calculations. \subsection*{I. Simple Harmonic Oscillator} Even though we already know that the leading-order WKB is exact for the simple harmonic oscillator, it is desirable to test whether the extra factor in the integrand of Eq.~\eqref{eq:example1} would cause any deviation from the correct eigenvalues. Consider, \begin{subequations} \begin{align} V(z)&=z^2,\\ V-E &=z^2-E,\\ V'(z) &=2 z .
\end{align} \end{subequations} From Eq.~\eqref{eq:example1}, \begin{equation} \frac{1}{2 i}\oint \frac{\sqrt{16 \left( V-E \right)^{3}+\left(V'\right)^{2} }}{4 \left( V-E \right)} dz = \left(K +\frac{1}{2}\right)\pi, \end{equation} which leads to \begin{equation} \frac{1}{2 i}\oint \frac{\sqrt{16 \left( z^2-E \right)^{3}+\left( 2z \right)^{2} }}{4 \left( z^2-E \right)} dz = \left(K +\frac{1}{2}\right)\pi. \hspace{0.6cm} \end{equation} The poles of the integrand are at $z=\sqrt{E},-\sqrt{E},\textrm{ and } \infty$. So, calculating residues at each of the poles respectively, \begin{align} \frac{1}{4}-\frac{1}{4} + \frac{E}{2} &= K+\frac{1}{2},\\ \therefore E &=2 \left( K+\frac{1}{2}\right). \end{align} \subsection*{II. $3$-D harmonic oscillator } Consider the potential, \begin{subequations} \begin{align} V(r)&=r^2+\frac{b}{r^2}+\frac{l(l+1)}{r^2},\\ V-E &= r^2+\frac{b}{r^2}+\frac{l(l+1)}{r^2}-E. \end{align} \end{subequations} Let $u=r^2$; $u' = 2r= 2\sqrt{u}$. \begin{subequations} \begin{align} \therefore V-E &= \frac{1}{u} \left[ u^2-E u + \left\{b+l\left(l+1\right) \right\} \right]\\ &= \frac{1}{u} \left(u-u_a \right)\left(u-u_b \right), \end{align} \end{subequations} with \begin{subequations} \begin{align} u_{a,b} &= \frac{E}{2}\pm \sqrt{\left(\frac{E}{2}\right)^2 -\left\{b+l\left(l+1\right) \right\} },\\ u_a+u_b &= E,\\ u_a u_b &= b+l\left(l+1\right). \end{align} \end{subequations} Now, \begin{subequations} \begin{align} V' &= \frac{dV}{du}u'\\ &= \left[ \frac{2u-u_a-u_b}{u} -\frac{(u-u_a)(u-u_b)}{u^2}\right] 2\sqrt{u}. \end{align} \end{subequations} Using Eq.~\eqref{eq:example1}, \begin{equation} \frac{1}{2 i}\oint \frac{\sqrt{16 \left( V-E \right)^{3}+\left(V'\right)^{2} }}{4 \left( V-E \right)} dz = \left(K +\frac{1}{2}\right)\pi. \end{equation} The poles of the integrand are at $u=u_a,u_b,0,\textrm{ and } \infty$. 
So, calculating residues at each of the poles respectively, \begin{align} \frac{1}{4}-\frac{1}{4} - \frac{1}{4}&\sqrt{1+4u_a u_b}+ \frac{u_a+u_b}{4} = K+\frac{1}{2},\\ \therefore E &=2 \left[ 2K+1+\sqrt{{\color{red}\bm{\left(l+\frac{1}{2}\right)}^2}+b}\right]. \end{align} This solution agrees with Rosenzweig \& Krieger \cite{Rosenzweig1968}. \\ Note that the correct Langer correction factor, shown in bold, emerges naturally from our all-orders resummed WKB series; Langer proposed replacing $l(l+1)$ by $(l+1/2)^2$ by hand to obtain the correct eigenvalues. \subsection*{III. Coulomb potential} Now, assume the Coulomb potential, \begin{subequations} \begin{align} V(r)&=-\frac{V_0}{r}+\frac{b}{r^2}+\frac{l(l+1)}{r^2},\\ V-E &= \frac{1}{r^2}\left[-V_0 r + b+ l(l+1)-Er^2 \right]\\ &= \frac{-E}{r^2}\left[r^2+\frac{V_0}{E}r-\frac{b+l(l+1)}{E} \right]\\ &= \frac{-E}{r^2}\left(r-r_a\right)\left(r-r_b\right), \end{align} \end{subequations} with \begin{subequations} \begin{align} r_{a,b} &= -\frac{V_0}{2E}\pm \sqrt{\left(\frac{V_0}{2E}\right)^2 +\frac{b+l\left(l+1\right) }{E} },\\ r_a+r_b &= \frac{-V_0}{E},\\ r_a r_b &= -\frac{b+l\left(l+1\right)}{E}. \end{align} \end{subequations} Now, \begin{align} V' &= \frac{2E(r-r_a)(r-r_b)}{r^3} - \frac{E}{r^2}\left(2r-r_a-r_b\right). \end{align} We use Eq.~\eqref{eq:example1} below, \begin{equation} \frac{1}{2 i}\oint \frac{\sqrt{16 \left( V-E \right)^{3}+\left(V'\right)^{2} }}{4 \left( V-E \right)} dz = \left(K +\frac{1}{2}\right)\pi. \end{equation} The poles of the integrand are at $r=r_a,r_b,0,\textrm{ and } \infty$. So, calculating residues at each of the poles with proper principal value, \begin{align} - \frac{1}{4}+\frac{1}{4} {-} \frac{1}{2}&\sqrt{1-4E r_a r_b}+ \frac{\sqrt{-E}}{2}\left(r_a+r_b\right) = K+\frac{1}{2},\\ \therefore E &= \frac{-V_0^2}{4 \left[ K+1/2 {+}\sqrt{b+{\color{red}\bm{\left( l+\frac{1}{2}\right)}^2}} \right]^2}.
\end{align} This solution agrees with Rosenzweig \& Krieger \cite{Rosenzweig1968}.\\ Notice again that the correct Langer correction factor, shown in bold, emerges naturally from our all-orders resummed WKB series. \subsection*{IV. Eckart potential} Let us consider the Eckart potential, \begin{align} V(x)&=\frac{-\lambda e^{-\alpha x}}{1-e^{-\alpha x}}+\frac{b e^{-\alpha x}}{\left(1-e^{-\alpha x}\right)^2}. \end{align} Using the transformation $u=e^{\alpha x}-1$, so that $u' = \alpha(u+1)$, we write \begin{subequations} \begin{align} V-E &= \frac{-\lambda}{u} +\frac{b(u+1)}{u^2}-E\\ &= \frac{-E}{u^2}\left[u^2+\frac{\lambda-b}{E}u -\frac{b}{E} \right]\\ &= \frac{-E}{u^2}(u-u_a)(u-u_b), \end{align} \end{subequations} with \begin{subequations} \begin{align} u_{a,b} &= \frac{b-\lambda}{2E}\pm \sqrt{\left(\frac{b-\lambda}{2E}\right)^2 +\frac{b}{E} },\\ u_a+u_b &= \frac{b-\lambda}{E},\\ u_a u_b &= -\frac{b}{E}. \end{align} \end{subequations} Now, \begin{subequations} \begin{align} V' &= \frac{dV}{du}u'\\ &= \left[\frac{2E}{u^3}(u-u_a)(u-u_b) - \frac{E}{u^2}(2u-u_a-u_b) \right] \alpha (u+1). \end{align} \end{subequations} Using Eq.~\eqref{eq:example1}, \begin{equation} \frac{1}{2 i}\oint \frac{\sqrt{16 \left( V-E \right)^{3}+\left(V'\right)^{2} }}{4 \left( V-E \right)} dz = \left(K +\frac{1}{2}\right)\pi. \end{equation} The poles of the integrand are at $u=u_a,u_b,0,-1,\textrm{ and } \infty$. So, calculating residues at each of the poles with proper principal value, \begin{eqnarray} \begin{split} -\frac{1}{4}+\frac{1}{4} {-} \frac{1}{2\alpha}\sqrt{\alpha^2-4E u_a u_b}+& \frac{1}{\alpha}\sqrt{-E(1+u_a)(1+u_b)}\\ &{-} \frac{\sqrt{-E}}{\alpha} = K+\frac{1}{2}, \end{split}\\ \therefore {-} \frac{1}{2}\sqrt{1+\frac{4b}{\alpha^2}}+ \frac{\sqrt{\lambda-E}}{\alpha}{-} \frac{\sqrt{-E}}{\alpha} = K+\frac{1}{2}. \hspace{1cm} \end{eqnarray} This solution agrees with Romanovski \& Robnik \cite{romanovski2000}. \subsection*{V.
Morse potential} Finally, consider the potential of the form, \begin{subequations} \begin{align} V(x)&=Ae^{-2\alpha x}-Be^{-\alpha x},\\ V-E &= Ae^{-2\alpha x}-Be^{-\alpha x}-E. \end{align} \end{subequations} Let $u=e^{\alpha x}$, $u' = \alpha u$, which leads us to, \begin{subequations} \begin{align} \therefore V-E &= \frac{1}{u^2} \left[ A-Bu-Eu^2 \right]\\ &= \frac{-E}{u^2} \left[ u^2+\frac{B}{E}u -\frac{A}{E}\right]\\ &= \frac{-E}{u^2} (u-u_a)(u-u_b), \end{align} \end{subequations} with \begin{subequations} \begin{align} u_{a,b} &= -\frac{B}{2E}\pm \sqrt{\left(\frac{B}{2E}\right)^2 +\frac{A}{E} },\\ u_a+u_b &= \frac{-B}{E},\\ u_a u_b &= \frac{-A}{E}. \end{align} \end{subequations} Now, \begin{subequations} \begin{align} V' &= \frac{dV}{du}u'\\ &= \left[\frac{2E}{u^3}(u-u_a)(u-u_b) - \frac{E}{u^2}(2u-u_a-u_b) \right] \alpha u. \end{align} \end{subequations} Employing Eq.~\eqref{eq:example1}, \begin{equation} \frac{1}{2 i}\oint \frac{\sqrt{16 \left( V-E \right)^{3}+\left(V'\right)^{2} }}{4 \left( V-E \right)} dz = \left(K +\frac{1}{2}\right)\pi. \end{equation} The poles of the integrand are at $u=u_a,u_b,0,\textrm{ and } \infty$. So, calculating residues at each of the poles with proper principal value, \begin{align} \frac{-1}{4}+\frac{1}{4} + \frac{1}{2 \alpha}\sqrt{\frac{-E}{u_a u_b}}(u_a+u_b) {-}\frac{\sqrt{-E}}{\alpha} &= K+\frac{1}{2},\\ \therefore\frac{B}{2\alpha \sqrt{A}} {-}\frac{\sqrt{-E}}{\alpha} = K+\frac{1}{2}. \end{align} This solution agrees with Romanovski \& Robnik \cite{romanovski2000}.
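The examples above can also be cross-checked against direct numerical diagonalization. The following sketch (our own illustration, not from the paper; numpy is assumed to be available) solves the simple-harmonic-oscillator case I on a grid, with $\hslash^2 = 2m = 1$, and reproduces $E = 2(K+1/2)$:

```python
# Finite-difference check that -psi'' + z^2 psi = E psi (units hbar^2 = 2m = 1)
# has eigenvalues E = 2(K + 1/2) = 1, 3, 5, ..., as derived above by residues.
import numpy as np

N, L = 1500, 10.0
z = np.linspace(-L, L, N)
h = z[1] - z[0]

# second-order central differences with hard walls at z = +/- L
H = (np.diag(2.0 / h**2 + z**2)
     - np.diag(np.full(N - 1, 1.0 / h**2), 1)
     - np.diag(np.full(N - 1, 1.0 / h**2), -1))

E = np.linalg.eigvalsh(H)[:4]
print(E)  # close to [1, 3, 5, 7]
```

The truncation to a finite box and the second-order stencil introduce only $O(h^2)$ errors here, so the lowest levels match the quantization condition to a few parts in $10^4$.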
https://arxiv.org/abs/2210.11914
Generalized Turan number for the edge blow-up graph
Let $H$ be a graph and $p$ be an integer. The edge blow-up $H^p$ of $H$ is the graph obtained from replacing each edge in $H$ by a copy of $K_p$ where the new vertices of the cliques are all distinct. Let $C_k$ and $P_k$ denote the cycle and path of length $k$, respectively. In this paper, we find sharp upper bounds for $ex(n,K_3,C_3^3)$ and the exact value for $ ex(n,K_3,P_3^3)$ and determine the graphs attaining these bounds.
\section{Introduction} \textbf{Notation. }In this paper, we use $C_k$, $P_k$, $M_k$ and $S_k$ to denote the cycle, path, matching and star with $k$ edges, respectively. Let $K_t$ be the complete graph on $t$ vertices and $K_{s,t}$ be the complete bipartite graph with parts of size $s$ and $t$. The vertex and edge sets of a graph $G$ are denoted by $V(G)$ and $E(G)$, respectively. Also we denote the number of edges in $G$ by $e(G)$. For two graphs $G$ and $H$, let $G\cup H$ denote the disjoint union of $G$ and $H$. Let $G+H$ denote the join of $G$ and $H$, which is obtained from $G\cup H$ by adding all edges with one endvertex in $V(G)$ and the other endvertex in $V(H)$. Let $T(G)$ denote the set of all triangles in $G$ and $t(G)=|T(G)|$. For a vertex $v$ in $V(G)$, let $t(v)$ denote the number of triangles containing $v$. For an edge~$uv$, let $N(uv)=N(u)\cap N(v)$. Hence, $|N(uv)|$ is the number of triangles containing $uv$. For a set of vertices $S \subseteq V(G)$ we denote by $G[S]$ the induced subgraph of $G$ on $S$ and we set $G-S=G[V(G) - S]$. For two disjoint sets of vertices $U,W \subseteq V(G)$ we denote by $G[U,W]$ the bipartite subgraph of $G$ consisting of those edges with one endvertex in $U$ and the other in $W$. \vskip 3mm Let $H$ be a given graph and $p$ be an integer greater than $2$. The edge blow-up $H^p$ of $H$ is the graph obtained from replacing each edge in $H$ by a copy of $K_p$ where the new vertices of the cliques are all distinct. The problem of finding the Tur\'an number of $H^p$ for various graphs $H$ has attracted a lot of attention. The first results on the topic date back to the 1960s. Moon~\cite{Moon}, and independently Simonovits~\cite{simonovits1968method}, determined the Tur\'an number ${\rm ex}(n,M_k^p)$ for $p\ge 3$.
Much later Erd\H{o}s, F\"uredi, Gould and Gunderson~\cite{ERDOS199589} determined the Tur\'an number ${\rm ex}(n,S_k^p)$ for $p=3$, and then Chen, Gould, Pfender and Wei~\cite{chen2003extremal} extended this result to any $p\ge 3$. Glebov~\cite{glebov2011extremal} determined the Tur\'an number of~$P_k^p$. More recently, Liu extended Glebov's result to the edge blow-up of a family of trees and also determined ${\rm ex}(n,C_k^p)$ for sufficiently large $n$. Wang, Hou, Liu and Ma~\cite{Wang2021TheTN} determined ${\rm ex}(n,T^p)$ for a larger family of trees, and Yuan~\cite{Yuan2022ExtremalGF} determined ${\rm ex}(n,H^p)$ for any non-bipartite graph $H$ and $p\ge \chi(H)+1$. We will make use of the following result of Xiao, Katona, Xiao and Zamora~\cite{XIAO20221}, which determined the value of ${\rm ex}(n,C_3^3)$ for all $n\ge 6$. \begin{theorem}(Xiao, Katona, Xiao and Zamora~\cite{XIAO20221})\label{edge verion} Let $n\ge 6$ be an integer. Then \begin{align*} {\rm ex}(n, C_3^3)= \begin{cases} \lfloor\frac{n^2}{4}\rfloor +\lfloor\frac{n}{2}\rfloor & \mbox{if $n \not\equiv 2\Mod 4$},\\ \frac{n^2}{4}+\frac{n}{2}-1 & \mbox{if\/ $n \equiv 2\Mod 4$}.\\ \end{cases} \end{align*} When $n=4k$, $M_{\frac{n}{4}}+M_{\frac{n}{4}}$ is the unique extremal graph.\\ When $n=4k+1$, $(M_{\lfloor\frac{n}{4}\rfloor}\cup K_1)+M_{\frac{n-1}{4}}$ and $S_{\lfloor\frac{n}{2}\rfloor}+\overline{K_{\lfloor\frac{n}{2}\rfloor}}$ are the extremal graphs.\\ When $n=4k+2$, $(M_{\lfloor\frac{n}{4}\rfloor}\cup K_1)+(M_{\lfloor\frac{n}{4}\rfloor}\cup K_1)$, $M_{\lceil\frac{n}{4}\rceil}+M_{\lfloor\frac{n}{4}\rfloor}$ and $S_{\frac{n}{2}-1}+\overline{K_\frac{n}{2}}$ are the extremal graphs.\\ When $n=4k+3$, $(M_{\lfloor\frac{n}{4}\rfloor}\cup K_1)+M_{\lceil\frac{n}{4}\rceil}$ and $S_{\lfloor\frac{n}{2}\rfloor}+\overline{K_{\lfloor\frac{n}{2}\rfloor}}$ are the extremal graphs. \end{theorem} In this paper, we will consider the generalized Tur\'an number.
Let $T$ and $H$ be graphs; then the generalized Tur\'an number ${\rm ex}(n,T,H)$ is the maximum number of copies of $T$ that an $n$-vertex $H$-free graph $G$ can contain. If $T=K_2$, then ${\rm ex}(n,T,H)$ is the classical Tur\'an number of $H$. Although several results about the Tur\'an number of an edge blow-up of a graph have been obtained, less is known about the generalized Tur\'an number of such graphs. However, there have been some results in this direction. Liu and Wang~\cite{liu2021generalized} determined the value of ${\rm ex}(n,K_p, S_2^p)$ and ${\rm ex}(n,K_p, M_2^p)$. Later Gerbner and Patk\'os~\cite{gerbner2021generalized} determined ${\rm ex}(n,K_r, S_2^p)$ and ${\rm ex}(n,K_r, M_2^p)$ for any $r$, $p$, and Yuan and Yang~\cite{Yuanxiaoli} determined ${\rm ex}(n,K_3, M_2^3)$ for all~$n$. Recently, Zhu, Chen, Gerbner, Gy\H{o}ri and Hama Karim~\cite{zhufriendship} determined ${\rm ex}(n,K_3, S_k^3)$ for any $k$. Our results concern the edge blow-ups of cycles and paths. We prove the following theorems. \begin{theorem}\label{Cycle} Let $n\ge 22$ be an integer. We have \[{\rm ex}(n,K_3, C_3^3)\le \frac{n^2}{4}-1+\mathbbm{1}_{4|n},\] where $\mathbbm{1}_{4|n}$ is the indicator function for $4|n$. Furthermore, equality holds when $n$ is even and $M_{\lceil\frac{n}{4}\rceil}+M_{\lfloor\frac{n}{4}\rfloor}$ is the unique extremal graph. \end{theorem} \begin{theorem}\label{Path} Let $n\ge 300^3$ be an integer. We have \[{\rm ex}(n,K_3, P_3^3)=\left\lfloor\frac{(n-1)^2}{4}\right\rfloor,\] and the unique extremal graph is $K_1+K_{\floor{\frac{n-1}{2}},\ceil{\frac{n-1}{2}}}$. \end{theorem} The rest of the paper is organized as follows. In Section~\ref{ProofC3}, we prove Theorem~\ref{Cycle}. In Section~\ref{ProofP}, we prove Theorem~\ref{Path}. In Section~\ref{concl}, we mention some problems about the general case: ${\rm ex}(n,K_3, C_k^3)$ and ${\rm ex}(n,K_3,P_k^3)$.
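Both extremal constructions can be verified by brute force for small $n$. The following Python sketch (our own illustration, not part of the proofs) builds $M_{\lceil n/4\rceil}+M_{\lfloor n/4\rfloor}$ and $K_1+K_{\lfloor (n-1)/2\rfloor,\lceil (n-1)/2\rceil}$ and checks their triangle counts against the two formulas:

```python
# Brute-force triangle counts for the extremal constructions (illustrative).
from itertools import combinations

def triangles(n_vertices, edges):
    adj = {v: set() for v in range(n_vertices)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum(1 for a, b, c in combinations(range(n_vertices), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

def matching_join(n):
    """M_{ceil(n/4)} + M_{floor(n/4)} for even n."""
    a, b = -(-n // 4), n // 4
    A, B = range(2 * a), range(2 * a, 2 * a + 2 * b)
    edges = [(2 * i, 2 * i + 1) for i in range(a)]                   # matching in A
    edges += [(2 * a + 2 * i, 2 * a + 2 * i + 1) for i in range(b)]  # matching in B
    edges += [(u, v) for u in A for v in B]                          # the join
    return n, edges

def apex_bipartite(n):
    """K_1 + K_{floor((n-1)/2), ceil((n-1)/2)}."""
    a = (n - 1) // 2
    edges = [(0, v) for v in range(1, n)]
    edges += [(u, v) for u in range(1, 1 + a) for v in range(1 + a, n)]
    return n, edges

for n in range(8, 21, 2):
    assert triangles(*matching_join(n)) == n * n // 4 - 1 + (n % 4 == 0)
for n in range(5, 16):
    assert triangles(*apex_bipartite(n)) == (n - 1) ** 2 // 4
print("all checks pass")
```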
\section{Proof of Theorem \ref{Cycle}}\label{ProofC3} One can see that when $n$ is even, the graph $M_{\lceil\frac{n}{4}\rceil}+M_{\lfloor\frac{n}{4}\rfloor}$ contains $\frac{n^2}{4}-1+\mathbbm{1}_{4|n}$ triangles. So our aim is to show that ${\rm ex}(n,K_3,C_3^3)\le \frac{n^2}{4}-1+\mathbbm{1}_{4|n}$. Let $n\ge 22$ be an integer and let $G$ be an $n$-vertex $C_3^3$-free graph with $t(G)$ maximum. We may assume every edge is contained in at least one triangle; otherwise we may delete it without changing $t(G)$. We define the weight of $uv$ by \[ w(uv)\coloneqq \frac{1}{|N(uv)|}. \] For a triangle $xyz$, its weight is defined by $w(xyz)=w(xy)+w(xz)+w(yz)$. We will prove the upper bound by making use of the following claims. \begin{claim}\label{ weight of triangle} For any triangle $xyz$ in $G$, $$1+\frac{1}{n-2}\le w(xyz)\le 3,$$ or $w(xy)=w(yz)=w(xz)=\frac{1}{3}$ and there exist two further vertices $u,v$ such that $\{x,y,z,u,v\}$ induces a copy of $K_5^-$ or $K_5$. \end{claim} \begin{proof} Since each edge is contained in at least one triangle, without loss of generality, we have \[ \frac{1}{n-2}\le w(yz)\le w(xz)\le w(xy)\le 1. \] If $w(xy)=1$, then $w(xyz)\ge 1+\frac{2}{n-2}$ and we are done. Next we may assume $w(xy)\le \frac{1}{2}$, and we distinguish two cases based on whether $w(xy)=\frac{1}{2}$ or $w(xy)\le \frac{1}{3}$. First suppose $w(xy)=\frac{1}{2}$ and let $N(xy)=\{z,z'\}$. If $w(xz)=\frac{1}{2}$, then $w(xyz)\ge 1+\frac{1}{n-2}$ and we are done. Thus we may assume $w(xz)\le \frac{1}{3}$ and let $y'\in N(xz)-\{y,z'\}$. If $w(yz)\le \frac{1}{4}$, then we can find a vertex $x'\in N(yz)-\{x,y',z'\}$ and $\{x,y,z,x',y',z'\}$ contains a copy of $C_3^3$, a contradiction. Hence $w(yz)=w(xz)=\frac{1}{3}$ and $w(xyz)=\frac{7}{6}\ge 1+\frac{1}{n-2}$, where the inequality holds since $n\ge 22$. Now suppose $w(xy)\le \frac{1}{3}$. Let $u,v \in N(xy)-\{z\}$. If $w(yz)\le \frac{1}{4}$, then there is a vertex $x'\in N(yz)-\{u,v,x\}$.
Also we can find a vertex $y'\in N(xz)-\{y,x'\}$ and another vertex in $\{u,v\}$ not equal to $y'$ (say $u\not=y'$). Then $\{y',x',u,x,y,z\}$ contains a copy of $C_3^3$, a contradiction. It follows that $w(xz)=w(yz)=w(xy)=\frac{1}{3}$. Furthermore, if $N(yz)-\{x\}$ or $N(xz)-\{y\}$ is not equal to $\{u,v\}$, then one can check that we still can find a copy of $C_3^3$, a contradiction. Hence $\{x,y,z,u,v\}$ induces a copy of $K_5^-$ or $K_5$. \end{proof} \begin{claim}\label{t<e} $t(G)\le e(G).$ \end{claim} \begin{proof} By Claim~\ref{ weight of triangle}, we have \[t(G)=\sum_{xyz\in T(G)}1\le \sum_{xyz\in T(G)}\left(w(xz)+w(yz)+w(xy)\right)= e(G),\] as required. \end{proof} \begin{claim}\label{no K_5} For any triangle $xyz$, $w(xyz)\ge 1+\frac{1}{n-2}$, i.e., there is no $K_5^-$ in $G$. \end{claim} \begin{proof} Suppose to the contrary that there is a subgraph $H$ of $G$ isomorphic to $K_5$ or $K_5^-$ induced on the set $\{v_1,v_2,v_3,v_4,v_5\}$. If $H$ is isomorphic to $K_5^-$, then we may assume without loss of generality $v_4v_5$ is not an edge. One can check that for any edge $v_iv_j$ in $H$, $N(v_iv_j)\subseteq V(H)$. Otherwise we can find a copy of $C_3^3$. Let $S=(N(v_4)\cap N(v_5))-V(H)$ if $H$ is isomorphic to $K_5^-$ and $S=\emptyset$ otherwise. If $|S|\le n-10$, then $e(G-V(H))\le \frac{(n-5)^2}{4}+\frac{n-5}{2}$ by Theorem~\ref{edge verion}, and we have \begin{align*} e(G)\le& ~e(H)+e(G[V(H), V(G)\setminus V(H)])+e(G-V(H))\\ \le&~10+(n-5)+|S|+ \frac{(n-5)^2}{4}+\frac{n-5}{2}\\ < & ~\frac{n^2}{4}-1. \end{align*} By Claim~\ref{t<e}, it follows that $G$ is not the extremal graph. If $|S|> n-10$, then $G[S]$ is $P_3$-free, otherwise together with $v_4,v_5$, we can find a copy of $C_3^3$. Hence $e(G-V(H))\le (n-5-|S|)|S|+|S|+\binom{n-5-|S|}{2}$. When $n\ge 22$, we have \begin{align*} e(G) \le~ & 9+(n-5)+|S|+ (n-5-|S|)|S|+|S|+\binom{n-5-|S|}{2}\\ \le~ & 8n-56\\ <~&\frac{n^2}{4}-1. \end{align*} Again by Claim \ref{t<e}, we are done. 
\end{proof} Let $T_1(G)=\{xyz\in T(G): w(xyz)\ge 1+\frac{2}{n}\}$ and $T_2(G)=T(G)-T_1(G)$. We have the following bound on the average weight of a triangle in $G$. \begin{claim}\label{average weight} The average weight of each triangle in $G$ is at least $1+\frac{2}{n}$. \end{claim} \begin{proof} If $T_2(G)$ is empty, then there is nothing to prove. Hence we may assume $T_2(G)\not=\emptyset$. Let $xyz$ be a triangle in $T_2(G)$ with $w(xy)\ge w(xz)\ge w(yz)$. By Claims~\ref{ weight of triangle} and~\ref{no K_5}, we have $w(xy)\in \{1, \frac{1}{2}\}$. If $w(xy)=1$, then $$w(xyz)\ge 1+\frac{2}{n-2},$$ which means $xyz\in T_1(G)$, a contradiction. So $w(xy)=\frac{1}{2}$. Let $N(xy)=\{z,~x'\}$. Suppose $N(xz)-\{y,x'\}\not=\emptyset$ and $y'\in N(xz)-\{y,x'\}$. Then either $N(yz)-\{x,~x',~y'\}\not=\emptyset$ and we can find a copy of $C_3^3$, or $N(yz)=\{x,~x',~y'\}$ which means $\{x,y,z,x',y'\}$ contains a copy of $K_5^-$, or $w(yz)\ge \frac{1}{2}$ which means $w(xyz)=\frac{3}{2}\ge 1+\frac{2}{n}$. In all of these cases, we get a contradiction. Hence, $N(xz)=\{y,~x'\}$ and $w(xy)=w(xz)=\frac{1}{2}$, and so $w(yz)\le \frac{2}{n}$. For the edges $x'y, ~x'z$, we deduce that $w(x'y)=w(x'z)=\frac{1}{2}$. If not, suppose $w(x'y)<\frac{1}{2}$. Let $u$ be in $N(x'y)-\{x,z\}$. Since $|N(yz)|>\frac{n}{2}$, let $v$ be in $N(yz)-\{x,x',u\}$. Then $\{u,v, x',x,y,z\}$ contains a copy of $C_3^3$, a contradiction. It follows that the triangle $x'yz$ is also in $T_2(G)$. Therefore, for any triangle $xyz$ in $T_2(G)$, there is a unique triangle $x'yz$ in $T_2(G)$ such that $\{x,x',y,z\}$ induces a copy of $K_4$ and $w(xy)=w(xz)=w(x'y)=w(x'z)=\frac{1}{2}$. Hence, we can partition the set of triangles in $T_2(G)$ into pairs $(xyz,~x'yz)$. For each such pair, we define a mapping \[\phi (xyz,~x'yz)=\{xyz,~x'yz,~xx'y,~xx'z\}.\] Note that since $w(x'z)=\frac{1}{2}$ and $N(x'z)=\{x,y\}$, we have $N(xx')\cap N(yz)=\emptyset$ and $|N(xx')|< n-|N(yz)|\le \frac{n}{2}$.
This means $xx'y, xx'z$ are in $T_1(G)$. Furthermore, by Claim \ref{no K_5}, each triangle is contained in at most one copy of $K_4$, so $xx'y, xx'z$ do not belong to any other $\phi (uvw,~u'vw)$. Since \begin{align*} &w(xyz)+w(x'yz)+w(xx'y)+w(xx'z)\\ &=4+\frac{2}{|N(yz)|}+\frac{2}{|N(xx')|}\\ & \ge 4+\frac{8}{|N(xx')|+|N(yz)|}\ge 4+\frac{8}{n}, \end{align*} we can transfer the weight of $xx'y, ~xx'z$ to $xyz,~x'yz$ and ensure the average weight is at least $1+\frac{2}{n}$. \end{proof} Now by Claim \ref{average weight} and Theorem \ref{edge verion}, we have \begin{align*} t(G)&=\sum_{xyz\in T(G)}1= \frac{n}{n+2}\sum_{xyz\in T(G)}\left(1+\frac{2}{n}\right)\\ &\le \frac{n}{n+2}\sum_{xyz\in T(G)}\left(w(xz)+w(yz)+w(xy)\right)\\ &\le \frac{n}{n+2}e(G)\le \frac{n^2}{4}-1+\mathbbm{1}_{4|n}. \end{align*} Equality holds if and only if $e(G)$ attains the maximum and the average weight of each triangle is exactly $1+\frac{2}{n}$. Hence, by the characterization of the extremal graphs for ${\rm ex}(n,C_3^3)$ in Theorem~\ref{edge verion}, we have $G=M_{\lceil\frac{n}{4}\rceil}+M_{\lfloor\frac{n}{4}\rfloor}$ when $n$ is even. $\hfill\blacksquare$ \section{Proof of Theorem \ref{Path}}\label{ProofP} Let $t(u,v)$ denote the number of triangles containing $u$ or $v$ or both. First, we use a technique to reduce the proof of Theorem \ref{Path} to the case that each vertex is contained in many triangles. To this end we use the following lemma. \begin{lemma} \label{reduce} Suppose $G$ is a $P_3^3$-free graph on at least $300$ vertices. If for any two different vertices $u, v$, we have $t(u), t(v)\ge \frac{n}{2}-1$ and $t(u,v)\ge n-2$, then $t(G)\le \left\lfloor\frac{(n-1)^2}{4}\right\rfloor$ and equality holds if and only if $G=K_1+K_{\floor{\frac{n-1}{2}},\ceil{\frac{n-1}{2}}}$. \end{lemma} \noindent First we will show how to deduce Theorem~\ref{Path} from Lemma~\ref{reduce}, then we will prove Lemma~\ref{reduce}.
\subsection{Proof of Theorem~\ref{Path} using Lemma~\ref{reduce}.} Let $G$ be a $P_3^3$-free graph on $n$ vertices with $n\ge 300^3$ and $t(G)\ge \left\lfloor\frac{(n-1)^2}{4}\right\rfloor$. We initialize $G_n=G$ and define a process as follows: for $j<n$, let $G_j=G_{j+1}-v_1$ if there is a vertex $v_1$ with $t(v_1)<\frac{j+1}{2}-1$ in $G_{j+1}$, or $G_{j-1}=G_{j+1}-\{v_1,v_2\}$ if there are two vertices $v_1,v_2$ with $t(v_1,v_2)<(j+1)-2$ in $G_{j+1}$. Suppose the process ends at $G_\ell$, so that for any two vertices $u,v$ in $G_\ell$ we have $t(u),t(v)\ge \frac{\ell}{2}-1$ and $t(u,v)\ge \ell-2$, and suppose for contradiction that $\ell<n$. Note that \[\binom{\ell}{3}\ge t(G_\ell)\ge \left\lfloor\frac{(\ell-1)^2}{4}\right\rfloor+\frac{n-\ell}{2}.\] Hence $\ell\ge \sqrt[3]{3n}\ge 300$, and by Lemma~\ref{reduce}, $G_\ell$ contains a copy of $P_3^3$, a contradiction. That is to say, $\ell=n$, so $G_n=G$ satisfies the conditions in Lemma~\ref{reduce} and we are done. $\hfill\blacksquare$ \subsection{Proof of Lemma~\ref{reduce}.} Let $G=G_1\cup \cdots \cup G_c$ be a $P_3^3$-free graph on $n\ge 300$ vertices, where $G_i$ are the connected components of $G$, for $1\le i\le c$. We may assume each edge of $G$ is contained in at least one triangle; otherwise we delete it, and the conditions still hold in the resulting graph. For any two distinct vertices $u, v$, we have $t(u), t(v)\ge \frac{n}{2}-1$ and $t(u,v)\ge n-2$. It follows that $v(G_i)\ge \delta(G)\ge \sqrt{n}$. As mentioned in the introduction, Yuan and Yang~\cite{Yuanxiaoli} determined ${\rm ex}(n,K_3,M_2^3)$ for all $n$. \begin{theorem}(Yuan and Yang \cite{Yuanxiaoli})\label{2K_3} For $n\ge 7$, we have \[{\rm ex}(n,K_3,M_2^3)=\max\left\{3n-8, \left\lfloor\frac{(n-1)^2}{4}\right\rfloor\right\}.\] Furthermore, the unique extremal graph is $K_3+\overline{K}_{n-3}$ or $K_1+K_{\floor{\frac{n-1}{2}},\ceil{\frac{n-1}{2}}}$, according to which of the two values is larger.
\end{theorem} \noindent If no $G_i$ contains two vertex-disjoint triangles, then since $v(G_i)\ge \sqrt{n}\ge \sqrt{300} $, we have $t(G_i)\le \left\lfloor\frac{(v(G_i)-1)^2}{4}\right\rfloor$ by Theorem~\ref{2K_3} and \[t(G)= \sum_{i=1}^c t(G_i)\le \left\lfloor\frac{(n-1)^2}{4}\right\rfloor.\] Equality holds if and only if $G$ is connected and $G=K_1+K_{\floor{\frac{n-1}{2}},\ceil{\frac{n-1}{2}}}$. Therefore, we may assume without loss of generality that $G_1$ contains two vertex-disjoint triangles. We define the distance between two vertex-disjoint triangles as the minimum length of a path with endvertices in the respective triangles. Among all vertex-disjoint triangle pairs in $G_1$, let $x_1y_1z_1,~x_2y_2z_2$ be two vertex-disjoint triangles whose distance is the smallest and let $P=x_1\cdots y_2$ be a path of minimal length between them. First suppose the length of $P$ is at least $2$. Let $x_1^+$ be the vertex adjacent to $x_1$ on the path $P$ and let $x_1x_1^+w$ be a triangle containing the edge $x_1x_1^+$. Then we either find a copy of $P_3^3$ if $w\in \{x_2,~y_2,~z_2\}$, or we find another two vertex-disjoint triangles whose distance is smaller, and in both cases we obtain a contradiction. Hence we have that $P=x_1y_2$ is a single edge. Note that $x_1y_2$ is also contained in a triangle and the third vertex of this triangle must be in $ \{y_1,z_1,x_2,z_2\}$. Without loss of generality, say $x_1y_2z_2$ is a triangle. Let $S=(N(y_2)\cap N(z_2))-\{x_1,y_1,z_1\}$. Obviously, we have that $S$ is nonempty and independent, since $x_2\in S$ and $G$ contains no copy of $P_3^3$. Suppose $u$ is a vertex in $N(y_2)-(S\cup \{x_1,y_1,z_1,z_2\})$ and $uy_2w$ is a triangle containing the edge $uy_2$. If $w$ does not belong to $\{x_1,y_1,z_1\}$, then $y_1z_1x_1,~x_1z_2y_2,~y_2uw$ forms a copy of $P_3^3$. If $w\in \{x_1,y_1,z_1\}$, then $y_1z_1x_1,~wuy_2,~y_2z_2x_2$ forms a copy of $P_3^3$. In both cases we have a contradiction.
It follows that $N(y_2)\subset S\cup \{x_1,y_1,z_1, z_2\}$ and analogously $N(z_2)\subset S\cup \{x_1,y_1,z_1, y_2\}$. Since $\delta(G)\ge \sqrt{n}$, we have $|S|\ge 2$. We claim that each edge of the form $y_2u$ or $z_2u$ with $u\in S$ is contained only in the triangle $y_2uz_2$. If not, suppose $y_2uw$ is a triangle distinct from $y_2uz_2$. Since $S$ is independent, we have $w\in \{x_1,y_1,z_1\}$. Let $u'\in S-\{u\}$; then $y_1z_1x_1,~wuy_2,~y_2u'z_2$ forms a copy of $P_3^3$, a contradiction. Therefore, if we delete the two vertices $y_2,~z_2$, we destroy at most $|S|+9$ triangles. By the condition $t(y_2,z_2)\ge n-2$, we have $|S|\ge n-11$. We obtain that the total number of triangles is at most \[(n-11)\binom{11}{2}+\binom{11}{3}< \left\lfloor\frac{(n-1)^2}{4}\right\rfloor,\] where the inequality holds because $n\ge 300$. It follows that $G$ is not the extremal graph, and we are done.$\hfill\blacksquare$ \section{Concluding remarks} \label{concl} In this paper, we studied the generalized Tur\'an numbers of the edge blow-ups $C_3^3$ and $P_3^3$. It would be interesting to consider the general case of $C_k^3$ and $P_k^3$. In this section, we pose a conjecture about the generalized extremal numbers of these graphs. Let $H(n,p, t)$ denote the graph $K_{t-1}+T_p(n-t+1)$, where $T_p(n-t+1)$ is the balanced complete $p$-partite graph on $n-t+1$ vertices, i.e., the Tur\'an graph. Let $H^+(n,p,t)$ be the graph obtained from $H(n,p,t)$ by adding an extra edge inside one of the classes of $T_p(n-t+1)$. Based on our results and the Tur\'an numbers ${\rm ex}(n,C_k^3)$ and ${\rm ex}(n,P_k^3)$, we pose the following conjecture for the generalized Tur\'an number. \begin{conjecture} When $k\ge 4$ and $n$ is sufficiently large, $H(n,2,\lfloor\frac{k-1}{2}\rfloor+1)$ is the unique extremal graph for both ${\rm ex}(n,K_3,C_k^3)$ and ${\rm ex}(n,K_3,P_k^3)$ when $k$ is odd, and $H^+(n,2,\lfloor\frac{k-1}{2}\rfloor+1)$ is the unique extremal graph when $k$ is even.
\end{conjecture} \section{Acknowledgements} The research of the authors Gy\H{o}ri and Salia was partially supported by the National Research, Development and Innovation Office NKFIH, grants K132696 and SNN-135643. The research of Salia was supported by the Institute for Basic Science (IBS-R029-C4). The research of Tompkins was supported by NKFIH grant K135800.
https://arxiv.org/abs/2009.00130
On the Zarankiewicz problem for graphs with bounded VC-dimension
The problem of Zarankiewicz asks for the maximum number of edges in a bipartite graph on $n$ vertices which does not contain the complete bipartite graph $K_{k,k}$ as a subgraph. A classical theorem due to Kővári, Sós, and Turán says that this number of edges is $O\left(n^{2 - 1/k}\right)$. An important variant of this problem is the analogous question in bipartite graphs with VC-dimension at most $d$, where $d$ is a fixed integer such that $k \geq d \geq 2$. A remarkable result of Fox, Pach, Sheffer, Suk, and Zahl [J. Eur. Math. Soc. (JEMS), no. 19, 1785-1810] with multiple applications in incidence geometry shows that, under this additional hypothesis, the number of edges in a bipartite graph on $n$ vertices and with no copy of $K_{k,k}$ as a subgraph must be $O\left(n^{2 - 1/d}\right)$. This theorem is sharp when $k=d=2$, because by design any $K_{2,2}$-free graph automatically has VC-dimension at most $2$, and there are well-known examples of such graphs with $\Omega\left(n^{3/2}\right)$ edges. However, it turns out this phenomenon no longer carries through for any larger $d$.We show the following improved result: the maximum number of edges in bipartite graphs with no copies of $K_{k,k}$ and VC-dimension at most $d$ is $o(n^{2-1/d})$, for every $k \geq d \geq 3$.
\section{Introduction} The problem of Zarankiewicz is a central problem in extremal graph theory. It asks for the maximum number of edges $\operatorname{ex}(n,K_{k,k})$ in a bipartite graph on $n$ vertices, where each side of the bipartition contains $n/2$ vertices and which does not contain the complete bipartite graph $K_{k,k}$ as a subgraph. In 1954, K\H{o}v\'ari, S\'os, and Tur\'an \cite{KST54} proved that this number of edges is at most $c_{k}n^{2 - \frac{1}{k}}$, for some positive constant $c_{k}$ which depends only on $k$. Classical constructions of Reiman and Brown show that this bound is tight for $k=2,3$ (see \cite{PA95}). However, the Zarankiewicz problem for $k \geq 4$ remains one of the most challenging unsolved problems in combinatorics. The best lower bound for $k=4$ simply comes from the Brown construction \cite{Br66}, namely $$\operatorname{ex}(n,K_{4,4}) \geq \operatorname{ex}(n,K_{3,3}) = \Omega \left(n^{5/3}\right).$$ For $k=5,6$, the best lower bounds are due to Ball and Pepe \cite{BP12} and come from the norm graph construction of Alon, R\'onyai and Szab\'o \cite{ARS99} (originally considered for the asymmetric Zarankiewicz problem regarding bipartite graphs containing no copies of $K_{4,7}$), i.e. $$\operatorname{ex}(n,K_{6,6}) \geq \operatorname{ex}(n,K_{5,5}) = \Omega(n^{7/4}).$$ For $k \geq 7$, the best construction comes from a result of Bohman and Keevash \cite{BK10} on random graph processes, and is of the form $$\operatorname{ex}(n,K_{k,k}) = \Omega\left(n^{2 - 2/(k+1)} (\log k)^{1/(k^2-1)}\right).$$ An important variant of this problem is the analogous question in bipartite graphs with {\it{VC-dimension}} at most $d$, where $d$ is a fixed integer not larger than $k$. The {\it{VC-dimension of a set system}} $\mathcal{F}$ on the ground set $V$ is the largest integer $d$ for which there exists a $d$-element set $S \subset V$ such that for every subset $B \subset S$, one can find a member $A \in \mathcal{F}$ with $A \cap S = B$. 
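To make the definition above concrete, the VC-dimension of a finite set system can be computed by brute force: search for the largest set $S$ such that every subset of $S$ arises as $A\cap S$ for some member $A$ of the family. The following Python sketch (illustrative only, not part of the paper; exponential in the size of the ground set) does exactly this.

```python
from itertools import combinations

def vc_dimension(ground_set, family):
    """Largest d such that some d-element S in ground_set is shattered,
    i.e. every subset of S arises as A & S for some A in `family`."""
    sets = [frozenset(A) for A in family]
    ground = list(ground_set)
    for d in range(len(ground), 0, -1):
        for S in map(frozenset, combinations(ground, d)):
            # S is shattered iff its traces realise all 2^|S| subsets.
            if len({A & S for A in sets}) == 2 ** len(S):
                return d
    return 0

# Initial segments of {0,1,2,3} shatter every singleton but no pair
# (a trace containing j always contains every i < j), so the
# VC-dimension is 1.
initial_segments = [set(range(i)) for i in range(5)]
assert vc_dimension(range(4), initial_segments) == 1
```

The same brute-force trace count also mirrors the definition of the VC-dimension of a bipartite graph used below, with the family taken to be the neighborhoods of the vertices on one side.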
The VC-dimension, introduced by Vapnik and Chervonenkis \cite{VC71}, is one of the most useful combinatorial parameters measuring the complexity of graphs and hypergraphs. Over the years it has proven tremendously important in many areas within (and outside of) combinatorics and mathematics in general. We refer to \cite{FPS20} for a nice discussion (and a further remarkable application). In order to define the VC-dimension of a bipartite graph $G = (A,B,E)$ with vertex set $A \cup B$ and edge set $E \subset A \times B$, the standard convention is to make a choice between $A$ and $B$, and then define the {\it{VC-dimension of $G = (A,B,E)$ with respect to $A$}} to be the VC-dimension of the set system of neighborhoods of vertices $b \in B$ (regarded as subsets of $A$), and vice versa (see also \cite{FPSSZ17}). In this paper, we will always choose the first side of the bipartition $A$ as the ground set, and so we shall say that the {\it{VC-dimension of a bipartite graph $G = (A,B,E)$}} is the VC-dimension of $G$ with respect to $A$. With this terminology, we now state the following remarkable result of Fox, Pach, Sheffer, Suk and Zahl from \cite[Theorem 2.1]{FPSSZ17}, which goes below the K\H{o}v\'ari--S\'os--Tur\'an upper bound for the Zarankiewicz problem in the presence of bounded VC-dimension. \begin{theorem}\label{FPSSZ1} Let $k$ and $d$ be integers such that $k \geq d \geq 2$. Let $G = (A,B,E)$ be a bipartite graph with $|A|=|B|=n/2$ and VC-dimension at most $d$. Then, if $G$ is $K_{k,k}$-free, we have $$|E(G)| \leq c n^{2-1/d},$$ for some positive constant $c=c(d,k)$.
\end{theorem} Using a slightly stronger version of Theorem \ref{FPSSZ1} for bipartite graphs $G=(A,B,E)$ with $|A|$ and $|B|$ of possibly different sizes and with {\it{dual shatter function of degree at most $d$}}, the observation that semi-algebraic bipartite graphs of {\it{bounded description complexity}} also have bounded VC-dimension (a consequence of the Milnor--Thom theorem), and a standard amplification trick via the so-called {\it{cutting lemma}} of Chazelle \cite{Ch93}, the authors deduced the following general result. \begin{theorem} \label{FPSSZ2} Let $G = (P,Q,E)$ be a semi-algebraic bipartite graph in $\left(\mathbb{R}^{d_{1}},\mathbb{R}^{d_{2}}\right)$ such that $E$ has description complexity at most $t$, $|P|=m$, and $|Q|=n$. If $G$ is $K_{k,k}$-free, then $$|E(G)| \leq c_{1}\left((mn)^{2/3}+m+n\right)$$ for $d_{1}=d_{2}=2$, and $$|E(G)| \leq c_{2} \left(m^{\frac{d_{2}(d_{1}-1)}{d_{1}d_{2}-1} + \epsilon} n^{\frac{d_{1}(d_{2}-1)}{d_{1}d_{2}-1}} + m + n \right)$$ for every $\epsilon > 0$, for all $d_{1},d_{2}$. Here $c_{1}=c_{1}(t,k)$ and $c_{2}=c_{2}(d_{1},d_{2},t,k,\epsilon)$. \end{theorem} Since in this paper we will not be discussing anything specific to semi-algebraic graphs, we will not attempt to make Theorem \ref{FPSSZ2} (and its connection to Theorem \ref{FPSSZ1}) more precise by providing the required definitions, but we invite the interested reader to consult \cite{FPSSZ17}. Nevertheless, it is worth mentioning that this theorem has multiple applications in combinatorial geometry; for example, the case $d_{1}=d_{2}=2$ already directly implies the celebrated Szemer\'edi--Trotter theorem \cite{ST83}. The main result of this paper concerns Theorem \ref{FPSSZ1}. Going back to the statement, it is important to note that it is sharp when $k=d=2$.
This is because, by design, every $K_{2,2}$-free bipartite graph has VC-dimension at most $2$, and so the well-known examples of such graphs with $\Omega\left(n^{3/2}\right)$ edges automatically serve as constructions which match the upper bound from Theorem \ref{FPSSZ1}. However, it turns out this phenomenon does not extend to larger $k$. For example, already when $k=3$, it is not difficult to check that the dense $K_{3,3}$-free graph from Brown's construction \cite{Br66} has VC-dimension equal to $4$, so it no longer provides a matching lower bound. This suggests that a potential improvement of Theorem \ref{FPSSZ1} might be possible when $k \geq d \geq 3$. In what follows, we confirm this suspicion by proving the following improved result. \begin{theorem}\label{main} Let $k$ and $d$ be integers such that $k \geq d \geq 3$. Let $G = (A,B,E)$ be a bipartite graph with $|A|=|B|=n/2$ and VC-dimension at most $d$. Then, if $G$ is $K_{k,k}$-free, we have $$|E(G)| = o_{n \to \infty}\left(n^{2-1/d}\right).$$ \end{theorem} We give the proof of Theorem \ref{main} in Section 2. It is perhaps important to emphasize that, unlike the argument used in \cite{FPSSZ17}, our method does not rely on the so-called {\it{packing lemma}} of Haussler \cite{Ha95}. Instead, our approach is similar in spirit to an argument used by Sudakov and Tomon \cite{ST20} in a related but different context. \bigskip \section{Proof of Theorem \ref{main}} \bigskip Our notation is mostly standard. For a graph $G$ and vertex $v\in V(G)$, we write $N_G(v)$ for the set of neighbors of $v$ in $G$. When the graph is clear, we often write $N(v)$ for the same set. If $T\subset V(G)$, we write $N(T)$ for the set of common neighbors of $T$. Let $G=(A,B,E)$ be a $K_{k,k}$-free bipartite graph with the number of edges satisfying $|E(G)| \geq cn^{2-1/d}$ for some constant $c > 0$, and where $|A|=|B|=n/2$.
In order to show that $G$ has VC-dimension at least $d+1$, we need to prove the existence of a set $S$ of $d+1$ vertices in $A$ which is {\it{shattered}} by the set system formed by the neighborhoods $N(b)$ of the vertices $b \in B$; that is, a set $S$ with $|S|=d+1$ such that for every subset $S' \subset S$, there exists $b \in B$ with the property that $N(b) \cap S = S'$. First, we move to a subgraph with large minimum degree and choose the vertex which will be a neighbor to every vertex in $S$. \begin{proposition} \label{lemma:subgraph} Let $G$ be a $K_{k,k}$-free bipartite graph with parts $A$ and $B$ and at least $cn^{2-1/d}$ edges, where $c > 0$ is a constant and $|A|,|B|= n/2$. Then $G$ has an induced subgraph $G'$ with parts $A'\subset A$ and $B'\subset B$ such that $G'$ has minimum degree at least $\frac{c}{4}n^{1-1/d}$ and there exists a vertex $x\in B'$ such that $|N(x')\cap N(x)|=o(|N(x)|)$ holds for every $x'\in B'\setminus \{x\}$. \end{proposition} In the short proof we shall use the asymmetric K\H ov\'ari--S\'os--Tur\'an theorem. \begin{lemma} \label{lemma:asymmetric} Let $G$ be a $K_{k,k}$-free bipartite graph with parts of size $m$ and $n$. Then $G$ has at most $O_k(nm^{1-1/k}+m)$ edges. \end{lemma} \begin{proofof}{Proposition \ref{lemma:subgraph}} First, by iteratively discarding vertices of degree less than $\frac{c}{2}n^{1-1/d}$, we find a non-empty subgraph $G''$ which has minimum degree at least $\frac{c}{2}n^{1-1/d}$. Let $G''$ have parts $A''$ and $B''$. Choose a vertex $x\in B''$ arbitrarily. By Lemma \ref{lemma:asymmetric}, since $G''$ is $K_{k,k}$-free and $|N_{G''}(x)|\geq \frac{c}{2}n^{1-1/d}$, it is easy to see that the number of vertices $y\in B''$ such that $|N_{G''}(y)\cap N_{G''}(x)|\geq \frac{|N_{G''}(x)|}{\log n}$ is at most $n^{1/100}$. Write $B'$ for the set obtained from $B''$ after removing these vertices (apart from $x$). Let $A'=A''$ and let $G'$ be the induced subgraph of $G''$ with parts $A'$ and $B'$.
This choice of $x$ and $G'$ satisfies the conditions in the statement of the proposition. \end{proofof} The next proposition will be applied for the subgraph found in the previous result. \begin{proposition} \label{lemma:twocases} Let $G$ be a bipartite graph with parts $A$ and $B$ and with minimum degree satisfying $\delta(G)=\delta \geq cn^{1-1/d}$ for some constant $c > 0$, and where $|A|,|B|\leq n/2$. Let $r$ be a constant positive integer and let $x\in B$. Then one of the following two statements must be true: \begin{enumerate} \item there exists a set $R\subset N(x)$ of size $r$ such that for every $T\subset R$ of size $d$, we have $|N(T)|\geq r$ or \item there exist $\Theta(|N(x)|^r)$ sets $R\subset N(x)$ of size $r$ such that for every $T\subset R$ of size $d$, we have $N(T)\setminus \{x\}\neq\emptyset$. \end{enumerate} \end{proposition} To prove this, we will use the so-called {\it{hypergraph removal lemma}}, proved independently by Nagle, R\"odl, Schacht \cite{NRS06} and Gowers \cite{G07}. \begin{lemma} \label{lemma:hrl} Let $r,d$ be positive integers. For every $\beta > 0$ there exists $\delta = \delta(r,d,\beta) > 0$ such that the following holds. If $\mathcal{H}$ is a $d$-uniform hypergraph on $N$ vertices such that one needs to remove at least $\beta N^{d}$ hyperedges of $\mathcal{H}$ in order to make it free of copies of $K_{r}^{(d)}$, then $\mathcal{H}$ contains at least $\delta N^{r}$ copies of $K_{r}^{(d)}$. \end{lemma} \begin{proofof}{Proposition \ref{lemma:twocases}} Define a $d$-uniform hypergraph $\mathcal{H}$ on vertex set $N(x)$ such that any $T\subset N(x)$ of size $d$ is a hyperedge in $\mathcal{H}$ if and only if $N(T)\setminus \{x\}\neq \emptyset$. Then outcome 2. is equivalent to saying that $\mathcal{H}$ contains $\Theta(|N(x)|^r)$ copies of $K_r^{(d)}$. 
If the first statement does not hold, by Lemma \ref{lemma:hrl} it suffices to prove that in order to destroy all copies of $K_r^{(d)}$ in $\mathcal{H}$, one needs to remove $\Theta(|N(x)|^d)$ hyperedges from $\mathcal{H}$. To this end, we shall prove that this indeed holds provided that there is no set $R\subset N(x)$ of size $r$ such that for every $T\subset R$ of size $d$, we have $|N(T)|\geq r$. Color a $d$-set $T\subset N(x)$ green if $N(T)=\{x\}$, blue if $1<|N(T)|<r$ and red if $|N(T)|\geq r$. Note that if $y\in B\setminus \{x\}$, then any $d$-set $T\subset N(x)\cap N(y)$ is colored blue or red. If there exists some $R\subset N(x)$ of size $r$ such that every $T\subset R$ of size $d$ is red, then condition 1. holds. Otherwise, if $\ell$ is sufficiently large, by Ramsey's theorem (see \cite{R30} or \cite[Theorem 4.18]{Ju11}) we know that for every set $L\subset N(x)\cap N(y)$ of size $\ell$, there exists a subset $R\subset L$ of size $r$ such that all $d$-sets in $R$ have the same color; since every $d$-set in $N(x)\cap N(y)$ is blue or red and no monochromatic red $r$-set exists, each $\ell$-set in $N(x)\cap N(y)$ contains a monochromatic blue $r$-set. Clearly any such $r$-set $R$ has the property that for every $T\subset R$ of size $d$, we have $N(T)\setminus \{x\}\neq \emptyset$. Hence, if we are to delete all such $r$-sets from $\mathcal{H}$, then we need to delete a blue edge from every $\ell$-set in $N(x)\cap N(y)$, for every $y\in B\setminus \{x\}$. Hence, we need to delete at least $$\frac{1}{r}\sum_{y\in B\setminus \{x\}} \frac{1}{\binom{\ell}{d}} \binom{d(x,y)}{d}$$ hyperedges, where $d(x,y)=|N(x)\cap N(y)|$. Clearly, $$\sum_{y\in B\setminus \{x\}} d(x,y)=e(B\setminus \{x\},N(x))\geq \sum_{z\in N(x)} (d(z)-1)\geq d(x)(\delta-1).$$ Hence, by Jensen's inequality, $$\sum_{y\in B\setminus \{x\}} \binom{d(x,y)}{d}\geq \Omega(n(d(x) \delta/n)^d)\geq \Omega(d(x)^d)=\Omega(|N(x)|^d).$$ Note that in the first inequality we have used implicitly that $d(x)\delta\geq (cn^{1-1/d})^2=\omega(n)$ as $d\geq 3$.
Thus, we indeed need to delete $\Omega(|N(x)|^d)$ hyperedges to destroy all copies of $K_r^{(d)}$ in $\mathcal{H}$. \end{proofof} \begin{proposition} \label{lemmma:ifallbig} Let $q$ be a positive integer and let $F$ be a $K_{k,k}$-free bipartite graph with parts $B$ and $Q$, where $|Q|=q$. Assume that there exists $x\in B$ which is joined to all vertices of $Q$ and that for every $T\subset Q$ of size $d$, we have $|N(T)|\geq q$. If $q$ is sufficiently large compared to $d$ and $k$, then $Q$ has a subset of size $d+1$ that is shattered by $\{N(b):b\in B\}$. \end{proposition} \begin{proofof} {Proposition \ref{lemmma:ifallbig}} Let $Z$ be a uniformly random subset of $Q$ of size $d+1$. We shall prove that $Z$ is shattered with probability at least $1/2$. In order to do this, we show that with probability at least $1/2$, the following property holds. \begin{center} {\it{For every $S\subset Z$ of size at most $d$ and every $z\in Z\setminus S$, we have $|N(S\cup \{z\})|< \frac{1}{d+1}|N(S)|$.}} \end{center} For convenience, let us refer to this as {\it{Property $\mathcal{VC}$}} and first see why it implies that the set $Z$ is shattered. For each $S\subset Z$, we need to choose a vertex $b_S\in B$ such that $N(b_S)\cap Z=S$. For $S=Z$, choose $b_S=x$. Let $S\subset Z$ be a set of size at most $d$. By Property $\mathcal{VC}$, the number of vertices in $N(S)$ which have a neighbor in the set $Z\setminus S$ is less than $(d+1)\frac{1}{d+1}|N(S)|=|N(S)|$. Hence, we can pick some $b_S\in N(S)$ with $N(b_S)\cap Z=S$. It remains to prove that Property $\mathcal{VC}$ holds with probability at least $1/2$. Using the union bound and conditioning on the $d$-subsets of $Z$, it suffices to prove that for any $S\subset Q$ of size at most $d$, the probability that there exists $z\in Z\setminus S$ with $|N(S\cup \{z\})|\geq \frac{1}{d+1}|N(S)|$ is at most $\frac{1}{2\cdot 2^{d+1}}$.
However, note that since $|N(S)|\geq q$, where $q$ is sufficiently large compared to $d$ and $k$, and the induced subgraph of $F$ with parts $Q$ and $N(S)$ is $K_{k,k}$-free, it follows by Lemma \ref{lemma:asymmetric} that the number of vertices $y\in Q\setminus S$ with $|N(S\cup \{y\})|\geq \frac{1}{d+1}|N(S)|$ is at most $f(d,k)$ for some function $f$. Hence, if $q$ is sufficiently large, then with probability more than $1-\frac{1}{2\cdot 2^{d+1}}$, the random subset $Z\subset Q$ avoids all these vertices. \end{proofof} \bigskip We are now in a position to prove Theorem \ref{main}. \begin{proofof}{Theorem \ref{main}} Suppose, for contradiction, that $G$ has at least $cn^{2-1/d}$ edges for some constant $c>0$. Choose a subgraph $G'$ and a vertex $x$ as in Proposition \ref{lemma:subgraph}. In what follows, all neighborhoods are defined in $G'$. By Proposition \ref{lemma:twocases}, one of the following two must hold. \begin{enumerate} \item there exists a set $R\subset N(x)$ of size $r$ such that for every $T\subset R$ of size $d$, we have $|N(T)|\geq r$, or \item there exist $\Theta(|N(x)|^r)$ sets $R\subset N(x)$ of size $r$ such that for every $T\subset R$ of size $d$, we have $N(T)\setminus \{x\}\neq\emptyset$. \end{enumerate} If condition 1. holds, then by Proposition \ref{lemmma:ifallbig}, $G'$ has VC-dimension at least $d+1$. This implies that $G$ also has VC-dimension at least $d+1$, which is a contradiction. So we may assume that condition 2. holds. Let $R\subset N(x)$ be a set of size $r$ such that for every $T\subset R$ of size $d$, we have $N(T)\setminus \{x\}\neq\emptyset$. Let $q$ be a constant which is sufficiently large compared to $d$ and $k$. Now if $r$ is sufficiently large, by Ramsey's theorem, there exists a set $Q\subset R$ of size $q$ such that either $|N(T)|\geq q$ for every $T\subset Q$ of size $d$, or $|N(T)|<q$ for every $T\subset Q$ of size $d$. 
In the former case, Proposition \ref{lemmma:ifallbig} shows that $G'$ has VC-dimension at least $d+1$, which is a contradiction. Hence, $|N(T)|<q$ for every $T\subset Q$ of size $d$. Since we can start with $\Theta(|N(x)|^r)$ many possible sets $R$ to get a subset $Q\subset R$ as above, it follows that there exist $\Theta(|N(x)|^q)$ sets $Q\subset N(x)$ such that for every $T\subset Q$ of size $d$, we have $N(T)\setminus \{x\}\neq \emptyset$ and $|N(T)|<q$. Let $Q$ be such a set. Assume that the sets $N(T)\setminus \{x\}$ are pairwise disjoint as $T$ ranges over all subsets of $Q$ of size $d$. Then we distinguish between two cases. Case (a) is when for every $S\subset Q$ of size $d-1$ we have $|N(S)|\geq q$. In this case, if $q$ is sufficiently large compared to $d$ and $k$, then using an argument very similar to the one in Proposition \ref{lemmma:ifallbig}, $Q$ has a subset of size $d+1$ which is shattered. In fact, just as in the proof of that proposition, a random subset $Z$ of size $d+1$ is shattered with probability at least $1/2$. The only difference is that for every set $S\subset Z$ of size $d$, the vertex $b_S$ is chosen from the set $N(S)\setminus \{x\}$. Since these sets are disjoint, we get a different vertex for each $S$, and we have $N(b_S)\cap Z=S$. Sets of size at most $d-1$ are treated just as in Proposition \ref{lemmma:ifallbig}. Then $G'$ has VC-dimension at least $d+1$, which is a contradiction. Otherwise (case (b)), there exists $S\subset Q$ of size $d-1$ such that $|N(S)|<q$. However, note that the number of $q$-sets for which case (b) can occur is $o(|N(x)|^q)$. Indeed, there are $O(|N(x)|^{d-1})$ ways to choose the set $S$ and there are fewer than $q$ ways to choose a common neighbor of the set $S$. But any element $z\in Q\setminus S$ has a neighbor in $N(S)\setminus \{x\}$ (since $N(S\cup \{z\})\setminus \{x\}\neq \emptyset$), which leaves $o(|N(x)|)$ choices for each of these vertices.
Altogether, we get only $o(|N(x)|^q)$ possibilities for $Q$. It follows that the number of $q$-sets $Q\subset N(x)$ such that for every $T\subset Q$ of size $d$ we have $|N(T)|<q$ and the sets $N(T)\setminus \{x\}$ are not pairwise disjoint is $\Theta(|N(x)|^q)$. We now show that this is impossible by upper bounding the number of such sets. Let us choose distinct sets $T,T'\subset Q$ of size $d$ such that $(N(T)\setminus \{x\})\cap (N(T')\setminus \{x\})\neq \emptyset$. Note that there are at most $|N(x)|^d$ ways to choose $T$; given such a choice, there are fewer than $q$ ways to choose an element of $(N(T)\setminus \{x\})\cap (N(T')\setminus \{x\})$, and given these there are $o(|N(x)|)$ ways to choose each vertex in $T'\setminus T$. Every vertex in $Q\setminus (T\cup T')$ can be chosen in at most $|N(x)|$ ways, so altogether we only have $o(|N(x)|^q)$ possibilities for $Q$, which is a contradiction. \end{proofof}
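The shattering condition that drives the whole argument (for every $S'\subset S$ there is some $b\in B$ with $N(b)\cap S=S'$) can be checked mechanically. The following brute-force sketch (a hypothetical helper, not from the paper) tests whether a candidate set is shattered by a system of neighborhoods:

```python
def is_shattered(S, neighborhoods):
    """Return True iff every subset of S equals N & S for some
    neighborhood N in `neighborhoods`, i.e. S is shattered."""
    S = frozenset(S)
    # Collect the distinct traces N & S; S is shattered exactly
    # when all 2^|S| subsets of S appear among them.
    traces = {frozenset(N) & S for N in neighborhoods}
    return len(traces) == 2 ** len(S)

# Toy check: these neighborhoods shatter {0, 1} but not {0, 1, 2},
# since no trace equals {2} (for instance).
nbhds = [set(), {0}, {1}, {0, 1}, {0, 1, 2}]
assert is_shattered({0, 1}, nbhds)
assert not is_shattered({0, 1, 2}, nbhds)
```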
https://arxiv.org/abs/1308.5966
Growth rate degeneracies in kinematic dynamos
We consider the classical problem of kinematic dynamo action in simple steady flows. Due to the adjointness of the induction operator, we show that the growth rate of the dynamo will be exactly the same for two types of magnetic boundary conditions: the magnetic field can be normal (infinite magnetic permeability, also called pseudo-vacuum) or tangent (perfect electrical conductor) to the boundaries of the domain. These boundary conditions correspond to well-defined physical limits often used in numerical models and relevant to laboratory experiments. The only constraint is for the velocity field u to be reversible, meaning there exists a transformation changing u into -u. We illustrate this surprising property using S2T2 type of flows in spherical geometry inspired by Dudley and James (1989). Using both types of boundary conditions, it is shown that the growth rates of the dynamos are identical, although the corresponding magnetic eigenmodes are drastically different.
\section{Introduction} The growth of magnetic fields due to dynamo action, both in astrophysical bodies and in laboratory experiments, is expected to depend not only on the details of the flow field, but also on the conditions imposed on the magnetic field at the boundaries. In the laboratory there are two physically important limits: perfectly conducting, implying no normal field; and normal field, i.e.\ infinite permeability, where the tangential field at the boundary is zero. These conditions are so different that one might expect the dynamo properties to be quite different in the two cases. In general this is true, but there is an important class of flows for which this is not the case. We call these {\it reversible} flows, defined as follows: consider the group $D$ of transformations which leave the boundaries invariant; then a velocity field $\bm u({\bm x})$ is reversible if ${\bm u}({\bm x})=-{\bm u}({\sf d}\cdot {\bm x})$ for some ${\sf d} \in D$. In other words, one can reverse the direction of the flow by an appropriate transformation. The main result of this paper can then be stated as follows: Consider a steady flow of an electrically-conducting fluid of constant magnetic diffusivity $\eta$, contained in a volume $V$ and delimited by boundaries $S$. \textit{Provided that the velocity field is reversible in the above sense, the growth rate of the kinematic dynamo will be exactly the same whether the boundaries are made of a perfect electrical conductor or have an infinite magnetic permeability. In fact the whole spectrum of growth rates will be identical.} This remarkable result is due to the adjointness property of the induction operator as discussed by \cite{roberts60,gibrob67,proctor77a,proctor77b}. It should be noted that nothing is said about the relation between the respective eigenfunctions, and indeed, as seen below, these may differ considerably in the two cases.
This result is formally proved as follows: We begin with an eigenfunction for the growing magnetic field $\bm{B}$ satisfying the perfectly-conducting boundary condition $\bm{B}\cdot\bm{n}=0$, where $\bm{n}$ is the unit vector normal to the surface $S$. The electric field must be normal to the boundaries and the tangential electric current vanishes there, $\left(\nabla\times\bm{B}\right)\times\bm{n}=0$ on $S$. The equation for the magnetic potential can be written using the Weyl gauge as \begin{equation} \label{eq:induceigen} s\bm{A}=\bm{u}\times\bm{B}-\eta\nabla\times\bm{B} \ , \end{equation} where $s$ is the complex growth rate and $\bm{A}$ is the magnetic vector potential defined by $\nabla\times\bm{A}=\bm{B}$. Since we have $\bm{u}\cdot\bm{n}=\bm{B}\cdot\bm{n}=0$ and $\left(\nabla\times\bm{B}\right)\times\bm{n}=0$ on the boundary $S$, the cross product of equation \eqref{eq:induceigen} with $\bm{n}$ implies that $\bm{A}\times\bm{n}=0$ on $S$. Then if we multiply the complex conjugate of equation \eqref{eq:induceigen} by a solenoidal vector field $\bm{Q}=\nabla\times\bm{P}$, and integrate over the entire domain $V$, we obtain after integrating by parts \begin{multline} \label{eq:adjoint} \int_V\bm{B}^*\cdot(s^*\bm{P}+\bm{u}\times\bm{Q}+\eta\nabla\times\bm{Q}) \ \textrm{d}V= \\ - s^*\int_S\left(\bm{P}\times\bm{A}^*\right)\cdot\bm{n} \ \textrm{d}S - \eta\int_S\left(\bm{B}^*\times\bm{Q}\right)\cdot\bm{n} \ \textrm{d}S \ , \end{multline} where $s^*$ is the complex conjugate of $s$. The first surface integral on the right-hand side of equation \eqref{eq:adjoint} vanishes since $\bm{A}\times\bm{n}=0$ at the boundaries.
The second surface integral vanishes provided that we specify $\bm{Q}\times\bm{n}=0$ at the boundaries. This last condition trivially implies that the normal electric current associated with $\bm{Q}$ vanishes on $S$, \textit{i.e.} $\left(\nabla\times\bm{Q}\right)\cdot\bm{n}=0$. The expression in parentheses on the left-hand side of equation \eqref{eq:adjoint} is then the operator on $\bm{P}$ adjoint to the original operator \eqref{eq:induceigen}. Assuming that the eigenvectors $\bm{B}$ form a complete set, and taking the curl of this expression, we obtain the following equation \begin{equation} s^*\bm{Q}=-\nabla\times(\bm{u}\times\bm{Q})-\eta\nabla\times\nabla\times\bm{Q} \ , \end{equation} which is the induction equation for the solenoidal vector $\bm{Q}$ with $\bm{u}$ replaced by $-\bm{u}$; now, however, $\bm{Q}$ satisfies the infinite magnetic permeability conditions $\bm{Q}\times\bm{n}=0$ and $\left(\nabla\times\bm{Q}\right)\cdot\bm{n}=0$ at the boundaries. This shows that interchanging the boundary conditions and reversing the direction of the velocity field gives the same spectrum. In consequence the growth rates as a function of the magnetic Reynolds number ${R_{\rm m}}$ will be the same for both sets of boundary conditions. Note that the change in the direction of the velocity field for the adjoint problem has been known for a long time \citep{roberts60}. The difficulty was to find the appropriate choice of boundary conditions for both the original and the adjoint problem \citep{proctor77b}. Since most of the earlier studies were motivated by the geodynamo problem, the external boundary condition for the original problem corresponded to a vacuum. In that case, the general boundary condition for the adjoint problem is unknown apart from some particular cases \citep{gibrob67}.
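A finite-dimensional analogue of this adjointness argument can be checked directly (an illustration, not part of the paper): once the induction operator is discretised into a matrix $L$, the adjoint problem corresponds to the conjugate transpose $L^\dagger$, whose eigenvalues are the complex conjugates of those of $L$; the growth rates $\mathrm{Re}\,s$ therefore coincide exactly, while the eigenvectors, the analogues of the magnetic eigenmodes, generally differ. A minimal numpy sketch with a random matrix standing in for the discretised operator:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
# A random complex matrix stands in for a discretised induction operator.
L = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

s = np.linalg.eigvals(L)               # spectrum of the direct problem
s_adj = np.linalg.eigvals(L.conj().T)  # spectrum of the adjoint problem

# The two spectra are complex conjugates of each other, so the sets of
# growth rates Re(s) are identical.
assert np.allclose(np.sort_complex(s), np.sort_complex(s_adj.conj()))
assert np.allclose(np.sort(s.real), np.sort(s_adj.real))
```

The eigenvectors of $L$ and $L^\dagger$ are in general unrelated, mirroring the drastically different eigenmodes reported below for the two boundary conditions.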
The present demonstration shows that the adjoint boundary condition associated with a perfect electrical conductor is an infinite magnetic permeability, both of these corresponding to clear physical limits. \begin{figure} \includegraphics[width=70mm]{./fig1a} \includegraphics[width=70mm]{./fig1b} \begin{picture}(10,0) \put(-15,22){\large{$S_2^2T_2^2$}} \put(-15,202){\large{$S_2^0T_2^0$}} \end{picture} \caption{(Color online) Illustration of the $S_2T_2$ velocity fields considered in this paper. These flows are characterised by an azimuthal wavenumber $m=0$ (top) or $m=2$ (bottom) and a Legendre polynomial order of $l=2$. The aspect ratio is $\alpha=0.4$. The isosurfaces show the velocity magnitude at 75\% of its maximum value. The streamlines are randomly initiated inside one of the hemispheres. Dark and thick streamlines correspond to large velocity magnitude, whereas bright and thin streamlines correspond to low velocity magnitude. The axisymmetric flow on the top is \textit{not} reversible, whereas the flow on the bottom is.} \label{fig:vel} \end{figure} We now illustrate our result in spherical coordinates $\left(r,\theta,\phi\right)$. The choice of the coordinate system is not important and one could equally choose Cartesian or cylindrical coordinates. We focus here on the spherical case, as the differences between the magnetic eigenmodes obtained with the two types of boundary conditions are the most striking. We consider an incompressible flow in a spherical shell defined by $\alpha<r<1$. Kinematic dynamos driven by simple steady flows are a classical problem in dynamo theory and many examples have been considered in the past (see for example \cite{dudley89} and references therein for the case of a full sphere). The objective here is to compare kinematic dynamo action for two different flows under the two types of boundary conditions mentioned previously.
The velocity field is first written using a poloidal-toroidal decomposition, thus ensuring incompressibility, \begin{equation} \bm{u}=\nabla\times\nabla\times\left(S\bm{e}_r\right)+\nabla\times\left(T\bm{e}_r\right) \ , \end{equation} where $T$ is the toroidal component, $S$ is the poloidal component, and $\bm{e}_r$ is the unit vector in the radial direction. Each of these scalars is then projected onto spherical harmonics, for example for the poloidal component, \begin{equation} \label{eq:sph} S=\sum S_l^m(r)Y_l^m(\theta,\phi) \ , \end{equation} where the sum is carried over integers such that $0 \le m \le l$, and $Y_l^m(\theta,\phi)$ is the classical spherical harmonic of azimuthal wave number $m$ and Legendre function order $l$. The flows we consider in this letter are defined as follows: all coefficients $S_l^m(r)$ and $T_l^m(r)$ are zero except the ones for which $l=2$. This type of flow is often referred to as an $S_2T_2$ flow. In the azimuthal direction, all coefficients are zero except for one particular azimuthal wave number $M$, for which we impose \begin{equation} \label{eq:polvel} S_2^M(r)=\sin^2\left(\pi\frac{r-\alpha}{1-\alpha}\right) \ , \end{equation} \begin{equation} \label{eq:torvel} T_2^M(r)=8\sin^2\left(\pi\frac{r-\alpha}{1-\alpha}\right) \ . \end{equation} The factor $8$ in equation \eqref{eq:torvel} is introduced to minimise the critical magnetic Reynolds number for dynamo action to occur. This choice of radial structure is compatible with impenetrable ($S=0$) and no-slip ($T=\partial S/\partial r=0$) boundary conditions for the velocity field. We consider two possibilities for the azimuthal dependence: $M=0$ and $M=2$. These two flows are naturally labelled $S_2^0T_2^0$ and $S_2^2T_2^2$ respectively. The first flow has been studied in detail in various geometries since it is a simple model of the mean flow in the VKS experiment \citep{monchaux2007, gissinger2009}.
The flow corresponds to two axisymmetric helical cells, one in each hemisphere, with net helicity throughout the domain, \textit{i.e.} $\mathcal{H}=\int_V\bm{u}\cdot\nabla\times\bm{u}\textrm{d}V\neq0$. Note however that our conclusion does not depend on the presence or absence of net kinetic helicity in the system. This flow is \textit{not} reversible as defined earlier. An illustration of this steady flow can be found in figure \ref{fig:vel}. Due to the axisymmetry of the flow, Cowling's theorem \citep{cowling33} forbids growing axisymmetric magnetic fields and the different azimuthal wave numbers of the magnetic field are decoupled. The second flow is similar to the flow first studied by \cite{pekeris73} in the geodynamo context, although there is no inner core in their case. It corresponds to a four-cell flow with net kinetic helicity. Due to its symmetries, this flow is reversible (a rotation of $\pi/2$ around the vertical axis changes $\bm{u}$ into $-\bm{u}$) and a visualisation for the particular aspect ratio $\alpha=0.4$ is shown in figure \ref{fig:vel}. This simple type of flow is known to be a very efficient kinematic dynamo without an inner core \citep{dudley89}. \begin{figure} \includegraphics[width=70mm]{./fig2} \caption{(Color online) Growth rate of the magnetic energy versus magnetic Reynolds number in the case of homogeneous boundary conditions. The square symbols correspond to the perfectly-conducting case where $\bm{B}\cdot\bm{r}=0$ at both boundaries whereas the cross symbols correspond to the perfectly-insulating case where $\bm{B}\times\bm{r}=0$. The results are shown for the aspect ratio $\alpha=0.4$. For the $S_2^0T_2^0$ flow, only the growth rates associated with the $m=1$ mode are shown.} \label{fig:growth} \end{figure} \begin{figure*} \includegraphics[width=70mm]{./fig3a} \includegraphics[width=70mm]{./fig3b} \caption{(Color online) Magnetic eigenmodes close to the onset for kinematic dynamo action driven by the $S_2^2T_2^2$ reversible flow.
Left: the boundary conditions are perfectly-conducting (no normal field). Right: the boundary conditions correspond to a pseudo-vacuum (no tangent field). In both cases, the magnetic field is dominated by a strong $m=1$ mode. The magnetic field lines are initiated randomly in the spherical shell. The dark and thick magnetic field lines correspond to large magnetic field amplitude whereas bright and thin lines correspond to low magnetic field magnitude. The growth rate associated with these two eigenmodes is exactly the same.} \label{fig:eigen} \end{figure*} In order to check our finding concerning the growth rate of kinematic dynamo action and its dependence on magnetic boundary conditions, we need to solve the induction equation with a prescribed velocity field. While this problem is linear and could be reduced to an eigenvalue problem, the relatively large three-dimensional resolution required here to solve the induction equation makes the equivalent initial value problem much easier to handle. As a consequence, the induction equation is solved using the numerical code PARODY. This code was originally written by E. Dormy \citep{dormy1998} and later improved by J. Aubert \citep{aubert2008}. PARODY has been benchmarked against other numerical codes in the context of a convectively-driven dynamo problem \citep{christensen2001}. Although the code is able to solve the full set of magnetohydrodynamics equations in the Boussinesq approximation, we only use the induction equation solver throughout this paper. The solenoidal magnetic field is written using a poloidal and toroidal decomposition and both poloidal and toroidal scalars are then projected onto spherical harmonics, as in equation \eqref{eq:sph}. The radial functions ${B_t}_l^m(r)$ for the toroidal field and ${B_p}_l^m(r)$ for the poloidal field are represented by their discretized values on a non-uniform radial grid. 
The grid is denser close to the inner and outer boundaries in order to accurately resolve boundary effects. The radial derivatives are computed using second-order finite differences. In the case of a perfectly-conducting boundary condition, the poloidal and toroidal components of the magnetic field must verify the following constraints for all $l,m$ \begin{align} \frac{\partial^2 B_p}{\partial r^2}+\frac{2}{r}\frac{\partial B_p}{\partial r} & = 0 \ , \\ \frac{\partial B_t}{\partial r}+\frac{1}{r}B_t & = 0 \ . \end{align} Note that due to the solenoidality of the magnetic field, these conditions directly imply that $B_p=0$ at the boundaries. In the case of an infinite magnetic permeability, the corresponding boundary conditions are \begin{align} \frac{\partial B_p}{\partial r}+\frac{1}{r}B_p & = 0 \ , \\ B_t & = 0 \ . \end{align} The time-stepping is achieved using a semi-implicit Crank-Nicolson scheme for the diffusive term and a second-order Adams-Bashforth scheme for the advective term. The typical resolution is $480$ points in the radial direction, and a spherical harmonic decomposition truncated at $l,m<64$. In the case of the $S_2^0T_2^0$ flow, since all azimuthal magnetic modes are decoupled, only the most unstable $m=1$ mode is considered. We first compute the growth rate of the magnetic energy varying the magnetic Reynolds number defined here as \begin{equation} R_M=\frac{U_{\textrm{max}}(1-\alpha)}{\eta} \ , \end{equation} where $U_{\textrm{max}}$ is the maximum velocity in the spherical shell. We consider here a particular aspect ratio of $\alpha=0.4$, but our results do not qualitatively depend on this particular choice. For the flows defined by equations \eqref{eq:polvel} and \eqref{eq:torvel}, we have $U_{\textrm{max}}=32.98$ for $M=2$ and $U_{\textrm{max}}=29.07$ for $M=0$. The induction equation is then solved from an initial magnetic seed.
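The semi-implicit time-stepping strategy described above can be sketched on a toy problem. The following is our illustrative stand-in, not the PARODY solver: Crank-Nicolson for diffusion and second-order Adams-Bashforth for advection, applied to a 1D periodic advection-diffusion equation $u_t + c\,u_x = \eta\,u_{xx}$ whose exact solution is known; all parameters are arbitrary.

```python
import numpy as np

# Toy 1D periodic problem u_t + c u_x = eta u_xx, advanced with Crank-Nicolson
# (diffusion, implicit) and Adams-Bashforth 2 (advection, explicit).
N, c, eta = 128, 1.0, 0.1
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx, dt, nsteps = x[1] - x[0], 1.0e-3, 500

I = np.eye(N)
D1 = (np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)) / (2.0 * dx)  # centred d/dx
D2 = (np.roll(I, 1, axis=1) - 2.0 * I + np.roll(I, -1, axis=1)) / dx**2

lhs = I - 0.5 * dt * eta * D2   # implicit half of Crank-Nicolson
rhs = I + 0.5 * dt * eta * D2   # explicit half of Crank-Nicolson

u_prev = np.sin(x)
adv_prev = -c * (D1 @ u_prev)
u_curr = np.linalg.solve(lhs, rhs @ u_prev + dt * adv_prev)  # Euler start-up step
for _ in range(nsteps - 1):
    adv_curr = -c * (D1 @ u_curr)
    u_curr, adv_prev = (np.linalg.solve(
        lhs, rhs @ u_curr + dt * (1.5 * adv_curr - 0.5 * adv_prev)), adv_curr)

t_end = nsteps * dt
exact = np.exp(-eta * t_end) * np.sin(x - c * t_end)
err = np.max(np.abs(u_curr - exact))  # small; dominated by the O(dx^2) spatial error
```

In a production code the implicit solve would use banded matrices rather than dense `np.linalg.solve`, but the splitting of the operator is the same.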
After a rapid transient phase during which the initial condition is forgotten, the magnetic energy grows or decays exponentially. We compare in figure \ref{fig:growth} the results obtained by varying the boundary conditions from perfectly-conducting to perfectly-insulating on both boundaries and for the two different flows. As expected from the previous demonstration, the kinematic growth rates do not depend on the boundary conditions for the $S_2^2T_2^2$ flow. The critical magnetic Reynolds number is approximately $R_M\approx40$ in this case. The fact that the growth rates are exactly equal for both types of boundary conditions is even more surprising when looking at the corresponding magnetic eigenmodes. We show in figure \ref{fig:eigen} an illustration of the magnetic eigenmodes close to the onset of dynamo action. As expected due to the effect of the boundary conditions, the magnetic topology is significantly different in both cases. The growth rate associated with these two eigenmodes is however exactly the same. By contrast, the growth rates for the two types of boundary conditions are clearly distinct for the non-reversible $S_2^0T_2^0$ flow (see figure \ref{fig:growth}). Since the azimuthal magnetic modes are decoupled, we only show the growth rates associated with the most unstable mode $m=1$. In that case, a dynamo is observed in the case of perfectly-conducting boundary conditions whereas no dynamo at all is found with an infinite magnetic permeability. As already mentioned, this flow shares some similarities with the mean velocity field of the VKS experiment. The effect of the magnetic boundary conditions on the dynamo threshold of von K\'arm\'an swirling flows has been studied by \cite{gissinger2008}. The lack of dynamo in the infinite magnetic permeability case is due to the presence of the large inner core in our configuration. As the size of the core is reduced, we recover the dynamo observed in several studies, with a strong equatorial dipole.
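Measuring a growth rate from such a run amounts to fitting the slope of $\log E(t)$ once the transient has died out. A minimal sketch with a synthetic energy series (the series, its parameters and the fitting window are hypothetical, chosen only to illustrate the post-processing step):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.8                               # true growth rate of the magnetic energy
t = np.linspace(0.0, 20.0, 2001)
transient = 1.0 + 5.0 * np.exp(-2.0 * t)  # decaying start-up transient
# Synthetic energy series: seed amplitude, transient, exponential growth, noise.
E = 1e-8 * transient * np.exp(sigma * t) * np.exp(0.01 * rng.standard_normal(t.size))

# Discard the transient phase, then fit a straight line to log E(t).
late = t > 5.0
slope, intercept = np.polyfit(t[late], np.log(E[late]), 1)
sigma_est = slope
```

A positive `sigma_est` signals dynamo action; repeating this over a range of $R_M$ and interpolating to zero growth gives the critical magnetic Reynolds number.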
We also considered different flows corresponding to different spherical harmonics, radial structures and spherical shell aspect ratios, and the conclusion remains qualitatively the same. The previous result is also valid in the case of different boundary conditions at each boundary. If the inner boundary is perfectly-conducting whereas the outer boundary is perfectly-insulating, the growth rate of the kinematic dynamo will be the same if we exchange the two boundary conditions and reverse the direction of the reversible flow. Finally, we considered different types of flows in different geometries. For example, one can consider the flow resulting from rotating convection in the Boussinesq approximation just above onset. In that case, the resulting steady flow in a plane layer model can correspond to square or hexagonal patterns \citep{veronis59}, which are all reversible. We solved the induction equation for both patterns and also found that the eigenvalue spectrum is the same when varying the boundary conditions from a perfect conductor to an infinite magnetic permeability. More details about kinematic dynamo action in such flows and the effect of boundary conditions can be found in \cite{favier2013pre}. Note also that we have only discussed steady velocity fields up to now. However, it seems that this result also holds for time-periodic flows as long as the reversibility condition is valid at all times. So far, we have only checked this result numerically by allowing the amplitude of the flow to be time-dependent (not shown here), but a more general demonstration should be accessible.
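The reversibility at the heart of these results is easy to check numerically: for a field with azimuthal wavenumber $m=2$, a rotation of $\pi/2$ about the axis sends $\bm{u}$ to $-\bm{u}$. A schematic sketch with placeholder velocity components (only the $\cos 2\phi$/$\sin 2\phi$ azimuthal dependence matters here; the radial and latitudinal profiles are our arbitrary stand-ins, not the actual $S_2^2T_2^2$ components):

```python
import numpy as np

def u(r, theta, phi):
    """Toy m = 2 velocity field (u_r, u_theta, u_phi); profiles are placeholders."""
    f = np.sin(np.pi * (r - 0.4) / 0.6) ** 2        # radial profile
    g = np.sin(theta) * np.cos(theta)               # latitudinal profile
    return np.array([f * g * np.cos(2.0 * phi),                 # u_r
                     f * g * np.sin(2.0 * phi),                 # u_theta
                     f * np.sin(theta) * np.sin(2.0 * phi)])    # u_phi

# Rotating by pi/2 about the axis multiplies every m = 2 component by
# exp(i*2*(pi/2)) = -1, i.e. u -> -u: the flow is reversible.
rng = np.random.default_rng(0)
for _ in range(100):
    r, th, ph = rng.uniform(0.4, 1.0), rng.uniform(0.0, np.pi), rng.uniform(0.0, 2.0 * np.pi)
    assert np.allclose(u(r, th, ph + np.pi / 2.0), -u(r, th, ph))
```

The same check fails for $m=0$, since an axisymmetric field is unchanged by any rotation about the axis, consistent with the $S_2^0T_2^0$ flow not being reversible.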
To conclude, we show in this letter that, provided a flow is reversible (as defined at the beginning of this letter), kinematic dynamo action will be the same with two different types of boundary conditions: the boundary can either be perfectly conducting, so that magnetic field lines are tangent to the surface, or have an infinite magnetic permeability, so that magnetic field lines reconnect perpendicularly to the surface. We verified this observation in spherical and Cartesian geometries for various types of flows. While there is only a simple constraint on the velocity field for this result to be true, the required symmetry is unlikely to be verified in a more realistic turbulent context. It would therefore be interesting to consider the departure from this exact result in the experimentally relevant situation where small-scale velocity fluctuations are not reversible whereas the mean flow is. \acknowledgments{The authors would like to thank Emmanuel Dormy and Toby S. Wood for valuable comments and suggestions. BF thanks the Cambridge Newton Trust for financial support.} \bibliographystyle{unsrt}
https://arxiv.org/abs/1708.02223
Negligibility of parabolic elements in relatively hyperbolic groups
We study density of parabolic elements in a finitely generated relatively hyperbolic group $G$ with respect to a word metric. We prove this density to be zero (apart from degenerate cases) and the limit defining the density to converge exponentially fast; this has recently been proven independently by W. Yang. As a corollary, we obtain the analogous result for the set of commuting pairs of elements in $G^2$, showing that the degree of commutativity of $G$ is equal to zero.
\section{Introduction} \label{s:intro} A group $G$ is hyperbolic relative to a collection of subgroups $\{ H_\omega \}_{\omega \in \Omega}$ if, loosely speaking, it is hyperbolic except for the part that is inside the set $\mathcal{P}$ consisting of elements in the subgroups $H_\omega$ and their conjugates. It is therefore natural to ask whether taking elements from $G$ ``at random'' we can expect these elements to be outside $\mathcal{P}$ and therefore ``behave like in a hyperbolic group''. We prove that this is the case if $G$ is finitely generated and the sequence of measures that makes sense of the words ``at random'' comes from a word metric on $G$. More precisely, let $G$ be a finitely generated group and let $X$ be a finite generating set for $G$. Denote by $|\cdot|_X: G \to \mathbb{Z}_{\geq 0}$ the \emph{word metric} on $G$ with respect to $X$. For any $n \in \mathbb{Z}_{\geq 0}$, define the sets $$S_{G,X}(n) := \{ g \in G \mid |g|_X = n \},$$ the \emph{sphere} of radius $n$ in the Cayley graph $\Gamma(G,X)$, and $$B_{G,X}(n) := \{ g \in G \mid |g|_X \leq n \} = \bigcup_{i=0}^n S_{G,X}(i),$$ the \emph{ball} of radius $n$ in $\Gamma(G,X)$. The following definition can be used to characterise ``small'' subsets of $G$. The term ``negligible'' to describe small subsets of $G^r$ (for a finitely generated infinite group $G$) was coined in \cite{kapovich}, although the definition given therein is not equivalent to Definition \ref{d:negl} here; in the case $r = 1$, the following definition is used implicitly in \cite{burillo}. \begin{defn} \label{d:negl} Let $r \geq 1$, and let $\mathcal{S} \subseteq G^r$ be a subset. For $n \geq 0$, let $$\delta_X(\mathcal{S},n) := \frac{|\mathcal{S} \cap B_{G,X}(n)^r|}{|B_{G,X}(n)|^r}$$ be the fraction of elements in $B_{G,X}(n)^r$ that belong to $\mathcal{S}$. The set $\mathcal{S}$ is said to be \emph{negligible} in $G$ with respect to $X$ if $\delta_X(\mathcal{S},n) \to 0$ as $n \to \infty$.
Moreover, $\mathcal{S}$ is said to be \emph{exponentially negligible} in $G$ with respect to $X$ if in addition there exists a constant $\rho > 1$ such that $\delta_X(\mathcal{S},n) \leq \rho^{-n}$ for all sufficiently large $n$. \end{defn} There are various definitions of relatively hyperbolic groups, due to M.~Gromov \cite{gromov}, B.~Farb \cite{farb}, C.~Dru\c{t}u \& M.~Sapir \cite{drutu}, D.~V.~Osin \cite{osin06def}, D.~Groves \& J.~S.~Manning \cite{groves}, and B.~H.~Bowditch \cite{bowditch}. In this paper we use the definition by Osin; for a precise statement, see Section \ref{s:prelim}. Our main result is as follows: \begin{thm} \label{t:main} Let $G$ be a finitely generated group that is not virtually cyclic, and let $X$ be a finite generating set. Suppose that $G$ is hyperbolic with respect to a collection of \emph{proper} subgroups $\{ H_\omega \}_{\omega \in \Omega}$. Let $$\mathcal{P} := \bigcup_{\substack{\omega \in \Omega \\ g \in G}} H_\omega^g$$ be the set of \emph{parabolic} elements of $G$. Then $\mathcal{P}$ is exponentially negligible in $G$ with respect to $X$. \end{thm} \begin{rmk} During the process of writing up this paper, the author has discovered a more general result by W.~Yang in \cite{yanggen}. In particular, Theorem 1.7 therein is the same as Theorem \ref{t:main} above, and it is closely related to a genericity result that works in a more general setting \cite[Theorem A]{yanggen}. Thus most of this paper merely gives an alternative proof to a recent but already-known result. \end{rmk} As an immediate consequence of the Theorem we obtain: \begin{cor} \label{c:loxo} Let $G$ and $X$ be as in Theorem \ref{t:main}. Let $\mathcal{Q} \subseteq G$ be the set of finite order elements. Then $\mathcal{P} \cup \mathcal{Q}$ is exponentially negligible in $G$ with respect to $X$. \end{cor} The next result computes the degree of commutativity of relatively hyperbolic groups. 
The \emph{degree of commutativity} of a finitely generated group $G$ with respect to a finite generating set $X$ was defined in \cite{amv} as $$\operatorname{dc}_X(G) := \limsup_{n \to \infty} \frac{|\{ (x,y) \in B_{G,X}(n)^2 \mid xy=yx \}|}{|B_{G,X}(n)|^2}.$$ It has been conjectured \cite[Conjecture 1.6]{amv} that $\operatorname{dc}_X(G) = 0$ whenever $G$ is not virtually abelian (independently of $X$). The next corollary confirms the conclusion of the conjecture in the case when $G$ is a non-elementary relatively hyperbolic group, thereby generalising the result for hyperbolic groups in \cite[Theorem 1.7]{amv}. \begin{cor} \label{c:dc} Let $G$ and $X$ be as in Theorem \ref{t:main}. Then the set of pairs of commuting elements, $\{ (x,y) \in G^2 \mid xy=yx \}$, is exponentially negligible in $G$ with respect to $X$. \end{cor} The structure of the paper is as follows. Section \ref{s:prelim} defines relatively hyperbolic groups and recalls some of the main results on the geometry of their Cayley graphs. Section \ref{s:geod} derives some further results relating geodesics and quasi-geodesics of the ``usual'' (i.e.\ locally finite) and ``coned-off'' (cf.\ the terminology in \cite{farb}) Cayley graphs. We give a proof of Theorem \ref{t:main} in Section~\ref{s:pftmain}, and proofs of Corollaries \ref{c:loxo} and \ref{c:dc} in Section \ref{s:cor}. \begin{ntn} For a group $G$ and a generating subset $Z \subseteq G$, we will write $Z^\ast$ for the set of all words over $Z \cup Z^{-1}$, and we will identify these words in the obvious way with paths starting at $1 \in G$ in the Cayley graph $\Gamma(G,Z)$ (we do not require $Z$ to be finite and so $\Gamma(G,Z)$ to be locally finite). Moreover, we will identify $G$ with the vertices of $\Gamma(G,Z)$. Given a path $P$ in $\Gamma(G,Z)$, we also say it is \emph{labelled} by a word $Q \in Z^\ast$ if $P = gQ$ (viewed as paths) for some $g \in G$, and we let $P_-$ (resp.\ $P_+$) be the starting (resp.\ ending) vertex of $P$.
For a word $P \in Z^\ast$, we will write $\ell_Z(P)$ for the length of the path $P$ in $\Gamma(G,Z)$ (= number of letters in $P \in Z^\ast$), and we will write $\overline{P} = \overline{P}_G$ for the corresponding element of $G$. For an element $g \in G$, we will write $|g|_Z$ for the word length of $g$ with respect to $Z$; in particular, if $p$, $q$ are vertices in $\Gamma(G,Z)$, then the distance between them will be $|p^{-1}q|_Z$. \end{ntn} \begin{ack} The author would like to thank his PhD advisor Armando Martino, without whose guidance this paper would not have been possible. The author is also grateful to Yago Antol\'in for valuable discussions and advice. \end{ack} \section{Preliminaries} \label{s:prelim} We use Osin's definition of relative hyperbolicity, given in \cite{osin06def}. For this, let $G$ be a group, $\{ H_\omega \}_{\omega \in \Omega}$ a collection of subgroups of $G$, and $X \subseteq G$ a subset. Define the group $$F := \left(\ast_{\omega \in \Omega} H_\omega\right) \ast F(X),$$ and let $\varphi: F \to G$ be the canonical homomorphism. If there exists a \emph{finite} subset $X \subseteq G$ as above such that $\varphi$ is surjective and $\ker(\varphi)$ is the normal closure of a \emph{finite} set $\mathcal{R} \subseteq F$, then $G$ is said to be \emph{finitely presented relative to} $\{ H_\omega \}_{\omega \in \Omega}$. Moreover, if we define $$\mathcal{H} := \bigcup_{\omega \in \Omega} \left( H_\omega \setminus \{1\} \right),$$ then any word $P \in \left( X \cup \mathcal{H} \right)^\ast$ such that the image $\overline{P} = \overline{P}_F$ of $P$ in $F$ is in the kernel of $\varphi$ satisfies an equality $$\overline{P} = \prod_{i=1}^n f_i^{-1} r_i^{\varepsilon_i} f_i$$ for some $f_i \in F$, $r_i \in \mathcal{R}$ and $\varepsilon_i \in \{ \pm 1 \}$; let $\operatorname{Area}_{rel}(P)$ be the minimal value of $n$ such that $\overline{P}$ can be written as above. 
If there exists a constant $C \geq 0$ such that $$\operatorname{Area}_{rel}(P) \leq C \ell_{X \cup \mathcal{H}}(P)$$ for every $P \in \left( X \cup \mathcal{H} \right)^\ast$ such that $\overline{P}_G = 1$, then $G$ is said to \emph{satisfy a relative linear isoperimetric inequality} (with respect to $X$ and $\{ H_\omega \}_{\omega \in \Omega}$). \begin{defn} \label{d:rh} The group $G$ is said to be \emph{hyperbolic relative to} $\{ H_\omega \}_{\omega \in \Omega}$ if it is finitely presented relative to $\{ H_\omega \}_{\omega \in \Omega}$ and satisfies a relative linear isoperimetric inequality. We call the $H_\omega$ the \emph{peripheral subgroups} of $G$, and say $G$ is \emph{relatively hyperbolic} if it is hyperbolic relative to some collection of peripheral subgroups. \end{defn} For the remainder of the paper we fix a group $G$ and a collection of \emph{proper} subgroups $\{ H_\omega \}_{\omega \in \Omega}$, such that $G$ is hyperbolic relative to $\{ H_\omega \}_{\omega \in \Omega}$ (given some finite subset $X \subseteq G$, which is also fixed for now). We will usually assume that, moreover, $G$ is finitely generated and $X$ is a (finite) generating set. It is worth noting that in this case Definition \ref{d:rh} is independent of the chosen generating set $X$. Indeed, given two finite generating sets $X$ and $Y$, suppose $G$ is finitely presented relative to $X$.
Then $G$ is also finitely presented relative to $Y$ since the canonical homomorphism $$\widetilde\varphi: \widetilde{F} := \left(\ast_{\omega \in \Omega} H_\omega\right) \ast F(Y) \to G$$ is surjective and $$\ker(\widetilde\varphi) = \langle\!\langle \{ \psi_Y(r) \mid r \in \mathcal{R} \} \cup \{ y^{-1}\psi_Y(\psi_X(y)) \mid y \in Y \} \rangle\!\rangle^{\widetilde{F}},$$ where $\mathcal{R}$ is as above, and $\psi_X(P)$ (resp.\ $\psi_Y(P)$) is a word over $X \cup \mathcal{H}$ (resp.\ $Y \cup \mathcal{H}$) obtained by replacing every letter of $Y$ (resp.\ $X$) in $P \in (Y \cup \mathcal{H})^\ast$ (resp.\ $P \in (X \cup \mathcal{H})^\ast$) by a word over $X$ (resp.\ $Y$) representing the same element in $G$. Moreover, \cite[Theorem 2.34]{osin06def} says that $G$ satisfies a relative linear isoperimetric inequality with respect to $X$ if and only if it satisfies one with respect to $Y$. The definition below summarises common terms used to describe paths in ${\Gamma(G,X \cup \mathcal{H})}$. The endpoints of paths in Cayley graphs that we consider will always be vertices. \begin{defn} Let $P$ be a path in the Cayley graph $\Gamma(G,X \cup \mathcal{H})$.
We say that \begin{enumerate}[(i)] \item a subpath $Q$ of $P$ is an \emph{$H_\omega$-subpath} if it is labelled by a word from $(H_\omega)^\ast$, with a convention that a single vertex is an $H_\omega$-subpath for any $\omega \in \Omega$; \item a subpath $Q$ of $P$ is an \emph{($H_\omega$-)component} if it is a maximal $H_\omega$-subpath, and a maximal $H_\omega$-subpath $Q$ is called a \emph{trivial ($H_\omega$-)component} if it is a single vertex; \item a vertex $p$ of $P$ is \emph{non-phase} if it is an interior vertex of some component of $P$, and $p$ is \emph{phase} otherwise; \item two $H_\omega$-components $Q_1$, $Q_2$ of paths $P_1$, $P_2$, respectively, in the graph $\Gamma(G,X \cup \mathcal{H})$ are \emph{connected} if there is an edge from $(Q_1)_-$ to $(Q_2)_-$ labelled by an element of $H_\omega$; \item an $H_\omega$-component $Q$ of $P$ is \emph{isolated} if it is not connected to any other $H_\omega$-component of $P$; \item the path $P$ \emph{does not vertex backtrack} (resp.\ \emph{does not backtrack}) if all its components (resp.\ all its non-trivial components) are isolated, and \emph{vertex backtracks} (resp.\ \emph{backtracks}) otherwise. \end{enumerate} \end{defn} It is clear that if $P$ is a geodesic in $\Gamma(G,X \cup \mathcal{H})$, then $P$ does not vertex backtrack and all its vertices are phase. We are interested in the collection $\mathcal{P}$ of parabolic elements of $G$. \begin{defn} An element $g \in G$ is \emph{parabolic} if it is conjugate to some element of $H_\omega$ for some $\omega \in \Omega$, and $g$ is \emph{hyperbolic} otherwise. We denote by $$\mathcal{P} := \bigcup_{\substack{\omega \in \Omega \\ g \in G}} H_\omega^g$$ the set of parabolic elements of $G$. \end{defn} We now recall some of the results about relatively hyperbolic groups which will be used in this paper. The first of them is a stronger version of the statement that the graph $\Gamma(G,X \cup \mathcal{H})$ is Gromov-hyperbolic. 
\begin{thm}[\cite{osin06def}, Theorem 3.26] \label{t:hyp} There exists a constant $\nu \in \mathbb{Z}_{\geq 1}$ with the following property. Let $\Delta \subseteq \Gamma(G,X \cup \mathcal{H})$ be a geodesic triangle with edges $P$, $Q$ and $R$, and let $p \in P$ be a vertex. Then there exists a vertex $q \in Q \cup R$ such that $$|p^{-1}q|_X \leq \nu.$$ \end{thm} The second result introduces what is known as the \emph{bounded coset penetration (BCP) property}. For this, recall the following definition. \begin{defn} Let $(K,d)$ be a geodesic metric space, $\lambda \geq 1$, and $c \geq 0$. A path $\alpha: [0,\ell_\alpha] \to K$ parametrised by arc length is a \emph{$(\lambda,c)$-quasi-geodesic} in $K$ if $$|t_1-t_2| \leq \lambda d(\alpha(t_1),\alpha(t_2))+c$$ for all $t_1,t_2 \in [0,\ell_\alpha]$. \end{defn} \begin{thm}[\cite{osin06def}, Theorem 3.23] \label{t:bcp} For any given $\lambda \geq 1$ and $c \geq 0$, there exists a constant $\varepsilon = \varepsilon(\lambda,c) \geq 0$ with the following property. Let $P$ and $Q$ be $(\lambda,c)$-quasi-geodesic paths in $\Gamma(G,X \cup \mathcal{H})$ that do not backtrack such that $P_- = Q_-$ and $P_+ = Q_+$. Then \begin{enumerate}[(i)] \item \label{i:t-bcp-near} If $p \in P$ is a phase vertex, then there exists a phase vertex $q \in Q$ such that $|p^{-1}q|_X \leq \varepsilon$. \item \label{i:t-bcp-conn} If $R$ is a non-trivial $H_\omega$-component of $P$ and $|R_-^{-1}R_+|_X > \varepsilon$, then there exists a non-trivial $H_\omega$-component of $Q$ that is connected to $R$. \item \label{i:t-bcp-ste} If $R \subseteq P$ and $S \subseteq Q$ are connected non-trivial $H_\omega$-components, then $$\max \{ |R_-^{-1}S_-|_X, |R_+^{-1}S_+|_X \} \leq \varepsilon.$$ \end{enumerate} \end{thm} The following (perhaps less standard) result allows us to enlarge an arbitrary finite generating set $Y$ of $G$ to a ``nicer'' set $X$ such that geodesics in $\Gamma(G,X)$ can be related to quasi-geodesics in $\Gamma(G,X \cup \mathcal{H})$. 
For this we need to construct derived paths, defined by Antol\'in \& Ciobanu in \cite[Construction 4.1]{ac} (to avoid unnecessary complications, we will only define this for geodesics). \begin{defn} Let $X$ be a finite generating set for $G$, and let $P$ be a geodesic path in $\Gamma(G,X)$. We can (uniquely) express $P$ as a concatenation $$P = A_0U_1A_1\cdots U_nA_n,$$ where the $U_i$ are labelled by non-trivial words in $(H_{\omega_i})^\ast$ for some $\omega_i \in \Omega$, and no $U_i$ is a proper subpath of a subpath $Q$ of $P$ such that $(U_i)_- = Q_-$ and $Q$ is labelled by a word in some $(H_\omega)^\ast$. \begin{enumerate}[(i)] \item The \emph{derived path} $\widehat{P}$ of $P$ is a path in $\Gamma(G,X \cup \mathcal{H})$ given by $$\widehat{P} := A_0h_1A_1 \cdots h_nA_n,$$ where $h_i$ is an edge labelled by an element of $H_{\omega_i}$ such that $\overline{h_i} = \overline{U_i}$ in $G$. \item We call a vertex $p \in P$ a \emph{phase vertex} of $P$ if it is not an interior vertex of any of the $U_i$ (and so ``survives'' in $\widehat{P}$). \end{enumerate} \end{defn} \begin{thm}[\cite{ac}, Lemma 5.3] \label{t:gsl} Let $Y$ be an arbitrary generating set for $G$. Then there exist $\lambda \geq 1$, $c \geq 0$ and a finite subset $\mathcal{H}'$ of $\mathcal{H}$ such that for every finite subset $X$ of $G$ satisfying $$Y \cup \mathcal{H}' \subseteq X \subseteq Y \cup \mathcal{H}$$ and for any geodesic path $P$ in $\Gamma(G,X)$, the derived path $\widehat{P}$ in $\Gamma(G,X \cup \mathcal{H})$ is a $(\lambda,c)$-quasi-geodesic that does not vertex backtrack. \end{thm} A generating set $X$ satisfying the conclusion of Theorem \ref{t:gsl} will be called a \emph{well-behaved} generating set. Finally, we have the following finiteness results. \begin{thm}[\cite{osin06def}, Corollary 2.48] \label{t:finmanypar} If $G$ is finitely generated, then we have $|\Omega| < \infty$.
\end{thm} \begin{thm}[\cite{osin06def}, Proposition 2.36] \label{t:almaln} If $H_\omega \cap H_{\widetilde\omega}^g$ is infinite for some $\omega,\widetilde\omega \in \Omega$ and $g \in G$, then $\widetilde\omega = \omega$ and $g \in H_\omega$. \end{thm} \section{Geodesics in Cayley graphs} \label{s:geod} Combining Theorems \ref{t:hyp}, \ref{t:bcp}\eqref{i:t-bcp-near} and \ref{t:gsl} it is easy to see that we have the following result. In particular, we may take $\tilde\delta := \nu + 2\varepsilon(\lambda,c)$, where $\nu$ and $(\lambda,c)$ are given by Theorems \ref{t:hyp} and \ref{t:gsl}, respectively, and $\varepsilon(\lambda,c)$ is given by Theorem \ref{t:bcp}. \begin{cor} \label{c:hyp} Let $X$ be a well-behaved generating set of $G$. There exists a constant $\tilde\delta \geq 0$ with the following property. Consider a geodesic triangle in $\Gamma(G,X)$ formed by edges $P$, $Q$ and $R$, and let $p$ be a phase vertex of $P$. Then there exists a phase vertex $q$ of either $Q$ or $R$ such that $$|p^{-1}q|_X \leq \tilde\delta. \eqno\qed$$ \end{cor} This implies that for a path $P$ in $\Gamma(G,X)$ that is ``not too long'', phase vertices of the geodesic with endpoints $P_-$, $P_+$ are ``not too far'' from $P$. \begin{prop} \label{p:logclose} Let $X$ be a well-behaved generating set of $G$, and let $P$ be a path in $\Gamma(G,X)$. Let $Q$ be a geodesic in $\Gamma(G,X)$ with endpoints $Q_- =P_-$, $Q_+=P_+$, and let $q$ be a phase vertex of $Q$. Then there exists a vertex $p$ of $P$ such that $$|p^{-1}q|_X \leq \tilde\delta \left\lceil \log_2(\ell_X(P)) \right\rceil$$ for a universal constant $\tilde\delta \geq 0$. \end{prop} \begin{proof} Let $\tilde\delta \geq 0$ be the constant given by Corollary \ref{c:hyp}, and let $$s := \left\lceil \log_2(\ell_X(P)) \right\rceil.$$ The proof resembles one that proves that geodesics in a hyperbolic metric space diverge exponentially, see e.g.\ \cite[Lemma 7.1.A]{gromov}. 
We start with the geodesic $Q_\emptyset := Q$, where $\emptyset$ is the empty string, and, for $b$ a binary string of length $\leq s$, define the geodesics $Q_b$ recursively as follows. Suppose that the geodesic $Q_b$ has been defined for a binary string $b$ of length $< s$. Let $m_b$ be a vertex on $P$ such that $$\left| \ell_X(P_{b0})-\ell_X(P_{b1}) \right| \leq 1$$ where $P_{b0}$ (resp.\ $P_{b1}$) is a subpath of $P$ with endpoints $(Q_b)_-$ and $m_b$ (resp.\ $m_b$ and $(Q_b)_+$). Then we define $Q_{b0}$ (resp.\ $Q_{b1}$) to be a geodesic with endpoints $(Q_b)_-$ and $m_b$ (resp.\ $m_b$ and $(Q_b)_+$). Note that if $b$ has length $s$, then $\ell_X(Q_b) \leq 1$ and so $Q_b$ is a subpath of $P$. Now let $q = q_0$ be a phase vertex of $Q$, and construct phase vertices $q_i$ of $Q_{b(i)}$, for $1 \leq i \leq s$ and $b(i)$ a binary string of length $i$, as follows. Suppose $q_j$ and $b(j)$ have been chosen for $0 \leq j \leq i$, for some $i < s$. Consider the geodesic triangle formed by edges $Q_{b(i)}$, $Q_{b(i)0}$ and $Q_{b(i)1}$. Then by Corollary \ref{c:hyp}, for some $c \in \{ 0,1 \}$ there exists a phase vertex $q_{i+1}$ of $Q_{b(i+1)}$, where $b(i+1) = b(i)c$, such that $|q_i^{-1} q_{i+1}|_X \leq \tilde\delta$. Finally, $Q_{b(s)}$ is a subpath of $P$, so in particular $q_s \in P$, and we get $$|q^{-1}q_s|_X \leq \sum_{i=0}^{s-1} |q_i^{-1}q_{i+1}|_X \leq \tilde\delta s,$$ as required. \end{proof} In particular, as a corollary we obtain the following result. \begin{cor} \label{c:logclose} Let $Y$ be any finite generating set for $G$. Then there exist constants $\tilde\lambda \geq 0$ and $\tilde{c} \geq 0$ such that the following holds. Let $P$ be a geodesic path in $\Gamma(G,Y)$, and let $Q$ be a geodesic path in $\Gamma(G,Y \cup \mathcal{H})$ such that $Q_- = \widehat{P}_-$ and $Q_+ = \widehat{P}_+$.
Then for any vertex $q$ of $Q$, there exists a vertex $p$ of $P$ such that $$|p^{-1}q|_Y \leq \tilde\lambda \log_2(\ell_Y(P)) + \tilde{c}.$$ \end{cor} \begin{rmk} In fact, the conclusion of Corollary \ref{c:logclose} can be strengthened by further requiring a constant bound on $|p^{-1}q|_Y$ that is independent of $P$, i.e.\ we can further assume that $\tilde\lambda = 0$, and (moreover) it is enough to require $P$ to be a quasi-geodesic. This is shown in \cite[Lemma 8.8]{hruska}. However, for the purposes of proving Theorem \ref{t:main}, the conclusion of Corollary \ref{c:logclose} is enough. \end{rmk} \begin{proof}[Proof of Corollary \ref{c:logclose}] Let $X$ be the well-behaved finite generating set containing $Y$ given by Theorem \ref{t:gsl}, and note that $X \cup \mathcal{H} = Y \cup \mathcal{H}$. Let $\lambda_X$ be the constant of the bilipschitz equivalence of $Y$ and $X$, i.e.\ a constant such that $|g|_Y \leq \lambda_X |g|_X$ for any $g \in G$ (we may take $\lambda_X := \max \{ |x|_Y \mid x \in X \}$). Given the generating set $X$, let $(\lambda,c)$ and $\tilde\delta$ be given by Theorem \ref{t:gsl} and Corollary \ref{c:hyp}, respectively, and let $\varepsilon(\lambda,c)$ be given by Theorem \ref{t:bcp}. Now let $R$ be a geodesic path in $\Gamma(G,X)$ with $R_- = i(P)_-$ and $R_+ = i(P)_+$, where $i: \Gamma(G,Y) \hookrightarrow \Gamma(G,X)$ is the canonical inclusion. Let $q \in Q$ be a vertex; note that, since $Q$ is a geodesic, $q$ is necessarily a phase vertex. By Theorem \ref{t:gsl}, $\widehat{R}$ is a $(\lambda,c)$-quasi-geodesic, and so by Theorem \ref{t:bcp}\eqref{i:t-bcp-near}, there exists a vertex $r$ of $\widehat{R}$ (viewed also as a phase vertex $r$ of $R$) such that $|r^{-1}q|_X \leq \varepsilon(\lambda,c)$. Now let $p \in P$ (technically, $i(p) \in i(P)$) be the vertex given by applying Proposition \ref{p:logclose} to the path $i(P)$, the geodesic $R$ and the phase vertex $r$ of $R$. 
Thus we have \begin{align*} \frac{1}{\lambda_X} |p^{-1}q|_Y &\leq |p^{-1}q|_X \leq |p^{-1}r|_X + |r^{-1}q|_X \leq \tilde\delta \left\lceil \log_2(\ell_X(i(P))) \right\rceil + \varepsilon(\lambda,c) \\ &\leq \tilde\delta \left( \log_2(\ell_Y(P))+1 \right) + \varepsilon(\lambda,c), \end{align*} so setting $\tilde\lambda := \lambda_X \tilde\delta$ and $\tilde{c} := \lambda_X \left( \tilde\delta + \varepsilon(\lambda,c) \right)$ gives the result. \end{proof} In particular, note that with $P$, $p$ and $q$ as in Corollary \ref{c:logclose}, the number $|p^{-1}q|_Y$ has a sublinear upper bound in terms of $\ell_Y(P)$. We will fix the constants $\tilde\lambda$ and $\tilde{c}$ given by Corollary \ref{c:logclose} for the remainder of this section, which is devoted to the proof of the following Theorem. \begin{thm} \label{t:parbound1} Suppose $G$ is not virtually cyclic, and let $X$ be a finite generating set for $G$. Then we have $$|\mathcal{P} \cap B_{G,X}(n)| \leq D \sum_{\omega \in \Omega} \sum_{i=0}^{\left\lfloor \frac{n+f(n)}{2} \right\rfloor} |S_{G,X}(i)| |H_\omega \cap B_{G,X}(n+f(n)-2i)|$$ for some $D \geq 0$ and some function $f: \mathbb{Z}_{\geq 0} \to \mathbb{Z}_{\geq 0}$ such that $\frac{f(n)}{n} \to 0$ as $n \to \infty$. \end{thm} Let $\hat{h} \in \mathcal{P}$ be an arbitrary parabolic element; by increasing the constant $D$ if necessary we may assume that $\hat{h} \neq 1$. Thus $\hat{h} = ghg^{-1}$ for some $g \in G$ and $h \in H_\omega$ for some $\omega \in \Omega$; choose $(g,h)$ in such a way that $|g|_{X \cup \mathcal{H}}$ is minimal. Consider the conjugacy diagram $R_1 Q R_2^{-1} P^{-1}$, where the paths $P$, $Q$, $R_1$ and $R_2$ are geodesics in $\Gamma(G, X \cup \mathcal{H})$ such that $\overline{P}_G = \hat{h}$, $\overline{Q}_G = h$ and $\overline{(R_1)}_G = \overline{(R_2)}_G = g$; note that $Q$ is a single edge. Let $P_0$ be a geodesic in $\Gamma(G,X)$ such that $(\widehat{P_0})_- = P_-$ and $(\widehat{P_0})_+ = P_+$. See Figure \ref{f:mapto3}, left.
\begin{figure}[ht] \begin{tikzpicture}[scale=0.9] \draw[thick,->] (0,0) node (p-) {} arc (110:90:10) node[label=below:$P$] (p) {}; \draw[thick] (p) arc (90:70:10) node (p+) {}; \draw[thick,->] (p-) arc (-60:-25:6) node[label=left:$R_1$] (r1) {}; \draw[thick] (r1) arc (-25:0:6) node (q-) {}; \draw[thick,->] (p+) arc (240:205:6) node[label=right:$R_2$] (r2) {}; \draw[thick] (r2) arc (205:180:6) node (q+) {}; \draw[thick,->-] (q-.center) -- (q+.center) node[midway,label=above:$Q$] (q) {}; \draw[thick,red,->-] (p-.center) .. controls (1,-3) and (2,-0.5) .. (3.35,-0.5) node[below] {$\widehat{P_0}$} .. controls (5,-0.5) and (6.5,-2.5) .. (p+.center); \draw[->] (7.5,3) -- (8.5,3) node[midway,above] {$i$}; \draw[thick,fill=black] (9,0) circle (1pt) node[below] {$i(P_-)$} -- (11,2) circle (1pt) node[label=left:$c$] (c) {}; \draw[thick,fill=black] (13,0) circle (1pt) node[below] {$i(P_+)$} -- (c.center); \draw[thick,fill=black] (11,4.5) circle (1pt) node[above] {$i(m)$} -- (c.center); \draw[dotted,thick,->-] (p) to[bend right=15] node[midway,label={[label distance=-5pt]left:$U_1$}] (u1) {} (r1); \draw[dotted,thick,->-] (r2) to[bend right=15] node[midway,label={[label distance=-5pt]right:$U_2$}] (u2) {} (p); \draw[dotted,thick,->-] (r1) to[bend right=15] node[midway,label={[label distance=-5pt]above:$U_3$}] (u3) {} (r2); \draw[blue,very thick,->-,dotted] (p-) arc (-60:-25:6) node[midway,left] {$R_{11}$}; \draw[blue,very thick,->-,dotted] (r1) arc (-25:0:6) node[midway,left] {$R_{12}$}; \draw[blue,very thick,->-,dotted] (p+) arc (240:205:6) node[midway,right] {$R_{21}$}; \draw[blue,very thick,->-,dotted] (r2) arc (205:180:6) node[midway,right] {$R_{22}$}; \end{tikzpicture} \vspace{-4em} \caption{The conjugacy diagram $R_1 Q R_2^{-1} P^{-1}$ (left) and the map $i$ to the tripod (right).} \label{f:mapto3} \end{figure} Now we temporarily relax the assumption that endpoints of paths in Cayley graphs are always assumed to be vertices, and view all graphs as geodesic metric spaces. 
Let $m$ be the midpoint of the edge $Q$ and consider the isometry $i$ from the diagram $R_1 Q R_2^{-1} P^{-1}$ to a tripod $T$, such that $P_-$, $P_+$ and $m$ are mapped to the ``leaves'' of $T$. Let $c$ be the ``branching vertex'' of $T$, and let $c_1$ (resp.\ $c_2$, $c_3$) be the (unique) point in $i^{-1}(c) \cap R_2$ (resp.\ $i^{-1}(c) \cap R_1$, $i^{-1}(c) \cap P$). For each $j \in \mathbb{Z}/3\mathbb{Z}$, let $U_j$ be a geodesic in $\Gamma(G,X \cup \mathcal{H})$ with $(U_j)_- = c_{j-1}$ and $(U_j)_+ = c_{j+1}$. Finally, for $j \in \mathbb{Z}/2\mathbb{Z}$, let $R_{j1}$ (resp.\ $R_{j2}$) be the subpath of $R_j$ with endpoints $(R_{j1})_- = (R_j)_-$ and $(R_{j1})_+ = c_{j+1}$ (resp.\ $(R_{j2})_- = c_{j+1}$ and $(R_{j2})_+ = (R_j)_+$). See Figure \ref{f:mapto3}. We call a geodesic $n$-gon in a graph \emph{nice} if its vertices (as an $n$-gon) are also vertices of the graph. Now by Theorem \ref{t:hyp}, nice geodesic triangles in $\Gamma(G, X \cup \mathcal{H})$ are $\nu$-slim (meaning any edge of the triangle is in the $\nu$-neighbourhood of the union of the other two edges). Since any general geodesic triangle in a graph is a nice geodesic $n$-gon for $n \leq 6$, it can be shown (by drawing diagonals) that any geodesic triangle in $\Gamma(G, X \cup \mathcal{H})$ is $(3\nu)$-slim. In particular, if we apply \cite[Proposition 2.1]{msri} to the conjugacy diagram $R_1 Q R_2^{-1} P^{-1}$ (viewed as a geodesic triangle with vertices $P_-$, $P_+$ and $m$), the following is true: \begin{lem} \label{l:diams} \begin{enumerate}[(i)] \item \label{i:ld1} The diameter of $i^{-1}(t)$ is $\leq 18\nu$ for any $t \in T$. \item \label{i:ld2} The diameter of $i^{-1}(c)$ is $\leq 12 \nu$, i.e.\ $\ell_{X \cup \mathcal{H}}(U_j) \leq 12 \nu$ for $j \in \{1,2,3\}$. \qed \end{enumerate} \end{lem} We now divide the argument into two parts, depending on whether or not $Q$ is connected to a non-trivial component of $U_3$.
\begin{lem} \label{l:hlong} There exists a universal constant $f_0 = f_0(G,Y,\{H_\omega\}_{\omega \in \Omega}) \geq 0$ such that if $Q$ is connected to a component of $U_3$ for some triple $(\hat{h},h,g)$ as above, then $$|\hat{h}|_X \geq 2|g|_X + |h|_X - 4\tilde\lambda\log_2(|\hat{h}|_X)-f_0.$$ \end{lem} \begin{proof} Since $R_{12}$, $R_{22}$ and $U_3$ are geodesics, and $Q$ is connected to a component of $U_3$, it follows by Lemma \ref{l:diams} \eqref{i:ld2} that $$(\ell_{X \cup \mathcal{H}}(R_{12})-1) + (\ell_{X \cup \mathcal{H}}(R_{22})-1) \leq \ell_{X \cup \mathcal{H}}(U_3) \leq 12\nu$$ and so, since $\ell_{X \cup \mathcal{H}}(R_{12}) = \ell_{X \cup \mathcal{H}}(R_{22})$ by the construction of the tripod $T$, we get $$\ell_{X \cup \mathcal{H}}(R_{12}) = \ell_{X \cup \mathcal{H}}(R_{22}) \leq 6\nu+1.$$ By Lemma \ref{l:diams}, it follows that given any vertex $r$ on $R_{12}$ (resp.\ $R_{22}$), there exists a vertex $p$ on $P$ such that we have $|p^{-1}r|_{X \cup \mathcal{H}} \leq 18\nu+1$ and $|P_-^{-1}p|_{X \cup \mathcal{H}} \leq |P_-^{-1}r|_{X \cup \mathcal{H}}$ (resp.\ $|P_+^{-1}p|_{X \cup \mathcal{H}} \leq |P_+^{-1}r|_{X \cup \mathcal{H}}$). It is easy to check that in this case $R_1QR_2^{-1}$ is a $(1,36\nu+3)$-quasi-geodesic. Note also that the path $R_1QR_2^{-1}$ does not backtrack: indeed, if $Q$ were connected to a non-trivial component of either $R_1$ or $R_2$ then it would be connected to both of them, and if some non-trivial components of $R_1$ and $R_2$ were connected then, since $R_1$ and $R_2$ are geodesics both labelled by $g$, this would contradict the minimality of $|g|_{X \cup \mathcal{H}}$. Now consider the (phase) vertices $Q_-$ and $Q_+$ of $R_1QR_2^{-1}$.
Applying Theorem \ref{t:bcp} \eqref{i:t-bcp-near} to $R_1QR_2^{-1}$ and $P$ and Corollary \ref{c:logclose} to $P_0$ and $P$, it follows that there are vertices $p_-$ and $p_+$ on $P_0$ such that $$|Q_-^{-1}p_-|_X, |Q_+^{-1}p_+|_X \leq \tilde\lambda\log_2(|\hat{h}|_X)+\tilde{c}+\varepsilon(1,36\nu+3).$$ It follows that there exist elements $$z_-,z_+ \in B_{G,X}(\tilde\lambda\log_2(|\hat{h}|_X)+\tilde{c}+\varepsilon(1,36\nu+3))$$ such that $$g = p_1 z_- = p_3^{-1}z_+ \qquad \text{and} \qquad h = z_-^{-1}p_2z_+$$ where $p_1,p_2,p_3 \in G$ are such that $$\hat{h} = p_1p_2p_3 \qquad \text{and} \qquad |\hat{h}|_X = |p_1|_X + |p_2|_X + |p_3|_X.$$ Thus setting $f_0 := 4(\tilde{c}+\varepsilon(1,36\nu+3))$ gives the result. \end{proof} Now consider the paths $R_{12}Q$ and $U_3R_{22}$ with endpoints $(U_3)_-$ and $Q_+$. Since $Q$ is a single edge, it follows from Lemma \ref{l:diams} \eqref{i:ld2} that both of these paths are $(1,24\nu)$-quasi-geodesics. As follows from the proof of Lemma \ref{l:hlong}, $R_{12}Q$ does not backtrack; $U_3R_{22}$ might backtrack, but we can ``shorten this path along any backtracks'' to find a $(1,24\nu)$-quasi-geodesic path $\widetilde{U}$ with $\widetilde{U}_- = (U_3)_-$ and $\widetilde{U}_+ = Q_+$ such that all the vertices of $\widetilde{U}$ are on $U_3R_{22}$. Applying Theorem \ref{t:bcp} \eqref{i:t-bcp-conn} to $R_{12}Q$ and $\widetilde{U}$ then says that if $|h|_X > \varepsilon(1,24\nu)$ then $Q$ is connected to a non-trivial component of $\widetilde{U}$. As the path $QR_{22}^{-1}$ does not backtrack, in this case $Q$ cannot be connected to a component of $R_{22}$ (apart from $(R_{22})_+ = Q_+$ if it is a trivial component of $R_{22}$), and so $Q$ must be connected to a component of $U_3$. Therefore Lemma \ref{l:hlong} applies whenever $|h|_X > \varepsilon(1,24\nu)$, thus for all but finitely many elements $h$. 
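For orientation, let us sketch how the inequality of Lemma \ref{l:hlong} feeds into the counting bound of Theorem \ref{t:parbound1}; this is only a summary of the bookkeeping, with the constants as above. Suppose $\hat{h} = ghg^{-1} \in B_{G,X}(n)$ with $(\hat{h},h,g)$ as above and with $Q$ connected to a component of $U_3$, and set $i := |g|_X$. If $f: \mathbb{Z}_{\geq 0} \to \mathbb{Z}_{\geq 0}$ is any function satisfying $f(n) \geq 4\tilde\lambda\log_2(n) + f_0$ (which can be arranged with $f$ sublinear), then rearranging the inequality of Lemma \ref{l:hlong} and using $|\hat{h}|_X \leq n$ gives
$$|h|_X \leq |\hat{h}|_X - 2|g|_X + 4\tilde\lambda\log_2(|\hat{h}|_X) + f_0 \leq n + f(n) - 2i.$$
Hence every such $\hat{h}$ is determined by a pair $(g,h)$ with $g \in S_{G,X}(i)$ and $h \in H_\omega \cap B_{G,X}(n+f(n)-2i)$ for some $\omega \in \Omega$ and some $0 \leq i \leq \left\lfloor \frac{n+f(n)}{2} \right\rfloor$, and the number of such pairs is bounded by the sum appearing in Theorem \ref{t:parbound1}.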
By setting $D := D_0 |B_{G,X}(\varepsilon(1,24\nu))|+1$ (for some $D_0 \geq 0$) and picking $f: \mathbb{Z}_{\geq 0} \to \mathbb{Z}_{\geq 0}$ to be the pointwise maximum of finitely many sublinear functions (so still sublinear), Theorem \ref{t:parbound1} follows from the following result (since $1 \in H_\omega$): \begin{lem} \label{l:hshort} Let $h_0 \in H_\omega$ for some $\omega \in \Omega$, and let $\mathcal{P}(h_0)$ be the set of elements $\hat{h}$ in the conjugacy class $h_0^G$ such that if $g$ and $h$ are as above, then $h = h_0$ and $Q$ is not connected to a component of $U_3$. Then $$|\mathcal{P}(h_0) \cap B_{G,X}(n)| \leq D_0 \left| B_{G,X}\left( \left\lfloor \frac{n+f_0(n)}{2} \right\rfloor \right) \right|$$ for some constant $D_0 \geq 0$ and some function $f_0: \mathbb{Z}_{\geq 0} \to \mathbb{Z}_{\geq 0}$ such that $\frac{f_0(n)}{n} \to 0$ as $n \to \infty$. \end{lem} \begin{proof} Consider the closed path $R_{12}QR_{22}^{-1}U_3^{-1}$. Since the path $R_{12}QR_{22}^{-1}$ does not backtrack and by assumption $Q$ is not connected to a component of $U_3$, it follows that given any two distinct non-trivial connected components of $R_{12}QR_{22}^{-1}U_3^{-1}$, one of them must be on $U_3$ and the other one on either $R_{12}$ or $R_{22}$. But since $U_3$ is an arbitrary geodesic, we may without loss of generality assume that (loosely speaking) $U_3$ follows $R_{12}$ (and $U_3^{-1}$ follows $R_{22}$) until the last of their non-trivial connected components. Thus we have a closed path $\mathcal{C} := \widetilde{R_{12}}Q\widetilde{R_{22}}^{-1}\widetilde{U_3}^{-1}$ which does not backtrack and all of its vertices are phase except for, possibly, endpoints of $\widetilde{U_3}$. See Figure \ref{f:cycle}. Note that $\ell_{X \cup \mathcal{H}}(\widetilde{U_3}) \geq 1$ since (by minimality of $|g|_{X \cup \mathcal{H}}$) $R_1 \cap R_2 = \varnothing$.
\begin{figure}[ht] \begin{tikzpicture} \draw[thick,->] (0,0) node (r1) {} arc (-25:-15:10) node[label=left:$R_{12}$] (r12) {}; \draw[thick] (r12) arc (-15:0:10) node (q-) {}; \draw[very thick,blue,dotted,->-] (r12) arc (-15:0:10) node[midway,label=left:$\widetilde{R_{12}}$] (r12t) {}; \draw[thick,->] (3.5,0) node (r2) {} arc (205:190:10) node[label=right:$R_{22}$] (r22) {}; \draw[thick] (r22) arc (190:180:10) node (q+) {}; \draw[very thick,blue,dotted,->-] (r22) arc (190:180:10) node[midway,label=right:$\widetilde{R_{22}}$] (r22t) {}; \draw[thick] (q-.center) -- (q+.center); \draw[very thick,blue,dotted,->-] (q-.center) -- (q+.center) node[midway,label=above:$Q$] (q) {}; \draw[very thick,blue,dotted,->-] (r12.center) to [bend right=20] node[midway,label=above:$\widetilde{U_3}$] (u3) {} (r22.center); \draw[very thick,red,dotted,->-] (r1) arc (-25:-15:10) node[midway,label=right:$\widetilde{R_{12}'}$] (r12p) {}; \draw[very thick,red,dotted,->-] (r2) arc (205:190:10) node[midway,label=left:$\widetilde{R_{22}'}$] (r22p) {}; \fill (r1) node[above left] {$(U_3)_-$} circle (1.5pt); \fill (r2) node[above right] {$(U_3)_+$} circle (1.5pt); \end{tikzpicture} \caption{The closed paths $R_{12}QR_{22}^{-1}U_3^{-1}$ (in red and blue) and $\mathcal{C} = \widetilde{R_{12}}Q\widetilde{R_{22}}^{-1}\widetilde{U_3}^{-1}$ (in blue). 
Here $U_3 = \widetilde{R_{12}'}\widetilde{U_3}\widetilde{R_{22}'}^{-1}$, where $\widetilde{R_{12}'}$ and $\widetilde{R_{22}'}$ are such that $R_{j2} = \widetilde{R_{j2}'} \widetilde{R_{j2}}$.} \label{f:cycle} \end{figure} Suppose without loss of generality that one of the following three cases holds: \begin{enumerate}[(i)] \item $\ell_{X \cup \mathcal{H}}(\widetilde{R_{12}}) > \ell_{X \cup \mathcal{H}}(\widetilde{R_{22}})$, or \item $\ell_{X \cup \mathcal{H}}(\widetilde{R_{12}}) = \ell_{X \cup \mathcal{H}}(\widetilde{R_{22}})$ and $(\widetilde{U_3})_-$ is a phase vertex of the closed path $\mathcal{C}$, or \item $\ell_{X \cup \mathcal{H}}(\widetilde{R_{12}}) = \ell_{X \cup \mathcal{H}}(\widetilde{R_{22}})$ and both $(\widetilde{U_3})_-$, $(\widetilde{U_3})_+$ are non-phase vertices of $\mathcal{C}$. \end{enumerate} Let $r_+$ be $(\widetilde{U_3})_+$ if $(\widetilde{U_3})_+$ is a phase vertex of $\mathcal{C}$, and let $r_+$ be the vertex on $\widetilde{R_{22}}$ that is adjacent to $(\widetilde{U_3})_+ = (\widetilde{R_{22}})_-$ otherwise. It follows that both $r_+$ and $r_- := \hat{h}^{-1}r_+$ (the latter one being a vertex of $\widetilde{R_{12}}$) are phase vertices of $\mathcal{C}$. Moreover, there is a subpath of $\mathcal{C}$ with endpoints $r_-$ and $r_+$ that is a union of at most $$\ell_{X \cup \mathcal{H}}(\widetilde{U_3}) + \left| \ell_{X \cup \mathcal{H}}(\widetilde{R_{12}'}) - \ell_{X \cup \mathcal{H}}(\widetilde{R_{22}'}) \right| + 1 \leq \ell_{X \cup \mathcal{H}}(U_3)+1 \leq 12\nu+1$$ components. All of these components have length $1$ (apart from, possibly, one or two components which have length $2$), hence $|r_-^{-1}r_+|_{X \cup \mathcal{H}} \leq 12\nu+3$. Now consider the two subpaths of $\mathcal{C}$ joining $r_-$ and $Q_+$. By the above, it follows that they are $(1,24\nu+6)$-quasi-geodesics that do not backtrack and do not have non-trivial connected components. 
By Theorem \ref{t:bcp} \eqref{i:t-bcp-conn}, if $S$ is a component of $\mathcal{C}$, then $|S_-^{-1}S_+|_X \leq \varepsilon(1,24\nu+6)$. Thus, by the previous paragraph, it follows that $$|r_-^{-1}r_+|_X \leq (12\nu+1) \varepsilon(1,24\nu+6).$$ In particular, since $\hat{h} = r_-r_0r_-^{-1}$ where $r_0 = r_-^{-1}r_+$, by setting $$D_0 := \left| B_{G,X}((12\nu+1) \varepsilon(1,24\nu+6)) \right|$$ and fixing $r_0 \in G$ it is enough to show that $$|\mathcal{P}(h_0,r_0) \cap B_{G,X}(n)| \leq \left| B_{G,X}\left( \left\lfloor \frac{n+f_0(n)}{2} \right\rfloor \right) \right|$$ for some function $f_0: \mathbb{Z}_{\geq 0} \to \mathbb{Z}_{\geq 0}$ with $\frac{f_0(n)}{n} \to 0$ as $n \to \infty$, where $\mathcal{P}(h_0,r_0)$ is the set of $\hat{h} \in h_0^G$ with $h = h_0$ and $r_0$ as above. Note that every vertex $v$ on $\widetilde{U_3}$ satisfies $|r_-^{-1}v|_X \leq (12\nu+1) \varepsilon(1,24\nu+6)$. Since $\widetilde{U_3}$ is an arbitrary geodesic (after fixing its endpoints), we may assume (similarly to the case above) that $\widetilde{U_3}$ follows $\widetilde{R_{12}'}^{-1} R_{11}^{-1}$ (and $\widetilde{U_3}^{-1}$ follows $\widetilde{R_{22}'}^{-1} R_{21}^{-1}$) until the last of their non-trivial connected components. Thus we obtain a path $\widetilde{R_1} \widetilde{U_3'} \widetilde{R_2}^{-1}$ (where $\widetilde{R_1}$, $\widetilde{U_3'}$, $\widetilde{R_2}$ are subpaths of $R_1$, $\widetilde{U_3}$, $R_2$, respectively) that does not backtrack and all its vertices are phase, except for possibly endpoints of $\widetilde{U_3'}$. Since all vertices of this path are on the $(1,36\nu)$-quasi-geodesic $R_{11}U_3R_{21}^{-1}$, it follows that $\widetilde{R_1} \widetilde{U_3'} \widetilde{R_2}^{-1}$ is also a $(1,36\nu)$-quasi-geodesic. Now if either $\ell_{X \cup \mathcal{H}}(\widetilde{U_3'}) > 1$ or $(\widetilde{U_3'})_-$ is a phase vertex of $\widetilde{R_1} \widetilde{U_3'} \widetilde{R_2}^{-1}$, then $\widetilde{R_1} \widetilde{U_3'} \widetilde{R_2}^{-1}$ contains a phase vertex that is on $\widetilde{U_3}$.
Otherwise, consider the path $\widetilde{R_1'} \widetilde{U_3''} \widetilde{R_2}^{-1}$ obtained by replacing the non-geodesic subpath of length $2$ with interior point $(\widetilde{U_3'})_-$ by an edge $\widetilde{U_3''}$. Since $R_1$ and $R_2$ have no connected components, we have that either $(\widetilde{U_3''})_+ = (\widetilde{U_3'})_+$ is a phase vertex of $\widetilde{R_1'} \widetilde{U_3''} \widetilde{R_2}^{-1}$, or the edge $\widetilde{U_3''}$ is labelled by an element of $H_\omega \cap H_{\widetilde\omega}$ for some distinct $\omega,\widetilde\omega \in \Omega$. Thus in either case there exists a vertex $v$ of $\widetilde{U_3}$ and some $w \in G$ such that $vw$ is a phase vertex of a $(1,36\nu)$-quasi-geodesic with endpoints $P_-$ and $P_+$ that does not backtrack, and such that $$|w|_X \leq E_0 := \max \{ |k|_X \mid k \in H_\omega \cap H_{\widetilde\omega}, \omega,\widetilde\omega \in \Omega, \omega \neq \widetilde\omega \},$$ where the right hand side is defined and finite by Theorem \ref{t:almaln}. Therefore, by Theorem \ref{t:bcp} \eqref{i:t-bcp-near} and Corollary \ref{c:logclose} it follows that this vertex $v$ satisfies $|v^{-1}u|_X \leq f_1$ for some vertex $u$ of $\widehat{P_0}$, where $$f_1 := (12\nu+1)\varepsilon(1,24\nu+6) + E_0 + \varepsilon(1,36\nu) + \tilde\lambda\log_2(|\hat{h}|_X)+\tilde{c}.$$ It follows that there exists some $z \in B_{G,X}(f_1)$ such that $$r_- = p_1z = p_2^{-1}zr_0$$ where $p_1,p_2 \in G$ are such that $$\hat{h} = p_1p_2 \qquad \text{and} \qquad |\hat{h}|_X = |p_1|_X + |p_2|_X.$$ Thus setting $f_0 := 2f_1+(12\nu+1)\varepsilon(1,24\nu+6)$, which depends on $\hat{h}$ only through the sublinear term $\tilde\lambda\log_2(|\hat{h}|_X)$ in $f_1$ and so may be viewed as a sublinear function of $n \geq |\hat{h}|_X$, implies that $|r_-|_X \leq \frac{|\hat{h}|_X+f_0}{2}$, which gives the result. \end{proof} \section{Exponential negligibility of $\mathcal{P}$} \label{s:pftmain} This section is dedicated to a proof of Theorem \ref{t:main}. We need the following definition.
\begin{defn} For a group $K$ with a finite generating set $Z$, the \emph{(exponential) growth rate} of $K$ with respect to $Z$ is the limit $$\mu(K,Z) := \lim_{n \to \infty} \sqrt[n]{|B_{K,Z}(n)|}.$$ By submultiplicativity of ball sizes in $\Gamma(K,Z)$ and a well-known result called Fekete's Lemma, it follows that this limit always exists and is equal to $\inf \{ \sqrt[n]{|B_{K,Z}(n)|} \mid n \in \mathbb{Z}_{\geq 1} \}$. \end{defn} In order to prove Theorem \ref{t:main}, we use growth tightness of relatively hyperbolic groups: \begin{thm}[\cite{yang}, Corollary 1.7] \label{t:grtight} Suppose $G$ is not virtually cyclic. Let $X$ be a finite generating set for $G$, and let $N \trianglelefteq G$ be an infinite normal subgroup. Let $\overline{X}$ be the image of $X$ under the quotient map $G \to G/N$, so that $\overline{X}$ is a finite generating set of $G/N$. Then $\mu(G/N,\overline{X}) < \mu(G,X)$. \end{thm} We also use Dehn filling in relatively hyperbolic groups, namely the following result. \begin{thm}[\cite{osin07}, Theorem 1.1 (1)] \label{t:dfill} Let $G$ be hyperbolic relative to a collection of subgroups $\{ H_\omega \}_{\omega \in \widetilde\Omega}$. Then there exists a finite subset $\mathcal{F}$ of $G \setminus \{1\}$ with the following property. Let $\{ N_\omega \}_{\omega \in \widetilde\Omega}$ be a collection of normal subgroups $N_\omega \trianglelefteq H_\omega$ such that $N_\omega \cap \mathcal{F} = \varnothing$ for each $\omega \in \widetilde\Omega$, and define a normal subgroup $N := \left\langle\!\left\langle \bigcup_{\omega \in \widetilde\Omega} N_\omega \right\rangle\!\right\rangle^G \trianglelefteq G$. Then for each $\omega \in \widetilde\Omega$, the natural map $H_\omega / N_\omega \to G / N$ is injective. \end{thm} We fix a finite generating set $X$ of $G$ for the remainder of the section. The main ingredient in the proof of Theorem \ref{t:main} (apart from Theorem \ref{t:parbound1}) is the following Lemma.
\begin{lem} \label{l:expneg} For any $\omega \in \Omega$, the subgroup $H_\omega \leq G$ is exponentially negligible in $G$ (with respect to $X$). \end{lem} \begin{proof} The idea is to use results in \cite{osin06elem} and Theorem \ref{t:dfill} to find a quotient of $G$ whose growth could be compared to the growth of $H_\omega$, and then use Theorem \ref{t:grtight}. Suppose first that there exists a normal subgroup $N \trianglelefteq G$ such that the number $M := |H_\omega \cap N|$ is finite. Then the quotient $G/N$ is generated by the set $\overline{X}$ of images of elements of $X$ under the quotient map, and clearly $$|gN|_{\overline{X}} \leq |g|_X$$ for any $g \in G$. Also, for fixed elements $g \in G$ and $t_0 \in H_\omega \cap gN$, we have $$|H_\omega \cap gN| = |\{ t_0^{-1}t \mid t \in H_\omega \cap gN \}| \leq |H_\omega \cap N| = M.$$ In particular, it follows that \begin{align*} |H_\omega \cap B_{G,X}(n)| &\leq \sum_{\substack{gN \in G/N \\ gN \cap B_{G,X}(n) \neq \varnothing}} |H_\omega \cap gN| \\ &\leq M | \{ gN \in G/N \mid gN \cap B_{G,X}(n) \neq \varnothing \} | \\ &\leq M |B_{G/N,\overline{X}}(n)|. \end{align*} Thus, by Theorem \ref{t:grtight} it follows that as long as $N$ is infinite we have $$\limsup_{n \to \infty} \sqrt[n]{\frac{|H_\omega \cap B_{G,X}(n)|}{|B_{G,X}(n)|}} < 1,$$ which implies that $H_\omega$ is exponentially negligible in $G$. Thus the problem reduces to showing that there exists an infinite normal subgroup $N \trianglelefteq G$ such that $H_\omega \cap N$ is finite. To construct such a subgroup, we use Dehn filling in relatively hyperbolic groups. Let $g \in G$ be a hyperbolic element (i.e.\ an element of $G \setminus \mathcal{P}$) such that the order of $g$ is infinite: such an element exists by \cite[Corollary 4.5]{osin06elem}. 
Consider the subgroup $$E_G(g) := \{ h \in G \mid h^{-1} g^n h = g^{\pm n} \text{ for some } n \geq 1 \}.$$ Clearly $g \in E_G(g)$, and by \cite[Lemma 4.1]{osin06elem}, the index of $\langle g \rangle \cong \mathbb{Z}$ in $E_G(g)$ is finite. Also, by \cite[Corollary 1.7]{osin06elem}, $G$ is hyperbolic relative to the collection $\{ H_\omega \}_{\omega \in \Omega} \cup \{ E_G(g) \}$. Let $\widetilde\Omega := \Omega \sqcup \{0\}$ and let $H_0 := E_G(g)$. Now let $\mathcal{F} \subseteq G \setminus \{1\}$ be the finite subset given by Theorem \ref{t:dfill} applied to $G$ and the collection of subgroups $\{ H_\omega \}_{\omega \in \widetilde\Omega}$. Since $\mathcal{F}$ is finite, we have $\langle g^m \rangle \cap \mathcal{F} = \varnothing$ for $m \in \mathbb{Z}_{\geq 1}$ large enough. Let $N_\omega := \{1\}$ for $\omega \in \Omega = \widetilde\Omega \setminus \{0\}$, and let $N_0 := \bigcap_{h \in E_G(g)} \langle h^{-1}g^mh \rangle \trianglelefteq E_G(g)$ be the \emph{normal core} of $\langle g^m \rangle$ in $E_G(g) = H_0$, i.e.\ the kernel of the action of $E_G(g)$ on the set of left cosets of $\langle g^m \rangle$ in $E_G(g)$. As the index of $\langle g^m \rangle$ in $E_G(g)$ is finite, so is the index of $N_0$ in $E_G(g)$, so in particular, as $E_G(g)$ is infinite, so is $N_0$. Therefore, applying Theorem \ref{t:dfill} yields an infinite normal subgroup $N = \langle\!\langle N_0 \rangle\!\rangle^G$ of $G$ such that, for each $\omega \in \Omega$, the group $H_\omega \cap N$ is trivial, hence finite. \end{proof} \begin{proof}[Proof of Theorem \ref{t:main}] The proof follows from Theorem \ref{t:parbound1} and Lemma \ref{l:expneg}. In particular, by Lemma \ref{l:expneg} it follows that there exists a constant $\rho > 1$ such that $\frac{|H_\omega \cap B_{G,X}(n)|}{|B_{G,X}(n)|} \leq \rho^{-n}$ for all sufficiently large $n$. 
Note that we have $\mu := \mu(G,X) > 1$: indeed, if we had $\mu = 1$, then Theorem \ref{t:grtight} would imply that $G$ itself cannot be an infinite normal subgroup of $G$, so $G$ would be finite, contradicting the assumption that $G$ is not virtually cyclic. Replacing $\rho$ by $\min\{\rho,\mu\}$ if necessary (which only weakens the bound $\frac{|H_\omega \cap B_{G,X}(n)|}{|B_{G,X}(n)|} \leq \rho^{-n}$), we may moreover assume that $\rho \leq \mu$. Now choose a constant $\varepsilon > 0$ such that $\varepsilon < \mu (\min\{\mu,\rho\}-1).$ Then there exists a constant $n_0 \in \mathbb{Z}_{\geq 0}$ such that $$\frac{|H_\omega \cap B_{G,X}(n)|}{|B_{G,X}(n)|} \leq \rho^{-n} \qquad \text{and} \qquad |S_{G,X}(n)| \leq |B_{G,X}(n)| \leq (\mu+\varepsilon)^n$$ for all $n \geq n_0$; note also that $|B_{G,X}(n)| \geq \mu^n$ for all $n \in \mathbb{Z}_{\geq 0}$. Now let $n \geq 3n_0$. By Theorem \ref{t:parbound1}, it follows that it is enough to find an exponential (with base $<1$) upper bound on the number $$\sum_{i=0}^{\left\lfloor \frac{n+f(n)}{2} \right\rfloor} \frac{|S_{G,X}(i)| |H_\omega \cap B_{G,X}(n+f(n)-2i)|}{|B_{G,X}(n)|}$$ for any fixed $\omega \in \Omega$. We do this in three parts. For $i < n_0$, we have \begin{align*} \sum_{i=0}^{n_0-1} &\frac{|S_{G,X}(i)| |H_\omega \cap B_{G,X}(n+f(n)-2i)|}{|B_{G,X}(n)|} \\ &\leq \sum_{i=0}^{n_0-1} |S_{G,X}(i)| \left(\frac{\mu+\varepsilon}{\rho}\right)^{n+f(n)-2i} \mu^{-n} \\ &\leq |B_{G,X}(n_0-1)| \left(\frac{\mu+\varepsilon}{\rho}\right)^{f(n)} \left(\frac{\mu+\varepsilon}{\rho\mu}\right)^n \end{align*} and $\frac{\mu+\varepsilon}{\rho\mu} < 1$ by the choice of $\varepsilon$ (in the second inequality we also used that $\frac{\mu+\varepsilon}{\rho} > 1$, which holds as $\rho \leq \mu$), so since $\frac{f(n)}{n} \to 0$ as $n \to \infty$ we get exponential convergence to zero, as required.
For $i \geq n_0$ and $n+f(n)-2i \geq n_0$, we have \begin{align*} \sum_{i=n_0}^{\left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor} &\frac{|S_{G,X}(i)| |H_\omega \cap B_{G,X}(n+f(n)-2i)|}{|B_{G,X}(n)|} \\ &\leq \sum_{i=n_0}^{\left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor} (\mu+\varepsilon)^i \left(\frac{\mu+\varepsilon}{\rho}\right)^{n+f(n)-2i} \mu^{-n} \\ &= \sum_{i=n_0}^{\left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor} \left(\frac{\mu+\varepsilon}{\rho\mu}\right)^{n} \left( \frac{\rho^2}{\mu+\varepsilon} \right)^i \left(\frac{\mu+\varepsilon}{\rho}\right)^{f(n)}. \end{align*} Now if $\frac{\rho^2}{\mu+\varepsilon} \leq 1$, the result follows immediately as above. If instead $\frac{\rho^2}{\mu+\varepsilon} > 1$, we can bound the above expression by \begin{align*} \left(\frac{\mu+\varepsilon}{\rho\mu}\right)^{n} \frac{\left( \frac{\rho}{\sqrt{\mu+\varepsilon}} \right)^{n+f(n)+2}}{\frac{\rho^2}{\mu+\varepsilon}-1} \left(\frac{\mu+\varepsilon}{\rho}\right)^{f(n)} = \frac{(\sqrt{\mu+\varepsilon})^{f(n)}}{1-\frac{\mu+\varepsilon}{\rho^2}} \left(\frac{\sqrt{\mu+\varepsilon}}{\mu}\right)^n \end{align*} and since $\frac{\mu+\varepsilon}{\mu^2} < 1$ by the choice of $\varepsilon$, we get exponential convergence as before. Finally, for $n+f(n)-2i < n_0$, we have \begin{align*} \sum_{i=\left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor+1}^{\left\lfloor \frac{n+f(n)}{2} \right\rfloor} &\frac{|S_{G,X}(i)| |H_\omega \cap B_{G,X}(n+f(n)-2i)|}{|B_{G,X}(n)|} \\ &\leq \sum_{i=\left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor+1}^{\left\lfloor \frac{n+f(n)}{2} \right\rfloor} (\mu+\varepsilon)^i |H_\omega \cap B_{G,X}(n+f(n)-2i)| \mu^{-n} \\ &\leq \frac{(\sqrt{\mu+\varepsilon})^{n+f(n)+2}}{\mu+\varepsilon-1} |H_\omega \cap B_{G,X}(n_0-1)| \mu^{-n} \\ &= \frac{|H_\omega \cap B_{G,X}(n_0-1)| (\sqrt{\mu+\varepsilon})^{f(n)}}{1-\frac{1}{\mu+\varepsilon}} \left(\frac{\sqrt{\mu+\varepsilon}}{\mu}\right)^n \end{align*} and so we again get exponential convergence. 
It follows that $\mathcal{P}$ is exponentially negligible. \end{proof} \section{Degree of commutativity} \label{s:cor} This section is dedicated to the proofs of Corollaries \ref{c:loxo} and \ref{c:dc}. Given Theorem \ref{t:main}, the proof of Corollary \ref{c:loxo} is easy. Indeed, it is easy to check (either directly from Definition \ref{d:rh}, or by using the characterisation of hyperbolically embedded subgroups given in \cite[Theorem 1.5]{osin06elem}) that if a group $G$ is hyperbolic relative to $\{ H_\omega \}_{\omega \in \Omega}$ and $F \leq G$ is any finite subgroup, then $G$ is hyperbolic relative to $\{ H_\omega \}_{\omega \in \Omega} \cup \{F\}$. But there are only finitely many conjugacy classes of finite-order hyperbolic elements in $G$ (i.e.\ elements of $G \setminus \mathcal{P}$) \cite[Theorem 4.2]{osin06def}, hence there exists a finite collection $\{ F_1,\ldots,F_m \}$ of finite cyclic subgroups of $G$ such that any hyperbolic element of finite order is conjugate to an element of one of the $F_j$. Thus $G$ is hyperbolic relative to $\{ H_\omega \}_{\omega \in \Omega} \cup \{ F_1,\ldots,F_m \}$, and with this structure of a relatively hyperbolic group every hyperbolic element of $G$ has infinite order. Corollary \ref{c:loxo} then follows directly from Theorem \ref{t:main}. Given the previous paragraph, we may without loss of generality assume that all hyperbolic elements in $G$ have infinite order. The proof of Corollary \ref{c:dc} then closely follows the proof of \cite[Theorem 1.7]{amv}, which states an analogous result for ordinary hyperbolic groups. The following Lemma was stated as \cite[Lemma 3.1]{amv} in a slightly different form, but the proof remains the same. For a group $G$ and an element $g \in G$, let $C_G(g)$ denote the centraliser of $g$ in $G$.
\begin{lem} \label{l:amv} Let $G$ be a group generated by a finite subset $X$, and let $\mathcal{N} \subseteq G$ be a subset such that \begin{enumerate}[(i)] \item \label{i:l-amv-negl} $\mathcal{N}$ is exponentially negligible in $G$ with respect to $X$, and \item \label{i:l-amv-cent} there exist constants $\rho > 1$ and $n_0 \geq 0$ such that $$|C_G(g) \cap B_{G,X}(n)| \leq \rho^{-n} |B_{G,X}(n)|$$ for all $g \in G \setminus \mathcal{N}$ and $n \geq n_0$. \end{enumerate} Then $\{ (x,y) \in G^2 \mid xy=yx \}$ is exponentially negligible in $G$ with respect to $X$. \end{lem} Thus, taking $\mathcal{N} = \mathcal{P}$, in view of Corollary \ref{c:loxo} it is enough to verify condition \eqref{i:l-amv-cent} of Lemma \ref{l:amv}. \begin{proof}[Proof of Corollary \ref{c:dc}] It is known \cite[Lemma 4.1]{osin06elem} that given any hyperbolic element $g \in G$, the centraliser $C_G(g)$ contains $\langle g \rangle$ as a finite index subgroup; in particular, $C_G(g)$ is a $2$-ended subgroup of $G$. In this case, a classical result \cite[Lemma 4.1]{wall} shows that $C_G(g)$ fits into an exact sequence $$1 \to F \to C_G(g) \to Q \to 1$$ where $F$ is finite and $Q$ is either $\mathbb{Z}$ or $C_2 \ast C_2$. Thus $C_G(g)$ has a subgroup $K$ of index at most $2$ such that $K/F \cong \mathbb{Z}$ for a finite normal subgroup $F \trianglelefteq K$. Note that $K$ contains $g^2 \in G \setminus \mathcal{P}$ and so $K$ is a $2$-ended subgroup not contained in $\mathcal{P}$. Since $F \leq K$ is a finite subgroup, \cite[Lemma 9.4]{louder} shows that there is a universal constant $m_0$ (independent of $g$) such that $|F| \leq m_0$. Since the sequence $$1 \to F \to K \to \mathbb{Z} \to 1$$ splits, this means that $C_G(g)$ contains a subgroup $\langle k_g \rangle \cong \mathbb{Z}$ of index $\leq 2m_0$.
It is clear that $k_g \notin \mathcal{P}$: otherwise $C_G(g) \cap H_\omega^{g_0}$ is infinite for some $\omega \in \Omega$ and $g_0 \in G$, which cannot happen since $$C_G(g) \cap H_\omega^{g_0} \leq H_\omega^{g_0g} \cap H_\omega^{g_0} = \left( H_\omega^{g_0gg_0^{-1}} \cap H_\omega \right)^{g_0}$$ and $H_\omega^{g_0gg_0^{-1}} \cap H_\omega$ is finite by Theorem \ref{t:almaln} since $g \notin \mathcal{P}$. Now consider the \emph{translation length function} in $G$, defined as $$\tau(g) := \limsup_{n \to \infty} \frac{|g^n|_{X \cup \mathcal{H}}}{n} = \inf \left\{ \frac{|g^n|_{X \cup \mathcal{H}}}{n} \;\middle|\; n \in \mathbb{Z}_{\geq 1} \right\}$$ for $g \in G \setminus \mathcal{P}$, where the second equality follows from Fekete's Lemma since the sequence $(|g^n|_{X \cup \mathcal{H}})_{n=0}^\infty$ is subadditive. It is known \cite[Theorem 4.25]{osin06def} that there exists a universal constant $\zeta > 0$ such that $\tau(g) \geq \zeta$ for all $g \in G \setminus \mathcal{P}$. Finally, pick an element $g \in G \setminus \mathcal{P}$. Then $\tau(k_g) \geq \zeta$ for $k_g \in G \setminus \mathcal{P}$ as above, and so if $k_g^m \in B_{G,X}(n) \subseteq B_{G,X \cup \mathcal{H}}(n)$, then $|m| \leq n/\zeta$. It follows that $$|\langle k_g \rangle \cap B_{G,X}(n)| \leq \frac{2n}{\zeta}+1.$$ Moreover, if $g_0 \langle k_g \rangle \cap B_{G,X}(n) \neq \varnothing$ for some $g_0 \in G$ then we may assume that $g_0 \in B_{G,X}(n)$ and so $g_0 \langle k_g \rangle \cap B_{G,X}(n) \subseteq g_0 (\langle k_g \rangle \cap B_{G,X}(2n))$. Thus $$|g_0\langle k_g \rangle \cap B_{G,X}(n)| \leq \frac{4n}{\zeta}+1.$$ Since the index of $\langle k_g \rangle$ in $C_G(g)$ is $\leq 2m_0$, this implies that $$|C_G(g) \cap B_{G,X}(n)| \leq 2m_0 \left( \frac{4n}{\zeta}+1 \right),$$ which gives a linear (and so subexponential) bound independent of $g$. Since by assumption $G$ is not virtually cyclic, the growth rate of $G$ is $\mu(G,X) > 1$, and so the proof is complete. 
\end{proof} \bibliographystyle{amsplain}
https://arxiv.org/abs/2207.07070
Boundary Effects on Ideal Fluid Forces and Kelvin's Minimum Energy Theorem
The electrostatic force on a charge above a neutral conductor is generally attractive. Surprisingly, that force becomes repulsive in certain geometries (Levin & Johnson 2011), a result that follows from an energy theorem in electrostatics. Based on the analogous minimum energy theorem of Kelvin (1849), valid in the theory of ideal fluids, we show corresponding effects on steady and unsteady fluid forces in the presence of boundaries. Two main results are presented regarding the unsteady force. First, the added mass is proven to always increase in the presence of boundaries. Second, in a model of a body approaching a boundary, where the unsteady force is typically repulsive (Lamb 1975, §137), we present a geometry where the force can be attractive. As for the steady force, there is one main result: in a model of a Bernoulli suction gripper, for which the steady force is typically attractive, we show that force becomes repulsive in some geometries. Both the unsteady and steady forces are shown to reverse sign when boundaries approximate flow streamlines, at energy minima predicted by Kelvin's theorem.
\section{Introduction} It is generally stated that the electrostatic force on a charge above a conductor is attractive \citep[pp. 99]{Griffiths}. However, \citet[]{levin2011} showed that the force may be repulsive in certain geometries. Repulsion occurs when the electric field energy depends non-monotonically on the charge-conductor separation distance. Non-monotonicity occurs when conductor geometries resemble natural contours of electrical potential, as follows from the energy theorem of Thomson (Lord Kelvin) \citep[pp. 53]{jackson_classical}. An analogous energy theorem due to \citet[]{kelvin1849} holds in an ideal fluid. Whereas repulsion occurs in electrostatics when conductors resemble equipotential lines, we show surprising effects on fluid forces when boundaries resemble streamlines. In this paper, we analyse ideal fluid forces in such geometries. Ideal fluid forces can be decomposed into an unsteady and a steady component \citep[pp. 17]{sedov}, both of which we will analyse. The unsteady force on a submerged body is relevant during transient motions. It is often characterized by an effective added mass that limits large accelerations \citep[]{Newman2018, mckee2019} and alters the natural vibration frequencies of submerged structures \citep{valentin2014,Newman2018}. Moreover, unsteady forces play a critical role in swimming mechanisms \citep[]{Saffman1967,Childress1981,Weymouth2013,Steele2017} of cephalopods and biomimetic robots \citep[]{Serchi_ScienceRobot}. The steady force is relevant even when there is no acceleration, and vanishes (along the direction of motion) in an infinite fluid according to d'Alembert's paradox \citep[404-405]{Batchelor2000}. The steady force is generally non-zero in higher connectivities; for example, there is a non-zero steady force between two interacting circular cylinders \citep[]{Wang2004,Tchieu2010}. 
Notably, the steady force is exploited in non-contact Bernoulli grippers for industrial applications \citep[]{giesen2013,davis2008}. Specifically, a fluid source creates a region of high velocity and low pressure, resulting in an attractive lift force that can be used to manipulate objects (see \cite{vidref} for a video). In the presence of boundaries, ideal fluid forces have been studied in various contexts. \citet[pp. 215]{Basset1888a} gave the exact added mass of a cylinder in a concentric confining cylinder. \citet[\S 93]{Lamb1975} solved the spherical version of that problem, which was experimentally verified by \cite{ackermann1964}. Confinement effects were studied further in the context of nuclear reactor core oscillations at the Argonne National Laboratory \citep[]{Argon,Chung1984,Wambsganss1974}. \cite{brennen1982} then provided an extensive review of added mass and analysed boundary effects in various geometries. Later, \cite{Wang2004} presented analytical formulae for the fluid forces in the two-cylinder problem. More recently, \cite{Tchieu2010} formulated the two-body problem using a conformal map approach and revisited the two-cylinder problem. In all these studies, boundaries were found to increase the added mass, relative to the boundary-free problem. In light of the surprising electrostatic forces found by \cite{levin2011}, one might expect a boundary geometry in which the added mass of a body can actually be decreased. If this were possible, strategic boundary placement could give large accelerations during transient motions. We prove that no such boundary geometry exists, a result that follows from the minimum energy theorem of \cite{kelvin1849}. Specifically, the repulsive forces found by \citet[]{levin2011} stem from a non-monotonic dependence of the electric field energy on the conductor-charge distance. 
Therefore, we pose the question: \emph{Does the ideal fluid kinetic energy always depend monotonically on the separation distance to a boundary?} We answer this question in the negative and analyse corresponding effects on ideal fluid forces. The remainder of this paper is arranged as follows. In \S \ref{kelvenerg}, we state Kelvin's energy theorem and generalize to the case where source singularities exist in the fluid, which is relevant for the model to be developed in \S \ref{steady}. In \S \ref{nonmon}, we demonstrate that the fluid kinetic energy can be non-monotonic in the boundary separation distance, if boundaries approximate streamlines. In \S \ref{addmass}, we show that boundaries cannot decrease the added mass. In \S \ref{nonmomunsteady}, we analyse the unsteady force by revisiting a calculation of \citet[\S 137]{Lamb1975}, but with a streamline-approximating boundary. In \S \ref{steady}, we analyse the steady force through two models of a Bernoulli gripper, which we solve exactly using the framework of \citet[]{crowdy2020solving}. In the respective models, the unsteady and steady forces reverse sign near energy minima predicted by Kelvin's theorem, when boundaries approximate streamlines. \section{Ideal Fluid Energy and Kelvin's Theorem}\label{energy} \subsection{Kelvin's Minimum Energy Theorem}\label{kelvenerg} We begin by stating the minimum energy theorem of \cite{kelvin1849}, which will be generalized thereafter. The statement is as follows. \begin{theorem}\label{thm1} Consider an ideal fluid domain $D\subseteq \mathbb{R}^2$, with velocity field $\boldsymbol{v}(\boldsymbol{x})=\bnabla \phi(\boldsymbol{x})$ satisfying $\nabla^2\phi=0$ with no-flux boundary conditions on $\partial D$. Further assume that the volume of each boundary does not change in time and the velocity vanishes at infinity. 
Then any other incompressible flow $\boldsymbol{q}(\boldsymbol{x})$, satisfying the boundary conditions on $\partial D$, will possess kinetic energy greater than or equal to that of $\boldsymbol{v}$, \begin{equation}\label{eq:theoremkelvin} \frac{1}{2} \rho \int_{D}||\boldsymbol{v}||^2 dV \leq \frac{1}{2} \rho \int_{D}||\boldsymbol{q}||^2 dV, \end{equation} when $||\boldsymbol{v}||^2$ contains no non-integrable singularities. \end{theorem} When there exist source singularities in $D$, the integrals in (\ref{eq:theoremkelvin}) are not well-defined. We now show that the result still holds if the flow possesses a source/sink singularity. This generalization ensures the energy theorem is still valid for our model in \S \ref{steady}. \begin{theorem}\label{thm2} Consider the same ideal flow as in theorem \ref{thm1}, but in which there exists a source singularity in the fluid domain at $\boldsymbol{x}_0 \in D$; that is, for $\boldsymbol{x} \rightarrow \boldsymbol{x}_0$, \begin{equation}\label{eq:asympt} \boldsymbol{v}(\boldsymbol{x}) \sim \frac{m}{2\pi ||\boldsymbol{x}-\boldsymbol{x}_0||^2}\left(\boldsymbol{x}-\boldsymbol{x}_0\right). \end{equation} Consider any incompressible flow $\boldsymbol{q}$ satisfying the stated boundary conditions on $\partial D$ and with the same asymptotic character as $\boldsymbol{x}\rightarrow \boldsymbol{x}_0$ as in (\ref{eq:asympt}), but which is otherwise finite in $D$. Then $\boldsymbol{q}$ will possess kinetic energy greater than or equal to that of $\boldsymbol{v}$: \begin{equation}\label{eq:theoremkelvinsource} \frac{1}{2} \rho \int_{D}\left(||\boldsymbol{q}||^2-||\boldsymbol{v}||^2\right) dV \geq 0. \end{equation} \end{theorem} \begin{proof} We prove the second theorem while the first, a standard result, follows when $m=0$. 
Since $\boldsymbol{q}$ has the same asymptotic character as $\boldsymbol{v}$ for $\boldsymbol{x}\rightarrow \boldsymbol{x}_0$, and is otherwise finite, it can be decomposed as $\boldsymbol{q}=\boldsymbol{v}+\boldsymbol{w}$, where $\boldsymbol{w}$ is finite in $D$. The integrand in (\ref{eq:theoremkelvinsource}) then becomes $(||\boldsymbol{q}||^2-||\boldsymbol{v}||^2)=2\bnabla \phi \cdot \boldsymbol{w}+||\boldsymbol{w}||^2$. Near $\boldsymbol{x}_0$, $\bnabla \phi$ is of order $\mathcal{O}(1/r)$, which is integrable over the volume element, $dV=rdrd\theta$, while $||\boldsymbol{w}||$ is finite. Therefore, the integrand is integrable and the product rule and divergence theorem apply, yielding \begin{equation}\label{eq:theoremkelvinsource2} \frac{1}{2} \rho \int_{D}\left(||\boldsymbol{q}||^2-||\boldsymbol{v}||^2\right) dV = \rho\left(\int_{\partial D}\phi \boldsymbol{w\cdot n}dA-\int_{D} \phi \bnabla \cdot \boldsymbol{w} dV\right)+\frac{1}{2} \rho \int_{D} ||\boldsymbol{w}||^2dV. \end{equation} The first two terms vanish by the no-flux and incompressibility conditions on $\boldsymbol{q}$, and the last term is non-negative. \end{proof} We proceed by showing that an added stationary boundary never decreases the energy. \begin{corollary}\label{corol} Suppose a stationary impenetrable body $C$, with boundary $\partial C\subset D$, is added to the previous flow with source singularities in $D \backslash C$. Then a potential flow solution exists with velocity $\bnabla \phi_C$ defined over $D \backslash C$, where $\nabla^2 \phi_C=0$. Furthermore, the kinetic energy of that flow is greater than or equal to that of the boundary-less potential flow solution, \begin{equation}\label{eq:coroll} \frac{1}{2} \rho\left( \int_{D\backslash C}\left(||\bnabla \phi_C||^2 -||\boldsymbol{v}||^2\right)dV -\int_{C} ||\boldsymbol{v}||^2dV\right)\geq 0. 
\end{equation} \end{corollary} \begin{proof} The ideal flow, in the region $D \backslash C$, has a solution by standard existence theorems \citep[]{thomson1848theorems}. That solution has no flux into the region $C$ and a perfectly valid choice of an incompressible flow $\boldsymbol{q}$, as defined in theorem \ref{thm2}, is \begin{equation}\label{eq:corollareq} \boldsymbol{q}(\boldsymbol{x}) = \left\{ \begin{array}{ll} \bnabla \phi_C & \boldsymbol{x}\in D\backslash C \\ \boldsymbol{0} & \boldsymbol{x}\in C. \end{array} \right. \end{equation} This flow has precisely the kinetic energy of the potential flow solution in the presence of the boundary. However, by theorem \ref{thm2}, the flow possesses at least as much energy as the boundary-less flow. Note that $\partial C$ has zero content and the decomposition in (\ref{eq:coroll}) ensures integrability; $\boldsymbol{v}$ is defined over $D$ and $\bnabla \phi_C$ over $D\backslash C$. \end{proof} \subsection{Non-Monotonic Kinetic Energy}\label{nonmon} We now demonstrate that the fluid kinetic energy need not vary monotonically with the distance between a fluid system and a newly introduced boundary $\partial C$, a result that follows from corollary \ref{corol}. We define a fluid system by a collection of boundary conditions on $\partial D$ as in theorem \ref{thm1}. A stationary boundary is introduced mathematically as an additional no-flux boundary $\partial C$, as in corollary \ref{corol}. As usual, slip is allowed in an ideal fluid. The boundary-separation distance is defined as the distance between a chosen point in $\partial C$ and one in $\partial D$. We now demonstrate by example, referring to figure \ref{fig:unsteadygeom}, as follows. Consider $D$ to be the exterior to a unit circle that translates vertically at unit velocity. We choose a boundary $\partial C$ comprising a pair of infinitely thin arcs resembling streamlines, which are drawn in three distance configurations, labelled $A,B,C$. 
The boundary-separation distance, labelled $x$, is taken as the vertical distance from the centre of the circle to the bottom of the boundary. We call the kinetic energy of the system $T(x)$. When $x\rightarrow \infty$, the boundary-less energy is recovered. At other distances, $T(x)\geq T(\infty)$ by corollary \ref{corol}. However, when $x=x_B$, the boundary does not perturb the cylinder flow, as it coincides precisely with streamlines, and $T(x_B)=T(\infty)$. Therefore, $T(x_B)$ corresponds to a local minimum of the kinetic energy as a function of the boundary-separation distance. Hence, the kinetic energy dependence need not be monotonic. Although this discussion considered infinitely thin boundaries, we show in \S \ref{steady} that the corresponding effects on fluid forces persist for finite thicknesses. \section{The Unsteady Force}\label{unsteady} In \S\ref{addmass}, we show that the added mass is always greater than or equal to that in the boundary-less problem. An alternative derivation is given in appendix \ref{altproof} using a conformal mapping approach that does not appeal to Kelvin's energy theorem. In \S\ref{nonmomunsteady}, we show that the unsteady force found by \citet[\S 137]{Lamb1975} reverses sign when the boundary approximates a streamline. Herein, we ignore intrinsic body mass. \subsection{Added Mass Cannot Decrease}\label{addmass} Consider a single translating body in an unbounded ideal fluid. The Buckingham--Pi theorem \citep[]{buckingham1914} implies the total fluid kinetic energy can only be written as \begin{equation} T_f=\frac{C_a}{2}\rho V U^2, \label{eq:kinetic} \end{equation} where $U$ is the body velocity, $C_a$ is a dimensionless constant called the added mass coefficient, and $\rho$ and $V$ are the fluid density and body volume, respectively. Considering acceleration at a constant rate $a$, (\ref{eq:kinetic}) gives the necessary work to move a distance $\delta x$, $\delta W=C_a \rho V a \delta x$. 
The resulting force is therefore $F=-\delta W/\delta x=-C_a \rho V a$; the reaction is equivalent to a mass augmentation of amount $C_a \rho V$. The corollary (\ref{eq:coroll}) showed that boundaries may not decrease the total kinetic energy $T_f$. Therefore, when the presence of a boundary is modelled as an effective change in the added mass, increasing $T_f$ implies an increased $C_a$ in (\ref{eq:kinetic}). Hence, boundaries cannot be used to decrease the added mass and attain higher accelerations. The connection between boundary-induced added mass increase and Kelvin's theorem was mentioned by \citet[\S 93]{Lamb1975}. Although boundaries cannot reduce the added mass, geometries such as that described in \S \ref{nonmon} can lead to interesting fluid force effects, to be analysed in the rest of this paper. \begin{figure} \centerline{\includegraphics[height=5.5cm]{Figures/thicklines-01.png}} \caption{A cylinder of unit radius and unit speed is pictured with its boundary-less streamlines. Configurations of a streamline-approximating boundary are superimposed and labelled $A,B,C$.} \label{fig:unsteadygeom} \end{figure} \subsection{Effect of Non-monotonic Energy on Unsteady Force}\label{nonmomunsteady} When a body translates toward a boundary, according to the added mass formulation, the energy can be expressed as in (\ref{eq:kinetic}) with $C_a=C_a(x)$, where $x$ is the separation distance to the boundary. For 1D free motion toward the boundary, the Euler-Lagrange equation is \begin{equation}\label{eq:EOM} 2\frac{\partial C_a(x)}{\partial x}\dot{x}^2+2C_a(x)\ddot{x}-\frac{\partial C_a(x)}{\partial x}\dot{x}^2=\frac{1}{\dot{x}}\frac{d}{dt}\left( C_a(x)\dot{x}^2 \right)=0, \end{equation} or $C_a(x)\dot{x}^2=\mathrm{Const}$ since $\dot{x}\neq 0$. \citet[\S136]{Lamb1975} justified the Lagrangian formulation, and the validity of (\ref{eq:kinetic}), in this context. 
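To see the conservation law $C_a(x)\dot{x}^2=\mathrm{const}$ at work, here is a minimal Python sketch using a purely hypothetical added-mass profile with a local minimum at $x=1$ (a toy stand-in for a streamline-shaped boundary; it is not computed from any geometry in this paper). Along a free trajectory the speed is $\dot{x}=C_a(x)^{-1/2}$, so the speed peaks, and the fluid force on the body changes sign, exactly at the minimum of $C_a$:

```python
import numpy as np

# Hypothetical added-mass profile with a local minimum at x = 1 (toy model,
# not computed from any geometry in the paper).
def C_a(x):
    return 1.0 + 0.5 * (x - 1.0) ** 2 * np.exp(-x)

# Free 1D motion conserves C_a(x) * xdot^2 (the equation of motion above,
# with the constant set to 1), so the speed along the trajectory is C_a**(-1/2).
x = np.linspace(3.0, 0.2, 400)   # body approaching the boundary
speed = C_a(x) ** -0.5

# The speed peaks, and the force on the body reverses sign, at the C_a minimum.
print(round(x[np.argmax(speed)], 2))   # -> 1.0
```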
In analysing the motion of a sphere toward a plane wall, \citet[\S 137]{Lamb1975} deduced from (\ref{eq:EOM}) a repulsive force since $\partial C_a/\partial x$ was always negative there. However, this need not always be the case as discussed in \S \ref{nonmon}: if the boundary coincides, at some finite $x$, with the streamlines of a traveling body, then there is a local minimum of $C_a(x)$. The equation of motion (\ref{eq:EOM}) implies, setting the constant to 1, that $\partial \dot{x}/\partial x=-C_a^{-\frac{3}{2}}(\partial C_a/\partial x)/2$, which switches sign at the local minimum. Consider the geometry of figure \ref{fig:unsteadygeom}. A circle translates vertically and boundary-less streamlines, which are a function of velocity, are drawn for reference. The circle approaches a boundary which approximates flow streamlines. If, as the circle evolves according to (\ref{eq:EOM}), at some moment the free streamlines coincide with the boundary (as pictured for distance $x_B$), then the force will change sign as the circle passes over $x_B$. In a different scenario where the circle is pushed at constant velocity, the necessary external force to maintain the velocity reverses sign at $x_B$. \section{The Steady Force}\label{steady} Here, we analyse the effect of energy non-monotonicity, as described in \S\ref{nonmon}, on the steady fluid force. We consider a source flow with streamline-approximating boundaries, as illustrated in figure \ref{fig:geometry}. In the simply connected problem (figure \ref{fig:geometry}A), attractive Bernoulli suction is achieved when the source is directly above the plate, giving a model of a Bernoulli suction gripper \citep[]{davis2008,giesen2013}. However, we show that the force is reversed near the point where boundaries lie directly on streamlines. Both geometries (figure \ref{fig:geometry}) are solved exactly using the theoretical framework of \citet[]{crowdy2020solving}. 
\subsection{Single Slit and Source}\label{singlebody} Consider an ellipse (which may degenerate to a slit) centered at the origin in the complex plane, with some orientation $\alpha$, as shown in figure \ref{fig:geometry}A. When $x=0$, a source is located directly above the plate and Bernoulli suction attracts the slit to the source, as shown in figure \ref{fig:plotsbernoulli}A. However, when $x \neq 0$, there is a value of $y=x\tan{\alpha}$ where the slit falls on a natural source streamline, corresponding to an energy minimum, as follows from theorem \ref{thm2}. We expect unique steady force behaviour near this point for slender ellipses. \begin{figure} \centerline{\includegraphics[height=5.5cm]{Figures/sourcecyl_combined.png}} \caption{\textbf{A)} Geometry of source above boundary. A source of strength $m$ is located at $x+\mathrm{i}y$ and a stationary ellipse is at the origin. \textbf{B)} Geometry of source above doubly-connected boundary.} \label{fig:geometry} \end{figure} We employ a conformal mapping approach. The simpler problem of a source exterior to a unit disk is solved and mapped to the problem of interest (figure \ref{fig:geometry}A) by a conformal map. We denote the physical domain coordinates by $z$ and the pre-image coordinates by $\zeta$. The complex potential for a source of strength $m=2\pi$ at location $\zeta_s$, where $|\zeta_s|>1$, is $\log{\left(\zeta-\zeta_s\right)}$. The complex potential exterior to a stationary unit disc is then given by the Milne-Thomson theorem \citep[\S 6.21]{milne1962}, \begin{equation}\label{eq:thepotential} W=\log{\left(\zeta-\zeta_s\right)}+\log{\left((1-\overline{\zeta_s}\zeta)/\zeta\right)}. 
\end{equation} The unit disk exterior maps to the tilted ellipse exterior by a Joukowsky-type map, \begin{equation}\label{eq:mapping} z(\zeta)=\left(\zeta + \frac{s}{\zeta}\right)e^{\mathrm{i}\alpha}, \end{equation} where $s\in[0,1]$ for univalency \citep[]{Smith2008}; $s$ defines a homotopy between the circle ($s=0$) and the slit of length $D=2$ ($s=1$). Intermediate shapes are ellipses. To achieve the configuration in figure \ref{fig:geometry}A, $\alpha$ in (\ref{eq:mapping}) corresponds directly to $\alpha$ in the figure. However, the source pre-image, $\zeta_s$, must be chosen to ensure that $z(\zeta_s)\equiv z_s=x+\mathrm{i}y$. After some algebra, one finds the pre-image location, $\zeta_s(z_s)=(z_s e^{-\mathrm{i}\alpha}+\sqrt{\left(z_s e^{-\mathrm{i}\alpha}\right)^2-4s})/2$. In what follows, we analyse the vertical component of force as $y$ is varied for a given $x$. The force is computed in the $\zeta$-plane since the complex potential is known there according to (\ref{eq:thepotential}). The steady force is given by the expression of \citet[equation 18]{Tchieu2010}, \begin{equation}\label{eq:forceexpress} F=\overline{\frac{\mathrm{i}}{2}\oint_{|\zeta|=1}{\left(\frac{dW}{d\zeta}\right)^2\left(\frac{dz}{d\zeta}\right)^{-1}d\zeta}}, \end{equation} after setting the fluid density to be equal to 1. The vertical component is $F_s\equiv \Imag\left\{ F \right\}$. Using (\ref{eq:thepotential}) and (\ref{eq:mapping}), the residue theorem yields the vertical force on the ellipse, \begin{equation} F_s=\Imag\left\{-\frac{2\pi\zeta_s^2(\zeta_s-s\overline{\zeta_s})}{(\zeta_s^2-s)^2(|\zeta_s|^2-1)}e^{-\mathrm{i}\alpha}\right\}, \end{equation} which is plotted in figure \ref{fig:plotsbernoulli}A for the case of $x=0$ and $s=1$. The usual Bernoulli suction effect is captured there; a low pressure on the top of the plate leads to a net vertical (attractive) force. The suction persists regardless of the angle $\alpha<\pi/2$ when $x=0$. 
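The contour integral for the force can also be evaluated by direct trapezoidal quadrature on $|\zeta|=1$, which is spectrally accurate for $s<1$ (for $s=1$ the slit edges place poles on the contour). The Python sketch below does this; as a sanity check, for the circle ($s=0$, $\alpha=0$) with the source at $z_s=2\mathrm{i}$, a short residue computation of our own (not taken from the text) gives $F_s=\pi/3$ exactly. Note that the principal square-root branch happens to give $|\zeta_s|>1$ for these parameters; in general the branch must be chosen so that the pre-image lies outside the unit disk.

```python
import numpy as np

def vertical_force(z_source, s, alpha, N=4000):
    # Pre-image of the source under z = (zeta + s/zeta) * exp(i*alpha); the
    # principal square-root branch gives |zeta_s| > 1 for the parameters below.
    w = z_source * np.exp(-1j * alpha)
    zeta_s = (w + np.sqrt(w ** 2 - 4 * s)) / 2
    t = 2 * np.pi * np.arange(N) / N
    zeta = np.exp(1j * t)
    # dW/dzeta for a source of strength m = 2*pi plus its Milne-Thomson image system
    dW = 1 / (zeta - zeta_s) - np.conj(zeta_s) / (1 - np.conj(zeta_s) * zeta) - 1 / zeta
    dz = (1 - s / zeta ** 2) * np.exp(1j * alpha)
    # trapezoidal rule for the contour integral, with fluid density rho = 1
    # (keep s < 1 here: at s = 1 the map's critical points lie on the contour)
    integral = (2 * np.pi / N) * np.sum(dW ** 2 / dz * 1j * zeta)
    return np.conj(0.5j * integral).imag

# Circle (s = 0) below a source at 2i: residue calculus gives F_s = pi/3 (attraction).
print(vertical_force(2j, 0.0, 0.0))               # approx 1.0472
# A tilted ellipse directly below the source (x = 0) is still attracted:
print(vertical_force(2j, 0.9, np.pi / 4) > 0.0)   # True
```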
Note that taking $s<1$ does not qualitatively affect the shape of that curve. We now examine the case where the slit may fall on a streamline of the source, $x\neq0$. Figure \ref{fig:plotsbernoulli}B plots the case of $x=2$ and $\alpha=\pi/4$ for various slenderness parameters, $s$. When $s=1$, the slit coincides with a natural (radial) streamline of the source at $y/D=1$. As per the discussion of \S \ref{nonmon}, this corresponds to a local energy minimum. The plot in figure \ref{fig:plotsbernoulli}B shows the force changes sign at precisely this point for a perfect slit, $s=1$. There is a local region of repulsion for $y/D>1$. For large $y/D$, the force becomes attractive again. Meanwhile, the onset of force sign-reversal is shifted for an ellipse of finite slenderness ($s<1$) until the effect vanishes for $s=0.79$, where there is no repulsive region. For large $y/D$, all plots are positive, indicating attraction. \begin{figure} \centerline{\includegraphics[height=5.5cm]{Figures/scaled_Forcesalph-01.png}} \caption{\textbf{A)} Plot of the vertical steady force on the slit in the geometry shown in figure \ref{fig:geometry}A when $x=0$, $s=1$, for various $\alpha$. The force is positive for $y>0$, indicating attraction as in the standard Bernoulli gripper. \textbf{B)} Same plot when $\alpha=\pi/4$, $x=2$, for various slenderness parameters $s$. The slit approximates a streamline when $y/D=1$, leading to an energy minimum and local sign-reversal of the force only for slender ellipses ($s\approx 1$).} \label{fig:plotsbernoulli} \end{figure} \subsection{Two Slits and Source}\label{twobody} The vertical steady force is computed by a similar procedure in the two-body geometry shown in figure \ref{fig:geometry}B. The canonical domain for the doubly connected problem is the annulus, where the problem can be solved simply and then mapped to the physical domain by a conformal map (see figures \ref{fig:confmaps}A and \ref{fig:confmaps}D). 
The conformal map from the annulus, $|\zeta|\in [\rho,1]$, to the two-slit geometry shown in figure \ref{fig:geometry}B is \begin{equation} z(\zeta)=-\mathrm{i}Ae^{3\mathrm{i}\alpha}\frac{\omega(\zeta,\sqrt{\rho}e^{2\mathrm{i}\alpha})\omega(\zeta,\sqrt{\rho}^{-1}e^{2\mathrm{i}\alpha})}{\omega(\zeta,\sqrt{\rho})\omega(\zeta,\sqrt{\rho}^{-1})}, \end{equation} where $\omega$ is the Schottky--Klein prime function defined by \citet[pp. 64,75]{crowdy2020solving}; a similar map was used by \cite{crowdy2009spreading} to analyse the Weis-Fogh mechanism. Note that we consider figure \ref{fig:geometry}B with zero-thickness slits. To accommodate a source, a sink of equal strength is included in the annulus for mass conservation, with the sink located at the pre-image of infinity, $\zeta_{\infty}=\sqrt{\rho}$. The complex potential in the $\zeta$-plane is \citep[]{sourcesink} \begin{equation} W(\zeta)=\frac{m}{2\pi}\log{\left(\frac{\omega(\zeta,\zeta_s)\omega(\zeta,\overline{\zeta_s}^{-1})}{\omega(\zeta,\zeta_{\infty})\omega(\zeta,\overline{\zeta_{\infty}}^{-1})}\right)}, \end{equation} up to a constant that does not enter our calculation. The source location in the annulus, $\zeta_s$, must be chosen such that its image lies on the imaginary axis as in figure \ref{fig:geometry}B. Conveniently, we notice the circle $|\zeta|=\sqrt{\rho}$ maps to the imaginary axis. We proceed by resolving the vertical component of force on the boundary. By symmetry, we can simply double the force on one slit; thus, the total force is twice that given in (\ref{eq:forceexpress}). The derivatives $dW/d\zeta$ and $dz/d\zeta$, and hence the integrand of (\ref{eq:forceexpress}), can be written analytically in terms of the functions $K(\zeta, \alpha)=\zeta\partial \log\omega(\zeta,\alpha)/\partial \zeta$. The integrand may then be explicitly evaluated using the series representations given by \citet[pp. 278,280]{crowdy2020solving}, and easily integrated numerically. 
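For readers wishing to reproduce such evaluations, the annulus prime function is easy to compute from its rapidly convergent infinite product. A minimal Python sketch of the standard building block $P(\zeta,\rho)$, to which $\omega(\zeta,\alpha)$ is proportional (with argument $\zeta/\alpha$) in one common normalisation, together with a numerical check of its functional equation $P(\rho^2\zeta)=-P(\zeta)/\zeta$:

```python
import numpy as np

def P(zeta, rho, terms=60):
    # Truncated infinite product P(zeta, rho) = (1 - zeta) *
    # prod_{k>=1} (1 - rho^{2k} zeta)(1 - rho^{2k}/zeta);
    # the factors decay like rho^{2k}, so the truncation error is negligible.
    out = 1.0 - zeta
    for k in range(1, terms + 1):
        q = rho ** (2 * k)
        out *= (1 - q * zeta) * (1 - q / zeta)
    return out

rho = 0.4
zeta = 0.7 * np.exp(1j * 1.1)
# Functional equation relating the two boundary circles of the annulus:
print(np.isclose(P(rho ** 2 * zeta, rho), -P(zeta, rho) / zeta))  # True
```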
The result is plotted in figure \ref{fig:plots2body} for the case of two slits near-parallel to the real axis. For large positive $y$, the force on the plate is positive, indicating attraction. For large negative $y$, the negative force indicates attraction. However, a small region of repulsion is encountered near the slit, similar to the prediction of the simply-connected model in \S\ref{singlebody}. The force is an odd function when the slits are parallel to the real axis, $\alpha=\pi/2$. Interestingly, the symmetry is quickly disrupted for deviations from $\alpha=\pi/2$; repulsion for $y>0$ is retained but lost for $y<0$. \begin{figure} \centerline{\includegraphics[height=5.5cm]{Figures/2slit_forcealpha-01.png}} \caption{Vertical force on the slits shown in figure \ref{fig:geometry}B, near $\alpha=\pi/2$ (slits on real axis). For large $|y/D|$ there is attraction as in the usual Bernoulli gripper. When $y=0$, slits lie on streamlines and there is local repulsion. For tilted slits, repulsion is retained only for $y/D > 0$.} \label{fig:plots2body} \end{figure} \section{Conclusion} It follows from the minimum energy theorem of \citet[]{kelvin1849} that boundaries cannot reduce the kinetic energy and hence the added mass of a translating body. However, boundaries that resemble streamlines can create local energy minima as a function of the boundary-separation distance, resulting in non-intuitive effects on fluid forces. In a system where \citet[\S 137]{Lamb1975} found a repulsive unsteady force, we show a transition to attraction when boundaries approximate streamlines. In two models of a Bernoulli gripper, where the steady force is typically attractive, we show that the steady force reverses sign when the gripper approximates a streamline. Effects are most prominent in slit-type boundaries, but are shown in \S \ref{singlebody} to persist for finite thicknesses. 
The geometry-dependent sign-reversal of the steady force in the Bernoulli gripper might be useful in future engineering design. Furthermore, similar effects may be relevant in electromagnetic power flows in 2D near-zero-index media, which were recently shown to be mathematically equivalent to ideal fluid flows \citep{liberal2020near}.

\paragraph{\textbf{Acknowledgements.}} The author is grateful for valuable discussions with Peter Baddoo, Bavand Keshavarz, Valeri Frumkin, Ousmane Kodio, and John Bush.

\paragraph{\textbf{Funding.}} The author was funded by an MIT Presidential Fellowship during this work.

\paragraph{\textbf{Declaration of Interests.}} The author reports no conflict of interest.
https://arxiv.org/abs/1904.10215
Multicast Communications in Tree Networks with Heterogeneous Capacity Constraints
A widely studied problem in communication networks is that of finding the maximum number of communication requests that can be scheduled concurrently, subject to node and/or link capacity constraints. In this paper, we consider the problem of finding the largest number of multicast communication requests that can be serviced simultaneously by a network of tree topology, subject to heterogeneous capacity constraints. This problem generalizes the following two problems studied in the literature: a) the problem of finding a largest induced $k$-colorable subgraph of a chordal graph, b) the maximum multi-commodity flow problem in tree networks. The problem is already known to be NP-hard and to admit a $c$-approximation ($c \approx 1.58$) in the case of homogeneous capacity constraints. We first show that the problem is much harder to approximate in the heterogeneous case. We then use a generalization of a classical algorithm to obtain an $M$-approximation where $M$ is the maximum number of leaves of the subtrees representing the multicast communications. Surprisingly, the same algorithm, though in various disguises, is used in the literature at least four times to solve related problems (though the analysis is different). The special case of the problem where instances are restricted to unicast communications in a star topology network is known to be polynomial-time solvable. We extend this result and show that the problem can be solved in polynomial time for a set of paths in a tree that share a common vertex.
\section{Simulation Results} \begin{figure*}[ht] \centering \includegraphics[width=0.8\textwidth]{Figures/Random.jpg} \caption{Approximation ratio of the greedy algorithm on random $\vpt$ graphs. The histogram is for 30,000 instances on random trees of 50 to 150 nodes, with the number of paths ranging from twice to four times the size of the tree.} \label{fig:Random} \end{figure*} \begin{figure*}[ht] \begin{subfigure}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{Figures/FrontEndCache.jpg} \label{fig:FrontEndCache} \end{subfigure} \begin{subfigure}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{Figures/FrontEndWeb.jpg} \label{fig:FrontEndWeb} \end{subfigure} \begin{subfigure}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{Figures/Hadoop.jpg} \label{fig:Hadoop} \end{subfigure} \caption{Approximation ratio of the greedy algorithm for 180 instances on data center traffic according to the distributions in \cite{Roy2015Traffic}. The individual histograms show the ratio for different traffic distributions resulting from three different applications.} \label{fig:RealData} \end{figure*} \subsection{Motivation and Background} \runningtitle{Motivation} Consider a communication graph $G$, in which the vertices represent telephone stations (trunks) or internet computers and the edges represent connection lines. Each station and connection line has a given capacity to handle transmissions. Given a set of transmission requests as paths between vertices, the problem is to find the maximum set of transmissions (paths in $G$) that can be handled at any moment. Thus, we must find a maximum set of paths that do not overload the capacity of the vertices or edges. This scenario occurs in numerous applications in communication networks, and it can also be interpreted in the context of a production line, where vertices are associated with machines, and capacity bounds with the number of units that can be produced by a machine during one time unit. 
In these applications, the aim is to maximize the number of production lines that can be active simultaneously. Such algorithms can be used in successive steps, so as to schedule all of the given requests. Stated in graph-theoretical terms, this is a special case of the fundamental class of graph optimization problems, in which the objective is to find a maximum number of given subgraphs under a given set of constraints. Specifically, when the communication graph $G$ is a tree and the given subgraphs are subtrees of $G$, the problem generalizes the maximum $k$-colorable set problem in chordal graphs. In that problem, one looks for the maximum number of vertices of a given graph that can be colored with $k$ colors (such that no two vertices of the same color are adjacent). \runningtitle{Graph classes and subtree representations} In this paper, we mention the following classes of graphs that can be represented as intersections of subtrees of a tree. \begin{itemize} \item {\em Chordal graphs}: a graph is \emph{chordal} if it does not contain induced cycles of length four or more. In other words, in a chordal graph, every cycle with more than three vertices contains a chord (i.e., an edge that joins two non-adjacent vertices of the cycle). It is known that a graph is chordal if and only if it is the vertex-intersection graph of subtrees of a tree~\cite{Gavril74}. It is also known that a graph is chordal if and only if it has a perfect elimination order of its vertices. Such an order corresponds to a traversal of the representation tree in a bottom-up manner, each time considering the subtrees that have the current vertex as their root. It is also known that the chromatic number of a chordal graph (the graph being perfect) equals its clique number, which in turn equals the maximum number of subtrees in the representation that share a common vertex. We refer the reader to \cite{Golumbic:2004:AGT:984029} for more details on chordal graphs. 
\item {\em {\vpt} graphs}: Such a graph is the vertex-intersection graph of paths in a tree. In other words, a {\vpt} graph is a chordal graph that has a representation in which every subtree is a path. \item {\em Directed path graphs}: Such a graph is the vertex-intersection graph of (directed) paths in a rooted tree. We emphasize that a directed path graph is an undirected graph. In fact, it is easy to see that it is a {\vpt} graph. \item {\em Interval graphs}: an \emph{interval graph} is the intersection graph of intervals on the real line. Clearly, an interval graph is a directed path graph. \end{itemize} \subsection{Related Work and Our Contribution} \runningtitle{Homogeneous capacities} The maximum independent set problem on an interval graph is equivalent to the problem of finding a maximum cardinality subset of a given set of intervals such that no pair of them intersect. Such a set of intervals can be found using the Earliest Deadline First (EDF) algorithm \cite{Stankovic1998}. This algorithm processes the intervals greedily in the right-endpoint order, and an interval is added to the solution whenever it does not intersect an interval already in the solution, or in other words, whenever its addition to the solution would preserve feasibility. An extension of this problem is the maximum independent set problem in chordal graphs. A polynomial-time greedy algorithm for this problem is presented by Gavril \cite{Gavril72AlgorithmsForChordalGraphs}. In fact, this algorithm is a generalization of the EDF algorithm: vertices are processed in a perfect elimination order, and a vertex is added to the solution as long as it preserves feasibility. Recall that a perfect elimination order corresponds to a traversal of the tree in a bottom-up manner. Clearly, when the tree is a path, this order becomes the right-endpoint order. 
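The EDF greedy just described can be sketched compactly. The following is our own minimal illustration (the interval representation and the function name are assumptions for exposition, not taken from \cite{Stankovic1998}):

```python
def edf_max_disjoint(intervals):
    """Earliest Deadline First: scan closed intervals [l, r] in order of
    right endpoint and keep each interval that is disjoint from every
    interval already kept (i.e., whose addition preserves feasibility)."""
    chosen = []
    last_end = float('-inf')
    for l, r in sorted(intervals, key=lambda iv: iv[1]):
        if l > last_end:  # does not intersect any kept interval
            chosen.append((l, r))
            last_end = r
    return chosen

print(edf_max_disjoint([(1, 3), (2, 5), (4, 7), (6, 9)]))  # [(1, 3), (4, 7)]
```

Processing in right-endpoint order is exactly the path special case of a perfect elimination order, which is why the same one-line recipe reappears in the chordal-graph setting.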
Another extension is the maximum $k$-colorable set problem in interval graphs, which is the problem of finding a maximum cardinality subset of vertices that can be partitioned into $k$ independent sets. A polynomial-time greedy algorithm for this problem is presented by Gavril and Yannakakis \cite{GY87}. The algorithm can be stated in the same way: ``process the intervals in the right-endpoint order and add an interval to the solution if it preserves feasibility''. The same algorithm was suggested also in \cite{carlisle1995k}. In \cite{GY87}, Gavril and Yannakakis also showed that the last two extensions above, applied together, make the problem {\nph}. Namely, to find a maximum $k$-colorable subgraph of a chordal graph is {\nph}. This problem is equivalent to the problem of finding a maximum number of subtrees among a given set of subtrees of a tree such that every vertex is contained in at most $k$ of them. It is worth noting that this is the homogeneous variant of our problem, in which all the vertices (and edges) have the same capacity of $k$ subtrees. Chakaravarthy and Roy \cite{ChakaravartyRoy09MaxkColorableChordal} suggested a 2-approximation algorithm for the weighted version of the problem, in which every subtree has a weight and the value of a solution is the sum of the weights of its subtrees. Their algorithm is a two-pass algorithm whose first pass suffices to solve the unweighted variant of the problem. As they point out, this first pass is a greedy algorithm that processes the vertices in a perfect elimination order, and if run with $k=1$ it reduces to Gavril's algorithm for maximum independent sets. Put differently, the algorithm is again: ``process vertices in a perfect elimination order and add a vertex to the solution if it preserves feasibility''. However, a different algorithm with a better approximation ratio, namely $1.582$, was proposed earlier by Wan and Liu \cite{WanLiu98MaximumThroughputinWRO}. 
Though they did not consider the same problem explicitly, they showed that for every graph class for which the maximum independent set problem can be solved in polynomial time, the maximum $k$-colorable subgraph problem can be approximated with an approximation ratio of $\frac{e}{e-1} \approx 1.582$. \runningtitle{Heterogeneous capacities} In~\cite{GargVY97}, Garg, Vazirani and Yannakakis studied the maximum integral multi-commodity flow problem in tree networks. In this problem, we are given pairs of vertices of a tree with capacities on its edges. The goal is to find a maximum number of pairs of vertices (i.e., paths of the tree) such that the number of paths that use an edge does not exceed its capacity. They provide an algorithm to solve the problem and another one to solve its dual, and they show that the solutions are at most a factor of 2 away from each other, thus providing a primal-dual proof that the algorithm is a $2$-approximation. The algorithm that solves the primal problem is in fact the same greedy algorithm: ``process the tree in some bottom-up order; at every vertex consider all the paths whose highest vertex is this one, and add a path to the solution if it preserves feasibility''. In their work, they also showed that the maximum multi-commodity flow problem is {\maxsnph} for general tree networks. \runningtitle{Our contributions:} In the current paper, we first show that the heterogeneous version of the problem is much harder to approximate when one considers subtrees instead of paths. Specifically, it cannot be approximated to within $n^{1-\epsilon}$ for any $\epsilon > 0$, where $n$ is the number of subtrees. On the other hand, when the subtrees are claws (i.e., sub-stars with 3 leaves) and all the loads are $1$ except for the center vertex, the problem becomes $\apxh$. 
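The recurring bottom-up greedy can be sketched in its simplest setting, a line network, where each path request is a contiguous range of edges and each edge has its own (heterogeneous) capacity. This is our own illustrative Python sketch; the half-open edge-range representation and the names are assumptions, not taken from \cite{GargVY97}:

```python
def greedy_paths_on_line(requests, capacity):
    """Bottom-up (left-to-right) greedy for path requests on a line
    network: request (i, j) uses edges i, i+1, ..., j-1, and capacity[e]
    bounds how many accepted requests may use edge e. Requests are
    processed in right-endpoint order and accepted whenever accepting
    them keeps every edge within its capacity."""
    load = [0] * len(capacity)
    accepted = []
    for i, j in sorted(requests, key=lambda r: r[1]):
        if all(load[e] < capacity[e] for e in range(i, j)):
            for e in range(i, j):
                load[e] += 1
            accepted.append((i, j))
    return accepted

print(greedy_paths_on_line([(0, 2), (1, 3), (0, 1), (2, 3)], [1, 2, 1]))
# [(0, 1), (1, 3)]
```

Setting every capacity to the same value $k$ recovers the homogeneous $k$-colorable case, and $k=1$ recovers the EDF-style greedy for disjoint intervals.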
Then we use the same greedy algorithm mentioned above to obtain an $M$-approximation, where $M$ is the maximum number of leaves of the trees in the representation, and the root of a tree is not counted as a leaf. We provide a direct proof of this fact, i.e., one that does not use duality theory, and is thus different from the above-mentioned primal-dual proof of~\cite{GargVY97}. Besides considering trees (instead of paths), we allow a demand to be specified on a per-subtree (i.e., per-multicast) basis. This result implies a $2$-approximation for the maximum multi-commodity flow problem with demands and an optimal algorithm for the maximum $k$-colorable subgraph problem in directed path graphs. Figure \ref{fig:GreedyAlgorithmEvolution} summarizes the above discussion. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Figures/GreedyAlgorithmEvolution.pdf} \caption{ Applicability of variants of the greedy algorithm to various problems. An arrow from rectangle $A$ to rectangle $B$ indicates that the algorithm in $A$ can be used to solve the problem in $B$. The description on the arrow indicates inputs that the algorithm in $A$ might need in addition to an instance of problem $B$. Whenever algorithm $A$ can be applied to problem $B$, if ties are broken in the same way (namely, if they use the same perfect elimination ordering/bottom-up traversal), algorithm $A$ simulates the run of the algorithm in $B$ and produces the same output in the same order. } \label{fig:GreedyAlgorithmEvolution} \end{center} \end{figure} In \cite{GargVY97}, it is shown that the problem of finding the largest number of paths in a star network under heterogeneous capacity constraints can be solved in polynomial time. We also extend this result and show that the problem is polynomial-time solvable for any tree, provided that the paths share a common vertex. In addition to the rigorous analysis, we tested our greedy algorithm on random data and real-life data. 
We found that its performance is very close to optimum. \runningtitle{Applications} In \cite{halldorsson2003sum}, an algorithm that solves the maximum $k$-colorable subgraph problem optimally is used as a subroutine to get an approximation algorithm for the minimum sum coloring problem in interval graphs. This problem corresponds to the scheduling of items to minimize average completion time. In \cite{misra2019parameterized}, it was used in the context of parameterized algorithms. Examples of works in which the maximum $k$-colorable set problem is used in applications are \cite{erlebach1998maximizing}, in the context of optical networks with wavelength division multiplexing (WDM), where multiple connections can share a link if they are transmitted on different wavelengths, and the aim is to satisfy a maximum number of connection requests in a tree network; \cite{bermond2006optimal}, where it was used in the context of traffic grooming in optical networks; and \cite{krause2013optimal}, in the context of register allocation, where it corresponds to the case where the input programs are in static single assignment form. \runningtitle{Paper structure:} In Section \ref{sec:prelim}, we present basic terms and notation, formally define the optimization problems and provide hardness results. In Section \ref{sec:mstbl}, we present our algorithms and their analysis. We summarize the theoretical results, present simulation results and discuss open problems in Section \ref{sec:conc}. \section{Introduction} \input{introduction} \section{Preliminaries}\label{sec:prelim} \input{preliminaries} \section{Maximum Number of Subtrees with Bounded Load}\label{sec:mstbl} \input{MaxNumberBL} \section{Conclusion, Simulations and Open Problems}\label{sec:conc} \input{Conclusion} \newpage
https://arxiv.org/abs/2203.08721
Axiomatization via translation: Hiz's warning for predicate logic
The problems of logical translation of axiomatizations and the choice of primitive operators have surfaced several times over the years. An early issue was raised by H. Hi{\. z} in the 1950s on the incompleteness of translated calculi. Further pertinent work, some of it touched on here, was done in the 1970s by W. Frank and S. Shapiro, as well as by others in subsequent decades. As we shall see, overlooking such possibilities has led to incorrect claims of completeness being made (e.g. by J. L. Bell and A. B. Slomson as well as J. N. Crossley) for axiomatizations of classical predicate logic obtained by translation from axiomatizations suited to differently chosen logical primitives. In this note we begin by discussing some problematic aspects of an early article by W. Frank on the difficulties of obtaining completeness theorems for translated calculi. Shapiro had established the incompleteness of Crossley's axiomatization by exhibiting a propositional tautology that was not provable. In contrast, to deal with Bell and Slomson's system which is complete for propositional tautologies, we go on to show that taking a formal system for classical predicate calculus with the primitive $ \exists$, setting $\forall x \phi(x) \stackrel{\text{def}}{=}\neg \exists x \neg \phi(x)$, and writing down a set of axioms and rules complete for the calculus with $\forall $ instead of $ \exists$ as primitive, does not guarantee completeness of the resulting system. In particular, instances of the valid schema $\exists x \phi (x) \rightarrow \exists x \neg \neg\phi (x)$ are not provable, which is analogous to what occurs in modal logic with $\Box$ and $\Diamond$.
\section{Introduction} \label{Intro} A translation \textbf{t} from a language $\mathcal{L}$, thought of as the set of its formulas, to the language $\mathcal{L}'$ is a function from $\mathcal{L}$ to $\mathcal{L}'$ (on which one may impose further demands of compositionality etc., as desired). We say that \textbf{t} \textit{embeds} a consequence relation $\vdash$ on $\mathcal{L}$ in a consequence relation $\vdash'$ on $\mathcal{L}'$ when for all $\phi_1,\ldots, \phi_n, \psi \in \mathcal{L}$ we have $\phi_1,\ldots, \phi_n \vdash \psi$ only if ${\bf t}(\phi_1),\ldots, {\bf t}(\phi_n) \vdash' {\bf t}(\psi)$, and that {\bf t} does so \emph{faithfully} when we have this with `only if' strengthened to `if and only if'.\footnote{A taxonomy of translations, bringing in such refinements as those just alluded to -- compositionality, etc. -- can be found in \cite[Chp. 2]{French}, where further references to the extensive literature on this topic are also supplied.} If working with logics as sets of formulas -- as we shall be here -- rather than as consequence relations, the previous definitions apply by deleting everything to the left of the $\vdash$. A number of authors have assumed that such translations also preserve other syntactic (or proof-theoretic) properties. One well-known example is Crossley's mistake in \cite[p. 19]{Crossley}, which resulted in the incompleteness of a putative axiomatization of classical predicate logic presented there, and another appears with this same effect, as we shall see below (Section \ref{Incompleteness}), in the celebrated Bell and Slomson \cite{Bell&Slomson}.\footnote{This problem does not, of course, affect the correctness of the model-theoretic results that constitute the core of the book.} Halmos in \cite{Halmos} had proposed an axiomatization of propositional calculus via translation that turned out to be incomplete. The inadequacies were noticed by Hi{\. z} in \cite{Hiz}, and later Frank \cite{Frank} attempted a generalization of Hi{\. 
z}'s observation, which itself ran into difficulties, several of them noted by Shapiro \cite{Shapiro}; further problems with Frank's discussion will be described below.\footnote{The oversight is briefly alluded to in \cite{CorcoranShapirofirst}; and more fully (see p.\,85) in \cite{CorcoranShapirosecond}. Corcoran and Shapiro between them produced three papers mentioning this point \cite{CorcoranShapirofirst, CorcoranShapirosecond, Shapiro}, but curiously none mentions either of the other two. A Spanish translation of \cite{CorcoranShapirosecond} incorporated some typographical corrections: \cite{CorcoranShapiro3}. Note also that we refer to the current mistake as Crossley's oversight without mentioning his coauthors, because the different chapters of \cite{Crossley} were written by different authors and the present issue arises in a Chapter by Crossley. (This may explain -- which is not to say \textit{justify} -- the lack of uniformity in format and style noted at p.\,93 of \cite{CorcoranShapirosecond}).} We shall be supplementing the discussion in \cite{Shapiro} with our own criticism of \cite{Frank} below in Section \ref{franksection}. Shapiro's key point was that na\"ively axiomatizing $[\neg, \wedge, \exists]$ via translation in terms of the primitives $[\neg, \rightarrow, \forall]$ is not possible in general because of incompleteness already in the \textit{propositional} fragment. He showed that certain propositional tautologies are not provable in Crossley's system (specifically, certain instances of $(\phi \wedge \phi) \rightarrow \phi $); Shapiro also notes a specifically quantificational deficit in the axiomatization: see note \ref{Shapironote}. Crossley’s mistake can be quickly corrected, as Shapiro \cite[p. 249, note 3]{Shapiro} remarked. 
In \cite{Crossley}, the primitives were $[\neg, \wedge, \exists]$ with $[\rightarrow, \forall]$ being defined in the usual way, namely $\phi \rightarrow \psi \stackrel{\text{def}}{=}\neg (\phi \wedge \neg \psi)$ and $\forall x \phi(x) \stackrel{\text{def}}{=}\neg \exists x \neg \phi(x)$, but the axiomatization was one for $[\neg, \rightarrow, \forall]$. One might ask, though, whether predicate calculus with the primitives $[\neg, \wedge, \exists]$ can be axiomatized via translation in terms of $[\neg, \rightarrow, \forall]$ when the propositional fragment is indeed complete (by adding, for example, the axiom schema $(\phi \wedge \psi) \leftrightarrow \neg(\phi \rightarrow \neg \psi)$, as suggested by Shapiro).\footnote{\label{Shapironote}In fact, it is not entirely transparent that this would do exactly what Shapiro probably had in mind (because of the subtleties involved in spelling out the meaning of $\leftrightarrow$ in this context), but at any rate the idea is clear: to make $(\phi \wedge \psi) $ replaceable with $ \neg(\phi \rightarrow \neg \psi)$. Hence, in the interest of simplicity, we will set this issue aside. Even though Shapiro suggests also adding the axiom schema $\neg \exists x \neg \phi \leftrightarrow \forall x \phi$, he does not show why this is \emph{necessary}, which is what we do in the present note. Hi{\. z} \cite{Hiz} ends his second last paragraph with these words: `A translation of a complete set of axioms to another set of primitives would be complete only if from the resulting axioms the definitions of the first set of primitives followed.' Probably Shapiro is following this advice in the simplest way possible: to make sure it is provable, take it as an additional axiom. 
What we show here, though, is precisely why, in detail, things go wrong when the advice is not followed: namely that the replacement of equivalents -- `congruentiality' as it is called in the propositional (esp.\ modal) case, originally in \cite{Makinson} -- breaks down in the defective axiomatizations.} In this note, we explore the issue of the failure of the `replacement of equivalents' property that is at the root of both Hi{\. z}'s and Shapiro's examples, with special attention to the failure of provably equivalent open formulas to be interreplaceable within the scope of quantifiers. Hence, the difficulty with axiomatizing via translation is not a purely propositional issue, despite the literature containing mostly discussions focusing on what goes wrong only at the propositional level. As with the axiomatization in Crossley \cite{Crossley}, so also the presentation given in the celebrated text Bell and Slomson \cite{Bell&Slomson} is afflicted by this incompleteness problem. In \cite{Bell&Slomson} the primitives are also $[\neg, \wedge, \exists]$, while the axioms are given via translation in terms of $[\neg, \wedge, \vee, \rightarrow, \forall]$. By contrast with the case of \cite{Crossley}, in this case the axiomatization is complete for the propositional fragment.\footnote{In personal communications, both John Bell and Alan Slomson have confirmed that this problem was unknown to them.} However, the axiomatization in \cite{Bell&Slomson} would, of course, be complete if the choice of primitive quantifier had instead been $\forall$.\footnote{Similarly, the axiomatization in \cite{Crossley} is complete if the choice of primitives had been $[\neg, \rightarrow, \forall]$. 
The effect of the generalization rule is obtained here by taking as instances of the axiom schemata all possible generalizations of the formulas directly instantiating those schemata, where this covers not only their complete universal closures but also the result of universally quantifying any number (including zero) of the free variables in those formulas.} The following section contains our discussion of Frank's article \cite{Frank} touched on above, which still has some current significance for the present topic. Then we turn, in Section \ref{Incompleteness}, to the topic of incomplete axiomatizations of predicate logic resulting from insufficient attention to the way completeness depends on choosing axioms and rules that are appropriate for the logical primitives. We are especially concerned with translating an axiomatization of classical predicate logic suited to one set of primitives into an axiomatization putatively complete for another set. Recent work~\cite{Casanovas,Kennedy} and the not-so-recent work \cite{McGee} on logical operations in predicate logic -- going back, in particular, to \cite{Henkin} -- provide a replacement for the use of matrix methodology in showing unprovability (and hence independence and incompleteness) in propositional logic. \section{Problems with Frank's note}\label{franksection} On the first page of his note, Frank \cite{Frank} gives us the background to Hi{\. 
z}'s discussion, telling us that in his 1956 paper \cite{Hiz}, \begin{quotation}\small \noindent Halmos takes the Hilbert--Ackermann axioms for a sentential logic of $\mathord{\sim}$ and $\lor$, and the rule of inference \[ \RULE{p \,\lor q \qquad {\mathord{\sim} p}}{q}\] and provides an axiom system for $\mathord{\sim}$ and $\&$ by means of the definition \[p \lor q \leftrightarrow \mathord{\sim}(\mathord{\sim} p \mathop{\&} \mathord{\sim} q).\]\end{quotation} Given notational conventions more widely prevailing today, it would now be better to write $p, q$ as $A, B$ or $\phi, \psi$ to make it clear that these are schematic letters for arbitrary formulas, rather than specifically propositional variables (or sentence letters), but it would have been better to avoid $\leftrightarrow$ in the displayed definition and to have stuck to the formulations used by Halmos and Hi{\. z}, since the $\leftrightarrow$-formulation misleadingly suggests that an additional (biconditional) connective is somehow involved, raising questions about how this is related to the chosen primitives $\mathord{\sim}$ and $\lor$ for Hilbert and Ackermann, or $\mathord{\sim}$ and $\&$ for Halmos.\footnote{Our preferred way of recording a definition is with the `$\stackrel{\text{def}}{=}$' notation, so as to preserve as much neutrality as we can among competing accounts of what a definition (for logical vocabulary, in particular) is and does -- such as for example the metalinguistic and object-linguistic conceptions of definition contrasted in {\S}3 of \cite{Humberstone2} (or 3.16 in \cite{Humberstone3}). To minimize disruption, though, we prefer not to raise such issues every time we echo one of the authors discussed here in calling a $\leftrightarrow$-schema a definition. See also note~\ref{Shapironote} above.} A more substantive issue arises over the rule Frank tells us Hilbert and Ackermann use. 
A glance at the actual text on p.~28 of \cite{Hilbert&Ackermann} reveals that the rule they employ takes us, not from premises $\phi \lor \psi$ and $\mathord{\sim}\phi$ to conclusion $\psi$, but rather from $\mathord{\sim}\phi \lor \psi$ and $\phi$ to $\psi$. But the substantive question this raises -- as to whether this change makes a difference to the set of provable formulas -- is not one we need to consider here, since it does not bear on Frank's commentary on Hi{\. z}. Hi{\. z}'s paper uses a three-element matrix with tables for $\land$ (as we shall now write in place of $\&$) and $\neg$ (as we shall write for $\mathord{\sim}$) that validates all theorems forthcoming on the basis of Halmos's $\{\lor,\neg\}$-axiomatization but not all classical tautologies in the $\{\land,\neg\}$-fragment, showing the axiomatization to be incomplete, despite the completeness of the $\{\lor,\neg\}$-axiomatization of Hilbert and Ackermann on which it was based, replacing every $\phi \lor \psi$ in the latter by $\neg(\neg \phi \land \neg \psi)$. The main point of Frank's discussion is that this ingenious matrix argument was not needed, because a simpler general observation already suffices to show that the $\{\land,\neg\}$-fragment of classical propositional logic could not possibly have been completely axiomatized by Halmos's axiomatization. And this general observation appears more or less as follows (we may call this `Frank's Claim'):\footnote{We shall replace Frank's notation `${\rm A}1,\ldots, {\rm A}N$' and `${\rm R}1,\ldots,{\rm R}M$', for axioms and rules respectively, with `$A_1,\ldots,A_n$' and `$R_1,\ldots, R_m$'. 
The same passage is quoted, though in this case verbatim, on the opening page of \cite{Shapiro}.} \begin{quotation}\noindent \small If {\sf T}($A$) is the closure of a formal system in a language $\mathcal{L}$, with axioms $A_1,\ldots,A_n$ and rules $R_1,\ldots, R_m$, and {\bf t} a rule of translation from $\mathcal{L}$ to $\mathcal{L}'$, then {\sf T}$'$, the closure of ${\bf t}(A_1),\ldots,{\bf t}(A_n),{\bf t}(R_1),\ldots, {\bf t}(R_m)$, is equal to {\bf t}(${\sf T}(A)$).\end{quotation} \noindent Frank remarks that a proof by induction (on the number of rule applications in a proof) of this claim is facilitated by noting that for a $k$-premise rule $R_j$, and formulas $\phi_1,\ldots,\phi_k, \psi$:\begin{center} ${\bf t}(R_j) \, = \, \{\langle {\bf t}(\phi_1),\ldots,{\bf t}(\phi_k),{\bf t}(\psi)\rangle \,\vert\,\langle \phi_1,\ldots, \phi_k, \psi\rangle \in R_j\}$\end{center} though this is best taken, not as a comment about the translation of rules -- an otherwise unexplained notion -- but as a definition of what it is to apply {\bf t} to rules (formulated for $\mathcal{L}$), in terms of the initially specified {\bf t} as applied to formulas (of $\mathcal{L}$), identifying a $k$-premise rule with the set of all tuples $\langle \phi_1,\ldots, \phi_k, \psi\rangle$ that constitute an application of the rule, the $\phi_i$s being premises and $\psi$ the conclusion. 
Shapiro \cite[p.\,249]{Shapiro} points out that this definition does not in fact capture what most people have had in mind in translating rules, because it ignores the role of schematic letters in their formulation which, when interpreted over $\mathcal{L}'$, are taken as ranging over all formulas of $\mathcal{L}'$ and not just those of the form ${\bf t}(\phi)$ for some formula $\phi$ of $\mathcal{L}$\footnote{This point of Shapiro's suggests that instead of identifying a rule with the set of its applications in a particular language, we think of it as mapping any language equipped with the logical vocabulary governed by the rule to the relevant application-set in that language. This is the policy urged in 4.33 of \cite{Humberstone3}.}. This consideration also applies to 0-premise rules (axiom schemata). The identification above makes a $k$-premise rule a $(k + 1)$-ary relation between formulas -- and not necessarily a functional such relation.\footnote{We include this remark because at the top of the second page of his note, Frank adds that ${\bf t}(R_j)(y_1,\ldots,y_k) = {\bf t}(x) = {\bf t}(R_j)({\bf t}(y_1),\ldots,{\bf t}(y_k))$, in which he writes $y_1,\ldots,y_k,x$ for what appear as $\phi_1,\ldots,\phi_k,\psi$ above -- and incidentally writes `${\rm R}J\langle y_1,\ldots,y_k \rangle$' (etc.) for what was just quoted as `$R_j(y_1,\ldots,y_k)$'. There is no reason to restrict attention to such `functional' rules, however. For example, one could have a rule in a Hilbert-style/axiomatic setting like the natural deduction rule of $\lor$-introduction of a second disjunct, taking us for any formulas $\phi, \psi$ from the premise $\phi$ to the conclusion $\phi \lor \psi$; this is the binary relation comprising all pairs $\langle \phi, \phi \lor \psi\rangle$ for $\phi, \psi$ in the language concerned, so the conclusion is not uniquely determined by the premise.} There is good news and bad news.
The good news is that this issue about the functionality of the rules does not affect the inductive argument Frank has in mind. The bad news is that the induction fails for a subtle reason explained in Shapiro \cite[p.\,348]{Shapiro} with the aid of a simple counterexample: we need to impose the condition that the translation itself should be injective. Frank's Claim -- setting to one side the need for a corrected formulation -- has yet to be brought to bear on the case of Halmos's incomplete axiomatization, however, so let us see how this is attempted in Frank's discussion. We shall quote the passage in question verbatim, except for symbolizing negation and conjunction by $\neg$ and $\land$; this includes reproducing the phrase `the domain {\sf D}', even though this `{\sf D}' appears nowhere else in the paper (and the syntax is obscure: the domain {\sf D} is not \textit{what}?): \begin{quote}\small Thus, if {\bf t} is not an onto-mapping from $\mathcal{L}$ to $\mathcal{L}'$, (as the domain {\sf D} is not, having as its range in the language containing $\neg$ and $\land$ only sentences beginning with $\neg$), a complete axiomatization in $\mathcal{L}$ will result in an incomplete one in $\mathcal{L}'$.\end{quote} Thus it seems that Frank has switched to using `{\bf t}' not as a variable for discussing translations in general, but to allude to the specific {\bf t} in play in Halmos's discussion. 
In mentioning the case as one in which {\bf t} does not map $\mathcal{L}$ onto (but only into) $\mathcal{L}'$, Frank has usefully made explicit the fact that {\bf t} is a mapping -- as well as raising the question of how its lack of surjectivity might bear on the current issue, something we shall put on hold for a moment -- and has also explicitly identified its domain and codomain.\footnote{\label{t*} Ideally, this second use of `{\bf t}' for a mapping from $\mathcal{L}$-rules to $\mathcal{L}'$-rules would be notationally distinguished -- as {\bf t}*, say -- from the formula-to-formula map {\bf t} it is induced by, though here we have been following Frank in suppressing the distinction (omitting the `*', on the suggestion just mooted). In Section \ref{Intro} we noted that since we were working with logics as sets of formulas we would not be using the apparatus of consequence relations and could ignore everything to the left of the `$\vdash$' in our opening paragraph. It might seem that in discussing Frank's translations of axiomatizations the transition from $A$ to ${\sf T}(A)$ -- the closure of the set of axioms under the rules -- which he wants his translations to preserve, precisely reinstates the consequence relation (or, more accurately, the corresponding consequence operation) in play in our opening paragraph in Section \ref{Intro}. This is not straightforwardly so, however, because the notation $A$ (which appears in the passage quoted from Frank, though without being properly introduced there) stands for a set containing not only axioms but also rules. (Compare the `tuple systems' of \cite[subsec. 0.26]{Humberstone3}.)
If we instead think of $A$ as the set of axioms and build the use of the rules into the ``{\sf T}'' part of the ``${\sf T}(A)$'' notation, then we will have a genuine consequence operation, though typically it will not correspond to the consequence relation most readily associated with the logic in question, because of the \textit{rules of proof} vs \textit{rules of inference} contrast. (See the index entry under that heading in \cite{Humberstone5} for discussion and references.)} In view of this, one might ask why Frank writes (as quoted above) that the range of {\bf t} comprises only formulas (or, as he says, sentences) beginning with $\neg$. What, in particular, is ${\bf t}(p_i)$ supposed to be, for a sentence letter/propositional variable $p_i$, in the case of the translations {\bf t} currently under consideration? In the literature on translations embedding one logic in another (whether the logics are taken as sets of theorems or as consequence relations or \ldots) as opposed to the associated translations -- {\bf t}* in note \ref{t*} -- from one proof system to another (whether, as for the current discussion, a Hilbert-style system, or a natural deduction system or a Gentzen system), what get called \textit{definitional} translations have to satisfy two conditions: they have to be \textit{variable-fixed}, i.e., satisfy ${\bf t}(p_i) = p_i$ for all propositional variables $p_i$ (where we here restrict attention to sentential languages for simplicity, and assume that all are equipped with the same countable supply of such variables), as well as being \textit{compositional} (sometimes called `schematic') in the sense that for every primitive $n$-ary connective $\#$ of $\mathcal{L}$ there is a formula $\phi(p_1,\ldots,p_n) \in \mathcal{L}'$ containing only the sentence letters displayed, for which we have:\begin{center} for all $\psi_1,\ldots, \psi_n \in \mathcal{L}$, ${\bf t}\big(\#(\psi_1,\ldots, \psi_n)\big) = \phi\big({\bf t}(\psi_1),\ldots,{\bf t}(\psi_n)\big)$.\end{center} One can think of $\phi(p_1,\ldots,p_n)$ as putatively defining the $\#$ of $\mathcal{L}$ in the language $\mathcal{L}'$, which is why these are called {\it definitional} translations; in the case of Halmos, $\mathcal{L}$ has connectives $\neg$ and $\lor$ and $\mathcal{L}'$ has connectives $\neg$ and $\wedge$, and the inductive definition of {\bf t} involves: \begin{itemize} \setlength\itemsep{.3em} \item ${\bf t}(p_i) = p_i$ \item ${\bf t}(\neg \phi) = \neg({\bf t}(\phi))$ \item ${\bf t}(\phi \lor \psi) = \neg(\neg {\bf t}(\phi) \land \neg{\bf t}(\psi))$ \end{itemize} In view of these considerations, we are inclined to think of Frank's comment about the range of {\bf t} comprising only formulas of the form $\neg \phi$ as an oversight -- but in fact a revealing one if the charge, in the following paragraph, of a confusion between $\mathcal{L}$ and $\mathcal{L}'$ -- as, on the one hand, languages and, on the other, logics formulated in those languages -- is correct: since certainly in the setting of Halmos's and Hi{\. z}'s discussion, the $p_i$ are not going to show up as \textit{provable} formulas in the range of {\bf t}. Note, incidentally, that the issue raised by Shapiro concerning injectivity is not addressed by insisting on definitional translations, since we could easily have such a translation that is not injective. One way would be to choose the same `defining' formula $\phi$ for two connectives of the same arity.
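To fix ideas, the inductive clauses just given can be rendered as a small program (a hypothetical sketch of ours, not Frank's or Halmos's apparatus; the encoding of formulas as nested tuples, with propositional variables as strings, is our own):

```python
# Hypothetical sketch: the definitional translation t from the
# {or, not} language L into the {and, not} language L', with formulas
# encoded as nested tuples and propositional variables as strings.

def t(phi):
    """Variable-fixed, compositional translation."""
    if isinstance(phi, str):          # t(p_i) = p_i
        return phi
    if phi[0] == 'not':               # t(~phi) = ~ t(phi)
        return ('not', t(phi[1]))
    if phi[0] == 'or':                # t(phi v psi) = ~(~ t(phi) & ~ t(psi))
        return ('not', ('and', ('not', t(phi[1])), ('not', t(phi[2]))))
    raise ValueError(f'unknown connective: {phi[0]}')

# p v q  |->  ~(~p & ~q)
print(t(('or', 'p', 'q')))
```

As the sketch makes visible, the value of {\bf t} on any disjunction begins with $\neg$, while ${\bf t}(p_i) = p_i$, which bears on the remark above about the range of {\bf t}.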
But we return from injectivity to the matter of surjectivity.\footnote{\label{constants_note}The example, not summarized above, of a non-injective translation on the first page of Shapiro's paper can be presented as a definitional translation subject to the convention that the propositional variables come in countable supply by treating his $a, b, c$ in $\mathcal{L}$ and $A, B$ in $\mathcal{L}_2$ as nullary connectives (sentential constants); the simultaneous presence in the languages of the $p_i$ ($i \in \omega$) does not affect the example.} What is not obvious is why a failure of surjectivity on {\bf t}'s part should occasion a failure of completeness for the target logic -- axiomatized by applying {\bf t} to a complete axiomatization of the source logic. Recall that $\mathcal{L}$ and $\mathcal{L}'$ are not the source and target logics involved here, but rather just the languages of these logics. So one cannot immediately reason: \begin{quote}\small Suppose that $\phi' \in \mathcal{L}'$ is not ${\bf t}(\phi)$ for any $\phi \in \mathcal{L}$. In that case the target logic must be incomplete, since $\phi'$ cannot be provable in it, by the main observation above (beginning ``If {\sf T}($A$) is the closure\ldots'').\end{quote} This would not work, because the notion of completeness in play here is most evidently explicated in semantic terms: we are trying to axiomatize the classically valid (`tautologous') formulas in the language $\mathcal{L}'$, so what we need is not just some formula or other of $\mathcal{L}'$ that is not ${\bf t}(\phi)$ for any formula $\phi \in \mathcal{L}$ -- all that a failure of surjectivity asserts -- but that we have some \textit{valid} formula $\phi'$ of $\mathcal{L}'$ that is not ${\bf t}(\phi)$ for any (here redundantly: valid) formula $\phi \in \mathcal{L}$.
With this apparent strengthening of (a correctly formulated version of) Frank's Claim, we could proceed to the desired incompleteness conclusion, since such a $\phi'$ would then be a witness to the incompleteness of the axiomatization obtained by applying {\bf t} to the initially given complete axiomatization of the valid formulas of $\mathcal{L}$. To see how this gap arises between Frank's claim to have provided a simpler alternative to Hi{\. z}'s conclusion and the explicit justification he provides for that claim, it is necessary to inquire more deeply into what unstated conditions might be in play concerning the notion of a translation beyond it being a mapping from one language to another (continuing here, for simplicity, to identify a language with its set of formulas). In many cases, once attention is restricted to definitional translations, a $\phi'$ outside the range of {\bf t} which is not itself (classically) valid can be exploited to find a related formula of $\mathcal{L}'$, likewise outside the range, which is valid. For example, take $\mathcal{L}$ and $\mathcal{L}'$ to be the languages of classical propositional logic with primitives $\{\lor, \neg\}$ in the former case and $\{\lor, \neg, \bot\}$ in the latter, with {\bf t} the identity translation, a degenerate case of a definitional translation: \begin{itemize} \setlength\itemsep{.3em} \item ${\bf t}(p_i) = p_i$ \item ${\bf t}(\neg \phi) = \neg({\bf t}(\phi))$ \item ${\bf t}(\phi \lor \psi) = {\bf t}(\phi) \lor {\bf t}(\psi)$ \end{itemize} Evidently $\bot$ is not in the range of {\bf t}, and while this is not an immediate threat to the completeness of the result of applying {\bf t} to any complete axiomatization of the valid formulas of $\mathcal{L}$, since $\bot$ is not a valid formula, we note that this means that we also have, for example, $\neg p \lor (p \lor \bot)$ as a valid $\mathcal{L}'$-formula that is missing from the range of {\bf t}, occasioning incompleteness.
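The status of this witness can be confirmed mechanically (a hypothetical sketch; the brute-force checker and the helper names `eval\_f', `variables' and `is\_valid' are our own):

```python
from itertools import product

# Hypothetical sketch: brute-force validity test for the language L'
# with primitives {or, not, bot}; the witness ~p v (p v bot) is a
# classically valid L'-formula containing bot, hence outside the range
# of the identity translation from the bot-free language L.

def eval_f(phi, val):
    if phi == 'bot':                          # bot is false everywhere
        return False
    if isinstance(phi, str):                  # propositional variable
        return val[phi]
    if phi[0] == 'not':
        return not eval_f(phi[1], val)
    if phi[0] == 'or':
        return eval_f(phi[1], val) or eval_f(phi[2], val)
    raise ValueError(phi)

def variables(phi, acc=None):
    acc = set() if acc is None else acc
    if isinstance(phi, str):
        if phi != 'bot':
            acc.add(phi)
        return acc
    for sub in phi[1:]:
        variables(sub, acc)
    return acc

def is_valid(phi):
    vs = sorted(variables(phi))
    return all(eval_f(phi, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

witness = ('or', ('not', 'p'), ('or', 'p', 'bot'))
print(is_valid(witness))
```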
What if the $\phi'$ not in the range of {\bf t} has no such distinctive logical behaviour -- for example is a nullary connective (cf.\ note \ref{constants_note}) like $\bot$ as it behaves in Johansson's Minimal Logic, while $\neg$ and $\lor$ continue to behave classically? Then, in place of the $\phi'$ in question we could use the disjunction $\phi' \lor \neg \phi'$ as the missing valid formula. Since we are focusing on Hilbert-style systems here -- and not including as logics `purely inferential' or atheorematic consequence relations (such as the conjunction--disjunction fragment of classical logic) -- the logics concerned need to have at least one provable formula. So we simply make judicious substitutions for any propositional variable occurring in such a formula. If we were concerned with intuitionistic propositional logic, for instance, we could use $\phi' \to \phi'$ or $\neg\neg(\phi' \lor \neg \phi')$ to play this role. This does not cover all eventualities, however. Saying that we require at least one provable formula does not guarantee that there is such a formula in which there occur propositional variables for which the envisaged substitution of $\phi'$ for some $p_i$ can be made. So again making use of constants (and once more the example mentioned in note \ref{constants_note} is relevant), let $\mathcal{L}$ and $\mathcal{L}'$ have for their logical vocabulary $\{\top\}$ and $\{\top, \bot\}$ and let {\bf t} be the identity map. The (classically) valid formulas of $\mathcal{L}$ are one in number: $\top$, so we can axiomatize the logic with that formula as our sole axiom, and no rules at all. The translation under {\bf t} of this axiomatization is that axiomatization itself, and so suffers from no incompleteness, even though Frank's sufficient condition for producing incomplete translations is satisfied: {\bf t} is not a surjective translation (though we cannot construct a valid formula to exploit this).
Of course, this example differs from those considered earlier among fragments of classical logic in that we are not dealing with a functionally complete fragment (to say the least). But functional completeness is not particularly emphasized in Frank's discussion, and its role has not been investigated here either. \section{Incompleteness}\label{Incompleteness} \subsection{The case of one-variable classical predicate calculus}\label{one} Let us look now at the simple example of the monadic one-variable classical predicate calculus studied by Henkin in \cite[p. 6]{Henkin}; the reason for beginning with the one-variable fragment is explained below (in the paragraph following Remark \ref{alt}). We shall show that this logic with the primitives $[\neg, \rightarrow, \exists]$ cannot, in general, be axiomatized via translation in terms of $[\neg, \rightarrow, \forall]$ (our choice of $\neg, \rightarrow$ is just for simplicity; the same result holds for $\neg, \rightarrow, \wedge, \vee$). As Henkin presented his one-variable system, one allows only unary vocabularies and one individual variable $x$. This logic is axiomatizable by the usual presentations of predicate calculus (such as that in \cite{Mendelson}). Consider then the following standard\footnote{See \cite{Bell&Slomson} where the authors also add axiom schemata for the remaining connectives.
The results in what follows all remain true if we add those further postulates as well.} attempt at axiomatization via translation where we set $\forall x \phi(x) \stackrel{\text{def}}{=}\neg \exists x \neg \phi(x)$: \begin{itemize} \item[] {\sc Axiom schemata} \item[(A1)] $\phi \rightarrow (\psi \rightarrow \phi)$ \item[(A2)] $(\phi \rightarrow (\psi \rightarrow \chi)) \rightarrow ((\phi \rightarrow \psi) \rightarrow (\phi \rightarrow \chi))$ \item[(A3)] $(\neg \phi \rightarrow \neg \psi) \rightarrow (\psi \rightarrow \phi)$ \item[(A4)] $\forall x \phi(x) \rightarrow \phi(x)$ \item[(A5)] $\forall x (\phi \rightarrow \psi (x)) \rightarrow (\phi \rightarrow \forall x \psi(x))$ where $x$ is not free in $\phi$. \end{itemize} \begin{itemize} \item[] {\sc Rules} \item[(R1)] \emph{Modus Ponens}: \[ \RULE{\phi \rightarrow \psi \qquad \phi}{\psi}\] \item[(R2)] \emph{Generalization}: \[ \RULE{\phi}{\forall x \phi}\] \end{itemize} When these rules and axiom schemata are given for predicate calculus with infinitely many variables, $x$ simply serves as a placeholder for any of the variables in the official list $x_1, x_2, x_3, \dots$ Our strategy consists in adapting the argument for modal logic from \cite[Corollary 2.2]{Humberstone} to the present setting. Roughly, we shall be interpreting the logical primitives of one-variable predicate calculus by \emph{operations} on domains (in the sense of \cite{McGee} and more recently \cite[Def. 6.1]{Casanovas} or \cite[\S 2.1]{Kennedy}) that diverge from the standard ones in order to show the unprovability of the validity $\exists x \phi (x) \rightarrow \exists x \neg \neg\phi (x)$. This method was, in fact, introduced already by Henkin in \cite[p. 21]{Henkin} using the term `generalized models'\footnote{Not to be confused with Henkin's `general models' for second order logic, though the two ideas have something in common, each giving an unintended interpretation to some of the logical vocabulary.} to show the incompleteness of certain finite-variable logics with respect to obvious attempts at axiomatization. This kind of argument is a generalization to the first-order level of the typical proofs of independence of different axioms for propositional calculus. \begin{Rmk}\label{logop}\emph{ To illustrate the notion of a logical operation on a domain $A$ we shall refer the reader to \cite[Example 3]{Kennedy}. For example, in the context of monadic logic, if we have $X_0, X_1 \subseteq A$, then conjunction is the operation $f_\wedge (X_0, X_1) = X_0 \cap X_1$, negation is the operation $f_\neg (X_0) = A \setminus X_0$ and the existential quantifier $\exists$ is the operation \begin{center}$ f_\exists (X_0) = \begin{cases*} A& if $X_0 \neq \emptyset$ \\ \emptyset & otherwise \end{cases*}$ \end{center} }\hspace*\fill$\blacktriangleleft$ \end{Rmk} \ \ Using the notion of an operation on models as exemplified in Remark \ref{logop}, one can easily obtain a satisfaction relation. Then we may take the \emph{value} of a formula $\phi$ on a model $\mathfrak{A}$, denoted $\mathfrak{A}(\phi)$, to be the set $\{ a \in A \mid \mathfrak{A} \models \phi[a]\}$ and it can be computed recursively using the operations corresponding to the primitives of the logic. For a sentence, $\mathfrak{A}(\phi)$ is always $A$ or $\emptyset$. In this sense, a first-order sentence might be said to be \emph{logically valid} if its value is the whole domain in every model. Once we reinterpret the logical primitives in new ways, what $\mathfrak{A}(\phi)$ ends up being will change.
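By way of illustration, the standard operations of Remark \ref{logop} and the recursive computation of $\mathfrak{A}(\phi)$ might be sketched as follows (a hypothetical aid of ours; the tuple encoding of formulas and the interpretation dictionary are our own devices):

```python
# Hypothetical sketch of the 'logical operations' apparatus: values of
# monadic one-variable formulas are subsets of the domain A, computed
# recursively from operations on those subsets.

A = frozenset({'u', 'v'})                        # a two-element domain

f_and = lambda X0, X1: X0 & X1                   # conjunction: intersection
f_neg = lambda X0: A - X0                        # standard negation: complement
f_exists = lambda X0: A if X0 else frozenset()   # existential quantifier

def value(phi, interp):
    """Compute A(phi), given an interpretation of the atomic formulas,
    e.g. interp = {'P': {'v'}}."""
    if isinstance(phi, str):                     # atomic formula P(x)
        return frozenset(interp[phi])
    op = phi[0]
    if op == 'and':
        return f_and(value(phi[1], interp), value(phi[2], interp))
    if op == 'not':
        return f_neg(value(phi[1], interp))
    if op == 'exists':
        return f_exists(value(phi[1], interp))
    raise ValueError(op)

# A sentence such as  exists x (P(x) & ~P(x))  takes value A or {}:
print(value(('exists', ('and', 'P', ('not', 'P'))), {'P': {'v'}}))
```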
We show the independence of $\exists x\phi(x) \to \exists x \neg\neg\phi(x)$ in the course of the proof of Proposition \ref{monadic}, though readers for whom the `logical operations' approach summarised in Remark \ref{logop} is unfamiliar may prefer to glance first at Remark \ref{alt} below to see how the proof would go without explicitly invoking that apparatus. \begin{Pro}\label{monadic} Let $\vdash$ stand for provability in the axiomatization of one-variable logic given above. Consider a vocabulary $\tau= \{P\}$ where $P$ is a unary predicate letter. There is a two-element model $\mathfrak{A}$, with domain $A=\{u, v\}$, and an interpretation of the primitives $[\neg, \rightarrow, \exists]$ such that $\vdash \phi$ only if either $\mathfrak{A}(\phi) = A$ or $\mathfrak{A}(\phi) = \{u\}$. Moreover, we have that $$\mathfrak{A}(\exists x P (x) \rightarrow \exists x \neg \neg P (x)) = \emptyset.$$ \end{Pro} \begin{proof} Given the domain $A= \{u, v\}$, we proceed to re-interpret the operations corresponding to $\rightarrow, \neg$ and $\exists$ in this domain: \ \FloatBarrier \begin{table}[h] \begin{tabular}{l||cccc} $f^*_\rightarrow (X_0, X_1)$ & $X_1 = A$ & $X_1 = \{u\}$ & $X_1 = \{v\}$ & $X_1 = \emptyset$ \\ \hline\hline $X_0 = A$ & $A$ & $\{u\}$ & $\{v\}$ & $\emptyset$ \\ $X_0 = \{u\}$ & $A$ & $A$ & $\{v\}$ & $\{v\}$ \\ $X_0 = \{v\}$ &$A$ & $\{u\}$ & $A$ & $\{u\}$ \\ $X_0 = \emptyset$ & $A$ & $A$ & $A$ & $A$ \end{tabular} \end{table} \FloatBarrier \begin{minipage}{.45\linewidth} \[ f^*_\exists (X_0) = \begin{cases*} A& if $X_0 = A$ \\ A & if $X_0 = \{u\}$ \\ A & if $X_0 = \{v\}$\\ \emptyset & if $X_0 = \emptyset$ \end{cases*}\] \end{minipage} \begin{minipage}{.45\linewidth} \[ f^*_\neg (X_0) = \begin{cases*} \emptyset & if $X_0 = A$ \\ \emptyset & if $X_0 = \{u\}$ \\ \{u\} & if $X_0 = \{v\}$\\ \{u\} & if $X_0 = \emptyset$ \end{cases*} \] \end{minipage} \ Under our earlier definition of $\forall$, the operation for this defined symbol is: \begin{center} \[ f^*_\neg( f^*_\exists (f^*_\neg (X_0))) =
f^*_\forall (X_0) = \begin{cases*} \{u\}& if $X_0 = A$ \\ \{u\} & if $X_0 = \{u\}$ \\ \emptyset & if $X_0 = \{v\}$\\ \emptyset & if $X_0 = \emptyset$ \end{cases*}\] \end{center} \ Take the model $\mathfrak{A}$ that results from the domain $\{u, v\}$ and letting $\mathfrak{A}(P (x))=\{v\}$. First, one can see that the logically valid (in the standard sense) sentence $\exists x P (x) \rightarrow \exists x \neg \neg P (x)$ takes value $\emptyset$ in $\mathfrak{A}$, which implies that it is not derivable from the axiomatization we just considered. To see this, it suffices to check the following equalities: $$f^*_\rightarrow ( f^*_\exists (\{v\}), f^*_\exists ( f^*_\neg (f^*_\neg(\{v\})))) = f^*_\rightarrow ( A, f^*_\exists ( f^*_\neg ( \{u\} ))) = f^*_\rightarrow ( A, f^*_\exists ( \emptyset )) = f^*_\rightarrow ( A, \emptyset ) = \emptyset.$$ On the other hand, every axiom of the axiomatization via translation in terms of $[\neg, \rightarrow, \forall]$ takes as value either $\{u, v\}$ or $\{u\}$. We check this for A4 and A5 and leave the rest as exercises for the reader. 
For A4, first observe $$ f^*_\forall(\mathfrak{A}(\phi)) = \begin{cases*} \{u\}& if $\mathfrak{A}(\phi) = A$ \\ \{u\} & if $\mathfrak{A}(\phi) = \{u\}$ \\ \emptyset & if $\mathfrak{A}(\phi) = \{v\}$\\ \emptyset & if $\mathfrak{A}(\phi) = \emptyset$ \end{cases*}$$ which means that we have the following table for $f^*_\rightarrow ( f^*_\forall(\mathfrak{A}(\phi)), \mathfrak{A}(\phi))$ (and hence that the operation always takes $A$ as value): \FloatBarrier \begin{table}[h] \begin{tabular}{l||cccc} $f^*_\rightarrow ( f^*_\forall(\mathfrak{A}(\phi)), \mathfrak{A}(\phi))$ & $\mathfrak{A}(\phi) = A$ & $\mathfrak{A}(\phi) = \{u\}$ & $\mathfrak{A}(\phi)= \{v\}$ & $\mathfrak{A}(\phi) = \emptyset$ \\ \hline \hline $f^*_\forall(\mathfrak{A}(\phi)) = \{u\}$ & $A$ & $A$ & & \\ $f^*_\forall(\mathfrak{A}(\phi)) = \emptyset$ & & & $A$ & $A$ \end{tabular} \end{table} \FloatBarrier For A5, we must compute the values of $$f^*_\rightarrow ( f^*_\forall(f^*_\rightarrow(\mathfrak{A}(\phi), \mathfrak{A}(\psi(x)))), f^*_\rightarrow (\mathfrak{A}(\phi), f^*_\forall (\mathfrak{A}(\psi(x))) )).$$ First observe that, given that $\phi$ is a sentence (so it takes only one of the values $A$ or $\emptyset$), the table for $f^*_\rightarrow(\mathfrak{A}(\phi), \mathfrak{A}(\psi(x)))$ gets simplified: \begin{table}[h] \begin{tabular}{l||cccc} $f^*_\rightarrow(\mathfrak{A}(\phi), \mathfrak{A}(\psi(x)))$ & $\mathfrak{A}(\psi(x)) = A$ & $\mathfrak{A}(\psi(x)) = \{u\}$ & $\mathfrak{A}(\psi(x)) = \{v\}$ & $\mathfrak{A}(\psi(x)) = \emptyset$ \\ \hline\hline $\mathfrak{A}(\phi) = A$ & $A$ & $\{u\}$ & $\{v\}$ & $\emptyset$ \\ $\mathfrak{A}(\phi) = \emptyset$ & $A$ & $A$ & $A$ & $A$ \end{tabular} \end{table} Then $$ f^*_\forall(f^*_\rightarrow(\mathfrak{A}(\phi), \mathfrak{A}(\psi(x)))) = \begin{cases*} \{u\}& if $f^*_\rightarrow(\mathfrak{A}(\phi), \mathfrak{A}(\psi(x)))= A$ \\ \{u\} & if $f^*_\rightarrow(\mathfrak{A}(\phi), \mathfrak{A}(\psi(x))) = \{u\}$ \\ \emptyset & if $f^*_\rightarrow(\mathfrak{A}(\phi), \mathfrak{A}(\psi(x))) = \{v\}$\\ \emptyset & if $f^*_\rightarrow(\mathfrak{A}(\phi), \mathfrak{A}(\psi(x))) = \emptyset$ \end{cases*}$$ Similarly, we have $$ f^*_\forall(\mathfrak{A}(\psi(x))) = \begin{cases*} \{u\}& if $\mathfrak{A}(\psi(x)) = A$ \\ \{u\} & if $\mathfrak{A}(\psi(x)) = \{u\}$ \\ \emptyset & if $\mathfrak{A}(\psi(x)) = \{v\}$\\ \emptyset & if $\mathfrak{A}(\psi(x)) = \emptyset$ \end{cases*}$$ and then \FloatBarrier \begin{table}[h] \begin{tabular}{l||cc} $f^*_\rightarrow (\mathfrak{A}(\phi), f^*_\forall (\mathfrak{A}(\psi(x))) )$ & $f^*_\forall (\mathfrak{A}(\psi(x))) = \{u\}$ & $f^*_\forall (\mathfrak{A}(\psi(x))) = \emptyset$ \\ \hline \hline $\mathfrak{A}(\phi) = A$ & $\{u\}$ & $\emptyset$ \\ $\mathfrak{A}(\phi) = \emptyset$ & $A$ & $A$ \end{tabular} \end{table} \FloatBarrier \ If we let $S_0$ stand for $ f^*_\forall(f^*_\rightarrow(\mathfrak{A}(\phi), \mathfrak{A}(\psi(x))))$ and $S_1$ for $f^*_\rightarrow (\mathfrak{A}(\phi), f^*_\forall (\mathfrak{A}(\psi(x))) )$, then we may build the following table: \FloatBarrier \begin{table}[h] \begin{tabular}{l||cccc} $f^*_\rightarrow ( S_0, S_1)$ & $ \mathfrak{A}(\psi(x)) = A$ & $ \mathfrak{A}(\psi(x)) = \{u\}$ & $ \mathfrak{A}(\psi(x)) = \{v\}$ & $ \mathfrak{A}(\psi(x)) = \emptyset$ \\ \hline \hline $\mathfrak{A}(\phi) = A$ & $A$ & $A$ & $A$ & $A$ \\ $\mathfrak{A}(\phi) = \emptyset$ & $A$ & $A$ & $A$ & $A$ \end{tabular} \end{table} \FloatBarrier Moreover, the rules of inference preserve these values: they take us from premises with values $A$ or $\{u\}$ to conclusions whose values again lie in $\{A, \{u\}\}$.
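These finite computations can be double-checked mechanically. The following sketch is ours, not part of the original argument; the names `imp', `neg', `ex' and `fa' abbreviate $f^*_\rightarrow$, $f^*_\neg$, $f^*_\exists$ and $f^*_\forall$, and it verifies that A4 and A5 take only designated values, that the rules preserve designation, and that the counterexample formula receives $\emptyset$:

```python
from itertools import product

# Hypothetical sketch: exhaustive recomputation of the nonstandard
# operations over the two-element domain A = {u, v}.

A = frozenset({'u', 'v'})
U, V, E = frozenset({'u'}), frozenset({'v'}), frozenset()
subsets = [A, U, V, E]

imp = lambda X0, X1: (A - X0) | X1   # f*_->: pointwise material implication
neg = lambda X0: U - X0              # f*_neg(X) = {u} \ X (nonstandard)
ex  = lambda X0: A if X0 else E      # f*_exists
fa  = lambda X0: neg(ex(neg(X0)))    # forall defined as neg exists neg

# f*_forall as tabulated: {u} on A and {u}, empty otherwise
assert [fa(X) for X in subsets] == [U, U, E, E]

# A4: forall x phi(x) -> phi(x) always takes the value A
assert all(imp(fa(X), X) == A for X in subsets)

# A5, with phi a sentence (value A or {}), is always designated
for P, Q in product([A, E], subsets):
    assert imp(fa(imp(P, Q)), imp(P, fa(Q))) in (A, U)

# modus ponens and generalization preserve the designated values {A, {u}}
D = {A, U}
assert all(X1 in D for X0 in D for X1 in subsets if imp(X0, X1) in D)
assert all(fa(X) in D for X in D)

# exists x P(x) -> exists x neg neg P(x), with A(P(x)) = {v}, gets {}
assert imp(ex(V), ex(neg(neg(V)))) == E
print("all checks pass")
```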
\end{proof} \begin{Rmk}\label{alt}\emph{ In the proof of Proposition \ref{monadic} we could have presented the new way of interpreting the primitives semantically by means of the following satisfaction relation $ \models^*$ where $\mathfrak{A}$ is taken to be a fixed parameter (namely the structure built in the proof): \begin{itemize} \item[] for all $a \in A$, \item[] $\mathfrak{A} \models^* P(x)[a]$ iff $a \in \mathfrak{A}(P(x))$ \item[] $\mathfrak{A} \models^* \neg \phi [a]$ iff $a=u$ and $\mathfrak{A} \not \models^* \phi [u]$ (so negation means failure to be satisfied by the distinguished element $u\in A$), \item[] $\mathfrak{A} \models^* (\phi \rightarrow \psi) [a]$ iff $\mathfrak{A}\not \models^* \phi [a]$ or $\mathfrak{A} \models^* \psi [a]$, \item[] $\mathfrak{A} \models^* \exists x \phi [a]$ iff $\mathfrak{A} \models^* \phi [b]$ for some $b \in A$, \item[] $\mathfrak{A} \models^* \forall x \phi [a]$ iff $a=u$ and $\mathfrak{A} \models^* \phi [u]$. \end{itemize} Then we may say a formula is $true^*$ in $\mathfrak{A}$ if satisfied by the distinguished element $u$. What the argument in the proof of Proposition \ref{monadic} showed then is that all the axioms of the calculus are $true^*$ in $\mathfrak{A}$, the rules preserve $truth^*$ but $\exists x P (x) \rightarrow \exists x \neg \neg P (x)$ is not $true^*$. } \hspace*\fill$\blacktriangleleft$ \end{Rmk} By way of background, let us note that the above proof of Proposition \ref{monadic} is an adaptation of the proof of \cite[Corollary 2.2]{Humberstone}, in the setting of modal logic,\footnote{In the setting of modal logic, work on sensitivity to the choice of primitives began with \cite{Makinson}, though this concerned the structure of the lattice of all modal logics rather than the failure of `axiomatization by translation'.
The latter theme is pursued in the case of intuitionistic logic in \cite{Humberstone2}; a combination of vocabulary unfamiliar to a copy editor and inadequate proof reading by the author resulted in the frequent appearance of `intuitionistic(ally)' in this paper as `intuitional(ly)'. Further illustrations, from modal logic, of losing congruentiality as a result of changing primitives and not making compensatory adjustment can be found under Example 1.3.18 in \cite{Humberstone4}.} but with a semantics borrowing one aspect of the use of non-normal worlds. Usually those are exploited in two ways for non-normal modal logics, one being that the validity of a formula requires only its truth at the normal worlds in all models, with the other being that the normality of a world is an additional necessary condition for the truth of a $\Box$-formula at that world, over and above the truth of the formula in the scope of the $\Box$ at all accessible worlds. The first of these roles is still played by the normal worlds of \cite{Humberstone}, but the second role is altered so that it is not $\Box$ (or $\Diamond$) formulas that make an additional demand of normality of the points at which they are evaluated, but $\neg$-formulas that require the world in question to be normal (as well as that the formula in the immediate scope of $\neg$ should fail to be true at the point). In addition, the accessibility relation in play is universal, and so does not need to be mentioned. This feature makes monadic one-variable predicate logic the appropriate non-modal analogue for present purposes, a closed monadic formula corresponding to a `fully modalized' modal formula, and the two-world models of \cite{Humberstone} -- with one normal and one non-normal world -- can then interpret any free occurrence of (the sole candidate) variable as picking out the normal element.
This normal element is referred to above as $u$, the non-normal element being $v$, exactly as in the notation of \cite{Humberstone} for the corresponding two worlds. The domain $A$ is there just the universe $\{u, v\}$ of the models under consideration. This set is labelled 1, with $\{u\}, \{v\}$, and $\emptyset$ appearing as 2, 3, and 4 in Figure 1 of \cite[p.399]{Humberstone}, which is the modal version of the tables in the proof of Proposition \ref{monadic} above, with $\Box$ and $\Diamond$ in place of $\forall$ and $\exists$. Note that all that is required of the nonstandard semantics in such cases is that the axiomatization under discussion should be \textit{sound} w.r.t.\ the notion of validity provided by the semantics, and that the formula whose unprovability is to be shown is invalid. It is not required that the axiomatization be not only sound but complete w.r.t.\ the semantics on offer, though for one reason or another, one may be interested in this. In the modal case, Omori and Skurt \cite{Omori&Skurt} explore the possibility of a semantic characterization of the `failed axiomatization' of the modal logic {\sf K} discussed in \cite{Humberstone}. And, by way of a non-modal example, Shapiro \cite[p. 249, last para.]{Shapiro} provides a semantic description of Crossley's (attempted) axiomatization in \cite[p. 19]{Crossley} of classical predicate logic. \subsection{The case of classical predicate calculus} Next we are going to use the approach in Remark \ref{alt} to transport the incompleteness result to \emph{monadic} logic with infinitely many variables. Everything we do can be adapted to the polyadic case but the monadic case is simply easier to understand.\footnote{For example, one could set $\mathfrak{A}(R(x_1,\ldots,x_m)) = \{\langle a_1,\ldots,a_m\rangle \,\vert\, a_1 = \cdots = a_m = v\}$ and adjust the atomic clause for $\models^*$ appropriately.} This is just what suffices to refute Bell and Slomson's claim that the axiomatization in \cite{Bell&Slomson} is complete.
We retain the semantic apparatus of Section \ref{one}, now setting $\mathfrak{A}(P (x_i))=\{v\}$ ($i<\omega$). We start by defining the satisfaction relation $ \models^*$,\footnote{Following, for simplicity, the treatment of variables and finite sequences of elements in a structure that appears in \cite{Marker}.} this time as follows (where $u$ is the distinguished element of $ A$): \begin{itemize} \item[] for any formula $\phi(x_{i_1}, \dots, x_{i_n})$ with free variables among $\{x_{i_1}, \dots, x_{i_n}\}$, a sequence $a_{i_1}, \dots, a_{i_n}$ of elements from $ A$ (in general with repetitions since $A$ has only two elements), \item[] if $\phi(x_{i_1}, \dots, x_{i_n}) = P(x_{i_k})$, \item[] $\mathfrak{A} \models^* \phi[a_{i_k}]$ iff $a_{i_k} \in \mathfrak{A}(P(x_{i_k}))$, \item[] if $\phi(x_{i_1}, \dots, x_{i_n}) = \neg \psi$, \item[] $\mathfrak{A} \models^* \neg \psi [a_{i_1}, \dots, a_{i_n}]$ iff $a_{i_l}=u \ (1\leq l\leq n)$ and $\mathfrak{A} \not \models^* \psi [\underbrace{u, \dots, u}_n]$, \item[] if $\phi(x_{i_1}, \dots, x_{i_n}) = \psi \rightarrow \chi$, \item[] $\mathfrak{A} \models^* (\psi \rightarrow \chi) [a_{i_1}, \dots, a_{i_n}]$ iff $\mathfrak{A} \not \models^* \psi [a_{i_1}, \dots, a_{i_n}]$ or $\mathfrak{A} \models^* \chi [a_{i_1}, \dots, a_{i_n}]$, \item[] if $\phi(x_{i_1}, \dots, x_{i_n}) = \exists x_{j} \psi(x_{i_1}, \dots, x_{i_n},x_{j})$, \item[] $\mathfrak{A} \models^* \exists x_{j} \psi [a_{i_1}, \dots, a_{i_n}]$ iff $\mathfrak{A} \models^* \psi [a_{i_1}, \dots, a_{i_n}, b]$ for some $b \in A$, \item[] if $\phi(x_{i_1}, \dots, x_{i_n}) = \forall x_{j} \psi(x_{i_1}, \dots, x_{i_n}, x_{j}) = \neg \exists x_{j} \neg \psi(x_{i_1}, \dots, x_{i_n}, x_{j})$, \item[] $\mathfrak{A} \models^* \forall x_{j} \psi [a_{i_1}, \dots, a_{i_n}]$ iff $\mathfrak{A} \models^* \psi [\underbrace{u, \dots, u}_{n+1}]$ and $a_{i_l}=u \ (1\leq l\leq n)$. 
\end{itemize} Then we may say a formula $\phi(x_1, \dots, x_n)$ is $true^*$ in $\mathfrak{A}$ if it is satisfied by the sequence $\underbrace{u, \dots, u}_n$, where $u$ is the distinguished element of $A$. \begin{Pro}\label{monadic2} Let $\vdash$ stand for provability in the axiomatization given in \emph{\cite{Bell&Slomson}}\footnote{Their axiomatization was roughly copied from \cite{Mendelson} according to Alan Slomson (personal communication).} (and repeated at the start of Section 3.1 above). Consider a vocabulary $\tau= \{P\}$ where $P$ is a unary predicate. Then every instance of an axiom schema from the system in \emph{\cite{Bell&Slomson}} is $true^*$ in $\mathfrak{A}$ and the rules preserve this property, whereas the following formula is not: $$\exists x P (x) \rightarrow \exists x \neg \neg P (x).$$ \end{Pro} \begin{Rmk} \emph{The inquisitive reader may then ask where exactly the mistake lies in the purported completeness proof in \cite[Thm. 3.5.1]{Bell&Slomson}. It is in the final step, where they claim that the equivalence class of formulas provably equivalent to $\exists v_n \psi(v_0/x_0, \dots, v_{n-1}/x_{n-1}, v_n)$ coincides with that of $\neg \forall v_n \neg \psi(v_0/x_0, \dots, v_{n-1}/x_{n-1}, v_n)$. Here they would have needed the principle $\exists x \psi \leftrightarrow \neg \forall x \neg \psi$, but since the addition of this schema would imply the provability of $\exists x \psi \rightarrow \exists x \neg \neg \psi$, it cannot be a theorem of the system in \cite{Bell&Slomson}.} \hspace*\fill$\blacktriangleleft$ \end{Rmk} \section{Conclusion} This article draws attention to a relatively subtle point: completeness of proof systems – here illustrated by axiomatic or `Hilbert' systems – is very sensitive to the choice of \emph{all} logical primitives, not only the propositional connectives. This did not appear to be sufficiently well known, as witnessed by the error in \cite{Bell&Slomson}. As we mentioned before, H. Hi{\. z} \cite{Hiz} had already warned that a `translation of a complete set of axioms to another set of primitives would be complete only if from the resulting axioms the definitions of the first set of primitives followed'. In this paper we have shown exactly why things go wrong when the advice is not followed at the level of the quantifiers: we can lose replacement of equivalents in our logic.\footnote{The need for checking against the loss (under change of primitives) of this replacement property, known in modal logic as congruentiality, was stressed already in \cite{Makinson}.} Observe that, at the propositional level, both Hi{\. z}'s counterexample in \cite{Hiz} (that is, $\neg(p\wedge \neg p)$) and Shapiro's in \cite{Shapiro} (namely $(p\wedge p) \rightarrow p$) can already be interpreted as displaying a failure of replacement of equivalents. \bibliographystyle{acm}
https://arxiv.org/abs/2203.08721
https://arxiv.org/abs/1703.04180
MEDL and MEDLA: Methods for Assessment of Scaling by Medians of Log-Squared Nondecimated Wavelet Coefficients
High-frequency measurements and images acquired from various sources in the real world often possess a degree of self-similarity and inherent regular scaling. When data look like a noise, the scaling exponent may be the only informative feature that summarizes such data. Methods for the assessment of self-similarity by estimating Hurst exponent often involve analysis of rate of decay in a spectrum defined in various multiresolution domains. When this spectrum is calculated using discrete non-decimated wavelet transforms, due to increased autocorrelation in wavelet coefficients, the estimators of $H$ show increased bias compared to the estimators that use traditional orthogonal transforms. At the same time, non-decimated transforms have a number of advantages when employed for calculation of wavelet spectra and estimation of Hurst exponents: the variance of the estimator is smaller, input signals and images could be of arbitrary size, and due to the shift-invariance, the local scaling can be assessed as well. We propose two methods based on robust estimation and resampling that alleviate the effect of increased autocorrelation while maintaining all advantages of non-decimated wavelet transforms. The proposed methods extend the approaches in existing literature where the logarithmic transformation and pairing of wavelet coefficients are used for lowering the bias. In a simulation study we use fractional Brownian motions with a range of theoretical Hurst exponents. For such signals for which "true" $H$ is known, we demonstrate bias reduction and overall reduction of the mean-squared error by the two proposed estimators. For fractional Brownian motions, both proposed methods yield estimators of $H$ that are asymptotically normal and unbiased.
\section{Introduction} At first glance, data that scale look like noisy observations, and often the large-scale features (basic descriptive statistics, trends, smoothed functional estimates, etc.) carry no useful information. For example, the pupil diameter in humans fluctuates at a high frequency (hundreds of Hz), and prolonged monitoring leads to massive data sets. Researchers found that the high-frequency dynamics of change in the diameter are informative of eye pathologies, e.g., macular degeneration \cite{moloney}. Yet, the trends and traditional summaries of the data are clinically irrelevant, for the magnitude of the diameter depends on the ambient light, and not on the inherent eye pathology. Our interest focuses on the analysis of self-similar objects. Formally, a deterministic function $f(\mathbf{t})$ of a $d$-dimensional argument $\mathbf{t}$ is said to be self-similar if $f(\mathbf{t})=\lambda^{-H}f(\lambda \mathbf{t}),$ for some choice of the exponent $H$, and for all dilation factors $\lambda$. The notion of self-similarity has been extended to random processes. Specifically, a stochastic process $\{X(\mathbf{t}),\ \mathbf{t}\in R^d\}$ is self-similar with scaling exponent (or \emph{Hurst exponent}) $H$ if, for any $\lambda \in R^+$, \begin{eqnarray}\label{basicdef} X(\lambda \mathbf{t})\stackrel{d}{=}\lambda^H X(\mathbf{t}), \end{eqnarray} where the relation ``$\stackrel{d}{=}$'' is understood as the equality of all finite-dimensional distributions. In this paper, we are concerned with the precise estimation of the scaling exponent $H$ in the one-dimensional setting. The results can be readily extended to self-similar objects of an arbitrary number of dimensions. \\ A number of estimation methods for $H$ exist, including: re-scaled range calculation ($R/S$), Fourier-spectra methods, variance plots, quadratic variations, zero-level crossings, etc. For a comprehensive description, see \cite{beran1994}, \cite{doukhan2003theory}, and \cite{abry2013}.
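As a concrete instance of (\ref{basicdef}), for Brownian motion ($H=1/2$) self-similarity implies $\mathrm{Var}\,X(\lambda t)=\lambda^{2H}\,\mathrm{Var}\,X(t)$, which is easy to verify numerically. The sketch below (path count, length, and seed are arbitrary choices of ours) uses cumulative sums of i.i.d.\ Gaussian steps as discrete Brownian paths:

```python
import numpy as np

rng = np.random.default_rng(0)
# 20000 discrete Brownian-motion paths: cumulative sums of i.i.d. N(0,1) steps.
# Index k then has variance k + 1, playing the role of "time".
paths = rng.standard_normal((20000, 256)).cumsum(axis=1)

# Compare Var X(t) at t = 64 and t = 256, i.e. lambda = 4:
# the ratio should be lambda^{2H} = 4 for H = 1/2.
ratio = paths[:, 255].var() / paths[:, 63].var()
print(round(ratio, 2))  # close to 4
```

The same empirical check with $\lambda^{2H} \ne \lambda$ is what distinguishes a genuinely scaling process from ordinary diffusion.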
Wavelet transforms are especially suitable for modeling self-similar phenomena, as is reflected by vibrant research. An overview is provided in \cite{abry2000b}. If processes possess a stochastic structure (e.g. Gaussianity, stationary increments), the scaling exponent $H$ becomes a parameter in a well-defined statistical model and can be estimated as such. Fractional Brownian motion (fBm) is an important and well-understood model for data that scale. Its importance follows from the fact that fBm is the unique Gaussian process with stationary increments that is self-similar, in the sense of (\ref{basicdef}). An fBm has a (pseudo-)spectrum of the form $S(\omega)\propto |\omega|^{-(2H+1)}$, and consequently the log-magnitudes of detail coefficients at different resolutions in a wavelet decomposition exhibit a linear relationship. Using non-decimated wavelet domains to leverage this linearity constitutes the staple of this paper. Each decomposition level in a nondecimated wavelet transform (NDWT) contains the same number of coefficients as the size of the original signal. This redundancy contributes to the accuracy of estimators of $H$. However, reducing the bias induced by level-wise correlation among the redundant coefficients becomes an important issue. The two estimators we propose are based on the so-called ``logarithm-first'' approach, connecting the Hurst exponent with a robust location estimator and with resampling techniques. The rest of the paper consists of three additional sections and an appendix. Section 2 provides background on wavelet transforms as well as the properties of the resulting wavelet coefficients. Section 3 presents distributional results on which the proposed methods are based. Section 4 provides the simulation results and compares the estimation performance of the proposed methods to some standard methods. The final section is reserved for concluding remarks. The Appendix contains all technical details for the results presented in Section 3.
\section{Orthogonal and non-decimated wavelet transforms} Discrete signals from an acquisition domain can be mapped to the wavelet domain in multiple ways. We overview two versions of the discrete wavelet transform: the orthogonal wavelet transform (DWT) and the non-decimated wavelet transform (NDWT). We also describe the algorithmic procedures for performing the two versions of the transform and obtaining the wavelet coefficients. Here we focus on functional representations of the wavelet transforms, which are more critical for the subsequent derivations. Interested readers can refer to \cite{nason1995}, \cite{vidakovic1999}, and \cite{percival2006wavelet} for alternative definitions.\\ Any function $f(x) \in L_2(\mathbb{R})$ can be represented in the wavelet domain as \ba f(x)=\sum_{k}c_{J_0, k}\phi_{J_0, k}(x) + \sum_{j = J_0}^{\infty} \sum_{ k} d_{j, k}\psi_{j,k}(x), \end{eqnarray*} where $c_{J_0, k}$ indicates coarse coefficients, $d_{j, k}$ detail coefficients, $\phi_{J_0,k}(x)$ scaling functions, and $\psi_{jk}(x)$ wavelet functions. We use different decomposing atom functions, as scaling and wavelet functions, depending on the version of the wavelet transform. For the DWT, the atoms are \ba \phi_{J_0,k}(x)&=&2^{J_0/2}\phi(2^{J_0}x - k)\\ \psi_{jk}(x)&=&2^{j/2}\psi(2^jx - k), \end{eqnarray*} where $x \in \mathbb{R}$, $j$ is a resolution level, $J_0$ is the coarsest resolution level, and $k$ is the location of an atom. For the NDWT, the atoms are \ba \phi_{J_0,k}(x)&=&2^{J_0/2}\phi(2^{J_0}(x - k))\\ \psi_{jk}(x)&=&2^{j/2}\psi(2^j(x - k)). \end{eqnarray*} Notice that atoms in the NDWT have a constant location shift $k$ at all levels, which yields the maximal sampling rate at each level. Two types of coefficients, $c_{J_0, k}$ and $d_{j, k}$, capture coarse and detail fluctuations of an input signal, respectively. These are obtained as \ba c_{J_0,k} &=& \langle f(x), \phi_{J_0,k} \rangle\\ d_{jk} &=& \langle f(x), \psi_{jk} \rangle.
\end{eqnarray*} In a $p$-depth decomposition of an input signal of size $m$, an NDWT yields $m\times (p+1)$ wavelet coefficients, while a DWT yields $m$ wavelet coefficients, independent of $p$. The redundant NDWT decreases the variance of the scaling estimators but, at the same time, increases the correlations among wavelet coefficients. Since the estimators of scaling are based on the second-order properties of wavelet coefficients, the NDWT-based estimators can be biased. \begin{figure}[H] \centering \includegraphics[width= 4.5in]{autocor-eps-converted-to.pdf} \caption{The autocorrelation present in wavelet coefficients from the DWT and the NDWT.} \label{fig:autocor} \end{figure} Figure \ref{fig:autocor} illustrates the autocorrelation within wavelet coefficients at the level $J-4$ (the level of finest detail is $J-1$, so $J-4$ is the 4th ``most detailed'' level) in the DWT and NDWT. A Haar wavelet was used on a Brownian motion path of size $2^{11}$. As we noted before, the coefficients from the NDWT are highly correlated, while such correlation is not strong among the DWT coefficients. The two methods introduced in the following section reduce the effect of correlation among the coefficients, while maintaining redundancy and invariance as desirable traits of the NDWT. \section{MEDL and MEDLA Methods} We start with an overview of the properties of wavelet coefficients and a brief review of the methods in the literature on which we base the proposed methods. For defining a wavelet spectrum, and subsequently for estimating $H$, only detail wavelet coefficients are used.
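The coefficient counts just stated ($m$ for the DWT versus $m(p+1)$ for the NDWT) can be verified with a toy Haar transform. The sketch below is our own minimal implementation, with the \`a trous (periodic) variant standing in for the NDWT:

```python
import numpy as np

def haar_levels(x, depth, decimated):
    """Return [d_1, ..., d_depth, a_depth] for a Haar transform of x."""
    a, out = np.asarray(x, dtype=float), []
    for ell in range(depth):
        if decimated:                      # orthogonal DWT: halve the length each level
            d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
            a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        else:                              # NDWT (a trous): upsample the filter instead,
            r = np.roll(a, -2 ** ell)      # keeping every level at full length
            d = (a - r) / np.sqrt(2.0)
            a = (a + r) / np.sqrt(2.0)
        out.append(d)
    out.append(a)
    return out

m, p = 256, 4
n_dwt = sum(len(c) for c in haar_levels(np.zeros(m), p, decimated=True))    # m
n_ndwt = sum(len(c) for c in haar_levels(np.zeros(m), p, decimated=False))  # m * (p + 1)
print(n_dwt, n_ndwt)  # -> 256 1280
```

The DWT levels shrink as $m/2, m/4, \dots$, summing back to $m$, while each NDWT level retains all $m$ locations, which is the redundancy discussed above.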
When an fBm with Hurst exponent $H$ is mapped to the wavelet domain by the DWT, the resulting detail wavelet coefficients satisfy the following properties \citep{tewfik1992correlation, abry1995wavelets, flandrin1992wavelet}: \begin{enumerate} \item[(i)]$d_{j}$, a detail wavelet coefficient at level $j$, follows a Gaussian distribution with mean 0 and variance $\sigma_0^2 2^{-j(2H+1)}$, where $\sigma_0^2$ is the variance of a detail coefficient at level 0, \item[(ii)] a sequence of wavelet coefficients from level $j$ is stationary, and \item[(iii)] the covariance between two coefficients from any level of detail decreases exponentially as the distance between them increases; the rate of the decrease depends additionally on the number of vanishing moments of the decomposing wavelet. \end{enumerate} From property (i), the relationship between detail wavelet coefficients and the Hurst exponent $H$ is \begin{align*} \log_2 \mathbb{E}\{d^2_j\}=-j(2H+1) + 2 \log_2 \sigma_0. \end{align*} \cite{abry2000wavelets} calculate the sample variance of wavelet coefficients to estimate $\mathbb{E}\{d^2_j\}$, assuming i.i.d. Gaussianity of the coefficients at level $j$. Frequently, a squared wavelet coefficient is referred to as an ``energy.'' Empirically, we look at the levelwise average of squared coefficients (energies), \begin{align*} \overline{d^2_j}=\frac{1}{n_j}\sum_{k=1}^{n_j}d^2_{j,k}, \end{align*} where $n_j$ is the number of wavelet coefficients at level $j$. The relationship between the average energy $\overline{d^2_j}$ and $H$ is \begin{align*} \log_2 \overline{d^2_j} \overset{d}{\approx} -(2H + 1 )j - \log_2 C - \log \chi^2_{n_j}/\log 2, \end{align*} where $\overset{d}{\approx}$ indicates approximate equality in distribution, $\chi^2_{n_j}$ follows a chi-square distribution with ${n_j}$ degrees of freedom, and $C$ is a constant.
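The linear law above can be seen directly in a small simulation. The sketch below uses ordinary Brownian motion (an fBm with known $H=1/2$) and our own boundary-trimmed Haar NDWT; note that our level index $\ell$ grows toward \emph{coarser} scales, so the slope of $\log_2$ mean energy in $\ell$ is $+(2H+1)$ rather than $-(2H+1)$. Signal size, seed, and level range are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(2 ** 13).cumsum()        # Brownian motion, true H = 1/2

# Non-decimated Haar details; no wraparound, boundary-affected coefficients dropped
a, details = x.copy(), []
for ell in range(1, 7):
    s = 2 ** (ell - 1)
    details.append((a[:-s] - a[s:]) / np.sqrt(2.0))
    a = (a[:-s] + a[s:]) / np.sqrt(2.0)

levels = np.arange(2, 7)          # skip the finest level (discretization bias there)
log2_energy = [np.log2(np.mean(details[ell - 1] ** 2)) for ell in levels]
slope = np.polyfit(levels, log2_energy, 1)[0]    # approx 2H + 1 in this orientation
H_hat = (slope - 1) / 2
print(round(H_hat, 2))                           # close to 0.5
```

This is the ``logarithm-of-average'' estimator; the bias and non-normality issues discussed next are what MEDL and MEDLA are designed to mitigate.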
The method of \cite{abry2000wavelets} is affected by the non-normality of $\log_2 \overline{d^2_j}$ and the correlation among detail wavelet coefficients, which results in biases of the weighted least squares estimates. To reduce the bias, \cite{soltani2004estimation} defined ``mid-energies,'' as \begin{align*} D_{j,k}=\frac{d_{j,k}^2+d_{j,k+n_j/2}^2}{2}, \quad k=1, \dots, n_j/2. \end{align*} According to this approach, each multiresolution level is split into two equal parts, and corresponding coefficients from each part are paired, squared, and averaged. This produces a quasi-decorrelation effect. \cite{soltani2004estimation} show that level-wise averages of $\log_2 D_{j,k}$ are asymptotically normal with the mean $-(2 H + 1)j + C,$ which is used to estimate $H$ by regression. The estimators in \cite{soltani2004estimation} consistently outperform the estimators that use log-average energies, under various settings. \cite{shen2007robust} show that the method of \cite{soltani2004estimation} yields more accurate estimators since it takes the logarithm of a mid-energy first, and then averages. Moreover, averaging logged squared wavelet coefficients, rather than taking the logarithm of averaged squared wavelet coefficients, is theoretically justified, and this approach will be pursued in this paper. For both proposed methods, MEDL and MEDLA, we first take the logarithm of a squared wavelet coefficient or of an average of two squared wavelet coefficients, then derive the distribution of such logarithms under the assumption of independence. Next, we use the median of the derived distribution instead of the mean. The medians are more robust to potential outliers that can occur when the logarithmic transform of a squared wavelet coefficient is taken and the magnitude of the coefficient is close to zero. This numerical instability may increase the bias and variance of sample means. However, since the logarithm is monotone, the variability of the sample medians will not be affected.
The first proposed method is based on the relationship between the median of the logarithm of squared wavelet coefficients and the Hurst exponent. We use the acronym ``MEDL'' to refer to this method. In MEDL, the logarithmic transform reduces the autocorrelation, while the number of coefficients remains the same. The second method derives the relationship between the median of the logarithm of an average of two squared wavelet coefficients and the Hurst exponent. We use the acronym ``MEDLA'' to refer to this method. The MEDLA method is similar in concept to the approach of \cite{soltani2004estimation}, who paired and averaged energies prior to taking the logarithm and then connected the mean of the logarithms to $H$. Instead, we repeatedly sample with replacement $m$ random pairs, keeping the distance between their members at least $q_j$. Then, as in \cite{soltani2004estimation}, we find the logarithm of each pair's average and connect the Hurst exponent with the median of the logarithms. As we relax the constraints on the distance between energies in each pair, we obtain a larger number of distinct pairs, and selecting only $N$ samples from this population further reduces the correlation. \begin{figure}[h] \centering \includegraphics[width= 4.5in]{coeff-eps-converted-to.pdf} \caption{Autocorrelation of variables used in four methods.} \label{fig:coeff} \end{figure} \noindent To illustrate the decorrelation effects of the proposed methods, in Figure \ref{fig:coeff}, we compare the autocorrelation present in the variables that are averaged: means of $d_{jk}^2$ for the traditional method, means of $\log_2 \bigg[(d_{jk}^2+d_{j,k+m/2}^2)/2 \bigg]$ for the Soltani-like method, medians of $\log d_{jk}^2$ for MEDL, and medians of sampled $\log \bigg[(d_{jk_1}^2+d_{jk_2}^2)/2 \bigg]$ for the MEDLA method. The two standard methods exhibit a higher amount of autocorrelation, which decreases at a slower rate. MEDLA shows a substantial reduction in correlation.
For a formal distributional assessment of the two proposed methods, we start with an arbitrary wavelet coefficient from decomposition level $j$ at location $k$, $d_{jk}$, resulting from a non-decimated wavelet transform of a one-dimensional fBm $B_H(\omega, t), t \in \mathbb{R}$, \ba d_j=\int_{\mathbb{R}} B_H(\omega, t)\psi_{jk}(t)\,dt, \quad \text{for some fixed } k. \end{eqnarray*} As \cite{flandrin1992wavelet} showed, the distribution of a single wavelet coefficient is \begin{align} d_j \stackrel{d}{=} 2^{-(H+1/2) j} \sigma Z, \label{eq:wcoef} \end{align} where $Z$ follows a standard normal distribution, and $\sigma^2$ is the variance of wavelet coefficients at level 0. We will use (\ref{eq:wcoef}) repeatedly for the derivations that follow. \subsection{MEDL Method} \label{sec:MEDL} For the median of the logarithm of squared wavelet coefficients (MEDL) method, we derive the relationship between the median of the logarithm of an arbitrary squared wavelet coefficient from decomposition level $j$ and the Hurst exponent $H$. The following theorem serves as a basis for the MEDL estimator: \begin{theorem} \label{thm:MEDL} Let $y_j^*$ be the median of $\log d_j^2$, where $d_j$ is an arbitrary wavelet coefficient from level $j$ in an NDWT of an fBm with Hurst exponent $H$. Then, the population median is \begin{eqnarray} \label{th:MEDLy} y^*_j = - \log 2\, (2 H + 1) j + C, \end{eqnarray} where $C$ is a constant independent of $j$. The Hurst exponent can be estimated as \begin{eqnarray} \label{th:MEDLH} \widehat{H}= - \frac{\widehat{\beta}}{2\log 2} -\frac{1}{2}, \end{eqnarray} where $\widehat{\beta}$ is the slope in the ordinary least squares (OLS) linear regression on pairs $(j,{\hat y}_j^*),$ and ${\hat y^*_j}$ is the sample median. \end{theorem} \noindent The proof of Theorem \ref{thm:MEDL} is deferred to Appendix A. We estimate $y_j^*$ by taking the sample median of the logged energies at each level.
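A minimal numerical sketch of the MEDL recipe follows, again with ordinary Brownian motion ($H=1/2$) and our own boundary-trimmed Haar NDWT. To match the convention of Theorem \ref{thm:MEDL}, where larger $j$ means finer scale, we negate our coarse-growing level index before regressing; all sizes and the seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(2 ** 13).cumsum()        # Brownian motion, true H = 1/2

a, details = x.copy(), []                        # Haar NDWT details, boundaries dropped
for ell in range(1, 7):
    s = 2 ** (ell - 1)
    details.append((a[:-s] - a[s:]) / np.sqrt(2.0))
    a = (a[:-s] + a[s:]) / np.sqrt(2.0)

# MEDL: level-wise medians of log-squared coefficients, then OLS on the level index.
# j is negated so that larger j = finer scale, as in the theorem's convention.
j = -np.arange(2, 7)
y = np.array([np.median(np.log(details[ell - 1] ** 2)) for ell in range(2, 7)])
beta = np.polyfit(j, y, 1)[0]
H_hat = -beta / (2 * np.log(2)) - 0.5            # H = -beta / (2 log 2) - 1/2
print(round(H_hat, 2))                           # close to 0.5
```

Replacing the medians with means of the logged energies recovers the ``logarithm-first'' average of \cite{shen2007robust}; the median simply buys robustness to the near-zero coefficients discussed above.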
The use of OLS linear regression is justified by the fact that the variances of the sample medians ${\hat y^*_j}$ are constant in $j$, that is, \begin{lemma} \label{th:MEDLV} The variance of the sample median ${\hat y^*_j}$ at level $j$ is approximately \ba \frac{\pi e^Q}{2 N Q}, \end{eqnarray*} where $N$ is the sample size and $Q = \left( \Phi^{-1}(3/4) \right)^2$. \end{lemma} \noindent The lemma states that the logarithm acts as a variance-stabilizing operator; the variance of the sample median is independent of the level $j$, and ordinary regression to find the slope $\beta$ in Theorem \ref{thm:MEDL} is fully justified. Note that the use of OLS regression is not adequate for the DWT; there, weighted regression is needed to account for levelwise heteroscedasticity. The levelwise variance is approximately $ 5.4418/N,$ independent of $H$ and $\sigma^2.$ The proof of Lemma \ref{th:MEDLV} is deferred to Appendix A. In addition, for $\widehat{H}$ the normal approximation applies: \begin{theorem} The MEDL estimator $\widehat{H}$ follows the asymptotic normal distribution \begin{align*} \widehat{H} \overset{approx}{\sim} {\cal N} \left(H, \frac{3 A}{N m( m^2 - 1)(\log 2)^2}\right), \end{align*} \label{th:MEDLHD} where $A = \pi e^Q/(2 Q) \cong 5.4418$, $N$ is the sample size, and $m$ is the number of levels used in the spectrum. \end{theorem} \noindent The proof of Theorem \ref{th:MEDLHD} is deferred to Appendix A. To illustrate Theorem \ref{th:MEDLHD}, we perform an NDWT of depth 10 on simulated fBm's with $H =0.3, 0.5,$ and $0.7$. We use the resulting wavelet coefficients from levels $J-7$ to $J-2$ inclusive (i.e., six levels) to estimate $H$ with MEDL. Following Theorem \ref{th:MEDLHD}, $\widehat{H}$ of MEDL in the simulation follows a normal distribution with mean $H$ and variance $7.9007\times 10^{-5}$, which is illustrated in Figure \ref{fig:MEDLH}.
\begin{figure*} \begin{center} \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H3_medl1-eps-converted-to.pdf} } \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H3_medl2-eps-converted-to.pdf} } \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H5_medl1-eps-converted-to.pdf} } \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H5_medl2-eps-converted-to.pdf} } \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H7_medl1-eps-converted-to.pdf} } \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H7_medl2-eps-converted-to.pdf} } \end{center} \caption{Panels on the right are histograms of $\widehat{H}$ and panels on the left are q-q plots of $\widehat{H}$ versus the quantiles of the asymptotic distribution when $H =0.3, 0.5,$ and 0.7, respectively.} \label{fig:MEDLH} \end{figure*} \subsection{MEDLA Method} For the median of the logarithm of averaged squared wavelet coefficients (MEDLA) method, we derive the relationship between the logarithm of an average of two energies and the Hurst exponent $H$. \cite{soltani2004estimation} proposed a method that quasi-decorrelates wavelet coefficients by splitting all wavelet coefficients from one level into left and right sections and pairing every coefficient in the left section with its counterpart in the right section, maintaining an equal distance to its pair (i.e., members in each pair are $m/2$ apart when $m$ is the number of wavelet coefficients on that level). Then, \cite{soltani2004estimation} averaged every pair of energies and took the logarithm of each average. We follow a similar idea except that, instead of fixing the combinations of pairs, which amounts to $m/2$ pairs in \cite{soltani2004estimation}, we randomly sample with replacement $m$ pairs whose members are at least $q_j$ apart. Based on sample autocorrelation graphs, we define $q_j=2^{J-j}$, which decreases with level $j$ because the finer the subspace (i.e., larger $j$), the lower the correlation among wavelet coefficients.
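The pair-sampling step just described can be sketched as follows. We use an i.i.d.\ standard-normal stand-in for one level of detail coefficients, since only the pairing mechanics are at issue here; the values of $q_j$ and the sizes are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
d = rng.standard_normal(4096)     # stand-in for the NDWT detail coefficients at level j
m, q_j = d.size, 64               # q_j: minimum separation for this level (assumption)

k1 = rng.integers(0, m, size=m)               # m pairs, sampled with replacement
offset = rng.integers(q_j, m - q_j, size=m)   # circular distance is then >= q_j
k2 = (k1 + offset) % m
mid = (d[k1] ** 2 + d[k2] ** 2) / 2           # pair-averaged energies
y_star = np.median(np.log(mid))               # level statistic fed to the OLS regression
```

As a sanity check on the distribution theory: for independent $N(0,1)$ coefficients the pair averages are standard exponential, so $y^*$ concentrates near $\log\log 2 \approx -0.37$. With real NDWT coefficients, the separation $q_j$ is what keeps the members of each pair nearly uncorrelated.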
Then, we propose an estimator of $H$ based on the following result. \begin{theorem} \label{thm:MEDLA} Let $d_{jk_1}$ and $d_{jk_2}$ be two wavelet coefficients from level $j$, at positions $k_1$ and $k_2$, respectively, from an NDWT of an fBm with Hurst exponent $H$. Assume that $|k_1 - k_2| > q_j,$ where $q_j$ is the minimum separation distance that depends on level $j$ and the selected wavelet basis. Let $y_j^*$ be the median of $\log \bigg[\frac{d_{jk_1}^2+d_{jk_2}^2}{2}\bigg]$. Then, as in Theorem \ref{thm:MEDL}, results (\ref{th:MEDLy}) and (\ref{th:MEDLH}) hold. \end{theorem} \noindent The proof of Theorem \ref{thm:MEDLA} is deferred to Appendix B. To estimate $y_j^*$, we first repeatedly sample $m$ pairs of wavelet coefficients with replacement from all pairs that are at least $q_j$ apart. Then, we take the logarithm of each pair's average energy and take the median. As in Lemma \ref{th:MEDLV}, the variances of the sample medians ${\hat y^*_j}$ are free of $j$. \begin{lemma} \label{th:MEDLAV} The variance of the sample median $\hat{y^*_j}$ at level $j$ is approximated by \begin{align*} \frac{1}{N (\log 2)^2}, \end{align*} where $N$ is the sample size. \end{lemma} \noindent The proof is straightforward and given in Appendix B. Thus, the variance of ${\hat y^*_j}$ is constant over levels. We find that the MEDLA estimator of $H$ indeed follows an approximately normal distribution with a mean and a variance given in the following theorem. \begin{theorem} The estimator $\widehat{H}$ of MEDLA follows the asymptotic normal distribution \begin{align*} \widehat{H} \overset{approx}{\sim} {\cal N}\left(H, \frac{3}{ N m( m^2 - 1)(\log 2)^4} \right), \end{align*} \label{th:MEDLAHD} where $N$ is the sample size, and $m$ is the number of levels used in the spectrum. \end{theorem} The proof of Theorem \ref{th:MEDLAHD} is deferred to Appendix B. To illustrate Theorem \ref{th:MEDLAHD}, we use the same wavelet coefficients from the simulation in Section \ref{sec:MEDL}.
Following Theorem \ref{th:MEDLAHD}, $\widehat{H}$ of MEDLA in the simulation follows an approximately normal distribution with mean $H$ and variance $7.9007\times 10^{-5}$, which is shown in Figure \ref{fig:MEDLAH}. \begin{figure*} \begin{center} \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H3_medla1-eps-converted-to.pdf} } \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H3_medla2-eps-converted-to.pdf} } \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H5_medla1-eps-converted-to.pdf} } \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H5_medla2-eps-converted-to.pdf} } \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H7_medla1-eps-converted-to.pdf} } \subfigure{ \includegraphics[width=0.45 \columnwidth]{dist_H7_medla2-eps-converted-to.pdf} } \end{center} \caption{Panels on the right are histograms of $\widehat{H}$ and panels on the left are q-q plots of $\widehat{H}$ versus the quantiles of the asymptotic distribution when $H =0.3, 0.5,$ and 0.7, respectively.} \label{fig:MEDLAH} \end{figure*} \section{Simulations} Next, we assess the performance of MEDL and MEDLA in the estimation of the Hurst exponent. We simulate three sets of three hundred one-dimensional fractional Brownian motion (1-D fBm) paths of size $2^{11}$ with Hurst exponents 0.3, 0.5, and 0.7, respectively. Then, we perform an NDWT of depth 10 with a Haar wavelet on each simulated signal and obtain wavelet coefficients to which we apply MEDL and MEDLA. For all methods and estimations, we use wavelet coefficients from levels $J-7$ to $J-2$ in the regression. We compare the estimation performance of the proposed methods to two standard methods: the method of \cite{veitch1999wavelet} and the method of \cite{soltani2004estimation}, both in the context of NDWT. We present the estimation performance in terms of mean, variance, bias-squared, and mean squared error, based on 300 simulations for each case.
Table \ref{tb:sim_stat} and Figure \ref{fig:MEDLA_simul} indicate that as $H$ increases, the proposed methods outperform the standard methods. For smaller $H$, the estimation performance of all methods is comparable. \begin{table}[h] \begin{center} \begin{tabular}{| c| c| c |c | c |} \multicolumn{1}{c}{$H$=0.3} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } \\ \hline Method & Traditional & Soltani & MEDL & MEDLA\\ \hline Mean & 0.2864 & 0.2849 & 0.2778 & 0.2783\\ \hline Variance & 0.0017 & 0.0015& 0.0021 & 0.0016 \\ \hline Bias-squared & 0.0002 & 0.0003 & 0.0005& 0.0005 \\ \hline MSE & 0.0019 & 0.0018 & 0.0026& 0.0021 \\ \hline \multicolumn{1}{c}{ } & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } \\ \multicolumn{1}{c}{$H$=0.5} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } \\ \hline Method & Traditional & Soltani & MEDL & MEDLA\\ \hline Mean & 0.475 & 0.5091 & 0.4966 & 0.4982 \\ \hline Variance & 0.0012 & 0.0022 & 0.0023 & 0.0017\\ \hline Bias-squared & 0.0006 & 6.7E-5 & 4.1E-6 & 1.3E-6\\ \hline MSE & 0.0018 & 0.0023 & 0.0023 & 0.0017 \\ \hline \multicolumn{1}{c}{ } & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } \\ \multicolumn{1}{c}{$H$=0.7} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } \\ \hline Method & Traditional & Soltani & MEDL & MEDLA\\ \hline Mean & 0.5524 & 0.7286 & 0.7065 & 0.7084 \\ \hline Variance & 0.0039 & 0.0028 & 0.0033 & 0.0024 \\ \hline Bias-squared & 0.0217 & 0.0008 & 3.3E-5& 6.2E-5 \\ \hline MSE & 0.0256 & 0.0036 & 0.0033 & 0.0024 \\ \hline \end{tabular} \caption{Estimation of $H$ with 300 simulated 1-D fBm signals of size $2^{11}$ when $H$=0.3, 0.5, and 0.7 by four methods} \label{tb:sim_stat} \end{center} \end{table} \begin{figure*} \begin{center} \subfigure[$H$=0.3]{ \includegraphics[width=0.52 
\columnwidth]{simul_H3-eps-converted-to.pdf} } \subfigure[$H$=0.5]{ \includegraphics[width=0.52 \columnwidth]{simul_H5-eps-converted-to.pdf} } \subfigure[$H$=0.7]{ \includegraphics[width=0.52 \columnwidth]{simul_H7-eps-converted-to.pdf} } \end{center} \caption{Boxplots of $\widehat{H}$ by four methods with 300 simulated 1-D fBm signals of size $2^{11}$ when $H$=0.3, 0.5, and 0.7.} \label{fig:MEDLA_simul} \end{figure*} \section{Conclusions} We proposed two methods for robust estimation of the Hurst exponent in one- and two-dimensional signals that scale. Unlike the standard methods, the proposed methods are based on the NDWT. The motivation for using the NDWT was its redundancy and shift-invariance. However, the redundancy, which was useful for the stability of estimation, increases autocorrelations among the wavelet coefficients. The proposed methods lower this autocorrelation by (i) taking the logarithm of the squared wavelet coefficients prior to averaging, (ii) relating the Hurst exponent to the median of the model distribution, rather than the mean, and (iii) resampling the coefficients. The methods are compared to standard approaches and give estimators with smaller MSE for a range of input conditions. Instead of the medians in (ii) we could employ any other quantile; the methodology is equivalent, differing only in the intercept and the variance in the regressions. \section*{References}
https://arxiv.org/abs/math/0512077
The neighborhood complex of a random graph
For a graph G, the neighborhood complex N[G] is the simplicial complex having all subsets of vertices with a common neighbor as its faces. It is a well-known result of Lovasz that if N[G] is k-connected, then the chromatic number of G is at least k + 3. We prove that the connectivity of the neighborhood complex of a random graph is tightly concentrated, almost always between 1/2 and 2/3 of the expected clique number. We also show that the number of dimensions of nontrivial homology is almost always small, O(log d), compared to the expected dimension d of the complex itself.
\section{Introduction} In 1978, L\'{a}szl\'{o} Lov\'{a}sz proved Kneser's conjecture \cite{Lovasz}: if the $n$-subsets of a $(2n+k)$-set are partitioned into $k+1$ families, then at least one family contains a disjoint pair. He restated the problem graph theoretically and then proved a more general theorem about graph coloring. All our graphs will be simple undirected graphs, with no loops or multiple edges. For a graph $G$, a {\it $k$-coloring} is a function $f:V(G) \rightarrow \{ 1, 2, \ldots, k \}$ such that $f(x) \neq f(y)$ whenever $\{ x, y\} \in E(G)$. The {\it chromatic number} $\chi(G)$ is the minimum $k$ such that $G$ admits a $k$-coloring. Define the {\it Kneser graph} $KG(n,k)$ to have all $n$-subsets of a $(2n+k)$-set as its vertices, with edges between disjoint pairs. The Kneser conjecture is equivalent to the claim that $\chi(KG(n,k)) \ge k+2$. There are several simplicial complexes naturally associated with a graph $G$. The {\it clique complex} $X(G)$ is the simplicial complex on vertex set $V(G)$ whose simplices are the vertex sets of complete subgraphs of $G$. In another article \cite{Kahle}, we study the clique complex of a random graph. The {\it neighborhood complex} $\mathcal{N}[G]$ is the simplicial complex on $V(G)$ which has all subsets of $V(G)$ with a common neighbor for its faces. For example, the neighborhood complex of the complete graph $K_n$ has all proper subsets of the vertices for its faces, so it is the boundary of an $(n-1)$-dimensional simplex. Its geometric realization $\| \mathcal{N}[K_n] \|$ is homeomorphic to an $(n-2)$-dimensional sphere $\mathbb{S}^{n-2}$. (We denote the geometric realization of a simplicial complex $\Delta$ by $\| \Delta \|$.) A topological space $X$ is said to be {\it $k$-connected} if every map from a sphere $\mathbb{S}^{n} \rightarrow X$ extends to a map from the ball $\mathbb{B}^{n+1} \rightarrow X$ for $n=0, 1, \ldots, k$. On the way to proving the Kneser conjecture, Lov\'{a}sz proved the following \cite{Lovasz}.
\begin{theorem}[Lov\'{a}sz] \label{Theorem A} If $\| \mathcal{N}[G] \|$ is $k$-connected, then $\chi(G) \ge k+3$. \end{theorem} In the case of the Kneser graphs this lower bound is tight, matching an easy upper bound to give an exact answer. It seems natural to ask how good the bound is for ``typical'' graphs. For our purposes, a typical graph is the random graph $G(n,p)$ \cite{Bollo}. The random graph $G(n,p)$ is the probability space of all graphs on a vertex set of size $n$ with each edge inserted independently with probability $p$. Frequently, one considers $p$ to be a function of $n$ and asks whether the graph is likely to have some property as $n \rightarrow \infty$. We say that $G(n,p)$ {\it almost always (a.a.)} has property $\mathcal{P}$ if $\mbox{Prob}[G(n,p) \in \mathcal{P}] \rightarrow 1$ as $n \rightarrow \infty$. The main goal of this article is to understand some of the most basic topological features of $\| \mathcal{N}[G(n,p)] \|$. \section{Statement of results} Let $p=p(n)$ be a monotone function of $n$, and let $i,j,k,$ and $l$ be integer-valued monotone functions of $n$. In the asymptotic notation that follows, $n \rightarrow \infty$ is the free variable. Homology is understood to be reduced with coefficients in $\mathbb{Z}$ throughout. \begin{theorem}\label{Theorem 1} If ${n \choose i} (1-p^i)^{n-i} = o(1)$ then $\mathcal{N}[G(n,p)]$ is a.a. $(i-2)$-connected. \end{theorem} \begin{theorem}\label{Theorem 2} If ${n \choose j}{n \choose k} p^{jk} = o(1)$ then $\mathcal{N}[G(n,p)]$ a.a. strong deformation retracts to a simplicial complex of dimension at most $j+k-3$. \end{theorem} In proving Theorem \ref{Theorem 2}, we make use of the following lemma, which might be of independent interest. \begin{lemma}\label{lemma 1} If $H$ is any graph not containing a complete bipartite subgraph $K_{a,b}$ then $\| \mathcal{N}[H] \|$ strong deformation retracts to a complex of dimension at most $a+b-3$.
\end{lemma} A $d$-connected complex has trivial homology through dimension $d$ by the Hurewicz theorem. Also, $\widetilde{H}_k(\Delta)=0$ whenever $k$ is greater than the dimension of $\Delta$, and strong deformation retractions are homotopy equivalences and preserve homology. Hence Theorems \ref{Theorem 1} and \ref{Theorem 2} bound the possible dimensions of nontrivial homology from below and above. We give special cases of the theorems as corollaries. First fix $p=1/2$, where $G(n,p)$ is the uniform distribution on all graphs on vertex set $[n]$. \begin{corollary} \label{corollary 1} If $p=1/2$ and $\epsilon > 0$ then a.a. $\widetilde{H}_l(\| \mathcal{N}[G(n,p)] \|)=0$ for $l \le (1-\epsilon) \log_2{n}$ and $l \ge (4+\epsilon) \log_2{n}$. \end{corollary} Note for comparison that the dimension of the neighborhood complex is one less than the maximum vertex degree, so when $p=1/2$ we expect it to be slightly more than $n/2$. Next, fix $l$ and check that $\widetilde{H}_l$ is trivial outside a certain range of $p$. \begin{corollary} \label{corollary 2} Let $p=n^\alpha$ with $\alpha \in [-2,0]$. If $\alpha > \frac{-1}{l+2}$ then a.a. $\widetilde{H}_l(\| \mathcal{N}[G(n,p)] \|)=0$. For $l$ even, if $\alpha < \frac{-4}{l+2}$ then a.a. $\widetilde{H}_l(\| \mathcal{N}[G(n,p)] \|)=0$. For $l$ odd, if $\alpha < \frac{-4(l+2)}{(l+1)(l+3)}$ then a.a. $\widetilde{H}_l(\| \mathcal{N}[G(n,p)] \|)=0$. \end{corollary} For a partial converse to Corollaries \ref{corollary 1} and \ref{corollary 2}, we exhibit explicit nontrivial homology classes by retracting onto random spheres. Recall that a {\it clique} of order $n$ is a complete subgraph on $n$ vertices. \begin{definition} Let the graph $X_n$ have vertex set $\{ u_1, u_2, \ldots, u_n \} \coprod \{ v_1, v_2, \ldots, v_n \}$, such that $\{ u_1, u_2, \ldots, u_n \}$ spans a clique, $\{ v_1, v_2, \ldots, v_n \}$ is an independent set, and $u_i$ is adjacent to $v_j$ exactly when $i \neq j$.
\end{definition} \begin{theorem}\label{Theorem 3} If $H$ is any graph containing a maximal clique of order $n$ that can't be extended to an $X_n$ subgraph, then $\| \mathcal{N}[H] \|$ retracts onto a sphere $\mathbb{S}^{n-2}$. \end{theorem} \begin{corollary} \label{corollary 3} If $p=1/2$, $\epsilon > 0$, and $(4/3+\epsilon) \log_2{n} <k < (2-\epsilon) \log_2{n}$, then a.a. $\widetilde{H}_k(\| \mathcal{N}[G(n,p)] \|) \neq 0$. \end{corollary} \begin{corollary} \label{corollary 4} If $p=n^\alpha$ with $\frac{-2}{k+1}< \alpha < \frac{-4}{3(k+1)}$, then a.a. $ \widetilde{H}_k(\| \mathcal{N}[G(n,p)] \|) \neq 0$. \end{corollary} \section{Proofs} We first prove that if ${n \choose i} (1-p^i)^{n-i} = o(1)$ then $\mathcal{N}[G(n,p)]$ is a.a. $(i-2)$-connected. \begin{proof}[Proof of Theorem \ref{Theorem 1}] A simplicial complex is {\it $i$-neighborly} if every $i$ vertices span a face. By simplicial approximation, if a complex is $i$-neighborly then it is $(i-2)$-connected. The probability that a given set of $i$ vertices in $G(n,p)$ has no common neighbor is $(1-p^i)^{n-i}$. Then the total probability that some set of $i$ vertices has no common neighbor is bounded above by ${n \choose i} (1-p^i)^{n-i}=o(1)$. So a.a. every such set has some common neighbor, hence spans a face in the neighborhood complex. So $\mathcal{N}[G(n,p)]$ is a.a. $i$-neighborly and $(i-2)$-connected. \end{proof} Next we prove that if $H$ is any graph not containing a complete bipartite subgraph $K_{a,b}$ then $\| \mathcal{N}[H] \|$ strong deformation retracts to a complex of dimension at most $a+b-3$. \begin{proof}[Proof of Lemma \ref{lemma 1}] For a poset $Q$, the {\it order complex} $\Delta(Q)$ is the simplicial complex of all chains in $Q$. For a simplicial complex $S$, let $P(S)$ denote its face poset. To avoid proliferation of notation, we denote the geometric realization of the order complex of a poset $Q$ by $\|Q\|$ rather than $\|\Delta(Q)\|$.
For a vertex $x$ of $H$, let $\Gamma(x)$ denote the set of neighbors of $x$. Similarly, for any face $X \in \mathcal{N}[H]$ of the neighborhood complex, let $\Gamma(X)=\cap_{x \in X} \Gamma(x)$, the set of common neighbors of $X$. (This map is used in Lov\'{a}sz's paper \cite{Lovasz}.) Note that $\Gamma$ is an order-reversing self-map of $P(\mathcal{N}[H])$, abbreviated for the rest of this proof by $P$. So we can define an order-preserving poset map $v:P \rightarrow P$ by $v(X)=\Gamma(\Gamma(X))$. It's also easy to check that $\Gamma^3=\Gamma$, so $\Gamma^4=\Gamma^2$, and $v^2=v$. Since $v(X) \supseteq X$ for every $X$, a standard theorem in combinatorial homotopy theory \cite{Bjorner} gives that $\| v(P) \|$ is a strong deformation retract of $\|P \|$. An $(m+1)$-dimensional face in $v(P)$ is a chain of faces in $\mathcal{N}[H]$, $X_1 \subsetneq X_2 \subsetneq \cdots \subsetneq X_{m+2}$. Set $Y_i = \Gamma (X_i)$ and we have $Y_1 \supsetneq Y_2 \supsetneq \cdots \supsetneq Y_{m+2}$. (The inclusions $Y_{i+1} \subsetneq Y_i$ are strict since $\Gamma^3=\Gamma$.) Suppose $a+b-3 \le m$. Since the inclusions $X_i \subsetneq X_{i+1}$ are strict and $X_1$ is nonempty, $X_a$ contains at least $a$ vertices. Similarly, $Y_a$ contains at least $m+3-a \ge b$ vertices. But $Y_a = \Gamma(X_a)$, so $X_a$ and $Y_a$ span a complete bipartite subgraph $K_{a,b}$. Thus if the dimension of $v(P)$ is at least $m+1$, then $H$ contains $K_{a,b}$ subgraphs for every $a$ and $b$ such that $a+b-3 \le m$, which is the claim. \end{proof} Now we apply Lemma \ref{lemma 1} to check that if ${n \choose j}{n \choose k} p^{jk} = o(1)$ then $\mathcal{N}[G(n,p)]$ a.a. strong deformation retracts to a simplicial complex of dimension at most $j+k-3$. \begin{proof}[Proof of Theorem \ref{Theorem 2}] Let $U$ and $V$ be vertex subsets of $G(n,p)$ of order $j$ and $k$ respectively. The probability that they span a complete bipartite graph with parts $U$ and $V$ is $p^{jk}$.
So the total probability that there are any $K_{j,k}$ subgraphs is bounded above by ${n \choose j}{n \choose k} p^{jk} = o(1)$. There are a.a. no such subgraphs, so the claim follows by Lemma \ref{lemma 1}. \end{proof} Checking Corollaries \ref{corollary 1} and \ref{corollary 2} is now a straightforward computation. \begin{proof}[Proof of Corollary \ref{corollary 1}] Let $p=1/2$ and $\epsilon > 0$. If $l \le (1-\epsilon) \log_2{n}$, then ${n \choose l} (1-p^l)^{n-l}$ $$={n \choose l} (1-(1/2)^{l})^{n-l}$$ $$\le n^l e^{-(1/2)^l (n-l)}$$ $$\le n^{\log_2{n}} e^{-n^{-1+\epsilon} (n-\log_2{n})} $$ $$= \exp{(\log{n}\log_2{n} - n^{\epsilon}+n^{-1+\epsilon}\log_2{n})}$$ $$ = o(1).$$ Then Theorem \ref{Theorem 1} gives that $\mathcal{N}[G(n,p)]$ is a.a. $(l-2)$-connected. Since $\epsilon$ doesn't appear anywhere in the conclusion of the theorem, we can replace it by a slightly smaller $\epsilon$, and for large enough $n$, $l+2 \le (1-\epsilon) \log_2{n}$; this gives that $\mathcal{N}[G(n,p)]$ is a.a. $l$-connected. On the other hand, suppose $l \ge (4+\epsilon) \log_2{n}$ and let $j = \lceil l/2 \rceil$. $${n \choose j}^2 \left( \frac{1}{2} \right) ^{j^2}$$ $$\le n^{2(2+\epsilon/2)\log_2{n}} \left( \frac{1}{2} \right)^{(2+\epsilon/2)^2 (\log_2{n})^2 }$$ $$= n^{2(2+\epsilon/2)\log_2{n}-(2+\epsilon/2)^2 \log_2{n}}$$ $$= o(n^{-\epsilon\log_2{n}})$$ $$=o(1).$$ Then Theorem \ref{Theorem 2} gives that $\mathcal{N}[G(n,p)]$ a.a. strong deformation retracts to a complex of dimension at most $2j-3 \le l-1$. \end{proof} \begin{proof}[Proof of Corollary \ref{corollary 2}] Let $p=n^\alpha$ and suppose first that $\alpha > \frac{-1}{l+2}$. $${n \choose l+2} (1-p^{l+2})^{n-l}$$ $$\le n^{l+2} e^{-n^{\alpha (l+2)} (n-l)}$$ $$\le \exp{[(l+2) \log{n} -n^{1+\alpha(l+2)} + n^{\alpha(l+2)}l] }$$ $$=o(1),$$ \noindent since $l$ is constant and $1+\alpha(l+2)>0$. Then Theorem \ref{Theorem 1} gives that $\mathcal{N}[G(n,p)]$ is a.a. $l$-connected. So a.a.
$\widetilde{H}_l(\| \mathcal{N}[G(n,p)] \|)=0$. Now suppose $l$ is even and $\alpha < \frac{-4}{l+2}$. Set $j = \frac{l+2}{2}$. $${n \choose j}^2 (n^\alpha) ^{j^2}$$ $$\le n^{l+2} n^{\alpha(l+2)^2 /4}$$ $$= n^{(l+2)(4+\alpha(l+2))/4}$$ $$=o(1),$$ \noindent since $4+\alpha(l+2) < 0$. So Theorem \ref{Theorem 2} gives that $\mathcal{N}[G(n,p)]$ a.a. strong deformation retracts to a complex of dimension at most $2j-3=l-1$. Similarly, suppose $l$ is odd and $\alpha < \frac{-4(l+2)}{(l+1)(l+3)}$. Set $j = \frac{l+1}{2}$. $${n \choose j}{n \choose j+1} (n^\alpha)^{ j(j+1)}$$ $$\le n^{2j+1} n^{\alpha j(j+1)}$$ $$\le n^{l+2} n^{\alpha \frac{l+1}{2} \frac{l+3}{2} }$$ $$=o(1).$$ Then Theorem \ref{Theorem 2} gives that $\mathcal{N}[G(n,p)]$ a.a. strong deformation retracts to a complex of dimension at most $2j-2 \le l-1$. In both the even and odd cases $\widetilde{H}_l(\| \mathcal{N}[G(n,p)] \|)=0$. \end{proof} Recall that the graph $X_n$ has vertex set $\{ u_1, u_2, \ldots, u_n \} \coprod \{ v_1, v_2, \ldots, v_n \}$, such that $\{ u_1, u_2, \ldots, u_n \}$ spans a clique, $\{ v_1, v_2, \ldots, v_n \}$ is an independent set, and $u_i$ is adjacent to $v_j$ whenever $i \neq j$. We show now that if $H$ is any graph containing a maximal clique $\{ u_1, u_2, \ldots, u_n \}$ that isn't contained in an $X_n$ subgraph, then $\| \mathcal{N}[H] \|$ retracts onto a sphere $\mathbb{S}^{n-2}$. \begin{proof} [Proof of Theorem \ref{Theorem 3}] Suppose $H$ contains a clique $X=\{ u_1, u_2, \ldots, u_n \}$ that isn't contained in any larger clique or $X_n$ subgraph. The induced subcomplex of $\mathcal{N}[H]$ on $X$ is a topological sphere $\mathbb{S}^{n-2}$, since $X$ itself is not a face by assumption of maximality of the clique. Define a map on vertices $r_1: V(H) \rightarrow V(H)$ by $r_1(x)=x$ for $x \in X$ and $r_1(x)=u_1$ otherwise.
The only possible obstruction to $r_1$ extending to a simplicial map $\widetilde{r_1}: \mathcal{N}[H] \rightarrow \mathcal{N}[H]$ is an $(n-1)$-dimensional face getting mapped onto $X$. This happens only if for some vertex $u_1^*$, the set $(X \cup \{ u_1^* \}) \setminus \{ u_1 \}$ has a common neighbor $v_1$. Note that $u_1$ isn't adjacent to $v_1$, since then $X \cup \{ v_1 \}$ would be an extension of $X$ to a larger clique. The same argument applies when $u_1$ is replaced with $u_i$ for $i=2, \ldots, n$, producing vertices $u_i^*$ and $v_i$. If none of the candidate maps $r_1, \ldots, r_n$ extends to a simplicial map, then the $v_i$ are clearly distinct, since the $u_i$ are distinct and $u_i$ is adjacent to $v_j$ if and only if $i \neq j$. But this yields an $X_n$ subgraph containing $X$. Otherwise $\| \mathcal{N}[H] \|$ retracts onto $\| X \| = \mathbb{S}^{n-2}$ as claimed, via one of these maps. \end{proof} \begin{proof} [Proof of Corollary \ref{corollary 3}] Let $p=1/2$ and $\epsilon>0$. It is well known that $G(n,p)$ a.a. contains maximal cliques of every order $k$ with $(1+\epsilon) \log_2{n} <k < (2-\epsilon) \log_2{n}$ \cite{Bollo}. We need only check that there are a.a. no $X_k$ subgraphs when $k>(4/3+\epsilon) \log_2{n}$. Note that $X_k$ has $2k$ vertices and $3k(k-1)/2$ edges. Then the probability that $G(n,p)$ contains a copy of $X_k$ is bounded above by $$(2k)! {n \choose 2k} \left( \frac{1}{2} \right) ^{3k(k-1)/2}$$ $$\le n^{2k} \left( \frac{1}{2} \right) ^{3k(k-1)/2}$$ $$\le n^{(8/3 + 2\epsilon) \log_2{n}} n^{(-3/2)(4/3+\epsilon)((4/3 + \epsilon)\log_2{n}-1)}$$ $$= n^{(8/3 + 2\epsilon) \log_2{n}} n^{-(8/3 + 4 \epsilon + 3\epsilon^2/2) \log_2{n} +2+3\epsilon/2}$$ $$= n^{-(2 + 3\epsilon /2) (\epsilon \log_2{n} - 1)}$$ $$=o(1).$$ \end{proof} \begin{proof}[Proof of Corollary \ref{corollary 4}] Define the {\it density} of a graph with $v$ vertices and $e$ edges to be $\lambda = e/v$. We say a graph is {\it strictly balanced} if the density of the graph itself is strictly greater than the density of any of its proper subgraphs.
Let $H$ be any strictly balanced graph of density $\lambda$. It is classical that $p=n^{-1 / \lambda}$ is a sharp threshold for $G(n,p)$ containing $H$ as a subgraph \cite{Bollo}. In particular, if $p=n^\alpha$ and $\alpha > -1 / \lambda$ then $G(n,p)$ a.a. contains $H$ as a subgraph, and if $\alpha < -1 / \lambda$ then $G(n,p)$ a.a. doesn't contain $H$. Since $K_k$ and $X_k$ are both strictly balanced we may apply this result twice. The density of $K_k$ is $(k-1)/2$, and the density of $X_k$ is $3(k-1)/4$. So if $p=n^\alpha$ with $\frac{-2}{k+1}< \alpha < \frac{-4}{3(k+1)}$, then $G(n,p)$ a.a. contains $K_{k+2}$ but not $X_{k+2}$ subgraphs. This implies that $ \widetilde{H}_k(\| \mathcal{N}[G(n,p)] \|) \neq 0$ by Theorem \ref{Theorem 3} once we check the detail that at least one of these $K_{k+2}$ subgraphs can't be extended to a $K_{k+3}$. In fact a randomly chosen clique will do the job. The conditional probability that a given $K_{k+2}$ extends to a $K_{k+3}$ is easily seen to be bounded above by $np^{k+2}$, and $np^{k+2}=o(1)$ since $p=n^\alpha$ with $\alpha<\frac{-4}{3(k+1)}$. \end{proof} \section{Connectivity, cliques, and chromatic number} The chromatic number $\chi(G(n,1/2))$ is tightly concentrated around $n/\log_2{n}$. For comparison, the clique number is almost always close to $2\log_2{n}$. As a corollary of what we've shown here, the connectivity of the neighborhood complex, somewhere between $\log_2{n}$ and $(4/3)\log_2{n}$, is almost always less than the clique number. Similar remarks hold for all monotone functions $p=p(n)$. The asymptotic picture that emerges is the following. The neighborhood complex strong deformation retracts to a complex of dimension $d$, which is $d/4$-connected, with nonvanishing homology between dimensions $d/3$ and $d/2$, where the clique number is $d/2$.
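The vertex and edge counts behind these density computations are easy to verify mechanically. The sketch below (our illustration, not part of the paper) builds the edge set of $X_k$ and confirms that it has $2k$ vertices, $3k(k-1)/2$ edges, and density $3(k-1)/4$:

```python
from fractions import Fraction
from itertools import combinations

def x_graph_edges(k):
    """Edge set of X_k: u_1..u_k span a clique, the v_j form an
    independent set, and u_i ~ v_j exactly when i != j."""
    us = [('u', i) for i in range(1, k + 1)]
    vs = [('v', j) for j in range(1, k + 1)]
    edges = {frozenset(e) for e in combinations(us, 2)}
    edges |= {frozenset((u, v)) for u in us for v in vs if u[1] != v[1]}
    return edges

for k in range(2, 8):
    e = len(x_graph_edges(k))
    assert e == 3 * k * (k - 1) // 2              # C(k,2) clique + k(k-1) cross edges
    assert Fraction(e, 2 * k) == Fraction(3 * (k - 1), 4)  # density e/v
print("verified")  # prints "verified"
```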
We see that the connectedness of the neighborhood complex won't do better than the clique number as a lower bound on chromatic number for random graphs; the maximal cliques themselves actually represent nontrivial homology classes. Recent work of Eric Babson and Dmitry Kozlov \cite{Babson1,Babson2,Babson3} provides new examples of topological lower bounds on chromatic number and a more general setting in which to work. However, Carsten Schultz put bounds on the strength of these bounds \cite{Schultz}. In particular, the $\mathbb{Z}_2$-index of the neighborhood complex provides the strongest known topological bound on chromatic number. This may in general be higher than the connectivity of the neighborhood complex. But by what we've shown here, even the $\mathbb{Z}_2$-index won't do much better for random graphs as a lower bound on chromatic number than connectivity, since the dimension of the retract is an upper bound on the index. \section{Random simplicial complexes and unimodality} One justification for random graph theory is that it provides models for ``typical'' graphs. This can be made precise in a few ways. For example, $G(n,1/2)$ is the uniform distribution on all graphs on vertex set $[n]$. Any property that $G(n,1/2)$ a.a. has is a property of almost all graphs. Or for another example, the Szemer\'edi Regularity Lemma states that every graph is well approximated by random graphs. Every neighborhood complex is homotopy equivalent to a free $\mathbb{Z}_2$-complex, via Lovasz's retract. Up to homotopy, the converse also holds \cite{Csorba}. \begin{theorem}[Csorba] Given a finite simplicial complex $\Delta$ with a free $\mathbb{Z}_2$-action, there exists a graph $G$ such that $\| \mathcal{N}[G] \|$ is homotopy equivalent to $\| \Delta \|$. \end{theorem} So neighborhood complexes of random graphs asymptotically give a probability distribution on all finite triangulable $\mathbb{Z}_2$-spaces as $n \rightarrow \infty$, at least up to homotopy type. 
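These objects are also easy to experiment with directly. The brute-force sketch below (our illustration) computes all faces of $\mathcal{N}[G]$ from the definition and checks the introductory example that $\mathcal{N}[K_5]$ consists of exactly the nonempty proper subsets of the vertex set, i.e.\ the boundary of a $4$-simplex:

```python
from itertools import combinations

def neighborhood_complex_faces(n, edges):
    """All faces of N[G]: nonempty vertex subsets with a common
    neighbor.  (Brute force over all subsets; fine for small n.)"""
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    faces = set()
    for size in range(1, n + 1):
        for s in combinations(range(n), size):
            if set.intersection(*(adj[v] for v in s)):
                faces.add(frozenset(s))
    return faces

# Check: for K_5, every proper nonempty subset has a common neighbor
# (any vertex outside it), while the full vertex set does not.
k5 = list(combinations(range(5), 2))
faces = neighborhood_complex_faces(5, k5)
expected = {frozenset(s) for r in range(1, 5)
            for s in combinations(range(5), r)}
print(faces == expected)  # prints True
```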
Little seems to be written so far about random simplicial complexes. However, Nathan Linial and Roy Meshulam recently studied $H_1(\| Y(n,p) \|,\mathbb{Z}_2)$ for random $2$-dimensional simplicial complexes $Y(n,p)$ \cite{Nati}. Their definition of $Y(n,p)$ is a natural extension of the Erd\H{o}s-R\'enyi random graph $G(n,p)$; $Y(n,p)$ has vertex set $[n]$ and edge set ${[n] \choose 2}$, with each $2$-face appearing independently with probability $p$. One advantage of the Linial-Meshulam model is that vanishing of homology is a monotone property. That is, once enough $2$-faces have been added that $H_1(\| Y(n,p) \|,\mathbb{Z}_2)$ vanishes, adding more $2$-faces can't ever make it nonvanishing. (This particular fact doesn't depend on the coefficients of homology, but only on the definition of random $2$-complex. Simple connectivity is also a monotone property but it's still not known where the threshold function lies.) Most properties of $G(n,p)$ that have been studied to date are monotone graph properties, in contrast to what we've studied in this article, where vanishing of homology is clearly not monotone. But in this setting unimodality seems like a natural substitute for monotonicity. In another article \cite{Kahle}, we study the {\it clique complex} of a random graph, which is the simplicial complex with all complete subgraphs for its faces. The results are analogous to what we find here, although we also study the expectation of the Betti numbers. Denote the $k$th Betti number by $\beta_k$. We conjecture that for both random neighborhood and clique complexes, for any fixed $k$ and large enough $n$ depending on $k$, the expectation $E[\beta_k]$ is a unimodal function of $p$.
Any mistakes are his own.
https://arxiv.org/abs/1304.4648
Construction of Self-dual Codes over $F_p+vF_p$
In this paper, we determine all self-dual codes over $F_p+vF_p$ ($v^2=v$) in terms of self-dual codes over the finite field $F_p$ and give an explicit construction for self-dual codes over $F_p+vF_p$, where $p$ is a prime.
\section{Introduction} The study of codes over finite rings was initiated in the early 1970s \cite{Blake1972, Blake1975}. It has received much attention since the seminal work \cite{HK}, which showed that certain good nonlinear binary codes could be found as images of linear codes over $\mathbb{Z}_4$ under the Gray map. Generally, most of the studies have concentrated on the case where the ground rings associated with codes are finite chain rings (e.g.\ see \cite{DINH},\cite{Dinh10},\cite{GH},\cite{KL}-\cite{LB},\cite{W1}). However, it has been proved that finite Frobenius rings are suitable as coding alphabets \cite{WJ}, which has led to many works on codes over non-chain rings. In recent years, linear codes over the ring $F_p+vF_p$ with $v^2=v$ and $p$ being a prime, which is not a chain ring but a Frobenius ring, have been considered by some authors. In \cite{ZWS}, Zhu et al. gave some results on cyclic codes over $F_2+vF_2$, where it is shown that cyclic codes over the ring are principally generated. In \cite{ZW}, Zhu et al. studied $(1-2v)$-constacyclic codes over $F_p+vF_p$, where $p$ is an odd prime. They determined the image of $(1-2v)$-constacyclic codes over $F_p+vF_p$ under the Gray map and the structures of such constacyclic codes over $F_p+vF_p$. In \cite{CDD}, Cengellenmis et al. generalized the ring $F_2+vF_2$ to the infinite family of rings $A_k=F_2[v_1, v_2, \cdots,v_k]/\langle v_i^2=v_i, v_iv_j=v_jv_i\rangle, 1\leq i,j \leq k$, and studied codes over these rings by using Gray maps. On the other hand, self-dual codes play a very significant role in coding theory from both practical and theoretical points of view. A vast number of papers have been devoted to the study of self-dual codes; e.g.\ see \cite{DHS}-\cite{DKKL}, \cite{GL}, \cite{HP}-\cite{KL1}, \cite{MS,PH}. In \cite{KL}, Kim and Lee gave an efficient method to construct self-dual codes over finite fields from a given self-dual code of a smaller length. In \cite{DKKL}, Dougherty et al.
proved that self-dual codes exist over all finite commutative Frobenius rings and gave some building-up constructions for self-dual codes over these rings. More recently, Cengellenmis et al. \cite{CDD} studied Euclidean and Hermitian self-dual codes over $A_k=F_2[v_1, v_2, \cdots,v_k]/\langle v_i^2=v_i, v_iv_j=v_jv_i\rangle, 1\leq i,j \leq k$, and gave a necessary and sufficient condition for the existence of self-dual codes over these rings. In this paper, we determine all self-dual codes over $F_p+vF_p$ ($v^2=v$) in terms of self-dual codes over the finite field $F_p$ and give an explicit construction for self-dual codes over $F_p+vF_p$, where $p$ is a prime. Unlike the techniques used in the papers mentioned above, we first give the parity check matrices for linear codes over $F_p+vF_p$. Then we characterize the torsion codes associated with the linear codes, which are used as a tool to study self-dual codes over $F_p+vF_p$ and their explicit construction. The organization of this paper is as follows. The necessary notations and some known results are provided in Section 2. In Section 3, we first characterize the torsion codes, and then give some criteria for a linear code over the ring to be self-dual. In Section 4, we determine all self-dual codes over $F_p+vF_p$ and give an explicit construction for self-dual codes over $F_p+vF_p$. In Section 5, we give some examples to illustrate our main results. \section{Preliminaries} Let $F_p$ be a finite field with $p$ elements, where $p$ is a prime. Throughout this paper, $R$ denotes the commutative ring $F_p+vF_p=\{a+vb\,|\,a,b\in F_p\}$ with $v^2=v$. Any element of $R$ can be uniquely expressed as $c=a+vb$, where $a,b\in F_p$. The Gray map $\Phi$ from $R$ to $F_p\times F_p$ is given by $\Phi(c)=(a,a+b)$. It is routine to check that $\Phi$ is a ring isomorphism, which means $R$ is isomorphic to the ring $F_p\times F_p$; so $R$ is a finite Frobenius ring.
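The routine check that $\Phi$ is multiplicative can also be done by machine. The sketch below (our illustration, not from the paper) implements multiplication in $R=F_p+vF_p$ as pairs $(a,b)$, using $v^2=v$, and verifies that $\Phi(xy)=\Phi(x)\Phi(y)$ componentwise for $p=5$:

```python
from itertools import product

def mul(x, y, p):
    """Multiply a+vb and c+vd in F_p + vF_p using v^2 = v:
    (a+vb)(c+vd) = ac + v(ad+bc+bd)."""
    a, b = x
    c, d = y
    return (a * c % p, (a * d + b * c + b * d) % p)

def gray(x, p):
    """The Gray map Phi(a+vb) = (a, a+b) in F_p x F_p."""
    a, b = x
    return (a % p, (a + b) % p)

p = 5
ok = True
for x in product(range(p), repeat=2):
    for y in product(range(p), repeat=2):
        lhs = gray(mul(x, y, p), p)
        rhs = tuple((u * w) % p for u, w in zip(gray(x, p), gray(y, p)))
        ok = ok and (lhs == rhs)
print(ok)  # prints True
```

Indeed, $\Phi((a+vb)(c+vd))=(ac,\,(a+b)(c+d))$, which is exactly the componentwise product of $\Phi(a+vb)$ and $\Phi(c+vd)$.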
The ring $R$ is a semi-local ring with exactly two maximal ideals given by $\langle v\rangle=\{av|a\in F_p\}$ and $\langle 1-v\rangle=\{a(1-v)|a\in F_p\}$. It is easy to verify that both $R/\langle v\rangle$ and $R/\langle 1-v\rangle$ are isomorphic to $F_p$. A code $C$ of length $n$ over $R$ is a nonempty subset of $R^n$, and the ring $R$ is referred to as the alphabet of the code. If this subset is also an $R$-submodule of $R^n$, then $C$ is called linear. For any code $C$ of length $n$ over $R$, the {\it dual code of $C$} is defined as $C^\perp=\{u\in R^n\,|\,u\cdot w=0 \mbox{ for all } w\in C\}$, where $u\cdot w$ denotes the standard Euclidean inner product of $u$ and $w$ in $R^n$. Notice that $C^\perp$ is linear whether or not $C$ is linear. If $C\subseteq C^\perp$, then $C$ is called {\it self-orthogonal}. If $C = C^\perp$, then $C$ is called {\it self-dual}. We have seen that the ring $R$ has exactly two maximal ideals $\langle v\rangle$ and $\langle 1-v\rangle$. Their residue fields are both $F_p$. Thus we have two canonical projections defined as follows: $$R=F_p+vF_p \longrightarrow R/\langle v\rangle = F_p$$ $$a+vb\longmapsto a;$$ and $$R=F_p+vF_p \longrightarrow R/\langle 1-v\rangle = F_p$$ $$a+vb\longmapsto a+b.$$ We simply denote these two projections by ``\,\,$-$\,\,'' and ``\,\,\,$\widehat{}$\,\,\,'', respectively. Denote by $\overline{r}$ and $\widehat{r}$ the images of an element $r\in R$ under these two projections, respectively. Note that any element $c$ of $R^n$ can be uniquely expressed as $c=r+vq$, where $r,q \in F_p^n$. Let $C$ be a linear code of length $n$ over $R$. Define $$C_1=\{a\in F_p^n|a+vb\in C, \text{for some} \, \,b\in F_p^n\}$$ and $$ C_2=\{a+b\in F_p^n|a+vb\in C\}. $$ Obviously, $C_1$ and $C_2$ are linear codes over $F_p$. Assume that $a\in R$.
For a code $C$ of length $n$ over $R$, the {\it submodule quotient} by $a$ is a linear code of length $n$ over $R$, defined as follows: $$(C:a)=\{x\in R^n|ax\in C\}.$$ The codes $\widehat{(C:v)}$ and $\overline{(C:(1-v))}$ over the field $F_p$ are called the {\it torsion codes} associated with the code $C$ over the ring $R$. For the case of an odd prime $p$, any nonzero linear code $C$ over $R$ is permutation-equivalent to a code generated by the following matrix (see \cite{ZW}): $$G= \begin{pmatrix} I_{k_1} & (1-v)B_1 & vA_1 & vA_2+(1-v)B_2 & vA_3+(1-v)B_3 \\ 0 & vI_{k_2} & 0 & vA_4 & 0 \\ 0 & 0 & (1-v)I_{k_3} & 0 & (1-v)B_4 \end{pmatrix}, $$ where $A_i$ and $B_j$ are matrices with entries in $F_p$ for $i,j=1,2,3,4$. Such a code $C$ is said to have type $p^{2k_1}p^{k_2}p^{k_3}$ and $|C|=p^{2k_1+k_2+k_3}$. For later convenience the above generator matrix can be rewritten in the form: $$G= \begin{pmatrix} I_{k_1} & (1-v)B_1 & vA_1 & vD_1+(1-v)D_2 \\ 0 & vI_{k_2} & 0 & vC_1 \\ 0 & 0 & (1-v)I_{k_3} & (1-v)C_2 \end{pmatrix},\qquad \qquad \qquad(\ast) $$ where $D_1=(A_2 \mid A_3), D_2=(B_2 \mid B_3), C_1=(A_4 \mid 0), C_2=(0 \mid B_4)$. For the case $p=2$, a nonzero linear code $C$ over $R$ has a generator matrix which after a suitable permutation of the coordinates can be written in the form (see \cite{WZX,ZWS}): $$ G= \begin{pmatrix} I_{k_1} & A & B & D_1+vD_2\\ 0 & vI_{k_2} & 0 & vC_1\\ 0 & 0 & (1+v)I_{k_3} & (1+v)E \end{pmatrix},\qquad \qquad \qquad(\ast) $$ where $A, B, C_1, D_1, D_2$ and $E$ are matrices with entries in $F_2$, and $|C|=2^{2k_1}2^{k_2}2^{k_3}$. For $k>0$, $I_k$ denotes the $k\times k$ identity matrix.
The code $C_1$ is permutation-equivalent to a code with generator matrix of the form (see \cite{ZW,ZWS}): \begin{equation*} G_1= \begin{cases} \begin{pmatrix} I_{k_1} & B_1 & 0 & B_2 & B_3 \\ 0 & 0 & I_{k_3} & 0 & B_4 \end{pmatrix}, & \text{$p$ is odd;}\\ \begin{pmatrix} I_{k_1} & A & B & D_1\\ 0 & 0 & I_{k_3} & E \end{pmatrix}, & \text{$p=2$,} \end{cases} \end{equation*} where $A, B, E, D_1$ and $B_i$ are $p$-ary matrices for $i\in \{1,2,3,4\}$. Similarly, the code $C_2$ is permutation-equivalent to a code with generator matrix of the form (see \cite{ZW,ZWS}): \begin{equation*} G_2= \begin{cases} \begin{pmatrix} I_{k_1} & 0 & A_1 & A_2 & A_3 \\ 0 & I_{k_2} & 0 & A_4 & 0 \end{pmatrix}, & \text{$p$ is odd;}\\ \begin{pmatrix} I_{k_1} & A & B & D_1+D_2\\ 0 & I_{k_2} & 0 & C_1 \end{pmatrix}, & \text{$p=2$,} \end{cases} \end{equation*} where $A,B, C_1, D_1, D_2$ and $A_i$ are $p$-ary matrices for $i\in \{1,2,3,4\}$. It is easy to see that ${\rm dim}\, C_1=k_1+k_3$ and ${\rm dim}\, C_2=k_1+k_2$. \section{Self-dual codes over $F_p+vF_p$} We begin with a lemma about the torsion codes associated with a code over the ring $R$, which will be used throughout the paper. \begin{lem}\label{mainlemma} Assume the notation given above. Let $C$ be a linear code of length $n$ over $R$. Then {\rm (1)} $\widehat{(C:v)}=C_2$. {\rm (2)} $\overline{(C:(1-v))}=C_1$. \end{lem} \begin{proof} (1) For any $y\in \widehat{(C:v)}$, there exists an $x\in (C:v)$ such that $y=\widehat{x}$. Let $x=r+vq$, where $r,q\in F_p^n$. Then $\widehat{x}=r+q$. Since $vx\in C$, we have $$v(r+q)=v(r+vq)=vx\in C,$$ which implies that $r+q\in C_2$. Therefore $y=\widehat{x}=r+q\in C_2$. It follows that $\widehat{(C:v)}\subseteq C_2$. Let $z\in C_2$. Then there exists an element $x+vy\in C$ such that $z=x+y$. Hence $$v(x+y)=v(x+vy)\in C,$$ and $x+y\in (C:v)$. Thus we have that $$z=x+y=\widehat{x+y}\in \widehat{(C:v)}.$$ Hence $\widehat{(C:v)}\supseteq C_2$. Therefore we get the desired result.
(2) Let $y$ be an element of $\overline{(C:(1-v))}$. Then there exists some $x \in (C:(1-v))$ such that $y=\overline{x}$. Suppose that $x=r+vq$, where $r, q\in F_p^n$. Then $\overline{x}=r$. From $(1-v)x \in C$ we have that $$r-vr=(1-v)r=(1-v)(r+vq)=(1-v)x\in C,$$ which leads to $r\in C_1$. Hence $y=\overline{x}=r\in C_1$. Therefore we obtain that $\overline{(C:(1-v))}\subseteq C_1$. If $r$ is an element of $C_1$, then $r+vq \in C$ for some $q\in F_p^n$. Since $$(1-v)r=(1-v)(r+vq)\in C,$$ we see that $r\in (C:(1-v))$. Hence $r=\overline{r}\in \overline{(C:(1-v))}$, and thus $\overline{(C:(1-v))}\supseteq C_1$. Therefore $\overline{(C:(1-v))}= C_1$, as required. \end{proof} In the following, $A^T$ denotes the transpose of the matrix $A$. Suppose $\mathcal{C}_1$ and $\mathcal{C}_2$ are permutation-equivalent linear codes over $R$ with $\mathcal{C}_1P=\mathcal{C}_2$ for some permutation matrix $P$. Then $\mathcal{C}_1^\perp P=\mathcal{C}_2^\perp$. Without loss of generality, we may assume that a linear code $C$ of length $n$ over $R$ has a generator matrix in the form $(\ast)$. \begin{Theorem}\label{generatormatrix} Let $C$ be a linear code of length $n$ over $R$ with generator matrix in the form $(\ast)$. {\rm (1)} For odd $p$, let $$ H= \begin{pmatrix} v E_1+(1-v)E_2 & P & Q & I_{n-k}\\ v(-A_1^T) & 0 & v I_{k_3} & 0\\ (1-v)(-B_1^T) & (1-v)I_{k_2} & 0 & 0 \end{pmatrix}, $$ where $E_1=(-A_2 \mid A_1B_4-A_3)^T$, $E_2=(B_1A_4-B_2 \mid -B_3)^T$, $P=(-A_4 \mid 0)^T$, $Q=(0 \mid -B_4)^T$ and $k=k_1+k_2+k_3$. Then $H$ is a generator matrix for $C^\perp$ and a parity check matrix for $C$. {\rm (2)} For $p=2$, let $$ H= \begin{pmatrix} E^TB^T+C_1^TA^T+(D_1+vD_2)^T & C_1^T & E^T & I_{n-k}\\ vB^T & 0 & vI_{k_3} & 0 \\ (1+v)A^T & (1+v)I_{k_2} & 0 & 0 \end{pmatrix}, $$ where $A, B, C_1, D_1, D_2$ and $E$ are matrices with entries in $F_2$ and $k=k_1+k_2+k_3$. Then $H$ is a generator matrix for $C^\perp$ and a parity check matrix for $C$.
{\rm (3)} $(\widehat{(C:v)})^\perp=\widehat{(C^\perp:v)}$ and $(\overline{(C:(1-v))})^\perp=\overline{(C^\perp:(1-v))}$. \end{Theorem} \begin{proof} (1) Since the verification of $HG^T=0$ is routine and somewhat tedious, we present a detailed proof in the appendix. Let $D$ be the $R$-submodule generated by $H$; then $D\subseteq C^\perp$. Since $R$ is a Frobenius ring, we have $|C||C^\perp|=|R|^n$ (\cite{WJ}). It follows that $$|C^\perp|=\frac{|R|^n}{|C|}=\frac{p^{2n}}{p^{2k_1+k_2+k_3}}=p^{2(n-k_1)-k_2-k_3}.$$ Note that $|D|=p^{2(n-k)+k_3+k_2}=p^{2(n-k_1)-k_2-k_3}$, and we obtain $|D|=|C^\perp|$, hence $D=C^\perp$. (2) Similar to the proof of (1). (3) We first prove that $\widehat{(C^\perp:v)}\subseteq (\widehat{(C:v)})^\perp$. Let $x\in (C^\perp:v)$ and $y\in (C:v)$. Then $vx\in C^\perp$ and $vy\in C$, so $(vx)(vy)^T=0$, i.e., $v(xy^T)=0$. Hence $xy^T\in (1-v)R$, and $\widehat{x}\widehat{y}^T=0$, which implies that $\widehat{(C^\perp:v)}\subseteq (\widehat{(C:v)})^\perp$. On the other hand, by Lemma \ref{mainlemma} and Theorem \ref{generatormatrix}\,(1) and (2), we have that $${\rm dim}\widehat{(C^\perp:v)}=n-k+k_3=n-k_1-k_2;$$ $${\rm dim}\widehat{(C:v)}^\perp=n-{\rm dim}\widehat{(C:v)}=n-(k_1+k_2)=n-k_1-k_2.$$ Hence ${\rm dim}\widehat{(C^\perp:v)}={\rm dim}\widehat{(C:v)}^\perp$, which implies that $(\widehat{(C:v)})^\perp=\widehat{(C^\perp:v)}$. The proof of the second equality is similar to that of the first one and is omitted here. \end{proof} \begin{Corollary}\label{conditions-1} Let $C$ be a linear code of length $n$ over $R$. Then $C$ is self-dual if and only if both of the following two conditions are satisfied: {\rm (i)} $C$ is self-orthogonal; {\rm (ii)} $n=2(k_1+k_2)$ and $k_2=k_3$. \end{Corollary} \begin{proof} First suppose that both Conditions (i) and (ii) are satisfied. Then we have that $$|C|=p^{2k_1+k_2+k_3}=p^{2(k_1+k_2)}, \quad |C^\perp|=p^{2(n-k)+k_2+k_3}=p^{2(k_1+k_2)}.$$ Note that $C\subseteq C^\perp$, and then $C = C^\perp$, that is, $C$ is self-dual.
Conversely, suppose that $C$ is self-dual; then $C$ is self-orthogonal. By Lemma \ref{mainlemma} and Theorem \ref{generatormatrix}\,(1) and (2), we have that $${\rm dim}\widehat{(C:v)}=k_1+k_2;$$ $${\rm dim}\widehat{(C^\perp:v)}=n-k+k_3=n-k_1-k_2,$$ and $${\rm dim}\overline{(C:(1-v))}=k_1+k_3;$$ $${\rm dim}\overline{(C^\perp:(1-v))}=n-k+k_2=n-k_1-k_3.$$ Since $C=C^\perp$, we have that $n=2(k_1+k_2)$ and $k_2=k_3$. \end{proof} For codes $A$ and $B$ over $R$, we write $A\oplus B=\{a+b\,|\,a\in A, b\in B\}$. \begin{Theorem}\label{maindecomposition} With the above notations, let $C$ be a linear code of length $n$ over $R$. Then $C$ can be uniquely expressed as $C=vC_2\oplus (1-v)C_1$. Moreover, we also have $C^\perp=vC^\perp_2\oplus (1-v)C^\perp_1$. \end{Theorem} \begin{proof} We first prove the uniqueness of the expression of every element in $vC_2\oplus (1-v)C_1$. Let $va_2+(1-v)a_1=vb_2+(1-v)b_1$, where $a_2, b_2 \in C_2$ and $a_1, b_1 \in C_1$. Then $v(a_2-b_2)=(1-v)(b_1-a_1)$, and the uniqueness of the expression $r+vq$ with $r,q\in F_p^n$ implies that $a_1=b_1$ and $a_2=b_2$. Hence $|vC_2\oplus (1-v)C_1|=|C_1||C_2|=p^{k_1+k_3}p^{k_1+k_2}=p^{2k_1+k_2+k_3}=|C|$. Next we prove that $vC_2\oplus (1-v)C_1\subseteq C$. Let $a\in (C:v)$ and $b\in (C:(1-v))$. Then $va \in C$ and $(1-v)b\in C$. Assume $a=a_1+(1-v)a_2$ and $b=b_1+vb_2$, where $a_1, a_2, b_1, b_2 \in F_p^n$. Then $\widehat{a}=a_1\in C_2$ and $\overline{b}=b_1\in C_1$. Thus $$v\widehat{a}+(1-v)\overline{b}=va_1+(1-v)b_1=va+(1-v)b\in C.$$ Hence $vC_2\oplus (1-v)C_1\subseteq C$. Note that $|vC_2\oplus (1-v)C_1|=|C|$; therefore $C=vC_2\oplus (1-v)C_1$. Finally, we prove the second statement. Combining the first statement with Theorem \ref{generatormatrix}\,(3) and Lemma \ref{mainlemma}, we have \begin{eqnarray*} C^\perp & = & v\widehat{(C^\perp:v)}\oplus (1-v)\overline{(C^\perp:(1-v))}\\ & = & v(\widehat{(C:v)})^\perp\oplus (1-v)(\overline{(C:(1-v))})^\perp\\ & = & vC^\perp_2\oplus (1-v)C^\perp_1, \end{eqnarray*} which is the desired result.
\end{proof} \begin{Corollary}\label{maincor} With the above notations, let $C$ be a linear code of length $n$ over $R$. Then $C$ is a self-dual code if and only if $C_1$ and $C_2$ are both self-dual codes. \end{Corollary} \begin{proof} $(\Longrightarrow)$ Let $C$ be a self-dual code. Then by Lemma \ref{mainlemma} and Theorem \ref{generatormatrix}\,(3) we have $$C^\perp_1=(\overline{(C:(1-v))})^\perp=\overline{(C^\perp:(1-v))}=\overline{(C:(1-v))}=C_1$$ and $$C^\perp_2=(\widehat{(C:v)})^\perp=\widehat{(C^\perp:v)}=\widehat{(C:v)}=C_2,$$ that is, $C_1$ and $C_2$ are both self-dual codes. $(\Longleftarrow)$ Suppose that $C_1$ and $C_2$ are both self-dual. Then by Theorem \ref {maindecomposition}, $$C^\perp=vC^\perp_2\oplus (1-v)C^\perp_1=vC_2\oplus (1-v)C_1=C.$$ So $C$ is self-dual. \end{proof} \begin{Remark}\label{expression-1} According to Theorem \ref{maindecomposition} and Corollary \ref{maincor}, a self-dual code over $R$ can be explicitly expressed via two self-dual codes over $F_p$. It is natural to ask about the converse, namely whether any two self-dual codes over $F_p$ give rise to a self-dual code over $R$; this question is answered in the next section. \end{Remark} \section{Construction of self-dual codes over $F_p+vF_p$} The construction of self-dual codes over $R$ depends on the following theorem. \begin{Theorem}\label{selfdual} Suppose that $\mathcal{C}_1$ and $\mathcal{C}_2$ are linear codes of length $n$ over $F_p$ with generator matrices $G_1$ and $G_2$, respectively, and let $l_1$ and $l_2$ be the dimensions of $\mathcal{C}_1$ and $\mathcal{C}_2$, respectively. Then the code $C$ over $R$ generated by the matrix $G$, \begin{equation*} G = \begin{cases} \begin{pmatrix} vG_2\\0 \end{pmatrix} +(1-v)G_1, & \text{if $l_1> l_2$};\\ vG_2+ \begin{pmatrix} (1-v)G_1\\0 \end{pmatrix}, & \text{if $l_1< l_2$};\\ v G_2+(1-v)G_1, & \text{if $l_1 =l_2$}.
\end{cases} \end{equation*} satisfies $$C=v\mathcal{C}_2\oplus (1-v)\mathcal{C}_1,~~ \widehat{(C:v)}=\mathcal{C}_2, \,\,\, \overline{(C:(1-v))}=\mathcal{C}_1.$$ \end{Theorem} \begin{proof} We only prove the case $l_1>l_2$ in the following, since the proofs of the other cases are similar. Assume that $G_1=(g_{11}, g_{12},\cdots,g_{1,l_1})^T$ and $G_2=(g_{21}, g_{22},\cdots,g_{2,l_2})^T$. Then $$G= \begin{pmatrix} vg_{21}+(1-v)g_{11} \\ vg_{22}+(1-v)g_{12}\\ \vdots\\ vg_{2,l_2}+(1-v)g_{1,l_2}\\ (1-v)g_{1,l_2+1}\\ \vdots\\ (1-v)g_{1,l_1} \end{pmatrix}. $$ Since $vg_{2i}+(1-v)g_{1i} \in C$, i.e., $g_{1i}+v(g_{2i}-g_{1i})\in C$, for every $1\leq i\leq l_2$, by Lemma \ref{mainlemma} we have $$g_{2i}=g_{1i}+(g_{2i}-g_{1i})\in \widehat{(C:v)}$$ for every $1\leq i\leq l_2$. Therefore $\mathcal{C}_2\subseteq \widehat{(C:v)}$. Let $y\in \widehat{(C:v)}$; then there exists $x\in (C:v)$ such that $y=\widehat{x}$. Since $vx \in C$, we may assume that $$vx=\sum_{i=1}^{l_2}(a_i+vs_i)[vg_{2i}+(1-v)g_{1i}]+\sum_{i=l_2+1}^{l_1}(a_i+vs_i)[(1-v)g_{1i}],$$ where $a_i+vs_i\in F_p+vF_p$ for $1\leq i \leq l_1$. So $$vx=v^2x=v\cdot vx=v\sum_{i=1}^{l_2}(a_i+s_i)g_{2i}.$$ Let $x=x_1+vx_2$ with $x_1, x_2\in F_p^n$. Then $\widehat{x}=x_1+x_2$. Thus $$v(x_1+x_2)=vx=v\sum_{i=1}^{l_2}(a_i+s_i)g_{2i}.$$ Hence $x_1+x_2=\sum_{i=1}^{l_2}(a_i+s_i)g_{2i}$. Therefore we have $$y=\widehat{x}=x_1+x_2=\sum_{i=1}^{l_2}(a_i+s_i)g_{2i} \in \mathcal{C}_2,$$ which gives $\widehat{(C:v)}\subseteq \mathcal{C}_2$. From the above facts we get that $\widehat{(C:v)}= \mathcal{C}_2$. On the other hand, $vg_{2i}+(1-v)g_{1i} \in C$, i.e., $g_{1i}+v(g_{2i}-g_{1i})\in C$, for every $1\leq i\leq l_1$, where we set $g_{2i}=0$ for $i> l_2$. By Lemma \ref{mainlemma} we have $g_{1i}\in \overline{(C:(1-v))}$ for every $1\leq i\leq l_1$. Therefore $\mathcal{C}_1\subseteq \overline{(C:(1-v))}$. Let $z\in \overline{(C:(1-v))}$; then there exists $s\in (C:(1-v))$ such that $z=\overline{s}$.
Since $(1-v)s \in C$, we may assume that $$(1-v)s=\sum_{i=1}^{l_2}(b_i+vt_i)[vg_{2i}+(1-v)g_{1i}]+\sum_{i=l_2+1}^{l_1}(b_i+vt_i)[(1-v)g_{1i}],$$ where $b_i+vt_i\in F_p+vF_p$ for $1\leq i \leq l_1$. So $$(1-v)s=(1-v)^2s=(1-v)\cdot (1-v)s=(1-v)\sum_{i=1}^{l_1}b_ig_{1i}.$$ Let $s=s_1+vs_2$ with $s_1, s_2\in F_p^n$. Then $\overline{s}=s_1$. Thus $$(1-v)s_1=(1-v)s=(1-v)\sum_{i=1}^{l_1}b_ig_{1i}.$$ Hence $s_1=\sum_{i=1}^{l_1}b_ig_{1i}$. Therefore we have $$z=\overline{s}=s_1=\sum_{i=1}^{l_1}b_ig_{1i} \in \mathcal{C}_1,$$ which gives $\overline{(C:(1-v))}\subseteq \mathcal{C}_1$. Thus we get $\overline{(C:(1-v))}= \mathcal{C}_1$. Finally, by Lemma \ref{mainlemma} and Theorem \ref{maindecomposition}, \begin{eqnarray*} C & = & v \widehat{(C:v)} \oplus (1-v)\overline{(C:(1-v))}\\ & = & v\mathcal{C}_2 \oplus (1-v)\mathcal{C}_1, \end{eqnarray*} as desired. \end{proof} \begin{Corollary}\label{construction} Suppose that $\mathcal{C}_1$ and $\mathcal{C}_2$ are two self-dual codes of length $n$ over $F_p$ with generator matrices $G_1$ and $G_2$, respectively. Then the code $C$ over $R$ generated by the matrix $$ G = v G_2+(1-v)G_1 $$ is also self-dual. \end{Corollary} \begin{proof} Note that $l_1=l_2$ in this case. By Lemma \ref{mainlemma}, Theorem \ref{maindecomposition} and Theorem \ref{selfdual} we have \begin{eqnarray*} C^\perp & = & v(\widehat{(C:v)})^\perp\oplus (1-v)(\overline{(C:(1-v))})^\perp\\ & = & v\mathcal{C}^\perp_2\oplus (1-v)\mathcal{C}^\perp_1\\ & = & v\mathcal{C}_2\oplus (1-v)\mathcal{C}_1\\ & = & C. \end{eqnarray*} So $C$ is self-dual. \end{proof} \begin{Theorem}\label{allcodes} All the self-dual codes over $R$ are given by $$v \mathcal{C}_2\oplus (1-v)\mathcal{C}_1,$$ where $\mathcal{C}_1$ and $\mathcal{C}_2$ range over all the self-dual codes over $F_p$. Moreover, this expression is unique, i.e.,
if $$v \mathcal{C}_2\oplus (1-v)\mathcal{C}_1 = v \mathcal{C}'_2\oplus (1-v)\mathcal{C}'_1,$$ then $\mathcal{C}_2 = \mathcal{C}'_2$ and $\mathcal{C}_1 = \mathcal{C}'_1$, where $\mathcal{C}_1, \mathcal{C}_2, \mathcal{C}'_1 $ and $\mathcal{C}'_2$ are all self-dual codes over $F_p$. \end{Theorem} \begin{proof} First, by Corollary~\ref{maincor}, every self-dual code over $R$ can be explicitly expressed via two self-dual codes over $F_p$ as in the above form. Next, let $\mathcal{C}_1$ and $\mathcal{C}_2$ be arbitrary self-dual codes over $F_p$, and let $G_1$ and $G_2$ be generator matrices for $\mathcal{C}_1$ and $\mathcal{C}_2$, respectively. Then according to Corollary \ref{construction}, the code $C$ generated by the matrix $vG_2 + (1-v)G_1$ is self-dual and satisfies $C = v \mathcal{C}_2\oplus (1-v)\mathcal{C}_1$. This completes the proof of the first statement. Let $x\in \mathcal{C}_2$. Since $v \mathcal{C}_2\oplus (1-v)\mathcal{C}_1 = v \mathcal{C}'_2\oplus (1-v)\mathcal{C}'_1$, we have that $$vx\in v \mathcal{C}_2 \subseteq v \mathcal{C}_2\oplus (1-v)\mathcal{C}_1 = v \mathcal{C}'_2\oplus (1-v)\mathcal{C}'_1.$$ Writing $vx =vx'+(1-v)y'$ with $x'\in \mathcal{C}'_2$ and $y'\in \mathcal{C}'_1$, we get $v(x-x')=(1-v)y'$; multiplying by $v$ yields $v(x-x')=0$, so $x=x'$. Therefore $\mathcal{C}_2\subseteq \mathcal{C}'_2$. Similarly, we have $\mathcal{C}'_2\subseteq \mathcal{C}_2$. Hence $\mathcal{C}_2 = \mathcal{C}'_2$. Let $z\in \mathcal{C}_1$. Since $v \mathcal{C}_2\oplus (1-v)\mathcal{C}_1 = v \mathcal{C}'_2\oplus (1-v)\mathcal{C}'_1$, we have that $$(1-v)z\in (1-v) \mathcal{C}_1 \subseteq v \mathcal{C}_2\oplus (1-v)\mathcal{C}_1 = v \mathcal{C}'_2\oplus (1-v)\mathcal{C}'_1.$$ Setting $(1-v)z =va'+(1-v)z'$, where $a'\in \mathcal{C}'_2$ and $z'\in \mathcal{C}'_1$, we get $(1-v)(z-z')=va'$; multiplying by $1-v$ yields $(1-v)(z-z')=0$, so $z=z'$. Therefore $\mathcal{C}_1\subseteq \mathcal{C}'_1$. Similarly, we have $\mathcal{C}'_1\subseteq \mathcal{C}_1$. Hence $\mathcal{C}_1 = \mathcal{C}'_1$.
Thus we complete the proof. \end{proof} \begin{Corollary} Let $N(R)$ be the number of self-dual codes of length $n$ over $R$ and $N(F_p)$ the number of self-dual codes of length $n$ over $F_p$. Then $$N(R) = N(F_p)^2.$$ \end{Corollary} \begin{proof} It follows immediately from Theorem \ref{allcodes}. \end{proof} The following lemma is well known and can be found in \cite{RS}. \begin{lem}\label{conditions-2} Let $F_q$ be a finite field with characteristic $p$. Then {\rm (i)} If $p=2$ or $p\equiv 1\,({\rm mod}\,4)$, then a self-dual code of length $n$ exists over $F_q$ if and only if $n\equiv 0\,({\rm mod}\,2)$; {\rm (ii)} If $p\equiv 3\,({\rm mod}\,4)$, then a self-dual code of length $n$ exists over $F_q$ if and only if $n\equiv 0\,({\rm mod}\,4)$. \end{lem} Combining Theorem \ref{allcodes} with Lemma \ref{conditions-2}, the following result is easily obtained. \begin{Theorem} With the above notation, the following two statements hold: {\rm (i)} If $p=2$ or $p\equiv 1\,({\rm mod}\,4)$, then a self-dual code of length $n$ over $R$ exists if and only if $n\equiv 0\,({\rm mod}\,2)$; {\rm (ii)} If $p\equiv 3\,({\rm mod}\,4)$, then a self-dual code of length $n$ over $R$ exists if and only if $n\equiv 0\,({\rm mod}\,4)$. \end{Theorem} \begin{Remark} For $p=2$, the corresponding result has been obtained in \cite[Corollary 5.5]{CDD}. \end{Remark} \section{Examples} According to Corollary \ref{construction}, the construction of self-dual codes over $R$ hinges on constructing self-dual codes over $F_p$. See \cite{KL} for the building-up construction of self-dual codes over $F_p$. The following examples illustrate our results. \begin{Example} Consider the construction of a self-dual code of length $4$ over $R=F_5+vF_5$. Let $c=2\in F_5$, so that $c^2=-1$ in $F_5$. Here $l_1=l_2=2$ and $$ G_1= \begin{pmatrix} 1 & 0 & 3 & 0\\ -3 & 1 & 1 & 2 \end{pmatrix}; $$ $$ G_2= \begin{pmatrix} 0 & 2 & 0 & 1\\ -2 & 4 & 1 & 2 \end{pmatrix}.
$$ Then the code $C$ of length $4$ over $R=F_5+vF_5$ generated by the following matrix $$ G=vG_2+(1-v)G_1= \begin{pmatrix} 1-v & 2v & 3-3v & v \\ -3+v & 1+3v & 1 & 2 \end{pmatrix} $$ is self-dual. On the other hand, an elementary calculation shows that the code $C$ is permutation-equivalent to a code $\mathcal{C}$ generated by the following matrix: $$ \begin{pmatrix} 1 & 0 & 2+v & 0\\ 0 & 1 & 0 & 2+v \end{pmatrix} = \begin{pmatrix} I_2 \mid vD_1+(1-v)D_2 \end{pmatrix}, $$ where $D_1= \begin{pmatrix} 3 & 0\\ 0 & 3 \end{pmatrix}, D_2= \begin{pmatrix} 2 & 0\\ 0 & 2 \end{pmatrix} $. By Corollary \ref{conditions-1}, it is easy to check that $\mathcal{C}$ is self-dual. So the code $C$ is also self-dual. \end{Example} \begin{Example} Consider the construction of a self-dual code of length $6$ over $R=F_2+vF_2$. Here $l_1=l_2=3$ and $$ G_1= \begin{pmatrix} 1 & 0 & 1 & 1 & 0 & 1\\ 1 & 1 & 1 & 0 & 1 & 0\\ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}; $$ $$ G_2= \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 1\\ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}. $$ Then the code $C$ of length $6$ over $R=F_2+vF_2$ generated by the following matrix \begin{eqnarray*} G & = & v G_2+(1-v)G_1 \\ & = & G_1+v(G_2-G_1) \\ & = & \begin{pmatrix} 1 & 0 & 1+v & 1 & 0 & 1+v\\ 1+v & 1+v & 1 & 0 & 1+v & v\\ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} \end{eqnarray*} is self-dual. Similarly, an elementary calculation shows that the code $C$ is permutation-equivalent to a code $\mathcal{C}$ generated by the following matrix: $$ \begin{pmatrix} 1 & 0 & 0 & v & 0 & 1+v\\ 0 & 1 & 0 & 1+v & 0 & v \\ 0 & 0 & 1 & 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} I_3 \mid D_1+vD_2 \end{pmatrix}, $$ where $D_1= \begin{pmatrix} 0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{pmatrix}, D_2= \begin{pmatrix} 1 & 0 & 1\\ 1 & 0 & 1\\ 0 & 0 & 0 \end{pmatrix} $. By Corollary \ref{conditions-1}, it is easy to check that $\mathcal{C}$ is self-dual. Thus the code $C$ is also self-dual.
\end{Example} \begin{Example} Consider the construction of a self-dual code of length $12$ over $R=F_3+vF_3$. Here $l_1=l_2=6$ and $$G_1=(I_6\mid B),$$ where $I_6$ denotes the $6\times 6$ identity matrix, and $$ B= \begin{pmatrix} 0 & 1 & 1 & 1 & 1 & 1\\ 1 & 0 & 1 & 2 & 2 & 1\\ 1 & 1 & 0 & 1 & 2 & 2\\ 1 & 2 & 1 & 0 & 1 & 2\\ 1 & 2 & 2 & 1 & 0 & 1\\ 1 & 1 & 2 & 2 & 1 & 0 \end{pmatrix}, $$ i.e., the code with generator matrix $G_1$ is the extended ternary Golay code; $$ G_2=\left(\begin{array}{llllccccrrrrr} 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 & 2 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 1 & 0 \\ 2 & 1 & 2 & 0 & 1 & 2 & 1 & 0 & 2 & 2 & 0 & 1 \end{array}\right). $$ Then the code $C$ of length $12$ over $R=F_3+vF_3$ generated by the matrix \begin{eqnarray*} G & = & v G_2+(1-v)G_1 \\ & = & G_1+v(G_2-G_1), \end{eqnarray*} that is, by $$ \left(\begin{array}{llllccccrrrrr} 1+2v & v & v & v & 0 & 0 & 0 & 1+2v & 1+2v & 1+2v & 1+2v & 1+2v \\ v & 1+2v & 0 & 0 & v & 0 & 1 & 2v & 1+2v & 2+2v & 2+2v & 1+2v \\ 0 & 0 & 1+2v & 0 & 0 & v & 1 & 1 & 0 & 1+2v & 2+v & 2+v \\ 0 & 0 & 0 & 1+2v & v & 0 & 1+2v & 2+v & 1 & 0 & 1+v & 2+v \\ 0 & 0 & 0 & 0 & 1+2v & 0 & 1+2v & 2+v & 2+2v & 1+v & v & 1+2v \\ 2v & v & 2v & 0 & v & 1+v & 1 & 1+2v & 2 & 2 & 1+2v & v \end{array}\right), $$ is self-dual.
Proceeding as in the previous examples, we find that the code $C$ is permutation-equivalent to a code $\mathcal{C}$ generated by the following matrix: $$ \left(\begin{array}{llllccccrrrrr} 1 & 0 & 0 & 0 & 0 & 0 & 2+v & 2+2v & 2 & 1+2v & 0 & 2+v\\ 0 & 1 & 0 & 0 & 0 & 0 & 2 & 0 & 1+2v & 2+v & 1+2v & 2 \\ 0 & 0 & 1 & 0 & 0 & 0 & 2v & 1+2v & 1+2v & 1+2v & 2+v & 2+2v\\ 0 & 0 & 0 & 1 & 0 & 0 & 1+2v & 2+2v & v & 2+v & 2+v & 2+v\\ 0 & 0 & 0 & 0 & 1 & 0 & 1+2v & 1+2v & 2+v & 2v & 1 & 2+v\\ 0 & 0 & 0 & 0 & 0 & 1 & 1+2v & 2+v & 1+2v & 1 & 1 & 0 \end{array}\right) = \begin{pmatrix} I_6 \mid vD_1+(1-v)D_2 \end{pmatrix}, $$ where $D_1= \begin{pmatrix} 0 & 1 & 2 & 0 & 0 & 0\\ 2 & 0 & 0 & 0 & 0 & 2 \\ 2 & 0 & 0 & 0 & 0 & 1\\ 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 2 & 1 & 0\\ 0 & 0 & 0 & 1 & 1 & 0 \end{pmatrix}, D_2= \begin{pmatrix} 2 & 2 & 2 & 1 & 0 & 2\\ 2 & 0 & 1 & 2 & 1 & 2 \\ 0 & 1 & 1 & 1 & 2 & 2\\ 1 & 2 & 0 & 2 & 2 & 2\\ 1 & 1 & 2 & 0 & 1 & 2\\ 1 & 2 & 1 & 1 & 1 & 0 \end{pmatrix} $. By Corollary \ref{conditions-1}, it is easy to check that $\mathcal{C}$ is self-dual. So the code $C$ is also self-dual. \end{Example} \noindent{\bf Acknowledgement} This work is supported by the National Natural Science Foundation of China, Grant No. 11171370. \medskip \medskip \noindent{\bf Appendix} We give a detailed proof for $HG^T=0$ in Theorem \ref{generatormatrix} below.
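Before the symbolic computation, the identity $HG^T=0$ can also be sanity-checked numerically. The following sketch (ours; the prime and block sizes are arbitrary small choices) uses the fact that a matrix identity over $R$ holds if and only if it holds under both projections $v\mapsto 0$ and $v\mapsto 1$ over $F_p$:

```python
# Numerical sanity check (ours) of H G^T = 0 in Theorem 2(1), odd p.
# A matrix M over R is determined by its projections Mbar (set v = 0)
# and Mhat (set v = 1); M N = 0 over R iff both projected products
# vanish mod p.
import numpy as np

rng = np.random.default_rng(0)
p, k1, k2, k3, m4, m5 = 5, 2, 3, 2, 2, 3
nk = m4 + m5                    # number of right-hand columns, n - k

A1 = rng.integers(0, p, (k1, k3))
A2, A3 = rng.integers(0, p, (k1, m4)), rng.integers(0, p, (k1, m5))
A4, B4 = rng.integers(0, p, (k2, m4)), rng.integers(0, p, (k3, m5))
B1 = rng.integers(0, p, (k1, k2))
B2, B3 = rng.integers(0, p, (k1, m4)), rng.integers(0, p, (k1, m5))

I = lambda n: np.eye(n, dtype=int)
Z = lambda r, c: np.zeros((r, c), dtype=int)

D1, D2 = np.hstack([A2, A3]), np.hstack([B2, B3])
C1, C2 = np.hstack([A4, Z(k2, m5)]), np.hstack([Z(k3, m4), B4])
E1 = np.hstack([-A2, A1 @ B4 - A3]).T
E2 = np.hstack([B1 @ A4 - B2, -B3]).T
P, Q = np.hstack([-A4, Z(k2, m5)]).T, np.hstack([Z(k3, m4), -B4]).T

# projections of G (row blocks k1, k2, k3) and H (row blocks n-k, k3, k2)
Gbar = np.block([[I(k1), B1, Z(k1, k3), D2],
                 [Z(k2, k1), Z(k2, k2), Z(k2, k3), Z(k2, nk)],
                 [Z(k3, k1), Z(k3, k2), I(k3), C2]])
Ghat = np.block([[I(k1), Z(k1, k2), A1, D1],
                 [Z(k2, k1), I(k2), Z(k2, k3), C1],
                 [Z(k3, k1), Z(k3, k2), Z(k3, k3), Z(k3, nk)]])
Hbar = np.block([[E2, P, Q, I(nk)],
                 [Z(k3, k1), Z(k3, k2), Z(k3, k3), Z(k3, nk)],
                 [-B1.T, I(k2), Z(k2, k3), Z(k2, nk)]])
Hhat = np.block([[E1, P, Q, I(nk)],
                 [-A1.T, Z(k3, k2), I(k3), Z(k3, nk)],
                 [Z(k2, k1), Z(k2, k2), Z(k2, k3), Z(k2, nk)]])

assert np.all((Hbar @ Gbar.T) % p == 0)
assert np.all((Hhat @ Ghat.T) % p == 0)
```

The same two-projection trick reduces the $p=2$ case to two computations over $F_2$.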
For $p$ being odd, we have that \begin{align*} HG^T & = \begin{pmatrix} v E_1+(1-v)E_2 & P & Q & I_{n-k}\\v(-A_1^T) & 0 & v I_{k_3} & 0\\ (1-v)(-B_1^T) & (1-v)I_{k_2} & 0 & 0 \end{pmatrix} \begin{pmatrix} I_{k_1} & (1-v)B_1 & v A_1 & v D_1+(1-v)D_2 \\ 0 & v I_{k_2} & 0 & v C_1 \\ 0 & 0 & (1-v)I_{k_3} & (1-v)C_2 \end{pmatrix}^T\\ & = \begin{pmatrix} v E_1+(1-v)E_2 & P & Q & I_{n-k}\\v(-A_1^T) & 0 & v I_{k_3} & 0\\ (1-v)(-B_1^T) & (1-v)I_{k_2} & 0 & 0 \end{pmatrix} \begin{pmatrix} I_{k_1} & 0 & 0 \\ (1-v)B^T_1 & vI_{k_2} & 0\\ vA^T_1 & 0 & (1-v)I_{k_3}\\ vD_1^T+(1-v)D^T_2 & vC_1^T & (1-v)C_2^T \end{pmatrix}\\ & = \begin{pmatrix} v(E_1+QA^T_1+D_1^T)+(1-v)(E_2+PB_1^T+D_2^T) & v(P+C_1^T) & (1-v)(Q+C_2^T)\\ v(-A^T_1)+v^2A^T_1 & 0 & v(1-v)I_{k_3}\\ (1-v)(-B^T_1)+(1-v)^2B^T_1 & v(1-v)I_{k_2} & 0 \end{pmatrix} \\ & = 0, \end{align*} where $$v(E_1+QA^T_1+D_1^T)+(1-v)(E_2+PB_1^T+D_2^T)$$ \begin{align*} & = v\big[ \begin{pmatrix} -A_2 \mid A_1B_4-A_3\end{pmatrix}^T +\begin{pmatrix} 0 \mid -B_4 \end{pmatrix}^TA_1^T+D_1^T \big] +(1-v)\big[ \begin{pmatrix} B_1A_4-B_2 \mid -B_3\end{pmatrix}^T +\begin{pmatrix}-A_4 \mid 0 \end{pmatrix}^TB_1^T+D_2^T \big]\\ & = v\big[ \begin{pmatrix} -A_2 \mid A_1B_4-A_3\end{pmatrix} +A_1\begin{pmatrix} 0 \mid -B_4 \end{pmatrix}+D_1 \big]^T +(1-v)\big[ \begin{pmatrix} B_1A_4-B_2 \mid -B_3\end{pmatrix} +B_1\begin{pmatrix}-A_4 \mid 0 \end{pmatrix}+D_2 \big]^T\\ & = v\big[ -\begin{pmatrix} A_2 \mid A_3 \end{pmatrix}+D_1\big]^T +(1-v)\big[ -\begin{pmatrix} B_2 \mid B_3 \end{pmatrix}+D_2\big]^T\\ & = v\big( -D_1+D_1 \big)^T+(1-v)\big( -D_2+D_2\big)^T\\ & = 0; \end{align*} $$P+C_1^T=\begin{pmatrix}-A_4 \mid 0\end{pmatrix}^T+\begin{pmatrix}A_4 \mid 0\end{pmatrix}^T=0;$$ $$Q+C_2^T=\begin{pmatrix}0 \mid -B_4\end{pmatrix}^T+\begin{pmatrix}0 \mid B_4 \end{pmatrix}^T=0.$$ For $p=2$, \small{ \begin{align*} HG^T & = \begin{pmatrix} E^TB^T+C_1^TA^T+(D_1+vD_2)^T & C_1^T & E^T & I_{n-k}\\ vB^T & 0 & vI_{k_3} & 0 \\ (1+v)A^T & (1+v)I_{k_2} & 0 & 0 \end{pmatrix} 
\begin{pmatrix} I_{k_1} & A & B & D_1+vD_2\\ 0 & vI_{k_2} & 0 & vC_1\\ 0 & 0 & (1+v)I_{k_3} & (1+v)E \end{pmatrix}^T\\ & = \begin{pmatrix} E^TB^T+C_1^TA^T+(D_1+vD_2)^T & C_1^T & E^T & I_{n-k}\\ vB^T & 0 & vI_{k_3} & 0 \\ (1+v)A^T & (1+v)I_{k_2} & 0 & 0 \end{pmatrix} \begin{pmatrix} I_{k_1} & 0 & 0 & \\ A^T & vI_{k_2} & 0\\ B^T & 0 & (1+v)I_{k_3}\\ D_1^T+vD_2^T & vC_1^T & (1+v)E^T \end{pmatrix}\\ & = \begin{pmatrix} E^TB^T+C_1^TA^T+(D_1^T+vD_2^T)+C_1^TA^T+E^TB^T+D_1^T+vD^T_2 & vC_1^T+vC_1^T & (1+v)E^T+(1+v)E^T \\ vB^T+vB^T & 0 & v(1+v)I_{k_3} \\ (1+v)A^T+(1+v)A^T & v(1+v)I_{k_2} & 0 \end{pmatrix}\\ & =0. \end{align*}} \normalsize Thus we complete the proof.
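As a computational cross-check of Corollary \ref{maincor} on Example 2, one can verify directly that the binary codes generated by $G_1$ and $G_2$ there are self-dual; the helper below (ours) tests self-orthogonality of the generators together with dimension $n/2$:

```python
# Cross-check (ours) of Corollary 4 on Example 2: the binary codes
# generated by G_1 and G_2 are self-dual (length 6, dimension 3,
# self-orthogonal), hence C = vC_2 + (1-v)C_1 is self-dual over R.

def rank_mod2(rows):
    """Rank of a binary matrix via Gaussian elimination over F_2."""
    m = [r[:] for r in rows]
    rank = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(rank, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for i in range(len(m)):
            if i != rank and m[i][c]:
                m[i] = [x ^ y for x, y in zip(m[i], m[rank])]
        rank += 1
    return rank

def is_self_dual(rows):
    # self-orthogonal generators give C <= C^perp; dim = n/2 forces equality
    n = len(rows[0])
    orth = all(sum(a * b for a, b in zip(u, w)) % 2 == 0
               for u in rows for w in rows)
    return orth and 2 * rank_mod2(rows) == n

G1 = [[1, 0, 1, 1, 0, 1],
      [1, 1, 1, 0, 1, 0],
      [1, 1, 1, 1, 1, 1]]
G2 = [[1, 0, 0, 1, 0, 0],
      [0, 0, 1, 0, 0, 1],
      [1, 1, 1, 1, 1, 1]]

assert is_self_dual(G1) and is_self_dual(G2)
```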
https://arxiv.org/abs/1403.7427
Robust optimal solutions in interval linear programming with forall-exists quantifiers
We introduce a novel kind of robustness in linear programming. A solution x* is called robust optimal if for all realizations of objective function coefficients and constraint matrix entries from given interval domains there are appropriate choices of the right-hand side entries from their interval domains such that x* remains optimal. We propose a method to check for robustness of a given point, and also recommend how a suitable candidate can be found. We also discuss topological properties of the robust optimal solution set. We illustrate applicability of our concept in a transportation problem.
\section{Introduction} Robustness in mathematical programming has been intensively studied from diverse points of view \cite{BenBoy2006,BenNem2009,BenGor2004,BenNem2002, SoyMur2013}. Generally, robustness corresponds to stability of some key characteristics under limited input data change. In the case of uncertainties in the objective function only, an optimal solution is usually called robust if the worst-case regret in the objective value is minimal. One class of robustness is dealt with in the area of interval linear programming. Therein, we model uncertain parameters by intervals of admissible values and suppose that parameters can attain any value from their interval domains independently of other parameters. The effect of variations on the optimal value and interval solutions are the fundamental problems investigated \cite{AllNeh2013,Hla2012a,Hla2014:a}. Concerning robustness, \cite{Hla2014a} was devoted to stability of an optimal basis in interval linear programming. In \cite{AveLeb2005,GabMur2010b,InuSak1995,MauLag1998}, the authors utilized the maximum regret approach for finding robust solutions. In the multiobjective case, \cite{HlaSit2013,RivYag2013} studied robustness of a Pareto optimal solution, and some specific nonlinear programming problems \cite{Hla2010c} were addressed in the context of interval robustness as well. Recently, \cite{LiLuo2013,LuoLi2013a,LuoLi2014a} introduced a novel kind of interval robustness. They divided interval parameters into two sets, quantified respectively by universal and existential quantifiers. Roughly speaking, an optimal solution is robust in this sense if for each realization of the universally quantified interval parameters there is some realization of the existentially quantified parameters such that the solution remains optimal.
Such forall-exists quantified problems are also studied in the context of interval linear equations \cite{Pop2012,PopHla2013,Sha2002}; imposing suitable quantifiers gives us a more powerful technique in real-life problem modelling, and can more appropriately reflect various decision maker strategies. This paper is a contribution to interval linear programming with quantified parameters. The robust optimal solutions considered must remain optimal for any admissible perturbation in the objective and matrix coefficients, compensated by a suitable right-hand side change. We propose a method to check for this kind of robustness and present a cheap sufficient condition. We discuss properties of the set of all robust solutions, and propose a heuristic to find a robust solution. We apply the robustness concept to a transportation problem in a small numerical study. The equality form of linear programming is then extended to a general form with mixed equations and inequalities (Section~\ref{sGen}). \paragraph*{Notation.} The $k$th row of a matrix $A$ is denoted by $A_{k*}$, and $\diag(s)$ stands for the diagonal matrix with entries given by $s$. The sign of a real $r$ is defined as $\sgn(r)=1$ if $r\geq0$ and $\sgn(r)=-1$ otherwise; for vectors the sign is meant entrywise. An interval matrix is defined as $$ \imace{A}:=\{A\in\R^{m\times n}\mmid \umace{A}\leq A\leq \omace{A}\}, $$ where $\umace{A}$ and $\omace{A}$, $\umace{A}\leq\omace{A}$, are given matrices. The midpoint and radius matrices are defined as $$ \Mid{A}:=\frac{1}{2}(\umace{A}+\omace{A}),\quad \Rad{A}:=\frac{1}{2}(\omace{A}-\umace{A}). $$ Naturally, intervals and interval vectors are considered as special cases of interval matrices. For interval arithmetic, we refer the readers to \cite{MooKea2009,Neu1990}, for instance. \paragraph*{Problem formulation.} Consider a linear programming problem in the equality form \begin{align}\label{lp} \min c^Tx\st Ax=b,\ x\geq0.
\end{align} Let $\imace{A}\in\IR^{m\times n}$, $\ivr{b}\in\IR^{m}$ and $\ivr{c}\in\IR^{n}$ be given, and let $x^*\in\R^n$ be a candidate robustly optimal solution. The problem is stated as follows: \begin{quote} For every $c\in\ivr{c}$ and $A\in\imace{A}$, does there exist $b\in\ivr{b}$ such that $x^*$ is optimal to \nref{lp}? \end{quote} In other words, we ask whether $x^*$ is robustly optimal in the sense that any change in $c$ and $A$ within the prescribed bounds can be compensated by an adequate change in $b$. Thus, $\imace{A}$ and $\ivr{c}$ play the role of uncertain parameters, all realizations of which must be taken into account. On the other hand, intervals in $\ivr{b}$ represent some reserves that we can utilize if necessary. In \cite{Hla2012b}, it was shown that checking whether $x^*$ is optimal for all evaluations $c\in\ivr{c}$, with fixed $A$ and $b$, is a co-NP-complete problem. Since the class of problems studied in this manuscript covers this as a sub-class, we have as a consequence that our problem is co-NP-complete as well. This practically means that we can hardly hope for a polynomial time verification of robust optimality. \section{Checking robust optimality}\label{sEqForm} Let $I:=\{i\in\{1,\dots,n\}\mmid x^*_i=0\}$ be the set of active indices of $x^*$. It is well known that $x^*$ is optimal if and only if $x^*$ is feasible and there is no strictly better solution in the tangent cone at $x^*$ to the feasible set. In other words, the linear system \begin{align}\label{optCond} c^Tx=-1,\ \ Ax=0,\ \ x_i\geq0,\ i\in I, \end{align} has no solution. We refer to these conditions as \emph{feasibility} and \emph{optimality}. In order that $x^*$ is robustly optimal, both conditions must hold with the given forall-exists quantifiers. Notice that only the entries of $A$ appear in both conditions. Since there is the universal quantifier associated with $A$, we can check for feasibility and optimality separately.
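As recalled in the next paragraph, the feasibility part of the check amounts to the tolerance condition $|\Mid{A}x^*-\Mid{b}|+\Rad{A}|x^*|\leq\Rad{b}$. A minimal numerical sketch of this test (ours, in midpoint-radius form; all names are our own):

```python
# A minimal sketch (ours) of the feasibility part of the test: x* must
# be a tolerance solution, i.e. |Ac x* - bc| + Ad |x*| <= bd, where
# Ac, Ad (bc, bd) are the midpoint and radius of the interval matrix A
# (interval vector b).
import numpy as np

def is_tolerance_solution(Ac, Ad, bc, bd, x, tol=1e-12):
    lhs = np.abs(Ac @ x - bc) + Ad @ np.abs(x)
    return bool(np.all(lhs <= bd + tol))

# A x = b with A in [2, 4] (1x1) and b in [1, 5]:
# x* = 1 maps [2, 4] into [1, 5], while x* = 2 maps [4, 8] outside of it.
Ac, Ad = np.array([[3.0]]), np.array([[1.0]])
bc, bd = np.array([3.0]), np.array([2.0])
assert is_tolerance_solution(Ac, Ad, bc, bd, np.array([1.0]))
assert not is_tolerance_solution(Ac, Ad, bc, bd, np.array([2.0]))
```

The optimality part, in contrast, requires certifying infeasibility of linear systems and is the computationally hard part of the verification.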
\paragraph*{Feasibility.} We have to check that for any $A\in\imace{A}$ there is $b\in\ivr{b}$ such that $Ax^*=b$. This is a well-studied problem, and an $x^*$ satisfying this property is called a tolerance (or tolerable) solution; see \cite{Fie2006,Pop2013a,Sha2002,Sha2004}. By \cite[Thm. 2.28]{Fie2006}, $x^*$ is a tolerance solution if and only if it satisfies \begin{align}\label{optFeas} |\Mid{A}x^*-\Mid{b}|+\Rad{A}|x^*|\leq\Rad{b}. \end{align} Thus, the feasibility question is easily answered. \paragraph*{Optimality.} Denote by $A_I$ the restriction of $A$ to the columns indexed by $I$, and by $A_J$ the restriction to the columns indexed by $J:=\seznam{n}\setminus I$. In a similar manner we use $I$ and $J$ as sub-indices for other matrices and vectors. We want to check whether \nref{optCond} is infeasible for any $A\in\imace{A}$ and $c\in\ivr{c}$. By \cite{Hla2013b}, this is equivalent to infeasibility of the system \begin{align*} (\uvr{c_I})^Tx_I+(\Mid{c_J})^Tx_J&\leq(\Rad{c_J})^T|x_J|-1,\\ -(\ovr{c_I})^Tx_I-(\Mid{c_J})^Tx_J&\leq(\Rad{c_J})^T|x_J|+1,\\ \uvr{A_I}\,x_I+\Mid{A_J}x_J&\leq\Rad{A_J}|x_J|,\\ -\ovr{A_I}\,x_I-\Mid{A_J}x_J&\leq\Rad{A_J}|x_J|,\\ x_I&\geq0. \end{align*} Due to the absolute values, the system is nonlinear in general, and this is the reason why checking robust optimality is co-NP-hard. Equivalently, this system is infeasible if and only if \begin{subequations}\label{optAeExp} \begin{align} (\uvr{c_I})^Tx_I+(\Mid{c_J}-\Rad{c_J}\diag(s))^Tx_J&\leq-1,\\ -(\ovr{c_I})^Tx_I-(\Mid{c_J}+\Rad{c_J}\diag(s))^Tx_J&\leq1,\\ \uvr{A_I}\,x_I+(\Mid{A_J}-\Rad{A_J}\diag(s))x_J&\leq0,\\ -\ovr{A_I}\,x_I-(\Mid{A_J}+\Rad{A_J}\diag(s))x_J&\leq0,\\ x_I&\geq0 \end{align} \end{subequations} is infeasible for any sign vector $s\in\{\pm1\}^{|J|}$, where $|J|$ denotes the cardinality of $J$. The system \nref{optAeExp} is linear; however, we have to verify infeasibility of $2^{|J|}$ instances. When $x^*$ is a basic feasible solution, then $|J|\leq m\leq n$.
Thus, the number usually grows exponentially with respect to $m$, but not necessarily with respect to $n$. Therefore, we can possibly solve large problems provided the number of equations is low. \subsection{Sufficient condition}\label{ssSufCond} Since the number of systems \nref{optAeExp} can be very large, an easily computable sufficient condition for robust optimality is of interest. Let us rewrite \nref{optCond} as \begin{align}\label{optCond2} c_I^Tx_I+c_J^Tx_J=-1,\ \ A_Ix_I+A_Jx_J=0,\ \ x_I\geq0. \end{align} According to Farkas' lemma \cite{Fie2006,Schr1998}, this system is infeasible if and only if the dual system \begin{align}\label{optCondDual} A_I^Tu \leq c_I,\ \ A_J^Tu = c_J \end{align} is feasible. Thus, in order that the optimality condition holds true, the linear system \nref{optCondDual} must be feasible for each $A\in\imace{A}$ and $c\in\ivr{c}$. If $x^*$ is a basic non-degenerate solution, then $A_J$ is square. If, in addition, it is nonsingular, then the system $A_J^Tu = c_J$ has a unique solution, and it suffices to check whether this solution satisfies the remaining inequalities. Extending this idea to the interval case, consider the solution set defined as $$ \{u\in\R^m\mmid \exists A_J\in\imace{A}_J\,\exists c_J\in\ivr{c}_J: A_J^Tu = c_J\}. $$ There are plenty of methods to find an interval enclosure (superset) $\ivr{u}$ of this solution set; see e.g.\ \cite{Fie2006,Hla2014b,MooKea2009,Neu1990}. Now, if $$ \ovr{\imace{A}_I^T\ivr{u}} \leq \unum{c}_I, $$ where the left-hand side is evaluated by interval arithmetic, then we are sure that \nref{optCondDual} has a solution in $\ivr{u}$ for each realization of the interval data, and therefore the optimality criterion is satisfied. If $x^*$ is a basic degenerate solution, we can adopt a sufficient condition for checking a similar kind of robust feasibility of a mixed system of equations and inequalities, proposed recently in \cite{Hla2013b}. We briefly recall the method. 
First, solve the linear program \begin{align*} \max\alpha\st (\Mid{A_I})^Tu+\alpha e \leq \Mid{c_I},\ \ (\Mid{A_J})^Tu = \Mid{c_J}, \end{align*} where $e$ is the all-one vector. Let $u^*$ be its optimal solution. Let $B$ be an orthogonal basis of the null space of $(\Mid{A_J})^T$ and put $d:=Bu^*$. Now, compute an enclosure $\ivr{u}\in\IR^m$ of the solution set $$ \{u\in\R^m\mmid \exists A_J\in\imace{A}_J\,\exists c_J\in\ivr{c}_J: A_J^Tu = c_J,\ Bu=d\}. $$ Finally, if $$ \ovr{\imace{A}_I^T\ivr{u}} \leq \unum{c}_I, $$ then the optimality criterion is satisfied. \subsection{Seeking a candidate}\label{ssCand} If we are not given a candidate vector $x^*$ for a robust optimal solution, then it may be a computationally difficult problem to find a robust optimal solution or to prove that none exists. Below, we propose a simple heuristic for finding a promising candidate. A candidate should be robustly feasible. Since $x^*\geq0$, the condition \nref{optFeas} can be rewritten in linear form as \begin{align*} (\Mid{A}x^*-\Mid{b})+\Rad{A}x^*\leq\Rad{b},\ \ -(\Mid{A}x^*-\Mid{b})+\Rad{A}x^*\leq\Rad{b}, \end{align*} or, equivalently, as \begin{align}\label{ineqSetRob} \omace{A}x^*\leq\ovr{b},\ \ \umace{A}x^*\geq\uvr{b}. \end{align} This motivates us to find a good candidate $x^*$ as an optimal solution of the linear program \begin{align*} \min (\Mid{c})^Tx\st x\in\mna{F}, \end{align*} where \begin{align*} \mna{F}:=\{x\in\R^n\mmid \omace{A}x\leq\ovr{b},\ \ \umace{A}x\geq\uvr{b},\ \ x\geq0\}. \end{align*} \subsection{The set of robust solutions in more detail} Let us denote by $\Ss$ the set of all robust optimal solutions. \begin{proposition}\label{propSsEqUniConv} $\Ss$ is formed by a union of at most $\binom{n}{\lfloor n/2\rfloor}$ convex polyhedral sets. \end{proposition} \begin{proof} Each $x\in\Ss$ must lie in $\mna{F}$ and must satisfy the optimality criterion. 
Since the optimality criterion does not depend directly on $x$, but only on the active set $I$ of $x$, the set \begin{align*} \mna{F}_I:=\mna{F}\cap \{x\in\R^n\mmid x_i=0,\ i\in I,\ x_i>0,\ i\not\in I\} \end{align*} either lies entirely in $\Ss$ or is disjoint from $\Ss$. Hence $\Ss$ is formed by a union of the sets $\mna{F}_I$ for several index sets $I\subseteq \seznam{n}$. Since \begin{align*} \mna{F}_I\subseteq\Ss\wedge I\subseteq I'\ \Rightarrow\ \mna{F}_{I'}\subseteq\Ss, \end{align*} we can replace the sets $\mna{F}_I$ by \begin{align*} \tilde{\mna{F}}_I:=\mna{F}\cap \{x\in\R^n\mmid x_i=0,\ i\in I\}. \end{align*} Now, since $\tilde{\mna{F}}_I\supseteq\tilde{\mna{F}}_{I'}$ for $I\subseteq I'$, not all subsets of $\seznam{n}$ have to be taken into account. By Sperner's theorem (see, e.g., \cite{MatNes2008}), it is sufficient to consider only $\binom{n}{\lfloor n/2\rfloor}$ of them. \end{proof} As illustrated by the following example, the robust solution set $\Ss$ need not be topologically connected. \begin{example} Consider the problem \begin{align*} \min x_1+x_2+c_3x_3 \st x_1+x_2+x_3=1,\ x_1-x_2=b_2,\ x_1,x_2,x_3\geq0, \end{align*} where $c_3\in[0.5,1.5]$ and $b_2\in[-1,1]$. The robust feasible set $\mna{F}$ is the triangle with vertices $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$. Concerning optimality, the system \nref{optCond} reads \begin{align}\label{sysRobOptEx} x_1+x_2+c_3x_3=-1,\ \ x_1+x_2+x_3=0,\ \ x_1-x_2=0,\ \ x_I\geq0. \end{align} If $3\not\in I$, then \nref{sysRobOptEx} has the solution $x=(1,1,-2)$ when $c_3=1.5$. Thus, we must have $3\in I$. If $I=\{3\}$, then \nref{sysRobOptEx} has the solution $x=(-1,-1,2)$ when $c_3=0.5$. If $I=\{1,3\}$, then \nref{sysRobOptEx} has no solution for any $c_3$, and the corresponding $\tilde{\mna{F}}_I=\{(0,1,0)\}$. Similarly, for $I=\{2,3\}$, the system \nref{sysRobOptEx} has no solution for any $c_3$, and $\tilde{\mna{F}}_I=\{(1,0,0)\}$. 
In summary, the robust solution set $\Ss$ consists of the two isolated points $(1,0,0)$ and $(0,1,0)$. \end{example} \section{Applications} \subsection{Transportation problem} Since linear programming is such a widely used technique, the proposed concept of a robust solution and the corresponding methodology are applicable in many practical problems. These include the transportation problem and network flows, among others, in which the constraint matrix $A$ represents an incidence matrix of an (undirected or directed) graph. By imposing suitable intervals ($[0,1]$ or $[-1,1]$) as the matrix entries, we can model uncertainty in the knowledge of the edge existence. More concretely, consider the transportation problem \begin{align*} \min\ &\sum_{i=1}^m\sum_{j=1}^nc_{ij}x_{ij}\\ \st &\sum_{i=1}^m\alpha_{ij}x_{ij}=b_j,\quad j=1,\dots,n,\\ &\sum_{j=1}^n\alpha_{ij}x_{ij}=a_i,\quad i=1,\dots,m,\\ &x_{ij}\geq0,\quad i=1,\dots,m,\ j=1,\dots,n, \end{align*} where $c_{ij}\in\inum{c}_{ij}$, $a_i\in\inum{a}_i$ and $b_j\in\inum{b}_j$. In contrast to the standard formulation with $\alpha_{ij}\in\{0,1\}$, and in order to obtain interval parameters, we allow $\alpha_{ij}$ to attain values in the interval $[0,1]$. Robustness here means that an optimal solution $x^*$ remains optimal for any $c_{ij}\in\inum{c}_{ij}$. Moreover, $x^*$ should also remain optimal even when some selected edges are removed. The edge removal can be compensated by a suitable change of $a_i\in\inum{a}_i$ and $b_j\in\inum{b}_j$. Herein, the intervals $\inum{a}_i$ and $\inum{b}_j$ are interpreted as tolerances in supplies and demands. \begin{example} For concreteness, let \begin{align*} C=\begin{pmatrix}20& 30& 10\\10& 20& 50\\40& 10& 20\end{pmatrix},\quad a=(100, 160, 250),\quad b=(150, 210, 150). \end{align*} Suppose that the objective coefficients $c_{ij}$ are known with $10\%$ precision only. Next suppose that the supplies and demands have a $10\%$ tolerance within which they can be adjusted. 
Finally, suppose that the connections from the second supplier to the second and third demanders, and from the third supplier to the first demander, are all uncertain. Thus, we have the interval data \begin{align*} \imace{C}&=\begin{pmatrix} [18,22]& [27,33]& [9,11]\\ [9,11]& [18,22]& [45,55]\\ [36,44]& [9,11]& [18,22] \end{pmatrix},\\ \ivr{a}&=([90,110],\, [144,176],\, [225,275]),\\ \ivr{b}&=([135,165],\,[189,231],\,[135,165])\\ \inum{\alpha}_{22}&=\inum{\alpha}_{23}=\inum{\alpha}_{31}=[0,1],\ \ \inum{\alpha}_{ij}=1,\ (i,j)\not\in\{(2,2),(2,3),(3,1)\}. \end{align*} For the midpoint data, the optimal solution is \begin{align*} \begin{pmatrix}0& 0& 100\\150& 10& 0\\0& 200& 50\end{pmatrix}. \end{align*} It is robustly feasible; however, it is not robustly optimal. Let us try our method from Section~\ref{ssCand}. It recommends solving the problem \begin{align*} \min\ &\sum_{i=1}^m\sum_{j=1}^n\Mid{c}_{ij}x_{ij}\\ \st &\sum_{i=1}^m\onum{\alpha}_{ij}x_{ij}\leq \onum{b}_j,\ \ \sum_{i=1}^m\unum{\alpha}_{ij}x_{ij}\geq \unum{b}_j, \quad j=1,\dots,n,\\ &\sum_{j=1}^n\onum{\alpha}_{ij}x_{ij}\leq\onum{a}_i,\ \ \sum_{j=1}^n\unum{\alpha}_{ij}x_{ij}\geq\unum{a}_i, \quad i=1,\dots,m,\\ &x_{ij}\geq0,\quad i=1,\dots,m,\ j=1,\dots,n. \end{align*} Its solution is \begin{align*} \begin{pmatrix}0& 0& 99\\144& 0& 0\\0& 189& 36\end{pmatrix}. \end{align*} It turns out that it is both robustly feasible and optimal, so it can serve as the robust solution in question. As the sufficient condition fails, optimality had to be verified by exhaustive infeasibility checking of the 16 systems of type \nref{optAeExp}. Nevertheless, if the edge $(2,2)$ becomes certain, and only the others are uncertain, then the sufficient condition succeeds. \end{example} \begin{example}\label{exDpNum} We carried out a limited numerical study of the efficiency of the sufficient condition and of the heuristic for finding a candidate. 
In the transportation problem with given dimensions $m$ and $n$, we randomly chose $C$, $a$ and $b$. In $C$, $10\%$ of randomly chosen entries were subject to $10\%$ relative uncertainty. Tolerances for supplies and demands were also $10\%$. A given number of randomly selected edges were considered uncertain, i.e., the corresponding coefficients of $x_{ij}$ ranged in $[0,1]$. \begin{table}[t] \caption{(Example~\ref{exDpNum}) Computing time and percentage rate of finding a robust optimal solution for different dimensions and numbers of uncertain edges in the transportation problem.\label{tabDpNum}} \begin{center} \tabcolsep=5pt \renewcommand\arraystretch{1.1} \begin{tabular}{cccccc} \toprule $m$ & $n$ & $[0,1]$-edges & candidate time (in $s$) & robustness time (in $s$) & success rate (in \%)\\ \midrule \multirow{3}{*}{5} & \multirow{3}{*}{10} & 2 & 0.03138 & 0.01268 & 25.66 \\ && 4 & 0.03155 & 0.00673 & 10.63 \\ && 6 & 0.03178 & 0.00415 & 4.89 \\ \cline{3-6} \multirow{3}{*}{10} & \multirow{3}{*}{15} & 3 & 0.06980 & 0.02139 & 18.84 \\ && 5 & 0.06904 & 0.01447 & 11.90 \\ && 7 & 0.06873 & 0.01011 & 7.46 \\ \cline{3-6} \multirow{3}{*}{10} & \multirow{3}{*}{30} & 4 & 0.1364 & 0.02919 & 9.30 \\ && 6 & 0.1370 & 0.02035 & 5.96 \\ && 8 & 0.1336 & 0.01457 & 3.94 \\ \cline{3-6} \multirow{3}{*}{20} & \multirow{3}{*}{50} & 4 & 0.4737 & 0.11660 & 4.47 \\ && 7 & 0.4520 & 0.08784 & 2.45 \\ && 10 & 0.4265 & 0.06585 & 1.68 \\ \cline{3-6} \multirow{3}{*}{30} & \multirow{3}{*}{70} & 6 & 1.0825 & 0.2666 & 1.04 \\ && 9 & 1.0119 & 0.2177 & 0.80 \\ && 12 & 0.9657 & 0.1853 & 0.42 \\ \cline{3-6} \multirow{3}{*}{50} & \multirow{3}{*}{100} & 1 & 1.6982 & 0.04426 & 0 \\ && 3 & 1.8670 & 0.04975 & 0 \\ && 5 & 2.0025 & 0.08210 & 0 \\ \bottomrule \end{tabular} \end{center} \end{table} Table~\ref{tabDpNum} displays the results. Each row is a mean of 10000 runs, and shows the average running time in seconds and the success rate. 
The success rate measures for how many instances the heuristic found a candidate that was subsequently verified as a robust optimal solution by the sufficient condition. This means that the number of robust solutions may be higher, but we were not able to check this because of the intractability of the problem. The displayed running time covers both the heuristic for finding a suitable candidate and the sufficient condition for checking robustness. The results show that in low computing time we found robust optimal solutions in 5\% to 15\% of the small dimensional cases. In large dimensions, the number of robust solutions is likely to be small. Even when we decreased the number of uncertain edges, the sufficient condition mostly failed. This may be due to the 500 interval costs in the last data set. \end{example} \subsection{Nutrition problem} The diet problem is a classical linear programming problem, in which a combination of $n$ different types of food must be found such that $m$ nutritional demands are satisfied and the overall cost is minimal. The mathematical formulation has exactly the form of \nref{lp}, where $x_j$ is the number of units of food $j$ to be consumed, $b_i$ is the required amount of nutrient $i$, $c_j$ is the price per unit of food $j$, and $a_{ij}$ is the amount of nutrient $i$ contained in one unit of food $j$. Since the amounts of nutrients in foods are not constant, it is reasonable to consider intervals of possible ranges instead of fixed values. The same considerations apply to the costs. The requirement that nutritional demands be satisfied as exact equations is too strict. Usually, there are large tolerances on the amounts of consumed nutrients (such as calories, proteins, vitamins, etc.), which leads to quite wide intervals of admissible tolerances for the entries of $b$. 
In this interval-valued diet problem, we would like to find an optimal solution $x$ that is robustly feasible in the sense that for each possible instance of $A$ there is an admissible $b$ such that $Ax=b$. This is exactly the robustness model we are dealing with in this paper. \begin{example} Consider Stigler's nutrition model \cite{Dan1963}; the GAMS model file containing the data is posted at \url{http://www.gams.com/modlib/libhtml/diet.htm}. The problem consists of $m=9$ nutrients and $n=20$ types of food. The data in $A$ are already normalized so that they give the nutritive values of foods per dollar spent. This means that the objective is $c=(1,\dots,1)^T$. Suppose that the entries of $A$ can vary up to $5\%$ of their nominal values, and the tolerances in $b$ are $10\%$. Then the method from Section~\ref{ssCand} finds the solution \begin{align*} x^* =(&0.0256, 0.0067, 0.0429, 0, 0, 0.0015, 0.0245,\\ &0.0108, 0, 0, 0, 0.0109, 0, 0, 0.0016, 0, 0, 0, 0, 0)^T. \end{align*} Even though our sufficient condition fails, it turns out by checking infeasibility of \nref{optAeExp} for each sign vector that this solution is robustly optimal. \end{example} \section{General form of interval linear programming}\label{sGen} For the sake of simplicity of exposition, we considered the equality form of linear programming \nref{lp} in the first part of this paper. It is well known in interval linear programming that different forms are not equivalent to each other \cite{Hla2012a}, since transformations between the formulations lead to dependencies between interval coefficients. That is why we consider a general form of interval linear programming in this section and extend the results developed so far. 
The general form with $m$ equations, $m'$ inequalities and variables $x\in\R^n$, $y\in\R^{n'}$ reads \begin{align}\label{lpGen} \min c^Tx+d^Ty\st Ax+By=b,\ Cx+Dy\leq a,\ x\geq0, \end{align} where $a\in\ivr{a}$, $b\in\ivr{b}$, $c\in\ivr{c}$, $d\in\ivr{d}$, $A\in\imace{A}$, $B\in\imace{B}$, $C\in\imace{C}$ and $D\in\imace{D}$. Let $(x^*,y^*)$ be a candidate solution. The problem is now stated as follows. \begin{quote} For every $c\in\ivr{c}$, $d\in\ivr{d}$, $A\in\imace{A}$, $B\in\imace{B}$, $C\in\imace{C}$, $D\in\imace{D}$, do there exist $a\in\ivr{a}$ and $b\in\ivr{b}$ such that $(x^*,y^*)$ is optimal to \nref{lpGen}? \end{quote} As before, we study the feasibility and optimality conditions separately. \subsection{Feasibility} Here, we have to check whether for each $A\in\imace{A}$, $B\in\imace{B}$, $C\in\imace{C}$ and $D\in\imace{D}$, there are $a\in\ivr{a}$ and $b\in\ivr{b}$ such that $Ax^*+By^*=b$ and $Cx^*+Dy^*\leq a$. We can check the equations and inequalities independently. The equations are dealt with in a similar manner as in Section~\ref{sEqForm}, and the necessary and sufficient condition is \begin{align}\label{condGenFeasEq} |\Mid{A}x^*+\Mid{B}y^*-\Mid{b}|+\Rad{A}x^*+\Rad{B}|y^*|\leq\Rad{b}. \end{align} For the inequalities, we have the following characterisation. \begin{proposition} For each $C\in\imace{C}$ and $D\in\imace{D}$, there is $a\in\ivr{a}$ such that $Cx^*+Dy^*\leq a$ if and only if \begin{align}\label{condGenFeasIneq} \omace{C}x^*+\Mid{D}y^*+\Rad{D}|y^*|\leq\ovr{a}. \end{align} \end{proposition} \begin{proof} For each $C\in\imace{C}$ and $D\in\imace{D}$ we have \begin{align*} Cx^*+Dy^* &=Cx^*+\Mid{D}y^*+(D-\Mid{D})y^* \leq Cx^*+\Mid{D}y^*+|D-\Mid{D}||y^*|\\ &\leq \omace{C}x^*+\Mid{D}y^*+\Rad{D}|y^*|. \end{align*} This inequality chain holds with equality for $C:=\omace{C}$ and $D:=\Mid{D}+\Rad{D}\diag(\sgn(y^*))$ since $|y^*|=\diag(\sgn(y^*))y^*$. That is, the largest value of the left-hand side is attained for this setting. 
Therefore, the feasibility condition holds true if and only if the inequality is satisfied for this setting of $C$ and $D$ and for the largest possible right-hand side vector $a:=\ovr{a}$. \end{proof} \subsection{Optimality} For checking optimality, we first have to define the active set. For the nonnegativity constraints, we can use the standard definition $I:=\{i=1,\dots,n\mmid x^*_i=0\}$. However, defining the active set for the other inequalities is problematic due to the variations in $C$ and $D$. Fortunately, we can define it as follows. \begin{proposition}\label{propGenOpt} Each instance of the inequality system $Cx^*+Dy^*\leq a$, $C\in\imace{C}$, $D\in\imace{D}$, with a suitable $a\in\ivr{a}$ includes the index set \begin{align*} K:=\{k\mmid \umace{C}_{k*}x^*+\Mid{D}_{k*}y^*-\Rad{D}_{k*}|y^*|\geq \unum{a}_k\} \end{align*} as a subset of its active set. Moreover, $K$ is attained as an active set for $C:=\umace{C}$, $D:=\Mid{D}-\Rad{D}\diag(\sgn(y^*))$ and $a:=\max\{\uvr{a},Cx^*+Dy^*\}$. \end{proposition} \begin{proof} First, we show that $K$ is attained for $C:=\umace{C}$, $D:=\Mid{D}-\Rad{D}\diag(\sgn(y^*))$, and $a:=\max\{\uvr{a},Cx^*+Dy^*\}\in\ivr{a}$. The condition $Cx^*+Dy^*\leq\ovr{a}$, and hence also $a\in\ivr{a}$, follows from feasibility of $(x^*,y^*)$, so $a$ is well defined. For $k\in K$, we have \begin{align*} {C}_{k*}x^*+{D}_{k*}y^* =\umace{C}_{k*}x^*+\Mid{D_{k*}}y^*-\Rad{D_{k*}}|y^*| \geq \unum{a}_k, \end{align*} whence ${C}_{k*}x^*+{D}_{k*}y^*=a_k$. For $k\not\in K$, we have \begin{align*} {C}_{k*}x^*+{D}_{k*}y^* =\umace{C}_{k*}x^*+\Mid{D_{k*}}y^*-\Rad{D_{k*}}|y^*| < \unum{a}_k=a_k. \end{align*} Now, let $C\in\imace{C}$, $D\in\imace{D}$ and $k\in K$ be arbitrary. From the feasibility of $(x^*,y^*)$ and \begin{align*} \unum{a}_k \leq\umace{C}_{k*}x^*+\Mid{D_{k*}}y^*-\Rad{D_{k*}}|y^*| \leq{C}_{k*}x^*+{D}_{k*}y^* \end{align*} we can put $a_k:={C}_{k*}x^*+{D}_{k*}y^*\in\inum{a}_k$. 
Therefore, $k$ lies in the active set corresponding to $C$, $D$ and $a$. \end{proof} Notice that the larger the active set, the better, since we then have more constraints in the optimality criterion and the solution is more likely to be optimal. Proposition~\ref{propGenOpt} says that we can take $K$ as the active set for the interval inequalities. Since this $K$ is the smallest active set over all $C\in\imace{C}$ and $D\in\imace{D}$, it represents the worst-case scenario. Similarly, the right-hand side vector $a$ from Proposition~\ref{propGenOpt} is the best response: if we decrease it, then $(x^*,y^*)$ becomes infeasible or $a$ leaves $\ivr{a}$, and if we increase it, then the active set becomes smaller. To state the optimality criterion comprehensively, we first introduce some notation. Let $\tilde{A}:=A_I$, $\tilde{B}:=(A_J\mid B)$, $\tilde{c}:=c_I$, $\tilde{d}:=(c_J, d)$, $\tilde{x}:=x_I$, $\tilde{y}:=(x_J,y)$. Let $\tilde{C}$ be the restriction of $C_I$ to the rows indexed by $K$, and similarly let $\tilde{D}$ be the restriction of $(C_J\mid D)$ to the rows indexed by $K$. For a concrete setting $a\in\ivr{a}$, $b\in\ivr{b}$, $c\in\ivr{c}$, $d\in\ivr{d}$, $A\in\imace{A}$, $B\in\imace{B}$, $C\in\imace{C}$ and $D\in\imace{D}$, a feasible solution $(x^*,y^*)$ is optimal if and only if the system \begin{align}\label{sysGenOpt} \tilde{c}^T\tilde{x}+\tilde{d}^T\tilde{y}\leq-1,\ \ \tilde{A}\tilde{x}+\tilde{B}\tilde{y}=0,\ \ \tilde{C}\tilde{x}+\tilde{D}\tilde{y}\leq0,\ \ \tilde{x}\geq0 \end{align} has no solution. In order that $(x^*,y^*)$ is robustly optimal, this system has to be infeasible for each realization from the given intervals. 
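Assembling the reduced data $\tilde{A}$, $\tilde{B}$, $\tilde{c}$, $\tilde{d}$, $\tilde{C}$, $\tilde{D}$ from the index sets is pure bookkeeping; a sketch (hypothetical names; $I$, $J$, $K$ passed as index lists):

```python
import numpy as np

def reduced_data(A, B, C, D, c, d, I, J, K):
    """Reduced (tilde) data: tilde_A = A_I, tilde_B = (A_J | B),
    tilde_c = c_I, tilde_d = (c_J, d); tilde_C and tilde_D restrict
    C_I and (C_J | D) to the rows indexed by K."""
    A, B, C, D = (np.asarray(M, dtype=float) for M in (A, B, C, D))
    c, d = np.asarray(c, dtype=float), np.asarray(d, dtype=float)
    tilde_A = A[:, I]
    tilde_B = np.hstack([A[:, J], B])
    tilde_c = c[I]
    tilde_d = np.concatenate([c[J], d])
    tilde_C = C[np.ix_(K, I)]
    tilde_D = np.hstack([C[:, J], D])[K, :]
    return tilde_A, tilde_B, tilde_c, tilde_d, tilde_C, tilde_D
```

The linear systems to be tested for infeasibility are then built from these blocks exactly as displayed in the text.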
By \cite{Hla2013b}, \nref{sysGenOpt} is infeasible for each realization if and only if the system \begin{align*} \uvr{\tilde{c}}^T\tilde{x}+(\Mid{\tilde{d}})^T\tilde{y} &\leq(\Rad{\tilde{d}})^T|\tilde{y}|-1,\\ \umace{\tilde{A}}\tilde{x}+\Mid{\tilde{B}}\tilde{y} &\leq\Rad{\tilde{B}}|\tilde{y}|,\\ -\omace{\tilde{A}}\tilde{x}-\Mid{\tilde{B}}\tilde{y} &\leq\Rad{\tilde{B}}|\tilde{y}|,\\ \umace{\tilde{C}}\tilde{x}+\Mid{\tilde{D}}\tilde{y} &\leq\Rad{\tilde{D}}|\tilde{y}|,\\ \tilde{x}&\geq0 \end{align*} is infeasible. Even though we reduced infeasibility checking from infinitely many systems to only one, the resulting system is nonlinear. As in Section~\ref{sEqForm}, we can formulate it equivalently as infeasibility of \begin{align*} \uvr{\tilde{c}}^T\tilde{x} +(\Mid{\tilde{d}}-\Rad{\tilde{d}}\diag(s))^T\tilde{y} &\leq-1,\\ \umace{\tilde{A}}\tilde{x} +(\Mid{\tilde{B}}-\Rad{\tilde{B}}\diag(s))\tilde{y} &\leq 0,\\ -\omace{\tilde{A}}\tilde{x} -(\Mid{\tilde{B}}+\Rad{\tilde{B}}\diag(s))\tilde{y} &\leq 0,\\ \umace{\tilde{C}}\tilde{x} +(\Mid{\tilde{D}}-\Rad{\tilde{D}}\diag(s))\tilde{y} &\leq 0,\\ \tilde{x}&\geq0 \end{align*} for every $s\in\{\pm1\}^{n'+|J|}$. Now, we have to check infeasibility of $2^{n'+|J|}$ linear systems, which is a large but finite number. In case there are few sign-unrestricted variables and few positive components in $x^*$, the number of systems can be acceptable for computation. \subsection{Sufficient condition} Similarly as in Section~\ref{ssSufCond}, we can derive a sufficient condition for optimality checking. We discuss it briefly and refer to \cite{Hla2013b} for more details. By Farkas' lemma, the optimality criterion holds true if and only if the dual system \begin{align}\label{optGenCondDual} \tilde{A}^Tu-\tilde{C}^Tv\leq \tilde{c},\ \ \tilde{B}^Tu-\tilde{D}^Tv = \tilde{d},\ \ v \geq 0 \end{align} is feasible for each interval setting. 
First, solve the linear program \begin{align*} \max\alpha\st (\Mid{\tilde{A}})^Tu-(\Mid{\tilde{C}})^Tv+\alpha e \leq\Mid{\tilde{c}},\ \ (\Mid{\tilde{B}})^Tu-(\Mid{\tilde{D}})^Tv = \Mid{\tilde{d}},\ \ v \geq \alpha e. \end{align*} Let $(u^*,v^*)$ be its optimal solution, let $(\hat{B}\mid-\hat{D})$ be an orthogonal basis of the null space of $((\Mid{\tilde{B}})^T\mid-(\Mid{\tilde{D}})^T)$, and put $\hat{d}:=\hat{B}u^*-\hat{D}v^*$. Compute an interval enclosure $(\ivr{u},\ivr{v})$ of the solution set of the square interval system \begin{align*} \{(u,v)\mmid \exists \tilde{B}\in\tilde{\imace{B}} \exists \tilde{D}\in\tilde{\imace{D}} \exists \tilde{d}\in \tilde{\imace{d}}: \tilde{B}^Tu-\tilde{D}^Tv = \tilde{d},\ \ \hat{B}u-\hat{D}v=\hat{d}\}, \end{align*} and check whether $\uvr{v}\geq0$ and $$ \ovr{\tilde{\imace{A}}^T\ivr{u}-\tilde{\imace{C}}^T\ivr{v}} \leq \tilde{\uvr{c}}. $$ If both conditions are satisfied, then \nref{optGenCondDual} has a solution in the set $(\ivr{u},\ivr{v})$ for each interval realization, and we can claim that the optimality criterion holds true. \subsection{Seeking a candidate} Herein, we generalize the heuristic from Section~\ref{ssCand} to find a good candidate for a robust optimal solution. Concerning the feasibility question, the conditions \nref{condGenFeasEq} and \nref{condGenFeasIneq} are not convenient due to their nonlinearities. Thus, we state an equivalent, linear form of feasibility testing. \begin{proposition} A vector $(x,y)$ is robustly feasible if and only if $y$ can be written as $y=y^1-y^2$ such that \begin{subequations}\label{condGenFeasProp} \begin{align}\label{condGenFeasProp1} \omace{A}x+\omace{B}y^1-\umace{B}y^2&\leq\ovr{b},\\ \label{condGenFeasProp2} \umace{A}x+\umace{B}y^1-\omace{B}y^2&\geq\uvr{b},\\ \label{condGenFeasProp3} \omace{C}x+\omace{D}y^1-\umace{D}y^2&\leq\ovr{a},\\ x,y^1,y^2&\geq0. \end{align} \end{subequations} \end{proposition} \begin{proof} Let $(x,y^1,y^2)$ be a solution to \nref{condGenFeasProp}. 
For any $A\in\imace{A}$ and $B\in\imace{B}$ we have \begin{align*} Ax+B(y^1-y^2)\leq\omace{A}x+\omace{B}y^1-\umace{B}y^2&\leq\ovr{b}, \end{align*} and \begin{align*} Ax+B(y^1-y^2)\geq\umace{A}x+\umace{B}y^1-\omace{B}y^2&\geq\uvr{b}, \end{align*} whence $Ax+B(y^1-y^2)\in\ivr{b}$. For any $C\in\imace{C}$ and $D\in\imace{D}$ we have \begin{align*} Cx+D(y^1-y^2)\leq\omace{C}x+\omace{D}y^1-\umace{D}y^2&\leq\ovr{a}, \end{align*} so $(x,y^1-y^2)$ is robustly feasible. Conversely, let $(x,y)$ be robustly feasible. Put $y^1:=y^+$ and $y^2:=y^-$, the positive and negative parts of $y$. From \nref{condGenFeasEq}, we derive \begin{align*} |\Mid{A}x+\Mid{B}(y^1-y^2)-\Mid{b}|+\Rad{A}x+\Rad{B}(y^1+y^2) \leq\Rad{b}. \end{align*} This inequality gives rise to two linear inequalities: \begin{align*} \Mid{A}x+\Mid{B}(y^1-y^2)-\Mid{b}+\Rad{A}x+\Rad{B}(y^1+y^2) &\leq\Rad{b},\\ -\Mid{A}x-\Mid{B}(y^1-y^2)+\Mid{b}+\Rad{A}x+\Rad{B}(y^1+y^2) &\leq\Rad{b}, \end{align*} which are equivalent to \nref{condGenFeasProp1}--\nref{condGenFeasProp2}. Similarly, \nref{condGenFeasIneq} implies \begin{align*} \omace{C}x+\Mid{D}(y^1-y^2)+\Rad{D}(y^1+y^2)\leq\ovr{a}, \end{align*} which is equivalent to \nref{condGenFeasProp3}. \end{proof} Now, we recommend taking as a candidate solution the pair $(x^*,y^{*1}-y^{*2})$, where $(x^*,y^{*1},y^{*2})$ is an optimal solution of the linear program \begin{align*} \min (\Mid{c})^Tx+(\Mid{d})^Ty^1-(\Mid{d})^Ty^2 \st (\ref{condGenFeasProp}). \end{align*} \subsection{The set of robust solutions in more detail} As before, we denote by $\Ss$ the set of all robust optimal solutions and state the following topological result on it. \begin{proposition}\label{propSsGenUniConv} $\Ss$ is formed by a union of at most $\binom{m'}{\lfloor m'/2\rfloor}\binom{n}{\lfloor n/2\rfloor}$ convex polyhedral sets. \end{proposition} \begin{proof} Each $x\in\Ss$ must satisfy both the feasibility and optimality criteria. 
The robust feasible set is a convex polyhedral set, so we focus on the optimality issue. Optimality depends only on the active sets $I$ and $K$, not on the concrete value of $x$. Given $I$ and $K$, the corresponding set \begin{align*} \mna{F}\cap \{(x,y)\in\R^{n+n'}\mmid &x_i=0,\ i\in I,\ x_i>0,\ i\not\in I,\\ &\umace{C}_{k*}x+\Mid{D}_{k*}y-\Rad{D}_{k*}|y|\geq\unum{a}_k,\ k\in K,\\ &\umace{C}_{k*}x+\Mid{D}_{k*}y-\Rad{D}_{k*}|y|<\unum{a}_k,\ k\not\in K\} \end{align*} either is a subset of $\Ss$ or is disjoint from it. Since larger $I$ and $K$ preserve optimality, we can remove the strict inequalities, and $\Ss$ is formed by a union of some of the sets \begin{align*} \mna{F}\cap \{(x,y)\in\R^{n+n'}\mmid &x_i=0,\ i\in I,\ \umace{C}_{k*}x+\Mid{D}_{k*}y-\Rad{D}_{k*}|y|\geq\unum{a}_k,\ k\in K\}. \end{align*} There are $2^{m'+n}$ possibilities to choose $I$ and $K$, but by Sperner's theorem again, it is sufficient to consider at most $\binom{m'}{\lfloor m'/2\rfloor}\binom{n}{\lfloor n/2\rfloor}$ of them. It remains to prove that the set of feasible solutions fulfilling the active set requirements is a convex polyhedral set. Concerning $I$, the condition $x_i=0$, $i\in I$, obviously preserves convexity. Concerning $K$, the condition \begin{align*} \umace{C}_{k*}x+\Mid{D}_{k*}y-\Rad{D}_{k*}|y|\geq \unum{a}_k \end{align*} can be reformulated as \begin{align*} \umace{C}_{k*}x+\Mid{D}_{k*}y-\Rad{D}_{k*}z\geq \unum{a}_k,\ \ z\geq y,\ \ z\geq -y. \end{align*} These inequalities describe a convex polyhedral set, and its projection to the $(x,y)$-subspace is also convex polyhedral. \end{proof} \section{Conclusion} We introduced a novel kind of robustness in linear programming. While establishing some basic properties, several open questions arose. For example, the robust optimal solution set $\Ss$ may be disconnected, but at most how many components can it have? 
Similarly, how tight is the bound on the number of convex polyhedral sets (Propositions~\ref{propSsEqUniConv} and~\ref{propSsGenUniConv}) of which the set $\Ss$ consists? \subsubsection*{Acknowledgments.} The author was supported by the Czech Science Foundation Grant P402/13-10660S. \bibliographystyle{abbrv}
https://arxiv.org/abs/1201.0660
Stable complexity and simplicial volume of manifolds
Let the complexity of a closed manifold M be the minimal number of simplices in a triangulation of M. Such a quantity is clearly submultiplicative with respect to finite coverings, and by taking the infimum over all finite coverings of M, normalized by the covering degree, we can promote it to a multiplicative invariant, a characteristic number already considered by Milnor and Thurston, which we call the "stable complexity" of M. We study here the relation between the stable complexity of M and Gromov's simplicial volume ||M||. It is immediate to show that ||M|| is smaller than or equal to the stable complexity of M, and it is natural to ask whether the two quantities coincide on aspherical manifolds with residually finite fundamental group. We show that this is not always the case: there is a constant C_n<1 such that ||M|| is smaller than C_n times the stable complexity for any hyperbolic manifold M of dimension at least 4. The question in dimension 3 is still open in general. We prove that the stable complexity equals ||M|| for any aspherical irreducible 3-manifold M whose JSJ decomposition consists of Seifert pieces and/or hyperbolic pieces commensurable with the figure-eight knot complement. The equality holds for all closed hyperbolic 3-manifolds if a particular three-dimensional version of the Ehrenpreis conjecture is true.
\section*{Introduction} Following Milnor and Thurston \cite{MiThu}, a numerical invariant $\alpha(M)$ associated to any closed $n$-manifold $M$ is a \emph{characteristic number} if for every degree-$d$ covering $M\stackrel{d}{\to} N$ we have $\alpha(M) = d\cdot \alpha(N)$. Two important characteristic numbers are the Euler characteristic $\chi(M)$ and the \emph{simplicial volume} $\|M\|$ introduced by Gromov~\cite{Gro}, which equals (up to a constant factor depending on $n$) the volume of $M$ when $M$ is a hyperbolic manifold. In \cite{MiThu} Milnor and Thurston introduce various characteristic numbers, including the following one. Let $\sigma(M)$ be the \emph{$\Delta$-complexity} of $M$, \emph{i.e.}~the minimal number of tetrahedra in a triangulation of $M$. We employ here the word ``triangulation'' in a loose sense, as is customary in geometric topology: a triangulation is the realization of $M$ as the glueing of finitely many simplices via some simplicial pairing of their facets. The $\Delta$-complexity is clearly not a characteristic number: for every degree-$d$ covering $M\stackrel d\to N$ we have $$\sigma(M)\leqslant d\cdot \sigma(N),$$ but such an inequality is very often strict, \emph{i.e.}~we typically get $\sigma(M)<d\cdot \sigma(N)$. We can however easily promote $\sigma$ to a characteristic number as follows. We define the \emph{stable $\Delta$-complexity} $\sigma_\infty(M)$ of $M$ by setting $$\sigma_\infty (M) = \inf_{\widetilde M \stackrel d\to M} \left\{\frac{\sigma(\widetilde M)}d\right\}$$ where the infimum is taken over all finite coverings $\widetilde M \stackrel d\to M$ of any finite degree $d$. Stable $\Delta$-complexity is easily seen to be a characteristic number, that is, we have $$\sigma_\infty(M) = d\cdot \sigma_\infty(N)$$ for every finite covering $M\stackrel d\to N$. The characteristic number $\sigma_\infty$ was first defined by Milnor and Thurston in \cite{MiThu}. 
The following easy inequalities are established in Subsection~\ref{easy:sub}. \begin{prop}\label{easy:prop} Let $M$ be a closed manifold. We have $$\|M\|\leqslant \sigma_\infty(M) \leqslant \sigma(M).$$ \end{prop} The main question we address here is the following: \begin{quest} \label{main:quest} For which closed manifolds $M$ do we have $\|M\|= \sigma_\infty (M)$? \end{quest} Among the motivations for studying such a problem, we mention the following (open) question of Gromov: \begin{quest}[\cite{Gromov2}, page 232]\label{gromov:conj} Let $M$ be an aspherical closed manifold. Does $\|M\|=0$ imply $\chi (M)=0$? \end{quest} It turns out that if the equality $\|M\| = \sigma_\infty(M)$ held on aspherical manifolds, then one could easily answer Gromov's question, thanks to the following simple fact, proved below in Subsection \ref{easy:sub}. \begin{prop} \label{chi:prop} Let $M$ be a closed manifold. If $\sigma_\infty (M)=0$ then $\chi(M)=0$. \end{prop} It is tempting to guess that $\|M\| = \sigma_\infty (M)$ at least when $M$ is hyperbolic, because $\pi_1(M)$ is residually finite and hence $M$ has plenty of finite coverings of arbitrarily large injectivity radius. However, we show here that this guess is wrong. \begin{teo} \label{4:teo} In every dimension $n\geqslant 4$ there is a constant $C_n<1$ such that $\|M\|\leqslant C_n\sigma_\infty(M)$ for every closed hyperbolic $n$-manifold $M$. \end{teo} A similar result holds if we replace stable complexity with stable integral simplicial volume (see Section~\ref{prel} and Theorem~\ref{integral:pre} below). We mention for completeness the following converse inequality, proved below in Subsection \ref{converse:subsection}. \begin{prop} \label{converse:prop} In every dimension $n\geqslant 2$ there is a constant $D_n>1$ such that $\sigma_\infty (M) \leqslant D_n\|M\|$ for every closed hyperbolic $n$-manifold $M$. \end{prop} Theorem \ref{4:teo} does not hold in dimensions two and three.
In dimension 2, it is easy to prove that $\sigma_\infty(S) = \|S\|$ for any closed hyperbolic surface $S$ (see Proposition~\ref{surface}). In dimension 3 we have the following. \begin{teo} \label{3:teo} There is a sequence $M_i$ of closed hyperbolic 3-manifolds such that $$\frac {\sigma_\infty (M_i)}{\|M_i\|} \to 1.$$ \end{teo} The main difference between dimensions two, three, and higher stems from the fact that the regular ideal hyperbolic $n$-simplex $\Delta^n$ can tile $\matH^n$ only in dimensions 2 and 3. The key observation that we use to prove Theorem \ref{4:teo} is that the dihedral angle of $\Delta^n$ does not divide $2\pi$ when $n\geqslant 4$. In dimension three Question \ref{main:quest} for hyperbolic 3-manifolds remains (as far as we know) open. Our ignorance on this point can be expressed as follows: we do not know any closed hyperbolic 3-manifold $M$ for which $\sigma_\infty(M) = \|M\|$, and we do not know any closed hyperbolic 3-manifold $M$ for which $\sigma_\infty(M)\neq \|M\|$. We refer the reader to Section~\ref{futuro:sec} for a brief discussion about some possible approaches to, and reformulations of, this problem. We know however various non-hyperbolic 3-manifolds for which Question \ref{main:quest} has a positive answer. \begin{teo} \label{JSJ:teo} Let $M$ be an irreducible manifold with infinite fundamental group, which decomposes along its JSJ decomposition into pieces, each homeomorphic to a Seifert manifold or a hyperbolic manifold commensurable with the figure-eight knot complement. Then $$\sigma_\infty(M) = \|M\|.$$ \end{teo} In particular if $M$ is a graph manifold with infinite fundamental group we have $\sigma_\infty (M) = \|M\| = 0$. If $M=DN$ is the double of the complement $N$ of the figure-eight knot we have $\sigma_\infty (M) = \|M\| = 2\|N\| = 4$ (the simplicial volume of bounded manifolds is defined in Section~\ref{prel}).
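The last value can be double-checked numerically (a sketch added here for illustration, not an argument from the paper): the figure-eight knot complement $N$ decomposes into two regular ideal tetrahedra, so the proportionality theorem recalled in Section~\ref{prel} gives $\|N\|={\rm vol}(N)/v_3=2$. The snippet below approximates $v_3$ through the Lobachevsky function, assuming the classical identity $v_3=3\Lambda(\pi/3)$:

```python
from math import sin, log, pi

def lobachevsky(theta, steps=100000):
    # Lobachevsky function L(theta) = -int_0^theta log|2 sin t| dt,
    # approximated by the midpoint rule (the logarithmic singularity at
    # t = 0 is integrable, and midpoints avoid evaluating it).
    h = theta / steps
    return -h * sum(log(2.0 * sin((i + 0.5) * h)) for i in range(steps))

v3 = 3 * lobachevsky(pi / 3)   # volume of the regular ideal tetrahedron
vol_N = 2 * v3                 # figure-eight complement: two such tetrahedra
print(round(v3, 4), vol_N / v3)
```

Any quadrature scheme works here; the midpoint rule is used only because it sidesteps the (integrable) singularity of the integrand at $t=0$.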
It is absolutely necessary to restrict ourselves to manifolds with infinite fundamental group, since a manifold $M$ with finite fundamental group has only finitely many coverings and hence $\sigma_\infty(M)>0$, whereas $\|M\|=0$ (since the simplicial volume is a characteristic number and vanishes on simply connected manifolds~\cite{Gro, Ivanov}). To prove Theorem \ref{JSJ:teo} we slightly modify the definition of $\sigma_\infty$ by using \emph{spines} instead of \emph{triangulations} in the spirit of Matveev complexity \cite{Mat}. The resulting invariant, which we denote by $c_\infty (M)$, is another characteristic number defined for any compact manifold $M$ of any dimension, possibly with boundary. When $M$ is an irreducible 3-manifold with infinite fundamental group we get $c_\infty(M) = \sigma_\infty(M)$. On more general 3-manifolds we have $\|M\| \leqslant c_\infty (M) \leqslant \sigma_\infty (M)$ and $c_\infty$ has a better behaviour than $\sigma_\infty$. For instance, we get $c_\infty(M)=0$ on any 3-manifold $M$ with finite fundamental group (in contrast with $\sigma_\infty$) and we can prove the following. \begin{teo} \label{additive:teo} The invariant $c_\infty$ is additive on connected sums and on the pieces of the JSJ decomposition. \end{teo} Note that Gromov norm is also additive on connected sums and on the pieces of the JSJ decomposition~\cite{Soma}. To deduce Theorem \ref{JSJ:teo} from Theorem \ref{additive:teo} it suffices to check that $c_\infty(M) = \|M\|$ when $M$ is an $S^1$-bundle over a surface or the complement of the figure-eight knot, two special cases that are easy to deal with. \subsection{Structure of the paper} We introduce in Section \ref{prel} the simplicial volume, stable integral volume, and stable $\Delta$-complexity, and prove some basic properties. Section \ref{higher} is devoted to dimension $n\geqslant 4$ and hence to the proof of Theorem \ref{4:teo}. 
In Section \ref{complexity:section} we introduce the stable complexity $c_\infty$ and in Section \ref{Three:section} we turn to 3-manifolds, thus proving Theorems \ref{3:teo}, \ref{JSJ:teo}, and \ref{additive:teo}. Section \ref{futuro:sec} contains some concluding remarks and open questions. \subsection{Acknowledgements} We thank Clara L\"oh and Juan Souto for useful conversations. \section{Preliminaries}\label{prel} We introduce in this section three characteristic numbers: the well-known \emph{simplicial volume} introduced by Gromov \cite{Gro}, a less-known variation, which uses integral homology instead of real homology and which we call \emph{stable integral volume}, and the \emph{stable $\Delta$-complexity}, first introduced by Milnor and Thurston in \cite{MiThu} and studied in this paper. A further characteristic number called \emph{stable complexity} uses spines instead of triangulations and is introduced in Section \ref{complexity:section}. \subsection{Simplicial volume} Let $M$ be a compact connected oriented $n$-manifold (possibly with boundary), and let $[M,\partial M]^\matZ$ be the integral fundamental class of $M$, \emph{i.e.}~the generator of $H_n(M,\partial M;\matZ)\cong\matZ$ corresponding to the orientation of $M$. The inclusion $\matZ\hookrightarrow \matR$ induces a map $H_n(M,\partial M;\matZ)\to H_n(M,\partial M;\matR)$ which sends $[M,\partial M]^\matZ$ into the real fundamental class $[M,\partial M]\in H_n(M,\partial M;\matR)$ of $M$. Following Gromov~\cite{Gro}, we define the \emph{simplicial volume} $\|M\|$ and the \emph{integral simplicial volume} $\|M\|^\matZ$ of $M$ as follows: \begin{align*} \|M\| &= \inf\left\{ \sum_{i=1}^k |\lambda_i|\, , \ \left[ \sum_{i=1}^k \lambda_i\sigma_i\right]=[M,\partial M]\,\in\, H_n(M,\partial M;\matR)\,\right\}\ \in\ \matR,\\ \|M\|^\matZ &= \inf\left\{ \sum_{i=1}^k |\lambda_i|\, , \ \left[ \sum_{i=1}^k \lambda_i\sigma_i\right]=[M,\partial M]^\matZ\,\in\, H_n(M,\partial M;\matZ)\,\right\}\ \in\ \matZ . 
\end{align*} The (integral) simplicial volume does not depend on the orientation of $M$ and the (integral) simplicial volume of a nonorientable manifold is defined as half the volume of its orientable double covering (hence the integral version may be a half-integer). Moreover, the (integral) simplicial volume of a disconnected manifold is the sum of the simplicial volumes of its components. As mentioned above, the simplicial volume is a characteristic number, \emph{i.e.}~it is multiplicative under finite coverings~\cite{Gro}. On the contrary, the integral simplicial volume is only submultiplicative: indeed, every characteristic number vanishes on manifolds that admit finite non-trivial self-coverings, \emph{e.g.}~on $S^1$, while $\| M\|^\matZ\geqslant 1$ for every closed orientable manifold $M$. We may therefore define the \emph{stable} integral simplicial volume $\| M\|^\matZ_\infty$ as follows: $$\| M\|_\infty^\matZ = \inf_{\widetilde M \stackrel d\to M} \left\{\frac{\| \widetilde M\|^\matZ}d\right\}\ .$$ As observed in Proposition~\ref{last:prop}, the stable integral simplicial volume bounds the Euler characteristic from above (up to a constant depending only on the dimension), so it can be exploited to study Gromov's Question~\ref{gromov:conj}. However, in Section~\ref{futuro:sec} we will prove the following analog of Theorem~\ref{4:teo}: \begin{teo}\label{integral:pre} For every $n\geqslant 4$ there exists a constant $C_n<1$ such that the following holds. Let $M$ be a closed orientable hyperbolic manifold of dimension $n\geqslant 4$. Then $$ \|M\| \leqslant C_n \|M\|^\matZ_\infty. $$ \end{teo} It is folklore that the simplicial volume of a manifold is equal to the seminorm of its \emph{rational} fundamental class (see~\cite{BFP} for a complete proof).
As a consequence, integral cycles may be used to approximate the simplicial volume via the following equality, which holds for every compact orientable $n$-manifold $M$: $$ \| M\|=\inf\left\{ \frac{\sum_{i=1}^k |\lambda_i|}{|h|}\, , \ \left[ \sum_{i=1}^k \lambda_i\sigma_i\right]=h\cdot [M,\partial M]^\matZ\,\in\, H_n(M,\partial M;\matZ),\, h\in\matZ\setminus\{0\}\right\}. $$ Note however that this equality does not seem to be useful in order to attack Gromov's Question~\ref{gromov:conj}. \subsection{Stable $\Delta$-complexity}\label{easy:sub} We work in the PL category, so every manifold in this paper will be tacitly assumed to have a piecewise-linear structure. As mentioned in the introduction, a \emph{(loose) triangulation} of a closed $n$-dimensional manifold $M$ is the realization of $M$ as the glueing of finitely many $n$-simplices via some simplicial pairing of their facets. The \emph{$\Delta$-complexity} $\sigma(M)$ of $M$ is the minimal number of simplices needed to triangulate $M$. The \emph{stable $\Delta$-complexity} of $M$ is then $$\sigma_\infty (M) = \inf_{\widetilde M \stackrel d\to M} \left\{\frac{\sigma(\widetilde M)}d\right\}.$$ We can easily establish the inequalities \begin{equation}\label{easy:ineq} \| M\|\leqslant \sigma_\infty (M)\leqslant \sigma (M) \end{equation} stated in Proposition~\ref{easy:prop}. The assertion $\sigma_\infty (M)\leqslant \sigma (M)$ follows from the definitions. In order to prove the other inequality we may suppose that $M$ is oriented. Let $\mathcal{T}$ be a triangulation of $M$ with $m=\sigma (M)$ simplices, and let $s_1,\ldots,s_m$ be suitably chosen orientation-preserving parameterizations of the simplices of $\mathcal{T}$. We would like to say that $s_1+\ldots + s_m$ represents the fundamental class in $H_n(M;\matZ)$, however this singular chain is not necessarily a cycle. We can fix this problem easily by averaging each $s_i$ on all its permutations. 
That is, we define for any simplex $s$ the chain $$ {\rm alt}(s)=\frac{1}{(n+1)!}\sum_{\tau\in \mathfrak{S}_{n+1}} {\rm sgn}(\tau)\, s\circ \overline{\tau}, $$ where ${\rm sgn}(\tau)\in\{\pm 1\}$ is the sign of $\tau$ and $\overline{\tau}$ is the unique affine diffeomorphism of the standard $n$-simplex $\Delta_n$ corresponding to the permutation $\tau$ of the vertices of $\Delta_n$. Now it is immediate to verify that the chain $z={\rm alt}(s_1)+\ldots+{\rm alt}(s_m)$ is a cycle which represents the fundamental class of $M$. Moreover, the sum of the absolute values of the coefficients of $z$ is at most $m$, and this implies the inequality $\| M\|\leqslant \sigma (M)$. The fact that $\| M\|\leqslant\sigma_\infty (M)$ now follows from the fact that the simplicial volume is multiplicative under finite coverings. It is also easy to prove a stronger version of Proposition \ref{chi:prop}. \begin{prop} \label{chi2:prop} Let $M$ be a closed $n$-dimensional manifold. We have $$\big|\chi(M)\big|\leqslant 2^{n+1}\sigma_\infty (M).$$ \end{prop} \begin{proof} A triangulation $\mathcal{T}$ of $M$ endows $M$ with a cellular structure with at most $2^{n+1}\cdot t$ cells, where $t$ is the number of the simplices of $\mathcal{T}$. Since the Euler characteristic $\chi(M)$ can be computed as the alternating sum of the numbers of cells of a cellular structure on $M$, this readily implies that $|\chi(M)|\leqslant 2^{n+1}\sigma (M)$ for every $n$-manifold $M$. Since $\chi$ is a characteristic number, also the stronger inequality $|\chi(M)|\leqslant 2^{n+1}\sigma_\infty (M)$ holds. \end{proof} \subsection{Surfaces} In the two-dimensional case the answer to Question~\ref{main:quest} is well-known. In fact, considering triangulations of finite coverings is the standard way to compute (an upper bound for) the simplicial volume of surfaces of negative Euler characteristic: \begin{prop}\label{surface} Let $S$ be a closed surface. If $S=S^2$ (resp.~$S=\matR\matP^2$) then $\sigma_\infty (S)=2$ (resp.~$\sigma_\infty (S)=1$) and $\| S\|=0$.
Otherwise we have $$ \sigma_\infty (S)=\| S\| = 2 | \chi(S) |. $$ \end{prop} \begin{proof} Of course we have $\sigma (S^2)=2$; since $S^2$ is simply connected it admits no nontrivial coverings, so $\sigma_\infty(S^2)=2$ and $\sigma_\infty (\matR\matP^2)=1$. Moreover, since $S^2$ admits a self-map of degree bigger than one we have $\|S^2\|=0$, whence $\|\matR\matP^2\|=0$ because the simplicial volume is a characteristic number. Let us now suppose that $\chi(S)\leqslant 0$. Of course, it is sufficient to consider the case when $S$ is orientable. Then, the equality $ \sigma_\infty (S)=\| S\| = 2 | \chi(S) | $ is well-known (see \emph{e.g.}~\cite{BePe}). \end{proof} \subsection{The simplicial volume of hyperbolic manifolds} The simplicial volume of a manifold is deeply related to several geometric properties of the Riemannian structures that the manifold can support. Concerning hyperbolic manifolds, the following result, due to Gromov and Thurston~\cite{Thurston, Gro}, shows that the simplicial volume is proportional to the Riemannian volume (detailed proofs can be found in~\cite[Theorem C.4.2]{BePe}, \cite[Theorem 11.6.3]{Ratcliffe} or~\cite{Bucher} for the closed case, and in~\cite{Francaviglia1}, \cite{FriPag} or \cite{FM} for the cusped case). Let $M$ be a complete finite-volume hyperbolic $n$-manifold. If $M$ is non-compact, then it admits a natural compactification $\overline{M}$ such that $\overline{M}$ is a manifold with boundary and $\partial \overline{M}$ is a finite collection of closed $(n-1)$-manifolds each of which supports a flat Riemannian metric. We denote by $v_n$ the volume of the ideal regular hyperbolic simplex in $\matH^n$. \begin{teo}[Gromov, Thurston]\label{prop:teo} Let $M$ be a complete finite-volume hyperbolic manifold with compactification $\overline{M}$ (so $\overline{M}=M$ if $M$ is closed).
Then $$ \left\| \overline{M} \right\|=\frac{{\rm vol} (M)}{v_n}. $$ \end{teo} \subsection{Converse inequality} \label{converse:subsection} We prove here Proposition \ref{converse:prop}. The proof was communicated to us by Juan Souto, and closely follows ideas of Thurston~\cite[Theorem 5.11.2]{Thurston} and Gromov~\cite[Section 2.1]{Gro}. \begin{prop} In every dimension $n\geqslant 2$ there is a constant $D_n>1$ such that $\sigma_\infty (M) \leqslant D_n\|M\|$ for every closed hyperbolic $n$-manifold $M$. \end{prop} \begin{proof} Let $R>0$ be any fixed positive real number. Since $\pi_1(M)$ is residually finite we may replace $M$ with a finite cover (which we still call $M$) with injectivity radius bigger than $3R$. Let $S\subset M$ be a maximal set of points that are pairwise at distance $\geqslant R$. Consider the Dirichlet tessellation of $M$ into polyhedra determined by $S$, where every point $x_0\in S$ gives rise to the polyhedron $$P_{x_0} = \big\{y \in M\ \big|\ d(x_0,y) \leqslant d(x, y) \ \forall x\in S \big\}.$$ This is indeed isometric to a convex polyhedron because the injectivity radius of $M$ is sufficiently large. We have $B\left(x_0, \frac R2\right) \subset P_{x_0} \subset B\left(x_0, R\right) $. The number of polyhedra is therefore bounded above by $$\frac {{\rm vol} (M)}{{\rm vol} \left(B\left(x_0, \frac R2\right)\right)}.$$ A facet $F$ of $P_{x_0}$ corresponds to some point $x_F\in S$ such that $d(x,x_0) = d(x,x_F)$ for all $x\in F$; since $d(x_0,x_F) < 2R$ the number of facets of $P_{x_0}$ is at most the number of points in $S \cap B(x_0, 2R)$, which is in turn at most the ratio between ${\rm vol} \left(B\left(x_0, 3R\right)\right)$ and ${\rm vol} \left(B\left(x_0, \frac R2\right)\right)$. Therefore the number of facets of each $P_x$ is uniformly bounded and hence the possible combinatorial types for $P_x$ vary in a finite set which depends only on the dimension $n$ and $R$.
Choose for each possible combinatorial type a triangulation which induces on every facet a triangulation that is symmetric with respect to every combinatorial isomorphism of the facet: these symmetric triangulations necessarily match to give a triangulation of $M$. Let $T$ be the maximal number of simplices of the triangulated combinatorial types. Our original manifold $M$ can therefore be triangulated with at most $$\frac {T}{{\rm vol} \left(B\left(x_0, \frac R2\right)\right)} \cdot {\rm vol} (M) = \frac {Tv_n}{{\rm vol} \left(B\left(x_0, \frac R2\right)\right)} \cdot \| M \| $$ simplices. \end{proof} \section{Higher dimensions} \label{higher} This section is devoted to the proof of the following theorem. \begin{teo}\label{bah} For every $n\geqslant 4$ there exists a constant $C_n<1$ such that the following holds. Let $M$ be an $n$-dimensional closed orientable hyperbolic manifold. Then $$ {\rm vol}(M) \leqslant C_n v_n \sigma(M). $$ \end{teo} Putting this result together with Theorem~\ref{prop:teo} we get the following. \begin{cor}\label{main:cor} We have $\|M\| \leqslant C_n \sigma(M)$ for every closed orientable hyperbolic $n$-manifold $M$ of dimension $n\geqslant 4$. \end{cor} Since the simplicial volume is a characteristic number, Corollary~\ref{main:cor} implies in turn Theorem~\ref{4:teo}. \begin{cor} We have $\|M\| \leqslant C_n \sigma_\infty(M)$ for every closed hyperbolic $n$-manifold $M$ of dimension $n\geqslant 4$. \end{cor} \subsection{Straight simplices} We recall that every pair of points of $\ensuremath{\overline{\matH^n}}$ is connected by a unique geodesic segment (which has infinite length if any of its endpoints lies in $\partial\ensuremath{\overline{\matH^n}}$). A subset of $\ensuremath{\overline{\matH^n}}$ is \emph{convex} if whenever it contains a pair of points it also contains the geodesic segment connecting them. The \emph{convex hull} of a set $A$ is defined as usual as the intersection of all convex sets containing $A$.
A \emph{(geodesic) $k$-simplex} $\Delta$ in $\ensuremath{\overline{\matH^n}}$ is the convex hull of $k+1$ points in $\ensuremath{\overline{\matH^n}}$, called \emph{vertices}. A $k$-simplex is: \begin{itemize} \item \emph{ideal} if all its vertices lie in $\partial\matH^n$, \item \emph{regular} if every permutation of its vertices is induced by an isometry of $\matH^n$, \item \emph{degenerate} if it is contained in a $(k-1)$-dimensional subspace of $\matH^n$. \end{itemize} Let $v_n$ be the volume of the regular ideal simplex in $\ensuremath{\overline{\matH^n}}$. \begin{teo}[\cite{HM, Pe}] \label{maximal:teo} Let $\Delta$ be a geodesic $n$-simplex in $\ensuremath{\overline{\matH^n}}$. Then ${\rm vol}(\Delta)\leqslant v_n$, and ${\rm vol}(\Delta) = v_n$ if and only if $\Delta$ is ideal and regular. \end{teo} A \emph{singular $k$-simplex} in $\matH^n$ is of course a continuous map $\sigma\colon \Delta_k \to \matH^n$ from the standard $k$-simplex $\Delta_k\subset \matR^{k+1}$ to hyperbolic space. The corresponding \emph{straight} simplex $\sigma^\stra\colon \Delta_k \to \matH^n$ is defined as follows: set $\sigma^\stra(v) = \sigma(v)$ on every vertex $v$ of $\Delta_k$, and extend using barycentric coordinates (which exist in $\matH^n$, using the hyperboloid model). The image of $\sigma^\stra$ is the convex hull of the images of the vertices of $\Delta_k$ via $\sigma$, hence it is a geodesic simplex. Using again barycentric coordinates, for every singular $k$-simplex in $\matH^n$ we can define a homotopy $H(\sigma)\colon \Delta_k\times [0,1]\to \matH^n$ between $\sigma$ and $\sigma^\stra$ by setting $H(\sigma)(p,t)= t\sigma (p)+(1-t)\sigma^\stra(p)$. The following lemma readily descends from the definitions and from the fact that barycentric coordinates commute with the isometries of $\matH^n$. \begin{lemma}\label{invariance} Let $\sigma\colon\Delta_k\to\matH^n$ be a singular simplex, and let $g$ be an isometry of $\matH^n$.
Then: \begin{enumerate} \item $(g\circ\sigma)^\stra=g\circ\sigma^\stra$ and $H(g\circ \sigma)=g\circ H(\sigma)$; \item if $h<k$ and $i\colon \Delta_{h}\to \Delta_k$ is an affine inclusion of $\Delta_h$ onto an $h$-dimensional face of $\Delta_k$, then $(\sigma\circ i)^\stra=\sigma^\stra\circ i$ and $H(\sigma\circ i)=H(\sigma)\circ (i\times {\rm Id})$. \end{enumerate} \end{lemma} Henceforth, $M$ will always be an oriented hyperbolic closed $n$-dimensional manifold. Let $\sigma\colon \Delta_k\to M$ be a singular simplex in $M$. The straightening $\sigma^\stra\colon\Delta_k \to M$ is defined by lifting the map $\sigma$ to the universal covering $\matH^n$, straightening it, and then projecting it back to $M$. By Lemma~\ref{invariance} this operation does not depend on the chosen lift. The {\em algebraic volume} of a singular $n$-simplex $\sigma\colon\Delta_n\to M$ is $${\rm algvol}(\sigma)=\int_{\sigma^\stra}d{\rm vol} = \int_{\Delta_n} (\sigma^\stra)^*d{\rm vol}$$ where $d{\rm vol}$ is the volume form on $M$. The absolute value $|{\rm algvol}(\sigma)|$ equals the volume of the image of any lift $\widetilde{\sigma}^\stra$ of $\sigma^\stra$ to $\matH^n$. In particular, ${\rm algvol}(\sigma)$ vanishes if and only if $\widetilde\sigma^\stra$ is degenerate. When ${\rm algvol}(\sigma)\neq 0$ the straightened singular simplex $\sigma^\stra$ is an immersion and the sign of ${\rm algvol}(\sigma)$ depends on whether $\sigma^\stra$ is orientation-preserving or not. As we said above, every geodesic $n$-simplex in $\matH^n$ has volume smaller than the volume $v_n$ of the regular ideal simplex. In particular we always have $$-v_n\leqslant{\rm algvol}(\sigma)\leqslant v_n.$$ \begin{defn} A singular $n$-simplex $\sigma$ in $M$ is {\em positive} if ${\rm algvol}(\sigma)>0$, {\em negative} if ${\rm algvol}(\sigma)<0$, and {\em flat} if ${\rm algvol}(\sigma)=0$. 
For $\varepsilon>0$, the singular simplex $\sigma$ is \emph{$\varepsilon$-big} if $$ {\rm algvol} (\sigma) \geqslant (1-\varepsilon)v_n,$$ and \emph{$\varepsilon$-small} otherwise. \end{defn} \subsection{The straightening as a map}\label{stmap} Let now $\mathcal{T}$ be a (loose) triangulation of an oriented hyperbolic closed manifold $M$, that is the realization of $M$ as the union of $m$ copies of the standard simplex $\Delta_n$ quotiented by an orientation-reversing simplicial pairing of their $(n-1)$-dimensional faces. Every simplex in $\mathcal{T}$ is a copy of $\Delta_n$ and hence is the image of an orientation-preserving singular simplex $\sigma_i\colon \Delta_n\to M$. Henceforth, if $\sigma$ is a singular simplex in $M$, we denote by $|\sigma|\subseteq M$ the image of $\sigma$ in $M$. We define a map $$\str_\mathcal{T} \colon M\to M$$ which corresponds to the simultaneous straightening of all the simplices of $\mathcal{T}$. If $p\in M$ lies in $|\sigma_i |$, we choose a point $q\in \Delta_n$ such that $\sigma_i(q)=p$ and set $\str_\mathcal{T}(p)=\sigma_i^\stra(q)$. Of course, if the point $p$ belongs to the $(n-1)$-skeleton of $\mathcal{T}$, then the choice of $\sigma_i$ and/or of the point $q\in\Delta_n$ is somewhat arbitrary. However, Lemma~\ref{invariance} ensures that $\str_\mathcal{T}$ is well-defined, continuous and homotopic to the identity of $M$. In what follows, when a triangulation $\mathcal{T}$ is fixed and no ambiguities can arise, we will denote the map $\str_\mathcal{T}$ simply by $\str$. It is important now to note that the straightened simplices of $\mathcal{T}$ do not necessarily form a triangulation of $M$ in any reasonable sense: straightened simplices may degenerate and overlap (and they often do, see also Remark \ref{rem1}). However, one important property is preserved by the straightening: the positive simplices still cover the manifold $M$.
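The barycentric straightening described above is easy to experiment with numerically. The following sketch (added here as an illustration; the paper itself contains no code) computes $\sigma^\stra$ at a point of the hyperboloid model of $\matH^2$, with the usual convention that a convex combination of the vertex vectors in $\matR^{2+1}$ is rescaled back onto the hyperboloid; point (1) of Lemma~\ref{invariance} can then be checked directly for a Lorentz boost:

```python
import math

def mink(u, v):
    # Minkowski product <u,v> = -u0*v0 + u1*v1 + u2*v2 on R^{2+1}
    return -u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def straighten(vertices, bary):
    # Barycentric straightening: take the convex combination of the vertex
    # vectors in R^{2+1} and rescale it back onto the hyperboloid <w,w> = -1.
    w = [sum(b * v[i] for b, v in zip(bary, vertices)) for i in range(3)]
    s = math.sqrt(-mink(w, w))   # w is future time-like, so -<w,w> > 0
    return tuple(c / s for c in w)

def boost(t, v):
    # Lorentz boost in the (x0,x1)-plane: an isometry of the hyperboloid model
    c, s = math.cosh(t), math.sinh(t)
    return (c * v[0] + s * v[1], s * v[0] + c * v[1], v[2])

# three points on the hyperboloid: vertices of a geodesic triangle in H^2
pts = [(math.cosh(1), math.sinh(1) * math.cos(a), math.sinh(1) * math.sin(a))
       for a in (0.0, 2.1, 4.2)]
p = straighten(pts, (0.2, 0.3, 0.5))
```

Equivariance holds on the nose: a boost is linear on $\matR^{2+1}$ and preserves the Minkowski product, so it commutes with both the convex combination and the rescaling.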
\begin{lemma}\label{algvol} Let $\sigma_1,\ldots,\sigma_t$ be the simplices of a triangulation $\mathcal{T}$ of $M$. Then $$ M=\bigcup_{{\rm positive\ }\sigma_i} |\sigma_i^\stra|= \str_\mathcal{T}\left(\bigcup_{{\rm positive\ } \sigma_i} |\sigma_i|\right) , $$ so $$ {\rm vol}(M)\leqslant \sum_{{\rm positive\ }\sigma_i} {\rm vol}\big(|\sigma_i^\stra|\big). $$ \end{lemma} \begin{proof} Let $M_0\subseteq M$ be the image along $\str_\mathcal{T}$ of the $(n-1)$-skeleton of $\calT$ and of the flat simplices of $\mathcal{T}$. The complement $M\setminus M_0$ is open and dense and consists of topologically regular values for the map $\str_\mathcal{T}$, that is, the pre-image of every point in $M\setminus M_0$ consists of finitely many points where $\str_\mathcal{T}$ is a local homeomorphism and hence has local degree $\pm 1$. Since $\str_\mathcal{T}$ has degree one, every topologically regular value lies in the image of at least one positive simplex. The conclusion follows since the image via $\str_\mathcal{T}$ of the positive simplices of $\mathcal{T}$ is compact, whence closed. \end{proof} \subsection{Strategy of the proof of Theorem \ref{bah}.} We outline here the proof of Theorem \ref{bah}. Let $\mathcal{T}$ be a triangulation of a closed hyperbolic manifold $M$ of dimension $n\geqslant 4$. We need to prove that ${\rm vol} (M) \leqslant C_nv_n t$ where $t$ is the number of simplices in $\mathcal{T}$ and $C_n<1$ is a constant depending only on the dimension $n$. Suppose for simplicity that every simplex of $\mathcal{T}$ is positive. In that lucky case the map $\str_\mathcal{T}$ is a homeomorphism and the straightened triangulation is a genuine triangulation (which we still denote by $\mathcal{T}$) made of straight positive simplices. The key observation now is that in dimension $n\geqslant 4$ the ratio between $2\pi$ and the dihedral angle of an ideal regular geodesic simplex is not an integer.
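This observation can be checked numerically. The sketch below (an illustration added here; the precise value of the angle is not needed in the sequel) uses the classical formula $\arccos\frac{1}{n-1}$ for the dihedral angle of the regular ideal $n$-simplex along a codimension-two face, which gives $\pi/3$ for $n=3$, where six regular ideal tetrahedra fit around an edge:

```python
from math import acos, pi

def dihedral(n):
    # Dihedral angle of the regular ideal hyperbolic n-simplex along a
    # codimension-two face (classical formula, assumed here).
    return acos(1.0 / (n - 1))

# n = 3: the ratio 2*pi / (pi/3) = 6 is an integer, so the regular ideal
# tetrahedron tiles H^3.  For n >= 4 the ratio is never an integer:
for n in range(3, 10):
    print(n, 2 * pi / dihedral(n))
```

In fact the ratio decreases from about $5.104$ at $n=4$ towards $4$ as $n\to\infty$, and it would equal $5$ only for $n-1=1/\cos(2\pi/5)$, which is not an integer, consistently with the claim.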
Therefore we may choose $\varepsilon_n>0$ independently of $\mathcal{T}$ in such a way that every $(n-2)$-dimensional face $E$ of $\mathcal{T}$ enjoys the following properties: \begin{enumerate} \item the face $E$ is contained in at least one $\varepsilon_n$-small simplex of $\mathcal{T}$, and \item the number of $\varepsilon_n$-big simplices of $\mathcal{T}$ that contain $E$ is uniformly bounded from above by a universal constant. \end{enumerate} These facts easily imply that the ratio between the number of $\varepsilon_n$-big simplices of $\mathcal{T}$ and the total number $t$ of simplices of $\mathcal{T}$ is smaller than some constant $K_n< 1$ independent of $\mathcal{T}$. Therefore the volume of $M$ is smaller than $$t\big(v_nK_n + (1-\varepsilon_n)v_n(1-K_n)\big) = tv_n\left(K_n + (1-\varepsilon_n)(1-K_n)\right) = tv_nC_n$$ with $C_n = 1-\varepsilon_n(1-K_n)<1$. We now need to refine this strategy to deal with negative and flat simplices. As we said above, the straightening of $\mathcal{T}$ may create degenerations and overlappings of simplices. Degenerations and overlappings are volume-consuming, so it is reasonable to expect that the inequality ${\rm vol}(M)\leqslant C_nv_nt$ holds \emph{a fortiori} in the presence of negative and flat simplices: the generalization of the above argument however is not immediate. Note for instance that both points (1) and (2) stated above do not hold for a general triangulation $\mathcal{T}$: an $(n-2)$-dimensional face $E$ may be incident to arbitrarily many arbitrarily big positive simplices, that wind many times around $E$ (the local degree of the straightening map around $E$ can be arbitrarily big! See Figure \ref{degree:fig}, which is inspired by~\cite[Example 2.6.4]{Francaviglia2}, \cite[Example 4.1]{Francaviglia3}). Of course by winding many times around $E$ the simplices overlap a lot and hence a lot of volume is wasted: we will need to estimate that loss of volume to prove our theorem.
\begin{figure} \begin{center} \includegraphics[width = 6.5 cm]{degree.pdf} \end{center} \nota{The local degree of the straightening map associated to the triangulation described here (which may be thought of as a triangulation of a portion of the projective model of the hyperbolic plane) is equal to $0$ at $p$ and to $2$ at $q$.} \label{degree:fig} \end{figure} \subsection{The volume of a simplex}\label{simplices:sub} We will need to estimate (from below) the overlapping regions of big simplices. To do so we first study their geometry. For every $n\geqslant 3$ and $k\leqslant n$, we denote by $\ensuremath {\mathcal{V}}_k(\ensuremath{\overline{\matH^n}})$ the space of unordered $(k+1)$-tuples of (not necessarily distinct) points of $\ensuremath{\overline{\matH^n}}$, \emph{i.e.}~the topological space $(\ensuremath{\overline{\matH^n}})^{k+1}/\mathfrak{S}_{k+1}$, where $\mathfrak{S}_{k+1}$ is the permutation group on $k+1$ elements. We also denote by $\ensuremath {\mathcal{S}}_k(\ensuremath{\overline{\matH^n}})$ the set of $k$-dimensional geodesic simplices of $\ensuremath{\overline{\matH^n}}$, and we endow $\ensuremath {\mathcal{S}}_k(\ensuremath{\overline{\matH^n}})$ with the topology induced by the Hausdorff topology on closed subsets of $\overline{\matH^n}$. The convex hull defines a surjective map $${\rm Conv}\colon \ensuremath {\mathcal{V}}_k(\ensuremath{\overline{\matH^n}}) \to \ensuremath {\mathcal{S}}_k(\ensuremath{\overline{\matH^n}}).$$ We will often use the following notation. \begin{defn} If $K$ is any subset of $\ensuremath{\overline{\matH^n}}$, we denote by $H(K)$ the smallest geodesic subspace of $\ensuremath{\overline{\matH^n}}$ containing $K$ (if $K$ consists of a single point of $\partial\matH^n$, then we set $H(K)=K$). \end{defn} Of course an element $K$ in $\ensuremath {\mathcal{V}}_k(\ensuremath{\overline{\matH^n}})$ or $\ensuremath {\mathcal{S}}_k(\ensuremath{\overline{\matH^n}})$ is degenerate if $\dim H(K)<k$.
We denote by $\ensuremath {\mathcal{V}}_k^*(\ensuremath{\overline{\matH^n}})$ and $\ensuremath {\mathcal{S}}_k^*(\ensuremath{\overline{\matH^n}})$ the set of nondegenerate elements of $\ensuremath {\mathcal{V}}_k(\ensuremath{\overline{\matH^n}})$ and $\ensuremath {\mathcal{S}}_k(\ensuremath{\overline{\matH^n}})$. The proof of the following easy result is left to the reader: \begin{lemma}\label{ovvio} The map $$ {\rm Conv}\colon \ensuremath {\mathcal{V}}^*_k(\ensuremath{\overline{\matH^n}})\to \ensuremath {\mathcal{S}}^*_k(\overline{\matH^n}) $$ is a homeomorphism. \end{lemma} We are mainly interested in the behaviour of the function $$ {\rm vol}\colon \ensuremath {\mathcal{S}}_k(\overline{\matH^n})\to\matR $$ which maps every geodesic $k$-simplex into its $k$-dimensional volume. Despite its natural definition, the function ${\rm vol}$ is not continuous on the whole $\ensuremath {\mathcal{S}}_k(\overline{\matH^n})$: for example, let $K$ be any ideal regular simplex and $g\in{\rm Isom}(\matH^n)$ be a parabolic isometry that fixes an ideal vertex $p$ of $K$. Then $\lim_{i\to\infty} g^i(K) = \{p\}$ and therefore $$\lim_{i\to\infty} {\rm vol}(g^i(K))=v_n\neq 0={\rm vol} \left(\lim_{i\to\infty} g^i(K)\right).$$ This shows that in general some care is needed in studying geometric properties of limits of simplices. However, the following lemma ensures that the volume function is continuous on the space of nondegenerate simplices. \begin{lemma}\label{cont-vol} The restriction of ${\rm vol}$ to the set of simplices with at least three different vertices is continuous. In particular, the restriction $$ {\rm vol}\colon \ensuremath {\mathcal{S}}^*_k(\overline{\matH^n})\to\matR^+ $$ is continuous. \end{lemma} \begin{proof} By Lemma~\ref{ovvio}, the conclusion is an immediate consequence of~\cite[Proposition 4.1]{Luo} (see also~\cite[Theorem 11.4.2]{Ratcliffe}). 
\end{proof} \subsection{The incenter and inradius of a simplex} Lemma~\ref{cont-vol} implies in particular that if a sequence $K_i$ of elements in $\ensuremath {\mathcal{S}}_k^*(\ensuremath{\overline{\matH^n}})$ converges to an ideal regular $k$-simplex then ${\rm vol}(K_i)\to v_k$ as $i\to\infty$. We are interested in proving the converse result: the shape of a simplex with large volume has to be similar to the shape of a regular ideal simplex. However, we have observed above that a sequence of ideal regular simplices may well converge to a degenerate simplex, so some care is needed here. Consider a nondegenerate $k$-simplex $K\in \ensuremath {\mathcal{S}}_k^*(\overline{\matH^n})$. For every point $p\in K\cap\matH^n$ we denote by $r_K(p)$ the radius of the maximal $k$-ball of $H(K)$ centered in $p$ and contained in $K$. Since the volume of any $k$-simplex is smaller than $v_k$ and the volume of $k$-balls diverges as the radius diverges, there exists a constant $r_k>0$ such that $r_K(p)\leqslant r_k$ for every $K\in \ensuremath {\mathcal{S}}_k^*(\overline{\matH^n})$ and $p\in K$. \begin{defn} Take $K\in \mathcal{S}^*_k(\ensuremath{\overline{\matH^n}})$. The \emph{inradius} $r(K)$ of $K$ is $$ r(K)=\sup_{p\in K\cap \matH^n} r_K(p)\ \in \ (0,r_k] $$ (observe that $r(K)>0$ since $K$ is nondegenerate). The \emph{incenter} ${\rm inc}(K)$ is the unique point $p\in K\cap \matH^n$ such that $r_K(p)=r(K)$. \end{defn} \begin{lemma}\label{incentri} The incenter is well-defined. The sphere centered in ${\rm inc}(K)$ of radius $r(K)$ is tangent to all the facets of $K$. The functions $$ {\rm inc}\colon S_k^*({\ensuremath{\overline{\matH^n}}})\to \matH^n, \qquad r\colon S_k^*({\ensuremath{\overline{\matH^n}}})\to\matR $$ are continuous. \end{lemma} \begin{proof} The map $p\mapsto r_K(p)$ is continuous, and if $q$ is a (possibly ideal) vertex of $K$ we have $\lim_{p\to q} r_K(p)=0$. 
Therefore, the map $r_K\colon K\to (0,r_k]$ is proper, and this ensures the existence of a point $p\in K$ such that $r_K(p)=r(K)$. Let $S_p$ be the sphere centered in $p$ of radius $r(K)$: we prove that $S_p$ is tangent to every $(k-1)$-face of $K$. Assume by contradiction that $F$ is a $(k-1)$-face of $K$ such that $S_p\cap F=\emptyset$, and denote by $v$ the (possibly ideal) vertex of $K$ opposite to $F$. Let $\gamma$ be the geodesic ray (or line, if $v$ is ideal) exiting from $v$ and containing $p$. It is readily seen that the distance between $\gamma(t)$ and any $(k-1)$-face of $K$ distinct from $F$ is an increasing function of $t$. If $p=\gamma(t_0)$, then this implies that there exists $\varepsilon>0$ such that $r_K(\gamma(t_0+\varepsilon))>r_K(p)=r(K)$, a contradiction. We exploit the hyperboloid model of $\matH^n$ to determine the point $p$ more explicitly, \emph{i.e.}~we fix the identification $$\mathbb H^n=\big\{w=(w_0,\dots,w_n)\in\mathbb R^{n+1}: \langle w,w\rangle=-1,\ w_0>0\big\}$$ where $\langle\cdot,\cdot\rangle$ denotes the usual Minkowski product. Let $H$ be the $(k+1)$-dimensional linear subspace of $\matR^{n+1}$ containing $H(K)$. If $F_0,\ldots,F_k$ are the $(k-1)$-faces of $K$, for every $i=0,\ldots,k$ we denote by $q_i$ the dual vector of $F_i$, \emph{i.e.}~ the unique vector $q_i\in H$ such that $\langle q_i,q_i\rangle=1$, $\langle q_i,w\rangle=0$ for every $w\in F_i$, and $\langle q_i,w\rangle\leqslant 0$ for every $w\in K$. If $w$ is any point of $K$, then the hyperbolic distance between $w$ and the geodesic $(k-1)$-plane containing $F_i$ satisfies the equality $$\sinh d(w,H(F_i))=-\langle w,q_i\rangle.$$ Let now $H_{ij}\subseteq H$ be the hyperplane of $H$ which is orthogonal to $q_i-q_j$. Recall that our point $p\in H$ lies at the same distance from the geodesic planes containing the faces of $K$, so \begin{equation}\label{inc:eq} p\in \bigcap_{i\neq j} H_{ij}. 
\end{equation} Since $K$ is nondegenerate, the vectors $q_0-q_i$, $i=1,\ldots,k$, are linearly independent, and this readily implies that $\bigcap_{i\neq j} H_{ij}$ is a $1$-dimensional linear subspace of $H$. Such a subspace cannot meet the hyperboloid $\matH^n$ in more than one point, and this concludes the proof that $p$ is the unique point of $K$ such that $r_K(p)=r(K)$. Moreover, we have \begin{equation}\label{inr:eq} \sinh r(K)=-\langle {\rm inc}(K),q_i\rangle\qquad \textrm{for\ every}\ i=0,\ldots,k. \end{equation} This description of ${\rm inc}(K)$ also implies that ${\rm inc}(K)$ and $r(K)$ continuously depend on $K$. In fact, even when considering simplices with possibly ideal vertices, it is readily seen that each subspace $H(F_i)$, whence each $q_i$ and each $H_{ij}$, continuously depends on $K$. Thanks to equations~\eqref{inc:eq} and~\eqref{inr:eq}, this implies that the maps ${\rm inc}\colon S_k^*(\ensuremath{\overline{\matH^n}})\to \matH^n$ and $r\colon S_k^*({\ensuremath{\overline{\matH^n}}})\to\matR$ are continuous. \end{proof} We now need the following result proved by Luo. \begin{lemma}[Proposition 4.2 in \cite{Luo}]\label{liminf} Let $K_i$ be a sequence of elements in $S_k^*(\ensuremath{\overline{\matH^n}})$ such that $ \lim_{i\to \infty} r(K_i)=0. $ Then $ \lim_{i\to \infty} {\rm vol}(K_i)=0 $. \end{lemma} We can finally prove that simplices of large volume are close to ideal regular simplices. \begin{prop}\label{regular-converge} Let $K_\infty\in S_k^*(\ensuremath{\overline{\matH^n}})$ be a fixed ideal regular simplex, and let $K_i$ be a sequence of elements in $S_k^*(\ensuremath{\overline{\matH^n}})$ such that $$ \lim_{i\to\infty} {\rm vol} (K_i)=v_k. $$ Then there exists a sequence $g_i$ of isometries of $\matH^n$ such that $$ \lim_{i\to\infty} g_i(K_i)=K_{\infty}. $$ \end{prop} \begin{proof} We consider the disc model for $\matH^n$ and suppose that the origin $O$ is the incenter of $K_\infty$. Let $H$ be the $k$-space containing $K_\infty$.
We define the distance $d(K,K')$ of two simplices as the Hausdorff distance with respect to the Euclidean metric of the closed disc. For each $i$ we pick an isometry $g_i$ of $\matH^n$ such that: \begin{enumerate} \item $g_i(K_i)$ has its incenter in $O$ and is contained in $H$, \item $g_i$ is chosen among all isometries $g_i$ satisfying (1) in such a way that $g_i(K_i)$ has the smallest possible distance from $K_\infty$ (such a choice is possible since the set of isometries of $\matH^n$ taking ${\rm inc}(K_i)$ to $O$ and the geodesic subspace $H(K_i)$ into $H$ is homeomorphic to $O(k)$ and hence compact). \end{enumerate} Since $\ensuremath {\mathcal{S}}_k(\ensuremath{\overline{\matH^n}})$ is compact, in order to conclude it is sufficient to show that every converging subsequence of $g_i(K_i)$ converges to $K_\infty$. So, let us take a subsequence which converges to some $k$-simplex $K_\infty'$. Lemma~\ref{liminf} ensures that the sequence of radii $r(K_i)$ is bounded below by a positive number, hence the intersection $\cap_ig_i(K_i)$ contains a $k$-ball $B\subset H$ centered in $O$. Therefore $B\subset K_\infty'$ and hence $K_\infty'$ is nondegenerate. We may now apply Lemma~\ref{cont-vol} and get $${\rm vol} (K_\infty')=\lim_{i\to\infty} {\rm vol}(K_i)=v_k. $$ By Theorem~\ref{maximal:teo}, the simplex $K_\infty'$ is ideal and regular, and assumption (2) easily implies that $K_\infty' = K_\infty$. This concludes the proof. \end{proof} It is now easy to prove that in big simplices the incenter of a face is uniformly distant from any other non-incident face. \begin{lemma}\label{palle} Let $n\geqslant 3$. There exist $\varepsilon_n>0$ and $\delta_n>0$ such that the following holds for any simplex $\Delta\in \ensuremath {\mathcal{S}}_n(\overline{\matH^n})$ with ${\rm vol}(\Delta)\geqslant v_n(1-\varepsilon_n)$. Let $E$ be any face of $\Delta$ and $E'$ another face of $\Delta$ which does not contain $E$. 
Then $$d({\rm inc}(E), E') > 2\delta_n.$$ \end{lemma} \begin{proof} There is only one regular ideal $n$-dimensional simplex $\Delta^{\rm reg}$ up to isometries of $\ensuremath{\overline{\matH^n}}$. Let $3\delta_n>0$ be the minimal distance between ${\rm inc}(E)$ and $E'$ among all pairs of faces $E,E'$ of $\Delta^{\rm reg}$ such that $E\not\subseteq E'$. We claim that there is a constant $\varepsilon_n>0$ such that $d({\rm inc} (E),E')>2\delta_n$ for any pair of faces $E\not\subseteq E'$ of any $n$-simplex $\Delta$ of volume bigger than $v_n(1-\varepsilon_n)$. Suppose by contradiction that there is a sequence $\Delta_i$ of $n$-simplices with $\lim_{i\to\infty} {\rm vol}(\Delta_i)=v_n$, each $\Delta_i$ containing two faces $E_i \not\subseteq E_i'$ with $d({\rm inc} (E_i), E_i') \leqslant 2\delta_n$. By Proposition~\ref{regular-converge}, up to replacing each $\Delta_i$ with an isometric copy we may assume that $\lim_{i\to\infty} \Delta_i=\Delta^{\rm reg}$, $\lim_{i\to\infty} E_i=E_*$, and $\lim_{i\to\infty} E'_i=E'_*$ for some faces $E_* \not\subseteq E_*'$ of $\Delta^{\rm reg}$. Using Lemma~\ref{incentri} we get $$ \lim_{i\to\infty} d({\rm inc}(E_i), E'_i) = d({\rm inc}(E_*),E_*') \geqslant 3\delta_n, $$ which is a contradiction. \end{proof} \subsection{Dihedral angles} Let $\Delta\in \ensuremath {\mathcal{S}}_n^*(\overline{\matH^n})$ be a nondegenerate $n$-simplex, and let $E$ be an $(n-2)$-dimensional face of $\Delta$. The \emph{dihedral angle} $\alpha (\Delta,E)$ of $\Delta$ at $E$ is defined as usual in the following way: let $p$ be a point in $E\cap \matH^n$, and let $H\subseteq \matH^n$ be the unique $2$-dimensional geodesic plane which intersects $E$ orthogonally at $p$. We set $\alpha(\Delta,E)$ to be equal to the angle at $p$ of the polygon $\Delta\cap H$ of $H\cong \matH^2$. It is easily seen that this is well-defined (\emph{i.e.}~independent of $p$).
For every $n\geqslant 3$, we denote by $\alpha_n$ the dihedral angle of the ideal regular $n$-dimensional simplex at any of its $(n-2)$-dimensional faces. It is readily seen, by intersecting the simplex with a horosphere centered at any vertex, that $\alpha_n$ equals the dihedral angle of the regular \emph{Euclidean} $(n-1)$-dimensional simplex at any of its $(n-3)$-dimensional faces, so $\alpha_n=\arccos \frac 1{n-1}$. In particular, we have $\alpha_3=\arccos\frac{1}{2}=\pi/3$. Moreover, it is easily checked that $\frac{2\pi}{6}<\arccos \frac{1}{3}<\frac{2\pi}{5}$ and $\frac{2\pi}{5}<\arccos \frac{1}{n}<\frac{2\pi}{4}$ for every $n\geqslant 4$. As a consequence, the real number $\frac{2\pi}{\alpha_n}$ is an integer if and only if $n=3$, and if we denote by $k_n\in\ensuremath {\mathbb{N}}$, $n\geqslant 4$, the unique integer such that $$k_n\alpha_n<2\pi<(k_n+1)\alpha_n,$$ then $k_n=5$ if $n=4$ and $k_n=4$ if $n\geqslant 5$. \begin{lemma}\label{maximal-angle} Let $n\geqslant 4$. Then, there exist $a_n>0$ and $\varepsilon_n>0$, depending only on $n$, such that the following condition holds: if $\Delta\in \ensuremath {\mathcal{S}}_n^*(\ensuremath{\overline{\matH^n}})$ is an $n$-simplex such that ${\rm vol}(\Delta)\geqslant (1-\varepsilon_n)v_n$ and $\alpha$ is the dihedral angle of $\Delta$ at any of its $(n-2)$-faces, then $$ \frac{2\pi}{k_n+1}(1+a_n) < \alpha < \frac{2\pi}{k_n}(1-a_n). $$ \end{lemma} \begin{proof} It is easy to show that the dihedral angles of a nondegenerate $n$-simplex continuously depend on its vertices, so the conclusion follows from Proposition~\ref{regular-converge} and the fact that $\frac{2\pi}{k_n+1} < \alpha_n < \frac{2\pi}{k_n}$. \end{proof} \subsection{Proof of Theorem~\ref{bah}}\label{proof:sub} In this subsection we suppose that $M$ is a closed orientable hyperbolic manifold of dimension $n\geqslant 4$.
We will prove that there exists a constant $C_n<1$, depending only on $n$, such that if $\mathcal{T}$ is any triangulation of $M$ with $|\mathcal{T}|$ simplices, then ${\rm vol}(M)\leqslant C_n v_n |\mathcal{T}|$. Let us suppose that $|\mathcal{T}|=t$, and let us denote by $\sigma_1,\ldots,\sigma_t$ suitably chosen orientation-preserving parameterizations of the simplices of $\mathcal{T}$. Let us fix positive constants $\varepsilon_n,\delta_n$ and $a_n$ that satisfy the conclusions of Lemma~\ref{palle} and Lemma~\ref{maximal-angle}. Recall that a simplex $\sigma$ of $\mathcal{T}$ is \emph{$\varepsilon$-big} if ${\rm algvol} (\sigma) \geqslant (1-\varepsilon)v_n$, and \emph{$\varepsilon$-small} otherwise. Let $t_b$ and $t_s$ be respectively the number of $\varepsilon_n$-big and $\varepsilon_n$-small simplices in $\mathcal{T}$, so that $t = t_b+ t_s$. We begin with the following easy estimate. \begin{lemma}\label{stima1a} Suppose that $t_s\geqslant \frac t{12}$. Then $$ {\rm vol} (M)\leqslant \left(1-\frac{\varepsilon_n}{12}\right) t v_n. $$ \end{lemma} \begin{proof} Our assumption implies that $t_b+(1-\varepsilon_n)t_s=t-\varepsilon_n t_s\leqslant (1-\frac{\varepsilon_n}{12})t$. Moreover, if $\sigma_i$ is $\varepsilon_n$-small, then either it is negative or ${\rm vol} (|\sigma_i^\stra|)\leqslant (1-\varepsilon_n)v_n$. Therefore Lemma~\ref{algvol} implies that $$ {\rm vol}(M)\leqslant \sum_{\textrm{positive}\ \sigma_i} {\rm vol}(|\sigma^\stra_i|)\leqslant v_n(t_b+(1-\varepsilon_n)t_s) \leqslant \left(1- \frac{\varepsilon_n}{12}\right) tv_n. $$ \end{proof} Therefore if $t_s\geqslant \frac t{12}$ we are done: henceforth we assume that $t_s\leqslant \frac t{12}$. If $E\subseteq M$ is an $(n-2)$-dimensional face of $\mathcal{T}$, we denote by $v(E)$ the number of $\varepsilon_n$-big simplices (counted with multiplicities) of $\mathcal{T}$ which are incident to $E$.
We say that $E$ is \emph{full} if $v(E)\geqslant k_n+1$, we denote by ${\rm Full}(\mathcal{T})$ the set of full $(n-2)$-dimensional faces of $\mathcal{T}$, and we set $$e_{\rm f}=| {\rm Full}(\mathcal{T})|,\qquad N=\sum_{E\in {\rm Full}(\mathcal{T})} v(E).$$ \begin{rem}\label{rem1} Observe that if $E$ is full, then $\str(E)$ is a face of a nondegenerate $n$-simplex, so it is itself nondegenerate. In particular, the point ${\rm inc}(\str(E))$ is well-defined. On the other hand, Lemma~\ref{maximal-angle} implies that any \emph{non}full $(n-2)$-dimensional face of $\mathcal{T}$ is incident to at least one $\varepsilon_n$-small simplex. Note that it is possible to construct triangulations containing full $(n-2)$-dimensional faces incident to no $\varepsilon_n$-small simplices. In this case, the map $\str$ is locally a branched covering (whose degree grows with $\varepsilon_n^{-1}$). See Figure~\ref{degree:fig} for an example where $\str$ has local degree 2 at some $(n-2)$-dimensional face. \end{rem} Recall that $k_n=5$ if $n=4$ and $k_n=4$ if $n\geqslant 5$. For later purposes we point out the following: \begin{lemma}\label{stimaN} We have $$ N\geqslant 5t. $$ \end{lemma} \begin{proof} Recall that the number of $(n-2)$-dimensional faces of an $n$-simplex is $\frac{n(n+1)}2$. Let $e_{\rm nf}$ be the number of $(n-2)$-dimensional faces of $\mathcal{T}$ that are \emph{not} full. Remark~\ref{rem1} implies that $e_{\rm nf}\leqslant \frac{n(n+1)t_s}2$. Moreover, by definition, every $(n-2)$-dimensional face of $\mathcal{T}$ that is not full is incident to at most five $\varepsilon_n$-big simplices of $\mathcal{T}$ (counted with multiplicities), so $$ t \frac{n(n+1)}{2}= (t_b+t_s) \frac{n(n+1)}{2}\leqslant N+5e_{\rm nf}+t_s\frac{n(n+1)}{2}\leqslant N+3t_sn(n+1). $$ Since $t_s\leqslant \frac t{12}$, this implies that $N\geqslant \frac {tn(n+1)}4$, whence the conclusion since $n\geqslant 4$. \end{proof} We now decompose $M$ into the union of three subsets $M_1,M_2,M_3$. 
The first subset $M_1$ consists of the $\delta_n$-balls centered at the incenters of the straightened full faces: $$M_1=\bigcup_{E\in {\rm Full}(\mathcal{T})} B\big({\rm inc}(\str(E)),\delta_n\big).$$ The subset $M_2$ is the union of all $\varepsilon_n$-big (straightened) simplices minus $M_1$, and $M_3$ is the union of all the $\varepsilon_n$-small simplices: $$M_2=\str\left(\bigcup_{\varepsilon_n-{\rm big}\ \sigma} |\sigma| \right) \setminus M_1, \qquad M_3=\str\left(\bigcup_{\varepsilon_n-{\rm small}\ \sigma} |\sigma |\right).$$ Recall from Lemma~\ref{algvol} that every point of $M$ lies in $\str(|\sigma|)$ for some simplex $\sigma$ of $\mathcal{T}$ (in fact, $\sigma$ may also be chosen to be positive, but this is not relevant here). Therefore, we have \begin{equation}\label{somma} {\rm vol}(M)\leqslant {\rm vol} (M_1)+{\rm vol} (M_2)+{\rm vol} (M_3). \end{equation} The reason for considering these three regions is roughly the following: if there are many $\varepsilon_n$-small simplices, some volume is ``lost'' in $M_3$; on the other hand, if there are many $\varepsilon_n$-big simplices they must wind and overlap a lot along the full faces of $\mathcal{T}$ and some volume is ``lost'' in $M_1$: in all cases the volume of $M$ will be strictly smaller than $C_ntv_n$ for some constant $C_n<1$. Let us estimate ${\rm vol} (M_i)$, $i=1,2,3$. We set $\eta_n={\rm vol} (B(p,\delta_n))$, where $p$ is any point of $\matH^n$. We have of course \begin{equation}\label{m1} {\rm vol} (M_1)\leqslant e_{\rm f} \eta_n. \end{equation} Let now $\sigma$ be an $\varepsilon_n$-big simplex of $\mathcal{T}$, and let $\nu$ be the number of $(n-2)$-dimensional faces of $\sigma$ (considered as an abstract $n$-simplex) which project into a full $(n-2)$-dimensional face of $\mathcal{T}$.
If $\widetilde{\sigma}^\stra$ is a lift of $\sigma^\stra$ to $\matH^n$, then by Lemma~\ref{palle} the hyperbolic balls of radius $\delta_n$ centered in the incenters of the $(n-2)$-dimensional faces of $|\widetilde{\sigma}^\stra|$ are pairwise disjoint. Moreover, each of these balls does not intersect any $(n-1)$-dimensional face of $|\widetilde\sigma^\stra |$ that does not contain its center. Together with Lemma~\ref{maximal-angle}, this implies that the volume of $\str(|\sigma|)\setminus M_1$ is at most $$ v_n- \nu \eta_n \frac{1+a_n}{k_n+1}. $$ Summing up over all the $\varepsilon_n$-big simplices of $\mathcal{T}$ we get \begin{equation}\label{m2} {\rm vol} (M_2) \leqslant t_b v_n -\eta_n (1+a_n) \frac{N}{k_n+1}. \end{equation} Finally, we obviously have \begin{equation}\label{m3} {\rm vol}(M_3)\leqslant t_s v_n. \end{equation} Putting together the inequalities~\eqref{somma}, \eqref{m1}, \eqref{m2}, \eqref{m3} we get the inequality \begin{equation}\label{stima1} {\rm vol} (M)\leqslant t v_n+ \eta_n \left(e_{\rm f}-(1+a_n) \frac{N}{k_n+1}\right). \end{equation} We now conclude by considering separately the cases $e_{\rm f} \leqslant \frac t2$ and $e_{\rm f} \geqslant \frac t2$. \begin{lemma}\label{stima2} Suppose that $t_s\leqslant \frac t{12}$ and $e_{\rm f}\leqslant \frac t2$. Then $$ {\rm vol}(M)\leqslant t v_n \left( 1-\frac{\eta_n}{3v_n}\right). $$ \end{lemma} \begin{proof} Recall that $k_n+1\leqslant 6$ and $N\geqslant 5t$ (see Lemma~\ref{stimaN}), so our estimate~\eqref{stima1} yields $$ {\rm vol}(M)\leqslant t v_n+\eta_n\left(e_{\rm f}-\frac{5t}{6}\right)\leqslant t v_n-\eta_n\frac{t}{3}= t v_n \left(1-\frac{\eta_n}{3 v_n}\right). $$ \end{proof} \begin{lemma}\label{stima3} Suppose that $t_s\leqslant \frac t{12}$ and $e_{\rm f}\geqslant \frac t2$. Then $$ {\rm vol}(M)\leqslant t v_n \left( 1-\frac{a_n \eta_n}{2 v_n}\right).
$$ \end{lemma} \begin{proof} Recall that every full $(n-2)$-dimensional face is incident to at least $k_n+1$ $\varepsilon_n$-big simplices of $\mathcal{T}$, so $N\geqslant (k_n+1)e_{\rm f}$. Plugging this inequality into~\eqref{stima1} we get $$ {\rm vol}(M)\leqslant tv_n+\eta_n(e_{\rm f}-(1+a_n)e_{\rm f})=tv_n-a_n\eta_ne_{\rm f}\leqslant tv_n-\frac{a_n\eta_nt}{2}. $$ \end{proof} We can summarize the results proved in Lemmas~\ref{stima1a}, \ref{stima2}, and \ref{stima3} in the following statement, which provides a quantitative version of Theorem~\ref{bah}. \begin{teo} Let $M$ be a closed orientable hyperbolic manifold of dimension $n\geqslant 4$, and let $$ C_n=\max \left\{1-\frac{\varepsilon_n}{12}, 1-\frac{\eta_n}{3v_n}, 1-\frac{a_n\eta_n}{2v_n}\right\} < 1.$$ Then $$ {\rm vol} (M)\leqslant C_n v_n \sigma(M). $$ \end{teo} \section{Stable complexity} \label{complexity:section} As anticipated in the introduction, by replacing triangulations with spines we get another characteristic number $c_\infty$ which equals $\sigma_\infty$ on any irreducible 3-manifold with infinite fundamental group, but which is better-behaved and closer to the simplicial volume in many cases (see \emph{e.g.}~Propositions~\ref{surface2} and \ref{elliptic}). We define here the characteristic number $c_\infty$ and prove some basic properties. \subsection{Complexity} The complexity $c(M)$ of a compact manifold $M$ was defined by Matveev \cite{Mat} in dimension 3 and generalized by the last author in all dimensions \cite{Mar}. We recall briefly its definition. Let $\Delta = \Delta_{n+1}$ be the $(n+1)$-simplex and $\Pi^n$ be the cone over the $(n-1)$-skeleton of $\Delta$. The polyhedron $\Pi^n_k = \Pi^{n-k}\times D^k$ has a \emph{center} $c=(d,0)$ with $d\in\Pi^{n-k}$ being the center of the cone. A compact $(n-1)$-dimensional polyhedron $X$ is \emph{simple} if every point $x$ of $X$ has a star neighborhood PL-homeomorphic to $\Pi^n_k$, via a homeomorphism that sends $x$ to $c$. 
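To make the local models concrete, let us unravel the definition in the lowest-dimensional case $n=2$ (a direct application of the definitions above): the polyhedron $\Pi^2_0=\Pi^2$ is the cone over the $1$-skeleton of the tetrahedron $\Delta_3$; the polyhedron $\Pi^2_1=\Pi^1\times D^1$ is the product of a triod (the cone over the three vertices of $\Delta_2$) with an interval; and $\Pi^2_2=\Pi^0\times D^2$ is a $2$-disc.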
\begin{figure} \begin{center} \includegraphics[width = 9 cm]{models.pdf} \end{center} \nota{Neighborhoods of points in a simple polyhedron.} \label{models:fig} \end{figure} In a simple 2-dimensional polyhedron every point has a neighborhood of one of the three types shown in Fig.~\ref{models:fig}. Points of type (1) are called \emph{vertices}. The points of type (2) and (3) form respectively some manifolds of dimension 1 and 2: their connected components are called respectively \emph{edges} and \emph{regions}. Note that an edge can be a circle and a region can be an arbitrary (connected) surface. A simple $n$-dimensional polyhedron is stratified similarly. Let $M$ be a compact $n$-manifold, possibly with boundary. A subpolyhedron $X\subset \interior M$ is a \emph{spine} of $M$ if $M\setminus X$ consists of an open collar of $\partial M$ and some (possibly none) open balls. \begin{defn} The \emph{complexity} $c(M)$ of $M$ is the minimal number of vertices in a simple spine for $M$. \end{defn} The following facts, already proved in \cite{Mat, Mar}, are immediate. \begin{teo} \label{immediate:teo} The following inequalities hold: \begin{itemize} \item $c(M)\leqslant \sigma(M)$ for any closed manifold $M$, \item $c(M)\leqslant d\cdot c(N)$ for any finite covering $M\stackrel d\to N$ of compact manifolds. \end{itemize} \end{teo} \begin{proof} By dualizing a triangulation $\calT$ of $M$ we get a simple spine of $M$ with one vertex at the barycenter of each simplex of $\calT$, hence $c(M)\leqslant \sigma (M)$. The preimage of a simple spine of $N$ along the covering map is a simple spine of $M$ with $d$ vertices lying above each vertex of $N$, hence $c(M) \leqslant d\cdot c (N)$. \end{proof} We summarize the properties of $c$ in dimension 3 that we will need below. If $F\subset \interior{M}$ is a closed surface in the interior of a compact 3-manifold $M$ we denote by $M/\!/F$ the manifold $M$ with an open tubular neighborhood of $F$ removed. 
\begin{teo} [Matveev, \cite{Mat}]\label{matveev:teo} The complexity $c$ of compact orientable 3-manifolds satisfies the following properties: \begin{itemize} \item $c(M) = \sigma(M)$ for any closed irreducible 3-manifold $M$ distinct from $S^3$, $\matRP^3$, and $L(3,1)$, \item $c(M\#N) = c(M)+c(N)$ for any compact 3-manifolds $M$ and $N$, \item $c(M/\!/F)\leqslant c(M)$ for any irreducible compact 3-manifold $M$ and any incompressible closed surface $F\subset \interior{M}$. \end{itemize} \end{teo} \subsection{Stable complexity} Theorem \ref{immediate:teo} says that $c(M)\leqslant d\cdot c(N)$ for any finite covering $M\stackrel d\to N$ between compact manifolds \cite{Mat, Mar}. We can then mimic the construction of $\sigma_\infty$ and define the \emph{stable complexity} $c_\infty(M)$ of a compact manifold $M$ as $$c_\infty (M) = \inf_{\widetilde M \stackrel d\to M} \left\{\frac{c(\widetilde M)}d\right\}.$$ The stable complexity is of course a characteristic number and we get $$c_\infty(M) \leqslant \sigma_\infty (M)$$ for any closed manifold $M$ by Theorem \ref{immediate:teo}. The following result refines Proposition~\ref{easy:prop}. \begin{prop} \label{smaller:prop} Let $M$ be a closed $n$-manifold and suppose that $\pi_1(M)$ is virtually torsion-free (this condition is automatically satisfied if $n=2$ and if $n=3$ thanks to geometrization). Then $$\|M\| \leqslant c_\infty (M) \leqslant \sigma_\infty (M).$$ \end{prop} \begin{proof} As we have just said, the right inequality is in fact true for any closed $M$. Concerning the left inequality, we have $\|N\| \leqslant c(N)$ for any closed manifold $N$ with virtually torsion-free fundamental group \cite{Mar}. Since this group-theoretical property extends to every finite index subgroup of $\pi_1(M)$, for every degree-$d$ covering $\widetilde M \to M$ of $M$ we get $$\|M\| = \frac{\|\widetilde M \|}d \leqslant \frac{c(\widetilde M)}d$$ and therefore $\|M\|\leqslant c_\infty(M)$. 
If $N$ is a closed $3$-manifold, the inequality $\|N\| \leqslant c(N)$ can be proved directly (without geometrization) building on these facts: \begin{itemize} \item both $c$ and $\|\cdot \|$ are additive on connected sums \cite{Mat, Gro}; \item if $M\in \{S^3, \matRP^3, S^2\times S^1, L(3,1)\}$ then $c(M)=0$ \cite{Mat}; \item if $M$ is irreducible and not in the above list then $c(M)=\sigma(M)$ (\cite{Mat}, see Theorem~\ref{matveev:teo}). \end{itemize} \end{proof} Turning to dimension 3, we will prove below an appropriate version of Theorem \ref{matveev:teo} for $c_\infty$. First of all, the characteristic numbers $c_\infty$ and $\sigma_\infty$ coincide on the 3-manifolds we are mostly interested in: \begin{prop} Let $M$ be a closed irreducible 3-manifold with $|\pi_1(M)|=\infty$. Then $c_\infty(M) = \sigma_\infty (M)$. \end{prop} \begin{proof} Every finite-index covering $N$ of $M$ is irreducible with $|\pi_1(N)|=\infty$ and hence $c(N)=\sigma(N)$. Therefore $c_\infty(M) = \sigma_\infty(M)$. \end{proof} We will show in the next section that $c_\infty$ is also additive on connected sums and monotonic with respect to cutting along incompressible surfaces. More than that, we will show that $c_\infty$ is also additive on JSJ decompositions. Note that $c$ is certainly \emph{not} additive on JSJ decompositions, since there are only finitely many irreducible 3-manifolds of any given complexity $c$ (because $c=\sigma$ there), whereas infinitely many 3-manifolds can share the ``same'' JSJ decomposition (in the weak sense that they share the same geometric blocks, but assembled via different maps). \subsection{Surfaces and elliptic manifolds} The following propositions describe examples where the stable complexity is equal to the simplicial volume and strictly smaller than the stable $\Delta$-complexity. \begin{prop}\label{surface2} If $S$ is a compact surface then $c_\infty (S) = \| S\|= 2\chi_-(S)$, where $\chi_-(S) = \max\{-\chi(S), 0\}$.
Therefore, if $S$ is closed we have $\sigma_\infty(S)>c_\infty (S)$ if $\chi(S)>0$ and $\sigma_\infty(S)=c_\infty(S)$ if $\chi(S)\leqslant 0$. \end{prop} \begin{proof} Let us first recall that the equality $\| S\|=2\chi_-(S)$ (which was stated in Proposition~\ref{surface} for closed surfaces) also holds for surfaces with boundary. In fact, if $S$ is a disk or an annulus, then the pair $(S, \partial S)$ admits a self-map of degree bigger than one, so $\| S\|=\chi_-(S)=0$. If $S$ is a M\"obius strip, then $S$ is covered by the annulus, so again $\| S\|=\chi_-(S)=0$. In the remaining cases, the interior of $S$ admits a complete finite-volume hyperbolic structure, so we may apply Theorem~\ref{prop:teo} to get $$ \|S\|=\frac{{\rm Area}({\rm int}(S))}{v_2}=\frac{2\pi|\chi(S)|}{\pi}=2\chi_-(S) , $$ where ${\rm Area}({\rm int}(S))=2\pi|\chi(S)|$ by the Gauss--Bonnet Theorem, and $v_2=\pi$ since the maximal area of hyperbolic triangles is equal to $\pi$. Let us now come to the statement of the proposition. If $S$ is either $S^2$, $\matRP^2$, an annulus, or a M\"obius strip, then $S$ has a spine without vertices (a circle) and hence $c(S)=0$. Every other surface $S$ with non-empty boundary has a simple (\emph{i.e.}~trivalent) spine with $2\chi_-(S)$ vertices, and hence $$\|S\|\leqslant c_\infty(S) \leqslant c(S) \leqslant 2\chi_-(S) = \|S\| .$$ This proves the first statement when $S$ has non-empty boundary or $\chi(S)>0$. If $S$ is closed with $\chi(S)\leqslant 0$ then by Proposition~\ref{easy:prop} we have $$\| S\|\leqslant c_\infty (S)\leqslant \sigma_\infty(S)= \| S\|=2\chi_-(S),$$ whence the conclusion. \end{proof} \begin{prop}\label{elliptic} If $M$ is an elliptic $n$-manifold then $c_\infty(M)=\| M\|=0$ and $\sigma_\infty(M)>0$. \end{prop} \begin{proof} For every $n\geqslant 1$ we have $c(S^n)=0$ \cite{Mat, Mar} and $\| S^n\|=0$, since $S^n$ admits a self-map of degree bigger than one. Since every elliptic manifold is covered by $S^n$, we get $c_\infty(M)=\| M\|=0$.
On the other hand, $\sigma(M)>0$ for every manifold $M$, and hence $\sigma_\infty(M)>0$ whenever $M$ has finite fundamental group, and therefore only finitely many coverings. \end{proof} \section{Three-manifolds} \label{Three:section} We study here the stable complexity $c_\infty$ of 3-manifolds. We first show that $c_\infty$ is additive on connected sums and JSJ decompositions: while additivity on connected sums is easy, to prove additivity on JSJ decompositions we make essential use of two lemmas established by Hamilton \cite{Ham}. As a corollary, we compute $c_\infty$ on any irreducible 3-manifold whose JSJ decomposition consists of Seifert pieces and hyperbolic manifolds commensurable with the figure-eight knot complement. We end this section by exhibiting a sequence of closed hyperbolic 3-manifolds $M_i$ (with bounded volume) for which the ratio between $c_\infty(M_i)$ and $\|M_i\|$ tends to one. \subsection{Minimizing sequences and disconnected coverings} We define the following natural notion. \begin{defn} Let $M$ be a compact manifold. A \emph{minimizing sequence} of coverings $f_i\colon M_i\stackrel{d_i}{\to} M$ is a sequence such that $\frac{c(M_i)}{d_i} \to c_\infty(M)$. \end{defn} Of course every manifold $M$ has a minimizing sequence. Given a minimizing sequence $f_i\colon M_i\stackrel{d_i}{\to} M$, we can replace each $M_i$ with any manifold $N_i$ covering $M_i$ and we still get a minimizing sequence. In particular, if $M$ is a 3-manifold with $|\pi_1(M)|=\infty$ we can always take a minimizing sequence such that $d_i\to \infty$ because $\pi_1(M)$ is residually finite \cite{Hem}. Let $\eta(M)$ be an invariant which is submultiplicative under finite coverings, like $c(M)$, $\sigma(M)$, or $\|M\|^\matZ$. We have defined in this paper the \emph{stable} version $\eta_\infty(M)$ of $\eta(M)$ by taking the infimum of $\frac{\eta(N)}d$ among all finite coverings $N\stackrel d\to M$.
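Let us also record an elementary property of stable invariants, which follows at once from the definition: if $\widetilde M\stackrel{d}{\to} M$ is a finite covering, then every finite covering $N\stackrel{e}{\to}\widetilde M$ composes with it to a finite covering $N\stackrel{de}{\to} M$, whence
$$
\eta_\infty(M)\ \leqslant\ \inf_{N\stackrel{e}{\to}\widetilde M}\frac{\eta(N)}{de}\ =\ \frac{\eta_\infty(\widetilde M)}{d},
$$
\emph{i.e.}~$\eta_\infty(\widetilde M)\geqslant d\cdot \eta_\infty(M)$. The reverse inequality $\eta_\infty(\widetilde M)\leqslant d\cdot \eta_\infty(M)$ can be obtained by pulling back coverings of $M$ along $\widetilde M\to M$; the pullback is in general a disconnected covering, a notion we discuss below.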
We have implicitly assumed in this definition that both $M$ and $N$ are connected, as this hypothesis is typically embodied in the definition of ``covering''. If we discard this hypothesis, thus allowing both $M$ and $N$ to be disconnected, we actually get the same stable function $\eta_\infty$. More precisely, we define a \emph{(possibly disconnected) degree-$d$ covering} as a map $p\colon M\to N$ between (possibly disconnected) topological spaces where every point in $N$ is contained in some open set $U$ such that $p^{-1}(U) = \cup_{i=1}^d U_i$ and $p|_{U_i}\colon U_i \to U$ is a homeomorphism. We re-define $\eta_\infty(M)$ for any (possibly disconnected) manifold $M$ as the infimum of $\frac{\eta(\widetilde M)}d$ over all (possibly disconnected) degree-$d$ coverings $\widetilde M\to M$. It is easy to verify that this slightly modified definition of $\eta_\infty(M)$ coincides on a connected manifold $M$ with the one we have introduced beforehand using only connected coverings, and that we get an additive function $\eta_\infty(\sqcup_{i\in I} M_i) = \sum_{i\in I} \eta_\infty(M_i)$ on the connected components of disconnected manifolds. In this section (and nowhere else) we implicitly allow all coverings to be disconnected: this is a natural framework when one cuts a 3-manifold along surfaces, and might get a disconnected 3-manifold as a result. \subsection{Connected sums and incompressible surfaces} Additivity on connected sums easily lifts from $c$ to $c_\infty$. We subdivide the proof into two steps. \begin{prop} Let $M$ be a 3-manifold and $S\subset\interior M$ a 2-sphere. We have $c_\infty (M/\!/S) = c_\infty(M)$. \end{prop} \begin{proof} We know \cite{Mat} that if $N$ is a 3-manifold and $S\subset\interior N$ is a sphere then $c(N/\!/S) = c(N)$, so the same result for $c_\infty$ follows easily. If $p\colon \widetilde M \to M$ is a covering, the preimage $\widetilde S = p^{-1}(S)$ is a union of spheres, and hence $c(\widetilde M/\!/\widetilde S) = c(\widetilde M)$.
Every covering $\widetilde M \stackrel d\to M$ induces a covering $\widetilde M /\!/ \widetilde S \stackrel d\to M/\!/S$ with $c(\widetilde M) = c(\widetilde M /\!/ \widetilde S)$, hence $c_\infty(M/\!/S)\leqslant c_\infty(M)$. Conversely, every covering $N \stackrel d \to M/\!/S$ gives rise to a covering $N' \stackrel d \to M$, where $N'$ is obtained from $N$ by gluing the $2d$ boundary spheres in pairs. In particular $c(N') = c(N)$ and hence we also get $c_\infty(M/\!/S)\geqslant c_\infty(M)$. \end{proof} \begin{cor} Let $M,N$ be any compact 3-manifolds. We have $$c_\infty(M\#N) = c_\infty(M)+c_\infty(N).$$ \end{cor} \begin{proof} Cutting and gluing along 2-spheres does not change $c_\infty$. Capping a boundary 2-sphere with a 3-disc $D^3$ also does not modify $c_\infty$, since $c_\infty(D^3) = c(D^3) = 0$. \end{proof} Another property which lifts easily from $c$ to $c_\infty$ is monotonicity under the operation of cutting along incompressible surfaces. \begin{prop} Let $S\subset \interior{M}$ be an incompressible surface in an irreducible 3-manifold $M$. We have $$c_\infty(M/\!/S) \leqslant c_\infty(M).$$ \end{prop} \begin{proof} If $p\colon \widetilde M \to M$ is a covering, the manifold $\widetilde M$ is irreducible and the pre-image $\widetilde S = p^{-1}(S)$ of $S$ is a (possibly disconnected) incompressible surface in $\widetilde M$. Therefore $c(\widetilde M /\!/ \widetilde S)\leqslant c(\widetilde M)$ by Theorem \ref{matveev:teo}, and $c_\infty(M/\!/S)\leqslant c_\infty(M)$. \end{proof} When the incompressible surface is a torus, we actually get an equality. To prove this non-trivial fact (which does not hold for $c$ and heavily depends on geometrization) we will need to construct appropriate coverings of irreducible 3-manifolds, using some techniques introduced by Hempel in his proof that the fundamental group of an irreducible 3-manifold is residually finite \cite{Hem} and further developed in a recent paper by E. Hamilton \cite{Ham}.
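For later use we record a short verification of the elementary fact, stated without proof in the next subsection, that a subgroup $H<\matZ\times\matZ$ of index $x$ contains the subgroup generated by $(x,0)$ and $(0,x)$: the quotient $(\matZ\times\matZ)/H$ is a group of order $x$, hence Lagrange's theorem gives $$x\cdot [(1,0)] = [(x,0)] = 0, \qquad x\cdot [(0,1)] = [(0,x)] = 0$$ in $(\matZ\times\matZ)/H$, that is $(x,0)\in H$ and $(0,x)\in H$, whence $x(\matZ\times\matZ) < H$.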
\subsection{Characteristic coverings} Recall that a \emph{characteristic subgroup} of a group $G$ is a subgroup $H<G$ which is invariant under every automorphism of $G$. For a natural number $x\in \ensuremath {\mathbb{N}}$, the \emph{$x$-characteristic} subgroup of $\matZ\times \matZ$ is the subgroup $x(\matZ\times\matZ)$ generated by $(x,0)$ and $(0,x)$. It has index $x^2$ if $x>0$ and $\infty$ if $x=0$. The characteristic subgroups of $\matZ\times \matZ$ are precisely the $x$-characteristic subgroups with $x\in \ensuremath {\mathbb{N}}$. It is easy to prove that a subgroup of $\matZ\times\matZ$ of index $x$ contains the $x$-characteristic subgroup. A covering $p\colon \widetilde T\to T$ of tori is called \emph{$x$-characteristic} if $p_*(\pi_1(\widetilde T))$ is the $x$-characteristic subgroup of $\pi_1(T)\cong \matZ\times \matZ$. A covering $p\colon \widetilde M \to M$ of 3-manifolds bounded by tori is \emph{$x$-characteristic} if the restriction of $p$ to each boundary component of $\widetilde M$ is $x$-characteristic. Lemmas 5 and 6 from \cite{Ham} state the following. \begin{lemma}[E. Hamilton] \label{Hamilton:lemma} Let $M_1,\ldots, M_n$ be a finite collection of compact, orientable 3-manifolds with boundary whose interiors admit complete hyperbolic structures of finite volume. Let $m$ be a positive integer. Then there exist a positive integer $x$ and finite-index normal subgroups $K_i\triangleleft \pi_1(M_i)$ such that $K_i\cap \pi_1(T_{ij})$ is the characteristic subgroup of index $(mx)^2$ in $\pi_1(T_{ij})$, for each component $T_{ij}$ of $\partial M_i$. Hence the covering of $M_i$ corresponding to $K_i$ is $(mx)$-characteristic. \end{lemma} \begin{lemma}[E. Hamilton] \label{Hamilton:2:lemma} Let $M$ be a compact, orientable Seifert fibered space with non-empty, incompressible boundary. Then there exists a positive integer $v$ such that for each multiple $m$ of $v$ there is a finite $m$-characteristic covering space $M_m$ of $M$.
\end{lemma} We will use these lemmas to prove the following result, which concerns this question: given one covering on each piece of the JSJ decomposition of an irreducible 3-manifold $M$, can we glue them together into a covering of $M$? The answer is of course negative in general, since there is no way to glue arbitrary coverings which behave very differently along the tori of the JSJ decomposition; however Lemmas \ref{Hamilton:lemma} and \ref{Hamilton:2:lemma} can be used to replace the given coverings with some bigger $x$-characteristic coverings, and (as noted by Hempel \cite{Hem}) such coverings can indeed be glued together (but one needs to take multiple copies of each covering to glue everything properly). \begin{prop} \label{glue:prop} Let an irreducible orientable 3-manifold $M$ with (possibly empty) boundary consisting of tori decompose along its JSJ decomposition into some pieces $M_1,\ldots, M_h$. Let $p_i\colon\widetilde{M_i}\to M_i$ be a finite covering for every $i$. There exist a natural number $n$, a finite covering $q_i\colon N_i \to \widetilde{M_i}$ for every $i$ and a finite covering $p\colon N \to M$ such that $p^{-1}(M_i)$ consists of copies of $N_i$ covering $M_i$ along $p_i\circ q_i$. Moreover each $p_i\circ q_i$ is $n$-characteristic. \end{prop} \begin{proof} Up to taking a bigger covering we can suppose that $p_i$ is regular for every $i$. The pre-image of a boundary torus $T_{ij}\subset\partial M_i$ consists of finitely many tori $T_{ij}^1, \ldots, T_{ij}^l$, and the restriction of $p_i$ to each $T_{ij}^k$, $k=1,\ldots,l$, gives isomorphic coverings (because $p_i$ is regular). In particular $H_{ij} = (p_i)_*(\pi_1(T_{ij}^k))$ is a subgroup of $\pi_1(T_{ij})$ which does not depend on $k$. Let $d_{ij}$ be the index of $H_{ij}$ in $\pi_1(T_{ij})$. By geometrization every $M_i$ is either hyperbolic or Seifert. For every Seifert block $M_i$ there is some integer $v_i$ such that the conclusion of Lemma \ref{Hamilton:2:lemma} applies.
Let now $m$ be the least common multiple of all integers $d_{ij}$ and $v_i$. Let us apply Lemma \ref{Hamilton:lemma} to the hyperbolic blocks of the JSJ decomposition of $M$: there is an integer $x$ such that every hyperbolic block $M_i$ has an $(mx)$-characteristic covering. By Lemma \ref{Hamilton:2:lemma} every Seifert block $M_i$ also has an $(mx)$-characteristic covering. Therefore, every block $M_i$ has an $(mx)$-characteristic covering, determined by some subgroup $K_i<\pi_1(M_i)$ which intersects every $\pi_1(T_{ij})$ in its $(mx)$-characteristic subgroup. Recall that our original covering $p_i\colon \widetilde{M_i}\to M_i$ is determined by some other subgroup $H_i<\pi_1(M_i)$ intersecting every $\pi_1(T_{ij})$ in a subgroup $H_{ij}$ of some index $d_{ij}$. A subgroup of index $d_{ij}$ contains the $(d_{ij})$-characteristic subgroup and hence the $(mx)$-characteristic subgroup, since $d_{ij}$ divides $mx$. Therefore $K_i\cap H_i$ also intersects every $\pi_1(T_{ij})$ in its $(mx)$-characteristic subgroup, and hence it induces an $(mx)$-characteristic covering $N_i\to \widetilde{M_i} \to M_i$. Summing up, we have shown that every covering $\widetilde {M_i}\to M_i$ has a bigger $(mx)$-characteristic covering $N_i\to\widetilde {M_i}\to M_i$, where the constant $mx$ is fixed. Hempel proved \cite{Hem} that $x$-characteristic coverings (with the same fixed $x$) can be glued together. Namely, there is a finite covering $N\to M$ such that its restriction to $M_i$ consists of finitely many copies of the covering $N_i\to M_i$. \end{proof} \begin{cor} \label{glue:cor} Let $M$ be an irreducible orientable 3-manifold. For every integer $n_0$ there is a bigger integer $n>n_0$ and a covering $p\colon N\to M$ whose restriction over any torus of the JSJ decomposition of $M$ is a disjoint union of $n$-characteristic coverings. \end{cor} \begin{proof} Take a block $M_1$ of the JSJ decomposition of $M$.
Thanks to geometrization, the fundamental group $\pi_1(M_1)$ is residually finite, hence there is a covering $\widetilde M_1 \to M_1$ which restricts on some boundary torus of $\widetilde M_1$ to a covering of degree bigger than $n_0^2$. Apply Proposition \ref{glue:prop} to this covering: the result is an $n$-characteristic covering $N\to M$ with $n > n_0$. \end{proof} \subsection{JSJ decompositions} We will prove below that $c_\infty$ is additive on JSJ decompositions. We start by proving the following. \begin{lemma} \label{tori:lemma} Let an irreducible orientable 3-manifold $M$ with (possibly empty) boundary consisting of tori decompose along its JSJ decomposition into some pieces $M_1,\ldots, M_h$. We have $$c_\infty (M) \leqslant c(M_1)+\ldots + c(M_h).$$ \end{lemma} \begin{proof} Given some simple spines $P_1, \ldots, P_h$ for $M_1, \ldots, M_h$, it is easy to construct a simple spine $Q$ for $M$. Set $P = P_1\sqcup \ldots \sqcup P_h$. Recall that $M_i\setminus P_i$ consists of an open collar of $\partial M_i$ plus maybe some open balls. Therefore $M\setminus P$ consists of one product neighborhood $T\times (-1,1)$ of each torus $T$ of the decomposition, plus maybe some open balls. To build a spine for $M$ it suffices to choose a simple spine $Y$ for $T$ (\emph{i.e.}~$Y$ is a 1-dimensional polyhedron with only 3-valent vertices and $T\setminus Y$ consists of open discs) and add to $P$ one product $Y\times (-1,1)$ inside each such product neighborhood $T\times (-1,1)$. If $Y\subset T$ is in generic position, the resulting polyhedron $Q$ is still simple. Now $M\setminus Q$ consists of open balls only: those that were in $M\setminus P$, plus one for each torus of the decomposition. Therefore $Q$ is a spine for $M$. \begin{figure} \begin{center} \includegraphics[width = 12.5 cm] {types.pdf} \nota{We colour in green the regions of the inserted portions $Y\times (-1,1)$. 
There are four types of vertices $A$, $B$, $C$, and $D$ in the spine $Q$, according to the colours of the incident regions.} \label{types:fig} \end{center} \end{figure} Colour in green the regions in the products $Y\times (-1,1)$. It is easy to check that there are now four types $A, B, C, D$ of vertices in $Q$ according to the colours of the incident regions, as shown in Fig.~\ref{types:fig}. The vertices of type $A$ are those of $P$. Let $v_A$, $v_B$, $v_C$, and $v_D$ be the number of vertices of type $A$, $B$, $C$, and $D$ in $Q$. Consider one inserted piece $Y\times (-1,1) \subset T\times (-1,1)$ inside a collar separating two (possibly coinciding) polyhedra $P_i$ and $P_j$. Pull back the cellularization of $P_i$ to $T$ via the collar, as in Fig.~\ref{tori:fig}: the four types of vertices are also shown in the figure. \begin{figure} \begin{center} \includegraphics[width = 9 cm] {tori.pdf} \nota{The cellularization of $T$ induced by the collar map $T\to P_i$, and the spine $Y$ of $T$ coloured in green. The four types of vertices $A$, $B$, $C$, $D$.} \label{tori:fig} \end{center} \end{figure} Corollary \ref{glue:cor} ensures that for every $n_0>0$ there is a natural number $n>n_0$ and a covering $p\colon N\stackrel{d}\to M$ whose restriction over each torus $T$ of the JSJ decomposition is a disjoint union of some $h$ distinct $n$-characteristic coverings. We thus have $d = hn^2$. The pre-image $\widetilde Q = p^{-1}(Q) \subset N$ is a simple spine of $N$, and we give each region of $\widetilde Q$ the same colour as its image in $Q$. We thus get $dv_A, dv_B, dv_C,$ and $dv_D$ vertices of type $A$, $B$, $C$, $D$ respectively. \begin{figure} \begin{center} \includegraphics[width = 12.5 cm] {tori2.pdf} \nota{A 3-characteristic covering $\widetilde T$ of some torus $T$ of the JSJ decomposition. The spine $Y$ lifts to the green spine $\widetilde Y$ shown in the left picture.
We can eliminate most of its edges and still get a spine $\widetilde Y'$ of $\widetilde T$.} \label{tori2:fig} \end{center} \end{figure} Let $T\subset M$ be one torus of the decomposition. One component $\widetilde T$ of $p^{-1}(T)$ is shown in Fig.~\ref{tori2:fig}, containing the lifted spine $\widetilde Y$. If $T\setminus Y$ consists of one disc only, as in Fig.~\ref{tori:fig}, then $\widetilde T \setminus \widetilde Y$ consists of $n^2$ discs. As shown in the figure, we can replace $\widetilde Y$ with a simpler spine $\widetilde Y'\subset \widetilde Y \subset \widetilde T$, whose complement in $\widetilde T$ consists of only one disc. We then modify $\widetilde Q$ by substituting the product $\widetilde Y \times (-1,1)$ with $\widetilde Y' \times (-1,1)$. The resulting polyhedron $\widetilde Q'\subset \widetilde Q$ is still a spine of $N$, with fewer vertices than $\widetilde Q$. We estimate the number of vertices of $\widetilde Q'$. Recall that $d = hn^2$. It is clear from the picture that, after this substitution, the number of vertices of type $A$, $B$, $C$, $D$ is respectively not greater than $dv_A$, $2hnv_B$, $hv_C$, and $2hnv_D$. Therefore $$c(N) \leqslant dv_A + 2hn(v_B+v_D) + hv_C.$$ Suppose that in our construction we started with some spines $P_1,\ldots, P_h$ with minimal number of vertices for $M_1,\ldots, M_h$. Then $v_A$ equals $c(M_1)+\ldots + c(M_h)$ and we get $$c_\infty (M) \leqslant \frac{c(N)}{d} \leqslant v_A + \frac{2(v_B+v_D)}{n} + \frac{v_C}{n^2}. $$ Since for every $n_0$ there is $n>n_0$ which satisfies this inequality, we get $$c_\infty(M) \leqslant v_A = c(M_1) +\ldots + c(M_h).$$ \end{proof} Finally we can prove the following. \begin{prop} Let an irreducible orientable 3-manifold $M$ with (possibly empty) boundary consisting of tori decompose along its JSJ decomposition into some pieces $M_1,\ldots, M_h$.
We have $$c_\infty (M) = c_\infty(M_1)+\ldots + c_\infty(M_h).$$ \end{prop} \begin{proof} We already know that by cutting along incompressible surfaces we cannot increase the stable complexity, hence $c_\infty (M)\geqslant c_\infty(M_1)+\ldots + c_\infty(M_h)$. We need to prove the converse inequality. Let $p_i^j\colon M_i^j\stackrel{d_i^j}{\to} M_i$ be a minimizing sequence of coverings for $M_i$, for each $i=1,\ldots, h$. By construction we have $$\frac{c(M_i^j)}{d_i^j} \to c_\infty (M_i)$$ as $j\to \infty$, for each $i=1,\ldots, h$. Fix $j$. By Proposition \ref{glue:prop}, up to replacing each $p_i^j$ with a bigger covering (which we still denote by $p_i^j$) we can suppose that there is a covering $p\colon M^j\stackrel{d^j}{\to} M$ which restricts on $M_i$ to some $k_i^j$ disjoint copies of $p_i^j$, for every $i$. We necessarily have $d^j = k_i^jd_i^j$. Lemma \ref{tori:lemma} implies that $$c_\infty (M^j) \leqslant k_1^jc(M_1^j) + \ldots + k_h^jc(M_h^j).$$ We divide both sides by $d^j$ and, since $c_\infty$ is a characteristic number (so that $c_\infty(M^j) = d^j\, c_\infty(M)$), we get $$c_\infty(M) \leqslant \frac{c(M_1^j)}{d_1^j} + \ldots + \frac{c(M_h^j)}{d_h^j}.$$ Since the $p_i^j$ are minimizing sequences for all $i$, by sending $j\to \infty$ we get $$c_\infty(M) \leqslant c_\infty(M_1) + \ldots + c_\infty(M_h).$$ \end{proof} \subsection{Seifert manifolds} We have proved that the stable complexity of an irreducible 3-manifold is the sum of the stable complexity of the pieces in its JSJ decomposition. We can therefore concentrate our attention on Seifert and hyperbolic manifolds. \begin{prop} Let $M$ be a compact Seifert manifold, with or without boundary. We have $c_\infty(M)=0$. \end{prop} \begin{proof} Every compact Seifert manifold is finitely covered by a manifold $M$ which is an $S^1$-bundle over an orientable surface $\Sigma$ with some Euler number $e\geqslant 0$; since $c_\infty$ is a characteristic number, it suffices to show that $c_\infty(M)=0$. If the manifold has boundary then $e=0$ and the bundle is a product $\Sigma \times S^1$. Since this manifold covers itself with arbitrarily high degree, it clearly has stable complexity zero.
If the manifold $M$ is closed, we denote it by $(\Sigma, e)$. Its complexity satisfies the following bound \cite{MP}: $$c(\Sigma, e)\leqslant \max \{0, e-1+\chi(\Sigma)\} - 6(\chi(\Sigma)-1) \leqslant e+6\chi_-(\Sigma)+6.$$ A degree-$d$ covering of surfaces $\widetilde\Sigma \stackrel d\to \Sigma$ induces a covering $(\widetilde \Sigma, de) \stackrel d\to (\Sigma, e)$. By unwrapping the fiber we can construct another covering $(\widetilde \Sigma, e) \stackrel d\to (\widetilde\Sigma, de)$ and by composing them we get $$(\widetilde \Sigma, e) \stackrel{d^2} \to (\Sigma, e).$$ Therefore $$c_\infty(\Sigma,e) \leqslant \frac{c(\widetilde\Sigma,e)}{d^2} \leqslant \frac{e+6\chi_-(\widetilde \Sigma) + 6}{d^2} = \frac{e+6d\chi_-(\Sigma) + 6}{d^2} \to 0$$ as $d\to \infty$. \end{proof} \begin{cor} A graph manifold has stable complexity zero. \end{cor} \subsection{Hyperbolic $3$-manifolds} We can now turn to hyperbolic 3-manifolds. Note that, since $c_\infty$ is a characteristic number, once we know the stable complexity of a manifold we also know the stable complexity of any manifold in its commensurability class. We start by extending Proposition \ref{smaller:prop} to the cusped case. \begin{prop} Let $M$ be a compact 3-manifold, whose interior admits a complete hyperbolic structure with finite volume. We have $$\|M\| \leqslant c_\infty(M).$$ \end{prop} \begin{proof} The closed case was considered in Proposition \ref{smaller:prop}, so we suppose $M$ has boundary. As shown by Matveev \cite{Mat:book}, if $N$ is a compact 3-manifold whose interior admits a complete hyperbolic structure with finite volume, then the complexity $c(N)$ equals the minimal number of tetrahedra in an ideal triangulation of $N$. Moreover, by straightening the simplices of such an ideal triangulation it is easily seen that ${\rm vol} (N)\leqslant v_3\, c(N)$, whence $\| N\| \leqslant c(N)$ by Theorem~\ref{prop:teo}.
Therefore $\|N\|\leqslant c(N)$ for every covering $N$ of $M$, and we conclude as in Proposition \ref{smaller:prop}. \end{proof} We can calculate $c_\infty$ only on one (very special) commensurability class of hyperbolic cusped 3-manifolds. \begin{prop} If $M$ is commensurable with the figure-eight knot complement, then $$\|M\| = c_\infty(M).$$ \end{prop} \begin{proof} The figure-eight knot complement $N$ is obtained by gluing two ideal regular tetrahedra, each of volume $v_3$. We have ${\rm vol}(N) = 2v_3$ and hence $\|N\| = 2$. We also have $c(N)=2$. The (very special) equality $c(N) = \|N\|$ together with $\|N\|\leqslant c_\infty(N) \leqslant c(N)$ implies $c_\infty (N) = \|N\|$. Since both $c_\infty$ and $\|\cdot \|$ are characteristic numbers, they coincide on the whole commensurability class of $N$. \end{proof} Finally, we can say something concerning Dehn filling. \begin{prop} \label{filling:prop} Let $N$ be any compact 3-manifold and $M$ be obtained from $N$ via Dehn filling one boundary torus of $N$. We have $$c_\infty (M) \leqslant c(N).$$ \end{prop} \begin{proof} The proof is similar to that of Lemma \ref{tori:lemma}. Given a simple spine $P$ of $N$, it is easy to construct a spine $Q$ of $M$. Recall that $N\setminus P$ consists of a collar of the boundary plus possibly some open balls. Then $M\setminus P$ consists of a collar of the boundary (if non-empty), plus possibly some open balls, plus an open solid torus $V$ created by the Dehn filling. Let $D$ be a meridian disc of $V$, and take $Q = P \cup D$. If $D$ is generic, the resulting polyhedron $Q$ is still simple. Now $V\setminus D$ is an open ball and hence $Q$ is a spine of $M$. As in the proof of Lemma \ref{tori:lemma}, colour the added disc $D$ in green. There are now three types $A$, $B$, and $D$ of vertices in $Q$, as shown in Fig.~\ref{types:fig}. Let $v_A$, $v_B$, and $v_D$ be the number of vertices of type $A$, $B$, and $D$.
Since $\pi_1(M)$ is residually finite \cite{Hem}, for every $n>0$ there is an $h>0$ and a regular covering $p\colon \widetilde M \stackrel {hn}\to M$ such that $p^{-1}(V)$ consists of $h$ open solid tori $\widetilde V_1,\ldots, \widetilde V_h$, each winding $n$ times along $V$ via $p$. The covering has degree $d=hn$. The spine $Q$ of $M$ lifts to a spine $p^{-1}(Q) = \widetilde Q$ of $\widetilde M$, which contains $dv_A$, $dv_B$, and $dv_D$ vertices of type $A$, $B$, and $D$. The disc $D$ lifts to $n$ discs inside each $\widetilde V_i$. These $n$ discs subdivide $\widetilde V_i$ into $n$ open balls. We now remove from $\widetilde Q$ some $n-1$ of these $n$ discs, leaving only one disc $\widetilde D_i\subset \widetilde V_i$, whose complement in $\widetilde V_i$ is a single open ball. If we do such removals for every $i=1,\ldots, h$ we are left with a spine $\widetilde Q'\subset \widetilde Q$ with fewer vertices. The number of vertices of type $A$, $B$, and $D$ in $\widetilde Q'$ is at most $dv_A$, $hv_B$, and $hv_D$. Therefore $$c(\widetilde M) \leqslant dv_A + h(v_B+v_D).$$ Suppose that $P$ has the minimal number of vertices for $N$. Then $v_A$ equals $c(N)$ and we get $$c_\infty(M) \leqslant \frac {c(\widetilde M)}d \leqslant v_A + \frac{v_B+v_D}n.$$ Since this inequality holds for every $n>0$ we get $c_\infty(M) \leqslant c(N)$. \end{proof} Note that we do not know if $c_\infty(M)$ is smaller than $c_\infty(N)$. However, this result is enough to deduce the following corollary, which implies in turn Theorem~\ref{3:teo}, since $\sigma_\infty(M)=c_\infty(M)$ for every closed hyperbolic $3$-manifold $M$. \begin{cor} \label{figure-eight:cor} Let $M_i$ be any sequence of distinct Dehn fillings on the figure-eight knot complement. We have $$\frac{c_\infty(M_i)}{\|M_i\|} \to 1.$$ \end{cor} \begin{proof} Let $N$ be the figure-eight knot complement. We have $c_\infty(M_i) \leqslant c(N) = 2$ for all $i$.
By Thurston's Dehn filling Theorem~\cite{Thurston} we also get ${\rm vol}(M_i)\to {\rm vol}(N)$ and hence $\|M_i\|\to \|N\| = 2$. Since $c_\infty(M_i)/\|M_i\|\geqslant 1$, the conclusion follows. \end{proof} \section{Concluding remarks}\label{futuro:sec} In this section we show how our arguments can be adapted to prove that the stable integral simplicial volume is strictly bigger than the simplicial volume for closed hyperbolic manifolds of dimension at least four. Moreover, we describe some possible approaches to prove or disprove that $\sigma_\infty (M)=\| M\|$ for every (or some) closed hyperbolic $3$-manifold $M$. \subsection{Stable integral simplicial volume and Gromov's Question~\ref{gromov:conj}} Proposition \ref{chi2:prop} holds (with a similar proof) also for the stable integral simplicial volume: \begin{prop}\label{last:prop} Let $M$ be a closed $n$-dimensional manifold. We have $$|\chi(M)|\leqslant (n+1)\cdot \|M\|_\infty^\matZ.$$ \end{prop} \begin{proof} Using Poincar\'e duality, it is not difficult to prove the following inequality~\cite{Loeh2}, similar to the one we used in the proof of Proposition \ref{chi2:prop}: \begin{equation}\label{aaa} \sum_{i=0}^n b_i(M)\leqslant (n+1)\cdot \|M\|^\matZ \end{equation} (see also~\cite[Example 14.28]{luck}). As a consequence we get $|\chi(M)|\leqslant (n+1) \|M\|^\matZ$ and $$ |\chi(M)|\leqslant (n+1) \cdot \|M\|^\matZ_\infty $$ since the Euler characteristic is a characteristic number. \end{proof} Therefore, if $\|M\|_\infty^\matZ$ were equal to $\|M\|$ for every aspherical manifold $M$, then we could answer Gromov's Question~\ref{gromov:conj} in the positive. However, as for $\sigma_\infty$, the two characteristic numbers differ at least on hyperbolic manifolds of dimension $n\geqslant 4$. \begin{teo}\label{integral} For every $n\geqslant 4$ there exists a constant $C_n<1$ such that the following holds. Let $M$ be a closed orientable hyperbolic manifold of dimension $n\geqslant 4$.
Then $$ \|M\| \leqslant C_n \|M\|^\matZ_\infty. $$ \end{teo} \begin{proof} Just as in the proof of Theorem~\ref{4:teo}, it is sufficient to show that $$ \frac{{\rm vol}(M)}{v_n}\leqslant C_n \| M\|^\matZ $$ for some constant $C_n<1$ independent of $M$. Let us now briefly describe how the proof of Theorem~\ref{bah} can be adapted to achieve this goal. Let $z=\sum_{i=1}^m \epsilon_i \sigma_i$ be an integral cycle representing the fundamental class of $M$, and suppose that $\epsilon_i=\pm 1$ for every $i$ (so we don't exclude the case that $\sigma_i=\sigma_j$ for some $i\neq j$). We may also assume that $z$ realizes the integral simplicial volume of $M$, so $\| M\|^\matZ=m$. The cycle $z$ defines a function $f$ from the disjoint union of $m$ copies of the standard $n$-dimensional simplex to $M$. Since $z$ is a cycle, there exists a complete pairing of the $(n-1)$-dimensional faces of these standard simplices such that paired faces can be identified by a simplicial isomorphism which is compatible with $f$. These data define a pseudomanifold $X$ endowed with a (loose) triangulation $\mathcal{T}$ with $m$ simplices and a map $g\colon X\to M$ induced by $f$. By construction, the sum of the simplices of $\mathcal{T}$ defines an $n$-cycle $z'\in Z_n(X,\matZ)$, and $g_\ast(z')=z$. We can now mimic the construction described in Subsection~\ref{stmap} and homotope $g$ into a map $\str(g)\colon X\to M$ which sends each simplex of $\mathcal{T}$ into the support of a straight simplex in $M$. As a consequence, we may still define positive, negative, $\varepsilon$-big and $\varepsilon$-small simplices of $\mathcal{T}$, and full $(n-2)$-dimensional faces of $\mathcal{T}$. The estimates described in Subsection~\ref{proof:sub} still hold, and this provides the needed constant $C_n<1$ such that ${\rm vol}(M)\leqslant C_n v_n m=C_n v_n \| M\|^\matZ$. 
\end{proof} \subsection{$L^2$-Betti numbers}\label{futuro:sub} As already mentioned in the introduction, Gromov was primarily interested in the comparison of simplicial volume with the Euler characteristic. In~\cite{Gromov2} and~\cite{Gro3}, he suggested using $L^2$-invariants to attack this problem. More precisely, in \cite[page 232]{Gromov2} he observed that Question~\ref{gromov:conj} may be reduced to Question~\ref{gromov2:conj} below, which is formulated in terms of $L^2$-Betti numbers. The $L^2$-Betti numbers were first defined analytically by Atiyah in terms of the heat kernel in the context of cocompact group actions on manifolds~\cite{Aty}. Since then, the range of definition and application of $L^2$-Betti numbers has been impressively widened, see \emph{e.g.}~\cite{Connes, CheeGro, Lupaper, Farber, Gab}. A comprehensive introduction to $L^2$-Betti numbers may be found in L\"uck's book \cite{luck}. One of the most important features of $L^2$-Betti numbers is that they can be defined both analytically and combinatorially. In the topological-combinatorial setting, the $L^2$-invariants are mainly based on the study of the action of the fundamental group of a space on the cellular chain complex of its universal covering. Here, if $M$ is a closed manifold, we denote by $b_k^{(2)}(M)$ the $k$-th $L^2$-Betti number $b_k^{(2)}(\widetilde{M},\pi_1(M))$ as defined in~\cite[Chapter 6]{luck}, where $\widetilde{M}$ is the universal covering of $M$ and $\pi_1(M)$ acts as usual on $\widetilde{M}$. In~\cite{Gromov2}, Gromov asked the following: \begin{quest}[\cite{Gromov2}, page 232]\label{gromov2:conj} Let $M$ be a closed aspherical manifold such that $\| M\|=0$. Is it true that $ b_k^{(2)}(M)=0$ for every $k\in\ensuremath {\mathbb{N}} $?
\end{quest} In order to explain how Question~\ref{gromov2:conj} is related to Question~\ref{gromov:conj}, let us briefly mention some important properties of $L^2$-Betti numbers: \begin{enumerate} \item they are characteristic numbers, \emph{i.e.}~they are multiplicative with respect to finite coverings \cite[Theorem 1.35]{luck}; \item $b_k^{(2)}(M)$ is a sort of stable version of the $k$-th Betti number $b_k(M)$ of $M$; more precisely, it is proved in~\cite{luckpaper} that if $\pi_1(M)=G$ admits a sequence of nested normal subgroups $G\supseteq G_1\supseteq G_2\supseteq\ldots$ of finite index such that $\bigcap_{i\in\ensuremath {\mathbb{N}}} G_i=\{1\}$, then $$ b_k^{(2)}(M)=\inf_{\widetilde{M}_i \stackrel{d_i}{\to} M} \left\{\frac{b_k(\widetilde M_i)}{d_i}\right\}, $$ where $\widetilde M_i \stackrel{d_i}\to M$ is the covering associated to $G_i$ (see also~\cite{BerGab} for an extension of this result to the case when $G_i$ is not assumed to be normal in $G$); \item if $M$ is $n$-dimensional, then $b_k^{(2)}(M)=0$ for $k>n$ and $$ \sum_{i=0}^n (-1)^ib_i^{(2)}(M)=\chi (M)\ $$ (see \cite[Theorem 1.35]{luck}). \end{enumerate} By property (3), a positive answer to Question~\ref{gromov2:conj} would lead to a positive answer to Question~\ref{gromov:conj}. \subsection{Integral foliated simplicial volume}\label{futurobis:sub} In order to study Question~\ref{gromov2:conj}, Gromov introduced in~\cite{Gro3} a new invariant, the \emph{integral foliated simplicial volume} $\| M\|^{\matZ,\mathcal{F}}$ of $M$ (see~\cite{Schmidt} for a precise definition and all the properties of $\| M\|^{\matZ,\mathcal{F}}$ that we are mentioning below). The integral foliated simplicial volume satisfies the inequalities $$ \| M\|\leqslant \| M\|^{\matZ,\mathcal{F}}\leqslant \| M\|^\matZ . $$ Moreover, the integral foliated simplicial volume is a characteristic number, so $$ \| M\|\leqslant \| M\|^{\matZ,\mathcal{F}}\leqslant \| M\|^\matZ_\infty .
$$ As stated by Gromov in~\cite{Gro3} and proved by Schmidt in~\cite{Schmidt}, the integral foliated simplicial volume can be used to bound from above the sum of the $L^2$-Betti numbers of a closed manifold. More precisely, if $M$ is a closed $n$-manifold then the following $L^2$-analogue of inequality~\eqref{aaa} holds~\cite{Loeh2}: $$ \sum_{i=0}^n b_i^{(2)} (M)\leqslant (n+1) \| M\|^{\matZ,\mathcal{F}}. $$ In particular, a closed manifold with vanishing integral foliated simplicial volume has vanishing $L^2$-Betti numbers (and hence vanishing Euler characteristic). However, the problem of whether the vanishing of the simplicial volume implies the vanishing of the integral foliated simplicial volume, at least in the case of aspherical manifolds, is still open. In fact, as far as we know, no example is known of a closed aspherical $n$-manifold $M$ such that $\| M\|\neq \| M\|^{\matZ,\mathcal{F}}$. Therefore, we ask here the following: \begin{quest} May our proof of Theorem~\ref{integral} be adapted to show that $$\| M\|^{\matZ,\mathcal{F}}> \| M\|$$ for every closed hyperbolic manifold $M$ of dimension greater than 3? \end{quest} \subsection{The three-dimensional case}\label{futuro2:sub} Let us now concentrate on our unsolved \begin{quest}\label{ourquestion} Does the equality $$ \| M\|=\sigma_\infty (M) $$ hold for every closed hyperbolic $3$-manifold $M$? \end{quest} In their recent proof of the Ehrenpreis conjecture~\cite{KM}, Kahn and Markovic showed that every closed orientable hyperbolic surface $S$ has a finite covering which decomposes into pairs of pants whose boundary curves have length arbitrarily close to an arbitrarily big constant $R>0$. Question~\ref{ourquestion} is equivalent to some sort of $3$-dimensional version of Kahn and Markovic's result.
Namely, the discussion carried out in the previous sections shows that $\| M\|=\sigma_\infty (M)$ if and only if the following condition holds: for every $\varepsilon>0$ and $R>0$ there exists a finite covering $\widetilde{M}$ with a triangulation $\widetilde{\mathcal{T}}$ such that the shape of at least $(1-\varepsilon) |\widetilde{\mathcal{T}}|$ simplices of $\widetilde{\mathcal{T}}$ is $\varepsilon$-close to the shape of a regular positive simplex with edge-length bigger than $R$. Let us briefly describe some possible strategies to approach Question~\ref{ourquestion}. \subsection{How to answer Question~\ref{ourquestion} in the positive} Corollary \ref{figure-eight:cor} says that it is possible to make the ratio $\frac{\sigma_\infty(M)}{\|M\|}$ arbitrarily close to 1 by Dehn filling the figure-eight knot complement along arbitrarily long slopes (the \emph{length} of a slope is measured with respect to a fixed toric horocusp section). This fact can be easily generalized to any manifold $M$ covering the figure-eight knot complement: \begin{prop} Let $M_i$ be a sequence of manifolds covering the figure-eight knot complement. For each $i$, let $\alpha_i = (\alpha_i^1,\ldots, \alpha_i^{k_i})$ be a set of slopes on the $k_i$ boundary components of $M_i$, and let $\ell_i$ be their minimal length with respect to some horocusp section. Let $N_i$ be the closed manifold obtained by filling $M_i$ along $\alpha_i$. If $\ell_i \to \infty$ then $N_i$ is hyperbolic for all sufficiently big $i$ and we have $$\frac{\sigma_\infty(N_i)}{\|N_i\|} \to 1.$$ \end{prop} \begin{proof} Since $\ell_i$ tends to infinity, the main result of \cite{HK} ensures that $N_i$ is hyperbolic for all sufficiently big $i$ (in fact, geometrization and Agol's and Lackenby's results \cite{Agol, Lackenby} imply that $N_i$ is hyperbolic provided that $\ell_i>6$).
The estimates on volume change under Dehn filling of Neumann and Zagier \cite{NZ} or Futer, Kalfagianni, and Purcell \cite[Theorem 1.1]{FKP} show that $\frac{{\rm vol}(N_i)}{{\rm vol} (M_i)}\to 1$. Let $M_i$ cover the figure-eight knot complement with degree $d_i$. Then $M_i$ is triangulated with $2d_i$ regular ideal tetrahedra and ${\rm vol}(M_i) = 2d_iv_3$, so $\|M_i\| = \frac{{\rm vol}(M_i)}{v_3} = 2d_i$. Therefore $\frac{\|N_i\|}{2d_i}\to 1$. Since $M_i$ is triangulated with $2d_i$ ideal tetrahedra we have $c(M_i)\leqslant 2d_i$ (we actually have an equality, but this is not important here). Proposition \ref{filling:prop} implies that $c_\infty(N_i)\leqslant c(M_i)\leqslant 2d_i$. Since $\|N_i\|\leqslant c_\infty(N_i)$ we get $\frac{c_\infty(N_i)}{\|N_i\|} \to 1$. Finally, we have $c_\infty(N_i) = \sigma_\infty(N_i)$ because $N_i$ is irreducible. \end{proof} This result leads naturally to the following definition. Let $\ensuremath {\mathcal{H}}_l$ be the set of all hyperbolic manifolds that may be obtained from some covering of the figure-eight knot complement by Dehn-filling some slopes having length bigger than $l$ (with respect to some horocusp section). \begin{quest}\label{H:quest} Let $M$ be a closed hyperbolic manifold. Is it true that for every $l$ there is a finite-degree covering $\widetilde M$ of $M$ lying in $\ensuremath {\mathcal{H}}_l$? \end{quest} In other words, does $M$ virtually lie in all sets $\ensuremath {\mathcal{H}}_l$? A positive answer to this question would prove that $\|M\| = \sigma_\infty(M)$. Note that every closed orientable $3$-manifold is a Dehn filling of some cover of the figure-eight knot complement, because the figure-eight knot is universal \cite{HLM}, and hence every closed hyperbolic $3$-manifold lies in some $\ensuremath {\mathcal{H}}_l$. \begin{rem} Ehrenpreis asked \cite{Ehre} whether any two closed Riemann surfaces with $\chi <0$ have finite coverings with arbitrarily small distance, with respect to some natural metric on the moduli spaces of Riemann surfaces.
This question (which was recently answered in the affirmative \cite{KM}) can be generalized to categories of manifolds of any dimension and of any kind, provided that they are equipped with a distance function: we call such a question an \emph{Ehrenpreis problem}. The filtration $\ensuremath {\mathcal{H}}_l$ easily defines a distance $d$ which translates Question \ref{H:quest} into an Ehrenpreis problem (the distance of two distinct manifolds $M$ and $N$ is the smallest $\frac 1l$ such that $M,N\in \ensuremath {\mathcal{H}}_l$). \end{rem} We may specialize our question to the following. \begin{quest} Let $M$ be a closed hyperbolic manifold. Is it true that for every $l$ there is a finite-degree covering $\widetilde M$ of $M$ and a branched covering $\widetilde M \to S^3$ branched over the figure-eight knot with ramification indices bigger than $l$? \end{quest} In other words, does $M$ virtually cover $S^3$ with branching locus the figure-eight knot, with arbitrarily large ramification indices? A large ramification index gives a long filling slope on the covering of the figure-eight knot complement, hence a positive answer to this question would also imply a positive answer to Question \ref{H:quest} and thus would prove the equality $\|M\|=\sigma_\infty (M)$. \subsection{How to answer Question~\ref{ourquestion} in the negative} If $M$ is a closed hyperbolic $n$-manifold and $z_i\in Z_n(M,\matR)$ is a sequence of representatives of the fundamental class of $M$, we say that $z_i$ is \emph{minimizing} if the $L^1$-norm of $z_i$ approaches $\| M\|$ as $i$ tends to infinity. In order to get a negative answer to Question~\ref{ourquestion}, one could profit from Jungreis' characterization of minimizing sequences of fundamental cycles for $M$~\cite{Jungreis}. Every cycle $z_i$ in such a sequence lifts to a locally finite cycle $\widetilde{z}_i$ in $\matH^n$.
After straightening, $\widetilde{z}_i$ is a locally finite sum of straight simplices in $\ensuremath {\mathcal{S}}_n(\matH^n)$. Jungreis considers a suitable space $\mathcal{M} (\ensuremath {\mathcal{S}}_n(\ensuremath{\overline{\matH^n}}))$ of measures on the set of geodesic simplices with vertices in $\ensuremath{\overline{\matH^n}}$. Every $\widetilde{z}_i$ may be thought of as a locally finite linear combination of atomic measures concentrated on the lifts of the simplices of $z_i$. Therefore, $\widetilde{z}_i$ may be identified with an element in $\mathcal{M} (\ensuremath {\mathcal{S}}_n(\ensuremath{\overline{\matH^n}}))$, and Jungreis' result implies that the sequence $\widetilde{z}_i$ converges in $\mathcal{M} (\ensuremath {\mathcal{S}}_n(\ensuremath{\overline{\matH^n}}))$ to a measure $\mu$ that is concentrated on the subset of regular ideal simplices, and is invariant with respect to the action of the group $G$ of orientation-preserving isometries of $\matH^n$. Roughly speaking, the $G$-invariance of $\mu$ implies that, if $i$ is big, then the simplices of $z_i$ must be almost homogeneously distributed in $M$. Of course, such a behaviour of $z_i$ is in sharp contrast with the possibility that $z_i$ is represented by a triangulation. In order to give a negative answer to Question~\ref{ourquestion}, one could prove that Jungreis' result is not compatible with the fact that $z_i$ is represented by a \emph{virtual} triangulation, \emph{i.e.}~it is obtained by suitably rescaling the push-forward of a triangulation of a finite covering.
https://arxiv.org/abs/1201.0660
Stable complexity and simplicial volume of manifolds
Let the complexity of a closed manifold M be the minimal number of simplices in a triangulation of M. Such a quantity is clearly submultiplicative with respect to finite coverings, and by taking the infimum over all finite coverings of M, normalized by the covering degree, we can promote it to a multiplicative invariant, a characteristic number already considered by Milnor and Thurston, which we call the "stable complexity" of M. We study here the relation between the stable complexity of M and Gromov's simplicial volume ||M||. It is immediate to show that ||M|| is at most the stable complexity of M, and it is natural to ask whether the two quantities coincide on aspherical manifolds with residually finite fundamental group. We show that this is not always the case: there is a constant C_n < 1 such that ||M|| is smaller than C_n times the stable complexity for any hyperbolic manifold M of dimension at least 4. The question in dimension 3 is still open in general. We prove that the stable complexity equals ||M|| for any aspherical irreducible 3-manifold M whose JSJ decomposition consists of Seifert pieces and/or hyperbolic pieces commensurable with the figure-eight knot complement. The equality holds for all closed hyperbolic 3-manifolds if a particular three-dimensional version of the Ehrenpreis conjecture is true.
https://arxiv.org/abs/2010.01655
Completeness of Positive Linear Recurrence Sequences
A sequence of positive integers is complete if every positive integer is a sum of distinct terms. A positive linear recurrence sequence (PLRS) is a sequence defined by a homogeneous linear recurrence relation with nonnegative coefficients of the form $H_{n+1} = c_1 H_n + \cdots + c_L H_{n-L+1}$ and a particular set of initial conditions. We seek to classify various PLRS's by completeness. With results on how completeness is affected by modifying the recurrence coefficients of a PLRS, we completely characterize the completeness of several families of PLRS's, and conjecture criteria for more general families. Our primary method is applying Brown's criterion, which says that an increasing sequence $\{H_n\}_{n = 1}^{\infty}$ is complete if and only if $H_1 = 1$ and $H_{n + 1} \leq 1 + \sum_{i = 1}^n H_i$. Finally, we adapt previous analytic work on PLRS's to find a more efficient way to check completeness. Specifically, the characteristic polynomial of any PLRS has exactly one positive root; by bounding the size of this root, the majority of sequences may be classified as complete or incomplete. Additionally, we show there exists an indeterminate region where the principal root does not reveal any information on completeness. We conjecture precise bounds for this region.
\section{Introduction} The Fibonacci numbers are one of the most studied integer sequences. One of their many interesting properties is that they can be used to construct a unique decomposition for any positive integer. Zeckendorf proved that every positive integer can be written uniquely as a sum of non-consecutive elements of the Fibonacci sequence, when indexed with the initial conditions $f_1=1,\ f_2=2$ and the recurrence $f_{n+1}=f_n+f_{n-1}$. Note that this is just a shift of the indexing by one from the common initial conditions $F_0=0,\ F_1=1$ \seqnum{A000045}. For an arbitrary positive integer, this unique decomposition into Fibonacci numbers is called its \emph{Zeckendorf decomposition} \cite{Ze}. The result on the existence and uniqueness of such decompositions has been generalized to a much larger class of linear recurrence relations; the following definitions are from Miller and Wang \cite{MW}. \begin{definition}\label{defn:goodrecurrencereldef} We say a sequence $\left(H_n\right)_{n=1}^\infty$ of positive integers is a \emph{Positive Linear Recurrence Sequence (PLRS)} if the following properties hold. \begin{enumerate} \item \emph{Recurrence relation:} There are non-negative integers $L, c_1, \dots, c_L$\label{c_i} such that \begin{equation} H_{n+1} \ = \ c_1 H_n + \cdots + c_L H_{n+1-L},\end{equation} with $L, c_1$ and $c_L$ positive. \item \emph{Initial conditions:} $H_1 = 1$, and for $1 \leq n < L$ we have \begin{equation} H_{n+1} \ =\ c_1 H_n + c_2 H_{n-1} + \cdots + c_n H_{1}+1.\end{equation} \end{enumerate} \end{definition} \begin{definition}[Legal decompositions] We call a decomposition $\sum_{i=1}^{m} {a_i H_{m+1-i}}$\label{a_i} of a positive integer $N$ (and the sequence $\left(a_i\right)_{i=1}^{m}$) \emph{legal} \label{legal} if $a_1>0$, the other $a_i \geq 0$, and one of the following two conditions holds. \begin{enumerate} \item We have $m<L$ and $a_i=c_i$ for $1\leq i\leq m$.
\item There exists $s\in\{1,\dots, L\}$ such that \begin{equation} a_1\ = \ c_1,\ a_2\ = \ c_2,\ \cdots,\ a_{s-1}\ = \ c_{s-1}\ {\rm{and}}\ a_s<c_s, \end{equation} $a_{s+1}, \dots, a_{s+\ell} \ = \ 0$ for some $\ell \geq 0$, and $\left(b_i\right)_{i=1}^{m-s-\ell}$ (with $b_i = a_{s+\ell+i}$) is legal or empty. \end{enumerate} \end{definition} The following theorem is due to Grabner and Tichy \cite{GT}, and stated in this form in Miller and Wang \cite{MW}. \begin{theorem}[Generalized Zeckendorf's Theorem for PLRS]\label{thm:genZeckendorf} Let $\left(H_n\right)_{n=1}^\infty$ be a \emph{positive linear recurrence sequence}. Then there is a unique legal decomposition for each positive integer $N$. \end{theorem} Next, we introduce \emph{completeness}, as defined by Hoggatt and King \cite{HK}. \begin{definition} An arbitrary sequence of positive integers $(a_i)_{i=1}^\infty$ is \emph{complete} if and only if every positive integer $n$ can be represented in the form $n=\sum_{i=1}^\infty \varepsilon_i a_i$, where $\varepsilon_i \in \{0,1\}$. A sequence that fails to be complete is \emph{incomplete}. \end{definition} In other words, a sequence of positive integers is complete if and only if each positive integer can be written as a sum of distinct terms of the sequence. \begin{example} The Fibonacci sequence is complete. This follows directly from Zeckendorf's theorem, which is a stronger statement, as it states that every positive integer may be written as the sum of \emph{non-consecutive} Fibonacci numbers. Completeness does not require that the decompositions use non-consecutive terms. Note that unlike Zeckendorf decompositions, complete decompositions are not necessarily unique. In the case of the Fibonacci sequence, while the Zeckendorf decomposition of $10$ as $10=f_5+f_2=8+2$ is unique, we may find multiple complete decompositions, as with $10=2+8=2+3+5$.
\end{example} After seeing this example, it is natural to ask if Theorem~\ref{thm:genZeckendorf} implies that all PLRS's are complete. Previous work in numeration systems by Gewurz and Merola \cite{GM} has shown that specific classes of recurrences as defined by Fraenkel \cite{Fr} are complete under their greedy expression. However, we cannot generalize this result to all PLRS's. For legal decompositions, the decomposition rule can permit some sequence terms to be used multiple times. This is not allowed for completeness decompositions, where each term of the sequence can be used at most once. \begin{example} The PLRS $H_{n+1} = H_n + 3H_{n-1}$ has terms $\left(1, 2, 5, 11, \ldots\right)$ \seqnum{A006138}. The unique \emph{legal} decomposition for $9$ is $1\cdot 5 + 2\cdot 2$, where the term $2$ is used twice. However, no \emph{complete} decomposition for $9$ exists. Adding all terms of the sequence less than $9$ gives $1 + 2 + 5 = 8$, and including $11$ or any subsequent term surpasses $9$. \end{example} We also make use of the following criterion for completeness of a sequence, due to Brown \cite{Br}. \begin{theorem}[Brown's Criterion] If $a_n$ is a nondecreasing sequence, then $a_n$ is complete if and only if $a_1 = 1$ and for all $n \geq 1$, \begin{equation}\label{eqn:BrownsCrit} a_{n + 1} \leq 1 + \sum_{i = 1}^{n} a_i. \end{equation} \end{theorem} An immediate corollary is the following sufficient, though not necessary, condition for completeness, which we call the doubling criterion. The proof is left to the appendix, as Corollary~\ref{cor:doublingCritApx}. \begin{corollary}[Doubling Criterion]\label{cor:doublingCrit} If $a_n$ is a nondecreasing sequence such that $a_n \leq 2 a_{n - 1}$ for all $n \geq 2$, then $a_n$ is complete. \end{corollary} \begin{remark}\label{two} By considering the special case when $a_n = 2a_{n-1}$, this immediately implies that the doubling sequence itself $\left(1,2,4,8,\ldots\right)$ \seqnum{A000079} is complete.
\end{remark} In this paper, we characterize many types of PLRS's by whether they are complete or incomplete. \begin{notation} We use the notation $[c_1, \ldots, c_L]$ to represent the PLRS defined by the recurrence $H_{n+1} = c_1 H_n + \cdots + c_L H_{n+1-L}$ and initial conditions as given in Definition~\ref{defn:goodrecurrencereldef}. When the context is clear, we also use $[c_1, \ldots, c_L]$ to refer to the coefficients themselves. \end{notation} A simple case to consider is when all coefficients $c_i$ of the sequence $[c_1,\ldots,c_L]$ are positive. The following result, proved in Section~\ref{sec:modifying}, completely characterizes which of these sequences are complete. \begin{theorem}\label{basic} If $(H_n)$ is a PLRS generated by positive coefficients $[c_1, \ldots, c_L]$, then $(H_n)$ is complete if and only if the coefficients are $[\underbrace{1,\ldots,1}_L]$ or $[\underbrace{1,\ldots,1}_{L-1},2]$ for $L \geq 1$. \end{theorem} The situation becomes much more complicated when we consider all PLRS's, in particular those that have at least one $0$ as a coefficient. In order to make progress on determining completeness of these PLRS's, we develop several tools. The following three theorems allow certain modifications of the coefficients $[c_1, \ldots, c_L]$ of a PLRS that is known to be complete or incomplete, while preserving completeness or incompleteness. They are proved in Section~\ref{sec:modifying}. \begin{theorem}\label{lem:incompAddCoeff} Consider sequences $\left( G_{n} \right) = [c_1,\dots, c_{L}]$ and $\left( H_{n} \right)= [c_1,\dots, c_{L},c_{L+1}]$, where $c_{L+1}$ is any positive integer. If $\left( G_{n} \right)$ is incomplete, then $\left( H_{n} \right)$ is incomplete as well. \end{theorem} \begin{theorem}\label{decreaseLastCoe} Consider sequences $\left(G_n\right)=[c_1,\ldots, c_{L-1}, c_L]$ and $\left(H_n\right)=[c_1,\ldots, c_{L-1}, k_L]$, where $1 \leq k_L \leq c_L$.
If $\left(G_n\right)$ is complete, then $\left(H_n\right)$ is also complete. \end{theorem} \begin{theorem}\label{Adding M Theorem} Consider sequences $\left(G_n\right)=[c_1,\ldots, c_{L-1}, c_L]$ and $\left(H_n\right)=[c_1,\ldots, c_{L-1} + c_L]$. If $\left(G_n\right)$ is incomplete, then $\left(H_n\right)$ is also incomplete. \end{theorem} The next theorem is a result that classifies a family of PLRS's as complete or incomplete. It is proved in Section~\ref{sec:famiilies}. \begin{theorem}\label{thm:1onekzero} The sequence generated by $[1,\underbrace{0,\ldots,0}_k,N]$ is complete if and only if $1 \leq N \leq \left\lceil(k+2)(k+3)/{4}\right\rceil$, where $\lceil \cdot \rceil$ is the ceiling function. \end{theorem} The sequence of upper bounds on $N$ in Theorem~\ref{thm:1onekzero} is $(2,3,5,8,11,14,18,\ldots)$, as $k$ increases, which is a shift of \seqnum{A054925}. \iffalse \begin{theorem}\label{2onekzero} The sequence generated by $[1, 1, \underbrace{0, \dots, 0}_{k}, N]$ is complete if and only if $1 \leq N \leq \lfloor (f_{k + 6} - k - 5)/4 \rfloor$, where $f_n$ are the Fibonacci numbers with $f_1=1, f_2=2$ and $\lfloor \cdot \rfloor$ is the floor function. \end{theorem} \fi We have a partial extension of these theorems to when there are $g$ initial ones followed by $k$ zeroes in the collection of coefficients. \begin{theorem}\label{thm:gbon} Consider a PLRS generated by coefficients $[\underbrace{1, \dots, 1}_{g}, \underbrace{0,\ldots,0}_{k},N]$, with $g,k \geq 1$. \begin{enumerate}[itemsep=2ex, leftmargin=2em] \item For $g \geq k +\lceil \log _2 k\rceil$, the sequence is complete if and only if $1 \leq N \leq 2^{k+1}-1$. \item For $k \leq g \leq k +\lceil \log _2 k\rceil$, the sequence is complete if and only if $1 \leq N \leq 2^{k+1} - \lceil k/{2^{g-k}} \rceil$. \end{enumerate} \end{theorem} Finally, in Section~\ref{roots}, we give some results and conjectures on completeness based on the principal roots of a PLRS. 
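Before turning to these results, note that the notions introduced so far are easy to experiment with numerically. The sketch below (the helper names are ours, not from the paper) generates a PLRS from its coefficients following Definition~\ref{defn:goodrecurrencereldef} and checks Brown's criterion on a finite prefix.

```python
# Sketch (helper names are ours): generate the PLRS with coefficients
# [c_1, ..., c_L] and test Brown's criterion on a finite prefix.

def plrs(coeffs, n_terms):
    """Return the first n_terms of the PLRS with coefficients [c_1, ..., c_L]."""
    L, H = len(coeffs), [1]
    while len(H) < n_terms:
        n = len(H)  # we are computing H_{n+1}
        if n < L:   # initial conditions: H_{n+1} = c_1 H_n + ... + c_n H_1 + 1
            H.append(sum(coeffs[i] * H[n - 1 - i] for i in range(n)) + 1)
        else:       # recurrence: H_{n+1} = c_1 H_n + ... + c_L H_{n+1-L}
            H.append(sum(coeffs[i] * H[n - 1 - i] for i in range(L)))
    return H

def satisfies_browns_criterion(H):
    """Check H_1 = 1 and H_{n+1} <= 1 + H_1 + ... + H_n on a finite prefix.
    A violation proves incompleteness; passing on a prefix is only evidence."""
    total = 0
    for h in H:
        if h > 1 + total:
            return False
        total += h
    return True
```

For instance, `plrs([1, 1], 5)` returns the shifted Fibonacci numbers $(1, 2, 3, 5, 8)$, while `plrs([1, 3], 30)` gives $(1, 2, 5, 11, \ldots)$, whose prefix already violates Brown's criterion at $H_3 = 5 > 1 + 1 + 2$.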
We determine some criteria for completeness based on the size of the principal root and find that there is a certain indeterminate region where the principal root does not reveal any information. \section{Modifying sequences}\label{sec:modifying} A basic question to ask is how far we can tweak the coefficients used to generate a sequence, yet preserve its completeness. The modifying process turns out to be well-behaved and heavily dependent on the location of the coefficients that are changed. Before we start looking into implementing any changes to our sequences, we first need to understand the maximal complete sequence. \subsection{Maximal complete sequence} We introduce the maximal complete sequence, which plays an important role. First, we look at all complete sequences with only positive coefficients, and prove Theorem~\ref{basic}, which states that any such sequence can only have the coefficients $[1,\ldots, 1]$ or $[1,\ldots,1,2]$. \begin{proof}[Proof of Theorem~\ref{basic}.] Assume that $\left(H_n\right)$ is complete. By the definition of a PLRS and by Brown's criterion, we have \begin{equation} c_1 H_{L - 1} + c_2 H_{L - 2} + \cdots + c_{L - 1} H_{1} + 1 = H_L \leq 1 + H_1 + H_2 + \cdots + H_{L - 1}. \end{equation} Since $c_i \geq 1$ for $1 \leq i \leq L$, this implies that $c_i = 1$ for $1 \leq i < L$. By the definition of a PLRS, \begin{equation} H_{L+1} = c_1 H_L + c_2 H_{L - 1} + \cdots + c_L H_1 = H_L + H_{L - 1} + \cdots + H_2 + c_L H_1. \end{equation} Combining this with Brown's criterion gives \begin{align} H_{L+1} = H_L + H_{L - 1} + \cdots + c_L H_1 &\leq 1 + H_1 + H_2 + \cdots + H_{L} \nnend c_L H_1 &\leq 1 + H_1 = 2. \end{align} Hence $c_L\leq 2$, which completes the forward direction of the proof. We know that if the coefficients are just $[2]$, then the sequence is complete by Remark~\ref{two}. So, now assume that $c_1 = \cdots = c_{L - 1} = 1$ and $1 \leq c_L \leq 2$.
We argue by strong induction on $n$ that $H_n$ satisfies Brown's criterion. We can show this explicitly for $1 \leq n < L$. First, if $n = 1$, then $H_n = 1$, as desired. Next, if $1 \leq n < L$, then \begin{equation} H_{n + 1} = c_1 H_n + \cdots + c_n H_1 + 1 = H_n + \cdots + H_1 + 1, \end{equation} so these terms satisfy Brown's criterion. Now assume that for some $n \geq L$, for all $n' < n$, \begin{equation} H_{n' + 1} \leq H_{n'} + \cdots + H_1 + 1. \end{equation} It follows that \begin{align} H_{n + 1} &= H_{n} + \cdots + H_{n + 2 - L} + c_L H_{n + 1 - L}\nnend &\leq H_{n} + \cdots + H_{n + 2 - L} + 2H_{n + 1 - L}\nnend &\leq H_{n} + \cdots + H_{n + 2 - L} + H_{n + 1 - L} + (H_{n - L} + \cdots + H_1 + 1),\label{eqn:IHnonZeroPLRS} \end{align} where the inductive hypothesis was applied to $H_{n + 1 - L}$ to obtain (\ref{eqn:IHnonZeroPLRS}). This completes the induction. \end{proof} Now that we have found some complete sequences, it turns out that the sequence generated by the coefficient $[2]$, i.e., $\left(2^{n - 1}\right)$, is the maximal complete sequence. \begin{lemma}\label{clm:largestCompleteGaps} The complete sequence with largest span in summands is $\left(2^{n - 1}\right)$. \end{lemma} \begin{proof} Suppose there exists a complete sequence $\left(H_n\right)$ with the largest span in summands. As a complete sequence must satisfy Brown's criterion, the extremal case is attained by taking $H_{n + 1} = 1 + \sum^{n}_{i=1} {H_i}$. Hence, \begin{align} H_{n + 1} = 1 + \sum_1^{n} H_{i} &= 1 + \sum_1^{n - 1} H_{i} + H_{n} = 2H_{n}. \end{align} By the initial conditions for a PLRS, $H_1=1$ and $H_2=2$. Thus, $H_n = 2 H_{n-1}=2^{n-1}$. \end{proof} \begin{remark} Thus $\left(H_k\right) = \left(2^{k - 1}\right)$ is an inclusive upper bound for any complete sequence. \end{remark} As it turns out, this sequence can be generated by multiple collections of coefficients.
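This can also be checked numerically. The sketch below (the helper name is ours, not from the paper) confirms that $[2]$ and each $[\underbrace{1,\ldots,1}_{L-1},2]$ generate the same sequence $\left(2^{n-1}\right)$.

```python
# Sketch (our own helper, not from the paper): confirm numerically that
# [2] and every [1, ..., 1, 2] generate the maximal complete sequence 2^(n-1).

def plrs(coeffs, n_terms):
    """First n_terms of the PLRS with coefficients [c_1, ..., c_L]."""
    L, H = len(coeffs), [1]
    while len(H) < n_terms:
        n = len(H)
        if n < L:   # initial conditions carry the trailing "+1"
            H.append(sum(coeffs[i] * H[n - 1 - i] for i in range(n)) + 1)
        else:       # the recurrence itself
            H.append(sum(coeffs[i] * H[n - 1 - i] for i in range(L)))
    return H

powers_of_two = [2 ** k for k in range(20)]
assert plrs([2], 20) == powers_of_two
for L in range(2, 8):
    assert plrs([1] * (L - 1) + [2], 20) == powers_of_two
```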
\begin{corollary} A PLRS with coefficients $[\underbrace{1,\ldots,1}_{L-1},2]$ generates the sequence $H_n= 2^{n-1}$. \end{corollary} \begin{proof} Consider the sequence $\left(H_n\right)$ generated by $[\underbrace{1,\ldots,1}_{L-1},2]$. We proceed by strong induction on $n$. Note $H_1=1=2^{1-1}$ by the definition of the PLRS. Now, suppose $H_k=2^{k-1}$ for $k\in\{1,\dots,n\}$. For $n<L$, note \begin{align} H_{n+1}&=c_1H_n+c_2H_{n-1}+\dots+c_n H_1+1\nnend &=H_n+H_{n-1}+\dots+H_1+1\nnend &=2^{n-1}+2^{n-2}+\dots+1+1=2^n. \end{align} Hence, the claim holds for all $n<L$. Now, for $n\geq L$, note \begin{align} H_{n+1}&=c_1H_n+c_2H_{n-1}+\dots+c_L H_{n+1-L}\nnend &=H_n+H_{n-1}+\dots+2H_{n+1-L}\nnend &=2^{n-1}+2^{n-2}+\dots+2^{n-L+1}+2\cdot 2^{n-L}=2^n. \end{align} Thus, by induction, the claim holds for all $n\in\N$ and every $L\in\N$. \end{proof} \subsection{Modifications of sequences with arbitrary coefficients} Modifying coefficients in order to preserve completeness proves to be a balancing act. Sometimes increasing a coefficient causes an incomplete sequence to become complete, while other times, increasing a coefficient causes a complete sequence to become incomplete. For example, $[1,0,0,0,0,0,15]$ is incomplete; increasing the second coefficient to $1$, i.e., $[1,1,0,0,0,0,15]$, gives a complete sequence. Further increasing it to $2$, i.e., $[1,2,0,0,0,0,15]$, is again incomplete. To study how such modifications preserve completeness or incompleteness, we add a new definition to our toolbox. \begin{definition} For a sequence $\left(H_n\right)$, we define its \emph{$n$\textsuperscript{th} Brown's gap} \begin{equation} B_{H, n} \coloneqq 1 + \sum_{i=1}^{n-1}H_i - H_n. \end{equation} \end{definition} Thus, from Brown's criterion, $\left(H_n\right)$ is complete if and only if $B_{H, n} \geq 0$ for all $n \in \N$. Our next question is: what happens if we append one more coefficient to $[c_1,\ldots,c_L]$?
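The Brown's gaps just defined are cheap to compute; in the sketch below (the helper name is ours, not from the paper), completeness of a prefix reduces to a nonnegativity check of the gaps.

```python
# Sketch (our own helper): the n-th Brown's gap is
# B_{H,n} = 1 + (H_1 + ... + H_{n-1}) - H_n,
# and Brown's criterion says completeness <=> all gaps are nonnegative.

def browns_gaps(H):
    """Return [B_{H,1}, ..., B_{H,len(H)}] for a prefix H of a sequence."""
    gaps, total = [], 0
    for h in H:
        gaps.append(1 + total - h)
        total += h
    return gaps

# Fibonacci-type prefix: all gaps nonnegative, consistent with completeness.
assert browns_gaps([1, 2, 3, 5, 8]) == [0, 0, 1, 2, 4]
# (1, 2, 5, 11, ...) from [1, 3]: the third gap is negative, so incomplete.
assert browns_gaps([1, 2, 5, 11])[2] == -1
```

Note that the doubling sequence $(2^{n-1})$ has every gap equal to $0$, matching its role as the extremal complete sequence.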
It turns out that if our sequence is already incomplete, appending any new coefficients will never make it complete. This is Theorem~\ref{lem:incompAddCoeff}, which we are now ready to prove using Brown's gaps. \begin{proof}[Proof of Theorem~\ref{lem:incompAddCoeff}.] By Brown's criterion, it is clear that $\left( G_{n} \right)$ is incomplete if and only if there exists $n$ such that $B_{G,n}<0$. We claim that for all $m$, $B_{H,m}\leq B_{G,m}$. If true, the theorem follows: if $B_{G,n}<0$ for some $n$, then $B_{H,n}\leq B_{G,n}<0$, implying $\left( H_{n} \right)$ is incomplete as well. We proceed by induction. Clearly, $B_{H,k}=B_{G,k}$ for $1\leq k \leq L$. Further, for $k=L+1$, we see \begin{equation} B_{G,L+1}-B_{H,L+1}= 1+\sum_{i=1}^{L}G_{i} - G_{L+1} - \left(1+\sum_{i=1}^{L}H_{i} - H_{L+1} \right) =H_{L+1}-G_{L+1}=1>0 .\end{equation} Now, let $m \geq 2$ be arbitrary, and suppose \begin{equation}\label{Bequ1} B_{H,\; L+m-1}\leq B_{G,\; L+m-1}. \end{equation} We wish to show that $B_{H,\; L+m}\leq B_{G,\; L+m}$. Note that \begin{equation}\label{1first} B_{H,\; L+m}-B_{H,\; L+m-1}= 2H_{L+m-1} - H_{L+m}. \end{equation} Similarly, \begin{equation}\label{2first} B_{G,\; L+m}-B_{G,\; L+m-1}= 2G_{L+m-1} - G_{L+m}. \end{equation} We use Lemma~\ref{SequencesDifferencesAppendix}, which states that for all $k \geq 2$, $H_{L+k}-G_{L+k}\geq 2(H_{L+k-1}-G_{L+k-1})$. Applying this to \eqref{1first} and \eqref{2first}, we see that $B_{H,\; L+m}-B_{H,\; L+m-1}\leq B_{G,\; L+m}-B_{G,\; L+m-1}$. Adding this to inequality \eqref{Bequ1}, we arrive at $B_{H,L+m}\leq B_{G,L+m}$, as desired. \end{proof} Now, we turn our attention to the behavior when we decrease the last coefficient of a complete sequence. In Theorem~\ref{decreaseLastCoe}, we find that decreasing the last coefficient of any complete sequence preserves completeness. \begin{proof}[Proof of Theorem~\ref{decreaseLastCoe}.]
Given that $\left(G_n\right)$ is complete, suppose for the sake of contradiction that $\left(H_n\right)$ is incomplete. Let $m$ be the least index such that \begin{equation} \label{eq:incomplete} H_m>1+\sum^{m-1}_{i=1}H_i. \end{equation} Simultaneously, as $\left(G_n\right)$ is complete, by Brown's criterion, \begin{equation} G_m\leq1+\sum^{m-1}_{i=1}G_i. \end{equation} First, suppose $m\leq L$. However, for all $n\leq L$, $G_n=H_n$, hence \begin{equation} H_m=G_m\leq 1+\sum^{m-1}_{i=1}G_i=1+\sum^{m-1}_{i=1}H_i, \end{equation} which contradicts (\ref{eq:incomplete}). Now, suppose $m>L$. Therefore, \begin{equation} G_m\leq 1+\sum^{m-1}_{i=1}G_i = 1+\sum^{L}_{i=1}G_i+\sum_{i=L+1}^{m-1}G_i= 1+\sum^{L}_{i=1}H_i+\sum_{i=L+1}^{m-1}G_i. \end{equation} This implies \begin{equation} 1+\sum^{L}_{i=1}H_i \geq G_m-\sum_{i=L+1}^{m-1}G_i. \end{equation} Now, we know that \begin{equation} H_m>1+\sum^{m-1}_{i=1}H_i=1+\sum^{L}_{i=1}H_i+\sum_{i=L+1}^{m-1}H_i\geq G_m-\sum_{i=L+1}^{m-1}G_i+\sum_{i=L+1}^{m-1}H_i, \end{equation} and thus \begin{align}\label{eq:contr} H_m-\sum_{i=L+1}^{m-1}H_i&> G_m-\sum_{i=L+1}^{m-1}G_i. \end{align} We claim that the opposite of (\ref{eq:contr}) is true, arguing by induction on $m$. For $m=L+1$, we obtain $G_{L+1}\geq H_{L+1}$, as $k_L\leq c_L$. Now, assume that \begin{equation} G_m-\sum_{i=L+1}^{m-1}G_i \geq H_m-\sum_{i=L+1}^{m-1}H_i \end{equation} is true for a positive integer $m$. Using the inductive hypothesis, it then follows that \begin{align} G_{m+1}-\sum_{i=L+1}^{m}G_i=G_{m+1}-\sum_{i=L+1}^{m-1}G_i-G_m&\geq G_{m+1}-2G_m+H_m-\sum_{i=L+1}^{m-1}H_i. \end{align} Finally, we use Lemma~\ref{Lemma3.37Appendix}, proved in Appendix~\ref{ProofsOfLemmas2}, which states that for all $k\in\N$, $H_{L+k+1}-2H_{L+k}\leq G_{L+k+1}-2G_{L+k}$. Note \begin{equation} G_{m+1}-2G_m+H_m-\sum_{i=L+1}^{m-1}H_i \geq H_{m+1}-2H_m+H_m-\sum_{i=L+1}^{m-1}H_i = H_{m+1}-\sum_{i=L+1}^{m}H_i, \end{equation} which contradicts (\ref{eq:contr}) for all $m>L$.
Therefore, for all $m\in\N$, we have contradicted \eqref{eq:incomplete}. Hence, $\left(H_n\right)$ must be complete as well. \end{proof} The result above is crucial in our characterization of \textit{families} of complete sequences in Section~\ref{sec:famiilies}; finding one complete sequence allows us to decrease the last coefficient to find more. Next, we prove two lemmas that together prove Theorem~\ref{Adding M Theorem}. \begin{lemma}\label{IncompExtension} Let $\left( G_{n} \right)$ be the sequence defined by $[c_1,\ldots, c_{L}]$, and let $\left( H_{n} \right)$ be the sequence defined by $[c_1,\ldots, c_{L-1}+1,\; c_{L}-1]$. If $\left( G_{n} \right)$ is incomplete, then $\left( H_{n} \right)$ must be incomplete as well. \end{lemma} \begin{proof} We claim that for all $m$, $B_{H,m}\leq B_{G,m}$; the lemma then follows by reasoning similar to that in the proof of Theorem~\ref{lem:incompAddCoeff}. We proceed by induction. Clearly, $B_{H,k}=B_{G,k}$ for $1\leq k \leq L-1$. Further, for $k=L$, we see \begin{equation} B_{G,L}-B_{H,L}=1+\sum_{i=1}^{L-1}G_{i} - G_{L} - \left(1+\sum_{i=1}^{L-1}H_{i} - H_{L} \right) = H_{L}-G_{L}=1>0. \end{equation} Now, let $m \geq 0$ be arbitrary, and suppose \begin{equation}\label{Bequ2} B_{H,\; L+m}\leq B_{G,\; L+m}. \end{equation} We wish to show that $B_{H,\; L+m+1}\leq B_{G,\; L+m+1}$. Note that \begin{equation}\label{1second} B_{H,\; L+m+1}-B_{H,\; L+m}=2H_{L+m}-H_{L+m+1}, \end{equation} and similarly, \begin{equation}\label{2second} B_{G,\; L+m+1}-B_{G,\; L+m}=2G_{L+m}-G_{L+m+1}. \end{equation} We use Lemma~\ref{Add 1 Lemma Appendix}, which says that for all $k \geq 0$, $H_{L+k+1}-G_{L+k+1}\geq 2\left( H_{L+k}-G_{L+k} \right)$. Applying it to \eqref{1second} and \eqref{2second}, we see $B_{H,\; L+m+1}-B_{H,\; L+m}\leq B_{G,\; L+m+1}-B_{G,\; L+m}$. Adding this to inequality \eqref{Bequ2}, we conclude that $B_{H,L+m+1}\leq B_{G,L+m+1}$, as desired. \end{proof} How many times can Lemma~\ref{IncompExtension} be applied?
Enough times to get all the way up to $[c_1,\ldots,c_{L-1}+c_L-1,1]$, but no further, as the last coefficient must remain positive to stay a PLRS. \begin{lemma}\label{Last Case Adding M Theorem} Let $\left( G_{n} \right)$ be the sequence defined by $[c_1,\ldots , c_{L-1},1]$, and let $\left( H_{n} \right)$ be the sequence defined by $[c_1,\ldots , c_{L-1}+1]$. If $\left( G_{n} \right)$ is incomplete, then $\left( H_{n} \right)$ must be incomplete as well. \end{lemma} \begin{remark} Despite the similarities, Lemma~\ref{Last Case Adding M Theorem} is not directly implied by Lemma~\ref{IncompExtension}; both are necessary for the proof of Theorem~\ref{Adding M Theorem}. Applying Lemma~\ref{IncompExtension} $(c_L-1)$ times proves that if $[c_1,\ldots , c_{L-1},c_L]$ is incomplete, then $[c_1,\ldots , c_{L-1}+c_L-1,1]$ is incomplete; at this point, we cannot apply the lemma further while maintaining a positive final coefficient to meet the definition of a PLRS. Hence the case of Lemma~\ref{Last Case Adding M Theorem} must be dealt with separately, in order to arrive at the full result of Theorem~\ref{Adding M Theorem}. \end{remark} \begin{proof} The proof is similar to that of Lemma~\ref{IncompExtension}. We aim to show that $B_{H,m}\leq B_{G,m}$ for all $m$. Clearly $B_{H,k}=B_{G,k}$ for $1\leq k \leq L$. Further, for $k=L+1$, we see \begin{equation} B_{G,L+1}-B_{H,L+1}=1+\sum_{i=1}^{L}G_{i}-G_{L+1}-\left( 1+\sum_{i=1}^{L}H_{i}-H_{L+1} \right) =H_{L+1}-G_{L+1}=c_1>0. \end{equation} Now, let $m\geq 0$ be arbitrary, and suppose \begin{equation}\label{last brown gap inequality} B_{H,L+m}\leq B_{G,L+m}. \end{equation} We wish to show that $B_{H,L+m+1}\leq B_{G,L+m+1}$. Note that \begin{equation}\label{ultimobrowngap} B_{H,L+m+1}-B_{H,L+m}=2H_{L+m}-H_{L+m+1}, \end{equation} and similarly \begin{equation}\label{ultimobrowngap 2} B_{G,L+m+1}-B_{G,L+m}=2G_{L+m}-G_{L+m+1}.
\end{equation} We use Lemma~\ref{Last Case Adding M Appendix}, which states that for all $k\geq 0$, $H_{L+k+1}-G_{L+k+1}\geq 2\left( H_{L+k}-G_{L+k} \right) $. Applying it to equations \eqref{ultimobrowngap} and \eqref{ultimobrowngap 2}, we see $B_{H,L+m+1}-B_{H,L+m}\leq B_{G,L+m+1}-B_{G,L+m}$. Adding this to inequality \eqref{last brown gap inequality}, we conclude that $B_{H,L+m+1}\leq B_{G,L+m+1}$, as desired. \end{proof} Using these lemmas, we can now prove Theorem~\ref{Adding M Theorem}. \begin{proof}[Proof of Theorem~\ref{Adding M Theorem}.] We apply Lemma~\ref{IncompExtension} $c_L-1$ times to conclude that if $[c_1,\ldots , c_{L-1},c_L]$ is incomplete, then $[c_1,\ldots , c_{L-1}+c_L-1,1]$ is incomplete. Finally, applying Lemma~\ref{Last Case Adding M Theorem}, we achieve the desired result. \end{proof} \section{Families of sequences}\label{sec:famiilies} Recall that Theorem~\ref{decreaseLastCoe} says that given a complete PLRS, decreasing the last coefficient preserves its completeness. This raises a natural question: given the first $L-1$ coefficients $c_1, c_2, \dots, c_{L-1}$, what is the maximal $N$ such that $[c_1, c_2, \dots, c_{L-1}, N]$ is complete? In this section we explore this question. \subsection{Using 1's and 0's as initial coefficients} We first prove Theorem~\ref{thm:1onekzero}, which concerns sequences with $1$'s and $0$'s as the first coefficients. This proof is followed by a conjecture on classifying sequences of a similar form, and then another conjecture on how complete sequences of these forms can be modified to obtain additional complete sequences, along with some progress toward proving it. \begin{proof}[Proof of Theorem~\ref{thm:1onekzero}.] First assume that $\left(H_n\right)$ is complete. By the definition of a PLRS, we can easily generate the first $k+2$ terms of the sequence: $H_i = i$ for all $1 \leq i \leq k+2$.
We then have for all $n > k+1$, \begin{equation}\label{eqn:termsThroughK+4} H_{n+1} = H_n + NH_{n-k-1}, \end{equation} which implies that \begin{equation}\label{eq:recrelation} H_{k+4} = H_{k+3} + NH_2 = H_{k+3} + 2N. \end{equation} By Brown's criterion, \begin{equation} H_{k+4} \leq H_{k+3} + H_{k+2} + \cdots + H_1 + 1. \end{equation} By \eqref{eq:recrelation}, \begin{equation} H_{k+3} + 2N \leq H_{k+3} + H_{k+2} + \cdots + H_1 + 1, \end{equation} and we obtain \begin{align} 2N &\leq H_{k+2} + H_{k+1} + \cdots + H_1 + 1 \nnend &= (k+2) + (k+1) + \cdots + 1 + 1 \nnend &= \frac{(k+2)(k+3)}{2} + 1, \end{align} and thus we find \begin{equation} N \leq \frac{(k+2)(k+3)}{4} + \frac{1}{2}. \label{takefloor} \end{equation} Since $N$ is an integer and $\left\lfloor (k + 2)(k + 3)/{4} + 1/2 \right\rfloor = \left\lceil (k + 2)(k + 3)/{4} \right\rceil$, we may take the floor of the right hand side of equation \eqref{takefloor}, and then $N \leq \left\lceil (k + 2)(k + 3)/{4} \right\rceil$. We now prove that if $N \leq \left\lceil (k + 2)(k + 3)/{4} \right\rceil$, then $\left(H_n\right)$ is complete. We first show that if $N = \left\lceil (k + 2)(k + 3)/{4} \right\rceil$, then $\left(H_n\right)$ is complete. Taking the recurrence relation $H_{n+1} = H_n + NH_{n-k-1}$, and applying Brown's criterion gives \begin{equation} H_{n+1} =H_n + NH_{n-k-1} \leq H_n +(N-2)H_{n-k-1} + H_{n-k-1} + H_{n-k-2} + \dots + H_1 +1. \end{equation} By Lemma~\ref{lem:sharp1onekzero}, we can expand $(N-2)H_{n-k-1}$ and find that \begin{equation} H_{n+1} \leq H_n + H_{n-1} +\dots + H_{n-k} + H_{n-k-1} + H_{n-k-2} + \dots + H_1 +1. \end{equation} Hence, by Brown's criterion, the sequence $\left(H_n\right)$ is complete. Lastly, by Theorem~\ref{decreaseLastCoe}, for all positive $N < \left\lceil (k + 2)(k + 3)/{4} \right\rceil$, the sequence is also complete. \end{proof} \iffalse \begin{proof}[Proof of Theorem~\ref{2onekzero}.] Suppose that $(H_n)$ is complete. 
Using the definition of a PLRS, the first $k + 3$ terms of the sequence can be generated in the same way: $H_i = f_{i + 1} - 1$ for all $1 \leq i \leq k + 3$, where $f_n$ is the $n$\textsuperscript{th} Fibonacci number. Proceeding in a manner similar to the proof of Theorem~\ref{thm:1onekzero}, we see that \begin{align} H_{k + 4} &= H_{k + 3} + H_{k + 2} + N H_1 = f_{k+5} + N - 2, \nnend H_{k+5} &= H_{k+4} + H_{k+3} + NH_2 = f_{k+6} + 3N - 3, \nnend H_{k+6} &= H_{k+5} + H_{k+4} + NH_3 = f_{k+7} + 8N - 5. \end{align} By applying Brown's criterion, \begin{align} H_{k + 6} &\leq H_{k + 5} + H_{k + 4} + \cdots + H_1 + 1 \nnend &= f_{k+6} + 3N - 3 + f_{k+5} + N - 2 + \sum_{i=1}^{k+3}H_i + 1 \nnend &= f_{k+7} + 4N - 5 + \sum_{i=1}^{k+3}(f_{i+1} - 1) + 1. \end{align} Next, \begin{equation} f_{k + 7} + 8N - 5 \leq f_{k+7} + 4N - 5 + \sum_{i=1}^{k+3}(f_{i+1} - 1) + f_1, \end{equation} which implies \begin{equation} 4N \leq \sum_{i=1}^{k+3}(f_{i+1} - 1) + f_1 = \sum_{i = 1}^{k+4}f_i + (k+3) = f_{k+6} + (k+5). \end{equation} Thus \begin{equation} N \leq \frac{f_{k + 6} - k-5}{4}, \end{equation} and since $N$ is an integer, \begin{equation} N \leq \left\lfloor\frac{f_{k + 6} - k-5}{4}\right\rfloor. \end{equation} Next, we show that if $N = \left\lfloor (f_{k + 6} - k-5)/{4}\right\rfloor$, then $(H_n)$ is complete. The initial conditions can be found easily, and for the later terms we have \begin{align} H_{n + 1} &= H_n + H_{n - 1} + NH_{n - k - 2}\nnend &\leq H_n + H_{n - 1} +(N - 2) H_{n - k - 2} + H_{n - k - 2} + H_{n - k - 3} + \cdots + H_1 + 1.\\ \intertext{Using Lemma~\ref{lma:lemsharptwo}, we expand $(N-2)H_{n-k-2}$ and obtain} &\leq H_n + H_{n - 1} + H_{n - 2} + \cdots + H_{n - k - 1} + H_{n - k - 2} + H_{n - k - 3} + \cdots + H_1 + 1. \end{align} Hence, by Brown's criterion, this sequence is complete. Lastly, by Theorem~\ref{decreaseLastCoe}, for all positive $N < \left\lfloor (f_{k + 6} - k-5)/{4}\right\rfloor$, the sequence is also complete. 
\end{proof} \fi We conjecture a bound on the last coefficient for a similar family of PLRS as follows. The necessity of the condition on $N$ can be proven in the same way as in the proof of Theorem~\ref{thm:1onekzero}. \begin{conjecture}\label{2onekzero} The sequence generated by $[1, 1, \underbrace{0, \dots, 0}_{k}, N]$ is complete if and only if $1 \leq N \leq \lfloor (f_{k + 6} - k - 5)/4 \rfloor$, where $f_n$ are the Fibonacci numbers with $f_1=1, f_2=2$ and $\lfloor \cdot \rfloor$ is the floor function. \end{conjecture} We want to find a more general result for $[\underbrace{1, \dots, 1}_{g}, \underbrace{0, \dots, 0}_{k}, N]$, as seen in Figure~\ref{fig:FamiliesOfOne}. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{nbonacci.eps} \caption{The maximal $N$ such that $[\protect\underbrace{1, \dots, 1}_{g}, \protect\underbrace{0, \dots, 0}_{k}, N]$ is complete, with $k$ and $g$ varying. Each color represents a fixed $k$.} \label{fig:FamiliesOfOne} \end{figure} Interestingly, we see that as we keep $k$ fixed and increase $g$, the bound increases, and then stays constant from some value of $g$ onward. This motivates the following conjecture. \begin{conjecture}\label{addFrontOnes} If $[\underbrace{1, \dots, 1}_{g}, \underbrace{0, \dots, 0}_{k}, N]$ is complete, then so is $[\underbrace{1, \dots, 1}_{g+1}, \underbrace{0, \dots, 0}_{k}, N]$. \end{conjecture} We have made some progress towards this conjecture; in fact, we show the precise bound for $N$ for the case where $g \geq k$ in Theorem~\ref{thm:gbon}. \begin{theorem}\label{weak2Lcrit} The PLRS $(H_n)$ generated by $[c_1, c_2, \dots, c_L]$ is complete if \begin{equation} \begin{cases} B_{H, n} \geq 0, & \text{if $n < L$,}\\ B_{H, n} > 0, & \text{if $L \leq n \leq 2L-1$.} \end{cases} \end{equation} \end{theorem} \begin{proof} Consider $L \geq 2$; we see that if $c_1 \geq 2$, then the sequence is automatically incomplete, so we need only consider $c_1 = 1$. 
Write $B_n \coloneqq B_{H, n}$; we show by induction on $n$ that $B_n > 0$ when $n \geq L$. Suppose $B_n > 0$ for $L \leq n \leq m$ (with $m \geq 2L-1$). Then \begin{align} B_{m+1} &= 1 + \sum_{i=1}^mH_i - H_{m+1}\nnend &= 1 + \sum_{i=1}^L H_i + \sum_{i=L+1}^m H_i - \left(H_m + \sum_{j=2}^L c_jH_{m+1-j}\right)\nnend &= 1 + \sum_{i=1}^L H_i + \sum_{i=L+1}^m \left(H_{i-1} + \sum_{j=2}^L c_jH_{i-j} \right) - \left(H_m + \sum_{j=2}^L c_jH_{m+1-j}\right)\nnend &= \left(1 + \sum_{i=1}^{m-1}H_i - H_m + H_L\right) + \sum_{j=2}^L c_j\left(\sum_{i=L+1}^m H_{i-j} - H_{m +1-j} \right)\nnend &= (B_m + H_L) + \sum_{j=2}^{L} c_j\left(1 + \sum_{i=j+1}^m H_{i-j} - H_{m +1-j} - 1 - \sum_{i = j+1}^L H_{i-j} \right) \nnend &= (B_m + H_L) + \sum_{j=2}^L c_j\left(B_{m+1-j} - 1 - \sum_{i = j+1}^L H_{i-j} \right) \nnend &= B_m + \sum_{j=2}^L c_j(B_{m+1-j} - 1) + H_L - \sum_{i = 3}^L\sum_{j=2}^{i-1} c_jH_{i-j}\nnend &= B_m + \sum_{j=2}^L c_j(B_{m+1-j} - 1) + H_L - \sum_{i=3}^L(H_i - H_{i-1} - 1)\nnend &= B_m + \sum_{j=2}^L c_j(B_{m+1-j} - 1) + (L-2) + H_L - \sum_{i=3}^L(H_i - H_{i-1})\nnend &= B_m + \sum_{j=2}^L c_j(B_{m+1-j} - 1) + L. \end{align} The last line is positive since $B_{m+1-j} - 1 \geq 0$ and $B_m, L > 0$. Our proof by induction is complete. \end{proof} \begin{lemma} \label{thm:sticky} The PLRS $(H_n)$ generated by $[\underbrace{1, \dots, 1}_{g}, \underbrace{0, \dots, 0}_{k}, 2^{k+1}]$ is incomplete if $g \geq k \geq 1$. \end{lemma} \begin{proof} Suppose this sequence is complete. Note that \begin{equation} H_{2g+2}=H_{2g+1}+\dots+H_{g+2}+ 2^{k+1}H_{g+1-k}. \end{equation} By applying Brown's criterion to $H_{2g+2}$, we see that \begin{equation} 2^{k+1}H_{g+1-k}\leq \sum_{i=1}^{g+1}H_i+1. \end{equation} Now, note that $k\geq 1$ and $g\geq k$, so $1\leq g+1-k\leq g$. Also, by the structure of the sequence, $H_i=2^{i-1}$ for $i\leq g+1$. Hence \begin{equation} 2^{g+1} = 2^{k+1}H_{g+1-k} = 2^{k+1}2^{g-k} \leq \sum_{i=1}^{g+1}2^{i-1}+1 = 2^{g+1}. 
\end{equation} Therefore the previous inequalities must in fact be equalities, and we obtain \begin{equation} \label{2g+2 eq} H_{2g+2}=\sum^{2g+1}_{i=1}H_i+1. \end{equation} It follows immediately from \eqref{2g+2 eq} that \begin{equation} \sum^{2g+2}_{i=1}H_i+1=2H_{2g+2}. \end{equation} Now, consider \begin{equation} H_{2g+3}=H_{2g+2}+H_{2g+1}+\dots+H_{g+3}+2^{k+1}H_{g+2-k}. \end{equation} Since $g+2-k\leq g+1$ as $k\geq 1$, one gets \begin{equation} 2^{k+1}H_{g+2-k}=2^{k+1}2^{g+1-k}=2^{g+2}. \end{equation} Moreover, $H_{g+2}=H_{g+1}+\dots+H_{2}+1=2^{g+1}-1$ and $2^{k+1}H_{g+1-k}=2^{g+1}$, so $2^{g+2}=H_{g+2}+2^{k+1}H_{g+1-k}+1$. Hence \begin{align} H_{2g+3}&=H_{2g+2}+H_{2g+1}+\dots+H_{g+3}+2^{g+2} \nnend &=H_{2g+2}+\left(H_{2g+1}+\dots+H_{g+3}+H_{g+2}+2^{k+1}H_{g+1-k}\right)+1 \nnend &=2H_{2g+2}+1>\sum^{2g+2}_{i=1}H_i+1. \end{align} So $H_{2g+3}$ causes Brown's criterion to fail, rendering the whole sequence incomplete. \end{proof} We now show the stabilizing behavior of the bound mentioned above. \begin{lemma}\label{thm:SharpgAndlog2k} If $g \geq k + \lceil \log_2k \rceil$, then $[\underbrace{1, \dots, 1}_{g}, \underbrace{0, \dots, 0}_{k}, 2^{k+1} - 1]$ is complete. \end{lemma} \begin{proof} Define $\left( f_n \right)=[\underbrace{1,\ldots , 1}_{g}]$, and $\left( H_n \right)=[\underbrace{1, \dots, 1}_{g}, \underbrace{0, \dots, 0}_{k}, 2^{k+1} - 1]$. We can calculate the terms of $\left( f_n \right)$ and $\left( H_n \right)$ up to $2g+1$. Namely, \begin{alignat}{2} H_{n} &= f_{n}=2^{n-1}, &&\qquad \text{if $1\leq n\leq g$;} \nnend H_{g+n} &= f_{g+n}+2^{n-1}, &&\qquad \text{if $1\leq n\leq k+1$;}\nnend H_{g+k+1+n} &= f_{g+k+1+n}+\left( 2^{k+1}-1 \right) \left( 2^{n}+2^{n-2}\left( n-1 \right) \right), &&\qquad \text{if $1\leq n\leq g-k$;}\nnend f_{g+n} &= 2^{g+n-1}-2^{n-2}\left( n+1 \right), &&\qquad \text{if $1 \leq n \leq g$.} \end{alignat} The third and fourth lines are verified in Lemmas~\ref{Line3Terms} and \ref{Line4Terms}, respectively. 
We show that the conditions in Theorem~\ref{weak2Lcrit} hold for $\left(H_n\right)$. We can verify directly that Brown's criterion holds for the first $(2g+1)$ terms of $\left( H_n \right)$; in fact, for $B_n \coloneqq B_{H, n}$, we get \begin{equation} \begin{cases} B_n \geq 0, & \text{if $1\leq n\leq g + k$;}\\ B_n > 0, & \text{if $g+k+1 \leq n \leq 2g+1$.} \end{cases} \end{equation} Thus, it remains to show that $B_n > 0$ for $2g+2\leq n \leq 2\left( g+k \right) -1$. \begin{enumerate}[label={Case \arabic*:}, leftmargin=*] \item $2g+2 \leq n \leq 2g+k+1$. Define $b(n) \coloneqq H_{n}-f_{n}$. Note that $b(n)\geq 0$, and by induction, $b(n) > 0$ for all $n \geq g+1$. For $n\geq g+k+2$, \begin{align} f_{n}+b(n) &=H_{n}\nnend &= H_{n-1}+H_{n-2}+\cdots +H_{n-g}+\left( 2^{k+1}-1 \right) H_{n-(g+k+1)} \nnend &= \sum_{i=1}^{g}f_{n-i}+\sum_{i=1}^{g}b\left( n-i \right) +\left( 2^{k+1}-1 \right) H_{n-(g+k+1)}. \end{align} Since $f_{n}=\sum_{i=1}^{g}f_{n-i}$, \begin{equation} b(n)=\sum_{i=1}^{g}b\left( n-i \right) +\left( 2^{k+1}-1 \right) H_{n-(g+k+1)}. \end{equation} Thus, for any $n\geq 2g+2$, \begin{align}\label{LastTerm} B_n &= 1+\sum_{i=1}^{n-1}H_{i}-H_{n} \nnend &= 1+\sum_{i=1}^{n-1}\left( f_{i}+b(i) \right) -\left( f_{n}+b(n) \right) \nnend &= \left( 1+\sum_{i=1}^{n-1}f_{i}-f_{n} \right) -\left( 2^{k+1}-1 \right) H_{n-(g+k+1)}+\sum_{i=g+1}^{n-\left( g+1 \right) }b(i)\nnend & > \left( 1+\sum_{i=1}^{n-1}f_{i}-f_{n} \right) -\left( 2^{k+1}-1 \right) H_{n-(g+k+1)}. \end{align} We are to show that the last term is nonnegative. 
As $n-\left( g+k+1 \right) \leq g$, \begin{align} & 1+\sum_{i=1}^{n-1}f_{i}-f_{n}-\left( 2^{k+1}-1 \right) H_{n-\left( g+k+1 \right) }\nnend &= 1+\sum_{i=1}^{n-\left( g+1 \right) }f_{i}-\left( 2^{k+1}-1 \right) H_{n-(g+k+1)}\nnend &= 1+\sum_{i=1}^{g}f_{i}+\sum_{i=1}^{n-\left( 2g+1 \right) }f_{g+i}-\left( 2^{k+1}-1 \right) \cdot 2^{n-\left( g+k+1 \right) -1}\nnend &= 1+\sum_{i=1}^{g}2^{i-1}+\sum_{i=1}^{n-\left( 2g+1 \right) }\left( 2^{g+i-1}-2^{i-2}\left( i+1 \right) \right) -2^{n-g-1}+2^{n-\left( g+k+1 \right) -1}\nnend &= 2^{n-\left( g+k+1 \right) -1}-\sum_{i=1}^{n-\left( 2g+1 \right) }2^{i-2}\left( i-1 \right) -\sum_{i=1}^{n-\left( 2g+1 \right) }2^{i-1}\nnend &= 2^{n-\left( g+k+1 \right) -1}-\left( 2^{n-\left( 2g+2 \right) }\left( n-\left( 2g+3 \right) \right) +1 \right) -\left( 2^{n-\left( 2g+2 \right) }-1 \right)\nnend &= 2^{n-\left( g+k+1 \right) -1}-2^{n-\left( 2g+2 \right) }\left( n-\left( 2g+2 \right) \right)\nnend &= 2^{n-\left( 2g+2 \right) }\left( 2^{g-k}-\left( n-\left( 2g+2 \right) \right) \right)\nnend & \geq 2^{n-\left( 2g+2 \right) }\left( 2^{g-k}-\left( k-1 \right) \right)\nnend & > 0. \end{align} Note that the last line comes from $g \geq k+\log_2k$, which implies $2^{g-k }\geq k > k-1$. \item $2g+k+2 \leq n \leq 2g+2k+1$. We show that $B_{n+1} \geq B_n$ for $2g+k+2 \leq n < 2g+2k+1$, and that $B_{2g+k+2} > 0$. \begin{align}\label{differenceBGap} B_{n+1} - B_n &= 2H_n - H_{n+1} \nnend &= 2H_n - \left(\sum_{i=n-g+1}^n H_i +(2^{k+1} -1)H_{n - (g+k)} \right) \nnend &= \left(H_n - \sum_{i=n-g+1}^n H_i \right) +(2^{k+1} -1)H_{n - (g+k)} \nnend &= H_{n-g} - (2^{k+1} -1)(H_{n - (g+k)} - H_{n-(g+k+1)}). \intertext{Replace $n$ by $2g+k+1+m$ with $1 \leq m \leq k$ to obtain} &= H_{(g+k+1)+m} - (2^{k+1}-1)(H_{g+m+1} - H_{g+m}) \nnend &= H_{(g+k+1)+m} - (2^{k+1}-1)(2^{g+m-1} - 2^{m-2}(m+1)). \end{align} For $1 \leq m \leq g-k$, we have an explicit formula for $H_{(g+k+1)+m}$, so we can substitute directly to show that \eqref{differenceBGap} is nonnegative. 
Thus, if $g-k \geq k$ (i.e., $g \geq 2k$), then this holds for all $1 \leq m \leq k$. If $g - k < k$ (i.e., $g < 2k$), then from Lemma~\ref{lem:Gap2}, \eqref{differenceBGap} is nonnegative. Thus, $B_{n+1} \geq B_n$ for all $2g+k+2 \leq n < 2g+2k+1$. It remains to show that $B_{2g+k+2} > 0$, which we can do by directly substituting the explicit formulas.\qedhere \end{enumerate} \end{proof} Combining these lemmas, we can prove the first part of Theorem~\ref{thm:gbon}. \begin{proof}[Proof of Theorem~\ref{thm:gbon}.1.] From Lemmas~\ref{thm:sticky} and~\ref{thm:SharpgAndlog2k}, the bound for $N$ is precisely $2^{k+1} - 1$ when $g \geq k + \lceil \log_2k \rceil$. \end{proof} Next, we consider when $k\leq g \leq k+\ceil{\log_2k}$, and prove the second part of Theorem~\ref{thm:gbon} using similar methods. \begin{proof}[Proof of Theorem~\ref{thm:gbon}.2.] First, we show that for $N>2^{k+1}-\ceil{k/{2^{g-k}}}$, the sequence $\left(H_n\right)$ is incomplete; here we suppose $k \geq 2$. Let us calculate the initial $L = g+k+1$ terms of the sequence. Note \begin{alignat}{2} H_n&=2^{n-1} &&\qquad\text{for all } 1\leq n \leq g+1 \nnend H_{g+n}&=2^{g+n-1}-2^{n-2}(n-1) &&\qquad\text{for all } 1\leq n \leq k+1. \end{alignat} Let $B_i \coloneqq B_{H,i}$. Then, we consider Brown's gap $B_{2g+k+2}$, \begin{align} B_{2g+k+2} &= \na*{1+\sum_{i = 1}^{2g+k+1}{H_i}} - H_{2g+k+2} \nnend &= \na*{1+\sum_{i = 1}^{2g+k+1}{H_i}} - \na*{\sum_{i = g+k+2}^{2g+k+1}{H_i}+NH_{g+1}} \nnend &=\na*{1+\sum_{i = 1}^{g+k+1}H_i} - NH_{g+1} \nnend &=1+\sum_{i = 1}^{g}H_i+\sum_{i = g+1}^{g+k+1}H_i - NH_{g+1}\nnend &=1+\sum_{i = 1}^{g}2^{i-1}+\sum_{i = 1}^{k+1}\na*{2^{g+i-1}-2^{i-2}\na*{i-1}} - 2^g N\nnend &=2^{g+k+1}-\sum_{i = 1}^{k}2^{i-1}i - 2^g N\nnend &=2^{g+k+1}-2^k(k-1)-1 - 2^gN . 
\intertext{Now, $N>2^{k+1}-\ceil*{k/{2^{g-k}}}$ by assumption and $N$ is an integer, so $N\geq 2^{k+1}-\ceil*{k/{2^{g-k}}}+1$, hence} &\leq 2^{g+k+1}-2^k(k-1)-1 - 2^{g}\na*{2^{k+1}-\ceil[\Big]{\frac{k}{2^{g-k}}}+1}\nnend &=2^{g}\ceil[\Big]{\frac{k}{2^{g-k}}}-2^k(k-1)-2^g-1\nnend &\leq 2^{k}\na*{k+2^{g-k}-1}-2^k(k-1)-2^g-1\nnend &=-1, \end{align} where the second-to-last step uses $\ceil*{k/{2^{g-k}}}\leq \na*{k+2^{g-k}-1}/{2^{g-k}}$. Thus $B_{2g+k+2}<0$, so $\left(H_n\right)$ fails Brown's criterion at the $(2g+k+2)$nd term, rendering the sequence incomplete. Now we can show that for $N=2^{k+1}-\ceil{k/{2^{g-k}}}$, $\left(H_n\right)$ is complete by Theorem~\ref{weak2Lcrit}. We can easily verify that $B_n\geq 0$ for all $1\leq n \leq g+k+1$ and $B_{g+k+1} > 0$; it remains to show that $B_n > 0$ for $g+k+2 \leq n \leq 2g+2k+1$. We consider two cases. \begin{enumerate}[label={Case \arabic*:}, leftmargin=*] \item $2 \leq n-(g+k)\leq g+1$. We want to show that $B_{n+1}\geq B_n$ for all $2\leq n - (g+k) \leq g+1$ and that $B_{g+k+2} > 0$. Now, \begin{align} B_n&=1+\sum_{i=1}^{n-1}{H_i}-H_n \nnend &=1+\sum_{i=1}^{n-1}{H_i}-\na*{\sum_{i=n-g}^{n-1}{H_i}+NH_{n-(g+k+1)}} \nnend &=1+\sum_{i=1}^{n-g-1}{H_i}-NH_{n-(g+k+1)}. \end{align} Then, note that \begin{align} B_{n+1}-B_n&=H_{n-g}-N\na*{H_{n-(g+k)}-H_{n-(g+k+1)}} \nnend &=H_{n-g}-N\na*{2^{n-(g+k+1)}-2^{n-(g+k+2)}}, \intertext{and by assumption,} &=H_{n-g}-\na*{2^{k+1}-\ceil[\Big]{{\frac{k}{2^{g-k}}}}}{2^{n-(g+k+2)}} \nnend &={2^{n-(g+k+2)}}\ceil[\Big]{{\frac{k}{2^{g-k}}}}-\na*{2^{n-g-1}-H_{n-g}}. \end{align} If $n-g\leq g+1$, then $2^{n-g-1}-H_{n-g}=0$, so $B_{n+1}-B_n>0$. If $g+2\leq n-g \leq g+k+1$, then \begin{equation} 2^{n-g-1}-H_{n-g}=2^{n-2g-2}\na*{n-2g-1}\leq 2^{n-(g+k+2)}\frac{k}{2^{g-k}}\leq 2^{n-(g+k+2)}\ceil[\Big]{{\frac{k}{2^{g-k}}}}, \end{equation} so that $B_{n+1}-B_n \geq 0$. In any case, $B_{n+1}\geq B_n$. We can verify directly that $B_{g+k+2}>0$, completing this case. \item $g\leq n-(g+k)\leq g+k+1$. From the previous case, $B_{2g+k+2}\geq B_{2g+k+1}>0$. 
Now, \begin{align} B_n&=1+\sum_{i=1}^{n-g-1}H_i - NH_{n-(g+k+1)} \nnend &=1+\sum_{i=1}^{n-2g-1}H_i+\sum_{i=n-2g}^{n-g-1}H_i- NH_{n-(g+k+1)} \nnend &=1+\sum_{i=1}^{n-2g-1}H_i+H_{n-g}- NH_{n-(2g+k+1)}- NH_{n-(g+k+1)}. \intertext{Substituting $n=2g+k+1+m$ for $1\leq m \leq k$,} &=1+\sum_{i=1}^{k+m}H_i+H_{g+k+1+m}-N\na*{H_m+H_{g+m}} \nnend &\geq H_{k+m+1}+H_{g+k+1+m}-N\na*{2^{m-1}+2^{g+m-1}-2^{m-2}(m-1)}. \label{Cm} \end{align} Let $C_m \coloneqq H_{k+m+1} + H_{g+k+1+m}- N \na*{2^{m-1}+2^{g+m-1}-2^{m-2}(m-1)}$, from equation \eqref{Cm}. We show by strong induction that $C_m > 0$. By direct computation, $C_1 > 0$. Suppose it holds for all values from $1$ to $m-1$ for $m\geq 2$. Then by the induction hypothesis, \begin{align} H_{g+k+1+m}&=\na*{H_{g+k+m}+\cdots+H_{g+k+2}}+\na*{H_{g+k+1}+\cdots+H_{m+k+1}}+NH_m \nnend & > \sum_{i=1}^{m-1}\na*{N\na*{2^{i-1}+2^{g+1-i}-2^{i-2}(i-1)}-H_{k+i+1}}+\nnend &\hspace{25mm}+\na*{2^{g+k}+\cdots+2^{m+k}-\sum_{i=1}^{k+1}2^{i-2}(i-1)}+2^{m-1}N \nnend &=N\na*{2^m-1+2^{g+m+1}-2^g-2^{m-2}(m-3)-1} - \nnend &\hspace{25mm} -\sum_{i=k+2}^{k+m}H_i+\na*{2^{g+k+1}-2^{m+k}-2^k(k-1)-1} \nnend &\geq N\na*{2^{m-1}+2^{g+m-1}-2^{m-2}(m-1)}-\na*{2^g+2-2^m}N - \nnend &\hspace{25mm} - \sum_{i=k+m-g}^{k+m} H_i + \na*{2^{g+k+1}-2^{m+k}-2^k(k-1)-1}, \end{align} where $H_i = 0$ for nonpositive $i$. 
Hence, \begin{align} C_m&=H_{g+k+1+m}- N \na*{2^{m-1}+2^{g+m-1}-2^{m-2}(m-1)}+H_{k+m+1}\nnend & > \na*{H_{k+m+1}-\sum_{i=k+m-g}^{k+m}H_i}+\na*{2^{g+k+1}-2^{m+k}-2^k(k-1)-1} - \nnend &\hspace{80mm} -\na*{2^g+2-2^m}N \nnend &=1+\na*{2^{g+k+1}-2^{m+k}-2^k(k-1)-1}-\na*{2^g+2-2^m}\left(2^{k+1}-\left\lceil\frac{k}{2^{g-k}}\right\rceil\right) \nnend &=2^{m+k}-2^k\na*{k+3}+\na*{2^g+2-2^m}\left\lceil\frac{k}{2^{g-k}}\right\rceil \nnend &\geq 2^{m+k}-2^k\left(k+3\right)+\left(2^g+2-2^m\right) \frac{k}{2^{g-k}}\nnend &=2^{m+k}-3\cdot 2^k - \left(2^m-2\right)\frac{k}{2^{g-k}}\nnend &=\na*{2^m-3}\left(2^k-\frac{k}{2^{g-k}}\right)-\frac{k}{2^{g-k}} \nnend &\geq 2^k-\frac{2k}{2^{g-k}} \geq 2^k-2k \geq 0. \end{align} This completes the induction, so $B_n\geq C_m > 0$. \end{enumerate} Since both cases are satisfied, $\left(H_i\right)$ is complete. \end{proof} \begin{remark} The case $k = 1$ is characterized in Lemma~\ref{lem:failAt2L-1}. \end{remark} \subsection{The ``\texorpdfstring{$2L - 1$}{2L - 1} conjecture''} We conjecture a strengthened version of Theorem~\ref{weak2Lcrit} as follows. \begin{conjecture}\label{2Lcrit} The PLRS $\left(H_n\right)$ defined by $[c_1, \dots, c_L]$ is complete if $B_{H, n} \geq 0$ for all $n \leq 2L - 1$, i.e., Brown's criterion holds for the first $2L-1$ terms. \end{conjecture} When using Brown's criterion, it would be very helpful to know how many terms must be checked to be sure that a PLRS is complete. This conjecture, if true, would be a powerful tool to do so. We do not know yet if such a threshold exists for each $L$; however, if it does, then it is at least $2L-1$, as shown by the following example, where $k+2=L$. \begin{lemma}\label{lem:failAt2L-1} The sequence $[1,\dots,1,0,4]$, with $k$ ones, where $k \geq 1$, is always incomplete. Moreover, it first fails Brown's criterion on the $(2k+3)$\textsuperscript{rd} term. \end{lemma} \begin{proof} We have the recurrence relation $H_{n+1} = H_n + \dots + H_{n-k+1} + 4H_{n-k-1}$. 
We show that the term in the $(2k+3)$\textsuperscript{rd} position in the sequence fails Brown's criterion. First, \begin{equation} H_{2k+3} = H_{2k+2} + \dots + H_{k+3} + 4H_{k+1}. \end{equation} Next, we observe that for $1 \leq j \leq k+1$, we have $H_j=2^{j-1}$. Additionally, $H_{k+2} = 2^{k+1}-1$. Thus, \begin{equation} 2H_{k+1} = 2^{k+1}>2^{k+1}-1 = H_{k+2}. \end{equation} We also note that $H_{k+1} = H_k +\dots + H_1 +1$. Putting everything together, \begin{align} H_{2k+3} &= H_{2k+2} + \dots + H_{k+3} + 4H_{k+1}\nnend &= H_{2k+2} + \dots + H_{k+3} + 3H_{k+1} + H_k + \dots + H_1 +1 \nnend &> H_{2k+2} + \dots + H_{k+3} + H_{k+2} + H_{k+1} + H_k + \dots + H_1 +1. \end{align} Hence, we have shown that $[1,\dots,1,0,4]$, with $k\geq 1$ ones, is incomplete, as it fails Brown's criterion on the $(2k+3)$\textsuperscript{rd} term. We now show that Brown's criterion holds for the first $(2k+2)$ terms. For $1 \leq j \leq k$, we have $H_{j+1}=H_j +\dots + H_1 + 1$, and $H_{k+2}=2^{k+1}-1<2^{k+1}=H_{k+1}+\dots+H_1+1$, so the criterion holds for the first $k+2$ terms. When $k+2 \leq j \leq 2k+1$, \begin{equation} H_{j+1} = H_j + \dots + H_{j-k+1} + 4H_{j-k-1}. \end{equation} Note that $H_{j-k-1}=H_{j-k-2}+\dots+H_1+1$ as $1 \leq j-k-1 \leq k$, so \begin{equation} H_{j+1} = H_j + \dots + H_{j-k+1} + 2H_{j-k-1} + H_{j-k-1} + H_{j-k-2} + \dots + H_1 +1 \end{equation} and as $2H_{j-k-1} = 2^{j-k-1}=H_{j-k}$, we see \begin{equation} H_{j+1} = H_j + \dots + H_{j-k+1} + H_{j-k} + H_{j-k-1} + H_{j-k-2} + \dots + H_1 +1. \end{equation} Hence Brown's criterion is satisfied for the terms $H_{j+1}$ with $k+2 \leq j \leq 2k+1$, and therefore for the first $(2k+2)$ terms. \end{proof} Assuming this conjecture, we can explore sequences of the form $[1, 0, \dots, 0, 1, \dots, 1, N]$ further. 
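As a concrete check of Lemma~\ref{lem:failAt2L-1} in its smallest case, take $k=1$, so that the sequence is $[1,0,4]$ with $L=3$ and $2L-1=5$. Its first terms are
\begin{equation}
H_1=1,\qquad H_2=2,\qquad H_3=3,\qquad H_4=H_3+4H_1=7,\qquad H_5=H_4+4H_2=15.
\end{equation}
Brown's criterion holds through the fourth term, as $H_4=7=1+H_1+H_2+H_3$, but fails first at the fifth term, since $H_5=15>14=1+H_1+H_2+H_3+H_4$.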
In Theorems~\ref{sum2L-2m} and \ref{sum2L-2m 2nd}, we show that the bound on $N$ for $[1, \underbrace{0, \dots, 0}_{L-m-2}, \underbrace{1, \dots, 1}_{m}, N]$ strictly increases if we keep $L$ fixed and increase $m$ from $0$ to $L-3$, i.e., switching the coefficients from $0$ to $1$ gradually from the end so that at least one $0$ remains. We first state the following lemma, which is contingent on this conjecture. \begin{lemma}[Conditional]\label{firstL+2} Let $\left( H_{n} \right)$ be the sequence defined by $[1,0,\dots, 0,1,\dots, 1,N]$ with $L$ coefficients, $m$ of which are ones. Then, if $(H_n)$ is incomplete, it must fail Brown's criterion at the $(L+1)$st or $(L+2)$nd term. In other words, if $H_{L+1}\leq 1+\sum_{i=1}^{L}H_{i}$ and $H_{L+2}\leq 1+\sum_{i=1}^{L+1}H_{i}$, then $\left( H_{n} \right)$ is complete. \end{lemma} The proof of this lemma is deferred to Lemma~\ref{firstL+2append} of Appendix~\ref{apx:sec3lemmas}. \begin{theorem}\label{sum2L-2m} Let $\left( H_{n} \right)$ be a PLRS with $L$ coefficients defined by $[1,0,\dots, 0, \underbrace{1,\dots, 1}_{m}, N]$, where $L \geq 2m + 2$. Then $\left( H_{n} \right)$ is complete if and only if \begin{equation} N\leq \left\lfloor \frac{\left( L-m \right) \left( L+m+1 \right) }{4}+\frac{1}{48}m(m+1)(m+2)(m+3)+\frac{1-2m}{2} \right\rfloor.\end{equation} \end{theorem} \begin{proof} First, note that $H_{n}=n$ for all $1\leq n\leq L-m$. Now, we claim that for all $1\leq k\leq m$, \begin{equation} H_{L-m+k}=L-m+\frac{1}{6}k(k+1)(k+2)+k .\end{equation} We use induction, appealing to the identity $\sum_{a=1}^{n}a(a+1)/{2}=n(n+1)(n+2)/6$. We first see that \begin{equation} H_{L-m+1}=H_{L-m}+H_1+1=L-m+2 = L-m + \sum_{a=1}^{1}\frac{a(a+1)}{2}+1. \end{equation} Additionally, \begin{equation} H_{L-m+2}=H_{L-m+1}+H_2+H_1+1 = (L-m+2)+2+1+1 = L-m + \sum_{a=1}^{2}\frac{a(a+1)}{2}+2 .\end{equation} Now, suppose $H_{L-m+k}=L-m+\sum_{a=1}^{k}a(a+1)/{2}+k$ for some $k<m$. 
Note that \begin{equation} H_{L-m+k+1}=H_{L-m+k}+H_{k+1}+\dots+H_{1}+1.\end{equation} Since we supposed $L \geq 2m+2$, we see $k+1 \leq m+1 \leq L-m$, and thus $H_{i}=i$ for all $1\leq i\leq k+1$. Thus, \begin{align} H_{L-m+k+1} &= \left( L-m+\sum_{a=1}^{k}\frac{a(a+1)}{2}+k \right) +\frac{(k+1)(k+2)}{2}+1 \nnend &= L-m +\sum_{a=1}^{k+1}\frac{a(a+1)}{2}+k+1. \end{align} Thus, we have an explicit formula for $H_{i}$, for $1\leq i\leq L$. By Lemma~\ref{firstL+2}, $\left( H_{n} \right)$ is complete if and only if it fulfills Brown's criterion for the $(L+1)$st and $(L+2)$nd terms. We show that $\left( H_{n} \right)$ fulfills the criterion for $L+2$ if and only if the bound above holds; it is not difficult to show that the bound for $L+1$ is less strict. Indeed, we wish to reduce the inequality \begin{align} H_{L+2}=H_{L+1}+H_{m+2}+\dots+H_{3}+2N &\leq 1+\sum_{i=1}^{L+1}H_{i}\\ \iff H_{m+2}+\dots+H_{3}+2N &\leq 1+\sum_{i=1}^{L}H_{i}\label{1}. \end{align} Simplifying the left hand side of inequality \eqref{1}, \begin{align} H_{m+2}+\dots+H_{3}+2N &=H_{m+2}+\dots+H_3+(H_2+H_1 - H_2 - H_1) +2N\nnend&= \frac{(m+2)(m+3)}{2}-3+2N. \end{align} Additionally, \begin{align} 1+\sum_{n=1}^{L}H_{n}&=1+\sum_{n=1}^{L-m}H_{n}+\sum_{n =L-m+1}^{L}H_{n}\nnend &=1 + \frac{(L-m)\left( L-m+1 \right) }{2}+\sum_{n=1}^{m}\left( \frac{1}{6}n(n+1)(n+2) +n+L-m\right).\label{eq:351} \end{align} We use the fact that $\sum_{n=1}^{m}n(n+1)(n+2)=m(m+1)(m+2)(m+3)/4$ to simplify \eqref{eq:351} as follows: \begin{multline} 1+\frac{(L-m)\left( L-m+1 \right) }{2}+\frac{m(m+1)}{2}+mL-m ^2+\frac{1}{6}\sum_{n=1}^{m}n(n+1)(n+2)\\ =1+ \frac{(L-m)\left( L-m+1 \right) }{2}+\frac{m(m+1)}{2}+mL-m ^2+\frac{1}{24}m(m+1)(m+2)(m+3) .\end{multline} Hence \eqref{1} is equivalent to \begin{multline} \frac{\left( m+2 \right) \left( m+3 \right) }{2}-3+2N \leq 1 + \frac{(L-m)\left( L-m+1 \right) }{2}+\frac{m(m+1)}{2}\\ +mL-m ^2+\frac{1}{24}m(m+1)(m+2)(m+3). 
\end{multline} Simplifying, this gives us \begin{equation} N\leq \left\lfloor \frac{\left( L-m \right) \left( L+m+1 \right) }{4}+\frac{1}{48}m(m+1)(m+2)(m+3)+\frac{1-2m}{2} \right\rfloor.\end{equation} \end{proof} \begin{theorem}\label{sum2L-2m 2nd} Let $\left(G_n\right)$ and $\left(H_n\right)$ be PLRS's, both with $L$ coefficients, which are defined by $[1, 0, \dots, 0, \underbrace{1, \dots, 1}_{m}, N]$ and $[1, 0, \dots, 0, \underbrace{1, \dots, 1}_{m+1}, N+1]$ respectively. Suppose $L - m \geq 4$ (so that at least one zero is present in $\left(H_n\right)$), $m \geq (L-1) / 2$, and $\left(G_n\right)$ is complete. Then $\left(H_n\right)$ is also complete. \end{theorem} \begin{proof} As $\left(G_n\right)$ is complete, from Brown's criterion, we obtain \begin{equation} G_{L+2} = G_{L+1} + \sum_{i=3}^{m+2}G_i + NG_2 \leq 1 + \sum_{i=1}^{L+1}G_i, \end{equation} which is equivalent to \begin{equation}\label{baseG} 2N \leq \sum_{i = m+3}^L G_i + 4. \end{equation} From Lemma~\ref{firstL+2}, it suffices to show that \begin{equation} H_{L+1} \leq 1 + \sum_{i=1}^L H_i \qquad\text{and}\qquad H_{L+2} \leq 1 + \sum_{i=1}^{L+1} H_i, \end{equation} or equivalently, \begin{equation} N \leq \sum_{i=m+3}^{L-1}H_i\label{easybound} \end{equation} and \begin{equation} 2N \leq \sum_{i=m+4}^{L}H_i + 2\label{hardbound}. \end{equation} We first show \eqref{easybound}. Combining with \eqref{baseG}, it suffices to show that \begin{equation} \sum_{i=m+3}^L G_i + 4 \leq 2\sum_{i=m+4}^L H_i. \end{equation} From Lemma~\ref{diffGH}, \begin{equation} \begin{cases} G_i \leq H_i, & \text{if $m + 3 \leq i \leq L$;}\\ G_i \leq H_{i-1} - 1, & \text{if $2(L-m) < i \leq L$}. \end{cases} \end{equation} Thus, \begin{align} \sum_{i=m+3}^L G_i + 4 &= \sum_{i=m+3}^{2(L-m)}G_i + \sum_{i=2(L-m)+1}^{L}G_i + 4 \nnend & \leq \sum_{i=m+3}^{2(L-m)}H_i + \sum_{i=2(L-m)}^{L-1}H_i + (2m - L + 4) \nnend & \leq 2\sum_{i=m+4}^L H_i, \end{align} where the last inequality can be taken crudely. 
We then show \eqref{hardbound}. Similarly, combining with \eqref{baseG}, it suffices to show that \begin{equation} \sum_{i=m+3}^L G_i + 2 \leq \sum_{i=m+4}^L H_i. \end{equation} If $m + 3 \geq 2(L-m)$, then \begin{align} \sum_{i=m+4}^LH_i &= \sum_{i=m+4}^L\left(H_{i-1} + \sum_{j=1}^{i-L+m+1}H_j + 1\right) \nnend &\geq \sum_{i=m+4}^L(H_{i-1} + H_{i-L + m + 2}) \text{ (Brown's criterion for the first terms)} \nnend &= \sum_{i=m+3}^{L-1} H_i + \sum_{i=2m+6-L}^{m+2}H_i \geq \sum_{i=m+3}^{L-1}(G_{i+1} + 1) + H_{m+2} \nnend &\geq \sum_{i=m+3}^L G_i + 2. \end{align} If $m + 3 < 2(L-m)$, then \begin{align} \sum_{i=m+3}^L G_i &= \sum_{i=m+3}^{2(L-m)-1}G_i + G_{2(L-m)} + \sum_{i=2(L-m) +1}^L G_i \nnend &= \sum_{i=m+3}^{2(L-m)-1}(H_{i-1} + 1) + H_{2(L-m)-1} + \sum_{i=2(L-m) +1}^L G_i \nnend &= \sum_{i=m+2}^{2(L-m)-1}H_i + (2L - 3(m+1)) + \sum_{i=2(L-m) +1}^L G_i. \end{align} Thus, our original inequality \eqref{hardbound} holds if we can show that \begin{equation} H_{m+2} + H_{m+3} + (2L - 3(m+1)) + \sum_{i=2(L-m) +1}^L G_i \leq \sum_{i=2(L-m)}^L H_i. \end{equation} Similarly to the previous case, \begin{align} \sum_{i=2(L-m)}^L H_i &\geq \sum_{i=2(L-m)}^L(H_{i-1} + H_{i-L+m+2}) \nnend &= \sum_{i=2(L-m)-1}^{L-1}H_i + \sum_{i=L-m+2}^{m+2}H_i \nnend &= \sum_{i=2(L-m)}^{L-1}H_i + H_{2(L-m)-1} + H_{m+2} + \sum_{i=L-m+2}^{m+1}H_i. \end{align} As $2(L-m) - 1 \geq m+3$ and $H_i \geq i$, \begin{align} \sum_{i=2(L-m)}^L H_i &\geq \sum_{i=2(L-m)}^{L-1}(G_{i+1} + 1) + H_{m+3} + H_{m+2} + \sum_{L-m+2}^{m+1}i \nnend &= \sum_{i=2(L-m)+1}^L G_i + H_{m+3} + H_{m+2} + \left( 2m - L + \sum_{i=L-m+2}^{m+1}i \right). \end{align} From Lemma~\ref{lem:trivialineq}, \begin{equation} \sum_{i=2(L-m)}^L H_i \geq \sum_{i=2(L-m)+1}^L G_i + H_{m+3} + H_{m+2} + (2L - 3(m+1)). \end{equation} \end{proof} \section{An analytical approach}\label{roots} \subsection{An introduction to principal roots} We begin by restating some results from Martinez, Miller, Mizgerd, Murphy, and Sun \cite{MMMMS}. 
\begin{lemma}\label{lma:principalRoot} Let $P(x)$ be the characteristic polynomial of a recurrence relation with nonnegative coefficients and at least one positive coefficient. Let $S = \{m \ |\ c_m \neq 0\}$. Then \begin{enumerate} \item there exists exactly one positive root $r$, and this root has multiplicity $1$, \item every root $z \in \C$ satisfies $|z| \leq r$, and \item if $\gcd(S) = 1$, then $r$ is the unique root of greatest magnitude. \end{enumerate} \end{lemma} \begin{proof} This is Lemma 2.1 from Martinez, Miller, Mizgerd, Murphy, and Sun \cite{MMMMS}. \end{proof} \begin{remark} We refer to the unique positive root from Lemma~\ref{lma:principalRoot} as the \emph{principal root} of the recurrence sequence and corresponding characteristic polynomial. \end{remark} \begin{lemma} Let $P(x)$ be the characteristic polynomial of the PLRS $\left(H_n\right)$ and let $r_1$ be its principal root. Then \begin{equation} \lim_{n \to \infty} \frac{H_n}{r_1^n} = C \end{equation} for some constant $C > 0$. \end{lemma} \begin{proof} Corollary 2.3 from \cite{MMMMS} proves a stronger result than this, which immediately implies this lemma. \end{proof} \begin{lemma}\label{lma:principalRootContributes} Let $P(x)$ be the characteristic polynomial of a PLRS $\left(H_n\right)$ with roots $r_i$, each of multiplicity $m_i$, where $r_1$ is the principal root. If \begin{equation}\label{eqn:explicitHn} H_n = a_1 r_1^n + \sum_{i=2}^k q_i(n)r_i^n, \end{equation} where $q_i(x)$ is a polynomial of degree at most $m_i - 1$, then $a_1 > 0$. \end{lemma} \begin{proof} First, note that the set $S$ of Lemma~\ref{lma:principalRoot} contains $1$ because $c_1 > 0$ in a PLRS. Therefore $\gcd(S) = 1$, and $r_1$ is the unique root of greatest magnitude. If $a_1 < 0$, then $H_n < 0$ for some $n$, because the behavior of $a_1 r_1^n$ eventually dominates the expression for $H_n$ in \eqref{eqn:explicitHn}. 
If $a_1 = 0$, then \begin{equation} \lim_{n \to \infty} \frac{H_n}{r_1^n} = 0, \end{equation} because $r_1$ is the unique root of greatest magnitude, so the growth of $H_n$ is then governed by the root of next greatest magnitude, whose powers are eventually smaller than $r_1^n$; this contradicts the previous lemma, which gives a positive limit. Thus, $a_1 > 0$. \end{proof} \subsection{Applications to completeness} Given these results, we see that the principal root of a PLRS serves as a measure for the rate of that sequence's growth. Guided by the simple heuristic that, generally, a sequence which grows slowly is more likely to be complete than a sequence which grows rapidly, we find bounds for the potential roots of a complete or incomplete PLRS. We aim to answer these questions: for any given $L$, what is the fastest-growing complete PLRS with $L$ coefficients? What is the slowest-growing incomplete PLRS with $L$ coefficients? While the principal root of a PLRS has not been related to completeness before, there is previous work on bounding the principal root of other linear recurrence sequences by Gewurz and Merola \cite{GM}. \begin{lemma}\label{thm:CompleteCriterionRoots} If $\left(H_n\right)$ is a complete PLRS and $r_1$ is its principal root, then $|r_1| \leq 2$. \end{lemma} \begin{proof} Suppose that $|r_1| > 2$. Set \begin{equation} H_n = a_1r_1^{n}+q_2 (n)r_2^n + \cdots + q_k (n)r_k^n. \end{equation} Since $r_1$ is the unique root of largest magnitude by Lemma~\ref{lma:principalRoot}, the behavior of $a_1 r_1^n$ dominates in the limit. By Lemma~\ref{lma:principalRootContributes}, $a_1 > 0$, so eventually $|a_1 r_1^n| > 2^{n - 1}$, and so there exists a large $n$ for which $H_{n}>2^{n-1}$. As the sequence $\left( 2^{n-1} \right)$ is the complete PLRS with maximal terms by Theorem~\ref{clm:largestCompleteGaps}, we see $\left( H_{n} \right)$ must be incomplete. \end{proof} \begin{remark} The converse to this lemma does not hold. 
A counterexample is $[1, 1, 1, 0, 4]$, which has principal root 2 but is not complete. \end{remark} While the proof is simple, this lemma gives us an effective upper bound for the roots of a complete PLRS, regardless of length. Recall from Theorem~\ref{basic} that for any $L$, the PLRS $\left( H_{n} \right)$ generated by the coefficients $[\underbrace{1,\ldots , 1}_{L-1},2]$ satisfies $H_{n}=2^{n-1}$. This sequence naturally has a principal root of 2, and is complete. Similarly, for any $L \geq 1$, the sequence $[\underbrace{1,\ldots , 1}_{L}]$ is complete, and its principal root asymptotically approaches 2 as $L$ grows. We now focus on finding a lower bound for the roots of an incomplete sequence, which proves to be a more difficult problem. \begin{lemma}\label{boundexists} For any $ L\in \Z_{>0}$, there exists a constant $ B_{L}$, with $ 1< B_{L}<2$, such that if $\left( H_{n} \right)$ is a PLRS with principal root $r_1$ and $r_1<B_{L}$, then $\left( H_{n} \right)$ is complete. \end{lemma} \begin{remark} This means that for any $L$, there exists a lower bound $B_{L}$ on possible values of the principal root of an incomplete PLRS generated by $[c_1,\ldots, c_{L}]$. \end{remark} \begin{proof} In order to show that such a $B_{L}$ exists, it suffices to show that for any given $L$, there exist only finitely many incomplete positive linear recurrence sequences generated by $[c_1,\dots, c_{L}]$ with principal root $r_1 <2$. Recall that the principal root $r_1$ of a PLRS is the single positive root of the characteristic polynomial $p(x)=x^{L}-\sum_{i=1}^{L}c_{i}x^{L-i}$. As $\lim_{x \rightarrow \infty }p(x)=+\infty $, the fact that $r_1$ is the unique positive root of $p(x)$ implies that $r_1<2 \iff p(2)>0$, by the intermediate value theorem (IVT). Note that \begin{equation} p(2)=2^{L}-\sum_{i=1}^{L}c_{i}2^{L-i}>0 \iff \sum_{i=1}^{L}c_{i}2^{L-i}<2^{L}. \end{equation} As $c_{i}\geq 0$ for all $i$, the inequality above cannot hold if there exists an $i$ such that $c_{i}\geq 2^{i}$.
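To see this explicitly, note that a single such coefficient already saturates the bound:
\begin{equation}
c_{i}\geq 2^{i} \implies \sum_{j=1}^{L}c_{j}2^{L-j}\ \geq\ c_{i}2^{L-i}\ \geq\ 2^{i}\cdot 2^{L-i}\ =\ 2^{L}.
\end{equation}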
As the set $\{ [c_1,\dots, c_{L} ]: 0\leq c_{i}\leq 2^{i} \text{ for all } i\}$ of such sequences is finite, we may take $B_{L}$ to be the smallest principal root among the incomplete sequences in this set (or any value in $(1,2)$ if there are none), and we are done. \end{proof} The remainder of this section is a series of lemmas which build towards the following conjecture. \begin{conjecture}\label{final} Let $N_{L}=\left\lceil L(L+1)/4 \right\rceil$, and let $\lambda_{L} $ be the principal root of the sequence generated by $[1,\underbrace{0,\ldots , 0}_{L-2}, N_{L} +1]$, i.e., the unique positive root of \begin{equation} p_{L}(x)=x^{L}-x^{L-1}-\left\lceil \frac{L(L+1)}{4} \right\rceil -1. \end{equation} If $[c_1,\ldots , c_{L}]$ generates an incomplete sequence, then its principal root is at least $\lambda_{L}$; in other words, the incomplete sequence of length $L$ with the smallest possible principal root is precisely $[1,\underbrace{0,\ldots , 0}_{L-2}, N_{L} +1]$. \end{conjecture} \begin{remark} This conjecture is equivalent to stating that $B_{L}=\lambda _{L}$ for all $L\geq 2$, where $B_{L}$ is the bound proposed in Lemma~\ref{boundexists}. \end{remark} \begin{remark} By using Theorem~\ref{thm:1onekzero}, it is easy to see that the sequence generated by $[1,0,\ldots , 0, N_{L} +1]$ is incomplete; in fact, the value $N_{L}+1$ is the minimal positive integer such that a sequence of this form is incomplete. \end{remark} As a first step towards a proof of Conjecture~\ref{final}, we prove Lemma~\ref{minprincipalroot}, which addresses the case of sequences with a large coefficient sum. \begin{definition} For positive integers $S,L$, we define the set of positive linear recurrence sequences \begin{equation} P_{L,S} \coloneqq \biggl\{ \left(H_n\right) \text{ generated by } [c_1,\ldots, c_{L}]\ \bigg| \ \sum_{i=1}^{L}c_{i}=S+1 \biggr\}. \end{equation} \end{definition} \begin{lemma}\label{PLSRoots} The sequence in $P_{L,S}$ with the minimal principal root is $[1,0,\ldots, 0,S]$.
\end{lemma} \begin{proof} Consider a sequence generated by $s=[c_1,\ldots, c_{L}]\in P_{L,S}$, and let $r_1,\ldots, r_{L}$ be its roots, with $r_1>0$ the principal root. Since $|c_{L}|=\bigl| \prod_{i=1}^{L}r_{i}\bigr|$ is a positive integer, we know $r_1>1$. Now, for any $1\leq m\leq L-1$ with $c_{m}\geq 1$, consider a sequence generated by $s_{m} \in P_{L,S}$ of the form \begin{equation} [c_1,\ldots,c_{m-1}, c_{m}-1,c_{m+1},\ldots, c_{L}+1]. \end{equation} We claim that the principal root $q_1$ of $s_{m}$ fulfills $q_1<r_1$. Define the characteristic polynomials $f(x)$ and $g(x)$ for $s$ and $s_{m}$, respectively, so that \begin{equation} f(x)=x^{L}-\sum_{i=1}^{L}c_{i}x^{L-i}, \end{equation} and \begin{align} g(x) &= x^{L}-\sum_{i=1}^{m-1}c_{i}x^{L-i}-\left( c_{m}-1 \right)x^{L-m} - \sum_{i=m+1}^{L-1}c_{i}x^{L-i}-\left( c_{L}+1 \right)\nonumber\\ &=x^{L}-\sum_{i=1}^{L}c_{i}x^{L-i}+x^{L-m}-1. \end{align} As $q_1$ is the sole positive root of $g(x)$, and $g(x)$ is eventually positive, we notice that $q_1<r_1$ if and only if $g(r_1)>0$, which is equivalent to $g\left( r_1 \right) >f(r_1)$. Now, \begin{equation} \begin{array}{l@{{}\iff {}}l} g\left( r_1 \right) >f\left( r_1 \right) & r_1^{L}-\sum_{i=1}^{L}c_{i}r_1^{L-i}+r_1^{L-m}-1>r_1^{L}-\sum_{i=1}^{L}c_{i}r_1^{L-i} \\ & r_1^{L-m}-1 >0 \\ & r_1>1. \\ \end{array} \end{equation} As $r_1 > 1$, the principal root $q_1$ of $g(x)$ is strictly less than that of $f(x)$. As $s$ was chosen arbitrarily, we see that the principal root of any sequence $s\in P_{L,S}$ can be strictly decreased by using the transformation $s\rightarrow s_{m}$ for any such $m$. Applying this transformation iteratively for all values of $m$, we inevitably end up with the minimal possible values of $c_1,\ldots, c_{L-1}$, namely $c_1=1,\; c_2=c_3=\cdots =c_{L-1}=0$, and the maximal possible value of $c_{L}$, namely $c_{L}=S$.
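To illustrate the transformation (with principal roots computed numerically), consider $P_{3,4}$: the chain
\begin{equation}
[2,1,2]\ \longrightarrow\ [1,1,3]\ \longrightarrow\ [1,0,4]
\end{equation}
arises from applying $s\rightarrow s_{m}$ with $m=1$ and then $m=2$, and the corresponding principal roots decrease from approximately $2.66$ to approximately $2.13$ to exactly $2$.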
Thus, as the principal root under these iterated transformations is strictly decreasing, we conclude that $[1,0,\ldots, 0,S]$ has the smallest principal root of any element of $P_{L,S}$. \end{proof} \begin{lemma}\label{DecreaseLastDecreasesRoot} For any $S > 0$, the principal root of $[1,0,\ldots,0,S]$ is strictly less than that of $[1,0,\dots,0,S+1]$. \end{lemma} \begin{proof} Let $S$ be an arbitrary positive integer, and let $f(x), g(x)$ and $r_1,q_1$ denote the characteristic polynomials and principal roots of $[1,0,\ldots, 0,S+1]$ and $[1,0,\ldots, 0,S]$, respectively. As before, $q_1<r_1$ if and only if $g(r_1)>0=f(r_1)$. Note that \begin{equation} \begin{array}{l@{{}\iff {}}l} g(r_1)>f(r_1) & r_1^{L}-r_1^{L-1}-S>r_1^{L}-r_1^{L-1}-\left( S+1 \right) \\ & S+1>S. \end{array} \end{equation} Thus, $q_1<r_1$, for any value of $S$. \end{proof} \begin{lemma}\label{minprincipalroot} Any sequence fulfilling $\sum_{i=1}^{L}c_{i}\geq N_{L}+2$ has a principal root greater than or equal to that of \begin{equation} [1,0,\ldots,0,N_{L}+1]. \end{equation} \end{lemma} \begin{proof} Recall from Theorem~\ref{thm:1onekzero} that the sequence $[1,0,\dots,0,N] $ is complete if and only if $N\leq N_{L}$, for $N_L= \left\lceil L(L+1)/{4}\right\rceil$. Thus, an immediate corollary to Theorem~\ref{thm:1onekzero} is that the incomplete sequence of the form $[1,0,\dots,0,N]$ with the minimal possible principal root is $[1,0,\ldots,0,N_{L}+1]$. Furthermore, if we have a sequence generated by $[c_1,\ldots, c_{L}]$ which fulfills $\sum_{i=1}^{L}c_{i}\geq N_{L}+2$, Lemmas~\ref{PLSRoots} and \ref{DecreaseLastDecreasesRoot} present a sequence of algorithms which allow us to transform this sequence into the sequence generated by $[1,0,\ldots,0,N_{L}+1]$, in such a way that each transformation strictly lowers the magnitude of the principal root. 
Thus, any sequence satisfying $\sum_{i=1}^{L}c_{i}\geq N_{L}+2$ has a principal root greater than or equal to that of $[1,0,\ldots,0,N_{L}+1]$. \end{proof} The following lemmas work towards Conjecture~\ref{finalsegundamitad}, which addresses the second case of Conjecture~\ref{final}: the roots of sequences $[c_1, \ldots, c_L]$ which fulfill $\sum_{i=1}^L c_i \leq N_L +2$. \begin{lemma}\label{addingcoeffdecreasesroot} Suppose the sequence generated by $[c_1,\ldots, c_{L}]$ has principal root $r$. Then, for any $c_{L+1}\in \Z_{>0}$, the sequence generated by $[c_1,\ldots, c_{L},c_{L+1}]$ (in which we append an additional positive coefficient) has principal root $q$ fulfilling $r<q$. \end{lemma} \begin{proof} Let $f(x),g(x)$ be the characteristic polynomials of the two sequences, so that \begin{equation} f(x)=x^{L}-\sum_{i=1}^{L}c_{i}x^{L-i} \quad \text{and} \quad g(x)=x^{L+1}-\sum_{i=1}^{L}c_{i}x^{L+1-i}-c_{L+1}. \end{equation} Similar to previous arguments, by the IVT, $r<q$ if and only if $g(r)<f(r)=0$. Note that \begin{equation} \begin{array}{l@{{}\iff {}}l} g(r)<f(r) & r^{L+1}-\sum_{i=1}^{L}c_{i}r^{L+1-i}-c_{L+1}<r^{L}-\sum_{i=1}^{L}c_{i}r^{L-i} \\ & c_{L+1} > r^{L+1}-r^{L}+\sum_{i=1}^{L}c_{i}r^{L-i}-\sum_{i=1}^{L}c_{i}r^{L+1-i} \\ & c_{L+1}>r^{L}\left( r-1 \right) +\sum_{i=1}^{L}c_{i}r^{L-i}\left( 1-r \right) \\ & c_{L+1}>\left( r-1 \right) \left( r^{L}-\sum_{i=1}^{L}c_{i}r^{L-i} \right) =\left( r-1 \right) \cdot f(r) \\ & c_{L+1}> \left( r-1 \right) \cdot 0 =0. \\ \end{array} \end{equation} Since $c_{L+1}\in \Z_{>0}$, the last line holds. It follows immediately that $r<q$. \end{proof} \begin{lemma}\label{rootbounddecreases} Let $\lambda _{L}$ be the principal root of \begin{equation} x^{L} - x^{L - 1} - N_L - 1. \end{equation} Then, for any $L\geq 2$, $\lambda _{L}>\lambda _{L+1}$.
\end{lemma} \begin{proof} We let $f(x)$ and $g(x)$ denote the characteristic polynomials of $[1,0,\ldots, 0,N_{L}+1]$ and $[1,0,\ldots, 0,N_{L+1}+1]$, of length $L$ and $L+1$, respectively. This way we obtain \begin{equation} f(x)=x^{L}-x^{L-1}-N_{L}-1,\; \; \; g(x)=x^{L+1}-x^{L}-N_{L+1}-1. \end{equation} As in previous proofs, we see that $\lambda _{L}>\lambda _{L+1}\iff g\left( \lambda _{L} \right) >f\left( \lambda _{L} \right) =0$. Writing $\lambda =\lambda _{L}$, \begin{equation} \begin{array}{l@{{}\iff {}}l} g\left( \lambda \right) >f\left( \lambda \right) & \lambda ^{L+1}-\lambda ^{L}-N_{L+1}-1> \lambda ^{L}-\lambda ^{L-1}-N_{L}-1 \\ & \lambda ^{L+1}-2\lambda ^{L}+\lambda ^{L-1}>N_{L+1}-N_{L} \\ &\lambda ^{L-1}\left( \lambda -1 \right) ^2 > N_{L+1}-N_{L}. \\ \end{array} \end{equation} Note that when $f\left( \lambda \right) =0$, we have $\; \lambda ^{L-1}\left( \lambda -1 \right) =N_{L}+1$. Moreover, $N_{L+1}-N_{L}\leq (L+2)/{2}$, which can be shown by using the definition of $N_L$ and checking all cases modulo 4. Thus, it suffices to show that \begin{equation} \left( N_{L}+1 \right) \left( \lambda _{L}-1 \right) \geq \frac{L+2}{2}. \end{equation} Now, using the value of $N_{L}$ (note that $N_{L}+1\geq (L^2+L+4)/4$), all we need to show is \begin{equation}\label{appendixit} \lambda _{L}-1 \geq \frac{2(L+2)}{L^2+L+4}. \end{equation} The proof of \eqref{appendixit} is just algebra, and is left to Appendix~\ref{apx:sec4lemmas}, as Lemma~\ref{appendixitAppendix}. \end{proof} \begin{lemma}\label{rootsgotozero} For any $L \in \N$, let $\lambda _{L}$ be the sole positive root of the polynomial \begin{equation} p_{L}(x)=x^{L}-x^{L-1}-\left\lceil \frac{L(L+1)}{4} \right\rceil -1. \end{equation} Then, $\lim_{L \rightarrow \infty }\lambda _{L}=1$. \end{lemma} \begin{proof} We show that for any $\varepsilon >0$, there exists an $M$ large enough so that for all $L > M$, $p_{L}(1+\varepsilon )>0$.
As $p_{L}(x)$ has only one positive root $\lambda _{L}$ and $p_{L}(x)$ is positive as $x\rightarrow \infty $, we see $p_{L}(1+\varepsilon )>0$ implies $\lambda _{L}<1+\varepsilon $. If this is possible for arbitrary $\varepsilon$, then $\lambda _{L}\rightarrow 1$ as desired. Let us fix an $\varepsilon >0$. For any $L$, we may write \begin{align} p_{L}(1+\varepsilon )&=\left( 1+\varepsilon \right) ^{L}-(1+\varepsilon )^{L-1}-\left\lceil \frac{L(L+1)}{4} \right\rceil -1 \nonumber\\ &= \sum_{n=0}^{L}\varepsilon ^{n}\left( \binom{L}{n}-\binom{L-1}{n} \right) - \left\lceil \frac{ L(L+1) }{4} \right\rceil -1,\label{uglyformpl} \end{align} where we set $\binom{L-1}{L}=0$. Using Pascal's rule ($ \binom{n-1}{k} + \binom{n-1}{k-1} = \binom{n}{k}$), we can reduce \eqref{uglyformpl} to \begin{equation}\label{eq:Pascalrule} p_{L}\left( 1+\varepsilon \right) = \sum_{n=1}^{L}\varepsilon ^{n}\binom{L-1}{n-1} - \left\lceil L(L+1)/4 \right\rceil -1. \end{equation} The quantity from (\ref{eq:Pascalrule}) can easily be shown to be positive (and in fact tends towards infinity) for large enough $L$. For example, we can take the trivial bound \begin{equation} \sum_{n=1}^{L}\varepsilon ^{n}\binom{L-1}{n-1} > \varepsilon ^4\binom{L-1}{3}, \end{equation} as the full sum of positive terms is larger than its fourth summand alone. Since $\varepsilon ^{4}$ is simply a positive constant and $L(L+1) \ll \binom{L-1}{3}$, then for large enough $L$, \begin{equation} p_{L}(1+\varepsilon )>\varepsilon ^4\binom{L-1}{3}-\left\lceil L(L+1)/4 \right\rceil -1 >0. \end{equation} \end{proof} \begin{remark} Even in the event that Conjecture~\ref{final} is false, this lemma proves that there are incomplete sequences whose principal roots are arbitrarily close to 1. Since 1 is the minimum possible size for the principal root of a PLRS, this may be interpreted as saying that we may find arbitrarily slow-growing incomplete sequences, provided the length $L$ of the coefficient list is taken sufficiently large.
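For instance, $\lambda _{2}=(1+\sqrt{13})/2\approx 2.303$, while numerical computation gives $\lambda _{5}\approx 1.82$ and $\lambda _{10}\approx 1.55$, illustrating how slowly these roots decay towards 1.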
\end{remark} \begin{lemma}\label{addingm} Consider the sequence generated by $[c_1,\ldots,c_{L}]$. For any value $m \in \Z_{>0}$, the principal root of $[c_1,\ldots, c_{L}+m]$ is greater than that of $[c_1,\ldots, c_{L},m]$. \end{lemma} \begin{proof} Let $f(x),g(x)$ be the characteristic polynomials and $r,q$ be the principal roots of $[c_1,\ldots, c_{L}+m]$ and $[c_1,\ldots, c_{L},m]$, respectively. Since each of $f$ and $g$ has a unique positive root, we see that $r > q\iff g(r)>f(r)=0$. Note that \begin{equation} \begin{array}{l@{{}\iff {}}l} g(r)>0 & 0=rf(r)<g(r) \\ & r\left( r^{L}-\sum_{i=1}^{L}c_{i}r^{L-i}-m\right) < r^{L+1}-\sum_{i=1}^{L}c_{i}r^{L+1-i}-m \\ & m<rm \\ & r>1. \end{array} \end{equation} Since $f(1)=1-\sum_{i=1}^{L}c_{i}-m<0$, we have $r>1$; thus the inequality always holds, and so $r > q$, as desired. \end{proof} \begin{conjecture}\label{finalsegundamitad} Let $\lambda_{L} $ be the principal root of $x^{L}-x^{L-1}-N_{L}-1$. If the sequence generated by $[c_1,\ldots, c_{L}]$ is incomplete with $\sum_{i=1}^{L}c_{i}\leq \left\lceil L( L+1)/ {4}\right\rceil+2$, then its principal root is at least $\lambda_{L} $. \end{conjecture} We present a partial proof, which addresses all cases except what is denoted as Sub-Case 2. \begin{proof}[Partial proof.] We use induction. For $L=2$, $N_{L}=\left\lceil 2\cdot 3/{4}\right\rceil=2$, and so the coefficients $[c_1,c_2]$ fulfilling the requirement are those with $c_1+c_2\leq 4$. The incomplete sequences of this form have coefficients $[2,1]$, $[2,2]$, $[1,3]$, and $[3,1]$. Checking each case directly, we see that their principal roots are $1+\sqrt{2}\approx 2.414$, $1+\sqrt{3}\approx 2.732$, $(1+\sqrt{13})/2\approx 2.303$, and $(3+\sqrt{13})/2\approx 3.303$, respectively. Among these roots, the root of $[1,3]=[1,N_2+1]$ is the minimum; thus, the statement holds for the base case. Now, suppose the statement holds for some value of $L\geq 2$. We show that it holds for $L+1$ as well. Let $[c_1,\ldots, c_{L},c_{L+1}]$ be an incomplete sequence with $\sum_{i=1}^{L+1}c_{i}\leq \left\lceil (L+1)(L+2) /{4}\right\rceil+2$.
\begin{enumerate}[label={\textbf{Case \arabic*:}}, leftmargin=*] \item $\sum_{i=1}^{L}c_{i}< N_{L}+2$. Under the condition above, the following two sub-cases arise. \begin{enumerate}[label={\textbf{Sub-Case \arabic*:}}, leftmargin=*] \item $[c_1,\ldots, c_{L}]$ is incomplete. If the sequence is incomplete, then, by our inductive hypothesis, since $\sum_{i=1}^{L}c_{i}\leq N_{L}+2$, we must have that the principal root $r$ of $[c_1,\ldots, c_{L}]$ is greater than or equal to $\lambda _{L}$. Hence, by Lemma~\ref{addingcoeffdecreasesroot}, since the principal root $q$ of $[c_1,\ldots, c_{L+1}]$ satisfies $q>r$, we have that $[c_1,\ldots, c_{L+1}]$ has principal root $q>\lambda_{L} $. Finally, by Lemma~\ref{rootbounddecreases}, we know that $\lambda _{L}>\lambda _{L+1}$. Therefore, we have $q>r \geq \lambda _{L}>\lambda _{L+1}$, and the statement holds in this case. \item $[c_1,\ldots, c_{L}]$ is complete: The proof of this sub-case has not been found yet, which is why the statement remains a conjecture. \end{enumerate} \item $\sum_{i=1}^{L}c_{i}\geq N_{L}+2$. If this inequality holds, the transformations developed in Lemmas~\ref{PLSRoots} and \ref{DecreaseLastDecreasesRoot} imply that $[c_1,\ldots, c_{L}]$ has principal root at least $\lambda_{L}$ (cf.\ Lemma~\ref{minprincipalroot}). Applying Lemma~\ref{addingcoeffdecreasesroot}, we see that the principal root of $[c_1,\dots, c_{L+1}]$ is strictly greater than $\lambda_{L}>\lambda_{L+1}$, and thus the statement holds in this case.\qedhere \end{enumerate} \end{proof} The results in this section provide us with an efficient way to verify completeness for PLRS's. Namely, for a sequence $[c_1, \ldots, c_L]$, we may evaluate its characteristic polynomial $p(x)$ at the points $B_L$ and $2$, which provides the following information. \begin{itemize} \item If $p(2)<0$, the sequence is incomplete. \item If $p(B_L)>0$, the sequence is complete.
\item If $p(2)\geq 0$ and $p(B_L)\leq 0$, then the principal root of the sequence lies in the interval $[B_L, 2]$, and so further inquiry is necessary to determine whether the sequence is complete. \end{itemize} Computationally, evaluating a polynomial of degree $L$ is an $\mathcal{O}(L^2)$ problem. Generating a minimum of $2L$ terms of the sequences and checking Brown's criterion for each, on the other hand, is an $\mathcal{O}(2^L)$ problem. Thus, this method---even if inconclusive---provides a fast and efficient method to categorize sequences, and narrows our search to the interesting interval $[B_L, 2]$, in which both complete and incomplete sequences arise. \subsection{Denseness of incomplete roots} Having narrowed our search for principal roots of complete and incomplete sequences to the interval $[B_L, 2]$, it is only natural to ask how the roots of these sequences are distributed throughout the interval. \begin{lemma}\label{rootsorder} For fixed $L>2$ and $k>0$, define the three polynomials $f(x)=x^{L}-x^{L-1}-k$, $g(x)=x^{L}-x^{L-1}-\left( k+1 \right) $, and $h(x)=x^{L}-x^{L-1}-\left( k+2 \right)$. Let $q,r$, and $s$ be the sole positive roots of $f,g$, and $h$ respectively, so that $1<q<r<s$. Then, \begin{equation} r-q>s-r. \end{equation} \end{lemma} \begin{proof} From the definition of $f,g$, and $h$, we see that \begin{align} q^{L}-q^{L-1}&= k,\nonumber\\ r^{L}-r^{L-1}&= k+1,\nonumber\\ s^{L}-s^{L-1} &=k+2.\label{diferenciasderoots} \end{align} Now, define the polynomial $p(x)=x^{L}-x ^{L-1} $. Taking the first and second derivatives of $p$, we see $p'(x)=Lx^{L-1}-\left( L-1 \right) x^{L-2}$, and $p''(x)=L\left( L-1 \right) x^{L-2}-\left( L-1 \right) \left( L-2 \right) x^{L-3}$. In particular, for all $x\geq 1$, $p(x)\geq 0,p'(x)>0$, and $p''(x)>0$. Thus, $p(x)$ is increasing and convex on $\left( 1,\infty \right) $. By \eqref{diferenciasderoots}, we have $p(r)-p(q)=p(s)-p(r)$. Thus, since $s>r>q>1$, we conclude $r-q>s-r$, as desired.
\end{proof} \begin{theorem}\label{denseness} For any $L\geq 2$, let $R_{L}$ be the set of roots of all incomplete PLRS's generated by $L$ coefficients. Then, for any $\varepsilon >0$, there exists an $M$ such that for all $L>M$ and for any $\varepsilon $-ball $B_{\varepsilon }\subset \left( 1,2 \right) $, $B_{\varepsilon }\cap R_{L}\neq \varnothing$. \end{theorem} \begin{proof} Let $\varepsilon >0$ be arbitrary. By Lemma~\ref{rootsgotozero}, we may fix an $M$ such that for all $L>M$, $1<\lambda _{L}<1+\varepsilon $. From our previous work, we know that the sequence of length $L$ that has coefficients $[1,0,\ldots , 0,\left\lceil L\left( L+1 \right) /4 \right\rceil +1]$ is incomplete, as is any sequence of the form $[1,0,\ldots , 0,k]$, with $k \geq \left\lceil L\left( L+1 \right) /4 \right\rceil +1$. Note that $\lambda _{L}$ is the root of $[1,0,\ldots , 0,\left\lceil L\left( L+1 \right) /4 \right\rceil +1]$. Since $\lambda _{L}<1+\varepsilon $, it is clear that the root $\alpha $ of $[1,0,\ldots , 0,\left\lceil L\left( L+1 \right) /4 \right\rceil]$ fulfills $1<\alpha <\lambda _{L}$, and so $\lambda _{L}-\alpha <\varepsilon $. Now, we know the sequence $[1,0,\ldots , 0,2^{L-1}]$ has a root of size exactly 2. Applying Lemma~\ref{rootsorder} iteratively, any two sequences $[1,0,\ldots , 0,k]$, $[1,0,\ldots , 0,k+1]$ with $k\geq \left\lceil L\left( L+1 \right) /4 \right\rceil $ and roots $q,r$ must fulfill $r-q\leq \lambda _{L}-\alpha <\varepsilon $. Thus, any two consecutive sequences $[1,0,\ldots , 0,k]$, $[1,0,\ldots , 0,k+1]$ with $k\geq \left\lceil L\left( L+1 \right) /4 \right\rceil +1$ have roots with separation less than $\varepsilon $, and so the set of roots of sequences of the form $[1,0,\ldots , 0,k]$ with $\left\lceil L\left( L+1 \right) /4 \right\rceil +1\leq k\leq 2^{L-1}$ intersects any $\varepsilon $-ball of $\left( 1,2 \right) $. As this is a subset of $R_{L}$, we are done.
\end{proof} \begin{corollary}\label{densenesscoro} The set of principal roots of incomplete sequences $R=\bigcup_{L=2}^{\infty }R_{L}$ is dense in $\left( 1,2 \right) $. \end{corollary} We conjecture that a similar result can be shown about complete roots; however, such a proof has proven more difficult, as examples of families of complete sequences are more fragile. \section{Open questions} Here are conjectures and several other questions that future research could investigate. \begin{itemize} \item Our results often focus on the final coefficient, such as in Theorems~\ref{decreaseLastCoe} and \ref{Adding M Theorem}. Do these results have any analogues for coefficients that are not the last? \item Can Theorem~\ref{thm:gbon} be extended to address what happens when $g < k$? \item Are there any other interesting families of PLRS's that can be fully characterized, with entries other than $0$ and $1$ among the non-final coefficients? \item Are Conjectures~\ref{addFrontOnes} and \ref{2Lcrit} true? \item Is the missing component of the proof of Conjecture~\ref{final}, i.e., Conjecture~\ref{finalsegundamitad}, true? \end{itemize} \section{Acknowledgments} This research was conducted as part of the SMALL 2020 REU at Williams College. The authors were supported by NSF Grants DMS1947438 and DMS1561945, Williams College, Yale University, and the University of Rochester. The authors would like to thank the organizers of the 19th International Fibonacci Conference, the 2020 Young Mathematicians Conference, and CANT 2021 for the opportunity to present this work and receive feedback in earlier stages.
https://arxiv.org/abs/2112.12559
Multigrid solvers for isogeometric discretizations of the second biharmonic problem
We develop a multigrid solver for the second biharmonic problem in the context of Isogeometric Analysis (IgA), where we also allow a zero-order term. In a previous paper, the authors have developed an analysis for the first biharmonic problem based on Hackbusch's framework. This analysis can only be extended to the second biharmonic problem if one assumes uniform grids. In this paper, we prove a multigrid convergence estimate using Bramble's framework for multigrid analysis without regularity assumptions. We show that the bound for the convergence rate is independent of the scaling of the zero-order term and the spline degree. It only depends linearly on the number of levels, thus logarithmically on the grid size. Numerical experiments are provided which illustrate the convergence theory and the efficiency of the proposed multigrid approaches.
\section{Introduction} We consider multigrid methods for biharmonic problems discretized by Isogeometric Analysis (IgA). In particular, we consider the following model problem: Given a bounded domain $\Omega\subset \mathbb R^d$, $d\in\{2,3\}$, with Lipschitz boundary $\partial\Omega$, a parameter $\beta \geq 0$ and sufficiently smooth functions $f$, $g_1$, and $g_2$, find a function $u$ such that \begin{align} \label{eq:probStrong} \begin{split} \beta u + \Delta^2 u &= f \quad \text{in} \quad \Omega,\\ u &= g_1 \quad \text{on} \quad \partial\Omega,\\ \Delta u &= g_2 \quad \text{on} \quad \partial\Omega \end{split} \end{align} holds in a variational sense. For $\beta = 0$, this problem is known as the \textit{second biharmonic problem}, which is of interest for plate theory (cf. \cite{ciarlet2002finite}) and Stokes streamline equations (cf. \cite{girault2012finite}). Problems with $\beta > 0$ are of particular interest in the context of optimal control problems, where the constraint is a second order elliptic operator. The optimality systems associated to these optimal control problems can be preconditioned robustly using preconditioners that rely on solving~\eqref{eq:probStrong}, see \cite{MarNieNor17,SogZul18,beigl2019robust,mardal2020robust}. The problem~\eqref{eq:probStrong} is obtained when considering the full observation; if one considers an optimal control problem with limited observation, one would obtain a similar problem, where the mass term $\beta u$ is multiplied with the characteristic function for the observation domain. We derive a standard variational formulation of the model problem, which lives in the Sobolev space $H^2(\Omega)$. For the discretization, we use Isogeometric Analysis (IgA) since it easily allows for $H^2$-conforming discretizations. Particularly, we consider a discretization based on tensor product B-splines of some degree $p>1$ and maximum smoothness, i.e., $p-1$ times continuously differentiable. 
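For orientation (using notation introduced only here, not fixed by the references), the univariate discretization space on the parameter interval $(0,1)$ with uniform knot spacing $h=1/n$ can be written as
\begin{equation*}
S_{p,h}(0,1) := \left\{ v\in C^{p-1}(0,1) \ : \ v|_{\left((i-1)h,\, ih\right)}\in \mathbb{P}_{p} \ \text{ for }\ i=1,\ldots,n \right\},
\end{equation*}
where $\mathbb{P}_{p}$ denotes the space of polynomials of degree at most $p$; the discretization space on $\Omega$ is then obtained from the tensor product of $d$ such spaces by composition with the inverse of the geometry function.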
For the derivation of the multigrid solver, we set up a hierarchy of grids as obtained by uniform refinement. Since we keep spline degree and spline smoothness fixed, we obtain nested spaces. Concerning the choice of the smoother, there are many possibilities. We are interested in a smoother that yields a $p$-robust multigrid method. The first $p$-robust multigrid solvers were based on the \emph{boundary corrected mass smoother} \cite{hofreither2017robust} and the \emph{subspace corrected mass smoother} \cite{hofreither2016robust}. Both have been formulated for the Poisson problem. Since the subspace corrected mass smoother is more flexible and has proven itself more efficient in practice, we restrict ourselves to that smoother. The multigrid solvers with subspace corrected mass smoother have been extended to the first biharmonic problem in \cite{sogn2019robust} and to the second and third biharmonic problem in the thesis \cite{sogn2018schur}. The convergence estimates are shown using the standard splitting of the analysis into approximation property and smoothing property, as proposed by Hackbusch, cf.~\cite{hackbusch2013multi}. The theory in all of these papers requires that the grids are uniform since they have been based on the $p$-robust approximation error estimates from~\cite{takacs2016approximation}, which are valid only in this case. Since then, newer $p$-robust approximation error estimates, see \cite{sande2020explicit, sande2019sharp}, have been proposed, which do not require uniform grids. Using these new estimates, it is straightforward to relax this assumption and to show analogous results for the Poisson problem as well as the first biharmonic problem for quasi-uniform grids. However, this is not straightforward for the second biharmonic problem, since the proof requires a certain commutativity property (cf.~\cite[Lemma~9.2]{sogn2018schur}), which is only valid in case of uniform grids. In this paper, we go another way.
We base the analysis on the framework introduced by Bramble et al., cf.~\cite{bramble1991convergence,bramble2018multigrid}. This allows us to drop the requirement that the grids are uniform. While this analysis could also be performed for other kinds of boundary conditions, like the first biharmonic problem, we restrict ourselves to the second biharmonic problem since it has previously turned out to be the more challenging one. For this setting, we prove a multigrid convergence estimate which is robust with respect to the spline degree $p$ and which only depends logarithmically on the grid size $h$. Moreover, we show that the convergence is robust in the parameter $\beta\ge0$. This analysis is motivated by the mentioned optimal control problem. Such parameter-robust multigrid solvers are also known for the Poisson problem, see \cite{olshanskii2000convergence} for an analysis based on Hackbusch's framework. There, the authors also provide a regularity result for the corresponding partial differential equation (PDE), which is based on standard results for the Poisson problem. In our case, we do not need to do that since Bramble's analysis is not based on any regularity assumptions. In the numerical experiments, one can observe that the convergence of a multigrid solver with subspace corrected mass smoother degrades if the geometry gets distorted. While this is also true for the Poisson problem, this dependence is significantly amplified for the biharmonic problem. The reason for the geometry dependence of the convergence rates is that the subspace corrected mass smoother is based on the tensor product structure of the spline space. This tensor product structure is distorted by the geometry mapping. So, the contributions of the geometry function are ignored when setting up the smoother. We aim to overcome this problem by considering a hybrid smoother that combines the proposed smoother with Gauss-Seidel sweeps, see also~\cite{sogn2019robust,sogn2018schur}. 
Alternative smoothers based on overlapping multiplicative Schwarz techniques have been considered in \cite{de2020robust,mardal2020robust}. Both approaches give good numerical results for the biharmonic problem. However, there is no rigorous, $p$-robust convergence theory available for these methods. It is worth mentioning that, as an alternative for solving biharmonic problems on the primal form, various kinds of mixed or non-conforming formulations have been developed, cf. \cite{doi:10.1137/0726062, Zhang:Xu, hanisch1993multigrid, rafetseder2018decomposition, chen2015multigrid}. The remainder of the paper is organized as follows. We introduce IgA, the biharmonic model problem in its variational form and its discretization in Section~\ref{sec:prelims}. In Section~\ref{sec:MG}, the multigrid method is introduced and we state sufficient conditions for its convergence. We develop the approximation error estimates needed for the convergence estimates in Section~\ref{Approx}. The choice of the smoother, the smoothing properties and the resulting multigrid convergence results are addressed in Section~\ref{sec:smoothers}. Finally, we provide numerical results in Section~\ref{sec:numerical}. \section{Model problem and its discretization} \label{sec:prelims} \subsection{The biharmonic model problem} Following the usual design principles of IgA, we assume that the computational domain $\Omega\subset \mathbb R^d$ has a Lipschitz boundary $\partial \Omega$ and that it is parameterized by a geometry function \begin{equation} \nonumber \textbf G: \widehat \Omega =(0,1)^d\rightarrow \Omega =\textbf G(\widehat \Omega), \end{equation} whose third weak derivatives are almost everywhere uniformly bounded. 
The parameterization has the property
\begin{equation}
\label{eq:GeoMapCond}
\|\nabla^r \mathbf{G}\|_{L^\infty(\widehat \Omega)} \leq c_1 \quad \text{for}\quad r=1,2,3
\quad \text{and} \quad
\|\left(\nabla \mathbf{G}\right)^{-1}\|_{L^\infty(\widehat \Omega)} \leq c_2,
\end{equation}
for some constants $c_1$ and $c_2$. After homogenization, the variational formulation of the model problem \eqref{eq:probStrong} reads as follows. Given $f\in L^2(\Omega)$ and $\beta\in\mathbb{R}$ with $\beta\geq 0$, find $u\in V:=H^2(\Omega)\cap H^1_0(\Omega)$ such that
\begin{equation}\label{eq:problem1}
\beta(u, v)_{L^2(\Omega)}+(\Delta u, \Delta v)_{L^2(\Omega)}= (f, v)_{L^2(\Omega)} \quad \forall \, v \in V.
\end{equation}
Here and in what follows, $L^2(\Omega)$ and $H^r(\Omega)$ denote the standard Lebesgue and Sobolev spaces with standard inner products $(\cdot,\cdot)_{L^2(\Omega)}$, $(\cdot,\cdot)_{H^r(\Omega)}$ and norms $\|\cdot \|_{L^2(\Omega)}$, $\|\cdot \|_{H^r(\Omega)}$. $H^1_0(\Omega)$ is the standard subspace of $H^1(\Omega)$ containing the functions with vanishing trace. On $V$, we define the bilinear form $\inner{\cdot}{\cdot}_{\mathcal{B}}$ via
\[
\inner{u}{v}_{\mathcal{B}} := (\Delta u, \Delta v)_{L^2(\Omega)} \quad\forall \, u,v\in V,
\]
which is an inner product since we have the Poincaré-like inequality
\begin{equation}
\label{eq:2FIne}
\|u\|_{H^2(\Omega)}\leq c_{\Omega}\|\Delta u\|_{L^2(\Omega)} = c_{\Omega} \|u\|_{\mathcal B} \quad \forall \, u\in V,
\end{equation}
where $c_{\Omega}$ is a constant that depends only on the shape of $\Omega$, cf.~\cite{mardal2020robust}. Using the substitution rule for integration and the chain rule for differentiation, \eqref{eq:problem1} can be expressed in terms of integrals on the parameter domain $\widehat\Omega$. In IgA, this is usually done in order to simplify the evaluation of the integrals using quadrature rules.
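The substitution rule just mentioned can be checked numerically. The following sketch uses a hypothetical univariate geometry map $G(\widehat x) = (\widehat x^2+\widehat x)/2$ (an illustrative choice, not taken from this paper) and verifies that the physical $L^2$ inner product equals the pulled-back integral weighted by the Jacobian:

```python
import numpy as np

# Hypothetical 1D geometry map G: (0,1) -> (0,1) with G(x) = (x^2 + x)/2,
# so G'(x) = x + 1/2 > 0 and G is invertible.
G  = lambda x: 0.5 * (x**2 + x)
dG = lambda x: x + 0.5

u = lambda y: np.sin(np.pi * y)   # a function on the physical domain
u_hat = lambda x: u(G(x))         # its pull-back u o G to the parameter domain

# Gauss-Legendre quadrature, mapped from (-1,1) to (0,1)
nodes, weights = np.polynomial.legendre.leggauss(50)
x, w = 0.5 * (nodes + 1.0), 0.5 * weights

# (u,u)_{L^2(Omega)} computed directly on the physical domain ...
physical = np.sum(w * u(x)**2)
# ... and via substitution on the parameter domain, weighted by |det grad G|
pulled_back = np.sum(w * u_hat(x)**2 * dG(x))

print(abs(physical - pulled_back))  # agreement up to quadrature accuracy
```

The same identity, with $|\det \nabla \mathbf G|$ as the weight, is what an IgA assembly routine evaluates by quadrature on the parameter domain.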
Besides these inner products, there are also standard inner products for the parameter domain, like $(\cdot,\cdot)_{L^2(\widehat\Omega)}$ and $(\cdot,\cdot)_{\widehat{\mathcal{B}}}$, where the latter is given by \[ \inner{\widehat u}{\widehat v}_{\widehat{\mathcal{B}}} := (\Delta \widehat u, \Delta \widehat v)_{L^2(\widehat \Omega)} \quad\forall \, \widehat u,\widehat v\in \widehat V:= H^2(\widehat \Omega)\cap H^1_0(\widehat\Omega). \] Also for the parameter domain $\widehat\Omega$, the result~\eqref{eq:2FIne} holds. So, we know \begin{equation} \nonumber \|u\|_{H^2(\widehat \Omega)}\leq c_{\widehat \Omega}\|\Delta u\|_{L^2(\widehat\Omega)} = c_{\widehat\Omega} \|u\|_{\widehat{\mathcal B}} \quad \forall \, u\in \widehat V. \end{equation} We know (cf.~\cite{sogn2018schur}) that there exist constants $\underline{c}_M$, $\overline{c}_M$, $\underline{c}_B$ and $\overline{c}_B$ only depending on the constants $c_1$, $c_2$ and the shape of $\Omega$ such that \begin{equation} \label{eq:geoEquiv} \begin{split} \underline{c}_M\, (u,u)_{L^2(\Omega)} &\leq (\widehat u,\widehat u)_{L^2(\widehat{\Omega})} \leq \overline{c}_M\, (u,u)_{L^2(\Omega)} \quad \text{and}\\ \underline{c}_B\, (u,u)_{\mathcal B} & \leq (\widehat u,\widehat u)_{\widehat{\mathcal{B}}} \leq \overline{c}_B\, (u,u)_{\mathcal B} \end{split} \end{equation} for all $u\in V$ with $\widehat u = u\circ \bm{G}\in\widehat V$. We define a simplified bilinear form $\left(\cdot, \cdot\right)_{\bar{\mathcal{B}}}$ as the inner product obtained by removing the cross terms from the inner product $\left(\cdot, \cdot\right)_{\widehat{\mathcal{B}}}$, that is, \begin{equation} \nonumber \left(\widehat u, \widehat v\right)_{\bar{\mathcal{B}}} := \sum^d_{k=1}\left(\partial_{x_kx_k} \widehat u , \partial_{x_kx_k} \widehat v \right)_{L^2(\widehat{\Omega})} \quad\forall \, \widehat u,\widehat v\in \widehat V. 
\end{equation}
Here and in what follows, $\partial_{x} := \frac{\partial}{\partial x}$ and $\partial_{xy} := \partial_{x} \partial_{y}$ and $\partial_{x}^r := \frac{\partial^r}{\partial x^r}$ denote partial derivatives. The original bilinear form and the simplified bilinear form are spectrally equivalent, which implies that also the simplified bilinear form is an inner product.
\begin{lemma} \label{lemma:equivB}
The inner products $(\cdot,\cdot)_{\widehat{\mathcal{B}}}$ and $(\cdot,\cdot)_{\bar{\mathcal{B}}}$ are spectrally equivalent, that is,
\begin{equation} \nonumber
\left(\widehat u, \widehat u\right)_{\bar{\mathcal{B}}}
\leq \left(\widehat u, \widehat u\right)_{\widehat{\mathcal{B}}}
\leq d\left(\widehat u, \widehat u\right)_{\bar{\mathcal{B}}}
\quad \forall \, \widehat u\in \widehat V .
\end{equation}
\end{lemma}
\begin{proof}
From \cite{grisvard2011elliptic, grisvard1992singularities}, it follows that $\|\Delta \widehat u\|_{L^2(\widehat\Omega)} = \|\nabla^2 \widehat u\|_{L^2(\widehat\Omega)}$ for all $\widehat u\in \widehat V$. Using this, we obtain
\begin{align*}
\|\widehat u\|_{\widehat{\mathcal{B}}}^2
& = \|\Delta \widehat u\|_{L^2(\widehat\Omega)}^2
= \|\nabla^2\widehat u\|_{L^2(\widehat\Omega)}^2
= \underbrace{ \sum_{k=1}^d \|\partial_{x_kx_k}\widehat u\|_{L^2(\widehat\Omega)}^2 }_{\displaystyle =\|\widehat u\|_{\bar{\mathcal{B}}}^2}
+ \underbrace{ \sum_{k=1}^d\sum_{l\in\{1,\ldots,d\}\backslash\{k\}} \|\partial_{x_kx_l}\widehat u\|_{L^2(\widehat\Omega)}^2 }_{\displaystyle \ge 0},
\end{align*}
which shows the first inequality.
Using the Cauchy-Schwarz inequality and $ab \le \tfrac12( a^2+ b^2)$, we obtain
\begin{align*}
\|\widehat u\|_{\widehat{\mathcal{B}}}^2
&= \sum_{k=1}^d\sum_{l=1}^d \left(\partial^2_{x_k} \widehat u,\partial^2_{x_l} \widehat u\right)_{L^2(\widehat\Omega)}
\le \frac{1}{2} \sum_{k=1}^d\sum_{l=1}^d \left( \left\|\partial_{x_k}^2 \widehat u\right\|_{L^2(\widehat\Omega)}^2 + \left\|\partial_{x_l}^2 \widehat u\right\|_{L^2(\widehat\Omega)}^2\right)
= d \|\widehat u\|_{\bar{\mathcal{B}}}^2,
\end{align*}
which shows the second inequality.
\end{proof}
\begin{remark}
An analogous result holds for the physical domain $\Omega$, since the geometry function satisfies condition \eqref{eq:GeoMapCond}. In this case, the constants also depend on the shape of $\Omega$.
\end{remark}
\subsection{Discretization}
We consider a discretization using tensor product B-splines in the context of IgA. We start by defining these splines on the parameter domain $\widehat \Omega$. Let $C^k(0,1)$ denote the space of all functions mapping $(0,1)\rightarrow \mathbb{R}$ that are $k$ times continuously differentiable and let $\mathcal{P}_p$ be the space of polynomials of degree at most $p$. For any sequence of grid points $\bm{\tau}:= (\tau_0,\ldots,\tau_{N+1})$ with
\[
0 = \tau_0 < \tau_1 <\cdots < \tau_N < \tau_{N+1} = 1,
\]
we define the space $S_{p,\bm\tau}$ of splines of degree $p$ with maximum smoothness by
\[
S_{p,\bm\tau} := \left\lbrace v\in C^{p-1}(0,1) : v|_{(\tau_j,\tau_{j+1})}\in \mathcal{P}_p,\; j=0,1,\ldots,N \right\rbrace.
\]
The sizes of the largest and the smallest interval are denoted by
\begin{equation} \nonumber
h_{\bm\tau} := \max_{j=0,\ldots,N}(\tau_{j+1}-\tau_j)
\quad\text{and}\quad
h_{\bm\tau,\mathrm{min}} :=\min_{j=0,\ldots,N}(\tau_{j+1}-\tau_j),
\end{equation}
respectively.
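As a small illustration of these definitions, the following sketch computes $h_{\bm\tau}$ and $h_{\bm\tau,\mathrm{min}}$ for a hypothetical knot sequence, together with the standard dimension count $\dim S_{p,\bm\tau} = N+p+1$ for maximal smoothness (the helper functions are ours, not from the paper):

```python
import numpy as np

def grid_sizes(tau):
    """Largest and smallest knot span h_tau and h_tau_min for grid points tau."""
    d = np.diff(np.asarray(tau, dtype=float))
    return d.max(), d.min()

def dim_spline_space(p, tau):
    """dim S_{p,tau} for maximal smoothness C^{p-1}: (N+1) polynomial pieces of
    dimension p+1, minus p continuity conditions at each of the N interior
    grid points, i.e. N + p + 1."""
    N = len(tau) - 2
    return (N + 1) * (p + 1) - N * p

# grid points 0 = tau_0 < ... < tau_{N+1} = 1 with N = 3 interior points
tau = [0.0, 0.2, 0.5, 0.9, 1.0]
h, h_min = grid_sizes(tau)
print(h, h_min, dim_spline_space(3, tau))
```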
For the parameter domain, we define a spline space by tensorization, which we then transfer to the physical domain using the pull-back principle: for given sequences of grid points $\bm{\tau}_{\ell,1},\ldots,\bm{\tau}_{\ell,d}$, we define the spaces
\[
\widehat V_\ell := \left( \bigotimes^d_{i=1} S_{p,\bm{\tau}_{\ell,i}} \right) \cap H^1_0(\widehat \Omega)\subset \widehat V
\quad\text{and}\quad
V_\ell := \{ f\circ \textbf G^{-1} : f\in \widehat V_\ell \}\subset V.
\]
Here and in what follows, the tensor product space $\bigotimes^d_{i=1} S_{p,\bm{\tau}_{\ell,i}}$ is the space of all linear combinations of functions of the form $v(x_1,\ldots,x_d)=v_1(x_1)\cdots v_d(x_d)$ with $v_i\in S_{p,\bm{\tau}_{\ell,i}}$. The spline degree $p$ could be different for each of the spatial directions. For notational convenience, we restrict ourselves to a uniform choice of the degree. The corresponding maximum and minimum grid sizes are denoted by
\[
h_\ell := \max_{i=1,\ldots,d} h_{\bm{\tau}_{\ell,i}}
\quad\text{and}\quad
h_{\ell,\mathrm{min}} := \min_{i=1,\ldots,d} h_{\bm{\tau}_{\ell,i},\mathrm{min}}.
\]
For the multigrid method, we set up a sequence of nested spline spaces
\[
V_0 \subset V_1 \subset \cdots \subset V_{L} \subset V
\quad\text{with}\quad
h_0>h_1> \cdots >h_L>0
\]
based on a sequence of nested grids. We assume that all grids are quasi-uniform, that is, there is a constant $c_q$ such that
\begin{equation}
\label{eq:quasiuniform}
h_\ell \leq c_q \,h_{\ell,\mathrm{min}}
\quad \text{for} \quad \ell = 0,1,\ldots, L.
\end{equation}
We also assume that the ratio of the grid sizes of any two consecutive grids is bounded, that is, there is a constant $c_r$ such that
\begin{equation}
\label{eq:assGrids}
h_{\ell-1} \leq c_r\, h_{\ell} \quad \text{for} \quad \ell = 1,\ldots, L.
\end{equation}
If the grids are obtained by uniform refinements of the coarsest grid, then this condition is naturally satisfied with $c_r=2$.
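For a concrete hierarchy obtained by dyadic refinement of a non-uniform coarsest grid, both constants can be computed explicitly; the sketch below (our own helper functions, with an illustrative coarsest grid) confirms the quasi-uniformity constant and the consecutive grid-size ratio of two:

```python
import numpy as np

def refine(tau):
    """Dyadic refinement: insert the midpoint of every knot span (grids stay nested)."""
    tau = np.asarray(tau, dtype=float)
    return np.sort(np.concatenate([tau, 0.5 * (tau[:-1] + tau[1:])]))

h_max = lambda tau: np.diff(tau).max()
h_min = lambda tau: np.diff(tau).min()

# non-uniform (but quasi-uniform) coarsest grid, refined L = 3 times
grids = [np.array([0.0, 0.3, 0.6, 1.0])]
for _ in range(3):
    grids.append(refine(grids[-1]))

# quasi-uniformity constant c_q and consecutive grid-size ratio c_r
c_q = max(h_max(t) / h_min(t) for t in grids)
c_r = max(h_max(grids[l - 1]) / h_max(grids[l]) for l in range(1, len(grids)))
print(c_q, c_r)
```

Dyadic refinement halves every knot span, so the ratio of the knot spans within each grid (here $4/3$) is preserved and the ratio of consecutive grid sizes is exactly two.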
By applying a Galerkin discretization, we obtain the following discrete problem: Find $u_\ell \in V_\ell$ such that
\begin{equation}
\label{eq:probDisc}
\beta (u_\ell,v_\ell)_{L^2(\Omega)}+ (u_\ell,v_\ell)_{\mathcal{B}} = (f, v_\ell)_{L^2(\Omega)}\quad \forall \, v_\ell \in V_\ell.
\end{equation}
By fixing a basis for the space $V_\ell$, we can rewrite \eqref{eq:probDisc} in matrix-vector notation as
\begin{equation}
\label{eq:probMat}
(\beta \mathcal{M}_\ell+\mathcal{B}_\ell)\uv{\ell} = \underline{f}_{\ell},
\end{equation}
where $\mathcal{B}_\ell$ is the biharmonic stiffness matrix, $\mathcal{M}_\ell$ is the mass matrix, $\uv{\ell}$ is the vector representation of the corresponding function $u_\ell$ with respect to the chosen basis and the vector $\underline{f}_{\ell}$ is obtained by testing the right-hand side functional $(f, \cdot)_{L^2(\Omega)}$ with the basis functions.
\begin{notation}
\label{notation:c}
Throughout this paper, $c$ is a generic positive constant that is independent of $h$ and $p$, but may depend on $d$, the constants $c_1$, $c_2$, $c_q$ and $c_r$, and the shape of $\Omega$.
\end{notation}
For any two square matrices $A,B\in\mathbb R^{n\times n}$, $A\le B$ means that
\[
\underline x^T A \underline x \leq \underline x^T B \underline x \quad \forall \, \underline x\in\mathbb{R}^n.
\]
\section{The multigrid solver}
\label{sec:MG}
In this section, we present an abstract multigrid method and give a convergence theorem that is based on the analysis by Bramble et al., see \cite[Theorem 1]{bramble1991convergence}.
\subsection{The multigrid framework}
Let us assume that we have nested spaces $V_0\subset V_1 \subset \cdots \subset V_L\subset V$. Let $I^{\ell}_{\ell-1}$ be the matrix representation of the canonical embedding from $V_{\ell-1}$ into $V_{\ell}$ and let the restriction matrix $I^{\ell-1}_{\ell}$ be its transpose, that is, $I^{\ell-1}_{\ell} := (I^{\ell}_{\ell-1})^T$.
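For the lowest-order case $p=1$ (hat functions on uniform nested grids, an illustrative stand-in for the spline spaces of degree $p\ge 3$ used later), the embedding matrix and its transpose can be written down explicitly; the helper functions below are our own:

```python
import numpy as np

# Prolongation matrix for p = 1 (hat functions) on nested uniform grids of
# (0,1) with homogeneous boundary conditions: a coarse hat function equals
# 1/2 * (left fine neighbor) + 1 * (own node) + 1/2 * (right fine neighbor).
def prolongation(n_coarse):
    n_fine = 2 * n_coarse + 1          # interior nodes after dyadic refinement
    P = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
    return P

def hat_basis(n, x):
    """Values of the n interior hat functions on the grid of size h=1/(n+1) at x."""
    h = 1.0 / (n + 1)
    nodes = h * np.arange(1, n + 1)
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - nodes[None, :]) / h)

n_c = 3                                 # 3 interior coarse nodes, h_coarse = 1/4
P = prolongation(n_c)                   # matrix representation of the embedding
R = P.T                                 # restriction: transpose of prolongation

x = np.linspace(0.0, 1.0, 201)
coarse_vals = hat_basis(n_c, x)                  # coarse basis, evaluated directly
fine_vals = hat_basis(2 * n_c + 1, x) @ P        # same functions via the embedding
assert np.allclose(coarse_vals, fine_vals)
print(R.shape)
```

The assertion confirms that $I^{\ell}_{\ell-1}$ really represents the identity embedding: every coarse basis function is reproduced exactly in the fine basis.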
On each grid level, $\ell=0,\ldots,L$, we have a linear system
\[
\mathcal A_\ell \, \underline u_\ell = \underline f_\ell,
\]
which is obtained by discretizing a symmetric, bounded and coercive bilinear form $a(\cdot,\cdot)$ in the space $V_\ell$ using the Galerkin principle. The matrix induces a norm via $\|\uv{\ell}\|_{\mathcal{A}_\ell} :=(\mathcal A_\ell \uv{\ell},\uv{\ell})^{1/2} = \|\mathcal A_\ell^{1/2} \uv{\ell}\|$. Here and in what follows, $(\cdot,\cdot)$ and $\|\cdot\|$ are the Euclidean scalar product and norm, respectively. In the continuous setting, the bilinear form can be represented by an operator
\[
\mathcal{A}:V\rightarrow V' \quad\text{with}\quad \mathcal A u = a(u,\cdot).
\]
We have $\|u_{\ell}\|_{\mathcal{A}} = \|\uv{\ell}\|_{\mathcal{A}_\ell}$ for all functions $u_\ell\in V_\ell$ with coefficient representation $\uv{\ell}$. For the analysis, we can additionally choose symmetric positive definite matrices $X_\ell$ for all grid levels $\ell=0,1,\ldots,L$, which induce norms via $\|\uv{\ell}\|_{X_{\ell}} = (X_{\ell} \uv{\ell},\uv{\ell})^{1/2} = \|X^{1/2}_{\ell}\uv{\ell}\|$. The norm $\|u_{\ell}\|_{X_{\ell}}$ of a function $u_{\ell}\in V_\ell$ is interpreted as $\|\uv{\ell}\|_{X_{\ell}}$, where $\uv{\ell}$ is the coefficient representation of $u_\ell$. For the abstract framework, we assume that we are given a symmetric and positive definite matrix $\Smo{\ell}$ for every grid level $\ell=1,\ldots,L$, representing the smoother. Later, for the model problem, the bilinear form $a(\cdot,\cdot)$, the matrices $\mathcal{A}_\ell$, $\ell=0,\ldots,L$ and our choice of ${X}_\ell$ will be
\[
a(u,v) = \beta (u,v)_{L^2(\Omega)} + (u,v)_{\mathcal B},
\quad
\mathcal{A}_\ell=\beta\mathcal{M}_\ell+\mathcal{B}_\ell
\quad\text{and}\quad
X_\ell=(\beta+h_\ell^{-4})\mathcal{M}_\ell+\mathcal{B}_\ell.
\]
As smoothers, we will choose a subspace corrected mass smoother, a symmetric Gauss-Seidel smoother and a hybrid smoother in Section~\ref{sec:smoothers}.
Based on these choices, the overall algorithm reads as follows.
\begin{algorithms}
\label{algo:S}
One multigrid cycle, applied to some iterate $\underline u_\ell^{(0)}$ and a right-hand side $\underline f_\ell$, consists of the following steps:
\begin{itemize}
\item Apply $\nu_\ell$ pre-smoothing steps, i.e., compute
\begin{equation}
\label{eq:algo:Smooth}
\underline{u}_\ell^{(i)} = \underline{u}_\ell^{(i-1)} + \Smo{\ell}(\underline{f}_{\ell}-\mathcal{A}_\ell\underline{u}_\ell^{(i-1)})
\quad \mbox{for} \quad i=1,\ldots,\nu_\ell.
\end{equation}
\item Apply the recursive coarse-grid correction, i.e., perform the following steps. Compute the residual and restrict it to the next coarser grid level:
\[
\underline r_{\ell-1} = I_\ell^{\ell-1} (\underline{f}_{\ell}-\mathcal{A}_\ell\underline{u}_\ell^{(\nu_\ell)}).
\]
If $\ell-1=0$, compute the update $\underline q_0:= \mathcal A_0^{-1} \underline r_0$ using a direct solver. Otherwise, compute the update $\underline q_{\ell-1}$ by applying the algorithm $r$ ($r\in\mathbb N:=\{1,2,\ldots\}$) times recursively to the right-hand side $\underline r_{\ell-1}$ and a zero vector as initial guess. Then set
\[
\underline u^{(\nu_\ell+1)}_\ell = \underline u^{(\nu_\ell)}_\ell + I_{\ell-1}^\ell \underline q_{\ell-1}.
\]
\item Apply $\nu_\ell$ post-smoothing steps, i.e., compute $\underline u_\ell^{(i)}$ using~\eqref{eq:algo:Smooth} for $i=\nu_\ell+2,\ldots,2\nu_\ell+1$ to obtain the next iterate $\underline u_\ell^{(2\nu_\ell+1)}$.
\end{itemize}
\end{algorithms}
This abstract algorithm coincides with the algorithm presented in~\cite{bramble1991convergence}.
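A minimal, self-contained sketch of the cycle in Algorithm~\ref{algo:S} is given below. It uses a hypothetical model hierarchy (1D Poisson with piecewise-linear elements and Galerkin coarse-grid matrices) and a damped Richardson smoother, $\tau_\ell I$ with $\tau_\ell = 1/\lambda_{\max}(\mathcal A_\ell)$, as a simple stand-in for the smoothers discussed later; none of these choices are the paper's actual biharmonic spline setting.

```python
import numpy as np

def stiffness(n):
    """P1 finite element stiffness matrix of -u'' on (0,1) with n interior nodes."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

def prolongation(n_coarse):
    """Linear-interpolation embedding of the coarse hat basis into the fine one."""
    P = np.zeros((2 * n_coarse + 1, n_coarse))
    for j in range(n_coarse):
        P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
    return P

def mg_cycle(A, P, u, f, level, nu=2, r=1):
    """One multigrid cycle: nu pre-smoothing steps, r-fold recursive coarse-grid
    correction (r=1: V-cycle, r=2: W-cycle), nu post-smoothing steps."""
    if level == 0:
        return np.linalg.solve(A[0], f)          # direct coarsest-grid solve
    tau = 1.0 / np.linalg.eigvalsh(A[level]).max()
    for _ in range(nu):                          # pre-smoothing
        u = u + tau * (f - A[level] @ u)
    res = P[level].T @ (f - A[level] @ u)        # restricted residual
    q = np.zeros(A[level - 1].shape[0])
    for _ in range(r):                           # recursion, zero initial guess
        q = mg_cycle(A, P, q, res, level - 1, nu, r)
    u = u + P[level] @ q                         # coarse-grid correction
    for _ in range(nu):                          # post-smoothing
        u = u + tau * (f - A[level] @ u)
    return u

L = 4
n = [2 ** (l + 2) - 1 for l in range(L + 1)]     # 3, 7, 15, 31, 63 interior nodes
P = [None] + [prolongation(n[l - 1]) for l in range(1, L + 1)]
A = [None] * (L + 1)
A[L] = stiffness(n[L])
for l in range(L, 0, -1):                        # Galerkin coarse-grid matrices
    A[l - 1] = P[l].T @ A[l] @ P[l]

f = np.ones(n[L])
u = np.zeros(n[L])
for _ in range(25):                              # 25 V-cycles
    u = mg_cycle(A, P, u, f, L)
print(np.linalg.norm(f - A[L] @ u) / np.linalg.norm(f))
```

Even with this weak smoother, the relative residual drops by several orders of magnitude; the abstract convergence theory below quantifies such behavior in terms of approximation and smoothing properties.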
Since each multigrid cycle is linear, its application can be expressed by the matrix $B_\ell^s$, which is recursively given by $B_0^s := \mathcal A_0^{-1}$ and
\[
B_\ell^s := \big( I-(I-\tau_\ell L_\ell^{-1}\mathcal A_\ell)^{\nu_\ell} (I-I_{\ell-1}^\ell B_{\ell-1}^s I_\ell^{\ell-1} \mathcal A_\ell)^r (I-\tau_\ell L_\ell^{-1}\mathcal A_\ell)^{\nu_\ell} \big)\mathcal A_\ell^{-1},
\quad \ell=1,\ldots,L.
\]
The iteration matrix corresponding to one multigrid cycle is given by
\[
I-B_\ell^s \mathcal A_\ell =
(I-\tau_\ell L_\ell^{-1}\mathcal A_\ell)^{\nu_\ell} (I-I_{\ell-1}^\ell B_{\ell-1}^s I_\ell^{\ell-1} \mathcal A_\ell)^r (I-\tau_\ell L_\ell^{-1}\mathcal A_\ell)^{\nu_\ell},
\quad\ell=1,\ldots,L .
\]
\begin{remark}
The integer $r$ determines the cycle type of the algorithm: $r=1$ corresponds to the $V$-cycle and $r=2$ corresponds to the $W$-cycle.
\end{remark}
\subsection{Abstract convergence framework}
The assumptions used to show convergence can be split into two groups: \textit{approximation properties} and \textit{smoother properties}.
\begin{theorem}\label{thrm:abstract}
Let $\lambda_\ell$ be the largest eigenvalue of $X^{-1}_\ell\mathcal{A}_\ell$. Assume that the following estimates hold:
\begin{itemize}
\item \emph{Approximation properties.} There are constants $C_1$ and $C_2$, independent of $\ell$, and linear operators $Q_\ell: V_L \rightarrow V_\ell$ for $\ell = 0,1,\ldots,L$ with $Q_L=I$ such that
\begin{align}
\label{eq:ass:approx1}
\|(Q_{\ell} -Q_{\ell-1})u_{L}\|^2_{X_\ell} &\leq C_1 \lambda_{\ell}^{-1} (u_{L},u_{L})_{\mathcal{A}}
\quad &\text{for}\quad& \ell = 1,\ldots,L,\\
\label{eq:ass:approx2}
(Q_{\ell}u_{L},Q_{\ell}u_{L})_{\mathcal{A}} &\leq C_2 (u_{L},u_{L})_{\mathcal{A}}
\quad &\text{for}\quad& \ell = 0,\ldots,L-1,
\end{align}
for all $u_{L} \in V_L$.
\item \emph{Smoother properties.} We assume that there exists a constant $C_S$, independent of $\ell$, such that
\begin{equation}
\label{eq:B34}
\frac{\|\uv{\ell}\|^2_{X_\ell}}{\lambda_{\ell}} \leq C_S (\Smo{\ell}X_\ell \uv{\ell},\uv{\ell})_{X_\ell}
\quad \forall \, \uv{\ell} \in \mathbb{R}^{\dim V_{\ell}}
\end{equation}
and
\begin{equation}
\label{eq:PositiveSmo}
(\Smo{\ell}\mathcal{A}_\ell \uv{\ell},\uv{\ell})_{\mathcal{A}_\ell} \leq ( \uv{\ell},\uv{\ell})_{\mathcal{A}_\ell}
\quad \forall \, \uv{\ell}\in\mathbb{R}^{\dim V_\ell}
\end{equation}
hold for $\ell=1,\ldots,L$.
\end{itemize}
Then, the estimate
\[
\left((I-B^s_L\mathcal{A}_L)\uv{L},\uv{L}\right)_{\mathcal{A}_L}
\leq
\left(1-\frac{1}{CL}\right) \left(\uv{L},\uv{L}\right)_{\mathcal{A}_L},
\]
holds for all $\uv L \in \mathbb R^{\dim V_L}$, where $C = [1+C_2^{1/2}+(C_SC_1)^{1/2}]^{2}$.
\end{theorem}
For a proof, see~\cite[Theorem~1]{bramble1991convergence}.
\begin{remark}
Condition \eqref{eq:B34} is only required for functions $u_\ell$ in the range of $Q_{\ell} - Q_{\ell-1}$. However, since we do not exploit this, we have stated the stronger condition.
\end{remark}
Now, we provide conditions that guarantee \eqref{eq:B34} and \eqref{eq:PositiveSmo}, which fit our needs better than the original conditions.
\begin{lemma}\label{lem:smo1}
Assume that there exists a constant $C_S$, independent of $\ell$, such that
\begin{equation}
\label{eq:smo1}
(\mathcal{A}_\ell\uv{\ell},\uv{\ell})
\leq\frac{1}{\tau_{\ell}}(L_{\ell}\uv{\ell},\uv{\ell})
\leq \lambda_\ell C_S (X_{\ell}\uv{\ell},\uv{\ell})
\quad \forall \, \uv{\ell}\in \mathbb{R}^{\dim V_{\ell}}
\end{equation}
holds for each $\ell = 1,\ldots, L$. Then, the assumptions~\eqref{eq:B34} and \eqref{eq:PositiveSmo} hold with the same $C_S$.
\end{lemma}
\begin{proof}
We start by showing that the first inequality implies \eqref{eq:PositiveSmo}, i.e., that the smoothing operator $I-\Smo{\ell} \mathcal{A}_\ell$ is nonnegative in $\mathcal{A}_\ell$.
Let $\wv{\ell}\in \mathbb{R}^{\dim V_\ell}$ be an arbitrary vector. Using the Cauchy-Schwarz inequality and the first inequality in \eqref{eq:smo1}, we obtain
\begin{align*}
\tau_\ell(L^{-1}_\ell\wv{\ell},\wv{\ell})
&= \tau_\ell(\mathcal{A}^{1/2}_\ell L^{-1}_\ell\wv{\ell},\mathcal{A}^{-1/2}_\ell\wv{\ell})\\
&\leq \tau_\ell(\mathcal{A}_\ell L^{-1}_\ell\wv{\ell},L^{-1}_\ell\wv{\ell})^{1/2}(\mathcal{A}^{-1}_\ell \wv{\ell},\wv{\ell})^{1/2} \\
&\leq \tau^{1/2}_\ell(L^{-1}_\ell\wv{\ell},\wv{\ell})^{1/2}(\mathcal{A}^{-1}_\ell \wv{\ell},\wv{\ell})^{1/2}.
\end{align*}
It follows that
\[
\tau_\ell(L^{-1}_\ell\wv{\ell},\wv{\ell}) \leq (\mathcal{A}^{-1}_\ell \wv{\ell},\wv{\ell})\quad \forall \, \wv{\ell}\in \mathbb{R}^{\dim V_\ell}.
\]
By substituting $\wv{\ell}$ with $\mathcal{A}_\ell\uv{\ell}$, we get \eqref{eq:PositiveSmo}. Next, we use the Cauchy-Schwarz inequality and the second inequality in \eqref{eq:smo1} to show \eqref{eq:B34}. Let $\wv{\ell}\in \mathbb{R}^{\dim V_{\ell}}$ be arbitrary. We have
\begin{align*}
(X^{-1}_{\ell}\wv{\ell},\wv{\ell})
&= (L_{\ell}^{1/2}X^{-1}_{\ell}\wv{\ell},L^{-1/2}_{\ell}\wv{\ell})
\leq (L_{\ell}X^{-1}_{\ell}\wv{\ell},X^{-1}_{\ell}\wv{\ell})^{1/2}(L^{-1}_{\ell}\wv{\ell},\wv{\ell})^{1/2}\\
&\leq \tau^{1/2}_{\ell}\lambda_\ell^{1/2} {C}_S^{1/2}(X^{-1}_{\ell}\wv{\ell},\wv{\ell})^{1/2}(L^{-1}_{\ell}\wv{\ell},\wv{\ell})^{1/2}.
\end{align*}
Dividing by $(X^{-1}_{\ell}\wv{\ell},\wv{\ell})^{1/2}$ and squaring the resulting inequality, we get
\[
(X_{\ell}^{-1}\wv{\ell},\wv{\ell}) \leq \tau_\ell \lambda_\ell C_S(L_{\ell}^{-1} \wv{\ell},\wv{\ell}) \quad \forall \, \wv{\ell}\in \mathbb{R}^{\dim V_\ell}.
\]
By substituting $\wv{\ell}$ with $X_{\ell}\uv{\ell}$, we get~\eqref{eq:B34}.
\end{proof}
\section{Approximation error estimates}
\label{Approx}
In this section, we prove some approximation error estimates and provide a projector which will be used to prove~\eqref{eq:ass:approx1} and~\eqref{eq:ass:approx2}.
\subsection{Error and stability estimates for the univariate case}
We start by introducing a periodic spline space.
For any given sequence of grid points $\bm\tau=(0,\tau_1,\ldots,\tau_N,1)$, we define
\[
\bm{\tau}^{per} := (-1,-\tau_N,\cdots,-\tau_1,0,\tau_1,\cdots,\tau_N,1).
\]
For each $p\in \mathbb N$, we define the periodic spline space
\begin{equation} \nonumber
S_{p,\bm{\tau}}^{per} :=
\left\lbrace
v \in S_{p,\bm{\tau}^{per}} \,:\,
\partial^{l} v \left(-1\right) = \partial^{l}v\left( 1\right)\quad \forall \, l\in\mathbb N_0 \mbox{ with } l<p
\right\rbrace
\end{equation}
and a spline space with vanishing even derivatives on the boundary
\begin{equation}
\label{eq:defSEV}
S^{0}_{p,\bm{\tau}} :=
\left\lbrace
v \in S_{p,\bm\tau} \,:\,
\partial^{2l} v \left(0\right) = \partial^{2l}v\left(1\right) = 0\quad \forall \, l\in\mathbb N_0 \mbox{ with } 2l<p
\right\rbrace.
\end{equation}
We also define the periodic Sobolev space
\begin{equation} \nonumber
H^{q}_{per}(-1,1) :=
\left\lbrace
v \in H^q(-1,1) \,:\,
\partial^{l}v\left(-1\right)=\partial^{l} v\left( 1\right) \quad \forall \, l\in\mathbb N_0 \mbox{ with } l<q
\right\rbrace
\end{equation}
for each $q\in \mathbb N$. Let $\Pi^{per}_{p,\bm{\tau}}:H^2_{per}(-1,1) \rightarrow S_{p,\bm{\tau}}^{per}$ be the $H^2$-orthogonal projector satisfying
\begin{align}
\label{eq:perProj}
\begin{split}
\inner{\partial^2\Pi^{per}_{p,\bm{\tau}}u}{\partial^2 v}_{L^2(-1,1)} &= \inner{\partial^2 u}{\partial^2 v}_{L^2(-1,1)} \quad \forall \, v\in S_{p,\bm{\tau}}^{per}, \\
\inner{\Pi^{per}_{p,\bm{\tau}}u}{1}_{L^2(-1,1)} &= \inner{u}{1}_{L^2(-1,1)}.
\end{split}
\end{align}
We use the following approximation error estimate for spline spaces which does not require uniform knot spans.
\begin{theorem} \label{theo:espen4}
For any $p \geq 3$, we have
\begin{equation} \nonumber
\|\partial^2(u-\Pi^{per}_{p,\bm{\tau}} u) \|_{L^2(-1,1)} \leq \frac{h_{\bm{\tau}}^2}{\pi^2} \|\partial^4 u \|_{L^2(-1,1)}
\quad \forall \, u \in H^4_{per}(-1,1).
\end{equation}
\end{theorem}
For a proof, see \cite[Theorem 4]{sande2019sharp}.
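The reflected grid $\bm\tau^{per}$ can be generated mechanically; a small sketch (our own helper function) is:

```python
import numpy as np

def reflect_periodic(tau):
    """tau^per = (-1, -tau_N, ..., -tau_1, 0, tau_1, ..., tau_N, 1),
    obtained by mirroring the interior grid points of tau = (0, ..., 1)."""
    tau = np.asarray(tau, dtype=float)
    interior = tau[1:-1]
    return np.concatenate([[-1.0], -interior[::-1], tau])

print(reflect_periodic([0.0, 0.25, 0.6, 1.0]))
# interior points 0.25 and 0.6 are mirrored to -0.6 and -0.25
```

Note that the reflection leaves the knot spans unchanged up to sign, so $h_{\bm\tau^{per}} = h_{\bm\tau}$; this is why the estimates on $(-1,1)$ carry the same constant $h_{\bm\tau}^2/\pi^2$.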
Using the $H^2$--$H^4$ result above and an Aubin-Nitsche duality trick, we obtain the following $L^2$--$H^2$ result.
\begin{theorem} \label{theo:uniPerL2H2}
For any $p \geq 3$, we have
\begin{equation} \nonumber
\|u-\Pi^{per}_{p,\bm{\tau}} u\|_{L^2(-1,1)} \leq \frac{h_{\bm{\tau}}^2}{\pi^2} \|\partial^2 u \|_{L^2(-1,1)}
\quad \forall \, u \in H^2_{per}(-1,1).
\end{equation}
\end{theorem}
\begin{proof}
Let $u \in H^2_{per}(-1,1)$ be arbitrary but fixed. Let $w\in H^4(-1,1)\cap H^3_{per}(-1,1)$ be such that $\partial^4 w = u - \Pi^{per}_{p,\bm{\tau}} u$. Note that~\eqref{eq:perProj} gives $0=(u - \Pi^{per}_{p,\bm{\tau}} u,1)_{L^2(-1,1)} = (\partial^4 w,1)_{L^2(-1,1)} = \partial^3 w(1) - \partial^3 w(-1)$. So, we know that $w\in H^4_{per}(-1,1)$. Using integration by parts (which does not introduce boundary terms since $u - \Pi^{per}_{p,\bm{\tau}} u \in H^2_{per}(-1,1)$ and $w\in H^4_{per}(-1,1)$) and using Theorem~\ref{theo:espen4}, we obtain
\begin{align*}
\|u - \Pi^{per}_{p,\bm{\tau}} u\|_{L^2}
&= \frac{(u - \Pi^{per}_{p,\bm{\tau}} u, u - \Pi^{per}_{p,\bm{\tau}} u)_{L^2}}{\|u - \Pi^{per}_{p,\bm{\tau}} u\|_{L^2}}
=\frac{(u - \Pi^{per}_{p,\bm{\tau}} u, \partial^4 w)_{L^2}}{\|\partial^4 w\|_{L^2}}\\
&=\frac{(\partial^2(u - \Pi^{per}_{p,\bm{\tau}} u), \partial^2 w)_{L^2}}{\|\partial^4 w\|_{L^2}}
\leq \frac{h_{\bm{\tau}}^2}{\pi^2} \frac{(\partial^2(u - \Pi^{per}_{p,\bm{\tau}} u), \partial^2 w)_{L^2}}{\|\partial^2 (w - \Pi^{per}_{p,\bm{\tau}} w)\|_{L^2}}.
\end{align*}
From the definition of $\Pi^{per}_{p,\bm{\tau}}$, see \eqref{eq:perProj}, we have $(\partial^2(u - \Pi^{per}_{p,\bm{\tau}} u), \partial^2 \Pi^{per}_{p,\bm{\tau}} w)_{L^2} = 0$.
This, together with the Cauchy-Schwarz inequality and the $H^2$-stability of $\Pi^{per}_{p,\bm{\tau}}$, gives
\begin{align*}
\|u - \Pi^{per}_{p,\bm{\tau}} u\|_{L^2}
&\leq \frac{h_{\bm{\tau}}^2}{\pi^2} \frac{(\partial^2(u - \Pi^{per}_{p,\bm{\tau}} u), \partial^2(w-\Pi^{per}_{p,\bm{\tau}} w))_{L^2}}{\|\partial^2 (w - \Pi^{per}_{p,\bm{\tau}} w)\|_{L^2}} \\
&\leq \frac{h_{\bm{\tau}}^2}{\pi^2}\|\partial^2 (u - \Pi^{per}_{p,\bm{\tau}} u)\|_{L^2}
\leq \frac{h_{\bm{\tau}}^2}{\pi^2}\|\partial^2 u \|_{L^2},
\end{align*}
which completes the proof.
\end{proof}
Let $\Pi^{0}_{p,\bm{\tau}}:H^2(0,1)\cap H^1_0(0,1) \rightarrow S_{p,\bm{\tau}}^0$ be the $H^2$-orthogonal projector satisfying
\begin{align} \nonumber
\inner{\partial^2\Pi^{0}_{p,\bm{\tau}}u}{\partial^2 v}_{L^2(0,1)} &= \inner{\partial^2 u}{\partial^2 v}_{L^2(0,1)} \quad \forall \, v\in S_{p,\bm{\tau}}^{0}.
\end{align}
\begin{theorem} \label{theo:Pi0}
For any $p \geq 3$, we have
\begin{equation} \nonumber
\|u-\Pi^0_{p,\bm{\tau}} u \|_{L^2(0,1)} \leq \frac{h_{\bm{\tau}}^2}{\pi^2} \|\partial^2 u \|_{L^2(0,1)}
\quad \forall \, u \in H^2(0,1)\cap H^1_0(0,1).
\end{equation}
\end{theorem}
\begin{proof}
Let $u\in H^2(0,1) \cap H^1_0(0,1)$ be arbitrary but fixed. Define $w$ on $(-1,1)$ to be
\[
w(x) := \mbox{sign}(x)\; u( |x| ).
\]
Observe that $w \in H^2_{per}(-1,1)$ since $u(0)=u(1)=0$. From Theorem~\ref{theo:uniPerL2H2}, we have
\[
\| (I-\Pi^{per}_{p,\bm{\tau}}) w \|_{L^2(-1,1)} \le \frac{h_{\bm{\tau}}^2}{\pi^2} \|\partial^2 w \|_{L^2(-1,1)}.
\]
Observe that $\|\partial^2 w \|_{L^2(-1,1)} = 2^{1/2} \|\partial^2 u \|_{L^2(0,1)}$. Define $w_{\bm{\tau}}:= \Pi^{per}_{p,\bm{\tau}} w$ and let $u_{\bm{\tau}}$ be the restriction of $w_{\bm{\tau}}$ to $(0,1)$. Observe that $w_{\bm{\tau}}$ is anti-symmetric, which implies that $u_{\bm{\tau}}\in S^{0}_{p,\bm{\tau}}$. It follows that $\|w-w_{\bm{\tau}}\|_{L^2(-1,1)} = 2^{1/2} \|u-u_{\bm{\tau}}\|_{L^2(0,1)}$.
Using this, we obtain
\[
\| u-u_{\bm{\tau}} \|_{L^2(0,1)} \le \frac{h_{\bm{\tau}}^2}{\pi^2} \|\partial^2 u \|_{L^2(0,1)}.
\]
It remains to show that $u_{\bm{\tau}}$ coincides with $\Pi^{0}_{p,\bm{\tau}} u$, i.e., to show that $u-u_{\bm{\tau}}$ is $H^2$-orthogonal to $S^{0}_{p,\bm{\tau}}$. By definition, this means that we have to show
\[
(\partial^2(u-u_{\bm{\tau}}),\partial^2 \tilde{u}_{\bm{\tau}})_{L^2(0,1)} = 0 \quad \forall \, \tilde{u}_{\bm{\tau}} \in S^{0}_{p,\bm{\tau}}.
\]
Let $\tilde{w}_{\bm{\tau}} \in S_{p,\bm{\tau}}^{per}$ be given by $\tilde{w}_{\bm{\tau}}(x) := \mbox{sign}(x) \,\tilde{u}_{\bm{\tau}}( |x| )$ and observe that
$ 2(\partial^2(u-u_{\bm{\tau}}),\partial^2\tilde{u}_{\bm{\tau}})_{L^2(0,1)} = (\partial^2(w-w_{\bm{\tau}}),\partial^2\tilde{w}_{\bm{\tau}})_{L^2(-1,1)}$, since $u$, $u_{\bm{\tau}}$ and $\tilde{u}_{\bm{\tau}}$ are the restrictions of $w$, $w_{\bm{\tau}}$ and $\tilde{w}_{\bm{\tau}}$ to $(0,1)$, respectively. Furthermore, $(\partial^2(w-w_{\bm{\tau}}),\partial^2\tilde{w}_{\bm{\tau}})_{L^2(-1,1)}=0 $ by construction, since $w_{\bm{\tau}}= \Pi^{per}_{p,\bm{\tau}} w$, which completes the proof.
\end{proof}
Let $Q^{0}_{p,\bm{\tau}}:H^2(0,1)\cap H^1_0(0,1) \rightarrow S_{p,\bm{\tau}}^0$ be the $L^2$-orthogonal projector satisfying
\begin{align} \nonumber
\inner{Q^{0}_{p,\bm{\tau}}u}{ v}_{L^2(0,1)} &= \inner{ u}{ v}_{L^2(0,1)} \quad \forall \, v\in S_{p,\bm{\tau}}^{0}.
\end{align}
Since the $L^2$-orthogonal projector minimizes the error in the $L^2$-norm, Theorem~\ref{theo:Pi0} immediately implies the following statement.
\begin{theorem} \label{theo:Q}
For any $p \geq 3$, we have
\begin{equation} \nonumber
\|u-Q^{0}_{p,\bm{\tau}} u \|_{L^2(0,1)} \leq \frac{h_{\bm{\tau}}^2}{\pi^2} \|\partial^2 u \|_{L^2(0,1)}
\quad \forall \, u \in H^2(0,1) \cap H^1_0(0,1).
\end{equation}
\end{theorem}
Next, we show the stability of $Q_{p,\bm{\tau}}^0$ with respect to the $H^2$-seminorm.
Such a proof is possible since the space $S_{p,\bm{\tau}}^0$ satisfies the following $p$-robust inverse inequality, while the space $S_{p,\bm{\tau}}\cap H^1_0(0,1)$ does not satisfy such an inverse inequality, cf.~\cite{takacs2016approximation}.
\begin{theorem} \label{theo:BiInv2}
Let $p\in \mathbb{N}$ with $p\geq 2$. We have
\begin{equation} \nonumber
\|\partial^2 u_{\bm{\tau}}\|_{L^2(0,1)} \leq 12 h^{-2}_{\bm{\tau},\mathrm{min}} \|u_{\bm{\tau}}\|_{L^2(0,1)}
\quad \forall \, u_{\bm{\tau}} \in S^{0}_{p,\bm{\tau}}.
\end{equation}
\end{theorem}
A proof can be found in \cite[Theorem 12]{sogn2019robust}.
\begin{theorem} \label{theo:QHstab1d}
Let $p\in \mathbb{N}$ with $p\geq 3$. Then there exists a constant $c>0$ such that
\begin{align*}
\|\partial^2(Q_{p,\bm{\tau}}^0 u) \|^2_{L^2(0,1)} \leq c\|\partial^2 u\|^2_{L^2(0,1)} \quad \forall \, u \in H^2(0,1)\cap H^1_{0}(0,1).
\end{align*}
\end{theorem}
\begin{proof}
The proof is analogous to that of \cite[Theorem 14]{sogn2019robust}; we include it here for completeness. Using the triangle inequality and the inverse inequality, we obtain
\begin{align*}
\|\partial^2 Q_{p,\bm{\tau}}^0 u \|^2_{L^2}
& \le 2\|\partial^2 \Pi_{p,\bm{\tau}}^0 u \|^2_{L^2} + 2\|\partial^2 (Q_{p,\bm{\tau}}^0u - \Pi_{p,\bm{\tau}}^0 u) \|^2_{L^2}
\\&\le 2\|\partial^2 \Pi_{p,\bm{\tau}}^0 u \|^2_{L^2} + c h_{\bm{\tau},\mathrm{min}}^{-2} \|Q_{p,\bm{\tau}}^0 u - \Pi_{p,\bm{\tau}}^0 u \|^2_{L^2} \\
& \le 2\|\partial^2 \Pi_{p,\bm{\tau}}^0 u \|^2_{L^2} + c h_{\bm{\tau},\mathrm{min}}^{-2} \|u - \Pi_{p,\bm{\tau}}^0 u \|^2_{L^2} + c h_{\bm{\tau},\mathrm{min}}^{-2} \|u-Q_{p,\bm{\tau}}^0 u \|^2_{L^2}.
\end{align*}
Theorems~\ref{theo:Pi0} and~\ref{theo:Q} together with the quasi-uniformity assumption~\eqref{eq:quasiuniform} give the desired result.
\end{proof}
\subsection{Proof of the approximation properties} \label{subsec:4:3}
In this subsection, we consider the discretization framework from Section~\ref{sec:prelims}.
We choose
\[
X_\ell := \mathcal{B}_\ell + (\beta + \hmaxh{\ell}^{-4})\mathcal{M}_\ell,
\]
which corresponds to the norm $\|\cdot\|_{X_\ell}$ that satisfies
\[
\|u\|^2_{X_\ell} = \|u \|^2_{\mathcal{B}} + (\beta + \hmaxh{\ell}^{-4})\|u\|^2_{L^2(\Omega)} \quad \forall \, u\in V.
\]
Now, we give a bound for the eigenvalues of $X^{-1}_\ell\mathcal{A}_\ell$.
\begin{lemma}
\label{lem:eigenvalue}
Let $\lambda_{\ell}$ with $\ell\geq 1$ be the largest eigenvalue of $X_{\ell}^{-1}\mathcal{A}_{\ell}$. For $p\geq 3$, we have $\lambda_{\ell}\in (\frac{1}{1+c},1)$ for some positive constant $c$.
\end{lemma}
\begin{proof}
Since $\mathcal{M}_\ell$ is symmetric positive definite and $h_\ell^{-4}>0$, we have $\mathcal{A}_{\ell}< X_{\ell}$, which implies $\lambda_{\ell}<1$. For the lower bound, we use $V_{\ell-1} \subsetneqq V_\ell$, which implies that there is some $w_\ell\in V_\ell$ that is $L^2$-orthogonal to $V_{\ell-1}$, that is $(w_{\ell},u_{\ell-1})_{L^2(\Omega)} = 0$ for all $u_{\ell-1}\in V_{\ell-1}$. By combining Theorem~\ref{theo:QHapp} and~\eqref{eq:geoEquiv}, we obtain
\[
\|w_{\ell}\|_{L^2(\Omega)} = \inf_{u_{\ell-1}\in V_{\ell-1}} \|w_{\ell}-u_{\ell-1}\|_{L^2(\Omega)} \leq c \, \hmaxh{\ell-1}^2 \|w_{\ell}\|_{\mathcal B}.
\]
In matrix-vector notation, this reads as
\[
\underline w_{\ell}^T \mathcal{M}_{\ell} \underline w_{\ell} \leq c\, \hmaxh{\ell-1}^4\, \underline w_{\ell}^T \mathcal{B}_{\ell}\underline w_{\ell}.
\]
Using~\eqref{eq:assGrids}, we know that there is a constant $c>0$ such that
\begin{align*}
\underline{w}_{\ell}^\top X_{\ell}\underline{w}_{\ell}
&= \underline{w}_{\ell}^\top \mathcal A_{\ell} \underline{w}_{\ell} + \hmaxh{\ell}^{-4} \underline{w}_{\ell}^\top \mathcal{M}_{\ell} \underline{w}_{\ell}
< (1+c) \underline{w}_{\ell}^\top \mathcal A_{\ell} \underline{w}_{\ell},
\end{align*}
which shows $\lambda_\ell > 1/(1+c)$.
\end{proof}
Next, we prove \eqref{eq:ass:approx1} and~\eqref{eq:ass:approx2}.
This requires that we choose the projectors $\mathbf{Q}^0_{p,\ell}$, which have to map into the space $V_\ell$. We first define a projector that maps from $\widehat V$ into $\widehat V_\ell$ by tensorization of the univariate projectors: \[ \widehat{\mathbf{Q}}^0_{p,\ell} := Q^0_{p,\bm{\tau}_{\ell,1}} \otimes \cdots \otimes Q^0_{p,\bm{\tau}_{\ell,d}}, \] where the tensor product is to be understood as in \cite[Section~3.2]{T:2017MPMG}. The next two theorems follow from Theorems~\ref{theo:Q} and~\ref{theo:QHstab1d} by standard arguments. \begin{theorem} \label{theo:QHapp} Let $p\in \mathbb{N}$ with $p\geq 3$. Then there exists a constant $c$ such that \[ \|(I-\widehat{\mathbf{Q}}_{p,\ell}^0) \widehat u \|_{L^2(\widehat\Omega)} \leq c h_{\ell}^2 \| \widehat u \|_{\bar{\mathcal B}} \quad \forall \, \widehat u\in H^2(\widehat\Omega) \cap H^1_0(\widehat\Omega). \] \end{theorem} \begin{proof} The proof is given for the two-dimensional case. We have by definition and using the triangle inequality \begin{align*} \|(I-\widehat{\mathbf{Q}}_{p,\ell}^0) \widehat u \|_{L^2(\widehat\Omega)} &= \|(I-Q_{p,\bm{\tau}_{\ell,1}}^0\otimes Q_{p,\bm{\tau}_{\ell,2}}^0) \widehat u \|_{L^2(\widehat\Omega)} \\&\le \|(I-Q_{p,\bm{\tau}_{\ell,1}}^0\otimes I) \widehat u \|_{L^2(\widehat\Omega)} + \|(Q_{p,\bm{\tau}_{\ell,1}}^0\otimes I)(I-I\otimes Q_{p,\bm{\tau}_{\ell,2}}^0) \widehat u \|_{L^2(\widehat\Omega)}. \end{align*} Using the $L^2$-stability of the $L^2$-projectors, we further obtain \begin{align*} \|(I-\widehat{\mathbf{Q}}_{p,\ell}^0) \widehat u \|_{L^2(\widehat\Omega)} &\le \|(I-Q_{p,\bm{\tau}_{\ell,1}}^0\otimes I) \widehat u \|_{L^2(\widehat\Omega)} + \|(I-I\otimes Q_{p,\bm{\tau}_{\ell,2}}^0) \widehat u \|_{L^2(\widehat\Omega)}. \end{align*} The desired result immediately follows from Theorem~\ref{theo:Q}. The extension to more dimensions is obvious. \end{proof} \begin{theorem} \label{theo:QHstab} Let $p\in \mathbb{N}$ with $p\geq 3$. 
Then there exists a constant $c>0$ such that \begin{align*} \|\widehat{\mathbf{Q}}_{p,\ell}^0 \widehat u \|^2_{\bar{\mathcal B}} \leq c\|\widehat u\|^2_{\bar{\mathcal B}} \quad \forall \, \widehat u \in H^2(\widehat \Omega)\cap H^1_{0}(\widehat\Omega). \end{align*} \end{theorem} \begin{proof} We give the proof for the two-dimensional case. By definition and since the factors of the tensor product commute, we have \begin{align*} \|\widehat{\mathbf{Q}}_{p,\ell}^0 \widehat u \|_{\bar{\mathcal B}}^2 &= \|\partial_{x_1}^2 (Q_{p,\bm{\tau}_{\ell,1}}^0\otimes Q_{p,\bm{\tau}_{\ell,2}}^0) \widehat u \|_{L^2(\widehat\Omega)}^2 + \|\partial_{x_2}^2 (Q_{p,\bm{\tau}_{\ell,1}}^0\otimes Q_{p,\bm{\tau}_{\ell,2}}^0) \widehat u \|_{L^2(\widehat\Omega)}^2 \\&= \|\partial_{x_1}^2 (I\otimes Q_{p,\bm{\tau}_{\ell,2}}^0) (Q_{p,\bm{\tau}_{\ell,1}}^0\otimes I) \widehat u \|_{L^2(\widehat\Omega)}^2 + \|\partial_{x_2}^2 (Q_{p,\bm{\tau}_{\ell,1}}^0\otimes I) (I\otimes Q_{p,\bm{\tau}_{\ell,2}}^0) \widehat u \|_{L^2(\widehat\Omega)}^2. \end{align*} Since $\partial_{x_1}^2$ commutes with $I\otimes Q_{p,\bm{\tau}_{\ell,2}}^0$ and $\partial_{x_2}^2$ commutes with $Q_{p,\bm{\tau}_{\ell,1}}^0\otimes I$, the $L^2$-stability of the $L^2$-projectors yields \begin{align*} \|\widehat{\mathbf{Q}}_{p,\ell}^0 \widehat u \|_{\bar{\mathcal B}}^2 &\le \|\partial_{x_1}^2 (Q_{p,\bm{\tau}_{\ell,1}}^0\otimes I) \widehat u \|_{L^2(\widehat\Omega)}^2 + \|\partial_{x_2}^2 (I\otimes Q_{p,\bm{\tau}_{\ell,2}}^0) \widehat u \|_{L^2(\widehat\Omega)}^2. \end{align*} Using Theorem~\ref{theo:QHstab1d}, we further obtain \begin{align*} \|\widehat{\mathbf{Q}}_{p,\ell}^0 \widehat u \|_{\bar{\mathcal B}}^2 &\le c\|\partial_{x_1}^2 \widehat u \|_{L^2(\widehat\Omega)}^2 + c\|\partial_{x_2}^2 \widehat u \|_{L^2(\widehat\Omega)}^2 = c \|\widehat u\|_{\bar{\mathcal B}}^2, \end{align*} which finishes the proof. The extension to more dimensions is obvious. 
\end{proof} The projectors $\mathbf{Q}^0_{p,\ell}$ are now defined via the pull-back principle, such that \begin{equation}\label{def:Qphys} \mathbf{Q}^0_{p,\ell} u := (\widehat{\mathbf{Q}}^0_{p,\ell} (u \circ \bm{G})) \circ \bm{G}^{-1} \quad\forall \, u\in V. \end{equation} Note that, by construction, $\mathbf{Q}^0_{p,\ell}$ maps into a subspace of $V_\ell$ on which all even-order outer normal derivatives vanish on the boundary. \begin{theorem} \label{theo:appProof} Let $d\in\mathbb{N}$ and $p\in \mathbb{N}$ with $p \ge 3$. For each level $\ell = 0,1,\ldots, L$, let $\mathbf{Q}^0_{p,\ell}:H^2(\Omega)\cap H^1_{0}(\Omega)\rightarrow V_\ell$ be the projectors defined in \eqref{def:Qphys}. There exist constants $C_1$ and $C_2$ such that \begin{align} \label{eq:As11} \|(\mathbf{Q}^0_{p,\ell}-\mathbf{Q}^0_{p,\ell-1})u_L \|^2_{X_\ell} &\leq C_1 \lambda^{-1}_\ell (u_L,u_L)_{\mathcal{A}} \quad &\text{for}\quad& \ell = 1,\ldots,L,\\ \label{eq:As12} (\mathbf{Q}^0_{p,\ell}\,u_L,\mathbf{Q}^0_{p,\ell}\,u_L)_{\mathcal{A}} &\leq C_2 (u_L,u_L)_{\mathcal{A}} \quad &\text{for}\quad& \ell = 0,\ldots,L-1, \end{align} for all $u_L\in V_L$. \end{theorem} \begin{proof} Let $u_L\in V_L$ be arbitrary but fixed and let $\widehat u_L:= u_L \circ\bm{G}\in \widehat V_L$. 
Using \eqref{eq:geoEquiv}, Lemma~\ref{lemma:equivB}, Theorem~\ref{theo:QHstab} and the $L^2$-stability of $\widehat{\mathbf{Q}}^0_{p,\ell}$, we obtain \begin{align*} (\mathbf{Q}^0_{p,\ell}\, u_L, \mathbf{Q}^0_{p,\ell}\, u_L)_{\mathcal{A}} &\le c (\widehat{\mathbf{Q}}^0_{p,\ell}\,\widehat u_L, \widehat{\mathbf{Q}}^0_{p,\ell}\,\widehat u_L)_{\widehat{\mathcal{A}}} = c\beta \|\widehat{\mathbf{Q}}^0_{p,\ell}\widehat u_{L}\|^2_{L^2(\widehat \Omega)} + c\|\widehat{\mathbf{Q}}^0_{p,\ell}\widehat u_{L}\|^2_{\widehat{\mathcal{B}}}\\ &\leq c \beta \|\widehat u_L\|^2_{L^2(\widehat \Omega)} + c\|\widehat u_{L}\|^2_{\widehat{\mathcal{B}}} \leq c (\widehat u_L,\widehat u_L)_{\widehat{\mathcal{A}}} \leq C_2 (u_L,u_L)_{\mathcal{A}}, \end{align*} which shows~\eqref{eq:As12}. Next we prove the auxiliary result \begin{equation} \label{eq:aux3} \|(I-\mathbf{Q}^0_{p,\ell-1})u_L \|^2_{X_\ell} \leq c \lambda^{-1}_\ell (u_L,u_L)_{\mathcal{A}} \quad \text{for}\quad \ell = 1,\ldots,L. \end{equation} Using~\eqref{eq:geoEquiv}, Lemma~\ref{lemma:equivB}, Theorem~\ref{theo:QHstab}, Theorem~\ref{theo:QHapp} and the $L^2$-stability of $\widehat{\mathbf{Q}}^0_{p,\ell-1}$, we get \begin{align*} \|(I-\mathbf{Q}^0_{p,\ell-1})u_L \|^2_{X_\ell} &= \|(I-\mathbf{Q}^0_{p,\ell-1})u_L \|^2_{\mathcal{B}}+ (\beta+\hmaxh{\ell}^{-4})\|(I-\mathbf{Q}^0_{p,\ell-1})u_L \|^2_{L^2(\Omega)}\\ &\le c \|(I-\widehat{\mathbf{Q}}^0_{p,\ell-1})\widehat{u}_L \|^2_{\bar{\mathcal{B}}}+ c(\beta+\hmaxh{\ell}^{-4})\|(I-\widehat{\mathbf{Q}}^0_{p,\ell-1})\widehat{u}_L \|^2_{L^2(\widehat{\Omega})}\\ &\leq c \|\widehat u_L\|^2_{\bar{\mathcal{B}}} + c\hmaxh{\ell}^{-4}\hmaxh{\ell-1}^{4}\|\widehat u_L\|^2_{\bar{\mathcal{B}}} + c\beta \|\widehat u_L\|^2_{L^2(\widehat \Omega)}\\ &\leq c(1+\hmaxh{\ell}^{-4}\hmaxh{\ell-1}^{4})\|u_L\|^2_{\mathcal{B}} + c\beta \|u_L\|^2_{L^2(\Omega)}. \end{align*} We use assumption \eqref{eq:assGrids} and Lemma~\ref{lem:eigenvalue} to get \eqref{eq:aux3}. 
To complete the proof, we use the fact that $\mathbf{Q}^0_{p,\ell-1}\mathbf{Q}^0_{p,\ell} = \mathbf{Q}^0_{p,\ell-1}$, \eqref{eq:aux3} and \eqref{eq:As12} to obtain \begin{align*} \|(\mathbf{Q}^0_{p,\ell}-\mathbf{Q}^0_{p,\ell-1})u_L \|^2_{X_\ell} &= \|(I-\mathbf{Q}^0_{p,\ell-1})\mathbf{Q}^0_{p,\ell}u_L \|^2_{X_\ell} \leq c \lambda^{-1}_\ell (\mathbf{Q}^0_{p,\ell} u_L, \mathbf{Q}^0_{p,\ell}u_L)_{\mathcal{A}}\\ &\leq C_1 \lambda^{-1}_\ell (u_L, u_L)_{\mathcal{A}}. \end{align*} This shows~\eqref{eq:As11} and finishes the proof. \end{proof} \begin{remark} In \cite[Lemma 9.2]{sogn2018schur}, a result similar to Theorem~\ref{theo:QHapp} is shown. There, the $\mathcal{B}_\ell$-orthogonal projector is considered, and the corresponding proof only holds for uniform grids. By using an $L^2$-orthogonal projector, we avoid these difficulties. Since the convergence theory by Hackbusch \cite{hackbusch2013multi} requires the error estimates for the $\mathcal{B}_\ell$-orthogonal projector, this motivated us to use the convergence theory by Bramble \cite{bramble2018multigrid}, where this is not the case. \end{remark} \section{The smoothers and the overall convergence results} \label{sec:smoothers} \subsection{Subspace corrected mass smoother} We consider the subspace corrected mass smoother, which was originally proposed in \cite{hofreither2016robust} for a second order problem and was one of the first smoothers to yield a multigrid method for IgA that is robust in both the grid size and the spline degree. In \cite{sogn2019robust,sogn2018schur}, this smoother was extended to biharmonic problems. The smoother is based on the inverse inequality in Theorem~\ref{theo:BiInv2}, which is independent of the spline degree. 
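The additive subspace-correction principle behind such smoothers can be sketched in a few lines of code. The following NumPy illustration is purely schematic and not part of the method's implementation; the embeddings in \texttt{P\_list} and the local matrices in \texttt{L\_list} are hypothetical stand-ins for the spaces and matrices constructed below.

```python
import numpy as np

def subspace_smoother_apply(P_list, L_list, r):
    # Apply the inverse of an additive subspace smoother,
    #   L^{-1} r = sum_a P_a L_a^{-1} P_a^T r,
    # where P_a embeds the a-th subspace and L_a is its local matrix.
    x = np.zeros_like(r)
    for P, L in zip(P_list, L_list):
        x += P @ np.linalg.solve(L, P.T @ r)
    return x

def smoothing_step(A, P_list, L_list, x, b, tau):
    # One damped smoothing step: x <- x + tau * L^{-1} (b - A x).
    return x + tau * subspace_smoother_apply(P_list, L_list, b - A @ x)
```

One smoothing step thus costs one residual evaluation plus one small solve per subspace; in the actual smoother these local solves additionally exploit the tensor-product structure described below.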
First, we introduce a splitting for the one-dimensional case as follows: \[ S_{p,\bm{\tau}}\cap H^1_0(0,1) = S^{0}_{p,\bm{\tau}} \oplus S^{1}_{p,\bm{\tau}}, \] where $S^{0}_{p,\bm{\tau}}$ is as defined in \eqref{eq:defSEV} and $S^{1}_{p,\bm{\tau}}$ is its $L^2$-orthogonal complement in $S_{p,\bm{\tau}}\cap H^1_0(0,1)$. For each of these spaces, we define the corresponding $L^2$-orthogonal projections \begin{align*} Q_{p,\bm{\tau}}^0: H^2(0,1)\cap H^1_0(0,1) \rightarrow S^0_{p,\bm{\tau}},\\ Q_{p,\bm{\tau}}^1: H^2(0,1)\cap H^1_0(0,1) \rightarrow S^1_{p,\bm{\tau}}. \end{align*} The next step is to extend the splitting to the multivariate case. Let $\alpha:=(\alpha_1,\ldots,\alpha_d) \in \lbrace 0, 1\rbrace^d$ be a multi-index. The tensor product B-spline space $\widehat V_\ell = S_{p,\bm{\tau}_\ell }\cap H^1_0(\widehat\Omega)$ with $\bm{\tau}_\ell=(\bm{\tau}_{\ell,1},\ldots,\bm{\tau}_{\ell,d})$ is split into the direct sum of $2^d$ subspaces \begin{equation} \label{eq:subspaces} \widehat V_\ell = \bigoplus_{\alpha \in \lbrace 0,1\rbrace^d } S^\alpha_{p,\bm{\tau}_{\ell}} \quad\text{where}\quad S^\alpha_{p,\bm{\tau}_\ell} = S^{\alpha_1}_{p,\bm{\tau}_{\ell,1}}\otimes \cdots\otimes S^{\alpha_d}_{p,\bm{\tau}_{\ell,d}}. \end{equation} Again, we define $L^2$-orthogonal projectors \begin{equation} \nonumber \widehat{\mathbf{Q}}_{p,\bm{\tau}_\ell}^\alpha:= Q_{p,\bm{\tau}_{\ell,1}}^{\alpha_1}\otimes \cdots \otimes Q_{p,\bm{\tau}_{\ell,d}}^{\alpha_d}: \widehat V \rightarrow S^\alpha_{p,\bm{\tau}_\ell}. \end{equation} The projector $\widehat{\mathbf{Q}}_{p,\bm{\tau}_\ell}^0$ from Section~\ref{subsec:4:3} is consistent with this definition for the choice $\alpha=0$. Since the splitting is $L^2$-orthogonal, we obviously have the following result. 
\begin{equation} \label{eq:L2equal} \widehat u_\ell = \sum_{\alpha\in \lbrace 0,1\rbrace^d} \widehat{\mathbf{Q}}_{p,\bm{\tau}_\ell}^\alpha \widehat u_\ell \quad \mbox{and} \quad \|\widehat u_\ell\|^2_{L^2(\widehat\Omega)} = \sum_{\alpha\in \lbrace 0,1\rbrace^d} \|\widehat{\mathbf{Q}}_{p,\bm{\tau}_\ell}^\alpha \widehat u_\ell \|^2_{L^2(\widehat\Omega)} \quad \forall \widehat u_\ell \in \widehat V_\ell. \end{equation} The next theorem shows that the splitting is also stable in $H^2$. \begin{theorem} \label{theo:QHstab2} Let $p\in \mathbb{N}$ with $p\geq 3$. Then there exists a constant $c>0$ such that \begin{align*} c^{-1}\|\widehat u_\ell\|^2_{\bar{\mathcal{B}}} \leq \sum_{\alpha\in \lbrace 0,1\rbrace^d} \|\widehat{\mathbf{Q}}_{p,\bm{\tau}_\ell}^\alpha \widehat u_\ell \|^2_{\bar{\mathcal{B}}} \leq c\|\widehat u_\ell\|^2_{\bar{\mathcal{B}}} \quad \forall \widehat u_\ell \in \widehat V_\ell. \end{align*} \end{theorem} \begin{proof} Theorem~\ref{theo:QHstab1d} states the stability of $Q^0_{p,\bm{\tau}_\ell}$ in the $H^2$-seminorm. The stability of $Q^1_{p,\bm{\tau}_\ell}$ in the $H^2$-seminorm follows using the triangle inequality. The corresponding stability in the $L^2$-norm is obvious. From these observations, the right inequality follows by arguments that are completely analogous to those of the proof of Theorem~\ref{theo:QHstab}. The left inequality follows from~\eqref{eq:L2equal} and the triangle inequality. \end{proof} For notational convenience, we restrict the setup of the smoother to the two-dimensional case and write the splitting \eqref{eq:subspaces} as \[ \widehat V_\ell = S^{00}_{p,\bm{\tau}_\ell} \oplus S^{01}_{p,\bm{\tau}_\ell} \oplus S^{10}_{p,\bm{\tau}_\ell} \oplus S^{11}_{p,\bm{\tau}_\ell}, \quad\text{where}\quad S^{\alpha_1,\alpha_2}_{p,\bm{\tau}_\ell} = S^{\alpha_1}_{p,\bm{\tau}_{\ell,1}} \otimes S^{\alpha_2}_{p,\bm{\tau}_{\ell,2}}. 
\] Following the ideas of~\cite{hofreither2016robust,sogn2019robust}, we construct local smoothers $L_\alpha$ for any of the spaces $V_{\ell,\alpha}:= S^{\alpha}_{p,\bm{\tau}_\ell}$. These local contributions are chosen such that they satisfy the corresponding local condition \begin{equation}\nonumber \bar{\mathcal{B}}_{\ell,\alpha} +\beta \widehat{\mathcal{M}}_{\ell,\alpha}\le L_{\alpha} \le c (\bar{\mathcal{B}}_{\ell,\alpha} + (\beta + h^{-4}) \widehat{\mathcal{M}}_{\ell,\alpha}), \end{equation} where \[ \bar{\mathcal{B}}_{\ell,\alpha} := \mathbf{P}_{\ell,\alpha}^T \bar{\mathcal{B}}_\ell \mathbf{P}_{\ell,\alpha} \qquad\mbox{and}\qquad \widehat{\mathcal M}_{\ell,\alpha} := \mathbf{P}_{\ell,\alpha}^T \widehat{\mathcal M}_\ell \mathbf{P}_{\ell,\alpha} \] and $\mathbf{P}_{\ell,\alpha}$ is the matrix representation of the canonical embedding $V_{\ell,\alpha}\rightarrow V_\ell$. The canonical embedding has tensor product structure, i.e., $P_{\ell,\alpha_1}\otimes\cdots\otimes P_{\ell,\alpha_d}$, where the $P_{\ell,\alpha_i}$ are the matrix representations of the corresponding univariate embeddings. In the two-dimensional case, $\bar{\mathcal{B}}_\ell$ and $\widehat{\mathcal M}_\ell$ have the representation \begin{equation*} \bar{\mathcal{B}}_\ell = B \otimes M + M \otimes B \quad\mbox{and}\quad \widehat{\mathcal M}_\ell = M \otimes M, \end{equation*} where $B$ and $M$ are the corresponding univariate stiffness and mass matrices (not necessarily equal for both spatial directions). For notational convenience, we do not indicate the spatial direction and the grid level for these matrices. Restricting $\bar{\mathcal{B}}_\ell$ to the subspace $V_{\ell,(\alpha_1,\alpha_2)}$ gives \begin{equation*} \bar{\mathcal{B}}_{\ell,(\alpha_1,\alpha_2)} = B_{\alpha_1} \otimes M_{\alpha_2} + M_{\alpha_1} \otimes B_{\alpha_2}, \end{equation*} where $B_{\alpha_i} = P^T_{\ell,\alpha_i} B P_{\ell,\alpha_i}$ and $M_{\alpha_i} = P^T_{\ell,\alpha_i} M P_{\ell,\alpha_i}$. 
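The restriction identity above relies on the fact that congruence with a Kronecker-product embedding acts factorwise: $(P_1\otimes P_2)^T (B\otimes M)(P_1\otimes P_2) = (P_1^T B P_1)\otimes(P_2^T M P_2)$, by the mixed-product property. This can be checked numerically; the following NumPy sketch uses small random matrices as stand-ins for the univariate matrices and embeddings, not actual spline matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3  # illustrative sizes: dim of the univariate space and of a subspace

def spd(k):
    # random symmetric positive definite matrix
    R = rng.standard_normal((k, k))
    return R @ R.T + k * np.eye(k)

B, M = spd(n), spd(n)              # stand-ins for univariate stiffness and mass
P1 = rng.standard_normal((n, m))   # stand-ins for the univariate embeddings
P2 = rng.standard_normal((n, m))

# restricting B (x) M + M (x) B with the full Kronecker embedding ...
P = np.kron(P1, P2)
lhs = P.T @ (np.kron(B, M) + np.kron(M, B)) @ P

# ... equals the same structure built from the restricted univariate factors
rhs = np.kron(P1.T @ B @ P1, P2.T @ M @ P2) + np.kron(P1.T @ M @ P1, P2.T @ B @ P2)
assert np.allclose(lhs, rhs)
```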
We define \[ \bar{\mathcal{A}}_\ell :=\bar{\mathcal{B}}_\ell + \beta \widehat{\mathcal{M}}_\ell \quad\text{and}\quad \bar{\mathcal{A}}_{\alpha_1,\alpha_2} := \bar{\mathcal{B}}_{\alpha_1,\alpha_2}+\beta\widehat{\mathcal{M}}_{\alpha_1,\alpha_2}. \] The inverse inequality for $S^{0}_{p,\bm{\tau}_{\ell,i}}$ (Theorem~\ref{theo:BiInv2}) allows us to estimate \begin{equation*} B_0\leq \sigma M_0, \end{equation*} where $\sigma = \sigma_0 h_{\ell,\mathrm{min}}^{-4}$ and $\sigma_0 = 144$. Using this, we define the smoothers $L_{\alpha_1,\alpha_2}$ and obtain the following estimates: \begin{equation}\nonumber \begin{aligned} \bar{\mathcal{A}}_{00} &\leq (2\sigma + \beta) M_0\otimes M_0 &=: L_{00} \le c(\bar{\mathcal{A}}_{00}+h^{-4} \widehat{\mathcal{M}}_{00}),\\ \bar{\mathcal{A}}_{01} &\leq M_0 \otimes\left((\sigma +\beta)M_1 +B_1\right) &=: L_{01} \le c(\bar{\mathcal{A}}_{01}+h^{-4} \widehat{\mathcal{M}}_{01}),\\ \bar{\mathcal{A}}_{10} &\leq \left(B_1 + (\sigma + \beta)M_1 \right)\otimes M_0 &=: L_{10}\le c(\bar{\mathcal{A}}_{10}+h^{-4} \widehat{\mathcal{M}}_{10}),\\ \bar{\mathcal{A}}_{11} &= B_1 \otimes M_1 + M_1 \otimes B_1 + \beta M_1 \otimes M_1 &=: L_{11}\le c(\bar{\mathcal{A}}_{11}+h^{-4} \widehat{\mathcal{M}}_{11}). \end{aligned} \end{equation} The extension to three and more dimensions is completely straightforward (cf.~\cite{hofreither2016robust}). For each of the subspaces $V_{\ell,\alpha}$, we have defined a symmetric and positive definite smoother $L_\alpha$. The overall smoother is given by \begin{equation*} L_{\ell}:=\sum_{\alpha\in\{0,1\}^d} (\mathbf{Q}^{D,\alpha})^T L_{\alpha} \mathbf{Q}^{D,\alpha}, \end{equation*} where $\mathbf{Q}^{D,\alpha}= \widehat{\mathcal{M}}_{\alpha}^{-1}\mathbf{P}^T_{\ell,\alpha}\widehat{\mathcal{M}}_{\ell}$ is the matrix representation of the $L^2$-projection from $V_\ell$ to $V_{\ell,\alpha}$. 
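The key computational point of these definitions is that a local matrix of the form $A_1\otimes A_2$ can be inverted with univariate solves only, without ever forming the Kronecker product. The following NumPy sketch demonstrates this for a matrix with the structure of $L_{01}$; the matrices standing in for $M_0$, $M_1$ and $B_1$ are small random SPD matrices, not actual spline matrices.

```python
import numpy as np

def kron_solve(A1, A2, b):
    # Solve (A1 (x) A2) x = b via univariate solves only.
    # With row-major vectorization, (A1 (x) A2) vec(X) = vec(A1 X A2^T).
    X = b.reshape(A1.shape[0], A2.shape[0])
    Y = np.linalg.solve(A1, X)        # eliminate A1 from the left
    Z = np.linalg.solve(A2, Y.T).T    # eliminate A2 from the right
    return Z.ravel()

rng = np.random.default_rng(1)

def spd(k):
    R = rng.standard_normal((k, k))
    return R @ R.T + k * np.eye(k)

M0, M1, B1 = spd(4), spd(3), spd(3)   # illustrative stand-ins
sigma, beta = 10.0, 1.0

# analogue of L_01 = M_0 (x) ((sigma + beta) M_1 + B_1)
A2 = (sigma + beta) * M1 + B1
b = rng.standard_normal(4 * 3)
x = kron_solve(M0, A2, b)
assert np.allclose(np.kron(M0, A2) @ x, b)
```

In practice one would factorize the univariate matrices once (e.g., by Cholesky) and reuse the factors for all right-hand sides.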
Completely analogous to~\cite[Section~5.2]{hofreither2016robust}, we obtain \begin{equation*} L_{\ell}^{-1}=\sum_{\alpha\in\{0,1\}^d} \mathbf{P}_{\ell,\alpha} L^{-1}_{\alpha} \mathbf{P}^T_{\ell,\alpha}. \end{equation*} \begin{theorem} \label{theo:SCMS} Let $d\in \mathbb{N}$ and $p\in \mathbb{N}$ with $p\geq 3$. The subspace corrected mass smoother $L_{\ell}$ satisfies \eqref{eq:smo1}, i.e., \[ (\mathcal{A}_\ell\uv{\ell},\uv{\ell}) \leq\frac{1}{\tau_{\ell}}(L_{\ell}\uv{\ell},\uv{\ell}) \leq C_S\,\lambda_\ell \, ((\mathcal{A}_\ell+h^{-4}\mathcal{M}_\ell)\uv{\ell},\uv{\ell}) \quad \forall \, \uv{\ell}\in \mathbb{R}^{\dim V_{\ell}} \] for all $\tau_\ell\in(0,\tau_0)$, where $\tau_0>0$ is some constant. \end{theorem} \begin{proof} The inequality \[ (\bar{\mathcal{A}}_\ell\uv{\ell},\uv{\ell}) \leq(L_{\ell}\uv{\ell},\uv{\ell}) \leq c ((\bar{\mathcal{A}}_\ell+h^{-4}\widehat{\mathcal{M}}_\ell)\uv{\ell},\uv{\ell}) \] was shown in \cite[Theorem 17]{sogn2019robust} for $\beta = 0$. Note that no part of that proof requires uniform grids, so the proof can be used almost verbatim also in the context of this paper. Using \eqref{eq:L2equal}, the extension to $\beta>0$ is straightforward. Using this and Lemma~\ref{lemma:equivB}, we get \begin{align*} (\widehat{\mathcal{A}}_\ell\uv{\ell},\uv{\ell}) &\leq d (\bar{\mathcal{A}}_\ell\uv{\ell},\uv{\ell}) \leq\frac{d}{\tau_{\ell}}(L_{\ell}\uv{\ell},\uv{\ell}) \leq c ((\widehat{\mathcal{A}}_\ell+h^{-4}\widehat{\mathcal{M}}_\ell)\uv{\ell},\uv{\ell}) \end{align*} for some constant $c>0$. Using~\eqref{eq:geoEquiv}, we obtain \begin{align*} ({\mathcal{A}}_\ell\uv{\ell},\uv{\ell}) &\leq\frac{c_1}{\tau_{\ell}}(L_{\ell}\uv{\ell},\uv{\ell}) \leq c_2 ((\mathcal{A}_\ell+h^{-4}\mathcal{M}_\ell)\uv{\ell},\uv{\ell}) \end{align*} for some constants $c_1,c_2>0$, which finishes the proof since $\lambda_\ell$ is bounded from below by a constant (Lemma~\ref{lem:eigenvalue}). 
\end{proof} \begin{corollary}\label{final:SCMS} Suppose that we solve the linear system~\eqref{eq:probMat} using a multigrid solver as outlined in Section~\ref{sec:MG} with the subspace corrected mass smoother as outlined in Section~\ref{sec:smoothers}. Then the convergence of the multigrid solver is described by the relation \begin{equation}\label{eq:finalconvergence} \left((I-B^s_L\mathcal{A}_L)\uv{L},\uv{L}\right)_{\mathcal{A}_L} \leq \left(1-\frac{1}{CL}\right) \left(\uv{L},\uv{L}\right)_{\mathcal{A}_L} , \end{equation} where the constant $C$ is independent of the grid sizes $h_\ell$, the number of levels $L$, the spline degree $p$ and the choice of the scaling parameter~$\beta$. It may depend on $d$, the constants $c_1$, $c_2$, $c_q$, and $c_r$ and the shape of $\Omega$, cf. Notation~\ref{notation:c}. \end{corollary} \begin{proof} We use Theorem~\ref{thrm:abstract}, whose assumptions are shown by Theorem~\ref{theo:appProof} and the combination of Lemma~\ref{lem:smo1} and Theorem~\ref{theo:SCMS}. \end{proof} \begin{remark} The operator $L^{-1}_{\ell}$ can be applied efficiently: the local contributions $L_{00}$, $L_{01}$ and $L_{10}$ are tensor products and can therefore be inverted efficiently. For example, we have $L_{00}^{-1} = \frac{1}{2\sigma+\beta} (M^{-1}_0 \otimes I)(I \otimes M^{-1}_0)$, where both $M^{-1}_0 \otimes I$ and $I \otimes M^{-1}_0$ can be realized by applying direct solvers for the univariate mass matrix to several right-hand sides. The operator $L_{11}$ is a sum of tensor products, so it has to be inverted as a whole. However, the dimension of the corresponding space is so small that the corresponding computational costs are negligible. More details on how to realize the smoother in a computationally efficient way are given in \cite[Section 5]{hofreither2016robust}. There, it is outlined how an efficient realization of the subspace corrected mass smoother is also possible in the case of more than two dimensions. 
\end{remark} \subsection{Symmetric Gauss-Seidel smoother and a hybrid smoother} \label{ssub:hybrid} The second smoother we consider is a symmetric Gauss-Seidel smoother consisting of one forward sweep and one backward sweep. It can be shown that this smoother satisfies Condition~\eqref{eq:smo1}, where the constant $C_S$ depends on the spline degree, see \cite{sogn2019robust}. This means that also the overall convergence result~\eqref{eq:finalconvergence} holds, where again $C$ depends on the spline degree. The symmetric Gauss-Seidel smoother works well for domains with nontrivial geometry transformations, but degenerates for large spline degrees (cf. \cite{gahalaut2013multigrid,hofreither2014spectral}). Since the symmetric Gauss-Seidel smoother works well for nontrivial geometry transformations and the subspace corrected mass smoother is robust with respect to the spline degree, we combine these smoothers into a hybrid smoother, which was first introduced in \cite{sogn2019robust}. This hybrid smoother consists of one forward Gauss-Seidel sweep, followed by one step of the subspace corrected mass smoother, finally followed by one backward Gauss-Seidel sweep. \section{Numerical experiments} \label{sec:numerical} In this section, we present the results of numerical experiments performed with the proposed algorithm. As computational domains, we first consider the unit square, then we consider the nontrivial geometries displayed in Figures~\ref{fig:domains2d} (two-dimensional domain) and \ref{fig:domains3d} (three-dimensional domain). We consider the problem \begin{align} \nonumber \begin{split} \beta u + \Delta^2 u &= f \quad \text{in} \quad \Omega,\\ u &= g_1 \quad \text{on} \quad \partial\Omega,\\ \Delta u &= g_2 \quad \text{on} \quad \partial\Omega, \end{split} \end{align} where \begin{align*} f(x) = (\beta+d^2\pi^4)\prod^d_{k=1}\sin(\pi x_k),\quad g_1(x) = \prod^d_{k=1}\sin(\pi x_k),\quad g_2(x) = -d\pi^2\prod^d_{k=1}\sin(\pi x_k). 
\end{align*} The discretization space on the parameter domain is the space of tensor-product B-splines. On the coarsest level ($\ell = 0$), we choose \begin{equation} \label{eq:gridpoints} \bm{\tau}_{0,i}= (0,\,1/3,\,1/2,\,4/5,\,1), \end{equation} for all spatial directions $i=1,\ldots,d$. The discretization on level $\ell$ is obtained by performing $\ell$ uniform $h$-refinement steps. The spline spaces have maximum continuity and spline degree $p$. We solve the resulting system using the preconditioned conjugate gradient (PCG) method, with a V-cycle multigrid method with one pre- and one post-smoothing step as preconditioner. A random initial guess is used and the stopping criterion is \[ \|\underline{r}^{(k)}_L\| \leq 10^{-8}\|\underline{r}^{(0)}_L\|, \] where $\underline{r}^{(k)}_L:= \underline{f}_L- \mathcal{A}_L\underline{x}^{(k)}_L$ is the residual at step $k$ and $\|\cdot\|$ denotes the Euclidean norm. All numerical experiments are implemented using the G+Smo library~\cite{gismoweb}. \subsection{Numerical experiments on parameter domain} We start with the unit square as the domain, that is, $\Omega = (0,1)^2$. Note that $g_1(x)=g_2(x)=0$ for this domain. For now, we consider the symmetric Gauss-Seidel smoother and the subspace corrected mass smoother. For both smoothers, we choose $\tau = 1$. The iteration counts are displayed in Table~\ref{t:Para2Db1} for $\beta = 1$, and in Table~\ref{t:Para2Db1e7} for $\beta = 10^7$. 
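The solver loop with this stopping criterion can be sketched as follows. This is a minimal illustration, not the G+Smo implementation: the V-cycle preconditioner is replaced by a generic callable \texttt{prec}, and a zero initial guess is used instead of a random one.

```python
import numpy as np

def pcg(A, b, prec, tol=1e-8, maxiter=500):
    # Preconditioned conjugate gradients with the relative stopping
    # criterion ||r^(k)|| <= tol * ||r^(0)|| in the Euclidean norm.
    x = np.zeros_like(b)
    r = b - A @ x
    r0_norm = np.linalg.norm(r)
    z = prec(r)
    p = z.copy()
    for k in range(maxiter):
        if np.linalg.norm(r) <= tol * r0_norm:
            return x, k
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        z_new = prec(r_new)          # one multigrid V-cycle in the actual solver
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxiter
```

The reported iteration counts correspond to the value of $k$ at which the criterion is first met.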
\begin{table}[ht] \begin{center} \begin{tabular}{| c || c | c | c | c | c | c | c |} \hline {$\ell\;\diagdown\; p$}& {\quad3\quad} & {\quad4\quad} & {\quad5\quad} & {\quad6\quad} & {\quad7\quad} & {\quad8\quad} & {\quad9\quad} \\ \hline \hline \multicolumn{8}{|l|}{Symmetric Gauss-Seidel} \\ \hline 5 & 10 & 16 & 28 & 45 & 71 & 120 & 210\\ \hline 6 & 10 & 16 & 27 & 44 & 71 & 119 & 209\\ \hline 7 & 10 & 16 & 27 & 44 & 72 & 117 & 212\\ \hline 8 & 11 & 16 & 27 & 45 & 72 & 120 & 221\\ \hline\hline \multicolumn{8}{|l|}{Subspace corrected mass smoother, $\sigma^{-1}_0 = 0.02$} \\ \hline 5 & 126 & 122 & 114 & 105 & 98 & 93 & 85\\ \hline 6 & 131 & 129 & 123 & 116 & 110 & 105 & 100\\ \hline 7 & 132 & 133 & 127 & 121 & 116 & 110 & 106\\ \hline 8 & 133 & 134 & 130 & 124 & 118 & 114 & 110\\ \hline \end{tabular} \caption{Iteration counts for 2D parametric domain, $\beta = 1$} \label{t:Para2Db1} \end{center} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{| c || c | c | c | c | c | c | c |} \hline {$\ell\;\diagdown\; p$}& {\quad3\quad} & {\quad4\quad} & {\quad5\quad} & {\quad6\quad} & {\quad7\quad} & {\quad8\quad} & {\quad9\quad} \\ \hline \hline \multicolumn{8}{|l|}{Symmetric Gauss-Seidel} \\ \hline 5 & 10 & 16 & 28 & 45 & 71 & 119 & 211\\ \hline 6 & 10 & 16 & 27 & 44 & 71 & 118 & 208\\ \hline 7 & 10 & 16 & 27 & 44 & 72 & 117 & 212\\ \hline 8 & 11 & 16 & 27 & 45 & 72 & 119 & 221\\ \hline\hline \multicolumn{8}{|l|}{Subspace corrected mass smoother, $\sigma^{-1}_0 = 0.02$} \\ \hline 5 & 124 & 121& 113& 104& 96 & 92 & 85\\ \hline 6 & 131 & 129& 123& 116& 110& 105& 99\\ \hline 7 & 132 & 133 & 127 & 120 & 116 & 110 & 106\\ \hline 8 & 133 & 134 & 130 & 124 & 116 & 118 & 114\\ \hline \end{tabular} \caption{Iteration counts for 2D parametric domain, $\beta = 10^7$} \label{t:Para2Db1e7} \end{center} \end{table} From the tables, we see that the symmetric Gauss-Seidel smoother performs well for small spline degrees, but degenerates for larger spline degrees. 
These results are not surprising since it is known that standard smoothers do not work well for large spline degrees (cf. \cite{gahalaut2013multigrid,hofreither2014spectral}). By Corollary~\ref{final:SCMS}, the multigrid solver with the subspace corrected mass smoother is robust with respect to the spline degree. The tables reflect this. However, the iteration numbers are relatively high. Table~\ref{t:Para2Db1uniform} shows the iteration numbers when using a uniform grid with spacing $1/4$ on the coarsest level ($\ell=0$), rather than the grid \eqref{eq:gridpoints}. The numbers in Table~\ref{t:Para2Db1uniform} are significantly smaller. This indicates that the subspace corrected mass smoother is sensitive to the quasi-uniformity constant $c_q$. \begin{table}[ht] \begin{center} \begin{tabular}{| c || c | c | c | c | c | c | c |} \hline {$\ell\;\diagdown\; p$}& {\quad3\quad} & {\quad4\quad} & {\quad5\quad} & {\quad6\quad} & {\quad7\quad} & {\quad8\quad} & {\quad9\quad} \\ \hline \hline \multicolumn{8}{|l|}{Subspace corrected mass smoother, $\sigma^{-1}_0 = 0.015$} \\ \hline 5 & 41 & 40 & 39 & 37 & 35 & 34 & 33\\ \hline 6 & 41 & 41 & 39 & 37 & 36 & 35 & 34\\ \hline 7 & 42 & 42 & 40 & 39 & 37 & 35 & 35\\ \hline 8 & 42 & 42 & 41 & 39 & 37 & 37 & 35\\ \hline \end{tabular} \caption{Iteration counts for 2D parametric domain with uniform grid, $\beta = 1$} \label{t:Para2Db1uniform} \end{center} \end{table} \subsection{Numerical experiments on physical domain} Now, we consider domains with nontrivial geometry transformations as displayed in Figures~\ref{fig:domains2d} and \ref{fig:domains3d}. The convergence of the subspace corrected mass smoother degrades significantly due to the nontrivial geometry mapping. To combat this, we consider the hybrid smoother described in Section~\ref{ssub:hybrid}. 
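Returning briefly to the grid comparison in Table~\ref{t:Para2Db1uniform}: the sensitivity to the quasi-uniformity constant is plausible from the grids themselves, since the largest knot span of the coarse grid \eqref{eq:gridpoints} is twice its smallest one, while the uniform grid has ratio one. A small computation of the span ratio $h_{\max}/h_{\min}$, which enters the quasi-uniformity constant $c_q$ (up to the normalization of Notation~\ref{notation:c}, which is assumed here):

```python
import numpy as np

# coarsest-level break points from the experiments and the uniform comparison grid
tau_nonuniform = np.array([0.0, 1/3, 1/2, 4/5, 1.0])
tau_uniform = np.linspace(0.0, 1.0, 5)   # spacing 1/4

def span_ratio(tau):
    # ratio of largest to smallest knot span, h_max / h_min
    h = np.diff(tau)
    return h.max() / h.min()

print(span_ratio(tau_nonuniform))  # largest span 1/3 vs smallest 1/6, ratio approx. 2
print(span_ratio(tau_uniform))     # 1.0
```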
\begin{figure}[h] \center \begin{minipage}{0.47\textwidth} \centering \includegraphics[width=0.56\textwidth]{2d-geometry.pdf} \caption{The two-dimensional domain} \label{fig:domains2d} \end{minipage} \begin{minipage}{0.47\textwidth} \centering \includegraphics[width=0.56\textwidth]{3d-geometry.pdf} \caption{The three-dimensional domain} \label{fig:domains3d} \end{minipage} \end{figure} Table~\ref{t:Ann2D} and Table~\ref{t:Ann3D} display the iteration numbers for the 2D and 3D physical domains, respectively. These iteration numbers are relatively small and seem to be robust with respect to both grid size and spline degree. Although the hybrid smoother is more expensive, as one of its smoothing steps can be viewed as two smoothing steps, the reduction of the iteration numbers outweighs this cost for larger spline degrees $p>4$. For smaller spline degrees, the symmetric Gauss-Seidel smoother is the ideal choice. \begin{table}[ht] \begin{center} \begin{tabular}{| c || c | c | c | c | c | c | c |} \hline {$\ell\;\diagdown\; p$}& {\quad3\quad} & {\quad4\quad} & {\quad5\quad} & {\quad6\quad} & {\quad7\quad} & {\quad8\quad} & {\quad9\quad} \\ \hline \hline \multicolumn{8}{|l|}{Hybrid smoother, $\beta = 1$} \\ \hline 5 & 28 & 23 & 23 & 24 & 26 & 27 & 27\\ \hline 6 & 28 & 23 & 22 & 25 & 24 & 26 & 26 \\ \hline 7 & 29 & 23 & 22 & 23 & 24 & 24 & 24\\ \hline 8 & 28 & 22 & 21 & 21 & 22 & 22 & 22\\ \hline\hline \multicolumn{8}{|l|}{Hybrid smoother, $\beta = 10^7$} \\ \hline 5 & 27 & 23 & 23 & 24 & 26 & 27 & 28 \\ \hline 6 & 28 & 23 & 22 & 25 & 25 & 26 & 26 \\ \hline 7 & 29 & 23 & 22 & 23 & 24 & 24 & 24\\ \hline 8 & 28 & 22 & 21 & 21 & 22 & 22 & 22\\ \hline \end{tabular} \caption{Iteration counts for 2D Physical domain, $\sigma^{-1}_0 = 0.015$, $\tau = 0.1$} \label{t:Ann2D} \end{center} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{| c || c | c | c | c | c |} \hline {$\ell\;\diagdown\; p$}& {\quad3\quad} & {\quad4\quad} & {\quad5\quad} & {\quad6\quad} & {\quad7\quad} \\ \hline \hline 
\multicolumn{6}{|l|}{Hybrid smoother, $\beta = 1$} \\ \hline 1 & 16 & 18 & 21 & 27 & 30 \\ \hline 2 & 31 & 28 & 26 & 29 & 32 \\ \hline 3 & 46 & 37 & 33 & 33 & 35 \\ \hline 4 & 50 & 41 & 34 & 34 & mem\\ \hline\hline \multicolumn{6}{|l|}{Hybrid smoother, $\beta = 10^7$} \\ \hline 1 & 10 & 11 & 13 & 17 & 20 \\ \hline 2 & 12 & 16 & 20 & 25 & 29 \\ \hline 3 & 16 & 19 & 22 & 24 & 28 \\ \hline 4 & 29 & 28 & 28 & 28 & mem\\ \hline \end{tabular} \caption{Iteration counts for 3D Physical domain, $\sigma^{-1}_0 = 0.020$, $\tau = 0.1$} \label{t:Ann3D} \end{center} \end{table} \begin{remark} All experiments have also been performed for the choice $\beta = 0$. In this case, one obtains iteration numbers that are identical to those obtained for $\beta = 1$. Therefore, we chose to only display the results for $\beta = 1$. \end{remark} \textbf{Acknowledgements.} This research was funded by the Austrian Science Fund (FWF): P31048. \bibliographystyle{siamplain}
https://arxiv.org/abs/2207.10873
The rational Chow rings of moduli spaces of hyperelliptic curves with marked points
We determine the rational Chow ring of the moduli space $\mathcal{H}_{g,n}$ of $n$-pointed smooth hyperelliptic curves of genus $g$ when $n \leq 2g+6$. We also show that the Chow ring of the partial compactification $\mathcal{I}_{g,n}$, parametrizing $n$-pointed irreducible nodal hyperelliptic curves, is generated by tautological divisors. Along the way, we improve Casnati's result that $\mathcal{H}_{g,n}$ is rational for $n \leq 2g+8$ to show $\mathcal{H}_{g,n}$ is rational for $n \leq 3g+5$.
\section{Introduction} The intersection theory of the moduli space of genus $g$ curves $\mathcal{M}_g$ is of central interest in algebraic geometry. The Chow ring with rational coefficients $A^*(\mathcal{M}_g)$ is completely understood for $g\leq 9$ \cite{FaberI, FaberII, Izadi, PenevVakil, 789}. On the other hand, much less is known about the Chow rings of the moduli spaces $\mathcal{M}_{g,n}$ of genus $g$ curves with $n>0$ marked points. The complete picture is only understood for $A^*(\mathcal{M}_{0,n})$ \cite{Keel}, $A^*(\mathcal{M}_{1,n})$ for $n\leq 10$ \cite{Belorousski}, and $A^*(\mathcal{M}_{2,1})$ \cite{FaberPhD}. Because of the structure of the boundary of the compactification $\overline{\M}_g$, computing Chow rings $A^*(\mathcal{M}_{g',n})$ with $g' \leq g$ and $n \leq 2(g - g')$ is a fundamental first step towards understanding $A^*(\overline{\M}_{g})$, which is only completely understood in genus $2$ and $3$ \cite{Mumford,FaberI}. Recent progress in the unpointed case has been based on studying Hurwitz spaces parametrizing curves of low gonality and arbitrary genus \cite{Hurwitz}. In this paper, we begin to pursue the same strategy in the pointed case by studying moduli spaces of pointed hyperelliptic curves $\H_{g,n}\subset \mathcal{M}_{g,n}$. The case $n=0$ is straightforward. The coarse space of $\H_g$ is a quotient of an open subset of affine space by a finite group action. Thus, with rational coefficients, one has $A^*(\H_g) = \mathbb{Q}$. (With \emph{integral coefficients}, the Chow ring of $\H_g$ is \emph{not} trivial, and was determined in \cite{DiLorenzo} for $g$ even, and \cite{EdidinFulghesu} for $g$ odd. In this paper, we shall work with rational coefficients throughout.) When one adds in marked points, however, the geometry and intersection theory of $\H_{g,n}$ become more interesting. 
With regards to its birational geometry, the moduli space $\H_{g,n}$ is known to be (see Figure \ref{f1}, left) \begin{itemize} \item rational when $n \leq 2g+8$ (Casnati \cite{Casnati}), \item uniruled when $n \leq 4g+5$ (Benzo and Agostini--Barros \cite{Benzo,AB}), \item of Kodaira dimension $4g+3$ when $n = 4g+6$ (Barros--Mullane \cite{BarrosMullane}), \item of general type when $n \geq 4g+7$ (Schwarz \cite{Schwarz}). \end{itemize} We use the birational geometry of $\H_{g,n}$ as a proxy for its complexity. In particular, the nice sorts of presentations that are typically used in calculating Chow rings often show a space is (uni)rational. Thus, we might expect that the intersection theory of $\H_{g,n}$ becomes more complicated as $n$ grows, being understandable in the rational range, but possibly quite difficult to access for $n$ large. The Chow ring is usually more difficult to compute than the birational type. Indeed, to prove rationality, one must understand a dense open subset of the space, whereas to determine the Chow ring, one must understand a full stratification and how the pieces fit together. Previous work on the intersection theory of $\H_{g,n}$ has determined the Chow group in codimension 1 (Scavia \cite{Scavia}) and the full (integral) Chow ring in the case of 1 marked point (Pernice \cite{Pernice}). Here, we determine the full rational Chow ring $A^*(\H_{g,n})$ for $n \leq 2g + 6$. The picture on the left below summarizes the previously known results about $\H_{g,n}$; the version on the right adds in our new contributions (Corollary \ref{hcor} and Theorem \ref{rat}). \begin{figure}[h!] \centering \includegraphics[width=5in]{hyp-spectrum.pdf} \caption{Summary of previously known results about $\H_{g,n}$ (left) and our new results in context (right). 
Loosely speaking, later colors in the rainbow indicate a ``more explicit" or ``more complete" understanding of the space.} \label{f1} \end{figure} \subsection{Statement of results} The main part of our work is to establish that the Chow ring of $\H_{g,n}$ is generated by divisors when $n \leq 2g+6$. Our techniques for doing so naturally extend over a partial compactification of the moduli space. Let $\mathcal{I}_{g,n}$ be the stack parametrizing irreducible, nodal $n$-pointed hyperelliptic curves. Equivalently, if $\mathcal{M}_{g,n}^{\mathrm{irr}} \subset \overline{\M}_{g,n}$ denotes the locus of irreducible curves, then $\mathcal{I}_{g,n}$ is the closure of $\H_{g,n}$ in $\mathcal{M}_{g,n}^{\mathrm{irr}}$. Let $\delta \in A^1(\mathcal{I}_{g,n})$ be the class of the locus of singular curves, and let $\psi_i \in A^1(\mathcal{I}_{g,n})$ denote the restriction of the $i^{\text{th}}$ psi class on $\overline{\M}_{g,n}$ to $\mathcal{I}_{g, n} \subset \overline{\M}_{g,n}$. We work over an arbitrary algebraically closed field of characteristic not $2$. \begin{thm} \label{divgen} Let $g \geq 2$ and $n \leq 2g+6$. Then $A^*(\mathcal{I}_{g,n})$ is generated by the divisor classes $\psi_1, \ldots, \psi_n$ and $\delta$. \end{thm} \begin{rem} Theorem \ref{divgen} is used in our forthcoming work \cite{forthcoming}, which makes significant progress towards determining the pairs $(g, n)$ for which $A^*(\overline{\M}_{g,n})$ is tautological. When $n \leq 2g + 6$, Theorem \ref{divgen} guarantees that the failure of $A^*(\overline{\M}_{g,n})$ to be tautological will not be the fault of classes supported on $\mathcal{I}_{g,n} \subset \overline{\M}_{g,n}$. For this application, it is advantageous to know the result for the larger locus $\mathcal{I}_{g,n}$, instead of just $\H_{g,n}$. \end{rem} On the smooth locus $\H_{g,n} \subset \mathcal{I}_{g,n}$, it is not hard to determine all relations among the psi classes for any $n$. 
The tautological ring of $\mathcal{M}_{g,n}$ is generated by the psi classes and pullbacks of kappa classes from $\mathcal{M}_g$. Since the kappa classes restrict to zero on $\H_g \subset \mathcal{M}_g$, it follows that the psi classes generate the tautological ring $R^*(\H_{g,n}) \subseteq A^*(\H_{g,n})$. The following proposition shows that the structure of the tautological ring is quite simple. \begin{prop} \label{rprop} For $g \geq 2$ and any $n$, we have \[R^*(\H_{g,n}) = \mathbb{Q}[\psi_1, \ldots, \psi_n]/(\psi_1, \ldots, \psi_n)^2.\] \end{prop} \begin{rem} In \cite{Tavakol}, Tavakol determines the tautological ring of a different partial compactification $\H_{g,n}^{\mathrm{rt}} \supset \H_{g,n}$, which implies Proposition \ref{rprop} by excision. In Lemma \ref{r2}, we nearly determine the tautological ring of our partial compactification $\mathcal{I}_{g,n}$, which also immediately implies Proposition \ref{rprop}. \end{rem} Combining Proposition \ref{rprop} with Theorem \ref{divgen}, we obtain the following. \begin{cor} \label{hcor} If $n \leq 2g + 6$, then \[A^*(\H_{g,n}) = R^*(\H_{g,n}) = \mathbb{Q}[\psi_1,\ldots,\psi_n]/(\psi_1,\ldots,\psi_n)^2.\] \end{cor} \begin{rem} Note that Corollary \ref{hcor} and Theorem \ref{divgen} do \emph{not} hold for all $n$. In \cite[Theorem 3]{GraberPandharipande}, Graber and Pandharipande prove that $A^*(\mathcal{M}_{2,20}) \neq R^*(\mathcal{M}_{2,20})$ by producing an explicit non-tautological algebraic cycle. (Note that $n = 20$ falls in the range where $\mathcal{M}_{2,n} = \H_{2,n}$ is of general type.) \end{rem} Our approach to proving Theorem \ref{divgen} is inspired by Casnati's method of constructing models in $\mathbb{P}^2$ \cite{Casnati}, but we use models on $\mathbb{P}^1 \times \mathbb{P}^1$ instead. One consequence of our approach is an improvement on Casnati's bound for when these spaces are rational. \begin{thm} \label{rat} If $n \leq 3g+5$, then $\H_{g,n}$ is rational.
\end{thm} The discrepancy between the range where we can prove $\H_{g,n}$ is rational and where we can determine the Chow ring is the difference between when a \emph{general} collection of $n$ points on $\mathbb{P}^1 \times \mathbb{P}^1$ imposes independent conditions on a certain linear system versus when \emph{every} configuration of $n$ points we care about imposes independent conditions. \subsection{Structure of the paper} In Section \ref{stacks}, we define the stacks $\mathcal{I}_{g,n}$ of irreducible, nodal pointed hyperelliptic curves and the locally closed substacks in the stratification we shall use. In Section \ref{tsec}, we define the tautological classes on $\mathcal{I}_{g,n}$ and prove several relations among them. This involves constructing some explicit quotient stacks with the same coarse spaces as $\mathcal{I}_{g,0}$ and $\mathcal{I}_{g,1}$. For $n \geq 2$, however, we cannot give a global quotient description of $\mathcal{I}_{g,n}$. Instead, we build each of the pieces of our stratification using models on $\mathbb{P}^1 \times \mathbb{P}^1$ in Section \ref{quotstack}. The two maps to $\mathbb{P}^1$ used in forming these models are the hyperelliptic map and the complete linear series of a (weighted) sum of marked points having degree $g+1$. For this linear series to be base point free, certain points are prohibited from being conjugate or Weierstrass. An inductive argument allows us to eliminate strata where a pair of points is conjugate. Meanwhile, we construct strata with marked Weierstrass points by imposing tangency conditions to vertical lines of the ruling in our models. At the end of Section \ref{quotstack}, we present a proof of Theorem \ref{divgen}, relying on Scavia's result over $\mathbb{C}$. In Section \ref{pp}, we work to establish the needed parts of Scavia's theorem over an arbitrary algebraically closed field of characteristic not $2$.
\subsection{Notations and Conventions} If the ground field is not mentioned, we are working over an algebraically closed field $k$ of characteristic not $2$. We will explicitly mention where we are using results that are only known to hold over $\mathbb{C}$. We use the classical subspace convention for projective bundles. \subsection*{Acknowledgments} We are grateful to our advisors, Elham Izadi and Ravi Vakil, respectively, for the many helpful conversations. We thank Dan Petersen for his comments and for pointing out \cite{Tavakol}. We also thank Renzo Cavalieri for his comments on an earlier draft. \section{Pointed irreducible, nodal hyperelliptic curves}\label{stacks} Let $S$ be a $k$-scheme. A family of irreducible, nodal hyperelliptic curves of genus $g$ over $S$ is a morphism of $S$-schemes $C\rightarrow P\rightarrow S$ where $C\rightarrow S$ is a family of irreducible, nodal curves of genus $g$, $P\rightarrow S$ is a $\mathbb{P}^1$-fibration, and $C\rightarrow P$ is finite and flat of degree $2$. An $n$-pointed family of irreducible, nodal hyperelliptic curves over $S$ is a family of irreducible, nodal hyperelliptic curves $C\rightarrow P\rightarrow S$ together with $n$ disjoint sections $p_1,\dots,p_n:S\rightarrow C$ of $C\rightarrow S$ such that the sections are disjoint from the nodes in fibers of $C \to S$. An arrow between families $(C\rightarrow P\rightarrow S, p_1,\dots,p_n:S\rightarrow C)$ and $(C'\rightarrow P'\rightarrow S',p_1',\dots,p_n':S'\rightarrow C')$ is simply a commutative diagram \[ \begin{tikzcd} C \arrow[d] \arrow[r] & P \arrow[d] \arrow[r] & S \arrow[d] \\ C' \arrow[r] & P' \arrow[r] & S' \end{tikzcd} \] that is also compatible with the sections. For brevity, we sometimes omit $S$ and $P$, and write $(C,p_1,\dots,p_n)$ for a family of $n$-pointed irreducible, nodal hyperelliptic curves.
\begin{definition} The stack of \emph{$n$-pointed irreducible, nodal genus $g$ hyperelliptic curves} $\mathcal{I}_{g,n}$ is the stack whose objects are families of $n$-pointed irreducible, nodal hyperelliptic curves with morphisms defined as above. The stack of \emph{$n$-pointed genus $g$ hyperelliptic curves} $\H_{g,n} \subset \mathcal{I}_{g,n}$ is the substack defined by the additional condition that $C \to S$ is smooth. \end{definition} Let $\mathcal{M}_{g,n}^{\mathrm{irr}} \subset \overline{\M}_{g,n}$ denote the substack of pointed, irreducible curves. One can also describe $\mathcal{I}_{g,n}$ as follows. \begin{lem} $\mathcal{I}_{g,n}$ is the closure in $\mathcal{M}_{g,n}^{\mathrm{irr}}$ of $\H_{g,n} \subset \mathcal{M}_{g,n}$. \end{lem} \begin{proof} The closure of $\H_{g,n}$ inside $\overline{\M}_{g,n}$ is the preimage along $\overline{\M}_{g,n} \to \overline{\M}_g$ of $\overline{\H}_g \subset \overline{\M}_g$. The locus $\overline{\H}_g \subset \overline{\M}_g$ is well-understood as the image of the space of admissible degree $2$ covers. The admissible covers with irreducible source are exactly the irreducible nodal hyperelliptic curves. When we add markings, we require that they do not meet the nodes, so the curve stays irreducible. \end{proof} \subsection{The hyperelliptic involution and the dualizing sheaf} \label{hi} Given an irreducible, nodal hyperelliptic curve $C$, let $\nu: \tilde{C} \to C$ be the normalization. Write $q_i, q_i'$, $i = 1, \ldots, m$ for the pairs of points lying over the nodes of $C$. Considering the composition $\tilde{C} \to C \to \mathbb{P}^1$, we see that $\tilde{C}$ admits a degree $2$ map $\tilde{C} \to \mathbb{P}^1$ and each pair $q_i, q_i'$ lies in the same fiber of this map. \begin{center} \includegraphics[width=2.5in]{hyp-glue2.pdf} \end{center} If $g(\tilde{C}) \geq 2$, then $\tilde{C}$ is a hyperelliptic curve and $q_i, q_i'$ are conjugate under the hyperelliptic involution on $\tilde{C}$.
If $g(\tilde{C}) = 1$, then we have the condition $q_i + q_i' \sim q_j + q_j'$ for all $i, j$. If $g(\tilde{C}) = 0$, then any two points are linearly equivalent, but the following condition on triples $(i, j, k)$ must be satisfied: the degree $2$ polynomial vanishing at $q_i+q_i'$ must lie in the span of the degree $2$ polynomial vanishing at $q_j+q_j'$ and the degree $2$ polynomial vanishing at $q_k + q_k'$. In summary, irreducible nodal hyperelliptic curves are just built by gluing together conjugate points on a hyperelliptic curve $\tilde{C}$ of lower genus (with a suitable modification when $\tilde{C}$ has genus $1$ or $0$). \begin{lem} Let $C$ be an irreducible, nodal hyperelliptic curve of genus $g \geq 2$. Then $C$ admits a unique degree $2$ map $\alpha: C \to \mathbb{P}^1$ (up to automorphisms of $\mathbb{P}^1$). \end{lem} \begin{proof} In this proof, we write that two maps to $\mathbb{P}^1$ are equal if they differ by composition with an automorphism of $\mathbb{P}^1$. Let $\nu: \tilde{C} \to C$ be the normalization. Suppose $\alpha': C \to \mathbb{P}^1$ is another degree $2$ map. Then $\alpha \circ \nu$ and $\alpha' \circ \nu$ are both degree $2$ maps $\tilde{C} \to \mathbb{P}^1$. If $g(\tilde{C}) \geq 2$, then we immediately see $\alpha' \circ \nu = \alpha \circ \nu$, and hence $\alpha' = \alpha$. If $g(\tilde{C}) = 1$, then there is a unique degree two map $\tilde{C} \to \mathbb{P}^1$ that sends $q_1$ and $q_1'$ to the same point, namely the complete linear system of $\O_{\tilde{C}}(q_1 + q_1')$. As $\alpha' \circ \nu$ and $\alpha \circ \nu$ both send $q_1$ and $q_1'$ to the same point, we see $\alpha' \circ \nu = \alpha \circ \nu$, and hence $\alpha' = \alpha$. If $g(\tilde{C}) = 0$, then since $g(C) \geq 2$, we know $C$ has at least two nodes. But there is a unique degree two map $\tilde{C} \to \mathbb{P}^1$ which sends $q_1$ and $q_1'$ to the same point \emph{and} sends $q_2$ and $q_2'$ to the same point.
Hence, again we see $\alpha' \circ \nu = \alpha \circ \nu$, and so $\alpha' = \alpha$. \end{proof} \begin{definition} We call the unique degree two map $\alpha: C \to \mathbb{P}^1$ the hyperelliptic map. We write $L := \alpha^*\O_{\mathbb{P}^1}(1)$ for the corresponding degree $2$ line bundle. Given a point $p \in C$, we write $\overline{p}$ for its conjugate under the hyperelliptic involution. Nodes are always fixed under the hyperelliptic involution. \end{definition} We now note that two facts which are well-known for smooth hyperelliptic curves also hold in the irreducible nodal case (by the same arguments as in the smooth case). \begin{lem} Let $C$ be an irreducible, nodal hyperelliptic curve of genus $g$, $\alpha: C \to \mathbb{P}^1$ be the hyperelliptic map, and let $L = \alpha^*\O_{\mathbb{P}^1}(1)$ be the corresponding degree $2$ line bundle. Then the dualizing sheaf satisfies $\omega_C \cong L^{\otimes g-1}$. \end{lem} \begin{proof} By Riemann--Roch, \[h^0(C, L^{\otimes g-1}) - h^0(C, \omega_C \otimes (L^{\otimes g-1})^{\vee}) = (2g - 2) - g + 1 = g - 1. \] Meanwhile, we have $h^0(C, L^{\otimes g-1}) \geq h^0(\mathbb{P}^1, \O(g-1)) = g$, so $h^0(C, \omega_C \otimes (L^{\otimes g - 1})^\vee) \geq 1$. But $ \omega_C \otimes (L^{\otimes g - 1})^\vee$ has degree $0$, so it has a non-trivial section if and only if it is trivial, which means $\omega_C \cong L^{\otimes g - 1}$. \end{proof} Geometrically, this tells us that the complete linear system for $\omega_C$ sends $C$ to a double cover of a degree $g - 1$ rational normal curve in $\mathbb{P}^{g-1}$. This implies the following. \begin{lem} \label{georr} Suppose $D = x_1 + \ldots + x_g$ is an effective degree $g$ divisor with $h^0(C, \O(D)) \geq 2$. Then $x_i = \overline{x}_j$ for some $i \neq j$. \end{lem} \begin{proof} By Riemann--Roch, if $h^0(C, \O(D)) \geq 2$, then $h^0(C, \omega_C(-D)) \geq 1$, which is to say the image of $D$ under the canonical embedding $C \to \mathbb{P}^{g-1}$ lies in a hyperplane.
However, any degree $g$ divisor on a rational normal curve in $\mathbb{P}^{g-1}$ spans all of $\mathbb{P}^{g-1}$. Therefore, a length two subscheme of $D$ must be sent to a length one subscheme under the canonical map, which is to say $x_i = \overline{x}_j$ for some $i \neq j$. \end{proof} \subsection{Our stratification} In order to compute the Chow ring of $\mathcal{I}_{g,n}$, we will introduce a stratification. It keeps track of the two interesting geometric phenomena that happen on pointed hyperelliptic curves: \begin{enumerate} \item a marked point is fixed by the hyperelliptic involution (this is called a Weierstrass point), or \item two marked points are conjugate under the hyperelliptic involution. \end{enumerate} Recall that, if they exist, the nodes are not allowed to be marked. Recall that we write $\overline{p}$ for the conjugate of $p$ under the hyperelliptic involution. First, define divisors \[D_{ij} =\{(C, p_1, \ldots, p_n) \in \mathcal{I}_{g,n} : p_i = \overline{p}_j\}, \] so $D_{ij}$ is the locus where $p_i$ and $p_j$ are conjugate under the hyperelliptic involution, and $D_{ii}$ is the locus where $p_i$ is a Weierstrass point. \begin{prop}\label{induction} Let $i\neq j$ and suppose that $A^*(\mathcal{I}_{g,n-1})$ is generated by divisors. Then the Chow ring of $D_{ij}\subset \mathcal{I}_{g,n}$ is generated by restrictions of divisors from $\mathcal{I}_{g,n}$. \end{prop} \begin{proof} There is a commutative square \[ \begin{tikzcd} D_{ij} \arrow[hook]{r} \arrow{d}[swap]{\sim} & \mathcal{I}_{g,n} \arrow{d} \\ \mathcal{I}_{g,n-1} \smallsetminus D_{ii} \arrow{r} & \mathcal{I}_{g,n-1}, \\ \end{tikzcd} \] where the vertical maps forget the $j^{\mathrm{th}}$ marked point. The left vertical map is an isomorphism. Note that above $D_{ij}\subset \mathcal{I}_{g,n}$, while $D_{ii}\subset \mathcal{I}_{g,n-1}$. Taking Chow rings, we have \[ A^*(\mathcal{I}_{g,n-1})\twoheadrightarrow A^*(\mathcal{I}_{g,n-1}\smallsetminus D_{ii})\cong A^*(D_{ij}). 
\] By the assumption that $A^*(\mathcal{I}_{g,n-1})$ is generated by divisors, it follows that $A^*(D_{ij})$ is generated by restrictions of divisors from $\mathcal{I}_{g,n}$. \end{proof} \begin{rem} A similar idea to the above --- relating divisors where a certain linear equivalence holds to an open subset of the moduli space with one less point --- was used by Belorousski \cite{Belorousski} in studying the Chow rings of $\mathcal{M}_{1,n}$ for $n \leq 10$. \end{rem} Thus, if $A^*(\mathcal{I}_{g,n-1})$ is generated by divisors, then using the push-pull formula, every class supported on $D_{ij} \subset \mathcal{I}_{g,n}$ for $i\neq j$ is a polynomial in divisor classes. Let us write \[\mathcal{I}_{g,n}^\circ := \mathcal{I}_{g,n} \smallsetminus \bigcup_{i < j} D_{ij}.\] On $\mathcal{I}_{g,n}^\circ$, no pair of marked points is conjugate, but the marked points may still be Weierstrass. Thus, by excision and Proposition \ref{induction}, to establish that $A^*(\mathcal{I}_{g,n})$ is generated by divisors, it will suffice to show that $A^*(\mathcal{I}_{g,n}^\circ)$ is generated by divisors. When $g+1 \leq n \leq 2g+6$, we construct $\mathcal{I}_{g,n}^\circ$ directly as a quotient of an open subset of a projective bundle over an open subset of affine space. Meanwhile, for $n \leq g$, we must further stratify $\mathcal{I}_{g,n}^\circ$. To set this up, we will also require the slightly smaller open where no pair of points is conjugate and the $i^{\mathrm{th}}$ point is prohibited from being Weierstrass: \begin{align}\label{Hl} \mathcal{I}_{g,n}^{\circ,i} &:=\mathcal{I}_{g,n} \smallsetminus \left( D_{ii} \cup \bigcup_{j<k} D_{jk} \right) &\qquad &(\text{if $n < g+1$}) \intertext{In order to combine arguments in the cases $n < g+1$ and $n \geq g+1$, we use the convention that} \mathcal{I}_{g,n}^{\circ, i} &:= \mathcal{I}_{g,n}^\circ &\qquad &(\text{if $n \geq g+1$}).
\label{conv} \end{align} The basic geometric loci left on $\mathcal{I}_{g,n}^\circ$ are the loci where some collection of points are Weierstrass. When $n < g+1$, we are going to stratify $\mathcal{I}_{g,n}^\circ$ into disjoint locally closed strata $W_0 \cup W_1 \cup \cdots \cup W_n$. The subscript $i$ will measure the number of initial consecutive Weierstrass points when the points are read in order. Precisely, $W_i$ is the locally closed stratum \begin{align} W_i &:= \{(C, p_1, \ldots, p_n) \in \mathcal{I}_{g,n}^\circ: p_j = \overline{p}_j \text{ for $j \leq i$ and } p_{i+1} \neq \overline{p}_{i+1}\} \label{si}\\ &= \mathcal{I}_{g,n}^\circ \cap \left( \bigcap_{j \leq i} D_{jj} \smallsetminus \bigcap_{j \leq i+1} D_{jj}\right). \label{si2} \end{align} Note that $W_i$ is a closed subset of $\mathcal{I}_{g,n}^{\circ,i+1}$. In particular, $W_0 = \mathcal{I}_{g,n}^{\circ, 1}$. Generically, curves in $W_i$ have $i$ marked Weierstrass points, so $W_i$ has codimension $i$. For each $i$, we have $\overline{W}_i = W_i \cup \cdots \cup W_n$. \begin{lem} \label{fc} The fundamental class $[\overline{W}_i]$ is a product of divisors. \end{lem} \begin{proof} By \eqref{si2}, we see that $\overline{W}_i = \mathcal{I}_{g,n}^\circ \cap(\bigcap_{j \leq i} D_{jj})$ is a dimensionally transverse intersection of $i$ divisors. \end{proof} Thus, to show that $A^*(\mathcal{I}_{g,n}^\circ)$ is generated by divisors, it will suffice to show that each $A^*(W_i)$ is generated by restrictions of divisors from $\mathcal{I}_{g,n}^\circ$. We shall show this by constructing $\mathcal{I}_{g,n}^{\circ,i+1}$ as a quotient of an open subset of a projective bundle over an open subset of affine space. Furthermore, $W_i \subset \mathcal{I}_{g,n}^{\circ,i+1}$ will be the quotient of an open subset of the projectivization of a particular subbundle. 
This will allow us to see that $A^*(W_i)$ is generated by restrictions of classes from $A^*(\mathcal{I}_{g,n}^{\circ, i+1})$, which we in turn show is generated by divisors (note that such divisors are restrictions from $\mathcal{I}_{g,n}^\circ$ by excision). The case $W_n$ is slightly different and will be treated in Lemma \ref{Wn}. \section{Tautological classes and relations} \label{tsec} We define the \emph{tautological ring} $R^*(\mathcal{I}_{g,n}) \subseteq A^*(\mathcal{I}_{g,n})$ (resp. $R^*(\H_{g,n}) \subseteq A^*(\H_{g,n})$) to be the subring generated by restrictions of tautological classes from $\overline{\M}_{g,n}$. \subsection{The psi and boundary divisors} Let \[ \pi:\mathcal{C}_{g,n}\rightarrow \mathcal{I}_{g,n} \] be the universal family of curves over $\mathcal{I}_{g,n}$. The morphism $\pi$ has $n$ universal pairwise disjoint sections \[ p_1,\dots,p_n:\mathcal{I}_{g,n}\rightarrow \mathcal{C}_{g,n}, \] which are also disjoint from the singular locus of $\mathcal{C}_{g,n} \to \mathcal{I}_{g,n}$. \begin{definition} The cotangent line bundles are the bundles \[ \mathbb{L}_i:=p_i^*\omega_{\pi}. \] The $\psi$ classes are the divisors \[ \psi_i:=c_1(\mathbb{L}_i)\in A^1(\mathcal{I}_{g,n}). \] \end{definition} The $\psi$ classes behave well with respect to pulling back by the morphisms that forget marked points. There are natural maps $\pi_j: \mathcal{I}_{g,n} \to \mathcal{I}_{g,n-1}$ where we forget the $j^{\mathrm{th}}$ marked point, under which we have $\pi_j^* \psi_i = \psi_i$ (this equality uses that our curve is irreducible and the markings do not meet the singular locus). \begin{definition} Let $\Delta \subset \mathcal{I}_{g,n}$ be the divisor of singular curves, so $\mathcal{I}_{g,n} \smallsetminus \Delta = \H_{g,n}$. Define \[\delta :=[\Delta] \in A^1(\mathcal{I}_{g,n}).\] \end{definition} \begin{lem} The divisor $\Delta \subset \mathcal{I}_{g,n}$ is irreducible.
\end{lem} \begin{proof} By the discussion in Section \ref{hi}, we see that $\Delta$ is the image of $D_{12} \subset \mathcal{I}_{g,n+2}$ under the map $D_{12} \to \mathcal{I}_{g,n}$ that glues together the first two marked points, which are conjugate. As in Proposition \ref{induction}, we have $D_{12}$ isomorphic to $\mathcal{I}_{g,n+1} \smallsetminus D_{11}$, which is irreducible. \end{proof} In \cite[Theorem 1.1]{Scavia}, Scavia shows that, over the complex numbers, the $\psi$ classes and boundary divisors form a basis for the Picard group of $\overline{\H}_{g,n}$. Our $\mathcal{I}_{g,n}$ is the complement of all boundary divisors in $\overline{\H}_{g,n}$ besides the divisor $\Delta$ of irreducible nodal curves. Because $\Delta$ is irreducible, we obtain the following. \begin{thm} \label{Scavia} Let $k= \mathbb{C}$ and take $\mathcal{I}_{g,n}$ as a stack over $\Spec \mathbb{C}$. For every $g\geq 2$ and $n\geq 0$, $A^1(\mathcal{I}_{g,n})$ has a basis given by $\psi_1,\dots,\psi_n$ and $\delta$. \end{thm} We will prove Theorem \ref{divgen} for $\mathcal{I}_{g,n}$ defined over any algebraically closed field $k$ of characteristic not $2$. Using Scavia's result (Theorem \ref{Scavia}), we can simplify the proof when we work over the complex numbers: to prove Theorem \ref{divgen} over $\mathbb{C}$ it remains to prove that $A^*(\mathcal{I}_{g,n})$ is generated by divisors. We have organized the paper so that the reader who is only interested in the case over $\mathbb{C}$ need not read the final Section \ref{pp}. Finally, let us relate the geometric classes in our stratification to the tautological classes we have just defined. In \cite{EdidinHu}, Edidin and Hu use the method of test curves to compute the classes of $D_{ij}$ in $A^1(\overline{\H}_{g,n})$.
Restricting to $\mathcal{I}_{g,n}$, we have \begin{align} [D_{ij}] &= \frac{1}{g-1}(\psi_i + \psi_j) - \frac{1}{2(2g+1)(g-1)}\delta \label{dij} \\ [D_{ii}] &= \frac{g+1}{g-1} \cdot \psi_i - \frac{1}{2(2g+1)(g-1)}\delta. \label{dii} \end{align} Although Edidin--Hu cite Scavia's result over $\mathbb{C}$ that the $\psi$ classes and boundary divisors generate the Picard group, their argument does not use the complex numbers in any other way. Thus, by our work in Section \ref{pp} generalizing Scavia's result, these formulas will be shown to hold over any algebraically closed ground field of characteristic not $2$. \subsection{Relations} \label{relsec} We now calculate several relations among the $\psi$ and $\delta$ classes on $\mathcal{I}_{g,n}$. These relations nearly determine $R^*(\mathcal{I}_{g,n})$ and fully determine $R^*(\H_{g,n})$. First, we compute $A^*(\mathcal{I}_{g,0})$. \begin{lem} \label{0pt} We have $A^*(\mathcal{I}_{g,0}) = \mathbb{Q}[\delta]/(\delta^3)$. \end{lem} \begin{proof} An irreducible nodal hyperelliptic curve is determined by its branch divisor. The branch divisor is a degree $2g+2$ divisor on $\mathbb{P}^1$ with no point of multiplicity $3$ or more (to make the double cover at worst nodal) and at least one point of multiplicity $1$ (to make the cover irreducible). Let $\mathcal{V}$ be the universal rank $2$ vector bundle on $\BGL_2$. Define $\Delta_3 \subset \Sym^{2g+2} \mathcal{V}^\vee$ to be the degree $2g + 2$ forms on $\mathbb{P} \mathcal{V}$ with a root of multiplicity $3$ or more. Let $R \subset \Sym^{2g+2} \mathcal{V}^\vee$ be the forms with no simple roots. Note that $R$ has codimension $g+1 \geq 3$: a form all of whose roots have multiplicity at least $2$ has at most $g+1$ distinct roots, so $R$ has fiber dimension at most $g+2$ inside the rank $2g+3$ bundle $\Sym^{2g+2} \mathcal{V}^\vee$. Then the coarse space of $\mathcal{I}_{g, 0}$ is the same as the coarse space of \[\Sym^{2g+2} \mathcal{V}^\vee \smallsetminus (\Delta_3 \cup R),\] which we think of as an open subset of a vector bundle over $\BGL_2$.
We therefore identify the Chow ring of $\mathcal{I}_{g, 0}$ with the Chow ring of $\Sym^{2g+2} \mathcal{V}^\vee \smallsetminus (\Delta_3 \cup R)$. We can determine the relations that come from excising $\Delta_3$ using a diagram of the form \begin{center} \begin{tikzcd} \tilde{\Delta}_3 \arrow{dr} \arrow{r} & \pi^* \Sym^{2g+2}\mathcal{V}^\vee \arrow{r}\arrow{d} & \Sym^{2g+2}\mathcal{V}^\vee \arrow{d} \\ & \mathbb{P} \mathcal{V} \arrow{r}{\pi} & \BGL_2, \end{tikzcd} \end{center} where $\tilde{\Delta}_3$ is the kernel of the principal parts evaluation map \[\pi^*\Sym^{2g+2} \mathcal{V}^\vee = \pi^*\pi_* \O_{\mathbb{P} \mathcal{V}}(2g+2) \rightarrow P^2_{\mathbb{P} \mathcal{V}/\BGL_2}(\O_{\mathbb{P} \mathcal{V}}(2g+2)). \] The space $\tilde{\Delta}_3$ parametrizes points on $\mathbb{P} \mathcal{V}$ together with forms having a root of multiplicity $3$ or more at that point. In particular, $\tilde{\Delta}_3$ surjects onto $\Delta_3 \subset \Sym^{2g+2} \mathcal{V}^\vee$. Since we work with rational coefficients, all classes supported on $\Delta_3$ are pushforwards of classes from $\tilde{\Delta}_3$. Let $c_1, c_2$ be the generators of $A^*(\BGL_2)$ and let $z = c_1(\O_{\mathbb{P} \mathcal{V}}(1))$. By the projective bundle theorem, $A^*(\mathbb{P} \mathcal{V}) = \mathbb{Z}[c_1,c_2,z]/(z^2 + c_1z + c_2)$. The fundamental class of $\tilde{\Delta}_3 \subset \pi^*\Sym^{2g+2}\mathcal{V}^\vee$ is the top Chern class of the principal parts bundle $c_3(P^2_{\mathbb{P} \mathcal{V}/\BGL_2}(\O_{\mathbb{P} \mathcal{V}}(2g+2)))$. To calculate this top Chern class, we use that $P^2_{\mathbb{P} \mathcal{V}/\BGL_2}(\O_{\mathbb{P} \mathcal{V}}(2g+2))$ is filtered by the line bundles \[\O_{\mathbb{P} \mathcal{V}}(2g+2), \qquad \O_{\mathbb{P} \mathcal{V}}(2g+2) \otimes \Omega_{\mathbb{P} \mathcal{V}/\BGL_2}, \qquad \O_{\mathbb{P} \mathcal{V}}(2g+2) \otimes \Omega_{\mathbb{P} \mathcal{V}/\BGL_2}^{\otimes 2}\] and the splitting principle.
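Concretely, the filtration above has Chern roots \[(2g+2)z, \qquad 2gz - c_1, \qquad (2g-2)z - 2c_1,\] so the desired top Chern class is the product of these three roots; one expands this product using the relation $z^2 = -c_1z - c_2$ and then pushes forward along $\pi$ using $\pi_*(1) = 0$ and $\pi_*(z) = 1$.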
By the relative Euler sequence, $c_1(\Omega_{\mathbb{P} \mathcal{V}/\BGL_2}) = -2z -c_1$. By \cite[Trapezoid Lemma 2.1]{part2}, the image of $A^{*-1}(\tilde{\Delta}_3) \to A^*(\Sym^{2g+2}\mathcal{V}^\vee)$ is the ideal in $A^*(\Sym^{2g+2}\mathcal{V}^\vee) \cong A^*(\BGL_2)$ generated by \begin{align} \pi_*(c_3(P^2_{\mathbb{P} \mathcal{V}/\BGL_2}(\O_{\mathbb{P} \mathcal{V}}(2g+2))) ) &=(8g^3 + 12g^2 + 4g)c_1^2 + (-8g^3 + 8g)c_2 \label{firstp} \\ \intertext{and} \pi_*(c_3(P^2_{\mathbb{P} \mathcal{V}/\BGL_2}(\O_{\mathbb{P} \mathcal{V}}(2g+2))) \cdot z) &= (-8g^3 - 12g^2 - 4g)c_1^3 + (16g^3 + 12g^2 - 8g - 4)c_1c_2. \label{secondp} \end{align} Setting the right-hand side of \eqref{firstp} to zero tells us that $c_2$ is a non-zero multiple of $c_1^2$; then, setting \eqref{secondp} to zero tells us $c_1^3 = 0$. From this, it follows that $A^*(\mathcal{I}_{g,0}) \cong \mathbb{Q}[c_1]/(c_1^3)$. Finally, we express the class $\delta$ in terms of $c_1$. The divisor $\Delta \subset \Sym^{2g+2} \mathcal{V}^\vee$ corresponds to the degree $2g+2$ forms on $\mathbb{P} \mathcal{V}$ with a double root. In a similar fashion to the above argument, we may realize $\Delta$ as the image of the kernel of the principal parts evaluation map \[\pi^* \Sym^{2g+2} \mathcal{V}^\vee = \pi^*\pi_*\O_{\mathbb{P} \mathcal{V}}(2g+2) \to P^1_{\mathbb{P} \mathcal{V}/\BGL_2}(\O_{\mathbb{P} \mathcal{V}}(2g+2)).\] Pushing forward, we have \begin{equation} \label{dclass} \delta = \pi_*(c_2(P^1_{\mathbb{P} \mathcal{V}/\BGL_2}(\O_{\mathbb{P} \mathcal{V}}(2g+2) ))) = (-4g^2 - 6g - 2)c_1. \end{equation} In particular, $\delta$ is a non-zero multiple of $c_1$, so we get the claimed presentation of $A^*(\mathcal{I}_{g,0})$. \end{proof} \begin{cor} \label{tcor} The tautological ring $R^*(\mathcal{I}_{g,n})$ is generated by the $\psi$ classes and $\delta$.
\end{cor} \begin{proof} By definition, $R^*(\mathcal{I}_{g,n})$ is generated by the restrictions of psi classes, the kappa classes, and boundary strata from $\overline{\M}_{g,n}$. Since we require our curves to be irreducible, the only boundary strata that meet $\mathcal{I}_{g,n}$ are irreducible curves with some number of nodes, which are all pulled back from $\overline{\M}_{g}$. On $\overline{\M}_{g,n}$, the kappa classes are expressible as polynomials in the pullbacks of kappa classes from $\overline{\M}_g$ and psi classes and boundary classes. Considering the commutative square \begin{center} \begin{tikzcd} \mathcal{I}_{g,n} \arrow{r} \arrow{d} & \overline{\M}_{g,n} \arrow{d} \\ \mathcal{I}_{g,0} \arrow{r} & \overline{\M}_{g} \end{tikzcd} \end{center} and the fact that $A^*(\mathcal{I}_{g,0})$ is generated by $\delta$, we see that $R^*(\mathcal{I}_{g,n})$ is generated by $\delta$ and the $\psi$ classes. \end{proof} Next, we shall compute $A^*(\mathcal{I}_{g,1})$. This determines some relations among $\psi$ and $\delta$. \begin{lem} \label{1pt} We have \[A^*(\mathcal{I}_{g,1}) = \frac{\mathbb{Q}[\psi_1, \delta]}{(\delta \psi_1 +(2g-1)\delta^2, \psi_1^2 + a_g \delta^2, \delta^3)}\] where $a_g$ is a constant depending on $g$. \end{lem} \begin{proof} In the coarse space of $\mathcal{I}_{g,1}$, the points $(C, p)$ and $(C, \overline{p})$ are identified. This allows us to easily construct another stack with the same coarse space: we need a degree $2g+2$ form $F$ on $\mathbb{P} \mathcal{V}$ with no roots of multiplicity $3$ or more (and at least one root of multiplicity $1$), together with a point $p \in \mathbb{P} \mathcal{V}$ which is not a double root of $V(F)$. (The point $p$ should not be a double root because marked points are not allowed to be singular.)
Let \begin{align*}D &= \{(p, F) : p \text{ has multiplicity $\geq 2$ in $V(F)$}\} \\ &\subset \mathbb{P} \mathcal{V} \times_{\BGL_2} (\Sym^{2g+2} \mathcal{V}^\vee \smallsetminus (\Delta_3 \cup R)) = \mathbb{P} \mathcal{V} \times_{\BGL_2} \mathcal{I}_{g,0}. \end{align*} Then the coarse space of $\mathcal{I}_{g,1}$ is the same as the coarse space of \[(\mathbb{P} \mathcal{V} \times_{\BGL_2} \mathcal{I}_{g,0}) \smallsetminus D.\] We have $A^*(\mathbb{P} \mathcal{V} \times_{\BGL_2} \mathcal{I}_{g,0}) = A^*(\mathcal{I}_{g,0})[z]/(z^2 + c_1z + c_2)$. In terms of the generator $\delta$, we have \begin{equation} \label{pbtrel}0 = z^2 + c_1z + c_2 = z^2 - \frac{1}{2(2g+1)(g+1)}\delta z - \frac{1}{8(2g+1)(g-1)(g+1)^2}\delta^2, \end{equation} where we have used \eqref{firstp} and \eqref{dclass} to write $c_1$ and $c_2$ in terms of $\delta$. It remains to find the relations imposed by excising $D$. Let $\pi: \mathbb{P} \mathcal{V} \to \BGL_2$ be the projection. We have that $D$ is the total space of the kernel of \[\pi^* \Sym^{2g+2} \mathcal{V}^\vee = \pi^*\pi_* \O_{\mathbb{P} \mathcal{V}}(2g+2) \to P^1_{\mathbb{P} \mathcal{V}/\BGL_2}(\O_{\mathbb{P} \mathcal{V}}(2g+2)).\] Because $D$ is a vector bundle over $\mathbb{P} \mathcal{V}$, all classes supported on it are multiples of its fundamental class. The fundamental class of $D$ is the top Chern class of the principal parts bundle \begin{align}c_2(P^1_{\mathbb{P} \mathcal{V}/\BGL_2}(\O_{\mathbb{P} \mathcal{V}}(2g+2))) &= ((-4g^2 - 6g - 2)c_1)z + (-4g^2 - 4g)c_2 \notag \\ &= \delta z + \frac{g}{2(2g+1)(g-1)(g+1)} \delta^2. \label{rel1} \end{align} We now wish to express $z$ in terms of $\psi_1$. 
The divisor $D_{11} \subset \mathcal{I}_{g,1}$ is \[D_{11} = \{(p, F) : p \in V(F)\} \subset \mathbb{P} \mathcal{V} \times_{\BGL_2} \Sym^{2g+2}\mathcal{V}^\vee,\] which is the total space of the kernel of the evaluation map \[\pi^*\Sym^{2g+2} \mathcal{V}^\vee = \pi^*\pi_*\O_{\mathbb{P} \mathcal{V}}(2g+2) \to \O_{\mathbb{P} \mathcal{V}}(2g+2). \] Hence, $[D_{11}] = c_1(\O_{\mathbb{P} \mathcal{V}}(2g+2)) = (2g+2)z$. Using \eqref{dii}, we find \[z =\frac{1}{2g+2}[D_{11}] = \frac{1}{2g+2}\left(\frac{g+1}{g-1} \psi_1 - \frac{1}{2(2g+1)(g-1)}\delta\right).\] When we plug this into \eqref{rel1} and divide out by non-zero constants, we find \[\delta \psi_1 + (2g-1) \delta^2 = 0.\] Finally, we plug the formula for $z$ into \eqref{pbtrel} and use the above to eliminate the $\delta \psi_1$ terms. This gives \[\psi_1^2 + \frac{16g^4-24g^3+16g^2+8g-3}{4(2g+1)^2(g+1)^2}\delta^2 = 0,\] so we obtain the desired presentation. \end{proof} Techniques similar to those of the previous lemmas allow us to find the Chow ring of the stratum $W_n \subset \mathcal{I}_{g,n}$ where all marked points are Weierstrass. We deal with these strata now, since our method of constructing models of curves on $\mathbb{P}^1 \times \mathbb{P}^1$ does not work on them. \begin{lem} \label{Wn} Suppose $2 \leq n \leq 2g + 2$. The stratum $W_n \subset \mathcal{I}_{g,n}$ has $A^*(W_n) = \mathbb{Q}$. \end{lem} \begin{proof} As before, let $\mathcal{V}$ be the tautological rank $2$ bundle on $\BGL_2$. Let $\eta_i: (\mathbb{P} \mathcal{V})^n \to \mathbb{P} \mathcal{V}$ be projection onto the $i^{\mathrm{th}}$ factor, and write $z_i := c_1(\eta_i^* \O_{\mathbb{P} \mathcal{V}}(1))$. Let $U \subset (\mathbb{P} \mathcal{V})^n$ be the complement of all the big diagonals. The locus where $p_i = p_j$ is defined by the vanishing of the composition $\eta_i^* \O_{\mathbb{P} \mathcal{V}}(-1) \to \mathcal{V} \to \mathcal{V}/\eta_j^*\O_{\mathbb{P} \mathcal{V}}(-1)$. 
Thus, we obtain relations $z_i + z_j + c_1 = 0 \in A^*(U)$ for each $i \neq j$. Now consider \[X = \{(p_1, \ldots, p_n, F): p_i \in V(F)\} \subset U \times_{\BGL_2} \Sym^{2g+2} \mathcal{V}^\vee,\] which is the total space of a vector bundle over $U$. In particular, $A^*(X) \cong A^*(U),$ which is generated by $c_1, c_2, z_1, \ldots, z_n$. Since the marked points are not allowed to be singular, we wish to remove from $X$ the divisors $X_i$ where $V(F)$ has multiplicity $2$ or more at $p_i$. Each $X_i \subset X$ is a subbundle of $X$ whose cokernel is $\eta_i^*(\Omega_{\mathbb{P} \mathcal{V}/\BGL_2} \otimes \O_{\mathbb{P} \mathcal{V}}(2g+2))$. Hence, the fundamental class of $X_i \subset X$ is $c_1(\eta_i^*(\Omega_{\mathbb{P} \mathcal{V}/\BGL_2} \otimes \O_{\mathbb{P} \mathcal{V}}(2g+2))) = 2gz_i - c_1$. Setting these classes to zero for all $i$ and combining with the relation $0 = z_1 + z_2 + c_1$, it follows that $z_i = 0$ and $c_1 = 0$ in $A^*(X \smallsetminus (X_1 \cup \cdots \cup X_n))$. Finally, the form $F$ should not have any triple roots and must have at least one simple root, so we also wish to remove $X \cap (U \times_{\BGL_2} (\Delta_3 \cup R)) \subset X$. This introduces the relations from Lemma \ref{0pt}, which showed $c_2$ is a multiple of $c_1^2$. In particular, \[A^*(X \smallsetminus (X_1 \cup \cdots \cup X_n \cup U \times_{\BGL_2} (\Delta_3 \cup R))) = \mathbb{Q}.\] The above open subset of $X$ has the same coarse space as $W_n$, so $A^*(W_n) = \mathbb{Q}$ as well. \end{proof} We now describe many relations in $R^*(\mathcal{I}_{g,n})$, which almost determine this ring. \begin{lem} \label{r2} All classes in $R^2(\mathcal{I}_{g,n})$ are proportional to $\delta^2$. In other words, $\dim R^2(\mathcal{I}_{g,n}) \leq 1$. Moreover, $R^i(\mathcal{I}_{g,n}) = 0$ for all $i \geq 3$. \end{lem} \begin{proof} By Corollary \ref{tcor}, we know $R^*(\mathcal{I}_{g,n})$ is generated by the $\psi$ classes and $\delta$. Moreover, by Lemma \ref{0pt}, we know $\delta^3 = 0$. 
Hence, it suffices to show $\psi_i^2$ and $\psi_i\psi_j$ are proportional to $\delta^2$. The class $\psi_i$ is pulled back under the map $\mathcal{I}_{g,n} \to \mathcal{I}_{g,1}$ that forgets all but the $i^{\text{th}}$ marked point, so Lemma \ref{1pt} shows that $\psi_i^2$ and $\psi_i\delta$ are proportional to $\delta^2$. For $i \neq j$, the intersection of $D_{ii}$ and $D_{ij}$ in $\mathcal{I}_{g,n}$ is empty because at any point of their intersection we would have $p_i = \overline{p}_i = p_j$, which is impossible. Using \eqref{dii} and \eqref{dij}, we see \begin{align*} 0 &= [D_{ii}][D_{ij}] = \frac{g+1}{(g-1)^2} \psi_i \psi_j + \text{terms proportional to $\delta^2$}, \end{align*} so $\psi_i\psi_j$ is also proportional to $\delta^2$. \end{proof} \begin{rem} Lemma \ref{r2} nearly determines $R^*(\mathcal{I}_{g,n})$. All that remains unknown is whether $\delta^2$ is non-zero. (We know $\delta^2$ is non-zero on $\mathcal{I}_{g,0}$ and $\mathcal{I}_{g,1}$, but it is possible that $\delta^2$ could lie in the kernel of the pullback to $\mathcal{I}_{g,n}$ for some $n > 1$.) \end{rem} Since $\delta$ lies in the kernel of the restriction map $R^*(\mathcal{I}_{g,n}) \to R^*(\H_{g,n})$ and $\psi_1, \ldots, \psi_n$ are independent in $A^1(\H_{g,n})$, Lemma \ref{r2} implies Proposition \ref{rprop} from the introduction: \begin{cor} We have $R^*(\H_{g,n}) = \mathbb{Q}[\psi_1, \ldots, \psi_n]/(\psi_1, \ldots, \psi_n)^2$. \end{cor} \section{Quotient stack presentations}\label{quotstack} The purpose of this section is to give quotient stack presentations for the stacks $\mathcal{I}_{g,n}^{\circ, i}$ and $W_i$ (defined in \eqref{Hl}--\eqref{si}). Using these presentations, we will show that $A^*(\mathcal{I}_{g,n}^{\circ, i})$ is generated by divisors and $A^*(W_{i})$ is generated by restrictions of divisors from $\mathcal{I}_{g,n}$. 
We begin by constructing models for our hyperelliptic curve lying on $\mathbb{P}^1 \times \mathbb{P}^1$, in which we will be able to keep track of our marked points. Our method is inspired by Casnati's work \cite{Casnati}, which used plane models of hyperelliptic curves. One of the maps to $\mathbb{P}^1$ is given by the hyperelliptic map $\alpha:C \to \mathbb{P}^1$. We denote the corresponding degree two line bundle by $L := \alpha^*\O_{\mathbb{P}^1}(1)$. The other map to $\mathbb{P}^1$ has degree $g+1$ and comes from the following lemma. Recall our convention \eqref{conv} that $\mathcal{I}_{g,n}^{\circ, i} = \mathcal{I}_{g,n}^\circ$ if $n \geq g+1$. \begin{lem} \label{themap} Given $(C, p_1, \ldots, p_n) \in \mathcal{I}_{g,n}^{\circ, i}$, define \[M = \begin{cases} \O(p_1 + \ldots + p_{i-1} + (g-n+2)p_{i} + p_{i+1} + \ldots + p_n) & \text{if $n \leq g$} \\ \O(p_1 + \ldots + p_{g+1}) & \text{if $n \geq g+1$} \end{cases}.\] Then $M$ is base point free with $h^0(C, M) = 2$. \end{lem} \begin{proof} To start, observe $h^0(C, M) \geq \chi(M) = 2$ by Riemann--Roch. Suppose for the sake of contradiction that there exists a point $p \in C$ so that $h^0(C, M(-p)) \geq 2$. Once we derive a contradiction, this will show $M$ is base point free and has exactly $2$ sections. Let $M' = M(-p)$. Then $M'$ has degree $g$ and $h^0(C, M') \geq 2$. By Lemma \ref{georr}, we see $M' \cong L(x_1 + \ldots + x_{g-2})$, so $M \cong L(x_1 + \ldots + x_{g-2} + p)$. It follows that \[\O(p_2 + \ldots + p_{i-1} + (g-n+2)p_i + p_{i+1} + \ldots + p_n) \cong M(-p_1) \cong \O(\overline{p}_1 + x_1 + \ldots + x_{g-2} + p).\] If $i \neq 1$, the assumption $(C, p_1, \ldots, p_n) \in \mathcal{I}_{g,n}^{\circ, i}$ means $\overline{p}_1$ is not among the points \[p_2 + \ldots + p_{i-1} + (g-n+2)p_i + p_{i+1} + \ldots + p_n.\] Thus, $h^0(C, M(-p_1)) \geq 2$. 
By Lemma \ref{georr}, we would have $p_j = p_k$ for some $j \neq k$, but this contradicts the fact that $(C, p_1, \ldots, p_n) \in \mathcal{I}_{g,n}^{\circ, i}$. If $i = 1$, then $\overline{p}_1 \neq p_1$, so $\overline{p}_1$ is still not among the points representing $M(-p_1) = \O((g-n+1)p_1 + p_2 + \ldots + p_n)$. Thus, as before, $h^0(C, M(-p_1)) \geq 2$, so Lemma \ref{georr} gives a contradiction with the fact that no pair of points in this divisor is conjugate. \end{proof} Let $\beta: C \to \mathbb{P}^1$ be the degree $g+1$ map associated to the line bundle $M$ introduced in Lemma \ref{themap}, so we have maps \begin{equation} \label{ab} \begin{tikzcd} C \arrow{r}{\beta} \arrow{d}[swap]{\alpha} & \mathbb{P}^1 = \mathbb{P} H^0(C, M)^\vee \\ {\color{white} \mathbb{P} H^0(C, L)^\vee = } \mathbb{P}^1 = \mathbb{P} H^0(C, L)^\vee \end{tikzcd} \end{equation} The product $\alpha \times \beta: C \to \mathbb{P}^1 \times \mathbb{P}^1 $ sends $C$ to a curve of bidegree $(g+1, 2)$ on $\mathbb{P}^1 \times \mathbb{P}^1$. By the degree-genus formula on $\mathbb{P}^1 \times \mathbb{P}^1$, the image has arithmetic genus $((g+1)-1)(2-1) = g$, so the map must be an embedding. Let $\O(g+1,2) = \O_{\mathbb{P}^1}(g+1) \boxtimes \O_{\mathbb{P}^1}(2)$. The equation of $C \subset \mathbb{P}^1 \times \mathbb{P}^1$ has the form \begin{align} \label{PQR} P(x_0, x_1)y_0^2 + Q(x_0, x_1)y_0y_1 + R(x_0, x_1)y_1^2 &\in H^0(\mathbb{P}^1 \times \mathbb{P}^1, \O(g+1,2)) \\ &= H^0(\mathbb{P}^1,\O_{\mathbb{P}^1}(g+1))\otimes H^0(\mathbb{P}^1,\O_{\mathbb{P}^1}(2)) \notag \end{align} where $P, Q, R$ are homogeneous of degree $g+1$. We call this vector space of equations $E:=H^0(\mathbb{P}^1 \times \mathbb{P}^1, \O(g+1,2))$. We will need to know when certain configurations of points on $\mathbb{P}^1 \times \mathbb{P}^1$ impose independent conditions on forms in $E$. \begin{lem}\label{indconditions} Let $C\subset \mathbb{P}^1\times\mathbb{P}^1$ be an irreducible bidegree $(g+1,2)$ curve. 
Let $\Gamma$ be a degree at most $2g+5$ divisor contained in the smooth locus of $C$. Then the evaluation map \begin{equation} \label{evgamma} E=H^0(\mathbb{P}^1\times \mathbb{P}^1,\O(g+1,2))\rightarrow H^0(\Gamma,\O(g+1,2)|_{\Gamma}) \end{equation} is surjective. \end{lem} \begin{proof} The evaluation map \eqref{evgamma} factors as \begin{equation} \label{factorit}E = H^0(\mathbb{P}^1 \times \mathbb{P}^1, \O(g+1,2)) \to H^0(C, \O(g+1,2)|_C) \to H^0(\Gamma, \O(g+1,2)|_{\Gamma}). \end{equation} Write $N:=\O(g+1,2)|_C = L^{\otimes g+1} \otimes M^{\otimes 2}$, which is a line bundle of degree $4g+4$. By Riemann--Roch, we have $h^0(C, N) = \chi(C,N) = 3g+5$. Meanwhile, \[h^0(\mathbb{P}^1 \times \mathbb{P}^1, \O(g+1,2)) = (g+2)(3) = 3g+6.\] Up to scaling, there is a unique form of bidegree $(g+1,2)$ vanishing on $C$, so the first map in \eqref{factorit} is surjective by dimension counting. Next, consider the exact sequence of sheaves on $C$: \[ 0 \rightarrow N(-\Gamma) \to N \to N|_{\Gamma} \rightarrow 0.\] Taking global sections, we see that the second map in \eqref{factorit} is surjective if \[ 0 = H^1(C, N(-\Gamma)) = H^0(\omega_C\otimes N(-\Gamma)^\vee).\] Because $\deg \Gamma \leq 2g+5$, we see that \[\deg(\omega_C \otimes N(-\Gamma)^\vee) = 2g - 2 - (4g+4 - \deg \Gamma) \leq -1 < 0,\] so indeed it has no global sections. \end{proof} \subsection{The stack of appropriately marked $(g+1, 2)$ curves} \label{Xsec} The construction modeling $C$ on $\mathbb{P}^1 \times \mathbb{P}^1$ works in families. Given $f: C \to S$ a family of irreducible nodal hyperelliptic curves with sections $p_1, \ldots, p_n: S \to C$, we obtain a relative degree $g+1$ line bundle $M$ as in Lemma \ref{themap} (where, in the definition of $M$, we interpret $p_i$ as the image of the $i^{\mathrm{th}}$ section). Since we have a marked point, the relative degree $2$ line bundle $L = \O(p_1 + \overline{p}_1)$ inducing the hyperelliptic map is defined on the entire family. 
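\begin{rem} For convenience, we record the routine numerics behind Lemma \ref{themap}. In both cases the divisor defining $M$ has degree $(n-1) + (g-n+2) = g+1$, respectively $g+1$, so Riemann--Roch gives \[\chi(C, M) = \deg M + 1 - g = (g+1) + 1 - g = 2.\] Since $h^0(C, M) = 2 = \chi(C, M)$ by Lemma \ref{themap}, we have $h^1(C, M) = 0$; together with $h^0(C, L) = 2$ on each fiber, this ensures, by cohomology and base change, that $f_*L$ and $f_*M$ are rank $2$ vector bundles. \end{rem} 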
Let us write $V = (f_*L)^\vee$ and $W = (f_*M)^\vee$, which are both rank $2$ vector bundles on $S$. The relative version of \eqref{ab} becomes \begin{equation} \label{abrel} \begin{tikzcd} C \arrow{r}{\beta} \arrow{d}[swap]{\alpha} & \mathbb{P} W = \mathbb{P} (f_*M)^\vee\\ {\color{white} \mathbb{P}(f_* L)^\vee = } \mathbb{P} V = \mathbb{P} (f_*L)^\vee \end{tikzcd} \end{equation} We saw earlier that this map is an embedding on each fiber over $S$, so we obtain an embedding of our family $\iota: C \to \mathbb{P} V \times_S \mathbb{P} W$. The sections $p_1,\ldots, p_n:S \to C$ give rise to sections $\sigma_1, \ldots, \sigma_n: S \to \mathbb{P} V \times_S \mathbb{P} W$ defined via $\sigma_{i} := \iota \circ p_i$. In this way, we see $\mathcal{I}_{g,n}^{\circ, i}$ is equivalent to a stack $\mathcal{X}_{g,n, i}$ of appropriately marked $(g+1, 2)$ curves. More precisely, the objects of $\mathcal{X}_{g,n,i}$ over a scheme $S$ are tuples \[(V, W, C, \sigma_1, \ldots, \sigma_n)\] where $V, W$ are rank $2$ vector bundles on $S$; $C \subset \mathbb{P} V \times_S \mathbb{P} W$ is the zero locus of a section of $\O_{\mathbb{P} V}(g+1) \boxtimes \O_{\mathbb{P} W}(2)$ such that $C \to S$ is a family of irreducible, at worst nodal curves; and $\sigma_1, \ldots, \sigma_n: S \to \mathbb{P} V \times_S \mathbb{P} W$ are sections that satisfy the following conditions: \begin{enumerate} \item[(X.1)] If $\pi_1: \mathbb{P} V \times_S \mathbb{P} W \to \mathbb{P} V$ is the first projection, then $\pi_1(\sigma_j) \cap \pi_1(\sigma_k) = \varnothing$ for $j \neq k$ (this ensures the sections do not give rise to conjugate points on $C$) \item[(X.2)] If $\pi_2: \mathbb{P} V \times_S \mathbb{P} W \to \mathbb{P} W$ is the second projection, then {\small \begin{equation} \label{tcon} \pi_2^{-1}(\pi_2(\sigma_1)) \cap C = \begin{cases} \sigma_1 + \ldots + \sigma_{i-1} + (g-n+2) \sigma_i + \sigma_{i+1} + \ldots + \sigma_n & \text{if $n \leq g$} \\ \sigma_1 + \ldots + \sigma_{g+1} & \text{if $n \geq 
g+1$.} \end{cases} \end{equation}} \item[(X.3)] $\sigma_1, \ldots, \sigma_n$ factor through the smooth locus of $C \subset \mathbb{P} V\times_S \mathbb{P} W \to S$ \end{enumerate} Note that if $n \leq g$, then the fact that $\sigma_i$ is in the smooth locus of $C \to S$ and is tangent to the horizontal fiber $\pi_2^{-1}(\pi_2(\sigma_1))$ means $\sigma_i$ is not tangent to the vertical fiber $\pi_1^{-1}(\pi_1(\sigma_i))$, so it does not define a Weierstrass point on $C$. See Figure \ref{pts-fig} below for an illustration of conditions (X.1) and (X.2). \begin{figure}[h!] \centering \includegraphics[width=3.1in]{Slide1.png} \hspace{.2in} \includegraphics[width=3.1in]{Slide3.png} \caption{On the left is the configuration of points for $n \leq g$. In condition (X.2), we ask that $C$ meet the red line with multiplicity $g-n+2$ at $\sigma_i$. On the right is the configuration of points for $n \geq g+1$. In both cases, the points have distinct projections onto the horizontal line (condition (X.1)). } \label{pts-fig} \end{figure} \subsubsection{A savings of one point} \label{savings} If $n \geq g+1$, then it is actually not necessary to keep track of $\sigma_{g+1}$. We can replace the condition \eqref{tcon} with the condition that $\pi_2^{-1}(\pi_2(\sigma_1)) \cap C$ is reduced and contains $\sigma_1 + \ldots + \sigma_{g}$. Indeed, from $C$ and $\sigma_1, \ldots, \sigma_g$, we can then recover $\sigma_{g+1}$ as $(\pi_2^{-1}(\pi_2(\sigma_1)) \cap C) \smallsetminus (\sigma_1 + \ldots + \sigma_g)$. 
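\begin{rem} As a quick consistency check on condition (X.2): since $C$ has class $(g+1,2)$, it meets each fiber of $\pi_2$ in a divisor of degree $g+1$, and the right-hand side of \eqref{tcon} has degree $(n-1) + (g-n+2) = g+1$ when $n \leq g$, respectively $g+1$ when $n \geq g+1$. In particular, when $n \geq g+1$ the divisor $\sigma_1 + \ldots + \sigma_g$ has degree $g$, so exactly one residual point of $\pi_2^{-1}(\pi_2(\sigma_1)) \cap C$ remains, which is how $\sigma_{g+1}$ is recovered above. \end{rem} 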
\subsection{Stacks of markings on $\mathbb{P}^1 \times \mathbb{P}^1$} There is of course a natural map from $\mathcal{X}_{g,n,i}$ to the stack $\mathcal{B}_{g,n}$ whose objects over $S$ are tuples \[(V, W, \sigma_1, \ldots, \sigma_g, \sigma_{g+2}, \ldots, \sigma_n)\] where $V, W$ are rank $2$ vector bundles on $S$ and $\sigma_1, \ldots, \sigma_g, \sigma_{g+2}, \ldots, \sigma_n: S \to \mathbb{P} V\times_S \mathbb{P} W$ are sections satisfying \begin{enumerate} \item $\pi_1(\sigma_j) \cap \pi_1(\sigma_k) = \varnothing$ for $j \neq k$ \item $\pi_2(\sigma_j) = \pi_2(\sigma_1)$ for $j \leq g$ and $\pi_2(\sigma_j) \cap \pi_2(\sigma_1) = \varnothing$ for $j \geq g+2$. \end{enumerate} (See Section \ref{savings} for why we are leaving out $\sigma_{g+1}$ when $n \geq g+1$.) Finally, we consider the stack whose objects are tuples $(S, V, W, \sigma_1)$ with $V, W$ rank $2$ vector bundles on $S$ and $\sigma_1: S \to \mathbb{P} V \times_S \mathbb{P} W$ a section. The section $\sigma_1$ is equivalent to the data of sections $v: S \to \mathbb{P} V$ and $w: S \to \mathbb{P} W$. Let $\PU \subset \PGL_2$ be the stabilizer of a point $[1:0] \in \mathbb{P}^1$. In other words, $\PU$ is the projectivization of the group $\mathrm{U} \subset \GL_2$ of upper triangular matrices. The stack parametrizing the data of a rank $2$ bundle $V$ together with a section of $\mathbb{P} V$ is the classifying stack $\BPU$. Thus, the stack of tuples $(S, V, W, \sigma_1)$ is simply $\BPU \times \BPU$. The following diagram summarizes the maps we have considered so far \begin{center} \begin{tikzcd} \mathcal{I}_{g,n}^{\circ, i} \cong \mathcal{X}_{g,n,i} {\color{white} \mathcal{I}_{g,n}^{\circ, i}} \arrow{d} & (C, p_1, \ldots, p_n) \arrow{r} & \arrow{l} (V, W, C, \sigma_1, \ldots, \sigma_n) \arrow{d} \\ \mathcal{B}_{g,n} \arrow{d} & & (V, W, \sigma_1, \ldots, \sigma_g, \sigma_{g+2}, \ldots, \sigma_n) \arrow{d} \\ \BPU \times \BPU & & (V, W, \sigma_1). 
\end{tikzcd} \end{center} Our goal is now to factor each vertical map above into open inclusions and affine/projective bundle morphisms. First, we describe the map $\mathcal{B}_{g,n} \to \BPU \times \BPU$. Let us write $\mathcal{V}$ and $\mathcal{W}$ for the universal rank $2$ bundles over $\BPU \times \BPU$, one pulled back from each factor. Let $v \times w: \BPU \times \BPU \to \mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}$ be the universal section and label the projection maps as follows \begin{center} \begin{tikzcd} \mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W} \arrow{r}{\pi_2} \arrow{d}[swap]{\pi_1} \arrow{dr}{\pi} & \mathbb{P} \mathcal{W} \arrow{d} \\ \mathbb{P} \mathcal{V} \arrow{r} & \BPU \times \BPU \ar[bend right = 30, u, swap, "w"] \ar[bend left = 20, l, "v"] \end{tikzcd} \end{center} In the case $n = 1$, we have $\mathcal{B}_{g,1} = \BPU \times \BPU$ and $\sigma_1 = v \times w$. For $2 \leq n \leq g$, we have that $\mathcal{B}_{g,n}$ is an open substack of \[\mathcal{A}_{g,n} := (\mathbb{P} \mathcal{V} \smallsetminus v)^{n-1}. \] This is saying that the sections $\sigma_2, \ldots, \sigma_n$ tell us $n-1$ points distinct from $\sigma_1$ on the same horizontal fiber as $\sigma_1$. Meanwhile, if $n \geq g+1$, then $\mathcal{B}_{g,n}$ is an open substack of \[\mathcal{A}_{g,n} := (\mathbb{P} \mathcal{V} \smallsetminus v)^{g-1} \times ((\mathbb{P} \mathcal{V} \smallsetminus v) \times (\mathbb{P} \mathcal{W} \smallsetminus w)) ^{n-g-1}.\] This is because we have $g-1$ sections $\sigma_2, \ldots, \sigma_{g}$ on the same horizontal fiber as $\sigma_1$ and the remaining sections $\sigma_{g+2},\ldots, \sigma_n$ lie in the complement of the vertical and horizontal fibers through $\sigma_1$. The condition (X.1) that no two sections lie in the same vertical fiber is open. 
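\begin{rem} For later dimension counts, we record the relative dimension of $\mathcal{A}_{g,n}$ over $\BPU \times \BPU$: each factor $\mathbb{P} \mathcal{V} \smallsetminus v$ contributes $1$ and each factor $(\mathbb{P} \mathcal{V} \smallsetminus v) \times (\mathbb{P} \mathcal{W} \smallsetminus w)$ contributes $2$, so the relative dimension is \[\begin{cases} n-1 & \text{if $2 \leq n \leq g$,} \\ (g-1) + 2(n-g-1) = 2n-g-3 & \text{if $n \geq g+1$.} \end{cases}\] \end{rem} 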
\subsection{Relating $\mathcal{X}_{g,n,i}$ to a projective bundle over $\mathcal{B}_{g,n}$} Next, let \[\mathcal{N} := \O_{\mathbb{P} \mathcal{V}}(g+1) \boxtimes \O_{\mathbb{P} \mathcal{W}}(2).\] (This is the line bundle that defines curves in the class of interest on $\mathbb{P}^1 \times \mathbb{P}^1$.) Write $\pi: \mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W} \to \BPU \times \BPU$ for the structure map. Define \[\mathcal{E} := \pi_* \mathcal{N} = \Sym^{g+1} \mathcal{V}^\vee \otimes \Sym^2 \mathcal{W}^\vee,\] which is a vector bundle on $\BPU \times \BPU$. Equivalently, $\mathcal{E}$ is the quotient $E/(\PU \times \PU)$ of the vector space $E$ we considered earlier in \eqref{PQR}. Let $\phi: \mathcal{B}_{g,n} \to \BPU \times \BPU$ be the structure map, so we have \begin{equation} \label{setup} \begin{tikzcd} & \mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W} \arrow{d}{\pi} \\ \arrow{ur}{\sigma_j} \mathcal{B}_{g,n} \arrow{r}[swap]{\phi} & \BPU \times \BPU. \end{tikzcd} \end{equation} There is a natural map from $\mathcal{X}_{g,n,i}$ to $\mathbb{P}(\phi^*\mathcal{E})$ over $\mathcal{B}_{g,n}$ that sends $(V, W, C, \sigma_1, \ldots, \sigma_n)$ to the equation defining $C \subset \mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}$. The image is contained in the locus of equations with appropriate vanishing behavior at the sections $\sigma_j$. The subspace of equations in $\phi^* \mathcal{E}$ that vanish along $\sigma_j$ is the kernel of the evaluation map pulled back from the $j^{\mathrm{th}}$ section: \[\phi^*\mathcal{E} = \eta_j^* \pi^* \pi_*\mathcal{N} \to \eta_j^* \mathcal{N}. \] \subsection{The case $n \leq g$} \label{smn} When $n \leq g$, we also want the vanishing of our equation to be tangent with order $g-n+2$ to the horizontal fiber through $\sigma_i$. 
Such equations make up the kernel of the evaluation map in the principal parts bundle pulled back from the $i^{\mathrm{th}}$ factor: \[\phi^*\mathcal{E} = \eta_i^* \pi^* \pi_*\mathcal{N} \to \eta_i^* P^{g-n+1}_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}(\mathcal{N}).\] To get the correct vanishing behavior at all of $\sigma_1, \ldots, \sigma_n$, we are therefore interested in the kernel of the evaluation map \begin{equation} \label{eeev}\mathrm{ev}: \phi^* \mathcal{E} \to \eta_i^*P^{g-n+1}_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}(\mathcal{N}) \oplus \bigoplus_{\substack{1 \leq j \leq n \\ j \neq i}} \eta_j^* \mathcal{N}. \end{equation} \begin{lem} \label{evs} The map $\mathrm{ev}$ in \eqref{eeev} is surjective. \end{lem} \begin{proof} The map \eqref{eeev} is the equivariant version of the map \[E = H^0(\mathbb{P}^1 \times \mathbb{P}^1, \O(g+1, 2)) \to H^0(\O_{\mathbb{P}^1}(g+1)) \to \O_D\] that restricts a bidegree $(g+1, 2)$ polynomial to the distinguished horizontal line of the ruling $\mathbb{P} \mathcal{V} \times \pi_2(\sigma_1)$ and then restricts the resulting degree $g+1$ polynomial to the degree $g+1$ subscheme $D = \sigma_1 + \ldots + (g-n+2)\sigma_i + \ldots+\sigma_n \subset \mathbb{P} \mathcal{V} \times \pi_2(\sigma_1)$. If we were to pick coordinates as in \eqref{PQR}, the first map would be taking the polynomial $P(x_0, x_1)$, and the second map would be evaluating $P(x_0, x_1)$ at a specified degree $g+1$ subscheme of $\mathbb{P}^1$. Both of these maps are surjective. \end{proof} Let $\mathcal{F} \subset \phi^*\mathcal{E}$ be the kernel of $\mathrm{ev}$, which is a vector bundle by Lemma \ref{evs}. \begin{lem} \label{oplem} There is an open inclusion $\mathcal{I}_{g,n}^{\circ, i}\cong\mathcal{X}_{g,n,i} \subset \mathbb{P} \mathcal{F}$. 
\end{lem} \begin{proof} The bundle $\mathbb{P}\mathcal{F}$ over $\mathcal{B}_{g,n}$ parametrizes $(V, W, C, \sigma_1, \ldots, \sigma_n)$ satisfying (X.1) and (X.2) in Section \ref{Xsec}. The additional conditions that define $\mathcal{X}_{g,n,i}$ are that $C$ is a family of irreducible nodal curves and (X.3), both of which are open conditions. \end{proof} \subsubsection{The strata $W_{i-1}$ when $n \leq g$} Using a similar idea, we can realize $W_{i-1} \subset \mathcal{I}_{g,n}^{\circ, i}$ as the intersection of $\mathcal{I}_{g,n}^{\circ, i}$ with a subbundle $\mathbb{P} \mathcal{F}_{i-1} \subset \mathbb{P} \mathcal{F}$ for an appropriate vector subbundle $\mathcal{F}_{i-1} \subset \mathcal{F}$. The $j^{\mathrm{th}}$ marked point will be Weierstrass if $C$ is tangent to the vertical fiber through $\sigma_j$. This means we need to consider the evaluation map \begin{equation} \label{evi}\mathrm{ev}_{i-1}: \phi^* \mathcal{E} \to \bigoplus_{1 \leq j \leq i-1} \eta_j^* P^1_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{V}}(\mathcal{N}) \oplus \eta_i^*P^{g-n+1}_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}(\mathcal{N}) \oplus \bigoplus_{i+1\leq j \leq n} \eta_j^*\mathcal{N}. \end{equation} \begin{figure}[h!] \centering \includegraphics[width=4in]{Slide2.png} \caption{The points to the left of $\sigma_i$ are Weierstrass points, indicated by the vertical tangent arrows to each point.} \label{weierstrassfig} \end{figure} \begin{lem} The evaluation map $\mathrm{ev}_{i-1}$ in \eqref{evi} is surjective. \end{lem} \begin{proof} Let us describe the map in terms of the vector space $E$ with coordinates as in \eqref{PQR}. As in Lemma \ref{evs}, evaluation in $\eta_j^*\mathcal{N}$ is the equivariant version of evaluating $P(x_0, x_1)$ at various points along $\mathbb{P}^1$. 
The evaluation in $\eta_i^*P^{e}_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}(\mathcal{N})$ is evaluating $P(x_0, x_1)$ in an $e^{\mathrm{th}}$ order neighborhood around the $i^{\mathrm{th}}$ point in $\mathbb{P}^1$. On the other hand, evaluation in the rank $2$ bundle $\eta_j^*P^1_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{V}}(\mathcal{N})$ corresponds to evaluating $P(x_0, x_1)$ and $Q(x_0, x_1)$ at the $j^{\mathrm{th}}$ point on $\mathbb{P}^1$. Since $Q(x_0, x_1)$ has degree $g+1$, the evaluation map at $i - 1 \leq g$ points is surjective. \end{proof} There are natural surjections $\eta_j^* P^1_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{V}}(\mathcal{N}) \to \eta_j^*\mathcal{N}$, so the target of \eqref{eeev} is a quotient of the target of \eqref{evi}. Hence, $\mathcal{F}_{i-1} := \ker(\mathrm{ev}_{i-1})$ is a subbundle of $\mathcal{F} = \ker(\mathrm{ev})$. Moreover, $W_{i-1} \subset \mathcal{I}_{g,n}^{\circ, i}$ is the intersection of $\mathcal{I}_{g,n}^{\circ, i}$ with $\mathbb{P} \mathcal{F}_{i-1} \subset \mathbb{P} \mathcal{F}$, so we obtain a diagram where the square at the top is fibered: \begin{equation} \begin{tikzcd} W_{i-1} \arrow{r} \arrow{d} &\mathbb{P} \mathcal{F}_{i-1} \arrow{d} \\ \mathcal{I}_{g,n}^{\circ,i} \arrow{r} & \mathbb{P} \mathcal{F} \arrow{d} \\ & \mathcal{B}_{g,n} \arrow{r} & \mathcal{A}_{g,n} \arrow{d} \\ && \BPU \times \BPU \end{tikzcd} \end{equation} Above, the vertical maps are projective/affine bundles and the horizontal maps are open inclusions. \begin{lem} \label{dglem} We have $A^*(\mathcal{I}_{g,n}^{\circ, i})$ is generated by divisors, and $A^*(W_{i-1})$ is generated by restrictions of divisors from $A^*(\mathcal{I}_{g,n})$. \end{lem} \begin{proof} By excision, to prove the first claim, it suffices to show $A^*(\mathbb{P} \mathcal{F})$ is generated by divisors. 
By the projective bundle theorem, this follows if $A^*(\mathcal{B}_{g,n})$ is generated by divisors. By excision again, this follows if $A^*(\mathcal{A}_{g,n})$ is generated by divisors. But $\mathcal{A}_{g,n}$ is an affine bundle over $\BPU \times \BPU$, so it suffices to show that $A^*(\BPU \times \BPU)$ is generated by divisors, which we see as follows. The map $\PU\rightarrow \gg_m \ltimes \gg_a$ \[ A=\begin{pmatrix} a & b \\ 0 & c \end{pmatrix}\mapsto (a/c,b/c) \] is an isomorphism. This induces a natural map \[ \mathrm{B}\gg_m\times \mathrm{B}\gg_m\rightarrow \BPU\times \BPU, \] which is an affine bundle with fiber $\mathbb{A}^2$, so the pullback map induces an isomorphism on Chow rings. The Chow ring $A^*(\mathrm{B}\gg_m\times \mathrm{B}\gg_m)$ is generated by divisors, corresponding to the first Chern classes of the two universal line bundles. To prove the second claim, note that $\O_{\mathbb{P} \mathcal{F}}(1)$ restricts to $\O_{\mathbb{P} \mathcal{F}_{i-1}}(1)$. Thus, by the projective bundle theorem, $A^*(\mathbb{P} \mathcal{F}) \to A^*(\mathbb{P} \mathcal{F}_{i-1})$ is surjective. By excision, $A^*(\mathbb{P} \mathcal{F}_{i-1}) \to A^*(W_{i-1})$ is surjective. Hence, $A^*(\mathbb{P} \mathcal{F}) \to A^*(W_{i-1})$ is surjective. But since the top square commutes, all classes in the image of $A^*(\mathbb{P} \mathcal{F}) \to A^*(W_{i-1})$ are in the image of $A^*(\mathcal{I}_{g,n}^{\circ,i}) \to A^*(W_{i-1})$. Finally, $A^*(\mathcal{I}_{g,n}) \to A^*(\mathcal{I}_{g,n}^{\circ, i})$ is surjective by excision, so $A^*(\mathcal{I}_{g,n}) \to A^*(\mathcal{I}_{g,n}^{\circ, i}) \to A^*(W_{i-1})$ is surjective. \end{proof} We now conclude the proof of Theorem \ref{divgen} in the case $n \leq g$. \begin{proof}[Proof of Theorem \ref{divgen} over $\mathbb{C}$ when $n \leq g$] By Theorem \ref{Scavia}, it suffices to show that $A^*(\mathcal{I}_{g,n})$ is generated by divisors. 
By Lemma \ref{dglem}, we have that $A^*(W_i)$ is generated by restrictions of divisors from $\mathcal{I}_{g,n}$ for $i \leq n-1$. (Note $W_0 = \mathcal{I}_{g,n}^{\circ, 1}$.) Lemma \ref{Wn} shows $A^*(W_n)$ is also generated by restrictions of divisors from $\mathcal{I}_{g,n}$. The fundamental class of each $W_i$ is a product of divisors (Lemma \ref{fc}), so we conclude that $A^*(\mathcal{I}_{g,n}^\circ) = A^*(W_0 \cup W_1 \cup \cdots \cup W_n)$ is generated by divisors for $n \leq g$. Now we proceed by induction. The base case is that $A^*(\mathcal{I}_{g,0})$ is generated by divisors (Lemma \ref{0pt}). Suppose we know that $A^*(\mathcal{I}_{g,n-1})$ is generated by divisors. By Proposition \ref{induction}, we know $A^*(D_{ij})$ is generated by restrictions of divisors from $\mathcal{I}_{g,n}$ for each $i \neq j$. By the push-pull formula, all classes supported on $D_{ij}$ are products of divisors. Since $A^*(\mathcal{I}_{g,n}^\circ)$ is generated by divisors, excision shows that $A^*(\mathcal{I}_{g,n})$ is generated by divisors. \end{proof} \subsection{The case $g+1 \leq n \leq 2g+6$} \label{ng1} When $n \geq g+1$, we have no higher order vanishing conditions to worry about at $\sigma_i$. Recall also that $\mathcal{I}_{g,n}^{\circ, i} = \mathcal{I}_{g,n}^{\circ}$. To find the equations in $\phi^*\mathcal{E}$ that vanish along $\sigma_1, \ldots, \sigma_g,\sigma_{g+2},\ldots, \sigma_n$, we consider the evaluation map \begin{equation} \label{ev2} \mathrm{ev}: \phi^*\mathcal{E} \to \bigoplus_{\substack{1 \leq j \leq n \\ j \neq g+1}} \eta_j^*\mathcal{N}. \end{equation} The preimage of the zero section of \eqref{ev2} inside $\phi^* \mathcal{E}$ parametrizes global sections of $\mathcal{N}$ that vanish along $\sigma_1, \ldots, \sigma_g, \sigma_{g+2},\ldots, \sigma_n$. By construction then, there is a map $\mathcal{I}_{g,n}^{\circ} \to \mathbb{P} (\mathrm{ev}^{-1}(0)) \subset \mathbb{P} (\phi^*\mathcal{E})$. This map is an open inclusion by a similar argument to Lemma \ref{oplem}. 
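\begin{rem} A rank count, valid wherever \eqref{ev2} is surjective: $\phi^*\mathcal{E}$ has rank $(g+2)\cdot 3 = 3g+6$ and the target of \eqref{ev2} has rank $n-1$, so the kernel is a vector bundle of rank $3g+7-n$ there. As a sanity check, projectivizing gives relative dimension $3g+6-n$ over $\mathcal{B}_{g,n}$; adding the relative dimension $2n-g-3$ of $\mathcal{A}_{g,n}$ over $\BPU \times \BPU$ and $\dim \BPU \times \BPU = -4$ yields $2g+n-1 = \dim \mathcal{I}_{g,n}^{\circ}$, as expected. \end{rem} 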
The problem now is that $\mathrm{ev}^{-1}(0)$ is not necessarily a vector bundle because the rank of \eqref{ev2} can drop. Let $\mathcal{B}^\circ \subset \mathcal{B}_{g,n}$ be the locus where \eqref{ev2} is surjective. By semicontinuity, we know $\mathcal{B}^\circ \subset \mathcal{B}_{g,n}$ is open. (Of course, $\mathcal{B}^\circ$ may be empty, and it will be when $n$ is too large.) Define $\mathcal{F} \subset \phi^*\mathcal{E}|_{\mathcal{B}^\circ}$ to be the kernel of the restriction of \eqref{ev2} to $\mathcal{B}^\circ$. By definition of $\mathcal{B}^\circ$, we know that $\mathcal{F}$ is a vector bundle over $\mathcal{B}^\circ$. There is a fibered square as below, and we wish to know if $\mathcal{I}_{g,n}^{\circ} \to \mathbb{P}(\mathrm{ev}^{-1}(0)) \subset \mathbb{P} (\phi^*\mathcal{E})$ factors through $\mathbb{P} \mathcal{F}$. \begin{center} \begin{tikzcd} \mathcal{I}_{g,n}^{\circ} \ar[drr, bend left = 10] \ar[ddr, bend right = 10, swap, dashed, "?"] \arrow[dashed, "?"]{dr} \\ & \mathbb{P} \mathcal{F} \arrow{r} \arrow{d} & \mathbb{P} (\mathrm{ev}^{-1}(0)) \arrow{d} \\ & \mathcal{B}^\circ \arrow{r} & \mathcal{B}_{g,n} \end{tikzcd} \end{center} Equivalently, we wish to know if $\mathcal{I}_{g,n}^{\circ} \to \mathcal{B}_{g,n}$ factors through $\mathcal{B}^\circ$. The following lemma shows that this holds when $n$ is sufficiently small relative to $g$. \begin{lem} \label{2g6} If $n \leq 2g + 6$, then the image of $\mathcal{I}_{g,n}^{\circ} \to \mathcal{B}_{g,n}$ is contained in $\mathcal{B}^\circ$. \end{lem} \begin{proof} Each point in the image of $\mathcal{I}_{g,n}^{\circ} \to \mathcal{B}_{g,n}$ corresponds to a collection of $n-1 \leq 2g+5$ points on $\mathbb{P}^1 \times \mathbb{P}^1$ that lie in the smooth locus of an irreducible $(g+1, 2)$ curve. (We have $n-1$ points because we are not tracking $\sigma_{g+1}$, see Section \ref{savings}.) Thus, surjectivity follows from Lemma \ref{indconditions}. 
\end{proof} \begin{rem}[The case $n \geq 2g+7$] When $n \geq 2g+7$, Lemma \ref{2g6} fails. The reason is that $n - 1 \geq 2g+6$ points on an irreducible $(g+1,2)$ curve can fail to impose independent conditions. Looking back at Lemma \ref{indconditions}, we see this occurs when the line bundle $\omega_C \otimes N(-\Gamma)^\vee$ has non-trivial global sections. The first case of this is when $\omega_C \otimes N(-\Gamma)^\vee \cong \O$, which happens exactly when $\Gamma$ consists of the $2g+6$ points in which the $(g+1,2)$ curve $C$ meets a $(2, 2)$ curve. \end{rem} It follows from Lemma \ref{2g6} that $\mathcal{I}_{g,n}^{\circ}$ is an open substack of $\mathbb{P} \mathcal{F}$. Thus, we have a diagram \begin{equation} \begin{tikzcd} \mathcal{I}_{g,n}^{\circ} \arrow{r} & \mathbb{P} \mathcal{F} \arrow{d} \\ &\mathcal{B}^{\circ} \arrow{r} & \mathcal{B}_{g,n} \arrow{r} & \mathcal{A}_{g,n} \arrow{d} \\ &&& \BPU \times \BPU \end{tikzcd} \end{equation} where the vertical maps are projective or affine bundles and the horizontal maps are open inclusions. Essentially the same argument as in Lemma \ref{dglem} yields the following. \begin{lem} \label{key} When $n \leq 2g+6$, the ring $A^*(\mathcal{I}_{g,n}^{\circ})$ is generated by divisors. \end{lem} \begin{proof}[Proof of Theorem \ref{divgen} over $\mathbb{C}$ when $g+1 \leq n \leq 2g+6$] Again, we argue by induction. Suppose that we know $A^*(\mathcal{I}_{g,n-1})$ is generated by divisors. Then the push-pull formula together with Proposition \ref{induction} shows that classes supported on $D_{ij}$ are products of divisors. By Lemma \ref{key}, we know $A^*(\mathcal{I}_{g,n}^{\circ})$ is generated by divisors, so the result follows by excision. \end{proof} \subsection{Rationality when $n \leq 3g+6$} A key step in the previous section was knowing that the image of $\mathcal{I}_{g,n}^\circ$ was contained in the locus $\mathcal{B}^\circ \subset \mathcal{B}_{g,n}$ where the evaluation map was surjective. 
In this section, we explain how the weaker fact that $\mathcal{B}^\circ$ is non-empty already shows that $\mathcal{I}_{g,n}^\circ$ is rational. \begin{lem} Let $\mathcal{B}^\circ \subset \mathcal{B}_{g,n}$ be the locus where the evaluation map \eqref{ev2} is surjective. If $n \leq 3g+6$, then $\mathcal{B}^\circ$ is non-empty. \end{lem} \begin{proof} This is equivalent to showing that a general collection of $n-1 \leq 3g+5$ points on $\mathbb{P}^1 \times \mathbb{P}^1$ with $g$ on the same line of the ruling imposes independent conditions on $\O(g+1, 2)$. (Note that this is much weaker than our previous situation, where we wanted to know that \emph{every} collection of $n-1$ points lying on an irreducible $(g+1,2)$ curve imposes independent conditions on $\O(g+1,2)$.) We know any $g$ distinct points $p_1, \ldots, p_g$ on a line of the ruling impose independent conditions. Now continue choosing points $p_i$ not in the base locus of the linear system of $(g+1,2)$ curves vanishing at $p_1, \ldots, p_{i-1}$. This is possible so long as the kernel of the evaluation map $H^0(\mathbb{P}^1 \times \mathbb{P}^1, \O(g+1,2)) \to \bigoplus_{j=1}^{i-1} \O(g+1,2)|_{p_j}$ has dimension at least $1$. This holds at each step since we have $n - 1 \leq 3g+5$ points and, by K\"unneth, $h^0(\mathbb{P}^1 \times \mathbb{P}^1, \O(g+1,2)) = (g+2) \cdot 3 = 3g+6$. \end{proof} Now, let $\mathcal{F} \subset \phi^*\mathcal{E}|_{\mathcal{B}^\circ}$ be the kernel of \eqref{ev2} restricted to $\mathcal{B}^\circ$. 
We obtain a diagram where both squares are fibered: \begin{center} \begin{tikzcd} U \arrow{r} \arrow{d} &\mathcal{I}_{g,n}^{\circ} \arrow{d} \\ \mathbb{P} \mathcal{F} \arrow{r} \arrow{d} & \mathbb{P}(\mathrm{ev}^{-1}(0)) \arrow{d} \\ \mathcal{B}^\circ \arrow{r} & \mathcal{B}_{g,n} \end{tikzcd} \end{center} Although $\mathcal{I}_{g,n}^\circ \to \mathbb{P}(\mathrm{ev}^{-1}(0))$ need not factor through $\mathbb{P} \mathcal{F}$, since $\mathcal{B}^\circ \subset \mathcal{B}_{g,n}$ is non-empty there is still a dense open $U \subset \mathcal{I}_{g,n}^\circ$ which does factor through $\mathbb{P} \mathcal{F}$. Moreover, because a general collection of $n-1 \leq 3g+5$ points imposes independent conditions on $\O(g+1,2)$, the inclusion $U \subset \mathbb{P} \mathcal{F}$ is open. Finally, it remains to show that $U$ is rational. \begin{lem} If $n \leq 3g+6$, then $U$ defined above is rational. Hence, $\mathcal{I}_{g,n}$ is rational. \end{lem} \begin{proof} Our $U$ is an open subset of a projective bundle $\mathbb{P} \mathcal{F}$ over $\mathcal{B}^\circ$, so it suffices to show that $\mathcal{B}^\circ$ is rational. Since Casnati has proved rationality for $n \leq 2g+8$, we can assume $n \geq 2g+9 > g+3$. This ensures that there are at least two marked points which are not in the same fiber as $\sigma_1$. On the open subset of $\mathcal{B}^\circ$ where $\pi_2(\sigma_{g+2}) \neq \pi_2(\sigma_{g+3})$, we can ``use up'' the $\PU \times \PU$ action to fix \[\sigma_1 = [1:0] \times [1:0], \quad \sigma_{g+2} = [1:1] \times [1:1], \quad\text{and}\quad \sigma_{g+3} = [0:1] \times [0:1].\] The other markings are then parametrized by an open subset of $(\mathbb{P}^1)^{g-1} \times (\mathbb{P}^1 \times \mathbb{P}^1)^{n-g-3}$, which is rational. 
\end{proof} \section{Upgrading to a proof in characteristic $\neq 2$} \label{pp} In the course of the proof of Theorem \ref{divgen} and Proposition \ref{rprop} so far, we relied on the following two facts that Scavia has shown to hold over $\mathbb{C}$: \begin{itemize} \item[(a)] The classes $\psi_1, \ldots, \psi_n \in A^1(\H_{g,n})$ are independent (so that there are no more relations in Proposition \ref{rprop}). \item[(b)] The classes $\psi_1, \ldots, \psi_n, \delta$ span $A^1(\mathcal{I}_{g,n})$ when $n \leq 2g+6$ (to show these are the codimension $1$ generators in Theorem \ref{divgen}). Equivalently, the classes $\psi_1, \ldots, \psi_n$ span $A^1(\H_{g,n})$ when $n \leq 2g + 6$. \end{itemize} With the set up we have developed, it is not too much more work to establish (a) and (b) in any characteristic $\neq 2$, thereby proving Theorem \ref{divgen} more generally. This also provides a new proof of Scavia's result that the $\psi$ classes are a basis for $A^1(\H_{g,n})$ when $n \leq 2g+6$, one that also holds in positive characteristic. \subsection{Independence of $\psi$ classes} \label{psisec} Our proof of independence is geometric, using test curves on a partial compactification of $\H_{g,n}$. The method is in part inspired by work of Edidin--Hu \cite{EdidinHu}, which used test curves to determine the classes of $\overline{D}_{11} \in A^1(\overline{\H}_{g,1})$ and $\overline{D}_{12} \in A^1(\overline{\H}_{g,2})$. However, we shall only need the simplest families of test curves introduced there (and their generalizations to more marked points). \begin{rem} Edidin--Hu work over $\mathbb{C}$ in order to make use of Scavia's result that gives a basis for $A^1(\overline{\H}_{g,n})$. However, the test curves we use make sense over any ground field of characteristic $\neq 2$. Independence of divisors can be proved so long as we have enough test curves. In Section \ref{spanning}, we will show that the $\psi$ classes span $A^1(\H_{g,n})$ by dimension counting and excision. 
\end{rem} The following lemma is standard, but we include a proof for the convenience of the reader. \begin{lem} \label{psitest} Suppose $X \to C$ is a family of pointed curves with smooth total space $X$ and sections $\sigma_i : C \to X$. Let $a: C \to \overline{\M}_{g,n}$ be the induced map. Then $\deg(a^*\psi_i) = -[\sigma_i(C)]^2$. \end{lem} In this situation, we shall often abbreviate $[\sigma_i(C)]^2$ by $\sigma_i^2$. \begin{proof} We have \[ a^*\psi_i = -c_1(\sigma_i^* \mathcal{T}_{X/C}) = -(c_1(\sigma_i^*\mathcal{T}_X) - c_1(\mathcal{T}_C)) = -c_1(N_{\sigma_i(C)/X}) = -[\sigma_i(C)]^2. \qedhere \] \end{proof} Let $\widetilde{\H}_{g,n} \subset \overline{\mathcal{H}}_{g,n}$ denote the open subset of pointed hyperelliptic curves $(C, p_1, \ldots, p_n)$ such that $C$ is irreducible or a union of an irreducible component of genus $g$ and a genus $0$ curve containing exactly two marked points. We write $\Delta_{ij} \subset \widetilde{\H}_{g,n}$ for the divisor where $p_i$ and $p_j$ are the two points on the genus $0$ component. Let $\delta_{ij} = [\Delta_{ij}] \in A^1(\widetilde{\H}_{g,n})$. \begin{lem} \label{psi-indep} The classes $\psi_1, \ldots, \psi_n$ and the boundary divisors $\delta_{ij}$ (for $i \neq j$) are independent in $A^1(\widetilde{\H}_{g,n})$ for any $n$ and $g \geq 2$. In particular, $\psi_1, \ldots, \psi_n$ are independent in $A^1(\H_{g,n})$ for any $n$ and $g \geq 2$. \end{lem} \begin{proof} In total, we have $n + {n \choose 2}$ divisors we wish to show are independent. Let us now introduce $n + {n \choose 2}$ test curves in $\widetilde{\H}_{g,n}$ which we shall intersect with our divisors. Our test curves come in two families. Below, we compute the intersection number of each test curve with each divisor. Then we describe a change of basis which makes the intersection matrix upper triangular with nonzero diagonal entries, and so visibly full rank. 
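The linear algebra in this proof can also be verified mechanically. The following Python sketch (our own illustration, not part of the argument; all function names are ours) encodes the intersection numbers \eqref{i1}--\eqref{i4} computed in the next subsections, applies the stated row and column operations, and confirms the resulting block-triangular shape:

```python
from itertools import combinations

def intersection_matrix(g, n):
    """Rows: T_1..T_n, then T_{ij} (i<j); columns: psi_1..psi_n,
    then delta_{ij} (i<j).  Entries follow (i1)-(i4)."""
    pairs = list(combinations(range(n), 2))
    rows = []
    for i in range(n):  # rows for the first family T_i
        psi = [2 * g + n - 3 if j == i else 1 for j in range(n)]
        dlt = [1 if i in p else 0 for p in pairs]
        rows.append(psi + dlt)
    for i, j in pairs:  # rows for the second family T_{ij}
        psi = [4 * g + n - 2 if k in (i, j) else 2 for k in range(n)]
        dlt = [(2 * g + 2, 1, 0)[2 - len({i, j} & set(p))] for p in pairs]
        rows.append(psi + dlt)
    return rows, pairs

def block_form(g, n):
    """Column operations psi_i -> psi_i - sum_j delta_{ij},
    then row operations T_{ij} -> T_{ij} - T_i - T_j."""
    m, pairs = intersection_matrix(g, n)
    for row in m:
        for i in range(n):
            row[i] -= sum(row[n + a] for a, p in enumerate(pairs) if i in p)
    for a, (i, j) in enumerate(pairs):
        m[n + a] = [x - y - z for x, y, z in zip(m[n + a], m[i], m[j])]
    return m, pairs

# The transformed matrix is block triangular with diagonal blocks
# (2g-2)*Id and 2g*Id, hence of full rank for g >= 2.
for g in (2, 3, 7):
    for n in (2, 3, 5):
        m, pairs = block_form(g, n)
        for i in range(n):
            assert all(m[i][j] == (2 * g - 2) * (i == j) for j in range(n))
        for a in range(len(pairs)):
            assert all(v == 0 for v in m[n + a][:n])
            assert all(m[n + a][n + b] == 2 * g * (a == b)
                       for b in range(len(pairs)))
```

For $g = 2$, $n = 3$ the unmodified matrix agrees entry by entry with the table displayed in this proof.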
\subsubsection{The first family of test curves} \label{ff} First, for each $i = 1, \ldots, n$, we let $T_i$ be the test curve where $p_i$ roams over the curve and $p_j$ for $j \neq i$ is a fixed non-Weierstrass point. To build this family, take $C \times C$ and sections $\sigma_i = \Delta$ (the diagonal) and $\sigma_j = C \times p_j$ when $j \neq i$. Then we blow up the points $(p_j, p_j)$ for $j \neq i$. Write $\nu:X \to C \times C$ for the blow up, and $E_j$ for the exceptional divisors. Then $T_i$ is the family $X \to C$ with sections $\tilde{\sigma}_1, \ldots, \tilde{\sigma}_n$ which are the proper transforms of $\sigma_1, \ldots, \sigma_n$. \begin{center} \includegraphics[width=6in]{Ti.pdf} \end{center} Since $\sigma_i \subset C \times C$ is the diagonal, we have \[\sigma_i^2 = \deg N_{\sigma_i/C \times C} = \deg T_{C \times C}|_{\sigma_i} - \deg T_C = -(2g-2).\] Then we have \[-(2g-2) = \sigma_i^2 = (\nu^*\sigma_i)^2 = (\tilde{\sigma}_i + \sum_{j \neq i} E_j)^2 = \tilde{\sigma}_i^2 + 2(n-1) - (n-1). \] Hence, $\tilde{\sigma}_i^2 = -(2g + n - 3)$. In a similar manner, for $j \neq i$, \[0 = \sigma_j^2 = (\nu^*\sigma_j)^2 = (\tilde{\sigma}_j + E_j)^2 = \tilde{\sigma}_j^2 + 2 - 1,\] so $\tilde{\sigma}_j^2 = -1$. By Lemma \ref{psitest}, we therefore obtain \begin{align} \label{i1} T_i \cdot \psi_j &= \begin{cases} 1 & \text{if } i \neq j \\ 2g + n - 3 & \text{if } i = j. \end{cases} \intertext{Because the total space of our test family is smooth, the intersection of $T_i$ with each boundary divisor $\Delta_{jk}$ is reduced. Therefore, we obtain} T_i \cdot \delta_{jk} &= \begin{cases} 1 & \text{if $j = i$ or $k= i$} \\ 0 & \text{if $j, k \neq i$}.\end{cases} \end{align} \subsubsection{The second family of test curves} Our second family of test curves are curves $T_{ij}$ where $p_i$ runs over the curve, $p_j$ is its conjugate, and the other points are fixed non-Weierstrass points. 
To build this, we start with $C \times C$ and take sections $\sigma_i = \Delta$ and $\sigma_j = \iota(\Delta)$ where $\iota : C\times C \to C\times C$ sends $(p, q) \mapsto (p, \overline{q})$. Now $\sigma_i$ and $\sigma_j$ intersect at the points $(p, p)$ where $p$ is Weierstrass. For $k \neq i, j$ we set $\sigma_k = C \times p_k$ to be a fixed non-Weierstrass point. Then $\sigma_k$ meets $\sigma_i = \Delta$ in $(p_k, p_k)$ and it meets $\sigma_j = \iota(\Delta)$ in $(\overline{p}_k, p_k)$. Let $\nu: S \to C \times C$ be the blow up at all points $(p, p)$ with $p$ Weierstrass (call these exceptionals $E_1', \ldots, E_{2g+2}'$) and all points $(p_k, p_k)$ and $(\overline{p}_k, p_k)$ for $k \neq i,j$ (call these exceptionals $E_k$ and $\overline{E}_k$ respectively). \begin{center} \includegraphics[width=6in]{Tij.pdf} \end{center} To calculate the degree of the $\psi$ classes, we note that \[-(2g-2) = (\nu^*\sigma_i)^2 = (\tilde{\sigma}_i + E_1'+\ldots + E_{2g+2}' + \sum_{k \neq i,j} E_k)^2 = \tilde{\sigma}_i^2 + 2(2g+2+n-2) - (2g+2+n-2) \] and so $\tilde{\sigma}_i^2 = -(4g+n-2)$. A similar calculation (computing the self-intersection of the divisor $\nu^*\sigma_j = \tilde{\sigma}_j + E_1' + \ldots + E_{2g+2}' + \sum \overline{E}_k$) yields $\tilde{\sigma}_j^2 = -(4g+n-2)$ as well. Meanwhile, for $k \neq i,j$, we have \[0 = (\nu^*\sigma_k)^2 = (\tilde{\sigma}_k + E_k + \overline{E}_k)^2 =\tilde{\sigma}_k^2 + 4 - 2 \] and so $\tilde{\sigma}_k^2 = -2$. Therefore, by Lemma \ref{psitest} we have \begin{align} T_{ij} \cdot \psi_k &= \begin{cases} 2 & \text{if $k \neq i, j$} \\ 4g+n-2 & \text{if $k = i$ or $k = j$.} \end{cases} \intertext{Again, the total space of our test family is smooth, so the intersection of $T_{ij}$ with the boundary divisor $\Delta_{k\ell}$ is reduced. 
It is then straightforward to count} T_{ij} \cdot \delta_{k\ell} &= \begin{cases} 2g+2 & \text{if }\#(\{i,j\} \cap \{k,\ell\}) = 2 \\ 1 & \text{if }\#(\{i,j\} \cap \{k,\ell\})= 1 \\ 0 & \text{if }\#(\{i,j\} \cap \{k,\ell\}) = 0. \label{i4} \end{cases} \end{align} \subsubsection{The intersection matrix and change of basis} Consider the intersection matrix with rows the test curves $T_i$ and $T_{ij}$ and the columns the divisors $\psi_i$ and $\delta_{ij}$. For example, when $n = 3$, the intersection matrix is given below. \begin{center} \begin{tabular}{c||c|c|c|c|c|c} $\cdot$ & $\psi_1$ & $\psi_2$ & $\psi_3$ & $\delta_{12}$ & $\delta_{13}$ & $\delta_{23}$ \\ \hline \hline $T_1$ & $2g$ & $1$ & $1$ & $1$ & $1$ & $0$ \\ $T_2$ & $1$ & $2g$ & $1$ & $1$ & $0$ & $1$ \\ $T_3$ & $1$ & $1$ & $2g$ & $0$ & $1$ & $1$ \\ $T_{12}$ & $4g+1 $& $4g+1$ & $2$ & $2g+2$ & $1$ & $1$ \\ $T_{13}$ & $4g+1$ & $2$ & $4g+1$ & $1$ & $2g+2$ & $1$ \\ $T_{23}$ & $2$ & $4g+1$ & $4g+1$ & $1$ & $1$ & $2g+2$ \end{tabular} \end{center} We perform the column operations that subtract $\sum_{j \neq i}\delta_{ij}$ from the $\psi_i$ column. Then, we perform the row operations that subtract $T_i + T_j$ from $T_{ij}$. 
When $n = 3$, this gives: \begin{center} \begin{tabular}{c||c|c|c|c|c|c} $\cdot$ & $\psi_1 - \delta_{12} - \delta_{13}$ & $\psi_2 - \delta_{12} - \delta_{23}$ & $\psi_3 - \delta_{13} - \delta_{23}$ & $\delta_{12}$ & $\delta_{13}$ & $\delta_{23}$ \\ \hline \hline $T_1$ & $2g-2$ & $0$ & $0$ & $1$ & $1$ & $0$ \\ $T_2$ & $0$ & $2g-2$ & $0$ & $1$ & $0$ & $1$ \\ $T_3$ & $0$ & $0$ & $2g-2$ & $0$ & $1$ & $1$ \\ $T_{12} - T_1 - T_2$ & $0$ & $0$ & $0$ & $2g$ & $0$ & $0$ \\ $T_{13} - T_1 - T_3$ & $0$ & $0$ & $0$ & $0$ & $2g$ & $0$ \\ $T_{23} - T_2 - T_3$ & $0$ & $0$ & $0$ & $0$ & $0$ & $2g$ \end{tabular} \end{center} More generally, using \eqref{i1}--\eqref{i4}, one can see that, after this change of basis, the intersection matrix always takes the block form \begin{center} \begin{tabular}{c||c|c} $\cdot$ & $\psi_i - \sum \delta_{ij}$ & $\delta_{ij}$ \\ \hline\hline $T_i$ & $(2g - 2) \cdot \mathrm{Id}$ & $*$ \\ $T_{ij} - T_i - T_j$ & $0$ & $2g \cdot \mathrm{Id}$ \end{tabular} \end{center} In particular, the intersection matrix is full rank for all $g \geq 2$ and any $n$. It follows that $\psi_1, \ldots, \psi_n$ and the boundary divisors $\delta_{ij}$ are independent on $\widetilde{\H}_{g,n}$. \end{proof} \subsection{Spanning of $\psi$ classes} \label{spanning} Our goal in this section is to show that $A^1(\H_{g,n})$ is spanned by $\psi_1, \ldots, \psi_n$ when $n \leq 2g+6$. The first step is the following. \begin{lem} \label{no-extra} For $n \leq 2g+6$, we have $A^1(\mathcal{I}_{g,n}^{\circ,1} \smallsetminus \Delta) = 0$. \end{lem} \begin{proof} We described $\mathcal{I}_{g,n}^{\circ,1}$ in Section \ref{smn} as an open subset of a projective bundle $\mathbb{P} \mathcal{F}$ over $\mathcal{B}^\circ \subseteq \mathcal{B}_{g,n}$, which in turn is an open subset of an affine bundle over $\BPU \times \BPU$. 
By the projective bundle theorem, $A^1(\mathbb{P} \mathcal{F})$ is generated by $\zeta := c_1(\O_{\mathbb{P} \mathcal{F}}(1))$ and the generators $c_1 := c_1(\mathcal{V})$ and $d_1 := c_1(\mathcal{W})$ of $A^1(\BPU \times \BPU)$. Our goal is thus to show that $\zeta, c_1, d_1$ all restrict to zero on $\mathcal{I}_{g,n}^{\circ,1} \smallsetminus \Delta \subset \mathbb{P} \mathcal{F}$. By construction, the pullback of $\mathcal{V}$ under $\mathcal{I}_{g,n}^{\circ,1} \subset \mathbb{P} \mathcal{F} \to \BPU \times \BPU$ is the rank two bundle $f_*\O_{\mathcal{C}}(p_1 + \overline{p}_1)$, whose projectivization is the universal $\mathbb{P}^1$ bundle over $\mathcal{I}_{g,1}$. Therefore, $c_1$ is the pullback of $c_1$ from Lemmas \ref{1pt} and \ref{0pt} in Section \ref{relsec}. In particular, Equation \eqref{dclass} says $\delta$ is a non-zero multiple of $c_1$, so $c_1 = 0 \in A^1(\mathcal{I}_{g,n}^{\circ,1} \smallsetminus \Delta)$. Below, we find two more independent relations from studying components of the complement of $\mathcal{I}_{g,n}^{\circ,1} \subset \mathbb{P} \mathcal{F}$. Let $e = g-n+1$ if $n \leq g$ and let $e = 0$ if $n \geq g+1$ so that \[\mathcal{F} = \ker\left(\phi^* \mathcal{E} \to \sigma_1^*P^e_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}(\mathcal{N}) \oplus \bigoplus_{\substack{2 \leq j \leq n \\ j \neq g+1}}\sigma_j^* \mathcal{N}\right).\] \subsubsection{Meeting the horizontal ruling with too high multiplicity at $\sigma_1$} \label{firstrel} Let $\mathcal{F}' \subset \mathcal{F}$ be the kernel of \[\phi^* \mathcal{E} \to \sigma_1^*P^{e+1}_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}(\mathcal{N}) \oplus \bigoplus_{\substack{2 \leq j \leq n \\ j \neq g+1}}\sigma_j^* \mathcal{N}.\] Equations in $\mathbb{P} \mathcal{F}'$ vanish to order $e+2$ at $\sigma_1$ along $\pi_2^{-1}(\pi_2(\sigma_1))$. 
If $n \leq g$, then taking into account the vanishing along the other sections lying on this ruling, the zero locus of such an equation meets the ruling with multiplicity at least $g+2 > g+1$, so it contains the entire ruling. On the other hand, if $n \geq g+1$, then such an equation meets the ruling with multiplicity at least two at $\sigma_1$, so its intersection with the ruling cannot consist of $g+1$ distinct points. In either case, $\mathbb{P} \mathcal{F}' \subset \mathbb{P} \mathcal{F}$ lies in the complement of $\mathcal{I}_{g,n}^{\circ,1} \subset \mathbb{P} \mathcal{F}$. By the snake lemma, we have \[\mathcal{F}/\mathcal{F}' \cong \ker(\sigma_1^*P^{e+1}_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}(\mathcal{N}) \to \sigma_1^*P^{e}_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}(\mathcal{N})) \cong \sigma_1^*(\mathcal{N} \otimes \Omega_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}^{\otimes e+1}).\] The divisor $\mathbb{P}\mathcal{F}' \subset \mathbb{P} \mathcal{F}$ is defined by the vanishing of the composition \[\O_{\mathbb{P} \mathcal{F}}(-1) \to \mathcal{F} \to \mathcal{F}/\mathcal{F}' \cong \sigma_1^*(\mathcal{N} \otimes \Omega_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}^{\otimes e+1}). \] Therefore, the fundamental class of $\mathbb{P} \mathcal{F}' \subset \mathbb{P} \mathcal{F}$ is \begin{align*}c_1(\O_{\mathbb{P} \mathcal{F}}(1) \otimes \sigma_1^*(\mathcal{N} \otimes \Omega_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}^{\otimes e+1} )) &= \zeta + c_1(\sigma_1^*\pi_1^*\O_{\mathbb{P} \mathcal{V}}(g+1)) \\ & \qquad + c_1(\sigma_1^*\pi_2^*\O_{\mathbb{P} \mathcal{W}}(2)) + c_1(\sigma_1^*\Omega_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{W}}^{\otimes e+1 }). \intertext{First, note that $\Omega_{\mathbb{P} \mathcal{V}\times \mathbb{P} \mathcal{W}/\mathbb{P}\mathcal{W}} \cong \pi_1^*\O_{\mathbb{P} \mathcal{V}}(-2)$. 
Now, $v = \pi_1\circ \sigma_1$ is a section whose image represents the class $c_1(\O_{\mathbb{P} \mathcal{V}}(1))$, so $c_1(v^*\O_{\mathbb{P} \mathcal{V}}(1)) = \pi_{1*}(c_1(\O_{\mathbb{P} \mathcal{V}}(1))^2) = -c_1(\mathcal{V}) = -c_1$. Similarly, $c_1(\sigma_1^*\pi_2^*\O_{\mathbb{P} \mathcal{W}}(1)) = -c_1(\mathcal{W}) = -d_1$. Therefore, the above becomes} [\mathbb{P} \mathcal{F}'] &= \zeta - (g+1)c_1 - 2d_1 + 2(e+1)c_1 \\ &= \zeta + (2e-g+1)c_1 - 2d_1. \end{align*} Hence, we obtain the relation $\zeta + (2e-g+1)c_1 - 2d_1 = 0 \in A^1(\mathcal{I}_{g,n}^{\circ,1})$. \subsubsection{Meeting the vertical line with too high multiplicity at $\sigma_1$} \label{sp} In $\mathcal{I}_{g,n}^{\circ, 1}$, the first marked point is not Weierstrass. Therefore, equations in $\mathbb{P} \mathcal{F}$ that are tangent to the vertical ruling at $\sigma_1$ lie in the complement of $\mathcal{I}_{g,n}^{\circ, 1}$. Let $\mathcal{F}'' \subset \mathcal{F}$ be the subbundle of equations whose vanishing is tangent to the vertical ruling at $\sigma_1$. Then \[\mathcal{F}/\mathcal{F}'' \cong \ker(\sigma_1^*P^1_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{V}}(\mathcal{N}) \to \sigma_1^*\mathcal{N}) \cong \sigma_1^*(\mathcal{N} \otimes \Omega_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{V}}).\] Similarly to before, we calculate \begin{align*}c_1(\sigma_1^*(\mathcal{N} \otimes \Omega_{\mathbb{P} \mathcal{V}\times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{V}})) &= c_1(\sigma_1^*\pi_1^*\O_{\mathbb{P} \mathcal{V}}(g+1)) + c_1(\sigma_1^*\pi_2^*\O_{\mathbb{P} \mathcal{W}}(2)) + c_1(\sigma_1^*\Omega_{\mathbb{P} \mathcal{V} \times \mathbb{P} \mathcal{W}/\mathbb{P} \mathcal{V}}) \\ &= -(g+1)c_1 - 2d_1 + 2d_1 = -(g+1)c_1. \end{align*} Hence, the fundamental class of $\mathbb{P} \mathcal{F}'' \subset \mathbb{P} \mathcal{F}$ is $\zeta - (g+1)c_1$. Thus, we obtain the relation $\zeta - (g+1) c_1 = 0 \in A^1(\mathcal{I}_{g,n}^{\circ, 1} \smallsetminus \Delta)$. 
Since we have already shown $c_1 = 0$, this gives $\zeta = 0$. Then the relation from Section \ref{firstrel} shows $d_1 = 0$ too. \end{proof} First, we settle the case $n = 2$. \begin{lem} The classes $\psi_1, \psi_2$ form a basis for $A^1(\H_{g,2})$. Hence $A^1(\mathcal{I}_{g,2})$ is spanned by $\psi_1, \psi_2,$ and $\delta$. \end{lem} \begin{proof} We have $\H_{g,2} \smallsetminus (D_{11} \cup D_{12}) = \mathcal{I}_{g,2}^{\circ, 1} \smallsetminus \Delta$. Hence $A^1(\H_{g,2} \smallsetminus (D_{11} \cup D_{12})) = 0$ by Lemma \ref{no-extra}. It follows that $\dim A^1(\H_{g,2}) \leq 2$. Meanwhile, Lemma \ref{psi-indep} shows $\psi_1$ and $\psi_2$ are independent in $A^1(\H_{g,2})$, so we can conclude that $\psi_1, \psi_2$ are a basis for $A^1(\H_{g,2})$. \end{proof} This allows us to see that the divisors we have removed are tautological for any $n$. \begin{lem} \label{dijl} For any $g, n$, the class $[D_{ij}] \in A^1(\mathcal{I}_{g,n})$ lies in the span of $\psi_i, \psi_j$ and $\delta$. \end{lem} \begin{proof} If $i \neq j$, the divisor $D_{ij}$ is the pullback of $D_{12} \subset \mathcal{I}_{g,2}$ under the map $\mathcal{I}_{g,n} \to \mathcal{I}_{g,2}$ that forgets all but the $i^{\mathrm{th}}$ and $j^{\mathrm{th}}$ marked points. Then this follows from the previous lemma, which said $\psi_1, \psi_2, \delta$ span $A^1(\mathcal{I}_{g,2})$. If $i = j$, then $D_{ii}$ is the pullback of $D_{11} \subset \mathcal{I}_{g,1}$, so this follows from Lemma \ref{1pt}, which showed $A^1(\mathcal{I}_{g,1})$ is spanned by $\psi_1$ and $\delta$. \end{proof} \begin{lem} If $n \leq 2g+6$, then the classes $\psi_1, \ldots, \psi_n$ form a basis for $A^1(\H_{g,n})$. Hence, $A^1(\mathcal{I}_{g,n})$ is spanned by $\psi_1, \ldots, \psi_n$ and $\delta$. \end{lem} \begin{proof} By Lemma \ref{no-extra}, we see $A^1(\H_{g,n} \smallsetminus \bigcup_{i,j} D_{ij}) = 0$. But by Lemma \ref{dijl}, each $[D_{ij}]$ lies in the span of $\psi_i$ and $\psi_j$ (the class $\delta$ restricts to zero on $\H_{g,n}$). 
It follows that $A^1(\H_{g,n})$ is spanned by $\psi_1, \ldots, \psi_n$. They are a basis by Lemma \ref{psi-indep}. \end{proof} Having established (a) and (b) from the beginning of this section, our previous proof of Theorem \ref{divgen} now gives the result in any characteristic $\neq 2$. \bibliographystyle{amsplain}
https://arxiv.org/abs/1709.03462
The horofunction boundary of finite-dimensional $\ell_p$ spaces
We give a complete description of the horofunction boundary of finite-dimensional $\ell_p$ spaces for $1\leq p\leq \infty$. We also study the variation norm on $\mathbb{R}^{\mathcal{N}}$, $\mathcal{N}=\{1,...,N\}$, and the corresponding horofunction boundary. As a consequence, we describe the horofunctions for Hilbert's projective metric on the interior of the standard cone $\mathbb{R}^{\mathcal{N}}_{+}$ of $\mathbb{R}^{\mathcal{N}}$.
\section{Introduction} There has recently been growing interest in the horofunction boundary of metric spaces. It is a powerful tool in the study of self-mappings of convex cones \cite{Gaubert_Vigeral2012,Karlsson2014} and random walks on groups \cite{Karlsson_Ledrappier2011}. The horofunction boundary has been studied mainly in spaces of nonpositive curvature since the introduction of the notion by Gromov \cite{Gromov1981}. By applying methods of convex analysis, Walsh \cite{Walsh2007} describes the horofunctions of general finite-dimensional normed spaces. Afterwards, in \cite{Walsh2008} he gives a description of the horofunction boundary of Hilbert's projective metric on general finite-dimensional cones. In an earlier paper \cite{Karlsson_Metz_Noskov2006} polyhedral normed spaces and Hilbert's projective metric on simplicial cones were studied. Let $1\leq p\leq \infty$ and let $\mathcal{N}=\{1,...,N\}$ for any $N\in\N$. Throughout, we shall denote by $\ell_p(\mathcal{N},\R)$ the vector space $\R^{\mathcal{N}}$ endowed with the norm \[ \norm{x}_{p}=\begin{cases} \left( \sum_{i\in\mathcal{N}}\abs{x_i}^{p} \right)^{1/p}, & 1\leq p < \infty, \\ \max_{i\in\mathcal{N}}\abs{x_i}, & p=\infty, \end{cases} \] for all $x=(x_i)_{i\in\mathcal{N}}\in\R^{\mathcal{N}}$. We shall also denote by $\ell_{\var}(\mathcal{N},\R)$ the vector space $\R^{\mathcal{N}}$ endowed with the pseudo-norm \[ \norm{x}_\var=\max_{i\in\mathcal{N}}x_i - \min_{i\in\mathcal{N}}x_i. \] The purpose of this paper is to give an explicit and detailed description of the horofunction boundary of $\ell_p(\mathcal{N},\R)$, for all $1\leq p\leq \infty$. We also give a complete description of the horofunction boundary of the pseudo-normed space $\ell_{\var}(\mathcal{N},\R)$. As a consequence, we readily obtain the horofunctions for Hilbert's projective metric on the interior of the standard cone $\R^{\mathcal{N}}_{+}$ of $\R^{\mathcal{N}}$. 
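As a quick numerical illustration (our own sketch, not part of the arguments below) of how horofunctions arise as pointwise limits, consider the $\ell_1$ norm on $\R^2$ defined above and the internal functions $y \mapsto \norm{\cdot - y}_1 - \norm{y}_1$. Sending the first coordinate of $y$ to $-\infty$ while the second stays fixed at $\mu$ produces the limit $x \mapsto x_1 + \abs{x_2 - \mu} - \abs{\mu}$, in agreement with the explicit formulas obtained in \Cref{horol1N}:

```python
def norm1(x):
    return sum(abs(t) for t in x)

def tau1(y):
    # tau_1(y)(x) = ||x - y||_1 - ||y||_1, the embedding used throughout
    return lambda x: norm1([a - b for a, b in zip(x, y)]) - norm1(y)

mu = 0.5
x = (2.0, -1.0)
# y^n = (-n, mu): the first coordinate escapes to -infinity,
# the second stays bounded.
vals = [tau1((-n, mu))(x) for n in (10, 100, 1000)]
# Limiting horofunction h(x) = x_1 + |x_2 - mu| - |mu| (epsilon_1 = +1)
h = x[0] + abs(x[1] - mu) - abs(mu)
assert all(abs(v - h) < 1e-9 for v in vals)
```

In fact, for this choice of $x$ the convergence stabilizes once $n > \abs{x_1}$, since $\abs{x_1 + n} - n = x_1$ for such $n$.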
We would like to emphasize that the techniques we use in this paper are significantly different from those used by Walsh. Our results contain explicit formulas for the horofunctions. This paper is organized as follows. In \Cref{horol1N} we give a complete description of the horofunctions on $\ell_1(\mathcal{N},\R)$. In \Cref{horolpN} we show that if $1 < p < \infty$ then the horofunction boundary of $\ell_p(\mathcal{N},\R)$ is precisely the set of all norm one linear functionals on $\ell_p(\mathcal{N},\R)$. In \Cref{horolinfN} we give a complete description of the horofunctions on $\ell_{\infty}(\mathcal{N},\R)$. In \Cref{horolvarN} we give a complete description of the horofunction boundary of $\ell_{\var}(\mathcal{N},\R)$, and consequently we obtain all the horofunctions for Hilbert's projective metric on the interior of the standard cone $\R^{\mathcal{N}}_{+}$ of $\R^{\mathcal{N}}$. As an application of the latter result, we give a new proof of Perron's theorem. \section{Preliminaries} \subsection{The horofunction boundary of a metric space} Let $(X,d)$ be a metric space. Fix an arbitrary \emph{base point} $b$ in $X$. Define the mapping $\tau_d: X \to \R^{X}$ by associating to any $y\in X$ the function $\tau_d(y)$ given by \begin{equation}\label{emb} \tau_d(y)(x):=d(x,y)-d(b,y) \end{equation} for all $x$ in $X$. For each $y\in X$, the function $\tau_d(y)$ is bounded from below by $-d(b,y)$ and, moreover, is $1$-Lipschitz with respect to the metric $d$. In fact, by the triangle inequality it follows that \begin{align*} \abs{\tau_d(y)(x)-\tau_d(y)(z)} &=\abs{d(x,y)-d(b,y)-d(z,y)+d(b,y)} \\ &=\abs{d(x,y)-d(z,y)} \\ &\leq d(x,z) \end{align*} for all $x,z\in X$. Furthermore, by taking $z=b$ we get $\abs{\tau_d(y)(x)}\leq d(x,b)$ for all $x\in X$. Hence \[ \tau_d(X)\subset \prod_{x\in X}[-d(x,b),d(x,b)]\subset \R^{X}. \] By Tychonoff's theorem the product space $\prod_{x\in X}[-d(x,b),d(x,b)]$ is compact in the product topology. 
Therefore the set $\tau_d(X)$ has compact closure in this topology, which is equivalent to the topology of pointwise convergence. \begin{defin} We denote by $\overline{X}^{H}:=\closure(\tau_d(X))$ the \emph{horofunction compactification} of $(X,d)$. The \emph{horofunction boundary} of $(X,d)$ is defined by \begin{equation}\label{horobnd} \partial_H X:=\overline{X}^{H}\setminus\tau_d(X). \end{equation} The elements of $\partial_H X$ are called \emph{horofunctions} for the metric $d$ on $X$. For each $r\in\R$, the sublevel set $\mathcal{H}(h,r):=\{x\in X \mid h(x)\leq r\}$ is called a \emph{horoball} centered at $h\in\partial_H X$. \end{defin} \begin{rem} The mapping $y\mapsto\tau_d(y)$ is injective and continuous in the product topology. If $(X,d)$ is \emph{proper}, i.e., every closed ball is compact, then the mapping $y\mapsto\tau_d(y)$ defines an embedding $X\hookrightarrow\overline{X}^{H}$. By identifying $X$ with $\tau_d(X)$, the horofunction boundary (\ref{horobnd}) becomes $\partial_H X=\overline{X}^{H}\setminus X$. The choice of the base point $b\in X$ is irrelevant, in the sense that horofunction boundaries of $(X,d)$ for different base points are homeomorphic. We refer to \cite{Ballmann_Gromov_Schroeder1985,Bridson_Haefliger1999,Rieffel2002} for further details. \end{rem} \begin{rem} If $X$ is a normed space with norm $\norm{\cdot}$, then we choose the base point $b=0\in X$, and so (\ref{emb}) becomes $\tau(y)(x)=\norm{x-y}-\norm{y}$. Moreover, if $X$ is finite-dimensional, then $(X,\norm{\cdot})$ is a proper metric space and hence any $h\in\overline{X}^{H}$ can be written as $h(x)=\lim_{n\to\infty}\tau(y^{n})(x)$, for all $x\in X$ and for some sequence $\{y^{n}\}_{n\in\N}$ in $X$. \end{rem} It is well-known that the horofunction boundary of $\ell_1(\{1\},\R):=(\R,\abs{\cdot})$ has exactly two elements. 
More precisely, by considering unbounded sequences $\{y^{n}\}_{n\in\N}$ of real numbers, one obtains \begin{equation}\label{l11horo} \partial_H \ell_1(\{1\},\R)= \left\lbrace x\mapsto h_{\epsilon} (x)=\epsilon x \mathrel{\big\vert} \begin{aligned} \epsilon\in\{-1,+1\} \end{aligned} \right\rbrace. \end{equation} In the following sections we describe the horofunction boundary of $\ell_p(\mathcal{N},\R)$ for all $1 \leq p \leq \infty$. Throughout we shall denote $\tau_p(y)(x)=\norm{x-y}_p-\norm{y}_p$, where $x,y\in\ell_p(\mathcal{N},\R)$. \section{The horofunction boundary of $\ell_1(\mathcal{N},\R)$}\label{horol1N} \begin{lem}\label{ThmAl1} Let $N\geq 2$ and $\mathcal{N}=\{1,...,N\}$. Let $\{y^{n}\}_{n\in\N}$ be a sequence in $\ell_1(\mathcal{N},\R)$ such that $\norm{y^{n}}_1 \to \infty$ as $n \to \infty$. Then there exists $\emptyset\subsetneq\mathcal{I}\subseteq\mathcal{N}$ such that the sequence of functions $\{\tau_{1}(y^{n})\}_{n\in\N}$ has a subsequence which converges pointwise to the function \[ x\mapsto h_{\epsilon,\mu}^{\mathcal{I}}(x):=\sum_{i\in\mathcal{I}}\epsilon_ix_i + \sum_{i\in\mathcal{N}\setminus\mathcal{I}}(\abs{x_i-\mu_i}-\abs{\mu_i}), \] where $\epsilon=(\epsilon_i)_i\in\{-1,+1\}^{\mathcal{I}}$ and $\mu=(\mu_i)_i\in\R^{\mathcal{N}\setminus\mathcal{I}}$. \end{lem} \begin{proof} Let $\{y^{n}\}_{n\in\N}$ be a sequence in $\ell_1(\mathcal{N},\R)$ such that $\norm{y^{n}}_1 \to \infty$, as $n \to \infty$. By taking subsequences, we can find $\emptyset\subsetneq\mathcal{I}\subseteq\mathcal{N}$ such that $\abs{y_i^{n}}\to \infty$, as $n \to \infty$, for all $i\in\mathcal{I}$, and $\{y_i^{n}\}_{n\in\N}\subset\R$ is bounded for all $i\in\mathcal{N}\setminus\mathcal{I}$. 
By applying Cantor's diagonal argument and (\ref{l11horo}), we find a further subsequence such that for every $x\in\ell_1(\mathcal{N},\R)$, \begin{align*} \tau_{1}(y^{n})(x) &=\sum_{i\in\mathcal{N}}\abs{x_i-y_i^{n}} - \sum_{i\in\mathcal{N}}\abs{y_i^{n}} \\ &= \sum_{i\in\mathcal{I}} \left( \abs{x_i-y_i^{n}}-\abs{y_i^{n}} \right) + \sum_{i\in\mathcal{N}\setminus\mathcal{I}} \left( \abs{x_i-y_i^{n}}-\abs{y_i^{n}} \right)\\ &\xrightarrow[n \to \infty]{} \sum_{i\in\mathcal{I}}\epsilon_ix_i + \sum_{i\in\mathcal{N}\setminus\mathcal{I}}(\abs{x_i-\mu_i}-\abs{\mu_i}), \end{align*} where $\epsilon=(\epsilon_i)_i\in\{-1,+1\}^{\mathcal{I}}$ and $\mu=(\mu_i)_i\in\R^{\mathcal{N}\setminus\mathcal{I}}$. \end{proof} \begin{thm}\label{ThmBl1} Let $N\geq 2$ and $\mathcal{N}=\{1,...,N\}$. The horofunction boundary of the metric space $\ell_1(\mathcal{N},\R)$ is given by \begin{equation}\label{l1horoset} \partial_H \ell_1(\mathcal{N},\R)= \left\lbrace x\mapsto h_{\epsilon,\mu}^{\mathcal{I}}(x) \mathrel{\bigg\vert} \begin{aligned} &\emptyset\subsetneq\mathcal{I}\subseteq\mathcal{N},\; \epsilon\in\{-1,+1\}^{\mathcal{I}},\\ &\mu\in\R^{\mathcal{N}\setminus\mathcal{I}} \end{aligned} \right\rbrace, \end{equation} where $h_{\epsilon,\mu}^{\mathcal{I}}(x)=\sum_{i\in\mathcal{I}}\epsilon_ix_i + \sum_{i\in\mathcal{N}\setminus\mathcal{I}}(\abs{x_i-\mu_i}-\abs{\mu_i})$ for all $x\in\ell_1(\mathcal{N},\R)$. \end{thm} \begin{proof} Suppose that $h\in\partial_H \ell_1(\mathcal{N},\R)$. Then there exists a sequence $\{y^{n}\}_{n\in\N}$ in $\ell_1(\mathcal{N},\R)$ with $\norm{y^{n}}_1 \to \infty$ such that $\{\tau_{1}(y^{n})\}_{n\in\N}$ converges pointwise to $h$ as $n \to \infty$. By \Cref{ThmAl1} there exist $\emptyset\subsetneq\mathcal{I}\subseteq\mathcal{N}$, $\epsilon\in\{-1,+1\}^{\mathcal{I}}$ and $\mu\in\R^{\mathcal{N}\setminus\mathcal{I}}$ such that there is a subsequence $\{\tau_{1}(y^{n_k})\}_{k}$ that converges pointwise to $ h_{\epsilon,\mu}^{\mathcal{I}}$ as $k \to \infty $. 
Therefore $h=h_{\epsilon,\mu}^{\mathcal{I}}$ and so $\partial_H \ell_1(\mathcal{N},\R)$ is contained in the set on the right-hand side of (\ref{l1horoset}). For the other inclusion, assume that $\mathcal{I}$ is any nonempty subset of $\mathcal{N}$. Let $\epsilon\in\{-1,+1\}^{\mathcal{I}}$ and let $\mu\in\R^{\mathcal{N}\setminus\mathcal{I}}$. We will show that the function $h_{\epsilon,\mu}^{\mathcal{I}}$ belongs to $\overline{\ell_1(\mathcal{N},\R)}^{H}\setminus\tau_1(\ell_1(\mathcal{N},\R))$. Indeed, for each $n$ define $y^{n}=(y_{i}^{n})_{i\in\mathcal{N}}$ in $\ell_1(\mathcal{N},\R)$ by \begin{equation}\label{escapevector} y_i^{n}=\begin{cases} -\epsilon_i n, & i\in\mathcal{I}, \\ \mu_i, & i\in\mathcal{N}\setminus\mathcal{I}. \end{cases} \end{equation} Then for every $x\in\ell_1(\mathcal{N},\R)$ we have \begin{align*} \tau_{1}(y^{n})(x) &=\sum_{i\in\mathcal{I}} \left( \abs{x_i-y_i^{n}}-\abs{y_i^{n}} \right) + \sum_{i\in\mathcal{N}\setminus\mathcal{I}} \left( \abs{x_i-y_i^{n}}-\abs{y_i^{n}} \right)\\ &=\sum_{i\in\mathcal{I}} \left( \abs{x_i+\epsilon_i n}-n \right) + \sum_{i\in\mathcal{N}\setminus\mathcal{I}} \left( \abs{x_i-\mu_i}-\abs{\mu_i} \right)\\ &\xrightarrow[n \to \infty]{} \sum_{i\in\mathcal{I}}\epsilon_ix_i + \sum_{i\in\mathcal{N}\setminus\mathcal{I}} \left( \abs{x_i-\mu_i}-\abs{\mu_i} \right) = h_{\epsilon,\mu}^{\mathcal{I}}(x). \end{align*} Therefore $h_{\epsilon,\mu}^{\mathcal{I}}\in\overline{\ell_1(\mathcal{N},\R)}^{H}$. It remains to show that $h_{\epsilon,\mu}^{\mathcal{I}}$ is not an element of $\tau_1(\ell_1(\mathcal{N},\R))$. Suppose the contrary, so there exists $z\in\ell_1(\mathcal{N},\R)$ such that $h_{\epsilon,\mu}^{\mathcal{I}}=\tau_{1}(z)$. It follows by (\ref{escapevector}) that \[ h_{\epsilon,\mu}^{\mathcal{I}}(y^{n})=-n\abs{\mathcal{I}} - \sum_{i\in\mathcal{N}\setminus\mathcal{I}}\abs{\mu_i}\xrightarrow[n \to \infty]{}-\infty. 
\] However, by (\ref{emb}) we know that $\tau_{1}(z)$ is bounded from below by $-\norm{z}_1$, and hence \[ \liminf_{n\to\infty}\tau_{1}(z)(y^{n})\geq -\norm{z}_1 > -\infty, \] which is a contradiction. Therefore $h_{\epsilon,\mu}^{\mathcal{I}}$ belongs to $\partial_H \ell_1(\mathcal{N},\R)$, that is, every element of the set on the right-hand side of (\ref{l1horoset}) is a horofunction on $\ell_1(\mathcal{N},\R)$. \end{proof} \section{The horofunction boundary of $\ell_p(\mathcal{N},\R)$ for $1<p<\infty$}\label{horolpN} Recall that a normed space $(X,\norm{\cdot})$ is called \textit{uniformly convex} if for every $\epsilon\in]0,2]$ there exists $\delta>0$ such that $\norm{x+y}\leq 2(1-\delta)$ whenever $x,y\in X$ with $\norm{x}=\norm{y}=1$ and $\norm{x-y}\geq\epsilon$. A well-known result due to Clarkson \cite{Clarkson1936} is that $L_{p}$ and $\ell_p$ spaces are uniformly convex for $1<p<\infty$. It will be convenient to use the following equivalent characterization of uniform convexity. \begin{prop}({\cite[p.~287]{INF_DIM_GEOM2001}})\label{uniconvex} A Banach space $(X,\norm{\cdot})$ is uniformly convex if and only if $\norm{x_n-y_n}\to 0$, as $n\to \infty$, whenever $x_n,y_n\in X$ with $\norm{x_n}\leq 1,\norm{y_n}\leq 1$ for all $n\in\N$, and $\norm{x_n+y_n}\to 2$ as $n\to \infty$. \end{prop} \begin{lem}\label{ThmAlp} Let $p,q\in ]1,+\infty[$ such that $p^{-1}+q^{-1}=1$. Let $\{y^{n}\}_{n\in\N}$ be a sequence in $\ell_p(\mathcal{N},\R)$ such that $\norm{y^{n}}_p \to \infty$ as $n\to \infty$. Then there exists $\mu\in \ell_q(\mathcal{N},\R)$ with $\norm{\mu}_q=1$ for which the sequence of functions $\{\tau_{p}(y^{n})\}_{n\in\N}$ has a subsequence converging pointwise to the function \[ x\mapsto h_{\mu}(x):=-\sum_{i\in\mathcal{N}}\mu_ix_i. \] \end{lem} \begin{proof} We may assume without loss of generality that $y^{n}\neq 0$, and define $w^{n}:=y^{n}/\norm{y^{n}}_p$ for all $n$. 
By compactness of the unit sphere of $\ell_p(\mathcal{N},\R)$, it follows that there exists a subsequence $\{w^{n_k}\}_k$ that converges, as $k\to\infty$, to some $w\in\ell_p(\mathcal{N},\R)$ with $\norm{w}_p=1$. Therefore, by $\ell_p/\ell_q$-duality there exists a unique $\mu\in\ell_q(\mathcal{N},\R)$ with $\norm{\mu}_q=1$ such that $\iprod{\mu,w}=1$. Now, let $x\in\ell_p(\mathcal{N},\R)$ and for each $k$ define \begin{equation}\label{unitzk} z^{k}:=\frac{y^{n_k}-x}{\norm{x-y^{n_k}}_p} =\frac{-x}{\norm{x-y^{n_k}}_p}+\frac{\norm{y^{n_k}}_p}{\norm{x-y^{n_k}}_p}w^{n_k}. \end{equation} For each $k$ we have $\norm{z^{k}}_p=1$, and hence by $\ell_p/\ell_q$-duality there exists $\varphi^{k}\in\ell_q(\mathcal{N},\R)$ with $\norm{\varphi^{k}}_q=1$ such that $\iprod{\varphi^{k},z^{k}}=1$. By applying the assumption $\norm{y^{n_k}}_p \to \infty$ to (\ref{unitzk}), we obtain $\norm{z^{k}-w}_p\to 0$ as $k\to\infty$. Consequently, \[ 2=\norm{\varphi^{k}}_q+\norm{\mu}_q\geq\norm{\varphi^{k}+\mu}_q\geq \iprod{\varphi^{k}+\mu,z^{k}}=1+\iprod{\mu,z^{k}} \xrightarrow[k \to \infty]{}2, \] and hence, by \Cref{uniconvex}, we have $\norm{\varphi^{k}-\mu}_q\to 0$ as $k\to\infty$. On the other hand, by applying the functional $\varphi^{k}$ to both sides of (\ref{unitzk}) and using $\iprod{\varphi^{k},z^{k}}=1$, we obtain \begin{equation*} \norm{x-y^{n_k}}_p=\iprod{\varphi^{k},-x}+\norm{y^{n_k}}_p\iprod{\varphi^{k},w^{n_k}}. \end{equation*} Therefore, \begin{align*} \tau_{p}(y^{n_k})(x)&=\norm{x-y^{n_k}}_p-\norm{y^{n_k}}_p\\ &=\iprod{\varphi^{k},-x}+\norm{y^{n_k}}_p\iprod{\varphi^{k},w^{n_k}}-\norm{y^{n_k}}_p\\ &\xrightarrow[k \to \infty]{} -\iprod{\mu,x}=h_{\mu}(x). \end{align*} \end{proof} \begin{thm}\label{ThmBlp} Let $p,q\in ]1,+\infty[$ such that $p^{-1}+q^{-1}=1$.
The horofunction boundary of the metric space $\ell_p(\mathcal{N},\R)$ is given by \begin{equation}\label{lphoroset} \partial_H\ell_p(\mathcal{N},\R)= \left\lbrace x\mapsto h_{\mu}(x) \mathrel{\big\vert} \mu\in\ell_q(\mathcal{N},\R),\; \norm{\mu}_q=1 \right\rbrace, \end{equation} where $h_{\mu}(x)=-\sum_{i\in\mathcal{N}}\mu_ix_i$ for all $x\in\ell_p(\mathcal{N},\R)$. \end{thm} \begin{proof} If $h\in\partial_H\ell_p(\mathcal{N},\R)$, then there exists a sequence $\{y^{n}\}_{n\in\N}$ with $\norm{y^{n}}_p \to \infty$ such that $h$ is the pointwise limit of the sequence $\{\tau_{p}(y^{n})\}_{n\in\N}$. By \Cref{ThmAlp}, there exists $\mu\in\ell_q(\mathcal{N},\R)$ with $\norm{\mu}_q=1$ such that along subsequences $\tau_{p}(y^{n})(x)$ converges to $h_{\mu}(x)=-\sum_{i\in\mathcal{N}}\mu_ix_i$ for all $x\in\ell_p(\mathcal{N},\R)$. Therefore $h=h_\mu$ and so $\partial_H \ell_p(\mathcal{N},\R)$ is contained in the set on the right-hand side of (\ref{lphoroset}). On the other hand, if $\mu\in\ell_q(\mathcal{N},\R)$ with $\norm{\mu}_q=1$, then by $\ell_p/\ell_q$-duality there exists $w\in\ell_p(\mathcal{N},\R)$ with $\norm{w}_p=1$ such that $\sum_{i\in\mathcal{N}}\mu_i w_i=1$. Let $y^{n}=nw$ for all $n$. Then, by proceeding as in \Cref{ThmAlp}, it follows that $\tau_{p}(y^{n})(x)=\norm{x-nw}_p-n$ converges to $h_{\mu}(x)=-\sum_{i\in\mathcal{N}}\mu_ix_i$ for all $x\in\ell_p(\mathcal{N},\R)$. That is, $h_{\mu}$ belongs to $\overline{\ell_p(\mathcal{N},\R)}^{H}$. However, note that $h_{\mu}(y^{n})=-n$ for all $n$. Therefore, since for any $z\in\ell_p(\mathcal{N},\R)$ the function $\tau_{p}(z)$ is bounded from below, we must have $h_{\mu}\in\partial_H \ell_p(\mathcal{N},\R)$. That is, every element of the set on the right-hand side of (\ref{lphoroset}) is a horofunction on $\ell_p(\mathcal{N},\R)$. \end{proof} \begin{rem} \Cref{ThmAlp} and \Cref{ThmBlp} hold for every finite-dimensional uniformly convex Banach space. 
\end{rem} \section{The horofunction boundary of $\ell_\infty(\mathcal{N},\R)$}\label{horolinfN} It will be convenient to consider the \emph{top function} $\topf$ and the \emph{bottom function} $\botf$ defined on $\R^{\mathcal{N}}$ by \begin{equation*} \topf(x) := \max_{i\in\mathcal{N}}x_i, \qquad \botf(x) := \min_{i\in\mathcal{N}}x_i. \end{equation*} These functions simplify the notation considerably when proving \Cref{ThmAlsup} and \Cref{ThmBlsup} in this section, as well as \Cref{ThmAlvar}, \Cref{ThmBlvar}, and \Cref{horoHilbert} in \Cref{horolvarN}. The norm $\norm{\cdot}_\infty$ on $\R^{\mathcal{N}}$ can then be expressed as \begin{equation}\label{maxnorm} \norm{x}_\infty=\max\{\topf(x),-\botf(x)\}. \end{equation} The standard cone $\R_{+}^{\mathcal{N}}$ of $\R^{\mathcal{N}}$ is defined by \[ \R_{+}^{\mathcal{N}}:=\{x\in\R^{\mathcal{N}} \mid x_i\geq 0,\;\forall i\in\mathcal{N}\}. \] We denote by $\R_{>0}^{\mathcal{N}}$ the interior of $\R_{+}^{\mathcal{N}}$. The boundary $\partial\R_{+}^{\mathcal{N}}$ of $\R_{+}^{\mathcal{N}}$ is the set $\R_{+}^{\mathcal{N}}\setminus\R_{>0}^{\mathcal{N}}$. We shall denote by $\cht$ the element of $\R^{\mathcal{N}}$ given by $\cht=(1,...,1)$. It follows that $x - \botf(x)\cht$ and $\topf(x)\cht - x$ are both elements of $\partial\R_{+}^{\mathcal{N}}$ for all $x\in\R^{\mathcal{N}}$. The mapping $\Exp : \R^{\mathcal{N}}\to \R_{>0}^{\mathcal{N}}$ is defined by $\Exp(x)_i := e^{x_i}$ for all $i\in\mathcal{N}$. Similarly, the mapping $\Log : \R_{>0}^{\mathcal{N}}\to\R^{\mathcal{N}}$ is defined by $\Log(u)_i := \log(u_i)$ for all $i\in\mathcal{N}$. The \textit{Hadamard product} of any two elements $x=(x_i)_{i\in\mathcal{N}}$ and $y=(y_i)_{i\in\mathcal{N}}$ of $\R^{\mathcal{N}}$, denoted by $x \odot y$, is another element of $\R^{\mathcal{N}}$ defined by $(x \odot y)_i := x_iy_i$ for all $i\in\mathcal{N}$.
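To build intuition for the limits computed in this section, the following Python sketch (our illustration; the escaping sequence is an arbitrary choice, not taken from the text) evaluates $\tau_\infty(y^n)$ on $\R^2$ along a sequence whose first coordinate tends to $+\infty$ and whose second tends to $-\infty$; the pointwise limit is of the form $\topf(x_\mathcal{I}-\mu,-x_\mathcal{J}-\nu)$.

```python
def tau_inf(y, x):
    """tau_infty(y)(x) = ||x - y||_inf - ||y||_inf on R^2."""
    return max(abs(a - b) for a, b in zip(x, y)) - max(abs(b) for b in y)

# y^n = (n, -n): coordinate 1 escapes to +infty, coordinate 2 to -infty.
# For large n, tau_inf(y^n)(x) = max(n - x1, n + x2) - n = max(-x1, x2),
# i.e. a horofunction t(x_I - mu, -x_J - nu) with I = {2}, J = {1}, mu = nu = 0.
n = 10**8
for x in ((1.5, -0.25), (2.0, 3.0), (-4.0, -1.0)):
    assert tau_inf((n, -n), x) == max(-x[0], x[1])
```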
For every $x=(x_i)_{i\in\mathcal{N}}$ in $\R_{>0}^{\mathcal{N}}$ we shall denote by $x^{-1}$ the element of $\R_{>0}^{\mathcal{N}}$ defined by $(x^{-1})_i:=1/x_i$ for all $i\in\mathcal{N}$. Using the notations introduced above, it readily follows that \begin{align} \topf(\Exp(x)\odot\Exp(y)) &= \exp \topf(x+y) \mbox{ for all } x,y\in \R^{\mathcal{N}}, \label{t-ExpExp} \\ \topf(\Log(x)-\Log(y)) &= \log \topf(x\odot y^{-1}) \mbox{ for all } x,y\in\R_{>0}^{\mathcal{N}}. \label{t-Log-Log} \end{align} Note that $\norm{x}_\infty=\max\{\topf(x),\topf(-x)\}=\topf(x,-x)$ for all $x\in\R^{\mathcal{N}}$. Therefore, the mapping $y\mapsto\tau_{\infty}(y)$ becomes \begin{align*} \tau_{\infty}(y)(x) &=\norm{x-y}_\infty-\norm{y}_\infty \\ &=\topf(x-y,-x+y)-\norm{y}_\infty \\ &=\topf(x-y-\norm{y}_\infty\cht,-x+y-\norm{y}_\infty\cht). \end{align*} For any $x=(x_i)_{i\in\mathcal{N}}\in\R^{\mathcal{N}}$ and any nonempty subset $\mathcal{I}$ of $\mathcal{N}$ we shall denote $x_\mathcal{I}=(x_i)_{i\in\mathcal{I}}$. \begin{lem}\label{ThmAlsup} Let $\{y^{n}\}_{n\in\N}$ be a sequence in $\ell_\infty(\mathcal{N},\R)$ such that $\norm{y^{n}}_\infty \to \infty$, as $n\to \infty$. Then the sequence $\{\tau_{\infty}(y^{n})\}_{n\in\N}$ has a subsequence which converges pointwise to the function \[ x\mapsto h^{\mathcal{I},\mathcal{J}}_{\mu,\nu}(x):=\topf(x_\mathcal{I}-\mu,-x_\mathcal{J}-\nu), \] where $\emptyset\subseteq\mathcal{I},\mathcal{J} \subseteq \mathcal{N}$ with $\mathcal{I}\cap\mathcal{J}=\emptyset$, $\mathcal{I}\cup\mathcal{J}\neq\emptyset$, and $\mu\in\R_{+}^{\mathcal{I}},\,\nu\in \R_{+}^{\mathcal{J}}$ with $\botf(\mu,\nu)=0$. \end{lem} \begin{proof} For each $n$, define \begin{align*} u^{n}&=\Exp(-y^{n}-\norm{y^{n}}_\infty\cht),\\ v^{n}&=\Exp(y^{n}-\norm{y^{n}}_\infty\cht). \end{align*} It follows that $(u^{n},v^{n})\in \R_{+}^{\mathcal{N}}\times \R_{+}^{\mathcal{N}}$ with $\topf(u^{n},v^{n})=1$ for all $n$. 
Therefore, there exists a subsequence $\{(u^{n_k},v^{n_k})\}_{k}$ which converges, as $k\to\infty$, to $(u,v)\in \R_{+}^{\mathcal{N}}\times \R_{+}^{\mathcal{N}}$ with $\topf(u,v)=1$. Furthermore, note that $u^{n_k}\odot v^{n_k}=\Exp(-2\norm{y^{n_k}}_\infty\cht)$ for all $k$. Hence by taking the limit as $k\to\infty$, we obtain $u\odot v =0\cht$. Consequently, there exist $\emptyset\subseteq\mathcal{I},\mathcal{J}\subseteq\mathcal{N}$ with $\mathcal{I}\cap\mathcal{J}=\emptyset$ and $\mathcal{I}\cup\mathcal{J}\neq\emptyset$ such that $0<u_i\leq 1$ for all $i\in\mathcal{I}$, $0< v_j \leq 1$ for all $j\in\mathcal{J}$ with $\topf(u_\mathcal{I},v_\mathcal{J})=1$. Now, by letting $\mu=-\Log(u_\mathcal{I})$ and $\nu=-\Log(v_\mathcal{J})$, it follows that $\mu\in\R_{+}^{\mathcal{I}},\nu\in \R_{+}^{\mathcal{J}}$ with $\botf(\mu,\nu)=0$. Finally, let $x\in\ell_\infty(\mathcal{N},\R)$; then by (\ref{t-ExpExp}) we have \begin{align*} \lim_{k\to\infty} \tau_{\infty}(y^{n_{k}})(x) &=\lim_{k\to\infty}\topf(x-y^{n_k}-\norm{y^{n_k}}_\infty\cht,-x+y^{n_k}-\norm{y^{n_k}}_\infty\cht)\\ &=\lim_{k\to\infty}\log\topf(\Exp(x)\odot u^{n_k},\Exp(-x)\odot v^{n_k})\\ &=\log\topf(\Exp(x)\odot u,\Exp(-x)\odot v)\\ &=\log\topf(\Exp(x_\mathcal{I})\odot u_\mathcal{I},\Exp(-x_\mathcal{J})\odot v_\mathcal{J})\\ &=\topf(x_\mathcal{I}-\mu,-x_\mathcal{J}-\nu). \end{align*} \end{proof} Let $\overline\R$ denote the extended set of real numbers $\R\cup\{-\infty,\infty\}$. The top function $\topf$ and bottom function $\botf$ can be redefined on $\overline\R^{\mathcal{N}}$ according to the natural order in $\overline\R$. Let $\overline\R_+^{\mathcal{N}}$ denote the set \[ \overline\R_+^{\mathcal{N}}= \{x\in\overline\R^{\mathcal{N}}\mid 0\leq x_i\leq \infty,\;\forall i=1,...,N\} =[0,\infty]^{\mathcal{N}}. 
\] \begin{thm}\label{ThmBlsup} The horofunction boundary of the metric space $\ell_\infty(\mathcal{N},\R)$ is given by \begin{equation}\label{linfhoroset} \partial_H\ell_\infty(\mathcal{N},\R)= \left\lbrace x\mapsto h_{\overline\mu,\overline\nu}(x) \mathrel{\bigg\vert} \begin{aligned} &\overline\mu,\overline\nu\in\overline\R_+^{\mathcal{N}},\; \botf(\overline\mu,\overline\nu)=0,\;\\ &\overline\mu+\overline\nu=\infty\cht \end{aligned} \right\rbrace, \end{equation} where $h_{\overline\mu,\overline\nu}(x)=\topf(x-\overline\mu,-x-\overline\nu)$ for all $x\in\ell_\infty(\mathcal{N},\R)$. \end{thm} \begin{proof} Suppose that $h\in\partial_H\ell_\infty(\mathcal{N},\R)$. Then there exists a sequence $\{y^{n}\}_{n\in\N}$ in $\ell_\infty(\mathcal{N},\R)$ with $\norm{y^{n}}_\infty \to \infty$ such that $\tau_{\infty}(y^{n})$ converges pointwise to $h$ as $n \to \infty$. Let $x\in\ell_\infty(\mathcal{N},\R)$. By \Cref{ThmAlsup} there exist $\emptyset\subseteq\mathcal{I},\mathcal{J}\subseteq\mathcal{N}$ with $\mathcal{I}\cap\mathcal{J}=\emptyset$, $\mathcal{I}\cup\mathcal{J}\neq\emptyset$, and $\mu\in\R_{+}^{\mathcal{I}},\nu\in \R_{+}^{\mathcal{J}}$ with $\botf(\mu,\nu)=0$, such that for some subsequence $\{y^{n_k}\}_k$ we have \[ h(x)=\lim_{k\to\infty}\tau_{\infty}(y^{n_k})(x) =\topf(x_\mathcal{I}-\mu,-x_\mathcal{J}-\nu). \] Finally, by letting \begin{equation*} \overline\mu_{i}=\begin{cases} \mu_i, & i\in\mathcal{I}, \\ \infty, & i\in\mathcal{N}\setminus\mathcal{I}, \end{cases} \qquad \overline\nu_{i}=\begin{cases} \nu_i, & i\in\mathcal{J}, \\ \infty, & i\in\mathcal{N}\setminus\mathcal{J}, \end{cases} \end{equation*} we get $\overline\mu,\overline\nu\in\overline\R_+^{\mathcal{N}}$ with $\overline\mu+\overline\nu=\infty\cht$, and $\botf(\overline\mu,\overline\nu)=0$.
Hence, $h(x)=h_{\overline\mu,\overline\nu}(x)$ and so $\partial_H\ell_\infty(\mathcal{N},\R)$ is contained in the set on the right-hand side of (\ref{linfhoroset}). Now, we need to show that given $\overline\mu,\overline\nu\in\overline\R_+^{\mathcal{N}}$ with $\overline\mu+\overline\nu=\infty\cht$ and $\botf(\overline\mu,\overline\nu)=0$, the function $x\mapsto h_{\overline\mu,\overline\nu}(x)$ is a horofunction on $\ell_\infty(\mathcal{N},\R)$. First we show that it belongs to $\overline{\ell_\infty(\mathcal{N},\R)}^{H}$. Indeed, let $(y^{n})_n$ be the sequence in $\ell_\infty(\mathcal{N},\R)$ given by \[ y_i^{n}=\begin{cases} -n+\overline\mu_i, & \overline\mu_i <\infty\\ n-\overline\nu_i, & \overline\nu_i <\infty\\ 0, & \mbox{otherwise}. \end{cases} \] Let $x\in\ell_\infty(\mathcal{N},\R)$. Then \[ \tau_{\infty}(y^{n})(x)=\norm{x-y^{n}}_\infty-\norm{y^{n}}_\infty \xrightarrow[n \to \infty]{} \topf(x-\overline\mu,-x-\overline\nu) = h_{\overline\mu,\overline\nu}(x). \] It remains to show that $h_{\overline\mu,\overline\nu}$ is not an element of $\tau_{\infty}(\ell_\infty(\mathcal{N},\R))$. Suppose the contrary, so there exists $z\in\ell_\infty(\mathcal{N},\R)$ such that \[ h_{\overline\mu,\overline\nu}(x)=\topf(x-\overline\mu,-x-\overline\nu) =\tau_{\infty}(z)(x). \] For each $k$, define \[ x_i^{k}=\begin{cases} \overline\mu_i, & \overline\mu_i <\infty \\ -\overline\nu_i, & \overline\nu_i <\infty\\ -k, & \mbox{otherwise}. \end{cases} \] Then $h_{\overline\mu,\overline\nu}(x^{k})=\topf(x^{k}-\overline\mu,-x^{k}-\overline\nu)=0$ for all $k$. However, \[ \tau_{\infty}(z)(x^{k})=\norm{x^{k}-z}_\infty-\norm{z}_\infty \centernot{\xrightarrow[k \to \infty]{}0}, \] which is a contradiction. Therefore $h_{\overline\mu,\overline\nu}\in\partial_H\ell_\infty(\mathcal{N},\R)$ and so the other inclusion holds. 
\end{proof} \section{The horofunction boundary of $\ell_{\var}(\mathcal{N},\R)$}\label{horolvarN} We define the \textit{variation norm} on $\R^{\mathcal{N}}$ by \begin{equation}\label{varnorm} \norm{x}_\var := \topf(x)-\botf(x), \end{equation} where $\topf$ and $\botf$ are, respectively, the top and bottom functions introduced in \Cref{horolinfN}. In fact, $\norm{\cdot}_\var$ is a pseudo-norm on $\R^{\mathcal{N}}$, as $\norm{x}_\var = 0$ if and only if $x=\lambda\cht$ for some $\lambda\in\R$. Moreover, $\norm{x+\lambda\cht}_\var = \norm{x}_\var$ for all $x\in\R^{\mathcal{N}}$ and all $\lambda\in\R$. Hence $\norm{\cdot}_\var$ is a norm on the quotient vector space $\R^{\mathcal{N}}/\R\cht$. By (\ref{varnorm}), the mapping $y\mapsto\tau_\var(y)$ becomes \begin{align} \tau_\var(y)(x) &= \norm{x-y}_\var - \norm{y}_\var \nonumber \\ &= \topf(x-y)-\botf(x-y) - \topf(y)+\botf(y) \nonumber \\ &= \topf(x-y+\botf(y)\cht) - \botf(x-y+\topf(y)\cht) \label{emblvar}. \end{align} \begin{lem}\label{ThmAlvar} Let $N\geq 2$ and $\mathcal{N}=\{1,...,N\}$. Let $\{y^{n}\}_{n\in\N}$ be a sequence in $\ell_{\var}(\mathcal{N},\R)$ such that $\norm{y^{n}}_\var \to \infty$ as $n\to \infty$. Then $\{\tau_\var(y^{n})\}_{n\in\N}$ has a subsequence which converges pointwise to the function \[ x\mapsto h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}(x) :=\topf(x_\mathcal{I}-\mu) - \botf(x_\mathcal{J}+\nu), \] where $\emptyset\subsetneq\mathcal{I},\mathcal{J}\subsetneq\mathcal{N}$ with $\mathcal{I}\cap\mathcal{J}=\emptyset$, and $\mu\in\partial\R_{+}^{\mathcal{I}}$, $\nu\in \partial\R_{+}^{\mathcal{J}}$. \end{lem} \begin{proof} For each $n$, define \begin{align*} u^{n} &=\Exp(\botf(y^{n})\cht - y^{n}),\\ v^{n} &=\Exp(y^{n}-\topf(y^{n})\cht). \end{align*} It follows that $(u^{n},v^{n})\in \R_{+}^{\mathcal{N}}\times\R_{+}^{\mathcal{N}}$ with $\topf(u^{n})=1$, $\topf(v^{n})=1$ for all $n$.
Hence, there exists a subsequence $\{(u^{n_k},v^{n_k})\}_k$ which converges, as $k\to\infty$, to some $(u,v)\in \R_{+}^{\mathcal{N}}\times\R_{+}^{\mathcal{N}}$ with $\topf(u)=1$, $\topf(v)=1$. Let $x\in\ell_{\var}(\mathcal{N},\R)$. By (\ref{t-ExpExp}), it follows that for every $k$, \begin{enumerate}[\upshape (i)] \item $\log\topf(\Exp(x)\odot u^{n_{k}}) = \topf(x-y^{n_{k}}+\botf(y^{n_{k}})\cht)$, \item $\log\topf(\Exp(-x)\odot v^{n_{k}})=\topf(-x+y^{n_{k}}-\topf(y^{n_{k}})\cht)=-\botf(x-y^{n_{k}}+\topf(y^{n_{k}})\cht)$. \end{enumerate} Therefore, by (\ref{emblvar}) \begin{align*} \tau_\var(y^{n_{k}})(x) &=\topf(x-y^{n_{k}}+\botf(y^{n_{k}})\cht) - \botf(x-y^{n_{k}}+\topf(y^{n_{k}})\cht)\\ &=\log\topf(\Exp(x)\odot u^{n_{k}}) + \log\topf(\Exp(-x)\odot v^{n_{k}}) \\ &\xrightarrow[k \to \infty]{} \log\topf(\Exp(x)\odot u) + \log\topf(\Exp(-x)\odot v). \end{align*} Also note that $u^{n_{k}} \odot v^{n_{k}} = \exp(-\norm{y^{n_{k}}}_\var)\cht$ for all $k$. Hence, by taking the limit as $k\to\infty$ we obtain $u \odot v =0\cht$. Consequently, there exist $\emptyset\subsetneq\mathcal{I},\mathcal{J}\subsetneq\mathcal{N}$ with $\mathcal{I}\cap\mathcal{J}=\emptyset$ such that $0<u_i\leq 1$ for all $i\in \mathcal{I}$, and $0<v_j\leq 1$ for all $j\in \mathcal{J}$. Let $\mu=-\Log(u_\mathcal{I})$ and $\nu=-\Log(v_\mathcal{J})$. Then $\mu\in\partial\R_{+}^{\mathcal{I}}$ and $\nu\in \partial\R_{+}^{\mathcal{J}}$. Therefore, by (\ref{t-ExpExp}), it follows that \begin{align*} \lim_{k\to\infty} \tau_\var(y^{n_{k}})(x) &= \log\topf(\Exp(x)\odot u) + \log\topf(\Exp(-x)\odot v)\\ &=\log\topf(\Exp(x_\mathcal{I})\odot u_\mathcal{I}) + \log\topf(\Exp(-x_\mathcal{J})\odot v_\mathcal{J})\\ &=\topf(x_\mathcal{I}-\mu)+\topf(-x_\mathcal{J}-\nu)\\ &=\topf(x_\mathcal{I}-\mu)-\botf(x_\mathcal{J}+\nu)\\ &=h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}(x). \end{align*} \end{proof} \begin{thm}\label{ThmBlvar} Let $N\geq 2$ and $\mathcal{N}=\{1,...,N\}$. 
The horofunction boundary of the pseudo-normed space $\ell_{\var}(\mathcal{N},\R)$ is given by \begin{equation}\label{lvarhoroset} \partial_H\ell_{\var}(\mathcal{N},\R)= \left\lbrace x\mapsto h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}(x) \mathrel{\bigg\vert} \begin{aligned} &\emptyset\subsetneq\mathcal{I},\mathcal{J}\subsetneq\mathcal{N},\; \mathcal{I}\cap\mathcal{J}=\emptyset,\\ &\mu\in\partial\R_{+}^{\mathcal{I}},\;\nu\in \partial\R_{+}^{\mathcal{J}} \end{aligned} \right\rbrace, \end{equation} where $h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}(x)=\topf(x_\mathcal{I}-\mu) - \botf(x_\mathcal{J}+\nu)$ for all $x\in \ell_{\var}(\mathcal{N},\R)$. \end{thm} \begin{proof} Suppose that $h\in\partial_H\ell_{\var}(\mathcal{N},\R)$. Then there exists a sequence $\{y^{n}\}_{n\in\N}$ in $\ell_{\var}(\mathcal{N},\R)$ with $\norm{y^{n}}_\var \to \infty$ such that $\tau_\var(y^{n})$ converges pointwise to $h$ as $n\to\infty$. By \Cref{ThmAlvar}, there exist $\emptyset\subsetneq\mathcal{I},\mathcal{J}\subsetneq\mathcal{N}$ with $\mathcal{I}\cap\mathcal{J}=\emptyset$, and $\mu\in \partial\R_{+}^{\mathcal{I}},\nu\in \partial\R_{+}^{\mathcal{J}}$ such that there is a subsequence $\tau_\var(y^{n_k})$ that converges pointwise to $h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}$ as $k \to \infty $. Therefore $h=h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}$ and so $\partial_H\ell_{\var}(\mathcal{N},\R)$ is contained in the set on the right-hand side of (\ref{lvarhoroset}). Now, we need to show that for given $\emptyset\subsetneq\mathcal{I},\mathcal{J}\subsetneq\mathcal{N}$ with $\mathcal{I}\cap\mathcal{J}=\emptyset$, and $\mu\in \partial\R_{+}^{\mathcal{I}},\nu\in \partial\R_{+}^{\mathcal{J}}$, the function $h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}$ is a horofunction on $\ell_{\var}(\mathcal{N},\R)$. First we show that $h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}$ belongs to $\overline{\ell_{\var}(\mathcal{N},\R)}^{H}$. 
Let $(y^{n})_n$ be the sequence in $\ell_{\var}(\mathcal{N},\R)$ given by \[ y_{i}^{n}=\begin{cases} -n+\mu_i, & i\in\mathcal{I}, \\ n-\nu_i, & i\in\mathcal{J},\\ 0, & \mbox{ otherwise}. \end{cases} \] Let $x\in\ell_{\var}(\mathcal{N},\R)$. Then, by (\ref{emblvar}) we have \[ \tau_\var(y^{n})(x)=\norm{x-y^{n}}_\var-\norm{y^{n}}_\var \xrightarrow[n \to \infty]{} \topf(x_\mathcal{I}-\mu) - \botf(x_\mathcal{J}+\nu) = h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}(x). \] Hence $h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}$ is an element of $\overline{\ell_{\var}(\mathcal{N},\R)}^{H}$. It remains to show that $h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}$ is not in $\tau_\var(\ell_{\var}(\mathcal{N},\R))$. Suppose the contrary, so there exists $z\in\ell_{\var}(\mathcal{N},\R)$ such that $h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}=\tau_\var(z)$. For each $k$, define \[ x_{i}^{k}=\begin{cases} \mu_i, & i\in\mathcal{I}, \\ -\nu_i, & i\in\mathcal{J},\\ -k, & \mbox{ otherwise}. \end{cases} \] Then $h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}(x^{k})=0$ for all $k$. However, \[ \tau_\var(z)(x^{k})=\norm{x^{k}-z}_\var-\norm{z}_\var \centernot{\xrightarrow[k \to \infty]{}0}, \] which is a contradiction. Therefore $h_{\mu,\nu}^{\mathcal{I},\mathcal{J}}$ belongs to $\partial_H\ell_{\var}(\mathcal{N},\R)$. \end{proof} \subsection{Hilbert's projective metric on $\R_{>0}^{\mathcal{N}}$} We define \textit{Hilbert's projective metric} on $\R_{>0}^{\mathcal{N}}$ by \[ \dH(x,y):=\log\frac{\topf(x\odot y^{-1})}{\botf(x\odot y^{-1})} \] for all $x,y$ in $\R_{>0}^{\mathcal{N}}$ (see \cite{Birkhoff1957,Bushell1973_2,Bushell1973_1}). In fact, $\dH(\cdot,\cdot)$ is a pseudo-metric on $\R_{>0}^{\mathcal{N}}$. More precisely, $\dH(x,y)=0$ if and only if $x=\beta y$ for some $\beta > 0$. Also, $\dH(\alpha x,\beta y)=\dH(x,y)$ for all $\alpha,\beta>0$ and all $x,y\in\R_{>0}^{\mathcal{N}}$. 
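The pseudo-metric properties just stated, as well as the identity $\dH(x,y)=\norm{\Log x-\Log y}_\var$ established below, are easy to sanity-check numerically; in the following Python sketch the test vectors are arbitrary choices of ours.

```python
import math

def d_H(x, y):
    """Hilbert's projective metric on the open cone R_{>0}^N."""
    r = [xi / yi for xi, yi in zip(x, y)]
    return math.log(max(r) / min(r))

def var_norm(x):
    """Variation (pseudo-)norm t(x) - b(x)."""
    return max(x) - min(x)

x, y = (1.0, 2.0, 5.0), (3.0, 1.0, 4.0)
# d_H(x, y) = 0 when x is a positive multiple of y:
assert d_H(x, tuple(3.0 * t for t in x)) == 0.0
# Projective invariance: d_H(a*x, b*y) = d_H(x, y) for a, b > 0.
assert abs(d_H(tuple(7.0 * t for t in x), tuple(0.2 * t for t in y)) - d_H(x, y)) < 1e-9
# Log is an isometry into (R^N / R*1, ||.||_var):
log_diff = [math.log(a) - math.log(b) for a, b in zip(x, y)]
assert abs(d_H(x, y) - var_norm(log_diff)) < 1e-9
```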
By (\ref{t-Log-Log}) and (\ref{varnorm}), it follows that \begin{align} \dH(x,y)&=\log \topf(x\odot y^{-1}) - \log \botf(x\odot y^{-1}) \nonumber\\ &=\topf(\Log x - \Log y) - \botf(\Log x - \Log y) \nonumber\\ &=\norm{\Log x - \Log y}_\var. \label{LOGiso} \end{align} In other words, the mapping $\Log$ is an isometry of $(\R_{>0}^{\mathcal{N}},\dH)$ onto $\ell_{\var}(\mathcal{N},\R)$. See \cite{Nussbaum1988,Lemmens_Nussbaum2012} for more details. The horofunction boundary of Hilbert's projective metric space $(\R_{>0}^{\mathcal{N}},\dH)$ is completely described by combining (\ref{LOGiso}) and \Cref{ThmBlvar} as follows. \begin{cor}\label{horoHilbert} The horofunction boundary for Hilbert's projective metric $\dH$ on $\R_{>0}^{\mathcal{N}}$ is given by \[ \partial_H (\R_{>0}^{\mathcal{N}},\dH)= \left\lbrace x\mapsto h_{u,v}(x) \mathrel{\bigg\vert} \begin{aligned} &u,v\in \R_{+}^{\mathcal{N}},\; \topf(u)=1, \;\topf(v)=1,\\ &u\odot v=0\cht \end{aligned} \right\rbrace, \] where $h_{u,v}(x):=\log\topf(u \odot x)+\log\topf(v\odot x^{-1})$ for all $x\in\R_{>0}^{\mathcal{N}}$. \end{cor} \begin{proof} Let $x\in\R_{>0}^{\mathcal{N}}$. For every $y^{n}\in\R_{>0}^{\mathcal{N}}$ we have \begin{align*} \tau_{\dH}(y^{n})(x)&=\dH(x,y^{n})-\dH(\cht,y^{n}) \\ &=\norm{\Log(x)-\Log(y^{n})}_\var - \norm{\Log(\cht)-\Log(y^{n})}_\var\\ &=\tau_\var(\Log (y^{n}))(\Log(x)). \end{align*} Note that $\dH(\cht,y^{n})\to \infty$ as $n\to \infty$ if and only if $y^{n}\to\xi\in\partial\R_+^{\mathcal{N}}$ as $n\to \infty$. The latter can be expressed equivalently by $\norm{\Log y^{n}}_\var\to \infty$ as $n\to \infty$.
By \Cref{ThmBlvar}, it follows that $h_\xi\in\partial_H (\R_{>0}^{\mathcal{N}},\dH)$ is given by \[ h_\xi(x)=h_{\mu,\nu}^{\mathcal{I},\mathcal{J}} (\Log(x)) =\topf(\Log(x_\mathcal{I})-\mu)+\topf(-\Log(x_\mathcal{J})-\nu), \] where $\emptyset\subsetneq\mathcal{I},\mathcal{J}\subsetneq\mathcal{N}$ with $\mathcal{I}\cap\mathcal{J}=\emptyset$, and $\mu\in\partial\R_{+}^{\mathcal{I}},\;\nu\in \partial\R_{+}^{\mathcal{J}}$. Finally, consider $u=(u_i)_{i\in\mathcal{N}}$ and $v=(v_i)_{i\in\mathcal{N}}$ given by \begin{equation*} u_{i}=\begin{cases} \exp(-\mu_i), & i\in\mathcal{I}, \\ 0, & i\in\mathcal{N}\setminus\mathcal{I}, \end{cases} \qquad v_{j}=\begin{cases} \exp(-\nu_j), & j\in\mathcal{J}, \\ 0, & j\in\mathcal{N}\setminus\mathcal{J}. \end{cases} \end{equation*} Then $u,v\in \R_{+}^{\mathcal{N}}$ with $\topf(u)=1$, $\topf(v)=1$ and $u\odot v=0\cht$. Hence, by (\ref{t-Log-Log}) it follows that \[ h_\xi(x)=\log\topf(u \odot x)+\log\topf(v\odot x^{-1})=h_{u,v}(x). \] \end{proof} \subsection{Perron's Theorem} Let $N\geq 2$ and $\mathcal{N}=\{1,...,N\}$. Let $T=(T_{ij})\in\R^{\mathcal{N}\times \mathcal{N}}$ be a positive matrix, that is, $T_{ij}>0$ for all $i,j\in\mathcal{N}$. Perron's theorem states that $T$ fixes a unique point in $\R_{>0}^{\mathcal{N}}/\R_{>0}$. We give here a new proof by applying the horofunction boundary of Hilbert's projective metric on $\R_{>0}^{\mathcal{N}}$. Let $x\in\R_{>0}^{\mathcal{N}}$. For each $i,j\in\mathcal{N}$ we have \begin{align*} (Tx)_i &:=\sum_{k\in\mathcal{N}}T_{ik}x_k \leq \topf((T_{ik})_{k\in\mathcal{N}})\sum_{k\in\mathcal{N}}x_k, \\ (Tx)_j &:=\sum_{k\in\mathcal{N}}T_{jk}x_k \geq \botf((T_{jk})_{k\in\mathcal{N}})\sum_{k\in\mathcal{N}}x_k.
\end{align*} By \Cref{horoHilbert}, each element of the horofunction boundary $\partial_H (\R_{>0}^{\mathcal{N}},\dH)$ is of the form $h_{u,v}(x)=\log\topf(u \odot x)+\log\topf(v\odot x^{-1})$, where $u,v\in \R_{+}^{\mathcal{N}}$ with $\topf(u)=1$, $\topf(v)=1$ and $u\odot v=0\cht$. Thus, \begin{equation}\label{horo} h_{u,v}(Tx) \leq \log \max_{i,j}\bigg\{ u_i v_j \dfrac{\topf((T_{ik})_{k\in\mathcal{N}})}{\botf((T_{jk})_{k\in\mathcal{N}})}\bigg\}. \end{equation} Let $r_{u,v}$ denote the term on the right-hand side of (\ref{horo}). Therefore, for every $x\in\R_{>0}^{\mathcal{N}}$, the sequence $\{Tx,T^{2}x,T^{3}x,...\}$ stays within the horoball $\mathcal{H}(h_{u,v},r_{u,v})$. It is well-known \cite{Karlsson_Metz_Noskov2006,Walsh2008} that horoballs for Hilbert's projective metric $\dH$ are convex subsets. It is also well-known \cite{Nussbaum1988,Lemmens_Nussbaum2012} that the norm topology and $\dH$-topology are equivalent in $\R_{>0}^{\mathcal{N}}/\R_{>0}$. By combining these facts and (\ref{horo}), we readily obtain the following. \begin{lem}\label{compactcone} The set \[ C=\bigcap_{\substack{u,v\in\R_{+}^{\mathcal{N}} \\ \topf(u)=\topf(v)=1 \\ u\odot v=0\cht}} \mathcal{H}(h_{u,v},r_{u,v}) \] is a nonempty convex subset of $\R_{>0}^{\mathcal{N}}$. Furthermore, $C$ is compact in $\R_{>0}^{\mathcal{N}}/\R_{>0}$ and is invariant under the positive matrix $T$, that is, $TC\subset C$. \end{lem} We can now consider $T$ as a self-mapping of the compact metric space $(C,\dH)$. In order to prove that $T$ fixes a unique point in $C\subset\R_{>0}^{\mathcal{N}}/\R_{>0}$ we will need the following. \begin{lem}\label{contractive} Let $x\neq y$ in $\R_{>0}^{\mathcal{N}}/\R_{>0}$. Then $\dH(Tx,Ty) < \dH(x,y)$. \end{lem} \begin{proof} If $x\neq y$ in $\R_{>0}^{\mathcal{N}}/\R_{>0}$, then there exist $r,s\in\mathcal{N}$ with $r\neq s$ such that \[ \botf(x\odot y^{-1}) =\frac{x_r}{y_r}<\frac{x_s}{y_s} =\topf(x\odot y^{-1}).
\] On the other hand, for every $i$, \begin{align*} (Tx)_i &= T_{ir}x_r + T_{is}x_s + \sum_{\mathclap{\substack{k\in\mathcal{N} \\ k\neq r,s}}}T_{ik}x_k\\ & < T_{ir}y_r\frac{x_s}{y_s} + T_{is}\frac{x_s}{y_s}y_s + \sum_{\mathclap{\substack{k\in\mathcal{N} \\ k\neq r,s}}}T_{ik}\frac{x_k}{y_k}y_k \\ & \leq T_{ir}y_r\frac{x_s}{y_s} + T_{is}\frac{x_s}{y_s}y_s + \sum_{\mathclap{\substack{k\in\mathcal{N} \\ k\neq r,s}}}T_{ik}\frac{x_s}{y_s}y_k \\ & =\frac{x_s}{y_s}(Ty)_i. \end{align*} The above implies that $\topf(Tx\odot (Ty)^{-1}) < \topf(x\odot y^{-1})$. In a similar way we can show that $\botf(Tx\odot (Ty)^{-1})>\botf(x\odot y^{-1})$. Therefore, \[ \dfrac{\topf(Tx\odot (Ty)^{-1})}{\botf(Tx\odot (Ty)^{-1})} < \dfrac{\topf(x\odot y^{-1})}{\botf(x\odot y^{-1})}, \] and the result follows. \end{proof} \begin{rem} Samelson \cite{Samelson1957} gives a different proof of \Cref{contractive} by applying projective properties of cross-ratios, which appear in Hilbert's original definition of $\dH$. \end{rem} Finally, by combining \Cref{compactcone} and \Cref{contractive} and applying Edelstein's fixed-point theorem \cite{Edelstein1962} we obtain the following. \begin{cor}[Perron's theorem] There exists a unique point $x^{*}$ in $C\subset\R_{>0}^{\mathcal{N}}/\R_{>0}$ such that $T(x^{*})=x^{*}$. \end{cor} \subsection*{Acknowledgements} I would like to thank Prof. Anders Karlsson for suggesting the research topic, as well as for his constant guidance, support and encouragement. I would also like to thank Prof. Kalle Kyt\"ol\"a and Prof. Olavi Nevanlinna for several helpful comments and suggestions.
https://arxiv.org/abs/1706.03386
Extensions of partial cyclic orders, Euler numbers and multidimensional boustrophedons
We enumerate total cyclic orders on $\left\{1,\ldots,n\right\}$ where we prescribe the relative cyclic order of consecutive triples $(i,{i+1},{i+2})$, these integers being taken modulo $n$. In some cases, the problem reduces to the enumeration of descent classes of permutations, which is done via the boustrophedon construction. In other cases, we solve the question by introducing multidimensional versions of the boustrophedon. In particular we find new interpretations for the Euler up/down numbers and the Entringer numbers.
\section{Introduction} \label{sec:introduction} In this paper we enumerate some extensions of partial cyclic orders to total cyclic orders. In certain cases, this question is related to that of linear extensions of some posets. A (partial) order on a set $X$ is a reflexive, antisymmetric and transitive binary relation on this set. When the set $X$ possesses a partial order, a classical problem is the search for (and the enumeration of) linear extensions, i.e. total orders on $X$ that are compatible with this partial order. Szpilrajn~\cite{S30} proved, using the axiom of choice, that such a linear extension always exists. It is possible to find a linear extension of a given finite poset in linear time (cf.~\cite{CLR01}, Section~22.4). Brightwell and Winkler~\cite{BW91} proved that counting the number of linear extensions of a poset is $\#P$-complete. Another type of order one can put on a set $X$ is a cyclic order, which is a ternary relation $Z \subset X^3$ satisfying the following conditions: \begin{enumerate} \item $\forall x, y, z \in X$, $(x,y,z) \in Z \Rightarrow (y,z,x) \in Z$ (cyclicity). \item $\forall x,y,z \in X$, $(x,y,z) \in Z \Rightarrow (z,y,x) \not \in Z$ (asymmetry). \item $\forall x,y,z,u \in X$, $(x,y,z) \in Z$ and $(x,z,u) \in Z \Rightarrow (x,y,u) \in Z$ (transitivity). \end{enumerate} A cyclic order may be partial or total (in the latter case, for any triple $(x,y,z)$ of distinct elements, either $(x,y,z)\in Z$ or $(z,y,x)\in Z$). The problem of studying the total cyclic orders extending a given partial cyclic order is much harder than its linear counterpart and has been the subject of fewer investigations. Not every partial cyclic order admits an extension to a total cyclic order, as shown by Megiddo~\cite{M76}. Galil and Megiddo~\cite{GM77} proved that the problem of determining whether a given partial cyclic order admits an extension is NP-complete. For any $n\geq1$, denote by $[n]$ the set $\left\{1,\ldots,n\right\}$. 
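For concreteness, a total cyclic order on a finite set can be encoded by a circular arrangement (the elements read in the positive direction), and the three axioms are easy to check mechanically. A minimal Python sketch (the encoding and function names are ours, not notation from the paper; the sample arrangement is the one depicted in Figure~\ref{fig:cyclicorder4} below):

```python
from itertools import permutations

def cyclic_order_from_arrangement(arr):
    """Total cyclic order induced by reading the circular arrangement `arr`
    in the positive direction: (x,y,z) is in Z iff, starting at x and going
    around the circle, one meets y before z."""
    m = len(arr)
    pos = {v: p for p, v in enumerate(arr)}
    return {(x, y, z) for x, y, z in permutations(arr, 3)
            if (pos[y] - pos[x]) % m < (pos[z] - pos[x]) % m}

def is_total_cyclic_order(Z, X):
    """Check cyclicity, asymmetry, transitivity and totality on X."""
    cyclic = all((y, z, x) in Z for (x, y, z) in Z)
    asymmetric = all((z, y, x) not in Z for (x, y, z) in Z)
    transitive = all((x, y, u) in Z
                     for (x, y, z) in Z for (a, b, u) in Z
                     if (a, b) == (x, z) and u != y)
    total = all((x, y, z) in Z or (z, y, x) in Z
                for x, y, z in permutations(X, 3))
    return cyclic and asymmetric and transitive and total

Z = cyclic_order_from_arrangement((1, 3, 4, 2))
assert is_total_cyclic_order(Z, [1, 2, 3, 4])
assert (1, 3, 4) in Z and (3, 4, 2) in Z and (4, 2, 1) in Z and (2, 1, 4) in Z
```

Dropping a single triple from such a $Z$ already breaks the axioms, so the checker also rejects proper subsets.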
In this paper, we solve the question of the enumeration of total cyclic orders on $[n]$ which extend a partial cyclic order prescribing the relative order of any triple $(i,{i+1},{i+2})$. This is the cyclic counterpart of the classical linear extension problem considered in Subsection~\ref{subsec:descentclass}, where the relative order of every pair $(i,{i+1})$ is prescribed. We enumerate three types of total cyclic orders. \begin{definition} Fix $n\geq3$, $w=\epsilon_1\cdots\epsilon_{n-2}\in\left\{+,-\right\}^{n-2}$ and $\eta\in\left\{+,-\right\}$. \begin{itemize} \item The set $\mathcal{P}_w$ is the set of all total cyclic orders $Z$ on $[n]$ such that for any $1 \leq i \leq n-2$ verifying $\epsilon_i=+$ (resp. for any $1 \leq i \leq n-2$ verifying $\epsilon_i=-$), we have $(i,{i+1},{i+2})\in Z$ (resp. $({i+2},{i+1},i)\in Z$). \item The set $\mathcal{Q}_w^+$ (resp. $\mathcal{Q}_w^-$) is the set of all total cyclic orders $Z\in\mathcal{P}_w$ such that $({n-1},n,1)\in Z$ (resp. $(1,n,{n-1})\in Z$). \item The set $\mathcal{R}_w^{\eta,+}$ (resp. $\mathcal{R}_w^{\eta,-}$) is the set of all total cyclic orders $Z\in\mathcal{Q}_w^{\eta}$ such that $(n,1,2)\in Z$ (resp. $(2,1,n)\in Z$). \end{itemize} \end{definition} See Figure~\ref{fig:cyclicorder4} for an example. \begin{figure}[htpb] \centering \includegraphics[height=1.3in]{cyclicorder4.pdf} \caption{A total cyclic order $Z$ on $[n]$ may be represented pictorially by placing all the elements of $[n]$ on a circle, in such a way that $(i,j,k)\in Z$ if and only if starting at $i$ and turning in the positive direction, one sees $j$ before $k$. Here $n=4$ and $Z$ is the set formed by the union of the four triples $(1,3,4),(3,4,2),(4,2,1)$ and $(2,1,4)$ with the eight possible cyclic permutations of these four triples. The arrow indicates the positive direction of rotation. 
The cyclic order $Z$ belongs to the following sets: $\mathcal{P}_{-+}$, $\mathcal{Q}_{-+}^+$ and $\mathcal{R}_{-+}^{+-}$.} \label{fig:cyclicorder4} \end{figure} Our main results concern the enumeration of total cyclic orders of each of the three above types. It is not hard to see that each such enumeration question is equivalent to the enumeration of total cyclic orders extending some given partial cyclic order on $[n]$. We show that these enumeration questions can always be solved by using linear recurrences which extend the classical boustrophedon construction used to compute the Entringer numbers. As a consequence, this provides an algorithm for computing the cardinalities of the sets $\mathcal{P}_w$, $\mathcal{Q}_w^{\eta}$ and $\mathcal{R}_w^{\eta_1,\eta_2}$ which runs in polynomial time in the length of $w$, instead of the naive super-exponential algorithm which would consist in testing every total cyclic order to check if it belongs to one of these sets. \subsection*{Outline of the paper} In Section~\ref{sec:firstenumeration}, we state that the enumeration of each $\mathcal{P}_w$ is equivalent to the enumeration of some descent class of permutations. As a consequence, we obtain new interpretations for the Euler and Entringer numbers in terms of cyclic orders. We prove these statements in Section~\ref{sec:proof1} by producing a specific bijection between total cyclic orders on $[n+1]$ and permutations of $[n]$. In Section~\ref{sec:secondenumeration}, we briefly recall the classical boustrophedon construction and we explain how to extend it to higher dimensions to enumerate the classes $\mathcal{Q}_w^\eta$ and $\mathcal{R}_w^{\eta_1,\eta_2}$. The proof of these linear recurrence relations can be found in Section~\ref{sec:proof2}. 
We finish in Section~\ref{sec:conjecture} by formulating a conjecture regarding the asymptotic densities of the classes $\mathcal{Q}_w^\eta$ and $\mathcal{R}_w^{\eta_1,\eta_2}$ inside the class $\mathcal{P}_w$, when all the letters of $w$ are $+$, in the limit when the length of $w$ goes to infinity. \section{Enumeration of \texorpdfstring{$\mathcal{P}_w$}{Pw} and relation with the Euler and Entringer numbers} \label{sec:firstenumeration} The enumeration of the total cyclic orders in $\mathcal{P}_w$ will be shown to be equivalent to the enumeration of descent classes of permutations, which we now introduce. \subsection{Descent classes of permutations} \label{subsec:descentclass} For any $n\geq1$, denote by $\mathcal{S}_n$ the set of permutations of $[n]$. \begin{definition} For any $n\geq2$, the \emph{descent pattern} of a permutation $\sigma\in\mathcal{S}_n$ is the word $w=\epsilon_1\ldots\epsilon_{n-1}\in\left\{+,-\right\}^{n-1}$ such that for all $1\leq i\leq n-1$ \[ \epsilon_i=\begin{cases} + &\text{ if } \sigma(i+1)>\sigma(i), \\ - &\text{ if } \sigma(i+1)<\sigma(i). \end{cases} \] The \emph{descent class} $\mathcal{S}_w$ is defined to be the set of all permutations with descent pattern $w$. \end{definition} For example, the descent pattern of the permutation $\sigma$ whose one-line notation\footnote{Recall that the one-line notation for $\sigma\in\mathcal{S}_n$ is $\sigma(1) \sigma(2) \cdots \sigma(n)$.} is $1 5 3 2 4$ is $+--+$. To any word $w=\epsilon_1\ldots\epsilon_{n-1}\in\left\{+,-\right\}^{n-1}$, we associate the partial order $\prec_w$ on $[n]$ generated by the following relations: for any $1\leq i\leq n-1$, $i \prec_w {i+1}$ (resp. ${i+1} \prec_w i$) if $\epsilon_i=+$ (resp. $\epsilon_i=-$). Then the number of linear extensions of $\prec_w$ is $\#\mathcal{S}_w$, hence the enumeration of descent classes is also a problem of enumeration of linear extensions of a poset. The first formula for $\#\mathcal{S}_w$ was obtained by MacMahon~\cite{M01}. 
For further formulas for $\#\mathcal{S}_w$, see~\cite{V79} and the references therein. A special descent class is the class of \emph{up/down permutations}: this is the case when the letters of $w$ in odd (resp. even) positions are all $+$ (resp. $-$). Andr\'e~\cite{A79} showed that if we write $F(x)=\sec x + \tan x$, then the number of up/down permutations in $\mathcal{S}_n$ is the $n$-th Euler number $E_n:=F^{(n)}(0)$. One way to compute the Euler numbers is via the Entringer numbers $e_{n,i}$, which count the number of up/down permutations $\sigma\in\mathcal{S}_n$ such that $\sigma(n)=i$. These Entringer numbers satisfy linear recurrence relations corresponding to the boustrophedon construction~\cite{E66} (see Subsection~\ref{subsec:boustrophedon} for more details). \subsection{Connection with the enumeration of total cyclic orders} For any $n\geq 1$ and $w\in\left\{+,-\right\}^n$, one can express the number of total cyclic orders in $\mathcal{P}_w$ in terms of cardinalities of descent classes of permutations. Define the involution $i$ on $\bigsqcup_{n\geq1} \left\{+,-\right\}^n$ which flips all the signs at even locations: $i(\epsilon_1\cdots\epsilon_n):=\epsilon'_1\cdots\epsilon'_n$ with \[ \epsilon'_j=\begin{cases} \epsilon_j &\text{ if } j \text{ is odd}, \\ -\epsilon_j &\text{ if } j \text{ is even}. \end{cases} \] For example, $i(++--)=+--+$. Then the following holds: \begin{theorem} \label{thm:cyclicshape} For any integer $n\geq1$ and any word $w\in\left\{+,-\right\}^n$, \begin{equation} \#\mathcal{P}_w=\#\mathcal{S}_{i(w)}. \end{equation} \end{theorem} The proof of Theorem~\ref{thm:cyclicshape} can be found in Section~\ref{sec:proof1}. For any permutation $\sigma\in\mathcal{S}_{i(w)}$, the word $w$ is sometimes called the alternating descent pattern of $\sigma$; see for example~\cite{C08}. \begin{remark} We prove Theorem~\ref{thm:cyclicshape} by constructing a somewhat natural bijection between each $\mathcal{P}_w$ and $\mathcal{S}_{i(w)}$. 
It would have been easier to just show that the numbers $\#\mathcal{P}_w$ verify the same linear recurrence relations as the ones verified by the numbers $\#\mathcal{S}_{i(w)}$ (cf Subsection~\ref{subsec:boustrophedon}), but exhibiting a bijection between these sets provides a stronger connection between the classes $\mathcal{P}_w$ and descent classes of permutations. \end{remark} As a corollary, taking $w$ to be a word with all the letters that are $+$, we deduce a new interpretation for the Euler numbers: \begin{corollary} \label{cor:Eulernumbers} For any $n\geq1$, the Euler number $E_n$ is the number of total cyclic orders $Z$ on $[n+1]$ which verify $(i,{i+1},{i+2})\in Z$ for any $1\leq i \leq n-1$. \end{corollary} As a corollary of the proof of Theorem~\ref{thm:cyclicshape}, we also obtain a new interpretation for the Entringer numbers in terms of cyclic orders. Given a total cyclic order $Z$ on $[n]$, we define for any pair of distinct elements $(i,j)\in [n]$ the \emph{content} of the arc from $i$ to $j$ to be \begin{equation} c_Z(i,j):=\#\left\{x\in [n] | (i,x,j) \in Z \right\}. \end{equation} For example, $c_Z(3,4)=0$ for the total cyclic order $Z$ depicted in Figure~\ref{fig:cyclicorder4}. Then we have the following result: \begin{corollary} \label{cor:Entringernumbers} For any $1 \leq i \leq n$, the Entringer number $e_{n,i}$ is the number of total cyclic orders $Z$ on $[n+1]$ verifying the following two conditions: \begin{enumerate} \item for any $1\leq j \leq n-1$, we have $(j,{j+1},{j+2})\in Z$ ; \item the parameter $i$ satisfies \[ i= \begin{cases} 1+c_Z(n,{n+1}) &\text{ if } n \text{ is odd}, \\ 1+c_Z({n+1},n) &\text{ if } n \text{ is even}. 
\end{cases} \] \end{enumerate} \end{corollary} \section{Enumeration of \texorpdfstring{$\mathcal{Q}_w^{\eta}$}{Q,w,eta} and \texorpdfstring{$\mathcal{R}_w^{\eta_1,\eta_2}$}{R,w,eta1,eta2} and boustrophedons of higher dimensions} \label{sec:secondenumeration} \subsection{Boustrophedons} \label{subsec:boustrophedon} The classical way to compute the Euler and Entringer numbers is to set up a triangular array of numbers, called either the \emph{boustrophedon} or the Seidel-Entringer-Arnold triangle. Each line of the array is obtained by a linear transformation of the previous line, and the answer is read on the bottom line (see Figure~\ref{fig:boustrophedon}). \begin{figure}[htpb] \[ \begin{matrix} &&&&1&&&& \\ &&&&&&&& \\ &&&0&&1&&& \\ &&&&&&&& \\ &&1&&1&&0&& \\ &&&&&&&& \\ &0&&1&&2&&2& \\ &&&&&&&& \\ 5&&5&&4&&2&&0 \\ \end{matrix} \] \caption{Computation of the Entringer numbers $e_{n,i}$ for $1\leq i \leq n\leq5$ using the boustrophedon method. The number $e_{n,i}$ is the $i$-th number (counting from the left) on the $n$-th line. Here each entry of a line of even (resp. odd) index is obtained by taking the sum of the entries of the previous line that lie to its left (resp. right).} \label{fig:boustrophedon} \end{figure} Viennot~\cite{V79} extended this construction to obtain the cardinality of any descent class $\mathcal{S}_w$. Thus, by Theorem~\ref{thm:cyclicshape}, one can compute the cardinality of $\mathcal{P}_w$ by means of a linear inductive process. Unlike the case of $\mathcal{P}_w$, it seems that we cannot reduce the question of enumerating $\mathcal{Q}_w^{\eta}$ and $\mathcal{R}_w^{\eta_1,\eta_2}$ to the enumeration of some nice or known class of permutations. 
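For $\mathcal{P}_w$ itself, though, the boustrophedon rule of Figure~\ref{fig:boustrophedon}, together with Viennot's extension to arbitrary descent patterns, amounts to a one-line dynamic program over the rank of the last entry. A hedged Python sketch (our formulation; the name `desc_counts` is not notation from the paper):

```python
def desc_counts(pattern):
    """Count permutations of [n] with the given descent pattern (a string
    over '+','-' of length n-1), refined by the rank of the last entry:
    entry i (0-based) of the returned list counts those with sigma(n) = i+1."""
    row = [1]                          # the unique permutation of [1]
    for step in pattern:
        n = len(row) + 1
        if step == '+':                # ascent: sigma(n) > sigma(n-1)
            row = [sum(row[:i]) for i in range(n)]
        else:                          # descent: sigma(n) < sigma(n-1)
            row = [sum(row[i:]) for i in range(n)]
    return row

# The alternating pattern +-+- recovers the Entringer line of Figure 2,
assert desc_counts('+-+-') == [5, 5, 4, 2, 0]

# and, via Theorem 2.4, the row sums for the alternating patterns i(+^n)
# reproduce #P_{+^n}, i.e. the Euler numbers 1, 2, 5, 16, 61, 272.
alt = lambda n: ''.join('+-'[i % 2] for i in range(n))
assert [sum(desc_counts(alt(n))) for n in range(1, 7)] == [1, 2, 5, 16, 61, 272]
```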
For example, while $(\#\mathcal{P}_{+^n})_{n\geq1}$ (where $+^n$ denotes the word of length $n$ with all letters equal to $+$) is the sequence of Euler numbers by Corollary~\ref{cor:Eulernumbers}, the sequences $(\#\mathcal{Q}_{+^n}^+)_{n\geq1}$ and $(\#\mathcal{R}_{+^n}^{+,+})_{n\geq1}$ are currently absent from the On-Line Encyclopedia of Integer Sequences~\cite{OEIS} (see Table~\ref{tab:firstterms} for the first few values). \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & 1& 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline $\#\mathcal{P}_{+^n}$ & 1 & 2 & 5 & 16 & 61 & 272 & 1385 & 7936 & 50521 & 353792 \\ \hline $\#\mathcal{Q}_{+^n}^+$ & 1 & 1 & 3 & 11 & 38 & 169 & 899 & 5047 & 31914 & 226205 \\ \hline $\#\mathcal{R}_{+^n}^{+,+}$ & 1 & 1 & 2 & 9 & 31 & 128 & 708 & 4015 & 24865 & 177444 \\ \hline \end{tabular} \caption{The first ten values of the cardinalities of the sets $\mathcal{P}_{+^n}$, $\mathcal{Q}_{+^n}^+$ and $\mathcal{R}_{+^n}^{+,+}$. The first sequence corresponds to the Euler up/down numbers. We formulate in Section~\ref{sec:conjecture} a conjecture regarding the asymptotic ratio of the terms of these sequences.} \label{tab:firstterms} \end{table} However, we solve these enumeration questions by introducing linear recurrences that are higher-dimensional versions of the boustrophedon. The boustrophedon can be seen as the time-evolution of a sequence of numbers, where at time $t$ the sequence has length $t$ and the sequence at time $t+1$ is obtained from the sequence at time $t$ by a linear transformation. The enumeration of $\mathcal{Q}_w^{\eta}$ (resp. $\mathcal{R}_w^{\eta_1,\eta_2}$) is done via the time-evolution of triangles of numbers (resp. tetrahedra of numbers), where at time $t$ the triangles (resp. tetrahedra) have side-length $t$ and the triangles (resp. tetrahedra) at time $t+1$ are obtained from the triangles (resp. tetrahedra) at time $t$ by a linear transformation. 
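The naive super-exponential algorithm is nevertheless useful as a cross-check for small $n$. A hedged Python sketch (function names and encoding are ours) that reproduces the first columns of Table~\ref{tab:firstterms} by testing every total cyclic order, encoded as a circular arrangement starting at $1$:

```python
from itertools import permutations

def count(w, eta1=None, eta2=None):
    """Brute-force #P_w (default), #Q_w^{eta1} or #R_w^{eta1,eta2} for a
    sign word w; the underlying ground set is [m] with m = |w| + 2."""
    m = len(w) + 2
    total = 0
    for rest in permutations(range(2, m + 1)):
        pos = {v: p for p, v in enumerate((1,) + rest)}
        # (x,y,z) in Z iff, starting at x and going around, y comes before z.
        Z = lambda x, y, z: (pos[y] - pos[x]) % m < (pos[z] - pos[x]) % m
        ok = all(Z(i, i + 1, i + 2) if s == '+' else Z(i + 2, i + 1, i)
                 for i, s in enumerate(w, start=1))
        if ok and eta1 is not None:
            ok = Z(m - 1, m, 1) if eta1 == '+' else Z(1, m, m - 1)
        if ok and eta2 is not None:
            ok = Z(m, 1, 2) if eta2 == '+' else Z(2, 1, m)
        total += ok
    return total

# First values of Table 1 (columns n = 1..4):
assert [count('+' * n) for n in range(1, 5)] == [1, 2, 5, 16]
assert [count('+' * n, '+') for n in range(1, 5)] == [1, 1, 3, 11]
assert [count('+' * n, '+', '+') for n in range(1, 5)] == [1, 1, 2, 9]
```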
We will introduce a family of operators $\Phi_{a,b,c}$ and show that the recursions for $\mathcal{P}_w$, $\mathcal{Q}_w^{\eta}$ and $\mathcal{R}_w^{\eta_1,\eta_2}$ can all be expressed simply using these operators. Foata and Han~\cite{FH14} also studied the evolution of triangular arrays of numbers in order to enumerate Bi-Entringer numbers, which count up/down permutations $\sigma\in\mathcal{S}_n$ with prescribed values of $\sigma(1)$ and $\sigma(n)$. Our $\Phi$ operators can also be used to describe their recurrence formulas. \subsection{Enumeration of \texorpdfstring{$\mathcal{Q}_w^{\eta}$}{Q,w,eta}} In order to enumerate $\mathcal{Q}_w^{\eta}$ or $\mathcal{R}_w^{\eta_1,\eta_2}$, we need to refine these classes by fixing finitely many reference points and introducing a parameter to specify the content of each arc between two consecutive reference points. Given a total cyclic order $Z$ on a set $X$ and $(y_1,\ldots,y_p)$ a $p$-tuple of distinct elements in $X$, we define the \emph{multi-content} $\tilde{c}_Z(y_1,\ldots,y_p)$ to be the $p$-tuple \begin{equation} \tilde{c}_Z(y_1,\ldots,y_p):=(c_Z(y_1,y_2),c_Z(y_2,y_3),\ldots,c_Z(y_{p-1},y_p),c_Z(y_p,y_1)). \end{equation} For any $n\geq3, w\in\left\{+,-\right\}^{n-2}$ and nonnegative integers $i,j,k$ such that $i+j+k=n-3$, set \begin{align} f_{w,i,j,k}^+ &:= \# \left\{ Z\in\mathcal{Q}_w^+ | \tilde{c}_Z({n-1},n,1)=(i,j,k) \right\} \\ f_{w,i,j,k}^- &:= \# \left\{ Z\in\mathcal{Q}_w^- | \tilde{c}_Z(n,{n-1},1)=(i,j,k) \right\}. \end{align} These numbers provide a refined enumeration of $\mathcal{Q}_w^+$ and $\mathcal{Q}_w^-$ according to the content of each arc between the elements $n-1$, $n$ and $1$, playing the same role as the role played by the Entringer numbers for the Euler numbers. Just like the Entringer number, these numbers $f_{w,i,j,k}^+$ and $f_{w,i,j,k}^-$ satisfy some linear recurrence relations. If $w\in\left\{+,-\right\}^n$, we denote by $w+$ (resp. 
$w-$) the word on $n+1$ letters obtained by adding the letter $+$ (resp $-$) at the end of the word $w$. We then have the following recurrence relations: \begin{theorem} \label{thm:extendedcoefficients} For any $n\geq1$, $w\in\left\{+,-\right\}^n$ and nonnegative integers $i,j,k$ such that $i+j+k=n$, we have \begin{align} f_{w+,i,j,k}^+ &=\sum_{j'=0}^{j-1} f_{w,i+j-1-j',j',k}^- + \sum_{k'=0}^{k-1} f_{w,k-1-k',i+j,k'}^+ \label{eq:firstlinearrecurrence} \\ f_{w+,i,j,k}^- &=\sum_{i'=0}^{i-1} f_{w,i',j,i+k-1-i'}^+ \label{eq:secondlinearrecurrence} \\ f_{w-,i,j,k}^+ &=\sum_{i'=0}^{i-1} f_{w,i',i+j-1-i',k}^- \\ f_{w-,i,j,k}^- &=\sum_{k'=0}^{k-1} f_{w,i+k-1-k',j,k'}^+ + \sum_{j'=0}^{j-1} f_{w,j-1-j',j',i+k}^-. \end{align} \end{theorem} See Section~\ref{sec:proof2} for the proof of Theorem~\ref{thm:extendedcoefficients}. For fixed $n\geq1$, $w\in\left\{+,-\right\}^n$ and $\eta\in\left\{+,-\right\}$, the collection \[ T_w^{\eta}:=\left\{f_{w,i,j,k}^{\eta}|i+j+k=n-1\right\} \] forms a triangular array of numbers, and Theorem~\ref{thm:extendedcoefficients} gives a recursive way to compute the cardinality of any $\mathcal{Q}_w^{\eta}$: compute the evolution of the pair of triangular arrays $(T_{w'}^+,T_{w'}^-)$, for $w'$ a prefix of $w$ of increasing length, until one gets to $T_w^{\eta}$, then take the sum of all the entries of $T_w^{\eta}$ (see Figure~\ref{fig:triangleevolution}). The recurrences are initialized as follows: \begin{equation} \begin{array}{ccccc} f_{+,0,0,0}^+ & = & f_{-,0,0,0}^- &=& 1 \\ f_{-,0,0,0}^+ & = & f_{+,0,0,0}^- &=& 0. 
\\ \end{array} \end{equation} \begin{figure}[htpb] \[ \begin{array}{rccrc} T_+^+=& 1 && T_+^-= &0 \\ &&& \\ &&& \\ &&& \\ T_{++}^+= &\begin{matrix} & 0 & \\ &&\\ 0 & & 1\\ \end{matrix} &&T_{++}^-= &\begin{matrix} & 1 & \\ &&\\ 0 & & 0\\ \end{matrix} \\ &&& \\ &&& \\ &&& \\ T_{+++}^+= &\begin{matrix} &&0&&\\ &&&&\\ &1&&0&\\ &&&&\\ 1&&0&&1\\ \end{matrix} &&T_{+++}^-= &\begin{matrix} &&1&&\\ &&&&\\ &0&&1&\\ &&&&\\ 0&&0&&0\\ \end{matrix} \\ &&& \\ &&& \\ &&& \\ T_{++++}^+= &\begin{matrix} &&&0&&&\\ &&&&&&\\ &&1&&1&&\\ &&&&&&\\ &1&&2&&1&\\ &&&&&&\\ 1&&2&&1&&1\\ \end{matrix} &&T_{++++}^-= &\begin{matrix} &&&1&&&\\ &&&&&&\\ &&1&&1&&\\ &&&&&&\\ &1&&0&&1&\\ &&&&&&\\ 0&&0&&0&&0\\ \end{matrix} \end{array} \] \caption{Pairs of triangular arrays of numbers of growing size used to enumerate $\mathcal{Q}_{++++}^\eta$. The bottom, right and left sides of each triangle respectively correspond to $i=0$, $j=0$ and $k=0$. Taking the sum of the entries in $T_{++++}^+$, one obtains $\#\mathcal{Q}_{++++}^+=11$, as indicated in Table~\ref{tab:firstterms}.} \label{fig:triangleevolution} \end{figure} The linear recursions can be rewritten in a more compact way, by introducing the multivariate generating series of the numbers $f_{w,i,j,k}^+$ and $f_{w,i,j,k}^-$ and defining some linear operators acting on these generating functions. Fix $m\geq 2$ and $1\leq a,b,c \leq m$ to be three integers such that $b\neq c$ ($a$ may be equal to $b$ or $c$). We define the linear endomorphism $\Phi_{a,b,c}$ of $\mathbb{Z}[X_1,\ldots,X_m]$ by its action on monomials: for any $i_1,\ldots,i_m\geq0$, \begin{equation} \Phi_{a,b,c}\left(\prod_{\ell=1}^m X_\ell^{i_\ell} \right) = \left(\prod_{\substack{\ell=1 \\ \ell\notin\left\{b,c\right\}}}^m X_{\ell}^{i_\ell} \right)X_a^{i_b+1} \sum_{k=0}^{i_c} X_b^{i_c-k}X_c^{k}. \end{equation} Note that $\Phi_{a,b,c}$ maps any homogeneous polynomial of degree $d$ to a homogeneous polynomial of degree $d+1$. 
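The operator $\Phi_{a,b,c}$ acts monomial by monomial and is easy to implement on sparse polynomials, stored as dicts from exponent tuples to coefficients (our representation). Driving the evolution of Figure~\ref{fig:triangleevolution} with it, using the recurrences of Theorem~\ref{thm:extendedcoefficients} rewritten in operator form, reproduces the counts $\#\mathcal{Q}_{+^n}^+$ of Table~\ref{tab:firstterms}. A hedged sketch:

```python
from collections import defaultdict

def phi(a, b, c, poly):
    """Apply Phi_{a,b,c} to a sparse polynomial in Z[X_1,...,X_m]
    (a dict mapping exponent tuples to integer coefficients)."""
    out = defaultdict(int)
    for mono, coef in poly.items():
        ib, ic = mono[b - 1], mono[c - 1]
        base = list(mono)
        base[b - 1] = base[c - 1] = 0       # drop the X_b, X_c factors
        base[a - 1] += ib + 1               # multiply by X_a^{i_b + 1}
        for k in range(ic + 1):             # sum_{k=0}^{i_c} X_b^{i_c-k} X_c^k
            t = list(base)
            t[b - 1] += ic - k
            t[c - 1] += k
            out[tuple(t)] += coef
    return dict(out)

def add(p, q):
    return {m: p.get(m, 0) + q.get(m, 0) for m in set(p) | set(q)}

# Hand check: Phi_{2,2,1}(X_1) = X_1 X_2 + X_2^2 (degree 1 -> degree 2).
assert phi(2, 2, 1, {(1, 0, 0): 1}) == {(1, 1, 0): 1, (0, 2, 0): 1}

# Evolution of the pair (Q_w^+, Q_w^-) for w = +^n, n = 1..5:
qp, qm = {(0, 0, 0): 1}, {}
counts = [sum(qp.values())]
for _ in range(4):
    qp, qm = add(phi(2, 2, 1, qm), phi(3, 1, 2, qp)), phi(1, 1, 3, qp)
    counts.append(sum(qp.values()))
assert counts == [1, 1, 3, 11, 38]         # #Q_{+^n}^+ as in Table 1
```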
For any $n\geq1$, $w\in\left\{+,-\right\}^n$ and $\eta\in\left\{+,-\right\}$, we form the generating function \begin{equation} Q_w^{\eta}(X_1,X_2,X_3):=\sum_{\substack{i,j,k\geq0 \\ i+j+k=n-1}} f_{w,i,j,k}^\eta X_1^i X_2^j X_3^k. \end{equation} Then Theorem~\ref{thm:extendedcoefficients} can be rewritten as follows. \begin{theorem} \label{thm:extendedgf} For any $n\geq1$ and $w\in\left\{+,-\right\}^n$, we have \begin{align} Q_{w+}^+ &=\Phi_{2,2,1}(Q_w^-) + \Phi_{3,1,2}(Q_w^+) \\ Q_{w+}^- &=\Phi_{1,1,3}(Q_w^+) \\ Q_{w-}^+ &=\Phi_{1,1,2}(Q_w^-) \\ Q_{w-}^- &=\Phi_{3,3,1}(Q_w^+) + \Phi_{2,1,3}(Q_w^-). \end{align} \end{theorem} \begin{remark} \label{rem:classicalcase} The $\Phi$ operators can be used to enumerate $\mathcal{P}_w$, which by Theorem~\ref{thm:cyclicshape} corresponds to the classical boustrophedon and its extension by Viennot in~\cite{V79}. For any $n\geq3$, $w\in\left\{+,-\right\}^{n-2}$ and nonnegative integers $i,j$ such that $i+j=n-2$, set \begin{equation} e_{w,i,j}:= \begin{cases} \# \left\{Z\in\mathcal{P}_w | \tilde{c}_Z({n-1},n)=(i,j)\right\} &\text{ if } n \text{ is even} \\ \# \left\{Z\in\mathcal{P}_w | \tilde{c}_Z({n-1},n)=(j,i)\right\} &\text{ if } n \text{ is odd} \end{cases} \end{equation} and define the generating function \begin{equation} P_w(X_1,X_2):=\sum_{\substack{i,j\geq 0 \\ i+j=n-2}} e_{w,i,j} X_1^iX_2^j. \end{equation} Then we have \begin{align} P_{w+}&= \begin{cases} \Phi_{1,1,2}(P_w) &\text{ if } n \text{ is even} \\ \Phi_{2,2,1}(P_w) &\text{ if } n \text{ is odd} \end{cases} \\ P_{w-}&= \begin{cases} \Phi_{2,2,1}(P_w) &\text{ if } n \text{ is even} \\ \Phi_{1,1,2}(P_w) &\text{ if } n \text{ is odd} . \end{cases} \end{align} \end{remark} \begin{remark} \label{rem:foatahan} Foata and Han~\cite{FH14} introduced and studied Seidel triangle sequences, which are sequences $(A_n)_{n\geq1}$ of triangular arrays of numbers of growing size satisfying some linear recursion relations. 
One may reformulate their definition using the $\Phi$ operators as follows. Fix $H$ to be an arbitrary infinite triangular array of numbers (the $n$-th line of $H$ contains $n$ numbers). For any $n\geq1$, denote by $T_n(H)$ the triangular array of numbers of side-length $n$ defined as follows: the entries of $T_n(H)$ are constant along rows of constant $j$-coordinate and the last line of $T_n(H)$ is equal to the $n$-th line of $H$ (see Figure~\ref{fig:examplefh} for an example). \begin{figure}[htpb] \[ \begin{matrix} &&h^{(3)}_3&& \\ &&&& \\ &h^{(3)}_2&&h^{(3)}_3& \\ &&&& \\ h^{(3)}_1&&h^{(3)}_2&&h^{(3)}_3 \\ \end{matrix} \] \caption{Triangle $T_3(H)$ when the third line of $H$ is given from left to right by $h^{(3)}_1,h^{(3)}_2,h^{(3)}_3$.} \label{fig:examplefh} \end{figure} Then the Seidel triangle sequence built from $H$ is defined to be the sequence of triangular arrays $(A_n)_{n\geq1}$ such that $A_1=T_1(H)$ and for any $n\geq2$, $A_n=\Phi_{1,1,3}(A_{n-1})+T_n(H)$. Note that here we slightly abused the notation by applying $\Phi_{1,1,3}$ to a triangular array of numbers instead of its generating function. \end{remark} \subsection{Enumeration of \texorpdfstring{$\mathcal{R}_w^{\eta_1,\eta_2}$}{R,w,eta1,eta2}} In order to obtain linear recurrence relations to compute the cardinality of $\mathcal{R}_w^{\eta_1,\eta_2}$, we need to define six types of subsets, distinguished according to the cyclic order of the four elements $1,2,n-1$ and $n$. If $Z$ is a total cyclic order on a finite set $X$ and $y_1,\ldots,y_p$ are $p$ distinct elements of $X$ (with $p\geq 3$), we say that $(y_1,\ldots,y_p)$ forms a $Z$-chain if for any $3 \leq i \leq p$, $(y_1,y_{i-1},y_i)\in Z$. We denote by $\mathcal{C}_Z$ the set of $Z$-chains. 
Using this notation, we define for any $n\geq4$ and $w\in\left\{+,-\right\}^{n-2}$ the following sets: \begin{align} \mathcal{R}_w^{(1)}&:=\left\{ Z\in\mathcal{P}_w | (1,2,{n-1},n) \in \mathcal{C}_Z \right\} \\ \mathcal{R}_w^{(2)}&:=\left\{ Z\in\mathcal{P}_w | (1,{n-1},2,n) \in\mathcal{C}_Z \right\} \\ \mathcal{R}_w^{(3)}&:=\left\{ Z\in\mathcal{P}_w | (1,{n-1},n,2) \in \mathcal{C}_Z \right\} \\ \mathcal{R}_w^{(4)}&:=\left\{ Z\in\mathcal{P}_w | (1,2,n,{n-1}) \in \mathcal{C}_Z \right\} \\ \mathcal{R}_w^{(5)}&:=\left\{ Z\in\mathcal{P}_w | (1,n,2,{n-1}) \in \mathcal{C}_Z \right\} \\ \mathcal{R}_w^{(6)}&:=\left\{ Z\in\mathcal{P}_w | (1,n,{n-1},2) \in \mathcal{C}_Z \right\}. \end{align} Note that for any $n\geq2$ and $w\in\left\{+,-\right\}^n$, \begin{align} \mathcal{R}_w^{+,+}&=\mathcal{R}_w^{(1)} \sqcup \mathcal{R}_w^{(2)} \\ \mathcal{R}_w^{+,-}&=\mathcal{R}_w^{(3)} \\ \mathcal{R}_w^{-,+}&=\mathcal{R}_w^{(4)} \\ \mathcal{R}_w^{-,-}&=\mathcal{R}_w^{(5)} \sqcup \mathcal{R}_w^{(6)}. \end{align} While the enumeration of $\mathcal{Q}_w^+$ and $\mathcal{Q}_w^-$ was performed by refining according to the multi-content associated with the reference points $1$, $n-1$ and $n$, we enumerate each class $\mathcal{R}_w^{(\alpha)}$ with $1\leq \alpha \leq 6$ by refining according to the multi-content associated with the four reference points $1$, $2$, $n-1$ and $n$. 
For any $n\geq4$ and nonnegative integers $i,j,k,\ell$ such that $i+j+k+\ell=n-4$, set: \begin{align} g_{w,i,j,k,\ell}^{(1)} &:= \# \left\{ Z\in\mathcal{R}_w^{(1)} | \tilde{c}_Z({n-1},n,1,2)=(i,j,k,\ell) \right\} \\ g_{w,i,j,k,\ell}^{(2)} &:= \# \left\{ Z\in\mathcal{R}_w^{(2)} | \tilde{c}_Z({n-1},2,n,1)=(i,\ell,j,k) \right\} \\ g_{w,i,j,k,\ell}^{(3)} &:= \# \left\{ Z\in\mathcal{R}_w^{(3)} | \tilde{c}_Z(n,2,1,{n-1})=(i,j,k,\ell) \right\} \\ g_{w,i,j,k,\ell}^{(4)} &:= \# \left\{ Z\in\mathcal{R}_w^{(4)} | \tilde{c}_Z(n,{n-1},1,2)=(i,j,k,\ell) \right\} \\ g_{w,i,j,k,\ell}^{(5)} &:= \# \left\{ Z\in\mathcal{R}_w^{(5)} | \tilde{c}_Z(n,2,{n-1},1)=(i,\ell,j,k) \right\} \\ g_{w,i,j,k,\ell}^{(6)} &:= \# \left\{ Z\in\mathcal{R}_w^{(6)} | \tilde{c}_Z({n-1},2,1,n)=(i,j,k,\ell) \right\}. \end{align} For any $1\leq \alpha \leq 6$, $n\geq2$ and $w\in\left\{+,-\right\}^n$, the collection \[ \left\{g_{w,i,j,k,\ell}^{(\alpha)} | i+j+k+\ell = n-2\right\} \] forms a tetrahedral array of numbers. We provide linear recurrence formulas for these arrays directly in the language of generating functions (we skip the less compact formulation in terms of sequences indexed by $i,j,k,\ell$). For any $1 \leq \alpha \leq 6$, $n\geq2$ and $w\in\left\{+,-\right\}^n$, we set \begin{equation} R_w^{(\alpha)}(X_1,X_2,X_3,X_4):=\sum_{\substack{i,j,k,\ell\geq0 \\ i+j+k+\ell=n-2}} g_{w,i,j,k,\ell}^{(\alpha)} X_1^i X_2^j X_3^k X_4^\ell. 
\end{equation} Then we can express the recurrence relations among the $R_w^{(\alpha)}$'s by using the operators $\Phi_{a,b,c}$ defined above: \begin{theorem} \label{thm:fullgf} For any $n\geq2$ and $w\in\left\{+,-\right\}^n$, we have \begin{align} R_{w+}^{(1)}&=\Phi_{4,1,2}(R_{w}^{(1)} ) + \Phi_{3,1,2}(R_{w}^{(2)} ) + \Phi_{2,2,1}(R_{w}^{(4)} ) \\ R_{w+}^{(2)}&=\Phi_{3,4,2}(R_{w}^{(3)} ) + \Phi_{2,2,4}(R_{w}^{(5)} ) \\ R_{w+}^{(3)}&=\Phi_{3,4,1}(R_{w}^{(3)} ) + \Phi_{2,4,1}(R_{w}^{(5)} ) + \Phi_{1,1,4}(R_{w}^{(6)} ) \\ R_{w+}^{(4)}&=\Phi_{1,1,4}(R_{w}^{(1)} ) \\ R_{w+}^{(5)}&=\Phi_{4,1,3}(R_{w}^{(1)} ) + \Phi_{1,1,3}(R_{w}^{(2)} ) \\ R_{w+}^{(6)}&=\Phi_{4,4,3}(R_{w}^{(3)} ) \\ R_{w-}^{(1)}&=\Phi_{1,1,2}(R_{w}^{(4)} ) \\ R_{w-}^{(2)}&=\Phi_{1,4,2}(R_{w}^{(6)} ) + \Phi_{4,4,2}(R_{w}^{(5)} ) \\ R_{w-}^{(3)}&=\Phi_{4,4,1}(R_{w}^{(6)} ) \\ R_{w-}^{(4)}&=\Phi_{2,1,4}(R_{w}^{(4)} ) + \Phi_{3,1,4}(R_{w}^{(2)} ) + \Phi_{4,4,1}(R_{w}^{(1)} ) \\ R_{w-}^{(5)}&= \Phi_{2,1,3}(R_{w}^{(4)} ) + \Phi_{3,3,1}(R_{w}^{(2)} ) \\ R_{w-}^{(6)}&=\Phi_{1,4,3}(R_{w}^{(6)} ) + \Phi_{2,4,3}(R_{w}^{(5)} ) + \Phi_{3,3,4}(R_{w}^{(3)} ). \end{align} \end{theorem} The recurrences are initialized as follows: \[ R_{++}^{(1)}=R_{--}^{(2)}=R_{-+}^{(3)}=R_{+-}^{(4)}=R_{++}^{(5)}=R_{--}^{(6)}=1 \] and all other $R_{\epsilon_1,\epsilon_2}^{(\alpha)}$ are set to $0$. See Section~\ref{sec:proof2} for the proof of Theorem~\ref{thm:fullgf}. \section{Proof of Theorem~\ref{thm:cyclicshape}} \label{sec:proof1} \subsection{Proof idea} A total cyclic order $Z$ on $[n]$ can be viewed as a way to place the numbers $1$ to $n$ on a circle, such as on Figure~\ref{fig:cyclicorder4}, where only the relative positions of the numbers matter and are prescribed by $Z$. A permutation in $\mathcal{S}_n$ can be viewed as a diagram of dots where only the relative positions of the dots matter, such as on Figure~\ref{fig:permutation4}. 
\begin{figure}[htpb] \centering \includegraphics[height=.8in]{permutation4.pdf} \caption{This dot diagram represents the permutation $2431$, because when reading the dots from left to right, the first dot is the second lowest, the second is the highest, the third is the second highest and the fourth is the lowest.} \label{fig:permutation4} \end{figure} To prove Theorem~\ref{thm:cyclicshape}, we will construct, for every $n\geq1$, a bijection from total cyclic orders on $[n+1]$ to permutations of $[n]$, and then show that this bijection maps each $\mathcal{P}_w$ to $\mathcal{S}_{i(w)}$. This bijection is constructed by induction on $n$. We grow the total cyclic order by adding the numbers $1,2,\ldots,n+1$ one after the other. Simultaneously we grow the dot diagram, by adding the $i$-th dot at the same time as we add the number $i+1$ to the cyclic order. At the beginning of the process, the cycle only contains the numbers $1$ and $2$ and the dot diagram only contains a single dot. Assume we already have $j$ elements in the cycle with $j\geq2$ and $j-1$ dots in the diagram. We divide the space in which the dot diagram lives into $j$ regions separated by horizontal boundaries, with one horizontal boundary at the height of each currently present dot. The regions are numbered from $0$ to $j-1$, either from bottom to top if $j$ is odd or from top to bottom if $j$ is even. See Figure~\ref{fig:slices} for an example. \begin{figure}[htbp] \centering \subfloat[Slicing the plane into four numbered regions when $j=4$.]{\label{fig:slices3}\includegraphics[height=1.5in]{slices3.pdf}} \hspace{\stretch{1}} \subfloat[Slicing the plane into five numbered regions when $j=5$.]{\label{fig:slices4}\includegraphics[height=1.5in]{slices4.pdf}} \caption{Slicing the plane and numbering the corresponding regions when $j$ is even or odd.} \label{fig:slices} \end{figure} Let $k$ denote the number of elements which are on the arc from $j$ to $j+1$ at the time when the cycle contains $j+1$ elements. 
We add the $j$-th dot in the region number $k$ of the dot diagram, to the right of all the dots already present. See Figure~\ref{fig:dotaddition} for an example. \begin{figure}[htbp] \centering \subfloat[Adding the number $5$ to a cycle already containing $4$ elements.]{\label{fig:adding5}\includegraphics[height=2in]{adding5.pdf}} \hspace{\stretch{1}} \subfloat[The dot diagram corresponds to the permutation $321$ before the addition of the fourth dot (represented here by the hollow dot). After adding the fourth dot, it corresponds to the permutation $4312$.]{\label{fig:addingdot}\includegraphics[height=1.2in]{addingdot.pdf}} \caption{In this example, $j=4$ and $k=2$, because the numbers $2$ and $1$ are on the arc between $4$ and the newly added number $5$. Thus the newly added dot in the dot diagram is in the region number $2$.} \label{fig:dotaddition} \end{figure} In the remainder of this section, we first define in Subsection~\ref{subsec:cyclicdescentclass} a one-to-one correspondence between each $\mathcal{P}_w$ and a certain class of cyclic permutations on $[n]$, which we call the cyclic descent class $\mathcal{C}_w$; then we formalize in Subsection~\ref{subsec:bijection} the map described above, and finally in Subsections~\ref{subsec:lemma1} and~\ref{subsec:lemma2} we show that this map is a bijection from each $\mathcal{P}_w$ to $\mathcal{S}_{i(w)}$. \subsection{Cyclic descent classes} \label{subsec:cyclicdescentclass} There is a one-to-one correspondence between permutations $\sigma\in\mathcal{S}_n$ and total orders $<_{\sigma}$ on $[n]$, by setting $i <_{\sigma} j$ if and only if $\sigma(i)<\sigma(j)$. As observed in Subsection~\ref{subsec:descentclass}, this reduces the problem of enumerating certain linear extensions of posets on $[n]$ to enumerating descent classes of permutations in $\mathcal{S}_n$. 
By analogy with the linear setting, we define a one-to-one correspondence between cyclic permutations $\pi\in\mathcal{C}_n$ and total cyclic orders $Z$ on $[n]$. For any $n\geq3$, let $\mathcal{Z}_n$ denote the set of all total cyclic orders on $[n]$. Define the map \[ \zeta:\bigsqcup_{n\geq3} \mathcal{C}_n \rightarrow \bigsqcup_{n\geq3} \mathcal{Z}_n \] where for any $\pi\in\mathcal{C}_n$, $\zeta(\pi)$ is defined as follows: $({i_1},{i_2},{i_3})\in\zeta(\pi)$ if and only if $i_2=\pi^k(i_1)$ and $i_3=\pi^{\ell}(i_1)$ for some $1 \leq k< \ell \leq n-1$. In words, for the total cyclic order $\zeta(\pi)$, ${\pi(i)}$ is the next element after $i$ when turning in the positive direction. For example, if $n=4$ and $\pi=[1,3,4,2]$, then $\zeta(\pi)$ is the total cyclic order depicted in Figure~\ref{fig:cyclicorder4}. The map $\zeta$ is clearly a bijection from $\mathcal{C}_n$ to $\mathcal{Z}_n$ for any $n\geq3$. Continuing the analogy with the linear setting, we define cyclic descent classes. \begin{definition} Fix $n\geq3$. The \emph{cyclic descent pattern of a cyclic permutation} $\pi\in\mathcal{C}_n$ is the word $w=\epsilon_1\ldots\epsilon_{n-2}\in\left\{+,-\right\}^{n-2}$ such that for all $1\leq i\leq n-2$ \[ \epsilon_i=\begin{cases} + \text{ if } (i,{i+1},{i+2}) \in \zeta(\pi), \\ - \text{ if } ({i+2},{i+1},i) \in \zeta(\pi). \end{cases} \] The \emph{cyclic descent class} $\mathcal{C}_w$ is defined to be the set of all cyclic permutations with cyclic descent pattern $w$. \end{definition} Clearly $\zeta$ maps $\mathcal{C}_w$ to $\mathcal{P}_w$. Thus, to prove Theorem~\ref{thm:cyclicshape}, it suffices to show that for any $n\geq1$ and $w\in\left\{+,-\right\}^n$, we have $\# \mathcal{C}_w = \#\mathcal{S}_{i(w)}$. \subsection{Construction of a bijection} \label{subsec:bijection} Before describing the construction of $F$, we need some preliminary remarks about notations for permutations. 
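As an illustrative aside (a Python sketch of our own, not part of the paper), the map $\zeta$ can be implemented directly from its definition and checked on the running example $\pi=[1,3,4,2]$:

```python
def zeta(cycle):
    """Total cyclic order zeta(pi) of a cyclic permutation, given as a
    cycle list: [1, 3, 4, 2] means 1 -> 3 -> 4 -> 2 -> 1."""
    n = len(cycle)
    succ = {cycle[i]: cycle[(i + 1) % n] for i in range(n)}
    triples = set()
    for i1 in cycle:
        # orbit[k] = pi^k(i1)
        orbit = [i1]
        for _ in range(n - 1):
            orbit.append(succ[orbit[-1]])
        # (i1, pi^k(i1), pi^l(i1)) for 1 <= k < l <= n-1
        for k in range(1, n - 1):
            for l in range(k + 1, n):
                triples.add((i1, orbit[k], orbit[l]))
    return triples
```

For $n=4$ each of the $4$ starting points contributes $\binom{3}{2}=3$ triples, so $\zeta(\pi)$ has $12$ elements, and, e.g., $(1,3,4)\in\zeta([1,3,4,2])$ while $(1,2,3)\notin\zeta([1,3,4,2])$.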
For any $n\geq3$, we define the map $\partial:\mathcal{C}_n\rightarrow\mathcal{C}_{n-1}$ as follows: for any $\pi\in\mathcal{C}_n$, $\partial(\pi)$ is the cyclic permutation such that for any $1 \leq i \leq n-1$, \[ \partial(\pi)(i)= \begin{cases} \pi(i) &\text{ if } \pi(i) \neq n, \\ \pi(n) &\text{ if } \pi(i)=n. \end{cases} \] In words, $\partial$ deletes the largest number from the cycle. For example, $\partial([1,5,3,4,2])=[1,3,4,2]$. For any $n\geq2$, we define the map $d:\mathcal{S}_n\rightarrow\mathcal{S}_{n-1}$ as follows: for any $\sigma\in\mathcal{S}_n$, $d(\sigma)$ is the permutation such that for any $1 \leq i \leq n-1$, \[ d(\sigma)(i)= \begin{cases} \sigma(i) &\text{ if } \sigma(i)<\sigma(n), \\ \sigma(i)-1 &\text{ if } \sigma(i)>\sigma(n). \end{cases} \] For example, using the one-line notation for permutations, we have $d(1 4 2 5 3)=1 3 2 4$. Note that \[ D: \begin{array}{ccc} \mathcal{S}_n & \longrightarrow & \mathcal{S}_{n-1}\times\left\{1,\ldots,n\right\} \\ && \\ \sigma & \longmapsto & (d(\sigma),\sigma(n)) \end{array} \] is a bijection. We will define the bijection $F$ by induction on the size of the cyclic permutation $\pi$. There is a unique cyclic permutation $\pi_0\in\mathcal{C}_2$. We set $F(\pi_0)$ to be the unique permutation in $\mathcal{S}_1$. Fix $n \geq 2$. Assume we have defined how $F$ acts on cyclic permutations in $\mathcal{C}_n$. Fix now $\pi\in\mathcal{C}_{n+1}$ and write $\tilde{\sigma}=F(\partial(\pi))$. Set \begin{equation} \beta:=c_{\zeta(\pi)}(n,{n+1}) \end{equation} and write \begin{equation} \label{eq:defalpha} \alpha= \begin{cases} n-\beta \text{ if } n \text{ is even}, \\ 1+\beta \text{ if } n \text{ is odd}. \end{cases} \end{equation} Then we set \begin{equation} F(\pi)=D^{-1}(\tilde{\sigma},\alpha). \end{equation} For example, $F([1,5,3,4,2])=4 3 1 2$. Theorem~\ref{thm:cyclicshape} immediately follows from the next two lemmas. 
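As a hedged illustration (a Python sketch of our own, not from the paper), the inductive definition of $F$ can be implemented verbatim and checked against the example $F([1,5,3,4,2])=4312$ above, as well as the values $F([1,2,3])=12$ and $F([3,2,1])=21$ used in the proof of Lemma~\ref{lem:Phiactiononclasses}:

```python
def arc_count(cycle, a, b):
    """c_Z(a, b): number of elements strictly between a and b on the
    arc from a to b, following the direction of the cycle."""
    j = (cycle.index(a) + 1) % len(cycle)
    count = 0
    while cycle[j] != b:
        count += 1
        j = (j + 1) % len(cycle)
    return count

def delete_max(cycle):
    """The map partial: delete the largest element of the cycle."""
    return [x for x in cycle if x != max(cycle)]

def D_inv(sigma, alpha):
    """Inverse of D: rebuild tau in S_n from (d(tau), tau(n)) = (sigma, alpha)."""
    return [v if v < alpha else v + 1 for v in sigma] + [alpha]

def F(cycle):
    """The bijection F from C_{n+1} (cycles as lists) to S_n (one-line lists)."""
    if len(cycle) == 2:
        return [1]  # unique permutation in S_1
    n = len(cycle) - 1
    sigma_tilde = F(delete_max(cycle))
    beta = arc_count(cycle, n, n + 1)
    alpha = n - beta if n % 2 == 0 else 1 + beta
    return D_inv(sigma_tilde, alpha)
```

The recursion mirrors the construction step by step: delete $n+1$, compute $\beta=c_{\zeta(\pi)}(n,n+1)$, convert it to $\alpha$ according to the parity of $n$, and append via $D^{-1}$.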
\begin{lemma} \label{lem:Phibijection} The map $F$ induces for all $n\geq1$ a bijection from $\mathcal{C}_{n+1}$ to $\mathcal{S}_n$. \end{lemma} \begin{lemma} \label{lem:Phiactiononclasses} For any $n\geq1$ and $w\in\left\{+,-\right\}^n$, $F(\mathcal{C}_w)\subset\mathcal{S}_{i(w)}$. \end{lemma} Corollary~\ref{cor:Entringernumbers} about Entringer numbers follows from formula~\eqref{eq:defalpha} defining $\alpha$. \subsection{Proof of Lemma~\ref{lem:Phibijection}.} \label{subsec:lemma1} The sets $\mathcal{C}_{n+1}$ and $\mathcal{S}_n$ both have cardinality $n!$. We will show by induction on $n\geq1$ that $F:\mathcal{C}_{n+1}\rightarrow\mathcal{S}_n$ is surjective. It is clear for $n=1$. Pick $\sigma\in\mathcal{S}_n$ with $n\geq 2$. By induction hypothesis, we can find $\tilde{\pi}\in\mathcal{C}_n$ such that $F(\tilde{\pi})=d(\sigma)$. We set $\beta$ to be $n-\sigma(n)$ (resp. $\sigma(n)-1$) if $n$ is even (resp. odd). For any $1\leq i \leq n+1$, define \[ \pi(i)= \begin{cases} n+1 &\text{ if } i=\tilde{\pi}^{\beta}(n), \\ \tilde{\pi}^{\beta+1}(n) &\text{ if } i=n+1, \\ \tilde{\pi}(i) &\text{ otherwise}. \end{cases} \] It is not hard to check that $\pi\in\mathcal{C}_{n+1}$. In words, $\pi$ is obtained from $\tilde{\pi}$ by adding the element $n+1$ to the cycle in such a way that $c_{\zeta(\pi)}(n,{n+1})=\beta$. Thus by construction of $F$ we have $F(\pi)=\sigma$, which concludes the proof of Lemma~\ref{lem:Phibijection}. \subsection{Proof of Lemma~\ref{lem:Phiactiononclasses}.} \label{subsec:lemma2} We will show by induction on $n\geq1$ that for any $w\in\left\{+,-\right\}^n$ and $\pi\in\mathcal{C}_w$, $F(\pi)\in\mathcal{S}_{i(w)}$. Since $F([1,2,3])=(1 2)$ and $F([3,2,1])=(2 1)$, it is true for $n=1$. Define the map $\delta:\left\{+,-\right\}^n\rightarrow\left\{+,-\right\}^{n-1}$ by $\delta(\epsilon_1\cdots \epsilon_n):=\epsilon_1\cdots \epsilon_{n-1}$. Pick $n\geq2$, $w\in\left\{+,-\right\}^n$ and $\pi\in\mathcal{C}_w$. 
Set $\sigma=F(\pi)$, $\tilde{\pi}=\partial(\pi)$ and $\tilde{\sigma}=F(\tilde{\pi})$. Then $\tilde{\pi}\in\mathcal{C}_{\delta(w)}$ and by induction hypothesis $\tilde{\sigma}\in\mathcal{S}_{i(\delta(w))}$. Observing that the maps $i$ and $\delta$ commute and that $\tilde{\sigma}=d(\sigma)$, we get that $d(\sigma)\in\mathcal{S}_{\delta(i(w))}$. For any $1 \leq i \leq n-1$, we have $\sigma(i)<\sigma(i+1)$ if and only if $d(\sigma)(i)<d(\sigma)(i+1)$, so in order to conclude that $\sigma\in\mathcal{S}_{i(w)}$, it suffices to show that $\sigma(n)<\sigma(n+1)$ (resp. $\sigma(n)>\sigma(n+1)$) if the last letter of $i(w)$ is $+$ (resp. $-$). Set \begin{align} \beta_1 &= c_{\zeta(\tilde{\pi})}(n,{n+1}) \\ \beta_2 &= c_{\zeta(\pi)}({n+1},{n+2}). \end{align} Observe that \begin{equation} \beta_1=\#\left\{1\leq i \leq n-1 | (n,i,{n+1})\in \zeta(\pi) \right\}. \end{equation} We now need to distinguish according to the parity of $n$ and the last letter $\epsilon_n$ of $w$. Assume first $n$ is odd. Observe that $\sigma(n+1)=n+1-\beta_2$ and \begin{equation} \sigma(n)= \begin{cases} \beta_1 +1 &\text{ if } \beta_1 +1<\sigma(n+1), \\ \beta_1 +2 &\text{ if } \beta_1 +1 \geq\sigma(n+1). \end{cases} \end{equation} In particular, $\sigma(n)<\sigma(n+1)$ if and only if $\beta_1+\beta_2 \leq n-1$. If $\epsilon_n=+$ (resp. $\epsilon_n=-$), then $(n,{n+1},{n+2})\in \zeta(\pi)$ (resp. $({n+2},{n+1},n)\in \zeta(\pi)$) thus $\beta_1+\beta_2\leq n-1$ (resp. $\beta_1+\beta_2\geq n$) hence $\sigma(n)<\sigma(n+1)$ (resp. $\sigma(n)>\sigma(n+1)$), which concludes the proof in this case since the last letter of $i(w)$ is $\epsilon_n$. Assume now that $n$ is even. This time $\sigma(n+1)=1+\beta_2$ and \begin{equation} \sigma(n)= \begin{cases} n-\beta_1 &\text{ if } n-\beta_1<\sigma(n+1), \\ n-\beta_1+1 &\text{ if } n-\beta_1 \geq \sigma(n+1). \end{cases} \end{equation} In particular, $\sigma(n)<\sigma(n+1)$ if and only if $\beta_1+\beta_2 \geq n$. 
Just as in the case of $n$ odd, we find that if $\epsilon_n=+$ (resp. $\epsilon_n=-$), then $\sigma(n)>\sigma(n+1)$ (resp. $\sigma(n)<\sigma(n+1)$), which concludes the proof in this case since the last letter of $i(w)$ is $-\epsilon_n$. \section{Proof of the recursion formulas for \texorpdfstring{$\mathcal{Q}_w^{\eta}$}{Q,w,eta} and \texorpdfstring{$\mathcal{R}_w^{(\alpha)}$}{R,w,alpha}} \label{sec:proof2} We first show how to derive linear recursion relations for the triangles and tetrahedra of numbers, then how to translate them to the language of generating functions. \subsection{Recursion relations for triangles and tetrahedra of numbers} The proof of the linear recursion formulas for $\mathcal{Q}_w^{\eta}$ (resp. for $\mathcal{R}_w^{(\alpha)}$) goes along the following lines: take a total cyclic order $Z$ on $[n+1]$ where the elements $n$, ${n+1}$ and $1$ (resp. $n,{n+1},1$ and $2$) form a prescribed chain in $Z$, obtain a total cyclic order $\tilde{Z}$ on $[n]$ by deleting the element ${n+1}$ and look at the possible chains formed in $\tilde{Z}$ by ${n-1}$, $n$ and $1$ (resp. ${n-1},n,1$ and $2$). Define the map \begin{equation} \bar{\partial}: \begin{array}{ccc} \bigsqcup_{n\geq4} \mathcal{Z}_n & \longrightarrow & \bigsqcup_{n\geq3} \mathcal{Z}_n \\ &&\\ Z & \longmapsto & \zeta \circ \partial \circ \zeta^{-1}(Z) \end{array}. \end{equation} In words, for any $n\geq3$ and $Z\in\mathcal{Z}_{n+1}$, $\bar{\partial}(Z)$ is the total order on $[n]$ obtained by deleting from $Z$ all the triples involving ${n+1}$. For any $n\geq3$, $w\in\left\{+,-\right\}^n$ and $i,j,k\geq0$ with $i+j+k=n-3$, set \begin{align} \mathcal{Q}_{w,i,j,k}^+ &:=\left\{ Z\in\mathcal{Q}_w^+ | \tilde{c}_Z({n-1},n,1)=(i,j,k) \right\} \\ \mathcal{Q}_{w,i,j,k}^- &:= \left\{ Z\in\mathcal{Q}_w^- | \tilde{c}_Z(n,{n-1},1)=(i,j,k) \right\}. 
\end{align} \begin{lemma} \label{lem:removing} For any $n\geq1$, $w\in\left\{+,-\right\}^n$ and $i,j,k\geq0$ such that $i+j+k=n$, the restriction of the map $\bar{\partial}$ to $\mathcal{Q}_{w+,i,j,k}^+$ is a bijection from $\mathcal{Q}_{w+,i,j,k}^+$ to the set \[ \tilde{\mathcal{Q}}_{w,i,j,k}:=\bigsqcup_{j'=0}^{j-1} \mathcal{Q}_{w,i+j-1-j',j',k}^- \sqcup \bigsqcup_{k'=0}^{k-1} \mathcal{Q}_{w,k-1-k',i+j,k'}^+. \] \end{lemma} \begin{proof} Fix $n\geq3$, $w\in\left\{+,-\right\}^{n-2}$ and $Z\in\mathcal{Q}_{w+,i,j,k}^+$. We first show that $\bar{\partial} Z$ lies in $\tilde{\mathcal{Q}}_{w,i,j,k}$. Since $\bar{\partial} Z$ lies in $\mathcal{Z}_n$ and the relative order of any triple $i,{i+1},{i+2}$ with $1\leq i\leq n-2$ is prescribed by the word $w$, $\bar{\partial} Z$ must lie either in some $\mathcal{Q}_{w,i',j',k'}^-$ or in some $\mathcal{Q}_{w,i',j',k'}^+$. Assume first that $\bar{\partial} Z$ lies in some $\mathcal{Q}_{w,i',j',k'}^-$ (see Figure~\ref{fig:finduction}). \begin{figure}[htpb] \centering \includegraphics[height=2.5in]{finduction.pdf} \caption{A cyclic order $Z$ in some $\mathcal{Q}_{w+,i,j,k}^+$ with $w\in\left\{+,-\right\}^{n-2}$ and such that $\bar{\partial}Z\in\mathcal{Q}_{w,i',j',k'}^-$. The small dashed circle indicates that ${n+1}$ is not counted by $i'$.} \label{fig:finduction} \end{figure} Then $({n-1},n,{n+1})\in Z$ and $({n-1},1,n)\in\bar{\partial} Z\subset Z$ thus by transitivity, it follows that $({n-1},1,n,{n+1})$ forms a chain in $Z$. From this we deduce the following: \begin{gather*} k'=c_{\bar{\partial}Z}(1,n)=c_Z(1,n)=k ; \\ j'=c_{\bar{\partial}Z}({n-1},1)=c_Z({n-1},1) < c_Z({n+1},1)=j ; \\ i'+j'=c_{\bar{\partial}Z}(n,{n-1})+c_{\bar{\partial}Z}({n-1},1)=c_Z(n,{n-1})-1+c_Z({n-1},1) \\ =c_Z(n,{n+1})+c_Z({n+1},1)-1=i+j-1. \end{gather*} Thus \[ \bar{\partial}Z\in\bigsqcup_{j'=0}^{j-1} \mathcal{Q}_{w,i+j-1-j',j',k}^-. 
\] Similarly, in the case when $\bar{\partial} Z$ lies in some $\mathcal{Q}_{w,i',j',k'}^+$, we obtain that \[ \bar{\partial}Z\in\bigsqcup_{k'=0}^{k-1} \mathcal{Q}_{w,k-1-k',i+j,k'}^+. \] For any $\tilde{Z}\in \tilde{\mathcal{Q}}_{w,i,j,k}$, let $\iota(\tilde{Z})$ be the total order on $[n+1]$ obtained from $\tilde{Z}$ by adding ${n+1}$ on the circle in such a way that \[c_{\iota(\tilde{Z})}(n,{n+1})=i. \] More precisely, writing $\tilde{\pi}=\zeta^{-1}(\tilde{Z})$, set $\iota(\tilde{Z}):=\zeta(\pi)$, where $\pi$ is defined by \[ \pi(\ell)= \begin{cases} n+1 &\text{ if } \ell=\tilde{\pi}^{i}(n), \\ \tilde{\pi}^{i+1}(n) &\text{ if } \ell=n+1, \\ \tilde{\pi}(\ell) &\text{ otherwise}. \end{cases} \] It is not hard to see that $\iota$ is a left- and right-inverse to the restriction of the map $\bar{\partial}$ to $\mathcal{Q}_{w+,i,j,k}^+$, which is thus a bijection. \end{proof} The recurrence relation~\eqref{eq:firstlinearrecurrence} from Theorem~\ref{thm:extendedcoefficients} immediately follows from Lemma~\ref{lem:removing}. The other statements of this Theorem are proved using statements analogous to Lemma~\ref{lem:removing}. For the recurrence relation~\eqref{eq:secondlinearrecurrence}, one should observe that the image under $\bar{\partial}$ of $\mathcal{Q}_{w+,i,j,k}^-$ must lie in some $\mathcal{Q}_{w,i',j',k'}^+$, using transitivity: if $Z\in\mathcal{Q}_{w+,i,j,k}^-$, then $(n,1,{n+1})\in Z$ and $(n,{n+1},{n-1})\in Z$ thus $(n,1,{n-1}) \in Z$ and $(n,1,{n-1}) \in \bar{\partial}Z$. Finally, one obtains similarly linear recursion formulas for the $g_{w,i,j,k,\ell}^{(\alpha)}$. \subsection{Recursion relations for generating functions} Before we translate the recursion relations for triangles and tetrahedra of numbers into recursion relations for generating functions, we need to introduce some notation for certain subsets of indices. Fix $m\geq2$ and $1\leq a,b,c\leq m$ such that $b\neq c$. 
For any $\underline{i}=(i_1,\ldots,i_m)$ an $m$-tuple of nonnegative integers, we define the set $I_{a,b,c}(\underline{i})$ as follows. \begin{enumerate} \item If $a=b$, then $I_{a,a,c}(\underline{i})$ is the set of all $m$-tuples of nonnegative integers $\underline{i'}=(i'_1,\ldots,i'_m)$ verifying the following conditions: \begin{itemize} \item $0 \leq i'_a \leq i_a-1$ ; \item $i'_c=i_c+i_a-1-i'_a$ ; \item $i'_\ell=i_\ell$ if $\ell\notin\left\{a,c\right\}$. \end{itemize} \item If $a \neq b$, then $I_{a,b,c}(\underline{i})$ is the set of all $m$-tuples of nonnegative integers $\underline{i'}=(i'_1,\ldots,i'_m)$ verifying the following conditions: \begin{itemize} \item $0 \leq i'_a \leq i_a-1$ ; \item $i'_b=i_a-1-i'_a$ ; \item $i'_c=i_b+i_c$ ; \item $i'_\ell=i_\ell$ if $\ell\notin\left\{a,b,c\right\}$. \end{itemize} \end{enumerate} For any $n\geq0$, denote by $J_n$ the set of all $m$-tuples of nonnegative integers $\underline{i}=(i_1,\ldots,i_m)$ such that $i_1 + \cdots + i_m=n$. In order to translate the recursion formulas for triangular or tetrahedral arrays of numbers into recursion formulas for generating functions, we apply the following lemma: \begin{lemma} \label{lem:generatingfunction} Fix $n\geq0$ and let $(\lambda_{\underline{i'}})_{\underline{i'}\in J_n}$ and $(\mu_{\underline{i}})_{\underline{i}\in J_{n+1}}$ be two collections of numbers indexed by $m$-tuples of nonnegative integers. Assume that for any $\underline{i}\in J_{n+1}$, we have \begin{equation} \mu_{\underline{i}}=\sum_{\underline{i'}\in I_{a,b,c}(\underline{i})} \lambda_{\underline{i'}} \end{equation} Then we have \begin{equation} \Phi_{a,b,c}\left(\sum_{\underline{i'}\in J_n}\lambda_{\underline{i'}}\prod_{\ell=1}^m X_{\ell}^{i'_\ell}\right)=\sum_{\underline{i}\in J_{n+1}}\mu_{\underline{i}}\prod_{\ell=1}^m X_{\ell}^{i_\ell}. \end{equation} \end{lemma} The proof of Lemma~\ref{lem:generatingfunction} is a straightforward computation. 
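As a hedged illustration (a Python sketch of our own), the index sets $I_{a,b,c}(\underline{i})$ can be generated directly from the two cases of the definition; a useful sanity check is that every $\underline{i'}\in I_{a,b,c}(\underline{i})$ has weight one less than $\underline{i}$, reflecting that $I_{a,b,c}(\underline{i})\subseteq J_n$ whenever $\underline{i}\in J_{n+1}$:

```python
def index_set(a, b, c, i):
    """I_{a,b,c}(i): the m-tuples i' summed over in Lemma `generatingfunction`.
    Positions a, b, c are 1-based; i is an m-tuple of nonnegative integers."""
    result = []
    for ia in range(i[a - 1]):  # 0 <= i'_a <= i_a - 1
        ip = list(i)
        ip[a - 1] = ia
        if a == b:
            # i'_c = i_c + i_a - 1 - i'_a
            ip[c - 1] = i[c - 1] + i[a - 1] - 1 - ia
        else:
            # i'_b = i_a - 1 - i'_a  and  i'_c = i_b + i_c
            ip[b - 1] = i[a - 1] - 1 - ia
            ip[c - 1] = i[b - 1] + i[c - 1]
        result.append(tuple(ip))
    return result
```

For instance, $I_{1,1,2}\big((2,1,0)\big)=\{(0,2,0),(1,1,0)\}$ and $I_{1,2,3}\big((2,5,7)\big)=\{(0,1,12),(1,0,12)\}$.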
\section{A conjecture on asymptotic densities} \label{sec:conjecture} Observe that for any $n\geq1$ and $w\in\left\{+,-\right\}^n$, we have \[ \mathcal{P}_w=\bigsqcup_{\alpha=1}^6 \mathcal{R}_w^{(\alpha)}. \] One may investigate the density of each $\mathcal{R}_w^{(\alpha)}$ inside $\mathcal{P}_w$. In particular, in the case when $w=+^n$ (in which case $\#\mathcal{P}_w$ is an Euler number), denote by \[ p_n^{(\alpha)}:=\frac{\#\mathcal{R}_{+^n}^{(\alpha)}}{\#\mathcal{P}_{+^n}} \] the density of each $\mathcal{R}_{+^n}^{(\alpha)}$ inside $\mathcal{P}_{+^n}$ for any $1 \leq \alpha \leq 6$. We conjecture the following regarding the asymptotics of $p_n^{(\alpha)}$: \begin{conjecture} \label{conj:asymptoticdensities} For any $1 \leq \alpha \leq 6$, $p_{\infty}^{(\alpha)}:=\lim_{n\rightarrow\infty} p_n^{(\alpha)}$ exists and is given by: \begin{gather} p_{\infty}^{(1)}=\frac{1}{\pi} \\ p_{\infty}^{(2)}=p_{\infty}^{(5)}=\frac{1}{2}-\frac{1}{\pi} \\ p_{\infty}^{(3)}=p_{\infty}^{(4)}=\frac{2}{\pi}-\frac{1}{2} \\ p_{\infty}^{(6)}=1-\frac{3}{\pi} \end{gather} \end{conjecture} The conjecture is supported by numerical simulations, whereby we computed each $p_n^{(\alpha)}$ for $n \leq 50$ and $1 \leq \alpha \leq 6$ and we observed the convergence to the predicted values, with a precision of $10^{-8}$ when $n=50$. If true, Conjecture~\ref{conj:asymptoticdensities} would imply that as $n$ goes to infinity, the asymptotic density of $\mathcal{Q}_{+^n}^+$ (resp. $\mathcal{R}_{+^n}^{+,+}$) inside $\mathcal{P}_{+^n}$ equals $\frac{2}{\pi}$ (resp. $\frac{1}{2}$). \paragraph*{Acknowledgements} We thank Arvind Ayyer, Dan Betea, Wenjie Fang, Matthieu Josuat-Verg\`es and Bastien Mallein for fruitful discussions. We also acknowledge the support and hospitality of the Institut Henri Poincar\'e, where this work was initiated during the program on ``Combinatorics and interactions'', as well as the support of the Fondation Simone et Cino Del Duca. 
Finally we thank the anonymous referee for providing advice to improve the exposition. \label{Bibliography} \bibliographystyle{plain}
https://arxiv.org/abs/0907.4652
The stability of the Kronecker products of Schur functions
In the late 1930's Murnaghan discovered the existence of a stabilization phenomenon for the Kronecker product of Schur functions. For n sufficiently large, the values of the Kronecker coefficients appearing in the product of two Schur functions of degree n do not depend on the first part of the indexing partitions, but only on the values of their remaining parts. We compute the exact value of n for which all the coefficients of a Kronecker product of Schur functions stabilize. We also compute two new bounds for the stabilization of a sequence of coefficients and show that they improve existing bounds of M. Brion and E. Vallejo.
\section*{Introduction} The understanding of the \emph{Kronecker coefficients of the symmetric group} (the multiplicities appearing when the tensor product of two irreducible representations of the symmetric group is decomposed into irreducibles; equivalently, the structural constants for the Kronecker product of symmetric functions in the Schur basis) is a longstanding open problem. Richard Stanley writes ``One of the main problems in the combinatorial representation theory of the symmetric group is to obtain a combinatorial interpretation for the Kronecker coefficients'' \cite{Stanley:vol2}. It is also a source of new challenges such as the problem of describing the set of non--zero Kronecker coefficients \cite{Oeding}, a problem inherited from quantum information theory \cite{Klyachko,Christandl:Harrow:Mitchison}. Or proving that the positivity of a Kronecker coefficient can be decided in polynomial time, a problem posed by Mulmuley at the heart of his Geometric Complexity Theory \cite{GCT6}. The present work is part of a series of articles that study another family of nonnegative constants, the \emph{reduced Kronecker coefficients} $\overline{g}_{\mu,\nu}^{\lambda}$, as a way to gain understanding about the Kronecker coefficients ${g}_{\mu,\nu}^{\lambda}$, \cite{Briand:Orellana:Rosas:SH,Briand:Orellana:Rosas:Chamber}. In \cite{Briand:Orellana:Rosas:Chamber}, we obtained the first explicit piecewise quasipolynomial description of a non--trivial family of Kronecker coefficients, the Kronecker coefficients indexed by two two--row shapes. This new description allowed us to test several conjectures of Mulmuley. As a result, we found a counterexample \cite{Briand:Orellana:Rosas:SH} for the strong version of his SH conjecture \cite{GCT6} on the behavior of the Kronecker coefficients under stretching of its indices. 
The starting point of the investigation presented in this paper is a remarkable stability property for the Kronecker products of Schur functions discovered by Murnaghan \cite{Murnaghan:1938, Murnaghan:1955}. This property is best shown on an example, that will be followed by a precise statement. Denote the Kronecker product of $s_{\lambda}$ and $s_{\beta}$ by $s_{\lambda} \ast s_{\beta}$. Then, {\begin{align*} s_{2,2}\ast s_{2,2}&= s_{4}+ s_{1, 1, 1, 1}+ \phantom{2} s_{2, 2} \\ s_{3,2}\ast s_{3,2}&= s_{5}+ s_{2, 1, 1, 1} +\phantom{2}s_{3, 2}+ s_{4, 1}+s_{3, 1, 1} +\phantom{2}s_{2, 2, 1} \\ s_{4,2}\ast s_{4,2} &= s_6 +s_{3, 1, 1, 1} +2s_{4, 2} +s_{5, 1} +s_{4, 1, 1} +2s_{3, 2, 1}+s_{2, 2, 2} \\ s_{5,2}\ast s_{5,2} &= s_{7}+ s_{4, 1, 1, 1}+ 2s_{5, 2}+ s_{6, 1}+s_{5, 1, 1} +2s_{4, 2, 1} +s_{3, 2, 2} +s_{4, 3} +s_{3, 3, 1} \\ s_{6,2}\ast s_{6,2} &= s_8 +s_{5, 1, 1, 1}+ 2s_{6, 2} +s_{7, 1}+ s_{6, 1, 1} +2s_{5, 2, 1} +s_{4, 2, 2} +s_{5, 3} +s_{4, 3, 1} +s_{4, 4}\\ s_{7,2}\ast s_{7,2} &= s_{9} +s_{6, 1, 1, 1} +2s_{7, 2} +s_{8, 1} +s_{7, 1, 1} +2s_{6, 2, 1} +s_{5, 2, 2} +s_{6, 3} +s_{5, 3, 1} +s_{5, 4}\\ s_{\bullet,2}\ast s_{\bullet,2} &= s_{\bullet} +s_{\bullet, 1, 1, 1} +2s_{\bullet, 2} +s_{\bullet, 1}+s_{\bullet, 1, 1} +2s_{\bullet, 2, 1} +s_{\bullet, 2, 2} +s_{\bullet, 3} +s_{\bullet, 3, 1} +s_{\bullet, 4} \end{align*} } Given a partition $\alpha=(\alpha_1,\ldots,\alpha_k)$ and an integer $n$, we set $\alpha[n]=(n-|\alpha|, \alpha_1, \ldots, \alpha_k)$. Murnaghan's theorem says that for $n$ big enough the expansions of $s_{\alpha[n]} \ast s_{\beta[n]}$ in the Schur basis all coincide, except for the first part of the indexing partitions which is determined by the degree, $n$. In particular, given any three partitions $\alpha$, $\beta$ and $\gamma$, the sequence with general term ${g}_{\alpha[n]\beta[n]}^{\gamma[n]}$ is eventually constant. The reduced Kronecker coefficient $\overline{g}_{\alpha,\beta}^{\gamma}$ is defined as the stable value of this sequence. 
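As a small illustrative sketch (Python, with function names of our own choosing), the padding operation $\alpha[n]$ can be written down directly; for instance $(2)[7]=(5,2)$, matching the term $s_{5,2}$ in the products displayed above, while $(2)[3]=(1,2)$ fails to be a partition since $3-|\alpha|<\alpha_1$:

```python
def pad(alpha, n):
    """alpha[n] = (n - |alpha|, alpha_1, alpha_2, ...); this is a
    partition only when n - |alpha| >= alpha_1."""
    return (n - sum(alpha),) + tuple(alpha)

def is_partition(seq):
    """Weakly decreasing sequence of nonnegative integers."""
    return all(seq[k] >= seq[k + 1] for k in range(len(seq) - 1)) and \
        (not seq or seq[-1] >= 0)
```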
In our example, we see that $\overline{g}_{(2),(2)}^{(2)}=2$ and $\overline{g}_{(2),(2)}^{(4)}=1$. In view of the difficulty of studying the Kronecker coefficients, it is surprising to obtain theorems that hold in general. Regardless of this, we present new results of a general nature. We find an elegant formula that gives the point $n=\operatorname{stab}(\alpha,\beta)$ at which the expansion of the Kronecker product $s_{\alpha[n]} \ast s_{\beta[n]}$ stabilizes: \[ \operatorname{stab}(\alpha,\beta)=|\alpha|+|\beta|+\alpha_1+\beta_1. \] We also find new upper bounds for the point at which the sequence ${g}_{\alpha[n]\beta[n]}^{\gamma[n]}$ becomes constant, improving previously known bounds due to Brion \cite{Brion:Foulkes} and Vallejo \cite{Vallejo}. Interestingly, our investigations reduce to maximizing or bounding linear forms on the sets $\textrm{Supp}(\alpha,\beta)$ of partitions $\gamma$ such that $\overline{g}_{\alpha,\beta}^{\gamma} >0$, where $\alpha$ and $\beta$ are fixed partitions. This connects our research to a current problem of major importance: to describe the cones generated by the indices of the nonzero Kronecker coefficients \cite{Klyachko,Oeding}. Moreover, using Weyl's inequalities for eigenvalues of triples of hermitian matrices \cite{Weyl}, we find the maximum of $\gamma_1$ and upper bounds for all parts $\gamma_k$, among all $\gamma$ in $\textrm{Supp}(\alpha,\beta)$. This paper is organized as follows. In Section \ref{sec:main results} we give a detailed description of the main results of this work. In Section \ref{sec:RKC}, we prove the theorem that allows us to recover the Kronecker coefficients from the reduced Kronecker coefficients. We also give an expression of the reduced Kronecker coefficients in terms of Littlewood-Richardson coefficients and Kronecker coefficients. The main significance of this expression is that it does not involve cancellations, and it provides us with a tool to prove most of our main results. 
In Section \ref{sec:product}, we provide a proof of the sharp bound for the stability of the Kronecker product. In the next section, Section \ref{sec:bounds}, we consider the problem of finding bounds on the rows of $\gamma$, whenever $\overline{g}_{\alpha, \beta}^\gamma>0$. We prove a theorem giving a general upper bound for all rows of $\gamma$; using this theorem, we give a sharp bound for $\gamma_1$. In Section \ref{sec:coeff}, we describe a general technique for deriving upper bounds for the stabilization of sequences of coefficients. Using this technique we obtain two new bounds, and we show that one of these bounds improves the bounds of Brion and Vallejo. Finally, we compare our results to existing results in the literature. \section{Preliminaries and Main Results}\label{sec:main results} Let $\lambda$ be a partition (a weakly decreasing sequence of positive integers) of $n$. Denote by $V_{\lambda}$ the irreducible representation of the symmetric group $\S_n$ indexed by $\lambda$. The Kronecker coefficient $g_{\mu,\nu}^{\lambda}$ is the multiplicity of $V_{\lambda}$ in the decomposition into irreducible representations of the tensor product $V_{\mu} \otimes V_{\nu}$. The Frobenius map identifies the irreducible representation $V_\lambda$ of the symmetric group with the Schur function $s_{\lambda}$. In doing so, it allows us to lift the tensor product of representations of the symmetric group to the setting of symmetric functions. Accordingly, the Kronecker coefficients ${g}_{\mu,\nu}^{\lambda}$ define the Kronecker product on symmetric functions by setting \[ s_{\mu}*s_{\nu} = \sum_{\lambda} g_{\mu,\nu}^{\lambda} s_{\lambda}. \] The reader is referred to \cite{Macdonald} Chapter I or \cite{Stanley:vol2} Chapter 7 for the standard facts in the theory of symmetric functions. Throughout this paper we follow the standard notation for partitions found in \cite{Macdonald}. 
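As an illustrative aside (a Python sketch of our own, not from the paper), the standard partition operations used below (weight, length, transpose, intersection, and componentwise sum) can be written as:

```python
def weight(lam):
    """|lambda|: sum of the parts."""
    return sum(lam)

def length(lam):
    """ell(lambda): number of nonzero parts."""
    return sum(1 for p in lam if p > 0)

def transpose(lam):
    """Conjugate partition lambda': column lengths of the Ferrers diagram."""
    width = lam[0] if lam else 0
    return tuple(sum(1 for p in lam if p >= j) for j in range(1, width + 1))

def intersect(a, b):
    """alpha ∩ beta = (min(a_1, b_1), min(a_2, b_2), ...)."""
    return tuple(min(x, y) for x, y in zip(a, b))

def add(a, b):
    """alpha + beta: componentwise sum, padding with trailing zeros."""
    m = max(len(a), len(b))
    a = tuple(a) + (0,) * (m - len(a))
    b = tuple(b) + (0,) * (m - len(b))
    return tuple(x + y for x, y in zip(a, b))
```

For example, $(4,2,1)'=(3,2,1,1)$, and transposing twice returns the original partition.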
If $\lambda =(\lambda_1, \lambda_2, \ldots,\lambda_k)$ is a partition, its \emph{parts} are its terms $\lambda_i$. The \emph{weight} of $\lambda$ is defined to be the sum of its parts, and it is denoted by $|\lambda|$. The number $k$ of (nonzero) parts of $\lambda$ is called its \emph{length}, and denoted by $\ell(\lambda).$ We identify a partition $\lambda$ with its Ferrers diagram \[ D(\lambda) = \left\{ (i,j) : 1 \le i \le \lambda_j, 1 \leq j \leq \ell(\lambda) \right\} \subseteq \mathbb{N}^2. \] This way, the intersection of two partitions is given by $\alpha \cap \beta = ( \min(\alpha_1,\beta_1), \min(\alpha_2,\beta_2), \ldots)$. The sum of two partitions $\alpha + \beta$ is defined as $ ( \alpha_1+\beta_1, \alpha_2+\beta_2, \ldots)$. Listing the number of points in each column of $D(\lambda)$ gives the transpose partition of $\lambda$, denoted by $\lambda'$; equivalently, one obtains the Ferrers diagram of $\lambda'$ by reflecting the one of $\lambda$ along its main diagonal. The skew shape $\mu / \nu$ is defined as the set difference $D(\mu)\setminus D(\nu)$. Notice that $D(\mu)\subset D(\lambda)$ if $\mu_i\leq \lambda_i$ for all $i$. Again, the intersection and union of skew shapes are defined as the corresponding operations on their diagrams. The \emph{width} of $\mu / \nu $ is defined as the number of nonzero columns of $\mu / \nu$ in $\mathbb{N}^2.$ Consider a partition $\lambda$ and an integer $n$. Then $\bar{\lambda}$ is defined to be the partition $(\lambda_2,\lambda_3, \ldots)$ and $\lambda[n]$ as the sequence $(n-|\lambda|, \lambda_1, \lambda_2, \ldots)$. Notice that $\lambda[n]$ is a partition only if $n-|\lambda|\geq \lambda_1$. We are ready to describe the starting point of our investigations, a remarkable theorem of Murnaghan that deserves to be better known. We first need to extend the definition of $s_{\mu}$ to the case where $\mu$ is any finite sequence of $n$ integers. 
For this, we use the Jacobi-Trudi determinant, \begin{equation}\label{eq:JT} s_{\mu}=\det\left( h_{\mu_j+i-j} \right)_{1\le i,j \le n}, \end{equation} where $h_k$ is the complete homogeneous symmetric function of degree $k$. In particular, $h_k=0$ if $k$ is negative, and $h_0=1$. It is not hard to see that such a Jacobi--Trudi determinant $s_{\mu}$ is either zero or $\pm 1$ times a Schur function. \begin{Murtheorem}[Murnaghan, \cite{Murnaghan:1938, Murnaghan:1955}] \label{thm:murnaghan} There exists a family of non-negative integers $(\overline{g}_{\alpha\beta}^\gamma)$ indexed by triples of partitions $(\alpha,\beta,\gamma)$ such that, for $\alpha$ and $\beta$ fixed, only finitely many terms $\overline{g}_{\alpha\beta}^\gamma$ are nonzero, and for all $n\geq 0$, \begin{equation}\label{eq:thm Mur} s_{\alpha[n]}\ast s_{\beta[n]} =\sum_{\gamma} \overline{g}_{\alpha\beta}^\gamma s_{\gamma[n]}. \end{equation} Moreover, the coefficient $\overline{g}_{\alpha\beta}^\gamma$ vanishes unless the weights of the three partitions fulfill the inequalities: \begin{align*} \label{muriq} |\alpha|\leq |\beta|+|\gamma|, \qquad |\beta|\leq |\alpha|+|\gamma|,\qquad |\gamma|\leq |\alpha|+|\beta|. \end{align*} \end{Murtheorem} In what follows, we refer to these inequalities as \emph{Murnaghan's inequalities} and we will denote by $\textrm{Supp}(\alpha,\beta)$ the set of all partitions $\gamma$ such that $\overline{g}_{\alpha,\beta}^{\gamma}>0$. We follow Klyachko \cite{Klyachko} and call the coefficients $\overline{g}_{\alpha\beta}^\gamma$ the {\em reduced Kronecker coefficients}. An elegant proof of Murnaghan's Theorem, using vertex operators on symmetric functions, is given in \cite{Thibon}. \begin{exa} According to Murnaghan's theorem, the reduced Kronecker coefficients determine the Kronecker product of two Schur functions, even for small values of $n$. 
For instance, \[ s_{2,2}\ast s_{2,2} = s_{4} +s_{1, 1, 1, 1} +2s_{2, 2} +s_{3, 1}+s_{2, 1, 1} +2s_{1, 2, 1} +s_{0, 2, 2} +s_{1, 3} +s_{0, 3, 1} +s_{0, 4}. \] The Jacobi--Trudi determinants corresponding to $s_{1, 2, 1}$ and $s_{0, 2, 2}$ have a repeated column, hence they are zero. On the other hand, it is easy to see that $s_{1, 3}=-s_{2,2}$, $s_{0, 3, 1}=-s_{2,1,1}$, and $s_{0, 4}=-s_{3,1}$. After taking into account the resulting cancellations, we recover the expression of the Kronecker product $s_{2,2}\ast s_{2,2}$ in the Schur basis: $ s_{4}+s_{1, 1, 1, 1}+s_{2, 2}. $ \end{exa} The reduced Kronecker coefficients contain the Little\-wood--Richardson coefficients as special cases. \begin{LRtheorem}[Murnaghan \cite{Murnaghan:1955}, Littlewood \cite{Littlewood:1958}] Let $\alpha$, $\beta$ and $\gamma$ be partitions. If $|\gamma|=|\alpha|+|\beta|$, then the reduced Kronecker coefficient $\overline{g}_{\alpha,\beta}^{\gamma}$ is equal to the Littlewood--Richardson coefficient $c_{\alpha,\beta}^{\gamma}$. \end{LRtheorem} Finally, a remarkable result of Christandl, Harrow, and Mitchison (originally stated for the Kronecker coefficients) says that the set \[ \operatorname{RKron}_k= \{(\alpha, \beta, \gamma)\, |\, \ell(\alpha),\ell(\beta),\ell(\gamma) \le k \text{ and } \overline{g}_{\alpha,\beta}^\gamma>0\} \] is a finitely generated semigroup under componentwise addition, \cite{Christandl:Harrow:Mitchison}. That is, if $\overline{g}_{\alpha, \beta}^{\gamma} \neq 0$ and $\overline{g}_{\hat\alpha \hat\beta }^{\hat\gamma} \neq 0$, then $\overline{g}_{\alpha+\hat\alpha, \beta+\hat\beta}^{\gamma+\hat\gamma} \neq 0$. This implies that $\operatorname{RKron}_k$ is closed under stretching: $\overline{g}_{\alpha, \beta}^{\gamma} \neq 0$ implies that $\overline{g}_{N\, \alpha, N\,\beta}^{N\,\gamma} \neq 0$ for all $N>0$. Both Klyachko and Kirillov have conjectured that the converse also holds.
That is to say, the reduced Kronecker coefficients are conjectured to satisfy the saturation property, \cite{Klyachko, Kirillov:saturation}. Remarkably, the Kronecker coefficients do not satisfy the saturation property. For example, \[ g^{(n,n)}_{(n,n),(n,n)}=0 \text{ if $n$ is odd, but $g^{(n,n)}_{(n,n),(n,n)}=1$ if $n$ is even.} \] At this point, we hope that the reader is convinced that the reduced Kronecker coefficients are interesting objects on their own. We are ready to describe the results of this article. In Theorem \ref{theorem:g in gbar} we give an explicit formula for recovering the value of the Kronecker coefficients from the reduced Kronecker coefficients. Let $u=(u_1,u_2,\ldots)$ be an infinite sequence and $i$ a positive integer. Define $u^{\dagger i}$ as the sequence obtained from $u$ by adding $1$ to its $i-1$ first terms and erasing its $i$--th term: \[ u^{\dagger i}=(1+u_1,1+u_2, \ldots, 1+u_{i-1},u_{i+1}, u_{i+2}, \ldots). \] Partitions are identified with infinite sequences by appending trailing zeros. Under this identification, when $\lambda$ is a partition then so is $\lambda^{\dagger i}$ for all positive $i$: for instance, if $\lambda=(2,2,1)$ then $\lambda^{\dagger 1}=(2,1)$, $\lambda^{\dagger 2}=(3,1)$, and $\lambda^{\dagger 3}=(3,3)$. \begin{theorem}[Computing the Kronecker coefficients from the reduced Kronecker coefficients]\label{theorem:g in gbar} Let $n$ be a nonnegative integer and $\lambda$, $\mu$, and $\nu$ be partitions of $n$. Then \begin{equation}\label{eq:g to gbar} g_{\mu\nu}^\lambda = \sum_{i=1}^{\ell(\mu)\ell(\nu)} (-1)^{i+1} \bar{g}_{\bar{\mu} \bar{\nu}}^{\lambda^{\dagger i}}. \end{equation} \end{theorem} This Theorem was stated in \cite{Briand:Orellana:Rosas:FPSAC}, and was used to compute an explicit piecewise quasipolynomial description of the Kronecker coefficients indexed by two two--row shapes. Murnaghan's Theorem implies the stability property for the Kronecker products $s_{\alpha[n]} \ast s_{\beta[n]}$ presented in the introduction.
Indeed, for $n$ big enough, all sequences $\gamma[n]$ for $\gamma \in \textrm{Supp}(\alpha,\beta)$ are partitions, and then \eqref{eq:thm Mur} is the expansion of $s_{\alpha[n]} \ast s_{\beta[n]}$ in the Schur basis. In particular, for $n$ big enough, the Kronecker coefficient $g_{\alpha[n],\beta[n]}^{\gamma[n]}$ is equal to the reduced Kronecker coefficient $\overline{g}_{\alpha,\beta}^{\gamma}$. It is natural to ask about the index $n$ at which the expansion of $s_{\alpha[n]} \ast s_{\beta[n]}$ stabilizes. This index is defined as follows. \begin{defi}[$\operatorname{stab}(\alpha,\beta)$] Let $V$ be the linear operator on symmetric functions defined on the Schur basis by: $V \left(s_{\lambda}\right)=s_{\lambda+(1)}$ for all partitions $\lambda$. Let $\alpha$ and $\beta$ be partitions. Then $\operatorname{stab}(\alpha,\beta)$ is defined as the smallest integer $n$ such that $s_{\alpha[n+k]} \ast s_{\beta[n+k]}=V^k \left(s_{\alpha[n]} \ast s_{\beta[n]}\right)$ for all $k>0$. \end{defi} As an illustration, consider the example in the introduction, where $\alpha=\beta=(2)$: the Kronecker product is stable starting at $s_{(6,2)}\ast s_{(6,2)}$. Since $(6,2)$ is a partition of $8$, we get that $\operatorname{stab}(\alpha,\beta)=8$. \begin{theorem} \label{thm:global} Let $\alpha$ and $\beta$ be two partitions. Then \[ \operatorname{stab}(\alpha,\beta) =|\alpha|+|\beta|+\alpha_1+\beta_1. \] \end{theorem} In order to show that this theorem holds, we first reduce the calculation of $\operatorname{stab}(\alpha,\beta)$ to maximizing a linear form on $\textrm{Supp}(\alpha,\beta)$ (Lemma \ref{lemma:stabbound}): \[ \operatorname{stab}(\alpha,\beta) =\maxsupp{|\gamma|+\gamma_1}.
\] Then, we show (Theorem \ref{thm:g and g1}) that \begin{equation}\label{eq:g and g1} \maxsupp{|\gamma|+\gamma_1} =|\alpha|+|\beta|+\alpha_1+\beta_1, \end{equation} using a decomposition of $\overline{g}_{\alpha\beta}^\gamma$ as a sum of nonnegative summands derived from Murnaghan's theorem; this decomposition is described in Lemma \ref{prop:Littlewood}. We also obtain other interesting bounds for linear forms over the set $\textrm{Supp}(\alpha,\beta)$: \begin{itemize} \item In Theorem \ref{thm:gamma1} we show that: \begin{equation}\label{eq:max gamma1} \maxsupp{\gamma_1} = |\alpha \cap \beta|+\max(\alpha_1,\beta_1) \end{equation} \item More generally, we obtain in Theorem \ref{prop:otherparts} that, whenever $\overline{g}_{\alpha,\beta}^{\gamma}>0$, we have for all positive integers $i$, $j$: \[ \gamma_{i+j-1} \leq |E_i\alpha \cap E_j\beta|+\alpha_i+\beta_j \] where $E_k \lambda$ stands for the partition obtained from $\lambda$ by erasing its $k$--th part. \item We also obtain (Theorem \ref{minmaxgamma}): \begin{align*} \maxsupp{|\gamma|}&= |\alpha|+|\beta|,\\ \minsupp{|\gamma|}&= \max(|\alpha|,|\beta|)-|\alpha\cap\beta|. \end{align*} \end{itemize} Note that Formula \eqref{eq:max gamma1} is reminiscent of the following result for the Kronecker coefficients. \begin{proposition}[Klemm \cite{Klemm}, Dvir \cite{Dvir} Theorem 1.6, Clausen and Meier \cite{Clausen:Meier} Satz 1.1]\label{prop:Dvir} Let $\alpha$ and $\beta$ be partitions with the same weight. Then: \[ \max\left\{\gamma_1\;|\; \text{\rm $\gamma$ partition s. t. $g_{\alpha,\beta}^{\gamma}>0$} \right\}=|\alpha \cap \beta|. \] \end{proposition} In Section \ref{sec:coeff}, we consider the weaker version of the stabilization problem (think uniform convergence versus pointwise convergence).
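Before turning to that problem, we illustrate the bounds above on a small example. \begin{exa} Take $\alpha=\beta=(1)$. For $n\geq 4$ one has the classical decomposition \[ s_{(n-1,1)}\ast s_{(n-1,1)} = s_{(n)}+s_{(n-1,1)}+s_{(n-2,2)}+s_{(n-2,1,1)}, \] so that $\textrm{Supp}((1),(1))=\{\emptyset,(1),(2),(1,1)\}$. On this set, the maximum of $|\gamma|+\gamma_1$ is $4=|\alpha|+|\beta|+\alpha_1+\beta_1$, attained at $\gamma=(2)=\alpha+\beta$; the maximum of $\gamma_1$ is $2=|\alpha\cap\beta|+\max(\alpha_1,\beta_1)$; the maximum of $|\gamma|$ is $2=|\alpha|+|\beta|$; and the minimum of $|\gamma|$ is $0=\max(|\alpha|,|\beta|)-|\alpha\cap\beta|$, in agreement with the formulas above. \end{exa}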
As mentioned, Murnaghan's Theorem also implies that each particular sequence of Kronecker coefficients ${g}_{\alpha[n],\beta[n]}^{\gamma[n]}$ stabilizes with value $\overline{g}_{\alpha,\beta}^{\gamma}$, possibly before reaching $\operatorname{stab}(\alpha, \beta)$. More is known about these sequences: \begin{Britheorem}[Brion \cite{Brion:Foulkes}, see also \cite{Manivel:rectangularKron}]\label{brion:monotone} Let $\alpha$, $\beta$ and $\gamma$ be partitions. The sequence with general term $g_{\alpha[n],\beta[n]}^{\gamma[n]}$ is weakly increasing. \end{Britheorem} The second stabilization problem consists in determining the following numbers. \begin{defi}[$\operatorname{stab}(\alpha,\beta,\gamma)$] Let $\alpha$, $\beta$, $\gamma$ be partitions. Then $\operatorname{stab}(\alpha,\beta,\gamma)$ is defined as the smallest integer $N$ such that the sequences $\alpha[N]$, $\beta[N]$ and $\gamma[N]$ are partitions and $ g_{{\alpha}[n],{\beta}[n]}^{{\gamma}[n]}=\overline{g}_{\alpha,\beta}^{\gamma} $ for all $n \geq N$. \end{defi} Lemma \ref{lemma:Mf} describes a general technique for producing linear upper bounds for $\operatorname{stab}(\alpha,\beta,\gamma)$ from any linear function $f$ such that $\gamma_1 \le f(\alpha, \beta, \bar \gamma)$ whenever $\overline{g}_{\alpha,\beta}^{\gamma}>0$. This method provides two new upper bounds $N_1$ and $N_2$ for $\operatorname{stab}(\alpha,\beta, \gamma)$. The first bound is found by applying Lemma \ref{lemma:Mf} to the bound \eqref{eq:max gamma1} for $\gamma_1$ obtained in Theorem \ref{thm:gamma1}. \begin{theorem}\label{theorem:N1} Let $M_1(\alpha,\beta;\gamma)=|\gamma|+|\bar{\alpha}\cap \bar{\beta}|+\alpha_1+\beta_1$ and \[ N_1(\alpha,\beta,\gamma)= \min \left\{ M_1(\alpha,\beta;\gamma), M_1(\alpha,\gamma;\beta), M_1(\beta,\gamma;\alpha) \right\}. \] Then $\operatorname{stab}(\alpha,\beta,\gamma)\leq N_1(\alpha,\beta,\gamma)$.
\end{theorem} The second bound is obtained by applying Lemma \ref{lemma:Mf} to the bound \eqref{eq:g and g1} obtained in Theorem \ref{thm:g and g1}. \begin{theorem}\label{theorem:N2} Let \[ N_2(\alpha,\beta,\gamma) =\left[ \frac{|\alpha|+|\beta|+|\gamma|+\alpha_1+\beta_1+\gamma_1}{2} \right] \] where $[x]$ denotes the integer part of $x$. Then $\operatorname{stab}(\alpha,\beta,\gamma)\leq N_2(\alpha, \beta, \gamma)$. \end{theorem} We finish our work by placing the new bounds in the context of the current literature. We show in Proposition \ref{prop:comparison} that $N_1$ improves on the bounds of Ernesto Vallejo \cite{Vallejo} and Michel Brion \cite{Brion:Foulkes}. On the other hand, neither of $N_1$ and $N_2$ is better than the other: there are infinite families of examples where $N_1 < N_2$ (see Example \ref{ex:three hooks}, on the Kronecker coefficients indexed by three hooks), and others where $N_2<N_1$ (see Example \ref{ex:two two-row}, on the Kronecker coefficients indexed by two two-row shapes). Finally, we revisit the work of Rosas \cite{Rosas:2001}, Ballantine and Orellana \cite{Ballantine:Orellana}, and \cite{Briand:Orellana:Rosas:Chamber}, where the situation for some restricted families of Kronecker coefficients is addressed. \section{The reduced Kronecker coefficients}\label{sec:RKC} In this section we show how to recover the Kronecker coefficients from the knowledge of the reduced Kronecker coefficients. We also present an expression for the reduced Kronecker coefficients as sums of nonnegative terms, involving Littlewood--Richardson coefficients as well as Kronecker coefficients, that will be useful in the next two sections. We denote by $\langle \ |\ \rangle$ the Hall inner product on symmetric functions. Recall that Formula \eqref{eq:g to gbar} in Theorem \ref{theorem:g in gbar} shows that we can recover the Kronecker coefficients from the reduced ones: \[ g_{\mu\nu}^\lambda = \sum_{i=1}^{\ell(\mu)\ell(\nu)} (-1)^{i+1} \bar{g}_{\bar{\mu} \bar{\nu}}^{\lambda^{\dagger i}}.
\] We now provide the proof. \begin{proof}[Proof of Theorem \ref{theorem:g in gbar}] Murnaghan's theorem tells us that \[ s_\mu\ast s_\nu=\sum_{\gamma \in \textrm{Supp}(\bar{\mu},\bar{\nu})} \bar{g}_{\bar{\mu}\bar{\nu}}^\gamma s_{\gamma[n]}. \] Taking the scalar product with $s_{\lambda}$ in the preceding equation yields: \begin{equation}\label{eq:ggg} g_{\mu,\nu}^{\lambda}=\sum_{\gamma \in \textrm{Supp}(\bar{\mu},\bar{\nu})} \bar{g}_{\bar{\mu}\bar{\nu}}^\gamma \scalar{s_{\gamma[n]}}{s_{\lambda}}. \end{equation} Consider a particular $\gamma \in \textrm{Supp}(\bar{\mu},\bar{\nu})$ such that $\scalar{s_{\gamma[n]}}{s_{\lambda}}\neq 0$, and let $k$ be its length. Then $\lambda$ has length at most $k+1$ and the Jacobi--Trudi determinants $s_{\gamma[n]}$ and $s_{\lambda}$ have the same columns, up to order, see Eq. \eqref{eq:JT}. That is, the sequence \[ v = (n-|\gamma|, \gamma_1,\gamma_2,\ldots,\gamma_k)+(k+1,k,k-1,\ldots,1) \] is a permutation of the strictly decreasing sequence $u = \lambda + (k+1,k,k-1,\ldots,1)$. (As usual one sets $\lambda_j=0$ for $j> \ell(\lambda)$.) By construction, $v$ is strictly decreasing from its second term on. Therefore, there exists an index $i$ such that $u_j = v_{j+1}$ for all $j < i$, $u_i = v_1$, and $u_j = v_j$ for all $j > i$. This means that $\gamma = \lambda^{\dagger i}$ for some $i \leq k+1$. Since $\gamma \in \textrm{Supp}(\bar{\mu},\bar{\nu})$ we have $k \leq \ell(\mu)\ell(\nu)-1$, and thus $i \leq \ell(\mu)\ell(\nu)$. Finally $\scalar{s_{\gamma[n]}}{s_{\lambda}}$ is the sign of the permutation that transforms $v$ into the decreasing sequence $u$. This permutation is the cycle $(i, i-1,\ldots, 2, 1)$, which has sign $(-1)^{i+1}$. This shows that only the partitions $\gamma=\lambda^{\dagger i}$, for $i$ between $1$ and $\ell(\mu)\ell(\nu)$, contribute to the sum in the right--hand side of \eqref{eq:ggg}, and that the contribution of $\lambda^{\dagger i}$ is $(-1)^{i+1} \bar{g}_{\bar{\mu}\bar{\nu}}^{\lambda^{\dagger i}}$.
\end{proof} For a symmetric function $f$, the operator $f^{\perp}$ is defined as the adjoint of multiplication by $f$ with respect to the inner product $\langle \, |\, \rangle$. Define $c_{\alpha,\beta,\gamma}^\delta$ as the coefficient of $s_{\delta}$ in $s_{\alpha}s_\beta s_\gamma$. From the definition of the Littlewood--Richardson coefficients as the structure constants for the product of two Schur functions, we immediately obtain that \begin{equation}\label{3LR} c_{\alpha,\beta,\gamma}^\delta = \sum_\varphi c_{\alpha,\beta}^\varphi c_{\varphi,\gamma}^\delta. \end{equation} \begin{lemma}\label{prop:Littlewood} Let $\alpha$, $\beta$, $\gamma$ be partitions. Then $\overline{g}_{\alpha,\beta}^{\gamma}$ is positive if and only if there exist partitions $\delta$, $\epsilon$, $\zeta$, $\rho$, $\sigma$, $\tau$ such that all four coefficients $g_{\delta,\epsilon}^{\zeta}$, $c_{\delta,\sigma,\tau}^{\alpha}$, $c_{\epsilon,\rho,\tau}^{\beta}$ and $c_{\zeta,\rho,\sigma}^{\gamma}$ are positive. Moreover, \begin{align}\label{trick} \overline{g}_{\alpha,\beta}^{\gamma}= \sum g_{\delta,\epsilon}^{\zeta} c_{\delta,\sigma,\tau}^{\alpha} c_{\epsilon,\rho,\tau}^{\beta} c_{\zeta,\rho,\sigma}^{\gamma}. \end{align} \end{lemma} \begin{proof} Given partitions $\alpha$ and $\beta$, define the following symmetric function \[ R_{\alpha,\beta}=\sum_{\delta,\epsilon,\tau} \left( (s_{\delta} s_{\tau})^{\perp} s_{\alpha}\right) \left( (s_{\epsilon} s_{\tau})^{\perp} s_{\beta}\right) \left( s_{\delta} \ast s_{\epsilon} \right) \] where the sum is over all triples of partitions $\delta$, $\epsilon$, $\tau$. For $n$ an integer, let $U_n$ be the linear operator on symmetric functions that sends the Schur function $s_{\lambda}$ to the Jacobi--Trudi determinant $s_{\lambda[n]}$.
Littlewood showed in \cite{Littlewood:1958} that for all partitions $\alpha$ and $\beta$ and all integers $n$, \begin{equation}\label{eq:Littlewood} s_{\alpha[n]} \ast s_{\beta[n]}=U_n R_{\alpha,\beta}. \end{equation} Formula \eqref{eq:Littlewood} is also presented in \cite{Butler:King} (Formula 6.1) and \cite{Scharf:Thibon:Wybourne} (Formula 8). Comparing \eqref{eq:Littlewood} with Murnaghan's Theorem we see that $U_n R_{\alpha,\beta}=U_n \sum_{\gamma} \overline{g}_{\alpha,\beta}^{\gamma} s_{\gamma}$. The operator $U_n$ is not injective, but its restriction to the symmetric functions of degree at most $n/2$ is. Indeed, when $|\gamma| \leq n/2$, the sequence $\gamma[n]$ is a partition. Therefore, taking $n$ big enough we can deduce that $R_{\alpha,\beta}=\sum_{\gamma} \overline{g}_{\alpha,\beta}^{\gamma} s_{\gamma}$. Let us determine the expansion $\sum_{\gamma} r_{\alpha,\beta}^{\gamma} s_{\gamma}$ of $R_{\alpha,\beta}$ in the Schur basis. We have: \begin{align*} (s_{\delta} s_{\tau})^{\perp}s_{\alpha}&=\sum_{\sigma} c_{\delta,\sigma,\tau}^{\alpha} s_{\sigma},\\ (s_{\epsilon} s_{\tau})^{\perp}s_{\beta}&=\sum_{\rho} c_{\epsilon,\rho,\tau}^{\beta} s_{\rho},\\ s_{\delta} \ast s_{\epsilon}&=\sum_{\zeta} g^{\zeta}_{\delta,\epsilon} s_{\zeta}. \end{align*} Therefore, \begin{align*} R_{\alpha,\beta} &= \sum g^{\zeta}_{\delta,\epsilon} c_{\delta,\sigma,\tau}^{\alpha} c_{\epsilon,\rho,\tau}^{\beta} s_{\sigma} s_{\rho} s_{\zeta}\\ &= \sum g^{\zeta}_{\delta,\epsilon} c_{\delta,\sigma,\tau}^{\alpha} c_{\epsilon,\rho,\tau}^{\beta} c_{\zeta,\rho,\sigma}^{\gamma} s_{\gamma}, \end{align*} where the sums are over all tuples of partitions involved. This proves Eq.~\eqref{trick}. \end{proof} \section{Stability: The Kronecker product}\label{sec:product} In this section we consider the stability of the Kronecker product of Schur functions, and prove Theorem \ref{thm:global}, which gives a sharp bound for this stability. \begin{lemma} \label{lemma:stabbound} Let $\alpha$ and $\beta$ be partitions.
Then \[ \operatorname{stab}(\alpha,\beta) =\maxsupp{|\gamma|+\gamma_1}. \] \end{lemma} \begin{proof} Let $N=\maxsupp{|\gamma|+\gamma_1}$. If $\alpha$ and $\beta$ are both equal to the empty partition then $N=0=\operatorname{stab}(\alpha,\beta)$. In all other cases, which we consider now, we have $N>0$. Remember (from the definition of $\operatorname{stab}(\alpha,\beta)$ in Section \ref{sec:main results}) that $V$ is the linear operator that fulfills $V\left(s_{\lambda}\right)=s_{\lambda+(1)}$ for all partitions $\lambda$. For all $\gamma \in \textrm{Supp}(\alpha,\beta)$ and $k > 0$, the sequences $\gamma[N]$ and $\gamma[N+k]$ are partitions, therefore $s_{\gamma[N+k]}=V^k\left(s_{\gamma[N]}\right)$. By Murnaghan's Theorem, \begin{align*} s_{\alpha[N]}\ast s_{\beta[N]} &=\sum_{\gamma \in \textrm{Supp}(\alpha,\beta)} \overline{g}_{\alpha\beta}^\gamma s_{\gamma[N]},\\ s_{\alpha[N+k]}\ast s_{\beta[N+k]} &=\sum_{\gamma \in \textrm{Supp}(\alpha,\beta)} \overline{g}_{\alpha\beta}^\gamma s_{\gamma[N+k]}. \end{align*} We obtain that: \[ s_{\alpha[N+k]}\ast s_{\beta[N+k]}=V^k \left( s_{\alpha[N]}\ast s_{\beta[N]} \right). \] This proves that $N \geq \operatorname{stab}(\alpha,\beta)$. The equality will be obtained by proving additionally that $N-1 < \operatorname{stab}(\alpha,\beta)$. There exists a partition $\gamma \in \textrm{Supp}(\alpha,\beta)$ such that $|\gamma|+\gamma_1=N$. Then $\gamma[N]$ is a partition with its first part equal to its second part. This shows that $s_{\gamma[N]}$ is not in the image of $V$, since the image of $V$ is spanned by the Schur functions $s_{\lambda}$ with $\lambda_1 > \lambda_2$. Because the coefficients $\overline{g}_{\alpha\beta}^\gamma$ are nonnegative, it follows that $s_{\alpha[N]} \ast s_{\beta[N]}$ is not in the image of $V$. In particular, $s_{\alpha[N]} \ast s_{\beta[N]}$ is not equal to $V \left(s_{\alpha[N-1]} \ast s_{\beta[N-1]}\right)$. \end{proof} \begin{theorem}\label{thm:g and g1} Let $\alpha$, $\beta$ be partitions. Then, \begin{equation}\tag{\ref{eq:g and g1}} \maxsupp{|\gamma|+\gamma_1}=|\alpha|+|\beta|+\alpha_1+\beta_1.
\end{equation} \end{theorem} \begin{proof} Let $\gamma$ be a partition such that $\overline{g}_{\alpha,\beta}^{\gamma}>0$. By Lemma \ref{prop:Littlewood}, there exist partitions $\delta$, $\epsilon$, $\zeta$, $\rho$, $\sigma$, $\tau$ such that all four coefficients $g_{\delta,\epsilon}^{\zeta}$, $c_{\delta,\sigma,\tau}^{\alpha}$, $c_{\epsilon,\rho,\tau}^{\beta}$ and $c_{\zeta,\rho,\sigma}^{\gamma}$ are positive. The Littlewood--Richardson rule together with Eq. \eqref{3LR} implies that if $c_{\zeta,\rho,\sigma}^{\gamma}>0$ then $\gamma_1 \leq \zeta_1+\rho_1+\sigma_1$. Since $c_{\zeta,\rho,\sigma}^{\gamma}>0$, we have also $|\gamma|=|\zeta|+|\rho|+|\sigma|$. Therefore $|\gamma|+\gamma_1 \leq |\zeta|+\zeta_1+|\rho|+\rho_1+|\sigma|+\sigma_1$. Obviously $\zeta_1 \leq |\zeta|$. Thus \begin{equation}\label{eq:12} |\gamma|+\gamma_1 \leq 2\,|\zeta|+|\rho|+\rho_1+|\sigma|+\sigma_1 \end{equation} Since $g_{\delta,\epsilon}^{\zeta}>0$ we have $|\zeta|=|\delta|=|\epsilon|$. Replacing $2|\zeta|$ with $|\delta|+|\epsilon|$ in \eqref{eq:12} yields \begin{equation}\label{eq:13} |\gamma|+\gamma_1 \leq |\delta|+|\sigma|+\sigma_1+|\epsilon|+|\rho|+\rho_1 \end{equation} Since $c_{\delta,\sigma,\tau}^{\alpha}>0$ we have $\sigma \subset \alpha$ and thus $\sigma_1 \leq \alpha_1$. We have also $|\delta|+|\sigma|\leq |\alpha|$. Therefore $|\delta|+|\sigma|+\sigma_1 \leq |\alpha|+\alpha_1$. Similarly, $c_{\epsilon,\rho,\tau}^{\beta}>0$ implies $|\epsilon|+|\rho|+\rho_1 \leq |\beta|+\beta_1$. Substituting these two new inequalities in \eqref{eq:13} provides the following inequality \[ |\gamma|+\gamma_1 \leq |\alpha|+|\beta|+\alpha_1+\beta_1. \] We now show that the bound is achieved. Consider the reduced Kronecker coefficient $\overline{g}_{\alpha,\beta}^{\alpha+\beta}$. The Murnaghan--Littlewood theorem implies that it is equal to the Littlewood--Richardson coefficient $c_{\alpha,\beta}^{\alpha+\beta}$ which is equal to $1$. 
This proves that the upper bound $|\alpha|+|\beta|+\alpha_1+\beta_1$ for $|\gamma|+\gamma_1$ on $\textrm{Supp}(\alpha,\beta)$ is attained at $\gamma=\alpha+\beta$. \end{proof} Theorem \ref{thm:global} is now a direct consequence of Lemma \ref{lemma:stabbound} and Theorem \ref{thm:g and g1}. \section{Bounds for linear forms on $\textrm{Supp}(\alpha,\beta)$}\label{sec:bounds} In this section we prove bounds on the lengths of the rows of $\gamma$ when $\overline{g}_{\alpha,\beta}^\gamma >0$. In particular, we provide a sharp bound for the first row and upper bounds for the remaining rows. Theorem \ref{thm:gamma1} gives a first step towards describing the set of partitions indexing the nonzero reduced Kronecker coefficients, that is, $\textrm{Supp}(\alpha,\beta)$. \begin{theorem}\label{thm:gamma1} Let $\alpha$ and $\beta$ be partitions, then \[ \maxsupp{\gamma_1} = |\alpha \cap \beta|+\max(\alpha_1,\beta_1). \] \end{theorem} From Theorem \ref{thm:gamma1}, we obtain that, given any three partitions $\mu$, $\nu$ and $\lambda$ of $n$ with $g_{\mu,\nu}^\lambda>0$, \[ \lambda_2\leq \min\left(\frac{n}{2}, |\bar{\mu}\cap \bar{\nu}| +\max(\mu_2,\nu_2)\right). \] Fix two partitions $\alpha$ and $\beta$. To prove Theorem \ref{thm:gamma1} we first prove an upper bound for all the rows of $\gamma$ whenever $\overline{g}_{\alpha,\beta}^\gamma>0$ (Theorem \ref{prop:otherparts}). For $\lambda$ a partition and $k$ a positive integer, let $E_k \lambda$ be the partition obtained from $\lambda$ by erasing its $k$--th part (or leaving $\lambda$ unchanged when it has fewer than $k$ parts). In particular $E_1 \lambda=\cut{\lambda}$. \begin{lemma}\label{lemma:H} Let $\alpha$, $\delta$, $\sigma$ and $\tau$ be partitions such that $c_{\delta,\sigma,\tau}^{\alpha} >0$. Let $i$ be a positive integer. Then there exists a set $A$ such that $D(\delta) \subset D(E_i \alpha) \cup A$ and $|A|+\sigma_i \leq \alpha_i$. \end{lemma} \begin{proof} By Eq.
\eqref{3LR}, since $c_{\delta, \sigma,\tau}^\alpha>0$, there exists a partition $\kappa$ such that $c_{\kappa,\tau}^{\alpha}>0$ and $c_{\delta,\sigma}^{\kappa}>0$. In particular $D(\delta) \subset D(\kappa) \subset D(\alpha)$. Let $S_i=\{(x,y)\,|\,x \geq 1 \text{ and }y \geq i\}$ and let $H=D(\delta)\setminus D(\cut{\kappa})$. Notice that $H$ is a horizontal strip consisting of all boxes of $D(\delta)$ having no box of $D(\kappa)$ above them, see Figure \ref{fig:strip} for an example. \begin{figure}[h] \includegraphics{strip.pdf} \caption{The horizontal strip $H$ (boxes with thick edges) for $\kappa=(10,6,3,2)$ (white and grey boxes) and $\delta=(8,4,1,1)$ (grey boxes). }\label{fig:strip} \end{figure} Let $A=S_i \cap H$; notice that this is the horizontal strip contained in $H$ strictly above the $(i-1)$-st row. We have \[ |A|=\kappa_i - \operatorname{width}( D(\kappa/\delta) \cap S_i ). \] On the other hand, since $c_{\delta,\sigma}^{\kappa}>0$, there exists a Littlewood--Richardson tableau with shape $\kappa/\delta$ and content $\sigma$. In this tableau, there is at most one occurrence of $i$ per column of $\kappa/\delta$, and they all lie in row $i$ or higher. Therefore, \[ \sigma_i \leq \operatorname{width}(D(\kappa/\delta) \cap S_i). \] As a consequence, \[ |A| +\sigma_i \leq \kappa_i. \] Since $D(\kappa) \subset D(\alpha)$ we conclude that $|A| +\sigma_i \leq \alpha_i$. Now by construction of $A$, \[ D(\delta) \cap S_i \subset \left(D(\cut{\kappa}) \cap S_i\right) \cup A \] and clearly $D(\delta) \setminus S_i \subset D(\kappa) \setminus S_i$. Therefore \[ D(\delta) \subset \left( D(\kappa) \setminus S_i\right) \cup \left( D(\cut{\kappa}) \cap S_i\right) \cup A. \] Finally, observe that $D(E_i \kappa)=\left( D(\kappa) \setminus S_i\right) \cup \left( D(\cut{\kappa}) \cap S_i\right)$.
Therefore, \[ D(\delta) \subset D(E_i \kappa) \cup A. \] Since $D(\kappa) \subset D(\alpha)$ we have $D(E_i \kappa) \subset D(E_i \alpha)$, and thus \[ D(\delta) \subset D(E_i \alpha) \cup A. \] \end{proof} \begin{theorem}\label{prop:otherparts} Let $\alpha$, $\beta$ and $\gamma$ be partitions such that $\overline{g}_{\alpha,\beta}^{\gamma}>0$, and let $i$, $j$ and $k$ be positive integers such that $i+j-1=k$. Then \[ \gamma_k \leq |E_i \alpha \cap E_j \beta | +\alpha_i + \beta_j. \] \end{theorem} \begin{proof} Let $i$ and $j$ be such that $i+j-1=k$. By Lemma \ref{prop:Littlewood}, there exist partitions $\delta$, $\epsilon$, $\zeta$, $\rho$, $\sigma$, $\tau$ such that all four coefficients $g_{\delta,\epsilon}^{\zeta}$, $c_{\delta,\sigma,\tau}^{\alpha}$, $c_{\epsilon,\rho,\tau}^{\beta}$, $c_{\zeta,\rho,\sigma}^{\gamma}$ are positive. By Eq. \eqref{3LR} and since $c_{\zeta,\rho,\sigma}^{\gamma}>0$, there exists a partition $\phi$ such that $c_{\zeta,\phi}^{\gamma}>0$ and $c_{\rho,\sigma}^{\phi}>0$. Weyl's inequalities for eigenvalues of Hermitian matrices (\cite{Weyl} or Eq. (2) in \cite{Fulton}) imply that whenever a Littlewood--Richardson coefficient $c_{\mu,\nu}^{\lambda}$ is nonzero, one has $\lambda_{p+q-1} \leq \mu_p + \nu_q$ for all $p$, $q$ (see \cite{Fulton}). Apply this to $c_{\zeta,\phi}^{\gamma}$ with $p=1$, $q=k$: we obtain $\gamma_k \leq \zeta_1+\phi_k$. Apply Weyl's inequalities to $c_{\rho,\sigma}^{\phi}$ with $p=j$, $q=i$: we obtain $\phi_k \leq \rho_j+\sigma_i$. It follows that $\gamma_k \leq \zeta_1+\sigma_i+\rho_j$. Since $g_{\delta,\epsilon}^{\zeta}>0$, we have $\zeta_1 \leq |\delta \cap \epsilon|$ by Proposition \ref{prop:Dvir}, so that \begin{equation}\label{eq:11} \gamma_k \leq |\delta \cap \epsilon|+\rho_j+\sigma_i. \end{equation} Since $c_{\delta,\sigma,\tau}^{\alpha}>0$, Lemma \ref{lemma:H} implies that there exists a set $A_1$ such that \[ D(\delta) \subset D(E_i \alpha) \cup A_1 \text{ and } |A_1| + \sigma_i \leq \alpha_i.
\] Similarly, since $c_{\epsilon,\rho,\tau}^{\beta}>0$, Lemma \ref{lemma:H} implies that there exists a set $A_2$ such that \[ D(\epsilon) \subset D(E_j \beta) \cup A_2 \text{ and } |A_2| + \rho_j \leq \beta_j. \] Therefore, \[ D(\delta \cap \epsilon) \subset D(E_i \alpha \cap E_j \beta) \cup A_1 \cup A_2. \] As a consequence, \[ |\delta \cap \epsilon| \leq |E_i \alpha \cap E_j \beta| + |A_1| + |A_2|. \] This together with \eqref{eq:11} yields \[ \gamma_k \leq |E_i \alpha \cap E_j \beta| + |A_1| + \sigma_i+|A_2|+\rho_j. \] Remembering that $|A_1| + \sigma_i \leq \alpha_i$ and $|A_2| + \rho_j \leq \beta_j$, we get the claimed inequality. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:gamma1}] The bound holds by Theorem \ref{prop:otherparts}, since $|E_1\alpha\cap E_1\beta|+\alpha_1+\beta_1=|\alpha\cap \beta|+\max(\alpha_1,\beta_1)$. Let us now show it is reached. Choose $\delta=\epsilon=\cut{\alpha} \cap \cut{\beta}$, and choose for $\zeta$ a partition such that $g_{\delta,\epsilon}^{\zeta}>0$ and $\zeta_1=|\delta \cap \epsilon|=|\cut{\alpha} \cap \cut{\beta}|$; such a partition exists by Proposition \ref{prop:Dvir}. Choose $\tau$ to be the empty partition. Choose $\sigma$ as follows: first, $\sigma_1=\alpha_1$. This will ensure that $c_{\delta,\sigma,\tau}^{\alpha}=c_{\delta,\sigma}^{\alpha}=c_{\delta,\cut{\sigma}}^{\cut{\alpha}}$. The Littlewood--Richardson coefficients $c_{\delta,\kappa}^{\cut{\alpha}}$ are the coefficients in the expansion of the non--zero skew--Schur function $s_{\cut{\alpha}/\delta}$ in the Schur basis, hence one of them has to be non--zero. Choose for $\cut{\sigma}$ one such partition $\kappa$ (observe that $D(\kappa) \subset D(\cut{\alpha})$, therefore $\kappa_1 \leq \alpha_2 \leq \alpha_1=\sigma_1$). Define $\rho$ similarly. Finally set $\gamma=\zeta+\sigma+\rho$. Then $c_{\zeta,\rho,\sigma}^{\gamma}>0$, so that $\overline{g}_{\alpha,\beta}^{\gamma}>0$ by Lemma \ref{prop:Littlewood}, and $\gamma_1=\zeta_1+\sigma_1+\rho_1=|\cut{\alpha}\cap\cut{\beta}|+\alpha_1+\beta_1$. \end{proof} \begin{theorem}[The maximum and minimum weight of partitions indexing nonzero reduced Kronecker coefficients]\label{minmaxgamma} Let $\alpha$ and $\beta$ be partitions.
We have: \begin{align*} \maxsupp{|\gamma|} &= |\alpha|+|\beta|,\\ \minsupp{|\gamma|} &= \max(|\alpha|,|\beta|)-|\alpha\cap\beta|. \end{align*} \end{theorem} \proof From Murnaghan's inequalities we know that $|\gamma|\le|\alpha|+|\beta|$ for all $\gamma \in \textrm{Supp}(\alpha,\beta)$. Moreover, this maximum is achieved: take $\gamma=\alpha+\beta$; then $c_{\alpha,\beta}^{\alpha+\beta}>0$, and $\overline{g}_{\alpha,\beta}^{\alpha+\beta}=c_{\alpha,\beta}^{\alpha+\beta}$ by the theorem of Littlewood and Murnaghan. To show the second bound, assume that $\overline{g}_{\alpha,\beta}^{\gamma}>0$. There exists $n$ such that $g_{\alpha[n],\beta[n]}^{\gamma[n]}=\overline{g}_{\alpha,\beta}^{\gamma}$. By Proposition \ref{prop:Dvir} we have that $ n-|\gamma| \leq |\alpha[n]\cap \beta[n]|. $ On the other hand, \[ |\alpha[n]\cap \beta[n]| = \min(n-|\alpha|,n-|\beta|)+|\alpha \cap \beta| = n-\max(|\alpha|,|\beta|)+|\alpha \cap \beta|. \] We conclude that $|\gamma| \geq \max(|\alpha|, |\beta|) -|\alpha\cap\beta|$. Again by Proposition \ref{prop:Dvir}, we know that there is a partition $\gamma$ for which $n-|\gamma| = |\alpha[n]\cap\beta[n]|$, hence this bound is sharp. \endproof \begin{corollary} \label{thm:globalboundlower} Let $\alpha$ and $\beta$ be partitions and $i$ and $j$ positive integers such that $k=i+j-1$. Then \[ \maxsupp{\gamma_k} \le \min\left( |E_i \alpha \cap E_j \beta | +\alpha_i + \beta_j,\left[\frac{ |\alpha|+|\beta|}{k}\right] \right). \] \end{corollary} \begin{proof} This is a straightforward consequence of Theorems \ref{prop:otherparts} and \ref{minmaxgamma}. \end{proof} \begin{exa} \label{ex:otherparts} Let $\alpha=(2)$ and $\beta=(4,3,2)$. In the table below, the first row gives the maximum values of $\gamma_k$ over $\textrm{Supp}(\alpha,\beta)$, and the second row the upper bounds given by Corollary \ref{thm:globalboundlower}.
\begin{center} \begin{tabular}{l|c|c|c|c|c} $k$ & 1 & 2 & 3 &4 &5 \\ \hline max values for $\gamma_k$ & 6&4&3&2&1\\ \hline bound for $\gamma_k$ &6&5&3&2&2 \end{tabular} \end{center} In the case that $\alpha=(3,1)$ and $\beta=(2,2)$ we get \begin{center} \begin{tabular}{l|c|c|c|c|c|c} $k$ & 1 & 2 & 3 &4 &5& 6 \\ \hline max values for $\gamma_k$ & 6&3&2&1&1&1\\ \hline bound for $\gamma_k$ & 6 & 4 & 2 & 2&1 &1 \end{tabular} \end{center} \end{exa} \section{Stability : The Kronecker coefficients}\label{sec:coeff} In this last section we consider linear upper bounds for $\operatorname{stab}(\alpha,\beta,\gamma)$. Previously known bounds, due to Brion \cite{Brion:Foulkes} and Vallejo \cite{Vallejo} respectively, are \begin{align*} M_B(\alpha,\beta;\gamma)&=|\alpha|+|\beta|+\gamma_1,\\ M_V(\alpha,\beta; \gamma)&=|\gamma|+\left\lbrace \begin{array}{ll} \max \{|\alpha|+\alpha_1-1,|\beta|+\beta_1-1,|\gamma|\} & \text{ if } \alpha \neq \beta \\ \max \{|\alpha|+\alpha_1,|\gamma|\} & \text{ if } \alpha=\beta \end{array} \right. \end{align*} We introduce Lemma \ref{lemma:Mf} that produces linear upper bounds for $\operatorname{stab}(\alpha, \beta, \gamma)$ from linear inequalities fulfilled by those $(\alpha,\beta,\gamma)$ for which $\overline{g}_{\alpha,\beta}^{\gamma} >0.$ Applying this lemma to different bounds derived in Section 3 and Section 4, we obtain two new upper bounds for $\operatorname{stab}(\alpha, \beta, \gamma)$, and recover Brion's bound $M_B$. \begin{lemma}\label{lemma:Mf} Let $f$ be a function on triples of partitions such that for all $i$, \[ f(\alpha,\beta,\bar{\gamma}) \geq f(\alpha,\beta,\gamma^{\dagger i}). \] Set $\mathcal{M}_f(\alpha,\beta,\gamma)=|\gamma|+f(\alpha,\beta,\bar{\gamma})$ and assume also that whenever $\overline{g}_{\alpha,\beta}^{\gamma} >0$, \begin{equation}\label{eq:condition} \mathcal{M}_f(\alpha,\beta,\gamma) \geq \max\left(|\alpha|+\alpha_1,|\beta|+\beta_1,|\gamma|+\gamma_1\right). 
\end{equation} Then whenever $\overline{g}_{\alpha,\beta}^{\gamma}>0$, \[ \operatorname{stab}(\alpha,\beta,\gamma) \leq \mathcal{M}_f(\alpha,\beta,\gamma). \] \end{lemma} \begin{proof} Let $\alpha$, $\beta$ and $\gamma$ be partitions such that $\overline{g}_{\alpha,\beta}^{\gamma}>0$. Let $n \geq \mathcal{M}_f(\alpha,\beta,\gamma)$. By Lemma \ref{theorem:g in gbar}, \begin{equation}\label{eq:g in gbar with gamma} g_{\alpha[n],\beta[n]}^{\gamma[n]} = \overline{g}_{\alpha,\beta}^{\gamma}+ \sum_{i=1}^{N}(-1)^{i}\, \overline{g}_{\alpha,\beta}^{(n-|\gamma|+1,\gamma^{\dagger i})} \end{equation} for some $N$. Since $n \geq \mathcal{M}_f(\alpha,\beta,\gamma)=|\gamma|+f(\alpha,\beta,\bar{\gamma})$, we have $n-|\gamma|+1 > f(\alpha,\beta,\bar{\gamma})$. Thus $n-|\gamma|+1 > f(\alpha,\beta,\gamma^{\dagger i})$ for all $i$. As a consequence, none of the partitions $\tau=(n-|\gamma|+1,\gamma^{\dagger i})$ fulfills $\mathcal{M}_f(\alpha,\beta,\tau) \geq |\tau|+\tau_1$. Indeed, for such a partition, $|\tau|+\tau_1=|\tau|+(n-|\gamma|+1)$ and $\mathcal{M}_f(\alpha,\beta,\tau)=|\tau|+f(\alpha,\beta,\gamma^{\dagger i})$. Hence, by \eqref{eq:condition}, all terms $\overline{g}_{\alpha,\beta}^{(n-|\gamma|+1,\gamma^{\dagger i})}$ in \eqref{eq:g in gbar with gamma} are zero. Therefore $g_{\alpha[n],\beta[n]}^{\gamma[n]}$ is equal to its stable value $\overline{g}_{\alpha,\beta}^{\gamma}$. We conclude that $\operatorname{stab}(\alpha,\beta,\gamma) \leq \mathcal{M}_f(\alpha,\beta,\gamma)$. \end{proof} Three functions $f$ such that \eqref{eq:condition} holds have already appeared in this paper. Each one gives a bound for $\operatorname{stab}(\alpha,\beta,\gamma)$. \begin{enumerate} \item Murnaghan's triangle inequalities (see Murnaghan's Theorem) and Theorem \ref{thm:gamma1} show that \eqref{eq:condition} holds for $f(\alpha,\beta,\tau)=|\alpha|+|\beta|-|\tau|$. Since $|\gamma|-|\bar{\gamma}|=\gamma_1$, the corresponding bound is $\mathcal{M}_f(\alpha,\beta,\gamma)=|\gamma|+|\alpha|+|\beta|-|\bar{\gamma}|=|\alpha|+|\beta|+\gamma_1$, so we recover Brion's bound $M_B$.
\item Theorem \ref{thm:gamma1} and Murnaghan's triangle inequalities also imply that \eqref{eq:condition} holds for $f(\alpha,\beta,\tau)=|\bar{\alpha} \cap \bar{\beta}|+\alpha_1+\beta_1$. The corresponding bound $\mathcal{M}_f$ is $M_1(\alpha,\beta,\gamma)=|\gamma|+|\bar{\alpha}\cap \bar{\beta}|+\alpha_1+\beta_1$. Hence, by Lemma \ref{lemma:Mf} and the symmetry of the Kronecker coefficients we obtain the proof of Theorem \ref{theorem:N1}. \item Theorem \ref{thm:global} shows that \eqref{eq:condition} holds for $f(\alpha,\beta,\tau)=\frac{1}{2}\,(|\alpha|+|\beta|+\alpha_1+\beta_1-|\tau|)$, which corresponds to $\mathcal{M}_f=M_2=\frac{1}{2}(|\alpha|+|\beta|+|\gamma|+\alpha_1+\beta_1+\gamma_1)$. The bound $N_2=\left[ M_2 \right]$ of Theorem \ref{theorem:N2} follows. \end{enumerate} Set $N_1(\alpha,\beta,\gamma)=\min\left\{ M_1(\alpha,\beta;\gamma), M_1(\alpha,\gamma;\beta), M_1(\beta,\gamma;\alpha) \right\}$ and define similarly $N_B$ and $N_V$ from $M_B$ and $M_V$. These are also upper bounds for $\operatorname{stab}(\alpha,\beta,\gamma)$. In the following proposition we show that the bound $N_1$ improves both Vallejo's bound $N_V$ and Brion's bound $N_B$. \begin{proposition} \label{prop:comparison} Let $\alpha$, $\beta$, $\gamma$ be partitions. Then $N_1(\alpha,\beta,\gamma) \leq N_B(\alpha,\beta,\gamma)$ and $N_1(\alpha,\beta,\gamma) \leq N_V(\alpha,\beta,\gamma)$. \end{proposition} \begin{proof} For all partitions $\alpha$, $\beta$, $\gamma$, we have \[ M_1(\alpha,\beta;\gamma)=|\gamma|+|\bar{\alpha}\cap \bar{\beta}|+\alpha_1+\beta_1 \leq |\gamma|+|\alpha|+\beta_1=M_B(\alpha,\gamma;\beta), \] since $|\bar{\alpha}\cap \bar{\beta}|+\alpha_1 \leq |\bar{\alpha}|+\alpha_1 =|\alpha|$. This is enough to conclude that $N_1(\alpha,\beta,\gamma) \leq N_B(\alpha,\beta,\gamma)$. We now prove that $N_1(\alpha,\beta,\gamma) \leq N_V(\alpha,\beta,\gamma)$.
It is enough to prove that for all partitions $\alpha$, $\beta$, $\gamma$ we have $M_1(\alpha,\beta;\gamma) \leq M_V(\alpha,\beta;\gamma)$. By symmetry of both bounds with respect to $\alpha$ and $\beta$, we can assume without loss of generality that $\alpha_1 \geq \beta_1$. We consider three cases: $\alpha = \beta$; $\alpha \subsetneq \beta$; $\alpha \not \subset \beta$. We show that in the first case $|\bar{\alpha}\cap \bar{\beta}|+\alpha_1+\beta_1 \leq |\alpha|+\alpha_1$ and that in the other two cases $|\bar{\alpha}\cap \bar{\beta}|+\alpha_1+\beta_1 \leq \max(|\alpha|+\alpha_1-1,|\beta|+\beta_1-1)$. Consider the case $\alpha = \beta$. Then $|\bar{\alpha}\cap \bar{\beta}|+\alpha_1+\beta_1=|\alpha|+\alpha_1$. Consider now the case $\alpha \subsetneq \beta$. Then $|\bar{\alpha}\cap \bar{\beta}|+\alpha_1=|\alpha|\leq |\beta|-1$. Therefore $|\bar{\alpha}\cap \bar{\beta}|+\alpha_1+\beta_1 \leq |\beta|+\beta_1-1$. Consider last the case when $\alpha \not \subset \beta$. In this case $|\bar{\alpha}\cap \bar{\beta}|+\beta_1=|\alpha\cap \beta| \leq |\alpha|-1$. Therefore $|\bar{\alpha}\cap \bar{\beta}|+\alpha_1+\beta_1 \leq |\alpha|+\alpha_1-1$. \end{proof} We have thus shown that $N_1$ improves the bounds $N_B$ and $N_V$. In the following two examples we compare $N_2$ to $N_B$ and $N_V$. \begin{exa}[Comparison of $N_2$ to $N_B$] Let $\alpha = (2,1)$ and $\beta=(3,1)$. If $\gamma =(3,1)$, then $N_B=10$ is greater than $N_2=9$; if $\gamma=(3,2,2)$, then $N_B=10$ and $N_2=11$. This shows that neither one is better than the other. \end{exa} \begin{exa}[Comparison of $N_2$ to $N_V$] Let $\alpha=(2,1)$, $\beta=(3,1)$ and $\gamma=(3,2,2)$. Then $N_2=11$ and $N_V=12$, hence $N_2<N_V$. On the other hand, if $\alpha=(3,2)$, $\beta=(3,1,1)$ and $\gamma=(6)$, then $N_V=13$ and $N_2=14$; in this case, $N_V<N_2$. This shows that neither $N_V$ nor $N_2$ is better than the other. Notice that the last example can be generalized as follows.
If $|\alpha|=|\beta|$ with $\alpha_1=\beta_1$ and $\gamma=(\gamma_1)$, then $N_V\leq N_2$. \end{exa} We conclude this section by applying our bounds to some interesting examples of Kronecker coefficients appearing in the literature. \begin{exa}[The Kronecker coefficients indexed by three hooks]\label{ex:three hooks} Our first example looks at the elegant situation where the three indexing partitions are hooks. Note that after deleting the first part of a hook we always obtain a one-column shape. Let $\alpha=(1^e)$, $\beta=(1^f)$ and $\gamma=(1^d)$ be the reduced partitions, with $d$, $e$ and $f$ positive. In Theorem 3 of \cite{Rosas:2001}, it was shown that Murnaghan's inequalities describe the stable value of the Kronecker coefficient ${g}_{\alpha[n],\beta[n]}^{\gamma[n]}$, \[ \overline{g}_{\alpha,\beta}^{\gamma}= (( e \le d+f)) (( d \le e+f)) (( f \le e+d)), \] where $((P))$ equals $1$ if the proposition $P$ is true, and $0$ otherwise. Moreover, $\operatorname{stab}(\alpha,\beta,\gamma)$ was actually computed in the proof of Theorem 3 of \cite{Rosas:2001}. It was shown that the Kronecker coefficient equals $1$ if and only if Murnaghan's inequalities hold, as well as the additional inequality $e+f \le d+2(n-d)-2$. This last inequality translates into \[ \operatorname{stab}(\alpha,\beta,\gamma) = \Big[ \frac{d+e+f+3}{2} \Big]=N_2(\alpha,\beta,\gamma). \] To summarize, for triples of hooks, Murnaghan's inequalities govern the value of the reduced Kronecker coefficients, and $N_2$ is a sharp bound. On the other hand, the bounds provided by $N_1$, $N_B$, and $N_V$ are not in general sharp. \flushright$\Box$ \end{exa} \begin{exa}[The Kronecker coefficients indexed by two two-row shapes] \label{ex:two two-row} After deleting the first part of a two-row partition we obtain a partition of length $1$. Let $\alpha$ and $\beta$ be one-row partitions.
We have: \begin{align*} N_1(\alpha,\beta,\gamma)&=\alpha_1+\beta_1+\gamma_1,\\ N_2(\alpha,\beta,\gamma)&=\alpha_1+\beta_1+\gamma_1+\left[ \frac{\gamma_2+\gamma_3}{2}\right]. \end{align*} It follows from \cite{Briand:Orellana:Rosas:Chamber} that when $\overline{g}_{\alpha,\beta}^{\gamma}>0$, \[ \operatorname{stab}(\alpha,\beta,\gamma)=\gamma_1-\gamma_3+\alpha_1+\beta_1. \] Neither $N_1$ nor $N_2$ is a sharp bound. Indeed, for $\overline{g}_{\alpha,\beta}^{\gamma}>0$ we have $\operatorname{stab}(\alpha, \beta, \gamma) <N_1$ if $\gamma_3>0$, and $\operatorname{stab}(\alpha, \beta, \gamma) <N_2$ if $\gamma_2 >0$. Moreover, $N_1<N_2$ when $\gamma_2+\gamma_3>1$. \flushright$\Box$ \end{exa} \begin{exa}[The Kronecker coefficients: One of the partitions is a two-row shape] The case when $\gamma$ has only one row, $\gamma=(p)$, was studied in \cite{Ballantine:Orellana}. It was shown there (Theorem 5.1) that \[ \operatorname{stab}(\alpha,\beta,(p) ) \le |\alpha|+\alpha_1+ 2\,p. \] Notice that this bound coincides with $\operatorname{stab}(\alpha,(p))$ after Theorem \ref{thm:global}. In this case, \[ N_1=p+|\bar{\alpha} \cap \bar{\beta}|+\alpha_1+\beta_1 \] is less than or equal to $N_2$. It is also mentioned in \cite{Ballantine:Orellana} that, for the case when $\alpha=\beta$, Vallejo's bound $N_V$ sometimes beats this bound (that is, $\operatorname{stab}(\alpha,\alpha)$), but not always. Indeed, when $\alpha=\beta$, $N_2$ coincides with $N_V$. \flushright$\Box$ \end{exa} The situation described in the previous example, where $\operatorname{stab}(\alpha, \beta)<N_V(\alpha, \beta, \gamma)$, raises the question of whether $\min(N_1, N_2)$ is always less than or equal to $\operatorname{stab}(\alpha, \beta)$ when $\overline{g}_{\alpha,\beta}^{\gamma}>0$.
This is indeed the case since, as a direct consequence of Theorem \ref{thm:g and g1}, $N_2 \leq |\alpha|+|\beta|+\alpha_1+\beta_1$. \begin{exa}[Vallejo's example]\label{exa:Vallejo example} In \cite{Vallejo} the case $\alpha=(3,2)$, $\beta=(2,2,1)$, $\gamma=(2,2)$ was considered. In this case $\operatorname{stab}(\alpha,\beta,\gamma)=10$, but \[ N_B(\alpha,\beta,\gamma)=N_V(\alpha,\beta,\gamma)=N_1(\alpha,\beta,\gamma)=11. \] Nevertheless, $N_2(\alpha,\beta,\gamma)=10$. \flushright$\Box$ \end{exa} \section*{Acknowledgments} We thank Ernesto Vallejo for pointing us to the reference \cite{Brion:Foulkes}, Ron King for pointing us to Littlewood's formula \eqref{eq:Littlewood}, and Richard Stanley for suggesting that we look at \cite{Kirillov:saturation}. We also thank John Stembridge for making freely available his Maple package SF \cite{Stembridge:SF}. \bibliographystyle{alpha}
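The numerical comparisons of the bounds in this section are easy to check by machine. The following Python sketch (the helper functions and their names are ours, not part of the paper) evaluates the symmetrized bounds $N_B$, $N_V$, $N_1$ and $N_2$ on Vallejo's example $\alpha=(3,2)$, $\beta=(2,2,1)$, $\gamma=(2,2)$:

```python
# Sanity check for the bound comparisons in this section.
# Helper names are ours (hypothetical), not from the paper.
from math import floor

def inter(a, b):
    """Intersection of two partitions: part-wise minimum (missing parts count as 0)."""
    return [min(x, y) for x, y in zip(a, b)]

def size(a):
    return sum(a)

def first(a):
    return a[0] if a else 0

def M_B(a, b, c):
    # Brion: M_B(a,b;c) = |a| + |b| + c_1
    return size(a) + size(b) + first(c)

def M_V(a, b, c):
    # Vallejo: M_V(a,b;c) = |c| + max(...), with a special case when a = b
    if a != b:
        m = max(size(a) + first(a) - 1, size(b) + first(b) - 1, size(c))
    else:
        m = max(size(a) + first(a), size(c))
    return size(c) + m

def M_1(a, b, c):
    # M_1(a,b;c) = |c| + |bar(a) cap bar(b)| + a_1 + b_1
    return size(c) + size(inter(a[1:], b[1:])) + first(a) + first(b)

def M_2(a, b, c):
    # M_2 = (|a| + |b| + |c| + a_1 + b_1 + c_1)/2, fully symmetric
    return (size(a) + size(b) + size(c) + first(a) + first(b) + first(c)) / 2

def sym(M, a, b, c):
    # N_X = min of M over the three ways of singling out one partition
    return min(M(a, b, c), M(a, c, b), M(b, c, a))

alpha, beta, gamma = [3, 2], [2, 2, 1], [2, 2]  # Vallejo's example
N_B = sym(M_B, alpha, beta, gamma)
N_V = sym(M_V, alpha, beta, gamma)
N_1 = sym(M_1, alpha, beta, gamma)
N_2 = floor(M_2(alpha, beta, gamma))
print(N_B, N_V, N_1, N_2)  # 11 11 11 10
```

The output matches Vallejo's example: $N_B=N_V=N_1=11$, while $N_2=10$ equals $\operatorname{stab}(\alpha,\beta,\gamma)$.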
https://arxiv.org/abs/1902.03313
Quasi-optimal and pressure robust discretizations of the Stokes equations by new augmented Lagrangian formulations
We approximate the solution of the stationary Stokes equations with various conforming and nonconforming inf-sup stable pairs of finite element spaces on simplicial meshes. Based on each pair, we design a discretization that is quasi-optimal and pressure robust, in the sense that the velocity $H^1$-error is proportional to the best $H^1$-error to the analytical velocity. This shows that such a property can be achieved without using conforming and divergence-free pairs. We bound also the pressure $L^2$-error, only in terms of the best approximation errors to the analytical velocity and the analytical pressure. Our construction can be summarized as follows. First, a linear operator acts on discrete velocity test functions, before the application of the load functional, and maps the discrete kernel into the analytical one. Second, in order to enforce consistency, we employ a new augmented Lagrangian formulation, inspired by Discontinuous Galerkin methods.
\section{Introduction} \label{S:introduction} We consider the discretization of the stationary Stokes equations \begin{equation} \label{Stokes-strong} -\mu \Lapl u + \Grad p = f \quad \text{and} \quad \Div u = 0 \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial \Omega \end{equation} with viscosity $\mu > 0$, in a bounded domain $\Omega \subseteq \mathbb{R}^d$, $d \in \{2,3\}$. According to the classical approach of Brezzi~\cite{Brezzi:74}, we approximate the analytical velocity $u$ and the analytical pressure $p$ by means of discrete spaces $V_h$ and $Q_h$, which are required to fulfill the so-called inf-sup condition. We additionally assume that $V_h$ and $Q_h$ are finite element spaces on a simplicial mesh of $\Omega$. To motivate our work, let us focus on the velocity $H^1$-error, i.e. the error between $u$ and the discrete velocity $u_h$, measured in the $H^1$-norm. We refer to~\cite[Chapter~5]{Boffi:Brezzi:Fortin.13} for the proof of the results listed hereafter. The C\'{e}a-type quasi-optimal estimate \begin{equation} \label{intro:div-free-est} \NormLeb{\Grad(u-u_h)}{} \leq c \inf_{w_h \in V_h} \NormLeb{\Grad(u-w_h)}{} \end{equation} is well known for standard discretizations (see~\eqref{Stokes-disc} and~\eqref{conforing-div-free-disc} below) with conforming and divergence-free pairs, i.e. under the assumptions $V_h \subseteq \SobH{}^\Dim$ and $\Div V_h = Q_h$. Such pairs have attracted growing interest in recent years; see \cite{Guzman.Neilan:14,Guzman.Neilan:18,Scott.Vogelius:85,Zhang:07} and the references therein. Owing to~\eqref{intro:div-free-est}, this class of discretizations seems particularly attractive, because it fully exploits, up to a constant, the approximation properties of the space $V_h$ in the $H^1$-norm. This prevents, in particular, the following issues.
For standard discretizations with general conforming pairs (see~\eqref{Stokes-disc} and~\eqref{conforing-disc} below) one typically has \begin{equation} \label{intro:conforming-est} \NormLeb{\Grad(u-u_h)}{} \leq c \left( \inf_{w_h \in V_h} \NormLeb{\Grad(u-w_h)}{} +\dfrac{1}{\mu} \inf_{q_h \in Q_h} \NormLeb{p-q_h}{} \right). \end{equation} Thus, if $\Div V_h \neq Q_h$, the right-hand side suggests that the velocity $H^1$-error may not be robust with respect to the pressure. This is indeed the case and such an effect is known in the literature as poor mass conservation. It becomes extreme for purely irrotational loads or for small values of the viscosity; see, for instance,~\cite{Linke:14}. Poor mass conservation discourages, in particular, the use of unbalanced pairs, i.e. pairs $V_h/Q_h$ such that the approximation power of $V_h$ in the $H^1$-norm is higher than the one of $Q_h$ in the $L^2$-norm; cf. Remark~\ref{R:unbalanced-pairs}. Recall also that, in the nonconforming case $V_h \nsubseteq \SobH{\Omega}^d$, estimates of the form \begin{equation} \label{intro:nonconforming-est} \Normh{u-u_h} \leq c\left( \inf_{w_h \in V_h} \Normh{u-w_h} + \dfrac{1}{\mu} \inf_{q_h \in Q_h} \NormLeb{p-q_h}{} + \Normtr{(u,p)}_h \right) \end{equation} are often derived. Here $\Normh{\cdot}$ is an extension of the $H^1$-norm to $\SobH{\Omega}^d + V_h$ and the semi-norm $\Normtr{\cdot}_h$ is defined on (a subspace of) $\SobH{\Omega}^d \times \Leb{\Omega}$. Since the lack of smoothness in $V_h$ is commonly compensated for by additional regularity of the load beyond $\SobD{\Omega}^d$, the semi-norm $\Normtr{\cdot}_h$ cannot be extended to $\SobH{\Omega}^d \times \LebH{\Omega}$ and potentially dominates the right-hand side of \eqref{intro:nonconforming-est} for rough solutions. Therefore, an estimate like~\eqref{intro:conforming-est} cannot be expected to hold, cf. Remark~\ref{R:smoothing-E}. Several techniques are available in the literature to deal with the above-mentioned difficulties.
The discretization of \cite[section~6]{Badia.Codina.Gudi.Guzman:14} and the general framework in~\cite{Veeser.Zanotti:18} indicate how to avoid the issue with $\Normtr{\cdot}_h$ for nonconforming pairs. The over-penalized augmented Lagrangian formulation of~\cite{Boffi.Lovadina:97} and the grad-div stabilization~\cite{Olshanskii.Reusken:04} may serve to mitigate the impact of poor mass conservation. More recently, Linke et al.~\cite{Lederer.Linke.Merdon.Schoberl:17,Linke:14,Linke.Matthies.Tobiska:16} proposed a class of discretizations, which differ from standard ones only in the treatment of the load and enjoy the following pressure robust upper bound \begin{equation} \label{intro-Linke-est} \Normh{u-u_h} \leq c \left( \inf_{w_h \in V_h} \Normh{u-w_h} + \Normtr{(u, 0)}_h \right) \end{equation} for several conforming and nonconforming pairs. In this paper, we show that the quasi-optimal and pressure robust estimate~\eqref{intro:div-free-est} is not a prerogative of conforming and divergence-free pairs, but can be achieved also by (carefully designed) discretizations, based on general inf-sup stable pairs. In this way, we combine the advantages of the various techniques listed above. We also bound the pressure $L^2$-error only in terms of the best approximation errors to the analytical velocity and to the analytical pressure. To the best of our knowledge, similar error bounds were previously obtained only in \cite{Verfuerth.Zanotti:18} in the rather specific case of the lowest-order nonconforming Crouzeix-Raviart pair \cite{Crouzeix.Raviart:73}. In particular, our results make unbalanced pairs a valuable option if one is more interested in the analytical velocity rather than in the analytical pressure. Our approach is guided by a few simple necessary conditions and builds on two main ingredients. First, we discretize the load with the help of an operator which maps $V_h$ into $\SobH{}^\Dim$ and discretely divergence-free into exactly divergence-free functions.
The importance of the latter property was first pointed out in \cite{Linke:14}. For this purpose, we solve local Stokes problems with Scott-Vogelius elements on a barycentric refinement of the mesh, see \cite{Guzman.Neilan:18,Qin:1994,Zhang:05}. Second, we discretize the weak form of the Laplace operator in a way inspired by Discontinuous Galerkin (DG) methods, in order to enforce the necessary consistency. The resulting discretization can be interpreted as a new augmented Lagrangian formulation, cf. Remark~\ref{R:connection-augmented-lagrangian}. The rest of the paper is organized as follows. In section~\ref{S:abstract-framework} we set up the abstract framework. In section~\ref{S:paradigmatic-discretization} we illustrate our construction by means of a model example. Various generalizations are then discussed in section~\ref{S:generalizations}. Finally, in section~\ref{S:numerics} we complement our theoretical findings with some numerical experiments. \section{Abstract framework} \label{S:abstract-framework} This section introduces an abstract discretization of \eqref{Stokes-strong} and the properties in which we are interested. Two basic results are also proved. We use standard notations for Lebesgue and Sobolev spaces. \subsection{Quasi-optimal discretizations} \label{SS:quasi-optimality} Let $\Omega \subseteq \mathbb{R}^d$, $d \in \{2,3\}$, be an open and bounded polytopic domain with Lipschitz-continuous boundary. The weak formulation of the stationary Stokes equations in $\Omega$, with viscosity $\mu > 0$ and load $f \in \SobD{}^d$, looks for $u \in \SobH{}^\Dim$ and $p \in \LebH{}$ such that \begin{equation} \label{Stokes-weak} \begin{alignedat}{2} &\forall v \in \SobH{}^\Dim& \qquad \mu \int_\Omega \Grad u \colon \Grad v - \int_\Omega p \Div v &= \left\langle f , v \right\rangle \\ &\forall q \in \LebH{}& \qquad \int_\Omega q \Div u &= 0 .
\end{alignedat} \end{equation} Here $\colon$ denotes the Euclidean scalar product of $d \times d$ tensors and $\left\langle \cdot, \cdot \right\rangle $ is the dual pairing of $\SobD{}^d$ and $\SobH{}^\Dim$. Due to the boundary condition on the analytical velocity $u$, the analytical pressure $p$ belongs to $\LebH{} := \{ q \in \Leb{} \mid \int_\Omega q = 0 \}$. Problem~\eqref{Stokes-weak} is uniquely solvable, according to \cite[Theorem~8.2.1]{Boffi:Brezzi:Fortin.13}. \begin{remark}[Alternative formulation] \label{R:alternative-formulation} Most of our subsequent results remain unchanged if the gradient is replaced by the symmetric gradient in the first equation of \eqref{Stokes-weak} and the homogeneous Neumann condition is imposed on (a portion of) $\partial \Omega$. The only remarkable difference is that a piecewise Korn's inequality may fail to hold for some of the nonconforming pairs mentioned in section~\ref{SS:nonconforming-pairs}, see~\cite{Arnold:93,Brenner:04}. This problem, however, can be overcome e.g. by an additional jump penalization in the spirit of \cite[Section~3.3]{Veeser.Zanotti:18b}. \end{remark} We consider discretizations that mimic the variational structure of problem~\eqref{Stokes-weak}. More precisely, we approximate $u$ and $p$ in finite-dimensional linear spaces $V_h$ and $Q_h$. We require $Q_h \subseteq \LebH{}$ and measure the pressure error in the $L^2$-norm $\NormLeb{\cdot}{}$. Instead, we allow for nonconforming discrete velocity spaces $V_h \nsubseteq \SobH{}^\Dim$. In order to measure the velocity error, we assume that an extension $\Normh{\cdot}$ of the $H^1$-norm $\NormLeb{\Grad \cdot}{}$ to $\SobH{}^\Dim + V_h$ is at our disposal. We replace the bilinear forms in~\eqref{Stokes-weak} with discrete surrogates $a_h: V_h \times V_h \to \mathbb{R}$ and $b_h : V_h \times Q_h \to \mathbb{R}$. Moreover, we let $E_h: V_h \to \SobH{}^\Dim$ be a linear operator.
Hence, we look for a discrete velocity $u_h \in V_h$ and a discrete pressure $p_h \in Q_h$ such that \begin{equation} \label{Stokes-disc} \begin{alignedat}{2} &\forall v_h \in V_h &\qquad \mu \,a_h(u_h, v_h) + b_h(v_h, p_h) &= \left\langle f , E_h v_h \right\rangle \\ &\forall q_h \in Q_h &\qquad b_h(u_h, q_h) &= 0 . \end{alignedat} \end{equation} To ensure that this problem is uniquely solvable, we assume hereafter that $a_h$ is coercive on $V_h$ and that the pair $V_h / Q_h$ is inf-sup stable, i.e. \begin{equation} \label{inf-sup-disc} \forall q_h \in Q_h \qquad \beta \NormLeb{q_h}{} \leq \sup_{v_h \in V_h} \dfrac{b_h(v_h, q_h)}{\Normh{v_h}} \end{equation} for some constant $\beta > 0$, see \cite[Corollary~4.2.1]{Boffi:Brezzi:Fortin.13}. Note, in particular, that the duality $\left\langle f , E_h v_h \right\rangle$ is well-defined for all $f \in \SobD{}^d$ and $v_h \in V_h$, also in the nonconforming case. We shall pay special attention to the following property, which guarantees that $(u_h, p_h)$ is a near-best approximation of $(u,p)$ in $V_h \times Q_h$. \begin{definition}[Quasi-optimality] \label{D:quasi-optimal} Denote by $(u,p)$ and $(u_h,p_h)$ the solutions of~\eqref{Stokes-weak} and~\eqref{Stokes-disc}, respectively, with load $f$ and viscosity $\mu$. We say that~\eqref{Stokes-disc} is a quasi-optimal discretization of~\eqref{Stokes-weak} when there is a constant $C \geq 1$ such that \begin{equation} \label{quasi-optimality} \mu \Normh{u-u_h} + \NormLeb{p-p_h}{} \leq C \left( \mu \inf_{w_h \in V_h} \Normh{u-w_h} + \inf_{q_h \in Q_h} \NormLeb{p-q_h}{} \right) \end{equation} for all $f \in \SobD{}^d$ and $\mu > 0$. We denote by $C_{\mathrm{qo}}$ the smallest such constant. 
\end{definition} According to~\cite[Theorem~5.2.5]{Boffi:Brezzi:Fortin.13}, the discretization~\eqref{Stokes-disc} is quasi-optimal if \begin{equation} \begin{gathered} \label{conforing-disc} V_h \subseteq \SobH{}^\Dim \qquad E_h = \mathrm{Id}_{V_h}\\ a_h(w_h, v_h) = \int_\Omega \Grad w_h \colon \Grad v_h \qquad b_h(v_h, q_h) = -\int_\Omega q_h \Div v_h \end{gathered} \end{equation} i.e. if $V_h/Q_h$ is a conforming pair and $a_h$, $b_h$ and $E_h$ are simple restrictions of their conforming counterparts in~\eqref{Stokes-weak}. In sections~\ref{S:paradigmatic-discretization} and \ref{S:generalizations} we show that quasi-optimality can be achieved also with nonconforming pairs and/or for different choices of $a_h$ and $E_h$. \begin{remark}[Smoothing by $E_h$] \label{R:smoothing-E} Since $V_h$ is finite-dimensional, the operator $E_h$ is bounded and the solution of~\eqref{Stokes-disc} depends continuously on the $H^{-1}$-norm of $f$. This property, in turn, prevents the issue pointed out in the introduction concerning the semi-norm $\Normtr{\cdot}_h$ in~\eqref{intro:nonconforming-est}. Of course, such an observation is of practical interest only if the norm of $E_h$ is of moderate size, so that it does not affect too much the stability constant of~\eqref{Stokes-disc}. We call $E_h$ a ``smoothing'' operator, because it increases the smoothness of the elements of $V_h$ whenever $V_h \nsubseteq \SobH{}^\Dim$. For conforming pairs, one can let $E_h$ be the identity as in~\eqref{conforing-disc}. This choice is compatible with quasi-optimality but possibly not pressure robust; compare with section~\ref{SS:quasi-optimality-press-robustness} below.
\end{remark} \begin{remark}[Computational feasibility] \label{R:computational-feasibility} It is highly desirable that there are bases $\{ \varphi_1, \dots, \varphi_N \}$ and $\{ \psi_1, \dots, \psi_M \}$ of $V_h$ and $Q_h$, respectively, such that the scalars \begin{equation*} \label{computational-feasibility} a_h(\varphi_i, \varphi_j) \qquad b_h(\varphi_i, \psi_k) \qquad \left\langle f, E_h \varphi_i \right\rangle \end{equation*} can be computed or approximated, up to a prescribed tolerance, with $O(1)$ operations, for all $i,j = 1,\dots, N$ and $k=1,\dots, M$. This ``computational feasibility'' is not necessary for quasi-optimality but guarantees that the solution of~\eqref{Stokes-disc} can be computed with optimal complexity. \end{remark} \subsection{Quasi-optimal and pressure robust discretizations} \label{SS:quasi-optimality-press-robustness} The analytical velocity $u$ solving~\eqref{Stokes-weak} can be equivalently characterized as the solution of an elliptic problem. In fact, the second equation imposes that $u$ is divergence-free or, in other words, that it is an element of the kernel \begin{equation*} \label{kernel} Z := \{ z \in \SobH{}^\Dim \mid \Div z = 0 \}. \end{equation*} Then, testing the first equation with an arbitrary element of $Z$, we obtain the reduced problem \begin{equation} \label{Stokes-reduced} \forall z \in Z \qquad \mu \int_\Omega \Grad u \colon \Grad z = \left\langle f, z \right\rangle \end{equation} which is uniquely solvable, according to the Lax-Milgram lemma and the Friedrichs inequality. The same structure can be observed at the discrete level. To see this, we first introduce the discrete divergence $\Divdisc: V_h \to Q_h$ by \begin{equation} \label{divergence-disc} \forall q_h \in Q_h \qquad \int_\Omega q_h \Divdisc v_h = - b_h(v_h, q_h) \end{equation} for all $v_h \in V_h$. The second equation of~\eqref{Stokes-disc} imposes that $u_h$ is discretely divergence-free, i.e.
it is an element of the discrete kernel \begin{equation*} \label{kernel-discrete} Z_h := \{ z_h \in V_h \mid \Divdisc z_h = 0 \}. \end{equation*} Then, testing the first equation with an arbitrary element of $Z_h$, we derive the discrete reduced problem \begin{equation} \label{Stokes-reduced-disc} \forall z_h \in Z_h \qquad \mu\, a_h(u_h, z_h) = \left\langle f, E_h z_h \right\rangle \end{equation} which is uniquely solvable, since $a_h$ is coercive on $V_h$. In the vein of~\cite[Remark~2.1]{Brezzi:74}, it is worth recalling that this is a (possibly) nonconforming discretization of \eqref{Stokes-reduced}, because $Z_h$ may fail to be a subspace of $Z$, even if $V_h \subseteq \SobH{}^\Dim$. As in Definition~\ref{D:quasi-optimal}, we will be interested in whether $u_h$ is a near-best approximation of $u$ in $Z_h$. This actually amounts to asking whether $u_h$ is near-best in $V_h$, because the inf-sup condition~\eqref{inf-sup-disc} implies \begin{equation} \label{best-errors-Vh-Zh} \inf_{z_h \in Z_h} \Normh{u-z_h} \leq \left( 1 + \beta^{-1} \right) \inf_{w_h \in V_h} \Normh{u-w_h} \end{equation} according to~\cite[Proposition~5.1.3]{Boffi:Brezzi:Fortin.13} and~\cite[Lemma~2.1]{Pyo.Nochetto:05}. \begin{definition}[Quasi-optimality and pressure robustness] \label{D:quasi-optimality-press-robust} Denote by $u$ and $u_h$ the solutions of~\eqref{Stokes-reduced} and \eqref{Stokes-reduced-disc}, respectively, with load $f$ and viscosity $\mu$. We say that~\eqref{Stokes-disc} is a quasi-optimal and pressure robust discretization of~\eqref{Stokes-weak} when there is a constant $C \geq 1$ such that \begin{equation} \label{quasi-optimal-press-robust} \Normh{u-u_h} \leq C \inf_{w_h \in V_h} \Normh{u-w_h} \end{equation} for all $f \in \SobD{}^d$ and $\mu > 0$. We denote by $\mathrm{C_{\mathrm{qopr}}}$ the smallest such constant.
\end{definition} Problem~\eqref{Stokes-reduced} reveals that the analytical velocity $u$ is independent of the pressure $p$ and depends on the load $f$ only through its restriction to $Z$. This implies, for instance, that $u$ is invariant with respect to irrotational perturbations of $f$, see Linke~\cite{Linke:14}. The near-best estimate \eqref{quasi-optimal-press-robust} guarantees that $u_h$ reproduces such an invariance property at the discrete level and justifies the designation ``pressure robust''. The discretization~\eqref{Stokes-disc} is known to be quasi-optimal and pressure robust if \begin{equation} \begin{gathered} \label{conforing-div-free-disc} V_h \subseteq \SobH{}^\Dim \qquad \Div V_h = Q_h \qquad E_h = \mathrm{Id}_{V_h}\\ a_h(w_h, v_h) = \int_\Omega \Grad w_h \colon \Grad v_h \qquad b_h(v_h, q_h) = -\int_\Omega q_h \Div v_h \end{gathered} \end{equation} i.e. if $V_h/Q_h$ is a conforming and divergence-free pair and $a_h$, $b_h$ and $E_h$ are simple restrictions of their continuous counterparts in \eqref{Stokes-weak}. In fact, in this case, we have $Z_h \subseteq Z$ and~\eqref{Stokes-reduced-disc} is a conforming Galerkin discretization of~\eqref{Stokes-reduced}. Therefore, C\'{e}a's lemma and~\eqref{best-errors-Vh-Zh} imply $\mathrm{C_{\mathrm{qopr}}} \leq (1+ \beta^{-1})$. It is our purpose to show that quasi-optimality and pressure robustness can be achieved also by discretizations other than~\eqref{conforing-div-free-disc}. \subsection{Necessary consistency conditions} \label{SS:necessary-consistency} The left- and the right-hand sides of \eqref{quasi-optimality} are seminorms on $Z \times \LebH{}$ and the kernel of the latter is $(Z \cap Z_h) \times Q_h$, as a consequence of \eqref{best-errors-Vh-Zh}. Quasi-optimality actually prescribes that such seminorms are equivalent, because the converse of \eqref{quasi-optimality} immediately follows from the inclusion $(u_h, p_h) \in Z_h \times Q_h$.
Hence, a simple necessary condition is that the kernels of the two seminorms coincide. In other words, whenever the solution $(u,p)$ of \eqref{Stokes-weak} is in $Z_h \times Q_h$, it must also solve \eqref{Stokes-disc}. This is an algebraic consistency condition, which can be rephrased in terms of the forms $a_h$ and $b_h$ and of the operator $E_h$, in the spirit of~\cite[Definition~2.7]{Veeser.Zanotti:18}. \begin{lemma}[Consistency for quasi-optimality] \label{L:quasi-optimal-nec} Assume that~\eqref{Stokes-disc} is a quasi-optimal discretization of \eqref{Stokes-weak}. Then, necessarily we have \begin{subequations} \label{quasi-optimal-nec} \begin{alignat}{2} \label{quasi-optimal-nec-div} &\forall v_h \in V_h, \, p \in Q_h &\qquad &\int_\Omega p ( \Divdisc v_h - \Div E_h v_h ) = 0 \intertext{and}\label{quasi-optimal-nec-lapl} &\forall u \in Z \cap Z_h, \, v_h \in V_h &\qquad& a_h(u, v_h) = \int_\Omega \Grad u \colon \Grad E_h v_h. \end{alignat} \end{subequations} \end{lemma} \begin{proof} Denote by $(u,p)$ the solution of~\eqref{Stokes-weak} and assume first $u=0$ and $p \in Q_h$. Quasi-optimality implies that the solution $(u_h, p_h)$ of~\eqref{Stokes-disc} satisfies $u_h = 0$ and $p_h = p$. Comparing the first equations of \eqref{Stokes-weak} and~\eqref{Stokes-disc}, we derive the identity $b_h(v_h, p) = -\int_\Omega p \Div E_h v_h $ for all $v_h \in V_h$. Condition~\eqref{quasi-optimal-nec-div} then follows from the definition of $\Divdisc$ in~\eqref{divergence-disc}. Next, assume $u \in Z \cap Z_h$ and $p=0$. Since quasi-optimality implies $u_h = u$ and $p_h = 0$, condition \eqref{quasi-optimal-nec-lapl} can be derived comparing the first equations of~\eqref{Stokes-weak} and~\eqref{Stokes-disc} as before. \end{proof} The conforming discretization~\eqref{conforing-disc} is a simple option to fulfill~\eqref{quasi-optimal-nec}, but not the only possible one.
Examples with nonconforming discrete velocity space can be found in~\cite[Section~6]{Badia.Codina.Gudi.Guzman:14} and \cite{Verfuerth.Zanotti:18}. Standard nonconforming discretizations, like the one of Crouzeix and Raviart~\cite{Crouzeix.Raviart:73}, do not fulfill~\eqref{quasi-optimal-nec}, because they do not employ a smoothing operator. It is also worth noticing that~\eqref{quasi-optimal-nec} involves the interplay of $a_h$ and $b_h$ with $E_h$. This indicates that the discretization of the differential operator in~\eqref{Stokes-strong} and the one of the corresponding load should not be regarded as independent tasks. Proceeding similarly as in Lemma~\ref{L:quasi-optimal-nec}, we derive necessary conditions for quasi-optimality and pressure robustness. \begin{lemma}[Consistency for quasi-optimality and pressure robustness] \label{L:quasi-optimal-press-robust-nec} Assume that~\eqref{Stokes-disc} is a quasi-optimal and pressure robust discretization of~\eqref{Stokes-weak}. Then, necessarily we have \begin{subequations} \label{quasi-optimal-press-robust-nec} \begin{equation} \label{quasi-optimal-press-robust-nec-div} E_h (Z_h) \subseteq Z \end{equation} and \begin{equation} \label{quasi-optimal-press-robust-nec-lapl} \forall u \in Z \cap Z_h, \, z_h \in Z_h \qquad a_h(u, z_h) = \int_\Omega \Grad u \colon \Grad E_h z_h. \end{equation} \end{subequations} \end{lemma} \begin{proof} Let $z_h \in Z_h$ be such that $\Div E_h z_h \neq 0$. Assuming that $(u,p) = (0, \Div E_h z_h)$ solves \eqref{Stokes-weak}, we infer $\left\langle f, E_h z_h \right\rangle = -\NormLeb{\Div E_h z_h}{}^2 \neq 0$. Inserting this information in~\eqref{Stokes-reduced-disc}, we obtain $u_h \neq 0$. Therefore, we have $\Normh{u-u_h} > \inf_{v_h \in V_h} \Normh{u-v_h} = 0$, which contradicts quasi-optimality and pressure robustness. This proves \eqref{quasi-optimal-press-robust-nec-div}. 
Assertion \eqref{quasi-optimal-press-robust-nec-lapl} may be checked similarly to~\eqref{quasi-optimal-nec-lapl} in Lemma~\ref{L:quasi-optimal-nec}. \end{proof} Condition~\eqref{quasi-optimal-press-robust-nec-lapl} is clearly necessary for~\eqref{quasi-optimal-nec-lapl}, while \eqref{quasi-optimal-press-robust-nec-div} is neither necessary nor sufficient for~\eqref{quasi-optimal-nec-div}. We also mention that \eqref{quasi-optimal-press-robust-nec-div} differs from the condition exploited in~\cite{Linke.Matthies.Tobiska:16} to achieve pressure robustness, in that here $E_h$ is required to map into $\SobH{}^\Dim$ and not only into $\Hdiv{}$, cf. Remark~\ref{R:smoothing-E}. \begin{remark}[Failure of $E_h=\mathrm{Id}_{V_h}$] \label{R:failure-identity} If $V_h/Q_h$ is a conforming and divergence-free pair, the abstract discretization~\eqref{Stokes-disc} with \eqref{conforing-div-free-disc} verifies the first necessary condition in Lemma~\ref{L:quasi-optimal-press-robust-nec}. If, instead, the pair is conforming but not divergence-free, we have $Z_h \nsubseteq Z$. In this case, the operator $E_h$ cannot coincide with the identity on $Z_h$. \end{remark} In the next sections, we design some new discretizations proceeding as follows. Given an inf-sup stable pair $V_h/Q_h$, together with the corresponding bilinear form $b_h$, we construct $a_h$ and $E_h$ so that the necessary conditions in Lemmas~\ref{L:quasi-optimal-nec} and~\ref{L:quasi-optimal-press-robust-nec} hold true. Then, we use standard techniques from the analysis of saddle point problems to verify~\eqref{quasi-optimality} and~\eqref{quasi-optimal-press-robust} and to bound the constants $C_{\mathrm{qo}}$ and $\mathrm{C_{\mathrm{qopr}}}$. Alternatively, one could exploit~\cite[Theorem~4.14]{Veeser.Zanotti:18}, which guarantees that~\eqref{quasi-optimal-press-robust-nec} is a sufficient condition for quasi-optimality and pressure robustness. That result also provides a formula for $\mathrm{C_{\mathrm{qopr}}}$.
Analogously, generalizing the framework of \cite{Veeser.Zanotti:18}, one could also show that \eqref{quasi-optimal-nec} is a sufficient condition for quasi-optimality and derive a formula for $C_{\mathrm{qo}}$. We prefer to proceed as indicated, to make sure this paper can be read independently of \cite{Veeser.Zanotti:18}. \section{A paradigmatic discretization} \label{S:paradigmatic-discretization} Assume that we are given an inf-sup stable pair $V_h/Q_h$, together with the corresponding bilinear form $b_h$. A possible strategy to fulfill the necessary conditions~\eqref{quasi-optimal-nec-div} and \eqref{quasi-optimal-press-robust-nec-div} is to employ a ``divergence-preserving'' smoothing operator, i.e. \begin{equation} \label{conservation-divergence} \forall v_h \in V_h \qquad \Div E_h v_h = \Divdisc v_h. \end{equation} Once such an operator is given, conditions~\eqref{quasi-optimal-nec-lapl} and~\eqref{quasi-optimal-press-robust-nec-lapl} prescribe the restriction of $a_h$ to $(Z \cap Z_h) \times V_h$. Then, inspired by~\cite{Arnold:82} and~\cite{Veeser.Zanotti:18b}, we extend the resulting form to $V_h\times V_h$, in a way that additionally ensures symmetry and coercivity. In order to keep the exposition as clear as possible, we first exemplify this idea in a model setting. We postpone various generalizations to the next section. \subsection{The unbalanced $\Poly{\ell} / \Poly{\ell-2}$ pair} \label{SS:Pl-Pl-2-pair} We consider hereafter pairs of finite element spaces on a face-to-face simplicial mesh $\mathcal{M}$ of $\Omega$ in the sense of~\cite[Definition 1.36]{DiPietro.Ern:12}. We write $c$ for a nondecreasing and nonnegative function of the shape parameter of $\mathcal{M}$, which may also depend on other quantities (like, e.g., the space dimension), but neither on other properties of $\mathcal{M}$ nor on the viscosity $\mu$. Such a constant may change at each occurrence.
We occasionally abbreviate $a \leq c b $ as $a \lesssim b$ and $c^{-1} b \leq a \leq c b$ as $a \eqsim b$. For all integers $\ell \geq 0$, we denote by $\Poly{\ell}(S)$ the space of polynomials with total degree $\leq \ell$ on a simplex $S \subseteq \mathbb{R}^d$. The space of $H^k$-conforming element-wise polynomials on $\mathcal{M}$ then reads \begin{equation} \label{elementwise-polynomials} \Polypiec{\ell}{k} := \{ v \in H^k(\Omega) \mid \forall K \in \mathcal{M} \;\; v_{|K} \in \Poly{\ell}(K) \} \end{equation} with $k \in \{0,1\}$ and the convention $H^0(\Omega) := \Leb{}$. Motivated by the homogeneous boundary condition in \eqref{Stokes-strong}, we consider the subspaces \begin{equation} \label{elementwise-polynomials-sub} \Polybnd{\ell}{1} := \Polypiec{\ell}{1} \cap \SobH{} \qquad \text{and} \qquad \Polyavg{\ell}{k} := \Polypiec{\ell}{k} \cap \LebH{}. \end{equation} To exemplify our construction, we assume $d=2$ for the remaining part of this section. We consider the conforming $\Poly{\ell} / \Poly{\ell-2}$ pair, which is given by \begin{equation} \label{Pl-Pl-2-pair} V_h = (\Polybnd{\ell}{1})^2 \qquad \text{and} \qquad Q_h = \Polyavg{\ell-2}{0}, \qquad b_h(v_h, q_h) = -\int_\Omega q_h \Div v_h \end{equation} with $\ell \geq 2$. The inf-sup condition~\eqref{inf-sup-disc} holds with $\beta^{-1} \leq c$, see \cite[Remark~8.6.2]{Boffi:Brezzi:Fortin.13}. \begin{remark}[Unbalanced pairs] \label{R:unbalanced-pairs} The $\Poly{\ell}/\Poly{\ell-2}$ pair is unbalanced, in the sense that the approximation power $\ell-1$ of the discrete pressure space in the $L^2$-norm is strictly less than the approximation power $\ell$ of the discrete velocity space in the $H^1$-norm. Other examples can be obtained enriching the velocity space of any inf-sup stable pair. 
The use of conforming unbalanced pairs, in combination with the standard discretization~\eqref{conforing-disc}, is discouraged by the error estimate~\eqref{intro:conforming-est} and Remark~\ref{R:failure-identity}; see also \cite[Remark~8.6.2]{Boffi:Brezzi:Fortin.13}. Still, quasi-optimal and pressure robust discretizations based on such pairs would be a valuable option if one is more interested in the analytical velocity than in the analytical pressure. \end{remark} The discrete divergence $\Divdisc$ in the $\Poly{\ell}/\Poly{\ell-2}$ pair coincides with the $L^2$-orthogonal projection of the analytical divergence onto $\Polyavg{\ell-2}{0}$. Since~\eqref{divergence-disc} actually holds for all discrete pressures in $\Polypiec{\ell-2}{0}$, we can compute $\Divdisc$ element-wise as follows \begin{equation} \label{divergence-disc-Pl-Pl-2} \Divdisc v_h = \Pi^K_{\ell-2} \Div v_h \quad \text{in} \,\, K \end{equation} for all $v_h \in (\Polybnd{\ell}{1})^2$ and $K \in \mathcal{M}$, where $\Pi^K_{\ell-2}$ is the $L^2$-orthogonal projection onto $\Poly{\ell-2}(K)$. Therefore, denoting by $Z_h^{ub}$ the discrete kernel, we conclude $Z_h^{{ub}} \nsubseteq Z$.\footnote{The superscript ``$ub$'' stands for ``unbalanced''. Throughout this section, we use it to label spaces, forms and operators related to the $\Poly{\ell}/\Poly{\ell-2}$ pair.} This confirms that the $\Poly{\ell}/\Poly{\ell-2}$ pair is conforming but not divergence-free. The abstract discretization~\eqref{Stokes-disc} with~\eqref{conforing-disc}, based on the $\Poly{\ell}/\Poly{\ell-2}$ pair, reads as follows: find $u_h \in (\Polybnd{\ell}{1})^2$ and $p_h \in \Polyavg{\ell-2}{0}$ such that \begin{equation} \label{Stokes-Pl-Pl-2-standard} \begin{alignedat}{2} &\forall v_h \in (\Polybnd{\ell}{1})^2 &\qquad \mu \,\int_\Omega \Grad u_h \colon \Grad v_h -\int_\Omega p_h \Div v_h &= \left\langle f , v_h \right\rangle \\ &\forall q_h \in \Polyavg{\ell-2}{0} &\qquad \int_\Omega q_h \Div u_h &= 0 .
\end{alignedat} \end{equation} \subsection{Local inversion of the divergence} \label{SS:local-inversion-divergence} Proceeding as in~\cite{Verfuerth.Zanotti:18}, we enforce \eqref{conservation-divergence} with the help of local right inverses of the divergence. Such operators can be defined through discrete Stokes-like problems on the barycentric refinement of each element. To see this, fix $K \in \mathcal{M}$ and let $\Mesh_K$ denote the triangulation of $K$ obtained by connecting each vertex with the barycenter; cf. Figure~\ref{F:barycentric-refinement}. For $\ell \in \mathbb{N}$, we define the local spaces \begin{equation*} \label{local-spaces} \Polybnd{\ell}{1}(\Mesh_K) \qquad \text{and} \qquad \Polyavg{\ell-1}{0}(\Mesh_K) \end{equation*} on $\Mesh_K$ similarly to the global spaces $\Polybnd{\ell}{1}$ and $\Polyavg{\ell-1}{0}$ in \eqref{elementwise-polynomials-sub}. In particular, all $v_K \in \Polybnd{\ell}{1}(\Mesh_K)$ vanish on $\partial K$ and all $q_K \in \Polyavg{\ell-1}{0}(\Mesh_K)$ are such that $\int_K q_K = 0$. The pair $\Polybnd{\ell}{1}(\Mesh_K)^2 /\Polyavg{\ell-1}{0} (\Mesh_K)$ is conforming and divergence-free in $K$.
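The data fed to the local inversion developed below are $L^2$-projection errors of the divergence, cf.~\eqref{divergence-disc-Pl-Pl-2}, and such errors have zero mean on each element. As a purely illustrative aside (our addition, with the interval $(0,1)$ standing in for an element $K$ and a shifted Legendre basis; this is not part of the method's implementation), the following Python sketch computes an element-wise $L^2$-orthogonal projection and checks that it reproduces polynomials and preserves the element mean:

```python
import numpy as np
from numpy.polynomial import legendre

# 1D stand-in for the element-wise L^2-orthogonal projection Pi^K_{l-2}:
# project onto polynomials of degree <= k on K = (0, 1), using Gauss
# quadrature and the shifted Legendre basis P_j(2x - 1).
def l2_project(f, k, nq=12):
    """Return the coefficients of the projection in the shifted basis."""
    x, w = legendre.leggauss(nq)           # nodes/weights on (-1, 1)
    x01, w01 = (x + 1) / 2, w / 2          # mapped to (0, 1)
    return np.array([(2 * j + 1) * np.sum(w01 * f(x01)
                     * legendre.Legendre.basis(j)(2 * x01 - 1))
                     for j in range(k + 1)])

def evaluate(coeffs, x):
    return sum(c * legendre.Legendre.basis(j)(2 * x - 1)
               for j, c in enumerate(coeffs))

# The projection reproduces polynomials of degree <= k ...
f = lambda x: 3 * x**2 - x + 1
xs = np.linspace(0, 1, 5)
assert np.allclose(evaluate(l2_project(f, 2), xs), f(xs))

# ... and preserves the element mean even when it is not exact; this is
# the zero-mean property of the projection error exploited later on.
g = lambda x: x**3
x, w = legendre.leggauss(12)
x01, w01 = (x + 1) / 2, w / 2
proj_mean = np.sum(w01 * evaluate(l2_project(g, 1), x01))
assert np.isclose(proj_mean, np.sum(w01 * g(x01)))
```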
\begin{figure}[ht] \centering \begin{tikzpicture} \coordinate (z1) at (0,0); \coordinate (z2) at (2,0); \coordinate (z3) at (1,1.5); \coordinate (c1) at (1, 0.5); \path (z1) edge (z2); \path (z2) edge (z3); \path (z3) edge (z1); \coordinate (z4) at (4,0); \coordinate (z5) at (6,0); \coordinate (z6) at (5,1.5); \coordinate (c2) at (5, 0.5); \path (z4) edge (z5); \path (z5) edge (z6); \path (z6) edge (z4); \path[dashed] (z4) edge (c2); \path[dashed] (z5) edge (c2); \path[dashed] (z6) edge (c2); \end{tikzpicture} \caption{Generic element $K \in \mathcal{M}$ (left) and barycentric refinement $\Mesh_K$ (right).} \label{F:barycentric-refinement} \end{figure} According to~\cite[Theorem~3.1]{Guzman.Neilan:18}, we have the local inf-sup stability \begin{equation} \label{local-inf-sup} \forall q_K \in \Polyavg{\ell-1}{0}(\Mesh_K) \qquad \NormLeb{q_K}{K} \leq c \sup_{v_K \in \Polybnd{\ell}{1}(\Mesh_K)^2} \dfrac{\int_K q_K \Div v_K}{\NormLeb{\Grad v_K}{K}}. \end{equation} This entails that we can define a linear operator $R_\ell^{K}: \Leb{\Omega} \to \SobH{}^2$ as follows. Given $q \in \Leb{}$, let $u_K = u_K(q) \in \Polybnd{\ell}{1}(\Mesh_K)^2$ and $p_K = p_K(q) \in \Polyavg{\ell-1}{0}(\Mesh_K)$ solve \begin{equation} \label{local-problem-div} \begin{alignedat}{2} &\forall v_K \in \Polybnd{\ell}{1}(\Mesh_K)^2 &\qquad \int_K \Grad u_K \colon \Grad v_K - \int_K p_K \Div v_K &= 0\\ &\forall q_K \in \Polyavg{\ell-1}{0}(\Mesh_K) &\quad \; \int_K q_K \Div u_K &= \int_K q_K q. \end{alignedat} \end{equation} Hence, we set \begin{equation*} \label{local-right-inverse} R_\ell^{K} q := u_K \quad \text{in} \;\; K \qquad \text{and} \qquad R_\ell^{K} q := 0 \quad \text{in} \;\; \Omega \setminus K. \end{equation*} \begin{proposition}[Local right inverses] \label{P:local-right-inverse} Let $K \in \mathcal{M}$ be a mesh element and $\ell \in \mathbb{N}$. 
The operator $R_\ell^{K}$ is well-defined and, for all $q \in \Leb{}$, we have \begin{subequations} \label{local-right-inverse-prop} \begin{equation} \label{local-right-inverse-prop-stab} \NormLeb{\Grad R_\ell^{K} q}{} \leq c \NormLeb{q}{K} \end{equation} and \begin{equation} \label{local-right-inverse-prop-div} q_{|K} \in \Polyavg{\ell-1}{0}(\Mesh_K) \quad \Longrightarrow \quad \Div R_\ell^{K} q = q \quad \text{in} \;\; K \end{equation} \end{subequations} \end{proposition} \begin{proof} The operator $R_\ell^{K}$ is well-defined and satisfies~\eqref{local-right-inverse-prop-stab} in view of the local inf-sup~\eqref{local-inf-sup} and~\cite[Corollary~4.2.1]{Boffi:Brezzi:Fortin.13}. The property in~\eqref{local-right-inverse-prop-div} directly follows from the second equation of problem~\eqref{local-problem-div}, because $\Div u_K \in \Polyavg{\ell-1}{0}(\Mesh_K) $. \end{proof} \begin{remark}[Computation of the local right inverses] \label{R:computation-rightinv} In what follows, we shall need to compute $R_\ell^{K} q$ for all $K \in \mathcal{M}$ and various $q \in \Polypiec{\ell-1}{0}$. To this end, a possible strategy is to precompute the solution of~\eqref{local-problem-div} on a reference triangle $K_\mathrm{ref}$, for all possible loads $q_\mathrm{ref}$ in a basis of $\Poly{\ell-1}(K_\mathrm{ref})$. The computational complexity of this task only depends on $\ell$. Then, the solution of \eqref{local-problem-div} in $K$ can be obtained in terms of the corresponding solution in $K_\mathrm{ref}$, by means of the contravariant Piola transformation; see \cite[Section~2.1.3]{Boffi:Brezzi:Fortin.13}. \end{remark} We have considered here the two-dimensional case only to be consistent with the simplification introduced in section~\ref{SS:Pl-Pl-2-pair}. The same construction is actually possible in any space dimension $d \geq 2$. 
\subsection{A new augmented Lagrangian formulation} \label{SS:DG-like-formulation} We now propose a new discretization of the Stokes equations, based on the $\Poly{\ell}/\Poly{\ell-2}$ pair. The first ingredient of our construction is a linear operator $E_h^{ub}: (\Polybnd{\ell}{1})^2 \to \SobH{}^2$ fulfilling~\eqref{conservation-divergence}. In view of $Z_h^{{ub}} \nsubseteq Z$ and Remark~\ref{R:failure-identity}, the identity on $(\Polybnd{\ell}{1})^2$ cannot fulfill this property. Therefore, we introduce a ``divergence correction'' $R_h^{ub}: (\Polybnd{\ell}{1})^2 \to \SobH{}^2$ defined by \begin{equation*} \label{Pl-Pl-2-div-correction} R_h^{ub} v_h := \sum_{K \in \mathcal{M}} R_\ell^K ( \Divdisc v_h - \Div v_h). \end{equation*} \begin{proposition}[Divergence-preserving smoothing operator] \label{P:Pl-Pl-2-smoother} The linear operator $E_h^{ub}: (\Polybnd{\ell}{1})^2 \to \SobH{}^2$ given by \begin{equation} \label{Pl-Pl-2-smoother} E_h^{ub} v_h := v_h + R_h^{ub} v_h \end{equation} fulfills~\eqref{conservation-divergence} and is such that, for all $v_h \in (\Polybnd{\ell}{1})^2$, \begin{equation} \label{Pl-Pl-2-smoother-stability} \NormLeb{\Grad(v_h - E_h^{ub} v_h)}{} \eqsim \NormLeb{\Divdisc v_h - \Div v_h}{}. \end{equation} \end{proposition} \begin{proof} For all $v_h \in (\Polybnd{\ell}{1})^2$ and $K \in \mathcal{M}$, it holds \begin{equation*} \Div E_h^{ub} v_h = \Div v_h + \Div R_\ell^K (\Divdisc v_h - \Div v_h) \quad \text{in} \,\, K. \end{equation*} In view of~\eqref{divergence-disc-Pl-Pl-2}, we have $\int_K (\Divdisc v_h - \Div v_h) = 0$. Since the inclusion $v_h \in (\Polybnd{\ell}{1})^2$ also implies $(\Divdisc v_h - \Div v_h)_{|K} \in \Poly{\ell-1}(K)$, Proposition~\ref{P:local-right-inverse} and the identity above ensure that $E_h^{ub}$ fulfills~\eqref{conservation-divergence}. This, in turn, easily implies the lower bound ``$\gtrsim$'' in \eqref{Pl-Pl-2-smoother-stability}.
The corresponding upper bound ``$\lesssim$'' is a consequence of the identity $\NormLeb{\Grad (v_h - E_h^{ub} v_h)}{K} = \NormLeb{\Grad R_\ell^K (\Divdisc v_h - \Div v_h)}{K}$, $K \in \mathcal{M}$, combined with~\eqref{local-right-inverse-prop-stab}. \end{proof} The second ingredient of our construction is a suitable bilinear form $a_h$. Accounting for the definition of $E_h^{ub}$ in~\eqref{Pl-Pl-2-smoother}, the necessary conditions \eqref{quasi-optimal-nec-lapl} and \eqref{quasi-optimal-press-robust-nec-lapl} prescribe \begin{equation} \label{Pl-Pl-2-bilinear-form-restriction} a_h(u, v_h) = \int_\Omega \Grad u \colon \Grad v_h + \int_\Omega \Grad u \colon \Grad R_h^{ub} v_h \end{equation} for all $u \in Z \cap Z_h^{{ub}}$ and $v_h \in (\Polybnd{\ell}{1})^2$. A simple option would be to let the right-hand side define $a_h$ on $(\Polybnd{\ell}{1})^2 \times (\Polybnd{\ell}{1})^2$. Still, note that the second summand $\int_\Omega \Grad u \colon \Grad R_h^{ub} v_h = - \sum_{K\in \mathcal{M}}\int_K \Lapl u \cdot R_h^{ub} v_h $ cannot be expected to vanish. Therefore, it obstructs the symmetry and, possibly, also the nondegeneracy of $a_h$. To overcome this problem, we observe that $R_h^{ub}$ vanishes on $Z \cap Z_h^{{ub}}$, according to \eqref{Pl-Pl-2-smoother-stability}. This suggests re-establishing symmetry and nondegeneracy by mimicking the construction of the Symmetric Interior Penalty (DG-SIP) discretization of second-order problems, see~\cite{Arnold:82} or~\cite[section~4.2.1]{DiPietro.Ern:12}. Thus, we set $a_h = a_h^{ub}$, where \begin{equation} \label{Pl-Pl-2-bilinear-form} \begin{split} a_h^{ub}(w_h, v_h) := &\int_\Omega \Grad w_h \colon \Grad v_h + \int_\Omega \Grad w_h \colon \Grad R_h^{ub} v_h +\\ &+\int_\Omega \Grad R_h^{ub} w_h \colon \Grad v_h + \eta \int_\Omega \Grad R_h^{ub} w_h \colon \Grad R_h^{ub} v_h \end{split} \end{equation} where $\eta > 0$ is a penalty parameter. Note that $a_h^{ub}$ fulfills~\eqref{Pl-Pl-2-bilinear-form-restriction}.
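The algebraic properties of $a_h^{ub}$ can be checked in a finite-dimensional caricature. In the Python sketch below (our illustration only: vectors stand for gradient fields, the Euclidean product for the $L^2$-product, and a random matrix $R$ for the correction $R_h^{ub}$), symmetry holds for every $\eta$, the coercivity bound of Lemma~\ref{L:Pl-Pl-2-coercivity} below is visible, and the rewriting of the form used in Remark~\ref{R:connection-recovered} is confirmed:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eta = 6, 2.0
R = rng.standard_normal((n, n))   # stand-in for the divergence correction
I = np.eye(n)

# a(w, v) = w.v + w.(Rv) + (Rw).v + eta (Rw).(Rv) = w^T A_eta v
A_eta = I + R + R.T + eta * R.T @ R

assert np.allclose(A_eta, A_eta.T)                            # symmetry
assert np.linalg.eigvalsh(A_eta).min() >= 1 - 1/eta - 1e-12   # coercivity
# the identity a(w, v) = (w + Rw).(v + Rv) + (eta - 1)(Rw).(Rv)
assert np.allclose(A_eta, (I + R).T @ (I + R) + (eta - 1) * R.T @ R)
```

The coercivity assertion reflects the Young-inequality argument: $2 v \cdot Rv \geq -\eta^{-1}|v|^2 - \eta|Rv|^2$ holds for every matrix $R$, so the smallest eigenvalue of $A_\eta$ is at least $1 - 1/\eta$.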
The abstract discretization~\eqref{Stokes-disc} with the $\Poly{\ell}/\Poly{\ell-2}$ pair, $a_h = a_h^{ub}$ and $E_h = E_h^{ub}$ reads as follows: Find $u_h \in (\Polybnd{\ell}{1})^2$ and $p_h \in \Polyavg{\ell-2}{0}$ such that \begin{equation} \label{Stokes-Pl-Pl-2} \begin{alignedat}{2} &\forall v_h \in (\Polybnd{\ell}{1})^2 &\qquad \mu \,a_h^{ub}(u_h, v_h) -\int_\Omega p_h \Div v_h &= \langle f , E_h^{ub} v_h \rangle \\ &\forall q_h \in \Polyavg{\ell-2}{0} &\qquad \int_\Omega q_h \Div u_h &= 0 . \end{alignedat} \end{equation} We begin our discussion of the new discretization by checking that a solution $(u_h, p_h)$ exists and is unique. In view of the above-mentioned inf-sup stability of the $\Poly{\ell}/\Poly{\ell-2}$ pair, it suffices to prove that $a_h^{ub}$ is coercive on $(\Polybnd{\ell}{1})^2$. We proceed similarly to~\cite[Lemma~4.1.2]{DiPietro.Ern:12}. \begin{lemma}[Coercivity of $a_h^{ub}$] \label{L:Pl-Pl-2-coercivity} The bilinear form $a_h^{ub}$ is coercive on $(\Polybnd{\ell}{1})^2$ for all $\eta > 1$ and we have \begin{equation*} \label{Pl-Pl-2-coercivity} a_h^{ub} (v_h, v_h) \geq \left( 1 - \dfrac{1}{\eta}\right) \NormLeb{\Grad v_h}{}^2 \end{equation*} for all $v_h \in (\Polybnd{\ell}{1})^2$. \end{lemma} \begin{proof} Let $v_h \in (\Polybnd{\ell}{1})^2$. Setting $w_h = v_h$ in~\eqref{Pl-Pl-2-bilinear-form}, we obtain \begin{equation*} a_h^{ub} (v_h, v_h) = \NormLeb{\Grad v_h}{}^2 + \eta \NormLeb{\Grad R_h^{ub} v_h}{}^2 +2 \int_\Omega \Grad v_h \colon \Grad R_h^{ub} v_h. \end{equation*} The Cauchy--Schwarz and weighted Young inequalities further provide the upper bound $2\NormSemi{\int_\Omega \Grad v_h \colon \Grad R_h^{ub} v_h} \leq \eta^{-1} \NormLeb{\Grad v_h}{}^2 + \eta \NormLeb{\Grad R_h^{ub} v_h}{}^2$. Inserting this inequality into the previous identity concludes the proof. \end{proof} Let us comment on the cost of assembling and solving the new discretization.
\begin{remark}[Feasibility of the new discretization] \label{R:Pl-Pl-2-feasibility} Assume that $\{\varphi_1, \dots, \varphi_N\}$ and $\{\psi_1, \dots, \psi_M\}$ are nodal bases of $(\Polybnd{\ell}{1})^2$ and $\Polyavg{\ell-2}{0}$, respectively. All functions $\varphi_i$ and $\psi_k$, with $i=1,\dots, N$ and $k=1,\dots,M$, are locally supported. Hence, the construction of $E_h^{ub} \varphi_i$ involves the solution of a limited number of local problems \eqref{local-problem-div} and we have $\mathrm{supp}(E_h^{ub} \varphi_i) \subseteq \mathrm{supp}(\varphi_i)$. Moreover, thanks to the local characterization of the discrete divergence \eqref{divergence-disc-Pl-Pl-2}, the entire computation of $E_h^{ub} \varphi_i$ requires $O(1)$ operations. This entails that the entries $a_h^{ub} (\varphi_i, \varphi_j)$ and $\int_\Omega \psi_k \Div \varphi_i$ of the stiffness matrices and the load entries $\langle f, E_h^{ub} \varphi_i \rangle$ can be evaluated with $O(1)$ operations for all $i,j=1, \dots, N$ and $k=1,\dots, M$. Thus, the discretization \eqref{Stokes-Pl-Pl-2} is computationally feasible, in the sense of Remark~\ref{R:computational-feasibility}. Let us also mention that the stiffness matrices associated with $a_h^{ub}$ and its counterpart in~\eqref{Stokes-Pl-Pl-2-standard} are of course different but, for all $\eta>1$, their condition numbers differ, at most, by the ratio of the continuity and the coercivity constants of $a_h^{ub}$. This ratio is bounded by $c \eta^2(\eta-1)^{-1}$, as a consequence of Proposition~\ref{P:Pl-Pl-2-smoother} and Lemma~\ref{L:Pl-Pl-2-coercivity}. \end{remark} The following remarks connect~\eqref{Stokes-Pl-Pl-2} with other existing discretizations.
\begin{remark}[Connection with augmented Lagrangian formulations] \label{R:connection-augmented-lagrangian} In view of \eqref{Pl-Pl-2-smoother-stability}, the last summand $\eta \int_\Omega \Grad R_h^{ub} w_h \colon \Grad R_h^{ub} v_h$ in the definition of $a_h^{ub}$ penalizes the functions that are in the discrete kernel $Z_h^{{ub}}$ and not in $Z$. More precisely, the penalization is equivalent to $\eta \int_\Omega \Div w_h \Div v_h$ on $Z_h^{{ub}}$. This indicates that~\eqref{Stokes-Pl-Pl-2} can be interpreted as a new augmented Lagrangian formulation for the Stokes problem; see \cite[Section~6.1]{Boffi:Brezzi:Fortin.13}. The additional terms enforcing consistency and symmetry distinguish our formulation from previous ones. \end{remark} \begin{remark}[Connection with DG discretizations] \label{R:connection-DG} The DG-SIP bilinear form in~\cite{Arnold:82} consists of four terms. The first two terms serve to accommodate consistency, see \cite[Section~4.2]{DiPietro.Ern:12} or~\cite{Veeser.Zanotti:18b}. In particular, the second one arises due to the use of possibly nonconforming, i.e. discontinuous, functions. The two remaining terms are designed to further enforce symmetry and coercivity, respectively, still preserving consistency. The same structure can be observed in the form $a_h^{ub}$. Here nonconformity is to be understood in the sense that $Z_h^{ub} \nsubseteq Z$, i.e. discretely divergence-free functions are possibly not divergence-free. A remarkable difference from the DG-SIP bilinear form is that the coercivity of $a_h^{ub}$ can be guaranteed for all $\eta > 1$ and not only for sufficiently large $\eta$.
\end{remark} \begin{remark}[Connection with R-FEM discretizations] \label{R:connection-recovered} Rearranging terms in~\eqref{Pl-Pl-2-bilinear-form}, we see that the form $a_h^{ub}$ can be rewritten as follows \begin{equation} \label{Pl-Pl-2-bilinear-form-revisited} a_h^{ub} (w_h, v_h) = \int_\Omega \Grad E_h^{ub} w_h \colon \Grad E_h^{ub} v_h + (\eta-1) \int_\Omega \Grad R_h^{ub} w_h \colon \Grad R_h^{ub} v_h. \end{equation} This sheds additional light on the condition $\eta>1$ in Lemma~\ref{L:Pl-Pl-2-coercivity} and provides an interesting connection with the Recovered Finite Element Method (R-FEM) of Georgoulis and Pryer~\cite{Georgoulis.Pryer:18}. \end{remark} \subsection{Error estimates} \label{SS:error-estimates} We now aim at showing that, unlike \eqref{Stokes-Pl-Pl-2-standard}, \eqref{Stokes-Pl-Pl-2} is a quasi-optimal and pressure robust discretization of~\eqref{Stokes-weak}. As a preliminary step, we bound the consistency error generated by the last two terms in the definition of $a_h^{ub}$. Such terms can be expected to generate a consistency error, as they were artificially added to the right-hand side of~\eqref{Pl-Pl-2-bilinear-form-restriction}. \begin{lemma}[Consistency error] \label{L:Pl-Pl-2-consistency} Let $\eta > 1$ be given. We have \begin{equation} \label{Pl-Pl-2-consistency} \NormSemi{\int_\Omega \Grad z_h \colon \Grad E_h^{ub} v_h - a_h^{ub} (z_h, v_h)} \lesssim \eta \inf_{z \in Z} \NormLeb{\Grad(z-z_h)}{} \NormLeb{\Grad v_h}{} \end{equation} for all $z_h \in Z_h^{{ub}}$ and $v_h \in (\Polybnd{\ell}{1})^2$. \end{lemma} \begin{proof} The definitions of $a_h^{ub}$ and $E_h^{ub}$ imply \begin{equation*} \int_\Omega \Grad z_h \colon \Grad E_h^{ub} v_h - a_h^{ub} (z_h, v_h) = -\int_\Omega \Grad R_h^{ub} z_h \colon \Grad(v_h + \eta R_h^{ub} v_h). \end{equation*} The equivalence~\eqref{Pl-Pl-2-smoother-stability} reveals, in particular, $\NormLeb{\Grad R_h^{ub} z_h}{} \lesssim \NormLeb{\Grad(z-z_h)}{}$ for all $z \in Z$. 
The characterization \eqref{divergence-disc-Pl-Pl-2} of the discrete divergence $\Divdisc$ and~\eqref{Pl-Pl-2-smoother-stability} entail also $\NormLeb{\Grad(v_h + \eta R_h^{ub} v_h)}{} \lesssim \eta \NormLeb{\Grad v_h}{}$. Inserting these bounds into the identity above concludes the proof. \end{proof} Recall from section~\ref{SS:quasi-optimality-press-robustness} that the discrete velocity $u_h$ solving~\eqref{Stokes-Pl-Pl-2} is in the discrete kernel $Z_h^{{ub}}$ and can be equivalently characterized through the reduced problem \begin{equation} \label{Stokes-Pl-Pl-2-reduced} \forall z_h \in Z_h^{{ub}} \qquad \mu \,a_h^{ub} (u_h, z_h) = \langle f, E_h^{ub} z_h \rangle. \end{equation} \begin{theorem}[Quasi-optimality and pressure robustness] \label{T:Pl-Pl-2-velocity-error} For all $\eta >1$, problem~\eqref{Stokes-Pl-Pl-2} is a quasi-optimal and pressure robust discretization of~\eqref{Stokes-weak} with constant $\mathrm{C_{\mathrm{qopr}}} \leq c \eta^2(\eta-1)^{-1}$. \end{theorem} \begin{proof} Denote by $u \in Z$ and $u_h \in Z_h^{{ub}}$ the solutions of problems~\eqref{Stokes-reduced} and~\eqref{Stokes-Pl-Pl-2-reduced}, respectively, with load $f \in \SobD{}^2$ and viscosity $\mu > 0$. Let $z_h \in Z_h^{{ub}}$ be arbitrary and define $v_h := u_h - z_h$. Lemma~\ref{L:Pl-Pl-2-coercivity} and problem \eqref{Stokes-Pl-Pl-2-reduced} reveal \begin{equation*} \left( 1 - \dfrac{1}{\eta} \right) \NormLeb{\Grad(u_h- z_h)}{}^2 \leq \dfrac{1}{\mu} \langle f, E_h^{ub} v_h \rangle - a_h^{ub}(z_h, v_h). \end{equation*} Since $v_h \in Z_h^{{ub}}$, we have $E_h^{ub} v_h \in Z$ as a consequence of Proposition~\ref{P:Pl-Pl-2-smoother}. Hence, problem \eqref{Stokes-reduced} yields $\mu^{-1} \langle f, E_h^{ub} v_h \rangle = \int_\Omega \Grad u \colon \Grad E_h^{ub} v_h$. We insert this identity into the previous inequality and invoke Proposition~\ref{P:Pl-Pl-2-smoother} and Lemma~\ref{L:Pl-Pl-2-consistency}. 
Owing to the inclusion $u \in Z$, we obtain \begin{equation*} \NormLeb{\Grad(u_h- z_h)}{} \leq c \eta^2(\eta-1)^{-1} \NormLeb{\Grad(u-z_h)}{}. \end{equation*} We conclude by taking the infimum over all $z_h \in Z_h^{{ub}}$ and recalling~\eqref{best-errors-Vh-Zh}. \end{proof} Let us mention that a better bound of the constant $\mathrm{C_{\mathrm{qopr}}}$ in terms of $\eta$, namely $\mathrm{C_{\mathrm{qopr}}} \leq c \eta (\eta-1)^{-1/2}$, could be obtained with the help of~\cite[Theorem~4.14]{Veeser.Zanotti:18}. Both this estimate and the one in Theorem~\ref{T:Pl-Pl-2-velocity-error} suggest setting $\eta=2$. The next remark additionally confirms that we may have $\mathrm{C_{\mathrm{qopr}}} \to +\infty$ as $\eta \to +\infty$, thus pointing out the importance of explicitly knowing a safe value of the penalty parameter. \begin{remark}[Locking effect] \label{R:locking} The penalization in $a_h^{ub}$ imposes that the solution $u_h^{ub}$ of~\eqref{Stokes-Pl-Pl-2-reduced} approaches the subspace $Z \cap Z_h^{ub}$ for $\eta \to + \infty$, as a consequence of Proposition~\ref{P:Pl-Pl-2-smoother}. This entails that the constant $\mathrm{C_{\mathrm{qopr}}}$ in Theorem~\ref{T:Pl-Pl-2-velocity-error} remains bounded in the limit $\eta \to +\infty$ only if the equivalence \begin{equation} \label{best-errors-Pl-Pl-2} \inf_{z_h \in Z \cap Z_h^{{ub}}} \NormLeb{\Grad(z-z_h)}{} \stackrel{!}{\eqsim} \inf_{w_h \in (\Polybnd{\ell}{1})^2} \NormLeb{\Grad(z-w_h)}{} \end{equation} holds for all $z \in Z$. Conversely, if~\eqref{best-errors-Pl-Pl-2} holds, we can assume that the function $z_h$ in the proof of Theorem~\ref{T:Pl-Pl-2-velocity-error} varies only in $Z \cap Z_h^{ub}$. This, in turn, provides a robust upper bound of $\mathrm{C_{\mathrm{qopr}}}$ in the limit $\eta \to +\infty$. Whenever condition~\eqref{best-errors-Pl-Pl-2} fails, a locking effect may occur, in the sense of~\cite{Babuska.Suri:92b}. We illustrate this in section~\ref{SS:numerics-locking} by means of a numerical experiment.
\end{remark} Theorem~\ref{T:Pl-Pl-2-velocity-error} states that the discretization \eqref{Stokes-Pl-Pl-2} enjoys a better velocity $H^1$-error estimate than the standard one~\eqref{Stokes-Pl-Pl-2-standard}, cf. Remark~\ref{R:failure-identity}. The next result additionally ensures that the two discretizations are actually comparable if one considers the sum of the velocity $H^1$-error times viscosity plus the pressure $L^2$-error. Thus, in other words, the modifications introduced in~\eqref{Stokes-Pl-Pl-2} do not impair the quasi-optimality of~\eqref{Stokes-Pl-Pl-2-standard}. \begin{theorem}[Quasi-optimality] \label{T:Pl-Pl-2-pressure-error} For all $\eta > 1$, problem~\eqref{Stokes-Pl-Pl-2} is a quasi-optimal discretization of~\eqref{Stokes-weak} with constant $C_{\mathrm{qo}} \lesssim \eta^3/(\eta-1)$. \end{theorem} \begin{proof} Denote by $(u,p)$ and $(u_h, p_h)$ the solutions of problems \eqref{Stokes-weak} and~\eqref{Stokes-Pl-Pl-2}, respectively, with load $f \in \SobD{}^2$ and viscosity $\mu>0$. In view of Theorem~\ref{T:Pl-Pl-2-velocity-error}, it suffices to bound the pressure error $\NormLeb{p-p_h}{}$. To this end, let $q_h \in \Polyavg{\ell-2}{0}$ be arbitrary and recall that the discrete divergence $\Divdisc$ is given by~\eqref{divergence-disc}. The inf-sup stability of the $\Poly{\ell}/\Poly{\ell-2}$ pair and Proposition~\ref{P:Pl-Pl-2-smoother} yield \begin{equation*} \NormLeb{p_h- q_h}{} \leq c \sup_{v_h \in (\Polybnd{\ell}{1})^2} \dfrac{\int_\Omega (p_h - q_h) \Div E_h^{ub} v_h}{\NormLeb{\Grad v_h}{}}. \end{equation*} For all $v_h \in (\Polybnd{\ell}{1})^2$, a comparison of \eqref{Stokes-weak} and~\eqref{Stokes-Pl-Pl-2} entails \begin{equation*} \int_\Omega (p_h - q_h) \Div E_h^{ub} v_h = \mu \left( a_h^{ub}(u_h, v_h) - \int_\Omega \Grad u \colon \Grad E_h^{ub} v_h \right) + \int_\Omega (p-q_h) \Divdisc v_h \end{equation*} where we have made use again of Proposition~\ref{P:Pl-Pl-2-smoother}. 
The last summand on the right-hand side vanishes if we let $q_h$ be the $L^2$-orthogonal projection of $p$ onto $\Polyavg{\ell-2}{0}$. Hence, invoking Lemma~\ref{L:Pl-Pl-2-consistency} and proceeding as in the proof of Theorem~\ref{T:Pl-Pl-2-velocity-error}, we infer \begin{equation} \label{Pl-Pl-2-pressure-error} \NormLeb{p_h-q_h}{} \leq c \mu \eta \NormLeb{\Grad(u-u_h)}{}. \end{equation} The triangle inequality and Theorem~\ref{T:Pl-Pl-2-velocity-error} conclude the proof. \end{proof} \subsection{Inhomogeneous continuity equation} \label{SS:inhomogeneous-continuity} It is worth having a look at the case when the incompressibility constraint $\Div u = 0$ of~\eqref{Stokes-strong} is replaced by the inhomogeneous continuity condition $\Div u = g$ with $g \in \LebH{}$. The corresponding weak formulation reads as follows: Find $u \in \SobH{}^2$ and $p \in \LebH{}$ such that \begin{equation} \label{Stokes-inhom} \begin{alignedat}{2} &\forall v \in \SobH{}^2 &\qquad \mu \int_\Omega \Grad u \colon \Grad v - \int_\Omega p \Div v &= \left\langle f , v \right\rangle \\ &\forall q \in \LebH{} &\qquad \int_\Omega q \Div u &= \int_\Omega q g. \end{alignedat} \end{equation} A possible extension of the discretization~\eqref{Stokes-Pl-Pl-2} with the $\Poly{\ell}/\Poly{\ell-2}$ pair consists in finding $u_h \in (\Polybnd{\ell}{1})^2$ and $p_h \in \Polyavg{\ell-2}{0}$ such that \begin{equation} \label{Stokes-inhom-Pl-Pl-2} \begin{aligned} &\forall v_h \in (\Polybnd{\ell}{1})^2 &\qquad \mu \,a_h^{ub}(u_h, v_h) -\int_\Omega p_h \Div v_h &= \langle f , E_h^{ub} v_h \rangle \\ &\forall q_h \in \Polyavg{\ell-2}{0} &\qquad \int_\Omega q_h \Div u_h &= \int_\Omega q_h g.
\end{aligned} \end{equation} The second equations of~\eqref{Stokes-inhom} and \eqref{Stokes-inhom-Pl-Pl-2} impose $u \in Z(g)$ and $u_h \in Z_h^{{ub}}(g)$, respectively, where \begin{equation*} Z(g) := \{ z \in \SobH{}^2 \mid \Div z = g \}, \qquad Z_h^{{ub}}(g) := \{ z_h \in (\Polybnd{\ell}{1})^2 \mid \Divdisc z_h = \Pi_{\ell-2} g \} \end{equation*} and $\Pi_{\ell-2}$ is the $L^2$-orthogonal projection onto $\Polyavg{\ell-2}{0}$. Lemma~\ref{L:Pl-Pl-2-consistency} states that the consistency error in the left-hand side of~\eqref{Pl-Pl-2-consistency} vanishes whenever $z_h \in Z \cap Z_h^{{ub}}$. If, instead, we assume $z_h \in Z(g) \cap Z_h^{{ub}}(g)$ for some $g \in \LebH{}$ with $g \neq \Pi_{\ell-2} g$, the consistency error may not vanish. In fact, we possibly have $R_h^{ub} z_h \neq 0$, as a consequence of Proposition~\ref{P:Pl-Pl-2-smoother}. This suggests that a bound of the consistency error solely in terms of the best approximation $H^1$-error to $z_h$ by elements of $Z(g)$ is likely not possible. Therefore, we do not expect that the discrete velocity $u_h$ solving~\eqref{Stokes-inhom-Pl-Pl-2} is a near-best approximation of the analytical velocity in $(\Polybnd{\ell}{1})^2$, with respect to the $H^1$-norm. Still, combining the equivalence~\eqref{Pl-Pl-2-smoother-stability} and the $L^2$-orthogonality of $\Pi_{\ell-2}$, we obtain the following generalization of Lemma~\ref{L:Pl-Pl-2-consistency}: \begin{equation*} \label{Pl-Pl-2-consistency-inhom} \begin{split} &\NormSemi{\int_\Omega \Grad z_h \colon \Grad E_h^{ub} v_h - a_h^{ub} (z_h, v_h)} \\ &\hspace{1.5cm}\leq c \eta \left( \inf_{z \in Z(g)} \NormLeb{\Grad(z-z_h)}{} + \inf_{q_h \in \Polyavg{\ell-2}{0}} \NormLeb{g-q_h}{} \right) \NormLeb{\Grad v_h}{} \end{split} \end{equation*} for all $z_h \in Z_h^{{ub}}(g)$ and $v_h \in (\Polybnd{\ell}{1})^2$, with $g \in \LebH{}$.
Apart from the additional term in the right-hand side of this estimate, the technique in the proof of Theorem~\ref{T:Pl-Pl-2-velocity-error} can still be applied, with the help of~\cite[Proposition~5.1.3]{Boffi:Brezzi:Fortin.13}, and we finally derive \begin{equation} \label{Pl-Pl-2-velocity-error-inhom} \NormLeb{\Grad(u-u_h)}{} \lesssim \inf_{v_h \in (\Polybnd{\ell}{1})^2} \NormLeb{\Grad(u-v_h)}{} + \inf_{q_h \in \Polyavg{\ell-2}{0}} \NormLeb{g-q_h}{} \end{equation} for any fixed $\eta >1$. As in~\eqref{intro:conforming-est}, the approximation power of the discrete pressure space in the $L^2$-norm may here impair the velocity $H^1$-error, because the $\Poly{\ell}/\Poly{\ell-2}$ pair is unbalanced. We confirm this suspicion by means of a numerical experiment in section~\ref{SS:numerics-inhomogeneous}. Still, we remark that this estimate, unlike~\eqref{intro:conforming-est}, is pressure robust, i.e. independent of the analytical pressure. A corresponding bound of the pressure error can be derived arguing as in the proof of Theorem~\ref{T:Pl-Pl-2-pressure-error}. The nonconforming discretization proposed in section~\ref{SS:nonconforming-pairs} has the remarkable property that the consistency error can always be bounded solely in terms of the best approximation $H^1$-error to the analytical velocity; cf. Remark~\ref{R:CR-inhom-continuity}. Therefore, in that case, we achieve quasi-optimality and pressure robustness even if an inhomogeneous continuity condition is imposed. \section{Generalizations of the paradigmatic discretization} \label{S:generalizations} The idea illustrated in the previous section can be generalized in various directions. An immediate observation is that the same construction applies to any other conforming and inf-sup stable pair $V_h/Q_h$ such that \begin{itemize} \item[$(i)$] $\Polyavg{0}{0}$ is a subset of $Q_h$ and \item[$(ii)$] the discrete divergence $\Divdisc$ can be computed element-wise.
\end{itemize} The first condition is needed in Proposition~\ref{P:Pl-Pl-2-smoother} to ensure that the smoothing operator $E_h^{ub}$ fulfills~\eqref{conservation-divergence}. The second one guarantees that the divergence correction $R_h^{ub}$ can be computed element-wise. As a consequence, the proposed discretization is computationally feasible, cf. Remark~\ref{R:Pl-Pl-2-feasibility}. Conditions (i) and (ii) are verified, for instance, by the following generalization of the $\Poly{\ell} / \Poly{\ell-2}$ pair: \begin{equation*} \label{Pl-Pl-2-pair-generalized} V_h = (\Polybnd{\ell}{1})^d \qquad \text{and} \qquad Q_h = \Polyavg{\ell-k}{0}, \qquad b_h(v_h, q_h) = -\int_\Omega q_h \Div v_h \end{equation*} where $d \leq k \leq \ell$ and $d \in \{2,3\}$. Another possibility is to consider the conforming Crouzeix-Raviart pairs described in~\cite[Sections~8.6.2 and 8.7.2]{Boffi:Brezzi:Fortin.13}. Stable pairs with continuous pressure, i.e. $Q_h \subseteq C^0(\Omega)$, do not fulfill (i), while (ii) is violated, for instance, by the modified Hood-Taylor pairs of Boffi et al.~\cite{Boffi.Cavallini.Gardini.Gastaldi:12}. We now aim at addressing more substantial generalizations. We mainly focus on the necessary modifications and, in particular, we omit all proofs that are similar to the ones in the previous section. \subsection{Nonconforming pairs} \label{SS:nonconforming-pairs} Assume that $V_h/Q_h$ is a nonconforming pair, i.e. $V_h \nsubseteq \SobH{}^\Dim$. In this case, it does not seem appropriate to define the smoothing operator $E_h$ as in~\eqref{Pl-Pl-2-smoother}, because of the condition $E_h(V_h) \subseteq \SobH{\Omega}^d$. A possible fix for this problem is to replace $v_h $ with $M_h v_h$, where $M_h: V_h \to \SobH{}^\Dim$ is a linear operator. To make sure that a counterpart of Proposition~\ref{P:Pl-Pl-2-smoother} holds, we require that $\Div M_h v_h$ has the same element-wise mean as $\Divdisc v_h$ for all $v_h \in V_h$.
Therefore, we resort to an element-wise ``mean mass preserving'' operator; cf. Proposition~\ref{P:CR-mean-operator}. As before, we illustrate this idea by means of a model example, namely the two-dimensional nonconforming Crouzeix-Raviart pair of degree $\ell \geq 2$. We do not consider the lowest-order case $\ell=1$, as it is rather specific and already covered by \cite{Verfuerth.Zanotti:18}, cf. Remark~\ref{R:CR-first-order}. A similar technique can be applied, for instance, with the modified Crouzeix-Raviart pairs of~\cite{Matthies.Tobiska:05} or with the three-dimensional generalizations of the Kouhia-Stenberg pair from \cite{Hu.Schedensack:18}. The original two-dimensional pair of Kouhia and Stenberg~\cite{Kouhia.Stenberg:95} can be treated as indicated in Remark~\ref{R:CR-first-order}. Let the mesh $\mathcal{M}$ be as in section~\ref{S:paradigmatic-discretization} and denote by $\Faces{}$ the faces of $\mathcal{M}$. A subscript to $\Faces{}$ indicates that we consider only those faces that are contained in the set specified by the subscript. We orient each interior face $F \in \Faces{\Omega}$ with a unit normal vector $n_F$. We denote by $\Jump{\cdot}_{|F}$ the jump on $F$ in the direction of $n_F$. For boundary faces $F \in \Faces{\partial \Omega}$, we orient $n_F$ so that it points outside $\Omega$ and let $\Jump{\cdot}_{|F}$ coincide with the trace on $F$, cf.~\cite[Section~1.2.3]{DiPietro.Ern:12}. We use the subscript $\mathcal{M}$ to indicate the broken version of a differential operator on $\mathcal{M}$. For instance, the broken gradient of an element-wise $H^1$-function $v$ is given by $(\GradM v)_{|K} := \Grad(v_{|K})$ for all $K \in \mathcal{M}$.
The nonconforming Crouzeix-Raviart space of degree $\ell \in \mathbb{N}$ on $\mathcal{M}$, with homogeneous boundary conditions, can be defined as follows: \begin{equation*} \label{CR-space} \CR{\ell} := \{ v \in \Polypiec{\ell}{0} \mid \forall F \in \Faces{}~\text{and}~ r \in \Poly{\ell-1}(F) \quad \int_F \Jump{v} r = 0 \}. \end{equation*} Notice that the integral $\int_F v$ is well-defined for all $v \in \CR{\ell}$ and $F \in \Faces{}$ and vanishes if $F \in \Faces{\partial \Omega}$. Yet, the jumps across mesh faces do not vanish in general. We assume hereafter $\ell \geq 2$. The two-dimensional nonconforming Crouzeix-Raviart pair of degree $\ell$ is \begin{equation*} \label{CR-pair} V_h = (\CR{\ell})^2 \qquad \text{and} \qquad Q_h = \Polyavg{\ell-1}{0}, \qquad b_h(v_h, q_h) = -\int_\Omega q_h \DivM v_h. \end{equation*} Results concerning the inf-sup stability can be found in \cite{Baran.Stoyan:07,Crouzeix.Falk:89,Fortin.Soulie:83}. Since the broken divergence $\DivM$ maps $V_h$ into $Q_h$, it coincides with the discrete divergence from~\eqref{divergence-disc}, i.e. $\Divdisc = \DivM$. We measure the velocity error in the broken $H^1$-norm, augmented with scaled jumps. Thus, in the notation of section~\ref{S:abstract-framework}, we set \begin{equation*} \label{CR-norm} \Normh{v}^2 =\Normh[{cr}]{v}^2 := \NormLeb{\GradM v}{}^2 + \sum_{F \in \Faces{}} h_F^{-1} \NormLeb{\Jump{v}}{F}^2, \end{equation*} where $h_F$ is the diameter of $F$. An equivalent alternative would be to consider only the broken $H^1$-norm. Both options extend the $H^1$-norm to $\SobH{}^2 + (\CR{\ell})^2$. Let $\Nodes{\ell, \Omega}$ be the set of interior Lagrange nodes of degree $\ell$ in $\mathcal{M}$. For all $\nu \in \Nodes{\ell, \Omega}$, we denote by $\Phi_\ell^\nu$ the Lagrange basis function of $\Polybnd{\ell}{1}$ associated with the evaluation at $\nu$, i.e. $\Phi_\ell^\nu(\nu') = \delta_{\nu \nu'}$ for all $\nu' \in \Nodes{\ell, \Omega}$.
Fix also an element $K_\nu \in \mathcal{M}$ with $\nu \in K_\nu$. We define a ``simplified nodal averaging'' operator $A_h^{cr}: (\CR{\ell})^2 \to (\Polybnd{\ell}{1})^2$ by \begin{equation*} \label{CR-averaging} A_h^{cr} v_h := \sum_{\nu \in \Nodes{\ell,\Omega}} v_{h|K_\nu}(\nu)\, \Phi_\ell^\nu. \end{equation*} Next, let $m_F$ be the midpoint of any interior face $F \in \Faces{\Omega}$. Consider the bubble function $\Phi_2^F:= 3(2 \NormSemi{F})^{-1}\Phi_2^{m_F}$, where $\Phi_2^{m_F}$ is the Lagrange basis function of $\Polybnd{2}{1}$ associated with the evaluation at $m_F$. The normalization implies $\int_{F'} \Phi_2^F = \delta_{FF'}$ for all $F' \in \Faces{}$, according to the Simpson quadrature formula. We introduce a ``bubble'' operator $B_h^{cr}: (\CR{\ell})^2 \to (\Polybnd{\ell}{1})^2$ by \begin{equation*} \label{CR-bubble-operator} B_h^{cr} v_h := \sum_{F \in \Faces{\Omega}} \left( \int_F v_h \right) \Phi_2^F. \end{equation*} We combine $A_h^{cr}$ and $B_h^{cr}$ to obtain the announced element-wise mean mass preserving operator $M_h^{cr}$. Roughly speaking, we use $B_h^{cr}$ to enforce the first part of \eqref{CR-mean-operator-properties} below, while $A_h^{cr}$ is responsible for the second part. \begin{proposition}[Element-wise mean mass preserving operator] \label{P:CR-mean-operator} The linear operator $M_h^{cr}: (\CR{\ell})^2 \to (\Polybnd{\ell}{1})^2$ given by \begin{equation} \label{CR-mean-operator} M_h^{cr} v_h := A_h^{cr} v_h + B_h^{cr} (v_h - A_h^{cr} v_h) \end{equation} is such that \begin{equation} \label{CR-mean-operator-properties} \int_K \Div M_h^{cr} v_h = \int_K \Div v_h \quad \text{and} \quad \Normh[{cr}]{v_h - M_h^{cr} v_h} \leq c \inf_{v \in \SobH{}^2} \Normh[{cr}]{v-v_h} \end{equation} for all $v_h \in (\CR{\ell})^2$ and $K \in \mathcal{M}$. \end{proposition} \begin{proof} Let $v_h \in (\CR{\ell})^2$ and $F' \in \Faces{\Omega}$ be given.
The normalization of the functions $\{ \Phi_2^F \}_{F \in \Faces{\Omega}}$ reveals \begin{equation*} \int_{F'} B_h^{cr} (v_h - A_h^{cr} v_h) = \sum_{F \in \Faces{\Omega}}\int_F (v_h - A_h^{cr} v_h) \delta_{FF'} = \int_{F'} (v_h - A_h^{cr} v_h). \end{equation*} The same identities hold also for boundary faces $F' \in \Faces{\partial \Omega}$, in view of the boundary conditions in $\CR{\ell}$ and $\Polybnd{\ell}{1}$. Rearranging terms, we obtain $\int_{F'} M_h^{cr} v_h = \int_{F'} v_h$ for all $F' \in \Faces{}$. Then, for all $K \in \mathcal{M}$, the Gauss theorem yields the first part of~\eqref{CR-mean-operator-properties} \begin{equation*} \int_K \Div M_h^{cr} v_h = \sum_{F' \in \Faces{\partial K}} \int_{F'} M_h^{cr} v_h \cdot n_K = \sum_{F' \in \Faces{\partial K}} \int_{F'} v_h \cdot n_K = \int_K \Div v_h. \end{equation*} A detailed proof of the second part of \eqref{CR-mean-operator-properties} can be found in \cite[Section~3]{Veeser.Zanotti:18b}, where a similar, actually more involved, operator is considered. For this reason, we only sketch the proof. Let $K \in \mathcal{M}$ be given. Owing to the triangle inequality, we first bound $\NormLeb{\Grad(v_h - A_h^{cr} v_h)}{K}$ and $\NormLeb{\Grad B_h^{cr}(v_h - A_h^{cr} v_h)}{K}$. The scaling of the functions $\{\Phi_2^F\}_{F \in \Faces{\partial K}}$ and the trace inequality imply \begin{equation} \label{CR-mean-operator-proof} \begin{split} \NormLeb{\Grad(v_h - A_h^{cr} v_h)}{K} &+ \NormLeb{\Grad B_h^{cr}(v_h - A_h^{cr} v_h)}{K} \\ &\lesssim h_K^{-1} \NormLeb{v_h - A_h^{cr} v_h}{K} + \NormLeb{\Grad(v_h - A_h^{cr} v_h)}{K}, \end{split} \end{equation} where $h_K$ is the diameter of $K$. Next, for all $\nu \in \Nodes{\ell,K}$, we have $v_{h|K} (\nu) = A_h^{cr} v_h(\nu)$ if $\nu \in \mathrm{int}(K)$, otherwise $\NormSemi{v_{h|K} (\nu) - A_h^{cr} v_h(\nu)} \lesssim \sum_{F \ni \nu} h_F^{-1/2} \NormLeb{\Jump{v_h}}{F}$, where $F$ varies in $\Faces{}$.
This estimate and the scaling of the Lagrange basis functions entail that the right-hand side of~\eqref{CR-mean-operator-proof} is bounded by $\sum_{F \cap K \neq \emptyset} h_F^{-1/2} \NormLeb{\Jump{v_h}}{F}$. Squaring and summing over all $K \in \mathcal{M}$, we finally obtain \begin{equation*} \NormLeb{\GradM(v_h - M_h^{cr} v_h)}{}^2 \lesssim \sum_{F \in \Faces{}} h_F^{-1} \NormLeb{\Jump{v_h}}{F}^2. \end{equation*} We conclude recalling the definition of the norm $\Normh[{cr}]{\cdot}$. \end{proof} According to the first part of~\eqref{CR-mean-operator-properties}, we can now construct a smoothing operator similarly to $E_h^{ub}$ in Proposition~\ref{P:Pl-Pl-2-smoother}. Recalling the local operators $R_\ell^K$ introduced in section~\ref{SS:local-inversion-divergence}, we define $E_h^{cr}: (\CR{\ell})^2 \to \SobH{\Omega}^2$ by \begin{equation} \label{CR-smoother} E_h^{cr} v_h := M_h^{cr} v_h + \sum_{K \in \mathcal{M}} R_\ell^K (\DivM v_h - \Div M_h^{cr} v_h). \end{equation} Owing to the identity $\Divdisc = \DivM$, we see that $E_h^{cr}$ fulfills condition~\eqref{conservation-divergence}, as a consequence of Propositions~\ref{P:local-right-inverse} and \ref{P:CR-mean-operator}. Moreover, the stability of the operators $R_\ell^K$ and the second part of \eqref{CR-mean-operator-properties} provide a strengthened counterpart of~\eqref{Pl-Pl-2-smoother-stability} in that, for all $v_h \in (\CR{\ell})^2$, we have \begin{equation} \label{CR-smoother-stability} \Normh[{cr}]{v_h - E_h^{cr} v_h} \lesssim \inf_{v \in \SobH{}^2} \Normh[{cr}]{v-v_h}. 
\end{equation} Next, inspired by the definition of $a_h^{ub}$ in \eqref{Pl-Pl-2-bilinear-form} as well as by identity~\eqref{Pl-Pl-2-bilinear-form-revisited}, we introduce the following bilinear form $a_h^{cr}$ on $(\CR{\ell})^2$: \begin{equation*} a_h^{cr} (w_h, v_h) := \int_\Omega \Grad E_h^{cr} w_h \colon \Grad E_h^{cr} v_h + (\eta-1) \int_{\Omega} \GradM R_h^{cr} w_h \colon \GradM R_h^{cr} v_h \end{equation*} where $R_h^{cr} := (E_h^{cr} - \mathrm{Id})$ and $\eta > 1$ is a penalty parameter. The above-mentioned properties of $E_h^{cr}$ imply that the necessary conditions in Lemmas~\ref{L:quasi-optimal-nec} and \ref{L:quasi-optimal-press-robust-nec} are fulfilled if we set $a_h = a_h^{cr}$ and $E_h = E_h^{cr}$. In this setting, the abstract discretization~\eqref{Stokes-disc} reads as follows: Find $u_h \in (\CR{\ell})^2$ and $p_h \in \Polyavg{\ell-1}{0}$ such that \begin{equation} \label{Stokes-CR} \begin{alignedat}{2} &\forall v_h \in (\CR{\ell})^2 &\qquad \mu \,a_h^{cr}(u_h, v_h) -\int_\Omega p_h \DivM v_h &= \left\langle f , E_h^{cr} v_h \right\rangle \\ &\forall q_h \in \Polyavg{\ell-1}{0} &\qquad \int_\Omega q_h \DivM u_h &= 0. \end{alignedat} \end{equation} Similarly to $a_h^{ub}$ in Lemma~\ref{L:Pl-Pl-2-coercivity}, the form $a_h^{cr}$ is coercive on $(\CR{\ell})^2$ for $\eta > 1$, with coercivity constant at least $1-\eta^{-1}$. Moreover, in view of~\eqref{CR-smoother-stability}, we can estimate the consistency error of~\eqref{Stokes-CR} by the following counterpart of Lemma~\ref{L:Pl-Pl-2-consistency}: \begin{equation} \label{CR-consistency} \NormSemi{\int_\Omega \GradM w_h \colon \Grad E_h^{cr} v_h - a_h^{cr} (w_h, v_h)} \leq c \eta \inf_{w \in \SobH{}^2} \Normh[{cr}]{w-w_h} \Normh[{cr}]{v_h} \end{equation} for all $w_h, v_h \in (\CR{\ell})^2$.
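The coercivity constant can be verified by an elementary computation, which we sketch in the broken $H^1$-seminorm; recall from the above discussion of $\Normh[{cr}]{\cdot}$ that the broken $H^1$-norm is an equivalent alternative on the relevant spaces. Writing $v_h = E_h^{cr} v_h - R_h^{cr} v_h$ and applying Young's inequality with weight $(\eta-1)^{-1}$, we get \begin{equation*} \NormLeb{\GradM v_h}{}^2 \leq \frac{\eta}{\eta-1} \NormLeb{\Grad E_h^{cr} v_h}{}^2 + \eta \NormLeb{\GradM R_h^{cr} v_h}{}^2 = \frac{\eta}{\eta-1}\, a_h^{cr}(v_h, v_h) \end{equation*} for all $v_h \in (\CR{\ell})^2$, whence $a_h^{cr}(v_h, v_h) \geq (1-\eta^{-1}) \NormLeb{\GradM v_h}{}^2$.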
Hence, we conclude that \eqref{Stokes-CR} is a quasi-optimal and pressure robust discretization of~\eqref{Stokes-weak} in the norm $\Normh[{cr}]{\cdot}$ and the constant $C_{\mathrm{qopr}}$ from Definition~\ref{D:quasi-optimality-press-robust} solely depends on $\eta$ and the shape parameter of $\mathcal{M}$. Whenever the pair $(\CR{\ell})^2/ \Polyavg{\ell-1}{0}$ is inf-sup stable, an estimate of the pressure $L^2$-error, only in terms of the best approximation errors to the analytical velocity and the analytical pressure, can also be established as in Theorem~\ref{T:Pl-Pl-2-pressure-error}. Thus, problem \eqref{Stokes-CR} is also a quasi-optimal discretization of \eqref{Stokes-weak}. Locally supported basis functions of $\CR{\ell}$ are described in \cite[Section~3]{Baran.Stoyan:06}. With this basis and the standard nodal basis of $\Polypiec{\ell-1}{0}$, we see that \eqref{Stokes-CR} is computationally feasible in the sense of Remark~\ref{R:computational-feasibility}, cf. Remark~\ref{R:Pl-Pl-2-feasibility}. \begin{remark}[The pair $(\CR{1})^2/\Polyavg{0}{0}$] \label{R:CR-first-order} In principle, the approach described for $\ell \geq 2$ applies also with $\ell = 1$, up to observing that $R_2^K$ (and not $R_1^K$) should be used in \eqref{CR-smoother}. The point is that, in this case, an element-wise integration by parts and the identity $\int_{F'} E_h^{cr} v_h = \int_{F'} M_h^{cr} v_h = \int_{F'} v_h$, with $F' \in \Faces{}$, reveal $\int_\Omega \GradM w_h \colon \Grad R_h^{cr} v_h = 0$ for all $w_h, v_h \in (\CR{1})^2$. Hence, the form $a_h^{cr}$ is given by $a_h^{cr}(w_h, v_h) = \int_\Omega \GradM w_h \colon \GradM v_h + \eta\int_\Omega \GradM R_h^{cr} w_h \colon \GradM R_h^{cr} v_h$, showing that the penalization is actually not needed. Setting $\eta=0$ annihilates the consistency error and corresponds to the discretization proposed in \cite{Verfuerth.Zanotti:18}.
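In more detail, since $\Grad w_{h|K}$ is constant when $\ell = 1$, the element-wise integration by parts produces no volume terms, and each face integral vanishes thanks to $\int_F R_h^{cr} v_h = \int_F (E_h^{cr} v_h - v_h) = 0$: \begin{equation*} \int_\Omega \GradM w_h \colon \Grad R_h^{cr} v_h = \sum_{K \in \mathcal{M}} \sum_{F \in \Faces{\partial K}} (\Grad w_{h|K}\, n_K) \cdot \int_F R_h^{cr} v_h = 0. \end{equation*}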
\end{remark} \begin{remark}[Inhomogeneous continuity equation] \label{R:CR-inhom-continuity} The infimum in the right-hand side of~\eqref{CR-consistency} is taken over $\SobH{}^2$ and not only over $Z$, unlike Lemma~\ref{L:Pl-Pl-2-consistency}. This prevents the issue pointed out in section~\ref{SS:inhomogeneous-continuity}. Therefore, the nonconforming Crouzeix-Raviart pair can be used to design a quasi-optimal and pressure robust discretization of problem~\eqref{Stokes-inhom} with the inhomogeneous continuity condition $g \neq 0$. \end{remark} \subsection{Conforming pairs with continuous pressure} \label{SS:continuous-pressure} Another class of pairs not yet covered by our discussion is that of conforming pairs with continuous pressure. In fact, the following observations obstruct the construction of a smoothing operator as indicated in Proposition~\ref{P:Pl-Pl-2-smoother}. \begin{itemize} \item[$(i)$] Since $\Polyavg{0}{0}$ is not a subspace of $Q_h$, the identity $\int_K \Divdisc v_h = \int_K \Div v_h$ may fail to hold for some $v_h \in V_h$ and $K \in \mathcal{M}$. \item[$(ii)$] The computation of $\Divdisc$ is likely unfeasible in the sense of Remark~\ref{R:computational-feasibility}. \end{itemize} Item (i) entails that we cannot correct the divergence element-wise by means of the operators $R_\ell^K$ from section~\ref{SS:local-inversion-divergence}. The shape functions of the lowest-order continuous space $\Polypiec{1}{1}$ suggest working on patches of elements sharing a vertex instead. Item (ii) further indicates that we should never require a direct computation of $\Divdisc$. The construction of a quasi-optimal and pressure robust discretization of the Stokes equations is still possible under these constraints, but it is more involved than the ones in the previous sections. We mainly adapt ideas by Lederer et al.~\cite{Lederer.Linke.Merdon.Schoberl:17}.
As an example, we let the mesh $\mathcal{M}$ be as in section~\ref{S:paradigmatic-discretization} and consider the two-dimensional Hood-Taylor pair \begin{equation*} \label{Hood-Taylor-pair} V_h = (\Polybnd{\ell}{1})^2 \qquad \text{and} \qquad Q_h = \Polyavg{\ell-1}{1}, \qquad b_h(v_h, q_h) = -\int_\Omega q_h \Div v_h \end{equation*} with $\ell \geq 2$. The inf-sup condition~\eqref{inf-sup-disc} holds with $\beta^{-1} \leq c$ under mild assumptions on $\mathcal{M}$, see~\cite{Boffi:94}. The discrete divergence coincides with the $L^2$-orthogonal projection of the analytical divergence onto $\Polyavg{\ell-1}{1}$. We denote by $Z_h^{ht}$ the discrete kernel. Let $\Nodes{} := \Nodes{1}$ denote the set of all vertices of $\mathcal{M}$. For each $\nu \in \Nodes{}$, let $\Phi_1^\nu$ be the Lagrange basis function of $\Polypiec{1}{1}$ associated with the evaluation at $\nu$, i.e. $\Phi_1^\nu(\nu') = \delta_{\nu\nu'}$ for all $\nu' \in \Nodes{}$. Recall that $\Phi_1^\nu$ is supported on the patch $\omega_\nu := \{ K \in \mathcal{M} \mid \nu \in K \}$. Consider the barycentric refinement $\Mesh_\nu$ of $\omega_\nu$, i.e. the mesh obtained by connecting the vertices and the barycenter of any triangle in $\omega_\nu$, cf. Figure~\ref{F:barycentric-refinement-star}. The space $\Polypiec{\ell}{0}(\Mesh_\nu)$ and the subspaces \begin{equation*} \label{local-spaces-star} \Polybnd{\ell}{1}(\Mesh_\nu) \qquad \text{and} \qquad \Polyavg{\ell-1}{0}(\Mesh_\nu) \end{equation*} are defined on $\Mesh_\nu$ analogously to $\Polypiec{\ell}{0}$ in~\eqref{elementwise-polynomials} and $\Polybnd{\ell}{1}$ and $\Polyavg{\ell-1}{0}$ in~\eqref{elementwise-polynomials-sub}, respectively.
The element-wise local Lagrange interpolant $I_\ell^\nu : \Polypiec{\ell}{0}(\Mesh_\nu) \to \Polypiec{\ell-1}{0}(\Mesh_\nu)$ is given by \begin{equation*} \label{Hood-Taylor-local-Lagrange} I_\ell^\nu v := \sum_{K \in \Mesh_\nu} \sum_{\nu' \in \Nodes{\ell-1, K} } v_{|K} (\nu') \Phi_{\ell-1}^{\nu',K} \end{equation*} where $\Nodes{\ell-1, K}$ is the set of Lagrange nodes of degree $\ell-1$ in $K$ and $\Phi_{\ell-1}^{\nu',K}$ is the Lagrange basis function of $\Poly{\ell-1}(K)$ associated with the evaluation at $\nu'$ and extended to zero outside $K$. Consider also the simplified local averaging operator $A_\ell^\nu: \Polypiec{\ell}{0}(\Mesh_\nu) \to \Polypiec{\ell-1}{1}$ \begin{equation*} \label{Hood-Taylor-local-averaging} A_\ell^\nu v := \sum_{\nu' \in \Nodes{\ell-1}} v_{|{K_{\nu'}}} (\nu') \Phi_{\ell-1}^{\nu'} \end{equation*} where $K_{\nu'} \in \mathcal{M}$ is a fixed element such that $\nu' \in K_{\nu'}$ and $v$ is extended to zero outside $\omega_\nu$. As before, $\Phi_{\ell-1}^{\nu'}$ denotes the Lagrange basis function of $\Polybnd{\ell-1}{1}$ associated with the evaluation at $\nu'$.
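The barycentric refinement used here is easy to realize programmatically. The following sketch (plain Python, independent of any finite element library; all names are illustrative, not part of our implementation) splits each triangle of a patch into three subtriangles by connecting its vertices to its barycenter, as in Figure~\ref{F:barycentric-refinement-star}.

```python
# Illustrative sketch (names are hypothetical): barycentric refinement of a
# patch, splitting every triangle into three by inserting its barycenter.

def barycentric_refinement(verts, tris):
    """verts: list of (x, y) pairs; tris: list of vertex-index triples."""
    verts = list(verts)
    fine = []
    for a, b, c in tris:
        (xa, ya), (xb, yb), (xc, yc) = verts[a], verts[b], verts[c]
        g = len(verts)  # index of the new barycenter vertex
        verts.append(((xa + xb + xc) / 3.0, (ya + yb + yc) / 3.0))
        fine += [(a, b, g), (b, c, g), (c, a, g)]
    return verts, fine
```

Refining a patch of $n$ triangles yields $3n$ subtriangles and $n$ new vertices, while the covered area is unchanged.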
\begin{figure}[ht] \centering \begin{tikzpicture} \coordinate (z1) at (0.5,1.2); \coordinate (z2) at (0.8,0); \coordinate (z3) at (2.2,0); \coordinate (z4) at (2.5,1.2); \coordinate (z5) at (1.5,2); \coordinate (c) at (1.5,1); \path (z1) edge (z2); \path (z2) edge (z3); \path (z3) edge (z4); \path (z4) edge (z5); \path (z5) edge (z1); \path (z1) edge (c); \path (z2) edge (c); \path (z3) edge (c); \path (z4) edge (c); \path (z5) edge (c); \fill (c) circle (2pt); \coordinate (w1) at (4.5,1.2); \coordinate (w2) at (4.8,0); \coordinate (w3) at (6.2,0); \coordinate (w4) at (6.5,1.2); \coordinate (w5) at (5.5,2); \coordinate (cc) at (5.5,1); \path (w1) edge (w2); \path (w2) edge (w3); \path (w3) edge (w4); \path (w4) edge (w5); \path (w5) edge (w1); \path (w1) edge (cc); \path (w2) edge (cc); \path (w3) edge (cc); \path (w4) edge (cc); \path (w5) edge (cc); \fill (cc) circle (2pt); \coordinate (b1) at (4.93,0.73); \coordinate (b2) at (5.5,0.33); \coordinate (b3) at (6.07,0.73); \coordinate (b4) at (5.83,1.4); \coordinate (b5) at (5.17,1.4); \path[dashed] (w1) edge (b1); \path[dashed] (w2) edge (b1); \path[dashed] (cc) edge (b1); \path[dashed] (w2) edge (b2); \path[dashed] (w3) edge (b2); \path[dashed] (cc) edge (b2); \path[dashed] (w3) edge (b3); \path[dashed] (w4) edge (b3); \path[dashed] (cc) edge (b3); \path[dashed] (w4) edge (b4); \path[dashed] (w5) edge (b4); \path[dashed] (cc) edge (b4); \path[dashed] (w5) edge (b5); \path[dashed] (w1) edge (b5); \path[dashed] (cc) edge (b5); \end{tikzpicture} \caption{Generic patch $\omega_\nu$ (left) and barycentric refinement $\Mesh_\nu$ (right).} \label{F:barycentric-refinement-star} \end{figure} We are now ready to define the operators $R_\ell^\nu: \Leb{} \to \SobH{}^2$ that will be used to correct the divergence in each patch $\omega_\nu$, $\nu \in \Nodes{}$. Here $R_\ell^\nu$ plays the same role as $R_\ell^K$ in section~\ref{SS:local-inversion-divergence}. 
Given $q \in \Leb{}$, let $u_\nu = u_\nu(q) \in \Polybnd{\ell}{1}(\Mesh_\nu)^2$ and $p_\nu = p_\nu(q) \in \Polyavg{\ell-1}{0}(\Mesh_\nu)$ be such that \begin{equation} \label{local-problem-div-star} \begin{aligned} &\forall v_\nu \in \Polybnd{\ell}{1}(\Mesh_\nu)^2 \qquad \int_{\omega_\nu} \Grad u_\nu \colon \Grad v_\nu - \int_{\omega_\nu} p_\nu \Div v_\nu = 0\\ &\forall q_\nu \in \Polyavg{\ell-1}{0}(\Mesh_\nu) \quad \; \int_{\omega_\nu} q_\nu \Div u_\nu = \int_{\omega_\nu} \left ( A_\ell^\nu(q_\nu \Phi_1^\nu) - I_\ell^\nu(q_\nu \Phi_1^\nu) \right )q. \end{aligned} \end{equation} This problem is uniquely solvable, according to~\cite[Corollary~6.2]{Guzman.Neilan:18}. Then, we set \begin{equation*} \label{local-right-inverse-star} R_\ell^{\nu} q := u_\nu \quad \text{in} \;\; \omega_\nu \qquad \text{and} \qquad R_\ell^{\nu} q := 0 \quad \text{in} \;\; \Omega \setminus \omega_\nu. \end{equation*} \begin{remark}[Local problems] \label{R:local-problem-star} The use of the barycentric refinement $\Mesh_\nu$ is a main difference compared to~\cite{Lederer.Linke.Merdon.Schoberl:17}. This ensures that the pair $\Polybnd{\ell}{1}(\Mesh_\nu)^2/ \Polyavg{\ell-1}{0}(\Mesh_\nu)$ is inf-sup stable. In fact, it is known that the stability of the Scott-Vogelius pair on $\omega_\nu$ (without the barycentric refinement) may be impaired if $\nu$ is a singular or nearly singular vertex, see~\cite{Scott.Vogelius:85}. The partition of unity $\{ \Phi_1^\nu \}_{\nu \in \Nodes{}}$ and the interpolants $\{ I_\ell^\nu \}_{\nu \in \Nodes{}}$ account for the overlapping of the patches, while the averaging operators $\{ A_\ell^\nu \}_{\nu \in \Nodes{}}$ are used to avoid a direct computation of the discrete divergence in~\eqref{Hood-Taylor-divergence-correction}. \end{remark} We define a global divergence correction $R_h^{ht}: (\Polybnd{\ell}{1})^2 \to \SobH{}^2$ \begin{equation} \label{Hood-Taylor-divergence-correction} R_h^{ht} v_h := \sum_{\nu \in \Nodes{} } R_\ell^\nu \Div v_h. 
\end{equation} In contrast to $E_h^{ub}$ and $E_h^{cr}$ from~\eqref{Pl-Pl-2-smoother} and~\eqref{CR-smoother}, respectively, we now make use of a smoothing operator $E_h^{ht}$ which is not guaranteed to be divergence-preserving, i.e.~\eqref{conservation-divergence} may fail to hold. We shall see, however, that it still satisfies the necessary conditions in Lemmas~\ref{L:quasi-optimal-nec} and~\ref{L:quasi-optimal-press-robust-nec}. In the following proposition we only prove a basic stability estimate, for the sake of simplicity. \begin{proposition}[Smoothing operator for the Hood-Taylor pair] \label{P:Hood-Taylor-smoother} The linear operator $E_h^{ht}: (\Polybnd{\ell}{1})^2 \to \SobH{}^2$ given by \begin{equation*} \label{Hood-Taylor-smoother} E_h^{ht} v_h := v_h + R_h^{ht} v_h \end{equation*} satisfies~\eqref{quasi-optimal-nec-div} and~\eqref{quasi-optimal-press-robust-nec-div} and is such that, for all $v_h \in (\Polybnd{\ell}{1})^2$, \begin{equation} \label{Hood-Taylor-smoother-stability} \NormLeb{\Grad(v_h - E_h^{ht} v_h)}{} \leq c \NormLeb{\Div v_h}{}. \end{equation} \end{proposition} \begin{proof} For all $v_h \in (\Polybnd{\ell}{1})^2$ and $q_h \in \Polyavg{\ell-1}{1}$, we have \begin{equation*} \int_\Omega q_h \Div R_h^{ht} v_h = \sum_{\nu \in \Nodes{}} \int_{\omega_\nu} ( A_\ell^\nu (q_h \Phi_1^\nu) - I_\ell^\nu (q_h \Phi_1^\nu) )\Div v_h = 0. \end{equation*} The first identity follows from the second equation of \eqref{local-problem-div-star}, which actually holds for all $q_\nu$ in $\Polypiec{\ell-1}{0}(\Mesh_\nu)$ (and not only in $\Polyavg{\ell-1}{0}(\Mesh_\nu)$), as both sides vanish if $q_\nu$ is constant. To check the second identity, observe that $A_\ell^\nu (q_h \Phi_1^\nu) = I_\ell^\nu (q_h \Phi_1^\nu)$ for all $\nu \in \Nodes{}$, due to the continuity of $q_h \Phi_1^\nu$. 
Thus, we derive the identity \begin{equation*} \int_\Omega q_h \Div E_h^{ht} v_h = \int_\Omega q_h \Div v_h \end{equation*} showing that condition~\eqref{quasi-optimal-nec-div} holds. Next, let $z_h \in Z_h^{ht}$ be given and consider $q_h = \Div E_h^{ht} z_h $. Recall that $\{ \Phi_1^\nu \}_{\nu \in \Nodes{}}$ is a partition of unity and extend $I_\ell^\nu(q_h \Phi_1^\nu)$ to zero outside $\omega_\nu$. We infer $\sum_{\nu \in \Nodes{}} I_\ell^\nu(q_h \Phi_1^\nu) = q_h$. Then, since $z_h$ is discretely divergence-free, we have \begin{equation*} \NormLeb{q_h}{}^2 = \int_\Omega q_h \Div z_h - \sum_{\nu \in \Nodes{}} \int_\Omega I_\ell^\nu(q_h \Phi_1^\nu) \Div z_h = 0. \end{equation*} This reveals $\Div E_h^{ht} z_h = 0$ and confirms that condition~\eqref{quasi-optimal-press-robust-nec-div} holds. Finally, owing to the stability of $I_\ell^\nu$ and $A_\ell^\nu$ in the $L^2(\omega_\nu)$-norm, we infer \begin{equation*} \sup_{q_\nu \in \Polyavg{\ell-1}{0}(\Mesh_\nu)} \dfrac{\int_{\omega_\nu} \left ( A_\ell^\nu(q_\nu \Phi_1^\nu) - I_\ell^\nu(q_\nu \Phi_1^\nu) \right ) \Div v_h}{\NormLeb{q_\nu}{{\omega_\nu}}} \leq c \NormLeb{\Div v_h}{\omega_\nu} \end{equation*} for all $\nu \in \Nodes{}$ and $v_h \in (\Polybnd{\ell}{1})^2$. This entails $\NormLeb{\Grad R_\ell^\nu \Div v_h}{\omega_\nu} \lesssim \NormLeb{\Div v_h}{\omega_\nu}$, owing to \cite[Corollary~4.2.1]{Boffi:Brezzi:Fortin.13} and the inf-sup stability of the pair $\Polybnd{\ell}{1}(\Mesh_\nu)^2/ \Polyavg{\ell-1}{0}(\Mesh_\nu)$ stated in \cite[Corollary~6.2]{Guzman.Neilan:18}. The definition of $R_h^{ht}$ in~\eqref{Hood-Taylor-divergence-correction} then implies \begin{equation*} \NormLeb{\Grad R_h^{ht} v_h}{K} \lesssim \sum_{K' \cap K \neq \emptyset} \NormLeb{\Div v_h}{K'} \end{equation*} for all $K \in \mathcal{M}$, where $K'$ varies in $\mathcal{M}$. We conclude summing over all elements of $\mathcal{M}$ and recalling the definition of $E_h^{ht}$.
\end{proof} Next, for $\eta > 1$, we introduce the following bilinear form on $(\Polybnd{\ell}{1})^2$ \begin{equation*} \label{Hood-Taylor-bilinear-form} a_h^{ht}(w_h, v_h) := \int_\Omega \Grad E_h^{ht} w_h \colon \Grad E_h^{ht} v_h + (\eta-1) \int_\Omega \Grad R_h^{ht} w_h \colon \Grad R_h^{ht} v_h. \end{equation*} The abstract discretization~\eqref{Stokes-disc} with $a_h = a_h^{ht}$ and $E_h = E_h^{ht}$ looks for $u_h \in (\Polybnd{\ell}{1})^2$ and $p_h \in \Polyavg{\ell-1}{1}$ such that \begin{equation} \label{Stokes-Hood-Taylor} \begin{alignedat}{2} &\forall v_h \in (\Polybnd{\ell}{1})^2 &\qquad \mu \,a_h^{ht}(u_h, v_h) -\int_\Omega p_h \Div v_h &= \left\langle f , E_h^{ht} v_h \right\rangle \\ &\forall q_h \in \Polyavg{\ell-1}{1} &\qquad \int_\Omega q_h \Div u_h &= 0. \end{alignedat} \end{equation} This discretization is computationally feasible in the sense of Remark~\ref{R:computational-feasibility}, cf. Remark~\ref{R:Pl-Pl-2-feasibility}. Yet, the implementation is more costly than the one of~\eqref{Stokes-Pl-Pl-2} and~\eqref{Stokes-CR} because, in general, we cannot resort to one reference configuration for the solution of the local problems~\eqref{local-problem-div-star}. The error analysis of~\eqref{Stokes-Hood-Taylor} proceeds almost verbatim as in section~\ref{SS:error-estimates}, with the help of Proposition~\ref{P:Hood-Taylor-smoother}. The only remarkable difference is that estimate~\eqref{Pl-Pl-2-pressure-error} in the proof of Theorem~\ref{T:Pl-Pl-2-pressure-error} should be replaced by the weaker one $\NormLeb{p_h-q_h}{} \lesssim \mu \eta \NormLeb{\Grad(u-u_h)}{} + \NormLeb{p-q_h}{}$, because identity~\eqref{conservation-divergence} may fail to hold. \section{Numerical experiments with the unbalanced $\Poly{2}/\Poly{0}$ pair} \label{S:numerics} In this section we restrict our attention to the two-dimensional Stokes equations, with unit viscosity, posed in the unit square. 
In the notation of section~\ref{S:abstract-framework}, this corresponds to \begin{equation*} \label{numerics-setting} d = 2 \qquad \qquad \mu = 1 \qquad \qquad \Omega = (0,1)^2. \end{equation*} We investigate numerically the new discretization~\eqref{Stokes-Pl-Pl-2}, based on the unbalanced $\Poly{2}/\Poly{0}$ pair, i.e. \begin{equation*} \label{P2-P0-pair} V_h = (\Polybnd{2}{1})^2 \qquad \text{and} \qquad Q_h = \Polyavg{0}{0}, \qquad b_h(v_h, q_h) = -\int_\Omega q_h \Div v_h. \end{equation*} Unless specified otherwise, the penalty parameter is set to \begin{equation*} \label{numerics-penalty} \eta = 2. \end{equation*} We shall consider the following families $(\mathcal{M}_N^D)_{N \in \mathbb{N}_0}$ and $(\mathcal{M}_N^C)_{N \in \mathbb{N}_0}$ of triangular meshes of $\Omega$. For $N \in \mathbb{N}_0$, we divide $\Omega$ into $2^N \times 2^N$ identical squares, with edges parallel to the $x_1$- and $x_2$-axis and with area $2^{-2N}$. We obtain the ``diagonal mesh'' $\mathcal{M}_N^D$ by dividing each square by the diagonal with positive slope. Similarly, we obtain the ``crisscross mesh'' $\mathcal{M}_N^C$ by drawing both diagonals of each square, cf. Figure~\ref{F:meshes}. All experiments have been implemented in ALBERTA 3.0~\cite{Heine.Koester.Kriessl.Schmidt.Siebert,Schmidt.Siebert:05}. 
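The mesh construction just described is easy to reproduce. The following Python sketch (an illustration, not part of the ALBERTA implementation; all names are ours) builds the triangle lists of $\mathcal{M}_N^D$ and $\mathcal{M}_N^C$ and confirms the expected triangle counts $2 \cdot 4^N$ and $4^{N+1}$:

```python
import itertools

def diagonal_mesh(N):
    """Triangles of the diagonal mesh: each of the 2^N x 2^N squares
    is split by the diagonal with positive slope."""
    h = 2.0 ** (-N)
    tris = []
    for i, j in itertools.product(range(2 ** N), repeat=2):
        x, y = i * h, j * h
        sw, se, ne, nw = (x, y), (x + h, y), (x + h, y + h), (x, y + h)
        tris.append((sw, se, ne))  # below the diagonal sw -> ne
        tris.append((sw, ne, nw))  # above the diagonal
    return tris

def crisscross_mesh(N):
    """Triangles of the crisscross mesh: both diagonals of each square
    are drawn, i.e. four triangles meeting at the square's center."""
    h = 2.0 ** (-N)
    tris = []
    for i, j in itertools.product(range(2 ** N), repeat=2):
        x, y = i * h, j * h
        c = (x + h / 2.0, y + h / 2.0)
        corners = [(x, y), (x + h, y), (x + h, y + h), (x, y + h)]
        for k in range(4):
            tris.append((corners[k], corners[(k + 1) % 4], c))
    return tris

def tri_area(t):
    (x0, y0), (x1, y1), (x2, y2) = t
    return abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0
```

For $N = 2$ this yields $32$ and $64$ triangles, respectively, and in both families the areas sum to $|\Omega| = 1$.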
\begin{figure}[htp] \hfill \subfloat{\includegraphics[width=0.4\hsize]{Mesh-Diagonal.png}} \hfill \subfloat{\includegraphics[width=0.4\hsize]{Mesh-Crisscross.png}} \hfill \caption{Diagonal mesh $\mathcal{M}_N^D$ (left) and crisscross mesh $\mathcal{M}_N^C$ (right) with $N=2$.} \label{F:meshes} \end{figure} \subsection{Smooth solution} \label{SS:numerics-smooth} To illustrate the quasi-optimality and pressure robustness of the new $\Poly{2}/\Poly{0}$ discretization, we first consider a test case with smooth analytical solution, given by \begin{equation*} u(x_1, x_2) = \Curl(x_1^2(1-x_1)^2x_2^2(1-x_2)^2 ) \qquad p(x_1, x_2) = \sin(2\pi x_1) \sin(2\pi x_2) \end{equation*} where $\Curl(w) := (\partial_2 w, -\partial_1 w)$. We compare the performance of the standard $\Poly{2}/\Poly{0}$ discretization~\eqref{Stokes-Pl-Pl-2-standard} and the new one~\eqref{Stokes-Pl-Pl-2} on the crisscross meshes $\mathcal{M}_N^C$ with $N=0, \dots, 8$. Figure~\ref{F:smooth} displays the velocity $H^1$-error and the pressure $L^2$-error of both discretizations versus $\#\mathcal{M}_N^C$, that is the number of triangles in the mesh. We first observe that the pressure $L^2$-errors of both discretizations behave quite similarly and converge to zero with the maximum decay rate $(\#\mathcal{M}_N^C)^{-0.5}$. The velocity $H^1$-error of the standard discretization converges to zero with the same decay rate, as suggested by estimate~\eqref{intro:conforming-est}, according to the approximation power of the discrete pressure space in the $L^2$-norm. Note, however, that this rate is suboptimal with respect to the approximation power of the discrete velocity space in the $H^1$-norm. In contrast, the velocity $H^1$-error of the new discretization exhibits the maximum decay rate $(\#\mathcal{M}_N^C)^{-1}$, as predicted by Theorem~\ref{T:Pl-Pl-2-velocity-error}. The next experiments are intended to highlight some of the ingredients that make this optimal-order convergence possible. 
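As a quick sanity check, independent of the finite element code, one can verify that the analytical velocity above is divergence-free and vanishes on $\partial\Omega$. Writing the stream function as $\psi(x_1,x_2) = f(x_1)f(x_2)$ with $f(t) = t^2(1-t)^2$, we have $u = (f(x_1)f'(x_2),\, -f'(x_1)f(x_2))$, so $\operatorname{div} u = f'(x_1)f'(x_2) - f'(x_1)f'(x_2) = 0$ identically; the sketch below (with $f'$ written out by hand) checks this numerically via finite differences:

```python
def f(t):
    # f(t) = t^2 (1 - t)^2, the one-dimensional factor of the stream function
    return t ** 2 * (1 - t) ** 2

def df(t):
    # f'(t) = 2 t (1 - t)(1 - 2 t); note f(0) = f(1) = f'(0) = f'(1) = 0
    return 2 * t * (1 - t) * (1 - 2 * t)

def u(x1, x2):
    # u = Curl(psi) = (d_2 psi, -d_1 psi) with psi(x1, x2) = f(x1) f(x2)
    return (f(x1) * df(x2), -df(x1) * f(x2))

def div_u_fd(x1, x2, h=1e-6):
    # central finite-difference approximation of div u = d_1 u_1 + d_2 u_2
    return ((u(x1 + h, x2)[0] - u(x1 - h, x2)[0])
            + (u(x1, x2 + h)[1] - u(x1, x2 - h)[1])) / (2 * h)
```

Since $f$ and $f'$ vanish at $0$ and $1$, the velocity vanishes on the whole boundary, and the finite-difference divergence is zero up to roundoff at any interior point.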
\begin{figure}[htp] \hfill \subfloat{\includegraphics[width=0.5\hsize]{Smooth-Vel.png}} \hfill \subfloat{\includegraphics[width=0.5\hsize]{Smooth-Pre.png}} \hfill \caption{Test case \S\ref{SS:numerics-smooth}. Velocity $H^1$-error (left) and pressure $L^2$-error (right) of standard ($*$) and new ($\circ$) $\Poly{2}/\Poly{0}$ discretizations. Plain and dashed lines indicate decay rates $(\#\mathcal{M}_N^C)^{-0.5}$ and $(\#\mathcal{M}_N^C)^{-1}$, respectively.} \label{F:smooth} \end{figure} \subsection{Composite numerical quadrature} \label{SS:numerics-quadrature} The evaluation of the duality $\left\langle f, E_h v_h\right\rangle $, $v_h \in (\Polybnd{2}{1})^2$, in the new $\Poly{2}/\Poly{0}$ discretization requires, in particular, the evaluation of $\left\langle f, \widetilde{v}_h\right\rangle$ for test functions $\widetilde{v}_h$ that are element-wise quadratic on the barycentric refinement of the mesh at hand. This suggests that, for each triangle $K$ in the mesh, a composite quadrature rule, based on the barycentric refinement of $K$, should be used. If one, instead, uses a standard quadrature rule in $K$, the resulting quadrature error may not be negligible, due to the low regularity of $\widetilde{v}_h$. Moreover, since the quadrature error is potentially not pressure robust, as pointed out in~\cite[section~6.2]{Linke.Merdon.Neilan.Neumann:18}, this may even affect the decay rate of the velocity $H^1$-error. To illustrate this effect, we consider a test case with analytical solution \begin{equation*} u(x_1, x_2) = \Curl(x_1^2(1-x_1)^2x_2^2(1-x_2)^2 ) \qquad p(x_1, x_2) = \alpha\sin(2\pi x_1) \sin(2\pi x_2). \end{equation*} For $\alpha \in \{1, 10^3\}$, we apply the new $\Poly{2}/\Poly{0}$ discretization on the crisscross meshes $\mathcal{M}_N^C$ with $N=0,\dots,8$. We assemble the right-hand side both with a composite and a standard quadrature rule of degree $6$. 
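The composite rule just described amounts to splitting each triangle at its barycenter into three subtriangles and applying a base rule on each. The following sketch illustrates the idea with a one-point barycenter rule as base rule for brevity (the actual experiments use a rule of degree $6$; the function names are ours):

```python
def tri_area(tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    return abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0

def barycentric_refinement(tri):
    # connect the barycenter to the three vertices -> three subtriangles
    a, b, c = tri
    g = tuple((a[k] + b[k] + c[k]) / 3.0 for k in range(2))
    return [(a, b, g), (b, c, g), (c, a, g)]

def quad_midpoint(tri, f):
    # one-point (barycenter) rule, exact for affine integrands
    a, b, c = tri
    g = [(a[k] + b[k] + c[k]) / 3.0 for k in range(2)]
    return tri_area(tri) * f(g[0], g[1])

def composite_quad(tri, f):
    # the same base rule, applied on each cell of the barycentric refinement
    return sum(quad_midpoint(t, f) for t in barycentric_refinement(tri))
```

The composite rule inherits the exactness of the base rule on each subtriangle, so it remains accurate for integrands that are only piecewise smooth across the barycentric refinement, which is precisely the situation for the test functions $\widetilde{v}_h$ above.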
For $N=4,\dots, 8$, the corresponding velocity $H^1$-errors are reported in Table~\ref{F:quadrature}. In each case, we compute also the so-called experimental order of convergence (EOC), defined as \begin{equation*} \label{EOC} \mathrm{EOC}_N := \frac{\log(e_N / e_{N-1})}{\log(\#\mathcal{M}_{N-1}^C / \#\mathcal{M}_N^C)} = \frac{\log(e_{N-1} / e_N)}{\log 4} \end{equation*} where $e_N$ denotes the $H^1$-error on $\mathcal{M}_N^C$. When the composite quadrature rule is applied, the results seem insensitive to the parameter $\alpha$ and we observe the maximum decay rate $(\#\mathcal{M}_N^C)^{-1}$. In contrast, the use of the standard quadrature rule impairs the pressure robustness stated in Theorem~\ref{T:Pl-Pl-2-velocity-error}. In fact, for sufficiently large $N$, the velocity $H^1$-error is essentially proportional to $\alpha$ and exhibits the suboptimal decay rate $(\#\mathcal{M}_N^C)^{-0.5}$. \begin{table}[htp] \begin{minipage}[c]{0.49\linewidth} \centering \begin{tabular}{|r|c|c|} & $\alpha = 1$ & $\alpha = 10^3$ \\ N & $H^1$-error \hspace{1pt} EOC & $H^1$-error \hspace{1pt} EOC \\[1ex] \hline && \\[-1.5ex] 4 & 3.32e-04 \hspace{27pt} & 3.32e-04 \hspace{27pt} \\ 5 & 8.31e-05 \hspace{5pt} \raisebox{1.5ex}[0pt]{1.00} & 8.31e-05 \hspace{5pt} \raisebox{1.5ex}[0pt]{1.00} \\ 6 & 2.08e-05 \hspace{5pt} \raisebox{1.5ex}[0pt]{1.00} & 2.08e-05 \hspace{5pt} \raisebox{1.5ex}[0pt]{1.00} \\ 7 & 5.19e-06 \hspace{5pt} \raisebox{1.5ex}[0pt]{1.00} & 5.19e-06 \hspace{5pt} \raisebox{1.5ex}[0pt]{1.00} \\ 8 & 1.30e-06 \hspace{5pt} \raisebox{1.5ex}[0pt]{1.00} & 1.30e-06 \hspace{5pt} \raisebox{1.5ex}[0pt]{1.00} \end{tabular} \end{minipage} \hfill \begin{minipage}[c]{0.49\linewidth} \centering \begin{tabular}{|r|c|c|} & $\alpha = 1$ & $\alpha = 10^3$\\ N & $H^1$-error \hspace{1pt} EOC & $H^1$-error \hspace{1pt} EOC \\[1ex] \hline && \\[-1.5ex] 4 & 3.57e-04 \hspace{27pt} & 1.29e-01 \hspace{27pt} \\ 5 & 1.07e-04 \hspace{5pt} \raisebox{1.5ex}[0pt]{0.87} & 6.72e-02 \hspace{5pt} 
\raisebox{1.5ex}[0pt]{0.47} \\ 6 & 4.01e-05 \hspace{5pt} \raisebox{1.5ex}[0pt]{0.71} & 3.41e-02 \hspace{5pt} \raisebox{1.5ex}[0pt]{0.49} \\ 7 & 1.80e-05 \hspace{5pt} \raisebox{1.5ex}[0pt]{0.58} & 1.71e-02 \hspace{5pt} \raisebox{1.5ex}[0pt]{0.50} \\ 8 & 8.72e-06 \hspace{5pt} \raisebox{1.5ex}[0pt]{0.52} & 8.57e-03 \hspace{5pt} \raisebox{1.5ex}[0pt]{0.50} \end{tabular} \end{minipage} \vspace{1ex} \caption{Test case \S\ref{SS:numerics-quadrature}. Velocity $H^1$-errors of the new $\Poly{2}/\Poly{0}$ discretization and corresponding EOCs with composite (left) or standard (right) quadrature rules for $\alpha \in \{1, 10^3\}$.} \label{F:quadrature} \end{table} \subsection{Locking} \label{SS:numerics-locking} As mentioned in Remark~\ref{R:connection-DG}, the bilinear form $a_h^{ub}$ in the new $\Poly{2}/\Poly{0}$ discretization has the same structure as the DG-SIP form of~\cite{Arnold:82}. Still, one main difference is that Lemma~\ref{L:Pl-Pl-2-coercivity} ensures the coercivity of the former for any penalty $\eta>1$ (and not only for sufficiently large $\eta$). Moreover, the coercivity constant is $\geq 0.5$ for $\eta = 2$. Having an explicit and safe choice of the penalty parameter is particularly useful in this context, because we may have locking for large $\eta$, in view of Remark~\ref{R:locking}. To illustrate this, we consider a test case with analytical solution \begin{equation*} u(x_1, x_2) = \Curl(x_1^2(1-x_1)^2x_2^2(1-x_2)^2 ) \qquad p(x_1, x_2) = (x_1 - 0.5)(x_2-0.5). \end{equation*} We apply the new $\Poly{2}/\Poly{0}$ discretization for $\eta \in \{ 2, 32, 512 \}$ both on diagonal meshes $\mathcal{M}_N^D$ and on crisscross meshes $\mathcal{M}_N^C$, with $N=0, \dots, 7$. The velocity $H^1$-errors displayed in the right part of Figure~\ref{F:locking} indicate that the new discretization is robust with respect to $\eta$ on crisscross meshes. 
This follows from the fact that condition~\eqref{best-errors-Pl-Pl-2} in Remark~\ref{R:locking} holds for such meshes, as a consequence of~\cite[Theorem~4.3.1]{Qin:1994}. In contrast, adopting the terminology of~\cite{Babuska.Suri:92b}, we observe, on the left part of Figure~\ref{F:locking}, locking of order $(\#\mathcal{M}_N^D)^{1/2}$ when diagonal meshes are used. \begin{figure}[htp] \hfill \subfloat{\includegraphics[width=0.5\hsize]{Locking-Diag.png}} \hfill \subfloat{\includegraphics[width=0.5\hsize]{Locking-Cris.png}} \hfill \caption{Test case \S\ref{SS:numerics-locking}. Velocity $H^1$-error of the new $\Poly{2}/\Poly{0}$ discretization on diagonal (left) and crisscross (right) meshes, for $\eta = 2$ ($+$), $\eta = 32$ ($\square$) and $\eta = 512$ ($\Diamond$). Plain and dashed lines indicate decay rates $(\#\mathcal{M}_N^*)^{-0.5}$ and $(\#\mathcal{M}_N^*)^{-1}$, with $*\in \{D, C\}$.} \label{F:locking} \end{figure} \subsection{Inhomogeneous continuity equation} \label{SS:numerics-inhomogeneous} We finally point out that the quasi-optimality and pressure robustness of the new $\Poly{2}/\Poly{0}$ discretization, as stated in Theorem~\ref{T:Pl-Pl-2-velocity-error}, hinge on the homogeneity of the continuity equation in the Stokes problem~\eqref{Stokes-weak}, cf. section~\ref{SS:inhomogeneous-continuity}. \begin{figure}[htp] {\includegraphics[width=0.6\hsize]{InhomCont.png}} \caption{Test case \S\ref{SS:numerics-inhomogeneous}. Velocity $H^1$-error of standard ($*$) and new ($\circ$) $\Poly{2}/\Poly{0}$ discretizations. 
Plain line indicates decay rate $(\#\mathcal{M}_N^C)^{-0.5}$.} \label{F:inhomogeneous} \end{figure} To see this, we consider the more general problem~\eqref{Stokes-inhom} and approximate the analytical solution \begin{equation*} u(x_1, x_2) = \left( \begin{tabular}{c} $x_1(1-x_1)x_2(1-x_2)$\\[2pt] $x_1(1-x_1)x_2(1-x_2)$ \end{tabular} \right) \qquad p(x_1, x_2) = (x_1 - 0.5)(x_2-0.5) \end{equation*} on the crisscross meshes $\mathcal{M}_N^C$ with $N=0,\dots,8$. Note, in particular, that $\Div u$ is not element-wise constant on $\mathcal{M}_N^C$. Comparing the velocity $H^1$-errors of the standard $\Poly{2}/\Poly{0}$ discretization~\eqref{Stokes-Pl-Pl-2-standard} and the new one~\eqref{Stokes-Pl-Pl-2}, we see that the former is slightly smaller than the latter and that both errors converge to zero with decay rate $(\#\mathcal{M}_N^C)^{-0.5}$; cf. Figure~\ref{F:inhomogeneous}. This confirms that inequality~\eqref{Pl-Pl-2-velocity-error-inhom} captures the correct behavior of the new discretization. Thus, for this problem, we expect that the new discretization performs significantly better than the standard one only in the case of large pressure $L^2$-errors. \subsection*{Acknowledgements} We wish to thank Rüdiger Verfürth for reading some preliminary versions of this manuscript and for suggesting several improvements in the presentation. \subsection*{Funding} The authors gratefully acknowledge partial support by the DFG research grant KR 3984/5-1 ``Convergence Analysis for Adaptive Discontinuous Galerkin Methods''.
https://arxiv.org/abs/math/9512204
On isometric reflexions in Banach spaces
We obtain the following characterization of Hilbert spaces. Let $E$ be a Banach space whose unit sphere $S$ has a hyperplane of symmetry. Then $E$ is a Hilbert space iff either of the following two conditions is fulfilled: a) the isometry group ${\rm Iso}\, E$ of $E$ has a dense orbit in $S$; b) the identity component $G_0$ of the group ${\rm Iso}\, E$, endowed with the strong operator topology, acts topologically irreducibly on $E$. Some related results on infinite dimensional Coxeter groups generated by isometric reflexions are given, which allow one to analyse the structure of isometry groups containing sufficiently many reflexions.
\section*{INTRODUCTION} Let $E$ be a real Banach space, $S=S(E)$ the unit sphere in $E$, ${\rm Iso}\, E$ the isometry group of $E$ endowed with the strong operator topology, and $G_0 = G_0(E)$ the identity component of ${\rm Iso}\, E$. {\it A reflexion} in $E$ is an operator of the form $s_{e, e^*} = 1_E - 2e^* \otimes e $, where $e\in E, e^* \in E^*$ and $e^*(e)=1$. If $s=s_{e, e^*} \in {\rm Iso}\, E$, then one may also assume that $ ||e||_E = ||e^*||_{E^*} =1$; in this case we will call $e$ {\it the reflexion vector} and $e^*$ {\it the reflexion functional}; regarded as sphere points, $e$ and $-e$ are called {\it reflexion points}. The unit sphere $S$ is symmetric with respect to {\it the mirror hyperplane} ${\rm Ker}\,e^*$ of $s$. It turns out that this imposes strong restrictions on the isometry group ${\rm Iso}\, E$. We say that a proper subspace $H \subset E$ is {\it biorthogonally complemented in} $E$ if there exists a bicontractive projection $p$ of $E$ onto $H$, i.e. such that $ ||p||_E = ||1_E - p||_E = 1$.\bigskip \noindent {\bf Theorem 1.} {\sl Let $s_{e, e^*}$ be an isometric reflection in $E$. Let $H = {\overline{\rm span}}(G_0\, e)$ be the minimal closed subspace of $E$ containing the orbit $G_0\, e$. Then \smallskip \noindent a) $H$ is a Hilbert space, and either $H$ is biorthogonally complemented in $E$ or $H = E$; \smallskip \noindent b) furthermore, there exists a projection $p$ of $E$ onto $H$ such that \smallskip \noindent i) $1_E - 2p \in {\rm Iso}\, E$, \noindent ii) $(1_E - p) + {\bar u} p \in {\rm Iso}\, E$ for any ${\bar u} \in {\rm O}(H) = {\rm Iso} H$, and \noindent iii) any $g \in {\rm Iso}\, E$ such that $g \vert H = {\bar u} \in {\rm O}(H)$ has the form $g= {\bar v} (1_E - p) + {\bar u} p$, where ${\bar v} = g \vert {\rm Ker}\, p \in {\rm Iso}\, {\rm Ker}\, p$; \smallskip \noindent c) the orbit $G_0\, e$ coincides with the unit sphere $S(H)$ of $H$. 
}\bigskip This subject is related to the following Banach - Mazur rotation problem ([3], p.242): \smallskip {\sl Let $E$ be a separable Banach space such that the group ${\rm Iso}\, E$ acts transitively on the unit sphere $S$. Is it true that $E$ is a Hilbert space?} \smallskip Recall (see [23, Ch.IX], \S 6) that the group ${\rm Iso}\, L_p$, where $L_p = L_p [0;1]$ and $1 \le p \ne 2 < \infty$, has exactly two orbits on the unit sphere $S_p = S(L_p)$. One of them consists of the functions in $S_p$ whose zero set has positive measure, and the other one contains the rest. Thus, both orbits are dense in $S_p$. One says that the group ${\rm Iso}\, E$ acts {\sl almost transitively} on $S$ if it has a dense orbit in $S$. This is the case in the above examples and also in the anisotropic spaces $L_{pq}$. In a non-separable $L_p$-space the second of the above two orbits is empty, and thus it is a non-Hilbert Banach space with the isometry group acting transitively on the unit sphere. This shows that the assumption of separability in the Banach - Mazur problem is essential. Observe that ${\rm Iso}\, E$ is a Banach-Lie group. If this group is transitive on the unit sphere $S$, then $S$ is a homogeneous space of ${\rm Iso}\, E$. If in addition $S$ has a hyperplane of symmetry $L$, it should be a symmetric space. Indeed, $L$ is a mirror hyperplane of an isometric reflection. The unit sphere $S$ having a reflexion point, by transitivity each point $x \in S$ should be a reflexion point of an isometric reflexion $s = s_{x, x^*}$. Furthermore, $x$ is an isolated fixed point of the involution $-s\,|\,S$ which acts as $-1$ at the supporting hyperplane $x^* = 1$ to $S$ at $x$ (we are grateful to J. Arazy for this remark). From Theorem 2 below it follows that, $S$ being a symmetric space of the group ${\rm Iso}\, E$, $E$ should be a Hilbert space. In fact, Theorem 2 gives stronger criteria for $E$ to be a Hilbert space. They hold without the separability assumption. 
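To make the reflexion operators from the introduction concrete: in the Euclidean case $e^* = \langle\,\cdot\,, e\rangle$ for a unit vector $e$, and $s_{e,e^*} = 1_E - 2e^*\otimes e$ is an involutive isometry sending $e$ to $-e$. A small numerical check of these three properties in ${\bf R}^4$ (the vectors below are chosen arbitrarily; this is only an illustration of the definition):

```python
import math
import random

def reflexion(e):
    """Matrix of s = I - 2 e e^T in R^n for a unit vector e
    (Euclidean case: e^*(x) = <x, e>, so e^*(e) = 1)."""
    n = len(e)
    return [[(1.0 if i == j else 0.0) - 2.0 * e[i] * e[j] for j in range(n)]
            for i in range(n)]

def apply_op(m, x):
    return [sum(m[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def norm2(x):
    return math.sqrt(sum(t * t for t in x))

random.seed(1)
v = [random.gauss(0.0, 1.0) for _ in range(4)]
e = [t / norm2(v) for t in v]              # unit reflexion vector
s = reflexion(e)
x = [random.gauss(0.0, 1.0) for _ in range(4)]
sx = apply_op(s, x)
assert math.isclose(norm2(sx), norm2(x))                  # s is an isometry
assert all(math.isclose(a, b, abs_tol=1e-12)
           for a, b in zip(apply_op(s, sx), x))           # s^2 = 1
assert all(math.isclose(a, -b, abs_tol=1e-12)
           for a, b in zip(apply_op(s, e), e))            # s(e) = -e
```

The mirror hyperplane ${\rm Ker}\, e^*$ is exactly the fixed-point set of $s$, since $s(x) = x - 2e^*(x)e$.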
\bigskip \noindent { \bf Theorem 2.} {\sl Let the group ${\rm Iso}\, E$ contain a reflection $s_{e, e^*}$ along the vector $e \in S$. Then $E$ is a Hilbert space iff either of the following two conditions is fulfilled: \noindent a) ${\rm Iso}\, E$ acts almost transitively on $S$; \noindent b) $e$ is a cyclic vector of the strong identity component $G_0$ of ${\rm Iso}\, E$ (i.e. $E = {\overline {\rm span}} (G_0\, e)$).} \bigskip The second statement is a corollary of Theorem 1; the first one, being much simpler, is proven along the same lines. By a theorem of Godement [9] any isometric operator in a Banach space has a non-trivial invariant subspace (see also [28] for a more general fact). From Theorem 1 one obtains the following \bigskip \noindent { \bf Corollary.} {\sl Let $E$ be a non-Hilbert Banach space. If there is an isometric reflexion in $E$, then all operators in $G_0(E)$ have a common non-trivial invariant Hilbert subspace $H$, biorthogonally complemented in $E$. Moreover, if $G_0(E)$ is a non-trivial group, then ${\rm dim}\, H > 1$. In particular, in this case there is an orthogonally complemented euclidean plane in $E$.} \bigskip Note that by a theorem of Yu. Lyubich [20] if a finite dimensional Banach space has an infinite isometry group, i.e. if the group $G_0(E)$ is non-trivial, then $E$ has a euclidean plane $L$ with a contractive projection $p : E \to L$ (in this case $L$ is called {\it orthogonally complemented} in $E$) (see also [16], [21]). On the other hand, there are Banach spaces of infinite dimension with big isometry groups, but without any orthogonally complemented euclidean subspace of dimension greater than $1$. Indeed, $L_p = L_p [0;1]$, where $1<p \ne 2 < \infty$, contains no such subspace, while the group $G_0$ is non-trivial. Furthermore, there is no bicontractive projection of $L_p\,\, (p \ne 2)$ onto a hyperplane [13, 14]; in particular, there is no isometric reflexion. 
The same is true in general for rearrangement-invariant (r.i.) ideal Banach lattices, or symmetric spaces, of (classes of) measurable functions different from $L_2$ [14, Theorem 4.4]. Recall [17, 19] that a r.i. (or symmetric) space $E$ on the interval [0;1] satisfies the following axioms: 1) $1 \in E$ and $ ||1||_E =1$. 2) For any measure-preserving transformation $\alpha$ of the interval [0;1] the {\it shift operator} $T_{\alpha} : x(t) \rightarrow x(\alpha(t))$ acts isometrically in $E$. 3) If $x(t) \in E$ and $ |y(t)| \le |x(t)| $ a.e., then $y(t) \in E$ and $ ||y(t)||_E \le ||x(t)||_E$. If $E$ is a r.i. space different from $L_2$, then every $g \in {\rm Iso}\, E$ has {\it a weighted shift representation} $g: x(t) \rightarrow h(t)x(\phi(t))$, where $h = g(1) \in E$ and $\phi$ is a transformation of $[0;1]$ preserving measurability (see [30, 31] for the complex case and [13,14] for the real one; see also [1], [18], [22], [29]. As for symmetric sequence spaces, see [23, Ch.IX], [2], [6], [8]). Furthermore, $\phi$ should be measure-preserving except in the case where $E$ coincides with some $L_p$, possibly endowed with a new equivalent norm [30] (see also [14], [18], [22]). In particular, this shows that the spaces $L_p$ are the only r.i. spaces where the orbits of the isometry group are dense in the unit sphere.\\ 
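A finite-dimensional toy version of the weighted shift representation may help fix ideas: in $\ell_p^n$, permuting the coordinates and multiplying by signs (the discrete analogue of $x(t) \mapsto h(t)x(\phi(t))$ with $|h| = 1$ and $\phi$ measure-preserving) is an isometry for every $p$. A minimal numerical check, with arbitrarily chosen data:

```python
import math
import random

def lp_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def weighted_shift(x, perm, signs):
    # discrete analogue of x(t) -> h(t) x(phi(t)) with |h| = 1 and
    # phi measure-preserving: permute coordinates, then flip signs
    return [signs[i] * x[perm[i]] for i in range(len(x))]

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(5)]
perm = [3, 0, 4, 1, 2]              # a permutation of {0,...,4}
signs = [1, -1, 1, 1, -1]
for p in (1.0, 1.5, 3.0):
    assert math.isclose(lp_norm(weighted_shift(x, perm, signs), p),
                        lp_norm(x, p))
```

The point of the results quoted above is the converse direction: for $p \ne 2$ (and more generally in r.i. spaces different from $L_2$), isometries of this weighted shift form essentially exhaust the whole group ${\rm Iso}\, E$.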
It applies the notions of Hilbert and Coxeter partial orthogonal subspace decompositions, introduced earlier in this section. In the last section we give an application to isometry groups of the ideal generalized sequence spaces.\\ The main results of this paper were announced in [26]; see [27] for their proofs. For some reason, the proofs have never been published before. The present article contains some new facts, and the exposition of the old ones is quite different. \bigskip \section{ Isometric reflections in finite dimensional Banach spaces} Let $A$ be a set of reflexions in a real vector space $E$ and $W$ be the group generated by the reflexions in $A$. Denote by $\Gamma_{W, A}$ the Coxeter graph of $W$. Recall [5] that $\Gamma_{W, A}$ has $A$ as the set of vertices; two vertices are connected by an edge iff the corresponding reflexions do not commute. By $\Gamma_W$ we denote the full Coxeter graph of $W$, i.e. $ \Gamma_W = \Gamma_{W,R}$, where $R = R(W)$ is the set of all the reflexions in $W$. \bigskip \noindent {\bf 1.1. Lemma} ([5, Ch. V, 3.7]). {\sl A group $W$ generated by a set $A$ of orthogonal reflexions in ${\bf R}^n$ is irreducible iff the origin is the only fixed point of $W$ and the Coxeter graph $\Gamma_{W, A}$ is connected. In particular, $\Gamma_W$ is connected iff its subgraph $\Gamma_{W, A}$ is connected.} \bigskip Let $E$ be a finite dimensional Banach space. Then ${\rm Iso}\, E$ is a compact Lie group, and there exists a scalar product in $E$ invariant with respect to ${\rm Iso}\, E$. It can be defined, for instance, by averaging any given scalar product over the Haar measure on ${\rm Iso}\, E$. In general, such an invariant scalar product is not unique. Being orthogonal, two isometric reflexions in $E$ along the vectors $e_1, e_2 \in S(E)$ commute iff either $e_1 = \pm e_2$ or $e_1 \perp e_2$. \\ The proof of the following lemma is simple and can be omitted. \bigskip \noindent {\bf 1.2. 
Lemma.} {\sl Let a connected submanifold $M$ of ${\bf R}^n$ be invariant under a reflexion $s_{e,e^*}$ which fixes a point $x \in M$ and acts identically on the tangent space $T_x M$. Then $M$ is contained in the mirror hyperplane ${\rm Ker}\, e^*$. } \bigskip The main result of this section is the following \bigskip \noindent {\bf 1.3. Proposition.} {\sl Let $E$ be a real Banach space of dimension $n$. Let $G \subset {\rm Iso}\, E$ be a closed subgroup of a positive dimension which contains reflexions $t_1,\dots,t_n$ along linearly independent vectors $e_1, \dots,e_n$. Then there exists a subspace $H \subset E$ such that \noindent a) ${\rm dim} H \ge 2, \,\,\, H$ is euclidean and biorthogonally complemented in $E$; \noindent b) the unit sphere $S(H)$ of $H$ coincides with an orbit of the identity component $G_0$ of $G$; \noindent c) there exists a projection $p$ of $E$ onto $H$ such that $1_E - 2p \in G$ and $p$ commutes with any reflexion $t \in G$. Furthermore, $(1_E - p) + {\bar u}p \in G$ for any ${\bar u} \in O(H)$.} \bigskip \noindent {\sl Proof.} Fix an invariant scalar product in $E$ and identify $E$ with ${\bf R}^n$ in such a way that ${\rm Iso}\, E \subset O(n)$. Let $u_1, \dots, u_n$ be the system of vectors in ${\bf R}^n$ biorthogonal to the system $e_1, \dots, e_n$. Since $\dim G > 0$, the orbit $Gu_i$ has a positive dimension for at least one value of $i$, say for $i=1$. We may also assume that $u_1 \in S^{n-1}$, where $S^{n-1}$ is the euclidean unit sphere in ${\bf R}^n$. Let $M$ be the connected component of the orbit $G u_1$ which contains $u_1$. Since $u_1$ is fixed by any of the reflexions $t_i, i= 2,\dots, n$, $M$ is invariant under these reflexions, and hence the tangent space $T = T_{u_1} M$ is invariant, too. Thus for each $i = 2,\dots, n$ either $e_i \in T$ or $e_i \perp T$. 
Put $$A = \{i \in \{2, \dots, n\}\, | \,e_i \in T\}$$ and $$B = \{i \in \{2, \dots, n\} \,|\, e_i \perp T\}\,\,\, .$$ Since $G \subset O(n)$ and $u_1 \in S^{n-1}$, we have $M \subset S^{n-1}$, and so $T \subset T_{u_1} S^{n-1}$. Therefore $T\perp u_1$. It follows that $T \subset L$, where $L = {\rm span} (e_2, \dots, e_n)$, and therefore $T = {\rm span}(e_i\,|\, i \in A)$ (hereafter {\it span} means {\it the linear span}). Thus, $\dim M = \dim T = {\rm card} A$. Since $M$ is $t_i$-invariant for $i \in B$, by Lemma 1.2, $M$ is contained in the subspace $H = \{v \in E \,|\, v \perp e_i, i \in B\}$. It is easily seen that $k= \dim H = {\rm card} A + 1 = \dim M + 1 $. Thus $M$ is a closed submanifold of each of the unit spheres $S_r (H) = S_r (E) \cap H$, where $r= ||u_1 ||_E$, and $S^{k-1} = S^{n-1} \cap H$, of the same dimension $\dim M = \dim H - 1 = k - 1$. Hence $M$ coincides with both of them. At the same time, being connected $M$ coincides with the orbit $G_0 u_1 $. Here $k \ge 2$, since $\dim M > 0$. Therefore, $H$ is euclidean and the unit sphere $S(H)$ coincides with the orbit $G_0 (u_1 /r)$. Since $T \subset H$ and $e_i \in T$ for each $i \in A$, where ${\rm card} A = k-1>0$, there exists $i_0 \in A$ such that $e_{i_0} \in S(H)$, and thus $S(H) = G_0 e_{i_0}$. Let $w_1, \dots , w_k \in H$ be an orthogonal basis in $H$ with $||w_i||_E =1, i=1, \dots, k$, and $g_1, \dots, g_k \in G$ be such that $g_j (e_{i_0}) = w_j$. Then $s_j = g_j t_{i_0} g_j ^{-1} \in G$ is the orthogonal reflexion along the vector $w_j, j=1, \dots, k$. By the same reasoning as above, for any vector $w \in S(H)$ the orthogonal reflexion $s_{w, w^*}$ along $w$ belongs to $G$. The reflexions $s_j , j=1, \dots , k$, pairwise commute, and so $p= {1\over 2}(1_E - \prod _{i=1}^k s_i)$ is the orthogonal projection of $E$ onto $H$ such that $\tau =1_E - 2p = \prod _{i=1}^k s_i \in G \subset {\rm Iso}\, E$. 
Thus, $||p||_E = {1 \over 2}||1_E - \tau||_E =1$ and either $E=H$ or $||1_E - p||_E = {1 \over 2} ||1_E + \tau||_E = 1$. Therefore, $H$ is a biorthogonally complemented subspace of $E$. Any orthogonal reflexion $\bar s $ in $H$ coincides with the restriction to $H$ of some reflexion $s \in G$, where in fact $s = (1_E - p) + {\bar s}p$. The same is true for any orthogonal operator ${\bar u} \in O(H)$; indeed, the group $O(H)$ is generated by orthogonal reflexions. Let $t \in G$ be a reflexion. The mirror hyperplane of $t$ intersects $H$ in a subspace of dimension $k-1 >0$. Therefore, $t$ has a fixed point on the sphere $M=S^{k-1} \subset H$, and so $t(M) \cup M$ is connected and contained in the orbit $Gu_1$. It follows that $t(M) = M$, $H$ is invariant with respect to $t$ and so $t$ and $p$ commute. This completes the proof. \,\,\,$\bigcirc$ \bigskip \noindent {\bf 1.4. Corollary.} {\sl Let $W$ be a group generated by isometric reflexions in a finite dimensional Banach space $E$. If $W$ is irreducible and infinite, then $E$ is euclidean and $W$ is dense in the orthogonal group ${\rm Iso}\, E \approx {\it O}(n), n=\dim E$. } \bigskip \noindent {\sl Proof.} Let $G$ be the closure of $W$ in ${\rm Iso}\, E$ and $G_0$ be the identity component of $G$. Since $W$ is irreducible, by Lemma 1.1, it contains $n$ reflexions along linearly independent vectors, and the Coxeter graph $\Gamma _W$ is connected. Let $H$ be the euclidean subspace of $E$ constructed in Proposition 1.3. Since, by (c), $H$ is invariant with respect to the reflexions from $W$, for each $s_{e,e^*} \in W$ either $e \in H$ or $e \perp H$. If $A$ resp. $B$ is the set of reflexions from $W$ of the first resp. second type, then each element of $A$ commutes with every element of $B$. By the connectedness of the graph $\Gamma _W$ one of the sets $A$ and $B$ should be empty. This shows that $H=E$. By (c), ${\bar u} \in G$ for any ${\bar u} \in O(H)$. Therefore, $G = O(H)$ and we are done. 
\,\,\, $\bigcirc$ \bigskip \noindent {\it Remark.} Related results can be found in [6], [10, (1.7)], [23], [25]. \section{ Proofs of Theorems 1 and 2} \smallskip \noindent {\bf 2.1. Definition.} Let E be a real Banach space, and let $s_1, s_2$ be two isometric reflexions in $E$ along linearly independent vectors $e_1$, $e_2 \in S=S(E)$. Denote by $\alpha (s_1 , s_2)$ the minimal positive angle between the lines containing $e_1$ and $e_2$, measured with respect to an invariant inner product in the plane $L={\rm span}(e_1, e_2)$. Put $\alpha (s_1 , s_2) = 0$ iff $e_1 = \pm e_2$. \bigskip \noindent {\bf 2.2.} {\sl Remarks}. a) It is easily seen that the above definition does not depend on the choice of an invariant scalar product in $L$. \noindent b) An isometric reflection $s=s_{e,e^*}$ in $E$ is uniquely defined by the reflexion point $e \in S(E)$. Indeed, this is true for the restriction of $s$ to any finite dimensional subspace $F$ containing $e$, since the mirror hyperplane ${\rm Ker}\, e^* \cap F$ of $s|F$ is orthogonal to $e$ with respect to an invariant scalar product on $F$. Thus, this is true for $s$ itself. \noindent c) Two isometric reflections $s_1$ and $s_2$ commute iff either $\alpha (s_1 , s_2) = 0$, i.e. $e_1 = \pm e_2$ or $\alpha (s_1 , s_2) = {\pi \over 2}$, i.e. $e_1 \perp e_2$ in $L$. \bigskip \noindent {\bf 2.3. Lemma.} {\sl Let $s_i = s_{e_i ,e{_i}^*}, i=1,2$, be two isometric reflexions in $E$. Then} $$\cos ^2 \alpha (s_1 , s_2) = {e_1}^* (e_2){e_2}^* (e_1) .$$ {\sl Proof. }This is evidently true if $e_1 = \pm e_2$. Assume, further, that $e_1$ and $e_2$ are linearly independent. Let an invariant scalar product in the plane $L={\rm span}(e_1 , e_2)$ be given by the bilinear form $B = \left( \begin{array}{cc} b & a\\ a & c \end{array} \right)$ with respect to the basis $(e_1 , e_2)$ in $L$. Consider the orthogonal projection $p_i = {1 \over 2} (1_L + s_i |L)$ of $L$ onto the mirror line $l_i$ of the axial reflexion $s_i |L \,,i=1,2$. 
Since $p_i (e_j) \perp e_i$ for $j \ne i$, we have $$0=B(p_1 (e_2), e_1) = B(e_2 - {e_1}^* (e_2) e_1, e_1) = a - {e_1}^* (e_2)b$$ and $$ 0=B(p_2 (e_1), e_2) = a - {e_2}^* (e_1)c\,\, .$$ Thus $$ a^2 = {e_1}^* (e_2){e_2}^* (e_1)bc \,\,,$$ and so $$\cos ^2 \alpha (s_1 , s_2) = {a^2 \over bc} = {e_1}^* (e_2){e_2}^* (e_1)\,\, .$$ \,\,\,$\bigcirc$ \bigskip \noindent {\bf 2.4. Corollary.} {\sl $$\cos \alpha (s_1 , s_2) \ge 1 - ||e_1 - e_2||_E .$$ In particular, if $e_1 \ne e_2$ and $||e_1 - e_2||_E < 1$, then $s_1$ and $s_2$ do not commute. } \bigskip \noindent {\sl Proof. } Since $s_i \in {\rm Iso}\, E$, so that $||e_i||_E = ||e^*_i||_{E^*}=e_i^*(e_i)=1$, we have $$|1 - e_1^*(e_2)| = |e_1^*(e_1-e_2)| \le ||e_1-e_2||_E$$ and $$|1-e_2^*(e_1)| \le ||e_1-e_2||_E\,\,.$$ We can assume that $||e_1-e_2||_E < 1$. Then from the above inequalities we obtain $$|e_1^*(e_2)| \ge 1 - ||e_1-e_2||_E$$ and $$|e_2^*(e_1)| \ge 1 - ||e_1-e_2||_E\,\,.$$ The desired inequality follows from the latter two by multiplying them and making use of Lemma 2.3. \,\,\,$\bigcirc$ \bigskip \noindent {\bf 2.5. Lemma.} {\sl Let $s=s_{e, e^*} \in {\rm Iso}\, E$. Consider the function on ${\rm Iso}\, E \times {\rm Iso}\, E$ $$\phi_s (g_1, g_2) = \sin^2 \alpha (s_1 , s_2)\, $$ where $s_i = g_i s {g_i}^{-1},\, i=1,2$. Then \noindent a) $\phi_s$ is left invariant, i.e. $$ \phi_s (g_1, g_2) = \phi_s (gg_1, gg_2) = \phi_s (1_E , {g_1}^{-1}g_2) $$ for each $g, g_1, g_2 \in {\rm Iso}\, E$. \noindent b) $$\phi_s (g_1, g_2) = 1 - {e^*}({g_1}^{-1}g_2 (e)) e^* ({g_2}^{-1}g_1 (e)) \,\,.$$ Therefore, $\phi _s$ is continuous on $({\rm Iso}\, E)^2$ in the strong operator topology. 
\noindent c) For any two elements $g', g'' \in G_0$ such that $\phi_s (g', g'' ) > 0$, and for any $\epsilon,\, 0< \epsilon < 1 ,$ one can find a finite chain of elements $h_0 = g', h_1, \dots, h_n = g''$ with the property $0<\phi_s (h_i , h_{i+1}) < \epsilon$, so that the reflexions $t_i = h_i s {h_i}^{-1}$ and $t_{i+1}$ do not commute for all $i = 0, 1, \dots, n-1$.}\bigskip \noindent {\sl Proof}. (a) is evident. The identity in (b) easily follows from the equality $$\phi_s (g_1, g_2) = 1 - {e^*}({g_1}^{-1}g_2 (e))\,(({g_1}^{-1}g_2)^{-1})^* (e^* )(e)\,\,\, ,$$ which follows from (a) and Lemma 2.3. The second statement of (b) is true since ${\rm Iso}\, E$ is a topological group with respect to the strong operator topology. To prove (c), consider the covering of $G_0$ by the open subsets $$ U_{\epsilon} (g) = \{h \in G_0 \, \vert \, \phi_s (g, h) < \epsilon\}\,\,\, .$$ Since $G_0$ is connected, any two of them $ U_{\epsilon} (g')$ and $ U_{\epsilon} (g'')$ can be connected by a finite chain of such subsets, and the assertion follows. \,\,\,$\bigcirc$\bigskip \noindent {\bf 2.6. Proposition. }{\sl Let $s=s_{e, e^*} \in {\rm Iso}\, E, \,\, g_1, \dots, g_n \in G_0$ and $H' = {\rm span} (e_1, \dots, e_n)$, where $e_i = g_i (e),\, i=1,\dots, n$. Then \noindent a) $H'$ is euclidean; \noindent b) there exists a unique projection $p'$ of $E$ onto $H'$ such that $1_E - 2p' \in {\rm Iso}\, E$; \noindent c) the unit sphere $S(H')$ of $H'$ is contained in the orbit $G_0\, e$, and for each vector $v \in S(H')$ there exists a reflexion $s_{v, v^*} \in {\rm Iso}\, E$ along $v$ commuting with $p'$.} \bigskip \noindent {\sl Proof.} First we construct a finite dimensional subspace $F$ containing $H'$ which satisfies all the properties of (a), (b), (c) above. Put $g_0 = 1_E$ and for each pair $(g_i , g_{i+1}), i=0, \dots, n-1$, find a chain $\{h_{ij}\}_{j=0}^{n_i}$ as in Lemma 2.5.c above.
The proposition is evident in the case when $\dim H' =1$, and so we may assume that $g_i (e) \ne e$ for at least one value $i_0$ of $i$. Since the continuous function $\phi_s$ takes all its intermediate values on $G_0$, we can also choose the element $h=h_{i_0 , 1}$ in such a way that the angle $\alpha (s, hsh^{-1})$ is irrational modulo $\pi$, and thus the group generated by the reflexions $s$ and $hsh^{-1}$ is infinite. Put $F = {\rm span}(h_{ij} (e)\,\vert \, j=0,\dots, n_i, i=0,\dots,n)$. Let $W$ be the group generated by the reflexions $\{t_{ij}|F\}$ in $F$, where $t_{ij} = h_{ij} s h_{ij}^{-1}, j=0,\dots, n_i, i=0,\dots,n$. It is clear that the origin is the only fixed point of $W$ in $F$. Since, by construction, the Coxeter graph $\Gamma_W$ of $W$ is connected, by Lemma 1.1, $W$ is irreducible. $W$ being infinite, by Corollary 1.4, the subspace $F$ is euclidean and the closure of $W$ coincides with the group ${\rm Iso}\, F = O(F)$. Let $v_1,\dots,v_l$, where $l=\dim F$, be a basis of $F$ chosen from the system $(h_{ij} (e))$, and $t_k = t_{v_k , {v_k}^*}, k=1,\dots,l$, be the corresponding reflexions from the system $(t_{ij})$. Put $M= \bigcap _{k=1} ^l {\rm Ker}\, {v_k}^*$. It is easily seen that $E = M \oplus F$. Let $F'$ be a finite dimensional subspace of $E$ containing $F$, endowed with an invariant scalar product. Then for each $k=1,\dots,l$ the restriction $t'_k = t_k |F'$ is an orthogonal reflexion in $F'$, and so $v_k \perp ({\rm Ker} \, {v_k}^* \cap F')$. Therefore, $F \perp (M \cap F')$. It follows that each of the vectors $h_{ij} (e) \in F$ is orthogonal to $M \cap F'$, too, so that the restriction $t_{ij} |(M \cap F')$ is the identity mapping. This gives the representation $t_{ij} = (1_E - p) + t_{ij} p$, where $p$ is the projection of $E$ onto $F$ along $M$. Thus, each element $ \bar g \in W$ can be represented as the restriction to $F$ of the isometry $g = (1_E - p) + {\bar g}p \in {\rm Iso}\, E$.
If a sequence ${\bar g}_i \in W$ converges to an element ${\bar h} \in O(F)$, then the sequence of extensions $g_i$ converges to the extension $h = (1_E - p) + {\bar h} p$ of ${\bar h}$, where $h \in\, {\rm Iso}\, E$. In particular, in this way each orthogonal reflexion in $F$ extends to a unique isometric reflexion in $E$, and each element ${\bar u} \in O(F)$ extends to the unique isometry $u=(1_E - p) + {\bar u}p \in {\rm Iso}\, E$. It follows that $S(F) \subset G_0 (e)$. Let $f_1,\dots, f_l$ be an orthogonal basis in $F$ and ${\bar s}_1, \dots, {\bar s}_l$ be the orthogonal reflexions in $F$ along these vectors. It is easily seen that then $p = {1 \over 2} (1_E - \prod _{i=1}^l s_i)$, and thus $1_E - 2p = \prod _{i=1}^l s_i \in {\rm Iso}\, E$. If $s' \in {\rm Iso}\, E$ is a reflexion along a vector $v' \in S(F)$, then as above $s' = (1_E - p) + s'p = (1_E - p) + ps'p$, and so $s'$ and $p$ commute. It is evident that the subspace $H' \subset F$ has the same properties as $F$ itself, and therefore (a), (b), (c) are fulfilled.\,\,\, $\bigcirc$ \bigskip \noindent {\bf 2.7.} {\sl Remark.} It is easily seen that if $H' \subset H''$ are two subspaces as in Proposition 2.6, then for the corresponding projections $p' , p''$ we have $p' \prec p''$, i.e. $p'p'' = p' (= p''p')$. \bigskip \noindent {\bf 2.8.} {\sl Proof of Theorem 1.a}. Let $x, y$ be two arbitrary vectors in $H$. Then for any $\epsilon > 0$ in the linear span of the orbit $G_0 e$ there exist two vectors $x_{\epsilon}, y_{\epsilon}$ such that $||x - x_{\epsilon}||_E < \epsilon, ||y - y_{\epsilon}||_E < \epsilon$. Let $x_{\epsilon} = \sum _{i=1}^n a_i g_i (e)$ and $y_{\epsilon} = \sum_{i=1}^n b_i g_i (e)$, where $g_i \in G_0, i = 1, \dots, n$. Put $H' = {\rm span} (g_i (e)\vert i = 1, \dots, n)$. By Proposition 2.6, the subspace $H'$ is euclidean, and therefore the norm in $H'$ satisfies the four squares identity. 
In particular, $$||x_{\epsilon} + y_{\epsilon}||^2 + ||x_{\epsilon} - y_{\epsilon}||^2 = 2 (||x_{\epsilon}||^2 + ||y_{\epsilon}||^2).$$ Passing to the limit we see that the same identity holds for $x, y \in H$. It follows that $H$ is a Hilbert space (see [7], Ch.7, \S 3). Consider further the family of all finite dimensional subspaces ${H'}$ which belong to the linear span of the orbit $G_0 e$. Let ${\cal P} = \{ p' \}$ be the corresponding partially ordered family of finite dimensional projections $E \rightarrow H'$ such that $1_E - 2p' \in {\rm Iso}\, E$. For a fixed vector $v \in S(E)$ and for each $p' \in {\cal P}$ consider the subset $$Y_{p'} = \omega \{p''(v)\,\, |\,\, p'' \in {\cal P}, \,\, p' \prec p''\}\,\, ,$$ where $\omega$ denotes the closure with respect to the weak topology in $E$. The family $\{Y_{p'}\}$ has the property that for each finite system of projections $p'_1, \dots, p'_n \in {\cal P}$ the intersection $\cap _{i=1} ^n Y_{p'_i}$ is non-empty. Indeed, let $H_0 = {\rm span} ({\rm Im}\, p'_1, \dots, {\rm Im}\, p'_n)$, and let $p'_0 \in \cal P$ be the corresponding projection of $E$ onto $H_0$. Then $p'_i \prec p'_0$, and hence $p'_0(v) \in Y_{p'_i}$ for each $i=1, \dots, n$. Since $H$ is a Hilbert space, the unit ball $B(H)$ of $H$ is weakly compact. It follows that the centered family $\{Y_{p'}\}_{p' \in \cal P}$ of weakly closed subsets of $B(H)$ has a non-empty intersection. By Barry's theorem [4] the generalized sequence of projections ${\cal P} = (p')$ converges in the strong operator topology to its upper bound $p$ which is a projection of $E$ onto $H$, and which satisfies the condition ($i$) $1_E - 2p \in {\rm Iso}\, E$. In particular, $H$ is biorthogonally complemented in $E$. This proves ($a$). \noindent $b, \,\,c$. Let $s' = s_{x, x^*} \in {\rm Iso}\, E$, where $x \in G_0 e$. Then $s'$ commutes with any projection $p' \in \cal P$ such that $p'(x) = x$, which means that ${\rm Ker}\, p' \subset {\rm Ker}\, x^*$.
Passing to the limit we see that $p$ commutes with $s'$ and ${\rm Ker}\, p \subset {\rm Ker}\, x^*$, too. It follows that $s' = (1_E - p) + s'p$. Let $x_0 \in S(H)$ be the limit of a generalized sequence of vectors $x_{\alpha} \in G_0 e \cap S(H)$. Then the corresponding sequence of isometric reflexions $s_{\alpha} = s_{x_{\alpha}, x^* _{\alpha}} = g_{\alpha} s g_{\alpha}^{-1}$, where $g_{\alpha} \in G_0$ and $g_{\alpha} (e) = x_{\alpha}$, is strongly convergent to the reflexion $s_0 = s_{x_0 , x^* _0} \in {\rm Iso}\, E$. Indeed, from the representation $s_{\alpha} = (1_E - p) + s_{\alpha} p$ it easily follows that the generalized sequence $\{s_{\alpha}\}$ converges to $s_0 = (1_E - p) + s_0 p$ on each of the complementary subspaces ${\rm Ker}\, p$ and $H$. Let ${\bar u} \in O(H)$ and $x \in E$. Consider the extension $u = (1_E - p) + {\bar u}p$ of ${\bar u}$ to $E$. Since $||p(x)||_E = ||up(x)||_E$, there exists an orthogonal reflexion ${\bar s}_0$ in $H$ such that ${\bar s}_0 p(x) = up(x)$. Let $s_0 = (1_E -p) + {\bar s}_0 p \in {\rm Iso}\, E$. Then we have $||x||_E = ||s_0 (x)||_E = ||(1_E - p) (x) + {\bar s}_0 p (x)||_E = ||u(x)||_E$. Therefore, $u \in {\rm Iso}\, E$, and thus ($ii$) is fulfilled. Now it is clear that the orbit $G_0 e \subset S(H)$ contains the orbit of $e$ under the strong identity component of the orthogonal group $O(H)$, and so it coincides with the sphere $S(H)$. This proves ($c$). Let $g \in {\rm Iso}\, E$ be such that ${\bar u} = g|H \in O(H)$. We will show that $g$ leaves the subspace ${\rm Ker}\, p$ invariant and thus ${\bar v} = g| {\rm Ker}\, p \in {\rm Iso} ({\rm Ker}\, p)$ and $g = {\bar v} (1_E - p) + {\bar u} p$. Suppose that $g({\rm Ker}\, p) \not\subset {\rm Ker}\, p$. Consider the operator $g_1 = g u^{-1} \in {\rm Iso}\, E$. We have $g_1|H = 1_H$ and $g_1|{\rm Ker}\, p = g|{\rm Ker}\, p$. By our assumption there exists a vector $x \in {\rm Ker}\, p$ such that $g_1 (x) \notin {\rm Ker}\, p$, and so $pg_1 (x) \ne 0$.
Denote $x_1 = (1_E - p)g_1(x)$ and $x_2 = pg_1(x)$. Then $g_1(x) = x_1 + x_2$, hence ${g_1}^{-1} (x_1) = {g_1}^{-1}(g_1(x) - x_2) = x - x_2$. Consider two functions $\phi (t) = ||x_1 + tx_2||_E$ and $\psi (t) = ||x + tx_2||_E$. Since $1_E - 2p \in {\rm Iso}\, E, x, x_1 \in {\rm Ker}\, p$ and $x_2 \in {\rm Im}\, p$, we have $||x_1 + tx_2|| = ||x_1 - tx_2||$ and $||x + tx_2|| = ||x - tx_2||$. Thus, both $\phi$ and $\psi$ are even functions. From the equalities $${g_1}^{-1} (x_1 + tx_2) = x - x_2 + tx_2 = x + (t - 1)x_2$$ and $$g_1 (x + tx_2) = x_1 + (1 + t)x_2$$ and the fact that $g_1 \in {\rm Iso}\, E$ we obtain that $\phi (t) = \psi (t - 1) = \psi (1 - t)$ and $\psi (t) = \phi (1 + t)$. It follows that $ \phi(t) = \phi (-t) = \psi (1 + t) = \phi (t + 2)$. Therefore, being a convex periodic function on $\bf R$, $\,\,\phi (t)$ must be constant. This is possible only if $x_2 = 0$, i.e. $g_1 (x) \in {\rm Ker}\, p$, which is a contradiction. Thus, ($iii$) is fulfilled as well. This completes the proof of Theorem 1.\,\,\, $\bigcirc$ \bigskip It has already been noted that the statement of Theorem 2.b is a direct corollary of Theorem 1. Thus, it is enough to prove Theorem 2.a. \bigskip \noindent {\bf 2.9.} {\sl Proof of Theorem 2.a}. It is enough to show that the four squares identity holds in $E$. For this it is enough, as it was done in the proof of Theorem 1, to approximate an arbitrary pair of vectors $x, y \in E$ by a sequence $\{(x_{\alpha} , y_{\alpha})\}$ of pairs of vectors belonging to finite dimensional euclidean subspaces $H_{\alpha}$ of $E$. In turn, it is enough to show that any pair of vectors in the linear span of the orbit $Ge$ of the group $G = {\rm Iso}\, E$ belongs to a finite dimensional euclidean subspace $H'$ of $E$. Indeed, it is easily seen that under our assumptions any orbit of $G$ in the unit sphere $S = S(E)$ is dense in $S$. In particular, the orbit $Ge$ is dense in $S$.
Fix such a pair $x, y \in {\rm span}(Ge)$ and consider a subspace $H' = {\rm span} (g_1(e),\dots, g_n(e))$, $g_i \in G, i=1,\dots,n,$ containing this pair. Since the orbit $Ge$ is dense in $S$, for any two vectors $g'(e)$ and $g''(e) \in G\,e$ one can find a finite chain of vectors $h_j (e) \in G\,e, j=0,\dots, k, $ such that $h_0 (e) = g'(e), h_k (e) = g''(e)$ and $||h_{j + 1} (e) - h_j (e)||_E < {1 \over 10},\,\, j=0,1,\dots,k-1$. Find such a chain $\{h_{ij} (e)\} _{j=0,\dots,k_i}$ for each of the pairs $(g_i (e) , g_{i+1} (e)), i=1,\dots,n-1,$ and put $F={\rm span}(h_{ij} (e), j=0,\dots,k_i , i=1,\dots,n-1)$. We may assume that $E$ is infinite dimensional (otherwise the proof is simple), and that $\dim F > 8$. Let $\{t_{ij} = h_{ij} s {h_{ij}}^{-1}\}$ be the system of isometric reflexions along the vectors $h_{ij} (e) , j=0,\dots,k_i , i=1,\dots,n-1$, and let $W$ be the group generated by the restrictions $t_{ij}\vert F$. By Corollary 2.4, the Coxeter graph ${\Gamma}_W$ is connected, and since the system of vectors $(h_{ij} (e))$ is complete in $F$, by Lemma 1.1, the group $W$ is irreducible. For a pair of vectors $(v' = h_{ij} (e), v'' = h_{ij+1} (e))$ we have $0 < ||v' - v''|| < {1 \over 10}$, and so by Corollary 2.4, $$0 < {\alpha} (t_{ij} , t_{ij+1} ) < \arccos {9 \over 10}\, .$$ From the classification of Coxeter groups [5, Ch.VI, sect.4] it follows that the group $W$ is infinite. Thus, by Corollary 1.4, the subspaces $F$ and $H' \subset F$ are euclidean. The theorem is proven. \,\,\,$\bigcirc$\\ \bigskip {\it A priori}, the strong operator topology could still be too strong for the identity component $G_0$ to be big enough to apply Theorem 1 in an efficient way. Next we give a version of Theorem 1 which does not involve any operator topology. Recall that a group $G$ is {\it locally finite} if every finitely generated subgroup of $G$ is finite. \bigskip \noindent {\bf 2.10.
Theorem.} {\it Let $s = s_{e, e^*}$ be an isometric reflexion in a Banach space $E$. Denote $$U = \{g \in {\rm Iso}\,E\,|\,[s, g^{-1}sg] \neq 1_E\}\,\,.$$ Let $G_1$ be the subgroup of ${\rm Iso}\,E$ generated by $U$, and $H = {\rm \overline {span}}\,(G_1 \, e)$. If any of the following two conditions (i), (ii) is fulfilled, then all the conclusions (a), (b), (c) of Theorem 1 hold:\\ i) The group $W_1$ generated by the set of reflexions ${\rm IR}_1 = \{g^{-1}sg\}_{g \in G_1}$ is not locally finite.\\ ii) The orbit $G_1 \,e$ contains three linearly independent vectors $e_1 , e_2 , e_3$, where $||e_1 - e_2 ||_E < 1 - \cos \pi/5$.} \bigskip \noindent {\sl Proof}. Repeating the arguments used in the proofs of Theorems 1 and 2, it is enough to show that for each finite subset $\sigma \subset {\rm IR}_1$ there exists another finite subset $\sigma_1 \subset {\rm IR}_1$ such that $\sigma \subset \sigma_1$ and the group generated by the reflexions from $\sigma_1$, as well as its restriction to the subspace ${\rm span}\,(v\,|\,s_{v, v^*} \in {\rm IR}_1)$, is infinite. In other words, each finite subgraph $\gamma$ of the Coxeter graph $\Gamma_{W_1}$ should be contained in a finite connected subgraph $\gamma_1 \subset \Gamma_{W_1}$ with the following property: the group $W(\gamma_1)$ generated by the reflexions which correspond to the vertices of $\gamma_1$, is infinite. The latter holds as soon as the Coxeter graph $\Gamma_{W_1}$ is connected and contains a finite subgraph $\gamma_0$ such that the group $W(\gamma_0)$ is infinite. If the first of these conditions is fulfilled, then the second follows from each of the assumptions ($i$) and ($ii$) above. Indeed, it is clear for ($i$). As for ($ii$), by the connectedness of the graph $\Gamma_{W_1}$, one can find a finite connected subgraph $\gamma_0 \subset \Gamma_{W_1}$ which contains three vertices corresponding to the reflexions $s_1 , s_2 , s_3 \in {\rm IR}_1$ with the reflexion vectors $e_1 , e_2 , e_3$, resp.
From the classification of finite Coxeter groups [5, Ch.VI, sect.4] it follows that the group $W(\gamma_0)$ is infinite. Indeed, if $V(\gamma_0)$ is the subspace generated by the reflexion vectors of the reflexions from $W(\gamma_0)$, then ${\rm dim}\,V(\gamma_0) \ge 3$ and by Corollary 2.4, the order of the rotation $s_1 s_2 \in W(\gamma_0)$ is greater than $5$. Thus, it remains to check that the graph $\Gamma_{W_1}$ is connected. Let the vertices $v, v'$ of $\Gamma_{W_1}$ correspond to the reflexions $s, s' = h^{-1} s h$ resp., where $h = g_n g_{n-1} \cdot\dots\cdot g_1 \in G_1$ is arbitrary and $g_i \in U\,,\,i=1,\dots,n$. Put $h_i = g_i g_{i-1}\cdot\dots\cdot g_1$ and $s_i = h_i^{-1} s h_i\,,\,i=0,\dots,n$, so that $h_0 = 1_E , h_1 = g_1 , h_n = h$ and $s_0 = s\,,\,s_n = s'$. Since $g_1 \in U$, the reflexions $s_0 = s$ and $s_1 = g_1^{-1} s g_1$ do not commute. Moreover, since $h_{i+1} = g_{i+1} h_i$, we have $s_{i+1} = h_i^{-1}\, (g_{i+1}^{-1} s g_{i+1})\, h_i$, whence $[s_i , s_{i+1}] = h_i^{-1}\, [s , g_{i+1}^{-1} s g_{i+1}]\, h_i \neq 1_E$ because $g_{i+1} \in U$. Thus $s_i$ does not commute with $s_{i+1}$ for all $i=0,\dots, n-1$, and so the vertices $v, v'$ of the graph $\Gamma_{W_1}$ are connected by a path. This concludes the proof. \,\,\,$\bigcirc$ \section{ Infinite Coxeter groups} In sections 4, 5, 6 below we will use the classification of infinite Coxeter groups. Although it should be well known, in view of the lack of references we reproduce it here in full detail. By {\it an infinite Coxeter group} we mean an infinite locally finite group $W$ generated by reflexions in a real vector space $V$ which is algebraically irreducible in $V$. We fix the following notation and conventions.\\ \noindent {\bf 3.1.} {\sl Notation}. Denote by ${\bf R}^{\Delta}$ the linear space of all the real functions with finite support defined on a given set $\Delta$, and by ${\bf R}^{\Delta}_0$ the subspace of functions with zero sum of values.
Let \smallskip \noindent $A_{\Delta}$ be the group of finite permutations of elements of $\Delta$ acting in ${\bf R}^{\Delta}_0$; \smallskip \noindent $B_{\Delta}$ be the group of finite permutations of $\Delta$ and changes of sign of values at the points of finite subsets of $\Delta$ acting in ${\bf R}^{\Delta}$; \smallskip \noindent $D_{\Delta}$ be the subgroup of $B_{\Delta}$ which consists of finite permutations and changes of signs of even numbers of coordinates acting in ${\bf R}^{\Delta}$. \smallskip If $\Delta$ is infinite, then $A_{\Delta}$, $B_{\Delta}$, $D_{\Delta}$ are infinite Coxeter groups. Let $\epsilon_{\delta}$ be the characteristic function of the one-point subset $\{\delta\}$ of $\Delta$, so that $(\epsilon_{\delta}\,\vert\,\delta \in \Delta)$ is the standard Hamel basis of ${\bf R}^{\Delta}$. Let ${\bf R}^{\Delta}$ be endowed with the standard scalar product. Then $A_{\Delta}$ (resp. $B_{\Delta}$, $D_{\Delta}$) is generated by orthogonal reflexions along vectors from the infinite root system $(\epsilon_{\delta} - \epsilon_{\delta'})$ (resp. $(\pm \epsilon_{\delta}, \,\,\,\pm \epsilon_{\delta} \pm \epsilon_{\delta'})$, $(\pm \epsilon_{\delta} \pm \epsilon_{\delta'})$) \,\,\,($\delta, \delta' \in \Delta, \,\,\delta \ne \delta'$). In the category of pairs $(W, V)$, where $W$ is a group generated by reflexions in a real vector space $V$, there is a natural notion of isomorphism. We will also use a notion of subpair. Namely, we will say that $(W', V')$ is {\it a subpair} of $(W, V)$ if $W'$ is the restriction of a subgroup of $W$ generated by reflexions to its invariant subspace $V'$. An embedding of pairs is an isomorphism onto a subpair. In the proposition below {\it isomorphism of Coxeter groups} means isomorphism of pairs, rather than isomorphism of abstract groups. \bigskip \noindent {\bf 3.2.
Proposition.} {\sl Any infinite Coxeter group $W$ is isomorphic to one and only one of the groups $A_{\Delta}$, $B_{\Delta}$, $D_{\Delta}$.} \bigskip \noindent {\sl Proof}. In the sequel $\gamma$ denotes a finite connected subgraph of the Coxeter graph $\Gamma_W$, $G(\gamma)$ denotes the finite subgroup of $W$ generated by reflexions $s_i = s_{e_i, e^*_i} \in \gamma, i=1,\dots, {\rm card}\,\gamma$, $V(\gamma) = {\rm span}\,(e_i\,,i=1,\dots, {\rm card}\,\gamma)$ and $n(\gamma) = \dim\,V(\gamma)$. By Lemma 1.1, the restriction $G(\gamma)\,\vert\,V(\gamma)$ is irreducible, so it is a finite Coxeter group. The full Coxeter graph $\Gamma_{G(\gamma)}$ can be naturally identified with a finite connected subgraph $\bar \gamma$ of $\Gamma_W$ containing $\gamma$; in fact, $\bar \gamma$ is the maximal subgraph of $\Gamma_W$ with the properties that $V(\bar \gamma) = V(\gamma)$ and $G(\bar \gamma) = G(\gamma)$ (but the first one alone does not determine $\bar \gamma$). If $n = n(\gamma)>8$, then $G(\gamma)$ is one of the Coxeter groups $A_n, B_n, D_n$ [5, Ch.VI, sect. 4]. Let $\Delta$ be a set with ${\rm card}\,\Delta = {\rm dim}\,V$, where ${\rm dim}\,V$ is the cardinality of a Hamel basis in $V$. The proposition follows from the assertions (i) - (iii) below. \smallskip \noindent i) $(W, V) \approx (B_{\Delta}, {\bf R}^{\Delta})$ if $(G(\gamma_0), V(\gamma_0)) \approx (B_n, {\bf R}^n)$ for some $\gamma_0 \subset \Gamma_W$; \noindent ii) $(W, V) \approx (D_{\Delta}, {\bf R}^{\Delta})$ if there is no $\gamma \subset \Gamma_W$ for which $(G(\gamma), V(\gamma)) \approx (B_n, {\bf R}^n)$ and $(G(\gamma_0), V(\gamma_0)) \approx (D_n, {\bf R}^n)$ for some $\gamma_0 \subset \Gamma_W$ with $n=n(\gamma_0) \ge 4$; \noindent iii) $(W, V) \approx (A_{\Delta}, {\bf R}^{\Delta}_0)$ in the other cases. \smallskip From now on we consider Coxeter graphs as weighted graphs. As usual, the weight of an edge $(s', s'')$ is the order of the product $s's''$.
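As a numerical aside (ours, not part of the original argument), the weight of an edge can be observed directly for plane reflexions: if the axes of two orthogonal reflexions of ${\bf R}^2$ meet at the angle $\pi/m$, their product is a rotation by $2\pi/m$, whose order, i.e. the weight of the edge, is exactly $m$. The following Python sketch, with hypothetical helper names, checks this.

```python
import math

# Illustration (ours, not from the text): the weight of a Coxeter edge
# (s', s'') is the order of s's''.  For two orthogonal reflexions of the
# Euclidean plane whose axes meet at angle pi/m, the product is a
# rotation by 2*pi/m, so this order equals m.

def reflexion(theta):
    """2x2 matrix of the orthogonal reflexion whose axis makes angle theta."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return ((c, s), (s, -c))

def mat_mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def order(g, max_order=100):
    """Smallest n >= 1 with g^n = identity (up to rounding), or None."""
    power = ((1.0, 0.0), (0.0, 1.0))
    for n in range(1, max_order + 1):
        power = mat_mul(power, g)
        if all(abs(power[i][j] - (i == j)) < 1e-9
               for i in range(2) for j in range(2)):
            return n
    return None

for m in (2, 3, 4, 5, 7):
    assert order(mat_mul(reflexion(0.0), reflexion(math.pi / m))) == m
```

In particular, edges of weight 3 and 4, which distinguish the types $A_n$, $D_n$ from $B_n$ below, correspond to axis angles $\pi/3$ and $\pi/4$.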
Since $W$ is a locally finite group, the weights on $\Gamma_W$ take only finite values. Recall that the Coxeter graphs of types $A_n$ and $D_n$ have only edges of weight 3, while in any of the Coxeter graphs of type $B_n$ there are edges of weight 4. Thus, if the assumption of (i) holds, then $(G(\gamma'), V(\gamma')) \approx (B_{n'}, {\bf R}^{n'}) $ for any $\gamma' \supset \gamma_0$ with $n'=n(\gamma') > 8$, and so $(W, V)$ is the inductive limit of some net of pairs $(B_n, {\bf R}^n)$. Next we show that for $\gamma \subset \gamma'$, where $\gamma \supset \gamma_0$ and $n(\gamma) > 8$, the embedding $(G(\gamma), V(\gamma)) \subset (G(\gamma'), V(\gamma'))$ is coordinatewise. This means that under the isomorphisms $(G(\gamma), V(\gamma)) \approx (B_n, {\bf R}^n) $ and $(G(\gamma'), V(\gamma')) \approx (B_{n'}, {\bf R}^{n'}) $ the pair $(B_n, {\bf R}^n)$ is a coordinate subpair of $(B_{n'}, {\bf R}^{n'}) $. Indeed, identify $(G(\gamma'), V(\gamma'))$ with $(B_{n'}, {\bf R}^{n'}) $. Then $V(\gamma) \subset {\bf R}^{n'}$ is spanned by a subsystem of the root system $(\pm \epsilon_i,\,\, \pm \epsilon_i \pm \epsilon_j), \,\,1\le i<j\le n'$, and $G(\gamma)$ is generated by the corresponding orthogonal reflexions. A plane in ${\bf R}^{n'} $ spanned by roots may contain reflexion root vectors of 2, 3 or 4 different reflexions from $B_{n'}$. It is a coordinate plane precisely when it contains root vectors of 4 reflexions. Since $(G(\gamma), V(\gamma)) \approx (B_n, {\bf R}^n) $, the subspace $V(\gamma)$ contains ${n \choose 2}$ planes of the latter type which span it. At the same time, these planes should be coordinate planes of ${\bf R}^{n'}$. Therefore, $V(\gamma)$ is a coordinate subspace ${\bf R}^n \subset {\bf R}^{n'}$, and so $G(\gamma)$ coincides with the group $B_n$ generated by reflexions along those roots of the above root system which belong to $V(\gamma)$.
For any of the graphs $\gamma \supset \gamma_0$ with $n=n(\gamma) > 8$ all the vertices of the full Coxeter graph $\bar\gamma$ (see the notation above) are divided in two types: those which correspond to sign change reflexions, i.e. reflexions along the roots of the form $\epsilon_i, i=1,\dots,n,$ and others. Being coordinatewise, embeddings of pairs respect this division. Thus, it is well defined in the inductive limit $\Gamma_W$. Note that the vertices of sign change type in $\Gamma_W$ are those which are incident only with edges of weight 4. Denote by $\Delta$ the set of all the vertices of $\Gamma_W$ of sign change type. Fix $\delta_0 \in \Delta \cap \gamma_0$, and let $\epsilon_0 = \epsilon_{\delta_0}$ be one of the two opposite roots in $V(\gamma_0)$ which correspond to the reflexion $\delta_0$. It is easily seen that for any $\gamma \supset \gamma_0$ with $n=n(\gamma) > 8$ the orbit $G(\gamma)\,\epsilon_0$ consists of the roots of coordinate type $\pm \epsilon_i$ in $V(\gamma)$, and so the class of conjugates of $\delta_0$ in $W$ coincides with $\Delta$. Choosing one of any two opposite root vectors in the orbit $W(\epsilon_0)$ we obtain a Hamel basis of the $W$-invariant subspace ${\rm span}(W(\epsilon_0))$ which coincides with $V$ since $W$ is assumed to be irreducible in $V$. Thus, we obtain a Hamel basis of $V$ formed by roots of coordinate type. This yields an isomorphism $V \approx {\bf R}^{\Delta}$. The root system of $W$, which consists of the vectors of the two orbits $W(\epsilon_0)$ and $W\,(\epsilon_0 + \epsilon_1)$, where $\epsilon_1 \not = \epsilon_0$ is another coordinate vector in $V(\gamma_0)$, corresponds under this isomorphism to the root system $(\pm\epsilon_{\delta},\,\,\pm\epsilon_{\delta}\pm\epsilon_{\delta'}\,\vert\,\delta,\delta' \in \Delta, \delta \not = \delta')$ of the group $B_{\Delta}$. Therefore, $(W, V) \approx (B_{\Delta}, {\bf R}^{\Delta})$. This proves (i).
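The type $B$ case just treated can be probed on a small finite example. The following Python sketch (our own illustration; the helper names are hypothetical) closes the orthogonal reflexions along the roots $(\pm\epsilon_i,\ \pm\epsilon_i\pm\epsilon_j)$ of type $B_n$ under composition and confirms that they generate the full group of signed permutations, of order $2^n\, n!$.

```python
import math

# Our own finite sanity check (not part of the original text): the
# orthogonal reflexions along the roots (e_i, e_i +/- e_j) of type B_n
# act as signed permutations of the coordinates and generate a group
# of order 2^n * n!.

def reflect(root, v):
    """Orthogonal reflexion of the vector v along the given root."""
    rr = sum(r * r for r in root)
    rv = sum(r * x for r, x in zip(root, v))
    return tuple(x - 2 * rv / rr * r for x, r in zip(root, v))

def b_n_order(n):
    roots = []
    for i in range(n):
        e_i = [0] * n
        e_i[i] = 1
        roots.append(tuple(e_i))                 # root  e_i  (sign change)
        for j in range(i + 1, n):
            for sign in (1, -1):
                r = [0] * n
                r[i], r[j] = 1, sign
                roots.append(tuple(r))           # roots e_i +/- e_j
    # An isometry is determined by the images of the basis vectors, so we
    # store each group element as the tuple of those images and close the
    # set under left multiplication by the generating reflexions.
    identity = tuple(tuple(float(k == i) for k in range(n)) for i in range(n))
    group, frontier = {identity}, [identity]
    while frontier:
        g = frontier.pop()
        for root in roots:
            h = tuple(reflect(root, image) for image in g)   # s_root o g
            if h not in group:
                group.add(h)
                frontier.append(h)
    return len(group)

assert b_n_order(3) == 2 ** 3 * math.factorial(3)
```

The analogous closures over the subsystems $(\epsilon_i - \epsilon_j)$ and $(\pm\epsilon_i\pm\epsilon_j)$ would recover the smaller groups of types $A$ and $D$ discussed in the remaining cases.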
Next we consider the case (ii), where there is no subgroup $G(\gamma)\subset W$ of type $B_n$, but at least one of them, say $G(\gamma_0)$, has type $D_n$ for some $n=n(\gamma_0)\ge 4$. First we show that any subgroup $G(\gamma)\supset G(\gamma_0)$ is of type $D_{n(\gamma)}$, and all the embeddings $G(\gamma) \hookrightarrow G(\gamma')$ are coordinatewise. The group $D_4$ contains 4 pairwise commuting reflexions along the root vectors $\epsilon_1\pm\epsilon_2, \epsilon_3\pm\epsilon_4$. If $v_1,\dots, v_4$ are 4 mutually orthogonal root vectors from the root system $(\pm(\epsilon_i - \epsilon_j)\,|\,1\le i<j\le n')$ of type $A_{n'}$ and $L = {\rm span}\,(v_1,\dots, v_4)$, then the only reflexions in $A_{n'}\,\vert\,L$ are the orthogonal reflexions along $v_1,\dots, v_4$, and so $A_{n'}\,\vert\,L$ does not contain $D_4$. Therefore, the Coxeter group $G(\gamma)\supset G(\gamma_0)$ is not of type $A_{n(\gamma)}$, and thus it must be of type $D_{n(\gamma)}$. Let $F$ be a subspace of dimension 4 of ${\bf R}^{n'}$ generated by 4 mutually orthogonal root vectors from the root system $(\pm\epsilon_i \pm \epsilon_j),\,1\le i<j\le n'$, of type $D_{n'}$, and $G(F)\subset D_{n'}$ be the subgroup generated by the orthogonal reflexions along the roots in $F$. Then $G(F)$ is irreducible (and of type $D_4$) iff $F$ is a coordinate subspace of ${\bf R}^{n'}$. Thus, if $(G(\gamma), V(\gamma)) \subset (D_{n'}, {\bf R}^{n'})$ is of type $D_n$, where $n=n(\gamma) \ge 4$, then $V(\gamma)$ is a coordinate subspace of $ {\bf R}^{n'}$. Fix a reflexion vector $v_0$ of a reflexion from $G(\gamma_0)$. If $G(\gamma)\supset G(\gamma_0)$, then the orbit $G(\gamma)\,v_0$ is a root system of type $D_{n(\gamma)}$ of the Coxeter group $G(\gamma)$. Consider the infinite root system $W\,(v_0)$ in $V$. Since $W$ is irreducible, this system is complete in $V$.
Note that two roots $v', v''$ of $D_n$ are contained in the same coordinate plane in ${\bf R}^n$ iff the sets of their neighbouring vertices in the full Coxeter graph $\Gamma_{D_n}$ coincide. In this case the vectors $({{\pm v'\pm v''}\over 2}) = (\pm\epsilon_i,\, \pm \epsilon_j), \,i\not = j$, are contained in the coordinate axes which are the intersections of coordinate planes. The same pairing is defined on the above root system of $W$. In this way, fixing one of any two opposite vectors $({{\pm v'\pm v''}\over 2})$ arbitrarily, we obtain a Hamel basis $\Delta$ in $V$, which in turn provides us with an isomorphism $(W, V) \approx (D_{\Delta}, {\bf R}^{\Delta})$. This proves (ii). Assume further that any subgroup $G(\gamma')\subset W$ with $n'=n(\gamma') > 8$ is of type $A_{n'}$, where $A_{n'}$ acts by permutations in $ {\bf R}^{n'+1}_0$. Let $(G(\gamma), V(\gamma)) \subset (A_{n'}, {\bf R}^{n'+1}_0)$ be of type $A_n$. We will show that $V(\gamma)$ is a coordinate subspace of ${\bf R}^{n'+1}_0$. Let $s_{ij}$ be the orthogonal reflexions (transpositions) along the roots $\pm(\epsilon_i - \epsilon_j)\,\,,1 \le i <j \le n'+1$. Put $$I = \{i \in \{1,\dots, n'+1\}\,\vert\,s_{ij} \in G(\gamma)\,\,\, {\rm for\,\,\, some}\,\,\, j \in \{1,\dots, n'+1\}\}\,\,\,.$$ Thus, if $s_{kl} \in G(\gamma)$, then $k,l \in I$. Conversely, $s_{kl} \in G(\gamma)$ for any pair $k, l \in I, k \not = l$. This follows from the connectedness of the Coxeter graph $\Gamma_{G(\gamma)}$ and the following remark: if $s_{ij} \in G(\gamma)$ and $s_{jk} \in G(\gamma)$, then $s_{ik} \in G(\gamma)$. Indeed, $s_{jk}(\epsilon_i - \epsilon_j) = \epsilon_i - \epsilon_k$ and thus $s_{ik}=s_{jk} s_{ij} s_{jk}$. Now we see that $V(\gamma) = {\bf R}^I_0 = {\rm span}\,(\epsilon_i - \epsilon_j \,\vert\,i,j \in I)$ is a coordinate subspace of ${\bf R}^{n'+1}_0$, and $G(\gamma)\subset A_{n'}$ is a subgroup of permutations of the set $I$.
On the set of edges of the full Coxeter graph $\Gamma_{A_n}$ consider the following equivalence relation: $(s_{i_1,j_1}, s_{i_2,j_2}) \sim (s_{k_1,l_1}, s_{k_2,l_2})$ iff these four transpositions have an index in common. Then this index is the same for the whole equivalence class, so that the set of classes is $\{1,\dots,n+1\}$. Since this equivalence relation is compatible with the embedding of pairs $(G(\gamma), V(\gamma)) \hookrightarrow (G(\gamma'), V(\gamma'))$, it can be defined as well on the whole graph $\Gamma_W$. Let $\Delta$ be the set of the equivalence classes. Let $v \in \Gamma_W$ be a vertex. Then all the edges incident with $v$ belong to two different classes $\delta, \delta' \in \Delta$, where each class $\delta \in \Delta$ consists of the edges of a complete subgraph of $\Gamma_W$, and each pair of these complete subgraphs which correspond to some $\delta, \delta' \in \Delta, \,\,\delta \not = \delta',$ has exactly one vertex $v(\delta, \delta')$ in common. It is easily seen that the action of $W$ on $\Gamma_W$ by inner automorphisms is locally finite and compatible with the equivalence relation, and so it induces the action of $W$ on $\Delta$ by finite permutations, such that the reflexions in $W$ act as transpositions. Fix a reflexion $s_0 = s(\delta, \delta') \in W$ which corresponds to the transposition $(\delta, \delta')$, with the reflexion vector $e_0 = e(\delta, \delta')$. Then the orbit $W\,(e_0)$ is a root system of $W$ which spans $V$. Fixing one of each pair of opposite roots, we obtain a Hamel basis of $V$ which corresponds to the basis of ${\bf R}^{\Delta}_0$ consisting of the root vectors $\epsilon_{\delta} - \epsilon_{\delta'} \,(\delta, \delta' \in \Delta, \delta \not = \delta')$ of $A_{\Delta}$. This gives an isomorphism $(W, V) \approx (A_{\Delta}, {\bf R}^{\Delta}_0)$. The proof is complete.
\,\,\,$\bigcirc$ \section{Total families of isometric reflexions} Denote by ${\rm IR}(E)$ the set of all the reflexions in ${\rm Iso}\, E$, and let $W = W (E) $ be the subgroup of ${\rm Iso}\, E$ generated by the reflexions from ${\rm IR}(E)$. In this section we assume that ${\rm IR}(E)$ contains a total subset of reflexions $\{ s_{\alpha} = s_{e_{\alpha}, e^*_{\alpha}}\}_{\alpha \in A}$, which means that the family of linear functionals $T = \{e^*_{\alpha}\}_{\alpha \in A} \subset E^*$ is a total family. \bigskip \noindent {\bf 4.1. Lemma.} {\sl Let $g_1 , g_2 \in {\rm Iso}\, E$. In the notation as above assume that $g_1(e_{\alpha}) = g_2(e_{\alpha})$ for all $\alpha \in A$. Then $g_1 = g_2$.} \bigskip \noindent {\sl Proof}. Put $g_0 = g_1^{-1} g_2$. Then $g_0(e_{\alpha}) = e_{\alpha}$ for all ${\alpha} \in A$. Since $s_{\alpha}$ is the only isometric reflexion in the direction of $e_{\alpha}$ (see Remark 2.2.b), it coincides with $s'_{\alpha} = g_0 s_{\alpha} g_0^{-1} = s_{e_{\alpha}, g_0^{* -1}(e^*_{\alpha})}$, and so $ g_0^{* -1}(e^*_{\alpha}) = e^*_{\alpha}$, i.e. $ g_0^* (e^*_{\alpha}) = e^*_{\alpha}$ or, in other words, $e^*_{\alpha}(g_0(v) - v) = 0$ for all $\alpha \in A$ and all $v \in E$. Since $T$ is total, it follows that $g_0 = 1_E$. \,\,\,$\bigcirc$\\ Let, as before, $G_0$ be the strong identity component of ${\rm Iso}\, E$, and let $W$ be the group generated by the reflexions in ${\rm Iso}\, E$. \bigskip \noindent {\bf 4.2. Lemma.} {\sl $W$ is locally finite iff $G_0$ is trivial. }\bigskip \noindent {\sl Proof}. Assume that $G_0$ is trivial. To prove that $W$ is locally finite it is enough to show that each subgroup $W'$ of $W$ generated by a finite number of reflexions $\{s_i = s_{e_i, e^*_i}\}_{i=1,\dots,n} \subset {\rm IR}(E)$ is finite. Suppose that $W'$ is an infinite group. Put $F' = {\rm span}(e_i\,|\, i=1,\dots,n)$. Let $G'$ be the closure of $W'$ in ${\rm Iso}\, E$ in the strong operator topology.
It is easily seen that the closed subspace $M' = \bigcap _{i=1}^n {\rm Ker}\,e^*_i$ is a complementary subspace of $F'$, i.e. $E = M' \oplus F'$, and it coincides with the fixed point subspace of $W'$. Hence, it also coincides with the fixed point subspace of $G'$. It follows that $G' = 1_{M'} \oplus {\bar G}' $, where ${\bar G}' \subset O(F')$ is the closure of $W' \, \vert \, F'$ in ${\rm Iso}\, F'$. Thus, $G'$ is a compact Lie group, and being infinite it has a non-trivial identity component. This is a contradiction. Assume now that $G_0$ is non-trivial. Then, as it was shown in the proof of Proposition 2.6, there exist reflexions $s' , s'' \in {\rm IR}(E)$ such that the angle $\alpha (s' , s'')$ is irrational modulo $\pi$, and so the subgroup of $W$ generated by these two reflexions is infinite. \,\,\,$\bigcirc$\\ Recall that a group $G$ of operators in a Banach space $E$ is called {\it topologically irreducible} if it has no nontrivial proper closed invariant subspace. \bigskip \noindent {\bf 4.3. Lemma.} {\sl Let $W'$ be a group generated by a set of reflexions $\{ s_{\alpha} = s_{e_{\alpha}, e^*_{\alpha}}\}_{\alpha \in A'} \subset {\rm IR}(E)$. Then $W'$ is topologically irreducible iff the following two conditions are fulfilled: \noindent i) The system of vectors $(e_{\alpha}\, \vert\, \alpha \in A')$ is complete, i.e. $E = {\overline {\rm span}}(e_{\alpha}\, \vert \alpha \in A')$. \noindent ii) The Coxeter graph $\Gamma_{W', A'}$ is connected. } \bigskip \noindent {\sl Proof}. Since the closed subspace $E' = {\overline {\rm span}}(e_{\alpha}\, \vert \,\alpha \in A')$ is invariant with respect to $W'$, the first condition (i) is necessary for $W'$ to be irreducible. Let $\Gamma '$ be a connected component of $\Gamma_{W', A'}$. It is easily seen that the closed subspace $F' = {\overline {\rm span}}(e\, \vert \, s_{e, e^*} \in \Gamma')$ is invariant, too. Thus, the second condition (ii) is necessary, too. Assume further that (i) and (ii) are fulfilled.
Let $F'$ be a closed invariant subspace of $W'$. Put $A = \{\alpha \in A'\, \vert \,e_{\alpha} \in F'\}$ and $B = \{\alpha\in A'\, \vert \,e_{\alpha} \notin F'\}$. Being invariant, $F'$ is contained in ${\rm Ker}\,e^*_{\beta}$ for each $\beta \in B$; indeed, for $x \in F'$ we have $s_{\beta}(x) - x = -e^*_{\beta}(x)\,e_{\beta} \in F'$, and since $e_{\beta} \notin F'$, this forces $e^*_{\beta}(x) = 0$. It follows that $\alpha (s_{\alpha} , s_{\beta}) = \pi /2$, and so $[s_{\alpha} , s_{\beta}] = 1_E$ for any $ \alpha\in A , \beta \in B$. By (ii) this implies that either $A = \emptyset$ or $B = \emptyset$. From (i) it easily follows that the system of linear functionals $(e^*_{\alpha}\,\vert\,\alpha\in A')\subset E^*$ is total. Thus, if $A = \emptyset$, then $F' \subset \bigcap \{{\rm Ker}\,e^*_{\alpha}\,\vert\,\alpha \in A'\} = \{\bar 0 \}$, and if $B = \emptyset$, then $F' \supset {\overline {\rm span}}\,(e_{\alpha}\,\vert\,\alpha \in A') = E$. In any case, $F'$ is not a proper subspace. This shows that $W'$ is topologically irreducible.\,\,\,$\bigcirc$ \\ Let things be as in Lemma 4.3. Consider the algebraic linear subspace $V' = {\rm span}\,(e_{\alpha}\, \vert\, \alpha \in A')$. The group $W'$ is algebraically irreducible in $V'$ iff the Coxeter graph $\Gamma_{W', A'}$ is connected. In this case $W'$ is topologically irreducible in the closed subspace $E' = {\overline V'} = {\overline {\rm span}}\,(e_{\alpha}\, \vert \,\alpha \in A')$. If $W'$ is finite and the Coxeter graph $\Gamma_{W'}$ is connected, then ${\rm \dim}\,V' = n < \infty$ and $W'\,\vert\,V'$ is a finite Coxeter group, i.e. a finite irreducible group generated by orthogonal reflexions in ${\bf R}^n$ (here we identify $V'$ with ${\bf R}^n$ by choosing an orthonormal basis with respect to an invariant scalar product in $V'$). Let the group ${\rm Iso}\, E$ be discrete in the strong operator topology, i.e. $G_0 = \{1_E\}$. Then by Lemma 4.2, $W'$ is a locally finite group.
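The dichotomy used in the proof of Lemma 4.2 (two reflexions generate a finite group precisely when the angle between them is rational modulo $\pi$) can be seen numerically already in the Euclidean plane. The following Python sketch is a toy finite-dimensional check, not part of the argument; the sample angles and the cap on the search are arbitrary choices:

```python
import math
import numpy as np

def reflection(theta):
    """Orthogonal reflexion of R^2 across the line at angle theta to the x-axis."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

def generated_order(angle, cap=200):
    """Size of the group generated by reflexions across lines `angle` apart,
    or None if it does not close up within `cap` elements (numerically infinite)."""
    s1, s2 = reflection(0.0), reflection(angle)
    group = [np.eye(2)]
    frontier = [np.eye(2)]
    while frontier:                      # breadth-first closure under s1, s2
        new = []
        for g in frontier:
            for s in (s1, s2):
                h = g @ s
                if not any(np.allclose(h, k) for k in group):
                    group.append(h)
                    new.append(h)
        frontier = new
        if len(group) > cap:
            return None                  # the group did not close: infinite
    return len(group)

print(generated_order(math.pi / 5))      # angle pi/5: dihedral group of order 10
print(generated_order(1.0))              # 1 radian is irrational mod pi: no closure
```

The rational angle $\pi/5$ yields a finite dihedral group, while an angle irrational modulo $\pi$ produces ever new rotations, mirroring the infinite subgroup constructed in the proof of Lemma 4.2.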
If ${\rm \dim}\,V' = \infty$ and the Coxeter graph $\Gamma_{W'}$ is connected, then $W'$ is an infinite Coxeter group, and by Proposition 3.2, it is isomorphic to one of the groups $A_{\Delta}, \,\,\,B_{\Delta}, \,\,\,D_{\Delta}$. The next proposition shows that if the pair $(W', V')$ is maximal, it can not be of type $D_{\Delta}$. \bigskip \noindent {\bf 4.4. Proposition.} {\sl Let the notation be as above. If ${\rm \dim}\,V' = \infty$ and $(W', V') \approx (D_{\Delta}, {\bf R}^{\Delta})$, then the group $W'$ can be extended to a subgroup $W'' \subset {\rm Iso} E$ generated by reflexions along vectors in $V'$ and such that $(W'', V') \approx (B_{\Delta}, {\bf R}^{\Delta})$}. \bigskip For the proof we need the following lemma on partial orthogonal decompositions in Banach spaces. \bigskip \noindent {\bf 4.5. Lemma.} {\sl Let $\{p_i\}_{i=1,2,\dots}$ be a sequence of projections in a Banach space $E$ such that \smallskip \noindent a)\,\,\, $1_E - 2p_i \in {\rm Iso} E \,\,\,{\rm for\,\,\, all}\,\,\, i=1,2,\dots$; \noindent b) \,\,\, the projections $p_i$ are mutually orthogonal, i.e. $p_i p_j = 0\,\,\, {\rm for\,\,\, all}\,\,\, i \ne j$. \smallskip \noindent Then $\,\,\,\limsup_{i \rightarrow \infty} ||(1_E - p_i)(x)||_E = ||x||_E \,\,\,{\rm for\,\,\, all}\,\,\,x \in E$.} \bigskip \noindent {\sl Proof}. By (a), we have $||p_i||_E = ||1_E - p_i||_E = 1\,\,\,{\rm for\,\,\, all}\,\,\,i=1,2,\dots$. From (a) and (b) it follows that $\prod_{i=1}^k (1_E - 2p_i) = 1_E - 2\sum_{i=1}^k p_i \in {\rm Iso} E$, and so $||\sum_{i=1}^k p_i||_E = ||1_E - \sum_{i=1}^k p_i||_E = 1$, as well. 
Assume that there exist $x_0 \in E$ and $\epsilon_0 > 0$ such that $$||(1_E - p_i) (x_0)||_E \le ||x_0||_E - \epsilon_0 \,\,\,{\rm for\,\,\, all}\,\,\, i=1,2,\dots\,\,\,.$$ Then $$ \Vert {1\over k}\sum_{i=1}^k (1_E - p_i) (x_0) \Vert_E \le {1\over k} \sum_{i=1}^k \Vert (1_E - p_i)(x_0)\Vert_E \le ||x_0||_E - \epsilon_0 \,\,\,.$$ Therefore $$\epsilon_0\le ||x_0||_E - \Vert {1\over k}\sum_{i=1}^k (1_E - p_i) (x_0) \Vert_E \le \Vert x_0 - {1\over k}\sum_{i=1}^k (1_E - p_i) (x_0)\Vert_E = {1\over k}\Vert (\sum_{i=1}^k p_i) (x_0)\Vert_E \le {1\over k}||x_0||_E\,\,\, .$$ This is a contradiction. $\,\,\,\bigcirc$\\ \bigskip \noindent {\sl Proof of Proposition 4.4}. Identify $V'$ with ${\bf R}^{\Delta}$ via an isomorphism $(W', V') \approx (D_{\Delta}, {\bf R}^{\Delta})$, and consider in $V'$ the root system $\{\pm \epsilon_{\delta'} \pm \epsilon_{\delta''}\}$ of type $D_{\Delta}$. Denote by $s_{\delta', \delta''}^+$ the isometric reflexion along the vector $v_{\delta' ,\delta''}^+ = \epsilon_{\delta'} + \epsilon_{\delta''}, \delta', \delta'' \in \Delta, \delta' \ne \delta'',$ and by $s_{\delta', \delta''}^-$ the isometric reflexion along the vector $v_{\delta', \delta''}^- = \epsilon_{\delta'} - \epsilon_{\delta''}$. Put $d _{\delta', \delta''} = s_{\delta', \delta''}^+s_{\delta', \delta''}^- \in W'$, so that $d _{\delta', \delta''}$ is the operator of change of signs of the coordinates $\delta'$ and $\delta''$. Choose a countable subset $\{\delta_i\}_{i=1,2,\dots} \subset \Delta$ and put $d_{i,j} = d _{\delta_i, \delta_j}$. Then the involutions $d_{i,j}$ pairwise commute and $d_{n,k}d_{k,m}=d_{n,m}$. The orthogonal projections $p_{i,j} = {1\over 2}(1_E - d_{i,j})$ onto planes also pairwise commute. For each triple of different indices $n, m, k$ consider the one-dimensional projection $p_{n}^{k,m} = p_{n,k}p_{n,m}$. 
Since $p_{n}^{k,m}$ and $p_{n}^{i,j}$ commute and have the same image, they coincide; indeed, $p_{n}^{k,m} = p_{n}^{i,j}p_{n}^{k,m}=p_{n}^{k,m}p_{n}^{i,j}=p_{n}^{i,j}$. Denote by $p_n$ their common value, and consider the corresponding reflexion $s_n = 1_E - 2p_n$ along the coordinate vector $\epsilon_{\delta_n}$. It is easily seen that $s_{n} s_{m} = d_{n,m}$ and $s_m (1_E - p_{m, k}) = 1_E - p_{m, k}$. By Lemma 4.5, for a fixed $n \in {\bf N}$ and for any $x \in E, \epsilon > 0$ there exist $k, m \in {\bf N}$ such that $$\Vert s_n (x)\Vert_E \le \Vert (1_E - p_{k, m})s_n (x)\Vert_E + \epsilon = \Vert s_n (1_E - p_{k, m})(x) \Vert_E + \epsilon = $$ $$\Vert s_n s_m (1_E - p_{k, m}) (x) \Vert_E + \epsilon = \Vert d_{n,m} (1_E - p_{k, m}) (x) \Vert_E + \epsilon = $$ $$\Vert (1_E - p_{k, m}) (x) \Vert_E + \epsilon \le \Vert x \Vert_E + \epsilon\,\,\,.$$ It follows that $s_n \in {\rm Iso} E$. Since $\delta_n \in \Delta$ was taken arbitrarily, this implies that for any $\delta \in \Delta$ there exists an isometric reflexion along the vector $\epsilon_{\delta}$. Thus, the group ${\rm Iso} E$ contains the subgroup $W''$ generated by reflexions along vectors of the root system $\{\pm \epsilon_{\delta}, \pm \epsilon_{\delta'} \pm \epsilon_{\delta''}\}$ of type $B_{\Delta}$. $\,\,\,\bigcirc$\\ \bigskip \noindent {\bf 4.6. Corollary.} {\sl Let $\dim E = \infty$, $G_0 = G_0 (E) = \{1_E\}$, and the group $W=W(E)$ generated by all the isometric reflexions in $E$ be topologically irreducible. Then $W$ is an infinite Coxeter group of type $A_{\Delta}$ or $B_{\Delta}$.} \\ \bigskip \noindent {\bf 4.7.} {\sl Remark}. Let $\dim E = n < \infty$. Then Proposition 4.4 still holds in the case when $n$ is odd. Indeed, in this case $B_n = W''$ is the subgroup of the group ${\rm Iso} E$ generated by the subgroup $W' = D_n$ and the element $-1_E$. But for $n$ even the statement of Proposition 4.4 in general is not valid.
As an example, consider $E={\bf R}^n$, where $n = 2k \ge 4$, with the unit ball $B(E)$ being the convex hull of the $D_n$-orbit of the point $v_0 = (1,2,\dots,n)$. Then the image $s_n (v_0) = (1,2,\dots,n-1,-n)$ of $v_0$ by the reflexion $s_n = s_{\epsilon_n, \epsilon_{n}^*}$ does not belong to $B(E)$ (indeed, it is separated from $B(E)$ by the hyperplane $-x_n + \sum_{i=1}^{n-1} x_i = {n(n+1)\over 2}$). Hence $B(E)$ is not invariant with respect to the action of the Coxeter group $B_n$ on ${\bf R}^n = E$, and so $B_n$ is not a subgroup of ${\rm Iso}\, E$. \section{ Hilbert and Coxeter decompositions} Let, as before, $E$ be a Banach space with a total family of isometric reflexions. In this section we construct a partial orthogonal decomposition of $E$ which consists of two parts: {\it Hilbert decomposition} into a direct sum of biorthogonally complemented Hilbert subspaces, and {\it Coxeter decomposition} into a direct sum of closed subspaces endowed with topologically irreducible Coxeter groups generated by isometric reflexions. In a sense, this decomposition is orthogonal (see Lemma 5.4 and Proposition 5.6). Both of these decompositions are stable under the action of the isometry group ${\rm Iso} E$, and the second one is fixed under the action of the identity component $G_0$. The main result of the section, Theorem 5.7, is a kind of a structure theorem for the isometry group ${\rm Iso} E$.\\ \bigskip \noindent {\bf 5.1.} {\it Notation.} As above, by ${\rm IR}(E)$ we denote the set of all the isometric reflexions in $E$, which is assumed to be total. To each subspace $V$ of $ E$ we attach two closed subspaces, {\sl the kernel} $$V_0 = {\overline {\rm span}} \,(e \in V \,|\,s_{e, e^*} \in {\rm IR} (E)\,)$$ and {\sl the hull} $$\hat {V} = \bigcap \{ {\rm Ker}\, e^*\,|\,s_{e, e^*} \in {\rm IR} (E)\,, \,V \subset {\rm Ker}\, e^*\}\,\,;$$ we put $\hat {V} = E$ if there is no $s_{e, e^*} \in {\rm IR} (E)$ such that $ V \subset {\rm Ker}\, e^*$.
It is easily seen that \smallskip \noindent {\it i) $V_0 \subset {\overline V} \subset \hat V$, \noindent ii) $V_{00} = V_0\,,\,\hat {\hat {V}} = \hat {V}$, and \noindent iii) if $V \subset V'$, then $V_0 \subset V'_0$ and $\hat V \subset \hat {V'}$.} \smallskip \noindent Observe that possibly $V_0$ resp. ${\overline V}$ is a proper subspace of ${\overline V}$ resp. $\hat {V}$. For instance, this is the case when $E = {\rm l}_{\infty}$ and $V = {\rm c}$ (the subspace of convergent sequences); indeed, then $\hat {V} = E$ and $V_0 = {\rm c}_0$. Denote also ${\rm IR}_V = \{s_{e, e^*} \in {\rm IR} (E)\,|\,e \in V\}$. Let $W_V$ be the group generated by the reflexions from ${\rm IR}_V$. \bigskip \noindent {\bf 5.2. Coxeter decomposition.} This is a partial subspace decomposition defined on the fixed point subspace $F = {\rm Fix} \, G_0$ of the group $G_0 = G_0(E)$. Let $\Gamma _F = \Gamma_{W_F}$ be the full Coxeter graph of the group $W_F$, and let $ \cal A$ be the set of the connected components of $\Gamma_F$. For $\alpha \in \cal A$ denote by ${\rm IR}_{\alpha}$ the set of reflexions in ${\rm IR}_F$ which correspond to vertices of the component $\alpha$ of $\Gamma_F$. Put $V_{\alpha} = {\overline {\rm span}}\,(e\,|\,s_{e, e^*} \in {\rm IR}_{\alpha})$; so, ${\rm IR}_{\alpha} = {\rm IR}_{V_{\alpha}}$. Put also $W_{\alpha} = W_{V_{\alpha}}$. Then $V_{\alpha}$ is a closed subspace of the kernel $F_0$, and the group $W_{\alpha}\,|\,V_{\alpha}$ is topologically irreducible. By the discussion after Lemma 4.3, $W_{\alpha}$ is a Coxeter group. If $\dim V_{\alpha} = \infty$, then by Corollary 4.6, $W_{\alpha}$ has type $A_{\Delta}$ or $ B_{\Delta}$. The set $\cal A$ can be divided into equivalence classes which correspond to the isomorphism types of the Coxeter pairs $(W_{\alpha}, V_{\alpha})$.
Since $G_0$ is a normal subgroup of the group ${\rm Iso}\,E$, its fixed point subspace $F$ is invariant with respect to ${\rm Iso}\,E$; the same is true for the kernel $F_0$ and the hull $\hat {F}$. Each isometry $g \in {\rm Iso}\,E$ acts (by conjugation) on the set ${\rm IR}_F$ and also on the graph $\Gamma_F$, and so on the set $\cal A$. It is clear that the above partition of $\cal A$ is stable under this action and its equivalence classes are invariant.\bigskip \noindent {\bf 5.3. Hilbert decomposition.} Consider the following equivalence relation defined on the set ${\rm IR}(E) \setminus {\rm IR}_F\,$: $$s_{e, e^*} \sim s_{e', e'^*}\,\,\, {\rm iff} \,\,\, e' \in G_0\, e\,\,\,.$$ Let $\cal B$ be the set of its equivalence classes. By Theorem 1, to each $\beta \in \cal {B}$ there corresponds the unique Hilbert subspace $H_{\beta} = {\overline {\rm span}}\,(G_0 \,e\,|\,s_{e, e^*} \in \beta)$ and the unique bicontractive projection $p_{\beta}: E \rightarrow H_{\beta}$ satisfying all the properties of Theorem 1.b. An isometry $g \in {\rm Iso}\,E$ induces the action $g_*$ on the set $\cal B$ which is defined as follows: $g_* \beta = \beta '$ iff $g(H_{\beta}) = H_{\beta'}$. In particular, the orthogonal bases in $H_{\beta}$ and in $H_{\beta'}$ are of the same cardinality. The following lemma shows that this partial decomposition into Hilbert subspaces is orthogonal; moreover, all of the subspaces $H_{\beta}$ are orthogonal to the fixed point subspace $F$. Let ${\rm IR}_{\beta} = {\rm IR}_{H_{\beta}}$. Note that ${\rm IR} (E) = {\rm IR}_{\cal {A}} \cup {\rm IR}_{\cal {B}}$, where ${\rm IR}_{\cal {A}} = {\rm IR}_F = \bigcup_{\alpha \in {\cal {A}}} {\rm IR}_{\alpha}$ and ${\rm IR}_{\cal {B}} = \bigcup_{\beta \in \cal {B}} {\rm IR}_{\beta}$. \\ \bigskip \noindent {\bf 5.4. Lemma.} {\sl a) Let $s, s' \in {\rm IR}(E)$. 
If $[s, s'] \neq 1_E$, then $s, s'$ belong either to the same subset ${\rm IR}_{\alpha}$, where $\alpha \in \cal {A}$, or to the same subset ${\rm IR}_{\beta}$, where $\beta \in \cal {B}$. \noindent b) The projection $p_{\beta}$ commutes with any reflexion $s \in {\rm IR}(E)$ for any $\beta \in \cal {B}$. \noindent c) Furthermore, $ p_{\beta}p_{\beta'} = 0$ for any $\beta, \beta' \in \cal {B}, \beta \neq \beta'$, and $p_{\beta}\,|\,F = 0$ for any $\beta \in \cal {B}$.} \bigskip \noindent {\sl Proof. a}. Let $s_i = s_{e_i, e_i ^*}, \,i=1,2$, be two arbitrary distinct reflexions from $ {\rm IR}_{\beta}$, where $\beta \in \cal {B}$. Being restricted to the Hilbert subspace $H_{\beta}$ the rotation $r = s_1 s_2 \in {\rm Iso}\,E$ in the plane $L = {\rm span}\,(e_1, e_2)$ belongs to the connected component $G_0 (H_{\beta})$ of the orthogonal group, and so by Theorem 1.c, $r \in G_0$. Since $F = {\rm Fix}\, G_0 \subset {\rm Fix}\,r = {\rm Ker}\,e_1^* \cap {\rm Ker}\,e_2^* $, we have that $e_i^* (e) = 0 $ for each $e \in F$. Therefore, if $s= s_{e, e^*} \in {\rm IR}_F$ then by Lemma 2.3, $\alpha (s_i, s) = {\pi\over 2}$, and thus $[s_i, s] = 1_E,\,\, i=1,2$ (see Remark 2.2.c). If $\beta' \in \cal {B}$ and $\beta' \neq \beta$, then the subspace $H_{\beta'}$ is invariant with respect to the rotation $r = s_1 s_2 \in G_0$. One may assume that $r\,|\,L \neq -1_L$, and so either $H_{\beta'} \subset {\rm Fix}\,r$ or $L \subset H_{\beta'}$. The second case is impossible (indeed, otherwise by the construction, we would have $H_{\beta} = H_{\beta'}$, and so $\beta = \beta'$). Thus, $H_{\beta'} \subset {\rm Fix}\,r = {\rm Ker}\,e_1^* \cap {\rm Ker}\,e_2^*$. As above, it follows that $[s_i, s] = 1_E$ for each $s \in {\rm IR}_{\beta'}$. To prove (a) it remains to note that the definition of the set $\cal A$ (5.2) yields that $[s, s'] = 1_E$ if $s \in {\rm IR}_{\alpha},\, s' \in {\rm IR}_{\alpha'}\,$, where $\alpha, \alpha' \in \cal {A}$ and $\alpha \neq \alpha'$.
Now, (b) and (c) easily follow from (a) by the construction of the projections $p_{\beta}$ as in 2.8. $\,\,\,\bigcirc$ \bigskip \noindent {\bf 5.5. Lemma.} {\sl a) $\hat {F}_0 = \hat {F} = F$. \noindent b) $(H_{\beta})_0 = \hat {H_{\beta}} = H_{\beta} $ for any $\beta \in \cal {B}$.} \bigskip \noindent {\sl Proof. a}. Let $s_{e, e^*} \in {\rm IR} (E)$ be such that $F_0 \subset {\rm Ker}\,e^*$. Then $e \notin F_0$, and hence $e \in H_{\beta}$ for some $\beta \in \cal {B}$. As in the proof of Lemma 5.4, it follows that $F \subset {\rm Ker}\,e^*$, and so $F \subset \hat {F_0} = \bigcap_{\beta \in \cal {B}} {\rm Ker}\,p_{\beta}$. Since the subspace $\bigcup_{\beta \in \cal {B}} H_{\beta} $ is $G_0$-invariant, it is clear that $\hat {F_0}$ is $G_0$-invariant, too. If $\hat {F_0} \neq F$, then there exists $g_0 \in G_0$ such that $g_0\,|\,\hat {F_0} \neq 1_{\hat {F_0}}$, and so $g_0(x) \neq x$ for some $x \in \hat {F_0}$. Note that both $g_0 (x)$ and $x$ belong to ${\rm Ker}\,e^* $ for each $e^*$ such that $s_{e, e^*} \in {\rm IR} (E) \setminus {\rm IR}_F$ (indeed, in this case $g_0 (e) = e$, and thus $g_0^* (e^*) = e^*$). Therefore, $e^* (g_0 (x) - x) = 0$ for each $e^*$ as above, and also for each $e^*$ such that $s_{e, e^*} \in {\rm IR} (E)$. Since the system of functionals $(e^*\,|\,s_{e, e^*} \in {\rm IR} (E))$ is total, we have $g_0 (x) - x = 0$, which is a contradiction. This proves (a). \noindent b. If $\hat {H_{\beta}} \neq H_{\beta}$ for some $\beta \in \cal {B}$, then $(1_E - p_{\beta}) (x) \neq 0$ for some vector $x \in \hat {H_{\beta}}$. By Lemma 5.4.b, the projection $p_{\beta}$ commutes with any reflexion $s = s_{e, e^*} \in {\rm IR}(E)$. Thus, if $H_{\beta} \subset {\rm Ker}\,e^* $, then also $\hat {H_{\beta}} \subset {\rm Ker}\,e^*$. Therefore, $s(y) = y$ for all $y \in \hat {H_{\beta}}\,,\, p_{\beta} s(y) = s p_{\beta}(y) = p_{\beta}(y)$ and $s(1_E - p_{\beta})(y) = (1_E - p_{\beta})(y)$. The latter means that $(1_E - p_{\beta})(y) \in {\rm Ker}\,e^*$.
Hence, $(1_E - p_{\beta})(\hat {H_{\beta}}) \subset \hat {H_{\beta}}$. Now, we have $(1_E - p_{\beta})(x) \in {\rm Ker}\,e^*$ for any $e^*$ such that $s_{e, e^*} \in {\rm IR} (E)$. This contradicts the assumption that the system ${\rm IR}(E)$ is total, since $(1_E - p_{\beta})(x) \neq 0$. $\,\,\,\bigcirc$ \bigskip Put $R_0 = {\overline {\rm span}}\, (\bigcup_{\beta \in \cal B} H_{\beta})$ and $\hat R = \hat R_0$. \bigskip \noindent {\bf 5.6. Proposition.} {\sl a) The subspace $R_0 \dot{+} F$ is closed, and if $p_{R_0 , F} : R_0 \dot{+} F \to R_0$ is the first projection, then $1_{R_0 \dot{+} F} - 2p_{R_0 , F} \in {\rm Iso}\,(R_0 \dot{+} F)$. Therefore, the projection $p_{R_0 , F}$ is bicontractive. \smallskip \noindent b) For any $\alpha \in {\cal A}$ there exists a projection $p_{\alpha}: F_0 \dot{+} {\hat R} \to V_{\alpha}$ such that \noindent i) $p_{\alpha}$ commutes with any reflexion $s \in {\rm IR}\,(E)$ and $p_{\alpha}p_{\alpha'}=p_{\alpha'}p_{\alpha}=0$ resp. $p_{\alpha}p_{\beta}=p_{\beta}p_{\alpha}=0$ for all $\alpha' \in {\cal A}\,,\,\alpha' \neq \alpha\,,\,\beta \in {\cal B}$; \noindent ii) $||p_{\alpha}||_{F_0 \dot{+} {\hat R}} \le 2$ and $||1_{F_0 \dot{+} {\hat R}} - p_{\alpha}||_{F_0 \dot{+} {\hat R}} = 1$, if the latter projection is non-zero; \noindent iii) moreover, if the Coxeter group $W_{\alpha}$ is a group of type $B_{\Delta}$, then $1_{F_0 \dot{+} \hat R} - 2p_{\alpha} \in {\rm Iso}\,(F_0 \dot{+} {\hat R})$, and so $||p_{\alpha}||_{F_0 \dot{+} {\hat R}} = 1$, too. \smallskip \noindent c) The subspace $F_0 \dot{+} {\hat R}$ is closed, and if both subspaces $F_0$ and $\hat R$ are non-trivial and $p_{F_0 , {\hat R}} : F_0 \dot{+} {\hat R} \to F_0$ is the first projection, then $||1_{F_0 \dot{+} {\hat R}} - p_{F_0 , {\hat R}}||_{F_0 \dot{+} {\hat R}} = 1$ and $||p_{F_0 , {\hat R}}||_{F_0 \dot{+} {\hat R}} \le 2$.} \bigskip \noindent {\sl Proof. a}. Let $x = x_1 + x_2$, where $x_1 \in R_0$ and $x_2 \in F$.
For any $\epsilon > 0$ there exists a finite subset $\sigma \subset {\cal B} $ and a vector $x_1^{\sigma} \in \bigoplus\limits_{\beta \in \sigma} H_{\beta}$ such that $||x_1 - x_1^{\sigma}||_E < \epsilon$. Since $u_{\sigma} = \prod_{\beta \in \sigma} (1_E - 2p_{\beta}) \in {\rm Iso}\,E$ and $u_{\sigma}(x_1^{\sigma}) = -x_1^{\sigma}\,,\,u_{\sigma}(x_2) = x_2$, we have $||x_1^{\sigma} + x_2||_E = ||- x_1^{\sigma} + x_2||_E$. Thus, if $R_0 \neq \{0\}$ and $F \neq \{0\}$, then $||1_{R_0 \dot{+} F} - 2p_{R_0 , F}||_{R_0 \dot{+} F} = 1$, and therefore also $||p_{R_0 , F}||_{R_0 \dot{+} F} = ||1_{R_0 \dot{+} F} - p_{R_0 , F}||_{R_0 \dot{+} F} = 1$, if both subspaces $R_0$ and $F$ are non-trivial. By the closed graph theorem, this implies that the subspace $R_0 \dot{+} F$ is closed. \noindent $b$. If $\dim V_{\alpha} < \infty$, put $p'_{\alpha} = (1/{\rm card}\,W_{\alpha})\sum _{g \in W_{\alpha}} g$. Then $p'_{\alpha}$ is a projection on the fixed point subspace $F_{\alpha}$ of the group $W_{\alpha}$, which coincides with $\cap_{s_{e, e^*} \in {\rm IR}_{\alpha}} {\rm Ker}\,e^*$, and thus it is a complementary subspace to $V_{\alpha} = {\rm Ker}\, p'_{\alpha}$. It is clear that $||p'_{\alpha}||_E = 1$, and so $||1_E - p'_{\alpha}||_E \le 2$. From Lemma 5.4 and the definition of $p'_{\alpha}$ it follows that the projection $p_{\alpha} = (1_E - p'_{\alpha})\,|\,(R_0 \dot{+} F)$ satisfies (i); by the above inequalities, it also satisfies (ii). Next consider the case when $\dim V_{\alpha} = \infty$ and the Coxeter group $W_{\alpha}$ is of type $A_{\Delta}$. For a finite subset $\sigma \subset \Delta$ denote by $V_{\sigma}$ the subspace generated by the root vectors $\epsilon_{\delta} - \epsilon_{\delta'}$, where $\delta\,,\,\delta' \in \sigma\,,\,\delta\neq\delta'$, and by $W_{\sigma}$ the Coxeter group of type $A_n$, where $n = \dim V_{\sigma}$, generated by the isometric reflexions along these vectors. Define the projections $p'_{\sigma}$ resp. 
$p_{\sigma}$ in the same way as $p'_{\alpha}$ resp. $p_{\alpha}$ above. It is clear that $p_{\sigma}$ commutes with any reflexion $s \in {\rm IR}\,(E) \setminus {\rm IR}_{\alpha}$ and satisfies all the other properties in (i), (ii). It is easily seen that the net $(p_{\sigma})$ is strongly convergent to the identity on the subspace $ V_{\alpha}$, and that all the projections $p_{\sigma}$ vanish on the subspace ${\hat R} \dot{+} V'_{\alpha}$, where $V'_{\alpha} = {\overline {\rm span}}\,(\cup_{\alpha' \in {\cal A} \setminus \{\alpha\}} V_{\alpha'})$. As in (a) above it follows that $F_0 = V_{\alpha} \dot{+} V'_{\alpha}$. Therefore, this net is strongly convergent on the subspace $F_0 \dot{+} {\hat R}$ to the projection $p_{\alpha}$ which has the properties (i) and (ii). Finally, assume that $W_{\alpha}$ is a Coxeter group of type $B_{\Delta}$. Then for any finite subset $\sigma \subset \Delta$ the product $u_{\sigma} = \prod_{\delta \in \sigma} s_{\delta}$ of pairwise commuting reflexions $s_{\delta} = s_{\epsilon_{\delta}, \epsilon_{\delta}^*} \in {\rm IR}_{\alpha}$ is an isometric involution with the fixed point subspace $\cap_{\delta \in \sigma} {\rm Ker}\,\epsilon_{\delta}^* \supset {\hat R} \dot{+} V'_{\alpha}$. Similarly as above, the net of the restrictions $(u_{\sigma}\,|\,(F_0 \dot{+} {\hat R}))$ is strongly convergent to an isometric involution $u_{\alpha}$ which has $ V_{\alpha}$ and ${\hat R} \dot{+} V'_{\alpha}$ as its spectral subspaces. It is easily seen that the projection $p_{\alpha} = (1_{F_0 \dot{+} {\hat R}} + u_{\alpha})/2$ possesses all the properties mentioned in (i), (ii) and (iii). \noindent $c$. By the closed graph theorem it is enough to check the second statement. For a finite subset $\sigma \subset {\rm IR}_{\cal A}$ let $W_{\sigma}$ be a finite group generated by reflexions from $\sigma$, and let $V_{\sigma}$ be the linear span of the reflexion vectors of these reflexions. 
Then the action of $W_{\sigma}$ in $V_{\sigma}$ has no non-zero fixed vectors, and so the projection $p'_{\sigma} = (1/{\rm card}\,W_{\sigma})\sum _{g \in W_{\sigma}} g$ onto the fixed point subspace $F_{\sigma} \supset {\hat R}$ of $W_{\sigma}$ vanishes on $V_{\sigma}$. Consider the net of finite dimensional projections $(p_{\sigma} = 1_{F_0 \dot{+} {\hat R}} - p'_{\sigma}\,|\,(F_0 \dot{+} {\hat R}))$ onto the subspaces $V_{\sigma}$. Observe that $\bigcup_{\sigma} V_{\sigma}$ is dense in the subspace $F_0$. Since all of $p_{\sigma}$ vanish on ${\hat R}$ and satisfy the norm inequalities of (ii), this net is strongly convergent to the projection $p_{F_0 , {\hat R}}$, which also satisfies these inequalities. This completes the proof. $\,\,\,\bigcirc$ \bigskip \noindent {\it Remark.} For further information on the Hilbert decomposition, see Proposition 6.3 and Examples 6.8 below. \bigskip \noindent {\bf 5.7. Theorem.} {\sl The subspaces $F, F_0, \hat R$ and $R_0$ are invariant with respect to the group ${\rm Iso}\,E$, and there are natural monomorphisms ${\rm Iso}\,E \hookrightarrow {\rm Iso}\,R_0 \times {\rm Iso}\,F_0$ , $G_0(E) \hookrightarrow \prod\limits_{\beta \in \cal B} G_0(H_{\beta})$ and $\prod\limits_{\beta \in \cal B} {\rm O} (H_{\beta}) \hookrightarrow {\rm Iso}\,(F \dot{+} R_0)$.} \bigskip \noindent {\sl Proof.} The invariance of the subspaces $F$ and $F_0$ was already established in 5.2; the invariance of $R_0$ follows from the remark in 5.3. Similar arguments applied to the conjugate action of ${\rm Iso}\,E$ on $E^*$ provide the invariance of $\hat R$.
Since the set $\{e \in S(E)\,|\,s_{e, e^*} \in {\rm IR}(E)\}$ is contained in $F_0\cup(\bigcup\limits_{\beta \in \cal B} H_{\beta}) \subset R_0 \dot{+} F_0$, the latter summands being invariant, it follows from Lemma 4.1 that the restriction mappings $g \longmapsto g\,|\,R_0\,,\, g \longmapsto g\,|\,F_0\,,\,g \longmapsto g\,|\,H_{\beta}$ induce the monomorphisms ${\rm Iso}\,E \hookrightarrow {\rm Iso}\,R_0 \times {\rm Iso}\,F_0$ and $G_0(E) \hookrightarrow \prod\limits_{\beta \in \cal B} G_0(H_{\beta})$. As for the last statement, fix an arbitrary $g = \prod\limits_{\beta \in \cal B}\bar u_{\beta} \in \prod\limits_{\beta \in \cal B} {\rm O} (H_{\beta}) $. For any finite subset $\sigma \subset \cal B$ put $u_{\sigma} = \prod\limits_{\beta \in \sigma} u_{\beta}$, where $u_{\beta} = \bar u_{\beta} p_{\beta} + (1_E - p_{\beta}) \in {\rm Iso}\,E$ (see Theorem 1.b). We will show that the net $\{u_{\sigma}\,|\,(F \dot{+} R_0)\} \subset {\rm Iso}\,(F \dot{+} R_0)$ strongly converges to an element $u \in {\rm Iso}\,(F \dot{+} R_0)$ such that $u\,|\,H_{\beta} = \bar u_{\beta}$. Therefore, the correspondence $\prod\limits_{\beta \in \cal B} {\rm O} (H_{\beta}) \owns g \longmapsto u \in {\rm Iso}\,(F \dot{+} R_0)$ yields the desired monomorphism. By the Banach-Steinhaus Theorem, it is enough to show that for any $x \in F \dot{+} R_0$ the generalized sequence $(u_{\sigma}(x))$ is convergent. Let $x = x_1 + x_2$, where $x_1 \in F$ and $x_2 \in R_0$. For any $\epsilon > 0$ there exists a finite subset $\sigma \subset \cal B$ such that $||(1_E - \sum \limits_{\beta \in \sigma} p_{\beta})(x_2)||_E < \epsilon /2$.
If $\sigma'$ and $\sigma''$ are two finite subsets of $\cal B$ containing $\sigma$, then $u_{\sigma'} - u_{\sigma''} = (u_{\sigma'} - u_{\sigma''})(1_E - \sum\limits_{\beta \in \sigma} p_{\beta})$, and so $$||(u_{\sigma'} - u_{\sigma''})(x_2)||_E \le ||u_{\sigma'}(1_E - \sum\limits_{\beta \in \sigma} p_{\beta})(x_2)||_E + ||u_{\sigma''}(1_E - \sum\limits_{\beta \in \sigma} p_{\beta})(x_2)||_E < \epsilon\,\,\,.$$ Thus, $(u_{\sigma}(x))$ is a generalized Cauchy sequence, and hence it is convergent. This proves the theorem. $\,\,\,\bigcirc$ \bigskip \noindent {\it Remark.} In general, the monomorphisms in Theorem 5.7 are not surjective; see Example 6.8.2 below. \section{ An application: Isometry groups of ideal generalized sequence spaces} \noindent {\bf 6.1.} {\it Definitions.} Recall the following notions (see e.g. [17], [19]). Let $(e_{\alpha})_{\alpha\in \Delta}$ be a system of vectors in a Banach space $E_0$. It is called {\it a generalized Schauder basis} of $E_0$ if each vector $e \in E_0$ has a unique, up to permutations, decomposition $e = \sum_{i=1}^{\infty} a_i e_{\alpha_i}$, where $(\alpha_i)_{i=1,\dots}$ is a sequence of pairwise distinct indices from $\Delta$. If this series is still convergent to $e$ after any permutation of its members, then this basis is called {\it unconditional}. In this case for any choice of signs $\theta = (\theta_{\alpha})_{\alpha \in \Delta}$, where $\theta_{\alpha}=\pm 1$, the linear operators $M_{\theta}(e) = \sum_{i=1}^{\infty}\theta_{\alpha_i} a_i e_{\alpha_i}$ are uniformly bounded. The number ${\rm sup}_{\theta} \,||M_{\theta}||_{E_0}$ is called {\it the unconditional constant} of the basis $(e_{\alpha})_{\alpha\in \Delta}$. For instance, any complete orthonormal system in a Hilbert space is an unconditional basis with the unconditional constant $1$. If the index set $\Delta$ is countable, we have the usual notion of an unconditional basis.
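For a finite-dimensional feeling of the unconditional constant, one can compute $\sup_{\theta}\,||M_{\theta}||$ directly. The Python sketch below is an illustration only; the space $({\bf R}^3, ||\cdot||_1)$ and the sample bases are our own choices, not taken from the text. It shows that the standard basis has unconditional constant $1$, while a "summing-type" basis has a strictly larger one:

```python
from itertools import product
import numpy as np

def unconditional_constant(B):
    """sup over sign choices theta of the l1 -> l1 operator norm of M_theta,
    where the columns of B are the basis vectors of (R^n, ||.||_1)."""
    n = B.shape[1]
    Binv = np.linalg.inv(B)
    best = 0.0
    for theta in product([1.0, -1.0], repeat=n):
        # M_theta changes the signs of the basis coefficients:
        # in coordinates it is B . diag(theta) . B^{-1}.
        M = B @ np.diag(theta) @ Binv
        # the l1 -> l1 operator norm is the maximal column l1-norm
        best = max(best, np.abs(M).sum(axis=0).max())
    return best

standard = np.eye(3)                       # standard basis: constant 1
summing = np.array([[1.0, 1.0, 1.0],       # e1, e1+e2, e1+e2+e3
                    [0.0, 1.0, 1.0],
                    [0.0, 0.0, 1.0]])
print(unconditional_constant(standard))    # 1.0
print(unconditional_constant(summing))     # > 1: this basis is "less unconditional"
```

In an ideal sequence space, as in the Hilbert-space example above, every sign change is an isometry, which is exactly the statement that the unconditional constant equals $1$.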
The generalized unconditional basis $(e_{\alpha})_{\alpha\in \Delta}$ is called {\it symmetric} if for any bijection $\pi: \Delta \rightarrow \Delta$ the linear operator $$\pi^* : E_0 \owns e = \sum_{i=1}^{\infty} a_i e_{\alpha_i} \mapsto \sum_{i=1}^{\infty} a_i e_{\pi(\alpha_i)} = \pi^* (e) \in E_0$$ is bounded, and so the infinite symmetric group $S_{\Delta} = {\rm Biject}(\Delta)$ acts in $E_0$, being uniformly bounded there. The constant ${\rm sup}_{\theta, \pi} ||M_{\theta}\pi^*||_{E_0}$ is called {\it the symmetric constant} of the basis $(e_{\alpha})_{\alpha\in \Delta}$. For instance, in the classical Banach space $c_0 (\Delta)$ of generalized sequences convergent to zero, with $\Delta$ as a set of indices, the system of the standard basis vectors $(\epsilon_{\delta})_{\delta\in\Delta}$ forms a symmetric basis with the symmetric constant $1$ (note that each vector in $c_0 (\Delta)$ has a countable support). Fixing a generalized unconditional basis $(e_{\alpha})_{\alpha\in \Delta}$ in $E_0$ we obtain a representation of $E_0$ as a generalized sequence space contained in $c_0 (\Delta)$. If the unconditional constant of this basis is $1$, then $E_0$ is an ideal Banach lattice. Recall that {\it an ideal generalized sequence space} $E$ is a Banach space of sequences defined on an index set $\Delta$ such that if $x=(x_{\alpha})_{\alpha \in \Delta} \in E$, then for any sequence $y = (y_{\alpha})_{\alpha \in \Delta}$ with $|y_{\alpha}| \le |x_{\alpha}|$ for all $\alpha \in \Delta$ one has $y \in E$ and $||y||_E \le ||x||_E$. An ideal generalized sequence space $E$ is called {\it a symmetric generalized sequence space} if the symmetric group $S_{\Delta}$ of all bijections of $\Delta$ acts isometrically on it. \\ The next simple lemma should be well known; for lack of a reference we give a proof. We say that a family of reflexions is {\it orthogonal} if the reflexions from the family pairwise commute. \bigskip \noindent {\bf 6.2.
Lemma.} {\sl Let $E$ be a Banach space with a total orthogonal family of isometric reflexions $(s_{\delta}=s_{\epsilon_{\delta},\epsilon_{\delta}^*})_{\delta \in \Delta}$. Identify $E$ with a generalized sequence space with the index set $\Delta$ by setting ${\bar x}= (\epsilon_{\delta}^*(x))_{\delta \in \Delta}$ for $x \in E$. Let $E_0 = {\overline {\rm span}}\, (\epsilon_{\delta}\,|\,\delta \in \Delta)$. Then we have \smallskip \noindent a) The system $(\epsilon_{\delta})_{\delta \in \Delta}$ is a generalized unconditional basis in $E_0$ with the unconditional constant $1$, and so $E_0$ is an ideal generalized sequence space. \smallskip \noindent b) If the Coxeter group $B_{\Delta}$ of permutations and sign changes of a finite number of coordinates acts isometrically in $E_0$, then $(\epsilon_{\delta})_{\delta \in \Delta}$ is a symmetric basis in $E_0$ with the symmetric constant $1$, and so $E_0$ is a symmetric generalized sequence space.} \bigskip \noindent {\sl Proof. a}. Let $\sigma$ be a finite subset of $\Delta$. Consider the coordinate subspace $$E_{\sigma} = \{x=(x_{\delta})_{\delta\in\Delta}\in E_0\,|\,x_{\delta} =0 \,\,{\rm for\,\, all}\,\,\delta \notin \sigma\}\,\,\,.$$ Let $p_{\sigma} = {1\over 2} (1_{E_0} - u_{\sigma} )$, where $u_{\sigma} = \prod_{\delta \in \sigma} s_{\delta}$, be the coordinate projection $E_0 \to E_{\sigma}$. Since $u_{\sigma} \in {\rm Iso}\, E_0$, we have $||p_{\sigma} ||_{E_0} = ||1_{E_0} - p_{\sigma}||_{E_0} = 1$. Fix an arbitrary vector $x \in E_0$. For any $n \in {\rm \bf N}$ there exists a finite subset $\sigma_n$ of $\Delta$ and $y_n \in E_{\sigma_n}$ such that $||x - y_n||_{E_0} < 1/n$. Then also $||p_{\sigma_n}(x) - y_n||_{E_0} < 1/n$, and so $||(1_{E_0} - p_{\sigma_n})(x)||_{E_0} < 2/n$. It follows that $x$ has at most countable support contained in $\Omega = \bigcup_{i=1}^{\infty} \sigma_i = \{\delta_1,\dots, \delta_k,\dots\}$, and $||x - \sum_{i=1}^k x_{\delta_i}\epsilon_{\delta_i}||_{E_0} \to 0$.
Thus the system $(\epsilon_{\delta})_{\delta \in \Delta}$ is a generalized Schauder basis in $E_0$. It is easily seen that for any fixed subset $\Omega \subset \Delta$ the net of isometric involutions $(u_{\sigma}\,|\,\sigma \subset \Omega,\,{\rm card}\,\sigma < \infty)$ strongly converges on $E_0$ to the isometric involution $u_{\Omega}$, and therefore the basis $(\epsilon_{\delta})_{\delta \in \Delta}$ of $E_0$ is unconditional with the unconditional constant $1$. \smallskip \noindent $b$. Fix a permutation $\pi \in S_{\Delta}$, a vector $x \in E_0$ and $\epsilon > 0$ arbitrarily. Let $\sigma$ be a finite subset of $\Delta$ such that $||(1_{E_0} - p_{\sigma})(x)||_{E_0} < \epsilon$. There exists a finite permutation $\pi' \in S_{\Delta}$ such that $\pi' | \sigma = \pi | \sigma$. Since the Coxeter group $B_{\Delta}$ acts isometrically on $E_0$, we have $$||\pi^* p_{\sigma}(x)||_{E_0} = ||\pi'^*p_{\sigma}(x)||_{E_0} = ||p_{\sigma}(x)||_{E_0}\,\,\,.$$ Thus, the linear operator $\pi^*$ is well defined and isometric on the dense subspace ${\bf R}_0^{\Delta}$ of finitely supported vectors of $E_0$. Therefore, it can be extended isometrically onto $E_0$, and since the same is true for $(\pi^{-1} )^*$, this extension belongs to the group ${\rm Iso}\,E_0$. This proves ($b$).$\,\,\,\bigcirc$ \bigskip \noindent {\it Remark.} It is not true in general that, under the assumptions of this lemma, $E$ itself must be an ideal space, even if all single sign changes are isometries of $E$. As an example, consider the space $c$ of convergent sequences, which is not an ideal lattice. \bigskip We return to the Hilbert decomposition, keeping all the notation and the conventions of Section 5.\\ \bigskip \noindent {\bf 6.3.
Proposition.} {\it There exists an ideal generalized sequence space $X$ with $\cal B$ as an index set such that the subspace $R_0$ of $E$ is isometric to the Banach sum $(\bigoplus\limits_{\beta \in \cal B} H_{\beta})_X$.} \bigskip \noindent {\sl Proof.} For each $\beta \in \cal B$ fix a vector $e_{\beta} \in S(H_{\beta})$. Consider the subspace $X = {\overline {\rm span}}\, (e_{\beta} \,|\,\beta \in {\cal B}) \subset F \dot{+} R_0$. Since the system of functionals $(e_{\beta}^*\,|\,\beta \in {\cal B})$ is biorthogonal to the system $(e_{\beta} \,|\,\beta \in {\cal B})$ and the reflexions $s_{e_{\beta}, e_{\beta}^*}\,|\,X$ are isometric, by Lemma 5.9.a, the latter system is an unconditional basis in $X$ with the unconditional constant $1$, and so $X$ can be identified with an ideal generalized sequence space on $\cal B$. Put $R_1 = (\bigoplus\limits_{\beta \in \cal B} H_{\beta})_X$. We will show that the correspondence $$\tau : R_0 \owns x \longmapsto (p_{\beta} (x))_{\beta \in {\cal B}} \in R_1$$ is a linear isometry of $R_0$ onto $R_1$. Put $e'_{\beta} = {p_{\beta} (x) \over ||p_{\beta} (x)||_E} \in S(H_{\beta})$ if $p_{\beta} (x) \neq 0$. Let $\bar u_{\beta} \in {\rm O}(H_{\beta})$ be such that $\bar u_{\beta}(e'_{\beta}) = e_{\beta}$ if $p_{\beta} (x) \neq 0$ and $\bar u_{\beta} = 1_{H_{\beta}}$ otherwise. As follows from Theorem 5.7, there exists $u \in {\rm Iso}\,R_0$ such that $u\,|\,H_{\beta} = \bar u_{\beta}$ for all $\beta \in \cal B$. Since $u(x) \in X$ and $u$ is an isometry, it is clear that $\tau (x) \in R_1$ and $||\tau (x)||_{R_1} = ||x||_{R_0}$. To show that $\tau$ is surjective, fix an arbitrary vector $\bar x \in R_1\,,\, \bar x = (x_{\beta} \in H_{\beta})_{\beta \in \cal B}$. Then $x' = \sum\limits_{\beta \in \cal B} ||x_{\beta}||_E\, e_{\beta} \in X \subset R_0$. For each $\beta \in \cal B$ let $\bar u_{\beta} \in {\rm O}(H_{\beta})$ be such that $\bar u_{\beta}(x_{\beta}) = ||x_{\beta}||_E\, e_{\beta}$.
As above, there exists $u \in {\rm Iso}\,R_0$ such that $u\,|\,H_{\beta} = \bar u_{\beta}$ for all $\beta \in \cal B$. If $x_0 = u^{-1} (x')$, then we have $\tau (x_0) = \bar x$. Thus, $\tau$ is an invertible isometry. This completes the proof. $\,\,\,\bigcirc$ \bigskip \noindent {\bf 6.4.} {\it Notation.} Consider again an ideal generalized sequence space $E$ with an index set $\Delta$. Without loss of generality one may assume that $||\epsilon_{\delta}||_E = 1$ for all $\delta \in \Delta$. For a subset $\Omega \subset \Delta$ let $E({\Omega})$ be the strip $E({\Omega}) = \{x=(x_{\delta} ) \in E\,|\,x_{\delta} = 0$ for all $\delta \in \Delta \setminus \Omega\}$. Any such strip is biorthogonally complemented in $E$; indeed, the operator of multiplication by the characteristic function of $\Omega$ is a bicontractive projection $p_{\Omega}\,:\,E \to E(\Omega)$ with $1_E - p_{\Omega} = p_{\Delta \setminus \Omega}$. Put $E_0 (\Omega) = {\overline {\rm span}}\,(\epsilon_{\delta})_{\delta \in \Omega}$, so that $E = E(\Delta), \,E_0 = E_0 (\Delta)$ and $E_0 (\Omega) = E_0 \cap E (\Omega)$. We also preserve in this particular case all the other notation introduced in Section 5. The next proposition shows that the Hilbert and Coxeter decompositions of an ideal generalized sequence space yield an orthogonal decomposition into strips. A reflexion vector $\epsilon_{\delta}$ of the single sign change $s_{\delta} = s_{\epsilon_{\delta}, \epsilon_{\delta}^*} \in {\rm IR} (E)$ belongs to a certain subspace $V_{\alpha}$ or $H_{\beta}$. Putting $\Delta_{\alpha} = \{\delta \in \Delta\,|\,\epsilon_{\delta} \in V_{\alpha}\}$ and $\Delta_{\beta} = \{\delta \in \Delta\,|\,\epsilon_{\delta} \in H_{\beta}\}$ we obtain a disjoint partition of $\Delta$ by the subsets $\{\Delta_{\alpha} \,,\,\Delta_{\beta}\}_{\alpha \in {\cal A}\,,\,\beta \in {\cal B}}$.
Put also $\Delta_{{\cal A}} = \bigcup\limits_{\alpha \in {\cal A}} \Delta_{\alpha}$ and $\Delta_{{\cal B}} = \bigcup\limits_{\beta \in {\cal B}} \Delta_{\beta}$, so that $\Delta = \Delta_{{\cal A}} \cup \Delta_{{\cal B}}$. \bigskip \noindent {\bf 6.5. Proposition.} {\sl In the notation above one has \smallskip \noindent a) \noindent i) $V_{\alpha} = E_0 (\Delta_{\alpha})\,,\,{\hat V_{\alpha}} = E(\Delta_{\alpha})$ for each $\alpha \in {\cal A}$ and \noindent ii) $H_{\beta} = E (\Delta_{\beta})=E_0 (\Delta_{\beta}) = l_2 (\Delta_{\beta})$ for each $\beta \in {\cal B}$; \noindent iii) if ${\rm card}\,(\Delta_{\alpha}) = \infty$, then $W_{\alpha}$ is a Coxeter group of type $B_{\Delta_{\alpha}}$; \smallskip \noindent b) \noindent i) $F_0 = E_0 (\Delta_{{\cal A}})$ and $R_0 = E_0 (\Delta_{{\cal B}})$, so that $E_0 = R_0 \dot{+} F_0$; \noindent ii) $F = E(\Delta_{{\cal A}})$ and ${\hat R} = E(\Delta_{{\cal B}})$, so that $E = {\hat R} \dot{+} F$; \smallskip \noindent c) \noindent i) $p_{\alpha} = p_{\Delta_{\alpha}}\,|\,(F_0 \dot{+} {\hat R})$ and $p_{\beta} = p_{\Delta_{\beta}}$ for all $\alpha \in {\cal A}\,,\,\beta \in {\cal B}$; \noindent ii) $p_{R_0, F} = p_{\Delta_{\cal B}}\,|\,(R_0 \dot{+} F)$ and $p_{F_0, {\hat R}} = p_{\Delta_{\cal A}}\,|\,(F_0 \dot{+} {\hat R})$ (see Proposition 5.6).} \bigskip \noindent {\sl Proof. a}. Put $\cal C = \cal A \cup \cal B$ and $V_{\gamma} = H_{\gamma}$ for $\gamma \in \cal B$. By the above definitions, $E_0 (\Delta_{\gamma}) \subset V_{\gamma}$ for all $\gamma \in \cal C$. Let $\gamma \in \cal C$, $\delta \in \Delta_{\gamma}$ and $\delta' \in \Delta \setminus \Delta_{\gamma}$. By Lemma 5.4, $s_{\delta'} = s_{\epsilon_{\delta'}, \epsilon_{\delta'}^*} \in {\rm IR} (E)\setminus IR_{\gamma}$ commutes with any reflexion $s \in IR_{\gamma}$, and so $V_{\gamma} \subset {\rm Ker}\,\epsilon_{\delta'}^*$.
Therefore, ${\hat V_{\gamma}} \subset {\rm Ker}\,\epsilon_{\delta'}^*$, too, and hence ${\hat V_{\gamma}} \subset \bigcap\limits_{\delta' \in \Delta \setminus \Delta_{\gamma}} {\rm Ker}\,\epsilon_{\delta'}^* = E(\Delta_{\gamma})$. In particular, each reflexion vector $e$ of a reflexion $s=s_{e, e^*} \in IR(E)$ belongs to one of the strips $E(\Delta_{\gamma})$, where $\gamma \in \cal C$. Namely, $e = (x_{\delta}) \in E(\Delta_{\gamma})$ iff $x_{\delta}\neq 0$ for at least one $\delta \in \Delta_{\gamma}$. Furthermore, in the latter case either $e = \pm \epsilon_{\delta}$ or the reflexions $s$ and $s_{\delta}$ do not commute. Let $\gamma = \alpha \in \cal A$. From the classification of the infinite Coxeter groups in section 4 it follows that if $W$ is such a group and $s \in W$, then the set of all reflexions in $W$ that do not commute with $s$ contains at most finitely many pairwise commuting reflexions. This means that the reflexion vector $e$ of any given reflexion $s\in IR_{\alpha}$ has only a finite number of non-zero coordinates, i.e. $e \in {\rm span}\,(\epsilon_{\delta}\,|\,\delta \in \Delta_{\alpha}) \subset E_0 (\Delta_{\alpha})$. Thus, $V_{\alpha} \subset E_0 (\Delta_{\alpha})$, and therefore, $V_{\alpha} = E_0 (\Delta_{\alpha})$, which is the first statement of ($a.i$). In particular, the reflexion vectors $(\epsilon_{\delta})_{\delta \in \Delta_{\alpha}}$ of sign change reflexions $(s_{\delta})_{\delta \in \Delta_{\alpha}} \subset IR_{\alpha}$ form a complete orthogonal system in $V_{\alpha}$. If ${\rm card}\,(\Delta_{\alpha}) <\infty$, then clearly $E (\Delta_{\alpha})=E_0 (\Delta_{\alpha})=V_{\alpha}=\hat {V_{\alpha}}$. If ${\rm card}\,(\Delta_{\alpha}) =\infty$, then by Corollary 4.6, the group $W_{\alpha}$ generated by reflexions from $IR_{\alpha}$ is a Coxeter group of type $A_{\Delta'}$ or $B_{\Delta'}$. But the Coxeter group $A_{\Delta'}$ does not contain a complete set of pairwise commuting reflexions, i.e.
there is no orthogonal subsystem of the root system $(\epsilon_{\delta} - \epsilon_{\delta'}\,|\,\delta, \delta' \in \Delta'\,,\,\delta \neq \delta')$ which would be a Hamel basis of ${\bf R}_0^{\Delta'}$. This excludes the first case, and so the group $W_{\alpha}$ should be a Coxeter group of type $B_{\Delta'}$. It is clear that ${\rm card}\,(\Delta') = {\rm card}\,(\Delta_{\alpha})$. This proves ($a.iii$). Let, further, $\gamma = \beta \in \cal B$. Then $E_0 (\Delta_{\beta})$ is a subspace of the Hilbert space $H_{\beta}$, and the system $(\epsilon_{\delta})_{\delta \in \Delta_{\beta}}$ is an orthonormal basis of $E_0 (\Delta_{\beta})$. Thus, $E_0 (\Delta_{\beta})= {\rm l}_2 (\Delta_{\beta})$. Assume that $H_{\beta} \neq E_0 (\Delta_{\beta})$. Let $x \in H_{\beta}$ be a non-zero vector orthogonal to $E_0 (\Delta_{\beta})$. It is easily seen that $\epsilon_{\delta}^* (x) = 0$ for all $\delta \in \Delta_{\beta}$. This is impossible, since $H_{\beta}\subset E (\Delta_{\beta})$ and $x \neq \bar 0$. Therefore, $H_{\beta} = E_0 (\Delta_{\beta}) = {\rm l}_2 (\Delta_{\beta})$. Let $p_{\beta} \,:\,E \to H_{\beta}$ be the projection as in Theorem 1.b. Suppose that $H_{\beta}\neq E (\Delta_{\beta})$. Then the restriction $p_{\beta}\,|\,E (\Delta_{\beta})$ is a non-identical projection, so that there exists a non-zero vector $x \in {\rm Ker}\,p_{\beta}\cap E (\Delta_{\beta})$. Fixing $\delta \in \Delta_{\beta}$, consider the plane $L = {\rm span} (x, \epsilon_{\delta})$. There are two commuting isometric reflexions in $L$, namely $(1_L -2p_{\beta})\,|\,L$ and $s_{\delta}\,|\,L$. Therefore, $x \in {\rm Ker}\,\epsilon_{\delta}^*$ for all $\delta \in \Delta_{\beta}$, and so $x = \bar 0$, which is a contradiction. Hence, $H_{\beta} = E(\Delta_{\beta}) = E_0 (\Delta_{\beta}) = {\rm l}_2 (\Delta_{\beta})$. This proves ($a$), except for the second equality in ($a.i$), which is proved below. \smallskip \noindent $b$.
For any $\gamma \in {\cal C}$ consider the isometric involution $u_{\gamma} = 1_E - 2p_{\Delta_{\gamma}}$ with the spectral subspaces $E(\Delta_{\gamma})$ and $E(\Delta \setminus \Delta_{\gamma})$. It is easily seen that for any $s=s_{e, e^*} \in IR(E)$ the isometries $s u_{\gamma}$ and $u_{\gamma} s$ coincide on the total system of reflexion vectors $(\epsilon_{\delta})_{\delta \in \Delta}$. From Lemma 4.1 it follows that they coincide on $E$. Thus, the involution $u_{\gamma}$ commutes with each reflexion $s_{e, e^*} \in IR(E)$. Therefore, one of its spectral subspaces contains the vector $e$ and the other one is contained in the mirror hyperplane ${\rm Ker}\, e^*$. Hence, for any $s_{e, e^*} \in IR_{\gamma}$ one has ${\rm Ker}\, e^* \supset E(\Delta \setminus \Delta_{\gamma})$. Let the set $\cal C$ be divided into two disjoint parts ${\cal C} = {\cal C}' \cup {\cal C}''$. Put $\Omega' = \bigcup\limits_{\gamma \in {\cal C}'} \Delta_{\gamma}\,,\,\Omega'' = \bigcup\limits_{\gamma \in {\cal C}''} \Delta_{\gamma}$, so that $\Omega', \Omega''$ consist of some parts of the disjoint partition $\Delta = \bigcup\limits_{\gamma \in {\cal C}} \Delta_{\gamma}$. We are going to show, more generally, that ${\widehat {E_0 (\Omega')}} = E(\Omega')$, which easily implies the equalities in ($b.ii$) and ($a.i$). By the considerations above, we have $E(\Omega') \subset \bigcap ({\rm Ker}\,e^*\,|\,s_{e, e^*} \in {\rm IR}_{{\cal C}''})$, where ${\rm IR}_{{\cal C}''} = \bigcup\limits_{\gamma \in {\cal C}''} {\rm IR}_{\gamma}$. On the other hand, $E(\Omega') = \bigcap\limits_{\delta \in \Omega''} {\rm Ker}\,\epsilon_{\delta}^* \supset \bigcap ({\rm Ker}\,e^*\,|\,s_{e, e^*} \in IR_{{\cal C}''})$. Therefore, $E(\Omega') = \bigcap ({\rm Ker}\,e^*\,|\,s_{e, e^*} \in {\rm IR}_{{\cal C}''}) = {\widehat {E_0 (\Omega')}}$. The last equality is clear from the definition of the envelope, because $s_{e, e^*} \in IR_{{\cal C}''}$ iff $E_0 (\Omega') \subset {\rm Ker}\,e^*$.
This proves ($b$) and the second equality in ($a.i$). \smallskip $c$. The isometric involutions $u_{\beta} = 1_E - 2p_{\Delta_{\beta}}$ and $1_E - 2p_{\beta}$ coincide on vectors of the system $(\epsilon_{\delta})_{\delta \in \Delta}$, so by Lemma 4.1, they coincide on $E$. This proves the second equality in ($c.i$). By the same reasoning (see Proposition 5.6.$b.iii$) the first equality in ($c.i$) holds. The equalities ($c.ii$) follow from ($b$), just by the definition of the projections involved. By Proposition 5.6.$b.i$, the projection $p_{\alpha}$ commutes with any sign change reflexion $s_{\delta}$. By the same type of arguments as those used in the proof of ($b$), it follows that ${\rm Ker}\, p_{\alpha} = {\rm Ker} \,(p_{\Delta_{\alpha}}\,|\,(F_0 \dot{+} {\hat R}))$. Since the images also coincide, we have the first equality in ($c.i$). This proves the proposition. $\,\,\,\bigcirc$ \bigskip This proposition, together with Theorem 5.7 and the remark that the union of the subspaces $H_{\beta}\,,\,\beta \in \cal B,$ is invariant with respect to the group ${\rm Iso}\,E$, leads to the following \bigskip \noindent {\bf 6.6. Corollary.} {\it a) $$G_0(E) \subset \bigoplus\limits_{\beta \in {\cal B}} G_0 (l_2 (\Delta_{\beta}))\,\,\,\,\,\, and \,\,\,\,\,\,{\rm Iso}\,E \subset {\rm Iso}\,E (\Delta_{{\cal A}}) \bigoplus {\rm Iso}\,E (\Delta_{{\cal B}})\,\,;$$ b) each element of the group $({\rm Iso}\,E)\,|\,E (\Delta_{\cal B})$ is of the form $(x_{\beta}) \mapsto (u_{\beta} (x_{\beta}))$, where $x_{\beta} \in {\rm l}_2 (\Delta_{\beta})$, $\,\,u_{\beta} : {\rm l}_2 (\Delta_{\beta}) \to {\rm l}_2 (\Delta_{\pi (\beta)})$ is an isometry of Hilbert spaces for each $\beta \in {\cal B}$, and $\pi$ is a permutation of the set $\cal B$.} \bigskip \noindent {\bf 6.7.} {\it Remark.} Let $\alpha \in \cal A$ be such that ${\rm card}\, \Delta_{\alpha} = \infty$. Then ${\rm Iso}\,V_{\alpha}$ contains a Coxeter subgroup of type $B_{\Delta_{\alpha}}$. 
It is not true in general that it also contains the symmetric group $S_{\Delta_{\alpha}}$ of permutation operators. In fact, this latter group is contained in ${\rm Iso}\,V_{\alpha}$, but possibly only in some other representation of $V_{\alpha}$ as an ideal generalized sequence space. Indeed, consider any symmetric generalized sequence space $M$ on $\Delta_{\alpha}$, such that the system $(\epsilon_{\delta})_{\delta \in \Delta_{\alpha}}$ is a Schauder basis in $M$. Fix a disjoint partition of $\Delta_{\alpha}$ into pairs $(\delta, \delta')$. Then by Lemma 6.2.$a$, the corresponding subsystem $(\epsilon_{\delta} \pm \epsilon_{\delta'})$ of the root system is an unconditional Schauder basis in $M$ with the unconditional constant $1$, and this basis is not symmetric. Thus, using the dual system of functionals, one can represent the strip component $M = V_{\alpha}$ as an ideal generalized sequence space which is not symmetric and such that the isometry group does not act as permutations and sign changes (the image of a basis vector under an isometric reflexion might be a vector with 4 non-zero coordinates!). Recall that a symmetric basis is unique; moreover, a basis which is, in a sense, symmetric enough is unique [11]. Thus, here we have an unconditional basis with a relatively small group of symmetries. Conversely, if $g \in {\rm Iso}\, E$ is such that $g(V_{\alpha}) = V_{\alpha'}$, where $\alpha \in \cal A$ is as above, then one can represent $V_{\alpha}$ resp. $V_{\alpha'}$ as a symmetric generalized sequence space on $\Delta_{\alpha}$ resp. $\Delta_{\alpha'}$, and then $g$ should be an operator of the form $(x_{\alpha}) \mapsto (\pm x_{\pi(\alpha)})$, where $\pi\,:\,\Delta_{\alpha} \to \Delta_{\alpha'}$ is a bijection (indeed, $\pi$ must transfer the sign change reflexions from ${\rm IR}_{\alpha}$ into sign change reflexions from ${\rm IR}_{\alpha'}$).
In [24], [25] certain conditions on an ideal generalized sequence space are given which guarantee that its isometry group acts by permutations and sign changes. This is always the case in symmetric sequence spaces different from ${\rm l}_2$ [23, ch. IX], [6] (see also [2], [8] for the complex field). \bigskip Next we give several examples related to the results of Sections 5 and 6. \bigskip \noindent {\bf 6.8.} {\it Examples.} \medskip \noindent 1) ([24], [25]) Fix a sequence of real numbers $p_k \ge 1\,,\,k=1,\dots$. {\it The Orlicz--Nakano space} $E = l(\{p_k\})$ consists of all sequences of real numbers $x = (\xi_k)_{k=1}^{\infty}$ such that the following norm is finite: $$||x||_E = {\rm inf}\,\{\lambda > 0\,|\,\sum\limits_{k=1}^{\infty} |\xi_k / \lambda |^{p_k} \le 1\}\,\,.$$ It is an ideal sequence space. Put $\Delta_q = \{i \in {\bf N}\,|\,p_i = q\}$, where $q= q_1 , q_2 ,\dots$ are pairwise distinct. Then ${\cal A} = \{q_i\,|\,q_i \neq 2\}$ and $\Delta_{\cal A} = \{i\,|\,p_i \neq 2\}\,,\, {\cal B} = \{2\}$ if $\Delta_2 \neq \emptyset$ and ${\cal B} = \emptyset$ otherwise; $E(\Delta_q) = {\rm l}_q (\Delta_q)$. The group ${\rm Iso}\, E$ is the direct product of the groups ${\rm O}\, ({\rm l}_2 (\Delta_2))$ and ${\rm Iso}\,\Delta_{\cal A}$, where ${\rm Iso}\,\Delta_{\cal A}$ is the group of all permutations of coordinates $\xi_i,\,i \in \Delta_{\cal A}$, preserving the partition $\Delta_{\cal A} = \bigcup \Delta_{q_i}$, and arbitrary sign changes of these coordinates. Indeed, this direct product evidently is a subgroup of ${\rm Iso}\, E$; the converse inclusion follows from the results of [24], [25] in view of the decomposition from Corollary 6.6. In a similar way one can describe the isometry groups of more general modular sequence spaces or of Banach sums of (symmetric) ideal sequence spaces. \medskip \noindent 2) Let $E$ be the space of all convergent complex sequences with the supremum norm.
Then $E$ is a Banach sum of the real euclidean planes $H_i\,,\,i=1,\dots$. We have that $E = \hat R$, and $E_0 = R_0$ is the subspace of sequences in $E$ convergent to zero. The group ${\rm Iso}\, E_0$ is the semi-direct product of ${\rm O}(2)^{\omega}$ and the infinite symmetric group $S_{\omega}$, while ${\rm Iso}\, E$ is its proper subgroup (indeed, if $g \in {\rm Iso}\, E$, then the corresponding sequence of orthogonal plane transformations from ${\rm O}(2)^{\omega}$ is convergent). This shows that all the inclusions in Theorem 5.7 are strict. Observe that here $E$ is not an ideal sequence space. \medskip \noindent 3) Consider $E = {\bf R}^4$ with the norm $$||x||_E = ||(\xi_1 , \xi_2, \varsigma_1, \varsigma_2)||_E = [((\xi_1^2 + \xi_2^2)^{1/2} + |\varsigma_1 |)^2 + \varsigma_2^2]^{1/2}\,.$$ It is easily seen that here $\hat R = R_0 = H = \{x \in E\,|\,\varsigma_1 = \varsigma_2 = 0\}$ and $F=F_0=\{x \in E\,|\,\xi_1 = \xi_2 =0\}$. Furthermore, $E = F_0\dot{+} R_0$ is an ideal space, and both strips $F_0$ and $R_0$ are euclidean planes. Thus, $G_0 (E) \neq G_0 (F_0) \oplus G_0 (R_0)$, and so ${\rm Iso}\, E \neq {\rm Iso}\, F_0 \oplus {\rm Iso}\, R_0$ (cf. Corollary 6.6.$a$). \medskip \noindent 4) Slightly modifying example 3, consider $E \cong {\bf R}^6$ with the norm $$||x||_E = ||(\xi_1 , \xi_2, \eta_1, \eta_2, \varsigma_1, \varsigma_2)||_E = [((\xi_1^2 + \xi_2^2)^{1/2} + |\varsigma_1 |)^2 + ((\eta_1^2 + \eta_2^2)^{1/2} + |\varsigma_2|)^2]^{1/2}\,.$$ Being the direct sum of two euclidean planes $ H_1$ and $ H_2$, which are strips invariant under $G_0 (E)$, the subspace $R_0$ itself is euclidean. Thus, $G_0 (R_0) \neq G_0 (H_1) \oplus G_0 (H_2) = G_0 (E)$ (cf. Corollary 6.6.$a$).
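The Orlicz--Nakano norm of example 1 is easy to evaluate numerically, which gives a quick sanity check of the description of ${\rm Iso}\,E$ given there. The following Python sketch (our own illustration; the function name, test vectors and tolerances are ours, not taken from the paper) computes the norm by bisection on the decreasing function $\lambda \mapsto \sum_k |\xi_k/\lambda|^{p_k}$, and confirms that permuting coordinates within one block $\Delta_q$ preserves the norm, while mixing coordinates across blocks with different exponents in general does not.

```python
def nakano_norm(x, p, tol=1e-12):
    """Orlicz--Nakano (Luxemburg-type) norm of a finitely supported sequence:
    inf{ lam > 0 : sum_k |x_k/lam|^{p_k} <= 1 }, assuming all p_k >= 1."""
    if not any(x):
        return 0.0

    def modular(lam):
        return sum(abs(t / lam) ** q for t, q in zip(x, p))

    # bracket the norm, then bisect: modular(lam) is strictly decreasing
    lo, hi = 1e-9, max(abs(t) for t in x) + sum(abs(t) for t in x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if modular(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = [1.0, 3.0, 3.0, 2.0, 2.0]              # blocks: Delta_1 = {0}, Delta_3 = {1,2}, Delta_2 = {3,4}
x             = [0.5, 0.3, 0.7, 0.2, 0.4]
x_swap_inside = [0.5, 0.7, 0.3, 0.2, 0.4]  # permutes within Delta_3: an isometry
x_swap_across = [0.3, 0.5, 0.7, 0.2, 0.4]  # mixes Delta_1 and Delta_3: changes the norm
```

Bisection suffices here because the modular is strictly decreasing in $\lambda$; the swap inside $\Delta_3$ leaves the norm unchanged, while the swap across blocks gives a genuinely different value.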
\medskip \noindent 5) Let, further, $\bar E = {\rm c} \oplus {\rm c} \oplus {\rm c}$ with the norm $||(x, y, z)||_{\bar E} = \sup_{i=1,\dots} \,\{(\xi_i^2 + \eta_i^2)^{1/2} + |\varsigma_i |\}$, where $x = (\xi_i)_{i=1}^{\infty} \in {\rm c}\,,\, y = (\eta_i)_{i=1}^{\infty} \in {\rm c}\,,\,z = (\varsigma_i)_{i=1}^{\infty} \in {\rm c}$. Consider the hyperplane $E = \{(x, y, z) \in \bar E\,|\,\lim_{i \to \infty} \eta_i = \lim_{i \to \infty} \varsigma_i\}$. Here we have ${\hat R} \approx {\rm c} \oplus {\rm c}_0\,,\,F \approx {\rm c}_0$. Thus, $ {\hat R} \dot{+} F$ is a hyperplane in $E$, and there is no contractive projection of $E$ onto ${\hat R}$ or onto $F$, in contrast to the case of ideal sequence spaces (cf. Propositions 5.6 and 6.5.$b,\,c$). \bigskip The following questions are directly related to the subject of this paper. \smallskip For a given Banach space $E$, consider the constant $$c(E) = \inf_{e\in E, \,e^* \in E^*,\,e^* (e) = 1}\{||s_{e, e^*}||_E\}\,\,.$$ It is clear that $1 \le c(E) \le 3$, and $c(E) = 1$ in the case when there exists an isometric reflexion in $E$. It is easily seen that $c(L_p )$ is a convex function of $p$ which takes the value $1$ only for $p=2$ and the value $3$ only for $p=1$ and $p=\infty$. For any given finite set of reflexions in $E$ one can find an equivalent norm $||\cdot||'$ on $E$ in such a way that the group generated by these reflexions is a subgroup of the isometry group of the new norm. In particular, $c(E, ||\cdot||') = 1$. Consider, further, the constant $$\varsigma(E) = \sup_{||\cdot||' \sim ||\cdot||_E} \{c(E, ||\cdot||')\}\,\,.$$ By the definition, $\varsigma (E) \in [1,\,3]$ is a numerical invariant of isomorphism. Is it nontrivial? Let $\varsigma (n) = \varsigma ({\bf R}^n) = \varsigma ({\rm l}^n_2)$. Let $M_n$ be the Minkowski compact of classes of isometric norms in ${\bf R}^n$, endowed with the Banach--Mazur distance.
Denote by $A_n$ the subset of $M_n$ which consists of the classes of norms having an isometric reflexion (or, equivalently, a hyperplane of symmetry). It is easy to show that $\log\varsigma (n)$ coincides with the radius of the metric factor space $M_n / A_n$ with respect to the distinguished point which corresponds to $A_n$. It is known that the radius of $M_2$ centred at the class of euclidean norms is $\log\sqrt{2}$ (F. Behrend, 1937; see [12, sect. 7] for this and for some further information). Thus, $\varsigma (2) \le \sqrt{2}$. \medskip \noindent {\bf 6.9.} {\it Problem.} Is it true that\\ $\varsigma (3) < 3$ ? \\ $\varsigma (n) < 3 $ for any $n$ ? \\ $\limsup_{n \to \infty} \varsigma (n) < 3$ ?\\ $\varsigma ({\rm l}_2) < 3$ ? \medskip \noindent If the answer to any of the above questions is ``yes'', which seems less plausible, then, of course, the exact value of the corresponding constant $\varsigma$ would be worthwhile to find. \bigskip \begin{center} {\LARGE References} \end{center} \noindent [1] Yu. Abramovich, M. Zaidenberg, {\em A rearrangement invariant space isometric to $L_p$ coincides with $L_p$}, Interaction between Functional Analysis, Harmonic Analysis, and Probability, N. Kalton, E. Saab eds., Lect. Notes in Pure Appl. Math. 175, Marcel Dekker, New York, 1995, 13--18. \noindent [2] J. Arazy, {\em Isometries of complex symmetric sequence spaces}, Math. Z., 188 (1985), 427--431. \noindent [3] S. Banach, {\em Th\'eorie des op\'erations lin\'eaires}, Hafner Publishing Co, N.Y. 1932. \noindent [4] J.J. Barry, {\em On the convergence of ordered sets of projections}, Proc. Amer. Math. Soc., 5 (1954), 313--314. \noindent [5] N. Bourbaki, {\em Groupes et alg\`ebres de Lie}, Ch. IV--VI, Hermann, Paris, 1968. \noindent [6] M. Sh. Braverman, E.M. Semenov, {\em Isometries of symmetric spaces}, Soviet Math. Doklady, 15 (1974), 1027--1030. \noindent [7] M.M. Day, {\em Normed linear spaces}, Springer, N.Y. e.a., 1973. \noindent [8] R.J. Fleming, J.E.
Jamison, {\em Isometries on certain Banach spaces}, J. London Math. Soc., 2 (1974), 121--127. \noindent [9] R. Godement, {\em Th\'eor\`emes taub\'eriens et th\'eorie spectrale}, Ann. Sci. Ecole Norm. Sup., (3) 64 (1947), 119--138. \noindent [10] Y. Gordon, D. R. Lewis, {\em Isometries of diagonally symmetric Banach spaces}, Israel J. of Mathem., 28 (1977), 45--67. \noindent [11] Y. Gordon, R. Loewy, {\em Uniqueness of $(\Delta)$ bases and isometries of Banach spaces}, Math. Ann., 241 (1979), 159--180. \noindent [12] B. Gr\"unbaum, {\em Measures of symmetry for convex sets}, Proc. Symp. Pure Math. VII, Convexity (1963), AMS, Providence, R.I., 233--270. \noindent [13] N.J. Kalton, B. Randrianantoanina, {\em Isometries on rearrangement-invariant spaces}, C. R. Acad. Sci. Paris, 316 (1993), 351--355. \noindent [14] N.J. Kalton, B. Randrianantoanina, {\em Surjective isometries on rearrangement-invariant spaces}, Quart. J. Math. Oxford, (2) 45 (1994), 301--327. \noindent [15] N. J. Kalton, G.W. Wood, {\em Orthonormal systems in Banach spaces and their applications}, Math. Proc. Cambr. Phil. Soc., 79 (1976), 493--510. \noindent [16] M. A. Krasnosel'skii, {\em On a spectral property of compact linear operators in the space of continuous functions}, Problems of the mathematical analysis of complex systems, 2, Voronezh (1968), 68--71 (Russian). \noindent [17] S. G. Krein, Ju. I. Petunin, E. M. Semenov, {\em Interpolation of linear operators}, Providence, R.I., 1982. \noindent [18] P.-K. Lin, {\em Elementary isometries of rearrangement--invariant spaces}, preprint, 1995, 1--22. \noindent [19] J. Lindenstrauss and L. Tzafriri, {\em Classical Banach Spaces, Vol. 1, 2}, Springer, 1977, 1979. \noindent [20] Yu.I. Lyubich, {\em On the boundary spectrum of a contraction in Minkowski spaces}, Sib. Mat. Zh., 11:2 (1970), 358--369. \noindent [21] Yu.I. Lyubich, L. N.
Vaserstein, {\em Isometric embeddings between classical Banach spaces, cubature formulas, and spherical designs}, Geometriae Dedicata, 47 (1993), 327--362. \noindent [22] B. Randrianantoanina, {\em Isometric classification of norms in rearrangement--invariant function spaces}, preprint, 1995, 1--15. \noindent [23] S. Rolewicz, {\em Metric Linear Spaces}, Polish Scientific Publishers, Warsaw, 1972. \noindent [24] A.I. Skorik, {\em Isometries of ideal coordinate spaces}, Uspekhi Mathem. Nauk, 31:2 (1976), 229--230 (Russian). \noindent [25] A.I. Skorik, {\em On isometries of a class of ideal coordinate spaces}, Teorija Funkziy, Funkz. Analyz Prilozh., 34 (1980), 120--131 (Russian). \noindent [26] A.I. Skorik, M.G. Zaidenberg, {\em Groups of isometries containing reflexions}, Function. Anal. Appl., 10 (1976), 322--323. \noindent [27] A.I. Skorik, M.G. Zaidenberg, {\em Groups of isometries containing reflexions}, preprint VINITI, DEP N\, 1638-78 (1978), 43p. (Russian; English transl.: {\em On isometric reflexions in Banach spaces}, Pr\'epublication de l'Institut Fourier des Math\'ematiques, 267, Grenoble 1994, 36p.). \noindent [28] J. Wermer, {\em The existence of invariant subspaces}, Duke Math. J., 19 (1952), 615--622. \noindent [29] M.G. Zaidenberg, {\em Groups of isometries of Orlicz spaces}, Soviet Math. Dokl., 17 (1976), No.2, 432--436. \noindent [30] M.G. Zaidenberg, {\em On the isometric classification of symmetric spaces}, Soviet Math. Dokl., 18 (1977), 636--640. \noindent [31] M.G. Zaidenberg, {\em Special representations of isometries of functional spaces}, Investigations on the theory of functions of several real variables, Yaroslavl' 1980 (Russian; English transl.: {\em A representation of isometries on function spaces}, Pr\'epublication de l'Institut Fourier des Math\'ematiques, 305, Grenoble 1995, 7p.). \bigskip \noindent M. Zaidenberg, A.
Skorik \noindent Institut Fourier des Math\'ematiques \noindent Universit\'e Grenoble I \noindent BP 74 \noindent 38402 Saint Martin d'H\`eres-c\'edex \noindent France \medskip \noindent E-mail: zaidenbe@puccini.ujf-grenoble.fr \end{document}
{ "timestamp": "1998-03-31T20:24:39", "yymm": "9512", "arxiv_id": "math/9512204", "language": "en", "url": "https://arxiv.org/abs/math/9512204", "abstract": "We obtain the following characterization of Hilbert spaces. Let $E$ be a Banach space whose unit sphere $S$ has a hyperplane of symmetry. Then $E$ is a Hilbert space iff any of the following two conditions is fulfilled: a) the isometry group ${\\rm Iso}\\, E$ of $E$ has a dense orbit in S; b) the identity component $G_0$ of the group ${\\rm Iso}\\, E$ endowed with the strong operator topology acts topologically irreducible on $E$. Some related results on infinite dimentional Coxeter groups generated by isometric reflexions are given which allow to analyse the structure of isometry groups containing sufficiently many reflexions.", "subjects": "Functional Analysis (math.FA)", "title": "On isometric reflexions in Banach spaces", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.981453438742759, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.708357357584196 }
https://arxiv.org/abs/1804.03177
Independence algebras, basis algebras and the distributivity condition
Stable basis algebras were introduced by Fountain and Gould and developed in a series of articles. They form a class of universal algebras, extending that of independence algebras. If a stable basis algebra $\mathbb{B}$ of finite rank satisfies the distributivity condition (a condition satisfied by all the previously known examples), it is a reduct of an independence algebra $\mathbb{A}$. Our first aim is to give an example of an independence algebra not satisfying the distributivity condition. Gould showed that if a stable basis algebra $\mathbb{B}$ with the distributivity condition has finite rank, then so does the independence algebra $\mathbb{A}$ of which it is a reduct, and in this case the endomorphism monoid End$(\mathbb{B})$ of $\mathbb{B}$ is a left order in the endomorphism monoid End$(\mathbb{A})$ of $\mathbb{A}$. We complete the picture by determining when End$(\mathbb{B})$ is a right, and hence a two-sided, order in End$(\mathbb{A})$. In fact (for rank at least 2), this happens precisely when every element of End$(\mathbb{A})$ can be written as $\alpha^\sharp\beta$ where $\alpha,\beta\in$ End$(\mathbb{B})$, $\alpha^\sharp$ is the inverse of $\alpha$ in a subgroup of End$(\mathbb{A})$ and $\alpha$ and $\beta$ have the same kernel. This is equivalent to End$(\mathbb{B})$ being a special kind of left order in End$(\mathbb{A})$ known as straight.
\section{Introduction and Preliminaries}\label{sec:intro} The second author introduced the study of the endomorphism monoids of universal algebras called {\em $v^*$-algebras}, which she named {\em independence algebras}. These algebras appear first in an article of Narkiewicz \cite{nar} and were inspired by Marczewski's study of notions of independence, initiated in \cite{mar} (see \cite{gratzer} and the survey article \cite{urb}). Such algebras may be defined via properties of the closure operator $\langle -\rangle$ which takes a subset of an algebra to the subalgebra it generates. In an independence algebra, $\langle -\rangle$ must satisfy the {\em exchange property}, which guarantees that we have a well behaved notion of {\em rank} for subalgebras and hence for endomorphisms, generalising that of the dimension of a vector space. Further, independence algebras are {\em relatively free}. Precise definitions and further details may be found in \cite{gould}. We remark that sets, vector spaces and free acts over any group are examples of independence algebras. A full classification, which we will draw upon for this article, is given by Urbanik in \cite{urb}. We denote the monoid of endomorphisms of an algebra $\A$ by $\mathop{\mathrm{End}}\nolimits (\A)$. The study of $\mathop{\mathrm{End}}\nolimits (\A)$ for an independence algebra $\A$ has flourished over the last twenty years \cite{araujo,cameronszabo,fountainandlewin,fountainandlewin2,gray}, providing the framework for understanding the common behaviours of several fundamental examples of monoids, including full transformation monoids and the multiplicative monoids of matrix rings over division rings. For example, if $\A$ is an independence algebra of finite rank $n$, then the set $\operatorname{Sing}(\A)$ of endomorphisms of rank strictly less than $n$ forms an idempotent generated ideal \cite{fountainandlewin}.
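For the simplest independence algebra, a finite set, this idempotent-generation statement is Howie's classical theorem that $\operatorname{Sing}(T_n)$ is generated by the idempotent transformations of rank $n-1$. The brute-force Python check below (our own illustration, not code from any of the cited sources) verifies this for $n=3$: the semigroup generated by the six rank-$2$ idempotents of $T_3$ is exactly the set of $21$ singular maps.

```python
from itertools import product

n = 3
all_maps = list(product(range(n), repeat=n))   # T_3: a map is a tuple f with f[i] = image of i

def compose(f, g):
    # (f o g)(i) = f(g(i))
    return tuple(f[g[i]] for i in range(n))

# idempotents of rank n-1: image has n-1 elements and f o f = f
gens = {f for f in all_maps if compose(f, f) == f and len(set(f)) == n - 1}

# close the generators under composition (on both sides) to get <gens>
closed, frontier = set(gens), set(gens)
while frontier:
    new = {compose(f, g) for f in closed for g in frontier}
    new |= {compose(g, f) for f in closed for g in frontier}
    frontier = new - closed
    closed |= new

singular = {f for f in all_maps if len(set(f)) < n}   # the non-bijective maps
# closed == singular: Sing(T_3) is generated by its rank-2 idempotents
```

The same brute-force check works for $n=4$ at modest cost; beyond that the search space grows as $n^n$ and one relies on the theorem rather than enumeration.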
We remark that idempotent generated semigroups are ubiquitous, since every (finite) semigroup embeds into a (finite) idempotent generated semigroup \cite{Howie66}. The study of idempotent generated semigroups has acquired significant momentum, due to recent advances (see, for example, \cite{gray:2012}) building upon Nambooripad's classical theory of free idempotent generated semigroups over biordered sets \cite{nambooripad:1979}. The endomorphism monoid of an independence algebra $\A$ is regular. Surprisingly, regularity of $\operatorname{End} (\A)$ is not necessary for the above results concerning idempotent generation. For example, the results of Laffey \cite{Laffey83} show that if $\A$ is a free module of finite rank $n$ over a Euclidean domain, then the set of non-identity idempotents of $\operatorname{End} (\A)$ generates the subsemigroup of endomorphisms of rank strictly less than $n$. Fountain and the second author introduced in \cite{basisi} a class of algebras called {\em stable basis algebras} that generalise free modules over Euclidean domains, in an attempt to put the results of Laffey, and later work of Fountain \cite{fountainintegermatrices} and Ruitenberg \cite{wim}, into a more general setting, an aim achieved in \cite{basisiii}. Stable basis algebras are in particular relatively free algebras in which the closure operator $\operatorname{PC}$ (pure closure) satisfies the exchange property. Certainly independence algebras are stable basis algebras. Finitely generated free left modules over {\color{black} left Ore} Bezout domains and finitely generated free left $\mathbb{T}$-acts over any cancellative monoid $\mathbb{T}$ such that finitely generated left ideals of $\mathbb{T}$ are principal, are examples of stable basis algebras. We recall that a {\em Bezout domain} is an integral domain (not necessarily commutative) in which all finitely generated left and right ideals are principal; clearly any Euclidean domain is a (left) Ore Bezout domain. 
As for independence algebras, rank is well defined for subalgebras and endomorphisms of basis algebras, where now the rank is defined via the operator $\operatorname{PC}$\footnote{In earlier articles, rank in a basis algebra was referred to as {\em PC-rank} but, as there is no ambiguity, we simply use the term {\em rank} here.}. We give requisite definitions as we proceed through this article. If $\A$ and $\B$ are algebras such that the universe (that is, the underlying set) $B$ of $\B$ is contained in the universe $A$ of $\A$, then $\B$ is a {\em reduct} of $\A$ if every basic operation of $\B$ is the restriction to $B$ of a basic operation of $\A$. Theorem 4.14 of \cite{gouldI} shows that if $\B$ is a stable basis algebra satisfying the distributivity condition, then $\B$ is a reduct of an independence algebra $\A$, having the same rank as $\B$. The distributivity condition is stated precisely in Section~\ref{sec:example}: essentially it says that unary operations distribute over basic $n$-ary operations, for $n\geq 2$. All previously mentioned examples of basis algebras and independence algebras satisfy the distributivity condition. The main aim of Section~\ref{sec:example} is to prove: \begin{result} Not all independence algebras satisfy the distributivity condition. \end{result} {\color{black} We achieve the above by giving a particular example of an $\mathbb{S}$-homogeneous independence algebra. Our argument is highly technical, for reasons that we explain in Section~\ref{sec:example}.} Classical ring theory tells us that if $\mathbb{R}$ is a left Ore domain with division ring of quotients {\color{black} $\mathbb{D}$}, then for any $n\in\N$ the $n\times n$ matrix ring $M_n(\mathbb{R})$ has ring of left quotients $M_n(\mathbb{D})$, that is, $M_n(\mathbb{R})$ is a left order in $M_n(\mathbb{D})$ in the sense of ring theory.
This means that every element of $M_n(\mathbb{D})$ can be written as $U^{-1}V$ where $U,V\in M_n(\mathbb{R})$, and every cancellable element of $M_n(\mathbb{R})$ has an inverse in $M_n(\mathbb{D})$. The endomorphism monoid of an arbitrary algebra, indeed of an arbitrary independence algebra $\A$, need not be a ring. We will therefore use the following notion of order, due originally to Fountain and Petrich \cite{fp}; here $a\s$ denotes the inverse of $a$ in any subgroup to which it belongs. \begin{defi}\label{defi:order} Let $\mathbb{S}$ be a subsemigroup of a semigroup $\mathbb{Q}$. Then $\mathbb{S}$ is a {\em left (right) order} in $\mathbb{Q}$ and $\mathbb{Q}$ is a {\em semigroup of left (right) quotients} of $\mathbb{S}$ if every $q\in Q$ can be written as $q=a\s b$ ($q=ba\s$) where $a,b\in S$, and every square-cancellable element of $S$ lies in a subgroup of $\mathbb{Q}$. We say that $\mathbb{S}$ is an {\em order} in $\mathbb{Q}$ and $\mathbb{Q}$ is a {\em semigroup of quotients of $\mathbb{S}$} if $\mathbb{S}$ is both a left and a right order in $\mathbb{Q}$. In case $\mathbb{S}$ is a left order in $\mathbb{Q}$ and $a,b$ can be chosen above such that $a$ and $b$ generate the same principal right ideals in $\mathbb{Q}$, then we say that $\mathbb{S}$ is {\em a straight left order} in $\mathbb{Q}$, with corresponding definitions for {\em straight right order} and {\em straight order}. We do not need here the precise definition of being square-cancellable, referring the reader to \cite{fountaingouldorders}; it is a strong necessary condition for an element to lie in a subgroup of an oversemigroup. \end{defi}
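Definition~\ref{defi:order} may be illustrated by a familiar commutative example (not needed in the sequel): the multiplicative monoid $\mathbb{Z}^*=\mathbb{Z}\setminus\{0\}$ is an order in the group $\mathbb{Q}^*=\mathbb{Q}\setminus\{0\}$, since every non-zero rational may be written as
\[q=a\s b=ba\s \qquad (a,b\in \mathbb{Z}^*),\]
and every element of $\mathbb{Z}^*$ (being cancellable, hence square-cancellable) lies in the subgroup $\mathbb{Q}^*$ itself. Moreover, as every element of a group generates the whole group as a principal right ideal, $\mathbb{Z}^*$ is trivially a straight order in $\mathbb{Q}^*$.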
From Theorems 3.4 and 3.11 of \cite{fountaingouldorders}, if $\mathbb{R}$ is a subring of a ring $\mathbb{Q}$ with identity, and provided $\mathbb{Q}$ satisfies some weak conditions, certainly satisfied by matrix rings over division rings, then $\mathbb{R}$ is a left order in $\mathbb{Q}$ in the sense of ring theory if and only if it is a left order in the sense of Definition~\ref{defi:order}. Theorem 5.3 of \cite{gouldI} states that if $\B$ is a finite rank stable basis algebra satisfying the distributivity condition and $\A$ is the constructed independence algebra of which it is a reduct, then $\operatorname{End}(\B)$ is a left order in $\operatorname{End}(\A)$. Moreover, $\operatorname{End}(\B)$ is a straight left order in $\operatorname{End}(\A)$ if and only if the monoid $\mathbb{T}$ of non-constant unary term operations of $\B$ acts by bijections on the constant subalgebra and (in the case that the rank of $\B$ is at least $2$) is both left and right Ore. The second aim of this paper, achieved in Section~\ref{sec:right}, is to complete the picture by proving a series of results which give the following: \begin{result} Let $\B$ be a stable basis algebra of finite rank $n\geq 2$ satisfying the distributivity condition and let $\A$ be the independence algebra from \cite{gouldI} of which it is a reduct, so that $\operatorname{End}(\B)$ is a left order in $\operatorname{End}(\A)$. Then $\operatorname{End}(\B)$ is an order in $\operatorname{End}(\A)$ if and only if it is a straight left order in $\operatorname{End}(\A)$. \end{result} We remark that, on the surface, being a straight left order has nothing to do with being a (two-sided) order. {\color{black} For example, any left Ore cancellative monoid $\mathbb{S}$ is a straight left order in a group $\mathbb{G}$, but will not be a right order unless $\mathbb{S}$ is also right Ore. } We make some remarks on particular examples, including free modules over domains, at the end of Section~\ref{sec:right}.
With $\B$ and $\A$ as in the above theorem, $\operatorname{End}(\B)$ sits inside $\operatorname{End}(\A)$ in a particularly nice way, known as being {\em fully stratified} (see Section~\ref{sec:right}). In Section~\ref{sec:FS} we briefly address the question of an infinite rank stable basis algebra $\B$. Fountain and the second author showed, by directly checking a list of criteria from \cite{gould ab}, that the semigroup $\operatorname{End}_f(\B)$ of endomorphisms of finite rank is a fully stratified straight left order (without specifying the semigroup of left quotients). However, they were assuming that $\operatorname{End}(\B)$ is {\em idempotent connected}, which followed from the incorrect Proposition II 2.6 of \cite{mvl}. Without assuming that $\operatorname{End}(\B)$ is idempotent connected, we have a longer list of conditions to check. We restrict ourselves to a special case in order to do so, proving: \begin{result} Let $\B$ be a stable basis algebra that satisfies the distributivity condition, has no constants, and is such that $\mathbb{T}$ is commutative. Then $\mathop{\mathrm{End}}\nolimits_f(\B)$ is a fully stratified straight left order. \end{result} We conclude the paper in Section~\ref{sec:open} with a number of open questions. {\color{black} We presume a passing familiarity with the notation of Universal Algebra but, other than this, we give all necessary background for both Universal Algebra and Semigroup Theory as we proceed through the paper;} further specialist details may be found in \cite{basisi,basisii,basisiii,gould,gouldI,urb}. For standard notions from semigroup theory, we refer the reader to \cite{cp} and \cite{howie} and for universal algebra to \cite{mckenzie}. \section{The distributivity condition for independence algebras}\label{sec:example} If $\B$ is a stable basis algebra, then the monoid of non-constant unary term operations will be denoted by $\mathbb{T}$.
It was shown in \cite{basisii} that $\mathbb{T}$ is {\em left Ore} (also known as {\em right reversible}), which means that for any $a,b\in T$ there exist $u,v\in T$ such that $ua=vb$. We quickly repeat some definitions from clone theory. An (abstract) {\em clone} on a set $A$ is a set of non-nullary operations on $A$ that contains all projections and is closed under composition. Given a set $W$ of non-nullary operations on $A$, the clone {\em generated} by $W$ is the smallest clone containing $W$. The {\em clone of an algebra} $\A$ with underlying set $A$ is the clone generated by all non-nullary basic operations of $\A$. An (abstract or algebraic) {\em extended clone} is defined correspondingly by removing the restriction on nullary operations. \begin{defi}\label{def:dist} A basis algebra $\B$ satisfies the {\em distributivity condition} if the clone of $\B$ contains a generating set $W$ of operations such that for all $a\in T$ and $n$-ary operations $t \in W$, where $n\geq 2$, we have \[a(t(x_1,\hdots, x_n))=t(a(x_1),\hdots, a(x_n)).\] \end{defi} Note that Definition~\ref{def:dist} is stated here more precisely than in \cite{gouldI}, since to show $\B$ does {\em not} satisfy the distributivity condition, we wish to show it is impossible to choose {\em any} generating set for the clone that witnesses this. Indeed, we see in Subsection~\ref{subs:lin} an example where the most natural choice of generating set does not witness the condition in Definition~\ref{def:dist}, but we can easily find another that does. Certainly free modules over rings, and acts over monoids, hence our canonical examples of stable basis algebras, satisfy this condition. We have observed that all independence algebras are stable basis algebras. Independence algebras are essentially $v^*$-algebras, which were completely determined (up to clone equivalence) in the 1960's; we refer the reader to the survey article of Urbanik \cite{urb} for the details. 
In the beginning of Section 4 of \cite{gouldI}, it is claimed that all independence algebras can be shown to satisfy the distributivity condition, with the possible exceptions of the $\mathbb{S}$-{\em homogeneous} algebras or $\mathbb{Q}$-{\em homogeneous} algebras, where $\mathbb{S}$ is a monoid and $\mathbb{Q}$ a quasifield. In the following, we will prove this claim, and address the remaining two types of independence algebras. In particular, we show that the distributivity condition does not necessarily hold for $\mathbb{S}$-homogeneous independence algebras, thereby establishing {\bf Result 1}. The following subsections consider the various classification types from \cite{urb}, giving full definitions of each type. We will first address a technical difference between independence algebras and $v^*$-algebras, and how it affects this classification. The reader might want to note that the next paragraphs deal only with this technicality, and can be skipped without affecting the understanding of our results. The issue at hand originates in the way that $\langle \emptyset \rangle$, the subuniverse generated by the empty set, is defined. In \cite{urb}, this is the set of all elements that are images of constant clone operations, while in the now prevailing definition, this is the subuniverse generated by the images of nullary operations (in both cases, if the algebra has no operations of the relevant type, we set $\langle \emptyset \rangle=\emptyset$). A consequence of the definition from \cite{urb} is that nullary operations can effectively be ignored for classification purposes. Hence the results in \cite{urb} amount to a classification up to the clone of the algebra. If an (abstract) clone $C$ on a set has constant functions, then there are algebras with and without nullary operations whose (algebraic) clone is $C$, and their status as a $v^*$-algebra is not affected by this difference.
For an independence algebra $\A$ with at least two elements one can show that $\langle \emptyset \rangle$ is exactly the subalgebra $[\emptyset]$ consisting of the images of non-nullary constant clone operations, so that $\A$ is a $v^*$-algebra. On the other hand, for a (non-trivial) $v^*$-algebra $\B$ to be an independence algebra the extended clone needs to include nullary operations for the elements of $[\emptyset]$. The classification of \cite{urb} will be used with this understanding. For simplicity, we will not explicitly list any additional nullary operations below. \subsection{All independence algebras of rank $0$ have the distributivity property} Here all elements are images of constant clone functions. It follows that $\T$ only contains the identity. The distributivity property now follows trivially. \subsection{All linear independence algebras have the distributivity property}\label{subs:lin} Let $D$ be a division ring, $A$ the underlying set of a linear space over $D$, and $A_0\subseteq A$ the underlying set of a subspace. The linear independence algebra $\A$ given by $(D,A,A_0)$ has underlying set $A$ and its basic operations are all operations of the form \[f(x_1,\dots,x_n)=\sum_{i=1}^{n} \, \lambda_i x_i +a,\] for each $1\le n$, $\lambda_i \in D, a\in A_0$. Clearly, every clone function of $\A$ is basic. For $\lambda \in D, a \in A_0$, let $f_{\lambda, a}$ be given by $f_{\lambda, a}(x)=\lambda x +a$. Clearly, all $f_{\lambda, a}$ are unary operations in $\A$, and in fact the monoid $\T$ contains exactly the functions $f_{\lambda, a}$ with $\lambda \ne 0$. Moreover let $g(x_1,x_2,x_3)=x_1-x_2+x_3$, which is also a basic operation of $\A$. We claim that $W=\{g, f_{\lambda,a}: \lambda \in D, a \in A_0\}$ witnesses the distributivity condition. Note that $x+y=g(x,f_{0,0}(x),y)$. As $+$ and the $f_{\lambda,a}$ clearly generate the clone of $\A$, so does $W$.
Finally, for all $\lambda \ne 0$ and $a\in A_0$, $$f_{\lambda,a}(g(x_1,x_2,x_3))=\lambda x_1 -\lambda x_2 +\lambda x_3 +a$$ $$= \lambda x_1+a -\lambda x_2-a +\lambda x_3 +a =g(f_{\lambda,a}(x_1),f_{\lambda,a}(x_2),f_{\lambda,a}(x_3)),$$ and the distributivity condition holds. \medskip We remark that, if $A_0 \ne \{0\}$, then a (natural) set of generators involving $+$ does not witness the distributivity condition. For if $a \in A_0 \setminus\{0\}$, then $$f_{1,a}(+(0,0))=a \ne a+a = +(f_{1,a}(0),f_{1,a}(0)).$$ \subsection{All affine independence algebras have the distributivity property} Let $D$ be a division ring, $A$ the underlying set of a linear space over $D$, and $A_0\subseteq A$ the underlying set of a subspace. The affine independence algebra $\A$ given by $(D,A,A_0)$ has underlying set $A$ and its basic operations are all operations of the form $$f(x_1,\dots,x_n)=\sum_{i=1}^{n} \, \lambda_i x_i +a,$$ for each $1\le n$, $a \in A_0, \lambda_i \in D$, such that $\Sigma_i \lambda_i=1$. Clearly, every clone function of $\A$ is basic, and $\T$ consists of the functions $f_b$ with $f_b(x)=x+b$, for all $b \in A_0$. We can use the set of all clone functions as a generating set $W$. It witnesses the distributivity property since, for all $n\ge 1$, $a, b \in A_0$ and $\lambda_1,\dots,\lambda_n \in D$ with $\Sigma_i \lambda_i=1$, $$\Sigma_i \lambda_i f_b(x_i) +a=\Sigma_i \lambda_i x_i +\Sigma_i \lambda_i b +a=\Sigma_i \lambda_i x_i + b +a = f_b(\Sigma_i \lambda_i x_i+a).$$ \subsection{The exceptional independence algebra has the distributivity property} The exceptional independence algebra $\A$ can be described as the algebra on a $4$-element set $A$ with a unary operation $i$ and a ternary operation $q$. Here $i$ is a product of two disjoint transpositions, and $q(x_1,x_2,x_3)$ is either the unique element not among its arguments (if all are different), or the argument that appears at least twice.
It is straightforward to check that $T=\{i,1_A\}$, where $1_A$ is the identity map on $A$, and that $W=\{i,q\}$ witnesses the distributivity property: since $q(x_1,x_2,x_3)$ is defined purely in terms of which of its arguments coincide, every bijection of $A$, and in particular $i$, distributes over $q$. \subsection{All group action independence algebras have the distributivity property} Let $A_0\subseteq A$ be sets, and $G$ a group acting on $A$, such that, for all non-identity $g \in G$, the fixed points of $g$ all lie in $A_0$, and $g(A_0) \subseteq A_0$. The group action independence algebra $\A$ corresponding to $(A,A_0,G)$ has underlying set $A$ and operations $$ f_{g,n,i}(x_1,\dots,x_n)= g(x_i), \quad\quad f_{a,n}(x_1,\dots,x_n)=a,$$ for all $1\le n, 1\le i\le n, g \in G, a \in A_0$. Clearly, all clone operations are essentially unary. It follows that we may generate the clone with unary operations alone. Hence the distributivity condition holds trivially. \subsection{All $\mathbb{Q}$-{homogeneous} independence algebras have the distributivity property} A {\em quasifield} $\mathbb{Q}$ is a set $Q$ with at least two elements, together with two binary operations denoted by juxtaposition, and $-$, such that the multiplicative operation has a zero $0$, the non-zero elements form a group under multiplication, and four further axioms hold (see 4.3 of \cite{urb}). In a $\mathbb{Q}$-{homogeneous} independence algebra $\A$ over a quasifield $\mathbb{Q}$, all basic $k$-ary operations $f$ satisfy $$f(a-ba_1,\dots, a-ba_k)=a-bf(a_1,\dots,a_k),$$ for all $a,b,a_1,\dots, a_k\in Q$, where subtraction and multiplication are the operations from $\mathbb{Q}$. Setting $b=0$ and $a_i=a$ for $1\leq i\leq k$, we obtain that $f(a,\dots,a)=a$. An inductive argument gives that the identity is the only unary clone operation of $\A$. The distributivity property now follows trivially. \subsection{Not all $\mathbb{S}$-homogeneous independence algebras have the distributivity condition} Let $\mathbb{S}$ be a monoid such that all the non-invertible elements are left zeros. A good example is a group, which is exactly what we will take below.
An $n$-ary operation on $S$ is said to be $\mathbb{S}$-{\em homogeneous} if for all $s,s_1,\hdots, s_n\in S$ we have \[f(s_1,\hdots, s_n)s=f(s_1s,\hdots, s_ns).\] Since $\mathbb{S}$ is a monoid, the operations $f(x)=sx$ ($s\in S$) are the only unary $\mathbb{S}$-homogeneous operations. An $\mathbb{S}$-homogeneous independence algebra has underlying set $S$ and the basic operations form a set $V$ of $\mathbb{S}$-homogeneous operations on $S$ containing all the unary $\mathbb{S}$-homogeneous operations. The aim of this subsection is to show that with a careful choice of $\mathbb{S}$ and $V$, the resulting independence algebra $\A$ does not satisfy the distributivity condition. Let $Z=\{z_1,z_2,\dots\}$ be a countably infinite set and let $E=\{z_2,z_4,\dots\}$. Let $\F=\F(Z)$ be the free group over $Z$ with identity $1$ and underlying set $FG(Z)$, which we denote for brevity by $F$. In the following, concatenation will always refer to the group operation of $\F$. Let $F_+\subseteq F$ be the set of all non-identity elements of $F$ whose normal form does not include any negative exponents, so that $F_+$ is the underlying set of the copy of the free semigroup $\mathbb{FS}$ on $Z$ sitting inside $\F$. Since $F$ and $E$ are both countably infinite we may choose a function $h:F \to E$, where $w\mapsto h_w$, satisfying the following conditions: \[\begin{array}{llll} (h1)& h_{z_1z_2^{-1}}=z_6 &\qquad (h2)& h_{z_3z_2^{-1}}=z_8\\ (h3)& h_{z_1z_4^{-1}}=z_{10} &\qquad (h4)& h\mbox{ is injective.}\end{array}\] Now let $\A=\langle F; \{\nu_c^\A\}_{c \in F}, g^\A\rangle$ where: \begin{enumerate} \item for each $c \in F$, $\nu_c^\A$ is the unary operation given by $\nu_c^\A(w)=cw$ (i.e. $\nu_c^\A$ acts as left translation by the element $c$ in the group $\F$); \item $g^\A$ is the binary operation given by $g^\A(w_1,w_2)=h_{w_1w_2^{-1}}w_2$. \end{enumerate} \begin{lemma} The algebra $\A$ is a monoid independence algebra with underlying monoid $\F$.
\end{lemma} \begin{proof} We first remark that as $\F$ is a group, it has no non-invertible elements, and so is a suitable monoid from which to build a monoid independence algebra. We have remarked that $F$-homogeneous unary operations are left translations and, by construction, all left translations $\nu_c^\A, c\in F$ are basic in $\A$. Finally, for all $w_1,w_2,w' \in F$, we have $$g^\A(w_1w',w_2w')=h_{w_1w'(w_2w')^{-1}}w_2w'=h_{w_1w_2^{-1}}w_2w'=g^\A(w_1,w_2)w',$$ so that $g^\A$ is $F$-homogeneous and hence $\A$ is a monoid independence algebra. \end{proof} In order to show that $\A$ does not have the distributivity property, we need to examine the clone of $\A$. We let $L=\{\{\nu_c\}_{c \in F}, g\}$ be the language of $\A$, $X=\{x_1,x_2,\dots\}$ a countably infinite set of variables (which we may think of as being linearly ordered according to their subscripts), and $T$ the set of terms in the language $L$ over $X$. The elements of the clone are obtained from the interpretation in $\A$ of elements in $T$ (and their compositions with projections). For $i,j \in \mathbb{N}$ with $i\le j$ let $\pi^j_i:F^j\to F^i$ be given by $(w_1,\dots,w_j)\mapsto (w_1,\dots,w_i)$, i.e. $\pi^j_i$ is the projection to the first $i$ coordinates. For each term $t \in T$, let $a(t)$ be the largest $n$ such that the variable $x_n$ occurs in $t$. We define a function $\bar t:F^{a(t)} \to F$ by structural induction, as we now describe. We remark that $\bar t$ will essentially be the term function associated with $t$ and usually denoted by $t^\A$. However, our definition of $\bar t$ is needed due to some minor technicalities involving the arities of term functions. For each $i\in\mathbb{N}$ set $\bar x_i(w_1,\dots,w_i)=w_i$. If $t=\nu_c(s)$ for some $s$, then noting that $a(t)=a(s)$, let $\bar t=\nu_c^\A \circ \bar s$.
Finally, for $t=g(t_1,t_2)$, we set $$\bar t=g^\A \circ \left(\bar t_1 \circ \pi^{a(t)}_{a(t_1)},\bar t_2 \circ \pi^{a(t)}_{a(t_2)}\right),$$ which is well-defined, as $a(t) \ge a(t_1), a(t_2)$. With some abuse of terminology, we will refer to all functions of the form $\bar t$ as term functions. Given a term $t \in T$, let $\nu(t)$ be the set of $c\in F$ such that $\nu_c$ appears in $t$. We define the \emph{content} of $t$, denoted $C_t$, by \[C_t=\bigcup_{c\in \nu(t)}\{ z_i\in Z: z_i \mbox{ appears in the normal form of }c\}.\] Note that $C_t\subseteq Z$ and is finite. For each $t \in T$, we define $t^* \in \mathbb{N}$ by structural induction as follows: if $t=x_i$ for some $i$, then $t^*=i$, if $t=\nu_c(t_1)$ for some $t_1 \in T$, then $t^*={t^*_1}$, and if $t=g(t_1,t_2)$ for some $t_1,t_2 \in T$, we set $t^*={t^*_2}$. It is easy to see that $t^*$ is the index of the variable that appears syntactically in the ``right-most'' position of $t$ and clearly, $t^*\leq a(t)$. The following lemma characterises the behaviour of the functions of the form $\bar t$, by connecting them to the group structure on the underlying set $F$ of $\mathbb{A}$. This will be essential to our later arguments. \begin{lemma}\label{lm:structure} Let $t\in T$ be such that $a(t)=n$. Then one of the following holds: \begin{enumerate} \item\label{form1} there exists $w \in FG(E \cup C_t)$ such that for all $\vec y \in F^n$, $$\bar t(\vec y)= w y_{t^*};$$ \item\label{form2} there exists a function $f: F^n \to FG(E \cup C_t)$ such that for all $\vec y \in F^n$, $$\bar t(\vec y)= f(\vec y) y_{t^*}.$$ In addition, there are sequences $z_{j_1}, z_{j_2},\ldots$ in $Z$, and $\vec \mu_1, \vec \mu_2,\ldots$ in $(F_+)^n$ such that \begin{enumerate} \item $j_i \ne j_{i'}$ for $i \ne i'$, and \item $z_{j_i}$ appears in the normal form of $f(\vec \mu_i)$ with a positive exponent.
\end{enumerate} \end{enumerate} \end{lemma} \begin{proof} We remark that if $f$ is as in Condition~(\ref{form2}), then, in particular, the image of $f$ is infinite; {\color{black} indeed, the image of $f$ restricted to $(F_+)^n$ is infinite}. We prove the lemma by induction on the structure of $t$. If $t=x_i$ for some $i$, then $n=a(t)=t^*=i$ and $\bar t(\vec y)= \bar{x}_i(y_1,\hdots, y_i)=y_i=1y_{t^*}$ and the result holds with $w=1$ in Condition~(\ref{form1}). Assume for induction that the result holds for any proper subterm of $t$. \noindent{\em Case (i)} Suppose first that $t=\nu_\sigma (t_1)$ for some term $t_1$, so that $t^*=t_1^*$. Then $n=a(t)=a(t_1)$ and by induction, $ t_1$ satisfies the conditions of the lemma. \noindent{\em Case (i)(a)} If $\bar t_1(\vec y)=w y_{t^*_1}$ for some {\color{black} $w \in FG(E\cup C_{t_1})$,} then $\bar t(\vec y)= \sigma \bar t_1(\vec y)= \sigma w y_{t^*_1}=\sigma w y_{t^*}$. Moreover, $\sigma w \in FG(E \cup C_t)$, as $C_t = C_{t_1} \cup C_\sigma$, where $ C_\sigma$ is the set of generators that appear in the normal form of $\sigma$. Hence $\bar t(\vec y)$ satisfies Condition~(\ref{form1}). \noindent{\em Case (i)(b)} Now suppose that Condition~(\ref{form2}) holds for $t_1$, so there exists {\color{black} $f_1: F^n \to FG(E \cup C_{t_1})$} and sequences $(z_{j_i})_{i\in\mathbb{N}}$ and $ (\vec \mu_i)_{i\in\mathbb{N}}$ as in (2), such that $\bar t_1(\vec y)= f_1(\vec y) y_{t^*_1}$. Then $$\bar t(\vec y)= \sigma \bar t_1(\vec y)= \sigma f_1(\vec y) y_{t^*_1}=\sigma f_1(\vec y) y_{t^*}.$$ {\color{black} For each $i\in \mathbb{N}$ let $w_i=f_1(\vec \mu_i)$ and put $ \sigma w_i =:\tau_i$. The normal form of $w_i$ contains $z_{j_i}$ with a positive exponent, so the normal form of $\tau_i$ will do so as well, unless $z_{j_i}$ cancels against a $z_{j_i}^{-1}$. But, considering $\sigma$, there are only finitely many indices $i$ for which $z_{j_i}^{-1}$ appears in the normal form of $\sigma$.
It follows that for infinitely many values $i$, the element $ \tau_i\in F$ contains $z_{j_i}$ in its normal form with a positive exponent. } Define {\color{black}$f: F^n \to F$} by $f=\nu_\sigma^\A \circ f_1$. By the assumption on $f_1$, and as $C_t=C_{t_1} \cup C_\sigma$, we have $f:F^n\rightarrow FG(E \cup C_t)$. We obtain that $\bar t(\vec y)= f(\vec y)y_{t^*}$. It is easy to see that $\bar t$ satisfies Condition~(\ref{form2}), with the sequences $(z_{j_i})_{i\in\mathbb{N}}$ and $(\vec \mu_i)_{i\in\mathbb{N}}$ obtained from the corresponding sequences for $\bar t_1$ by removing finitely many elements. \noindent{\em Case (ii)} We now consider the case that $t=g(t_1,t_2)$ for some terms $t_1,t_2$. By induction the lemma holds for $t_1$ and $t_2$. Let $n_1=a(t_1), n_2=a(t_2)$, so that $n$ is the maximum of $n_1$ and $n_2$, and $t^*=t_2^*$. {\color{black} Notice that $C_t=C_{t_1}\cup C_{t_2}$.} \noindent{\em Case (ii)(a)} Assume first that Condition~(\ref{form2}) holds for $t_2$, so that $$\bar t_2\left(\pi^{n}_{n_2}(\vec y)\right)=f_2\left(\pi^{n}_{n_2}(\vec y)\right) y_{t^*_2}$$ for some $f_2:F^{n_2}\rightarrow FG(E\cup C_{t_2})$, such that there are sequences $(z_{j_i})_{i\in\mathbb{N}}$ and $(\vec \mu_i)_{i\in\mathbb{N}}$ as in (\ref{form2}). Since $t^*=t_2^*$ we have $$\bar t(\vec y)= h_{\bar t_1\left(\pi^{n}_{n_1}(\vec y)\right)\left(\bar t_2\left(\pi^{n}_{n_2}(\vec y)\right)\right)^{-1}} \bar t_2\left(\pi^{n}_{n_2}(\vec y)\right)= h_{\bar t_1\left(\pi^{n}_{n_1}(\vec y)\right)\left(\bar t_2\left(\pi^{n}_{n_2}(\vec y)\right)\right)^{-1}} f_2\left(\pi^{n}_{n_2}(\vec y)\right) y_{t^*}.$$ Define $f:F^n\to F$ by $$f(\vec y)=h_{\bar t_1\left(\pi^{n}_{n_1}(\vec y)\right)\left(\bar t_2\left(\pi^{n}_{n_2}(\vec y)\right)\right)^{-1}} f_2\left(\pi^{n}_{n_2}(\vec y)\right).$$ We have that $\bar t (\vec y)=f(\vec y) y_{t^*}$, as required. Moreover, $f:F^n\rightarrow FG(E \cup C_t)$, by the conditions on $f_2$ and since the image of $h$ lies in $E$. 
Let $\vec\mu_i'\in F_+^n$ be obtained by extending $\vec\mu_i$ to arity $n$ with $n-n_2$ arbitrary elements from $F_+$. By Condition (2) for $t_2$, we have that $z_{j_i}$ appears in the normal form of $f_2(\vec \mu_i)=w_i$ with a positive exponent. Now $$f(\vec \mu_i')= h_{\bar t_1\left(\pi^{n}_{n_1}(\vec \mu_i')\right)\left(\bar t_2(\vec \mu_i)\right)^{-1}}w_i.$$ By definition of $h$, the first factor is just an element of $E$, so in particular an element of $F_+$. It follows that the generator $z_{j_i}$ in $w_i$ cannot cancel, and hence appears in the normal form of $f(\vec \mu_i')$. Thus $\bar t$ satisfies Condition (\ref{form2}) with $f$ and the sequences $(\vec \mu_i')_{i\in\mathbb{N}}$ and $ (z_{j_i})_{i\in\mathbb{N}}$. \noindent{\em Case (ii)(b)} For our final case we assume that $\bar t_2\left(\pi^n_{n_2}(\vec y)\right) = w_2 y_{t^*_2}$, for some $w_2 \in FG(E \cup C_{t_2})$. We make four further case distinctions. \begin{enumerate} \item $\bar t_1$ satisfies Condition (\ref{form1}) and $t^*_1=t^*_2$. {\color{black} We have for $\vec u\in F^{n_1}$ that $\bar t_1(\vec u)=w_1u_{t_1^*}$ for some $w_1\in FG(E\cup C_{t_1})$, and then for $\vec y\in F^n$ we see that $\bar t_1\left(\pi^n_{n_1}(\vec y)\right)= w_1 y_{t^*_1}$, and \[\bar t(\vec y)= h_{\bar t_1\left(\pi^n_{n_1}(\vec y)\right) \left(\bar t_2\left(\pi^n_{n_2}(\vec y)\right)\right)^{-1}}\bar t_2\left(\pi^n_{n_2}(\vec y)\right)= h_{w_1y_{t_1^*}(w_2y_{t_2^*})^{-1}} w_2 y_{t^*_2}= h_{w_1w_2^{-1}} w_2 y_{t^*}\] as $t^*_1=t^*_2$. Thus $\bar t$ also satisfies Condition (\ref{form1}), as $h_{w_1w_2^{-1}} w_2 \in FG(E \cup C_t)$.} \item $\bar t_1$ satisfies Condition~(\ref{form2}) with respect to {\color{black} $f_1:F^{n_1}\rightarrow FG(E\cup C_{t_1})$} and $t^*_1=t^*_2$. In this case $$\bar t(\vec y)=h_{f_1\left(\pi^n_{n_1}(\vec y)\right)w_2^{-1}}w_2 y_{t^*}.$$ We claim that $\bar t$ satisfies Condition (\ref{form2}).
Let {\color{black} $f$} be given by $$f(\vec y)=h_{f_1\left(\pi^n_{n_1}(\vec y)\right)w_2^{-1}}w_2.$$ Then $f:F^n\rightarrow FG(E\cup C_t)$ by the same argument as above, so it remains to construct appropriate sequences $(\vec \mu_i)_{i\in\mathbb{N}}$ and $(z_{j_i})_{i\in\mathbb{N}}$. By the remark at the beginning of this proof, $f_1\left(\pi^n_{n_1}\left(F_+^{n}\right)\right)$ is infinite and hence so is $f_1\left(\pi^n_{n_1}\left(F_+^{n}\right)\right)w_2^{-1}$. The function $h$ is injective and maps into $E$, so it follows that $h_{f_1(\pi^n_{n_1} (\vec x))w_2^{-1}}$ takes on infinitely many values $z_{j_i}$ in $E$ as $\vec x$ runs over $F^{n}_+$. Only finitely many of these values can cancel against a generator in the normal form of $w_2$. The existence of $(\vec \mu_i)_{i\in\mathbb{N}}$ and $(z_{j_i})_{i\in\mathbb{N}}$ follows. \item $\bar t_1$ satisfies Condition (\ref{form1}) and $t_1^* \ne t_2^*$. In this case we have that $\bar t_1(\vec y)= w_1 y_{t_1^*}$ for some $w_1 \in FG(E \cup C_{t_1})$, for all $\vec y \in F^{n_1}$, and thus $$\bar t(\vec y)=h_{w_1y_{t_1^*} (y_{t_2^*})^{-1} w_2^{-1}} w_2 y_{t^*}.$$ {\color{black} Consider the set $P\subset Z^2$ of pairs $(u_{1},u_{2})$ for which $u_{1} \ne u_{2}$, and neither $u_{1},u_{2}$ nor their inverses appear in the normal forms of $w_1$ or $ w_2$; clearly $P$ is infinite. For any $(u_1,u_2)\in P$, the normal form of $w_1 u_{1} u_{2}^{-1} w_2^{-1}$ contains the subexpression $u_1u_2^{-1}$, and these are the only occurrences of $u_1$ and $u_2$ in the normal form. It follows that $w_1 u_{1} u_{2}^{-1} w_2^{-1}$ takes distinct, and hence infinitely many, values in $F$ as $(u_{1},u_{2})$ runs through $P$. As $h$ is injective, $h_{w_1 u_{1} u_{2}^{-1} w_2^{-1}}$ also takes on infinitely many values from $E$ as $(u_{1},u_{2})$ runs through $P$. Only finitely many of those values can cancel against generators from the normal form of $w_2$.
Removing the corresponding pairs from $P$ we see that $\bar t$ satisfies Condition (\ref{form2}) with respect to $f:F^{n}\rightarrow FG(E\cup C_{t})$, where $$f(\vec y)= h_{w_1y_{t^*_1} y_{t^*_2}^{-1} w_2^{-1}} w_2,$$ with the $\vec \mu_i\in F_+^n$ being chosen so that $\vec \mu_i(t^*_1)=u_{1}^{(i)}$, $\vec \mu_i(t^*_2)=u_{2}^{(i)}$ and with arbitrary elements of $F_+$ in all other coordinates, where $(u_{1}^{(i)}, u_{2}^{(i)})$ runs over a cofinite subset of $P$ and $\left(z_{j_i}\right)_{i\in\mathbb{N}}=(h_{w_1 u_{1}^{(i)} (u_{2}^{(i)})^{-1} w_2^{-1}})_{i\in\mathbb{N}}.$} \item $\bar t_1$ satisfies Condition (\ref{form2}) with respect to {\color{black} $f_1:F^{n_1}\rightarrow FG(E\cup C_{t_1})$, } and $t^*_1\neq t^*_2$. This case is similar to the previous one. We have that $$\bar t(\vec y)=h_{f_1\left(\pi_{n_1}^n(\vec y)\right)y_{t^*_1} y_{t^*_2}^{-1} w_2^{-1}} w_2 y_{t^*}.$$ {\color{black}Let $P\subset Z^2$ be the set of pairs $(u_{1},u_{2})$ for which $u_{1},u_{2} \notin E\cup C_t$, $u_{1} \neq u_{2}$ and neither $u_{1}$, $u_{2}$ nor their inverses appear in the normal form of $w_2$; clearly $P$ is infinite. If $\vec z \in F^n_+$ and $(u_{1},u_{2}) \in P$, then in the expression $$f_1\left(\pi_{n_1}^n(\vec z)\right) u_{1} u_{2}^{-1} w_2^{-1},$$ $u_{1}$ and $u_{2}^{-1}$ cannot cancel against any generators from the normal forms of $w_2^{-1}$ and $f_1\left(\pi_{n_1}^n(\vec z)\right)$, in the latter case because $f_1$ maps into $FG(E\cup C_t)$.} Arguing as previously, we see that $\bar t$ satisfies Condition (\ref{form2}). \end{enumerate} By structural induction, the lemma holds for all terms $t\in T$. \end{proof} We are now ready to show our main result. \begin{theorem}\label{thm:main} The independence algebra $\A$ does not satisfy the distributivity property. \end{theorem} \begin{proof} By way of contradiction, assume that $W$ is a set of functions that generates the clone of $\A$ and witnesses the distributivity property. The clone of $\A$ contains the function $g^\A$.
By the first three conditions of our choice of $h$, direct calculation shows that $g(z_1,z_2)=z_6z_2, g(z_3,z_2)=z_8z_2,$ and $g(z_1,z_4)=z_{10}z_4$. Combined, these results show that $g$ depends on both of its arguments. It follows that $W$ must contain an operation $v$ that depends on more than one argument, for otherwise the entire clone of $\A$ would consist of functions that are essentially unary. As $v$ is in the clone of $\A$, it is a composition of $\A$-operations and projections, and it is easy to see that such $v$ must have the form $\bar t \circ \pi^n_m$ for some $t\in T$. Moreover, $(W \setminus \{v\}) \cup \{\bar t\}$ also generates the clone of $\A$ and witnesses the distributivity property. Thus, we may assume that $v=\bar t$ for a term $t \in T$; since $v$ depends on at least two variables, so does $\bar t$. Such a $\bar t$ must satisfy one of the two conditions from Lemma \ref{lm:structure}. As the first condition implies that $\bar t$ depends on only one variable, we must have instead that $\bar t (\vec y)= f(\vec y) y_{t^*}$, where $f$ is as in Condition (\ref{form2}) of Lemma \ref{lm:structure}. Let $(\vec \mu_i)_{i\in\mathbb{N}}$ and $(z_{j_i})_{i\in\mathbb{N}}$ be the sequences associated to $f$ satisfying (a) and (b) of Lemma \ref{lm:structure}, and set $w_1=f(\vec \mu_1)$. We have that $\bar t(\vec \mu_1)= f(\vec \mu_1)\,\mu_1^*$, where $\mu_1^*$ denotes the $t^*$-th entry of $\vec \mu_1$. Choose $a \in Z \setminus (E \cup C_t)$. As we assume that $\A$ satisfies the distributivity property, and $W$ is a witness of it, we have that $\nu_a(\bar{t}(\vec \mu_1))=\bar{t}(\nu_a(\vec \mu_1))$, where, with abuse of notation, $\nu_a(\vec \mu_1)=a\vec \mu_1$ is the element of $F^n$ obtained by multiplying every coordinate of $\vec \mu_1$ by $a$ on the left.
Hence $af(\vec \mu_1) \mu_1^*= f(a \vec \mu_1) a \mu_1^*$ and so $af(\vec \mu_1) = f(a \vec \mu_1) a.$ Now, $f(a \vec \mu_1)$ is in the image of $f$, which is contained in $FG(E \cup C_t)$ by Lemma \ref{lm:structure}. As $a \notin FG(E \cup C_t)$, the normal form of $f(a \vec \mu_1)a$, and hence the normal form of $af(\vec \mu_1)$, will end with $a$. However, $f(\vec \mu_1)$ is also an element of $FG(E \cup C_t)$, and so its normal form does not contain $a$. It follows that $f(\vec \mu_1)=1$. However, by Lemma \ref{lm:structure}, the normal form of $f(\vec \mu_1)$ contains the generator $z_{j_1}$, a contradiction. \end{proof} \section{Conditions for $\mathop{\mathrm{End}}\nolimits(\B)$ to be a right order in $\mathop{\mathrm{End}}\nolimits(\A)$}\label{sec:right} {\color{black} Throughout this section, let $\B$ be a stable basis algebra of finite rank $n$ satisfying the distributivity condition. Thus far we have not been explicit about the properties satisfied by $\B$, but for the convenience of the reader we give some brief details. All of these, and further information, can be found in \cite{basisi}. The relation $\prec$ is defined by \[a\prec W\Leftrightarrow a\in \langle \emptyset\rangle_{\mathbb{B}} \mbox{ or } \langle \{ a\} \rangle _{\mathbb{B}}\cap \langle W\rangle_{\mathbb{B}} \neq \langle \emptyset\rangle_{\mathbb{B}}\] for $a\in B$ and $W\subseteq B$. We say that $W$ is {\em pure closed} if \[a\prec W\Leftrightarrow a\in W.\] Any subset $V$ of $B$ is contained in a smallest pure closed subset $\operatorname{PC}(V)$; indeed, $\operatorname{PC}(V)$ is a subalgebra with a basis that can be extended to a basis of $B$. To say that $X$ is a {\em basis} of a subalgebra $C$ means that $C=\langle X\rangle_{\mathbb{B}}$, every map from $X$ to $B$ lifts to a (unique) morphism from $C$ to $B$, and $x \not\prec X\setminus \{ x\}$, for any $x\in X$. The {\em rank} of a subalgebra $U$ is the cardinality of a basis of its pure closure $\operatorname{PC}(U)$.
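To illustrate these notions in a simple special case (purely by way of illustration), take $\B$ to be the free $\mathbb{Z}$-module $\mathbb{Z}^2$, a stable basis algebra of the kind appearing in Example~\ref{ex:rings} below. Here $\langle \emptyset\rangle_{\mathbb{B}}=\{0\}$, so that $a\prec W$ precisely when some non-zero integer multiple of $a$ lies in the submodule generated by $W$. Thus \[\operatorname{PC}\left(\langle (2,0)\rangle_{\mathbb{B}}\right)=\langle (1,0)\rangle_{\mathbb{B}},\] since $2(1,0)=(2,0)$ gives $(1,0)\prec \{(2,0)\}$, while no non-zero multiple of $(0,1)$ lies in $\langle (2,0)\rangle_{\mathbb{B}}$. The subalgebra $\langle (2,0)\rangle_{\mathbb{B}}$ therefore has rank $1$, and its pure closure has basis $\{(1,0)\}$, which extends to the basis $\{(1,0),(0,1)\}$ of $\B$.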
Moreover, since $B$ has finite rank $n$, any subalgebra with rank $m\leq n$ itself has a basis of $m$ elements. However, it is only the pure closed subalgebras that have bases that are extendable to bases of $B$. Since $\B$ has rank $n$, we choose and fix a basis $\{b_1,\dots,b_n\}$. As pointed out at the beginning of Section~\ref{sec:example}, the monoid $\mathbb{T}$ of non-constant unary term operations is cancellative and left Ore. It therefore has a group of left quotients $\G$ by \cite[Theorem 1.24]{cp}. Further, the elements of $\mathbb{T}$ are injective (as maps from $B$ to $B$). Another fact of which we make use is that if $u(x)$ is a unary term operation of $\B$, and $u(b)\in \langle \emptyset\rangle_{\mathbb{B}}$ for some $b\notin \langle \emptyset\rangle_{\mathbb{B}}$, then $u(x)$ is a constant map with image $u(b)\in \langle \emptyset\rangle_{\mathbb{B}}$. Finally, for any $\alpha\in \operatorname{End} \B$, the {\em rank} of $\alpha$ is defined to be the rank of the subalgebra $\im \alpha$; this always exists. } In \cite{gouldI} Gould constructs an independence algebra $\A$ such that $\B$ is a reduct of $\A$, and such that $\operatorname{End}(\B)$ is a left order (in the sense of Definition~\ref{defi:order}) in $\operatorname{End}(\A)$. For convenience we gather here some essential information concerning the construction of $\A$ from $\B$. Further details may be found in \cite{gouldI}. Let $\Sigma=T\times B$ and define a relation $\sim$ on $\Sigma$ by the rule that \[(a,c)\sim (b,d)\Leftrightarrow xa=yb\mbox{ and }x(c)=y(d)\mbox{ for some }x,y\in T;\] {\color{black}for clarity, here $xa,yb$ are products in the {\em monoid} $\mathbb{T}$ and $x(c),y(d)$ are the values of $x,y$ acting on $c,d\in B$, respectively.} Then $\sim$ is an equivalence relation and $A=\Sigma/\sim$ is the underlying set of an independence algebra $\A$. Further, $\B$ is a reduct of $\A$, where $\B$ embeds in $\A$ under $b\mapsto [1,b]:=[(1,b)]$. 
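To see the construction in miniature (again purely as an illustration), let $\B$ be the rank $1$ free $\mathbb{Z}$-module $\mathbb{Z}$, so that $\mathbb{T}\cong (\mathbb{Z}\setminus\{0\},\times)$ and $\G\cong \mathbb{Q}^{*}$. A pair $(a,c)\in\Sigma$ plays the role of the formal quotient $c/a$: indeed, $(a,c)\sim (b,d)$ if and only if $xa=yb$ and $xc=yd$ for some non-zero integers $x,y$, which happens exactly when $cb=da$, that is, $c/a=d/b$ in $\mathbb{Q}$. Hence $[a,c]\mapsto c/a$ identifies $A$ with $\mathbb{Q}$, and the embedding $b\mapsto [1,b]$ becomes the usual inclusion $\mathbb{Z}\hookrightarrow \mathbb{Q}$, in accordance with the description of $\A$ given in Example~\ref{ex:rings} below.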
\begin{prop}\label{prop:facts}\cite{gouldI} Let $\B$ and $\A$ be as above. Then $$\{[1,b_1],\dots, [1,b_n]\}$$ is a basis for $\A$ so that $\A$ and $\B$ have the same rank. The map $\theta\mapsto \bar{\theta}$ from $\operatorname{End}(\B)$ to $\operatorname{End}(\A)$ embeds $\operatorname{End}(\B)$ as a left order in $\operatorname{End}(\A)$, where for any $[a,b]\in A$ we have $[a,b]\bar{\theta}=[a,b\theta]$. Further, $\theta$ and $\bar{\theta}$ have the same rank. \end{prop} Before addressing the main question of this section, we first confirm the way in which $\mathop{\mathrm{End}}\nolimits (\B)$ sits inside $\mathop{\mathrm{End}}\nolimits(\A)$ in the general case. Recall from \cite{gould ab} that a left order $\mathbb{S}$ in $\mathbb{Q}$ is {\em fully stratified} if for any $a,b\in S$ we have \[a\,\mbox{$\leq _{{\mathcal R}^{\ast}}$}\, b\mbox{ in }\mathbb{S}\mbox{ if and only if }a\,\mbox{$\leq _{\mathcal R}$}\, b\mbox{ in }\mathbb{Q}\] and \[a\,\mbox{$\leq _{{\mathcal L}^{\ast}}$}\, b\mbox{ in }\mathbb{S}\mbox{ if and only if }a\,\mbox{$\leq _{\mathcal L }$}\, b\mbox{ in }\mathbb{Q}.\] The relations $\mbox{$\leq _{\mathcal R}$}$ and $\mbox{$\leq _{\mathcal L }$}$ above are the pre-orders associated with Green's relations $\mbox{$\mathcal R$}$ and $\mbox{$\mathcal{L}$}$; $\mbox{$\leq _{{\mathcal R}^{\ast}}$}$ and $\mbox{$\leq _{{\mathcal L}^{\ast}}$}$ above are the pre-orders associated with the larger relations $\mbox{${\mathcal R}^{\ast}$}$ and $\mbox{${\mathcal L}^{\ast}$}$. We give further details as and when we use them; the reader may consult \cite{gould ab}. These relations are important for the study of $\operatorname{End}(\B)$ due to the following result. \begin{prop}\label{prop:greens} \cite{basisii,gould} Let $\mathbb{C}$ be an independence algebra and $\mathbb{D}$ a basis algebra, let $\alpha,\beta\in \operatorname{End}(\mathbb{C})$ and $\gamma,\delta\in \operatorname{End}(\mathbb{D})$. 
Then \[\alpha\,\mbox{$\leq _{\mathcal R}$}\, \beta\mbox{ if and only if }\ker\beta\subseteq \ker \alpha;\] \[\alpha\,\mbox{$\leq _{\mathcal L }$}\, \beta\mbox{ if and only if }\im\alpha\subseteq \im\beta;\] \[\gamma\,\mbox{$\leq _{{\mathcal R}^{\ast}}$}\, \delta\mbox{ if and only if }\ker\delta\subseteq\ker\gamma\] and \[\gamma\,\mbox{$\leq _{{\mathcal L}^{\ast}}$}\, \delta\mbox{ if and only if }\operatorname{PC}(\im\gamma)\subseteq\operatorname{PC}(\im\delta).\] \end{prop} \begin{theorem}\label{fullstrat} Let $\B$ be a stable basis algebra of finite rank $n\geq 1$ such that $\B$ satisfies the distributivity condition. The monoid $\mathop{\mathrm{End}}\nolimits(\B)$ is a fully stratified left order in $\mathop{\mathrm{End}}\nolimits(\A)$. \end{theorem} \begin{proof} We are required to show that for any $\alpha,\beta\in \mathop{\mathrm{End}}\nolimits(\B)$ we have that $\alpha\,\mbox{$\leq _{{\mathcal L}^{\ast}}$}\,\beta$ in $\mathop{\mathrm{End}}\nolimits(\B)$ if and only if $\overline{\alpha}\,\mbox{$\leq _{\mathcal L }$}\,\overline{\beta}$ in $\mathop{\mathrm{End}}\nolimits(\A)$ and dually, $\alpha\,\mbox{$\leq _{{\mathcal R}^{\ast}}$}\,\beta$ in $\mathop{\mathrm{End}}\nolimits(\B)$ if and only if $\overline{\alpha}\,\mbox{$\leq _{\mathcal R}$}\,\overline{\beta}$ in $\mathop{\mathrm{End}}\nolimits(\A)$. The proof of the first statement is inherent in the proof of \cite[Proposition 5.2 (ii)]{gould}, although not explicitly stated. We concentrate on the second; according to Proposition~\ref{prop:greens} it is sufficient to show that if $\ker \beta \subseteq \ker \alpha$, then $\ker\overline{\beta}\subseteq \ker\overline{\alpha}$. Suppose therefore that $\ker \beta \subseteq \ker \alpha$, and consider $[a,u],[b,v]\in A$ with $[a,u]\overline{\beta}=[b,v]\overline{\beta}$.
{\color{black} By Proposition \ref{prop:facts}} we have that $[a,u\beta]=[b,v\beta]$ so that by the definition of $\sim$, there exist $c,d\in T$ with $ca=db$ and $c(u\beta)=d(v\beta)$, so that $(c(u))\beta=(d(v))\beta$. Since $\ker\beta\subseteq \ker\alpha$ we have $c(u\alpha)=(c(u))\alpha=({d}(v))\alpha={d}(v\alpha)$ and then $[a,u]\overline{\alpha}=[a,u\alpha]=[b,v\alpha]=[b,v]\overline{\alpha}$, as required. \end{proof} We have remarked that any $a\in T$ is one-one as a map, and certainly \[a|_{\langle \emptyset\rangle_{\mathbb{B}}}:\langle \emptyset\rangle_{\mathbb{B}}\rightarrow \langle \emptyset\rangle_{\mathbb{B}}.\] \begin{defi}\label{defn:CI} The algebra $\B$ satisfies the {\em Constant Isomorphism Property} (CI) if, for every $a\in T$, the map \[a|_{\langle \emptyset\rangle_{\mathbb{B}}}:\langle \emptyset\rangle_{\mathbb{B}}\rightarrow \langle \emptyset\rangle_{\mathbb{B}}\] is {\em onto}, hence an {\em isomorphism} of the constant subalgebra $\langle \emptyset\rangle_{\mathbb{B}}$ of $\mathbb{B}$. \end{defi} We introduced (CI) for the following purpose. \begin{theorem}\cite[Theorem 6.2]{gouldI}\label{thm:straight} Let $\B$ be a stable basis algebra of finite rank satisfying the distributivity condition. Then $\operatorname{End} (\B)$ is a straight left order in $\operatorname{End}(\A)$ if and only if $\B$ satisfies (CI) and (if $n\geq 2$) $\mathbb{T}$ is right Ore. \end{theorem} From \cite[Theorem 1.24]{cp}, if $\mathbb{T}$ is right Ore, then it has a group of right quotients and it is easy to see in this case that $\G$ is a group of (two-sided) quotients of $\mathbb{T}$. The paper \cite{gouldI} leaves open the question of whether the fact that $\G$ is a group of quotients of $\mathbb{T}$ forces $\mathop{\mathrm{End}}\nolimits(\B)$ to be a right, and hence (two-sided), order in $\mathop{\mathrm{End}}\nolimits(\A)$.
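The common denominator theorem for $\G$, which is used repeatedly in the arguments below, has a very concrete reading in the commutative special case $\T\cong(\mathbb{Z}\setminus\{0\},\times)$, $\G\cong\mathbb{Q}^{*}$ arising from free $\mathbb{Z}$-modules (cf.\ Example~\ref{ex:rings}): finitely many quotients $q_i\in\G$ can be rewritten as $q_i=\mu^{-1}\nu_i$ with a single denominator $\mu\in\T$. The following sketch (an illustration only; the function name is ours) computes such a common denominator via the least common multiple.

```python
from fractions import Fraction
from math import lcm

def common_left_denominator(quotients):
    # In G = Q*, rewrite each q_i as mu^{-1} * nu_i with a single
    # denominator mu in T = Z \ {0} (illustrative sketch only).
    mu = lcm(*(q.denominator for q in quotients))
    nus = [int(q * mu) for q in quotients]  # each mu * q_i lies in Z
    return mu, nus

qs = [Fraction(1, 2), Fraction(2, 3), Fraction(-5, 4)]
mu, nus = common_left_denominator(qs)
assert all(Fraction(nu, mu) == q for nu, q in zip(nus, qs))
```

In the non-commutative monoids $\T$ considered in the text, the common denominator is supplied by the Ore condition rather than by an lcm.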
{\color{black} The aim of this section is to determine the conditions under which $\mathop{\mathrm{End}}\nolimits(\B)$ is a right order in $\mathop{\mathrm{End}}\nolimits(\A)$, thus answering the question posed above in the positive for $n\geq 2$ and establishing {\bf Result 2}.} If the rank $n$ of $\B$ is $0$, then $\A=\B=\langle \emptyset \rangle$ and so $\mathop{\mathrm{End}}\nolimits(\A)=\{I_A\}=\mathop{\mathrm{End}}\nolimits(\B)$ is a one element group, and our results hold trivially. We will therefore restrict to $\B$ with positive rank. Via a series of lemmas we now prove: \begin{theorem}\label{rightord} Let $\B$ be a stable basis algebra of finite rank $n\geq 1$ such that $\B$ satisfies the distributivity condition. The monoid $\mathop{\mathrm{End}}\nolimits(\B)$ is a right order in $\mathop{\mathrm{End}}\nolimits(\A)$ if and only if $\mathbb{T}$ is right Ore and $\B$ satisfies (CI). \end{theorem} In fact, we will show slightly more, demonstrating that in the right quotient decomposition $\alpha =\bar \gamma \bar\beta^{\#}$, $\bar\beta$ can be chosen to be an automorphism of $\A$. Notice that the conditions given in Theorem~\ref{rightord} imply those of Theorem~\ref{thm:straight}, and coincide if $n\geq 2$. \begin{lemma} \label{minilemma} Suppose that $\mathbb{T}$ is right Ore and that $\B$ satisfies (CI). Let $\{s_1, \dots, s_k\}$ be a finite non-empty set of terms in the language of $\B$ over the variable set $\{x_1,\dots,x_n\}$ and let $a \in T$. Then there exists an endomorphism $\theta_a$ of $\B$ such that $s_{i}^\B(b_1,\dots,b_n)\theta_a\in \im a$ for $1\le {i}\le k$, where we interpret all $s_i$ as $n$-ary operations. Moreover, $\theta_a$ satisfies $b_{\ell}\theta_a=a(r(b_{\ell}))$ for $1\le \ell\le n$ and some $r\in T$. \end{lemma} \begin{proof} We have remarked that under these hypotheses $\G$ is the group of quotients of $\mathbb{T}$. Let $m$ be the number of basic, non-nullary term operations appearing in the terms $s_1,\dots, s_k$, counted with multiplicity.
We prove the lemma by induction on $m$. If $m=0$ then all $s_i$ are either variables or nullary operation symbols. By reordering the $s_i$ as necessary we may assume that for some $1\le t\leq k+1$ we have \[ s_{i}^\B(b_1,\dots, b_n)= \left\{ \begin{array}{ll}b_{j_{i}}& 1\le {i}< t\\ e_{i}& t\le {i}< k+1,\end{array}\right.\] for some constants $e_{i}$. By (CI), $a$ acts as an isomorphism on $\langle \emptyset\rangle_\B$, and so $e_{i}=a(f_{i})$, for some $f_{i} \in\langle \emptyset\rangle_\B$. Define $\theta_a$ by $b_i\theta_a=a(b_i)$; then $$s_{i}^\B(b_1,\dots,b_n)\theta_a=b_{j_{i}}\theta_a=a(b_{j_{i}})$$ for $1\le {i} < t$ and $$s_{i}^\B(b_1,\dots,b_n)\theta_a=e_{i}\theta_a=e_{i}=a(f_{i})$$ for $t\le {i}< k+1.$ Thus $\theta_a$ satisfies the conditions of the lemma with $r$ being the identity. Now suppose that $0<m$, and that the result holds for all $m'$ with $0\leq m'<m$. We may reorder the $s_i$ such that for some integers $1\leq k_1\leq k_2\leq k_3\leq k_4\leq k+1$ we have \[ s_i =\left\{ \begin{array}{ll} x_{j(i)} & \mbox{ for }1\le i < k_1\\ v_i(h_1^i,\dots ,h_{l(i)}^i)& \mbox{ for }k_1\le i <k_2\\ u_i(h_i)&\mbox{ for }k_2\le i < k_3\\ c_i(h_i)&\mbox{ for }k_3\le i < k_4\\ d_i& \mbox{ for }k_4\le i <k+1,\end{array}\right.\] where $1\leq j(i)\leq n$, the $v_i$ are {\color{black} basic $l(i)$-ary operation symbols} where $l(i)\geq 2$, the $u_i,c_i$ are unary function symbols with $u_i^\B=a_i \in T$ and $c_i^\B\notin T$, the $d_i$ are nullary operation symbols, and the $h_i$ and $h_j^i$ are arbitrary terms. {\color{black} Our convention is that if, for example, $1=k_1=k_2$, then there are no instances of $s_i$ of the first two kinds. } For $k_3\leq i< k_4$ we have that, as $c_i^{\mathbb{B}}\notin T$, $s_i^\B(b_1,\dots,b_n)=g_i\in\langle \emptyset\rangle_{\mathbb{B}}$, so that by the same argument as used for $m=0$, (CI) gives that $g_i\in \im a$.
If $k_1=k_2=k_3$, then let $b_{\ell}\theta_a=a(b_{\ell})$ for $1\leq \ell\leq n$, so that the conditions of the lemma hold with $r$ the identity. Otherwise, since $\T$ is right Ore, we proceed as follows. First, by applying the common denominator theorem in the group $\G$ to the elements ${a}^{-1}a_i$ for $k_2\le i < k_3$, we may find $p_i, u \in T$ such that in $\G$, $a^{-1}a_i=p_iu^{-1},$ and hence in $\T$, we have $a_i u=ap_i$. If $k_2=k_3$, let $u\in T$ be chosen arbitrarily. Again from $\T$ being right Ore, there are $p, v \in T$ for which $up=av$. We now apply our induction hypothesis to all the terms $x_{j(i)}$, $h_j^i$ and $h_i$, with $av$ in place of $a$ (note that we must have at least one such term). It follows that there exist $\phi_{av} \in \mathop{\mathrm{End}}\nolimits(\B)$ and $z_i',z_j^i, z_i \in B$ for which $$\left(x_{j(i)}^{\B}(b_1,\dots,b_n)\right)\phi_{av}= av(z_i'),$$ $$\left(h_j^{i\B}(b_1,\dots,b_n)\right)\phi_{av}= av(z_j^i),$$ $$\left(h_i^{\B}(b_1,\dots,b_n)\right)\phi_{av}= av(z_i),$$ and that, moreover, $b_{\ell}\phi_{av}=av(r'(b_{\ell}))$ for $1\leq\ell\leq n$ and some $r'\in T$. For $1\le i< k_1$, \[\begin{array}{rcl} s_i^\B(b_1,\dots, b_n)\phi_{av}&=&\left(x_{j(i)}^{\B}(b_1,\dots,b_n)\right)\phi_{av}\\&=& av(z_i'). \end{array}\] For $k_1\le i< k_2$, \[\begin{array}{rcl} s_i^\B(b_1,\dots, b_n)\phi_{av}&=& \left(v_i^\B\left(h_1^{i\B}(b_1,\dots,b_n), \dots, h_{l(i)}^{i\B}(b_1,\dots, b_n)\right)\right)\phi_{av}\\ &=&v_i^\B(av(z_1^i), \dots, av(z_{l(i)}^i))\\ &=&av_i^\B(v(z_1^i), \dots, v(z_{l(i)}^i)), \end{array}\] where the last equality follows from the distributivity condition.
For $k_2\le i< k_3$ we have $$\left(s_i^\B(b_1,\dots,b_n)\right)\phi_{av}=\left(u_i^\B(h_i^\B(b_1,\dots,b_n))\right) \phi_{av} =\left(a_i(h_i^\B(b_1,\dots,b_n))\right)\phi_{av}$$ $$=a_i(av(z_i))=a_i(up(z_i))=ap_ip(z_i).$$ Since $b_{\ell}\phi_{av}=av(r'(b_{\ell}))$ for some $r' \in T$ and all $1\leq \ell\leq n$, the result holds with $\theta_a=\phi_{av}$ and $r=vr'$. \end{proof} \begin{theorem}\label{ro} If $\T$ is right Ore and (CI) holds, then $\mathop{\mathrm{End}}\nolimits(\B)$ is a right order in $\mathop{\mathrm{End}}\nolimits(\A)$. Moreover, every $\alpha \in \mathop{\mathrm{End}}\nolimits(\A)$ may be written in the form $\alpha=\bar \gamma \bar\beta^{-1}$ for some $\gamma, \beta \in \mathop{\mathrm{End}}\nolimits(\B)$ with $\bar \beta\in\mathop{\mathrm{Aut}}\nolimits(\A)$. \end{theorem} \begin{proof} From \cite{gouldI}, $\mathop{\mathrm{End}}\nolimits(\B)$ is a left order in $\mathop{\mathrm{End}}\nolimits(\A)$, so that certainly every square-cancellable element of $\mathop{\mathrm{End}}\nolimits(\B)$ lies in a subgroup of $\mathop{\mathrm{End}}\nolimits(\A)$. Let $\alpha \in \mathop{\mathrm{End}}\nolimits(\A)$. It remains to find $\beta, \gamma \in \mathop{\mathrm{End}}\nolimits(\B)$ such that $\alpha=\bar\gamma\bar\beta^{\#}$. By Proposition~\ref{prop:facts} we have that $$\{[1,b_1],\dots, [1,b_n]\}$$ is a basis for $\A$. Choose $\mu_i \in T$, $d_i \in B$ such that $[1,b_i]\alpha=[\mu_i,d_i]$ for $1\leq i\leq n$. Consider the elements $\mu_i^{-1} \in \G$. By the common denominator theorem, there exist $\mu, \nu_i \in T$ such that $\mu_i^{-1}=\mu^{-1}\nu_i$ in $\G$ for $1\leq i\leq n$. It follows that $\mu=\nu_i\mu_i$, and thus from the definition of $\sim$, we have $[\mu_i, d_i] = [\nu_i \mu_i, \nu_i(d_i)]=[\mu, \nu_i(d_i)]$. Putting $c_i := \nu_i(d_i)\in B$ for $1\leq i\leq n$ we have that $[1,b_i]\alpha=[\mu, c_i]$. Now let $t_1, \dots, t_n$ be $n$-ary terms such that $c_i=t_i^\B(b_1, \dots,b_n)$ for $1\leq i\leq n$.
By Lemma \ref{minilemma}, there exist an endomorphism $\theta_{\mu}$ of $\B$, elements $z_i \in B$ and $r\in T$ such that $t^\B_i(b_1,\dots,b_n)\theta_{\mu}=\mu (z_i)$ and $b_\ell\theta_\mu=\mu(r(b_\ell))$ for $1\leq \ell\leq n$. Define $\beta,\gamma\in \mathop{\mathrm{End}}\nolimits(\B)$ by $\beta=\theta_{\mu}$ and $b_i\gamma=z_i$. Using Proposition \ref{prop:facts}, we have that for all $i$, $$[1,b_i]\alpha\bar\beta=[\mu,c_i]\bar\beta=[\mu,c_i\beta]=[\mu,c_i\theta_{\mu}]=[\mu,\mu (z_i)]=[1,z_i]=[1,b_i \gamma]=[1,b_i]\bar\gamma.$$ It follows that $\alpha \bar \beta=\bar\gamma$. From the fact that $\{ b_1,\hdots ,b_n\}$ is a basis for $\mathbb{B}$, we see that \[ \mu(r(b_i))\not\prec \{ \mu(r(b_1)), \hdots, \mu(r(b_{i-1})),\mu(r(b_{i+1})), \hdots, \mu(r(b_n))\}\] for any $1\leq i\leq n$. It follows that $\beta$ has rank $n$, and hence by Proposition~\ref{prop:facts}, so does $\bar\beta$. But then $\im\, \bar\beta=A$, which means that $\bar \beta$ is a unit of $\mathop{\mathrm{End}}\nolimits(\A)$ by \cite[Proposition 3.2]{gould}. Hence $\alpha=\bar \gamma \bar\beta^{-1}=\bar \gamma \bar\beta^{\sharp}$, as required. \end{proof} We now tackle the converse to Theorem~\ref{ro}. \begin{lemma}\label{CILemma} Suppose that $\mathop{\mathrm{End}}\nolimits(\B)$ is a right order in $\mathop{\mathrm{End}}\nolimits(\A)$. Then (CI) holds in $\B$. \end{lemma} \begin{proof} Let $a \in T$ and $c \in \langle \emptyset \rangle_{\mathbb{B}}$. We need to show that $c$ is in the image of $a$. As $[1,b_1]$ is in a basis of $\A$ there exists an $\alpha \in \mathop{\mathrm{End}}\nolimits(\A)$ such that $[1,b_1] \alpha=[a,c]$. As $\mathop{\mathrm{End}}\nolimits (\B)$ is a right order in $\mathop{\mathrm{End}}\nolimits (\A)$, there are $\beta, \gamma \in \mathop{\mathrm{End}}\nolimits(\B)$ such that $\alpha=\bar \gamma \bar \beta^{\#}$.
Note that $$\alpha=\bar \gamma \bar \beta^{\#}=\bar \gamma \left(\bar \beta \bar \beta^{\#} \bar \beta^{\#}\right)=\left(\bar{\gamma} \bar \beta\right) \left(\bar \beta \bar\beta\right)^{\#}= \left(\overline{\gamma}\,\overline{ \beta}\right)\left(\overline{\beta}^2\right)^{\#}.$$ By replacing $\gamma$ and $\beta$ with ${\gamma \beta}$ and $\beta^2$, respectively, if necessary, we may assume that $\bar{\gamma}= \bar{\gamma}\bar{\beta}^{\sharp}\bar{\beta}$ and, with this assumption, we obtain $\bar \gamma =\alpha\bar \beta$. From $[1,b_1] \alpha=[a,c]$ we have that $$[1,b_1]\bar \gamma= [1,b_1]\alpha\bar \beta=[a,c]\bar \beta$$ giving $$ [1,b_1\gamma]=[a, c\beta] =[a,c].$$ Hence there exist $u,v \in T$ such that $ua=v1=v$ and $u(c)=v(b_1\gamma)$. Then $u(c)=ua(b_1 \gamma)$ and as $u$ is injective, we have that $c=a(b_1\gamma)$. By comments at the beginning of this section, we have that $b_1\gamma\in \langle \emptyset \rangle_{\mathbb{B}}$. Thus (CI) holds in $\B$. \end{proof} \begin{theorem} \label{lr} Suppose that $\mathop{\mathrm{End}}\nolimits(\B)$ is a right order in $\mathop{\mathrm{End}}\nolimits(\A)$. Then $\T$ is right Ore. \end{theorem} \begin{proof} Let $p,q \in T$. Define $\alpha\in \mathop{\mathrm{End}}\nolimits(\A)$ by $[1,b_i] \alpha= [p, q (b_1)]$ for $1\leq i\leq n$. By \cite[Lemma 4.10]{gouldI} the rank of $\alpha$ is 1. Since $\mathop{\mathrm{End}}\nolimits(\B)$ is a right order in $\mathop{\mathrm{End}}\nolimits(\A)$, there are $\gamma, \beta \in \mathop{\mathrm{End}}\nolimits(\B)$ such that $\alpha=\bar\gamma\bar\beta^{\#}$. As in the proof of Lemma \ref{CILemma}, we may assume that $\alpha \bar \beta =\bar \gamma$. Hence for $i=1,\dots,n$, $$[1,b_i\gamma]=[1,b_i]\bar \gamma=[1,b_i]\alpha \bar \beta =[p, q(b_1)]\bar \beta =[p, (q(b_1)) \beta] =[p, q(b_1 \beta)].$$ Hence there exist $a,b \in T$ such that $ap=b$ and $aq(b_1 \beta)=b(b_i \gamma)$, giving $aq(b_1 \beta)=ap(b_i \gamma)$. By injectivity of $a$, we obtain $q(b_1 \beta)=p(b_i \gamma)$.
Let $u=b_1 \beta$, $v_i=b_i \gamma$. It follows that $q(u)=p(v_i)$ and so $\operatorname{PC}(\{u\})=\operatorname{PC}(\{v_i\})=W$ (say) for $1\leq i\leq n$. As it is an element of a basis, we have that $[1,b_1]\notin \langle \emptyset\rangle_{\mathbb{A}}$. By \cite[Lemma 4.10]{gouldI}, $[p, q(b_1)] \notin \langle \emptyset\rangle_{\mathbb{A}}$. From $\alpha=\bar\gamma\bar\beta^{\#}$ we have that $\alpha=\alpha\bar\beta\bar\beta^{\#}$ and so $\bar\beta\bar\beta^{\#}$ acts as the identity on $\im(\alpha)=\langle[p, q(b_1)]\rangle_{\mathbb{A}}$, giving $[p, q(b_1 \beta)]=[p, q(b_1)] \bar\beta \notin \langle \emptyset\rangle_{\mathbb{A}}$. Once again by \cite[Lemma 4.10]{gouldI}, we obtain that $q(b_1\beta)\notin \langle \emptyset\rangle_{\mathbb{B}}$ and hence by \cite[Proposition 2.4]{gouldI} we also have $b_1 \beta \notin \langle \emptyset\rangle_{\mathbb{B}}$. Thus $W$ has rank $1$. It follows that $W$ has a one-element basis, say $W=\langle w_1\rangle_{\mathbb{B}}$ for some $w_1 \in B$; since $W$ is pure closed, $\{w_1\}$ extends to a basis $\{ w_1,\hdots, w_n\}$ of $B$. Hence there are $h, k \in T$ such that $h(w_1)=u$, $k(w_1)=v_1$, and so $q h(w_1)= p k(w_1)$. It follows that for any $b' \in B$, there is an endomorphism $\tau$ with $w_1 \tau=b'$. Clearly then $q h= p k$ and the result follows. \end{proof} Theorem \ref{rightord} now follows directly from Theorem \ref{ro}, Lemma \ref{CILemma}, and Theorem \ref{lr}. \begin{example} \label{ex:rings} Let $\mathbb{R}$ be an integral domain and $\mathbb{M}$ an $n$-generated ($n\in\mathbb{N}$) free left module over $\mathbb{R}$ such that $\mathbb{M}$ is a stable basis algebra; certainly we must have that $\mathbb{R}$ is left Ore, since the non-zero elements of $R$ form a monoid isomorphic to $\mathbb{T}_{\mathbb{M}}$. For example, $\mathbb{R}$ could be a left Ore Bezout domain.
Then $\operatorname{End}(\mathbb{M})$ (which is isomorphic to $M_n(\mathbb{R})$) is a left order (in the sense of either ring or semigroup theory) in $\operatorname{End}(\A)$. The construction of \cite{gouldI} in this case yields that $\A$ is the $n$-dimensional vector space $\mathbb{D}\otimes \mathbb{M}$, where $\mathbb{D}$ is the division ring of left quotients of $\mathbb{R}$, so that $\operatorname{End}(\mathbb{A})$ is isomorphic to the monoid $M_n(\mathbb{D})$. Our results now yield (for $n\geq 2$) that $M_n(\mathbb{R})$ is a straight left order in $M_n(\mathbb{D})$ if and only if $M_n(\mathbb{R})$ is a two-sided order in $M_n(\mathbb{D})$ if and only if $\mathbb{R}$ is also right Ore if and only if $\mathbb{R}$ is also a right order in $\mathbb{D}$. \end{example} \begin{example} As explained in \cite[Section 2]{basisii}, any free left $\mathbb{T}$-act $\B$ of rank $n\in\mathbb{N}$ over a cancellative monoid $\mathbb{T}$ such that every finitely generated left ideal is principal is a stable basis algebra. Certainly $\mathbb{T}$ is left Ore, and if $\mathbb{G}$ is its group of left quotients, then $\A$ is isomorphic to the free left $\mathbb{G}$-act on $n$ generators. Our results show that (for $n\geq 2$), $\operatorname{End}(\B)$ is a straight left order in $\operatorname{End}(\A)$ if and only if it is a two-sided order, if and only if $\mathbb{T}$ is also right Ore. \end{example} \section{Fully stratified straight left orders}\label{sec:FS} In \cite[Theorem 6.2]{basisii}, it was claimed that for any stable basis algebra $\B$, $\mathop{\mathrm{End}}\nolimits_f(\B)$ is a fully stratified straight left order in {\em some} regular semigroup. However, the proof of this result depends on the invalid Proposition II.2.6 in \cite{mvl}. In this section, we show that we can still obtain that $\mathop{\mathrm{End}}\nolimits_f(\B)$ is a fully stratified straight left order under certain conditions, closely related to those in Section~\ref{sec:right}.
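Returning for a moment to Example~\ref{ex:rings}: in the simplest instance $\mathbb{R}=\mathbb{Z}$, $\mathbb{D}=\mathbb{Q}$, the statement that $M_n(\mathbb{Z})$ is a two-sided order in $M_n(\mathbb{Q})$ amounts to clearing denominators, since every rational matrix factors as an integer matrix times the inverse of a non-singular (indeed scalar) integer matrix. A minimal sketch of the right-quotient factorisation (an illustration only; the function name is ours):

```python
from fractions import Fraction
from math import lcm

def as_right_quotient(Q):
    # Write a rational matrix Q as C * B^{-1}, where C is integral
    # and B = d*I is an invertible scalar integer matrix (sketch).
    d = lcm(*(entry.denominator for row in Q for entry in row))
    C = [[int(entry * d) for entry in row] for row in Q]
    return C, d  # B = d * identity, so Q = C * B^{-1} = C / d

Q = [[Fraction(1, 2), Fraction(3)], [Fraction(-2, 5), Fraction(0)]]
C, d = as_right_quotient(Q)
assert all(Fraction(C[i][j], d) == Q[i][j] for i in range(2) for j in range(2))
```

The left-quotient factorisation $Q=B^{-1}C$ is obtained symmetrically, reflecting the fact that $\mathbb{Z}$ is both left and right Ore.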
{\color{black} We remark first that if $\B$ has finite rank and satisfies the distributivity condition, then $\mathop{\mathrm{End}}\nolimits_f(\B)= \mathop{\mathrm{End}}\nolimits(\B)$ and from Theorems~\ref{rightord} and~\ref{thm:straight}, $\mathop{\mathrm{End}}\nolimits(\B)$ is a fully stratified straight left order in $\mathop{\mathrm{End}}\nolimits(\A)$ (where $\A$ is constructed as in Section~\ref{sec:right}) if and only if $\B$ satisfies (CI) and, if rank $\B \ge 2$, the monoid $\mathbb{T}$ is both left and right reversible. In the general case of arbitrary rank, where we have no construction of $\A$ to hand, we assume rather stronger conditions on $\B$ in order to find sufficient conditions for $\mathop{\mathrm{End}}\nolimits_f(\B)$ to be a fully stratified straight left order. Our approach is to check the list of conditions for a semigroup to be a fully stratified straight left order given in \cite[Proposition 3.2]{gould ab}. In fact, we are left with just two conditions to check, and for the first, we need make no additional assumptions.} \begin{lemma} \label{l:eiil} Let $\B$ be a stable basis algebra and let $\alpha,\beta \in \mathop{\mathrm{End}}\nolimits_f(\B)$ be such that $\alpha \le_{\mathcal{L}^*}\beta$. Then there exists $\gam \in \mathop{\mathrm{End}}\nolimits_f(\B)$ such that $\alpha \, \mathcal{L}^*\, \gam\beta $. \end{lemma} \begin{proof} Let $B$ have basis $\{ b_i:i\in I\}$ so that $B=\langle b_i:i\in I\rangle_{\mathbb{B}}$, and for any $i\in I$, $b_i\not\prec \{ b_j:j\in I\setminus\{ i\}\}$. By Proposition~\ref{prop:greens}, $\alpha \le_{\mathcal{L}^*}\beta$ if and only if $\PC(\im \alpha) \subseteq \PC(\im \beta)$. Let $\{ x_1,\dots, x_m\}$ be a basis for $\PC(\im \alpha)$. First, if $m=0$, then $\im \alpha=\langle \emptyset \rangle_{\mathbb{B}}$, so that $\alpha=\alpha\beta$ and so the lemma holds with $\gamma=\alpha$. We suppose therefore that $m\in\mathbb{N}$.
Set $v_i=b_i\beta$ for $i\in I$, so that $\im \beta =\langle v_i:i\in I\rangle$. There exist $p\in\mathbb{N}$ and, for $j=1,\dots, m$, $p$-ary term functions $t_j$ and $u_j \in T$ with $u_j(x_j)=t_j(v_1,\dots, v_p)$ (where we allow ourselves the freedom to relabel the $b_i$, and hence the $v_i$, as convenient). Define $\gam \in \mathop{\mathrm{End}}\nolimits(\B)$ by $ b_j\gam= t_j(b_1, \dots,b_p)$ for $j=1,\dots, m$, and $b_j\gam=b_1\gam$ otherwise. Then $\gamma\in \mathop{\mathrm{End}}\nolimits_f(\B)$ and \begin{eqnarray*} b_j \gam \beta&=&t_j(b_1, \dots, b_p)\beta \\ &=& t_j(b_1 \beta, \dots, b_p \beta) \\ &=& t_j(v_1,\dots, v_p) \\ &=& u_j(x_j) \end{eqnarray*} for $1\le j\le m$. Moreover, for $j\notin\{ 1,\hdots, m\}$ we have $b_j \gam\beta =b_1 \gam \beta =u_1(x_1)$. We deduce \begin{equation} \label{eq:PC}\im \gam \beta= \langle u_1(x_1), \dots, u_m(x_m) \rangle,\end{equation} so that clearly $\PC(\im \gam\beta) \subseteq \PC(\im \alpha ).$ Further, as $x_i \in \PC(\im \gam \beta)$ for $1\le i \le m$, we have $$\PC(\im \alpha)=\langle x_1, \dots, x_m \rangle \subseteq \PC(\im \gam\beta ).$$ Hence $\PC(\im \alpha)=\operatorname{PC}(\im\gam\beta)$ so that from Proposition~\ref{prop:greens}, we have $\alpha\, \mathcal{L}^*\, \gam\beta $, as required. \end{proof} \begin{lemma} \label{l:eiir} Let $\B$ be a stable basis algebra that satisfies the distributivity condition and has no constants. Assume that $\T$ is commutative. Let $\alpha,\beta \in \mathop{\mathrm{End}}\nolimits_f(\B)$ be such that $\alpha \le_{\R^*}\beta$. Then there exists $\gam \in \mathop{\mathrm{End}}\nolimits_f(\B)$ such that $\alpha\, \R^*\, \beta \gam$. \end{lemma} \begin{proof} As in Lemma~\ref{l:eiil}, let $B$ have basis $\{ b_i:i\in I\}$. Let $\{ u_1, \dots, u_m\}$ be a basis for $\operatorname{PC}(\im\beta)$. Set $c_i=b_i\beta$, so that $\im \beta= \langle c_i:i\in I\rangle$, and choose $m$-ary term functions $t_i$ such that $c_i=t_i(u_1,\dots ,u_m)$ for $i\in I$.
As the $u_i$ are in $\PC(\im \beta)$, we may find $p_i \in T$ such that $p_i(u_i)\in \im \beta$ for $1\le i \le m$. Let $p=p_1\hdots p_m$, so that, as $\T$ is commutative, $p(u_i) \in \im \beta$ for $1\le i \le m$. Since $m$ is finite, we may find $n\in\mathbb{N}$ and $n$-ary term functions $s_i$ such that $p(u_i) = s_i(c_1,\dots, c_n)$ for $1\le i \le m$ (with, as earlier, some relabelling). We put $\vec b=(b_1,\dots,b_n)$ and $\vec u=(u_1,\hdots, u_m)$. Extend $\{u_1,\dots, u_m\}$ to a basis $\{ u_i:i\in I\}$ of $\B$, and define $\gam \in \mathop{\mathrm{End}}\nolimits(\B)$ by $$u_j \gam=s_j(b_1 \alpha, \dots, b_n \alpha)=s_j(\vec b \alpha)$$ for $1\le j\le m$, and $u_j\gamma=u_1\gamma$ for $j\notin\{ 1,\hdots,m\}$. Clearly $\gamma\in \mathop{\mathrm{End}}\nolimits_f(\B)$. Let $w$ be an arbitrary $k$-ary term function, where without loss of generality we assume $k\geq n$. Let $x=w(\bar b)$, where $\bar b=(b_1,\hdots ,b_k)$. Then \begin{eqnarray*} x\beta\gam&=&w(\bar b)\beta \gam\\ &=& w(b_1 \beta, \dots ,b_k \beta)\gam\\ &=& w(c_1, \dots ,c_k)\gam\\ &=& w(t_1(\vec u),\dots, t_k(\vec u))\gam\\ &=& w(t_1(s_1(\vec b \alpha), \dots, s_m(\vec b \alpha)),\dots,t_k(s_1(\vec b \alpha), \dots, s_m(\vec b \alpha)))\\ &=&\left(w(t_1(s_1,\dots,s_m), \dots, t_k(s_1,\dots, s_m))(\vec b)\right)\alpha\\ &=&\left(w(t_1(s_1,\dots,s_m), \dots, t_k(s_1,\dots, s_m))(\bar b)\right)\alpha \end{eqnarray*} where, in the final step, we reinterpret $w(t_1(s_1,\dots,s_m), \dots, t_k(s_1,\dots, s_m))$ as being $k$-ary.
Moreover, \begin{eqnarray*} \left(pw(\bar b)\right)\beta&=&pw(b_1 \beta, \dots, b_k\beta)\\ &=& pw(c_1,\hdots, c_k)\\ &=&pw(t_1(\vec u), \dots, t_k(\vec u))\\ &=& w(t_1(p(u_1),\dots, p(u_m)), \dots, t_k(p(u_1),\dots, p(u_m)))\\ &=& \left(w(t_1(s_1,\dots, s_m),\dots, t_k(s_1,\dots, s_m))\right)(c_1,\hdots, c_n)\\ &=& \left(\left(w(t_1(s_1,\dots, s_m),\dots, t_k(s_1,\dots, s_m))\right)(\vec b)\right)\beta\\ &=& \left(\left(w(t_1(s_1,\dots, s_m),\dots, t_k(s_1,\dots, s_m))\right)(\bar b)\right)\beta. \end{eqnarray*} {\color{black} Here the fourth equality follows inductively from the distributivity condition, the fact that $\T$ is commutative, and the fact that $\B$ has no constants.} This means that $$\left(pw(\bar b), \left(w(t_1(s_1,\dots, s_m),\dots, t_k(s_1,\dots, s_m))\right)(\bar b)\right) \in \ker \beta$$ and so $$\left(pw(\bar b), \left(w(t_1(s_1,\dots, s_m),\dots, t_k(s_1,\dots, s_m))\right)(\bar b)\right) \in \ker \alpha,$$ as $\alpha \le_{\R^*}\beta$. Let $v,v'$ be arbitrary $k$-ary term functions. Applying the above to $y=v(\bar b)$ and $y'=v'(\bar b)$, and using the fact that $p$ is one-one, we obtain \begin{eqnarray*}&& y\beta \gam= y'\beta \gam \\ \Leftrightarrow & &\left( \left(v(t_1(s_1,\dots, s_m),\dots, t_k(s_1,\dots, s_m))\right)(\bar b)\right) \alpha\\ &=&\left(\left(v'(t_1(s_1,\dots, s_m),\dots, t_k(s_1,\dots, s_m))\right)(\bar b)\right)\alpha\\ \Leftrightarrow && \left((pv)(\bar b),(pv')(\bar b)\right)\in \ker \alpha\\ \Leftrightarrow&&pv(b_1 \alpha, \dots, b_k\alpha)=pv'(b_1 \alpha, \dots, b_k\alpha)\\ \Leftrightarrow&&v(b_1 \alpha, \dots, b_k\alpha)=v'(b_1 \alpha, \dots, b_k\alpha)\\ \Leftrightarrow && (v(\bar b))\alpha=(v'(\bar b))\alpha\\ \Leftrightarrow && y\alpha =y'\alpha. \end{eqnarray*} It follows that $\alpha\, \R^*\, \beta \gam$.
\end{proof} {\color{black} Before stating and proving the main result of this section, establishing {\bf Result 3}, we remind the reader that the relations $\mbox{${\mathcal R}^{\ast}$}$ and $\mbox{${\mathcal L}^{\ast}$}$ are the equivalence relations associated, respectively, with the pre-orders $\mbox{$\leq _{{\mathcal R}^{\ast}}$}$ and $\mbox{$\leq _{{\mathcal L}^{\ast}}$}$ of Proposition~\ref{prop:greens}.} \begin{theorem}\label{thm:end_f} Let $\B$ be a stable basis algebra that satisfies the distributivity condition and has no constants. Assume that $\T$ is commutative. Then $\mathop{\mathrm{End}}\nolimits_f(\B)$ is a fully stratified straight left order in a regular semigroup. \end{theorem} \begin{proof} From \cite[Proposition 3.2]{gould ab}, the semigroup $\mathop{\mathrm{End}}\nolimits_f(\B)$ is a fully stratified straight left order precisely when the following conditions, together with the left-right duals (Eii)(r), (Eiii)(r) and (Evi)(r) of (Eii)(l), (Eiii)(l) and (Evi)(l), respectively, are satisfied: \begin{itemize} \item[(Ei)] $\mathcal{L}^*\circ \mathcal{R}^*=\mathcal{R}^*\circ\mathcal{L}^*$. \item[(Eii)(l)] For all $\alpha,\beta \in \mathop{\mathrm{End}}\nolimits_f(\B)$, $\alpha\le_{\mathcal{L}^*} \beta$ if and only if $\alpha\, \mathcal{L}^*\, \gamma \beta$ for some $\gamma \in \mathop{\mathrm{End}}\nolimits_f(\B)$. \item[(Eiii)(l)] Every $\mathcal{L}^*$-class contains a square-cancellable endomorphism. \item[(Evi)(l)] For all square-cancellable $\alpha \in \mathop{\mathrm{End}}\nolimits_f(\B)$, and all $\beta, \gamma \in \mathop{\mathrm{End}}\nolimits_f(\B)$, if $\beta, \gamma \le_{\mathcal{L}^*}\alpha$ and $\beta \alpha=\gamma\alpha$, then $\beta =\gamma$. \item[(Evii)(r)] For all square-cancellable $\alpha \in \mathop{\mathrm{End}}\nolimits_f(\B)$, and all $\beta, \gamma \in \mathop{\mathrm{End}}\nolimits_f(\B)$, if $\beta, \gamma \le_{\mathcal{R}^*}\alpha$ and $\alpha \beta \mathcal{R}^* \alpha \gamma$, then $\beta \mathcal{R}^* \gamma$.
\item[(Gii)] If $\alpha \in \mathop{\mathrm{End}}\nolimits_f(\B)$ is square-cancellable, then $H^*_\alpha$ is left Ore. \end{itemize} From \cite[Corollary 6.3]{basisii}, $\mathop{\mathrm{End}}\nolimits_f(\B)$ is abundant. As pointed out at the beginning of \cite[Section 4]{gould ab}, (Eiii)(l) and (r) hold in any abundant semigroup. In \cite{basisii}, it is shown that (Ei), (Evi)(l), (Evi)(r), (Evii)(r) and (Gii) hold in $\mathop{\mathrm{End}}\nolimits_f(\B)$, namely in Corollary 6.3, Lemma 6.9, Lemma 6.10, Lemma 6.11 and Corollary 6.6, respectively. These results do not depend on the incorrect \cite[Proposition II.2.6]{mvl}, although note that another statement in Corollary 6.3 does. It remains to establish the two versions of (Eii). It follows from Proposition~\ref{prop:greens} that if $\alpha\,\mbox{${\mathcal R}^{\ast}$}\, \beta\gamma$ for any $\alpha, \beta,\gamma\in \mathop{\mathrm{End}}\nolimits_f(\B)$, then $\ker\beta\subseteq \ker \alpha$, so that $\alpha\leq_{\mathcal{R}^*} \beta$, with a dual statement for $\mbox{${\mathcal L}^{\ast}$}$. We then use Lemma~\ref{l:eiil} for condition (Eii)(l) and Lemma~\ref{l:eiir} for condition (Eii)(r). The result follows. \end{proof} {\color{black} \section{Open questions}\label{sec:open} The main aim of this article was to show that not all stable basis algebras satisfy the distributivity condition, achieved in Section~\ref{sec:example}, and to complete the investigation of the left order $\mathop{\mathrm{End}}\nolimits(\B)$ in $\mathop{\mathrm{End}}\nolimits(\A)$ constructed in \cite{gould}. However, achieving our objectives here, and the discovery of the knock-on effects of the error in \cite{mvl}, have prompted a number of further questions which we would like to pose. {\color{black} \begin{question}\label{qn:1} Is every stable basis algebra $\B$ a reduct of an independence algebra? \end{question}} {\color{black} It would make sense to consider first the special case where $\B$ has finite rank.
Question~\ref{qn:1} is closely tied to: } \begin{question} If a stable basis algebra $\B$ is a reduct of an independence algebra $\A$, does $\B$ satisfy the distributivity condition if and only if $\A$ does? \end{question} It may be that, in the above, one would want to consider only the case where $\B$ contains all the non-unary basic operations of $\A$. \begin{question} Let $\B$ be a stable basis algebra. Is $\mathop{\mathrm{End}}\nolimits_f(\B)$ a left order in a regular semigroup if and only if $\B$ is a reduct of an independence algebra $\A$ such that $\mathop{\mathrm{End}}\nolimits_f(\B)$ is a left order in $\mathop{\mathrm{End}}\nolimits_f(\A)$? \end{question} Finally, following from Theorem~\ref{thm:end_f}, we ask: \begin{question} Let $\B$ be a stable basis algebra satisfying the distributivity condition. Is $\mathop{\mathrm{End}}\nolimits_f(\B)$ a fully stratified straight left order if and only if $\B$ satisfies (CI) and (if rank $\B \ge 2$) the monoid $\mathbb{T}$ is both left and right reversible? In this case, is $\B$ a reduct of an independence algebra $\A$ such that $\mathop{\mathrm{End}}\nolimits_f(\B)$ is a fully stratified straight left order in $\mathop{\mathrm{End}}\nolimits_f(\A)$? \end{question} }